Prodware Solutions - Dallas, TX

posted 2 months ago

Full-time
Dallas, TX
Professional, Scientific, and Technical Services

About the position

As a Data Engineer, you will design and build the ETL jobs that support the Enterprise Data Warehouse. You will write Extract-Transform-Load (ETL) jobs using standard tools, partner with business teams to gather requirements and assess the impact on existing systems, and implement new data provisioning pipelines for the Finance and External reporting domains. You will also monitor and troubleshoot operational and data issues in those pipelines to keep data flowing smoothly and reliably.

Beyond day-to-day delivery, you will drive architectural plans and implementations for future data storage, reporting, and analytics solutions. This means understanding current data needs, anticipating future requirements, and designing systems that scale accordingly. You will draw on experience with big data processing technologies, including cloud platforms such as AWS, Azure, or Google Cloud Platform, and tools such as Apache Spark and Python. The ability to write and optimize SQL queries against large-scale, complex datasets is critical, and working knowledge of higher-abstraction ETL tooling will make data processing tasks more efficient.

This role is hybrid: you will work onsite in Dallas, TX three days a week, balancing collaborative in-person work with remote flexibility. You will join a team that values innovation and continuous improvement in data engineering practice, helping ensure that data is accessible, reliable, and actionable for decision-making.
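To make that concrete, below is a minimal sketch, in Python with Apache Spark (both named in this posting), of the kind of ETL job the role involves: extract raw records, transform them to a reporting grain, and load them into a warehouse table. The paths, schema, and table names are hypothetical illustrations, not actual Prodware or client systems.

    # Minimal PySpark ETL sketch; all paths and table names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("finance_etl_sketch").getOrCreate()

    # Extract: read raw transactions from a (hypothetical) landing zone.
    raw = spark.read.parquet("s3://example-landing/finance/transactions/")

    # Transform: derive a date column and aggregate to a daily grain.
    daily = (
        raw.withColumn("txn_date", F.to_date("txn_ts"))
           .groupBy("txn_date", "account_id")
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("txn_count"))
    )

    # Load: append the result to a warehouse-facing table.
    daily.write.mode("append").saveAsTable("edw.fct_daily_transactions")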

Responsibilities

  • Design and build ETL jobs to support the Enterprise Data Warehouse.
  • Write Extract-Transform-Load (ETL) jobs using standard tools.
  • Partner with business teams to understand requirements and assess the impact on existing systems.
  • Design and implement new data provisioning pipeline processes for Finance/External reporting domains.
  • Monitor and troubleshoot operational or data issues in the data pipelines (a minimal illustration follows this list).
  • Drive architectural plans and implementations for future data storage, reporting, and analytic solutions.
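As noted in the monitoring bullet above, here is a hedged illustration of that duty: a simple post-load data-quality check that fails the job loudly rather than letting bad data reach downstream reports. The table, partition value, and checks are hypothetical.

    # Post-load data-quality checks; table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq_check_sketch").getOrCreate()
    loaded = spark.table("edw.fct_daily_transactions")

    # Check 1: the load produced rows for the expected partition.
    partition_rows = loaded.filter(F.col("txn_date") == "2024-01-15").count()
    if partition_rows == 0:
        raise RuntimeError("No rows loaded for 2024-01-15; check the upstream feed")

    # Check 2: key columns are fully populated.
    null_keys = loaded.filter(F.col("account_id").isNull()).count()
    if null_keys > 0:
        raise RuntimeError(f"{null_keys} rows have a null account_id; inspect the source")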

Requirements

  • 5+ years of relevant work experience in data engineering or an equivalent software engineering role.
  • 3+ years of experience in implementing big data processing technology: AWS / Azure / Google Cloud Platform, Apache Spark, Python.
  • Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets (see the query sketch after this list).
  • Working knowledge of higher abstraction ETL tooling (e.g., AWS Glue Studio, Talend, Informatica).
  • Detailed knowledge of databases such as Oracle, DB2, and SQL Server, plus data warehouse concepts, technical architecture, infrastructure components, ETL, and reporting/analytics tools and environments.
  • Hands-on experience with cloud technologies (AWS, Google Cloud, Azure), including real-time and batch data ingestion tools, CI/CD processes, cloud architecture, and big data implementation.
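As referenced in the SQL bullet above, here is a minimal sketch, assuming Spark SQL over a partitioned warehouse table, of the kind of query optimization this requirement describes: filter on the partition column so Spark prunes partitions, and broadcast the small dimension table so the join avoids a full shuffle. All table and column names are hypothetical.

    # Spark SQL optimization sketch; schema and names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql_opt_sketch").getOrCreate()

    report = spark.sql("""
        SELECT /*+ BROADCAST(d) */
               d.region,
               SUM(f.total_amount) AS revenue
        FROM   edw.fct_daily_transactions f    -- large, partitioned fact table
        JOIN   edw.dim_account d               -- small dimension table
               ON f.account_id = d.account_id
        WHERE  f.txn_date >= DATE '2024-01-01' -- prunes partitions on txn_date
        GROUP  BY d.region
    """)
    report.show()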

Nice-to-haves

  • AWS certification or experience with cloud technologies.
  • Working knowledge of Glue, Lambda, S3, Athena, Redshift, Snowflake (see the Athena sketch after this list).
  • Strong verbal and written communication skills, excellent organizational and prioritization skills.
  • Strong analytical and problem-solving skills.
  • Databricks experience and Azure certification (strongly preferred).
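For the AWS services named above, here is a minimal sketch, assuming boto3 and an existing Athena database over data in S3, of submitting a query and polling for completion. The database name, query, and output location are hypothetical.

    # Athena query via boto3; database, query, and S3 paths are hypothetical.
    import time
    import boto3

    athena = boto3.client("athena")

    resp = athena.start_query_execution(
        QueryString="SELECT region, COUNT(*) AS n FROM transactions GROUP BY region",
        QueryExecutionContext={"Database": "example_finance_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = resp["QueryExecutionId"]

    # Poll until the query reaches a terminal state; results land in S3 as CSV.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    print(f"Query {query_id} finished with state {state}")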