Horizontal Talent - Dallas, TX

posted about 2 months ago

Full-time
Dallas, TX

About the position

We are looking for self-motivated, responsive individuals who are passionate about data. In this role, you will build data solutions that address complex business questions, taking data through its full lifecycle, from data processing pipelines and data infrastructure to the creation of datasets and data products. You will be responsible for designing and building ETL jobs to support the Enterprise Data Warehouse, ensuring that data is processed efficiently and effectively to meet business needs.

Your core responsibilities will include writing Extract-Transform-Load (ETL) jobs using standard tools and partnering with business teams to understand their requirements. You will assess the impact of those requirements on existing systems and design and implement new data provisioning pipeline processes for the Finance and External reporting domains. Monitoring and troubleshooting operational or data issues in the data pipelines will also be a key part of your role, as will driving architectural plans and implementations for future data storage, reporting, and analytic solutions.

This position requires a strong background in data engineering, with a focus on big data processing technologies. You will need to draw on your experience to write and optimize SQL queries in a business environment that deals with large-scale, complex datasets. Detailed knowledge of databases and data warehouse concepts, as well as hands-on experience with cloud technologies, will be essential for success in this role.

Responsibilities

  • Design and build ETL jobs to support the Enterprise Data Warehouse.
  • Write Extract-Transform-Load (ETL) jobs using standard tools.
  • Partner with business teams to understand business requirements and assess the impact on existing systems.
  • Design and implement new data provisioning pipeline processes for Finance/External reporting domains.
  • Monitor and troubleshoot operational or data issues in the data pipelines.
  • Drive architectural plans and implementations for future data storage, reporting, and analytic solutions.

Requirements

  • 6-8 years of experience in data engineering.
  • 3+ years of experience implementing big data processing technologies such as AWS/Azure/GCP, Apache Spark, and Python.
  • Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
  • Working knowledge of higher abstraction ETL tooling (e.g., AWS Glue Studio, Talend, Informatica).
  • Detailed knowledge of databases such as Oracle, DB2, and SQL Server, as well as data warehouse concepts, technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments.
  • Hands-on experience with cloud technologies (AWS, Google Cloud, Azure), including data ingestion tools (both real-time and batch-based), CI/CD processes, cloud architecture, and big data implementation.
  • Databricks and Azure certifications.

Nice-to-haves

  • AWS certification or experience with AWS services.
  • Working knowledge of Glue, Lambda, S3, Athena, Redshift, and Snowflake.
  • Strong verbal and written communication skills.
  • Excellent organizational and prioritization skills.
  • Strong analytical and problem-solving skills.