Isoft - Columbus, OH

posted 10 days ago

Full-time - Senior
Remote - Columbus, OH
Professional, Scientific, and Technical Services

About the position

The Senior Azure Data Engineer will collaborate with engineering teams and enterprise architecture to establish standards and practices for data management. This role involves creating and maintaining data ingestion frameworks, conducting complex data analysis, and automating data pipelines using Azure technologies. The engineer will also focus on data quality, process automation, and training business users on data literacy, all while adhering to industry standards for data security.

Responsibilities

  • Team up with engineering teams and enterprise architecture to define standards, design patterns, accelerators, development practices, DevOps and CI/CD automation.
  • Create and maintain the data ingestion, quality testing and audit framework.
  • Conduct complex data analysis to answer queries from Business Users or Technology team partners.
  • Build and automate data ingestion, transformation and aggregation pipelines using Azure Data Factory, Databricks/Spark, Snowflake, Kafka, and Enterprise Scheduler tools.
  • Set up and evangelize a metadata-driven approach to data pipelines to promote self-service.
  • Continuously improve data quality and audit monitoring as well as alerting.
  • Evaluate process automation options and collaborate with engineering and architecture to review proposed designs.
  • Demonstrate mastery of build and release engineering principles and methodologies.
  • Adhere to, enhance, and document design principles and best practices in collaboration with Solution and Enterprise Architects.
  • Participate in and support the Data Academy and Data Literacy program to train Business Users and Technology teams on data.
  • Respond to SLA-driven production data quality or pipeline issues.
  • Work in a fast-paced Agile/Scrum environment.
  • Identify and assist with implementation of DevOps practices for fully automated deployments.
  • Document Data Flow Diagrams, Data Models, Technical Data Mapping, and Production Support Information for Data Pipelines.
  • Follow industry-standard data security practices and promote them across the team.

Requirements

  • 5+ years of experience in an Enterprise Data Management or Data Engineering role.
  • 3+ years of hands-on experience building metadata-driven data pipelines using Azure Data Factory and Databricks/Spark for a cloud data lake.
  • 5+ years of hands-on experience with data analysis and wrangling using Databricks, Python/PySpark, Jupyter Notebooks.
  • Expert-level SQL knowledge of databases such as Snowflake, Netezza, Oracle, SQL Server, MySQL, and Teradata.
  • 3+ years of hands-on experience with big data technologies such as Cloudera Hadoop, Pivotal, Vertica, or MapR is a plus.
  • Experience working in a multi-developer environment and using Azure DevOps or GitLab.
  • Experience with SLA-driven production data pipeline or quality support is preferred.
  • Experience with, or a strong understanding of, traditional enterprise ETL platforms such as IBM DataStage, Informatica, Pentaho, and Ab Initio.
  • Functional knowledge of technologies like Terraform, Azure CLI, PowerShell, Containerization (Kubernetes, Docker).
  • Functional knowledge of reporting tools such as Power BI, Tableau, and OBIEE.
  • Team player with excellent communication skills, comfortable communicating directly with customers and explaining deliverable status in scrum calls.
  • Ability to implement Agile methodologies and work in an Agile DevOps environment.