Sr. Data Reliability Engineer

$145,600 - $176,800/Yr

Solugenix Corporation - San Antonio, TX

posted about 2 months ago

Full-time - Mid Level
San Antonio, TX
Professional, Scientific, and Technical Services

About the position

The Sr. Data Reliability Engineer is a critical role within a prestigious investment management company, focused on ensuring the highest standards of data reliability and system performance. This hybrid position, based in either San Antonio, TX or Irvine, CA, requires a blend of expertise in data pipeline technologies and Site Reliability Engineering (SRE) principles.

The successful candidate will design, build, and maintain the infrastructure and data pipelines that support data transformation, data structures, metadata, dependency, and workload management. You will develop scalable, reliable, and cost-effective data solutions using AWS technologies and big data tools such as Databricks, Airflow, and Dremio, and implement robust monitoring and alerting with tools like Datadog to proactively manage production environments. Working closely with data scientists and analytics teams, you will engineer and optimize data models using DBT (Data Build Tool) to ensure seamless data flow across all segments, and strengthen data validation and data quality metrics within pipelines so that data remains accurate and reliable.

You will also automate manual processes, optimize data delivery, and redesign infrastructure for greater scalability; deploy additional AWS services and manage data storage solutions; and collaborate with IT and DevOps teams to improve system performance and reliability. Continuous improvement is a key focus, along with supporting and mentoring offshore teams to ensure best practices in coding, testing, and deployment. Troubleshooting complex issues across multiple databases and working with stakeholders to maintain robust architecture and operational standards round out the responsibilities.

Responsibilities

  • Design, build, and maintain the infrastructure and data pipelines to support data transformation, data structures, metadata, dependency, and workload management.
  • Develop and maintain scalable, reliable, and cost-effective data solutions using AWS technologies and big data tools like Databricks, Airflow, and Dremio.
  • Implement robust monitoring and alerting systems using tools such as Datadog to ensure proactive management of the production environments.
  • Work closely with data scientists and analytics teams to engineer and optimize data models using DBT (Data Build Tool) and ensure seamless data flow across all segments.
  • Enhance data validation and data quality metrics integration within data pipelines to ensure accuracy and reliability of data.
  • Automate manual processes, optimize data delivery, and redesign infrastructure for greater scalability.
  • Handle the deployment of additional AWS services such as Lambda functions and manage data storage solutions.
  • Collaborate with IT and DevOps teams to enhance system performance and reliability through AWS solutions such as SNS, SQS, and Elastic Load Balancer.
  • Engage in continuous improvement efforts to enhance performance and provide increased functionality across data platforms.
  • Provide support and mentorship to offshore teams, ensuring best practices in coding, testing, and deployment are followed.
  • Troubleshoot complex issues across multiple databases and work with various stakeholders to ensure robust architecture and operational standards are maintained.

Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
  • Proven experience as a Data Engineer, Data Reliability Engineer, or in a similar role in an SRE environment.
  • Demonstrable experience with end-to-end monitoring of data pipelines and building reliability into data systems.
  • Strong proficiency in Apache Airflow, Python programming, and AWS cloud services.
  • Experience with real-time monitoring tools such as Datadog.
  • Expertise in data transformation tools like DBT and Dremio.
  • Strong experience with Databricks and AWS Lambda.
  • Excellent verbal and written communication skills, capable of working with cross-functional teams and managing relationships with business stakeholders.

Nice-to-haves

  • Familiarity with AWS services such as SNS, SQS, Elastic Load Balancer, CodeBuild, CodePipeline, ECR, and EKS.
  • Experience with both Linux and Windows operating systems.
  • Knowledge of Microsoft Azure Data Lake, Synapse, and additional Python scripting.

Benefits

  • Competitive hourly pay ranging from $70/hour to $85/hour based on experience and location.
  • Opportunity for contract-to-hire conversion.
  • Work in a hybrid environment with 3 days onsite.