CME Group - Chicago, IL

posted about 2 months ago

Full-time - Mid Level
Hybrid - Chicago, IL
Securities, Commodity Contracts, and Other Financial Investments and Related Activities

About the position

As a Data System Reliability Engineer (dSRE) at CME Group, you will play a pivotal role in our cloud data transformation initiatives. This position is designed for individuals who are passionate about ensuring the reliability, scalability, and efficiency of our data infrastructure as we expand our Google Cloud Platform (GCP) data footprint. You will be aligned with data product pods, working closely with data domain owners, data scientists, and other stakeholders to optimize data pipelines and ensure data integrity and consistency across our systems. Your contributions will directly impact our ability to better serve our customers and enhance our operational capabilities.

In this role, you will be responsible for designing, building, securing, and maintaining our data infrastructure, which includes data pipelines, databases, data warehouses, and data processing platforms on GCP. You will implement robust monitoring and alerting systems to proactively identify and resolve issues in our data systems, ensuring minimal downtime and data loss. You will also develop automation scripts and tools to streamline data operations, making them scalable enough to accommodate growing data volumes and user traffic. Your expertise will be crucial in optimizing our data systems to ensure efficient data processing, reduce latency, and improve overall system performance.

You will collaborate with various teams to forecast data growth and plan for future capacity requirements, while also ensuring compliance with data protection regulations and implementing best practices for data access controls and encryption. Continuous assessment and improvement of our data infrastructure and processes will be key to enhancing reliability, efficiency, and performance. Finally, you will maintain clear and up-to-date documentation covering data systems, configurations, and standard operating procedures, contributing to a culture of transparency and knowledge sharing within the organization.

Responsibilities

  • Optimize data pipelines to ensure data integrity and consistency.
  • Enhance system resiliency and maintain data security.
  • Implement proactive alerting and monitoring for data pipelines.
  • Automate repetitive data-oriented tasks on GCP.
  • Design, build, secure, and maintain data infrastructure including data pipelines, databases, and data processing platforms on GCP.
  • Measure and monitor the quality of data on GCP data platforms.
  • Develop automation scripts and tools to streamline data operations.
  • Collaborate with data and infrastructure teams to forecast data growth and plan for future capacity requirements.
  • Ensure compliance with data protection regulations and implement best practices for data access controls and encryption.
  • Continuously assess and improve data infrastructure and processes.

Requirements

  • Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or a related field, or equivalent practical experience.
  • 6+ years of experience relevant to the role.
  • Proven experience as a Data Site Reliability Engineer or a similar role, with a strong focus on data infrastructure management.
  • Strong understanding of Site Reliability Engineering (SRE) practices.
  • Proficiency in data technologies such as relational databases, data warehousing, big data platforms (e.g., Hadoop, Spark), and cloud services (e.g., AWS, GCP, Azure).
  • Strong programming skills in languages like Python, Java, or Scala, with experience in automation and scripting.
  • Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
  • Experience with data governance, data security, and compliance best practices on GCP.
  • Solid understanding of software development methodologies and best practices, including version control and CI/CD pipelines.
  • Strong background in cloud computing and data-intensive applications and services, with a focus on Google Cloud Platform.

Nice-to-haves

  • Experience with data quality assurance and testing on GCP.
  • Proficiency with GCP data services such as BigQuery, Dataflow, and Cloud Composer.
  • Knowledge of AI and ML tools is a plus.
  • Google Associate Cloud Engineer or Data Engineer certification is a plus.

Benefits

  • Competitive salary and performance bonuses.
  • Health, dental, and vision insurance.
  • 401(k) retirement plan with company matching.
  • Flexible work hours and hybrid work model.
  • Professional development opportunities and tuition reimbursement.