Sr Machine Learning Engineer

$100,500 - $182,700/Yr

Blue Cross Blue Shield

posted 3 months ago

Full-time - Mid Level
Insurance Carriers and Related Activities

About the position

Blue Cross and Blue Shield of North Carolina is seeking a Senior Machine Learning Engineer to join its ML engineering team, which plays a crucial role in executing the enterprise's Data Science MLOps strategy and capabilities. The position centers on automating AI/ML models, spanning traditional data science, machine learning, and generative AI, with a particular focus on GenAI automation within the AWS ecosystem.

The AI/ML Center of Excellence (COE) builds foundational AI/ML capabilities and solutions for cross-domain business use cases and provides thought leadership on the transformative potential of AI/ML for the enterprise. The COE maintains and enhances the AI/ML operations environment and establishes standards and best practices that enable data science solutions to scale from ideation through production and integration into business workflows. It also fosters a community of practice for data scientists at BCBSNC and develops partnerships with universities and other Blues plans.

In this role, the Senior Machine Learning Engineer will define and extract data from multiple sources, integrate disparate data into a unified data model, and ensure data is efficiently loaded into target databases, applications, or files. The engineer will document and test complex data systems that consolidate data from various sources, making it accessible to data scientists and other users through scripting and programming languages. The position also involves improving, deploying, and maintaining models prepared by Data Scientists in production-grade cloud systems using scalable, efficient machine learning operations pipelines. The engineer will write and refine code to improve the performance and reliability of data extraction and processing, lead requirements gathering sessions with business and technical staff, and develop advanced SQL queries for data extraction and model construction. The engineer will also own the delivery of large, complex data and ML engineering projects; design and develop scalable data pipeline processes; ensure the performance and reliability of data processes; and collaborate with cross-functional teams to resolve data quality and operational issues. Additional responsibilities include developing and implementing scripts for database maintenance, monitoring, and performance tuning; analyzing databases to recommend improvements and optimizations; and designing advanced visualizations that effectively convey information to users.

Responsibilities

  • Define and extract data from multiple sources, integrating disparate data into a common data model.
  • Document and test complex data systems that consolidate data from various sources for accessibility to data scientists and other users.
  • Improve, deploy, and maintain models prepared by Data Scientists into production-grade cloud systems using scalable machine learning operations pipelines.
  • Write and refine code to ensure performance and reliability of data extraction and processing.
  • Lead requirements gathering sessions with business and technical staff to distill technical requirements from business requests.
  • Develop advanced SQL queries to extract data for analysis and model construction.
  • Own delivery of large, complex data and ML engineering projects.
  • Design and develop scalable, efficient data pipeline processes for data ingestion, cleansing, transformation, integration, and validation.
  • Ensure performance and reliability of data processes.
  • Document and test data processes, including performance through data validation and verification.
  • Collaborate with cross-functional teams to resolve data quality and operational issues and ensure timely delivery of products.
  • Develop and implement scripts for database and data process maintenance, monitoring, and performance tuning.
  • Analyze and evaluate databases to identify and recommend improvements and optimizations.
  • Design advanced visualizations to convey information to users.

Requirements

  • Bachelor's degree and 5 years of experience with Data Warehouses and Data Lakes, Big Data platforms, and programming in Python, R, or other related languages.
  • Understanding of ML algorithms, with experience creating and executing efficient MLOps pipelines and tuning ML models.
  • Familiarity with large language models and implementation of AI solutions in a cloud environment.
  • In lieu of a degree, 7 years of experience as stated above.