PRINCIPAL DATA ENGINEER

$95,000 - $190,000/Yr

Abbott Laboratories - Chicago, IL

posted 2 months ago

Full-time - Principal
Chicago, IL
10,001+ employees
Miscellaneous Manufacturing

About the position

The Principal Data Engineer is a pivotal role within Abbott's Medical Devices Digital Solutions organization in Chicago, IL. It is designed for an experienced data engineer who will lead and execute major data engineering initiatives crucial to the development of innovative medical devices and therapy solutions. The Principal Data Engineer will work closely with cross-functional product teams to design, develop, and deploy advanced data engineering techniques and streamlined data ingestion processes, with the goal of extracting valuable insights from large, complex medical datasets that include both structured and unstructured data.

In this role, the Principal Data Engineer will establish standards and guidelines for data modeling and standardization, directly contributing to improved quality of patient care. The position requires a hands-on approach: the engineer will analyze data to identify trends and insights, collaborate with product and engineering teams to define data requirements, and drive data-driven decision-making. The engineer will also design and implement data models that effectively support various product use cases, and maintain scalable, optimized data architectures that adapt to evolving business needs.

Additionally, the Principal Data Engineer will evaluate and recommend appropriate data storage solutions to ensure data accessibility and integrity, develop and optimize data ingestion processes for improved reliability and performance, and design, build, and maintain robust data pipelines and platforms. The role also involves establishing monitoring and alerting systems to proactively identify and address potential data pipeline issues, supporting data infrastructure needs, and creating documentation to educate product teams on data best practices and tools. Effective communication of technical concepts to both technical and non-technical audiences is also a key aspect of this position.

Responsibilities

  • Lead and contribute hands-on to major data engineering initiatives from inception to delivery.
  • Analyze data to identify trends and insights.
  • Collaborate with product and engineering teams to define data requirements and drive data-driven decision-making.
  • Design and implement data models to effectively support various product use cases.
  • Design, implement, and maintain scalable and optimized data architectures that meet evolving business needs.
  • Evaluate and recommend appropriate data storage solutions, ensuring data accessibility and integrity.
  • Develop and continuously optimize data ingestion processes for improved reliability and performance.
  • Design, build, and maintain robust data pipelines and platforms.
  • Establish monitoring and alerting systems to proactively identify and address potential data pipeline issues.
  • Support data infrastructure needs such as cluster management and permissions.
  • Develop and maintain internal tools to streamline data access and analysis for all teams.
  • Create and deliver documentation to educate product teams on data best practices and tools.
  • Communicate technical concepts effectively to both technical and non-technical audiences.

Requirements

  • Bachelor's degree in Data Science, Computer Science, or Statistics and a minimum of 8 years of data engineering experience with a strong focus on data architecture and data ingestion, or a Master's degree in Data Science, Computer Science, or Statistics and 6 years of relevant experience.
  • Experience in the life sciences industry.
  • Strong understanding of data modeling (conceptual, logical, and physical) using different data modeling methodologies and analytics concepts.
  • Proven experience designing, building, and maintaining data pipelines and platforms.
  • Expertise in data integration, ETL tools, and data engineering programming/scripting languages (Python, Scala, SQL) for data preparation and analysis.
  • Experience with Data Ops (VPCs, cluster management, permissions, Databricks configurations, Terraform) in Cloud Computing environments (e.g., AWS, Azure, GCP) and associated cloud data platforms, cloud data warehouse technologies (Snowflake/Redshift), and Advanced Analytical platforms (e.g., Dataiku and Databricks).
  • Familiarity with data streaming technologies like Kafka and Debezium.
  • Proven expertise with data visualization tools (e.g., Tableau, Power BI).
  • Strong understanding of data security principles and best practices.
  • Experience with CI/CD pipelines and automation tools.
  • Strong problem-solving and critical thinking skills.
  • Excellent written and verbal communication skills to convey complex technical concepts and findings to non-technical stakeholders and collaborate effectively across teams.

Nice-to-haves

  • Prior experience with healthcare domain data, including Electronic Health Records (EHR).
  • Experience with triple stores or graph databases (e.g., GraphDB, Stardog, Jena Fuseki).
  • Proficiency in building domain ontologies with the relevant W3C standards (RDF, RDFS, OWL, SKOS, SPARQL) and associated ontology editors (e.g., TopBraid Composer, Protégé).
  • Experience with semantic validation languages (e.g., SHACL, SPIN) and associated semantic software packages and frameworks (e.g., Jena, Sesame, RDF4J, RDFLib).
  • Knowledge of data governance and compliance policies.

Benefits

  • Career development with an international company where you can grow the career you dream of.
  • Free medical coverage for employees via the Health Investment Plan (HIP) PPO.
  • An excellent retirement savings plan with high employer contribution.
  • Tuition reimbursement, the Freedom 2 Save student debt program, and the FreeU education benefit - an affordable and convenient path to getting a bachelor's degree.