Winmax Systems - Seattle, WA

posted 3 months ago

Full-time - Mid Level
Seattle, WA
Professional, Scientific, and Technical Services

About the position

We are seeking a detail-oriented Machine Learning Data Engineer to join our team. As an ML Data Engineer, you will design, build, and maintain scalable data pipelines that ingest, transform, and load data from various sources into our cloud-based systems. You will work closely with machine learning teams to ensure that data is accurate, enriched, reliable, and readily available for analytics and model training. This role is crucial in supporting the data needs of our machine learning initiatives and in keeping our data infrastructure robust and efficient.

In this position, you will create efficient, reliable, streamable, and scalable data pipelines using industry-standard tools and techniques such as TorchData, WebDataset, Apache Parquet, Python, and SQL. You will develop strategies for ingesting data from various providers while maintaining data quality and consistency throughout the process. You will implement parallel pre-processing to clean, transform, de-duplicate, combine, and normalize data, and you will curate, augment, and enrich existing datasets to improve data quality and provide valuable insights to stakeholders. Collaborating with synthetic data teams, you will generate synthetic data and incorporate it into existing pipelines.

Working closely with ML scientists, engineers, and product teams, you will understand data requirements and collaborate on data delivery to meet project goals. You will monitor the performance of data pipelines, identify errors and bottlenecks, and carry out regular maintenance and updates. You will stay current with the latest trends in data engineering and incorporate best practices into our pipelines, ensuring that our systems remain cutting-edge. Finally, you will document data pipelines, settings, and procedures for easy maintenance and knowledge sharing within the team.

Responsibilities

  • Design and Build Data Pipelines: Create efficient, reliable, streamable, and scalable data pipelines using industry-standard tools and techniques, such as TorchData, WebDataset, Apache Parquet, Python, and SQL.
  • Data Ingestion: Develop strategies for ingesting data from data providers, ensuring data quality and consistency.
  • Data Pre-processing: Implement parallel pre-processing to clean, transform, de-duplicate, combine, and normalize data.
  • Data Curation and Enrichment: Curate, augment, and enrich existing datasets to improve data quality and provide valuable insights to stakeholders.
  • Synthetic Data Generation: Collaborate with synthetic data teams to generate synthetic data and incorporate it into existing pipelines.
  • Collaboration with ML Teams: Work closely with ML scientists, engineers, and product teams to understand data requirements, and collaborate on data delivery.
  • Monitoring, Maintenance & Updating: Monitor data pipelines for performance, errors, and bottlenecks, and implement regular maintenance and updates.
  • Technical Documentation: Document data pipelines, settings, and procedures for easy maintenance and knowledge sharing.
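As a concrete illustration of the pre-processing responsibility above (clean, de-duplicate, and normalize records in parallel), here is a minimal Python sketch. The function names (`normalize`, `preprocess`) and the thread-pool parallelism are illustrative assumptions, not a prescribed implementation; a production pipeline would typically use TorchData or similar tooling instead of the standard library alone.

```python
import hashlib
import unicodedata
from concurrent.futures import ThreadPoolExecutor

def normalize(record: str) -> str:
    """Clean one raw text record: strip whitespace, lowercase,
    and apply Unicode NFC normalization."""
    return unicodedata.normalize("NFC", record.strip().lower())

def content_key(record: str) -> str:
    """Stable content hash used to de-duplicate normalized records."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

def preprocess(records: list, workers: int = 4) -> list:
    """Normalize records in parallel, then drop exact duplicates
    while preserving the original order of first occurrence."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        cleaned = list(pool.map(normalize, records))
    seen, out = set(), []
    for rec in cleaned:
        key = content_key(rec)
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out
```

Hash-based de-duplication like this catches only exact duplicates after normalization; near-duplicate detection (e.g. MinHash) would be a separate, heavier step.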

Requirements

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • At least 3 years of experience as a Software Engineer or Data Engineer.
  • Strong software engineering skills, proficiency in Python.
  • Experience with data processing tools and formats such as Apache Parquet, WebDataset, TorchData, Pandas, Shell Scripting, Protobuf, TFRecord.
  • Knowledge of data warehouse architectures and cloud-based systems (e.g., AWS S3).
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.
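The streamable formats listed above (WebDataset, TorchData) commonly store samples as tar "shards," where files sharing a basename (e.g. `0001.txt` and `0001.cls`) form one sample. The following stdlib-only sketch shows that layout; the `__key__` field follows the WebDataset convention, while the function names and grouping logic are illustrative assumptions rather than the library's actual API.

```python
import io
import tarfile
from collections import defaultdict
from typing import Dict, Iterator, List, Tuple

def write_shard(path: str, samples: List[Tuple[str, Dict[str, bytes]]]) -> None:
    """Write samples as a WebDataset-style tar shard: one archive member
    per field, named <key>.<extension>, so readers can regroup by basename."""
    with tarfile.open(path, "w") as tar:
        for key, fields in samples:
            for ext, data in fields.items():
                info = tarfile.TarInfo(name=f"{key}.{ext}")
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))

def iter_samples(path: str) -> Iterator[dict]:
    """Read a shard back, grouping members that share a basename into
    one sample dict keyed by extension, plus a "__key__" entry."""
    grouped = defaultdict(dict)
    with tarfile.open(path, "r") as tar:
        for member in tar:
            if member.isfile():
                key, _, ext = member.name.partition(".")
                grouped[key][ext] = tar.extractfile(member).read()
    for key, fields in grouped.items():
        yield {"__key__": key, **fields}
```

In practice the `webdataset` and `torchdata` libraries handle sharding, decoding, and shuffling; this sketch only shows the on-disk layout those tools consume.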

Nice-to-haves

  • Master's degree in Data Science or a related field.
  • Experience with data curation and enrichment techniques, particularly for large-scale text, image, and video data.
  • Familiarity with natural language processing (NLP) and machine learning (ML) concepts and frameworks such as PyTorch.