Trabus Technologies - San Diego, CA

posted 2 months ago

Full-time - Mid Level
San Diego, CA
Transit and Ground Passenger Transportation

About the position

As a Mid-Level Data Science Engineer at Trabus Technologies, you will play a crucial role in supporting the maintenance planning and forecasting models that are essential to our operations. Your work centers on developing and maintaining the datasets that drive our data-driven decision making. You will write parsers and ingestors in Python to efficiently load data from disparate sources and formats into our databases, and you will ensure data quality through effective curation practices.

In addition to data ingestion, you will develop API calls to extract maintenance data from our databases, enabling seamless access to the information needed for analysis. You will analyze historical maintenance data to identify trends and insights, then communicate your findings through visualizations. You will also generate training and test datasets for AI and machine learning workflows, contributing to the development of advanced analytical models.

A significant part of the role involves building web-based products that apply AI and machine learning algorithms. You will deploy these products into government-hosted computing infrastructure, which may include containerizing your code and applying continuous integration and continuous deployment (CI/CD) practices. This position offers an exciting opportunity to work at the intersection of data science and technology, contributing to innovative solutions that address real-world challenges in the government and military sectors.

Responsibilities

  • Support TRABUS' maintenance planning and forecasting models and associated datasets.
  • Write parsers and ingestors to ingest data from disparate sources/formats into databases using Python.
  • Develop API calls to extract ingested maintenance data from databases.
  • Maintain and develop data curations to ensure data quality.
  • Generate training and test datasets for AI/machine learning workflows.
  • Analyze historical maintenance data for trends and insights, and present results using visualizations.
  • Develop web-based products that utilize AI/ML algorithms and visualize historical maintenance data.
  • Deploy developed web-based products into government-hosted computing infrastructure, including containerizing code for deployment and applying CI/CD practices.

Requirements

  • 3-4 years of experience in data science programming using Python.
  • 1-2 years of experience using Python libraries such as pandas, NumPy, SciPy, and Plotly.
  • Experience ingesting raw data into databases using an ORM.
  • 2 years of experience in Python-based web frameworks such as Django or Flask.
  • 2 years of experience with database technologies such as PostgreSQL or MySQL (including writing SQL queries) and NoSQL data stores such as Redis or MongoDB (including working with JSON).
  • Familiarity with API technologies such as GraphQL or REST and developing using tools such as Postman and Swagger.
  • Comfortable working with Linux/Unix and on cloud-based infrastructure (such as AWS and Digital Ocean).
  • Experience using version control software such as Git.
  • Experience with CI/CD automations and Docker.

Nice-to-haves

  • Experience with AI and machine learning/deep learning libraries such as scikit-learn, TensorFlow, PyTorch, and Keras.

Benefits

  • Paid Time Off
  • Holidays
  • Health Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Account
  • 401(k)
  • Life and AD&D Insurance