Introlligent - Cupertino, CA

posted about 2 months ago

Full-time - Mid Level
Cupertino, CA
Professional, Scientific, and Technical Services

About the position

The Data Engineer role involves designing and building data models, writing ELT pipelines, and collaborating with diverse teams to deliver high-quality data insights. The position requires a strong foundation in data engineering principles, with a focus on improving data accessibility, efficiency, and quality. The candidate will work with advanced technologies and be responsible for maintaining data traceability and automating reporting processes.

Responsibilities

  • Write ELT pipelines in SQL and Python (see the illustrative sketch after this list).
  • Utilize advanced technologies to enhance data modeling.
  • Test pipelines and transformations, and document data pipelines.
  • Maintain data and software traceability through GitHub.
  • Build a high-quality data transformation framework, implementing and operating data pipelines with an understanding of data and ML lifecycles.
  • Understand the end-to-end nature of data lifecycles to deliver high-quality data and debug data issues.
  • Drive development of data products in collaboration with data scientists and analysts.
  • Automate reporting where possible to make the team more efficient.
  • Analyze factory, user, and failure data to resolve battery problems using an engineering understanding of failure mechanisms.
  • Work with diverse teams including data scientists, engineers, product managers, and executives.
  • Deliver high-quality analytic insights from a data warehouse.
  • Provide ad-hoc reporting as necessary, sometimes with urgent escalation.
  • Write programs for data filtering, organization, and reporting.
  • Write programs for uploading data to and maintaining it in a SQL database.
  • Develop basic data management and selection programs in SQL.
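
To make the pipeline work above concrete, here is a minimal ELT-style sketch in Python and SQL. It uses only the standard-library sqlite3 module, and the table and column names (raw_battery_readings, daily_cell_summary, cell_id, voltage, temperature) are illustrative assumptions rather than details of the role, which would target a production data warehouse.

    import sqlite3

    # Minimal ELT sketch: load raw rows into a staging table, then transform
    # them with SQL into a reporting table. All names are assumptions.

    def load_raw(conn, rows):
        """Extract + load: write raw battery readings into a staging table."""
        conn.execute(
            "CREATE TABLE IF NOT EXISTS raw_battery_readings "
            "(cell_id TEXT, ts TEXT, voltage REAL, temperature REAL)"
        )
        conn.executemany(
            "INSERT INTO raw_battery_readings VALUES (?, ?, ?, ?)", rows
        )

    def transform(conn):
        """Transform in SQL: aggregate raw readings into a daily summary."""
        conn.execute("DROP TABLE IF EXISTS daily_cell_summary")
        conn.execute("""
            CREATE TABLE daily_cell_summary AS
            SELECT cell_id,
                   DATE(ts)         AS day,
                   AVG(voltage)     AS avg_voltage,
                   MAX(temperature) AS max_temperature,
                   COUNT(*)         AS n_readings
            FROM raw_battery_readings
            GROUP BY cell_id, DATE(ts)
        """)

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        load_raw(conn, [("C1", "2024-05-01 08:00:00", 3.7, 31.2),
                        ("C1", "2024-05-01 09:00:00", 3.6, 33.8)])
        transform(conn)
        print(conn.execute("SELECT * FROM daily_cell_summary").fetchall())
        conn.close()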

Requirements

  • 2+ years as a data engineer, software engineer, or data analyst.
  • Battery Engineering / Electrical Engineering experience desired.
  • Working knowledge of and experience with big data.
  • Strong working knowledge of Python, SQL, and Git.
  • Basic knowledge of SQL databases, Spark, data warehousing, and shell scripting.
  • Solid competency in statistics and the ability to provide value-added analysis.
  • Self-starter with entrepreneurial experience and the ability to interact with other functions in a matrix environment.
  • Proven creativity to go beyond current tools to deliver the best solution to the problem.
  • Familiarity with database modeling and data warehousing principles.
  • Experience in designing and building data models to improve accessibility, efficiency, and quality of data.
  • Experience building scalable data pipelines using Spark is a plus (see the illustrative sketch below).
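
For context on the Spark item above, the sketch below shows what a minimal scalable aggregation might look like in PySpark. The input/output paths and column names are assumptions made for illustration, and running it requires a local or cluster Spark installation.

    from pyspark.sql import SparkSession, functions as F

    # Minimal PySpark sketch; paths and columns are illustrative assumptions.
    spark = SparkSession.builder.appName("battery-daily-summary").getOrCreate()

    # Read raw readings (assumed Parquet layout).
    raw = spark.read.parquet("s3://example-bucket/raw/battery_readings/")

    # Aggregate per cell and day: the "T" of an ELT step, run at scale.
    summary = (
        raw.groupBy("cell_id", F.to_date("ts").alias("day"))
           .agg(F.avg("voltage").alias("avg_voltage"),
                F.max("temperature").alias("max_temperature"),
                F.count("*").alias("n_readings"))
    )

    summary.write.mode("overwrite").parquet(
        "s3://example-bucket/marts/daily_cell_summary/")
    spark.stop()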

Nice-to-haves

  • Experience with Apple operating systems, such as iOS, macOS, etc.