Toyota Motors - Cypress, TX

posted about 2 months ago

Full-time - Senior
Cypress, TX
Transportation Equipment Manufacturing

About the position

The Principal Data Engineer within the Data Science and Analytics team plays a crucial role in architecting, implementing, and managing robust, scalable data platforms. This position demands a blend of cloud data engineering, systems engineering, data integration, and machine learning systems knowledge to enhance the organization's data capabilities, supporting advanced analytics, machine learning projects, and real-time data processing needs. The role involves guiding team members and collaborating closely with cross-functional teams to design and implement modern data solutions that enable data-driven decision-making across the organization.

Responsibilities

  • Collaborate with Business and IT functional experts to gather requirements and issues, perform gap analysis, and recommend and implement process and technology improvements to optimize data solutions.
  • Design data solutions on Databricks, including Delta Lake, data warehouses, data marts, and related constructs, to support the data science and analytical needs of the organization.
  • Design and implement scalable and reliable data pipelines to ingest, process, and store diverse data at scale, using technologies such as Databricks, Apache Spark, Kafka, Flink, AWS Glue or other AWS services.
  • Work within cloud environments like AWS to leverage services including but not limited to EC2, RDS, S3, Athena, Glue, Lambda, EMR, Kinesis, and SQS for efficient data handling and processing.
  • Develop and optimize data models and storage solutions (SQL, NoSQL, Key-Value DBs, Data Lakes) to support operational and analytical applications, ensuring data quality and accessibility.
  • Utilize ETL tools and frameworks (e.g., Apache Airflow, Talend) to automate data workflows, ensuring efficient data integration and timely availability of data for analytics.
  • Implement pipelines with a high degree of automation for data workflows and deployment pipelines using tools like Apache Airflow, Terraform, and CI/CD frameworks.
  • Collaborate closely with business analysts, data scientists, machine learning engineers, and optimization engineers, providing the data infrastructure and tools needed for complex analytical models, leveraging Python, Scala or R for data processing scripts.
  • Ensure compliance with data governance and security policies, implementing best practices in data encryption, masking, and access controls within a cloud environment.
  • Establish best practices for code documentation, testing, and version control, ensuring consistent and reproducible data engineering practices across the team.
  • Monitor and troubleshoot data pipelines and databases for performance issues, applying tuning techniques to optimize data access and throughput.
  • Ensure efficient usage of AWS and Databricks resources to minimize costs while maintaining high performance and scalability.
  • Work cross-functionally to understand the data landscape, develop proofs of concept, and demonstrate them to stakeholders.
  • Lead one or more data projects, supported by internal and external resources.
  • Coach and mentor junior data engineers.
  • Stay abreast of emerging technologies and methodologies in data engineering, advocating for and implementing improvements to the data ecosystem.

Requirements

  • Bachelor's Degree in Computer Science, Data Science, MIS, Engineering, Mathematics, Statistics, or another quantitative discipline, plus 5-8 years of hands-on experience in data engineering and a proven track record in designing and operating large-scale data pipelines and architectures.
  • Proven experience designing scalable, fault-tolerant data architectures and pipelines on Databricks (Delta Lake, lakehouse, Unity Catalog, streaming) and AWS, including ETL/ELT development and data modeling, with a focus on performance optimization and maintainability.
  • Deep experience with platforms and services such as Databricks and AWS-native data offerings.
  • Solid experience with big data technologies (Databricks, Apache Spark, Kafka) and AWS cloud services related to data processing and storage.
  • Strong hands-on experience with ETL/ELT pipeline development using AWS tools and Databricks Workflows.
  • Strong experience in AWS cloud services, with hands-on experience in integrating cloud storage and compute services with Databricks.
  • Proficient in SQL and programming languages relevant to data engineering (Python, Java, Scala).
  • Hands-on RDBMS and data warehousing experience (data modeling, analysis, programming, stored procedures).
  • Good understanding of system architecture and design patterns, with the ability to apply these principles when designing and developing applications.
  • Proficiency with version control systems like Git and experience with CI/CD pipelines for automating data engineering deployments.
  • Familiarity with machine learning model deployment and management practices is a plus.

Nice-to-haves

  • Experience with SAP, BW, HANA, Tableau, or Power BI is a plus.
  • Experience in the automotive, manufacturing, or supply chain industries is a plus.
  • AWS Certified Solutions Architect certification.
  • Databricks Certified Associate Developer for Apache Spark or other relevant certifications.

Benefits

  • Medical insurance
  • Dental insurance
  • Vision insurance
  • Wellness programs
  • Retirement plans
  • Generous paid leave