Data Engineer, Lead

$138,200 - $175,000/Yr

Autodesk - San Francisco, CA

posted 2 months ago

Full-time - Mid Level
Remote - San Francisco, CA
Publishing Industries

About the position

The Lead Data Engineer at Autodesk is responsible for designing and implementing strategies for enterprise databases and data warehouse systems. This role involves leading large-scale data engineering projects, collaborating with cross-functional teams, and maintaining scalable ETL pipelines to handle large datasets. The position requires a deep understanding of data systems and the ability to propose enhancements to improve performance and reliability.

Responsibilities

  • Design strategies for enterprise databases and data warehouse systems.
  • Propose and implement enhancements to improve system performance and reliability.
  • Modify existing software to correct errors and adapt to new hardware.
  • Develop and direct software system testing and validation procedures.
  • Lead large-scale data engineering projects and manage teams of Data Engineers.
  • Collaborate with cross-functional teams to build scalable data pipelines.
  • Maintain scalable ETL pipelines on distributed software systems and cloud platforms.
  • Collaborate on the design and maintenance of operations data mart systems.
  • Acquire data from warehouse and transactional source systems using various tools.
  • Design and develop software to automate data reporting functions.
  • Develop and maintain large-scale data analytics systems.
  • Manage data-related activities with internal developers and external consultants.
  • Interact with internal users regarding data system availability and visualization.
  • Drive improvements in cross-functional data integration efforts.
  • Review reports and analysis to identify efficient data summarization techniques.
  • Detect and analyze data quality issues and implement fixes.

Requirements

  • Bachelor's degree in Computer Science, Engineering, Computer Information Systems, or related field.
  • 5 years of progressive experience in data engineering or software engineering.
  • Experience building scalable massively parallel processing (MPP) data pipelines.
  • Proficiency in designing star and snowflake schemas for data warehousing.
  • Strong SQL skills including advanced functions and query optimization techniques.
  • Programming experience in Python, Java, or Scala.
  • Experience with AWS services such as EC2, Lambda, and S3.
  • Familiarity with Hive and Spark or similar distributed processing platforms.
  • Experience in scaling and optimizing schemas and performance tuning SQL and ETL pipelines.
  • Experience with enterprise ETL tools like Informatica or SSIS.
  • Familiarity with Agile development processes and tools like JIRA and GitHub.

Benefits

  • Health and financial benefits
  • Time away and everyday wellness
  • Annual cash bonuses
  • Stock grants
  • Comprehensive benefits package