Senior Data Engineer

$115,000 - $128,000/Yr

Unclassified - Folsom, CA

posted about 2 months ago

Full-time - Senior

About the position

The Senior Data Engineer position in Folsom, CA, is a permanent role that involves designing, building, and maintaining end-to-end data pipelines. The role requires a strong understanding of data engineering principles and practices, with a focus on optimizing data infrastructure and ensuring data quality. The position operates on a hybrid schedule, with approximately 50% of the work being onsite, depending on business needs.

Responsibilities

  • Enforce data engineering standards, principles, and practices.
  • Design, build, deploy, automate, and maintain end-to-end data pipelines for new and existing data sources and targets utilizing modern ETL/ELT tools and practices, including stream processing technologies where appropriate.
  • Demonstrate problem-solving ability that enables the team to resolve issues in a timely and effective manner.
  • Drive and complete project deliverables within the data engineering & management area according to project plans.
  • Utilize in-depth technical expertise regarding data models, data analysis and design, master data management, metadata management, reference data management, data warehousing, business intelligence, and data quality improvement.
  • Influence internal clients to leverage standard capabilities and make data-driven decisions.
  • Work with internal technical resources to optimize the data lakehouse through hardware or software upgrades or enhancements.
  • Design and implement data models that balance performance, flexibility, and ease of use, considering both analytical and operational needs.
  • Enable and support self-service analytics by designing intuitive data models and views, collaborating with the Business Intelligence team to ensure data is easily accessible and interpretable for business partners.
  • Work with vendors to troubleshoot and resolve system problems, providing on-call support as required.
  • Manage and automate the deployment of upgrades, patches, and new features across the data infrastructure, ensuring minimal disruption to data services and maintaining system integrity.
  • Conduct code reviews and approvals for data pipelines developed and implemented by the team.
  • Ensure compliance across all data lakehouse administration activities.
  • Design and manage implementation of data models to meet user specifications, while adhering to prescribed standards.
  • Collect and manage business metadata and data integration points.
  • Coordinate with business analysts and prepare data design for systems; analyze user requirements; prepare technical design specifications to address user needs.
  • Develop and implement comprehensive testing strategies, including automated unit, integration, and end-to-end tests, to ensure the accuracy, reliability, and performance of data pipelines and procedures.
  • Provide technical support and coordination during lakehouse design, testing, and movement to production.
  • Enforce standards and procedures to ensure data is managed consistently and properly integrated within the lakehouse.
  • Create and maintain thorough, up-to-date documentation for all data engineering projects, processes, and systems, adhering to organizational standards and leveraging modern documentation tools and practices.
  • Implement business rules via code, stored procedures, middleware, or other technologies, ensuring the scalability and maintainability of implemented solutions.
  • Analyze processes in specialty areas to isolate and correct problems and improve workflow.
  • Implement and maintain robust data quality assurance processes, including automated checks and balances, to ensure the integrity, accuracy, and reliability of data across all stages of processing and storage.
  • Maintain awareness of data management and business intelligence trends, products, technical advances, and productivity tools relevant to the company environment through vendor and third-party classes, self-study, and publications.
  • Establish, document, and enforce coding standards, best practices, and architectural guidelines for the data engineering team, promoting consistency, efficiency, and maintainability in all data solutions.

Requirements

  • Minimum 8 years of experience in data engineering or related field.
  • Bachelor's degree in a relevant field.
  • Advanced proficiency in Python and SQL.
  • Proficiency in Java or Scala.
  • Strong experience with at least one major cloud platform (AWS, Azure, or GCP).
  • Experience with modern data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery).
  • Proficiency in designing and implementing scalable data pipelines using ETL/ELT tools (e.g., Apache Airflow, AWS Glue, Databricks).
  • Strong knowledge of both relational (e.g., PostgreSQL, Oracle) and NoSQL (e.g., MongoDB, Cassandra) databases.
  • Proficiency in dimensional modeling and data warehouse design.

Nice-to-haves

  • Familiarity with R for statistical computing.
  • Experience with data lake architectures and technologies (e.g., Delta Lake, Apache Hudi).
  • Familiarity with Docker and container orchestration (e.g., Kubernetes).
  • Understanding of data governance principles and practices.
  • Familiarity with MLOps practices and tools.
  • Basic proficiency with data visualization tools (e.g., Power BI, Tableau).
  • Experience working in Agile environments (e.g., Scrum, Kanban).

Benefits

  • Health insurance coverage
  • 401(k) contribution
  • Incentive and recognition program