SunSource - Warren, MI

posted 5 days ago

Full-time
Warren, MI
Transportation Equipment Manufacturing

About the position

The Data Engineer position at SunSource focuses on developing and delivering enterprise applications that support the company's business strategy and growth. This role involves working closely with business and technology leaders to enhance integration and analytical capabilities, particularly in Customer & Supplier Integration and Advanced Analytics, to leverage data for competitive advantage.

Responsibilities

  • Perform BI solution development activities including requirements gathering, technical specifications, ETL design & development, unit & integration testing.
  • Develop and support backend data models and ETL workflows on Databricks, focusing on building scalable data processing pipelines using Python and Spark.
  • Support data integration with supplier partners to provide accurate, up-to-date information to internal and external customers.
  • Review, interpret, and debug complex code.
  • Develop maintainable code via proper structures, comments, and design.
  • Generate automated, repeatable, and stable processes to support data transformation, data load, data movement, and data synchronization.
  • Contribute toward the continuous improvement of internal data capabilities.
  • Perform source data analysis, data profiling, validation, and conceptual and logical data modeling to determine the suitability of source data for reporting requirements.
  • Facilitate and lead senior IT and business teams through challenging data issues.
  • Meet with subject matter experts to learn relevant business concepts and develop an understanding of how their data supports business processes.
  • Assist in developing the roadmap for Analytics and Advanced Analytics across the company to enable business strategy.
  • Participate in information architecture strategies such as data management, data governance, master data management, and metadata management.
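The transformation work described above — cleaning, validating, and de-duplicating records before load — can be sketched in plain Python. This is an illustrative example only; the record fields (`part_no`, `qty`, `price`) and function name are assumptions, not details from the posting, and a production pipeline would express the same logic in Spark DataFrame operations.

```python
# Hypothetical ETL transform step: normalize raw supplier records before load.
# Field names (part_no, qty, price) are illustrative assumptions.

def transform_supplier_records(raw_records):
    """Clean, type-cast, and de-duplicate raw supplier rows."""
    seen = set()
    cleaned = []
    for rec in raw_records:
        part_no = str(rec.get("part_no", "")).strip().upper()
        if not part_no or part_no in seen:
            continue  # drop blank keys and duplicate parts
        seen.add(part_no)
        cleaned.append({
            "part_no": part_no,
            "qty": int(rec.get("qty", 0)),          # enforce integer quantities
            "price": round(float(rec.get("price", 0.0)), 2),  # 2-decimal prices
        })
    return cleaned
```

In a Spark pipeline the same normalization would typically run as column expressions or a `dropDuplicates` call so it scales across partitions; the pure-Python version above just shows the per-record rules.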

Requirements

  • Experience in BI solution development and ETL design & development.
  • Proficiency in Python and Spark for building scalable data processing pipelines.
  • Strong understanding of data integration processes and data modeling techniques.
  • Ability to review and debug complex code effectively.
  • Experience in developing maintainable code with proper documentation and design practices.
  • Knowledge of data transformation, data load, and data synchronization processes.
  • Experience in data profiling, validation, and conceptual and logical data modeling.
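The data-profiling skill listed above usually means computing per-column summaries (null counts, distinct counts, ranges) to judge whether source data can support a report. A minimal sketch, assuming rows arrive as Python dicts — the structure and names here are illustrative, not taken from the posting:

```python
# Minimal data-profiling sketch: per-column null counts, distinct counts,
# and min/max over a list of dict rows. Column names are illustrative.

def profile(rows):
    stats = {}
    columns = {c for row in rows for c in row}
    for col in sorted(columns):
        values = [row.get(col) for row in rows]
        non_null = [v for v in values if v is not None]
        stats[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
            "min": min(non_null) if non_null else None,
            "max": max(non_null) if non_null else None,
        }
    return stats
```

At warehouse scale the same summaries would come from Spark aggregations rather than a Python loop, but the quantities profiled are the same.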

Nice-to-haves

  • Familiarity with Databricks and its functionalities.
  • Experience in advanced analytics and data governance practices.