Penske Automotive Group - Reading, PA

posted about 2 months ago

Full-time - Senior
Reading, PA
501-1,000 employees
Truck Transportation

About the position

As a Senior Data Engineer at Penske Truck Leasing, you will lead the technical design, engineering, and support of complex data integrations between business-critical applications and systems. You will collaborate with a team to define and implement data engineering and analytics strategies while mentoring other data engineers on best practices. You will develop, enhance, and support existing and new data pipelines, data ingestion, data storage, and traditional Data Warehouse/BI systems across multiple medium and large projects concurrently, supporting software developers, database architects, data analysts, and data scientists on key initiatives and ensuring an optimal data delivery architecture throughout ongoing projects.

In this role, you will have the opportunity to influence the data and analytics roadmap through regular interaction with business and technology teams. You will lead the design of critical components, propose or implement new frameworks and tools based on industry and technology trends, and consult with process owners to review, interpret, and develop systems per user requirements, ensuring code quality through proper documentation, best practices, and code reviews. Working in a collaborative environment, you will continue to develop your own skills while acting as a mentor and technical guide to associate data engineers.

You will serve as the primary contact and lead support associate for multiple critical projects, data pipelines, and integrations, resolving customer issues promptly. You will work with technical and business leaders, product owners, software engineers, data architects, and data analysts to gather and understand requirements and acceptance criteria; participate in defining Data Engineering Process and Technology strategies; provide day-to-day task leadership; and communicate status and issues to team members and stakeholders. You will deliver medium to large projects with clean, well-documented, and maintainable code that adheres to defined coding standards. You will also design and implement internal process improvements, partner with cross-functional teams to build analytics tools, and help build data security and governance guidelines.

Responsibilities

  • Lead the technical design, engineering, and support of complex data integrations between business-critical applications and systems.
  • Develop, enhance, and support existing and new data pipelines, data ingestion, data storage, and traditional Data Warehouse/BI systems across medium and large projects.
  • Support software developers, database architects, data analysts, and data scientists on vital initiatives to ensure optimal data delivery architecture.
  • Influence the data and analytics roadmap through interactions with various business and technology teams.
  • Lead the design of vital components and suggest/implement new frameworks and tools based on industry trends.
  • Consult with process owners to review, interpret, and develop systems per user requirements.
  • Ensure code quality through proper documentation, best practices, and code reviews.
  • Be the primary contact and lead support associate for multiple critical projects/data pipelines and integrations.
  • Collaborate with technical/business leaders, product owners, software engineers, data architects, and data analysts to acquire and understand requirements.
  • Participate in defining Data Engineering Process and Technology strategies.
  • Provide day-to-day task leadership and project reviews.
  • Communicate status and issues to team members and stakeholders.
  • Produce deliverables for medium to large-sized projects with clean, well-documented, and maintainable code.
  • Design and implement internal process improvements, automating manual processes and optimizing data delivery.
  • Partner with cross-functional teams to build analytics tools that utilize the data pipeline.
  • Participate in building data security and data governance guidelines.
  • Conduct new hire interviews and provide constructive input to department management regarding team members assigned to the project.

Requirements

  • Bachelor's degree in Computer Science or Computer Engineering, or equivalent years of software development experience, required.
  • 10+ years of overall technology experience.
  • 8+ years of Data Engineering, Data Modeling, and Data Warehousing experience required.
  • 2+ years of experience working with Agile teams preferred.
  • 1-2 years of experience leading medium to large projects.
  • Expert knowledge of the full data engineering lifecycle.
  • Exposure to Data Lake and Lakehouse concepts and technologies such as Snowflake, Amazon Redshift, or Databricks.
  • Experience in multidimensional data modeling, including logical and physical models, star and snowflake schemas, and denormalized models.
  • Experience handling Big Data and IoT data, including common Big Data file formats such as Avro and Parquet.
  • Expertise in cloud technologies, preferably AWS.
  • Expert-level experience with data engineering tools such as Talend, SAP BODS, Kafka, Python, PySpark, Python APIs, Kubernetes, S3, Glue, Athena, and EMR (illustrative sketches follow this list).
  • Exposure to BI and analytics tools such as Qlik Sense, Amazon SageMaker, SAS Viya, and Dataiku.
  • Experience with purpose-built database platforms, including relational, key-value, in-memory, document, time-series, and geospatial data stores.
  • Strong SQL background.
  • Expertise in building data pipelines, data ingestion, data integration, data preparation, data transformation, data structures, metadata, and traditional data warehouses and data marts.
  • Strong analytic skills related to working with complex datasets.
  • Expertise in designing, validating, and implementing multiple projects across hybrid infrastructure (cloud to on-premises and vice versa).
  • In-depth knowledge of appropriate design frameworks and patterns and experience in implementing them.
  • Knowledge of industry-wide technology strategies and best practices.
  • Understanding of message queuing, stream processing, and highly scalable 'big data' data stores (see the consumer sketch after this list).
  • Solid written and oral communication skills, with the ability to present ideas in business-friendly and user-friendly language.
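
The posting itself contains no code; purely as an illustration of the pipeline and dimensional-modeling skills listed above, here is a minimal PySpark sketch that reads raw Parquet events and derives a simple star-schema dimension and fact table. The S3 paths, column names, and schema are hypothetical, not Penske's actual systems.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("telematics_pipeline").getOrCreate()

    # Read raw telematics events landed as Parquet (hypothetical bucket and schema).
    events = spark.read.parquet("s3://example-raw/telematics/")

    # Conform a vehicle dimension: one row per vehicle.
    dim_vehicle = (
        events.select("vehicle_id", "vehicle_class", "home_branch")
        .dropDuplicates(["vehicle_id"])
    )

    # Build a daily utilization fact table keyed to the dimension (star schema).
    fact_daily = (
        events.withColumn("event_date", F.to_date("event_ts"))
        .groupBy("vehicle_id", "event_date")
        .agg(F.sum("miles_driven").alias("total_miles"),
             F.count("*").alias("event_count"))
    )

    # Write curated, partitioned Parquet back to the warehouse zone.
    dim_vehicle.write.mode("overwrite").parquet("s3://example-curated/dim_vehicle/")
    fact_daily.write.mode("overwrite").partitionBy("event_date") \
        .parquet("s3://example-curated/fact_vehicle_daily/")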
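
Similarly, for the message-queuing and stream-processing requirement, a minimal consumer sketch using the open-source kafka-python client; the broker address, topic, consumer group, and message fields are assumptions for illustration only.

    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    # Consume JSON events from a hypothetical topic and flag simple threshold breaches.
    consumer = KafkaConsumer(
        "telematics.events",
        bootstrap_servers=["localhost:9092"],
        group_id="utilization-monitor",
        auto_offset_reset="earliest",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        if event.get("engine_hours", 0) > 12:  # assumed alerting rule
            print(f"High utilization: vehicle {event.get('vehicle_id')}")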

Nice-to-haves

  • Experience with data governance frameworks and best practices.
  • Familiarity with machine learning concepts and tools.
  • Experience in mentoring junior data engineers.

Benefits

  • Health insurance coverage.
  • 401k retirement savings plan.
  • Paid holidays and vacation time.
  • Professional development opportunities.
  • Flexible scheduling options.