W.W. Grainger - Chicago, IL

posted about 2 months ago

Full-time - Senior
Chicago, IL
Merchant Wholesalers, Durable Goods

About the position

As a Senior Staff Data Engineer on the Customer Information Management (CIM) team at Grainger, you will play a pivotal role in building analytical data products that drive growth actions across sales and marketing. The CIM team's primary mission is to create, gather, maintain, and operationalize real-world information about our customers. This position is based in Chicago and reports to the Manager of CIM, Product Engineering.

In this role, you will design and develop data pipelines and data products. You will collaborate with subject matter experts (SMEs), architects, analysts, data scientists, stakeholders, and other team members to build solutions that integrate, process, and store data from internal and external enterprise data sources. Your work will enable analytics and reporting by centralizing and integrating high-quality, large, and complex data sets into a highly performant and scalable cloud analytical platform.

You will work closely with architects and product teams to implement data pipelines that ingest, cleanse, and enrich data for analytics, and you will design and implement secure, performant data models to meet the scalability and performance needs of data products. Staying abreast of trends and emerging technologies will be crucial as you evaluate the performance and applicability of potential tools for our requirements. You will also build data models with DBT to transform data, ensure data quality by creating DBT unit tests, and maintain data quality dashboards in Streamlit.

You will partner with various stakeholders, assisting them with data-related technical issues and developing data products/systems that use large and complex data sets to meet business and technical requirements. Together with product and business teams, you will define the roadmap, communication, and architecture.

Promoting effective team practices, shaping team culture, and engaging in active mentoring will be key aspects of your role. You will pair program with developers daily to ensure better-quality code, shared knowledge, and increased resiliency of applications. You will also advocate for best practices on the team, prescribe coding and testing standards and tools, and work across the CIM Domain to establish best practices and coherent ways of working between and within teams.
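For candidates unfamiliar with the DBT workflow described above, here is a minimal, hypothetical sketch of what "a model plus a unit test" looks like. The model, column, and test names are invented for illustration, and the YAML assumes the `unit_tests:` syntax introduced in dbt 1.8+:

```sql
-- models/stg_customers.sql (hypothetical staging model)
-- Normalizes customer email addresses from a raw source table.
select
    id,
    lower(trim(email)) as email
from {{ ref('raw_customers') }}
```

```yaml
# models/stg_customers.yml (hypothetical unit test for the model above)
unit_tests:
  - name: normalizes_customer_email
    model: stg_customers
    given:
      - input: ref('raw_customers')
        rows:
          - {id: 1, email: "  JANE@EXAMPLE.COM "}
    expect:
      rows:
        - {id: 1, email: "jane@example.com"}
```

Running `dbt test` then exercises the model's transformation logic against the fixture rows without touching production data.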

Responsibilities

  • Enable analytics and reporting by centralizing and integrating high-quality, large, and complex data sets into a highly performant and scalable cloud analytical platform
  • Work closely with architects, stakeholders, and product teams to implement data pipelines that ingest, cleanse, and enrich data for downstream analytics stakeholders
  • Design and implement secure, performant data models to meet the scalability and performance needs of data products
  • Understand trends and emerging technologies and evaluate the performance and applicability of potential tools for our requirements
  • Build data models with DBT to transform data and ensure data quality by creating DBT unit tests
  • Build and maintain the data quality dashboards in Streamlit
  • Partner with stakeholders, including data, design, product, and executive teams, and assist them with data-related technical issues
  • Develop data products/systems using large and complex data sets to meet business and technical requirements
  • Work with product and business teams to define the roadmap, communication, and architecture
  • Promote effective team practices, shape team culture, and engage in active mentoring
  • Pair program with developers daily to ensure better quality code, shared knowledge and increased resiliency of our applications
  • Advocate for best practices on the team and prescribe coding and testing standards and tools
  • Work across the CIM Domain to establish best practices and coherent ways of working between and within teams
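The Streamlit data quality dashboards mentioned above would typically render metrics computed over the pipeline's output. As a minimal, stdlib-only sketch (all table, column, and function names here are hypothetical, not Grainger's actual code), the metric computation might look like:

```python
# Hypothetical data-quality metrics that a Streamlit dashboard could display.
# Uses only the standard library; rows are plain dicts for illustration.

def quality_metrics(rows, key="customer_id", required=("email", "postal_code")):
    """Return row count, per-column null rates, and key uniqueness."""
    total = len(rows)
    if total == 0:
        return {"row_count": 0}
    # Share of rows where a required column is missing or empty.
    null_rates = {
        col: sum(1 for r in rows if not r.get(col)) / total
        for col in required
    }
    # Ratio of distinct key values to total rows (1.0 means no duplicates).
    distinct_keys = len({r.get(key) for r in rows})
    return {
        "row_count": total,
        "null_rate": null_rates,
        "key_uniqueness": distinct_keys / total,
    }

# In a Streamlit app, these values would be rendered with calls such as:
#   import streamlit as st
#   m = quality_metrics(load_rows())
#   st.metric("Rows", m["row_count"])
#   st.metric("Key uniqueness", f"{m['key_uniqueness']:.1%}")
```

Keeping the metric logic separate from the Streamlit rendering, as sketched here, also makes it straightforward to unit-test the quality checks on their own.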

Requirements

  • 10 years of experience with Modern Data Engineering projects and practices: designing, building, and deploying scalable data pipelines
  • 5+ years of experience in designing, building, and deploying cloud-native solutions
  • At least 3 years of experience with AWS, Snowflake, DBT, Airflow/Astronomer, Python, Docker/Kubernetes, CI/CD, and Git, plus familiarity with Databricks
  • Experience working closely with architects to design and develop data lakes, data pipelines, and data product publication strategies
  • Familiarity with Data mesh architecture
  • Experience designing and implementing efficient, reusable, and scalable data processing systems and pipelines in Databricks and Snowflake
  • Experience educating and mentoring junior data engineers on data engineering best practices
  • Experience partnering with internal departments to establish requirements
  • Proven experience collaborating across teams to develop and implement data engineering best practices
  • Exposure to analytics and machine learning
  • Familiarity with BI tools such as Tableau and Power BI

Benefits

  • Medical, dental, vision, and life insurance plans
  • Generous paid time off (PTO) and 6 company holidays per year
  • Automatic 6% 401(k) company contribution each pay period
  • Employee discounts, parental leave, 3:1 match on donations and tuition reimbursement
  • A comprehensive set of emotional, financial, physical and social wellbeing programs