Staff Data Engineer

$152,000 - $179,000/Yr

National Grid - Waltham, MA

posted 2 months ago

Full-time - Mid Level
Remote - Waltham, MA
Utilities

About the position

As a Staff Data Engineer at National Grid, you will play a pivotal role in our Product Engineering department, focusing on making critical data accessible to our business teams. This position is available in various locations including Waltham, MA; Brooklyn, NY; Hicksville, NY; or Syracuse, NY, and is open to candidates residing in nearby states such as Connecticut, New Jersey, New Hampshire, Pennsylvania, Rhode Island, Vermont, or Maine. Your work will be integral to the US IT Electric business unit, where innovation and adaptability are key.

In this role, you will be part of a Data Engineering team that utilizes the agile framework to build end-to-end data pipelines. You will adhere to rigorous engineering standards and coding practices to ensure that the data delivered is of the highest quality and readily accessible. Your contributions will also extend to modernizing our architecture and tools, enhancing our output, scalability, and speed. You will design and develop highly scalable and extensible data pipelines that facilitate the collection, storage, distribution, modeling, and analysis of large datasets from various channels.

Key responsibilities include leading the Data Engineering team in developing, testing, documenting, and supporting scalable data pipelines. You will build new data integrations, including APIs, to accommodate the increasing volume and complexity of data. Additionally, you will implement scalable solutions that align with our data governance standards and architectural roadmaps for data integrations, storage, reporting, and analytics. Collaboration with analytics and business teams will be essential to improve data models that enhance business intelligence tools, thereby fostering data-driven decision-making across the organization. You will also design and develop data integrations and a data quality framework, write unit/integration/functional tests, and document your work. Furthermore, you will automate the deployment of our distributed system for collecting and processing streaming events from multiple sources, perform data analysis to troubleshoot data-related issues, and guide junior engineers on coding best practices and optimization.

Responsibilities

  • Lead the Data Engineering team to develop, test, document, and support scalable data pipelines.
  • Build out new data integrations including APIs to support continuing increases in data volume and complexity.
  • Build and implement scalable solutions that align with our data governance standards and architectural roadmaps for data integrations, data storage, reporting, and analytic solutions.
  • Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
  • Design and develop data integrations and a data quality framework. Write unit/integration/functional tests and document work.
  • Design, implement, and automate deployment of our distributed system for collecting and processing streaming events from multiple sources.
  • Perform data analysis to troubleshoot and aid in the resolution of data-related issues.
  • Guide and mentor junior engineers on coding best practices and optimization.

Requirements

  • 4-year college degree or equivalent combination of education and experience in Computer Science, Mathematics, Statistics, or related technical field.
  • 8 years of relevant work experience in analytics, data engineering, business intelligence, or a related field.
  • Skilled in object-oriented programming, particularly in Python.
  • Strong experience in Python, PySpark, and SQL.
  • Strong experience in Databricks and Snowflake.
  • Experience developing integrations across multiple systems and APIs.
  • Experience with or knowledge of Agile software development methodologies.
  • Experience with cloud-based databases, specifically Azure technologies (e.g., Azure Data Lake, ADF, Azure DevOps, and Azure Functions).
  • Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
  • Experience with data warehouse technologies.
  • Experience creating ETL and/or ELT jobs.

Benefits

  • Flexible work schedule
  • Hybrid work structure allowing remote or office work as needed
  • Career advancement opportunities
  • Collaborative, team-oriented culture
  • Competitive salary based on experience and location
© 2024 Teal Labs, Inc