CDW

posted about 2 months ago

Full-time
Merchant Wholesalers, Durable Goods

About the position

The Senior Software Engineer II - Data will play a pivotal role in building and operationalizing the minimum viable data sets needed for enterprise data and analytics initiatives, following best practices. The bulk of the data engineer's work is building, managing, and optimizing data pipelines and then moving those pipelines into production for key data and analytics consumers such as business analysts, data analysts, data scientists, or any persona that needs curated data for analytics use cases across the enterprise. This position requires a strong focus on developing scalable data solutions that can handle increasing data volume and complexity, ensuring that data is accessible and usable for stakeholders throughout the organization.

In this role, the engineer will develop and maintain scalable data pipelines to support the growing demands of data processing. They will interface with other technology teams to extract, transform, and load data from a wide variety of sources using big data technologies and SQL. Collaboration with business teams is essential to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization. The engineer will also work closely with data science teams to engineer data sets used for advanced analytics algorithms, including statistical analysis, prediction, clustering, and machine learning.

The Senior Software Engineer II - Data will use innovative, modern tools, techniques, and architectures to automate common, repeatable, and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity. Additionally, the engineer will train counterparts such as data scientists and data analysts in data pipelines and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases. Ensuring compliance and governance in the use of data is also a key responsibility, along with a commitment to staying informed about new data management techniques and how they can be applied to solve business problems.
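
As a rough illustration of the kind of pipeline work described above, the sketch below shows a minimal extract-transform-load flow in Python. The table names, the pandas/SQLAlchemy tooling, and the in-memory SQLite stand-in for a warehouse are illustrative assumptions, not a description of CDW's actual stack.

# Illustrative ETL sketch: extract rows from a hypothetical source table,
# apply a simple curation step, and load the result for downstream consumers.
import pandas as pd
from sqlalchemy import create_engine

def extract(engine) -> pd.DataFrame:
    # Pull raw rows from a hypothetical "orders" source table.
    return pd.read_sql("SELECT order_id, amount, region FROM orders", engine)

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Curate the data: drop incomplete rows, then aggregate by region.
    cleaned = raw.dropna(subset=["amount", "region"])
    return cleaned.groupby("region", as_index=False)["amount"].sum()

def load(curated: pd.DataFrame, engine) -> None:
    # Publish the curated data set where analysts and data scientists can consume it.
    curated.to_sql("curated_orders", engine, if_exists="replace", index=False)

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")  # stand-in for a real warehouse
    # Seed the hypothetical source so the sketch runs end to end.
    pd.DataFrame(
        {"order_id": [1, 2, 3],
         "amount": [100.0, 250.0, None],
         "region": ["East", "West", "East"]}
    ).to_sql("orders", engine, index=False)
    load(transform(extract(engine)), engine)
    print(pd.read_sql("SELECT region, amount FROM curated_orders", engine))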

Responsibilities

  • Develop and maintain scalable data pipelines to support continuing increases in data volume and complexity.
  • Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using big data technologies and SQL.
  • Collaborate with business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization.
  • Collaborate with other technology teams to engineer data sets that data science teams use to implement advanced analytics algorithms for statistical analysis, prediction, clustering, and machine learning on our rich datasets.
  • Use innovative, modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity.
  • Train counterparts such as data scientists, data analysts, and data consumers in data pipelines and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.
  • Help ensure compliance and governance in the use of data.
  • Stay curious and knowledgeable about new data management techniques and how to apply them to solve business problems.

Requirements

  • BS or MS degree in Computer Science or a related technical field.
  • 7+ years of data application development experience.
  • Strong experience with various data management architectures such as data warehouses, data lakes, and data hubs, and with supporting processes such as data integration, governance, and metadata management.
  • Extensive experience with popular data processing languages, including SQL, PL/SQL, and Python, for relational databases, and with NoSQL/Hadoop-oriented databases such as MongoDB and Cassandra for non-relational data.
  • Demonstrated ability to build rapport and maintain productive working relationships cross-departmentally and cross-functionally.
  • Demonstrated ability to coach and mentor others.
  • Excellent written and verbal communication skills with the ability to effectively interact with and present to all stakeholders including senior leadership.
  • Strong organizational, planning, and creative problem-solving skills with critical attention to detail.
  • Demonstrated success in facilitation and solution implementation.
  • History of balancing competing priorities with the ability to adapt to the changing needs of the business while meeting deadlines.

Nice-to-haves

  • Extensive experience with Azure Data Factory
  • Extensive experience working with a cloud platform (at least one of Azure, AWS, or GCP)
  • Experience with ETL Tools (SSIS, Informatica, Ab Initio, Talend), Python, Databricks, Microsoft SQL Server Platform (version 2012 or later), and/or working in an Agile environment.