CDW - Illinois City, IL

posted about 2 months ago

Full-time
Illinois City, IL
Merchant Wholesalers, Durable Goods

About the position

The Senior Software Engineer II - Data plays a pivotal role in building and operationalizing the data needed for enterprise data and analytics initiatives, following best practices. The bulk of the work is building, managing, and optimizing data pipelines and then moving those pipelines into production for key data and analytics consumers such as business and data analysts, data scientists, and anyone else who needs curated data for analytics use cases across the enterprise.

In this role, you will develop and maintain scalable data pipelines to support continuing increases in data volume and complexity, and you will interface with other technology teams to extract, transform, and load data from a wide variety of sources using big data technologies and SQL. You will collaborate with business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization, and with other technology teams to engineer the data sets that data science teams use to implement advanced analytics algorithms for statistical analysis, prediction, clustering, and machine learning.

You will also use innovative, modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual, error-prone processes and improving productivity. Training counterparts such as data scientists, data analysts, and other data consumers in data pipelines and preparation techniques is part of the role, making it easier for them to integrate and consume the data they need for their own use cases. Finally, you will help ensure compliance and governance in the use of data, and you are expected to stay curious and knowledgeable about new data management techniques and how to apply them to solve business problems.
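For illustration only (this sketch is not part of the original posting), the snippet below shows, in miniature, the kind of extract-transform-load step described above: raw records are extracted, a derived column is computed, the result is loaded into a curated SQL table, and a downstream consumer queries it. It uses Python with an in-memory SQLite database; the table name, columns, and sample rows are all hypothetical.

    import sqlite3

    # Hypothetical source rows, standing in for an extract from an upstream system.
    source_rows = [
        ("2024-01-05", "SKU-100", 3, 19.99),
        ("2024-01-05", "SKU-200", 1, 249.00),
        ("2024-01-06", "SKU-100", 2, 19.99),
    ]

    conn = sqlite3.connect(":memory:")
    conn.execute(
        """CREATE TABLE fact_sales (
               order_date TEXT,
               sku        TEXT,
               quantity   INTEGER,
               unit_price REAL,
               revenue    REAL
           )"""
    )

    # Transform: derive a revenue column, then load into the curated table.
    conn.executemany(
        "INSERT INTO fact_sales VALUES (?, ?, ?, ?, ?)",
        [(d, sku, qty, price, qty * price) for d, sku, qty, price in source_rows],
    )

    # A downstream consumer (analyst or BI tool) queries the curated data.
    for sku, revenue in conn.execute(
        "SELECT sku, SUM(revenue) FROM fact_sales GROUP BY sku ORDER BY sku"
    ):
        print(sku, revenue)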

Responsibilities

  • Develop and maintain scalable data pipelines to support continuing increases in data volume and complexity.
  • Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using big data technologies and SQL.
  • Collaborate with business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization.
  • Collaborate with other technology teams to help engineer data sets that data science teams use to implement advanced analytics algorithms for statistical analysis, prediction, clustering, and machine learning.
  • Use innovative, modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual, error-prone processes and improving productivity.
  • Train counterparts such as data scientists, data analysts, and other data consumers in data pipelines and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.
  • Help ensure compliance and governance in the use of data.
  • Stay curious and knowledgeable about new data management techniques and how to apply them to solve business problems.

Requirements

  • BS or MS degree in Computer Science or a related technical field.
  • 7+ years of data application development experience.
  • Strong experience with Data Management architectures such as Data Warehouse, Data Lake, and Data Hub, and with supporting processes such as Data Integration, Governance, and Metadata Management.
  • Extensive experience with popular data processing languages such as SQL, PL/SQL, and Python for relational databases, and with NoSQL/Hadoop-oriented databases such as MongoDB and Cassandra for non-relational data.
  • Demonstrated ability to build rapport and maintain productive working relationships cross-departmentally and cross-functionally.
  • Demonstrated ability to coach and mentor others.
  • Excellent written and verbal communication skills with the ability to effectively interact with and present to all stakeholders including senior leadership.
  • Strong organizational, planning, and creative problem-solving skills with critical attention to detail.
  • Demonstrated success in facilitation and solution implementation.
  • History of balancing competing priorities with the ability to adapt to the changing needs of the business while meeting deadlines.

Nice-to-haves

  • Extensive experience with Azure Data Factory
  • Extensive experience working with a cloud platform (at least one of Azure, AWS, GCP)
  • Experience with ETL Tools (SSIS, Informatica, Ab Initio, Talend), Python, Databricks, Microsoft SQL Server Platform (version 2012 or later), and/or working in an Agile environment.