Trace3 - Grand Rapids, MI

posted about 2 months ago

Full-time - Senior
Remote - Grand Rapids, MI
Professional, Scientific, and Technical Services

About the position

We are seeking a Senior Data Integration Engineer with a strong background in data engineering and development. This role involves collaborating with a team of software and data engineers to create client-facing, data-first solutions using technologies such as SQL Server and MongoDB. The engineer will be responsible for developing data pipelines that transform and integrate data across data zones, ensuring high-quality data management and API development.

Responsibilities

  • Develop processes and data models for consuming large quantities of third-party vendor data via RESTful APIs (see the ingestion sketch after this list).
  • Develop data processing pipelines to analyze, transform, and migrate data between applications and systems.
  • Analyze data from multiple sources and reconcile differences in storage schemas through ETL processes.
  • Develop APIs for external consumption by partners and customers.
  • Support the ETL environment by recommending improvements, monitoring, and deploying quality and validation processes to ensure data accuracy and integrity.
  • Design, develop, test, deploy, maintain, and improve data integration pipelines.
  • Create technical solutions that solve business problems and are well engineered, operable, and maintainable.
  • Design and implement tools to detect data anomalies (observability; a sketch of such a check follows this list).
  • Ensure data accuracy, completeness, and high quality across all platforms.
  • Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
  • Assemble large and complex data sets; develop data models based on specifications using structured data sets.
  • Build familiarity with emerging automation tools and technologies that support business processes.
  • Develop scalable and reusable frameworks for ingestion and transformation of large datasets.
  • Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
  • Collaborate with stakeholders including Executive, Product, Data, and Design teams to support their data infrastructure needs.
  • Develop data models and mappings and build new data assets required by users.
  • Perform exploratory data analysis on existing products and datasets.
  • Identify, design, and implement internal process improvements including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
  • Engage in logical and physical database design, including the creation of tables, views, stored procedures, packages, scripts, and other database objects.
  • Create documentation for solutions and processes implemented or updated.
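
To make the ingestion responsibilities above concrete, here is a minimal sketch in TypeScript (NodeJS) of consuming a paginated vendor REST API and upserting the results into MongoDB. The endpoint URL, query parameters, database, and collection names are illustrative assumptions, not details of this employer's actual systems; it assumes Node 18+ (for global fetch) and the "mongodb" driver.

```typescript
// Minimal ingestion sketch. Endpoint, parameters, and names are hypothetical.
import { MongoClient } from "mongodb";

const VENDOR_API = "https://vendor.example.com/v1/records"; // hypothetical endpoint
const MONGO_URI = process.env.MONGO_URI ?? "mongodb://localhost:27017";

async function ingestVendorData(): Promise<void> {
  const client = new MongoClient(MONGO_URI);
  await client.connect();
  const records = client.db("staging").collection("vendor_records");

  try {
    for (let page = 1; ; page++) {
      // Pull one page of vendor data (assumes page/pageSize query params).
      const res = await fetch(`${VENDOR_API}?page=${page}&pageSize=500`);
      if (!res.ok) throw new Error(`Vendor API returned ${res.status}`);
      const batch: Array<{ id: string }> = await res.json();
      if (batch.length === 0) break; // no more pages

      // Upsert keyed on the vendor's id so re-runs are idempotent.
      await records.bulkWrite(
        batch.map((record) => ({
          updateOne: {
            filter: { vendorId: record.id },
            update: { $set: { ...record, ingestedAt: new Date() } },
            upsert: true,
          },
        }))
      );
    }
  } finally {
    await client.close();
  }
}

ingestVendorData().catch((err) => {
  console.error("Ingestion failed:", err);
  process.exit(1);
});
```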
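Along the lines of the anomaly-detection bullet, a minimal observability sketch might run on a schedule and flag stale or unusually sized ingestion volumes. The thresholds, baseline, and collection names here are again assumptions for illustration:

```typescript
// Minimal observability sketch: flag staleness and volume anomalies in an
// ingestion collection. Thresholds and names are hypothetical.
import { MongoClient } from "mongodb";

async function checkIngestionHealth(): Promise<void> {
  const client = new MongoClient(process.env.MONGO_URI ?? "mongodb://localhost:27017");
  await client.connect();
  try {
    const records = client.db("staging").collection("vendor_records");

    // Staleness check: alert if nothing arrived in the last 24 hours.
    const cutoff = new Date(Date.now() - 24 * 60 * 60 * 1000);
    const recent = await records.countDocuments({ ingestedAt: { $gte: cutoff } });
    if (recent === 0) {
      console.warn("ANOMALY: no records ingested in the last 24 hours");
    }

    // Volume check: alert on sharp deviation from an assumed daily baseline.
    const baseline = 10_000; // hypothetical expected daily volume
    if (recent > 0 && (recent < baseline / 2 || recent > baseline * 2)) {
      console.warn(`ANOMALY: daily volume ${recent} outside expected range`);
    }
  } finally {
    await client.close();
  }
}

checkIngestionHealth().catch((err) => {
  console.error("Health check failed:", err);
  process.exit(1);
});
```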

Requirements

  • 5+ years of relational database development experience, including SQL query generation and tuning, database design, and data concepts.
  • 5+ years of backend and RESTful API development experience in NodeJS (experience with GraphQL is a plus).
  • 5+ years of development experience with Python, Java, or C#/.NET.
  • 5+ years of experience with SQL and NoSQL databases, including MS SQL and MongoDB.
  • 5+ years of experience consuming RESTful APIs for data ingestion and storage.
  • 5+ years of experience developing RESTful APIs for consumption by customers (see the endpoint sketch after this list).
  • 3+ years of professional work experience designing and implementing data pipelines in a cloud environment.
  • 3+ years of experience working within Azure cloud.
  • Experience in integrating and ingesting data from external data sources.
  • Strong diagnostic skills and ability to research, troubleshoot, and logically determine solutions.
  • Ability to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
  • Excellent written and verbal communication skills.
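
For the customer-facing API requirement, here is a minimal sketch of a versioned, paginated REST endpoint in NodeJS, written in TypeScript with Express. The route, payload shape, and port are illustrative assumptions rather than this employer's actual API:

```typescript
// Minimal customer-facing REST endpoint sketch using Express.
// Route, payload shape, and port are hypothetical.
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// Versioned read endpoint with basic pagination, a common pattern for
// partner/customer data APIs.
app.get("/api/v1/records", (req: Request, res: Response) => {
  const page = Math.max(1, Number(req.query.page) || 1);
  const pageSize = Math.min(500, Number(req.query.pageSize) || 100);
  // A real handler would query SQL Server or MongoDB; stubbed here.
  res.json({ page, pageSize, data: [] });
});

app.listen(3000, () => console.log("API listening on :3000"));
```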