Prominds Business Consulting - Indianapolis, IN

posted 2 months ago

Full-time
Indianapolis, IN
Professional, Scientific, and Technical Services

About the position

The Data Architect position is a long-term contract role based in Indianapolis, IN, requiring onsite presence. The primary focus of this role is to leverage advanced data engineering, analytics, and machine learning capabilities using Databricks. The successful candidate will design and implement data solutions that strengthen the organization's data architecture and support a range of data-centric projects. The position demands a strong understanding of cloud services, particularly AWS, and proficiency in the programming languages and tools that support data processing and integration.

In this role, the Data Architect will work closely with cross-functional teams to ensure that data solutions align with business objectives and meet stakeholder needs. The candidate will be expected to document processes meticulously and communicate complex technical concepts clearly. The ability to collaborate with other data professionals and contribute to the organization's overall data strategy is essential, and experience with Microsoft Fabric for data visualization and integration will be beneficial in enhancing the data architecture.

The Data Architect will also develop and maintain CI/CD pipelines using GitHub Actions and AWS CDK, ensuring that data solutions are deployed efficiently and reliably. The role requires a proactive approach to problem solving and a commitment to continuous improvement in data processes and systems.

Responsibilities

  • Leverage Databricks for data engineering, analytics, and machine learning.
  • Design and implement data solutions that enhance the organization's data architecture.
  • Collaborate with cross-functional teams to align data solutions with business objectives.
  • Document processes and communicate complex technical concepts effectively.
  • Develop and maintain CI/CD pipelines using GitHub Actions and AWS CDK (see the CDK sketch after this list).
  • Use SQL and PL/SQL for data manipulation and querying.
  • Implement data processing solutions using Python with PySpark or the AWS Glue API (see the PySpark sketch after this list).
  • Enhance data visualization and integration using Microsoft Fabric.
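
As a rough illustration of the CI/CD responsibility above, the sketch below shows a minimal AWS CDK app in Python of the kind a GitHub Actions workflow might deploy with "cdk deploy". The stack and resource names (DataPipelineStack, CuratedDataBucket) and the bucket settings are hypothetical placeholders, not details from this posting.

    # Minimal AWS CDK (Python) sketch; names and settings are hypothetical.
    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3

    class DataPipelineStack(cdk.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            # Versioned, encrypted bucket for curated data, so bad loads
            # can be rolled back.
            s3.Bucket(
                self, "CuratedDataBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
            )

    app = cdk.App()
    DataPipelineStack(app, "DataPipelineStack")
    app.synth()

In a typical setup, a GitHub Actions workflow would check out the repository, install the CDK CLI, and run "cdk deploy" against this app on each merge to the main branch.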
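
Likewise, the data processing responsibility above might look like the following minimal PySpark sketch. The S3 paths and the table and column names (orders, order_id, order_ts) are hypothetical placeholders.

    # Minimal PySpark sketch; paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("curate-orders").getOrCreate()

    # Read raw data landed in S3.
    raw = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Standardize the timestamp, derive a partition column, and
    # de-duplicate on the business key.
    curated = (
        raw
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )

    # Write a partitioned, query-friendly copy for downstream analytics.
    (curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/orders/"))

On Databricks, the same transformation would usually target Delta tables (write.format("delta")) rather than plain Parquet, but the structure of the job is the same.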

Requirements

  • Proficiency in using Databricks for data engineering, analytics, and machine learning.
  • Strong skills in reading, understanding, and working with SQL and PL/SQL code.
  • Experience in Python programming, particularly in the context of data processing.
  • Solid experience with AWS services, especially in data-centric projects.
  • Hands-on experience with PySpark and/or AWS Glue API development.
  • Excellent ability to document processes and communicate complex concepts effectively.

Nice-to-haves

  • Experience with Microsoft Fabric for enhancing data visualization and integration.