Heliogic Staffing - Seattle, WA

posted 5 days ago

Full-time - Senior
Seattle, WA

About the position

The Senior Data Architect position is designed for an experienced engineer dedicated to developing high-performance data platforms. This role emphasizes effective communication, collaboration, and a strong foundation in programming and computer science. The successful candidate will work with a distributed team to deliver scalable, maintainable, and testable software solutions, while also leading architectural discussions and mentoring other specialists.

Responsibilities

  • Work with a distributed team of engineers across multiple products building software collaboratively.
  • Work cross-team to build consensus on approach for delivering projects.
  • Collaborate with business partners to understand and refine requirements.
  • Eliminate ambiguity in projects and communicate direction to engineers to help team members work in parallel.
  • Ramp up quickly on existing software to deliver incremental, integrated solutions in a complex environment.
  • Build high-performance, stable, scalable systems to be deployed in an enterprise setting.
  • Lead high-level architecture discussions and planning sessions.
  • Participate in the code review process by assessing pull requests.
  • Support systems and services during production incidents as part of the on-call rotation.
  • Author and recommend technical proposals and root cause analyses.
  • Provide mentoring and advice for other specialists.
  • Establish engineering practices and standards within the team to drive quality and excellence.

Requirements

  • Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or a comparable field of study, and/or equivalent work experience.
  • 15+ years of related big data engineering experience modeling and developing large data pipelines and platforms.
  • Hands-on experience querying and processing data with distributed systems such as Spark (PySpark) and the Hadoop ecosystem (HDFS, Hive, Presto).
  • Expert-level Python and SQL skills for processing big datasets.
  • A strong grasp of computer science fundamentals (data structures, algorithms, databases, etc.).
  • Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery, Databricks).
  • Expertise in data modeling techniques and standard data warehousing methodologies and practices.
  • In-depth experience with Cloud technologies like AWS (S3, ECS, EMR, EC2, Lambda, etc.).
  • Solid experience with data integration and orchestration toolsets (e.g., Airflow).
  • Experience using source control systems and CI/CD pipelines.

Nice-to-haves

  • Proficient with Scrum and Agile methodologies.
  • Experience directly managing and mentoring a team.