Epsoft Technologies - Seattle, WA

posted 11 days ago

Full-time - Senior
Professional, Scientific, and Technical Services

About the position

The Principal Architect (Big Data) role is designed for an experienced software engineer with a strong background in product development and big data engineering. This position focuses on creating high-performance data platforms and requires effective communication and collaboration skills. The successful candidate will lead architecture discussions, mentor team members, and ensure the delivery of scalable and maintainable software solutions in a hybrid work environment.

Responsibilities

  • Work collaboratively with a distributed team of engineers building software across multiple products.
  • Work cross-team to build consensus on the approach for delivering projects.
  • Collaborate with business partners to understand and refine requirements.
  • Eliminate ambiguity in projects and communicate direction to engineers to help team members work in parallel.
  • Ramp up quickly on existing software to deliver incremental, integrated solutions in a complex environment.
  • Build high-performance, stable, scalable systems to be deployed in an enterprise setting.
  • Lead high-level architecture discussions and planning sessions.
  • Participate in the code review process by assessing pull requests.
  • Support systems and services during production incidents as part of the on-call rotation.
  • Author and recommend technical proposals and root cause analyses.
  • Provide mentoring and advice for other specialists.
  • Establish engineering practices and standards within the team to drive quality and excellence.

Requirements

  • Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or a comparable field of study, and/or equivalent work experience.
  • 15+ years of related big data engineering experience modeling and developing large data pipelines and platforms.
  • Hands-on experience querying and processing data with distributed systems such as Spark and the Hadoop ecosystem (HDFS, Hive, Presto, PySpark).
  • Expert-level Python and SQL skills for processing big datasets.
  • A strong grasp of computer science fundamentals (data structures, algorithms, databases, etc.).
  • Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery, Databricks).
  • Expertise in data modeling techniques and data warehousing standard methodologies and practices.
  • In-depth experience with cloud technologies such as AWS (S3, ECS, EMR, EC2, Lambda, etc.).
  • Solid experience with data integration toolsets (e.g., Airflow).
  • Experience using source control systems and CI/CD pipelines.

Nice-to-haves

  • Proficiency with Scrum and Agile methodologies.
  • Experience directly managing and mentoring a team.