Databricks

posted 3 days ago

Full-time - Mid Level
501-1,000 employees
Professional, Scientific, and Technical Services

About the position

As a Specialist Solutions Architect (SSA) - Data Engineering at Databricks, you will play a pivotal role in guiding customers to build big data solutions on the Databricks platform. This customer-facing position requires hands-on experience with Apache Spark and other data technologies, focusing on the design and implementation of essential workloads. You will work closely with Solution Architects and provide technical leadership to ensure successful project outcomes while continuously enhancing your technical skills through mentorship and training.

Responsibilities

  • Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment.
  • Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimization.
  • Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows.
  • Assist Solution Architects with advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing, and custom architectures.
  • Provide tutorials and training to improve community adoption, including hackathons and conference presentations.
  • Contribute to the Databricks Community.

Requirements

  • 5+ years of experience in a technical role, with expertise in Software Engineering, Data Engineering, or Data Applications Engineering.
  • Extensive experience building big data pipelines and maintaining production data systems.
  • Deep specialty expertise in scaling big data workloads, migrating Hadoop workloads to the public cloud, and working with cloud data lake technologies.
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent experience.
  • Production programming experience in SQL and Python, Scala, or Java.
  • 2 years of professional experience with big data technologies (e.g., Spark, Hadoop, Kafka) and architectures.
  • 2 years of customer-facing experience in a pre-sales or post-sales role.

Nice-to-haves

  • Experience with large-scale data ingestion pipelines and data migrations, including CDC and streaming ingestion pipelines.
  • Expertise in performance tuning, troubleshooting, and debugging Spark or other big data solutions.

Benefits

  • Comprehensive benefits and perks tailored to meet employee needs.