GEICO - Chevy Chase, MD

posted about 2 months ago

Full-time - Senior
Chevy Chase, MD
5,001-10,000 employees
Insurance Carriers and Related Activities

About the position

The Senior Engineer - Data ETL (SQL & Spark) role at GEICO centers on building high-performance, low-maintenance platforms and applications that support the company's transformation into a tech-driven organization. The position emphasizes engineering excellence, psychological safety, and continuous improvement while delivering high-quality technology products and services in a fast-paced environment. The ideal candidate brings deep technical knowledge across a range of systems and will design and implement data ingestion platforms, collaborate across teams, and mentor fellow engineers.

Responsibilities

  • Design and implement a data ingestion platform.
  • Scope, design, and build scalable, resilient distributed systems.
  • Help shape product definition and apply technical expertise to drive toward the right solution.
  • Engage in cross-functional collaboration throughout the entire software lifecycle.
  • Lead design sessions and code reviews with peers to elevate engineering quality.
  • Define, create, and support reusable application components/patterns.
  • Build processes for optimal extraction, transformation, and loading of data.
  • Work with teams to design, develop, test, implement, and support technical solutions.
  • Perform unit tests and conduct code reviews to ensure quality and performance (a brief testing sketch follows this list).
  • Share knowledge of tech trends and mentor other engineering community members.
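
As a hedged illustration of the unit-testing responsibility above, the sketch below shows a pure ETL transform function with two pytest cases. The function, its validation rule, and all names are hypothetical examples, not GEICO code.

    # test_transforms.py: minimal unit-test sketch for an ETL transform.
    # The transform and its rules are hypothetical, for illustration only.
    import pytest

    def normalize_state(code: str) -> str:
        """Uppercase and strip a two-letter state code; reject anything else."""
        cleaned = code.strip().upper()
        if len(cleaned) != 2 or not cleaned.isalpha():
            raise ValueError(f"invalid state code: {code!r}")
        return cleaned

    def test_normalize_state_trims_and_uppercases():
        assert normalize_state(" md ") == "MD"

    def test_normalize_state_rejects_garbage():
        with pytest.raises(ValueError):
            normalize_state("Maryland")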

Requirements

  • 4+ years of professional software development experience in Java, Spark, Scala, or Python.
  • 3+ years of experience with architecture and design.
  • 3+ years of experience with AWS, GCP, Azure, or another cloud service.
  • 2+ years of experience with open-source frameworks.
  • Strong working knowledge of SQL, including the ability to write, debug, and optimize SQL queries and ETL jobs (see the batch ETL sketch after this list).
  • Experience developing and enhancing data processing components including Data Ingest, Data Transformation, and Data Quality.
  • Advanced programming experience and big data experience.
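
As an illustration of the SQL and ETL requirements above, here is a minimal PySpark batch sketch: extract raw CSV, transform it with plain SQL, and load partitioned Parquet. The paths, table, and column names are hypothetical placeholders, not details from the posting.

    from pyspark.sql import SparkSession

    # Minimal batch ETL sketch; paths and column names are hypothetical.
    spark = SparkSession.builder.appName("policy-etl").getOrCreate()

    # Extract: read raw policy records (header row, schema kept simple).
    raw = spark.read.option("header", "true").csv("s3://raw-bucket/policies/")
    raw.createOrReplaceTempView("raw_policies")

    # Transform: deduplicate and aggregate with plain SQL.
    clean = spark.sql("""
        SELECT policy_id,
               state,
               MAX(premium) AS premium,
               MAX(updated_at) AS updated_at
        FROM raw_policies
        GROUP BY policy_id, state
    """)

    # Load: write partitioned Parquet for downstream consumers.
    clean.write.mode("overwrite").partitionBy("state").parquet("s3://curated-bucket/policies/")

    spark.stop()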

Nice-to-haves

  • Experience with Databricks, dbt, Python, Airflow, Azure Data Factory, and/or Kafka.
  • Experience with data formats such as Parquet, Avro, ORC, XML, JSON.
  • Experience with streaming applications (Spark Streaming, Flink, Kafka); a minimal streaming sketch follows this list.
  • Experience with containerization and orchestration tools such as Docker and Kubernetes.
  • Experience with CI/CD and test automation tools such as Azure DevOps (ADO), Jenkins, and Gradle.
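
For the streaming and data-format items above, here is a minimal sketch, assuming Spark Structured Streaming with a Kafka source and a Parquet sink; the broker address, topic, and paths are hypothetical. Running it also requires the spark-sql-kafka connector package on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("claims-stream").getOrCreate()

    # Source: read a Kafka topic as a streaming DataFrame.
    # Broker address and topic name are hypothetical placeholders.
    events = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "claims-events")
        .load())

    # Kafka keys/values arrive as bytes; cast to strings for downstream parsing.
    decoded = events.select(col("key").cast("string"), col("value").cast("string"))

    # Sink: append to Parquet with a checkpoint for fault-tolerant output.
    query = (decoded.writeStream
        .format("parquet")
        .option("path", "s3://curated-bucket/claims/")
        .option("checkpointLocation", "s3://curated-bucket/_checkpoints/claims/")
        .start())

    query.awaitTermination()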

Benefits

  • Premier Medical, Dental and Vision Insurance with no waiting period.
  • Paid Vacation, Sick and Parental Leave.
  • 401(k) Plan.
  • Tuition Reimbursement.
  • Paid Training and Licensures.