This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

Slack - Atlanta, GA

posted 2 days ago

Publishing Industries

About the position

As a Search Infrastructure Data Engineer, you will work across the Search Infra and ML Infra teams to support their data engineering needs. You will be responsible for designing, building, and maintaining the data infrastructure and pipelines that power our search and recommendation systems. You will work closely with data scientists, machine learning/AI engineers, and software developers to ensure that our search algorithms are efficient, scalable, and deliver high-quality results.

Responsibilities

  • Design and develop scalable and resilient information retrieval infrastructure to power search and other products.
  • Build and integrate scalable backend systems, platforms, and tools that power our data warehouse and help our partners implement, deploy, and analyze data assets.
  • Develop and maintain ETL processes to ensure data quality and consistency.
  • Collaborate with data scientists and machine learning engineers to deploy machine learning models for semantic retrieval in our own Kubernetes-based deployment system, working with tools like Chef and HashiCorp Terraform.
  • Optimize data storage and retrieval to support real-time search queries and recommendations.
  • Monitor and troubleshoot data pipelines in production.
  • Work with the Search and ML Infrastructure teams to maintain and improve various data pipelines.
  • Mentor other engineers and conduct thorough code reviews.
  • Improve engineering standards, tooling, and processes.

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 5+ years of relevant technical experience, including significant experience in data engineering, with a focus on search.
  • Experience with search technologies such as Elasticsearch, Solr, or Lucene.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Experience with big data technologies such as Airflow, EMR, Hadoop, Hive, Spark, and Kafka.
  • Solid understanding of SQL and NoSQL databases.
  • Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization (e.g., Docker, Kubernetes).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.

Nice-to-haves

  • Knowledge of natural language processing (NLP) techniques and tools.
  • Experience with A/B testing and experimentation frameworks.
  • Familiarity with data visualization tools and techniques.
  • Experience with vector-based retrieval systems like Vespa, Milvus, or Solr.
  • Experience with ML model serving frameworks/toolkits such as Kubeflow, MLflow, SageMaker, and AWS Bedrock.

Benefits

  • Competitive salary and benefits package.
  • Opportunity to work on cutting-edge search technologies.
  • Collaborative and inclusive work environment.
  • Professional development and growth opportunities.