Scale AI - San Francisco, CA

posted 2 days ago

Full-time - Mid Level
San Francisco, CA
Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services

About the position

As the leading data and evaluation partner for frontier AI companies, Scale plays an integral role in understanding the capabilities of large language models (LLMs) and safeguarding their deployment. The Safety, Evaluations and Alignment Lab (SEAL) is Scale's frontier research effort dedicated to tackling challenging research problems in the evaluation, red teaming, and alignment of advanced AI systems. We are actively seeking talented researchers to join us in shaping the landscape of safety and transparency for the entire AI industry. We support collaborations across industry and academia, as well as the publication of our research findings. As a Research Scientist working on Scalable Oversight, you will develop and evaluate methods for supervising and assessing advanced AI systems.

Responsibilities

  • Design experiments that expose failure modes of current supervision protocols for language models
  • Design experiments that simulate expertise and capability gaps between supervisors and models for scalable oversight studies
  • Develop new supervision protocols and gather human annotations using these protocols
  • Train language models using reinforcement learning, analyze their behavior, and compare results across models

Requirements

  • Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance
  • Practical experience conducting technical research collaboratively, with proficiency in frameworks such as PyTorch, JAX, or TensorFlow
  • A track record of published research in machine learning, particularly in generative AI
  • At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development
  • Strong written and verbal communication skills to operate in a cross-functional team

Nice-to-haves

  • Hands-on experience with open-source LLM fine-tuning or involvement in bespoke LLM fine-tuning projects using PyTorch/JAX
  • Experience crafting evaluations, or a background in data science roles related to LLM technologies
  • Experience working with a cloud technology stack (e.g., AWS or GCP) and developing machine learning models in a cloud environment

Benefits

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • Learning and development stipend
  • Generous PTO
  • Commuter stipend