Scale AI - San Francisco, CA
posted 8 days ago
As the leading data and evaluation partner for frontier AI companies, Scale plays an integral role in understanding the capabilities of, and safeguarding, large language models (LLMs). The Safety, Evaluations and Analysis Lab (SEAL) is Scale's new frontier research effort dedicated to building robust evaluation products and tackling challenging research problems in evaluation and red teaming.

At SEAL, we are passionate about ensuring the transparency, trustworthiness, and reliability of language models while simultaneously advancing model capabilities and pioneering novel skills. We aim to set the north star for the AI community, where safety and innovation illuminate the path forward.

We are seeking talented research interns to join us in shaping the safety and transparency landscape for the entire AI industry. We support collaborations across the industry and the publication of our research findings. This year, we are seeking top-tier candidates for multiple projects focusing on: frontier agent data, evaluation, and safety; scalable oversight and alignment of LLMs; the science of evaluation for LLMs; and exploring the frontier and potentially dangerous capabilities of LLMs with effective guardrails.