ByteDance · Posted 3 months ago
Intern
San Jose, CA
5,001-10,000 employees
Publishing Industries

Founded in 2023, the ByteDance Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead in cutting-edge research and drive technological and societal advancements. With a strong commitment to AI, our research areas span deep learning, reinforcement learning, language, vision, audio, AI infrastructure, and AI safety. Our team has labs and research positions across China, Singapore, and the US. Leveraging substantial data and computing resources, and through continued investment in these domains, we have developed a proprietary general-purpose model with multimodal capabilities. In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and are available to external enterprise clients via Volcano Engine. Today, the Doubao app stands as the most widely used AIGC application in China.

This position is responsible for researching and building the company's LLMs. The role involves exploring new applications and solutions for related technologies in areas such as search, recommendation, advertising, content creation, and customer service, with the goal of meeting users' growing demand for intelligent interactions and significantly enhancing how they live and communicate in the future.

We are looking for talented individuals to join us for a Student Researcher opportunity in 2025. Student Researcher opportunities at ByteDance aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance. The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work.

  • Design advanced reinforcement learning algorithms for large language models, integrating techniques such as heuristic-guided search and multi-agent reinforcement learning.
  • Formulate novel reward modeling methodologies that significantly improve robustness, generalization, and overall accuracy.
  • Develop scalable oversight mechanisms that enable efficient monitoring and control of LLMs as they grow in size and complexity, ensuring consistent alignment with predefined objectives.
  • Focus on enhancing the interpretability of language models, ensuring that their decision-making processes and outputs are transparent and understandable to users and stakeholders.