Meta - Menlo Park, CA


Full-time - Intern
Menlo Park, CA
Web Search Portals, Libraries, Archives, and Other Information Services

About the position

The Research Scientist Intern position in Systems ML and HPC at Meta focuses on the co-design of software and hardware technologies for AI at datacenter scale. The intern will work on optimizing machine learning workloads, enhancing performance, and collaborating with various engineering teams to ensure that AI workloads are well-suited for the hardware infrastructure. This role involves employing advanced optimization strategies to maximize training throughput for large-scale AI models and working with external partners to influence product development.

Responsibilities

  • Develop tools and methodologies for large-scale workload analysis and extract representative benchmarks (in C++/Python/Hack) to drive early evaluation of upcoming platforms.
  • Analyze evolving Meta workload trends and business needs to derive requirements for future offerings.
  • Utilize extensive understanding of CPUs (x86/ARM), Flash/HDD storage systems, networking, and GPUs to identify bottlenecks and enhance product/service efficiency.
  • Collaborate closely with software developers to re-architect services, improve codebase through algorithm redesign, reduce resource consumption, and identify hardware/software co-design opportunities.
  • Identify industry trends and analyze emerging technologies and disruptive paradigms.
  • Conduct prototyping exercises to quantify the value proposition for Meta and develop adoption plans.
  • Influence vendor hardware roadmap and broader ecosystem to align with Meta's roadmap requirements.
  • Work with Software Services, Product Engineering, and Infrastructure Engineering teams to find the optimal way to deliver the hardware roadmap into production and drive adoption.

Requirements

  • Currently pursuing a PhD in Computer Science or a related STEM field.
  • Experience with hardware architecture, compute technologies and/or storage systems.
  • Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.
  • Intent to return to the degree program after completion of the internship/co-op.

Nice-to-haves

  • Track record of achieving results as demonstrated by grants, fellowships, and patents, as well as first-authored publications at leading workshops or conferences such as MICRO, ISCA, HPCA, ASPLOS, ATC, SOSP, OSDI, MLSys, or similar.
  • Architectural understanding of CPUs, GPUs, accelerators, networking, and Flash/HDD storage systems.
  • Experience with distributed AI training and inference with a focus on performance, programmability, and efficiency.
  • Some experience with large-scale infrastructure, distributed systems, and full-stack analysis of server applications.
  • Experience or knowledge in developing and debugging with C/C++, Python, and/or PyTorch.
  • Experience driving original scholarship in collaboration with a team.
  • Interpersonal skills, including cross-group and cross-cultural collaboration.
  • Experience in theoretical and empirical research and in answering questions with research.
  • Experience communicating research to peer and public audiences.

Benefits

  • Compensation of $7,800/month to $11,293/month, based on skills and experience.
  • Comprehensive benefits package including health insurance, paid holidays, and professional development opportunities.