Apple - Cupertino, CA

posted 4 months ago

Full-time - Mid Level
Cupertino, CA
Computer and Electronic Product Manufacturing

About the position

The AIML - Machine Learning SW/HW Co-Design Engineer position is part of the Machine Learning and Platforms Technology (MLPT) team within Apple's AIML organization. This team builds the inference stack that runs all machine learning (ML) networks on Apple Silicon: it writes the converters and compilers that translate source network definitions into formats the hardware execution units can interpret, develops tools for network optimizations, and creates the runtime that schedules and manages execution on hardware. The team also provides guidance on hardware/software co-design for current and future workloads alongside hardware accelerators.

MLPT collaborates cross-functionally with partner teams across Apple, including CPU, GPU, Neural Engine, speech understanding, Camera, Photos, and Vision Pro, as well as with external app developers. A notable external-facing product from this team is Core ML.

The role requires a deep dive into the latest research on efficient on-device inference, prototyping new approaches that improve inference on critical models without compromising accuracy. The engineer will conduct thorough analyses of both the software stack and the hardware, seeking innovative methods for improvement, and will evaluate ML inference performance across a diverse range of devices, from small wearables to the largest Apple Silicon Macs.
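
As a rough, hypothetical illustration of the conversion work described above, the sketch below uses coremltools (Apple's public converter for Core ML) to translate a small PyTorch network into a Core ML model. The TinyNet module, input shape, and file name are invented for the example and are not taken from the posting.

    import torch
    import coremltools as ct

    # A small, hypothetical PyTorch module standing in for a "source network definition".
    class TinyNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
            self.relu = torch.nn.ReLU()

        def forward(self, x):
            return self.relu(self.conv(x))

    model = TinyNet().eval()
    example_input = torch.rand(1, 3, 224, 224)

    # Trace the model so the converter sees a static graph rather than Python code.
    traced = torch.jit.trace(model, example_input)

    # Convert the traced graph into a Core ML model; compute_units lets the runtime
    # dispatch work to the CPU, GPU, or Neural Engine as available.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="input", shape=example_input.shape)],
        compute_units=ct.ComputeUnit.ALL,
        convert_to="mlprogram",
    )
    mlmodel.save("TinyNet.mlpackage")

The production converters and compilers this role covers go much deeper (graph rewrites, operator fusion, precision and memory planning), but the sketch shows the basic translation step from a source network to a hardware-consumable format.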

Responsibilities

  • Write converters and compilers for translating source network definitions to hardware-executable formats.
  • Develop tools for network optimization and build the runtime that schedules and manages execution on hardware.
  • Provide guidance for hardware/software co-design for current and future workloads alongside hardware accelerators.
  • Collaborate with cross-functional teams within Apple and external app developers.
  • Conduct deep-dive analyses of the software stack and hardware to identify innovative improvement methods.
  • Prototype new approaches to enhance inference on critical models without sacrificing accuracy.
  • Evaluate ML inference performance across various devices, from wearables to Apple Silicon Macs (a rough measurement sketch follows this list).
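
As a minimal sketch of the inference-performance evaluation mentioned in the last item, the snippet below times a PyTorch forward pass on the host machine. The function name, warmup and iteration counts, and the TinyNet usage are illustrative assumptions; a real evaluation on Apple devices would profile the deployed Core ML model on target hardware rather than use this host-side timing loop.

    import time
    import torch

    def measure_latency_ms(model, example_input, warmup=10, iters=100):
        # Rough wall-clock latency per forward pass; a first-order proxy for
        # the kind of inference performance evaluation described above.
        model.eval()
        with torch.no_grad():
            for _ in range(warmup):              # warm up caches and lazy initialization
                model(example_input)
            start = time.perf_counter()
            for _ in range(iters):
                model(example_input)
            elapsed = time.perf_counter() - start
        return elapsed / iters * 1e3             # milliseconds per inference

    # Hypothetical usage with any torch.nn.Module, e.g. the TinyNet sketched earlier:
    # print(f"{measure_latency_ms(TinyNet().eval(), torch.rand(1, 3, 224, 224)):.2f} ms")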

Requirements

  • Understanding of the basics of machine learning and familiarity with adapting and training neural networks using frameworks like PyTorch, TensorFlow, or JAX.
  • Knowledge of computer architecture (CPU and GPU) and performance modeling and analysis of computer systems.
  • Experience with ML systems, particularly for on-device inference scenarios.
  • Ability to perform comprehensive performance, power, and accuracy analyses, starting from the first principles of deep learning techniques.
  • Strong programming and software design skills, with proficiency in C/C++ and/or Python.
  • Excellent communication skills and a collaborative, product-focused mindset.

Nice-to-haves

  • Experience with high-performance extensible software architecture and APIs.