Nvidia - Santa Clara, CA

Full-time - Mid-level
Santa Clara, CA
5,001-10,000 employees
Computer and Electronic Product Manufacturing

About the position

NVIDIA is seeking a Python Software Engineer to further our efforts to GPU-accelerate data engineering for Large Language Model (LLM) tools and libraries. This role is pivotal in accelerating pre-processing pipelines for high-quality multimodal dataset curation. The day-to-day focus is developing efficient, scalable systems for de-duplicating, filtering, and classifying training corpora for foundation LLMs, as well as ingesting and preparing datasets for use in Retrieval-Augmented Generation (RAG) pipelines. Fundamental to these efforts is iterative testing and improvement of system cost, speed, and accuracy through micro-optimization, prompt engineering, fine-tuning, and the application of new research. The ideal candidate is happiest releasing early and often, actively seeks user feedback, and listens for the intent behind feature requests. You are comfortable objectively evaluating the latest AI models and frameworks with an eye toward acceleration potential. Would you like to run your training and test experiments on our supercomputers, across thousands of GPUs? Come work with us!

Responsibilities

  • Develop and optimize Python-based data processing frameworks, ensuring efficient handling of large datasets in GPU-accelerated environments, which is vital for LLM training.
  • Contribute to the design and implementation of RAPIDS and other GPU-accelerated libraries, focusing on seamless integration and performance enhancement in the context of LLM training data preparation and RAG pipelines.
  • Lead development and iterative optimization of components for RAG pipelines, ensuring they leverage GPU acceleration and the best-performing models for improved total cost of ownership (TCO).
  • Collaborate with teams of LLM and ML researchers in the development of full-stack, GPU-accelerated data preparation pipelines for multimodal models.
  • Implement benchmarking, profiling, and optimization of innovative algorithms in Python across various system architectures, specifically targeting LLM applications.
  • Work closely with diverse teams to understand requirements, build and evaluate POCs, and develop roadmaps for production-level tools and library features within the growing LLM ecosystem.

Requirements

  • Advanced degree in Computer Science, Computer Engineering, or a related field (or equivalent experience).
  • 5+ years of Python library development experience, including CI systems (e.g., GitHub Actions), integration testing, benchmarking, and profiling.
  • Proficiency with LLMs and RAG pipelines, including prompt engineering and frameworks such as LangChain and LlamaIndex.
  • Deep understanding of the PyData and ML/DL ecosystems, including RAPIDS, pandas, NumPy, scikit-learn, XGBoost, Numba, and PyTorch.
  • Familiarity with distributed programming frameworks like Dask, Apache Spark, or Ray.
  • Visible contributions to open-source projects on GitHub.

Nice-to-haves

  • Active engagement (published papers, conference talks, blogs) in the data science community.
  • Experience with production-level data pipelines, especially SQL-based.
  • Experience with software packaging technologies such as pip, conda, and Docker images.
  • Familiarity with Docker Compose, Kubernetes, and cloud deployment frameworks.
  • Knowledge of parallel programming approaches, especially in CUDA C++.

Benefits

  • Competitive salary package
  • Equity options
  • Comprehensive health benefits
  • Diversity and inclusion programs
  • Flexible work environment