This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

MedeAnalytics • posted 18 days ago
Full-time • Senior
Richardson, TX
Publishing Industries

About the position

MedeAnalytics is seeking a highly motivated Senior Cloud DevOps Engineer with a passion for AI, data science, and cloud automation to join our Cloud Engineering team. This lead role will drive automation initiatives aligned with our R&D strategy, support cloud migrations, and manage the cloud infrastructure in a SaaS environment. You will collaborate with product development to design and maintain scalable, reliable, and secure solutions, ensuring best practices in DevOps and cloud computing. If you thrive in a fast-paced, innovative environment and are committed to improving healthcare outcomes, we encourage you to apply.

Responsibilities

  • Design, implement, and maintain automated infrastructure provisioning and management using tools like Terraform and AWS CloudFormation.
  • Collaborate with development teams to automate deployment and testing processes, including AI and data science models.
  • Manage and optimize Kubernetes clusters on AWS.
  • Develop and maintain Helm charts for packaging and deploying applications, including AI and data science models.
  • Build and maintain robust CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, Atlantis, or CircleCI, tailored for AI and data science workflows.
  • Integrate automated testing frameworks for both application code and AI models.
  • Implement code quality, security checks, and model validation within the pipelines.
  • Manage and optimize AWS cloud resources, including EC2 instances, S3 buckets, VPCs, and other services, with a focus on supporting AI and data science workloads.
  • Implement best practices for cloud security, cost optimization, and performance tuning.
  • Monitor and troubleshoot cloud infrastructure issues, particularly related to AI and data science applications.
  • Implement comprehensive monitoring solutions (e.g., Prometheus, Grafana, CloudWatch) to track system performance, AI model health, and data quality.
  • Configure alerts and notifications to ensure timely response to critical issues, including model drift or performance degradation.
  • Collaborate with data scientists to develop and deploy AI models into production.
  • Implement MLOps practices to manage the entire lifecycle of AI models, including versioning, experimentation, and reproducibility.
  • Use tools like Kubeflow, MLflow, or Airflow to automate ML workflows (an illustrative sketch follows this list).
  • Ensure data privacy and security compliance within AI and data science pipelines.
  • Work closely with development, data science, and AI teams to understand their requirements and provide technical guidance.
  • Collaborate with other DevOps team members to share knowledge and best practices, particularly related to AI and data science.
  • Identify and resolve complex technical challenges, including those specific to AI and data science applications.
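
The MLOps responsibilities above mention versioning, experiment tracking, and tools such as MLflow. As an illustration only (not part of the posting), the following is a minimal Python sketch of an MLflow tracking run; the experiment name, parameters, and metric values are hypothetical placeholders.

    import mlflow

    # Minimal, illustrative MLflow tracking run: record hyperparameters and an
    # evaluation metric under a named experiment so retraining stays reproducible.
    mlflow.set_experiment("readmission-risk-demo")  # hypothetical experiment name

    with mlflow.start_run():
        mlflow.log_param("max_depth", 6)        # example hyperparameter
        mlflow.log_param("n_estimators", 200)   # example hyperparameter
        mlflow.log_metric("auc", 0.87)          # example evaluation metric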

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience as a DevOps Engineer or a similar role, with a focus on AI and data science.
  • Certification in AWS (Amazon Web Services) is required, demonstrating a strong understanding of cloud architecture, services, and best practices.
  • Kubernetes certification (CKA or CKAD) is required, showcasing expertise in container orchestration, deployment, and management at scale.
  • Strong proficiency in AWS cloud services and tools.
  • Experience with Terraform and AWS CloudFormation for infrastructure automation.
  • In-depth knowledge of Kubernetes and containerization technologies (Docker).
  • Experience with Helm charts and CI/CD pipelines, tailored for AI and data science workflows.
  • Understanding of scripting languages (e.g., Bash, Python).
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and collaboration abilities.

Nice-to-haves

  • Certification in AWS (e.g., AWS Certified DevOps Engineer)
  • Experience with serverless computing (e.g., AWS Lambda, EKS)
  • Knowledge of security best practices and compliance frameworks
  • Experience with microservices architecture
  • Familiarity with data engineering concepts and tools
  • Experience with Jenkins, ArgoCD, and Atlantis for GitOps-based deployments
  • Understanding of healthcare data and regulatory compliance (e.g., HIPAA)
  • Experience with AI and data science frameworks (e.g., TensorFlow, PyTorch)
  • Knowledge of MLOps principles and tools

Job Keywords

Hard Skills
  • AWS Lambda
  • Docker
  • Jenkins
  • Kubernetes
  • Terraform