CarMax - Richmond, VA

posted 4 months ago

Full-time - Mid Level
Richmond, VA
10,001+ employees
Motor Vehicle and Parts Dealers

About the position

At CarMax, we are industry disruptors, and at the heart of our innovation is the development of new digital products. As a Senior Data & Analytics Cloud Platform Engineer, you will help enhance the customer experience through technology. Your role will encompass the entire lifecycle of product development, from inception to completion, and you will create tools and technologies that directly impact our business performance. Your contributions will ensure that customers can purchase vehicles in the way that suits them best, leveraging our spirit of experimentation to learn and adapt quickly.

The Data and Analytics Engineering - Platform team is pivotal in executing our organization's data strategy. In this position, you will be responsible for building and managing modern, adaptive, data-driven, and secure platforms and processes that enable Data & Analytics Engineering. A significant part of your role will involve capturing and reporting on Azure cost data and identifying opportunities for savings. You should possess a strong foundation in FinOps, Data/Software Engineering, and DevOps, along with hands-on experience with Azure Cloud services, and you will collaborate on cross-functional initiatives and projects to ensure that our data platforms are efficient and effective.

Your responsibilities will include designing, implementing, and maintaining data pipelines that integrate cost data from Azure, Databricks, and Snowflake, as well as creating dashboards, alerts, and visualizations for this cost data and working closely with other data teams to identify cost anomalies and savings opportunities. Additionally, you will design and implement intelligent, reliable, and scalable platforms for Data Science Machine Learning (DSML), Enterprise Data Lake/Warehouse (EDL/EDW), and Master Data Management (MDM) functions. You will also evaluate capacity requirements, automate scaling within the Azure environment, implement various Azure cloud capabilities, and maintain cloud-based Data & Analytics platforms.

Responsibilities

  • Design, implement, and maintain data pipelines integrating cost data from Azure, Databricks, and Snowflake
  • Design, implement, and maintain dashboards, alerts, and visualizations for Azure, Databricks, and Snowflake cost data
  • Collaborate with other data teams to identify cost anomalies and savings opportunities
  • Design and implement intelligent, predictable, reliable, scalable, cost-efficient, and easy-to-use platforms and frameworks for Data Science Machine Learning (DSML), Enterprise Data Lake/Warehouse (EDL/EDW), and Master Data Management (MDM) functions
  • Evaluate capacity requirements and automate necessary scaling within the Azure environment
  • Implement Azure cloud capabilities including but not limited to Data Factories, Batch Accounts, storage, Key Vaults, Log Analytics, and automated deployments
  • Implement, configure, monitor, and maintain cloud-based Data & Analytics platforms such as Databricks and Snowflake
  • Troubleshoot and resolve environment performance issues, connectivity issues, and security issues
  • Work in a self-directed manner, balancing the priorities of multiple teams, systems, and products
  • Work collaboratively in teams and develop meaningful relationships to achieve common goals
  • Streamline development and operations processes via continuous integration, continuous deployment, and automated testing, leveraging SRE principles
  • Drive efficient resolution of system outages as well as performance and functional shortcomings

Requirements

  • 3+ years of experience with FinOps and Azure cost management, including forecasting, alerting, and show-back and charge-back models
  • 3+ years of experience with reporting and visualization tools such as Tableau or Power BI
  • 5+ years of Cloud Platform Engineering experience in an enterprise-level environment
  • Advanced proficiency with Python, SQL, GitHub, Azure DevOps, Bicep templates, and PowerShell is required
  • 5+ years of DevOps experience, with a clear understanding of related tools, structure, and processes, working in an Agile/Scrum setting
  • 3+ years leading the end-to-end design and development of scalable services to be consumed by the enterprise, including monitoring and production support
  • Experience in evaluating and purchasing Azure cost savings plans and reserved instances
  • Experience in DevOps and SRE practices, testing frameworks, Infrastructure as Code, and building CI/CD pipelines
  • Excellent communication skills with the ability to adapt to various audience types
  • Experience on a fast-paced, highly collaborative agile team within a Product-oriented organization
  • Effective problem-solving, analytical thinking, and a cloud-native, security-first mindset
  • Strong documentation, communication, and presentation skills
  • Ability to positively influence team norms, culture, and technical vision

Nice-to-haves

  • Bachelor's or Master's degree in Computer Science or an equivalent field
  • Experience with cloud services such as Snowflake, Databricks, Data Factory, Event Hub, Functions, Batch, Key Vault, and Log Analytics
  • Azure Administrator, Azure Solution Architect, Databricks Platform Administrator, or Snowflake SnowPro certification is a strong plus
  • Familiarity with microservices software architecture
  • Machine Learning experience

Benefits

  • Competitive salary
  • Hybrid work arrangement
  • Comprehensive health insurance
  • 401(k) retirement plan
  • Paid time off and holidays
  • Employee discounts
  • Professional development opportunities
  • Diversity and inclusion programs