SpyCloud - Austin, TX

posted 1 day ago

Full-time - Senior
Austin, TX

About the position

As a Principal Data Engineer at SpyCloud, you will lead our efforts to modernize data pipelines, optimize costs, and design robust streaming architectures using AWS technologies. You will play a critical role in steering our architectural direction and enhancing our data ingestion and processing capabilities.

Responsibilities

  • Redesign existing data pipelines to improve efficiency, scalability, and maintainability.
  • Regularly analyze and optimize the costs associated with the daily ingestion and processing of large data sets.
  • Design and implement streaming data solutions using AWS services (e.g., MSK, Lambda, S3, DynamoDB).
  • Lead the Architecture Review Board, making strategic decisions about architectural practices and standards.
  • Develop and maintain comprehensive documentation for reference architectures and architectural decision records.
  • Mentor junior engineers, providing guidance and support in their professional development.
  • Evaluate and adopt new technologies and tools to enhance the data architecture and support new data initiatives.
  • Lead the creation and execution of a strategic data plan that aligns with the company's goals, focusing on data collection, storage, and utilization.
  • Work closely with Product Management to align architectural strategies with the product roadmap.

Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 15+ years of experience in data architecture, especially in a cloud environment.
  • Proven experience with AWS cloud services and streaming architectures.
  • Strong experience in designing and optimizing data pipelines.
  • Excellent leadership skills and experience leading technical teams and projects.
  • Demonstrable experience in cost optimization and financial management of IT resources.
  • Strong communication skills and ability to articulate complex technical ideas to non-technical stakeholders.
  • Proficiency in programming languages such as Python or Java, with hands-on experience building Kafka-based streaming data pipelines.
  • Strong understanding of machine learning principles and model lifecycle management.
  • Working knowledge of building and managing MLOps pipelines.