Amazon - Seattle, WA

Full-time - Mid Level

About the position

At Amazon Web Services (AWS), we are inventing the future of cloud computing with a team of builders who try new things and dream big. Our web services provide a platform for IT infrastructure in the cloud that is used by hundreds of thousands of developers and businesses worldwide. Ten years ago, we couldn't have imagined how far we would come. Now we're inventing for the next ten years and beyond. But it is still Day 1 for us, and we are looking for curious people to be part of our diverse team of builders.

As a member of the AWS SMGS DP&I team, you will shape the direction of the AWS business as a whole, building greenfield solutions from the ground up with native AWS services and frameworks. The right candidate has a robust system-delivery background, well-rounded technical knowledge, and demonstrated experience working on large-scale projects. The candidate must be customer-obsessed, have strong analytical and communication skills, enjoy working with native AWS services, and thrive on solving challenging business problems, and should be comfortable working in an agile environment and collaborating with multiple teams.

You will be responsible for designing, building, and maintaining scalable, secure, and efficient data pipelines and infrastructure to support the organization's data and analytics needs. This highly technical role requires expertise in data modeling, ETL processes and orchestration, and data storage and security. You will work closely with other engineers, product managers, and business stakeholders to understand requirements and develop innovative solutions.

Responsibilities

  • Design and implement robust, fault-tolerant, and high-performing data pipelines using technologies such as Apache Spark, Kafka, Airflow, and Databricks (a minimal Airflow sketch follows this list)
  • Build and optimize data storage systems to enable efficient data processing and analysis
  • Create advanced data models, leveraging techniques like slowly changing dimensions, to support complex business requirements
  • Automate data ingestion, transformation, and load processes to ensure reliable and timely data delivery
  • Monitor data pipeline performance, identify bottlenecks, and implement optimization strategies
  • Develop and maintain metadata management, data lineage, and data governance frameworks
  • Collaborate with cross-functional teams to understand business needs and translate them into technical solutions
  • Mentor and train junior data engineers to develop their skills and expertise
  • Stay up-to-date with the latest data engineering trends, tools, and best practices, and implement them when appropriate
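
To make the orchestration work described above concrete, here is a minimal Apache Airflow sketch of a daily extract-and-load pipeline. It assumes Airflow 2.x; the DAG ID, task names, and extract/load callables are hypothetical placeholders, not anything specified in this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_orders(**context):
        # Hypothetical extract step: pull a day's worth of records from a source system.
        ...


    def load_to_warehouse(**context):
        # Hypothetical load step: write the transformed records to the warehouse.
        ...


    # Minimal daily pipeline; the names and schedule are illustrative assumptions.
    with DAG(
        dag_id="orders_daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

        extract >> load  # run the extract task before the load task

In practice the operators, retries, and alerting would depend on the specific pipeline; this only illustrates the DAG structure and task ordering.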

Requirements

  • Bachelor's Degree in Computer Science, Data Science, Engineering or related technical field
  • 3+ years of work experience with ETL, Data Modeling, and Data Architecture
  • Expert-level skills in writing and optimizing SQL
  • Proficiency in a scripting language such as Python, JavaScript, or Perl
  • Experience operating very large data warehouses or data lakes

Nice-to-haves

  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions (a small boto3 sketch follows this list)
  • Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
  • Experience with Big Data technologies such as Hive/Spark
  • Experience delivering end-to-end projects independently
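
To make the AWS integration above concrete, here is a small boto3 sketch that lands a raw file in S3 and starts an AWS Glue job run. The bucket, key, local filename, and job name are hypothetical, and running it assumes credentials or an IAM role with s3:PutObject and glue:StartJobRun permissions.

    import boto3

    # Hypothetical names; real buckets, prefixes, and job names would come from configuration.
    BUCKET = "example-data-lake"
    KEY = "raw/orders/2024-01-01/orders.csv"
    GLUE_JOB = "orders_to_parquet"

    s3 = boto3.client("s3")
    glue = boto3.client("glue")

    # Land a raw file in the data lake.
    s3.upload_file("orders.csv", BUCKET, KEY)

    # Start a Glue job to transform the raw file; Glue passes job parameters
    # as "--key"-style arguments.
    run = glue.start_job_run(
        JobName=GLUE_JOB,
        Arguments={"--input_path": f"s3://{BUCKET}/{KEY}"},
    )
    print(run["JobRunId"])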

Benefits

  • Flexible work hours and arrangements
  • Mentorship and career growth opportunities
  • Diversity and inclusion programs
  • Employee-led affinity groups
  • Ongoing events and learning experiences