Insight Global - Los Angeles, CA

posted about 2 months ago

Full-time - Senior
Los Angeles, CA
Administrative and Support Services

About the position

Insight Global is seeking a Senior Data Engineer to join a media-based organization, specifically within the Data Platform team. This role is pivotal in supporting the development of a new streaming service, where the engineer will work closely with various data types, including subscription, behavioral, and marketing data. The primary focus will be on creating robust data pipelines and ingestion solutions that are essential for the new service's success.

The ideal candidate will be responsible for designing data flow diagrams, drafting technical design specifications, and preparing testing documentation to ensure the integrity and efficiency of data processes. Collaboration is key in this role, as the Senior Data Engineer will engage with analytics and business teams to enhance data models, improve data accessibility, and promote data-driven decision-making across the organization.

The position requires a strong foundation in AWS and Snowflake environments, as well as a proactive approach to problem-solving, given that the team is relatively small. The candidate must be a self-starter, capable of working independently while also contributing to team objectives.

Responsibilities

  • Support the build of a new streaming service by creating data pipelines and ingestion solutions.
  • Design data flow diagrams and technical design specifications.
  • Prepare testing documentation to ensure data integrity and efficiency.
  • Collaborate with analytics and business teams to improve data models and increase data accessibility.
  • Foster data-driven decision-making across the organization.
  • Work within an AWS and Snowflake environment to manage data processes.

Requirements

  • 7+ years of experience as a Senior Data Engineer.
  • Experience leading a team.
  • 5+ years of experience supporting a Snowflake environment.
  • Experience working with Databricks.
  • Extensive experience building highly optimized data pipelines and data models for big data processing.
  • Experience with real-time data stream processing using Apache Kafka, Kinesis, or Flink.
  • Experience building highly optimized and efficient data engineering pipelines using Python, PySpark, and Snowpark.
  • Experience working with various AWS Services (S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, Glue Catalog).
  • Experience leveraging CI/CD infrastructure with GitHub Actions or Jenkins.