This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

Businessolver - Denver, CO

posted 4 days ago

Full-time - Mid Level
Denver, CO
Publishing Industries

About the position

This role serves on the Innovation Works team. The Data Engineer (DE) will be responsible for architecting, developing, implementing, and operating stable, scalable, low-cost solutions that source data from production systems into the data lake (AWS) and data warehouse (Redshift) and into end-user-facing applications (AWS Quicksight). The ideal candidate will work with Infrastructure, Data Analysts, and Machine Learning Engineers in a fast-paced environment, understanding business requirements and implementing ETL, analytics, machine learning, and cloud solutions. You should excel in your understanding of distributed architectures and frameworks such as Hadoop, MapReduce, or Spark clusters. Your expertise will drive the optimization of data flow and collection to support data initiatives, analytics, and business intelligence solutions.
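As a rough illustration of the sourcing work described above, the sketch below lands a production extract in the S3 data lake and then loads it into Redshift through the Redshift Data API with boto3. All bucket, cluster, table, and IAM role names are illustrative placeholders, not real resources.

```python
# Minimal sketch: land a production extract in the S3 data lake, then
# COPY it into Redshift via the Redshift Data API. All resource names
# below are placeholder assumptions.
import boto3

s3 = boto3.client("s3")
redshift = boto3.client("redshift-data")

# 1) Land the raw extract in the data lake (S3).
s3.upload_file(
    "orders_2024-01-01.csv",
    "example-data-lake",
    "raw/orders/dt=2024-01-01/orders.csv",
)

# 2) Load it into the warehouse (Redshift) with a COPY statement.
resp = redshift.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="""
        COPY staging.orders
        FROM 's3://example-data-lake/raw/orders/dt=2024-01-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
        FORMAT AS CSV IGNOREHEADER 1;
    """,
)
print("statement id:", resp["Id"])  # poll describe_statement(Id=...) for status
```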

Responsibilities

  • Build fault-tolerant cloud solutions for data engineering
  • Aggregate, organize, and translate large amounts of data to meet business requirements
  • Develop and optimize data pipeline architecture as well as data flow
  • Design and build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources such as Oracle, Amazon Relational Database Service (RDS), SQL, and AWS 'big data' technologies
  • Implement data storage solutions in AWS, utilizing services such as Amazon S3, Redshift, RDS, and DynamoDB; ensure systems are scalable and optimized for performance
  • Partner with software engineers, BI team members, and data scientists to architect and build data-driven solutions, assist with data-related technical issues, and support their data infrastructure needs
  • Maintain and enhance existing data loads to the data warehouse and data lake
  • Maintain streaming data from production systems (a minimal sketch follows this list)
  • Peer-review code
  • Research opportunities for data acquisition and new uses for existing data; develop dataset processes for data modeling, mining, and production
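The streaming responsibility above can be pictured with a minimal sketch: a producer that pushes change records from a production system into Kinesis, with bounded retries for fault tolerance. The stream name and record shape are assumptions made for illustration.

```python
# Minimal sketch of fault-tolerant streaming ingestion: push change
# records from a production system into Kinesis with bounded retries.
# The stream name and record shape are illustrative assumptions.
import json
import time

import boto3

kinesis = boto3.client("kinesis")

def publish(record: dict, attempts: int = 3) -> None:
    """Send one record to the stream, retrying with backoff on throttling."""
    payload = json.dumps(record).encode("utf-8")
    for attempt in range(1, attempts + 1):
        try:
            kinesis.put_record(
                StreamName="example-prod-changes",
                Data=payload,
                PartitionKey=str(record["order_id"]),  # keeps one key on one shard
            )
            return
        except kinesis.exceptions.ProvisionedThroughputExceededException:
            if attempt == attempts:
                raise  # surface to the caller / dead-letter handling
            time.sleep(2 ** attempt)  # exponential backoff before retrying

publish({"order_id": 42, "status": "shipped"})
```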

Requirements

  • Degree in Computer Engineering/Science or related field, with 5+ years of professional experience in database/data lake development
  • Experience with multiple data sources such as Oracle, SQL, RDS, and data lakes, as well as NoSQL solutions
  • Experience building and optimizing 'big data' data pipelines, architectures, and data sets
  • 3+ years of experience with AWS big data cloud services such as Kinesis, Redshift, EMR, Athena, and Glue, deployed through CloudFormation
  • Proficient with ETL and Data Warehouse/Lake processes
  • Strong experience with Python or Unix shell scripting (preferably both); experience with boto3 is a bonus (see the sketch after this list)
  • Experience with Architecting Cloud Solutions
  • Experience leading multiple sprint projects and epics
  • Excellent verbal and written communication skills
  • Strong troubleshooting and problem-solving skills
  • Thrive in a fast-paced, innovative environment
  • Project management and organizational skills
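For a flavor of the Python/boto3 scripting called for above, here is a minimal sketch that starts an AWS Glue ETL job and polls it to completion. The job name is a placeholder assumption.

```python
# Minimal sketch of day-to-day boto3 scripting: start an AWS Glue ETL
# job and poll until it reaches a terminal state. Job name is a placeholder.
import time

import boto3

glue = boto3.client("glue")

run_id = glue.start_job_run(JobName="example-orders-etl")["JobRunId"]

while True:
    run = glue.get_job_run(JobName="example-orders-etl", RunId=run_id)
    state = run["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # Glue runs are long-lived; poll sparingly

print("final state:", state)
```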

Nice-to-haves

  • AWS cloud experience
  • Oracle, Postgres, EMR, Redshift, Linux experience
  • Ability to quickly understand business requirements and transform them into a data model
  • AWS CDK or Lake Formation experience is a plus (a minimal CDK sketch follows this list)
  • Experience with Agile Methodologies
  • Experience with complex/large data sets (Big Data)
  • Experience operating a Data Lake
  • Experience with Cloud Architecture/Engineering
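A minimal AWS CDK (v2, Python) sketch of the kind of infrastructure-as-code alluded to above: one versioned, encrypted S3 bucket for a raw data-lake zone. The stack and bucket names are illustrative assumptions.

```python
# Minimal AWS CDK (v2, Python) sketch of data-lake infrastructure as
# code: one versioned S3 bucket for the raw zone. Names are placeholders.
from aws_cdk import App, RemovalPolicy, Stack, aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "RawZone",
            versioned=True,                       # recover from bad loads
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,  # keep lake data on stack teardown
        )

app = App()
DataLakeStack(app, "ExampleDataLakeStack")
app.synth()
```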

Benefits

  • Annual bonus incentive plan