Epsilon - Chicago, IL

posted about 2 months ago

Full-time - Mid Level
Chicago, IL
5,001-10,000 employees
Professional, Scientific, and Technical Services

About the position

As the Sr. AWS Data Engineer at Epsilon, you will be an integral part of an interdisciplinary team dedicated to creating innovative, data-driven cloud-based solutions. Your primary responsibility will be to ensure the success of all technical and functional aspects of various projects. This role involves processing billions of events daily using Spark and a variety of AWS services, alongside comparable Microsoft Azure services. You will have the opportunity to contribute to the development of a new product offering, which will require a comprehensive skill set across the development stack, from user interface design to backend data storage and processing. This position is ideal for someone looking to work on exciting, fully cloud-based solutions and to shape a new product area within the company.

In this role, you will spend approximately 80% of your time coding, refactoring, and enhancing our existing solutions. You will also be involved in the technical design and implementation of practical, maintainable project solutions. Your participation in design reviews will be crucial, as you will recommend improvements based on your expertise. Additionally, you will provide support throughout all phases of the Software Development Life Cycle (SDLC) and will have the opportunity to train and mentor other engineers as needed. Collaboration with fellow engineers will be essential as you plan, prioritize, and execute tasks within set deadlines, ensuring the successful delivery of projects.

Responsibilities

  • Spend 80% of time on coding, refactoring, and improving solutions.
  • Contribute to the technical design and implementation of project solutions.
  • Participate in design reviews and recommend improvements.
  • Provide support in all phases of the Software Development Life Cycle (SDLC).
  • Train and mentor other engineers when required.
  • Collaborate with other engineers in planning, prioritizing, and executing tasks within deadlines.

Requirements

  • Bachelor's or Master's Degree in Computer Science or related field.
  • 6+ years of hands-on development experience with Python.
  • Experience with Big Data technologies, including Hadoop and Spark.
  • Strong understanding of software engineering methodologies such as functional programming and object-oriented design.
  • Experience with distributed data processing and management systems.
  • Experience working with large data sets or data-driven applications.
  • Experience deploying and configuring Databricks in AWS and Azure environments is a plus.
  • Proficiency in Linux/macOS/Windows development environments.
  • Proficiency with CI/CD systems, specifically GoCD and Jenkins.
  • Solid understanding of DevOps methodologies.
  • Solid experience in testing methodologies, test-driven design, and creating effective test cases.
  • Experience with Bash scripting and tools for provisioning cloud-based infrastructure, such as Ansible and Terraform.
  • Strong background with Linux systems.
  • Excellent team skills.

Benefits

  • Competitive pay
  • Comprehensive health coverage
  • Endless opportunities for career advancement