Ampcus - Bedminster, NJ

posted 2 months ago

Full-time - Mid Level
Remote - Bedminster, NJ
Professional, Scientific, and Technical Services

About the position

The Sr Data Engineer position is a critical role within our engineering team, focusing on the development and implementation of data solutions that leverage cloud technologies and modern data processing frameworks. This position is based in multiple locations including McLean, VA, Richmond, VA, Plano, TX, Chicago, IL, and New York, NY, and is structured as a 6-month assignment with the potential for extension based on performance and project needs. The ideal candidate will possess a strong background in data engineering, with a focus on cloud development and data movement technologies.

As a Sr Data Engineer, you will be responsible for designing and building scalable data pipelines that facilitate the movement and processing of large datasets. You will work closely with cross-functional teams to understand data requirements and translate them into technical specifications. Your expertise in AWS cloud services will be essential as you develop solutions that are both efficient and cost-effective. Additionally, you will utilize tools such as Kafka for stream processing, Spark for batch processing, and various programming languages including Java, Python, and Scala to implement robust data solutions.

The role requires a collaborative mindset, as you will be part of a team that values knowledge sharing and continuous improvement. You will also have the opportunity to work with cutting-edge technologies such as Databricks and Snowflake, enhancing your skills and contributing to the overall success of the organization. This position is ideal for engineers who are passionate about data and eager to tackle complex challenges in a dynamic environment.

Responsibilities

  • Design and implement scalable data pipelines for data movement and processing.
  • Collaborate with cross-functional teams to gather and analyze data requirements.
  • Utilize AWS cloud services to develop efficient data solutions.
  • Implement stream processing using Kafka and batch processing with Spark.
  • Develop applications using Java, Python, Scala, and other relevant programming languages.
  • Work with Databricks and Snowflake to manage and analyze large datasets.
  • Participate in code reviews and contribute to team knowledge sharing.

Requirements

  • 5-10 years of experience in data engineering or a related field.
  • Proficiency in AWS cloud development.
  • Experience with Kafka for data movement and stream processing.
  • Strong knowledge of Spark for large batch file processing.
  • Proficient in Java, particularly with Spring framework.
  • Experience with Python frameworks such as FastAPI, Flask, and Django.
  • Familiarity with Scala and Akka for concurrent programming.
  • Knowledge of Go and Gin for web applications.
  • Experience with JavaScript frameworks like Angular and React.

Nice-to-haves

  • Experience with Databricks for data engineering tasks.
  • Familiarity with Snowflake for data warehousing solutions.
  • Experience working with the client's technology stack.