JSR Tech Consulting - Newark, NJ

posted about 2 months ago

Full-time - Mid Level
Newark, NJ
51-100 employees
Professional, Scientific, and Technical Services

About the position

JSR is seeking an AWS Data Engineer for an immediate opening with a Fortune 100 financial services client based in Newark, NJ. The ideal candidate will have a strong background in implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises. The position requires a Bachelor's degree in Computer Science, Software Engineering, MIS, or a related field, along with relevant experience.

The AWS Data Engineer will be responsible for designing, building, and maintaining efficient, reusable, and reliable architecture and code, ensuring the best possible performance and quality of high-scale data engineering projects. The role involves collaborating with cross-functional teams to deliver projects throughout the software development cycle, participating in architecture and system design discussions, and independently performing hands-on development and unit testing of applications. The AWS Data Engineer will also build reliable and robust data ingestion pipelines, identify and resolve performance issues, and keep up to date with new technology developments and implementations.

Candidates should have solid experience with AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, and KMS. Experience in serverless application development using AWS Lambda, knowledge of ETL/ELT processes, and the ability to architect and implement CI/CD strategies for the EDP are also essential. Familiarity with high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka is preferred, and an AWS Solutions Architect or AWS Developer certification is a plus, along with a good understanding of Lakehouse/data cloud architecture.

Responsibilities

  • Designing, building, and maintaining efficient, reusable, and reliable architecture and code.
  • Building reliable and robust data ingestion pipelines (within AWS, on-prem to AWS, etc.).
  • Ensuring the best possible performance and quality of high-scale data engineering projects.
  • Participating in architecture and system design discussions.
  • Independently performing hands-on development and unit testing of the applications.
  • Collaborating with the development team and integrating individual components into complex enterprise web systems.
  • Working in a team environment with product, production operations, QE/QA, and cross-functional teams to deliver projects throughout the whole software development cycle.
  • Identifying and resolving any performance issues.
  • Keeping up to date with new technology development and implementation.
  • Participating in code reviews to ensure standards and best practices are met.

Requirements

  • Bachelor's degree in Computer Science, Software Engineering, or MIS, or an equivalent combination of education and experience.
  • Experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises.
  • Programming experience with Python, shell scripting, and SQL.
  • Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, etc.
  • Solid experience implementing solutions on AWS-based data lakes.
  • Good experience with AWS services: API Gateway, Lambda, Step Functions, SQS, DynamoDB, S3, and Elasticsearch.
  • Serverless application development using AWS Lambda.
  • Experience in AWS data lake/data warehouse/business analytics.
  • Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS.
  • Knowledge of ETL/ELT processes.
  • Experience with end-to-end data solutions (ingest, storage, integration, processing, access) on AWS.
  • Ability to architect and implement a CI/CD strategy for the EDP.
  • Experience implementing high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred).
  • Experience migrating data from traditional relational database systems, file systems, and NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift.
  • Experience migrating data from APIs to AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift.
  • Experience implementing POCs for any new technology or tool before it is adopted on the EDP and onboarded for real use cases.
  • Good understanding of Lakehouse/data cloud architecture.

Nice-to-haves

  • AWS Solutions Architect or AWS Developer Certification.