Yahoo Holdings - Baltimore, MD

posted 2 months ago

Full-time - Mid Level
Hybrid - Baltimore, MD

About the position

The Software Dev Engineer II at Yahoo is responsible for analyzing, programming, debugging, and modifying software enhancements and new products. This role involves collaborating with Big Data engineers to develop data warehouse designs and working in an agile Scrum environment to deliver innovative products. Key duties include designing applications, writing code, testing, debugging, and documenting work, while staying updated with relevant technologies to enhance application functionality.

Responsibilities

  • Design and implement reusable frameworks, libraries, and Java components in collaboration with business and IT stakeholders.
  • Ingest data from various structured and unstructured data sources into Hadoop and other distributed Big Data systems.
  • Support the sustainment and delivery of an automated ETL pipeline.
  • Validate data extracted from sources like HDFS, databases, and other repositories using scripts and automated capabilities.
  • Enrich and transform extracted data as required.
  • Monitor and report the data flow through the ETL process.
  • Perform data extractions, data purges, or data fixes in accordance with internal procedures and policies.
  • Track development and operational support via user stories and technical tasks in JIRA, using tools such as Git and Maven.
  • Troubleshoot production support issues post-deployment and develop solutions as required.
  • Mentor junior engineers within the team.

Requirements

  • B.S. or M.S. in Computer Science (or equivalent experience).
  • Three years of related industry experience.
  • Experience with back-end programming languages such as Java, JavaScript (Node.js), and Python, along with object-oriented analysis and design (OOAD).
  • Experience with database technologies such as Vertica, Oracle, Netezza, MySQL, and BigQuery.
  • Experience working with large scale databases.
  • Knowledge of and experience with Unix/Linux platforms and shell scripting.
  • Experience writing Pig Latin scripts, MapReduce jobs, and HiveQL queries.
  • Good knowledge of database structures, theories, principles, and practices.
  • Familiarity with data loading tools like Flume and Sqoop.
  • Knowledge of workflow/schedulers like Oozie and Airflow.
  • Analytical and problem-solving skills applied to the Big Data domain.
  • Proven understanding of Hadoop (Dataproc), HBase, Hive, and Pig.
  • Knowledge of cloud providers such as AWS, GCP, and Azure.
  • Ability to write high-performance, reliable, and maintainable code.
  • Expertise with version control tools such as Git.
  • Solid grasp of multi-threading and concurrency concepts.
  • Effective analytical, troubleshooting, and problem-solving skills.
  • Strong customer focus, ownership, urgency, and drive.

Benefits

  • Healthcare
  • 401K savings plan
  • Company holidays
  • Vacation
  • Sick time
  • Parental leave
  • Employee assistance program