Lorven Technologies - Austin, TX

posted 2 months ago

Full-time - Mid Level
Austin, TX
Professional, Scientific, and Technical Services

About the position

Our client is seeking a highly skilled Data Engineer with expertise in Spark and Scala for a full-time, onsite position based in Sunnyvale, CA or Austin, TX. This role requires a deep understanding of big data technologies and the ability to design and implement comprehensive data warehouse solutions. The ideal candidate will have a minimum of 10 years of experience in the field, with at least six years focused on developing applications using Spark and Scala. Because the position is onsite, effective collaboration and communication within the team and with stakeholders are essential.

The Data Engineer will work with big data databases such as HBase and use PySpark and Python for data processing and analysis. Familiarity with tools like Airflow and Hive is advantageous, as these are integral to managing data workflows and ensuring efficient data handling. The candidate should also have experience with data warehousing concepts and practices; a good understanding of AWS is considered a plus.

Beyond technical skills, the role demands strong verbal and written communication to convey complex ideas clearly and collaborate with team members. The Data Engineer must also demonstrate problem-solving ability, particularly when requirements are unclear, architecting solutions that meet business needs. This position offers an exciting opportunity to work on cutting-edge data engineering projects in a dynamic environment.

Responsibilities

  • Design and implement full-scale data warehouse solutions.
  • Develop applications using Spark and Scala.
  • Work with big data databases like HBase.
  • Utilize PySpark and Python for data processing.
  • Manage data workflows using tools like Airflow and Hive.
  • Collaborate with team members and stakeholders to clarify requirements and deliver solutions.

Requirements

  • Minimum of 10 years of experience in data engineering.
  • At least 6 years of experience in developing with Spark and Scala.
  • Strong knowledge of big data databases, such as HBase.
  • Experience with PySpark and Python.
  • Familiarity with data warehousing concepts and practices.
  • Good communication skills, both oral and written.
  • Problem-solving and architecting skills.

Nice-to-haves

  • Knowledge of AWS is a plus.
  • Familiarity with Airflow and Hive.