IT & eBusiness Consulting Services - Mountain View, CA

posted about 2 months ago

Full-time - Mid Level
Remote - Mountain View, CA
Professional, Scientific, and Technical Services

About the position

The Palantir Foundry / PySpark Data Engineer/Lead position is a critical role focused on enhancing and optimizing data-processing capabilities within the organization. This position is fully remote and requires a candidate with a strong background in Palantir Foundry, as well as proficiency in Python, Spark, and AWS technologies. The ideal candidate will have a proven track record of developing and enhancing data-processing workflows, orchestration, and monitoring systems using popular open-source software and automation tools such as GitLab.

In this role, you will collaborate closely with product and technology teams to design and validate the capabilities of the data platform. You will be responsible for identifying, designing, and implementing process improvements that automate manual processes, optimize usability, and redesign systems for greater scalability. Providing technical support and usage guidance to users of the platform's services will also be a key part of your responsibilities. Additionally, you will drive the creation and refinement of metrics, monitoring, and alerting mechanisms to ensure visibility into production services. This position requires a proactive approach to problem-solving and a commitment to continuous improvement in data processing and orchestration.

Responsibilities

  • Develop and enhance data-processing, orchestration, and monitoring capabilities by leveraging popular open-source software, AWS, and GitLab automation.
  • Collaborate with product and technology teams to design and validate the capabilities of the data platform.
  • Identify, design, and implement process improvements: automating manual processes, optimizing for usability, and redesigning for greater scalability.
  • Provide technical support and usage guidance to the users of our platform's services.
  • Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give visibility into production services.

Requirements

  • 4+ years of advanced working knowledge of Palantir Foundry, SQL, Python, and PySpark.
  • 2+ years of experience using a broad range of AWS technologies.
  • Experience building and optimizing data pipelines in a distributed environment.
  • Experience supporting and working with cross-functional teams.
  • Proficiency working in a Linux environment.
  • Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
  • Experience with platform monitoring and alerts tools.