UBS - Weehawken, NJ

Part-time - Mid Level
Weehawken, NJ
Securities, Commodity Contracts, and Other Financial Investments and Related Activities

About the position

The Senior Data Engineer role at UBS involves building complex, secure data platforms on Azure and Databricks, with a focus on Scala and Python. The engineer is responsible for building reliable data pipelines, crafting transformation pipelines, automating manual processes, ensuring quality and compliance, and monitoring production health. The role sits within the Group Compliance & Regulatory Governance Technology team, collaborating to treat data as an asset and support advanced analytics initiatives.

Responsibilities

  • Engineer reliable data pipelines for sourcing, processing, distributing, and storing data using Databricks and Airflow.
  • Craft complex transformation pipelines on multiple datasets to produce valuable insights for business decisions.
  • Develop, train, and apply data engineering techniques to automate manual processes and solve business problems.
  • Ensure the quality, security, reliability, and compliance of solutions by applying digital principles and implementing requirements.
  • Build observability into solutions, monitor production health, and help resolve incidents.
  • Leverage Airflow to build complex, branching, data-driven pipelines, and use Databricks for the Spark layer of those pipelines (see the sketch after this list).
  • Utilize Python and Scala for complex data operations and share best practices with other engineers.
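For illustration only, here is a minimal sketch of the kind of data-driven branching DAG the responsibilities describe, assuming Airflow 2.4+; the DAG id, task names, and row-count threshold are hypothetical, not UBS's actual pipeline, and a stubbed profiling task stands in for a real Databricks/Spark job.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.empty import EmptyOperator
    from airflow.operators.python import BranchPythonOperator, PythonOperator

    def choose_path(**context):
        # Branch on a metric published by the upstream profiling task via XCom.
        row_count = context["ti"].xcom_pull(task_ids="profile_source") or 0
        return "full_reload" if row_count > 1_000_000 else "incremental_load"

    with DAG(
        dag_id="compliance_feed_example",  # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Stub: a real pipeline would inspect source metadata or a Spark job here.
        profile = PythonOperator(task_id="profile_source",
                                 python_callable=lambda: 42)
        branch = BranchPythonOperator(task_id="branch_on_volume",
                                      python_callable=choose_path)
        full = EmptyOperator(task_id="full_reload")
        incremental = EmptyOperator(task_id="incremental_load")
        # Publish runs as long as one branch succeeded and nothing failed.
        publish = EmptyOperator(task_id="publish",
                                trigger_rule="none_failed_min_one_success")

        profile >> branch >> [full, incremental] >> publish

In a real deployment the Spark layer would typically run through a Databricks operator rather than the EmptyOperator placeholders used here.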

Requirements

  • Bachelor's or master's degree in computer science or a similar engineering field.
  • 5+ years of total IT experience in software development or engineering.
  • 3+ years of hands-on experience designing and building scalable data pipelines for large datasets on cloud data platforms.
  • 3+ years of hands-on experience in distributed processing using Databricks, Apache Spark (Python), Kafka, and Airflow.
  • 2+ years of programming experience in Scala (must have), Python, and Java (preferred).
  • Experience with monitoring solutions such as Spark cluster logs, Azure Monitor logs, Application Insights, and Grafana.
  • Proficiency in managing large codebases with systems such as GitHub or GitLab, including the Gitflow branching workflow.
  • Experience with Agile development methodologies and delivering within Azure DevOps.
  • Expertise in optimized dataset structures in Parquet and Delta Lake formats (see the sketch after this list).
  • Expertise in optimizing Airflow DAGs and their task-branching logic.
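As a hedged illustration of the Parquet/Delta Lake requirement above, the following minimal PySpark sketch shows one common optimized-layout pattern, assuming a Databricks runtime or OSS delta-spark 2.0+; the paths, table, and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical raw Parquet source.
    trades = spark.read.parquet("/mnt/raw/trades")

    (
        trades
        .withColumn("trade_date", F.to_date("trade_ts"))
        .repartition("trade_date")          # align file layout with partitions
        .write
        .format("delta")
        .mode("overwrite")
        .partitionBy("trade_date")          # enables partition pruning by date
        .save("/mnt/curated/trades")
    )

    # Compact small files and co-locate rows for selective filters.
    spark.sql("OPTIMIZE delta.`/mnt/curated/trades` ZORDER BY (account_id)")

Partitioning on a low-cardinality date column while Z-ordering on a high-cardinality filter column is the usual trade-off between scan pruning and small-file bloat.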

Nice-to-haves

  • Experience with automated testing tools for CI and release management.
  • Experience authoring both traditional SQL and NoSQL queries.

Benefits

  • Flexible working arrangements including part-time, job-sharing, and hybrid working.
  • Opportunities for career development and gaining new experiences.
  • Support for diversity, equity, and inclusion initiatives.