LinkedIn - Sunnyvale, CA

posted 2 months ago

Full-time - Mid Level
Remote - Sunnyvale, CA

About the position

The Senior Data Engineer role at LinkedIn focuses on leveraging big data to empower business decisions and deliver data-driven insights. This position involves collaborating with cross-functional teams to develop infrastructure and tools that facilitate data-driven decision-making, ultimately driving member engagement and business growth. The role offers a hybrid work option, allowing flexibility in work location while contributing to a data-centric culture.

Responsibilities

  • Work with a team of high-performing data science professionals and cross-functional teams to identify business opportunities and build scalable data solutions.
  • Build data expertise, act like an owner for the company, and manage complex data systems for a product or a group of products.
  • Perform all necessary data transformations to serve products that empower data-driven decision making.
  • Build and manage data pipelines, design and architect databases.
  • Establish efficient design and programming patterns for engineers and non-technical partners.
  • Design, implement, integrate, and document performant systems or components for data flows or applications that power analysis at a massive scale.
  • Ensure best practices and standards in our data ecosystem are shared across teams.
  • Understand the analytical objectives to make logical recommendations and drive informed actions.
  • Engage with internal platform teams to prototype and validate tools developed in-house to derive insight from very large datasets or automate complex algorithms.
  • Be a self-starter, initiate and drive projects to completion with minimal guidance.
  • Contribute to engineering innovations that fuel LinkedIn's vision and mission.

Requirements

  • Bachelor's Degree in a quantitative discipline: Computer science, Statistics, Operations Research, Informatics, Engineering, Applied Mathematics, Economics, etc.
  • 3+ years of relevant industry or academic experience working with large amounts of data.
  • Experience with SQL/Relational databases.
  • Background in at least one programming language (e.g., R, Python, Java, Scala, PHP, JavaScript).

Nice-to-haves

  • BS and 5+ years of relevant work experience, MS and 3+ years of relevant work experience, or PhD and 1+ years of relevant work or academic experience working with large amounts of data.
  • MS or PhD in a quantitative discipline: statistics, operations research, computer science, informatics, engineering, applied mathematics, economics, etc.
  • Experience in developing data pipelines using Spark and Hive.
  • Experience with data modeling, ETL (Extract, Transform, Load) concepts, and patterns for efficient data governance.
  • Experience with using distributed data systems such as Spark and related technologies (Presto/Trino, Hive, etc.).
  • Experience with either data workflows/modeling, front-end engineering, or back-end engineering.
  • Deep understanding of technical and functional designs for relational and MPP Databases.
  • Experience in data visualization and dashboard design, including tools and libraries such as Tableau, R visualization packages, Streamlit, and D3.
  • Knowledge of Unix and Unix-like systems and version control systems such as Git.

Benefits

  • Generous health and wellness programs
  • Time away for employees of all levels
  • Annual performance bonus
  • Stock options
  • Comprehensive benefits package