This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

AppFolio • Posted 2 months ago
$125,600 - $157,000/Yr
Full-time • Mid Level
Hybrid • Dallas, TX
Publishing Industries

About the position

At AppFolio, we are innovators, changemakers, and collaborators. We are more than just a software company; we are pioneers in cloud and AI, delivering magical experiences that simplify our customers' lives. We are revolutionizing how people do business in the real estate industry, and we seek your ideas, enthusiasm, and passion to help us continue innovating. The Sr. Data Engineer will contribute to our growing Data Engineering and Operations team, working collaboratively to develop an infrastructure that ingests data from various sources and routes it to different target storages and applications. This role provides access to high-quality data for users ranging from application developers to data analysts and data scientists.

Responsibilities

  • Design, build and operate next-generation data pipeline infrastructure based on Apache Kafka and its ecosystem
  • Improve data architecture, quality, discoverability and access policies to enable and enforce data governance
  • Collaborate with engineers, data analysts and scientists to ensure that our data infrastructure meets the SLOs of our data-intensive customers
  • Develop techniques for monitoring the completeness, correctness and reliability of our data sets (see the monitoring sketch after this list)
  • Leverage agile practices and encourage collaboration, prioritization, and urgency to develop at a rapid pace
  • Research, share, and recommend new technologies and trends
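
The monitoring responsibility above is the kind of task a small data-quality job over a Kafka topic typically handles. Below is a minimal sketch, not AppFolio's actual pipeline: it samples records from a hypothetical topic and reports how many are complete and well-formed. The broker address, topic, and field names are placeholders, and it assumes the confluent-kafka Python client.

```python
# Minimal data-quality check over a Kafka topic (illustrative sketch only).
import json
from confluent_kafka import Consumer

REQUIRED_FIELDS = {"property_id", "event_type", "occurred_at"}  # hypothetical schema

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "data-quality-monitor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["property-events"])      # hypothetical topic

seen, complete = 0, 0
try:
    while seen < 10_000:                     # sample a fixed window of records
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        seen += 1
        try:
            record = json.loads(msg.value())
            if isinstance(record, dict) and REQUIRED_FIELDS <= record.keys():
                complete += 1
        except json.JSONDecodeError:
            pass                             # malformed payloads count as incomplete
finally:
    consumer.close()

if seen:
    print(f"completeness: {complete / seen:.2%} over {seen} records")
```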

Requirements

  • 5+ years of experience working with languages like Python or Ruby
  • Hands-on experience with Apache Kafka in production
  • Hands-on experience with data warehouse technology, particularly with Snowflake
  • Experience with a variety of data sources, including change data capture systems, event sourcing and clickstreams
  • Industry experience with real-time transformation technologies such as Apache Flink
  • Excellent MySQL operational knowledge
  • Proficient in RBAC and Data Governance across multiple platforms and ecosystems (AWS, Snowflake, Kafka, etc.); see the RBAC sketch after this list
  • Bachelor's degree in Computer Science or another quantitative field
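
To illustrate the RBAC item above, here is a minimal sketch, not AppFolio's setup, of granting read-only access in Snowflake from Python. The role, warehouse, database, and user names are hypothetical, and it assumes the snowflake-connector-python package; in Snowflake, each grant is an ordinary SQL statement executed under an administrative role.

```python
# Illustrative Snowflake RBAC sketch: create a read-only role and grant it out.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder credentials
    user="governance_admin",
    password="***",
    role="SECURITYADMIN",
)

statements = [
    "CREATE ROLE IF NOT EXISTS ANALYST_READONLY",
    "GRANT USAGE ON WAREHOUSE ANALYTICS_WH TO ROLE ANALYST_READONLY",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_READONLY",
    "GRANT USAGE ON ALL SCHEMAS IN DATABASE ANALYTICS TO ROLE ANALYST_READONLY",
    "GRANT SELECT ON ALL TABLES IN DATABASE ANALYTICS TO ROLE ANALYST_READONLY",
    "GRANT ROLE ANALYST_READONLY TO USER JDOE",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)        # each grant is a plain SQL statement
finally:
    cur.close()
    conn.close()
```

Comparable access-control primitives (Kafka ACLs, AWS IAM policies) cover the other platforms named in that bullet.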

Nice-to-haves

  • Experience with the Debezium connector (see the connector sketch after this list)
  • Experience with clickstream tracking technology, e.g. Snowplow
  • Experience with large-scale Data Lakes and Lakehouses, especially Apache Iceberg
  • Background in monitoring and managing cloud costs (FinOps)
  • Data science skills for analyzing data and communicating with ML engineers
  • Proven experience mentoring peers and helping them develop
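
For the Debezium item above, here is a minimal sketch, not AppFolio's configuration, of registering a Debezium MySQL source connector through the Kafka Connect REST API. Hostnames, credentials, topic names, and table names are placeholders, and the property names shown follow Debezium 2.x conventions, which vary between versions.

```python
# Illustrative Debezium MySQL connector registration via Kafka Connect's REST API.
import requests

connector = {
    "name": "mysql-cdc-properties",                     # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.internal",          # placeholder host
        "database.port": "3306",
        "database.user": "cdc_user",
        "database.password": "***",
        "database.server.id": "5400",                   # unique id within the MySQL cluster
        "topic.prefix": "appdb",                        # prefix for emitted Kafka topics
        "table.include.list": "appdb.properties,appdb.leases",
        "schema.history.internal.kafka.bootstrap.servers": "localhost:9092",
        "schema.history.internal.kafka.topic": "schema-changes.appdb",
    },
}

# Kafka Connect exposes a REST endpoint for creating connectors.
resp = requests.post("http://localhost:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print(resp.json()["name"], "registered")
```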

Benefits

  • Base salary range of $125,600-$157,000
  • Opportunities for growth and compelling total rewards
  • Coaching and mentorship from best-in-class leaders
  • Flexible hybrid work environment
  • Diversity and inclusion initiatives

Job Keywords

Hard Skills
  • Apache Kafka
  • CircleCI
  • Data Lakes
  • Docker
  • Jenkins