Paycom Software · posted 5 days ago
Oklahoma City, OK
Professional, Scientific, and Technical Services

About the position

This position sits within the Development and IT organization and works closely with computer scientists, IT staff, and data scientists to build, deploy, and optimize data pipelines and to integrate data infrastructure that enables analytics, reporting, and machine learning workloads at scale.

Responsibilities

  • Build, test, and validate robust production-grade data pipelines that can ingest, aggregate, and transform large datasets according to the specifications of the internal teams who will be consuming the data.
  • Configure connections to source data systems and validate schema definitions with the teams responsible for the source data.
  • Monitor data pipelines and troubleshoot issues as they arise.
  • Monitor data lake environment for performance and data integrity.
  • Collaborate with IT and database teams to maintain the overall data ecosystem.
  • Assist data science, business intelligence, and other teams in using the data provided by the data pipelines.
  • Serve as on-call for production issues related to data pipelines and other data infrastructure maintained by the data engineering team.

Requirements

  • BS degree in Computer Science or related field
  • 3+ years of data engineering work experience
  • Experience coding in Java or Scala, and with build tools such as Maven, Gradle, or SBT
  • Experience with SQL databases
  • Experience working with HDFS or S3 storage environments
  • Experience with Apache Spark or Databricks and reading and writing Parquet, Avro and JSON
  • Experience working in a Unix or Linux environment, including writing shell scripts
  • Experience with ETL and ELT processes in data pipelines
  • Experience with Docker and Kubernetes highly preferred
  • Experience with workflow orchestration tools like Apache Airflow, Control-M, or Arrow highly preferred
  • Experience with Apache Kafka or Confluent is preferred

Nice-to-haves

  • Experience coding in Python
  • Experience with NoSQL solutions
Hard Skills

  • Apache Kafka
  • Docker
  • Gradle
  • JSON
  • Kubernetes
© 2024 Teal Labs, Inc