Penske Automotive Group - Reading, PA

posted 2 months ago

Full-time - Mid Level
Reading, PA
Truck Transportation

About the position

As a Penske Data Platform Engineer (Associate Data Platform Engineer II), you will serve as a technical expert on the Big Data platform team, focusing on the design, development, implementation, administration, and support of the GemFire in-memory data grid, IoT, and Kafka platforms. This role requires a comprehensive understanding of core Java applications and the ability to provide data-modeling guidance to developers. You will work in a cloud-hosted NoSQL database environment, ensuring the data platforms meet business needs and security requirements while supporting significant data platform initiatives.

Responsibilities

  • Develop and administer Penske's data platform, with a focus on the cloud-hosted NoSQL database environment.
  • Meet or exceed internal customer expectations in terms of availability, reliability, performance, and support.
  • Support technology selection, building, testing, deployment, and documentation of the big data platforms.
  • Ensure that Penske's data platforms stay on the latest technology stack as business needs require.
  • Support development related to significant data platform initiatives for the enterprise.
  • Maintain up-to-date knowledge of data security requirements.
  • Offer database expertise to multiple critical projects and initiatives, delivering on schedule and within budget while meeting business needs.
  • Remain aware of platform-level production issues and provide support.
  • Design and build the security setup to comply with corporate security standards when delivering Platform as a Service.
  • Design & Develop Concourse pipeline, Java script to support Development teams for continuous development and Continuous Integration.
  • Create key performance indicators in PCF (Pivotal Cloud Foundry), hosted on AWS, to assess the performance of Guided Repair.

Requirements

  • Bachelor's degree in Computer Science or Engineering, or certification in DB2, operating systems, or networking.
  • 3+ years of industry experience, preferably with relational databases such as Oracle or Teradata.
  • Programming experience in Java is required.
  • Some hands-on database project experience is required.
  • Experience with Linux scripting is required (Perl, Korn shell, Bourne shell, or Python).
  • Experience in JVM/JDK configuration and tuning is required.
  • Knowledge of the basic SQL commands used to perform administrative functions is needed.
  • Experience with database backup, error recovery, and disaster recovery is needed.

Nice-to-haves

  • Experience with real-time analytics and NoSQL technologies (e.g., HBase, Cassandra, MongoDB) is a huge plus.
  • Understanding of the Pivotal Big Data Suite: Greenplum, GemFire, Spring Cloud Data Flow, and RabbitMQ.
  • Experience with Amazon Web Services, Cloud Foundry, and vCloud Air.
  • Knowledge of big data technologies such as Hadoop, Kafka, and ZooKeeper.
  • Familiarity with cloud-native technologies, principles, and techniques such as Kubernetes, microservices, and 12-factor apps.
  • Experience creating and establishing CI/CD pipelines with Concourse.