This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

IBM • posted 12 days ago
Hybrid • Kochi, IN
Professional, Scientific, and Technical Services

About the position

In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. You will collaborate with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat.

Curiosity and a constant quest for knowledge serve as the foundation for success in IBM Consulting. In your role, you will be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions that result in ground-breaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Responsibilities

  • Develop, maintain, evaluate, and test big data solutions.
  • Build data pipelines to ingest, process, and transform data from files, streams, and databases (see the sketch after this list).
  • Process data with Spark, Python, and PySpark, using Hive, HBase, or other NoSQL databases, on the Azure cloud data platform or HDFS.
  • Develop efficient code for multiple use cases with the Spark framework, using Python or Scala and other big data technologies.
  • Develop streaming pipelines.
  • Work with Hadoop/Azure ecosystem components to implement solutions that scale with growing data volumes.
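
As a purely illustrative sketch of the pipeline work described above (not part of the original posting), the snippet below shows a minimal PySpark batch job that ingests a CSV file, derives a daily aggregate, and writes the result as Parquet. The paths and column names are hypothetical.

    # Illustrative only: a minimal PySpark batch pipeline of the kind the
    # responsibilities describe. Paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-ingest").getOrCreate()

    # Ingest: read raw CSV records (hypothetical path).
    raw = spark.read.option("header", True).csv("/data/raw/events.csv")

    # Transform: parse the timestamp and count events per day (hypothetical columns).
    daily = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write the aggregate as Parquet for downstream consumers.
    daily.write.mode("overwrite").parquet("/data/curated/daily_event_counts")

    spark.stop()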

Requirements

  • Bachelor's Degree.
  • 5-7+ years of total experience in data management (data warehouses, data lakes, data platforms, lakehouses) and data engineering.
  • At least 4 years of experience with big data technologies, including extensive data engineering experience in Spark with Python or Scala.
  • At least 3 years of experience with cloud data platforms on Azure.
  • Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
  • Exposure to streaming solutions and message brokers such as Kafka.
  • Experience with Unix/Linux commands and basic shell scripting.

Nice-to-haves

  • Master's Degree.
  • Certification in Azure and Databricks, or Cloudera Spark developer certification.

Job Keywords

Hard Skills
  • Azure Data Factory
  • Cloud Database
  • Databricks
  • IBM WAS
  • Python