IBM · posted 3 days ago
Hybrid • Kochi, IN
Professional, Scientific, and Technical Services

About the position

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

Curiosity and a constant quest for knowledge are the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside your role, and come up with creative solutions that deliver groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and the adoption of new technology.

Responsibilities

  • Develop, maintain, evaluate, and test big data solutions.
  • Develop data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
  • Build data pipelines to ingest, process, and transform data from files, streams, and databases.
  • Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
  • Develop efficient software code for multiple use cases using the Spark framework with Python or Scala and big data technologies.

Requirements

  • 6–7+ years of total experience in data management (data warehouse, data lake, data platform, lakehouse) and data engineering.
  • At least 4 years of experience with big data technologies, including extensive data engineering experience in Spark with Python or Scala.
  • At least 3 years of experience with cloud data platforms on Azure.
  • Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
  • Good to excellent SQL skills.

Nice-to-haves

  • Azure or Databricks certification, or Cloudera Spark certification for developers.
  • Knowledge of or experience with Snowflake is an added advantage.

Job Keywords

Hard Skills
  • Azure Functions
  • Python
  • Scala
  • Spark Framework
  • SQL