Wipro - Memphis, TN

posted 12 days ago

Full-time - Mid Level
10,001+ employees
Professional, Scientific, and Technical Services

About the position

The Big Data Developer role at Wipro Limited focuses on leveraging Azure Data Factory and Databricks to design, develop, and implement data pipelines for processing large datasets. The position requires direct client engagement for requirements gathering and solution design, ensuring that business needs are translated into clear technical specifications. This onsite role emphasizes close collaboration with stakeholders to keep projects on track and technical delivery efficient.

Responsibilities

  • Collaborate directly with clients to understand business requirements and translate them into technical specifications.
  • Conduct regular meetings with stakeholders to gather, document, and analyze requirements for Big Data solutions.
  • Act as the point of contact between the client and the internal technical team to ensure seamless communication and delivery.
  • Provide regular updates to both clients and internal teams on project progress, potential risks, and milestones.
  • Design, develop, and implement data pipelines using Azure Data Factory and Databricks to process large datasets from multiple sources.
  • Develop scalable and high-performance ETL solutions that integrate structured and unstructured data.
  • Utilize Databricks to build and optimize Spark-based solutions for data transformation, aggregation, and processing (see the PySpark sketch after this list).
  • Work with cloud storage solutions like Azure Data Lake, Blob Storage, and SQL Databases to manage data ingestion, transformation, and storage.
  • Automate workflows and orchestrate jobs using ADF pipelines, ensuring timely and efficient data processing.
  • Monitor and troubleshoot data pipelines for performance optimization and ensure data quality and consistency.
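
To ground the bullets above, here is a minimal PySpark sketch of the kind of Databricks pipeline this role describes: raw data is read from Azure Data Lake, cleaned and aggregated with Spark, and written out as a Delta table. The storage URIs, column names, and job name are hypothetical placeholders for illustration, not details taken from the posting.

    # Minimal Databricks-style PySpark job: ingest -> transform -> write Delta.
    # All URIs, columns, and names below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("daily-revenue-aggregate")  # hypothetical job name
        .getOrCreate()
    )

    # Ingest raw JSON events from Azure Data Lake (abfss = ADLS Gen2 URI scheme).
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

    # Transform: drop malformed rows and normalize types.
    orders = (
        raw.where(F.col("order_id").isNotNull())
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
    )

    # Aggregate: daily revenue and order counts per region.
    daily = (
        orders.groupBy(F.to_date("order_ts").alias("order_date"), "region")
              .agg(F.sum("amount").alias("revenue"),
                   F.count("order_id").alias("order_count"))
    )

    # Write a partitioned Delta table for downstream consumers.
    (daily.write.format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .save("abfss://curated@examplelake.dfs.core.windows.net/daily_revenue/"))

In practice, a job like this would typically run as a Databricks notebook activity inside an Azure Data Factory pipeline, with ADF triggers handling the scheduling that the orchestration bullet above refers to.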

Requirements

  • Strong technical background in Azure Data Factory and Databricks.
  • Experience in designing, developing, and implementing data pipelines.
  • Strong knowledge of Apache Spark, SQL, and Delta Lake.
  • Proficiency with Python and PySpark for data engineering tasks.
  • Experience with CI/CD pipelines using Azure DevOps, Jenkins, or similar tools.
  • Familiarity with data warehousing concepts and experience with SQL Server or other RDBMS.
  • Experience with version control systems like Git/Bitbucket.
  • Strong problem-solving skills and ability to troubleshoot data pipeline issues effectively.

Nice-to-haves

  • Strong client management and relationship-building skills.
  • Ability to multitask and manage multiple projects simultaneously.
  • Excellent problem-solving and troubleshooting skills.
  • Proactive approach to identifying risks and resolving issues.
  • Strong organizational skills and attention to detail.
  • Ability to work in both technical and business environments.

Benefits

  • Full range of medical and dental benefit options
  • Disability insurance
  • Paid time off (inclusive of sick leave)
  • Other paid and unpaid leave options