Ernst & Young - Springfield, IL

Full-time - Senior
Springfield, IL
Professional, Scientific, and Technical Services

About the position

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. EY is seeking a Data Architect with strong technology and data expertise and a proven record of delivery. In this role, you will lead the design, development, and management of the organization's data architecture, ensuring scalable, efficient, and secure data solutions that align with business goals and support enterprise-wide data initiatives. You will create, maintain, and support the data platform and infrastructure that enables the analytics front end, including the development, construction, testing, and maintenance of architectures such as high-volume, large-scale data processing systems and databases with proper verification and validation processes.

Your responsibilities will include designing, developing, optimizing, and maintaining data architecture and pipelines that adhere to ETL principles and business goals. You will develop and maintain scalable data pipelines and build out new integrations using AWS-native technologies to support continuing increases in data source, volume, and complexity. Additionally, you will define data requirements, gather and mine large-scale structured and unstructured data, and validate data by running various data tools in the big data environment. You will also support standardization, customization, and ad hoc data analysis, and develop the mechanisms to ingest, analyze, validate, normalize, and clean data. Writing unit, integration, and performance test scripts and performing the data analysis required to troubleshoot data-related issues will also be part of your role. You will implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.

Leading the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity will be essential. You will develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes, while also learning about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics. Your role will involve solving complex data problems to deliver insights that help achieve business objectives and implementing statistical data-quality procedures on new data sources by applying rigorous, iterative data analytics.

Responsibilities

  • Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals
  • Develop and maintain scalable data pipelines, and build out new integrations using AWS-native technologies to support continuing increases in data source, volume, and complexity
  • Define data requirements, gather and mine large-scale structured and unstructured data, and validate data by running various data tools in the big data environment
  • Support standardization, customization, and ad hoc data analysis, and develop the mechanisms to ingest, analyze, validate, normalize, and clean data
  • Write unit/integration/performance test scripts and perform the data analysis required to troubleshoot data-related issues and assist in their resolution
  • Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes
  • Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity
  • Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes
  • Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics
  • Solve complex data problems to deliver insights that help achieve business objectives
  • Implement statistical data quality procedures on new data sources by applying rigorous, iterative data analytics
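
The ingest/validate/normalize/clean cycle named in the responsibilities above can be sketched in miniature. This is an illustrative example only; the field names and validation rules are hypothetical and do not represent EY's actual pipeline or tooling:

```python
# Minimal sketch of a validate/normalize/clean step in a data pipeline.
# Field names and rules below are hypothetical examples for illustration.

REQUIRED_FIELDS = {"id", "name", "amount"}

def validate(record: dict) -> bool:
    """A record is valid if every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in REQUIRED_FIELDS)

def normalize(record: dict) -> dict:
    """Trim whitespace, lowercase names, and coerce amounts to float."""
    return {
        "id": str(record["id"]).strip(),
        "name": str(record["name"]).strip().lower(),
        "amount": float(record["amount"]),
    }

def clean(records: list[dict]) -> list[dict]:
    """Drop invalid records, then normalize the survivors."""
    return [normalize(r) for r in records if validate(r)]

raw = [
    {"id": 1, "name": "  Acme Corp ", "amount": "19.99"},
    {"id": 2, "name": "", "amount": "5.00"},   # invalid: empty name
    {"id": 3, "name": "Globex", "amount": 7},
]
print(clean(raw))
```

In a production setting the same shape of logic would typically run inside a distributed framework (e.g., Spark) rather than plain Python, with rejected records routed to a quarantine table for the reconciliation and data-quality monitoring the role describes.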

Requirements

  • Experience in the development of Hadoop APIs and MapReduce jobs for large-scale data processing
  • Hands-on programming experience in Apache Spark using SparkSQL and Spark Streaming or Apache Storm
  • Hands-on experience with major components like Hive, Spark, and MapReduce
  • Experience working with at least one NoSQL data store: HBase, Cassandra, or MongoDB
  • Experience with Hadoop clustering and auto-scaling
  • Good knowledge of Apache Kafka and Apache Flume
  • Knowledge of Spark and Kafka integration with multiple Spark jobs to consume messages from multiple Kafka partitions
  • Advanced experience and understanding of data/Big Data, data integration, data modeling, AWS, and cloud technologies
  • Strong business acumen with knowledge of the Industrial Products sector is preferred, but not required
  • Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata
  • Ability to build and optimize queries (SQL), data sets, 'Big Data' pipelines, and architectures for structured and unstructured data
  • Experience with or knowledge of Agile Software Development methodologies
  • Demonstrated understanding and experience using data engineering programming languages (e.g., Python)
  • Distributed data technologies (e.g., PySpark)
  • Cloud platform deployment and tools (e.g., Kubernetes)
  • Relational SQL databases
  • DevOps and continuous integration
  • AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
  • Databricks/ETL
  • IICS/DMS
  • GitHub
  • EventBridge, Tidal

Nice-to-haves

  • Experience in leading and influencing teams, with a focus on mentorship and professional development
  • A passion for innovation and the strategic application of emerging technologies to solve real-world challenges
  • The ability to foster an inclusive environment that values diverse perspectives and empowers team members

Benefits

  • Comprehensive compensation and benefits package
  • Medical and dental coverage
  • Pension and 401(k) plans
  • Wide range of paid time off options
  • Flexible vacation policy
  • Time off for designated EY Paid Holidays, Winter/Summer breaks, Personal/Family Care, and other leaves of absence