Senior Data / ML Engineer

$110,000 - $190,000/Yr

American Express - Phoenix, AZ

posted about 2 months ago

Full-time - Senior
Phoenix, AZ
Credit Intermediation and Related Activities

About the position

As a Senior Data / ML Engineer at American Express, you will play a crucial role in the Enterprise Salesforce organization, part of the larger Enterprise Platforms Centre of Excellence. This position involves designing, implementing, and maintaining a cohesive data layer that connects the Salesforce platform with various enterprise capabilities. Your work will enable data and integration capabilities in both real-time and batch modes, ensuring that our Sales and Marketing teams have access to accurate and timely data for decision-making.

You will also design and build a data fabric that supports advanced personalization efforts, enabling targeted marketing campaigns based on customer behaviors and preferences. Additionally, you will design, implement, and maintain NLP pipelines for sales practices and monitoring initiatives.

In this role, you will collaborate with a diverse tech team, where you can architect, code, and ship software that enhances our customers' digital experiences. You will have the opportunity to work with the latest technologies and contribute to the broader engineering community through open-source initiatives. American Express values continuous professional development, providing dedicated time for you to keep your skills fresh and relevant. You will be recognized for your contributions and leadership, with the opportunity to share in the company's success.

The Salesforce platform team is at the heart of American Express' global marketing, sales, and account development functions, making this role critical to the company's customer-centric and growth objectives. You will be expected to perform business requirements analysis, develop ETL data pipelines using open-source big data technologies, and deploy microservices-based APIs over Docker containers. Your ability to work independently and collaboratively with cross-functional teams will be essential in delivering features on time and driving projects from inception to closure.

Responsibilities

  • Perform business requirements analysis and translate requirements into technical requirements.
  • Provide technical expertise in driving projects from inception to closure.
  • Develop ETL data pipelines using open-source big data technologies.
  • Build and tune Spark pipelines in Scala/Python.
  • Develop and deploy microservices-based APIs over Docker containers.
  • Work independently as well as collaborate effectively with cross-functional teams on a case-by-case basis.
  • Work with the Product team, other data engineers, and DevOps to deliver features on time.
  • Be a change agent with the hands-on ability to build POC products and set best practices.
  • Perform code reviews and help the team to produce quality code.
  • Work in a geographically distributed team setup.
  • Identify opportunities for further enhancements and refinements to standards and processes.
  • Fine-tune the existing application with new ideas and optimization opportunities to reduce latency.
  • Contribute to solution scoping and effort sizing with a cross-functional team.
  • Stay informed on technology trends.

Requirements

  • Bachelor's degree in Engineering or Computer Science (or equivalent), or a master's degree in Computer Applications (or equivalent).
  • 7+ years of experience within Data Engineering/Data Warehousing using Big Data.
  • 2+ years of experience with data analytics, machine learning, NLP, and AI concepts.
  • Expertise in the distributed computing ecosystem, including MapReduce, Hive, and Spark (Core, SQL, and PySpark).
  • Hands-on experience with programming using Core Java or Python.
  • Expert knowledge of Hadoop and Spark architecture and their working principles.
  • Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing huge amounts of data.
  • Experience in UNIX shell scripting.
  • Ability to design and develop optimized data pipelines for batch and real-time data processing.
  • Experience developing ML, AI, and NLP products.
  • Advanced knowledge of technical and functional principles.
  • Experience in analysis, design, development, testing, and implementation of system applications.
  • Experience with GitHub/Bitbucket and leveraging CI/CD pipelines.
  • Creative problem solving (Innovative).
  • Excellent technical and analytical aptitude.
  • Self-starter with a curiosity and appetite for new technology.
  • Teamwork and the ability to multi-task.
  • Excellent communication skills.
  • Ability to influence and lead others.
  • Willingness to understand the business and participate in discussions around project requirements.

Benefits

  • Competitive base salaries
  • Bonus incentives
  • 6% Company Match on retirement savings plan
  • Free financial coaching and financial well-being support
  • Comprehensive medical, dental, vision, life insurance, and disability benefits
  • Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
  • 20+ weeks paid parental leave for all parents, regardless of gender, offered for pregnancy, adoption or surrogacy
  • Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
  • Free and confidential counseling support through our Healthy Minds program
  • Career development and training opportunities