The Hartford - Chicago, IL

posted 1 day ago

Full-time - Entry Level
Hybrid - Chicago, IL
Insurance Carriers and Related Activities

About the position

The Hartford is seeking an Associate Data Engineer within Claims Data Science to design, develop, and implement modern, sustainable data assets that fuel machine learning and artificial intelligence solutions across a wide range of strategic initiatives. As an Associate Data Engineer, you will participate in the entire software development lifecycle in support of continuous data delivery while growing your knowledge of emerging technologies. We use the latest data technologies, software engineering practices, MLOps, and Agile delivery frameworks, and we are passionate about building well-architected, innovative solutions that drive business value. This cutting-edge, forward-focused organization offers opportunities for collaboration, self-organization within the team, and visibility as we focus on continuous business data delivery. This role has a hybrid work arrangement, with the expectation of working in an office location (Hartford, CT; Charlotte, NC; Chicago, IL; Columbus, OH) three days a week (Tuesday through Thursday).

Responsibilities

  • Participate in developing high-quality, scalable software modules for the next-generation analytics solution suite
  • Engage in activities with cross-functional IT unit stakeholders (e.g., database, operations, telecommunications, technical support, etc.)
  • Participate in formulating logical statements of business problems and in devising, testing, and implementing efficient, cost-effective application program solutions
  • Identify and validate internal and external data sources for availability and quality. Work with SMEs to describe and understand data lineage and suitability for a use case
  • Participate in creating data assets and building data pipelines that align with modern software development principles for further analytical consumption. Perform data analysis to ensure the quality of data assets.
  • Perform preliminary exploratory analysis to evaluate nulls, duplicates, and other issues with data sources (see the sketch after this list)
  • Assist in developing code that enables real-time modeling solutions to be ingested into front-end systems
  • Produce code artifacts and documentation using GitHub for reproducible results and hand-off to other data science teams
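As a rough illustration of the preliminary exploratory analysis mentioned above, a minimal data-quality check in Python with pandas might look like the following sketch (the file name and data source are hypothetical placeholders, not part of the posting):

    import pandas as pd

    # Load a source extract (hypothetical file name used for illustration)
    df = pd.read_csv("claims_extract.csv")

    # Basic profile: row/column counts, data types, and distinct values per column
    print(df.shape)
    print(df.dtypes)
    print(df.nunique())

    # Null counts per column, showing only columns with missing values
    null_counts = df.isnull().sum()
    print(null_counts[null_counts > 0])

    # Count of fully duplicated rows
    print("Duplicate rows:", df.duplicated().sum())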

Requirements

  • Bachelor's degree in Computer Science, Engineering, IT, Management Information Systems, or a related discipline
  • 1+ years of experience working with data
  • Exposure to Python and SQL
  • Exposure to AWS services (e.g., S3, EMR)
  • Exposure to ingesting data from a variety of sources, including relational databases, Hadoop/Spark, cloud data sources, XML, and JSON
  • Exposure to ETL, including metadata management and data validation
  • Exposure to Unix and Git
  • Exposure to automation and scheduling tools (AutoSys, cron, Airflow, etc.)
  • Exposure to cloud data warehouses, automation, and data pipelines (e.g., Snowflake, Redshift) a plus
  • Able to communicate effectively with both technical and non-technical teams
  • Able to translate complex technical topics into business solutions and strategies
  • Candidate must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I-983 Training Plan endorsement for this position.

Benefits

  • Short-term or annual bonuses
  • Long-term incentives
  • On-the-spot recognition