This job is closed

We regret to inform you that the job you were interested in has been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.

AbbVie · posted 7 months ago
Full-time - Mid Level
Remote - Atlanta, GA
Chemical Manufacturing

About the position

This position is part of AbbVie's Information Security & Risk Management (ISRM) team, which is dedicated to empowering partners with the knowledge, tools, and support necessary to use data and technology effectively while managing risk. The Cyber Security Engineering (CSE) team is seeking a highly motivated and talented individual to join its ranks. This role can be performed remotely from anywhere in the U.S. and focuses on the installation, management, optimization, and automation of tools used by the broader Information Security and Risk Management teams.

The CSE team is responsible for data management services, which serve as a foundation for its portfolio, including data transformations and pipelining to downstream systems. As subject matter experts, the team also assists with training and development beyond the scope of Information Security and Risk Management. This position represents an expansion of capabilities within the CSE team, with an emphasis on data pipelines, data models, and adherence to standards across datasets.

The Data Engineer will play a crucial role in delivering the value of data management toolsets, including data pipelines and the SIEM platform, while assisting with data onboarding, normalization, pipelining, data modeling, and documentation, all while striving for automation and quality delivery. The team leverages CI/CD pipelines for automated builds and deployments across all supported toolsets, implementing a mix of legacy and cloud-native infrastructure and services. The ideal candidate is adaptable, willing to learn new terminology, processes, and techniques used within Information Security teams, and comfortable working in Scrum and Agile/DevOps methodologies.
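As an illustration of the normalization work described above, the sketch below maps vendor-specific event fields onto a common schema before events are forwarded to a SIEM. It is a minimal example for orientation only; the field names, schema, and mapping are hypothetical assumptions, not AbbVie's actual tooling.

```python
# Illustrative only: map vendor-specific event fields to a common schema
# before forwarding events to a SIEM. All field names are hypothetical.

COMMON_SCHEMA_MAP = {
    # vendor field -> normalized field
    "src": "source_ip",
    "srcip": "source_ip",
    "dst": "destination_ip",
    "user_name": "user",
    "uname": "user",
    "evt_time": "timestamp",
}

def normalize_event(raw_event: dict) -> dict:
    """Rename known vendor fields to the common schema; keep unknown fields as-is."""
    normalized = {}
    for key, value in raw_event.items():
        normalized[COMMON_SCHEMA_MAP.get(key, key)] = value
    return normalized

if __name__ == "__main__":
    firewall_event = {"srcip": "10.0.0.5", "dst": "8.8.8.8", "evt_time": "2024-05-01T12:00:00Z"}
    print(normalize_event(firewall_event))
    # {'source_ip': '10.0.0.5', 'destination_ip': '8.8.8.8', 'timestamp': '2024-05-01T12:00:00Z'}
```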

Responsibilities

  • Implement and develop data pipelines that feed the SIEM and other analytics engines using existing toolsets
  • Create structured data sets from unstructured data (see the parsing sketch following this list)
  • Build data models and enhance standard schemas across different technologies
  • Normalize and harmonize data across various platforms
  • Verify data integrity and translations against multiple systems
  • Create and support analytic toolsets outside the SIEM
  • Assist in analyzing and defining data requirements and specifications
  • Assist in analyzing and planning for anticipated changes in data capacity requirements
  • Assist in developing and documenting data standards, policies, and procedures
  • Perform compilation, cataloging, caching, distribution, and retrieval of data within the SIEM and other platforms
  • Analyze data sources to provide actionable recommendations
  • Develop standards and automation for metrics aggregation and dissemination
  • Manage data lineage across various systems
  • Design enhancements, updates, and programming changes for portions and subsystems of data pipelines, repositories, or models for structured and unstructured data
  • Analyze designs and determine the coding, programming, and integration activities required based on specific objectives and established project guidelines
  • Execute and write portions of testing plans, protocols, and documentation for the assigned portion of the application; identify and debug issues with code and suggest changes or improvements
  • Participate as a member of a project team to develop reliable, cost-effective, and high-quality solutions for data systems, models, or components
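The parsing responsibility referenced above can be illustrated with a short, hypothetical sketch: a regular expression that turns unstructured log lines into structured records. The log format and field names are assumptions for demonstration, not a description of AbbVie's environment.

```python
import re

# Illustrative only: turn unstructured log lines into structured records
# using a regular expression. The log format below is a made-up example.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+"
    r"(?P<host>\S+)\s+"
    r"(?P<process>\w+)\[(?P<pid>\d+)\]:\s+"
    r"(?P<message>.*)"
)

def parse_line(line: str):
    """Return a structured record (dict), or None if the line does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

if __name__ == "__main__":
    sample = "2024-05-01T12:00:00Z web01 sshd[4321]: Failed password for invalid user admin"
    print(parse_line(sample))
    # {'timestamp': '2024-05-01T12:00:00Z', 'host': 'web01', 'process': 'sshd',
    #  'pid': '4321', 'message': 'Failed password for invalid user admin'}
```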

Requirements

  • Bachelor's Degree with 6 years' experience; Master's Degree with 5 years' experience; PhD with 0 years' experience; or equivalent relevant work experience
  • Skills in developing data models, dictionaries, and reports within a SIEM platform
  • Experience building and configuring data pipelines and architectures
  • Experience with regular expressions and parsing unstructured data
  • Deep understanding of data administration and data standardization policies
  • Knowledge of database management systems, query languages, table relationships, and views
  • Experience in validating data sets and calculations (see the validation example following this list)
  • Ability to work independently without direction as well as within a group on day-to-day activities
  • Capable of learning new concepts and processes quickly, and adapting to a constantly changing environment
  • Experience with CI/CD Pipelines and Git
  • Experience with database & system integration technologies
  • Prior experience working with ETL in a SIEM environment (ELK, Splunk, Exabeam, etc.)
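For the data-validation requirement above, the sketch below shows one simple way such a check might look: reconciling record counts and keys between a source extract and a downstream platform. The function, field names, and sample data are hypothetical.

```python
# Illustrative only: a simple reconciliation check between a source extract
# and what landed in a downstream platform. Data and key field are hypothetical.

def reconcile(source_records: list, destination_records: list, key_field: str = "event_id") -> dict:
    """Compare record counts and report keys missing from the destination."""
    source_keys = {r[key_field] for r in source_records}
    dest_keys = {r[key_field] for r in destination_records}
    return {
        "source_count": len(source_records),
        "destination_count": len(destination_records),
        "missing_in_destination": sorted(source_keys - dest_keys),
    }

if __name__ == "__main__":
    source = [{"event_id": i} for i in range(5)]
    destination = [{"event_id": i} for i in (0, 1, 3, 4)]
    print(reconcile(source, destination))
    # {'source_count': 5, 'destination_count': 4, 'missing_in_destination': [2]}
```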

Nice-to-haves

  • Prior experience working in an Agile team
  • Familiarity with cybersecurity, privacy principles, cyber threats, and vulnerabilities
  • Demonstrated experience in implementing regular expressions
  • Experience working with development tools and scripting languages (Python / PowerShell / Go)
  • Experience analyzing and pivoting on large sets of data, with the ability to identify patterns, anomalies, and outliers (see the outlier sketch following this list)
  • Ability to identify basic common coding flaws
  • Demonstrated experience in log analysis and parsing of unstructured data (ETL)
  • Amazon Solutions Architect / Azure Data Engineer Associate / Cloud Professional Data Engineer Certification
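For the anomaly and outlier analysis mentioned above, a minimal sketch follows that flags hosts whose event counts deviate strongly from the mean using a z-score. The host names, counts, and 3-sigma threshold are illustrative assumptions only.

```python
import statistics

# Illustrative only: flag hosts whose daily event counts are statistical outliers.
# The counts and the 3-sigma threshold are hypothetical assumptions.

def find_outliers(counts_by_host: dict, z_threshold: float = 3.0) -> list:
    """Return hosts whose count is more than z_threshold standard deviations from the mean."""
    values = list(counts_by_host.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [host for host, count in counts_by_host.items()
            if abs(count - mean) / stdev > z_threshold]

if __name__ == "__main__":
    counts = {f"web{i:02d}": 1000 + 25 * i for i in range(1, 12)}  # 11 ordinary hosts
    counts["jump01"] = 25000                                       # one extreme host
    print(find_outliers(counts))  # expected to flag only 'jump01'
```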

Benefits

  • Opportunity to work remotely
  • Engagement in a diverse, global team
  • Career growth within a leading biopharmaceutical company
  • Involvement in digital transformation initiatives