Ribbon Communications Operating Company - Westford, MA

posted 5 months ago

Full-time - Mid Level
Westford, MA
51-100 employees
Professional, Scientific, and Technical Services

About the position

Ribbon Communications is seeking a Machine Learning Engineer to join our Ribbon Analytics development team. This role is pivotal in designing and developing machine learning solutions that enhance our Ribbon Analytics product, a big data network analytics and security tool. The product collects, processes, and reacts to vast amounts of network data, using machine learning techniques to analyze trends and detect anomalies, thereby mitigating security threats and fraud within customer networks.

As a Machine Learning Engineer, you will work with cutting-edge technologies in the Big Data and Analytics field, employing contemporary machine learning and data pipeline frameworks to create scalable analytics solutions that address real-world challenges in telecommunications. The ideal candidate is self-driven, has a strong work ethic, and has a keen interest in developing highly scalable machine learning applications. You should be enthusiastic about working with new technologies and comfortable in a dynamic work environment.

In this role, you will collaborate closely with Product Managers, Architects, and Data Engineers to understand customer use cases and translate them into efficient, scalable machine learning solutions. You will evaluate and implement machine learning capabilities such as anomaly detection, classification, and clustering, while monitoring and refining the performance, accuracy, and reliability of these solutions. You will also architect and develop large-scale, distributed data processing and machine learning pipelines using technologies like Apache Trino/Impala, Flink, and Airflow, and design efficient data ingestion, transformation, and storage solutions for both structured and unstructured data. Staying current with the latest trends and best practices in machine learning and data engineering will be essential to your success in this role.

Responsibilities

  • Collaborate with Product Managers, Architects, and Data Engineers to understand real customer use cases and translate them into efficient, scalable machine learning solutions.
  • Evaluate, recommend, and implement machine learning capabilities such as anomaly detection, classification, and clustering, utilizing the latest tools.
  • Monitor and assess the performance, accuracy, and reliability of machine learning solutions, refining them as required.
  • Architect and develop large-scale, distributed data processing/machine learning pipelines using technologies like Apache Trino/Impala, Flink, and Airflow.
  • Design and implement efficient data ingestion, transformation, and storage solutions for structured and unstructured data.
  • Stay up-to-date with the latest trends, technologies, and industry best practices in the machine learning and data engineering domains.
  • Participate in code reviews, design discussions, and technical decision-making processes.

Requirements

  • Degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field, with a specialization in Machine Learning. Master's degree preferred.
  • Strong understanding of MLOps principles.
  • Strong programming skills in Python and Java as they relate to machine learning, data science, and big data frameworks.
  • Strong understanding of SQL for analytics use cases.
  • Strong foundation in mathematics, statistical analysis, and probability.
  • Extensive experience with machine learning frameworks/libraries such as XGBoost, PyTorch, and TensorFlow.
  • Strong understanding of database technologies (SQL and NoSQL).
  • Ability to independently decompose large problems and progressively work a solution through to completion.
  • Ability to quickly pick up new tools and technologies for rapid prototyping.
  • Ability to employ good judgment in efficiently selecting methods, techniques, and evaluation criteria for obtaining results.

Nice-to-haves

  • Experience with distributed systems, streaming systems, and data engineering tools, such as SQL, HDFS, S3, Kubernetes, Airflow, Kafka, Flink, etc.
  • Experience developing microservice architectures (Kubernetes, REST APIs).
  • Experience developing machine learning solutions on OpenShift and AWS.
  • Experience with Deep Learning frameworks.