TikTok - Seattle, WA
TikTok is the leading destination for short-form mobile video, and our mission is to inspire creativity and bring joy.

The Trust and Safety R&D team is a fast-growing unit responsible for building machine learning models and systems that identify and defend against abuse and fraud on our platform. Our mission is to protect billions of users and publishers around the world every day. We leverage state-of-the-art machine learning technologies to analyze the vast amount of data generated on TikTok and detect abuse, ensuring the best possible user experience for everyone.

In this role, you will own an entire sub-module of the moderation system or a research direction, including but not limited to large language models (LLMs) and their application to safety and moderation system iteration.

Responsibilities:
- Collaborate with team members to design next-generation moderation systems using new technologies.
- Build neural network models and LLM-based models to address TikTok's online safety challenges.
- Partner with engineering teams to implement model pipelines and deploy services at scale.
- Work closely with the product team to define objectives and strengthen our trust and safety strategy.
- Collaborate with data analysis teams to understand and identify data patterns.