Aquent - Seattle, WA
posted 4 months ago
Join us as part of a developing initiative to measure fairness in large-scale language technologies. In this role, you will be a member of an interdisciplinary product team, working closely with researchers to advance the initiative. Your primary responsibility will be to conduct in-depth analyses in partnership with feature teams, focusing on identifying, measuring, and mitigating fairness-related harms caused by large-scale language technologies.

You will leverage your expertise in linguistics and the social sciences to identify patterns across fairness-related harms. This involves developing methods, practices, resources, and tools that can be applied across different language technologies, use cases, and deployment contexts. The goal is to maximize the initiative's impact by capturing as much information as possible about potential harms in real-world applications and producing metrics that assess the risk of harm in models.

Your experience with AI ethics and fairness will be crucial as you generate fairness measurements using a structured process that includes lexicons and features specific to the templates you will be working with. This role requires a strong background in linguistics, particularly in natural language processing and semantics, as well as the analytical skills to navigate the complexities of fairness in AI.