Aquent - Seattle, WA
posted 4 months ago
Join us as part of a developing initiative to measure fairness in large-scale language technologies. In this role, you will be a member of an interdisciplinary product team that works closely with research to address fairness-related challenges in language technologies. Your primary responsibility will be to conduct in-depth analyses in collaboration with feature teams, developing methods, practices, resources, and tools for identifying, measuring, and mitigating fairness-related harms caused by these technologies.

Drawing on your expertise in linguistics and the social sciences, you will identify patterns across fairness-related harms and distill those patterns into actionable methods and resources. This work will be crucial in ensuring the initiative maximizes its impact across different language technologies, use cases, and deployment contexts. You will also develop metrics for assessing the risk of harm in models, contributing to the broader understanding of AI ethics and fairness in technology.

The position requires a strong foundation in natural language processing, semantics, and analytical linguistics, as well as a commitment to responsible AI practices. You will be expected to apply these skills to advance the initiative's objectives and contribute to the development of fair and ethical language technologies.