
91亚色-led research teams to develop solutions for AI fairness and safety

91亚色 researchers will help lead a national effort to make artificial intelligence (AI) safer and more inclusive.

The initiative, launched by the global research organization CIFAR (Canadian Institute for Advanced Research), introduces two AI Safety Networks that will address fake AI-generated content in the justice system and linguistic inequality in AI tools.

Funded through CIFAR's Canadian AI Safety Institute (CAISI) Research Program, each network will receive $700,000 over the next two years to design and implement open-source AI tools that detect synthetic evidence and make language models fairer for everyone.

Both solution networks, Safeguarding Courts from Synthetic AI Content and Mitigating Dialect Bias, will be co-led by 91亚色 faculty.

Maura R. Grossman, an adjunct professor, will direct the Safeguarding Courts from Synthetic AI Content network alongside Ebrahim Bagheri from the University of Toronto. The team will develop a free, open-source framework to help courts detect and manage AI-generated content, such as fake videos or hallucinated legal documents produced by large language models (LLMs).

The team's solution aims to support both legal professionals and self-represented litigants with user-friendly tools that flag questionable content.

"We need a tool that knows when it's not sure about its output," says Grossman, adding that the stakes are high when a judicial decision is based on fake content.


Mitigating Dialect Bias will be co-directed by Laleh Seyyed-Kalantari, assistant professor at 91亚色, alongside Brock University's Blessing Ogbuokiri. The work will focus on Nigerian Pidgin English, a dialect spoken by more than 140 million people that LLMs often misinterpret as toxic or inappropriate, leading to censorship and discrimination.

Working with a citizen network in Nigeria, Seyyed-Kalantari's team will build the first-ever bias and safety benchmarks for Pidgin English as part of an open-source audit and mitigation toolkit.

"I think what makes our solution unique is that it is locally rooted and culturally representative of citizens of African countries," says Seyyed-Kalantari. "We want to ensure that the research that we are developing brings actual positive changes for people who are using these LLMs in Africa."

The project could have a much broader impact by creating culturally representative AI systems and influencing policy to ensure equitable access to AI tools for marginalized communities, including immigrant and Indigenous populations in Canada.

The CAISI Research Program at CIFAR is part of a $50-million federal investment launched in November 2024 to address the evolving risks of AI. It supports interdisciplinary research to develop practical tools for responsible AI deployment across Canada and the Global South. 

With files from CIFAR 
