For AI to serve communities fairly, researchers need diverse and inclusive datasets to rigorously evaluate fairness in their models. In computer vision and speech recognition applications, researchers need data to assess how well a model works for different demographic groups. This data can be difficult to gather, however, due to complex geographic and cultural contexts, inconsistency between sources, and challenges with labeling accuracy.
Today, we’re releasing Casual Conversations v2, a consent-driven, publicly available resource that enables researchers to better evaluate the fairness and robustness of certain types of AI models. This comprehensive dataset offers a granular list of 11 self-provided and annotated categories for measuring algorithmic fairness and robustness in these AI systems. The release of this dataset is one of the key highlights of our civil rights progress, and it was created in consultation with internal experts in this field. The next generation of the original consent-driven Casual Conversations dataset we released in 2021, Casual Conversations v2 comprises 26,467 video monologues recorded in seven countries by 5,567 paid participants, who provided self-identified attributes such as age and gender. To our knowledge, it’s the first open source dataset with videos collected from multiple countries using highly accurate and detailed demographic information to help test AI models for fairness and robustness.
With Casual Conversations v2, we wanted to provide a multilingual dataset that supports the development of inclusive natural language processing models. In addition to an expanded list of categories, Casual Conversations v2 differs from the first version by including participant monologues recorded outside the United States. The seven countries included in v2 are Brazil, India, Indonesia, Mexico, Vietnam, the Philippines and the US. In the future, we hope to expand the dataset to additional geographies. Another difference in the latest dataset is that participants were given the chance to speak in both their primary and secondary languages.
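To make the evaluation use case concrete, here is a minimal sketch of how a researcher might use self-identified attributes like those in the dataset to compare model performance across demographic groups. The record fields (`age_group`, `correct`) are illustrative assumptions, not the actual Casual Conversations v2 schema.

```python
from collections import defaultdict

def accuracy_by_group(records, group_key):
    """Compute per-group accuracy from records tagged with a group attribute."""
    totals = defaultdict(lambda: [0, 0])  # group -> [num_correct, num_total]
    for rec in records:
        stats = totals[rec[group_key]]
        stats[0] += int(rec["correct"])
        stats[1] += 1
    return {group: correct / total for group, (correct, total) in totals.items()}

def accuracy_gap(per_group):
    """Largest accuracy difference between any two groups (a simple fairness signal)."""
    values = list(per_group.values())
    return max(values) - min(values)

# Hypothetical evaluation results; attribute values are placeholders.
records = [
    {"age_group": "18-30", "correct": True},
    {"age_group": "18-30", "correct": True},
    {"age_group": "31-45", "correct": True},
    {"age_group": "31-45", "correct": False},
]
by_age = accuracy_by_group(records, "age_group")
print(by_age)                # {'18-30': 1.0, '31-45': 0.5}
print(accuracy_gap(by_age))  # 0.5
```

A real evaluation would draw the group labels from the dataset's self-provided annotations rather than hard-coded records, and would typically examine several attributes (age, gender, and so on) side by side.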
Learn more about Casual Conversations v2 on our AI blog.
The post Introducing a More Inclusive Dataset to Measure Fairness appeared first on Meta.