
Harnessing AI for Social Good Starts With Inclusive Data

Depending on the data it learns from, artificial intelligence (AI) can produce biased responses and outcomes. Ensuring AI models are trained on inclusive data from the start mitigates harm, allowing the tech to be used to create a positive social impact.
An illustration of a computer surrounded by pop-up windows. (Image: summertime flag/Unsplash)

Artificial intelligence (AI) can only be as unbiased as the data it learns from. Training AI to perform tasks requires feeding it a lot of data so it can recognize patterns and make decisions. This leaves it open to carrying over existing human biases, which make their way into its projections and can lead to erroneous, harmful conclusions. 

Most people are probably familiar with ChatGPT, which was trained on an estimated 570 gigabytes of information from across the internet and still manages to include cultural biases in its responses. But smaller AI models designed for more specific purposes also rely on curated datasets that can be problematic — like health data in which women or minority groups are underrepresented or historical arrest data that reinforces patterns of racial profiling.
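To make that concrete, one simple first check is to compare a dataset’s demographic makeup against the population a system is meant to serve. The sketch below is purely illustrative and not drawn from any tool mentioned in this story; the column name, the baseline figures and the 0.8 threshold are all assumptions.

```python
import pandas as pd

# Toy "health records" dataset: 70 percent male, for illustration only.
records = pd.DataFrame({"sex": ["M"] * 700 + ["F"] * 300})

# Assumed makeup of the population the system is meant to serve.
population_baseline = {"M": 0.49, "F": 0.51}

# Share of each group actually present in the data.
shares = records["sex"].value_counts(normalize=True)

for group, baseline in population_baseline.items():
    share = shares.get(group, 0.0)
    # Flag any group whose share of the data falls well below its
    # share of the population (threshold of 0.8 chosen arbitrarily).
    status = "underrepresented" if share / baseline < 0.8 else "ok"
    print(f"{group}: {share:.0%} of data vs. {baseline:.0%} of population -> {status}")
```

Run on this toy data, the check flags women as underrepresented (30 percent of records versus 51 percent of the population), the kind of gap that can quietly skew whatever a model learns downstream.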

When humans collect data or design systems to collect data that isn’t inclusive, the AI learning from that information will produce biased responses and outcomes. Medical diagnosis systems trained on data from patients who are mostly white and male, for example, could produce less accurate results for people of color and women. But some developers are creating inclusive and equitable AI solutions from the start, demonstrating the difference a proactive approach can make.
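A toy sketch can show why that happens; the data and numbers below are invented, not a real diagnostic system. When one group dominates the training set and the predictive signal differs between groups, a model can score well overall while performing near chance on the minority group. Disaggregated evaluation, meaning scoring each group separately, surfaces the gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    # Two features per synthetic "patient"; the label depends on them
    # differently per group (the flip sign), standing in for predictive
    # signal that varies across populations.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + flip * X[:, 1] > 0).astype(int)
    return X, y

# Training data: group A is 95 percent of the sample, group B only 5 percent.
Xa, ya = make_group(1900, flip=+1)
Xb, yb = make_group(100, flip=-1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate each group separately on balanced held-out data.
for name, flip in [("group A", +1), ("group B", -1)]:
    Xt, yt = make_group(2000, flip)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))

# The model fits the majority group's pattern and scores close to chance
# on group B, even though its overall accuracy looks respectable.
```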

“It's an equation, and it's an equation that requires inclusive inputs and inclusive output,” said Payal Dalal, executive vice president of global programs at Mastercard’s Center for Inclusive Growth, the financial services company’s social impact arm.

Payal Dalal, executive vice president of global programs at the Mastercard Center for Inclusive Growth. (Image courtesy of Mastercard Center for Inclusive Growth.)

Beyond necessary harm reduction, there’s also the opportunity to use the technology to promote inclusion and create a positive social impact, Dalal said. That is something the Center for Inclusive Growth emphasizes across its AI work, including equipping small businesses with digital tools and funding innovative models.

Choosing the right datasets is a balancing act, Dalal said. With innovation and responsibility both top priorities, the question is how the two can converge.

“Ensuring the right data sets are used in AI development is a process that requires openness, transparency, dialogue and collaboration,” she said. “If we want to ensure that our AI solutions are inclusive, equitable and aligned with our commitment to being a force for good, then we need to bring in diverse voices and perspectives, including those of the community and end user.”

The Center for Inclusive Growth relies on rigorous tools, protocols, training and education, and robust governance strategies to achieve this, Dalal said. As a final assurance that bias hasn’t found its way into the datasets and AI solutions it uses, the team reviews each system that is put in place.

Encouraging inclusive AI with the AI2AI Challenge 

The Mastercard Center for Inclusive Growth and Data.org, a social impact project backed by the center and The Rockefeller Foundation, are working together to encourage a proactive approach to creating inclusive and equitable AI solutions through the Artificial Intelligence to Accelerate Inclusion Challenge (AI2AI Challenge). It’s a search for models that are already operational and focused on tackling disparities and promoting economic growth. Winners receive technical assistance, mentoring and $200,000 to help them scale up.

More than 500 AI models were submitted, representing 82 countries from every region of the world. “We saw lots of proposals around financial inclusion, education, gender, health, agriculture,” Dalal said. “In Africa, we saw a lot around climate resilience, and we saw a lot of proposals around socioeconomic well-being through education and financial inclusion. North America … we saw a lot of equity and social justice initiatives … In Asia, we saw a lot of education proposals. And in Latin America, there was a lot of work around how to use AI in support of micro and small businesses.”

AI2AI Challenge winner Buzzworthy Ventures created Beekind, an app that provides beekeepers in India with important data for honey production and keeping hives healthy. (Image courtesy of Data.org)

The five winners demonstrate the potential for AI to tackle a host of economic, agricultural and social problems by promoting equity and inclusivity and assisting marginalized communities. Beekind, an app tailored to small-scale beekeepers in India, provides data and analysis on keeping hives healthy, producing more honey and adapting to changing conditions. Another winner, Signpost, helps displaced people navigate services and regulations as they search for safety during a crisis. And Quipu is helping micro ventures in Colombia access capital.

The other two winners are focused on health outcomes. Link Health connects people in the United States with financial assistance to support their overall well-being. And IDinsight equips health extension workers in Ethiopia with advice on difficult cases via a call center that runs on AI.

The diversity of winners and submissions shows there are plenty of ways to use AI for social impact, and people are already actively working on such solutions. 

AI2AI Challenge winner Link Health uses AI to connect people in the U.S. with federal financial assistance programs related to health and nutrition. (Image courtesy of Data.org)

Preventing bias isn’t a universal priority yet

Unfortunately, proactive approaches to weed bias out of datasets and AI are not universal. A 2022 University of Southern California study found that up to 38 percent of so-called “facts” in two databases commonly used in AI training were prejudiced or biased, discriminating on the basis of characteristics like religion, gender, race and profession. An analysis by a team of Chinese and American researchers found that 83 percent of AI models used to diagnose mental health conditions based on images of a person’s brain carried a high risk of bias. And these are just two examples.

Still, Dalal doesn’t think developers are intentionally failing to correct for bias. Rather, the proliferation of bias in AI points to the need for a more diverse pool of developers and technologists. When diverse perspectives are involved, the resulting solutions are assessed more thoroughly, she said.

“How do we make sure that there's a really robust pipeline of data scientists and those that work with data who are oriented around inclusive growth?” she remarked. “That’s where I would start, making sure that there's a really great pipeline of not only data sets but people who are oriented around social impact.” 

From there, users must harness the technology for social good, Dalal said. 


Riya Anne Polcastro is an author, photographer and adventurer based out of Baja California Sur, México. She enjoys writing just about anything, from gritty fiction to business and environmental issues. She is especially interested in how sustainability can be harnessed to encourage economic and environmental equity between the Global South and North. One day she hopes to travel the world with nothing but a backpack and her trusty laptop.
