Disaster relief, remote-healthcare diagnostics, tracking rhino poachers, upping student achievement—each of these disparate activities could get a big boost from artificial intelligence (AI). A recent discussion paper from the McKinsey Global Institute (MGI), “Notes from the AI frontier: Applying AI for social good,” lays out just how AI can help tackle some of the world’s most challenging social problems, analyzing 160 use cases.
The good news: about one-third are already being used today. Still, there’s much more that can be done, both to implement these solutions and to fully understand the breadth of what they can do for social-good organizations.
We gathered questions about this topic from our social-media audience around the globe for Michael Chui, an MGI partner based in McKinsey’s San Francisco office and one of the report’s authors.
Watch the videos or read the transcripts below to get the inside scoop on actionable ways people can bring AI into their workstreams and organizations to help ameliorate social problems.
What types of AI applications could have the most benefit for social good?
—Dana in Australia
Michael Chui: First of all, what we found was that there is a huge number of different applications. We looked at 160, and they cover all of the United Nations Sustainable Development Goals. Among the ones we’ve analyzed, some that we think have especially great potential are the applications of computer vision.
And that’s everything from disaster relief, where you might use satellite imagery to identify where roads are passable or where there is an emergency such as a building coming down, to public health, where there is the ability to remotely diagnose conditions ranging from skin cancer to many other diseases. Those are some of the applications where we see lots of potential.
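To make the computer-vision point concrete, here is a minimal sketch, in Python, of how a relief team might triage satellite-image tiles with an image classifier. The class labels, tile path, and the fine-tuning step the model would need are all hypothetical assumptions; this illustrates the technique, not a specific pipeline from the report.

```python
# A minimal sketch of computer-vision triage for disaster relief:
# classifying satellite-image tiles by road/building condition.
# Labels, paths, and the fine-tuned weights are hypothetical.
import torch
from PIL import Image
from torchvision import models, transforms

LABELS = ["passable_road", "blocked_road", "collapsed_building"]  # hypothetical

# Generic pretrained backbone; assume it has been fine-tuned on
# labeled disaster imagery (fine-tuning not shown here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def triage_tile(path: str) -> str:
    """Return the most likely condition for one image tile."""
    tile = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(tile)
    return LABELS[int(scores.argmax())]

# Hypothetical usage: print(triage_tile("tiles/sector_12.png"))
```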
And in fact, one of the surprising things was that you can deploy these AI technologies not only on more complex types of data such as imagery but even on good old-fashioned databases. That’s a place where nonprofits often have lots of data, just sitting in traditional databases, and applying AI to it is behind some of the surprising things we’ve seen nonprofits do.
Even something as straightforward as routing can help make sure that people get to the places of need more quickly and more efficiently. So there are applications all across the board.
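As a toy illustration of the routing idea, here is a minimal sketch that treats road segments, the kind of rows that might sit in an ordinary database, as a weighted graph and finds the fastest route to a site in need. The place names and travel times are hypothetical.

```python
# A minimal routing sketch: road segments from a traditional database,
# modeled as a weighted graph. All names and times are hypothetical.
import networkx as nx

segments = [  # (from, to, travel time in minutes)
    ("depot", "village_a", 25),
    ("depot", "village_b", 40),
    ("village_a", "clinic", 30),
    ("village_b", "clinic", 10),
]

G = nx.Graph()
for u, v, minutes in segments:
    G.add_edge(u, v, weight=minutes)

route = nx.shortest_path(G, "depot", "clinic", weight="weight")
minutes = nx.shortest_path_length(G, "depot", "clinic", weight="weight")
print(route, minutes)  # ['depot', 'village_b', 'clinic'] 50
```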
How can nonprofits prepare for the next wave of technological disruption? How can technologies be used to promote more ethical giving?
—Beth in the United States
Michael Chui: First of all, we did find in our research that many nonprofits and socially oriented organizations need to increase their capacity around technology in general and AI in particular. And perhaps some of the organizations that have this capability, and have people who know these technologies, can encourage those people to contribute their time and their expertise.
The other thing that we know is that, for deep learning in particular, these organizations need more data, and being able to access that data will be incredibly important.
And one of the other things that we discovered was that these technologies are extremely general. You can deploy AI to improve the types of offerings, the types of communications, and the targeting so that you actually do better fund-raising. So that’s at least one application of AI for nonprofits.
What are the additional talent needs that AI might bring? Will we see more collaboration between sectors?
—Rahul in the United States
Michael Chui: Certainly, one of the things that we discovered is that a bottleneck to capturing more value from AI for social good is making sure we have the talent in place, particularly in these social-good organizations, to take advantage of AI.
So we do think that it’ll basically need to be an all-of-the-above type of strategy. Sometimes you’ll be able to train people who are already in these organizations. Sometimes you’ll be able to hire them. Sometimes you’ll need to look for partners.
And to the point about crossing sectors, I think, and hope, we’ll increasingly see organizations, whether tech firms or other companies with data-science and AI capabilities, allow and encourage their employees to give their time, their talents, and their expertise to social-good organizations, furthering AI’s important potential to help the world and its citizens.
What are some of the obstacles that need to be overcome to use AI for social good?
—Jonathan in the United States
Michael Chui: We identified a number of different obstacles, or bottlenecks, that have to be overcome. Chief among them, in many cases, is access to the data that you need in order to train these AI systems.
In many cases, that data exists; it’s just locked away. It might be locked away in a commercial organization that sees value in selling it, or simply because of bureaucratic inertia. Governments, for instance, hold lots of data that could be valuable for addressing some of these social-good challenges, and yet it just hasn’t been made available. So that’s one of the challenges we identified during our research.
Another one is talent. We’ve talked about it before. But you do need people who have these skills, this expertise, this understanding in order to deploy these technologies. And they’re in short supply. They’re paid a lot in the for-profit realm. And so, how can we start to bring more of that talent to bear, more of that data to bear, in order to use AI for social good?
And then, finally, there is a set of “last-mile” challenges, not all of which have anything directly to do with AI. Whether it’s funding, connectivity, or simply having the number of people in place that you need to effect change: you might have superior insight from AI, but unless you can change conditions on the ground, it’s not going to move the ball forward in terms of social good.
That said, the other thing we identified in our research: there is a set of risks. In fact, in the worst case, some of these applications of AI, if misused—either intentionally or unintentionally—could actually hurt the people that you’re trying to help the most.
If you’re using AI to try to identify people who are vulnerable in one way or another, whether they’re refugees or victims of some sort of violence or crime, that very same technology can be used by people who would do those vulnerable people harm.
And so there are a number of these other challenges, including implicit or explicit bias embedded in the training data that you use to design these systems. There are questions about privacy. All of those are real risks.
The number-one thing you can do is try to identify what those risks are, right? That’s part of a risk-management approach. And then, for each one, there’s a different way to mitigate it. As powerful as this technology is, we’ll simultaneously need to understand those risks and be able to mitigate them.
How can we make sure that AI doesn’t pick up the same bad habits as humans?
—Anne in the United States
Michael Chui: Number one, it is a real challenge. And one of the things that is perhaps surprising is that oftentimes bias gets introduced into these systems not because a software engineer codes up rules that end up in the system but because it’s implicit in the training data used to create them.
We talk about machine learning, but that term is actually a little bit misleading. The machine doesn’t run off, go learn something, and come back; we train it, and we train it with data. The problem, often, is that the data itself incorporates bias. And then you do get systems that pick up “the bad habits”: maybe those of people but, more importantly, the bad habits incorporated in that data.
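To illustrate what bias in the training data can look like in practice, here is a minimal sketch of a pre-training audit: checking whether the historical decision labels a model would learn from already differ by group. The column names and values are hypothetical.

```python
# A minimal sketch of auditing training labels before training.
# Columns and values are hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],  # historical human decisions
})

# If positive rates already differ by group, a model trained on these
# labels will tend to reproduce that gap.
print(training_data.groupby("group")["label"].mean())
# group a: 0.75, group b: 0.25
```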
There isn’t one straightforward way to guard against it. Number one, you have to understand what the bias you want to protect against might be. What are the classes of individuals whose characteristics you want to make sure are not incorporated into your systems’ decision making? And then do some testing.
Understand what the data is, and then understand the system and the models that you’ve created using that data. And, in many cases, it can be helpful to have third-party validation of those models as well.
So number one is just understanding: being very clear about the ways in which you don’t want bias to show up. Because, after all, you do want certain types of bias in order to do classification; that’s the good type of discrimination.
And then, secondly, be able to test it, both the data that you have and the systems you’ve created. And it is often helpful to have third parties do some of that testing with you.
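As one example of what that testing could look like, here is a minimal sketch of a demographic-parity check that compares a model’s positive-decision rates across groups. The predictions and the tolerance threshold are hypothetical; a real audit would choose metrics appropriate to the application.

```python
# A minimal fairness test: compare positive-decision rates across
# groups (demographic parity). Data and threshold are hypothetical.
from collections import defaultdict

predictions = [  # (group, model decision); 1 = approved for aid
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)   # {'group_a': 0.75, 'group_b': 0.25}
if gap > 0.2:  # hypothetical tolerance
    print(f"Flag for review: decision-rate gap of {gap:.0%}")
```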
How do we ensure AI systems meet our ethical standards?
—Jackie in South Africa
Michael Chui: The truth is these are powerful systems. In a certain sense, they are imbued with the values of the people who design and deploy them. And so, to a certain extent, this is not a technical problem at all.
We do have to understand what our goals are when we design these systems and then be able to point these powerful tools in such a way that they’re actually solving those problems.
One of the things we identified as we looked at this research is that AI and these technologies have tremendous potential for actually doing social good. But, number one, we have to intentionally focus on using these technologies to do so.
And then we also have to make sure, in the implementation, in the actual work that we do with these technologies, that they don’t exhibit negative impacts, whether bias, privacy violations, or discrimination, while we’re still trying to do social good.