In this edition of Author Talks, McKinsey Global Publishing’s Mike Borruso chats with Dr. Fei-Fei Li, professor of computer science at Stanford University and founding codirector of the Stanford Institute for Human-Centered Artificial Intelligence, about her new book, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI (Flatiron Books/Macmillan Publishers, November 2023). Li proposes a human-centered AI framework that supports new scientific breakthroughs while focusing on augmenting, not replacing, human contributions. She advocates for establishing effective governance models that prioritize human dignity and well-being in tomorrow’s AI. An edited version of the conversation follows.
What’s the significance of the book’s title?
This book is a science memoir, so it captures both the science of AI and the journey of a scientist coming of age. My background is, I guess, not that of a typical kid, so I do traverse different worlds, physically and temporally. As a scientist who has been involved not only in the science itself but also in its social aspects, I see the worlds in different dimensions, so it was very important that I made the title plural: The Worlds. And because I’m a computer-vision AI scientist, “the worlds I see” captures that very essence of seeing.
Why was it important to capture the essence of ‘seeing’?
Capturing the essence is important because people can tell stories when they open their eyes and see a scene. I’m seeing you, and I can tell a story about you. That is a cornerstone of intelligence.
When I was a PhD student, I thought that would be my lifelong goal and dream: to get computers to “see” and tell the gist of the story. So I made it a project with my graduate students when I became a professor. But I was pleasantly surprised to see that the technology had this nonlinear acceleration. We pretty much solved that problem long before my life’s work was finished.
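That problem, generating the gist of a scene in words, is now within reach of off-the-shelf tools. As a minimal illustration, here is a sketch using the Hugging Face transformers library with a publicly available captioning model; the photo path is a placeholder, and this particular model choice is just one example, not the systems Li’s lab built.

```python
# A minimal image-captioning sketch: a computer "seeing" a photo and
# telling the gist of the story. The model choice and file path are
# illustrative assumptions; requires the transformers and torch packages.
from transformers import pipeline

# A publicly available vision-to-language model on the Hugging Face Hub.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

result = captioner("photo.jpg")  # placeholder path to any local image
print(result[0]["generated_text"])  # e.g., a one-sentence description of the scene
```

A decade ago this was a research frontier; today it is a few lines of code, which is the acceleration the answer above describes.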
Why did you write this book?
At the beginning of the pandemic, in 2020, I was invited to write a science book about AI for the public. I spent a year writing a nerdy book while trying to keep a general audience in mind. I showed it to my very good friend and codirector of the Stanford Institute for Human-Centered AI, John Etchemendy. He’s a very wise guy and a philosopher. He called me and said, “You have to rewrite.” I was very shocked and depressed by his comment.
He said, “I know you’re a scientist; you can write about AI. But it’s a missed opportunity if the young people out there from all walks of life—immigrants, women, people of diverse backgrounds—don’t get the chance to see themselves in a kind of AI scientist they rarely encounter.”
I kind of kicked and screamed because I don’t really like writing about myself. I’m still very uncomfortable that this is half memoir, half science. I love the science part of it, but I see what he means, and I identify with that even from my own journey. So I kind of cringed and rewrote, keeping in mind that the book is a vehicle to deliver that voice to young people.
Tell us what you mean by ‘human-centered AI.’
Human-centered AI is a framework for how I believe we should do AI; it’s also the name of an institute I co-established with John Etchemendy and many Stanford faculty about four and a half years ago. The idea is to recognize that AI technology is very important and will affect human lives and society.
There are also a lot of unknowns still to be explored in AI. How do we put guardrails around AI? How do we develop tomorrow’s AI? How do we move into the future so that this technology can maximally benefit humanity while we govern and mitigate the risks? We are calling for a human-centered AI framework.
This framework looks at AI in three aspects. One is that it recognizes AI as a multidisciplinary field; it’s not just a niche computer science field. We use AI to do scientific discovery, we want to understand AI’s economic impact, and we want to use AI to superpower education and learning. It’s deeply interdisciplinary. We want to make sure we study and forecast what’s coming.
We also recognize that the most important use of a tool as powerful as AI is to augment humanity, not to replace it. This is very much a theme of my book. When we think about this technology, we need to put human dignity, human well-being—human jobs—in the center of consideration. That’s the second part of human-centered AI.
Last but not least, intelligence is nuanced; it is complex. We are all excited by large language models and their power. But we should recognize that human intelligence is very, very complex. It’s emotional, it’s compassionate, it’s intentional, it has its own blind spots, it’s social. When we develop tomorrow’s AI, we should be inspired by this level of nuance instead of by a narrow view of intelligence. That’s what I see as human-centered AI.
What can organizations do to ensure they’re using AI ethically?
I think putting guardrails and governance around such a powerful technology is necessary and inevitable. Some of this will come in the form of education. We need to educate the public, policy makers, and decision makers about the power, the limitations, the hype, and the facts of this technology.
Then we need to establish norms. Every organization carries its own value system, and if it uses AI—and I predict almost all organizations will somehow use or be affected by AI—it needs to build those values into its norms. I suppose most organizations would want AI to be fair, to respect privacy, to not bring harm, and to have some ability to predict and forecast unintended harmful consequences.
So these are the norms we need to establish in organizations. Finally, there will be a regulatory framework, and those will be laws. That’s a big topic we can dive into, but the bottom line is that we also need those kinds of legal guardrails.
How urgently are guardrails needed?
I feel it’s very urgent. But I don’t speak from a viewpoint of gloom and doom or an existential, Terminator-style crisis. I intellectually respect [the need for guardrails], and I’m not saying this just to be nice; I intellectually respect it. That’s part of the scholarly, intellectual work humanity has always engaged in when we have new innovations and discoveries.
But I do think the urgency lies in current social issues, and some of them carry catastrophic risks. For example, disinformation’s impact on democracy, jobs and workforces, biases, privacy infringement, weaponization: these are all very urgent. A more urgent issue that many don’t see from a risk point of view—but I do—is the lack of public investment. We have an extreme imbalance of resources between the private sector and the public sector. That is going to bring harm.
What were some of the most creative applications of AI you saw when you worked at Google Cloud?
I learned a lot at Google. Even though that was only in 2017 and 2018, the AI then was not like today’s AI. But in a cloud business, you get to work with enterprises of all kinds. For example, one of my favorite stories from working there is about a Japanese cucumber farmer who used TensorFlow, I believe, and an object recognition system to help sort cucumbers. That is a very endearing example.
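To make that story concrete, here is a minimal sketch of the kind of image classifier it describes, using TensorFlow’s Keras API. Everything specific here is an illustrative assumption rather than a detail of the farmer’s actual system: the folder name cucumber_photos, the per-grade subfolders, the image size, and the tiny network.

```python
# A minimal image-classification sketch in the spirit of the
# cucumber-sorting story. All names and sizes are hypothetical.
import tensorflow as tf

# Assumed layout: cucumber_photos/grade_a/*.jpg, cucumber_photos/grade_b/*.jpg, ...
# Keras infers one class label per subfolder.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cucumber_photos", image_size=(96, 96), batch_size=32)
num_classes = len(train_ds.class_names)

# A small convolutional network; hobby-scale sorting doesn't need more.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),           # map pixel values to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes),             # one logit per cucumber grade
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, epochs=10)
# At sorting time, model.predict() on a new photo yields a score per grade.
```

In practice a sorter would also need a held-out validation set and a camera pipeline, but the core idea really is this small: labeled photos in, a grade prediction out.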
We also had major financial services firms, like insurance companies, using AI to help productivity, assess damage, and deliver better service to their users and customers. Also during my time at Google, a small team of researchers used a face recognition algorithm to call out Hollywood’s biases. They looked at hundreds of hours of movies and studied how much time male actors were given—screen time, or screen talk time—compared with female actors.
This is the kind of work humans cannot do by hand. It’s just too daunting. Yet AI helps to call out these kinds of biases. Of course, my personal area of application is healthcare, and there are many, many ways of using AI.
Did any of those uses surprise you?
I don’t know if surprised is precisely the word, but it’s definitely amazing to see the wide range of business uses. I would never have thought that agriculture, for example, could do so much with AI and machine learning to optimize energy use. From the mundane, like digitizing documents, all the way to the incredible, like saving lives and discovering new drugs—the surprise is the breadth and depth that this technology can bring us.
In the meantime, and I don’t know if it’s because I was working at Google at the time, 2017 and 2018 were the first years when we saw the messiness of this technology coming of age. As a technologist—speaking for myself, at least—you have to recognize that messiness and recognize that we have a shared responsibility in ushering society into the AI era. That shared responsibility will put us in areas we were not trained for, such as ethics and regulation.
Early in your career, you chose academia over some higher-paying corporate opportunities. Any regrets?
Looking at where I am and at the exciting world of AI, I have absolutely no regrets. Was it scary? Yes, and the book includes details about my family’s situation and my mom’s health.
There were many moments when I was walking in the dark, figuratively speaking, and wondering if I’d made the right choice. So it wasn’t easy during those stretches of the journey. But what makes me really grateful and excited is that now I’m working with people across all fields, including at McKinsey, and that is very rewarding.
Having interviewed at McKinsey all those years ago, do you have any advice for future applicants?
I completed the entire McKinsey problem-solving case study all the way to the partner interview. I did the whole thing, and I got a job offer.
Management consulting is a wonderful job, and the more we collaborate with McKinsey, the more I appreciate it. But that young Fei-Fei was a scientist at heart. For all young people out there, find your own passion and North Star, and be brave, be persevering, be persistent about it.
Your mother once asked you, ‘What else can AI do to help people?’ What’s your answer?
My mom really represents the general public: an elderly lady who doesn’t know anything about technology, despite what her daughter does. It’s so illuminating. Since this book was written, I think the world has seen how AI can help people. It does have problems; I’m not going to gloss over that. We need to collectively deal with all the possible harms; some of them I call catastrophic risks.
But AI helps children to learn. It’s so fascinating: you can just put a kid in front of, for example, ChatGPT or any app that uses this kind of technology, and say, “Tell me more about what fusion is.” That is just a super exciting way of helping the world.
We’re seeing the medical profession using AI technology. I have doctor friends at Stanford Medicine telling me that medical summaries are very painful for doctors and nurses; they take away time from patients. Now you can get a language model to help. AI can be applied to new scientific discovery—such as climate solutions—or just for help composing emails. So, yes, the applications of it are blossoming.
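As one concrete illustration of the summaries-and-emails point, here is a minimal sketch using the OpenAI Python client to draft a plain-language summary from a note. The model name and the note text are placeholders, and this is not the specific tooling Stanford Medicine uses; any real clinical deployment would need privacy safeguards and clinician review.

```python
# A minimal sketch of using a language model to draft a medical-style
# summary. The model name, the note text, and the prompt wording are
# illustrative assumptions, not a production clinical workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

visit_note = (
    "Pt presents w/ 3 days productive cough, low-grade fever 38.1C, "
    "no SOB. Lungs: scattered rhonchi. Plan: supportive care, f/u 1 wk."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any capable model works
    messages=[
        {"role": "system",
         "content": "You draft concise after-visit summaries in plain language."},
        {"role": "user", "content": f"Summarize this visit note:\n{visit_note}"},
    ],
)
print(response.choices[0].message.content)  # a draft for a clinician to review
```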
What is AI4ALL doing to overcome AI’s ‘sea of dudes’ problem?
AI4ALL is a not-for-profit organization that I started with my former student Olga Russakovsky and my colleague Rick Sommer back in 2015. Yes, we recognized the ‘sea of dudes’ problem in AI. Frankly, that problem is possibly even worse right now. AI4ALL is just one small effort, and I really hope there will be more. We need to invite more young people from diverse backgrounds to participate in this technology.
I think it was in 2015 or 2016, during AI4ALL’s summer camp at Stanford, that I walked in during a lunch break on the young high school girls studying AI in our lab. It was such a heartwarming scene to see these 14- and 15-year-olds braiding each other’s hair while talking about AI algorithms and neural networks.
It was so much fun to watch. This is why I want to encourage young people, and I think getting them involved while they’re young is important. They are in their formative years, and they need to be excited by the possibilities of this technology. This is part of the reason I wrote the book.
You say in the book that many great physicists developed late-career interests in the mystery of life. How did that inform your life’s work?
I owe my whole career in AI to the physics I studied and the physicists I studied with. I was a physics lover, and I’m still passionate about physics. In hindsight, I realize that what I love about physics is the way of thinking.
It’s that audacity to ask the most ambitious, curiosity-driven questions: “What is the beginning of time? Where is the limit of the universe? What are the smallest subatomic particles? What is the unifying force of the universe?” Not that we have found the answers.
But it’s very audacious, and I love that. It’s so curious and whimsical. These giants of physics—from Albert Einstein to [Erwin] Schrödinger to Roger Penrose—were also fascinated by questions beyond atoms and the physical world: questions about intelligence and life.
That totally fascinated me when I was a college student. I realized that I, too, was more interested in the audacious question of intelligence than in the other audacious questions. I’m still curious about the boundary of the universe, but the question of intelligence is what captured my imagination. That’s when I shifted my intellectual interests from physics to AI.