In this episode of the McKinsey Podcast, Simon London speaks with McKinsey partner Bryan Hancock and senior partner Bill Schaninger about why people analytics matters even more in a world awash with data and more advanced computing and analytics capabilities.
Podcast transcript
Simon London: Hello, and welcome to this episode of the McKinsey Podcast, with me, Simon London. A certain breed of executive has always looked down on the people side of management. They see it as soft, squishy, and lacking in hard data. But while that may have been somewhat true 20 or 30 years ago, it certainly isn’t true today. There really is a revolution in progress as companies start to apply big data and advanced analytics to the human side of the enterprise. To talk about the promise and pitfalls of people analytics, I sat down in Philadelphia with McKinsey partners Bryan Hancock and Bill Schaninger. As we’ll hear, Bryan and Bill are optimistic, with the caveat that getting real value from people analytics requires not only technical smarts but also a solid understanding of organizational behavior and a pretty good grasp on how the business actually makes money.
So Bryan and Bill, welcome back to the podcast.
Bryan Hancock: Thank you.
Bill Schaninger: Thanks for having us.
Simon London: So an obvious first question for a nonspecialist is: When we’re talking about people analytics, what are we talking about?
Bryan Hancock: What we’re talking about is bringing data on people to specific business decisions. It can be a decision to hire. It can be a decision on how to configure a team. It can be a decision on where to source people. But it is bringing a data set to people decisions. It’s as simple as that.
People analytics has existed as a concept for a long time. It’s not that there’s anything radical about the idea. What’s cool is that now there are new sources of data and advanced computing power that allow you to do more with the data. But the underlying idea of using data to inform people-related business decisions is not new.
Bill Schaninger: We make decisions every day about who we hire, how we deploy them, what teams we put them in, what we have them working on. Then we sit in judgment of their performance. Every one of those decisions can be made better with data. Not all those decisions are equally important, so you don’t have to bring data to bear on all of them, but you should probably bring it to bear more than we are doing today.
Simon London: As you say, it’s not an entirely new idea. But what’s the opportunity today? What makes this a particularly important and interesting topic?
Bryan Hancock: I think what makes it a particularly interesting and exciting topic today is a combination of a few factors.
One, there really are new advanced computing capabilities that allow you to factor in more variables and determine more of what really matters. Of course, you can’t just do that independently. It has to be linked to good research on what matters, but the advanced computing power does matter.
There are new and different sources of the data—super exciting and interesting. Those matter.
Also, there is broader acceptance of advanced analytics, be it in marketing or in sports. That is making people think, “Oh, if I can do this to understand my consumer or the quality of my first baseman, why can’t I do it to understand what makes a good salesperson tick?”
Simon London: Talk a little bit about the sources and the categories of data. What kind of data are we talking about here?
Bill Schaninger: There are some interesting pools of data that, in many cases, we hadn’t really connected. Maybe it’s just because we were isolating the individual employee in a way that wasn’t helpful. So there’s the obvious: the information about the employee—where they went to school, where they’ve worked. That’s basic.
But then you can also get into things about their attributes, their traits, their personalities. And then you think, “OK, well, what other data do we have?” We can collect data about performance. We can also collect data about the environments they’re in: perceptions of the boss, the boss’s effectiveness, what they’re working on, who they’re working on it with, how long they’re working, the company overall, competitive positioning, location.
Historically we wouldn’t have always brought those together. But as soon as we say, “We really want to understand the person, the environment the person’s working in, who they’re working with and how that’s going,” it’s way more natural to bring that together. And you can start showing the longitudinal effects.
Simon London: And then, presumably, perceptions as well: That’s another bucket of data?
Bill Schaninger: For sure. Most data around the organization—its effectiveness, its climate—is perception based. A good rule of thumb about this is if you’re about to make a choice about a person, you can probably do it better with some data. It is really that simple. Basic things like, Who should we hire? Where should we go to recruit? Who should we put together on a team? What should that team work on? How well are they performing? Why are there differences between units in performance?
We have a couple of things coming together at once. Of the two capitals, we are long on financial capital and short on human capital. So now every decision you make about people matters more. We’re in an era where computing power and analytic techniques allow us to do more than we could in the past.
In particular—in “stat speak”—you would saturate the models really quickly. You could only test so many interaction terms. When people talk about machine learning and the advances in nonparametric statistics, basically, what they’ve done is said, “We’re going to ignore the rules of parametric statistics. We won’t assume everything’s normal. And we’re going to run many, many more combinations than previously possible to find unique segments of people.”
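As a rough illustration of the statistical point Bill is making—this sketch is ours, not the speakers’, and it assumes NumPy and scikit-learn with hypothetical variable names—the parametric route requires hand-specifying each interaction term and saturates quickly, while a nonparametric tree ensemble searches combinations of variables on its own, with no normality assumption:

```python
# Contrast: a parametric linear model, where every interaction term
# must be specified by hand, versus a nonparametric tree ensemble
# that searches many variable combinations automatically.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # hypothetical: tenure, pay, boss rating, team size
# True signal includes an interaction between the first two variables.
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Parametric route: hand-build every pairwise interaction, then fit OLS.
interactions = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
ols = LinearRegression().fit(interactions.fit_transform(X), y)

# Nonparametric route: the ensemble finds interactions on its own.
gbm = GradientBoostingRegressor().fit(X, y)

print(ols.score(interactions.transform(X), y), gbm.score(X, y))
```

The tree ensemble recovers the tenure-by-pay interaction without it ever being specified—the “many more combinations” Bill describes—whereas the linear model only finds it because we enumerated the terms ourselves.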
Simon London: From what we see today, how many companies are actually doing this at scale? How many companies have really adopted people analytics in a way that is really having an impact?
Bryan Hancock: There’s a spectrum of what people analytics can be and do. The majority of companies, the vast majority, are doing basic reporting-type analytics: Who’s turning over? Where are they turning over? They’re doing some initial root-cause problem solving as to what’s behind it. Most companies are doing assessments of folks who are coming in to see if they’re a good fit for the role.
The question is, how many companies are going beyond that basic reporting, basic analytics, to using some of the bigger data sets, to using advanced computing power, and combining those data sets with well-proven academic theory on what really drives performance in an organization? How many organizations are using that at the next frontier? Very, very few.
Bill Schaninger: Three years ago, when we first started talking about this and started investing in it heavily, the basic question was, Who’s about to leave? It was about retention. That was the earliest use case across the board. That was pretty straightforward, because you were predicting a one or a zero.
As the people who’ve bought into this have migrated toward answering business problems and considering the impact of people and the combination of people on business problems, the use cases have gotten better and more impactful. But that list is probably still pretty small. We’re still running into some similar roadblocks, if you will. There is a belief set, convenient or not, that “we don’t have enough data.”
This is not true. Whatever data you have, you can start with. And particularly now that you don’t need a data lake—you need an ecosystem of data—that’s a real “unlock.” You can get at it. Acknowledge that the outside world probably already has better data on your employees than you do. Just go use it. It’s there. They’ve given it to other sites. Go get at it.
Bryan Hancock: Some of the data is quite public. Many professionals post their résumés and professional backgrounds on LinkedIn. By understanding, in aggregate, tens of thousands of LinkedIn profiles—and patterns among them, including which types of talent aggregate in which functions and at which companies—it becomes possible to compare, for example, Amazon versus Target and really look at where their merchants came from and what their backgrounds are.
Another one of my favorite things to do anytime a client is talking about their employee experience is to note that there is a wealth of information, be it on Glassdoor, or professional-service firms’ sites, or Vault, or Above the Law for a law-firm context. You can go “outside in” to not just get an understanding of “hey, Above the Law gave you a C+ and gave somebody else an A–” but also look at all of the qualitative comments. You can structure those together and understand, in a fact-based way, what the themes are at your law firm, accounting firm, or tech start-up, and how those themes compare with others’. This can highlight real gaps in certain leadership behaviors and other management behaviors and practices.
Bill Schaninger: And then—Bryan just hinted at this—maybe acknowledge that, for the better part of 60 years, some really good psychologists have written theory and done all the initial testing and validation on what good science looks like when it comes to people. We’ve just basically ignored it.
There was a great article written in the late ’90s—when I was getting my PhD, so I was reading journal articles then—looking at the correlation between the validity of selection tools and their frequency of use. The two were inversely correlated. In plain language, the best tools weren’t being used. The worst tools, like the unstructured interview, were being used all the time.
That’s because we’ve done two things. We’ve grown five generations of managers who know “it” when they see it, and they’re sure they know. They’re, of course, wrong, but they’re sure they know. On the flip side, we have systematically devalued, underfunded, and eviscerated most HR functions. So many HR teams aren’t capable of doing people analytics, even if we wanted them to.
In the past, you might have expected HR personnel to serve more as custodians, running the bureaucracy. Now you see three legs that really matter. One, they actually need to know how the business makes money. Otherwise you can’t pick the use cases that matter. Two, they need to understand the science behind talent. Whether it’s industrial-organizational psychology, or management, or HR, there is a science—again, as I was saying, 60 years of it—about what works and what doesn’t. Then there’s the quant part, which is the emerging one: you don’t have to be a statistician, but you do need to know enough to be a good consumer of statistics. On that front, we’re finding many of these departments lacking. Or they’ve set the group up separately, and there is no actual translation.
So who can handle all of that alone—understanding how we make money and which stats to deploy, based on existing and prevailing theory that we know is right? When you get that right, you get good problems brought to the fore and answered. When you don’t, you’re answering questions no one cares about.
Bryan Hancock: There are two forces outside of HR that are helping HR make the case for analytics.
One is what’s been happening in the marketing department. As marketing has used more and more advanced analytics to get deeper and deeper views of consumers and consumer behavior and more predictive tools and analytics, folks in HR can now say, “Shouldn’t we have the same level of fact bases on our employees as we do on our customers?”
And, in fact, many HR departments, when building a people-analytics capability, are tapping into the marketing function and into people who are using analytics in a different way. After all, figuring out why somebody chose an employer isn’t all that different a scientific exercise from figuring out why they chose your product over a competitor’s.
The second thing that is useful is sports—in particular, baseball. Rarely do I get into a people-analytics conversation, even with non-sports lovers, where the head of HR doesn’t say, “Hey, you know what? What we really need here is … ”
Simon London: Moneyball.
Bryan Hancock: “… Moneyball.” Exactly. We’re talking about wins above replacement. Now the baseball analogy gets deeper and deeper, because people say, “Hey, it’s not just wins above replacement”—which is an individual metric—“but it’s also about how we shift the defense on the field”—which is a team metric. So now HR leaders are empowered with analogies that are familiar to everybody. It’s not this scary Big Brother people analytics. No, no. It’s what the Houston Astros are doing. What we’re talking about here is not just fielding the best people on the team but also figuring out how to align them in the right way.
Simon London: Something that strikes me, particularly as you talk about baseball, is there’s probably enough data out there, and there’s probably a good sense of what makes individual players successful and what makes teams successful. When you go and talk to companies, do they know what makes their people successful or what makes their teams successful? Or is that the first thing that you need to align upon?
Bill Schaninger: Believe it or not, before you can talk about what makes the individual successful, you have to be able to answer the question, “For what job?” What’s the job to be done? What matters? Many individuals who are in roles now have gone through periods of restructuring, job enlargement, job enrichment, or massive shifts in role relative to what they were hired for. We tend to fall in love with incumbency. Get a client to talk about a role, and within 45 seconds, they will be using a name. They get hooked on a person.
That role mutates over time, but if you were being clinical about it, you would take a pause and say, “What is this role contributing in terms of value? How important is it, and what are the jobs to be done?” As soon as you can get to jobs to be done, then you can go old school, like the John Vorises of the world, who did this for the government and showed what job analysis really looks like: a description of what has to happen and a spec on what the perfect candidate is. The spec side asks, “What do they need to know? What do they have to be able to do? What attributes really matter? What experiences would we look to for comfort?”
We’re dealing with many clients that have had a proliferation of roles over time. Too many job titles. Too many roles. And they might have too many of them to invest in the kind of detail that we’re talking about. Sometimes it’s best just to get to the small number of roles that matter way more than others and prove it there—the 25 or 50 roles that account for 60 percent of the value of the company over the next five years. Get your head around those, and then you can get down to brass tacks. What do they have to do? What are the knowledge, skills, and attributes they need? Then you can turn to well-worn tools.
Simon London: So maybe this is where the sports analogy breaks down for many organizations. When you’re looking at a sports team, there are only a limited number of roles. Now, in an organization, if you’ve got 100,000 people doing one role, well, fine, you can look at that. But if you’ve got 1,000 roles with 100 people doing each of them, where do you begin? What drives success in each of those is a much more complicated problem. Is that a good way to think about it?
Bryan Hancock: That’s absolutely a good way to think about it. We would think that, in any given organization, there are 30 to 50 individual roles that are worth understanding at a granular level because they drive a disproportionate amount of value for the business, whether it’s the current-momentum value of the existing business or value for driving growth. It’s a relatively small number, 30 to 50. In addition, there are probably eight to ten broad skill pools. Sales might be one example of a broad skill pool. Of those folks, there may be 20,000 in a given organization, but you can understand and come up with the archetype there.
The sales one is most exciting to me because a lot of the people-analytics folks look at it and say, “Well, that’s really interesting, but how much of a difference did that make?” And I can say, “Give me half of your sales organization. Give me the existing data you have. Let me get one or two new pieces of insight, but not more than one or two, and I can raise your sales 3 to 5 percent.” If I don’t, no harm, no foul. Thank you for your time. But I’m convinced that analytics in sales is one of the things that every organization should be doing because it rings the cash register in such an obvious way and is one that can have such clear comparison sets.
Simon London: And it’s clear what success looks like, which is improved sales. Whereas in many roles, coming to a common understanding of what success looks like and then what drives success, I’d imagine, is a lot more complex.
Bill Schaninger: It is harder. In fact, one of the interesting challenges we’ve seen is around performance management and a sense of who’s actually successful. How do we know how well they’re doing? When you can count widgets sold or airplane engines sold, you get it. You can look at margin and things like that. But for these more complicated roles, you say, “Well, who’s good?” You get some really squishy answers when you ask, “Why are they good?”
We’ve turned on something in the last couple of years called imputed performance, where we accept that most performance-appraisal ratings are useless—not because ratings are bad, but because everyone is a four out of five. You’re living in Lake Wobegon. So instead you ask and answer questions like, What would be great here? Is greatness exceeding the budgeted number, or the way you set a stretch target—so the goal is to hit that number? Is it getting people promoted? Is it turning on new intellectual property? Is it conversion within your existing client base? What are the seven or eight things that really matter? Then they are weighted. Now you have a new number. But what you also have is differentiation. The science of being able to predict people requires variability. As one of the people I studied under used to say, “Variance is your friend, because you can explain variance. And as soon as you can explain variance among people, you know what matters and what doesn’t.”
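As a back-of-the-envelope illustration of how an imputed-performance number could be assembled—the metrics, weights, and figures here are hypothetical, not from any engagement Bill describes:

```python
# Replace uniform appraisal ratings with a weighted composite of the
# handful of outcomes that actually matter. Metric names and weights
# below are illustrative only.
import numpy as np

# Each row is an employee; columns are normalized (0-1) outcome metrics:
# beat stretch target, promotions sponsored, client conversion.
metrics = np.array([
    [0.90, 0.40, 0.70],
    [0.55, 0.80, 0.60],
    [0.95, 0.20, 0.30],
])
weights = np.array([0.5, 0.2, 0.3])  # must sum to 1; set by the business

imputed = metrics @ weights          # one differentiated score per employee
print(imputed)                       # [0.74, 0.615, 0.605]
print(imputed.var())                 # variance > 0: differentiation to explain
```

Unlike a wall of fours out of five, the composite actually varies across people, and that variance is what a predictive model can then try to explain.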
Simon London: Another new source of data is a sociometric badge. Now, as an employee, that makes my skin crawl a little bit. I’m not sure I want to be asked to wear a sociometric badge. Just talk a little bit about what these badges actually do. What do they collect? What don’t they collect? And if you’re going to use them as an employer, how do you go about that?
Bryan Hancock: They have the most power as diagnostic tools—as ways of understanding the patterns of human interaction among employees in a given environment. I don’t think using such badges is a great idea as an ongoing management tool.
For example, “Did you realize that Bill spent a total of 30 minutes in the washroom today? That is up five minutes from his washroom visit before and deviates from the average by x percent.” That’s not how it should be used. But where it can be incredibly powerful is understanding, for example, if you’re in a quick-service restaurant, how much are employees talking to customers versus talking to colleagues? Which colleagues are they talking to? Is it mainly the manager?
Bill Schaninger: On the front line, the dyad of the boss and the employee: again, in 60 years of research, there is no more important relationship at work, period. The badges help you understand what we’ve asked people to do versus what actually needs to happen. Then it’s backed up by data, not by someone going, “Oh, I know how it should be.”
A brief digression on this, because this client was interesting. It had hired a head of operations from another chain, and its entire model was built around speed of service—including how it staffed and where it put people. The minute we tried to draw the link to customer satisfaction, it wasn’t there. I think the correlation was zero. Because once somebody came inside the restaurant, were they that concerned about speed? No. They wanted an accurate order. And they’d like it to taste good. That was it. Then we started looking at the company and saying, “Actually, everything it’s gearing for is ‘get it out, get it out, get it out.’”
Simon London: So it was optimizing for the wrong outcome.
Bill Schaninger: Completely the wrong thing. The manager had a weird mix of “administrivia,” bureaucracy, and pushing for speed as opposed to getting it right. That had a huge impact on the stores.
Simon London: As an employee, taking that lens, I get more comfortable because, as you said, it’s diagnostic. I’m not being asked to wear the badge constantly, under permanent supervision; it’s for a set period, to help me understand how I work and whether I’m working in the best way. That makes me more comfortable.
Bryan Hancock: It’s the pharma equivalent of a black-box warning. I think a lot of these tools need a warning before they get put into an organization, because in the wrong hands they have incredible power to create a soul-sapping level of transparency into what you do. That can lead to questioning of every decision you made over every part of your day, which can create a dystopian workplace.
If you introduce these tools in an environment where people say, “OK, here are the risks, here are the downsides; let’s make sure that the business leaders who may have instincts to go in the risky direction don’t, because it has these negative effects; and let’s make sure we’re focused on diagnosing the right interactions and reinforcing the right things,” there’s unlimited potential.
Simon London: Can we talk a little bit about data privacy? Because, presumably, one of the other dangers here, if these programs are done badly, is you can trip up quite a lot of legal requirements—GDPR [General Data Protection Regulation] and other things—around what you can and can’t do with personal data. Presumably, this is not just an HR-plus-data-wonk exercise. There needs to be some legal-governance framework around this as you’re putting these programs together.
Bryan Hancock: Yes. I’m a lawyer by training, so I get into all the exciting nitty-gritty with the legal teams. Both from the privacy standpoint and from the employment-law standpoint, you want to be very careful of how you’re protecting the data and how you’re making sure that your data is being used to make fair and equitable decisions on people. To the extent that you create something that doesn’t have data privacy or is not fair and equitable, you’ve introduced a tremendous amount of legal risk to the organization.
What we find very helpful is going back to first principles on why we are worried about this data. We’re worried about it from the standpoint of individual protections. That means the data can’t cross certain country boundaries. That means we don’t collect it on certain servers, and we deal with it at an aggregate level. Those are very practical things we can solve for to address privacy concerns about where the data moves.
But often those concerns deal with data in the aggregate rather than the specific managerial interactions that let you get done what you want to get done. There is a real concern that if somebody comes up with a correlation—even with the best intent—that demonstrates adverse impact, the company will start to act on it. That introduces a lot of risk if there’s not a real solid basis for the data.
Bill Schaninger: If the model that you’re building to predict something—who should be hired, who might leave, or whatever it is—if it’s faulty, it doesn’t matter what the math says, because the model’s wrong. Remember, we went all the way back to an understanding of the science.
If I came to you and said, “I have found the most amazing correlation: ice-cream sales and asphalt sales are unbelievably correlated. It’s astonishing,” it must mean that when people are pitching new roads, they go out and buy ice cream. Or, just maybe, temperature is what’s important.
If it’s not modeled, it’s not there. So there is a danger. This is why you need to kick the tires on the model. We’re even aggressive with our own data-science folks. Being aboveboard with data, we ought to be able to show every employee every bit of data we’ve collected and why and how long it will be kept and what it’s used for.
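A toy simulation of the ice-cream-and-asphalt trap makes the “if it’s not modeled, it’s not there” point concrete; the numbers are invented for illustration:

```python
# Temperature drives both ice-cream and asphalt sales, so the raw
# correlation looks huge; once the confounder is modeled, it vanishes.
import numpy as np

rng = np.random.default_rng(42)
temp = rng.normal(70, 15, size=1_000)               # daily temperature
ice_cream = 2.0 * temp + rng.normal(0, 10, 1_000)   # both depend only on temp
asphalt = 1.5 * temp + rng.normal(0, 10, 1_000)

print(np.corrcoef(ice_cream, asphalt)[0, 1])        # spuriously high, ~0.9

# "Model" temperature: regress it out of each series, correlate residuals.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print(np.corrcoef(residuals(ice_cream, temp),
                  residuals(asphalt, temp))[0, 1])  # ~0: nothing left
```

Once temperature is in the model, the “astonishing” correlation collapses to noise, which is exactly why the model behind any people prediction needs its tires kicked.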
Bryan Hancock: I’ll give you another really interesting source of external data. We were working with an oil driller, trying to understand how it could attract more people to its drilling operation. For oil drillers, the number-one constraint to growth is finding enough people to operate the machinery. It’s a super-tight labor market. What makes the difference in the oil patch between growth and no growth is, do you have the people? The CEO of our client came to us with the question, “How do I get more people to work in my drilling operation versus my competitors’?”
His hypothesis had two parts. One, “We have to pay them more. It’s a tight labor market. We need to pay significantly more than our competitors to attract people.” The second was about food: in the camps where the workers reside, the company would cater a steak dinner once a week as a thank-you. If it was known for having the best food and the best pay, it would disproportionately attract folks. We said, “Let’s test it.”
The questions the CEO had were, “How are you going to test this? These folks don’t really have good internet access in the camps. How are you going to get a competitor’s perspective, anyway? You could only get our people. They’re not necessarily on LinkedIn. How are you going to find these folks?” And I said, “Where do they hang out?”
We ended up going to the various bars, other dining establishments, in Williston, North Dakota, and other places in the oil patch to find folks who worked for our client and the ones who worked for its competitors. We gave them a little bit of compensation for their time and asked them a set of traditional, well-proven marketing-survey questions.
What came out of the analysis was insight not just into our client and the main competitors but into the whole market. Because as soon as somebody heard, “Hey, they’re paying for a little bit of your time if you just go down to the bar,” they would come. We got this really rich data set. And since we had this nontraditional source of data, we were able to run analytics on it.
What mattered most was not pay or the steak dinner. Pay had to be in the range of the competitors. But differential pay wouldn’t necessarily get more folks. The number-one factor was safety. Are people going to make it home? Are they going to make it home in one piece? And folks wanted to work for the drillers that had the best safety reputation. What about our client? Our client was one of the safest if you looked at all the objective metrics. But it wasn’t perceived as such. By working on some of that perception, it was able to move the needle in a way that pay wouldn’t.
And by the way, what about the steak dinner? As it turned out, when we probed workers on whether they’d want this bundle or this bundle, folks didn’t want the steak dinner as much as they wanted a larger freezer in the camps so they could bring more of their own food from home.
Simon London: Interesting. What’s fascinating about that one is when we talk about people analytics, I think there is a temptation to think it’s all very automated, that it’s all about big data sets that you’re pulling in from big systems. That’s a classic example of something very creative, going to the source, getting the data that you need through quite traditional methods. But it’s data. And it’s good data.
Bryan Hancock: Absolutely. It’s amazing in the tech industry what you can learn in parking lots in Silicon Valley and other places. The oil-field labor market has a very niche set of skills. We had to go to where the people were working. For other skill sets, there’s no reason why you can’t apply similarly creative methods to understand them more deeply.
Simon London: So as you mentioned, we’re in a very tight labor market, certainly here in the US at this point in time. How does people analytics help organizations address that?
Bill Schaninger: We’ve spent a lot of time talking about selection and who you pick. But upstream from that, it’s sourcing. Where do you go to look for them? Who do you try to attract? Who do you get into the recruiting funnel before you even get to selection?
We had a client we were serving that was facing a lot of cross-offers. It was going to the usual suspects. Those candidates are all incredibly skilled. They’re going to have offers from the high-end consulting firms, from the investment banks, et cetera.
So the client said, “Where else can we go?” The first instinct was “let’s just keep going to the same places. Let’s just offer a whole lot more money or come up with new approaches to working.” And we said, “OK. Well, you could. But you know you could go look in other places.”
Some of it is a willingness to break through the idea that it has to be that school. How do you get comfortable breaking through that? The client, still interested in testing this out, said, “Let’s go look at the people who’ve been successful in our organization.” Do they all have an Ivy League education? The answer is no. By the way—and this is not a knock on the Ivies—in the client’s situation, the best-performing managers tended to come from Patriot League schools. They also had the personality attributes of a little bit of edge and a little bit of a chip on their shoulder. They are smart enough, but with a real desire to perform and to show people what they can do. That happened to work well in that environment.
What’s the takeaway from that? You can measure all of that. That’s all available to you. You don’t even have to give a test for that.
Simon London: The interesting thing about that case is that, again, you’re not talking necessarily about big data, machine learning. You can figure out who the successful people were at your organization with small data.
Bill Schaninger: Absolutely right.
Simon London: But it’s the application of a fact base and analytics to people decisions rather than the nostrums, the gut feel, the idea of, “we’ll just carry on doing what we’ve always done.”
Bill Schaninger: In fact, in selection research, there’s something called “biodata”—markers from a person’s life that give you an indication of who they might be. If you wanted someone entrepreneurial, a good thing to look for is, Did they own a paper route? Did they grow it three years successively? If you’re looking for resilience, people who come from divorced households—and who didn’t see a drop in school performance during that period—tend to have a little bit more of it.
Simon London: How much of this is culturally specific when you work on these issues globally? Do you need fundamentally different data sets from different countries? Is that something that we come across?
Bryan Hancock: The approaches are similar across geographies, but you have to put the work you’re doing in the right cultural context. That context could vary based on expected interactions between managers and subordinates. It could also involve expectations about data or about pay. There are a number of things that vary by context.
But the underlying science, I think, does apply across. The ability to apply facts and data to make specific people decisions, I do think that that’s applicable across cultural contexts. We’ve seen this work in Asia. We’ve seen it work in Latin America, Western Europe, and the US. We do see applicability of people analytics broadly.
Bill Schaninger: To the extent that you’re trying to understand what about the environment drives performance, I think you have to be a little bit more sensitive to the cultural context, and to what’s possible.
We were doing work with an engineering company once, looking at the right composition of engineering teams. It was, basically, operating 24/7—following the sun. When the company pushed on what wasn’t working, it was not about countries not working well together. It was not, “The engineering center in India doesn’t work well with the engineering center in California.”
The bigger issue was time zones. When a team spanned more than three time zones, it broke down. But in understanding the person—the benefit of industrial-organizational psychology, the constructs around personality, the definition of general mental ability—those things are well known and well validated.
Simon London: So we’re out of time today. But thank you very much, Bryan Hancock and Bill Schaninger, for a fascinating discussion.
Bryan Hancock: Thank you. Enjoyed it.
Bill Schaninger: Thanks for having us.
Simon London: And thanks as ever to you, our listeners, for tuning into this episode of the McKinsey Podcast. To learn more about organization, talent management, analytics, and more, please visit us at McKinsey.com.