The results of our survey of more than 1,300 business leaders and 3,000 consumers globally suggest that establishing trust in products and experiences that leverage AI, digital technologies, and data not only meets consumer expectations but also could promote growth. The research indicates that organizations that are best positioned to build digital trust are also more likely than others to see annual growth rates of at least 10 percent on their top and bottom lines. However, only a small contingent of companies surveyed are set to deliver. The research suggests what these companies are doing differently.
A majority of consumers believe that the companies they do business with provide the foundational elements of digital trust, which we define as confidence in an organization to protect consumer data, enact effective cybersecurity, offer trustworthy AI-powered products and services, and provide transparency around AI and data usage. However, most companies aren’t putting themselves in a position to live up to consumers’ expectations.
Consumers value digital trust
Consumers report that digital trust truly matters—and many will take their business elsewhere when companies don’t deliver it.
Most respondents say it’s important for companies to provide transparency around their digital-trust policies.
They want clarity about how their data will be used. Nearly half of all respondents say they frequently consider switching to another brand when the one they intend to purchase from is unclear about how it will use their data. These figures increase among some segments, such as Gen Z.
Consumers even believe some digital-trust tenets are nearly as important as common purchase decision factors, such as cost and delivery time.
Many will only buy from companies that are known for protecting consumer data. More than half of respondents say that they often or always make online purchases or use digital services from a company only after making sure that the company has a reputation for being trustworthy with its customers’ data. Again, this figure increases among some demographics.
And a substantial proportion of respondents will take their business elsewhere if trust is violated: 40 percent of all respondents report that they have pulled their business from a company after learning that the company was not protective of its customers’ data. This rate increases among frequent online shoppers, B2B purchasers, and Gen Z respondents. In the past year alone, 14 percent of all respondents stopped doing business with a company because they disagreed with its ethical principles, and 10 percent did so because they learned of a data breach, even when they didn’t know if their own data had been stolen.
Consumers believe that companies establish a moderate degree of digital trust
When it comes to how organizations are performing on digital trust, consumers express a surprisingly high degree of confidence in AI-powered products and services compared with products that rely mostly on humans. They exhibit a more moderate level of confidence that the companies they do business with are protecting their data. For organizations, this suggests that digital trust is largely theirs to lose.
More than two-thirds of consumers say that they trust products or services that rely mostly on AI the same as, or more than, products that rely mostly on people (Exhibit 1). The most frequent online shoppers, consumers in Asia–Pacific, and Gen Z respondents globally express the most faith in AI-powered products and services, frequently reporting that they trust products relying on AI more than those relying largely on people—41 percent, 49 percent, and 44 percent, respectively.
However, these survey results could be influenced, at least in part, by the fact that consumers may not always realize when they are interacting with AI. Although home voice-assistant devices (for example, Amazon’s Alexa, Apple’s Siri, or Google Home) frequently use AI systems, only 62 percent of respondents say that it is likely that they are interacting with AI when they ask one of these devices to play a song.
While 59 percent of consumers think that, in general, companies care more about profiting from their data than protecting it, most respondents have confidence in the companies they choose to do business with. Seventy percent of consumers express at least a moderate degree of confidence that the companies they buy products and services from are protecting their data.
And the data suggest that a majority of consumers believe that the businesses they interact with are being transparent—at least about their AI and data privacy policies. Sixty-seven percent of consumers have confidence in their ability to find information about company data privacy policies, and a smaller majority, 54 percent, are confident that they can surface company AI policies.
Most businesses are failing to protect against digital risks
Our research shows that companies have an abundance of confidence in their ability to establish digital trust. Nearly 90 percent believe that they are at least somewhat effective at mitigating digital risks, and a similar proportion report that they are taking a proactive approach to risk mitigation (for example, employing controls to prevent exploitation of a digital vulnerability rather than reacting only after the vulnerability has been exploited). Among the nearly three-quarters of companies that report having codified data ethics policies (those that detail, for example, how to handle sensitive data and provide transparency on data collection practices beyond legally required disclosures) and the 60 percent with codified AI ethics policies, almost every respondent expresses at least a moderate degree of confidence that employees are following those policies.
However, the data show that this confidence is largely unfounded. Less than a quarter of executives report that their organizations are actively mitigating a variety of digital risks across most of their organizations, such as those posed by AI models, data retention and quality, and lack of talent diversity. Cybersecurity risk was mitigated most often, though only by 41 percent of respondents’ organizations (Exhibit 2).
Given this disconnect between assumed and actual risk coverage, it is perhaps no surprise that 57 percent of executives report that their organizations suffered at least one material data breach in the past three years (Exhibit 3). Further, many of these breaches resulted in financial loss (42 percent of the time), customer attrition (38 percent), or other consequences.
Similarly, 55 percent of executives report an incident in which AI in active use (for example, in an application) produced outputs that were biased, incorrect, or did not reflect the organization’s values. Only a little over half of these AI errors were publicized. These AI mishaps, too, frequently resulted in consequences, most often employees’ loss of confidence in using AI (38 percent of the time) and financial losses (37 percent).
Advanced industries—including aerospace, advanced electronics, automotive and assembly, and semiconductors—reported both AI incidents and data breaches most often, with 71 percent and 65 percent reporting them, respectively. Business, legal, and professional services reported material AI malfunctions least often (49 percent), and telecom, media, and tech companies reported data breaches least often (55 percent). By region, AI and data incidents were reported most by respondents at organizations in Asia–Pacific (64 percent) and least by those in North America (41 percent reported data breaches, and 35 percent reported AI incidents).
The survey results suggest that delivering on digital trust could provide significant benefits beyond satisfying consumer expectations. Leaders in digital trust are more likely to see revenue and EBIT growth of at least 10 percent annually.
Digital-trust leaders lose less and grow more
We define digital-trust leaders as companies whose employees follow codified data, AI, and general ethics policies and that engage in at least half of the best practices for AI, data, and cybersecurity that we asked about. These companies outperform their peers in both loss prevention and business growth.
Loss prevention. The companies doing the most to establish digital trust are less likely to have experienced a negative AI incident in the past three years: 40 percent of digital-trust leaders did, versus 53 percent of all other organizations. Leaders in digital trust are also less likely to have suffered a data breach, though the difference is less stark: 49 percent versus 57 percent of all others.
Growth. Digital-trust leaders are 1.6 times more likely than the global average to see revenue and EBIT growth rates of at least 10 percent. In fact, with every step of progress a company makes toward establishing robust digital trust, we see a corresponding increase in the likelihood that it reports these higher revenue and EBIT growth rates. For example, simply codifying ethical conduct, rather than not doing so, is associated with higher growth. Making a further commitment to digital trust by incorporating these policies into mission statements correlates with a still higher propensity for better growth. And adding specific best practices in cybersecurity, data protection, and the provision of trustworthy AI increases the likelihood of higher growth further still, with each additional practice raising that likelihood.
What digital-trust leaders do differently
A look at the practices of digital-trust leaders shows that their success starts with goal setting. First, they simply set more goals—leaders in digital trust set twice as many goals for trust building (six) as all other organizations. They are also more likely to focus on value-driving goals—particularly strengthening existing customer relationships and acquiring new customers by building trust, and developing competitive advantage through faster recovery from industry-wide disruptions (Exhibit 4).
As digital-trust leaders pursue these goals, they are more likely to mitigate every single digital risk we asked about, from the most obvious, such as cybersecurity, to the less so, such as those associated with cloud configuration and migration (Exhibit 5).
And while, by definition, digital-trust leaders engage in at least half of all the AI, data, and cybersecurity practices we asked about, they are also about twice as likely to engage in any—and every—single one (Exhibit 6).
About the research
The data for this article were obtained through two global online surveys: one answered by business leaders, the other by consumers. Both were conducted from April to May 2022. The business leader survey included responses from 1,333 senior business executives (one-third of whom were CEOs) across 27 industries in 20 countries, including Australia, Brazil, Colombia, Germany, India, Indonesia, Pakistan, Singapore, Spain, the United Kingdom, and the United States. The consumer survey included responses from 3,073 adults from the same countries. The data were weighted to better align the survey sample with population estimates within each country, using age and gender weights globally and, in the United States only, additional weights for region, income, and ethnicity.
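For readers curious about the mechanics of this kind of adjustment, the sketch below shows a minimal, hypothetical example of post-stratification weighting by age and gender in Python. It is not the authors’ actual methodology: the age groups, population shares, and sample records are all illustrative, and the real weighting also incorporated region, income, and ethnicity in the United States.

```python
# Hypothetical sketch of post-stratification survey weighting by age and gender.
# Cell definitions, population shares, and sample records are illustrative only.
import pandas as pd

# A toy sample of consumer responses (hypothetical).
sample = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "55+", "35-54", "55+"],
    "gender":    ["F",     "M",     "F",     "M",   "M",     "F"],
})

# Assumed population shares for each age-gender cell (must sum to 1).
population_shares = {
    ("18-34", "F"): 0.15, ("18-34", "M"): 0.15,
    ("35-54", "F"): 0.18, ("35-54", "M"): 0.18,
    ("55+",   "F"): 0.18, ("55+",   "M"): 0.16,
}

# Observed share of each cell in the sample.
cell_counts = sample.groupby(["age_group", "gender"]).size()
sample_shares = cell_counts / len(sample)

# Weight = population share / sample share, applied to every respondent in the cell,
# so overrepresented cells are weighted down and underrepresented cells up.
weights = {cell: population_shares[cell] / sample_shares[cell]
           for cell in sample_shares.index}
sample["weight"] = [weights[(a, g)]
                    for a, g in zip(sample["age_group"], sample["gender"])]

print(sample)
```

Survey estimates (for example, the share of respondents who stopped doing business with a company after a breach) would then be computed as weighted averages using the resulting `weight` column.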