Mistrust and Misunderstanding: Consumers Don’t Know What to Make of Artificial Intelligence in Search
This memo is part of a series on generative artificial intelligence, aimed at creating a foundational understanding of consumer attitudes on the emerging technology.
Read more of our coverage: Interest in Other AI Applications | Who’s Using Generative AI | Concerns Over Generative AI
Key Takeaways
- While nearly half of adults say they are interested in AI-powered search, “using AI to improve search results” ranked last among important qualities for search engines.
- More than half the public believes AI integrations into products and services are the future of technology, but just 1 in 3 adults think AI technologies will be developed responsibly.
- Just 1 in 4 people trust AI-powered search to avoid gathering content that contains misinformation, and nearly 1 in 3 trust it to provide factual results.
Chatbots, computer programs capable of engaging in a conversation, have existed for nearly as long as computers. But the development of large language models — systems trained on massive amounts of existing text to predict and create content — has revolutionized the technology with the introduction of generative artificial intelligence. Now, chatbots are no longer limited to pre-programmed responses; they can generate new content to answer a question or request.
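As a rough illustration of what “predicting the next word” means in practice, here is a minimal, hypothetical sketch of a toy next-word predictor in Python. It uses a handful of hand-written sentences and simple bigram counts; real large language models are neural networks trained on internet-scale text, so nothing here reflects how ChatGPT or Bard is actually built, and the tiny corpus and function names are invented for the example.

```python
# Toy illustration of next-word prediction, the core idea behind chatbots
# built on large language models. Real systems use neural networks trained
# on vast text corpora; this sketch uses simple bigram counts instead.
import random
from collections import defaultdict

# Hypothetical miniature "training corpus" -- a stand-in for internet-scale text.
corpus = (
    "search engines rank results by relevance . "
    "chatbots answer questions by predicting the next word . "
    "chatbots answer questions in a conversation ."
).split()

# Record which words follow which (a bigram model).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(prompt_word: str, length: int = 8) -> str:
    """Generate text one word at a time by sampling a plausible next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # no known continuation; stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("chatbots"))
# Possible output: "chatbots answer questions by predicting the next word ."
```

The point of the toy is the mechanism: the program never “knows” anything; it only chooses a statistically plausible next word, which is why fluent-sounding output can still be wrong.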
This technology has taken hold in a major way. ChatGPT, a chatbot developed by OpenAI that is capable of doing everything from answering questions to writing poetry or contracts, amassed more than 100 million monthly active users within two months of its public launch in November 2022.
Microsoft Corp., a major investor in OpenAI, has integrated the technology into its Bing search engine to create conversational search functionality, effectively turning search queries into a conversation. Google, the dominant company in search, responded by announcing its own chatbot called Bard.
These tools have generated a great deal of interest and attention: Half of all U.S. adults have seen, read or heard at least something about AI-powered chatbots, and about 2 in 3 expressed interest in conversational search features, according to a Morning Consult survey. However, trust in these systems is low — and the general public ranks a long list of other search engine qualities above AI-powered functionality.
AI Ranks Lowest Among Features That Adults Consider When Deciding Which Search Engine to Use
In general, there is interest among the public in AI-powered online search. Nearly half of people surveyed said they were somewhat or very interested in such a technology. But when asked what is important when deciding which search engine to use, 46% said using AI to improve results was important, ranking last among all considerations.
Nearly twice as many adults said data privacy was important to their choice of search engine, and 85% said honest, unbiased and impartial results were important. Other features like integration into web browsers were important to nearly 7 in 10 respondents, and 2 in 3 said it’s important for search to be part of a broader ecosystem of tools such as email and maps. This suggests that convenience and familiarity are more valuable to many consumers than AI functionality.
Putting AI breakthroughs before user needs
Chirag Shah, associate professor at the University of Washington’s Information School and founding director of InfoSeeking Lab, which focuses on issues related to information seeking and human-computer interactions, believes that when it comes to AI search, “nobody has really asked what users want.”
“Microsoft and Google, I don't see them interested in asking that. They're too busy just one-upping each other,” Shah said. “A lot of the ‘innovation’ right now is driven by how these companies try to prove themselves to have a better AI system than the other.”
Shah added that the companies are taking a “build it and they will come” approach to their AI-powered search tools, hoping that users will adapt to the new model after seeing it in action.
The reason these systems and other generative AI tools — like the text-to-image programs Midjourney and DALL-E — have suddenly become available is that the technology has reached an inflection point that has allowed them to rapidly improve, according to Rep. Jake Auchincloss (D-Mass.), who delivered the first-ever AI-written speech on the floor of the House earlier this year.
Auchincloss, who spent time working as a project manager at a cybersecurity startup before running for Congress, said three trends had to converge to lead to the current breakthrough in generative AI: “You had the academic research that created the algorithms and deep learning, you had Moore’s law that got semiconductors efficient and powerful enough to actually be the cloud computing resources to train these models, which was not even feasible 10 years ago. And you’ve got the internet-scale quality and quantity of data.”
It’s something of a gold rush for generative content, which the general public appears to recognize: More than half of adults believe AI integrations into products and services, similar to Microsoft’s integration of conversational AI into Bing, are the future of technology.
Despite this, users are less convinced that this future will benefit them: Just 1 in 3 adults believe AI will help them save money, and a nearly equal share believe AI will create new jobs.
There is also doubt among the public about whether AI will be developed and managed effectively. More than 2 in 5 respondents said they disagree that AI systems will be developed responsibly, and a similar share disagreed that the technology will be easily controlled.
Establishing trust in AI is an uphill battle
There is a noteworthy population of people who have not made up their minds on AI yet — about 1 in 5 don’t know or have no opinion about whether it will be the future of technology, and 1 in 4 are unsure if it will be developed responsibly — but there is considerable doubt that companies will have to overcome.
Concerns over whether the tools are being developed responsibly could certainly swell based on some of the initial interactions with Microsoft’s AI-powered Bing search. Users have run into instances of the chatbot “hallucinating” — a term used to describe when an AI system responds in unpredictable ways or with convincing but completely made-up information. Others have had the chatbot threaten them.
In some cases, users are engaging with these systems in unexpected ways. But there is another factor at play here, according to Shah: economics. Microsoft holds a sliver of the search market, while Google dominates, with more than 90% of all searches going through its platform.
“Microsoft, for the first time, in all these years, saw a real opportunity to claim some of that market share,” Shah said.
Microsoft and Google had not responded to requests for comment at the time of publication.
Majority of Adults Unlikely to Switch to AI Search Engine
Integrating AI into search engines might have the potential to bring in new users: The Morning Consult survey found that about 3 in 10 adults would be willing to change search engines for an AI-powered option.
“Microsoft has nothing to lose, in a way: It is a small player in the search business. But if it could claim even a couple of percent of Google's market share, that could mean billions of dollars a year,” Shah added. Microsoft rushed into releasing its AI-powered Bing chatbot, Shah said, and forced Google to rush to respond.
That rush has downsides. While conversational AI is relatively straightforward in how it works, there is a steep learning curve in explaining to users what is happening behind the scenes.
“AI is a misnomer. It's not intelligent,” Auchincloss said. “It can't reason about the world. It is a word prediction algorithm. What it does is it predicts the next word in a sequence of words.”
Some people have questioned whether chatbots are sentient — a concern that has been exacerbated by hallucinations in which Bing has told users it wants to be “alive.” Auchincloss, who has called for regulation of AI systems, expressed doubts that people really grasp what it is they are interacting with. Asked if the general public or his colleagues on the Hill understand generative AI, the congressman replied simply, “no.”
At Least a Third of the Public Doesn’t Trust AI Search to Provide Unbiased Results, Avoid Misinformation
That lack of understanding may be contributing to the lack of trust in AI systems that much of the public appears to be experiencing. Currently, just 1 in 4 adults trust AI-powered search engines to avoid content from websites that promote misinformation, and only 1 in 3 trust them to provide factual results.
The systems have provided reasons for the lack of trust: Both Bing’s and Google’s chatbots displayed incorrect results during demos designed to show the functionality of the systems. That, combined with experiences in which chatbots make up information, is reason enough for users to express doubt.
Those issues will need to be addressed to encourage adoption of these systems, as 63% of adults expressed at least some concern over the accuracy of results in search engines that use AI. Additionally, 68% said they are concerned about misinformation in AI search results, and 62% are worried about bias.
AI-powered search has potential, but also limitations
Auchincloss said industries, particularly those likely to be early adopters of this technology, should already be thinking about how to address these concerns.
“We don't need to be afraid of it, but we do need to be in control of it,” he said. “It can't be like social media where it was allowed to scale and to influence much of our private and public lives before we really got a handle on it — and frankly, still haven't gotten a handle on it.”
Public Favors Typical Search Engine Over AI-Powered Search for Various Services
Until then, the public may be hesitant to fully embrace AI-powered search tools. More people believe that standard search engines are a better option than AI-powered search for providing accurate and factual information and for providing results free of bias.
Part of that may be because traditional search results, presented as a series of links, have trained users how to search for information and consider a number of different sources — an exercise that Shah said is ultimately a good thing.
“I think there’s value in people going through some of these steps themselves, because in some cases, they might even start questioning their original question,” he said. “In some cases, they might learn something that allows them to change their path. Almost always, you’re able to use this process to validate and verify what you’re finding, because you can see things playing out in front of your eyes.”
By contrast, AI-powered search currently provides a singular answer, meant to seem definitive. That presentation of a confident response, though, can lead users to believe information that isn’t true. In many cases where systems are “hallucinating answers, the only way you would know that that was a hallucination is by actually going and checking the answer manually,” Shah said.
Some of these issues can be addressed with additional guardrails. But, Shah said, some of it is a fundamental problem with large language models.
“Language models are not intelligent machines,” he said. “They are essentially stochastic machines that are really good at spitting out natural sounding occurrences of words, which often is what we are looking for. But unless we curb our expectations for these systems, they're always going to be problematic.”
AJ Dellinger is a data reporter at Morning Consult covering technology. @ajdell