When asked, most users say they don’t trust AI (Artificial Intelligence). And indeed, AI produces wrong or fabricated information in about 60 percent of cases. Still, recent research suggests that consumers trust AI-generated summaries more than human-written ones when reading product reviews or comments online. Not only that, they are also more tempted to buy a product after reading these summaries.
According to a report published Saturday by Live Science, a website that publishes science news, these findings come from a study by a group of researchers at the University of California San Diego in the USA. They claim it is the first quantitative study of how AI-generated summaries influence human behavior.
The research, presented at an international conference on Natural Language Processing (NLP) in December 2025, had several steps. The researchers tested six LLMs (Large Language Models) against a database containing 1,000 electronics product reviews, 1,000 interviews published in the press and 8,500 news reports.
Using this material, the researchers asked the AI models to generate summaries of the product reviews and media interviews, and then asked them to fact-check news items. As it turned out, the AI struggled to distinguish real news from fictional news.
The researchers wrote in their report, ‘The inability to tell the difference between real and fake news is a major limitation of AI. It cannot distinguish truth from falsehood.’
Misleading consumer decisions
The most surprising aspect of the study concerned online shopping reviews. People turned out to be far more inclined to buy a product after reading a short AI-generated summary than after reading a long review written by a human.
The researchers identified two main reasons behind this: over-weighting of early information and incomplete data. AI models tend to over-emphasize information at the beginning of a text and under-emphasize the middle, which shapes readers’ impressions from the very start. And when dealing with new information the AI has not been trained on, it tends to produce incorrect or fabricated answers.
Tests showed that AI chatbots changed the sentiment of a user’s original review in about 26.5 percent of cases. And when asked directly about a product, they gave incorrect or fictitious answers 60 percent of the time.
The study found that 52 percent of customers who read human-written reviews decided to buy the product, compared with 84 percent of those who read AI-generated summaries. In other words, AI-generated information is more enticing to buyers even when it is wrong.
The researchers warned that while these AI errors may not seem like a big problem for everyday purchases, their impact could be dire in more sensitive areas.
Abir Alessa, lead author of the research paper, told Live Science, “If AI were to repackage the information in this way, whether it’s summarizing health care documents or educational admissions profiles, it could mislead people’s entire perception of a person or subject.”
The research team hopes this work will help prevent the distortion of AI-generated information and reduce its negative impact on media, education and government policymaking.
