
ChatGPT-Gemini falls into the trap of fake information


Artificial intelligence, or AI, is becoming the primary means of searching for information today. But how accurate is it? A recent investigation by BBC journalist Thomas Germain produced striking results: he showed that the world's biggest AI chatbots can be easily fooled by a simple fake blog post.

Germain published a fictional post on his personal website claiming to be the world's best hot-dog eater, placing himself at the top of the rankings for a non-existent 2026 championship. Surprisingly, within 24 hours, Google's Gemini and OpenAI's ChatGPT began presenting this false information to users as established fact. Anthropic's Claude, however, did not fall for the trap.

Digital rights expert Cooper Quintin warned that manipulating AI in this way could lead to defamation or even physical harm. Lilly Ray, an expert at the agency Amsive, calls it a 'renaissance' for spammers. She believes AI chatbots are now easier to fool than the Google search engine was a few years ago.

Studies have shown that when AI provides a summary of information, users' tendency to click through to the original source to verify it drops by about 58 percent. As a result, people are relying on AI blindly.

It is not just absurd claims: AI is also giving misleading information about health and finance. Companies' promotional press releases and false claims are often presented by AI as absolute truth, which can put users at serious risk.

Tech giants Google and OpenAI say they are continuing to make their systems more secure. But experts argue it is time for AI to clearly cite its data sources, and for users to apply their own critical thinking to verify information.


