

Krista Pawlowski still remembers the moment she changed her position on the ethics of artificial intelligence. Working on Amazon’s Mechanical Turk platform, she checks AI-generated text, images and videos every day, catches mistakes, and compares data. Two years ago she took a job, sitting at her dining table at home, identifying whether tweets (on the platform now called X) were racist or not. There she saw a tweet: ‘Listen to that mooncricket sing’. At first Krista was going to choose ‘no’, because she didn’t know the meaning of the word. She later found out that ‘mooncricket’ is a derogatory racist slur used against Black people.

Pawlowski said, ‘I stopped and thought: how many times have I made this mistake before?’ Thousands of workers like her may be making the same mistakes, which is how bias, abusive language and misinformation constantly creep into AI training, and no one even notices.

Over the years, Pawlowski’s attitude has completely changed. She no longer uses any generative AI herself, and doesn’t let her teenage daughter use tools like ChatGPT. “The use of AI is strictly prohibited in my home,” Pawlowski said.

When she looks at the list of new jobs on Mechanical Turk, she asks herself: could this work harm people? Often her answer is yes.

According to an Amazon statement, workers on the platform decide which jobs to take on and have access to the details of each job; working hours, wages and instructions are set by the requesting organization. Mechanical Turk is a crowdsourcing website where people can earn money doing small tasks, such as completing surveys or data entry. It is an Amazon service in which companies or individuals submit tasks to ‘crowdworkers’, who are paid for completing them.

Pawlowski is not alone. Dozens of AI raters who have worked on multiple models, from Google’s Gemini to Elon Musk’s Grok, told The Guardian that they warn their entire families against using AI, many forbidding it altogether, because of AI’s inherent weaknesses.

One worker who rated Google Search’s AI Overviews said he was appalled by the AI’s handling of medical questions. Raters without any medical training are allowed to verify these answers. As a result, he does not allow his 10-year-old daughter to use any chatbot.

A major complaint among AI raters is that it is impossible to maintain quality under the pressure of pace and deadlines. Brooke Hansen, who works on Mechanical Turk, said the problem is not AI itself, but that companies value profit over responsibility.

Hansen has been involved in data science since 2010. “We who develop AI models don’t have enough training, the instructions are unclear, and the time frames are short,” he said. According to him, AI training is done in such a haphazard manner that it is impossible to maintain safety, accuracy or ethics.

When a model does not know the answer, it still responds with confident-sounding information, something experts see as a big risk.

Recent research by NewsGuard, a US-based non-profit organization, found that the non-response rate of major AI models was 31 percent in 2024 but fell to zero in 2025. Over the same period, the rate at which the models repeated false information rose from 18 percent to 35 percent. In other words, AI models still give wrong information but have almost stopped declining to answer: right or wrong, they will give some answer.

“We often joke that AI would be fine if it just stopped lying,” says a rater who has worked on Gemini, ChatGPT and Grok.

Another said he was tasked in 2024 with asking difficult questions of an AI model built for Google. He asked the model some history questions. He said, ‘When asked about the history of the Palestinian people, it gave no answer at all. But when asked about the history of Israel, it gave a detailed list.’ He added, ‘I reported it to Google too, but they did not respond.’

According to him, when the training input is bad, the output will be bad: the programming principle of ‘garbage in, garbage out’.
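As a minimal sketch of that principle (the function name and data below are hypothetical illustrations, not from the article): when crowdworker labels are combined by majority vote, a mistake shared by most raters becomes the ‘ground truth’ a model is trained on, exactly the failure mode Pawlowski described.

```python
# A toy illustration of "garbage in, garbage out" in training-data labeling.
# All names and data here are hypothetical.
from collections import Counter

def aggregate_labels(ratings):
    """Take the majority vote among crowdworker labels for each item."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in ratings.items()}

# Three raters label the same tweet; two do not know "mooncricket" is a slur.
ratings = {
    "Listen to that mooncricket sing": ["not_offensive", "not_offensive", "offensive"],
}

training_labels = aggregate_labels(ratings)
print(training_labels)
# → {'Listen to that mooncricket sing': 'not_offensive'}
# The mistaken majority label is what the model is then trained on.
```

One correct rater is simply outvoted, so the error enters the training set silently, and no downstream check on the model can detect a label that was wrong to begin with.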

Taking part in the public debate about AI, Hansen tells people that AI is not magic: it rests on enormous human labor, bias, haste, environmental damage and enormous cost.

Adio Dinika, a researcher at the Distributed AI Research Institute, said, ‘Those who have seen the work behind AI see it not as a technology of the future but as a fragile system.’

Pawlowski and Hansen recently presented on the ethical and environmental risks of AI to school officials at the Michigan Association of School Boards conference. Many attendees were surprised by what they heard, especially about the human labor and environmental costs behind AI.

According to Pawlowski, the ethics of AI resemble those of the textile industry. For a long time, ordinary consumers did not know how such cheap clothes were made; the labor behind them was invisible. Who made them, and under what working conditions, never reached the buyers. The questions are the same for AI: Where does the data come from? Is copyright being infringed? Were the workers paid fairly? We don’t have all the answers yet. But if people start asking questions, change will come.

Abridged from The Guardian


