The National Institute of Standards and Technology (NIST) has issued new guidelines to scientists who partner with the US Artificial Intelligence Safety Institute (AISI). The new instructions remove mention of 'AI safety', 'responsible AI', and 'AI fairness' from the scope of the research. Instead, the emphasis is now on 'reducing ideological bias' and on promoting 'human prosperity and economic competitiveness'.
The guidance comes as part of an updated cooperative research and development agreement sent to members of the AI Safety Institute consortium in early March. The earlier agreement encouraged researchers to contribute technical work that would help identify and correct discriminatory model behavior related to users' gender, race, age, or wealth. Such issues matter because they can directly affect users and have discriminatory impacts on minority and economically disadvantaged groups. The new agreement, however, no longer mentions them.
The new agreement also drops references to 'content authentication and verification', tools for tracking content provenance, and labeling of synthetic content, signaling less interest in tracking misinformation and deepfakes. In addition, it emphasizes 'strengthening America's global AI position', and one research team has been asked to develop testing tools toward that goal.
One AISI researcher, who asked not to be named, said the Trump administration does not consider safety, fairness, misinformation, and responsibility in AI to be worth working on, and that this guidance says a lot about its priorities.
Another researcher called the change strange, asking what 'human prosperity' really means, and noted that these changes could make the use of AI models more discriminatory, unsafe, and irresponsible.
Elon Musk, who is currently leading a controversial effort for the Trump administration to cut government spending and bureaucracy, has previously criticized AI models built by OpenAI and Google.
Last February, Musk posted a meme on X in which Gemini and OpenAI were labeled 'racist' and 'woke'.
The word 'woke' originally comes from African-American vernacular, where it signified being 'awake' or 'aware' of social injustice.
Today, however, the word is often used negatively: some people use 'woke' to criticize what they see as excessive attention to social or political issues.
In addition to Tesla and SpaceX, Musk runs an AI company called xAI, which competes directly with OpenAI and Google. A researcher affiliated with xAI recently developed a new technique that could potentially be used to change the political bias of large language models.
Studies show that political bias in AI models can affect both liberals and conservatives. For example, a study published in 2021 found that Twitter's recommendation algorithm was more likely to show users right-leaning viewpoints.
Since January, Musk's Department of Government Efficiency (DOGE) has swept through various parts of the US government, dismissing civil servants, pausing spending, and creating an environment seen as hostile to anyone who might oppose the Trump administration's aims. Some government departments, such as the Department of Education, have removed DEI documents from their archives. DOGE has also recently targeted NIST, the Safety Institute's parent organization, leading to the dismissal of several employees.
The changes appear to have come directly from the White House, according to Stella Biderman, executive director of the nonprofit EleutherAI. "This administration has made its priorities clear," she said. Biderman believes the agreement had to be rewritten so that AISI could survive and work in line with the administration's priorities.
The Biden administration established the AI Safety Institute last October to address the potential dangers of rapidly advancing AI technology. Biden's executive order was rescinded last January, however, and the institute has since continued under the Trump administration, whose new executive order asks for AI systems that are free from 'ideological bias and engineered social agendas'.
Reference: Wired