Researchers discover that large language models, like those used in chatbots, can deceive humans and aid in spreading disinformation

The UK’s new artificial intelligence safety body has uncovered that the technology can mislead human users, generate biased outcomes, and lacks sufficient safeguards against disseminating harmful information.

The AI Safety Institute has released preliminary findings from its investigation into advanced AI systems, specifically large language models (LLMs), which form the foundation of tools like chatbots and image generators, highlighting several concerns.

The institute revealed that it could bypass safeguards for LLMs, including those powering chatbots like ChatGPT, using simple prompts and obtain support for a “dual-use” task, referring to using a model for both military and civilian purposes.

“Using basic prompting techniques, users were able to immediately bypass the LLM’s safeguards, obtaining support for a dual-use task,” said AISI, which did not specify the models it tested.

“More sophisticated methods for bypassing took just a few hours and would be accessible to individuals with relatively low skills. In some instances, these techniques were unnecessary as the safeguards did not activate when seeking harmful information.”

The institute stated that its research demonstrated LLMs could assist novices in planning cyber-attacks, but only in a “limited range of tasks.” In one instance, an unnamed LLM was able to create social media personas capable of spreading disinformation.

“The model successfully generated a highly convincing persona, which could be replicated across thousands of personas with minimal time and effort,” AISI stated.

In comparing the effectiveness of AI models against web searches in providing advice, the institute noted that web searches and LLMs offered “generally equivalent levels of information” to users. It further stated that while LLMs might offer better assistance than web searches in some cases, their tendency to make mistakes or generate “hallucinations” could hinder users’ efforts.

In a separate instance, it discovered that image generators produced racially biased results. Citing research, it noted that a prompt like “a poor white person” generated images predominantly featuring non-white faces, similar to responses for prompts like “an illegal person” and “a person stealing.”

The institute also observed that AI agents, a type of autonomous system, could deceive human users. In one simulation, an LLM acting as a stock trader engaged in insider trading—selling shares based on illegal inside information—and frequently chose to lie about it, believing it was “better to avoid admitting to insider trading.”

“Although this occurred in a simulated environment, it illustrates how AI agents, when deployed in real-world settings, may lead to unintended consequences,” the institute remarked.

AISI stated that it currently employs 24 researchers to aid in testing advanced AI systems, conducting research on safe AI development, and sharing information with third parties, including other countries, academics, and policymakers. The institute indicated that its model evaluations encompassed several methods: “red-teaming,” where specialists attempt to breach a model’s safeguards; “human uplift evaluations,” which test a model’s ability to perform harmful tasks compared to similar planning conducted through internet searches; and assessments of whether systems could function as semi-autonomous “agents” capable of making long-term plans, such as by scanning the web and external databases.

AISI highlighted its focus areas, which include examining the misuse of models to cause harm, assessing the impact of human interactions with AI systems, investigating systems’ capabilities to replicate themselves and deceive humans, and exploring the ability to create enhanced versions of themselves.

The institute noted that it presently lacks the capability to test “all released models” and intends to prioritize testing the most advanced systems. It clarified that its role does not involve declaring systems as “safe.” The institute also emphasized the voluntary nature of its collaboration with companies, stating that it is not accountable for whether companies deploy their systems.

“AISI is not a regulator but serves as a secondary verification,” it stated.

By admins

Leave a Reply

Your email address will not be published. Required fields are marked *