Authors and academics warn that advancing AI systems without safety checks is “utterly reckless.”

A group of senior experts, including two pioneers of AI, has warned of the threat that powerful AI systems pose to social stability, insisting that AI companies must be held liable for harms caused by their products. The warning comes ahead of an AI safety summit at Bletchley Park, where international politicians, tech companies, academics, and civil society figures are due to convene next week.

One of the 23 experts who co-authored the policy proposals said that pursuing ever more powerful AI systems before understanding how to make them safe is “utterly reckless.”

Stuart Russell, a professor of computer science at the University of California, Berkeley, stressed that advanced AI systems must be taken seriously, emphasizing that they are not toys and that increasing their capabilities before understanding how to make them safe is reckless.

Russell further stated, “AI companies face fewer regulations than sandwich shops.”

The document urges governments to adopt a range of policies, including:

  • Governments dedicating one-third of their AI research and development funding to ensuring the safe and ethical use of these systems.
  • Companies allocating one-third of their AI R&D resources to the same purpose.
  • Giving independent auditors access to AI laboratories.
  • Establishing a licensing system for the development of cutting-edge models.
  • Requiring AI companies to adopt specific safety measures if dangerous capabilities are found in their models.
  • Holding tech companies liable for foreseeable and preventable harms caused by their AI systems.

The document’s other co-authors include Geoffrey Hinton and Yoshua Bengio, recognized as two of the “godfathers of AI,” who received the 2018 ACM Turing Award, often described as the Nobel prize of computing, for their contributions to the field.

Both Hinton and Bengio are among the exclusive list of 100 guests invited to the summit. Hinton resigned from Google this year to warn of the “existential risk” posed by digital intelligence, while Bengio, a computer science professor at the University of Montreal, signed a letter in March, along with thousands of other experts, calling for a pause on giant AI experiments.

Other co-authors of the recommendations include Yuval Noah Harari, bestselling author of “Sapiens”; Daniel Kahneman, a Nobel laureate in economics; Sheila McIlraith, a professor of AI at the University of Toronto; and the acclaimed Chinese computer scientist Andy Yao.

The authors warned that carelessly developed AI systems threaten to amplify social injustice, undermine established professions, destabilize society, enable large-scale criminal or terrorist activity, and erode the shared understanding of reality that underpins our society.

They cautioned that current AI systems are already displaying worrying capabilities that point toward the emergence of autonomous systems able to plan, pursue goals, and act in the physical world. They noted, for example, that GPT-4, the AI model developed by the US company OpenAI that powers the ChatGPT tool, can design and execute chemistry experiments, browse the web, and use software tools, including other AI models.

The experts also warned that highly advanced autonomous AI could yield systems that independently pursue undesirable objectives and may prove difficult to control.

Additional policy recommendations within the document include:

  • Mandatory reporting of incidents involving models displaying concerning behavior.
  • Implementation of measures to prevent hazardous models from self-replicating.
  • Empowering regulators with the authority to halt the development of AI models demonstrating dangerous behavior.

The upcoming safety summit will focus on existential threats posed by AI, such as its potential role in designing new bioweapons or evading human control. The UK government, together with the other participants, is working on a statement expected to underscore the scale of the threat from frontier AI, the term for the most advanced systems. While the summit is expected to outline the risks of AI and measures to mitigate them, it is not anticipated to formally establish a global regulatory body.
