A leading researcher at this week’s AI safety summit in London raises concerns about a “real threat to public discourse”

Aidan Gomez, a senior industry figure and co-author of a pivotal research paper in chatbot technology, argues that excessive focus on AI doomsday scenarios distracts from pressing issues like widespread misinformation. Gomez, who is attending this week’s AI safety summit, agrees that long-term risks such as existential threats from AI should be studied and addressed, but warns that dwelling on such scenarios could divert policymakers from immediate potential harms. In his view, discussing existential risk in a public-policy context is unproductive and detracts from the more tangible, near-term risks that demand the public sector’s attention.

As the CEO of Cohere, a North American company specializing in AI tools for businesses, including chatbots, Gomez is an active participant in the two-day summit starting this Wednesday. Notably, in 2017, at the age of 20, Gomez was part of a Google research team responsible for creating the Transformer, a pivotal technology supporting AI tools like chatbots.

Gomez asserts that AI, meaning computer systems capable of performing tasks that normally require intelligence, is already widely deployed. He suggests that the summit should focus on existing applications, such as ChatGPT and image generators like Midjourney, which have impressed the public with their ability to generate credible text and images from simple prompts.

“This technology is already integrated into products used by a billion users, including those by Google and other companies. This introduces a range of new risks that require discussion, none of which are of an existential or doomsday nature,” stated Gomez. “Our primary focus should be on aspects that are poised to impact people imminently or are actively affecting them, rather than engaging in more abstract, academic, or theoretical conversations about the distant future.”

Gomez expressed particular concern about misinformation, the dissemination of misleading or inaccurate information online. “Misinformation is a primary concern for me,” he emphasized. “These AI models have the capability to produce content that is exceedingly convincing, highly persuasive, and nearly indistinguishable from text, images, or media created by humans. Therefore, it is imperative that we urgently address this issue and determine how to empower the public to differentiate between these various forms of media.”

On the inaugural day of the summit, various AI-related topics will be explored, encompassing concerns related to misinformation, such as its potential impact on elections and the erosion of social trust. On the following day, a select assembly of countries, experts, and technology executives, convened by Rishi Sunak, will deliberate on tangible measures to mitigate AI risks. Notably, U.S. Vice President Kamala Harris will be among the attendees.

Gomez, emphasizing the summit’s importance, highlighted the increasing plausibility of a legion of bots, software designed for repetitive tasks like posting on social media, disseminating AI-generated misinformation. He cautioned that if this scenario materializes, it poses a genuine threat to democracy and the integrity of public discourse.

Last week, the government released documents outlining AI-related risks, encompassing concerns like AI-generated misinformation and labor market disruption. In these documents, the government acknowledged the possibility of AI development reaching a point where it could pose a threat to humanity.

One of the risk papers published last week stated, “Given the substantial uncertainty in forecasting AI advancements, there is insufficient evidence to definitively rule out the potential of highly capable Frontier AI systems, if misaligned or inadequately controlled, posing an existential threat.”

The document further noted that while many experts considered this risk highly improbable, it would necessitate the occurrence of various specific scenarios, including an advanced AI system gaining control over weapons or financial markets. Concerns about an existential threat from AI primarily revolve around the concept of artificial general intelligence, referring to an AI system capable of performing diverse tasks at a level of intelligence equivalent to or surpassing human abilities. Such a system could theoretically replicate itself, elude human control, and make decisions contrary to human interests.

These concerns led to the issuance of an open letter in March, signed by over 30,000 technology professionals and experts, including Elon Musk, advocating for a six-month halt to massive AI experiments.

Following this, two of the three contemporary “godfathers” of AI, Geoffrey Hinton and Yoshua Bengio, issued an additional statement in May, stressing the need to address the risk of AI-driven extinction with the same seriousness as the threats posed by pandemics and nuclear warfare. However, Yann LeCun, their fellow “godfather” and co-recipient of the ACM Turing Award, considered the equivalent of the Nobel Prize in computing, dismissed concerns about AI potentially eradicating humanity as “absurd.”

LeCun, currently serving as the Chief AI Scientist at Meta, Facebook’s parent company, told the Financial Times this month that several “conceptual breakthroughs” would be necessary before AI could reach human-level intelligence, a stage at which it could escape human control. LeCun added, “Intelligence is not synonymous with a desire to dominate, and this isn’t even true for humans.”
