The scientist aiming to demystify AI in this year’s Christmas lectures acknowledges some speculative concerns but emphasizes the genuine risks

Touted by some as an existential risk comparable to pandemics, artificial intelligence is not what worries at least one pioneer. Professor Michael Wooldridge, who is delivering this year’s Royal Institution Christmas lectures, said his concern is the prospect of AI becoming an overbearing boss: tools already available today can monitor employees’ emails, deliver continuous feedback, and potentially decide who gets fired. A professor of computer science at the University of Oxford, Wooldridge aims to use the prestigious lectures to demystify AI.

“This is the year that, for the first time, we had mass-market, general-purpose AI tools, by which I mean ChatGPT,” said Wooldridge. “It’s very easy to be dazzled.”

“It’s the first time that we had AI that feels like the AI that we were promised, the AI that we’ve seen in movies, computer games, and books,” he said.

However, he stressed that tools like ChatGPT were neither magical nor mystical.

“In the [Christmas] lectures, when people see how this technology actually works, they’re going to be surprised at what’s actually going on there,” Wooldridge said. “That’s going to equip them much better to go into a world where this is another tool that they use, and so they won’t regard it any differently than a pocket calculator or a computer.”

He will have company: robots, deepfakes, and other highlights of AI research will join him as he delves into the technology.

The lectures will feature a Turing test, a well-known challenge initially proposed by Alan Turing. In essence, if a human engages in a typed conversation and cannot distinguish whether the responding entity is human or not, then the machine has exhibited human-like capabilities. While some experts firmly believe that the test has not been successfully passed, others hold a different perspective.

“Some of my colleagues think that, essentially, we’ve passed the Turing test,” said Wooldridge. “At some point, very quietly, in the last couple of years, the technology has reached a point where it can generate text indistinguishable from text produced by a human.”

However, Wooldridge holds a different perspective.

“I think what it tells us is that the Turing test, simple and beautiful and historically important as it is, is not really a great test for artificial intelligence,” he said.
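For readers who want a concrete picture of the setup, here is a minimal sketch in Python of a simplified, single-respondent version of the imitation game Turing described. The `chatbot_reply` function is a hypothetical stand-in for any conversational model, not a real API: a judge exchanges typed messages with a hidden respondent and then guesses whether it was a human or a machine.

```python
import random

def chatbot_reply(message: str) -> str:
    # Hypothetical stand-in for a real language model's answer.
    return "That's an interesting question - what makes you ask it?"

def human_reply(message: str) -> str:
    # In a real test, a hidden human volunteer would type the answer.
    return input(f"(hidden human, answer this) {message}\n> ")

def turing_test(rounds: int = 5) -> None:
    # Secretly decide whether the judge is talking to a machine or a person.
    respondent_is_machine = random.choice([True, False])
    reply = chatbot_reply if respondent_is_machine else human_reply

    for _ in range(rounds):
        question = input("Judge, ask a question: ")
        print("Respondent:", reply(question))

    guessed_machine = input("Judge, was that a machine? (y/n): ").strip().lower() == "y"
    if guessed_machine == respondent_is_machine:
        print("The judge identified the respondent correctly.")
    elif respondent_is_machine:
        print("The machine was mistaken for a human - it passed this round.")
    else:
        print("A human was mistaken for a machine.")

if __name__ == "__main__":
    turing_test()
```

In Turing’s original formulation the judge interrogates a human and a machine at the same time, but the principle is the same: the machine “passes” if the judge cannot reliably tell which is which.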

For the professor, an exciting aspect of today’s technology is its potential to experimentally test questions that have previously been relegated to philosophy, including whether machines can attain consciousness.

“We don’t understand really, at all, how human consciousness works,” Wooldridge said. But, he added, many argue that experiences are important.

For instance, while humans can perceive the scent and flavor of coffee, large language models like ChatGPT cannot.

“They will have read thousands upon thousands of descriptions of drinking coffee, and the taste of coffee and different brands of coffee, but they’ve never experienced coffee,” Wooldridge said. “They’ve never experienced anything at all.”

Furthermore, if a conversation is interrupted, such systems lack a sense of the passage of time.

However, Wooldridge contends that while factors like these explain why tools like ChatGPT are not considered conscious, machines with such capabilities may still be feasible. After all, humans are essentially a collection of atoms.

“For that reason alone, I don’t think there is any concrete scientific argument that would suggest that machines can’t be conscious,” he said. He added that while machine consciousness would likely differ from human consciousness, it might still require meaningful interaction with the world.

With artificial intelligence already making significant strides in various fields, ranging from healthcare to art, its potential appears vast. However, Wooldridge highlights associated risks.

He emphasizes that AI has the ability to analyze your social media activity, discern your political inclinations, and subsequently present disinformation with the aim of influencing actions, such as altering your voting preferences.

Additional concerns revolve around AI systems like ChatGPT potentially providing users with inaccurate medical guidance and inadvertently perpetuating biases present in their training data. Some fear unintended consequences stemming from AI usage, including the development of preferences that may not align with human values, although Wooldridge contends that this is currently unlikely with existing technology.

Wooldridge proposes that the key to addressing these risks lies in fostering skepticism, particularly acknowledging that ChatGPT is not infallible, and ensuring transparency and accountability in AI systems.

However, he declined to endorse the statement issued by the Center for AI Safety warning of the technology’s dangers, or a similar open letter from the Future of Life Institute, both of which were released this year.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the former declared.

Wooldridge explained his decision not to sign the statements: “I believe they conflated some immediate concerns with highly speculative long-term issues.” There are conceivable misuses of AI with potentially severe consequences, he said, and undoubtedly “spectacularly dumb things” that could be done with the technology, and risks to humanity are worth taking seriously. But near-term risks need to be kept distinct from exceedingly speculative scenarios: extreme possibilities, such as putting AI in control of a nuclear arsenal, are not being seriously proposed by anyone credible.

He asserted, “If we’re not relinquishing control of something potentially lethal to AI, it becomes considerably more challenging to envision it posing an existential risk.”

Although Wooldridge welcomes the first global summit on artificial intelligence safety, due this autumn, and the UK taskforce set up to develop safe and reliable large language models, he remains skeptical of direct parallels between the fears J Robert Oppenheimer voiced during the development of nuclear bombs and the concerns of today’s AI researchers.
