CMA notes dangers: soaring prices, false information, fraud, and fake reviews

The UK’s competition watchdog cautions against assuming positive outcomes from the AI boom, highlighting risks like the spread of false information, fraud, fake reviews, and high technology costs. The Competition and Markets Authority acknowledges potential benefits for individuals and businesses but raises concerns about dominant players and violations of consumer protection laws. This warning is part of an initial review focused on foundation models, the technology supporting AI tools like the ChatGPT chatbot and image generators such as Stable Diffusion.

The introduction of ChatGPT has sparked discussions on the repercussions of generative AI. This umbrella term encompasses tools generating convincing text, images, and voice outputs based on human prompts. The debate revolves around its potential economic impact, including the displacement of white-collar jobs in fields like law, IT, and media. Additionally, concerns arise about the mass production of disinformation targeting both voters and consumers.

Sarah Cardell, the CEO of the CMA, described the rapid integration of AI into the daily lives of individuals and businesses as “dramatic.” This trend has the potential to simplify millions of everyday tasks and to boost productivity — the economic output generated by a worker per hour worked.

Nonetheless, Cardell cautioned against assuming a favorable outcome. In a statement, she emphasized, “We can’t take a positive future for granted.” Cardell expressed concerns about the potential for AI use to evolve in a manner that undermines consumer trust or is controlled by a few entities wielding market power, hindering the realization of full benefits across the economy.

The CMA classifies foundation models as “large, general machine-learning models trained on extensive data, adaptable to various tasks and operations.” This includes their role in powering chatbots, image generators, and Microsoft’s 365 office software products.

The regulator estimates that approximately 160 foundation models have been introduced by various companies, including major players like Google, Meta (owner of Facebook), and Microsoft. Additionally, emerging AI entities like OpenAI, the developer behind ChatGPT, and Stability AI in the UK, which supports the Stable Diffusion image generator, have contributed to this landscape.

The CMA highlighted that numerous companies are already involved in two or more crucial aspects of the AI model ecosystem. Major AI developers like Google, Microsoft, and Amazon not only own essential infrastructure for creating and disseminating foundation models (such as data centers, servers, and data repositories) but also have a significant presence in markets like online shopping, search, and software.

The regulatory body also emphasized its close monitoring of the repercussions of significant tech corporations investing in AI developers. Examples include Microsoft’s investment in OpenAI and Alphabet’s (Google’s parent company) investment in Anthropic. Notably, both transactions involve the provision of cloud computing services, a critical resource for the AI sector.

The CMA stressed the “essential” nature of preventing the concentration of the AI market within a small number of companies. The immediate risk is the potential exposure of consumers to substantial levels of false information, AI-driven fraud, and fake reviews. In the long term, this concentration could empower or solidify the market positions of firms developing foundation models, leading to high prices for utilizing the technology.

The report highlights that a shortage of access to crucial components for constructing an AI model, such as data and computing power, may result in elevated prices. Specifically addressing “closed source” models like OpenAI’s GPT-4, which serves as the foundation for ChatGPT and is not accessible or modifiable by the public, the report suggests that the development of leading models could be confined to a select few companies.

The report states, “Those remaining firms would develop positions of strength which could give them the ability and incentive to provide models on a closed-source basis only and to impose unfair prices and terms.”

The CMA also emphasized the significance of intellectual property and copyright, noting concerns raised by authors, news publishers, including The Guardian, and the creative industries regarding the uncredited use of their material in constructing AI models.

As part of the report, the CMA put forth a set of principles for AI model development. These include ensuring that foundation model developers have access to essential resources like data and computing power, preventing early AI developers from gaining an entrenched advantage. The principles advocate for the coexistence of “closed source” models like OpenAI’s GPT-4 and publicly available “open source” models, which can be adapted by external developers. Additionally, businesses should have various options to access AI models, including the option to develop their own. Consumers should have the flexibility to use multiple AI providers, and anticompetitive conduct such as “bundling” AI models into other services should be prohibited. Furthermore, consumers and businesses should receive clear information about the use and limitations of AI models.
