What does the bill say about AI, consumer protection, and big tech?

The European Parliament has endorsed the proposed AI law, marking a milestone in technology regulation. The legislation is expected to become law within weeks, with compliance deadlines phased in over the following three years. Guillaume Couneson of law firm Linklaters noted that users will be able to trust that the AI tools they access have been vetted and are safe to use. The bill’s impact is expected to extend well beyond the EU, much as the GDPR has shaped how data is handled worldwide.

Couneson added that many countries will be watching closely how the EU implements the AI act, and that the EU’s approach is only likely to be copied elsewhere if it proves successful.

How is AI defined in the bill?

At its most basic, AI describes computer systems that perform tasks typically associated with human intelligence, such as writing an essay or creating artwork.

The legislation provides a more nuanced definition, describing the AI technology it oversees as a “machine-based system designed to operate with varying levels of autonomy”, wording that clearly covers tools like ChatGPT.

This system may exhibit “adaptiveness after deployment” – meaning it learns as it operates – and infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments. The definition covers chatbots as well as AI tools that analyze job applications.

The legislation prohibits systems that pose an “unacceptable risk” but exempts AI tools intended for military, defense, or national security purposes, a concern for many tech safety advocates. It also excludes systems designed for scientific research and innovation.

Kilian Vieth-Ditlmann, deputy head of policy at the German non-profit organization AlgorithmWatch, expressed concern about the national security exemptions in the AI Act, warning that they could allow member states to bypass crucial AI regulations and open the door to abuse.

How does the legislation address the risks associated with AI?

Some systems will be prohibited outright, including:

- systems that seek to manipulate people in ways that cause them harm;
- “social scoring” systems that categorize individuals based on their social behavior or personality, similar to the scheme in Rongcheng, China, where the city rated aspects of residents’ behavior;
- predictive policing reminiscent of Minority Report;
- monitoring people’s emotions at work or in schools;
- biometric categorization systems that use biometric data (retina scans, facial recognition, fingerprints) to infer characteristics such as race, sexual orientation, political opinions, or religious beliefs;
- compiling facial recognition databases by scraping facial images from the internet or CCTV footage.

Exceptions for law enforcement

Facial recognition has been one of the most contentious aspects of the legislation. The use of real-time biometric identification systems, including facial recognition technology on live crowds, is banned, with exceptions for law enforcement in specific situations. Police can use the technology to locate a missing person or prevent a terrorist attack, but they must first obtain approval from a judicial or independent administrative authority; in urgent cases it can be deployed without prior approval, provided authorization is requested shortly afterwards.

What about systems that are considered risky but not prohibited?

The legislation includes a special category for “high-risk” systems that will be legal but closely monitored. These systems are used in critical infrastructure such as water, gas, and electricity, as well as in education, employment, healthcare, and banking. Certain law enforcement, justice, and border control systems will also fall under this category. For example, a system used to determine someone’s admission to an educational institution or job eligibility will be considered high-risk.

The law mandates that these tools be accurate, undergo risk assessments, have human oversight, and keep a log of their usage. EU citizens can also request explanations about decisions made by these AI systems that have affected them.

What are the implications for generative AI?

Generative AI, which refers to systems creating plausible text, images, videos, and audio from basic prompts, falls under the regulations for what the act terms “general-purpose” AI systems.

The EU’s approach involves a two-tiered system. The first tier, which applies to all general-purpose models, mandates compliance with EU copyright law and requires developers to provide detailed summaries of the content used to train the model; open-source models are largely exempt from this transparency requirement. The second, stricter tier applies to models deemed to pose a “systemic risk”, which is expected to include the models behind many chatbots and image generators, and adds obligations such as reporting serious incidents and conducting “adversarial testing”. Bringing existing models into compliance may prove challenging, and copyright lawsuits over training data are already under way against companies such as OpenAI and Stability AI.

How will deepfakes be affected?

Those creating deepfakes must disclose that the content has been artificially generated or manipulated. Content produced for artistic, creative, or satirical purposes must still be flagged, in an appropriate manner that does not spoil the work. Text generated by chatbots on matters of public interest must be labeled as AI-made, unless it has undergone human review or editorial control. Developers must also ensure that AI-made output can be detected as such.

How do AI and tech companies perceive the bill?

The bill has sparked mixed reactions. Major tech firms publicly support its principles while remaining wary of the specifics. Amazon has said it is committed to collaborating with the EU to support the safe development of AI, while Meta has warned against overregulation, stressing AI’s potential to drive innovation and competition. Privately, the criticism is sharper. One executive at a US company cautioned that the compute threshold above which models face the EU’s strictest rules is much lower than the equivalent threshold in the US, which could prompt European firms to move west to avoid the restrictions.

What penalties does the act impose?

Fines under the act vary: from €7.5m or 1.5% of a company’s total worldwide turnover (whichever is higher) for providing incorrect information to regulators, to €15m or 3% of worldwide turnover for breaching certain provisions like transparency obligations, to €35m or 7% of turnover for deploying or developing banned AI tools. Smaller companies and startups will face more proportionate fines.
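As a rough illustration of the “whichever is higher” rule, here is a minimal sketch in Python (the tier figures simply mirror those quoted above; this is an illustration, not legal guidance):

```python
# Illustrative only: the act sets each penalty as a flat amount or a
# percentage of worldwide turnover, whichever is higher.

def applicable_fine(flat_fine_eur: float, pct_of_turnover: float,
                    worldwide_turnover_eur: float) -> float:
    """Return the higher of the flat fine and the turnover-based fine."""
    return max(flat_fine_eur, pct_of_turnover * worldwide_turnover_eur)

# Example: a company with €2bn worldwide turnover deploys a banned AI tool.
# 7% of €2bn is €140m, which exceeds the €35m floor, so €140m would apply.
print(applicable_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```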

Most of the obligations will take effect 12 months after the act becomes law, likely next year, while the prohibitions on banned categories apply after six months. Providers and deployers of high-risk systems have three years to comply. A new European AI office will also be established to set standards and serve as the main oversight body for general-purpose AI (GPAI) models.
