Exclusion of Microsoft, OpenAI, and Google may lead to conflicts over regulation and responsible practices
On Tuesday, Meta and IBM introduced the AI Alliance, which promotes an “open-science” approach to AI development, setting them at odds with rivals Google, Microsoft, and OpenAI, the creator of ChatGPT. The dispute centers on whether AI development should be broadly accessible, raising debates over safety and over who profits from AI advances. Advocates of openness favor a non-proprietary approach, according to Darío Gil, IBM’s senior vice-president overseeing its research division.
The AI Alliance, led by IBM and Meta and comprising Dell, Sony, chipmakers AMD and Intel, and various universities and AI startups, is uniting around the view that the future of AI rests on open scientific exchange, ideas, and innovation. In an interview with the Associated Press before the unveiling, Gil said the alliance intends to advocate for open-source and open technologies; it is also expected to lobby regulators to ensure that forthcoming legislation aligns with its objectives.
In the fall, Meta’s chief AI scientist, Yann LeCun, criticized OpenAI, Google, and Anthropic on social media, accusing them of massive corporate lobbying. He argued that they were trying to shape regulations to favor their high-performing AI models and consolidate their influence over AI development. Those three companies, along with OpenAI’s key partner Microsoft, established their own industry group, the Frontier Model Forum.
On X (formerly Twitter), LeCun voiced concern that alarmist narratives from fellow scientists about AI “doomsday scenarios” could provide ammunition to those advocating a ban on open-source research and development.
LeCun argued that in a future where AI systems come to embody the collective repository of human knowledge and culture, the platforms must be open source and accessible to all, so that contributions can come from everywhere and the platforms reflect the entirety of human knowledge and culture.
For IBM, an advocate of the open-source Linux operating system since the 1990s, the current disagreement is part of a longer-running competition that predates the AI boom.
Chris Padilla, who heads IBM’s global government affairs team, described it as a classic regulatory-capture strategy: raising fears about open-source innovation. He drew a parallel to Microsoft’s historical playbook, noting the company’s consistent opposition to open-source programs that could rival Windows or Office. The term “open-source” comes from a decades-old practice of building software whose code is free or widely accessible for anyone to examine, modify, and build upon.
Open-source AI extends beyond mere code, and there is disagreement among computer scientists about its definition, depending on which aspects of the technology are publicly accessible and whether any usage restrictions exist. Some refer to the broader philosophy as open science.
Adding to the confusion around the term, OpenAI, the company behind ChatGPT and the image generator DALL-E, builds AI systems that are decidedly closed despite its name.
Ilya Sutskever, OpenAI’s chief scientist and co-founder, acknowledged in an April video interview hosted by Stanford University that there are near-term commercial incentives against open sourcing, but he also pointed to a longer-term concern: the potential danger of making an AI system with “mind-bendingly powerful” capabilities publicly accessible.
To underscore the risks of open sourcing, Sutskever offered the example of an AI system that has learned to set up its own biological laboratory.
David Evan Harris of the University of California, Berkeley, noted that even today’s AI models carry risks; they could, for instance, be used to amplify disinformation campaigns that disrupt democratic elections.
Harris acknowledged the many merits of open source elsewhere in technology but argued that AI is different. Pointing to the movie Oppenheimer and the caution exercised during past scientific breakthroughs, he said it is worth reconsidering the widespread sharing of information that could fall into the wrong hands.
The Center for Humane Technology, a persistent critic of Meta’s social media practices, is among the groups expressing concerns.
The “open-source” debate was largely overshadowed in the discussion surrounding U.S. President Joe Biden’s sweeping executive order on AI.
Biden’s order referred to open models by the technical term “dual-use foundation models with widely available weights” and said they required further examination. Weights are the numerical parameters, learned during training, that determine how an AI model behaves.
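As a minimal, illustrative sketch (not drawn from the article): in code, a model’s weights are simply arrays of numbers, and “widely available weights” means those arrays have been published for anyone to download. The toy model and file name below are hypothetical stand-ins, not any real released model.

```python
import numpy as np

# Toy model: its "weights" are just learned numbers (random stand-ins here).
class TinyModel:
    def __init__(self, weights: np.ndarray, bias: np.ndarray):
        self.weights = weights  # numerical parameters that shape behavior
        self.bias = bias

    def predict(self, x: np.ndarray) -> np.ndarray:
        # The model's output is entirely determined by its weights.
        return x @ self.weights + self.bias

model = TinyModel(weights=np.random.randn(4, 2), bias=np.zeros(2))

# "Widely available weights": publishing the parameter arrays themselves,
# e.g. saving them to a file anyone can download.
np.savez("open_weights.npz", weights=model.weights, bias=model.bias)

# Anyone with the file can reconstruct and run, or fine-tune, the model.
loaded = np.load("open_weights.npz")
clone = TinyModel(loaded["weights"], loaded["bias"])
print(clone.predict(np.ones((1, 4))))
```

The point of the sketch is that once the numbers are public, anyone can run or retrain the model, which is why, as the next paragraph notes, the order weighs innovation benefits against security risks.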
Biden’s order said that when these weights are posted publicly on the internet, there can be substantial benefits to innovation but also substantial security risks, such as the removal of safeguards within the model. The president gave Commerce Secretary Gina Raimondo until July to consult experts and present recommendations on how to weigh the potential benefits and risks.
The European Union is under more time pressure to find a resolution. In negotiations reaching a critical juncture on Wednesday, officials are working to finalize landmark AI regulations that would be among the first of their kind globally. Among the provisions still under debate is one that would exempt certain “free and open-source AI components” from rules affecting commercial models.