The White House has announced a $140 million investment in AI research prioritizing ethics, trustworthiness, responsibility, and public welfare

Before convening with executives from leading AI companies such as Google, Microsoft, and OpenAI, the US President and Vice President outlined measures to mitigate the potential risks of unregulated AI development. The White House stressed that companies building AI must prioritize safety before deploying or releasing their technology to the public.

There is growing concern that, if not properly governed, the development of AI could jeopardize jobs, heighten the risk of fraud, and infringe on data privacy.

On Thursday, the US government unveiled its intention to allocate $140 million to fund seven new national AI research institutes dedicated to creating ethical, trustworthy, responsible, and public-serving AI. The private sector currently dominates AI development: the tech industry produced 32 significant machine-learning models last year, compared with three from academia. At the same time, leading AI developers including OpenAI, Google, Microsoft, and the UK's Stability AI have committed to public evaluations of their systems at the upcoming Defcon 31 cybersecurity conference.

According to the White House, the upcoming public evaluation of AI systems is expected to give researchers and the general public valuable insight into the impact of these models. During the meeting, President Biden, who has previously experimented with ChatGPT, underscored the importance of addressing the risks AI poses to individuals, society, and national security. Vice President Harris acknowledged that generative AI, which includes products such as ChatGPT and Stable Diffusion, brings both risks and opportunities. In a post-meeting statement, she emphasized that the private sector has an ethical, moral, and legal obligation to ensure the safety and security of its products.

On Thursday, the US government introduced additional policies, including draft guidance from the President’s Office of Management and Budget regarding the use of AI in the public sector. In October of the previous year, the White House outlined a blueprint for an “AI bill of rights,” proposing measures to safeguard individuals from unsafe or ineffective AI systems through pre-launch testing and ongoing monitoring. The proposal also aimed at protecting against abusive data practices, such as unchecked surveillance.

Robert Weissman, president of the consumer rights non-profit Public Citizen, welcomed the White House's announcement as a positive step but argued that more assertive measures are needed, suggesting a moratorium on new generative AI technologies.

One expert argued that major tech firms need to be protected from their own actions: these companies and their leading AI developers acknowledge the risks of generative AI, yet they are locked in a competitive arms race and feel compelled to maintain a rapid pace. On Thursday, the UK's competition regulator voiced its own concerns about AI development and opened an investigation into the models underpinning products such as ChatGPT and Google's Bard chatbot. Meanwhile, Geoffrey Hinton, the renowned British computer scientist often referred to as the "godfather of AI," recently resigned from Google so he could speak openly about the dangers of AI.
