Artificial Intelligence (AI) sparks concern over its potential risks even as it powers innovative applications. Recently, a group of AI scientists warned of the possibility of humans losing control over the technology and called for a global contingency plan to tackle AI's potential dangers.
Lack of an Advanced System
The AI experts pointed to the lack of an advanced system to confront AI's adverse impacts and remain concerned about its harmful effects if left unregulated. They warned that AI could have catastrophic consequences for humanity if control over it is lost, and they noted that the current system is inadequate for safeguarding and governing the AI space.
Professor Gillian Hadfield highlighted the need to introduce an AI regulatory framework, sharing concerns over the absence of technology advanced enough to keep AI from moving beyond human control.
We don't have to agree on the probability of catastrophic AI events to agree that we should have some global protocols in place in the event of international AI incidents that require coordinated responses. More here: https://t.co/FrKyLXJLYf
— Gillian Hadfield (@ghadfield) September 16, 2024
International AI Regulatory Body
According to the scientists, governments should scrutinize AI research labs and companies, and there should be clear channels of communication between governments and AI platforms. Countries should establish AI safety authorities within their borders to ensure regulatory compliance, while a globally coordinated regulatory body should maintain restrictions on AI capabilities.
Moreover, the scientists proposed three key measures to underpin AI regulation: emergency response protocols, a safety standards framework, and research on AI safety.
California's Watermark Bill
AI's potential risks have pushed governments worldwide toward stricter regulation. In a recent development, California proposed the "Watermark" bill, AB 3211, which has been supported by the AI giant OpenAI. The bill focuses on ensuring a clear distinction between AI-generated and human-made content: under its terms, AI companies would be required to add a digital watermark to AI-generated content.