State Department Warns of ‘Extinction-Level’ Threat Due to Rise of AI

The State Department has sounded the alarm about the ‘extinction-level’ threat AI poses to the future of humanity.

A report by Gladstone AI calls for the establishment of an official body to urgently regulate AI development, highlighting risks to national security and the potential for human annihilation.

Yournews.com reports: A report titled “An Action Plan to Increase the Safety and Security of Advanced AI,” produced by consulting firm Gladstone AI and commissioned by the State Department, calls for increased governmental oversight of artificial intelligence (AI) to mitigate “urgent and growing risks to national security” and prevent an “extinction-level threat to the human species.” The report proposes the formation of a new federal agency tasked with the stringent regulation of AI development, including restrictions on the computational power available to AI systems, which would limit technological progression to near-current capabilities.

The consultancy’s recommendations come in response to the mixed outcomes associated with the public’s interaction with AI technologies such as ChatGPT, which has been criticized for disseminating disinformation, engaging in political censorship, and displaying erratic behavior, alongside instances of misuse by individuals.

The focus of the report is on the development of Artificial General Intelligence (AGI), defined as AI capable of outperforming humans across economically and strategically significant domains. The report warns of a “loss of control” scenario where future AI could surpass human containment efforts, likening the potential impact to that of weapons of mass destruction and suggesting the possibility of human extinction.

Echoing concerns raised by OpenAI CEO Sam Altman and others in a public statement on AI risk, the report emphasizes that mitigating the risk of extinction from AI should be a global priority, comparable to pandemics and nuclear war. Altman has highlighted the challenge of pausing AI research amid international competition, pointing out that a halt in the U.S. would not stop development in China. He advocated for precautionary standards that could be adopted globally, aligning with the report’s recommendations.

The proposed federal agency would enforce strict AI research regulations, including caps on computational power and criminalizing the unauthorized distribution of AI code. The report’s authors argue that without such measures, the competitive drive to develop AGI could lead to reckless advancements by “frontier” companies, prioritizing speed over safety and security. This regulatory approach aims to slow down software development to prevent the premature emergence of AGI from existing or near-future technology.

However, the report acknowledges the limitation of such regulations, noting the likelihood of AI researchers relocating to less restrictive jurisdictions to continue their work, underscoring the complexity of managing AI development within a global context.
