
US Government Report Warns of Genuine Threat: AI Poses Extinction Risk To Humanity

The world is reportedly confronted with a serious threat: the US government has issued a warning about a potential extinction-level danger to the human species posed by AI. The report cautions that advanced AI could destabilize global security in a manner comparable to nuclear weapons, and it recommends stringent regulations, including the possibility of imprisonment for violations related to the disclosure of AI model weights.

Commissioned by the US government in October 2022, the report by Gladstone AI assessed the proliferation and security threats associated with weaponized and misaligned AI. Over a year later, the findings suggest that AI could pose an “extinction-level threat to the human species.”

According to the report, the rise of advanced AI and Artificial General Intelligence (AGI) could parallel the destabilizing effect of nuclear weapons on global security. AGI refers to technology capable of performing tasks equal to or surpassing human abilities, with industry leaders foreseeing its potential within the next five years or sooner.

The assessment report urges the US government to take swift and decisive action to prevent growing risks to national security posed by AI.

Authored by three researchers, the report draws on insights from conversations with over 200 individuals, including government officials, experts, and employees of leading AI companies such as OpenAI, Google DeepMind, Anthropic, and Meta. The findings reveal concerns among AI safety professionals that commercial and competitive incentives may be negatively influencing the decision-making of company executives.

The report proposes an Action Plan to proactively address these challenges. It suggests an unprecedented set of policy measures that could significantly impact the AI sector. Recommendations include making it illegal to train AI models using computational power exceeding a specified limit, determined by a newly established federal AI agency. The report also advocates for mandatory government authorization for training and deploying new AI models beyond a defined computational threshold.

Furthermore, the report emphasizes exploring the prohibition of public disclosure of the inner workings (weights) of powerful AI models, with potential penalties, including imprisonment, for violations. It also calls for increased governmental oversight of AI chip production and exportation, and for directing federal funding toward research initiatives focused on aligning advanced AI technologies with safety measures.
