Scientists Warn of ‘Existential Threat’ from Uncontrolled AI

  • AI scientists warn against the possible threats of AI if humans lose control.
  • The experts urge nations to adopt a global contingency plan to tackle the risks.
  • They noted that the science needed to control and safeguard AI does not yet exist.

AI scientists have sounded the alarm on the potential dangers of artificial intelligence. In a statement, a group of experts warned about the possibility of humans losing control over AI and called for a globally coordinated regulatory system.

The scientists, who played a role in developing AI technology, expressed concerns about its potential harmful effects if left unchecked. They emphasized the current lack of advanced science to “control and safeguard” AI systems, stating that “loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

Gillian Hadfield, a legal scholar and professor at Johns Hopkins University, underscored the urgent need for regulatory measures. She highlighted the current lack of technology to control or restrain AI if it were to surpass human control.

Call for Global Contingency Plan

The scientists stressed the necessity of a “global contingency plan” to enable nations to identify and address the threats posed by AI. They emphasized that AI safety is a global public good requiring international cooperation and governance.

The experts proposed three key processes for regulating AI:

  • Establishing emergency response protocols
  • Implementing a safety standards framework
  • Conducting thorough research on AI safety

Countries worldwide are taking steps to develop regulations and guidelines to mitigate the growing risks of AI. In California, two bills, AB 3211 and SB 1047, have been proposed to safeguard the public from potential AI harms. AB 3211 focuses on ensuring transparency by distinguishing between AI and human-generated content. SB 1047 holds AI developers accountable for the potential harms caused by their models.

Disclaimer: The information presented in this article is for informational and educational purposes only. The article does not constitute financial advice or advice of any kind. Coin Edition is not responsible for any losses incurred as a result of the utilization of content, products, or services mentioned. Readers are advised to exercise caution before taking any action related to the company.
