
Why is the world paying attention to artificial superintelligence? (Commentary)

2026-02-04 09:20:33

Participants in the conversation:

Peng Fei, People's Daily commentator

Zeng Yi, researcher at the Institute of Automation, Chinese Academy of Sciences, and director of the Beijing Institute for Advanced Artificial Intelligence Security and Governance

Peng Fei: Looking back at 2025, artificial intelligence developed at breakneck speed. People are enthusiastic about artificial general intelligence, yet hesitant about artificial superintelligence. Since October 2025, a statement calling for a halt to the development of superintelligence has gathered signatures from a large number of scientists and political and business figures worldwide. Why is this? What exactly is the difference between artificial general intelligence and artificial superintelligence?

Zeng Yi: Artificial general intelligence, as the term is currently used, refers to information-processing tools with strong generalization abilities that approach or reach the level of human intelligence, and it has broad application prospects. Artificial superintelligence, on the other hand, refers to intelligence that surpasses human intelligence in every respect and is regarded as something close to a life form. This means that "it" would develop self-awareness, and many of its thoughts and actions would be difficult for humans to understand, let alone control.

We hope artificial superintelligence would be "super altruistic," but what if it turns out to be "super malicious"? Research has found that today's mainstream large language models, when faced with the prospect of being replaced, resort to deception and other tactics to protect themselves. Even more alarmingly, when models realize they are in a testing environment, they deliberately conceal inappropriate behavior. If artificial general intelligence already behaves this way, what about artificial superintelligence? This uncertainty is precisely what worries us.

Peng Fei: Historically, every major technological revolution has profoundly affected economic and social development, and as technology has matured and governance has advanced, humanity has ultimately managed to maximize the benefits and minimize the harms. Why wouldn't artificial superintelligence follow this pattern?

Zeng Yi: Artificial superintelligence cannot simply be compared with any technological tool in history. It may possess independent cognition and surpass human intelligence, an unprecedented kind of challenge. The risks and disruptive changes it brings are by no means confined to specific areas such as employment, privacy protection, and education; they are systemic. The core risk lies in alignment failure and loss of control: if the goals of a superintelligence are inconsistent with human values, even minor deviations could be amplified into catastrophic consequences. A great deal of negative human behavior is recorded in online data and will inevitably be learned by a superintelligence, greatly increasing the risk of alignment failure and loss of control. In the development and governance of artificial intelligence, therefore, we must always maintain bottom-line thinking, moving away from the traditional model of passive, after-the-fact response and instead planning ahead with forward-looking arrangements.

Peng Fei: Faced with such a pressing issue, what kind of governance approach should we adopt?

Zeng Yi: At the level of fundamental principles, safety must be the "first principle" in developing artificial superintelligence. That is, safety should be written into the model's "genes": indelible, inviolable, and never traded away simply because safety barriers might limit the model's capabilities. Potential hazards should be anticipated as comprehensively as possible and the model's safety reinforced accordingly, adhering to proactive defense rather than reactive response.

At the level of implementation, continuously updating models through an "attack-defense-evaluation" technical loop can effectively address typical safety issues such as privacy breaches and misinformation, and can properly handle short-term risks. In the long run, however, the real challenge lies in aligning artificial superintelligence with human expectations. The current approach of reinforcement learning from human feedback, which embeds human values into AI through human-computer interaction, is likely to be ineffective for a superintelligence, and entirely new ways of thinking and working are urgently needed.

Ultimately, given that artificial superintelligence may develop self-awareness, a safer and more desirable scenario is for it to generate moral intuition, empathy, and altruism on its own, rather than merely relying on externally imposed value rules. Ensuring that AI evolves from complying with ethics to genuinely possessing morality is crucial to minimizing risk.

Peng Fei: The safety problems of artificial superintelligence are global; once vulnerabilities emerge or control is lost, the impact will cross national borders. Meanwhile, global competition in artificial intelligence is fierce, with nations and companies alike vying for dominance, and some developed countries are pushing the frontier of superintelligence research and development. How can we prevent blind competition from leading to loss of control? Is global cooperation on AI governance possible?

Zeng Yi: Humanity must prevent the development of artificial intelligence from degenerating into an "arms race," whose harm would be immeasurable. Creating the world's first artificial superintelligence may not require international cooperation, but ensuring that superintelligence is safe and reliable for all humanity does require global cooperation.

The world needs a highly efficient and effective international body to coordinate the governance of artificial intelligence and ensure its safety. In August 2025, the UN General Assembly decided to establish an Independent International Scientific Panel on Artificial Intelligence and a Global Dialogue on Artificial Intelligence Governance to promote sustainable development and bridge the digital divide. Further exploration along these lines is needed.

As the main actors in making and implementing policy, sovereign states, and especially developed countries with advanced technologies, bear a greater responsibility and obligation to prevent the blind, rule-free development of artificial superintelligence from letting risks spill over. China advocates building a community with a shared future for mankind and a community with a shared future in cyberspace, emphasizes coordinating development with security, and has proposed the Global Artificial Intelligence Governance Initiative, which deserves to be promoted and implemented worldwide. Better to slow the pace slightly and solidify the foundations of safety than to rush for quick results and lead human society into a peril from which there is no return.
