
          Framework seeks to keep AI in line

          Rapid development of technology presents potential safety risks

          By JIANG CHENGLONG | China Daily | Updated: 2025-11-13 08:49

Reflecting the fast pace of breakthroughs in artificial intelligence, China released its upgraded AI Safety Governance Framework 2.0 on Sept 15.

          The latest framework signals a significant strategic evolution from its predecessor, shifting from a static list of risks to a full life cycle governance methodology.

          It comes just a year after the first framework was released by the National Technical Committee 260 on Cybersecurity, China's key body responsible for cybersecurity standardization.

In its preface, the new iteration notes that the update was driven by breakthroughs in AI technology that had been "beyond expectation". These include the emergence of high-performance reasoning models that drastically increase AI's intellectual capabilities, and the open-sourcing of high-efficacy, lightweight models, which has sharply lowered the barrier to deploying AI systems.

          At the same time, the manifestations and magnitude of AI security risks — and people's understanding of them — are evolving rapidly.

          The core objective has evolved from simply preventing risks to ensuring technology remains under human control, according to Wang Yingchun, a researcher at the Shanghai Artificial Intelligence Laboratory, who called the move a "major leap" in governance logic.

          In a commentary published on the official website of the Cyberspace Administration of China, he emphasized that the framework aims to guard the bottom line of national security, social stability and the long-term survival of humanity.

          Preventing loss of control

The most significant shift from version 1.0 is the introduction of a new governance principle, one that focuses on trustworthy applications and the prevention of loss of control, Wang said.

          This principle is supported by the framework's new addendum listing the fundamental principles for trustworthy AI, which mandates ultimate human control and value alignment.

Hong Yanqing, a professor specializing in cybersecurity at the Beijing Institute of Technology, said in a commentary that the newly added principle is intended to ensure that the evolution of AI remains safe, reliable and controllable. It must guard against runaway risks that could threaten human survival and development, and keep AI firmly under human control, he said.

Reflecting this high-stakes focus, the new framework lists real-world threats that directly affect human security and scientific integrity, including the loss of control over knowledge and capabilities related to nuclear, biological and chemical weapons and missiles.

          It explains that AI models are often trained on broad, content-rich datasets that may include foundational knowledge related to nuclear, biological and chemical weapons, and that some systems are paired with retrieval-augmented generation tools.

          "If not effectively governed, such capabilities could be exploited by extremists or terrorists to acquire relevant know-how and even to design, manufacture, synthesize and use nuclear, biological and chemical weapons — undermining existing control regimes and heightening peace and security risks across all regions," the framework said.

          Derivative societal risks

For the first time, the framework introduces a category covering derivative safety risks from AI applications, warning of the systemic risks that AI applications could pose to macro-level social systems.

          The framework warns that AI misuse could disrupt labor and employment structures.

          "AI is accelerating major adjustments in the forces and relations of production, restructuring the traditional economic order. As capital, technology and data gain primacy in economic activity, the value of labor is weakened, leading to a marked decline in demand for traditional labor," it said.

The framework also cautions that resource supply-demand balances may be upset, stressing that problems that have emerged in AI development, such as the disorderly construction of computing infrastructure, are accelerating the consumption of electricity, land and water, posing new challenges to resource balance and to green, low-carbon development.

The framework even warns that the emergence of AI self-awareness cannot be ruled out in the future, carrying the risk of systems seeking to break free of human control.

          "In the future, it cannot be excluded that AI may experience sudden, beyond-expectation 'leaps' in intelligence — autonomously acquiring external resources, self-replicating, developing self-awareness and seeking external power — thereby creating risks of vying with humanity for control," the framework said.

          The framework also warns that AI may foster addictive, anthropomorphic interactions. "AI products built on human-like interaction can lead users to form emotional dependence, which in turn shapes their behavior and creates social and ethical risks," it said.

          Moreover, the existing social order could be challenged, it added, noting that AI's development and application are "bringing major changes to tools and relations of production, accelerating the restructuring of traditional industry models, and upending conventional views on employment, childbirth, and education — thus challenging the traditional social order".

          Researcher Wang said the newly added section goes beyond familiar safety topics such as "harmful content" and "cognitive confrontation", bringing social structures, scientific activity and humanity's long-term survival and development into the scope of AI safety governance.

The higher-level aim, he said, is to safeguard the bottom line of national security, social stability and the long-term continuity of humankind.

          China solution

Amid global competition and cooperation in AI, the framework not only supports the healthy development of China's AI sector, but also signals the country's firm resolve to safeguard AI security and ensure that AI benefits humanity, according to Hong, the professor at the Beijing Institute of Technology.

Version 2.0 aligns concrete measures with international governance practice, he said, adding that its emphasis on labeling and traceability for AI-generated content is in line with the approaches of the United States and the European Union to regulating deep-synthesis media.

Beyond labeling and traceability for AI-generated content, the framework's safety guidance also calls for deploying deepfake detection tools in scenarios such as government information disclosure and judicial evidence collection, to verify sources and cross-check information suspected of being generated by large models.

          "These measures demonstrate China's openness and willingness to cooperate in global AI governance," Hong said.

Internationally, there has been unprecedented attention on AI safety governance, he said, with countries and international organizations rolling out initiatives and rules in quick succession.

          "By further aligning with international norms through Framework 2.0, China is responding to consensus concepts such as trustworthy AI and AI for good", said Hong.

The nation is also matching international best practices in content labeling and governance guidelines, offering a China solution to global AI governance, he added.
