
          Framework seeks to keep AI in line

          Rapid development of technology presents potential safety risks

          By JIANG CHENGLONG | China Daily | Updated: 2025-11-13 08:49

Reflecting the fast pace of breakthroughs in artificial intelligence, China released its upgraded AI Safety Governance Framework 2.0 on Sept 15.

          The latest framework signals a significant strategic evolution from its predecessor, shifting from a static list of risks to a full life cycle governance methodology.

          It comes just a year after the first framework was released by the National Technical Committee 260 on Cybersecurity, China's key body responsible for cybersecurity standardization.

In its preface, the new iteration notes that the update was driven by breakthroughs in AI technology that had been "beyond expectation". These breakthroughs include the emergence of high-performance reasoning models that drastically increase AI's intellectual capabilities, and the open-sourcing of highly capable, lightweight models, which has sharply lowered the barrier to deploying AI systems.

          At the same time, the manifestations and magnitude of AI security risks — and people's understanding of them — are evolving rapidly.

          The core objective has evolved from simply preventing risks to ensuring technology remains under human control, according to Wang Yingchun, a researcher at the Shanghai Artificial Intelligence Laboratory, who called the move a "major leap" in governance logic.

          In a commentary published on the official website of the Cyberspace Administration of China, he emphasized that the framework aims to guard the bottom line of national security, social stability and the long-term survival of humanity.

          Preventing loss of control

The most significant shift from version 1.0, Wang said, is the introduction of a new governance principle that focuses on trustworthy applications and the prevention of loss of control.

          This principle is supported by the framework's new addendum listing the fundamental principles for trustworthy AI, which mandates ultimate human control and value alignment.

Hong Yanqing, a professor specializing in cybersecurity at the Beijing Institute of Technology, said in a commentary that the newly added principle is intended to ensure that the evolution of AI remains safe, reliable and controllable. It must guard against runaway risks that could threaten human survival and development, and keep AI firmly under human control, he said.

Reflecting this high-stakes focus, the new framework lists real-world threats that directly affect human security and scientific integrity, including the loss of control over knowledge and capabilities related to nuclear, biological, chemical and missile weapons.

          It explains that AI models are often trained on broad, content-rich datasets that may include foundational knowledge related to nuclear, biological and chemical weapons, and that some systems are paired with retrieval-augmented generation tools.

          "If not effectively governed, such capabilities could be exploited by extremists or terrorists to acquire relevant know-how and even to design, manufacture, synthesize and use nuclear, biological and chemical weapons — undermining existing control regimes and heightening peace and security risks across all regions," the framework said.

          Derivative societal risks

For the first time, the framework introduces a category covering derivative safety risks from AI applications, warning of the systemic risks such applications could pose to macro social systems.

          The framework warns that AI misuse could disrupt labor and employment structures.

          "AI is accelerating major adjustments in the forces and relations of production, restructuring the traditional economic order. As capital, technology and data gain primacy in economic activity, the value of labor is weakened, leading to a marked decline in demand for traditional labor," it said.

The framework also cautions that the balance of resource supply and demand may be upset, stressing that problems emerging in AI development, such as the disorderly construction of computing infrastructure, are accelerating the consumption of electricity, land and water, and posing new challenges to resource balance and to green, low-carbon development.

          The framework even warns that AI self-awareness cannot be ruled out in the future, with potential risks of systems seeking to break free of human control.

          "In the future, it cannot be excluded that AI may experience sudden, beyond-expectation 'leaps' in intelligence — autonomously acquiring external resources, self-replicating, developing self-awareness and seeking external power — thereby creating risks of vying with humanity for control," the framework said.

          The framework also warns that AI may foster addictive, anthropomorphic interactions. "AI products built on human-like interaction can lead users to form emotional dependence, which in turn shapes their behavior and creates social and ethical risks," it said.

          Moreover, the existing social order could be challenged, it added, noting that AI's development and application are "bringing major changes to tools and relations of production, accelerating the restructuring of traditional industry models, and upending conventional views on employment, childbirth, and education — thus challenging the traditional social order".

          Researcher Wang said the newly added section goes beyond familiar safety topics such as "harmful content" and "cognitive confrontation", bringing social structures, scientific activity and humanity's long-term survival and development into the scope of AI safety governance.

The higher-level aim, he said, is to guard the bottom line of national security, social stability and the long-term continuity of humankind.

          China solution

Amid global competition and cooperation in AI, the framework not only supports the healthy development of China's AI sector, but also signals the country's firm resolve to safeguard AI security and ensure AI benefits humanity, according to Hong, the Beijing Institute of Technology professor.

Version 2.0 aligns concrete measures with international governance practice, Hong said, adding that its emphasis on labeling and traceability for AI-generated content is in line with the approaches of the United States and the European Union to regulating deep-synthesis media.

Beyond labeling and traceability of AI-generated content, the framework's safety guidance also calls for deploying deepfake detection tools in scenarios such as government information disclosure and judicial evidence collection, where they would be used for source verification and cross-checking of information suspected to have been generated by large models.

          "These measures demonstrate China's openness and willingness to cooperate in global AI governance," Hong said.

Internationally, there has been unprecedented attention on AI safety governance, he said, adding that countries and international organizations are rolling out initiatives and rules in quick succession.

          "By further aligning with international norms through Framework 2.0, China is responding to consensus concepts such as trustworthy AI and AI for good", said Hong.

          In addition, the nation is also matching international best practices in content labeling and governance guidelines, offering a China solution to global AI governance, he added.
