
          AI risks come to fore amid standoff with Anthropic

          By YANG RAN | China Daily | Updated: 2026-03-09 09:57
          FILE PHOTO: Anthropic logo is seen in this illustration taken May 20, 2024. [Photo/Agencies]

          A high-stakes standoff between the US government and tech company Anthropic has brought into sharp focus the dangers of rapidly militarizing artificial intelligence.

          Experts warn that rushing to deploy AI in lethal weapons systems could trigger a global AI arms race and heighten the risk of conflict, urging the international community to quickly establish clear red lines.

          Anthropic's large language model, Claude, has been making headlines recently. According to multiple Western media reports, the US military has utilized Claude for key operational support in actions against Venezuela and Iran, highlighting AI's expanding role in live combat.

          However, on Feb 27, the US administration ordered all government agencies to cease using Claude, instituting a six-month phaseout. The Pentagon also formally designated Anthropic as a supply-chain risk on Thursday.

          This drastic move followed Anthropic's refusal to compromise its guardrails that prevent the technology's application in fully autonomous weapons and domestic mass surveillance. In a public statement, Anthropic CEO Dario Amodei declared the company "cannot in good conscience accede to" the US Department of War's request, framing it as an ethical line the firm will not cross.

          Jiang Tianjiao, a research fellow at the Center for Global AI Innovative Governance at Fudan University, said that while AI is increasingly being used to assist military decision-making, current large language models like Claude lack the predictability, robustness, and safety needed for lethal autonomous weapons or mass surveillance tasks.

          "Even powerful models," he argued, "cannot guarantee reliability in 'real battlefield' conditions, where errors can have deadly consequences and risk escalating international conflicts."

          He also warned that the Pentagon's push to integrate AI more deeply into military applications could fuel a global AI arms race. "These demands may conflict directly with international law and ethical standards," Jiang added. "Autonomous lethal weapons, for example, clash with principles of international humanitarian law, which requires distinction between combatants and civilians and accountable human command."

          Anthropic's principled stance has cost the firm its US government business. Shortly after the ban, OpenAI announced a deal to deploy its models within the Department of War's classified networks. The US Departments of State, Treasury and Health and Human Services have also instructed staff members to stop using Anthropic's AI products.

          Sun Chenghao, head of the US-Europe Program at Tsinghua University's Center for International Security and Strategy, said that punishing firms for upholding safety guardrails incentivizes the industry to "prioritize contracts over constraints," pushing risks to the battlefield and society.

          Jiang further warned that the US moves risk politicizing the global tech ecosystem, forcing companies to prioritize national security over ethics or face sanctions. "Once militarization is forcibly advanced, the line between commercial and military sectors can become increasingly blurred, potentially making existing security review mechanisms purely cosmetic," he said.

          Ironically, Anthropic's loss of government contracts has coincided with a surge in its public popularity. Its chatbot Claude recently topped the Apple App Store, and the company's annualized revenue has reportedly jumped.

          Ethical boundaries

          Sun noted that among a considerable user base, "safety red lines" and "ethical boundaries" genuinely influence consumption and platform choices. "But this reflects a rejection of 'unlimited militarization' and of including surveillance or lethal applications as options, rather than a blanket opposition to all defense-related AI," he added.

          Experts pointed out that the confrontation underscores a significant governance lag, as existing international law and rules concerning AI militarization remain underdeveloped.

          Sun said that while existing international law offers some principled constraints, it is insufficient for governing AI militarization effectively. "AI isn't a single, easily countable or verifiable weapon system, so traditional arms control methods don't apply well. External verification is also hindered by commercial confidentiality and national security secrecy."

          "The biggest challenge for global governance on AI militarization isn't a lack of principles, but a lack of actionable common definitions and tiered regulations and a lack of minimal political trust that can be sustained amid great power competition," Sun added.

          A UN General Assembly resolution adopted in December underscores the urgent need for the international community to address the challenges posed by emerging technologies in lethal autonomous weapons systems.

          "The feasible path is not an abstract call for a total ban, but to promote a set of tiered, verifiable, and implementable safety guardrails," Sun said. "The international community should prioritize reaching a minimum consensus on 'meaningful human control' over the most dangerous lethal applications and embed the principle of 'ultimate human command and accountability' into national policies and international agreements."

          Jiang highlighted the need to reach consensus on red lines for military AI within the United Nations framework as soon as possible, and advocated strategic communication mechanisms among major powers to manage the risks effectively.
