
          Risk control key in AI-guided weapon system

          By Liu Wei | China Daily | Updated: 2024-09-12 07:19
[Illustration by Shi Yu / China Daily]

          Editor's note: The booming AI industry has not only created opportunities for economic and social development but also brought some challenges. Further developing AI standards can help promote technological progress, enterprise development and industrial upgrading. Three experts share their views on the issue with China Daily.

          The development of artificial intelligence (AI) and autonomous weapon systems will boost the defense capability of some countries, changing traditional strategic and tactical landscapes, giving rise to new challenges and making risk management more difficult.

Generally, the application of AI technology in the defense sector can increase the precision of attacks, but it could also lead to more misunderstandings and misjudgments, escalating disputes and confrontations. For instance, precision-guided weapon systems can strike targets more efficiently and effectively, and AI's fast decision-making can improve combat efficiency, but it also increases the chances of contextual misjudgment.

In particular, AI's application in information warfare, including "deepfakes", can spread false information and thus increase the chances of the "enemy" making wrong decisions, rendering conflicts more unpredictable and uncontrollable.

First, AI-guided weapon systems may misjudge and misidentify targets, mistaking civilians, friendly forces or non-military assets for legitimate targets and causing unnecessary deaths and destruction. Such misjudgments can provoke strong reactions or retaliation from the "enemy".

If AI-guided weapon systems cannot understand the complex battlefield environment, they could make wrong decisions. For example, AI might fail to grasp the enemy's tactical intentions or the background information, leading to failed operations or excessive use of force.

          Second, AI-guided weapon systems may lack flexibility in decision-making or fail to handle the complex dynamics of the battlefield, because of their dependence on preset rules and algorithms.

Third, AI-guided weapon systems can be vulnerable to hacking attacks and carry cybersecurity risks. For example, hackers could disrupt defense systems or, worse, manipulate them into making wrong decisions or launching attacks. In fact, AI-guided weapon systems, once out of control, could pose a serious threat to their own side. Also, AI-guided weapon systems could fall into the hands of terrorists.

And fourth, although AI applications can hasten the decision-making process, it is doubtful whether AI can make fast decisions after assessing the consequences of those decisions. This calls for establishing strict international rules and oversight mechanisms to minimize the risks posed by AI-guided weapon systems, and to ensure AI is used for the betterment of the people.

To make sure AI-guided weapon systems are controllable, they should be designed for human-machine collaboration, with humans controlling the process. There should be interfaces and feedback mechanisms that give operators regular updates on developments, so timely intervention can be made if and when needed.
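
The following is a minimal, purely illustrative sketch of such a human-in-the-loop arrangement, written in Python. Every name in it (EngagementRequest, Decision, automated_assessment, human_in_the_loop) is hypothetical and invented for this example; it is not the design of any real system. The point it illustrates is that the automation may only reject or escalate, and only an explicit operator approval allows an action to proceed.

# Purely illustrative sketch of a human-in-the-loop gate; all names here
# (EngagementRequest, Decision, automated_assessment, human_in_the_loop)
# are hypothetical and not part of any real system.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    ESCALATE = "escalate"

@dataclass
class EngagementRequest:
    target_id: str
    confidence: float     # model confidence that the identified target is valid
    classification: str   # e.g. "military", "civilian", "unknown"

def automated_assessment(req: EngagementRequest) -> Decision:
    # The machine may only reject or escalate; it never authorizes on its own.
    if req.classification != "military" or req.confidence < 0.99:
        return Decision.REJECT    # fail safe on any ambiguity
    return Decision.ESCALATE      # even high-confidence cases go to a human

def human_in_the_loop(req: EngagementRequest, operator_decision: Decision) -> bool:
    # Final authority rests with the operator's explicit approval.
    if automated_assessment(req) is Decision.REJECT:
        return False
    return operator_decision is Decision.APPROVE

# Example: an ambiguous target is rejected by the automation even if the
# operator would have approved it, so the default is always "do not act".
request = EngagementRequest(target_id="T-042", confidence=0.97, classification="unknown")
print(human_in_the_loop(request, Decision.APPROVE))   # False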

Countries collaborating to formulate international rules and regulations should highlight the importance of global treaties and agreements, similar to the Biological Weapons Convention and the Chemical Weapons Convention, in regulating the use of AI in military applications. Setting technological standards and promoting best practices can help ensure AI weapon systems meet safety and ethical requirements.

          Besides, AI systems need to be more transparent, and countries need to follow an open and transparent development process, which can be reviewed and verified if and when such a need arises. This will help identify potential problems and make the systems more reliable. And independent auditing firms should be hired to annually audit the manufacturers' accounts and regularly conduct inspections to make sure they are complying with international regulations.

          There is also a need to strengthen the ethical and legal frameworks to ensure the application of AI systems for military use aligns with humanitarian and international laws, and to hold the manufacturers of AI weapon systems accountable if they violate the established norms.

          More important, control and monitoring mechanisms should be established to ensure human supervision in AI decision-making processes, because only humans can effectively prevent automated systems from going rogue.

Measures should also be taken to strengthen cybersecurity through multi-layered security arrangements such as encryption, authentication, intrusion detection and emergency response mechanisms, in order to thwart hacking and tampering attempts and, if need be, quickly restore operations after a cyberattack or system failure.
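
As a small illustration of the "authentication" layer mentioned above, the following sketch uses Python's standard hmac module to tag and verify commands so that tampering in transit can be detected. The key handling and command strings are assumptions made purely for this example, not part of any real command channel.

# Illustrative only: message authentication for a command channel using
# Python's standard hmac module; key provisioning is assumed, not shown.
import hmac
import hashlib
import secrets

SECRET_KEY = secrets.token_bytes(32)   # in practice, provisioned and rotated securely

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    # Attach an HMAC-SHA256 tag so any tampering in transit can be detected.
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    # Reject any command whose tag does not match; compare in constant time.
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

command = b"HOLD_FIRE"
tag = sign_command(command)
print(verify_command(command, tag))        # True: authentic command
print(verify_command(b"OPEN_FIRE", tag))   # False: tampered command is rejected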

In short, countries should engage in international cooperation and information sharing on AI military applications to collectively overcome the technological challenges, while taking measures to ensure rules and regulations are not breached, because breaches could create chaos, leading to misjudgments and wrong decisions that could trigger conflicts. Global efforts should be aimed at reducing the risks of conflict and ensuring that control of AI's military applications remains in human hands.

          The author is the director at the Laboratory of Human-Machine Interaction and Cognitive Engineering, Beijing University of Posts and Telecommunications.

          The views don't necessarily represent those of China Daily.
