
          Special attention needed to ensure AI safety, US professor says

By Mike Gu in Hong Kong | chinadaily.com.cn | Updated: 2025-01-14 19:00
          US computer science professor Stuart Russell talks to the media at the 2025 Asia Financial Forum (AFF) in Hong Kong on Tuesday. MIKE GU / CHINA DAILY

Stuart Russell, a distinguished professor of computer science at the University of California, Berkeley, emphasized the need for special attention to the safety of artificial intelligence (AI) during its development, speaking at a group interview at the 2025 Asia Financial Forum (AFF) held in Hong Kong.

For safety reasons, AI systems need behavioral red lines, Russell said. "The problem with general-purpose AI is that it can go wrong in so many ways that you can't easily write down what it means to be safe. What you can do is write down some things that you definitely don't want the systems to do. These are the behavioral red lines," he said, explaining to reporters why building behavioral red lines for AI is important.

          "We definitely don't want AI systems to replicate themselves without permission. We definitely don't want them to break into other computer systems. We definitely don't want them to advise terrorists on how to build biological weapons," Russell said.

He added that he hopes AI development will always remain under human control, rather than becoming uncontrollable.

This is why it is crucial to establish behavioral red lines at the early stages of AI development, especially with the help of governments, Russell said.

          "So, you can make a list of things that you definitely don't want to do. It is quite reasonable for governments to say that before you can put a system out there, you need to show us that it's not going to do these things," he said.

Russell pointed out that AI gives rise to new forms of cybercrime. Criminals are currently using AI to craft targeted emails by analyzing social media profiles and accessing personal emails, he said. This allows AI to generate messages that reference ongoing conversations while impersonating someone else, he added.

Russell, however, stated that AI also strengthens defenses against such crimes. "On the other side, we have AI defenses. I'm part of a team at various universities in California working together to use AI as a defense to screen emails against phishing attacks, to look at the activities of algorithms operating within the network, and to see which ones are possibly engaging in various activities," he said.

When asked about AI competition between countries, Russell said, "I think, in general, competition is healthy." However, he emphasized that excessive competition in AI should be approached with caution, as it could jeopardize AI safety. "Safety failures damage the entire industry. For example, if one airline doesn't pay enough attention to safety and airplanes start crashing, that damages the whole industry," he said.

AI cooperation grounded in safety is both permissible and economically sensible, Russell said. "In collaboration with several AI researchers from the West and China, we've been running a series of dialogues on AI safety, specifically to encourage cooperation on safety. Those have been quite successful. The behavioral red lines I mentioned earlier are a result of those discussions," he said.

          Regarding AI cooperation between China and the United States, Russell stated that both countries now place a strong emphasis on ensuring AI safety.

          "I think there's at least as much interest in that direction in China as there is in the US. Several senior Chinese politicians have talked about AI safety and are aware of the risks to humanity from uncontrolled AI systems. So, I really hope that we can cooperate on this dimension," he said.

          "The US and China have agreed not to allow AI to control the launch of nuclear weapons, which I think is sensible," he added.

          mikegu@chinadailyhk.com
