
          Experts talk on AI technology

          China Daily | Updated: 2023-06-12 08:00

          Editor's note: Artificial intelligence, many say, is the most advanced technological innovation. AI applications based on large models can serve transportation, energy and other fields, thereby boosting the economy. But there are fears that AI can disrupt social and political activities. Thanks to supportive policies, China has emerged as a pioneer in AI development in recent years. Two experts share their views on the issue with China Daily.

          Good and bad of AI face-swapping technology

          By Calvin Tang

AI face-swapping technology uses recognition models to capture the facial features, expressions, body movements and voice characteristics of a target person, and then uses that information to create fake videos that can deceive viewers. In 2019, a "deepfake" user on a forum in the United States used this technology to swap the faces of multiple Hollywood stars onto the bodies of pornographic video actors, and then publicly released the code, leading to the spread of the technology.
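The swap itself follows a simple pipeline: detect a face in the source and in the target, align the source face to the target's face region, and blend it in. The following minimal sketch illustrates that pipeline with classical computer vision tools (OpenCV) rather than the deep-learning models used by the deepfake tools the article refers to; the image file names are hypothetical.

import cv2
import numpy as np

def detect_face(image):
    # Return the bounding box (x, y, w, h) of the largest detected face.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return max(faces, key=lambda box: box[2] * box[3])

def swap_face(source, target):
    # Paste the source face onto the target's face region with seamless blending.
    sx, sy, sw, sh = detect_face(source)
    tx, ty, tw, th = detect_face(target)
    src_face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
    mask = 255 * np.ones(src_face.shape, src_face.dtype)
    center = (tx + tw // 2, ty + th // 2)
    # seamlessClone blends the pasted face into the target's lighting and skin tone.
    return cv2.seamlessClone(src_face, target, mask, center, cv2.NORMAL_CLONE)

if __name__ == "__main__":
    src = cv2.imread("source.jpg")   # hypothetical input images
    dst = cv2.imread("target.jpg")
    cv2.imwrite("swapped.jpg", swap_face(src, dst))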

The advancement of artificial intelligence technology brings people both benefits and harms. On the one hand, technological progress supports the growth of the entertainment industry and helps overcome obstacles to completing works even after the death of a prominent actor. For example, AI face-swapping technology allowed Paul Walker to make a posthumous appearance in the movie Fast and Furious 7 after his untimely death during filming.

On the other hand, such technology risks being misused to infringe on rights related to personal dignity, such as reputation and image rights. Examples include illicitly trading face-swapped videos and stealing user information for fraudulent purposes.

          Multiple risks posed by AI face-swapping tech

AI face-swapping technology is still in its infancy, and social institutions and legal frameworks need to better govern the following risks.

People with malicious intent can exploit AI face-swapping technology to produce convincing fake videos for fraudulent purposes. Such fraud spans a broad range of illicit practices, including identity theft, social engineering attacks, phishing scams, political manipulation, financial fraud and consumer fraud.

Perpetrators can steal identities and impersonate real individuals online, commit crimes, orchestrate social engineering attacks, fabricate videos featuring relatives or friends of victims, and solicit money and sensitive personal information. They can also weaponize the technology for phishing scams, disseminating realistic videos and images online to trick victims into sharing sensitive information or downloading malicious software. The technology can be used to perpetrate financial fraud too, for example by fabricating videos of trusted figures making endorsements or promises to win over investors or customers.

Additionally, e-commerce livestreamers have been known to use AI face-swapping technology to appear with celebrity faces and deceive consumers into making purchases.

The misuse of such technology takes three primary forms: pornography-related crimes, defamation and rumors, and telecommunications and financial fraud. The first and most pervasive use of AI face-swapping technology was in the pornography industry, where face-swapped videos of well-known figures generated significant traffic, magnified the harm to victims and made such crimes difficult to prevent.

Defamation and rumors spread through fake news and videos, prompting people to propagate misinformation. A false video of former US president Donald Trump, in which he appeared to criticize Belgium's internal affairs, caused considerable public discontent in Belgium in 2019. And the spread of such rumors can easily lead to social unrest and undermine social trust.

Additionally, fraudsters engaged in telecommunications and financial fraud can use AI face-swapping, voice-swapping and fake videos to imitate the targeted persons' relatives and friends, prompting the targets to lower their guard. Because AI face-swapping technology can create such realistic fakes, people are seriously threatened.

The risks of not addressing the problem

Further, the challenges posed by AI technology have not yet been integrated into the criminal legal system, making it difficult for the authorities to investigate related crimes. For instance, the unreasonable collection and use of user information by the ZAO app, launched by social media app developer Momo, drew a strong backlash from mainstream Chinese media outlets, including People's Daily and Guangming Daily, and was questioned by the public.

However, ZAO was able to evade legal responsibility by relying on its one-sided user agreements and market advantage, and the Ministry of Industry and Information Technology could address the case only under the standard-clause provisions of the Contract Law. Facial data has yet to be classified as personal information under the Criminal Law, necessitating further legal clarification and judicial interpretation.

          Also, platforms exploit contractual freedom to weaken the legal basis for criminal liability, making it challenging for authorities to demand platform cooperation.

At their core, the risks posed by AI face-swapping technology stem from three technical factors: personal information is easily abused without consent; authentic-looking videos and images prompt people to lower their guard; and legal loopholes make it difficult for the authorities to track down and punish wrongdoers through platforms. Based on these factors, the authorities can take targeted measures to address the risks.

There is a need to bring the personal information required by AI face-swapping technology, of which facial data is the most crucial, within the legal definition of personal information. The authorities, for example, could clarify the statutory interpretation of personal information or issue judicial interpretations to determine that facial data is protected personal information. This is because facial data is easily infringed upon when AI face-swapping technology is used, and such infringement can have a particularly serious negative impact.

          Moreover, some platforms exploit contractual freedom to exclude criminal liability and continue to illegally collect and use personal information. The authorities can classify such cases as "illegally collecting citizens' personal information by other means" and include them in the "crime of infringing on citizens' personal information".

This would help solve the authorities' problem of being unable to gather evidence and punish wrongdoers, prompting platforms to cooperate with investigations and deterring users from misusing the technology. In this way, the government can strengthen the fight against crimes such as obscenity, defamation, rumormongering, fraud and personal information infringement, and address the misuse of AI face-swapping technology in social governance.

The authorities should also strengthen the regulations on the management of internet information services. The central government has issued a regulation which explicitly requires service providers to add identifiers that do not affect normal use, store log information, and assist the authorities in gathering evidence and investigating relevant crimes. The regulation also requires service providers to notify users and obtain their consent before editing users' personal information, in order to reduce the possibility of personal information being abused without the users' knowledge.

However, the authorities should further strengthen the regulations and take measures to hold platform managers accountable for any misuse of personal information. Specific measures could include requiring platforms to submit a list of senior compliance managers and their contact information when registering a business.

          Once a violation is confirmed, the authorities can punish the platform according to the severity of the case, including but not limited to private warnings to responsible persons or companies, imposing fines on violators, prohibiting licensed persons or companies from operating for a certain period, revoking enterprise practice licenses, and listing them as enterprises with abnormal business operations or as enterprises that seriously violate laws and regulations.

In addition, the authorities should cooperate with research institutions and enterprises to develop countermeasures for AI face-swapping technology, enhance public awareness of the misuse of personal information so that people can guard against it, and provide protection against such misuse. The fundamental reason AI face-swapping technology poses a social risk is that the information it presents seems authentic; as long as this remains the case, wrongdoers can use the technology to commit crimes.

Boost R&D to prevent misuse of personal information

Therefore, netizens need to learn to identify fraudulent face-swapped content in order to prevent crime. Since AI can be trained to recognize human voices, facial features and body postures to create face-swapping videos, the same principle can be used to train AI to identify fake ones.
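To make the detection principle concrete, here is a minimal sketch in Python (assuming PyTorch and torchvision are available) that fine-tunes a small image classifier to label frames as real or face-swapped. The folder layout and hyperparameters are hypothetical placeholders; production deepfake detectors rely on far larger models and datasets.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: data/train/real/*.jpg and data/train/fake/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a small pretrained backbone and replace its final layer
# with a two-class head (real vs face-swapped).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):                    # a few epochs, for illustration only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")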

          And the government, research institutions and enterprises should work closely together to strengthen the research and development of countermeasures and upgrade them, publicize relevant information on social risks, and enhance the public's awareness, digital literacy and media literacy to prevent the misuse of personal information.

In conclusion, the government should incorporate facial information into the legal definition of personal information; further improve the regulations on the management of internet information services to hold platform managers accountable; and cooperate with research institutions and enterprises to develop countermeasures for AI face-swapping technology and enhance public awareness in order to prevent the misuse of personal information.

The author is an EMPA candidate at Tsinghua University and a member of China Retold.
