
          Collaborative and human-centric approach to AI governance vital for the world

          By Chen Haiming | chinadaily.com.cn | Updated: 2024-10-21 09:51
          Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken, Feb 19, 2024. [Photo/Agencies]

Artificial Intelligence (AI) has emerged as a transformative force, promising to enhance productivity, improve decision-making and address a variety of global challenges. However, alongside its vast potential come significant ethical considerations, employment disruptions and safety concerns. As we navigate this complex landscape, it is crucial that AI governance adopt a collaborative and human-centric approach that balances innovation with safety, upholds ethical standards and ensures that trustworthy AI benefits all of humanity.

Collaboration is imperative in AI governance because of the rapidly evolving nature of a technology that transcends national borders. Various stakeholders, including governments, international organizations, academia, private sector companies and civil society, must come together to create a comprehensive framework that addresses the multifaceted challenges posed by fast-evolving AI models. "We are seeing life-changing technological advances…And life-threatening new risks—from disinformation to mass surveillance to the prospects of lethal autonomous weapons," says UN Secretary-General António Guterres.

As such, prioritizing collaboration at all levels is essential. Effective AI governance necessitates a multi-stakeholder approach that engages diverse actors to contribute their perspectives and expertise. Governments can share effective practices and regulatory frameworks, while technology companies can collaborate with academics to conduct ethical assessments of their algorithms. Civil society organizations and citizens should be central to these efforts, ensuring that the voices of affected communities are heard and promoting transparency through public dialogues and participatory decision-making processes.

Moreover, this collaboration should be framed within the guidelines set forth by the United Nations, which advocates for global standards and fosters dialogue on AI governance. One valuable mechanism in this context is international soft law, which refers to regulatory instruments that, although not legally binding, can still guide international conduct and enhance cooperation. On 21 March 2024, the UN General Assembly adopted a landmark resolution on the promotion of "safe, secure and trustworthy" artificial intelligence systems for sustainable development.

          As AI technologies advance, policymakers face the dual challenge of fostering innovation while ensuring safety. This delicate balance is essential; excessively restrictive regulations may stifle innovation, whereas lenient regulations could lead to detrimental societal consequences. To achieve equilibrium, ongoing dialogue among stakeholders is necessary. Engaging technologists in the regulatory process can help identify risks early in the development phase.

More importantly, adaptive regulatory frameworks, which are designed to evolve alongside AI technologies, can maintain this balance, ensuring that safety measures are integrated without impeding innovative solutions. AI governance should also draw on lessons from existing models of technology governance, such as those for nuclear, biological and chemical weapons.

          At the core of effective AI governance is a commitment to human values and rights. AI systems must be designed with users in mind, ensuring equity, accountability and transparency. This human-centric approach recognizes that technology should serve people and contribute positively to societal well-being. Ethical considerations, such as bias mitigation, informed consent and data privacy, are vital facets of safeguarding individual rights. In addition, a human-centric focus entails recognizing the potential risks associated with AI, such as the perpetuation of existing societal biases or the misuse of technology for surveillance and control. Therefore, regulatory frameworks must ensure responsible AI development and implement measures to protect individuals from harm.

          One of the most pressing concerns regarding AI is its potential impact on employment. As automation and AI-driven technologies evolve, certain jobs may become obsolete, thereby resulting in significant economic and social repercussions. "AI can endanger workers, worsen poverty and lead to unemployment and instability," says Hisham Khogali. Therefore, addressing job displacement is a critical aspect of AI governance.

Addressing AI-driven unemployment requires concrete countermeasures. Proactive investment in workforce retraining and upskilling programs is necessary to prepare individuals for the jobs of the future. What's more, financial experts should consider imposing additional taxes on businesses that benefit from AI automation, with the proceeds used as relief funds to compensate workers who have lost their jobs to this automation. An educational paradigm that nurtures human creativity, critical thinking and emotional intelligence, skills that are less likely to be automated, will empower individuals and help minimize the risks associated with AI-induced job displacement.

Despite these risks and possible negative impacts, AI holds immense potential to substantially contribute to the achievement of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals. By leveraging AI, nations can effectively address complex challenges such as poverty, climate change and healthcare. For instance, AI can enhance efficiency in resource management, allowing for smarter agriculture and energy solutions, while in the healthcare sector, AI innovations can lead to better diagnostics and personalized treatment plans, thus significantly improving health outcomes globally.

However, realizing this potential hinges on ensuring that AI technologies are accessible to developing countries, thereby bridging the digital divide. International cooperation plays a pivotal role in this context, as countries must collaborate to share cutting-edge knowledge and advanced technologies so as to narrow the gap between high- and low-income countries. Harnessing the benefits of AI for the United Nations Sustainable Development Goals requires a cohesive and coherent approach to governance that prioritizes inclusivity. Scientific and technological powers should refrain from suppressing the technological development of other countries under the pretext of geopolitics and ideology, and from hindering the export of AI technology and advanced chips.

As one of the leading countries in AI research and implementation, China actively advocates a cooperative and people-centered global AI governance model aimed at managing risks while fostering growth. It calls for the responsible use of AI to benefit mankind and actively promotes AI capacity-building in developing countries. China's emphasis on AI governance is evident in its hosting of the 2024 World AI Conference and the High-Level Meeting on Global AI Governance, as well as in the unanimous adoption, at the 78th session of the United Nations General Assembly, of the resolution it proposed to strengthen international cooperation on artificial intelligence capacity building.

          In short, the path to effective AI governance requires a collaborative and human-centric approach that successfully balances innovation with safety. Adopting a multi-stakeholder framework rooted in global cooperation will empower individuals and guide the responsible development of AI technologies. Although the future of AI is filled with great potential, it is essential that we collaborate to establish a governance framework that prioritizes ethical principles, human welfare and sustainable development for all, while mitigating risks.

          The author is a professor at the Foreign Studies College and director of the Center for Global Governance and Law, Xiamen University of Technology. The views don't necessarily reflect those of China Daily.
