
          What can be governed in AI and what won't be

          By XUE ZHAOFENG | China Daily | Updated: 2025-11-29 09:51
[Illustration by Jin Ding/China Daily]

Regulators looking at artificial intelligence should begin with a simple but often forgotten truth: not everything can be governed. Some forces, such as technological progress, shifts in costs and benefits, and the enduring aspects of human nature, push so strongly that swimming against them is costly and often futile. This is not an argument against regulation; it is a call for candid regulation that weighs marginal costs against benefits, defends public goods and accepts what has effectively become irreversible.

Economists draw a useful distinction between welfare, the things people truly want, and toil, the work they do to obtain them. Frédéric Bastiat's timeless lesson, that people want light, not candles, is instructive here. When electricity made illumination abundant and cheap, we celebrated the gains even as candle-makers adjusted. AI is doing for routine cognitive labor what electricity once did for illumination: delivering widespread welfare gains even as some workers face displacement and uncertainty.

To be sure, AI also has its limits. Current systems generate probabilistic outputs, not deductive proofs, so they can be inconsistent and occasionally even hallucinate. Biased or imperfect training data amplify errors, while proprietary and copyrighted material limits access to knowledge. Besides, machines do not experience human values or empathy. Yet many of these problems can be addressed. Researchers are combining probabilistic models with symbolic checks to reduce hallucinations. Licensed access to specialized databases raises reliability, and machines can free humans to focus on value judgments, creativity and empathy. History shows people do not wait for tools to be perfect before adopting them; they adopt those that offer convenience.

          Privacy is another area where full control is unlikely. Whether personal information remains hidden or gets exposed ultimately depends on its use-value. As transaction costs fall and the value of personalized services rises, information tends to flow toward higher-value uses. Only those living off the grid still retain the old notion of complete privacy. Most people accept privacy tradeoffs to enjoy digital services. Governments, often citing public safety, also seek access. Under commercial and public pressures, citizens have grown semi-transparent, and AI's analytical power accelerates that trend. Privacy protection remains essential, but regulators should choose targeted, high-value protection rather than chase anachronistic total secrecy.

The drive to differentiate among individuals, whether for hiring, underwriting or marketing, is similarly hard to stop. Societies have long invested in distinctions because they create value and lower costs: education screens talent, medical examinations reduce adverse selection in insurance, and analytics sharpen matching. If AI improves precision, incentives to use it will persist. The real risk here is erroneous discrimination: excluding people on the basis of spurious correlations. Legal rules, litigation, reputational penalties and competition can deter such abuses. But the underlying push toward finer distinctions is not something regulation will easily roll back.

          Human dependence on tools and delegated decision-making is also irreversible. Delegation saves time and cognitive effort, but someone must still bear responsibility for the outcomes. AI alters decision processes but not the need to allocate responsibility. In practice, liability will be apportioned to those best able to prevent harm — providers, users, insurers and regulators — following familiar law-and-economics logic. The regulatory focus should therefore be on efficient responsibility allocation, not on forbidding the use of helpful tools.

          Likewise, it is difficult to stop attempts to capture transient profit in financial markets through algorithmic trading. In reasonably efficient markets, price movements reflect information arrival, and short-lived arbitrage rewards speed. Trying to ban the race for speed would be arbitrary and could be counterproductive. Instead, regulation should aim to prevent actions that distort price discovery or entrench insiders.

          Where regulation can be decisively beneficial is in protecting truth and safety. Falsehoods have accompanied every new communication medium since the printing press. In domains where accuracy matters — medicine, infrastructure, public safety — society will and should pay for reliable information. Technical measures such as provenance, watermarking, traceability and robust reputational systems, combined with legal standards, can raise the cost of deception and help trustworthy providers stand out.

          These examples are illustrative, not exhaustive. New technologies inevitably bring both nuisance and progress. Concerns about the erosion of privacy and misinformation are real, but personal displeasure should not be conflated with structural reality. The salient question for policymakers is not whether to regulate, but where regulation will yield net social benefits and where it will merely struggle against a rising tide. That pragmatic clarity must guide those who defend sovereignty, protect public goods, and seek to harness AI's undeniable welfare gains.

          Beyond law and technology, governance rests on social choices. Sensible AI governance should recognize limits: it should protect where protection matters most, and adapt where change is relentless.

The author is an adjunct professor at the National School of Development, Peking University, and the author of Economics Lecture Notes (Graphic Edition).

          The views don't necessarily reflect those of China Daily.
