
          Trial and error reduction

          By Zhou Yujia and Liu Yiqun | China Daily Global | Updated: 2026-02-24 21:22
Illustration: MA XUEJING/CHINA DAILY

          AI can help governments reduce the cost of policy mistakes, but it should support rather than replace accountable decision-making

          Governments worldwide are entering a period in which costly policy mistakes are becoming increasingly difficult to absorb. Slower economic growth, aging populations and rapid technological change are narrowing fiscal space and magnifying the consequences of governance misjudgments. In this environment, the central challenge for policymakers is no longer whether reform is necessary, but how it can be pursued with lower risk and fewer irreversible errors.

          According to the OECD, public spending in advanced economies already averages more than 40 percent of GDP, resulting in highly rigid fiscal structures that make both policy reversals and corrective adjustments significantly more costly. At the same time, global aging is expected to reduce annual GDP growth by between 0.5 and 1 percentage point in the coming decades, while slower productivity growth further limits the capacity to absorb policy misallocation errors. These pressures are compounded by the structure of today’s interconnected economies. Digital platforms, global supply chains and financial integration allow policy spillovers to travel faster and further, and the World Bank has repeatedly shown that elevated policy uncertainty is associated with weaker investment and employment outcomes, particularly in open economies. Under such conditions, trial-and-error governance becomes not only costly, but systemically risky.

          It is against this backdrop that artificial intelligence is gaining attention as a practical governance tool. Beyond administrative efficiency, AI is being integrated into analytical and predictive functions that inform decision-making and service design. Global research by Ernst & Young and Oxford Economics finds that 96 percent of public-sector organizations recognize the need to accelerate AI adoption. Meanwhile, OECD surveys indicate that more than 60 percent of member governments already use AI tools in policy analysis, forecasting and public service design, reflecting a growing shift toward data-supported decision-making.

          What distinguishes AI-enabled governance from traditional policymaking is its point of entry into the policy process. Rather than being used only after policies are launched, AI is increasingly applied at the front end of decision-making. Before a policy is finalized, simulation tools are being used to test how different options might affect households, companies, regions and public finances under a range of conditions. Instead of relying solely on historical experience or limited pilot programs, policymakers explore multiple scenarios in a virtual environment.

          This approach allows governments to ask more concrete questions in advance. How would different income groups respond to a change in taxation or subsidies? How might companies adjust investment or employment under new regulatory requirements? Which regions or sectors would bear the greatest adjustment costs? By modeling these responses before implementation, potential risks and unintended effects can be identified earlier, when policy design remains flexible.
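Questions like those above can be explored with even a very simple microsimulation. The sketch below is a toy illustration of the idea, not any government's actual model: the income groups, subsidy rules, and parameters are all hypothetical assumptions, chosen only to show how two candidate designs can be compared across groups, and costed, before either is implemented.

```python
# Toy policy microsimulation: compare two hypothetical subsidy designs
# across income groups before implementing either one.
# All numbers are illustrative assumptions, not real policy parameters.

GROUPS = {            # group -> (annual income, population share)
    "low":    (15_000, 0.40),
    "middle": (45_000, 0.45),
    "high":   (120_000, 0.15),
}

def flat_subsidy(income):
    """Design A: every household receives the same fixed amount."""
    return 1_000

def means_tested_subsidy(income):
    """Design B: subsidy phases out above a hypothetical income threshold."""
    return max(0, 1_500 - 0.02 * max(0, income - 20_000))

def simulate(design):
    """Return the transfer received by each group and the
    population-weighted fiscal cost per capita."""
    per_group = {g: design(income) for g, (income, _) in GROUPS.items()}
    total_cost = sum(design(income) * share for income, share in GROUPS.values())
    return per_group, total_cost

for name, design in [("flat", flat_subsidy), ("means-tested", means_tested_subsidy)]:
    per_group, cost = simulate(design)
    print(f"{name}: {per_group}, cost per capita = {cost:.2f}")
```

Even at this toy scale, the comparison surfaces the kind of distributional trade-off the text describes: the means-tested design concentrates transfers on lower-income groups at a different fiscal cost than the flat design, and changing a single assumed parameter (the phase-out rate) visibly shifts who bears the adjustment.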

          For example, in China’s urban governance, Hangzhou’s City Brain initiative uses AI-driven traffic simulation and real-time data analytics to model and optimize urban mobility before implementing control strategies. As a result, average travel speed has increased by around 15 percent, and emergency response times have been significantly reduced by refining signal timing and resource allocation. International institutions such as the World Bank similarly use AI-enabled microsimulation models to assess the distributional and fiscal impacts of alternative subsidy designs before implementation, reducing the social costs of policy experimentation.

Traditionally, governments learn which policies work largely by observing outcomes after implementation. AI-supported simulation shifts part of that learning process to the pre-decision stage. It enables policymakers to compare alternative designs, stress-test assumptions and identify trade-offs without exposing society to the full costs of real-world experimentation.

Moreover, simulations can reveal that policies designed with positive intentions may nonetheless produce adverse outcomes, such as inadvertently raising compliance costs in ways that disadvantage small companies and weaken their competitiveness and capacity to innovate. Crucially, AI-supported experimentation does not merely help policymakers choose among predefined options; it can also surface latent dynamics and risk channels that may not be apparent through conventional reasoning or past experience.

          Clear boundaries, however, remain essential. AI can offer projections and structured analysis, but it cannot supply legitimacy or resolve value conflicts. Even the most accurate model can only describe what is likely to happen under certain assumptions; it cannot determine what should be done. Decisions involving fairness, social priorities and ethical trade-offs remain the responsibility of human judgment.

          This principle is widely reflected in international governance debates. The European Union’s Artificial Intelligence Act, together with frameworks developed by the United Nations and its agencies, places human oversight at the center of AI governance. These initiatives reflect a shared understanding that AI should support accountable decision-making rather than replace it. Data can inform choices, but public deliberation and institutional checks ultimately determine whether decisions are accepted.

          Lower technical costs of experimentation do not eliminate governance risk. A central vulnerability of AI-supported policymaking lies in its limited transparency and weak explainability. Many AI systems are built on highly complex models that are difficult for non-specialists to interpret, making it challenging to clearly trace how specific policy recommendations are produced. When decision-support tools function as black boxes, policymakers may struggle to justify policy outcomes, while errors or distortions can remain concealed until they manifest as tangible real-world consequences.

          Algorithmic bias is another major challenge. When AI systems are trained on historical data and deployed without sufficient oversight, they can replicate and institutionalize existing social inequalities. During the COVID-19 pandemic, the United Kingdom’s exam regulator Ofqual introduced an algorithm to standardize students’ grades after nationwide exams were canceled. The model combined teachers’ assessments with schools’ historical performance. As a consequence, students from schools with weaker historical outcomes — many of which are located in lower-income and disadvantaged communities — were systematically downgraded, even when individual performance indicators were strong. The episode illustrates how, without adequate oversight, AI systems can embed structural inequality into formal policy decisions, thereby weakening public trust in governance.
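The mechanism behind the Ofqual episode can be shown with a deliberately simplified sketch. The blending rule, the weight, and the scores below are assumptions for illustration, not the actual Ofqual standardization model: the point is only that any rule which pulls an individual's predicted grade toward the school's historical average will give identical individual performance different outcomes depending on the school's past results.

```python
# Simplified illustration (NOT the actual Ofqual algorithm) of how
# blending an individual assessment with a school's historical average
# downgrades strong students at historically weaker schools.

def blended_grade(teacher_assessment, school_historical_avg, weight=0.6):
    """Pull the individual assessment toward the school's past average.

    `weight` is the share of the final grade taken from school history;
    0.6 is a hypothetical value chosen for illustration.
    """
    return weight * school_historical_avg + (1 - weight) * teacher_assessment

strong_student = 90  # identical individual performance in both cases

at_high_performing_school = blended_grade(strong_student, school_historical_avg=85)
at_disadvantaged_school = blended_grade(strong_student, school_historical_avg=55)

print(at_high_performing_school)  # 87.0
print(at_disadvantaged_school)    # 69.0
```

The same individual assessment of 90 yields materially different final grades purely because of where the student studied, which is precisely the structural inequality the text describes being embedded into a formal policy decision.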

Ultimately, effective governance depends not only on technical capability, but on public confidence and institutional credibility. AI can reduce the financial and administrative costs of policy experimentation, but it cannot substitute for transparency, accountability or social consensus. World Bank research consistently shows that countries combining digital tools with institutional reform achieve stronger policy outcomes than those relying on technology in isolation.

          As governments increasingly turn to AI to navigate uncertainty, governance capacity must evolve in parallel. Legal frameworks need to be strengthened, algorithmic use must be made transparent and channels for public participation should be expanded. AI can help societies experiment more safely, but whether it leads to better governance will depend not on computing power, but on the principles that guide its use.


          Zhou Yujia is an assistant researcher at the Department of Computer Science and Technology at Tsinghua University. Liu Yiqun is a professor at the Department of Computer Science and Technology, and the director of the Office for Research at Tsinghua University.

          The authors contributed this article to China Watch, a think tank powered by China Daily. The views do not necessarily reflect those of China Daily.

          Contact the editor at editor@chinawatch.cn.
