
          Are we losing the ability to think due to LLMs?

          By Virginia Dignum | China Daily | Updated: 2025-06-24 06:13
[Illustration by SONG CHEN/CHINA DAILY]

          In the age of large language models (LLMs) and generative AI, we are witnessing an unprecedented transformation in how knowledge is produced, disseminated and consumed. These tools can summarize dense texts, write code, draft legal contracts, or respond to philosophical questions in seconds.

          LLMs, we are told, make us more efficient, simplify complex work, automate mundane tasks and allow us to focus on what matters. But as we marvel at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they subtly eroding our capacity for independent thought, judgment and critical reflection?

          Efficiency is not a neutral term. It reflects values, what we choose to prioritize, what we define as valuable, and what we are willing to sacrifice. The current narrative around generative AI treats efficiency as synonymous with progress. It suggests that the faster something is done, the better. But faster is not always better. And not everything that can be automated should be.

The popular belief is that LLMs "free up" cognitive bandwidth; that is, they allow humans to delegate repetitive thinking to machines and reserve their energy for more reflective tasks. But the opposite is often true. The more intellectual labor — writing, summarizing and decision-making, for example — we hand over to AI, the less we engage with it ourselves. Instead of reserving our thoughtfulness for higher tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically.

An apt example is the increasing amount of synthetic content online. Not only are images and text being fabricated by machines, but so too, often, are the public reactions to them. Content no longer spreads because it is true or relevant, but because of its emotional pull. Fake images spark fake outrage in comments, which then fuels real engagement from users who cannot distinguish between what is human and what is AI generated.

          The result is a synthetic discourse loop that simulates social consensus. "Everyone is talking about it," we hear, when in fact no one is — until the content, and the reaction to it, are manufactured to serve the profit-driven strategy of platforms. Their goal is not informed conversation, but to draw continued attention, which translates into short-term revenue.

          This is not just a technical challenge of detecting what's real. It's an epistemological crisis. When falsehoods are propped up by simulated reactions and amplified by algorithms optimized for attention, the notion of public discourse itself becomes unstable. Our sense of what others believe is no longer based on shared experience or deliberation, but on machine-curated illusions. In such an environment, critical thinking doesn't just decline, it is structurally discouraged.

          So what do we really mean by "efficiency"? If it means shortcutting the time it takes to write a report, perhaps we have succeeded. But if it means replacing the intellectual effort that creates depth, coherence and reflection, then it's not a gain; it's a loss. The moment we accept LLMs as thought substitutes, rather than thought aids, we begin to erode the very conditions under which human reasoning thrives: questioning, dialogue, uncertainty and contradiction.

          This is particularly dangerous at a time when democratic values are at stake, when critical reflection and informed disagreement are essential. The legitimacy of democratic processes relies on citizens engaging with ideas, evaluating claims, and forming judgments. But when engagement is replaced by reaction to machine-generated one-liners, that is, content crafted for manipulation rather than understanding, our political agency is undermined. We don't just risk being misled; we risk no longer knowing what it means to evaluate truth for ourselves.

          There is a temptation to see LLMs as neutral tools. But they are not. They are shaped by the data they are trained on, the goals of their developers, and the market incentives that drive their deployment. Their outputs reflect a history of biases, omissions and assumptions that are often invisible to users. And the more seamlessly these outputs integrate into our workflows, the more easily they escape scrutiny. In this way, the danger is not only what the AI says, but that we stop asking how it came to say it.

          To call this "efficiency" is to ignore what is actually happening: a transfer of epistemic authority from humans to machines, without the structures of accountability and transparency that should accompany such a shift. We are being asked to trust a system we cannot interrogate, on the basis that it sounds plausible and delivers quickly.

          But speed is not the same as understanding. And plausibility is not truth.

          Instead of fetishizing efficiency, we need to refocus on resilience: the capacity of individuals and societies to question, adapt and resist manipulation. This means investing in AI literacy — not just how to use the tools, but how to critique them. It means recognizing that no AI can replace the ethical, cultural and contextual dimensions of human reasoning. It means being willing to slow down, to question the output, and to value the effort of thinking as much as the result.

Governments, tech companies and citizens each have a role to play. Regulation is necessary, but it is not sufficient. The foundation of responsible AI is not technical compliance; it is ethical intent. That begins with "question zero": When should AI be used? Not every problem needs an AI solution, and not every deployment delivers a benefit. Responsible AI does not put AI first; it puts people first. It starts by asking why, not by rushing to deploy. Tech developers must embed responsibility into the very design of systems, not as an afterthought but as a guiding principle.

          More important, individuals must be empowered to question AI outputs, understand their implications, and resist the normalization of passive dependence. Only by centering human judgment and agency can we ensure AI serves society, rather than reshaping it to fit commercial imperatives.

          There is no turning back the presence of LLMs in our lives. But we can choose how to live with them. The question is not whether they will think for us, but whether we will let them define what it means to think at all. Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight. Anything less is not progress. It is surrender.

The author is a professor of computer science and the director of the AI Policy Lab at Umeå University, Sweden.

          The views don't necessarily reflect those of China Daily.

