Self-management

The lost art of admitting what you don’t know

Even LLMs are starting to show this worrying human tendency

{"text":[[{"start":6.48,"text":"When I applied to Cambridge university, my first interview was with a professor who invited me to sit, pressed his fingertips together, looked at me searchingly, then said: “Is the nation-state in decline?”"}],[{"start":21.92,"text":"My heart fell. Not only did I not know the answer, I didn’t even really understand the question. But I had heard — possibly from my state school, or else from the university — that these interviews were “not testing what you know, but how you think”. So I took a breath and said: “I’m not sure what a nation-state is.”"}],[{"start":48.02,"text":"It worked out well. The professor said that was fine, asked me a few simple questions to help me figure out the term, then a few more as we worked our way through the original question. In the end, I was offered a place."}],[{"start":64.56,"text":"It was a formative experience for me. Even so, I have found it harder and harder to say the words “I don’t know” as the years have gone on, and I don’t think I’m alone. "}],[{"start":77.15,"text":"In many ways, this is understandable. The more “expert” you become, the more you think you ought to know, and the more you fear your credibility will suffer if you ever admit otherwise."}],[{"start":91.60000000000001,"text":"But the aversion seems to have spread to all sorts of places, including settings where it should be perfectly fine to say you don’t know. A student at a prestigious US business school recently told me of a fellow student who sat in front of her during lectures. When the professor asked a question, he would type it surreptitiously into ChatGPT, then read out the answer as though it was his own."}],[{"start":120.70000000000002,"text":"What is going on? One possibility is the lack of role models. Confidence is rewarded in public life. It is rare to hear the phrase “I don’t know” in TV interviews. Little wonder: many media training courses teach people the “ABC” technique to avoid having to say those words when faced with a question to which they do not know the answer (or do not want to give it). Acknowledge the question. Bridge to safer ground (“What’s really important to know is . . . ”). Communicate the message you have already planned to convey."}],[{"start":160.20000000000002,"text":"New technology has also made it easier to bluff. First search engines, and now large language models like ChatGPT, have made it more simple than ever to avoid the discomfort of admitting what you don’t know."}],[{"start":177.73000000000002,"text":"And yet, one of the great ironies of LLMs is that they have the exact same tendency we do. When they do not know the answer to a question, for example because they can’t access a vital file, they often make something up rather than say “I don’t know”. When OpenAI put its o3 model through one particular test, for example, the company found that it “gave confident answers about non-existent images 86.7 per cent of the time.”"}],[{"start":213.13000000000002,"text":"There are costs to not admitting what you don’t know. For a start, you miss the opportunity to learn. Most experts are remarkably generous to those who ask curious questions. Some of my favourite journalism projects over the years have begun with an interesting question to which I didn’t know the answer when I began."}],[{"start":236.56000000000003,"text":"There is also the risk that you undermine your credibility even more when you bluff. 
We have probably all had an experience like this at some point: an impressive polymath pundit or publication ventures into your own area of expertise, and you realise with a shock that they don’t know what they’re talking about. After that, you begin to doubt them on every topic."}],[{"start":265.11,"text":"The AI industry is particularly alert to this risk. The technology companies know their tools will be of limited use in sectors like law and medicine if they continue to give confident-but-wrong answers some of the time. Efforts are under way to teach LLMs how to say “I don’t know”, or to at least express their level of confidence for a given answer. OpenAI says it has trained its new model, GPT-5, to “fail gracefully when posed with tasks that it cannot solve”."}],[{"start":304.01,"text":"But this is not an easy problem to fully fix. One problem is that LLMs do not have a concept of “truth”. Another is that they have been trained by humans steeped in the culture we have just discussed. “In order to achieve a high reward during training, reasoning models may learn to lie about successfully completing a task or be overly confident about an uncertain answer,” as OpenAI has put it."}],[{"start":336.56,"text":"In other words, while these are not necessarily the tools we need, they might just be the tools we deserve."}],[{"start":353.64,"text":""}]],"url":"https://audio.ftmailbox.cn/album/a_1755443067_7650.mp3"}

Copyright notice: This article is the property of FT中文网. No organisation or individual may reproduce, copy or otherwise use all or part of it without permission; violations will be pursued.
