
How to prevent AI from provoking the next financial crisis

New systems have benefits for markets, but risks to stability must be managed

Amid talk of job cuts due to artificial intelligence, Gary Gensler thinks robots will actually create more work for financial watchdogs. The US Securities and Exchange Commission chair deems an AI-driven financial crisis within a decade “nearly unavoidable” without regulatory intervention. The immediate risk, in other words, is not a robot takeover but a new financial crash.

Gensler’s critics argue that the risks posed by AI are not novel, and have existed for decades. But the nature of these systems, created by a handful of hugely powerful tech companies, requires a new approach beyond siloed regulation. Machines may make finance more efficient, but could do just as much to trigger the next crisis.

Among the risks Gensler pinpoints is “herding”, in which multiple parties make similar decisions. Such behaviour has played out countless times: the stampede of financial institutions into packages of subprime mortgages sowed the seeds of the 2008 financial crisis. The growing reliance on AI models produced by a few tech companies increases that risk. The opaque nature of the systems also makes it difficult for regulators and institutions to assess which data sets they rely on.

Another danger lies in the paradox of explainability, noted by Gensler in a paper he co-wrote in 2020 as an MIT academic. If AI predictions could be easily understood, simpler systems could be used instead. It is their ability to produce new insights based on learning that makes them valuable. But it also hampers accountability and transparency; a lending model based on historical data could produce, say, racially biased results, but identifying this would take post facto investigation.

Reliance on AI also entrenches power in the hands of technology companies, which are increasingly making inroads into finance but are not subject to strict oversight. There are parallels with the world of cloud computing in finance. In the west, the triumvirate of Amazon, Microsoft and Google provides services to the biggest lenders. This concentration raises competition concerns, and affords these companies at least the theoretical ability to move markets in the direction of their choice. It also generates systemic risk: an outage at Amazon Web Services in 2021 affected companies ranging from iRobot, maker of the Roomba robot vacuum, to dating app Tinder. An issue with a trading algorithm could trigger a market crash.

Watchdogs have pushed back against the awkward nexus of technology and finance in the past, as with Meta’s digital currency, Diem, formerly known as Libra. But mitigating the risks from AI requires either expanding the perimeter of financial regulation or pushing authorities across different sectors to collaborate far more effectively. Given the potential for AI to affect every industry, that co-operation should be broad. The history of credit default swaps and collateralised debt obligations shows how dangerous “siloed” thinking can be.

The authorities will also need to take a leaf from the book of those convinced that AI is going to conquer the world, and focus on structural challenges rather than individual cases. The SEC itself proposed a rule in July addressing possible conflicts of interest in predictive data analytics, but it was focused on individual models used by broker-dealers and investment advisers. Regulation should study the underlying systems as much as specific cases.

Neo-Luddism is not warranted; AI is not inherently negative for financial services. It can be used to speed up the delivery of credit, support better trading or combat fraud. That regulators are engaging with the technology is also welcome: further adoption could accelerate data analysis and develop institutional understanding. AI can be a friend to finance, if the watchmen have the right tools to keep it on the rails.
