Overseas Voices | The Impact of Artificial Intelligence on Central Banks
Editor's Introduction
Artificial intelligence (AI) is rapidly transforming the global economy: it is redefining not only how firms operate and make decisions, but also how central banks think about economic policy. Rapid advances in deep learning allow AI to extract valuable information from vast amounts of unstructured data, with wide-ranging applications and far-reaching effects.
First, AI's applications in finance are especially prominent, notably in credit assessment, fraud detection and asset management. By imposing mathematical structure on huge data sets through machine learning, AI can uncover hidden patterns and greatly improve the efficiency of business processes, making it a key tool in the financial system. This strength is fully on display in the nowcasting of economic activity. Traditional forecasting models are often constrained by data lags and narrowly defined tasks, whereas AI models can deliver accurate predictions in new domains through "zero-shot learning", performing well even on previously unseen tasks. AI can also combine structured and unstructured data to further improve forecast accuracy. For example, by fusing social media text or satellite images with traditional time series data, AI can give central banks a more complete picture of the economy and support more precise decisions. In the BIS's Project Aurora, AI models tested on payment data seeded with simulated instances of money laundering significantly outperformed traditional rule-based methods, particularly when data were shared across borders.
Second, AI also shows great potential in payment systems. Money laundering networks exploit the complexity of cross-border financial transactions for criminal purposes, and AI can help central banks identify such illicit activity more effectively. Project Aurora further demonstrates that AI models excel at detecting money laundering, especially in privacy-preserving cooperative settings, which is crucial for combating financial crime. The widespread use of AI, however, also brings new risks, particularly for financial stability and cyber security. Reliance on a handful of algorithms could amplify market volatility and procyclicality: AI could fuel herding, liquidity hoarding and runs on financial institutions. While AI can support early warning systems that monitor potential systemic risks, central banks should remain cautious in deploying it. Moreover, the spread of AI may enable more sophisticated cyber attacks, calling for stronger defences against vulnerabilities in large language models to prevent sensitive information from being manipulated or leaked.
Finally, at the macroeconomic level, AI's impact on the labour market and the economy depends on how many workers it displaces, how much it raises productivity and how many new tasks it creates. The introduction of AI will raise output in both the short and the long run, but its effect on inflationary pressures depends on the relative strength of aggregate demand and aggregate supply. If households expect higher incomes, they will tend to consume more today, pushing up inflationary pressure in the short run; if such expectations are weak, AI's short-run effect on inflation will be more muted.
Facing the opportunities and challenges that AI brings, central banks need to rethink their role in data collection, governance and use. As AI increasingly relies on unstructured data, central banks must not only strengthen their internal data management capabilities but also deepen cooperation with private data providers. Building a "community of practice" to share knowledge, data and AI tools is also vital for central banks: such cooperation can lower the barriers to using AI tools and help central banks better meet the economic challenges AI poses.
Author | Hyun Song Shin (Economic Adviser and Head of Research, Bank for International Settlements (BIS))
Artificial Intelligence And The Economy: Implications For Central Banks
Speech by Hyun Song Shin
Economic Adviser and Head of Research, Bank for International Settlements
on the occasion of the Bank’s Annual General Meeting
in Basel on 30 June 2024
Artificial intelligence (AI) has taken the world by storm and set off a gold rush across the economy, with an unprecedented pace of adoption and investment in the technology. This year’s special chapter discusses the impact of AI on the financial sector and the real economy and lays out the implications for central banks.
The technology behind AI can be traced back to the early days of computing itself. But it was the advent of deep learning in the 2010s, based on the combination of massive amounts of data and computing power, that set the stage for today’s AI applications. Today’s machine learning models excel at imposing mathematical structure on unstructured data to identify patterns of interest in vast amounts of data.
One of the leading machine learning methods is the “embedding” of words in a vector space so that words become arrays of numbers. The vectors preserve meaning, in that similar words are closer together in the vector space.
The latest models take this concept one step further. Thanks to so-called transformer architecture, they take account of the surrounding context in the embedding of a word rather than having just one embedding for each word. Think of the word “bond”. It could refer to a fixed income security, a connection or link, or 007 James Bond. Depending on the context, the vector embedding for the word “bond” will be different. It could lie geometrically close to words such as “treasury” and “yield”, or to words such as “family” and “cultural”, or perhaps to words such as “spy” and “martini”.
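The geometry described above can be made concrete with a toy sketch. The vectors and dimensionality below are invented purely for illustration (real models use hundreds of dimensions and learn the values from data); the point is only that cosine similarity measures how "close" two embeddings are, so the two senses of "bond" land near different neighbours.

```python
import math

# Toy 3-dimensional "embeddings" -- invented numbers, for illustration only.
embeddings = {
    "bond_finance": [0.9, 0.1, 0.0],   # "bond" in a fixed income context
    "treasury":     [0.8, 0.2, 0.1],
    "bond_family":  [0.1, 0.9, 0.0],   # "bond" in a social context
    "family":       [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity: near 1 when two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The contextual embedding of "bond" sits close to words from the same
# context and far from words belonging to the other sense.
print(cosine(embeddings["bond_finance"], embeddings["treasury"]))  # high
print(cosine(embeddings["bond_finance"], embeddings["family"]))    # low
```

In a transformer, these context-dependent vectors are produced automatically by attending to the surrounding words, rather than looked up from a fixed table.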
This ability to take account of context sets transformer-based large language models apart from previous models. Previous expert systems were tailor-made for specific applications. They needed skilled operators to develop and refine them. For example, AlphaGo, the Go-playing AI system that made headlines in 2016 by beating the world champion Lee Sedol, is highly specialised. It is great at playing Go but would struggle to answer the question, "What would John Maynard Keynes have said?"
In contrast, the latest AI applications are versatile so-called “zero-shot learners” that can tackle previously unseen tasks. At worst, they are “few-shot learners” that need only minimal additional training to become conversant in a new, unfamiliar domain. This means they are suitable for use cases beyond textual analysis.
The source of this versatility lies in the combination of vast reservoirs of data and the massive computing power of the latest generation of hardware (Graph 3). The latest large language models have been trained on the totality of the text and non-text data on the internet. In this way, AI has moved from narrow systems that solve specific tasks to more general systems that deal with a wide range of tasks, and all in ordinary language rather than in specialised code.
Financial Sector Applications and Central Bank Use Cases
The financial sector is a particularly promising area for the application of AI. Already, machine learning has made substantial inroads in the business processes of private financial institutions. Examples include credit assessment and lending, assessing damages in insurance, and various applications in asset management. Important use cases also arise in fraud detection and compliance tasks such as customer verification.
Is the reason for AI’s prevalence in the financial sector because AI has some kind of magic ability to see things? The answer is no. Rather, the “secret ingredient” is data, or more precisely, a lot of data. The ability to impose mathematical structure on unstructured data makes AI ideally suited to identify patterns that are otherwise obscured.
This ability “to find the needle in the haystack”, even in previously unseen haystacks, could offer breakthroughs in nowcasting economic activity and in the monitoring of financial systems for the buildup of risks. AI thus stands poised to impact central banks as users of the technology.
A key application of large language models is nowcasting real activity or inflation. Traditionally, nowcasting has been hindered by the limited availability of timely data and the need to develop and train models for narrowly defined tasks.
Large language models could help overcome the narrow scope of previous nowcasting models. As versatile zero-shot learners, they can provide forecasts or nowcasts without fine-tuning, and so find needles in previously unseen haystacks. Just as large language models are trained to guess the next word in a sentence using a vast database of textual information, macroeconomic forecasting models can use the same techniques to forecast the next numerical observation. For the AI, any input is just an array of numbers, and the pattern recognition abilities that apply to words can be equally applied to statistical series.
Combining time series data with other forms of unstructured data could further enhance the capabilities of nowcasting models. For example, adding non-standard data, such as satellite images, text from social media and so on, could provide additional context to the numerical time series data. The AI model could then be further refined and trained for the nowcasting exercise.
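The "next observation as next token" analogy can be illustrated with a deliberately minimal sketch: predict the next value of a series by finding the historical window most similar to the most recent one and returning the value that followed it. The series, window length and matching rule are all invented for illustration; actual nowcasting models are vastly richer, but the underlying idea of predicting from recognised patterns in context is the same.

```python
def predict_next(series, window=3):
    """Predict the next value by nearest-pattern matching: find the past
    window closest (in squared distance) to the most recent window and
    return the observation that followed it."""
    recent = series[-window:]
    best_dist, best_next = float("inf"), None
    for i in range(len(series) - window):
        candidate = series[i:i + window]
        dist = sum((a - b) ** 2 for a, b in zip(candidate, recent))
        if dist < best_dist:
            best_dist, best_next = dist, series[i + window]
    return best_next

# A toy quarterly growth series with a repeating pattern (invented data).
growth = [1.0, 1.2, 0.8, 1.0, 1.2, 0.8, 1.0, 1.2]
print(predict_next(growth))  # -> 0.8, the value that followed the closest past window
```

A language model does something conceptually similar at enormous scale: the "window" is the whole context, and the matching is learned rather than hand-coded.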
Although these are powerful tools, central banks should not succumb to “magical thinking” – that somehow the tools alone will bring miraculous outcomes. Timely and plentiful data are key to the success of nowcasting applications. AI excels at finding “needles in the haystack”, but there needs to be a haystack with the needles to be found.
But perhaps even more than in nowcasting, it is in the payment system where AI holds the greatest potential. Money laundering networks exploit the complexity of interconnections across firms both within and across borders to obscure the nature of financial transactions.
AI tools can improve the detection of money laundering networks, as illustrated by Project Aurora from the BIS Innovation Hub. Aurora uses simulated instances of money laundering activities that are sprinkled into the payment data. Aurora compares the performance of various machine learning tools with that of the prevailing rule-based approach to assess how well the instances of money laundering are caught by the various approaches.
The comparison occurs under three scenarios: first, transaction data that are siloed at the bank level. Second, national-level pooling of data. And third, cross-border data cooperation using privacy-preserving methods that do not divulge the underlying data to the authorities in other jurisdictions. The results show that machine learning models outperform the traditional rule-based methods prevalent in most jurisdictions. The pooling of data at the national level gives another boost to performance. Most strikingly, machine learning methods really excel when data from different jurisdictions are shared in a privacy-preserving way. Data cooperation improves detection dramatically over the current rule-based method.
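Why pooling helps can be seen in a stylised example. This is not Project Aurora's actual methodology, and all accounts, counts and the threshold below are invented: a simulated laundering ring splits its transfers across two banks so that neither bank alone sees enough activity to trip a per-account rule, while the pooled view does.

```python
RULE_THRESHOLD = 3  # flag an account with more than 3 large transfers

# Large-transfer counts as seen by each bank in isolation (invented data).
bank_a = {"ring_account": 2, "normal_1": 1}
bank_b = {"ring_account": 2, "normal_2": 1}

def flagged(accounts):
    """Apply the simple rule: flag accounts exceeding the threshold."""
    return {acct for acct, n in accounts.items() if n > RULE_THRESHOLD}

# Siloed view: each bank applies the rule only to its own data.
siloed = flagged(bank_a) | flagged(bank_b)

# Pooled view: counts are combined across banks before applying the rule.
pooled_counts = {}
for bank in (bank_a, bank_b):
    for acct, n in bank.items():
        pooled_counts[acct] = pooled_counts.get(acct, 0) + n
pooled = flagged(pooled_counts)

print(siloed)  # set() -- the ring slips through each silo
print(pooled)  # {'ring_account'} -- visible once data are combined
```

Privacy-preserving techniques aim to deliver the pooled view's detection power without any jurisdiction divulging its underlying transaction data.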
There are of course risks arising from AI, two of which deserve particular attention. First is financial stability. Reliance on the same handful of algorithms could amplify procyclicality and market volatility by exacerbating herding, liquidity hoarding, runs and fire sales. But AI could also be harnessed for more effective financial stability monitoring. It could help in building early warning indicators that alert supervisors to emerging pressure points known to be associated with system-wide risks.
The second risk is a greater prevalence of cyber attacks. As well as more sophisticated versions of familiar tricks such as phishing, there could be entirely new sources of cyber risk that exploit weaknesses in large language models to make the model behave in unintended ways, or to reveal sensitive information. But here again, just as AI increases cyber risks, it can be harnessed by cyber defenders in their threat analysis and monitoring of computer networks. In a recent BIS survey of central bank cyber experts, most central banks deem AI to be effective or moderately effective at combatting cyber attacks. They believe that AI systems can outperform traditional methods in enhancing cyber security management, especially in areas such as the automation of routine tasks or threat detection.
AI and the Macroeconomy
When it comes to the labour market and the macroeconomy, the impact of AI will depend on different channels: how many workers AI displaces, by how much it raises productivity, and how many new tasks it creates. The relative strength of these channels will determine aggregate employment and wage dynamics, and it also has implications for inequality.
What is clear is that AI will expand both aggregate demand and supply – and thereby lead to an increase in output (Graph 8). But the effects on inflationary pressures in the near term depend on the relative impact on aggregate demand versus supply. If households anticipate higher incomes tomorrow, they will spend more today. Inflationary pressures depend on whether this additional spending outstrips supply. Conversely, when there is less anticipation, AI will be less inflationary in the short run.
Towards a Community of Practice Among Central Banks
Central banks will need to be attuned to these dynamics. But more generally, there is a need to rethink central banks’ role in collecting and using data, and how central banks should respond to the challenges.
Traditionally, most data were collected and hosted within statistical agencies, including the central bank, with clearly defined access rights. And public institutions have traditionally acted as data providers to private sector firms and the general public.
But our intuition for "data" appeals to existing structured data sets organised around traditional statistical classifications. The age of AI will rely increasingly on unstructured data drawn from all walks of life, with more and more data collected by autonomous AI agents. Central banks are already using AI and unstructured data to fulfil their mandates. But much of the unstructured data reside in the hands of the private sector, which increasingly acts as data provider.
So, one important question is how much central banks would rely on in-house data and how much they would source externally. Another key challenge is setting up the necessary IT infrastructure. This is very expensive but crucial in the age of AI. Staffing also arises as a key priority. The challenge of having the right mix of skills will only grow.
How can central banks address these challenges? For one, the rising importance of data and the emergence of new data sources call for even greater attention to sound data governance practices. Another issue is the importance of metadata, or the "data about the data". In the future, data will be assembled increasingly by autonomous machine learning applications that need to be guided on what to look for. Metadata frameworks are hence crucial for data retrieval, as well as for better comparability.
Conclusion
Central banks have a history of successful collaboration in overcoming new challenges. In the era of AI, they are well-placed to expand cooperation in the fields of data and data governance, as well as in the technology itself.
We should not underestimate the efforts needed to harness the full potential of AI, but the fruits of cooperation in a community of practice will be considerable, and the BIS stands ready to play its part.
Translated by: Pu Rong
Reviewed by: Cui Jie
Source | Bank for International Settlements (BIS)
Layout editor | Fu Hengheng
Responsible editors | Li Jinxuan, Jiang Xu
Editor-in-chief | Zhu Shuangshuang