William Webster

Obstacles to Using LLMs


In previous articles, I've discussed some of the economic benefits of using Large Language Models (LLMs), as well as the problems concerning hallucinations and how simple steps can mitigate their effects.


After learning about the applications of LLMs in treasury and risk management, you might want to try them yourself. However, if you work for a bank or building society, it may not be that straightforward. Many firms have prohibited the use of these models. Let's explore what's causing this and whether there are steps we can take to move forward. (The following discussion concerns the use of LLMs in treasury and risk management, where the models predominantly apply to information used internally within the business, not externally with customers.)

 

Challenges in Implementing LLMs

 

Why have some firms not allowed their employees to use GPT-4, or similar models, at work? This list is not exhaustive, but you may recognise some of these challenges in your own business. Let’s look at three issues of concern:

 

  1. Risk Management: This includes legal issues, data security, privacy, and the risk of misinformation from hallucinations. It's about the apprehension of potential negative consequences of using LLMs, including legal complications, data breaches, and regulatory non-compliance. In sensitive businesses like financial institutions, these risks require careful appraisal. At no point can you say the risk is nil, and that's what frightens people: how can you quantify what you are getting yourself into? You can't; it's a matter of using judgment - something we do all the time in financial markets.

  2. Technological Understanding and Trust: A gap in understanding of what LLMs are capable of, their potential benefits, and their limitations. This lack of familiarity, or outright mistrust, results in a wait-and-see approach, with decision-makers hesitant to adopt new technology until it is widely accepted within their peer group.

  3. Organisational Culture and Resistance to Change: This is related to operational inertia and a reluctance to adopt new technology, especially when integrating LLMs into established workflows. Many traditional firms resist change, not just because of a lack of immediate perceived benefits but also because it suits the short-term interests of those involved.

 

Practical Steps

 

The economic argument for using LLMs is compelling. Even the simplest application of these models to improve workflow can boost efficiency by at least 10%, leading to savings in headcount and costs. If you are experiencing resistance, three practical steps may help:

 

  1. Understand where the blockage is: Identify the concern. Who is raising it, and what do they need to know to become comfortable?

  2. Address individual use cases: For example, in treasury, if you want to use an LLM to prepare for ALCO meetings, put forward a case. Show how an LLM can repurpose something you write for a different audience - as I've discussed in a previous article.

  3. Form an AI committee for collaborative decision-making: Initiating an AI committee within the firm brings together diverse viewpoints. This committee will serve as a platform to assess AI's varied applications and implications in the business and is a path towards drafting an AI policy – something regulated industries need to take seriously.

 

Need for Policy

 

The catalyst most likely to trigger change is your peer group starting to use the technology. There's no doubt that adoption will accelerate once this happens. Starting now and putting things in writing helps you think through the unique circumstances affecting your firm.


As these technologies evolve, expertise is key to enhancing the workflow and efficiency and maintaining a competitive market position. Security will be high on the agenda, including the potential for accessing LLMs through an Application Programming Interface (API) and ensuring that sensitive data is not used for training by the LLM provider.
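To make that last point concrete, one common control is to redact sensitive identifiers before any text leaves the firm for an external API. The sketch below is purely illustrative - the function name, placeholder labels, and patterns are my own assumptions, not a prescribed control, and a production version would need a far more thorough set of patterns:

```python
import re

# Illustrative patterns only - a real control would cover many more identifiers
REDACTION_PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{8}\b"),                # 8-digit account numbers
    "SORT_CODE": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),  # UK sort codes
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace sensitive identifiers with placeholders before the text
    is sent to an external LLM provider."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Pay account 12345678, sort code 20-00-00"))
```

A step like this can sit in front of every API call, so staff never have to remember to strip customer details by hand.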


In collaboration with the IT department, the committee will need to assess encryption methods. These methods are essential for securing data transmitted over the Internet, particularly when accessing LLMs through APIs.
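As a small illustration of what "assessing encryption methods" can mean in practice, Python's standard library already enforces sensible transport-security defaults for HTTPS connections. The sketch below simply inspects those defaults; it is a demonstration, not a substitute for a proper security review:

```python
import ssl

# ssl.create_default_context() returns client settings suitable for HTTPS:
# server certificates must be validated and hostnames must match the cert.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on
```

Checks like these give the committee something concrete to record in a policy: data sent to an LLM API should travel over TLS with certificate verification enabled, never over plain HTTP.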


Furthermore, the committee will need to oversee data storage, especially with external cloud service providers like Amazon, Microsoft, and Google, as well as LLM providers such as OpenAI and Anthropic.


There is also a need to be aware that models like LLMs can exhibit bias. Could this bias feed into active decision-making without anyone noticing?


Ironically, firms that currently don't allow LLMs to be used can't stop employees from using these models to solve work problems at home. By not addressing the issues, the business has, by default, turned a blind eye to what employees are doing. That's what I call a risk.

 
