The majority of financial firms are planning pilots or deployments of generative artificial intelligence, according to a poll taken during a webinar hosted by Broadridge Financial Services, the fintech provider.
In a Broadridge webinar on 26 October, Generative AI: Lessons From the Front Line, a poll found that only 8.2% of attendees have no plans to use the technology. Just under one fifth, 17%, are already using generative AI for multiple live use cases, with the remainder either planning pilots or deployment.
Brant Swidler, principal product manager, Amazon Bedrock at Amazon Web Services (AWS), said on the webinar that those numbers will start to skew towards more live applications in the next year.
Joseph Lo, head of enterprise platforms at Broadridge, said on the webinar that an increasing number of applications that businesses are going to use or buy from vendors will incorporate generative AI, so the technology will become very commonplace. He said: “I think generative AI is going to be the impetus for firms to get their digitalisation programmes in place.”
Roger Burkhardt, chief technology officer of capital markets and AI at Broadridge, moderated the webinar and said many organisations are starting with efficiency use cases, where they focus on low-risk internal use before changing their fundamental relationship with customers.
“When the productivity gains become high enough, you are changing the business and the technology becomes transformational,” he added.
Burkhardt continued that he had been using AI for more than 20 years, starting in his role as chief technology officer at the New York Stock Exchange, where the exchange used AI for market surveillance. However, generative AI only burst into the public consciousness about a year ago, when the adoption of ChatGPT and its chat interface led to a radical democratisation of the technology, much as the internet democratised access to data.
“There is tremendous interest in applying generative AI in financial services, which I hear from C-level clients,” said Burkhardt. “However, many financial services firms are grappling with how to capture value while ensuring safety, accuracy and compliance in a very regulated industry, so not many organisations have deployed it in client-facing applications.”
Lo said financial services firms are really excited about being able to use generative AI chatbot-based tools to deliver knowledge to their users, and also to personalise the customer experience.
“We are seeing a lot of that in customer service, operations and even the front office where people want to be able to access all the information that firms have already collected, or access documentation, procedures and guides,” added Lo.
BondGPT
Another possible use case is aggregating complex and disparate data sources for insights using natural language models, rather than having to go through a data scientist or data analyst.
In June 2023 LTX, a subsidiary of Broadridge, launched BondGPT, which incorporates OpenAI GPT-4 to help users rapidly identify relevant corporate bonds on the LTX corporate bond trading platform and make better trading decisions. In October this year LTX released BondGPT+, the enterprise version of BondGPT, which includes new features such as the incorporation of clients’ own data and the ability to integrate into clients’ enterprise applications, including trading workflows. Users can tailor BondGPT+ to their requirements by adding preferences, such as a user or firm policy of only trading high-yield bonds above a certain credit rating.
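As a purely hypothetical illustration of that kind of preference rule, and not the BondGPT+ implementation itself, a minimum-rating filter might look something like the Python sketch below; the simplified rating scale and sample bonds are invented.

```python
# Hypothetical sketch of a firm-level preference rule of the kind described above:
# only surface bonds at or above a minimum credit rating.
# The simplified rating scale and the sample bonds are invented for illustration.
RATING_ORDER = ["CCC", "B", "BB", "BBB", "A", "AA", "AAA"]


def meets_minimum_rating(rating: str, minimum: str = "BB") -> bool:
    """Return True if a bond's rating is at or above the firm's configured minimum."""
    return RATING_ORDER.index(rating) >= RATING_ORDER.index(minimum)


bonds = [
    {"name": "Issuer X 5.25% 2030", "rating": "BBB"},
    {"name": "Issuer Y 8.00% 2028", "rating": "CCC"},
]
eligible = [bond for bond in bonds if meets_minimum_rating(bond["rating"])]
print(eligible)  # only the BBB bond passes the minimum-rating preference
```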
Lo said: “We believe BondGPT is the first large language model-powered application for fixed income, allowing bond traders, portfolio managers and anyone in the ecosystem to surface pre-trade insights and workflows using generative AI.”
Swidler said financial services firms have no real interest in using generative AI “out of the box” as they want to train models on their own firm’s data. Amazon Bedrock is a managed offering that allows clients to access a wide variety of foundation models and data from different providers to develop their generative AI applications. Many clients begin by building internal functionality with generative AI, which helps them reduce risk as internal employees can test and discover use cases.
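The following is a minimal sketch of how a client might call a foundation model through Amazon Bedrock using the AWS SDK for Python (boto3); it assumes credentials are configured and the chosen model is enabled in the account, and the model ID, region and prompt are illustrative rather than a recommendation.

```python
# Minimal sketch: invoke a foundation model through Amazon Bedrock with boto3.
# Assumes AWS credentials are configured and the model is enabled in the account.
# The model ID, region and prompt are illustrative placeholders.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "\n\nHuman: Summarise the key risks of deploying chatbots in a regulated firm.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)

print(json.loads(response["body"].read()))
```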
“Financial services firms are really hesitant to use any out of the box models because they tend not to understand the firm’s compliance or regulatory environment,” added Swidler.
BondGPT was purpose-built for institutional fixed income users, so the application relies on verified, curated data sources and allows system administrators to configure various levels of data access through user and firm level entitlements. The enhanced BondGPT+ allows compliance officers at client firms to add custom rules based on their firm’s unique compliance and risk management processes.
“When we delivered BondGPT we had to make sure our application was not going to give advice on what was a good bond, or what is the most appropriate bond for any given situation,” added Lo. “We started uncovering rocks and understanding the needs of our users, and built guardrails for what people could ask and also the types of responses BondGPT could give.”
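As an illustration only, and not the actual BondGPT guardrails, a simple input and output screen of the kind Lo describes might look like the sketch below; the blocked phrases are invented examples.

```python
# Illustrative sketch (not the actual BondGPT implementation) of simple guardrails:
# screen questions before they reach the model and screen responses before they
# reach the user. The blocked phrases are invented examples.
BLOCKED_PATTERNS = ("should i buy", "best bond", "recommend a bond", "investment advice")


def is_allowed_question(question: str) -> bool:
    """Reject advice-seeking questions before they are sent to the model."""
    q = question.lower()
    return not any(pattern in q for pattern in BLOCKED_PATTERNS)


def review_response(response: str) -> str:
    """Withhold responses that read like investment advice; a real system would
    run far richer compliance checks here."""
    if "you should buy" in response.lower():
        return "Response withheld: this assistant does not provide investment advice."
    return response
```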
Risks
Swidler said large language models are probabilistic, so there is always some inherent risk in the output they generate, and firms typically take one of two approaches.
In the first approach, companies ingest data from external sources through prompt engineering: a technique called retrieval augmented generation (RAG) analyses prompts, and the model executes a search query if it decides it needs external information. The other approach is to confine the model to responding only with proprietary data.
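A minimal sketch of the RAG pattern might look like the following, with search_internal_documents and call_llm as hypothetical stand-ins for a firm's own document search index and its approved model endpoint.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# `search_internal_documents` and `call_llm` are hypothetical placeholders for a
# firm's own document search index and approved model endpoint.
from typing import List


def search_internal_documents(query: str, top_k: int = 3) -> List[str]:
    """Placeholder: query a curated, firm-approved document store."""
    raise NotImplementedError


def call_llm(prompt: str) -> str:
    """Placeholder: call whichever foundation model the firm has approved."""
    raise NotImplementedError


def answer_with_rag(question: str) -> str:
    # 1. Retrieve passages from verified sources rather than relying on model memory.
    passages = search_internal_documents(question)
    context = "\n\n".join(passages)
    # 2. Ask the model to ground its answer in the retrieved context only.
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```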
Lo explained that generative AI and large language models in general essentially predict the next word or phrase based on the probabilistic nature of the data, so the technology is good at language, but not at maths or predictive analysis.
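As a toy illustration of that point, a language model assigns probabilities to candidate next words and samples one, which is why its output varies from run to run and why it handles language more reliably than calculation; the words and weights below are invented.

```python
# Toy illustration of next-word prediction: candidate continuations are scored and
# one is sampled according to its probability. The words and weights are invented.
import random

next_word_probs = {"yield": 0.5, "maturity": 0.3, "coupon": 0.2}
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])  # output varies from run to run
```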
“You have to understand the scope of the data,” added Lo. “Users love to trick the LLM to make it say bad stuff so there are a lot of gotchas there to watch out for.”
Another limitation is hallucination, where the model can inject made-up information into its responses. Swidler gave the example of lawyers using ChatGPT to write legal briefs, where it made up previous cases. He recommended that a human always review the content that is produced.
The Amazon Bedrock team spends a lot of time on prompt composition.
“If you say write a blog, then it will write a blog,” Swidler added. “If you say create or imagine a blog, those slight tweaks can have drastic impact on what the model thinks it should be doing and could make it more imaginative versus more factual.”
Implementation
Swidler said that after employees have played with generative AI in a test environment, the identified use cases should be funnelled to development teams who can implement applications across the enterprise, and that users need to be educated on how to interact with these tools.
“It is easy to implement low-risk applications versus more difficult challenging applications that require more interactions from an end-user,” he added.
Lo said the first priority for implementing generative AI is to equip users to understand the capabilities of the new technology and to give them realistic expectations.
“It’s really important to get your internal stakeholders involved, especially in financial services,” Lo added. “People are involving their legal, compliance and risk teams early on.”
Once users recognise safe patterns for using large language models, those patterns can be applied across other business areas.
“The pioneering work we did with BondGPT is now being adopted in other trading, operations and reporting applications,” said Lo. “Good AI democratises the tools, but firms who have done a good job in collecting, organising and structuring data will have an advantage.”