Unlocking the Potential of Large Language Models in Insurance Operations

Summary:

  • GenAI, particularly large language models (LLMs), can revolutionise insurance by automating core processes like data extraction and underwriting assessments.

  • While initial applications often focus on chatbots, LLMs can offer deeper benefits by embedding AI models directly into operational workflows.

  • To effectively leverage GenAI, insurers need to digitise processes, for example by using no-code/low-code platforms, to reduce the time and cost of integration.

  • For complex tasks, LLMs should be deployed with human oversight to validate and enhance decision-making while continuously improving the models.

  • Adopting GenAI with a strategic approach can help insurers improve efficiency, optimise processes, and stay competitive in an evolving market landscape.

Generative AI (GenAI) has been touted as a revolutionary breakthrough, promising transformative benefits across industries. While insurers and other organisations have been swift to establish proof-of-concepts (POCs) or adopt commercial tools harnessing this technology, the actual value of these early initiatives remains unclear. In this article, we will explore the workings of GenAI, its potential applications in insurance, and the steps necessary to effectively integrate this technology into existing processes.

GenAI explained

GenAI is a subset of artificial intelligence (AI) that gained prominence with ChatGPT's release in late 2022. Unlike traditional AI, which needs large datasets for specific tasks, GenAI models are pre-trained and versatile, excelling at handling unstructured data. At the heart of GenAI are large language models (LLMs) that generate text by predicting one word after another based on input. While they can sometimes produce inaccuracies (known as hallucinations), LLMs are highly effective at creating coherent text and mimicking human reasoning.
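To make the mechanics concrete, the sketch below shows the autoregressive loop in miniature: the model repeatedly predicts the next token and feeds its own output back in as input. The `predict_next_token` function is a hard-coded stand-in used purely for illustration, not a real model call.

```python
# Minimal sketch of autoregressive text generation: the model keeps
# predicting the next token and appending it to its own input.
# predict_next_token is a hard-coded stand-in, not a real model call.

def predict_next_token(context: str) -> str:
    """Stand-in for an LLM: returns a plausible next token for a given context."""
    canned = {
        "The claim was": " approved",
        "The claim was approved": " yesterday",
        "The claim was approved yesterday": ".",
    }
    return canned.get(context, "")

def generate(prompt: str, max_tokens: int = 10) -> str:
    text = prompt
    for _ in range(max_tokens):
        token = predict_next_token(text)
        if not token:        # stop when the stand-in has nothing more to add
            break
        text += token        # the output becomes part of the next input
    return text

print(generate("The claim was"))  # -> "The claim was approved yesterday."
```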

Sidebar: the LLM as an intern

Think of the LLM as a skilled intern joining your insurance company fresh out of university, equipped with a strong foundation but still learning the ropes. They come with a basic understanding of insurance from their studies but are still getting acquainted with the specific processes they need to support. However, they are eager and quick learners, able to read through lengthy documents and manage repetitive tasks when guided with clear instructions. This is how LLMs should be approached: do not rely on the LLM’s internal knowledge due to the risk of hallucination but leverage its ability to parse and evaluate complex information.
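Translating the intern analogy into practice, a minimal prompting pattern looks like the sketch below: the source document and explicit instructions are handed to the model so that it answers from the text in front of it rather than from memory. `call_llm` is a placeholder for whichever LLM provider you use, not a specific vendor API.

```python
# A minimal "grounded prompting" sketch: give the model the document and
# clear instructions instead of relying on its internal knowledge.
# call_llm is a placeholder for your LLM provider's SDK, not a real API.

def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to the LLM provider of your choice."""
    raise NotImplementedError

def check_policy_question(policy_wording: str, question: str) -> str:
    prompt = (
        "You are assisting an underwriting team.\n"
        "Answer ONLY from the policy wording provided below. If the answer "
        "is not in the text, reply 'Not stated in the document.'\n\n"
        f"--- POLICY WORDING ---\n{policy_wording}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```

The key design choice is the explicit fallback instruction: it steers the model towards admitting "not stated" instead of producing a plausible-sounding hallucination.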

The limited value of chatbots

For many individuals, ChatGPT was their first encounter with GenAI, which likely accounts for the early prevalence of chatbot applications across diverse platforms, ranging from Microsoft Copilot to enterprise-specific GPTs. Evidence [1] indicates that LLMs can enhance employee productivity, particularly for less experienced staff. However, these benefits are uneven and limited by the reliance on human interaction, which can hinder overall efficiency. In short, AI-assisted humans may achieve an incremental boost, but they remain the primary bottleneck in complex insurance processes.



How to deploy LLMs productively

To eliminate bottlenecks caused by human interfaces, AI models need to be seamlessly integrated into core insurance processes. This is particularly evident in the early, high-value application of data extraction. For instance, when extracting key risk information from incoming submissions, AI tools should automate the retrieval of submissions from their initial point of receipt and transfer the extracted data to the appropriate destination, such as the pricing system or risk database. This approach also applies to more complex scenarios, including the evaluation of underwriting standards or claims coverage assessments. Since the model primarily augments human decision-making for tasks of low to medium complexity, it needs to enhance, and ideally automate, process orchestration to drive meaningful efficiency gains.
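As an illustration of what "embedded in the workflow" means, the sketch below extracts a fixed set of fields from a submission and routes the result either straight to a pricing queue or to manual triage when fields are missing. It re-uses the `call_llm` placeholder from the earlier sketch; the field names and queues are invented for illustration and would map to your own intake and pricing systems.

```python
# Sketch of extraction embedded in the process rather than behind a chat UI:
# extract a fixed schema, then route the result downstream without a human
# hand-off unless something is missing. Field names are illustrative.
import json

REQUIRED_FIELDS = ["insured_name", "line_of_business", "sum_insured", "inception_date"]

def extract_risk_data(submission_text: str) -> dict:
    """Ask the model for a fixed JSON schema so output can flow to downstream systems."""
    prompt = (
        "Extract the following fields from the broker submission below and return "
        f"a JSON object with exactly these keys: {', '.join(REQUIRED_FIELDS)}. "
        "Use null for anything not stated.\n\n" + submission_text
    )
    return json.loads(call_llm(prompt))  # placeholder LLM call from the earlier sketch

def process_submission(submission_text: str, pricing_queue: list, triage_queue: list) -> None:
    """Push complete extractions straight to pricing; route gaps to manual triage."""
    data = extract_risk_data(submission_text)
    missing = [field for field in REQUIRED_FIELDS if data.get(field) is None]
    (triage_queue if missing else pricing_queue).append(data)
```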

Sidebar: from intern to analyst

Unlike interns who leave after a few months, LLMs are here to stay. They can continue to operate on the same instructions and become part of recurring processes, with their performance monitored through regular output assessments. At that point, LLMs take on a role more akin to an analyst, working through their own inbox of repetitive tasks rather than waiting for ad hoc human instructions. Taking on a more specific role does not necessarily require fine-tuning: while the point is still debated, industry practice has recently favoured prompt-centric approaches, which benefit from the rapid development of new foundation models.
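One way to monitor such a recurring "analyst" role is a small set of labelled reference cases that are re-scored whenever the prompt or the underlying foundation model changes, as sketched below. The classification task and the cases are invented examples, and `call_llm` is again the placeholder from the earlier sketches.

```python
# Sketch of monitoring a recurring LLM task through output assessments:
# re-score a small labelled reference set whenever the prompt or model changes.
# The reference cases below are invented for illustration.

REFERENCE_CASES = [
    {"submission": "Warehouse in Hamburg, sum insured EUR 2m ...", "expected": "property"},
    {"submission": "Fleet of 40 delivery vans, comprehensive cover ...", "expected": "motor"},
]

def classify_line_of_business(submission_text: str) -> str:
    prompt = (
        "Classify the line of business of the submission below as one of: "
        "property, motor, liability, marine. Reply with the label only.\n\n"
        + submission_text
    )
    return call_llm(prompt).strip().lower()  # placeholder LLM call

def run_assessment() -> float:
    """Share of reference cases the current prompt/model combination gets right."""
    hits = sum(
        classify_line_of_business(case["submission"]) == case["expected"]
        for case in REFERENCE_CASES
    )
    return hits / len(REFERENCE_CASES)
```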

Designing LLMs with humans in the loop

While LLMs can automate significant portions of insurance processes, we do not believe they are ready to replace humans entirely. This is due to their limitations on the most complex tasks and the lack of a framework for holding these models properly accountable for complex decisions. Instead, AI models must be designed to keep humans involved to monitor, validate, and correct model decisions where necessary. This human-in-the-loop (HITL) approach requires extra care at the application and process architecture level, but it also creates exciting opportunities for models to learn from humans and continuously improve their performance. While models might not rival humans in complex decisions, such as large risk underwriting, they can still support them by extracting and structuring information from vast amounts of risk and historical data directly at the point of decision.
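A minimal version of such a human-in-the-loop gate is sketched below, assuming the model returns a decision together with a confidence score (which, in practice, you would calibrate against historical outcomes rather than take at face value). Low-confidence cases go to a review queue, and every human override is logged as feedback for improving prompts and reference cases. The threshold and data shapes are illustrative assumptions.

```python
# Sketch of a human-in-the-loop gate: high-confidence decisions pass through,
# everything else waits for a reviewer, and overrides are logged as feedback.
# The threshold and data shapes are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85
review_queue: list[dict] = []   # work items awaiting human validation
feedback_log: list[dict] = []   # corrections used to refine prompts over time

def handle_assessment(claim_id: str, decision: str, confidence: float) -> str:
    """Pass confident decisions straight through; queue the rest for review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return decision
    review_queue.append({"claim_id": claim_id, "model_decision": decision})
    return "pending_human_review"

def record_review(claim_id: str, model_decision: str, human_decision: str) -> None:
    """Keep every override; disagreements are the raw material for improvement."""
    feedback_log.append({
        "claim_id": claim_id,
        "model_decision": model_decision,
        "human_decision": human_decision,
        "agreed": model_decision == human_decision,
    })
```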

Addressing process debt with digitisation

To embed models into processes, those processes must first be digital. This has long been a challenge for insurers who still rely on emails and spreadsheets to do their work. The good news is that GenAI presents a strong case for transformation, and recent advances in no-code and low-code platforms can substantially lower the cost and time needed for digitisation. In fact, procrastinators who have delayed investments might find themselves rewarded: an unencumbered legacy landscape means a lower cost of integration between existing systems and new AI solutions.

Sidebar: infinite interns

Most insurance processes are designed with human limitations in mind. However, the scalability of LLMs introduces a question previously confined to science fiction: "What if I had unlimited interns?" Or, more precisely, "How would my processes evolve without the constraints of human capacity? Would I still triage based on metadata, or would I run full costing on all submissions before risk selection?" We are already seeing early GenAI adopters reflect on this, and we expect to see much more as solutions and patterns evolve.

Learning from opportunities

Like any digital transformation, the transition to GenAI will not occur in a single project. What we are seeing among GenAI leaders is a focus on addressing individual sub-processes independently. This allows them to achieve quick results and gather insights on which platforms and methods work best. Once digitised, these sub-processes can be continuously rearranged to optimise the end-to-end process chain. This approach promises the highest benefits in the long run while also creating significant value in the short term, but it requires a strong vision and a deliberate effort to build out foundations in a fully interoperable manner.



Seizing the GenAI opportunity

To capitalise on the transformative potential of GenAI, insurers must move beyond mere proof-of-concepts and commit to the productive deployment of LLMs tightly embedded in their operating model. This requires a strategic approach, focusing on high-value use cases, building robust foundational capabilities, and fostering a culture of AI enablement and adoption. By embedding GenAI into core operations and leveraging human-in-the-loop architectures, insurers can drive efficiency, improve decision-making, and unlock new opportunities.

By taking a proactive approach and learning from industry best practices, insurers can position themselves as leaders, harness efficiency gains, and strengthen their ability to navigate the bottom-line pressures of the competitive soft-market cycle approaching in the years ahead.

 

This article was originally published on the Synpulse website and has been updated here with fresh insights. For a comprehensive overview, refer to the original article.
