One of the most promising developments in recent years is the emergence of generative artificial intelligence (AI). Generative AI, driven by sophisticated algorithms and deep learning techniques, has the ability to create new content, insights, and solutions that were previously thought to be exclusively within the realm of human creativity. As the insurance sector continues to explore and implement generative AI, several opportunities and risks come to the forefront.
The title of this article and the opening paragraph you have just read were not drafted by a human being. They are – word for word – what the generative AI tool, ChatGPT, produced when we asked it to write an introduction for an article for the insurance industry on the opportunities and risks arising from the use of generative AI. It isn’t quite how we would have put it, but it’s not a bad effort – it is on point, it makes sense, the grammar is correct, the sentences flow well and even the tone is appropriate.
Generative AI is an artificial intelligence technology that can produce text, images, artworks, audio, computer code and other content in response to instructions given in everyday English. It works by using complex algorithms to run ‘foundation models’ that learn from patterns in the enormous volume of data available online and produce new content based on what they have seen in that data. This goes a step beyond the AI tools that have commonly been available until now. Those tools will typically analyse examples of a subject, such as pictures of plants, and learn from them to identify plants of a particular species or those that are diseased. They can also understand and respond to simple queries and commands within a limited range of parameters, as most people have become accustomed to through interacting with AI-powered tools such as Apple’s Siri and website ‘chatbots’. Generative AI takes a step forward from this: it can not only interpret pictures or other content and answer simple queries, but also create wholly new content. The latest generation of generative AI has taken a further leap in capability by using self-supervised learning based on the data available online, rather than being guided by humans.
What does this mean for the insurance industry? The answer lies in the areas of insurance practice that require evaluative assessments or the generation of a written work product. A number of potential uses spring to mind. These could produce substantial efficiencies, as well as more reliable and accurate assessments and responses, resulting in better customer outcomes. However, there are some potential pitfalls.
This is not merely a future possibility – some insurers are using this technology already. Lemonade, a peer-to-peer insurer in New York that provides cover to homeowners and renters, advertises that it uses AI for underwriting and claims processing and is investing in generative AI to automate other business processes. Global insurer Chubb is also considering the use of generative AI, although its recent public statements have expressed caution about the time it is likely to take before the technology is sufficiently mature.
Generative AI has the potential to revolutionise customer service in the insurance industry. AI-driven chatbots are already engaging in natural language conversations with customers, providing real-time assistance and answers to queries. Tower Insurance, for instance, boasts a chatbot named Charlie, ‘born and bred in Auckland’. At present, these chatbots tend to be limited to answering simple queries or directing customers to the right page of a website. We asked Charlie whether it could tell us if our claim would be accepted, to which the answer was a polite suggestion that to get an update or discuss our claim, we should contact our claims manager directly (with a thumbs up emoji), along with some links to the claims pages on the website. A question about whether there was a maximum sum insured for a house was answered with a suggestion that we refer to the policy wording, along with some information relating to cover for lawns, flowers and shrubs. While using a chatbot may be quicker and easier than searching a website, the outcome is often largely the same.
Generative AI-driven chatbots, in contrast, have the potential to offer personalised advice and recommendations based on the customer’s risk profile, history and needs, thus enhancing customer satisfaction and loyalty as well as reducing personnel costs as AI tools replace human employees. Generative AI chatbots will have the advantage of access to an enormous database of information from which they will be able to derive principles to answer new questions and deal with new challenges. A generative AI chatbot might, for instance, have access to a database of hundreds or thousands of questions and answers between customers and customer service agents, from which it will be able to derive and process answers in new cases. A chatbot that has learned that customers within certain age ranges who have certain health profiles are offered life and disability insurance, or are offered it on certain terms, will be able to draw conclusions in new cases and provide preliminary responses to inquiries.
The claims process is a critical aspect of the insurance industry. Generative AI can be employed to analyse and process claims efficiently. By examining claim data and policy details, AI algorithms can determine the appropriate response to a claim, such as whether it should be approved, denied, or subjected to further investigation.
A generative AI tool could learn the details of thousands of claims made under a particular insurance policy, which of them were accepted or declined and the reasons why, from which it will be able to deduce the outcomes of future claims following the same principles. Such a tool could review and assess claims submitted online and write a response either accepting or declining the claim, with reasons, or asking for more information. This could be done almost instantly, so that customers would not have to wait for a decision and could ask for decisions to be reconsidered in real time if more information was provided.
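The idea of deducing outcomes for new claims from thousands of past decisions can be illustrated with a toy example. The following is a minimal sketch only, not any insurer's actual model: the field names, figures and the nearest-neighbour approach are all hypothetical, standing in for the far more sophisticated learning a real generative AI tool would perform.

```python
# Toy "claims triage" sketch: suggest an outcome for a new claim by looking at
# the most similar past claims. All field names and values are hypothetical.

past_claims = [
    {"amount": 500,   "days_to_report": 2,  "outcome": "accept"},
    {"amount": 700,   "days_to_report": 5,  "outcome": "accept"},
    {"amount": 1200,  "days_to_report": 3,  "outcome": "accept"},
    {"amount": 40000, "days_to_report": 30, "outcome": "decline"},
    {"amount": 85000, "days_to_report": 45, "outcome": "investigate"},
    {"amount": 90000, "days_to_report": 60, "outcome": "investigate"},
]

def triage(new_claim, history, k=3):
    """Suggest an outcome by majority vote of the k most similar past claims."""
    def distance(a, b):
        # Scaled Euclidean distance over two numeric features.
        return (((a["amount"] - b["amount"]) / 1000) ** 2
                + (a["days_to_report"] - b["days_to_report"]) ** 2) ** 0.5

    nearest = sorted(history, key=lambda c: distance(new_claim, c))[:k]
    votes = {}
    for c in nearest:
        votes[c["outcome"]] = votes.get(c["outcome"], 0) + 1
    return max(votes, key=votes.get)

print(triage({"amount": 800, "days_to_report": 4}, past_claims))  # "accept"
```

A production system would of course draw on far richer claim data and would generate a reasoned written response rather than a single label, but the principle is the same: past decisions define the pattern that new claims are assessed against.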
AI may also assist in detecting fraudulent claims, based upon an assessment of a claim against features that arise from a large database of fraudulent claims. A claim that presents no obvious red flags to a human observer may trigger an alert when assessed by a sophisticated algorithm.
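The kind of red-flag assessment described above can be sketched as a simple weighted scoring of fraud indicators. This is an illustrative toy only: the indicators, weights and threshold below are invented, and a real AI tool would derive such features statistically from a large database of known fraudulent claims rather than from a hand-written list.

```python
# Hypothetical fraud-screening sketch: score a claim against weighted
# indicators of fraud. Indicator names and weights are invented.

fraud_indicators = {
    "reported_just_after_policy_start": 3,
    "no_police_report": 2,
    "round_number_amount": 1,
}

def fraud_score(claim_flags):
    """Sum the weights of the indicators present on this claim."""
    return sum(w for name, w in fraud_indicators.items() if claim_flags.get(name))

def needs_review(claim_flags, threshold=4):
    """Flag the claim for human review if its score meets the threshold."""
    return fraud_score(claim_flags) >= threshold

print(needs_review({"reported_just_after_policy_start": True,
                    "no_police_report": True}))  # True (score 5 >= 4)
```

No single indicator here would alert a human observer, but the combination pushes the score over the review threshold, which is the point the paragraph above makes about algorithmic screening.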
This is markedly different from how, only a few years ago, technology was expected to replace human claims assessors. The expectation then was that computer-assisted claims assessment would involve a program designed to reflect the requirements of an insurance policy by asking a series of predetermined questions that stepped through a flowchart to identify whether a claim met the relevant criteria. That approach is necessarily limited to the specific words used in the policy and binary questions; it does not allow for decisions to be made at the margins or judgement calls, and it may not be able to deal with complex claims that raise a number of issues. The difference with generative AI is that it is capable of analysing thousands of evaluative decisions made by claim managers against policy requirements and reflecting their usual approach, potentially resulting in a more consistent and reliable outcome than a human operator. This has the potential to enable quicker, more accurate and more consistent claims processing, reducing operational costs and enhancing customer trust.
Risk assessment
Assessing risks accurately is fundamental to insurance operations. Generative AI can analyse vast amounts of data from various sources to provide insurers with insights into potential risks. By identifying patterns and trends, AI algorithms can aid underwriters in making informed decisions about policy issuance and premium rates, ultimately leading to more tailored and competitive insurance products.
This has the potential to streamline applications for cover, particularly in areas where customers’ individual risk profiles are highly relevant to whether cover will be offered and at what premium. Cyber policies, for instance, are notorious for requiring extensive information about a prospective customer’s systems and processes. A generative AI tool could assist in putting those responses in context against a database of other responses and loss data, rather than merely assessing them against a human-prepared list of criteria whose assessment of risk is essentially subjective. A generative AI tool could also identify new risks and trends in underwriting more quickly and accurately than humans who rely upon imperfect market information.
Generative AI can already be used to draft simple contracts. ChatGPT will draft an insurance policy if asked to do so. Generative AI could potentially assist in converting traditional policies into “plain English” policies or make substantive changes as the market moves. The technology also offers the opportunity to spot market trends and move quickly to update policies when circumstances change, or other insurers begin to make changes. For instance, a generative AI tool could identify a need for a new clause excluding claims arising from a pandemic or epidemic, and then draft it.
Developing clear and comprehensive policy documents is, however, a complex task, ideally undertaken by lawyers. Small differences in policy wording may have a very substantial effect, particularly if they appear in long term policies such as life and health policies that are not amended or policies that are widely used and are relevant to a large-scale loss event such as the Canterbury earthquakes. Generative AI can, however, assist in drafting policy wording by preparing first drafts, suggesting issues that need to be covered, and by analysing legal and technical terminology, ensuring that policies are accurately written and easy to understand. This can help prevent misunderstandings between insurers and policyholders, reducing disputes and enhancing transparency.
Insurance brokers play a crucial role in connecting customers with suitable insurance providers. Generative AI can assist brokers by analysing customer profiles against insurers’ offerings to match customers with the most appropriate insurers and policies. There is an obvious potential not only to save time for brokers but also to ensure that customers receive policies that align with their needs and preferences. There is a risk, however, that over-reliance on AI tools may lead brokers into error, particularly if the tool does not have all the relevant and up-to-date information.
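At its simplest, the matching task described above amounts to scoring each insurer's offering against a customer's stated needs. The sketch below is purely illustrative: the policy names and coverage sets are invented, and a real tool would weigh far more factors (premium, exclusions, claims history, service) than a simple coverage count.

```python
# Illustrative broker-matching sketch: pick the policy that covers the most
# of the customer's stated needs. Policy names and coverages are hypothetical.

offerings = {
    "PolicyA": {"fire", "flood", "theft"},
    "PolicyB": {"fire", "theft", "cyber"},
    "PolicyC": {"flood"},
}

def best_match(needs, offerings):
    """Return the policy name whose coverage overlaps most with the needs."""
    return max(offerings, key=lambda name: len(offerings[name] & needs))

print(best_match({"fire", "flood"}, offerings))  # "PolicyA" covers both needs
```

The paragraph's warning about stale data applies directly here: if `offerings` is out of date, the recommendation is confidently wrong, which is why human oversight remains important.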
The use of generative AI, a technology still very much in its infancy, is not without risk. We discuss some important considerations below.
Data privacy and security
The insurance industry deals with sensitive personal and financial information. The adoption of generative AI introduces potential vulnerabilities to data breaches and unauthorised access. Robust cybersecurity and data protection measures are essential to mitigate these risks generally, but generative AI introduces new vulnerabilities.
One important challenge is that the use of generally available generative AI tools such as ChatGPT requires the input of information from the user, which is then available to the tool and outside the user’s control. This means that insurers cannot use tools such as ChatGPT unless they are careful to anonymise the data submitted in their requests. Many firms that wish to benefit from generative AI, such as law firms, are working to develop in-house generative AI tools that draw from publicly available data but do not share the firm’s own information outside its IT systems. Insurers will need to consider doing the same.
Initially, generative AI should be applied to closed data sets. The generative AI model may itself be a pre-trained large language model, but at first it should be used only with the insurer’s own data. There are risks in combining internal data with external data, and insurers’ own data certainly should not be disclosed to external databases.
Bias and fairness
Generative AI systems can inadvertently perpetuate biases present in the data on which they are trained. Biased data could lead to unfair policy pricing or discrimination against some demographics, or even biased claims decisions. Insurers must be cautious in the selection and pre-processing of training data to ensure equitable outcomes.
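One concrete form the "cautious selection and pre-processing" mentioned above can take is a simple disparity check on training data before it is used. The sketch below is a minimal, hypothetical example of one such check (comparing acceptance rates across a demographic attribute); real fairness auditing involves many more metrics and careful legal input.

```python
# Toy bias check: compare historical claim acceptance rates across a
# demographic attribute before training on the data. Records are invented.

records = [
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": True},
    {"group": "A", "accepted": False},
    {"group": "B", "accepted": True},
    {"group": "B", "accepted": False},
    {"group": "B", "accepted": False},
]

def acceptance_rates(records):
    """Return the fraction of accepted claims per group."""
    totals, accepted = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        if r["accepted"]:
            accepted[r["group"]] = accepted.get(r["group"], 0) + 1
    return {g: accepted.get(g, 0) / totals[g] for g in totals}

rates = acceptance_rates(records)
print(rates)  # group A accepted ~67% of the time, group B ~33%
```

A large gap between groups does not by itself prove unfair treatment, but it signals that a model trained on this data may reproduce the disparity and that the data set warrants closer scrutiny.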
While generative AI can produce impressive results, the lack of transparency in how it arrives at conclusions can pose challenges. Insurers will need to ensure that AI-driven decisions are accurate and understandable, as complex models may produce outputs that are difficult to interpret or validate. ChatGPT famously produces wildly inaccurate statements and conclusions at times, which is a reflection of the unreliability of parts of the data pool from which it draws. Lawyers using it to draft legal opinions or submissions have been surprised to find cases referred to that do not support the principles or conclusions for which they are cited, and in some instances are even wholly imaginary.
The insurance industry is subject to strict regulations that govern its conduct and practices, particularly with respect to customer outcomes. The introduction of generative AI will need to produce outcomes that align with these obligations to avoid legal and compliance issues. The Financial Markets Authority is highly critical of financial services firms that do not do enough in its view to invest in systems and processes to ensure that errors do not affect customers negatively. Generative AI is an immature technology which is more likely than mature technologies to give rise to errors.
Humans will need to remain in the loop, at least until the technology fully matures. Striking the right balance between automation and human expertise is crucial to ensure that the integration of generative AI enhances efficiency without compromising the value of human judgement and interaction. Decision making cannot be delegated to an AI model, however impressive, as human checking or input is essential as a sense-check.
Insurers may manage the risks of beginning to utilise generative AI by starting with the safest parts of their operations. The first uses may be in employee-facing tasks, where, if something goes wrong, employees are likely to be able to identify and resolve the issue without customers knowing or being affected. A higher level of risk arises when generative AI is used to deal directly with customers, as errors or inappropriate responses may result in embarrassment, complaints and even regulatory action.
The integration of generative AI into the insurance industry offers considerable potential for transforming various aspects of its operations. From optimising customer interactions to revolutionising risk assessment, claims assessment and policy drafting, generative AI could reshape the way insurers operate. However, careful consideration of the associated risks and ethical implications will be important to ensure that these opportunities are harnessed responsibly and safely.
Finally, this article contains another paragraph that was also generated entirely by ChatGPT – the first paragraph under the Risk Assessment heading. Did you spot it?