Insurance and AI: Pros and Cons

Artificial Intelligence (AI) has become this year's trend for creating content. As with any tool, however, there are questions about how reliable that content is. In this post, we will explore the risks of Generative AI and the best ways to identify and implement mitigation strategies.


What is Generative AI? 

What is an AI tool? For that matter, what is Generative AI?


Generative AI is a tool that creates written content from a prompt supplied by a user. While this may sound similar to a web search, it works quite differently. A search engine returns a list of sources that match a query; Generative AI draws on information from many sources and composes it into one cohesive response. That convenience has made it a popular way to create and research written and spoken communication.

 

Another difference from traditional research is that AI typically does not cite its sources. Instead, the information it shares is drawn from many sources anonymously.



[Image: An AI robot greeting customers at a store in Osaka, Japan. AI can be everywhere, even your local store.]

The Four Pitfalls of Using AI

With all the advantages of Generative AI, organizations must be aware of some common pitfalls of relying on a chatbot to create content. We will address four potential risk factors for organizations to consider when using AI tools such as ChatGPT, one of the most widely used.*


Businesses and non-profit organizations should consider the following issues that may create a liability in dealing with AI information.


1) Reliability 

As with all computer-generated information, AI is only as reliable and informed as the sources it draws from. Because AI cannot guarantee accurate content, content creators and organizational leaders cannot mindlessly use a chatbot's responses to answer questions and address concerns. ChatGPT openly cites inaccuracy as a limitation of its tool; an AI program cannot apply human reasoning to a question it does not fully understand.


When the program tries to answer a difficult question, the organization may receive a response that "sounds good" but makes little sense. Double-checking AI responses is vital to avoid spreading inaccurate or even libelous information. Not doing so could create a serious liability for your organization.*


2) Copyright/Plagiarism 

[Image: A copyright symbol on a page. AI doesn't remove copyright licensing.]

Citing sources and respecting copyright are essential for anyone who uses the work of others. This is especially important in written work, advertising, media, and any public sharing of someone else's work.


A good example is the rise of remote church attendance (attending services via Zoom or streaming). Many churches responded to the COVID-19 pandemic by going online and posting song lyrics and videos. They learned that publicly sharing worship-music lyrics and videos requires purchasing a license. Copyright remains a serious issue.

With ChatGPT, for instance, there is little oversight for copyright and plagiarism.


Organizations should check with their legal representatives to ensure AI-generated material is properly distributed, displayed, and disseminated. Using a chatbot service does not negate the organization's legal responsibility to abide by copyright laws.*


3) Issues of Perspective 

When humans write, the research process does more than uphold the writer's perspective; it can also change it. Ideally, researching a topic on the web means the writer is thinking and analyzing, not just finding work that backs up a position. Sometimes a writer realizes there is more support for a different perspective than the one they started with. The writing process can thus change the writer's mind.

With a chatbot, however, the initial question directly shapes the information you receive. That question becomes a filter that screens out other perspectives, and the risk of introducing bias into your final product grows.


One way to avoid introducing bias is to ask a variety of questions from different perspectives to get a broader answer on your topic. Discrimination and bias are serious problems in AI use. As an organizational leader, you should approach your topic cautiously and work to understand it from a broader perspective.*


4) Privacy


[Image: A computer screen with a man staring back, wearing Facebook-covered glasses. AI isn't private.]

In a world of electronic tracking and digital footprints, businesses are exposed to the dangers of cyber breaches and the unintentional sharing of personal information.


Whenever an individual shares information with an AI generator, that information becomes part of the chatbot's database. Services such as ChatGPT collect and retain your IP address and browser information and log any communications and conversations within the program for training purposes.


Content creators and business communicators should avoid sharing too much information with a chatbot, and should never enter information they want to keep confidential. Improving privacy also means regularly reviewing your organization's internet firewall and training employees on what (and what not) to share online.*

Getting Information about Cyber Liability Insurance

If you do not have a policy that includes Cyber Liability protection, please consider contacting your Loomis agent to check into this vital coverage. In most cases, we can add Cyber Liability as an endorsement to your current General Liability coverage. If you want to learn more about this coverage, we are here to help. Call us today!


