Generative AI: What do you need to consider surrounding usage and policies for your organisation?

by Jennifer Moseley on Nov 15, 2023  

As an organisation, it would be naïve of you to think that you don’t need to address the usage of AI – just because you haven’t introduced it formally doesn’t mean your staff aren’t using it or experimenting with it.


There is much to consider surrounding the responsible use of generative AI as an organisation; we hope this article gives you some starting thoughts as we enter this new tech era.


Do You Need AI?

You may not need AI, but it would be a mistake to dismiss it outright without considering the potential value and weighing that up against the risks to make an informed decision.

 

AI has the potential to support your staff, enabling them to be more productive and effective with some of their more repetitive and administrative tasks, freeing up time to spend on areas where AI doesn’t add value, like strengthening relationships.


Data Protection & Security Concerns with Generative AI

Data Breaches

When we provide our personal or corporate data to software applications, we trust that the company will handle it responsibly and have robust safeguards against cyberattacks. Nevertheless, when using generative AI tools, we may unknowingly reveal more information than intended.



AI apps do walk around in our user data to fetch important information to enhance our user experience. The lack of proper procedures for collecting, using, and dumping data raises some serious concerns.

- Ryan Faber, founder and CEO of Copymatic

Data Leaks

If you’ve tried using different AI tools already, you likely know that providing the AI chatbot with background information and context through a well-written prompt is crucial for getting the best response.


What you may not realise is that, in doing so, you may be inadvertently sharing proprietary or confidential information with the AI chatbot. Employees may unintentionally share valuable intellectual property, confidential strategic information, and sensitive client data belonging to the company.


Research done by Cyberhaven, a data security company, found that

  • 11% of data employees paste into ChatGPT is confidential.
  • 4% of employees have pasted sensitive data into ChatGPT at least once.


A most concerning risk for organisations is data privacy and leaking intellectual property. Employees might share sensitive data with AI-powered tools, like ChatGPT and Bard. Think about potential trade secrets, classified information, and customer data that is fed into the tool. This data could be stored, accessed, or misused by other service providers, including competitors.

- Dennis Bijker, CEO of SignPost Six, an insider risk training and consultancy firm
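One practical guardrail against the kind of leak described above is a pre-submission check that flags obviously sensitive patterns before a prompt is pasted into a chatbot. The sketch below is purely illustrative — the patterns and the `PROJ-` project-tag convention are assumptions, and a real deployment would rely on proper data-loss-prevention tooling rather than a few regexes:

```python
import re

# Hypothetical patterns a usage policy might forbid in prompts.
# These are illustrative assumptions, not a complete DLP rule set.
CONFIDENTIAL_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # made-up convention
}

def flag_confidential(prompt: str) -> list[str]:
    """Return the names of any confidential patterns found in a prompt."""
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(prompt)]

findings = flag_confidential(
    "Summarise the PROJ-1234 roadmap and email jane.doe@example.com"
)
```

A check like this can run in a browser extension or an internal proxy in front of approved AI tools, warning the employee before anything sensitive leaves the organisation.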

Where to Start with AI in Your Organisation

After reading the above, you’re probably left wondering how to move ahead with these risks in mind. As AI is an ever-changing area with new tools being released daily, you need to be agile while remaining vigilant with your introduction plan.


Start With a Usage Policy

Implement a policy on responsible AI use and ensure that it aligns with your organisation's data and compliance policies. At a minimum, the policy should set guardrails: a safe space where your staff can explore the responsible use of AI to support their roles.


Find a Secure AI Tool

Research the Company Behind the Tools

You can evaluate a company's reputation and track record through its other tools and services, but remember that a well-known name doesn't always mean adequate security.


Reviewing the privacy policy and security features before sharing information with any AI tool is essential. Any information you share may be added to its large language model (LLM) training data and could surface in responses to other people's prompts, including those of your direct competitors.


Train Employees On Safe And Proper Use Of AI Tools

You likely have acceptable-use policies for social media, and you should already be training employees in good cybersecurity behaviours. As generative AI tools become more prevalent, incorporate new policies and training topics into that existing framework. Some examples may include:


  • What they can and cannot share with generative AI apps
  • General overview of how LLMs work and potential risks in using them
  • Only allowing approved AI apps to be used on company devices
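The last point above can be backed by a technical control as well as policy. As a toy sketch — the app names and the simple allowlist idea are assumptions, and real enforcement would live in your endpoint-management or MDM tooling, not a script:

```python
# Toy sketch of an approved-apps allowlist. The app names are
# examples only; real enforcement belongs in endpoint-management
# or MDM tooling.
APPROVED_AI_APPS = {
    "chatgpt enterprise",
    "microsoft copilot",
}

def is_approved(app_name: str) -> bool:
    """Check an app against the organisation's approved list."""
    return app_name.strip().lower() in APPROVED_AI_APPS
```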




Test the Tools

Test any new technology in a realistic test environment so that you know how it performs under pressure before deploying it at enterprise scale.




Are you looking for help with new HR policies? Our HR Consulting Services can guide you and help implement policies within your business. 



