
As an organisation, it would be naïve to assume you don’t need to address the usage of AI – just because you haven’t introduced it formally doesn’t mean your staff aren’t already using it or experimenting with it.

There is much to consider surrounding the responsible use of generative AI as an organisation; we hope this article gives you some starting thoughts as we enter this new tech era.

Do You Need AI?

You may not need AI, but it would be a mistake to dismiss it outright without considering the potential value and weighing that up against the risks to make an informed decision.

AI has the potential to support your staff, enabling them to be more productive and effective with some of their more repetitive and administrative tasks, freeing up time to spend on areas where AI doesn’t add value, like strengthening relationships.

Data Protection & Security Concerns with Generative AI

Data Breaches

When we provide our personal or corporate data to software applications, we trust that the company will handle it responsibly and have robust safeguards against cyberattacks. Nevertheless, using generative AI tools may inadvertently reveal more information than intended.

“AI apps do walk around in our user data to fetch important information to enhance our user experience. The lack of proper procedures for collecting, using, and dumping data raises some serious concerns.”
–Ryan Faber, founder and CEO of Copymatic

Data Leaks

If you’ve tried using different AI tools already, you likely know that providing the AI chatbot with background information and context through a well-written prompt is crucial for getting the best response.

What you may not realise is that, in doing so, you may be inadvertently sharing proprietary or confidential information with the AI chatbot. Employees may unintentionally share valuable intellectual property, confidential strategic information, and sensitive client data belonging to the company.

Research done by Cyberhaven, a data security company, found that
• 11% of data employees paste into ChatGPT is confidential.
• 4% of employees have pasted sensitive data into ChatGPT at least once.

“A most concerning risk for organisations is data privacy and leaking intellectual property. Employees might share sensitive data with AI-powered tools, like ChatGPT and Bard. Think about potential trade secrets, classified information, and customer data that is fed into the tool. This data could be stored, accessed, or misused by other service providers, including competitors.”
–Dennis Bijker, CEO of SignPost Six, an insider risk training and consultancy firm

Where to Start with AI in Your Organisation

After reading the above, you’re probably left wondering how to move ahead with these risks in mind. As AI is an ever-changing area with new tools being released daily, you need to be agile while remaining vigilant with your introduction plan.

Start With a Usage Policy

Implement a policy on responsible AI use and ensure that it aligns with your organisation’s data and compliance policies. At a minimum, this policy should create guardrails for a safe space where your staff can explore the responsible use of AI to support their role.


Find a Secure AI Tool

Research the Company Behind the Tools
You can evaluate the company’s reputation and track record with its other tools and services, but remember that a well-known name doesn’t always guarantee adequate security.

Reviewing the privacy policy and security features before sharing information with any AI tool is essential. Any information shared may be used to train its large language model (LLM) and may appear in responses to other users’ prompts, including those of your direct competitors.

Train Employees on Safe and Proper Use of AI Tools

You likely have acceptable social media usage policies for employees, and you should be training employees on good cybersecurity behaviours already. As generative AI tools become more prevalent, it is necessary to incorporate new policies and training topics into the existing framework. Some examples of these may include:

  • What they can and cannot share with generative AI apps
  • General overview of how LLMs work and potential risks in using them
  • Only allowing approved AI apps to be used on company devices
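The first point above – controlling what staff can share – can also be supported technically. As a minimal sketch (the patterns and placeholder labels below are hypothetical; a real deployment would tailor them to your organisation’s own data, such as client IDs or project codenames), a simple redaction step can scrub obvious sensitive strings from a prompt before it ever reaches a third-party AI tool:

```python
import re

# Hypothetical patterns for illustration only; extend with
# organisation-specific identifiers before relying on this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Summarise this email from jane.doe@client.com"))
```

Pattern-based redaction is only a first line of defence – it cannot catch free-text trade secrets – so it complements, rather than replaces, the policy and training measures above.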

Published summaries of how major companies govern generative AI use among their workforces can also be a helpful reference point.

Test the Tools

Test any new technology in a realistic test environment so that you know how it performs under pressure before deploying it at enterprise scale.

Looking for support with your HR processes, policies and procedures? Our team is here to help you! Reach out to our HR Consulting team today to see how we can support your HR endeavours.

