Top Tips on Using AI Strategically for Business Success: Key Points to Think About

27th September 2024, 2:12 pm

AI has recently been dominating the news and conversations with clients in a way that no other topic has during my career in commercial and data law. Technology is a great enabler, but AI is a rapidly evolving area and we are still very much in a period of exploration, development and uncertainty, with different regulatory regimes taking effect globally, including the recent EU AI Act.

Every week, AI providers are announcing new partnerships, investments and integrations with core business technology that affect both clients’ strategic decisions and their day-to-day working practices. As a result, the adoption of AI across all businesses seems to be moving from a question of “if” to “when”.

It is both exciting and interesting to see the impact of the rapid growth of AI on the operations and obligations of businesses across different sectors, but this growth comes with risks. Clients are asking for help with their purchasing and usage of AI in a number of ways, and I have set out below some of my key practical tips for businesses to think about in relation to AI.

 

  1. Questions to ask AI providers

I strongly advise that you ask AI providers for detailed information before entering into a contract to use their AI systems. Some of the key questions that we have been advising clients to ask suppliers are along the following lines:

  • Technical and product-related: to investigate what underlying technologies are used, how quality and accuracy are ensured, what testing is undertaken, and how outputs are supervised.
  • Support: to understand whether the AI system can be customised, the terms of the service level agreement (SLA), and how updates and new releases are managed.
  • Security and transparency: the AI provider should be asked to explain how the AI generates decisions or outputs, to set out the measures in place to prevent harmful content being generated, and to provide information about its compliance with relevant standards such as ISO/IEC 42001.
  • Ethics and social impact: we have seen clients ask questions about how suppliers minimise the AI’s energy consumption, and about how they address and mitigate bias and ethical considerations.
  • Data: it is key to understand what data was used to train the AI, whether your data will be used to train it, and where and how any personal data is processed.

 

  2. Employee usage of AI

It can be difficult to govern how your employees use AI, as they can of course easily access AI on personal devices, and employers are often unaware of how their employees are using it. Common personal usage of AI ranges from the less contentious (e.g. writing LinkedIn updates, presentation or article structures and sick notes, and using AI to check grammar and spelling) to uses which could cause real issues for employers (e.g. any inputting of personal data or confidential information). Other potentially problematic usage includes employees using AI to write tenders or bids – a manager may not realise that AI has been used, but aside from confidentiality and IP considerations, this may result in a number of tenders containing exactly the same language.

Our view is that it is unrealistic to ban the usage of AI by employees. The more practical and sensible approach is to develop a culture of trust and openness, backed by training and policies (see tip no 6), so that employees are really clear about what is and is not acceptable, both when using AI that the business makes available to them and when using AI which they access personally.

  3. Think about your intellectual property (IP)

There are three key issues to think about on the IP front when using AI. Firstly, AI is trained using data and images from other sources, and it is not always clear whether this has been done legally – for example, OpenAI is facing claims from a variety of entities, including the New York Times and Mumsnet, with accusations that ChatGPT has been trained using their copyrighted data. If you use AI to create images or text, how do you know that these ‘creations’ are not based on a third party’s IP?

Secondly, who owns anything you or your employees create using AI? In a recent case, a computer scientist tried to register a patent citing an AI system as the ‘inventor’. This was rejected on the basis that an inventor must be a human, but there is still a question mark over whether the owner of an AI system could also own the outputs from that system.

Thirdly, what happens if an employee puts confidential IP into an AI system? Samsung employees famously used ChatGPT to help fix problems with source code, but in doing so inputted highly confidential source code, meeting notes and data – which were then made public.

Our key tips are to ensure that:

  • you develop a comprehensive IP strategy
  • you ask AI suppliers relevant questions (see tip no 1!)
  • your contract clearly sets out warranties from the supplier about the training data, and confirms that you own any output data
  • your employees are very clear over what they can and cannot use AI for (see tip no 6!)

 

  4. Using AI in recruitment

AI now plays a key role in talent acquisition, with many employers and recruitment agencies using AI to sift through applications and CVs. Two issues arise here: firstly, UK GDPR (Article 22) provides that decisions should not be made on a solely automated basis where they have a legal or similarly significant effect on an individual; and secondly, AI can be at risk of being inherently biased or discriminatory.

Whether recruitment is undertaken internally or externally via agents, businesses should establish and adopt monitoring policies and ensure that decisions are reviewed by humans who can check and challenge them.
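For businesses wanting a concrete picture of what “human review” might look like in practice, the short sketch below shows one possible approach: every recommendation from an AI sifting tool is queued for a human decision rather than acted on automatically, with low scores flagged for extra scrutiny. This is a purely illustrative sketch under assumed names – it does not represent any real screening product or API, and all fields and thresholds are hypothetical.

```python
from dataclasses import dataclass

# Purely illustrative: all names, fields and thresholds here are
# hypothetical and do not refer to any real screening product or API.

@dataclass
class Candidate:
    name: str
    ai_score: float     # suitability score suggested by an AI sifting tool
    ai_rationale: str   # the tool's stated reasons, retained for audit

def route_for_human_review(candidates, scrutiny_threshold=0.4):
    """Queue every AI recommendation for a human decision rather than
    auto-rejecting; low-scoring candidates are flagged for extra scrutiny
    so the reviewer can check and challenge the AI's reasoning."""
    review_queue = []
    for c in candidates:
        review_queue.append({
            "candidate": c.name,
            "ai_score": c.ai_score,
            "ai_rationale": c.ai_rationale,
            "needs_extra_scrutiny": c.ai_score < scrutiny_threshold,
            "final_decision": None,  # must always be set by a human reviewer
        })
    return review_queue

# Example usage with made-up applicants
applicants = [
    Candidate("A. Example", 0.82, "Strong skills match"),
    Candidate("B. Example", 0.31, "Employment gap"),  # gaps can encode bias
]
for entry in route_for_human_review(applicants):
    print(entry)
```

The design point is simply that the AI output is a recommendation feeding a human decision, with the AI’s stated rationale kept alongside it so the reviewer has something concrete to challenge and an audit trail exists.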

  5. Think carefully about personal data

When acting as a data controller (i.e. making decisions about what personal data you hold and why), it is important to assess whether any usage of AI results in any new form of data processing. You will need to think about whether your privacy policy properly covers your usage of AI and whether you need to undertake a data protection impact assessment (DPIA).

AI providers usually act as a processor of personal data on behalf of their client as the controller. However, some AI providers will state that they also act as a controller when using client data to train the AI system. If this is the case, you will need to review the provider’s privacy notice to understand what your data is being used for, and ensure this is made available to relevant data subjects.

A number of AI providers process data in international jurisdictions. The ICO views transfers of data to countries without an adequacy decision in place as being high risk. If your data is being sent abroad, you will need to ensure that the international transfer is secure – this will involve having appropriate transfer clauses in place and undertaking a transfer risk assessment.

Data protection policies should also be reviewed to ensure they take into account any usage of AI involving personal data.

  6. Internal governance

Whilst many organisations are required by law to have certain policies in place, there is currently no legal requirement in the UK to have policies governing the development or usage of AI. We nevertheless strongly recommend that policies are implemented to ensure that:

  • AI is being used consistently and transparently;
  • suppliers are properly assessed;
  • data privacy legislation is complied with; and
  • staff are fully aware of the risks of AI and how to use it safely.

We recommend that you start with a review of how AI is being used within the business (both current and intended use) and identify the risks. It is then sensible to review any existing policies to see if they need to be updated, assess whether you need any AI-specific policies, and undertake staff training with clear communications to ensure that staff understand your organisation’s attitude to AI and compliance. For larger organisations, an AI governance group may be useful, including key stakeholders such as HR, IT and procurement.

  7. Who is responsible when it goes wrong?

AI usage can ultimately result in litigation if risks are not properly considered and addressed from the outset. AI claims have to date arisen in a range of contexts, including copyright, data privacy, equality and employment-related issues, as well as less foreseeable actions: an Australian mayor and a US radio host have each threatened OpenAI with defamation claims after it wrongly stated, respectively, that they had been found guilty of bribery and had defrauded a charity.

Even using AI in, for example, a chatbot can present issues such as breaches of consumer protection laws if the AI provides misleading information about products or services.

There is no AI-specific legislation to govern the relationship between AI providers and customers. It is therefore absolutely key to ensure that contracts between AI providers and their corporate customers contain provisions to govern and appropriately apportion liability.

Insurance will also be key – there are policies available (such as Technology Errors and Omissions insurance) which may offer cover in the event of certain AI-related claims. Robust internal governance (see tip 6) can also help to mitigate litigation risks.
