Trustworthy AI – responsible AI integration in the workplace

Thursday, 13th June 2024

Michelle Sally is a Partner in the Technology, IP and Data team at national law firm TLT.

The business world is rapidly getting to grips with the superpowers of artificial intelligence (AI), including the deployment of this evolving technology into the workplace.

Smarter decision-making and the streamlining of time-consuming tasks are among the myriad benefits, boosting productivity and freeing up employees for more strategic work. There may be pressure on employers to adopt fast and ask questions later – perhaps in the mistaken belief that competitors are further ahead of the curve.

As always, technology continues to move faster than legislative and governance frameworks. Businesses are expected to learn quickly how new technology fits within the current legal landscape, as well as to adapt to the new digital regulations being proposed and enacted by governments around the world.

A shifting risk landscape

The AI legal landscape ranges from long-established legal principles (such as data protection, privacy and discrimination claims) to newer ones (such as AI governance frameworks and human oversight), with businesses trading in Europe also potentially caught by the EU AI Act, the first regulation of its kind.

For now, the UK is favouring a more flexible, pro-innovation approach to regulation than the European Union. At the time of writing, we anticipate that UK regulators will be required to prepare AI guidance tailored to their sectors.

Employers who truly want to stay ahead of the curve need strong internal governance, a holistic view of the regulatory landscape and a digital transformation strategy in place to ensure their practices are ethical and compliant – with a view to fostering trust in AI.

High-risk applications…

Using an AI tool to make management decisions comes with several risks, one of which is bias being ‘baked into’ the system itself. Because machine learning models are trained on data provided by humans, historical prejudices or unequal representation of genders, ethnicities and other demographics can creep into the system at multiple points. As the model learns from these flawed datasets, it can amplify those biases, potentially leading to unfair and discriminatory outcomes.

Compounding this risk is the “black box” nature of algorithms. A machine learning tool’s complexity and ability to learn by itself mean that even its developer is unlikely to know how it reached a particular conclusion. Employers that cannot justify the reasonableness or fairness of a decision based on generative AI could face legal challenges.

…and lower-risk efficiencies

There are, however, many lower-risk opportunities for businesses to drive efficiencies using AI. Calendar management or document management tools, for example, can help with routine administrative tasks.

How TLT can help

At TLT, we are helping a growing number of employers navigate AI challenges. Clients come to us for bespoke AI risk assessments, governance framework support, sector-focused training and AI playbooks for topics such as contract negotiation. We have also developed the TLT AI Navigator, a tool that helps employers assess how AI has been adopted across their business and decide whether to take that adoption further. With the support of our FutureLaw legal tech team, we are collaborating with clients to customise and deploy AI-driven legal tech.

Whatever your plans with AI, TLT is ready to support you with what comes next.

www.tlt.com