AI Regulation, Risk, and Opportunities

November 13, 2023

Artificial intelligence has had a phenomenal year, spurred on by advances in large language models, generative AI, and reinforcement learning from human feedback (RLHF). Unlike 2022, which was dominated by NFTs, blockchain, and cryptocurrency, this year the headlines have belonged to the likes of OpenAI (ChatGPT), Midjourney (AI art), and NVIDIA (AI computing).

For all this success, the rise of AI has not been without controversy. Regulators have scrambled to keep up with the fast-moving technology, creators have decried the potential misuse of their intellectual property, and entrepreneurs have warned of potentially apocalyptic impacts.

One thing is for certain, however: artificial intelligence is here to stay.

So, what lies ahead for AI? What opportunities await? And how can businesses leverage the technology while shielding themselves from legal and ethical risk?

In this article, we tackle the rise of AI, the benefits of the technology, and the dangers that lie in wait for companies that fail to do their legal due diligence.

So, let’s start at the beginning - how did we get here? And how can you be part of the AI future?

The rise of AI… and the ensuing controversy

While the foundations of AI were laid in the 1950s, the technology has only really stepped into the commercial spotlight in recent years. 2023 in particular saw a series of breakthroughs that democratised the use - and creation - of AI for the masses. As a result, startups and household names alike are now looking to AI to automate key parts of their operations, fast-track production pipelines, and, ultimately, discover new sources of revenue.

However, with great innovation has come great controversy. In March 2023, academics, entrepreneurs, and technologists signed an open letter demanding a six-month pause on the training of AI systems more powerful than GPT-4, citing “profound risks to society and humanity”.

Famous signatories included Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. Despite its viral reach, however, the letter soon attracted controversy of its own, with some signatures revealed to be fake and others taken entirely out of context.

As the dust settled, a measured middle ground emerged: a recognition, firstly, that AI is here to stay, and secondly, that AI necessitates regulatory input.

In response, a number of governing bodies and regulators have clarified their stance on AI, including:

  • A policy paper from the UK government in March 2023, titled “A pro-innovation approach to AI regulation”. The paper sets out a framework for regulating AI, with a focus on safety, security, transparency, fairness, accountability, and contestability.
  • Clarification from the US government in May 2023 on its AI stance, citing the importance of “responsible innovation that serves the public good, while protecting our society, security, and economy.”
  • The EU AI Act, drafted by European Union lawmakers to tackle AI systems that pose “an unacceptable level of risk.” The act is considerably more advanced than the UK’s paper and proposes fines of up to €40 million (or 7% of a company’s annual worldwide turnover, whichever is higher) for breaches of the act.

The common theme between these is a promotion of responsible AI development that mitigates the risks of this evolving technology.

Using AI in your business

Despite the apocalyptic headlines, the reality of AI is considerably less sci-fi than you might think. Like any growing technology, AI has its risks, but there is a growing landscape of safety nets designed to make AI use safe, reliable, and ethical. Let’s break this into two parts: the practical components of safe AI, and the legal components of legitimate use.

Practical components of safe AI

For businesses seeking to create AI, it’s important to build with compliance in mind. Pay careful attention to the tech stack used to train and develop AI, and ensure it prioritises data explainability, transparency, and security. While there are many tools available to support AI development, some pose a greater risk than others. Make sure you do your due diligence, and seek out tools that place the security and integrity of your data above all.

With the right infrastructure in place, you can develop well-documented AI that is robust against future legal and regulatory shifts. The added benefit, particularly amidst a storm of AI hype, is that safe AI development will help you earn the trust of discerning customers.

Aside from examining your tech stack, it’s also important to look at your company’s code of ethics when dealing with AI. What is your stance on AI use? How should AI be used in your business? What separates a valid experiment from an unethical one? Many companies are now establishing, and embedding, an AI ethics policy. To build one, it helps to tackle your stance in three parts:

  • What is your company’s ethical stance on AI? Which restrictions need to be in place? How do you record and monitor the use of AI in your business?
  • What is your due diligence process for AI projects? Who takes accountability for the AI? What needs to be investigated before committing to AI use?
  • How do you educate your staff members on the challenges of AI? Are they sufficiently informed on the risks, and how to avoid them? What happens in the event of ethical or legal breaches? What is your reporting system for risks?

In the words of Sri Amit Ray, author of ‘Ethical AI Systems’, “Ethics is the compass that guides artificial intelligence towards responsible and beneficial outcomes. Without ethical considerations, AI becomes a tool of chaos and harm.”

Legal components of safe AI

Beyond the practical steps you can take as a business to prioritise safe AI, it's also crucial to invest time and energy into your legal due diligence.

It’s important to keep an eye on proposed regulations as they continue to unfold, and to err on the side of caution when experimenting within your business. It helps to have a technology lawyer on your side who can monitor these shifts and ensure you’re following best practice from day one. A technology lawyer can help you to:

  • Investigate safe and unsafe uses of AI in your business.
  • Develop policies for the use and creation of AI in your business.
  • Support you in AI risk assessments.
  • Prepare you for legal shifts in AI.
  • Determine your legal use of data and intellectual property in the creation of AI.

How could you use AI in your business?

With the recognition that AI is here to stay - how can you leverage this technology for your business? And which legal hurdles stand in the way? Below, we tackle the opportunities at hand… and the risks on the horizon.

First up - let’s tackle the good stuff. What is AI most commonly used for?

  • Virtual assistants: AI powers virtual assistants, from Apple’s voice-activated “Siri” to Microsoft’s “Cortana”. These assistants follow voice and text commands to make the lives of their users a little easier.
  • Customer experience: One of the most popular uses of AI is improving customer experience, with a wave of innovations in this space, including customer support chatbots and helpful AI assistants.
  • Content creation: AI has been particularly popular as a means of rapidly generating content across text, images, audio, and video. Platforms like ChatGPT and Midjourney have enjoyed viral success in 2023, thanks to their ability to conjure impressive content from just a few text prompts.
  • Automation: AI is being used to automate monotonous, repetitive tasks, freeing staff for more meaningful work. Examples include the use of AI in warehouses to manage stock intake.
  • Legal AI: One particularly interesting use of AI has been within the legal space, which has seen the rise of AI chatbots, document processing tools, and legal research assistants. Perhaps the most famous of these is Harvey AI, which raised $21 million in Series A funding for its ability to assist with contracts, due diligence, litigation, and regulatory compliance.

What are the key challenges of AI?

We’ve talked a lot about mitigating the risks of AI - but what are the most common challenges in practice?

Reputational damage

Without safeguards in place, AI can expose your business to reputational risk. For example, what happens if you fail to do legal due diligence, and your AI leverages data you have no legal rights over? What happens if your AI reproduces harmful biases, as Amazon’s recruitment AI did when it showed prejudice against women? What if your AI is the target of a cyber attack, and you haven’t invested in the security of its data?

The reputational damage that comes with an AI misstep can be monumental, eroding your customer base in the process. More than with most technologies, it’s imperative that you invest in due diligence from the start - and only release AI that you would proudly stamp with your name.

Data protection hurdles

AI has faced waves of controversy, particularly over how companies have collected and used the data needed to train their models. Perhaps the most infamous example was a $275M GDPR fine for Meta in 2022, after the global giant failed to protect users from illegal web scraping. For context, web scraping is the automated collection of data from websites - data that is often gathered at scale to train AI models.
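To make the mechanics concrete, here is a minimal Python sketch of what web scraping involves. It is purely illustrative - the URL is hypothetical, and scraping real sites without permission can breach both their terms of service and data protection law:

  # Minimal, hypothetical web scraping sketch (illustration only).
  # Requires the third-party packages: requests, beautifulsoup4.
  import requests
  from bs4 import BeautifulSoup

  # Fetch a (hypothetical) page and parse its HTML.
  response = requests.get("https://example.com/articles")
  soup = BeautifulSoup(response.text, "html.parser")

  # Extract the text of every paragraph on the page. Data gathered
  # like this, at scale, is often used to build AI training corpora.
  paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
  print(paragraphs)

Even a snippet this small shows why regulators care: nothing in the code itself checks who owns the data or whether its collection is lawful - that burden sits entirely with the business running it.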

With this in mind, it’s essential that you pay careful attention to data protection laws and, where necessary, enlist the help of a data protection expert, who can clarify both how you may use the data you possess and the obligations you have to protect it.

For example, let’s say you’re a business that has decided to use its internal company data to create a Human Resources AI. While the idea could be lucrative, it’s important that you understand how the data can be used, how it needs to be processed, and when it is strictly off-limits.

Intellectual property hurdles

Finally, AI poses a number of thorny intellectual property challenges, exemplified by an ongoing court case involving Midjourney and Stability AI. A group of artists has come together to sue the artificial intelligence companies, seeking to establish copyright over an artistic style - something which, as of today, lacks legal protection. The dispute arises because the models behind Midjourney and Stability AI are trained on the existing works of established artists, which the AI then draws on to generate new images.

As yet, the lawsuit has not earned any new protections for artists, but it has sparked debate around the viability of modern intellectual property laws.

For companies that choose to use and create AI, it’s important to probe deeply into the intellectual property hurdles your AI may face - or risk complicated legal consequences.

Technology lawyers in the UK

As technology-first lawyers, we’ve backed countless clients with a technologically minded legal strategy. From intellectual property and usage rights, to contract negotiations and regulation, we specialise in delivering pragmatic advice that fuels and protects software and hardware companies.

As AI continues to advance, we’re working to ensure our clients can brave a new frontier with peace of mind.

Preparing to embrace AI, or wondering whether you’ve already wandered into murky waters?

Get in touch with our technology law team to see how we can help.
