The rise of generative AI: regulatory implications
Technology, Media & Telecoms Focus
Artificial intelligence has been making headlines since late 2022, with one AI tool in particular, ChatGPT, catching the attention of commercial industries.
Law Update: Issue 358 - Technology, Media & Telecoms Edition
Andrew Fawcett, Partner, Digital & Data
Ali Abbasov, Trainee Lawyer, Arbitration
Artificial intelligence has been making headlines since late 2022, with one AI tool in particular, ChatGPT (Generative Pre-trained Transformer), catching the attention of commercial industries due to its usability and broad applicability. The rise of ChatGPT as a supplementary tool across various industries also carries a number of legal implications. Consequently, governments across the world are faced with the challenge of how to regulate the fast-developing AI industry without stifling innovation, and various states are already proposing different approaches to AI regulation.
Artificial Intelligence (“AI”) is the ability of robots and computers to carry out tasks that would normally require natural human intelligence. It has revolutionised the way many things are done, automating both mundane and complicated tasks and making processes and workplaces increasingly efficient.
One of the most high-profile AI technologies is the popular language bot ChatGPT. Developed and recently released by OpenAI, it has reportedly become the fastest-growing app in the world, its user base having reached some 100 million users in February this year – just two months after launch.
On 13 March 2023, OpenAI released GPT-4, the fourth major model in its GPT series and the first in the series to be multimodal, accepting both text and image inputs.
Just over a week later, a research paper was released titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4”. “Artificial General Intelligence” essentially means AI that performs at a human level across a broad range of tasks.
On 22 March 2023, the Future of Life Institute, a US non-profit chaired by an MIT physics professor with the mission of “steering transformative technology towards benefitting life and away from extreme large-scale risks”, published an open letter calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, on the basis that AI systems with human-competitive intelligence can pose profound risks to society and humanity and should be planned for and managed. The letter has attracted nearly 19,000 signatories, including the likes of Elon Musk and Steve Wozniak, and has received global news media attention.
Asking companies to voluntarily stop working on AI models might not seem that realistic, but the letter’s authors refer to human cloning and human genome modification as examples of other technologies where society has hit pause.
The Future of Life open letter also calls for AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems.
It is not that governments worldwide have been ignoring the need for national AI governance policies. The difficulty is that laws are generally reactive to societal change, and ChatGPT and the wider uptake of generative AI represent an exceptionally rapid change.
Nearly four years ago, the OECD formally adopted the first set of intergovernmental policy guidelines on AI, which recommended that governments:
Invest in AI R&D;
Foster a digital ecosystem for AI;
Shape an enabling policy environment for AI;
Build human capacity and prepare for labour market transformation; and
Foster international co-operation for trustworthy AI.
Currently, there is no legislation in the UAE that specifically governs AI. However, the UAE government, through the UAE AI Council and the Minister of State for AI, has reaffirmed its commitment to becoming a leading player in the AI industry. Consequently, the UAE AI Council has put in place the UAE National Strategy for Artificial Intelligence 2031, which aims to integrate AI across different industries in the UAE.
The European Union (EU) is in the process of passing its draft Artificial Intelligence Act (“AI Act”), a stringent and comprehensive piece of legislation intended to govern nearly all uses of AI. The AI Act groups AI applications into four risk categories, each governed by a predefined set of regulatory tools, ranging from applications deemed to pose an “unacceptable risk” (such as social scoring and certain types of biometrics), which are banned outright, to “minimal risk” AI uses, which are subject only to voluntary measures.
Due to the so-called “Brussels Effect”, the AI Act will likely influence the regulation of AI adopted in other jurisdictions.
Yet on 29 March 2023, the UK government issued its white paper “AI regulation: a pro-innovation approach”, setting out proposals for a proportionate, future-proof and pro-innovation framework for regulating AI that contrasts with the EU’s approach.
The UK government sees the centralised approach of the EU AI Act as insufficiently flexible and as limiting innovation. Instead, the current intention of the UK government is not to give responsibility to a single regulatory body, but rather to allow different regulators to take a tailored approach to the use of AI in a range of settings. This means there is no intention of passing separate new laws for this purpose; rather, the plan is to provide guidance to the existing authorities (e.g. the Information Commissioner’s Office, which regulates personal data protection; Ofcom, the telecommunications regulator; and the Competition and Markets Authority, which tackles monopolies).
The UK white paper asserts that this context-specific approach is advantageous given the broad applicability of AI, which makes a one-size-fits-all approach unfit for purpose.
As AI technology rapidly improves, the popularity of the likes of ChatGPT is demonstrating the transformative nature of AI. If used well, AI has huge capacity to support innovation and efficiency. At the same time, there are undeniable risks to the rights and safety of individuals.
Balancing such interests is not a new task for governments. However, to be at the forefront of technological developments it is critical that the right environment is created in our jurisdictions to utilise and exploit the benefits of AI.
Fundamentally that requires getting regulation right so any material risks posed by AI can be addressed without stifling innovation.
For further information, please contact Andrew Fawcett or Ali Abbasov.
Published in May 2023