The Artificial Intelligence (“AI”) Act is a proposed European law that attempts to regulate the use of AI across the 27 EU member states.
Krishna Jhala, Senior Associate, Digital & Data
It is the first law on AI by a major regulator, and other states, such as Brazil and the UK, have followed suit. The AI Act assigns applications of AI to three risk categories: (a) unacceptable-risk applications and systems, which are banned (such as government-run social scoring of the type used in China); (b) high-risk applications, which are subject to specific legal requirements (such as a CV-scanning tool that ranks job applicants); and (c) minimal-risk applications, which are neither explicitly banned nor listed as high-risk and are largely left unregulated.
The aim of the AI Act is to cover all providers of AI including traditional symbolic AI, machine learning, as well as hybrid systems.
Article 2 of the AI Act states that it applies to:
a. providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country; and
b. users of AI systems located within the EU.
The term “providers” is broad enough to cover product manufacturers, importers, distributors, and other third parties involved.
The AI Act does not apply to AI systems developed or used exclusively for military purposes.
Like the EU’s General Data Protection Regulation ("GDPR"), the AI Act also has an extraterritorial effect. The AI Act will apply to providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.
The AI Act explicitly prohibits use of AI systems that:
use subliminal techniques to manipulate a person’s behaviour in a manner that may cause psychological or physical harm;
exploit the vulnerabilities of any group of people due to their age or physical or mental disability in a manner that may cause psychological or physical harm;
enable governments to use general-purpose “social credit scoring”; or
provide real-time remote biometric identification in publicly accessible spaces for law enforcement, except in certain time-limited public safety scenarios.
Applications used as safety components of regulated products, such as medical devices and machinery, will constitute high-risk AI systems.
Certain (stand-alone) AI systems in the following fields will also be treated as high-risk:
Biometric identification and categorisation of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment and workers management, access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Migration, asylum and border control management
Administration of justice and democratic processes
The AI Act sets out compliance requirements for high-risk AI systems, including establishing and implementing risk management processes. Additionally, high-risk AI systems must be designed and developed in such a way that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
The EU AI Act could become a global standard, like the GDPR, determining the extent to which AI has a positive rather than a negative effect on people’s lives wherever they may be. The EU’s AI regulation is already making waves internationally.
The UAE has a clear vision through its National Strategy for AI to become the world leader in AI by 2031.
The UAE Federal government has established the Office for Artificial Intelligence, Digital Economy & Remote Work Applications. The government also established the UAE Council for Artificial Intelligence and Blockchain (formed in 2018 and renewed in 2021), which is tasked with proposing policies to create an AI-friendly ecosystem, advancing research in the sector, and promoting collaboration between the public and private sectors, including international institutions, to accelerate the adoption of AI. This Council oversees the implementation of the UAE National Strategy for Artificial Intelligence 2031.
The National Strategy for AI 2031 sets out eight strategic objectives, which include: (i) build a reputation as an AI destination; (ii) increase the UAE’s competitive assets in priority sectors through deployment of AI; (iii) develop a fertile ecosystem for AI; (iv) adopt AI across customer services to improve lives and government; (v) attract and train talent for future jobs enabled by AI; (vi) bring world-leading research capability to work with target industries; (vii) provide the data and supporting infrastructure essential to become a test bed for AI; and (viii) ensure strong governance and effective regulation. The AI Office is headed by H.E. Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications.
For further information, please contact Krishna Jhala.
Published in May 2023