Data protection rules, AI and automated processing: a UAE perspective
Law Update: Issue 358 - Technology, Media & Telecoms Edition
Darya Ghasemzadeh, Trainee Lawyer, Arbitration
Artificial Intelligence (“AI”) tools such as Machine Learning (“ML”) and natural language processing continue to be deployed in a range of sectors, including recruitment, financial services, and marketing, to improve efficiency, better manage risks, and create a better customer experience. In this context, developing transparent, explainable AI systems whose underlying logic can be deciphered has become paramount, not only as a matter of best practice but also for compliance with the region’s data protection laws, some of which require controllers that rely on automated decision-making to be able to explain the underlying logic of the algorithms they have deployed.
Coinciding with this increased deployment of AI tools in processing personal data, data protection regulations such as the EU’s GDPR have adopted the principle of “technology neutrality”: the protection afforded to natural persons should not depend on the techniques used to process their data.
The UAE has in the past three years seen the introduction of GDPR-style data protection laws at both the federal and financial free zone level. These include the DIFC Data Protection Law No. 5 of 2020 (“DIFC DP Law”) and the ADGM Data Protection Regulations 2021 (“ADGM DP Law”), which apply within the DIFC and ADGM financial free zones respectively, and, at the federal level, Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (“Federal DP Law”), which has introduced core data privacy principles such as transparency, accountability, data minimization, and technical and organizational measures at the federal level for the first time.
Accountability and transparency are at the core of all three of the UAE’s data protection laws. The Federal, ADGM and DIFC data protection rules all require controllers to provide certain information to the data subject, both at the time of collecting their personal data and upon the data subject’s request at a later stage. For instance, under all three laws, data subjects have a right to be informed where Profiling or automated decision-making will occur. Moreover, data subjects also have the right not to be subject to a decision based solely on automated processing.
However, the principle of transparency is challenged when using AI, as many of the ML decision-making systems deployed today are “black boxes” rather than old-style rule-based expert systems, and therefore fail to comply with the main data protection requirements of transparency, accountability and data minimization. The “black-box” effect in AI refers to a common phenomenon in ML whereby even the programmers who developed a data-fuelled deep learning algorithm are unable to explain why it arrived at a particular decision.
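To make the contrast concrete, the sketch below is an illustrative example only: a hypothetical loan-approval scenario in Python using the scikit-learn library, with invented feature names, data and labels that do not come from the article. It shows the kind of rule-based model whose full decision logic can be exported in human-readable form, the very artefact a black-box deep learning model cannot readily produce.

```python
# Illustrative sketch only: a hypothetical loan-approval model built with
# scikit-learn. Feature names, data and labels are invented for the example.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["monthly_income", "years_employed", "open_credit_lines"]

# Invented training records: [monthly_income, years_employed, open_credit_lines]
X = [
    [3000, 1, 4],
    [9000, 6, 2],
    [4500, 2, 6],
    [12000, 10, 1],
]
y = [0, 1, 0, 1]  # 0 = declined, 1 = approved (illustrative labels)

# An interpretable, rule-based model: its full decision logic can be
# rendered as human-readable if/else rules.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# These exported rules are one concrete form of the "underlying logic"
# a controller could disclose to a data subject on request. A deep
# neural network trained on the same data produces no equivalent artefact.
print(export_text(model, feature_names=feature_names))
```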
The ADGM and DIFC DP Laws suggest that the threshold for explainable AI, or other means of automated processing applied to personal data, is that the controller must be able to explain the underlying logic if requested.
The ADGM DP Law states that data subjects have a general right to access and to be informed. This includes the right to be informed of: “The existence of automated decision-making, including Profiling, and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such Processing for the Data Subject” (Section 13(1)(h)). Profiling is defined separately under the DIFC, ADGM and Federal data protection laws, but can be broadly described as any automated processing of personal data used to evaluate, analyse or predict personal aspects of a natural person, such as their health, preferences or work performance. Similarly, Article 29(vi) of the DIFC DP Law provides: “if applicable, the existence of automated decision-making, including Profiling, and meaningful information about the logic involved, as well as the significance and the possible outcomes of such Processing for the Data Subject”.
Lastly, under Article 13 of the Federal DP Law, the data subject has the right to obtain, free of charge, information about decisions made based on automated processing, including Profiling. However, the Federal DP Law is less clear on whether data subjects are entitled to be informed about the underlying logic involved, and it remains to be seen whether the impending Executive Regulations will address this once issued.
The challenge is that, although it is a mandatory requirement under the law, it is not always possible to provide meaningful information about the underlying logic of an algorithm, especially for newer tools such as deep learning and automated feature extraction, because it may not be known how the data is being evaluated or which features (i.e. data points) are being used. As a result of the above provisions, companies that are subject to the UAE’s data protection law(s) and use ML for automation should consider explainability as a feature when choosing their AI vendors, in the event that a data subject requests meaningful information about the underlying logic of such programmes.
Specifically, companies should assess whether the software provider’s models are explainable in terms of their underlying logic, and whether the underlying code is both explainable and documented, as illustrated in the sketch below.
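As one hedged illustration of such a due-diligence check, the sketch below (assuming Python, scikit-learn and an invented two-feature dataset; none of these names come from the article) uses permutation importance, a model-agnostic technique, to surface which inputs drive a less transparent model’s decisions. A vendor whose model cannot support even this kind of analysis is unlikely to satisfy a request for meaningful information about the underlying logic.

```python
# Illustrative sketch only: checking whether a (possibly opaque) model can
# yield at least a global account of which inputs drive its decisions.
# The model choice, feature names and data are invented for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["salary", "years_experience"]

# Invented dataset: the label depends mostly on the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance is model-agnostic: it measures how much the
# model's score degrades when each feature is shuffled, giving a rough
# answer to "which data points matter?" even for less transparent models.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```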
Wherever AI is involved, there are privacy risks, and these must be addressed in an ethical and proportionate way.
The new data protection laws in the UAE appear to apply a higher threshold of explainability to automated decision-making than to human decision-making. However, the data protection laws (in particular the ADGM and DIFC DP Laws) are ambiguous about the requirement to provide meaningful information about the underlying logic of an algorithm, and about what information constitutes “meaningful” information.
The law does not provide a clear checklist, but at a minimum it requires that algorithms be accountable in some way, and it remains up to controllers using automated decision-making and Profiling to have procedures in place to ensure sufficient accountability and assessment of data-related risks. ML processes cannot be treated as black boxes if they are to comply with data protection principles; they will have to make clear how they arrive at decisions.
The principle of privacy by design and by default also applies here: all software that enables automated processing must be designed from the ground up with privacy and data protection in mind. The process will have to be documented and demonstrable in order to meet the requirements of transparency and accountability.
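By way of illustration only, one possible way to make automated processing documented and demonstrable is to keep a structured record of each automated decision alongside the explanation produced at decision time. The record structure below is an invented Python sketch, not a form prescribed by the DIFC, ADGM or Federal DP laws.

```python
# Illustrative sketch only: a minimal, invented record a controller might
# keep for each automated decision so that transparency and accountability
# can later be demonstrated. Nothing here is mandated by the DIFC, ADGM or
# Federal DP laws; it is one possible approach.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    subject_id: str     # pseudonymised identifier, not raw personal data
    model_version: str  # which model/version produced the decision
    inputs: dict        # the features actually used (data minimisation check)
    decision: str       # the outcome communicated to the data subject
    explanation: str    # the "meaningful information about the logic involved"
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example record with invented values.
record = AutomatedDecisionRecord(
    subject_id="ds-4821",
    model_version="credit-tree-1.3",
    inputs={"monthly_income": 4500, "years_employed": 2},
    decision="declined",
    explanation="monthly_income <= 6000 led to decline under rule 2 of the exported tree",
)
print(record)
```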
Controllers who use AI, ML or other means of automated decision-making need to be aware that they have a legal obligation to provide meaningful information about the logic involved. Where controllers acquire the algorithm or software from a third-party provider, they should ensure that “explainability” is a feature of the automated decision-making solution they are obtaining.
For further information, please contact Andrew Fawcett or Darya Ghasemzadeh.
Published in May 2023