
A Legal Definition of AI

Jonas Schuett

Goethe University Frankfurt 

September 4, 2019



Abstract

When policy makers want to regulate AI, they must first define what AI is. However, legal definitions differ significantly from definitions in other disciplines: they are working definitions, and courts must be able to determine precisely whether or not a concrete system is considered AI by the law. In this paper, we examine how policy makers should define the material scope of AI regulations. We argue that they should not use the term "artificial intelligence" for regulatory purposes because no definition of AI meets the requirements for legal definitions. Instead, they should define certain designs, use cases or capabilities, following a risk-based approach. The goal of this paper is to help policy makers who work on AI regulations.



Artificial Intelligence

The most obvious approach would be to use the term "artificial intelligence". The material scope could then be formulated as follows: "This regulation applies to the development, deployment and use of AI systems."

However, there is no generally accepted definition of the term "artificial intelligence". Since John McCarthy first used the term in 1955 [35], a vast spectrum of definitions has emerged. In the following, we discuss three of the most popular definitions. For a more comprehensive collection of definitions, we refer to the relevant literature [22, 23, 24].

Example 1 (Turing Test). The Turing test is arguably the best-known definition of AI. In 1950, Alan Turing proposed a test which he called the "imitation game" [36]. Based on this test, AI could be defined as follows:

"Artificial intelligence" means any computer that passes the Turing test.

"Turing test" means a game which is played with three participants: (1) a human, (2) a computer and (3) a human judge. The human judge is separated from the other two participants. They can only communicate via text. The Turing test is passed if the human judge cannot effectively discriminate between the human and the computer.

Example 2 (Intelligent Machines). Another popular definition goes back to John McCarthy. In 2007, he published the paper "What is Artificial Intelligence?" [37], in which he defines AI as follows:

"Artificial intelligence" means the science and engineering of making intelligent machines.

 "Intelligence" means the computational part of the ability to achieve goals in the world.

Example 3 (Intelligent Agent). Today, many AI researchers define AI as the study of intelligent agents. For example, Stuart Russell and Peter Norvig use the following definition in their standard textbook "Artificial Intelligence: A Modern Approach" [22]:

"Artificial intelligence" means an intelligent agent.

"Agent" means a software system which perceives its environment through sensors and acts upon that environment through actuators.

"Intelligence" means the ability to select an action that is expected to maximize a performance measure.



Design

Policy makers could define how AI systems are designed ("how it’s made"). This class of elements could be used to address the inherent risks of certain technical approaches.

Example 4 (Reinforcement Learning). For example, the material scope could be limited to systems based on reinforcement learning. Reinforcement learning is used in many real-world AI systems, for example in game playing [16, 41, 42], robotics [43] and recommender systems [44]. Policy makers may want to address certain safety risks which are directly linked to reinforcement learning. These include, among others, reward hacking [45, 46] and interruptibility [47]. The following definition could be used:

 "Reinforcement learning" means the machine learning task of learning a policy from reward signals that maximizes a value function [48].

Example 5 (Supervised and Unsupervised Learning). Policy makers could also define supervised and unsupervised learning. These techniques are equally widespread: for example, they are used for image recognition [49], speech recognition [50] and text detection [51]. Policy makers could use these elements to prevent certain kinds of discrimination. In particular, they could address the inherent risk that supervised and unsupervised learning reproduce biases contained in the training data [52, 53]. The elements could be defined as follows:

"Supervised learning" means the machine learning task of learning a function that maps from an input to an output based on labeled input-output pairs [22]. 

"Unsupervised learning" means the machine learning task of learning patterns in an input even though no explicit feedback is supplied [22].

Example 6 (Artificial Neural Networks). Another approach would be to define artificial neural networks. Many machine learning algorithms are based on them, and they give rise to several problematic properties, such as limited interpretability [54] and limited foreseeability [27, 55]. From a regulatory perspective, these properties pose certain risks (e.g. regarding liability for damages caused by an AI system). Policy makers could use this element to address these risks. Artificial neural networks could be defined as follows:

"Artificial neural network" means a software architecture which is composed of units connected by directed links. Each link has a numeric weight associated with it which determines the strength and sign of the connection. Each unit first computes the weighted sum of its inputs. Then it applies an activation function to derive the output.


Conclusion

Recommendations. In this paper, we have examined how policy makers should define the material scope of AI regulations. In particular, we have made the following recommendations:

• Section 2: Policy makers should not use the term "artificial intelligence" for regulatory purposes because there is no definition of AI which meets the requirements for legal definitions. Instead, they should define certain designs, use cases and/or capabilities following a risk-based approach.

• Section 3: Policy makers should not use a single element to define the material scope, unless the regulation is about a specific use case. In most cases, they should use multiple elements.

• Section 4: In most cases, policy makers should define the material scope differently for different parts of the regulation.



Full text PDF: A Legal Definition of AI (researchgate.net)
