However, for computational reasons, ontology languages constrain the structure of such logical rules to a single consequence. Thus, the first intellectual task of the legal expert is to decompose each norm into the following scheme. In the example of Sec. The next intellectual task for the legal expert is to replace each SF and LC by elements of the ontology.
In a first step, this is achieved by identifying relevant classes. LC is replaced by Lawfulness.
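The decomposition described above can be sketched as a small data structure. The following is a minimal, hypothetical sketch (all class names are invented for illustration), assuming a norm is reduced to a conjunction of statements of fact (SFs) entailing exactly one legal consequence (LC):

```python
from dataclasses import dataclass

# Hypothetical sketch: a norm decomposed into statements of fact (SFs)
# that jointly entail exactly one legal consequence (LC).
@dataclass(frozen=True)
class Norm:
    sfs: frozenset   # ontology classes replacing the SFs
    lc: str          # ontology class replacing the LC (one consequence only)

    def applies(self, facts: set) -> bool:
        # The norm is applicable when all of its SFs are given in the case.
        return self.sfs <= facts

# Invented example in the spirit of the text: consent permits collection.
lawfulness = Norm(sfs=frozenset({"PersonalData", "Collection", "Consent"}),
                  lc="Lawfulness")

print(lawfulness.applies({"PersonalData", "Collection", "Consent"}))  # True
```

The restriction to a single `lc` field mirrors the single-consequence constraint of the ontology language.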
An artificial intelligence approach to legal reasoning
In a second step, explicit links between the chosen classes are inserted by means of relations. The legal norm typically contains indications for such explicit links, e.g. In a third step, the legal expert checks for implicit links which are not directly mentioned in the legal norm but are mentally complemented by the interpreter. For example, there exists an implicit link between Consent and the Collection, Processing, and Use. Namely, it is the consent that permits such actions. As an example, Sec. Their names can be chosen arbitrarily. Legal norms have complex interrelationships that feed back into the formalisation of an individual norm.
First, two or more legal norms might have overlapping SFs that lead to different LC s depending on whether they are given or not. An example for this case has been discussed at the beginning of Section 3. Second, a legal norm might be an explicit exception to another legal norm. In this case, both norms must be rewritten as shown in formula 6.
The occurrence of SF in both norms indicates that overlapping states of affairs might be given at the same time. As an example, consider Sec. One exception (special circumstances) is mentioned in the same norm. However, there might also be exceptions in other provisions, e.g. This situation is formalised as sketched in formulae 7–9. The third case is an implicit exception between legal norms with contradicting consequences, e.g. When just considering the text, this leads to logical rules that both overlap in their SFs and lead to conflicting LCs.
A possible solution for resolving the conflict is to rewrite both rules according to the scheme depicted in formula 6, i.e. The subject matter has to be formalised as completely as possible for automated processing. This is also a prerequisite for being able to advise the developer in a software development environment.
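The rewriting can be illustrated with a small sketch. Assuming (hypothetically) that the general rule is guarded by the negated SFs of its exception, overlapping states of affairs no longer yield conflicting LCs; the class names below are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuardedNorm:
    sfs: frozenset         # SFs that must be given
    absent_sfs: frozenset  # SFs of the exception that must NOT be given
    lc: str

    def applies(self, facts: set) -> bool:
        return self.sfs <= facts and not (self.absent_sfs & facts)

# A general rule and its explicit exception (hypothetical classes):
general = GuardedNorm(frozenset({"Transfer"}),
                      frozenset({"SpecialCircumstances"}), "Unlawfulness")
exception = GuardedNorm(frozenset({"Transfer", "SpecialCircumstances"}),
                        frozenset(), "Lawfulness")

facts = {"Transfer", "SpecialCircumstances"}
# Only the exception fires, so the consequences no longer conflict:
print([n.lc for n in (general, exception) if n.applies(facts)])  # ['Lawfulness']
```

Without the guard, both rules would apply to the overlapping facts and derive contradictory consequences.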
Fortunately, in our case, a major part of the subject matter is already inherent in the software. It is more a matter of representing this information suitably, such that automated processing is facilitated. The schema is further discussed in Section 4. It has to be manually designed only once by a domain expert.
Instances of such classes then represent a specific subject matter, as further discussed in Section 4. These instances can be obtained automatically from the software. Using the same ontology language and also the same foundational ontology facilitates finding correlations with the statutory ontology during the legal reasoning process in Section 5.
It is the task of a domain expert to build the schema, i.e. They are aligned under, i.e. However, each class can have an arbitrary number of relations with different semantics. It is an intellectual and manual task of the domain expert to identify and model such relevant relations. Figure 7 shows some examples of such relations around the class Web Service. A piece of Software, e.g. While the schema of the subject matter ontology only captures relevant concepts in the form of classes and relations, it is the instances that represent a specific subject matter depending on a given software.
That means the schema does not change but rather the instances depend on the actual subject matter. Instances are concrete elements of classes that have relations to other instances according to the schema. In contrast to the schema, the instances are obtained automatically in different ways, as discussed in the following section. Most of the relevant instances are inherent in the code or accompanying descriptors and are extracted automatically.
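The division of labour between the fixed schema and the software-dependent instances can be sketched as follows. The class and relation names are invented for illustration; the point is only that instances must conform to the relations the schema declares:

```python
# Hypothetical sketch: the schema (classes and typed relations) is designed
# once, while instances conforming to it vary with the analysed software.
schema = {
    "WebService": {"hasOperation": "WebServiceOperation"},
    "WebServiceOperation": {"transmits": "InformationObject"},
    "Software": {"invokes": "WebService"},
}

class Instance:
    def __init__(self, name, cls):
        assert cls in schema, f"unknown class {cls}"
        self.name, self.cls, self.relations = name, cls, []

    def relate(self, relation, target):
        # Only relations the schema declares for this class are allowed.
        expected = schema[self.cls].get(relation)
        assert expected == target.cls, f"{relation} must point to {expected}"
        self.relations.append((relation, target))

app = Instance("MyApp", "Software")
ws = Instance("FacebookAPI", "WebService")
app.relate("invokes", ws)
print([(r, t.name) for r, t in app.relations])  # [('invokes', 'FacebookAPI')]
```

The `schema` dictionary never changes between analysed programs; only the `Instance` objects do.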
Concrete instances for automatic extraction from the software in the subject matter ontology have multiple sources. One primary source is integrated software development environments that keep track of software components and their interdependencies. In addition, software visualisation tools perform a more in-depth analysis of invocations for better understanding complex software. This information is leveraged for our purposes.
Another source is descriptor files, typically given in XML. For each source, an adaptor has to be developed that realises a mapping from the source information to the subject matter ontology. Adaptors have to be designed such that they only extract relevant information. Not all information can be extracted or is actually known at the design time of the software. In this case, proxy instances are put in place that have to be resolved by the developer during the legal reasoning process (see Section 5). An example is depicted in Figure 8.
The class Information Object is associated with an ontology design pattern of the same name, prescribing the depicted relations to the remaining grey classes in the figure. For all remaining relations, proxy instances have to be put in place, identified by a common naming scheme:?
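An adaptor for an XML descriptor might look like the following sketch. Element and attribute names are invented, and we assume, for illustration, that unresolved values become proxy instances marked by a reserved "?" name prefix:

```python
import xml.etree.ElementTree as ET

# Invented descriptor format for illustration.
DESCRIPTOR = """
<services>
  <service name="FacebookAPI" endpoint="https://graph.facebook.com"/>
  <service name="AnalyticsAPI"/>
</services>
"""

def extract_instances(xml_text):
    """Map descriptor entries to subject-matter instances; information
    unknown at design time yields proxy instances the developer must
    resolve later during the legal reasoning process."""
    instances = []
    for svc in ET.fromstring(xml_text).iter("service"):
        name = svc.get("name")
        # Unknown at design time -> proxy instance (assumed "?" scheme).
        endpoint = svc.get("endpoint") or f"?endpoint_{name}"
        instances.append(("WebService", name, endpoint))
    return instances

for inst in extract_instances(DESCRIPTOR):
    print(inst)
# ('WebService', 'FacebookAPI', 'https://graph.facebook.com')
# ('WebService', 'AnalyticsAPI', '?endpoint_AnalyticsAPI')
```

The adaptor deliberately ignores everything in the descriptor except the relevant elements, as required above.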
Some information is known in advance depending on the context of our approach. This information is exploited and represented as instances in the subject matter ontology. Simple examples are the user and provider of the software. Both roles are always present and result in default instances! A more sophisticated example is the software context. In our scenario, we deal with mobile apps and corresponding operating systems. For example, the user is asked whether the app is allowed to transmit the coarse-grained (network-based) or fine-grained (GPS) location of the device.
A corresponding default instance! Formalising the norm graph, as well as the subject matter, is the prerequisite for semi-automating the legal reasoning process. The semi-automation of the legal reasoning process happens within an integrated software development environment based on a given software. This process essentially finds correspondences between the subject matter and one or more statutory ontologies and requires interaction with the developer. Information of the software to be analysed is represented by a set I of instances in the subject matter ontology. These instances are automatically extracted from the source code as described in Section 4.
Semi-automated legal reasoning transfers the instances to a statutory ontology, i.e. For example, the instance WSOpI 1 of class Web Service Operation Invocation in the subject matter ontology corresponds to the class Transfer in the data privacy ontology for private bodies. In this case, WSOpI 1 must be subsumed under, i.e. In the following, we present a subsumption algorithm that relieves the developer as much as possible from finding such correspondences. The algorithm starts with the question of the applicability of a specific statutory provision (see Question 1 in Section 2). This question is reduced to a selection box where the developer can choose a statutory ontology for a specific field of law, e.g.
Depending on the chosen field of law, the developer is prompted to select one, several, or all desired legal consequences. In the data privacy ontology for private bodies, legal consequences are lawfulness as well as the obligations of the data controller defined by the FDPA. Each legal consequence is associated with a norm graph, e.g. Consequently, the algorithm tries to subsume the subject matter under the nodes of the chosen norm graph.
Figure 9: Subsumption algorithm. The steps 5. Completing Subjective Relations and 5. Guided Interpretation require the interaction of the developer. The five process steps in Figure 9 are each further explained in Section 5. Whether legal concepts above LC are given can be automatically inferred, as further discussed in Section 6.
For Personal Data, R consists of the following relations and accompanying target classes: expressesDirectly (Identity), expressesIndirectly (Identity), and about (Natural Person), according to the right side of Figure 5. In each process step of the algorithm, the set I of all instances of the subject matter ontology is reduced to a result set containing potential candidates for subsumption under the current legal concept LC.
As soon as the result set is empty, the algorithm can terminate since no subsumable instance exists. If all steps are executed and I remains non-empty, its instances are transferred to the corresponding statutory ontology. Case 1 — Preliminary Check: Our approach allows establishing general correspondences between a class in a statutory ontology and a class C in the subject matter ontology.
For instance, C might be the class User or Provider whose instances always correspond to Data Subject and Data Controller in the data privacy ontology for private bodies.
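The overall control flow of the algorithm, and the preliminary check as its first filtering step, can be sketched as follows. This is a hypothetical skeleton (instances are modelled as tagged tuples, and the class names are taken from the examples above):

```python
# Hypothetical skeleton of the subsumption algorithm: each step narrows
# the candidate set I; an empty set terminates early, and a non-empty set
# after all steps is transferred to the statutory ontology.
def subsume(instances, steps):
    result = set(instances)
    for step in steps:
        result = step(result)
        if not result:
            return set()   # no subsumable instance under this LC
    return result          # transfer these to the statutory ontology

# Case 1, preliminary check: User/Provider instances correspond directly
# to Data Subject / Data Controller in the data privacy ontology.
def preliminary_check(result):
    return {i for i in result if i[0] in ("User", "Provider")}

I = {("User", "u1"), ("Provider", "p1"), ("WebService", "ws1")}
print(sorted(subsume(I, [preliminary_check])))
# [('Provider', 'p1'), ('User', 'u1')]
```

The later steps (direct queries, relaxation, and the two interactive wizard steps) would simply be appended to the `steps` list, each consuming the previous step's result set.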
If such a correspondence is present for the current legal concept LC, the subject matter ontology can be queried directly for instances of C, i.e. Case 2 — LC is part of the common foundational ontology: There might be general concepts that are contained in the foundational ontology and, thus, are valid in both a statutory and the subject matter ontology. An example of such a concept is Natural Person (see Figure 4 and Figure 6). In this case, the subject matter ontology can be directly queried for the legal concept, i.e.
Case 3 — LC is part of the subject matter ontology: A legal concept might be a terminus technicus. Case 4 — LC features relations that are contained in the subject matter ontology: Although LC might not be part of the subject matter ontology, all the relations of LC might point to classes contained in the subject matter ontology. In this case, the query is constructed out of all instances that feature the corresponding relations independent of a specific class.
In case the query yields an empty result set, the algorithm can be terminated for the current legal concept LC, i.e. Otherwise the algorithm continues with the steps 5. Completing Subjective Relations and 5. Guided Interpretation, respectively. As soon as at least one relation of the current legal concept LC is neither part of the foundational nor the subject matter ontology, a direct query is not possible. Instead, the query constructed in 5. is generalised. Its relation about (Natural Person) is relaxed to about (Entity), correspondingly. Because of the generalisation, a non-empty query result might contain instances that do not classify as personal data.
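The relaxation step can be sketched as follows, assuming an invented superclass taxonomy ending in a foundational top class Entity; a relation target that cannot be queried directly is generalised upward and the query is repeated:

```python
# Hypothetical sketch of relation relaxation: a relation's target class
# is generalised to a superclass from the common foundational ontology,
# and the query is repeated (the taxonomy below is invented).
SUPERCLASS = {"NaturalPerson": "Agent", "Agent": "Entity"}

def relax(target):
    # One step up the taxonomy; defaults to the top class Entity.
    return SUPERCLASS.get(target, "Entity")

def query(instances, relation, target):
    return {name for name, rels in instances.items()
            if (relation, target) in rels}

# Instance extracted from the software: its relation target is only
# known at the generic Entity level.
instances = {"FBID": {("about", "Entity")}}

strict = query(instances, "about", "NaturalPerson")           # direct query fails
relaxed = query(instances, "about", relax(relax("NaturalPerson")))
print(strict, relaxed)  # set() {'FBID'}
```

The relaxed result may over-approximate: instances matched only at the generalised level still have to be filtered manually by the developer, which is exactly what the subsequent wizard step does.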
These must be manually selected by the user in the subsequent step 5. If the query result is empty, there are no subsumable instances and the algorithm can be terminated for the current legal concept LC. This step iterates over all generalised relations and prompts the developer to concretise each relation for every instance in the result set. This is supported by a wizard as depicted in Figure 10, where the left side represents the legal concept and current relation and the right side represents the current instance. The corresponding query in the previous step yielded the instance FBID as a result.
Consequently, the wizard asks the developer whether FBID might be about a Natural Person and even lets him choose concrete instances of Natural Person (such as the default instance). If this step ends with an empty result of instances, the algorithm can be terminated for the current legal concept LC. Otherwise, the algorithm proceeds with completing subjective relations in the subsequent steps. Subjective relations rely on perception and require information about subjective or personal attitudes which are not accessible to another person.
In our approach, legal concepts which cannot be formalised, such as whether a person is identified by a given data, are also treated as subjective.
Accordingly, this step iterates over all subjective relations and all instances of the result set. The step requires the interaction of the developer via a wizard (see Figure). The left side of the wizard displays a graphical rendition of the current legal concept LC, where the subjective relation is highlighted with accompanying explanations.
As an example, the relation expressesDirectly (Identity) of Personal Data is classified as being subjective in the scenario. The right side shows an instance of the current result set, e.g. The developer has to decide, on the basis of the provided information, whether the subjective relation is given for every instance. Consequently, the developer positively answers the corresponding question of the wizard.
In case of an empty result set after the iteration, the algorithm can be terminated for the current legal concept LC. Otherwise, the algorithm proceeds with step 5. Guided Interpretation. Figure: User interface design for the step Completing Subjective Relations. When legal experts interpret a statute, their approach is usually eclectic: they first try to interpret the statutory text of the relevant legal concept and norm, respectively. If this interpretation yields ambiguities or uncertainties, the legislative context, history, and purpose have to be consulted for a resolution. The legislative history considers interpretations of the evolution of the statutory text over time and its original intention.
Finally, the legislative purpose considers the intended overall legal policy of the corresponding provision. The steps so far dealt with relations that can be interpreted by considering the statutory text only. In the following, we present a wizard that guides the developer in dealing with ambiguous or uncertain relations that require additional interpretation.
In this part of the study an elaborated jurisprudential model of legal reasoning is introduced, reflecting different sub-processes and various types of legal knowledge exercising influence over them. In addition, a critical analysis of various AI-approaches that have been suggested for the field of law is provided. The investigation leads up to the formulation of a design approach for advanced AI-systems for law, based on a functional decomposition of legal knowledge, the integration of various computational techniques and the structural integration of different types of small-scale AI-systems.
Giovanni Sartor, Argumentation in Legal Reasoning. This immediately makes many lawyers sceptical about the usefulness of such systems: this mechanical approach seems to leave out most of what is important in legal reasoning. A case does not appear as a set of facts, but rather as a story told by a client. For example, a man may come to his lawyer saying that he had developed an innovative product while working for Company A. Now Company B has made him an offer of a job, to develop a similar product for them. Can he do this? The lawyer firstly must interpret this story, in context, so that it can be made to fit the framework of applicable law.
Several interpretations may be possible. In our example it could be seen as being governed by his contract of employment, or as an issue in Trade Secrets law. Next the legal issues must be identified and the pros and cons of the various interpretations considered with respect to them.
Does his contract include a non-disclosure agreement? If so, what are its terms? Was he the sole developer of the product? Did Company A support its development? Does the product use commonly known techniques? Did Company A take measures to protect the secret? Some of these will favour the client, some the Company. Each interpretation will require further facts to be obtained. For example, do the facts support a claim that the employee was the sole developer of the product?
What is the precise nature of the agreements entered into? Some precedents may point to one result and others to another. In that case, further arguments may be produced to suggest following the favourable precedent and ignoring the unfavourable one. Or the rhetorical presentation of the facts may prompt one interpretation rather than the other. Surely all this requires the skill, experience and judgement of a human being? Granted that this is true, much effort has been made to design computer programs that will help people in these tasks, and it is the purpose of this chapter to describe the progress that has been made in modelling and supporting this kind of sophisticated legal reasoning.
We will review systems that can store conflicting interpretations and that can propose alternative solutions to a case based on these interpretations. We will also describe systems that can use legal precedents to generate arguments by drawing analogies to or distinguishing precedents. We will discuss systems that can argue why a rule should not be applied to a case even though all its conditions are met. Then there are systems that can act as a mediator between disputing parties by structuring and recording their arguments and responses.
Finally we look at systems that suggest mechanisms and tactics for forming arguments. Much of the work described here is still research: the implemented systems are prototypes rather than finished systems, and much work has not yet reached the stage of a computer program but is stated as a formal theory. Our aim is therefore to give a flavour (certainly not a complete survey) of the variety of research that is going on and the applications that might result in the not too distant future.
Also for this reason we will informally paraphrase example inputs and outputs of systems rather than displaying them in their actual, machine-readable format; moreover, because of space limitations the examples have to be kept simple. All of them concern the construction of arguments and counterarguments. The pioneering work here is that of Thorne McCarty. This required highly sophisticated reasoning, constructing competing theories and reasoning about the deep structure of legal concepts to map the specific situation onto paradigmatic cases.
Although some aspects of the system were prototyped, the aim was perhaps too ambitious to result in a working system, certainly given the then current state of the art. Note also the emphasis on persuasion, indicating that we should expect to see argumentation rather than proof. Both the importance of theory construction and the centrality of persuasive argument are still very much part of current thinking in AI and Law.
Another early system was developed by Anne Gardner 15 in the field of offer and acceptance in American contract law. One set of rules was derived from the Restatement of Contract Law, a set of principles abstracting from thousands of contract cases. These rules were intended to be coherent, and to yield a single answer if applicable. This set of rules was supplemented by a set of interpretation rules derived from case law, common sense and expert opinion, intended to link these other rules to the facts of the case.
Some of the issues were resolved by the program with a heuristic that gives priority to rules derived from case law over restatement and commonsense rules. The rationale of this heuristic is that if a precedent conflicts with a rule from another source, this is usually because that rule was set aside for some reason by the court. The remaining issues were left to the user for resolution. Suppose two rules conflict, C3 and E1. If the case is tried and E1 is held to have precedence, E1 will now be a precedent rule, and any subsequent case in which this conflict arises will be easy, since, as a precedent rule, E1 will have priority over C3.
To be applicable to a new case, however, the rule extracted may need to be analogised or transformed to match the new facts. Nor is extracting the rationale straightforward: judges often leave their reasoning implicit and in reconstructing the rationale a judge could have had in mind there may be several candidate rationales, and they can be expressed at a variety of levels of abstraction. In such domains a rationale of a case often just expresses the resolution of a particular set of factors in a specific case.
Moreover, cases are more than simple rationales: matters such as the context and the procedural setting can influence the way the case should be used. In consequence, some researchers have attempted to avoid using rules and rationales altogether, instead representing the input, often interpreted as a set of factors, and the decisions of cases, and defining separate argument moves for interpreting the relation between the input and decision, e.g.
This approach is particularly associated with researchers in US, where the common law tradition places a greater stress on precedent cases and their particular features than is the case with the civil law jurisdictions of Europe. In HYPO cases are represented according to a number of dimensions. A dimension is some aspect of the case relevant to the decision, for example, the security measures taken by the plaintiff.
One end of the dimension represents the most favourable position for the plaintiff, e.g. Typically a case will lie somewhere between the two extremes and will be more or less favourable accordingly. HYPO then uses these dimensions to construct three-ply arguments. First one party (say the plaintiff) cites a precedent case decided for that side and offers the dimensions it shares with the current case as a reason to decide the current case for that side. In the second ply the other party responds either by citing a counterexample, a case decided for the other side which shares a different set of dimensions with the current case, or distinguishing the precedent by pointing to features which make the precedent more, or the current case less, favourable to the original side.
In the third ply the original party attempts to rebut the arguments of the second ply, by distinguishing the counterexamples, or by citing additional precedents to emphasise the strengths or discount the weaknesses in the original argument. Subsequently Ashley went on, with Vincent Aleven, to develop CATO (most fully reported in 1), a system designed to help law students learn to reason with precedents. In CATO the notion of dimensions is simplified to a notion of factors. A factor can be seen as a specific point of the dimension: it is simply present or absent from a case, rather than present to some degree, and it always favours either the plaintiff or defendant.
A new feature of CATO is that these factors are organised into a hierarchy of increasingly abstract factors, so that several different factors can be seen as meaning that the same abstract factor is present. One such abstract factor is that the defendant used questionable means to obtain the information, and two more specific factors indicating the presence of this factor are that the defendant deceived the plaintiff and that he bribed an employee of the plaintiff: both these factors favour the plaintiff.
To give an example of downplaying, if in the precedent the defendant used deception while in the new case the defendant instead bribed an employee, then a distinction made by the defendant at this point can be downplayed by saying that in both cases the defendant used questionable means to obtain the information. The program matched portions of the network for the new case with parts of the networks of precedents, to identify appropriate analogies. Of all this work, HYPO in particular was highly influential, both in the explicit stress it put on reasoning with cases as constructing arguments, and in providing a dialectical structure in which these arguments could be expressed, anticipating much other work on dialectical procedures.
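The downplaying move just described can be sketched with a tiny factor hierarchy. The factor identifiers are invented; the mechanism is simply that two different base-level factors map to the same abstract factor:

```python
# Hypothetical sketch of CATO-style downplaying: two different base
# factors are subsumed under the same abstract factor.
PARENT = {  # factor hierarchy (invented identifiers)
    "deceived_plaintiff": "questionable_means",
    "bribed_employee": "questionable_means",
}

def abstract_factors(factors):
    return {PARENT.get(f, f) for f in factors}

precedent = {"deceived_plaintiff"}
new_case = {"bribed_employee"}

# The distinction at the base level ...
print(precedent & new_case)  # set()
# ... is downplayed at the abstract level:
print(abstract_factors(precedent) & abstract_factors(new_case))
# {'questionable_means'}
```

At the base level the cases share no factor, so the precedent can be distinguished; lifting both sides to the abstract level restores the analogy, which is exactly the downplaying argument.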
Recall that Gardner allows for the presence in the knowledge base of conflicting rules governing the interpretation of legal concepts and that she defines an issue as a problem to which either no rules apply at all, or conflicting rules apply. Now in logical terms an issue can be defined as a proposition such that either there is no argument about this proposition or there are both arguments for the proposition and for its negation. Some more recent work in this research strand has utilised a very abstract AI framework for representing systems of arguments and their relations, developed by Dung. For Dung, the notion of argument is entirely abstract: all that can be said of an argument is which other arguments it attacks, and which it is attacked by.
Given a set of arguments and the attack relations between them, it is possible to determine which arguments are acceptable: an argument which is not attacked will be acceptable, but if an argument has attackers it is acceptable only if it can be defended, against these attackers, by acceptable arguments which in turn attack those attackers. This framework has proved a fruitful tool for understanding non-monotonic logics and their computational properties.
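The acceptability notion just described can be made concrete with a short sketch that computes the grounded extension of a Dung framework by iterating the defence operator from the empty set (arguments are opaque labels, attacks are pairs):

```python
# A minimal sketch of Dung's abstract argumentation framework.
def grounded(args, attacks):
    """Return the grounded extension; attacks is a set of
    (attacker, target) pairs."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in args}
    ext = set()
    while True:
        # An argument is acceptable if every attacker is counter-attacked
        # by an argument already accepted (unattacked arguments trivially so).
        new = {a for a in args
               if all(any((d, b) in attacks for d in ext)
                      for b in attackers[a])}
        if new == ext:
            return ext
        ext = new

# A attacks B, B attacks C: A is unattacked, and A defends C against B.
print(sorted(grounded({"A", "B", "C"}, {("A", "B"), ("B", "C")})))
# ['A', 'C']
```

The fixpoint iteration mirrors the informal definition above: start with nothing, repeatedly add every argument all of whose attackers are already defeated, and stop when nothing changes.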
See below and Chapter ??. Thus her system was able to solve some of the cases to which conflicting rules apply. This relates to much logical work in Artificial Intelligence devoted to the resolution of rule conflicts in so-called commonsense reasoning. If we have a rule that birds can fly and another that ostriches cannot fly, we do not want to let the user decide whether Cyril the ostrich can fly or not: we want the system to say that he cannot, since an ostrich is a specific kind of bird.
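The bird/ostrich conflict can be resolved mechanically by the specificity principle, as in this sketch (rules are condition-set/conclusion pairs; a rule is defeated by a conflicting rule with a strictly more specific condition):

```python
# Hypothetical sketch of conflict resolution by specificity: the more
# specific rule (about ostriches) defeats the more general one (about birds).
rules = [
    ({"bird"}, "flies"),
    ({"bird", "ostrich"}, "not flies"),
]

def conclude(facts):
    applicable = [(cond, concl) for cond, concl in rules if cond <= facts]
    # Keep only rules not strictly less specific than a conflicting rule.
    winners = [r for r in applicable
               if not any(r[0] < s[0] for s in applicable if s[1] != r[1])]
    return {concl for _, concl in winners}

print(conclude({"bird", "ostrich"}))  # {'not flies'}
print(conclude({"bird"}))             # {'flies'}
```

Note that this only works because the ostrich rule's condition is a strict superset of the bird rule's; as the text goes on to explain, in law such specificity relations are rarely this clear-cut.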
Naturally attempts have been made to apply these ideas to law. One approach was to identify general principles used in legal systems to establish which of two conflicting rules should be given priority. To this end the logics discussed above were extended with the means to express priority relations between rules in terms of these principles, so that rule conflicts would be resolved. Researchers soon realised, however, that general priority principles can only solve a minority of cases.
Firstly, as for the specificity principle, whether one rule is more specific than another often depends on substantive legal issues such as the goals of the legislator, so that the specificity principle cannot be applied without an intelligent appreciation of the particular issue. Secondly, general priority principles usually only apply to rules from regulations and not to, for instance, case rationales or interpretation rules derived from cases.
Accordingly, in many cases the priority of one rule over another can be a matter of debate, especially when the rules that conflict are unwritten rules put forward in the context of a case. For these reasons models of legal argument should allow for arguments about which rule is to be preferred. As an example of arguments about conflicting case rationales, consider three cases discussed by, amongst others, 10; 7; 32 and 8 concerning the hunting of wild animals.
In all three cases, the plaintiff P was chasing wild animals, and the defendant D interrupted the chase, preventing P from capturing those animals. The issue to be decided is whether or not P has a legal remedy a right to be compensated for the loss of the game against D. In the first case, Pierson v Post, P was hunting a fox on open land in the traditional manner using horse and hound, when D killed and carried off the fox. In this case P was held to have no right to the fox because he had gained no possession of it.
In the second case, Keeble v Hickeringill, P owned a pond and made his living by luring wild ducks there with decoys, shooting them, and selling them for food. Out of malice, D used guns to scare the ducks away from the pond. Here P won. In the third case, Young v Hitchens, both parties were commercial fisherman. In this case D won.
The rules we are concerned with here are the rationales of these cases:

R1 (Pierson): If the animal has not been caught, the defendant wins.
R2 (Keeble): If the plaintiff is pursuing his livelihood, the plaintiff wins.
R3 (Young): If the defendant is in competition with the plaintiff and the animal is not caught, the defendant wins.

Note that R1 applies in all cases and R2 in both Keeble and Young. To start with, note that if, as in HYPO, we only look at the factual similarities and differences, none of the three precedents can be used to explain the outcome of one of the other precedents.
For instance, if we regard Young as the current case, then both Pierson and Keeble can be distinguished. A way of arguing for the desired priorities, first mooted in 10 , is to refer to the purpose of the rules, in terms of the social values promoted by following the rules. The logic of 35 provides the means to formalise such arguments.
Consider another case in which only the plaintiff was pursuing his livelihood and in which the animal was not caught.
In the following imaginary dispute the parties reinterpret the precedents in terms of the values promoted by their outcomes, in order to find a controlling precedent (we leave several details implicit for reasons of brevity; a detailed formalisation method can be found in 32; see also 8).
Plaintiff: I was pursuing my livelihood, so by Keeble I win.
Defendant: You had not yet caught the animal, so by Pierson I win.
Plaintiff: Following Keeble promotes economic activity, which is why Keeble takes precedence over Pierson, so I win.
Defendant: Following Pierson protects legal certainty, which is why Keeble does not take precedence over Pierson, so you do not win.
Plaintiff: Therefore, I am right that Keeble takes precedence over Pierson, so I still win.

This dispute contains priority debates at two levels: first the parties argue about which case rationale should take precedence (by referring to values advanced by following the rationale), and then they argue about which of the conflicting preference rules for the rationales takes precedence (by referring to the relative order of the values).
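The value-based resolution argued over in this dispute can be sketched as follows. Each rationale is paired with the value its outcome promotes; the value ordering itself is an assumption of the example (here the plaintiff's ranking), not something the sketch can decide:

```python
# Hypothetical sketch of value-based precedence between case rationales:
# each rationale promotes a value, and an assumed value ordering decides
# which rationale controls the new case.
rationales = {
    "Pierson": ({"not_caught"}, "defendant", "legal_certainty"),
    "Keeble": ({"livelihood"}, "plaintiff", "economic_activity"),
}
# Assumed for illustration: economic activity ranked above legal certainty.
value_rank = {"economic_activity": 2, "legal_certainty": 1}

def decide(facts):
    applicable = [(name, winner, value)
                  for name, (cond, winner, value) in rationales.items()
                  if cond <= facts]
    # The rationale promoting the highest-ranked value controls the case.
    name, winner, _ = max(applicable, key=lambda r: value_rank[r[2]])
    return name, winner

# New case: the plaintiff pursued his livelihood, animal not caught.
print(decide({"livelihood", "not_caught"}))  # ('Keeble', 'plaintiff')
```

Reversing the two entries in `value_rank` would make Pierson control and the defendant win, which is precisely why the parties go on to dispute the ordering of the values themselves.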
In general, a priority debate could be taken to any level and will be highly dependent on the context and jurisdiction. The most fully developed logical theory about what it takes to apply a rule is reason-based logic, developed jointly by Jaap Hage and Bart Verheij, e.g. Their account of rule application can be briefly summarised as follows. If these questions are answered positively (and all three are open to debate), it must finally be determined that the rule can be applied, i.e.
On all four questions reason-based logic allows reasons for and against to be provided and then weighed against each other to obtain an answer. Consider by way of illustration a recent Dutch case HR , NJ , in which a male nurse aged 37 married a wealthy woman aged 97 whom he had been nursing for several years, and killed her five weeks after the marriage. However, the court refused to apply these statutes, on the grounds that applying them would be manifestly unjust. Let us assume that this was in turn based on the legal principle that no one shall profit from his own wrongdoing (the court did not explicitly state this).
In reason-based logic this case could be formalised as follows (again, the full details are suppressed for reasons of brevity). Claimant: Statutory rule R is a valid rule of Dutch law since it was enacted according to the Dutch constitution and never repealed.