GENERATING MEDICAL CASE REPORTS WITH THE LINGUISTIC STRING PARSER*

Ping-Yang Li, Martha Evens, and Daniel Hier

Department of Computer and Information Sciences, University of Alabama at Birmingham, Birmingham, AL 35294, (205) 934-2213
Computer Science Department, Illinois Institute of Technology, Chicago, IL 60616
Department of Neurology, Michael Reese Hospital, Chicago, IL 60616

* This work has been supported in part by the AMOCO Foundation through a grant to the Neurology Department at Michael Reese Hospital.

ABSTRACT

We are building a text generation module for a decision support system designed to assist physicians in the management of stroke. This module produces multi-paragraph reports on stroke cases stored in the Stroke Data Base or on cases being processed by the decision support system. Analysis of human-generated case reports using Sager's Linguistic String Parser (LSP) led to a characterization of the stroke sublanguage in terms of four components: a Text Grammar for stroke case reports, a set of Stroke Information Formats, a Relational Lexicon for the stroke sublanguage, and a Linguistic String Grammar for this sublanguage. At this point, we have produced free text by using reverse transformations from our LSP grammar to combine fragments into sentences. Our future goal lies in discovering how to generate good paragraphs, using these components as tools.

I INTRODUCTION

Our exhaustive study of stroke case reports has revealed essential information about the stroke sublanguage. Based on this study we have written a Linguistic String Grammar for the stroke sublanguage and a stroke lexicon containing about 3560 entries. We have also developed a set of eleven stroke information formats, which describe the conceptual structures that turn up repeatedly in our reports. As sentences are analyzed with the Linguistic String Parser, they are broken down into elementary assertions, which are then stored in these formats. Inverse transformations from the same grammar are used to combine simple sentences into complex ones in the generation process. In addition we have developed a text grammar which accounts for many of the salient facts about the structure of case reports and which serves as the basis for guiding the process of text generation. In the following paragraphs, we will briefly describe the stroke sublanguage in terms of four components: a Text Grammar for stroke case reports, a set of Stroke Information Formats, a Relational Lexicon for the stroke sublanguage, and a Linguistic String Grammar for this sublanguage.

II THE STROKE LEXICON

The best and most direct way to gain information about the stroke sublanguage is to analyze handwritten case reports generated by physicians. This analysis is the basis of not only the lexicon and the grammar of the stroke sublanguage, but also the semantic classes and the discourse structures, as well as the relations between those classes and attributes. Our strategy was: first, to generate vocabulary lists and KWIC (Key Word In Context) indices for the texts; secondly, to study the KWIC indices to find which words are associated with each other, and to identify lexical-semantic relationships between words. Further steps are then taken to generate word-relation-word triples incorporating the results of the previous steps to record lexical and semantic relationships between words. One main object of this analysis is to create a Relational Lexicon [Ahlswede and Evens, 1983] containing all the words that might be used in a stroke report.
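The first step of this strategy is easy to mechanize. The sketch below is our own minimal illustration, not the project's actual tooling: it builds a KWIC index that maps each word to the contexts in which it occurs.

from collections import defaultdict

def kwic_index(sentences, window=3):
    """Map each word to the (left-context, right-context) pairs it appears in."""
    index = defaultdict(list)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            index[word].append((left, right))
    return index

reports = ["Patient admitted for right sided weakness",
           "CT scan showed right temporal lobe hemorrhage"]
for context in kwic_index(reports)["right"]:
    print(context)

Studying the contexts collected under each headword is what suggests the word-relation-word triples described above.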
The Relational Lexicon for the Stroke Sublanguage contains both information about words and information about that part of the world we are trying to describe, mainly the anatomy and physiology of the brain. The lexicon is structured as a large network of words connected by arcs representing the relations between them, such as synonymy, taxonomy, part-whole, or relative spatial orientations. The most familiar example of a lexical-semantic relation is probably synonymy, as in

    anosognosia SYN denial

Some equally important though less familiar relations are taxonomy, the "is-a-kind-of" relation, as in

    carotid TAX artery

meaning that the carotid "is a kind of" artery, and the part-whole relation, as in

    ventricle PART heart

signifying that a ventricle is a part of the heart. Many other relations appear in less explicit form in ordinary English. The relations LEFT, RIGHT, ABOVE, BELOW, IN-FRONT-OF, and IN-BACK-OF are useful in describing anatomy. The CAUSE relation is particularly useful in explaining reasoning from anatomy to symptoms, as in

    lesion of left occipital lobe CAUSE alexia

We have devised a Relational Lexicon for this sublanguage to record lexical and semantic relationships between words, in the hope of increasing the cohesion of the generated text.

We have also included a number of fixed phrases that occur repeatedly in the stroke case reports, such as "visual field", "CT scan", "nursing home", "left sided weakness", and "right sided weakness". Frequently these are phrases which are most conveniently treated as if they were single words. The notion of a phrasal lexicon was suggested by Becker, who proposes that people generate utterances "mostly by stitching together swatches of text that they have heard before." [Becker, 1975, p. 63] Since the goal of this research is not only to analyze medical case reports but to generate them automatically, the notion of the phrasal lexicon is adopted to facilitate both the parsing and the generation processes. The stroke lexicon, thus, contains not just single words but a number of multi-word phrases that physicians seem to manipulate as a single unit.
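At its core, such a lexicon reduces to a store of word-relation-word triples with arc-following queries. The following is a minimal sketch of ours, not the system itself; the real lexicon held about 3560 entries and richer structure, and phrasal entries are simply triples whose terms are multi-word strings.

TRIPLES = [
    ("anosognosia", "SYN", "denial"),
    ("carotid", "TAX", "artery"),
    ("ventricle", "PART", "heart"),
    ("lesion of left occipital lobe", "CAUSE", "alexia"),
]

def related(word, relation):
    """Follow arcs labeled `relation` out of `word` in the lexical network."""
    return [obj for subj, rel, obj in TRIPLES
            if subj == word and rel == relation]

print(related("carotid", "TAX"))   # ['artery']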
III THE LSP GRAMMAR FOR THE STROKE SUBLANGUAGE

The next step in analyzing the stroke reports was to parse them using the LSP. This step served several valuable purposes. First, there is a very close relationship between "parsing grammars" and "generation grammars"; the development of a parsing grammar thus taught us much of what we needed to know to develop the generation grammar. Secondly, the parsed reports exhibit information about the syntactic and even semantic contexts of words and phrases in a systematic way that is not easy to achieve with unparsed text. Consequently, this greatly simplified the production of the relational lexicon and the text grammar.

The medical sublanguage, we found, deviates from standard English in a number of ways. Neurologists use a number of terms which are not current in ordinary language, and others which are current but which have special meanings in medical contexts. It is full of incomplete sentences. Often the subject is omitted, generally when it is understood to be the patient. Abbreviations are frequent. One typical report begins, "This 47 YO BF was admitted 25 August, 1983 for right sided weakness." "YO" is short for "year old" and "BF" for "black female." As in this example, prepositions are frequently omitted. So are more major parts of speech, particularly verbs, as in, "Lanoxin the only medication." Much of the time the text becomes merely a string of noun phrases: "No CT scan." "No medical rx, as was intolerant to ASA."

A Linguistic String Grammar for the stroke sublanguage has been developed after studying a number of human-generated stroke case reports. The grammar is initially based on Sager's intermediate grammar. By iterative revision, this grammar has been adjusted for the stroke medical texts. Approximately 64 subclasses of the major word classes are currently recognized in the grammar. Special word classes for categories like medications and operative procedures facilitate parsing with our Linguistic String Grammar for stroke reports and help to ensure that sentences are semantically as well as syntactically well-formed. Recently, we have converted this parsing grammar into a generation grammar in order to achieve our goal of generating medical case reports automatically, using techniques suggested by Grishman [1979].

IV STROKE INFORMATION FORMATS

We started with actual case reports and followed the techniques described by Friedman [1983] in developing our information formats for stroke. Each sentence of a case report is eventually categorized into a number of elementary assertions or fragmentary assertions, called information formats. Eventually, we identified 11 information formats for stroke reports, as shown below:

Format 0: Identification Data
Format 1: Admission Data
Format 2: Chief Complaint
Format 3: Onset of Deficit
Format 4: Past Medical History
Format 5: Physical Examination Results
Format 6: Test Results
Format 7: Final Diagnosis
Format 8: Treatment with Drugs
Format 9: Treatment with Operative Procedure
Format 10: Discharge Information

Because of the way in which the formats are constructed, there is a close correspondence between word class membership and format column. Once the information formats are constructed on the basis of an analysis of a sample set of case reports, subsequent documents of the same type can be automatically mapped into them. We thus obtain a structured form of the information that is suitable for computerized data processing. Each column or field in an information format contains words or phrases that carry the same kind of information in the texts; each format has certain fixed fields and some of the fields have subfields. Often, formats are connected to each other by conjunctions, prepositions, or other relations. For example, many reports begin with a statement about the patient's admission to the hospital, e.g., "Patient 137 is a 47 year old right-handed black woman admitted for a stroke with mild left-sided weakness." It seems to us that this sentence is a combination of Format-0, Format-1, and Format-2. Since information formats can be considered as a kind of semantic representation for specialized information, this inspired us to use information formats as a base from which to generate sentences.
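To make the correspondence between format fields and text concrete, the sketch below renders Format 0 as a record with fixed fields. The field names follow Figure 2 later in this paper; the mapping function and the database-row shape are our invention.

from dataclasses import dataclass

@dataclass
class Format0:                 # Format 0: Identification Data
    reg_no: int
    age: int
    handedness: str            # RIGHT or LEFT
    race: str
    sex: str

def fill_format0(db_row):
    """Map one database row into the fixed fields of Format 0."""
    return Format0(reg_no=db_row["reg_no"], age=db_row["age"],
                   handedness=db_row["handedness"],
                   race=db_row["race"], sex=db_row["sex"])

print(fill_format0({"reg_no": 423, "age": 47, "handedness": "RIGHT",
                    "race": "BLACK", "sex": "FEMALE"}))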
V THE TEXT GRAMMAR FOR CASE REPORTS

A careful analysis of case reports will not only disclose the grammar of this sublanguage, but also reveal essential knowledge of the text structure. Although our collection of stroke case reports shows some wide variations in syntax, the topics reported and the order in which they appear are highly constrained. This has enabled us to devise a text grammar which fits most of our reports very closely and represents the discourse structure of the case reports.

Just as information formats can be thought of as expressing the sentence structures of case reports, so can the text grammar be thought of as the internal discourse structure of case reports. It is clear that simple sentences are not the highest level of structured linguistic input. Sentences themselves can serve as arguments for higher level organization. In this module, we have developed a text grammar which accounts for many of the salient facts about the structure of case reports and which serves as the basis for guiding the process of text generation. The primary functions of the text grammar are to select information formats and to organize the text content according to the information format. Thus, it will produce an ordered list of the information formats to specify what information is to be talked about first, what next, and so forth in an appropriate way.

Figure 1 shows the text grammar that we have developed for our stroke case reports. The paragraph level organization is almost fixed. The first paragraph identifies the patient and describes the chief complaint and the evolution of the deficits. The second paragraph gives relevant information about the patient's past medical history. The next two paragraphs then report the physical examination and detail the tests performed. Paragraph five shows the result of the final clinical diagnosis, which includes the category of the disease, the areas and vessels involved, and the underlying mechanism. If there is more than one diagnosis derived by the decision support system, all alternatives will be listed. The last paragraph states the hospital medication received, and the final outcome, which includes the patient's discharge or autopsy information. Although preset paragraph boundaries are embedded in the formulas, they can be dynamically modified depending on the presence of certain symptoms. If the patient, for example, has gone through several operative procedures in the hospital, these will be grouped together in an additional separate paragraph. In this way, a comparatively smooth text can be generated.

VI GENERATION WITH THE LSP

The techniques used in generating free text are based on the Linguistic String Parser [Sager, 1981]. The LSP grammar has two principal components: a BNF grammar and a set of restrictions. The context-free grammar associates with each input sentence a set of parse trees. Restrictions have many functions; one is to state conditions on a parse tree that must be met in order for the tree to be accepted as a correct analysis of the input sentence. These restrictions are used to express detailed well-formedness constraints that are not conveniently statable in the context-free component. In addition, the restriction component contains a number of transformations that decompose a complex sentence into two or more simpler sentences. For instance, the sentence "An echocardiogram showed atrial myxoma and mitral valve lesion." is decomposed into "An echocardiogram showed atrial myxoma." and "An echocardiogram showed mitral valve lesion."
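The echocardiogram example amounts to distributing a conjoined object over the verb; run in the other direction, the same operation conjoins elementary assertions. A string-level sketch of ours, assuming simple subject-verb-object fragments rather than the LSP's parse trees:

def decompose(subject, verb, objects):
    """Split a sentence with a conjoined object into elementary assertions."""
    return [f"{subject} {verb} {obj}." for obj in objects]

def recompose(subject, verb, objects):
    """Reverse transformation: conjoin elementary assertions again."""
    return f"{subject} {verb} {' and '.join(objects)}."

objs = ["atrial myxoma", "mitral valve lesion"]
print(decompose("An echocardiogram", "showed", objs))
print(recompose("An echocardiogram", "showed", objs))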
We have taken our Linguistic String Grammar for the Stroke Sublanguage and reversed the transformations using the techniques suggested by Grishman [1979]. A major component of our text generation module is a set of reverse transformational rules derived from our LSP grammar for the stroke sublanguage. The reverse transformational rules consist of a set of aggregation rules and a set of syntactic, semantic, and rhetorical constraints. Both sets of rules function in cooperation to add or delete words from a sentence, reorder the words of a sentence, or combine two sentences to form a larger sentence. We use both simple transformations, which convert a sentence from one form to another, and complex transformations, which combine two sentences to form a third. Deletion, substitution, and adjunction are simple transformations, which can be thought of as single-sentence transformations. Embedding and conjoining are complex transformations which combine sentences. They can be recursively applied to generate even more complex sentences. The function of an embedding transformation is to take material from a subordinate clause and make it part of the main clause. Conjoining transformations link two coordinate sentences by using conjunctions. In the example below, two sentences, S1 and S2, are merged by using an embedding transformation.

S1: THE PATIENT IS A WOMAN.
S2: THE PATIENT IS BLACK.

[Relative Clause Transformation]
$T-TRANSFM-1 = IF $1 THEN ALL OF $P1, $P2, $P3.
$1 = IF VALUE X1 OF SUBJECT OF ASSERTION OF X9 IS NOT EMPTY
     THEN VALUE OF SUBJECT OF ASSERTION OF X5 IS X1.
$P1 = EITHER IF X1 HAS ATTRIBUTE NHUMAN
             THEN REPLACE X4 BY SUBJECT X4 OF ASSERTION OF X9 ('WHO')
      OR IF X1 HAS ATTRIBUTE NONHUMAN
             THEN REPLACE X4 BY SUBJECT X4 OF ASSERTION OF X9 ('WHICH').
$P2 = REPLACE X7 BY RN X7 OF LNR OF NSTG OF SUBJECT OF ASSERTION OF X5
      (ASSERTION OF X9).
$P3 = BOTH DELETE X9 AND $T-TRANSFM-3.

This is a simplified transformational rule for relative clauses. To perform this transformation, the system first checks whether the subjects in both sentences are identical. Since this is a global transformational rule, registers X5 and X9 are used to stand for these two sentences. Once the requirements are satisfied, three operations, $P1, $P2, and $P3, are performed in sequence. Starting with $P1, the system further checks the attributes of this identical subject. If the attribute "NHUMAN", which means the subject is a human being, is found, the system replaces the subject of sentence X9 by the relative pronoun WHO. Otherwise, if the attribute "NONHUMAN" is found, the relative pronoun WHICH is introduced. Many different sentences can therefore be generated by this rule: "THE PATIENT WHO ...", "DIABETES WHICH ...". In $P2, the system copies the modified tree structure of X9 and adjoins it immediately after the subject of X5. Finally, the structure tree of X9 is deleted from its original place and we have the sentence "THE PATIENT WHO IS BLACK IS A WOMAN". The transformational rule $T-TRANSFM-3, mentioned in $P3, will further transform "THE PATIENT WHO IS BLACK IS A WOMAN" into "THE PATIENT IS A BLACK WOMAN". Further details can be found in [Li et al., 1985].
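The effect of $T-TRANSFM-1 followed by $T-TRANSFM-3 can be imitated over flat strings. In the sketch below, a toy of ours, strings stand in for the parse trees held in registers X5 and X9 and a small set stands in for the NHUMAN attribute; subjects are checked for identity, S2 is embedded as a WHO/WHICH clause, and the clause is then reduced to a prenominal adjective.

HUMAN_SUBJECTS = {"THE PATIENT"}   # stands in for the NHUMAN attribute

def split(sentence):
    subject, predicate = sentence.rstrip(".").split(" IS ", 1)
    return subject, predicate

def embed(s1, s2):
    """Relative-clause transformation: embed s2 into s1 if subjects match."""
    subj1, pred1 = split(s1)
    subj2, pred2 = split(s2)
    if subj1 != subj2:
        return None
    pronoun = "WHO" if subj1 in HUMAN_SUBJECTS else "WHICH"
    return f"{subj1} {pronoun} IS {pred2} IS {pred1}."

def reduce_clause(sentence, pronoun="WHO"):
    """Analogue of $T-TRANSFM-3: 'X WHO IS ADJ IS A N' -> 'X IS A ADJ N'."""
    subj, rest = sentence.rstrip(".").split(f" {pronoun} IS ", 1)
    adj, pred = rest.split(" IS ", 1)
    article, noun = pred.split(" ", 1)
    return f"{subj} IS {article} {adj} {noun}."

s = embed("THE PATIENT IS A WOMAN.", "THE PATIENT IS BLACK.")
print(s)                  # THE PATIENT WHO IS BLACK IS A WOMAN.
print(reduce_clause(s))   # THE PATIENT IS A BLACK WOMAN.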
VII OUR TEXT GENERATION MODULE

The system consists of four components: the Text Structure module, the Information Format module, the Transformation module, and the LSP module. Data from the database is transformed as it flows from one module to the next. In order to manage these components, we have also developed a top-level driver. The top-level driver contains the control information that determines the order in which the components are activated. This monitor also serves as a simple user interface, displaying messages and asking for commands.

The text grammar of the stroke sublanguage has been implemented and merged into the Text Structure module, which can produce an ordered list of information formats to organize the text content. The Information Format module contains the 11 information formats and the Information Extraction unit. The main tasks of the Information Extraction unit are to infer the numeric data from the database and to map these data into a simple sentence fragment or a series of sentence fragments. Within each information format, there is a set of embedded ordering rules which can organize the information at the sentential level. The Transformation module contains a set of reverse transformational rules. These rules are used to compose a sentence by integrating information from several information formats. The choice of the appropriate transformations is based on the types of sentence fragments available. The LSP module contains Sager's Linguistic String Parser.

The following simplified example may be helpful. Initially the system displays welcome messages and asks the user to enter the report number, that is, the patient's registration number. Control is then passed to the Text Structure module. The topic of the first paragraph is "InitInfo" (Initial-Information), which consists of "PtInfo" (Patient-Information) and "DfctInfo" (Deficit-Information). The Text Structure module then first produces and passes an ordered list of information formats for Patient-Information to the Information Format module. Upon receiving this list, the specified formats are activated and the Information Extraction unit then extracts the desired information from the database and maps it into the appropriate format slots. In Figure 2, Format-0 contains the patient's identification information, which includes the patient's registration number, age, handedness (right or left), race (white, black, or oriental), and sex. The information existing in each slot can initially be expressed by a simple primary sentence.

Format 0: Identification Data

| Patient | Reg-No | Age | Handedness | Race  | Sex    |
|---------|--------|-----|------------|-------|--------|
| PATIENT | 423    | 47  | RIGHT      | BLACK | FEMALE |

S1. THE PATIENT'S NUMBER IS 423.
S2. THE PATIENT IS 47 YEARS OLD.
S3. THE PATIENT IS RIGHT-HANDED.
S4. THE PATIENT IS BLACK.
S5. THE PATIENT IS A WOMAN.

Figure 2. Format-0 and Simple Sentences

These sentences are then parsed by the LSP one at a time. The embedding rule of this format specifies APPOSITION(S1, EMBEDDING(S2, EMBEDDING(S3, EMBEDDING(S4, S5)))) as the suggested transformation order. Therefore, the relative clause transformation and the apposition transformation are recursively performed by the Transformation module. We finally obtain the complex sentence "THE PATIENT 423 IS A 47 YEAR OLD RIGHT-HANDED BLACK WOMAN." The linguistic string analysis presented here thus gives us a method of constructing well-formed sentences from certain sentence fragments. Figure 3 shows an example generated by our system.

Clearly we have only begun to explore the possibilities of reverse transformations. Sager's Restriction Language [1981] makes it easy to write and experiment with other transformations. Further study of complex objects, adverbs, and conjunctions will reveal methods of generating a richer set of sentence level structures.
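The control flow just described can be summarized as a pipeline. The following schematic is ours, with invented function names and stubbed data, showing only how output flows from the Text Structure module through the Information Format and Transformation modules:

def text_structure(reg_no):
    # Ordered topic list produced by the text grammar (schematic).
    return ["PtInfo", "DfctInfo"]

def activate_formats(topic, reg_no):
    # Fill the information formats for this topic from the database (stubbed).
    if topic != "PtInfo":
        return []
    return [{"format": 0, "slots": {"NUMBER": 423, "RACE": "BLACK"}}]

def extract_fragments(fmt):
    # One primary sentence per filled slot.
    return [f"THE PATIENT'S {k} IS {v}." for k, v in fmt["slots"].items()]

def apply_transformations(fragments):
    # Reverse transformations would combine fragments; pass-through here.
    return fragments

def generate_report(reg_no):
    """Schematic top-level driver: data flows from module to module."""
    paragraphs = []
    for topic in text_structure(reg_no):
        fragments = [fr for f in activate_formats(topic, reg_no)
                     for fr in extract_fragments(f)]
        sentences = apply_transformations(fragments)
        if sentences:
            paragraphs.append(" ".join(sentences))
    return "\n\n".join(paragraphs)

print(generate_report(423))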
VIII CONCLUSIONS AND FUTURE GOALS

We have produced text by reversing Linguistic String transformations, but our real interest lies in discovering how to generate good paragraphs, using the Text Grammar, the Stroke Information Formats, the Relational Lexicon, and the Linguistic String Grammar as tools. In the realm of paragraph organization, we are particularly interested in two strategies. Mann's [1981] Fragment-and-Compose paradigm, with its emphasis on building a paragraph from very small linguistic components, is appropriate to our plan of generating text from fragmentary information in information formats and also to the structure of the Relational Lexicon. The other important consideration in generating case reports is deciding what information is to be included in the report and what is to be left out. The salience principle discovered by Conklin and McDonald [1982] in their analysis and synthesis of house descriptions seems to operate as well in medical reports. Within any area the grossest pathology is described first; presumably this is the most salient point from the physician's point of view. Then comes a discussion describing which associated areas are affected.

We want to experiment with more creative ways to use the lexicon. Becker's theory of the phrasal lexicon [1975] tells us that we should be combining long phrases, not just individual words. We hope to improve the cohesion of our paragraphs by using the lexical relationships in our Relational Lexicon.

Even our brief experiments with Mandarin and English case reports [Li and Evens, 1985] have suggested that focus mechanisms work differently in these two languages. We want to experiment with the focusing techniques of McKeown's [1982] work in both languages.

Three aspects of our work seem to have particular theoretical interest: the relational lexicon as a knowledge representation structure, the possibilities of the Linguistic String Parser in text generation, and the little-understood problem of text generation at the paragraph level.

REFERENCES

[1] Ahlswede, T. and Evens, M. 1983. "Generating a Relational Lexicon from a Machine-Readable Dictionary," Workshop on Machine-Readable Dictionaries, SRI, April.
[2] Becker, J. 1975. "The Phrasal Lexicon," in Theoretical Issues in Natural Language Processing, eds. R. Schank and B. Nash-Webber, Cambridge, June, 60-63.
[3] Conklin, E.J., and McDonald, D.D. 1982. "Salience: the Key to the Selection Problem in Natural Language Generation," Proc. 20th Annual Meeting of the Association for Computational Linguistics, 129-135.
[4] Friedman, C., Sager, N., Chi, E., Marsh, E., Christenson, C., and Lyman, M. 1983. "Computer Structuring of Free-Text Patient Data," Proc. Seventh Annual Symposium on Computer Applications in Medical Care, IEEE, Washington, D.C., October 23-26, 688-691.
[5] Grishman, R. 1979. "Response Generation in Question Answering Systems," Proc. 17th Annual Meeting of the Association for Computational Linguistics, 99-101.
[6] Li, P.Y., Ahlswede, T., Evens, M., Curt, C., and Hier, D. 1985. "A Text Generation Module for a Decision Support System for Stroke," Proc. of the Conference on Intelligent Systems and Machines, Oakland University, Rochester, MI, April.
[7] Mann, W. and Moore, J. 1981. "Computer Generation of Multiparagraph English Text," American Journal of Computational Linguistics, Vol. 7, No. 2, 17-29.
[8] McKeown, Kathleen R. 1982. Generating Natural Language Text in Response to Questions about Database Structure. Ph.D. Dissertation, U. Penn.
[9] Sager, N. 1981. Natural Language Information Processing: A Computer Grammar of English, Addison-Wesley, Reading, MA.

Case_Report %%= Init_Info + Md_Hstry + Phy_Exam + Lab_Tst + Fin_Dex + Outcome
Init_Info %%= Pt_Info + Dfct_Evoltn
Pt_Info %%= Reg_No + Age + Hndnes + Race + Sex + Admson + Chf_Complnt
Admson %%= Disease + (Admson_Dat | Null)
Dfct_Evoltn %%= Onset_Activity + Dfct_Prgrs + Dfct_Symptoms
Dfct_Symptoms %%= Headache + Cnscius_Impair + Vomit + Seizure
Md_Hstry %%= Hstry_Stroke + Hstry_TIA + Hstry_Cardiac + Othr_Arhythm + OthrMd_Hstry
OthrMd_Hstry %%= Hypertension + Diabetes + Coagulopathy + Systemic_Emboli + Arteriosclerosis
Hstry_Stroke %%= No_Strke + Typ_Strke + Year_Strke
Hstry_TIA %%= No_TIA + Typ_TIA + TIA_Territory
Hstry_Cardiac %%= Cardiomegaly + Heart_Disease + Atrial_Fbrlatn + Valvular_Lesion
Phy_Exam %%= General_Exm + Hghr_Cortcl_Exm + Cranial_Exm + Motor_Exm + Sensory_Exm + Cerebellar_Exm
Lab_Tst %%= Echocardiogram + Lumbar_Puncture + Angiography + Brain_Scan_Flw_Stdy + Brain_Scan_Statc_Stdy + Cerebral_Blood_Flw + E.E.G. + Complication + Phonoangiography + Oculoplesthymography + Doppler_Stdy + Cholesterol_Lvl + CT_Scan
Fin_Dex %%= Strke_Category + Vessel_Involved + Area_Involved + Mechanism
Outcome %%= Medication + Dschge_Plan
Medication %%= (Med_Drug | Null) + (Med_Surgical | Null)

Figure 1. The Text Grammar for Stroke Case Reports

Michael Reese Hospital Stroke Service Report

Patient 165 is a 39 year-old right handed white woman admitted for a stroke with a moderate headache. The deficit came on when she got up in the middle of the night. It was maximal at onset. At the onset of the deficit, there was a moderate headache, a gradual onset of obtundation, and vomiting within the first 12 hours, but no seizure activity.

Past medical history revealed no stroke, TIA, or cardiac disease. There was no evidence that she had systemic emboli or arteriosclerosis.

Examination revealed a lethargic woman with blood pressure of 105/70. There was stiff neck but no carotid bruit. Mental status is normal. Cranial nerve testing showed right ptosis, right Horner's syndrome, and 3rd nerve palsy of right side.

Lumbar puncture showed that CSF was bloody, a CSF xanthochromia 3/10, and a CSF protein 255. A CT scan showed the right ventricular space, and meningocerebral hemorrhage into the right temporal lobe in an area appropriate to the neurologic deficit. The E.E.G. was normal. An angiogram of both carotids showed a saccular aneurysm of the right posterior communicating artery. There were no complications of the angiography.

The final clinical diagnosis was subarachnoid hemorrhage. Another possibility was cerebral infarction. The most likely area involved by stroke was the right subarachnoid space. Another possibility was the right temporal lobe. The most likely vessel involved in the stroke was the right posterior communicating artery. The most likely mechanism underlying the stroke was hemorrhage caused by aneurysm.

She died due to stroke but no autopsy was performed.
Figure 3. A Sample Output of the Stroke Case Report Generator
A Relational Representation of Modification

Samuel Bayer
The MITRE Corporation
1 Burlington Road
Bedford, MA 01730
Mail Stop A045

Abstract

The KING KONG parser being developed at The MITRE Corporation combines an argument-structure shorthand with recent work on the relationship between spatial and non-spatial sets of relations and a relational model of abstract relations to produce a robust approach to modifier constructions.

I. Introduction

One of the overlooked problems in natural language processing is the representation of abstract relations like LENGTH, DURATION, and DISTANCE and the definition of words which refer to them. The natural language group at The MITRE Corporation, in the course of designing a portable, extensible natural language interface for expert systems, has drawn strategies from work in three areas to implement a robust and easily extensible approach to comprehending such terms. This paper will discuss these three strategies and the motivations behind them, and then provide an example of their cooperation.

II. Relational Approach to Attributes

Crucial to our design is a relational approach to abstract relations, as opposed to some sort of attribute-value representation. The latter is present in the parsing strategy of DYPAR and its descendants and in the knowledge representations KL-ONE and NIKL in the form of ROLES. The problem with this sort of approach is the restriction it places on the nature of its relations: as pointed out in [Vilain85], ROLES correspond semantically to two-place relations (one-place predicates). However, while this accounts for attributes like LENGTH quite nicely, it does not, in its simplest form, permit the description of three-place relations like DISTANCE. The attribute-value approach can certainly be augmented to handle predicates of more than one argument; two common alternatives are coercing such predicates into combinations of predicates of one argument (perhaps by allowing the values of attributes to be predicates themselves), or adding some primitive representation of predicates of three or more arguments. However, as [Woods75] points out, it is far from clear in the former strategy that all predicates of more than one argument can be broken down into predicates of just one argument in a conceptually satisfying way. The latter strategy, as Woods states, might amount to a reevaluation of the ontological status of relation statements. In an attribute-value representation the relation is represented by a link between two nodes; Woods introduces the work of Fillmore and his notion of case, and notes that, in a case representation of events, "Instead of the assertion of a fact being carried by a link between two nodes, the asserted fact is itself a node (p. 229)." KL-ONE, which already has ROLES as nodes rather than links, takes this latter approach; however, since the semantics of ROLES restricts them to two-place relations, KL-ONE requires a separate mechanism as well. We find such an account to be formally divisive. Since an additional mechanism in which assertions about abstract relations like DISTANCE exist as nodes instead of links seems motivated, why not use it for all assertions of this type, rather than just those which do not adhere to the two-argument restriction?
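The access argument can be made concrete: when each asserted fact is its own tuple, any argument position is queried the same way, and three-place relations like DISTANCE need no special mechanism. A minimal sketch of ours, not KING KONG's internals; the facts and values are invented.

FACTS = [
    ("LENGTH", {"topic": "runway-1", "span": "2000 m"}),
    ("DISTANCE", {"origin": "Dresden", "destination": "Hahn", "span": "great"}),
]

def query(relation, **known):
    """Return facts of `relation` matching any mix of argument bindings."""
    return [args for rel, args in FACTS
            if rel == relation
            and all(args.get(k) == v for k, v in known.items())]

print(query("LENGTH", topic="runway-1"))       # access by the object measured
print(query("DISTANCE", destination="Hahn"))   # access by any other argument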
Our goal of developing an interface which can be ported to many target systems makes this choice even easier; we must be concerned with access rather than organization, and one of the major advantages of a relational representation over an attribute-value representation is its uniform access of data. Since an attribute-value representation has the arguments of its relations in distinguished, non-parallel positions, queries in such a representation must be handled one way in questions involving the value of an attribute and another way in questions involving the object which bears the attribute. A general relational scheme, on the other hand, allows any argument to be accessed with equal ease. The processing required by the backend might vary widely, but the role of the interface, as I said, is that of access, not organization.

The one drawback to an approach like this, where relations belong to an ontological category distinct from objects and events, is that it is not conceptually object-based, as opposed to an attribute-value representation such as a nominal case-frame, which is. Indisputably, there are many situations in which an attribute must be tightly bound to an object, but a relational representation does not preclude the expression of such a connection; it is merely silent about it. An attribute-value representation, without augmentation, on the other hand, actively precludes the expression of more complex relationships; and an attribute-value representation with a conceptually adequate augmentation amounts to the addition of a relational mechanism, which renders the attribute-value mechanism redundant.

III. Shorthand for Representation of Argument Structure

Once we have settled on this relational representation, we need to extract the arguments of these relations from linguistic structures. In addition, we would like the method of extraction to embody generalizations about the possible ways these arguments can fit together. We have implemented a linguistically motivated shorthand for argument positions which captures just such generalizations. In general, there seem to be three classes of expressions which associate relations with objects:

(a) expressions of predication: The runway is long.
(b) expressions of attribution where the object is the head: the long runway
(c) expressions of attribution where the attribute is the head: the length of the runway; the runway's length

While these phrases denote different things, the connection between runway and the notion of length is the same in all of them. Ideally, our statement of how long relates to the LENGTH relation will be the same in both (a) and (b), and should extend trivially to the relation between length and LENGTH in (c). Not only is the semantic similarity between these two cases great, but their syntactic behavior can be quite similar at times as well, since nouns like length and distance can at times function in constructions like (a) as well as (c); compare, for example, Dresden is far from Hahn and Dresden is a great distance from Hahn.

In order to capture these patterns, we use the following shorthand. The possessive position, for example, in the runway's length in (c) corresponds to the THEME (or OBJ, in the terms of Schank) in The runway is long (case (a)), as far as the roles of the relation LENGTH are concerned.
In most circumstances, this position is the same as the head in object-centered NPs (case (b)). As a mnemonic for this position we use the symbol POSS-OBJ, combining possession with the semantic OBJ position.* The POSS-OBJ argument location looks at the relation from the point of view of the word or phrase which denotes the relation; the argument in POSS-OBJ is either the semantic OBJ of the relation word or the POSSessor. The other shorthand argument location, PRED-MOD, looks at the relation from the point of view of one of the arguments. The PRED-MOD position corresponds to the value of the relation, either designated by the word that designates the relation (as in far in Dresden is far from Hahn; this is the PRED position), or by its modifiers (as in great in Dresden's great distance from Hahn; this is the MOD position). Once again, case (b) usually behaves like the others (but see previous note).

*However, there are circumstances in which the head NP does not correspond to the position of POSS-OBJ: the behavior of the notion of ACCEPTABILITY differs between predicative and non-predicative constructions. The airbase is acceptable as a target and the airbase's acceptability as a target both have the object in POSS-OBJ position and the role it is to play in the as PP; however, the acceptable targets has the role in head noun position, the position that usually reduces to POSS-OBJ. This difficulty, I feel, is most likely a subproblem of the general issue of representing adjectives such as late whose predicational and attributive meanings differ radically.

With the use of this argument shorthand, we can capture the meaning of designators of length, for example, by saying that the object measured is in the POSS-OBJ position and the value of the measurement is in PRED-MOD for the relation LENGTH. The only difference between the meaning of length and the meaning of short or long is that the latter two have scalar designations to fix their value in the relation. Long is designated as :GREAT, while short is designated as :SMALL.** This generality allows us to handle examples like Dresden is a great distance from Hahn as hoped, since once we recognize that there is an attribute designator in PRED position and a potential argument in OBJ position, we can handle it in the same way as we handle Dresden is far from Hahn, since their "meanings" are quite similar.

**These degree designations will become much more sophisticated with time, but we intend to resist specifying values.
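The shorthand can be read as a small table from surface analyses to the two argument locations. A sketch of ours, where the construction records are invented stand-ins for parser output, normalizing all three construction types onto the same LENGTH arguments:

def to_shorthand(construction):
    """Map (a)/(b)/(c)-style analyses onto POSS-OBJ and PRED-MOD."""
    kind = construction["type"]
    if kind == "predication":            # (a) "The runway is long."
        return {"POSS-OBJ": construction["subject"],
                "PRED-MOD": construction["predicate"]}
    if kind == "attribution-object-head":     # (b) "the long runway"
        return {"POSS-OBJ": construction["head"],
                "PRED-MOD": construction["modifier"]}
    if kind == "attribution-attribute-head":  # (c) "the runway's great length"
        return {"POSS-OBJ": construction["possessor"],
                "PRED-MOD": construction.get("modifier")}

for c in [{"type": "predication", "subject": "runway", "predicate": "long"},
          {"type": "attribution-object-head", "head": "runway",
           "modifier": "long"},
          {"type": "attribution-attribute-head", "possessor": "runway",
           "modifier": "great"}]:
    print(to_shorthand(c))   # same POSS-OBJ in every case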
IV. Generalizations among spatial and non-spatial fields

We have argued for a relational approach to abstract relations in an interface and have shown how such an approach can combine with a thematic representation of argument structure to express the relationships in meaning between semantically related words. We can simplify our representation of relation words even further, by generalizing between senses which are related in a coherent way. This research is based on the recent work of Jackendoff, recasting older work of [Gruber65]. The problem is to relate the meanings of long in the long runway and in the long meeting. Jackendoff begins with a detailed analysis of the former and arguably more complex class of relations, that is, spatial relations, concentrating on the relations denoted by prepositions. Jackendoff distinguishes between the notion of PLACE and the notion of PATH, the former exemplified by the phrase in the room in John is in the room and the latter by the phrase into the room in John ran into the room. Jackendoff recognizes three different types of PATHS: bounded paths, directions, and routes. From and to typically designate bounded paths, in which the argument is an endpoint of the movement. These contrast with directions, designated by toward and some uses of from, where the argument is in the direction of the motion but not necessarily reached. The third type, routes, presents the argument as some point along the path. By, in the man ran by the river, demonstrates this function.

However, these PATHS and PLACES can exist not only in space. At night is a PLACE in time, while towards sunrise is a temporal direction. The great insight of Gruber, Jackendoff notes, is that the meaning of these path and place functions represented by prepositions can be parametrized, in general, by the ontological category of their argument, and that those non-spatial expressions that result will be a subset of the possible spatial expressions. So while the preposition at converts a THING into a spatial PLACE, it converts a TIME into a temporal PLACE. Similarly, the preposition to, which produces a spatial PATH out of a THING, produces a temporal PATH out of a TIME, and a possessive PATH out of a THING in proper contexts: I gave a book to my cousin.* The meanings of prepositions, then, are not relations per se but a group of relations, differentiated in part by the ontological category of their argument. Such an analysis can be extended to an adjective such as long in an analogous way, substituting the notion of EXTENT for the notions of POSITION, DIRECTION-TOWARD, DIRECTION-FROM and the like that are active for prepositions.

*This last example demonstrates that there are interrelationships between verbs and these functions; while Jackendoff discusses these interrelationships extensively, we will not investigate their utility here.

Our implementation of Jackendoff's and Gruber's ideas recognizes relation TYPES, which are basically the groups of relations alluded to above, and relation FAMILIES, the ontological parameters which interact with the TYPES to determine the actual relation involved. We currently recognize such TYPES as POSITION, EXTENT, INTERVAL, ORIGIN, and DESTINATION (leaving aside for the moment Jackendoff's distinction between bounded paths and directions), and such FAMILIES as SPACE, TIME, and POSSESSION. The definitions of long and length designate the EXTENT type, with the family left undetermined. Similarly, the definition of at specifies the POSITION type, with the family left similarly unspecified.**

**More than one relation may be present at the type-family intersection; thus EXTENT of SPACE may be SPACE-LENGTH, WIDTH, or ALTITUDE. One relation is the default, selected by those words like length which specify only a type; others are accessed directly. The definition of altitude, then, while fitting into the type-family matrix, specifies the relation ALTITUDE explicitly.

This explicit account of relationships between relations combines with our argument structure shorthand described above to broaden the possible coverage of a natural language understanding system in some fascinating ways. First, Jackendoff points out that a significant subset of the ordinary semantics of words is inherently metaphorical: that is, a significant number of possible relations between entities is expressed, ultimately, in terms of spatial relations. The metaphorical power embodied in this approach is exploitable by an interface, especially if the analysis is extended to verbs (see fn. 3). An expression like Time flies, for example, could be comprehended by considering the definition of fly, relaxing the conditions on the ontological category of the subject, and finding a corresponding action in the temporal family which embodies the concept of rapid motion. Second, our shorthand argument representation allows us to generalize the meaning of relation-designating words across structures and parts of speech. This suggests that morphological derivation from attribute words can be at times trivial; the -ness and -ly suffixes, which effectively change the part of speech of a word without affecting its meaning in the productive case, can be handled generally without worrying about semantic effects. The longness of the runway, if such a phrase were coined, or even the closeness of the plane to the runway could be handled simply, with a distinction in the morphology and syntax and not in the semantics.

V. Implementation

The KING KONG parser being developed at MITRE implements most of the ideas above. As an illustration, consider the definition and processing of long and length. We will examine three stages: the relation type EXTENT, the structure which connects the relation-designating words to this type (structures which we call accessors),*** and finally the relation itself.

***These structures are attached to the definition of the word, and should probably be part of the original definition instead of being defined separately. This is a problem which we will address later.

Consider first the definition of the type which will be part of the definitions of long and length.****

****The definition of the family is not important for this discussion; it contains information about units relevant to the family (such as MILE for SPACE and HOUR for TIME) and conversions between them.

(def-db-type extent '(topic span)
  :mapping-to-families
  '((((topic . event) (span . time)) . time)
    (((topic . object)) . space)))

Figure 1.

The EXTENT type has an arbitrary ordered argument structure which all relations of this type must have; it maps to families via an a-list of argument positions and ontological restrictions on that argument. The definition of the query accessor associates words with queries or query types and tells the parser how to assemble the argument structure.

(def-query-accessor 3 extent
  '((topic poss-obj) (span pred-mod))
  :simple-designations '(long length short)
  :degree-designations '((long . :great)
                         (short . :small)
                         (great . :great))
  :relation-designations '(((width) . width)
                           ((altitude) . altitude)
                           ((range comrad) . range)
                           ((very great) . size))
  :canonical-accessor-p t)

Figure 2.
The relation designations are those words which map directly to a single relation of type EXTENT; by virtue of this definition width, for example, has an accessor on its frame which specifies the WIDTH relation. * The degree designations are the strength assignments made for the words listed; so long is a GREAT EXTENT. Now consider the question How long is the runway? The parser finds the accessor associated with long and uses the definition of the EXTENT type to determine the family. The runway, in the OBJ position, will be mapped into the TOPIC argument of whatever EXTENT relation is chosen via the mapping (topic pos s-ob j ) . Since runway belongs to the ontological class OBJECT (as determined by a frame-like class hierarch which is part of KING KONG’s declarative model of the domain), the EXTENT type will specify the family as SPACE. At this point, the interface must examine all the relations which are EXTENT of SPACE, since this is all it knows about the relation designated by long; more than one relation lives at this point in the type-family matrix, including LENGTH, WIDTH, and ALTITUDE. We designate one of these relations to be the default relation at its point in the matrix, and this is the one that is chosen: (def-db-relation length (space . extent) ’ (topic span) :default-relation-p t :reply-string** ‘((span “The length of -w is -d”) (topic I’-@w is -w”) ) :match-table ’ ( ( (runway distance) . ( (span . : of -runway) ) ) ) ) Figure 3. *Note that these words are not specified for family; this has proven so far to be unnecessary, but we intend to specify the information later anyway. **The generator. reply string is used in lieu of a natural language Relations are implemented in KING-KONG as flavors. The name of the relation is LENGTH; the second argument places it in the type-family matrix. The next argument is the argument structure it has by virtue of being an EXTENT relation. It is specified as the default relation at this point in the matrix. The match table determines how the information is accessed; this table has a single entry, which says that a TOPIC of class RUNWAY* * * can access information about the SPAN, given all other arguments, by sending the relation the :OF-RUNWAY message. * * * * Once the connection between the occurrence of long in the sentence above and the actual LENGTH relation is made, the analysis of the meaning of this attribute in this context is complete. VI. Conclusions We have demonstrated how a faithful implementation of a synthesis of these three approaches can lead to a simple and elegant account of abstract relations in an interface. We have yet to implement all the aspects of these various strategies; as pointed out in the first footnote, a finer granularity must be established in the argument structure shorthand, and Jackendoff makes distinctions between some path functions which we have yet to recognize. The success we’ve had so far, even without these refinements, testifies to the utility of these ideas; many possibilities that arise with these mechanisms, including the extensions through morphological derivation and metaphorical extension, have yet to be explored. In general, however, we feel that we have developed a powerful and coherent mechanism that can be extended to cover a much wider variety of linguistic phenomena in an insightful way. ***At the moment, the DISTANCE specification for the SPAN argument in this situation is not needed. 
VI. Conclusions

We have demonstrated how a faithful implementation of a synthesis of these three approaches can lead to a simple and elegant account of abstract relations in an interface. We have yet to implement all the aspects of these various strategies; as pointed out in the first footnote, a finer granularity must be established in the argument structure shorthand, and Jackendoff makes distinctions between some path functions which we have yet to recognize. The success we've had so far, even without these refinements, testifies to the utility of these ideas; many possibilities that arise with these mechanisms, including the extensions through morphological derivation and metaphorical extension, have yet to be explored. In general, however, we feel that we have developed a powerful and coherent mechanism that can be extended to cover a much wider variety of linguistic phenomena in an insightful way.

Acknowledgements

I would like to thank the members of the natural language group at The MITRE Corporation, without whom this approach would never have been conceived of, much less developed. This research was funded by the Rome Air Development Center.

References

Brachman, Ronald (1979). "On the Epistemological Status of Semantic Networks," reprinted in Brachman (1985), pp. 192-215.
Brachman, Ronald, ed. (1985). Readings in Knowledge Representation. Morgan Kaufmann: Los Altos, CA.
Brachman, Ronald and James Schmolze (1985). "An Overview of the KL-ONE Knowledge Representation System," Cognitive Science 9.2, pp. 171-216.
Gruber, Jeffrey S. (1965). Studies in Lexical Relations. Indiana University Linguistics Club: Bloomington, IN.
Jackendoff, Ray (1983). Semantics and Cognition. MIT Press: Cambridge, MA.
Vilain, Marc (1985). "The Restricted Language Architecture of a Hybrid Representation System," in Joshi, A., ed., Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Morgan Kaufmann: Los Altos, CA, pp. 547-551.
Woods, William (1975). "What's in a Link: Foundations for Semantic Networks," reprinted in Brachman (1985), pp. 218-241.
Categorial Disambiguation

Gavan Duffy
Department of Government
The University of Texas at Austin
Austin, Texas 78712
ARPAnet: AI.Duffy@R20.UTEXAS.EDU

Abstract

This paper presents an implemented, computationally inexpensive technique for disambiguating categories (parts of speech) by exploiting constraints on possible category combinations. Early resolutions of category ambiguities provide a great deal of leverage, simplifying later resolutions of other types of lexical ambiguity.

1 Introduction

Ambiguities pervade natural language. Many strategies exist for resolving many varieties of ambiguity, including phrasal and clausal attachment ambiguities, anaphor ambiguities, referential ambiguities, and sense ambiguities. Categorial ambiguities arise when the lexical entry for a token (word or morpheme) indicates that the token may be given alternative category (part of speech) assignments, depending upon the context of usage. Few strategies exist for resolving categorial ambiguities, the simplest lexical ambiguity of all. Existing approaches resolve categorial ambiguities either (a) by following all categorial parsing paths until a grammatical path terminates, or (b) as part of the process of resolving sense ambiguities. Both approaches are computationally expensive. This paper presents an alternative approach that has been implemented in a working English-language parser [1]. The categorial disambiguator described here constitutes a simple method for resolving categorial ambiguities before phrase structures are created. As a result, only one parsing path need be followed and any later sense disambiguation is greatly simplified.

2 An Overview of Categorial Disambiguation

The categorial disambiguator assigns the appropriate category (part of speech) to a word whenever more than one such assignment is possible. For example, in sentence (1) below, the words "doctor", "might", "cure", and "patient" are each categorially ambiguous. Both "doctor" and "cure" could be a noun or a verb. "Might" could be a noun or a verb auxiliary. And "patient" could be an adjective or a noun. In (1), only the two instances of the definite article "the" are not categorially ambiguous. Yet, as we shall see, contextual constraints suffice to resolve each of these ambiguities.

(1) The doctor might cure the patient.

When considering a design for the categorial disambiguator, an immediate inspiration was Waltz' [4] constraint-propagation approach for detecting legal junctions in line-drawings. Applying this approach to the detection of legal category combinations in English sentences proved to be quite straightforward.

Prioritized pattern-action rules (CONDS) are specified for each known categorial ambiguity. Approximately 40 such rules are currently in place. Each rule is passed lists of the words and categories preceding and succeeding a categorial ambiguity. Other categorial ambiguities are represented as lists embedded within these preceding and succeeding category lists. Unambiguous categories are represented as symbols in those lists. Additionally, each rule knows the current word and maintains a list of its possible category assignments, which had previously been extracted from a lexicon.

When a disambiguation rule succeeds, it propagates its resolution as a constraint for disambiguating neighboring ambiguities. When a rule fails to resolve the ambiguity, the ambiguous alternatives remain in the category lists. When subsequent disambiguations propagate additional constraint, the disambiguation rule for this ambiguity is re-evaluated.

Ordinarily, disambiguation rules need examine only the categories of a word's sentential neighbors. These usually provide sufficient constraint to select one correct interpretation. Occasionally, however, the words in a sentence must be consulted in addition to their possible category assignments. For example, deciding whether a particular word is an adjective sometimes depends upon knowing whether a preceding verb accepts predicate adjective arguments. For maximal flexibility, disambiguation rules are free to query the lexicon about syntactic and semantic properties (subcategories) of particular words.

3 An Example of Categorial Disambiguation

The disambiguator resolves sentence (1) in the following steps:

1. "The" is unambiguously a determiner.
2. "Doctor" cannot be a verb because it follows the determiner "the". It must therefore be a noun.
3. "Might" might attach to "doctor" genitively, just as "door" sometimes attaches to "car" to form "car door". Since we are working without information regarding sense, we cannot reject this possibility. Thus, "might" remains categorially ambiguous. It is either a noun or a verb auxiliary.
4. "Cure" as a verb is consistent with the interpretation of "might" as a verb auxiliary. If "might" were a noun, then "cure" could not be a verb, since it disagrees in number with "might" as its subject. Of course "doctor might cure" could itself be a genitive formation, much the same as "car door handle". But if this were the case, then the next word, "the", would be anomalous. Two independent noun phrases rarely appear before a verb. "Cure" must therefore be a verb. Since "cure" is a verb, "might" must be a verb auxiliary. Otherwise, the number of "cure" would be inconsistent with the number of either "doctor" or "might" as the clausal subject.
5. "The", again, is unambiguous.
6. "Patient" follows the determiner "the", so it could still be either a noun or an adjective. However, "patient" terminates the current clause, so the analysis of "patient" as a noun is preferred.
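These steps can be followed mechanically by keeping a set of remaining categories per word and re-running constraints whenever a neighbor is resolved, in the spirit of Waltz' junction labeling. The toy sketch below is ours: its two constraints reproduce only step 2 (resolving "doctor"); the remaining ambiguities await the clause-level reasoning of steps 3-6.

# Remaining category alternatives for "The doctor might cure the patient."
cats = [{"det"}, {"noun", "verb"}, {"noun", "aux"},
        {"noun", "verb"}, {"det"}, {"noun", "adj"}]

def propagate(cats):
    """Reapply simple pairwise constraints until nothing changes."""
    changed = True
    while changed:
        changed = False
        for i in range(1, len(cats)):
            # A word directly after a determiner cannot be a verb or auxiliary.
            if cats[i - 1] == {"det"} and cats[i] & {"verb", "aux"} \
                    and len(cats[i]) > 1:
                cats[i] = cats[i] - {"verb", "aux"}
                changed = True
            # A resolved auxiliary forces a verb reading on an ambiguous next word.
            if cats[i - 1] == {"aux"} and "verb" in cats[i] and len(cats[i]) > 1:
                cats[i] = {"verb"}
                changed = True
    return cats

print(propagate(cats))
# [{'det'}, {'noun'}, {'noun', 'aux'}, {'noun', 'verb'}, {'det'}, {'noun', 'adj'}]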
disambiauation failure preposition complement complement preposition complement complement preposition alternatives Actually, this case is somewhat more complex, since both “bite” and ‘scream” are themselves categorially ambiguous. Each could be either a noun or a verb. The ambiguity is resolvable, however. Since “men” is unambiguously a noun in (2), and since the three remaining tokens are all doubly ambiguous, there are 2s = 8 possible sequences of noun phrases (NP) and verb phrases (VP) in this sentence. Only one of these sequences - NP NP VP VP with a deleted complement between the two NPs - is gram- matical. Table 1: Rule for disambiguating “to” Both of these examples involve secondary searches through the clause. These occur only for a very limited range of cases. For example, had the verb auxiliary in (1) been ‘would”, in- stead of ‘might”, such a search would not be needed. In (2) the presence of an auxiliary, a determiner, or a cpomplement might obviate such a secondary search. On the basis of a worst-case analysis, one might consider this practice explosive, but since such secondary searches are rare and since most clauses are not extremely long, the disambiguation procedures terminate quickly in practice. Each condition evaluates sequentially until one succeeds. The correct answer (rule consequent) replaces the alternatives in the category lists. Conditions 1 and 2 handle the simplest cases, in which the following categories unambiguously indicate a verb phrase or a noun phrase. Condition 3 handles split infinitives. Conditions 4 and 5 handle cases in which the word is at the end of a sentence or clause. If the clause begins ‘with a subset of WH complements (who, which, what, where), “to” is considered to be a preposition (something up with which Winston Churchill would not put). Otherwise, uton is made the infinitival complement (with an ellipsis). Conditions 6 and 7 handles cases in which the next element is also ambiguous, but is either a noun or a verb. If the next word would be a singular noun (e.g., “to store”), %o” must be a complement. If it would be a plural noun (e.g., ‘to stores”), <‘ton must be a preposition. The second possible source of error involves sense ambiguities Condition 8 is the failure condition. The undisambiguated masquerading as categorial ambiguities. For example, the inter- alternatives are returned. Rule failures always return the ambi- pretation of Upatient” as an adjective in (1) might certainly be guity. In such cases, succeeding disambiguations may propagate sensible. Since it could imply that the doctor never treats the impatient, (1) is semantically ambiguous. The categorial disam- biguator assumes no ambiguities of sense. This night be seen as a weakness in the approach. In a system that includes a full-fledged discourse component, however, the disambiguator could be made sensitive to discourse cues indicating an adjectival interpretation for such cases. Perhaps the most important source of the fallibility of cate- gorial disambiguation is the fact that no exhaustive set of dis- ambiguation rules exists. Categorial disambiguation relies on an First, an initial set of the most intuitively obvious pattern- action clauses was constructed for the most common categorial ambiguities. Whenever sufficient constraint was unavailable us- ing these rules, and whenever the disambiguator selected a cate- gory incorrectly, users were asked to select the appropriate cate- gory. 
These selections triggered a background mail process that reported the sentence, the particular ambiguity, and the user’s selection to the implementor. This information proved indispen- sible in developing a large set of rules covering a broad range of ambiguous conditions. 5 A Categorial Disambiguation Rule 1080 / ENGINEERING enough constraint so that a reapplication of the rule would suc- enough constraint so that a reapplication of the rule would suc- cessfully resolve the ambiguity. Only when no possible additional cessfully resolve the ambiguity. Only when no possible additional sources of constraint exist will an ambiguity be considered irre- sources of constraint exist will an ambiguity be considered irre- solvable. In such cases, the user is asked to resolve the ambiguity, solvable. In such cases, the user is asked to resolve the ambiguity, and the user’s action is recorded so that the rule might be ex- and the user’s action is recorded so that the rule might be ex- tended. tended. 6 Heuristics for Rule Construction 6 Heuristics for Rule Construction The disambiguation rules are designed to minimize the amount The disambiguation rules are designed to minimize the amount of computation needed for successful resolution. Rules for ambi- of computation needed for successful resolution. Rules for ambi- guities involving two alternatives attempt to rule in the correct guities involving two alternatives attempt to rule in the correct category. Generally, for two alternative rules, conditions which category. Generally, for two alternative rules, conditions which are least computationally expensive to evaluate are evaluated are least computationally expensive to evaluate are evaluated first, while more expensive conditions are evaluated later. first, while more expensive conditions are evaluated later. Rules for ambiguities involving more than two alternatives Rules for ambiguities involving more than two alternatives attempt to rule out, rather than rule in, particular alternatives. attempt to rule out, rather than rule in, particular alternatives. For efficiency, alternatives considered least likely or easiest to rule For efficiency, alternatives considered least likely or easiest to rule out are checked first, If an alternative can be ruled out, control out are checked first, If an alternative can be ruled out, control passes to the rule that disambiguates the remaining alternatives, passes to the rule that disambiguates the remaining alternatives, and so on, until the ambiguity is resolved. Ruled-out alternatives and so on, until the ambiguity is resolved. Ruled-out alternatives do not reappear on the list of alternatives for a particular word do not reappear on the list of alternatives for a particular word in a particular sentence. Instead, only the remaining alternatives in a particular sentence. Instead, only the remaining alternatives appear for possible later disambiguation. appear for possible later disambiguation. 7 7 Assessment of the Current Model Assessment of the Current Model Categorial ambiguities are currently resolved prior to the detec- Categorial ambiguities are currently resolved prior to the detec- tion of phrasal boundaries. The speedy development of a disam- tion of phrasal boundaries. The speedy development of a disam- biguator with a broad range of coverage motivated the choice to biguator with a broad range of coverage motivated the choice to implement it as a module separate from other parser components. 
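To make Table 1 concrete, the following is a minimal Prolog sketch of the rule as a prioritized clause list. It is a reconstruction for illustration only, not code from Duffy's implementation: the amb/2 representation for embedded ambiguities follows the description in Section 2, while the predicate name disambiguate_to/3 and the sg/pl number marking are invented here.

    % Sketch of Table 1.  disambiguate_to(+Succ, +ClauseType, -Category):
    % Succ is the list of categories succeeding "to" (atoms when
    % unambiguous, amb(Alternatives, Number) terms when not); ClauseType
    % is wh or non_wh.  Clause order encodes rule priority; a caller takes
    % the first solution (e.g. with once/1), mirroring the sequential
    % evaluation of the conditions.

    disambiguate_to([det|_],  _, preposition).            % 1: unambiguous NP follows
    disambiguate_to([noun|_], _, preposition).            % 1: unambiguous NP follows
    disambiguate_to([verb|_], _, complement).             % 2: unambiguous VP follows
    disambiguate_to([adverb, verb|_], _, complement).     % 3: split infinitive
    disambiguate_to([], wh,     preposition).             % 4: WH clause ends here
    disambiguate_to([], non_wh, complement).              % 5: plain clause ends here
    disambiguate_to([amb([noun,verb], sg)|_], _, complement).   % 6: "to store"
    disambiguate_to([amb([noun,verb], pl)|_], _, preposition).  % 7: "to stores"
    disambiguate_to(_, _, [preposition, complement]).     % 8: failure, keep both

For instance, once(disambiguate_to([amb([noun,verb], pl)], non_wh, C)) binds C to preposition, while a wholly unconstrained context falls through to condition 8 and returns both alternatives for later re-evaluation.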
6 Heuristics for Rule Construction

The disambiguation rules are designed to minimize the amount of computation needed for successful resolution. Rules for ambiguities involving two alternatives attempt to rule in the correct category. Generally, for two-alternative rules, conditions which are least computationally expensive to evaluate are evaluated first, while more expensive conditions are evaluated later.

Rules for ambiguities involving more than two alternatives attempt to rule out, rather than rule in, particular alternatives. For efficiency, alternatives considered least likely or easiest to rule out are checked first. If an alternative can be ruled out, control passes to the rule that disambiguates the remaining alternatives, and so on, until the ambiguity is resolved. Ruled-out alternatives do not reappear on the list of alternatives for a particular word in a particular sentence. Instead, only the remaining alternatives appear for possible later disambiguation.

7 Assessment of the Current Model

Categorial ambiguities are currently resolved prior to the detection of phrasal boundaries. The speedy development of a disambiguator with a broad range of coverage motivated the choice to implement it as a module separate from other parser components.

There is no reason, in principle, why the disambiguator could not be more tightly integrated with the parser. To do this, however, a stack of previously parsed words and categories would need to be maintained. Information regarding their linear order might be lost in the phrase structures. Also, since decisions regarding the appropriateness of particular parse rules often depend upon knowledge of succeeding categories, an integrated implementation would require occasional resolutions of categorial ambiguities ahead in the sentence.

As is well-known from expert-systems research, changes in one rule can sometimes produce unexpected negative results from other rules that were not changed. The categorial disambiguator is by no means free of these problems. They are largely eliminated, however, by (a) avoiding side-effecting consequents in the rules even when a successful condition might warrant the immediate disambiguation of a category elsewhere in the sentence, relying instead on constraint propagation, and (b) the installation of debugging tools that trace the evaluation of rules and rule conditions. As a result, the causes of failed disambiguations are simple to detect and thus also to remedy.

Variations on the current approach are certainly possible and might constitute interesting lines of research. For example, one might implement an expert-system variant, in which disambiguation rules are represented declaratively and perhaps weighted to indicate their relative evidential impact upon a resolution. Such an approach would undoubtedly run slower than the prioritized procedural approach described here, but the model would also be more easily manipulable and applicable to other languages.

One might also attempt to remove the implementor from the debugging loop, implementing a variant which automatically learned new rules from disambiguation failures. The practical difficulty of this approach, however, involves the automatic detection of the reasons for such a failure. Errors might be due, for instance, to the specific subcategories of particular words. Without a full-blown theory of the role of subcategories in categorial disambiguation, it is difficult to see how a program could be made to select the correct subcategory consistently.
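The propagation regime that Sections 2 and 5 describe, with failed rules leaving their alternatives in place and being re-run as neighbors are resolved, can be pictured as a simple fixpoint loop. The sketch below is an illustration under invented names (apply_rule/3 stands for whatever tests the CONDS encode); it is not Duffy's code, and for brevity it re-sweeps the whole agenda rather than only the neighbors of a fresh resolution.

    :- dynamic category/2.

    % propagate(+Pending): repeatedly sweep the agenda of unresolved
    % ambiguities until no rule fires; whatever remains is reported to
    % the user, as described in Section 5.
    propagate(Pending) :-
        sweep(Pending, Rest, Progress),
        (   Progress == true -> propagate(Rest)   % new constraints: sweep again
        ;   report_unresolved(Rest)               % fixpoint: ask the user
        ).

    % One sweep: each ambiguity is amb(Id, Rule, Alternatives).  apply_rule/3
    % (assumed) succeeds with a single category when its conditions find
    % enough constraint, and fails otherwise, leaving the alternatives intact.
    sweep([], [], false).
    sweep([amb(Id, Rule, Alts)|T], Rest, Progress) :-
        (   apply_rule(Rule, Alts, Cat)
        ->  assertz(category(Id, Cat)),           % publish resolution as a constraint
            sweep(T, Rest, _),
            Progress = true
        ;   Rest = [amb(Id, Rule, Alts)|Rest1],
            sweep(T, Rest1, Progress)
        ).

    report_unresolved([]).
    report_unresolved([amb(Id, _, Alts)|T]) :-
        format("ambiguity ~w unresolved: ~w~n", [Id, Alts]),
        report_unresolved(T).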
8 Related Work

Very little research seems to have been conducted on the resolution of categorial ambiguity. This has been somewhat surprising, since the technique is quite straightforward and the results are most powerful.

8.1 Wilks' Preference Semantics

As part of his "preference semantics" approach, Wilks [6] resolves categorial ambiguities by characterizing sentences as alternative sequences of semantic primitives and testing each for goodness of fit using a database of templates expressed in those primitives. To use one of Wilks' examples, sentence (3), in which the term "father" is categorially ambiguous, may be characterized as two alternative sequences of semantic primitives, (3a) and (3b).

(3) Small men sometimes father big sons.
(3a) KIND MAN HOW MAN KIND MAN
(3b) KIND MAN HOW CAUSE KIND MAN

The alternative interpretations are then reduced to a sequence of "bare templates" by stripping off the adjectival KIND and the adverbial HOW, resulting in MAN MAN MAN and MAN CAUSE MAN. These two sequences are then matched against a database of legitimate bare templates. Since only the second sequence appears in that database, (3b) is correctly chosen as the appropriate interpretation.

It is important to note that Wilks' technique was not designed to resolve categorial ambiguities. It does so as a by-product of its resolution of sense ambiguities. This does not mean that sense disambiguation, by this method or by any other (e.g., [3]), obviates categorial disambiguation. Quite the contrary, it means that categorial disambiguation can be an inexpensive prelude to sense disambiguation. Had sentence (3) been categorially disambiguated earlier, "father" would have been recognized as a verb, so the nominal usage of "father" would already have been ruled out. No reduction to primitives and no matching in a template database would have been needed.

8.2 Breadth-First Parsing

Another alternative method for resolving categorial ambiguities, pursuing all categorially possible parse paths in breadth-first fashion [2], is exponential in the number of categorial ambiguities. The number of alternative parse paths which would be tried is the product of all possible category assignments. For sentence (1), for example, the number of alternative paths would be:

    The  doctor  might  cure  the  patient
     1     2      2      2     1     2

that is, 1 x 2 x 2 x 2 x 1 x 2 = 16. Worse, many categorial ambiguities involve three or four possible category assignments. For example, some words can be progressive verb forms ("You are winning"), gerundive adjectives ("the winning entry"), or gerundive nouns ("Winning is everything"). Extremely common words, like "in", take even more possible category assignments.

8.3 Phenomenologically-Plausible Parsing

Waltz and Pollack [5] present a connectionist model in which categories are disambiguated concurrently with other lexical ambiguities.
Unlike the serial processing case, prior resolution of categorial ambiguities in a connectionist model would not result in any significant time savings. In fact, in a connectionist model, sequential resolution of categorial ambiguities prior to the resolution of other ambiguities would consume more time. There is a trade-off, however. As in the serial case, earlier categorial disambiguation would constrain the range of possible sense interpretations. Instead of saving time, in the case of massively parallel processing, prior categorial disambiguation would conserve processors.

9 Conclusions

Categorial disambiguation is a computationally inexpensive means for reducing the ambiguity of a sentence. While categorial disambiguation does not resolve all the ambiguities that may appear in a sentence (e.g., anaphor ambiguities and ambiguities of prepositional attachment, clausal attachment, and sense), it does eliminate a source of ambiguity which pervades sentences. Moreover, the prior resolution of categorial ambiguities radically simplifies procedures that resolve these other classes of ambiguity.

The implemented disambiguator remains in a development stage. No tests have yet been performed, using representative texts, to estimate its degree of coverage. Nevertheless, experience with moderately complex sentences indicates that the current set of rules is quite robust. As disambiguation failures are detected, the rulebase is extended, making it more robust. Most importantly, its efficiency over breadth-first approaches and the leverage it provides for other forms of disambiguation suffice to warrant use and extension of the technique.

As a final note, a large amount of syntactic knowledge is embedded within the categorial disambiguation rules. These rules are also a potential source of considerable constraint for the resolution of ambiguous morpheme sequences output by a speech recognition program. By applying a categorial disambiguator to this output, syntactic constraints on possible morpheme sequences may be applied without the overhead involved in testing alternative parse-tree constructions.

10 Acknowledgments

This paper was improved by comments from John Batali, J. Michael Brady, John C. Mallery, Sidney Markowitz, and two anonymous reviewers. Erik Devereux helped typographically. The author remains responsible for the content. Much of the research reported here was conducted at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.

References

[1] Duffy, G., "Viewing Parsing as Patterns of Passing Messages," forthcoming, 1986, available from author.
[2] Martin, W. A., Church, K. W., and Patil, R. S., "Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results," TR 261, MIT Laboratory for Computer Science, 1981.
[3] Small, S., "Word Expert Parsing: A Theory of Distributed Word-Based Natural Language Understanding," TR 954, University of Maryland, Department of Computer Science, 1980.
[4] Waltz, D. L., "Understanding Line Drawings of Scenes with Shadows," in Patrick H. Winston, ed., The Psychology of Computer Vision, New York, McGraw-Hill, 1975, pp. 19-91.
[5] Waltz, D. L., and Pollack, J. B., "Phenomenologically Plausible Parsing," AAAI-84: Proc. of the Nat. Conf. on Artificial Intelligence, AAAI, August 1984, pp. 335-339.
[6] Wilks, Y., "Preference Semantics," Memo AIM-206, Stanford Artificial Intelligence Laboratory, Stanford, California, 1973.
FOCUSING AND REFERENCE RESOLUTION IN PUNDIT

Deborah A. Dahl
Research and Development Division
SDC -- A Burroughs Company
PO Box 517
Paoli, PA 19301

ABSTRACT

This paper describes the use of focusing in the PUNDIT text processing system.* Focusing, as discussed by [Sidner1979] (as well as the closely related concept of centering, as discussed by [Grosz1983]), provides a powerful tool for pronoun resolution. However, its range of application is actually much more general, in that it can be used for several problems in reference resolution. Specifically, in the PUNDIT system, focusing is used for one-anaphora, elided noun phrases, and certain types of definite and indefinite noun phrases, in addition to its use for pronouns. Another important feature in the PUNDIT reference resolution system is that the focusing algorithm is based on syntactic constituents, rather than on thematic roles, as in Sidner's system. This feature is based on considerations arising from the extension of focusing to cover one-anaphora. These considerations make syntactic focusing a more accurate predictor of the interpretation of one-anaphoric noun phrases without decreasing the accuracy for definite pronouns.

I BACKGROUND

A. FOCUSING

Linguistically reduced forms, such as pronouns, are used to refer to the entity or entities with which a discourse is most centrally concerned. Thus, keeping track of this entity (the topic of [Gundel1974], the focus of [Sidner1979], and the backward-looking center of [Grosz1983, Kameyama1985]) is clearly of value in the interpretation of pronouns. However, while 'pronoun resolution' is generally presented as a problem in computational linguistics to which focusing can provide an answer (see, for example, the discussion in [Hirst1981]), it is useful to consider focusing as a problem in its own right. By looking at focusing from this perspective, it can be seen that its applications are more general than in simply finding referents for pronouns. Focusing can in fact play a role in the interpretation of several types of noun phrases. In support of this position, I will show how focus is used in the PUNDIT (Prolog UNDerstander of Integrated Text) text processing system to interpret a variety of forms of anaphoric reference; in particular, pronouns, elided noun phrases, one-anaphora, and context-dependent full noun phrase references.

A second position advocated in this paper is that surface syntactic form can provide an accurate guide to determining what entities are in focus. Unlike previous focusing algorithms, such as that of [Sidner1979], which used thematic roles (for example, the theme, agent, and instrument, as described in [Gruber1976]), the algorithm used in this system relies on surface syntactic structure to determine which entities are expected to be in focus. The extension of the focusing mechanism to handle one-anaphora has provided the major motivation for the choice of syntactic focusing.

The focusing mechanism in this system consists of two parts: a FocusList, which is a list of entities in the order in which they are to be considered as foci, and a focusing algorithm, which orders the FocusList. The implementation is discussed in detail in Section III.

* This work is supported in part by DARPA under contract N00014-85-C-0012, administered by the Office of Naval Research. APPROVED FOR PUBLIC RELEASE, DISTRIBUTION UNLIMITED.

B. OVERVIEW OF THE PUNDIT SYSTEM

I will begin with a brief overview of the PUNDIT system, currently under development at SDC.
PUNDIT is written in Quintus Prolog 1.5. It is designed to integrate syntax, semantics, and discourse knowledge in text processing for limited domains. The system is implemented as a set of distinct interacting components which communicate with each other in clearly specified and restricted ways.

The syntactic component, Restriction Grammar [Hirschman1982, Hirschman1985], performs a top-down parse by interpreting a set of context-free BNF definitions and enforcing context-sensitive restrictions associated with the BNF definitions. The grammar is modelled after that developed by the NYU Linguistic String Project [Sager1981].

After parsing, the semantic interpreter is called. This interpreter is based on Palmer's Inference Driven Semantic Analysis system [Palmer1985], which decomposes verbs into their component meanings and fills their thematic roles. In the process of filling a thematic role the semantic analyzer calls noun phrase analysis on a specific syntactic constituent in order to find a referent to fill the role. Reference resolution instantiates the referent.

Domain-specific information is available in the knowledge base. The knowledge base is implemented as a semantic net containing a part-whole hierarchy and an isa hierarchy of the components and entities in the application domain. The current domain is that of reports of computer equipment failures. The system is being ported to reports of air compressor failures.

Following the semantic analysis, a discourse component is called which updates the discourse representation to include the information from the current sentence and which runs the focusing algorithm.

II USES OF FOCUSING

As stated above, reference resolution is called by the semantic interpreter when it is filling a thematic role. Reference resolution proposes a referent for the constituent associated with that role. For example, if the verb is replace and the semantic interpreter is filling the role of agent, reference resolution would be called for the surface syntactic subject. After a proposed referent is chosen for the subject, any specific selectional restrictions on the agent of replace (such as the constraint that the agent has to be a human being) are checked. If the proposed referent fails selection, backtracking into reference resolution occurs and another referent is selected. Cooperation between reference resolution and the semantic interpreter is discussed in detail in [Palmer1986]. The semantic interpreter itself is discussed in [Palmer1985].

A. PRONOUNS AND ELIDED NOUN PHRASES

Pronoun resolution is done by instantiating the referent of the pronoun to the first member of the FocusList unless the instantiation would violate syntactic constraints on coreferentiality.* (As noted above, if the proposed referent fails selection, backtracking occurs, and another referent is chosen.)

The reference resolution situation in the maintenance texts, however, is complicated by the fact that there are very few overt pronouns. Rather, in contexts where a noun phrase would be expected, there is often elision, or a zero-np, as in Won't power up and Has not failed since Hill's arrival.

* The syntactic constraints on coreferentiality currently used by the system are very simple. If the direct object is reflexive it must be instantiated to the same referent as the subject. Otherwise it must be a different referent. Obviously, as the system is extended to cover sentences with more complex structures, a more sophisticated treatment of syntactic constraints on coindexing, using some of the insights of [Reinhart1976] and [Chomsky1981], will be required.
Zeroes are handled as if they were pronouns. That is, they are assumed to refer to the focus. The hypothesis that elided noun phrases can be treated in the same way as pronouns is consistent with previous claims in [Gundel1980] and [Kameyama1985] that in languages such as Russian and Japanese, which regularly allow zero-np's, the zero corresponds to the focus. If these claims are correct, it is not surprising that in a sublanguage like that found in the maintenance texts, which also allows zero-np's, the zero should correspond to the focus.**

** Another kind of pronoun (or zero) also occurs in the maintenance texts, which is not associated with the local focus, but is concerned with global aspects of the text. For example, the field engineer is a default agent in the maintenance domain, as in Thinks problem is in head select area. This is handled by defining default referents for the domain. The elided referent is instantiated to one of these if no suitable candidate can be found in the FocusList.

B. IMPLICIT ASSOCIATES

Focusing is also used in the processing of certain full noun phrases, both definite and indefinite, which involve implicit associates. The term implicit associates refers to the relationship between a disk drive and the motor in examples like The field engineer installed a disk drive. The motor failed. It is natural for a human reader to infer that the motor is part of the disk drive. In order to capture this intuition, it is necessary for the system to relate the motor to the disk drive of which it is part. Relationships of this kind have been extensively discussed in the literature on definite reference. For example, implicit associates correspond to the inferrable entities described by [Prince1981], the associated use definites of [Hawkins1978], and the associated type of implicit backwards specification discussed by [Sidner1979]. Sidner suggests that implicit associates should be found among the entities in focus. Thus, when the system encounters a definite noun phrase mentioned for the first time, it examines the members of the FocusList to determine if one of them is a possible associate of the current noun phrase. The specific association relationships (such as part-whole, object-property, and so on) are defined in the knowledge base.

This approach is also used in the processing of certain indefinite noun phrases. In every domain, there are certain types of entities which can be classified as dependent. By this is meant an entity which is not typically mentioned on its own, but which is referred to in connection with another entity, on which it is dependent. In the maintenance domain, for example, parts such as keyboards and printed circuit boards are dependent, since they are normally mentioned with reference to something else, such as a disk drive or printer. In an example like The system is down. The field engineer replaced a bad printed circuit board, it seems clear that a relationship between the printed circuit board and the system should be represented. These are treated in the same way as the definites discussed above.

C. ONE-ANAPHORA

PUNDIT extends focusing to the analysis of one-anaphora following [Dahl1984], which claims that focus is central to the interpretation of one-anaphora.
Specifically, the referent of a one-anaphoric noun phrase (e.g., the blue one, some large ones) is claimed to be a member or members of a set which is the focus of the current clause. For example, in Installed two disk drives. One failed, the set of two disk drives is assumed to be the focus of One failed, and the disk drive that failed is a member of that set. This analysis can be contrasted with that of [Halliday1976], which treats one-anaphora as a surface syntactic phenomenon, completely distinct from reference. It is more consistent with the theoretical discussions of [Hankamer1976] and [Webber1983]. These analyses advocate a discourse-pragmatic treatment for both one-anaphora and definite pronouns. The main computational advantage of treating one-anaphora as a discourse problem is that the basic anaphora mechanism then requires little modification in order to handle one-anaphora. In contrast, an implementation following the account of Halliday and Hasan would be much more complex and specific to one-anaphora.

The process of reference resolution for one-anaphora occurs in two stages. The first stage is resolution of the anaphor, one, and this is the stage that involves focusing. When the system analyzes the head noun one, it instantiates it with the category of the first set in the FocusList (disk drive in this example).*** In other words, the referent of the noun phrase must be a member of the previously mentioned set of disk drives. The second stage of reference resolution for one-anaphora assigns a specific disk drive as the referent of the entire noun phrase, using the same procedures that would be used for a full noun phrase, a disk drive.

*** Currently the only sets in the FocusList are those which were explicitly mentioned in the text. As pointed out by [Dahl1982] and [Dahl1984], other sets besides those explicitly mentioned are available for anaphoric reference. These have not yet been added to the system.

The extension of the system to one-anaphora provides the clearest motivation for the choice of a syntactic focus in PUNDIT. Before I discuss the kinds of examples which support this approach, I will briefly describe the relevant part of the focusing algorithm based on thematic roles which is proposed by [Sidner1979]. After each sentence, the focusing algorithm orders the elements in the sentence in the order in which they are to be considered as potential foci in the next sentence. Sidner's ordering and that of PUNDIT are compared in Figure 1.

    Sidner                    PUNDIT
    Theme                     Sentence
    Other thematic roles      Direct Object
    Agent                     Subject
    Verb Phrase               Objects of Prepositional Phrases

    Figure 1: Comparison of Potential Focus Ordering in Sidner's System and PUNDIT

The feature of one-anaphora which motivates the syntactic algorithm is that the availability of certain noun phrases as antecedents for one-anaphora is affected by surface word order variations which change syntactic relations, but which do not affect thematic roles. If thematic roles are crucial for focusing, then this pattern would not be observed. Consider the following examples:

(1) A: I'd like to plug in this lamp, but the bookcases are blocking the electrical outlets.
    B: Well, can we move one?

(2) A: I'd like to plug in this lamp, but the electrical outlets are blocked by the bookcases.
    B: Well, can we move one?

In both (1) and (2) the electrical outlets are the theme, which means that in a thematic-role based approach, the outlets represent the expected focus in both sentences.
However, only in (1) do informants report an impression that B is talking about moving the electrical outlets. This indicates that the expected focus following (1)A is the outlets, while following (2)A it is the bookcases.*

* In the case of (1), the expected focus is eventually rejected on the basis of world knowledge about what is likely to be movable, but focusing is only intended to determine the order in which discourse entities are considered as referents, not to determine which referent is actually correct. The referent proposed by focusing must always be confirmed by world knowledge.

Similar examples using definite pronouns do not seem to exhibit the same effect. In (3) and (4), they seems to be ambiguous, until world knowledge is brought in. Thus, in order to handle definite pronouns alone, either algorithm would be adequate.

(3) A: I'd like to plug in this lamp, but the bookcases are blocking the electrical outlets.
    B: Well, can we move them?

(4) A: I'd like to plug in this lamp, but the electrical outlets are blocked by the bookcases.
    B: Well, can we move them?

Another example with one-anaphora can be seen in (5) and (6). In (5) but not in (6), the initial impression seems to be that a bug has lost its leaves. As in (1) and (2), however, the thematic roles are the same, so a thematic-role-based algorithm would predict no difference between the sentences.

(5) The plants are swarming with the bugs. One's already lost all its leaves.

(6) The bugs are swarming over the plants. One's already lost all its leaves.

In addition to theoretical considerations, there are a number of practical advantages to defining focus on constituents rather than on thematic roles. For example, constituents can often be found more reliably than thematic roles. In addition, thematic roles have to be defined individually for each verb.** Furthermore, since thematic roles for verbs can vary across domains, defining focus on syntax makes it less domain dependent, and hence more portable.

** Of course, some generalizations can be made about how arguments map to thematic roles. However, they are no more than guidelines for finding the themes of verbs. The verbs still have to be classified individually.

III IMPLEMENTATION

A. THE FOCUSLIST AND CURRENTCONTEXT

The data structures that retain information from sentence to sentence in the PUNDIT system are the FocusList and the CurrentContext. The FocusList is a list of all the discourse entities which are eligible to be considered as foci, listed in the order in which they are to be considered. For example, after a sentence like The field engineer replaced the disk drive, the following FocusList would be created:

    [[event1], [drive1], [engineer1]]

The members of the FocusList are unique identifiers that have been assigned to the three discourse entities: the disk drive, the field engineer, and the state of affairs of the field engineer's replacement of the disk drive. The CurrentContext contains the information that has been conveyed by the discourse so far. After the example above, the CurrentContext would contain three types of information:

(1) Discourse id's, which represent classifications of entities. For example, id(field^engineer,[engineer1]) means that [engineer1] is a field engineer.***

(2) Facts about part-whole relationships (hasparts).

(3) Representations of the events in the discourse. For example, if the event is that of a disk drive having been replaced, the representation consists of a unique identifier ([event1]), the surface verb (replace(time(_))), and the decomposition of the verb with its (known) arguments instantiated. The thematic roles involved are object1, the replaced disk drive, object2, the replacement disk drive, time and instrument, which are uninstantiated, and agent, the field engineer. (See [Palmer1986] for details of this representation.)

*** field^engineer is an example of the representation used in PUNDIT for an idiom.

Figure 2 illustrates how the CurrentContext looks after the discourse-initial sentence, The field engineer replaced the disk drive.

    [id(field^engineer,[engineer1]),
     id(disk^drive,[drive1]),
     id(system,[system1]),
     id(disk^drive,[drive2]),
     id(event,[event1]),
     haspart([system1],[drive1]),
     haspart([system1],[drive2])]

    event([event1], replace(time(_)),
          [included(object2([drive2]), time(_)),
           missing(object1([drive1]), time(_)),
           use(instrument(_8406),
               exchange(object1([drive1]), object2([drive2]), time(_))),
           cause(agent([engineer1]),
                 use(instrument(_8406),
                     exchange(object1([drive1]), object2([drive2]), time(_))))])

    Figure 2: CurrentContext after The field engineer replaced the disk drive.
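Given these structures, pronoun resolution as described in Section II.A can be pictured in a few lines of Prolog. This is a reconstruction for illustration, not PUNDIT code; resolve_pronoun/3 and coref_ok/2 are invented names, and selectional checking is left to the caller, which simply backtracks into the predicate when a proposed referent fails selection.

    % A pronoun resolves to the first FocusList member that satisfies the
    % (very simple) coreferentiality constraints; later failures backtrack
    % here and propose the next member in focus order.
    resolve_pronoun(Pronoun, FocusList, Referent) :-
        member(Referent, FocusList),
        coref_ok(Pronoun, Referent).

    % A reflexive direct object must corefer with the subject; any other
    % pronoun must denote something else.  The subject's referent is
    % packed into the pronoun term for the sketch.
    coref_ok(pronoun(reflexive, Subject), Subject).
    coref_ok(pronoun(plain, Subject), Referent) :-
        Referent \== Subject.

Against the FocusList above, resolve_pronoun(pronoun(plain, [engineer1]), [[event1],[drive1],[engineer1]], R) proposes [event1] first and [drive1] on backtracking; elided noun phrases would be resolved by the same call, since zeroes are treated as pronouns.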
For example, after a sentence like The field engineer replaced the disk drive, the following FocusLiat would be created. [[eventl], [d rivel], [engineerl]] event( [eventl], replace(time(-)), [included(objectt( [drivez]),time(-)), missing(objectl( [drivel]),time(-)), use(instrument(8406), exchange(objectl([drivel]), object2( [d rive2]) ,time( -))) , cause(agent( [engineerl]), use(instrument(8406), exchange(objectl( [drivel]), object2( [d rive2]),time(_))))]> The members of the FocusList are unique identifiers that have been assigned to the three discourse entities -- the disk drive, the field engineer, and the state of affairs of the field engineer’s replacement of the disk drive. The CurrentContext contains the informa- tion that has been conveyed by the discourse so far. After the example above, the CurrentContext would contain three types of information: Figure 2: Currentcontext after The field engineer replaced the disk drive. * Of course, some generalizations can be made about how arguments map to thematic roles. Howev- er, they are no more than guidelines for finding the themes of verbs. The verbs still have to be classified individually. * field*engineer is an example of the represen- tation used in PUNDIT for an idiom. 1086 / ENGINEERING B. THE FOCUSING ALGORITHM The focusing algorithm used in this system resembles that of (Sidnerl9791, although it does not use the actor focus and uses surface syntax rather than thematic roles, as discussed above. It is illus- trated in Figure 3. (1) (2) First Sentence of a Discourse: Establish expected foci for the next sen- tence (order FocusList): the order reflects how likely that constituent is to become the focus of the following sentence. Sentence Direct Object Subject Objects of (Sentence-Level) Prepositional Phrases Subsequent Sentences (update FocusList): If there is a pronoun in the current sen- tence, move the focus to the referent of the pronoun. If there is no pronoun, re- tain the focus from the previous sen- tence. Order the other elements in the sentence as in (1). Figure 3: The Focusing Algorithm IV SUMMARY This paper has described the reference resolution component of PUNDIT, a large text understanding system in Prolog. A focusing algorithm based on sur- face syntactic constituents is used in the processing of several different types of reduced reference: definite pronouns, one-anaphora, elided noun phrases, and implicit associates. This generality points out the use- fulness of treating focusing as a problem in itself rather than simply as a tool for pronoun resolution. ACKNOWLEDGMENTS I am grateful for the helpful comments of Lynette Hirschman, Marcia Linebarger, Martha Palmer, and Rebecca Schiffman on this paper. John Dowding and Bonnie Webber also provided useful comments and suggestions on an earlier version. REFERENCES [Chomskyl981] Noarn Chomsky, Lectures on Cr’ooernment an.d Binding. Foris Publications, Dordrccht, 1981. [Dah11982.] Deborah A. Dahl, Discourse Structure and one-anaphora in English, preselrted at the 57th Annual Meeting of the Linguistic Society of America, San Diego, 1982.. [Dahll984] Deborah A. Dahl, The Structure and Func- tion of One-Anaphora in English, PhD Thesis; (also published by Indiana University Linguistics Club, 1985), University of Min- nesota, 1984. [Groszl983] Barbara Grosz, Aravind K. Joshi, and Scott Weinstein, Providing a Unified Account of Definite Noun Phrases in Discourse. Proceedings of the 2Zst Annual Meeting of the Association for Computa- tional Linguistics, 1983, pp. 44-50. 
[Gruber1976] Jeffery Gruber, Lexical Structure in Syntax and Semantics. North Holland, New York, 1976.

[Gundel1974] Jeanette K. Gundel, Role of Topic and Comment in Linguistic Theory, Ph.D. thesis, University of Texas at Austin, 1974.

[Gundel1980] Jeanette K. Gundel, Zero-NP Anaphora in Russian. Chicago Linguistic Society Parasession on Pronouns and Anaphora, 1980.

[Halliday1976] Michael A. K. Halliday and Ruqaiya Hasan, Cohesion in English. Longman, London, 1976.

[Hankamer1976] Jorge Hankamer and Ivan Sag, Deep and Surface Anaphora. Linguistic Inquiry 7(3), 1976, pp. 391-428.

[Hawkins1978] John A. Hawkins, Definiteness and Indefiniteness. Humanities Press, Atlantic Highlands, New Jersey, 1978.

[Hirschman1982] L. Hirschman and K. Puder, Restriction Grammar in Prolog. In Proc. of the First International Logic Programming Conference, M. Van Caneghem (ed.), Association pour la Diffusion et le Developpement de Prolog, Marseilles, 1982, pp. 85-90.

[Hirschman1985] L. Hirschman and K. Puder, Restriction Grammar: A Prolog Implementation. In Logic Programming and its Applications, D.H.D. Warren and M. Van Caneghem (ed.), 1985.

[Hirst1981] Graeme Hirst, Anaphora in Natural Language Understanding. Springer-Verlag, New York, 1981.

[Kameyama1985] Megumi Kameyama, Zero Anaphora: The Case of Japanese, Ph.D. thesis, Stanford University, 1985.

[Palmer1985] Martha S. Palmer, Driving Semantics for a Limited Domain, Ph.D. thesis, University of Edinburgh, 1985.

[Palmer1986] Martha S. Palmer, Deborah A. Dahl, Rebecca J. Schiffman, Lynette Hirschman, Marcia Linebarger, and John Dowding, Recovering Implicit Information, to be presented at the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, August 1986.

[Prince1981] Ellen F. Prince, Toward a Taxonomy of Given-New Information. In Radical Pragmatics, Peter Cole (ed.), Academic Press, New York, 1981.

[Reinhart1976] Tanya Reinhart, The Syntactic Domain of Anaphora, Ph.D. thesis, Massachusetts Institute of Technology, 1976.

[Sager1981] N. Sager, Natural Language Information Processing: A Computer Grammar of English and Its Applications. Addison-Wesley, Reading, Mass., 1981.

[Sidner1979] Candace Lee Sidner, Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse, MIT-AI TR-537, Cambridge, MA, 1979.

[Webber1978] Bonnie Lynn Webber, A Formal Approach to Discourse Anaphora. Garland, New York, 1978.

[Webber1983] Bonnie Lynn Webber, So What Can We Talk About Now?. In Computational Models of Discourse, Michael Brady and Robert C. Berwick (ed.), 1983.
Merging Objects and Logic Programming: Relational Semantics

Herve Gallaire
European Computer-Industry Research Centre (E.C.R.C)
Arabellastr. 17, D-8000 Muenchen 81, FRG

Abstract

This paper proposes new semantics for merging object programming into logic programming. It differs from previous attempts in that it takes a relational view of the method evaluation and inheritance mechanisms originating from object programming. A tight integration is presented, and an extended rationale for adopting a success/failure semantics of backtrackable method calls and for authorizing variable object calls is given. New method types dealing with non-monotonicity and determinism, necessary for this tight integration, are discussed. The need for higher-order functions is justified from a user point of view as well as from an implementation one. The system POL is only a piece of a more ambitious goal, which is to merge logic programming, object programming and semantic data models, and which can be seen as an attempt to bridge the gap between AI and databases. The paper is restricted to a programming perspective.

1. INTRODUCTION

This paper is not about yet another mix of logic programming and object programming. Its goals are at a different level than those of most systems which have been presented up to now [1],[2],[3],[4],[5], which merely copy object-programming semantics [6],[7],[8] into a logical one. The question addressed here is whether the notions of inheritance and of procedural semantics attached to the object programming systems can be kept, or whether they have to be revisited in the light of a relational rather than functional paradigm. The answer given in this paper is that these concepts should indeed be defined differently in this context, and a complete solution is presented. This requires adopting a success/failure semantics of backtrackable method calls, instead of the classic call/return mechanism. It also requires allowing for variable object calls and introducing new types of methods dealing with non-monotonicity and determinism. A rationale for these decisions is given.

Methods and method calling are discussed extensively here, but slots are not, because they are orthogonal to the actual topics of interest in this paper. Similarly, the paper does not analyse objects in the perspective of parallel execution, as object programming and logic are still very much sequential. The problems studied do not suffer from these limits.

Another important thrust of this study has been to develop ideas leading to realistic implementations of the complex operations required in a full relational context as proposed in this paper, but they are not discussed here.

Early applications of this system have shown how this integration could still be improved. Several more complex operators have been defined to give adequate tools to application developers in order to integrate operations into method calls (e.g. average, ...) while retaining the efficiency of the method evaluation technique developed. In many ways they are the counterparts of the classical 'set-of' operator in a logic-oriented framework, and in some sense of the aggregation operators added to the relational algebra in a database framework.

The rest of the paper is divided into four further sections and a conclusion. Section 2 states the requirements set up to build the POL system. Section 3 defines the syntax of POL. Section 4 deals with the semantics, especially the method call interpretation. Section 5 presents the higher order operators.
Initial ideas about the POL system have been presented briefly in [9]. The work described here is part of a wider effort to bridge the gap between knowledge representation techniques as used in the AI (objects, frames) and in the database communities (entities, relationships, ...), based on the use of logic as a possible unifying framework. Obviously POL bears relations to languages developed from the other ends: AI languages extended to handle inference [10], [11], theorem provers using theory resolution [17], semantic database systems [12] offering inference mechanisms [13], [14], [15]. It is anticipated that there will be a strong convergence through approaches of this type between the programming, the AI and the database fields.

POL has been built through progressive refinement, introducing first an interpreter of method calls; then a simple compiled version has been developed, and progressive optimisations which appeared necessary to fulfil the additional requirements were introduced later.

2. REQUIREMENTS

The following sums up the main requirements retained for the development of POL.

req1: the POL system must be a superset of Prolog; any Prolog program should run unchanged.

req2: the POL system must allow for a programming based on objects, classes and method calls. Inheritance should be a feature of POL, including multiple inheritance. Slots are not discussed here, but they are supported.

req3: the relational framework must be used to refine appropriately concepts such as inheritance and method evaluation, as the usual function-based semantics is not appropriate.

req4: the main features of logic programming, namely the logic variable and non-determinism (including backtracking to implement it), should be used throughout. Thus method calls must be fully general and backtrackable. They provide fully associative retrieval of sets of objects.

req5: the system must allow dynamic creation and deletion of objects, classes, and methods.

Requirements 1 and 2 are straightforward. Requirements 3 and 4 give preeminence to logic. The semantics of object programming is for the most part system dependent, and this paper adds to the long list of such contributions with specific motivations.

There are additional requirements which tie the system to the semantic models domain. Although they are out of the scope of this discussion, some of the features supported by POL are relationships [12] and fully deductive relationships.

3. SYNTAX

A sentence in POL is a clause belonging to Prolog+ or a declaration. A clause in Prolog+ is a Prolog clause, including additional built-in evaluable predicates (most of them defined as Prolog operators); in particular the infix operator ':' to indicate a method call. Method calls are written 'X:Y' where X is an object (instance) or a Prolog variable, and where Y is a Prolog literal which must have been defined in an associated method declaration (see below); the parameters of Y can be objects or variables indifferently. Other operators introduced to deal with higher order functions are not discussed here. As usual in Prolog, upper case letters denote variables, while lower case letters denote constants. The added built-in predicates are simulated in the current implementation, but they are to be understood as true built-ins in a final implementation.
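Since req1 makes POL a superset of Prolog, the declaration syntax above can be made readable by an ordinary Prolog reader with a handful of operator declarations. The following sketch is illustrative only; the priorities are invented and are not necessarily those chosen in POL.

    % Hypothetical operator declarations for POL-style sentences; the
    % priorities are illustrative, not POL's actual choices.
    :- op(600,  xfx, ':').        % method call:        Object:Method
    :- op(650,  xfx, isa).        % class declaration:  Sub isa Super
    :- op(650,  xfx, instance).   % instance of class:  Obj instance Class
    :- op(1150, xfx, with).       % method declaration: Class with Head :- Body

With these declarations, a method declaration such as "researcher with X:topic(Y) :- X:in_team(Z), topics(Z,Y)" parses as an ordinary clause whose head is the with-term, which a POL front end can then store and evaluate under its own semantics.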
4. SEMANTICS

A generic structure can be declared for the instances of a type, each of which may have another concrete structure possibly derived from the generic one; thus the generic structure would be attached to the class itself. POL requires objects to have names. We will note method calls X:methodname(Parameter), or X:methodname(Parameter1, Parameter2). It is perhaps easier to view this notation as an alternative for another predicate, methodname(X,Parameter) or methodname(X,Parameter1,Parameter2), etc.

4.1. Basic Choices

The first point of interest to discuss has to do with the evaluation of a method call. In a classical object programming system, a call Object:methodname(Parameter) is usually answered according to the following rules: take the first method, according to the inheritance rules, which has the name of the method call; evaluate that method for the couple (Object, Parameter), one of which is (almost) always a constant (Object), the other a variable (Parameter); only one such couple will be used; the call may then instantiate or not Parameter (usually it would do so). In this respect, the method call behaves exactly as a procedure call with a call/return paradigm. Of course some languages, e.g. flavors, offer ways to combine the values of relevant methods, but this is still in the functional context.

In the logic framework both aspects of the above rules must be questioned. First, even if logic also provides a procedural interpretation, its main interest stems from a declarative interpretation which corresponds to a success/failure paradigm rather than to a call/return one. Thus it is appealing to modify and adapt the method call to the success/failure paradigm. This change has consequences that are analysed later. Similarly, but this is obvious, the constraint of 'one' answer must be released to fit the relational environment. Thus each method call will be backtrackable. To clarify the issues we summarise the various possibilities:

Case a: a call object:methodname(Parameter), where object is a known object, not a variable. This is called a constant object call. There are three possible independent semantic interpretation choices:

a1 - call/return versus success/failure paradigm, which influences the notion of "first answer"
a2 - multiple answers or single answer of the method selected by a1
a3 - multiple method calls or single method call to provide additional answers when possible

Most systems choose call/return, single answer to a2, and single method to a3. Instead POL implements success/failure, multiple answers to a2, multiple methods to a3. Obviously the programmer will have the possibility to control this and to accept only one method, one answer, etc. This also corresponds to ESP [1], but see further comments. In [3] are provided a success/failure paradigm, multiple answers, single method.

Case b: a call X:methodname(Parameter), where X is a free variable in the logic sense. Then the question is whether such a call, referred to as a variable object call in the sequel, is correctly and efficiently handled in all contexts, i.e. even in contexts where non-monotonicity is handled. In [3] this is not dealt with; instead, as in many other systems, one has to program this search at an external level. Consider a predicate 'findallobjects' defined as:

    findallobjects(Nickname) :- Obj instance Something, Obj:whatis(Nickname)

where 'whatis' could be a user-defined method returning the nickname of the object it is applied to.
The call mechanism involving the complex evaluation scheme of ':' will be reactivated for each object, a penalty. It is not possible to have only a pure loop around the method evaluation in order to handle variable object calls. This would also have the obvious serious problem of redundant answers in lattices. Another problem, due to the way defaults (i.e. non-monotonicity) are implemented in [3], would be the loss of completeness in such calls, i.e. not all answers would be obtained. Similar remarks apply to ESP [1]. One system seems to address such calls, Sidur [2], but the programmer has to give a program to do so. POL does solve these problems completely, but requires a much more complex implementation to be efficient, and new types of methods.

Here are the rules retained; justifications are given afterwards:

(R1) a method is deemed to provide an answer to a method call when and only when the method predicate evaluates to true. Thus when a method predicate evaluates to false, the search for an answer goes on according to the inheritance rules. This is the success/failure paradigm, and it allows the call/return paradigm to be simulated.

(R2) inheritance is basically bottom-up in the hierarchy of classes (which imposes writing the declarations of methods with identical names corresponding to the lowest classes first). At a given level, siblings are examined in the order of declaration of the methods (not of the hierarchy).

The above comparisons dealt with other systems integrating objects and logic programming. It must be noted that at the opposite end of the spectrum, a language like Gemstone [16], which unifies Smalltalk and databases, obviously offers variable objects thanks to its set notation. However, it does not offer any deductive capabilities. While we are reviewing hybrid systems, let us again mention [10],[11], which combine various modelling aspects and deductive features, at least to some extent. They do not appear to offer the variable object feature.

4.2. Success/failure versus call/return paradigms

Figure 1 describes a hierarchy of classes, with method names. Corresponding to it we would have the following:
would yield as answer a free variable as it would be deemed to have had an answer given at class te- searcher where the ‘procedure’ is aged is defined (but fails in the logic programming sense), hence yielding the free variable answer. or perhaps an error. To palliate this. a ‘sendsuper’ call. classical in object systems. could be used in the body of is aged. but this is highly procedural in nature and thus - should probably not be used in this context. It would present other problems as well (see next). Another more problematic example would indeed be ?-franz:topic( Z) As topic is defined first al class researcher before being defined at class student. if one is to take the call/return paradigm. it will evaluate that and only that method. This will call the procedures in the body of ‘topic’ defined at class researcher and return again a free variable for Z if franc is not registered in any team (he may instead have a topic which is his university topic, not given by the company). This means that the method call in researcher class fails and thus paradigm Here a the search would by slops. itself On the continue contrary, the the search success/fail correctly. sendsuper mechanism would not work as the search has to proceed in a different branch of the inheritance graph. Of course adopting the success/failure paradigm does not mean that one is necessarily interested in all answers and there should be ways to control that search. We turn to that now. 4 .3. Default Methods Let us now assume the query I-pat:is aged(Y) with S/F (success/failure) as the call mechanism semantics. This would yield a first answer “35” using data of Figure1 and when backtracked over, would call for ‘askuser’, an ob- viously non-desirable feature. Prolog programmers would thus rewrite (rl) as researcher with X:is-aged(Y):- age(X,Y),Y =/=no-value,! (rla) which solves this problem, by stoppmp propagation. But then. what about the query ?- Who+ aged(Y) - i.r. what about variable object calls for such a query? The ‘I’ which is in (rla) prevents any backtrack. i.e. only one answer is obtained. There is no way to combine both features. The solution adopted in POL is to introduce a different type of method. so-called ‘default’ methods. Doing so, having a clear default semantics will put the burden on the structuring of methods and objects, not on coding individual bodies of methods. One would thus keep (rl) and further replace (r5) by (r5a) : person withdefault X:is-aged(Y):- askuser(age,X,Y) (r5a) Additionally (rl) could be replaced by (rla), but this would just be for (small) efficiency gains. Thus the semantics of ‘:’ evaluation and of default must be defined as follows: (R3) a default method is used on object X (constant) if and only if no earlier call of the same method has succeeded in the current call evaluation. Here, earlier is to be understood according to R2 in 4.1. (R4) variable object calls must work on all objects of relevant classes, no matter the contents (bodies) of the methods (i.e. even if those contain cuts). R4 is the one difficult rule to implement correctly and efficiently. These rules form the basis for the associative retrieval func- tions. The presence of such calls with variable objects has another important consequence that is now to be reviewed. 4.4. 
4.4. Deterministic Methods

Granted that we make provision for variable object calls, implying backtracking to get all relevant objects, when there are several (inherited) methods (other than default ones) applying to a given class, each object will be applied to all methods relevant for that class. This may be redundant in some cases, or desired; thus there must be ways to control that process. For instance, assuming that 'topic' were also defined at class staff in addition to being defined at class researcher, it is likely that one would want to evaluate only one 'topic' method for a given researcher:

    researcher with X:topic(Y) :- body1   (r7)
    student with X:topic(Y) :- body3      (r8)
    staff with X:topic(Y) :- body2        (r9)

It is possible to control the use of methods in different classes by using the universal and versatile '!' (cut) of Prolog once again, e.g. by replacing (r7) by

    researcher with X:topic(Y) :- body1, !   (r7a)

This '!' does not cause problems as far as the evaluation of calls such as X:topic(Y) is concerned, thanks to the implementation of ':' discussed at the end of subsection 4.3. This can be contrasted with [1],[3] - see the discussion in the introduction of section 4. But consider its semantics: it would in fact cut paths for branches which use true multiple inheritance. The few systems which have addressed this problem normally considered that there was an OR between all methods, thus any cut had to stop all evaluations. It is more flexible to consider that any cut is only local to the branch in which it appears. Thus any cut in body3 in (r8) will not affect the use of (r9). On the other hand it is sometimes necessary to have methods which do stop evaluation on all branches: to do so we have introduced another type of method, deterministic ones.

(R5) When it answers (i.e. succeeds), a deterministic method stops all calls for the same object when backtracking occurs, including on sibling methods.

Thus such methods correspond to high-priority ones. Assume that in the example we want to give stronger preference to topics defined in the research centre than to topics defined in relation to external bodies; then we would replace (r7), (r9) with (r7a) as above and (r9a):

    staff withdeterministic X:topic(Y) :- body2   (r9a)

Here are a few possible method calls and their possible answers:

    ?- ida:topic(Y)
        Y = knowledge_bases could be given by (r7a) - no more answers can be
        obtained. ida is a researcher and the path to (r9a) is cut in (r7a);
        as ida is not at the same time a student, no link to (r8) exists.

    ?- franz:topic(Y)
        Y = decision_making could be given by (r9a), assuming (r7a) fails for
        franz; even though franz is also a student, (r8) will not be tried as
        (r9a) is deterministic.

    ?- joe:topic(Y)
        Assuming joe is a student_researcher, and assuming that (r7a) and
        (r9a) fail for him, then (r8) would be tried and might yield
        compilation_techniques as answer. No more search is involved.

    ?- john:topic(Y)
        Y = knowledge_bases and Y = logic could be given by (r7a) and (r8),
        assuming john is a student_researcher whose work on knowledge bases
        is given by the non-deterministic rule (r7a), cutting the path to the
        deterministic (r9a) but leaving open the path to (r8).

And of course, thanks to the ':' evaluation (R4),

    ?- X:topic(Y)

would then yield (assuming the above data):

    X = ida, Y = knowledge_bases
    X = franz, Y = decision_making
    X = joe, Y = compilation_techniques
    X = john, Y = knowledge_bases
    X = john, Y = logic

but nothing for X = pat, in case none of (r7a), (r8), (r9a) succeeds for him.
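Extending the same sketch once more with (R5): a deterministic method that has produced an answer stops every remaining method for that object. Note that this flattened, single-list search is a deliberate simplification of ours; it cannot express the branch-local cuts of a true multiple-inheritance lattice, which is precisely what makes the real implementation non-trivial.

    # Sketch continued (our illustration, not POL source): deterministic
    # methods per (R5), combined with the default handling of (R3).
    DETERMINISTIC = {("staff", "topic")}             # (r9a) 'withdeterministic'

    def call_full(obj, name):
        succeeded = False
        for cls in mro(INSTANCE[obj]):
            body = METHODS.get((cls, name))
            if body is None:
                continue
            if (cls, name) in DEFAULTS and succeeded:
                continue                             # (R3)
            answered = False
            for answer in body(obj):
                succeeded = answered = True
                yield answer
            if answered and (cls, name) in DETERMINISTIC:
                return                               # (R5): stop all remaining methods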
What has just been discussed is, of course, a way to have non-monotonic behaviour. While such behaviour is easily obtained in, e.g., [1], [3], where variable object calls are not dealt with, it has to be specifically produced in this more complex and complete context.

4.5. Summary

Rules (R1) through (R5) give the semantics of the method call mechanisms. They differ significantly from traditional ways of evaluating method calls but can be reduced to them by the programmer if he/she wishes to do so. Completeness of evaluation relative to these rules requires careful implementation of variable object calls.

5. SOME HIGHER ORDER FUNCTIONS

Following the running example, let us imagine the query is to find all beginners from the class staff only, i.e. not all of them. The obvious query to do that is '?- X:beginner, X instance staff', unless it is '?- X instance staff, X:beginner'. Both queries have significant drawbacks. This problem of where to put the generator or the test (in the first query, X instance staff is a test, a filter, while in the second query it is a generator) is classic in the database field, and not well solved. The first query generates (with the high cost of method evaluation) too many values for X; the second generates people (probably too many) and loses any optimisations of the call X:beginner discussed in section 4 for variable object calls, because X is now a constant which has just been generated. What is needed here is a restriction operator for ':', namely the built-in evaluable predicate 'inclasses(X,Listofclasses,X:method(Y))', which provides a kind of many-sorted logic in its implementation. This operator has been implemented and yields significant improvements in performance when used. But it is only one among a few others which have proved quite useful; a discussion of these is out of scope here, and only examples of their use are given, continuing the running example. Before doing so, let us stress that all these higher level operators share the goal of restricting the search as intimately as possible in the ':' evaluation process. This is why they are useful. They are very similar in spirit to the classical setof operator of Prolog, but they work in the more complex context of classes and method calls. All the following predicates are also built-in evaluable ones.

Examples:

    laocwc([researcher], X:has_children(Y), L)
        lists couples (an instance of researcher, a child)

    laocwcb([researcher], X:has_children(Y), Y, L)
        lists instances of researcher for whom X:has_children(Y) succeeds;
        Y is hidden

    laocwcbis([researcher], X, Y-(X:has_children(Y)), L)
        differs from the previous one because the constraint could have been
        any predicate, not just a call to a method; cannot be used in all
        contexts

    dfaor([staff], X:is_aged(Y), tally(Y,Z), avge(Z,Av))
        computes the average age of a set of instances of class staff

Obviously such unaesthetic operators can be hidden, and one could define:

    averageage(Listofclasses, Averageage)
    average(Something, Listofclasses, Average)

Using these operators not only simplifies applications writing considerably but also helps in getting satisfactory performance.

Finally, there is another available set of evaluable operators which allows the schema, i.e. the set of declarations itself, to be queried, so as to give metalevel features: these operators are either global to the schema (e.g. list all methods, etc.) or local to a class (list all methods attached to it, ...) or to a method/relationship.
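As an illustration of the idea (our sketch with hypothetical names, not the POL built-ins), a restriction operator in the spirit of inclasses pushes the class constraint inside the ':' search instead of filtering afterwards or enumerating constants first:

    # Sketch (our reconstruction): a restriction operator in the spirit of
    # 'inclasses', reusing the call_full and mro helpers defined earlier.
    def instances_of(classes):
        """All objects whose class is, or inherits from, one in 'classes'."""
        for obj, cls in INSTANCE.items():
            if any(c in mro(cls) for c in classes):
                yield obj

    def inclasses(classes, name):
        """Like ?- X:method(Y), but searched only over the given classes,
        keeping the variable-object optimisations of section 4."""
        for obj in instances_of(classes):
            for answer in call_full(obj, name):
                yield obj, answer

A call such as list(inclasses(["staff"], "topic")) then generates only the relevant objects, rather than testing a method against every object in the knowledge base.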
Manipulating such a schema gives a way for end-users to understand what is described and manipulated in the knowledge base.

CONCLUSION

The ideas presented here appear to be original in that they try to really merge several formalisms - logic programming, object programming (and semantic data modelling) - rather than purely juxtaposing them in a common formalism, as seems to have been the main thrust of the work on merging objects and logic up to now. The system POL can be characterized by the fact that it offers a full treatment of variable object calls, and of default as well as deterministic mechanisms. Its implementation has been designed to deal with the above in a non-obvious way. Finally it proposes higher level operators which are original and have a high pay-off both in efficiency and for application development. A full system should be developed to go beyond the experimental stage described here, but the basics of the implementation need only be carried over. It is hoped that ideas along these lines can contribute to the emergence of a better breed of logic languages, as it is obvious that the need for systems offering semantic modelling capabilities jointly with inference capabilities is not yet satisfied - let alone if we add database requirements.

Acknowledgments

This work has benefitted from discussions with the Logic Programming group of ECRC, especially David Chan and Reinhard Enders.

REFERENCES

[1] Chikayama, T.: Unique features of ESP. Proceedings FGCS'84, ICOT, Tokyo, November 1984, pp. 292-298.

[2] Kogan, D., Freiling, M.: SIDUR - A structuring formalism for knowledge information processing systems. Proceedings FGCS'84, ICOT, Tokyo, November 1984, pp. 596-605.

[3] Zaniolo, C.: Object-oriented programming in Prolog. Proceedings 1984 International Symposium on Logic Programming, Atlantic City, February 1984, pp. 265-270.

[4] Enders, R., Chan, D.: Specification of BOP - an object-oriented extension to Prolog. ECRC, Technical Report LP-2, March 1985.

[5] Furukawa, K., et al.: Mandala, a logic based knowledge programming system. Proceedings FGCS'84, ICOT, Tokyo, November 1984, pp. 613-622.

[6] Moon, D., Stallman, R., Weinreb, D.: Lisp Machine manual, 5th ed., MIT AI Laboratory, 1983.

[7] Goldberg, A., Robson, D.: Smalltalk-80. The language and its implementation. Addison-Wesley, 1983.

[8] Bobrow, D., Stefik, M.: The Loops manual. Xerox PARC, 1983.

[9] Gallaire, H.: Logic Programming - further developments. Proceedings of IEEE Symposium on Logic Programming, Boston, July 1985.

[10] Brachman, R., Pigman Gilbert, V., Levesque, H.J.: An essential hybrid reasoning system - knowledge and symbol level accounts of KRYPTON. Proceedings IJCAI-85, pp. 532-539.

[11] Vilain, M.: The restricted language architecture of a hybrid representation system. Proceedings IJCAI-85, pp. 547-551.

[12] Chen, P.: The entity-relationship approach: towards a unified view of data. ACM Transactions on Database Systems, Vol. 1, No. 1, March 1976, pp. 9-36.

[13] Stonebraker, M.: Thoughts on new and extended data models. Journees ADI - Bases de donnees avancees, Saint Pierre de Chartreuse, March 1985.

[14] Gallaire, H., Minker, J., Nicolas, J.M.: Logic and databases, a deductive approach. Computing Surveys, Vol. 16, No. 2, June 1984, pp. 153-185.

[15] Lloyd, J., Topor, R.W.: A basis for deductive database systems. TR 85-1, University of Melbourne, revised version, April 1985.

[16] Copeland, G., Maier, D.: Making Smalltalk a database system. ACM 0-89791-128-8/84/006/0316, pp. 316-325.
[17] Stickel, M.E.: Automated deduction by theory resolution. Journal of Automated Reasoning 1, 1985, pp. 333-355.
ATRANS: Automatic Processing of Money Transfer Messages

Steven L. Lytinen and Anatole Gershman
Cognitive Systems, Inc.
234 Church Street
New Haven, CT 06510

ABSTRACT

Unformatted natural-language money-transfer messages play an important role in the international banking system. Manually reading such messages and encoding them in the format understandable by a bank's automatic payment system is relatively slow and expensive. Due to the very restricted nature of the domain, the problem lends itself naturally to a Conceptual Dependency (CD), script-style solution. This paper illustrates the solutions to a number of problems that arise when an academic theory is applied to a real-world problem. In particular, we concentrate on the problem of context localization in the absence of reliable syntactic clues, such as sentence boundaries.

I. INTRODUCTION

This paper describes a real-world natural language understanding system, ATRANS (Automatic Funds TRANSfer Telex-Reader), which extracts information from telex messages. The messages are requests for transfers of money which banks send to each other. ATRANS reads these messages, extracts the necessary information, and then outputs it in a form suitable for automatic execution of the transfer. This paper will present an overview of the problems presented by the domain, outline the general solution, and discuss in more detail the solution to one of the problems, namely context localization and the resolution of semantic lexical ambiguities.

ATRANS routinely processes a wide variety of money transfer messages sent by banks around the world. These telexes are often composed by people whose ideas of English spelling, sentence construction, standard abbreviations, amounts and date conventions are very different from Standard American English. In addition, since these messages were intended for human visual inspection, senders very often introduce various kinds of visual "embellishments" such as table formats, stars, dashes, frames, etc., which can easily confuse a purely linguistics-based analyzer. In spite of these difficulties, ATRANS correctly extracts approximately 80% of the desired information fields. About 15% of the information items are missed and 5% are identified incorrectly. (When ATRANS has any "doubts," an item is not filled, rather than filled incorrectly.) With about half of the messages, all information fields are processed completely and correctly. All messages are then verified and, if necessary, corrected by a human operator.

*The ATRANS System was developed by Steve Lytinen, Steve Miklos, Anatole Gershman, Michael Lipman, Richard Wyckoff, and Ignace D'Haenens.

In the next section we introduce the domain of international money transfer messages and outline some of the major difficulties it presents. Section 3 presents our general approach to the solution of the problem. Section 4 discusses the problem of context localization in greater detail.

II. THE DOMAIN OF MONEY-TRANSFER MESSAGES

We will begin by presenting two simple examples of international money-transfer telex messages.
    FROM: GEBABEBB18A : GENERALE BANK ANTWERPEN
    TO  : TLX CTIUS33 : BIG BANK NEW YORK NY
    REF : 1977675454
    MSG : NORMAL
    TEST: 51375 : BRUSSELS ON 1748 USD
    TLXN011/1909TB VALUE 851118
    DEBITING GENERALE BRUSSELS
    CREEDIT USD 174.806,65
    TO : CREDIT LYONNAIS PARIS
    REF : FX / CVDW / 96098 / 45492
    COLL 174.806,65
    TX : 15/11/85 14 28   ISN : 00125
         15/11/85 14 40   OSN : 00005
    BGBKUS33XXX MEDIC REF ORG/NEW 10630/13769

This telex requests that $174,806.65 be transferred from the account of Societe Generale de Banque, Brussels (from "debiting Generale Brussels" in the text) to the account of Credit Lyonnais, Paris. Presumably both banks have accounts with Big Bank, New York. Thus, Big Bank should simply transfer this amount of money from one account to the other. Most messages also include several other pieces of information. The value date of Nov. 18, 1985 (from "value 851118") means that any currency exchanges necessary for this transaction should be done using the exchange rates for this date. The test key, 51375, is used to verify the authenticity of the message. It is computed from the value date and the amount and currency of the transaction. Reference numbers such as "FX / CVDW / 96098 / 45492" are attached by the sender and the beneficiary to provide both a unique identification of the transfer and an audit trail.

All of this information is converted by ATRANS into a standard format, from which it then generates an output format appropriate for the client's payment processing system. The following is a fragment of the standard format for the above message produced by ATRANS.

    Test key:              51375
    Amount:                174806.65
    Currency:              USD
    Value Date:            Nov. 18, 1985
    Sender name:           General Bank
    Sender city:           Antwerpen
    Sender ref:            TLXN011/2909TB
    Beneficiary ref:       FX/CVDW/96098/45492
    Credit party account:  12345678
    Credit party name:     Credit Lyonnais
    Credit party city:     Paris
    Debit party account:   87654321
    Debit party name:      General Bank
    Debit party city:      Brussels

What is required to process a message such as the above example? First, most messages contain a great deal of irrelevant information. In this telex, there are strings of characters identifying telex lines, message numbers, etc. Some messages even contain greetings from telex operators or other irrelevant text. The program must, therefore, be quite robust, capable of accounting for, or ignoring, every word in the input.

Lexical access in the system must also be very robust. First, words are sometimes misspelt, such as "creedit" above. Second, the names of banks and customers are often given in the messages in non-standard ways. The above message mentions "Generale Brussels," which refers to a bank in Brussels whose full name is "Societe Generale de Banque." The same bank is also often referred to as SGB. The system must be able to identify which bank is referred to by these non-standard names.

The problem of bank and customer name recognition is very serious. There are many variations of what constitutes the "standard" name of a bank. The "standard" name of the New York branch of Barclays Bank is "Barclays Bank of New York," which is rarely used by telex senders. Instead, we often see something like "Barclays, New York." The Flemish branches of Societe Generale de Banque are called Generale Bankmaatschappij, the British Commonwealth branches of the same bank are called Belgian Bank, and the German branches, Belgische Bank.
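The matching machinery is not detailed at this point in the paper. Purely as an illustrative sketch under our own assumptions (hypothetical database keys and alias table, not ATRANS code), one simple approach indexes the customer database under normalized tokens and known aliases:

    # Illustrative sketch only (not the ATRANS algorithm): resolving
    # non-standard bank names by normalized-token lookup with aliases.
    import re

    BANKS = {                                        # hypothetical database entries
        "societe generale de banque, brussels": "SGB-BRU",
        "credit lyonnais, paris": "CL-PAR",
    }
    ALIASES = {"sgb": "societe generale de banque"}  # hypothetical alias table

    def normalize(name):
        out = []
        for t in re.findall(r"[a-z]+", name.lower()):
            out.extend(ALIASES.get(t, t).split())    # expand known aliases
        return out

    def lookup(name, city):
        """Return the database key whose tokens best overlap the input."""
        wanted = set(normalize(name)) | {city.lower()}
        best, score = None, 0
        for full, key in BANKS.items():
            hits = sum(1 for t in set(normalize(full)) if t in wanted)
            if hits > score:
                best, score = key, hits
        return best

    print(lookup("Generale", "Brussels"))   # -> SGB-BRU
    print(lookup("SGB", "Brussels"))        # -> SGB-BRU

A production system would of course need fuzzier matching and disambiguation against tens of thousands of entries; the sketch only shows why normalization plus aliases recovers names like "Generale Brussels."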
In most cases, people will use the name of the bank that is most common in their own country. Thus, a beneficiary of a transfer may be specified as "Societe Generale de Banque, Antwerpen," even though the telex receiver's database does not list a bank under that name in Antwerp. This problem is compounded by the fact that there is no single complete database with "standard" bank names. Each bank uses its own, which in most cases was originally designed for mailing purposes and was typed in by several generations of secretaries. (In one such database, we found about 1200 entries beginning with "TO:". A typical large bank's database of corresponding banks and commercial customers contains anywhere from 20,000 to 40,000 entries.)

Messages are often ungrammatical and are usually written in one very long sentence, which gives no clues as to where different sections of the message begin and end. In addition, the input often contains ambiguous lexical items. In this example, both the recipient of the telex message (Big Bank) and the beneficiary of the transaction (Credit Lyonnais) are marked by the word "to." Similarly, the word "credit" is used as a synonym for the word "pay," but it also appears as the first word in the name of a bank. Similar expressions are interpreted differently depending on where in the telex they are encountered.

The way in which numbers should be interpreted in this message also varies. After the word "value," the program must know to interpret the number "851118" as a date (Nov. 18, 1985). However, if the same string of numbers appeared after a currency type, such as "USD" (U.S. Dollars), then it would be interpreted as an amount, or $851,118.00. Similarly, after "ref," which indicates that a reference number follows, numbers must simply be treated as strings, copied verbatim into the reference field. (A small illustration of this context dependence appears after the script below.)

The above examples show that even in such a narrow domain as money-transfer telexes, a text understanding system must show a great deal of flexibility, both in tolerating the appearance of lexical items in the text which are unknown to the program and in determining when known words or phrases are misspelled or referred to in non-standard ways. In addition, the extraction of standard fields for money transfers must proceed without explicit cues, such as separate sentences, that might indicate where the fields can be found, and must take place in the presence of lexical ambiguities that can complicate the process.

III. HOW ATRANS WORKS

To deal with the problems outlined in the last section effectively, ATRANS uses a knowledge-based approach to text analysis. Although the structure of telex messages can vary a great deal, their content is very predictable. We can use the predictability of the content to guide the parsing process and overcome the problems we discussed earlier. Much of ATRANS' knowledge of the input domain can be organized in terms of a script [9], or a standard sequence of actions which we can expect to occur in a money transfer. The script is the following:

1. Customer OC (Originating Customer) in country A asks his local bank OB (Originating Bank) to send some money M to a beneficiary BC (Beneficiary Customer) in country B.

2. Bank OB asks a large international bank SB (Sender Bank) in country A to forward the money.

3. Bank SB sends a request (the message that we are reading) to its corresponding bank RB (Recipient Bank) in country B.

4. Bank RB pays the money to a local bank BB (Beneficiary Bank) with whom the beneficiary customer has an account.

5. Bank BB pays the beneficiary customer BC.

6. Bank RB wants to be reimbursed for the money it pays. According to the instructions contained in the message, it either debits SB's account with itself, or waits until the money is credited to one of its accounts with some other bank CB (Cover Bank).
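Returning to the number-interpretation problem mentioned above, the context dependence can be shown in a few lines of Python (our illustration; the lexicon names are ours, not ATRANS identifiers):

    # Illustrative sketch (ours): the same digit string reads differently
    # depending on which contextual lexicon is active.
    from datetime import date

    def interpret(token, active_lexicon):
        if active_lexicon == "date":                 # after "VALUE"
            y, m, d = int(token[:2]), int(token[2:4]), int(token[4:6])
            return date(1900 + y, m, d)
        if active_lexicon == "currency":             # after "USD", "BEF", ...
            return float(token)                      # an amount
        if active_lexicon == "reference":            # after "REF"
            return token                             # copied verbatim
        raise ValueError("no active lexicon accepts this token")

    print(interpret("851118", "date"))       # 1985-11-18
    print(interpret("851118", "currency"))   # 851118.0
    print(interpret("851118", "reference"))  # '851118'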
There are a number of variations of the above script, including a number of intermediary banks, banks trading on their own accounts, different methods of payment, etc. A message can also request several payments to different beneficiaries.

    raw unformatted message
              |
              v
       +------------+
       |  message   |
       | classifier |
       +------------+
              |
              v
       +------------+       +--------------+
       |    text    |  -->  | dictionaries |
       |  analyzer  |       +--------------+
       +------------+
              |
              v
       +-------------+      +---------------+
       |   message   | -->  | bank and      |
       | interpreter |      | customer db   |
       +-------------+      +---------------+
              |
              v
       +------------+
       |   output   |
       |  formatter |
       +------------+

    Figure 1: Structure of the ATRANS System

The ATRANS system consists of four parts, as illustrated in Figure 1. The message-classification module determines the type of message being processed and chooses a variation of the transfer script to be applied. If the message contains multiple transfers, the module identifies the common portions of the transfer and composes several single-transfer messages. "Visual" clues, such as table-like alignment of amounts and dates, play an important role in determining if a message contains a request for multiple payments.

The Text Analyzer is the heart of the system. It processes each telex from left to right, in a deterministic manner, producing a Conceptual Dependency (CD) representation [8] of the telex content. The Analyzer follows the general line of semantically-based predictive conceptual analyzers (for details, see [7], [1], [5], and [6]). The basic script for international money transfers consists of a number of frames, some of which can occur only in a prescribed order and some of which can occur anywhere in the message text. Using the script, the dictionaries, and the context localization mechanism (described in the next section), the Analyzer identifies the frames being referred to by the text (e.g., payment, test, cover, etc.) and sets up expectations which interpret and extract information items completing those frames (e.g., amounts, dates, banks, etc.).

The same information items can be specified in different places within the same message. For example, the sender of the telex can be explicitly stated in the beginning of the message (e.g., "Here is ..." or "from ..."), at the end of the message (e.g., "Regards, ..."), or as a telegraphic answerback key (e.g., "918824 ESTNCO G"). Some of this information may not be 100% reliable, as when the sender uses somebody else's telex machine, producing a misleading answerback key. However, if different passages in the text confirm one another, we can conclude with a high degree of confidence that the telex was understood correctly.

The Analyzer does not verify the extracted information or check it for consistency. This is the job of the Message Interpreter. It verifies and consolidates the extracted information items, looks up in the database the appropriate account numbers and customer addresses, and decides on the most appropriate method of payment. The result is represented internally in what we call a Universal Message Format.
From this format the Output Generator produces the output in the form appropriate for the particular user of the system (e.g., SWIFT, CHIPS, Fedwire).

IV. CONTEXT LOCALIZATION IN ATRANS

Now that we have given an overview of the problems which must be solved in order to process messages in the domain of international money transfers, we will concentrate on the solution of one of these problems: context localization and, in particular, how it is used to resolve lexical ambiguities.

It is well known that context can often eliminate semantic lexical ambiguities in texts. Words which in general have many different meanings often have only one possible meaning within a limited enough context. Riesbeck [7] presented the following example of this situation:

    John and Mary were racing. John beat Mary.

In general, "beat" has several meanings, such as "to hit repeatedly," "to be victorious in a competition," or "to mix thoroughly" (e.g., to beat an egg). However, in the context of "racing," it is clear that "beat" means "to be victorious in a competition."

In script-based systems, particular contexts "prime" or give preference to particular senses of ambiguous words by using what are called "scriptal lexicons" [2] [3]. In the above example, the word "racing" would activate expectations associated with the concept of racing, including a specialized vocabulary of "racing terms" in which the word "beat" would have the single meaning of "to be victorious."

ATRANS uses an extension of the scriptal lexicon idea to focus its expectations and resolve ambiguities. Instead of associating a scriptal lexicon with a relatively large script, ATRANS uses a hierarchy of local contexts, each of which uses a smaller "contextual lexicon." As is the case with every context-based system, the following issues must be addressed:

1. What is the mechanism by which a local context is activated?

2. How broad a range of word senses should a given context prime?

3. How long should a context be active (i.e., how do we know when the context has changed)?

To bring contextual information to bear on the resolution of ambiguities, ATRANS has a set of separate lexicons, each of which contains definitions for words or word senses which refer to a certain class of objects which the program must find. For example, one of the lexicons contains only names of banks. Another lexicon contains definitions of words which are likely to appear in addresses, such as "street," as well as names of cities and information about how to process numbers such as zip codes. Other lexicons contain only currency types, only words having to do with dates, or only non-bank customer names. Within any single lexicon, lexical items are unambiguous. For example, in the Address lexicon, numbers are defined exclusively as zip codes or street numbers, not as dates or amounts. In the Bank name lexicon, the word "Credit" is defined as the first word in the names of several banks, such as "Credit Lyonnais," but not as meaning the same thing as "pay."

During the processing of a telex message, ATRANS maintains a list of lexicons which are currently active. The system has a set of rules which determine when this list should be altered, either by activating new lexicons or de-activating currently-active lexicons. Thus, potential ambiguities are resolved by virtue of which lexicons are active when the word is encountered.
For example, if the Date lexicon is active, "851118" is interpreted as a date, because of the definition of a number in the Date lexicon. However, if the Currency lexicon were active, the definition of this same number would be interpreted as "$851,118.00." Similarly, if the Bank lexicon were active, the word "Credit" would cause the parser to try to match the input against bank names beginning with "Credit," rather than try to interpret the word as meaning "pay."

The types of lexicons we have described so far are appropriate when context predicts that a certain type of object will occur next in the input. For example, after the phrase "value date," it is very likely that a date will follow. Thus the Date lexicon is activated. At different times, however, the level of specificity of the expectations that context can provide varies a great deal. Because of this, ATRANS also has a range of lexicons which vary in their level of specificity. Because ATRANS' job is to find the fillers of particular fields in a telex message, which correspond to the most specific lexicons in the system, more general lexicons exist solely to determine when context can be refined enough to activate the specific lexicons.

For example, the most general lexicon, called the Telex lexicon, contains definitions of words which mark general divisions of the telex message, such as the heading, the body, and the sign-off. This lexicon contains words such as "from" and "to," which mark the beginning of a message header; "pay" and "credit" (the sense meaning "pay"), which often mark the beginning of the body of the message; and words such as "regards," which mark the end of the body. Part of the definitions of these words is information that activates more specific contexts. For example, after the word "pay," it is likely that only certain information about the transaction will appear, such as information about the beneficiary and intermediate banks. Thus, one lexicon which "pay" activates is the Pay lexicon, which contains definitions of words such as "in favor of," "to" (meaning "beneficiary"), "account," etc. The definitions of these words contain information which in turn causes more specific lexicons to be activated. For instance, since the beneficiary is likely to follow immediately after "in favor of," this phrase activates the Bank lexicon and the Customer lexicon.

Because of the way in which lexicons in ATRANS activate each other, they can be viewed as being arranged into a hierarchy. Very general lexicons at the top of the hierarchy, such as the Telex lexicon, contain definitions of words which activate lexicons at the next level of the hierarchy, such as the Pay lexicon. These lexicons in turn contain definitions which activate lexicons at the next level down. This continues down to lexicons at the bottom of the hierarchy, such as the Date lexicon, the Bank lexicon, etc., which look for specific fields in the transaction.
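As a rough Python rendering of this hierarchy (ours, with invented names, not the ATRANS dictionaries), each word definition can carry the list of more specific lexicons it activates:

    # Illustrative sketch (ours): word definitions carry the more specific
    # lexicons they activate, giving the hierarchy described above.
    TELEX_LEXICON = {
        "pay":     {"sense": "begin-body", "activates": ["PAY"]},
        "credit":  {"sense": "begin-body", "activates": ["PAY"]},
        "regards": {"sense": "end-body",   "activates": []},
    }
    PAY_LEXICON = {
        "in favor of": {"sense": "beneficiary-marker",
                        "activates": ["BANK", "CUSTOMER"]},
        "account":     {"sense": "account-marker", "activates": ["ACCOUNT"]},
    }
    LEXICONS = {"TELEX": TELEX_LEXICON, "PAY": PAY_LEXICON,
                "BANK": {}, "CUSTOMER": {}, "ACCOUNT": {}}

    def lookup_word(word, active):
        """Return (lexicon, definition) for the first active lexicon
        defining word, or None; within one lexicon a word is unambiguous."""
        for name in active:
            entry = LEXICONS[name].get(word)
            if entry is not None:
                return name, entry
        return None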
We will now address the problem of what words should be included in a contextual lexicon. Clearly, the words which directly refer to the expected concepts should be included. In many cases, however, the meanings of words which only indirectly refer to expected concepts should also be favored over other meanings of these words. For example:

    John went to a restaurant. He ordered a rare ...

At this point in the sentence, it is already possible to disambiguate "rare" to mean "not well done" rather than "highly unusual." However, this word does not refer to one of the roles or events which are explicit in the restaurant script. It refers to a property of food, which is an explicit role-filler, but does not refer directly to the food. In the ATRANS system, this problem is overcome in two ways. First, lexicons which contain word senses referring to particular objects also contain words referring to related concepts. For example, when mentioning the reimbursement account for a transaction, the telex message will often give the type of account or the branch of the sender bank to which the account belongs. Thus, in the Reimbursement lexicon, although the type of object explicitly being looked for is an account, words and phrases such as "branch," "head office," and "foreign office" are also included. Secondly, lexicons are often paired together, so that one lexicon will always be activated whenever another lexicon is activated. For instance, whenever ATRANS looks for a customer, both the Customer lexicon and the Address lexicon are activated, because it is likely that an address will accompany the customer name in the telex message.

Finally, we have to address the issue of context de-activation. Once a set of word senses is primed, how long should they stay primed? For example:

    John and Mary were racing. Mary won. John got mad and beat her.

At some point in this story, we must realize that the racing context no longer applies, and that "beat" therefore means "hit repeatedly."

The ATRANS system uses the hierarchical organization of its lexicons to determine when to switch contexts. At all times, the system maintains a stack of previously active lexicons. This stack is maintained so that the system can return to previously-active, less specific contexts when the specific expectations of currently active contexts are not met. Whenever the Analyzer encounters a word which is not defined in the current context but which does have a definition in one of the previous contexts on the stack, the Analyzer abandons the current context and restores the previous context. For example:

    TO: BIG BANK, NEW YORK
    PAY USD 100,000 IN FAVOR OF BANK A ACCOUNT
    WITH YOURSELVES IN COVER OF CREDOC #133563
    REGARDS,
    BANK B NEW YORK

The phrase "in cover of" activates a set of lexicons used to find information about reimbursement for the recipient bank. This set of lexicons includes the Bank lexicon, which contains bank names. However, in this particular message, no information about reimbursement is given. Therefore, the Analyzer needs to know when to stop looking for this information. When the word "regards" is reached, the Analyzer knows that the reimbursement context should be abandoned because "regards" is not defined in any of the currently-active lexicons but is defined in a previously-active lexicon, namely the Telex lexicon, which contains definitions of words which mark different sections of the telex message. Because of this, the context in which the Telex lexicon was active is restored, thus de-activating the context set up by "in cover of." In this case, since the telex context which is re-activated was active several contexts ago, the popping of the context stack also eliminates the possibility that other, more recently-active, contexts might be re-activated, such as the "pay" context, which looks for phrases such as "in favor of," "account," etc.
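The stack discipline just described can be sketched as follows, continuing the previous illustration (again ours, not ATRANS code); the demo tokens are given pre-chunked, so "in favor of" appears as a single token:

    # Continuing the sketch: the context stack. A word defined only in a
    # previously-active context pops the stack back to that context.
    def analyze(tokens):
        stack = [["TELEX"]]                  # each entry: the active lexicon list
        for word in tokens:
            hit = lookup_word(word, stack[-1])
            if hit is None:
                # search earlier contexts; pop everything above the match
                for depth in range(len(stack) - 2, -1, -1):
                    hit = lookup_word(word, stack[depth])
                    if hit:
                        del stack[depth + 1:]
                        break
            if hit:
                name, entry = hit
                print(word, "->", entry["sense"], "in", name)
                if entry["activates"]:
                    stack.append(entry["activates"])

    analyze(["pay", "in favor of", "regards"])

Here "regards" is undefined in the active BANK/CUSTOMER lexicons but defined in the Telex lexicon at the bottom of the stack, so every context opened since then is discarded at once, just as in the "in cover of" example above.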
V. CONCLUSION

We have presented a knowledge-based text understanding system which processes telex messages reliably and robustly in the domain of international money transfers. Although the input messages are noisy, including irrelevant text, misspellings, non-standard references to banks, and many ambiguities, the system's use of knowledge about the domain allows it to extract the important information in a robust manner.

We have presented in detail the solution to one of the issues that must be faced in such a system, namely the resolution of lexical ambiguities. ATRANS takes advantage of the fact that, in some contexts, words which in general are ambiguous can be treated as if they have only one meaning. Although the structure of telex messages gives us few contextual clues, ATRANS is able to use its knowledge of the domain to determine when particular contexts should be activated or de-activated.

To decide when a particular lexicon or set of lexicons should be activated, lexicons in ATRANS are arranged hierarchically. Thus, when expectations provided by context are very general, very general lexicons are used by the system. As context creates more specific expectations, more specific lexicons are activated. This approach also provides a natural solution to the problem of knowing when to de-activate a particular context. A stack of previous contexts is maintained by the system. Whenever a word or phrase which was defined in a previous context but not in the present context is encountered, this is taken as a signal that the present context should be abandoned and the previous context should be re-activated. In this way, the system is able to de-activate specific expectations at the appropriate times, and fall back on previously active general expectations to determine what the next context in the message should be.

In addition to benefits in performance, the use of local lexicons in ATRANS proves to have organizational benefits as well. Because the system uses local contexts, different programmers were able to develop parsing rules for different contexts independently.

In contrast to other message-parsing systems such as FRUMP [4] or TESS [10], which concentrate primarily on message classification and summarization, ATRANS carefully analyzes every word in a message, producing a highly detailed representation of its content. To the best of our knowledge, ATRANS is unique in its robust coverage of a domain at this level of detail.

Finally, we offer some implementational details. ATRANS is currently undergoing live testing at a major international bank. The system is implemented in the T dialect of LISP under the VAX/VMS operating system. The average processing time on a VAX 11/785 is under 30 seconds per telex.

REFERENCES

1. Birnbaum, L., and Selfridge, M. Problems in Conceptual Analysis of Natural Language. Research Report #168, Yale University Department of Computer Science, October 1979.

2. Charniak, E. Ms. Malaprop: A Language Comprehension Program.
Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Mass., August 1977.

3. Cullingford, R. Script Application: Computer Understanding of Newspaper Stories. Ph.D. Thesis, Yale University, 1978. Research Report #116.

4. DeJong, G. Skimming Stories in Real Time: An Experiment in Integrated Understanding. Ph.D. Thesis, Yale University, May 1979. Research Report #151.

5. Dyer, M. In-depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. Ph.D. Thesis, Yale University, May 1982. Research Report #219.

6. Gershman, A. Knowledge-based Parsing. Ph.D. Thesis, Yale University, 1979. Research Report #156.

7. Riesbeck, C. Conceptual Analysis. In Conceptual Information Processing, North-Holland, Amsterdam, 1975.

8. Schank, R.C. "Conceptual Dependency: A Theory of Natural Language Understanding." Cognitive Psychology 3, 4 (1972), 552-631.

9. Schank, R.C. and Abelson, R. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1977.

10. Young, S. and Hayes, P. Automatic Classification and Summarization of Banking Telexes. Proceedings of the Second Conference on Artificial Intelligence Applications, IEEE Computer Society, December 1985.
ROBOT NAVIGATION IN UNKNOWN TERRAINS OF CONVEX POLYGONAL OBSTACLES USING LEARNED VISIBILITY GRAPHS

B. John Oommen
School of Computer Science, Carleton University, Ottawa, K1S 5B6, CANADA

S.S. Iyengar and Nageswara S.V. Rao
Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA

R.L. Kashyap*
Department of Electrical Engineering, Purdue University, West Lafayette, IN 47907, USA

ABSTRACT

The problem of navigating an autonomous mobile robot through an unexplored terrain of obstacles is the focus of this paper. The case when the obstacles are 'known' has been extensively studied in the literature. The process of robot navigation in completely unexplored terrains involves both learning the information about the obstacle terrain and path planning. We present an algorithm to navigate a point robot in an unexplored terrain that is arbitrarily populated with disjoint convex polygonal obstacles in the plane. The navigation process is constituted by a number of traversals; each traversal is from an arbitrary source point to an arbitrary destination point. Initially, the terrain is explored using a sensor and the paths of traversal made may be sub-optimal. The visibility graph that models the obstacle terrain is incrementally constructed by integrating the information about the paths traversed so far. At any stage of learning, the partially learnt terrain model is represented as a learned visibility graph, and it is updated after each traversal. The proposed algorithm is proven to yield a convergent solution to each path of traversal. It is also shown that the learned visibility graph converges to the visibility graph with probability one, when the source and destination points are chosen randomly. Ultimately, the availability of the complete visibility graph enables the robot to plan globally optimal paths, and also obviates the further usage of sensors.

1. INTRODUCTION

Path planning and navigation is one of the most important aspects of autonomous roving vehicles. The find-path problem deals with navigating a robot through a completely known terrain of obstacles. This problem has been extensively studied by many researchers - Brooks [2], Lozano-Perez and Wesley [6], and Oommen and Reichstein [7] are some of the most important contributors. Another interesting problem in robot navigation deals with navigating a robot through an unknown or a partially explored obstacle terrain. Unlike the find-path problem, this problem has not been subjected to a rigorous mathematical treatment, and this could be attributed, at least partially, to the inherent nature of the problem. However, this problem has also been researched by many scientists - Chatila [3], Crowley [4], Iyengar et al. [5], Rao et al. [9], and Turchen and Wong [10] present many important results.

*Research supported in part by the National Science Foundation under CDR8500022.

In this paper we discuss a technique for the navigation of a point robot in an unexplored terrain that is arbitrarily populated with disjoint convex polygonal obstacles of unknown dimensions and locations. The robot is required to undertake a number of traversals; each traversal is from a source point to a destination point. Initially no information about the obstacle terrain is available. We note that for the obstacle terrain of our problem, the availability of the visibility graph enables the planning of optimal paths from any point to any point [6]. Our approach is based on incrementally acquiring the visibility graph of the obstacle terrain.
The outline of our solution is as follows: The initial traversals are based on a local navigation strategy that uses the sensor information obtained by scanning the terrain. At any stage in the navigation the terrain is characterized by a partially built visibility graph called the Learned Visibility Graph (LVG). The LVG is updated from time to time by integrating the information from the sensor readings. Then we use a global navigation strategy that uses the LVG in the regions where it is available, and resorts to local navigation in the regions where the LVG is not available. The two key issues here are path planning and learning. We show that the proposed technique always obtains a path to the destination point, if one exists. We also show that the LVG will converge to the VG with probability one, if the source and destination points are randomly chosen. In this paper, we present our basic results; details such as the correctness proofs of the algorithms can be found in our report [8].

Our treatment is more formal than many earlier approaches to this problem. This formal framework enables us to discuss issues such as the convergence of path planning, learning, etc., which are not very explicit in earlier approaches. Again, our problem is to be contrasted with the terrain acquisition problem [10], wherein the robot navigates with the only purpose of acquiring the terrain model. As stated earlier, in our problem the robot is required to execute a number of traversals in an unexplored terrain, and the learning phase of acquiring the LVG is a part of our solution (to make the later traversals more efficient).

The organization of this paper is as follows: Section 2 introduces the definitions and notations. The local navigation technique that incorporates learning and path planning is presented in section 3. In section 4, the power of the local navigation algorithm is enhanced by incorporating backtracking; as a result the interior restriction on the obstacle terrain is relaxed. In section 5, a global navigation strategy that makes use of the existing terrain model is presented. The important result that the learning eventually becomes complete is presented in section 6. The execution of the navigation algorithms on a sample obstacle terrain is presented in section 7.

2. NOTATIONS AND DEFINITIONS

The robot is assumed to be a point in a plane that is arbitrarily populated with stationary convex polygonal obstacles. Initially the terrain is completely unexplored, and the robot is required to undertake a number of traversals; each traversal is from a source point to a destination point. Furthermore, the obstacle polygons are mutually non-intersecting and non-touching. Of paramount importance to this entire problem is a graph termed the Visibility Graph (VG). The VG is a pair (V,E), where

(i) V is the set of vertices of the obstacles, and

(ii) E is the set of edges of the graph. A line joining the vertices vi and vj forms an edge (vi,vj) ∈ E if and only if it is an edge of an obstacle or it is not intercepted by any other obstacle.

Initially the VG is totally unknown to the robot, and the robot graduates through various intermediate stages of learning during which the VG is incrementally constructed. These intermediate stages of learning are captured in terms of the Learned Visibility Graph (LVG), which is defined as follows: LVG = (V*,E*), where V* ⊆ V and E* ⊆ E.
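As a concrete (and deliberately naive) illustration of the VG, the following Python sketch, ours rather than the authors', computes it by brute-force segment tests. Degenerate tangencies are ignored, and, matching the definition above, a segment is blocked only when it properly crosses an obstacle edge:

    # Illustrative sketch (ours): brute-force visibility graph for disjoint
    # convex polygonal obstacles; O(|V|^3) segment tests, vertices are tuples.
    def ccw(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def segments_cross(p, q, r, s):
        """Proper crossing of segments pq and rs; shared endpoints and
        collinear grazing are ignored (a simplification)."""
        if {p, q} & {r, s}:
            return False
        d1, d2 = ccw(p, q, r), ccw(p, q, s)
        d3, d4 = ccw(r, s, p), ccw(r, s, q)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    def visibility_graph(obstacles):
        """obstacles: list of polygons, each a list of vertex tuples in order.
        Returns the edge set E; per the definition, a segment is excluded only
        when some obstacle edge properly blocks it."""
        vertices = [v for poly in obstacles for v in poly]
        walls = [(poly[i], poly[(i+1) % len(poly)])
                 for poly in obstacles for i in range(len(poly))]
        edges = set(walls)                    # obstacle edges belong to E
        for i, vi in enumerate(vertices):
            for vj in vertices[i+1:]:
                if (vi, vj) in edges or (vj, vi) in edges:
                    continue
                if not any(segments_cross(vi, vj, a, b) for a, b in walls):
                    edges.add((vi, vj))
        return edges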
The LVG is initially empty, and is incrementally built. Ultimately, the LVG converges to the exact VG.

We assume, throughout this paper, that the robot is equipped with a sensor capable of measuring the distance to an obstacle in any specified direction. Also, we assume that the robot is equipped with sensors which enable navigation along the edges of an obstacle. Thus, the robot can navigate arbitrarily close to the obstacle edges. We denote the interior of any polygon w by INT(w). The straight line from the point P to the point Q is denoted by PQ. Further, η_PQ denotes the unit vector along the straight line PQ.

3. LOCAL NAVIGATION AND LEARNING

When the robot navigates in a completely unexplored terrain, its path of navigation is completely decided by the sensor readings. The obstacles in the proximity of the source point are scanned and a suitable path of navigation is chosen. This localized nature of the local navigation makes a globally optimal path unattainable in a terrain with an arbitrary distribution of obstacles. However, local navigation is essential during the initial stages of the navigation.

In this section, we propose a local navigation technique that enables the robot to detect and avoid obstacles along the path from an arbitrary source point S to an arbitrary destination point D. The robot is equipped with a primitive motion command MOVE(S,A,h), where

(a) S is the source point, namely, the place where the robot is currently located.

(b) A is the destination point, which may or may not be specified.

(c) h is the direction of motion, which is always specified.

If A is specified, then the robot moves from S to A in a straight-line path. In this case, the direction of motion h is the unit vector η_SA in the direction of SA. If A is not specified, then the robot moves along the direction h as follows: If the motion is alongside an edge of an obstacle, then the robot moves to the end point of the edge along the direction h. This end point is returned to the calling procedure as point A, as in Fig. 1(a). If the motion is not alongside an edge of an obstacle, then the robot traverses along the direction h till it reaches a point on the edge of an obstacle, as shown in Fig. 1(b). This point is returned as the point A to the calling procedure.

Fig. 1. Value returned by MOVE(S,A,h), when A is not specified.

For the treatment in this section we assume that the obstacles do not touch or intersect the boundaries of the terrain R. In other words, the obstacles are properly contained in the terrain R. For obstacles w_1, ..., w_n, this is formally represented as

    ∪_{i=1}^{n} INT(w_i) ⊂ INT(R)   (1)

As a consequence of this assumption there is always a path from a source point S to a destination point D. However, this restriction is removed in the next section.

We present the procedure NAVIGATE-LOCAL that uses a hill-climbing technique to plan and execute a path from an arbitrary source point S to an arbitrary destination point D. The outline of this procedure is as follows: The robot moves along SD till it gets to the nearest obstacle. It then circumnavigates this obstacle using a local navigation strategy. The technique is then recursively applied to reach D from the intermediate point. Further, apart from path planning, the procedure also incorporates the learning phase of acquiring the VG. The robot moves along the direction η_SD till it encounters an obstacle at a point A which is on the obstacle edge joining two vertices, say, A1 and A2.
At this point the robot has two possible directions of motion: along AA1 or AA2, as shown in Fig. 2. We define a local optimization criterion function J as follows:

    J = η_SD · h   (2)

where h is a unit vector along the direction of motion. Let h1 and h2 be the unit vectors along AA1 and AA2 respectively. Let h* ∈ {h1,h2} maximize the function J given in equation (2). The robot then undertakes an exploratory traversal along the direction -h* till it reaches the corresponding vertex, called the exploratory vertex. At this exploratory point the terrain is explored using the procedure UPDATE-VGRAPH. Then the robot retraces along the locally optimal direction h* till it reaches the other vertex S*, whence it again calls UPDATE-VGRAPH. The procedure NAVIGATE-LOCAL is recursively applied to navigate from S* to D.

Fig. 2. The robot has reached a point A on the obstacle.

The procedure UPDATE-VGRAPH implements the learning component of the robot navigation. Whenever the robot reaches a new vertex v, this vertex is added to the LVG. From this vertex, the robot beams its sensor in the direction of all the existing vertices of the LVG. The edge (v1,v) is added to the edge set E*, corresponding to each vertex v1 ∈ V* visible from v. The algorithm is formally presented as follows:

procedure UPDATE-VGRAPH(v);
input: The vertex v which is newly encountered.
output: The updated LVG = (V*,E*). Initially the LVG is set to (∅,∅).
comment: DIST(v1,v2) indicates the Euclidean distance between vertices v1 and v2, if they are visible to each other. This is the auxiliary information stored along with the LVG.
begin
1.  V* = V* ∪ {v};
2.  for all v1 ∈ V* - {v} do
3.    if (v1 is visible from v) then
4.      DIST(v1,v) = |v1 v|;
5.      E* = E* ∪ {(v1,v)};
6.    else
7.      DIST(v1,v) = ∞;
    endif
    endfor;
end;

The procedure NAVIGATE-LOCAL uses the motion command MOVE and the procedure UPDATE-VGRAPH during execution. This procedure is formally described as follows:

procedure NAVIGATE-LOCAL(S,D);
input: The source point S and the destination point D.
output: A sequence of elementary MOVE commands.
begin
1.  if (D is visible from S) then
2.    MOVE(S,D,η_SD)
3.  else
4.    if (S is on an obstacle that obstructs its view) then
5.      compute {h1,h2}, the two possible directions of motion;
6.      h* = the direction maximizing h_i · η_SD;
7.      if (S is a vertex) then
8.        if (S ∉ V*) then UPDATE-VGRAPH(S);
9.        MOVE(S,S*,h*);
10.     else
11.       MOVE(S,S1,-h*);   { exploratory trip to S1 }
12.       if (S1 ∉ V*) then UPDATE-VGRAPH(S1);
13.       MOVE(S1,S*,h*);   { retrace steps to S* }
14.       if (S* ∉ V*) then UPDATE-VGRAPH(S*);
        endif;
15.     NAVIGATE-LOCAL(S*,D);
16.   else
17.     MOVE(S,S*,η_SD);    { move to next obstacle }
18.     NAVIGATE-LOCAL(S*,D);
      endif;
    endif;
end;

We shall now present a theorem from [8] that shows that the procedure NAVIGATE-LOCAL converges.

THEOREM 1: The procedure NAVIGATE-LOCAL always finds a path from S to D in finite time. □

Note that we have chosen to minimize the projected distance along SD by maximizing the function J in equation (2). This method may not give rise to a globally optimal path, as shown in Fig. 3. Such counterexamples exist for any localized navigation scheme, for the want of global information about the obstacles.

Fig. 3. Solution given by local navigation may not be globally optimal.
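For concreteness, the hill-climbing choice of equation (2) can be sketched in a few lines (our illustration; point arguments are 2-D tuples):

    # Sketch (ours) of the direction choice in NAVIGATE-LOCAL: pick the
    # direction h* along the hit edge that maximizes J = eta_SD . h, eq. (2).
    import math

    def unit(p, q):
        dx, dy = q[0]-p[0], q[1]-p[1]
        n = math.hypot(dx, dy)
        return (dx/n, dy/n)

    def choose_direction(A, A1, A2, S, D):
        eta_SD = unit(S, D)
        h1, h2 = unit(A, A1), unit(A, A2)
        J = lambda h: eta_SD[0]*h[0] + eta_SD[1]*h[1]
        return h1 if J(h1) >= J(h2) else h2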
4. LIMITATIONS OF LOCAL NAVIGATION

The procedure NAVIGATE-LOCAL introduced in the previous section always yields a path, if one exists, when the obstacles do not touch the terrain boundaries, i.e., as long as condition (1) holds. The relaxation of the assumption in (1) results in two cases in which the procedure NAVIGATE-LOCAL is not guaranteed to halt:

(a) There is no path existing between the source point S and the destination point D. Fig. 4 shows one such case. It is to be noted that in this case, when the robot starts moving around the obstacle, its way is blocked in both possible directions.

(b) The angle between the obstacle edge and the terrain boundary is less than Π/2. In such a case the robot may be forced to move to a dead corner formed by the obstacle and the terrain boundary (see Fig. 5).

Fig. 4. No path from S to D.

Fig. 5. Dead corner S*, formed by the obstacle and the boundary.

In this section, we relax the condition in (1), and enhance the capability of NAVIGATE-LOCAL by imparting to it the ability to backtrack. The robot backtracks (by invoking procedure BACKTRACK) whenever it reaches a point from which no further moves are possible. This procedure intelligently guides the robot in the process of retracing. That is, the robot backtracks along the edges of the obstructing obstacle till an edge (S,S1) that makes an angle less than Π/2 with η_SD is encountered. The fact that such an edge exists is guaranteed by the convexity of the obstacles. The search for this edge is performed by the while loop of lines 3-6 of procedure BACKTRACK. As a result, the robot moves to a point from which NAVIGATE-LOCAL can take over. If, for the same obstacle, the robot has to backtrack twice, then there is no path between S and D. In other words, if a path from S to D exists, then the robot needs to backtrack at most once along the edges of any obstacle.

procedure BACKTRACK(S,D,S*);
input: The point D is the destination point. S is a dead corner, i.e., a vertex of an obstacle which is also on the boundary of the terrain.
output: A sequence of MOVEs from S in such a way that if a path exists, then it can be determined using NAVIGATE-LOCAL. The location S* is returned to the calling procedure.
begin
1.  S* = S;
2.  h* = the only permitted direction of motion on the obstacle;
3.  while (η_SD · h* ≤ 0) do
4.    S1 = S*;
5.    MOVE(S1,S*,h*);
6.    h* = the only permitted direction of motion on the obstacle;
    endwhile;
end;

The convergence of the procedure BACKTRACK is proved in the following theorem (see [8] for the proof).

THEOREM 2: The procedure BACKTRACK leads to a solution to the navigation problem, if one exists. □

We note that if a path exists between S and D, then the robot backtracks at most once for each obstacle that lies on its way from S to D. This is because, if the procedure BACKTRACK leads the robot to another "dead end" on the same obstacle, clearly the robot cannot navigate across the obstacle; hence no path exists between S and D. Let the procedure NAVIGATE-LOCAL with the enhanced capability to backtrack be called procedure NAVIGATE-BACKTRACK. This procedure utilizes NAVIGATE-LOCAL to navigate till the robot encounters a dead end. At this point, the procedure BACKTRACK is invoked, after which NAVIGATE-LOCAL is resumed. The navigation is stopped if no path exists between S and D. The formal statement and correctness proof of procedure NAVIGATE-BACKTRACK follow easily from those of NAVIGATE-LOCAL and BACKTRACK.

5. GLOBAL NAVIGATION

The procedures described in the preceding sections enable a robot to navigate in an unexplored terrain. But the navigation paths are not necessarily globally optimal from the path planning point of view. However, the extra work carried out in the form of learning is inevitable because of the lack of information about the obstacles. Furthermore, the LVG is gradually built as a result of learning. In the regions where the visibility graph is available, the optimal path can be found by computing the shortest path from the source point to the destination point on the graph. The computation can be carried out in time quadratic in the number of nodes of the graph by using Dijkstra's algorithm [1]. Such a trip can be obtained by using only computations on the LVG, not involving any sensor operations.

We shall now propose a technique that utilizes the available LVG in planning the navigation paths. In the regions where no LVG is available, the procedure NAVIGATE-LOCAL is used for navigation, and in these regions the LVG is updated for future navigation. The outline of the global navigation strategy is as follows:

procedure NAVIGATE-GLOBAL(S,D);
begin
1.  Compute-Best-Vertices(S*,D*);
2.  NAVIGATE-BACKTRACK(S,S*);
3.  Move-On-LVG(S*,D*);
4.  NAVIGATE-BACKTRACK(D*,D);
end;

Given S and D, two nodes S* and D* on the existing LVG are computed. The robot navigates from S to S* using local navigation. Then the navigation from S* to D* is along the optimal path computed using the LVG. Again, from D* to D, local navigation is resorted to. The computation of S* and D*, corresponding to line 1 of NAVIGATE-GLOBAL, can be carried out using various criteria. We suggest three such possible criteria below:

Criterion A: S* and D* are the nodes of the LVG closest to S and D. The computation of these nodes involves O(|V*|) distance computations.

Criterion B: S* is the vertex closest to the line SD. D* is similarly computed. Again the complexity of this computation is O(|V*|).

Criterion C: S* is the vertex which minimizes the angle S*SD. Again the complexity of this computation is O(|V*|).

The closeness of the paths planned by NAVIGATE-GLOBAL to the globally optimal path depends on the degree to which the LVG is built. The paths tend to be globally optimal as the LVG converges to the VG.
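The Move-On-LVG step can be realized with Dijkstra's algorithm over the learned graph, as noted above. A sketch of ours follows; it assumes D* is reachable on the current LVG and reuses the (v1,v) keyed DIST table built by UPDATE-VGRAPH:

    # Sketch (ours): Move-On-LVG as Dijkstra's algorithm over the LVG,
    # quadratic in |V*| as noted above.
    import heapq, math

    def shortest_path_on_lvg(V_star, E_star, dist, s_star, d_star):
        adj = {v: [] for v in V_star}
        for a, b in E_star:
            adj[a].append(b); adj[b].append(a)
        best = {v: math.inf for v in V_star}
        prev, best[s_star] = {}, 0.0
        heap = [(0.0, s_star)]
        while heap:
            d, v = heapq.heappop(heap)
            if v == d_star or d > best[v]:
                if v == d_star:
                    break
                continue
            for w in adj[v]:
                step = dist.get((v, w)) or dist.get((w, v))
                if d + step < best[w]:
                    best[w], prev[w] = d + step, v
                    heapq.heappush(heap, (best[w], w))
        path, v = [d_star], d_star
        while v != s_star:
            v = prev[v]; path.append(v)
        return path[::-1]                     # vertex sequence from S* to D*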
5. GLOBAL NAVIGATION

The procedures described in the preceding sections enable a robot to navigate in an unexplored terrain, but the navigation paths are not necessarily globally optimal from the path planning point of view. However, the extra work carried out in the form of learning is inevitable because of the lack of information about the obstacles. Furthermore, the LVG is gradually built as a result of learning. In the regions where the visibility graph is available, the optimal path can be found by computing the shortest path from the source point to the destination point on the graph. The computation can be carried out in quadratic time in the number of nodes of the graph by using Dijkstra's algorithm [1]. Such a trip can be obtained by using only computations on the LVG, without involving any sensor operations.

We shall now propose a technique that utilizes the available LVG in planning the navigation paths. In the regions where no LVG is available, the procedure NAVIGATE-LOCAL is used for navigation. In these regions the LVG is updated for future navigation. The outline of the global navigation strategy is as follows:

procedure NAVIGATE-GLOBAL(S, D);
begin
1.  Compute-Best-Vertices(S*, D*);
2.  NAVIGATE-BACKTRACK(S, S*);
3.  Move-On-LVG(S*, D*);
4.  NAVIGATE-BACKTRACK(D*, D);
end

Given S and D, two nodes S* and D* on the existing LVG are computed. The robot navigates from S to S* using local navigation. Then the navigation from S* to D* is along the optimal path computed using the LVG. Again, from D* to D, local navigation is resorted to. The computation of S* and D*, corresponding to line 1 of NAVIGATE-GLOBAL, can be carried out using various criteria. We suggest three such possible criteria below:

Criterion A: S* and D* are the nodes of the LVG closest to S and D. The computation of these nodes involves O(|V*|) distance computations.

Criterion B: S* is the vertex closest to the line SD. D* is similarly computed. Again, the complexity of this computation is O(|V*|).

Criterion C: S* is the vertex which minimizes the angle S*SD. Again, the complexity of this computation is O(|V*|).

The closeness of the paths planned by NAVIGATE-GLOBAL to the globally optimal path depends on the degree to which the LVG has been built. The paths tend to be globally optimal as the LVG converges to the VG.

6. COMPLETE LEARNING

Learning is an integral part of NAVIGATE-LOCAL, primarily because the robot is initially placed in a completely unexplored obstacle terrain, and the LVG is incrementally constructed as the robot navigates. The central goal of the learning is to eventually construct the VG of the entire obstacle terrain. Once the VG is completely constructed, the globally optimal path from S to D can be computed before the robot sets into motion, as in [6]. Furthermore, the availability of the complete VG obviates the further usage of sensors. Now we present two basic results about the learning process incorporated in our algorithm (see [8] for proofs).

THEOREM 3: If no point in the free space has a zero probability measure of being a source or a destination point or a point on a path of traversal, then the LVG converges to the VG with probability one. □

THEOREM 4: The number of sensor operations performed within the procedure UPDATE-VGRAPH to learn the complete VG is O(|V|²). □

In the next section we present some experimental results to illustrate the working of our technique.

Fig. 6. Unexplored terrain.
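The Move-On-LVG step of NAVIGATE-GLOBAL reduces to a shortest-path computation on the learned graph. A minimal Python sketch follows, reusing the LVG structure from the sketch in section 3; it assumes D* is reachable on the current LVG and is not meant as the paper's implementation.

    import heapq
    import math

    def shortest_path_on_lvg(lvg, s_star, d_star):
        # Build an adjacency list from the learned edges and their lengths.
        adj = {v: [] for v in lvg.vertices}
        for e in lvg.edges:
            u, w = tuple(e)
            adj[u].append((w, lvg.dist[e]))
            adj[w].append((u, lvg.dist[e]))
        best = {v: math.inf for v in lvg.vertices}
        prev = {}
        best[s_star] = 0.0
        heap = [(0.0, s_star)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > best[u]:
                continue                      # stale heap entry
            for w, length in adj[u]:
                if d + length < best[w]:
                    best[w] = d + length
                    prev[w] = u
                    heapq.heappush(heap, (d + length, w))
        path = [d_star]                       # reconstruct S* -> D*
        while path[-1] != s_star:
            path.append(prev[path[-1]])
        return list(reversed(path))

No sensor operations appear anywhere in this routine; it works purely on stored LVG data, which is the point of the global strategy.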
7. SIMULATED RESULTS

In this section, we describe experimental results obtained using the rectangular obstacle terrain shown in Fig. 6. Initially the terrain is unexplored and the LVG is empty. A sequence of five paths is undertaken in succession by the robot. In other words, the robot first moves to 2 from 1, then to 3 from 2, etc., till it reaches 6. Fig. 7(a)-(g) illustrate the various paths traversed and the corresponding LVGs. Initially, during the motion from 1 to 2, the robot learnt four edges of the VG, shown in Fig. 7(b). In the next traversal, seven more edges of the VG are learnt. A curve showing the number of edges learnt as a function of the number of traversals is given in Fig. 9. It is to be noted that as many as 31 out of the total 39 edges of the VG are learnt in five traversals.

Suppose that at this point the global navigation strategy is invoked to navigate to 7 from 6. The S* and D* obtained by using criterion (A) of section 5 are shown in Fig. 8. The robot navigates locally from S to S*, then along the LVG from S* to D*, and finally locally from D* to D. Note that the path from S* to D* does not involve any sensor operations, but only a quadratic-time computation on the LVG to find the shortest path.

8. CONCLUSIONS

In this paper, we propose a technique that enables an autonomous robot to navigate in a totally unexplored terrain. The robot builds the terrain model as it navigates, and stores the processed sensor information in terms of a learned visibility graph. The proposed technique is proven to obtain a path if one exists. Furthermore, the terrain is guaranteed to become completely learnt when the complete visibility graph of the entire obstacle terrain is built. After this stage the robot traverses along the optimal paths, and no longer needs the sensor equipment. The significance of this technique is the characterization of both the path planning and the learning in a precise mathematical framework. The convergence of both the path planning and the learning processes is proven.

REFERENCES

[1] AHO, A., J. HOPCROFT, and J. ULLMAN, The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Mass., 1974.
[2] BROOKS, R. A., Solving the Find-path Problem by Good Representation of Free Space, IEEE Trans. Systems, Man and Cybernetics, Vol. SMC-13, No. 3, March/April 1983.
[3] CHATILA, R., Path Planning and Environment Learning in a Mobile Robot System, Proc. European Conf. Artificial Intelligence, Orsay, France, 1982.
[4] CROWLEY, J. L., Navigation of an Intelligent Mobile Robot, IEEE J. Robotics and Automation, Vol. RA-1, No. 2, March 1985, pp. 31-41.
[5] IYENGAR, S. S., C. C. JORGENSEN, S. V. N. RAO, and C. R. WEISBIN, Robot Navigation Algorithms Using Learned Spatial Graphs, ORNL Technical Report TM-9782, Oak Ridge National Laboratory, Oak Ridge, August 1985; to appear in Robotica, Jan. 1986.
[6] LOZANO-PEREZ, T., and M. A. WESLEY, An Algorithm for Planning Collision-free Paths Among Polyhedral Obstacles, Commun. ACM, Vol. 22, No. 10, October 1979, pp. 560-570.
[7] OOMMEN, J. B., and I. REICHSTEIN, On Translating Ellipses Amidst Elliptic Obstacles, Proc. 1986 IEEE Int. Conf. on Robotics and Automation, San Francisco, California, 1986, pp. 1755-1760.
[8] OOMMEN, J. B., S. S. IYENGAR, N. S. V. RAO, and R. L. KASHYAP, Robot Navigation in Unknown Terrains Using Learned Visibility Graphs. Part I: The Disjoint Convex Obstacles Case, Tech. Rep. SCS-TR-86, School of Computer Science, Carleton University, Ottawa, Canada, Feb. 1986.
[9] RAO, N. S. V., S. S. IYENGAR, C. C. JORGENSEN, and C. R. WEISBIN, Concurrent Algorithms for Autonomous Robot Navigation in an Unexplored Terrain, Proc. 1986 IEEE Int. Conf. on Robotics and Automation, San Francisco, California, 1986, pp. 1137-1144 (a revised version to appear in J. Robotic Systems).
[10] TURCHEN, M. P., and A. K. C. WONG, Low Level Learning for a Mobile Robot: Environmental Model Acquisition, to be published in Proc. 2nd Int. Conf. on Artificial Intelligence and Its Applications, December 1985.

Fig. 7. Illustration of the navigation process.
Fig. 8. NAVIGATE-GLOBAL from 6 to 7.
Fig. 9. Graph showing the construction of the VG (number of edges learnt vs. number of paths traversed).
PLANNING SENSORLESS ROBOT MANIPULATION OF SLIDING OBJECTS

M. A. Peshkin and A. C. Sanderson*
Robotics Institute
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

ABSTRACT: The physics of motion of a sliding object can be used to plan sensorless robot manipulation strategies. Prediction of a sliding object's motion is difficult because the object's distribution of support on the surface, and the resulting frictional forces, are in general unknown. This paper describes a new approach to the analysis of sliding motion, which finds the set of object motions for all distributions of support. The analysis results in the definition of discrete regions of guaranteed sticking and slipping behavior which lend themselves to use in planning. Unlike previous work, our approach produces quantitative bounds on the rate at which predicted motions can occur. To illustrate a manipulation plan which requires quantitative information for its construction, we consider a strategy based on "herding" a sliding disk toward a central goal by moving a robot finger in a decreasing spiral about the goal. The optimal spiral is constructed, and its performance discussed.

KEYWORDS: Center of rotation, sliding, slipping, pushing, grasping, manipulation, sensorless manipulation, friction, robot, planning.

This work was supported by a grant from Xerox Corporation, and by the Robotics Institute, Carnegie-Mellon University.

1. Introduction

Sliding operations can be used constructively to manipulate and acquire objects, without sensing, and despite uncertainty in the orientation and position of the object. For instance, in a typical grasping operation a robot opens a two-jaw gripper wide enough to accommodate both an object to be grasped, and any uncertainty in the object's position. In the general case, the object will initially be nearer to one jaw than to the other, and as the jaws close the nearer jaw will make contact first. There follows a sliding phase until the second jaw makes contact. During the sliding phase the object is likely to rotate. (Once both jaws come into contact with the object, sliding on the table becomes less important than slipping of the object with respect to the faces of the jaws. This regime will not be considered here.) The behavior of an object during both phases is discussed in [1]. Brost finds grasp strategies which bring the object into a unique configuration in the gripper, despite substantial uncertainty in its initial configuration.

*Current address: Philips Laboratories, Briarcliff Manor, NY

Another example of the use of sliding is the interaction of an object on a moving belt or ramp (as in a parts feeder) with a fixed, slanted fence. (Equivalently, the object may be on a stationary table, and the fence moving under robot control.) One of many possible behaviors of the object when it hits the fence is to rotate until a flat edge is flush against the fence, and then to slide along the fence. The behavior of objects on encountering a fence has been considered in [1] and [2]. In [2], Mani and Wilson find strategies for manipulation which can orient an object on a table by pushing it in various directions with a fence. Each push aligns a facet of the object with the fence.

R. P. Paul demonstrated a clever grasping sequence on a hinge plate. The strategy makes use of sliding to simultaneously reduce the uncertainty of a hinge plate's configuration to zero, and then to grasp it [3] [5].
To understand this and similar operations, Mason [3] determined the conditions required for translation, clockwise (CW) rotation, and counter-clockwise (CCW) rotation of a pushed object. Mason's results are used in both [1] and [2], and also in this work.

The contribution of our work, summarized in section 2, is to place quantitative bounds on the rate at which a predicted motion occurs, and to demonstrate the application of these bounds to the planning of manipulation tasks. Without rate information, none of the above methods can produce manipulation strategies guaranteed to succeed. For instance, to implement one of Mani and Wilson's orientation strategies, it is necessary to find the worst-case distance a sliding object must be pushed by a fence before a flat edge of the object comes into alignment with the fence. The rate information found by our method can be used to determine that worst-case distance.

2. Summary of Analytical Results

A sliding object has three degrees of freedom. If we require the object to be in contact with another object (a pusher), the sliding object retains two degrees of freedom, which are most conveniently expressed as the coordinates of a point in the plane called the center of rotation (COR). Any infinitesimal motion of the object can be expressed as a rotation δθ about some COR, chosen so that the infinitesimal motion of each point of the object is perpendicular to the vector from the COR to that point.

Finding the COR of a sliding object in planar contact with a surface is complicated by the fact that changes in the distribution of support forces under the object substantially affect the motion. The distribution of support may be changed dramatically by tiny deviations from flatness of the surfaces. Since we wish to determine the motion of any object, without knowing the distribution of support for it, our goal is to find the locus of CORs under all possible support distributions. The coefficient of friction with the supporting surface (μs) does not affect the motion of the object if we use a simple Coulomb model of friction. It is also assumed that all motions are slow (the quasi-static approximation [4]).

We will take the object being pushed to be a disk with its center of mass (CM) at the center. Given another object of interest (e.g. a square), we can consider a disk centered at the CM of the square, big enough to enclose it. The radius a of the disk is the maximum distance from the CM of the square to any point on the square. Since any support distribution on the square could also be a support distribution on the disk, the COR locus of the disk must enclose the COR locus of the square. The locus for the disk therefore provides useful bounds on the locus for the real object.

The parameters of the COR problem are the point of contact c between the pusher and the object, and the angle α between the edge and the line of pushing, as shown in figure 2-1. The values of α and c shown are the ones which are needed in considering the motion of the five-sided object shown inscribed in the disk. We do not require the point of contact c to be on the perimeter of the disk, as this would eliminate applicability of the results to objects inscribed in the disk. Similarly, we do not require α to be such that the edge being pushed is perpendicular to the vector c, as it would be if the object were truly a disk.
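Since the bounds that follow are all expressed in terms of the enclosing disk, it may help to see how the radius a is obtained for a polygonal object. A small Python sketch, assuming the object is described by its CM and a vertex list (names are this sketch's own):

    import math

    def enclosing_disk_radius(cm, vertices):
        # Radius a of the disk, centered at the CM, that encloses a polygonal
        # object: the maximum distance from the CM to any vertex (for a
        # convex polygon the farthest point is always a vertex).
        return max(math.dist(cm, v) for v in vertices)

    # Unit square with its CM at the centroid: a = sqrt(2)/2.
    a = enclosing_disk_radius((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)])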
The disk (with radius a), α, c, and the CM are shown in figure 2-1. A particularly simple distribution of support forces, in which the support is concentrated at just a "tripod" of three points, is indicated, along with what might be the COR for that distribution of support.

Figure 2-1: Parameters of the pushing problem.

Peshkin and Sanderson [6] analyze the motion of the sliding object in detail. The approach is to minimize the energy dissipated by friction with the surface for arbitrary infinitesimal motions. Analytical relations are found between the set of all support distributions, an intermediate formulation called the Q-locus, and the locus of CORs. Boundaries of the COR locus are found by evaluating the resulting analytical expressions.

Figure 2-2 shows examples of the COR loci found [6] for various values of α and c. In each section of the figure, the angle α of the edge with respect to the line of pushing is indicated. The edge may be the front edge of a fence pushing a corner of an inscribed object, or it may be an edge of the inscribed object in contact with a pushing point (as in figure 2-1). c is the vector from the CM (at the center of the disk) to the point of contact indicated by the arrowhead. The boundary of the COR locus is shown in bold outline. Every point within the locus is the COR for some possible distribution of support forces on the disk. Our results [6] indicate that no distribution of support forces can result in a COR outside the boundary shown. In figure 2-2, the coefficient of friction between the pusher and the object (μc) is zero. These elementary COR loci are denoted {COR}α.

Figure 2-2: Elementary COR loci for various values of α and c (μc = 0).

Defining the unit vector α̂ = (cos α, sin α), we observe that the COR loci have an axis of symmetry about α̂. Note that the pushing force is directed perpendicular to α̂ (not parallel to the line of motion), since μc = 0. We see in figure 2-2(c) that if the pushing force is directed from the point of contact almost directly through the CM, the maximum distance from the CM to an element of the COR locus becomes great. This distance, called r_tip, is infinite if the pushing force is directed at the CM, as shown by Mason [3]. In [6] we found a simple formula for r_tip:

r_tip = a² / (c sin α)    (1)

As the angle α is varied, the tip of {COR}α traces out a straight line called the tip line. The tip line (figure 2-3) is perpendicular to c, and a distance a²/c from the CM.

If the coefficient of friction between pusher and object, μc, is non-zero, we find that we can combine two of the elementary (μc = 0) COR loci (such as are shown in figure 2-2) and the tip line construction to create a COR sketch comprising all the possible locations of the COR for the system with non-zero friction μc. A COR sketch is shown in figure 2-4. The two elementary COR loci which are to be combined are shown in outline. The half-width of the friction cone is ν = tan⁻¹ μc. The locus of possible values of the COR consists of three distinct regions: (1) the part of the elementary COR locus {COR}α+ν left of the sticking line (shown shaded), (2) the part of {COR}α−ν right of the sticking line (also shaded), and (3) the part of the sticking line (described below) above the tip line.

Figure 2-3: r_tip(α) vs. α, and construction of the tip line.

Figure 2-4: Construction of the COR sketch.
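A small numerical illustration of the tip construction may be helpful. The sketch below uses equation (1) in the form reconstructed above, so the exact expression should be checked against [6]; the function names are this sketch's own.

    import math

    def r_tip(a, c, alpha):
        # Distance from the CM to the tip of {COR}_alpha, per equation (1)
        # as reconstructed here: r_tip = a**2 / (c * sin(alpha)). It
        # diverges as the pushing force becomes aimed directly at the CM.
        return a ** 2 / (c * math.sin(alpha))

    def tip_line_distance(a, c):
        # The tip line is perpendicular to c at a fixed distance a**2/c
        # from the CM, independent of alpha.
        return a ** 2 / c

    for deg in (90, 45, 10, 1):        # r_tip grows without bound
        print(deg, r_tip(a=1.0, c=0.5, alpha=math.radians(deg)))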
2.1. Modes of Motion

The sticking line is the normal to the line of motion of the pusher, at the point of contact. If the COR lies on the sticking line, there is no slipping of the object relative to the pusher as motion advances. If the COR lies left (resp. right) of the sticking line, the object slips down (resp. up) relative to the pusher as motion advances. The three component parts of the COR sketch described above are designated the down-slipping, up-slipping, and sticking loci, respectively. In the example shown, any of the three modes of motion (sticking, slipping down, slipping up) is possible, but this is not always the case. We can determine the possible modes of motion, and their minimum and maximum rates, by constructing the COR sketch [7]. Whether a clockwise or a counterclockwise mode of rotation occurs can also be determined from the COR sketch, or by using the rules found by Mason [3].

2.2. Application to Gross Motion

We have seen how bounds can be placed on the possible instantaneous motions of a sliding object being pushed by another object, in the presence of unknown frictional forces between object and table, and between object and pusher. Often we wish to calculate not the bounds on the instantaneous motion, as above, but bounds on a gross motion of the object which can occur concurrently with some other gross motion of known magnitude. (For instance, we may wish to find bounds on the displacement of the pusher which occurs while the object rotates 15 degrees.) Our approach to dealing with gross motion follows a definite strategy, outlined below and explained in more detail in [7]. Examples are given in [7].

Suppose we wish to find the greatest possible change in a quantity x while a quantity β changes from β_initial to β_final. From the geometry of the problem we find a differential equation of motion relating the instantaneous motions dx and dβ. We then construct the COR sketch for each value of β. In each sketch we locate the possible COR which maximizes dx/dβ. Using that COR, we integrate the differential equation of motion from β_initial to β_final, yielding an upper bound for the quantity x.
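This integration strategy is mechanical enough to sketch generically. In the Python fragment below, rate_max(beta) is an assumed problem-specific callback returning the largest dx/dβ attainable over the COR sketch at a given β; the fragment merely performs the worst-case Euler integration described above and is not taken from the paper.

    def gross_motion_bound(rate_max, beta_initial, beta_final, steps=10000):
        # Integrate the differential equation of motion, always choosing
        # the COR that maximizes dx/dbeta, to bound the total change in x.
        dbeta = (beta_final - beta_initial) / steps
        x, beta = 0.0, beta_initial
        for _ in range(steps):
            x += rate_max(beta) * dbeta
            beta += dbeta
        return x                        # upper bound on the change in x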
3. Planning Sensorless Manipulation

Planning a sensorless manipulation strategy requires construction of a sequence of interactions of pusher with pushed object, such that the set of all possible positions of the object (which in our case is a subset of three-dimensional configuration space) is reduced from an initial volume to a smaller final volume. Optimally, the final set of configurations consists of just a point in configuration space. Manipulation with sensory feedback permits comparison of intermediate states with goal states in order to modify the control strategy. In sensorless manipulation, prediction of intermediate states depends on reliable models of motion. The models are used to determine preconditions and results of each operation, and a plan evolves by matching a series of subgoals to bounds on motion in order to constrain resulting configurations.

For planning of manipulation strategies it is useful to have a graphical representation of the mode of motion of the sliding object (i.e. clockwise rotation, counterclockwise rotation, up-slipping, down-slipping, and sticking), as a function of the parameters which determine the mode: object orientation and direction of pushing. Since in many cases more than one mode is possible, the regions corresponding to each mode may overlap. Brost [1] and Mani [2] have independently developed graphical representations for a simplified set of modes of motion consisting of only clockwise and counterclockwise rotation. Using our results, the representations can be extended to incorporate the slipping and sticking modes.

Design of manipulation strategies requires, first, that the possible modes of motion be understood. This understanding can be used to guide a search for pusher motions which reduce the configuration-space volume of the possible positions of the pushed object. But using only the set of modes, it is not possible to produce a guaranteed manipulation strategy. It is also necessary to understand the quantitative response of the pushed object to any proposed push. Since in general one must consider the effect of a proposed push on every possible initial position of the pushed object, it is essential that calculating the effect of a push be computationally inexpensive. The qualitative and quantitative results described in section 2 fulfill both of these requirements.

4. Spiral Localization of a Disk

Several examples of sensorless manipulation strategies are analyzed in [6] and [7], using the results summarized above (section 2). In this section we describe an unusual robot motion by which a disk, free to slide on a tabletop, can be localized without sensing. The approach is to enclose the set of possible initial positions of the disk within a spiral executed by a robot finger. As the spiral decreases in radius, the disk is to be pushed towards the center of the spiral. Our quantitative results allow us to optimize this strategy by choosing the maximum convergence rate of the spiral, subject to the constraint that the object must not escape.

If the disk is known initially to be located in some bounded area of radius b_0, we begin by moving a point-like pusher in a circle of radius b_0. Then we reduce the pusher's radius of turning by an amount Δb with each revolution, so that the pusher's motion describes a spiral. Eventually the spiral will intersect the disk (of radius a), bumping it. We wish the disk to be bumped toward the center of the spiral, so that it will be bumped again on the pusher's next revolution. If the spiral is shrinking too fast, however, the disk may be bumped out of the spiral instead of toward its center, and so the disk will be lost and not localized. We wish to find the maximum shrinkage Δb consistent with guaranteeing that the disk is bumped into the spiral, and not out. (Δb will be a function of the present spiral radius.) We also wish to find the number of revolutions which will be required to localize the disk to some radius b with a < b < b_0, and the limiting value of b, called b_∞, below which it will not be possible to guarantee localization, regardless of the number of revolutions.

Figure 4-1: Geometry at the moment of the second collision of pusher and disk.

4.1. Analysis

Suppose the pushing point has just made contact with the disk.
Since the previous revolution had a radius only Δb greater than the current revolution, the pusher must contact the disk at a distance at most Δb from the edge of the disk, as shown in figure 4-1. We will consider only the worst case, where the distance of the pusher from the edge is the full Δb.

We know that if Δb < a the disk will move downward [3]. This is not sufficient to assure that the disk will be pushed into the spiral (rather than out of the spiral), because the pushing point will also move down as it continues along its path (figure 4-1). To guarantee that the disk will be pushed into the spiral, we must make sure that it moves down faster than does the pushing point. One way of comparing rates of moving down is by considering the increase or decrease of the angle β, called the collision parameter, in figure 4-1. If β increases as the pusher's motion along its spiral progresses, the disk is being pushed into the spiral; localization is succeeding. But if β decreases as the pusher's motion progresses, the disk is being pushed out of the spiral; localization is failing.

4.2. Critical Case: Pusher Chasing the Disk around a Circular Path

In the critical case the angle β does not change with advance of the pusher. The critical case, shown in figure 4-2, is unstable. The pusher's motion is shown as an arc of a circle, labeled path of pusher, and centered at PC. To maintain the critical case, the path followed by the CM of the disk (labeled critical path of CM) must be as shown in the figure: an arc of a circle, concentric with the arc path of pusher. Instantaneously, the direction of motion of the CM must be along the line labeled motion of CM, tangent to the critical path of CM. The critical line, drawn through PC and the CM, is by construction perpendicular to the line motion of CM. The COR of the disk must fall on the critical line, in order that the instantaneous motion along the line motion of CM be tangent to the critical path of CM.

If the COR falls to the left of the critical line, the CM diverges from the critical path of CM by moving inside the arc. Therefore β will increase with advance of the pusher (i.e. localization is succeeding). If the COR falls to the right of the critical line, the CM diverges from the critical path of CM by moving outside the arc. Therefore β will decrease with advance of the pusher (i.e. localization is failing). The critical line thus divides the plane into two zones: if the COR falls in the left zone, the disk is pushed into the pusher circle, while if the COR falls in the right zone, the disk is pushed out of the pusher circle.

In figure 4-3 we have constructed the COR sketch with collision parameter β. To make sure that the whole COR locus falls to the left of the critical line, we need only place the center of the pusher motion (PC) below the lower endpoint of the sticking locus. The figure shows the marginal case where PC is exactly at the lower endpoint of the sticking locus.

Figure 4-2: Critical case: pusher "chasing" disk around a circular path.

Figure 4-3: COR sketch for critical case, and solution for location of PC.

4.3. Critical Radius vs. Collision Parameter

For every value of β (the collision parameter), we compute the distance from the pusher's line of motion to the lower endpoint of the sticking locus. This defines a critical radius r*(β). For each collision parameter β, r*(β) is the radius of the tightest circle that the pusher can describe with the guarantee that the disk will be pushed into the circle. In figure 4-4, a/r*(β) is plotted as a function of the collision parameter β for each of several values of μc. The inverse of the function r*(β) will be denoted β*(r), representing the smallest value of β for which a pusher motion of radius r still results in guaranteed localization.

Figure 4-4: Inverse of the radius of the critical circle, a/r*(β), as a function of collision parameter β/π.
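If r*(β) has been tabulated (for instance from the COR-sketch construction behind figure 4-4), its inverse β*(r) can be obtained by a simple scan. A hedged Python sketch, assuming r_star is a supplied function that is decreasing in β as the figure suggests:

    import math

    def beta_star(r, r_star, samples=10000):
        # Smallest collision parameter beta in (0, pi/2] with r >= r*(beta).
        for i in range(1, samples + 1):
            beta = (math.pi / 2) * i / samples
            if r_star(beta) <= r:
                return beta
        return None    # r is below the limiting radius; no guarantee possible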
In terms of the pusher's distance from the top edge of the disk, d (figure 4-3), we can use the relationship

a (1 − sin β) = d    (2)

to define the critical distance from grazing, d*(r), as a function of r. d*(r) is the largest distance of the pusher from the top edge of the disk for which a pusher motion of radius r still results in guaranteed localization.

4.4. Limiting Radius for Localization

If there is a limiting radius b_∞ of the spiral motion below which localization cannot be guaranteed, then as the spiral approaches radius b_∞ the motion must become circular. Δb → 0 as b_∞ is approached, so collisions become grazing collisions, and we have the distance from grazing d → 0. (In terms of the collision parameter β, we have β → π/2.) If the disk is not to be bumped out of the spiral, we must have b_∞ = r*(β = π/2). b_∞ can be shown analytically to be

b_∞ = a (μc + 1)  for μc ≤ 1
b_∞ = 2a          for μc ≥ 1    (3)

We see that only if μc = 0 can a disk be localized completely, i.e. localized to within a circle of the same radius as that of the disk. Otherwise the tightest circle within which the disk can be localized is given by equation 3.

4.5. Computing the Fastest Guaranteed Spiral

Let b_n be the radius of the nth revolution of the pusher, so that we have initially radius b_0, and b_∞ is the limiting radius as n → ∞. We define recursively

b_n = b_(n-1) − d*(b_n)    (4)

The difference between the radii of consecutive turns n−1 and n of the spiral is Δb = d*(b_n). Equation 4 thus enforces the condition that on the nth revolution, the value of d is exactly the critical value for circular pushing motion of radius b_n.

Figure 4-5 shows the deviation of the spiral radius b_n above b_∞, vs. the number of turns n, on logarithmic and on linear scales. We start (arbitrarily) with b_0 = 100a. The spiral radius was computed numerically for μc = 0.25, using the results for β*(r) shown in figure 4-4, and equation 4. Figure 4-5 shows that when the spiral radius is large compared to the disk radius a (which is taken to be 1 in the figure), we can reduce the radius of the spiral by almost a with each revolution. As the limiting radius is approached, the spiral reduces its radius more and more slowly, approaching the limiting radius b_∞ as about n^(-1.6), where n is the number of revolutions. Figure 4-5 demonstrates the best performance that the "herding" strategy can achieve.

Figure 4-5: Deviation of spiral radius from the ultimate localization radius, vs. number of spiral revolutions completed (log and linear scales; localization of a unit disk, for μc = 0.25).
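Equations (2) and (4) together determine the spiral numerically. The following Python sketch solves the implicit recursion by fixed-point iteration; β*(r) is assumed supplied (e.g. by the inversion sketched in section 4.3), and convergence of the iteration is assumed rather than proven here.

    import math

    def d_star(r, beta_star_fn, a=1.0):
        # Critical distance from grazing via equation (2):
        # d = a * (1 - sin(beta)) evaluated at beta = beta*(r).
        return a * (1.0 - math.sin(beta_star_fn(r)))

    def next_radius(b_prev, beta_star_fn, a=1.0, iters=50):
        # Solve the implicit recursion of equation (4),
        # b_n = b_(n-1) - d*(b_n), by fixed-point iteration.
        b = b_prev
        for _ in range(iters):
            b = b_prev - d_star(b, beta_star_fn, a)
        return b

    def spiral_radii(b0, beta_star_fn, revolutions, a=1.0):
        # Radii b_0 > b_1 > ... of the fastest guaranteed spiral.
        radii = [b0]
        for _ in range(revolutions):
            radii.append(next_radius(radii[-1], beta_star_fn, a))
        return radii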
5. Conclusion

We have shown how bounds can be placed on the possible instantaneous motions of a sliding object being pushed by another object, in the presence of unknown frictional forces between object and table, and between object and pusher. These bounds provide the basis for planning manipulation of sliding objects with or without sensors. As an example, a sensorless strategy for localizing a disk was developed and optimized. We believe that the motion of a sliding object is now sufficiently well understood that reliable robot strategies taking advantage of sliding motion can be designed and verified.

References

1. Brost, Randy. Automatic Grasp Planning in the Presence of Uncertainty. Proceedings, IEEE Int'l Conf. on Robotics and Automation, April, 1986.
2. Mani, Murali and Wilson, W. R. D. A Programmable Orienting System for Flat Parts. Proceedings, NAMRII XIII, 1985.
3. Mason, Matthew T. and Salisbury, J. K. Robot Hands and the Mechanics of Manipulation. The MIT Press, 1985.
4. Mason, Matthew T. On the Scope of Quasi-static Pushing. Proceedings, 3rd Int'l Symp. on Robotics Research, October, 1985.
5. Pingle, K., Paul, R., and Bolles, R. Programmable Assembly, Three Short Examples. Film, Stanford AI Lab, 1974.
6. Peshkin, M. A. and Sanderson, A. C. The Motion of a Pushed, Sliding Object (Part 1: Sliding Friction). CMU-RI-TR-85-18, Robotics Institute, Carnegie-Mellon University, 1985.
7. Peshkin, M. A. and Sanderson, A. C. The Motion of a Pushed, Sliding Object (Part 2: Contact Friction). CMU-RI-TR-86-7, Robotics Institute, Carnegie-Mellon University, 1986.
AND/OR GRAPH REPRESENTATION OF ASSEMBLY PLANS*

Luiz S. Homem de Mello and Arthur C. Sanderson
Department of Electrical and Computer Engineering and Robotics Institute
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

ABSTRACT

This paper presents a compact representation of all possible assembly plans of a given product using AND/OR graphs. Such a representation forms the basis for efficient planning algorithms which enable an increase in assembly system flexibility by allowing an intelligent robot to pick a course of action according to instantaneous conditions. Two applications are discussed: the selection of the best assembly plan (off-line planning), and opportunistic scheduling (on-line planning). An example of an assembly with four parts illustrates the use of the AND/OR graph representation to find the best assembly plan based on weighing of operations according to complexity of manipulation and stability of subassemblies. In practice, a generic search algorithm, such as AO*, may be used to find this plan. The scheduling efficiency using this representation is compared to fixed sequence and precedence graph representations. The AND/OR graph consistently reduces the average number of operations.

I INTRODUCTION

Robotic assembly often requires reprogramming or reconfiguration in order to handle a variety of designs in the same system. The design and implementation of such flexible systems is difficult, and automated planning techniques may provide major advantages. Such task planning for robotic assembly is critically dependent on the task representation; a new approach to task representation using AND/OR graphs is described in this paper.

Flexibility in robotic workcells provides a number of advantages. Flexible robotic workcells may be reconfigured to handle a wide range of styles and products. Further flexibility can be achieved if those workcells are able to assemble the same product in different ways. In order to accommodate the assembling of several different products in the same shop, it is necessary to schedule the available machines for each job. Since different machines may have different capabilities, the assembly procedure may vary depending on what machine is scheduled to do the job. Another advantage is an improvement in the ability to recover from errors and other unexpected effects that cause the execution of a task to deviate from the preplanned course of actions. When deviations occur, it is preferred that the task execution continue, as efficiently as possible, from the unpredicted state towards the goal. Deviations from the desired course of actions are not necessarily error conditions, but may be due to random factors that affect the manufacturing process, and flexible shops should be able to cope with those factors autonomously.

*This research is supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (Brazil) and by the Robotics Institute of Carnegie-Mellon University.

Even with flexibility of the mechanical hardware, current robotic assembly systems are not able to follow many different courses of action within a given task. A principal reason for this limitation is the inadequate data structure used for the representation of task plans. Ordered lists of actions, which were used in early robot systems developed outside the manufacturing context, do not permit flexibility in task execution.
Triangle tables [Fikes 72] have been used for the representation of plans, and they improve the capability to recover from errors, but only within one fixed sequence. A more significant improvement was the use of precedence diagrams [Fox,B. 85] for the representation of plans, but that technique has limitations also, and in most cases allows only a small amount of flexibility.

This paper presents a compact representation for the set of all possible assembly plans of a given product. Such a representation enables an increase in assembly flexibility by allowing an intelligent robot to pick the more convenient course of action, according to the instantaneous conditions at the shop. In sections II and III, the necessary background is established. Section IV shows the representation, and section V presents its use for the assembly of a simple product. Two applications are discussed: section VI shows how the selection of the best assembly plan can be implemented as a graph search, and section VII shows the use of the representation in opportunistic scheduling. Section VIII summarizes the contribution of the paper and points to further research.

II SCHEDULING AND PLANNING

Assembly of one product requires selection of a sequence of operations and assignment of times and resources for each operation. The problem is usually divided into two parts: planning, or process routing, which is the selection of a sequence of operations, and scheduling, which is the assignment of times and resources.

Scheduling problems, including job-shop scheduling, project scheduling, and assembly-line balancing, have been intensively investigated in Management Science and Operations Research [Bellman 82]. Mathematical programming techniques have most often been used to solve those problems. More recently, the scheduling problem has been studied using constraint-directed reasoning [Fox,M. 83].

Planning has been an important research issue in artificial intelligence. BUILD [Fahlman 74] and STRIPS [Fikes 71] are two early examples. Both systems aim to generate plans that enable robots to perform certain tasks. Typically, the tasks consist of achieving a state that satisfies some goal condition from a current state of the world (i.e., the robot environment), and the plans consist of ordered sequences of actions that will transform the initial state into a goal state. The representations of plans are commonly based on ordered lists of preprogrammed primitive actions. There are some extensions to that representation scheme that enable the robot to take advantage of the work already done in planning, in case unexpected events happen during the execution of a plan. STRIPS, for example, uses a tabular
Priority has been given to develop efficient, powerful and general purpose procedures that can find at lcast one plan in a wide variety of situations rather than procedures that eventually find the most ef- ficient plan in a more restricted type of situation. In applications where plans are executed one time only, inefficiencies in the plan do not cause any major harm. Also, if plans are generated on line, high speed in plan generationis often preferable to optimal plans. Search for the most efficient plan requires a criterion to decide whether one plan is better than another. This decision, however, usually requires information available at execution time only, and producing the plan in real time may degrade the robot operation, or even be unfeasible, due to the long computing time it usually takes to generate a plan. The choice between planning ahead of time (off line) and planning in real time (on line) is difficult; the former may lead to inefficient plans, whereas the latter may cause a degradation in the robot operation. 111 PLANNING FOR ROBOTIC ASSEMBLY To achieve the desired high levels of productivity, the assembly plans must be efficient and keep wasted time and resources to a min- imum. Should inefflciencics in the assembly plan of one product be multiplied by the size of the lot, which in common robotic assembly applications ranges from 1,000 to 100,000 units, the resulting total waste may reduce drastically the productivity and may jeopardize the whole process. Conditions at the shop, however, change with time (for example, parts may come in random order), and-usually, there is no single plan that is efficient in every possible situation. Fox and Kempf [Fox,B. 851 address the need to act opportunis- tically, as opposed to always follow a preprogrammed fixed order of operations. They suggest that plans gencratcd off-line to be given to the robot be a set of operations with minimal ordering constraints. Such a plan was represented by a precedence diagram and would actually encompass several possible sequences of operations that would peiform the task of assembling a given product. In real time, depending on the conditions at the shop, the intelligent robot would pick the most appropriate sequence. Using Fox and Kempf notation, the selection of one sequence, and the assignment of operations to specific machines is what is commonly referred to as the scheduling process. Since that selection process involves much less computing time than the planning process, no degradation in the efficiency of the robot operation should occur. Planning, in this sense, should yield all possible sequences of opera- tions that can be used to assemble a product. That information is the input to the scheduling process, which in real time selects one of those sequences and assigns the machines that will do each operation. The problem with the precedence diagram formalism, as Fox and Kempf themselves point out, is that for most products no single par- tial order can encompass every possible assembly sequence. The as- sembly of the simple product shown in exploded view in figure 1, for example, may be completed by following one of the ten different sequences of operations that are represented graphically in figure 2. It is possible to combine some sequences into one partial order using precedence diagrams. Figure 3 shows three possible ways to combine two of the first four sequences in figure 2; the only restriction is that the insertion of the stick cannot be the last operation. 
It is possible to combine three of those four sequences into one partial order by using 1114 / ENGINEERING a dummy operation, but it is not possible to combine the four se- quences into one partial order, nor it is possible to combine any of those sequences with the other six sequences in figure 2. A closer look at the partial ordering representation of plans, in the light of the above assembly example, shows another deficiency of that solution. Two distinct feasible sequences, A-B-C and B-A-C, for ex- ample, do not differ simply by the sequence of the operations. Insert- ing the stick first is not the same operation as inserting it after the receptacle and the cap have been screwed together. The latter opera- tion is probably easier to execute. Similarly, screwing the receptacle and the handle with the stick inside is probably easier to do if the receptacle and the cap are screwed, than otherwise. The partial order- ing approach, howcvcr, does not capture this subtle difference. The next section will describe another approach to the representation of plans that captures this difference, and that can combine all possible assembly sequences. IV ANlWOR GRAtW I~l’i’lillSf~N’I’/\‘I’IC)N 01: ASSI’M 131 ,Y PI ANS Planning the assembly of one product made up of scvcral com- poncnt parts can be seen as path search in the state space of all pos- sible configurations of that set of parts. The initial state is that con- figuration in which all parts arc disconncctcd from each other. and the goal state is that in which the part! arc properly joined to form the desired product l’hc moves that change one sratc into another cor- respond to the assembly operations since they change the relative position of at Icast one part. ‘I’hcre may bc many din’crcnt paths from the initial state to the goal state. Krogh and Sandcrson [Krogh 851 present an ovcrvicw of task decomposition and operations. In this context, any set of parts that arc joined to form a stable unit is called an assembly. A component part is also an assembly, with a special property. The word subassembly rcfcrs to an assembly that is part ofi another, more complex assembly, and it always carries the subs&./set connotation. Because there are many configurations that can bc made from the same parts, the branching factor from the initial state to the goal state is greater than the branching factor from the goal state to the initial state. A backward search, thcrcfore, will be more efficient than a forward search for the assembly planning problem. The problem of finding how to assemble a given product can bc converted to an equivalent problem of finding how the same product can be disassembled. Since assembly operations are not ncccssarily revcrs- iblc, the equivalence of the two problems will hold only if each opera- tion used in disassembly is the reverse of a feasible assembly opera- tion, regardless of whether these reverse operations themselves are feasible or not. The expression disassembly operation, therefore, refers to the reverse of a feasible assembly operation. The backward search suggests a decomposable production system in which the problem of disassembling one product is decomposed into distinct subproblems, each one being to disassemble one subassembly. Each decomposition must correspond to a disassembly operafion. 
If solutions for both subproblems that result from the decomposition are found, then a solution for the original problem can be obtained by combining the solutions to the subproblems and the operation used in the decomposition. For subassemblies that contain one part only, a trivial solution containing no operation always exists. Usually there will not be a unique way to decompose the problem, or to cuf the assembly, because there may be several different ways to assemble the same product, CAP STICK RECEPTACLE Figure 1: A simple product A B SCREW INSERT RECEPTACLE STICK INTO AND CAP RECEPTACLE B v INSERT STICK INTO RECEPTACLE A v SCREW RECEPTACLE AND CAP c v SCREW RECEPTACLE AND HANDLE c v SCREW RECEPTACLE AND HANDLE (A-B-C) (1) n (B-A-C) (2) D PLACE STICK ON CAP B INSERT STICK INTO RECEPTACLE c v SCREW RECEPTACLE AND HANDLE A v SCREW RECEPTACLE AND CAP (B-C-A) (3) PLACE STICK ON CAP A v SCREW RECEPTACLE AND CAP c v SCREW RECEPTACLE AND HANDLE (D-A-C) (5) E PLACE STICK ON HANDLE c v SCREW RECEPTACLE AND HANDLE A \’ SCREW RECEPTACLE AND CAP (EGA) (8) c v SCREW RECEPTACLE AND HANDLE A v SCREW RECEPTACLE AND CAP (D-C-A) (6) E PLACE STICK ON HANDLE A v SCREW RECEPTACLE AND CAP c v SCREW RECEPTACLE AND IiANDLE L (E-A-C) (9) c HANDLE C SCREW RECEPTACLE AND HANDLE El v INSERT STICK INTO RECEPTACLE A v SCREW RECEPTACLE AND CAP (C-B-A) (4) SCREW RECEPTACLE AND HANDLE D v PLACE STICK ON CAP A v SCREW RECEPTACLE AND CAP (C-D-A) m A SCREW RECEPTACLE AND CAP E v PLACE STICK ON HANDLE c v SCREW RECEPTACLE AND HANDLE (A-E-C) W) A RECEPTACLE B STICK INTO RECEPTACLE C SCREW RECEPTACLE AND tlhNI>LE B INSERT STICK INTO RYCEPTACLE L SCREW RECEPTACLE ANDCAP , (2) B RECEPTACLE RECEPTACLE (3) Figure 3: Precedence diagrams: (I) combines A-B-C and B-A-C; (2) combines C-B-A and B-A-C; (‘3) combines B-A-C and R-C-A Structures called AND/OR graphs [Nilsson 801, or Izypergraphs, are uscfi~l in representing dccomposablc problems and they have been used to represent the disassembly problem. The nodes in such a hy- pergraph correspond to assemblies; nodes corresponding to as- semblies that contain only one part are the terminal nodes. The hy- perarcs (or k-connectors, k being any integer greater than zero) cor- respond to the disassembly operalions. Each hyperarc that leaves one node corresponds to a disassembly operarion applicable to the as- sembly of that node, and the successor nodes to which the hyperarc points correspond to the resulting subassemblies produced by the disassembly operation. Because for most products the assembly operations usually mate two subassemblies, the hyperarcs in the cor- responding AND/OR graph are usually 2-connectors. There are cases, however, of operations that mate more than two subassemblies (e.g., assembling a hinge with two wings and one pin), as well as operations that involve only one subassembly (e.g., drilling a hole in a part). Hyperarcs in AND/OR graphs can represent all those possibilities. A solurion tree from a node N in an AND/OR graph is a subgraph that may be defined recursively as either N itself if N is a terminal node, or N plus one of its outgoing hyperarcs plus the set of solution trees from each of N’s successors through that hyperarc. This defini- tion assumes that the graph contains no cycle 4s is true in the disassembly problem. There may be none, one, or several solution trees from a node in an AND/OR graph. 
The useful feature of the AND/OR graph representation for the as- scmbly problem is that it encompasses all possible partial orderings of assembly operations. Moreover, each partial order corresponds to a solution tree from the node corresponding to the final (assembled) product. This feature is demonstrated through the example in the next section. Figure 2: Possible scqucnccs of operations to asscmblc the product shown in figure 1 ROBOTICS / 1115 Figure 4: AND/OR graph for the product shown in figure 1 v A SIMPLE EXAMPLE Figure 4 shows the AND/OR graph for the product in figure 1. Each node in that graph is labeled by a database that correponds to an assembly. In flgurc 4, the databases are represcntcd by exploded view drawings, whereas in a computational implementation, the databases are relational data structures. To facilitate the exposition, both the nodes and the hyperarcs in figure 4 have idcntiflcation numbers. The root node in figure 4 (node 1) is labclcd by a database that describes the assembled product. There are four hyperarcs leaving that node. Each of those four hyperarcs corresponds to one way the whole assembly can be disassembled and each one points to two nodes that are labeled by databases that describe the resulting sub- assemblies. Similarly, the other nodes in the graph have a leaving hyperarc for each possible way in which their corresponding sub- assembly can be disassembled. 1116 / ENGINEERING Figure 5: Solution tree corresponding to sequence 4 in fig 2 Figure 6: Solution tree corresponding to sequences 6 and 7 in fig 2 Figure 7: Solution tree corresponding to sequence 1 in fig 2 Any subassembly that can be made up of the component parts may appear only once in the graph, even when it may be the result of different disassembly operations. The subassembly of node 4, in figure 4, for example, may result From two different operations, which correspond to hyperarcs 5 and 10. Moreover, those two hyperarcs come from two distinct nodes. Nodes corresponding to component parts (nodes 9,10,11 and 12) are the terminal or goal nodes since they correspond to disassembling problems for which a (trivial) solution is known. There are eight solution trees from the root node (node 1) and three of them are shown in figures 5 to 7. One important feature of the solution tree representation is that the distinction between operations becomes apparent because distinct operations correspond to distinct hyperarcs. In other words, two distinct assembly sequences include the same operation only if the two corresponding solution trees in- clude the hyperarc corresponding to that operation. The sequence diagrams in figure 2 and the prccedcnce diagrams in figure 3 fail to make this distinction. The solution tree shown in figure 6 corresponds to two sequences, but unlike the precedence diagrams of figure 3, the operations are exactly the same, regardless of the order in which they are executed. To solve problems that require optimization, such as the sclcction of the best assembly plan, one must bc able to travcrsc the space of all candidate solutions, rcgardlcss of the method used to solve the problem. The choice of the rcprcscntation is critical since it is or\en difficult it) delimit the set of potential solutions in a form which cnumcratcs all the clcmcnts. ROBOTICS / 1117 The AND/OR graph rcprcscntation encompasses all possible ways to assemble one product, and thcrcfore allows one to cxplorc the space of all possible plans. 
Since plans correspond to solution trees in the AND/OR graph, the sclcction of the best plan can be seen as a search problem. Any such search problem rcquircs a criterion to compare plans. One possibility is to assign to the hyperarcs weights propor- tional to the difficulty of their corresponding operations, and then compute the cost of a solution tree from a node, recursively, as: l zero, if the node has no leaving hypcrarc; or l the sum of the weight of the hypcrarc leaving the node and the costs of the solution trees from the successor nodes. The best plan corresponds to the solution tree that has the minimum cost. The search for the best plan can be conducted using generic algorithms such as the AO* [Nilsson 801. A variety of factors might be considered in assigning weights to hypcrarcs, including time duration of their corresponding operations, requirements for reorientation of fixturing, cost of resources needed, reliability, as well as production priorities and constraints. For the product in figure 1, the AND/OR graph (figure 4) has 15 hyperarcs, which correspond to 15 different assembly operations. Table 1 shows one possible assignment of weights to hyperarcs. Those weights have been computed by adding two factors. The first factor is the type of assembly operation, with screw operation weigh- ing 4, insertion 2 and placement 1, in accord with typical time, fixtur- ing and manipulation requirements. The second factor taken into account is the difficulty of handling the participating subassemblies, and is proportional to their number of degrees of freedom; sub- assemblies with more degrees of freedom are more unstable, and therefore more difficult TV handle. Using that assignment of weights to hyperarcs, the total cost for the solution trees can be computed. The solution trees in figures 5 and 7 have the minimum cost of 11. For. more complex assemblies, instead of a complete enumeration as suggested above, search algorithms can be used to reduce computa- tion. For the product in figure 1, a search using AO* will yield one of the solution trees shown in figures 5 or 7, depending on how the partial solutions and tip nodes are ordered for expansion. Table 1: Assignment of weights to hypcrarcs hypcrarc operation type subassemblies degrees of freedom total weight 1 4 1 5 2 4 4 8 3 4 4 8 4 4 1 5 5 4 2 6 6 4 4 8 7 2 0 2 8 2 0 2 9 4 4 8 10 4 2 6 11 2 0 2 12 1 0 1 13 4 0 4 14 4 0 4 15 1 0 1 1118 / ENGINEERING VlI Ol’1’OI~‘I’~JNlS’l’lC SCI11’1)1ll .l 1N(; (ISING THF; ANl)/Oli GKAI’I I IZlil’l(l’SIIN’I’A’I’lON To evaluate how the USC of AND/OR graph reprcscntation for as- sembly plans afl’ccts assembly cflicicncy. a comparntivc analysis among the three rcprcscntation schcmcs discussed in this paper has been conducted. The product in flgurc 1, and the robot workstation of figure 8 have been used as cxamplcs. The workstation is cquippcd with two manipulators and the parts arc prcscnted in random order. It is as- sumcd that a cap, a stick, a rcceptaclc, and a handle always come together, varying only in their order. 
VII OPPORTUNISTIC SCHEDULING USING THE AND/OR GRAPH REPRESENTATION

To evaluate how the use of the AND/OR graph representation for assembly plans affects assembly efficiency, a comparative analysis among the three representation schemes discussed in this paper has been conducted. The product in figure 1 and the robot workstation of figure 8 have been used as examples. The workstation is equipped with two manipulators, and the parts are presented in random order. It is assumed that a cap, a stick, a receptacle, and a handle always come together, varying only in their order. It is also assumed that both manipulators are controlled by the same central unit, and they both are able to execute the following actions:

- acquire: fetching, by one of the manipulators, of one part from the part feeder
- buffer: temporarily storing one part in a fixed location within the workstation
- mate: joining two subassemblies which are currently held by the manipulators
- retrieve: fetching, by one of the manipulators, one part known to be in the parts buffer

The efficiency of this assembly station depends on its capacity to handle parts in random order. This requires on-line scheduling of system resources depending on the order of parts arrival. The relative impact of plan representation schemes on assembly efficiency can be compared by the average number of operations needed: a smaller average number of operations corresponds to more efficiency. The first sequence of figure 2 (A-B-C) has been used as an example of fixed sequence representation, and the first precedence diagram of figure 3 (which combines A-B-C and B-A-C) as an example of precedence graph representation. Similar results will be produced using the other fixed sequences or precedence graphs.

The number of operations that would be performed for each of the 24 possible orderings in which the four parts of the simple product can be acquired is shown in Table 2. At least 7 operations are necessary: four acquisitions and three matings; depending on the order in which the parts are presented, buffering, and therefore retrieving, may also be necessary. When using the fixed sequence representation of plans, extensive buffering is necessary. For example, if the order in which the parts come is R H S C (receptacle, handle, stick, and cap), both the handle and the stick must be buffered, since they are not used in the first operation; adding two bufferings and two retrievings to the four acquisitions and three matings that are always necessary yields 11 operations. The average number of operations over all 24 possible orders is 9.8. Using precedence diagrams for the representation of plans avoids some of the buffering and reduces the average number of operations to 9.2. For the sequence R H S C, for example, only the handle must be buffered, since the insertion of the stick into the receptacle may be the first operation. Using the AND/OR graph representation of plans, however, avoids most of the buffering, and yields an average of 8 operations. For the same R H S C sequence, for example, no buffering is needed, because the robot can follow the sequence of operations corresponding to the solution tree shown in figure 5.
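The operation counts of Table 2 can be reproduced by a small simulation of the two-gripper cell. The Python sketch below handles only "chained" plans such as A-B-C, in which every mate after the first involves the previously built subassembly, which suffices for the fixed-sequence column of the table; the part names and encoding are this sketch's own, not the paper's.

    def count_ops(arrival, plan):
        # arrival: list of part names in feeder order; plan: ordered list of
        # mates, each a pair (left, right) of frozensets of part names.
        ops, held, buf = 0, [], []
        pending = list(arrival)
        for left, right in plan:
            for operand in (left, right):
                if operand in held:
                    continue
                if operand in buf:
                    buf.remove(operand)
                    held.append(operand)
                    ops += 1                   # retrieve
                    continue
                while operand not in held:
                    part = frozenset([pending.pop(0)])
                    ops += 1                   # acquire
                    if part in (left, right):
                        held.append(part)
                    else:
                        buf.append(part)
                        ops += 1               # buffer
            held = [left | right]
            ops += 1                           # mate
        return ops

    R, C, S, H = "receptacle", "cap", "stick", "handle"
    plan_abc = [(frozenset([R]), frozenset([C])),
                (frozenset([R, C]), frozenset([S])),
                (frozenset([R, C, S]), frozenset([H]))]
    count_ops([R, H, S, C], plan_abc)          # -> 11, as in Table 2

An opportunistic scheduler built on the AND/OR graph would instead choose, online, whichever solution tree's next mate fits the parts in hand, which is how the buffering disappears in the third column of Table 2.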
Table 2: Number of operations needed to assemble the product of figure 1, for all the sequences in which the parts may be acquired, and for the three schemes of plan representation (C = cap, S = stick, R = receptacle, H = handle)

sequence   fixed sequence      precedence diagram   AND/OR graph
           (first of fig. 2)   (first of fig. 3)    (fig. 4)
CSRH             9                    9                  7
CSHR            11                   11                  9
CRSH             7                    7                  7
CRHS             9                    9                  9
CHSR            11                   11                  9
CHRS             9                    9                  9
SCRH             9                    9                  7
SCHR            11                   11                  9
SRCH             9                    7                  7
SRHC            11                    9                  7
SHCR            11                   11                  9
SHRC            11                    9                  7
RCSH             7                    7                  7
RCHS             9                    9                  9
RSCH             9                    7                  7
RSHC            11                    9                  7
RHCS             9                    9                  9
RHSC            11                    9                  7
HCSR            11                   11                  9
HCRS             9                    9                  9
HSCR            11                   11                  9
HSRC            11                    9                  7
HRCS             9                    9                  9
HRSC            11                    9                  7
average        9.8                  9.2                  8

VIII CONCLUSION

A compact representation for the set of all possible assembly plans of a product has been presented, along with its applications in the selection of the best assembly plan and in opportunistic scheduling. One important feature of that representation is that it allows one to traverse the space of all possible assembly plans, and therefore provides an opportunity to select an optimal schedule and dynamically adapt scheduling to changing conditions. Both the fixed-sequence representation and the precedence-diagram representation are very limited in this respect.

A number of issues related to this representation are under investigation. One important issue is the development of algorithms for opportunistic scheduling suitable for real-time operation. As pointed out in section VII, some buffering could not be avoided, even with the use of the AND/OR graph representation of plans. For complex products, the choice of which part or subassembly to buffer may affect the overall assembly efficiency, and criteria for that decision will be necessary. These criteria will certainly depend on the evaluation functions, also under investigation, used to select a plan, especially functions that do not possess the recursive property of the one used in section VI.

An additional important ongoing research issue is the development of a representation of assemblies suitable for the automatic generation of plans. Such automation can be helpful in the design of both new products and assembly systems. In designing new products, the designer can quickly assess the difficulty of assembly and eventually modify the design to facilitate the assembly. In designing new assembly systems, the designer can evaluate the performance of a proposed design for a given set of products.

REFERENCES

[Bellman 82] Bellman, R. et al. Mathematical Aspects of Scheduling and Applications. Pergamon Press, 1982.

[Fahlman 74] Fahlman, Scott Elliott. A Planning System for Robot Construction Tasks. Artificial Intelligence 5(1):1-49, 1974.

[Fikes 71] Fikes, Richard E. and Nilsson, Nils J. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence 2:189-208, 1971.

[Fikes 72] Fikes, Richard E. et al. Learning and Executing Generalized Robot Plans. Artificial Intelligence 3:251-288, 1972.

[Fox,B. 85] Fox, B. R. and Kempf, K. G. Opportunistic Scheduling for Robotics Assembly. In 1985 IEEE International Conference on Robotics and Automation, pages 880-889. IEEE Computer Society, 1985.

[Fox,M. 83] Fox, Mark S. Constraint-Directed Search: A Case Study of Job Shop Scheduling. PhD thesis, Carnegie-Mellon University, December 1983.

[Krogh 85] Krogh, Bruce H. and Sanderson, Arthur C. Modeling and Control of Assembly Tasks and Systems. Technical Report CMU-RI-TR-86-1, Carnegie-Mellon University, The Robotics Institute, July 1985.

[Nilsson 80] Nilsson, Nils J. Principles of Artificial Intelligence. Springer-Verlag, 1980.

[Sacerdoti 77] Sacerdoti, Earl D. A Structure for Plans and Behavior. Elsevier North-Holland, 1977.
A Mobile Robot with Onboard Parallel Processor and Large Workspace Arm

Rodney A. Brooks, Jon Connell, and Anita Flynn
MIT Artificial Intelligence Lab
545 Technology Square
Cambridge, Mass. 02139

ABSTRACT

The MIT AI Lab's second mobile robot, MOBOT-2, has a number of unique design features. In this paper we describe two of them in detail. First, MOBOT-2 has an extremely cheap 32 processor distributed control system. The processor system, called BARNACLE, runs asynchronously with no central locus of control. Unlike almost all other parallel processors this one has no expensive communications routing network. The communication topology is determined by a distributed patch panel. All computing is done onboard the robot. Second, MOBOT-2 has an onboard arm. It is lightweight, but has an extremely large working volume. The arm is controlled by the parallel processor.

Figure 1. MIT mobile robots: a. MOBOT-1. b. Partially constructed MOBOT-2.

1. Introduction

The MIT AI Lab MOBOT project has been underway since January 1985. In that time we have built our first robot, MOBOT-1, and tested it wandering around laboratories and machine rooms [Brooks 86]. MOBOT-1 has 12 sonar depth sensors and a pair of TV cameras. It has an onboard microprocessor which communicates over a radio and TV link to an offboard Lisp machine where the real computing is done. It has no actuators other than its wheels. The main research emphasis with this first robot has been on a distributed parallel control system which is simulated on the Lisp machine. [Brooks 86] describes the motivations and strengths of the control architecture, known as the subsumption architecture.

Our second robot, the one described here, is intended to be an improvement on the first and to make for a richer experimental testbed for further pursuing the distributed control system. To this end it includes an onboard parallel processor which runs the subsumption architecture, and a lightweight arm which will enable the robot to do much more interesting tasks in the world than simply move around. The remainder of this paper explores the design decisions and trade-offs in these two aspects of MOBOT-2. Figure 1 shows MOBOT-1 and the new MOBOT-2.

1.1 Previous Lessons

We have found from previous experience with MOBOT-1 that we spend almost as much time taking the robot apart and putting it back together in order to enhance, modify or debug it, as we do actually running it. This is because of the fact that once it works at any given level, it becomes uninteresting and there's always something to add to make it more interesting. Hence, we would like MOBOT-2 to be extremely easy to strip down and reassemble. This dictates the type of physical fastening systems we use.

Another observation made from previous experience with mobile robots is that there is an explosive phenomenon regarding power and weight. Big robots need hefty motors, which call for large batteries. Large batteries add more weight, requiring larger motors and even bigger batteries (e.g. [Giralt et al 84]) and so on. We would like to reverse this trend and build each succeeding mobot smaller and lighter. In the limit, all our problems will be solved.

Support for this work was provided in part by an IBM Faculty Development Award, in part by a grant from the Systems Development Foundation, and in part by the Advanced Research Projects Agency under Office of Naval Research contracts N00014-86-C-0505 and N00014-82-K-0334.

2. The Parallel Processor
All serious previous mobile robot projects (e.g. [Crowley 85], [Giralt et al 84], [Moravec 83], [Nilsson 84]) have used an offboard processor to do the bulk of the computation for perception, world modelling and planning as required by the robot. We adopted this approach for MOBOT-1. Now, however, due to the availability of more computationally powerful, low-power CMOS processors and a new decomposition of robot control systems [Brooks 86], we believe the time is ripe to move to all onboard processing.

Figure 2. Slicing a control system: a. Traditional decomposition into functional units. b. Subsumption architecture, sliced into task achieving behaviors (avoid objects, wander, explore, build maps, monitor changes, identify objects, plan changes to the world, reason about behavior of objects).

2.1 The Subsumption Architecture

The usual approach to building control systems for mobile robots is to decompose the problem into a series (roughly) of functional units, as illustrated by a series of vertical slices in figure 2a. After analyzing the computational requirements for a mobile robot we have decided to use task achieving behaviors as our primary decomposition of the problem. This is illustrated by a series of horizontal slices in figure 2b. As with a functional decomposition, we implement each slice explicitly, then tie them together to form a robot control system. The difference is that after building the lowest layer we already have a control system which achieves a certain level of competence. We leave that running system intact and build a second layer to augment it. The process continues, building layer upon layer, to give successively more competent control systems, as in figure 3. We call this architecture a subsumption architecture.

Our new decomposition leads to a radically different architecture for mobile robot control systems, with radically different implementation strategies plausible at the hardware level. It also confers a large number of advantages concerning robustness, buildability, and testability.

One of these advantages is that the architecture is never bandwidth limited. The most expensive part of most parallel processors is the switch which lets processors talk to each other or talk to memory. Under the subsumption architecture the topology of communications between processors is fixed for a particular run of the robot. Thus there is no need for a fast dynamic switch to reconfigure the message topology, as this can essentially happen offline. Instead the communication topology is predetermined by a distributed patch panel.

Figure 3. Augmenting an existing control system by adding more levels (levels 0 through 3, stacked between sensors and actuators).
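The patch-panel idea can be made concrete with a toy sketch: the wiring list below plays the role of the patch cables, fixing the message topology before the run, so delivering a packet needs no dynamic routing. Module and jack names are invented for illustration.

```python
from collections import defaultdict

# A minimal sketch (not MOBOT-2's actual firmware) of a statically wired
# message topology: each entry connects one module's named output jack to
# another module's named input jack, like a patch cable.
WIRES = [
    (("sonar", "depths"), ("avoid", "depths")),
    (("avoid", "heading"), ("motor", "command")),
    (("wander", "heading"), ("motor", "command")),  # fan-in resolved by suppression
]

routing = defaultdict(list)
for src, dst in WIRES:
    routing[src].append(dst)

inboxes = defaultdict(list)

def send(module, jack, packet):
    """Deliver a packet along every patch wire leaving (module, jack);
    no switch or routing decision is ever made at run time."""
    for dst in routing[(module, jack)]:
        inboxes[dst].append(packet)

send("sonar", "depths", b"\x01\x02\x03")
print(inboxes[("avoid", "depths")])
```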
2.2 Physical Layout

We have chosen to build layers from collections of very simple processor pairs, which we call modules. Each processor pair consists of a finite state machine control processor and a peripheral geometry coprocessor. The finite state machine controls the data flow through the module. It can wait for certain inputs, save partial results in an internal register, or act as a watchdog timer. The finite state machine also occasionally passes data to its attached geometry coprocessor, which computes functions such as polar coordinate vector addition, scaled comparison, and monotonic functions. The current design calls for 32 processor boards. That is the number of modules required to control the robot arm and base. Our initial idea was that precisely one physical processor should play the role of one finite state machine-geometry coprocessor pair. Recently we are of the opinion that we may be able to simulate more than one processor pair per physical processor.

Each processor board is 4 inches wide by 3 inches high and contains a Hitachi 6301 microcontroller and logic for performing suppression (described below). The 6301 is a CMOS processor with an architecture similar to the 6800. It has 128 bytes of internal RAM and an external 8K piggy-back EPROM. The board shown in figure 4 also has provisions for adding an extra 2K of external RAM if needed. The EPROM will contain the program to emulate the finite state machine(s) and the geometry coprocessor(s), as well as the code for communicating between modules. The RAM will hold the internal state variables of each module and serve as a scratch pad for the geometry processor and for the decoding of messages.

Figure 4. A processor board from BARNACLE.

Each processor board has 3 serial inputs and 3 serial outputs. The input and output lines of the module terminate in subminiature phone jacks on each individual board. Outputs of one module are connected to inputs of another using patch wires made from a piece of coaxial cable with plugs at each end. A single lead contains two conductors, one to carry data and the other to carry control signals. All input jacks come in pairs so that signals which must fan out can be daisy-chained together. In addition, each jack has a built-in switch which forces the input to a known state if no plug is inserted.

Figure 5. Physical arrangement of processor boards.

Figure 5 illustrates the complete BARNACLE system. The processor boards are mounted around the periphery of the robot with the chips pointing outwards. Currently we plan on building four octagons and stacking them (conceptually) above each other on a plexiglass frame. Particular topologies for the processors and suppressor nodes will be patched together, with the cables being clipped into these trays. The batteries required for running the robot reside in the center of the octagon.

2.3 Communication Between Modules

All messages in the BARNACLE processor are 24-bit packets. Making the packets a fixed size significantly simplifies the communication protocol. Our analysis showed that 24 bits was the right number for our application, for several reasons. None of the physical quantities with which the robot must deal are known to more than 8 bits of accuracy, nor can any of its effectors be controlled more precisely than this. Since 8 bits make a byte, that seemed like a reasonable size to represent any physical quantity. The remaining question was how many bytes per packet. The configuration of the robot in the world can be represented by three bytes: x, y, and θ. A motion command takes two bytes, one for heading and one for distance. A third byte can represent speed if desired. A corridor description can fit in three bytes: length, width, and orientation relative to the current robot position. With 12 sonar sensors mounted on the robot (as on MOBOT-1, but more likely infrared depth sensors on MOBOT-2), two bits per ranger can be placed in a single 24-bit packet. Two bits is plenty for local obstacle detection and avoidance.
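The packet layouts mentioned above are easy to sketch; the field choices follow the text (three bytes for x, y, θ; twelve 2-bit ranger codes), but the bit ordering within the 24-bit word is our assumption.

```python
# A small sketch of two of the 24-bit packet layouts described above.
def pack_pose(x, y, theta):
    """Pack an (x, y, theta) configuration, each quantized to 8 bits."""
    assert all(0 <= v < 256 for v in (x, y, theta))
    return (x << 16) | (y << 8) | theta

def pack_rangers(codes):
    """Pack twelve 2-bit depth codes (0..3) into one 24-bit packet."""
    assert len(codes) == 12 and all(0 <= c < 4 for c in codes)
    packet = 0
    for c in codes:
        packet = (packet << 2) | c
    return packet

print(hex(pack_pose(100, 200, 45)))
print(hex(pack_rangers([3, 0, 1, 2] * 3)))
```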
Figure 6. In each of the three cases the dominant input suppresses the inferior input: a. The inferior packet is ignored. b. The inferior packet is blocked. c. A partially transmitted inferior packet is flushed.

Messages are sent serially over 2 conductors; one conductor is the data line and the other is a control line. Both control and data wires are connected to single bits of the processor's parallel ports. We use 12 parallel I/O lines to supply three input and three output lines on each processor. All ports are polled every 500 microseconds (time for about 100 instructions), which is an easy speed to maintain. Packets of 24 bits can thus be sent in about one tenth of a second. This is much faster than the communication rate we used to run MOBOT-1 with our Lisp machine simulation of BARNACLE.

Inhibition, a necessary part of the subsumption architecture, is accomplished in BARNACLE by simply forbidding the module to send any messages for a specified amount of time. There is a special inhibit input to each processor which is connected to a timer whose period can be varied from a tenth of a second to several minutes by a potentiometer. A pulse on the control line of the inhibit input starts the timer, which in turn prevents the processor from sending any new messages, although it is allowed to finish ones currently in progress. The timer can be re-triggered at any time during the inhibition interval to extend the duration. When at last the timer's output goes low, the processor is free to send whatever messages it wishes.

Suppression in BARNACLE is also implemented using special hardware. Each processor board contains one suppressor node which has two inputs and one output. The state of a flip-flop controls which input makes it through to the output. This flip-flop is set by a pulse on the control line of the dominant input, and reset after a time-out period controlled by an associated potentiometer. Figure 6 shows how this works. The control line signals an impending message with a square pulse; the falling edge signals that the message is about to begin. There is a lower bound on the length of this pulse, but no upper bound. Reassertion of this line during a message indicates that the message is invalid and should be ignored; the new falling edge will indicate the start of a different message. Thus the two inputs to a suppressor node can be completely asynchronous, and yet there are no timing or collision problems, thanks to the protocol definition.
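A behavioral sketch of a suppressor node may help; this is software mimicry of the flip-flop-and-timer circuit described above, with invented module names, not the actual hardware or its firmware.

```python
# Behavioral sketch of a BARNACLE suppressor node: a pulse on the dominant
# input's control line routes the output to the dominant side until a
# time-out (set by a potentiometer in the real hardware) expires.
class Suppressor:
    def __init__(self, timeout):
        self.timeout = timeout
        self.suppress_until = -1.0

    def dominant_pulse(self, now):
        """Control-line pulse on the dominant input: set the flip-flop."""
        self.suppress_until = now + self.timeout

    def route(self, now, dominant_packet, inferior_packet):
        """Pass the dominant packet while suppression is active,
        otherwise let the inferior packet through."""
        if now < self.suppress_until:
            return dominant_packet
        return inferior_packet

s = Suppressor(timeout=2.0)
print(s.route(0.0, None, "wander-heading"))   # no suppression yet
s.dominant_pulse(1.0)                          # e.g. an avoid module asserts control
print(s.route(1.5, "avoid-heading", "wander-heading"))
```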
3. The Arm

The fact that our robot is mobile makes a big difference in the design of the manipulator. A mobile robot is a lot like an airplane in that all its resources are severely limited. It can only carry a certain amount of weight, all its equipment must fit in a specified size, and the power available from batteries is limited. Not only is electrical power scarce, computational power is also scarce. Even with advanced VLSI and CMOS, the more processors there are and the faster they run, the more power, space, and weight they consume. Fortunately, unlike electrical power, it is possible to beam information into and out of a robot. Yet, because robots inhabit noisy environments full of fluorescent lights and disk drives, the telemetry bandwidth is limited and occasionally communications are interrupted altogether.

3.1 Special Requirements

What has been said above is true for any piece of hardware residing on a mobile robot. There are, however, also specific requirements for manipulators on mobile robots. One of these is that the arm and its end effector must not be so heavy that they tip the robot over. This needs to be true of the manipulator throughout its workspace and range of payloads. We want our mobot to manipulate reasonably heavy objects. Arms that can lift a couple of ounces are fine for shuffling semiconductor wafers, but they can't move coffee mugs or pick up rocks for examination. Making the actual arm light also means that it can move faster without encountering control instabilities.

Another shortcoming of most commercially available manipulators is that they have too small a workspace. Mobile robots inhabit a three dimensional world that has many different heights: desks, tables, shelves, floors, etc. This means that the manipulator's workspace needs a large amount of vertical freedom. The lateral mobility of the arm, however, does not need to be very big, since the robot's base allows it to move the whole arm around.

Lastly, the precision of commercial manipulators is overkill for the actions we wish our robot to perform. We expect our mobots to transfer things from one location to another, not to do low tolerance assembly. Sensors such as vision cannot locate an object to a thousandth of an inch. On the other end, getting the gripper to within half an inch is sufficient to grasp most things, and setting something down within an inch of where you want it is usually fine. For cases where absolute positioning does matter, like removing a peg from a hole, there are often environmental constraints (like the edges of the hole) that can aid in the alignment, given an appropriate control system [Lozano-Pérez et al 83].

3.2 Mechanical Design

The mechanical design of the arm for MOBOT-2 is relatively straightforward. It is a 2 degree of freedom planar manipulator which moves in a vertically oriented plane passing through the central vertical axis of the robot. The two degrees of freedom are used to select a height for the gripper and to give fine grain control over the hand's radial position. Coarse radial and angular control is provided by moving the entire robot. To keep the number of joints down we have not provided fine grain control of the angular position of the hand; there is no sideways motion. The rationale behind this is that many long range sensors, in particular vision, supply very accurate headings toward a target object but relatively poor range estimates. Having a degree of freedom that can compensate for these errors is desirable. This is why, with only 2 degrees of freedom, we choose to mount the arm so it operates in a radial rather than a tangential plane.

Each section of the arm consists of a parallel four-bar linkage. Figure 7 shows how these linkages are arranged. Because there is no wrist in this design we have decided to have the gripper always point straight down, an attitude which allows the hand to pick up small objects from flat surfaces. The four-bar mechanisms serve to reference the attitude of the gripper to the robot's frame. The motors which actuate the two joints are capable of lifting a payload of 2 pounds. Because the motors are light, each motor is located at the joint it controls. Mounting the motors back further would require a complicated power transmission system that could introduce an unacceptable phase lag in the servo control of the fingers and would likely weigh as much as the motor itself.

Figure 7. The manipulator and its kinematics: x = l1 cos θ + l2 cos φ + d, y = l1 sin θ + l2 sin φ - d′. Note that the hand is always vertical.

Figure 8. Tip positions for the manipulator: a. Total workspace. b. Normal operating area.
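The figure 7 kinematics can be written out directly. In the sketch below the link lengths and offsets are placeholders (the paper gives no numeric values), and the joint ranges are assumed; the 8-bit quantization and the 65536 plotted tip positions come from the text.

```python
import math

# Placeholder link lengths l1, l2 and frame offsets d, d' (inches, assumed).
L1, L2, D, DP = 20.0, 20.0, 5.0, 4.0

def fingertip(theta, phi):
    """Fingertip position per figure 7; the parallel four-bar linkages keep
    the hand vertical, so hand orientation never enters the computation."""
    x = L1 * math.cos(theta) + L2 * math.cos(phi) + D
    y = L1 * math.sin(theta) + L2 * math.sin(phi) - DP
    return x, y

def workspace(theta_lo=-math.pi/2, theta_hi=math.pi/2,
              phi_lo=-math.pi/2, phi_hi=math.pi/2):
    """All 256 x 256 = 65536 tip positions reachable with 8-bit joints;
    the joint ranges here are assumed, not taken from the paper."""
    pts = []
    for i in range(256):
        theta = theta_lo + (theta_hi - theta_lo) * i / 255
        for j in range(256):
            phi = phi_lo + (phi_hi - phi_lo) * j / 255
            pts.append(fingertip(theta, phi))
    return pts

print(len(workspace()))        # 65536 positions, as plotted in figure 8a
```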
The complete workspace of the arm is shown in figure 8a. However, we are primarily concerned with the vertical column shown in figure 8b, which is 40 inches high and 18 inches wide. This allows the arm to work on both the floor and the tops of tables, and to reach anywhere in the front half of a normal desk top. Quantizing the joint angles to 8 bits each gives the arm quarter-inch accuracy over the entire workspace. All 65536 possible fingertip positions are plotted in figure 8a.

Figure 9. Close-up of a prototype compliant gripper.

The hand is a simple linear slide parallel jaw gripper. The fingers are 1 inch wide by 3 inches long and contact the object via two compliant rubber pads. Since there is no fine grain control over the angular location of the arm with respect to the robot, the jaws of the gripper open to a wide 5 inches. This lateral leeway is important because it allows us to tolerate 2 degrees of error in the angular position of the arm at the furthest point in its workspace, and larger errors at shorter extensions.

3.3 Sensors and Control

The arm is controlled by specifying a speed for each of the joints. This is accomplished by slowly ramping the control voltage to a standard proportional controller at a particular rate. Controlling the speed of the joints rather than their position lets us move the hand along a desired trajectory. In particular, we can command the arm to raise the gripper straight up or move it directly forward by specifying joint speeds that vary with the configuration of the arm.

Not only are the joint positions sensed, but the error voltages in the servo amps are also reported. If the servo has a known transfer function, such as a generalized spring, this can be very useful information. Errors in the joint angles indicate the amount of torque being supplied by the motors. By coupling this with the configuration of the arm we can determine the weight of the payload being carried. For the fingers, measuring the control error tells how tightly the hand is grasping an object. This allows us to close the fingers slowly until a sufficiently large grasping force is sensed. The amount of force necessary is determined by the weight of the payload, which can either be measured directly as described above, or estimated by measuring the finger separation.

Aside from basic kinesthetic sensors, the hand also has a cluster of 8 infrared range-finders. These are arranged in a hexagonal grid surrounding the fingers, as can be seen in figure 10a. Each sensor provides a coarse (3 bit) depth measurement of the surface in its field of view. For the table shown in figure 10a, the IR depth map looks like figure 10b.

Figure 10. The hand is surrounded by a ring of IR range finders (triangles): a. The arm over a table. b. Heights returned by the IR ring.

4. Conclusion

MOBOT-2 has an onboard parallel processor. The processor is unique in that it has no dynamic switch, but rather relies on physical configuration of its communication topology. There is no central locus of control in the entire system. The parallel processor controls motors on the onboard arm, reacts to local moving obstacles, processes sensor information, and formulates high level plans, all in a distributed fashion.

MOBOT-2 also has an onboard arm. Unlike other robots with onboard arms, this one has a large workspace, enabling it to manipulate objects quite far above the base of the robot. Special care has been taken to ensure that the robot can achieve this reach without tipping itself over.

REFERENCES

[Brooks 86] "A Robust Layered Control System for a Mobile Robot", Rodney A. Brooks, IEEE Journal of Robotics and Automation, RA-2, No. 1, April 1986.

[Crowley 85] "Navigation for an Intelligent Mobile Robot", James L. Crowley, IEEE Journal of Robotics and Automation, RA-1, March 1985, 31-41.

[Giralt et al 84] "An Integrated Navigation and Motion Control System for Autonomous Multisensory Mobile Robots", Georges Giralt, Raja Chatila, and Marc Vaisset, Robotics Research 1, Brady and Paul, eds., MIT Press, 1984, 191-214.

[Lozano-Pérez et al 83] "Automatic Synthesis of Fine-Motion Strategies for Robots", Tomás Lozano-Pérez, Matthew T. Mason, and Russell H. Taylor, International Journal of Robotics Research, Volume 3, Issue 1, 1983.

[Moravec 83] "The Stanford Cart and the CMU Rover", Hans P. Moravec, Proceedings of the IEEE, 71, July 1983, 872-884.

[Nilsson 84] "Shakey the Robot", Nils J. Nilsson, SRI AI Center Technical Note 323, April 1984.
NOISE-TOLERANT RANGE ANALYSIS FOR AUTONOMOUS NAVIGATION*

Aviv Bergman and Cregg K. Cowan
Robotics Laboratory, SRI International
333 Ravenswood Avenue, Menlo Park, California 94025

* The research reported herein was supported by the FMC Corporation, Ordnance Division, San Jose, California, under P.O. 147466, Work Directive 014.

ABSTRACT

Techniques for detecting horizontal regions, obstacles, ditches, and shoulders along a road from range data are described. The noise level in each scan line of the range image is computed and an adaptive threshold is used for noise compensation. The sources of noise and the scanning geometry for a time-of-flight range sensor are discussed, and experimental results of applying these techniques to ERIM range images are presented.

I INTRODUCTION

Autonomous navigation in an outdoor environment requires solutions to such problems as finding the road or other route, detecting and avoiding obstacles, and distinguishing between true obstacles, such as boulders, and apparent obstacles, such as shrubs. This paper presents some techniques for analyzing range images of road scenes. Assuming the vehicle is on a road (a region that is horizontal relative to the vehicle), these techniques measure the noise characteristics of the range image and use this information to find the boundaries of the road region, identify obstacles in the road, and locate ditches and shoulders along the road.

In Section II we describe the range sensing technique, the sensor geometry, and sources of noise in range images. In Section III we describe and illustrate the techniques for measuring the noise level, detecting the horizontal region boundary, finding obstacles, and locating ditches and shoulders along the road.

II DESCRIPTION OF RANGE IMAGERY

In this section we describe characteristics of range sensors that use the phase difference between the reference and reflected signals of a modulated laser beam to determine the range to a surface [1]. See [3] and [4] for a detailed description of this method. Figure 1 contains a typical road scene and the corresponding range image detected by a sensor from the Environmental Research Institute of Michigan (ERIM). These images were obtained from the Martin Marietta Corporation.

Figure 1: Typical Road Scene and Range Image: a. Road scene. b. Range image (distance encoded as intensity).

Because the ERIM sensor uses the phase difference between two signals to measure distance, the range values obtained are inherently ambiguous: objects whose distances from the sensor differ by exactly one modulation cycle have the same range value. For the ERIM sensor, the distance corresponding to one modulation cycle (the ambiguity interval) is 64 feet.

A. Scanning Geometry

The configuration for sensing the range r to a world point P is illustrated in Figure 2. In this diagram the sensor lies on the z axis with its principal ray in the x-z plane. The tilt of the sensor is given by angle φ, and β is the pan angle from the center of the scan line. An ERIM range image consists of 64 horizontal scan lines, and each scan line contains 256 range values (obtained by sweeping the laser from -βmax to +βmax).
B. Sources of Noise in Range Images

Two noise effects of particular interest for an autonomous vehicle operating in an outdoor environment are absorption and scattering of the laser beam by the atmosphere, and noise contributed by background radiation, such as sunlight. Other noise sources for a laser ranging device are described in [5].

The transmission loss from atmospheric effects depends on the range to the object surface and on the wavelength of the laser. For the ERIM sensor, the principal transmission loss is due to scattering [5] and is proportional to e^(-kr), where r is the range to the surface and k is a constant dependent on the laser wavelength and the visibility. The detected signal strength also depends on the surface orientation and reflectivity. Therefore the total received signal is of the form

    P0 (ρ cos θ / r²) e^(-kr)

where P0 is the transmitted power, ρ is the diffuse surface reflectance, θ is the angle between the incident beam and the surface normal, and r is the range.

Figure 3 shows the scanning configuration for the ERIM road images. The relative angle between the road surface and the laser becomes more oblique for higher scan lines. Although the received signal decreases for higher scan lines (cos θ decreases and r increases), the ERIM device has a constant integration time for each range value, resulting in a decreased signal-to-noise ratio for higher scan lines. (Other devices vary the integration time to achieve a constant signal-to-noise ratio [4].)

Figure 2: Sensor Geometry: Range r, Tilt φ, and Pan β

Figure 3: Configuration for Scanning a Road (Side View; scan lines 0 through 63)

III RANGE IMAGE ANALYSIS

A. Correcting for Range Ambiguity

The first step in analyzing a range image is to remove the ambiguity interval. We analyze each column in the image starting from the bottom; if we detect a decrease in range value greater than a threshold (currently 16 feet), we assume it to be caused by the ambiguity interval and add 64 feet to the measured range value. This simple technique fails in some cases where the laser spot spans a range discontinuity (part of the spot is on the close object and part on the more distant object). In such cases the measured range value is an average of the ranges to the two surfaces. If the surfaces lie in different ambiguity intervals, then the measured range value can produce an arbitrarily small decrease in range, falling below the threshold. We correct such cases by hand. A method for automatically removing the ambiguity interval is described in [2].
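The column-wise unwrapping just described is simple to state in code. The sketch below follows the text's 16-foot threshold and 64-foot ambiguity interval; the row ordering of the image is our assumption, and the hand-corrected failure cases are not handled.

```python
# Sketch of the section III.A ambiguity correction: scan each column
# bottom-up and add 64 feet whenever the range drops by more than 16 feet.
AMBIGUITY_FT = 64.0
DROP_THRESHOLD_FT = 16.0

def unwrap_column(column_bottom_up):
    """Correct one column of range values, ordered bottom (near) to top (far)."""
    corrected, offset, prev = [], 0.0, None
    for r in column_bottom_up:
        if prev is not None and (r + offset) < prev - DROP_THRESHOLD_FT:
            offset += AMBIGUITY_FT       # wrapped into the next ambiguity interval
        value = r + offset
        corrected.append(value)
        prev = value
    return corrected

def unwrap_image(image):
    """image: list of rows, row 0 at the top (an assumed layout)."""
    rows, cols = len(image), len(image[0])
    for j in range(cols):
        col = [image[i][j] for i in range(rows - 1, -1, -1)]   # bottom-up
        for k, v in enumerate(unwrap_column(col)):
            image[rows - 1 - k][j] = v
    return image

print(unwrap_column([30.0, 45.0, 60.0, 2.0, 10.0]))   # 2.0 becomes 66.0
```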
B. Measuring the Noise Level

Although some sources of range data noise can be modeled a priori, other noise sources, such as the orientation and reflectivity of the target surface, are not generally known. We therefore choose to estimate the total noise by measuring the statistical characteristics of the actual range data. We currently measure the mean μ and standard deviation σ for each scan line.

Consider the values for one scan line when viewing a horizontal plane (the x-y plane) that is aligned with the range sensor, as shown in Figure 4a. The range sensor lies on the z axis with its principal ray in the x-z plane. The line P1P2 is the intersection of the x-y plane with the scanning plane (formed by sweeping from -βmax to +βmax). The range value r for an arbitrary point P is related to the range value along the principal ray, r0, by r = r0 / cos β. Figure 4b displays this relationship between range and pan angle.

Under the assumption that the horizontal plane is aligned with the sensor, that is, that the vehicle is not tilted relative to the road surface, the maximum range difference between the center and edge of each scan line is (r0/cos(βmax) - r0). If we multiply each value by cos β to compensate for its pan angle, then the range values within the horizontal region become constant. Measuring the mean μ and standard deviation σ of the modified range values for each scan line can then be considered as local estimation of the noise parameters μ and σ for the range image.

Because we do not expect the horizontal region to fill the entire image, we measure μ and σ only in an "adaptive" analysis window of each scan line. This window is based on the boundary of the horizontal region detected in the previous scan line. The analysis window for the first scan line is the entire line (the first scan line is assumed to be entirely in the road). In order to eliminate the effect of obstacles or ditches when measuring σ, we include only those points in the value range μ ± kσ. Another technique to eliminate the effect of obstacles on the standard deviation σi of the ith scan line is to discard those points with values outside μi-1 ± kσi-1 (based on the assumption that the noise level changes slowly). The following sections describe the use of μ and σ in detecting the horizontal region boundaries and finding obstacles in the region.

Figure 4: Scanning a Horizontal Plane: a. Sensor geometry. b. Range vs. pan angle (range is r0 at β = 0 and rises toward the edges at ±βmax).

C. Detecting Horizontal Region Boundaries

Given μi and σi, we detect the left road-region boundary of the ith scan line by locating a sequence of range values that are within μ ± k1σ. We start searching for this sequence at a point k2 pixels outside (to the left of) the left boundary of the previous scan line. The parameter k2 is currently set to zero because of perspective; we expect the apparent road width to narrow slowly with increasing distance. The left boundary is the first point in the sequence. We limit the change in boundary position to a value based on the width of the road detected in the previous scan line, in order to safeguard later processing from possible errors. Detecting the right boundary is similar.

An example of this processing is shown in Figure 5. The lower-left corner displays an ERIM image that has been corrected for the ambiguity interval. The lower-right corner plots range values for several scan lines and illustrates the characteristic curvature of horizontal regions (compare to Figure 4b). Note the large change in range values along two scan lines that is caused by the presence of an obstacle. The upper right shows plots of range values after correcting for the pan angle. The locations of the road boundaries are also marked in the upper right. The upper-left corner displays the detected road region. The arrow in the upper-left corner indicates the range image rows that contain the obstacle.

Figure 5: Detecting the Road Region
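A compact sketch of the noise estimation and boundary search of sections B and C follows. The pan-angle model (a linear sweep to an assumed βmax), the outlier-trimming details, and the run length required for a boundary are our simplifications.

```python
import math
import statistics

BETA_MAX = math.radians(40.0)        # assumed scan half-angle

def scanline_stats(values, window, k=2.0):
    """Per-scan-line mu and sigma: compensate each value by cos(beta),
    then estimate statistics inside the adaptive window (lo, hi),
    trimming outliers (obstacles, ditches) before the final sigma."""
    n = len(values)
    flattened = [v * math.cos(BETA_MAX * (2 * j / (n - 1) - 1.0))
                 for j, v in enumerate(values)]            # r*cos(beta) ~ r0
    lo, hi = window
    region = flattened[lo:hi]
    mu = statistics.fmean(region)
    rough = statistics.pstdev(region)
    trimmed = [v for v in region if abs(v - mu) <= k * rough]
    return mu, statistics.pstdev(trimmed), flattened

def left_boundary(flattened, mu, sigma, start, k1=2.0, run=5):
    """First pixel starting a run of `run` values inside mu +/- k1*sigma;
    the run length is an assumption, not the paper's value."""
    for j in range(start, len(flattened) - run):
        if all(abs(flattened[j + t] - mu) <= k1 * sigma for t in range(run)):
            return j
    return None
```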
D. Detecting Jump Boundaries Using Sigma

After locating the left and right road boundaries and calculating μ and σ, we detect obstacles by finding horizontal and vertical discontinuities in range values (called jump boundaries), as specified below. The value r(i,j) is the range value for pixel j on scan line i.

- Smooth the range values between the left and right road boundaries using a 1 by 3 kernel.
- Detect a horizontal jump if |r(i,j) - r(i,j+1)| > c1·σi.
- Detect a vertical jump if |r(i,j) - r(i-1,j)| > c2·σi-1.

After detecting these jump boundary locations, we link them into larger units by a grow-and-shrink operation that connects jumps up to 4 pixels apart. Figure 6 shows an example of this method, with the obstacle shown between the road boundaries in the upper-left corner.

Figure 6: Detecting Jump Boundaries

We found that using σ to detect jump boundaries provides better results than the other methods that we tested. Figure 7 shows jumps detected when Δr > k·r (r = range value). This dynamic threshold method correctly locates distant obstacles and does not miss significant range discontinuities near the vehicle. However, this technique is adversely affected by the noise at the bottom of the image.

Figure 7: Jumps Detected with Δr > k·r
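The σ-based jump tests can be summarized in a few lines. The constants c1 and c2 below are placeholders for the paper's (unstated) values, and the grow-and-shrink linking step is omitted.

```python
# Sketch of the section D jump-boundary tests. image[i][j] holds ranges,
# sigmas[i] the per-line sigma, bounds[i] the (lo, hi) road boundaries.
C1, C2 = 3.0, 3.0                     # assumed threshold multipliers

def smooth_1x3(line, lo, hi):
    """1-by-3 smoothing between the road boundaries."""
    out = list(line)
    for j in range(lo + 1, hi - 1):
        out[j] = (line[j - 1] + line[j] + line[j + 1]) / 3.0
    return out

def jump_boundaries(image, sigmas, bounds):
    jumps = set()
    rows = [smooth_1x3(row, *bounds[i]) for i, row in enumerate(image)]
    for i, row in enumerate(rows):
        lo, hi = bounds[i]
        for j in range(lo, hi - 1):
            if abs(row[j] - row[j + 1]) > C1 * sigmas[i]:
                jumps.add((i, j))                       # horizontal jump
            if i > 0 and abs(row[j] - rows[i - 1][j]) > C2 * sigmas[i - 1]:
                jumps.add((i, j))                       # vertical jump
    return jumps
```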
E. Finding Ditches and Shoulders

Roads often have shoulders and ditches that must be avoided (if large) and that may aid in navigation because they are usually parallel to the road. In order to locate these features, we examine a window outside the left and right road boundaries of each scan line, drawing a line between the road edge and the end of the window. The highest and lowest deviations from this line define the locations of the shoulder and the ditch, respectively. In Figure 6, the ditches are shown as the outermost lines in the upper-left corner. The shoulders of the road are between the ditch and the road boundary.

The deviation in range data measured from a shoulder or ditch of constant height increases with distance from the ERIM scanner, because the angle between the surface and the laser beam decreases (becomes more oblique) with distance (see Figure 8). As previously mentioned, the noise level also increases as the angle decreases, with the result that shoulders and ditches are detected most precisely at an intermediate distance--not too close or too far.

Figure 8: Scanning Configuration for a Ditch

F. Experimental Results

The methods described in the previous section were tested on five images--three of the type displayed in this paper and two range images of a park scene with trees. Good results were obtained on all the images without modifying the parameters k1, k2 or c1, c2. As an indication of the sensitivity of these techniques to the choice of parameter values, we modified each parameter by 25% of its value and experienced no significant failures.

For example, the value of k1 controls the effect of range value deviations (e.g., obstacles) on the detected road boundary. The value of k1 used in Figure 5 differs by 25% from the value used in Figure 6. It is possible to observe the effect of this change by closely examining the road width for the scan lines that contain the obstacle (indicated by the arrow in the upper left quadrant of each figure). In Figure 5 the obstacle causes a deviation in the detected road boundary. The detected road boundary in Figure 6 (indicated by the innermost pair of lines) is not greatly affected by the obstacle.

IV CONCLUSION

In this paper we presented new techniques for analyzing range images by measuring the noise level in each scan line. The mean and standard deviation of the noise are used as the basis for several adaptive thresholds. These methods process each scan line separately and are therefore quite fast. The techniques incorporate the measured noise level to detect horizontal regions and locate obstacles in the range data. Techniques to identify ditches and shoulders along the road were also presented.

REFERENCES

1. Binford, T.O., and J.M. Tenenbaum, "Computer Vision," Computer, Vol. 6 (May 1973).

2. Hebert, M., and T. Kanade, "First Results on Outdoor Scene Analysis Using Range Data," Proc. of Image Understanding Workshop, Miami, Florida (December 1985).

3. Nitzan, D., "Scene Analysis Using Range Data," Technical Note 69, SRI International, Menlo Park, California (1972).

4. Nitzan, D., A.E. Brain, and R.O. Duda, "The Measurement and Use of Registered Reflectance and Range Data in Scene Analysis," Proc. IEEE, Vol. 65, No. 2 (February 1977).

5. Ross, M., Laser Receivers, John Wiley and Sons, New York, New York, 1966.
A REAL-TIME ROAD FOLLOWING AND ROAD JUNCTION DETECTION VISION SYSTEM FOR AUTONOMOUS VEHICLES

Darwin Kuan, Gary Phipps, and A-Chuan Hsueh
Artificial Intelligence Center, Central Engineering Laboratories
FMC Corporation
1185 Coleman Ave., Santa Clara, CA 95052

ABSTRACT

This paper describes a real-time road following and road junction detection vision system for autonomous vehicles. Vision-guided road following requires extracting road boundaries from images in real time to guide the navigation of autonomous vehicles on the roadway. We use a histogram-based pixel classification algorithm to classify road and non-road regions in the image. The most likely road region is selected, and a polygonal representation of the detected road region boundary is used as the input to a geometric reasoning module that performs model-based reasoning to accurately identify consistent road segments and road junctions. In this module, local geometric supports for each road edge segment are collected and recorded, and a global consistency check is performed to obtain a consistent interpretation of the raw data. Limited cases of incorrect image segmentation due to shadows or unusual road conditions can be detected and corrected based on the road model. Similarly, road junctions can be detected using the same principle. The real-time road following vision system has been implemented on a high-speed image processor connected to a host computer. We have tested our road following vision system and vehicle control system on a gravel road. The vehicle can travel at up to 8 kilometers per hour on the road.

I INTRODUCTION

There is increasing interest in intelligent navigation of autonomous vehicles in a complex environment, as a technology development test bed to integrate artificial intelligence research on planning, reasoning, perception, mobility control, and learning. An autonomous vehicle needs to plan its actions, perceive its surroundings, execute its plan, and adapt itself to the environment for survival. Given a high level mission goal, the planning system needs to generate a plan to achieve the goal. Based on this plan, the autonomous vehicle starts to execute the plan in the real world. It collects information from sensors to perceive its environment, to follow a road, to navigate through obstacles, to identify terrain types, to recognize objects and landmarks, and to understand scenes. If some unexpected events happen that interfere with the current plan, the autonomous vehicle needs to replan in order to adjust itself to the current situation.

There are several efforts on autonomous vehicle development at CMU, the University of Maryland, Martin Marietta, and FMC. Under the Autonomous Vehicle Test Bed Program, we at FMC have developed a mission planning system [3] and a path planning system [1] on Symbolics Lisp Machines, a reflexive pilot system on SUN workstations [2], a high speed sonic imaging sensor, and a computer-controlled M113 armored personnel vehicle. The vehicle can perform real-time obstacle avoidance using the sonic imaging sensor at 8 kilometers per hour vehicle speed. In this paper, we describe our implementation of a real-time road following vision system that can follow a gravel road at 8 kilometers per hour vehicle speed.

Visual navigation of autonomous vehicles on road networks is an important problem. Results on vision-guided road following have been reported in [5] and [6]. These approaches use a predictive edge tracking technique to follow paved roads.
In the so-called "feed-forward" mode [6], a previously detected road boundary, taken together with the current vehicle motion, is used to predict the approximate location of important road features and place a window in a subsequent image. Only those pixels in the prediction window are processed. The detected edge location and orientation in the window, combined with the road continuity constraint, are sufficient to determine the next window location in the same image for road boundary tracking. Because only a small portion of the whole image needs to be processed, this approach significantly speeds up the computation. However, due to the sequential nature of the road boundary tracking operation and its heavy reliance on prediction, the road boundary tracker may be confused by shadows, vehicle tracks and tire marks, and fuzzy road boundaries, and lock on to the wrong edge features.

Our autonomous vehicle is a tracked vehicle (M113 armored personnel carrier) that usually travels on dirt or gravel roads with fuzzy road boundaries and many vehicle tracks. These conditions make the use of prediction difficult. Consequently, we take a consistency checking approach that aggregates all the consistent evidence to reach a final interpretation. No attempt is made to optimize the image segmentation algorithm. Instead, we put our emphasis on developing a geometric reasoning module that can accurately identify road segments and road junctions based on imperfect image segmentation results.

The vision system operates in a loop (see Figure 1). It first acquires a color image from a camera and the current vehicle location from an inertial navigation system. The image segmentation module uses a pixel classification algorithm to segment the image into road and non-road regions. The road boundary tracking module finds the most likely road region and traces the contour of the region. The contour is then represented as a sequence of line segments using a line fitting algorithm. These line segments are then transformed from the image coordinate system to the local vehicle coordinate system and sent to the geometric reasoning module. The geometric reasoning module aggregates local geometric supports and assigns a consistent interpretation to these line segments. The resulting road interpretation is then fused with other sensor interpretation results (e.g., an obstacle map from a range sensor) and sent to the pilot system to generate a local path and perform the actual vehicle navigation.

We have implemented this vision system on a high-speed image processor connected to a host computer. All the image segmentation functions are implemented on the image processor, which operates at 30 frames per second. All the road boundary tracking, line fitting, and geometric reasoning functions are implemented on the host computer. The vision system currently takes approximately three seconds to process each road image. The pilot system takes the road description and generates a local path within 200 ms. We have successfully integrated the vision, planning, and pilot systems with the vehicle control system, and the vehicle can travel at 8 kilometers per hour on a gravel road.

II ROAD IMAGE SEGMENTATION

The vision system first acquires the blue image from a color camera. The reason for selecting the blue band is that it gives the best result for distinguishing the road from the background.
We use a pixel classification technique to segment the image into road and non-road regions. There are four possible classifications for each pixel:

1. an actual road pixel is classified as road
2. an actual non-road pixel is classified as non-road
3. an actual road pixel is classified as non-road
4. an actual non-road pixel is classified as road.

The first two cases are correct classifications, and the last two correspond to a miss and a false alarm, respectively [4]. Cost factors are defined for each case and the classifier is designed to minimize the total cost. The resulting classifier is the ratio of two conditional probability density functions of the pixel intensity distribution: one is conditional on the hypothesis that all the pixels are from the road class, and the other is conditional on the hypothesis that all the pixels are from the non-road class. A pixel is classified as road if the conditional probability ratio of its intensity value is greater than a threshold that is determined by the cost factors and the a priori probabilities of the road and non-road classes.

Figure 1: Road following vision system architecture (a high-speed pipeline processor performs image segmentation and road boundary tracking; the host computer performs line fitting and geometric reasoning; the road boundary and visibility limit are passed to the pilot on the vehicle).

For each new image, a properly selected reference window, usually positioned at the center bottom of the image, is used to lock on a portion of the road. The normalized histogram of the pixels inside the reference window is calculated and used to approximate the conditional probability density function of the road pixel class. The normalized histogram of the whole image is then used to approximate the probability density function of pixel intensity for road plus non-road background. The conditional probability density function of the non-road class can be obtained as a weighted linear combination of the two histograms according to the assumed a priori probabilities of the road and background. These conditional probability density functions are then substituted into the classifier to set up a lookup table for pixel classification in the current image. The segmented image is then smoothed to remove noise pixels and fill gaps.
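The classifier construction described above can be sketched as follows. The prior probability and the cost-derived threshold are placeholders, and the histogram arithmetic is our reading of the weighted-combination step.

```python
import numpy as np

PRIOR_ROAD, THRESHOLD = 0.4, 1.0      # assumed prior and decision threshold

def build_lookup_table(window_pixels, image_pixels, eps=1e-9):
    """Likelihood-ratio lookup table over intensities 0..255: p_road from
    the reference window, p_all from the whole image, and the non-road
    density recovered as a weighted combination of the two."""
    p_road = np.bincount(window_pixels.ravel(), minlength=256).astype(float)
    p_road /= p_road.sum()
    p_all = np.bincount(image_pixels.ravel(), minlength=256).astype(float)
    p_all /= p_all.sum()
    # p_all = prior*p_road + (1 - prior)*p_nonroad  =>  solve for p_nonroad
    p_nonroad = np.clip((p_all - PRIOR_ROAD * p_road) / (1 - PRIOR_ROAD),
                        eps, None)
    return (p_road / p_nonroad) > THRESHOLD   # boolean table, one entry per level

def classify(image_pixels, lut):
    return lut[image_pixels]                  # True where the pixel is road

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
lut = build_lookup_table(img[48:, 24:40], img)               # bottom-center window
road_mask = classify(img, lut)
```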
III ROAD BOUNDARY TRACKING

The function of the road boundary tracking module is to find the most likely road region based on segmentation results and track its boundary. The image segmentation module returns a segmented binary image that contains several classified road regions. It takes a lot of time to track the boundary of every region in the image. Because the real road region is usually large compared to misclassified noise regions and spreads across the image near the bottom of the image, it is most likely to intersect a vertical scan line starting at the center bottom of the image. The road boundary tracking module uses this heuristic and scans along the center column from the bottom of the image. If there is a road class region, the road boundary tracking module starts to follow the region contour until it returns to the same starting point. If the region contour length is greater than a threshold, then it is assumed to be the actual road region. Otherwise, the road boundary tracker continues to scan and track the next road class region. The road region contour is then represented in terms of a sequence of line segments by using a line fitting routine. These line segments are then sent to the geometric reasoning module for detailed shape analysis.

Figure 2: typical road image.

Figure 2 shows a typical road image obtained from the camera. Figure 3 shows all the detected road class regions in the region of interest, using the pixel classification algorithm. The large region with a linear border is selected by the road boundary tracker as the road region. The vectors in the center of the picture show the projected road boundaries on the ground plane. The quadrangle that bounds the projected road boundaries delimits the visibility limit and the camera's field of view.

Figure 3: segmented road image and the projection of the road boundaries on the ground plane.

IV GEOMETRIC REASONING

The image segmentation module extracts road regions based only on local intensity variation, without reasoning about the global geometric properties of the road boundary. There are many situations where the image segmentation module does not work properly: different lighting conditions, seasonal changes, puddles on the road, and shadows on the road, just to name a few. The development of more sophisticated image segmentation techniques is certainly important. However, in some situations geometric reasoning can eliminate erroneous data based on a road model and shape analysis. The image segmentation results are what the vision system "sees." The geometric reasoning results are what the vision system "perceives."

The road model we use for geometric reasoning can be described in terms of three constraints:

1. road sides consistency constraint - each road edge on the left side has at least one right side road edge that is parallel to and overlaps (along its orientation) the given edge.
2. smoothness constraint - both the left and right sides of a road change direction smoothly, even for a curved road.
3. continuity constraint - a road spans continuously on the ground plane; therefore, continuity between road boundaries exists in two images taken in sequence.

The geometric reasoning module needs to use this road model to

- find the road left and right boundaries
- check the road sides consistency constraint
- check the smoothness constraint on each side of the road
If there is one, then the amount of overlap along their orientation and other geometric information are recorded in the segment support structure. This is done for each edge segment on the left and right sides. If an edge segment has sufficient support from the other side to cover its extent, then it is labeled as consistent. If both sides of the road are smooth and every edge segment has support from the other side, then they are used as the final road interpretation. However, If some edge segments do not have geometric support from the other side, then we start to trace each side of the road to find consecutive consistent edge segments. If there is a break between two sequences of consistent edge segments, a “perceived” edge segment is created to link the two sequences and the original edge segments in between are removed. The road sides consistency constraint is then slightly relaxed and applied to the newly created “perceived” edge segments to make sure that they agree with the road model. This approach has the ability of data selection before model fitting. Locally consistent data are selected to reach a global interpretation, while inconsistent data are thrown away before interpretation. This step of geometric reasoning makes the road following vision system capable of working with imperfect segmentation results. Typical cases it can handle includes shadows casted on the road and fuzzy road boundary. Figure 4 shows a puddle on the right side of the road. Figure 5(a) shows the road boundary on the ground plane. Figure 5(b) shows the final road interpretation after geometric reasoning with the newly created “perceived” edges drawn in dashed lines. 1130 / ENGINEERING Figure 4: segmented road image with a puddle on the right side. Figure 5: (a) road boundaries / I (b) final road interpretation. before geometric reasoning, C. Smoothness Constraint Typical road boundary changes direction very smoothly. The geometric reasoning module checks the left and right road boundaries and returns a smoothness factor for each side. To check the smoothness constraint, the angle between two adjacent edge segments is calculated. If the angle is greater than a threshold that is a function of edge segment length, then the edge segments are labeled as not smooth. The reason to make the angle threshold vary according to edge segment length is to allow more tolerance for short edge segments. The smoothness factors of road sides are then calculated as a normalized measure of the smoothness factors of its component edge segments. D. Continuity Constraint The two constraints we discussed are applied in a single image frame. The continuity constraint is applied between adjacent image frames to enforce consistency in the time axis. This is useful in several ways. First, if the adjacent frame road boundaries are not consistent (e.g., there is no smooth transition between road segments), a warning is signaled to the road following system to slow down the vehicle. In this case, if both image frames have consistent road interpretation, the new frame road boundary is used because the current information is more accurate than the old road information. Second, continuity between frames is also used to evaluate the goodness of each road side in the current image. This makes the road following system work even if only one side of the road is visible. 
V ROAD JUNCTION DETECTION

Visual navigation of autonomous vehicles on a road network requires not only following a single road, but also detecting road junctions and turning onto one of the intersecting roads. Recent results on road junction detection are reported in [5]. In that approach, road junction appearance is first predicted based on the vehicle location and a road network map. Prominent road junction features are then used to guide the matching of detected image features. Here, we use only a general road junction model without map prediction.

If there is a road junction on the map and we want to turn onto another road, the planning system will issue a road junction detection task to the vision system when the vehicle is near that region. This task command will trigger the road junction detection module in the vision system to perform additional road junction consistency constraint checking. On the other hand, if the vehicle wants to stay on the same road, the road following vision system will automatically treat the road junction region as erroneous data and try to ignore it.

The road junction detection algorithm is very similar to the road sides consistency constraint technique we discussed in the last section. If the road junction detection module is not triggered, the road following system will treat junctions as imperfect road regions, and the smoothness and road sides consistency constraints will remove them to form a final road interpretation. However, if the road junction detection module is triggered, instead of removing edge segments that do not have support from the other side, it tries to find support from edge segments on the same side. If it successfully finds support for these edge segments, then they are the boundaries of the other road. In principle this will work; however, in our case road junctions usually have round corners, and grass and trees may break the other road at the junction. We currently use a more relaxed constraint that only checks whether the perceived edges on both sides of the road support each other.

Figure 6 shows a road junction scene. Figure 7(a) shows the road region and junction boundary. Figure 7(b) shows the final road interpretation and the perceived edges in dashed lines. In this case, edges on the same side of the road do not provide enough support for junction detection. However, the perceived edges on the two sides of the road support each other, which is weak evidence of the existence of a road junction. The road junction detection module and part of the geometric reasoning module are still in the experimental stage and are currently being optimized for real-time operation.

VI PILOT SYSTEM

Given a road scene description from the vision system, the pilot system is responsible for guiding the vehicle to follow the road and avoid obstacles. The pilot system used is a real-time reflexive pilot described in [2]. The road scene model contains left and right road boundaries and an artificial visibility limit placed at the end of the road. Candidate subgoals are positioned on the visibility limit line segment. A subgoal is found to be reachable by the vehicle without getting off the road if its left and right limiting rays bound a non-empty free-space cone. For each subgoal, a local path from the vehicle to the subgoal is generated in terms of executable vehicle commands, and the subgoal that maximizes a predefined objective function is selected for execution. The pilot system currently takes approximately 200 ms to process one road scene.
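The subgoal-selection step of the pilot can be summarized in a few lines of Python. The helper names and the example objective below are hypothetical, standing in for the reflexive pilot's actual free-space-cone test and objective function [2].

```python
def select_subgoal(subgoals, reachable, objective):
    """Return the reachable subgoal maximizing the objective, or None
    if no free-space cone is non-empty (a cue to slow the vehicle)."""
    feasible = [g for g in subgoals if reachable(g)]
    return max(feasible, key=objective) if feasible else None

# Toy usage: subgoals on the visibility limit, scored by closeness to
# the road center line x = 0 (both the test and the score are invented).
best = select_subgoal(
    subgoals=[(-2.0, 20.0), (0.5, 20.0), (2.5, 20.0)],
    reachable=lambda g: abs(g[0]) < 2.2,
    objective=lambda g: -abs(g[0]))
print(best)  # (0.5, 20.0)
```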
Figures 8, 9, and 10 show a time sequence of autonomous road following action.

VII CONCLUSIONS

In this paper we described the implementation of a real-time road following vision system for autonomous vehicles. We have integrated the vision, planning, and pilot systems with the vehicle control system, and the vehicle can travel at 8 kilometers per hour on a gravel road. We are currently working on obstacle avoidance on the roadway by fusing information obtained from a color camera and a sonic imaging sensor. We are also reimplementing our road following vision system on a more powerful pipeline image processor to achieve 20 km/hr road following.

Figure 6: segmented road image with road junction.
Figure 7: (a) road and junction boundaries before geometric reasoning, (b) final road interpretation.
Figure 8: road-following sequence (first image).

ACKNOWLEDGEMENTS

The authors would like to acknowledge the encouragement of FMC management, especially Andy Chang and Lou McTamaney. We would also like to thank Mary Cole and Darrell Smith for superb vision software support, and John Nitao and Steve Quen for pilot software support.

REFERENCES

[1] Kuan, D., Brooks, R. A., and Zamiska, J. C., "Natural Decomposition of Free Space for Path Planning," Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, Missouri, March 1985.
[2] Nitao, J. J., and Parodi, A. M., "A Real-Time Reflexive Pilot for an Autonomous Land Vehicle," IEEE Control Systems Magazine, December 1985.
[3] Pearson, G., and Kuan, D., "Mission Planning System for an Autonomous Vehicle," Proceedings of the IEEE Second Conference on Artificial Intelligence Applications, Miami, Florida, December 1985.
[4] Van Trees, H. L., Detection, Estimation, and Modulation Theory, Part I, Wiley, New York, 1968.
[5] Wallace, R., Matsuzaki, K., Goto, Y., Crisman, J., Webb, J., and Kanade, T., "Progress in Robot Road-Following," Proceedings of the 1986 IEEE International Conference on Robotics and Automation, San Francisco, April 1986.
[6] Waxman, A. M., Le Moigne, J., and Srinivasan, B., "Visual Navigation of Roadways," Proceedings of the 1985 IEEE International Conference on Robotics and Automation, April 1985.
OBJECT RECOGNITION IN STRUCTURED AND RANDOM ENVIRONMENTS: LOCATING ADDRESS BLOCKS ON MAIL PIECES*

Ching-Huei Wang and Sargur N. Srihari
Department of Computer Science
State University of New York at Buffalo
Buffalo, NY 14260

ABSTRACT

A framework for determining special interest objects in images is presented in the context of determining destination address blocks on images of mail pieces such as letters, magazines, and parcels. The images range from those having a high degree of global spatial structure (e.g., carefully prepared letter mail envelopes which conform to specifications) to those with no structure (e.g., magazines with randomly pasted address labels). A method of planning the use of a large number of specialized tools is given. The control utilizes a dependency graph, knowledge rules, and a blackboard.

1. INTRODUCTION

The central problem of vision is the identification and location of objects in the environment. The need to detect certain special interest objects while not necessarily having to identify all objects arises in several applications of computer vision. In the domain of postal automation, an important task is to locate the destination address block (DAB) on a mail piece such as a letter, flat (e.g., magazine) or parcel. The sub-image corresponding to the located DAB is then to be presented to either a machine reader (an optical character recognizer or OCR) or a human reader who will determine the sort-category of the mail piece by reading the zipcode, state, city, and street information.

A typical mail piece image has several spatially contiguous regions or blocks that correspond to logical, or mail-significant, entities, e.g., DAB, postage, return address, etc. Several mail pieces with different levels of complexity in determining the DAB are shown in Figure 1. A study of mail piece images reveals the following characteristics:

• the number of logical blocks is variable; it ranges from simple first class letter mail containing just three blocks (DAB, return address, postage stamp) to complex third class advertising mail with several additional regions corresponding to advertising text, logos, icons and graphics,
• logical blocks have certain physical attributes, but there is wide variability, and
• spatial relationships often hold among logical blocks.

Since certain spatial relationships hold between regions, the problem may seem at first to be appropriate for the model-based approach, i.e., one where model knowledge is used for reasoning about identities of regions[2]. Model knowledge typically includes object attributes, e.g., size, length, height, contrast, location, texture, intensity, etc., and spatial structure, i.e., spatial relationships among objects. The effectiveness of model-based reasoning depends on the completeness and certainty of model knowledge. For images with different structure, a different model has to be built and stored.

The model-based approach is appropriate when the mail piece face strictly adheres to prescribed specifications, e.g., carefully prepared letter mail (see Figure 1(a)). This is indeed the approach used by commercial letter mail sorting machines today, which assume a standard position for the DAB[6]. Occasionally, however, a mail piece face has no recognizable structure and the DAB may be placed randomly (Figure 1(b)).

*This work was supported by the United States Postal Service Contract 104230 85 M3.319.
Thus the problem at hand is how to account for randomness that renders model-based spatial reasoning ineffective, while not ignoring the spatial relationships that hold between regions in a large number of cases.

This paper describes the framework of an image understanding system, ABLS (Address Block Location System), that accounts for both the structure and the randomness present in mail pieces. Section 2 is a description of ABLS as a collection of tools and a control structure that plans the use of the tools. Section 3 is a system level description of ABLS. Section 4 describes the representation of knowledge as a combination of frames and rules. Section 5 describes the interpretation cycle of ABLS. Experimental results are discussed in section 6.

2. ABLS OVERVIEW

The primary objective of ABLS is to locate the DAB when it is unknown whether the mail piece image conforms to a well-defined structure. The result is in the form of a list of candidate blocks, their orientations, and degrees of support associated with being the DAB.

Figure 1: Examples of mail images with different levels of complexity in locating the DAB: (a) has a standard structure, (b) has a randomly placed DAB, and (c) has an intermediate form where the DAB is near a permit mark and inside an attention region.

Several types of knowledge are useful in this task. One is knowledge about visual properties of different significant regions and their labels. Seven possible labels correspond to: DAB, postage stamp or meter mark, return address block, yellow markup label, barcode, advertising text, and graphics. While the interpretation of every image region is not of direct concern, knowledge of spatial relationships between logical blocks is useful to guide label assignment.

2.1. Specialized Tools

ABLS utilizes several tools to gather evidence. The most important evidence is that which distinguishes the DAB from other blocks. The knowledge engineering process of developing a tool can be summarized as follows:

(1) A database of physical characteristics of mail pieces[4] is compiled, using the SPSS statistical package, into a mail statistics database (MSD)[7]. The MSD is examined to pinpoint those features that help distinguish the DAB from other blocks. The circumstance under which a feature is most useful is also determined. These two facts are compiled into knowledge rules for estimating the utility of obtaining this feature.
(2) A tool is developed to detect this feature.
(3) The tool is experimentally run under various conditions. Estimated cost and parameter settings under various conditions are compiled into knowledge rules.
(4) Results of running the tool under various conditions are compared with the MSD. Based on the comparison, the utility of various results is determined and compiled into knowledge rules for results evaluation and interpretation.

2.2. Planning the Use of Tools

When a large number of complex tools are present, it is necessary to judiciously plan their use. Since many image processing tools are computationally intensive and slow, it is infeasible to let the system invoke all available tools to obtain evidence. Some tools are interdependent and cannot be invoked randomly.
In order to arrive at a plan for tool usage, it is necessary to know the following:

• where to use a tool, i.e., applicable area of image,
• when to use a tool, i.e., appropriate time to use it,
• why use a tool, i.e., given several tools for a task, which is the best one under a given circumstance,
• how to use a tool, i.e., parameters to set before invocation,
• how to change parameters and reapply the tool if results are unsatisfactory,
• how to interpret results as new evidence when they are satisfactory, and
• when to terminate, i.e., when to stop using tools and report success when enough evidence has been accumulated.

The process of coordinating specialized tools is viewed as one of coordinating a "community of experts" to achieve the common goal of detecting the DAB. Each specialized tool is viewed as a local expert that does its own benefit/cost estimation, parameter selection, result evaluation, and result interpretation. The main advantage of this approach is modularity, which facilitates easy addition, deletion, and construction of tools.

2.3. Evidence Combination

When several tools are used, it is necessary to combine evidence gathered from each tool application. Each new piece of evidence generated by the application of a specialized tool is associated with a confidence value to represent the degree of supporting or refuting a particular labeling hypothesis. The scheme to combine confidence values of evidence is based on Dempster-Shafer's rules of combination[1,3]. Since each region can only have a single label, we restrict the hypotheses of interest to singletons and their negations. For a given candidate block, assume that there is evidence E1 which supports it as the DAB with degree S1. If new evidence E2 supports this candidate as the DAB with degree S2, then under the rule of combination the combined confidence value of consistent evidence E1 and E2 is 1 - (1-S1)(1-S2). If new evidence E2 disconfirms this candidate as the DAB with degree S2, then the combined confidence value of conflicting evidence E1 and E2 is S1(1-S2)/(1-S1S2). Finally, the degree of support in the belief interval for the DAB labeling hypothesis can be computed from the combined confidence value of E1 and E2 by using Barnett's[1] formula.

3. ABLS COMPONENTS

ABLS is composed of six major components: the MSD, a rule-based inference engine, a system manager, a blackboard, a tool box, and a tool manager (Figure 2). The MSD contains statistics of geometric attributes of labels in mail piece images and probability functions to compute the confidence value for new evidence; the statistics are stored in a series of tables that can be indexed by giving a set of geometric attributes. The rule-based inference engine is used for doing reasoning in various rule modules. The system manager is responsible for checking the termination condition, verifying the consistency of labeling, selecting DAB candidates, combining new evidence, and updating context. The blackboard contains the geometric attributes of blocks extracted from low-level image processing, the degree of support of labeling hypotheses, and the current context; all information in the blackboard can be either accessed or modified by other components in the system.

Figure 2: ABLS organization.
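A minimal Python sketch of the combination rules just given, assuming the conflicting-evidence formula is Dempster's rule restricted to a singleton hypothesis and its negation (as the formulas above indicate):

```python
def combine_consistent(s1, s2):
    # Both pieces of evidence support the same hypothesis:
    # combined support = 1 - (1 - S1)(1 - S2).
    return 1.0 - (1.0 - s1) * (1.0 - s2)

def combine_conflicting(s1, s2):
    # E1 supports with degree S1, E2 refutes with degree S2.  Dempster's
    # rule renormalizes away the conflicting mass S1*S2:
    # combined support = S1(1 - S2) / (1 - S1*S2).
    conflict = s1 * s2
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return s1 * (1.0 - s2) / (1.0 - conflict)

print(combine_consistent(0.6, 0.5))   # 0.8
print(combine_conflicting(0.6, 0.5))  # 0.4285...
```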
The tool box contains a collection of tools, most of which are image processing related. Several types of input images of a given mail piece may be operated upon by the tools, including: photopic, color (RGB), infra-red and ultra-violet illuminated. These tools are: adaptive thresholding to convert a gray-level image into a binary image using local contrast[6], color thresholding to extract white labels in a colored (RGB) image, connected component labeling, a bottom-up segmenter to group characters into words, lines and blocks, a shape analyzer to measure the degree of rectangularity of a blob, a regularity analyzer that discriminates between machine printing and hand writing, a texture discriminator for distinguishing formed character vs dot matrix print, a text reader, and an address syntax parser[5]. The tool manager is responsible for selecting the tool to be applied next. The order of applying tools is determined using a dependency graph. Each tool has a corresponding tool frame in the tool manager. Each tool frame contains rules for estimating the benefit and cost of using it, selecting parameters, evaluating results, and interpreting results.

4. KNOWLEDGE REPRESENTATION

A hybrid of frame and rule-based knowledge representation is used to model knowledge used in coordinating tools and in computing degrees of support of labeling hypotheses. The relationship between knowledge used in ABLS and the knowledge units is given in Table I.

Table I: Knowledge Representation in ABLS.

This section describes how knowledge is represented in the system manager, blackboard, and tool manager.

4.1. System Manager

Knowledge used in the system manager is modeled by the termination frame, candidacy frame, and compatibility rule module. The termination frame, which is used to represent criteria for accepting a block as the DAB, is defined as follows: Let L be the set of seven possible labels in ABLS, B the set of segmented blocks, S(i,j) the degree of support of assigning label j to block i, Tc the predefined threshold for criterion c, and

Sk = Max({S(k,j) | j ∈ L}),  Sd = Max({S(i,destination) | i ∈ B}).

The criteria for block k to be the DAB are:

1) S(k,destination) = Sk = Sd > T1,
2) Sk - Max({S(k,j) | j ∈ L} - {Sk}) > T2,
3) Sd - Max({S(i,destination) | i ∈ B} - {Sd}) > T3.

The criteria in the termination frame are usually strict in order to reduce the chance of mislabeling. When no candidate block meets these termination criteria, the system will need to apply additional tools to generate more evidence. The candidacy frame contains the minimum requirement for a block to remain a candidate. Its purpose is to rule out highly unlikely candidates for the DAB. The criteria for block k to remain a candidate are:

4) S(k,destination) > T4,
5) Sd - S(k,destination) < T5.
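Criteria 1)-5) reduce to a few comparisons. A hedged sketch follows, in which S is a dictionary keyed by (block, label); the handling of ties and singleton sets is an assumption.

```python
def max_excluding(values, x):
    rest = list(values)
    rest.remove(x)          # drop one occurrence of the maximum
    return max(rest) if rest else float('-inf')

def meets_termination(k, S, blocks, labels, T1, T2, T3):
    """Criteria 1)-3): block k's destination support is both the global
    and per-block maximum, exceeds T1, and beats the runners-up by T2/T3."""
    row = [S[(k, j)] for j in labels]
    col = [S[(i, 'destination')] for i in blocks]
    skd = S[(k, 'destination')]
    return (skd == max(row) == max(col) and skd > T1
            and skd - max_excluding(row, skd) > T2
            and skd - max_excluding(col, skd) > T3)

def remains_candidate(k, S, blocks, T4, T5):
    """Criteria 4)-5): minimum support and bounded gap to the leader."""
    Sd = max(S[(i, 'destination')] for i in blocks)
    return S[(k, 'destination')] > T4 and Sd - S[(k, 'destination')] < T5
```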
The compatibility rule module models knowledge about the two-dimensional layouts of labels on an image. The importance of spatial relationship knowledge is two-fold. First, it provides knowledge necessary to check the overall consistency of assigning labels to each component of an image. Second, it provides clues to predict the existence of other blocks when there is ambiguity due to noise or unusual appearance. Some examples of rules in this module are shown below. Each rule has a confidence value representing the degree of supporting or refuting a labeling hypothesis.

RULE (CM1):
IF: 1) The postage block has been found.
    2) A block is located either above or on the right hand side of the postage block.
THEN: Refute this block as destination address (1.0).

RULE (CM2):
IF: 1) The postage block has been found.
    2) Block "A" is located on the left hand side of the postage block.
    3) Block "A" lies below the postage block.
THEN: Support block "A" as return address (0.7).

4.2. Blackboard

Knowledge in the blackboard is stored in the block frame, the hypothesis frame, and the context frame. The block frame is used to represent the results of applying tools to an image. For each possible feature which can be extracted from an image by a tool, there is a corresponding slot in the block frame to record that feature value. An attribute with unknown value is filled with "nil". An example of a block frame used in ABLS is as follows:

('block ^id 4             ;unique id for this block.
        ^minx 250         ;minx, miny, maxx, and maxy
        ^miny 109         ;define the rectangular region
        ^maxx 362         ;enclosing
        ^maxy 148         ;this block.
        ^area 1132        ;the # of black pixels in a block.
        ^skew 1.2483874   ;the orientation of a block.
        ^lines 4          ;the # of text lines in a block.
        ^comps 48         ;the # of components in a block.
        ^grid 5           ;which grid this block lies on.
        ^left t           ;are text lines left justified?
        ^color white      ;the background color.
        ^formed nil       ;formed character printed?
        ^dot-matrix nil   ;dot matrix printed?
        ^hand nil         ;hand written?
        ^UV-orange nil    ;orange in ultra-violet image?
        ^rectangular nil  ;is this block rectangular?
)

The hypothesis frame is used to record the degree of support of labeling hypotheses for a candidate block. Since there are seven possible labels in ABLS, there are seven labeling hypotheses in each hypothesis frame. For each possible labeling hypothesis, there is a slot in the hypothesis frame to represent the degree of support and another slot to represent the degree of refutation, i.e., negation of this labeling hypothesis.

The context frame is used to represent the current situation. It is composed of three parts: candidate blocks, performance parameters, and the difference value of each feature. The candidate blocks are those blocks which remain under the evidence accumulated so far. The performance parameters represent an estimate of the difference between the current context and the goal, i.e., the difference between the termination condition and the current situation. The difference value of each feature represents the degree of difference of that feature between the most likely candidate block and the second most likely one. It provides important clues for selecting the next tool to be applied.

4.3. Tool Manager

Knowledge used in the tool manager is represented by a dependency graph and tool frames. The dependency graph is a directed AND-OR graph that specifies the order of applying specialized tools. An AND arc is composed of several arcs with a line connecting all of them. An arc with no line connecting it to any other arc is an OR arc. An AND arc may consist of any number of arcs, all of which must be activated in order to activate it. A node is in ready state if one of the AND or OR arcs entering this node is activated. Each node in the dependency graph represents the readiness of a tool. A tool is not ready to be applied unless its associated node in the dependency graph is in ready state. The selection of a tool will cause the following changes to its associated node in the dependency graph:

1) all the arcs emanating from this node are activated,
2) all the arcs entering this node are deactivated,
3) this node is switched to unready state.
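A loose Python sketch of these ready/select semantics follows; the representation of AND groups as tuples of source nodes is a simplifying reading of the structure described above, not the actual ABLS data structure.

```python
class DependencyGraph:
    """groups[node] lists arc groups; a group is a tuple of source nodes.
    A node is ready when every arc of some group is active (AND within a
    group, OR across groups)."""

    def __init__(self, groups, roots=()):
        self.groups = dict(groups)      # node -> [(src, ...), ...]
        for r in roots:                 # roots start with a satisfied dummy arc
            self.groups.setdefault(r, [("start",)])
        self.active = {("start", r) for r in roots}

    def ready(self, node):
        return any(all((s, node) in self.active for s in grp)
                   for grp in self.groups.get(node, []))

    def select(self, node):
        assert self.ready(node)
        for dst, grps in self.groups.items():   # 1) activate outgoing arcs
            if any(node in grp for grp in grps):
                self.active.add((node, dst))
        # 2)+3) deactivate incoming arcs, leaving the node unready
        self.active -= {(s, d) for (s, d) in self.active if d == node}

# e.g. the segmenter (4) needs labeling (3), which needs thresholding (1 or 2)
g = DependencyGraph({3: [(1,), (2,)], 4: [(3,)]}, roots=(1, 2))
g.select(1); g.select(3)
print(g.ready(4), g.ready(3))   # True False
```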
The AND-OR dependency graph of the current ABLS is given in Figure 3. The tool manager selects the tool to be applied next by first selecting those tools which are in ready state in the dependency graph. If there is only one tool in ready state, the selection is done; otherwise, the selection will be based on the benefit/cost estimates of those tools in ready state.

Knowledge about the selection and utilization of each tool is stored in the tool frame. Inside the tool frame there are five rule modules. The utility rule module contains knowledge about the intended purpose of its tool. The current context is used as fact input to this rule module to estimate the expected gains of using this tool. The cost rule module contains rules to estimate the cost of using this tool under the current context. The parameter setting rule module models the knowledge about the influence of parameter settings on the results. The results gathered from applying this tool are evaluated by rules in the results evaluation rule module. If the results are not satisfactory, the parameters will be changed and the tool will be reapplied. New evidence obtained is interpreted by rules in the results interpretation rule module. Each new piece of evidence generated by this rule module is associated with a confidence value to represent the degree of supporting or refuting a particular labeling hypothesis.

We will use the bottom-up segmenter to show some examples of rule modules. The input to the segmenter is the output of the connected component labeling tool (Figure 3). It extracts primitive features from the connected components. Using these connected components, regions with characteristics associated with the DAB are detected. This similarity measure takes into account unary conditions on a component such as stroke width and the dimensions of the component extent. If the unary conditions are within the desired limits, then binary tests are performed between other components that satisfy the unary test. These binary tests include the distance between the two components and whether the components contain a similar number of pixels. If these binary tests are successful, then a link is made between the tested components. After all pairs of components have been tested and linked, the resulting networks are called region adjacency graphs (RAGs), with each RAG representing a "block" of text. Examples of rule modules in the segmentation tool frame follow.

Figure 3: Dependency graph for specifying the order of applying specialized tools. An arc with double arrows represents two arcs pointing in opposite directions. Each arc can be individually activated or deactivated. Node numbers correspond to tools as follows: 0) image digitizer, 1) color thresholding, 2) adaptive thresholding, 3) connected component labeling, 4) segmenter, 5) shape analyzer, 6) texture discriminator for formed character vs dot matrix print, 7) regularity analyzer for machine printing vs hand writing, 8) text reader, 9) address syntax parser.

Utility Rule Module
RULE (SEGU1):
IF: 1) Segmentation tool has not yet been used.
THEN: Mark segmentation tool with maximum utility.

Cost Rule Module
RULE (SEGC1):
IF: 1) Segmentation tool has not yet been used.
THEN: The cost is equal to the entire size of the image times the estimated cost per square pixel.

Parameter Setting Rule Module
RULE (SEGP1):
IF: 1) Machine printing block is expected.
THEN: Set the unary size threshold to extract machine printing characters.
RULE (SEGP2):
IF: 1) Image type is medium resolution.
THEN: Set the binary distance threshold for medium resolution images.

Results Evaluation Rule Module
RULE (SEGE1):
IF: 1) Too many small blocks were segmented.
THEN: Resegment the image with a larger binary distance threshold.

RULE (SEGE2):
IF: 1) Too many large blocks were segmented.
THEN: Resegment the image with a smaller binary distance threshold.

Results Interpretation Rule Module
RULE (SEGI1):
IF: 1) The size of a block is reasonable.
    2) This block does not overlap with others.
THEN: Compute the confidence value of new evidence to either support or refute each labeling hypothesis of this block.

RULE (SEGI2):
IF: 1) The size of a block is reasonable.
    2) This block overlaps with an existing block.
THEN: 1) Merge the overlapped blocks together.
      2) Compute the confidence values of new evidence to either support or refute each labeling hypothesis of the merged block.

5. INTERPRETATION CYCLE

The interpretation cycle of ABLS is an integration of both bottom-up and top-down processing. Initially, one of the thresholding tools is chosen and applied to the entire mail piece image. The thresholded image is then processed by a connected component labeling tool, and bottom-up segmented into blocks using a segmenter tool. The physical attributes of a segmented block are then interpreted to generate evidence to either support or refute the block as being the DAB.

Since the global orientation of a mail piece image can affect the interpretation of the segmented blocks, it is important to know the correct global orientation of a mail piece image prior to the interpretation of the segmented blocks. ABLS assumes that there are only four possible global orientations for a mail piece with rectangular shape: the correct global orientation, or rotated by 90, 180, or 270 degrees. To begin, the mail piece orientation is unknown to ABLS. The location of the postage or meter mark may be able to help determine the correct global orientation of a mail piece, because 99% of mail pieces have the postage or meter mark in the upper right corner[7]. If the correct orientation of a mail piece cannot be determined prior to the interpretation of segmented blocks, ABLS will interpret each segmented block in all four global orientations. The correct global orientation is then assumed to be the orientation in which a segmented block obtains the maximum degree of support to be the DAB.

After the interpretation of the segmented blocks, the control strategy of ABLS can be summarized as follows:

• if only one segmented block satisfies the termination criteria, the DAB is considered found.
• if no candidate block satisfies the candidacy criteria, another thresholding tool is chosen and applied, and then the connected component labeling tool and the bottom-up segmentation tool are again used to generate more candidate blocks.
• otherwise, the tool manager will select and apply one of the specialized tools on those candidate blocks to generate more evidence to either support or refute a candidate block as being the DAB (top-down processing).
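Before moving to the experiments, the orientation-disambiguation step of the interpretation cycle can be summarized in one function; support_for is a hypothetical stand-in for running the interpretation rules on a block at a given rotation.

```python
def best_global_orientation(blocks, support_for):
    """Interpret each segmented block at the four possible rotations and
    take the orientation in which some block earns the highest degree of
    support for being the DAB, as described in Section 5."""
    return max((0, 90, 180, 270),
               key=lambda angle: max(support_for(b, angle) for b in blocks))
```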
6. EXPERIMENTAL RESULTS

The complex mail images in Figure 1(b-c) are used as examples to show how the DAB is located by using various tools. Figure 1(b) is the photopic image of a colorful magazine cover. First, the color thresholding tool is used. It thresholds the image in color (RGB) space to obtain a binary image (Figure 4(a)). The connected white regions are then extracted. The bounding rectangle of each white region is examined, and only those regions with reasonable size are retained as candidates. The shape analyzer is then applied to check the rectangularity of each white region. Only two rectangular white regions remain as candidates. Finally, the segmentation tool is applied to each rectangular white region; the number of text lines and character components provide further clues to distinguish the DAB from other candidates. Figure 4(b) shows that the DAB is correctly identified and extracted after applying the shape analyzer and segmentation tools.

Figure 1(c) is a photopic image of the cover of a mail-order catalog. The segmentation tool is first used to extract text blocks. Figure 5(a) shows the extracted text blocks. Only those blocks with reasonable size, length, height, and aspect ratio remain as candidates. Since all the segmented text blocks contain only machine printing, the texture discriminator tool for distinguishing formed character vs dot matrix print is applied to each candidate block. Text blocks which are dot matrix printed are more likely to be the DAB than those with formed character printing. Figure 5(b) shows that the DAB is correctly located and extracted after applying the segmentation and texture discriminator tools.

7. SUMMARY AND CONCLUSION

We have described the architecture of ABLS, a system to locate the DAB in a vast variety of mail piece images. The approach has been to utilize specialized tools to distinguish the DAB from other candidates. The framework is flexible enough to incorporate as many tools as possible into the system if experimental results can establish the usefulness of those tools. Knowledge about the selection and utilization of each tool is kept separately in each tool frame, except that an additional dependency graph is needed to specify their interdependency. The addition, deletion, or modification of a tool can only affect its associated tool frame and the dependency graph. ABLS is a system under development. Future extensions include not only incorporating more tools into the system, but also continuing refinement of existing tools.

Figure 4: (a) Color thresholding results for Figure 1(b). (b) The extracted DAB.
Figure 5: (a) The segmented text blocks of Figure 1(c). (b) The extracted DAB.

ACKNOWLEDGEMENTS

We are indebted to Jon Hull and Paul Palumbo for valuable suggestions and assistance. This work is made possible by the support and encouragement of Don D'Amato of ADL, and Marty Sack, Rick Glickman and Gary Herring of USPS.
REFERENCES

[1] Barnett, J. A., "Computational Methods for a Mathematical Theory of Evidence", Proc. 7th IJCAI, 1981, 868-875.
[2] Binford, T. O., "Survey of Model-Based Image Analysis Systems", International J. Robotics Res., 1, 1 (1982), 18-62.
[3] Garvey, T. D., Lowrance, J. D. and Fischler, M. A., "An Inference Technique for Integrating Knowledge from Disparate Sources", Proc. 7th IJCAI, 1981, 319-325.
[4] Institute, G. T. R., Automated Processing of Irregular Parcel Post: Letter Statistical Database, Electronics and Computer Systems Lab., Nov., 1985.
[5] Palumbo, P. W., "Two-Dimensional Heuristic Augmented Transition Network Parsing", Proc. 2nd IEEE-CS Conference on Artificial Intelligence Applications, Dec., 1985, 396-401.
[6] Srihari, S. N., Hull, J. J., Palumbo, P. W., Niyogi, D. and Wang, C. H., Address Recognition Techniques in Mail Sorting: Research Directions, Tech. Rep. 85-09, Dept. of Computer Science, SUNY at Buffalo, Aug., 1985.
[7] Srihari, S. N., Hull, J. J., Palumbo, P. W. and Wang, C. H., Address Block Location: Evaluation of Image and Statistical Database, Tech. Rep. 86-09, Dept. of Computer Science, SUNY at Buffalo, Apr., 1986.
[8] Swamy, P., Palumbo, P. and Srihari, S. N., "Document Image Binarization: Adaptive Thresholding Performance", Proc. SPIE Symposium on Digital Image Processing, San Diego, CA, Aug., 1986 (in press).
A SIGNAL-SYMBOL APPROACH TO CHANGE DETECTION

B. G. Lee, V. T. Tom and M. J. Carlotto
The Analytic Sciences Corp.
1 Jacob Way
Reading, MA 01867

ABSTRACT

A hybrid (signal-symbol) approach for detecting significant changes in imagery uses a signal-based change detection algorithm followed by a symbol-based change interpreter. The change detection algorithm is based on a linear prediction model which uses small patches from a reference image to locally model the corresponding areas in a newly acquired image, and vice versa. Areas that cannot be accurately modelled because some form of change (signal significant) has occurred are passed on to the change interpreter. The change interpreter contains a set of "physical cause frames" which attempt to determine if the change is physically nonsignificant (e.g., due to clouds, shadowing, parallax effects, or partial occlusion). Changes due to nonsignificant causes are eliminated from further consideration. If the physical cause of the change cannot be determined, it is passed on to an image analyst for manual inspection. Preliminary results of work in progress are presented. These results indicate that the methodology is extremely effective in screening out large portions of imagery that do not contain significant change as well as cueing areas which are potentially significant.

Key Words: Change Detection, Signal-Symbol Processing, Image Understanding, Image Analysis, Knowledge-Based Systems

1. INTRODUCTION

The ability to detect changes between two or more images of the same scene is important in fields such as aerial reconnaissance, remote sensing, and cartography. The image analyst, in looking for changes between images, is confronted with substantial variation in image quality, perspective and illumination differences, and image formats covering large geographic expanses. The time-consuming and tedious nature of this process is compounded by the low rate of occurrence of significant changes. As a result of these factors, the change detection problem has received considerable attention in the literature.

Previous efforts to automate change detection have focussed on implementations in either the signal or the symbolic domain. Signal change detection techniques produce a measure of dissimilarity between images by correlation techniques or image subtraction. In an early treatise, Rosenfeld (1961) outlined the principal steps involved in change detection and reviewed several measures of statistical correlation. NASA (1978) demonstrated the effectiveness of digital subtraction of Landsat multispectral imagery for monitoring land cover changes. Global subtraction highlights areas of change but also produces a large number of false alarms due to variations in image registration, sensor calibration, illumination and atmospheric conditions. In developing a pattern recognition system for city planners, Kawamura (1971) computed statistical difference features such as correlation coefficients, average entropy change, and the change in probability of bright areas over subareas in aerial imagery. Subareas were then classified as either a "change of interest" or "no change of interest" based on these features.

Additional studies have investigated the efficacy of performing change detection in the symbol domain. Price (1977) segmented two images into regions with similar characteristics (e.g., based on radiance and texture) and represented these regions by feature-based descriptions including information such as size, location, and geometric measures.
Change detection is accomplished during a matching process which computes the similarity between regions of the two images and pairs regions which are most similar. Regions which do not match represent the appearance or disappearance of a feature. While successful, the resolution of feature-based symbolic matching is limited by the granularity of the segmentation of the images into regions. Since many spurious regions are generated during image segmentation, the matching process can be computationally expensive. As a result, additional criteria such as size and average radiance should be used to organize the regions and guide the matching process (Price, 1982).

This paper outlines a hybrid change detection strategy which uses signal processing techniques to detect changes between registered images and symbolic reasoning methods to eliminate changes that are not physically significant. Our goal is to detect all local changes in the scene at the signal level and to filter out only those changes whose physical cause can be determined based on features of the changed areas. The proposed approach thus does not attempt to recognize and match objects in the two images. The advantage of this approach is that by using signal processing at the initial stage, when there is no evidence of a change at the signal level, symbolic processing is not invoked. When there are few changes, the computational efficiency of the technique is similar to pure signal-based techniques; when there are many changes, the computational efficiency of the technique is similar to pure symbol-based techniques.

The organization of the paper is as follows: Section 2 provides a framework for formulating the change detection problem. A signal-symbol architecture for change detection is outlined in Section 3. The signal change detection algorithm is detailed in Section 4, and a preliminary design for the knowledge-based change interpreter is discussed in Section 5. Initial results are presented in Section 6.

2. BASIS FOR CHANGE DETECTION

Ideally, an automatic change detection system should extract only significant changes between images. Exactly what is significant is often defined by the application. In the present application, localized man-made activities such as building construction and vehicle displacement, or large-scale non-seasonal changes in surface material characteristics (e.g., forest-fire damage and changes in flood zone areas), are considered to be significant changes. Nonsignificant changes include atmospheric effects such as the presence of clouds or haze, and seasonal changes which affect vegetation and surface characteristics. In addition, nonsignificant changes may be induced by comparing images acquired at different times and perspectives, and images which differ in contrast, resolution and noise level.

In order to develop a consistent framework for change detection, changes are modelled at three distinct levels: signal, physical, and semantic. In previous work in multi-band image processing (Tom, 1985), it was observed that images of the same scene acquired at different wavelengths (possibly by different sensors) at the same time tend to be locally correlated at the signal level.
That is, even though images sensed at different wavelengths may be globally uncorrelated, local structure (e.g., due to changes in albedo) tends to be highly correlated across wavelength. In previous applications this local correlation property has been exploited to use higher resolution/lower noise imagery to spatially enhance lower resolution/higher noise imagery. In applying the above technique to change detection, wavelength is replaced by time. The basic assumption then is that small patches in registered images acquired at different times tend to be locally correlated if the underlying scene has not changed.

The detection of changes in imagery at the signal level is the first step in the change detection process. The second step is determining whether the changes detected at the signal level are physically significant (i.e., determining their physical cause). Changes attributed to nonsignificant physical effects such as differences in atmospheric conditions, perspective and illumination differences, and seasonal changes are eliminated. The third step is determining whether the remaining changes are significant in a semantic sense given a context for an interpretation. For example, if the goal is to detect large areas of change due to forest fire damage, small isolated areas may be ignored. The overall process generates hypotheses that areas have changed using signal-based models in a bottom-up fashion, and tests the hypotheses top-down based on heuristic models of physical cause and semantic relevance.

3. CHANGE DETECTION SYSTEM ARCHITECTURE

A hybrid (signal-symbol) architecture for automatic image change detection is shown below in Fig. 1. Its primary function is to screen out imagery which does not contain significant change. The architecture is structured as a cascade of a signal-based change detector and a symbol-based change interpreter. At each level of processing, the amount of image data that needs to be processed is reduced. The change detector uses a locally adaptive image subtraction technique to detect and localize areas of change in an input image relative to one or more (spatially preregistered) reference images. Following adaptive subtraction, prediction error images are filtered and combined to produce change cues. The output of the signal change detector is a map of cues indicating signal-significant changes. For each change cue, descriptive processes build a symbolic representation of the changed area in terms of features derived from the original imagery. The change interpreter applies rules in a hypothesis-driven fashion to the change-tokens, determining the physical cause and semantic relevance of the change. Nonsignificant changes are eliminated, and the remainder are displayed to the image analyst. Currently, the change detection software is implemented on a VAX 780/FPS array processor system, and the change interpreter is implemented in Zetalisp on a Symbolics Lisp machine. Future versions of the system may factor collateral data (terrain data and maps) into the change detection process.

4. SIGNAL-BASED CHANGE DETECTION ALGORITHM

The change detection process is an outgrowth of a detection technique based on two-dimensional (2-D) linear prediction by Quatieri (1983). His technique demonstrated that
image backgrounds of grass, fields, or trees (natural textures) in aerial photographs could be viewed as sample functions of a 2-D nonstationary random field and could be modeled by 2-D linear models. Manmade objects, whose statistics are generally unknown (since it is desired to detect a broad class of objects), are not modeled well by the linear approach and exhibit large modeling errors. Quatieri's major contribution was the notion of using these linear prediction error residuals to derive a significance test for detection exhibiting a constant false alarm rate (CFAR detector). In addition to the detection of manmade objects, however, detection of natural boundaries also occurred. The approach in this paper overcomes that problem by using a multi-band approach, i.e., one in which a reference image is used to locally model a newly acquired image.

Figure 1: Overall change detection system concept.

The 2-D linear prediction approach involves solving for the optimal set of prediction coefficients that model a patch of a new image from a patch of a reference image using a noncausal mask. This procedure is recomputed for all patches of imagery (i.e., for a patch centered on each pixel location). In order to simplify computations, an approximation to the 2-D linear prediction method was implemented. The simplified method is appropriately termed the adaptive subtraction method. For a local patch of imagery, scale and offset coefficients are computed to optimally predict (in the minimum squared error sense) the new image from the reference and vice versa. The new image is predicted from the reference image (forward prediction), and the reference image is predicted from the new image (backward prediction). The prediction error is the difference between the estimate and the image patch that is being estimated at the center of the prediction mask:

e_forward(n,m)  = i_new(n,m) - î_new(n,m)
                = i_new(n,m) - [ a(n,m) i_ref(n,m) + b(n,m) ]

e_backward(n,m) = i_ref(n,m) - î_ref(n,m)
                = i_ref(n,m) - [ c(n,m) i_new(n,m) + d(n,m) ]

where the scale and offset coefficients a, b, c and d are continually computed by solving sets of overdetermined equations (Tom, 1985). Objects which appear or disappear in the imagery are evidenced by corresponding signatures in the forward or backward error images respectively. Objects which appear in the newly acquired image cannot be modeled by the reference and thus give rise to a large forward prediction error. (The backward prediction error is small since the absence of the object in the reference can be modeled in the newly acquired image by lowering the gain c and adjusting the offset d.) Where objects disappear in the newly acquired image, the situation is reversed. Objects that are spatially displaced are characterized by comparable signatures in both error images.

In the process flow of the signal-based change detection module (Fig. 1), the new image is first registered to the reference image by an automatic registration technique. The images are first coarsely registered given the camera position, and then locked together using a statistically based technique for generating control points automatically.
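A direct (and deliberately unoptimized) NumPy sketch of this prediction step: fit a scale and an offset over each local patch by least squares, then take the error at the patch center. Swapping the two image arguments yields the backward error. The window size and the brute-force loop are illustrative; the array-processor implementation surely differs.

```python
import numpy as np

def prediction_error(target, source, half=3):
    """Adaptive subtraction: predict `target` from `source` with local
    scale/offset coefficients fit over (2*half+1)^2 patches, returning
    target - (a*source + b) at each patch center."""
    target = target.astype(float)
    source = source.astype(float)
    h, w = target.shape
    err = np.zeros((h, w))
    for i in range(half, h - half):
        for j in range(half, w - half):
            s = source[i - half:i + half + 1, j - half:j + half + 1].ravel()
            t = target[i - half:i + half + 1, j - half:j + half + 1].ravel()
            A = np.column_stack([s, np.ones_like(s)])
            (a, b), *_ = np.linalg.lstsq(A, t, rcond=None)
            err[i, j] = target[i, j] - (a * source[i, j] + b)
    return err

# forward error:  e_f = prediction_error(new, ref)
# backward error: e_b = prediction_error(ref, new)
```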
Next, the adaptive subtraction module generates the forward and backward prediction error images. These error images are thresholded for significant detections at a given CFAR level, combined to cancel complementary errors due to minor displacements, and then filtered to remove isolated noise peaks. The output from the signal change detector is a bit map which delimits the extent of areas which have undergone some form of signal-level change (significant or not), as well as the corresponding registered imagery patches.

5. CHANGE INTERPRETATION

The output from the signal-based change detector is a map of change cues where each cue represents an assertion that something has changed over the corresponding area in the image pair. The goal of change interpretation is to reduce the number of detected changes that must ultimately be examined by the image analyst. Our approach is to eliminate those changes that are not significant based on physical causes or semantic relevance. The preliminary implementation of the change interpreter focusses on identifying three types of nonsignificant changes common to many aerial scenes: shadows, clouds, and partial occlusion of existing objects. Experience with different geographic scenarios indicates that a large majority of nonsignificant changes result from these phenomena.

Before the change cues can be interpreted, they must be converted into symbolic form. The first step in generating the symbolic description is to label connected areas in the map of change cues provided by the signal change detector. For each connected area, a change-token is created. Change-tokens contain slots for descriptive information (i.e., for features of the changed area) such as the size, shape, location, orientation and spatial context of the changed area, as well as information derived from the input and reference image (e.g., image radiance statistics and local correlation structure).

The change interpreter (Fig. 2) contains a set of "physical cause frames" for clouds, shadows, and partially occluded objects. Descriptive information is computed on an "as needed" basis as individual physical cause frames are triggered during the interpretation process. Each physical cause (cloud, shadow, partial or total occlusion) activates descriptors which extract features from the imagery in and around the corresponding change cue. Descriptors are applied in a hierarchical fashion based on the cost of computation and the degree of evidence they provide in determining a physical cause. The control strategy is designed to minimize the amount of computation needed to prove that a change is not significant. Coarse-level information is initially computed for all change-tokens. Change-tokens generate physical cause hypotheses which then attempt to verify that they are the cause of the change. If there is insufficient evidence to conclude the cause of a detected change, finer-level descriptive processes are dispatched. If the cause of the change cannot be determined, it is brought to the attention of the image analyst.

As an example, the interpretation process begins by computing simple feature descriptions of the change-token (area and radiance statistics) and generating hypotheses that the change is due to shadow or cloud.
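As a concrete reading of the token-building and first-cycle hypothesis steps, here is a hedged Python sketch. The slot names, thresholds, and the use of scipy.ndimage are all illustrative assumptions, not the frame schema described above.

```python
import numpy as np
from scipy import ndimage   # assumed available for connected components

def make_change_tokens(cue_map, new_img, ref_img):
    """One token per connected changed area, with illustrative slots."""
    labels, n = ndimage.label(cue_map)
    tokens = []
    for k in range(1, n + 1):
        mask = labels == k
        ys, xs = np.nonzero(mask)
        tokens.append({
            "area": int(mask.sum()),
            "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
            "mean_new": float(new_img[mask].mean()),
            "mean_ref": float(ref_img[mask].mean()),
            "hypotheses": {},       # physical-cause scores filled in below
        })
    return tokens

def first_cycle_hypotheses(token, cloud_free=False, same_sun_angle=False):
    """Coarse shadow/cloud scoring; every threshold here is invented."""
    if not same_sun_angle and token["area"] < 400 and token["mean_new"] < 40:
        token["hypotheses"]["shadow"] = 0.8    # small, dark area
    if not cloud_free and token["area"] > 2000 and token["mean_new"] > 200:
        token["hypotheses"]["cloud"] = 0.9     # large, bright area
    return token["hypotheses"]                 # empty -> dig deeper or refer
```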
If collateral information is available, the possibility of a shadow is eliminated entirely if the sun angle is the same in both images. Otherwise, the shadow hypothesis records a high confidence level if the change-token has a low average radiance measurement over a small area, with little variation in the spectral variance. The cloud hypothesis is eliminated if available collateral data indicate that the imaging conditions were cloud-free; otherwise, the cloud ruleset operates on the radiance statistics. The cloud hypothesis is verified by a relatively high radiance measure covering a substantial area. If there is high confidence that the change is cloud or shadow, the change-token is eliminated from further consideration. The cloud hypothesis should be either proved or eliminated within the first cycle of description/verification.

If weak evidence exists for shadow, secondary features are derived to verify that the change is the result of shadow or partial occlusion. The majority of change cues resulting from shadows mirrored about an object, or from minor shadow variation and parallax differences, are eliminated by locally averaging the prediction error residuals as described in the next section. The remaining shadow changes occur in only one image. Shadow confirmation may be obtained by using measurements such as correlations between areas on opposite sides of the shadow edge (Witkin, 1982), or by examining the shadow-making regions which have long boundaries in common with the shadow and are oriented at the appropriate sun angle (Nagao, 1980).

Because of differences in the look-angle of sensors, roads or buildings which are visible in one image may be occluded in the other image. The possibility of occlusion is explored if there is a change in camera position between acquisitions. If so, it is then necessary to decide if the occlusion is due to a significant object. As noted in Section 3, man-made objects are not modelled well by the linear approach and thus give rise to large modelling errors. Two types of changes occur: a man-made object occluded by a natural object, and a man-made object occluded by man-made objects. The former change is insignificant and is being examined because it frequently occurs as a result of natural object overlay, e.g., a tree obscuring one side of the road. In this case, partial occlusion can be identified by linear edges or regions which, when extended in the changed image, are similar to edges contained in the unchanged image.

As it is currently being developed, change interpretation must handle a variety of scenes from different geographic areas. Efforts are being made to structure the physical cause ruleset so that it is robust across all scenes. Scenario-specific rulesets are being developed for semantic-level interpretation, since the relevance of a change depends on what one is looking for in the imagery.

6. EXAMPLES

For the following two examples, aerial photographs were acquired from USGS and digitized using a CCD camera. The images are cloud-free, and were acquired at about the same time of day. The pair of images in Fig. 3 are of a scene in which a building not present in the reference image (a) appears in the newly acquired image (b). The images differ both in perspective and in the amount of haze present (which is simulated). The images are registered so that features on the ground are spatially aligned.
The pair of images in Fig. 5 show the prediction error obtained by predicting the image in Fig. 3a from that in Fig. 3b (5a), and the prediction error obtained by predicting the image in Fig. 3b from that in Fig. 3a (5b). (A 7x7 sliding window was used.) It is evident that prediction errors occur in the vicinity of the building which appeared in Fig. 3b, as well as around buildings and other vertical structures due to parallax effects.

To mitigate the effects of parallax and differences in illumination, as well as other effects due to minor misregistration and noise, the prediction error images are locally averaged. For parallax effects, the assumption is that the residual errors caused by vertical features will cancel within windows that are large compared to the feature of interest. The result of averaging the prediction error within a 33x33 Gaussian-tapered window (Fig. 4) shows that the parallax effects do in fact cancel in areas that did not change; however, a net prediction error residual is evident in the vicinity of the building that appeared in the new image.

The second example in Fig. 6 is of another scene in which a vehicle in (a) is missing in (b) and a building in (b) is missing in (a). By examining the sign of the prediction error one can identify objects that either appear or disappear between images. Fig. 7a shows an area of negative error caused by the disappearance of the vehicle in Fig. 6b. Fig. 7b shows an area of positive error caused by the appearance of the building in Fig. 6b.
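A sketch of the residual-averaging step used in the first example follows; the 33x33 window is from the text, while the Gaussian sigma and the plain nested-loop implementation are assumptions.

```python
import numpy as np

def gaussian_window(size=33, sigma=6.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def locally_averaged_error(err, size=33, sigma=6.0):
    """Average the signed prediction error under a Gaussian-tapered
    window; complementary parallax residuals cancel, while a genuine
    appearance or disappearance leaves a net residual."""
    k = gaussian_window(size, sigma)
    half = size // 2
    h, w = err.shape
    out = np.zeros_like(err, dtype=float)
    for i in range(half, h - half):
        for j in range(half, w - half):
            out[i, j] = np.sum(err[i - half:i + half + 1,
                                   j - half:j + half + 1] * k)
    return out
```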
[Figure 2: control program, "physical cause" frames, changed areas, and the symbol-based change interpreter.]

7. SUMMARY

A hybrid approach to detecting changes in imagery was described. It consists of a signal-based change detection algorithm which identifies all areas which have changed at the signal level (significant or not), and a symbol-based change interpreter which eliminates those areas caused by changes that are not physically significant or semantically relevant. Preliminary results of the signal change detection algorithm, and a discussion of the design of the change interpreter, were presented. Preliminary results indicate that the methodology is extremely effective in screening out large portions of imagery which do not contain significant change. On-going work focusses on expanding the rulebases within the change interpreter which reason about the physical cause and semantic relevance of the detected changes.

REFERENCES

[1] Rosenfeld, A., "Automatic Detection of Changes in Reconnaissance Data," Proc. 5th Conv. Mil. Electron., 1961, pp. 492-499.
[2] National Aeronautics and Space Administration, Goddard Space Flight Center, "Landsat Image Differencing as an Automated Land Cover Change Detection Technique," CSC/TM-78/6215, August 1978.
[3] Kawamura, J., "Automatic Recognition of Changes in Urban Development from Aerial Photographs," IEEE Trans. Systems, Man and Cybernetics, Vol. SMC-1, No. 3, July 1971, pp. 230-240.
[4] Price, K., "Change Detection and Analysis in Multispectral Images," Proc. of 5th International Joint Conference on Artificial Intelligence, 1977, pp. 619-625.
[5] Price, K., "Symbolic Matching of Images and Scene Models," Proc. of the Workshop on Computer Vision, 1982, pp. 105-112.
[6] Tom, V., Carlotto, M., and Scholten, D., "Spatial Sharpening of Thematic Mapper Data using a Multiband Approach," Proc. of Society of Photo-Optical Instrumentation Engineers, 1985, pp. 1026-1029.
[7] Quatieri, T., "Object Detection by Two-Dimensional Linear Prediction," Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, 1983, pp. 108-111.
[8] Witkin, A., "Intensity-Based Edge Classification," Proc. of American Association for Artificial Intelligence, 1982, pp. 36-41.
[9] Nagao, M., and Matsuyama, T., A Structural Analysis of Complex Aerial Photographs, Plenum Press, New York, 1980.

[Fig. 3 a, b: Aerial photography taken at different times.]
[Figure 4: Combined and filtered error.]
[Fig. 5 a, b: Forward and backward linear prediction error.]
[Fig. 6 a, b: Aerial photography taken at different times.]
[Fig. 7 a, b: Detected changes indicated in white.]
REASONING WITH SIMPLIFYING ASSUMPTIONS: A METHODOLOGY AND EXAMPLE

Yishai A. Feldman and Charles Rich
The Artificial Intelligence Laboratory, Massachusetts Institute of Technology
545 Technology Square, Cambridge, Mass. 02139
ARPANET: Yishai@MC, Rich@MC

Abstract

Simplifying assumptions are a powerful technique for dealing with complexity, which is used in all branches of science and engineering. This work develops a formal account of this technique in the context of heuristic search and automated reasoning. We also present a methodology for choosing appropriate simplifying assumptions in specific domains, and demonstrate the use of this methodology with an example of reasoning about typed partial functions in an automated programming assistant.

1 Simplifying Assumptions

Simplifying assumptions are a powerful technique for dealing with complexity, which is used in all branches of science and engineering. Stated informally, the basic idea of using simplifying assumptions is: Don't worry about the details until you have the main story straight.

For example, in working towards the solution of a difficult physics problem, it is often a good idea to begin by assuming the absence of friction and gravity. Using this simplified world model, it is much easier to explore and evaluate alternative solution approaches. The full complexity of the problem can then be re-introduced later, when you think you have found a viable approach. Similarly, if you are designing a complex software system, it makes sense to postpone consideration of issues like exception handling and round-off error until you have a design that is plausible with respect to the normal operation of the system.

The role of simplifying assumptions in various types of human problem solving has been studied in previous work [11,8,7]. The contribution of this work is to develop a formal account of this technique in the context of heuristic search and automated reasoning, and to present a methodology for choosing appropriate simplifying assumptions in specific domains.

(This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505, in part by National Science Foundation grant MCS-8117633, and in part by the IBM Corporation. Yishai Feldman was supported by a grant from the Bantrell Charitable Trust. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the policies, either expressed or implied, of the Department of Defense, of the National Science Foundation, or of the IBM Corporation.)

The techniques discussed here are also closely related to techniques for reasoning with default assumptions, and non-monotonic reasoning generally. However, whereas most of the current work in this field (see [2,1]) focusses on the logical properties of these types of reasoning, this work emphasizes methodological and pragmatic issues. In particular, other current work does not address the questions of how to choose default assumptions and what specific control mechanisms are necessary to reason effectively with such assumptions.
1.1 Heuristic Search

Many types of problem solving can be viewed abstractly as search procedures in which part of the evaluation function involves proving that some logical condition follows from a set of premises defined by the current search state, using a set of axioms which embody the problem solver's "theory of the world." In such situations, the theorem-proving component of the evaluation function is often the dominant cost in the search.

For example, program synthesis can be viewed as searching the space of possible programs (or partial programs) for one that satisfies a given specification and scores well on other evaluation criteria, such as time, space, etc. The premises in each search state encode the structure of the current program candidate. The condition to be verified is the program's specification. The axioms used by the problem solver embody the theory of the various symbols used in defining programs and specifications.

If problem solving is viewed this way, the use of simplifying assumptions amounts to substituting a simplified world theory (set of axioms) for the "correct" one during the search process. When a promising candidate is found using the simplified theory, it is then checked using the full theory. The two key properties of a simplified theory are:

- Proving the relevant conditions from the given premises should be less expensive than in the full theory.
- The answers given by the simplified theory should be good predictors of answers in the full theory. (The formal logical relationship between simplified and full theories is discussed below.)

Simplifying assumptions are thus a kind of heuristic, i.e., task-dependent information which reduces search effort. As with many heuristic search methods, the use of simplifying assumptions reduces search cost, but not without sacrificing the guarantee of finding an optimal solution. Furthermore, the savings in effort is usually seen only in the average over some class of problem instances.

For example, an important part of the theory underlying program synthesis is the theory of typed partial functions (this example is developed in detail in the second half of the paper). The full theory of typed partial functions is somewhat complex, involving the instantiation of two axiom schemas for each function application. Most of this complexity, however, has to do with worrying about what happens when one of the arguments to the function is outside the domain. A simplified theory assumes that the value of a function application is defined, and therefore is in the range of the function. The methodology described in the next section addresses the key issue of how to derive such simplified theories in general.

1.2 A Methodology for Simplification

The essential basis of the simplification methodology is an analysis of the relationship between the full theory, the simplified theory, and the problem being solved. Figure 1 shows three possible logical relationships between a simplified theory and a full theory.

[Figure 1: Logical relationship between simplified theory and full theory.]

The top left diagram illustrates the case of a simplified theory which is strictly weaker than the full theory (i.e., fewer things are true). This is the easiest case to deal with, corresponding to the common optimization of a two-stage evaluation function. The first stage is the simplified theory, which filters out many candidates at a low cost. The second stage is the more expensive full theory, which is applied only to those candidates that pass the cheaper test. Furthermore, using the dependency-directed techniques described in the following section, proofs developed in the simplified theory are reused in the full theory, if possible.

Unfortunately, in many situations, including the example of typed partial functions, it is not possible to strictly weaken the theory without losing the ability to prove anything useful at all. Also, there is no intrinsic correlation between the "size" of a theory and the cost of proving theorems in that theory (viz. the empty theory and the theory in which everything is true, both of which have trivial proof procedures). Thus in many situations a strictly weaker theory can be more expensive to compute with (for example, because all of the conclusions have more qualifications).

The top right diagram of Figure 1 illustrates a simplified theory which is strictly stronger than the full theory (i.e., more things are true). Intuition for this case can be gained by considering a theory embodied in a set of axioms, each of which is in the form of an implication. Suppose also that the consequent of each implication is a useful conclusion, i.e., a proposition which is likely to advance the reasoning process toward verifying typical search conditions. A simplified theory is one in which we replace each implication by its consequent. Clearly this theory is cheaper than the full theory. However, this strategy can easily lead to contradictions, i.e., an inconsistent theory in which everything is provable.

This brings us to the bottom diagram of Figure 1, in which the two theories mostly overlap, but sometimes differ. We believe this is the most typical case. (The two theories of typed partial functions have this relationship.) Our methodology in this case is based on classifying the propositions appearing in the axioms of the full theory into the following two categories:

- Propositions which, if true, are likely to advance the reasoning process further, e.g., by interaction with other theories. We call these the main conclusions of the theory.
- Propositions which are normally true, but concern details that can usually be ignored in the first-cut evaluation. We call these the default assumptions of the theory.

If it is not possible to make these distinctions with some confidence, then the methodology is not applicable to the particular axioms. In order to apply the methodology, it must also be the case that, in the full theory, the default assumptions imply the main conclusions. (Note that it sometimes helps to restate the axioms of a theory in logically equivalent forms in order to facilitate this analysis.)

The axioms of the simplified theory are taken to be the collection of main conclusions. This causes the simplified theory to be partly stronger than the full theory. It is not necessary for all of the propositions to fall into either of the two categories above. For example, there may be propositions which are implied by the default assumptions, but which are not likely to advance the reasoning process or may cause contradictions which do not exist in the original theory. The axioms containing these propositions are omitted from the simplified theory. This causes the simplified theory to be partly weaker than the full theory.

The simplified theory resulting from applying this methodology will be cheaper than the full theory because the deduction required to prove the main conclusions is saved. The simplified theory will be a good predictor of the full theory to the extent the intuitions about the "normal case" are sound.
1.3 Reasoning Facilities

The efficient implementation of the methodology above requires a reasoning system which provides several important control facilities. This section describes these facilities in the abstract. An example of such a reasoning system is described in more detail in the second half of the paper.

The most important control facility which the reasoning system must provide is retraction. Once a condition has been proved using the simplified theory, the reasoning system must be able to undo the proof and try it again using the full theory. It would also be beneficial if the retraction process was incremental (i.e., the system could use the full theory for some objects under discussion, but not necessarily all) and if the system could exploit parts of the proof which carry over from the simplified to the full theory.

A second important control facility is the ability to handle contradictions explicitly. Despite consideration in the methodology towards keeping a simplified theory internally consistent, it is still possible for interactions between different theories to cause a contradiction. In this situation, the problem solver needs to explicitly detect that the heuristic approach has failed and fall back to the full theories, as opposed to proving everything true.

A mechanism which supports both retraction and contradiction handling is the use of explicit dependencies [10]. Dependencies are relations between assertions, which encode proof trees. For each true assertion in the data base, the dependencies record the set of antecedents and the inference rule (axiom) used to deduce it. Premises have the empty set of dependencies.

Dependencies make it possible to retract the truth of an assertion whenever, due to changing circumstances or decisions, one of its antecedents is retracted. If the antecedents later become true again, the dependencies are used to reestablish the consequent assertion without having to rediscover the proof. Dependencies also provide a framework within which to analyze and respond to contradictions, such as by choosing a premise to retract. As an additional benefit, dependencies can also be used to help explain the reasons for the system's conclusions.

In reasoning with simplifying assumptions, the main conclusions of the simplified theory are initially installed as premises. When the axioms of the full theory are installed, these premises are retracted. If the main conclusions can be proved from the full axioms, then the dependencies will cause the previous proof between the main conclusions and the search condition to be reused. Furthermore, since most theories are in the form of universally quantified facts which are instantiated for each object under discussion, there is a separate set of premises for each object, which can be separately retracted.
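A toy rendering of such a dependency mechanism in Python follows; this is our own sketch, not the cited systems, and it assumes acyclic proofs.

class Assertion:
    def __init__(self, name):
        self.name = name
        self.true = False
        self.antecedents = []   # assertions this one was deduced from
        self.consequents = []   # assertions deduced (in part) from this one

def deduce(assertion, antecedents):
    # Record the proof as explicit dependencies.
    assertion.true = True
    assertion.antecedents = list(antecedents)
    for a in antecedents:
        a.consequents.append(assertion)

def retract(assertion):
    # Withdrawing an assertion withdraws everything resting on it;
    # the dependency links are kept so the proof can be reestablished.
    assertion.true = False
    for c in assertion.consequents:
        if c.true:
            retract(c)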
2 An Example

The example we describe here takes place within the context of building an interactive programming aid, called the Programmer's Apprentice (PA) [5,12]. The overall philosophy of this project and some of the specific technical decisions are important to understanding and motivating the approach we have taken to automated reasoning.

A fundamental tenet of the PA project is that program development, like other engineering activities [6], is an evolutionary process. This means that change is the predominant feature of the process: specifications change, design decisions change, bugs are discovered and corrected, and so on. Furthermore, this evolutionary nature is an intrinsic property of large software systems. It is not possible for the designers or potential users of a large system to foresee all of the opportunities for the system's use. Also, the environment in which the system operates is itself subject to change. New regulations, business practices, and technology appear and force modifications to the system.

An implication of this view of the programming process is that, independent of the use of simplifying assumptions, the reasoning component of the PA must support retraction and must be tolerant of contradictions.

2.1 The Full Theory of Typed Partial Functions

Partial functions are an important mathematical construct used to model and reason about the behavior of programs. For example, one of the fundamental properties of computer programs is that sometimes they do not terminate. If we view a program as a function from the inputs to the outputs, such a function will be undefined on those inputs for which it never terminates. Another important use of partial functions is to represent errors, such as division by zero, or an array reference out of bounds.

For simplicity of presentation we will discuss a function of two arguments; the extension to the general case will be obvious. As in the usual formulation of partial functions, we introduce an undefined value ⊥. Let U be the set {⊥}, and D be the complement of U, namely the set of defined objects in the universe. Let f be a function from A × B to the range C. One implication of the functionality of f is that if its arguments are in the domains, then its value is in the range. Our first axiom is therefore

A1. f ∈ D ∧ x ∈ A ∧ y ∈ B ⟹ f(x,y) ∈ C.

Note that this axiom includes as one of its antecedents the condition that f is defined. This is because we allow terms in the logic in which the operator is itself a function application and may therefore be undefined. The domains and range of a function term are a syntactic property of the term, interpreted to mean that if the symbol is defined, then it has that functionality.

In this formulation, a total function is a function whose range includes only defined objects, i.e., C ⊆ D; a partial function includes undefined values in its range, i.e., U ⊆ C. For example, the functionality of integer addition (+) is Integer × Integer → Integer. The functionality of integer division (/) is Integer × Integer → Integer ∪ U, because the result of dividing by zero is undefined.

An application may also be undefined because one of its arguments is not an element of the corresponding domain, or because the operator is undefined. In some systems, these are treated as syntax errors. However, in our context, since decisions about the properties of objects may change over time, we need to treat these cases within the logic. Our second axiom is therefore

A2. f ∉ D ∨ x ∉ A ∨ y ∉ B ⟹ f(x,y) ∈ U.

This axiom may or may not be stronger than the converse of A1, depending on whether or not f is total.
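The two axioms can be illustrated with integer division, modelling the undefined value as a sentinel object; the sketch below uses our own naming conventions, not the paper's formalism.

BOTTOM = object()   # stands in for the undefined value ⊥

def div(i, j):
    # A2: an undefined or outside-domain argument makes the application
    # undefined. A1: defined arguments inside the domains give a defined
    # result in the range.
    if i is BOTTOM or j is BOTTOM:
        return BOTTOM
    if not isinstance(i, int) or not isinstance(j, int) or j == 0:
        return BOTTOM
    return i // j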
2.2 Cake

In order to evaluate the cost benefit of simplifying the theory above, we first need to introduce some further specifics of the reasoning engine we are using. The reasoning component of the PA is called Cake [4]. It incorporates most of the algorithms of McAllester's Reasoning Utility Package [3], such as unit propositional resolution and congruence closure, plus additional decision procedures for some basic algebraic structures, such as partial orders, lattices, and boolean algebras.

The fundamental data structure in Cake is the term. Terms are composed of subterms in the usual recursive way. Non-atomic terms are called applications. Note that the operator of an application may also be an application. Terms are indexed into a data base which provides input canonicalization (as in a symbol table) and simple associative retrieval.

The basic inference mechanisms of Cake are propositional. Each boolean-valued term (proposition) in the data base is associated with a truth value, which is either true, false, or unknown. Propositions are connected into clauses, which are the axioms of the system. A clause is a list of literals, each of which contains a proposition and a sign specifying whether that proposition appears positively or negatively. A literal is said to be satisfied either if the proposition appears positively and its truth value is true, or if the proposition appears negatively and its truth value is false. A literal is unsatisfiable either if the proposition appears positively and its truth value is false, or if the proposition appears negatively and its truth value is true.

Deduction occurs when all but one of the literals in a clause are unsatisfiable and the proposition of the remaining literal has the unknown truth value. In this case, the truth value of this proposition is set to the value (true or false) which will satisfy the literal, with dependencies on the other propositions in the clause. If all the literals in a clause are unsatisfiable, a contradiction is signalled, invoking a higher level control structure to decide what to do.

For example, the axiom P ∧ Q ⟹ R ∧ S would be installed in Cake by reducing it to conjunctive normal form, giving rise to the two clauses (¬P, ¬Q, R) and (¬P, ¬Q, S). If P and Q are true, the system will deduce that R is true and S is true. If R is false and Q is true, the system will deduce that P is false, and so on. The system also supports retraction (setting the truth value of a proposition to unknown) using the dependencies. For example, if R is deduced from P and Q using the first clause above, then if either P or Q is retracted, the system will retract R.

Quantified knowledge is expressed in Cake using the technique of pattern-directed invocation of procedures (demons). Each term in the data base can have associated with it a procedure called a noticer. Whenever a new application is created, the noticer associated with the operator term is invoked with the application as its argument. A typical use of such a noticer is to instantiate an axiom schema using the arguments of the application.

The only additional mechanism of Cake that needs to be specified before building an implementation of partial functions is the type algebra. Since the notion of data types is ubiquitous in reasoning about programs, we decided to base Cake on a typed logic. Types in this logic are total functions from D ∪ U (the universe of all terms) to Boolean. Types form a boolean algebra with the usual operators of meet, join, and complement. There are special-purpose mechanisms in Cake for performing inferences based on this structure. For example, if T is a subtype of (subsumed by) T′, then T′(x) follows from T(x).
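The clause mechanism can be mimicked in a few lines. The following sketch of unit propositional resolution uses our own representation (literals as (proposition, sign) pairs and truth values True/False/None), not Cake's data structures.

def propagate(clauses, truth):
    # truth maps proposition -> True, False, or None (unknown).
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(truth[p] == s for p, s in clause
                   if truth[p] is not None):
                continue                      # some literal is satisfied
            unknown = [(p, s) for p, s in clause if truth[p] is None]
            if not unknown:
                raise ValueError("contradiction: all literals unsatisfiable")
            if len(unknown) == 1:
                p, s = unknown[0]
                truth[p] = s                  # force the remaining literal
                changed = True
    return truth

# P and Q imply R and S, as the clauses (not P, not Q, R), (not P, not Q, S):
# propagate([[("P", False), ("Q", False), ("R", True)],
#            [("P", False), ("Q", False), ("S", True)]],
#           {"P": True, "Q": True, "R": None, "S": None})
# sets R and S to True.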
2.3 The Cost of the Full Theory

An implementation of the full theory of typed partial functions can now be defined as follows. The basic idea is to instantiate axioms A1 and A2 for each application of f. The domains and range of f are implemented as type predicates. The set D is implemented as the type Defined. (All the usual data types such as Integer, Boolean, etc., are subtypes of Defined.) We install a noticer on the term f which, given a new application f(x,y), creates the following clauses:

(¬Defined(f), ¬A(x), ¬B(y), C(f(x,y))),
(¬Defined(f(x,y)), Defined(f)),
(¬Defined(f(x,y)), A(x)),
(¬Defined(f(x,y)), B(y)).

The cost of this implementation can be measured roughly as the number of new data structures created per new application, namely, five new terms and four new clauses, containing ten literals. These new data structures translate into a corresponding computational cost because, generally speaking, the amount of computation in the reasoning component increases strongly with the number of terms and clauses. In particular, we have found from experience that, due to the activities of the congruence closure algorithm, it is particularly important to control the number of terms created in the system.

A striking feature of this straightforward implementation is that half the literals in the clauses above involve terms with the operator Defined. Thus we argue that, especially within the context of evolutionary design, the system is spending a disproportionate amount of its effort worrying about the details of whether things are defined or not.

2.4 Applying Simplifying Assumptions

The application of the methodology to the theory of typed partial functions breaks into two cases, corresponding to whether f is total or partial. Let us first consider the case when f is total. In this case, the main conclusion of axioms A1 and A2 is f(x,y) ∈ C. This fact is likely to advance the reasoning by eliminating cases or triggering specialized information about the elements of C. For example, C might be the set of positive integers, and the term f(x,y) might appear in a conditional expression of the form if f(x,y) > 0 then ... else ....

The default assumption of the theory is f(x,y) ∈ D. The "normal" state of affairs in reasoning is that most expressions are defined. The cases wherein certain terms are undefined can safely be considered "detail to be treated later." Note that this default assumption does imply the main conclusion above, as required (this is easy to see by considering the contrapositive of A2).

The role of the remaining propositions in A1 and A2, namely f ∈ D, x ∈ A, and y ∈ B, is interesting to consider for a moment. Logically, these propositions are in fact implied by the default assumption. However, we have chosen not to consider them as main conclusions. The reason for this is that this information is not intrinsic to the form of the terms f, x, or y, but rather to their appearance in a certain context. For example, the same variable x may appear in two applications with different operators having disjoint domains. Although this may not be a contradiction once the details of the reasoning are considered (the two applications may be on opposite sides of a conditional expression which tests the type of x), making these propositions part of the simplified theory could force the system to immediately invoke the details to resolve the contradiction, thereby defeating the whole purpose of the strategy.
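The bookkeeping cost can be made explicit with a schematic sketch; this is our rendering with propositions as strings, not Cake's code.

def full_theory_clauses(f, x, y):
    # The straightforward noticer: five terms, four clauses, ten
    # literals per application f(x, y).
    fxy = "%s(%s,%s)" % (f, x, y)
    dfxy = "Defined(%s)" % fxy
    return [
        [("Defined(%s)" % f, False), ("A(%s)" % x, False),
         ("B(%s)" % y, False), ("C(%s)" % fxy, True)],   # axiom A1
        [(dfxy, False), ("Defined(%s)" % f, True)],      # from axiom A2
        [(dfxy, False), ("A(%s)" % x, True)],
        [(dfxy, False), ("B(%s)" % y, True)],
    ]

def simplified_premise(f, x, y):
    # The simplified noticer: a single marked premise, details deferred.
    return ("C(%s(%s,%s))" % (f, x, y), True)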
The case when f is partial has an additional wrinkle. As mentioned above, for partial functions U ⊆ C. In this case, the proposition f(x,y) ∈ C is not likely to advance the reasoning process. For example, knowing that f(x,y) ∈ Integer ∪ U is not as useful as knowing that f(x,y) ∈ Integer. We therefore restate the theory in a logically equivalent form for this case, by replacing axiom A1 with the following simpler axiom, where C′ is the set C ∩ D (i.e., subtracting out undefined):

A1′. f(x,y) ∈ D ⟹ f(x,y) ∈ C′.

We now take the proposition f(x,y) ∈ C′ as the main conclusion in this case. Note that it is implied by the same default assumption as above, namely f(x,y) ∈ D.

2.5 Implementation

After the theory has been analyzed according to the methodology, an efficient implementation in Cake was achieved as follows. We install a noticer on the term f which, given a new application f(x,y), creates the term for the main conclusion and makes it a premise. If f is total, this premise is simply C(f(x,y)). If f is partial, the procedure computes the type C′, obtained by intersecting C with Defined,¹ and installs the premise C′(f(x,y)). These premises are also marked by the system as being supported by (implicit) simplifying assumptions.

Thus in the first stages of reasoning, we create only a single term, as compared to the five terms and four clauses of the straightforward implementation. Furthermore, if the main conclusion was well chosen, this premise may advance the reasoning enough to decide to abandon this path regardless of the details.

We also define an operation on premises called discharging. When a premise is discharged, its truth value is retracted and, if it is marked as being supported by simplifying assumptions, a procedure is run to instantiate the rest of the underlying axioms.² In the case of total functions, discharging the premise causes the same five terms and four clauses to be created as described in the straightforward implementation. In the case of partial functions, A1′ is instantiated instead of A1, giving rise to the following clause:

(¬Defined(f(x,y)), C′(f(x,y))).

The total number of terms and clauses eventually created in this case is the same as in the total function case. Notice that the term C(f(x,y)) is never created in this case, since it is not usually a useful fact. If, however, this term is created by some other procedure, its truth is provable by the mechanisms of the type lattice from the axioms instantiated here.

¹This computation is possible since the type hierarchy has been made non-retractable for efficiency reasons. We have implemented a special data structure in the type lattice to support this computation.
²Discharging also removes the simplifying assumptions mark, to avoid instantiating the same axioms twice.

Discharging of premises supported by simplifying assumptions can occur in a number of ways. First, the higher level control structure may decide, for its own reasons, that now is the time to pursue the details. For example, the current design may look good enough to warrant spending additional resources working it through. Alternatively, a contradiction may be detected involving some of the marked premises. Rather than simply abandoning one of the current set of premises, the contradiction handler may decide that the contradiction is only apparent and can be resolved by descending to the next level of detail.
Finally, we install a noticer on the term Defined which, given a new application of the form Defined(f(x,y)), discharges the premise C(f(x,y)) or C′(f(x,y)), depending on whether f is total or partial. This noticer embodies the heuristic that when you actually create the term for the default assumption, it means you want to begin to consider the details.

We conclude this section with a brief example using the partial function /, with functionality Integer × Integer → Integer ∪ U. In addition to knowing the functionality of /, let us assume the system also has the following axiom about the behavior of the function.

D1. i ∈ Integer ∧ j ∈ Integer ∧ j ≠ 0 ⟹ i/j ∈ Integer.

Applying the methodology of simplifying assumptions to this "theory", we decide that the main conclusion is i/j ∈ Integer, and that the default assumptions are i ∈ Integer, j ∈ Integer, and j ≠ 0. This is implemented by installing a noticer on / which, given a new application i/j, creates the premise for the main conclusion and marks it with the axiom D1 to be instantiated when this premise is discharged. Notice that the same proposition can be the main conclusion of more than one theory. Thus when a premise is discharged, more than one group of underlying axioms may be triggered.

Now suppose that the term i/j is created. By the procedures described above, this will cause the term Integer(i/j) to be created and made a premise. Using this premise, the system may proceed, without stopping to prove that i and j are integers and that j ≠ 0. For example, further reasoning may reveal that the computation involving i/j is wrong and that this term should be i/(j + 1) instead.

If and when the premise Integer(i/j) is discharged, the following clauses will be installed due to the theory of partial functions:

(¬Defined(i/j), Integer(i/j)),
(¬Defined(i/j), Defined(/)),
(¬Defined(i/j), Integer(i)),
(¬Defined(i/j), Integer(j)),

and the following clause due to the theory of /:

(¬Integer(i), ¬Integer(j), ¬(j ≠ 0), Integer(i/j)).
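A toy trace of this flow (again our sketch, not Cake): noticing i/j installs the marked premise, and discharging it retracts the truth value and instantiates the deferred axiom groups.

premises = {}   # proposition -> truth value (None = unknown)
marks = {}      # proposition -> deferred axiom groups

def notice_division(i, j):
    p = "Integer(%s/%s)" % (i, j)
    premises[p] = True                           # main conclusion as premise
    marks[p] = ["partial-function axioms", "D1"]

def discharge(p, install_clauses):
    premises[p] = None                           # retract the truth value
    for group in marks.pop(p, []):               # also removes the mark
        install_clauses(group)                   # next level of detail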
3 Conclusions

We have described a general methodology for using simplifying assumptions in automated reasoning, and have illustrated its application to the implementation of a theory of typed partial functions in the context of evolutionary program development. We believe this methodology can profitably be applied in other areas of reasoning.

The next area in which we plan to apply the methodology is reasoning about side effects. To simplify the first stages of reasoning in this context, it is important to make the default assumption that there is no aliasing (i.e., two variables do not hold pointers to the same data structure or parts of the same data structure). Shrobe [9] has taken a similar approach in this area.

As the reasoning component of the PA develops with many different kinds of simplifying assumptions for different purposes, we imagine the reasoning process will begin to resemble "peeling the layers of an onion." Discharging one level of premises will cause the next lower level of detail to be instantiated, which may have its own simplifying assumptions, and so on. For example, in the reasoning involving applications of / above, we might in fact want to install control mechanisms to allow instantiation of the details of the partial function theory, while keeping the assumption j ≠ 0.

Another direction of future work we would like to mention here is to partition the undefined type into different sub-types to represent different kinds of exceptional conditions. For example, the term 5/0 is undefined for a different reason than 5/"hello" is undefined, which is different again from the reason that the output of a non-terminating computation is undefined. We expect that the PA will be able to take advantage of these distinctions. Note that this extension would require some modifications to the axioms presented in the paper and to the definitions of partial versus total functions.

Acknowledgements

The authors would like to thank David Chapman and Dick Waters for their help in working out some of the ideas in this paper.

References

[1] AAAI Workshop on Non-Monotonic Reasoning, New Paltz, NY, October 1984.
[2] Artificial Intelligence, Vol. 13, No. 1-2, Special Issue on Non-Monotonic Logic, April 1980.
[3] McAllester, D. A., "Reasoning Utility Package User's Manual", MIT Artificial Intelligence Lab. Memo 667, April 1982.
[4] Rich, C., "The Layered Architecture of a System for Reasoning about Programs", Proc. of the 9th Int. Joint Conf. on Artificial Intelligence, Los Angeles, CA, August 1985.
[5] Rich, C., and H. Shrobe, "Initial Report on a Lisp Programmer's Apprentice", IEEE Trans. on Software Eng., Vol. 4, No. 6, November 1978.
[6] Rich, C., H. E. Shrobe, R. C. Waters, G. J. Sussman, and C. E. Hewitt, "Programming Viewed as an Engineering Activity" (NSF Proposal), MIT Artificial Intelligence Lab. Memo 459, January 1978.
[7] Rich, C., and R. C. Waters, "The Disciplined Use of Simplifying Assumptions", Proc. of ACM SIGSOFT Second Software Engineering Symposium: Workshop on Rapid Prototyping, ACM SIGSOFT Software Engineering Notes, Vol. 7, No. 5, December 1982.
[8] Sacerdoti, E. D., "Planning in a Hierarchy of Abstraction Spaces", Artificial Intelligence, Vol. 5, No. 2, 1974.
[9] Shrobe, H. E., "Common-Sense Reasoning About Side Effects to Complex Data Structures", Proc. of 6th Int. Joint Conf. on Artificial Intelligence, Tokyo, Japan, August 1979.
[10] Stallman, R. M., and G. J. Sussman, "Forward Reasoning and Dependency Directed Backtracking in a System for Computer-Aided Circuit Analysis", Artificial Intelligence, Vol. 9, October 1977, pp. 135-196.
[11] Sussman, G. J., "The Virtuous Nature of Bugs", Proc. Conf. on Artificial Intelligence and the Simulation of Behavior, U. of Sussex, July 1974.
[12] Waters, R. C., "The Programmer's Apprentice: A Session with KBEmacs", IEEE Trans. on Software Eng., Vol. 11, No. 11, November 1985.
DOMAINS IN LOGIC PROGRAMMING

European Computer-Industry Research Centre (E.C.R.C.)
Arabellastr. 17, D-8000 Munich 81, West-Germany

ABSTRACT. When confronted with constraint satisfaction problems (CSP), the "generate & test" strategy of Prolog is particularly inefficient. Also, control mechanisms defined for logic programming languages fall short in CSP because of their restricted use of constraints. Indeed, constraints are used passively for testing the generated values and not for actively pruning the search space by eliminating combinations of values which cannot appear together in a solution. One remedy is to introduce the domain concept in logic programming languages. This allows for an active use of constraints. This extension, which does not impede the declarative (logic) reading of logic languages, consists in a modification of the unification, the redefinition of the procedural semantics of some built-in predicates (≠, ≤, <, ≥, >) and a new evaluable function, and can be implemented efficiently without any change to the search procedure and without introducing a new control mechanism. Look-ahead strategies, more intelligent choices and consistency techniques can be implemented naturally in programs. Moreover, when combined with a delay mechanism, this leads directly to a strategy which applies active constraints as soon as possible.

1. Motivations

As Prolog is applied to more and more areas, inadequacies of its search procedure appear and, although there were substantial efforts to develop powerful control mechanisms, the proposed solutions are not entirely satisfactory for different kinds of problems. This is the case of constraint satisfaction problems (CSP). A constraint satisfaction problem can be defined as follows. Assume the existence of a finite set J of variables {X1, X2, ..., Xn}, and suppose each variable Xi takes its values from a finite set Di called the domain of the variable. A constraint C can be seen as a relation on a non-empty subset I = {Y1, Y2, ..., Ym} of J which defines a set of tuples <u1, ..., um>. The constraint satisfaction problem is to determine all the possible assignments of values to variables such that the corresponding value assignments satisfy the constraints. The class of CSP is related to many problems in AI like logical puzzles, scene labeling, graph isomorphisms, graph colouring and propositional theorem proving, among others.

The simple backtrack search (depth-first search with chronological backtracking) Prolog uses is very inefficient for this class of problems. Powerful control mechanisms can reduce the search space, but they fall short in CSP because of their restricted, passive use of the constraints. Indeed, these mechanisms can support coroutining, which is based on the "apply tests as soon as possible" heuristics, which is not the best-suited one for this class of problems. With these mechanisms, a constraint is tested as soon as its variables have received their values. Thus, the search space is only reduced in an "a posteriori" way, after the discovery that the generated values do not satisfy a constraint. The main drawbacks of such an approach are the continual rediscovering of the same facts and the pathological behaviour of (chronological) backtracking. See (Mackworth, 1977) for a convincing example. Intelligent backtracking is a remedy to this state of affairs but does not attack the real cause of the problems and introduces an important overhead when not necessary.

There is another way to use constraints (we will speak about an active use of the constraints (Gallaire, 1985)), which consists in reducing the search space in an "a priori" manner by removing inconsistencies, i.e., combinations of values which cannot appear together in a solution (Freuder, 1978). This approach is the basis of consistency techniques (Mackworth, 1977), (Freuder, 1978), which have been used in refinements of the simple backtrack search (i.e., forward checking and looking ahead procedures) (Haralick and Elliot, 1980) and in Alice, a problem solver for combinatorial problems (Lauriere, 1978).

When facing a constraint satisfaction problem, a logic program (which can be considered as a kind of meta-interpreter) can be written which implements, say, a forward checking strategy. It will generally be more efficient than the usual Prolog programs. However, this requires an important programming effort, leads to less readable and less maintainable programs, and does not allow the full efficiency of these approaches because it creates a level above Prolog. As a matter of fact, logic programming languages lack primitives for an active treatment of constraints. It seems difficult to define new control mechanisms in order to use constraints more actively.
a logic program (H hich can be considered as a kind of meta-interpreter) can t)e writren which implement*. sa). a forward checking strategy. II will general11 be mow ef- ficlent that the usual Prolog programs. However. thii requires an imporlant programming effort. leads to lesh readat)le and IPW maintenable programs and does not allow the full cfficlency of these approaches because II creates a level abc,\e l’rolog. Ah a matter of fact. logic programming languages lack primltlves for an actike treatment of constraints. It SCPI~IS dlfflcult to define new control mechanisms in order to use ct,nst raints more ac- t ively. This state of affairs comw from the fact that (first AI LANGUAGES AND ARCHITECTURES / 759 -4 s6ume the existence of a finite se1 J of variables jx,,x, ,.‘., A-f. supp ose each variable A’, takes iis values from a finite set I. 1 called the domain of the variable. .4 constraint C can be seen as a relation on a nor-empty subhef 1 = ,’ Y1. 1. ,. J’ 2 “’ M ,’ of J which d f’ e anes a set of tuples <u,. uM> The constraint satisfaction problem in to determine all th possible assignments f of values to variables such that the cor- responding values assignments satisfy the constraints. The class of CSP is related to man) problems in Al like logical puzzles. scene labeling, graph isomorphisms. graph colourmg and proposi- From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved. order) Nom clauSes .obscure sotnettmes properties of pro,dlcate bvld t/luS prevenb tire Interpreter I-Mm us,na them. CsllS,der for III~I Zlncr lhe case 01 var~ablcs. In I uglc pr~Igrarnmlng. the vari- progra mm in g (SW tor IllSi alIce (hl\ crdt and O’licefe, 1984 !) . Buf the main new point oj thir paper i6 that using such a logic (II- IOUM procedural MeA which have no counterparts in an unmrted logic. The domain declarations can be used to determIne the definiLion domain of each Lariable.” In the following. a variable with an explicit and finlt,e definition domain is refered as a d- variable and its domain is noted I),. Other variables are refer4 as h-variable. Moreover, at an) tirne of the computation. thp ranpe ,)ver the Herbrand universe which IS in CSP. generalI> in- Of HoNever, in many applications and the domain variables is finite and much more restricted but this informatlon is hidden in predicates like permut.ation(X.J’) monadic predicates to restrict the range of variables or more generally in generators Introducing the domain concept guages leads directly to an actlbe in10 logic programming lan- use of constraints and the op- portunity to implement natural)! forward checking, consisLenc> possible, set (i.e. the set of values in which a d-variable take> its value) of a d-variable can be determmed. Jnit.iallJ ~ this set 15 the definition domain but the constraints can reduce it. For instance. a constraint “X # 3” can be used to remove “3” from the pas- techniques and the like The ne.xt sect ion int.roduces domain declarations in logic programming and its interest for using con- straints actively is discussed For supporting this use. Lhr procedural semantics is modified and thi, consist5 in a modiflca- sible set of the variable. This is an active way CO use the non- equality constraints contrarily to the passive use of Prolog which can only handle such a constraint when both arguments are in- stantiated. This use of constraints. combined with the opppor- tunity of generating onl> values in the possible set. 
increases sub- stantially the efficiency of logic programming for solving (‘SF’. Indeed, this allows to prune the search space in an “a priori” Lion of the unification, the redefinition uf the of some built-in predicates ( f . 5. <. procedural senlan- tics > -3 >) and a new evaluable function. Next. some examples are given and compared with usual Prolog programs. Final]). it is shown how some heuristics and consist,ency techniques can be built from our basic extension and implementation issues are discussed. The reader is refered Lo (Van Hentenryck and features defined here. Dincbas, 1986) for more manner instead of the ” a posteriori” manner of Prolog. on all the Aloreover. this can 1X used to reduce the gap between the declarative and procedural semantics for the built-in predicates and this I$ one of the directions of logic programming proposed by (ho~alski, 1985). 2. Domain declarations. It is often the case that the variables range over a finite cannot be expressed clearly in logic domain but this information programming languages. Domain declarations are introduced for t,aking this fact into account. A domain declaration for predi- cate p of arity n is an expression of the following form. 3. Procedural semantics. The invarlant of all proposed extensions is that t,he domain of a d-variable has a cardinality greater than 1. If domains of car- dinalit! 1 are defined. the value can directl! be assigned to the variables defined on this domain. domain p:<a ,..... a 3,. n where ai is either H or Dm 1 5 i < n. When ai is equal to H. this means that argument i of p ranges 3.1. Ilnification. over the Herbrand universe. Otherwise. it rnran* that argument i of m varlableh uhich ranges o\er 1). A:, in CSP the It 1s clear that the unification must Lake into account the be 15 a set domain of the variable\ The unification algorithm mu51 number m is fixed. these declarations are full\ <atlsfactory. In modified to handle the three following c&es the following, the domain D are finit 1% and explicit constants The domain declarations can be considered as set of a kind l If a h-cariable and a d-x ariable must be unified, the h-variable must be buu nd 10 the d-variable. of meta-knowledge although quite different from the one proposed definitions is usual11 in logic programming The effect of Lhese l If a consrant and a d-variable must be unified. the d- variable is bound to the constant if it is in the domain of the variable. Otherwise. the uniflcat ion fails. to reduce the class of interpretations which must be considered for a logic program. In facL. the declarative logic semantics of the -extended” language is a particular case of the many-sorted logic defined in (Cohn. 1983). It is well-known that man)-sorted The fact that we only considered constanls i> 11) no wa? rr-trirtive The definitions given here cdn be extended CO arblLrar1 gruund term* LIUI the generalization is not considered here for clarity arld bre\it> l If two d-variables must, be unified. then let I), the inLersection of their domains. If D, IS empr!. the unification fails. If D,=(v) then v is bound to both t* The domain of a variable ran be determined a~ cornpile time 760 / ENGINEERING variables. OtherwIse, both varla new variable Z who* d OIllalll IS ble are D 2’ bownd to a 3.2. Built-in predicates. The real interest of I he domain exrension 1s nor only in the logic part of the language but also in thra procedural part. sa>. in t hi r(Ldpfinll IOTI of +OIIIP built-in predlcat es. 
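The d-variable case of the extended unification can be sketched as follows; Python is used here for illustration, whereas the paper's language is Prolog.

def unify_d_variables(dx, dy):
    # dx, dy: possible sets of the two d-variables.
    dz = dx & dy
    if not dz:
        return ("fail", None)               # empty intersection
    if len(dz) == 1:
        return ("bind", next(iter(dz)))     # both bound to the value
    return ("fresh", dz)                    # both bound to new Z over dz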
These predicates can nr,w takr Inlo acr,ounl the domain of 1 ariables 3.2.1. Non-equality predicate. The declarallve semant ICS of the predicate is n S # Y holds If X is not equal lo Y”. However. the usual procedural semant KS of this predicate IS given by the following clause X#Y+ not(X = Y). where not(G) is the “negation as failure” rule. Thus, the only use of such a constraint is a passive one. Moreover, there is a gap between the declarative and procedural semantics. A safe computation rule *must be defined which only selects the non- equality predicates when the) are ground. Moreover, a generator of values for X and J. must be provided in order to be com- plete. Our procedural semantics allows an active use of these predicates and also reduces the gap between the declarative and procedural semantics X # 1. is defined b> If X is d-variable and 1’ is a constant, let D, be D,\{Y}. If DZ={v). then X is unified with v and X # 1 succeeds. Otherwise, X is bound to Z (whose domain is Dz) and X f- Y succeeds. The case where Y is a d-variable and X 1s a con- stant is similar t(o the previous one. If X and Y are constants, X f Y succeeds if X and Y are distinct constants and fails ot,herwise. Its effect is undefined otherwise. Note (hat this definition can easily be combined with a delay mechanism like wait, geler, freeze, (Naish, 1985), (Colmerauer. hanoui and \-an Caneghem. 1983) ,(Dincbas. 1984) in order to dela? the constraint until one of the first three cases is fulfilled. In this rase. there are no undefined cases and this usill lead to u very efjiricnt way to handk non-equality constraints which con- sists in using tht construfnt in an actite way as soon ab possible. Indeed. a non-equality predicate ran he cnmidered aa active m two dijjerent sensea. Firsi, it c an remove a value from the pob- siblc bet of a variable. Second, it cm assign a laalue to a variable when only one consistent value is left for this variable. The procedural semantics is equivalenr to the declarative one in the fhy,t three cascts. The gap between the semantics ES IIT the fuure cae. If 4 safe csmputabolr rule whrch only Se&& d non-equality predicate whrn III~C of th(b f’rrg three cages is fulfilled, the procedural semantics will be sound HoMe\Pr. In ~rcl~,r 1~1 tie r.,)m- plere. a generator of values for X and 1’ mu>{ i,e prnvlljPd. Note that. in fact. this semantic> is suboptimal. Indf,rd 11 for in- stance. X is a d-variable with d, = (1.2.3) and j i< a d- variable with I), = (4.5.6;. then the predicate .‘\ f \ should succeed because w hal*ver the values that X and \ HIII take. they will be differenl. 3.2.2. Inequality predicates. We consider here the predicates “X 1. Y” uhose declarative semantics is given by “2; 5 J”‘ holds if X and J’ are integers and X is less than or P~IIB~ IO J‘” The usual procedural seman- 11~s of this predicate is the following : “X <_ J’ succeeds if X and J’ are instantiated to integers and X is less than or equal 1.0 J”‘. This time again. the procedural semantics entails only a passive use of this constraint. Also, there is a discontinuity be- tween the declarative and the procedural semantics. The procedural scmamtlcs can be redefined as follows. X 5 1’ succeeds in the following cases l If X is a d-variable, Y is an integer [#hen let sui = {I’: V E D, and V > Y} and D, = D, \ sup. If D, is empty, then X _< Y must. fail If not, if D, = {v}. then X is bound to v and X 5 Y succeeds. 
3.2.2. Inequality predicates.

We consider here the predicates "X ≤ Y", whose declarative semantics is given by "X ≤ Y holds if X and Y are integers and X is less than or equal to Y". The usual procedural semantics of this predicate is the following: "X ≤ Y succeeds if X and Y are instantiated to integers and X is less than or equal to Y". This time again, the procedural semantics entails only a passive use of this constraint. Also, there is a discontinuity between the declarative and the procedural semantics. The procedural semantics can be redefined as follows. X ≤ Y succeeds in the following cases:

- If X is a d-variable and Y is an integer, then let sup = {V : V ∈ D_X and V > Y} and D_Z = D_X \ sup. If D_Z is empty, then X ≤ Y must fail. If not, if D_Z = {v}, then X is bound to v and X ≤ Y succeeds. Otherwise, X is bound to Z and X ≤ Y succeeds.
- The case where X is an integer and Y is a d-variable is similar to the previous one.
- If X and Y are instantiated to integer values, X ≤ Y succeeds if X is less than or equal to Y. Otherwise, it fails.
- Otherwise, its effect is undefined.

Therefore, this predicate is an active predicate which removes 0 or 1 or ... or n values from the domain of a variable and which can assign a value to a variable. It is clear that it can be combined with a delay mechanism for handling the last case. Note also that this implementation is suboptimal. If D_X = {1,2,3} and D_Y is {4,5,6}, X ≤ Y should succeed. In the same way, Y ≤ X should fail. The other inequality predicates can be defined in the same way.

3.2.3. Domain primitives.

The extensions presented so far can be improved by giving to the user an access to the possible set of a variable. We introduce a new evaluable function domain(X) (which returns the list of instances of a term which satisfy the domains), whose procedural semantics is:

domain(X) = the list of all the values in D_X if X is a d-variable,
          = [X] if X is a ground term,
          is undefined otherwise.

This function can be used to generate values for a d-variable by using, for instance, "member(X, domain(X))", where member(X,Y) holds if X is an element of the list Y. This function is quite useful when the "first fail" principle and arc-consistency are to be used (see below).

4. Examples

In the following, we will give two examples of the basic mechanisms. The first one (N-queens problem) shows how a new search procedure (forward checking) can be implemented in a logical way without the need to rewrite a specific meta-interpreter. The second one is a logical puzzle which also shows that our extension combined with a delay mechanism can lead to a "data-driven computation" as, for instance, in the constraint language of (Sussman and Steele, 1980).

4.1. N-queens problem.

The following N-queens program implements a forward checking strategy (based on the "look ahead in the future in order not to worry about the past" heuristics). This means that the program chooses a possible value for a variable, removes the inconsistent values for the other variables, and so on until all variables have a value. By moving along this way, there is no need to test the value assigned to the present variable against the values of already assigned variables. This is the most efficient heuristic for this problem (Haralick and Elliot, 1980). The program is the following:

queens([]).
queens([X|Y]) ← member(X, domain(X)), safe(X, Y, 1), queens(Y).

safe(X, [], Nb).
safe(X, [F|T], Nb) ← noattack(X, F, Nb), Newnb is Nb + 1, safe(X, T, Newnb).

The usual Prolog program consists in assigning to the n variables of the list a permutation of [1, ..., n] and then in testing if the assignment satisfies the constraints. This is a very inefficient approach.
Control mechanisms, based on control information provided by the user, can be used to apply the tests as soon as possible and thus improve the efficiency of the search (see IC-Prolog (Clark and McCabe, 1979), Metalog (Dincbas, 1984), MU-Prolog (Naish, 1985)).

However, the unification will test if each value is in the domain of the variable. Therefore, a predicate "indomain(X)" can be introduced whose declarative semantics is "indomain(X) holds if member(X, domain(X)) holds" and which avoids the unification inefficiency. The five-queens program can be expressed by the following clauses:

domain five-queens: <{1,2,3,4,5}^5>.

five-queens([X1,X2,X3,X4,X5]) ← queens([X1,X2,X3,X4,X5]).

Now consider the first steps of our program. The "member(X, domain(X))" will choose a value for X1 in its domain, let us say 1. Then immediately, all the inconsistent values of X2, ..., X5 are removed from their possible sets by the safe predicate. Indeed, noattack(X,Y,Nb) can be used with the modes noattack(+,+,+) and noattack(+,-,+). In the latter case, "-" means that Y is a d-variable, and the effect of the predicate is to remove X, X - Nb and X + Nb from the domain of Y. Therefore, at this time, the situation is given in Figure 1 (O represents an assigned value and X an inconsistent value). In the next step, the values 1 and 2 will not be considered for X2. In a coroutining program, these values would have been tested. The next step chooses 3 as value for X2.

[Figure 1: 5-queens after choices 1 and 2.]

Therefore, this instantiates immediately X3 and X4 (to 5 and 2, because their possible sets are reduced to one element), and their safe predicates will reduce to one element the domain of X5 (i.e., 4). The problem is solved with two choices and without any backtracking.
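The same forward-checking behaviour can be sketched outside logic programming. This compact Python version is our own construction, not the paper's program; it prunes the remaining queens' possible sets after each choice, as safe/noattack do.

def queens(domains):
    # domains: list of possible sets for the remaining queens, row by row.
    if not domains:
        return []
    first, rest = domains[0], domains[1:]
    for col in sorted(first):
        # noattack: drop col and the two diagonals from each later row.
        pruned = [{v for v in d if v != col and abs(v - col) != i + 1}
                  for i, d in enumerate(rest)]
        if all(pruned):                    # no possible set wiped out
            tail = queens(pruned)
            if tail is not None:
                return [col] + tail
    return None

# queens([set(range(1, 6)) for _ in range(5)]) yields a 5-queens solution.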
Thp fol- JoMing Prolog program solves the problem. pernr([ljo,Ke,n~c,Ra,Le,Kw~, /ma,su,la,di,gr,va]). Ho + pr 11~~ # <u. kc, I;f- or kc, f *II. MC # la. MC 7 su.Ha # la. Ha # su. 31~ + g: Ra f gr. Lr # gr. Kr T la. Kie + vi. 31~ j. dl. MC =& \i. perm( H1.~r.C~r.Re.Au,Blo].~ma.su.la.d~.gr.~~~), Hr -f vi.Hr # Ho. Br 3 MC. Ra f <:r. Gr 7 la. Blo # la. Blo F di. LP # Blo. RIO =+ ma. perm([Fo. U.i. Mt, Bo, Da.h’u~.~r~a.str.la.di.gr.tlil/. Fo # Ho.Fo # Mc,F<, + Rh.n~ =# Ho.\ji +\lc. Da =# ma.Mt -# ma,Mt f dl, ])a + dl. Mt .=& \i. \I i f Ra. W i f Ke.Ru + Fo.Fo # Ke.Gr F Ho He # Da. Gr f Fo. Rr + h1t. 1310 f I)a. HI # 1~0. Bl + Da. Ka 9 ma. where perm(L.Res) holds if the list Res is a permutation of the list L. In this program. the permutation predicates assign value5 to t,he variables. &ext, the constraints are tested and if they are not satisfied. backtracking occurs in the permutations. Note also. that if a value for “Ho” generated in the first permutation can- not satisfy the constraint “Fo I Ho” tested after the third per- mutation. the backtracking will generat,e all the possible values for all the variables. This time again; this 1s quite inefficient. These constraints can be used immediately in order to remove inconsistencies. The program becomes the following. Let I) (ma.su,la,di,gr.vi} domain tennis:<D’.D6,D6>. tennis({Ho.Ke,~lc.Ra,Le,Ru},{Fo.Wi,Mt~.Bo.Da.Ka} ,{ Bl,Br,Gr.Re,Au,Blo}) e Ho # gr. Ho # su, Ke =f gr, Ke # su. hlc =# la. Mc f su.Ra # la. Ra# su. MC # gr. Ha ;f gr. Le # gr. Kr ;it la. Ke =& vi. Mc f di. MC + vi. Mt # ma.Mt =#di, Da # di. Mt f vi. Blo # la, Blo + di,Da # ma. Ka =# ma. Br f vi. Gr + la, Blo =#- ma. labeling(po,Ke,Me,Ra,Le.RuJ). Fo 7 Ho.Fo =+ Mc,Fo # Ra.Mi = Ilu.Ni 731~. Wi f Ra. W’i 1 Ke. Ru # Fo. Br 3 Ho. Br + MC. Le # Blo. Ra += Gr. Fo = Ke. labeling(~Bl,Br,Gr,Re,Au.Blo~). Cr + Bo. Rc % Da, Gr ,i Fo, Rc F %lt. I310 + Da, Bl F Bo, Bl # Da, labeling([Fo, Wi,Mt,Bo,Da,Ka]). labeling(u). labeling( [X(q) - member(X,domain(X)). out-of(X.Y). labeling(1’). our-of&[]). out-of(M,IFIT]) . x f F, out-of(X,T). The labeling procedure is used instead of the permutation proce- dure in order to assign to variables 0111) \ slurs in their possible set. The procedure labeling(L) h 11. (I c \ if all elements of the list L AI LANGUAGES AND ARCHITECTURES / 763 are dIfferant (which IS Insured by the “out-or’ prcdtcare). The putatK7ri C St on ralr~ts recfwe the passable sets of the urrcaMes. As list L must include only ground terms Or d-variables. In the lat- ter case. the dornaln of the varlabl(, I\ used as generator by the this variable und thdh jact 16 propagalcd by allowzng other cm- straints to be aclected ” mern her” predieare. The, rnam difference hpl w pen the two programs is that thr second nne Immedlatel~ solar\ nlost of the constraints and therefore reduces immediateI> the search space. 5. Others features of the extensions. The constraints are solved once for all. (lonsider the case of the constraint “Fo # Ho”. This constraint is solved after the first Our extension can be considered as a set of primitives Nhich ran be used to huild more qophislicated mechanisms and heurib- tars. An example is the “first fail principle” (IIaralick and Elliot. labeling prrcedure instead of after three permutal IOII~ Zlr)reover. when encountered, it will reduce the possible srt of “E’o“. Other constraints also reduce this set or assign values IO variable. The choices are made in smaller domains and only a few constraints depend on them. 
No pathological behaviour (like in the case of simple backtracking) will arise. This allows us to move from a "generate and test" strategy towards a "constrained-search" strategy for problem solving.

5. Other features of the extension.

Our extension can be considered as a set of primitives which can be used to build more sophisticated mechanisms and heuristics. An example is the "first fail principle" (Haralick and Elliot, 1980). Forward checking (and other search procedures) can be substantially improved by using the so-called "to succeed, try first where you are most likely to fail" heuristics. This heuristics can be implemented in CSP by choosing the most constrained variables to be instantiated first. Consider the labeling procedure seen before. It can be rewritten as

labeling([]).
labeling([X|Y]) <- choose-var([X|Y],Var,Other), member(Var,domain(Var)),
                   out-of(Var,Other), labeling(Other).

The procedure choose-var(L,Var,Other) holds if Var is the element of L whose domain cardinality is the smallest one and Other is the list of the other variables of L. The domain cardinality of a variable can be computed by a goal "<- length(domain(V),Lg)", where the procedure length(L,Lg) holds if Lg is the length of the list L. It is clear that further efficiency can be obtained by building in the "choose-var(L,Var,Other)" predicate. This heuristics is particularly well suited for many problems like map (graph) coloring problems, where many efficiency guidelines are known. In usual logic programs for CSP, the efficiency is greatly affected by the order of the literals inside a clause or the order of arguments in predicates like permutation. Such an order must be determined statically and requires a deep analysis of the problem. With our extension, the order of instantiation can be determined dynamically and requires no analysis of the problem. It seems very difficult to get a similar effect in usual logic languages without rewriting all the program in order to manipulate explicitly the domains. In (Van Hentenryck and Dincbas, 1986), it is shown how arc-consistency and other more sophisticated mechanisms, like the reasoning on intervals of (Lauriere, 1978), can be implemented easily with the primitives presented here. The point here is twofold: first, our basic mechanisms are sufficiently powerful to implement more sophisticated mechanisms which require a lot of programming effort in usual logic languages; this gives the user the opportunity to define his own mechanisms if necessary, and the user is not restricted to a particular strategy for applying these mechanisms. Second, there exist specific mechanisms which are often used and which can substantially reduce the search space; they can be implemented easily from these primitives once the domain extension has been provided.
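To make the first-fail selection concrete for readers without the domain extension, here is a small Common Lisp sketch of ours (not code from the paper, and not the syntax of the extended language): variables carry explicit finite domains as lists, and choose-var picks the one with the smallest domain.

  ;; Minimal sketch (ours, not from the paper): first-fail variable choice
  ;; over explicitly represented finite domains. A variable is modeled as a
  ;; (name . domain) pair, e.g., (x2 3 4) for a variable X2 with domain {3,4}.
  (defun domain-size (var) (length (cdr var)))

  (defun choose-var (vars)
    "Return the variable with the smallest domain and the list of the others."
    (let ((best (first vars)))
      (dolist (v (rest vars))
        (when (< (domain-size v) (domain-size best))
          (setf best v)))
      (values best (remove best vars :test #'eq))))

  ;; Example: X2 has the smallest domain, so it is labeled first.
  ;; (choose-var '((x1 1 2 3 4 5) (x2 3 4) (x3 2 4 5)))
  ;; => (X2 3 4), ((X1 1 2 3 4 5) (X3 2 4 5))

This is precisely the bookkeeping that, in a plain logic language, would have to be programmed by hand over explicit domain lists, as the text observes.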
If a delay mechanism is used for the non-equality predicates in the first program, all the constraints can be written first and will be tested as soon as possible (i.e. in this case as soon as the two variables are instantiated). However, the above-mentioned problem remains, as the constraints are used passively. A program where the labeling predicates are replaced by alldifferent predicates (i.e. alldifferent([Ho,Ke,Mc,Ra,Le,Ru]), alldifferent([Bl,Br,Gr,Re,Au,Blo]), alldifferent([Fo,Wi,Mt,Bo,Da,Ka])) will solve the problem if a delay mechanism is combined with our basic mechanism. The predicate alldifferent(L) holds if all the elements of the list L are not equal. It can be defined by the following clauses:

alldifferent([]).
alldifferent([X|Y]) <- out-of(X,Y), alldifferent(Y).

This predicate is the same as Colmerauer's one, but it is used here in an active way instead of in a purely passive way as in (Colmerauer, Kanoui and Van Caneghem, 1983). There, a non-equality predicate is selected as soon as both arguments are ground; in our case, it is selected as soon as one of these arguments is ground, and it can assign values to variables. It entails that this problem can be solved without generation of values and thus without choices (!): the program just solves the constraints. This is indeed a particular case, but it shows how the search space can be reduced with a simple extension. This will be very important for interesting (NP-complete) problems, where it is essential to reduce the search space as soon and as much as possible in order to avoid the combinatorial explosion. The point here is twofold: first, active constraints are used to reduce the search space in an "a priori" manner and thus avoid the pathological behaviour of backtracking; second, combined with a delay mechanism, they allow a data-driven computation which applies constraints actively as soon as possible.
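The active reading of a disequality can be sketched in the same style (again our own illustration, not the paper's implementation): as soon as one argument of X # V is ground, the constraint prunes the other argument's domain, possibly assigning it.

  ;; Sketch (ours): active use of X # V where V is already ground. PRUNE-NEQ
  ;; removes V from the domain of the d-variable X; if a single value remains,
  ;; the constraint has in effect assigned X.
  (defun prune-neq (x ground-value)
    (setf (cdr x) (remove ground-value (cdr x) :test #'eql))
    (cond ((null (cdr x))
           (error "Empty domain: inconsistency detected"))
          ((null (rest (cdr x)))
           (list :assigned (car x) (first (cdr x))))
          (t (list :pruned (car x) (cdr x)))))

  ;; (prune-neq (cons 'x2 (list 1 2)) 1)  => (:ASSIGNED X2 2)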
6. Implementation issues.

The implementation of this extension entails no overhead when not used and can be implemented efficiently, the two conditions stated by (Shapiro, 1983). What are the modifications required by our basic mechanisms? In the variables environment, besides the usual information, a pointer to the domain (or more precisely the possible set) must be provided. In the following, we consider only the case where the domains are defined as a set of consecutive integers. This is in no way restrictive: indeed, a correspondence can be made at implementation level between a finite set of constants and a set of consecutive integers. Then it is clear that this can also be done for sets of integers and that we have direct access to the elements of the domain. Therefore, only a boolean array "a" is necessary to represent the domain of a d-variable. At the beginning, all the booleans are true, but the constraints can modify them. At any time of the computation, if a[i] = true then this means that i is in the possible set of the variable; otherwise, it is not. However, it can be interesting to store the minimum and maximum indices. In this case, the possible set of a d-variable is given by all the values between the minimum and the maximum such that a[i] is true. The resolution of a non-equality predicate "X # i" consists in accessing a[i]. If it is true, a[i] must be set to false. If necessary, the variable must be put on the trail with a pointer to i. When backtracking, the only thing to do is to reset a[i] to true. In general, the inequality case is more complicated, as a list of values could need to be reset. However, if maximum and minimum values are stored, only these values must be modified and thus reset when backtracking occurs. However, a set of values must be stored if we unify two d-variables. In any case, this modification can be implemented efficiently (especially when combined with a delay mechanism).

7. Conclusion

An extension of logic programming languages has been proposed which increases their efficiency when solving CSP. It is based on domain declarations, a slight modification of unification, the redefinition of some built-in predicates (<, =<, >, >=, #) and a new evaluable function. Its main advantages are to bring active use of constraints into logic programming and to allow look-ahead strategies, first-fail heuristics, consistency techniques and the like to be implemented efficiently without programming effort and without the need for extra control information. The efficiency of logic programs for solving CSP is substantially improved by avoiding the pathological behaviour of backtracking and by reducing the search space in an "a priori" manner. When combined with a delay mechanism, this leads to a "data-driven" computation which applies constraints actively as soon as possible. It has been shown how more sophisticated mechanisms can be built from the primitives and that such extensions can be implemented efficiently.

References

1. Clark, K.L. and McCabe, F. The control facilities of IC-Prolog. In Expert Systems in the Micro-Electronic Age, ed. Michie, D., Edinburgh University Press, 1979.
2. Cohn, A.G. Improving the Expressiveness of Many Sorted Logic. AAAI-83, Washington DC, 1983.
3. Colmerauer, A., Kanoui, H., Van Caneghem, M. "Prolog, bases theoriques et developpements actuels." T.S.I. (Techniques et Sciences Informatiques) 2, 4 (83), 271-311.
4. Dincbas, M., Lepape, J.P. Metacontrol of logic programs in METALOG. Proceedings of FGCS'84, Tokyo, Japan, November 84, pp. 361-370.
5. Freuder, E.C. "Synthesizing constraint expressions". Comm. ACM 21 (November 1978), 958-966.
6. Gallaire, H. Logic programming: further developments. IEEE Symposium on Logic Programming, Boston, July 85, pp. 88-99. Invited paper.
7. Haralick, R.M., Elliot, G.L. "Increasing tree search efficiency for constraint satisfaction problems." Artificial Intelligence 14 (80), 263-313.
8. Kowalski, R. Directions of logic programming. Proceedings of the IEEE International Symposium on Logic Programming, Boston (USA), 85. Invited paper.
9. Lauriere, J.L. "A language and a program for stating and solving combinatorial problems". Artificial Intelligence 10 (1978).
10. Mackworth, A.K. "Consistency in networks of relations". Artificial Intelligence 8, 1 (1977), 99-118.
11. Mycroft, A., O'Keefe, R.A. "A polymorphic type system for Prolog". Artificial Intelligence 23, 3 (1984), 295-307.
12. Naish, L. "Automating control for logic programs". Journal of Logic Programming 2, 3 (October 1985), 167-184.
13. Shapiro, E. Methodology of logic programming. Proceedings of the Logic Programming Workshop, Praia da Falésia, Portugal, 26 June - 1 July 1983, pp. 84-93.
14. Sussman, G.J., Steele, G.L. "CONSTRAINTS: a language for expressing almost-hierarchical descriptions". Artificial Intelligence 14, 1 (1980), 1-39.
15. Van Hentenryck, P., Dincbas, M. Associating domains to variables in order to solve CSP in logic programming. LP-10, E.C.R.C. (European Computer-Industry Research Center), February 86.
16. Walther, C. A mechanical solution of Schubert's Steamroller by many-sorted resolution. 4th National Conference on Artificial Intelligence (AAAI-84), Austin, 1984.
1986
7
516
Tweety - Still Flying: Some Remarks on Abnormal Birds, Applicable Rules and a Default Prover

Gerhard Brewka
Gesellschaft für Mathematik und Datenverarbeitung
Forschungsgruppe Expertensysteme
Postfach 12 40, D 5205 Sankt Augustin, Federal Republic of Germany

ABSTRACT
This paper describes FAULTY, a default prover for a decidable subset of predicate calculus. FAULTY is based on McDermott's and Doyle's Nonmonotonic Logic I and avoids the well-known weakness of this logic by a restriction to specific theories, which are sufficient for default reasoning purposes, however. The defaults are represented in a way that allows explicit control of their applicability. By blocking the applicability of a default, the problem of interacting defaults can be avoided.

Keywords: Nonmonotonic Reasoning, Default Reasoning, Theorem Proving, Knowledge Representation

1. Introduction

During the last years the field of nonmonotonic reasoning has attracted many AI researchers. Different kinds of nonmonotonic reasoning have been identified ([McC 85] gives a list of 7 types, this list certainly not being complete), and different formalizations of such reasoning have been proposed. The most influential among these are
- McDermott's and Doyle's Nonmonotonic Logic I (NML I) [McD Do 80],
- Reiter's Default Logic [Rei 80],
- McCarthy's different versions of Circumscription [McC 80], [McC 84].
The main problem with these formalizations is that they are not semi-decidable. In the case of NML I and Default Logic this stems from the fact that the provability of a formula may depend on the unprovability of other formulas, and the unprovable formulas of first order logic (FOL) are not semi-decidable. In the case of Circumscription we have to deal with a second order formula, and second order logic is not semi-decidable (but note that Lifschitz [Lif 84] has identified interesting cases where the Circumscription of a formula is equivalent to a first order formula). A common answer to this problem is to give up the idea of theoremhood and to replace it by something like believability or reasoned believability. This is especially the viewpoint taken in Reason/Truth Maintenance Systems such as Doyle's TMS [Doy 79], Goodwin's WATSON [Goo 84][Goo 85] or de Kleer's extended ATMS [deK 86]. In these systems a network of dependencies between formulas is constructed, in which the derivability (believability) of a formula never depends on unprovability of other formulas, but may depend on the fact that other formulas are currently unproven. What is modelled is not the ideal reasoning agent but instead the process of making inferences with limited resources. A central problem with this approach is clear: the status of a formula may change from believed (IN) to disbelieved (OUT) or vice versa without adding or deleting any information, simply because the system has made further inferences. Criteria for when to stop making inferences and rely on the system's beliefs are lacking. A more technical problem are the so-called odd loops. A dependency network contains an odd loop whenever belief in a formula somehow depends on disbelief in the same formula. In the case of an odd loop TMS may run forever; WATSON diagnoses the loop and halts if it cannot label formulas correctly as IN or OUT. But note that WATSON may halt even if the corresponding logical theory is consistent.
For instance the set of NML I formulas
1) M P -> P
2) M -P -> -P
is consistent, but if WATSON is given these axioms it creates an odd loop and halts (our system has no problem with that case). De Kleer [deK 86] proposes to treat odd loops as contradictions. Nobody can be very happy with these properties, but we have to live with them if we want a nonmonotonic system with the full expressive power of FOL. But there is another approach to the problem of non-semidecidability: for many applications we do not need full FOL. The great success of PROLOG has shown this clearly. There are many interesting subsets of FOL which are decidable. If we restrict ourselves to such a subset, then also the nonmonotonic case becomes decidable and theoremhood need not be given up. This is actually the approach we followed. FAULTY is a default prover that can handle Horn clauses without functions. (Note that Horn clause logic is not the same as PROLOG; we have true negation and negative assertions.) With this restriction FOL is decidable, since the Herbrand universe is finite. There are two versions of FAULTY now; an older version described in [BreWi 84] [Bre 86] (in German) has recently been reimplemented on a SYMBOLICS Lisp machine. Examples in this paper are taken from the SYMBOLICS version of FAULTY. In this paper we will justify our decision to base FAULTY on NML I and show how we overcome the well-known weakness of McDermott's and Doyle's logic. We then describe how defaults are represented and how we deal with the problem of interacting defaults. An informal description of FAULTY's proof procedure follows, and the paper concludes with an example dialogue.

2. Why NML I?

When we decided to implement a default prover, we first had to choose a basic nonmonotonic formalism for it. We found that Circumscription was not a good candidate for our purposes. What we wanted to build was a system that could be used also by people not well trained in formal logic or even higher order logic. And our (maybe very subjective) opinion is that Circumscription is quite difficult to use. The main reason is that it is not enough to represent defaults using the abnormal-predicate AB as proposed by McCarthy [McC 84]. The effects of circumscribing AB crucially depend on the variables chosen for the Circumscription. If we have for instance
1) BIRD(x) & -AB(aspect1(x)) -> FLIES(x)
2) BIRD(Tweety)
then we get FLIES(Tweety) as intended, circumscribing AB and using AB and FLIES as variables. But if we add
3) PENGUIN(x) -> -FLIES(x)
4) OSTRICH(x) -> -FLIES(x)
we only get the desired result FLIES(Tweety) if we use PENGUIN and OSTRICH as additional variables. For untrained people it is not easy to see what the effects of changes in the axiom set are and which variables to use when circumscribing AB. And we did not have a good idea how to follow McCarthy's suggestion [McC 84, p. 302] to use a "policy" database containing metamathematical statements for our intended general purpose default prover.
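For reference, the schema under discussion can be spelled out. Circumscribing AB in the conjunction A of the axioms, with a tuple Z of predicates allowed to vary, yields the second-order formula (our transcription of the standard definition, not a formula appearing in this paper):

\[ A(AB,Z)\ \wedge\ \forall ab\,\forall z\,\bigl[\,A(ab,z) \wedge (ab \le AB) \rightarrow (AB \le ab)\,\bigr] \]

where $ab \le AB$ abbreviates $\forall x\,(ab(x) \rightarrow AB(x))$. Whether FLIES(Tweety) comes out provable depends precisely on which predicates are placed in the tuple Z, which is the sensitivity criticized above.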
We therefore preferred to choose among McDermott/Doyle's NML I and Reiter's Default Logic. The main difference between NML I and Default Logic is that defaults in NML I are represented within the logical language and not as a kind of meta-statement as in Reiter's logic. This allows defaults and ordinary axioms to be handled in a uniform manner, and it turns out to be very easy (as we will see) to adapt standard resolution proof techniques for NML I. This was the reason we chose to base FAULTY on NML I. There is a well-known problem with NML I, however: it is too weak, as McDermott and Doyle themselves pointed out. The modal operator M does not capture the full meaning of consistency, as intended. For instance the theory {M C, -C} is consistent in NML I. This weakness has led to a lot of activity, and authors, among them McDermott himself, have tried to strengthen the logic [McD 82] [Luk 84] [Moo 85]. It is certainly right that NML I is too weak, but too weak for what? For reasoning about consistency. But this does not mean that it cannot be used for default reasoning purposes, if some restrictions are complied with. We restrict NML I in the following way:
1) The modal operator M is only admitted in default rules
   M A & B & M C -> C
where A is a special literal, as will be explained in the next section, and B and C are formulas (but the default must be representable as a Horn clause).
2) We are only interested in the provability of formulas not containing M.
With these restrictions the undesired consequences of the weakness of NML I disappear. No statements about the consistency or inconsistency of a formula can be made; it can only be expressed that a formula is derivable if this formula (and another special formula, see next section) is consistent. And the only way to find out if a formula is consistent is to try to prove its negation. With our restrictions NML I can model default reasoning adequately.

One more question has to be discussed here. In NML I (and similarly in Default Logic) the derivable formulas are defined in terms of fixed points. Since there may be any number of fixed points, the question arises whether we want to define the derivable formulas as the intersection of fixed points (as McDermott and Doyle do), or whether we want the formulas contained in at least one fixed point (Reiter's approach). Assume we have the following facts:
1) Most computer scientists are not millionaires.
2) Most Rolls Royce drivers are millionaires.
3) John is a computer scientist and drives a Rolls Royce.
If we would build a prover following Reiter's approach and ask that prover "is it true that John is a millionaire?", then the prover would answer "yes", given the above axioms. But if we now ask "is it true that John is not a millionaire?" we get again the answer "yes", a somewhat unusual behavior for a prover. Another unusual property of our system would be that if it has derived a fact A and another fact B, it does not follow that also the conjunction A & B is derivable. This one-fixed-point approach is also pursued in Doyle's and Goodwin's Reason/Truth Maintenance Systems. These systems can be thought of as approximating the construction of one (arbitrarily chosen) fixed point. We believe that it is better to remain agnostic in the case of conflicting evidence, following McDermott and Doyle in this point: for FAULTY only the formulas contained in the intersection of the fixed points are provable (but since many people seem to like the other approach too, it is now possible to run FAULTY in a mode where it derives a formula if it is contained in at least one fixed point).
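To fix the notion being argued about (our transcription of McDermott and Doyle's construction; the paper itself does not display the definition): a fixed point of an NML I theory $A$ is a set of formulas $S$ satisfying

\[ S \;=\; Th\bigl(A \,\cup\, \{\, M\,q \;\mid\; \neg q \notin S \,\}\bigr), \]

so the "intersection" reading counts a formula as provable only if it belongs to every such $S$, while the Reiter-style reading accepts membership in at least one.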
3. The representation of defaults.

The problem of interacting defaults has been discussed broadly; see especially [Rei Cri 81]. One problem among others arises if we have a default that is more specific than another, for instance
1) ADULT(x) & M MARRIED(x) -> MARRIED(x)
2) STUDENT(x) & M -MARRIED(x) -> -MARRIED(x)
In this case we certainly want to be able to derive that a student named John is unmarried. But since default 1) creates a fixed point containing MARRIED(John), we cannot derive what we want. The question now is: how can we block the second, unwanted fixed point from being created? What goes wrong is that default 1) is applied to students, but we do not want it to be applied in this case. So we have to find a way to explicitly control the applicability of a default rule. For that purpose we need a standard predicate APPL for applicable (precisely, we have a set of predicates APPLi, where i is the arity of the default, but this is not important here) and write our default in the following way:
3) M APPL(R1,x) & ADULT(x) & M MARRIED(x) -> MARRIED(x)
Here the constant R1 is used as a unique name for default 3) itself. Now we can very easily block the applicability of a default by simply stating
4) STUDENT(x) -> -APPL(R1,x)
and we derive that John is unmarried, as it was our intuition. This approach turns out to be very similar to McCarthy's use of the AB ("abnormal") predicate [McC 84] (but it has been developed independently from McCarthy and was first described in [BreWi 84]).
McCarthy writes defaults as
5) BIRD(x) & -AB(aspect1(x)) -> FLIES(x)
and circumscribes the formula AB. To solve the problem of interacting defaults he uses cancellation of inheritance axioms like
6) OSTRICH(x) -> AB(aspect1(x)).
Recall that Circumscription is a kind of minimization technique. Minimizing abnormality now is very similar to maximizing the applicability of defaults, since defaults express what normally holds. And exactly this maximization is achieved with our approach, since defaults are applicable if not explicitly stated otherwise. And the close similarity between McCarthy's cancellation of inheritance axioms and our blocking of default applicability axioms is obvious (note that one of the subtitles of [Gro 84] is "Maximizing Defaults is Minimizing Predicates"). McCarthy himself was not too happy about his indexed aspects. He writes [McC 84, p. 299]: "The aspects themselves are abstract entities, and their unintuitiveness is somewhat a blemish on the theory." Perhaps our use of names for the defaults is a bit more intuitive, in spite of the self-reference being introduced into the defaults. Note that this self-reference cannot lead to paradoxes, since APPL is used in a restricted way: it is only allowed in defaults under the scope of the modal operator and (negated) in the right side of the blocking of default applicability axioms. Moreover, we can very easily hide the representation of defaults. The FAULTY user specifies defaults in a very natural way without having to be concerned about APPL's or M's. He simply writes
(R1 (BIRD(x) ==> FLIES(x)))
where R1 is the name of the default, and FAULTY does the right thing ("==>" is to be read as "typically implies").

4. FAULTY's proof procedure

FAULTY's proof procedure is essentially a generalization of McDermott and Doyle's procedure for nonmonotonic propositional logic [McD Do 80]. The easiest way to explain it is to give some examples. Let's talk about Tweety again:
1) BIRD(Tweety)
2) M APPL(R1,x) & BIRD(x) & M FLIES(x) -> FLIES(x)
Now, of course, we want to prove FLIES(Tweety). FAULTY first runs a standard unit resolution refutation proof, where M Q is, for all formulas Q, treated as a literal. We cannot derive the empty clause, but we get the interesting clause
3) -M APPL(R1,Tweety) v -M FLIES(Tweety)
This clause is interesting because it only contains literals beginning with -M; we call such clauses M-clauses. M is intended to mean "is consistent", so if we knew that APPL(R1,Tweety) and FLIES(Tweety) were consistent, we could finish our proof. Now the only way to show that these formulas are consistent is to show that their negations are not provable. We therefore start two other proofs, one for -FLIES(Tweety), the other one for -APPL(R1,Tweety). In both cases the proofs fail without yielding M-clauses (they get the status OPEN). This allows us to add M APPL(R1,Tweety) and M FLIES(Tweety) in our first proof, and the empty clause is derivable in this proof now (the proof becomes CLOSED): FLIES(Tweety) is proven.

Table 1 shows the (sub)proofs created. Only the interesting derived clauses are contained in the table.

Table 1
  to prove            yields                                   labeling
  FLIES(Tweety)       -M APPL(R1,Tweety) v -M FLIES(Tweety)    CLOSED
  -APPL(R1,Tweety)                                             OPEN
  -FLIES(Tweety)                                               OPEN

Things are not always that easy, however. Let's look at our millionaires example again (RRD stands for Rolls Royce driver, CS for computer scientist, MILL for millionaire):
1) M APPL(R2,x) & RRD(x) & M MILL(x) -> MILL(x)
2) M APPL(R3,x) & CS(x) & M -MILL(x) -> -MILL(x)
3) RRD(Jim) & CS(Jim)
Trying to prove MILL(Jim) we get the proofs shown in Table 2.

Table 2
  to prove         yields                               labeling 1   labeling 2
  MILL(Jim)        -M APPL(R2,Jim) v -M MILL(Jim)       CLOSED       OPEN
  -APPL(R2,Jim)                                         OPEN         OPEN
  -MILL(Jim)       -M APPL(R3,Jim) v -M -MILL(Jim)      OPEN         CLOSED
  -APPL(R3,Jim)                                         OPEN         OPEN

The interesting thing here is that we can consistently label the proofs of our example in two different ways as failed (OPEN) or successfully finished (CLOSED). If we label the proof for -MILL(Jim) OPEN, M MILL(Jim) can be added in all proofs and the proof for MILL(Jim) gets CLOSED. But we can do it also the other way around: labeling the proof for MILL(Jim) OPEN makes the proof for -MILL(Jim) CLOSED. These different labelings correspond exactly to the different fixed points of our theory. Since there is one labeling in which the proof for MILL(Jim) is OPEN, MILL(Jim) is not contained in all fixed points and hence cannot be derived. This proof procedure is of course not the way FAULTY actually proceeds. There are some ways to cut the number of created proofs, and the check of admissible labelings can easily be done by a propositional prover, but this is beyond the scope of this paper. Generally a FAULTY proof for a goal consists of two steps.
The first step, the construction of (sub)proofs, can semi-formally be described in the following way:

  push the goal onto the agenda
  until the agenda is empty do
    remove the top element from the agenda and start a proof for it
    if the empty clause is derived, mark this proof CLOSED
    else if no M-clause is derived, mark this proof OPEN
    else for each literal -M Q in each derived M-clause
      unless -Q is contained in the agenda or there is already a proof for -Q
        push -Q onto the agenda

This proof construction phase terminates, since there is only a finite number of possible instances of literals beginning with -M. Secondly, all admissible labelings for the still unlabeled proofs have to be found. To find out if a labeling is admissible, one proceeds as follows: for each proof for -Q with the label OPEN, the literal M Q is to be added to all proofs. Now in all OPEN proofs the empty clause must be underivable, and in all CLOSED proofs the empty clause must be derivable. The goal is proven if its (sub)proof is CLOSED in all admissible labelings.
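A compact Common Lisp rendering of this first phase might look as follows. This is our own sketch, not FAULTY source code; run-refutation-proof is a hypothetical hook standing in for FAULTY's unit resolution prover, stubbed here so the sketch is self-contained.

  ;; Sketch (ours) of the (sub)proof construction phase. The prover hook
  ;; should return two values: a status (:closed if the empty clause was
  ;; derived, :open-so-far otherwise) and the list of derived M-clauses,
  ;; each represented as the list of formulas Q from its literals -M Q.
  (defun run-refutation-proof (formula)
    (declare (ignore formula))
    (values :open-so-far '()))          ; placeholder stub

  (defun construct-proofs (goal)
    (let ((agenda (list goal))
          (proofs (make-hash-table :test #'equal)))
      (loop while agenda
            do (let ((formula (pop agenda)))
                 (multiple-value-bind (status m-clauses)
                     (run-refutation-proof formula)
                   (setf (gethash formula proofs)
                         (cond ((eq status :closed) :closed)
                               ((null m-clauses) :open)
                               (t :unlabeled)))
                   ;; for every literal -M Q, schedule a proof of -Q
                   (dolist (m-clause m-clauses)
                     (dolist (q m-clause)
                       (let ((neg (list 'not q)))
                         (unless (or (member neg agenda :test #'equal)
                                     (nth-value 1 (gethash neg proofs)))
                           (push neg agenda))))))))
      proofs))

Termination is visible here as well: the agenda only ever receives formulas -Q for the finitely many possible instances of literals M Q.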
5. Example

The following example shows how a FAULTY knowledge base is defined.

(deffaulty-kb flying-objects
  (axioms (bird tweety)
          (penguin hansi)
          (bird fred)
          (not flies fred)
          (airplane jumbo)
          (penguin _x -> bird _x))
  (defaults (r1 (bird _x ==> flies _x))
            (r2 (penguin _x ==> not flies _x))
            (r3 (airplane _x ==> flies _x))
            (r5 (flies _x ==> has-wings _x))
            (r6 (has-wings _x ==> has-feathers _x)))
  (exceptions (penguin _x -> not appl r1 _x)
              (airplane _x -> not appl r6 _x)))

The blocking of default applicability axioms are called exceptions in the definition. The axioms are taken partly from [McC 84]. The above definition creates the knowledge base as an instance of a Zetalisp Flavor. We can send messages to this knowledge base; the most interesting message is certainly :PROVE. Here are some examples:

(send flying-objects :prove '(flies tweety))
yields: PROVABLE
(send flying-objects :prove '(flies hansi))
yields: UNPROVABLE
(send flying-objects :prove '(not flies hansi))
yields: PROVABLE (if the first exception were missing, we would not get this result)
(send flying-objects :prove '(has-wings jumbo))
yields: PROVABLE
(send flying-objects :prove '(has-feathers jumbo))
yields: UNPROVABLE

6. Problems and future work

The main problem with FAULTY is efficiency, naturally. A set of standard resolution proofs, which themselves are expensive enough, must be run. But we are not too pessimistic about that. First, we think a slow implementation is better than none at all, and second, there is much room for parallelization in FAULTY's proof procedure, so we can hope for much better efficiency when parallel computers become available. Another concern is that FAULTY does not record results, since it is a pure prover. The purpose of the Reason/Truth Maintenance Systems mentioned in the introduction, however, was not only to make nonmonotonic inferences, but also to keep track of inferences made so far. This allows axioms to be changed without having to recompute everything. Now it is a natural idea to combine the two approaches and to build a reason maintenance system where all IN formulas are actually theorems of the underlying axioms and all OUT formulas are actually unprovable, not only currently unproven. This system, to be called TINA (This Is No Acronym), is under development.

Acknowledgements
The first version of FAULTY was built in close cooperation with K.H. Wittur. Thanks also to F. di Primio, who is the 'father' of BABYLON, the expert system building tool developed in our research group.

REFERENCES
[BreWi 84] Brewka, G. and Wittur, K.H. Nichtmonotone Logiken. Universität Bonn, Informatik Berichte Nr. 40, 1984.
[Bre 86] Brewka, G. Über unnormale Vögel, anwendbare Regeln und einen Default Beweiser. Proc. GWAI (German Workshop on Artificial Intelligence) 85, 1986.
[deK 86] de Kleer, J. Extending the ATMS. Artificial Intelligence 28, 1986.
[Doy 79] Doyle, J. A Truth Maintenance System. Artificial Intelligence 12, 1979.
[Goo 84] Goodwin, J. WATSON: A Dependency Directed Inference System. Proc. Non-Monotonic Reasoning Workshop, 1984.
[Goo 85] Goodwin, J. A Process Theory of Non-monotonic Inference. Proc. IJCAI 85.
[Gro 84] Grosof, B. Default Reasoning As Circumscription. Proc. Non-Monotonic Reasoning Workshop, 1984.
[Lif 84] Lifschitz, V. Some Results on Circumscription. Proc. Non-Monotonic Reasoning Workshop, 1984.
[Luk 84] Lukaszewicz, W. Nonmonotonic Logic for Default Theories. Proc. ECAI 1984.
[McC 80] McCarthy, J. Circumscription - A Form of Non-Monotonic Reasoning. Artificial Intelligence 13, 1980.
[McC 84] McCarthy, J. Applications of Circumscription to Formalizing Common Sense Reasoning. Proc. Non-Monotonic Reasoning Workshop, 1984.
[McD 82] McDermott, D. Nonmonotonic Logic II: Nonmonotonic Modal Theories. JACM Vol. 29, No. 1, 1982.
[McD Do 80] McDermott, D. and Doyle, J. Non-Monotonic Logic I. Artificial Intelligence 13, 1980.
[Moo 85] Moore, R.C. Semantical Considerations on Nonmonotonic Logic. Artificial Intelligence 25(1), 1985.
[Rei 80] Reiter, R. A Logic for Default Reasoning. Artificial Intelligence 13, 1980.
[Rei Cri 81] Reiter, R. and Criscuolo, G. On Interacting Defaults. Proc. IJCAI 1981.
1986
70
517
Representing Actions with an Assumption-Based Truth Maintenance System

Paul H. Morris and Robert A. Nado
IntelliCorp
1975 El Camino Real West
Mountain View, California 94040

ABSTRACT
The Assumption-based Truth Maintenance System, introduced by de Kleer, is a powerful new tool for organizing a search through a space of alternatives. However, the ATMS is oriented towards inferential problem solving, and provides no special mechanisms for modeling actions or state changes. We describe an approach to applying the ATMS to the task of representing the effects of actions. The approach extends traditional tree-structured context mechanisms to allow context merges. It also takes advantage of the underlying ATMS to detect inconsistent contexts and to maintain derived results. Some results are presented concerning possible approaches to the treatment of merges in questionable circumstances. Finally, the analysis of actions in terms of a truth maintenance system suggests the need for a more elaborate treatment of contradiction in such systems than exists at present.

1. Introduction

The Assumption-Based Truth Maintenance System (ATMS), introduced by de Kleer [2], is a powerful new tool for organizing an efficient search through a space of alternatives. By explicitly recording the dependence of the reasoning steps on individual choices, a truth maintenance system is able to share partial results across different branches of the search space. In effect, knowledge gleaned in one context is automatically transferred to other contexts where it is relevant. The ATMS permits simultaneous reasoning about multiple, possibly conflicting contexts, avoiding the cost of context switching. The ATMS as presently constituted views problem solving as purely inferential. This is an appropriate stance for a broad class of constraint satisfaction problems. However, problems involving temporal changes or actions require some additional mechanism. As de Kleer [5] points out, "... problem solvers [may] act, changing the world, and this cannot be modeled in a pure ATMS in which there is no way to prevent the inheritance of a fact into a daughter context." In this paper we explore one approach to using the ATMS to support the modeling of actions. The basic idea is to extend a traditional tree-structured context mechanism (as in CONNIVER and QA4 [1]), taking advantage of an underlying ATMS to allow context merges, to detect inconsistent contexts, and to maintain derived results. This approach has been implemented in the KEEworlds™ facility of the KEE™ (Knowledge Engineering Environment™) system.*

* KEEworlds, KEE and Knowledge Engineering Environment are trademarks of IntelliCorp. This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract No. F30602 85 C 0065. The views and conclusions reported here are those of the authors and should not be construed as representing the official position or policy of DARPA or the U.S. government.

In the following sections, we give a functional overview of the KEEworlds facility. We then describe the underlying representation in terms of the ATMS. Special attention is given to the situation where a world has multiple parents. This is followed by a discussion of non-monotonic reasoning about actions in a more general TMS setting, suggested by the worlds mechanism. We close with some remarks about related systems.

2. Worlds

The basic structure provided for modeling actions is a directed acyclic graph of worlds.
Each world may be regarded as representing an individual, fully specified action or state change. A world together with its ancestors in the graph represents a partially ordered network of actions. Each successor of a world in the graph then represents a hypothetical extension of the world's associated action network to include a new subsequent action. The world graph as a whole may thus be regarded as representing multiple, possibly conflicting, action networks. Each partially ordered action network resembles a procedural net of NOAH [10], or NONLIN [11], where the actions are fully specified. We assume that the effects of a fully specified action can be represented by additions and deletions of base facts, so each world has a set of additions and deletions associated with it that represent the actual primitive changes determined by the action. Since an action corresponds to an application of an operator, not an operator itself, this assumption is somewhat less restrictive than that of STRIPS [9], which requires operators to have fixed add and delete lists.

Figure 2-1: A Worlds Graph from Blocks World (worlds W1, W2 and W3; e.g., W3 carries the changes -on(c,d), +on(c,tab))

Figure 2-1 shows an example worlds graph from the blocks world. The additions at W1 produce the initial state, while the deletion and addition at W2 represent the effect of moving block a to the table. Those at W3 correspond to an alternative action of moving block c to the table. Notice that preconditions are not represented. These are assumed to have been tested in the parent world before W2 and W3 were constructed. Only the effects of the actions are recorded in the worlds.

To simplify the discussion we will assume for the moment that the graph is a tree, as in figure 2-1, i.e., each world has at most one parent and a branch of the tree corresponds to a linear sequence of actions. Later, we will consider the consequences of multiple parents. Observe that we may associate each world with the state that results from applying the changes encoded by the world and all of its ancestors (in figure 2-1 the states are depicted inside the worlds). Hence, a world plays a double role, representing both a state change and a state. The facts in the state will in general be augmented with deductions using general knowledge of the domain. Thus, the facts that are true at a world fall into the following three categories:
1. facts inherited from ancestor worlds
2. direct additions at this world
3. deductions from facts in 1 and 2
In keeping with the view that additions and deletions represent actual changes, they are only recorded where they are effective; that is, an addition only occurs where the fact did not previously hold, and a deletion where it did hold. The inherited facts follow a principle of inertia (essentially the STRIPS assumption [12]): a fact that is added at a world continues to be true in succeeding worlds, up until (but not including) a world where it is deleted. The deduced facts may include the distinguished fact FALSE, representing a contradiction. A world where FALSE can be deduced is marked as inconsistent. The system generally avoids further reasoning in such worlds (however, it is possible and sometimes useful to do meta-level reasoning about inconsistent worlds).
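The tree-structured case can be captured in a few lines of Common Lisp (a toy sketch of ours, not KEEworlds code; deduced facts and consistency checking are omitted):

  ;; Toy sketch (ours): a world stores its parent and its primitive changes;
  ;; the state at a world is the parent state minus deletions plus additions.
  (defstruct world parent adds deletes)

  (defun world-facts (w)
    (if (null w)
        '()
        (union (world-adds w)
               (set-difference (world-facts (world-parent w))
                               (world-deletes w)
                               :test #'equal)
               :test #'equal)))

  ;; The branch W1 -> W2 of figure 2-1:
  ;; (let* ((w1 (make-world :adds '((on a b) (on c d))))
  ;;        (w2 (make-world :parent w1
  ;;                        :deletes '((on a b))
  ;;                        :adds '((on a table)))))
  ;;   (world-facts w2))   => ((ON A TABLE) (ON C D))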
3. Worlds in ATMS

Before discussing how the worlds graph is implemented in terms of the underlying ATMS, we give a brief sketch of the ATMS mechanisms that are used, primarily to establish terminology. The reader is urged to consult de Kleer [3, 4, 5] for a full description of the ATMS. The basic elements of the ATMS are assumptions and nodes. An assumption in the ATMS corresponds to a decision or choice, and is used as an elementary context descriptor. Nodes correspond to propositional facts or data, which may be justified in terms of other nodes, or assumptions. By tracing back through the justification structure, it is possible to determine the ultimate support for a derivation of a node as a set of assumptions. Such a set is called an environment for the node. Since a node may have multiple derivations, it may also have multiple environments. The set of (minimal) environments for a node is called its label. Computing the labels of nodes is one of the major activities of the ATMS. The primary transaction that the ATMS supports is adding a justification. This causes the labels of affected nodes to be recomputed. There is a special element called FALSE, denoting contradiction, which is similar to a node, and may have justifications. The environments that would be in its label are called nogoods and constitute minimal inconsistent environments. Environments that are discovered to be inconsistent, i.e., that are supersets of nogoods, are removed from the labels of nodes so that they are not used for further reasoning.

Each world has two ATMS entities associated with it, reflecting its double role: a world assumption and a world environment. The world assumption corresponds to the action encoded by the world, and may also be thought of as the choice or decision that led to the action. The world environment, on the other hand, corresponds to the state, and actually consists of the set of world assumptions from the given world and all of its ancestors. It is convenient to use the ATMS itself to compute the world environment. This is accomplished by having a special world node associated with each world. This node may be thought of as representing the statement that the world's action occurs.

Figure 3-1: Justification for World Node

The world node Nw is given a single justification

  Np & Aw -> Nw

where Np is the world node of the parent, and Aw is the world assumption of the given world. This is depicted graphically in figure 3-1. It is not difficult to see that this results in all world nodes having a single environment, of the form described. Directly adding a fact F at a world can now be accomplished by supplying a justification in terms of the world node. However, to allow for the possibility of later deletion, a nondeletion assumption is included. Thus, the justification has the form

  Nw & And -> F

where And is the nondeletion assumption. A distinct nondeletion assumption is required for each separate addition of a fact at a world (to allow independent deletion). If F is deleted at a subsequent world W1, the justification

  Aw1 & And -> FALSE

is supplied to the ATMS, where Aw1 is the world assumption for W1. We will call nogoods resulting from justifications of this form deletion nogoods. This scheme for addition and deletion is shown in figure 3-2.
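The three justification forms of this section can be written down schematically (our list-structure illustration; the real system manipulates the ATMS's own node and assumption objects):

  ;; Illustrative encoding (ours) of the worlds facility's justifications,
  ;; each as (antecedents => consequent):
  (defun world-node-justification (np aw nw)
    ;; Np & Aw -> Nw : the world's action occurs if its parent's does
    ;; and the world assumption holds.
    (list (list np aw) '=> nw))

  (defun addition-justification (nw nd fact)
    ;; Nw & And -> F : fact F is added at the world, guarded by a fresh
    ;; nondeletion assumption And.
    (list (list nw nd) '=> fact))

  (defun deletion-justification (aw1 nd)
    ;; Aw1 & And -> FALSE : the deleting world's assumption together with
    ;; F's nondeletion assumption forms a deletion nogood.
    (list (list aw1 nd) '=> 'false))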
Apart from the justifications supplied by the system to represent additions and deletions, and justifications for world nodes, there will be justifications installed by the user to represent deductions from the primitive facts. These deductions need be performed only once, as the presence of the justifications in the ATMS allows the efficient determination, via label propagation, of which derived facts hold in which worlds. Derivations of FALSE arising from user justifications are used to determine inconsistent worlds, representing dead ends in the search.

Figure 3-2: Addition and Deletion Justifications

The nogoods determined by the ATMS may, however, contain nondeletion assumptions in addition to the world assumptions. However, only the latter represent choices in the search, and we wish these to take all the "blame" for dead ends (we discuss this further in section 5). Thus, the multiple worlds system incorporates a feedback loop that installs in the ATMS reduced nogoods with the nondeletion assumptions removed. These nogoods are subsets of the original ones, and so, in accordance with the minimality requirement, the latter are removed. Deletion nogoods are, of course, exempt from this process; the feedback procedure ensures that the deletion nogoods are the only ones left that contain nondeletion assumptions.

To test whether a fact holds in a world, we can compare each environment in the node label with the world environment. The comparison is done as follows (in principle; the actual algorithm is equivalent, but more efficient). The world environment is extended with as many nondeletion assumptions as are consistent with it (the extension is necessarily unique since each nogood contains at most one nondeletion assumption). The extended world environment is then checked to see if it is a superset of the fact environment. If so, the fact is regarded as true in the world. For example, in figure 2-1, the world environment at W2 includes the world assumption for W1. When extended, the environment also contains the nondeletion assumption for the addition of on(c,d) at W1. Thus, on(c,d) is true at W2. Note, however, that the nondeletion assumption for the addition of on(a,b) at W1 cannot be consistently added to the environment at W2, because it shares a deletion nogood with the world assumption for W2. Hence on(a,b) fails to be true at W2.

4. Merges

We now consider the more complex situation where a world has multiple parents; we call such a world a merge. The ability to perform merges allows a problem to be decomposed into nearly independent components, which can be worked on separately and later recombined. As before, the changes represented by the ancestor worlds are combined. In figure 4-1, the world W4 is a merge. Thus, the state corresponding to W4 will have both blocks moved to the table. We wish to stress that a merge is not the same as a simple union of the facts in the parent worlds, but rather produces the state resulting from a union of the changes from all the ancestor worlds.

Figure 4-1: A Merge

In the example of figure 4-1, the changes along the two branches are independent. More generally, a difficulty arises in that the effect of changes may depend on the order in which they are applied, resulting in an ambiguous merge. In figure 4-2, we show two examples of such merges. In both cases, the state at W5 depends on the order of the preceding changes.

Figure 4-2: Ambiguous Merges
There are basically two ways of dealing with this difficulty. One is to forbid the merge in ambiguous cases. The other is to refine the definition of the merge so that the ambiguity is removed. It is also possible to adopt an intermediate position, forbidding some merges and further specifying others. We now consider examples of each approach.

We have already introduced the requirement that additions and deletions at worlds be effective with respect to the state resulting from actions in ancestor worlds. However, from a strict standpoint of fully specified actions, the additions and deletions could be required to be effective even with respect to possible states resulting from actions in sibling or cousin worlds. Thus, one might forbid a merge if the ancestor subgraph of the proposed merge possesses any linearization in which an addition or deletion is ineffective. One can then prove the following.

Theorem 1: A merge that is not forbidden by the above criterion is unambiguous.

The next result assists in the identification of such forbidden merges (remember we are assuming that additions are always effective with respect to ancestor worlds).

Theorem 2: A graph of worlds admits a linearization in which an addition is ineffective if and only if there are at least two worlds in which the addition occurs such that neither is an ancestor of the other.

A similar result holds for deletions. With this approach, the merges in figure 4-2 would be disallowed. It is of interest that the above restriction resembles that required for conflict-free procedural nets [10], where actions that violate each others' preconditions may not be unordered relative to each other. Indeed, additions and deletions that are mandatory are, in effect, preconditions. From this perspective, the separate branches of the networks of figure 4-2 are in conflict because each branch deletes a precondition of the other.

In a sense, the above restriction is excessive: it forbids many merges that are unambiguous. If one does not require that additions and deletions be effective with respect to non-ancestor actions, a somewhat weaker sufficient condition (but still not a necessary one) that guarantees unambiguous merges can be obtained, as follows.

Theorem 3: A sufficient condition for a merge to be unambiguous is that the ancestor subgraph may not contain two worlds, one of which deletes a fact and the other of which adds it, such that neither is an ancestor of the other.

This criterion also prohibits the examples of figure 4-2. We now follow the other approach to removing the ambiguity, and adopt additional criteria for defining the merge. In the pessimistic merge, an individual fact belongs to the merge if it survives in every linearization of the actions. The rationale is that we may then be assured the fact holds, irrespective of the order in which the actions are performed. Otherwise, we are ignorant of the fact, and the absence of the fact from the merge simply denotes such ignorance, not falsity. Notice that this definition is consistent with the original when the merge is unambiguous. With the pessimistic merge, the fact P is absent at W5 in both examples of figure 4-2. A dual to the pessimistic merge is the optimistic merge, where a fact is true in the merge if it is true in some linearization. Again, this is consistent with the original for the unambiguous case. With the optimistic merge, P is present at W5 in both examples.
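Operationally, the two definitions can be phrased as a brute-force check over linearizations (a sketch of ours, feasible only for tiny graphs, and certainly not the implementation technique of the schemes described below):

  ;; Sketch (ours): a linearization is a list of change records, each record
  ;; a cons of (adds . deletes). A fact survives one linearization if it is
  ;; present after replaying the records in order.
  (defun survives-p (fact linearization)
    (let ((present nil))
      (dolist (change linearization present)
        (when (member fact (car change) :test #'equal) (setf present t))
        (when (member fact (cdr change) :test #'equal) (setf present nil)))))

  (defun in-pessimistic-merge-p (fact linearizations)
    (every (lambda (l) (survives-p fact l)) linearizations))

  (defun in-optimistic-merge-p (fact linearizations)
    (some (lambda (l) (survives-p fact l)) linearizations))

  ;; An add and a delete of P on unordered branches, taken in both orders:
  ;; (let ((add-p (cons '(p) nil)) (del-p (cons nil '(p))))
  ;;   (values
  ;;    (in-pessimistic-merge-p 'p (list (list add-p del-p) (list del-p add-p)))
  ;;    (in-optimistic-merge-p  'p (list (list add-p del-p) (list del-p add-p)))))
  ;; => NIL, T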
We now discuss the effect of merges on the ATMS representation. When a world has multiple parents, the justification for the world node must be expanded to include each of the parent world nodes among the justifiers. The justification scheme for additions and deletions is unchanged. The different kinds of merge are obtained by different selections of which additions the deletions affect, i.e., which justifications for FALSE are entered. For the pessimistic merge, the deletions are effective with respect to all except descendant additions. For the optimistic case, the deletions are effective with respect to ancestor additions only (the optimistic merge tends to be easier to implement efficiently, although it is less defensible on semantic grounds).

One might imagine a wide variety of possible merge algorithms. There are two overriding constraints that led to the schemes described here. One is the necessity of quickly determining whether a potential merge would produce a consistent world, since that is expected to be a high frequency operation. The schemes described allow the merge to be computed as a simple union of ATMS environments. The other constraint is the existence of a large core of unambiguous cases where there is only one reasonable value for the merge. A further merge type that has some intuitive appeal, but does not appear to admit an efficient implementation, arises as follows. Let us call a linearization of the ancestor subgraph of a proposed merge valid if it results in every addition and deletion being effective. It is possible to show that every valid linearization gives the same result for the merge. Thus, one might define the merge to be this common value (if there is any such linearization; the merge could be forbidden otherwise). In figure 4-2, this would lead to P holding at W5 in the left example, but not in the right.

5. Actions and NonMonotonicity

It is instructive to consider how actions might be represented in a more general TMS setting, as suggested by the worlds system. For definiteness, and for contrast, this will be cast in terms of a Doyle-style truth maintenance system [6]. The general approach we follow is to use a form of nonmonotonic inference to reason about the effects of actions. However, the behavior we require in response to contradiction is somewhat different from the standard approach in truth maintenance systems. We will regard a context, or current state of the system, as describing the evolution of a situation to a particular point in time. Besides containing assertions about facts in the "present" such as "block a is on block b," the context records past actions like A3: "block a was placed on block b". Note that there may be several occurrences of individual actions with the same description; we distinguish between the occurrences by giving them unique identifiers such as A3. The numbering of the identifiers is not intended to imply temporal order. Thus, so far, the relative timing of past actions has not been represented. The positive effects of an action can be represented by justifications linking the occurrence of past actions to present facts. For example,

  A3 & P5 -> block a is on block b.

P5 is a preservation condition of the form "block a was not moved off block b after A3." In order to allow deletion, we justify P5 as an assumption by giving it a nonmonotonic justification of the form

  (D5) -> P5

Here, "(D5)" indicates that D5 is an OUT-justifier, where D5 is the statement that "some action after A3 moves block a off block b". If a subsequent action, say A4, moves the block off, we supply a justification

  A4 -> D5

causing the OUT-justifier to come IN, thereby undercutting the derivation of "block a is on block b." Note that the information about the relative timing of actions is now implicitly represented by these justifications.
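Collected as data, the example's justifications look like this (our transcription into list form; a Doyle-style TMS would store equivalent records):

  ;; The justification network of the example (ours, in list form). Each
  ;; entry is (consequent :in <IN-justifiers> :out <OUT-justifiers>).
  (defparameter *block-justifications*
    '(((on a b) :in (a3 p5) :out ())   ; effect of A3, guarded by preservation P5
      (p5       :in ()      :out (d5)) ; P5 holds while D5 is OUT (nonmonotonic)
      (d5       :in (a4)    :out ()))) ; A4 brings D5 IN, undercutting P5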
If a subsequent action, say A4, moves the block off, we supply a justification A4-+05 causing the OUT-justifier to come IN, thereby undercutting t,he derivation of “block a is on block b.” Note that the information about the relative timing of actions is now implicitly represented by these justifications. A difficulty with this representation arises when the problem solving process generates contradictions that represent dead ends in the search space. We do not wish the preservation assumptions to be implicated in these; rather, we wish the assumptions representing choices of actions to be the ones considered for revision. Choosing a preservation assumption as culprit during backtracking would 16 / SCIENCE amount to postulating the existence of an unknown action that deletes one of the facts leading to the contradiction. However, if we make the separation between problem solving and truth maintenance suggested by de Kleer, then from the point of view of the TMS, the only actions that exist are those that the problem solver has informed it about. Some new mechanism is required to ensure that the TMS handles this correctly. One possibility is to have something like a “sheltered” assumption, which could be refuted directly, but not indirectly in response to a contradiction. Incidentally, the need for a more discriminating process of culprit identification is not confined to the difficulty with preservation assumptions. As another example, consider a situation where a burglar is planning to break into a house late at night. To accomplish his purpose, he must choose some method of entry. One method is to break in a window. However, this may have the consequence of waking the occupants, if they are home, which would defeat his purpose. Let us suppose the burglar makes the default assumption that the occupants are home. The difficulty is that a standard truth maintenance system, in attempting to resolve the “contradiction” of waking the occupants, might elect to revise the assumption that the occupants are home, even though that is not subject to the burglar’s control, instead of the real culprit, breaking the window. The system would in effect regard the undesired consequence of waking the occupants as evidence for their absence. However, it is only when there is independent evidence for the occupants being absent that this possibility is worth considering. This example of “wishful thinking” suggests that truth maintenance systems in general need a more refined treatment of contradiction handling. Although the approach outlined here could be adapted to using the ATMS more directly for modeling actions, it would be cumbersome for a user to have to input the justifications representing additions and deletions by hand. The worlds facility described earlier provides a framework that presents a more convenient interface to an action modeling system. 6. Closing Remarks The worlds considered here resemble the data pools of h?cDermott [7]. H owever, the result of a merge in the data pool approach is determined by the arbitrary chronological order in which items are recorded in data pools. This means that two graphs with the same apparent external structure may have different results for a merge. Another difference is that data pools apparently have no notion of contradiction (at least none is mentioned by McDermott in the paper). One attractive aspect of McDermott’s approach is that justifications may have OUT- justifiers. 
6. Closing Remarks

The worlds considered here resemble the data pools of McDermott [7]. However, the result of a merge in the data pool approach is determined by the arbitrary chronological order in which items are recorded in data pools. This means that two graphs with the same apparent external structure may have different results for a merge. Another difference is that data pools apparently have no notion of contradiction (at least none is mentioned by McDermott in the paper). One attractive aspect of McDermott's approach is that justifications may have OUT-justifiers. However, this requires that labels be computed by solving Boolean equations, rather than the simple propagation procedure of the ATMS. The Viewpoints™ facility of Inference Corporation's ART™ system appears quite similar in behavior to the worlds facility described here.* However, it is difficult to make detailed comparisons since little information has been made available about the underlying mechanisms of ART.

We have described an approach to constructing a context mechanism that represents a partially ordered network of actions or state changes. A realization of the mechanism has been described in terms of an underlying Assumption-Based Truth Maintenance System. An examination of a similar representation in a classical TMS system suggests a shortcoming in the way existing truth maintenance schemes handle contradictions. The approach described has been implemented as part of the KEEworlds facility of KEE and appears to provide a useful and efficient tool for reasoning about multiple situations. The KEEworlds facility integrates the multiple worlds system with an existing frame-based representation system, provides a graphical browser for manual exploration of worlds, and allows rule-based generation of worlds during either forward or backward chaining. An application-oriented discussion of the KEEworlds facility, together with an example of its use, may be found in [8].

* Viewpoints and ART are trademarks of Inference Corporation.

References
[1] Bobrow, G. and B. Raphael. New Programming Languages for Artificial Intelligence Research. Computer Surveys 6(3):153-174, 1974.
[2] de Kleer, J. Choices Without Backtracking. In Proceedings, AAAI-84. Austin, Texas, 1984.
[3] de Kleer, J. An Assumption-Based Truth Maintenance System. Artificial Intelligence 28(1), 1986.
[4] de Kleer, J. Extending the ATMS. Artificial Intelligence 28(1), 1986.
[5] de Kleer, J. Problem Solving with the ATMS. Artificial Intelligence 28(1), 1986.
[6] Doyle, J. A Truth Maintenance System. Artificial Intelligence 12(3), 1979.
[7] McDermott, D. Contexts and Data Dependencies: A Synthesis. IEEE Transactions on Pattern Analysis and Machine Intelligence 5(3):237-246, May 1983.
[8] Nardi, B. and A. Paulson. Multiple Worlds With Truth Maintenance In AI Applications. In Proc. ECAI-86. Brighton, England, 1986.
[9] Nilsson, N.J. Principles of Artificial Intelligence. Tioga Publishing Company, Palo Alto, Ca., 1980.
[10] Sacerdoti, E.D. A Structure for Plans and Behavior. Elsevier North-Holland, 1977.
[11] Tate, A. Generating Project Networks. In IJCAI-77, pages 888-893. Cambridge, Massachusetts, 1977.
[12] Waldinger, R.J. Achieving Several Goals Simultaneously. In Elcock, E. and Michie, D. (editors), Machine Intelligence 8, pages 94-136. Ellis Horwood, Chichester, 1977.
1986
71
518
Automatic Compilation of Logical Specifications into Efficient Programs

Donald Cohen¹
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, Ca. 90292

Abstract

We describe an automatic programmer, or "compiler", which accepts as input a predicate calculus specification of a set to generate or a condition to test, along with a description of the underlying representation of the data. This compiler searches a space of possible algorithms for the one that is expected to be most efficient. We describe the knowledge that is and is not available to this compiler, and its corresponding capabilities and limitations. This compiler is now regularly used to produce large programs.

1. Introduction

This work is motivated by a desire to help programmers do their job better, i.e., more easily and quickly create and modify programs that are more efficient, correct and understandable. Our approach follows the well-travelled route of supplying a "higher level language" which allows a programmer to say more of what he wants the machine to do and less of the details of how it is to be done. This leads to programs that are shorter, easier to understand and modify, and contain fewer bugs. However, higher level languages tend to degrade efficiency, since their compilers fail to make many optimizations that a human might make. In fact, some optimizations cannot even be expressed in the higher level language.

Our higher level language, AP5, is an extension of Lisp in which programs can be written with much less commitment to particular algorithms or data representations. This is a benefit to the degree (which we believe is quite large) that programmers spend their effort dealing with these issues. One way to avoid thinking about data representation is to use a single powerful representation for all data. To some extent this is the approach of APL [Pakin 68], PROLOG [Clocksin 84] and Relational Databases [Ullman 82]. This unfortunately results in a large performance penalty. AP5, SETL [Schonberg 81] and MRS [Genesereth 81] provide the illusion of a uniform relational representation, but avoid the penalty by representing different relations with different data structures. AP5 goes further by accepting "specifications" that contain compound well-formed formulas (wffs). Its compiler assumes the responsibility of finding good algorithms to implement these specifications. Another advantage of multiple representations, though not the focus of this paper, is that new things can be regarded as data, e.g., the + function may be regarded as a relation which is true of infinitely many tuples. AP5 compiles uses of such relations into computations rather than data structure accesses.²

The compilation process is not entirely done by the machine. The programmer must provide a small amount of "annotation", which is not part of the actual specification. The most important annotations tell AP5 how to represent the data. If the predefined representations are inadequate, he can add new ones. The compiler searches a space of algorithms appropriate for the given representations. Its job is to find the most efficient one that meets the specification.

¹This research was supported by the Defense Advanced Research Projects Agency under contract No. MDA903 81 C 0335. Views and conclusions contained in this report are the author's and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any person or agency connected with them.
Of course, the variability of the representations makes this job much more difficult. This paper describes how the compiler does its job.

The following programming methodology seems to work well for AP5: First the programmer writes a specification. Next he adds annotations that select general (but inefficient) data representations. The result is compiled into a prototype that can be tested on small examples. (Even specifications contain bugs!) Finally he optimizes the prototype by changing the most critical annotations.

2. Example

Suppose we want a program to find the nephews of an input person. The data is a set of people along with the sex and parents of each. For simplicity we define a nephew as the son of a sibling (not including the son of a brother-in-law or sister-in-law), and we define siblings to include half-brothers and half-sisters. Figure 2-1 shows a sample Lisp program for this task, and figure 2-2 shows a corresponding AP5 program. The AP5 program contains two wffs, one as a definition for the Sibling relation and the other as a specification of the objects desired as output. Wffs are represented in prefix notation. The quantifiers ∀ and ∃ are represented by the symbols All and Exists. The connectives ∧, ∨, ¬, etc. are represented by the symbols "and", "or", "not", etc.

(Defun list-nephews (person)
  ;; Algorithm:
  ;;  (1) get the parents of person,
  ;;  (2) get their children, excluding the input person (siblings)
  ;;  (3) get their children (nephews and nieces)
  ;;  (4) filter out the nieces
  ;;  (5) remove duplicate nephews
  ;; Representation:
  ;;  persons are symbols (with property lists)
  ;;  (get x 'parents) is a list of x's parents
  ;;  (get x 'children) is a list of x's children
  ;;  (get x 'sex) is x's sex
  (remove-duplicates
    (loop for parent in (get person 'parents)
          nconc (loop for sibling in (get parent 'children)
                      unless (eq person sibling)
                      nconc (loop for nephew in (get sibling 'children)
                                  when (eq (get nephew 'sex) 'male)
                                  collect nephew)))))

Figure 2-1: Lisp program to find nephews

(DeclareRel Sex 2)     ; a binary relation, e.g., (Sex Sam Male)
(DeclareRel Parent 2)  ; (Parent child parent)
(DefineRel Sibling (x y)
  (and (not (eq x y))
       (Exists (parent)
         (and (Parent x parent) (Parent y parent)))))

(Defun list-nephews (person)
  (loop for nephew s.t. (and (Sex nephew 'male)
                             (Exists (sibling)
                               (and (Sibling person sibling)
                                    (Parent nephew sibling))))
        collect nephew))

Figure 2-2: AP5 program to find nephews

Comparison of these two programs reveals that the Lisp program specifies both a data representation and an algorithm, while the AP5 program only specifies the result to be computed. The AP5 program can be written and understood without thinking about either representation or algorithm, while the Lisp program cannot be written or understood without understanding both. When similar representations for the Sex and Parent relations were selected from the library, the AP5 compiler generated the same algorithm used in the Lisp program.³ However, other annotations might cause it to produce a very different program. For instance, if there were only ten males in the world and the typical person had a thousand children, a better algorithm might start by enumerating the males. Such statistical information comprises most of the annotations other than relation representations.

²Computations can also express the intent of recursive definitions, which AP5 prohibits because they allow multiple interpretations.

³Almost: the only difference was that it removed duplicates from the intermediate set of siblings. I was too lazy to make this optimization in the Lisp program, although I would expect the result to be twice as fast. Also, it turns out that the AP5 algorithm for removing duplicates is linear in the size of the input, while remove-duplicates (at least in the Common Lisp I use) is quadratic.
3. What the Compiler Does and Doesn't Do

The compiler expects to be told how to test and generate primitive relations such as Parent and Sex. It combines these small pieces of functionality into programs that test and generate compound wffs (those with connectives and quantifiers). The compiler can be thought of as three parts: a simplifier, a compiler for tests and a compiler for generators. The second and third of these embody knowledge of how to test and generate various sorts of compound wffs. Before that knowledge is applied, the wff is simplified. We will not describe the simplifier in detail. It does the sorts of things that other simplifiers do: removing double negations, repeated conjuncts and disjuncts and vacuous quantifiers, detecting simple tautologies and contradictions, etc. Like all simplifiers it could be improved, but for present purposes the reader should assume that it produces reasonable results. The compilers for tests and generators expect their inputs to be simplified.

Not surprisingly, the compiler is somewhat limited, as compared to human programmers:

- It does not understand general Lisp code. A human programmer (but not AP5) might use his understanding of Lisp to optimize
  (loop for x s.t. (P x) until (lisp-predicate) do (lisp-code x))
  by generating x's in an order that reduces the number of iterations.

- It produces only "uniform" algorithms. A program written by a human to find a prime between two inputs might use these inputs to choose among several internal subprograms. AP5 does not write such programs. It believes that the cost of any algorithm is independent of the particular inputs.

- It does not understand the problem domain. If a prime number is defined as an integer greater than one with no integer divisor between one and itself, AP5 will not know to quit looking for divisors after the square root.

- It cannot be given special purpose algorithms for compound wffs. We all know a good algorithm for generating a range of integers, but AP5 cannot find a terminating algorithm for generating {n | integer(n) ∧ lower < n ∧ n < upper}.⁴

Some of these limitations are discussed further in the section on future work. We now turn to a detailed description of what AP5 can do and how.

4. Testing

The description of a representation provides the compiler with an algorithm for testing relations with that representation and a cost estimate for that algorithm. It is trivial to compile programs that test conjunctions, disjunctions, negations, etc. and to estimate their costs. The cost estimates (and size estimates) of conjuncts and disjuncts can be used to further improve the order in which the conjuncts or disjuncts are tested. Wffs of the form (Exists vars wff) are tested by trying to generate an example:

(loop for vars s.t. wff thereis t)

⁴One way around this problem is to introduce the relation Integer-Between. We can then tell AP5 how to generate this relation and never admit that it has anything to do with the "<" relation. We can also replace the specification with a lower level program.
As we will see, not all wffs can be generated, so some wffs cannot be tested. Wffs of the form (All vars wff) are tested as if they had been written (not (Exists vars (not wff))).

5. Generating

The largest and most interesting part of the compiler is devoted to generating tuples that satisfy a wff. By generating we mean enumerating in any order all such tuples without producing any duplicates. Of course, external mechanisms might decide to quit before they have all been produced, e.g., the algorithm for testing existential wffs quits when the first example is generated. This section attempts to:

- describe what AP5 can and cannot generate
- convince the reader (but not prove) that this covers all sets (with a few reasonable exceptions) that can be generated knowing only about logic and the primitive relations
- convince the reader that AP5 usually finds efficient algorithms, and point out known exceptions

It is not assumed that the set of objects is enumerable.⁵ Thus some generation tasks cannot be accomplished. For example, {x | true} cannot be generated. Likewise, for any set of variables, vars, and wff, wff, it is not possible to generate both {vars | wff} and {vars | ¬wff}. Even finite sets can fail to be generated due to the lack of a suitable algorithm. Suppose elements of the set S are identified by a special property on their property lists. This enables us to test whether an object is in S, but without a way to find all objects in the world with property lists, there may still be no way to generate S. Of course, it is also possible simply to neglect to tell AP5 how to generate a primitive wff.

For the remainder of this section we assume that AP5 is given a set of variables, V, and a simplified wff, W. If not all of the variables of V are free in W, AP5 complains at compile time. Clearly, if any set of values for V satisfies W, then uncountably many sets of values do: any value can be used for variables that are not free in W. We therefore are only sacrificing the ability to generate {V | W} when it turns out to be empty. Such programs are not necessary, and are probably not what the programmer intends anyway.

The generator compiler consists of one compiler for each logical construct, e.g., conjunction or existential quantification. Each compiler recursively finds the best ways to generate or test subwffs and combines these into a program for its construct. Each of the following sections will describe how a construct is generated, the coverage and efficiency of that algorithm, and how the cost and number of results are estimated.

5.1. Primitive wffs

The description of a representation provides the compiler with a set of algorithms for generating relations of that representation, along with estimates of their costs. Each such algorithm requires values for certain positions as inputs and generates the others as outputs. It can thus be characterized by a list of "i"s and "o"s, standing for inputs and outputs. For example, an algorithm characterized by (i o i o o) accepts as input two values, v1 and v2, and generates x,y,z triples such that R(v1, x, v2, y, z).

⁵The set of objects is meant to include all potential Lisp objects, e.g., all numbers and lists. Even if this set could be enumerated in principle, nobody would do it. Since we are interested in running programs it seems appropriate to consider the set of all objects not to be enumerable.
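Concretely, a generator characterized by (i o i o o) is just a function from the two input positions to a stream of output triples. A hypothetical Python rendering (the relation and its stored form are invented for illustration; a real representation might use an index keyed on positions 1 and 3 instead of a scan):

    # R stored as a list of 5-tuples
    R = [(1, 'a', 2, 'b', 'c'), (1, 'd', 2, 'e', 'f'), (9, 'g', 9, 'h', 'i')]

    def gen_R_ioioo(v1, v2):
        # Pattern (i o i o o): inputs fill positions 1 and 3,
        # outputs (x, y, z) come from positions 2, 4 and 5.
        for (a, x, b, y, z) in R:
            if a == v1 and b == v2:
                yield (x, y, z)

    print(list(gen_R_ioioo(1, 2)))   # [('a', 'b', 'c'), ('d', 'e', 'f')]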
If the Employer relation were represented as a list of (person, employer) pairs, there would be an algorithm characterized by (o o) which simply enumerated the pairs in the list. Suppose <args> is a list of k arguments to a k-ary relation, R, and V is a list of the variables in <args>. AP5 can generate {V | R<args>} iff R has a primitive generating algorithm with a pattern that contains "o"s in all of the positions where <args> contains variables. For example, the pattern above would identify an algorithm that could be used to generate {x,y | R(1,x,2,y,3)}, but not {x,y | R(x,1,2,y,3)}. AP5 uses the cheapest generator that is sufficient. It also compiles in code to filter out tuples which fail to match the pattern. Such filters are needed in two cases. One is that the algorithm generates positions that are actually supplied as inputs, e.g., the program to find John's employers might look something like

(loop for pair in Employer-Tuple-List
      when (eq (car pair) 'John)
      collect (cdr pair))

The other case is that some variable was used more than once. For example, the compiled program to find people who employ themselves would be something like

(loop for pair in Employer-Tuple-List
      when (eq (car pair) (cdr pair))
      collect (cdr pair))

There is currently no way to supply an algorithm for generating {x | R(x,x)}.

Estimates of the number of tuples matching a pattern can be provided by annotation. A simple heuristic uses those values to estimate the number of tuples matching more specific patterns (changing "o"s to "i"s). A default value is used if no such estimate exists.

5.2. Negations

The current version of AP5 cannot generate negations of primitive wffs. This is not a serious problem in practice, since people tend to name relations in such a way that the negations are not enumerable, e.g., we talk about the relation Parent, with finitely many positive tuples, rather than the relation Non-Parent with finitely many negative tuples. It would be easy to allow generator algorithms for negations of relations should the need arise. For negations of compound wffs, the negation is simply pushed inward (by the simplifier).

5.3. Disjunctions

In order to generate {V | w1 ∨ w2 ∨ ... ∨ wn}, AP5 generates {V | wi} for each 1 ≤ i ≤ n and removes duplications of tuples that satisfy multiple disjuncts.⁶ If a set described by a disjunction is countable, then the set described by each disjunct is countable. Therefore the only way for this algorithm to fail to generate a disjunctive countable set is for it to fail to generate some countable disjunct. Either there is some algorithm for generating this disjunct that AP5 simply doesn't know, or else the disjunct could actually be simplified out, e.g., {x | [x is even] ∨ [x is an even Gödel number of a non-theorem]}.

One algorithm for generating any set that can be tested is to filter a generable superset. However, this does not allow AP5 to generate any new disjunctions, since any such superset of the disjunctive set could just as well be used to generate each disjunct. On the other hand, it would be faster to generate the superset once for the entire disjunction. AP5 does not currently make this optimization. The simplifier might achieve this in some cases, e.g., by replacing {x | [P(x) ∧ Q(x)] ∨ [P(x) ∧ R(x)]} with {x | P(x) ∧ [Q(x) ∨ R(x)]}, but suppose we want a list of people who either have parents or children, i.e., {x | ∃y [Child(x,y) ∨ Child(y,x)]}. The compiler will create two loops, which it also ought to consider merging.

⁶AP5 caches previously returned tuples in a hashtable. It knows that tuples of the first disjunct need not be tested and those of the last need not be stored.
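A minimal Python sketch of this duplicate-removal strategy (ours, with invented data; AP5's actual mechanism is the hashtable cache of footnote 6). Each input generator is assumed to be duplicate-free on its own; the set plays the role of the cache, and, as in the footnote, the first disjunct's output is not tested and the last disjunct's output is not stored:

    def gen_disjunction(*disjunct_gens):
        # Enumerate the union of the disjunct generators without duplicates.
        seen = set()
        last = len(disjunct_gens) - 1
        for i, gen in enumerate(disjunct_gens):
            for tup in gen():
                if i > 0 and tup in seen:   # first disjunct: nothing to test against
                    continue
                if i < last:                # last disjunct: no need to store
                    seen.add(tup)
                yield tup

    # {x | x has a child or x has a parent} over a toy Child relation
    Child = [('ann', 'bob'), ('bob', 'cal')]   # Child(x, y): y is a child of x
    print(list(gen_disjunction(
        lambda: (x for (x, y) in Child),       # those with a child
        lambda: (y for (x, y) in Child))))     # those with a parent
    # -> ['ann', 'bob', 'cal']   ('bob' satisfies both disjuncts, appears once)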
AP5 pessimistically estimates the number of tuples satisfying a disjunction as the sum of the numbers of tuples satisfying its disjuncts. The estimated generating cost is the sum of the costs of generating the disjuncts.

5.4. Existential Quantification

Suppose we want to generate {V | ∃U w}, where U and V are lists of variables. We can assume that no variable appears more than once in V,U (the concatenation of the lists) and that every such variable is free in w.⁷ AP5 generates {V | ∃U w} by generating {V,U | w}. The values for U are discarded and those for V returned, after removing duplicate tuples. There are some representations for which better algorithms exist, e.g., if R(x,y) is represented as an AList where the CDR of each entry is a list of y values related to the x value in the CAR, there is no need to look at all the elements of the CDR. However, this is an example of an algorithm for a compound wff that depends on the representation of a component relation, and AP5 cannot at present be given such algorithms.

There are countable sets described by wffs of the form {V | ∃U w} that cannot be generated by this algorithm: {V,U | w} might contain uncountably many tuples which all share the same few values for V. However, algorithms for actually generating such sets always seem to rely on domain knowledge or on transformations of the wff that could be done by the simplifier. Again, it is instructive to imagine that we have S, a superset of {V | ∃U w} which we can generate and filter by ∃U w. This might be easier than generating {V,U | w}, since the values for V are already supplied. While finding an appropriate S requires domain knowledge in general, an example where logical knowledge suffices is when w is actually a conjunction of S and another wff, w1, i.e., we are generating {V | ∃U [S(V) ∧ w1]}. But in this case S must not use any of the variables of U, and the wff can be simplified to {V | S(V) ∧ ∃U w1}, for which, as we will see, AP5 will find the algorithm we described.

The estimated cost of generating {V | ∃U w} is the cost of generating {V,U | w}. The estimated number of tuples is the size of {V,U | w} divided by the size of {U | ∃V w}.

5.5. Conjunctions

AP5 can generate {V | w1 ∧ w2 ∧ ... ∧ wn} iff it can:

1. choose some conjunct to generate first; we will assume without loss of generality that it is the first conjunct (we can always reorder the conjuncts). Let V1 be a list of the variables in w1, and V2 be a list of the others
2. generate {V1 | w1}
3. generate {V2 | w2 ∧ ... ∧ wn} (given bindings for the variables in V1)

If either V1 or V2 is empty, the corresponding generation is just a test. The point is that we can use the bindings for the variables of V1 that were obtained from w1 in order to find bindings for the variables of V2.

Example: Suppose we want {x | P(x) ∧ Q(x)}. Clearly, V1 will have to be {x} and V2 the empty set. We still have to decide whether to generate P and test Q or vice versa. If only one can be generated, there's no choice. If neither can be generated, there's no solution. If both can be generated, the choice can be made on efficiency grounds, as sketched below.

⁷This is because (1) the simplifier deletes variables from U that are not free in w, (2) variables in U are not free in [∃U w], and (3) AP5 complains if V contains variables not free in [∃U w] (this includes duplicated variables).
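A hedged sketch of this decision for {x | P(x) ∧ Q(x)} in Python (all names are invented; the real compiler works on Lisp code and annotation-derived cost estimates): each conjunct carries a generator, a tester, an estimated size and costs, and the cheaper of the two generate-and-test orders is chosen.

    def conjoin_cost(gen_conj, test_conj):
        # Cost of generating gen_conj and testing each result with test_conj.
        return gen_conj['gen_cost'] + gen_conj['size'] * test_conj['test_cost']

    def compile_conjunction(p, q):
        # Return a generator for {x | P(x) and Q(x)}, choosing the cheaper order.
        if conjoin_cost(p, q) <= conjoin_cost(q, p):
            outer, inner = p, q
        else:
            outer, inner = q, p
        return (x for x in outer['gen']() if inner['test'](x))

    # Toy relations: P is small, Q is large, so P is generated and Q tested.
    P = {2, 3}
    Q = set(range(1000))
    p = dict(gen=lambda: iter(P), test=lambda x: x in P,
             size=len(P), gen_cost=len(P), test_cost=1)
    q = dict(gen=lambda: iter(Q), test=lambda x: x in Q,
             size=len(Q), gen_cost=len(Q), test_cost=1)
    print(sorted(compile_conjunction(p, q)))   # [2, 3]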
The cost of generating P and testing Q is easily computed from the cost of generating P, the cost of testing Q (once), and the number of tuples expected in P. AP5 also considers the possibility of generating Q, storing the answers in a local cache which is more efficiently tested than the original representation of Q, then generating elements of P and testing them with the cache. This strategy is better if the cost of testing each element of P with the original representation of Q exceeds the cost of generating Q once, building the cache and testing each element of P with the cache. To really compare costs one must know how much of the set will actually be generated. AP5 assumes the whole set is needed, but the computation is organized to return the first values as soon as possible.

In general, when a conjunct is to be used many times, it may be worthwhile to make a local cache that is optimized for the kind of access that is needed. AP5 currently only considers building temporary caches for testing an entire conjunct. This misses some opportunities for optimization, a deficiency we hope to correct. As an example, suppose we want a list of people who either have parents or have no children, where the Child relation is stored as a list of (parent, child) pairs. There are uncountably many objects with parents or no children, so AP5 tests each person separately, searching the entire Child relation. It might be better to first build separate caches for the objects with children and those with parents, then enumerate people and filter them with the caches.

Example: Suppose we want {x,y,z | P(x,y) ∧ Q(y,z)}. In this case V1 must be either {x,y} or {y,z}. If {x,y | P(x,y)} can be generated, then it's only necessary to generate {z | Q(y,z)}. The alternative is to generate {y,z | Q(y,z)} and {x | P(x,y)}. In either case the resulting program will look like a pair of nested loops. Again, if the inner loop is expensive it may be worthwhile to build a local cache.

Example: Suppose we want {x,y | P(x) ∧ Q(y)}. Obviously we have to be able to generate both {x | P(x)} and {y | Q(y)}. The nested loop tends to be more efficient if the inner loop generates the conjunct that takes less time per output.⁸ Again, it may be worthwhile to build a local cache for the inner loop.

It's easy to find countable sets described by conjunctions that cannot be generated with the algorithm above: imagine two uncountable sets with a countable intersection. The conjunction could be generated if we had a generable superset S of the intersection. One example mentioned earlier in the context of domain knowledge is {x | integer(x) ∧ lower ≤ x ∧ x ≤ upper}. If logical knowledge is sufficient to recognize such a case, then it would seem to be the responsibility of the simplifier, e.g., if A is an infinite set with an infinite complement and B is a finite set,

{x | [A(x) ∨ B(x)] ∧ [¬A(x) ∨ B(x)]} = {x | B(x)}

AP5 assumes pessimistically that the tuples that satisfy the conjuncts will be highly correlated, e.g., that if there are 100 elements of P and 1000 elements of Q, there are nearly 100 elements of the intersection. Given the estimates of the sizes of these sets, the costs of generating them (and testing them), and some algorithm as described above, it's fairly easy to estimate the cost of generating the conjunction. AP5 could also use annotations estimating correlations among sets to improve its size estimates. This would allow it to apply the strongest filters earliest.

⁸The actual analysis shows that the comparison should be done on the generation time divided by one less than the size, since each relation has to be generated at least once either way.
Suppose sets P, Q and R each have the same size and the same cost for generating and testing. If P has a large intersection with each of Q and R, but Q and R have a small intersection, the best way to intersect all three is to intersect Q and R first and leave P for last. Unfortunately, this data seems too much to ask of the user.

Some conjunctions can be generated in many different ways. Much effort has been spent optimizing the search for the best algorithm, but space does not permit a description of how this search is performed. The potential for exponential explosion has not been a problem in practice. Compilation of queries with conjunctions tends to be more expensive than other wffs, but not enough so to discourage their use.

5.6. Universal Quantification

AP5 cannot generate {V | ∀U w}. A degenerate case that AP5 could compile arises when {V,U | w} can be generated: {V | ∀U w} is trivially empty (since any set of values for V would require w to have too many tuples of U to generate). Again, we think it's acceptable not to compile expressions with constant values, since they are probably errors and could always be written in a better way.

If {V | ∀U w} is countable, {V,U | w} must contain many U's for a few V's. AP5 cannot verify that a V is in the set by checking all the U's, because there are too many. Another approach is to determine that there are no values of U for which V fails to satisfy w. The remaining problem of generating candidate V's could be solved if {V | ∀U w} were known to be a subset of some generable set, S. One case where this strategy would seem possible is where w has the form [S(V) ∧ ¬P], i.e., S does not depend on U, S is a superset of the set we want, and the universal property can be checked by an algorithm that finds the counterexamples. In this case, we are trying to generate {V | ∀U [S(V) ∧ ¬P]}, which can be simplified to {V | S(V) ∧ ∀U ¬P}, which can be generated by the algorithm for conjunctions.

As an example, suppose we have a relation Grade which is true of the 3-tuples <x,y,z> such that student x got grade y in course z. One possible query with a universal quantifier is a request for a list of all the straight-A students, i.e., the students all of whose grades are A's. Notice that AP5 couldn't possibly generate the objects all of whose grades are A's, since this would include all objects without any grades, which is almost all objects in the world. The point is that universal queries tend to specify a generable range, and that AP5 can use this range to compile an algorithm that generates the range and tests the universal condition. In AP5 the straight-A students could be generated by this program:

(loop for x s.t. (and (Student x)
                      (All (y z) (implies (Grade x y z) (= y 'A))))
      collect x)

6. Related work

[Smith 85] discusses the problem of optimizing conjunctions, which is the AP5 compiler's hardest problem. The space of algorithms considered is quite similar to the one used in AP5, but the cost model is much more simplistic: every conjunct is assumed to be either ungenerable or generable in constant time per tuple. Smith also deals extensively with the issue of searching for the best ordering, which we have not discussed here, other than remarking that AP5 seems to solve it in practice.
The largest body of work related to AP5 compilation deals with database query optimization [Ullman 82]. The biggest difference is that database systems do not allow user-defined data representations. The representations available can only represent finite relations, so general computations cannot be treated like relations. Another major difference is that AP5 makes the assumption, typical of most programming, that it's dealing with data that fits in the address space. Database algorithms assume the opposite and therefore do not consider some of AP5's algorithms. For instance, AP5's algorithm for eliminating duplicates requires that the set of previously generated tuples fit in the address space. Other differences between databases and AP5 are similar to those described by Smith in his comparison of databases with his own work.

7. Future Plans

Several failings of the AP5 compiler which we hope to correct have already been mentioned. In addition, some of the limitations listed in section 3 can be attacked. For one thing, it would be easy to accept generating algorithms for arbitrary wffs. The hard problem is recognizing when another wff can be transformed to make use of that algorithm. A trivial example is that we would like to recognize that an algorithm for generating a particular conjunction applies when more conjuncts are added. Fortunately, this problem does not have to be solved completely in order to gain significant advantages.

AP5 will never have as much domain knowledge as humans, but some kinds of domain knowledge are readily available and offer immediate advantage. One candidate is type information. Suppose we represent Parent(x,y) by putting y on the parent property of x and x on the child property of y. Then {x,y | Parent(x,y)} cannot be generated. However, if we knew that the Parent relation could only relate people, we could instead try compiling {x,y | Parent(x,y) ∧ Person(x) ∧ Person(y)}, which would succeed if the set of people could be generated. The same type information could be used to optimize this to {x,y | Parent(x,y) ∧ Person(x)}. Similarly, if the set of people cannot be directly generated, it might have a generable supertype.

We would ultimately like AP5 to choose representations for relations. One problem is that we usually want to run part of the program before the whole program is written. This requires representations to be chosen for the first part. Suppose, for example, that AP5 decides to represent the Parent relation with the parent and child properties. If a later addition to the program needs the set of (parent, child) pairs, it will be too late to recover this data. New annotations might reserve (or forfeit) the right to make such requests. Of course, the global optimization problem is also more difficult, and requires more data, such as the relative frequency of different requests and their time constraints.

8. Conclusion

We have described how AP5 compiles logical specifications into efficient Lisp programs, given a small amount of annotation. We have also described the limitations of the compiler. Despite these limitations AP5 has proven very useful. The effect is to automate much of the work of programming. AP5 is currently used by a small number of people on a regular basis. One indication of success is that we tend not to think about what the AP5 compiler does. We simply assume that our specifications are being compiled into acceptably efficient programs.
The only reasons for looking at the algorithms chosen by the compiler are curiosity and performance bugs, which can usually be fixed by changing annotations.

References

[Clocksin 84] W. F. Clocksin and C. S. Mellish, Programming in Prolog, Springer-Verlag, New York, 1984. This book is chosen as a representative of a large Prolog literature.

[Genesereth 81] Michael R. Genesereth, Russell Greiner and David E. Smith, MRS Manual, Stanford Heuristic Programming Project, 1981. Memo HPP-80-24.

[Pakin 68] Sandra Pakin, APL\360 Reference Manual, Science Research Associates, Chicago, 1968.

[Schonberg 81] Schonberg, E., Schwartz, J. T. and Sharir, M., "An automatic technique for selection of data representations in SETL programs," ACM Transactions on Programming Languages and Systems 3, (2), April 1981, 126-143.

[Smith 85] David E. Smith and Michael R. Genesereth, "Ordering Conjunctive Queries," Artificial Intelligence 26, (2), May 1985, 171-215.

[Ullman 82] Jeffrey D. Ullman, Principles of Database Systems, Computer Science Press, Rockville, Maryland, 1982. This book is chosen as a representative of a large database literature.
1986
72
519
FACTUAL KNOWLEDGE FOR DEVELOPING CONCURRENT PROGRAMS

Alberto Pettorossi
IASI-CNR
Viale Manzoni 30
00185 Roma (Italy)

Andrzej Skowron
Institute of Mathematics, Warsaw University
PKiN IX p.907, 00-901 Warsaw (Poland)
and Computer Science Department,
University of North Carolina at Charlotte
Charlotte, NC 28223 (USA)

ABSTRACT

We propose a system for the derivation of algorithms which allows us to use "factual knowledge" for the development of concurrent programs. From preliminary program versions the system can derive new versions which have higher performances and can be evaluated by communicating agents in a parallel architecture. The knowledge about the facts or properties of the programs is also used for the improvement of the system itself.

I THE STRUCTURE OF THE SYSTEM

We present some preliminary ideas for designing an interactive system which can be used for algorithm derivation. The components of the system are best understood by relating them to the Burstall-Darlington methodology [2]. In that approach the programmer is first asked to produce a correct version of the program, and then he has to care about efficiency issues. He then improves that preliminary version by performing "eureka steps" and applying correctness-preserving transformation rules [2] (maybe with the help of a machine for rule application). We generalize those concepts and we suggest the structure of a system (depicted in figure 1) where: i) the mathematical descriptions of the problems generalize the first correct program versions, ii) the factual knowledge [1] generalizes the eureka steps, and iii) the Logical System generalizes the machine for the application of the transformation rules.

For point i) we assume that the descriptions of the problems are constructive, that is, they correspond to executable functional programs. We also assume that we may have some constraints on their executions as, for instance, on the number of computing agents and their topological connections, on the space and time resources, etc.

For point ii) we consider that during the development process the programmer acquires (maybe in an incremental way) the knowledge of some facts about the functions to be computed or the behaviour of the computing agents. Those new facts may or may not be logical consequences of the knowledge already available from the descriptions of the problems themselves.

The Logical System of point iii) is more powerful than the traditional matching procedure, which applies the transformation rules and verifies the related validating conditions [3]. It is basically made out of three modules: a Knowledge Base in which new facts are incrementally added by the programmer or the system itself, an Analyzer-Synthesizer which checks the correctness of the acquired facts and draws the logical consequences from the currently available Knowledge Base, and a Translation Algorithm which uses the checked facts for the (semi)automatic derivation of new and more efficient versions of the programs.

[Figure 1. The structure of the general system: the mathematical descriptions of the problems (constructive functions + computational constraints) and the factual knowledge on the functions and the computing agents feed the Logical System, which produces efficient concurrent programs with communicating agents.]

The Analyzer-Synthesizer module also provides an input to the Knowledge Base.
It activates a "learning process" by updating the historical information about the derivations of the algorithms already performed or the effectivity of the strategies which have been used. That information may be very valuable for the future developments of similar algorithms with constraints. Related ideas on the structure of a program development system were suggested in [7].

The general system we have presented is also capable of generating approximation algorithms for solving problems which may require exponential resources for an exact solution. In that case, in fact, the knowledge of the constraints may force the translation procedure to derive only program versions which use polynomial time or space. We will not discuss this point here.

As a first step towards the realization of the general system we consider a specific instance of it, which is suited for dealing with a class of simple problems of the kind studied in [2]. We assume that the solutions of those problems can be expressed as a set of recursive equations. In that case, in fact, some strategies for developing programs have already been analyzed in the literature (see, for instance, the divide-and-conquer strategy), and the programmer can easily provide factual knowledge from his past experience or through simple considerations. Figure 2 shows the structure of the particular instance of the system we consider in what follows.

[Figure 2. A Calculus+Translation System: functional programs (language L0 and semantics Sem0) with complexity constraints, and factual knowledge expressed as equality of terms (language of facts LF), feed the Logical System (a Calculus C for checking facts, a Translation Algorithm Tr, and a Knowledge Base of transformation techniques and historical information), which produces efficient concurrent programs with communicating agents (language L1 and semantics Sem1).]

The Logical System is essentially made out of three parts:

- a Calculus based on a Theorem Prover which uses symbolic evaluation and induction for checking the validity of the facts about the programs,
- a Translation Algorithm, which translates checked facts into suitable communications among computing agents, so that the derived program versions may achieve the desired performance,
- a Knowledge Base, which has a dictionary of transformation rules and maintains the historical information about the program derivations already performed.

Our system extends the Burstall-Darlington approach in the following respects:

- it allows for the development of distributed and communicating algorithms from program specifications;
- the specification language (or the one of the initial program versions) may be different from the language of the derived programs, and therefore the development of the algorithms is made easier;
- the application of the transformation rules makes the new versions of the programs provably more efficient than the old ones;
- the requirements for the desired complexity bounds are explicitly considered, and the system tries to meet them by applying transformation techniques which turned out to be successful in previous derivations.

We assume that the factual knowledge about the programs to be developed is expressed as equality of terms. This notion will be formally defined later. The example we will give in the following Section will clarify the ideas.
We will not deal here with the question of how the Knowledge Base is updated and how new transformation techniques can be derived from old ones.

II PROGRAM DERIVATION USING FACTUAL KNOWLEDGE

Let us present the basic ideas of the approach we suggest, through an example. We consider the N Chinese Rings Problem. It is a generalization of a puzzle described in [4, p.63], and it is often analyzed in Artificial Intelligence papers.

N rings, numbered from 1 to N, are placed on a stick. We are asked to remove all of them from the stick by a sequence of moves. We have to comply with the following rule, where k (or ~k) denotes the move which takes away from (or puts back to) the stick the ring k: for k=2,...,N, moves k or ~k can be performed iff rings 1,...,k-2 are not on the stick and ring k-1 is on the stick.

clear(k) computes the sequence of moves which removes rings 1,...,k from the stick, if initially they are all on the stick. Conversely, put(k) computes the moves for putting back rings 1,...,k on the stick, if initially they are not on the stick. clear(N), recursively defined by the following program P written in a language called L0 (defined below), solves the puzzle.

P:  clear(1)=1,  clear(2)=2:1,
    clear(k+2)=clear(k):k+2:put(k):clear(k+1)     k>0
    put(1)=~1,  put(2)=~1:~2,
    put(k+2)=put(k+1):clear(k):~(k+2):put(k)      k>0

Suppose that we are also required to obtain a linear time algorithm, i.e., an algorithm which evokes a linear number of recursive calls. Now factual knowledge can be used for developing the above program with the given complexity constraints. Let us denote by ~s the sequence of moves ~mp:...:~m1, for any sequence s=m1:...:mp (so ~s undoes s). We can supply to our system the following fact F1: put(k)=~clear(k). The calculus C (later defined) can check it using induction. The cases for k=1 and 2 are obvious, and for the recursive case we have: put(k+2)=~clear(k+1):clear(k):~(k+2):~clear(k)=~clear(k+2), because ~~s=s.

Once the fact F1 has been accepted, the translation algorithm Tr produces from it the following program version:

P1: clear(1)=1,  clear(2)=2:1,
    clear(k+2)=s:k+2:~s:clear(k+1)  where s=clear(k)

This program is more efficient than program P because a smaller number of recursive calls is generated. However, we have not derived yet the required linear algorithm. Notice, in fact, that each call of clear(k+2) requires the value of the left son call clear(k) and the right son call clear(k+1). (The order of the calls we used is the left-to-right one, after substituting clear(k) for s in the expression of clear(k+2).) Now a new fact about the program P1 (or P) can be discovered by symbolic evaluation:

F2: clear(k+2)]0 = clear(k+2)]11   for k>0.

Later on we will give a formal definition of the language LF of facts. For the time being it is enough to remark that by clear(k+2)]0 we denote the left son call of clear(k+2) and by clear(k+2)]11 we denote the right son call of the right son call of clear(k+2). Fact F2 is obvious because both sides are equal to clear(k), as will be checked by our calculus using symbolic evaluation. An executable rendering of program P and fact F1 is sketched below.
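As a quick illustration (ours, not part of the paper's formalism), here is a minimal Python rendering of program P, encoding the move k as +k and ~k as -k; the final assertion checks fact F1 on small instances.

    def clear(k):
        # clear(1)=1, clear(2)=2:1, clear(k+2)=clear(k):k+2:put(k):clear(k+1)
        if k == 1:
            return [1]
        if k == 2:
            return [2, 1]
        return clear(k - 2) + [k] + put(k - 2) + clear(k - 1)

    def put(k):
        # put(1)=~1, put(2)=~1:~2, put(k+2)=put(k+1):clear(k):~(k+2):put(k)
        if k == 1:
            return [-1]
        if k == 2:
            return [-1, -2]
        return put(k - 1) + clear(k - 2) + [-k] + put(k - 2)

    def inv(s):
        # ~s: reverse the sequence and invert each move
        return [-m for m in reversed(s)]

    # Fact F1: put(k) = ~clear(k)
    assert all(put(k) == inv(clear(k)) for k in range(1, 12))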
From fact F2 the transla- Its incorporation into program P2 using two memory locations produces: I;=lear(l)=l, clear(2)=2:1, tion algori thm Tr will derive the fol rclear(l >=I, clear(2)=2:1, "owing program: where s=clear(k)(E comm Rl)(Olcomm R2) (10 comm R2))decl Rl,R2 P2: The informal explanation of the communication Fact T3 speeds up the computation of P2 because dur- ing the evaluation of clear(k+5) the repeated eval- uation of clear(k) may be avoided. However, there is no point in incorporating Fact F3 into P3 because the agents with names x901 and x913 will not be generated. annotations added by Tr is as follows. We assume that recursively defined functions are e- valuated by a set of computing agents, i.e., triples of the form <agentname,message> :expression. Messages are the local memories of the agents expressions are their tasks, that is, what theyhave to evaluate. Agents dynamically create new agents III FACTS AND SEMANTICS OF CONCURRENT PROGRAYMS while the computation progresses. In particular in our program P2 the agent <x,m>::clear ('x+2) the two agents <xO,mO> ::clear(k) and generates In this Section we will give the definition of the language Ll in which programs with communication annotations are written, and its semantics Semi. We will also define the language LF of facts and the Calculus C, while the details of the Translation Al- gorithm Tr will be left to the reader. The definition of the language LO and its semantics SemO can be de- rived from thelanguage Ll (and Seml) if one does not take into consideration the communication annotations. Those definitions allow us to formally analyze some properties of our system and to state some basic re- sults. Let start off by introducing the following pre- liminary notions. An expression e& EXP in Ll is defined by: e::=nlx/g(e,...)]f(e,...)\e(s ann R)le[z] where z=e' where ne Constants, x&Variables, gc Basic-Functions, f E Recursive-Functions, s E (0 ,***, 'd", Re Locations, and annc {comm, read, write]. -- A program P in Ll is a set of recursive equations each of which is of the form: f(e,...)=n (base case) or f(e,. ..)=el (recursive case) where f occurs in el and el is of the form: e or e decl R. For simplicity we assume that no nested recursive calls of f occur in el and there is one recursive case only. It is possible, however, to extend our results releasing those hypotheses. In order to define the semantics Seml of Ll we need first to introduce the notion of agents. A (computing) agent is a triple of the form: <agn,msg> ::e where agnsAgn,msg E:Msg, ande ECEX~. Agn is a set of agentnames agn defined by: agn::=clagnOI... Iagnk. Msg is a set of messages such that: i) E (the empty message) e Msg, and ii) R+em+W E Msg where: em is the empty elementary message $ or it is a constant elementary message ne Constants, and R and W are the sets of the names of the agents which read and write (respectively) the message em. CExp is a set of closed expressions defined as in Ll with the additional case: .agn (.agn stands for the value of the expression of the agent agn). The semantics Seml is defined in an operational way by assigning to each program in Ll a set of con- ditional rewriting rules for agents. Those rules tell us how to produce new sets of agents from old ones. They are of the form: set of agents <= set of agents if condition and they can be applied inaparallelzyby rewriting non-conflicting subsets of agents [5]. <xl,ml>::clear(k+l). 
The naming convention for the agents is the following: the father agent with name x generates the sons with names x0,...,xk,..., each of which is associated (in the left-to-right order) to a recursive call occurring in the corresponding program equation.

By e comm R we mean that a memory location R is kept during the evaluation of e. Let f(...)]ε denote the call f(...) itself, and let f(...)]js recursively denote the s-son call of the j-son call of f(...), for 0≤j≤k and s ∈ {0,...,k}*. By f(...)(s comm R) we mean that the s-son of the agent evaluating f(...) may look at the value in the location R to know the result of its own computation. That s-son agent will write its result in the location R, if it did not find any value there. It can easily be seen that by writing and reading the location the computation time may be shortened. To make sure that unneeded agents are not generated, in the language L1 we also have the annotations of the form s read R and s write R. The first one forces the s-son to wait for the value of its expression to be written in the location R by another agent. Conversely, s write R forces the s-son to write its final result in the location R, and it will never try to read R. Figure 3 shows the use of the location R for program P2.

[Figure 3. Using the location R for the fact F2: in the call tree of clear(k+2), the 0-son and the 11-son both denote clear(k) and can share their value through R.]

The following program P3 generates a linear number of agents only, and it meets the desired efficiency requirements for a linear algorithm.

P3: clear(1)=1,  clear(2)=2:1,
    clear(k+2)=s:k+2:~s:(clear(k+1)(1 write R))  where s=clear(k)(ε read R)  decl R

One more fact can be discovered about the program P2:

F3: clear(k+2)]001 = clear(k+2)]010.

Its incorporation into program P2 using two memory locations produces:

    clear(1)=1,  clear(2)=2:1,
    clear(k+2)=s:k+2:~s:(clear(k+1)(1 comm R1))
      where s=clear(k)(ε comm R1)(01 comm R2)(10 comm R2)  decl R1,R2

Fact F3 speeds up the computation of P2 because during the evaluation of clear(k+5) the repeated evaluation of clear(k) may be avoided. However, there is no point in incorporating fact F3 into P3, because the agents with names x001 and x010 will not be generated. Such selector facts are easy to check mechanically, as sketched below.
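For programs of the shape of P1 and P2, where the 0-son of clear(m) is clear(m-2) and the 1-son is clear(m-1), the selector facts reduce to simple arithmetic on the call tree. A small Python sketch (ours, for illustration):

    def select(m, s):
        # Argument of the call reached from clear(m) by selector string s;
        # each step assumes the current call is a recursive case (m >= 3).
        for j in s:
            assert m >= 3, "selector runs into a base case"
            m = m - 2 if j == '0' else m - 1
        return m

    k = 10
    assert select(k + 2, '0') == select(k + 2, '11')      # fact F2
    assert select(k + 2, '001') == select(k + 2, '010')   # fact F3
    assert select(k + 2, '01') == select(k + 2, '10')     # fact F4 of Section IV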
When we eventually obtain the agent <e,m>.. ..n where nc Constants,we say that the value of f(...) is n. Given a program in Ll, Seml produces the rewrit- ing rule-schemas for agents as follows. 1. Generation of sons with communications f(eO,...ep)=g(...,i(e,...)(s comm a),...,f(e',..),.., 1 f(el,.. )(slwriteR),..., j f(e2,..)(s2 read a),...) decl R k produces the rule-schema: {<x,E>::f(eO,.., e-p)} -c= (<x,x~fcj+xW>::g(..,.xO,.., .xi ,..,.xj ,..,.xk,...), <xO,E>::f(e,...),...,<xi,E>::f(e',..),..., <xj,E>::f(el,..),...,<xk,E>L:f(e2,...),...} ifBz&R Vy x # yz where v=[js 1 s write R <r s commR occurs in the j-th call}, and x=(ks 1 s read R or s comm R occurs in the k-th call}. xA denotes the set {xa 1 a E A}. The condition of the rule makes it impossible for an agent which has to make a reading communication, to generate new agents (That agent has to wait for the value of its expression to be computed by another agent). As usual, we identify by the numbers O,...,k,... the son calls in the left-to-right order. 2. Base Cases f(eO,..., ek)=n produces: {<x,E>::f(eO,...,ek)} <= {<x,E>::n} 3. Values to Fathers {<x,m>::g(...,.xj,...), <xj,m'>::n} <= {<x,m>::g(...,n,...), <xj,m'>::n} 4. Writing Communications -- {<x,R-+$tW>::e, <xs,m>::n} <= {<x,R-+$t(W-xs)>::e, <xs,m>::n} if XSEW - 5. Reading Communications {<x,RtntW>::e, <xs,m>::el} <= {<x,(R-xs)+n+W>::e, Cxs,m>::n} if xs ER - 6. Basic Functions Evaluation {<x,m>::g(nl,...)} <= {<x,m>::v} if v=g(nl,...) The g in the condition is the mathematical function. 7. Initial Agent For evaluating the expression f(nl,...) the initial configuration is: {<c,E>i:f(nl,...)}. The where-expressions are not considered by Seml be- cause one may get rid of them by substituting the corresponding expressions. Zowever, when applying the generation-of-sons rule,we assume that Seml creates the same agent for all substituted occurrences of the same where-expression. Now,as an example of the definition of Seml let us present the evaluation of clear(5). We write { . ..I---->(==. agl,..., agk} for denoting that the agents to the left are the ones to the right, except for agl,..., agk (see also figure 4). Seml(P3) contains (besides others) the following rule-schemas: (<x,E>::clear(k+2)} <= {<x,{xO}t~+{xll}~::.xO:k+2:.xO:.xl, <xO,E>::clear(k), (rl) <xl,E> ::clear(k+l)} if x#yll for any y; (<x,E> ::clear(l)} <= {<x,E>::l}; WI {<x,E>:: clear(2)) <= {<x,E>::2:1}; b-3 t<x,{xO}+- 4t{xll}>::e, <xll,m>::n} <= {<x,{xO}+n+-{}>::e, <xll,m>::n}; (rf+) {<x,CxO}tn+-O>::e, <xO,m>::el} <= (<x,{}-+ntC}>::e, <xO,m>::n}. WI The rule-schema rl comes from the Generation-of- Sons schema, the rule-schemas r2 and r3 from the Base-Cases schema, and r4 and r5 from the Writing and Reading Communications schemas. The initial agent is <c,E>::clear(5). C<c,E>::clear(5)} -----> {<&,{O} f + f- (11}~::.0:5:.0:.1, <O,E>::clear(3), <l,E>::xear(4)} ----> {==, <l,(lO} f $ f {111}>::.10:4:.10:.11, <lO,E> ::clear(2), <ll,E>::clear(3)} ----> {==, <11,{110} f- 4 + (1111}>::.110:3:.110:.111, <llO,E>::clear(l), <lll,E> ::clear(2)} ----> --- (z, <llO,E>::l, <lll,E>::2:1} -----> {==, <l,(lO} -+ 2:l + {}>::.10:4:~:.11} ----> (==, cl,{} -+ 2:l + {}>::.10:4:.10:.11, <lO,E>::2:1} Automatic Programming: AUTOMATED REASONING / 29 ----> . . . 
----> {==, <1,{}→2:1←{}>::2:1:4:~1:~2:.11}
----> {==, <11,{110}→ø←{1111}>::1:3:~1:2:1}
----> {==, <1,{}→2:1←{}>::2:1:4:~1:~2:1:3:~1:2:1}
----> {==, <ε,{0}→1:3:~1:2:1←{}>::.0:5:~.0:.1}
----> {==, <ε,{}→1:3:~1:2:1←{}>::.0:5:~.0:.1, <0,E>::1:3:~1:2:1}
----> {==, <ε,{}→1:3:~1:2:1←{}>::1:3:~1:2:1:5:~1:~2:1:~3:~1:.1}
----> {==, <ε,{}→1:3:~1:2:1←{}>::1:3:~1:2:1:5:~1:~2:1:~3:~1:2:1:4:~1:~2:1:3:~1:2:1}.

[Figure 4. Flow of messages when computing clear(5) using P3: the tree of agents ε::clear(5), with sons 0::clear(3) and 1::clear(4), 1's sons 10::clear(2) and 11::clear(3), and 11's sons 110::clear(1) and 111::clear(2); values flow from the writing agents to the reading agents through the declared locations.]

Notice that the son agents, after sending their values to the fathers, remain in the configurations, because they may perform a writing communication. An improved operational semantics may garbage-collect the agents which are no longer needed for computing the final result.

The syntax of the language LF of facts is defined as follows:

e ::= ... (as in L1, without communication annotations) | e]s   with s ∈ {0,1,...,k}*
fact ::= f(e,...)]s1 = f(e',...)]s2 | g1(...,f(...),...) = g2(...,f(...),...)

The Calculus C for checking facts about a given program P will be presented assuming that P has only one recursive case for the defined function f and that the facts are of the first form. A fact e1]s1 = e2]s2 is accepted by the Calculus C iff both expressions turn out to be identical (and different from error) after applying the rules of the Basic-Functions algebra and the following rewriting rules:

i)   e]ε ⟶ e
ii)  n]s ⟶ error  if s ≠ ε
iii) x]s ⟶ error  if s ≠ ε
iv)  g(e0,...,ek)]js ⟶ if 0 ≤ j ≤ k then ej]s else error
v)   f(e0,...,ek)]s ⟶ if f(e0,...,ek)=e is an instance of the recursive case of P then e]s else error

For simplicity, in the facts presented in the previous Section we used the s-selectors with reference to the recursive calls only, so that, for instance, g(...,f(...),...,f(...),...)]js ⟶ f(...)]s, where the selected call is the j-th recursive call of the right-hand side.

The fact F2 is accepted by the Calculus C because clear(k+2)]0 ⟶ clear(k) and clear(k+2)]11 ⟶ clear(k+1)]1 ⟶ clear(k). Fact F1 of Section II is an example of the second form of facts.

IV SOME RESULTS AND CONCLUSIONS

The following results can be shown about our system for developing concurrent programs [5].

Correctness Theorem for Communications. If, for every program P in L0 and for all annotations s ann R and s' ann R occurring in Tr(P) in the recursive calls at positions j and j' (respectively), f(...)]js = f(...)]j's' holds, and Tr(P) is deadlock-free, then Tr is correct, that is, for every P in L0 the programs P and Tr(P) compute the same function. □

The proof of the above Theorem would require the formalization of the Translation Algorithm Tr, which we did not present here. We have seen Tr in action when developing program P in Section II.

Proposition. Given a program P in L0, if a reading communication takes place during the evaluation of Tr(P) with a non-linear recursion, then an exponential number of calls can be saved, and in some cases one may obtain a linear time algorithm (see, for instance, program P3). □

That Proposition is important because it guarantees the performance improvements of the derived programs, and often it allows one to satisfy the given complexity requirements.

We have seen that by adding suitable communications to the functional programs we can derive more efficient executions.
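To see the effect concretely in ordinary code, one way to mimic the read/write communication of P3 is simply to share the value of clear(k) instead of recomputing it. A minimal sketch (ours, not the paper's mechanism), building on the Python rendering of Section II and using fact F1 to eliminate put:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def clear_shared(k):
        # P1 with F1 applied: clear(k+2) = s : k+2 : ~s : clear(k+1), s = clear(k).
        # The cache plays the role of the location R: the value of clear(k)
        # computed inside clear(k+1) is reused rather than recomputed, so the
        # number of distinct recursive activations is linear in k.
        if k == 1:
            return (1,)
        if k == 2:
            return (2, 1)
        s = clear_shared(k - 2)
        inv_s = tuple(-m for m in reversed(s))
        return s + (k,) + inv_s + clear_shared(k - 1)

    print(len(clear_shared(20)))   # 699050 moves, from only 20 distinct calls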
A general question arises: Is there an optimal set of facts from which one can obtain the most efficient communications to be added to a given program? The answer is positive in the case of programs with one recursive case only. It can be shown that, given a fact of the form f(...)]s1 = f(...)]s2, the corresponding optimal communication is produced by erasing the longest initial equal subsequence of s1 and s2. For instance, from the fact F3 of Section II we can get fact F4: clear(k+2)]01 = clear(k+2)]10. It can easily be seen that the communications derived from F4 save more computation steps than those derived from F3.

We have presented some basic ideas for the construction of a knowledge base system for developing concurrent functional programs. The system uses a calculus for checking the correctness of supplied "factual knowledge" (or facts) about the functions to be computed. It then translates those facts into suitable communications among concurrent agents, so that the derived computations may satisfy given complexity constraints.

REFERENCES

[1] Barstow, D. "An Experiment in Knowledge-Based Automatic Programming." Artificial Intelligence 12:2 (1979) 73-119.
[2] Burstall, R. M. and J. Darlington. "A Transformation System for Developing Recursive Programs." JACM 24:1 (1977).
[3] Bauer, F. L. et al. "Notes on the Project CIP." TUM-INFO-7729, Informatik, Technische Univ. München (1977).
[4] Iverson, K. E. A Programming Language. Wiley, N.Y. (1962).
[5] Pettorossi, A. and A. Skowron. "A Methodology for Improving Parallel Programs by Adding Communications." LNCS n.208, Springer Verlag, 1985, pp.228-250.
[6] Pettorossi, A. and A. Skowron. "Using Facts for Improving the Parallel Execution of Functional Programs." In Proc. 1986 Int. Conf. on Parallel Processing, Illinois (1986).
[7] Scherlis, W. L. and D. Scott. "First Steps Towards Inferential Programming." In Proc. IFIP 83, North Holland (1983).
1986
73
520
CONCEPTUAL CLUSTERING USING RELATIONAL INFORMATION

Bernd Nordhausen
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92717
ArpaNet: bernd@ics.uci.edu

Abstract

Work in conceptual clustering has focused on creating classes from objects with a fixed set of features, such as color or size. In this paper we describe a system which uses relations between the objects being clustered, as well as features of the objects, to form a hierarchy tree of classes. Unlike previous conceptual clustering systems, this algorithm can define new attributes. Using relational information, the system is able to find object classifications not possible with conventional conceptual clustering methods.

1. Introduction

Conceptual clustering involves grouping objects into conceptually similar classes and producing a characterization of those classes. In recent years there has been active research in the area of conceptual clustering. For a survey of several conceptual clustering systems, see [2]. All of these systems have focused on feature descriptions of the objects, such as color or size, to form a coherent classification. Only Stepp & Michalski [7] have left this narrow domain and used structural descriptions of objects, i.e., attributes of object components and the relationships among those components, to form classes.

However, no system thus far has used relational information to classify the set of objects. This paper describes a system called OPUS, implemented in Prolog, which addresses this issue by using relations over the set of objects (and not simply object components as in structural description), as well as features of objects, to form classes. We thus extend the definition of conceptual clustering [6] to include relational information.

Given:
- A set of objects
- A set of features describing the objects
- A set of relations between the objects
- Criteria to evaluate the quality of a classification

Find:
- A hierarchy of classes and a characterization of the classes

Using relational information, the OPUS system eliminates a deficiency of previous conventional clustering systems; unlike the other systems, this system is able to distinguish between objects which have the same features but different relations. For example, in the domain of genetics, OPUS is able to classify peas not only in terms of their color but also in terms of their offspring, effectively defining the classes of hybrids and purebreds. Another deficiency of other conceptual clustering systems is the inability to create new attributes; all attributes used to characterize objects have to be given to such systems. In contrast, OPUS is able to generate attributes if it determines that the current description of the objects is not sufficient. New attributes are defined as chunks formed from relations and features.

In the next section we describe the OPUS system, detailing the use of relations to form a classification and the generation of new attributes. In the third section we give two applications to illustrate the system. We conclude with two proposals for extending this work.

2. The OPUS System

The input to the OPUS system consists of the objects to be classified, a set of features describing the objects, and a set of relations over the object set, such as eat or parent. The system generates a hierarchical tree of classes, each class having a unique conceptual description.
The system divides the object set into mutually exclusive classes, and recursively divides the classes until a final partitioning is found. At first, features such as color or size are used as attributes to form classes. After the list of current attributes is exhausted (i.e., all members of a given class have the same value for the given features), new attributes are generated. Using these new attributes, the clustering algorithm refines the previously formed classes until all members of the classes have the same value for all current attributes. OPUS continues the cycle of generating attributes and refining classes until new attributes cannot be used to further divide classes. OPUS consists of two distinct parts, the clustering algorithm and the attribute generator; these are described in detail in the following sections.

2.1 The Clustering Algorithm

The OPUS clustering scheme is based on the RUMMAGE clustering algorithm [1]. The goal of the algorithm is to build a hierarchical tree of mutually exclusive classes (clusters) for a given object set. Each object of the set has associated attribute/value pairs for a list of attributes. The hierarchy tree is built in a top-down fashion. At each node in the tree, the algorithm selects an attribute which best partitions the object set according to some clustering criteria.

The simplicity criterion is used to choose a partitioning attribute which forms a simple description, so that it is easy to characterize and differentiate classes. A second criterion is used to avoid the trivial and arbitrary classification which might occur if the above criterion were used alone [6]; the inter cluster difference measures the disjointness of two complexes. The less the values overlap among the remaining attributes, the higher this degree of disjointness will be. A good classification has simple class descriptions and a high degree of inter cluster difference, to maximize the distance between classes.

After an attribute has been selected, the object set is divided into mutually exclusive classes whose members have the same value for the chosen attribute. An arc in the hierarchy tree is labeled with the value for the chosen attribute at that node, and any other value for attributes which are common to all members of that class. The procedure is called recursively until the classes cannot be further divided using the given attributes. At this point OPUS once again defines new attributes and applies the clustering algorithm to refine the classes. If the new attributes cannot further divide the classes, OPUS decides that it has determined the final classes and terminates.

2.1.1 The selection of an attribute

Given an object set and a list of attributes, we want to select that attribute which best partitions the set over the remaining attributes. In order to measure the quality of a proposed clustering, OPUS forms a complex for each value of an attribute. A complex is the logical implication for the value of an attribute over the remaining attributes [6]. Suppose that we have the object set {K, L, M, N, O} with associated attribute/value pairs for attributes A, B, and C. [The attribute/value table for this example is not recoverable.] Given this data, the complexes for attribute A for values [a] and [b] over attributes B and C are:

    (1) [a] ⇒ {(B = [y] ∨ [x]) ∧ (C = [m,n] ∨ [n] ∨ [m])}
    (2) [b] ⇒ {(B = [x]) ∧ (C = [m] ∨ [n])}

That is, if an object has a value of [b] for attribute A, it implies that it has a value of [x] for attribute B, and a value of [m] or [n] for attribute C. OPUS forms these complexes for all values of all attributes. The complexes are used to determine the quality of an attribute. OPUS uses two clustering criteria, the simplicity of the cluster description and the inter cluster difference, which we now discuss.
The simplicity measure is a normalized value of the number of terms in the complexes of an attribute. A complex consists of a logical product of selectors. Each selector is a list of elements from the possible values of an attribute, linked by internal disjunction. The complexity of a selector is the number of terms of the selector divided by the number of terms the selector could have, i.e., the number of domain elements for the attribute of the selector. The complexity of an attribute is the average of the complexity values of all of the selectors of that attribute. The simplicity of an attribute is defined to be the negative of the complexity [6]. The complexity of the second selector of complex (2) in our example is 2/3, because that selector has two elements ([m] and [n]), and there are three possible values ([n], [m], and [m,n]) that attribute C can have. In complex (1), the second selector has a complexity value of 3/3 = 1. The value of complexity for attribute A is the average of 1, 1, 1/2, and 2/3, which is 19/24. Thus the simplicity for attribute A is -19/24.

The computation of the inter cluster difference of two complexes is more involved. We define a selector element to be an element of a selector, that is, an element of the domain of an attribute. (Values of attributes in the OPUS system are sets.) The similarity between two selector elements, e1 and e2, is defined to be sim(e1, e2) = |e1 ∩ e2| / |e1|. The domain similarity of a reference element e of a selector S_i to selector S_j is max{sim(e, e_k)}, over all e_k ∈ S_j. The value E_ij is the average of the domain similarities of all selector elements of selector S_i to selector S_j. Now, the degree of similarity of complex C_k to C_l, denoted Sim_kl, is the average over all E_ij, where i and j are the selectors of identical attribute pairs. The degree of difference of complex C_k to complex C_l, denoted Diff_kl, is just 1 - Sim_kl. Finally, the inter cluster difference degree of an attribute X is the average of all Diff_kl values, k ≠ l, where k and l are complexes of the values of the attribute X.

Referring again to the example, we calculate the inter cluster difference degree for attribute A as follows. For the C selectors, E_12 = (max{1/2, 1/2} + max{0, 1} + max{1, 0}) / 3 = 5/6, and for the B selectors E_12 = (0 + 1) / 2 = 1/2; computing in the other direction, both E_21 values are 1. Thus we have a degree of similarity of complex (1) to complex (2) of attribute A of (1/2 + 5/6)/2 = 2/3, and a degree of similarity of complex (2) to complex (1) of 1. Therefore the degrees of difference are 1/3 and 0, respectively. The inter cluster difference degree for attribute A is (1/3 + 0)/2 = 1/6.
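The following Python sketch (ours, not OPUS) restates the two criteria just computed. The element similarity sim(e1, e2) = |e1 ∩ e2| / |e1| is our reading of the partly garbled definition, chosen because it reproduces the worked values above.

    from statistics import mean

    def complexity(selector, domain_size):
        return len(selector) / domain_size

    def simplicity(complexes, domains):
        sels = [(s, len(domains[a])) for c in complexes for a, s in c.items()]
        return -mean(complexity(s, d) for s, d in sels)

    def sim(e1, e2):                        # asymmetric element similarity
        return len(e1 & e2) / len(e1)

    def E(sel_i, sel_j):                    # average of domain similarities
        return mean(max(sim(e, ek) for ek in sel_j) for e in sel_i)

    def similarity(ck, cl):                 # Sim_kl over shared attributes
        return mean(E(ck[a], cl[a]) for a in ck)

    def inter_cluster_difference(complexes):
        diffs = [1 - similarity(ck, cl)
                 for ck in complexes for cl in complexes if ck is not cl]
        return mean(diffs)

    # The complexes (1) and (2) for attribute A; values are frozensets.
    m, n, mn, x, y = map(frozenset, ("m", "n", "mn", "x", "y"))
    c1 = {"B": [y, x], "C": [mn, n, m]}
    c2 = {"B": [x],    "C": [m, n]}
    print(inter_cluster_difference([c1, c2]))                    # 1/6
    print(simplicity([c1, c2], {"B": [x, y], "C": [m, n, mn]}))  # -19/24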
‘I’llis computation of the irrter clustcar difference for an at tribute makes use of Lhe fact that, irl the 0 1’ II S sys- tt~rri, values of attributes are partially ordered. That is, value (~,6] is further from value (b,c,ct[ than it is from value l~,I,cl, and therefore si~rl( [u, 61, 16, c, dl) is less than sm(1u, 61, [a, 6, cl). Class descriptions should be as dis- tirict, as possible to ensure classes with different properties. Maxirrlixing the inter cluster difference will prorrrotc such The idea of an asyrrirrietric similarity rneasirre may seem c‘or~rrtcrirrt,uitive at first. llowever, ‘I’versky 181 supports a11 asyrrIrrit~t.ri(~ similarity riieasur-e, and tie provides evidence that hurr~arrs “terid to st~1t~c.t. the Itlore salient slimulus . . . as a rc~l’ert~rrl, and t.lit~ less salient stimulus . . . as a subject.” I<eferririg once again to thrh complexes irr the example, any object satisfying t,he conditions of corriplex (2) also satisfies thtb c~orlditions of corrlplex ( I ), but riot vice versa. Therefore .Si911:!, has a higher value ttiari Sz’rr~,~. 0 I’ 11 S rriaxirrii~t3 a t,rade off bc~tween the inter cluster tlifft!rc~rlc~c~ arid the sirrlplicity of a class description. At eaclr Ievc>l in the expanding hierarchy tree, a quality value for tAac.tI al,tribut,e is c~orriputed. This value is the sum of u 4 sirriplicit,y / II* inter cluster difference for hoirit’ user spc~citied cot~lfic~ierrls u and 2/. ‘I‘he user can thus wcbigli lhe irriport4ir.e of these two criteria. 0 1’ US rrraxirtlizchs the qualit,y valutt of the attributes selected at t>iL(.tI rlotlt’ in the eX~>i~l~dillg tree. 2.2 (kneratirlg Att,ril)utc!s .Nc~w attributes havtk to I)e defined when current at- trit,il tc5 are not sufficierlt. to distinguish betweerr rrierribers of Ltlc sarnr class. New at tribut.cs are chunks composed of rtbl;lt ioIls arid features. lt‘or I tris purpost’, we define a co4r&- ~~frx relutr‘o~i r f (X, Y, 2) Lo t )(’ ttrc composition of a rela- t iorr r(X ,Y> and in feature f (Y ,Z>. lcor exarr~ple iri the /0od r/mirL tlorriairr airirrrals could I,(1 tit3,c-ri bed by the feature size and the relationship eat. Thus the relation eat(X .Y> and the feature size (Y, 2) are composed to form the com- plex relation eat size (X ,Y, Z>, describing that X eats Y and Y is of sixe Z. Note that the first and second argument of a complex relation are members of the object, set, while the third is a value of the feature. Complex relations will be used as attributes. The value of an attribute is defined as follows. Given a complex relation r-f (X, Y, Z), the value of the attribute r f for the object X is the set {Z, 1 -1 Y 3 r f (X, Y,Z,)}. That is, the set of all Z’s, such that r f (X,Y .Z) is sat- isfied for sorrie Y. For exarripte, the value of eat size for snake?, in the food chain domain is [small, medium], because eat sizecsnakes. Y, small) is satisfied for Y bound t,o mice and insects, and eat size(snakes, Y, medium) is satisfied for Y bound to snakes. Thus, the attribute eat size has a value of [small. medium] for snakes, because snakes eat small and medium sized ani- rr1als. The systerrr is supplied with a small set of binary re- lations such as cut or parent. These primitive relations involve only two objects, and there is a direct “link” be- tween t Ire two objet ts. In order to define more involved at- tributes, relations consisting of several primitive relations are formed. We deline a feuel n relution as a relation us- ing YL primitive relations between two objects. 
3. Two Examples

OPUS has applications in any domain where objects are described by a set of features and a set of binary relations. Two examples of such domains are presented in the following sections: the food chain domain and the genetics domain.

3.1 The Food Chain Domain

In the food chain domain, we characterize animals using two features, size and locomotion, and one relation, eat. For example, we describe songbirds using the following facts: size(songbirds, medium), locomotion(songbirds, fly), eat(songbirds, worms), eat(songbirds, insects), and eat(hawk, songbirds). All fourteen objects are characterized by the same two features. Fifty-one relational facts are asserted to describe the relationship eat over the object set.

At first, OPUS uses features as attributes to classify the objects. size has the same simplicity value as locomotion, but a higher inter cluster difference value. Therefore size is chosen as the first attribute to divide the object set in the hierarchy tree. For example, a class of medium sized objects is created with the following members: hawks, owls, songbirds, and snakes. After the system has used locomotion to refine classes, there are no attributes left, and new attributes have to be defined. In response, OPUS defines all possible level one relations. The following complex relations and attributes are formed: eat_size, eat_locomotion, eaten_size, eaten_locomotion. The first two describe the size and locomotion of the animals eaten by an object; the latter two describe the size and locomotion of the animals that eat that object. These four attributes are used to divide the existing classes. For example, the class of medium sized flying objects is refined using the attribute eat_size: hawks and owls eat medium and small animals, while songbirds only eat small animals.
After the current attributes have been used to refine the classes, there are only two classes with more than one object left: the class of frogs and toads, and the class of hawks and owls. The level two relations eat_eat, eat_eaten, eaten_eat and their inverses are formed, and concatenated with the features to define level two attributes. Frogs and toads have the same values for these new attributes; therefore that class is not refined. However, hawks and owls have different values for the attribute eat_eaten_size, namely [large, medium] and [large, medium, small]. That is, hawks eat animals which are eaten by large and medium sized animals, while owls eat animals which are eaten by large, medium, and small sized animals. Thus, the attribute eat_eaten_size is used to divide that class. The next level of relations cannot define attributes which refine the class of frogs and toads, so the system terminates. The resulting hierarchy tree is shown in Figure 1.

3.2 The Genetics Domain

Let us now consider an example from the field of genetics. The clustering problem in genetics consists of classifying objects based not only on their observable features, but also on features of their descendants and their ancestors. Gregor Mendel, the founding father of genetics, observed that when a yellow garden pea was crossed with a green garden pea, the resulting offspring pea was yellow [4]. When he self-fertilized that pea, it produced both yellow and green offspring. After he continued to self-fertilize peas, he discovered that some of the yellow peas had yellow and green offspring while other yellow peas only produced yellow offspring. Green peas consistently had green offspring. Mendel thus hypothesized the class of purebreds, peas which produce offspring with exactly the same features as the parent, and the class of hybrids, peas which produce some offspring with the same features and other offspring with features different from their parent.

When OPUS is provided with information about the color of each pea and the offspring each pea produces, it defines the classes of hybrids and purebreds. At first, the feature color is used as an attribute to distinguish yellow and green peas. Next, the attributes offspring_color and parent_color are defined. For the class of yellow peas, the inter cluster difference and the simplicity value for these attributes are equal. In the running system parent_color was picked to refine the class of yellow peas. At this point all peas are correctly identified as either a yellow or green purebred or a (yellow) hybrid. Furthermore, the characterization of these classes corresponds with Mendel's characterization. For example, the class of green purebreds only has green offspring, while the class of hybrids contains only yellow peas which have both yellow and green offspring. OPUS continues to refine the classes, distinguishing, for example, between purebreds with hybrids as parents and purebreds with purebreds as parents.

Mendel continued his experiments, crossing peas with two different traits, color and shape. He observed nine different classes, all having different dominant and recessive traits. We supplied the OPUS system with the color and shape of each pea and asserted the relations over the object set. Again OPUS correctly defined and characterized as intermediate classes all nine classes which Mendel identified as the various hybrids and purebreds.
For example, OPUS defines two different classes of round green peas; one class has members which only have round green peas as offspring, while the other class has members which produce round green and wrinkled green offspring.

[Figure 1: Classification Tree for the food chain domain.]

4. Summary and Further Research

In this paper, we presented a conceptual clustering system which uses relations over the object set to define a hierarchy of classes. Using the relational information, this system is able to find classifications not possible with conventional methods of conceptual clustering. We presented an example from the domain of genetics where the system is able to form the classes of hybrids and purebreds. Furthermore, we introduced a method to define new attributes used in the classification process.

This work can be extended in two ways. It is unrealistic to assume that all the information describing objects is available initially. An incremental version of OPUS would build the hierarchy tree using partial information, predicting missing properties of objects as well as missing objects. As new data becomes available, predictions can either be confirmed, in which case the belief in other similar predictions is reinforced, or they can be disconfirmed, in which case a revision of classes occurs.

The present version of OPUS can handle only binary relations. An extension of the system working with n-ary relations would greatly enhance its power. For example, in the domain of chemistry, some compounds are classified as acids, alkalis and salts depending on (among other properties) their reactive behavior. For example, alkalis react with acids to form salts. Using ternary relations, these classes could be formed in a way similar to GLAUBER [3], yet in a more efficient manner. At the moment, we are actively engaged in working in these directions.

Acknowledgements

I would like to thank Pat Langley, Don Rose and Randy Jones for their help on this work, as well as the numerous people from the machine learning group at UCI who gave me valuable comments on drafts of this paper. This work was supported in part by Contract N00014-84-K-0345 from the Information Sciences Division, Office of Naval Research.

References

[1] Fisher, D. A hierarchical conceptual clustering algorithm. Tech. Report 85-21, Dept. of Information and Computer Science, University of California, Irvine, CA, 1984.
[2] Fisher, D. and Langley, P. Approaches to conceptual clustering. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 691-697, 1985.
[3] Langley, P., Zytkow, J. M., Simon, H. A., and Bradshaw, G. L. The search for regularity: Four aspects of scientific discovery. In Machine Learning: An Artificial Intelligence Approach, Vol. II, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Eds., Morgan Kaufmann Publishers, Los Altos, CA, 1986, 425-469.
[4] Iltis, H. The Life of Mendel, Hafner Pub., New York, 1966.
[5] Michalski, R. S. Knowledge acquisition through conceptual clustering: A theoretical framework and algorithm for partitioning data into conjunctive concepts. International Journal of Policy Analysis and Information Systems, 4 (1980), 219-243.
[6] Michalski, R. S. and Stepp, R. E. Learning from observation: Conceptual clustering. In Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G.
Carbonell, and T. M. Mitchell, Eds., Tioga Press, Palo Alto, CA, 1983, 331-363.
[7] Stepp, R. E. and Michalski, R. S. Conceptual clustering: Inventing goal-oriented classifications of structured objects. In Machine Learning: An Artificial Intelligence Approach, Vol. II, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, Eds., Morgan Kaufmann Publishers, Los Altos, CA, 1986, 471-498.
[8] Tversky, A. Features of Similarity. Psychological Review 84(4), 1977, 327-352.
1986
74
521
Learning by Failing to Explain*

Robert J. Hall
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

Explanation-based Generalization depends on having an explanation on which to base generalization. Thus, a system with an incomplete or intractable explanatory mechanism will not be able to generalize some examples. It is not necessary, in those cases, to give up and resort to purely empirical generalization methods, because the system may already know almost everything it needs to explain the precedent. Learning by Failing to Explain is a method which exploits current knowledge to prune complex precedents and rules, isolating their mysterious parts. This paper describes two techniques for Learning by Failing to Explain: Precedent Analysis, partial analysis of a precedent or rule to isolate the mysterious new technique(s) it embodies; and Rule Re-analysis, re-analyzing old rules in terms of new rules to obtain a more general set.

1 Introduction

A primary motivation for studying learning from precedents is the intuition that it is easier for a domain expert to present a set of illustrative examples than it would be to come up with a useful set of rules. Explanation-based Generalization ((Mitchell, et al., 85) [8], (DeJong and Mooney, 86) [2]) is a powerful method for using knowledge to constrain generalization**. It can be defined as generalizing an explanation of why something is an example of a concept, in order to find a weaker precondition for which the explanation of concept membership still holds. (This weaker precondition describes the generalization of the example.)

The motivation for this work is that explanation is hard and often impossible (e.g., theorem proving). On the other hand, the efficient student should, as much as possible, know what he doesn't know. For example, "I don't understand step 5" is a much more productive query than "Huh?" There are at least two reasons why an explainer can fail: the theory is incomplete, so that there is no explanation; or the explainer simply can't find the explanation, even though it exists. The latter case is not just mathematical nitpicking: the complexity of VLSI circuits and the rich set of optimizations possible create large problems for any circuit-understander.

On the other hand, it is seldom the case that a learner knows absolutely nothing about an example it fails to explain; frequently, a small mysterious thing comes embedded in a large, mostly well-understood example. For instance, consider a multiplier circuit where the only difference between its design and a known one is in the way one particular XOR gate is implemented. It would be a shame to retain the complexity of the entire multiplier when the only new structural information was in one small subdevice. Rather than just reverting to completely empirical techniques when the explainer fails, it would be better to use current knowledge to throw away the portions of the example which are understood. This is what I call Learning by Failing to Explain.

* This paper is based upon work supported under a National Science Foundation Graduate Fellowship.
** (Mahadevan, 85)[7] and (Ellman, 85)[3] have applied this to logic design. (Smith, et al., 85)[10] have applied explanation-based techniques to knowledge base refinement. (Mooney and DeJong, 85)[9] have applied it to learning schemata for natural language processing. (Winston, et al., 83)[12] abstracts analogy-based explanations to form rules.
It is a complementary notion to explanation-based learning: the former operates precisely when the latter fails, and when the latter succeeds there is no reason to try the former. Learning by Failing to Explain could be used as a filter in a learning system which combines both explanation- and empirically-based learning methods. That is, when explanation fails, use Learning by Failing to Explain to isolate a much simpler example for the generalizer. This work does not use it this way: the additional generalization issues are beyond its scope. The current system simply formulates rules without further generalization. It is not intended that this method is complete in itself with respect to learning about design.

This work is not intended as a study of human learning. It is motivated by intuitions about human learning, but no claim is made that the techniques described here reflect human learning.

There are two techniques which comprise Learning by Failing to Explain: in the first, the learner analyzes the given precedent as much as possible, then extracts the mysterious part as a new rule (or pair of rules). I call this Precedent Analysis. In the second, the learner uses new rules to re-analyze old rules. That is, Precedent Analysis needn't be applied only to precedents; there are cases where it is beneficial to have another look at rules found previously. This is called Rule Re-analysis. The system and the latest experiments with it are documented in (Hall, 86)[5]. (Hall, 85)[6] has more detail with regard to the design competences, but documents an earlier version of the system.

2 Domain and Representation

The current system learns rules of the form "structure X implements functional block Y," where by functional block I mean something like "PLUS," which represents a constraint between its inputs and outputs. By structure, I mean an interconnection of functional blocks, where the interconnection represents data flow. As indicated, the illustration domain of the system is digital circuit design. However, the algorithms should apply with minor modifications to other domains, such as program design, which are representable in a generalized data flow format. In fact the system has been applied successfully to learning structural implementation rules in a simplified gear domain, where functional constraints take the form of simple arithmetic relationships among shaft speeds.

It should be noted that, while the implemented algorithms are dependent on the functional semantics of the domains, the basic idea behind Learning by Failing to Explain should be applicable to many other domains. This is one area for future work.

The representation for design knowledge is a Design Grammar. This is a collection of rules of the form LHS => RHS, where LHS denotes a single functional block and RHS denotes a description of an implementation for LHS. The basic representational unit is a graph, encoding the functional interconnection of the blocks. Design Grammars are not, in general, context-free grammars, as the system is allowed to run the rules from right to left as well as left to right. This allows us to understand optimization of designs as a reverse rule use, followed by a forward rule use: find a subgraph of the current design stage which is isomorphic to the RHS of a rule and replace it with the LHS symbol.
Then expand that new functional block instance with a different implementation.

[Figure 1: Learning Example Precedent.]

A Design Grammar is an interesting representation of structural design knowledge both because it is learnable from examples via the method described here, and because it enables four interesting design competences:

- Top-Down Design: the ability to take a relatively high level specification of the function of a device and refine it successively by choosing implementations of subfunctions, then refining the refinement, and so on. In terms of a Design Grammar, this is viewed as using rules in the forward direction.

- Optimization: the ability to take one device and replace a piece of it with some other piece so that the resulting device is functionally the same. In Design Grammar terms, an optimization step is viewed as a reverse rule use, followed by a forward rule use.

- Analysis: the problem of establishing a justification for why some device performs some given function. This is the parsing problem for Design Grammars. (Winston, et al., 83)[12] takes a similar approach, but extracts rules "on the fly" from analogous precedents.

- Analogical Design: the ability to solve a new problem in a way similar to some already solved problem, or by combining elements of the solutions to many old problems. In a Design Grammar setting, this is "running" known design derivations on new design problems. This is accomplished by finding a partial match between the problem specification and the initial specification of the known derivation, applying those transformations which have an analog in the problem, and leaving out steps which do not. This technique for controlling search has been explored by (Mitchell and Steinberg, 84)[11], and even as early as the MACROPS in STRIPS (Fikes, Hart, and Nilsson, 72)[4].

A Design Grammar certainly does not represent all there is to know about design. It is intended that it serve as the backbone of structural knowledge in a larger system of knowledge which includes such things as search control heuristics and analytic knowledge.

3 Learning Rules Using Precedent Analysis

Precedent Analysis is where the learner uses its current knowledge to partially explain an example, so that the truly mysterious aspects of the example are brought to light. The algorithm has two steps: first construct a maximal partial parse of the precedent, then throw away the matched nodes (the parsed sections), leaving two functionally equivalent subgraphs which can be turned into two rules with the same LHS.

3.1 An Example

Suppose the learner is given the precedent shown in Figure 1. (The z⁻¹ boxes represent a time delay of one clock cycle.) The left-hand graph is considered to be the high level graph. Suppose further that the learner knows

- that a one-bit multiplexer (MUX) is implemented by y = (OR (AND a (NOT s)) (AND b s)),
- that (AND a (ZERO)) is an implementation of ZERO,
- that (OR x (ZERO)) is an implementation of BUFFER,
- that a BUFFER may be implemented simply by a connection point.

These would all be represented in a straightforward manner as Design Grammar rules.
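The following Python sketch (ours, not the paper's implementation) illustrates the rule mechanics over expression trees; the paper's graphs are more general, but trees are enough to show forward rule use, and reverse use is just the same call with the two sides swapped. Strings act as variables, tuples as functional-block instances, and the MUX rule is the first one listed above.

    def match(pat, term, env=None):
        """Match a pattern against a term; strings in patterns are variables."""
        env = dict(env or {})
        if isinstance(pat, str):
            if pat in env and env[pat] != term:
                return None
            env[pat] = term
            return env
        if (not isinstance(term, tuple) or term[0] != pat[0]
                or len(term) != len(pat)):
            return None
        for p, t in zip(pat[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env

    def subst(pat, env):
        if isinstance(pat, str):
            return env[pat]
        return (pat[0],) + tuple(subst(p, env) for p in pat[1:])

    def rewrite(term, lhs, rhs):
        """Apply lhs => rhs once, topmost first; reverse rule use is
           rewrite(term, rhs, lhs)."""
        env = match(lhs, term)
        if env is not None:
            return subst(rhs, env)
        if isinstance(term, tuple):
            for i, sub in enumerate(term[1:], 1):
                new = rewrite(sub, lhs, rhs)
                if new is not sub:
                    return term[:i] + (new,) + term[i + 1:]
        return term

    # The MUX rule: MUX(a,b,s) => OR(AND(a, NOT(s)), AND(b, s)).
    # Nullary tuples like ('A',) stand for concrete connection points.
    mux_lhs = ('MUX', 'a', 'b', 's')
    mux_rhs = ('OR', ('AND', 'a', ('NOT', 's')), ('AND', 'b', 's'))
    print(rewrite(('MUX', ('A',), ('B',), ('S',)), mux_lhs, mux_rhs))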
By applying these rules to the left-hand graph, in the order given, the system concludes that the MUX tied to ZERO on the left actually corresponds to the AND on the right. Moreover, because of their positions relative to the inputs and outputs, the z⁻¹ attached to y and the XOR attached to a and b can be seen to correspond to the ones on the left. With this motivation, the system transforms the high-level graph into the one on the left of Figure 2.

[Figure 2: Learning Example Precedent After Partial Understanding (Transformation).]

It is possible to construct a partial matching between the left- and right-hand graphs of that Figure which matches all nodes but the ones circled in dashes. It is the equivalence of those two portions which is truly mysterious to the system. The rest is derivable. Therefore, the system supposes that the equivalence is true and creates new rules. Since both subgraphs are more than single nodes, neither can be the LHS of a rule. Thus, the system creates a new functional block type and makes it the LHS of two rules: one whose RHS is the subgraph on the right, the other whose RHS is the subgraph on the left. The variable correspondences are determined by the partial match.

3.2 The Method

The current system implements a hill climbing approach to finding maximal partial matches. The algorithm is greedy. It searches for a partial derivation which, when applied to the high-level graph, results in a graph with as large as possible a partial match with the low-level graph. The partial match cannot, however, be just any partial match. It must be one which associates functionally equivalent nodes in the two graphs. The heuristic criteria for extending the partial match are given below.

The partial match is initialized to the input and output variable correspondences given with the precedent. Starting from the current high level graph, the typical step looks ahead through all possible combinations of allowable transformations to a fixed search depth. (This depth is a parameter to the system. Its value affects both the success and the speed of the algorithm.) It tries to find some combination which results in a valid extension to the partial match. If it ever finds one, all lookahead is terminated, the transformation is made, and the lookahead starts over anew. The process terminates when no progress is made on any lookahead path. Because the subgraph isomorphism problem is NP-complete, it is desirable to have grammar rules with graphs as small as possible. Therefore, when no progress is found, the smallest graph encountered in the last lookahead phase is returned.

The overall criterion for extending the partial match is that the matched subgraphs must always be functionally equivalent with respect to the overall function. This is implemented heuristically as follows. The partial match creeps inward from the input variables and the output variables. There are two principal ways the match can be extended: one is from the input "edge" of the match, the other is from the output "edge" of the match. Two connection points (one from each graph) which are driven by the same function of corresponding matched nodes may be matched; this moves in from the inputs. On the other hand, consider the case of two nodes (one in each graph) which are inputs to the same type of functional block. If the blocks drive matched nodes, then they can be matched, as long as there is no ambiguity in matching their inputs. (There is ambiguity if the function has more than one input of the same type as the two inputs, and at least two of those inputs remain unmatched.) This second way of extending the partial match moves in from the output edge.
This is an ambiguity which is not addressed by the criterion. However it is possible, as a pre-pass, to transform the graph to the equivalent one which has all of those functionally identical subgraphs merged into a single one. Since the graphs are assumed to have functional semantics, this move maintains functional equivalence. This pre-pass is computationally simple. To illustrate these criteria, consider the pair of graphs in Fig- ure 1. The first criterion would say that since connection points o and o’ are each XOR of corresponding previously matched nodes, they may be matched. The second criterion would say that since p and ,f3’ are each the unambiguous inputs of the 2-l boxes which drive y they may be matched. After the transforma- tions resulting in Figure 2, the match may be further extended using the second criterion to include the connection points 7 and Y’. There is another criterion for extending t,he partial match when a subgraph transforms to a single connection point. (That would happen when that subgraph was functionally equivalent to the identity function.) In that case, the previous two criteria don’t apply, as there are no new nodes to which to extend the match. In that case, the system looks for a situation in which the size of the inverse image of a connection point under the partial match decreases. For example, the system judges progress after transformation of “y =BUFFER(a)” to “a.” Once the partial match is extended as much as possible, it is straight-forward to construct the two rules by throwing away the matched subgraphs and creating a new functional block. 4 Rule Re-analysis The method described in Section 3 may produce rules which are not very general, because there might be more than one unknown rule used in constructing the precedent. Thus, the learned rules will have RHSs which are a combination of more than one unknown, more general rule. It is much less likely to see again a given complex group of rule instances than it is to see instances of the rules singly. It is possible, however, later to learn new rules which would allow Precedent Analysis to find the more general rules of which the first one was constructed. This leads to the idea of re-analyzing old learned rules in terms of newer rules. Suppose the rules are always presented to the learning sys- tem in the best possible order. Might it not be that Rule Re- analysis is a waste of time. 7 The answer to this is no. This is demonstrated, by counterexample, as follows. Suppose that, un- known as yet to the system, there are four general design rules involved in constructing three precedents. The four design rules are as follows. l h(x) ==-+ dd l ii(Z) - g2(4 l f3(T Y) - 93(X> Y) l f&☺ - 94(4 The three precedents are the following: 1, 93(fl(W2(Y)) f f3(91(4,92(Y)) 2. f2(f4@)) - g2(g4@)) 570 / SCIENCE Suppose that the Learning by Failing to Explain system is presented with these precedents in the order 1, 2, 3. On seeing 1, the system is not able to analyze it at all. Likewise, on seeing 2, the system can not analyze it at all. Thus far, the system has 4 rules: two rules implementing blocks representing each of the overall functions of the precedents (one rule for each graph of each precedent). On seeing precedent 3, the system may analyze it using rules derived from precedent 1. This results in one new rule: fdf,~) ==+ 94(x). Rule Re-analysis applies this new rule to prece- dent 2. This results in the rule, f2(5) =+- gz(2). 
The system may then re-analyze the precedent-l rules and arrive at two sim- pler rules. One has RHS gs(fl(r)), the other has RHS fs(gl(z)). Hence, the system is left with the following rules. l h(z, 4 - gdfl k), 4, l h(z, 4 ==+- f&l(z), ~1. On the other hand, if one picks any of the six possible orders of presentation and applies Precedent Analysis without Rule Re- analysis, the set of rules conjectured is less general than the four rules. For example, suppose they are given in the order 1, 2, 3. Without Rule Re-analysis, Precedent Analysis conjectures the following set of rules as an addition to those made from each entire precedent. (h is a block created by the system.) . h(Z> 4 - 93Ulk>, w) l h(z, w) - f3 (&): w) It thus failed to find the f2 rule. We decide which rule set is more general by asking which is capable of generating the other. Clearly, the set produced using Rule Re-analysis suffices to generate all the rules in the other set. However, there is no derivation of the f2 rule in terms of the rules produced without Rule Re-analysis. Thus, Rule Re- analysis resulted in more general rules. The reader may verify that all six orders of presentation result in less general rules if Rule Re-analysis is not used. The fact is that without re-analysis, the system requires more precedents to reach a given level of generality. Since precedents are in general much harder to come by than the time needed for Rule Re-Analysis, it is clear that re-analysis is worthwhile. 5 Role vs Behavior Since Learning by Failing to Explain is intended to be used as a component in a larger learning/design system, it is reasonable to assume that other knowledge sources might exist which enable other forms of reasoning about rules; say, knowledge about the semantics of the functional blocks. Is there a way the system could attempt to judge which contexts the conjectured rules are true in? It turns out that there is a case where a conjectured rule can be judged to be true in all contexts, using only properties of previously known functional blocks. To state this case most concisely, it is convenient to introduce terminology. A role is a mapping from inputs to sets of allowable outputs. That is, each input vector determines a set of allowable output vectors. A role is also called a behavior when it, uniquely determines each output for any given set of inputs. Thus, the squaring function on integers is a behavior, but the square root function on integers is only a role, because it maps negative integers to the empty set. Another example of a role which is not a behavior is when a component has “don’t care” entries in its truth table. To any subgraph, S, of a device, G, there corresponds a role which I shall refer to as the induced role of S in G. It is de- fined as follows. G represents a role (usually in this system, a behavior). Consider replacing S with any S’ that maintains the overall behavior of G. Define a new role, f, which maps an input vector, o, to the union over S’ of S’(V). It is this (unique) least restrictive role which I call the induced role of S in G. Note that no matter which S’ fills the hole, the induced role depends on the hole, not on S’. The criterion arises as follows. Looking back at the partial parse which generated the conjectured rule, one can ask about the induced roles of the unmatched subgraphs in their respective graphs. They will, of course, be the same, as the induced role depends only on the hole and not on what fills it. 
5 Role vs Behavior

Since Learning by Failing to Explain is intended to be used as a component in a larger learning/design system, it is reasonable to assume that other knowledge sources might exist which enable other forms of reasoning about rules; say, knowledge about the semantics of the functional blocks. Is there a way the system could attempt to judge in which contexts the conjectured rules are true? It turns out that there is a case where a conjectured rule can be judged to be true in all contexts, using only properties of previously known functional blocks. To state this case most concisely, it is convenient to introduce some terminology.

A role is a mapping from inputs to sets of allowable outputs. That is, each input vector determines a set of allowable output vectors. A role is also called a behavior when it uniquely determines each output for any given set of inputs. Thus, the squaring function on integers is a behavior, but the square root function on integers is only a role, because it maps negative integers to the empty set. Another example of a role which is not a behavior is when a component has "don't care" entries in its truth table.

To any subgraph, S, of a device, G, there corresponds a role which I shall refer to as the induced role of S in G. It is defined as follows. G represents a role (usually, in this system, a behavior). Consider replacing S with any S' that maintains the overall behavior of G. Define a new role, f, which maps an input vector, v, to the union over S' of S'(v). It is this (unique) least restrictive role which I call the induced role of S in G. Note that no matter which S' fills the hole, the induced role depends on the hole, not on S'.

The criterion arises as follows. Looking back at the partial parse which generated the conjectured rule, one can ask about the induced roles of the unmatched subgraphs in their respective graphs. They will, of course, be the same, as the induced role depends only on the hole and not on what fills it. The matched portions of the two graphs are identical, and they determine the induced roles of their complements.

Suppose this induced role is a behavior. Then there is exactly one behavioral specification which could possibly fill the hole. Thus, the two subgraphs, even though they are structurally different, must have the same behavior. On the other hand, if the induced role is not a behavior, the two subgraphs may or may not be behaviorally equivalent. Summing this up, the conjectured rules will be true in all contexts if the induced role in the precedent is a behavior. What is interesting about this criterion is that it can tell us a fact about a previously completely mysterious object (the unmatched subgraph) solely in terms of the properties of known objects (the constituents of the matched portion of the precedent). Note also that this is merely a sufficient condition for behavioral equivalence, not a necessary one.

6 Summary and Discussion

First, a summary of the main ideas:

- Four interesting design competences can be understood by having a Design Grammar as the backbone of a design system: top-down design, optimization, explanation, and Analogical Design.

- Precedent Analysis, wherein the learner uses current knowledge to partially understand the precedent before conjecturing a new rule, is a method for learning from precedents which does not require the ability to prove a rule before learning it, as in Explanation-Based Learning, yet still produces more plausible conjectures than empirical generalization methods. It uses current knowledge to guide the system to general rule conjectures.

- Rule Re-analysis, the technique of using new rules to try to analyze old ones, is inherently more powerful than simple acceptance of new rules, even if one supposes that the precedents are ordered optimally.

- The distinction between behavior and role sheds light on the conditions under which the conjectured rule can fail to be true in all contexts.

It would seem that there is an interesting relationship between this work and that of (Berwick, 85)[1]. Berwick's model of learning can be construed as a Learning by Failing to Explain method. His domain was natural language learning, where the grammars are, of course, string grammars. His mechanism attempted to parse an input sentence according to its current rules as much as possible; then, if the result satisfied certain criteria, the system proposed a new rule. His system did not attempt Rule Re-analysis. He argues that natural languages satisfy certain constraints which enable them to be learned in this manner. Thus, his system could be described as Precedent Analysis, together with some additional criteria regarding when to actually form a new rule.

Inasmuch as there is no reason to believe that the world of design obeys such a learnability constraint, it is not to be expected that Berwick's mechanism would work in learning Design Grammars from any kind of realistic examples. (Of course, any system could learn if it were handed the most general rules as precedents.) It is possible, however, that the use of Rule Re-analysis can substitute, at least in part, for the missing learnability constraint.

Some Limitations. Experimentation suggests the following limitations. Some of these are limitations of Learning by Failing to Explain in general, and some are limitations of the particular algorithms employed in the current system.

- Sometimes the maximal partial parse is not the most desirable partial parse to use.
In some cases a much more useful rule can be obtained from a non-maximal parse.

- In some cases, it is desirable to find more than one partial parse. This algorithm currently finds only one.

- The algorithm can be too greedy at times; this causes it to miss a better partial parse by, for example, expanding some node instead of applying a better rule.

- The system needs a better approach to search control in the analysis algorithm. In particular, some method of focusing attention on small sections of large graphs would reduce the size of the search tree generated. Currently, the system keeps track of all paths from the initial graph.

Future Work.

- No account of learning design knowledge is complete without discussion of both acquisition of analytic knowledge and search control knowledge. It would be interesting to investigate Learning by Failing to Explain as applied to these very different types of knowledge.

- The intuition behind the method seems to be applicable to domains other than design domains. How would the knowledge be represented, and what additional issues arise in applying Learning by Failing to Explain to other types of domains?

- How can the search done by the analysis algorithm be reduced?

- A Design Grammar is a restrictive representation. In particular, it needs some method for representing generalized roles and parameterized structures. How will a more powerful representation affect the parsing performance?

- What would it take to be able to reason about induced roles, so that the system could find the contexts in which a given inferred rule is true?

Acknowledgements

Thanks to Patrick Winston, for providing guidance for the original thesis; and thanks to Rick Lathrop and the Reviewers, whose comments greatly improved the final version of this paper.

References

[1] Robert C. Berwick. The Acquisition of Syntactic Knowledge. MIT Press, Cambridge, Mass., 1985.
[2] Gerald DeJong and Raymond Mooney. Explanation-Based Learning: An Alternative View. Technical Report UILU-ENG-86-2208, Coordinated Science Lab, University of Illinois, March 1986.
[3] Thomas Ellman. Generalizing logic circuit designs by analyzing proofs of correctness. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, IJCAI-85, 1985.
[4] Richard E. Fikes, Peter E. Hart, and Nils J. Nilsson. Learning and executing generalized robot plans. Artificial Intelligence, 3, 1972.
[5] Robert Joseph Hall. Learning by Failing to Explain. Technical Report, M.I.T. Artificial Intelligence Laboratory, 1986. Forthcoming.
[6] Robert Joseph Hall. On Using Analogy to Learn Design Grammar Rules. Master's thesis, Massachusetts Institute of Technology, 1985.
[7] Sridhar Mahadevan. Verification-based learning: a generalization strategy for inferring problem-reduction methods. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, IJCAI-85, 1985.
[8] Tom M. Mitchell, Richard M. Keller, and Smadar T. Kedar-Cabelli. Explanation-Based Generalization: A Unifying View. Technical Report ML-TR-2, Rutgers University, 1985.
[9] Raymond Mooney and Gerald DeJong. Learning schemata for natural language processing. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 1985.
[10] Reid G. Smith, Howard Winston, Tom M. Mitchell, and Bruce G. Buchanan. Representation and use of explicit justifications for knowledge base refinement. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 1985.
[11] Louis I.
Steinberg and Tom M. Mitchell. A knowledge based approach to VLSI CAD: the redesign system. In Proceedings of the 21st Design Automation Conference, IEEE, 1984.
[12] Patrick H. Winston, Thomas O. Binford, Boris Katz, and Michael Lowry. Learning Physical Descriptions from Functional Definitions, Examples, and Precedents. Technical Report AIM-679, Massachusetts Institute of Technology, 1983.
1986
75
522
Mapping Explanation-Based Generalization onto Soar¹

Paul S. Rosenbloom
Knowledge Systems Laboratory, Department of Computer Science
Stanford University, 701 Welch Road (Bldg. C), Palo Alto, CA 94304

John E. Laird
Intelligent Systems Laboratory, Xerox Palo Alto Research Center
3333 Coyote Hill Rd., Palo Alto, CA 94304

ABSTRACT

Explanation-based generalization (EBG) is a powerful approach to concept formation in which a justifiable concept definition is acquired from a single training example and an underlying theory of how the example is an instance of the concept. Soar is an attempt to build a general cognitive architecture combining general learning, problem solving, and memory capabilities. It includes an independently developed learning mechanism, called chunking, that is similar to but not the same as explanation-based generalization. In this article we clarify the relationship between the explanation-based generalization framework and the Soar/chunking combination by showing how the EBG framework maps onto Soar, how several EBG concept-formation tasks are implemented in Soar, and how the Soar approach suggests answers to some of the outstanding issues in explanation-based generalization.

I INTRODUCTION

Explanation-based generalization (EBG) is an approach to concept acquisition in which a justifiable concept definition is acquired from a single training example plus an underlying theory of how the example is an instance of the concept [1, 15, 26]. Because of its power, EBG is currently one of the most actively investigated topics in machine learning [3, 5, 6, 12, 13, 14, 16, 17, 18, 23, 24, 25]. Recently, a unifying framework for explanation-based generalization has been developed under which many of the earlier formulations can be subsumed [15].

Soar is an attempt to build a general cognitive architecture combining general learning, problem solving, and memory capabilities [9]. Numerous results have been generated with Soar to date in the areas of learning [10, 11], problem solving [7, 8], and expert systems [21]. Of particular importance for this article is that Soar includes an independently developed learning mechanism, called chunking, that is similar to but not the same as explanation-based generalization.

The goal of this article is to elucidate the relationship between the general explanation-based generalization framework, as described in [15], and the Soar approach to learning, by mapping explanation-based generalization onto Soar.² The resulting mapping increases our understanding of both approaches and allows results and conclusions to be transferred between them. In Sections II-IV, EBG and Soar are introduced and the initial mapping between them is specified. In Sections V and VI, the mapping is refined and detailed examples (taken from [15]) of the acquisition of a simple concept and of a search-control concept are presented. In Section VII, differences between EBG and learning in Soar are discussed. In Section VIII, proposed solutions to some of the key issues in explanation-based generalization (as set out in [15]) are presented, based on the mapping of EBG onto Soar. In Section IX, some concluding remarks are presented.

II EXPLANATION-BASED GENERALIZATION

As described in [15], explanation-based generalization is based on four types of knowledge: the goal concept, the training example, the operationality constraint, and the domain theory.

The goal concept is a rule defining the concept to be learned. Consider the Safe-to-Stack example from [15]. The aim of the learning system is to learn the concept of when it is safe to stack one object on top of another. The goal concept is as follows³:

    ¬Fragile(y) ∨ Lighter(x,y) ↔ Safe-to-Stack(x,y)        (1)

The training example is an instance of the concept to be learned. It consists of the description of a situation in which the goal concept is known to be true. The following Safe-to-Stack training example [15] contains both relevant and irrelevant information about the situation.

    On(o1,o2)  Isa(o1,box)  Color(o1,Red)  Volume(o1,1)  Density(o1,.1)
    Isa(o2,endtable)  Color(o2,blue)                        (2)

The operationality criterion characterizes the generalization language; that is, the language in which the concept definition is to be expressed. Specifically, it restricts the acceptable concept descriptions to ones that are easily evaluated on new positive and negative examples of the concept. One simple operationality constraint is that the concept description must be expressed in terms of the predicates that are used to define the training example. An alternative, and the one used in [15], is to allow predicates that are used to define the training example plus other easily computable predicates, such as Less. If the goal concept meets the operationality criterion then the problem is already solved, so the cases of interest all involve a non-operational goal concept. One way to characterize EBG is as the process of operationalizing the goal concept: the goal concept is reexpressed in terms that are easily computable on the instances.

The domain theory consists of knowledge that can be used in proving that the training example is an instance of the goal concept.

1 This research was sponsored by the Defense Advanced Research Projects Agency (DOD) under contracts N00039-83-C-0136 and F33615-81-K-1539, and by the Sloan Foundation. Computer facilities were partially provided by NIH grant RR-00785 to Sumex-Aim. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the US Government, the Sloan Foundation, or the National Institutes of Health.
2 Another even more recent attempt at providing a uniform framework for explanation-based generalization can be found in [2]. It should be possible to augment the mapping described here to include this alternative view, but we do not do that here.
3 Though this goal concept includes a pair of disjunctive clauses, only one of them actually gets used in the example.
In Sections V and VI, the mapping is refined and detailed examples (taken from [15]) of the acquisition of a simple concept and of a search-control concept are presented. In Section VII, differences between EBG and learning in Soar are discussed. In Section VIII, proposed solutions to some of the key issues in explanation-based generalization (as set out in [15]) are presented, based on the mapping of EBG onto Soar. In Section IX, some concluding remarks are presented.

II EXPLANATION-BASED GENERALIZATION

As described in [15], explanation-based generalization is based on four types of knowledge: the goal concept, the training example, the operationality constraint, and the domain theory.

The goal concept is a rule defining the concept to be learned. Consider the Safe-to-Stack example from [15]. The aim of the learning system is to learn the concept of when it is safe to stack one object on top of another. The goal concept is as follows:³

¬Fragile(y) ∨ Lighter(x,y) ↔ Safe-to-Stack(x,y)   (1)

The training example is an instance of the concept to be learned. It consists of the description of a situation in which the goal concept is known to be true. The following Safe-to-Stack training example [15] contains both relevant and irrelevant information about the situation.

On(o1,o2)  Isa(o1,box)  Color(o1,red)  Volume(o1,1)  Density(o1,.1)
Isa(o2,endtable)  Color(o2,blue)   (2)

The operationality criterion characterizes the generalization language; that is, the language in which the concept definition is to be expressed. Specifically, it restricts the acceptable concept descriptions to ones that are easily evaluated on new positive and negative examples of the concept. One simple operationality constraint is that the concept description must be expressed in terms of the predicates that are used to define the training example. An alternative, and the one used in [15], is to allow predicates that are used to define the training example plus other easily computable predicates, such as Less. If the goal concept meets the operationality criterion then the problem is already solved, so the cases of interest all involve a non-operational goal concept. One way to characterize EBG is as the process of operationalizing the goal concept. The goal concept is reexpressed in terms that are easily computable on the instances.

The domain theory consists of knowledge that can be used in proving that the training example is an instance of the goal concept. For the Safe-to-Stack example, the domain theory consists of rules and facts that allow the computation and comparison of object weights.

Volume(p1,v1) ∧ Density(p1,d1) → Weight(p1,v1*d1)   (3)
Weight(p1,w1) ∧ Weight(p2,w2) ∧ Less(w1,w2) → Lighter(p1,p2)   (4)
Isa(p1,endtable) → Weight(p1,5)   (5)
Less(.1,5)   (6)
...

3 Though this goal concept includes a pair of disjunctive clauses, only one of them actually gets used in the example.
Given the four types of knowledge just outlined, the EBG algorithm consists of three steps: (1) use the domain theory to prove that the training example is an instance of the goal concept; (2) create an explanation structure from the proof - the tree structure of rules that were used in the proof - filtering out rules and facts that turned out to be irrelevant; and (3) regress the goal concept through the explanation structure - stopping when operational predicates are reached - to yield the general conditions under which the explanation structure is valid. The desired concept definition consists of the conditions generated by the regression process.

Volume(x,v) ∧ Density(x,d) ∧ Isa(y,endtable) ∧ Less(v*d,5) → Safe-to-Stack(x,y)   (7)

III SOAR

The most complete description of Soar can be found in [9], and summaries can be found in most of the other Soar articles. Without going into great detail here, the mapping of explanation-based generalization onto Soar depends upon five aspects of the Soar architecture.

Problem spaces. Problem spaces are used for all goal-based behavior. This defines the deliberate acts of the architecture: selection of problem spaces, states, and operators.

Subgoals. Subgoals are generated automatically whenever an impasse is reached in problem solving. These impasses, and thus their subgoals, vary from problems of selection (of problem spaces, states, and operators) to problems of operator instantiation and application. When subgoals occur within subgoals, a goal hierarchy results. An object created in a subgoal is a result of the subgoal if it is accessible from any of the supergoals, where an object is accessible from a supergoal if there is a link from some other object in the supergoal to it (this is all finally rooted in the goals).

Production systems. A production system is used as the representation for all long-term knowledge, including factual, procedural, and control information. The condition language is limited to the use of constants and variables, the testing of equality and inequality of structured patterns, and the conjunction of these tests. Disjunction is accomplished via multiple productions. The action language is limited to the creation of new elements in the working memory - functions are not allowed. The working memory is the locus of the goal hierarchy and of temporary declarative information that can be created and examined by productions.

Decision cycle. Each deliberate act of the architecture is accomplished by a decision cycle consisting of a monotonic elaboration phase, in which the long-term production memory is accessed in parallel until quiescence, followed by a decision procedure which makes a change in the problem-solving situation based on the information provided by the elaboration phase. All of the control in Soar occurs at this problem-solving level, not at the level of (production) memory access.

Chunking. Productions are automatically acquired that summarize the processing in a subgoal. The actions of the new productions are based on the results of the subgoal. The conditions are based on those aspects of the initial situation that were relevant to the determination of those results.
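Before turning to the mapping, it may help to see the EBG side in executable form. The following is a minimal sketch of the three-step algorithm applied to the Safe-to-Stack example; the encoding (tuples for atoms, '?'-prefixed variables, the prove/regress helpers) is our own illustration, not anything from the paper or from Soar, and it covers only what this one example needs (for instance, only the Lighter disjunct of rule 1, per footnote 3).

```python
# Minimal EBG sketch for the Safe-to-Stack example. The encoding is an
# assumption made for illustration: atoms are tuples, variables are
# strings starting with '?', and a rule is (antecedents, consequent).
import itertools

_fresh = itertools.count()

def subst(x, s):
    """Apply substitution s to term x, following binding chains."""
    while isinstance(x, str) and x.startswith('?') and x in s:
        x = s[x]
    return tuple(subst(xi, s) for xi in x) if isinstance(x, tuple) else x

def unify(x, y, s):
    """Extend substitution s so that x and y match, or return None."""
    x, y = subst(x, s), subst(y, s)
    if x == y:
        return s
    if isinstance(x, str) and x.startswith('?'):
        return {**s, x: y}
    if isinstance(y, str) and y.startswith('?'):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def rename(rule):
    """Give a rule fresh variables for this particular use."""
    n, table = next(_fresh), {}
    def r(x):
        if isinstance(x, tuple):
            return tuple(r(xi) for xi in x)
        if isinstance(x, str) and x.startswith('?'):
            return table.setdefault(x, f'{x}_{n}')
        return x
    ants, cons = rule
    return [r(a) for a in ants], r(cons)

def value(t, s):
    """Numerically evaluate a term that may be a product (toy: assumes
    the term is fully bound by the time it is evaluated)."""
    t = subst(t, s)
    if isinstance(t, tuple) and t[0] == '*':
        return value(t[1], s) * value(t[2], s)
    return t

FACTS = [('On', 'o1', 'o2'), ('Isa', 'o1', 'box'), ('Color', 'o1', 'red'),
         ('Volume', 'o1', 1.0), ('Density', 'o1', 0.1),
         ('Isa', 'o2', 'endtable'), ('Color', 'o2', 'blue')]        # (2)
RULES = [
    ([('Volume', '?p', '?v'), ('Density', '?p', '?d')],
     ('Weight', '?p', ('*', '?v', '?d'))),                          # (3)
    ([('Weight', '?p1', '?w1'), ('Weight', '?p2', '?w2'),
      ('Less', '?w1', '?w2')], ('Lighter', '?p1', '?p2')),          # (4)
    ([('Isa', '?p', 'endtable')], ('Weight', '?p', 5.0)),           # (5)
    ([('Lighter', '?x', '?y')], ('Safe-to-Stack', '?x', '?y')),     # used half of (1)
]
# Operational predicates = those of the training example, plus Less (a
# computable test, per [15]); they appear below as the proof's leaves.

def prove(goal, s):
    """Step 1: backward-chain; yield (substitution, explanation tree)."""
    if goal[0] == 'Less':                    # operational: evaluate it
        if value(goal[1], s) < value(goal[2], s):
            yield s, ('leaf',)
        return
    for fact in FACTS:
        s2 = unify(goal, fact, s)
        if s2 is not None:
            yield s2, ('leaf',)
    for rule in RULES:
        ants, cons = rename(rule)
        s2 = unify(goal, cons, s)
        if s2 is None:
            continue
        partial = [(s2, [])]                 # prove antecedents in order
        for ant in ants:
            partial = [(sj, es + [e]) for si, es in partial
                       for sj, e in prove(ant, si)]
        for sf, es in partial:
            yield sf, ('rule', rule, es)     # step 2: tree of rules used

def regress(goal, expl, s):
    """Step 3: regress the goal concept through the explanation
    structure, collecting the operational (leaf) conditions."""
    if expl[0] == 'leaf':
        return [goal], s
    _, rule, subexpls = expl
    ants, cons = rename(rule)
    s, conds = unify(goal, cons, s), []      # always succeeds on a replay
    for ant, sub in zip(ants, subexpls):
        cs, s = regress(ant, sub, s)
        conds += cs
    return conds, s

s, expl = next(prove(('Safe-to-Stack', 'o1', 'o2'), {}))
conds, s2 = regress(('Safe-to-Stack', '?x', '?y'), expl, {})
print([subst(c, s2) for c in conds])
```

Running the sketch prints conditions of the same shape as rule (7): a Volume and a Density of one variable object, an Isa(·,endtable) test on the other, and a Less test on the product of the volume and density variables.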
IV THE INITIAL MAPPING

Given the descriptions of explanation-based generalization and Soar, it is not difficult to specify an initial mapping of EBG onto Soar (Figure 1). The goal concept is simply a goal to be achieved. The training example is the situation that exists when a goal is generated. The operationality criterion is that the concept must be a production condition pattern expressed in terms of the predicates existing prior to the creation of the goal (a disjunctive concept can be expressed by a set of productions). The domain theory corresponds to a problem space in which the goal can be attempted, with the predicates defined by the theory corresponding to operators in the problem space.

goal concept ⇔ goal
training example ⇔ pre-goal situation
operationality constraint ⇔ pre-goal predicates
domain theory ⇔ problem space

Figure 1: The mapping of the EBG knowledge onto Soar.

The Safe-to-Stack problem can be implemented in Soar by defining an operator, let's call it Safety?(x,y), which examines a state containing two objects, and augments it with information about whether it is safe to stack the first object on the second object. The concept will be operational when a set of productions exist that directly implement the Safety? operator. When the operator is evoked and no such productions exist - that is, when the operational definition of the concept has not yet been learned - an operator-implementation subgoal is generated because Soar is unable to apply the Safety? operator. In this subgoal the domain-theory problem space can be used to determine whether it is safe to stack the objects. On the conclusion of this problem solving, chunks are learned that operationalize the concept for some class of examples that includes the training example. Future applications of the Safety? operator to similar examples can be processed directly by the newly acquired productions without resorting to the domain theory.

V A DETAILED EXAMPLE

In this section we take a detailed look at the implementation of the Safe-to-Stack problem in Soar, beginning with the training example, the domain theory, and the goal concept, followed by a description of the concept acquisition process for this task.

The standard way to represent objects in Soar is to create a temporary symbol, called an identifier, for each object. All of the information about the object is associated with its identifier, including its name. This representational scheme allows object identity to be tested - by comparing identifiers - without testing object names, a capability important to chunking. In this representation, the Safe-to-Stack training example involves a slightly more elaborated set of predicates than is used in (2). The identifiers are shown as Greek letters. In (2), the symbols o1 and o2 acted like identifiers. In the example below they are replaced by α and β respectively.

On(α,β)  Name(α,box)  Color(α,γ)  Volume(α,δ)  Density(α,ε)
Name(γ,red)  Name(δ,1)  Name(ε,.1)
Name(β,endtable)  Color(β,ζ)  Name(ζ,blue)   (8)

The domain theory is implemented in Soar by a problem space, called Safe, containing the following four operators. Each of these operators is implemented by one or more productions. In production number 9, the use of a variable (w) in the action of the production, without it appearing in a condition, denotes that a new identifier is to be created and bound to that variable.

Weight?(p)
Name(p,endtable) → Weight(p,w) ∧ Name(w,5)   (9)
Volume(p,v) ∧ Density(p,d) ∧ Product(v,d,w) → Weight(p,w)   (10)

Lighter?(p1,p2)
Weight(p1,w1) ∧ Weight(p2,w2) ∧ Less(w1,w2) → Lighter(p1,p2)   (11)

Less?(n1,n2)
Name(n1,.1) ∧ Name(n2,5) → Less(n1,n2)   (12)
...

Product?(n1,n2)
Name(n1,1) → Product(n1,n2,n2)   (13)
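The identifier scheme in (8) amounts to a small working memory of object-attribute-value elements, and it is easy to picture in code. The sketch below is our own rendering of that idea, not Soar's actual data structures; the names `WME` and `new_id` and the lower-case attribute names are invented for the illustration.

```python
# Example (8) as a toy working memory of (identifier, attribute, value).
import itertools
from typing import NamedTuple

class WME(NamedTuple):
    """A working-memory element: identifier, attribute, value."""
    id: str
    attr: str
    value: str

_ids = (f'i{n}' for n in itertools.count(1))

def new_id() -> str:
    """Gensym a fresh identifier (the Greek letters of example 8)."""
    return next(_ids)

# Objects are bare identifiers; even the numbers 1 and .1 are objects
# whose print-names hang off them through a name attribute.
alpha, beta, gamma, delta, epsilon, zeta = (new_id() for _ in range(6))
wm = {
    WME(alpha, 'on', beta),
    WME(alpha, 'name', 'box'),      WME(beta, 'name', 'endtable'),
    WME(alpha, 'color', gamma),     WME(gamma, 'name', 'red'),
    WME(alpha, 'volume', delta),    WME(delta, 'name', '1'),
    WME(alpha, 'density', epsilon), WME(epsilon, 'name', '.1'),
    WME(beta, 'color', zeta),       WME(zeta, 'name', 'blue'),
}

# Identity is tested by comparing identifiers, never print-names.
density = next(w.value for w in wm if w.id == alpha and w.attr == 'density')
assert density == epsilon        # the object itself, not the string '.1'
print(next(w.value for w in wm if w.id == density and w.attr == 'name'))
```

Nothing about the print-names is consulted when identifiers are compared, which is what lets the variablization step described later replace each identifier wholesale.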
There are two notable differences between this implementation and the domain theory specified in EBG. The first difference is that the Less predicate is defined by an operator rather than simply as a set of facts. Because all long-term knowledge in Soar is encoded in productions, this is the appropriate way of storing this knowledge. The implementation of the operator consists of one production for each fact stored (it could also have been implemented as a general algorithm in an operator-implementation subgoal). The second difference is that the multiplication of the volume by the density is done by an operator rather than as an action in the right-hand side of the production. Because Soar productions do not have the capability to execute a multiplication operation as a primitive action, Product needs to be handled in a way analogous to Less. For this case we have provided one general production which knows how to multiply by one. To implement the entire Product predicate would require either additional implementation productions or a general multiplication problem space in which a multiplication algorithm could be performed.

The Safe problem space also contains one more operator that defines the goal concept. For the purposes of problem solving with this information, the rule defining the goal concept need not be treated differently from the rules in the domain theory. It is merely the last operator to be executed in the derivation.

Safe-to-Stack?(p1,p2)
¬Fragile(p2) → Safe-to-Stack(p1,p2)   (14)
Lighter(p1,p2) → Safe-to-Stack(p1,p2)   (15)

In addition to the operators, there are other productions in the Safe problem space that create the initial state and propose and reject operators.⁴ In what follows we will generally ignore these productions because they add very little to the resulting concept definition - either by not entering into the explanation structure (the reject productions), having conditions that duplicate conditions in the operator productions (the propose productions), or adding general context-setting conditions (the initial-state production). In other tasks, these productions can have a larger effect, but they did not here.

Figure 2 shows the mapping of the EBG process onto Soar.

proof ⇔ problem solving
explanation structure ⇔ backtraced production traces
goal regression ⇔ backtracing and variablization

Figure 2: The mapping of the EBG process onto Soar.

The EBG proof process corresponds in Soar to the process of problem solving in the domain-theory problem space, starting from an initial state which contains the training example, and terminating when a state matching the goal concept is achieved. We can pick up the problem solving at the point where there is a state selected that contains the training example and a Safety? operator selected for that state. Because the operator cannot be directly applied to the state, an operator-implementation subgoal is immediately generated, and the Safe problem space is selected for this new goal.

4 There are no search-control productions in this implementation of the Safe-to-Stack problem (except for ones that reject operators that are already accomplished). Instead, for convenience we guided Soar through the task by hand. The use of search-control productions would not change the concept learned because Soar does not include them in the explanation structure [9].
If there is enough search-control knowledge available to uniquely determine the sequence of operators to be selected and applied (or outside guidance is provided), so that operator-selection subgoals are not needed, then the following sequence of operator instances, or one functionally equivalent to it, will be selected and applied.

Weight?(β)
Name(β,endtable) → Weight(β,η) ∧ Name(η,5)   (16)

Product?(δ,ε)
Name(δ,1) → Product(δ,ε,ε)   (17)

Weight?(α)
Volume(α,δ) ∧ Density(α,ε) ∧ Product(δ,ε,ε) → Weight(α,ε)   (18)

Less?(ε,η)
Name(ε,.1) ∧ Name(η,5) → Less(ε,η)   (19)

Lighter?(α,β)
Weight(α,ε) ∧ Weight(β,η) ∧ Less(ε,η) → Lighter(α,β)   (20)

Safe-to-Stack?(α,β)
Lighter(α,β) → Safe-to-Stack(α,β)   (21)

As they apply, each operator adds information to the state. The final operator adds the information that the Safety? operator in the problem space above was trying to generate - Safe-to-Stack(α,β). A test production detects this and causes the Safety? operator to be terminated and the subgoal to be flushed.

After the subgoal is terminated, the process of chunk acquisition proceeds with the creation of the explanation structure. In Soar, this is accomplished by the architecture performing a backtrace over the production traces generated during the subgoal. Each production trace consists of the working-memory elements matched by the conditions of one of the productions that fired plus the working-memory elements generated by the production's actions. The backtracing process begins with the results of the subgoal - Safe-to-Stack(α,β) - and traces backwards through the production traces, yielding the set of production firings that were responsible for the generation of the results. The explanation structure consists of the set of production traces isolated by this backtracing process.⁵ In the Safe-to-Stack example, there is only one result, and its explanation structure consists of the production traces listed above (productions 16-21). Other productions have fired - to propose and reject operators, generate the initial state in the subgoal, and so on - but those that did enter the explanation structure only added context-setting conditions, linking the relevant information to the goal hierarchy and the current state. They did not add any conditions that test aspects of the training example.

The backtracing process goes a step beyond determining the explanation structure. It also determines which working-memory elements should form the basis of the conditions of the chunk by isolating those that are (1) part of the condition side of one of the production traces in the explanation structure and (2) existed prior to the generation of the subgoal. The actions of the chunk are based directly on the goal's results.

5 It is worth noting that earlier versions of Soar computed the conditions of chunks by determining which elements in working memory were examined by any of the productions that executed in the subgoal [11]. When the problem solving is constrained to look only at relevant information, as it was in the early work on human practice [20], this worked fine. However, in a system that is searching, often down what turn out to be dead ends, this assumption can be violated, leading to chunks that are overspecific. Backtracing was added to Soar to avoid these problems with dead ends [9, 11]. This modification was based on our understanding of the EBG approach, but it was not done directly to model EBG.
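As a rough illustration of this condition-finding step, the sketch below backtraces from a result over a flat list of production traces and then variablizes identifiers. It is our reconstruction under simplifying assumptions (no goal hierarchy, ASCII letters standing in for the Greek identifiers, and the names `Trace`, `backtrace`, and `variablize` invented here), not Soar's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """One production firing: the WMEs it matched and the WMEs it made."""
    matched: list
    created: list

def backtrace(traces, results, pre_goal):
    """Collect chunk conditions by tracing back from the subgoal's
    results: keep matched elements that existed before the subgoal."""
    needed, conds, seen = list(results), [], set()
    while needed:
        wme = needed.pop()
        if wme in seen:
            continue
        seen.add(wme)
        if wme in pre_goal:              # pre-goal element -> condition
            conds.append(wme)
            continue
        for t in traces:                 # otherwise chase its producer
            if wme in t.created:
                needed.extend(t.matched)
    return conds

def variablize(wmes, identifiers):
    """Same identifier -> same variable; different -> different."""
    table = {}
    def v(x):
        return table.setdefault(x, f'<v{len(table)}>') if x in identifiers else x
    return [tuple(v(x) for x in w) for w in wmes]

# Production traces 16-21 in miniature ('a', 'b', 'd', 'e', 'h' stand
# for the identifiers alpha, beta, delta, epsilon, eta of the example).
pre_goal = {('volume', 'a', 'd'), ('density', 'a', 'e'), ('name', 'd', '1'),
            ('name', 'e', '.1'), ('name', 'b', 'endtable')}
traces = [
    Trace([('name', 'b', 'endtable')],
          [('weight', 'b', 'h'), ('name', 'h', '5')]),                    # 16
    Trace([('name', 'd', '1')], [('product', 'd', 'e', 'e')]),            # 17
    Trace([('volume', 'a', 'd'), ('density', 'a', 'e'),
           ('product', 'd', 'e', 'e')], [('weight', 'a', 'e')]),          # 18
    Trace([('name', 'e', '.1'), ('name', 'h', '5')],
          [('less', 'e', 'h')]),                                          # 19
    Trace([('weight', 'a', 'e'), ('weight', 'b', 'h'), ('less', 'e', 'h')],
          [('lighter', 'a', 'b')]),                                       # 20
    Trace([('lighter', 'a', 'b')], [('safe-to-stack', 'a', 'b')]),        # 21
]
conds = backtrace(traces, [('safe-to-stack', 'a', 'b')], pre_goal)
print(variablize(conds, identifiers={'a', 'b', 'd', 'e', 'h'}))
```

The five conditions that come back are exactly those of chunk (22) below, and variablizing them yields the shape of rule (23).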
The following instantiated production is generated by this process.

Safety?(α,β)
Volume(α,δ) ∧ Name(δ,1) ∧ Density(α,ε) ∧ Name(ε,.1) ∧ Name(β,endtable) → Safe-to-Stack(α,β)   (22)

Soar's condition-finding algorithm is equivalent to regressing the instantiated goal - actually, the results of the goal - through the (instantiated) production traces.⁶ It differs from the EBG goal regression process in [15] in making use of the instantiated goal and rules rather than the more general parameterized versions. The Soar approach to goal regression is simpler, and focuses on the information in working memory rather than the possibly complex patterns specified by the rules, but it does not explain how variables appear in the chunk. Variables are added during a later step by replacing all of the object identifiers with variables. Identifiers that are the same are replaced by the same variable and identifiers that are different are replaced by different variables that are forced to match distinct objects. The following variablized rule was generated by Soar for this task.

Safety?(x,y)
Volume(x,v) ∧ Name(v,1) ∧ Density(x,d) ∧ Name(d,.1) ∧ Name(y,endtable) → Safe-to-Stack(x,y)   (23)

This production is not as general as the rule learned by EBG (rule 7 in Section II). Instead of testing for the specific volume and density of the first object, the EBG rule tests that their product is less than 5. This happens because the EBG implementation assumed that Less and Product were operational; that is, that they were predicates at which regression should stop. In the Soar example they were not operational. Both of these predicates were implemented by operators, so the regression process continued back through them to determine what led them to have their particular values.

In EBG, operationalized predicates showed up in one of two ways: either the set of true instances of the predicate was included along with the domain theory and the training example (Less), or the predicate was a primitive operation in the rule language (Product). The key point about both of these approaches is that the computation of the value of the predicate will be cheap, not requiring the use of inference based on rules in the domain theory. In Soar, any preexisting predicate is cheap, but functions are not allowed in the rule language. Therefore, the way for Soar to generate the more general rule is to make sure that all of the operational predicates preexist in working memory. Specifically, objects representing the numbers and the predicates Less and Product need to be available in working memory before the subgoal is generated, and the endtable-weight rule must be changed so that it makes use of a preexisting object representing the number 5 rather than generating a new one. Under these circumstances, backtracing stops at these items, and the following production is generated by Soar.

Safety?(x,y)
Volume(x,v) ∧ Density(x,d) ∧ Name(y,endtable) ∧ Product(v,d,d) ∧ Less(d,w) ∧ Name(w,5) → Safe-to-Stack(x,y)   (24)

This production does overcome the problem, but it is still more specific than rule 7 because the density and the weight of the box were the same in this example (they were represented by the same identifier) - the variablization strategy avoids creating overly general rules but can err in creating rules that are too specific. Thus the chunk only applies in future situations in which this is true.

6 See [19] for a good, brief description of goal regression.
If an example were run in which the density and the weight were different, then a rule would be learned to deal with future situations in which they were different.

VI SEARCH-CONTROL CONCEPTS

One of the key aspects of Soar is that different types of subgoals occur in different situations. The implication of this for EBG is that the type of subgoal determines the type of concept to be acquired. In the previous section we have described concept formation based on one type of subgoal: operator implementation. In this section we look at one other type of subgoal, operator selection, and show how it can lead to the acquisition of search-control concepts - descriptions of the situations in which particular operators have particular levels of utility. The process of extending the mapping between EBG and Soar to this case reveals the underlying relationships among the various types of knowledge and processes used in the acquisition of search-control concepts.

As described in [15], in addition to the four types of knowledge normally required for EBG, its use for the acquisition of search-control concepts requires two additional forms of knowledge: (1) the solution property, which is a task-level goal; and (2) the task operators, which are the operators to be used in achieving the solution property. For example, in the symbolic integration problem posed in [15], the solution property is to have an equation without an integral in it, and the task operators specify transformation rules for mathematical equations.

The domain theory includes task-dependent rules that determine when a state is solved (the solution property is true) and task-independent rules that determine whether a state is solvable (there is a sequence of operators that will lead from the state to a solved state) and specify how to regress the solution property back through the task operators. The goal concept is to determine when a particular task operator - Op3, which moves a numeric coefficient outside of the integral in forms like ∫7x²dx - is useful in achieving the solution property. That is, we are looking for a description of the set of unsolved states for which the application of the operator leads to a solvable state.

The EBG approach to solving this generalization problem involves two phases, both of which are controlled by the domain theory. In the first phase, a search is performed with the task operators to determine whether the state resulting from the application of Op3 is solvable. In the second phase, the solution property is regressed through the task operators in the solution path - a deliberate regression controlled by the domain theory, not the regression that automatically happens with EBG. These steps form the basis for the explanation structure. Using a training example of ∫7x²dx, the following concept description was learned [15].

Matches(y, f(∫r·xˢdx)) ∧ Isa(r,real) ∧ Isa(s,real) ∧ Isa(f,function) ∧ Not-Equal(s,−1) → Useful-Op3(y)   (25)

In the Soar implementation of this task, the solution property and task operators correspond to a goal and problem space respectively. The task-level search to establish state solvability corresponds to a search in this problem space. The regression of the solution property through the task operators simply corresponds to chunking. In fact, all of the knowledge and processing required for this task map cleanly into a hierarchy of subgoals in Soar, as shown in Figure 3. The details of this mapping should become clear as we go along.
solution property ⇔ Goal 1: problem-solving task
goal concept ⇔ Goal 2: operator selection
domain theory ⇔ Goal 3: operator evaluation
    solved ⇔ goal test
    solvable ⇔ problem solving
    regression ⇔ chunking
task operators ⇔ Problem space for goals 1 & 3

Figure 3: Extending the mapping for search-control concepts.

The problem solving that generates these goals is shown in Figure 4. At the very top of the figure is the task goal of having an integral-free version of the formula. This first goal corresponds to the solution property in the EBG formalism (Figure 3). A problem space containing the task operators is used for this goal and the initial state represents the formula to be integrated. Because both operator Op3 and another operator are acceptable for this state, and the knowledge required to choose between them is not directly available in productions, an operator-selection subgoal is created. This second subgoal corresponds to the EBG goal concept (Figure 3) - the desire to determine the knowledge necessary to allow the selection of the most appropriate operator. In this subgoal, search-control knowledge about the utility of the competing operators (for the selected state) is generated until an operator can be selected. For such goals, Soar normally employs the Selection problem space. The Selection problem space contains an Evaluate operator, which can be applied to the competing task operators to determine their utility. If, as is often the case, the information about how to evaluate an operator is not directly available, an evaluation subgoal (to implement the Evaluate operator) is created.

The task in this third-level subgoal is to determine the utility of the operator. To do this, Soar selects the original task problem space and state, plus the operator to be evaluated. It then applies the operator to the state, yielding a new state. If the new state can be evaluated, then the subgoal terminates; otherwise the process continues, generating further levels of selection and evaluation goals, until a task desired state - that is, a state matching the solution property - is reached, or the search fails. For this problem, the search continues until Op9 - which integrates equations of the form ∫xˢdx - is applied.

[Figure 4: Problem solving in the symbolic integration task. The figure shows the goal hierarchy descending from (1) the task ∫7x²dx through (2) operator selection, (3) operator evaluation, (4) operator selection, and (5) operator evaluation, with Op3 and then Op9 applied on the path to a successful, integral-free state.]

At this point Op3 is given an evaluation of success, resulting in search-control knowledge being generated that says that no other operator will be better than it (similar processing also occurs for Op9). Because this new knowledge is sufficient to allow the selection of Op3 in the top goal, it gets selected and applied immediately, terminating the lower goals.

The EBG domain theory maps onto several aspects of the processing of the third-level evaluation subgoal (Figure 3). The EBG rules that determine when a state is solved correspond to a goal test rule. The EBG rules that determine state solvability correspond to the problem-solving strategy in the evaluation subgoal. The EBG rules that deliberately regress the solution property through the task operators correspond to the chunking process on the evaluation subgoal - goal regression in Soar is thus always done by chunking, but possibly over different subgoals. These chunks are included as part of the explanation structure for the parent operator-selection goal because the processing in the bottom subgoal was part of what led to the parent goal's result. Chunks are learned for each of the levels of goals, but the ones of interest here are for the operator-selection subgoals.
These chunks provide search-control knowledge for the task problem space - the focus of this section. Soar acquired the following production for the top operator-selection goal. This production is presented in the same abstract form that was used for the corresponding EBG rule (rule 25).

Proposed(a) ∧ Name(a,Op3) ∧ Integral(a,b) ∧ Matches(b, ∫r·xˢdx) ∧ Isa(r,real) ∧ Isa(s,real) ∧ Not-Equal(s,−1) → Best(a)   (26)

This production specifies a class of situations in which operator Op3 is best.⁷ Operator Op3 takes three parameters in the Soar implementation: (1) the integral (∫7x²dx); (2) the coefficient (7); and (3) the term (x²) which is the other multiplicand within the integral. The predicates in this production examine aspects of these arguments and their substructures. Though the details tend to obscure it, the content of this production is essentially the same as the EBG rule.

Two additional points raised by this example are worth mentioning. The first point is that the EBG rule (rule 25) checks whether there is an arbitrary function before the integral, whereas this rule does not. The additional test is absent in the Soar rule because the representation - which allowed both tree-structured and flat access to the terms of the formula - allowed any term to be examined and changed independent of the rest of the formula. Functions outside of the integral are thus simply ignored as irrelevant. The second point is that the learning of this rule requires the climbing of a type hierarchy. The training example mentions the number 7, but not that it is a real number. In Soar, the type hierarchy is defined by adding a set of rules which successively augment objects with higher-level descriptors. All of these productions execute during the first elaboration phase after the training example is defined, so the higher-level descriptors are already available - that is, operational - before the subgoal is generated. This extra knowledge about the semantics of the concept description language is thus encoded uniformly in the problem solver's memory along with the rest of the system's knowledge.

7 To Soar, stating that an object is best means that the object is at least as good as any other possibility. Because operators are only being rated here on whether their use leads to a goal state, best here is equivalent to useful in rule 25.

VII DIFFERENCES IN THE MAPPING

The previous sections demonstrate that the EBG framework maps smoothly onto Soar, but three noteworthy differences did show up.

The first difference is that EBG regresses a variablized goal concept back through variablized rules, whereas chunking regresses instantiated goal results through instantiated rules (and then adds variables). In other words, both schemes use the explanation structure to decide which predicates from the training example get included in the concept definition - thus placing the same burden on the representation in determining the generality of the predicates included⁸ - but they differ in how the definition is variablized.
In EBG, this process is driven by unification of the goal concept with the relevant rules of the domain theory, whereas in chunking it is driven by the representation of the training example (that is, which identifiers appear where). Putting more of the burden on the representation allows the chunking approach to be more efficient, but it can also lead to the acquisition of overly-specific concept definitions.

The second difference is that predicates can be operationalized in EBG either by including them as facts in the domain theory or by making them into built-in functions in the rule language. In Soar, only the predicates existing in working memory prior to the generation of a subgoal are operational for that subgoal. This is not a severe limitation because any predicate that can be implemented in the rule language can also be implemented by an operator in Soar, but it could lead to efficiency differences. One direction that we are actively pursuing is the dynamic augmentation of the set of operational predicates for a goal concept during the process of finding a path from the training example to the goal concept. If intermediate predicates get operationalized - that is, chunks are learned for them - then the overall goal concept can be expressed in terms of them rather than just the preexisting elements.⁹

The third difference is that the EBG implementation of search-control acquisition requires the addition of general interpretive rules to enable search with the task operators and the regression of the solution property through them,¹⁰ while Soar makes use of the same goal/problem-space/chunking approach as is used for the rest of the processing. In the Soar approach, the representation is uniform, and the different components integrate together cleanly.

8 See [11] for a discussion of the interaction between representation and generality in Soar.

9 The approach is not unlike the one independently developed by DeJong and Mooney [2].

10 However, a more uniform approach to the acquisition of search-control rules by EBG can be developed [4].

VIII EBG ISSUES

In [15], four general issues are raised about EBG:
1. The use of imperfect domain theories.
2. The combination of explanation-based and similarity-based methods.
3. The formulation of generalization tasks.
4. The use of contextual knowledge.

The purpose of this section is to suggest solutions to three of these issues - the second issue has not yet been seriously investigated in Soar, so rather than speculate on how it might be done, we will leave that topic to a later date. Other solutions have been suggested for these issues (see [15] for a review of many of these), but the answers presented here are a mutually compatible set derived from the mapping between EBG and Soar.

The first issue - the use of imperfect domain theories - arises because, as specified in [15], in order to use EBG it is necessary to have a domain theory that is (1) complete, (2) consistent, and (3) tractable. Should any of these conditions not hold, it will be impossible to prove that the training example is an instance of the concept. Not mentioned in [15], but also important, is that the correctness of the resulting generalization is influenced by two additional conditions on the domain theory: that it be (4) free of errors and (5) not defeasible (a defeasible domain theory is one in which the addition of new knowledge can change the outcome).
If the process of generating an explanation is viewed not as one of generating a correct proof, but of solving a problem in a problem space, then the first four conditions reduce to the same ones that apply to any problem solver. Moreover they are all properties of applying the problem space to individual problems, and not of the problem space (or domain theory) as a whole. A space that is perfectly adequate for a number of problems may fail for others. As such, violations of these conditions can be dealt with as they arise on individual problems. In Soar, varying amounts of effort have gone into investigating how to deal with violations of these conditions. A variety of techniques have been used to make problem solving more tractable, including chunking, evaluation functions, search-control heuristics, subgoals, and abstraction planning [9, 10, 11, 21, 22]. However, the other conditions have been studied to a much lesser extent.

The one condition that does not involve a pure problem-solving issue is defeasibility. The explanation process may progress smoothly and without error with a defeasible theory, but it can lead to overgeneralization in both EBG and Soar. In the EBG version of the Safe-to-Stack problem, the theory is defeasible because, as specified in [15], the rule which computes the weight of the endtable (rule 5) is actually a default rule which can be overridden by a known value. The acquired concept definition (rule 7) is thus overgeneral. It will incorrectly apply in situations where there is a small non-default weight for the endtable. In Soar, domain theories can be defeasible for a number of reasons, including the use of default processing for the resolution of impasses and the use of negated conditions in productions.¹¹ Sometimes the domain theory can be reformulated so that it is not defeasible, and at other times it is possible to reflect the defeasibility of the domain theory in the concept definition - for example, by including negated conditions in the concept definition - but when defeasibility does exist and yields overgeneralization, the problem of recovering from overgeneralization becomes key. Though we do not have this problem completely solved, Soar can recover when an overgeneral chunk fails to satisfy some higher goal in the hierarchy.

As mentioned in [15], the third issue - the formulation of generalization problems - is resolved by Soar. Whenever a subgoal is generated, a generalization problem is implicitly defined. The subgoal is a problem-solving goal - to derive the knowledge that will allow problem solving to continue - rather than a learning goal. However, one of the side effects of subgoal processing is the creation of new chunk productions which encode the generalized relationship between the initial situation and the results of the subgoal.

The fourth issue - the use of contextual knowledge - is straightforward in Soar. At each decision, all of the knowledge available in the problem space that is relevant to the current situation is accessed during the elaboration phase. This can include general background and contextual knowledge as well as more local knowledge about the task itself.

11 Negated conditions test for the absence of an element of a certain type, which is not the same as testing whether the negation of an element is known to be true (as is done by the Not-Equal predicate in production 26).
IX CONCLUSION

Explanation-based generalization and Soar/chunking have been described and related, and examples have been provided of Soar's performance on two of the problems used to exemplify EBG in [15]. The mapping of EBG onto Soar is close enough that it is safe to say that chunking is an explanation-based generalization method. However, there are differences in (1) the way goal regression is performed, (2) the locus of the operational predicates, and (3) the way search-control concepts are learned. Mapping EBG onto Soar suggests solutions to a number of the key issues in explanation-based generalization, lending credence to the particular way that learning and problem solving are integrated together in Soar. Also, based on the previous experience with Soar in a variety of tasks [9] - including expert-system tasks [21] - this provides evidence that some form of EBG is widely applicable and can scale up to large tasks.

ACKNOWLEDGMENTS

We would like to thank Tom Dietterich, Gerald DeJong, Jack Mostow, Allen Newell, and David Steier for their helpful comments on drafts of this article. We would also like to thank the members of the Soar and Grail groups at Stanford for their feedback on this material.

REFERENCES

1. DeJong, G. Generalizations based on explanations. Proceedings of IJCAI-81, 1981.
2. DeJong, G., & Mooney, R. "Explanation-based learning: An alternative view." Machine Learning 1 (1986). In press.
3. Ellman, T. Explanation-based learning in logic circuit design. Proceedings of the Third International Machine Learning Workshop, Skytop, PA, 1985, pp. 35-37.
4. Hirsh, H. Personal communication. 1986.
5. Kedar-Cabelli, S. T. Purpose-directed analogy. Proceedings of the Cognitive Science Conference, Irvine, CA, 1985.
6. Keller, R. M. Learning by re-expressing concepts for efficient recognition. Proceedings of AAAI-83, Washington, D.C., 1983, pp. 182-186.
7. Laird, J. E. Universal Subgoaling. Ph.D. Th., Carnegie-Mellon University, 1983.
8. Laird, J. E., and Newell, A. A universal weak method: Summary of results. Proceedings of the Eighth IJCAI, 1983.
9. Laird, J. E., Newell, A., & Rosenbloom, P. S. Soar: An architecture for general intelligence. In preparation.
10. Laird, J. E., Rosenbloom, P. S., & Newell, A. Towards chunking as a general learning mechanism. Proceedings of AAAI-84, Austin, 1984.
11. Laird, J. E., Rosenbloom, P. S., & Newell, A. "Chunking in Soar: The anatomy of a general learning mechanism." Machine Learning 1 (1986).
12. Lebowitz, M. Concept learning in a rich input domain: Generalization-based memory. In Machine Learning: An Artificial Intelligence Approach, Volume II, R. S. Michalski, J. G. Carbonell, & T. M. Mitchell, Eds., Morgan Kaufmann Publishers, Inc., Los Altos, CA, 1986.
13. Mahadevan, S. Verification-based learning: A generalization strategy for inferring problem-decomposition methods. Proceedings of IJCAI-85, Los Angeles, CA, 1985.
14. Minton, S. Constraint-based generalization: Learning game-playing plans from single examples. Proceedings of AAAI-84, Austin, 1984, pp. 251-254.
15. Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. "Explanation-based generalization: A unifying view." Machine Learning 1 (1986).
16. Mitchell, T. M., Mahadevan, S., & Steinberg, L. LEAP: A learning apprentice for VLSI design. Proceedings of IJCAI-85, Los Angeles, CA, 1985.
17. Mitchell, T. M., Utgoff, P. E., & Banerji, R. Learning by experimentation: Acquiring and refining problem-solving heuristics.
In Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell, T. M. Mitchell, Eds., Tioga Publishing Co., Palo Alto, CA, 1983, pp. 163-190.
18. Mooney, R. Generalizing explanations of narratives into schemata. Proceedings of the Third International Machine Learning Workshop, Skytop, PA, 1985, pp. 126-128.
19. Nilsson, N. Principles of Artificial Intelligence. Tioga, Palo Alto, CA, 1980.
20. Rosenbloom, P. S., & Newell, A. The chunking of goal hierarchies: A generalized model of practice. In Machine Learning: An Artificial Intelligence Approach, Volume II, R. S. Michalski, J. G. Carbonell, & T. M. Mitchell, Eds., Morgan Kaufmann Publishers, Inc., Los Altos, CA, 1986.
21. Rosenbloom, P. S., Laird, J. E., McDermott, J., & Orciuch, E. "R1-Soar: An experiment in knowledge-intensive programming in a problem-solving architecture." IEEE Transactions on Pattern Analysis and Machine Intelligence 7, 5 (1985), 561-569.
22. Rosenbloom, P. S., Laird, J. E., Newell, A., Golding, A., & Unruh, A. Current research on learning in Soar. Proceedings of the Third International Machine Learning Workshop, Skytop, PA, 1985, pp. 163-172.
23. Segre, A. M. Explanation-based manipulator learning. Proceedings of the Third International Machine Learning Workshop, Skytop, PA, 1985, pp. 183-185.
24. Tadepalli, P. Learning in intractable domains. Proceedings of the Third International Machine Learning Workshop, Skytop, PA, 1985, pp. 202-205.
25. Utgoff, P. E. Adjusting bias in concept learning. Proceedings of IJCAI-83, Karlsruhe, West Germany, 1983, pp. 447-449.
26. Winston, P. H., Binford, T. O., Katz, B., & Lowry, M. Learning physical descriptions from functional definitions, examples, and precedents. Proceedings of AAAI-83, Washington, 1983, pp. 433-439.
1986
76
523
Learning to Anticipate and Avoid Planning Problems through the Explanation of Failures*

Kristian J. Hammond
Department of Computer Science
Yale University

ABSTRACT

This paper presents an approach to learning during planning that focuses on learning to predict planning problems through an analysis of the planner's own failures. The need to predict failures in order to avoid them is argued, and a method for learning the features that predict problems from a causal analysis of planning failures is discussed. A further argument is also given concerning the natural integration of this approach to learning with an overall theory of case-based planning. An implementation of these learning ideas is presented in the case-based planner CHEF, which creates new plans from old in the domain of Szechwan cooking. The CHEF planner uses an anticipate and avoid approach to planning problems that is sharply contrasted with the create and debug approach taken by existing planners.

I LEARNING FROM PLANNING

In recent years the study of machine learning has moved away from simple concept learning towards the idea of learning in the context of other tasks. Many researchers have proposed learning systems that work with planners to store the results of the planner's own efforts [1,3,4] or the refined versions of a tutor's examples [2]. The stress in all of these systems has been on the learning of new plans for later use. The results reported by researchers developing such systems have centered around planning-time gains that result from reusing the stored plans.

There is, however, another aspect of learning that these systems do not address: learning to avoid failures that the planner has encountered before. Because existing systems focus on the final result of the planning process and not the errors that were made along the path to that result, they do nothing to ensure that those errors will not be made again. While these systems store plans in terms of the goals that they may satisfy, any problems that the planner may have encountered and repaired are ignored. Goals or goal combinations that have proved to be problematic are not noted, meaning that they will be handled in the same fashion time after time, even if that handling always leads to a failure.

A different approach to learning from planning is to learn to recognize the situations in which failures occur and use that recognition to avoid them. One way to do this is to design a planner that uses its own failures to learn problematic features in a domain. These problematic features can then be used to predict problems in later planning situations so that the planner can construct its new plan knowing the problems it must avoid. By also storing plans in terms of the problems that were encountered while building them, a planner could use the prediction of a problem to find a plan in memory that avoids it.

This idea of using failures to learn the features that predict them is implemented in the case-based planner CHEF, which creates new plans in the form of recipes in the domain of Szechwan cooking. The CHEF planner uses an anticipate and avoid approach to planning problems that is sharply contrasted with the create and debug approach taken by existing planners.

* This report describes work done in the Department of Computer Science at Yale University. It was supported in part by ONR Grant #N00014-85-K-0108.
CHEF attempts to predict and plan for possible failures before they actually occur rather than waiting for them to happen and repairing them once they have. The ability to learn from its own failures allows the CHEF planner to anticipate and avoid those problems that it has seen before.

II THE NEED FOR ANTICIPATION

One of the persistent problems that planners have to deal with is the changes that take place in the world as a result of their own actions. When a planner has to plan for a set of goals from a given state of the world it is possible for it to construct a plan for one goal that will change the conditions that are required for planning for another. Goal and plan interaction has been the subject of much work in Artificial Intelligence [5,7,8] but has always been handled from the approach of dealing with planning failures as they arise. This has resulted in a class of planners that are able to debug faulty plans, but only after the faults have arisen.

An alternative approach to planning failures is to anticipate them before they arise. Once anticipated, a failure can be avoided by finding a plan that deals with the problem indicated by the prediction while also satisfying the planner's current goals.

The CHEF planner anticipates problems that it has previously encountered, using links that it builds at the time of a failure from the features that caused it to the memory of the failure itself. When the features reoccur in later circumstances, the planner is reminded of the past experience and this reminding serves as a warning to the planner that it has to plan for the fact that this failure is going to occur again. Because CHEF also stores the repaired plans that were built in response to past failures, indexed by the fact that they solve the problem corresponding to the failure, CHEF is able to use the prediction of the problem to find a plan that avoids it.

For example, in trying to create a plan for a strawberry soufflé, CHEF encounters problems with the liquid added by the addition of chopped strawberries to the soufflé batter. This added liquid causes an imbalance in the relationship between liquid and leavening in the recipe. This alters the condition which was required for the soufflé to rise, which results in a fallen soufflé. CHEF is able to recover from this failure by adding more egg white to the recipe. But because it does not want to repeat the failure later on, it has to change more than the current plan. It also has to change the way in which it will plan for similar circumstances at a later date.

So CHEF does two things. First it stores the new plan in memory, indexed by the fact that it is a plan to deal with the problem of added liquid in a soufflé recipe. But indexing a plan by the fact that it solves a problem is of no use unless the problem is anticipated. So CHEF also builds links from the features in the situation that caused the problem (added fruit or extra liquid) to a memory of the failure itself. With these links in place, it is later able to infer the occurrence of the failure from the reoccurrence of the features that participated in causing it earlier.

In dealing with a later situation, in which CHEF is planning for a soufflé with the liqueur kirsch, it is reminded of the past failure. This tells CHEF that it is in a problem situation and it adds the goal to avoid the problem to the list of goals used to search for an initial plan. It then finds the strawberry soufflé recipe, with the added egg white, and modifies it to include kirsch rather than strawberries. Without the anticipation of the failure, CHEF would have used a more basic vanilla soufflé recipe and would have built a plan with the same flaw as in the failed strawberry soufflé plan. By anticipating the failure, CHEF is able to find a plan that avoids it.

III AN OVERVIEW OF CHEF

CHEF's input is a set of goals for different tastes, textures, ingredients and types of dishes and its output is a single recipe that satisfies all of its goals. Its basic algorithm is to find a past plan that satisfies as many of the most important goals as possible and then modify that plan to satisfy the other goals as well.

Before searching for a plan to modify, CHEF examines the goals in its input and predicts any failures that might rise out of the interactions between the plans for satisfying them. If a failure is predicted, CHEF adds a goal to avoid the failure to its list of goals to satisfy and this new goal is also used to search for a plan. For example, if it predicts that stir frying chicken with snow peas will lead to soggy snow peas because the chicken will sweat liquid into the pan, it searches for a stir fry plan that avoids the problem of vegetables getting soggy when cooked with meats. In doing so, it finds a past plan for beef and broccoli that solves this problem by stir frying the vegetable and meat separately. The important similarity between the current situation and the one for which the past plan was built is that the same problem rises out of the interaction between the planner's goals, although the goals themselves are different.

CHEF indexes its plans in memory by both the goals that they satisfy and the problems that they avoid. It also tries to predict problems before any other planning is done. This means that the anticipation of a problem can be used to find a plan in memory that avoids it while also satisfying the goals that the planner has been given, allowing CHEF to anticipate and then avoid problems before they actually arise.

IV LEARNING FROM FAILURE

Once CHEF builds a plan, it runs a simulation of it that is CHEF's version of the real world. The results of this simulation are checked against the goals that CHEF expects the plan to satisfy. These goals include the goals CHEF is given by the user as well as those it understands should be satisfied by an instance of the type of plan it has built. CHEF understands that its plan for strawberry soufflé should include the strawberries requested by the user and also understands that it should, like all soufflés, be baked and fluffy. If the goals that the plan is designed to satisfy are not met in the results of the simulation, CHEF considers the plan a failure and begins the task of fixing the faulty plan and altering the faulty understanding of the world that was used to create the plan. It is important to note here that "failure" means the failure of a plan to achieve its goals, not a failure of the planner to create a plan.

CHEF builds up a causal explanation of why the failure occurred and uses that description to access a set of repair strategies. This explanation is built by back chaining from the failure to the initial steps or states that caused it, using a set of causal rules that describe the results of actions in different circumstances. Once the explanation is built and the strategies are accessed, CHEF tries to implement the different strategies as actual changes to the plan.
It then finds the strawberry soufflk recipe, with the added egg white, and modifies it to include kirsch rather than strawberries. Without the anticipation of the failure, CHEF would have used a more basic vanilla soufflk recipe and would have built a plan with the same flaw as in the failed strawberry soufflC plan. By anticipating the failure, CHEF is able to find a plan that avoids it. III AN OVERVIEW OF CHEF CHEF’s input is a set of goals for different tastes, textures, ingredients and types of dishes and its output is a single recipe that satisfies a!! of its goals. Its basic algorithm is to find a past plan that satisfies as many of the most. important goals as possible and then modify that plan to sat,isfy the other goals as well. Before searching for a plan to modify, CHEF examines the goals in its input and predicts any failures that might rise out the interactions between the plans for satisfying them. If a failure is predicted, CHEF adds a goal to avoid the failure to its list of goals to satisfy and this new goal is also used to search for a plan. For example, if it predicts that stir frying chicken with snow peas will lead to soggy snow peas because the chicken will sweat liquid the pan, it searches for a stir fry plan that avoids the problem of vegetables getting soggy when cooked with meats. In doing so, it finds a past plan for beef and broccoli that solves this problem by stir frying the vegetable and meat separately. The important similarity between the current situation and the one for which the past plan was built is that the same problem rises out of the interaction between the planner’s goals, although the goals themselves are different. CHEF indexes its plans in memory by both the goals that they satisfy and the the problems that they avoid. It also tries to predict problems before any other planning is done. This means that the anticipation of a problem car be used to find a plan in memory that avoids it while also satisfying the goals that the planner has been given, allowing CHEF t.o anticipate and then avoid problems before t.hey actually arise. IV LEARNING FROM FAILURE Once CHEF buiids a plan, it runs a simulation of it that is CHEF’s version of the real world. The results of tliis simulation are checked against, the goals that CIIEF expects the plan to satisfy. These goals include the goals CHEF is given by the user as we!! as those it understands should be satisfied by an instance of the type of plan it hau built. CHEF understands that its plan for strawberry sot&k should include the strawberries requested by the user and also understands that it should, like a!! soufflE”s, be baked and fluffy. If the goals that the plan is designed to satisfy are not met in the results of the simulation, CHEF considers t,he plan a failure and begins the task of fixing the faulty plan and altering the faulty understandin, u of the world that was used to create the plan. It is important to note here that “failure8 means the failure of a plan to achieve its goals, not a failure of the planner to create a plan. CHEF builds up a causal explanation of why the failure occurred and uses that description to access a set of repair strategies. This explanation is built by back chaining from the failure to the initial st,eps or states that caused it, using a set of causal rules the describe the results of actions in different circumstances. Once t.he explanation is built and the strategies are accessed, CHEF tries to implement the different strategies as actual changes to the plan. 
It makes the one change which seems most likely to succeed without introducing any new problems. In the example of the failed strawberry souffl6, CHEF ex- plains the fallen SOUWC as a result of an imbalance between the liquid and leavening in the recipe. This imbalance is traced back to the strawberries that were added in order to meet the user’s goals. In terms of CHEF’s vocabulary, this problem is a case of SIDE-EFFECT:DISABLED-CONDITION:BAL,4NCE because a side-effect of adding the strawberries has disabled a balance condition that is required for the success of the BAKE step in the plan. This explanation is used to find a planning TOP, one of a set, of structures that correspond to different plan- ning problems, and this TOP suggests the actual strategies that are used to repair the plan. These TOPS are planning versions of the Thematic Organization Packets suggested by Roger Schank for use in understanding [6]. They are designed to organize memories around complex goal and plan interactions. Searching for TOP using following explanation: Failure = It is not the case that: The batter is risen. Initial plan = Bake the batter for twenty five minutes. Condition enabling failure = There is an imbalance between the whipped stuff and the thin liquid. Cause of condition = Pulp the strawberry. The goals enabled by the condition = NIL The goals that the step causing the condition enables - The dish nov tastes like berries. LEARNING I 557 Found TOP TOP3 -> SIDE-EFFECT:DISABLED-CONDITION:BALANCE TOP -> SIDE-EFFECT:DISABLED-CONDITION:BALANCE has 6 strategies associated with it: ALTER-PLAN:SIDE-EFFECT RECOVER ALTER-PLAN:PREC~lDIfION ADJUNCT-PLAN ADJUST-BALANCE : UP Each of the different sbrategies under the TOP suggests a change in the causal chain that leads to the current failure. l ALTER-PLAN:SIDE-EFFECT suggests using an action to achieve the initial goal that does not have the offending side-effect. In this situation this would mean finding a way to add the taste of strawberries that does not add extra liquid. The alteration CHEF finds is to use strawberry preserves rather than crushed strawberries. l ALTER-PLAN:PRECONDITION suggests finding a al- ternative to the blocked st,ep that does not require the conditions that the first part of t,he plan has violated. This would mean finding a step to make the batter rise that does not reqluire the balance between the liquid and leavening. CHEF can find no action that will do this. l RECOVER suggests putting a step between the action that caused the side-effect and the step it interferes with that will remove the offending state. This means finding a step that will remove the liquid that results from chopping the strawberries before the batter is baked. CHEF finds that draining the strawberries will do this. l ADJUNCT-PLAN suggests adding a new step to run con- current with the step that has the violated condition that will allow it to satisfy the goal even in the presence of the violation. In this example this means Finding a step that will allow baking the existing batter to have the desired effect. CHEF finds t&hat adding flour to the batter will do this. l ADJUST-BALANCE:UP suggests adjusting the down- side of the imbalanced relationship by adding more of what there is less of. In this example, this means adjusting the down-side of the imbalanced liquid and leavening relation- ship. This means adding more of the egg-white used as leavening. 
CHEF ends up using the suggestion made by ADJUST-BALANCE:UP to add more egg white because this is the change that has the least possibility of creating any unwanted side-effects. This is determined using a set of heuristics that evaluate different changes at the level of the domain.

Once this change is made CHEF is in a powerful position. It has a working plan for a set of goals and it knows that this plan avoids a particular problem. It also has an explanation of why the problem occurred in the first place and can use this explanation to figure out which features will predict it at a later date. This means it can perform both of the tasks it needs to do in order to avoid this failure in the future: it can index the new plan in memory by the fact that it is a special plan that deals with this problem and it can build the links between features in the situation and its memory of the failure that will allow it to anticipate the problem in similar circumstances and thus find the plan that handles it.

CHEF indexes the new plan under all of the goals that it satisfies as well as the problems that it solves. The fact that it solves a certain problem is one of the important features of a plan but is not the only one. Other features include the initial input goals and the goals inferred by CHEF from the nature of the dish requested and the ingredients used.

Indexing STRAWBERRY-SOUFFLE under the features:
Goals requested and inferred:
  Include strawberry in the dish.
  Make a souffle.
  The batter is now risen.
  The dish now tastes like berries.
  The dish now tastes sweet.
Problems avoided:
  The plan avoids the failure 'It is not the case that:
  The batter is now risen.' caused by conditions:
  "Chopping fruits produces liquid."
  "Without a balance between liquids and leavening
  the batter will fall."

The repaired plan is only part of what the planner learns. It also learns to recognize the situations in which the plan is useful. It does this by stepping through the causal explanation it has built and using the constraints on the rules that were used in connecting actions to effects and states to the results that they enable. CHEF uses the explanation to point out which features in a situation are responsible for a failure and uses it again to find the features that will be predictive of the failure. Here CHEF wants to know not only the exact features that caused the problem but also the more general versions of them that might cause it again. It gets these more general features by generalizing to the level of the rules. This means generalizing an object in an explanation up to the highest level of description possible, while staying within the confines of the rules that explain the failure.

In the STRAWBERRY-SOUFFLE situation, one rule explains that the liquid was a product of the chopping of the strawberries. A simple way to predict this failure in the future would be for the planner to mark STRAWBERRY as predictive of it and be reminded of the failure whenever it is asked to make a strawberry soufflé. But the rule that explains the added liquid as a side-effect of chopping the strawberries does not require that the object of the step be strawberries. It explains that chopping any fruit will produce this side-effect. So, instead of marking STRAWBERRY as predictive of the problem, CHEF can mark FRUIT as predictive.

Building demons to anticipate failure.
Building demon:
Building demons to anticipate failure.
Building demon: DEMON2 to anticipate interaction between rules:
  "Chopping fruits produces liquid."
  "Without a balance between liquids and leavening the batter will fall."
Indexing demon: DEMON2 under item: FRUIT
  by test: Is the item a FRUIT.
Indexing demon: DEMON2 under style: SOUFFLE
Goal to be activated =
  Avoid failure of type SIDE-EFFECT:DISABLED-CONDITION:BALANCE exemplified by the failure 'The batter is now flat' in recipe STRAWBERRY-SOUFFLE.

A causal explanation of why a failure occurs is a chain of events and states in which each link is a potential predictor of the failure occurring again. The goal to include strawberries is the outermost link in this chain while the liquid they produce when chopped is a more direct cause of the failure. Because the liquid from the strawberries is just as much a cause of the problem as the goals to include the strawberries, it can also be used to predict the failure at a later date. States that are intermediate links in causing a failure are marked as predictive of the problem along with the initial goals that started the chain of events leading to it. CHEF generalizes these states up to the level of the rules that explain the failure and links them to a token representing the failure. Because the presence of liquid is implicated in causing the failure with the STRAWBERRY-SOUFFLE, the goal to include any liquid spice is linked to the memory of the failure. This is implemented by placing a test on SPICE that checks the texture and partially activates the memory of the failure when it is liquid.

Building demon: DEMON3 to anticipate interaction between rules:
  "Liquids make things wet."
  "Without a balance between liquids and leavening the batter will fall."
Indexing demon: DEMON3 under item: SPICE
  by test: Is the TEXTURE of item LIQUID.
Indexing demon: DEMON3 under style: SOUFFLE
Goal to be activated =
  Avoid failure of type SIDE-EFFECT:DISABLED-CONDITION:BALANCE exemplified by the failure 'The batter is now flat' in recipe STRAWBERRY-SOUFFLE.

By examining this failure, CHEF is able to learn the features that will predict similar problems at a later date. This knowledge is in the form of the links going from the surface features of CHEF's own goals to memories of the failure itself. When these surface features arise in later situations, the memory of the failure is activated and CHEF infers that the problem is going to arise as well. These links are arranged so that all features responsible for a failure have to be present for it to be predicted. But different combinations of features may all predict the same failure. Figure 1 shows a simplified version of the activation links leading to a failure. When all links leading into the memory of a failure are activated, the memory is also activated. The test for the texture of the goal to include any spice controls the flow of activation through that link.

By using the explanation of the failure to identify the important features in the situation CHEF gains in three ways. First, it is able to learn from a single instance and avoid the problems inherent to the repetition of examples required by inductive learning systems. Second, it is able to identify a range of situations as predictive of a problem by following the causal chain defined by the explanation from the first causes of the problem to the ones more immediate to the actual failure. Third, it is able to use the rules that were used to explain the situation to control the level of generalization of the features marked as predictive of the problem.
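The arrangement of links just described can be rendered as a small Python sketch, with names invented for the example: a failure memory fires only when every incoming link is active, and a link may carry a test such as the liquid-texture check placed on SPICE.

class Link:
    # One activation link from a goal feature to a failure memory.
    def __init__(self, feature, test=lambda goal: True):
        self.feature, self.test = feature, test

    def active(self, goals):
        return any(g["type"] == self.feature and self.test(g) for g in goals)

class FailureMemory:
    def __init__(self, name, links):
        self.name, self.links = name, links

    def activated_by(self, goals):
        # All links leading into the memory must be active.
        return all(link.active(goals) for link in self.links)

fallen_souffle = FailureMemory(
    "the batter is now flat (STRAWBERRY-SOUFFLE)",
    [Link("MAKE-SOUFFLE"),
     Link("INCLUDE-SPICE", test=lambda g: g.get("texture") == "LIQUID")])

goals = [{"type": "MAKE-SOUFFLE"},
         {"type": "INCLUDE-SPICE", "texture": "LIQUID"}]
print(fallen_souffle.activated_by(goals))  # True: both links fire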
The explanation gives CHEF the information it needs to learn from a single instance and anticipate this problem in the most general and widest range of situations possible.

[Figure 1: Links leading to the memory of the fallen soufflé]

V PREDICTING A NEW PROBLEM
Once CHEF has learned the features that predict a problem, it is able to anticipate the problem from those features. It does this by sending out activations along all of the links from the goals in a new input and attending to any past failures that it is reminded of. After solving the problem of the strawberry soufflé, CHEF is asked to make a soufflé with the liqueur kirsch. Before planning for these new goals, CHEF activates them and this activation is spread to any failures that they predict. In this case, the goal to include kirsch activates the goal to include any spice, which in turn sends an activation towards the memory of the fallen soufflé. Because kirsch is a liquid and thus passes the test along this line of activation, the signal reaches the memory of the failure. At the same time, the goal to make a soufflé sends off an activation signal to the same memory. When both links leading to the memory are activated it is also activated, and CHEF responds by adding a goal to avoid this problem into its current goal list.

Searching for plan that satisfies -
  Include kirsch in the dish.
  Make a souffle.
Collecting and activating tests.
Fired: Is the dish STYLE-SOUFFLE.
Fired: Is the item a SPICE.
  Is the TEXTURE of item LIQUID.
Kirsch + Souffle = Failure
  "Liquids make things wet."
  "Without a balance between liquids and leavening the batter will fall."
Reminded of STRAWBERRY-SOUFFLE.
Fired demon: DEMON3
Adding goal:
  Avoid failure of type SIDE-EFFECT:DISABLED-CONDITION:BALANCE exemplified by the failure 'The batter is now flat' in recipe STRAWBERRY-SOUFFLE.

CHEF is able to predict this problem, even though the surface features of its current situation do not match those of the past situation, because the links formed in response to the past failure were made on the basis of a causal understanding of why the failure actually occurred. By using this causal explanation, the planner was able to learn the true extent of the problem and anticipate it in markedly different circumstances than those in which it originally occurred.

VI USING THE PREDICTION
Once the prediction of a failure is made, the planner searches for an existing plan that satisfies as many of its current goals as possible while avoiding the predicted problem. Plans that are modified in response to failures are indexed by the fact that they deal with those failures, so the prediction of a particular problem can be used to index to a plan that solves it.

In planning for the kirsch soufflé, the prediction of the failure allows CHEF to access the existing strawberry soufflé plan. Without the prediction, another plan, a recipe for a vanilla soufflé, would have been used because it has more surface features in common with the goals that CHEF has in hand. But this plan was used in the past to construct the failed strawberry soufflé and the standard modifications that CHEF uses would have also led to a failure in this instance. The fact that CHEF recognizes that its present situation is analogous to a past one in which a problem has occurred allows it to find a past plan that avoids that problem even though it has fewer surface features in common with the present situation than another plan in memory.
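A hedged sketch of this retrieval, with an invented plan library and scoring rule, shows the key design choice: a plan indexed as avoiding the predicted failure outranks a plan that merely shares more surface goals. CHEF's own trace of the search follows.

PLANS = {
    "VANILLA-SOUFFLE":    {"goals": {"make-souffle", "include-vanilla"},
                           "avoids": set()},
    "STRAWBERRY-SOUFFLE": {"goals": {"make-souffle", "include-strawberry"},
                           "avoids": {"SIDE-EFFECT:DISABLED-CONDITION:BALANCE"}},
}

def retrieve(goals, predicted_failures):
    def score(entry):
        _, plan = entry
        # Avoiding a predicted failure outweighs any surface-goal overlap.
        return (100 * len(plan["avoids"] & predicted_failures)
                + len(plan["goals"] & goals))
    return max(PLANS.items(), key=score)[0]

# Kirsch shares no surface goal with strawberries, yet the predicted
# balance failure selects the already-repaired strawberry plan.
print(retrieve({"make-souffle", "include-kirsch"},
               {"SIDE-EFFECT:DISABLED-CONDITION:BALANCE"}))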
Searching for plan that satisfies -
  Make a souffle.
  Avoid failure of type SIDE-EFFECT:DISABLED-CONDITION:BALANCE.
  Include kirsch in the dish.
Driving down on: Make a souffle.
  Succeeded -
Driving down on: Avoid failure of type SIDE-EFFECT:DISABLED-CONDITION:BALANCE.
  Succeeded -
Driving down on: Include kirsch in the dish.
  Failed -
Using recipe from current level.
Found recipe -> REC12 STRAWBERRY-SOUFFLE
Recipe exactly satisfies goals ->
  Make a souffle.
  Avoid failure of type SIDE-EFFECT:DISABLED-CONDITION:BALANCE.
Recipe must be altered to match ->
  Include kirsch in the dish.

Because this recipe has already been adapted to the problems of added liquid, it can easily be modified to include kirsch rather than strawberries and runs without failure.

VII CONCLUSION
Planning failures can tell a planner where its own reasoning has gone wrong. They can provide information about what features will tend to lead to a failure and when to anticipate them in later planning. A planner that learns from one failure to anticipate later ones and uses that anticipation to find the plans that deal with it is able to avoid those failures that it has already encountered. CHEF learns from its own errors and thus avoids them in later planning. Learning from a dozen examples at the same level of complexity as the one discussed here, it identifies the problematic features of its domain and creates the plans to deal with them. By using a causal explanation of why a failure has occurred to identify the features that will predict it in the future, CHEF is able to learn from a single instance and anticipate the problem in the most general and widest range of situations possible. And once the problem is anticipated, it can be avoided by making use of a plan designed to deal with it. Unlike planners that only store their successes, CHEF is able to improve itself by learning to avoid the mistakes that other planners are unable to anticipate.

REFERENCES
[1] Carbonell, J., Derivational Analogy and its Role in Problem Solving, Proceedings of the National Conference on Artificial Intelligence, AAAI, Washington, DC, August 1983.
[2] DeJong, G., Acquiring Schemata Through Understanding and Generalizing Plans, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, Karlsruhe, Germany, August 1983.
[3] Minton, S., Selectively Generalizing Plans for Problem-Solving, Proceedings of the Ninth National Conference on Artificial Intelligence, AAAI, Los Angeles, CA, August 1985, pp. 313-315.
[4] Mitchell, T., Learning and Problem Solving, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, Germany, August 1983. Computers and Thought Award Lecture.
[5] Sacerdoti, E., A structure for plans and behavior, Technical Report 109, SRI Artificial Intelligence Center, 1975.
[6] Schank, R., Dynamic memory: A theory of learning in computers and people, Cambridge University Press, 1982.
[7] Sussman, G., Artificial Intelligence Series, Volume 1: A computer model of skill acquisition, American Elsevier, New York, 1975.
[8] Wilensky, R., Planning and Understanding, Addison-Wesley, Reading, Mass., 1983.
A DOMAIN INDEPENDENT EXPLANATION-BASED GENERALIZER
Raymond J. Mooney
Scott W. Bennett
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
1101 W. Springfield Ave.
Urbana, IL 61801

This research was supported by the National Science Foundation under grant NSF IST 83-17889.

ABSTRACT
A domain independent technique for generalizing a broad class of explanations is described. This method is compared and contrasted with other approaches to generalizing explanations, including an abstract version of the algorithm used in the STRIPS system and the EBG technique recently developed by Mitchell, Keller, and Kedar-Cabelli. We have tested this generalization technique on a number of examples in different domains, and present detailed descriptions of several of these.

I INTRODUCTION
If one considers many of the different Explanation-Based Learning (EBL) systems under construction [Mitchell83, Mitchell85, Mooney85, O'Rorke84, Winston83], certain commonalities become evident in the generalization phase of the learning process. Such systems work by first constructing an explanation for an example being processed. Next, this explanation is generalized. This latter process can be characterized in a domain independent way.

Recent work on the generalization phase of EBL is underway at Rutgers [Mitchell86] and here at the University of Illinois [DeJong86]. In this paper, we present a technique called Explanation Generalization using Global Substitution (EGGS) which we believe provides a natural way for conducting this generalization. This method is quite similar to both the EBG technique introduced in [Mitchell86] and to the MACROP learning process used in STRIPS [Fikes72]. Consequently, the generalization technique used in STRIPS and the regression technique used in EBG are outlined and contrasted with EGGS. Lastly, a few of the examples to which EGGS has been applied are presented with their resulting generalizations.

II EXPLANATIONS
In different domains, various types of explanations are appropriate. In [Mitchell86], an explanation is defined as a logical proof which demonstrates how an example meets a set of sufficient conditions defining a particular concept. This type of explanation is very appropriate for learning classic concept definitions, such as learning a structural specification of a cup, an example introduced in [Winston83] and discussed in [Mitchell86]. However, when learning general plans in a problem solving domain (as in STRIPS [Fikes72] or GENESIS [Mooney85]), it is more appropriate to consider an explanation to be a set of causally connected actions which demonstrate how a goal state was achieved.

Consequently, in this paper, we will take a very broad definition of the term explanation and consider it to be a connected set of units, where a unit is a set of related patterns. A unit for an inference rule has patterns for its antecedents and its consequent, while a unit for an action or operator has patterns for its preconditions and effects. For example, given facts including:

Light(Obj1)
PartOf(Handle1,Obj1)
Handle(Handle1)

and the following inference rules:

Stable(?x) ∧ Liftable(?x) ∧ OpenVessel(?x) → Cup(?x)
Bottom(?y) ∧ PartOf(?y,?x) ∧ Flat(?y) → Stable(?x)
Graspable(?x) ∧ Light(?x) → Liftable(?x)
Handle(?y) ∧ PartOf(?y,?x) → Graspable(?x)
Concavity(?y) ∧ PartOf(?y,?x) ∧ UpwardPointing(?y) → OpenVessel(?x)

a proof tree (explanation) can be constructed for the goal Cup(Obj1) as shown in Figure 1. The explanation structure for this proof is shown in Figure 2.
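The cup proof can be reproduced with a small backward chainer over these rules. The following Python sketch is a plausible rendering, not the authors' code: the tuple representation, the control strategy, and the facts supplied for the bottom and concavity of Obj1 (which the scanned text drops) are assumptions.

import itertools

FACTS = [("Light", "Obj1"), ("Handle", "Handle1"), ("PartOf", "Handle1", "Obj1"),
         ("Bottom", "B1"), ("PartOf", "B1", "Obj1"), ("Flat", "B1"),
         ("Concavity", "C1"), ("PartOf", "C1", "Obj1"), ("UpwardPointing", "C1")]

RULES = [  # (consequent, [antecedents]); "?"-prefixed strings are variables
    (("Cup", "?x"), [("Stable", "?x"), ("Liftable", "?x"), ("OpenVessel", "?x")]),
    (("Stable", "?x"), [("Bottom", "?y"), ("PartOf", "?y", "?x"), ("Flat", "?y")]),
    (("Liftable", "?x"), [("Graspable", "?x"), ("Light", "?x")]),
    (("Graspable", "?x"), [("Handle", "?y"), ("PartOf", "?y", "?x")]),
    (("OpenVessel", "?x"), [("Concavity", "?y"), ("PartOf", "?y", "?x"),
                            ("UpwardPointing", "?y")]),
]

_fresh = itertools.count()

def is_var(t): return t.startswith("?")

def walk(t, env):
    while is_var(t) and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    env = dict(env)
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y: continue
        if is_var(x): env[x] = y
        elif is_var(y): env[y] = x
        else: return None
    return env

def rename(rule):  # standardize a rule apart with fresh variable names
    head, body = rule
    n = str(next(_fresh))
    r = lambda pat: tuple(t + n if is_var(t) else t for t in pat)
    return r(head), [r(b) for b in body]

def prove(goal, env):  # yields every binding environment that proves goal
    for fact in FACTS:
        if len(fact) == len(goal):
            e = unify(goal, fact, env)
            if e is not None: yield e
    for rule in RULES:
        head, body = rename(rule)
        if len(head) != len(goal): continue
        e = unify(goal, head, env)
        if e is None: continue
        envs = [e]
        for ant in body:  # thread bindings through the antecedents
            envs = [e2 for e1 in envs for e2 in prove(ant, e1)]
        yield from envs

print(next(prove(("Cup", "Obj1"), {}), None) is not None)  # True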
The edges between patterns in a unit are assumed to be directed so that an explanation forms a directed acyclic graph. These directed edges define certain patterns in a unit as the support for other patterns in the unit. For example, the support for the consequent of an inference rule is its antecedents, and the support for the effects of an action is its preconditions.

[Figure 1: Explanation for Cup(Obj1). Triple edges indicate equalities between unit patterns. Double edges indicate equalities to initial assertions.]

The task of explanation-based generalization is to take an explanation and its associated explanation structure and generate a generalized explanation, which is the most general version which still maintains the structural validity of the original explanation. This means that substitutions must be applied to the patterns in the explanation structure so that it is constrained in such a way that equated patterns unify directly without requiring any substitutions. The generalized explanation of the cup example is shown in Figure 3. This generalized explanation can then be used to extract the following general definition of a cup:

Bottom(?y1) ∧ PartOf(?y1,?x1) ∧ Flat(?y1) ∧ Handle(?y2) ∧ PartOf(?y2,?x1) ∧ Light(?x1) ∧ Concavity(?y3) ∧ PartOf(?y3,?x1) ∧ UpwardPointing(?y3) → Cup(?x1)

In problem solving domains, the generalized explanation represents a general plan schema or MACROP for achieving a particular class of goals.

[Figure 2: Explanation Structure for Cup Example. Triple edges indicate equalities between unit patterns.]

[Figure 3: Generalized Explanation for Cup Example. Triple edges indicate equalities between unit patterns.]

III EXPLANATION GENERALIZING ALGORITHMS
Several algorithms have been developed for generalizing various types of explanations. The STRIPS system [Fikes72] incorporated a method for generalizing blocks-world plans into MACROPs. The EBG method [Mitchell86] uses a modified version of goal-regression [Waldinger77] to generalize proofs of concept membership. Concurrently with Mitchell et al.'s development of EBG, we developed a method [DeJong86] (which we now call EGGS) which generalizes the broad class of explanations defined in the previous section. However, the general techniques used by the other two methods can be abstracted to apply to the class of explanations defined above. Consequently, this section is devoted to presenting and comparing algorithmic descriptions of all three methods as applied to this class of explanations. All of the algorithms rely on unification pattern matching, and we will use the unification notation described in [Nilsson80].

A. STRIPS MACROP Learning
The first work on generalizing explanations was the learning of robot plans in STRIPS [Fikes72].
STRIPS worked in a "blocks world" domain and after its problem solving component generated a plan for achieving a particular state, it generalized the plan into a problem solving schema (a MACROP) which could be used to efficiently solve similar problems in the future. Work on the STRIPS system was the first to point out that generalizing a causally connected set of actions or inferences could not be done by simply replacing each constant by a variable. This method happens to work on the cup example given above. The proper generalized explanation can be obtained by replacing Obj1 by ?x1, B1 by ?y1, H1 by ?y2, and C1 by ?y3. However, in general, such a simplistic approach can result in a structure which is either more general or more specific than what is actually supported by the system's domain knowledge.

The following examples are given in [Fikes72] to illustrate that simply replacing constants with variables can result in improper generalizations. The following operators are used in these examples:

GoThru(?d, ?r1, ?r2) {Go through door ?d from room ?r1 to room ?r2}
PushThru(?b, ?d, ?r1, ?r2) {Push box ?b through door ?d from room ?r1 to room ?r2}
SpecialPush(?b) {Specific operator for pushing box ?b from Room2 to Room1}

Given the plan GoThru(Door1, Room1, Room2), SpecialPush(Box1), simply replacing constants by variables results in the plan GoThru(?d, ?r1, ?r2), SpecialPush(?b). This plan is too general since SpecialPush is only applicable when starting in Room2, so having a variable ?r2 as the destination of the GoThru is too general and ?r2 should be replaced by Room2.

Given the plan GoThru(Door1, Room1, Room2), PushThru(Box1, Door1, Room2, Room1), simply replacing constants by variables results in the plan GoThru(?d, ?r1, ?r2), PushThru(?b, ?d, ?r2, ?r1). This plan is too specific since the operators themselves do not demand that the room in which the robot begins (?r1) be the same room into which the box is pushed. The correct generalization is: GoThru(?d, ?r1, ?r2), PushThru(?b, ?d, ?r2, ?r3).

The exact process STRIPS uses to avoid these problems and correctly generalize an example is dependent on its particular representations and inferencing techniques; however, the basic technique is easily captured using the representations discussed in Section II. How STRIPS problems are represented with interconnecting units will be clarified with an example later in the paper. However, assuming they are represented in this fashion, a description of the explanation generalizing algorithm is shown in Table 1. It should be noted that the generalization process in STRIPS was constructed specifically for generalizing robot plans represented in triangle tables and using resolution to prove preconditions. There was no attempt to present a general learning method based on generalizing explanations in any domain. However, the algorithm in Table 1 is a straightforward generalization of the basic process used in STRIPS. The basic technique is to unify each pair of matching patterns in the explanation structure and apply each resulting substitution to all of the patterns in the explanation structure.

for each equality between p_i and p_j in the explanation structure do
  let θ be the MGU of p_i and p_j
  for each pattern p_k in the explanation structure do
    replace p_k with p_kθ

Table 1: STRIPS Explanation Generalization Algorithm
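A runnable rendering of the Table 1 procedure might look like the sketch below. Patterns are flat tuples with "?"-prefixed variables; nested terms and the paper's full unification notation are deliberately omitted, so this is an illustration of the control structure rather than a complete implementation.

def is_var(t): return isinstance(t, str) and t.startswith("?")

def walk(t, env):
    while is_var(t) and t in env:
        t = env[t]
    return t

def unify(a, b):
    # Most general unifier of two flat patterns (returns a substitution).
    env = {}
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y: continue
        if is_var(x): env[x] = y
        elif is_var(y): env[y] = x
        else: return None
    return env

def strips_generalize(patterns, equalities):
    # Table 1: unify each equated pair, then immediately apply the
    # resulting substitution to every pattern in the structure.
    for i, j in equalities:
        theta = unify(patterns[i], patterns[j])
        for k in patterns:
            patterns[k] = tuple(walk(t, theta) for t in patterns[k])
    return patterns

# The GoThru/PushThru equality: the GoThru destination must equal the
# PushThru start room, but nothing ties the final room to the origin.
pats = {"gothru-effect": ("InRoom", "?a1", "?r2"),
        "pushthru-pre":  ("InRoom", "?a2", "?r3")}
print(strips_generalize(pats, [("gothru-effect", "pushthru-pre")]))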
After all of the unifications and substitutions have been made, the result is the generalized explanation since each pattern has been replaced by the most general pattern which allows all of the equality matches in the explanation to be satisfied.

B. EBG
Mitchell, Keller, and Kedar-Cabelli [Mitchell86] outline a technique for generalizing a logical proof that a particular example satisfies the definition of a concept. An example of such a proof is the one in Figure 1 explaining how a particular object satisfies the functional requirements of a cup. Unlike the STRIPS MACROP learning method, EBG is meant to be a general method for learning by generalizing explanations of why an example is a member of a concept. The EBG algorithm is based on regression [Waldinger77] and involves back-propagating constraints from the goal pattern through the explanation back to the leaves of the explanation structure. This process obtains the appropriate generalized antecedents of the proof; however, as pointed out in [DeJong86], it fails to obtain the appropriately generalized goal pattern and remaining explanation. As indicated in [DeJong86] and as originally specified in [Mahadevan85], the remaining generalized explanation must be obtained by starting with the generalized antecedents obtained from regression and rederiving the proof. This propagates constraints forward from the generalized antecedents to the final generalized goal concept. Hence, the correct EBG algorithm involves both a back-propagate and a forward-propagate step, as is shown in the abstract algorithm in Table 2. Once again, the result of the EBG algorithm is the generalized explanation since each pattern has been replaced by the most general pattern which allows all of the equality matches in the explanation to be satisfied.

let g be the goal pattern in the explanation structure
BackPropagate(g)
ForwardPropagate(g)

procedure BackPropagate(p)
  for each pattern p_i supporting p do
    if p_i is equated to some pattern then
      let e be the pattern equated to p_i
      let θ be the MGU of e and p_i
      replace p with pθ
      for each pattern p_j supporting p do
        replace p_j with p_jθ
  for each pattern p_i supporting p do
    if p_i is equated to some pattern then
      let e be the pattern equated to p_i
      let θ be the MGU of e and p_i
      replace e with eθ
      for each pattern p_j supporting e do
        replace p_j with p_jθ
      BackPropagate(e)

procedure ForwardPropagate(p)
  for each pattern p_i supporting p do
    if p_i is equated to some pattern then
      let e be the pattern equated to p_i
      ForwardPropagate(e)
      let θ be the MGU of p_i and e
      replace p_i with p_iθ
  replace p with pθ

Table 2: EBG Explanation Generalization Algorithm

C. EGGS
Finally, there is the EGGS algorithm, which we developed to generalize explanations of the very abstract form defined and used in this paper. The algorithm is quite similar to the abstract STRIPS algorithm and is shown in Table 3. The difference between EGGS and the abstract STRIPS algorithm is that instead of applying the substitutions throughout the explanation at each step, all the substitutions are composed into one substitution γ. After all the unifications have been done, one sweep through the explanation applying the accumulated substitution γ results in the generalized explanation. Table 4 demonstrates this technique as applied to the cup example above. It shows how γ changes as it is composed with the substitutions resulting from each equality.
let γ be the null substitution {}
for each equality between p_i and p_j in the explanation structure do
  let θ be the MGU of p_i and p_j
  let γ be γθ
for each pattern p_k in the explanation structure do
  replace p_k with p_kγ

Table 3: EGGS Explanation Generalization Algorithm

[Table 4: EGGS Applied to the Cup Example]

Applying the final substitution γ to the explanation structure in Figure 2 results in the generalized explanation in Figure 3.

D. Comparison of Explanation Generalizing Algorithms
It is reasonably clear that all of the above algorithms compute the same desired generalized explanation. They all perform a set of unifications and substitutions to constrain the explanation structure into one in which equated patterns unify directly without requiring any substitutions. The difference between them lies in the number of unifications and substitutions required and the order in which they are performed.

Assuming there are e equalities and p patterns in an explanation (p > e), the STRIPS method requires e unifications, each resulting in p applications of a substitution (i.e., ep substitutions). The EBG method does a unification for each equality in both the back-propagating and forward-propagating steps for a total of 2e unifications. The number of substitution applications required by EBG depends on the number of antecedents for each rule, but in the best case (in which each rule has only one antecedent) it requires one substitution for each pattern in both the back-propagating and forward-propagating steps for a total of 2p substitutions. Finally, EGGS requires e unifications to build the global substitution (γ) and p substitutions to apply γ to the explanation structure. Each composition of a substitution with γ also requires a substitution, so there are really e+p overall substitutions. Therefore, EGGS performs fewer unifications and substitutions than either the abstract STRIPS method or EBG. However, this may be misleading since the complexity of each unification or substitution depends on the nature of the patterns involved. Consequently, these figures are not absolute complexity results, but only rough indications of overall complexity.
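In the same toy representation, Table 3 reduces to composing each equality's MGU into a single substitution and making one final sweep; threading one environment through every unification plays the role of the composition γθ. The helpers are repeated from the previous sketch so this one stands alone.

def is_var(t): return isinstance(t, str) and t.startswith("?")

def walk(t, env):
    while is_var(t) and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    # Extend env (the accumulated gamma) with the MGU of a and b.
    env = dict(env)
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y: continue
        if is_var(x): env[x] = y
        elif is_var(y): env[y] = x
        else: return None  # a valid explanation structure never reaches this
    return env

def eggs_generalize(patterns, equalities):
    gamma = {}
    for i, j in equalities:                       # e unifications build gamma
        gamma = unify(patterns[i], patterns[j], gamma)
    return {k: tuple(walk(t, gamma) for t in pat)  # then one final sweep
            for k, pat in patterns.items()}

pats = {"gothru-effect": ("InRoom", "?a1", "?r2"),
        "pushthru-pre":  ("InRoom", "?a2", "?r3")}
print(eggs_generalize(pats, [("gothru-effect", "pushthru-pre")]))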
The Cup example was discussed in detail in section 2. In this section, we will describe the applica- tion of EGGS to the logic circuit design, STRIPS robot planning, and narrative understanding examples. All of the examples are discussed in a longer version of this paper [Mooney86]. A. LEAP example The LEAP system in [Mitchell851 is a learning apprentice in VLSI design which observes the behavior of a circuit designer. It attempts to learn in an explanation-based fashion from circuit examples it observes. Given the task of implementing a circuit which computes the logical function: (a V b) A (c V d), a circuit designer creates a circuit consisting of three NOR gates computing the function: -(-(a V b) V -(c V d)). The system attempts to verify that the given circuit actually computes the desired function. The explanation proving that the function the circuit computes is equivalent to the desired func- tion is shown in Figure 4. Since equated patterns are always identical in specific and generalized explanations, only one of each pair will be shown in this and future figures. In this example, the domain knowledge available to the system is: Equiv(?x,?y) ---) Equiv(-t(y(?x)).?y> Equiv((y?xAy?y),?a) + Equiv(-(?xV?y),?a) Equiv(?x.?a) A Equiv(?y.?b) + Equiv(?xA?y .?aA?b) Equiv(?x,?x) The generalized form of this proof is shown in Figure 5. Had constants simply been replaced by variables, the result would have been overly specific. As a result of the explanation-based approach, the resulting generalization is not sensitive to the fact that the first stage of the circuit involved an aVb and a cVd. For example, the generalization would support using two NAND gates and a NOR gate to AND four things together. Equiv(-(&Vb)),aVb) Equiv(-(&Vd)),cVd) f e Equiv(aVb,aVb) Equiv(cVd,cVd) Figure 4: LEAP Example -- Specific Explanation Esuiv(-(-(?al))A-(7(?bl)).?alA?bl) Equiv(T(-(?bl)),?bl) Equiv(?al,?al) Equiv(?bl,?bl) Figure 5: LEAP Example -- Generalized Explanation B. STRIPS Example The STRIPS example [Fikes72], as discussed earlier, involves a robot, located in Rooml, moving to Room2 through Doorl. picking up a box, and moving back to Room1 with the box. An explanation is constructed for the example using the following action definitions: Action GoThru(?a,?d,?rl ,?r2) Preconditions Effects InRoom(?a,?rl) InRoom(?a,?rZl Connects(?d.?rl.%2 1 PushThru(7a,?o,?d,?rl,‘?r2) InRoom(?a,?rl) InRoom(?a,%2) InRoom(?o,?r 1) InRoom(?o,?r2) Connects(?d,?rl .?r2) An inference rule used in this example is: Connects(?d. ?rl. ?r2) + Connects(?d, ?r2, ?rl). The specific explanation for this plan is shown in Figure 6. The resulting generalization, shown in Figure 7, doesn’t constrain the final destination of the robot to be the same as its room of origin. The generalized plan would support having the robot move the box to a Room3 connected to Room2 rather than back to Rooml. lnRoom(Robot,Rooml) PushThru(Robot,Box,Drl,Rwm2,Rooml) InRoom(Robot,Room2) Connects(Doarl,Room2,Rooml) InRoom(Robot,Rooml) ConnectdDcarl,Rooml,Room2) Figure 6: STRIPS Example -- Specific Explanation InRoom(?a2,?y27) PushThru(?a2,?bl,7d2,?x27,?y27) lnRoom(?a2,?x27) Connectsf?d2,?x27,?y27) t GoThru(?aZ,?dl,?xZl,?x27) Connects(?d2,?y27,?x27) InRoom(?a2,?x21) Connects(?dl.?x21,?x27) Figure 7: STRIPS -- Generalized Explanation C. GENESIS Example The arson example from the GENESIS system[Mooney85] is a more complicated one in which domain specific generaliza- tion rules are used to augment the normal EGGS procedure. 
The specific explanation structure shown in Figure 8 is constructed from the following story, which is presented to the narrative understanding system:

Stan owned a warehouse. He insured it against fire for $100,000. Stan burned the warehouse. He called Prudential and told them it was burnt. Prudential paid him $100,000.

The explanation is that, since Stan's goal was to get money, he insured an unburnt warehouse he owned with Prudential. He then proceeded to burn down the flammable warehouse and to telephone Prudential to tell them about it. Since Prudential believed the warehouse was burnt, the building was insured with them, and they had the requisite money to reimburse Stan, they paid him the indemnity.

[Figure 8: GENESIS -- Specific Explanation]

[Figure 9: GENESIS -- Generalized Explanation]

The generalized explanation can be seen in Figure 9. In addition to the normal EGGS generalization process, hierarchical class inferences (ISA inferences) have been pruned to arrive at an appropriate generalization. The rule is to prune any facts supporting only ISA inferences. For instance, in this example, including the fact that the object burned was a warehouse would make the resulting generalization overly specific and less useful in understanding future narratives. However, it was important to include warehouse in the specific explanation in order to infer that it could be burned. Since the fact that the object was a warehouse only supports the fact that it is a building, this fact is removed from the generalized explanation. Likewise, the fact that the object is a building is also pruned. The fact that the object is an inanimate object cannot be pruned because it is a precondition for burn, insure-object, and indemnify. Consequently, it becomes part of the generalized structure.

V CONCLUSION
In an attempt to formulate a general framework for explanation-based generalization, we have developed a representation and an algorithm which we believe are well suited for learning in a wide variety of domains. The representation of explanations defined in this paper has allowed easy representation of a wide variety of examples from various domains. The EGGS algorithm is an efficient and concise algorithm which we have used to generalize each of these examples with the same generalizing system. Future research issues include techniques for improving generality, such as the pruning of hierarchical class inferences discussed above, and methods for dealing with imperfect and intractable domain theories and other problems outlined in [Mitchell86].

ACKNOWLEDGEMENTS
This research benefitted greatly from discussions with Paul O'Rorke and the direction of Gerald DeJong.

REFERENCES
[DeJong86] G. F. DeJong and R. J. Mooney, "Explanation-Based Learning: An Alternative View," Machine Learning 1, 2 (April 1986).
[Fikes72] R. E. Fikes, P. E. Hart and N. J. Nilsson, "Learning and Executing Generalized Robot Plans," Artificial Intelligence 3 (1972), pp. 251-288.
[Mahadevan85] S. Mahadevan, "Verification-Based Learning: A Generalization Strategy for Inferring Problem-Reduction Methods," Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985, pp. 616-623.
[Mitchell83] T. M. Mitchell, "Learning and Problem Solving," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany, August 1983, pp. 1139-1151.
[Mitchell85] T. M. Mitchell, S. Mahadevan and L. I. Steinberg, "LEAP: A Learning Apprentice for VLSI Design," Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985, pp. 573-580.
[Mitchell86] T. M. Mitchell, R. Keller and S. Kedar-Cabelli, "Explanation-Based Generalization: A Unifying View," Machine Learning 1, 1 (January 1986).
[Mooney85] R. J. Mooney and G. F. DeJong, "Learning Schemata for Natural Language Processing," Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985.
[Mooney86] R. Mooney and S. Bennett, "A Domain Independent Explanation-based Generalizer," Working Paper 71, AI Research Group, Coordinated Science Laboratory, University of Illinois, Urbana, IL, May 1986.
[Nilsson80] N. J. Nilsson, Principles of Artificial Intelligence, Tioga Publishing Company, Palo Alto, CA, 1980.
[O'Rorke84] P. V. O'Rorke, "Generalization for Explanation-based Schema Acquisition," Proceedings of the National Conference on Artificial Intelligence, Austin, TX, August 1984, pp. 260-263.
[O'Rorke85] P. V. O'Rorke, "Constraint Posting and Propagation in Explanation-Based Learning," Working Paper 70, AI Research Group, Coordinated Science Laboratory, University of Illinois, Urbana, IL, November 1985.
[Waldinger77] R. Waldinger, "Achieving Several Goals Simultaneously," in Machine Intelligence 8, E. Elcock and D. Michie (eds.), Ellis Horwood Limited, London, 1977.
[Winston83] P. H. Winston, T. O. Binford, B. Katz and M. Lowry, "Learning Physical Descriptions from Functional Definitions, Examples, and Precedents," Proceedings of the National Conference on Artificial Intelligence, Washington, D.C., August 1983, pp. 433-439.
The Role of Prior Causal Theories in Generalization
Michael Pazzani, Michael Dyer, Margot Flowers
Artificial Intelligence Laboratory
3531 Boelter Hall
UCLA
Los Angeles, CA 90024

Abstract
OCCAM is a program which organizes memories of events and learns by creating generalizations describing the reasons for the outcomes of the events. OCCAM integrates two sources of information when forming a generalization:
• Correlational information, which reveals perceived regularities in events.
• Prior causal theories, which explain regularities in events.
The former has been extensively studied in machine learning. Recently, there has been interest in explanation-based learning, in which the latter source of information is utilized. In OCCAM, prior causal theories are preferred to correlational information when forming generalizations. This strategy is supported by a number of empirical investigations. Generalization rules are used to suggest causal and intentional relationships. In familiar domains, these relationships are confirmed or denied by prior causal theories which differentiate the relevant and irrelevant features. In unfamiliar domains, the postulated causal and intentional relationships serve as a basis for the construction of causal theories.

Introduction
When learning the cause of a particular event, a person can utilize two sources of information. First, a person may detect a similarity between the event and previous events, noticing that whenever C occurs, R also occurs. After noticing such a correlation, a person might induce that C causes R. Secondly, a person may use his prior causal theories. In machine learning, the correlational techniques have been extensively studied (e.g., [4, 12, 23, 18, 32]). More recently, there has been interest in explanation-based learning [9, 22, 24, 25, 28] in which prior knowledge used to understand an example guides the generalization process [31].

For example, the first time a merchant asks the customer if he wants the carbon paper from a credit card purchase, the customer may wonder why the merchant is doing this. A clever person might be able to deduce that by taking the carbon paper, he can prevent a thief from retrieving the carbon paper from the merchant's garbage. Since the carbon paper contains his credit card number, the thief can make mail and phone purchases with the credit card number. Since this is a somewhat complicated inference, it would be advantageous to remember it, rather than rederive it the next time it is needed.

The topic of this paper is how these two sources of information, correlation of features over a number of examples and prior causal theories, can be combined. There are a number of possibilities:
• Correlational information is used exclusively.
• Correlational information is preferred to prior causal theories.
• Prior causal theories are preferred to correlational information.
• Prior causal theories are used exclusively.
In the remainder of this paper, we first discuss some examples of learning programs. Next, we review a number of experiments which assess how correlational information is combined with prior causal knowledge. Finally, we present an overview of our theory as implemented in OCCAM, a program under development at UCLA which integrates correlational and explanation-based learning.

Related Work
Correlational Learning
In this section, we discuss IPP [18], a program which uses correlational information exclusively.
IPP was selected to exemplify correlational learning because a recent extension [20] also adds explanation-based capabilities. IPP is a program that reads, remembers, and makes generalizations from newspaper stories about international terrorism. IPP starts with a set of MOPs [30] which describe general situations such as extortion. After adding examples of events to its memory, it creates more specialized MOPs (spec-MOPs). Spec-MOPs are created by noticing the common features of a number of examples. Not all features of a generalization are treated equally. Some features are predictive; their presence allows IPP to infer the other features if they are not present. The predictive features are those that are unique to that generalization. The features that appear in a large number of generalizations are non-predictive. IPP keeps track of the number of times a feature is included in generalizations. The idea is that the predictive features are likely to be the causes of the non-predictive features.

Since IPP makes no attempt to explain its generalizations, it may include irrelevant information which is coincidentally true in a generalization. A mechanism to identify and correct erroneous generalizations when further examples are added to memory was included in IPP. This mechanism was later extended in UNIMEM, the generalization and memory component of IPP [19].

Explanation-based Learning
GENESIS [25] is an example of a system which exclusively uses its prior causal theories in generalization. GENESIS accepts English language input of stories and produces a conceptual representation of the story. The conceptual representation contains a justification for the outcome of the story and the actions of the actors in terms of causal and intentional relationships. If an actor achieves a goal in a novel manner, the explanation for how the goal was achieved is generalized into a schema which can be used for understanding future events. This generalization process notes which parts of the conceptual representation are necessary for establishing the causal and intentional relationships.

For example, consider how GENESIS learns about kidnapping. Part of this process is determining why the ransom is paid. Given only one example of a kidnapping in which a father pays the ransom for his daughter who is wearing blue jeans, GENESIS incorporates this fact into its schema: there must be a positive interpersonal theme [10] between the victim and the person who pays the ransom. This generalization is possible from just one example because the inference that explains why the ransom is paid contains the precondition that there be a positive interpersonal theme. In contrast, a correlational learner might have to see another kidnapping where the victim was not wearing blue jeans to determine that clothing is not relevant in kidnapping. Similarly, a correlational learner might have to see many examples before the father-daughter relationship could be generalized to any positive interpersonal theme.

At first, explanation-based learning may seem confusing. After all, isn't it just learning what is already known? For example, the schemata which GENESIS constructs encode the same information which is in the inference rules used to understand the story. However, explanation-based learning serves an important role. Understanding using inference rules can be combinatorially explosive.
Consider understanding the following story if there were no kidnapping schema:

The teenage girl who was abducted on her way to school Monday morning was released today after her father left $50,000 in a trash can in a men's room at the bus station.

If there were no kidnapping schema, a very complex chain of inference is necessary to determine why the father put money in a trash can and why the teenage girl was released. In contrast, understanding this story with a kidnapping schema is simpler, since the kidnapping schema records the inferences necessary to understand the relationship between the kidnapper's goal of possessing money and the father's goal of preserving the health of his child. Explanation-based learning produces generalizations which simplify understanding by storing the novel interaction between a number of inference rules. In this respect, the goals (but not the mechanism) of explanation-based learning are similar to those of knowledge compilation [1] and chunking [17]. Of course, in some domains an understander might not have enough knowledge to produce a detailed causal and intentional justification for an example. In such cases, explanation-based learning is not applicable and an understander might have to rely solely on correlational techniques.

UNIMEM
UNIMEM [20] is an extension to IPP which integrates correlational and explanation-based learning. UNIMEM operates by applying explanation-based learning techniques to correlational generalizations rather than instances. Hence, UNIMEM prefers correlational information to prior causal theories. UNIMEM first builds a generalization and identifies the predictive and non-predictive features. Then, it treats the predictive features as potential causes and the non-predictive features as potential results. Backward-chaining production rules representing domain knowledge are utilized to produce an explanation of how the predictive features cause the non-predictive. If no explanation is found for a non-predictive feature, it is considered a coincidence and dropped from the generalization. There are two possible reasons that a predictive feature might not be used to explain non-predictive features: either it is irrelevant to the generalization (and should be dropped from the generalization) or the feature may in fact appear to be a cause (i.e., predictive) due to a small number of examples but in fact be a result. To test the latter case, UNIMEM tries to explain this potential result in terms of the verified predictive features.

The rationale behind using correlational techniques to discover potential causal relationships which are then confirmed or denied by domain knowledge is to control the explanation process. It could be expensive or impractical to use brute force techniques to produce an explanation. Since the predictive features are likely to be causes, UNIMEM's explanation process is more focused. However, since UNIMEM keeps track of the predictability of individual features rather than combinations of features, it can miss some causal relationships. This occurs when no one feature is predictive of another but a conjunction of features is. For example, in kidnapping, when the ransom is demanded from a rich person who has a positive interpersonal relationship with the hostage, one could predict that the ransom would be paid. Of course, if the ransom were demanded from a poor relative or a rich stranger, the prediction should not be made.

Experimental Data
How do people combine correlational information with prior causal theories?
There have been a number of experiments in social psychology which assess the ability to learn causal relationships. Some of these are motivated by Kelley's attribution theory [15, 14]. Kelley proposed that the average person makes causal inferences in a manner analogous to a trained scientist. Kelley's covariation principle is similar to Lebowitz's notion of predictability:*

The covariation principle is based on the assumption that effects covary over time with their causes. The "with" in this statement conceals the important and little-studied problem of the exact temporal relations between cause and effect. The effect must not, of course, precede a possible cause... [14, page 7]

However, this view is not without criticism:

There is no assumption as critical to contemporary attribution theory (or to any theory that assumes the layperson's general adequacy as an intuitive scientist) as the assumption that people can detect covariation among events, estimate its magnitude from some satisfactory metric, and draw appropriate inferences based on such estimates. There is mounting evidence that people are extremely poor at performing such covariation assessment tasks. In particular, it appears that a priori theories or expectations may be more important to the perception of covariation than are the actually observed data configurations. That is, if the layperson has a plausible theory that predicts covariation between two events, then a substantial degree of covariation will be perceived, even if it is present only to a very slight degree or even if it is totally absent. Conversely, even powerful empirical relationships are apt not to be detected or to be radically underestimated if the layperson is not led to expect such a covariation. [26, page 10]

Some work in developmental psychology is also relevant to determining how prior causal theories are used. In particular, since younger children have less knowledge about the world, they are less likely to have prior causal theories.

Perceiving Causality
One of the earliest inquiries into causality was conducted by Michotte [21]. He conducted a series of experiments to determine when people perceive causality. In one experiment, subjects observed images of discs moving on a screen. When the image of one disc bumped a stationary disc and the stationary disc immediately began to move, subjects would state that the bumping caused the stationary disc to move. Michotte called this the Launching Effect. However, if the stationary disc starts moving one fifth of a second after it is bumped, subjects no longer indicate that the bumping caused the motion. Here we have an example of a perfect correlation in which people do not perceive causality. From this experiment, it is clear that correlation alone is not enough to induce causality. A similar finding was reported by Bullock [5]. Children as young as five will not report causality if there is a spatial separation between the potential cause of motion and the potential result.

* The primary difference between predictability and covariation is that covariation implies that there is a unique cause: whenever the result is present, the cause is present, and whenever the cause is present, the result is present. In contrast, predictability only requires that whenever the cause is present, the result is present.
Illusory Correlation
Chapman and Chapman performed a series of tests to determine why practicing clinical psychologists believe that certain tests with no empirical validity are reliable predictors of personality traits. For example, in one study [6], clinical psychologists were asked about their experience with the Draw-a-Person Test (DAP). In this test, a patient draws a picture of a person which is analyzed by the psychologist. Although the test has repeatedly been proved to have no diagnostic value, 80% of the psychologists reported that men worried about their manliness draw a person with broad shoulders and 82% stated that persons worried about their intelligence draw an enlarged head.

In the second experiment in this study, the Chapmans asked subjects (college undergraduates) to look at 45 DAP tests paired with the personality trait of the person who (supposedly) drew them. The subjects were asked to judge what sort of picture a person with certain personality traits drew. Although the Chapmans paired the pictures with traits so that there would be no correlation, 76% of the subjects rediscovered the invalid diagnostic sign that men worried about their manliness were likely to draw a person with broad shoulders, and 55% stated that persons worried about their intelligence drew an enlarged head.

In the next experiment, the Chapmans asked another set of subjects about the strength of the tendency for a personality trait to call to mind a body part. For example, subjects reported a strong association between shoulders and manliness, but a weak association between ears and manliness. For four of the six personality traits studied, the body part which was the strongest associate was the one most commonly reported as having diagnostic value by clinical psychologists and subjects. In a final experiment, subjects were presented DAP tests which were negatively correlated with their strong associates. In this study, subjects still found a correlation between personality traits and their strong associates, but to a lesser degree (e.g., 50% rather than 76% reported that broad shoulders was a sign of worrying about manliness). The Chapmans labeled this phenomenon "illusory correlation."
After some more examples, a similarity is noticed about the kidnapping of blond infants. This coincidence starts an explanation process which explains the choice of victim to avoid a possible goal failure, since infants cannot testify. Because the hair color of the victim was not needed to explain the choice of victim, it is not included in the generalization. Since there is a lot of background knowledge, OCCAM can use explanation-based techniques in this domain. Some of the knowledge of family relationships (e.g., parents have a goal of preserving the health of their children) was learned using correlational techniques. In another domain, OCCAM starts with no background knowledge and is presented with data from a protocol of a 4-year old child trying to figure out why she can inflate some balloons but not others. Since there are no prior causal theories in this domain, OCCAM uses correlation techniques to build a causal theory. Presumedly, the older children had a prior causal theory which facilitated their learning: a tetter-totter falls on the heavier side and the longer side is likely to be the heavier side. The younger children had to rely solely on correlation. Their performance on learning in the relevant and irrelevant conditions were comparable to the older children in the irrelevant condition. Generalization in OCCAM In this section we present the generalization strategy used by OCCAM. When a new event is added to memory the following generalization process occurs: While it may not be a causal theory in the strongest sense, it does appear that rats have an innate mechanism to relate illness to the flavor of food: Since flavor is closely related to chemical composition, natural selection wouldfavor associative mechanisms relating flavor to the aftereffects of ingestion. [11 page 7951 1. Find the most specific generalization applicable to the new event. 2. Recall previous events which are similar to the new event. Individual events in OCCAM’s memory are organized by generalizations. An individual event is indexed by its features which differ from the norm of the generalization [161. Events similar to the new event may be found by following indices indicated by the features of the new event. After similar events are found, a decision must be made to determine if the new event and other similar events are worth generalizing. DeJong [9] gives a number of criteria to determine if a single event is worth generalizing (e.g., Does the event achieve a goal in a novel manner)..,To his list, we add an event should be generalized if a similar event has been seen before. The idea here is that if two similar events have been seen, it is possible that more similar events will occur in the future. It is advantageous to create a generalization to facilitate understanding of future events. If the new event is not gener&ed, it is indexed under the most specific generalization found in Step 1. Otherwise, generalization is attempted:** Analysis 3. Postulate an explanation for the similarities among the events. There is considerable evidence that in people, prior causal theories are preferred to correlational information. Why should this be so? Should we design computer learning programs to do the same? There are a number of advantages to preferring causal theories: l As demonstrated by Ausubel and Schiff, prior knowledge can facilitate the learning process. Fewer examples are necessary to arrive at the correct generalization. recalled and compared in correlational learning. 
Generalization in OCCAM

In this section we present the generalization strategy used by OCCAM. When a new event is added to memory the following generalization process occurs:

1. Find the most specific generalization applicable to the new event.

2. Recall previous events which are similar to the new event.

Individual events in OCCAM's memory are organized by generalizations. An individual event is indexed by its features which differ from the norm of the generalization [16]. Events similar to the new event may be found by following indices indicated by the features of the new event. After similar events are found, a decision must be made to determine if the new event and other similar events are worth generalizing. DeJong [9] gives a number of criteria to determine if a single event is worth generalizing (e.g., does the event achieve a goal in a novel manner?). To his list, we add: an event should be generalized if a similar event has been seen before. The idea here is that if two similar events have been seen, it is possible that more similar events will occur in the future. It is advantageous to create a generalization to facilitate understanding of future events. If the new event is not generalized, it is indexed under the most specific generalization found in Step 1. Otherwise, generalization is attempted:**

3. Postulate an explanation for the similarities among the events.

Generalization rules postulate causal or intentional relationships. Typically a generalization rule suggests a causal explanation for a temporal relationship. For example, the simplest generalization rule is "If an action always precedes a state, postulate the action causes the state". Generalization rules serve the same purpose that predictability serves in UNIMEM: to focus the explanation process. However, the experimental evidence reviewed earlier seems to cast doubt on the assertion that people use predictability as the sole indicator of causality. Instead, OCCAM uses rules which focus on answering two important questions:

• What caused the outcome to occur?
• Why do people do the things they do?

If a potential explanation (in terms of human motivation or physical causality) for the similarity among a number of events is found, the next step is to verify the explanation:

4. Postulated causal and intentional explanations are confirmed or denied using prior causal theories.

If prior causal theories confirm the explanation, a new generalization is created. This type of generalization is called an explanatory generalization. As in explanation-based learning, the features of the new generalization are those which are necessary to establish the causal relationship. The relevant features depend on the prior causal theories. For example, some people's causal theories could explain the high crime rate in certain areas by the racial make-up of the area. Others' causal theories will place the blame on the high unemployment in the area.

We distinguish another kind of generalization: a tentative generalization. A tentative generalization is one whose causal relationship is proposed by generalization rules but not confirmed by prior causal theories. In a tentative generalization, the relationships postulated by the generalization rules are assumed to hold until they are contradicted by later examples. The verification of a tentative generalization occurs after Step 1:

1.5. If the most specific generalization is tentative, compare the new event to the prediction made by the generalization.

** It is important to note that the generalization algorithm can operate on a single event. In this case, the "similar" features are simply all the features, and "always precedes" is interpreted as "precedes".
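Steps 1 through 4, together with the check numbered 1.5, amount to one control loop. The sketch below is our paraphrase of that loop under several simplifying assumptions (events are flat feature dictionaries; postulate and confirms stand in for the generalization rules and prior causal theories); none of the identifiers are OCCAM's own.

    class Generalization:
        def __init__(self, index, prediction, tentative):
            self.index = index            # features the generalization is indexed by
            self.prediction = prediction  # features it predicts
            self.tentative = tentative
            self.events = []

    def applicable(g, event):
        return all(event.get(f) == v for f, v in g.index.items())

    def predicts(g, event):
        return all(event.get(f) == v for f, v in g.prediction.items())

    orphans = []   # events not yet indexed under any generalization

    def add_event(event, memory, postulate, confirms):
        # Step 1: find the most specific applicable generalization.
        candidates = [g for g in memory if applicable(g, event)]
        g = max(candidates, key=lambda c: len(c.index), default=None)
        # Step 1.5: check a tentative generalization against the event.
        if g is not None and g.tentative and not predicts(g, event):
            memory.remove(g)              # abandoned (see the next section)
            g = None
        # Step 2: recall similar events through the indices of g.
        similar = g.events if g is not None else orphans
        if not similar:                   # not worth generalizing yet
            similar.append(event)
            return None
        # Step 3: postulate an explanation for the similarities.
        explanation = postulate(similar + [event])
        if explanation is None:
            similar.append(event)
            return None
        # Step 4: confirm or deny it against prior causal theories.
        new_g = Generalization(explanation["index"], explanation["prediction"],
                               tentative=not confirms(explanation))
        new_g.events = similar + [event]
        memory.append(new_g)
        return new_g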
The primary difference between a tentative generalization and an explanatory generalization is how they are treated when a new event contradicts the generalization. In this case, a tentative generalization will be abandoned. However, if an explanatory generalization is contradicted, an attempt will be made to explain why the new event differs from previous events. For example, OCCAM constructs an explanatory generalization which states that in kidnapping the kidnapper releases the victim to keep his end of the trade demanded in the ransom note. After this generalization is made, OCCAM is presented with a kidnapping example in which the victim is murdered. Rather than abandoning the previous generalization, it finds an explanation for murdering the victim: to prevent the victim from testifying against the kidnapper. In contrast, consider what happens if a tentative generalization were built by correlational means describing the release of the hostage: if a hostage were killed in a later kidnapping, the tentative generalization would be contradicted and abandoned. The perseverance of explained generalizations is supported by a study by Anderson et al. [2] in which subjects who were requested to explain a relationship showed a greater degree of perseverance after additional information than those who were not so requested. Later in this paper, we will describe the mechanisms used by OCCAM to confirm tentative generalizations.

An example of OCCAM learning with and without prior causal theories should help to clarify how generalization rules and causal theories interact to create explanatory and tentative generalizations.

Inflating Balloons

In this example, the initial memory is essentially empty. The examples are input as conceptual dependency representations [29] of the events taking place in Figure 1. First, L. successfully blowing up a red balloon is added to memory. Next, the event which describes L. unsuccessfully blowing up a green balloon is added to memory and similarities are noticed. DIFFERENT-FEATURES is an applicable generalization rule:

    DIFFERENT-FEATURES
    If two actions have different results, and they are performed on similar objects with some different features, assume the differing features enable the action to produce the result.

This generalization rule produces a question for the explanation process: "Does the state of a balloon being red enable the balloon to be inflated when L. blows air into it?". This cannot be confirmed, but it is saved as a tentative generalization. Associated with this generalization is the explanation which describes the difference in results as enabled by the color of the balloon.

A green balloon successfully blown up by L. after M. deflated it is added to memory. It contradicts the previously created tentative generalization, which is removed from memory. Next, a generalization rule is applied:

    PREVIOUS-ACTION
    If ACTION-1 always precedes ACTION-2 which results in STATE-2, assume ACTION-1 results in STATE-1 which enables ACTION-2 to produce STATE-2.

In this case, ACTION-1 is M. deflating the balloon, ACTION-2 is L. blowing into the balloon, and STATE-2 is that the balloon is inflated. The confirmation process attempts to verify that deflating a balloon results in a state that enables L. to inflate the balloon. However, this fails and a new tentative generalized event is created*** which saves the postulated explanation. Note that if there existed a proper causal theory, an explanatory generalization could be created which would save the information that STATE-1 is that the balloon is stretched.

*** OCCAM does not always create a tentative generalization if it cannot create an explanatory one. See [27] for the details.
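Both rules share one shape: match a temporal pattern across the examples and emit a candidate causal link for the explanation process to check. A sketch under a hypothetical encoding, in which each example is a list of (action, object-features, result) steps; the encoding and names are ours, not OCCAM's conceptual dependency forms.

    def different_features(ex1, ex2):
        # Same final action, different results, differing object features:
        # propose the differing features as enablers.
        (a1, f1, r1), (a2, f2, r2) = ex1[-1], ex2[-1]
        if a1 == a2 and r1 != r2 and f1 != f2:
            differing = {k: v for k, v in f1.items() if f2.get(k) != v}
            return {"enabler": differing, "action": a1, "result": r1}
        return None

    def previous_action(example):
        # An action step directly precedes the step whose result we want
        # to explain: propose that it produced an enabling state.
        if len(example) >= 2:
            (a1, _, _), (a2, _, r2) = example[-2], example[-1]
            return {"enabling_action": a1, "action": a2, "result": r2}
        return None

    red_ok   = [("blow", {"color": "red"}, "inflated")]
    green_no = [("blow", {"color": "green"}, "not-inflated")]
    print(different_features(red_ok, green_no))
    # -> {'enabler': {'color': 'red'}, 'action': 'blow', 'result': 'inflated'}

    deflate_then_blow = [("deflate", {}, "deflated"),
                         ("blow", {"color": "green"}, "inflated")]
    print(previous_action(deflate_then_blow))
    # -> {'enabling_action': 'deflate', 'action': 'blow', 'result': 'inflated'}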
    Mike is blowing up a red balloon.
    Lynn: "Let me blow it up."
    Mike lets the air out of the balloon and hands it to Lynn.
    Lynn blows up the red balloon.
    Lynn picks up a green balloon and tries to inflate it.
    Lynn cannot inflate the green balloon.
    Lynn puts down the green balloon and looks around.
    Lynn: "How come they only gave us one red one?"
    Mike: "Why do you want a red one?"
    Lynn: "I can blow up the red ones."
    Mike picks up a green balloon and inflates it.
    Mike lets the air out of the green balloon and hands it to Lynn.
    Mike: "Try this one."
    Lynn blows up the green balloon.
    Lynn gives Mike an uninflated blue balloon.
    Lynn: "Here, let's do this one."

Figure 1: Protocol of Lynn (age 4) trying to blow up balloons.

Economic Sanctions

OCCAM is provided with a large amount of background knowledge in its newest domain of economic sanctions. In this section, we illustrate how a specific generalization rule is useful both in the previous example and in explaining the effects of economic sanctions. Due to space limitations we must ignore the memory issues. We assume the initial memory contains the following example summarized from [13]:

    In 1980, the US refused to sell grain to the USSR unless the USSR withdrew troops from Afghanistan. The USSR paid a higher price to buy grain from Argentina.

When the following event is added to memory, the generalization process is initiated when a similarity is noticed between the new event and a previous event:

    In 1983, Australia refused to sell uranium to France unless France ceased nuclear testing in the South Pacific. France paid a higher price to buy uranium from South Africa.

Here, PREVIOUS-ACTION suggests an explanation for the similarities. In this case, ACTION-1 is identified as the US or Australia refusing to sell a product, ACTION-2 is identified as the USSR or France buying the product from another country, and STATE-2 is the USSR or France possessing the product. PREVIOUS-ACTION postulates that ACTION-1 (refusing to sell the product) resulted in STATE-1 which enabled ACTION-2 (purchasing the product for more money from a different country) to result in STATE-2 (possessing the product). OCCAM's causal theories in the economic domain identify STATE-1 as the willingness to pay more money for the product. Therefore, OCCAM constructs the explanatory generalization in Figure 2 from these two examples.

In this generalization country-1 is generalized from the US and Australia. A purely correlational approach could find a number of features in common between these two countries (e.g., both have a native language of English, both have a democratic government, etc.). However, the explanation-based approach finds relevant only that both countries are suppliers of a product-1. Similarly, country-3 is generalized from Argentina and South Africa, but only two of their common features are relevant: that they supply product-1 and that they have a business relationship with country-2.

    coerce ACTOR  country-1
           VICTIM country-2
           DEMAND goal-1
           THREAT sell ACTOR  (country-1 A-SUPPLIER-OF product-1)
                       TO     country-2
                       OBJECT product-1
                       AMOUNT amount-1
                       MODE   neg
           RESULT sell ACTOR  (country-3 A-SUPPLIER-OF product-1
                                         BUSINESS-REL  country-2)
                       TO     country-2
                       OBJECT product-1
                       AMOUNT amount-2
                  RESULT goal-failure GOAL  goal-1
                                      ACTOR country-1

Figure 2: Explanatory generalization created by OCCAM to explain one possible outcome of economic sanctions.

This generalization indicates that country-1 refusing to sell a product to country-2 to achieve goal-1 will fail to achieve this goal if there is a country-3 which supplies the product and has a business relationship with country-2.
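The role-filler structure of Figure 2 maps directly onto nested dictionaries, which makes the slot sharing explicit. The rendering below is purely illustrative; the "?" prefix marking generalized variables is our convention, not OCCAM's notation.

    sanction_generalization = {
        "head": "coerce",
        "ACTOR": "?country-1",
        "VICTIM": "?country-2",
        "DEMAND": "?goal-1",
        "THREAT": {
            "head": "sell", "MODE": "neg",
            "ACTOR": {"var": "?country-1", "A-SUPPLIER-OF": "?product-1"},
            "TO": "?country-2", "OBJECT": "?product-1", "AMOUNT": "?amount-1",
        },
        "RESULT": {
            "head": "sell",
            "ACTOR": {"var": "?country-3", "A-SUPPLIER-OF": "?product-1",
                      "BUSINESS-REL": "?country-2"},
            "TO": "?country-2", "OBJECT": "?product-1", "AMOUNT": "?amount-2",
            "RESULT": {"head": "goal-failure", "GOAL": "?goal-1",
                       "ACTOR": "?country-1"},
        },
    }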
Confirming Tentative Generalizations

In many respects, tentative generalizations are treated in the same manner as explanatory generalizations. Both can be used to predict or explain the outcomes of other events. For example, after several examples of parents helping their children and some strangers not assisting children, OCCAM builds a tentative generalization which describes the fact that parents have the goal of preserving the health of their children. This tentative generalization is used as a causal theory to explain why a parent pays the ransom in a kidnapping. However, as stated earlier, tentative generalizations are treated differently when new evidence contradicts the generalizations. When a tentative generalization is confirmed, it becomes an explanatory generalization. There are a number of strategies which are useful in confirming a tentative generalization.

• Increase confidence with new examples. When new examples conform to the prediction made by a generalization, the confidence in the generalization is increased. When the confidence exceeds a threshold, the generalization is confirmed. This strategy was the mechanism utilized by IPP [18].

• Increase confidence when a tentative generalization is used as an explanation for another generalization. When the explanation stored with a tentative generalization confirms a postulated causal relationship, the confidence in the tentative generalization is increased. For example, when OCCAM uses the tentative generalization that parents have a goal of preserving the health of their children to explain why the parent pays the ransom in kidnapping, the confidence in the tentative generalization is increased.

• Search for competing hypotheses. If no competing hypothesis can be found to explain the regularities in the data, the confidence of the generalization is increased. In OCCAM, searching for competing hypotheses consists of trying other generalization rules. If no other generalization rules are applicable, the confidence in the tentative generalization is increased.

The above strategies all increase the confidence in a tentative generalization. We have experimented with different values for the increment of confidence and the threshold. More research needs to be done in this area to determine reasonable values for these parameters. There is some evidence [26] that these parameters are not constants but a function of the vividness of the new information.
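Each of the three strategies above reduces to nudging a confidence value toward a threshold. A minimal sketch of that mechanism follows; the increments and the threshold are arbitrary illustrative values, since, as just noted, reasonable settings for these parameters remain an open question.

    CONFIRM_AT = 1.0          # threshold: arbitrary illustrative value

    BOOSTS = {                # arbitrary illustrative increments
        "new-conforming-example":  0.2,
        "used-as-explanation":     0.4,
        "no-competing-hypothesis": 0.3,
    }

    def boost(generalization, evidence):
        # Raise confidence; promote to explanatory at the threshold.
        generalization["confidence"] += BOOSTS[evidence]
        if generalization["confidence"] >= CONFIRM_AT:
            generalization["tentative"] = False
        return generalization

    g = {"confidence": 0.3, "tentative": True}
    boost(g, "used-as-explanation")
    boost(g, "no-competing-hypothesis")   # 0.3 + 0.4 + 0.3 -> confirmed
    print(g)                              # {'confidence': 1.0, 'tentative': False}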
The following strategies can confirm a tentative generalization with just one additional example:

• Specify intermediate states or goals. Typically, a tentative generalization has intermediate states or goals which are not identified. For example, the tentative generalization describing which balloons can be inflated by L. contains an intermediate state which results from deflating the balloon and which enables L. to inflate the balloon. If this intermediate state were identified in future examples, the tentative generalization could be confirmed. Similarly, cold weather was identified as a tentative cause for the Space Shuttle accident. Identifying a particular component whose performance is affected by cold weather and whose failure would account for the accident would confirm the cause.

Finally, we have identified but not yet implemented another strategy to confirm tentative generalizations:

• Ask an authority. Many children are constantly asking for explanations. There are two types of questions asked: verification (e.g., "Does X cause Y?"), which corresponds to confirming a tentative generalization in OCCAM, and generation (e.g., "Why X?"), which corresponds to generating and confirming an explanation.

Future Directions

Currently, OCCAM is a passive learning program which learns as it adds new observations to its memory. We are in the process of making OCCAM play a more active role in the learning process. There are a number of ways that OCCAM can take the initiative:

• Ask questions. As discussed previously, a tentative generalization may be confirmed by asking an authority (e.g., a parent). However, the explanation provided by the authority may report the cause of an event without illustrating the justification used by the authority to attribute causality. Recall that in explanation-based learning the preconditions of the inference rules used to deduce causal relationships determine what features are relevant (i.e., should be included in a generalization). When the explanation is provided by another person it may not include these preconditions. We intend to make use of similarities and differences between examples to induce these preconditions.

• Suggest experiments. Recall that one mechanism to confirm a tentative generalization is to search for other explanations. If there are two (or more) possible explanations, they may make different predictions. We plan to extend OCCAM to suggest an experiment which would distinguish between the competing explanations [8].

Conclusion

OCCAM is a program which integrates two sources of information to build generalizations describing the causes or motivations of events. The design of OCCAM was influenced by a number of studies which indicate that prior causal theories are more influential than correlational information in attributing causality. The combination of explanation-based and correlational learning techniques used by OCCAM improves on previous learning programs in the following ways:

• Purely correlational learning programs such as IPP [18] require a large number of examples to determine which similarities among the examples are relevant and which are coincidental. In contrast, OCCAM builds explanatory generalizations which describe novel interactions among its causal theories. These causal theories indicate what features are relevant.

• Explanation-based learning programs such as GENESIS [25] must have a complete causal theory to generalize. In contrast, OCCAM's generalization rules enable the learning of causal theories. In addition, these generalization rules serve to focus the explanation process.

• UNIMEM [20] integrates correlational and explanation-based learning by using a strategy which prefers correlational information to prior causal theories: explanation-based learning is used to rule out coincidental similarities in correlational generalizations. Empirical evidence indicates that in people, causal theories are preferred to correlational information. One reason for this bias is that correlating features over a number of examples may exceed the limitations of a person's memory. Due to the limitations of computer memory, UNIMEM does not perform correlation for combinations of features. Therefore, it cannot learn that a conjunction of features results in an outcome.
In contrast, OCCAM's learning strategy naturally discovers when a conjunction of features results in an outcome. This occurs when it forms an explanatory generalization by recording the interaction between two or more inference rules. If the preconditions of these inference rules rely on different features of the same entity, then the conjunction of these features is relevant.

In the generalization theory implemented in OCCAM, prior causal theories are used to infer causality. Generalizations are built to record novel chains of inference. Correlational information has a role similar to temporal information: to suggest or confirm causal theories.

References

[1] Anderson, J.R. Knowledge Compilation: The General Learning Mechanism. In Proceedings of the International Machine Learning Workshop. Monticello, Illinois, 1983.

[2] Anderson, C.A., Lepper, M.R., & Ross, L. The perseverance of social theories: The role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology 39:1037-1049, 1980.

[3] Ausubel, D.M. and Schiff, H.M. The Effect of Incidental and Experimentally Induced Experience on the Learning of Relevant and Irrelevant Causal Relationships by Children. Journal of Genetic Psychology 84:109-123, 1954.

[4] Bruner, J.S., Goodnow, J.J., & Austin, G.A. A Study of Thinking. Wiley, New York, 1956.

[5] Bullock, Merry. Aspects of the Young Child's Theory of Causality. PhD thesis, University of Pennsylvania, 1979.

[6] Chapman, L.J., & Chapman, J.P. Genesis of Popular but Erroneous Diagnostic Observations. Journal of Abnormal Psychology 72:193-204, 1967.

[7] Chapman, L.J., & Chapman, J.P. Illusory Correlation as an Obstacle to the Use of Valid Psychodiagnostic Signs. Journal of Abnormal Psychology 74:271-280, 1969.

[8] Cohen, Paul. Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach. Pitman Publishing Inc., Marshfield, Mass., 1985.

[9] DeJong, G. Acquiring Schemata Through Understanding and Generalizing Plans. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence. Karlsruhe, West Germany, 1983.

[10] Dyer, M. In Depth Understanding. MIT Press, 1983.

[11] Garcia, J., McGowan, B., Erwin, F.R., & Koelling, R.A. Cues: Their relative effectiveness as reinforcers. Science 160:794-795, 1968.

[12] Granger, R. & Schlimmer, J. Combining Numeric and Symbolic Learning Techniques. In Proceedings of the Third International Machine Learning Workshop. Skytop, PA, 1985.

[13] Hufbauer, G.C., & Schott, J.J. Economic Sanctions Reconsidered: History and Current Policy. Institute for International Economics, Washington, D.C., 1985.

[14] Kelley, Harold H. Causal Schemata and the Attribution Process. In Jones, Edward E., Kanouse, David E., Kelley, Harold H., Nisbett, Richard E., Valins, Stuart & Weiner, Bernard (editor), Attribution: Perceiving the Causes of Behavior, pages 151-174. General Learning Press, Morristown, NJ, 1971.

[15] Kelley, Harold H. The Process of Causal Attribution. American Psychologist: 107-128, February, 1983.

[16] Kolodner, J. Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Lawrence Erlbaum Associates, Hillsdale, NJ, 1984.

[17] Laird, J., Rosenbloom, P., and Newell, A. Towards Chunking as a General Learning Mechanism. In Proceedings of the National Conference on Artificial Intelligence. American Association for Artificial Intelligence, Austin, Texas, 1984.

[18] Lebowitz, M. Generalization and Memory in an Integrated Understanding System.
Computer Science Research Report 186, Yale University, 1980.

[19] Lebowitz, M. Correcting Erroneous Generalizations. Cognition and Brain Theory 5(4), 1982.

[20] Lebowitz, M. Integrated Learning: Controlling Explanation. Cognitive Science, 1986.

[21] Michotte, A. The Perception of Causality. Basic Books, Inc., New York, 1963.

[22] Minton, S. Constraint-based Generalization: Learning Game-Playing Plans from Single Examples. In Proceedings of the National Conference on Artificial Intelligence. Austin, TX, 1984.

[23] Mitchell, T. Generalization as Search. Artificial Intelligence 18(2), 1982.

[24] Mitchell, T., Kedar-Cabelli, S. & Keller, R. A Unifying Framework for Explanation-based Learning. Technical Report, Rutgers University, 1985.

[25] Mooney, R. & DeJong, G. Learning Schemata for Natural Language Processing. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Los Angeles, CA, 1985.

[26] Nisbett, Richard & Ross, Lee. Human Inference: Strategies and Shortcomings of Social Judgements. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1978.

[27] Pazzani, Michael. Explanation and Generalization-based Memory. In Proceedings of the Seventh Annual Conference of the Cognitive Science Society. Irvine, CA, 1985.

[28] Pazzani, M. Refining the Knowledge Base of a Diagnostic Expert System: An Application of Failure-Driven Learning. In Proceedings of the National Conference on Artificial Intelligence. American Association for Artificial Intelligence, 1986.

[29] Schank, R.C. & Abelson, R.P. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, Hillsdale, NJ, 1977.

[30] Schank, R. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press, 1982.

[31] Soloway, E. Learning = Interpretation + Generalization: A Case Study in Knowledge-directed Learning. PhD thesis, University of Massachusetts at Amherst, 1978.

[32] Vere, S. Induction of Concepts in the Predicate Calculus. In Proceedings of the Fourth International Joint Conference on Artificial Intelligence. Tbilisi, USSR, 1975.
Comments on Kornfeld's "Equality for Prolog": e-unification as a mechanism for augmenting the Prolog search strategy

E. W. Elcock and P. Hoddinott
Department of Computer Science
The University of Western Ontario
London, Ontario, Canada N6A 5B7

The work was funded in part by an NSERC grant and by IBM Corporation.

Abstract

The search strategy of standard Prolog can lead to a situation in which a predicate has to be evaluated in circumstances where it has an infeasibly large number of instantiations. The work by Kornfeld [8] addressed this important problem by means of an extension of unification which allows Prolog to be augmented by what is essentially a (non-standard) equality theory. This paper uses the notion of the general procedure introduced by van Emden and Lloyd [12] to formalize Kornfeld's work. In particular, the formalization is used to make a careful analysis and evaluation of Kornfeld's solution to the problem of delayed evaluation.

1. Introduction

The work by van Emden and Lloyd [12] shows how the notion of a general procedure augmented by a particular equality theory can be used to make overt the logical framework common to apparently quite different systems. In their paper they present an account of Prolog II [2] as essentially the general procedure together with an appropriate equality theory. As they remark, this is particularly interesting in that Colmerauer's own presentation of Prolog II is one in which Prolog II is regarded as a system for manipulating infinite trees, and presented as a complete departure from a system based on first order logic. The paper by van Emden and Lloyd makes reference to other novel work which attempts to incorporate equality into Prolog programming. In particular they refer to the work by Kornfeld [8]. Kornfeld's work in this area is particularly provocative. As Goguen [6] points out, Kornfeld gives no theoretical justification for his approach, and it is in fact incomplete - although Goguen gives no clarification of his criticism of incompleteness that would not apply to all feasible logic programming implementations. Nevertheless, the underlying notions of Kornfeld's work are intuitively appealing. In what follows, we use the method of van Emden and Lloyd to elaborate Goguen's remark, and at least attempt to expose what is hidden in Kornfeld's work.

Section 2 contains a brief reminder of the general procedure augmented by an equality theory. Section 3 briefly introduces Kornfeld's work, and Section 4 shows how the central notion can be formalized in the framework of the general procedure. The remaining sections focus on Kornfeld's use of an equality theory as a mechanism for augmenting the Prolog search strategy to handle delayed evaluation of goal predicates.

2. The General Procedure

The following description of what van Emden and Lloyd call the general procedure is taken from [12].

Definition. The homogeneous form of a clause

    p(t1, ..., tn) ← B

is

    p(x1, ..., xn) ← x1 = t1, ..., xn = tn, B

where x1, ..., xn are distinct variables not appearing in the original clause.

Definition. Let P be a program. The homogeneous form P' of P is the collection of the homogeneous forms of each of its clauses.

Definition. An atomic formula whose predicate symbol is "=" is called an equation.
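Since the homogeneous form is a purely syntactic transformation, it is easy to mechanize. A minimal Python sketch over an assumed term encoding (tuples for compound terms, strings for variables and constants); the encoding is ours, not van Emden and Lloyd's.

    # Terms: ("f", arg1, ...) is a compound term; plain strings are
    # variables or constants. A clause is (head, [body literals]).

    def homogeneous(clause, prefix="X"):
        (functor, *args), body = clause
        fresh = [prefix + str(i + 1) for i in range(len(args))]
        equations = [("=", x, t) for x, t in zip(fresh, args)]
        return ((functor, *fresh), equations + body)

    # mem(A, [A|L]) <- .   encoded with "." as the list constructor:
    clause = (("mem", "A", (".", "A", "L")), [])
    print(homogeneous(clause))
    # -> (('mem', 'X1', 'X2'), [('=', 'X1', 'A'), ('=', 'X2', ('.', 'A', 'L'))])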
We now describe the general procedure. We call it "general" because, depending on the theory of equality invoked after it, we get Prolog, Prolog II, or other specialized languages. The general procedure uses the homogeneous form P' of P, and produces an SLD-derivation [9, 11]. It consists of constructing, from an initial goal G, an SLD-derivation using input clauses from P', while never selecting an equation. The general procedure terminates if a goal consisting solely of equations is reached. Note that because of the homogeneous form of P', the general procedure never constructs bindings for the variables in the initial goal.

For a particular language, the general procedure needs to be supplemented by a theory E of equality. E is used to prove the equations resulting from the general procedure. During the proving of the equations, substitutions for variables in the initial goal are produced. If the equation-solving process is successful (that is, the empty goal is eventually produced), then these substitutions for the variables in the initial goal are output as the answer. Van Emden and Lloyd show that with an equality theory consisting solely of the reflexive axiom (x = x), the general procedure is equivalent to Prolog. To the extent that one adds other axioms of "equality", one creates other applied logics.

3. Kornfeld's Implementation of an Equality for Prolog

Kornfeld modified a Lisp-embedded Prolog system [7] to make an intuitively appealing change in the behaviour of the interpreter on unification failure. Kornfeld's informal description of the change can be paraphrased as follows:

    If the interpreter attempts to unify two terms Φ and Ψ and fails, then the interpreter attempts to establish the goal equals(Φ,Ψ), where equals is a user-defined predicate in the Prolog program. If this goal succeeds, the resulting bindings are added to the binding environment and the original unification is deemed to have succeeded. If the goal fails then the goal equals(Ψ,Φ) is tried. If this succeeds then the resulting bindings are added to the binding environment and the original unification is deemed to have succeeded; otherwise the original unification is deemed to have failed.

We will call this mechanism extended unification (e-unification). This informal description leaves much to be desired in the way of specificity: much of the operational semantics has to be induced from Kornfeld's examples. For this reason, in the remainder of this section we consider two of Kornfeld's examples to motivate our interpretation of what we perceive is the intended operational semantics of e-unification.
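Under this paraphrase, e-unification is a three-way attempt wrapped around ordinary unification. The Python sketch below shows only that control structure; unify and prove_equals are assumed callbacks into a host interpreter, and nothing here is Kornfeld's actual implementation.

    def e_unify(t1, t2, unify, prove_equals):
        # Try standard unification first.
        theta = unify(t1, t2)
        if theta is not None:
            return theta
        # On failure, try the user-defined equals predicate both ways.
        theta = prove_equals(t1, t2)
        if theta is not None:
            return theta
        return prove_equals(t2, t1)

    # Toy stand-ins: "unification" is equality of ground terms, and the
    # equals relation knows one fact.
    unify = lambda a, b: {} if a == b else None
    equals = lambda a, b: {} if (a, b) == ("rat(4,1)", "4") else None
    print(e_unify("rat(4,1)", "4", unify, equals))   # {} : succeeds via equals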
Consider the introduction of the concept of rationals as a quotient set of ordered pairs of integers. To do this we define an equivalence relation over ordered pairs. This equivalence relation can then be used to define equality over rationals. Kornfeld does this by introducing the axiom

    Elrat: equals(rat(XN,XD), rat(YN,YD)) ← times(XN,YD,Z), times(XD,YN,Z).

Consider now the first of Kornfeld's examples illustrating e-unification. The terms rat(2,3) and rat(X,6) are said to be e-unifiable. We are told that (standard) unification is attempted first and fails. Kornfeld tells us that this failure results in e-unification generating

    ← equals(rat(2,3), rat(X,6))

and that this goal succeeds by Elrat, binding X to 4.

Let us now consider this example in more detail. Standard unification would attempt to unify rat(2,3) with rat(X,6) as follows: starting with the leftmost symbol of each term, the algorithm would find a disagreement at the third symbol and would attempt to unify 2 with X. This would succeed, binding X to 2. The next disagreement would lead to an attempt to unify 3 with 6, which would fail. Kornfeld's explanation does not give the impression that e-unification would now lead to a call of equals(3,6), that is, a call to "equals" as a result of (standard) unification failure for a subterm. We therefore assume that the e-unification of a goal p(t1, ..., tn) with the head p(s1, ..., sn) of a clause is successful iff for each i, i ≤ n, the argument ti is e-unifiable with the argument si. The e-unification of an argument ti with an argument si is successful if ti and si are unifiable, else if equals(ti,si) can be established, else if equals(si,ti) can be established.

Let us now consider Kornfeld's formalization of the concept that a rational is equal to an integer. Kornfeld formalizes the equivalence relation with the following non-logical axioms:

    E2rat: equals(rat(N,D),I) ← integer(I), var(N), var(D), N = I, D = 1.
    E3rat: equals(rat(N,D),I) ← integer(I), times(D,I,N).

The predication integer(I) is a Prolog [1] evaluable predicate which is true just in case I is an integer, and fails otherwise. The predication var(X) is a Prolog evaluable predicate which is true just in case X is a variable, and fails otherwise. The predication t = s is a Prolog evaluable predicate which now is true just in case t and s are e-unifiable, and fails otherwise.

The following is Kornfeld's second example illustrating e-unification. Kornfeld says that in the presence of the axioms Elrat, E2rat and E3rat, the goal

    ← mem(rat(4,X), [2,3,cons(Y,Z),rat(R,W),rat(2,7)])

will succeed three times with, respectively, X bound to 2; R bound to 4 and X bound to W; and X bound to 14.

Again let us consider this example in more detail. It suffices to simplify the example so that the goal, G, is

    ← mem(rat(4,X),[2]).

As Kornfeld does not give an axiomatization of "mem" we assume that "mem" has the following standard Prolog axiomatization:

    Meml: mem(A,[A|L]) ← .
    Mem2: mem(A,[B|L]) ← mem(A,L).

If the goal G is to succeed it must be e-unifiable with axiom Meml. The e-unification of G with Meml proceeds as follows. The e-unifier first attempts to unify rat(4,X) with A. The unification of rat(4,X) with A succeeds with rat(4,X) bound to A. Next, an attempt is made to unify [2] with [rat(4,X)|L]. This attempt fails, and by our earlier assumption the e-unifier now generates

    ← equals([2],[rat(4,X)|L]).

By inspection of the axioms Elrat, E2rat and E3rat, it is clear that this and the symmetrized version

    ← equals([rat(4,X)|L],[2])

both fail. Accordingly, G fails, apparently contradicting Kornfeld's claim about this example. We can however remove the contradiction. Although Kornfeld nowhere discusses their presence or necessity in his system, problems like the above do not arise if we assume that, for every function symbol in the lexicon of the program, we introduce a function substitutivity axiom. In the example above, let us add the axiom

    Ellist: equals([H1|T1],[H2|T2]) ← H1 = H2, T1 = T2.

It is easy to see that the goal ← equals([2],[rat(4,X)|L]), which failed in the previous analysis, now succeeds. In what follows we will take the view that e-unification is formalized as assumed, but that where appropriate, equality theories that build on e-unification will include appropriate function substitutivity axioms.
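Read operationally over ground terms, Elrat and E3rat just check cross-multiplication. A ground-case sketch follows (the real axioms also run backwards to bind variables, which this sketch does not attempt):

    def equals_rat_rat(xn, xd, yn, yd):
        # Elrat: times(XN,YD,Z), times(XD,YN,Z) -- cross-multiply.
        return xn * yd == xd * yn

    def equals_rat_int(n, d, i):
        # E3rat: integer(I), times(D,I,N).
        return isinstance(i, int) and d * i == n

    print(equals_rat_rat(2, 3, 4, 6))   # True: 2/3 = 4/6
    print(equals_rat_int(14, 2, 7))     # True: rat(14,2) = 7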
4. The general procedure and e-unification

The general procedure allows us to capture e-unification, as expressed above, using the equality theory

    E1: eq(X,X) ← .
    E2: eq(X,Y) ← equals(X,Y).
    E3: eq(X,Y) ← equals(Y,X).

E1 is the "Prolog" equality axiom providing standard unification. Axioms E2 and E3 provide the symmetric extension to Kornfeld's predicate "equals". Finally, a particular equality theory built on e-unification would be completed and distinguished by the particular axiomatization of the predicate "equals". As observed at the end of the preceding section, these axioms would typically include a selected set of axioms from the axiom schema for function substitutivity, such as Ellist below.

To illustrate this we show how the last example discussed above is handled using the general procedure. The program would include the axiomatization of "mem" which, in homogeneous form, is

    Meml: mem(X,L) ← eq(X,X1), eq(L,[X1|L1]).
    Mem2: mem(X,L) ← eq(X,X1), eq(L,[Y1|L1]), mem(X1,L1).

Along with Meml and Mem2 the program also includes the following equality theory:

    E1: eq(X,X) ← .
    E2: eq(X,Y) ← equals(X,Y).
    E3: eq(X,Y) ← equals(Y,X).
    Ellist: equals([H1|T1],[H2|T2]) ← eq(H1,H2), eq(T1,T2).
    Elrat: equals(rat(XN,XD), rat(YN,YD)) ← times(XN,YD,Z), times(XD,YN,Z).
    E2rat: equals(rat(N,D),I) ← integer(I), var(N), var(D), eq(N,I), eq(D,1).
A better approach is that taken in Absys [5] where the unevaluated assertion is associated with each of the unlnstantiated variables. The associated assertion is automatically reprocessed if and when any of its associating variables is instantiated. If processing ends with variables which still have sets of associated assertions, then the system uses these sets to give the premises under which the set of assertions could be valid. Thus, and very trivially, the assertions gr(x,T), plus(2,3,r) might generate “yes: if gr(X,5)“. This capability was achieved in Absys by appropriate design at the level of the interpreter primitives. Kornfeld obtains a similar effect by exploiting his treatment of unification failure and subsequent appeal to a special “equality” theory. In the following sections we will analyse Kornfeld’s approach by casting his apparatus and his illustrative example in the framework of the general procedure and an equality theory. 6. Delayed evaluation using e-unification We begin with an informal motivation of what Kornfeld refers to as a property of his “equality” theory - that of treating “partially instantiated data objects” which, in turn, stand for “non-singleton sets of the Herbrand Universe of the program.” Suppose we want to write the code for an evaluable predicate “gr” (greater-than) over pairs of integers. We want to take into account the possibility that the predication gets called when one or both of its arguments are not yet instantiated to integers. We do not want to respond by an arbitrary instantiation from the infinity of numbers available. Somehow we have to associate the unevaluated primitive predication “gr” with the uninstantiated variable or variables, with the intent that this association is to act as a constraint on any other attempted instantiations of that variable or variables. The only Prolog mechanism we have for forming such an association is the “binding” AI LANGUAGES AND ARCHITECTURES / 769 environment (substitution composition). Kornfeld writes the code for the primitive evaluable predicate “gr” so that each uninstantiated variable is bound to what is called an “Q-term” (i.e. a 2-ary term with functor “n”). The arguments of the fl term are a newly created variable, and the original predication, but with the original variable replaced by the new variable. The new variable plays the role of a surrogate for the original variable. Kornfeld now relies on the fact that, if the system attempts to unify the original variable, now bound to an fl-term, with some other term, then the unification fails and the “equality theory” is invoked. The equality axioms for Q-terms are written so that the failed unification is repeated, but this time with the szlrrogate of the original variable - in effect mimicing “try the (original) goal again”. Success allows the original computation to proceed. The following is a simple illustrative example. Suppose we have the goal - gr(X,3), mem(X,[4,2]). The call of the evaluable predicate “gr(x,3)” leads to X being bound to fl(Xs,gr(Xs,3)), where Xs 1s a new variable introduced as a surrogate for X, and “gr(Xs,3)” is a surrogate for the original predication involved. The “mem” predication is now called. The attempt to unify X and 4 fails because X is bound to the O-term. The unification failure now invokes an appeal to the equality theory which eventually leads to a call of + equah(4, n(xs,gr(xs,3))). 
The equality axioms for 0 have the effect of binding Xs (the surrogate of X) to 4, and calling gr(Xs,3) (the predicate denoted by the surrogate of gr(X,3)) with this binding, which call succeeds. The original attempt to “unify” X with 4 is now deemed to have succeeded and the evaluation of the goals proceeds. In what follows we exhibit a standard Prolog equality theory for n which we think is faithful to Kornfeld’s Lisp/Prolog implementation. We will use this formulation to analyse “delayed evaluation”. We will show that Kornfeld’s method as presented by him is both incomplete and unsound. We also show that, even within the subset of “successful” evaluations the mechanism is inefficient and present a more efficient alternative. We take the final position that the use of an “equality” theory is an interesting mechanism for addressing undesirable effects of the Prolog search strategy and merits further study. 7. An illustrative predicate For the expository purposes of this paper, it will be sufficient to consider a single predicate gr with the intended interpretation that gr(X,Y) asserts that X and Y denote integers, and that the number denoted by X is greater than the number denoted by Y. The clauses Kornfeld uses to define the predicate gr are: grl: vwm9 +-- instantiated(M,Ml), instantiated(N,Nl), Ml>Nl. gr2: v-@Q’V - instantiated(M, Ml), eq( N,L?( Ns,gr( Ml,Ns))). gr3: gOOV - instantiated( N,Nl), eq(M,~n(Ms,gr(Ms,N1))). gr4: v&W - eq(N,~(Ns,gr(Ms,Ns))), eq( M,fl( Ms,gr( Ms, Ns))). Suppose that, at the time gr(X,T) is called, X and Y are indeed instantiated by integers n and m respectively, then the call of gr can simply be replaced by the Prolog system predicate ” > “. This motivates the first clause. The other three clauses are intended to capture those cases in which, at the time gr(X,r) is called, one or both of the arguments are not currently instantiated by integers. The reader can assume that the predicate “instantiated” has the expected interpretation: its formal definition will be deferred until after we have exhibited the domain of its potential arguments. Consider the goal clause - gr(X,4),eq(X,5). The predication P-(x’4) reduces by gr3 to +- eq(X, f&Xs,gr(Xs,4))) which in turn is reduced by the axiom eq(X,X), (the first axiom of the equality theory to be developed), to q with the binding X/fKCgQW)). We are left with the goal + eq(n(Xs,gr(Xs,4)),5). It is clear that our intended n-equality theory should bind “Xs”, (the surrogate for “X”) to “5” and verify the surrogate “gr(Xs,d)” for the original relation “gr(X,4)‘*, Indeed, a first (oversimplified) o-equality theory would have to include the following clauses: El: elwfX) c E2: eq(X,T) - equa&WJ E3: eq(X,r) + equafs(Y,X) 770 / ENGINEERING E4: equals(fI(Vsurrogate,Psurrogate),Thing) - instantiated(Thing,Value), eq(Vsurrogate,Value), Psurrogate. Clause E4 is certainly adequate for the illustrative example above: the recursive appeal to the full G-equality theory in “eq(Vsurrogate,Vatue)” would succeed trivially, in this simple example, by El, when ” Psurrogate” would succeed. The final bindings for the example are X/n(Xs,gr(Xs,4), Xs/5. @5,gr(5,4)) is the instance of the fl term bound to X. Kornfeld interprets such a ground instance representation of the constant 5. as an alternative Consider now the slightly more complex goal clause - gr(X,4>,gr(~5>,eq(Xl~. As before, the first two predications succeed with the bindings X/fVhgr(Xs,4)) and Y/@Ys,gr(Ys,Fi)). 
Consider now the slightly more complex goal clause

    ← gr(X,4), gr(Y,5), eq(X,Y).

As before, the first two predications succeed with the bindings X/Ω(Xs,gr(Xs,4)) and Y/Ω(Ys,gr(Ys,5)). The evaluation of the remaining goal eq(X,Y) now introduces and raises the question as to how the Ω-equality theory should deal with equality between two Ω-terms. Clearly, the predication "eq(X,Y)" asserts that X and Y denote the same individual. This, in turn, implies that the conjunction of any separate constraints on X and Y must constrain this individual. In addition, then, to asserting that the surrogates for X and Y are equal, we have to arrange to introduce a new Ω-term whose Vsurrogate is equal to the Vsurrogates for X and Y and whose Psurrogate is the conjunction of the Psurrogates for X and Y. We need to extend the Ω-equality theory by the clause

    E5: equals(Ω(Vs1,Ps1),Ω(Vs2,Ps2)) ←
            not_instantiated(Vs1),
            not_instantiated(Vs2),
            conjoin_relations(Vs1,Ps1,Vs2,Ps2,Vs,Ps),
            eq(Vs1,Vs2),
            eq(Vs1,Ω(Vs,Ps)).

The above axioms E4 and E5 are as given by Kornfeld in [8]. When we come to define the predicate "instantiated" we will see that the first two predications in the body of the clause are intended to ensure that we are indeed dealing with the general case, i.e. where neither Ω argument of the head of the clause has a fully constrained substitution instance in the current binding environment (i.e. neither argument denotes a known individual).

The definition of the predicate "conjoin_relations" is a straightforward piece of Prolog non-logical wizardry! It is:

    conjoin_relations(X1,R1,X2,R2,X,R) ←
            replace_occurrence(X1,R1,X,NR1),
            replace_occurrence(X2,R2,X,NR2),
            conjoin(NR1,NR2,R).

    replace_occurrence(T,(R,Rs),NT,(NR,NRs)) ←
            R =.. [F|Args],
            replace(T,Args,NT,NArgs),
            NR =.. [F|NArgs],
            replace_occurrence(T,Rs,NT,NRs).
    replace_occurrence(T,R,NT,NR) ←
            R =.. [F|Args],
            replace(T,Args,NT,NArgs),
            NR =.. [F|NArgs].

    replace(T,[Arg|RArgs],NT,[NT|NRArgs]) ←
            T == Arg,
            replace(T,RArgs,NT,NRArgs).
    replace(T,[Arg|RArgs],NT,[Arg|NRArgs]) ←
            replace(T,RArgs,NT,NRArgs).
    replace(T,[],NT,[]).

    conjoin((A,B),R,(A,RR)) ←
            conjoin(B,R,RR).
    conjoin(A,R,(A,R)).

The "=.." is a DEC-10 Prolog evaluable predicate with, for example, f(A,B,C) =.. [f,A,B,C] being a true goal. The "==" is also a DEC-10 Prolog evaluable predicate which is true only if its two arguments are syntactically identical (i.e. even the variable names must be the same).

For our example goal clause ← gr(X,4), gr(Y,5), eq(X,Y), the axiom E5 invoked by the predication "eq(X,Y)" results in the bindings Xs/Ys and Ys/Ω(XYs,(gr(XYs,4),gr(XYs,5))). In effect, the binding for X is now

    Ω(Ω(XYs,(gr(XYs,4),gr(XYs,5))), gr(Ω(XYs,(gr(XYs,4),gr(XYs,5))),4))

The substitution instance of X is an Ω-term whose Vsurrogate is itself an Ω-term. In general, this nesting, involving binding Ω-terms to the surrogates of Ω-terms, may have arbitrary depth. We will call such a nesting a surrogate chain.
It remains to establish the goal - eq(n(n(Xus,(gr(XYs,4),gr(XYs,s))>, gr(~(XYs,(gr(XYs,4),gr(XYs,5))),4)), 6) * This goal will succeed by E3 and E4, and will result in the program variables X and Y being bound to the completely specified object n(n(s,(gr(6,4>,gr(s,s))),gr(~(6,(gr(6,4),gr(6,5))),4)) which, following Kornfeld, we take as an alternative denotation of the number 6. In general, for an n-term n(@,!#) bound to a program variable X, the object which we will interpret as the denotation of X, will always be the last n-term in the Vsurrogate chain of a(@,!&. If the Vsurrogate of this last Q-term is a non-variable then the object identified by X is taken to be completely specified. In the sense of the predicate “instantiated” still to be defined, the non-variable instance of the surrogate of the last n-term is regarded as the value of the “instantiated” variable X. Otherwise,X is regarded as not instantiated. With these considerations in mind, the formal definition of the predicate “instantiated” is: instantiated(Omega,Value) +-- omega _ term(Omega,fI(X, -)),! nonvar(rr), var(Vulue), X=Value. instantiated(Thing,Value) +- nonvar( Thing), var(Value), Thing=Value. The predicate “omega-term” has the intended interpretation that omega- term(X,?‘) asserts that X is an omega-term and that Y is that last Vsurrogate omega-term in X in the sense discussed above. The formal definition of “omega- term” is: omega _ term(fl(XR),T) - nonvaf(X), omega _ term(X,r). omega _ term(fWR>,J&JCR)). 8. Pragmatic8 In the last section we motivated and explicated, in a standard Prolog, Kornfeld’s D-equality theory. We will show that this mechanism for handling delayed evaluation 1s inefficient and we will show how some of the inefficiency can be removed whilst staying within the same conceptual framework. We will first exhibit the source of the “inefficiency” by a (very) simple example. Consider the goal clause + 96JCT), eq(XJ), 4%) Reduction of the leftmost predication gives - eq( K f&~s~g~.(Xs,Ys))), eq( Xfl(Xs,gr(Xs,ys)) >, e&W, eq(Y,2) where Xs and Ys are surrogates for X and Y. It should be noted that we have the surrogate predication gr(Xs,Ys) occurring twice, once in each n-term. Using El the two leftmost predications give rise the bindings Y/ R(Ys,gr(Xs,Ys) and X/ f&Xs,gr(Xs,Ys) when we can write the current goal clause as - eq( fKJhg~(Xs,~>>, 3 1, eq(fW&s~(Xs,fi>>~ 2 > . Using E2 followed by E4, the leftmost predication is reduced to +- instantiated(3,Value), eq(Xs,Value), gr(Xs,Ys). The first predication binds Value to 3 when, using El, the second predication binds Xs to 3, leaving the current goal clause + gr(&Ys), eq( f&(ys,g$%Ys)), 2). The leftmost predication is essentially “try again (the surrogate for) gr(X,v but in the new binding context”. This is certainly well motivated, but again note the replication of the predication gr(3,Ys) - the instance of the replicated surrogate gr(Xs,Ys) of the “original” predication gr(X,r>. Unfortunately the call of gr(3,Ys) will be delayed. Unlike the Absys mechanism mentioned earlier, in order to delay we have to go through another level of surrogates and a-terms. 772 / ENGINEERING The leftmost predicate gr(3,Ys) reduces by gr2 to t instantiated($v), eq( Ys, f’I(Yss,gr(V,Yss)) The call of instantiated binds V to 3, when El binds Ys to L?(Yss,gr(3,Yss)). The current goal clause is now: + eq( C?( L?(Yss,gr(3,Yss)), gr(3,n(Yss,gr(3,YsS>>))). 2). 
Axiom E2 replaces eq by equals when E4 reduces the predication to t instantiated(2,Vl), eq( n(Yss,gr(&Yss)), Vl), gr(3,n(Yss,grt3,rss>>> which, in turn, reduces to - eq( QYss,gr(3,Yss)), 2), gr(3,0(Yss,gr(3,Yss))). The leftmost predicate reduces by E2 and E5 to + instantiated(2,V2), eq(Ysss,V2), gr(3,Ysss) all of which clearly reduce to 0. It would be nice to stop here! left with the “redundant” goal + go, fwwwa)) However, we are still which is reduced by g.rl to t instantiated(3,v3), instantiated( fI(2,gr(3,2))),V4), v3 > v4. The first predication binds v3 to 3; the second determines that the D-term 1s indeed instantiated, (its last surrogate is “2”), and binds v4 to 2, when the call of 3 > 2 succeeds. Although, in the simple illustrative example considered here, unnecessary elaborations of surrogate chains and replication of surrogates predicates, are not traumatic, they can quickly begin to be so as the complexity of the original goal increases. Kornfeld’s equality axioms can be modified to avoid these unpleasant effects. The modification is to take advantage of what we know about the significance of a nesting structure for n-terms, and, as in the predicate “instantiated”, deal directly and appropriately with the last G-term in any nesting. We replace the equality axiom E4 by: E4: eq(Omega,Thing) - omega _ term(Omega,fI(X,R)), instantiated(Thing,Vulue),!, eq(X,Vulue), R,!. If we now reduce the goal + eq( fS?( C?(Yss,gr(3,Yss), gr(3,f$Yss,gr(3,Yss))))9 2 ) using the new equality theory, we have: + omega _ term( instantiated(2,V), eq(Yns,Tr), Pns,! The first predicate binds the new variable surrogate Yns to Yss, and the new predicate surrogate Pns to just gr(3,Yss). The predicate instantiated(2,V) now binds 2 to V, when El leads to V being bound to Yns. The call of Pns now becomes the straightforward predication gr(3,2), ! which succeeds. This elegant change in the equality theory completely eliminates processing of irrelevant O-term structure. Finally, in this section on pragmatics, let us consider an example in which the reduction of the goal leads to a conjunction. Consider the goal + g-(x,4>, &‘L5), eq(X,T), go, g@B), eq(W,Q eq(XW), eq(X,lO). Reduction of the first three predications will lead to + eq( fX-J&v(Xs,4>>9 f&fi,gQkS)) ) with X and Y bound to the two Q-terms respectively. Reduction using ES which, in turn, uses ” conjoin _ relations”, succeeds with Xs and Ys bound to and hence with X bound to f-&fv=%(g~t~94), g-(~,5>>>, gr(~(xys,(grtXYs,4),grtXYs,5))), 4)) Similarly, the reduction of the next three predicates leads to a sequence of substitutions which bind W to ~n(~tXYsttgrtXYs,8),gr(XYs,g))), gr(~(XYs,tgrtXYs,S>,gr(XYs,g))), 8)) and with the current goal + &XW,eq(XlO). The leftmost predication is in effect This is reduced by ES, again involving a new Vsurrogate and a conjunction of predications, to 0 with X bound to W bound to AI LANGUAGES AND ARCHITECTURES / 773 Because of structure sharing, the size of these n-terms is not all that alarming. However this complexity of n-structure produced by Kornfeld’s axiom E5 is unnecessary. We modify E5 (as we did E4) to ignore irrelevant Q-structure. In the same spirit as the earlier changes, we replace Es by E5: eq( Omega1 ,Omega2) + omega _ term(Omegal,~(Xl,Rl)), w-(X1), omega _ term(Omega2,fI(X2,R2)), var(X2). conjoin -relations(Xl,Rl,X2,R2,X,R), x1=x2, X2=n(x,R). 
The equality between Ω-terms above now reduces by the new E5 to

    ← conjoin_relations(XYs,(gr(XYs,4),gr(XYs,5)),
                        WZs,(gr(WZs,8),gr(WZs,9)), XWs, PXWs),
      XYs = WZs,
      WZs = Ω(XWs,PXWs)

and we end up with X bound to Xs to XYs to WZs to

    Ω(XWs,(gr(XWs,4),gr(XWs,5),gr(XWs,8),gr(XWs,9)))

where all unnecessary structure has been eliminated.

9. Soundness and Completeness

Consider the goal

    ← gr(X,Y), eq(X,Y)

which should fail. The first predication binds X and Y to the Ω-terms Ω(Xs,gr(Xs,Ys)) and Ω(Ys,gr(Xs,Ys)) respectively. The current goal is now

    ← eq(Ω(Xs,gr(Xs,Ys)), Ω(Ys,gr(Xs,Ys)))

which succeeds by E1 with Xs bound to Ys. The inference system is unsound.

Consider the goal

    ← gr(X,Y), gr(X,3), eq(Y,5).

The leftmost predication succeeds with the bindings X/Ω(Xs,gr(Xs,Ys)) and Y/Ω(Ys,gr(Xs,Ys)) respectively. We are left with the goal

    ← gr(Ω(Xs,gr(Xs,Ys)), 3), eq(Ω(Ys,gr(Xs,Ys)), 5).

Using the axiom gr3, the leftmost predication in the goal reduces to

    ← eq(Ω(Xs,gr(Xs,Ys)), Ω(Xss,gr(Xss,3))).

This reduces to □ by E1 with Xs bound to Xss and Ys bound to 3. We are left with the remaining goal

    ← eq(Ω(3,gr(Xss,3)), 5)

which reduces to

    ← eq(3,5), gr(Xss,3)

which fails. If the original goal were reordered to read

    ← eq(Y,5), gr(X,Y), gr(X,3)

the goal would succeed. This result would seem to defeat the motivation for introducing Ω-terms!

10. A wider context

In section 2 we introduced the notion of the general procedure based on the transformation of a Prolog program P to its homogeneous form P', supplemented by an (equality) theory of the predicate "=" of the homogeneous form. The discussion of the Ω-terms in sections 5 to 9 above has made no appeal to the homogeneous form and the full general procedure as such. This is simply because all the relevant issues lie in the equality theory of the Ω-terms as such, and could be exposed using a very simple predicate "gr" whose conversion to homogeneous form would have added nothing to the exposition. However, if the equality theory for Ω-terms were, as would be typical, embedded in a more comprehensive equality theory, then, as should be clear from sections 3 and 4, the full power of the complete equality theory is only realized in the context of the general procedure. A simple illustrative example in the style of section 3 is the goal clause

    ← gr(X, rat(1,2)), mem(X, [rat(4,6)]).

We assume that "gr" is a predicate over pairs of rationals specified in a way similar to the predicate in section 7, that as in section 3 "mem" and "rat" have their usual axiomatization, and that the equality theory is augmented to deal with rationals and lists. It is left as an exercise to the reader to show that this goal clause is reduced to □ only in the context of the general procedure.

11. Summary and conclusion

A paper by Kornfeld [8] presenting a notion of "extended unification" has received some attention and has been widely cited. The notion of extended unification is informally presented in the original paper and its intended operational semantics has to be largely induced from the examples given in the paper. In this paper and in [3], we have attempted to give a more formal and clearer treatment, faithful to our perception of the original intuitions, but in the context of the general procedure [12].
In this paper we have focused our attention on Kornfeld's use of extended unification to appropriately delay evaluation of predicates with a large or possibly infinite set of instantiations - an important and interesting problem for Prolog implementations. We have formalized Kornfeld's basic notions of what we call the Ω-theory. The formalization is presented as an executable program written entirely in standard Prolog. Because of the unusual nature of this approach to delayed evaluation, we have given the Ω-theory in full. We use the formalization to show that Kornfeld's method is potentially incomplete and unsound in a way that runs completely counter to its motivation. We have shown that the method is inefficient even after a nice modification. Kornfeld's work does however provide interesting insights into the possibilities offered by "non-standard" equality theories, insights which we hope the present paper sharpens.

Our current work indicates that the difficulties identified in the Ω-theory can be resolved by a modified axiomatization. However, we take the position that it is not yet clear whether such a modified Ω-theory would have other than conceptual advantages over more direct methods, in which the necessary associations between variables and sets of as yet unevaluated constraining predicates are handled directly by the interpreter as in [5, 10].

References

1. Clocksin, W. F. and Mellish, C. S. Programming in Prolog. Springer-Verlag, New York, 1981.

2. Colmerauer, A. et al. Prolog II: Reference Manual and Theoretical Model. Groupe d'Intelligence Artificielle, Faculte des Sciences de Luminy, Marseilles, 1982.

3. Elcock, E. W. and Hoddinott, P. Classical Equality and Prolog. TR 143, Department of Computer Science, The University of Western Ontario, 1985.

4. Elcock, E. W. The Pragmatics of Prolog: Some Comments. Proceedings Logic Programming Workshop '83, Portugal, June, 1983, pp. 94-106.

5. Foster, J. M. and Elcock, E. W. Absys 1: an incremental compiler for assertions - an introduction. In Machine Intelligence 4, Edinburgh University Press, Edinburgh, 1969, pp. 423-429.

6. Goguen, Joseph A. and Meseguer, J. "Equality, Types, Modules, and (Why Not?) Generics for Logic Programming". The Journal of Logic Programming 1, 2 (August 1984), 179-209.

7. Kahn, K. Unique Features of LM-Prolog. Unpublished manuscript.

8. Kornfeld, W. A. Equality for Prolog. Proceedings, Eighth International Joint Conference on Artificial Intelligence, 1983, pp. 514-519.

9. Kowalski, R. A. Predicate Logic as Programming Language. Proceedings, Information Processing, Amsterdam, 1974, pp. 570-574.

10. Naish, L. Automatic generation of control for logic programs. Technical Report 83/6, Dept. of Computer Science, University of Melbourne, 1983.

11. van Emden, M. H. Programming with Resolution Logic. In Machine Intelligence 8, Ellis Horwood, 1977, pp. 266-299.

12. van Emden, M. H. and Lloyd, J. W. "A Logical Reconstruction of Prolog II". The Journal of Logic Programming 1, 2 (August 1984), 143-150.
Constructing and Refining Causal Explanations from an Inconsistent Domain Theory

Richard J. Doyle
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

Recent work in the field of machine learning has demonstrated the utility of explanation formation as a guide to generalization. Most of these investigations have concentrated on the formation of explanations from consistent domain theories. I present an approach to forming explanations from domain theories which are inconsistent due to the presence of abstractions which suppress potentially relevant detail. In this approach, explanations are constructed to support reasoning tasks and are refined in a failure-driven manner. The elaboration of explanations is guided by the structuring of domain theories into layers of abstractions.

This work is part of a larger effort to develop a causal modelling system which forms explanations of the underlying causal relations in physical systems. This system utilizes an inconsistent, common-sense theory of the mechanisms which operate in physical systems.

1 The Problem

The field of machine learning has shown a recent shift towards knowledge intensive methods which utilize the construction of explanations as an important step in the generalization process. In these explanation-based learning methods [DeJong & Mooney 86, Mahadevan 85, Mitchell et al 86, Winston et al 83], an explanation derived from a domain theory shows why a particular example is an instance of some concept. After the critical constraints in the explanation are determined, its components are generalized while maintaining these constraints; the result is a generalized recognition rule for examples of the given concept.

This approach is now well understood for domain theories which are consistent, or are at least assumed to be consistent. Explanations derived and generalized from consistent domain theories constitute proofs which can be taken to be correct in the context of all reasoning tasks they may subsequently support.

However, most domain theories are not consistent - they incorporate defaults, they omit details, or they otherwise abstract away from a complete account of the constraints which may be relevant to the reasoning tasks to which they are applied. Explanations derived and generalized from inconsistent domain theories cannot be assumed to be always correct; their inherent abstractions may manifest when inferences derived from them are not corroborated.

1 This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.

The problem addressed in this paper is how to construct justified, plausible explanations despite inconsistent domain theories; and how to refine those explanations or their generalizations when they fail to support reasoning tasks to which they are applied.

1.1 An Example

Consider a domain theory which describes at a common-sense level the kinds of causal mechanisms that operate in physical systems: flows, mechanical couplings, etc. My system derives from this domain theory a simple causal model of a bathtub which describes two flow mechanism instances: water flows in at the tap and flows out at the drain.
This simple model proves inadequate for the planning problem of how to fill the bathtub with water. This reasoning task becomes solvable after my system elaborates the model to describe a mechanical coupling between the plunger and the plug and how the plug blocks the flow of water at the drain.

This elaborated causal explanation includes an interesting intersection between a flow mechanism and a mechanical coupling mechanism. A single physical object - the plug - plays dual roles: it serves both as one half of a mechanical coupling and as barrier to a flow. My system extracts this composed causal mechanism - which might be called "valve" - out of the causal model of the bathtub and generalizes it in the explanation-based learning manner, maintaining the constraint that one physical object play these two roles.

My system next uses the valve mechanism to explain the causal relations in another physical system - a camera. In a camera, there is a mechanical coupling between the shutter release and the shutter; furthermore the shutter plays the additional role of barrier to the light flow between the photographed subject and the film. This causal model generates an incorrect prediction when a lens cap is inadvertently left on the lens. My system refines the model by instantiating the lens cap as another barrier to light flow. The model also cannot be used to explain why the shutter does not move when the safety latch on the shutter release is engaged. My system handles this situation by instantiating a latch as a type of barrier to a mechanical coupling - a detail which never appeared in the original construction of the valve explanation in the context of the bathtub.

1.2 The Proposed Solution

I take the following view of explanation formation from inconsistent domain theories, as a tool for learning or otherwise: Explanations are constructed in the context of a reasoning task; they are refined, as needed, in an incremental failure-driven manner. The usefulness of an explanation is relative to the goal of its motivating reasoning task. Similarly, the consistency of an explanation is relative to the set of inferences it supports. In the example above, a planning problem motivates the elaboration of the bathtub model and prediction failures motivate the refinement of the camera model.

In this paper, I focus on domain theories which are inconsistent because they incorporate a particular kind of abstraction - the suppression of possibly relevant detail through approximation. Approximations may be layered into several levels. Explanations derived from less approximate levels are less likely to support incorrect inferences. I argue that the minimum level to which an explanation must be instantiated depends on the goal of the motivating reasoning task.

I present two means of refining failed explanations: reinstantiation of the explanation into a situation which has changed, and elaboration of the explanation to a less approximate level in the domain theory with more explanatory power. This approach to refinement uses the layered structure of a domain theory to guide the familiar processes of dependency-directed backtracking and truth maintenance [Doyle 79].

2 A Context for the Problem - Causal Modelling

The issue of how to construct and refine explanations from an inconsistent domain theory comes up in my work on causal modelling [Doyle 86].
My causal modelling system learns how physical systems work in the context of reasoning tasks such as planning or prediction. Given a description of how quantity values, structural relations, and geometrical relations in a physical system change over time, my system utilizes a common-sense theory of causal mechanisms to hypothesize underlying causal relations which can explain the observed behavior of the physical system.

I have developed a representation for causality in physical systems which supports the description of these mechanisms or processes by which effects emerge from causes in this domain. This aspect of my work addresses issues first considered in [Rieger & Grinberg 77]. In my representation, mechanisms require the presence of some kind of medium, or structural link, between the site of the cause and the site of the effect. For example, flows require a channel through which to transfer material and mechanical couplings require a physical connection through which to propagate momentum. Causal mechanisms can be disrupted by barriers which decouple cause from effect. For example, flows can be inhibited by a blocked channel and mechanical couplings can be disabled by a broken physical connection.

This representation for causality and a vocabulary of causal mechanisms describable within it are currently under development and are being tested in the modelling of a number of physical systems. A generalization hierarchy for these mechanisms is shown in Figure 1.

[Figure 1: Generalization Hierarchy for Causal Mechanisms. A tree relating causal mechanisms to propagations (material flow, light transmission, mechanical, electrical), field interactions (gravitation, magnetism), and transformations (photochemical, electrophotic, electrothermal, thermochemical).]

The relevant aspect of this domain theory of causal mechanisms for the purposes of this paper is its inconsistency. The theory does not describe all the relevant aspects of the various mechanisms which operate in physical systems. Furthermore, the representation of causality underlying this domain theory suggests a decomposition of the mechanism descriptions into several layers of approximation. I describe these levels of explanation in the next section.

2.1 Layers of Explanation in a Domain Theory

There are several levels of causal explanation available in the representation for causality described above, each drawing on the notion of mechanism to a different degree. Each more detailed level introduces additional constraints which are meaningful only in the context provided by the more abstract levels. The higher levels of explanation do not employ a coarser grain size; rather they ignore certain potentially relevant conditions.

The most abstract level of explanation in the representation does not incorporate the notion of causal mechanism at all. This explanation merely notes the co-occurrence of two events and verifies that the effect does not precede the cause.

CO-OCCURRENCE EXPLANATION

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2))
⟹ FunctionalDependence(iq, dq)

The next level of explanation verifies that the quantities whose values are correlated are of the appropriate type for the mechanism. For example, flows are causal links between amount quantities and mechanical couplings are causal links between position quantities.
QUANTITY TYPES EXPLANATION

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IndependentQuantityType(iq) ∧ DependentQuantityType(dq))
⟹ FunctionalDependence(iq, dq)

This explanation is an approximation of one which identifies the enabling medium between the physical objects of the quantities whose values are correlated.

MEDIUM EXPLANATION

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IndependentQuantityType(iq) ∧ DependentQuantityType(dq) ∧
 ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2) ∧ MediumType(m)))
⟹ FunctionalDependence(iq, dq)
   Enables(m, FunctionalDependence(iq, dq))

Note that the medium must be maintained throughout the causal interaction. This explanation in turn approximates one which states that there must be no barriers which disrupt the structural link and disable the causal mechanism.

BARRIER EXPLANATION

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IndependentQuantityType(iq) ∧ DependentQuantityType(dq) ∧
 ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2) ∧ MediumType(m) ∧
  ¬∃(b) (Along(b, m, t1:t2) ∧ BarrierType(b))))
⟹ FunctionalDependence(iq, dq)
   Enables(m, FunctionalDependence(iq, dq))
   Disables(b, FunctionalDependence(iq, dq))

Finally, this description of barriers can be elaborated to one which states that in general the effectiveness of a barrier depends on how much of the medium it blocks.

VARIABLE BARRIER EXPLANATION

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IndependentQuantityType(iq) ∧ DependentQuantityType(dq) ∧
 ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2) ∧ MediumType(m) ∧
  ∃(b) (Along(b, m, t1:t2) ∧ BarrierType(b) ∧
   ∃(bq) (QuantityOf(bq, b) ∧ IsA(bq, Position)))))
⟹ FunctionalDependence(iq, dq)
   Enables(m, FunctionalDependence(iq, dq))
   FunctionalDependence(bq, dq)
   Enables(b, FunctionalDependence(bq, dq))

Note that this level of explanation describes a dependence (between a quantity associated with a barrier and the quantity associated with the effect) which does not appear at any of the other levels.

2 Flows and mechanical couplings are instances of a class of causal mechanisms I call propagations; they involve similar co-occurring events at different sites. There are also transformations (e.g. photochemical on film, electrophotic in a light bulb, electrothermal in a toaster) involving different co-occurring events at a single site.

3 This level of explanation removes a different type of abstraction than the other levels. This difference is discussed in the section on types of abstraction.
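Each of these schemas can be read as an executable rule. As a concrete illustration only (a sketch, not the paper's implementation), the medium-level explanation might be rendered in Prolog as follows, with changes/2, between/5, medium_type/1, the quantity-type predicates, and numeric time stamps all assumed to be supplied as observation facts:

    % Illustrative sketch of the MEDIUM EXPLANATION as a Prolog rule.
    % All predicates in the body are assumed observation facts.
    functional_dependence(IQ, DQ, enabled_by(M)) :-
        changes(IQ, T1),
        changes(DQ, T2),
        T1 =< T2,                     % the effect does not precede the cause
        independent_quantity_type(IQ),
        dependent_quantity_type(DQ),
        physical_object_of(IQ, O1),
        physical_object_of(DQ, O2),
        between(M, O1, O2, T1, T2),   % an enabling medium links the two sites
        medium_type(M).

Elaborating to the barrier level would amount to adding a test that no barrier lies along the medium to this clause body.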
These levels of explanation are defined for causal mechanisms in general; the particular most detailed levels of explanation of flows and mechanical couplings needed for the bathtub and camera examples are shown below.

MATERIAL FLOW (VARIABLE BARRIER)

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IsA(iq, Amount) ∧ IsA(dq, Amount) ∧
 ∃(m) (Touches(PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2) ∧
  ∃(b) (Along(b, m, t1:t2) ∧ Blocks(b, PhysicalObjectOf(iq)) ∧
   ∃(bq) (QuantityOf(bq, b) ∧ IsA(bq, Position)))))
⟹ FunctionalDependence(iq, dq)
   Enables(m, FunctionalDependence(iq, dq))
   FunctionalDependence(bq, dq)
   Enables(b, FunctionalDependence(bq, dq))

LIGHT TRANSMISSION (VARIABLE BARRIER)

(Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IsA(iq, Amount) ∧ IsA(dq, Amount) ∧
 ∃(m) (StraightLinePath(PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2) ∧
  ∃(b) (Along(b, m, t1:t2) ∧ Opaque(b) ∧
   ∃(bq) (QuantityOf(bq, b) ∧ IsA(bq, Position)))))
⟹ FunctionalDependence(iq, dq)
   Enables(m, FunctionalDependence(iq, dq))
   FunctionalDependence(bq, dq)
   Enables(b, FunctionalDependence(bq, dq))

MECHANICAL COUPLING (BARRIER)

(Changes(iq, t1) ∧ ¬Changes(dq, t2) ∧ ≤(t1, t2) ∧
 IsA(iq, Position) ∧ IsA(dq, Position) ∧
 ∃(m) (AttachedTo(PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2) ∧
  ∃(b)
3.2 Generalization of an Explanation A material flow mechanism and a mechanical coupling mecha- nism intersect in the expanded bathtub model at the plug. My causal modelling system notes such intersections because they may provide opportunities for extracting and generalizing use- ful compositions of causal mechanisms. This particular complex mechanism might be called “valve”. Using the hierarchy in Figure 1, my system generalizes the valve concept to other kinds of flows. The definitions of media, barriers, etc. for other types of flow are substituted while main- taining the constraint that a single physical object must serve as both one half of the mechanical coupling and as barrier to the flow. The generalized valve mechanism for light transmission is shown in Figure 4. ALONG / /Y ATTACHED TO I/ the figures to avoid clutter; such information explanation descriptions. used ZLP indicated in the Figure 4: The Learned Valve Mechanism for Light Transmission LEARNING / 541 This learned complex mechanism is used in the construction of a causal model for another physical system - a camera. This causal model is shown in Figure 5. ALONG / OPAQUE <, Figure 5: Valve Explanation of a Camera All of the valve explanations combine the variable barrier explanation level of a flow mechanism and the medium expla- nation level of the mechanical coupling mechanism. The origins of composed mechanism explanations are recorded, as in Figure 6, so that more detailed levels of explanation in the constituent mechanisms can be accessed if needed. ~- iv!ECHkNICAL COUPLla Figure 6: Origins of the Valve Explanation for Light Transmis- sion 3.3 Refinement of an Explanation through Rein- stantiation When a lens cap is placed on a camera this model supports an incorrect prediction - that light will continue to reach the film. In this case, the level of explanation needed to handle the new situation already appears in the model; the lens cap, like the shutter, is a barrier to light flow. My system instantiates this additional barrier, as in Figure 7. The refined explanation now supports the correct prediction that light does not reach the film in the altered camera. ALONG / I OPAQUE Figure 7: Reinstantiated Explanation of a Camera 3.4 Refinement of an Explanation through Elabo- rat ion In some cases: refinement of a failed explanation requires elab- orating to a level of explanation which calls on details not yet considered. This kind of refinement is needed in the camera model to handle the situation where a safety latch on the shut- ter release is engaged. As is, the model supports the incorrect inference that the shutter moves whenever the release moves. The model is repaired when my system recognizes the latch barrier to the mechanical coupling between the release and the shutter, as in Figure 8. The shutter does not move when the anchored release latch is attached. My system formed this ex- planation by elaborating to the barrier explanation level of the mechanical coupling constituent of the valve mechanism for light transmission (see Figure 6). Although this level of explanation was never reached in the bathtub model, it is accessible in the learned valve mechanism for light transmission used in the cam- era model. 1 RELEASE ATTACHED TO IT1 ATTACHED TO / ANCHORED \< \&W Figure 8: Elaborated Explanation of a Camera 4 Issues In this section, I discuss a set of issues relevant to the problem of constructing and refining explanations from a domain theory which is inconsistent. 
4 Issues

In this section, I discuss a set of issues relevant to the problem of constructing and refining explanations from a domain theory which is inconsistent.

4.1 Limits on Perception and Use of Empirical Evidence

The justification for employing an explanation to support reasoning in a given situation comes partially from the explanation schema used, and partially from the perceptions which instantiate the existentially quantified terms in that explanation schema. The justification due to the explanation schema may be compromised by approximations. The justification due to perception may be compromised when the instantiations of terms in an explanation are unobservable due to limits in the available perception equipment.

For example, air, which may be unobservable, serves as a thermal conducting medium for heat flow in a toaster. A causal explanation for a toaster based on heat flow might be only partially instantiated.

The loss of justification due to an uninstantiable term can be countered by gathering empirical evidence that an explanation is consistent, e.g., confirming that bread placed in a toaster does indeed become hotter. This is one way in which analytical, i.e., explanation-based, methods can be combined with empirical methods.

Uninstantiable terms in an explanation also can be countered by elaborating an explanation. More detailed levels of explanation can suggest how to obtain indirect empirical evidence for the uninstantiable term. For example, the barrier level of explanation in the heat flow mechanism indicates that heat flow in a toaster should be disabled when a thermal insulator exists between the coils and the bread. Confirming observations at this level can strongly suggest the presence of the unobservable thermal conducting medium.

4.2 Types of Abstraction

Approximation is the most prevalent type of abstraction appearing between levels of explanation in the causal mechanism domain theory. Approximations are assumptions that some condition holds or that some constraint is satisfied. For example, the approximation between the quantity types and medium levels of explanation is that an appropriate medium to support a causal mechanism is in place. A more detailed explanation may be correct in situations where an approximate explanation is not.

A different kind of abstraction appears between the barrier and variable barrier levels. Here a continuous description is collapsed into a discrete one. At the barrier level, a barrier either completely disables a mechanism or has no effect at all. At the variable barrier level a barrier may also partially affect a mechanism. This kind of abstraction might be called qualitization. Some situations may not even be describable, much less correctly described, by explanations which incorporate qualitizations. For example, the variable flow out of a bathtub drain or the way the aperture in a camera lens affects light flow cannot be described by the on/off barrier explanation.

Under aggregation, complex structures at one level of explanation are subsumed under simpler structures, perhaps even single terms, at higher levels. Aggregations involve changes in grain size. The oft-used example is the alternate explanations of gas behavior in terms of the motions of molecules and in terms of the macroscopic properties of volume, temperature, and pressure. Aggregations currently do not appear in the causal mechanism domain theory; the theory stops short of a full physical accounting of the laws which govern the behavior of physical systems.

This enumeration of abstraction types is admittedly preliminary. A recent investigation [Smith et al 85] also has described different abstraction types, and has investigated how explanations fail because of them. I have described in this paper an approach to explanation construction and refinement from domain theories which incorporate approximations and qualitizations.

4.3 Incomplete and Intractable Domain Theories

Even the lowest level of explanation in a domain theory may incorporate abstractions. This is true of the causal mechanism theory. For example, a barrier may be selective, e.g. a UV filter on a camera. Abstractions at the lowest level of a domain theory imply missing knowledge.

The method of explanation refinement described in this paper has no recourse when an incomplete domain theory "bottoms out". A possible course of action in this circumstance is to resort to an inductive method. Another is to invoke some other means of accessing applicable knowledge, perhaps analogy. Simply giving up may also be arguably appropriate.

Even complete domain theories might make use of layered approximations. A complete domain theory may involve so much detail as to be intractable. A structuring of such a theory into several approximating levels of explanation allows plausible explanations to be constructed, and maintains a capability for refining those explanations [Tadepalli 85].
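Read operationally, the options above suggest a simple fallback structure when elaboration bottoms out. The following fragment is a speculative sketch only; elaborate/2 and induce/2 are assumed interfaces, not defined in the paper:

    % Speculative sketch: when no less approximate level remains, fall
    % back to an inductive method rather than failing outright.
    repair(Explanation, Observations, Repaired) :-
        (   elaborate(Explanation, Repaired)     % a more detailed level exists
        ;   induce(Observations, Repaired)       % the domain theory has bottomed out
        ).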
4.4 Learning from Experiments

Given an inconsistent domain theory, it is possible to derive more than one plausible, partially justified explanation in many situations. For example, a glowing taillight on an automobile might be explained either by the electrical system of the car or by reflected light from the sun.

I am developing an experiment design capability for distinguishing multiple explanations. This capability utilizes the explanation refinement method described in this paper. It appears similar in spirit to that proposed in [Rajamoney et al 85]. In my method, refinements are proposed to one or more of a set of competing explanations until the explanations support divergent predictions. The refinements specify further instantiations at the same or at an elaborated level. For example, an experiment to distinguish the glowing taillight explanations might elaborate the light flow explanation from the medium level and specify the instantiation of an opaque barrier to disable the hypothesized light transmission. This barrier, importantly, would have no predicted effect on the electrical system of the car.

This approach to experiment design applies equally well to single explanations. Even an explanation with no rivals may be only partially justified because of perception limits. Empirical evidence for the correctness of such an explanation may be gathered via experiments which specify refinements involving additional observable instantiations of terms in the explanation. Such an experiment, involving a toaster, is described in the section on perception above. Experimenting can be viewed as the active gathering of greater justification for fewer and fewer plausible explanations.
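The divergence criterion at the heart of this approach can be stated compactly. A hypothetical sketch, with propose_refinement/2 and predicts/3 as assumed interfaces:

    % Sketch: a refinement constitutes a distinguishing experiment when it
    % makes two competing explanations support different predictions.
    distinguishing_experiment(Expl1, Expl2, Refinement) :-
        propose_refinement(Expl1, Refinement),  % e.g. instantiate an opaque barrier
        predicts(Expl1, Refinement, Outcome1),
        predicts(Expl2, Refinement, Outcome2),
        Outcome1 \= Outcome2.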
5 Relation to Other Work

Patil has investigated multi-level causal explanation in a medical domain [Patil et al 81]. He identifies five levels of explanation and describes methods for moving between levels in both directions. The kind of abstraction employed by Patil's system ABEL is aggregation; nodes and/or causal links at one level are condensed into fewer nodes and links at the next higher level. Elaboration in ABEL supports confirmation of diagnoses to greater resolution and allows the reasoning of the system to be revealed in greater detail to a user. Elaboration in ABEL is not intended to support failure-driven refinement of explanations through the removal of approximations, as described in this paper.

Davis' hardware troubleshooting system expands both aggregations and approximations [Davis 84]. The structure and behavior of digital circuits are described at several levels of aggregation; this provides the troubleshooting system with different grain sizes at which to examine a circuit. Fault models indicate how to lift approximations concerning the possible "paths of interaction" in circuits. Davis' fault models appear to be well-described in my representation for causality. His notions of spurious and inhibited causal pathways correspond to my concepts of medium and barrier.

In general, there may be many ways to repair failed approximate explanations. Smith et al [Smith et al 85] have explored how the task of isolating the source of an explanation failure can be constrained. They show how different types of abstraction in an explanation schema propagate along dependency links to instantiated explanations and lead to different types of failure.

6 Conclusions

I have presented an approach to explanation construction and refinement from inconsistent domain theories which incorporate two types of abstraction - the suppression of potentially relevant constraints and the discretization of continuous representations. In this approach, explanations are elaborated to support new reasoning tasks and to recover from failures. The elaboration process is guided by the structuring of domain theories into layers of abstractions.

This work is taking place in the context of an investigation into the formation of causal models of physical systems. Causal modelling involves the construction and refinement of causal explanations of the behavior of physical systems from a domain theory describing the mechanisms which operate in such systems. The levels of explanation in this domain theory are derived from a representation for causality in physical systems.

Some of the issues related to explanation formation from inconsistent domain theories include: using empirical evidence to complement explanation, understanding the types of abstraction which render a domain theory inconsistent, dealing with the incompleteness of a domain theory, and designing experiments to distinguish and gather justification for plausible explanations. In addition, a better understanding is needed of the kinds of domain theories which admit to decomposition via layered abstractions, and of the principles which govern the placement of orderings on abstractions.

7 Acknowledgements

Patrick Winston encouraged me to pursue this line of investigation. Randall Davis, Bob Hall, David Kirsh, Rick Lathrop, Jintae Lee and Tomas Lozano-Perez have all engaged in discussions of the ideas in this paper.

8 References

[Davis 84] Davis, Randall, "Diagnostic Reasoning Based on Structure and Behavior," Artificial Intelligence 24, 1984.

[DeJong & Mooney 86] DeJong, Gerald and Raymond Mooney, "Explanation Based Learning: A Differentiating View," Machine Learning 1, no. 2, 1986.

[Doyle 79] Doyle, Jon, "A Truth Maintenance System," Artificial Intelligence 12, 1979.
[Doyle 86] Doyle, Richard J., "Construction and Refinement of Justified Causal Models Through Multiple Levels of Explanation and Experimenting," Ph.D. thesis, Massachusetts Institute of Technology, forthcoming.

[Mahadevan 85] Mahadevan, Sridhar, "Verification-Based Learning: A Generalization Strategy for Problem-Reduction Methods," 9th IJCAI, 616-623, 1985.

[Mitchell et al 86] Mitchell, Tom M., Richard M. Keller and Smadar T. Kedar-Cabelli, "Explanation-Based Generalization: A Unifying View," Machine Learning 1, no. 1, 1986.

[Patil et al 81] Patil, Ramesh S., Peter Szolovits and William B. Schwartz, "Causal Understanding of Patient Illness in Medical Diagnosis," 7th IJCAI, 1981.

[Rajamoney et al 85] Rajamoney, Shankar, Gerald DeJong and Boi Faltings, "Towards a Model of Conceptual Knowledge Acquisition Through Directed Experimentation," 9th IJCAI, 688-690, 1985.

[Rieger & Grinberg 77] Rieger, Chuck and Milt Grinberg, "The Declarative Representation and Procedural Simulation of Causality in Physical Mechanisms," 5th IJCAI, 1977.

[Smith et al 85] Smith, Reid G., Howard A. Winston, Tom M. Mitchell and Bruce G. Buchanan, "Representation and Use of Explicit Justifications for Knowledge Base Refinement," 9th IJCAI, 673-680, 1985.

[Tadepalli 85] Tadepalli, Prasad V., "Learning in Intractable Domains," 3rd International Machine Learning Workshop, 202-205, 1985.

[Winston et al 83] Winston, Patrick H., Thomas O. Binford, Boris Katz and Michael Lowry, "Learning Physical Descriptions from Functional Definitions, Examples, and Precedents," AAAI-83, 433-439, 1983.
Not the Path to Perdition: The Utility of Similarity-Based Learning

Michael Lebowitz
Department of Computer Science, Columbia University
New York, NY 10027

Abstract

A large portion of the research in machine learning has involved a paradigm of comparing many examples and analyzing them in terms of similarities and differences, assuming that the resulting generalizations will have applicability to new examples. While such research has been very successful, it is by no means obvious why similarity-based generalizations should be useful, since they may simply reflect coincidences. Proponents of explanation-based learning, a new, knowledge-intensive method of examining single examples to derive generalizations based on underlying causal models, could contend that their methods are more fundamentally grounded, and that there is no need to look for similarities across examples. In this paper, we present the issues, and then show why similarity-based methods are important. We present four reasons why robust machine learning must involve the integration of similarity-based and explanation-based methods. We argue that: 1) it may not always be practical or even possible to determine a causal explanation; 2) similarity usually implies causality; 3) similarity-based generalizations can be refined over time; 4) similarity-based and explanation-based methods complement each other in important ways.

1 Introduction

Until recently, machine learning has focused upon a single paradigm -- the generalization of concepts through the comparison of examples. The assumption has been made, though often tacitly, that the generalization of similarities will lead to concepts that can be applied in other contexts. Despite its ubiquity there is one real problem with this paradigm: there is no obvious reason why the underlying assumption should hold.

In other fields people have called into doubt the utility of noticing similarities in the world and assuming them to be important. Naturalist Stephen Jay Gould, in discussing the nature of scientific discovery, comments that:

The human mind delights in finding pattern -- so much so that we often mistake coincidence or forced analogy for profound meaning. No other habit of thought lies so deeply within the soul of a small creature trying to make sense of a complex world not constructed for it. "Into this Universe, and why not knowing / Nor whence, like water willy-nilly flowing" as the Rubaiyat says. No other habit of thought stands so doggedly in the way of any forthright attempt to understand some of the world's most essential aspects -- the tortuous paths of history, the unpredictability of complex systems, and the lack of causal connection among events superficially similar. Numerical coincidence is a common path to intellectual perdition in our quest for meaning. [Gould 84]

Further doubt has been cast upon the use of similarity-based learning by a new methodology that has been developed in the last few years: the extensive application of knowledge to single examples to determine the underlying mechanism behind an example, and the use of this causal explanation to derive generalized concepts.

1 This research was supported in part by the Defense Advanced Research Projects Agency under contract N00039-84-C-0165 and in part by the United States Army Research Institute under contract MDA903-850103. Comments by Kathy McKeown on an earlier draft of this paper were quite useful.
By learning from single examples, this knowledge-based approach calls into question the necessity of similarity-based approaches.

Despite Gould's warning and the recent successes of explanation-based methods, learning methods that concentrate on seeking out coincidences have had remarkable success across a variety of tasks. Furthermore, as Gould implies above, people (and other creatures) do seem to be optimized for such learning. Given this evidence, it is worth trying to explain why such methods work. In this paper we will explain why similarity-based learning not only works, but is a crucial part of learning.

2 EBL and SBL

Considerable research has been done involving similarity-based learning (SBL). [Winston 72; Winston 80; Michalski 80; Michalski 83; Dietterich and Michalski 86; Lebowitz 83; Lebowitz 86a] are just a few examples. (See also [Michalski et al. 83; Michalski et al. 86].) While there are many variations to such learning research, the basic idea is that a program takes a number of examples, compares them in terms of similarities and differences, and creates a generalized description by abstracting out similarities. A program given descriptions of Columbia University and Yale University and told that they were Ivy League universities and that the University of Massachusetts was not would define "Ivy League university" in terms of the properties that the first two examples had and that the third did not -- e.g., as being private, expensive and old. Similarity-based learning has been studied for cases where the input is specially prepared by a teacher; for unprepared input; where there are only positive examples; where there are both positive and negative examples; for a few examples; for many examples; for determining only a single concept at a time; and for determining multiple concepts. In a practical sense, SBL programs have learned by comparing examples more or less syntactically, using little "high level" knowledge of their domains (other than in deciding how to represent each example initially).

Explanation-based learning (EBL), in contrast, views learning as a knowledge-intensive activity, much like other tasks in Artificial Intelligence. [DeJong 86; Ellman 85; Mitchell 83a; Mostow 83; Minton 84; Silver 86] are a few examples of explanation-based learning research. (See also [Michalski et al. 86].) An EBL program takes a single example, builds up an explanation of how the various components relate to each other at a low level of detail by using traditional AI understanding or planning methods, and then generalizes the properties of various components of the example so long as the explanation remains valid. What is left is then viewed as a generalized description of the example that can be applied in understanding further examples. This kind of learning is tremendously useful, as it allows generalized concepts to be determined on the basis of a single example. On the other hand, the building and analysis of explanations does require extremely detailed knowledge of the domain (which may minimize the need to learn). In addition, virtually all current EBL work is in the "perfect learner" paradigm that assumes that all input is noise-free and fits the correct final generalization.

It is important to make clear here exactly the sense in which EBL is concept learning.
It might be contended that all that is being done is the application of pre-existing information to a problem, unlike SBL, which is clearly a form of inductive learning. The key is in the generalization phase, where the EBL learner loosens constraints on its representation and determines whether the explanation that it has built up still holds. This generalized concept can then serve as a form of compiled knowledge that simplifies the processing of later input. This may be a way to learn structures such as frames [Minsky 75] and scripts [Schank and Abelson 77]. The view of using EBL to produce knowledge structures that make later processing more efficient has been called operationalization [Mostow 83]. Even though it might in some sense be possible to understand later examples just using low-level rules, realistically it is crucial to have a set of knowledge structures at various levels of complexity.

3 The goal of learning

It does not make sense to consider learning in isolation from other elements of intelligent processing. While certain aspects of learning may not be in service of an immediate goal (e.g., curiosity), at some point there must be a task involved to make use of what is learned. In general, the idea is for an organism or program to be able to carry out a task better (either be able to do more examples or do examples more efficiently) than it did before learning. It is particularly important to keep in mind the task nature of learning when considering concept learning, which has often been studied without regard to the future utility of the concepts created.

For most tasks that people or intelligent programs will carry out, the most obvious way to be able to improve performance is to attempt to develop a causal model that explains how elements of the domain work. Such a model will allow the learner to predict what is likely to happen in later situations, which will clearly be useful. The model will allow the learner to understand further input. Although we will consider later whether it is possible in all domains, the construction of a causal model is clearly a worthy goal in learning. [Schank 75; Schank 84] present reasons for constructing such models even in domains with incomplete models. Explanation-based learning methods strike directly at the problem of creating causal models. Similarity-based methods do not, but yet seem to lead to useful generalizations. This leads us to the central mystery of this paper.

4 The puzzle

Having decided that the construction of a causal model for a domain is important, or perhaps even crucial, as part of learning, we are left with the key question, "Is there any role for similarity-based learning in a full learning model, and if so, why?" Even if we assume that there must be something to SBL, since, after all, so many people have worked on it with impressive results, we must ask why it works; why it helps a learner perform better. That generalizations from explanation-based learning are valid and useful makes sense intuitively, since they are derived from causal analyses. Similarity-based generalizations could just be the result of the coincidences that arise in a complex world.

Note that similarity-based learning is not merely an artifact of researchers in machine learning. As pointed out in the Gould quote above, people delight in noticing similarities in disparate situations. Indeed, in many ways human processing seems to be optimized for such learning.
An anecdotal example immediately comes to mind: On the Eastern Air Shuttle between New York and Boston, passengers are given a sequence number for boarding. On one roundtrip, I received the same sequence number going in each direction. I noticed the similarity immediately, even though the first number was not in front of me when I received the second, despite the apparent irrelevance of the coincidence to my performance on later shuttle trips. Virtually everyone has experienced, and noticed, similar coincidences. When nature provides such a powerful cognitive mechanism, there always seems to be a good reason. We will see shortly why the recognition of similarities is important, though, to reiterate, the utility is not obvious and should not simply be assumed by SBL researchers.

5 A similarity-based learning program

We can most easily look at the utility of SBL in the context of a specific learning program. UNIMEM [Lebowitz 82; Lebowitz 86a; Lebowitz 86b] takes examples represented as sets of features (essentially property/value pairs) and automatically builds up a generalization hierarchy using similarity-based methods. It is not told in advance which examples to compare or concepts to form, but instead learns by observation. One domain on which we have tested UNIMEM involves data about universities that was collected from students in an Artificial Intelligence class at Columbia.

2 Other domains UNIMEM has been tested on include: information about states of the United States, Congressional voting records, software evaluations, biological data, football plays, universities, and terrorism stories.

Figure 1 shows the information used by UNIMEM for two universities, Columbia and Carnegie-Mellon. Each university is represented by a set of triples that describe features of the university, the first two providing a property name and the third its value. So, Columbia is in New York State while Carnegie-Mellon is in Pennsylvania. Both are urban and private and Columbia has a 7/3 male/female ratio compared to Carnegie-Mellon's 6/4. Some features, like quality of life, involve arbitrary numeric scales.

FEATURE:                    COLUMBIA:     CMU:
-------------------------------------------------
STATE            VALUE      NEW-YORK      PENNSYLVANIA
LOCATION         VALUE      URBAN         URBAN
CONTROL          VALUE      PRIVATE       PRIVATE
MALE:FEMALE      VALUE      RATIO:7:3     RATIO:6:4
NO-OF-STUDENTS   VALUE      THOUS:5-      THOUS:5-
STUDENT:FACULTY  VALUE      RATIO:9:1     RATIO:10:1
SAT              VERBAL     625           600
                 MATH       650           650
EXPENSES         VALUE      THOUS$:10+    THOUS$:10+
%-FINANCIAL-AID  VALUE      60            70
NO-APPLICANTS    VALUE      THOUS:4-7     THOUS:4-7
%-ADMITTANCE     VALUE      30            40
%-ENROLLED       VALUE      50            50
ACADEMICS        SCALE:1-5  5             4
SOCIAL           SCALE:1-5  3             3
QUALITY-OF-LIFE  SCALE:1-5  3             3
ACAD-EMPHASIS    VALUE      LIB-ARTS      ENGINEERING

Figure 1: Information about two universities

The first question we have to address concerning the examples in Figure 1 is precisely what it means to "understand" them, or to learn from them. While the exact nature of understanding would depend on the ultimate task that we had in mind, presumably what a person or system learning from these examples would be after is a causal model that relates the various features to each other. As an example, in understanding Figure 1 we might wish to know how the fact that both universities are private relates to the fact that they are both expensive or why Carnegie-Mellon offers financial aid to more people. A causal model that answers questions of this sort would be extremely useful for almost any task involving universities. Typical of the causation that we would look for is, for example, that private universities get less government support and hence have to raise more money through tuition. (At least that is how private universities explain it!) Similarly, a model
Similarly, a model %ther domains UNIMEM has been tested on include: information about states of the United States, Congressional voting records, software evaluations, biological data, football plays, universities, and terrorism stories. 5% / SCIENCE might indicate that Carnegie-Mellon’s emphasis on engineering leads to the acceptance of more students who need financial aid. Notice, however, that it will certainly not be possible to build a complete causal model solely from the information in Figure 1, but will require additional domain knowledge. An EBL program would create a low-level causal model of a university using whatever methods were available and then would use the model to develop a generalized concept. For example, it might decide that the Columbia explanation could be generalized by removing the requirement of being in New York State and by allowing the numeric values to vary within ranges, if none of these changes would affect the underlying explanation. It might be, however, that the liberal arts emphasis is crucial for some aspect of the explanation. In any case, by relaxing constraints in the representation, an EBL program would develop, using a single, causally motivated example, a generalized concept that ought to apply to a wide range of situations. Let us now compare the desired causal explanation with the kind of generalization made using similarity-based methods. Figure 2 shows the generalization that is made by UNIMEM, GNDl, from the two university representations in Figure 1 ,3 We see in Figure 2 that UNIMEM has generalized Columbia and Carnegie-Mellon by retaining the features that have identical values (like social level and quality of life), averaging feature values that are close (such as SAT verbal score) and eliminating features that are substantially different, such as the state where the university is located and the percentage of financial aid.4 The resulting set of features can be viewed as a generalization of the two examples, as it describes both of them, as well as, presumably, other universities that differ in other features. GNDl SOCIAL QUALITY-OF-LIFE LOCATION CONTROL NO-OF-STUDENTS STUDENT:FACULTY SAT SAT EXPENSES NO-APPLICANTS %-ENROLLED [CARNEGIE-MELLON ScALE:l-5 SCALEi:l-5 VALUE VALUE VALUE VALUE MATH VERBAL VALUE VALUE VALUE COLUMBIA] 3 5 LAN PRIVATE THOUS:S- RATIO:g:l 650 612.5 THOUS$:lo+ THOUS:4-7 50 Figure 2: Generalizing Columbia and Carnegie-Mellon What would the generalization in Figure 2 be used for once it had been made? Presumably it would be used in processing information about other universities. If we identified a situation where GNDl was thought to be relevant, we would assume that any of its features that were not known would indeed be present. The assumption is made by all similarity-based learning programs, including UNIMEM, that they have created usable concepts from which default values may be inherited. We can now state our problem quite clearly in terms of this example: What reason do we have to believe that a new example that fits part of the generalization of Columbia and Carnegie-Mellon will fit the rest? With explanation-based methods we at least have 3Actually, UNIMEM also had to decide that these hvo examples should even be compared and that they had a substantial amount in common before doing the actual generalization. *Exactly what constitutes “substantially different” is a parameter of the program. the underlying causal model as justification for believing the generalization. 
But what is the support of similarity-based learning? 6 Elements of an answer There are four main elements to our explanation as to why SBL produces generalized concepts that can be profitably applied to other problems and why it should be so used: l While the goal of learning is indeed a causal model, it is often not possible to determine underlying causality and even where it is possible it may not be practical. l Similarity usually to determine. implies causality and is much easier l There are ways to refine effects of coincidence. l Explanation-based and similarity-based complement each other in crucial ways. generalizations to mitigate the methods 6.1 Causality cannot always be determined In order to achieve their impressive results, the EBL methods that have been developed to date assume that a complete model of a domain is available and thus a full causal explanation can be constructed. In addition, it is assumed that it is ahvays computationally feasible to determine the explanation of any given example. While these assumptions may be acceptable for some learning tasks, they do not appear reasonable for situations where we are dealing with noisy, complex, uncertain data -- characteristics of most real-world problems. It is also unreasonable to expect to have a complete domain model available for a new domain that we are just beginning to explore. Even in our university example, it is hard imagine all the information being available to build a complete model. Most EBL work has not addressed these issues. Some of the domains used, like integration problems [Mitchell 83a], logic circuits [Mitchell 83b; Ellman 851 or chess games [Minton 841 do indeed have complete domain models and the examples used are small enough for the explanation construction to be tractable. Even in a domain such as the news stories of [DeJong 861, the assumption is made, perhaps less validly, that it is always possible to build up a complete explanation. In domains where a detailed explanation cannot reasonably be constructed, a learner can only rely on similarity-based methods. By looking for similarities it is at least possible for the learner to bring some regularity to its knowledge base. The noticing of co- occurrence is possible even the absence of a complete domain model. Further, much research, including our own, has shown that SBL can be done efficiently in a variety of different prbblem situations. In the university example of Section 5, UNIMEM was able to come up with a variety of similarity-based generalizations with minimal domain information. Further, as we noted above, people seem to be optimized for SBL. 6.2 Similarity usually implies causality The regularity that is detected using SBL is not worthwhile if it cannot be used to help cope with further examples. Such help is not likely if there is no connection between the similarities and the underlying causal explanation. Fortunately, such a connection will usually exist. Put as simply as is possible, similarities among examples usually occur because of some underlying causal mechanism. Clearly if there is a consistent mechanism, it will produce consistent results that can be observed as similarities. While the infinite variety of the world will also produce many coincidental similarities, it is nonetheless true that among the observed similarities are the LEARNING / 535 mechanisms that we desire. 
So, in the Eastern Shuttle example used above, while it is almost certain that the duplicate seat numbers I received were coincidental, if there was a mechanism involving seat numbers (say the .numbers were distributed in alphabetical order) it would manifest itself in this sort of coincidence. Similarly, in the university generalization GNDl (Figure 2) we indicated possible of mechanisms that would lead to the kind of expensive private school that is described. Two recent examples illustrate how causal understanding frequently relates to similarity-based processing. The first involves scientific research, an attempt to understand a complex meteorological phenomenon, and the second an investigation into a mysterious crime. In recent years weather researchers have been trying to explain a set of possibly related facts. Specifically: 1) the average temperature in 1981 was very high; 2) the El Chichon volcano erupted spectacularly in early 1982; 3) El Nino (a warm Pacific current) lasted an exceptionally long time starting in mid-1982; 4) there have been severe droughts in Africa since 1982. One might expect researchers to immediately attempt to construct a causal model that explains all these phenomena. However, weather systems are extremely complex, and by no means fully understood. Author Gordon Williams, writing in Atlantic, discusses the attempt to gain understanding as follows: “How could so much human misery in Africa be caused by an errant current in the Pacific? Records going back more than a century show that the worst African droughts often come in N Nino years.” (Emphasis added.) Furthermore, Williams quotes climate analyst Eugene Rasmussen as saying, “It’s disturbing because we don’t understand the process” [Williams 861. We can see clearly in this example that although the ultimate learning goal is a causal model, the construction of such a model is not immediately possible. So, researchers began by looking for correlations. However, they expect correlations to lead eventually to deeper understanding. The second example involves investigators trying to determine how certain extra-strength Tylenol capsules became laced with poison. The New York Times of February 16,1986 reported: Investigators tracing the routes of two bottles of Extra-Strength Tylenol containing cyanide-laced capsules have found that both were handled at the same distribution center in Pennsylvania two weeks apart last summer. Federal officials and the product’s manufacturer said that the chance that the tainting occurred at the distribution facility was remote! but the finding prompted investigators to examine the possrbrlrty as part of their inquiry.” Again we have a case where a causal explanation is desired and yet there is not enough information available to construct one. So, the investigators began by looking for commonalities among the various poisoned capsules. When they found the distribution facility in common, that became an immediate possible contributor to the explanation. Although no final explanation had been discovered as this is written, it is clear that the explanation process attempted began with the noticing of similarities. There is one further connection between noticing similarities and generating explanations that is worth making. This involves the idea of predictabi/ity. It turns out that the kinds of similarities that are noticed provide clues not only to what features should be involved in an explanation, but what the direction of causality might be (e.g., what causes what). 
As we have described elsewhere [Lebowitz 83; Lebowitz 86c], features that appear in just a few generalizations, which we call predictive, are the only ones that indicate a generalization's relevance to a given situation, and, further, are those likely to be the causes in an underlying explanation. This becomes clear when we realize that a feature present in many different situations cannot cause the other features in any single generalization, or it would cause the same features to appear in all the other generalizations that it is in. In the weather example above, if we knew of many generalizations involving droughts, but only one with both warm currents and a volcano, then the volcano might cause the drought, but the drought could not cause the volcano. Of course, it may be that neither direction of causality is right, there being a common cause of both, but at least predictability provides a starting point.

The power of predictability is that it can be determined quite simply, basically as a byproduct of the normal SBL process. The various indexing schemes used in a generalization-based memory [Lebowitz 83; Lebowitz 86a] allow the simple counting of features in context. While there are many problems to be explored, particularly that of predictive combinations of features, the ability to know the likely initial causes when determining a mechanism is an important advantage of SBL. Further, even when no explanation can be found, the use of predictability often allows us to make predictions from a generalization at the correct moments, even without any deep understanding of the generalization.

6.3 Refining generalizations

The third part of our explanation of the utility of similarity-based learning is that generalizations, once made, are not immutable -- they can be refined in the light of later information. This means that the aspects of a generalization that are due to coincidence can be removed. We have developed various techniques for doing this [Lebowitz 82] that work essentially by noticing during processing when various elements of a generalization are contradicted by new examples. If we remove the features that are frequently contradicted, we can have a concept that is more widely applicable and still contains meaningful information.

As an example of this, we will look again at our university generalization (Figure 2). Suppose that there were a wide range of universities with most of the features of GND1, but with different levels of social life. This contradiction of the social-life value, which was derived from the coincidental value that both Columbia and Carnegie-Mellon have, might seem to invalidate the generalization. However, our refinement methods would allow UNIMEM (or a similar system) to remove this feature, leaving a more widely applicable generalization that describes high-quality private schools. In this way similarity-based methods can overcome some of the coincidences that might seem to require explanation-based methods. Notice, however, that UNIMEM makes this refinement without having any real idea of why it is doing so, other than the pragmatic rationale that it allows the generalization to fit more examples, but does not reduce it so much that it carries no information.
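The refinement idea can be sketched as follows; the contradiction threshold and the flat feature encoding are assumptions we make for illustration, not UNIMEM's actual mechanism.

# A sketch of generalization refinement: features that new examples
# contradict too often are dropped, leaving a more widely applicable
# concept that still carries information.

def refine(generalization, new_examples, max_contradictions=1):
    """Drop features contradicted more than max_contradictions times."""
    counts = {f: 0 for f in generalization}
    for ex in new_examples:
        for f, v in generalization.items():
            if f in ex and ex[f] != v:
                counts[f] += 1
    return {f: v for f, v in generalization.items()
            if counts[f] <= max_contradictions}

gnd1  = {"control": "private", "tuition": "high", "social-life": "moderate"}
later = [{"control": "private", "tuition": "high", "social-life": "low"},
         {"control": "private", "tuition": "high", "social-life": "high"}]

# 'social-life' is contradicted twice and removed; the refined
# generalization still describes high-cost private schools.
print(refine(gnd1, later))    # {'control': 'private', 'tuition': 'high'}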
6.4 Integrated learning

The final element of our explanation for the importance of similarity-based methods lies in the need for an integrated approach employing both similarity-based and explanation-based methods. This point is really a corollary of the relation between similarity and causality described in Section 6.2. The basic idea is to use EBL primarily upon the generalizations that are found using SBL, rather than trying to explain everything in sight. This drastically cuts down the search necessary for constructing an explanation, particularly in domains where we have very little specific knowledge and have to rely on general rules for the explanations. Basically, we use SBL as a bottom-up control on the top-down processing of EBL.

The "real world" weather and crime investigation examples in Section 6.2 illustrate clearly how human problem solvers make use of this form of integrated learning -- trying to explain the coincidences that are noted, rather than explaining every element of a situation from scratch. We have described how a simple form of such integrated learning has been implemented for UNIMEM in [Lebowitz 86c]. For the university example in Figure 5, the main point is that we would only try to build up an explanation for the generalization GND1 (actually, the version of GND1 refined over time), and not the specific examples that made it up. Explaining the generalization is likely to be much easier than explaining the features of Columbia and Carnegie-Mellon, and will provide almost as much information.

7 Conclusion

We have shown in this paper a number of ways that similarity-based learning can contribute to the ultimate learning goal of building a coherent causal explanation of a situation. From this analysis it is not surprising that people seem to be optimized for noticing similarities, as such processing leads to the understanding that helps deal with the world. Our computer programs should be equally well equipped. Similarity-based learning is definitely not the path to perdition.
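A minimal sketch of this control structure follows, under stated assumptions: try_to_explain is a placeholder of our own for an EBL component, and the grouping of similar examples is assumed to come from the SBL memory.

# A sketch of integrated learning: similarity-based generalizations
# are formed first, and only those generalizations -- not every raw
# example -- are handed to the (expensive) explanation step.

def generalize(instances):
    shared = dict(instances[0])
    for inst in instances[1:]:
        shared = {f: v for f, v in shared.items() if inst.get(f) == v}
    return shared

def try_to_explain(generalization):
    # Placeholder: a real EBL module would search a domain theory for
    # a causal derivation of these co-occurring features.
    print("attempting explanation of:", generalization)

def integrated_learning(groups_of_similar_examples):
    for group in groups_of_similar_examples:
        gen = generalize(group)          # bottom-up SBL
        if gen:
            try_to_explain(gen)          # top-down EBL, gated by SBL

integrated_learning([[{"control": "private", "tuition": "high"},
                      {"control": "private", "tuition": "high"}]])
# attempting explanation of: {'control': 'private', 'tuition': 'high'}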
References

[DeJong 86] DeJong, G. F. An approach to learning from observation. In R. S. Michalski, J. G. Carbonell and T. M. Mitchell, Eds., Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann, Los Altos, CA, 1986, pp. 571-590.
[Dietterich and Michalski 86] Dietterich, T. G. and Michalski, R. S. Learning to predict sequences. In R. S. Michalski, J. G. Carbonell and T. M. Mitchell, Eds., Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann, Los Altos, CA, 1986, pp. 63-106.
[Ellman 85] Ellman, T. Generalizing logic circuit designs by analyzing proofs of correctness. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, 1985, pp. 643-646.
[Gould 84] Gould, S. J. "The rule of five." Natural History 93, 10, October 1984, pp. 14-23.
[Lebowitz 82] Lebowitz, M. "Correcting erroneous generalizations." Cognition and Brain Theory 5, 4, 1982, pp. 367-381.
[Lebowitz 83] Lebowitz, M. "Generalization from natural language text." Cognitive Science 7, 1, 1983, pp. 1-40.
[Lebowitz 86a] Lebowitz, M. Concept learning in a rich input domain: Generalization-Based Memory. In R. S. Michalski, J. G. Carbonell and T. M. Mitchell, Eds., Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann, Los Altos, CA, 1986, pp. 193-214.
[Lebowitz 86b] Lebowitz, M. UNIMEM, a general learning system: An overview. Proceedings of ECAI-86, Brighton, England, 1986.
[Lebowitz 86c] Lebowitz, M. "Integrated learning: Controlling explanation." Cognitive Science 10, 2, 1986, pp. 219-240.
[Michalski 80] Michalski, R. S. "Pattern recognition as rule-guided inductive inference." IEEE Transactions on Pattern Analysis and Machine Intelligence 2, 4, 1980, pp. 349-361.
[Michalski 83] Michalski, R. S. "A theory and methodology of inductive learning." Artificial Intelligence 20, 1983, pp. 111-161.
[Michalski et al. 83] Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.). Machine Learning: An Artificial Intelligence Approach. Morgan Kaufmann, Los Altos, CA, 1983.
[Michalski et al. 86] Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.). Machine Learning: An Artificial Intelligence Approach, Volume II. Morgan Kaufmann, Los Altos, CA, 1986.
[Minsky 75] Minsky, M. A framework for representing knowledge. In P. H. Winston, Ed., The Psychology of Computer Vision, McGraw-Hill, New York, 1975.
[Minton 84] Minton, S. Constraint-based generalization. Proceedings of the Fourth National Conference on Artificial Intelligence, Austin, TX, 1984, pp. 251-254.
[Mitchell 83a] Mitchell, T. M. Learning and problem solving. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany, 1983, pp. 1139-1151.
[Mitchell 83b] Mitchell, T. M. An intelligent aid for circuit redesign. Proceedings of the Third National Conference on Artificial Intelligence, Washington, DC, 1983, pp. 274-278.
[Mostow 83] Mostow, J. Operationalizing advice: A problem-solving model. Proceedings of the 1983 International Machine Learning Workshop, Champaign-Urbana, Illinois, 1983, pp. 110-116.
[Schank 75] Schank, R. C. The structure of episodes in memory. In D. Bobrow and A. Collins, Eds., Representation and Understanding: Studies in Cognitive Science, Academic Press, New York, 1975, pp. 237-272.
[Schank 84] Schank, R. C. The Explanation Game. Technical Report 307, Yale University Department of Computer Science, New Haven, CT, 1984.
[Schank and Abelson 77] Schank, R. C. and Abelson, R. P. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1977.
[Silver 86] Silver, B. Precondition analysis: Learning control information. In R. S. Michalski, J. G. Carbonell and T. M. Mitchell, Eds., Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann, Los Altos, CA, 1986, pp. 647-670.
[Williams 86] Williams, G. "The weather watchers." Atlantic 257, 1986, pp. 69-73.
[Winston 72] Winston, P. H. Learning structural descriptions from examples. In P. H. Winston, Ed., The Psychology of Computer Vision, McGraw-Hill, New York, 1972, pp. 157-209.
[Winston 80] Winston, P. H. "Learning and reasoning by analogy." Communications of the ACM 23, 1980, pp. 689-702.
1986
81
529
STAHLp: Belief Revision in Scientific Discovery

Donald Rose and Pat Langley
Department of Information and Computer Science
University of California, Irvine 92717
ArpaNet: drose@ICS.UCI.EDU, langley@ICS.UCI.EDU

Abstract

In this paper we describe the STAHLp system for inferring components of chemical substances -- i.e., constructing componential models. STAHLp is a descendant of the STAHL system (Zytkow & Simon, 1986); both use chemical reactions and any known models in order to construct new models. However, STAHLp employs a more unified and effective strategy for preventing, detecting, and recovering from erroneous inferences. This strategy is based partly upon the assumption-based method (de Kleer, 1984) of recording the source beliefs, or premises, which lead to each inferred belief (i.e., reaction or model). STAHL's multiple methods for detecting and recovering from erroneous inferences have been reduced to one method in STAHLp, which can hypothesize faulty premises, revise them, and proceed to construct new models. The hypotheses made during belief revision can be viewed as interpretations from competing theories; how they are chosen thus determines how theories evolve after repeated revisions. We analyze this issue with an example involving the shift from phlogiston to oxygen theory.

I Introduction

Scientific discovery and belief revision are two areas of AI which have undergone considerable investigation, yet work in these areas has rarely overlapped. The STAHL system (Zytkow & Simon, 1986), a forward-chaining production system which constructed componential models of chemical substances, was a first step towards combining techniques from both areas. Its domain was 18th century chemistry, during which the prevailing framework was phlogiston theory. This theory evolved from the observation that burning substances reduce in size during combustion and thus seem to lose something (phlogiston) in the process. The theory also seemed to explain calcination (now known as oxidization), which was believed to occur when a metal lost phlogiston and transformed into its associated "calx". Thus, phlogiston theory provided rational explanations for two problems which had long frustrated chemists, and indeed seemed to relate both phenomena. In addition to inferring models within this domain, STAHL employed belief revision techniques to resolve conflicts between models, and recover from certain erroneous inferences. However, its methods were limited in scope; we created STAHLp in part to remedy STAHL's deficiencies, but more importantly to further investigate how scientific theories evolve through repeated belief revision.

II Overview of STAHLp

Like its predecessor, STAHLp is a forward-chaining production system designed to construct models of chemical substances. Its input consists of 18th century reactions and any known models, and its output consists of newly inferred models. STAHLp's inference cycle begins when premise beliefs are input to the system. Then (1) new models are periodically inferred based on these premises until (2) an erroneous inference is detected. Normal inferencing is then suspended and belief revision begins as (3) hypotheses are generated, proposing ways in which premises would have to be modified to avoid the erroneous inference.
Next, (4) the "best" hypothesis -- the one having the least impact on existing models -- is chosen, assigning blame to certain premises; its proposed premise modifications are then carried out. Finally, inferencing (step 1) starts again, (possibly) leading to the construction of more models. Step 1, sufficient if no errors are noted, is itself a cycle in which premises lead to intermediate reactions, then to inferred models, then to more intermediate reactions, and so on.

Like STAHL, the two kinds of beliefs STAHLp deals with are reactions and componential models. Both systems represent a reaction as a list of its inputs and outputs; a model is represented as a list containing the substance being modelled, and its components. For example, 18th century chemists observed that calx-of-iron and charcoal reacted to form iron and ash; STAHLp would represent this reaction as (reacts inputs {calx-of-iron charcoal} outputs {iron ash}). We abbreviate this to CI Ch --> I Ash. If STAHLp eventually infers that charcoal is composed of phlogiston and ash, STAHLp represents this componential model as (components of {charcoal} are {phlogiston ash}), or Ch = Ph Ash. We can think of these beliefs using an algebraic metaphor; the beliefs above can be viewed as "CI + Ch = I + Ash" and "Ch = Ph + Ash." At this point, we can substitute the components of charcoal into the first equation to get "CI + Ph + Ash = I + Ash", then reduce ash from both sides to get "CI + Ph = I". These two steps correspond to the two main rules of both STAHL and STAHLp. Letting S indicate a set of one or more substances, the SUBSTITUTE rule is: If A occurs in a belief, and A is composed of B and S, then replace A with B and S. The basic REDUCE rule is: If A occurs on both sides of a belief, then remove A from the belief. When an equation has only one substance in either the inputs or the outputs, STAHLp infers a componential model for that substance, such as I = CI Ph. This third rule, for asserting newly inferred models, is INFER-COMPONENTS: If A and S react to form B, or if B decomposes into A and S, then infer that B is composed of A and S. At this point, if other reactions are present which contain iron, substitution can occur again; this may in turn lead to further reductions, and more models being inferred, and so on.

STAHLp's basic representation differs from STAHL's in that reactions and models are augmented with a reduced list. Its main purpose is to keep track, for every belief B, of all substances reduced thus far from reactions which led to the assertion of B. For example, CI Ph Ash --> I Ash {-} has an empty reduced list, indicating the REDUCE rule was never applied to reactions leading to it. However, ash can now be reduced, resulting in CI Ph --> I {Ash}. At this point, INFER-COMPONENTS would fire, and I = CI Ph {Ash} would be asserted into memory.
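Because these rules are used repeatedly below, a toy rendering helps fix the mechanics. The sketch is our own illustration, not STAHLp's actual implementation; it treats each side of a reaction as a multiset of substance names and replays the calx-of-iron derivation.

# A toy rendering of the three core rules on the calx-of-iron example.
# Reactions are (inputs, outputs) lists of substance names; a model
# maps a substance to its components.

def substitute(side, substance, components):
    """Replace every occurrence of `substance` by its components."""
    out = []
    for s in side:
        out.extend(components if s == substance else [s])
    return out

def reduce_sides(inputs, outputs):
    """Remove substances that occur on both sides, one copy at a time."""
    inputs, outputs = list(inputs), list(outputs)
    for s in list(inputs):
        if s in outputs:
            inputs.remove(s)
            outputs.remove(s)
    return inputs, outputs

def infer_components(inputs, outputs):
    """If one side is a single substance, infer a model for it."""
    if len(outputs) == 1:
        return outputs[0], inputs
    if len(inputs) == 1:
        return inputs[0], outputs
    return None

# CI Ch --> I Ash, with Ch = Ph Ash:
ins = substitute(["calx-of-iron", "charcoal"], "charcoal",
                 ["phlogiston", "ash"])
ins, outs = reduce_sides(ins, ["iron", "ash"])
print(ins, "-->", outs)             # ['calx-of-iron', 'phlogiston'] --> ['iron']
print(infer_components(ins, outs))  # ('iron', ['calx-of-iron', 'phlogiston'])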
III The Belief Revision Process

Zytkow and Simon noted that STAHL has two sources of erroneous inferences: faulty applications of the REDUCE rule, and "error in the input to STAHL" -- i.e., faulty premises. STAHLp's reduced list plays the key role in handling these problems. In section A we will show its use in preventing erroneous inferences. In section B we will show how STAHL's three main error types can be viewed as a single error type in STAHLp, thus allowing a simpler method for detecting erroneous inferences. Since REDUCE-rule errors have been prevented, and there is only one error type, all erroneous inferences in STAHLp must be caused by faulty premises. In section C we will discuss how such faulty premises are revised, again using the reduced list in this method for recovering from erroneous inferences.

Zytkow and Simon pointed out that "there are situations in which REDUCE produces erroneous conclusions" (Zytkow & Simon, 1986). For example, standard application of STAHL's rules transforms C VA --> SA VC and SA = VA Ph into C VA --> VA Ph VC after substitution, then into C --> Ph VC after reduction of VA. Finally, STAHL asserts a componential model for copper after applying INFER-COMPONENTS: C = Ph VC. However, this conclusion is incorrect; the model of copper accepted in the phlogiston paradigm was C = Ph CC. The missing knowledge needed to construct this correct model of copper is another model, VC = VA CC. If this model had been present as a premise, the correct model would have resulted, because STAHL would have eventually inferred C VA --> VA Ph VA CC, then reduced all occurrences of VA. However, if VC's model became known after the incorrect copper model was inferred, STAHL would conclude C = Ph VA CC after substitution, which is again incorrect.

STAHL cannot always infer the correct model because it has no mechanism to "remember" what has already been reduced earlier in an inference chain. When the components of VC are substituted into the above reaction, and VA again appears, STAHL cannot remove this occurrence of VA from the reaction and hence cannot infer the correct copper model. However, STAHLp can; it remembers which substances have been reduced through its reduced lists, and removes such substances if they reappear in later (descendent) reactions by using a new rule, DELAYED-REDUCE: If A occurs in a belief, and A also occurs in its reduced list, then remove A from the belief. For any belief B, this rule ensures that substances previously reduced from B's ancestral beliefs are immediately removed from B. Delayed reduction thus enables STAHLp to construct the correct model C = Ph CC, by removing the new occurrence of VA. In short, the reduced list enables STAHLp to prevent erroneous inferences (e.g. incorrect models like C = Ph VA CC) by revising beliefs as soon as new information (e.g. the components of VC) becomes known.
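The copper example can be replayed by extending the earlier sketch with a reduced list; again this is our own illustrative rendering, not the system's code, and the substance abbreviations follow the text.

# A sketch of the reduced list and DELAYED-REDUCE: VA is reduced once,
# recorded, and removed again when a later substitution (VC = VA CC)
# re-introduces it.

def reduce_sides(inputs, outputs, reduced):
    for s in list(inputs):
        if s in outputs:
            inputs.remove(s)
            outputs.remove(s)
            reduced.add(s)
    return inputs, outputs, reduced

def delayed_reduce(side, reduced):
    """Remove any substance already reduced from an ancestral belief."""
    return [s for s in side if s not in reduced]

# C VA --> SA VC with SA = VA Ph gives C VA --> VA Ph VC; reduce VA:
ins, outs, red = reduce_sides(["C", "VA"], ["VA", "Ph", "VC"], set())
# Later, VC = VA CC is substituted in, re-introducing VA:
outs = [s for sub in outs for s in (["VA", "CC"] if sub == "VC" else [sub])]
outs = delayed_reduce(outs, red)          # the new VA is removed at once
print(ins, "-->", outs, "reduced:", red)  # ['C'] --> ['Ph', 'CC'] reduced: {'VA'}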
Zytkow and Simon classified the erroneous inferences not involving misapplication of the REDUCE rule into three main categories. First, a substance can become defined as being composed of itself (infinite recursion). We refer to this first error type (e.g. where A = B C and B = A D exist concurrently in memory) as a circularity, because a substance assumes a circular definition after applying substitution (e.g. A = A D C or B = B C D). Secondly, there can be two models for the same substance. We refer to this second error type as model subsumption, focusing on the special case where one model's components are a subset of the other model's components (e.g. A = B C D subsumes A = B C). Finally, a reaction can be inferred where either its inputs or outputs are empty. We have found that these three error types can be viewed as one type; the first two types can be restated as the third. Thus, a reaction with either empty inputs or empty outputs (but not both) is the fundamental error type in STAHLp; we refer to it as an unbalanced null reaction (or simply as an erroneous reaction). To see how the first two error types can be restated as unbalanced null reactions, note how the circularity A = B C and B = A D leads to A = A D C after substitution, then to nil = D C after reduction. Conflicting models A = B C and A = B C D lead to B C = B C D after substitution, then to nil = D after two reductions. Thus, the only erroneous inferences STAHLp must detect are unbalanced null reactions, enabling the system to use a simpler, unified method of belief revision.

Upon detecting an erroneous inference, STAHLp invokes its main revision process in order to recover from this error. This process decides which premises caused the erroneous inference, revises these premises, and constructs a new theory (i.e. reinfers a new set of beliefs) which does not include the original erroneous inference. In fact, there is historical motivation for this revision method. 18th century chemists sometimes hypothesized missing substances, such as water, in observed reactions, in an attempt to explain conflicting experimental results. For example, Gay-Lussac and Thenard claimed that potassium consists of potash and hydrogen, while Davy observed that potash decomposed into potassium and oxygen. In order to support their claim, Gay-Lussac and Thenard concluded that Davy's potash was not pure but actually contained some water (Zytkow & Simon, 1986). As we shall see later, STAHLp is also able to exhibit such hypothetical reasoning.

STAHLp selects certain premises for revision based on the source tags of the detected erroneous inference. Similar to mechanisms in assumption-based systems (de Kleer, 1984), these tags store the underlying premises corresponding to each belief in memory; as a belief B1 is used to infer a new belief B2, B1's tags are propagated to B2. In this way, each belief in memory is associated with the premises that ultimately support it. For each substance in a belief B, its associated source tag contains the substance itself plus the number of the premise which ultimately contributed that substance to belief B after a series of rule applications.
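The tag bookkeeping can be sketched as follows; the (substance, premise-number, side) tuple format is our assumption, modelled on tags like (H 2 r) in the text rather than on the system's actual data structures.

# A sketch of source-tag propagation: every substance in a derived
# belief carries the premise (and side) that ultimately contributed
# it, so that blame can later be assigned.

def tag_side(substances, premise_no, side):
    return [(s, premise_no, side) for s in substances]

def substitute_tagged(side, substance, tagged_components):
    """Replace `substance`, letting its components keep their own tags."""
    out = []
    for tag in side:
        out.extend(tagged_components if tag[0] == substance else [tag])
    return out

# Premise 3: M CI --> I CM; premise 2: CM = M O.  Substituting 2 into 3:
lhs = tag_side(["M", "CI"], 3, "l")
rhs = substitute_tagged(tag_side(["I", "CM"], 3, "r"),
                        "CM", tag_side(["M", "O"], 2, "r"))
print(lhs, "-->", rhs)
# [('M', 3, 'l'), ('CI', 3, 'l')] --> [('I', 3, 'r'), ('M', 2, 'r'), ('O', 2, 'r')]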
1. Generating Effect-Hypotheses

When an unbalanced null reaction is inferred, STAHLp finds the ways in which it could have instead led to a complete null reaction (having empty inputs and outputs), by hypothetically altering its LHS or RHS. That is, STAHLp constructs hypotheses about how changes to supporting premises could have the effect of inferring a balanced version of this erroneous reaction -- one which would lead to a complete null reaction after one or more reductions. For example, suppose the erroneous reaction is nil --> H O {P}. Once detected, STAHLp deletes it from memory and begins belief revision. Its goal is to revise premises such that H and O would not have been left isolated on the RHS. The first step is to perform an "inverse reduction" of all reduced-list substances, plugging them back into the reaction; the result here is P --> P H O {-}, the modified erroneous reaction. Now the system finds how many ways it can change this reaction (without using new substances) so that its LHS equals its RHS. There are four options here: (1) Add H O to the LHS, (2) Add H to the LHS and Delete O from the RHS, (3) Add O to the LHS and Delete H from the RHS, (4) Delete H O from the RHS. These are STAHLp's effect-hypotheses -- changes to the modified erroneous reaction that would have resulted if certain premises had been different. For example, the balanced reaction P H --> P H would be inferred instead of P --> P H O if the effect of revising premises is hypothesis (2).

2. Generating Cause-Hypotheses

The problem now is to decide which premises should be revised, by matching each substance in the effect-hypotheses to a corresponding substance in some premise. In the case of substances which must be hypothesized as really absent from some premise (i.e. when the desired effect is deletion from the modified erroneous reaction), there is little complication; the source tag for each "Delete" substance in an effect-hypothesis indicates which premise is involved (as well as which side). For example, given effect-hypothesis (4) and the modified erroneous reaction P --> P H O, a tag (H 2 r) would indicate that the RHS of premise 2 should not have had H, while (O 3 r) would indicate that the RHS of premise 3 should not have had O. Thus, STAHLp would construct cause-hypothesis (4), whose proposed revisions would result in effect-hypothesis (4): The RHS of premise 2 must not have had H, and the RHS of premise 3 must not have had O.

While such commission cause-hypotheses are relatively easy to create, omission cause-hypotheses are more difficult. For each "Add" substance in an effect-hypothesis, STAHLp must decide which premise this substance should have been present in to get the desired effect. The problem is that there is no obvious source tag to work from, since STAHLp will add this substance to a premise it did not exist in before. Our solution is to use the source tags of substances that were plugged back into the empty side of the erroneous reaction -- substances that are now on the "smaller" side of the modified erroneous reaction. This is the side where substances must be added to effect a balanced reaction.
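Effect-hypothesis generation amounts to enumerating, for each substance left unmatched on one side, the choice between adding it to the other side (an omission) and deleting it (a commission), giving 2^n candidate effects for n unmatched substances. The sketch below is our own illustration of this enumeration.

# A sketch of effect-hypothesis generation for the example above:
# the modified erroneous reaction P --> P H O leaves H and O
# unmatched on the RHS.

from itertools import product

def effect_hypotheses(unmatched, short_side, long_side):
    hyps = []
    for choice in product(("add", "delete"), repeat=len(unmatched)):
        hyps.append([(action, s, short_side if action == "add" else long_side)
                     for action, s in zip(choice, unmatched)])
    return hyps

for h in effect_hypotheses(["H", "O"], "LHS", "RHS"):
    print(h)
# [('add', 'H', 'LHS'), ('add', 'O', 'LHS')]        -- option (1)
# [('add', 'H', 'LHS'), ('delete', 'O', 'RHS')]     -- option (2)
# [('delete', 'H', 'RHS'), ('add', 'O', 'LHS')]     -- option (3)
# [('delete', 'H', 'RHS'), ('delete', 'O', 'RHS')]  -- option (4)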
Again using the above example, the question now is which LHS must be revised -- i.e., which premise's P is to blame. P's source tag holds the answer. If it was (P 3 l), the detailed cause-hypothesis (1) would result: the P on the LHS of premise 3 really had H and O. (Note that such a conclusion, which STAHLp indeed made in one of its runs, models Gay-Lussac and Thenard's claim that the potash (P) in the reaction Davy allegedly observed really had some water (H and O) in it.) STAHLp simply used P to pinpoint the relevant premise and side in which to hypothesize omitted substances. In general, all reduced-list substances that are plugged back into the empty side of an erroneous reaction are used for this purpose, to aid in constructing omission cause-hypotheses. While we have shown how STAHLp would form cause-hypotheses (1) and (4), the reasoning described would also be used to construct hybrid cause-hypotheses -- those containing both omission and commission errors (e.g. cause-hypotheses (2) and (3)).

3. Having generated detailed cause-hypotheses about how to revise premises in order to avoid the erroneous inference, STAHLp now selects which of these sets of revisions to execute. This step begins when STAHLp computes the cost of making the modifications suggested by each cause-hypothesis. The cost reflects how many existing models are supported by those premises which would be changed if a certain cause-hypothesis is applied. For example, let us examine cause-hypothesis (4): The RHS of premise 2 must not have had H, and the RHS of premise 3 must not have had O. If premise 2 supports 7 models and premise 3 supports 1 model, then the total cost (of making modifications to premises 2 and 3) is 7 + 1 = 8. After computing the cost of each cause-hypothesis, STAHLp selects the one with the lowest cost -- i.e., the one whose revisions will have the least impact on existing beliefs -- as the best hypothesis.

4. After choosing the best hypothesis, STAHLp starts constructing a new theory containing a possibly different set of componential models; some may be new, some may no longer be present, and others may have been modified. First, for each premise that will be changed due to the chosen hypothesis, all beliefs based on that premise are deleted from memory. Second, the system performs the changes proposed in the hypothesis and asserts the modified premise(s) into memory. Finally, any new inferencing that may occur in response to the new premise(s) is performed. Hopefully, the result will be new models, but at the very least the original erroneous reaction will not be reinferred; the design of STAHLp's revision strategy guarantees that a complete null reaction will result instead during the new inferencing cycle. By viewing the result of reinferencing as the construction of a new theory (i.e. a new set of componential models), one can visualize an initial theory incrementally evolving in response to repeated detections of erroneous reactions and subsequent revisions of selected premises.

IV Phlogiston vs. Oxygen

Thus far we have discussed the fundamentals of STAHLp's operation. Let us now synthesize the previous sections by walking through a detailed example, beginning with the assertion of three premises:

(1 (M --> CM Ph {-}))
(2 (CM = M O {-}))
(3 (M CI --> I CM {-}))

These premises then lead to two inference chains (4 through 6, then 7 and 8):

(4 (M CI --> I M O {-}))    after substituting 2 into 3,
(5 (CI --> I O {M}))        after reducing 4,
(6 (CI = I O {M}))          after INFER-COMPONENTS from 5,
(7 (M --> M O Ph {-}))      after substituting 2 into 1,
(8 (nil --> O Ph {M}))      after reducing 7.

At this point, reaction 8 (an erroneous inference) is removed and belief revision begins.
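Under the toy SUBSTITUTE and REDUCE sketched earlier, this chain can be replayed directly; the self-contained sketch below (ours, for illustration) derives beliefs 5 and 8, ending in the unbalanced null reaction that triggers revision.

# A standalone trace of the inference chain above, ending in the
# unbalanced null reaction (belief 8).

def substitute(side, substance, components):
    return [c for s in side for c in (components if s == substance else [s])]

def reduce_sides(ins, outs):
    ins, outs, reduced = list(ins), list(outs), []
    for s in list(ins):
        if s in outs:
            ins.remove(s)
            outs.remove(s)
            reduced.append(s)
    return ins, outs, reduced

cm_model = ["M", "O"]                                        # premise 2: CM = M O
r4 = (["M", "CI"], substitute(["I", "CM"], "CM", cm_model))  # from premise 3
print("5:", reduce_sides(*r4))   # (['CI'], ['I', 'O'], ['M'])  -> infer CI = I O
r7 = (["M"], substitute(["CM", "Ph"], "CM", cm_model))       # from premise 1
print("8:", reduce_sides(*r7))   # ([], ['O', 'Ph'], ['M'])     -> unbalanced null reaction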
STAHLp starts generating hypotheses about how the erroneous reaction could have been avoided -- i.e., how the substances on the non-empty side of reaction 8 could have been reduced themselves, thus preventing an unbalanced null reaction from being asserted. The answer comes by recognizing the different ways in which a complete null reaction would have resulted instead. The system first constructs the modified erroneous reaction by plugging M back into both sides -- in effect, turning its attention to reaction 7. Then, STAHLp analyzes the four balanced reactions which might have been inferred instead of reaction 7 if premises had been different (without using any new substances):

(EH1) M [O Ph] --> M O Ph    needed O and Ph on LHS;
(EH2) M [O] --> M O (Ph)     needed O on LHS, no RHS Ph;
(EH3) M [Ph] --> M (O) Ph    needed Ph on LHS, no RHS O;
(EH4) M --> M (O) (Ph)       needed no RHS O, no RHS Ph.

To determine which premises contributed each substance in reaction 7, STAHLp must analyze it complete with its source tag information: (M 1 l) --> (M 2 r) (O 2 r) (Ph 1 r) {-}. Now the corresponding cause-hypotheses can be generated. The tag (M 1 l) is used for hypothesizing new LHS substances (omission errors), while the tags for O and Ph are used for hypothesizing commission errors:

(CH1) Belief 1, LHS: should have had O and Ph;
(CH2) Belief 1, LHS: should have had O,
      Belief 1, RHS: should not have had Ph;
(CH3) Belief 1, LHS: should have had Ph,
      Belief 2, RHS: should not have had O;
(CH4) Belief 1, RHS: should not have had Ph,
      Belief 2, RHS: should not have had O.

(Note how no hypotheses use belief 3, as it did not contribute to the erroneous inference. Like all assumption-based systems, STAHLp is a dependency-directed reasoner, because it only hypothesizes revisions to premises on which the erroneous inference ultimately depends.)

Now the cost of carrying out the changes these cause-hypotheses recommend is computed. Belief 2 supports one model (CI = I O), and belief 1 supports none. Thus CH3 and CH4 have cost 1, since both propose changing beliefs 1 and 2, while CH1 and CH2 have zero (and hence the lowest) cost, since both propose changing only belief 1. Let us say CH2 is arbitrarily chosen as the best hypothesis. At this point, all beliefs based on belief 1 (the premise to be modified) would be deleted; here the only belief supported by belief 1 is the erroneous reaction, which has already been deleted. STAHLp now asserts belief 9, a modified version of premise belief 1 which incorporates the changes of CH2: M O --> CM {-}. Now reinferencing begins; the substitution of belief 2 into 9 leads to belief 10: M O --> M O {-}. Two reductions then lead to a complete null reaction, which is harmlessly deleted from memory. Thus, while no new models were discovered upon reinferencing here, the complete null reaction that resulted shows that if some oxygen was actually present in the inputs of reaction 1, and phlogiston was actually not in the outputs, no beliefs contradicting existing models will be inferred.

STAHLp's hypotheses loosely model how followers of one paradigm can propose revisions of data reportedly observed by followers of another paradigm. For example, a follower of Lavoisier would probably be prone to believe hypothesis CH2, since he would believe in the existence of oxygen, but not phlogiston. The important point of this example, although the number of beliefs was kept small, was that after CH2's revisions were executed, phlogiston no longer existed in any beliefs. If one defines oxygen theory as a system of reactions and models which do not include the existence of phlogiston, then over a period of time it is possible for STAHLp to revise its set of beliefs from one embodying phlogiston theory to one embodying oxygen theory. Such a theory shift is not guaranteed, but the hypotheses at the very least represent the views of the competing theories; in the best case, the repeated presence of erroneous reactions would lead to removal of phlogiston from premises and hence force the theory shift to take place.

The revision in this example corresponds to how a believer in oxygen theory could analyze beliefs from another paradigm (i.e. phlogiston theory), and hypothesize that those beliefs were actually misinterpreted observations. Similar results were obtained in modelling the dispute between Davy and Gay-Lussac/Thenard. STAHLp revised Davy's premise P --> K O to include H and O on the LHS, replicating Gay-Lussac and Thenard's reinterpretation of Davy's results. In another example, given a set of 5 Lavoisier-era reactions, STAHLp's belief revision process led to the hypothesis that caloric does not exist -- a belief eventually accepted by chemists just as phlogiston's nonexistence was. In short, STAHLp's mechanism for revising premises in response to erroneous inferences enables it to question its basic assumptions, as well as propose new ones -- a vital ability in any domain of scientific discovery.

V Summary

STAHLp, a system for constructing componential models of chemical substances, employs a more unified and effective strategy for dealing with erroneous inferences than its predecessor STAHL. The reduced list contains information needed for preventing erroneous inferences caused by misapplied reduction. Detecting erroneous inferences is simpler in STAHLp; STAHL's three main error types can be viewed as unbalanced null reactions. Finally, the reduced list enables STAHLp to recover from such an erroneous reaction, using information about where its substances came from to propose revisions to some of its premises. Once a plausible hypothesis is chosen to account for the error, the premises it assigns blame to are revised, and a new set of beliefs is eventually inferred. STAHLp's main contribution lies in its incorporation of more powerful belief revision techniques into work on scientific discovery, and in its potential for modelling how theories evolve.

References

de Kleer, J. Choices without backtracking. Proceedings of the Fourth National Conference on Artificial Intelligence, Austin, TX, 1984.

Zytkow, J. M. and Simon, H. A. A theory of historical discovery: The construction of componential models. Machine Learning 1, 1986, pp. 107-137.
1986
82
530
A CASE-BASED REASONING SYSTEM FOR SUBJECTIVE ASSESSMENT*

William M. Bain
Yale University Computer Science Department

* This paper is a greatly shortened version of [2]; see that source for more extensive discussion. This research was supported in part by the Air Force Office of Scientific Research under contract F49620-82-K-0010 and contract 85-0343.

Abstract

People tend to improve their abilities to reason about situations by amassing experiences in reasoning. Resorting to previous instances of similar situations for guidance is known as case-based reasoning. This paper presents JUDGE, a computer model of judges who sentence criminals. The task is viewed as one in which people learn empirically from the process of producing relative assessments of input situations with respect to several concerns, with little external feedback. People can perform such subjective tasks by at least trying to keep their assessments consistent. For assessment tasks, this reasoning style involves comparing a previous similar situation with an input one, and then extracting an assessment for the new input, based on both the assessment previously assigned to the older example, and differences found between them. The system also stores input items to reflect their relationships to situations already contained in memory.

1 Introduction

When people run out of rules to guide them, they reason about problems subjectively. Domains where expert reasoning of this type occurs usually come packaged with a starter kit of traditions, prototypes and precedents; such is the case, for example, with legal reasoning, real estate assessment, various methods of scientific discovery, and art. Beyond such initial guidelines, however, a person often finds himself in uncharted territory.

This paper describes research which has been directed at modelling by computer the behavior of judges who sentence criminals. Our effort has not been to examine sentencing as a representative example of legal reasoning. Instead, we have viewed it as a more generic reasoning task in which people learn empirically from having to produce relative assessments of input situations with respect to several different concerns. Judges receive little external feedback from sentencing that they can directly apply to future cases, so studying this task can help us to understand better the nature of subjectivity, and how to get computer programs to reason subjectively, relying on experience.

Unlike medical tasks, the task of sentencing is not usually considered by judges to be diagnostic. As a result, we have not taken a traditional classification-style approach to modeling judges [3]; instead, our implementation, the JUDGE program, uses a method called case-based reasoning [5], [2], which relies on its own experiences to dictate reasons for making certain assessments.

2 Case of the 16-year old offender

To facilitate building a sentencing model, we considered how judges face the task of fashioning sentences by talking with judges who were sitting on the bench in Connecticut at the Superior Court level. An excerpt from a discussion which I had with a judge follows. I described to him briefly an augmented version of a real case which was new to him. This crime was unusual in that it involved a child molestation where the offender was himself only sixteen years old.

Interviewer: This is a Risk of Injury to a Minor case, against a boy who is sixteen years old himself...a first offender...with no juvenile record.
The details of the crime were that this boy was babysitting for the two kids, and he molested both of them (details given). The kids told their mother and she called the police. Neither of them needed any psychiatric treatment or care for their trauma other than some talking to by their mother -- some reassurance.

Judge: If he were presented to me for determination (of youthful offender status), I would feel very strongly against it, because basically, I'd have to... it's very hard to judge, I mean, some people just goof up sexually as they're that age or so, and it's hard to tell with no prior record. I would tend to want to give someone the benefit of the doubt, especially since there is no severe trauma to the victims. I'd do an awful lot of agonizing, but I might give him youthful offender status, and give him three years suspended, with psychiatric treatment throughout the probation. If he were not treated as a youthful offender, then I might well place him on five years, give him a suspended sentence, five years probation with the psychiatric treatment.

Although the judge first began to say that he would not wish to treat the offender as a juvenile, he very abruptly changed his mind. Even so, he formulated two alternative sentences, depending on the ultimate status granted to the offender. About six weeks after this discussion, I met with the same judge again to discuss a number of cases, including the one above. This time his reaction to the issue of youthful offender status was markedly different, even after such a short period of time.

Interviewer: One of the cases we discussed before dealt with a sixteen-year old boy, who was charged with two counts of Risk of Injury. The facts of this case are... (same facts given).

Judge: What did I say about that? I don't remember what I said. As you talked, the fact situation sounds very familiar.

Interviewer: OK. The boy was sixteen, so one of the things you wanted to know was whether he should, whether he was being treated as a juvenile or an adult.

Judge: Yeah, he should be treated as an adult. He's not a kid. That's a situation where I would find it hard not to consider a suspended sentence and a long, perhaps maximum period of probation with psychiatric treatment, if that's possible. Notwithstanding my feeling that it's going to simply be a waste of time. But who knows, you know? You're giving someone the benefit of the doubt at that age.

Interviewer: How long a period of probation?

Judge: At least five years.

Interviewer: Would you treat him as a juvenile if he were presented to you for determination as a juvenile?

Judge: No I would not. Not at sixteen.

Interviewer: Why not?

Judge: Because I don't think he should be treated as a juvenile at sixteen. I don't think we should be saying, "Oh, he's nothing but a little kid." And besides that, nothing happens in juvenile court. Absolutely nothing. I mean you go through a charade and kids walk out of there laughing. I don't think that's a laughing matter. I mean you can commit murder in our society if you're a juvenile, and get tapped on the wrists; where you gonna send 'em? What are you going to do with them?

2.1 Reasoning from Experience-based Generalizations

The judge's staunch attitude against the offender on the second occasion differed dramatically from his previous position of feeling uncertain, yet beneficent, toward him.
Part of the judge's earlier uncertainty was apparent when he proposed two possible sentences for the offender -- one for the condition in which he would grant youthful offender status and one if he were to treat him as an adult. In the second discussion, however, he made no such provisions for doubt. He made the strong statement that this offender was an adult and not a child. He did soften this position a bit by suggesting, as before, that the offender should be given the benefit of the doubt; however, his sentence proposal of "at least five years" was substantially harsher than the tentative sentences which he had mentioned the first time (either 3 years or 5 years). Moreover, his attitude in denying youthful offender status this time could only be described as hostile.

The only explanation which the judge gave for his changed attitude was that in the meantime he had presided in a juvenile case which had been particularly agonizing for him; unfortunately, he gave no details of the case. However, it is noteworthy that the case he heard during that month and a half contributed to his using a different perspective for dealing with juveniles than he had used before, to the point that he reacted to the same set of facts quite differently the second time. From this we note that the extent to which a judge considers certain features of cases and of offenders to be significant is a function of similar experiences he has in dealing with those features.

3 The JUDGE Program

The JUDGE system was written to develop sentences for certain crimes, including cases of murder, assault,
In addition to being able to generalize rules from processing its input cases, JUDGE can also further modify its own rules. Each of these processes is de- scribed in detail in [2]. 3.1 Interpretation The interpretation phase in JUDGE assigns an inter- pretation to each set of input actions and results. In- terpretations provide the system with inferences about the motivations of actors and expand greatly on the representation given initially for each crime; they also serve as indices to cases in memory. For example, CRIMEO, the first case we gave to JUDGE, is an instance of a murder (fictitious), which the system interpreted initially a+s shown below: CRIMEO Facts: First, Ted slashed at Al with a knife one time. (In- terpreted as an UNPROVOKED-VIOLATION.) Next, AI slashed back at Ted with a knife one time. (PARITY- SELF-DEFENSE with an ACHIEVED-RESULT.) Finally, Ted stabbed Al with a knife several times. Al died. (ESCALATED-RETALIATION, ACHIEVED-RESULT.) An UNPROVOKED-VIOLATION means that no other violative actions occurred before the act in question (where Ted slashed at Al); furthermore, this interpre- tation indicates that Ted’s intent to act violently was not justified by any other input knowledge. The final action in this crime, where Ted stabbed Al to death, was found to be RETALIATION with Es- CALATED force and an ACHIEVED-RESULT. TO the system, this means that the actor used greater force against his opponent than was previously used against him (escalation); there was at the time of the action no outstanding threat of harm that the actor might have perceived which could justify his action by self-defense (hence, it was retaliatory); and the actor achieved his apparent violative goal. These interpretive structures supply JUDGE with inferences such as that Ted in- tended to kill Al, thus eliminating other possible in- ferences (e.g., that the killing was accidental). 3.2 Retrieving Previous Instances from Mem- ory JUDGE uses the results of its interpretation, including both the interpretive structures themselves and certain of the inferences they provide, to find similar episodes and accompanying strategies in memory for sentencing cases. CRIMEO, described above, must be sentenced with initial rules provided to the system, since no other cases are in memory yet. JUDGE’S rules assign it a sentence of 40-50 years imprisonment. In general, when other cases are stored in memory, it is difficult to decide which of the many features of a situation are the most salient and crucial ones to focus on. The system is provided with a set of criteria for determining feature salience derived from the causal structure that it builds for each case during the inter- pretation phase. This set includes the statute that was LEARNING / 525 violated, who started the fight, the violative actions and results, and the interpretations assigned to those actions and results by the program. JUDGE looks for crimes in memory which involved these same features. If any crime in memory is found to be similar, using these criteria, the system will begin to consider differ- ences between the input and retrieved crimes. 3.3 Differentiating Cases Once JUDGE has found a crime from memory similar to the input case, it begins to look in-depth for dif- ferences between the two crimes. The system begins by comparing the extent of harm caused by the last actions of each case; then it compares the intentions which led to each action. CRIMEA Facts: First, Randy struck Chuck with his fists several times. 
3.3 Differentiating Cases

Once JUDGE has found a crime from memory similar to the input case, it begins to look in depth for differences between the two crimes. The system begins by comparing the extent of harm caused by the last actions of each case; then it compares the intentions which led to each action.

CRIME1 Facts: First, Randy struck Chuck with his fists several times. Next, Chuck struck Randy back with his fists several times. Then, Randy slashed at Chuck with a knife one time. Next, Chuck slashed back at Randy with a knife one time. Finally, Randy stabbed Chuck with a knife several times. Chuck died.

Comparing CRIME1 with CRIME0...
In both crimes, the victim was killed. Not only were both of these outcomes the result of direct intentions, but the actors intended and caused the same amount of harm. Ted demonstrated an extreme use of force against Al when he acted to stab Al to death in response to having his skin cut in CRIME0. Randy demonstrated an extreme use of force against Chuck when he acted to stab Chuck to death in response to having his skin cut in CRIME1. Unable to distinguish whether the extreme force used in either crime was worse. The intent of both offenders was to act repeatedly to stab the victim to death. In addition, neither actor's intentions were justified, and both escalated the level of violence about the same degree.

(At this point, JUDGE cannot find a substantial difference between the two cases. As a result, it backs up to compare events that led to these intentions, actions, and results.)

****** Considering actions of the offenders which led to subsequent victim actions... ******
Ted demonstrated an extreme use of force against Al when he acted to slash at Al with a knife in CRIME0. This action was unprovoked. Randy demonstrated an extreme use of force against Chuck when he acted to slash at Chuck with a knife in response to being hit hard in CRIME1. The magnitude of the extreme force used in CRIME0 was greater than that in CRIME1, and so CRIME0 will be considered worse.
Comparison finished with result that the old crime, CRIME0, is somewhat worse.

It took the program several iterations to determine that CRIME0 was worse than CRIME1. What it found was a difference between the extent to which the offenders escalated the violence in their respective crimes. In general, the system continues to compare the events of two crimes until some notable difference is found or until one or both crimes has been fully scrutinized one event at a time. Notable differences include such feature disparities as greater intended harm in one crime, greater caused harm, more justification to respond, extreme force used in one crime, greater relative force, and greater relative escalated force.

3.4 Generalization

JUDGE produces its own rules to generalize certain knowledge about sentencing. General rules are formed only when cases retrieved from memory match a substantial part of the set of features of the input case. The features which commonly describe both situations are extracted and used as indices to store the rule in memory, and a sentence is inherited from the older case. The output below shows this inheritance, along with the set of features common to both CRIME0 and CRIME1 which form the left-hand side of the rule. The sentence given to CRIME1 was 40-50 years -- the same as for CRIME0.

FORMING GENERAL SENTENCING RULE:
FOR violation of Murder...
FOR causing result of kill...
FOR using action of stab-knife...
FOR offender starting the fight...
FOR responding to slash-knife harm...
FOR using escalated force and retaliation...
FOR intending to cause the result...
The sentence for this violation will be 40-50 years.

Rules stored in memory can be quickly used to create a sentence for any situation where the rule applies. Thus, in most circumstances the system avoids making an in-depth comparison of input and retrieved cases.
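The rule-forming step can be sketched as extracting the shared features and inheriting the old sentence; the flat feature encoding is again an illustrative assumption of ours, not the program's representation.

# A sketch of rule generalization: the features common to the old and
# new cases become the rule's left-hand side, and the old sentence is
# inherited.

def form_rule(old_case, new_case, sentence):
    lhs = {f: v for f, v in old_case.items()
           if f != "sentence" and new_case.get(f) == v}
    return {"if": lhs, "sentence": sentence}

def rule_applies(rule, case):
    return all(case.get(f) == v for f, v in rule["if"].items())

crime0 = {"statute": "murder", "result": "kill", "action": "stab-knife",
          "initiator": "offender", "responding-to": "slash-knife",
          "force": "escalated-retaliation", "intended": True,
          "sentence": (40, 50)}
crime1 = dict(crime0, sentence=None)

rule = form_rule(crime0, crime1, sentence=(40, 50))
print(rule["if"])                  # the common features become indices
print(rule_applies(rule, crime1))  # True -> CRIME1 inherits 40-50 years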
3.5 Rule Differentiation

When JUDGE finds a rule stored with a retrieved case, it tries to apply it if it finds that key features of the rule match with features of the input case. If features of the input case differ from features of the rule, the sentence associated with the rule is modified accordingly. An example of rule modification is shown below.

(The input case -- CRIME2: The offender, Tim, is charged with one count of Murder. Tim was involved in a fight with David, which David started. They traded blows, and after David knocked Tim to the ground, Tim stabbed David several times and killed him.)

Using general rule indexed under CRIME1.
Checking for feature similarity:
FOR victim starting the fight -- failure. General rule in CRIME1 applies to offender starting fight.
FOR responding to harm at knock-down level -- failure. General rule in CRIME1 applies to response at slash-knife level.
Handling failure to match on features -- features will be added as new indices to rule.
- Reducing the sentence slightly because the eventual victim started the violence in the current situation.
- Increasing the sentence moderately because the offender responded to a lesser degree of violence in the current act than the rule accounts for.

The system stores a new rule in memory with features that reflect the modifications it makes, including those to the sentence. The sentence for the above case changed to 45-50 years.

4 Conclusions

The process described here involves subjective reasoning and learning in a task of relative empirical assessment with little external feedback. This case-based reasoning involves comparing a previous similar situation with an input one, and then extracting an assessment for the new case, based both on the assessment previously assigned to the older case, and on the differences found between them. Rules which generalize assessments for particular feature combinations can also be derived, and can refer illustratively back to underlying cases. The case-based process requires several kinds of knowledge and functional abilities:

1. Previous situations must be kept at hand to compare with input cases.
2. Some notion of similarity must be defined, such that only similar previous situations will be retrieved.
3. A related notion of significant difference must be defined so that cases may be compared (and thus differentiated) in a meaningful way.
4. The outcome of comparisons must correlate with the assignment of relative assessments.
5. Finally, the new situation must be stored along with the older ones with respect to its relationship with them, and in such a way that it can be located and used in the future.

These steps are required in general for learning from several examples, as opposed to one-shot or single-instance learning.

Acknowledgements

I am indebted to Roger Schank, Chris Riesbeck, Bob Abelson, Eduard Hovy and Larry Birnbaum for extensive comments and discussion on this work.

References

1. Bain, W. M. Toward a Model of Subjective Interpretation. Technical Report 324, Yale University Department of Computer Science, July 1984.
2. Bain, W. M. Case-based Reasoning: A Computer Model of Subjective Assessment. PhD thesis, Yale University, 1986.
3. Clancey, W. J. Heuristic Classification. Artificial Intelligence, 1985, 27, 289-350.
4. Pennington, N., and Hastie, R. Juror Decision Making: Story Structure and Verdict Choice.
Factorization in Experiment Generation
Devika Subramanian and Joan Feigenbaum
Department of Computer Science
Stanford University
Stanford, CA 94305
* The first author is supported by an IBM fellowship. The second author's work was supported by a Xerox fellowship.

ABSTRACT
Experiment generation is an important part of incremental concept learning. One basic function of experimentation is to gather data to refine the existing space of hypotheses [DB83]. Here we examine the class of experiments that accomplish this, called discrimination experiments, and propose factoring as a technique for generating them efficiently.

I Introduction
The need to generate experiments that discriminate between sets of hypotheses arises in the context of a learner using the version space algorithm [Mit78][Mit83]. Here we show how implicit independence relations in the concept language can be used to factor the version space of hypotheses. We analyze the computational advantages gained by doing experiment generation using the factors. This paper is organized as follows. Section II describes the single concept learning problem that provides the setting for our investigation of the discrimination-experiment-generation (DEG) problem. Next, we introduce the blocks world example that will be used throughout this paper. In Section IV we characterize the DEG problem and explain why it is hard. Section V briefly describes two sources of information that can be used to make it tractable; one is domain-specific information, the other is knowledge of independence between parts of the concept being learned, which is domain-independent. In this section we also outline how this independence allows us to factor the version space and generate experiments by working with the factors. The next two sections are a formal analysis of factoring: Section VI demonstrates the conditions under which a version space can be factored (under the independent credit assignment (ICA) assumption) and provides an optimal strategy for generating experiments in the factored space. Section VII does a similar analysis for the case that ICA is not available. The tradeoffs associated with factoring along with a cost comparison are presented in Section VIII. Experimental results using our implementation of factoring are sketched in Section IX. Finally, Section X highlights the main contributions of this paper and concludes with a proposal for future work on this problem.

II The Single Concept Learning Problem
The single concept learning problem [Mit78] is:
Given:
• a first order concept language C,
• a first order instance language I,
• a set P of sentences in I, containing positive instances of the concept to be learned,
• a set N of sentences in I, containing negative instances of the concept to be learned,
• the ⊨ relation between sentences in C and I that indicates when an instance matches a concept (i ⊨ c), and
• a biasing theory T that describes which of several alternative descriptions of a concept is more plausible.
Find: the concept description (represented as a sentence c in C) that is consistent with (P, N), i.e.
• ∀p. p ∈ P ⇒ p ⊨ c
• ∀n. n ∈ N ⇒ n ⊭ c
Typically, the learner is given sets P and N that fail to determine c uniquely (that such a c exists follows from the representability assumption made in the version space algorithm [Mit78]); thus the learner constructs the set VS of descriptions that are consistent with the observed instances. VS is called the version space of the concept to be learned.
The learner now attempts to gather more information about the concept by using instances in I (that will be classified by a critic/teacher) to eliminate some of the concept descriptions in VS. This process is iterated until a single concept description survives. Finding the sequence of instances in I that will accomplish this is the discrimination-experiment-generation problem. If more than one concept description remains and the instances in I do not tell them apart, the learner uses T to select the most plausible one.

III Blocks World Example
The single concept learning problem will be illustrated with the following blocks world example. We define the vocabulary of the concept language C1. The constants of C1 are
• x, y, z, ... (names of blocks)
• red, green, any-colour (colours of blocks)
• cube, brick, wedge, pyramid, any-shape (shapes of blocks)
The predicates of C1 are
• shape: name of block × shape of block → {T, F}
• colour: name of block × colour of block → {T, F}
The pure predicate language constructed from this vocabulary is C1. The following are relations between well-formed formulae in C1:
(if (shape $x brick) (shape $x wedge))
(if (shape $x cube) (shape $x wedge))
(if (shape $x wedge) (shape $x any-shape))
(if (shape $x pyramid) (shape $x any-shape))
(if (colour $x red) (colour $x any-colour))
(if (colour $x green) (colour $x any-colour))
We can express these relations with the directed acyclic graphs (DAGs) in Figure 1.
[Figure 1: the generalization DAGs, with cube and brick below wedge, wedge and pyramid below any-shape, and red and green below any-colour.]
They are used to construct generalizations/specializations of a concept in C1. If x logically implies y then x is said to be more-specific than y. Logical implication defines a partial order on the sentences of C1. A concept is generalized by constructing its logical consequences. A concept is specialized by constructing the sentences in C1 that it can be deduced from. We use the shorthand notation cube for (shape $x cube), etc., and red for (colour $x red), etc. Without loss of generality, we assume I ⊆ C. This is referred to as the single representation trick. However, we do not regard it as a trick, because the only other alternative under the representability assumption is for I to be the domain of an injective mapping whose range is a subset of C. I can be regarded as the observational component of C. In our example,
I = {red cube, red brick, red pyramid, green cube, green brick, green pyramid}
Suppose the learner is presented with C and I as above as well as initial values:
• P = {red brick}
• N = ∅
• T = ∅
The learner constructs the VS in Figure 2 using the relations in C1. Each node of the graph is a sentence in C1. Each arc stands for logical deducibility.
[Figure 2: the initial version space constructed from the positive instance red brick.]
We describe how the version space is updated in response to a labelled instance. Suppose the learner asks the teacher whether red cube is a positive instance. The two possible updates to the VS are shown in Figure 3.
[Figure 3: the two possible updates, VSi+ and VSi−.]
Formally,
• VSi+ = { c | c ∈ VS and i ⊨ c }
• VSi− = VS − VSi+
This is called candidate elimination [Mit78].
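The update rule is mechanical enough to sketch in a few lines. The following is our own minimal illustration of candidate elimination on this example, assuming hypotheses and instances are encoded as (colour, shape) pairs and the generalization DAGs as reachability sets; it is not the authors' implementation.

    # Candidate elimination in the blocks world (illustrative encoding).
    # An instance i matches a hypothesis c (i |= c) when each atom of i
    # implies the corresponding atom of c under the generalization DAGs.
    SHAPE_UP = {"cube": {"cube", "wedge", "any-shape"},
                "brick": {"brick", "wedge", "any-shape"},
                "wedge": {"wedge", "any-shape"},
                "pyramid": {"pyramid", "any-shape"}}
    COLOUR_UP = {"red": {"red", "any-colour"},
                 "green": {"green", "any-colour"}}

    def matches(instance, hypothesis):
        colour_i, shape_i = instance
        colour_c, shape_c = hypothesis
        return colour_c in COLOUR_UP[colour_i] and shape_c in SHAPE_UP[shape_i]

    def update(vs, instance, positive):
        """Replace VS by VSi+ or VSi- depending on the teacher's label."""
        vs_plus = {c for c in vs if matches(instance, c)}
        return vs_plus if positive else vs - vs_plus

    # Initial VS for P = {red brick}: all generalizations of (red, brick).
    vs = {(c, s) for c in ("red", "any-colour")
                 for s in ("brick", "wedge", "any-shape")}
    vs = update(vs, ("red", "cube"), positive=True)
    print(sorted(vs))  # the 4 hypotheses whose shape is wedge or any-shape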
IV The Discrimination Experiment Generation (DEG) Problem
The DEG problem is that of finding a minimal sequence of instances in I that will cause the VS to converge to a single concept description. We study this problem under the following assumption.
• All hypotheses in the VS are equally likely. This means that the learner has no a priori basis for preferring one hypothesis over another (i.e. T = ∅).
A strategy for DEG is a policy for choosing the next instance in the experiment sequence. [Sub84] gives a formal proof that the general DEG problem is NP-hard and that the strategy presented below is optimal under the assumption above. Given that all hypotheses are equally likely, the probability pi+ that an instance i in I is a positive instance of the concept is:
pi+ = |VSi+| / |VS|
The probability that i is a negative instance of the concept being learned is:
pi− = 1 − pi+ = 1 − |VSi+|/|VS| = |VSi−|/|VS|
The expected size of the version space if i is chosen as the next instance in the experiment sequence is
E(i, VS) = pi+ |VSi+| + pi− |VSi−| = (1/|VS|)(|VSi+|² + |VSi−|²)
Notice that an instance that all hypotheses match (resp. don't match) has pi+ = 1 (resp. pi− = 1). Such an instance has E(i, VS) = |VS|, confirming our intuition that it has zero discriminatory power. The function E(i, VS) has a minimum value of |VS|/2, which is achieved when |VSi+| = |VSi−|. Thus the best instance halves VS at every step, resulting in an experiment sequence of length O(log |VS|). Not every VS has a halving instance; if none exists, we choose one that has the smallest value for E(i, VS). Our strategy for DEG can now be stated simply as: Select the instance that minimizes E(i, VS).
The time complexity of a generate-and-test implementation of DEG is |VS| |I| t, where t is the time to compute the ⊨ relation. This computation is infeasible because the version space is very large even for simple concepts in C1. This naive method can be improved by using the fact that VS is partially ordered. The middle node w of the version space (the concept that is the root of half the nodes) is found, and then an instance which matches w, but matches no other concept descriptions more specific than w, is selected. However, not all version spaces have middle nodes; very branchy partial orders still need Ω(|VS| |I| t) amount of processing for the selection of the best instance. Given the partial order on VS, we can try to generate the next best instance by using the boundary sets (S and G in [Mit78]) and the nature of the generalization mechanism (deduction). Estimation of E(i, VS) in the general case is very difficult without constructing the entire version space. Hence we have looked for sources of knowledge that reduce the size of the version space, allowing the optimal instance sequence to be constructed by the naive generation method in a smaller space.
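The minimizing strategy can be stated operationally. Below is our own generate-and-test rendering of it, using a toy encoding of the blocks world example; it is exactly the naive O(|VS| |I| t) method the text criticizes, shown only to make the scoring function concrete.

    # Generate-and-test DEG: score every instance by E(i, VS) and pick
    # the minimizer.
    UP = {"cube": {"cube", "wedge", "any-shape"},
          "brick": {"brick", "wedge", "any-shape"},
          "pyramid": {"pyramid", "any-shape"},
          "red": {"red", "any-colour"},
          "green": {"green", "any-colour"}}

    def matches(instance, hypothesis):  # i |= c
        return all(h in UP[i] for i, h in zip(instance, hypothesis))

    def expected_size(vs, instance):
        plus = sum(1 for c in vs if matches(instance, c))  # |VSi+|
        minus = len(vs) - plus                             # |VSi-|
        return (plus ** 2 + minus ** 2) / len(vs)          # E(i, VS)

    def best_instance(vs, instances):
        return min(instances, key=lambda i: expected_size(vs, i))

    I = [(c, s) for c in ("red", "green")
                for s in ("cube", "brick", "pyramid")]
    VS = {(c, s) for c in ("red", "any-colour")
                 for s in ("brick", "wedge", "any-shape")}
    print(best_instance(VS, I))  # ('green', 'brick'): E = 3, halves VS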
V Exploiting the structure of the version space
We will now show how the generation of the best instance can be speeded up if we know some properties of the concept to be learned. An example of such a property is that the blocks world structure that is being learned is a stable one. This is a domain-specific property by which the size of the VS can be reduced directly, without the use of instances in I. The learner simply prunes those descriptions that do not conform to its stability theory T.¹ DEG can then be used to learn the concept in the smaller VS. A domain-independent property of the concept that can be used to make DEG more efficient is its factorability. The concept red wedge is factorable into red and wedge. All the concepts in our example VS are factorable into the colour component and the shape component. This is because colour and shape are independent relations in C1. This suggests dividing the original learning problem into two independent learning problems: learning the shape and learning the colour. Two separate version spaces can be maintained, one for each component, and experiment design can be done by obtaining the best instance in each factor (by the method indicated in Section IV).
[Figure: the two factor spaces, a colour space (red below any-colour) and a shape space (brick and wedge below any-shape).]
If credit or blame is assigned independently to each component of the instance, we say that independent credit assignment (ICA) is available. If the VS is factorable into k almost equal factors of size n, and if the induced factors in I are each of size m, then we have reduced a problem of size n^k m^k t to k problems of size nmt, under ICA. This is clearly a significant computational gain. The factors of the VS can be collapsed into singletons in either of two ways:
• In Parallel. This is the optimal strategy to use if ICA is available. Each factor will be guaranteed to reduce in size by the maximal amount by DEG.
• In Series. This is the basis for hierarchical learning as illustrated in Figure 4. Because the factors are logically independent, the order in which they are learned does not matter. If ICA is not available, this is the preferred strategy. The learner collapses one VS factor at a time. The negative instances generated will be near-misses [Win70].
[Figure 4: hierarchical learning, abstracting shape away and then colour away.]
¹ However, after this pruning, the VS may no longer be representable in terms of its boundary sets.

VI Formal analysis of factoring under ICA
We will relate the structural property of Cartesian factorability of VS to the logical independence of parts of the concept being learned. We use graph theory to formalize the notion of Cartesian factorability. We introduce the necessary definitions first:
Definition 1: A theory T is factorable iff ∃ T1, T2, ..., Tn such that T = T1 ∧ T2 ∧ ... ∧ Tn. If each of the Ti is unfactorable, we call them the irreducible factors of T.
Definition 2: A partially ordered set VS of theories is factorable iff ∃ VS1, VS2, ..., VSk such that
• VS = { vs1 ∧ vs2 ∧ ... ∧ vsk | vs1 ∈ VS1, vs2 ∈ VS2, ..., vsk ∈ VSk }, written as VS = VS1 × VS2 × ... × VSk
• vs1, vs2, ..., vsi−1, vsi+1, ..., vsk ⊭ vsi, for 1 ≤ i ≤ k
• Each VSi respects the partial order on VS, i.e. (vs1 ∧ vs2 ∧ ... ∧ vsi ∧ ... ∧ vsn) ≤ (vs1 ∧ vs2 ∧ ... ∧ vs′i ∧ ... ∧ vsn) ⇔ vsi ≤i vs′i
Again, if each of the VSi's is unfactorable, they are called the irreducible factors of VS.
Definition 3: If D1 and D2 are DAGs, then their Cartesian product D (denoted as D1 × D2) is defined as follows: The vertex set V(D) is the Cartesian product of the vertex sets of D1 and D2. The arc set of D is A(D) = { (x, y) → (u, v) : (x = u and y → v ∈ A(D2)) or (y = v and x → u ∈ A(D1)) }. Clearly, this definition can be generalized to the product of k factors D1, D2, ..., Dk. A DAG D is called Cartesian-factorable if there are two DAGs D1 and D2, both with |V(Di)| ≥ 2, such that D = D1 × D2. A DAG with no nontrivial factorizations is called prime. Cartesian multiplication of DAGs is a commutative operation. Every finite DAG has a unique set of prime factors that can be found in polynomial time [Fei86]. The factors of a factorable concept can be generalized or specialized independently. This requires that C be factorable.
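Definition 3 translates directly into code. Here is a minimal sketch, with DAGs represented as vertex and arc sets; this illustrates the definition itself, not the polynomial-time factoring algorithm of [FHS85].

    # Cartesian product of two DAGs, per Definition 3: an arc of the
    # product changes exactly one coordinate, along an arc of that factor.
    def dag_product(V1, A1, V2, A2):
        V = {(x, y) for x in V1 for y in V2}
        A = ({((x, y), (x, v)) for x in V1 for (y, v) in A2}
             | {((x, y), (u, y)) for y in V2 for (x, u) in A1})
        return V, A

    # The two factor lattices of the running example.
    V_colour = {"red", "any-colour"}
    A_colour = {("red", "any-colour")}
    V_shape = {"brick", "wedge", "any-shape"}
    A_shape = {("brick", "wedge"), ("wedge", "any-shape")}

    V, A = dag_product(V_colour, A_colour, V_shape, A_shape)
    print(len(V), len(A))  # 6 vertices, 7 arcs (2*2 + 3*1)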
C1 in our example is presented in factored form. If C is finite, we can use the polynomial-time graph-factoring algorithm in [FHS85] [Fei86] to make the factorization explicit. This gives us a way of discovering independence relations in the concept language.
Observation 1: If we construct all generalizations and specializations of a factorable theory using a factored C, the resulting (partially ordered) set of theories is factorable. The DAG that represents this set of theories is Cartesian-factorable. This means that the syntactic operation of graph factoring in Definition 3 corresponds to the semantic notion of logical factoring of the version space (Definition 2 above).
Observation 2: The initial version space of a factorable concept is a Cartesian-factorable DAG. This is because the first positive instance is factorable and the version space is constructed as in Observation 1 above.
Observation 3: Under ICA, the update operations on the version space are guaranteed to preserve its factorability. The update algorithm under ICA for the factored version space is:
for j = 1 to k do
  if ij is positive then replace VSj by VSj+
  else replace VSj by VSj−
The updated unfactored version space is the product of the updated factors.
Observation 4: The best strategy for generating an instance when the version space is factorable is to choose the best instance in each factor (i.e. the one that splits each factored space in half). Because all hypotheses in VS are equally likely and all factors are logically independent, all hypotheses in VSi are also equally likely. Thus the best strategy in each subspace is choosing an instance with the minimal E(i, VS).
Observation 5: We can compute E(i, VS) of every instance in the unfactored space from (I1, |VS1+|, |VS1−|) and (I2, |VS2+|, |VS2−|) tables.
Construction: Consider the instance i = i1 ∧ i2. We have |VS+| = |VS1+| · |VS2+| and |VS−| = |VS| − |VS+|. We can then calculate E(i, VS) using |VS+| and |VS−|. The table below shows this computation for our example.²

  i    a  b  E     |  i1   c  d  E1   |  i2   e  f  E2
  RB   6  0  6     |  R    2  0  2    |  B    3  0  3
  RC   4  2  10/3  |  G    1  1  1    |  C    2  1  5/3
  RP   2  4  10/3  |                  |  P    1  2  5/3
  GB   3  3  3     |
  GC   2  4  10/3  |
  GP   1  5  13/3  |

  Best i: GB    Best i1: G    Best i2: C or P

² a is |VS+|, b is |VS−|, c is |VS1+|, d is |VS1−|, e is |VS2+|, f is |VS2−|.

This construction generalizes to the case where VS has more than 2 factors.
Observation 6: The best instance in the unfactored space is not the conjunction of the best instances in the factored spaces. The nature of the feedback obtained from the teacher is different in the two cases. In the unfactored space, if green brick has been marked negative, then the version space is updated so that all three possibilities (green is OK but brick isn't, green isn't OK but brick is, green and brick are both not OK) are kept. In the factored space, because of ICA, we get more information per instance from the teacher: thus only one of the three possibilities above will be maintained.
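The construction in Observation 5 is cheap to carry out. A sketch under the same assumptions, where the per-factor tables hold the (c, d) and (e, f) counts from the example and |VS| = |VS1| · |VS2|:

    # Compute E(i, VS) for every unfactored instance i = i1 ^ i2 from the
    # per-factor (|VSj+|, |VSj-|) tables, as in Observation 5.
    factor1 = {"R": (2, 0), "G": (1, 1)}               # colour, |VS1| = 2
    factor2 = {"B": (3, 0), "C": (2, 1), "P": (1, 2)}  # shape,  |VS2| = 3
    VS_SIZE = 2 * 3

    table = {}
    for i1, (c, d) in factor1.items():
        for i2, (e, f) in factor2.items():
            a = c * e                # |VS+| = |VS1+| . |VS2+|
            b = VS_SIZE - a          # |VS-| = |VS| - |VS+|
            table[i1 + i2] = (a, b, (a * a + b * b) / VS_SIZE)

    for i, (a, b, E) in sorted(table.items(), key=lambda kv: kv[1][2]):
        print(i, a, b, round(E, 2))
    # GB comes out best (E = 3), matching the table in the text.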
VII Analysis for the non-ICA case
Usually, ICA is not available in the real world (e.g., digital circuit diagnosis), and the analysis is complicated by the fact that the learner has to do the credit assignment on negative examples by itself in order to update its factors. We now present a strategy for the learner under these conditions.
1. Generate the instance i that is the conjunction of all ij's, where each ij is the best instance in each factor.
2. Ask the teacher if i is positive or negative.
3. If i is positive, replace every VSj by VSj+.
4. If i is negative, the learner needs to find which of the ij's are negative. Let p = p1 ∧ p2 ∧ ... ∧ pj ∧ ... ∧ pk be a known positive instance.
   for j = 1 to k do
     Ask the teacher about p1 ∧ ... ∧ pj−1 ∧ ij ∧ pj+1 ∧ ... ∧ pk.
     If it is positive, replace VSj by VSj+ else replace VSj by VSj−.
Because each ij is potentially faulty, the learner asks k questions to do the credit assignment. This credit assignment method corresponds closely to Winston's near-miss idea. A more sophisticated credit assignment strategy uses binary search. The learner replaces k/2 of the factors in a known positive instance. If that instance is labelled positive, it exonerates k/2 of the factors in one fell swoop. If it is labelled negative, further credit assignment instances need to be generated using the same strategy.
Observation 7: The version space update algorithm does not preserve factorability under non-ICA. The instances that the learner generates to do the credit assignment are guaranteed to restore factorability to the version space. Without the credit assignment instances, under non-ICA, the updated version space will be a disjunction (disjoint union, in graph-theoretic terms) of the 2^k − 1 possible updates, where k is the number of factors. This graph is prime, even though each of its 2^k − 1 components is factorable. The credit assignment instances seek to isolate that update and hence restore factorability to the version space.
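The k-question loop in step 4 above can be sketched directly. The teacher is modelled as an oracle over a hidden target concept, and the encodings and names are ours; this illustrates only the near-miss credit-assignment strategy, not the VS system's code.

    # Near-miss credit assignment under non-ICA: probe each factor ij
    # inside a known positive instance p and update that factor's space.
    UP = {"cube": {"cube", "wedge", "any-shape"},
          "brick": {"brick", "wedge", "any-shape"},
          "pyramid": {"pyramid", "any-shape"},
          "red": {"red", "any-colour"},
          "green": {"green", "any-colour"}}

    TARGET = ("any-colour", "wedge")      # hidden concept held by teacher
    def teacher(instance):                # oracle: does instance |= TARGET?
        return all(t in UP[x] for x, t in zip(instance, TARGET))

    def assign_credit(i, p, spaces):
        """i was labelled negative; swap each ij into p and ask."""
        for j, ij in enumerate(i):
            probe = p[:j] + (ij,) + p[j + 1:]     # near-miss instance
            plus = {c for c in spaces[j] if c in UP[ij]}  # VSj+ for ij
            spaces[j] = plus if teacher(probe) else spaces[j] - plus
        return spaces

    spaces = [{"red", "any-colour"}, {"brick", "wedge", "any-shape"}]
    print(assign_credit(("green", "pyramid"), ("red", "brick"), spaces))
    # -> [{'any-colour'}, {'brick', 'wedge'}]: blame falls on the shape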
VIII Tradeoffs associated with factoring
The problem above points to a tradeoff associated with factoring: in general, the finer the factoring, the harder the experiment generator has to work on the credit assignment. This gives us a way of choosing how large k (the number of factors) should be, given that we don't have to factor down to irreducible factors. The formal cost comparison to be made is:
n^k m^k t + r(a + b) > kmnt + r(a + kb) + F
where
1. a is the number of positive instances needed to learn the concept in the unfactored space,
2. b is the number of negative instances needed,
3. r is the cost of asking a question of the teacher, and
4. F is the one-time cost of constructing the factored space.
This equation characterizes the conditions under which factoring is a good idea during experiment generation. Usually, the one-time cost of factoring is much smaller than the expected gain obtained by the use of factoring. In the ICA case, if the cost of asking questions is small, factoring up to irreducible factors is optimal. Factoring in the non-ICA case is a win exactly when the inequality above holds: r will control how large k will be. Since r may vary from example to example, we use an average r in the above equation to compute k. The savings obtained by generating instances in the factored space are offset by the cost of asking the credit assignment questions (the cost of generating these is negligible). A judicious choice of k (which is constrained by the definition of factorability) will ensure that factoring leads to computational gains in the non-ICA case.
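As a worked illustration, the inequality can be evaluated directly to decide whether factoring pays off; every parameter value below is invented for the example.

    # Evaluate the Section VIII cost comparison: factor iff
    #   n**k * m**k * t + r*(a + b) > k*m*n*t + r*(a + k*b) + F
    def factoring_wins(n, m, t, k, r, a, b, F):
        unfactored = n**k * m**k * t + r * (a + b)
        factored = k * m * n * t + r * (a + k * b) + F
        return unfactored > factored

    # e.g. 3 factors of size 27, 9 induced instances per factor, unit
    # match time, moderately priced questions (all values illustrative):
    print(factoring_wins(n=27, m=9, t=1, k=3, r=10, a=6, b=3, F=500))
    # -> True: the unfactored cost dominates by orders of magnitude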
IX Implementation
The factoring method introduced in this paper has been used in the context of an implementation (called VS) of the generalized version space algorithm [Sub84]. VS is built on top of MRS, a logic-based representation system. VS was tried out on several concept learning problems in the blocks world. The time for learning a concept increased exponentially with the number of conjuncts in it. Factoring instances and working with multiple version spaces proved to be a very powerful strategy. The size of the unfactored version space for Winston's arch problem [Win70] was approximately 3^9. VS used the cost formula in Section VIII to determine the optimal number of factors (3, of size 27 each), and the arch was learned using 9 instances which were generated by the experiment generator in VS. The definition of factoring presented here hinges on the conjunctive factorability of the concepts in the version space. We have a similar notion of factoring that applies to disjunctions, except that we use disjoint union (instead of Cartesian product) as the composition operator. We have used this in the design of experiments in digital circuit diagnosis. Only constant factor speedups have been obtained in this case. These two definitions can be combined to factor more complex version spaces.

X Conclusions
In this paper we proposed factoring as a technique for generating discrimination experiments efficiently. We analyzed the applicability conditions for this technique and presented optimal strategies for its use under a varying set of assumptions. The effectiveness of this method has been experimentally verified on several examples. This work gives computational justification for some well-known maxims in the design of concept languages. The independence between relations should be stated explicitly so that they can be directly used to factor concepts. Also, the choice of relations should be such that they reflect independences in the world. The results obtained here can be extended to handle partial independence relations, in which some communication between the factors is needed. The implementation in [Sub84] deals with the sharing of variable bindings between factors. Further work includes relaxing the uniform probability assumptions made in the analysis of factoring and building a logical framework in which intelligent experiment generation strategies can be derived.

Acknowledgements
We thank Professor Genesereth and Stuart Russell for their valuable criticisms on an early draft of this paper. David Wilkins, Haym Hirsh, Chris Fraley and members of GRAIL provided useful feedback. Thanks also to the anonymous reviewers for their comments. The experiment generator was implemented on the SUMEX computer facilities at the Knowledge Systems Laboratory, Stanford University.

References
[DB83] T. G. Dietterich and B. G. Buchanan. "The Role of Experimentation in Theory Formation", in Proceedings of the International Workshop on Machine Learning, Univ. of Illinois at Urbana-Champaign, pp. 147-155, June 1983.
[FHS85] J. Feigenbaum, J. Hershberger, and A. A. Schäffer. "A Polynomial-time Algorithm for Finding the Prime Factors of Cartesian-Product Graphs", Discrete Applied Mathematics, 12, 2 (1985), 123-138.
[Fei86] J. Feigenbaum. "Directed Cartesian-Product Graphs have Unique Prime Factors that Can be Found in Polynomial Time", to appear in Discrete Applied Mathematics.
[Mit78] T. Mitchell. Version Spaces: An Approach to Concept Learning, Ph.D. dissertation, Stanford University, December 1978.
[Mit83] T. Mitchell, P. Utgoff, and R. Banerji. "Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics", in Machine Learning I, Michalski, Carbonell and Mitchell (eds.), Tioga Publishing Company, 163-189.
[Sub84] D. Subramanian. "Experiment Generation with Version Spaces", HPP-84-45, December 1984, revised March 1986.
[Win70] P. H. Winston. "Learning Structural Descriptions from Examples", The Psychology of Computer Vision, Winston, P. H. (ed.), McGraw Hill, NY, 1975.
GENERATING PREDICTIONS TO AID THE SCIENTIFIC DISCOVERY PROCESS
Randy Jones
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92717

Abstract
NGLAUBER is a system which models the scientific discovery of qualitative empirical laws. As such, it falls into the category of scientific discovery systems. However, NGLAUBER can also be viewed as a conceptual clustering system since it forms classes of objects and characterizes these classes. NGLAUBER differs from existing scientific discovery and conceptual clustering systems in a number of ways.
1. It uses an incremental method to group objects into classes.
2. These classes are formed based on the relationships between objects rather than just the attributes of objects.
3. The system describes the relationships between classes rather than simply describing the classes.
4. Most importantly, NGLAUBER proposes experiments by predicting future data. The experiments help the system guide itself through the search for regularities in the data.

I Introduction
Studying scientific discovery from a machine learning point of view is still a relatively new idea. So far there have been only a few systems which attempt to model aspects of this area [5,7]. In this paper we will discuss NGLAUBER, a system which searches for regularities in scientific data and makes predictions about them. NGLAUBER is based on an earlier system called GLAUBER [6] but contains a number of differences from that system. NGLAUBER accepts its input incrementally and proposes experiments to improve its characterizations of the input. We will discuss NGLAUBER's architecture and give a simplified example of NGLAUBER at work. Finally, we will discuss NGLAUBER's relation to other systems in the area of machine learning. These include conceptual clustering systems and systems which model scientific discovery.

II Data representation in NGLAUBER
To begin our discussion of the NGLAUBER system, we will describe the data representation scheme. NGLAUBER deals with four basic entities. These are facts, nonfacts, predictions and classes. The two basic units of data are objects and statements. Objects are the items which are described by statements. Anything can be an object, from a block to a chemical to a qualitative description. Every statement is composed of a relation name, a set of input objects (or independent variables), and a set of output objects (or dependent variables). The general form is relation({Inp1, ..., Inpm}, {Out1, ..., Outn}). For example, a statement describing the taste of the chemical NaCl would look like taste({NaCl}, {salty}), which simply means that NaCl tastes salty. Statements may also be quantified over any classes that have been formed. For instance, if the salts were the class of all chemicals which taste salty, then the following fact might appear in memory:
∀x ∈ salts: taste({x}, {salty})
If some but not all of the salts tasted salty, this statement would be existentially quantified (∃) rather than universally quantified (∀). Facts, nonfacts and predictions are just sets of statements which have special meanings to NGLAUBER. A fact simply represents a statement which NGLAUBER knows is true. In contrast, a nonfact looks just like a fact, but it represents a statement which NGLAUBER knows is not true.
A prediction is represented as a pair of statements (Prediction, For), where Prediction is a statement which NGLAUBER believes may be true and For is a statement which is true if the Prediction is true. An example is the prediction
Prediction: taste({KCl}, {salty})
For: ∀x ∈ salts : taste({x}, {salty})
If NGLAUBER makes this prediction, it is saying that it will know that all salts taste salty if it sees that KCl tastes salty (KCl is a member of the class of salts). The Prediction part of a prediction is always an instantiation of the For part. Classes are sets of objects which appear as input or output values in various statements. A class is formed when a set of objects is found to have properties in common based on existing facts. The class of salts might be stored in memory as
salts = {NaCl, KCl}
The classes are used to allow simple statements to be rewritten as quantified statements as shown above. The exact methods for forming classes and quantifying statements will be detailed later.
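These representational conventions are easy to make concrete. The sketch below uses our own encoding, not the program's:

    # Sketch of NGLAUBER's data items: statements, facts, predictions,
    # and classes (encoding is ours, not the system's).
    from collections import namedtuple

    # relation({Inp1..Inpm}, {Out1..Outn}), e.g. taste({NaCl}, {salty})
    Statement = namedtuple("Statement", ["relation", "inputs", "outputs"])
    # (Prediction, For): Prediction instantiates the quantified For part
    Prediction = namedtuple("Prediction", ["prediction", "for_stmt"])

    facts, nonfacts, classes = set(), set(), {}
    classes["salts"] = frozenset({"NaCl", "KCl"})
    facts.add(Statement("taste", frozenset({"NaCl"}), frozenset({"salty"})))

    # "forall x in salts: taste({x}, {salty})": quantified statements can
    # be encoded by letting an input range over a class name.
    quantified = Statement("taste", ("forall", "salts"), frozenset({"salty"}))
    pending = Prediction(
        Statement("taste", frozenset({"KCl"}), frozenset({"salty"})),
        quantified)
    print(pending.prediction)  # satisfied once taste({KCl},{salty}) is seen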
I’redic- t.l(Jllh are Inade which will allow ttte systetn to easily expand iis facts wt1r.n the predicL.lons are satisfied. ‘I’tre IJredlctiun nrechanisrrl works or1 the assunlption that every t~xl~l,cIlltially quantified fact can eventually become a universally quarlLifir:d fact if the proper dat,a IS seen. Referring to the exam- ple iI1 I,tle previous section, when clusbl is formed the following predlct.iotl is also made prediction: ahuye( ( t,lockZ}, {cube}) for: V1: t classl. s/~~~~c({G}, {cube}) ‘l‘tlt: Implicit assumption irl LIIIS type of prediction making is Lhal tIlc. domain is highly regular. NC~,AIIH~~H believes t,hat if “1~1 coi~trdst, ( ~I,AlIUlGIi IS &LI~ .Lll at c~llce systtnl. l3ec;rutie uf this, Nl :LAIII:EII’S cxiteriurl for faming a HZW class: is quite ;L bit differcut frtim I ;I.Al’lII~:li’s. objects have one thing in common then they will probably have niany things in common. Therefore, when it sees that block1 and block:! are both blue and that block1 is a cube, it decides tllat block2 will probably be a cube too. The predictions in NGLAuL~EK’s memory are generally highly iliterrelated There c’arl be many predictions with the same yre- diction part. I,ikewise, there can be many predictions with the sarrie 107 part. The set of predictions with the same for part is called a prediction yrouy. Also, the fur statement that is com- mon to every prediction in a group is called the hyyothe& of the group. Another way to think of the predictions in a predic- tion group is as a conjunctive implication. To know that the for statement in a prediction is true, it is not enough for just one prediction to he satisfied. Rather, every predict,ion in the same group must be satisfied before it is known that the fur statement (i.e. the hyp~Jh!SiS) is true. It can be seen that the predictions also conveniently solve thr loss IJ~ inforrnatiori problem mentioned earlier. When the fact shupe( { block 1)) {cube}) is q uantified to 3~ E classl: shape({r},{cut)e}), p re IC ions are made at the same time. d’ t These predictions act as sort of a sieve. They tell NGLAUBEH wllich statenlents have not yet been seen, so it also knows which statements have been seen. The net result is a reorganization of infortrration with no loss. A benefit of this is that NGLA~JBEH will COIIW up with the same classes and facts for a given input set regardless of the order of Ihe input. This is a trait which IS often not (‘i hi bited by increrriental systems. VI The predictiorl satisfactiorl rrwchanisrn Working hand in hand with the prediction mechanism is the predict,ion salisfaction mechanism. The prediction satisfaction nlecllanisrn is invoked by the introduction mechanism to see if the current fact has been predicted by the systerrl. Satisfying a predicliotl is usually just a matter of ‘checking off’ the fact from the list of predictions kept in rrterrtory. When a predicted fact is introduced to the system, all predictions uf that fact are removed from tI1emory. Often this is the only thing that happens when this trterhartisrtt is invoked. A special case occurs when the last predictiun in a predic- tiott group is removed from memory. As explained earlier, NC; LAUHKR knows at, this point that the hypothesis of the pre- dicLion group is true. This allows NCLALIBER to make stronger claitrts about the data it is considering. When this occurs, the prediction mechanism invokes the introductiott tttecttanisrrt with the tleWly CCJrlfirrrlet~ fact. 
‘CbS CotIlpleteS the rf2cllrSiVe cycle be- twectt t.tte first t,hree tttrcttartistrts. When NC;I,A~IHM introduces a new fac.L to itself via the prediction satisfaction rrlechanism, the cycle begirls again. New predictions tttay be made or satis- fied, and new classes tnay be formed by the itttroduction of the 11ew fact VII The denial rnecharlisrn ‘l‘l~e final IIlechanism lo be discussed is a bit differettt front the prevtous three. lt is a separate entity which cattttot be invoked by the other nlechanisms. Neither does it call any of them into dctiuri. ‘l’tte task of the denial tttechanisrrt is to correctly reshape NGLAIIBER’S nlemory whell a prediction has been niade which turns out to be false. This rttechanisrtt does not do attyt,hing to Ltie facts in rtietriory. This is because the facts 011ly summarize everythiug which NGI,ALIH~+X knows to be true. ‘1’0 deny sorrte- thing NGLAIJ IJIM knows to be a fact, would mean that the data 1s ttoisy. Currently NGL,ALIUC:I< is not designed lo deal wtth noise so ttlere would be unpredictable consequences. The real effect of t11e denial tt&tanisrtt is to prune down the number of predictiotts itt tttetttory. We saw earlier that all the predictions in a prediction group had to be satisfied in order 514 / SCIENCE for the hypolhesis of the group to be true. By way of the denial mechamsm, we can tell NGI,AUBER that one of these predictions is not true. If that is the case, then NGI,ALIUF:R knows that the hypulhesis can never be true. This revelation allows lhe denial mechanism to perform two tasks. ‘l’he first is to eliminate all predictions in the same group as the denied statement. At the same little, the staterrlent is recorded as a nonfact to keep any future prediction groups in- volving the statement from being formed. The reason for elim- inatiug the predictions is not because they have been satisfied. Rather, NGLAIJBEX no longer carts whether they are true be- cause it already knows that the hypothesis of the group is not true. ‘l’his knowledge is also the justification for the secorld task of the denial mechanism. Since the hypothesis of the prediction group cannot be true, it also qualifies as a nonfact. Therefore, the denial mechanism loops back, using the hypothesis as the dctlied statement. This can lead to more predictions being re- moved from mernory. The cycle will continue until there are 110 more predictions left which can be removed. All the while, non- facts will be recorded in ~nernory but the classes and facts will never be touched. VIII An example of NGLAUHEK at work 111 this section we give a simplified example of NGI,AIJBE~~ at work on a task. We will use the same input data as used for C:LA~~BER by Langley, et al [6]. The example is from the domaiu of eighteenth century chemistry. Given a set of reactions bctwec:Jl elements and descriptions of the tastes of the chetnicals, N~II,AIIBEK forms the classes CJf acids, alkalis and salts. The system also comes up with a set of facts which describe these c.lasses arid the interactions between the classes. Following are the data input to the system. They were en- tered in the order shown, but it should be reemphasized that NCLAIJBEH is order-independent. No matter which order the facts are input, the system will end up in the same state. I. reacts( { HCI,NaOH}, { NaCl}) 2. rtacts({ HCI,KOli}, {KCI)) 3. r~acts({lINO~,NaOll}, {NaNOx}) 4. reacts({lINOs,KOtI}, {KNOy}) 5. taste((JfCI}, {sour}) 6. laste((HNO3}, {sour}) 7. lasle({NaCl}, {salty}) 8. laste( { KCI}, (salty}) 9. 
taste( {NaNOy}, {salty )) 10. taste({KNOs}, {salty}) I I. tuste( { NaOH} , {bitter}) 12. lastc({KOtf}, {bitter}) The first five facts listed are just added into NGI,AUUER’S rnetrrory unchanged because NGLAIJBEH has found no reason to fornl a class. However, when fact number six is introduced more intcrcsting things start to happen. To begin with, N~:I,AIJUEK notices that both HCI and [IN03 taste sour. Using this knowl- edge a class contaitling those two objects is formed. We will refer to Lhe class as ‘acids’ although NGI,AIJBEK would use a generic rlaIt1c’ like ‘classI’. The generalizatiort process of the introduc- tion mechanism then alters the reacts facts to describe the new class. For instance, facts one arid three are changed to 11: c acids : reucts({z,NaOll}, {NaCI}) 11: t: acids : reacts({z,NaOtf), {NaNOs}) Facts two and four are changed similarly. Notice now that NC;I~A[JBEH. can form two new classes based on these new reacts facts. Using the new facts one and three, the class salts1 -- (NaCI, NaNOy} will be formed. Likewise, using facts two and four, NGI,AIII~E;H comes up with salts2 ~ {KCI, KN03}. After everythmg has beet1 cortlpleted, NGLA~IBEH’S rneltlury will look something like this: acids - { tlCI, 11N03} salts1 - { NaCI, NaNOy} salts2 ~ {KCl, KN03} --t Vz t saltsl ‘ix t acids : reacts({z, NaOtI}, (2)) p VS tz acids 3.z t salts1 : rearts({z, NaOli}, {z}) Vr c salts2 lz C- acids : reucts({x, KOtf}, {.z}) Vx t acids Jz t salts:! : reacts({z, KOll}, {z}) Vx i a~lcis : toa’, ((y}, {sour}) Now is a good time to point out. that the spare of quanti- fied facts is 011ly partially ordered. By examining the new facts rrtarked by arrows, fcor example, we see two descriptious which surtlrtlarize the data and yet do not subsume each other. IL would be possible for one of these facts to be true without the C>Lht3 ‘l’h~s parLial ordering is discussed more in the next sec- tion. NGI,AIIUEH finds ulf the characterizations which apply to a given set of data.*” Durirlg this whole process predictions are being made about fllture data. We ttave omitted listing thern because rnost of then1 are not true and will [lever be useful. On this and similar exam- ple TIIJIS, sixty to eighty five percetlt of N(:~,AIJBE:K’S predictions turned out Lo be fa1se.t ‘I’hese will later be reltloved with the denial rnechanisrn. tiowever, when fact seven is introduced, a useful prediction is made. Whrhr~ NGI,ALJBEK sees that NaCl tastes salty, it predicts that, NaNO3 will also taste salty. The sanle occurs with facL eight. KN03 is predicted Lo taste salty. There is a great deal more that happens when fact number eight is introduced. AL this poillt, N(:I,AIIISEI< has two distinct classes we are calling them salts! and salt,&?. When NGLAUHEII sees that mem- bers of each class have sotnething in common (i.e. they both taste salty), It decides thaL ttlese two classes should really be one class atld merges them. We will refer Lo this new class sim- ply as ‘salts’. A consequence of this merger is that facts currently in rnernory now describe only one class rather than two. This means ttlat a new class call be formed containing NaOH and KOH. This is t,he class of ‘alkalis’. 
After all appropriate quan- Lifcattons have been made to the existing facts, NCLAl1tJEH’s memory contains: 0 acids {HCI, HN03} l alkalis - { NaOll, KOH} 0 salts {NacJl, KC], NaN03, KNOs} l Vr t acids Vy t alkalis $2 E salts : reucfs((l, y}, (2)) l Vz < acids jz c salts -jy t alkalis : rmch( { .tz, y}, (2)) l Vy tr alkalis VT t acids iz c salts : rracts({r, y}, {z}) l Vy t alkalis 3, t salts 11: t acids : recrcts({z, y}, (2)) l Vz <- salts IX t acids Jy c. alkalis : reacts( {L, y}, {z}) l Vz t salts 1.c c acids Sy t alkalis : reucts({.c, y}, (z}) 0 Vr t- acids : taste( {2}, {sour}) lz i_ salts : tasle( (z}, {salty}) N~:I,AIIBEH’s ttlenlory will also contain two important predic- Lions, that NaNOs and KNOy taste salty. This ensures that when facts nine arld ten are seen, the last fact III NGI,A~~B~:H’s Itlemory will be changed to l vz t salts : tuste({z}, {saky}) Facts eleven arld twelve now sirrlply result in the facl l Vy t alkalis : taste({y}, {bitter}) “‘This is suuletllillg which ~~1,AlIl1I~1: dues Ilt,t &I. 11 at,,ps wheu it 11:~s fdd 0062 Iof the cl~aracterizatiulls whi& apply. +Of cc)urst: it, is possible to trril”r exaluples where Jl uf tile preciiitivlis ;Lre false or hone uf them de:. LEARNING / 5 15 being added to memory. The only job left is to get rid of all the useless predictions lying around. By denying all the false IJredic.tlons N<:I.AI~BER has made, such as reacts( { tiNO3, NaOlt}, { NaCI}) ~~:1.AIII~lCtt’s final contents will consist only of the classes and quantilied facts that are Itlarked with bullets (0). These final quautilied facts represent the relationship between the classes of acids, alkalis, and salts that was discovered in the eighteenth century IX N GLAUHER as a conceptual clustering system III this section, we will examine the NGLAIIBEII system using fl‘ishclr and Langley’s framework for conceptual clustering algo- rlttlIlls j2,31. This frarrlework includes three c.lasses of techniques used 111 conceptual clustering aid divides the conceptual cluster- ~ng tdsk into two main problerlls. ‘I’he three types of techniques Optirr~ization t’artitloning the object set into disjoint c.lusters 11 cerurcjlicul Creating a tree, where each leaf is an iodi- vltfual object and each interrkal node is a cluster and L’fumpiny Creating independent clusters which may over- l&kJ. I,WO problems of conceptual clustering are defined as Ayyrcyution -- The problenl of deciding which objects will tJc> irl which clusters and (Jhuructerizution ‘rhe pr(Jblern of describing the clusters CJIICC ttrey have been forrIled. N~:I,AuHEH wes an optimization technique because its classes are simply partitions of the set of objects. The classes are dis- Joilit, but they cover all lhe objecls. Actually, it is possible for obje1.t~ to end up un(.Iassifir:d but each of these can be considered its a class of one object. ‘I’tle uyyreyution problem IS scJlvet1 for N(;I,AIIHEH by the Ilt:uristlc used for fornling classes. As stated previously, classes art‘ forlned when two facts are found to differ in exactly one po- sition. ‘I’his problem has actually become simpler because of the iucrc:rnt:ntality of the system. Wheu a new fact is input, it only has to be coInpar to the exrsting facts in rrlernory in order to IJo~3itJly form a new class. ‘I’tie new class is th<.u charucterizerl by the quantification pro- cess which changes facts destribillg objects into facts describing 1.1 axws. This problcni is also relatively sirriple since the initial facts are used as ternplates to form the new facts. 
AI) iriiportant difFererIc? in ctiaracterixation from other sys- terns is in the quantilied facts which descrilJe the classes. Ex- isting ccJnceptual clustering systems form clusters and come up with One defining characterization for each cluster [ 1,81. In con- trast, there is usually not just or~e fact which defines a class 111 NGI,AIJI~EH (or C~I,AU~C;H). More often, there is a set of facts irrvolving a class which describes its relationships with other classes. The reason this occurs is that classes are formed and descxribed using the relationships between objects. In other sys- terrls, c.lusters are fornled strictly by examining the uttributes of each otlject. ‘I‘tils tyl)e of description requires thfa use of existential quanti- fiers. I lowever, existential quantifiers are desirable because they ilicrrase the power of the description language. Without them, fac.I,s like \l.r t acids Vy i-alkalis Iz t salts : reucts({z, y}, {z}) are not p(Jssi ble. Most existing conceptual clustering systerns w(Juld have trouble generating this type of description. ‘l‘his brings us to the discussiorl of NGLAUBEH’S charucteri- zutiun space. As nlentioned previously, the concept descriptions used by N~:I,ACJHEH are partially ordered with respect to gener- ality. For this reason there is usually more than one applicable characterization for a given set of data. Consider statements which have two quantifiers. We can draw a direct analogy to mathematical logic with predicates of two variables. Fig;ie 1 shows a diagram of the partial ordering involving a predicate t’(x, y) from general to specific, where the truth of more general statements imply the truth of more specific staternents. This same ordering holds on the characterization space of NGLAIIBEK. When more than one characterization applies to a set of data given to NC:I,AIIRER, it will generate every maxi- rnally general quantified description which is true. X NGLAUBER as a discovery system ‘l‘he (; I~AC~LIEH system was designed to model the discovery of qualitative emplrical laws. This is just one irnp(Jrtant aspect of the general field of scientific discovery [5,6]. Since NGLAIJBEH is based on C;I.AIIBEH, it is meant to address and expand on these same issues. NGI,AIIBEH examines a set (Jf scientific data and attempts to characterize the regularities occuring within the data. This is considered to be an important first step in the scientific dis- covery process. One can envision NGLAUBER as part of a larger discovery systerrl. N(:LAIJHEH’S task might be tu srarch for qual- itative regularities and prompt another systern, such as BACON 141, to do a more in-depth quantitative analysis. The nla~n improvement of NGLAUBEK over C;LAIIB~X is its ability to rllakts predictions. When NGLAUBEH makes a predic- tion, it IS effectively proposing an experiment to he carried out arid asking for the results. 13~ proposing experirnerlts, the sys- tem is telling the user what it thinks is interesting and should be looked at Illore closely. It is obviously desirable for a discovery systerrl to guide its own search for regularities. The prediction rIlecharlisIll of NCLAIII~IJX is a step in that direction. Most cur- rent discovery systems (and conceptual clustering systems) are completely passive. They simply characterize data without at- ternptillg tu repcJrt which data would be rncJre helpful to know about. A IllJhtJk! eXcept,ioIl LO this rule iS h?Ilat’S AM 171. AM not only proposes experiments in arithmetic but carries the111 out itself. 
AM also searches for regularities amung data to form spmal classes. In theory, AM could come up with the same classes as N(:I,A~IBEII does but it would complete this task in a very diti’ort~r~t lriarlner. The philosophy irr AM is to explore a corlce!Jt space looking for ‘interesting’ thmgs. Iiowever, unless the irltcbrestlngness functions built in to AM were highly specific, it st’tm~ unlikely that the concepts discovered by NGLAIIHER would be dlscovored by AM in a short amount of tirrle (if ever). ‘I’he main difference between the systems is that NGLAIJBER has a well-defined goal t(J attain. It is attempting to change a set of input facts which describe objects into a set of maximally general quantified facts which describe classes of objects. In contrast, AM has no specific state it is trying to reach. It just perfurlrls a search through the space of possible concepts led by its Interest functions. This works wonderfully in the domain of pure rnathernatics, but does not seern easily transferable to more applied domains. XI Summary We have examined a system called N(:I.AlJHb:H. Althugh N(:I,AIJHKH was originally designed as a scientific discovery sys- tent, it can &o be viewed as a conceptual clustering system. It slrould be clear that at least part of scientific discovery in- volves searching for regularities in data arid creating clusters based upon these regularities. NCLAIJUER’S rnain contributions involve its incremental na- ture. Previous discovery systems need all their data at the outset 516 / SCIENCE Figure 1: Parlial Ordering of Quantified Predicates and perform all-at-once computations. In contrast,, N(;[,AI/BER exartrirtes its data a piece at a tittie, allowing it to be rttore flexible in its c.haracterizalions of the data. Ittcremetttality also allows N~;I,A~~L~EK to interact with the user by trtaking predicliotts or proposing experirrtents about the data it has seen so far. In this way, t,tle system cart guide itself through the data space uttti] the proper characterizations are fourid. 111 the field of cOIlceptual cluSleriIlg, increIIlenta]iLy iS a]SCJ Se]- dut11 used. As stated above, NC;L,AUBE:R’S characterizatiorts are rncjre fiexible to change as more data comes in. This rnay lead to rtr)n-uptitttal classes in some cases but the trade-off is tlte ability LO make predictions about future data. NGI,AUHER also bases its classes (or clusters) on relational information ralher than in- forrrtatiott about t,he attributes of objects. This is something that has not been seen in other conceptual clustering systertls. Finally, NGLAUHEH has a rttore powerful description language LhrCJUgh the use of existential quantifiers. This allows the sys- t,ern t.cJ describe re]at,iotrs between classes rather than just giving defirrittons for each class separately. XII Future work There are many directious in which this work can be extended. N(:LACIBEH is an irrtpcjrlant first step toward discovery systettts which design their own experiments. However, to become re- ally useful it tnust be rttade more sophisticated in sotne areas. One needed improvement is in the heuristic used to forrn classes. ‘l’hts rule is sitnple and cheap since it allows NGLACJBEH Lo cotn- plet,e its task using IJO search (and therefore no backtracking). fiowevc>r, the rule is also rather nai’ve. A more sophisticated ver- sion of NGLAIJHEH might forrn classes frotn facts which diff‘er in rtiure tlian one position. 
In this case, a number of hypoLheses for the “best” classes (according to some evaluation function) would be retnernbered. Unfortunately, this rnethod would also requlrtl search. Viewing NG&AIJBEK'S classes as clusters and the quantified facts as characterizations we can consider NGLAIJBEH to be a conceptual clustering system. Using this knowledge, we should be able to look to the conceptual clustering literature fur possi- ble extensions to N~:LAIJUEK. Another itnportant irnprovetnent would be to incorporate a hierarchical technique or perhaps a clurtlptng technique for clustering rather than the current opti- mization technique. Arranging the classes as a tree would allow ntore Iiexible clusters and characterizattons to be formed. This is sotnt:thing we hope to do in the near future. We envision a version of NGLAUBEH which will be able to construct a periodic table of elements when given sets of reactions sirnllar to those given in our example. To corttplete this task, NGLAUHCR would need LU have a class for each row of the table and a class for each colurr~Il. More research needs to be done in the area of prediction- making. NGLAUBEK'S current tttethod sirrtply uses the goal of changing existentially quantified facts into universally quanti- fithcl Fact,s. Although this rnethod has turned out to be useful, tttore ittlelligettt and complicated predictions could be made by additlg some domain-specific knowledge to the system. Cur- rerlt,ly, NGLAUBEK just looks for obvious regularities in the data and usually generates a large nutttber of predictions. A ltttle inlelligencc about the dorrtaitt being cxatrtined wuulci limit the nuntber of prediclions rttade and allow NGI,AIIL~E~~ to proIJose a few specific experitnents to be performed. Finally, an ideal N(;I,AIJUEH systetn would be able to deal with a certain atnount uf noise. Currently the system detnancls absolute regularrty in the data lo form classes and universally quantified facts. A ttlore flexible system would be able to tttake rules describing how most of the ilettts iu a class behave. This wcdd rcx~lCJVe the assunlptiorr t,hat alf it,ems in a class have everything in c*oltlttton. This prublerrt is closely lied with the probletn of rrtaking more irttelligent predictions. A future ver- sion of NGLAIIUEH nlight carefully select a set of experiments to perform. If ntust of these cxperirnents succeed or fail then NGLAIJt3ER can corr~e up wit,h a statement that is yeneralfy true or false. flowever, if some expcrirttents succeed aud some fail, il wo1~1cl irriply that the systerti has an improper understanding of the true concept. In this cast:, NOLAUUEH would design more specific ex])eritrlettts to c’orlte up with more refitic)d classes and characterizations. References [I] Fisher, 1). A hierarchical Conceptual Clustering Alyotithm, Technical Beport 85 21, Department of Irtforntation and Computer Science, llniversity of California, Irvine, 1984. 12 Fisher, 1). at~ti I,arlgley, Cl. Approaches to Cor~ceytuul Cfus- leriny, I’roceediugs of the Ninth Interuatioual Joiut Conference OLI Artificial Intelligence, 691 697, 1985. I3 Fislter, 1) and I,atlgley, 1’. Methods of Conccytual Cluster- ing and their tlelation to Numerical ‘i’a.c:o~~omy, ‘l’echnical ]ieprt, 85 26, Ih[JartInenL of ]llfOrma~iCJIl arid <hIIlpUter Science, Ilniversily of California, Irvine, 1985. [4] Langley, I’., H radshaw, G. I,. and Sirnon, ti. A. Hediscovrr- ing Chemistry u&h the tlACON System, Machine Learn- ing: An Artificial Intelligence Approach, Michalskl, tl. 
S., (:arbottell, J. G. and Mitchell, ‘I’. hl. (editors), ‘I‘ioga Publishing Co., f’a]CJ Alto, Ca., 307 329, 1983. [ 51 htgley, I’., Zytkow, J. M., Simon, 11. A. and 13radshaw, G. L ‘The Smrch for tleyulurity: Four Aspects u.j Scientific Ilk- cowry, Machine Learning: An Artificial htelligence Approach, Volll~ne 2, Michalski, It. S., CarbonelI, J. (:. and Mitchell, ‘I‘. M. (editors), Morgan Kauftllatt l’ublishers, ,os Altos Ca., 425 469, 19%. [6] ltartgley I’. Zytkow I. M. Sitnon if. A. and f:isher, D. Ii. lliscuu~rin; Qualitrltlve l+~&iricai Laws, ‘I’ecfinical tteport 85 18, t)eparttrtettt ~,f Information arid GJrnpuLer Scie-rice, University of California, Irvine, 1985. l,enat, 1). H. Automated Theory i+brrnation in Mathematics, Proceedings of the Fifth Internatioual Joint Confer- euce ou Artificial Intelligence, 833 X42, 1977. Mtcllalski, It. S. and Stepp, II. E. Leurniny from Obseruation: Conceptual Clustering, Machine Learning: Au Artifi- cial Intelligence Approach, Michalski, K. S., Carbonell, J. G. and Mitchrll, ‘I‘. M. (editors), ‘l’ioga Publtshing Co., l’alo Allo, <:a., 331 368, 19x3. 17 P ” - LEARNING / 5 17
1986
85
533
Beyond incremental processing: Tracking concept drift

Jeffrey C. Schlimmer and Richard H. Granger, Jr.
Department of Information and Computer Science
University of California, Irvine 92717
ArpaNet: Schlimmer@ICS.UCI.EDU, Granger@ICS.UCI.EDU

Abstract

Learning in complex, changing environments requires methods that are able to tolerate noise (less than perfect feedback) and drift (concepts that change over time). These two aspects of complex environments interact with each other: when some particular learned predictor fails to correctly predict the expected outcome (or when the outcome occurs without having been preceded by the learned predictor), a learner must be able to determine whether this situation is an instance of noise or an indication that the concept is beginning to drift. We present a learning method that is able to learn complex Boolean characterizations while tolerating noise and drift. An analysis of the algorithm illustrates why it has these desirable behaviors, and empirical results from an implementation (called STAGGER) are presented to show its ability to track changing concepts over time.

I Introduction

Sometimes a low barometer reading indicates rain coming, and sometimes it doesn't. Furthermore, for months after a volcanic eruption, previously good indicators of rain may become poor predictors, while other (previously poor) indicators may become predictive. Attempting to learn from experience about associations between events like these in the real world is confounded because (a) most associations are not perfectly consistent (hence observed instances of these associations contain 'noise'), and (b) associations change or drift over time. Learning in these environments is complicated by the fact that noise and drift interact: if at some point a particular good indicator fails to predict the intended outcome, is this just a noisy instance, or is it an indication that the concept is beginning to drift?

Nature has solved this problem in humans and animals: rats in classical conditioning experiments are able to tolerate noise and drift, even in extremely complex environments with many competing cues. However, few current machine learning systems are able to tolerate noise and drift, and hence cannot deal with complex reactive environments containing these qualities. We present a learning method that tolerates noise and drift, and we offer an analytical account of why it behaves as well as it does. The method is able to keep track of, and hence distinguish between, different types of noisy instances. Via formulae based on Bayesian statistics, it tolerates systematic noise, but not random noise, distinguishes between noise and drift, and is able to track changing concepts over time. We have implemented this method in a computer program called STAGGER, and have tested it in a variety of environments, ranging from animal learning tasks to blocksworlds to chess endgames. We present some empirical findings reflecting the program's ability to track drifting concepts.
II Related work

Many successful learning systems have failed to deal with the issue of concept drift over time. Quinlan's ID3 (1986) program, for example, constructs a discrimination tree to characterize instances of a concept. This representation allows conjunctive, disjunctive, and negated characterizations. Quinlan has examined the ability of this method to accommodate varying levels of noise, concluding that its performance is close to optimal (Quinlan, 1986). However, the method is nonincremental, for it requires examining (and re-examining) a relatively large number of instances and does not have mechanisms for modifying an existing tree to incorporate new instances. It is unable, therefore, to track changes in concept definitions over time.

The incremental nature of a learning algorithm does not guarantee that it will be able to deal with concept drift over time. Mitchell (1982), for example, reports on the version space learning method, in which an appropriate description of observed instances is formed via a bidirectional search through a space of possibilities. Though relational information is utilized, the version space method assumes the strong bias that a conjunctive characterization can accurately capture the concept to be learned. In later work (Mitchell, Utgoff, & Banerji, 1983), a modification was proposed which would form disjunctive descriptions or tolerate limited noise in instances (but not both, interestingly). Though this method is incremental, learned characterizations may not change and recross the search boundaries previously established in the version space as the definition of a concept drifts over time.

Langley's discrimination learning method (in press) is able to track changes in a concept definition over time. The learned concepts are expressed as a set of production rules, one of which influences expectation at a time. If the applicability conditions for an operator change, presumably recently learned productions would be weakened via strengthening while discrimination would propose new ones. Eventually, the new characterizations would be strengthened and overwhelm any previous learning. Because this method is based on a strengthening evaluation function, however, it does not distinguish between types of noise.
III A new learning method: STAGGER

The heart of STAGGER's learning method is based on a distributed concept representation composed of a set of dually weighted, symbolic characterizations. As each new instance is processed, a cumulative expectation of its identity is formed by using the pair of weights associated with characterizations. Learning occurs at two levels: adjustment of the weights and generation of new Boolean characterizations. This latter process constructs more general, more specific, and inverted versions of existing concept description elements. These new characterizations compete for inclusion in the concept description with the elements that were combined to form them.

Concepts are represented in STAGGER as a set of dually weighted, symbolic characterizations. Each element of the concept description is a Boolean function of attribute-value pairs represented by a disjunct of conjuncts. An example element matching either small blue figures or square ones would be represented as (size small and color blue) or shape square. These characterizations are dually weighted in order to capture positive and negative implication. One weight represents the sufficiency of a characterization for prediction, or (matched => pos), and the other represents its necessity, or (unmatched => neg).

The mathematical measures chosen for the sufficiency and necessity weights are based on psychological learning results. In a classical conditioning experiment, a subject is given repeated presentations of a novel cue (NC) and an unpleasant stimulus (US). After extensive testing, Rescorla (1968) formulated the contingency law, which states that subjects will learn an association between the two events only if the unpleasant stimulus is more likely following the novel cue than without it, or p(US | NC) > p(US | not NC). In behavioral terms, this means that if one or the other stimulus frequently occurs alone, the subject still learns an association between the two cues. However, if each of the stimuli occurs alone even a few times, learning about their association is severely impaired.

Real-world tasks also contain spurious events. For example, the descriptions of instances may be subject to either random or systematic variation. An example of random noise would be a temperature sensor which is accurate to within 10% of its operating range. It may read too high on one occasion and too low on another; the direction of its error is random. Only a few authors have dealt with this possibility (e.g., Quinlan, 1986). However, it may often be the case that errors in description are the result of a systematic variation. For example, a rain gauge may leak and sometimes read lower, but never higher, than it should. The errors of this latter instrument are systematically of one type (only too low), though they may occur with an unpredictable frequency. The contingency law states that learning occurs in systematic cases but is dubious in situations with random variation.

With this in mind, STAGGER uses logical sufficiency (LS), or positive likelihood ratio, as a measure of sufficiency (Duda, Gaschnig, & Hart, 1979). Similarly, logical necessity (LN), or negative likelihood ratio, serves to measure necessity. They are defined as:

    LS = p(matched | pos) / p(matched | neg)
    LN = p(unmatched | pos) / p(unmatched | neg)

LS ranges from zero to positive infinity and is interpreted in terms of odds. (Odds may be easily converted to probability: p = odds/(1 + odds).) An LS value less than unity indicates a negative correlation, unity indicates independence, and a value greater than unity indicates a positive relationship. LN also represents odds and takes on values from zero to positive infinity. However, an LN value near zero indicates a positive correlation, and a value greater than unity indicates negative correlation. For both LS and LN, unity indicates irrelevance. The LS and LN measures adhere to the contingency law, for it can be shown via algebraic manipulations that LS > 1 and LN < 1 if and only if p(US | NC) > p(US | not NC) (Schlimmer, 1986).
Given a list of attribute-value pairs describing an instance, the distributed concept representation as a whole influences expectation of a positive or negative instance. Following the mechanism used by Duda, Gaschnig, and Hart (1979), the dual weights associated with each characterization are used together with estimated prior odds to calculate the odds that a given instance is positive. Expectation is the product of the prior odds of a positive instance, the LS values of all matched characterizations, and the LN values of all unmatched ones:

    odds(pos | instance) = odds(pos) x product of LS over all matched x product of LN over all unmatched

The resulting number represents the odds in favor of a positive instance. This holistic approach differs from most machine learning systems, in which a single characterization completely influences concept prediction.

In addition to representing concepts in a distributed manner and using Bayesian measures to compute a holistic expectation, STAGGER incrementally modifies both the weights associated with individual characterizations and the structure of the characterizations themselves. These two latter abilities allow STAGGER to adapt its concept description to better reflect the concept.

The sufficiency and necessity weights associated with each of the concept description elements may be easily adjusted. Consider the possible situations that may arise when matching a characterization against an instance. Following the terminology used by Bruner, Goodnow, and Austin (1956), a positive instance is positive evidence which may either confirm the predictiveness of a characterization (if it is matched in this instance) or infirm the characterization's predictiveness (if it is unmatched). Similarly, a negative instance is negative evidence which either confirms an unmatched element or infirms a matched one. Table 1 summarizes these possibilities.

Table 1: Possible situations in matching a characterization to an instance.

                         Characterization
    Instance      Matched            Unmatched
    Positive      confirming (C_P)   infirming (I_P)
    Negative      infirming (I_N)    confirming (C_N)

In terms of these matching events, the contingency law implies that learning occurs in cases involving at most one type of infirming evidence. In situations with even small amounts of both positive and negative infirming evidence, subjects fail to learn an association. The corresponding definition of systematic variation is the presence of only one type of infirming evidence; random variation is defined as both types of infirming evidence.

The weighting measures LS and LN may be easily calculated by keeping counts, for each characterization, of the possible situations listed in Table 1:

    LS = C_P(I_N + C_N) / (I_N(C_P + I_P))
    LN = I_P(I_N + C_N) / (C_N(C_P + I_P))

The prior odds for a positive instance are easily estimated as (C_P + I_P)/(I_N + C_N).
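These count-based formulas are straightforward to operationalize. The sketch below is our reconstruction, not STAGGER's code: the class and method names are our own, and the unit initial counts are an assumption made to avoid division by zero.

    # Dual weights for one characterization, from the four Table 1 counts.
    class Characterization:
        def __init__(self, predicate):
            self.matches = predicate
            self.cp = self.ip = self.in_ = self.cn = 1  # C_P, I_P, I_N, C_N

        def update(self, instance, positive):
            matched = self.matches(instance)
            if positive and matched:   self.cp += 1   # confirming positive
            elif positive:             self.ip += 1   # infirming positive
            elif matched:              self.in_ += 1  # infirming negative
            else:                      self.cn += 1   # confirming negative

        def ls(self):   # p(matched | pos) / p(matched | neg)
            return (self.cp * (self.in_ + self.cn)) / (self.in_ * (self.cp + self.ip))

        def ln(self):   # p(unmatched | pos) / p(unmatched | neg)
            return (self.ip * (self.in_ + self.cn)) / (self.cn * (self.cp + self.ip))

    def expectation(characterizations, instance, prior_odds):
        # Holistic expectation: prior odds times LS for matched elements
        # and LN for unmatched elements.
        odds = prior_odds
        for c in characterizations:
            odds *= c.ls() if c.matches(instance) else c.ln()
        return odds   # odds > 1 favors predicting a positive instance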
If STAGGER limited its learning to adjustment of the characterization weights, the distributed concept representation would be sufficient to accurately describe the class of "linearly separable" concepts (Hampson & Kibler, 1983). In this respect STAGGER is similar to connectionist models of learning when those models do not have any "hidden" units. The purpose of the hidden, internal units is to allow the encoding of more complicated concepts. Search processes in STAGGER serve an analogous purpose: individual characterizations are combined into more complex Boolean functions.

STAGGER searches through a space of possible characterizations as it refines its initial distributed representation of the concept into a unified, accurate one. Each possible Boolean characterization of attribute-value pairs may be viewed as a node in the space of all such functions. Figure 1 depicts a small portion of this space over a simple domain (each ellipse represents a Boolean function). Any two of the possible Boolean functions are partially ordered along a dimension of generality (Mitchell, 1982).

Figure 1: Partial characterization search space. (Ellipses represent Boolean functions, ordered from maximally specific to maximally general.)

STAGGER's initial concept description consists of the simple characterizations in the middle of Figure 1, each with initially unbiased weights. Notice that this space is more than twice the size of that typically searched by a conjunction-only method like version spaces (Mitchell, 1982). Another interesting difference is that the version space method searches its space of characterizations from both sides toward the middle; STAGGER beam-searches from the simplest points in the middle outward toward both boundaries.

STAGGER's three search operators correspond to specializing, generalizing, or inverting characterizations. To make a concept description element more specific, search proceeds down a conjunctive path. Conversely, to make a more general element, search proceeds to a new disjunction. Lastly, a poorly scoring characterization may be negated; this does not raise or lower its degree of generality.

The conjunction, disjunction, and negation operators are not applied exhaustively; search is limited by proposing new elements only when STAGGER makes an expectation error. When a negative instance is predicted to be positive (an error of commission), the expectation is too general. Thus search is expanded toward a more specific characterization. On the other hand, a guess that a positive instance is negative (an error of omission) is overly specific; search is expanded to include a more general characterization. Either type of error also causes STAGGER to expand search by proposing the negation of a poor characterization. Table 2 summarizes the operators' preconditions.

Table 2: Search operator preconditions.

STAGGER follows a two-step process of choosing good arguments for the operators; one set of heuristics nominates potential arguments, and a second set elects the most predictive ones for inclusion in new characterizations.

The nomination heuristic specifies alternative groups of characterizations from which to form compounds. After STAGGER has made an error of commission, characterizations matched in this negative instance may be partially necessary, but are clearly not sufficient. Some elements must have suggested (via the matching process) that this instance was likely to be positive but, because this instance was negative, some necessary element was unmatched.
Conjunction combines two necessary elements, so matched characterizations are nominated along with unmatched ones. If a disjunction is formed, elements which are unmatched in this nonexample are nominated, since disjunction combines two sufficient characterizations and no sufficient characterizations were present. Negation is used to invert characterizations which predict nonexamples. Its component is nominated from those characterizations which are matched in this nonexample. Similar heuristics apply for an error of omission. Table 3 summarizes STAGGER's nomination heuristics.

Table 3: Nomination heuristics.

    Error         Function      Components
    Commission    AND[c1,c2]    Matched, Unmatched
                  OR[c1,c2]     Unmatched, Unmatched
                  NOT[c]        Matched
    Omission      OR[c1,c2]     Matched, Unmatched
                  NOT[c]        Unmatched

Consider, for example, learning the concept father, defined as a parent and a male. The two characterizations (parent and male) are always matched in a positive instance (a father), though they sometimes occur alone (a brother is male). This is negative infirming evidence (refer to Table 1). LN tolerates negative infirming evidence, and therefore elects criterial elements for conjunctions. By similar reasoning, the converse weighting measure, LS, elects high scoring characterizations to be used in forming new, disjunctive characterizations. New negated characterizations are elected equally by both measures. Table 4 summarizes these second-step candidate election heuristics.

Table 4: Election heuristics.

    Function      Election measure
    AND[c1,c2]    LN(ci) <= 1
    OR[c1,c2]     LS(ci) >= 1
    NOT[c]       LN(c) > 1 or LS(c) < 1

New characterizations are introduced into the search frontier in a generate-and-test manner. The search operators generate new characterizations which are then either pruned from the frontier or established as part of it. To avoid being pruned, a new characterization must be more effective than its sponsoring components. If the new element surpasses a weight threshold, it is established and its components are pruned. Interim performance is assessed by examining recent changes in its weights. These changes are averaged, and if this average is very small, the element appears to be reaching an asymptote. If it is still below threshold, the characterization is pruned.

STAGGER will trigger backtracking when the weighting measures indicate that the new characterization is performing worse than when it was established. Its pruned components are reactivated and compete as the failing element did before. This amounts to chronological backtracking, because moves through the search space are retracted in the opposite order from which they were proposed.
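Read together, Tables 3 and 4 specify a proposal step that can be sketched as follows. This is our rendering, reusing the Characterization sketch above; STAGGER's actual frontier management, weight thresholds, and pruning are omitted.

    # Nominate (Table 3) and elect (Table 4) new compound characterizations
    # after an expectation error.
    from itertools import combinations

    def propose(elements, instance, error):
        matched   = [c for c in elements if c.matches(instance)]
        unmatched = [c for c in elements if not c.matches(instance)]
        proposals = []
        if error == "commission":     # negative instance predicted positive
            proposals += [("AND", a, b) for a in matched for b in unmatched
                          if a.ln() <= 1 and b.ln() <= 1]
            proposals += [("OR", a, b) for a, b in combinations(unmatched, 2)
                          if a.ls() >= 1 and b.ls() >= 1]
            proposals += [("NOT", c) for c in matched
                          if c.ln() > 1 or c.ls() < 1]
        else:                         # omission: positive predicted negative
            proposals += [("OR", a, b) for a in matched for b in unmatched
                          if a.ls() >= 1 and b.ls() >= 1]
            proposals += [("NOT", c) for c in unmatched
                          if c.ln() > 1 or c.ls() < 1]
        return proposals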
IV Tracking concept drift

An important feature of a learning mechanism is its responsiveness to changes in the environment. For instance, a fox learns to look for a changed coat color in his prey as the seasons change. First, the learner must distinguish between randomness and genuine change. For a failed expectation, the question arises as to whether it was simply a noisy instance, and should be tolerated, or whether it indicates that the learned concept has drifted. STAGGER uses the Bayesian weighting measures to distinguish between events that indicate a change in the definition of a concept and those which are probably the result of noise.

Secondly, does the amount of previous learning about a given concept definition affect subsequent relearning of a new definition? In humans and animals it does. The adage "It's hard to teach an old dog new tricks" roughly captures a main finding in learning (e.g., Siegel & Domjan, 1971). These studies indicate that the resiliency of learned concept definitions is inversely proportional to the amount of training; briefly trained concepts are more readily abandoned in the face of change than extensively trained ones. Keeping counts of the evidence types in Table 1 amounts to retaining a history of association, allowing STAGGER to model resiliency appropriately.

Figure 2 depicts the performance of STAGGER on three successive definitions for the same concept: (1) color = red and shape = squarish, (2) size = small or shape = circular, (3) color = (blue or green). The dashed vertical lines indicate when the definition of the concept was changed. Notice how performance falls immediately following the change, because the previously acquired definition was not sufficient to characterize new, changed instances. In each of the three cases STAGGER formed the explicit, symbolic representation of the concept's definition and evaluated it as the best among those on the search frontier.

Figure 2: Tracking concept drift. (Axes: percent correctly classified vs. instances processed.)

STAGGER addresses the noise versus change issue through the use of its weighting measures. When LS and LN indicate a change in the type of noise present, they trigger backtracking as explained above. On the other hand, more of the same type of noise does not lead to the modification of characterizations. Figure 3 depicts STAGGER's acquisition of the color = red and shape = squarish characterization as in figure 2. After the dashed vertical line, positive instances were subjected to 25% negative infirming, systematic noise. That is, 25% of the positive instances were randomly assigned to either the positive or negative class; a situation similar to the leaky rain gauge. Notice that, unlike figure 2, performance is not adversely affected, indicating that STAGGER is correctly distinguishing between noise and concept change.

Figure 3: 25% systematic noise. (Axes: percent correctly classified vs. instances processed.)

Because STAGGER retains counts of situation types, it is in effect keeping an abbreviated history of the correlation between a characterization and a concept definition. This allows the program to model the effects of varying amounts of previous learning on relearning resiliency at a gross level. Contrast figure 4, in which the program was given more than four times the amount of training for each concept before each change than in figure 2.
Notice that the recovery learning is considerably faster (higher resiliency) in the minimal training case (figure 2). In short, the heuristic demonstrated here is that briefly trained concepts are less likely to be stable and should therefore be abandoned more quickly in the face of change. On the other hand, extensively trained concepts are more stable and have a longer history of past success; they should be less resilient in the face of new evidence. Psychological studies indicate that natural learning mechanisms behave in this manner (Siegel & Domjan, 1971).

Figure 4: Tracking concept drift given overtraining. (Axes: percent correctly classified vs. instances processed.)

V Conclusions

STAGGER is an incremental learning method which tolerates systematic noise and concept drift. It begins with simple characterizations and learns complex characterizations by conducting a middle-out beam search through the space of possible conjunctive, disjunctive, and negated characterizations. Backtracking allows tracking changes in concept definitions over time. Furthermore, the use of the Bayesian weighting measures affords the proper distinction between noise and genuine concept drift. By retaining numerical histories of events, STAGGER models the effects of overtraining seen in psychological experiments. The learning methods employed in STAGGER are far from a complete solution to the problems of learning in complex, reactive environments. So far, it is limited to learning Boolean combinations of attribute values and cannot acquire relational descriptions of structured objects. STAGGER also requires feedback, as all concept attainment systems do, and is therefore unable to conceptually cluster its inputs.

Acknowledgements

This research was supported in part by the Office of Naval Research under grants N00014-84-K-0391 and N00014-85-K-0854, the National Science Foundation under grants IST-81-20685 and IST-85-12419, the Army Research Institute under grant MDA903-85-C-0324, and by the Naval Ocean Systems Center under contract N66001-83-C-0255. We would like to thank Michal Young, who was involved in the early formulation of these ideas, Ross Quinlan for suggesting a natural extension to the matching process, and the entire machine learning group at Irvine for their vigorous discussions and consistent encouragement.

References

Duda, R., Gaschnig, J., & Hart, P. (1979). Model design in the Prospector consultant system for mineral exploration. In D. Michie (Ed.), Expert systems in the micro electronic age. Edinburgh: Edinburgh University Press.

Langley, P. (in press). A general theory of discrimination learning. In D. Klahr, P. Langley, & R. Neches (Eds.), Production system models of learning and development.

Mitchell, T. M. (1982). Generalization as search. Artificial Intelligence, 18, 203-226.

Quinlan, J. R. (1986). The effect of noise on concept learning. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach, volume II. Los Altos, California: Morgan Kaufmann Publishers, Inc.
Rescorla, R. A. (1968). Probability of shock in the presence and absence of CS in fear conditioning. Journal of Comparative and Physiological Psychology, 66, 1-5.

Schlimmer, J. C. (1986). A note on correlational measures (Technical Report 86-13). Irvine, California: University of California, Department of Information and Computer Science.

Siegel, S., & Domjan, M. (1971). Backward conditioning as an inhibitory procedure. Learning and Motivation, 2, 1-11.
A Case Study of Incremental Concept Induction

Abstract

Application of machine induction techniques in complex domains promises to push the computational limits of nonincremental, search intensive induction methods. Learning effectiveness in complex domains requires the development of incremental, cost effective methods. However, discussion of dimensions for comparing the utility of differing incremental methods has been lacking. In this paper we introduce 3 dimensions for characterizing incremental concept induction systems which relate to the cost and quality of learning. The dimensions are used to compare the respective merits of 4 incremental variants of Quinlan's learning from examples program, ID3. This comparison indicates that cost effective induction can be obtained without significantly detracting from the quality of induced knowledge.

I Introduction

Work in machine learning has concentrated significantly on the problem of concept induction (e.g., learning from examples, conceptual clustering). Thus far, the majority of concept induction systems are nonincremental, in that they require all objects over which induction is to occur to be present from the outset of system execution, while incremental systems accept objects over a span of time. Motivations for incremental systems hinge primarily on the realization that as learning systems are required to deal with a greater number and diversity of observations, nonincremental systems may not be computationally usable (Michalski, 1985). Specifically, the primary motivation for incremental induction is that a knowledge store may be rapidly updated as each new instance is encountered, thus ensuring a continual basis for reacting to new stimuli; this property is becoming paramount with the development of simulated world environments (Carbonell & Hood, 1985; Sammut & Hume, 1985) which promise to push the limits of current learning systems.

Along with the cost advantages of incremental systems come disadvantages which emerge as a result of the constraints mandated by performing rapid update. Nonincremental concept induction systems tend to be search intensive (depth-first with backtracking, breadth-first, or version-space), which requires maintaining a frontier of hypotheses and/or a list of previously observed instances (Mitchell, 1982) until some stable hypothesis is converged on. New objects serve to expand the frontier of hypotheses, which makes incorporation costly. However, exhaustive techniques generally guarantee that a correct or optimal hypothesis is obtained. Incremental systems seek to reduce the cost of update, thus precluding the luxury of keeping past instances or equivalent information (i.e., a frontier of hypotheses). Incremental systems will generally require more objects to converge on a stable hypothesis, and may sacrifice the guarantee of the correctness or optimality of a final hypothesis. A reduction in search control yields a search process which Simon (1969) has termed satisficing.
An environment which places constraints on response time precludes searching for optimal solutions, and necessitates a search for satisfactory hypotheses. This does not preclude the possibility of obtaining optimal solutions, but only the explicit search for such solutions. In fact, a satisficing strategy can allow rapid memory update and hypotheses which are of high quality.

As interest in incremental learning mounts, it becomes increasingly important to make explicit the computational properties (e.g., cost, concept quality) of incremental induction techniques, and not limit their characterization to the behavioral property that objects are accepted one at a time. For instance, any exhaustive search technique (e.g., version space) can be implemented so as to accept instances incrementally, but such an implementation may have limited utility in an environment demanding incremental computation. In this paper we discuss several dimensions which differentiate incremental and nonincremental learners, as well as serving as a basis for evaluating competing incremental systems. These dimensions are:

• The number of observations required by a learning system to obtain a 'stable' set of concept descriptions.
• The cost of updating memory to accommodate an observed object.

These two factors can be combined into a single measure of cumulative cost, which reflects the amount of resources expended during learning. A last dimension for characterizing incremental induction systems is:

• The quality of concept descriptions derived by a concept induction system.

The remainder of the paper focuses on a case study for discussing these dimensions. Specifically, the dimensions are used to compare the behavior of several incremental variants of Quinlan's (1983, 1985) ID3 program. Each system is of the learning from examples variety, and each builds decision trees over observed objects which distinguish positive from negative instances. A formal analysis of these systems is bolstered by an empirical analysis which indicates that object incorporation can be considerably reduced without a significant decrease in the quality of derived decision trees. In general, however, reducing update cost implies an increase in the number of observed objects required to find an 'optimal' tree.

II A case study: ID3

Quinlan's (1983, 1985) ID3 constructs a discrimination tree that distinguishes between examples and nonexamples of a particular concept. The algorithm starts out with an empty decision tree and a collection of examples and nonexamples of a concept. Each object is described by a number of attributes (e.g., size, shape, color) and a value for each of those attributes (e.g., color is red). A measure is applied to each of the attributes to determine how well they discriminate between positive and negative examples. The most informative attribute is used to form the root of the decision tree, with a branch for each of its values. The instances are then divided into groups according to their value for this attribute, and the process is recursively applied for each group, thus building subtrees. The process continues until all of the examples in a subtree are either positive or negative. At this point, the decision tree completely discriminates between examples and nonexamples of the concept to be acquired.
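The recursive construction just described can be rendered compactly as follows. This is a minimal sketch in Python under our own naming conventions, not Quinlan's implementation; it omits the chi-squared cutoff discussed below and simply partitions until subtrees are pure or attributes are exhausted.

    # Minimal ID3 sketch. An instance is (attribute->value dict, label);
    # a tree is either a label or (attribute, {value: subtree}).
    import math
    from collections import Counter, defaultdict

    def entropy(labels):
        total = len(labels)
        return -sum((c / total) * math.log2(c / total)
                    for c in Counter(labels).values())

    def id3(instances, attributes):
        labels = [label for _, label in instances]
        if len(set(labels)) == 1 or not attributes:
            return Counter(labels).most_common(1)[0][0]
        # The most informative attribute minimizes the expected entropy
        # of the partition it induces (maximizes information gain).
        def expected_entropy(a):
            groups = defaultdict(list)
            for obj, label in instances:
                groups[obj[a]].append(label)
            return sum(len(g) / len(labels) * entropy(g)
                       for g in groups.values())
        best = min(attributes, key=expected_entropy)
        partition = defaultdict(list)
        for obj, label in instances:
            partition[obj[best]].append((obj, label))
        rest = [a for a in attributes if a != best]
        return (best, {v: id3(sub, rest) for v, sub in partition.items()})

A call such as id3(instances, ["size", "shape", "color"]) returns the finished tree in one pass over the stored instances.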
The choice of a measure for selecting discrimination tree roots is critical if good decision trees are to be obtained. ID3 uses an information theoretic measure to determine which attribute values best divide an object (sub)set at each subtree. ID3 has also been designed to accommodate occasional errors (or noise) in the concept instances. In a noisy situation, ID3 will attempt to discriminate between every positive and negative instance, resulting in a decision tree which may be unnecessarily large. To limit this growth, a chi-squared statistical test is used to insure (with a high degree of confidence) that the sampling of instances is not due to chance. A more detailed account of ID3 and its measures can be found in Quinlan (1985).

ID3 is a nonincremental algorithm, where a large number of instances are available for processing at one time. One 'brute force' method of applying ID3 in an incremental manner would be to allow objects to be presented one at a time, and simply rerun ID3 as each object is observed. However, the ID3 framework is amenable to modification so that instances may be processed one at a time in an efficient, computationally incremental manner. The heart of these modifications lies in a series of tables located in each potential decision tree root. Each table consists of entries for the values of all untested attributes and summarizes the number of positive and negative instances with each value. As a new instance is processed, the positive or negative count for each of its attribute-values is incremented according to whether this instance was an example or nonexample. Classification then proceeds down the appropriate subtree. If a subtree is encountered which does not yet have a root test attribute, and there are both positive and negative counts for classified instances, then the information measure is used to compute the most informative of the previously unused attributes. This attribute is then evaluated using the chi-squared test; if it is unlikely to have arisen by chance, then it is chosen as the root attribute. Otherwise, this root is left empty. The process continues until either all instances at a subtree are of one type (positive or negative) or a new root cannot be reliably chosen. Table 1 depicts the steps followed in the modified version of ID3. This modification is termed ID4.

ID4 also allows changing a poorly chosen root (step 3d). Changing the root of a subtree discards information gleaned over previous instances and requires examining a number of subsequent instances before deeper subtree roots can be rechosen. This process occurs infrequently, to the credit of the information measure heuristic.
As learning progresses, the important roots (higher in the tree) become more stable, and changes in subtree root choices have less of an effect. In the following analyses, ID4 was able to discard previous subtree attribute choices, and converge on the same decision tree as ID3.

Table 1: Pseudo code for incremental ID4.

Inputs: A decision tree, one instance.
Output: A decision tree.

1. If this instance is positive, increment the total number of positive instances; otherwise increment the number of negative instances.
2. If all of the instances are positive or negative, then return the decision tree.
3. Otherwise:
   a. Compute the expected information score.
   b. For each attribute, for each value present in the instance, increment either the number of positive or negative instances.
   c. Compute the information scores for all attributes.
   d. If there is no root, or the maximal attribute is not the root, then build a new tree:
      i. If the maximal attribute is chi-squared dependent, then make it the root of this tree.
      ii. Make a test link from the root for every value of the root attribute.
   e. Go to step 1 with the subtree found by following the link for the root attribute's value in this instance.
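The steps of Table 1 translate into code along the following lines. This is our sketch, not the authors' program: the information measure reuses the entropy idea from the ID3 sketch above, and the chi-squared reliability test is replaced by a crude stand-in threshold.

    # ID4 sketch: each node keeps positive/negative counts per
    # attribute-value, so an instance is incorporated without
    # re-examining any earlier instance.
    import math
    from collections import defaultdict

    class Node:
        def __init__(self, attributes):
            self.attributes = list(attributes)
            self.pos = self.neg = 0
            self.counts = {a: defaultdict(lambda: [0, 0])
                           for a in attributes}
            self.test = None       # chosen root attribute, if any
            self.children = {}

    def expected_entropy(node, a):
        total = node.pos + node.neg
        h = 0.0
        for p, n in node.counts[a].values():
            s = p + n
            for c in (p, n):
                if 0 < c < s:
                    h -= (s / total) * (c / s) * math.log2(c / s)
        return h

    def reliable(node):
        # Stand-in for the chi-squared test of step 3(d)i.
        return node.pos + node.neg >= 10

    def update(node, obj, positive):
        if positive: node.pos += 1                     # step 1
        else:        node.neg += 1
        if node.pos == 0 or node.neg == 0 or not node.attributes:
            return                                     # step 2
        for a in node.attributes:                      # step 3b
            node.counts[a][obj[a]][0 if positive else 1] += 1
        best = min(node.attributes,                    # step 3c
                   key=lambda a: expected_entropy(node, a))
        if node.test != best and reliable(node):
            node.test, node.children = best, {}        # step 3d: rebuild
        if node.test is not None:                      # step 3e: recurse
            rest = [a for a in node.attributes if a != node.test]
            child = node.children.setdefault(obj[node.test], Node(rest))
            update(child, obj, positive)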
A formal analysis is augmented with an empirical analysis, during which two additional incremental variants are introduced (ÎD3 and ÎD4). The introduction of these latter methods serves to refine the space of incremental techniques, and the empirical analysis indicates that the cost of incorporating an instance can be significantly reduced without significantly affecting the quality of learning.

An important computational measure is the number of instances required to construct an optimal tree. ID3 chooses an attribute to form the test for the root based on the information that attribute contains over the observed instances. A sample of objects (from the environment) of sufficient size, n_0, must be seen for ID3 to choose the root attribute whose values best discriminate objects of the environment as a whole. This is true in the creation of all subtree roots as well, where the required number of objects is n_ij for the subtree rooted at node j of level i. For an entire level, i, of the decision tree, n_i represents the number of objects required for all nodes at that level to attain a stable discriminating attribute. A level cannot stabilize until previous levels have achieved stability, and thus n_i >= n_{i-1}. Since ID3 retains all instances seen, the number of objects to construct a decision tree of depth d is n_{d-1}.

Again, assuming a representative object sample, ID4 must examine the same n_0 instances in order to choose the root attribute in an optimal manner. However, because ID4 does not store all instances encountered, at the next level it must examine another n_1 instances, because the first n_0 instances are not available for inspection. Consequently, the number of instances required to construct the tree is the sum of all of the root choice points, or the sum of n_i for i = 0, ..., d-1.

B. Cost of updating memory

In ID3, to update a tree is to build a new tree from scratch. Constructing each node of the tree requires that instances be examined to determine their values for previously unused attributes. The cost of constructing an entire tree is |I| x |A| x d, where |I| is the number of instances, |A| is the number of attributes, and d is the depth of the tree (which cannot exceed |A|). If a tree is built after every instance, then the above expense is incurred over a single object, over two objects, ..., for a total on the order of |I|^2 x |A| x d. Asymptotically, the most important term is |I|^2, since the number of instances is presumably much greater than the number of attributes. In ID4, building a decision tree is proportional only to the number of objects times the square of the number of attributes, or |I| x |A|^2.

The number of objects to an efficient characterization is n_{d-1}. When this is substituted into the cost equation for ID3, we have a cumulative cost on the order of n_{d-1}^2 x |A| x d. For ID4, the number of instances to an optimal decision tree is larger: the sum of n_i for i = 0, ..., d-1. Substituting this into the expression for object incorporation yields a cumulative cost on the order of (sum of n_i) x |A|^2.

Comparing total expense hinges on the number of instances required to select each subtree root attribute. If n_{d-1}^2 >= sum of n_i for i = 0, ..., d-1, then ID3 is more expensive than ID4. This is very likely the case, since the number of instances required to construct the tree is probably greater than the depth of the decision tree, or n_{d-1} > d.

Our analysis has assumed a 'representative' sample of objects. Lacking a rigorous discussion of regularity and distribution of objects in an environment, we now perform an empirical analysis.

IV Empirical performance

Consider the task of classifying chess endgames. Given a board position, a classifier attempts to identify the situation as a win or loss. Following Quinlan (1979), we define a concept attainment task as determining whether a black king and knight versus a white king and rook results in the safety or loss of the black knight or king in two moves with black to move. Figure 1 depicts a sample board configuration.

Figure 1: Example of a safe, pinned black knight.

Boards were randomly generated and described in terms of the distance (in squares) between each pair of pieces (6 attributes of this type), the board relationship between every pair of pieces (i.e., whether they lie on the same rank or file, on a diagonal, or otherwise) (6 attributes), and the square type where each piece resides (i.e., corner, edge, or otherwise) (4 attributes). There are a total of sixteen attributes, each with three values. Although there are 2,985,984 objects possible, an exhaustive enumeration of the actual 95,480 distinct knight pins indicates that there are only 3,250 actual objects in terms of these attributes.
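For concreteness, the sixteen attributes can be derived from piece coordinates roughly as follows. This is our reconstruction; the piece names, the use of Chebyshev (king-move) distance, and the value codings are assumptions rather than details given in the paper.

    # Sketch of the board description. Pieces are (file, rank) pairs, 1-8.
    from itertools import combinations

    def square_type(p):
        on_edge = p[0] in (1, 8) or p[1] in (1, 8)
        in_corner = p[0] in (1, 8) and p[1] in (1, 8)
        return "corner" if in_corner else ("edge" if on_edge else "other")

    def relationship(p, q):
        if p[0] == q[0] or p[1] == q[1]:
            return "rank-or-file"
        if abs(p[0] - q[0]) == abs(p[1] - q[1]):
            return "diagonal"
        return "other"

    def describe(pieces):   # pieces: name -> (file, rank)
        attrs = {}
        for (a, p), (b, q) in combinations(sorted(pieces.items()), 2):
            attrs["dist-%s-%s" % (a, b)] = max(abs(p[0] - q[0]),
                                               abs(p[1] - q[1]))
            attrs["rel-%s-%s" % (a, b)] = relationship(p, q)
        for name, p in pieces.items():
            attrs["square-%s" % name] = square_type(p)
        return attrs        # 6 distances, 6 relationships, 4 square types

    board = {"bk": (5, 8), "bn": (4, 6), "wk": (5, 1), "wr": (4, 1)}
    print(describe(board))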
Four behaviorally incremental variants of the ID3 algorithm are tested. The first is a brute force version of ID3 which constructs a new decision tree from scratch after each new instance is received. A smarter version, ÎD3, only reconstructs the decision tree when an instance has been misclassified. The third variant is ID4; the counts of positive and negative instances are updated for each new instance. Finally, the fourth variant, ÎD4, only updates attribute counts when an error in classification is made (similar to ÎD3). The same randomly generated boards (approximately 69% of the instances were positive) were presented to these four variants. The decision tree formed by all of the variations tested is depicted in figure 2.

Figure 2: Decision tree for a safe knight pin. (The tree tests dist-bk-knight at the root, with dist-wk-knight and the diagonal/other board relationship below it.)

For the three more efficient algorithms, the number of observations required to form this decision tree ranges from the least for ÎD3 to the greatest for ÎD4. Figure 3 depicts the average depth of the decision tree in figure 2 (averaged over 50 executions) built by each variant as a function of the number of instances. Depth gives a rough and simple picture of learning speed, but should not be equated with correctness. ID3 rapidly builds a complete tree, while ID4 requires substantially more instances. ÎD4 requires the largest number, converging consistently on the complete tree after approximately 20,000 instances.

Though there is a substantial range in the time each variant takes to construct its decision tree, each of the variants quickly forms an effective classification for the instances. Classification performance of the three more efficient variants (averaged over 50 executions) was measured over 1000 instances.
ID4 performs O(jAl’) work for each iristarlc32 processe<i, and this is clearly refiertecI iI1 its nearly linear curve. The least expensive curve results from Ifii which updates attribute counts orily if the test classific~idion is incorrect. V Conclusion As rriachine learning rrlethods are applied in more corrlpticated domains, the deficiencies of rlorlirlcrerrlerltal, sc1arc.h intoIlsive methods have become evidcrlt. This leas increased interest in incremental concept induction rneth- ads which process observatiom as they are observed. An important point thougli, is that my noriiIlc~rerrierlta1 al- gorithm can be rriatle to behave in an iIicrcrrienta1 fasti- ion (i.e., process observations one at a time). In general, however, incremerital behavior tloes riot insure the cornpu- 500 i SCIENCE COST PER INSTANCE 75 , CUMULATIVE COST PER INSTANCE ID3 100K 75K 50K 25K ID4 ID4 I /’ 1’. _/’ 1” I-’ 1 1 I I 250 iNA _/ IE _ ..- - _ -_ __ --.~--.- ----__ __ _ r , 200 400 600 800 1,000 INSTANCES PROCESSED ” Figure 6: Cumulative cost, per instance. INSTANCES PROCESSED Figure 5: ID4 and 64 cost per instance. References tatiollal efficacy of such an algorithm. Three dirnensions for clkal uating inc’rernc~rital concept induction methods have t)t:crl outlined. These dimensions are: CarbonelI, J. & flood, C;. (1985). The World Modelers f’roject: Objectives and Simulator Architecture. Proceedings of the Third lt~ter71Uti07~ul Muchine LeartLing Workshop (pp. 14 It;). Kutgers Ilrliversrty. of updating memory to accommodate a new Michalski, II. (1985). K nowledge ftepair Mechanlsr~ls: ll:volu- t i0n Versus Revolution. Proceedings of the Third lntt r71u- tio72 ul Machine Leurning Workshop (pp. 1 IG 119). fiutgers 0 ‘l’tr(~ riurriber of objects necessary to obtain a stable corlcept description. IJniversity. l ‘[‘he quality of derived concept descriptions. Mitchull, ‘I‘. (19832). < :erreralixation as Searcll. ,4 rtific.iul lttfrl- liyence, 18. 203 226. ‘J’trr>se dimensions have been used to cornpare the t)ehCiv- ior of 1 increrriental variants of Quinlarr’s II)3 program. A cascx sludy in the domain of chess endgames has served as a prorrlising indication that irrcrerrrental induction meth- od:, cari meet the computational constraints of complex c:rlviroIIrrrerrts, while meeting high standards of quality and (‘OI‘I‘t’CtIleSS. I)iumverirlg Rules by lrltlllctloIl Exarlrptes. In 11. Michitb (Ed.), systerrls ifb Universit,y the micro l’rcss. . . (Lge. fhiirlburgh. khliliburgh Quinlarl, .I. It. (19X3) I ,earrllrlg ttfficirml classific.at.iorr prom- icatiori ~tmrIt!lI tu ct1ess , . & ‘I‘ pvl chine leurn~r~g: . Altu, C:alllorrila: A 71 ‘l’ic urtijiclui irilrlligerice uyp )g;i I’ut)lishing Co~ripaiiy. Acknowledgements IJisc~ussious with Ikrlrlis KiLlcr initially raised a number of’ itltlas cbxpressed irk this paper, specifically the criteria re- tat irlg to ttie cost arid quality of learning. ‘I‘his resclarcti was supported in part by the Office of Naval Research urltlthr grants NO0014-84-K-0391, NOO014-84-K-0345, and N0001.f-85-K-01154, tfle National Srience Voundation un- tichr grants [ST-81-20685 arid lS’I‘-85- I24 19, the ArItly Re- smrctl trlstitute under- grmt MI)h903-X5-~X)3~4, and by the Naval Ocean Systems Center under contract N66001- t-u-( :-o2s5. SiirIlrrlut, c h I1ur11e, I>. (1985) 1 xarriirlg ~hr~c:~:pts In a Corrl- plex Kot)o1 World. Procredings of the ‘I’hlrd ltltcrnutmnul Muchine Lcurniny Workshop (pp. 173 176). ftutgcrs Ilni- vcrslty. ~illlOIl, t1. (11)6’3) ‘h! s cirrices of the .4rtificd. 
(Tarrlbritfgr, Mass.: ‘l’l~e M.l.‘l’. l’ress LEARNING / 501
Quantifying the inductive bias in concept learning (extended abstract)

David Haussler
Department of Mathematics and Computer Science, University of Denver, Denver, Colorado 80208.

Abstract

We show that the notion of bias in inductive concept learning can be quantified in a way that directly relates to learning performance, and that this quantitative theory of bias can provide guidance in the design of effective learning algorithms. We apply this idea by measuring some common language biases, including restriction to conjunctive concepts and conjunctive concepts with internal disjunction, and, guided by these measurements, develop learning algorithms for these classes of concepts that have provably good convergence properties.

Introduction

The theme of this paper is that the notion of bias in inductive concept learning [U86], [R86] can be quantified in a way that enables us to prove meaningful convergence properties for learning algorithms. We measure bias with a combinatorial parameter defined on classes of concepts known as the Vapnik-Chervonenkis dimension (or simply dimension) [VC71], [P78], [BEHW86]. The lower the dimension of the class of concepts considered by the learning algorithm, the stronger the bias. In [BEHW86], this parameter has been shown to be strongly correlated with learning performance, as defined in the learning performance model introduced by Valiant [V84], [V85]. This model can be outlined as follows.

A concept is defined by its set of instances in some instance space. A sample of a concept is a sequence of observations, each of which is an instance of the concept (a positive observation) or a non-instance of the concept (a negative observation). Samples are assumed to be created from independent, random observations, chosen according to some fixed probability distribution on the instance space. Given a sample of a target concept to be learned, a learning algorithm forms a hypothesis, which is itself a concept. The algorithm is consistent if its hypothesis is always consistent with the given sample, i.e. includes all observed positive instances and no observed negative instances. A consistent hypothesis may still disagree with the target concept by failing to include unobserved instances of the target concept or including unobserved non-instances of the target concept. The error of a hypothesis is the combined probability of such instances, i.e. the probability that the hypothesis will disagree with a random observation of the target concept, selected from the instance space according to the fixed probability distribution.

Two performance measures are applied to learning algorithms in this setting.

1. The convergence rate of the learning algorithm is measured in terms of the sample size that is required for the algorithm to produce, with high probability, a hypothesis that has a small error. The qualification "with high probability" is required because the creation of the sample is a probabilistic event. Even the best learning algorithm cannot succeed in the unlikely event that the sample is not indicative of typical observations. However, while the model is probabilistic, no specific assumptions are made about the probability distribution that governs the observations. This distinguishes this approach from usual statistical methods employed in pattern recognition, where the object of learning is usually reduced to the estimation of certain parameters of a classical distribution.
The distribution-free formulation of convergence rate is obtained by upper bounding the worst case convergence rate of the learning algorithm over all probability distributions on the instance space. This provides an extremely robust performance guarantee.

2. The computational efficiency of the learning algorithm is measured in terms of the (worst case) computation time required to pass from a sample of a given size to a hypothesis. Our results for conjunctive concepts indicate the possibility of a trade-off between convergence rate and computational efficiency, in which the fastest converging learning methods require significantly more computation time than their slower converging counterparts. In order to optimize this trade-off, applying the general method developed in [BEHW86], we employ heuristic techniques based on the greedy method for finding a small set cover [N69], [J74] that trade off a small decrease in the convergence rate for a very large increase in computational efficiency. This general idea forms a secondary theme of the paper.

1. Quantifying inductive bias

In the simplest type of inductive concept learning, each instance of a concept is defined by the values of a fixed set of attributes, not all of which are necessarily relevant. For example, an instance of the concept "red triangle" might be characterized by the fact that its color is red, its shape is triangular and its size is 5. Following [MC83], we consider three types of attributes. A nominal attribute is one that takes on a finite, unordered set of mutually exclusive values, e.g. the attribute color, restricted to the six primary and secondary colors. A linear attribute is one with a linearly ordered set of mutually exclusive values, e.g. a real-valued or integer-valued attribute. A tree-structured attribute is one with a finite set of hierarchically ordered values, e.g. the attribute shape with values triangle, square, hexagon, circle, polygon and any-shape, arranged in the usual "is-a" hierarchy. Only the leaf values triangle, square, hexagon and circle are directly observed. Since a nominal attribute can be converted to a tree-structured attribute by addition of the special value any-value, we will restrict our discussion to tree-structured and linear attributes.

Equations relating attributes to values will be called terms, which are either elementary or compound. The possible forms of elementary terms are as follows. For tree-structured attributes: attribute = value, e.g. color = red, shape = polygon. For linear attributes: value1 <= attribute <= value2, e.g. 5 <= size <= 12. Strict inequalities are also permitted, as well as intervals open on one side. Terms such as 5 <= size <= 5 are abbreviated as size = 5. Compound terms [MC83] can take the following forms. For tree-structured attributes: attribute = value1 or value2 or ... or valuer, e.g. shape = square or circle, and for linear attributes: any disjunction of intervals, e.g. 0 <= age <= 21 or age >= 65. Disjunctive operators within compound terms are called internal disjunctions.

We consider the following types of concepts:

1. pure conjunctive: term1 and term2 and ... and termk, where each termi is an elementary term, e.g. color = red and 5 <= size <= 12,
2. pure disjunctive: same as pure conjunctive, but terms are connected by "or",
3. internal disjunctive: same as pure conjunctive, but allowing compound terms, e.g. (color = red or blue or yellow) and (5 <= size <= 12).
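These term and concept forms admit a direct encoding. The following sketch is ours (the paper gives no implementation): a concept maps each constrained attribute either to a set of admissible values or to a list of numeric intervals, and tree-structured value hierarchies are left out for brevity.

    # Evaluating elementary and compound terms against an instance.
    def satisfies(instance, concept):
        for attribute, admissible in concept.items():
            value = instance[attribute]
            if isinstance(admissible, set):
                # internal disjunction over values, e.g. red or blue or yellow
                if value not in admissible:
                    return False
            else:
                # disjunction of intervals, e.g. 0 <= age <= 21 or 65 <= age
                if not any(lo <= value <= hi for lo, hi in admissible):
                    return False
        return True

    # (color = red or blue or yellow) and (5 <= size <= 12)
    concept = {"color": {"red", "blue", "yellow"}, "size": [(5, 12)]}
    print(satisfies({"color": "red", "size": 7}, concept))    # True
    print(satisfies({"color": "green", "size": 7}, concept))  # False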
We consider the following types of concepts:
1. pure conjunctive: term1 and term2 and ... and termk, where each termi is an elementary term, e.g. color = red and 5 ≤ size ≤ 12,
2. pure disjunctive: same as pure conjunctive, but terms are connected by "or",
3. internal disjunctive: same as pure conjunctive, but allowing compound terms, e.g. (color = red or blue or yellow) and (5 ≤ size ≤ 12).

These concept types have the following interpretations in the context of rule based knowledge representations. Pure conjunctive: antecedent of a single, variable-free Horn clause rule (PROLOG rule), e.g.
type = pos ← color = red and 5 ≤ size ≤ 12.
Pure disjunctive: antecedents of several rules, each with a single term and all with a common consequent. Internal disjunctive: antecedent of a single rule with pure disjunctive "helper rules" for the compound terms, e.g. for the internal disjunctive concept given above, create a new value "primary" for color and form the rules
color = primary ← color = red
color = primary ← color = blue
color = primary ← color = yellow
type = pos ← color = primary and 5 ≤ size ≤ 12

In Section 2 we will see how collections of rules for internal disjunctive concepts can be generated mechanically from samples. But first, we describe how these and other learning algorithms can be evaluated. To quantify the inductive bias of a learning algorithm, we use the following notion from [VC71]. Let X be an instance space and let H be a class of concepts defined on X, e.g. the class of pure conjunctive concepts over an instance space determined by a fixed set of attributes. For any finite set S ⊆ X of instances, Π_H(S) = {S ∩ h : h ∈ H}, i.e. the set of all subsets of S that can be obtained by intersecting S with a concept in H, or equivalently, the set of all ways the instances of S can be divided into positive and negative instances so as to be consistent with some concept in H. If Π_H(S) is the set of all subsets of S then we say that S is shattered by H. The Vapnik-Chervonenkis dimension of H (or simply the dimension of H) is the smallest integer d such that no S ⊆ X of cardinality d + 1 is shattered by H. If no such d exists, the dimension of H is infinite.*

As an example, suppose X is the instance space defined by one linearly ordered attribute size and H is the set of pure conjunctive concepts over X. Thus H is just the set of elementary terms involving size, i.e. size intervals. For any three distinct instances, i.e. instances where size = x, size = y and size = z, with x < y < z, there is no concept in H for which the first and third instances are positive but the second instance is negative, because there is no interval that contains x and z without containing y. Hence no set of three instances in X can be shattered by H, implying that the dimension of H is at most 2. Since any two out of three distinct instances can be shattered by H, this upper bound is tight, at least when size has three or more distinct values.

Upper bounds on the dimensions of the more general concept classes introduced above are as follows. For k-term pure conjunctive concepts on n attributes, each tree-structured or linear:
(1) d ≤ 4k log(4k√n).
For k of size roughly n/2 or larger,
(1') d ≤ 2n
gives a better upper bound. For k-term pure disjunctive concepts on n attributes:
(2) d ≤ 4k log(16n)(log(2k) + log log(16n)).
For k-term internal disjunctive concepts on n attributes, using a total of j internal disjunctions:
(3) d ≤ 5(k+j) log(5(k+j)√n).
Justifications for these bounds are omitted due to lack of space.

*[VC71], [WD81], [A83] and [BEHW86] give a variety of other examples of concept classes of finite dimension. When H is of finite dimension, Wenocur and Dudley call H a Vapnik-Chervonenkis Class (VCC) [WD81]. The Vapnik-Chervonenkis number of this class, denoted V(H), corresponds to the dimension of H plus one.
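The definitions of shattering and dimension can be checked mechanically on small finite classes. The following brute-force sketch (hypothetical code, not from the paper) reproduces the dimension-2 result derived above for the class of size intervals.

    from itertools import combinations

    def shattered(points, concepts):
        """True if every subset of `points` equals points & c for some concept c."""
        achievable = {frozenset(p for p in points if p in c) for c in concepts}
        return len(achievable) == 2 ** len(points)

    def vc_dimension(instances, concepts):
        """Largest d such that some d-subset of `instances` is shattered.
        Exhaustive search -- only sensible for tiny finite spaces."""
        d = 0
        for size in range(1, len(instances) + 1):
            if any(shattered(s, concepts) for s in combinations(instances, size)):
                d = size
        return d

    # Interval concepts over sizes 1..6, as in the example in the text:
    instances = range(1, 7)
    intervals = [frozenset(range(lo, hi + 1))
                 for lo in instances for hi in instances if lo <= hi]
    print(vc_dimension(list(instances), intervals))   # prints 2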
Let C be a class of target concepts of some type and level of complexity, e.g. p-term pure conjunctive concepts over an instance space defined by n attributes. Given a target concept in C and some number m of observations of this concept, a learning algorithm will explore some space of possible hypotheses. This will be called the effective hypothesis space of the learning algorithm for target concepts in C and sample size m. The numerical bias of the learning algorithm is defined as the Vapnik-Chervonenkis dimension of its effective hypothesis space. A lower bias is a stronger bias. For example, in the next section we will present an algorithm (Algorithm 2) for learning pure conjunctive concepts that has the following property: presented with m observations of an unknown p-term pure conjunctive target concept over an instance space of n attributes, it always produces a consistent pure conjunctive hypothesis with at most p ln(m) + 1 terms. Hence the effective hypothesis space of the algorithm for target concepts of this type with sample size m is the class of at most p ln(m) + 1-term pure conjunctive concepts over an instance space of n attributes. These limitations on the hypothesis space are due to the fact that the algorithm only considers pure conjunctive hypotheses and prefers concepts with fewer terms, two of the informal types of bias identified in [U86]. Using formula (1) with k = p ln(m), we can approximately upper bound the numerical bias of the algorithm for p-term pure conjunctive target concepts over an instance space of n attributes by
(5) 4p ln(m) log(4p ln(m)√n).
We can now use the following theorem to relate this numerical bias with the convergence rate of the algorithm for these target concepts.

Theorem 1.* Given any consistent learning algorithm with numerical bias, for target concepts in a class C and sample size m, of at most r·m^α, where r ≥ 2 and 0 ≤ α < 1, then for any probability distribution on the instance space, any target concept in C and any ε and δ between 0 and 1, given a random sample of the target concept of size at least
(6) max( (4/ε) log(2/δ), ((8r/(ε(1-α))) log(8r/(ε(1-α))))^(1/(1-α)) )
the algorithm produces, with probability at least 1 - δ, a hypothesis with error at most ε. If the numerical bias is bounded by r(log m)^l, it suffices to have sample size
(7) max( (4/ε) log(2/δ), (2^(l+4) r/ε) log^l(8(2^(l+2))^(l+1) r/ε) ).

Plugging formula (5) into (7) but ignoring the log ln(m) term (i.e. letting l = 1 and r = 4p log(4p√n)), this theorem shows that given a p-term pure conjunctive target concept over n attributes and approximately
(8) max( (4/ε) log(2/δ), (128p log(4p√n)/ε) log(128p log(4p√n)/ε) )
random observations, Algorithm 2 produces, with probability at least 1 - δ, a hypothesis with error at most ε, independent of the target concept and independent of the underlying distribution governing the generation of observations. By a different argument, using bound (1') and (6) with α = 0, we can also obtain the upper bound
(9) max( (4/ε) log(2/δ), (16n/ε) log(16n/ε) )
on the required number of observations, which is considerably better for small n.

*This is derived from Theorem 11 of [BEHW86]. We are suppressing some additional measurability assumptions required in the general form of the theorem, since they will not be relevant in our intended applications (see appendix of [BEHW86]).
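As a sanity check on how these bounds behave, the following small calculator evaluates bound (9). The function name is invented, and base-2 logarithms are an assumption, since the paper does not fix the base.

    from math import log2, ceil

    def sample_size_bound_9(n, eps, delta):
        """Observations sufficient, per bound (9), for any consistent pure
        conjunctive learner over n attributes to reach error <= eps with
        probability >= 1 - delta (base-2 logs assumed)."""
        return ceil(max((4 / eps) * log2(2 / delta),
                        (16 * n / eps) * log2(16 * n / eps)))

    print(sample_size_bound_9(n=10, eps=0.1, delta=0.05))

Note the linear dependence on n, which motivates the move to the greedy algorithm below.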
An interesting feature of formulas (8) and (9) is that the convergence rate does not depend at all on the size or complexity of the trees that define the values of the tree-structured attributes, nor on the number of values of the linearly ordered attributes. It also shows that the convergence rate depends only logarithmically on the number of attributes and the confidence factor δ. The strongest dependence is on the inverse error 1/ε and the number p of terms in the target concept, yet neither of these is much worse than linear.

In fact, the argument used in proving Theorem 1 shows the following stronger result: given any p-term pure conjunctive target concept over n attributes and a sample of approximately the size given in (8) or (9), with probability at least 1 - δ any consistent hypothesis within the effective hypothesis space of Algorithm 2, i.e. any consistent conjunct with at most p ln(m) + 1 terms, will have error at most ε, independent of the underlying probability distribution that governs the observations. Thus no matter what our method is, if we happen to find a conjunct with at most p ln(m) + 1 terms consistent with a sample of this size, then we can use this conjunct as our hypothesis and have confidence at least 1 - δ that its error is at most ε. This kind of a posteriori guarantee is what led Pearl [P78] to connect the complexity and credibility of inferred models.

2. Application: learning concepts with internal disjunction

We now illustrate the application of the analytical method outlined above in the stepwise development and analysis of learning algorithms for pure conjunctive, pure disjunctive and finally internal disjunctive concepts. We will use the single representation trick, as described in [C82]: each observation is encoded as a rule, e.g. a positive observation of a red triangle of size 5 becomes:
type = pos ← color = red and size = 5 and shape = triangle.

Let S be a sample encoded in this form and A be an attribute. If A is a tree-structured attribute, for each term A = v that occurs in the sample S, mark the leaf of the tree for A that represents the value v with the number of positive observations and the number of negative observations that include the term A = v. If A is a linear attribute, build a list of such pairs of numbers, ordered by the values v. This data structure will be called the projection of the sample onto the attribute A.

Given the projection of S onto A, we can find the most specific term of the form A = v that implies all of the positive observations, which we call the minimal dominating term for A.* If A is a tree-structured attribute, the minimal dominating term is A = v, where v is the value of the node that is the least common ancestor of all the leaves of the tree of A whose values occur in at least one positive observation. This minimal dominating term is found using the climbing tree heuristic of [MCL83]. It corresponds to the "lower mark" in the attribute trees of [BSP85]. If A is a linear attribute, the minimal dominating term is the term v1 ≤ A ≤ v2, where v1 and v2 are the smallest and largest values of A that occur in at least one positive observation, i.e. the result of applying the "closing interval rule" of [MCL83]. We can use the minimal dominating terms to find the most specific pure conjunctive concept consistent with a given sample.

*For simplicity, we will assume that every sample contains at least one positive and one negative observation. This implies (among other things) that a minimal dominating term always exists, and will make our algorithms simpler.
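Both the closing interval rule and the climbing tree heuristic are easy to realize from the projections. A minimal sketch, with invented names, observations as attribute-value dictionaries, and a tree-structured attribute encoded as a child-to-parent map:

    def minimal_dominating_term_linear(attr, positives):
        """Closing interval rule: tightest interval covering all positive
        observations of a linear attribute."""
        values = [obs[attr] for obs in positives]
        return (attr, min(values), max(values))

    def minimal_dominating_term_tree(attr, positives, parent):
        """Climbing tree heuristic: least common ancestor, in the value
        hierarchy `parent`, of all positively observed values."""
        def ancestors(v):
            path = [v]
            while v in parent:
                v = parent[v]
                path.append(v)
            return path
        paths = [ancestors(obs[attr]) for obs in positives]
        # first value on the root-ward path of the first positive that also
        # lies on every other positive's path
        return (attr, next(a for a in paths[0] if all(a in p for p in paths)))

    positives = [{"size": 5, "shape": "triangle"}, {"size": 9, "shape": "square"}]
    parent = {"triangle": "polygon", "square": "polygon", "hexagon": "polygon",
              "polygon": "any-shape", "circle": "any-shape"}
    print(minimal_dominating_term_linear("size", positives))         # ('size', 5, 9)
    print(minimal_dominating_term_tree("shape", positives, parent))  # ('shape', 'polygon')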
Algorithm 1. (naive algorithm for learning conjunctive concepts)
1. For each attribute, calculate the projection of the sample onto this attribute and find the minimal dominating term. Let the conjunction of these minimal dominating terms be the expression E.
2. If no negative examples are implied by E then return E, else report that the sample is not consistent with any pure conjunctive concept.

The effective hypothesis space of this algorithm is the class of all pure conjunctive concepts over some fixed set of attributes and doesn't depend on the sample size or the number of terms in the target concept. Since the dimension of pure conjunctive concepts on n attributes is at most 2n by formula (1') given above, the convergence rate of this algorithm is given by formula (9) above, i.e. given a random sample of size (9), Algorithm 1 produces, with probability at least 1 - δ, a hypothesis with error at most ε for any pure conjunctive target concept and any distribution on the instance space.

While significant in its generality, this upper bound suffers from the fact that the number of observations required grows at least linearly in the number of attributes. In many AI learning situations where conjunctive concepts are used, the task is to learn relatively simple conjuncts from samples over instance spaces with many attributes. In this case a better algorithm would be to find the simplest conjunct (i.e. the conjunct with the least number of terms) that is consistent with the data, rather than the most specific conjunct. With this strategy, given a sample of any p-term pure conjunctive concept on n attributes, we always find a consistent pure conjunctive hypothesis that has at most p terms. Thus by the same analysis (i.e. using (6) with α = 0) and using formula (1) instead of (1') (with k = p), the upper bound on the sample size required for convergence is reduced to
(10) max( (4/ε) log(2/δ), (32p log(4p√n)/ε) log(32p log(4p√n)/ε) ),
which is logarithmic in the number of attributes. Call this the optimal algorithm. Can it be efficiently implemented? The following shows that it probably cannot.

Theorem 2. Given a sample on n attributes, it is NP-hard to find a consistent pure conjunctive concept for this sample with the minimum number of terms.

In proving this theorem, we show that this problem is equivalent to the following NP-hard problem [GJ79]. Minimum Set Cover: given a collection of sets with union T, find a subcollection whose union is T that has the minimum number of sets. There is, however, an obvious heuristic for approximating the minimum cover of T: first choose a largest set; then remove the elements of this set from T and choose another set that includes the maximum number of the remaining elements, continuing in this manner until T is exhausted. This is called the greedy method. Applying it to the problem of finding pure conjunctive concepts, we get the following.

Algorithm 2. (greedy algorithm for learning pure conjunctive concepts)
1. For each attribute, calculate the projection of the sample onto this attribute and find the minimal dominating term.
2. Starting with the empty expression E, while there are negative observations in the sample do:
a. Among all attributes, find the minimal dominating term that eliminates the most negative observations and add it to E, breaking out of the loop if no minimal dominating term eliminates any negative examples.
b. Remove from the sample the negative observations that are eliminated and update the projections onto the attributes accordingly.
3. If there are no negative observations left return E, else report that the sample is not consistent with any pure conjunctive concept.
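A direct transcription of Algorithm 2's greedy loop might look as follows (all names are invented; since the minimal dominating terms depend only on the positive observations, this sketch computes them once up front, and the demo uses interval terms only for brevity):

    def greedy_conjunctive(positives, negatives, attributes, mdt, satisfies):
        """Algorithm 2 as a greedy set cover: each attribute's minimal
        dominating term "covers" the negatives it eliminates.  `mdt` and
        `satisfies` are assumed given (e.g. the sketches above)."""
        terms = {a: mdt(a, positives) for a in attributes}        # step 1
        E = []
        while negatives:                                          # step 2
            # 2a: the term eliminating the most remaining negatives
            best = max(terms.values(),
                       key=lambda t: sum(not satisfies(o, t) for o in negatives))
            if all(satisfies(o, best) for o in negatives):        # step 3 (failure)
                raise ValueError("sample not consistent with any pure conjunct")
            E.append(best)
            # 2b: remove the eliminated negative observations
            negatives = [o for o in negatives if satisfies(o, best)]
        return E                                                  # step 3 (success)

    def satisfies(obs, term):          # interval terms (attr, lo, hi) only
        attr, lo, hi = term
        return lo <= obs[attr] <= hi

    pos = [{"size": 5}, {"size": 9}]
    neg = [{"size": 2}, {"size": 14}]
    print(greedy_conjunctive(pos, neg, ["size"],
                             mdt=lambda a, P: (a, min(o[a] for o in P),
                                               max(o[a] for o in P)),
                             satisfies=satisfies))    # [('size', 5, 9)]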
It can be shown that if the set T to be covered has m elements and p is the size of the minimum cover, then the greedy method is guaranteed to find a cover of size at most p log(m) + 1 [N69], [J74]. Hence given a sample of a p-term pure conjunctive concept with m negative observations, Algorithm 2 is guaranteed to find a consistent pure conjunctive hypothesis with at most approximately p log(m) terms. Using Theorem 1, this gives the approximate upper bound on the convergence rate for Algorithm 2 given by formula (8) in the previous section. Since Algorithm 2 is, like Algorithm 1, a consistent algorithm for arbitrary pure conjunctive concepts, the bound on the convergence rate given in formula (9) holds as well. Note that the bound on the convergence rate for the greedy method is not much worse than the bound (10) for the optimal algorithm, yet the greedy method is significantly cheaper computationally.

The complements of pure conjunctive concepts can be represented as pure disjunctive concepts. Hence this is the dual form of pure conjunctive concepts. A variant of Algorithm 2 can be used to learn pure disjunctive concepts. In the dual form, each term must eliminate all negative observations and need only imply some subset of positive observations, and all terms together must imply all positive observations. The dual greedy method is to repeatedly choose the term that implies the most positive observations and add it to the disjunct, removing the positive observations that are implied, until all positive observations are accounted for. This is a variant of the "star" method in [MCL83]. Since k-term pure disjunctive concepts have a Vapnik-Chervonenkis dimension similar to that of k-term pure conjunctive concepts (formula (2)), the analysis of the convergence rate of this algorithm goes through as above.

We now tackle internal disjunctive concepts. The calculation of the Vapnik-Chervonenkis dimension of these concepts given in the previous section indicates that the strongest bias in learning them is to minimize the total number of terms plus internal disjunctions, i.e. to minimize the total size of all the terms, where the size of a compound term is defined as the number of internal disjunctions it contains plus one. Let E be an internal disjunctive concept that is consistent with a given sample. As with pure conjunctive concepts, each term in E implies all positive observations and eliminates some set of negative observations. A compound term with this property will be called a dominating compound term. We would like to eliminate all the negative observations using a set of terms with the smallest total size. This leads to the following generalization. Minimum Set Cover with positive integer costs: given a collection of sets with union T, where each set has associated with it a positive integer cost, find a subcollection whose union is T that has the minimum total cost. Since it generalizes Minimum Set Cover, this problem is clearly NP-hard. However, approximate solutions can be found by a generalized greedy method. Let T' be the set of elements remaining to be covered. For each set in the collection, define the gain/cost ratio of this set as the number of elements of T' it contains divided by its cost. The generalized greedy method is to always choose the set with the highest gain/cost ratio.
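The generalized greedy method itself is only a few lines. In the internal-disjunction learner that follows, each candidate set would be the negative observations a dominating compound term eliminates, and its cost the term's size. A sketch with invented names and toy data:

    def generalized_greedy_cover(universe, sets_with_costs):
        """Repeatedly pick the set with the best gain/cost ratio until the
        universe is covered.  `sets_with_costs` is a list of
        (set, positive integer cost) pairs."""
        remaining, cover = set(universe), []
        while remaining:
            s, c = max(sets_with_costs,
                       key=lambda sc: len(sc[0] & remaining) / sc[1])
            if not s & remaining:
                raise ValueError("universe not coverable")
            cover.append((s, c))
            remaining -= s
        return cover

    negatives = {1, 2, 3, 4, 5}     # elements = negative observations to eliminate
    candidates = [({1, 2, 3}, 2), ({3, 4}, 1), ({4, 5}, 1), ({1, 2, 3, 4, 5}, 4)]
    print(generalized_greedy_cover(negatives, candidates))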
As with the basic Minimum Set Cover problem, it can be shown that if the original set T to be covered has m elements and p is the minimum cost of any cover, then the generalized greedy method is guaranteed to find a cover of cost at most p log(m) + 1. To apply this method in learning internal disjunctions, let the gain/cost ratio of a dominating compound term be the number of negative observations it eliminates divided by its size.

Algorithm 3. (greedy algorithm for learning internal disjunctive concepts)
1. For each attribute, calculate the projection of the sample onto this attribute.
2. Starting with the empty expression E, while there are negative observations in the sample do:
a. Among all attributes, find the dominating compound term t with the highest gain/cost ratio, breaking out of the loop if none have positive gains. If there is no term for the attribute of t already in E, add t to E. Otherwise replace the old term in E for the attribute of t with t.
b. Remove from the sample the negative observations t eliminates and update the projections onto all attributes accordingly.
3. If there are no negative observations left return E, else report that the sample is not consistent with any internal disjunctive concept.

To implement this algorithm, we need a procedure to find a dominating compound term with the highest gain/cost ratio for a given attribute from the projection of the sample onto that attribute. Since there are in general exponentially many distinct dominating compound terms with respect to the number of leaves of a tree-structured attribute or the number of values of a linear attribute, this cannot be done by exhaustive search. However, there is a reasonably efficient recursive procedure that does this for tree-structured attributes, and a simple iterative procedure for linear attributes. Each of these procedures takes time O(q²), where q is the number of distinct values of the attribute that appear in the observations. Space limitations preclude a detailed discussion of these procedures.

By formula (3) and the above result on the performance of the generalized greedy method, the numerical bias of Algorithm 3 for k-term internal disjunctive target concepts using a total of j internal disjunctions (i.e. of size k + j) and sample size m is at most 5(k+j) ln(m) log(5(k+j) ln(m)√n). Ignoring the log ln(m) term, formula (7) of Theorem 1 gives an upper bound on the convergence rate similar to that of Algorithm 2 given in equation (8), with k+j substituted for p.
3. Extensions

There are several possible extensions to these algorithms that would increase their domain of application. We outline two of them here.

1. The ability to handle "don't care" values for some attributes in the sample (see e.g. [Q86], [V84]). A "don't care" value for attribute A corresponds to an observation in rule form having the term A = any-value. In fact, we can go one step further and let observations be arbitrary pure conjunctive expressions, where, for example, the positive observation shape = polygon and color = blue means that the concept contains all blue polygons, and the corresponding negative observation means that no blue polygons are contained in the concept. In this form, the problem of learning from examples is seen to be a special case of the more general problem of knowledge refinement [MIC86], wherein we start with a collection of rules that are already known and try to derive from them a simpler, more general (and hopefully more comprehensible) set of rules. This extension can be accomplished by modifying the notion of the projection of the samples onto the attributes to allow terms of the observations to project to internal nodes of the tree-structured attributes or intervals in the linear attributes. Other parts of the algorithm are changed accordingly.

2. Promoting synergy while learning a set of concepts. So far we have only considered the problem of learning a single concept in isolation. In fact, we would like to build systems that learn many concepts, with higher level concepts being built upon intermediate and low level concepts (see e.g. [B85], [SB86]). The first step is to extend our notion of concept to include many-valued observations, rather than just positive and negative. In this way we can learn rules that define the values of one attribute in terms of the values of the other attributes. This is essentially knowledge refinement on relational databases [MIC86]. Ignoring attributes with many values for the time being, this can be accomplished in a reasonable way by finding a separate concept for each value of the attribute that discriminates this value from all the others. Once we have learned to recognize the values of the new attribute in terms of the primitive attributes, it can be added to the set of primitive attributes and used later in learning to recognize the values of other attributes. In this scheme new attributes are always nominal. However, they could acquire a tree structure as they are used to define later concepts in the following manner (see also [U86], [BSP85]): whenever an internal disjunctive concept is formed using a compound term A = v1 or v2 or ... or vk, check to see if this same compound term is required by other concepts. If it is required often enough, check the tree for the attribute A to see if a new node representing v1, ..., vk can be added without destroying the tree structure. If a new node is added, the compound terms it represents can be replaced by an elementary term using the value of the new node. Thus the collection of rules given in Section 1 for the internal disjunctive concept involving the primary colors might be created by the "discovery" of the higher level value of primary for the attribute color. In this way a useful vocabulary of more abstract values for attributes evolves under the pressure to find simple forms for higher level concepts, creating a synergy between learned concepts.

Another type of synergy is achieved by using the algorithm for pure conjunctive concepts along with the dual algorithm for pure disjunctive concepts. If new Boolean attributes are defined for often-used pure conjuncts or disjuncts, then these can allow the recognition of higher level concepts in DNF and CNF respectively, by effectively reducing these expressions to pure disjunctive or conjunctive form. Often-used internal disjunctive concepts could be used as well. The creation of these new attributes can greatly increase the number of attributes that are considered in later learning tasks, which argues strongly for learning methods whose performance does not degrade badly as the number of attributes grows, such as those we have presented.

Conclusion.
We have presented a methodology for the quantitative analysis of learning performance based on a relatively simple combinatorial property of the space of hypotheses explored by the learning algorithm. Applications of this methodology have been presented in the development and analysis of learning algorithms for pure conjunctive, pure disjunctive and internal disjunctive concepts. Several open problems remain, in addition to those mentioned above. Some are:
1. Can we develop the proper analytic tools to deal with algorithms that a. attempt to handle the problem of noisy data [Q86] or b. attempt to learn "fuzzy" concepts that are defined probabilistically with respect to the instance space?
2. What power is gained by allowing the learning algorithm to form queries during the learning process [SB86], [ANG86]?
3. Can we find provably efficient incremental learning algorithms (i.e. ones that modify an evolving hypothesis after each observation) to replace the "batch processing" learning algorithms we have given here?
4. To what extent can we extend these results to concepts that involve internal structure, expressed with the use of variables, quantifiers and binary relations (e.g. the c-expressions of [MCL83])?

Acknowledgements. I would like to thank Larry Rendell for suggesting the relationship between the Vapnik-Chervonenkis dimension and Utgoff's notion of inductive bias, and Ryszard Michalski for suggesting I look at the problem of learning internal disjunctive concepts. I also thank Les Valiant, Leonard Pitt, Phil Laird, Ivan Bratko, Stephen Muggleton and Andrzej Ehrenfeucht for helpful discussions of these ideas, and an anonymous referee for suggestions on improving the presentation.

References:
[ANG86] Angluin, D., "Learning regular sets from queries and counter-examples," Tech. rep. YALEU/DCS/TR-464, Yale University, 1986.
[A83] Assouad, P., "Densité et dimension," Ann. Inst. Fourier, Grenoble 33 (3) (1983) 233-282.
[B85] Banerji, R., "The logic of learning: a basis for pattern recognition and improvement of performance," in Advances in Computers, 24, (1985) 177-216.
[BEHW86] Blumer, A., A. Ehrenfeucht, D. Haussler and M. Warmuth, "Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension," 18th ACM Symp. Theory of Computing, Berkeley, CA, 1986, to appear.
[BSP85] Bundy, A., B. Silver and D. Plummer, "An analytical comparison of some rule-learning programs," Artif. Intell. 27 (1985) 137-181.
[C82] Cohen, P. and E. Feigenbaum, Handbook of AI, Vol. 3, William Kaufmann, 1982, 323-494.
[GJ79] Garey, M. and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, 1979.
[J74] Johnson, D.S., "Approximation algorithms for combinatorial problems," J. Comp. Sys. Sci., 9, 1974.
[MCL83] Michalski, R.S., "A theory and methodology of inductive learning," in Machine Learning: An Artificial Intelligence Approach, Tioga Press, 1983, 83-134.
[MIC83] Michie, D., "Inductive rule generation in the context of the fifth generation," Proc. Int. Mach. Learning Workshop, Monticello, IL, (1983) 65-70.
[MIT82] Mitchell, T.M., "Generalization as search," Artif. Intell. 18 (1982) 203-226.
[N69] Nigmatullin, R.G., "The fastest descent method for covering problems (in Russian)," Proceedings of a Symposium on Questions of Precision and Efficiency of Computer Algorithms, Book 5, Kiev, 1969, 116-126.
[P78] Pearl, J., "On the connection between the complexity and credibility of inferred models," Int. J. Gen. Sys., 4, 1978, 255-264.
[Q86] Quinlan, J.R., "Induction of decision trees," Machine Learning, 1 (1) (1986), to appear.
[R86] Rendell, L., "A general framework for induction and a study of selective induction," Machine Learning 1 (2) (1986), to appear.
[SB86] Sammut, C., and R. Banerji, "Learning concepts by asking questions," in Machine Learning II, R. Michalski, J. Carbonell and T. Mitchell, eds., Morgan Kaufmann, Los Altos, CA, 1986.
[U86] Utgoff, P., "Shift of bias for inductive concept learning," ibid.
[V84] Valiant, L.G., "A theory of the learnable," Comm. ACM, 27 (11), 1984, 1134-1142.
[V85] Valiant, L.G., "Learning disjunctions of conjunctions," Proc. 9th IJCAI, Los Angeles, CA, 1985, 560-566.
[VC71] Vapnik, V.N. and A.Ya. Chervonenkis, "On the uniform convergence of relative frequencies of events to their probabilities," Th. Prob. and its Appl., 16 (2), 1971, 264-280.
[WD81] Wenocur, R.S. and R.M. Dudley, "Some special Vapnik-Chervonenkis classes," Discrete Math., 33, 1981, 313-318.
PRELIMINARY STEPS TOWARD THE AUTOMATION OF INDUCTION
Stuart J. Russell
Department of Computer Science, Stanford University, Stanford, CA 94305

ABSTRACT

Rational inductive behaviour is strongly influenced by existing knowledge of the world. This paper begins to elucidate the formal relationship between the base-level induction to be attempted, the direct evidence for it (positive and negative instances) and the indirect evidence (higher-level regularities in the world). By constructing a program to search the space of forms of higher-level regularity we discover some important new forms which have direct application to analogy, single-instance generalization and enumerative induction in general. We outline a theory which we hope is the first step towards the construction of powerful and robust learning systems.*

I INTRODUCTION

Ultimately, the source of all our knowledge of the world must be observation, either direct, communicated or inherited. One of the principal problems of philosophy has been to explain how this accumulation of observations can be used to fill in the gaps in our knowledge, particularly of the future. Without such an ability, rationality, which requires the prediction of the outcome of our actions, would be impossible. In AI, the problem is doubly acute: not only do we desire to understand the process for its own sake, but also without such an understanding we cannot build machines that learn. The basic answer to the problem is that we come to believe in some generally applicable rules (universals) by a process of induction from prior instances of their application; we then apply these rules in situations of incomplete knowledge using deduction. So far, so good. In AI, the two halves of the process correspond roughly to the division into the areas of machine learning and knowledge-based systems. Analogy, which seems at first sight to defy this classification, is shown in [Davies & Russell 86] to belong more to the deductive phase.

In this paper, our object is to make some progress towards a theory of induction which will prescribe, as far as is possible, the correct inductive behaviour for an intelligent system. As explained below, one essential element of this task is to explicate the way in which existing world knowledge affects a system's inductive acquisition of new knowledge. This need is pointed out in [Michalski 83]. In order to explain how present-day intelligent systems (such as ourselves) have arrived at our degree of understanding of the world, given the fact that at the beginning of evolutionary history there was no existing knowledge, our theory must provide a formal relationship between the system's existing knowledge and the universal to be induced; put simply, we seek a domain-independent theory.

The basic problem to be solved is this: given a mass of ground facts and no other domain knowledge, what can be inferred? As mentioned earlier, we perform inductions on the ground facts to obtain universals. Enumerative induction is just the simple process by which, from a collection of instances a1 ... an satisfying P(ai) and Q(ai), we induce the general rule ∀x[P(x) ⇒ Q(x)].

*This work was performed while the author was supported by a NATO studentship from the UK Science and Engineering Research Council, and by ONR contract N00014-81-K-0004. Computing support was provided by the SUMEX-AIM facility, under NIH grant RR-00785.
The search for a rationale for this inductive step seems to be circular: we use it because it has always worked, but the belief that this means it will work in the future requires an inductive step. This is Hume's Problem of Induction, which, according to modern interpretation, he rightly deemed to be inherently insoluble. If we could prove an enumerative induction to be valid, this would amount to prevision of the future, a scientifically dubious concept.*

Intuitively, an enumerative induction is made more certain by the discovery of further confirming instances as long as no disconfirmation occurs. This model of induction is somewhat different from the version space approach to concept learning ([Mitchell 78]), in which the generalizations produced are justified by a linguistic bias which limits the set of allowable generalizations, so that if only one of the set is consistent with the observations then it is assumed to be true. This means that the number of confirming instances is ignored. Moreover, the factual content of the linguistic bias is neither elucidated nor motivated (but see [Utgoff 84]); in this light it is hard to view the version space approach as a form of inference. This issue is also discussed in [Dietterich 86]. The problem with which we are concerned is not just the selection of an appropriate generalization for some data, but the assessment of its probable truth; selection derives automatically from this if we select the most probable generalization.

In particular, we wish to investigate why one generalization may be given a great deal of credence, whilst another is regarded very suspiciously, even though they both have the same number of positive instances and no negative instances. For example, consider the case of the traveller to Italy meeting her first Italian. On hearing him speak Italian, she immediately concludes that all Italians speak Italian; yet on discovering that his name is Giuseppe, she doesn't conclude that all Italians are called Giuseppe. Clearly, the difference lies in the traveller's prior knowledge of countries, languages and names. Goodman's classic example of grue emeralds is another case in point, which he used in [Goodman 46] to refute the early claims of the confirmation theorists (Carnap and others) that the probability of a proposition could be inferred from its instances and syntactic form alone.

*For, despite our best predictions, the whole world could be swallowed tomorrow by a giant intergalactic toad ([Hoppe]).
The fact is that when- ever we set about determining the validity of a given projection from a given base, we have and use a good deal of other relevant knowledge.” ([Goodman 8.31 pp. 845). The object of this paper is to show what this knowledge con- sists of, and to show how it can be found and used to give additional confirmation to enumerative inductions. What we want is a theory which will be able to start with nny body of knowledge of any world (preferably in wff form), and say which inductions are reasonable and which aren’t. We therefore re- quire that the ‘other relevant knowledge’ have a syntactic rela- tionship to the evidence and inductive hypothesis, since other- wise the theory itself will be assuming something factual about the world, and hence will fail when applied to a world in which the factual assumption is false. In this, we strongly disagree with [Holland et al. 861, who say “In essence, our approach is to deny the sufficiency of purely syntactic accounts . . . and to insist that sensible inferential rules take into account the kinds of things being reasoned about.” We believe that such an ap- proach simply begs the question of how such world-dependent rules could ultimately be acquired, except by some syntactic process; moreover, a physical system seems fundamentally in- capable of performing anything but syntactic processes. For- tunately, in a formal system, logical entailment is a syntactic relationship (this is the fundamental achievement of the study of logic since Aristotle) and will play a large role in our theory. If we are to build systems which observe an environment containing regularities and make use of them via the process of induction, we must be able to eliminate such spurious induc- tions as ‘all emeralds are grue’. It might be argued that Good- man is playing the sophist here; a philosopher might wish to know why emeralds are not considered grue, but the AI prag- matist might object that this is creating difficulties for the sake of it, and that we can avoid such problems in real systems just by not coining absurd, unmotivated concepts. However, an AI system needs to coin new terms (see? e.g., [Lenat 83a,83b], [Lenat et al. 791); not being endowed with common sense, an AI system is quite likely to generate terms as absurd as ‘grue’, and thus we need a theory to guard against inductions using them and a theory to help avoid their generation. At a more basic level, we wish to avoid calling all Italians Giuseppe. II HIGHER-LEVEL REGULARITIES The fundamental idea which we aim to expound and for- malize is that an inductive generalization can be confirmed or disconfirmed, not only by the observation of its own in- stances or counter-examples, but also by the observation of other, higher-level regularities in the world. Naturally, these regularities will be based on other instances and, in turn, on other regularities. The general idea is to bring our outside experience to bear on whether to accept a given rule. It is extremely rare for inductions to be performed in vucuo. In the case of the traveller in Italy, the generalization that all Italians speak Italian is supported by the more general regularity that, within any given country, most people tend to speak the same language; on the other hand, Giuseppe is not assumed to be the name of all Italians because of the higher-level regularity that almost all social groups use a variety of names. 
Assum- ing that emeralds are grue contradicts the general rule that intrinsic properties of objects don’t change, particularly not over a whole class and particularly not in response to some ab- solute time point (as opposed to a time point related to each individual). Some philosophers have objected to the use of such properties as grue in inductions on the grounds that they are intrinsically disjunctive ([Sanford 70]), not ostensively de- finable ([Salmon 74]), positional and non-qualitative ([Barker & Achinstein SO]) and epistemologically inferior ([Swinburne 731). But to the little-known species of bond-weevil that lives exclusively on unmatured, fixed-date, treasury bonds, proper- ties such as ‘grue’ will seem perfectly natural and useful. A theory of induction cannot, therefore, rest on ‘intrinsic’ prop- erties of the induced rule, but on its relation to the sum of our knowledge of the universe. In this paper, we will concentrate on confirmatory, rather than disconfirmatory, regularities. Our proposal is that each such regularity corresponds to a universally quantified propo- sition which, if taken as literally true, would be sufficient to deductively imply the base-level generalization we are at-tempt- ing, given its observed, positive instances. Furthermore, if the higher-level regularity is to provide additional confirmation, it must have positive instances, preferably a large number, which are not instances of the base-level rule. This is the external ev- idence requirement. In a formal system, therefore, the higher- level regularities have the desired syntactic relationship to the base-level rule (see the discussion of the syntactic requirement in the Introduction). The higher-level regularities, in turn, may be confirmed by regularities at a still higher level, until ultimately we have to give in to the necessity to do simple enumerative induction. In essence, therefore, we are trying to bring deduction to the aid of induction as far as possible, as a means of allowing our world knowledge to influence our inductive processes. In the remainder of this paper, we describe the following steps in the process of building a theory of induction: 1) Construction of the space of possible classes of higher- level regularities. 2) Searching the space for interesting classes. 3) Analyzing the results of the search. 4) Applying the results. III CONSTRUCTING THE SP.4CE OF HIGHER-LEVEL REGULARITIES For any particular induction, we can often think of some higher-level rule, derived from our experience, which either confirms or denies it, as in the Italian case. In order to auto- mate this process, we need to elucidate the formal relationship between the base-level and the general rule. We must also en- deavour to identify all such classes of general rules, in order that 178 / SCIENCE 1) We can take into account all the applicable higher-level rules already known. 2) We can perform further inductions to decide if a poten- tially relevant higher-level regularity actually holds. As mentioned above, the higher-level rule; if literally true, should form part of a deductive argument, together with the base-level instances, leading to the affirmation of the base-level rule. Our approach is therefore to construct the space of all possible deductive arguments that lead to the base-level rule as their conclusion. 
The construction is schematic, i.e., we use generalized predicate schemata P and Q as antecedent and consequent, and the results we are looking for are thus schematic classes of regularities, such that when faced with the task of confirming a given induction, we can instantiate the schematic rule appropriately and use it for steps 1) and 2) given above. In order to maintain completeness, we construct all resolution proofs of the base-level rule, given the instances. In describing how to do this, we will use rules with unary predicates, for simplicity. As we show below, this results in an overly-restricted space of first-order regularities; this restriction is relieved by using binary predicates.

The simplest schematic rule is ∀x[P(x) ⇒ Q(x)]; we must find those sets of facts which, when combined with the instances and the negation of the rule, lead to a contradiction. We thus begin with the negation of the rule, which, in clausal (CNF) form, is
P(a)    ¬Q(a)
for some skolem constant a; we then ask what other facts could be added to lead to a contradiction. To cut a long story short, the only interesting fact we can add is the rule itself, written ¬P(x) ∨ Q(x) in CNF. Thus our task becomes that of finding all sets of facts which can be resolved, together with the instances, to leave the base-level rule as the only remaining clause. Since a resolution step removes two complementary literals, our reverse resolution algorithm takes as input the current state of the database, then generates all possible new pairs of complementary literals (and finds all possible existing clauses to which they could be added), such that if the literals were resolved the database would be left in the current state. The literals we introduce can contain any existing predicate, variable or constant, or include a new one of each (designated Ri, yi, bi respectively; we choose not to include function symbols in our language).* Thus two possible 'parent' databases of the database containing just the clause ¬P(x) ∨ Q(x) are
¬P(x) ∨ R1(x)  and  ¬R1(x) ∨ Q(x)
and
¬R1(y1) ∨ ¬P(y1) ∨ Q(y1)  and  R1(b1)

As one might suspect, the space is quite large (in fact, doubly exponential): the base-level database ¬P(x) ∨ Q(x) has 20 possible parents; at the second level the average database has around 350 parents. Although we will not discuss them here, our implementation therefore includes a number of pruning heuristics which keep the search manageable without losing any of the interesting points in the space. Another modification is to introduce the instances in a 'macro-move': for the i-th instance we add the literals P(ai) and Q(ai) as separate clauses, along with their complementary literals attached to other parts of the database, all in one step.

*The exact details of the algorithm used are not important here; one can imagine constructing a simple PROLOG predicate resolve(Clause1, Clause2, Clause) which succeeds iff Clause is the result of resolving Clause1 and Clause2; we then invoke the predicate with only the Clause argument instantiated.
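A propositional toy version of this reverse-resolution step is easy to write down (the paper's algorithm works at the first-order level with unification; everything below, including the clause encoding as frozensets of signed literals, is an illustrative assumption, not the actual implementation):

    from itertools import chain, combinations

    def powerset(s):
        return (frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)))

    def resolve(c1, c2, lit):
        """Resolve clause c1 (containing lit) against c2 (containing its
        complement); literals are (name, polarity) pairs."""
        name, pol = lit
        assert lit in c1 and (name, not pol) in c2
        return (c1 - {lit}) | (c2 - {(name, not pol)})

    def parent_pairs(clause, vocabulary):
        """Enumerate parent clause pairs that resolve back to `clause` on a
        newly introduced complementary literal pair -- the core move of the
        reverse-resolution search, propositional case only."""
        for name in vocabulary:
            lit, neg = (name, True), (name, False)
            for left in powerset(clause):      # split clause between parents
                right = clause - left
                yield left | {lit}, right | {neg}

    base = frozenset({("P", False), ("Q", True)})   # ¬P ∨ Q, propositionalized
    for p1, p2 in list(parent_pairs(base, ["R1"]))[:3]:
        assert resolve(p1, p2, ("R1", True)) == base
        print(sorted(p1), "+", sorted(p2))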
IV SEARCHING THE SPACE

So far we have given a somewhat simplistic picture of the space of regularities we have constructed. As soon as we start searching it, we realize that many of the regularity classes are simply not plausible; that is, they fail to correspond to any possible regularity in the actual world. Unfortunately, this is a hard condition for any machine with limited experience to recognize. For this reason, we currently use a human evaluator for nodes in the space, so that the machine follows paths that the author thinks promising. As a preliminary measure, this has been quite successful; however, to attain our goal of a world-independent theory, and to explore more of the space, we also need to investigate how a machine can recognize that a given class of regularities is uncommon in the world of its experience. It is intended that such a capability be built into the prototype system we have constructed; for our world-knowledge, we will use the broad, common-sense knowledge base of the CYC project ([Lenat et al. 86]). Inasmuch as this knowledge base corresponds to our actual world, this will also constitute empirical research into the actual structure of high-level common-sense knowledge.

It is important for our purposes that the causal structure of the world be such that there are really only a few important classes of regularities. If this were not the case, then whenever we wished to confirm an induction it would be necessary to examine a large amount of potentially relevant information, and to perform a large amount of cumulative detection to maintain the currency of the stock of regularities. Our results so far indicate that there are grounds for optimism, at least in the real world.

V RESULTS

The following subsections describe the general classes of regularity which have been identified after searching the space, with the help of some additional thought. We start with the unary space to illustrate its restrictions, and then move to the binary space. In each section we give the schematic, logical form of the regularity, display the deductive argument leading to the base-level rule, and give an example. Some of these classes were already known to us; others were quite unexpected (sections A and E), although perhaps obvious in retrospect. We are thus convinced of the usefulness of an automatic, (semi-)exhaustive generator of classes of deductive arguments.

A. Rules with a more general left-hand side

The simplest class of higher-level regularities consists of rules with the same consequent as the base-level rule but a weaker (more general) antecedent. Thus the rule ∀x[R(x) ⇒ Q(x)] (where ∀x[P(x) ⇒ R(x)]) is sufficient to imply the base-level rule ∀x[P(x) ⇒ Q(x)] directly. Examples:
a) "All social groups use a variety of names" confirms "All nations use a variety of names." Here P = Nation, Q = NameVariety, R = SocialGroup.
b) "All things made of (a certain type of) beryl are green" confirms "All emeralds are green." Here P = Emerald, Q = Green, R = MadeOfBeryl.
861, which also has no logical need for an instance of the proposed generalization; we can extend the basic principle by adding further intermedi- ary concepts, for example where S is ‘reflects light of wavelength 550 nm’. The process of explanation-based generalization uses exactly such a detailed, non-operational theory and compiles it into such useful encapsulations as “all emeralds are green ” and “never run outside during an earthquake”. B. Decision rules The only other simple regularity we have found so far in the unary space takes the form V~YPW A J=(Y) A Q(Y) * Q(41 which can also be written as WW * Q(41 v WP(4 * ~QWI. In [Davies 851 these are called decision rules, because P decides the truth of Q. With one instance described by P(al) A Q(al), the base-level rule becomes deductively justified. Example: “Either all cars in Japan drive on the left, or they all drive on the right .” Once we see one car driving on the left, we know that all cars in Japan drive on the left. While it seems true that we can know this decision rule without having been to Japan, in fact it has no confirming instances that are not also instances of the base-level rule. Thus it does not satisfy the external evidence requirement. We actually believe it as a result of a further gen- eralization; if we restrict ourselves to formulae with only unary predicates, we must express this as a second-order regularity, by quantifying over the country predicate P: VP[NationaEityPredicate(P) I Vzy[P(x) A LeftDriver A P(y) * LeftDriwer( y)]] We will see in the next subsection that this awkward formula- tion is turned into a first-order sentence by using binary pred- icate schemata. C. Direct generalizations using binary predicates As noted above, using only unary predicates limits the rich- ness of the hierarchy of regularity classes; this limitation is eased when we use binary predicates. The base-level rule that we are now trying to confirm is written V’z[P(z,b) + Q(x,c)], where b and c are constants. In the unary space, the only in- teresting database that refutes the negation of the base-level rule was the rule itself. With binary predicates, we also have the following three ‘variabilization’ generalizations: D. V~YP’(~, Y> * Qb, 41 v’s@‘(~, b) * Q(z, 41 V~YZP(~, Y) * Q(v)l. More general rules using binary predicates The binary equivalent of the unary formulae for rules with more general antecedents is W&(~, al> * Q(G 41 where Vz[P@, b) * &(z, al)]. Thus the rule “things made of beryl are green” is expressed as Vz[MateriaE(z, Beryl) + CoZour(z, Green)] The normal type of causal argument introduces a chain of intermediate predicates Ri using appropriate linking constants a;. A simple generalization relationship between P and R can also be used: ‘WRl(z, b) * Qh 41 where V~YW, Y> * Rda:,~)l. E. Determination rules The binary equivalent of a decision rule is called a determi- nation, a form which captures a very common and useful type of regularity. The form VWZY@‘(Z, w) A P(Y, w> A Q(Y, z> =+ Qb d] together with one instance described by P(a, b), Q(a, c) is suf- ficient to guarantee the base-level rule. Example: If NationaZity(s, w) means “x has nationality w”, and Language(z, Z) means ‘Ox speaks language z”, then the determination V’ws yz[Nationality(x, w) A Nationality( y, w) ALanguage(y, z) + Language(z, z)] T means “Nationality determines Language”, since it re- quires that any two people with the same nationality must speak the same language. 
With the observation of Giuseppe, an Italian speaking Italian, this gives us the base-level rule "All Italians speak Italian". Two important points to note:
- Decision and determination rules find a common expression in the extension of predicate calculus described in [Davies and Russell 86], which also shows this form of regularity to be the necessary background knowledge for the successful use of analogical reasoning. We define a new connective, representing the determination relationship as P(x,w) ≻ Q(x,z).
- Determinations also provide a valid form of single-instance generalization which actually utilizes information contained in the instance in forming the generalization. This contrasts with the explanation-based generalization (EBG) technique, which simply uses the instance as a focus, assuming that the domain theory is already strong enough to prove the base-level rule. A corollary of this is that, by taking information from the instance, we can build a more powerful single-instance generalization system, in the sense that we can perform the generalization with a weaker domain theory. For example, using the determination "Nationality determines Language", and one instance of an Italian, we predict that all Italians speak Italian; for an EBG system this would require a theory which could predict an entire language (vocabulary, grammar and all) from facts about a nation; needless to say, no such theory is available.

F. Extended determinations

The regularity classes given above are sufficient to guarantee the generalization from no instances or from one instance. Yet quite often we find that one instance is not quite satisfying, but after several confirmations we are happy. One way to account for this is to postulate that the appropriate determination is only weakly supported, so that we need the extra instances to convince ourselves. A different way is to extend the search direction already taken to reach determinations, by adding further instances:
∀w,x,y1,...,yn,z[P(x,w) ∧ P(y1,w) ∧ Q(y1,z) ∧ ... ∧ P(yn,w) ∧ Q(yn,z) ⇒ Q(x,z)]
together with n instances described by P(a1,b), Q(a1,c), ..., P(an,b), Q(an,c) is sufficient to guarantee the base-level rule ∀x[P(x,b) ⇒ Q(x,c)]. The meaning of the extended determination (we might call it a determination-n) is clearly seen if we rewrite it:
∀w,y1,...,yn,z[P(y1,w) ∧ Q(y1,z) ∧ ... ∧ P(yn,w) ∧ Q(yn,z) ⇒ ∀x[P(x,w) ⇒ Q(x,z)]]
Roughly this can be interpreted as follows: "All enumerative inductions from n instances, with P as antecedent and Q as consequent, succeed." This regularity can be confirmed by a history of such successful inductions, and thus the induction in question, ∀x[P(x,b) ⇒ Q(x,c)], becomes justified.

As an example, consider again the case of inducing the rule "all emeralds are green", given n green instances. Formally, we write this as ∀x[JewelType(x, Emerald) ⇒ Colour(x, Green)]. Now many jewel types are not uniform in colour (diamonds, for example, come in black, yellow, blue, pink and white) so the determination "jewel type determines colour" does not hold and we cannot perform a single-instance induction. However, as we explain below, the extended determination does still hold, so the n-instance induction is justified. If we have successfully induced the rules "all sapphires are blue", "all rubies are red", "all amethysts are purple" from collections of instances, then these will be positive instances of the extended determination, so it will be well-confirmed.
But in the case of classes such as diamonds, the left-hand side of the extended determination isn't satisfied, since it is unlikely that n instances of a variegated class are all the same colour; thus diamonds are not a disconfirming instance of the extended determination, and it remains well-supported. If, on the other hand, the Colour predicate admitted arguments like 'grue2086' (green until 2086, blue thereafter), then the extended determination would have disconfirming instances, since the left-hand side would be satisfied by colours such as grue1972 but the universal on the right-hand side would be false. It is important to note that extended determinations are actually much weaker than determinations, and we basically expect them to be satisfied, more or less, for any 'reasonable' P and Q.

VI COMPARISON WITH GOODMAN'S THEORY OF PROJECTIBILITY

Goodman's theory of induction has been the most influential contribution to the field in recent times. We will therefore take the time to briefly outline his theory here, and then re-express it in our terms. Goodman defines the act of projection as the assumption of a general rule from some collection of instances; a rule is projectible if this can be done legitimately. The last part of his excellent book, "Fact, Fiction and Forecast" ([Goodman 83], first published 1955) is devoted to an attempt to elucidate the criteria for deciding projectibility. In this theory, rules derive projectibility from three sources:
1) the earned entrenchment of the predicates involved;
2) the inherited entrenchment which the predicates derive from their parent predicates;
3) the projectibility of their overhypotheses.
We define these terms below.

A. Entrenchment

Goodman's principal requirement for the projectibility of a rule ∀x[P(x) ⇒ Q(x)] is that the predicates P and Q be well-entrenched. A predicate P becomes well-entrenched as an antecedent as a result of frequent past projections of other rules with P as antecedent; similarly for Q as consequent. Thus 'green' is well-entrenched, whilst 'grue' is not.

B. Parent predicates

The notion of a parent predicate is used in defining both inherited entrenchment and overhypotheses. A predicate R is a parent of S iff
1) R is a predicate applying to classes of individuals;
2) among the classes to which R applies is the extension of S.
Thus 'uniform in colour', which applies to any group of individuals all of the same colour, is a parent of 'green'. Similarly, 'type of jewel' is a parent of 'emerald'.

C. Inherited entrenchment

A predicate inherits entrenchment from its parent predicates. Thus if 'uniform in colour' is well-entrenched, 'green' derives further entrenchment from it.

D. Overhypotheses

An overhypothesis of P ⇒ Q is a rule R ⇒ S such that R is a parent of P and S is a parent of Q. Thus an overhypothesis of "all emeralds are green" is "all types of jewels are uniform in colour". If the overhypothesis is projectible, this adds to the projectibility of its underhypothesis. Here, for example, both R and S are reasonably entrenched, and the overhypothesis is fairly well supported, e.g. by "all sapphires are blue", "all rubies are red". A given rule can have many overhypotheses, and each may in turn be supported by further overhypotheses at the next level.

E. Analysis

We will now attempt to analyze Goodman's theory in our terms. By formalizing each of his notions, we can fit them into the general framework of the confirmation of rules by higher-level regularities.
The entrenchment of a predicate P corresponds approximately to an observed second-order regularity of the form

  ∀Q ∀x1,...,xn [[P(x1) ∧ Q(x1) ∧ ... ∧ P(xn) ∧ Q(xn)] ⇒ ∀x[P(x) ⇒ Q(x)]]

which bears a close resemblance to the definition of extended determination given above. The difference is that because Goodman is working exclusively with unary predicates, he is forced to quantify over the predicate Q (in defining the entrenchment of P) in order to satisfy the external evidence requirement, thus requiring that P be a successfully projected predicate regardless of the consequent Q. The use of binary predicates allows us to quantify just over their second argument, giving the more fine-grained notion of successful projection of similar rules, rather than just rules with the same antecedent.

The notion of a parent predicate is a little tricky to formalize using unary predicates; it would look something like this:

  A is a parent of B iff ∃S[A(S) ∧ ∀x[x ∈ S ⇔ B(x)]]

A more natural way to write it is to use a binary predicate:

  A is a parent of B iff ∀x[B(x) ⇔ A(x,B)]

which amounts to reifying B. For example, we write

  ∀x[Emerald(x) ⇔ JewelType(x, Emerald)]

Viewed in this light, an overhypothesis is essentially a determination.
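The reification step is mechanical, and it makes the connection to determinations concrete. Below is a small Python sketch under our own assumptions (the data and names are invented for illustration): a family of unary predicates becomes one binary predicate, after which the overhypothesis "all jewel types are uniform in colour" is exactly the determination "JewelType determines Colour" and can be checked as in the earlier fragment.

    # Reify unary predicates Emerald(x), Sapphire(x), ... into the binary
    # predicate JewelType(x, kind), giving (x, w, z) triples as before.
    unary_facts = {"Emerald": ["e1", "e2"], "Sapphire": ["s1"]}
    colour = {"e1": "green", "e2": "green", "s1": "blue"}

    facts = [(x, kind, colour[x])
             for kind, xs in unary_facts.items() for x in xs]

    # 'JewelType determines Colour' holds iff no kind shows two colours --
    # the same test as determination_holds above.
    seen, uniform = {}, True
    for _x, kind, z in facts:
        if kind in seen and seen[kind] != z:
            uniform = False
        seen[kind] = z
    print(uniform)   # True: the data confirm the overhypothesis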
Clearly, there is a great deal of overlap between the two approaches. There are, however, some slight differences in emphasis, stemming mainly, one may conjecture, from the differing requirements of philosophy and artificial intelligence.

- Goodman is trying to systematize human practice; he does not attempt, for example, to justify the entrenchment criterion. When written formally, we see entrenchment (and the other notions) as codifications of higher-level regularities, which push back the inevitable point at which we must simply appeal to an unjustifiable, naked principle of enumerative induction. (As is pointed out in [Quine & Ullian 70], in the human case we may be able to push it back far enough that the evolutionary process itself may be 'credited' with performing such inductions.) The main commonality of the two theories, and the revolutionary aspect of Goodman's work, is that we no longer have to make such an appeal within the base-level induction itself.

- In Goodman's theory, predicates derive entrenchment from actual past projections, taking the form of (not necessarily spoken) linguistic utterances and corresponding to projections performed in the history of the culture rather than just the individual. This is essentially a psychological theory about exactly what evidence humans take into account in making new projections. In our approach, we try to identify all the evidence that should logically be taken into account, which may entail making further inductions 'on demand' as well as noticing past inductions.

- Because we use binary predicates and an exhaustive generator, we are able to produce a much richer hierarchy of 'overhypotheses'. Both theories, however, rely on the existence of a rich taxonomic vocabulary to facilitate expression of the desired regularities. This leads us naturally into a study of the relation between language and induction.

VII REPRESENTATION AND INDUCTION

An implicit hypothesis of Goodman's theory is that everyday terms will tend to be well-entrenched, since otherwise they would drop out of use. (He states (p. 97) that "entrenchment and familiarity are not the same ... a very familiar predicate may be rather poorly entrenched," but gives no examples.) The key idea behind analyzing this hypothesis is to understand the process by which terms become familiar parts of the language. If we can capture the conditions under which new words are acquired, then we can give a semantics to the presence of a word in our language, as well as to the word itself.* Thus the fact that green is a primitive attribute in our language, as well as being a physiological primitive of our observation apparatus, suggests that greenness is a commonly-occurring property in the world, and, more importantly, that greenness is a good predictor for various other properties, such as whether something is animal or vegetable, ripe or unripe. If we limit our acquisition and retention of terms to those which manifest such useful properties, then we are guaranteed that familiar terms will tend to be entrenched, and thus that rules using them will be projectible. The language-evolution aspect of this idea finds strong echoes in the theory of induction given in [Christensen 64]; the reflection of properties and regularities of the world in our neurological development is one of the principal themes of Roger Shepard's work, described in [Shepard 84, 86]. Although we have barely scratched the surface of the enormous topic of the interrelationship of language, representation and learning, it seems that the analysis of the semantics of the presence of words in a language, via the analysis of the processes of acquisition and retention, may be a profitable approach.

(* Rendell, in [Rendell 86], talks about the "semantics of the constraint imposed by the language" as part of an attempt to understand the bias inherent in version-space systems (the ungrounded premise to which we alluded earlier); this is another aspect of the same idea.)

VIII APPLICATIONS

We will first describe how we propose to build systems utilizing the ideas given above; we will then discuss possible applications to some induction projects, past and present. The scenario we envisage is that of an autonomous intelligent agent engaged in the continuous process of investigating its environment and attempting to fulfil its various goals. The system may need to assess the degree of confirmation of a proposed rule for one of three reasons: 1) it needs a rule for concluding some goal, and has none available; 2) it has some theoretical reasons for believing the rule plausible; 3) it has noticed that the rule is empirically plausible. To evaluate the proposed rule, the system performs the following tasks (a schematic sketch of this evaluation loop is given below):

- Assess the direct empirical support for the rule; if necessary, this may involve experimentation.
- Instantiate the known classes of higher-level regularity so that they apply to the rule in question; if the system already knows the degree of confirmation of the instantiated regularities, take that into account; if not, call the evaluation procedure recursively to compute their confirmation.
- Repeat the same process for any plausible competing hypotheses.

If the proposed rule is well supported by its higher-level regularities, and clearly better than any conflicting hypothesis, then it can be adopted (subject to revision). From our investigations to date in the space of regularities, it seems that we can capture most of the relevant information using just three basic classes: simple implicative rules, determinations and extended determinations. These seem to provide the justification for the basic types of argument in common use.
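The following Python fragment sketches that loop under our own simplifying assumptions: rules are antecedent/consequent tests over ground data, direct support is a simple frequency, higher-level regularities are supplied as instantiation functions, and scores are combined by taking a maximum. The scheme and all names are our illustration, not the paper's algorithm.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        antecedent: Callable[[dict], bool]
        consequent: Callable[[dict], bool]

    def direct_support(rule, data):
        # Fraction of instances matching the antecedent that also match
        # the consequent; 0.0 if the rule has no positive instances.
        pos = [d for d in data if rule.antecedent(d)]
        return sum(rule.consequent(d) for d in pos) / len(pos) if pos else 0.0

    def confirmation(rule, data, regularity_classes, depth=2):
        # Combine direct support with support inherited from instantiated
        # higher-level regularities (e.g. the covering determination),
        # themselves evaluated recursively up to a fixed depth.
        score = direct_support(rule, data)
        if depth > 0:
            for instantiate in regularity_classes:
                higher = instantiate(rule)
                if higher is not None:
                    score = max(score, confirmation(higher, data,
                                                    regularity_classes,
                                                    depth - 1))
        return score

    data = [{"kind": "emerald", "colour": "green"}] * 3
    r = Rule(lambda d: d["kind"] == "emerald",
             lambda d: d["colour"] == "green")
    print(direct_support(r, data))   # 1.0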
As mentioned above, as long as there is a small number of types, it is reasonable to build specialized 'regularity-noticing' demons to spread the computation load, rather than using 'lazy evaluation'. The higher-level rules we thus accumulate are also useful for suggesting new, plausible base-level rules. Our proposed architecture seems closest to that of AM and EURISKO ([Lenat 76], [Lenat 83a, 83b]), which actively performs experiments in order to confirm its conjectures inductively. EURISKO can be said to use higher-level regularities of a sort, since it has a heuristic which essentially leads it to consider conjectures similar to those which have already proven successful. Recalling the basic task of inferring facts from a mass of ground data, it is clear that when we add the ability to recognize a new class of higher-level regularities we actually expand the set of inferences the system can make. Most inductive systems in AI use only simple, associative regularities. We therefore hypothesize that with the degree of synergy afforded by the addition of multiple layers of regularities, EURISKO's performance can be considerably enhanced.

A system which uses theoretical (causal, explanatory) support as well as direct empirical support for its proposed rules is described in [Doyle 85]. In the light of the theory given above, we would argue that there are forms of further, indirect empirical support which are in no sense causal, yet offer more power than the simple 'associationist' approach. Other systems which conduct large-scale inductive investigations are the RX system ([Blum 82]) and UNIMEM/RESEARCHER ([Lebowitz 86]); the same arguments apply in these cases.

IX CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

We have shown that the requirement for a theory of induction is not that it render enumerative induction valid, but that it elucidate the way in which the plausibility of an induction is affected by the presence of further evidence, distinct from its direct positive and negative instances. The relationship between the direct and indirect evidence is a formal one, as required, and we have given a method for identifying all general classes of such evidence. We have constructed a system which applies the method to discover some novel and, we believe, important classes of regularity. The result of the synergistic interplay of induction and deduction is that we can now distinguish plausible from spurious inductions, and can maximize the usefulness of the observational knowledge a system possesses. The 'punchline' is simply this: the more classes of regularity a system is equipped to observe, the more inferences it can make from a given collection of data.

A major weakness which we would like to address is that the theory as described only allows first-order regularities. Although we glossed over the point in the exposition above, an extended determination need not use only an exact number n for all its inductions -- n really just means 'many', and this is how it will be implemented in the real system. The model of analogy by similarity in [Russell 86] suggests that there may be other useful non-first-order regularities, for example in the definitions of natural kinds ([Rosch 78]) and in the distributional variation of attribute values in a population ([Medin & Smith 84]). At present it is not clear how to cope with these problems.
Potentially fruitful areas for further investigation include:

- studying the interaction of language and induction via the semantic analysis of the process of representational evolution;
- empirical experiments to establish what are the useful, commonly-occurring classes of regularity in any given world;
- quantification of the contributions of higher-level regularities to a base-level rule, especially regularities with less than 100% confirmation;
- construction of robust systems, using the principles outlined above, that are able to acquire, organize and use effectively knowledge of a complex environment, even in the absence of any a priori knowledge of the environment; although such systems seem somewhat beyond our present abilities, it is hoped that we have begun to dismantle one of the theoretical barriers to their creation.

ACKNOWLEDGEMENTS

I would like to thank my advisors Doug Lenat and Mike Genesereth and colleagues Todd Davies (SRI), Devika Subramanian and David Wilkins (Stanford) and all the members of the Logic Group of the Stanford Knowledge Systems Laboratory for fruitful discussions, constructive criticism and moral support.

References

[Barker & Achinstein 60] Barker, S. F. & Peter Achinstein. "On the New Riddle of Induction". In Philosophical Review, Vol. 69, pp. 511-22; 1960.
[Blum 82] Blum, R. L. Discovery and representation of causal relationships from a large time-oriented clinical database: the RX project. Ph.D. thesis, Stanford University, 1982.
[Christensen 64] Christensen, Ronald. Foundations of Inductive Reasoning. Berkeley; 1964.
[Davies 85] Davies, Todd. Analogy. Informal Note No. IN-CSLI-85-4, Center for the Study of Language and Information, Stanford University; 1985.
[Davies & Russell 86] Davies, Todd & Stuart Russell. A Logical Approach to Reasoning by Analogy. Stanford CS Report (forthcoming) and Technical Note 385, AI Center, SRI International; June, 1986.
[Dietterich 86] Dietterich, Thomas G. Learning at the Knowledge Level. Technical Report No. 86-30-1, Computer Science Department, Oregon State University; 1986.
[Doyle 85] Doyle, Richard J. The Construction and Refinement of Justified Causal Models through Variable-level Explanation and Perception, and Experimenting. Ph.D. thesis proposal, Massachusetts Institute of Technology; 1985.
[Goodman 46] Goodman, Nelson. "A Query on Confirmation". In Journal of Philosophy, Vol. 43, pp. 383-5; 1946.
[Goodman 83] Goodman, Nelson. Fact, Fiction and Forecast, 4th edition. Cambridge, MA and London: Harvard University Press; 1983. (First published 1955.)
[Holland et al. 86] Holland, J., Holyoak, K., Nisbett, R. & Thagard, P. Induction: Processes of Inference, Learning and Discovery. In press.
[Hoppe] Hoppe, Arthur. "Our perfect economy". In San Francisco Chronicle. San Francisco; date unknown.
[Lebowitz 86] Lebowitz, Michael. "Concept Learning in a Rich Input Domain: Generalization-based Memory". In Ryszard S. Michalski, Jaime G. Carbonell & Tom M. Mitchell (Eds.), Machine Learning: an Artificial Intelligence Approach, Volume II. Los Altos, CA: Morgan Kaufmann; 1986.
[Lenat 76] Lenat, D. B. AM: An artificial intelligence approach to discovery in mathematics as heuristic search. Ph.D. thesis, Stanford University, 1976.
[Lenat 83a] Lenat, D. B. "Theory formation by heuristic search. The nature of heuristics II: Background and Examples". In Artificial Intelligence, Vol. 21, Nos. 1,2; 1983.
[Lenat 83b] Lenat, D. B. "EURISKO: A Program That Learns New Heuristics and Domain Concepts. The Nature of Heuristics III: Program Design and Results". In Artificial Intelligence, Vol. 21, Nos. 1,2; 1983.
[Lenat et al. 79] Lenat, D. B., Hayes-Roth, F. and Klahr, P. Cognitive Economy. RAND Technical Report No. N-1185-NSF. Santa Monica, CA: The RAND Corporation; 1979.
[Lenat et al. 86] Lenat, D., Prakash, M. and Shepherd, M. "CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks." In AI Magazine, Vol. 6, No. 4; Winter 1986.
[Medin & Smith 84] Medin, D. L. & Smith, E. E. "Concepts and Concept Formation." In Annual Review of Psychology, Vol. 35; 1984.
[Michalski 83] Michalski, R. S. "A Theory and Methodology of Inductive Learning." In Artificial Intelligence, Vol. 20, No. 2; Feb 1983.
[Mitchell 78] Mitchell, Tom M. Version Spaces: an Approach to Concept Learning. Ph.D. thesis, Stanford University, 1978.
[Mitchell et al. 86] Mitchell, T. M., Keller, R. M. & Kedar-Cabelli, S. T. "Explanation-based Generalization: a Unifying View". In Machine Learning Journal, Vol. 1, No. 1; 1986.
[Quine & Ullian 70] Quine, W. V. & Ullian, J. S. The Web of Belief. New York: Random House; 1970.
[Rendell 86] Rendell, Larry. "A General Framework for Induction and a Study of Selective Induction." In Machine Learning Journal, Vol. 1, No. 2; 1986.
[Rosch 78] Rosch, E. "Principles of categorization". In Cognition and Categorization, Rosch, E. and Lloyd, B. B. (Eds.). Hillsdale: Lawrence Erlbaum Associates; 1978.
[Russell 86] Russell, Stuart J. "A Quantitative Analysis of Analogy by Similarity". In Proceedings of the National Conference on Artificial Intelligence. Philadelphia: AAAI; 1986.
[Salmon 74] Salmon, Wesley. "Russell on Scientific Inference". In G. Nakhnikian (Ed.), Bertrand Russell's Philosophy. New York: Barnes and Noble; 1974.
[Sanford 70] Sanford, David H. "Disjunctive Predicates". In American Philosophical Quarterly, Vol. 7, pp. 162-70; 1970.
[Shepard 84] Shepard, Roger. "Ecological Constraints on Internal Representation: Resonant Kinematics of Perceiving, Imagining, Thinking and Dreaming". In Psychological Review, Vol. 91, No. 4; October, 1984.
[Shepard 86] Shepard, Roger. Mind and World. Forthcoming.
[Swinburne 73] Swinburne, Richard. An Introduction to Confirmation Theory. London: Methuen; 1973.
[Utgoff 84] Utgoff, P. E. Adjusting Bias in Concept Learning. Ph.D. thesis, Rutgers University, 1984.
DESIGN AND EXPERIMENTATION OF AN EXPERT SYSTEM FOR PROGRAMMING IN-THE-LARGE

Giovanni Guida, Marco Guida, Sergio Gusmeroli, Marco Somalvico
Milan Polytechnic Artificial Intelligence Project, Politecnico di Milano, Milano, Italy

1. INTRODUCTION

The results of artificial intelligence research are often important to areas other than computer science itself. One area which presents a wide variety of potential applications of artificial intelligence techniques is software production. A well-known role of artificial intelligence in software technology has been in the area of program synthesis; several experimental systems based on different methodological approaches have been developed in the past (Barstow (1979); Bartels et al. (1981); Green (1977); Green and Barstow (1978); Green et al. (1979); Manna and Waldinger (1979); Smith (1981)). At the Milan Polytechnic Artificial Intelligence Project, the BIS system (Caio et al. (1982)), based on a problem-reduction approach to problem solving, has been developed. In our opinion, the recent evolution of knowledge-based systems is showing how the role of artificial intelligence can be further extended to deal with the conceptual analysis of complex problems and applications. While the complexity of the problems solvable by a program synthesizer is of limited size, we may expect that a knowledge-based system can assist the designer of a complex software system in devising its modular architecture. Within software technology this activity is called programming in-the-large, as opposed to programming in-the-small, i.e. the classical programming activity of designing the data structures and algorithms needed for representing and solving a given problem.

The purpose of this paper is to illustrate the results obtained in a research project devoted to the design of an expert system assisting the programmer in-the-large in his activity of problem analysis and software design: ESAP (Expert System for Automatic Programming) (Guida et al. (1984); Guida et al. (1985)). The environment for which ESAP has been designed refers to a new arrangement of the software life cycle, in which several tools for automating software production are available. We call this environment the Software Factory of the Future (SFF), as illustrated in Figure 1.

[Figure 1. The Software Factory of the Future: the initial problem goes from the software designer to the Specification Expert; its formal high-level specification feeds the Programming In-the-Large Expert; the resulting modular architecture feeds the Programming In-the-Small Expert, which draws on a program modules library and a program synthesizer to produce the target program.]

ESAP receives in input the description of a large software system, supplied by the user by means of a high-level formal representation language, and interacts with the programmer in refining the specification, progressively decomposing the problem into modules, and defining the appropriate interfaces among them.
This process halts when all the modules are simple enough to be easily manufactured by a set of lower-level actors for programming in-the-small, namely automatic program synthesizers and program module libraries. The output of ESAP consists of the overall modular architecture of the desired software system, from which the final program can be implemented. The goal of the project is, thus, to interactively support the activities carried out by a programmer during the design of a large software system. On the contrary, both the management of a software project and the programming in-the-small activity are not considered in this research. ESAP has been implemented in Franz Lisp, on a DEC VAX11/780. In this paper we illustrate the experimental activity carried out in designing the various components of ESAP and experimenting with them. We first discuss the architecture of ESAP (section 2); we then describe the knowledge representation language (section 3) and the inference engine (section 4); finally, the experimental results are outlined (section 5).

2. ESAP ARCHITECTURE

In the past, several research directions have been pursued in the area of automatic programming (Barstow (1979); Green et al. (1979); Manna and Waldinger (1979); Smith (1981)). In particular, the approach of program transformation, which is based on incremental transformations of a high-level formal specification in order to achieve an automatically compilable representation (Manna and Waldinger (1979); Barstow (1984); Smith et al. (1985); Balzer (1985); Fickas (1985)), has proved very interesting. Furthermore, artificial intelligence techniques for the construction of knowledge-based systems have proved very powerful in constructing automatic programming systems (Barstow (1979); Green et al. (1979); Manna and Waldinger (1979)). ESAP represents a synthesis of several approaches to program design and construction, developed in the areas both of automatic programming and of software engineering (Parnas (1972); Stevens et al. (1974); Parnas (1979); Booth (1983); Balzer (1985)), in the light of a new concept of the software life cycle, where several solutions for automating software production are available. ESAP can support three major activities involved in software production:

- The specification of the problem to be solved and of the subproblems obtained during the decomposition process.
- The programming in-the-large, i.e. the decomposition of a problem into a set of cooperating subproblems, in order to obtain the corresponding modular architecture.
- The programming in-the-small, i.e. the solution of the simple problems constituting the leaves of the modular architecture.

ESAP is conceived as a set of expert systems, each one "expert" in one of the above domains. In particular, ESAP includes:

- The Specification Expert, which enables the user to formally describe the initial problem he wants to solve, to modify the problem description, and to incrementally complete it with new details. The activity of the Specification Expert divides into three phases: key-word (verbs and nouns) identification; description of the problem in terms of the classes of activities involved in it; and problem reformulation by means of a formal representation language, in terms of input and output data, of the operations on them, and of the relationships among them.
This approach to problem specification, supported by a knowledge base concerning the specification language (the Representation Language knowledge base), enables the user to gradually frame his problem in the ESAP environment, interactively supported during each step of the representation process. The output of the Specification Expert is a largely formal problem description, in which, however, the user is allowed to leave some informally expressed information. At any stage of the decomposition process, the user can modify a representation, either substituting an informal sentence with a formal one, or adding previously omitted details. The last task of the Specification Expert consists in analyzing (syntactically and semantically) the specification and translating it into an internal representation that is easy for the inference engine of ESAP to deal with.

- The Programming In-the-Large Expert, which analyzes a problem, finds its possible decompositions into cooperating subproblems, shows them to the user, and, with his help, chooses the most promising one. Furthermore, a knowledge base concerning the specification language and the application domain allows ESAP to derive the specifications of the subproblems, starting from the description of the initial problem and from the chosen decomposition. It relies on two different kinds of knowledge: software engineering knowledge, concerning principles and methodologies to be used in software design; and domain knowledge, concerning criteria characteristic of the application domain to be used in guiding the decomposition of problems into subproblems. Both these knowledge bases are independent of the specification language of ESAP and of the target programming language.

- The Lower-Level Actors Expert, which analyzes a problem to find out whether and how it is solvable using the two fundamental resources ESAP has at its disposal: a set of program synthesizers, varying in methodological approach and in application domain, capable of generating algorithms and constructing programs solving problems of limited complexity; and a set of program libraries, each one containing strongly parameterized programs implementing elementary tasks in the chosen application domain. Each program has been designed and implemented with the aim of making it reusable for different but similar tasks, using it directly or after performing some changes on it, according to the user's needs. Program libraries are, thus, a collection of reusable software components. Each module is described both in the ESAP representation language and by the corresponding program, but only the first description is used by the Lower-Level Actors Expert.

The synthesis of the final program implementing the modular architecture, and thus solving the user's problem, is not ESAP's responsibility; it is performed by an external subsystem, called the implementation expert.

[Figure 2. The architecture of ESAP: an inference engine connects the Specification Expert (backed by the Representation Language knowledge base), the Programming In-the-Large Expert (backed by the Decomposition knowledge base), and the Lower-Level Actors Expert (backed by the Lower-Level Actors knowledge base), which interfaces to the synthesizers and the program library.]

Fig. 2 The architecture of ESAP.
The architecture of ESAP (see Figure 2) represents a prototype of an integrated environment, a first step toward the software factory of the future. In particular, ESAP makes extensive use of the crucial concept of reuse of resources, which we have interpreted in various ways:

- Reuse of general design methodologies, that is, of domain- and language-independent principles (Stevens et al. (1974); Parnas (1972); Parnas (1979); Booth (1983)).
- Reuse of general domain concepts, independent of both the representation and the target programming languages (Barstow (1984); Barstow (1985); Kant (1985); Adelson and Soloway (1985)).
- Reuse of software components, i.e. of parameterized, ad hoc designed programs, developed in order to solve different, though similar, problems (Sommerville (1982); Wegner (1984); Ramamoorthy et al. (1984)).

3. THE KNOWLEDGE REPRESENTATION LANGUAGE

We have identified two representation languages in ESAP: one for the interaction with the user, allowing simplicity of use and supporting abstractions in the definition of the problem he wants to solve; the other for the inferential activity of the system, allowing an efficient representation of facts and rules. A conversion algorithm translates the representation of a problem from the former language to the latter.

3.1 User representation language

The fundamental characteristic of the ESAP user representation language is its ability to support abstractions in the definition of a problem. This goal is achieved by allowing the user to neglect all the details of the representation that he considers unimportant, and to define them only when it is necessary to complete the description. Furthermore, the user is allowed to employ the terminology and concepts of the task domain, since the system has knowledge about them and about how to deal with them. The interaction with the user through the ESAP user representation language is supported at two abstraction levels:

- Class definition level, which allows the definition of a problem in terms of the types (classes) of the functionalities involved in it. This allows a graceful approach to the problem, which is represented in a highly abstracted way.

- Module definition level, which allows an incremental representation of the problem through a set of primitive expressions. These make reference to an abstract data type, called "archiv", whose instances represent the fundamental data of all the problems in a bank environment. There are three basic instances of the "archiv" type: a current_account archiv, containing information about the present situation of the accounts; a transactions archiv, recording all the transactions on the accounts; and a registry archiv, containing the private data of the clients. Some primitives are inspired by relational algebra and allow archivs to be handled by means of operations of projecting, retrieving, selecting fields, joining, and so on. Other primitives define the usual set operations of union, intersection, and difference of archivs. Further primitives define updating operations, consisting in inserting or deleting tuples, or modifying fields, in an archiv. Furthermore, we have introduced the "compute" and "compact" primitives: the former allows typical computations in the management of current accounts to be represented using domain concepts; the latter allows two types of manipulation of an archiv, namely adding the values of an assigned field over the tuples that share the value of another assigned field, and adding the values of an assigned field over the whole archiv. The primitive "ord" allows the tuples of archivs to be sorted on the basis of the values assumed by their fields or by expressions containing one or more references to fields. Finally, we have introduced the primitive "reduce", which allows occurrences of elements to be selected on the basis of ordinal constraints.
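The primitives just listed are easy to mirror in ordinary code. The following Python fragment is our own minimal sketch of an "archiv" as a list of field/value records with a few of the primitives (select, project, compact, ord); ESAP's actual specification language and its Franz Lisp implementation are, of course, different.

    def select(archiv, pred):
        # Keep the tuples satisfying pred (a where condition).
        return [t for t in archiv if pred(t)]

    def project(archiv, fields):
        # Keep only the named fields of each tuple.
        return [{f: t[f] for f in fields} for t in archiv]

    def compact(archiv, key, field):
        # Add the values of `field` over tuples sharing the same `key`.
        totals = {}
        for t in archiv:
            totals[t[key]] = totals.get(t[key], 0) + t[field]
        return [{key: k, field: v} for k, v in totals.items()]

    def ord_by(archiv, field):
        # Sort the tuples on the values of one field.
        return sorted(archiv, key=lambda t: t[field])

    c_a_archiv = [{"c_a_number": 1, "balance": 100, "money": "dollar"},
                  {"c_a_number": 2, "balance": 250, "money": "lira"}]
    dollars = select(c_a_archiv, lambda t: t["money"] == "dollar")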
As an example, at the class level the description

  class_1 COMPUTE interest IN current_account
  class_2 UPDATE archiv IN current_account

represents the problem consisting in computing the interest of all the current accounts of a bank and in updating the corresponding archivs with the newly computed balance values. The user can disregard all the details not relevant to a first description of his problem, such as the value of the interest rate or the time period over which to compute the interest. The example above may be detailed at the module level as follows:

  MOD esmp (c_a_archiv trans_archiv rate date_1 date_2 --> new_c_a_archiv)
  LINK
  BODY
    INPUT  c_a_archiv : current_account_archiv
           trans_archiv : transactions_archiv
           rate : number
           date_1 : date
           date_2 : date
    OUTPUT new_c_a_archiv : current_account_archiv
    STOP
    COMPUTE interest OF current_account
      FROM date_1 TO date_2
      WITH ANNUAL RATE rate %
      WHERE (USING c_a_archiv AND trans_archiv)
            IN c_a_archiv (money = dollar)
            IN trans_archiv (money = dollar)
    END
    MODIFY FIELD (balance) INTO c_a_archiv
      WITH NEW VALUE ((c_a_archiv ^ balance + interest))
      WHERE (OBTAINING new_c_a_archiv)
    END
  MOD END

In this example we have already introduced all the particulars of the problem, but ESAP is able to start its activity even with incomplete specifications, allowing the user to correct and modify the representation at each step of the decomposition process. For example, we can omit the where conditions (i.e. the conditions appearing after the key-word "IN", which define the tuples to be handled by the primitive), and the system is nevertheless able to suggest a set of possible decompositions. The flexibility of this language is increased by means of informal user sentences, which can be included in the specification of a problem to satisfy particular user needs. These sentences are handled interactively by the system in a second step, in order to transform them into formal expressions containing known concepts and to acquire new knowledge.

3.2 System representation language

A knowledge base may be viewed as made up of facts and actions (Hayes-Roth (1983); Laurent (1984)). In the ESAP system representation language, a fact is a pattern with one of the following structures:

  FACT --> object relation object
       --> object attribute value

where "object" may be any element of the description of the activity of a module. An object may have one or more attributes with a value (e.g., we represent the fact that a variable A is of type T as (A type T)). Furthermore, an object may be bound to another object by a relation (e.g., we represent the fact that the variable A is input for the module M as (M input A)). The left-hand side of a rule is made up of conjunctions, disjunctions, and negations of patterns; the right-hand side is made up of new patterns to be added to the knowledge base when the rule is applied. Furthermore, two special patterns, namely ASK and ASKY/N, may be used in the left-hand side of a rule; they point directly to functions that evaluate to a boolean value (ASKY/N) or to a new binding for the variables (ASK).
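To make the fact-and-rule machinery concrete, here is a small Python sketch, under our own assumptions, of facts as triples and of the forward application of one production rule: patterns may contain '?'-variables, and a rule fires once for each consistent set of bindings. The names are ours, not ESAP's internals.

    # Facts as triples, and one forward rule application: if the left-hand
    # side patterns all match with consistent bindings, the instantiated
    # right-hand side facts are produced. '?'-prefixed terms are variables.
    facts = {("M", "input", "A"), ("A", "type", "T")}

    def match(pattern, fact, env):
        # Unify one (s, r, o) pattern against one fact under bindings env.
        env = dict(env)
        for p, f in zip(pattern, fact):
            if p.startswith("?"):
                if env.get(p, f) != f:
                    return None
                env[p] = f
            elif p != f:
                return None
        return env

    def apply_rule(lhs, rhs, facts):
        # Return the new facts produced by every consistent LHS match.
        envs = [{}]
        for pattern in lhs:
            envs = [e2 for e in envs for f in facts
                    if (e2 := match(pattern, f, e)) is not None]
        return {tuple(env.get(t, t) for t in add)
                for env in envs for add in rhs}

    # 'If ?m has input ?v, then ?v is an interface variable of ?m.'
    new = apply_rule([("?m", "input", "?v")],
                     [("?v", "interface_of", "?m")], facts)
    print(new)   # {('A', 'interface_of', 'M')}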
4. THE INFERENCE ENGINE

As we have pointed out in section 2, ESAP is conceived as a set of knowledge-based subsystems. The control cycle on which the system is based determines the correct order in which each expert has to be activated. We have adopted a particular interpretation of an expert system in terms of the state-space model of problem solving (Laurent (1984)). In this view, a given knowledge base represents a state, and an action represents an operator that allows a transition from one state to another. In particular, the internal representation of the initial user problem represents the initial state. The control cycle of an expert system is usually based on four steps: selecting the next state to be expanded; finding all the transitions applicable to the chosen state; selecting the next transition to apply; and effectively applying the chosen transition. Conflict resolution may be implemented in two sub-steps, consisting in the choice of the object on which to apply the next transition and in the choice of the transition to be executed; the two steps can appear in either order in the control cycle of the inference engine. In particular, we have chosen an S-O-A (State-Object-Action) strategy, consisting in selecting first the next state to be expanded, then the object (i.e. the module) on which the expansion will be based, and finally the action (i.e. the decomposition operator) to be applied to it. A state is a set of modules that are leaves of the decomposition tree under development. At each control cycle, the system checks whether it is necessary to change the state. This corresponds to abandoning the current decomposition and restoring a previous state, in which it is possible to apply a new decomposition operator to derive a different modularization of the same problem (state selection). The next step consists in activating the programming in-the-small expert to check the terminality of the leaves of the modular architecture and to choose the next one to decompose (object selection). Then the programming in-the-large expert searches for all the elementary applicable decomposition operators, reasons about them, and selects one of their consistent and complete combinations, possibly the most promising one (action selection). Finally, the chosen operator is applied, and the representation expert is activated to deduce the representations of the subproblems from that of the initial problem and to complete them, interacting with the user. This activity corresponds to the construction of the new current state for the next control cycle of the inference engine of ESAP. Each cycle corresponds to one design step in the construction of the modular architecture of the desired software system (a schematic sketch of this cycle is given below). The inferential process is based on production rules and on metarules, which define the order in which a set of goals is to be achieved (i.e. implement strategies) and allow the system to focus quickly on relevant subsets of rules, referring to them by name (Aiello (1984)).
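The following Python fragment sketches the S-O-A loop just described, under our own assumptions: states, the terminality test, operator generation, and the selection policies are passed in as functions, and backtracking restores an earlier state when no operator applies. The names illustrate the loop's structure, not ESAP's code.

    def soa_cycle(initial_state, select_state, select_object, select_op,
                  terminal, applicable_ops, apply_op):
        # Each iteration is one design step: pick a state (a set of leaf
        # modules), pick a non-terminal leaf, pick and apply an operator.
        states = [initial_state]      # earlier states kept for backtracking
        while True:
            state = select_state(states)                  # state selection
            leaves = [m for m in state if not terminal(m)]
            if not leaves:
                return state  # every leaf is solvable in-the-small: done
            module = select_object(leaves)                # object selection
            ops = applicable_ops(module)
            if not ops:
                states.remove(state)  # dead end: restore a previous state
                continue
            op = select_op(ops)                           # action selection
            new_state = apply_op(state, module, op)       # the new state
            states.append(new_state)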
5. EXPERIMENTAL RESULTS

ESAP has been successfully implemented at the Milan Polytechnic Artificial Intelligence Project on a DEC VAX11/780, in Franz Lisp. The task domain we have chosen is that of current accounts management in a bank. This domain is sufficiently well known, and large enough, to allow realistic programming in-the-large activity. At present, the knowledge base of ESAP contains enough knowledge to deal with a set of domain concepts such as "interest", "interest rate", "current account", "transactions", and so on. Furthermore, domain-independent knowledge has been supplied to deal with a top-down structured design methodology for the development of the modular architecture of software systems; a knowledge area supporting object-oriented design is currently under development. Such programming in-the-large knowledge allows ESAP to find all the possible decomposition criteria that are applicable to a given problem. The final choice among the set of applicable operators is left to the user, but ESAP can give suggestions to direct the decomposition towards problems manageable by its lower-level actors for programming in-the-small.

Starting from the example outlined in section 3, we will sketch the decomposition process. Let us consider the following top-down design rules:

  RULE 1
  IF   a module $M has a subactivity $S1 and
       the module $M has a second subactivity $S2 and
       a variable $V is output for $S1 and
       the same variable $V is input for $S2
  THEN the module $M is sequentially bound by $S1 and $S2 through $V

  RULE 2
  IF   a module $M is sequentially bound by $S1 and $S2 through $V
  THEN the module $M produces a submodule derived from $S1 and
       the module $M produces a submodule derived from $S2 and
       the variable $V represents the interconnection between the two
         submodules

Each module representation is translated into an internal form, expressed as a set of facts with the structure described in section 3.2. These facts are matched against the patterns of the left-hand sides of the rules during the activity of the system.
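In the triple-and-pattern style sketched for section 3.2, RULE 1 might be encoded as below. The facts are our invented shorthand for the internal form of the sample problem, in which the COMPUTE step outputs the variable interest and the MODIFY step consumes it.

    # Internal-form facts for the sample problem (our own shorthand).
    facts = {("esmp", "subactivity", "compute_step"),
             ("esmp", "subactivity", "modify_step"),
             ("compute_step", "output", "interest"),
             ("modify_step", "input", "interest")}

    # RULE 1 as left-hand-side patterns and a right-hand-side template.
    rule1_lhs = [("?m", "subactivity", "?s1"),
                 ("?m", "subactivity", "?s2"),
                 ("?s1", "output", "?v"),
                 ("?s2", "input", "?v")]
    rule1_rhs = [("?m", "seq_bound_through", "?v")]

    # With the apply_rule fragment sketched earlier, this match adds
    # ('esmp', 'seq_bound_through', 'interest') to the knowledge base,
    # which is exactly the fact RULE 2 then needs in order to fire.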
The application of RULE 1 and RULE 2 to the set of facts derived from the representation of the sample problem adds to the knowledge base the basic information used by the representation expert to derive the descriptions of the submodules. Acting on its private knowledge area, and interacting with the user to ask for information whenever it cannot be deduced automatically (e.g., asking for the names of the new modules or of newly introduced variables), the representation expert produces the following descriptions:

  MOD comp_int (c_a_archiv trans_archiv rate date_1 date_2 --> interest_list)
  LINK EXPORT interest_list
  BODY
    INPUT  c_a_archiv : current_account_archiv
           trans_archiv : transactions_archiv
           rate : number
           date_1 : date
           date_2 : date
    OUTPUT interest_list : (arch c_a_num : c_a_number
                                 total_interest : number)
    STOP
    COMPUTE interest OF current_account
      FROM date_1 TO date_2
      WITH ANNUAL RATE rate %
      WHERE (USING c_a_archiv AND trans_archiv)
            IN c_a_archiv (money = dollar)
            IN trans_archiv (money = dollar)
            (OBTAINING interest_list)
    END
  MOD END

  MOD modify_archiv (c_a_archiv interest_list --> new_c_a_archiv)
  LINK IMPORT (interest_list AS interest_list FROM comp_int)
  BODY
    INPUT  c_a_archiv : current_account_archiv
           interest_list : (arch c_a_num : c_a_number
                                 total_interest : number)
    OUTPUT new_c_a_archiv : current_account_archiv
    STOP
    MODIFY FIELD (balance) INTO c_a_archiv
      WITH NEW VALUE ((c_a_archiv ^ balance + interest_list ^ total_interest))
      WHERE (JUNCTION c_a_archiv ^ c_a_num = interest_list ^ c_a_num)
            (OBTAINING new_c_a_archiv)
    END
  MOD END

As shown above, ESAP interprets the program transformation approach by breaking a complex problem into cooperating subproblems, in such a way as to automatically define the structure of the modular architecture to be constructed. This may be viewed as automatic documentation of the development process of the software system solving the initial problem. At this stage, the Lower-Level Actors Expert is able to establish automatically that the module "modify_archiv" matches a module of the library, and it thus advises the user that only the module "comp_int" remains to be decomposed. The module "comp_int" is now functionally bound, i.e. it defines a unique activity with a well-defined output. Nevertheless, the decomposition process can continue by applying domain-oriented rules to the module, to reduce it further to simpler subproblems. The following two rules are now applicable to the module "comp_int":

  RULE 3
  IF   a module $M has only one subactivity $S1 and
       the subactivity $S1 has where conditions on archiv $A
  THEN the module $M produces a submodule reducing the archiv $A on the
         basis of the where conditions and
       the module $M produces a submodule derived from $S1 without where
         conditions

  RULE 4
  IF   a module $M has only one subactivity $S1 and
       the subactivity $S1 is of kind compute and
       the object of subactivity $S1 is interest and
       the environment of subactivity $S1 is current_account
  THEN the module $M produces a submodule calculating the interest of the
         balances at the closing date and
       the module $M produces a submodule calculating the interest of the
         transactions and
       the module $M produces a submodule to combine the results of the
         two submodules above

Choosing RULE 3, we obtain the three following interconnected submodules:

  MOD red_act_archiv (c_a_archiv --> red_c_a_archiv)
  LINK EXPORT red_c_a_archiv
  BODY
    INPUT  c_a_archiv : current_account_archiv
    OUTPUT red_c_a_archiv : current_account_archiv
    STOP
    SELECT ((c_a_archiv ^ money = dollar) OBT red_c_a_archiv)
    END
  MOD END

  MOD red_tr_archiv (trans_archiv --> red_trans_archiv)
  LINK EXPORT red_trans_archiv
  BODY
    INPUT  trans_archiv : transactions_archiv
    OUTPUT red_trans_archiv : transactions_archiv
    STOP
    SELECT ((trans_archiv ^ money = dollar) OBT red_trans_archiv)
    END
  MOD END

  MOD comp_int_son (red_c_a_archiv red_trans_archiv rate date_1 date_2
                    --> interest_list)
  LINK IMPORT (red_c_a_archiv AS red_c_a_archiv FROM red_act_archiv)
              (red_trans_archiv AS red_trans_archiv FROM red_tr_archiv)
       EXPORT interest_list
  BODY
    INPUT  red_c_a_archiv : current_account_archiv
           red_trans_archiv : transactions_archiv
           rate : number
           date_1 : date
           date_2 : date
    OUTPUT interest_list : (arch c_a_num : c_a_number
                                 total_interest : number)
    STOP
    COMPUTE interest OF current_account
      FROM date_1 TO date_2
      WITH ANNUAL RATE rate %
      WHERE (USING c_a_archiv AND trans_archiv)
            (OBTAINING interest_list)
    END
  MOD END

The first two modules implement the reduction by where conditions in the specification, and they are solvable by the lower-level actors.
The third one represents the father module, without the where conditions, and it needs further steps of decomposition. Choosing RULE 4 instead, we obtain three other interconnected submodules:
Particular attention has been devoted to the integration in ESAP environment of both program libraries of reusable software components and program synthesizers. This has lead us to the conception and the develooment of the Lower-level Actors expert. At the present, only the BIS (Bidirectional Synthesizer) system is at ESAP disposal. Given a simple problem, ESAP checks for its synthesizability by BIS, analyzing the description in the ESAP user representation language. The analysis is based on a knowledge area that takes into account the properties of ESAP representation language and of the BIS one, first of all checking for the possibility of translation from the former to the latter. Determining quantitatively the complexity level of a problem and comparing it with the solving power of a program synthesizer is a very difficult and challenging task. On the other hand, it seems to us too shallow to consider a program synthesizer ideal, i.e. actually able to derive a program solving any problem representable in terms of its specification Language. For these reasons, we adopted an intermediate solution, consisting in qualitative reasoning about the synthesizability of a problem by BIS: on the basis of our experimentations with the BIS system, we found a set of conditions to be satisfied by a problem specification in order to make the corresponding algorithm efficiently implementable. Thus, the system is able to check for the satisfaction of such a set of conditions, in order to establish the possibility to solve by BIS a given problem. We consider this aspect of ESAP of conceptual relevance, since it offers the opportunity to deal within a unique system with different types of knowledge representation Languages. Intermediate know ledge areas give the possibility to reason on the various representation languages, providing an adeguate interface among them and supporting an eventual translation from one to another. The other lower level actor for programming in-the-small at ESAP disposal is the program library, i.e. a collection of reusable software components. The representation of a given user problem of Lou complexity and a module in the library may be isomorphic, if they have the same description in the user representation Language. They are alike if they have syntactically and semantic different representation, but if the program can be modified to meet the user needs. If a problem is isomorphic to or like a given library module, it can be automatically manufactered by ESAP. If it is not the case and if no decomposition operators are applicable, the code writing is leaved at user's charge. The match against the program library may occur after a trasformation of the user specification into a semantically equivalent one, more convenient for the matching process. The.management of the program library is based on a know ledge area, taking into account meaning preserving transformations within ESAP user representation language. Furthermore, this knowledge area contains rules that define elementary modifications that it is possible to implement on an already existing program. The programming in-the-small expert allows the user to reason on programs at the abstract level of the representation language rather than at the concrete level of a particular programming language. 6. CONCLUSION The goal of the ESAP project, carried out in the last two years, has been the development of a prototype system for the software factory of the future. 
The realization of our system has pointed out some interesting research areas related to ESAP project. In particular, it is our opinion that attention has to be devoted to the development of a subsystem for the acquisition of new domain knowledge and for its integration with the existing one. This will allow ESAP to learn new task-domain concepts, their meaning, their properties, and how to dea 1 with problems including such concepts (i.e., how to decompose them). APPLICATIONS / 1163 REFERENCES 1. Adelson, B. and Solouay, E. (1985). The role of domain experience in softuare design. IEEE Trans. on Software Engineering SE-11 (11),1351-136E 2. Aiello, L. (1984). The uses of meta-knowledge in AI systems. In T-O'Shea CEd.)ECAI '84 Advances in Artificial Inte Lmnce .?lsevi er, Amsterdam, NL. 3. Balzer, R. (1985). A 15 year perspective on Automatic Programming. IEEE Trans. on Software Engineering SE-11(11),1257-126r 4. Barstou, D. (1979). Knowledge-based program -- construction. North-Holland, Amsterdam, NL. 5. Barstou, D. (1984). A perspective on Automatic Programming. The AI Magazine -- S(l), S-27. 6. Barstou, D. (1985). Domain-specific Automatic Programming. IEEE Trans. on -- Software Engineering SE-11 (111, 1321-133K 7. Bartels, U., Olthoff, W. and Raulefs, P. (1981). APE: An expert system for Automatic Programming from abstract specifications of data types and algorithms. Proc. 7th IJCAI, Vancouver, BC, Canada, 1037m3.-- 8. Booth, G. (1983). Software Engineering with ADA. Addison-Wesley publ., Amsterdam, NL. 9. 10. Caio, F., Guida, G. and Somalvico, M. (1982). Problem Solving as a basis for program synthesis: design and experimentation of the BIS system. Int -- Journal on Man-Machine Studies 17, 173-188. ----- Fickas, S.F. (1985). Automating the transformational development of Software. IEEE Trans on Softuare Engineering SE-11 <-11),7X%=1277. 11. Green, C. (1977). A summary of the PSI program synthesis system. Proc. 5th IJCAI, -- Cambridge, MA, 380-381. 12. Green, C. and Barstow, D. (1978). On proaram synthesis knou ledae. Artificial intilligence 10(3), 241-279.- 13. Green, C. et al. (1979). Results in 14. know ledge based program synthesis.Proc. 6th - -- IJCAI, Tokyo. Japan. 342-344. -- -T . v Guida, G., Guida, M., Gusmeroli, S., and Somalvico, M. (1984). ESAP: an Expert System for Automatic Programming. In T. O'Shea (Ed.), ECAI ‘84: Advanced in Artificial InZZiigeZe, Elseviec Amsterdam, NL, 585-588. 15. Guida, M., Gusmeroli, S. and Somalvico, M. (1985). ESAP: an intelligent assistant for the desian of softuare systems. Proc. Cognitiva-'85, Paris, F, 201-209. - - 16. Hayes-Roth, F., Waterman, D.A., and Lenat, D.B. CEds.)C1983>. Building Expert Systems. -- Addison-Wesley, Reding, MA. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. Kant, E. (1985). Understanding and automating algorithm design. Proc 9th -- IJCAI. Los Angeles, CA, 1243-1253. Laurent, J.P. (1984). Control structures in expert systems. Technology and Science of Informatics 3(3>, 147-162. - - - Manna, 2. and Waldinger, R. (1979). Synthesis: Dreams ==) Programs. IEEE Trans. on Software Engineering SE-5(4),294-m - Parnas, D.L. (1972). On the criteria to be used in decomposing systems into modules.Comm. ACM,‘1053-1058. -- Parnas, D.L. (1979). Designing software for ease of extensions and contraction. IEEE Trans. on Software Engineering SE-5(2), 128-138.- Ramamoorthy, C.V. et al. (19841. Software Engineering: problems and perspectives. IEEE Trans. on Software Engineering, 191-2097 - Smith, D.R. (1981). 
23. Smith, D.R. (1981). A design for an automatic programming system. Proc. 7th IJCAI, Vancouver, BC, Canada, 1027-1029.
24. Smith, D.R., Kotik, G.B. and Westfold, S.J. (1985). Research on Knowledge-Based Software Environments at Kestrel Institute. IEEE Trans. on Software Engineering SE-11(11), 1278-1295.
25. Sommerville, I. (1982). Software Engineering. Addison-Wesley, Reading, MA.
26. Stevens, W.P., Myers, G.J. and Constantine, L.L. (1974). Structured design. IBM Systems Journal 13(2), 115-137.
27. Wegner, P. (1984). Capital intensive software technology. IEEE Trans. on Software Engineering, 7-45.
INDUCTIVE INFERENCE BY REFINEMENT

P. D. Laird*
Department of Computer Science, Yale University, New Haven, CT 06520
(* Work funded in part by the National Science Foundation, under Grants MCS8002447 and DCR8404226.)

Abstract

A model is presented for the class of inductive inference problems that are solved by refinement algorithms - that is, algorithms that modify a hypothesis by making it more general or more specific in response to examples. The separate effects of the syntax (rule space) and semantics, and the relevant orderings on these, are precisely specified. Relations called refinement operators are defined, one for generalization and one for specialization. General and particular properties of these relations are considered, and algorithm schemas for top-down and bottom-up inference are given. Finally, difficulties common to refinement algorithms are reviewed.

Introduction

The topic of this paper is the familiar problem of inductive learning: determining a rule from examples. Humans exhibit a striking ability to solve this problem in a variety of situations - to the extent that it is difficult to believe that a separate algorithm is at work in each case. Hence, in addition to the problems of implementing real systems that learn by example, there is the challenge of identifying fundamental principles that underlie this sort of learning.

As an illustration of the basic ideas, consider the following concept-learning problem (adapted from [Mitchell, 1982]). Objects have three attributes: size (large, small), color (red, yellow, blue), and shape (triangle, circle). A concept consists of an ordered pair of objects, possibly with some attributes left undetermined. For example, C = {(large ? circle), (small ? ?)} represents the concept "a large circle of any color, and any small object". There is a most-general concept ({(? ? ?), (? ? ?)}) and a large number (144) of most-specific concepts (both objects fully specified). Examples, or training instances, can be positive or negative: {(large blue circle), (small blue triangle)} is a positive example of the concept C above, whereas {(large red triangle), (large blue circle)} is a negative example. If the current hypothesis excludes a positive example, the inference procedure must generalize it; and if it includes a negative example, the procedure must make it more specific. Every domain has rules for making a hypothesis more or less general; here, a concept can be generalized by changing an attribute from a specific value to '?', or specialized by the inverse operation.

The essential features of this simple illustration apply to many inductive learning problems in a variety of domains: formal languages and automata (e.g., [Angluin, 1982], [Crespi-Reghizzi, 1972]); programming languages (e.g., [Hardy, 1975], [Shapiro, 1982], [Summers, 1977]); functions and sequences ([Hunt et al., 1966], [Langley, 1980]); propositional and predicate logic ([Michalski, 1975], [Shapiro, 1981], [Valiant, 1984]); and a variety of representations specific to a particular domain (e.g., [Feigenbaum, 1963], [Winston, 1975]).

From the experiences of many researchers (see, for example, [Angluin and Smith, 1983], [Banerji, 1985], [Cohen, 1982], [Michalski, 1983] for summaries), a number of general guidelines have been suggested:

- Define a space of examples and a space of rules rich enough to explain any set of examples.
Given some examples and a set of possible hypotheses, generalize the hypotheses that fail to explain positive examples, and specialize hypotheses that imply negative examples.

If possible, represent examples in the same language used to express the rules.

Our goal is to present a formal model to unify many of the ideas common to these domains. The value of such a formalism is that the essential features of the inductive component of a projected application can be identified quickly, and basic algorithms constructed, without the need to rediscover these ideas from first principles. Another advantage is that the abstract properties and limitations common to algorithms based on this model can be identified and studied without reference to the details of a particular application. This report is necessarily brief, with only the outlines of the principal concepts given, plus examples to illustrate their application. More details, examples, and proofs are available in the full report ([Laird, 1985]).

Inductive Inference Problems

Definition 1 An inductive inference problem has six components:

• A partially ordered set (D, ≥), called the semantic domain.
• A set E of expressions over a finitely presented algebra, called the syntactic domain.
• A mapping h: E → D such that every d in D is h(e) for some expression e in E.
• A designated element d0 of D, called the target object.
• An oracle, EX, for "examples" of d0, in the form of signed expressions in E. If EX() returns +e, then d0 ≥ h(e), and if EX() returns -e, then d0 ≱ h(e).
• An oracle (≥?) for the partial order, such that (e1 ≥ e2?) returns 1 if h(e1) ≥ h(e2), and 0 otherwise.

The examples below will help this definition seem less abstract. It is more general than most definitions of inductive inference, in that target objects need not be subsets of some set; instead, they are simply elements of a partial ordering. Rules are expressions over some algebra which is expressive enough to name every possible target object. The mapping h is the association between rules and the objects they denote. Note that h may map more than one expression to the same semantic element; if h(e1) = h(e2), then e1 and e2 are called h-equivalent rules. There may, of course, be different syntactical representations of one semantic domain (grammars, automata, logical axioms, etc.); according to this model, the problem changes when a new syntax E is adopted.

Examples are elements of the same set E of expressions: i.e., every expression in E is potentially an example - positive in case its semantic representation is no greater than the target, and negative otherwise. In practice, the set of examples is often limited to a subset of the expressions (see below). The oracle EX represents the mechanism which produces examples of the target; in actuality, examples may come from a teacher, a random source, a query-answering source, or combinations of these. Note that EX depends on the target object d0.

The oracle (≥?) serves to abstract the aspect of the problem concerned with testing whether an expression implies an example. In practice, the complexity of this problem ranges from easy to unsolvable. By oracularizing it we are choosing to ignore the complexity of this problem while we study the abstract properties of inductive inference.

Example 1 Let X = {x1, . . . , xt} be a finite set, and D = 2^X, the set of subsets of X.
For example, X might denote a set of automobile attributes (sedan, convertible, front-wheel drive, etc.), while an element of D specifies those attributes possessed by a particular model. Let D be partially ordered by ⊇, the containment relation. There are many possible languages for representing D, such as the conventional one for set theory. A more algebraic language is a Boolean algebra over the elements x1, . . . , xt, with elements of D represented by monomials of degree t (minterms). Thus the empty set is represented by x1' . . . xt' (x' denotes the complement of x), and {x1} = h(x1 x2' x3' . . . xt'). In this case h is a bijection between minterms and elements of D. If m1 and m2 are minterms, then m1 is a positive example of m2 iff every uncomplemented variable in m1 occurs uncomplemented in m2.

Example 2 Let D be the class of partial recursive functions mapping integers to integers. If f1 and f2 are functions in D, define f1 ≥ f2 iff for every integer x such that f2(x) is defined, f1(x) = f2(x). A convenient language for representing the functions in D is the subset of LISP expressions mapping integers to integers. If P is such a program, then h(P) is the partial function computed by P. Consider the function f(x) = x^2. A positive example of f is the program (LAMBDA (X) (COND ((= X 2) 4))). Intuitively, this example states that f(2) = 4 without giving any other values of f. The problem of deciding for two arbitrary programs P1 and P2 whether P1 ≥ P2 is, of course, recursively unsolvable.

The general inductive inference problem allows any expression in E to be an example. More often we are limited to a subset of E for examples (e.g., in Example 2 above, we may be given only programs defined on a single integer rather than arbitrary programs). But what property guarantees that a subset of E is sufficient to identify a target uniquely?

Definition 2 Let S be a set of examples of d0. An expression e is said to agree with S if for every positive example e+ in S, h(e) ≥ h(e+), and for every negative example e- in S, h(e) ≱ h(e-).

Definition 3 A sufficient set of examples of d0 is a signed subset S of E with the property that all expressions that agree with S are h-equivalent and are mapped by h to d0.

Example 3 In Example 1 above, a sufficient set of examples for any target can be formed from only the t minterms with exactly one uncomplemented variable. In Example 2, a sufficient set of examples can be constructed from programs of the form: (LAMBDA (X) (COND ((= X i) j))) where i and j are integer constants.

Example 4 In many concept-learning problems, objects possess a subset of some group of t binary attributes, and a concept is a subset of the set of all possible distinct objects. A "ball", for example, might denote the concept consisting of all objects with the "spherical?" attribute, regardless of other attributes ("red?", "wooden?", etc.). As a formal expression of this domain, let {x1, . . . , xt} be Boolean variables, and D the set of sets of assignments of values (0 or 1) to all of the variables, partially ordered by ⊇. For syntax we may take the set of Boolean expressions; h maps an expression to the set of satisfying assignments. Expression e1 is an example of e2 iff the Boolean expression "e1 → e2" is a tautology (→ denotes implication), since then any assignment satisfying e1 must also satisfy e2. A sufficient set of examples for any expression can be formed from the set of minterms, since these represent a single assignment.
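The minterm domain of Examples 1 and 3 is small enough to run directly. The following Python sketch is ours, not the paper's; it assumes (our own encoding) that a minterm over x1, . . . , xt is represented by the set of its uncomplemented variables, so that h is the identity map and the (≥?) oracle reduces to set containment.

    # Sketch of the minterm domain of Example 1 (hypothetical encoding:
    # a minterm is the set of its uncomplemented variables, so h = identity).

    def geq(m1, m2):
        """The (>=?) oracle: h(m1) >= h(m2) iff every uncomplemented
        variable of m2 is also uncomplemented in m1."""
        return m2 <= m1  # set containment

    def is_positive_example(example, target):
        """+e holds for target d0 exactly when d0 >= h(e)."""
        return geq(target, example)

    t = 4
    # Example 3's sufficient set: the t minterms with exactly one
    # uncomplemented variable.
    sufficient = [{i} for i in range(1, t + 1)]
    target = {1, 3}
    print([is_positive_example(e, target) for e in sufficient])
    # -> [True, False, True, False].  The positive signs force 1 and 3
    # into any agreeing set, the negative signs exclude 2 and 4, so
    # only the target itself agrees with these signed examples.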
Finally, note that for every inductive inference problem there is a dual problem, differing only in that D is partially ordered by ≤ (rather than ≥). Then e1 is a positive example of e2 if e2 ≤ e1, and a negative example otherwise.

Refinements

In most applications, the mapping h: E → D is not just an unstructured assignment of expressions to objects. Usually there is an ordering ≽ of the expressions that is closely related to that on the underlying semantics. For example, referring again to the size-color-shape example in the introduction, we see that the syntactic operation of replacing an attribute ("red") by the don't-care token ("?") corresponds semantically to replacing the set of expressed objects by a larger set. The ordering ≽ may only be a quasi-ordering (reflexive and transitive but not antisymmetric). But we can still use it to advantage in computing generalizations and specializations of hypotheses, provided it has three properties: an order-homomorphism property with respect to h; a computational sub-relation called a refinement; and a completeness property. These we shall now define.

Definition 4 Let ≽ be an ordering of E. The mapping h: E → D is said to be an order-homomorphism if, for all e1 and e2 in E such that e1 ≽ e2, h(e1) ≥ h(e2).

Example 5 Referring back to Example 1, let m2 ≽ m1 if every uncomplemented variable in m1 also occurs uncomplemented in m2. Then h is an order-homomorphism.

In Example 2, it is not clear how to define P2 ≽ P1 on arbitrary LISP programs so as to form an order-homomorphism. In [Summers, 1977], this problem is solved by restricting the class of LISP programs to have a very specific form.

Example 6 Because of the expressiveness of predicate calculus, many systems use a first-order language L (or subset thereof) as the syntactic component of the inference problem. But what is the semantic component, and how is it ordered? Example 4 is the analogous case for propositional logic, where D consisted of sets of assignments, ordered by ⊇. With first-order logic, Herbrand models (i.e., sets of variable-free atomic formulas) take the place of assignments, and D consists of classes of (first-order definable) Herbrand models, ordered by ⊇. The sentence ∀x (red(x) ∨ ¬large(x)) designates the class of models in which everything that is large is also red.

A syntactic ordering is as follows: given sentences φ1 and φ2 in L, define φ2 ≽ φ1 iff ⊨ φ1 → φ2 (where → indicates implication). It is easy to see that h is an order-homomorphism: if φ1 implies φ2, then any model of φ1 is also a model of φ2, and hence the models of φ1 are a subset of those of φ2.

The importance of ≽ to inductive inference is as follows: Suppose a hypothesized rule e is found to be too general in the sense that there exists a negative example e- such that h(e) ≥ h(e-). Then (assuming h is an order-homomorphism) any new hypothesis e' such that e' ≽ e will also be too general, and hence need not be considered. Similarly, if e is too specific, then any hypothesis e' such that e ≽ e' can be eliminated from consideration.

In order to take advantage of the efficiency induced by the syntactic ordering ≽, we need the means to take an expression and obtain from it expressions that are more general or more specific. This leads to the notion of a refinement relation.

Definition 5 An upward refinement γ for ≽ is a recursively enumerable (r.e.)
binary relation on E such that (i) γ* (the reflexive-transitive closure of γ) is the relation ≽; and (ii) for all e1, e2 ∈ E, if (e1, e2) ∈ γ then h(e1) ≥ h(e2). The notation γ(e) denotes the set of expressions e1 such that (e1, e) ∈ γ.

There is a dual definition of a downward refinement ρ for an ordering ≼: ρ* = ≼, and if (e1, e2) ∈ ρ then h(e1) ≤ h(e2). Nearly everything true of upward refinements has a dual for downward refinements, but to save space we shall omit dual statements.

The r.e. condition on refinements means that they can be effectively computed. Thus if e is found to be too specific, an inference algorithm can compute the set γ(e), and γ of each of these expressions, etc., in order to find a more general hypothesis. We would like to know that, by continuing to refine e upward in this way, we will eventually obtain an expression for every object d more general than h(e). This motivates the completeness property of refinements.

Definition 6 An upward refinement γ is said to be complete for e ∈ E if h(γ*(e)) = {d | d ≥ h(e)}. If γ is complete for all e ∈ E then γ is said to be complete.

Example 7 In the language of Example 1, let γ(m) be the set of minterms m' obtained from m by uncomplementing exactly one of the complemented attributes. Thus γ(x1' x2' . . . xt') = {(x1 x2' . . . xt'), (x1' x2 x3' . . . xt'), . . . , (x1' . . . x't-1 xt)}, and γ(x1 x2 . . . xt) = ∅. It is easily seen that γ is a complete upward refinement for minterms. A complete downward refinement for minterms ρ(m) computes the set of minterms obtained from m by complementing one of the uncomplemented attributes.

Example 8 Let E be the set of first-order clauses (i.e., disjunctions of atomic literals or their negations) with only universal quantification, assuming some fixed language L. An upward refinement for E (with respect to the ordering ≽ of Example 6) is as follows ([Shapiro, 1981]): Let C be a clause. γ(C) is the set of clauses obtained from applying exactly one of the following operations:

• Unify two distinct variables x and y in C (replace all occurrences of one by the other).
• Substitute for every occurrence of a variable x a most-general term (i.e., function call or constant) with fresh variables. (For example, replace every x by f(x1) where f is a function symbol in C and x1 does not occur in C.)
• Disjoin a most-general literal with fresh variables (i.e., p(x1) or ¬p(x1), where p is a predicate symbol and x1 does not occur in C).

For example, let r(x, y) stand for the relation x-is-a-blood-relative-of-y, f(x) for the function the-father-of-x, and m(x) for the function the-mother-of-x. Let C be the clause r(x, f(y)) → r(x, y), meaning that if someone is related to a person's father, then he is related to that person. The following clauses are all in γ(C):

• r(x, f(x)) → r(x, x)
• r(x, f(m(x1))) → r(x, m(x1))
• r(x, f(y)) → (r(x, y) ∨ r(x1, x2))

Each of the derived clauses is easier to satisfy and hence has more models.

More examples of refinements over various domains are given in [Laird, 1985]. The task of constructing a refinement can be tricky, because one must ensure that all useful generalizations or specializations have been included. But given the formal definition, it is basically a problem in algebra, rather than a heuristic problem as has usually been the case for most applications. In essence, the refinement is a formal expression of the "production rules" or "generalization rules" found in many implementations (e.g., [Michalski, 1980], [Mitchell, 1977]).
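In the set encoding used in the earlier sketch (again ours, not the paper's; the names gamma, rho, and VARS are hypothetical), the complete refinements of Example 7 reduce to adding or removing a single variable:

    # Hypothetical realization of Example 7's refinement operators.
    VARS = {1, 2, 3, 4}  # the variables x1..xt, here with t = 4

    def gamma(m):
        """Upward refinement: uncomplement exactly one complemented
        attribute, i.e., add one missing variable to the set."""
        return [m | {v} for v in VARS - m]

    def rho(m):
        """Downward refinement (the dual): complement exactly one
        uncomplemented attribute, i.e., remove one variable."""
        return [m - {v} for v in m]

    print(gamma({1}))    # [{1, 2}, {1, 3}, {1, 4}] (in some order)
    print(gamma(VARS))   # []  -- the top element has no generalization
    print(rho({1, 3}))   # [{1}, {3}] (in some order)
    # Iterating gamma from the bottom element set() reaches every
    # subset of VARS: exactly the completeness property of Definition 6.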
Below we sketch a simple "bottom-up" algorithm for inductive inference using an upward refinement. For simplicity we assume that

1. γ(e) is finite for all e. (γ is then said to be locally finite.)
2. There is an expression emin ∈ E such that e ≽ emin for all e ∈ E.
3. γ is complete for emin.

The algorithm repeatedly calls on EX and tests the current hypothesis against the resulting example. If the hypothesis is too specific, it refines it, placing the more general expressions onto a queue and taking a new hypothesis from the front of the queue. If the hypothesis is too general, it discards it (with no refinement) and takes a new one from the queue. It can be shown that the algorithm will eventually converge to a correct hypothesis provided EX presents a sufficient set of examples for the target.

ALGORITHM UP-INFER:
    Initialize H ← emin; QUEUE ← empty(); EXAMPLES ← empty().
    Do Forever:
        EXAMPLES ← EXAMPLES ∪ {EX()}.
        While H disagrees with any of EXAMPLES:
            Using the (≥?) oracle, check that H ≥ no negative example
              and H ≥ some positive example; if so, add γ(H) to QUEUE.
            H ← front(QUEUE).

By duality we can construct a top-down algorithm using ρ and emax. Other algorithms are also possible, depending on the properties of the domain and the refinements. Most inductive inference algorithms in the literature are either top-down or bottom-up ([Mitchell, 1982] suggests using both in parallel). And for some domains, one direction seems advantageous over the other (conjunctive logical domains, for example, seem to prefer generalization to specialization). This directional asymmetry seems to occur mainly when the refinement is locally finite in one direction but not in the other. Note that this is a syntactic property, not a semantic one: regular sets of strings, for example, are easier to infer bottom-up when the rules are expressed as automata or as logical axioms ([Biermann and Feldman, 1972], [Shapiro, 1981]), but top-down when the rules are expressed as regular expressions ([Laird, 1985]).

Finally, it is worth observing how refinement algorithms handle the so-called Disjunction Problem. In the context of classical concept-learning, this refers to the problem of forming a "reasonable" generalization from examples in a domain that includes a disjunction operation. The trivial generalization, consisting of the disjunction of all the positive examples, is usually unsuitable since it will never converge to a hypothesis representing an infinite set. On the other hand, it is undesirable to eliminate the trivial generalization as a possibility, since it might lead to the correct rule.

Since refinement algorithms such as the one above apply the operator γ to hypotheses in the order in which they are discarded, the minimal generalization (adding only the one element to the set) is not necessarily the first one tried. For example, suppose the domain is the class of regular sets of strings over the alphabet {0, 1} and examples are strings in and not in the target set. If the current hypothesis is represented by a regular expression R, and a string w1 is presented that is not included by R, a refinement algorithm will generalize by applying γ to R, producing a set of new expressions to be tried in turn as hypotheses. Among these are expressions which extend R to R + w1; but other expressions, such as R*, will also be constructed and held on the queue in order.
If another positive example w2 is presented, R + w1 will be discarded, but R* will be considered before the refinements of R + w1 (including the trivial one R + w1 + w2). It can be shown that the algorithm will converge to a correct expression, whether or not the target is a finite set of strings.

Limitations of the Refinement Approach

The induction-by-refinement model is not expected to yield efficient algorithms directly since it is too general to take advantage of specific properties of the domain. Instead, the primary value of the model is the way it clarifies the important roles played by the semantic and syntactic orderings, and in the definition of refinement operators for computing appropriate generalizations and specializations of hypotheses. Recently several researchers have been looking for efficient inference algorithms that yield (with high probability) rules whose "error" is arbitrarily small, as measured by the probability distribution governing the presentation of examples ([Valiant, 1984], [Valiant, 1985], [Blumer et al., 1986]). In many of these algorithms, a refinement operation is clearly being employed; but instead of generating all refinements γ(e), the examples are used to reduce the set of possibilities - e.g., γ(e, z) is computed using the example z, yielding more general expressions that are consistent with z.

There are many domains in which the partial order is too linear or too "flat" to be of much use in searching for hypotheses. Consider, for example, the problem of finding an arithmetic recursion relation of the form s_n = f(s_{n-1}) to explain a sequence of integers. We might, for instance, try the hypothesis s_n = s_{n-1}^2 + 5 and find that it explains only one integer in the sequence. At this point, the "less defined than or equal to" ordering used in Example 2 is no more useful for finding a more general function than a simple generate-and-test approach.

Refinement algorithms have generally performed poorly when the examples are subject to "noise" (e.g., [Buchanan and Mitchell, 1978]). They also tend to require that all examples be stored, so that later refinements can be tested to avoid over-generalization or over-specialization (e.g., [Shapiro, 1982]). These two limitations are inevitably related: for, a procedure which is tolerant of faulty examples cannot expect to find a hypothesis consistent with every example seen so far and hence must be selective about the examples it chooses to retain. It is interesting to note that nearly all refinement algorithms in the literature refine in only one direction (up or down). Consequently these algorithms cannot recover if they over-refine in response to a faulty example. By contrast, an algorithm which can refine upward and downward has the potential for correcting an over-generalization resulting from a false positive example when subsequent examples so indicate (e.g., [Shapiro, 1982]).

Finally - and most serious - the refinement technique relies on a fixed algebraic language, without suggesting any way to incorporate new "terms" or "concepts" into the language. In the learning of geometric shapes, for example, we could in principle define complex patterns in terms of the elementary relations of the language (edge, circle, above, etc.), but the description would be too complex to find by searching through the rule space E.
By contrast, a suitable set of higher-level concepts (e.g., triangle, box, inside) could make the refinement path to a successful rule short enough to find by searching. But I am unaware of any general technique for discovering such new terms (other than having a friendly "teacher" present them explicitly). A successful model of this process would be a significant advance in the study of inductive learning.

Acknowledgements

I am especially grateful to Dana Angluin for many helpful discussions of this work. Thanks also to Takeshi Shinohara, whose careful reading of the original report identified some errors and improved the exposition.

References

[1] Angluin, D. Inference of reversible languages. J. ACM 29:3 (1982) 741-765.
[2] Angluin, D. and C. H. Smith. Inductive inference: theory and methods. Computing Surveys 15:3 (1983) 237-269.
[3] Banerji, Ranan. The logic of learning. In Advances in Computers 24, M. Yovits, ed. Orlando: Academic Press, 1985, pp. 177-216.
[4] Biermann, A. W. and J. Feldman. On the synthesis of finite-state machines from samples of their behavior. IEEE Trans. Comput. C-21:6 (1972) 592-597.
[5] Blumer, A., A. Ehrenfeucht, D. Haussler, and M. Warmuth. Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension. Proc. 18th ACM Symp. Theory of Comp., May 1986.
[6] Buchanan, B. G. and T. M. Mitchell. Model-directed learning of production rules. In Pattern-Directed Inference Systems, D. A. Waterman and F. Hayes-Roth, eds. New York: Academic Press, 1978, pp. 297-312.
[7] Cohen, P. R. and E. A. Feigenbaum, eds. The Handbook of Artificial Intelligence, Vol. III. Los Altos: William Kaufmann, Inc., 1982.
[8] Crespi-Reghizzi, S. An effective model for grammar inference. In Information Processing 71, B. Gilchrist, ed. New York: Elsevier North-Holland, 1972, pp. 524-529.
[9] Feigenbaum, E. A. The simulation of verbal learning behavior. In Computers and Thought, E. A. Feigenbaum and J. Feldman, eds. New York: McGraw-Hill, 1963.
[10] Gold, E. M. Language identification in the limit. Information and Control 10 (1967) 447-474.
[11] Hardy, S. Synthesis of LISP programs from examples. Proc. IJCAI-75, pp. 268-273.
[12] Hunt, E. B., J. Marin, and P. J. Stone. Experiments in Induction. New York: Academic Press, 1966.
[13] Laird, P. D. Inductive inference by refinement. Tech. Rep. 376, Department of Computer Science, Yale University, New Haven, Ct., 1986.
[14] Langley, P. W. Descriptive discovery processes: experiments in Baconian science. Tech. Rep. CS-80-121, Computer Science Department, Carnegie-Mellon University, 1980.
[15] Michalski, R. Variable-valued logic and its applications to pattern recognition and machine learning. In Computer Science and Multiple-Valued Logic Theory and Applications, D. Rine, ed. Amsterdam: North-Holland, 1975, pp. 506-534.
[16] Michalski, R. Pattern recognition as rule-guided inductive inference. IEEE Trans. Pat. Anal. and Mach. Intel. PAMI-2:4 (1980) 349-361.
[17] Michalski, R. Theory and methodology of inductive learning. Artificial Intelligence 20:2 (1983) 111-161.
[18] Mitchell, T. M. Version spaces: a candidate elimination approach to rule learning. Proc. IJCAI-77, pp. 305-310.
[19] Mitchell, T. M. Generalization as search. Artificial Intelligence 18:2 (1982) 203-226.
[20] Shapiro, E. Y. Inductive inference of theories from facts. Tech. Rep. 192, Department of Computer Science, Yale University, New Haven, Ct., 1981.
[21] Shapiro, E. Y. Algorithmic program debugging. Ph.D. dissertation, Computer Science Department, Yale University, New Haven, Ct., 1982. Published by M.I.T. Press.
[22] Summers, P. D. A methodology for LISP program construction from examples. J. ACM 24:1 (1977) 161-175.
[23] Valiant, L. G. A theory of the learnable. C. ACM 27:11 (1984) 1134-1142.
[24] Valiant, L. G. Learning disjunctions of conjunctions. Proc. IJCAI-85, pp. 560-566.
[25] Winston, P. H. Learning structural descriptions from examples. In The Psychology of Computer Vision, P. H. Winston, ed. New York: McGraw-Hill, 1975.
OPTIMAL ALLOCATION OF VERY LIMITED SEARCH RESOURCES

David Mutchler†
Naval Research Laboratory, Code 7591
Washington, D.C. 20375-5000

Abstract

This paper presents a probabilistic model for studying the question: given n search resources, where in the search tree should they be expended? Specifically, a least-cost root-to-leaf path is sought in a random tree. The tree is known to be binary and complete to depth N. Arc costs are independently set either to 1 (with probability p) or to 0 (with probability 1-p). The cost of a leaf is the sum of the arc costs on the path from the root to that leaf. The searcher (scout) can learn n arc values. How should these scarce resources be dynamically allocated to minimize the average cost of the leaf selected? A natural decision rule for the scout is to allocate resources to arcs that lie above leaves whose current expected cost is minimal. The bad-news theorem says that situations exist for which this rule is nonoptimal, no matter what the value of n. The good-news theorem counters this: for a large class of situations, the aforementioned rule is an optimal decision rule if p ≤ 0.5 and within a constant of optimal if p > 0.5. This report discusses the lessons provided by these two theorems and presents the proof of the bad-news theorem.

I Informal description of the problem

Searching the state-space for an acceptable solution is a fundamental activity for many AI programs. Complete search of the state-space is typically infeasible. Instead, one relies on whatever heuristic information is available. Interesting questions then arise as to how much speed-up is obtained and at what price.

Many authors have evaluated the complexity of algorithms that invoke heuristic search [1, 3, 6, 7, 9, 10, 11]. A typical question asked is: How fast can the algorithm find an optimal (nearly-optimal) (probably nearly-optimal) solution? This paper focuses upon the inverse question: Given n search resources, how good a solution can one obtain?

This inverse question is appropriate for real-time processes characterized by an insistence upon an answer (decision) after X seconds have passed. For example, a chess-playing program is limited by the external chess clock. A speech recognizer should maintain pace with the speaker. In these and other processes, search resources are very limited; even linear time may not be fast enough.

Heuristics are often said to offer "solutions which are good enough most of the time" [4, page 6]. The converse of this phrase implies that heuristics will, by definition, fail some of the time. Worst-case analysis is unilluminating - any algorithm using the heuristic information will, on occasion, perform poorly. One is forced, reluctantly perhaps, to turn to probabilistic, average-case analysis. Karp and Pearl said it well [10]:

Since the ultimate test for the success of heuristic methods is that they work well "most of the time", and since probability theory is our principal formalism for quantifying concepts such as "most of the time", it is only natural that probabilistic models should provide a formal ground for evaluating the performance of heuristic methods quantitatively.

In agreement with this philosophy, this paper seeks the algorithm whose average result is best.

† This report describes work done in the Department of Computer Science at Duke University. It was supported in part by the Air Force Office of Scientific Research, Air Force Systems Command, under Grant AFOSR LB-0205.
It must be emphasized from the outset that any conclusions drawn from average-case analysis depend fundamentally on the underlying probability distribution assumed. The concluding section of this paper discusses whether the results of this paper do in fact apply to real-world algorithms.

II The formal model

This paper restricts its interest to a particular variety of heuristic search - finding a least-cost root-to-leaf path in a tree. The trees considered are binary trees complete to depth N. The arcs of the trees are assigned costs randomly and independently; each arc costs either 1 (with probability p) or 0 (with probability 1-p). The cost of a leaf is the sum of the costs of the arcs on the path from the root to the leaf. This arc-sum method for assigning dependent leaf costs has been used by several researchers [2, 5, 10, 12, 13, 17].

The searcher (hereafter called the scout) begins with exactly the foregoing information. The scout acquires additional information by expanding arcs, i.e., learning the actual cost (either 1 or 0) of the arc expanded. At each stage of the search, the scout can expand any arc on the frontier of the search (any arc whose parent has been expanded already). Backtracking incurs no extra penalty. Recall that this paper focuses upon limited resources. Model this by insisting that the scout halt after n arc expansions. The general then comes forward to select a leaf whose cost is (in general) a random variable. The general seeks a low-cost leaf, of course. The optimal decision-strategy for the general is easily seen. The interesting issue is how the scout should allocate the n arc expansions.

Time for an example. Let p = 0.7 and N (depth of tree) be four. Suppose the scout expands the left arc beneath the root and finds that its cost is 1. Suppose further that the scout began with two arc expansions available, hence has only one more to use. Of the three arcs on the frontier of the search, which should the scout expand for this final action? Please pause a moment to decide for yourself.

[Figure 1. Which arc should the scout expand? (A depth 4 tree.)]

The natural answer seems to be arc A, perhaps because leaves beneath A have lower expected cost (given the current information) than those beneath the other frontier arcs. In fact, expanding arc A is wrong. The expected cost of the leaf selected by the general will, on average, be strictly lower if the scout expands arc B or C instead of arc A. The next section contains the generalized version of this startling result, and a reassuring theorem to counter it.

The scout is initially quite uncertain about the leaf costs. As the search proceeds, leaf costs beneath expanded arcs become less indeterminate. This accumulation of knowledge models what a "generic" heuristic rule might provide. This heuristic rule is better described as vague than as error-prone, as is fitting for a study of the utilization by various algorithms of heuristically acquired information. Contrast this with studies concerned with how the amount of error in the heuristic rule affects a given algorithm [6, 8, 9, 11, 14, 15, 16, 19].

This model differs from that used by Karp and Pearl [10] in only two aspects. First, they expand nodes (learning the costs of both arcs below the node) instead of arcs. This difference is not of consequence; more on this later.
The second and more significant difference is the presence of a cutoff beyond which search cannot continue. This cutoff models the limitation on resources. The present model is the appropriate model if the search process must shut down and an answer must be given after a certain amount of time has elapsed, that is, after a certain number of arc expansions.

III Results

First consider the general's decision. For any frontier arc α, define the zero-value of arc α to be

(sum of the costs of the arcs from the root to α) + p × (distance from α to the bottom of the tree).

For example, in the depth 4 tree of Figure 1, arcs B and C each have zero-value 1 + 3p, while arc A has zero-value 4p. The zero-value of an arc is the expected cost of each of the leaves beneath that arc, based on current information. Since the general will receive no further data, the optimal decision for the general is to select any leaf beneath the arc whose zero-value is smallest, breaking ties arbitrarily.

The scouting activity can be modeled as a finite-horizon Markov decision process [18]. The score of a scouting algorithm is the expected value of the smallest zero-value corresponding to a frontier arc after n arc expansions. An optimal algorithm is one that minimizes this score. Because zero-values are themselves expected costs, the score of a scouting algorithm is an expected expected cost. Note that an optimal algorithm does not usually discover the in-fact optimal path in the tree. An optimal algorithm finds a path whose average cost is no larger than the average cost of the path discovered by any other algorithm restricted to n arc expansions.

What are the optimal scouting algorithms? As discussed above, the general should choose any leaf below the arc whose zero-value is smallest. (The scout will assume that the general behaves optimally.) Perhaps the same policy should be used by the scout. Call this scouting algorithm - expand the arc whose zero-value is smallest - the greedy policy. The following theorem relates the bad news: the greedy policy is not optimal.

Bad-news theorem. No matter how many arc expansions remain, if p is large enough, there exist situations from which a nonoptimal greedy decision can arise, even if the dictates of the greedy policy have been followed from the beginning of search.

The bad-news theorem says that the scout should, under certain circumstances, apply search resources to an arc that is not currently the arc that looks "best", insofar as the final decision goes. Return to the example in Figure 1. The scout should expand arc B instead of arc A because, in that example, information about the second-best arc (B) is more valuable than information about the best arc (A).

This distinction between information and path-cost complicates the optimal scouting policy. A priori, one might have guessed that the two would coincide, i.e., that information is most valuable when gathered for the path whose expected cost is currently smallest. The bad-news theorem denies that these two measures are the same. The denial is strong - the measures may differ whether there be 5 or 500 arc expansions left.

Note the order of quantification in the bad-news theorem. The theorem says that for every n (number of arc expansions remaining), there exist values of p and situations from which the greedy decision is a mistake. The models considered in this paper are parameterized by p. Specifying p specifies the model.
The bad-news theorem says: if you tell me how many arc expansions you have left, I will give you a specific model (value of p) and a specific state (subtree with its arc costs labeled) from which the greedy decision is wrong.

What can be said for a fixed model, i.e., for fixed p? There the sun shines brightly. For a large class of situations, the greedy decision can be proved optimal.

Good-news theorem. Consider any state from which the scout cannot reach a leaf before exhausting the n remaining arc expansions. From any such state: for p ≤ 0.5, the greedy decision is an optimal decision; for 0.5 < p ≤ 0.618, the greedy decision is optimal if n ≥ 2; for 0.618 < p ≤ 0.682, the greedy decision is optimal if n ≥ 3. (The numbers 0.618 and 0.682 are more accurately described as the solutions to the equations p^2 = 1 - p and p^3 = 1 - p, respectively.)

So, if the scout is exploring a tree for which p ≤ 0.5, the greedy policy can be used with complete confidence. If p is between 0.5 and 0.618, the good-news theorem tells the scout that only at the last arc expansion might the greedy policy err; for p between 0.618 and 0.682, only the last two arc expansions are suspect. Section VI discusses the gravity of the restriction that the scout be unable to reach a leaf (i.e., that search resources are very limited).

The proof of the good-news theorem contains three inductions on n, one for each of the three ranges of p listed. The basis case of each induction is the smallest value of n for which the greedy decision is guaranteed to be optimal: n = 1 for p ≤ 0.5, n = 2 for 0.5 < p ≤ 0.618, and n = 3 for 0.618 < p ≤ 0.682. Interestingly, the induction step for the third range also applies to any p ≤ 0.99. It fails for larger p only because the proof uses a numerical step for which arithmetic roundoff plays a role eventually. Unfortunately, the value of n for the basis case grows as p increases (see the conjecture below), so that the basis case involves increasingly complicated algebraic comparisons. This prevents the proof of the good-news theorem from extending beyond 0.682. There is no evidence that the basis case fails for larger values of p. Indeed, it would be strange - though of course possible - if the basis case were to hold for p up to 0.682 but fail for larger p, while the induction step provably works for p up to 0.99. The success of the induction step provides theoretical support for the following conjecture.

Conjecture. Consider any fixed model, i.e., any fixed value for p. Let M be the k for which p^k = 1 - p. That is, let M = log_p(1 - p). If M or more arc expansions remain, the greedy decision is an optimal decision from any situation from which the scout cannot reach a leaf before exhausting the remaining arc expansions.

Corollary. Suppose the scout begins with no more arc expansions available than the depth of the tree. For any fixed p, the expected score of the greedy policy is within a constant of the expected score of the optimal scouting algorithm.

Computer simulation provides additional, albeit anecdotal, support for the conjecture. It is not hard to compute mechanically the optimal decision for any specific state, model (value of p), and small value of n. A wide variety of such runs yielded no exceptions to the conjecture. The restriction of the simulations to small values of n (≤ 10) is not particularly worrisome, because (as explained above) only the basis case needs to be verified.
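Such a mechanical check is straightforward to reproduce. The sketch below is ours, not the author's program, and its encoding is an assumption: a frontier arc is summarized by a pair (c, d) - the known cost above it and the number of unknown arcs below and including it - so its zero-value is c + p·d, and expanding an arc reveals its cost and replaces it with its two children. Run on the Figure 1 state with one expansion remaining, it reproduces the claim of Section II: expanding arc B beats the greedy choice, arc A.

    # Brute-force scoring of scouting choices (hypothetical sketch).
    from functools import lru_cache

    P = 0.7  # the model parameter of the Figure 1 example

    def final_score(frontier):
        """The general's expected cost: the smallest zero-value."""
        return min(c + P * d for c, d in frontier)

    @lru_cache(maxsize=None)
    def optimal(frontier, n):
        """Expected score under optimal scouting with n expansions left."""
        if n == 0:
            return final_score(frontier)
        return min(expand(frontier, i, n)
                   for i, (_, d) in enumerate(frontier) if d > 0)

    @lru_cache(maxsize=None)
    def expand(frontier, i, n):
        """Expected final score if arc i is expanded next,
        with optimal play thereafter."""
        c, d = frontier[i]
        rest = frontier[:i] + frontier[i + 1:]
        total = 0.0
        for cost, prob in ((1, P), (0, 1.0 - P)):
            # A revealed arc of depth d > 1 exposes two child arcs;
            # a leaf arc (d == 1) becomes a fully known leaf (d == 0).
            kids = ((c + cost, d - 1),) * (2 if d > 1 else 1)
            total += prob * optimal(tuple(sorted(rest + kids)), n - 1)
        return total

    # Figure 1 state: arc A is (0, 4); arcs B and C are (1, 3) each.
    state = ((0, 4), (1, 3), (1, 3))
    print(expand(state, 0, 1))  # expand A (the greedy choice): 2.8
    print(expand(state, 1, 1))  # expand B: 2.68 -- strictly better

At p = 0.7 the inequality p^2 > 1 - p used in the bad-news construction holds (0.49 > 0.3), which is why the second-best arc is the better buy in this state.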
IV Proof of the bad-news theorem

We prove the bad-news theorem for n = 2. The proof for larger n is analogous; see [12].

Choose p large enough that p^2 > 1 - p. The troublemaking state is the example seen earlier. We show that expanding arc A is a nonoptimal decision if the scout has exactly two arc expansions to apply to the state shown in Figure 1. Suppose the contrary: suppose there is an optimal algorithm, call it algorithm OPT, that expands arc A from the state pictured. Here optimal means that the expected value of the leaf chosen by the general is minimized if the scout uses algorithm OPT. There are many optimal algorithms. Without loss of generality, algorithm OPT can be taken to be a deterministic algorithm.

Imagine that OPT expands arc A and finds that it has cost 1. OPT now has only one arc expansion left. It can expand any of the four frontier arcs - all have the same zero-value. Algorithm OPT, being deterministic, must fail to expand three of these frontier arcs. The expected value of OPT is the same no matter which three are skipped. Hence OPT can be chosen to skip the two arcs below arc A and (say) arc B, in the event that arc A has value 1. That is, OPT expands arc C if its previous expansion of arc A yielded a 1-arc.

We now define an algorithm called MIMIC that uses OPT as a subroutine. We will show that MIMIC performs better than OPT (on average), thus contradicting the assumption that OPT (and the greedy decision) are optimal. Algorithm MIMIC mimics OPT, but with arcs A and B reversed. OPT expands A from the state in Figure 1, so MIMIC expands B from that state. MIMIC continues its mimicry (still with A and B reversed) on its last arc expansion, as shown below (the fat line in the figure indicates the arc to be expanded).

[Figure 2. The final arc expansion by MIMIC and OPT.]

Return to the state pictured in Figure 1, from which 2 arc expansions remain. The expected score of algorithm OPT is the weighted average of the expected cost the general incurs when the scout uses OPT. This average is over all 2^2 trees τ that the scout might hand the general after 2 expansions from the pictured state. The four such trees possible after 2 expansions of the state in Figure 1 are shown in the top half of the following figure. (The highlighting therein becomes meaningful shortly.)

[Figure 3. All the possible trees the general might be given. (The four trees in the top half fall into Group 3, Group 1, Group 2, and Group 2, respectively; the bottom half shows the corresponding trees for MIMIC.)]

The expected score of MIMIC is likewise a weighted average, but over a different set of 2^2 trees, pictured in the bottom half of Figure 3. The action "exchange arc A and the subtree beneath it with arc B and its attached subtree" provides a 1-1 correspondence between these two sets of trees. (The correspondence is shown by vertical pairing in Figure 3.) Note that any tree τ and its corresponding tree τ' are equally likely. It follows that the difference between the expected score of OPT and the expected score of MIMIC is the difference between the score of OPT on τ and the score of MIMIC on the corresponding tree τ', averaged over all 2^2 trees τ that OPT might reveal to the general. Let us compute this difference as just described, but in three groups.

Group 1: consider any tree τ on which the general selects a leaf below arc B when the scout uses OPT on τ. The second tree in the top half of Figure 3 falls in this group. (Highlighted arc B is tied with two other frontier arcs for smallest zero-value in that tree. Let the general break ties in favor of arc B.
This is as good a tie-breaking rule as any.) When the scout uses MIMIC on the corresponding tree τ', arc B and the subtree beneath it in τ also appear in τ', but from a better starting point. (Arc B is alone in the example in Figure 3; the visible subtree below it is null.) To be precise, algorithm MIMIC scores (on average) 1 - p better on τ' than OPT does on τ, for any tree τ in Group 1.

Group 2: consider any tree τ on which the general selects a leaf below arc A when the scout uses OPT on τ. The third and fourth trees in the top half of Figure 3 fall in this group. When the scout uses MIMIC on τ', arc A and the subtree beneath it in τ also appear in τ', but from a worse starting point. (These subtrees are highlighted in Figure 3.) Algorithm MIMIC scores (on average) no worse than 1 - p worse on τ' than OPT does on τ, for any tree τ in Group 2.

Group 3: consider any tree τ on which the general selects a leaf below neither arc A nor arc B when the scout uses OPT on τ. The first tree in the top half of Figure 3 falls in this group. The same frontier arc in τ beneath which the general chose a leaf also appears in τ'. (The common subtree is highlighted in Figure 3.) The expected score of the general (and MIMIC) is no worse on τ' than the expected score of the general (and OPT) on τ, for any tree τ in this group.

Conclude: if the collective likelihood of Group 1 exceeds that of Group 2, algorithm MIMIC performs (on average) better than algorithm OPT. A trite calculation shows this to be the case for the trees in Figure 3. The following discussion suggests how the proof works when n > 2.

What is the probability of Group 1? Return to the situation of Figure 1. In the event that both remaining arc expansions reveal only 1-arcs in the tree τ uncovered by OPT, arc B is tied for minimum zero-value at the conclusion of search. (Remember: arc B is chosen as an arc that OPT will not expand in just this case, and the general breaks ties in favor of arc B.) Then

Pr(Group 1) ≥ Pr(two 1-arcs) = p^2.

What is the probability of Group 2? Remember that algorithm OPT - by design - will ignore the arcs below arc A if arc A has cost 1. The only hope for Group 2 is that arc A has cost 0. That is,

Pr(Group 2) ≤ Pr(arc A has cost 0) = 1 - p.

By choice of model, p^2 > 1 - p, so Group 1 is more likely than Group 2. This contradicts the assumption that OPT is optimal and shows that the greedy decision is not an optimal decision in this construction.

V Proof of the good-news theorem

The proof of the good news is long and involved. This section presents some of the more appetizing parts of the proof, to give its flavor. The reader's pardon is asked for the lack of rigor in the presentation that follows. See [12] for a careful exposition of all the details.

The devices used for p ≤ 0.5 (where the greedy policy is optimal) are somewhat different from those used for p > 0.5 (where it is not). Within each half, further division is necessary as well. In each subcase, however, the proof is by induction on n, the number of arc expansions remaining. The basis cases compute the optimal policy explicitly.

[Figure 4. Piecewise linearity.]

Consider an arbitrary state x. Because of the assumption that the scout can no longer reach a leaf, the state is characterized by the zero-values of the frontier arcs. Let α denote the smallest of these zero-values. Let "arc α" denote the arc corresponding to α. Let P^β be an optimal algorithm. If this algorithm expands arc α from state x, the greedy decision (expand the arc whose zero-value is smallest) is an optimal decision, completing the induction step of the proof. Suppose the contrary: suppose this optimal algorithm expands some other arc whose zero-value is (say) β > α; call this other arc β. (Hence the name of the algorithm.) Define P^α to be the policy that expands arc α and then proceeds optimally. The goal of the rest of this proof is to show that policy P^α performs (on average) as well as optimal policy P^β. Achieving this goal will demonstrate that expanding arc α is also an optimal decision, hence that the greedy decision is an optimal decision, hence (by induction) that the greedy policy is optimal.

First consider the case in which β exceeds α by at least p, i.e., β ≥ α + p. In this case, α will be the smallest zero-value after policy P^β expands arc β, no matter what the result of that expansion. By the induction hypothesis, policy P^β (being optimal) will continue by expanding the arc whose zero-value is smallest, namely, arc α. In other words, policy P^β is (in this case) equivalent to the policy that expands both α and β without regard to order. But such a policy is certainly no better than the more flexible policy P^α that expands α and then proceeds optimally. The goal described in the preceding paragraph has been achieved in this case.

Turn now to the case in which β < α + p. The argument in this case operates by considering the performance of algorithms P^α and P^β as functions of β. It can be shown that:

a. The expected score the general achieves when the scout uses either of these algorithms is a continuous, piecewise-linear function of β.

b. At β = α, the general achieves the same expected score by using either scouting algorithm.

For any scouting algorithm P, let the phrase "β wins by using policy P" be shorthand for the event "the general chooses a leaf below arc β when scouting policy P is used". The slope of each linear segment in the graph of the general's expected score when the scout uses P^α is simply the probability that β wins by using P^α. A similar statement holds for algorithm P^β. It follows that one way to show our goal (policy P^α performs as well as policy P^β) is to show that

Pr(β wins by using P^β) ≥ Pr(β wins by using P^α)   (†)

for any value of β such that α < β < α + p.

The "meat" of the proof is devoted to showing the truth of inequality (†). This is done by conditioning on the possible costs of arcs α and β and meticulously examining the four cases that result. The central theme is an application of the induction hypothesis: if "enough" 0-arcs lie on and beneath the arc that is first expanded, the policy (either P^α or P^β) never leaves the subtree beneath that arc; hence that first expanded arc wins. For example, if arc β has cost 0 and has an infinite path of 0-arcs directly beneath it, β must win when policy P^β is used. The classical theory of branching processes provides an easy formula for the probability that there is such an infinite path. New results for branching random walks developed for this proof give stronger approximations to the "win probabilities". These new results are of particular interest because numerical approximations are used to provide analytic bounds.

VI Discussion

Is the "limited resources" problem relevant to the real world? Is it reasonable that after n arc expansions, the search halts? This absolute cutoff is not typical in AI problem-solving programs. Only real-time processes might be so described.
Nonetheless, I view this aspect of the model as a significant contribution to investigation of optimal, dynamic allocation of search resources. The cutoff clearly separates the search process from the final-decision process. Search gathers information for two purposes: to optimize some final decision, and to assist the accumulation of additional useful information. The present model, by design, accents this latter purpose.

One reasonable alternative is a staged search: the scout gains some information; the general makes some decision; then the process iterates, although still with some final cutoff. Such a model is appropriate if outside factors are involved: an unpredictable opponent, for instance, or events whose outcome is impervious to search but can be experienced as consequences of the general's decisions. A second alternative is to abandon the absolute cutoff. Allow the general to direct the scout to continue search, at some additional cost. The problem then becomes an optimal stopping process. Both of these alternatives are attractive. It is their analysis that appears forbidding.

Is our arc-sum model the right model for studying search with limited resources? Without doubt, the present model is simple-minded. Some of its simplicity has merit, capturing essence without detail. The restriction to binary trees with two-valued arcs falls into this class. On the other hand, the assignment of leaf costs by arc costs that are independent and identically distributed is artificial. Happily, the foregoing results are oblivious to some of the assumptions. Any value is permitted for the heuristic parameter p. The bad-news theorem can be shown to apply to any branching factor. Both the bad-news theorem and a weaker version of the good-news theorem apply to the model in which nodes are expanded instead of arcs [12].

Does the assumption that search resources be very limited sabotage the substance of the good-news theorem? From a practical standpoint, this restriction (that the scout be unable to reach a leaf) is a big winner. Without it, all sorts of rough-edged boundary problems are encountered. For example, a prejudice appears against expanding arcs at the bottom of the tree because such expansions cannot be followed up. In addition to this practical justification, there is a heuristic argument that the restriction is of little effect. The argument goes like this. The search begins well away from leaves. Whether the tree has depth 50 or 5000 should have little effect while the search is rummaging around level 5 or so. Any reasonable algorithm (including the greedy policy) has a breadth-first character whenever 1-arcs are found. Conclude: the search typically will not reach a leaf. So long as this is the case, the analysis in this paper works.

Open questions: is the conjecture in section III true? Can similar results be obtained for generalizations of the present model? (In particular, what happens if one allows arc values other than 0 and 1? a random branching factor? a depth-dependent distribution for arc values?) Do the lessons of this study apply to other models of heuristic search? More to the point, do the lessons apply in practice? Is the greedy policy a good algorithm if the scout misestimates p?

What do the results in this study REALLY say? These results should not be taken as literal advice for finding a least-cost root-to-leaf path in a tree. The bad news and good news should be assimilated in a broader sense, as follows.
Bad news: intuition about heuristic search is not always right. The example at the beginning of this paper shows that one's intuitions can be firmly set, and firmly wrong. Our model and the bad-news theorem show that blind adherence to custom may prevent optimal use of search resources. In particular, there is a real difference between where best to gather information and how best to utilize it.

Good news: theoretical justification can be provided for the intuition that the best information is acquired from the path that currently looks best. As the bad-news theorem shows, this intuition fails when p > 0.5. But for p ≤ 0.5, the intuition is sound; even for p > 0.5, the good-news theorem and accompanying conjecture show that the intuition provides a good approximation. In sum, this study of heuristic search establishes that this intuition - search the path you currently judge best - can justifiably be labeled a heuristic. It sometimes fails, but on average provides a result close to optimal.

References

1. Bagchi, A. and A. Mahanti, "Search algorithms under different kinds of heuristics - A comparative study," JACM 30(1) pp. 1-21 (January 1983).
2. Ballard, Bruce W., "The *-minimax search procedure for trees containing chance nodes," Artificial Intelligence 21 pp. 327-350 (1983).
3. Dewitt, H.K., "The theory of random graphs with applications to the probabilistic analysis of optimization algorithms," Ph.D. dissertation, Computer Science Dept., University of California, Los Angeles (1977).
4. Feigenbaum, E.A. and J. Feldman, Computers and Thought, McGraw-Hill Book Company, New York (1963).
5. Fuller, S.H., J.G. Gaschnig, and J.J. Gillogly, "An analysis of the alpha-beta pruning algorithm," Dept. of Computer Science Report, Carnegie-Mellon University, Pittsburgh, PA (1973).
6. Gaschnig, John, "Performance measurement and analysis of certain search algorithms," Ph.D. dissertation, Technical Report CMU-CS-79-124, Computer Science Dept., Carnegie-Mellon University (1979).
7. Golden, B.L. and M. Ball, "Shortest paths with Euclidean distances: An explanatory model," Networks 8(4) pp. 297-314 (Winter 1978).
8. Harris, Larry R., "The heuristic search under conditions of error," Artificial Intelligence 5 pp. 217-234 (1974).
9. Huyn, Nam, Rina Dechter, and Judea Pearl, "Probabilistic analysis of the complexity of A*," Artificial Intelligence 15 pp. 241-254 (1980).
10. Karp, R.M. and J. Pearl, "Searching for an optimal path in a tree with random costs," Artificial Intelligence 21 pp. 99-116 (1983).
11. Munyer, J., "Some results on the complexity of heuristic search in graphs," Technical Report HP-76-2, Information Sciences, University of California, Santa Cruz (1976).
12. Mutchler, D., "Search with very limited resources," Ph.D. dissertation, Duke Technical Report CS-1986-10, Duke University Department of Computer Science (1986).
13. Newborn, M.M., "The efficiency of the alpha-beta search on trees with branch-dependent terminal node scores," Artificial Intelligence 8 pp. 137-153 (1977).
14. Pearl, Judea, "Knowledge versus search: A quantitative analysis using A*," Artificial Intelligence 20 pp. 1-13 (1983).
15. Pohl, Ira, "First results on the effect of error in heuristic search," pp. 219-236 in Machine Intelligence 5, ed. Bernard Meltzer and Donald Michie, American Elsevier, New York (1970).
16. Pohl, Ira, "Practical and theoretical considerations in heuristic search algorithms," pp. 55-72 in Machine Intelligence 8, ed. E.W. Elcock and D. Michie, Wiley, New York (1977).
17. Reibman, Andrew L. and Bruce W. Ballard, "The performance of a non-minimax search strategy in games with imperfect players," Duke Technical Report CS-1983-17, Duke University Department of Computer Science (1983).
18. Ross, Sheldon M., Introduction to Stochastic Dynamic Programming, Academic Press, New York (1983).
19. VanderBrug, Gordon J., "Problem representations and formal properties of heuristic search," Information Sciences 11 pp. 279-307 (1976).
Selecting Appropriate Representations for Learning from Examples

Nicholas S. Flann and Thomas G. Dietterich
Department of Computer Science
Oregon State University
Corvallis, Oregon 97331

Abstract

The task of inductive learning from examples places constraints on the representation of training instances and concepts. These constraints are different from, and often incompatible with, the constraints placed on the representation by the performance task. This incompatibility explains why previous researchers have found it so difficult to construct good representations for inductive learning - they were trying to achieve a compromise between these two sets of constraints. To address this problem, we have developed a learning system that employs two different representations: one for learning and one for performance. The learning system accepts training instances in the "performance representation," converts them into a "learning representation" where they are inductively generalized, and then maps the learned concept back into the "performance representation." The advantages of this approach are (a) many fewer training instances are required to learn the concept, (b) the biases of the learning program are very simple, and (c) the learning system requires virtually no "vocabulary engineering" to learn concepts in a new domain.

1 Introduction

In the idea paper entitled "Learning Meaning," Minsky (1985) stresses the importance of maintaining different representations of knowledge, each suited to different tasks. For example, a system designed to recognize examples of cups on a table would do well to represent its knowledge as descriptions of observable features and structures. In contrast, a planning system employing cups to achieve goals would require a representation describing the purpose and function of cups.

When we turn from the issue of performance to the issue of learning, it is not clear what representation to choose. The most direct approach is to choose the same representation for learning as for performance, thus gaining the advantage that any knowledge learned will be immediately available to support performance. Early machine learning work, such as Winston's ARCH (Winston, 1975) and Michalski's AQ11 system (Michalski & Chilausky, 1980), employed this approach, and it worked quite well. The design of a structural language capable of capturing the concepts of interest was straightforward, and concepts were learned quickly with (relatively) few training instances.

However, when Quinlan (1982) attempted to pursue this approach in his work on learning chess end-game concepts, he encountered difficulties. His representation for high-level chess features was effective for the task of recognizing end-game positions, but it introduced many problems for the learning task. First, the concept language was very difficult to design. Quinlan spent two man-months iteratively designing and testing the language until it was satisfactory. The second problem was that it took a large number of training instances (a minimum of 334) to learn the concept of lost-in-3-ply completely. These problems illustrate that the approach of employing the same representation for learning and for performance was inappropriate for this domain.

In this paper, we show that inductive learning places constraints on the representation for training instances and concepts and that these constraints often conflict with the requirements of the performance task.
Hence, the difficulty that Quinlan encountered can be traced to the fact that the concept lost-in-3-ply is an inherently functional concept that is most easily learned in a functional representation. However, the performance task (recognition) requires a structural concept representation. The vocabulary that Quinlan painstakingly constructed was a compromise between these functional and structural representations.

The remainder of this paper is organized as follows. First, we discuss the constraints that the task of inductive learning places on the representation for training instances and concepts. Second, we describe a strategy for identifying the most appropriate representation given these constraints. Third, we consider the problems that arise when the representation for learning is different from the representation in which the training instances are supplied and from the representation that is needed by the performance task. Finally, we describe an implemented system, Wyl, that learns structural descriptions of checkers and chess concepts by first mapping the training instances into a functional representation, generalizing them there, and converting the learned concept back into a structural representation for efficient recognition.

2 Representational Constraints of Inductive Learning

The goal of an inductive learning program is to produce a correct definition of a concept after observing a relatively small number of positive (and negative) training instances. Gold (1967) cast this problem in terms of search. The learning program is searching some space of concept definitions under guidance from the training instances. He showed that (for most interesting cases) this search cannot produce a unique answer, even with denumerably many training instances, unless some other criterion, or bias, is applied. Horning (1969), and many others since, have formulated this task as an optimization problem. The learning program is given a preference function that states which concept definitions are a priori more likely to be correct. The task of the learning program is to maximize this likelihood subject to consistency with the training instances.

This highly abstract view of learning tells us that inductive learning will be easiest when (a) the search space of possible concept definitions is small, (b) it is easy to check whether a concept definition is consistent with a training instance, and (c) the preference function or bias is easy to implement. In practice, researchers in machine learning have achieved these three properties by (a) restricting the concept description language to contain few (or no) disjunctions, (b) employing a representation for concepts that permits consistency checking by direct matching to the training instances, and (c) implementing the bias in terms of constraints on the syntactic form of the concept description. Let us explore each of these decisions in detail, since they place strong constraints on the choice of good representations for inductive learning.

Consider first the restriction that the concept description language must contain little or no disjunction. This constraint helps keep the space of possible concept definitions small.
It can be summarized as saying "Choose a representation in which the desired concept can be captured succinctly."

The second decision - to use matching to determine whether a concept definition is consistent with a training instance - places constraints on the representation of training instances. Training instances must have the same syntactic form as the concept definition. Furthermore, since the concept definition contains little or no disjunction, the positive training instances must all be very similar syntactically. To see why this is so, consider the situation that would arise if the concept definition were highly disjunctive. Each disjunct could correspond to a separate "cluster" of positive training instances. With disjunction severely limited, however, the positive training instances must form only a small number of clusters.

In addition to grouping the positive instances "near" one another, the representation must also allow them to be easily distinguished from the negative instances. This is again a consequence of the desire to keep the concept definition simple. The concept definition can be viewed as providing the minimum information necessary to determine whether a training instance is a positive or a negative instance. Hence, if the concept definition is to be short and succinct, the syntactic differences between positive and negative instances must be clear and simple.

The third decision - to implement bias in terms of constraints on the syntactic form of the concept description - makes the choice of concept representation even more critical. Recall that the function of bias is to select the correct, or at least the most plausible, concept description from among all of the concept descriptions consistent with the training instances. Typically, the bias is implemented as some fixed policy in the program, such as "prefer conjunctive descriptions" or "prefer descriptions with fewest disjuncts." The bias will only have its intended effect if conjunctive descriptions or descriptions with fewest disjuncts are in fact more plausible. In other words, for syntactic biases to be effective, the concept description language must be chosen to make them true. The net effect of this is to reinforce the first representational constraint: the concept representation language should capture the desired concept as succinctly as possible.

2.1 Choosing the Most Suitable Representation

Now that we have reviewed the constraints that inductive learning places on the representation, we must consider how to satisfy those constraints in a given learning task. It should be clear that we want to select the representation that captures the concept most "naturally." The "natural" representation is the one that formalizes the underlying reason for treating a collection of entities as a concept in the first place. A concept (in the machine learning sense anyway) is a collection of entities that share something in common. Some entities are grouped together because of the way they appear (e.g., arches, mountains, lakes), the way they behave (e.g., mobs, avalanches, rivers), or the functions that they serve (e.g., vehicles, cups, doors). Occasionally, these categories correspond nicely. Arches have a common appearance and a common function (e.g., as doorways or supports). More often, though, entities similar in one way (e.g., function) are quite different in another (e.g., structure).

The performance task for which a concept definition is to be learned may require a structural representation (e.g., for efficient recognition), a functional representation (e.g., for planning), or a behavioral representation (e.g., for simulation or prediction). When we review the successes and failures of machine learning, we see that difficulties arise when the representation required for the performance task is not the natural representation for the concept.

Winston's ARCH program was successful because the natural representation - structural - was also the performance representation. The structural representation captured the important similarities among the positive training instances as a simple conjunction. It also separated the positive instances from the negative ones by simple features such as touching and standing.

Quinlan's difficulties with lost-in-3-ply can be traced to the fact that this concept is naturally defined functionally, yet the performance task required a structural representation. All board positions that are lost-in-3-ply are the same, not because they have the same appearance, but because they all result in a loss in exactly 3 moves. This concept can be captured naturally in a representation that includes operators (such as move) and goals (such as loss). In Quinlan's concept language, which includes both structural and functional terms, this concept required a disjunction of 334 disjuncts.

2.2 Coordinating Different Representations

For situations in which the representation most appropriate for learning is different from the one required for the performance task, there are two basic approaches that can be pursued. First, we can try, as Quinlan did, to find an intermediate representation that provides some support for both learning and performance. However, the alternative that we have investigated is to employ two separate representations - one for learning and one for performance. This raises the problem of converting from one representation to another.

Figure 1 shows the general structure of a learning system that employs this "multiple representation strategy." Training instances are presented to the system in a representation called the "Environment Representation" (ER). To support induction, the instances are translated into training instances written in the "Learning Representation" (LR). Within this representation, the instances are generalized inductively to produce a concept description. For this concept to be employed in some performance task, it must be translated into the "Performance Representation" (PR).

[Figure 1: The Multiple Representation Strategy]

Many existing learning systems can be viewed as pursuing variants of this "multiple representation strategy." For example, consider the operation of Meta-DENDRAL (Buchanan & Mitchell, 1978). Training instances are presented in a structural ER consisting of molecular structures and associated mass spectra. The program INTSUM converts these structural training examples into a LR of behavioral descriptions called cleavage processes, which are sequences of cleavage steps. These individual cleavage steps are then inductively generalized by programs RULEGEN and RULEMOD to obtain general cleavage rules. In this domain, the PR is the same as the LR.
These cleavage rules are produced as the output of Meta-DENDRAL for use in predicting how other molecules will behave in the mass spectrometer. One can imagine an implementation of Meta-DENDRAL that attempted to inductively generalize the training instances in the given structural ER of molecules and mass spectra. However, this representation does not capture the important similarities between different molecules. The similarities are properly captured at the level of individual cleavage steps that produce single spectral lines, rather than entire molecules and spectra.

In addition to Meta-DENDRAL, most of the explanation-based learning systems (e.g., Mitchell, Keller, & Kedar-Cabelli, 1986; DeJong & Mooney, 1986) can also be viewed as employing a version of this multiple representation strategy. In LEX2 (Mitchell, et al., 1982), for example, training instances are presented in an ER consisting of structural descriptions of symbolic integration problems. By applying its integration problem solver, LEX2 converts each training instance into a functional representation (the LR) consisting of a particular sequence of integration operators leading to a solution. In this LR, the specific sequence of integration operators is generalized by permitting the final state to be any solved problem state. In some sense, LEX2 is assuming that the teacher is trying to teach it the concept of "all integration problems solvable by this particular sequence of operators." Once it has developed this generalized concept description in the LR, LEX2 must convert it into the PR, which is the same representation as the ER. This translation is accomplished by back-propagating the description of a solved problem through the operator sequence to compute the weakest preconditions of this particular operator sequence.

This view of LEX2 explains why the original LEX system was not as successful as LEX2. In LEX, inductive inference was applied to positive training examples represented in the ER. The goal of inductive learning was to find general structural descriptions of integration problems for which particular operators, such as OP3, should be applied. This knowledge of the learning goal was not explicit in the structural representation, but only in the teacher's mind. Hence, LEX could not take advantage of it. However, by mapping the training examples into the functional representation, the learning goal could be made explicit and used to guide the generalization process. The functional representation concisely captures the desired similarity between the different training examples.

[Figure 2: Representations in Wyl]

3 Overview of Wyl

Although previous learning systems can be viewed as applying the multiple representation strategy, none of these systems fully exploits this approach. In particular, the explanation-based learning systems do not perform any significant inductive inference in the LR aside from generalizing the final state of the operator sequence. In order to explore the multiple-representation strategy, we have developed a learning system named Wyl (after James Wyllie, checker champion of the world from 1847 to 1878) that applies the strategy to learn concepts in board games such as checkers and chess. We have chosen this domain because there are many interesting concepts that are naturally functional (e.g., trap, skewer, fork, lost-in-2-ply) and yet have complex structural definitions.
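Before turning to Wyl's three phases in detail, the overall control structure implied by Figure 1 can be written down compactly. The following Python skeleton is our own illustration, not code from the paper; the three mapping functions are assumed interfaces.

    # Minimal sketch (ours) of the multiple representation strategy:
    # translate ER instances into the LR, generalize there, and emit
    # a concept in the PR.

    from typing import Callable, Iterable, TypeVar

    ER = TypeVar("ER")  # environment representation (teacher-supplied)
    LR = TypeVar("LR")  # learning representation (where induction happens)
    PR = TypeVar("PR")  # performance representation (used for recognition)

    def multi_rep_learn(instances: Iterable[ER],
                        to_lr: Callable[[ER], LR],             # Wyl: envisionment
                        generalize: Callable[[list[LR]], LR],  # Wyl: induction
                        to_pr: Callable[[LR], PR]              # Wyl: compilation
                        ) -> PR:
        lr_instances = [to_lr(x) for x in instances]
        return to_pr(generalize(lr_instances))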
Wyl has been applied to learn definitions for trap and trap-in-2-ply in checkers and skewer and knight-fork in chess.

The performance task of Wyl is recognition. Given a board position, represented simply in terms of the kinds and locations of the playing pieces, Wyl must decide whether that position is, for example, a trap. To perform this task, the trap concept must be represented in a structural vocabulary that permits efficient matching against the board positions. However, as we have noted above, concepts such as trap are most easily learned in a functional representation.

In addition to requiring a structural representation for performance, a structural representation is also needed for the training instances. To teach Wyl checkers and chess concepts, we want to simply present board positions that are examples of those concepts. Hence, in the terminology of the previous section, the ER and the PR are structural representations, but the LR is a functional representation.

The organization of Wyl is shown in Figure 2. The three main processes in Wyl are envisionment, generalization, and compilation. The envisionment process translates each supplied structural training instance into the functional representation to obtain the corresponding functional training instance. The generalization process performs inductive inference on these functional training instances, resulting in a functional definition that captures the desired concept. Finally, the compilation stage converts this functional definition into an equivalent structural description that can support efficient recognition.

The initial knowledge given to Wyl takes four forms. First, there is the environment representation for board positions. Second, there is a representation for each of the legal operators in the game (e.g., normal-move and take-move). Third, Wyl is given the rules of the game, represented as a recursive schema that describes what moves are legal at what points in the game. Finally, Wyl is given definitions of the important goals of the game, such as loss, win, and draw. For chess, Wyl is also told that lose-queen is an important goal.

[Figure 3: Checkers trap training instance, red to play]

These given goals are the key to Wyl's learning ability. Wyl learns new functional concepts as specializations of these known concepts. For example, the checkers concept trap is a specialization of loss. To see this, consider the particular trap position shown in Figure 3. In this position, the red king in square 2 is trapped by the white man at square 10. No matter what move the red king makes, the white man can take him. Hence, trap is a particular way to lose a checkers game. Once Wyl learns a recognition predicate for trap, it is added to the pool of known concepts, where it may be specialized further to form some future concept (such as trap-in-2-ply).

The goals are provided to Wyl in a predicate calculus notation. The checkers concept of loss is represented below (win is the dual case):

    ∀ state1 side1:
      LOSS(state1 side1) ⇔
        recognizedLOSS(state1 side1)
        ∨ ∀ state2 side2 type from over to:
            oppositeplayer(side1 side2)
            ∧ [ [takemove(state1 state2 from over to side1 type) ∧ WIN(state2 side2)]
              ∨ [normalmove(state1 state2 from to side1 type) ∧ WIN(state2 side2)] ]

This formula is interpreted as follows. A board is an instance of loss if, for all legal moves available to side1, the outcome is a win for the other player (side2).
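A rough executable reading of this mutually recursive definition is sketched below. The sketch is ours, not Wyl's code: the game interface (legal_moves, apply, opponent, recognized_loss, recognized_win) is a hypothetical stand-in, and a depth bound replaces the recursive schema that Wyl is given as the rules of the game.

    # Sketch of the mutually recursive LOSS/WIN reading given above.

    def loss(game, state, side, depth):
        """side (to move) loses: every legal move leaves the opponent in WIN."""
        if game.recognized_loss(state, side):
            return True
        if depth == 0:
            return False
        moves = game.legal_moves(state, side)  # takemoves and normalmoves alike
        return bool(moves) and all(
            win(game, game.apply(state, m), game.opponent(side), depth - 1)
            for m in moves)

    def win(game, state, side, depth):
        """side (to move) wins: some legal move leaves the opponent in LOSS."""
        if game.recognized_win(state, side):
            return True
        if depth == 0:
            return False
        return any(
            loss(game, game.apply(state, m), game.opponent(side), depth - 1)
            for m in game.legal_moves(state, side))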
In checkers, there are two kinds of moves: takemoves, in which one piece captures another by jumping over it, and normalmoves, in which a piece simply moves one square.

This completes our overview of the Wyl system and the information that it is initially given. The following three sections describe each of the main phases of the program: envisionment, generalization, and compilation. We illustrate the operation of Wyl as it learns the checkers concept of trap.

3.1 Envisionment

Wyl starts with a given structural training instance (i.e., board position), which it is told is an instance of trap. In Figure 3, we illustrate the first training instance for trap, with red to play. The structural representation of the instance State1 is

    occupied(State1 s2 rk1) ∧ occupied(State1 s10 wm1) ⊃ TRAP(State1 red),

where rk1 and wm1 are playing pieces, described as

    type(wm1 man) ∧ side(wm1 white) ∧ type(rk1 king) ∧ side(rk1 red).

To convert this into a functional instance, Wyl applies a special proof procedure to State1. This proof procedure has the effect of conducting a minimax search to look for known goals. When a known goal is discovered, the proof procedure returns a minimax search tree in which each state is marked with its outcome.

In our trap example, the proof procedure discovers that the board position is an instantiation of the concept loss, with each node representing a state in the search and each branch representing the particular operators in the search. The first operators instantiated are the normalmoves from square s2. These are followed by takemoves that lead to an instantiation of the predicate recognizedLOSS and termination of the search.

The next step is to convert this minimax tree into an explanation tree (along the lines of Mitchell, et al., 1986). An explanation tree is a proof tree that explains the computed outcome (i.e., loss) of the training instance. The minimax tree contains all of the information needed to construct this proof, but usually it also contains extra information that is irrelevant to the proof. Hence, Wyl traverses the minimax tree to extract the minimum (i.e., necessary and sufficient) conditions for the proof of the outcome. Figure 4 shows the functional training instance that is produced by this process.

[Figure 4: Functional training instance for trap]

3.2 Generalization

This functional instance describes a particular way to lose a checkers game. It is a conjunction of two fully instantiated (i.e., ground) sequences of operators, each resulting in a loss. If Wyl were to follow the standard paradigm of explanation-based learning, it would now attempt to find the weakest precondition of this particular operator graph that would result in a loss. However, this is not the concept of trap that the teacher is trying to get Wyl to learn, because it only describes traps in which the trapped piece has two alternative moves. There are other traps, against the sides of the board, in which the trapped piece has only one possible move. Hence, rather than having Wyl generalize based on one training instance, we provide it with several training instances and allow it to perform inductive inference on the functional representations of these instances.
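The paper does not spell out the generalization procedure itself, so the following is our own minimal illustration of the two biases it uses (maximally-specific generalization and no-coincidences) on nested-tuple terms. The two move sequences are transcribed loosely from the garbled figures, in an argument order we chose, and the coincidence constraint is kept only when it recurs in both instances.

    # Sketch (ours): least general generalization of two ground terms.
    # Unequal leaf pairs become variables; the SAME pair always maps to
    # the SAME variable, which preserves equality constraints such as
    # "the square moved to equals the square jumped over."

    def antiunify(t1, t2, subst=None):
        if subst is None:
            subst = {}
        if t1 == t2:
            return t1
        if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
            return tuple(antiunify(a, b, subst) for a, b in zip(t1, t2))
        key = (t1, t2)
        if key not in subst:
            subst[key] = f"?v{len(subst)}"
        return subst[key]

    inst1 = (("normalmove", "s2", "s6", "king", "red"),
             ("takemove", "s10", "s6", "s1", "man", "white"))
    inst2 = (("normalmove", "s28", "s24", "man", "white"),
             ("takemove", "s19", "s24", "s28", "king", "red"))
    print(antiunify(inst1, inst2))
    # (('normalmove', '?v0', '?v1', '?v2', '?v3'),
    #  ('takemove', '?v4', '?v1', '?v5', '?v6', '?v7'))
    # The shared ?v1 records to1 = over, the no-coincidences constraint;
    # Wyl's additional opposing-sides constraint is handled separately.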
To demonstrate this generalization process, let us present Wyl with a second (very well-chosen) training instance. This structural training instance can be expressed in logic as

    occupied(State8 s28 wm1) ∧ occupied(State8 s19 rk1) ⊃ TRAP(State8 white).

In this instance, a red king has trapped a white man against the east side of the board (see Figure 3). Wyl performs the envisionment process and discovers that this situation again leads to a loss - this time for white. The minimax tree is a simple sequence of moves, because white has only one possible move. Figure 5 shows the resulting functional training instance.

[Figure 5: Second functional training instance of trap]

Now that two training examples have been presented, Wyl is able to perform some inductive generalization. Two simple and strong biases are employed to guide this induction. The first is the familiar bias toward maximally-specific generalizations. The two functional instances are generalized as little as possible. The second bias can be stated as "There are no coincidences." More concretely, if the same constant appears at two different points within a single training instance, it is asserted that those two different points are necessarily equal.

The result of applying these two inductive biases to the training instances is shown in Figure 6. In order to make the two separate branches of the first training instance match the single branch in the second instance, Wyl must generalize to a single universally-quantified line of play that allows any number of branches. Similarly, in order to make the kinds, types, and locations of the pieces match, Wyl must generalize all of these specific constants to variables. However, these variables are not completely independent of one another. First, the two sides are known to be opposing. Second, the no-coincidences bias is applied to ensure that the square that the first piece moves to (to1) is the same as the square that the second piece jumps over in the takemove.

Because we chose these two training examples carefully, this generalized functional description is the correct definition of trap. This functional definition of trap can be expressed in logic as

    ∀ state1 side1 from1 from2:
      TRAP(state1 side1) ⇔
        ∀ state2 type1 to1:
          oppositeplayer(side1 side2)
          ∧ normalmove(state1 state2 from1 to1 side1 type1)
          ∧ ∃ state3 type2 to2:
              takemove(state2 state3 from2 to1 to2 side2 type2)
              ∧ recognizedLOSS(state3 side1).

[Figure 6: Generalized functional definition of trap]

3.3 Compilation

The third stage of the learning process is to translate the functional knowledge into a form suitable for recognition - that is, to re-describe the acquired functional concept in the PR. This is a difficult task, because, unlike LEX2, Wyl is not given a good vocabulary for the performance language. The only structural representation that Wyl receives from the teacher is the representation used to describe individual board positions. This language could be used to represent the structural concept, but for trap this would require a large disjunction with 146 disjuncts. For other functional concepts, this approach is clearly infeasible.
Instead of employing the same low-level structural language in which the training instances were presented, Wyl must construct its own structural concept language for expressing the functional concept.

Currently, there are no methods capable of designing such a structural language automatically. The only method that provides even a partial solution to this problem is the method of constraint back-propagation or goal regression (Mitchell, et al., 1986). Utgoff (1986) shows that this method can create new structural terms to extend a structural language. We are experimenting with extensions to his method to construct terms in chess and checkers, but to date we do not have a fully satisfactory method.

Instead, we have explored an alternative (and extremely inefficient) approach in Wyl based on generation and clustering. First we apply the functional definition to generate all possible structural examples of the concept (i.e., all possible board positions that are traps according to the functional definition). This can be viewed as a highly disjunctive description of the concept in the supplied environment language. Next the large number of disjunctions in the description is reduced by a compaction process that creates simple new terms.

The generator works by employing the functional concept as a constructive proof, generating all possible board positions consistent with the concept. (We employ an extension of the Residue inference procedure of the MRS system; see Russell, 1985.) Each trap position generated is a conjunction of two single observable facts like the structural trap examples given above and illustrated in Figure 3. In the trap case, a disjunction of 146 possible positions is generated. The compaction stage then applies two algorithms to compress this set of 146 positions into a disjunction of 11 (more general) descriptions.

The first algorithm discovers simple relational terms that describe relationships between squares. For example, in the first training example of trap (State1), the white man is directly two squares south of the red king. As part of Wyl's initial environment language, primitive facts are given that describe the relationship between any square on the board and its immediate neighbors. The neighbors of s2 are sw(s2, s6) and se(s2, s7). The algorithm identifies new relational terms by a simple breadth-first search from one of the squares in an instance to discover paths to the others. From State1, a disjunction of two terms is found:

    ∀ square1 square2 square3:
      South2squares(square1 square2) ⇔
        [se(square1 square3) ∧ sw(square3 square2)]
        ∨ [sw(square1 square3) ∧ se(square3 square2)]

The second term-creation algorithm is similar to GLAUBER (Langley et al., 1986) and identifies common internal disjunctions over the primitive structural features. The structural instances created by the generator are translated into a feature-vector representation based on the primitive attributes. For example, State1 is translated to the following vector:

    TRAPvector(red king s2 white man s10).

The first three items describe the red king; the following three, the white man. Next, one of the squares is replaced by its relationship with the other. The new relational term South2squares is used, and it yields the new instance:

    TRAPvector(red king s2 white man South2squares).

Common disjunctions are found by locating sets of instance vectors that share all but one feature in common.
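A minimal sketch of this breadth-first path discovery, written by us under the assumption that the neighbor facts are available as a simple adjacency table, behaves as follows on the State1 geometry:

    # Sketch (ours): find the shortest labeled paths of primitive
    # neighbor steps (ne, nw, se, sw) from one square to another;
    # each path found is one disjunct of a new relational term.

    from collections import deque

    def find_paths(src, dst, neighbors, max_len=4):
        frontier = deque([(src, [])])
        found, best = [], None
        while frontier:
            sq, path = frontier.popleft()
            if best is not None and len(path) > best:
                break                       # longer than shortest found
            if sq == dst and path:
                best = len(path)
                found.append(tuple(path))
                continue
            if len(path) < (best if best is not None else max_len):
                for label, nxt in neighbors.get(sq, []):
                    frontier.append((nxt, path + [label]))
        return found

    # A fragment of the board around State1 (red king on s2, white man
    # on s10); squares and links are taken from the text above.
    neighbors = {"s2": [("sw", "s6"), ("se", "s7")],
                 "s6": [("se", "s10")], "s7": [("sw", "s10")]}
    print(find_paths("s2", "s10", neighbors))
    # [('sw', 'se'), ('se', 'sw')]  -> the two disjuncts of South2squares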
For example, consider two trap positions, the initial training instance and a red king on s3 with a white man on s11, given below:

    TRAPvector(red king s2 white man South2squares)
    TRAPvector(red king s3 white man South2squares)

The algorithm identifies the set of squares {s2, s3}, which is named NorthCenterSide. All of the features can be used to form the new terms. Using the trap instances, this algorithm creates terms defining regions such as Center {s6, s7, s8, s14, s15, s16, s22, s23, s24} and NorthSingleSide {s4, NorthCenterSide}. Directional relationships between squares produce terms such as North {ne, nw} and AnyDirection {North, South}. In all, Wyl discovers 13 descriptive terms of this kind, along with 6 relational terms like South2squares.

While this method is very successful in constructing new terms, it is clearly unsatisfactory, since it does not scale up to domains having large or infinite numbers of structural instances. Even for trap this algorithm requires several hours of CPU time on a VAX 11/750. We are optimistic that a more direct algorithm, based on goal regression, can be developed.

4 Relationship to Previous Research

It is informative to compare Wyl to previous work in explanation-based learning. If we were to apply the explanation-based learning paradigm of Mitchell, et al. (1986), we would need to provide Wyl with four things: (a) the goal concept, (b) the domain theory, (c) the operationality criterion, and (d) the training instance. In the checkers domain, the goal concept would be a functional definition of trap. The domain theory would be the rules of checkers and the goals of win, loss, and draw. The operationality criterion would state that the final definition should be given in a structural vocabulary. The training instance would, of course, be a structural example of a trap. When we consider Wyl, we see that all of these things have been provided except for the goal concept. Wyl can be said to acquire the goal concept by inductive inference from the training examples.

An example from LEX2 will clarify this point. In LEX2, one goal concept is Useful-OP3, that is, the set of all integration problems that can be solved by applying a sequence of operators beginning with operator OP3. The domain theory consists of the definition of solvable and solved problems. Imagine a new version of LEX2 constructed along the lines of Wyl (call it WYLLEX). WYLLEX would be given the domain theory, the operationality criterion, and several training instances, but no goal concept. For each (structural) training instance, it would apply its domain theory to convert it into a functional instance. Suppose we are trying to teach WYLLEX the concept of Useful-OP3. We would present positive examples of problems for which applying OP3 leads to a solution. When WYLLEX converted these to functional instances, they would each consist of a sequence of operators beginning with OP3 and ending in a solved problem. Hence, WYLLEX could perform inductive inference on these functional instances and derive the concept of Useful-OP3.

Notice that in order to convert the structural instances into functional instances, WYLLEX must already have a concept more general than the goal concept, namely, the concept of solvable problem. Similarly, Wyl starts out with knowledge about the very general goals of win, loss, and draw and learns more specific goals such as trap and trap-in-2-ply.
(The same is true in Meta-DENDRAL, where all concepts learned are specializations of the initial "half-order theory.") This is not a serious limitation, because it is reasonable to assume that all intelligent agents have available a hierarchy of goals rooted in goals like survival and minimize-resource-consumption.

An area of work closely related to explanation-based learning is the work on purpose-based analogy (Winston, et al., 1983; Kedar-Cabelli, 1985). The constraints imposed on representations by inductive learning are exactly those imposed by analogical reasoning. For two items to be analogous, they must have some commonality. That commonality is often not expressed in surface (observable) features, but in function. A hydrogen atom is like our solar system not because of size or color, but because of the way the respective components interact. So, the best representation for analogical reasoning about different items is one in which their underlying similarity is captured syntactically. The work in Wyl suggests that new analogies may be exploited to identify new functional concepts as specializations of existing goals.

5 Conclusion

In this paper, we have argued that inductive learning is most effective when the concept language captures the "natural" similarities and differences among the training instances. We have also shown that in some domains the representation required for efficient performance is not the "natural" one. To resolve this difficulty, we proposed the "multiple-representation strategy," whereby the learning system translates the training examples into a "natural" representation for inductive learning and then translates the learned concepts into the appropriate performance representation. We have tested this strategy by implementing the Wyl system, which learns functional concepts from structural examples in chess and checkers. Wyl demonstrates three key advantages of this strategy: (a) fewer examples are required to learn the concept, (b) the bias built into the program is very simple (maximally-specific generalization), and (c) the representation language requires little or no domain-specific or concept-specific engineering.

Our analysis of Wyl suggests that previous learning systems can be usefully viewed as pursuing simpler variants of this multiple-representation strategy. This suggests that part of the power of these learning systems derives from the choice of representation (as well as from the use of a domain theory).

6 Acknowledgments

The authors wish to thank Bruce Porter for reading a draft of this paper. The AAAI referees also made many helpful comments. This research was partially supported by a Tektronix Graduate Fellowship (to Flann) and by the National Science Foundation under grant numbers IST-8519926 and DMC-8514949.

7 References

Buchanan, B. G. and Mitchell, T. M., "Model-Directed Learning of Production Rules," in Pattern-Directed Inference Systems, Waterman, D. A. and Hayes-Roth, F. (Eds.), Academic Press, New York, 1978.

DeJong, G., and Mooney, R., "Explanation-Based Learning: An Alternative View," Machine Learning 1, 1986.

Gold, E., "Language Identification in the Limit," Information and Control, Vol. 10, pp. 447-474, 1967.

Horning, J. J., "A Study of Grammatical Inference," Rep. No. CS-139, Computer Science Department, Stanford University, 1969.

Kedar-Cabelli, S. T., "Purpose-Directed Analogy," in Proceedings of the Cognitive Science Society, Irvine, Calif., 1985.

Langley, P. W., Zytkow, J., Simon, H. A., and Bradshaw, G. L., "The Search for Regularity: Four Aspects of Scientific Discovery," in Machine Learning: An Artificial Intelligence Approach, Vol. II, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga Press, Palo Alto, 1986.

Michalski, R. S. and Chilausky, R. L., "Learning by Being Told and Learning from Examples: An Experimental Comparison of Two Methods of Knowledge Acquisition," Policy Analysis and Information Systems, Vol. 4, No. 2, June 1980.

Minsky, M., "Society of Mind," Technical Report, Massachusetts Institute of Technology, 1985.

Mitchell, T. M., Utgoff, P. E. and Banerji, R., "Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics," in Machine Learning: An Artificial Intelligence Approach, Vol. I, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga Press, Palo Alto, 1982.

Mitchell, T., Keller, R., and Kedar-Cabelli, S., "Explanation-Based Generalization: A Unifying View," Machine Learning 1, 1, 1986.

Quinlan, J. R., "Learning Efficient Classification Procedures and Their Application to Chess End Games," in Machine Learning: An Artificial Intelligence Approach, Vol. I, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga Press, Palo Alto, 1982.

Russell, S., "The Compleat Guide to MRS," Rep. No. KSL-85-12, Knowledge Systems Laboratory, Department of Computer Science, Stanford University, 1985.

Utgoff, P. E., "Shift of Bias for Inductive Concept Learning," in Machine Learning: An Artificial Intelligence Approach, Vol. II, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Tioga Press, Palo Alto, 1986.

Winston, P., Binford, T., Katz, B. and Lowry, M., "Learning Physical Descriptions from Functional Definitions, Examples and Precedents," Proceedings of AAAI-83, Washington, D.C., 1983.

Winston, P. H., "Learning Structural Descriptions from Examples," in The Psychology of Computer Vision, Winston, P. H. (Ed.), McGraw-Hill, New York, Ch. 5, 1975.
DISCOVERING FUNCTIONAL FORMULAS THROUGH CHANGING REPRESENTATION BASE

Mieczyslaw M. Kokar
Department of Industrial Engineering and Information Systems
Northeastern University
360 Huntington Avenue, Boston, MA 02115

ABSTRACT

This paper deals with computer generation of numerical functional formulas describing results of scientific experiments (measurements). It describes the methodology for generating functional physical laws called COPER (Kokar 1985a). This method generates only so-called "meaningful functions," i.e., functions that fulfill certain syntactic conditions. In the case of physical laws these conditions are described in the theory of dimensional analysis, which provides rules for grouping arguments of a function into a (smaller) number of dimensionless monomials. These monomials constitute new arguments for which a functional formula is generated. COPER takes advantage of the fact that the grouping is not unique, since it depends on which of the initial arguments are chosen as the so-called "dimensional base" (representation base). For a given functional formula the final result depends on the base. In its search for a functional formula COPER first performs a search through different representation bases for a fixed form of the function before going into more complex functional formulas. It appears that for most of the physical laws only two classes of functional formulas - linear functions and second degree polynomials - need to be considered to generate a formula exactly matching the law under consideration.

1. Introduction

Learning a functional formula from observation is composed of three major steps:
- deciding which features (arguments) to choose as relevant,
- selecting a functional formula generalizing the observational data,
- performing a function fit (calculating coefficients of the formula).

Even though the problem has a very long history in mathematics, only the answer to the third step (given by mathematics) can be perceived as satisfactory. As to the first step, selection of relevant features, factor analysis can be used, but only if the set from which the features are to be selected is known. In the case of selection of functional formulas, mathematics offers the Weierstrass approximation theorem (cf. (Johnson & Riess 1982)), which says that any function (fulfilling some additional restrictions) can be approximated, with any degree of accuracy, by a polynomial (possibly of a very high degree).

Application of this theorem to describing observational data with functional formulas (e.g., in generalizing results of scientific experiments) leads to two kinds of problems. First, the formulas generated in this way do not fulfill some conditions of syntactic consistency required from formulas describing results of physical measurements. For instance, a formula generated according to this theorem might result in addition of meters to seconds, kilograms to square meters, etc., which are dimensionally inconsistent. The second problem is that it usually leads to quite complex formulas which are unacceptable to humans. Humans expect formulas similar to the ones representing physical laws - simple and consistent. Because of this the Weierstrass theorem alone is not acceptable as a tool for generating functional formulas describing observational data. Nevertheless, it should not be ignored totally. It provides a very important feature - convergence of an algorithm for generating functional descriptions.
The simplicity of physical laws contrasted with the complexity of functional formulas generated using standard approximation methods suggests that there should be a more "intelligent" method of describing results of scientific experiments by functional formulas. Attempts have been undertaken to use heuristics for discovery of functional formulas. The BACON system (Langley et al. 1983) used heuristics to generate both the features (arguments of a function) and the form of the function. The drawback of this system is that it can generate a formula which is dimensionally inconsistent. ABACUS (Falkenhainer 1985) uses dimensionality principles to constrain its search space. Dimensionality is used to eliminate those formulas which are dimensionally inconsistent. The search space and the space of dimensionally consistent functional formulas partially overlap, i.e., some of the formulas belong to the search space but not to the dimensionally consistent formulas (so they are tested and then rejected by ABACUS), and some dimensionally consistent formulas cannot be generated (they are not covered by the search space).

Obviously, it is impossible to generate the space of all functional formulas, but at least the whole search space could be a subset of all dimensionally consistent formulas. This paper describes the method of generating functional formulas used by the system called COPER, which fulfills this requirement. The methodology employed in COPER is based on three main principles: meaningful functions, the Weierstrass theorem, and change of representation base.

In short, the principle of meaningfulness of functions says that the functional formulas must fulfill some syntactic conditions to be interpretable in the relevant domain theory. For instance, in physics, a formula resulting in addition of meters to seconds is unacceptable. The most general restrictions imposed by the domain knowledge of physics are contained in the theory of dimensional analysis. COPER utilizes it in such a way that all the formulas it generates fulfill the formal condition of meaningfulness.

The second principle, the Weierstrass theorem, justifies the use of the space of polynomials in the search for functional formulas.

The main strength of COPER, however, lies in the third principle, change of representation base. Perhaps it is worthwhile to point here to an expressive example in (Charniak & McDermott 1985). In this example the numbers 7, 60, 70, 546, 627 are "magic." In base 10 it is hard to see what they have in common. In the representation base 9 they are 7, 66, 77, 666, 766, and the common feature becomes much more apparent - they are composed only of two different digits, 6 and 7.

In the case of functional formulas the principle of change of representation base is related to dimensional analysis, mentioned above in reference to the principle of meaningfulness. Dimensional analysis provides rules for combining arguments of a function into a (smaller) number of dimensionless monomials. These monomials constitute new transformed arguments. The problem of finding a functional formula is reduced to finding a formula for the new transformed arguments. The interesting point is that the grouping of the arguments into monomials is not unique; usually there are a number of possible combinations. How the monomials are combined depends on which arguments have been selected as the so-called "dimensional base."
Applying the same functional formula to the different groups of monomials (generated out of the same arguments) leads to different results. For a fixed functional formula we can either obtain a very simple resulting formula or a very complex one. In other words, the functional formula one derives depends on the selection of the dimensional base. This is analogous to the above-mentioned selection of representation base.

Sections 2, 3 and 4 provide more details on the three principles employed in COPER.

2. Meaningful functions.

The notion of meaningfulness is known in the philosophy of science (measurement theory). It was investigated by many researchers, cf. (Adams et al. 1965), (Luce 1978), (Narens 1981), (Roberts 1984). In this approach functions are relations among elements called quantities. The structure of quantities is defined by some operations on them. For instance, in the case of physical quantities the operations are multiplication and raising to a power. In addition to this, because numbers are also part of the quantity structure, other operations (logarithm, addition, etc.) are allowed for numbers. Meaningful functions are those that are expressed solely in terms of the operations defining the quantity structure.

It is possible to show ((Luce 1978), (Kokar 1985)) that meaningful functions are invariant with respect to transformations of the units in terms of which the quantities are expressed. The property of invariance is very important because in the theory of dimensional analysis (e.g., (Drobot 1953), (Whitney 1968)) there exists a theorem, called the Pi-theorem, which relates the form of a function with the property of invariance.

To explain what the Pi-theorem is, let us introduce some notation. Assume that a function is to relate the value of a dependent argument Z with the values of the independent arguments X1, ..., Xn, which is usually represented as

    Z = F(X1, ..., Xn).

The arguments X1, ..., Xn can be divided into two groups: a dimensionally independent set of arguments (usually called the "dimensional base") and the remaining arguments, which are dimensionally dependent on the previous ones. Dimensional analysis provides algorithms for the division. Roughly speaking, a dimensional base is a maximal subset whose elements cannot be combined into a dimensionless monomial (using multiplication and exponentiation). For instance, velocity v[m/s] and acceleration a[m/s2] can be part of a dimensional base because a monomial of the form v^b1 · a^b2 can be dimensionless only for the values b1, b2 = 0. In this paper we are not going to discuss dimensional analysis; an interested reader is referred to the cited literature ((Drobot 1953), (Whitney 1968)).

Assume that some of the arguments have been selected as the base arguments; let us denote them A1, A2, ..., Am. The rest of them will be denoted as B1, B2, ..., Br (where m + r = n). Our function takes the form

    Z = F(A1, ..., Am, B1, ..., Br).

The Pi-theorem says that such a function can be represented in the form

    Z = f(Q1, ..., Qr) · A1^a1 · ... · Am^am,

where each of the Q's is a dimensionless monomial of the form

    Qj = Bj / (A1^aj1 · ... · Am^ajm),

and dimensional analysis provides algorithms for calculating all the exponents ai, aji (i = 1, ..., m; j = 1, ..., r).

Note that even the straightforward application of dimensional analysis significantly reduces the search space. Instead of searching the space of functions of n arguments we need to search the space of functions of r = n - m arguments.
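These bookkeeping steps reduce to integer linear algebra over dimension-exponent vectors. The sketch below is ours, not COPER's implementation; it uses the arguments of the motion example from Section 5 and the fundamental units (m, s):

    # Sketch (ours) of the dimensional-analysis bookkeeping: a subset of
    # arguments is a dimensional base iff its exponent vectors are
    # linearly independent; solving a linear system expresses a leftover
    # argument as a monomial in the base.

    import numpy as np

    dims = {            # dimension exponents as (meters, seconds)
        "S":  (1, 0),   # distance [m]
        "Vo": (1, -1),  # initial velocity [m/s]
        "a":  (1, -2),  # acceleration [m/s^2]
        "t":  (0, 1),   # time [s]
    }

    def is_base(names):
        M = np.array([dims[n] for n in names], dtype=float)
        return np.linalg.matrix_rank(M) == len(names)

    def monomial(target, base):
        """Solve for exponents e with dims[target] = sum_i e_i * dims[base_i]."""
        A = np.array([dims[b] for b in base], dtype=float).T
        y = np.array(dims[target], dtype=float)
        e, *_ = np.linalg.lstsq(A, y, rcond=None)
        return {name: float(round(x, 10)) for name, x in zip(base, e)}

    print(is_base(("Vo", "t")))        # True
    print(monomial("a", ("Vo", "t")))  # {'Vo': 1.0, 't': -1.0}: a ~ Vo/t,
                                       # so Q1 = a / (Vo^1 * t^-1) = a*t/Vo
    print(monomial("S", ("Vo", "t")))  # {'Vo': 1.0, 't': 1.0}: S = f(Q1)*Vo*t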
Another advantage is that because the Q's are dimensionless (like numbers), any of the functional formulas known for numbers can be used to generate the formula f. Therefore this method allows us to generate only meaningful formulas, and all formulas admissible for numbers can be used to generate the function f (this is not the case with the function F). The system does not need to generate a formula and then test its meaningfulness; the generated formula is guaranteed to be meaningful.

3. The Weierstrass theorem.

As we mentioned before, the Weierstrass theorem says that a function can be approximated with any accuracy by a polynomial, possibly of a high degree. Application of this theorem to the function F(A1, ..., Am, B1, ..., Br) would result in formulas which might not fulfill the requirement of meaningfulness. It can however be applied to the transformed function f(Q1, ..., Qr), since the arguments Q1, ..., Qr are dimensionless and any operation admissible for numbers can be used here without any harm to meaningfulness. Because the Pi-theorem establishes equivalence between the two functions, the Weierstrass theorem guarantees existence of a polynomial approximating a searched function. The system can begin with the polynomial of the lowest degree (a linear function), and if some threshold accuracy of approximation is not achieved, then the degree of the polynomial can be incremented and the function fit performed again.

Unfortunately, this is still not satisfactory; the degree of the polynomial may become unacceptably large. In the case of physical laws it does not guarantee obtaining results which would be an exact match with the formulas representing the laws. The usual way of solving this problem is to try out some other functional formulas, not only polynomials. There is however another possibility - changing the representation base (dimensional base). This approach has been incorporated into COPER.

4. Change of representation base.

According to the rules of dimensional analysis the set of arguments of a functional formula has to be subdivided into two sets - a dimensional base and the rest. Dimensional analysis provides algorithms for testing whether a given set of arguments satisfies the condition of a dimensional base. Three possible situations can take place:
- none of the subsets fulfills the condition of a dimensional base,
- only one subset fulfills this condition,
- there is more than one possible dimensional base.

In the first case it is not possible to represent this function with the arguments provided. This circumstance indicates that there should be some arguments that are not known to the system. This feature can be utilized nicely in the discovery system. Take for instance Ohm's law U = R·I, where U represents voltage, I current, and R resistance. If COPER is asked to find a functional formula U = F(I), it will immediately request more arguments, because it is impossible to express U solely in terms of I using the admissible operations of multiplication and raising to a power. The resulting formula would be dimensionally inconsistent.

If the number of arguments is equal to the number of elements that must be included in a dimensional base, and the set of arguments satisfies the condition of a dimensional base, then the form of the function is determined uniquely. To understand this, recall the Pi-theorem from Section 2. If n = m, then there are no Bj's, which means that there are no Qj's, and consequently the form of the function must be

    Z = F(A1, ..., Am) = C · A1^a1 · ... · Am^am,

where C is a constant numerical value.
The reader can easily check that many physical laws have this form. If the dimensions of the arguments A1, ..., Am are known, then the values of the exponents a1, ..., am can be calculated using algorithms of dimensional analysis. As an example of this situation take the formula describing the pendulum period. If COPER is given arguments T[s] (pendulum's period, dependent argument), L[m] (pendulum's string length) and g[m/s2] (acceleration of gravity), it will generate the formula T = C · √(L/g) immediately, and then will calculate the value of the coefficient C using the measurement results.

In the third case, if there is more than one dimensional base among the arguments X1, ..., Xn, dimensional analysis does not give us any indication as to which base to choose. However, this may be advantageous to the process of searching for the form of the function. Instead of searching through the space of functional formulas, we can search through the space of dimensional bases first. Only if this search does not lead us to a plausible solution should we continue the search of the functional formulas space. Here is how such a search can proceed.

Step 1. Choose a form of the function f (see the Pi-theorem in Section 2).

Step 2. Perform a search through the possible dimensional bases. To this end the system has to select a subset of arguments, test whether it satisfies the condition of a dimensional base, and if it does, then express the Qj's in terms of this base (i.e., calculate the exponents aji), calculate the exponents ai, calculate the best coefficients for the given functional formula f, and calculate the accuracy of approximation of the experimental data by the function.

Step 3. Select the best representation, i.e., the one for which the accuracy of approximation (approximation error) is the best.

Step 4. If the accuracy for the selected formula exceeds some threshold value, then select another form of the function f and start from Step 1; otherwise stop.

In COPER such an approach has been implemented and the results are very promising. The space of functional formulas currently includes polynomials. In future implementations the space will be extended to other functional formulas. COPER starts its search with a polynomial of the lowest degree - a linear function. It tests all the possible bases. In the case of physical laws such an exhaustive search is feasible even for large amounts of experimental data. In many cases of physical laws an exact match is achieved with a first degree polynomial, i.e., there exists a dimensional base for which the function gives an exact match with the formula representing a particular physical law. The example presented in the next section is intended to show how this algorithm works.

One of the directions for future research is to investigate the influence of noise. The antinoise protection that COPER has stems from the fact that its decisions are based on the whole set of measurement results available at the time.

5. Example.

We will show the results of an application of the described method to the discovery of the functional formula representing uniformly accelerated motion:

    S = Vo·t + a·t^2/2.

In this formula S[m] stands for distance (expressed in meters), Vo[m/s] for initial velocity, a[m/s2] for acceleration, and t[s] for time. The system knows values of S for many different values of Vo, t, a, and the units of measurement (in square brackets). In this example 1000 such values were generated out of the above formula. In practice they could be obtained from experiments.
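Before walking through COPER's run, a toy re-enactment of Steps 1-4 on this example may be useful. The code is ours, not COPER's; the power products and dimensionless monomials are the ones the Pi-theorem yields for the three candidate bases, and the exact/inexact split it prints anticipates the tables below.

    # Sketch (ours): generate "measurement" data, then fit the linear
    # form f(Q1) = Co + C1*Q1 for each candidate dimensional base and
    # compare the resulting approximation errors.

    import numpy as np

    rng = np.random.default_rng(0)
    Vo, a, t = rng.uniform(1, 10, (3, 1000))
    S = Vo * t + 0.5 * a * t**2                 # the law being rediscovered

    bases = {                                   # base -> (power product, Q1)
        "Vo,t": (Vo * t,    a * t / Vo),
        "Vo,a": (Vo**2 / a, a * t / Vo),
        "a,t":  (a * t**2,  Vo / (a * t)),
    }
    for name, (prod, Q) in bases.items():
        X = np.column_stack([prod, Q * prod])   # S ~ Co*prod + C1*(Q1*prod)
        coef, *_ = np.linalg.lstsq(X, S, rcond=None)
        rms = np.sqrt(np.mean((X @ coef - S) ** 2))
        print(f"{name}: Co={coef[0]:10.3f}  C1={coef[1]:8.3f}  rms={rms:.2e}")
    # Two bases fit exactly (rms near machine precision); one does not.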
The goal for COPER is to discover the above formula given the experimental results and knowledge about the dimensions of the arguments. Obviously, the system does not have any knowledge about the form of the function. It knows only that it is supposed to generate a formula S = F(Vo, a, t) describing the experimental results.

In this particular case the dimensional base must consist of two arguments (any three arguments could be combined into a dimensionless monomial). Therefore there are at most three possible choices: (Vo, t), (Vo, a) and (a, t). Any one of the pairs may be selected, since all fulfill the condition of a dimensional base. Below we represent the application of the Pi-theorem to this function for all possible selections of a dimensional base (note that in each case the new argument Q1 is dimensionless).

Table 1. Application of the Pi-theorem to S = F(Vo, a, t)

    Base  | Resulting formula
    ------|--------------------------------------------------
    Vo, t | S = f(Q1)·Vo^1·t^1  = f(a/(Vo^1·t^-1))·Vo^1·t^1
    Vo, a | S = f(Q1)·Vo^2·a^-1 = f(t/(Vo^1·a^-1))·Vo^2·a^-1
    a, t  | S = f(Q1)·a^1·t^2   = f(Vo/(a^1·t^1))·a^1·t^2

As we can see, the problem of selecting a functional formula has been reduced from a function of three arguments to a function of one argument. In the first step of the search procedure COPER assumes that the functional formula for f is linear, i.e., f(Q1) = Co + C1·Q1, and starts searching through the different dimensional bases. It represents the problem in the forms shown in the above table and performs a function fit. It then calculates the values of the coefficients Co and C1. Both the coefficients and the degree of accuracy of the fit are different for the three bases. The coefficients of the function and the degrees of accuracy obtained are represented in Table 2.

Table 2. Coefficients and accuracies for different bases

    Base  | Co      | C1   | Accuracy
    ------|---------|------|---------
    Vo, t | 1       | 0.5  | 1.40E-2
    Vo, a | -186294 | 2135 | 4.78E+6
    a, t  | 0.5     | 1    | 1.84E-4

The reason for the wide variation in the degrees of accuracy (ten orders of magnitude) becomes apparent when we substitute the values of the coefficients Co, C1 into the formulas and perform some simple symbolic operations leading to the elimination of parentheses, as in the following table.

Table 3. Final functional formulas for different bases

    Base  | Resulting formula
    ------|-----------------------------------------------------
    Vo, t | S = Co·Vo·t + C1·a·t^2  = Vo·t + 0.5·a·t^2
    Vo, a | S = Co·Vo^2/a + C1·Vo·t = -186294·Vo^2/a + 2135·Vo·t
    a, t  | S = Co·a·t^2 + C1·Vo·t  = 0.5·a·t^2 + Vo·t

For the bases (Vo, t) and (a, t) we received an exact match with the original formula describing this law. Therefore no more searching is required; the first degree polynomial satisfies the requirement of plausibility (low value of accuracy).

6. Conclusions.

The approach to searching for a functional formula describing scientific experimental results presented in this paper has been tested on many physical laws with positive results. Its strength lies in the fact that it generates very simple functional formulas exactly matching physical laws. This is achieved through changing the representation base (dimensional base) before going into more complex functional formulas.
The approach also has been tested on real data, i.e., on the results of scientific experiments obtained by measuring a physical process for which the functional formula was not known. The results of these investigations have been described partially in (Kokar 1975, 1978). Here again COPER's formulas were simple while accurate. Still, there are physical laws for which COPER cannot generate an exact formula, e.g., if a logarithm is part of the formula. It can come up with a polynomial which describes the experimental data with sufficient precision (Weierstrass theorem), but the formula is too complex (too high a degree of the polynomial). Research is underway to incorporate further heuristics for searching the space of functional formulas (not only polynomials). In any case, the idea of changing the description base proved to be very useful in the process of discovery of functional formulas describing physical laws.

REFERENCES

[1] Adams, E. W., Fagot, R. F., and Robinson, R. E. (1965), "A theory of appropriate statistics," Psychometrika, 30, pp. 99-127.

[2] Charniak, E., McDermott, D. (1985), Introduction to Artificial Intelligence, Addison-Wesley, pp. 616-617.

[3] Drobot, S. (1953), "On the Foundations of Dimensional Analysis," Studia Mathematica, 14, pp. 84-89.

[4] Falkenhainer, B. (1985), "Proportionality Graphs, Units Analysis, and Domain Constraints: Improving the Power and Efficiency of the Scientific Discovery Process," Proceedings of the Ninth International Joint Conference on Artificial Intelligence, August 1985, Los Angeles, California, pp. 552-554.

[5] Johnson, L. W., and Riess, R. D. (1982), Numerical Analysis, Addison-Wesley, p. 205.

[6] Kokar, M. (1975), "The Choice of the Form of the Mathematical Model Using Dimensional Analysis" (in Polish), Inzynieria Chemiczna, V, 1, pp. 103-119.

[7] Kokar, M. (1978), "A System Approach to Search of Laws of Empirical Theories," Current Topics in Cybernetics and Systems, Berlin-Heidelberg-New York.

[8] Kokar, M. M. (1985), "On Invariance in Dimensional Analysis," Technical Report MMK-2-85, College of Engineering, Northeastern University, Boston, Massachusetts.

[9] Kokar, M. M. (1985a), "Coper: A Methodology for Learning Invariant Functional Descriptions," in R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (Eds.), Machine Learning: A Guide to Current Research, Kluwer Academic Publishers.

[10] Langley, P., Bradshaw, G. L., Simon, H. A. (1983), "Rediscovering Chemistry with the Bacon System," in R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach, Tioga Pub., pp. 307-330.

[11] Luce, R. D. (1978), "Dimensionally Invariant Numerical Laws Correspond to Meaningful Qualitative Relations," Philosophy of Science, 45, pp. 1-16.

[12] Narens, L. (1981), "A General Theory of Ratio Scalability, with Remarks about the Measurement-Theoretic Concept of Meaningfulness," Theory and Decision, 13, pp. 1-70.

[13] Roberts, F. S. (1984), "On the theory of meaningfulness of ordinal comparisons in measurement," Measurement, 1, pp. 35-38.

[14] Whitney, H. (1968), "The Mathematics of Physical Quantities, Parts I and II," American Mathematical Monthly, 75, pp. 115-138 and 227-256.
On Debugging Rule Sets When Reasoning Under Uncertainty

David C. Wilkins and Bruce G. Buchanan
Department of Computer Science
Stanford University
Stanford, CA 94305

ABSTRACT

Heuristic inference rules with a measure of strength less than certainty have an unusual property: better individual rules do not necessarily lead to a better overall rule set. All less-than-certain rules contribute evidence towards erroneous conclusions for some problem instances, and the distribution of these erroneous conclusions over the instances is not necessarily related to individual rule quality. This has important consequences for automatic machine learning of rules, since rule selection is usually based on measures of quality of individual rules.

In this paper, we explain why the most obvious and intuitively reasonable solution to this problem, incremental modification and deletion of rules responsible for wrong conclusions a la Teiresias, is not always appropriate. In our experience, it usually fails to converge to an optimal set of rules. Given a set of heuristic rules, we explain why the best rule set should be considered to be the element of the power set of rules that yields a global minimum error with respect to generating erroneous positive and negative conclusions. This selection process is modeled as a bipartite graph minimization problem and shown to be NP-complete. A solution method is described, the Antidote Algorithm, that performs a model-directed search of the rule space. On an example from medical diagnosis, the Antidote Algorithm significantly reduced the number of misdiagnoses when applied to a rule set generated from 104 training instances.

I Introduction

Reasoning under uncertainty has been widely investigated in artificial intelligence. Probabilistic approaches are of particular relevance to rule-based expert systems, where one is interested in modeling the heuristic and evidential reasoning of experts. Methods developed to represent and draw inferences under uncertainty include the certainty factors used in Mycin [2], fuzzy set theory [12], and the belief functions of Dempster-Shafer theory [10] [5]. In many expert system frameworks, such as Emycin, Expert, MRS, S.1, and Kee, the rule structure permits a conclusion to be drawn with varying degrees of certainty or belief. This paper addresses a concern common to all these methods and systems.

In refining and debugging a probabilistic rule set, there are three major causes of errors: missing rules, wrong rules, and deleterious interactions between good rules. The purpose of this paper is to explicate a type of deleterious interaction and to show that it (a) is indigenous to rule sets for reasoning under uncertainty, (b) is of a fundamentally different nature from missing and wrong rules, (c) cannot be handled by traditional methods for correcting wrong and missing rules, and (d) can be handled by the method described in this paper.

In section 2, we describe the type of deleterious rule interactions that we have encountered in connection with automatic induction of rule sets, and explain why the use of most rule modification methods fails to grasp the nature of the problem. In section 3, we discuss approaches to debugging and refining rule sets and explain why traditional rule set debugging methods are inadequate for handling global interactions. In section 4, we formulate the problem of reducing deleterious interactions as a bipartite graph minimization problem and show that it is NP-complete.
In section 5, we present a heuristic solution method called the Antidote Algorithm. Finally, our experiences in using the Antidote Algorithm are described.

A brief description of terminology will be helpful to the reader. Assume there exists a collection of training instances, each represented as a set of feature-value pairs of evidence and a set of hypotheses. Rules have the form LHS => RHS (CF), where LHS is a conjunction of evidence, RHS is a hypothesis, and CF is a certainty factor or its equivalent. A rule that correctly confirms a hypothesis generates true positive evidence; one that correctly disconfirms a hypothesis generates true negative evidence. A rule that incorrectly confirms a hypothesis generates false positive evidence; one that incorrectly disconfirms a hypothesis generates false negative evidence. False positive and false negative evidence can lead to misdiagnoses of training instances.

II Inexact Reasoning and Rule Interactions

When operating as an evidence-gathering system [2], an expert system accumulates evidence for and against competing hypotheses. Each rule whose preconditions match the gathered data contributes either positively or negatively toward one or more hypotheses. Unavoidably, the preconditions of probabilistic rules succeed on instances where the rule will be contributing false positive or false negative evidence for conclusions. For example, consider the following rule:¹

    R1: Surgery=Yes AND GramNeg_Infection=Yes => Klebsiella=Yes (0.77)

The frequency with which R1 generates false positive evidence has a major influence on its CF of 0.77, where -1 <= CF <= 1. Indeed, given a set of training instances, such as a library of medical cases, the certainty factor of a rule can be given a probabilistic interpretation² as a function Φ(x1, x2, x3), where x1 is the fraction of the positive instances of a hypothesis where the rule premise succeeds, thus contributing true positive or false negative evidence; x2 is the fraction of the negative instances of a hypothesis where the rule premise succeeds, thus contributing false positive or true negative evidence; and x3 is the ratio of positive instances of a hypothesis to all instances in the training set. For R1 in our domain, Φ(.43, .10, .22) = 0.77, because statistics on 104 training instances yield the following values:

    x1: LHS true among positive instances = 10/23
    x2: LHS true among negative instances = 8/81
    x3: RHS true among all instances = 23/104

Hence, R1 generates false positive evidence on eight instances, some of which may lead to false negative diagnoses. But whether they do or not depends on the other rules in the system; hence our emphasis on taking a global perspective.

The usual method of dealing with situations such as this is to make the rule fail less often by specializing its premise [8]. For example, surgery could be specialized to neurosurgery, and we could replace R1 with:

    R2: Neurosurgery=Yes AND GramNeg_Infection=Yes => Klebsiella=Yes (0.92)

On our case library of training instances for the R2 rule, Φ(.26, .02, .22) = 0.92, so R2 makes erroneous inferences in two instances instead of eight. Nevertheless, modifying R1 to be R2 on the grounds that R1 contributes to a misdiagnosis is not always appropriate; we offer three objections to this frequent practice.
First, both rules are inexact rules that offer advice in the face of limited information, and their relative accuracy and correctness is explicitly represented by their respective CFs. We expect them to fail; hence failure should not necessarily lead to their modification. Second, all probabilistic rules reflect a trade-off between generality and specificity. An overly general rule provides too little discriminatory power, and an overly specific rule contributes too infrequently to problem solving. A policy on proper grain size is explicitly or implicitly built into rule induction programs; this policy should be followed as much as possible. Specialization produces a rule that usually violates such a policy. Third, if the underlying problem for an incorrect diagnosis is rule interactions, a more specialized rule, such as the specialization of R1 to R2, can be viewed as creating a potentially more dangerous rule. Although it only makes an incorrect inference in two instead of eight instances, these two instances will now be harder to counteract when they contribute to misdiagnoses, because R2 is stronger. Note that a rule with a large CF is more likely to have its erroneous conclusions lead to misdiagnoses. This perspective motivates the prevention of misdiagnoses in ways other than the use of rule specialization or generalization.

Besides rule modification, another way of nullifying the incorrect inference of a rule in an evidence-gathering system is to introduce counteracting rules. In our example, these would be rules with a negative CF that conclude Klebsiella on the false positive training instances that lead to misdiagnoses. But since these new rules are probabilistic, they introduce false negatives on some other training instances, and these may lead to misdiagnoses. We could add yet more counteracting rules with a positive CF to nullify any problems caused by the original counteracting rules, but these rules introduce false positives on yet other training instances, and these may lead to other misdiagnoses. Also, a counteracting rule is often of lower quality in comparison to rules in the original rule set; if it were otherwise, the induction program would have included the counteracting rule in the original rule set. Clearly, adding counteracting rules is not necessarily the best way of dealing with misdiagnoses made by probabilistic rules.

III Debugging Rule Sets and Rule Interactions

Assume we are given a set of probabilistic rules that were either automatically induced from a set of training cases or created manually by an expert and knowledge engineer. In refining and debugging this probabilistic rule set, there are three major causes of errors: missing rules, wrong rules, and unexpected interactions among good rules. We first describe types of rule interactions, and then show how the traditional approach to debugging is inadequate.

A. Types of rule interactions

In a rule-based system, there are many types of rule interactions. Rules interact by chaining together, by using the same evidence for different conclusions, and by drawing the same conclusions from different collections of evidence.

¹This is a simplified form of ($And (Same Cntxt Surgery)) => (Conclude Cntxt Gram-Negative-1 Klebsiella Tally 770).

²See Appendix 1 for a description of the function Φ. This statistical interpretation of CFs deemphasizes incorporating orthogonal utility measures as discussed in [2].
Thus one of the lessons learned from research on MYCIN [2] was that complete modularity of rules is not possible to achieve when rules are written manually. An expert uses other rules in a set of closely interacting rules in order to define a new rule, in particular to set a CF value relative to the CFs of interacting rules.

Automatic rule induction systems encounter the same problems. Moreover, automatic systems lack an understanding of the strong semantic relationships among concepts that would allow judgements about the relative strengths of evidential support. Instead, induction systems use biases to guide the rule search [8] [3]. Examples of some biases used by the induction subsystem of the Odysseus apprenticeship learning program are rule generality, whereby a rule must cover a certain percentage of instances; rule specificity, whereby a rule must be above a minimum discrimination threshold; rule colinearity, whereby rules must not be too similar in classification of the instances in the training set; and rule simplicity, whereby a maximum bound is placed on the number of conjunctions and disjunctions [3].

B. Traditional methods of debugging a rule set

The standard approach to debugging a rule set consists of iteratively performing the following steps:

Step 1. Run the system on cases until a false diagnosis is made.

Step 2. Track down the error and correct it, using one of five methods pioneered by Teiresias [4] and used by knowledge engineers generally:

    Method 1: Make the preconditions of the offending rules more specific or sometimes more general.³
    Method 2: Make the conclusions of offending rules more general or sometimes more specific.
    Method 3: Delete offending rules.
    Method 4: Add new rules that counteract the effects of offending rules.
    Method 5: Modify the strengths or CFs of offending rules.

This approach may be sufficient for correcting wrong and missing rules. However, it is flawed from a theoretical point of view with respect to its sufficiency for correcting problems resulting from the global behavior of rules over a set of cases. It possesses two serious methodological problems. First, using all five of these methods is not necessarily appropriate for dealing with global deleterious interactions.
In section 2 we explained why in some situations modifying the offending rule or adding counteracting rules leads to problems and misses the point of having probabilistic rules; this eliminates methods 1, 2 and 4. If rules are being induced from a training set of cases, modifying the strength of the rule is illegal, since the strength of the rule has a probabilistic interpretation, being derived from frequency information in the training instances; this eliminates method 5. Only method 3 is left to cope with deleterious interactions. The second methodological problem is that the traditional method picks an arbitrary case to run in its search for misdiagnoses. Such a procedure will often not converge to a good rule set, even if modifications are restricted to rule deletion. Example 2 in section 5.B illustrates this situation.

Our perspective on this topic evolved in the course of experiments in induction and refinement of knowledge bases. Using "better" induction biases did not always produce rule sets with better performance, and this prompted investigating the possibility of global probabilistic interactions. Our original approach to debugging was similar to the Teiresias approach. Often, correcting a problem led to other cases being misdiagnosed, and in fact this type of automated incremental debugging seldom converged to an acceptable set of rules. It might have if we had engaged in the common practice of "tweaking" the CF strengths of rules. However, this was not permissible, since our CF values have a precise probabilistic interpretation.

IV Problem Formalization

Assume there exists a large set of training instances, and a rule set for solving these instances has been induced that is fairly complete and contains rules that are individually judged to be good. By good, we mean that they individually meet some predefined quality standards such as the biases described in section 3.A. Further, assume that the rule set misdiagnoses some of the instances in the training set. Given such an initial rule set, the problem is to find a rule set that meets some optimality criteria, such as to minimize the number of misdiagnoses without violating the goodness constraints on individual rules.⁴ Now, modifications to rules, except for rule deletion, generally break the predefined goodness constraints. And adding other rules is not desirable, for if they satisfied the goodness constraints they would have been in the original rule set produced by the induction program. Hence, if we are to find a solution that meets the described constraints, the solution must be a subset of the original rule set.⁵

The best rule set is viewed as the element of the power set of rules in the initial rule set that yields a global minimum weighted error. A straightforward approach is to examine and compare all subsets of the rule set. However, the power set is almost always too large to work with, especially when the initial set has deliberately been generously generated. The selection process can be modeled as a bipartite graph minimization problem as follows.

[Figure 1: Bipartite Graph Formulation -- a rule set R1 (Φ1), R2 (Φ2), ... on one side and an instance set I1 (Ψ1), I2 (Ψ2), ... on the other, joined by weighted arcs, where a_ij = Φj if arc [Rj, Ii] exists and 0 otherwise.]

A. Bipartite graph minimization formulation

For each hypothesis in the set of training instances, define a directed graph G(V,A), with its vertices V partitioned into two sets I and R, as shown in Figure 1. Elements of R represent rules, and the evidential strength of Rj is denoted by Φj. Each vertex in I represents a training instance; for positive instances Ψi is 1, and for negative instances Ψi is -1. Arcs [Rj, Ii] connect a rule in R with the training instances in I for which its preconditions are satisfied; the weight of arc [Rj, Ii] is Φj. The weighted arcs terminating in a vertex in I are combined using an evidence combination function Φ', which is defined by the user.

³Ways of generalizing and specializing rules are nicely described in [8]. They include dropping conditions, changing constants to variables, generalizing by internal disjunction, tree climbing, interval closing, exception introduction, etc.

⁴In Meta-Dendral, a large initial rule set was created by the RULEGEN program, which produced plausible individual rules without regard to how the rules worked together. The RULEMOD program selected and refined a subset of the rules. See [1] for details.

⁵If we discover that this solution is inadequate for our needs, then introducing rules that violate the induction biases is justifiable.
The combined evidence classifies an instance as a positive instance if the combined evidence is above a user-specified threshold CFt. In the example in section V.B, CFt is 0, while for Mycin, CFt is 0.2.

More formally, assume that I1, ..., Im is a training set of instances, and R1, ..., Rn are the rules of an initial rule set. Then we want to minimize

    Z = Σ(j=1..n) bj·rj

subject to the constraints

    Φ'(ai1·r1, ..., ain·rn)  θi  CFt    for each training instance Ii
    Σ(j=1..n) rj >= Rmin

where

    rj = 1 if Rj is in the solution rule set, else 0;
    bj = bias constant to preferentially favor rules;
    CFt = the CF threshold for positive classification;
    Φ' = n-ary function for combining CFs, where the time to evaluate is polynomial in n;
    Rmin = minimum number of rules in the solution set;
    θi = ">" if Ψi is 1, else "<=".

The solution formulation solves for rj; if rj = 1 then rule Rj is in the final rule set. The main task of the user is setting up the aij matrix, which associates rules and instances and indicates the strength of the associations. Note that the value of aij is zero if the preconditions of Rj are not satisfied in instance Ii. Preference can be given to particular rules via the bias bj in the objective function Z. For instance, the user may wish to favor the selection of strong rules. The Rmin constraint forces the solution rule set to be above a minimum size. This prevents finding a solution that is too specialized for the training set, giving good accuracy on the training set but having a high variance on other sets, which would lead to poor performance.

Theorem 1. The bipartite graph minimization problem for heuristic rule set optimization is NP-complete.

Proof. A sketch of our proof is given; details can be found in [11]. To show that the bipartite graph minimization problem is NP-complete, we use reduction from Satisfiability. Satisfiability clauses are mapped into graph instance nodes and the atoms of the clauses are mapped into rule nodes. Arcs connect rule nodes to instance nodes when the respective literals appear in the respective clauses. The evidence combination function ensures that at least one arc goes into each clause node from a rule node representing a true literal. The evidence combination function also performs bookkeeping functions. □

V Solution Method

In this section, a solution method called the Antidote Algorithm is described, and an example is provided based on the graph shown in Figure 2. An alternative solution method that uses zero-one integer programming is described in [11]. It is more robust, but places a restriction on the evidence combination function, namely that the evidence be additively combined. It is not adequate when using the certainty factor model, but may be suitable for connectionist approaches.

A. The Antidote Algorithm

The following model-directed search method, the Antidote Algorithm, is one that we have developed and used in our experiments:

Step 1. Assign values to penalty constants. Let p1 be the penalty assigned to a poison rule. A poison rule is a strong rule giving erroneous evidence for a case that cannot be counteracted by the combined weight of all the rules that give correct evidence.
Let p2 be the penalty for contributing false positive evidence to a misdiagnosed case, p3 be the penalty for contributing false negative evidence to a misdiagnosed case, p4 be the penalty for contributing false positive evidence to a correctly diagnosed case, p5 be the penalty for contributing false negative evidence to a correctly diagnosed case, and p6 be the penalty for using weak rules. Let h be the maximum number of rules that are removed at each iteration. Let Rmin be the minimum size of the solution rule set.

Step 2. Optional step for very large rule sets: given an initial rule set, create a new rule set containing the n strongest rules for each case.

Step 3. Find all misdiagnosed cases for the rule set. Then collect and rank the rules that contribute evidence toward these erroneous diagnoses. The rank of rule Rj is Σ(i=1..6) pi·nij, where:

    n1j = 1 if Rj is a poison rule or its deletion leads to the creation of another poison rule, and 0 otherwise;
    n2j = the number of misdiagnoses for which Rj gives false positive evidence;
    n3j = the number of misdiagnoses for which Rj gives false negative evidence;
    n4j = the number of correct diagnoses for which Rj gives false positive evidence;
    n5j = the number of correct diagnoses for which Rj gives false negative evidence;
    n6j = the absolute value of the CF of Rj.

Step 4. Eliminate the h highest-ranking rules.

Step 5. If the number of misdiagnoses begins to increase and h ≠ 1, then h <- h - 1. Repeat steps 3-4 until either

    - there are no misdiagnoses,
    - Rmin is reached, or
    - h = 1 and the number of misdiagnoses begins to increase. □

Each iteration of the algorithm produces a new rule set, and each rule set must be rerun on all training instances to locate the new set of misdiagnosed instances. If this is particularly difficult to do, the h parameter in step 4 can be increased, but there is the potential risk of converging to a suboptimal solution. For each misdiagnosed instance, the automated reasoning system that uses the rule set must be able to explain which rules contributed to a misdiagnosis. Hence, we require a system with good explanation capabilities.

The nature of an optimal rule set differs between domains. Penalty constants, pi, are the means by which the user can define an optimal policy. For instance, via p2 and p3, the user can favor false positive over false negative misdiagnoses, or vice versa. For medical expert systems, a false negative is often more damaging than a false positive, as false positives generated by a medical program can often be caught by a physician upon further testing. False negatives, however, may be sent home, never to be seen again. In our experiments, the value of the six penalty constants was pi = 10^(6-i). The h constant determines how many rules are removed on each iteration, with lower values, especially h <= 3, giving better performance. Rmin is the minimum size of the solution rule set; its usefulness was described in section 4.A.

Example 1. In this example, which is illustrated in Figure 2, there are six training instances, classified as positive or negative instances of the hypothesis. There are five rules shown with their CF strength. The arcs indicate the instances to which the rules apply. To simplify the example, define the combined evidence for an instance as the sum of the evidence contributed by all applicable rules, and let CFt = 0. Rules with a CF of one sign that are connected to an instance of the other sign contribute erroneous evidence.
Two cases in the example are misdiagnosed: I4 and I5. The objective is to find a subset of the rule set that minimizes the number of misdiagnoses.

[Figure 2: Optimizing Rules for One Hypothesis -- six classified example instances, I1 (+1), I2 (+1), I3 (-1), I4 (+1), I5 (-1), I6 (-1), connected by arcs to the rules R1, ..., R5 (R5 has CF -.5); the remaining details of the graph are not recoverable from the scan.]

Assume that the final rule set must have at least three rules, hence Rmin = 3. Since all rules have identical magnitude and out-degree, it is reasonable to set the bias to the same value for all n rules, hence bj = 1 for 1 <= j <= n. Let pi = 10^(6-i), for 1 <= i <= 6, thus choosing rules in the highest category, and using lower categories to break ties. On the first iteration, two misdiagnosed instances are found, I4 and I5, and four rules contribute erroneous evidence toward these misdiagnoses: R2, R3, R4, and R5. Rules are ranked and R4 is chosen for deletion. On the second iteration, one misdiagnosis is found, I4, and two rules contribute erroneous evidence, R3 and R5. Rules are ranked and R5 is deleted. This reduces the number of misdiagnoses to zero and the algorithm successfully terminates.

The same example can be used to illustrate the problem of the traditional method of rule set debugging, where the order in which cases are checked for misdiagnoses influences which rules are deleted. Consider a Teiresias-style program that looks at training instances and discovers I4 is misdiagnosed. There are two rules that contribute erroneous evidence to this misdiagnosis, rules R3 and R5. It wisely notices that deleting R5 causes I3 to become misdiagnosed, hence increasing the number of misdiagnoses; so it chooses to delete R3. However, no matter which rule it now deletes, there will always be at least one misdiagnosed case. To its credit, it reduced the number of misdiagnoses from two to one; however, it fails to converge to a rule set that minimizes the number of misdiagnoses. □

B. Experience with the Antidote Algorithm

Experiments with the Antidote Algorithm were performed using the Mycin case library [2]. Our experiments involved 119 evidential findings, 26 intermediate hypotheses, and 21 final hypotheses. The training set had 104 training instances, and each instance was classified as a member of four hypothesis classes on the average. The generated rules had one to three LHS conjuncts.

In our experiments, we generated approximately forty rule sets containing between 200 and 20000 rules. Large rule sets were generated because we are investigating the construction of knowledge bases that allow an expert system to automatically follow the line of reasoning of an expert; understanding a community of problem solvers requires more knowledge than that needed to just solve diagnosis problems. Typically, 85% of the training instances were diagnosed correctly, and seven out of ten cases used to validate the original Mycin system were evaluated correctly. While ten cases is a small number for a validation set, it is a carefully constructed set and has been found adequate in accurately classifying human diagnosticians at all levels [7]. Further, since there are an average of four hypotheses in the diagnosis per instance, we can view our training set as having 416 instances and our validation set as having 40 instances. After the Antidote Algorithm was applied, 95% of the training instances were diagnosed correctly, and 80% of the validation set was diagnosed correctly.
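The rank-and-delete loop exercised in Example 1 is straightforward to prototype. The Python sketch below is our own reconstruction under simplifying assumptions: evidence is combined additively with CFt = 0 as in the example, h = 1, and the poison-rule term n1j and the stop-on-increase test are omitted for brevity. The caller supplies the rule CFs, the rule-to-instance arcs, and the +1/-1 instance labels; Figure 2's exact arcs are not fully recoverable from the scan, so no attempt is made to reproduce them here.

    # Our reconstruction of the rank-and-delete loop (Steps 3-5) with h = 1,
    # additive evidence combination, and CFt = 0; the poison-rule term n1j
    # and the stop-on-increase test are omitted for brevity.
    PENALTY = {i: 10 ** (6 - i) for i in range(1, 7)}    # p1..p6

    def misdiagnosed(rules, arcs, labels, cf_t=0.0):
        """Instances whose summed evidence disagrees with their +1/-1 label."""
        wrong = set()
        for inst, label in labels.items():
            combined = sum(cf for r, cf in rules.items() if inst in arcs[r])
            if (combined > cf_t) != (label > 0):
                wrong.add(inst)
        return wrong

    def antidote(rules, arcs, labels, r_min=3):
        rules = dict(rules)
        while len(rules) > r_min:
            wrong = misdiagnosed(rules, arcs, labels)
            if not wrong:
                break
            def rank(r):
                cf = rules[r]
                # Instances where r contributes erroneous (opposite-sign) evidence.
                bad = [i for i in arcs[r] if (cf > 0) != (labels[i] > 0)]
                n2 = sum(1 for i in bad if i in wrong and cf > 0)
                n3 = sum(1 for i in bad if i in wrong and cf < 0)
                n4 = sum(1 for i in bad if i not in wrong and cf > 0)
                n5 = sum(1 for i in bad if i not in wrong and cf < 0)
                return (PENALTY[2] * n2 + PENALTY[3] * n3 +
                        PENALTY[4] * n4 + PENALTY[5] * n5 + PENALTY[6] * abs(cf))
            del rules[max(rules, key=rank)]              # Step 4 with h = 1
        return rules

On any graph in the style of Figure 2 this reproduces the paper's behavior: the rule whose erroneous evidence touches the most misdiagnoses is removed first, and ties are broken by the lower penalty categories.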
Besides almost always converging to a solution in which all members of the training set are diagnosed correctly, the Antidote Algorithm is very efficient: only five to fifteen iterations are required for rule sets containing between 200 and 500 rules. It was surprising to see how greatly performance is improved by deleting a small percentage of the rules in the rule set. As our results show, the improved performance on the training set carried over to the validation set.

VI Summary and Conclusion

Traditional methods of debugging a probabilistic rule set are suited to handling missing or wrong rules, but not to handling deleterious interactions between good rules. This paper describes the underlying reason for this phenomenon. We formulated the problem of minimizing deleterious rule interactions as a bipartite graph minimization problem and proved that it is NP-complete. A heuristic method was described for solving the graph problem, called the Antidote Algorithm. In our experiments, the Antidote Algorithm gave good results. It reduced the number of misdiagnoses on the training set from 15% to 5%, and the number of misdiagnoses on the validation set from 30% to 20%.

We believe that the rule set refinement method described in this paper, or its equivalent, is an important component of any learning system for automatic creation of probabilistic rule sets for automated reasoning systems. All such learning systems will confront the problem of deleterious interactions among good rules, and the problem will require a global solution method, such as we have described here.

VII Acknowledgements

We thank Marianne Winslett for suggesting the bipartite graph formulation and for detailed comments. We also express our gratitude for the helpful discussions and critiques provided by Bill Clancey, Ramsey Haddad, David Heckerman, Eric Horvitz, Curt Langlotz, Peter Rathmann and Devika Subramanian.

This work was supported in part by NSF grant MCS-83-12148, ONR/ARI contract N00014-79C-0302, Advanced Research Projects Agency Contract DARPA N00039-83-C-0136, National Institutes of Health Grant NIH RR-00785-11, National Aeronautics and Space Administration Grant NAG-5-261, and Boeing Grant W266875. We are grateful for the computer time provided by the Intelligent Systems Lab of Xerox PARC and SUMEX-AIM.

Appendix 1: Calculating Φ.

Consider rules of the form E => H. Then CF = Φ(x1, x2, x3) = empirical predictive power of rule R, where:

    x1 = P(E+|H+) = fraction of the positive instances in which R correctly succeeds (true positives or true negatives)

    x2 = P(E+|H-) = fraction of the negative instances in which R incorrectly succeeds (false positives or false negatives)

    x3 = P(H+) = fraction of all instances that are positive instances

Given x1, x2, x3, let x4 = x1·x3 / (x1·x3 + x2·(1 - x3)), i.e., the posterior probability P(H+|E+). If x4 > x3 then Φ = (x4 - x3) / (x4·(1 - x3)); else Φ = (x4 - x3) / (x3·(1 - x4)).

This probabilistic interpretation reflects the modifications to the certainty factor model proposed by [6].
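The line defining x4 in the appendix is badly garbled in the scan; taking x4 = P(H+|E+) by Bayes' rule, as written above, is our reconstruction, and it reproduces to two decimals the CFs quoted for R1 (0.77) and R2 (0.92) in Section II. A quick check in Python (the negative branch is reconstructed by symmetry and is not exercised by these cases):

    def phi(x1, x2, x3):
        """Empirical predictive power of a rule E => H (Appendix 1)."""
        # Bayes' rule; this defining line is our reconstruction of the scan.
        x4 = x1 * x3 / (x1 * x3 + x2 * (1 - x3))         # P(H+|E+)
        if x4 > x3:
            return (x4 - x3) / (x4 * (1 - x3))
        return (x4 - x3) / (x3 * (1 - x4))

    print(round(phi(10 / 23, 8 / 81, 23 / 104), 2))      # R1 -> 0.77
    print(round(phi(0.26, 0.02, 23 / 104), 2))           # R2 -> 0.92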
REFERENCES

[1] B. G. Buchanan and T. M. Mitchell. Model-directed learning of production rules. In Pattern-Directed Inference Systems, pages 297-312, New York: Academic Press, 1978.

[2] B. G. Buchanan and E. H. Shortliffe. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, Reading, Mass., 1984.

[3] D. C. Wilkins, W. J. Clancey, and B. G. Buchanan. An overview of the Odysseus learning apprentice, pages 332-340. New York: Kluwer Academic Press, 1986.

[4] R. Davis and D. B. Lenat. Knowledge-Based Systems in Artificial Intelligence. McGraw-Hill, New York, 1982.

[5] J. Gordon and E. H. Shortliffe. A method for managing evidential reasoning in a hierarchical hypothesis space. Artificial Intelligence, 26(3):323-358, July 1985.

[6] D. Heckerman. Probabilistic interpretations for Mycin's certainty factors. In Uncertainty in Artificial Intelligence, North Holland, 1986.

[7] V. L. Yu, L. M. Fagan, et al. Evaluating the performance of a computer-based consultant. J. Amer. Med. Assoc., 242(12):1279-1282, 1979.

[8] R. S. Michalski. A theory and methodology of inductive inference, chapter 4, pages 83-134. Palo Alto: Tioga, 1984.

[9] P. Politakis and S. M. Weiss. Using empirical analysis to refine expert system knowledge bases. Artificial Intelligence, 22(1):23-48, 1984.

[10] G. A. Shafer. A Mathematical Theory of Evidence. Princeton University Press, Princeton, 1976.

[11] D. C. Wilkins and B. G. Buchanan. On debugging rule sets when reasoning under uncertainty. Technical Report KSL 86-30, Stanford University, Computer Science Dept., 1986.

[12] L. A. Zadeh. Approximate reasoning based on fuzzy logic. In IJCAI-6, pages 1004-1010, 1979.
RULE REFINEMENT USING THE PROBABILISTIC RULE GENERATOR

Won D. Lee and Sylvian R. Ray
Department of Computer Science, University of Illinois, Urbana, Illinois

ABSTRACT

This work treats the case of expert-originated hypotheses which are to be modified or refined by training event data. The method accepts the hypotheses in the form of weighted VL1 expressions and uses the probabilistic rule generator, PRG. The theory of operation, verified by experimental results, provides for any degree of hypothesis modification, ranging from minor perturbation to complete replacement, according to supplied confidence weightings.

I INTRODUCTION

There are many situations where we would like to construct a knowledge base initially as a set of hypotheses, introduced by a human expert, which are later systematically modified by experimental training data events. Indeed, one might say that this ordering of the learning process is analogous to theoretical study of a problem's solution methods followed by practical experience with the problem, wherein modification or adaptation of the initial rules or hypotheses occurs. Required modifications may range from small perturbations of the hypotheses through major or minor deletions and addition of new rules.

This problem has been called "rule refinement," in which case it was viewed as an incremental learning of machine-generated rules. Here, we extend the idea to modifying hypotheses originated either by human agent or machine. Therefore, communication between human expert and machine becomes possible in the sense of human introduction of a bias or preliminary problem treatment. Our approach hinges upon a probabilistic formulation of the rule generation problem and deviates significantly from previous approaches because of this and its embodiment in the Probabilistic Rule Generator (PRG) (Lee and Ray, 1986a & b). The form of expression of the rules must also be equally convenient both for a human and for the rule generator.

First, some of the difficulties arising from the rule refinement problem are discussed. Next, related works are examined, and then a language is presented for describing the initial hypotheses appropriately to communicate to a machine. Finally, a scheme to modify initial hypotheses with the training data set is described, with some practical application results.

II DIFFICULT NATURE OF THE PROBLEM

To examine the difficult nature of the problem, let us consider a simple case first. Assume that we have only one initial hypothesis V1 for a class C1, and one hypothesis, V2, for a class C2. Let F1 and F2 be new training event sets for C1 and C2, respectively, to be used in rule refinement. If V1 and V2 perfectly describe classes C1 and C2, respectively, then all the events in F1 will be covered by the hypothesis V1, and likewise, there will be no events in F2 which are not covered by V2. But, in general, hypotheses are not perfect, hence usually not consistent with the new event sets. Therefore, hypotheses need to be modified to accommodate newly acquired facts. But the modification process is complicated by various interactions among hypotheses and events, as described below.

A hypothesis V1 is incomplete if it does not cover all the events in F1 (we are still assuming that there is only one hypothesis V1 for the class C1). A hypothesis is inconsistent if it covers events which belong to other classes. A hypothesis V1 collides with another hypothesis V2 if their intersection is non-null.
If the two hypotheses belong to different classes, then a contradiction between the hypotheses results. On the other hand, if the two hypotheses belong to the same class, then they might have to be merged together to form a new hypothesis.

Now, let us consider all three of the problems mentioned above at the same time (see Figure 1). More complications arise, since when we try to resolve incompleteness by expanding a hypothesis, inconsistency might occur during the process by covering exception events belonging to other classes. Likewise, if we shrink a hypothesis to resolve inconsistency by not covering exception events, then incompleteness might occur during the process, since some of the events included in the former hypothesis might not belong to the modified hypothesis any more.

[Figure 1. Simple example of a general rule refinement problem: here, "+" are events in F1, and "-" are those in F2. Notice the probabilistic nature of the problem.]

Of course, we can always adjust hypothesis V1 so that it includes some of the events in F1 and does not cover any event in F2. But, in doing so, the newly generated hypothesis might become malformed in shape, as it might have to be shrunk too much to exclude all exception events. Therefore, the classical viewpoint of strict completeness and consistency of a concept is no longer adequate. A scheme should be flexible enough to generate an appropriate form of hypotheses according to user specification, and it is desirable to have a process with a probabilistic nature, allowing partially complete and partially consistent concepts. The point here is that a general rule refinement scheme must be able to deal with all the possible situations that the usually noisy data set might present.

As we can see, the problem of dealing with collisions between hypotheses will become increasingly complex as the situation worsens by the addition of the problems of incompleteness and inconsistency. We have considered only a simple case so far: each class having only one initial hypothesis. But what about more complex cases, when there is more than one hypothesis for each class? A bewildering variety of interactions among the hypotheses and the event sets can occur then.

Implicit in the above discussion is that we modify the hypotheses to fit the example events. But if the hypotheses represent only a few cases of the facts, and if the new training data set contains a large number of events not described by the initial hypotheses, the idea of starting with the hypotheses and adjusting them to fit the data might not be the right one. Rather, we might want to produce complexes from the data events first, and adjust those complexes according to the hypotheses. An extreme case arises when the initial hypotheses do not cover any of the events in the new training data set. The idea is that we should not give any special privilege to the initial hypotheses, but treat the hypotheses equally with the event sets according to their importance, i.e., how many data events they represent.

III PREVIOUS WORK*

A. AQ Rule Refinement

The AQ rule refinement scheme was derived in (Michalski and Larson, 1978). There are some shortcomings in this scheme. First, the objective of AQ incremental rule generation is to produce a new decision rule consistent with the input hypotheses and the observed events.
Initial hypotheses are not modified to make them consistent with observed events; they are rather used to find events that cause inconsistency and incompleteness to the hypotheses, and to generate a cover of an event set against those hypotheses. Since complexes are generated around the example events only, and no attempt is made to modify the initial hypotheses to make them consistent with the example events, it is argued that the expansion strategy used in incremental rule generation in AQ should be dropped (O'Rorke, 1982).

Secondly, it has been observed that the new hypotheses generated are usually overly complex compared to the former hypotheses. This is because there is a lack of facility to capture complexes with some exception events in them, and therefore all the new hypotheses are formed to include strictly positive events only. There have been attempts to remedy the problems mentioned above (Reinke and Michalski, 1985). But since those methods are based on the modified AQ star synthesis, they do not address the probabilistic nature of rule refinement fully. Some problems associated with AQ still remain in them (Lee and Ray, 1986a). The capability to capture complexes probabilistically becomes important, since there will be a large number of interactions among complexes and events in the modification process.

B. ID3 iterative rule generation -- methodology and discussion

A rule can be generated by ID3 iteratively by selecting a subset of training events, called a "window", at each iteration (Quinlan, 1983). There are two methods of forming a new window in ID3. One way is to add the exceptions to the current window, up to some specified number, to form an enlarged window. The other is to select "key" events in the current window and replace the rest with the exceptions, thus keeping the window size constant.

Notice that ID3 itself does not have the capability to accept initial hypotheses, but uses already existing data events to iteratively generate a decision tree. Also, ID3 itself is intended to make a decision tree, rather than to synthesize variable-valued logic expressions. Therefore, the ID3 rule refinement scheme is not intended for initial hypotheses modification. Yet, when we relax some of the constraints in the scheme, we can extract some useful ideas from it.

First, let us consider the case of making a new window by adding some specified number of exceptions to the current window. Let us relax the requirement of "some specified number", so that we can add a large number of exceptions to the current window if we desire. Then we observe that the ID3 rule refinement scheme will not differentiate between already available rules and the training events. In other words, the current rule can be grown in any direction. Therefore, a new rule can have any shape, from almost identical to the current rule to almost entirely different from the current one.

Secondly, let us consider the idea of selecting the key events in the current window to represent the current rule. It is important to select these key events to represent the current rule faithfully, as they will be mixed with "foreign" exception events. If there is no further information available about the general distribution tendency of the events in each variable domain, we might want to choose events which are distributed equally over the subspace of the complex to which these events belong (see the sketch below).

*Rule refinement for production rules was treated by Ginsberg et al. (Ginsberg et al., 1985).
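The sketch promised above: one simple way to pick key events "distributed equally over the subspace" of a complex is to enumerate the subspace and take every k-th cell. The representation of a complex as per-variable ranges, and all names, are our own illustration, not PRG's or ID3's.

    import itertools

    def key_events(complex_, k):
        """Up to k events spread evenly over the subspace of a complex."""
        names = list(complex_)
        domains = [range(lo, hi + 1) for lo, hi in complex_.values()]
        cells = list(itertools.product(*domains))        # the whole subspace
        step = max(1, len(cells) // k)
        return [dict(zip(names, c)) for c in cells[::step][:k]]

    # e.g. three key events for the complex [v1 = 2..4][v2 = 4]:
    print(key_events({"v1": (2, 4), "v2": (4, 4)}, 3))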
Let us further imagine the situation that, after we generate a rule, we have lost all the actual data sets of events. Then one way to recover the rule for the next iteration is to generate events artificially, thus simulating the hypotheses. Because the event space is generally huge, the number of available training events is usually small, and, more importantly, human conceptual knowledge may be very condensed, a concept supplied by a human expert can play an important role. Thus, a scheme to modify an expert-driven concept systematically may be especially effective.

IV CASE STUDIES

VL1 expressions (Michalski, 1975) have been used to describe rules generated by inductive inference machines. Consider a simple example to see whether the VL1 expression is sufficient to convey an expert's hypotheses to a machine so that rule refinement might be done by the machine. Let there be two classes, C+ and C-, and two linear variables, v1 and v2, describing the events, each with cardinality 5. Let there be two initial hypotheses, V1 and V2, both for class C+, and new sets of example events, F+ and F-, belonging to C+ and C-, respectively (see Figure 2).

[Figure 2. An example of a rule refinement problem. Hypotheses are denoted as bold dashed rectangles, and "+" and "-" are the events in the sets F+ and F-, respectively.]

Here, hypotheses V1 and V2 are described by the VL1 expression as two complexes, [v2 = 1..2] and [v1 = 2..4][v2 = 4], respectively. Let us consider the following five cases.

Case 1

Let the hypotheses be very "light" in weight, not representing many actual events by themselves (see Figure 3). This kind of situation occurs when an expert is inexperienced and is not sure of some of his claims, or is merely guessing the rule. Therefore, the expert wants the machine to generate a new rule mainly from the new event sets F+ and F-. Thus, machines should be able to detect the fact that these hypotheses are indeed unimportant or light, and hence ought to generate new hypotheses from the given examples. In this case, the major cluster would emerge by capturing a cluster made of C+ events.

[Figure 3. Case 1: here, a major hypothesis is denoted by a bold rectangle, and is created mainly from C+ events.]

Case 2

Let us consider the case when the hypothesis V1 is heavy, but V2 is light, compared with the actual number of C+ example events (see Figure 4). A major, newly generated hypothesis might be the original hypothesis except for the part where a C- event resides inside the hypothesis V1. Thus, this example shows how a hypothesis can be specialized by some exception events.

[Figure 4. Case 2: notice that the major hypothesis is a specialization of the original hypothesis V1.]

Case 3

Here, let V1 be light, while V2 is heavy. This is the reverse of Case 2, and therefore a major new hypothesis would be created around the hypothesis V2. As we can see in Figure 5, because two C+ events are in the region where V2 can be more generalized, a new hypothesis would be formed by merging V2 with these two C+ events. Therefore, this case exemplifies how a hypothesis can be merged with some example events to be more generalized.

[Figure 5. Case 3: notice that the major hypothesis is a generalization of the original hypothesis V2.]

[Figure 6. Case 4: notice that the major hypothesis is made from the original hypotheses V1 and V2.]
Case 4

Next, consider the case when both hypotheses V1 and V2 are equally heavy (see Figure 6). If there is no way to merge these two hypotheses to make a heavier hypothesis, then new hypotheses would be created around the original hypotheses. But, in this specific example, as in the figure, if there is a way to merge these two together, and if the merged hypothesis is indeed heavier than each individual hypothesis, then a new, more important hypothesis can be made by the merge.

Case 5

Finally, let us reconsider Case 2, when hypothesis V1 is heavy, but V2 is not (see Figure 7). If the hypothesis V1 is heavy enough, then a flexible rule refinement scheme should be able to ignore a small number of exception events which would otherwise make the new hypothesis shrink too much. By doing so, the new hypothesis would be general enough to be a good description of the facts. This, in turn, is the problem of a rule refinement scheme having the capability of capturing complexes probabilistically.

[Figure 7. Case 5: notice that the major hypothesis is the same as the original hypothesis V1, and contains one exception event.]

V COMMUNICATION LANGUAGE

We have discussed so far only some of the cases that can occur during the rule refinement process. There are other cases, such as the case when two hypotheses belonging to two different classes collide with each other. Then a decision should be made by the user whether to divide an initial hypothesis to generate new hypotheses without any exception, or to accept the initial hypothesis without change if the collision is not severe. Of course, there should be some intermediate solutions between the two extremes, according to the user specification. Again, this is most naturally treated as a probabilistic rule refinement problem. Since the data might well be contaminated, being able to generate hypotheses probabilistically becomes important.

As we have seen, if an expert wished to describe his hypotheses to communicate with the machine, he would need to describe how important each hypothesis is, compared not only with other hypotheses, but also with the sets of example events. Let us, for instance, say that there are two hypotheses, [v2 = 1..2] having a weight of 0.8 and [v1 = 2..4][v2 = 4] having a weight of 0.2, representing a total of 100 events. Then we can express the initial hypotheses as

    80[v2 = 1..2] + 20[v1 = 2..4][v2 = 4].

Therefore, this expression contains not only the relative importance of each hypothesis, but also the relative importance of the total hypotheses to the sets of example events. We will call this expression a weighted VL1 (WVL1) expression.*

VI PRG RULE REFINEMENT

The objective of the PRG rule refinement scheme is summarized below.

First, it should be able to consider all the interactions among hypotheses and sets of example events. In other words, it must be capable of merging hypotheses, resolving collisions between hypotheses belonging to different classes, generalizing hypotheses by adding some events, and specializing hypotheses by excluding some exception events.

Secondly, it should be flexible enough, hence probabilistic, in generating new hypotheses according to user specification. For instance, a hypothesis can be very specific, explaining only the positive events, or be more general by ignoring some minor exceptions.

Because PRG is already probabilistic, the only problem left is to enter the WVL1 expression into PRG to be modified. A simple solution is used in the PRG rule refinement scheme.
Since a human expert's description of a concept is often probabilistic, it is suitable for simulation in PRG. The PRG rule refinement scheme is described as:

    PRG rule refinement;
    begin
      simulate each hypothesis by a set of randomly generated events in the
        subspace described by the hypothesis, making the number of events
        equal to the weight of the hypothesis;
      add sets of training events;
      run PRG to generate new hypotheses according to user specification;
    end.

Thus, the PRG rule refinement scheme does not give any special attention to the hypotheses, but treats them equally with the example events. This makes it possible to face the complexity of the interactions among the hypotheses and the sets of example events in a uniform way, and PRG will generate new hypotheses as if there were no special hypothesis at all; hence no partiality will take place in the rule refinement process.

*Notice that this expression is different from the weighted DVL expression in (Michalski and Chilausky, 1980), since weighted DVL expressions are concerned with the weighted selectors in a VL1 expression, rather than weighted complexes. We will call the coefficient in front of each complex a "weight", as it represents the strength of evidence of each hypothesis.

VII EXPERIMENTS

Experiment 1: In Sec. IV, we dealt with some cases that a rule refinement scheme should be able to resolve. Those cases were run by the PRG rule refinement program. The two initial hypotheses for class C+ were V1 = [v2 = 1..2] and V2 = [v1 = 2..4][v2 = 4]. The two sets of training events, F+ and F-, used in Sec. IV, were introduced as data. Complexes generated by PRG agree with the earlier discussion (see Table 1).

Table 1. Result of Experiment 1.

    WVL1 Hypotheses | Refined Rule                                                    | Note
    ----------------|-----------------------------------------------------------------|-----
    1V1 + 1V2       | (illegible in the scan)                                         | 1
    10V1 + 1V2      | 9[v1=1..4][v2=1..2] + 5[v1=0..1][v2=2..4] + 3[v2=4]             | 2
    1V1 + 10V2      | 12[v2=4] + 2[v2=2] + 4[v1=0..1][v2=2..4]                        | 3
    10V1 + 10V2     | 14[v1=3..4][v2=1..4] + 12[v2=4] + 5[v2=2] + 5[v1=0..1][v2=2..4] | 4
    10V1 + 1V2      | (major complex [v2=1..2]; coefficients illegible in the scan)   | 5

    Notes:
    1. The major complex is created mainly from positive training events, since the hypotheses are lightly weighted.
    2. The major complex comes from the original hypothesis, V1, by excluding one exception event.
    3. The major complex is formed by merging the dominant hypothesis, V2, with some positive training events.
    4. A new, heavier hypothesis is created by merging hypotheses V1 and V2.
    5. With specificity = 0.9 (which means that any subspace for which the ratio of the number of positive events to the total number of events in it is greater than or equal to 0.9 is considered an acceptable complex), a single exception event in hypothesis V1 is ignored, permitting V1 to be retained as the major complex.

Experiment 2: Rules for five classes of sleep ("stages") were written by a human expert based on standard sleep stage scoring principles (Ray et al., 1985) and presented as initial hypotheses to the PRG program. A data base of 742 events from one individual's full night sleep study was introduced as new data. Sets of rules were generated by the PRG using four different relative weightings of the hypotheses and the new events, in addition to the hypotheses alone. The experiment consisted of testing the accuracy of the rules in classifying the 742 events for each of the five rulesets, the results of which are shown in Table 2.
Note Class 3, where with hypotheses only (λ = ∞) the accuracy was only 19%, but with λ = 2 the new events overcame the inaccuracy of the hypotheses and the resulting rules were nearly perfect (99% accuracy).

Class 2 exhibits more typical behavior, the accuracy rising monotonically (except for trivial noise fluctuations) from 71%, with hypotheses only, to 94% when only training data was used for the rules. Class 5 also exhibits accuracy growth that is monotonic, paralleling that of Class 2. Only Class 1, which is known semantically as the noisiest, most poorly clustered class, shows strongly erratic behavior. The anomalous-looking 53% accuracy for hypotheses only is due to "overgeneralization", which becomes constrained by negative events as new data events are introduced. As hypothesis weight decreases, interaction between Class 0 and Class 1 manifests as a non-monotonic accuracy increase.

Table 2. Rule Modification Experiment Result. Entries are % of events correctly classified (λ = ∞ is the case of hypotheses only). Here, specificity = 0.9, certainty = 0.9, and weight = 0.05.

    Class | # New Events | λ = ∞ | λ = 2.0 | λ = 1.0 | λ = 0.25 | λ = 0.0
    ------|--------------|-------|---------|---------|----------|--------
    0     | 100          | 19    |    .    |    .    |    .     |   .
    1     | 83           | 53    |    .    |    .    |    .     |   .
    2     | 385          | 71    |    .    |    .    |    .     |   94
    3     | 108          | 19    |   99    |    .    |    .     |   .
    5     | 66           | 47    |    .    |    .    |    .     |   .
    Total | 742          | 52.4  |    .    |    .    |    .     |   .

    (λ = wt. of hypotheses / wt. of new events. Only the λ = ∞ column and the two entries quoted in the text are recoverable from the scan.)
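For concreteness, the hypothesis-simulation step of Section VI -- expanding each weighted complex of a WVL1 expression into weight-many randomly placed events before handing everything to PRG -- can be sketched as below. The data representation and names are ours; we also assume variables not mentioned in a complex are sampled over their full domains.

    import random

    def simulate(wvl, cls, domains, rng=random.Random(0)):
        """Expand [(weight, {var: (lo, hi)}), ...] into weight-many events each."""
        events = []
        for weight, complex_ in wvl:
            for _ in range(weight):
                ev = {v: rng.randint(*complex_.get(v, dom))
                      for v, dom in domains.items()}
                events.append((ev, cls))
        return events

    domains = {"v1": (0, 4), "v2": (0, 4)}      # both domains have cardinality 5
    hyps = [(80, {"v2": (1, 2)}), (20, {"v1": (2, 4), "v2": (4, 4)})]
    simulated = simulate(hyps, "C+", domains)   # then add real events, weight 1 each

Since simulated events and real training events end up in one pool, the supplied weights are the only thing that distinguishes expert belief from observed data, which is exactly the uniformity the scheme aims for.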
B., "Selection of Most Representative Training Examples and Incremental Generation of VL1 Hypotheses: The Underlying Methodology and the Description of Programs ESEL and AQ11," Technical Report 867, Department of Computer Science, University of Illinois, May 1978.
[7] O'Rorke, P., "A Comparative Study of Inductive Learning Systems AQ11P and ID-3 Using a Chess Endgame Test Problem," ISG 82-2, UIUCDCS-F-82-899, Computer Science Department, University of Illinois, 1982.
[8] Quinlan, J. R., "Learning Efficient Classification Procedures and Their Application to Chess End Games," Machine Learning, Michalski, R. S., Carbonell, J. G. and Mitchell, T. M. (Eds.), Palo Alto: Tioga Press, 1983.
[9] Ray, S. R., Lee, W. D., Morgan, C. D. and Airth-Kindree, W., "Computer Sleep Stage Scoring--An Expert System Approach," Technical Report 1228, Computer Science Department, University of Illinois, September 1985. Also to appear in International Journal of Biomedical Computing.
[10] Reinke, R. E. and Michalski, R. S., "Incremental Learning of Concept Descriptions," Machine Intelligence 11, Hayes, J. E., Michie, D. and Richards, J. (Eds.), Oxford: Oxford University Press, 1985.
A METALINGUISTIC APPROACH TO THE CONSTRUCTION OF KNOWLEDGE BASE REFINEMENT SYSTEMS

Allen Ginsberg
AT&T Bell Laboratories
Holmdel, NJ 07733

Abstract

A variety of approaches to knowledge base refinement [3, 8] and rule acquisition [4] have appeared recently. This paper is concerned with the means by which alternative refinement systems themselves may be specified, developed, and studied. The anticipated virtues of taking a metalinguistic approach to these tasks are described, and shown to be confirmed by experience with an actual refinement metalanguage, RM.

I Introduction

Knowledge base refinement involves the generation, testing, and possible incorporation of plausible refinements to the rules in a knowledge base with the intention of thereby improving its empirical adequacy, i.e., its ability to correctly diagnose or classify the cases in its "domain of expertise." Knowledge base refinement may thus be viewed as being a part of, or a well-constrained subcase of, the knowledge acquisition problem*. Recently a variety of methods or approaches to rule refinement for expert system knowledge bases have been presented [3, 8]. In [3] my colleagues and I discussed SEEK2, a system utilizing an empirically-grounded heuristic approach. In [8] an approach that makes use of explicit rule justification structures based upon the underlying domain theory was presented. An approach to rule acquisition using interview strategies based upon metaknowledge concerning the role of qualitative or causal models in diagnostic reasoning is given in [4].

The concern of this paper, however, is not with any particular refinement system or approach as such, but rather with the means by which refinement systems themselves may be designed, implemented, refined, and studied. In this paper a metalinguistic framework for accomplishing these tasks, called RM (for Refinement Metalanguage), will be described. RM is a metalanguage within which one may specify a wide variety of rule refinement systems. In section II a brief elaboration on the general idea of a metalanguage for knowledge base refinement is given. Section III deals with the reasons for taking the "metalinguistic turn." Section IV presents some of the salient features and primitives of RM, and gives examples of their use. Some concrete results concerning RM's performance are reported in section V.

II The Metalinguistic Turn

First we discuss the role of a rule refinement system within the context of the traditional expert or rule-based systems paradigm. A knowledge base is a collection of rules written in some rule representation language. In addition, an expert system framework contains an inference mechanism for determining if and how satisfied rules will be used to reach conclusions for any given case.

*This research was conducted at the Department of Computer Science of Rutgers University and was supported in part by the Division of Research Resources, National Institutes of Health, Public Health Service, Department of Health, Education, and Welfare, Grant P41 RR02230.

By comparing the conclusions of a given knowledge base in one or more cases with the known conclusions in those cases, a decision can be made (either by a knowledge engineer or the refinement system) as to the current need for rule refinement.
When invoked, the refinement system will - at the very least - suggest ways of modifying the rules in the knowledge base that are either likely to correct the given performance deficiencies or, as in the case of SEEK2 [3], have been verified to yield a certain specific gain in performance. From this perspective a refinement system is a useful "black box." Figure II-1 illustrates what is meant by taking "the metalinguistic turn."

Figure II-1: Refinement Systems: After the Metalinguistic Turn

In this figure we see that the refinement system itself is viewed as a formal object written in a general metalanguage. Just as the rule representation language allows for the specification of many different knowledge bases, so too the refinement metalanguage allows for the specification of many different refinement systems for refining knowledge bases written in the rule representation language. Using the terminology of formal logic, we say that the refinement metalanguage is a meta-language with respect to the rule representation object-language because the former must have the ability to refer to, examine, and modify the linguistic entities that are definable in the latter.

The metalinguistic turn may be viewed as running parallel to, or extending, the well-known generalization process that led from special-purpose hard-coded expert systems to general rule representation frameworks or languages. The metalanguage presented in this paper, RM, is currently capable of dealing with object-level rules that are expressible in an extended propositional logic, where the latter is a propositional logic that allows for confidence factors and certain special forms - e.g., numerical ranges. In concrete terms, RM is designed for use with knowledge bases written in EXPERT [9].

III Motivation

Why is the "metalinguistic turn" worth taking? To put the question another way, what perceived needs are met or advantages realized by using a refinement metalanguage to design, implement, and study refinement systems?

One answer is in fact presupposed by the question itself: there is an expectation that the design and development of refinement systems is a field that is by no means a closed book. A prime motivation for creating a refinement metalanguage is, therefore, to have a tool for facilitating experimental research in knowledge base refinement. A system that allows for the easy specification of alternative refinement concepts, heuristics, and strategies is, ipso facto, a system that is useful for testing and comparing alternative refinement system designs.
Moreover, as we have found in developing SEEK2 [3, 2], a refinement system undergoes debugging, revision, and expansion over time. Such development and evolution is much more easily understood, and hence managed and achieved, by means of a high-level metalanguage such as RM than by use of traditional programming techniques. Again the reasoning here parallels the reasoning behind the evolution of expert system frameworks from hard-coded systems. Just as an expert system framework allows a knowledge engineer to concentrate on the essential problem at hand, viz., extracting knowledge from a domain expert, without worrying about irrelevant programming issues, so too a refinement metalanguage allows a researcher or refinement system designer to concentrate on the discovery and use of metaknowledge for facilitating knowledge base refinement, without being concerned about how such metaknowledge is to be represented and incorporated in a computer program.

A second reason for taking the metalinguistic turn is that there is no reason to believe that there is one "best" refinement system or overall approach to knowledge base refinement. Applicable and useful refinement strategies and tactics may vary from domain to domain, and may also be a function of the stage of development of the object knowledge base. The metalinguistic approach to the specification of refinement systems allows for the easy customization of refinement systems to meet these varying circumstances.

A closely related motivation concerns the use of domain-specific metaknowledge in refinement systems, i.e., metaknowledge concerning the particular linguistic entities in some domain knowledge base. Falling in this category, for example, would be the metaknowledge that certain rules are "known with certainty" and should not be refined under any circumstances. A more interesting example of such metaknowledge is the notion of a generalization language [6] or causal network [10] defined over the concepts utilized by the rules of the knowledge base. An example of how such domain-specific metaknowledge can be utilized will be discussed in section IV.B.2. The point to be made here is that, while domain-specific metaknowledge concerns, by definition, a particular domain knowledge base, such metaknowledge can be represented by general metalinguistic devices. Therefore a refinement metalanguage is a natural vehicle for the representation and incorporation of such metaknowledge in the refinement process.

Finally, note that the metalinguistic turn allows us to contemplate the implementation of a refinement system that is capable of refining itself, i.e., given the accessibility of its own metalinguistic representation there is no reason in principle why a refinement system should not be able to refine its own concepts, heuristics, and procedures in order to improve its performance [5]. For example, if a refinement system C is given feedback, i.e., told which of its suggested refinements are unacceptable, C may be able to use this information together with the record of its reasoning in these instances to suggest refinements to the concepts, heuristics, or procedures that led to these erroneous suggestions.

IV Salient Features of RM

A. Primitives for Accessing Object-Language Entities and Properties

In order to allow for the specification of a wide variety of control strategies for the knowledge base refinement process [2], a refinement metalanguage must have the ability to create, store, and manipulate refined versions of the initial or given knowledge base. There are many ways in which such a capability can be implemented. For the sake of efficiency, RM currently employs a scheme in which a distinction is maintained between the potentially numerous versions of the knowledge base that are accessible and the unique version that is currently active. In RM kb is a primitive variable over the set of all accessible refined versions of the knowledge base; kb₀ is a constant that refers to the initial knowledge base. Below we discuss the primitives provided by RM for creating and accessing refined versions of the knowledge base, and some of the implementation details (section IV.B.6). Part of what it means for a knowledge base to be the active knowledge base will be made clear in the following paragraph.
A refinement metalanguage must possess primitive variables and functions to provide it (or its user) with the ability to "access" various "objects" or their components in any of the available versions of the knowledge base (as well as any of the domain cases that may be stored). In RM, variables referring to knowledge base entities are always interpreted as referring to the objects in the active knowledge base. For example, rule is a primitive variable whose range is the set of rules in the active knowledge base; RuleCF(rule) is a function whose value is the confidence factor associated with rule in the active knowledge base. To use these primitives to access or refer to rules, etc., in any accessible kb other than the active knowledge base, kb must first be activated. This is accomplished in RM by issuing the command Activate-kb(kb).

RM views a domain case as a complex object consisting of a fixed collection of data or findings (including the known or expert conclusions in the case), i.e., the content of a case should not be subject to alteration by a refinement system. case is a variable whose range is the set of cases in the data base of cases. In addition some primitive functions are needed to allow one to refer to selected parts or aspects of a rule or a case, e.g., PDX(case) is a function whose value is the known or expert conclusion in case ("PDX" stands for "Presumed Diagnosis"); Value(finding,case) is a function that returns the value of finding in case. CDX(case) is a function whose value is the conclusion reached (with highest confidence) by the knowledge base in case ("CDX" stands for "Computer's Diagnosis"). CDX-Total(case) returns a vector containing all diagnostic or interpretive conclusions reached in case together with their confidences. While the content of any case is fixed (as far as RM is concerned), the conclusions reached by the various refined versions of kb₀ in a case will, of course, vary. Therefore, the value returned by CDX(case) depends upon which knowledge base is active.

Some primitives can be used to return information concerning either rules, rule components, or subcomponents of rule components. This is achieved by a system for referencing such objects, the details of which need not concern us here. Basically, to refer to a rule, one pointer is required; to refer to a rule component, two pointers are required, etc. For example, the RM primitive function Satisfied can be invoked in the following ways:

  Satisfied(rule,case)
  Satisfied(rule,component,case)
  Satisfied(rule,component,subcomponent,case)

and will return true if and only if the designated rule, rule component, or rule subcomponent is indeed satisfied in the designated case.

Certain special sets of objects are of importance in the knowledge base refinement process, and it is therefore useful to have primitives that refer to them, e.g., Rules-For(hypothesis) is a function whose value is the set of rules that have hypothesis as their conclusion (hypothesis is, of course, a primitive variable ranging over the set of hypotheses in the knowledge base). In addition, it is desirable to have the ability to refer to subsets of various objects. Thus, in RM, cases is a primitive variable over subsets of cases, and rules is a primitive variable over subsets of rules. Other primitives that in some way involve semantic properties of rules or the performance characteristics of the knowledge base as a whole are clearly required. For example, ModelCF(hypothesis,case) is a function whose value is the system's confidence factor accorded to hypothesis in case.
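It may help to picture this accessor family outside RM. The sketch below is a loose Python analogue, not RM itself; the class names and the assumed kb.run and rule.lhs_holds_in entry points are invented for illustration.

```python
class Case:
    """A domain case: a fixed set of findings plus the expert conclusion.
    Its content is never altered by refinement."""
    def __init__(self, findings, pdx):
        self.findings = findings
        self.pdx = pdx                      # "Presumed Diagnosis"

    def value(self, finding):
        return self.findings.get(finding)   # Value(finding, case)

class RefinementContext:
    """Interprets accessors relative to the single active knowledge base."""
    def __init__(self, initial_kb):
        self.kb0 = initial_kb
        self.active = initial_kb

    def activate_kb(self, kb):              # Activate-kb(kb)
        self.active = kb

    def cdx_total(self, case):
        """All conclusions reached in `case`, each with a confidence."""
        return self.active.run(case)        # assumed inference entry point

    def cdx(self, case):
        """'Computer's Diagnosis': the most confident conclusion."""
        return max(self.cdx_total(case), key=lambda c: c.confidence)

    def rules_for(self, hypothesis):        # Rules-For(hypothesis)
        return [r for r in self.active.rules if r.conclusion == hypothesis]

    def satisfied(self, rule, case):        # Satisfied(rule, case)
        return rule.lhs_holds_in(case)      # assumed rule-level predicate
```

Note how cdx is implicitly a function of whichever knowledge base is active, which is exactly what makes the value of CDX(case) vary with activation.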
B. Primitives for Defining Functions, Modifying the Knowledge Base, and Creating Objects

1. Logical Operators

A refinement metalanguage must provide operators for combining primitive functions in order to form sophisticated functions. The basic operators that are needed are familiar from set theory, logic, and arithmetic. Some procedural or algorithmic primitives are needed as well. Since RM is built "on top" of Common Lisp, most of the primitive operations needed are already available as Lisp primitives, and may be used in RM commands and definitions. Examples of these operators will be given in the course of the exposition.

2. Modifying the Knowledge Base

A metalanguage for rule refinement must provide an adequate set of primitive rule refinement operators. It should allow for these operators to be composed, so that higher-order refinement is possible [2]. The primitive refinement operations currently available in RM are ones that we have found to be useful in developing SEEK2 [3]. However, RM offers a set of primitive operators that is not only sufficient for the definition of SEEK2, but also goes beyond SEEK2 in power. For example, in SEEK2 all refinement operators apply only to rule components (including confidence factors); there is no way SEEK2 can apply an operator to a rule subcomponent. In RM, on the other hand, rule subcomponents can be accessed and manipulated in the same manner as top-level components. As an example, the expression

  (operation delete-component rule component)

denotes the operation of deleting component from rule. The expression

  (operation (operation delete-component rule component) (operation lower-cf rule x))

denotes the compound operation of deleting component from rule and simultaneously lowering rule's confidence factor to the new value x.

Refinement operators such as delete-component and lower-cf are specifiable in a purely syntactic manner. Refinement operators that are specifiable semantically may also, however, be incorporated in a refinement system. As an example, suppose that a generalization language [6] for concepts used in the rules is provided along with the knowledge base. Then concept-generalization and concept-specialization can be made available as primitive refinement operations. In RM the operation

  (operation generalize-concept rule component)

replaces the item designated by the two arguments rule, component with the next "more general" item in the generalization language. As a concrete example of such an operation, a rule component corresponding to the finding (Patient has a swollen ankle) might be generalized to (Patient has a swollen joint).
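Composition of refinement operations is easy to picture if each primitive carries its own inverse; the inverse is also what later allows experiments to be tried non-destructively. A minimal Python sketch, with invented names rather than RM's actual operator set:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """One reversible refinement step on a knowledge base."""
    name: str
    apply: callable
    undo: callable

def delete_component(rule_id, component):
    def apply(kb): kb.rules[rule_id].lhs.remove(component)
    def undo(kb): kb.rules[rule_id].lhs.append(component)
    return Operation(f"delete-component r{rule_id}", apply, undo)

def set_cf(rule_id, new_cf):
    saved = {}
    def apply(kb):
        saved["old"] = kb.rules[rule_id].cf
        kb.rules[rule_id].cf = new_cf
    def undo(kb):
        kb.rules[rule_id].cf = saved["old"]
    return Operation(f"set-cf r{rule_id} {new_cf}", apply, undo)

def compound(*ops):
    """Compose operations into one; undoing runs the inverses in reverse,
    mirroring RM's nested (operation ...) expressions."""
    def apply(kb):
        for op in ops:
            op.apply(kb)
    def undo(kb):
        for op in reversed(ops):
            op.undo(kb)
    return Operation(" + ".join(op.name for op in ops), apply, undo)

# e.g. compound(delete_component(7, some_condition), set_cf(7, 0.4))
# mimics the compound delete-and-lower-cf operation shown in the text.
```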
3. Creating Mathematical Objects

A refinement metalanguage must give a user the ability to define sophisticated functions using the primitives. In order for this to be possible a user must be able to create new sets, relations, functions, and new variables (over both individuals and sets). RM provides several primitive commands that allow these tasks to be accomplished. The basic commands relevant to the design of functions are: define-set, define-variable, define-set-variable, define-binary-relation and define-function. As an example consider the following RM command:

  define-variable r1 rule

This first command defines a new variable of type rule. Since rule is a primitive type, RM, using frame-based property-inheritance, is able to determine what the properties of this newly defined object should be. It is also possible to define variables over user-defined objects. Consider, for example, the following RM commands:

  define-set Misdiagnosed-Cases {case | (/= (pdx case) (cdx case))}
  define-set-variable mcases Misdiagnosed-Cases
  define-variable mcase Misdiagnosed-Cases

The first command defines a new set misdiagnosed-cases, i.e., the set of all case such that (Pdx case) ≠ (Cdx case); in English: the set of all case such that the known or expert conclusion is not identical to the knowledge base's most confident conclusion. The second and third commands then establish mcases and mcase as variables over subsets of misdiagnosed-cases, and over the individual cases in misdiagnosed-cases, respectively. Note that misdiagnosed-cases utilizes a function that is an implicit function of the active knowledge base, viz. Cdx(case). Therefore the membership of the set misdiagnosed-cases will also vary as the active knowledge base changes. RM automatically sees to it that the membership of this set, as well as any other user-defined set whose membership is sensitive to the active knowledge base, is recomputed when a new knowledge base is activated.

As an illustration of RM's function-definition capability, consider the following useful definition:

  [define-function satisfied-rules-for-hypothesis (hypothesis case)
    {rule in (rules-for hypothesis) | (= (satisfied rule case) 1)}]

This command defines a function satisfied-rules-for-hypothesis of two arguments, of type hypothesis and case respectively, which returns the set of all rules with conclusion hypothesis that are satisfied in case.

4. Defining Useful Refinement Concepts

We are now in a position to see how useful refinement concepts may be defined using RM. As an example we choose a function used in SEEK2 called GenCF(rule), which defines a pattern of behavior a rule must exhibit in order to be considered for a possible "confidence-boosting" refinement. GenCF(rule) is the number of mcase in which a) rule is satisfied and concludes the correct conclusion for mcase, i.e., PDX(mcase), and b) of all the rules meeting clause (a), rule has the greatest confidence factor. (Intuitively, we are interested in the satisfied rule whose confidence will have to be raised the least in order to correct the mcase.) This concept is rendered in RM in two steps as follows:

  [define-function GenCF-rule (mcase)
    SELECT rule IN (satisfied-rules-for-hypothesis (pdx mcase) mcase)
    WITH-MAX (rule-cf rule)]

(In English: given mcase, select a rule among the satisfied rules for the correct conclusion of mcase that has maximum confidence factor.)

  [define-function GenCF (rule) |{mcase | (= rule (GenCF-rule mcase))}|]

(In English: the cardinality of the set of mcase in which rule is returned by GenCF-rule(mcase).)

Continuing with this example allows us to exhibit something of the flexibility of the metalinguistic approach, and to show its use as both a customization device and a tool for experimental research. In general there may be a number of rules that satisfy the conditions given in GenCF-rule(mcase). The motivation for SEEK2's selecting only one of these as the "genCF-rule" of mcase has to do with our expectations concerning the size of the domain knowledge base, and our desire to keep the number of refinement experiments attempted relatively small [3].
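In the hypothetical Python analogue used earlier, the same two-step concept might look as follows (again a sketch, not RM's generated code):

```python
def gencf_rule(mcase, ctx):
    """Among the rules that are satisfied in a misdiagnosed case and
    conclude its correct diagnosis, pick the most confident one."""
    candidates = [r for r in ctx.rules_for(mcase.pdx)
                  if ctx.satisfied(r, mcase)]
    return max(candidates, key=lambda r: r.cf) if candidates else None

def gencf(rule, misdiagnosed_cases, ctx):
    """The number of misdiagnosed cases crediting `rule`, i.e. the
    cardinality that RM's GenCF definition denotes."""
    return sum(1 for mcase in misdiagnosed_cases
               if gencf_rule(mcase, ctx) is rule)
```

The GenCF-rules variant discussed next amounts to returning all maximal candidates instead of a single one.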
Under other circumstances, however, it may be reasonable to "credit" every rule that satisfies these conditions as a "genCF-rule," and thus allow for the generation of a larger number of "confidence boosting" refinement experiments. In RM this is easily accomplished by altering the first definition given above in the following manner:

  [define-function GenCF-rules (mcase)
    SELECT {rule} IN (satisfied-rules-for-hypothesis (pdx mcase) mcase)
    WITH-MAX (rule-cf rule)]

(In English: given mcase, select the set of rules among the satisfied rules for the correct conclusion of mcase that have the maximum confidence factor.)

5. Primitives for Supporting Heuristic Refinement Generation

According to the general model of heuristic refinement generation [7, 2], specific plausible rule refinements are generated by evaluating heuristics that relate the observed behavior and structural properties of rules to appropriate classes of rule refinements. Such heuristics will utilize refinement concepts or functions of the sort described above and elsewhere [3]. Here is an example of how such a heuristic is defined in RM:

  [define-heuristic
    (if (> (GenCF rule) 0)
        (operation raise-cf rule (mean-cdx-cf (GenCF-mcases rule))))]

(English translation: if GenCF(rule) > 0 then raise the confidence factor of rule to the mean value of CDX(mcase) over the mcases that contribute to GenCF(rule), i.e., try to "win" some of the mcases contributing to GenCF(rule) by boosting the confidence factor of rule to the mean value of the currently incorrect conclusion in these mcases.)

The above heuristic is actually a simplified version of a similar heuristic currently in use in SEEK2. mean-cdx-cf(cases) and GenCF-mcases(rule) are also functions defined in terms of RM primitives.

6. Defining Control Strategies for Experimentation and Selection

A fully automatic refinement system will not only generate refinement suggestions, it will also test them over the current data base of cases and, perhaps, create refined versions of the given knowledge base, kb₀, that incorporate one or more tested refinements. A system that tentatively alters kb₀ in an attempt to open up new refinement possibilities will be called a generational refinement system. SEEK2 is an example of what we may call a single-generation refinement system: it is a generational system that keeps only one refined version of kb₀ at any given time. A refinement system will be said to be a multiple-generation system if it is capable of creating and accessing several versions of the knowledge base at any given time. In order to support the specification of such fully automated refinement systems, a refinement metalanguage must have primitives for evaluating heuristics, trying suggested refinement experiments, and creating new versions of the knowledge base if desired. The metalanguage should also allow the user to put all these primitive actions together into an overall control strategy for the refinement process.

There are several salient RM primitives for performing these tasks. One of these is a procedure, Evaluate-heuristics(rules), that evaluates all the refinement heuristics for each rule in rules. This procedure may also take an optional argument specifying a particular subset of the heuristics to be evaluated, as opposed to the entire set. Suppose that kbₐ is the active knowledge base (see section IV.A above).
Try-Experiment(operation) is a procedure that 1) applies the refinement, operation, to kbₐ, 2) calculates the result of running this refined version of kbₐ over all the cases, and returns a data structure, called a case-vector, containing the new results, and 3) applies the inverse of operation to kbₐ in order to return it to its original form. Thus this procedure does not change kbₐ permanently; rather, the case-vector it returns is used to determine the effectiveness of operation. By comparing this returned case-vector with the case-vector for kbₐ one can determine the exact effect of operation on a case-by-case basis, if desired.

In order to create a refined knowledge base that can be available for future analysis, the procedure Create-Kb(operation) must be invoked. This procedure "creates a knowledge base" kb that is the result of applying operation to kbₐ. The data structure that represents kb contains a) operation, b) a pointer to kbₐ, c) a slot for pointers to kb's potential successors, and d) a table summarizing the basic performance characteristics of this knowledge base, including, for example, the total number of cases this knowledge base diagnoses correctly.

In virtue of these predecessor and successor links, at any time the set of available knowledge bases forms a tree rooted at kb₀. When RM is instructed to activate a knowledge base kb ≠ kbₐ, RM traces back through the ancestors of kb until either kbₐ or kb₀ is reached (one of these events must occur). If kbₐ is reached, then in order to activate kb all the refinement operations occurring in the path from kbₐ to kb are performed on the current internal version of the knowledge base. If kb₀ is reached, the operator information in the path from kb₀ to kb is used to activate kb. (Note that RM never requires more than one internal copy of the knowledge base; to activate a refined version of kb₀, the current internal copy is modified in the specified manner using the information in the "tree of knowledge bases.")

To see how these primitives may be used to specify alternative control strategies, let us first briefly review SEEK2's control strategy. SEEK2 employs a cyclic control strategy that is a form of hill-climbing. In each cycle of operation the system a) evaluates its heuristics to generate refinement experiments, b) attempts all of these experiments, keeping track of the one that yields the greatest net gain in overall performance over the data base of cases, and c) creates and activates a version of the knowledge base that incorporates the refinement yielding the greatest net gain. These cycles continue until a point is reached at which none of the generated refinements yields a positive net gain. There are myriad ways in which this simple procedure can be modified [2]. As a simple example, suppose in a given cycle the system finds n > 1 refinements all yielding the same maximum net gain. One might wish to create n new knowledge bases, one for each of these refinements, and continue the process for each of these. Or perhaps one would like to incorporate a subset of these n refinements in kbₐ, rather than just one of them**. Using the primitive procedures we have discussed, such variations on the simple hill-climbing approach are easily specified in RM.

**It should be noted that there is no logical guarantee that incorporation of several independently tested refinements will yield a positive net gain equal to the sum of their individual gains, or even yield a gain at all [2]. Therefore, implementation of this tactic behooves one to retest the refinements again in tandem to be certain of a beneficial effect.
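Under the same illustrative Python conventions as before, SEEK2's hill-climbing cycle can be sketched in a few lines; try_experiment relies on each operation carrying an inverse, just as Try-Experiment does, and the conclusion objects are assumed to expose .hypothesis alongside .confidence.

```python
def correct_count(ctx, cases):
    """Performance of the active knowledge base over the case base."""
    return sum(1 for c in cases if ctx.cdx(c).hypothesis == c.pdx)

def try_experiment(ctx, operation, cases):
    """Apply, score, undo: the knowledge base is left unchanged."""
    operation.apply(ctx.active)
    score = correct_count(ctx, cases)
    operation.undo(ctx.active)
    return score

def hill_climb(ctx, cases, generate_experiments):
    """Single-generation control: in each cycle try every suggested
    refinement, commit only the best one, and stop as soon as no
    experiment yields a positive net gain."""
    best = correct_count(ctx, cases)
    while True:
        experiments = list(generate_experiments(ctx))
        if not experiments:
            return best
        scores = [try_experiment(ctx, op, cases) for op in experiments]
        gain = max(scores)
        if gain <= best:
            return best
        experiments[scores.index(gain)].apply(ctx.active)  # commit best
        best = gain
```

A multiple-generation variant would, instead of committing in place, call something like Create-Kb so that each refinement grows the tree of knowledge bases described above.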
7. Incorporation of Domain-Specific Metaknowledge

In order to see how RM can be used to express domain-specific metaknowledge, we show how RM can be used to incorporate some of the important features of the approach discussed in [8].

Some rules in a knowledge base (or their supporting beliefs) may be definitional in character, or may represent laws or principles of a theoretical nature. As discussed in [8], it is reasonable to avoid modifications of such rules or beliefs, provided other refinement candidates can be identified. To incorporate this preference in one's refinement system using RM, one would first define the relevant sets using enumeration. For example,

  define-set Definitional-Rules {4 5 15 27}
  define-set Theoretical-Rules {1 2 11}

establishes rules 4, 5, 15, and 27 as belonging to a set called "Definitional-Rules," and rules 1, 2, and 11 as belonging to a set called "Theoretical-Rules." One then has several options. One could cause the refinement system to avoid gathering information for the rules in these sets altogether by modifying the definitions of refinement concepts such as GenCF-rule (see section IV.B.4), e.g., GenCF-rule(mcase) may be defined as the non-definitional and non-theoretical rule such that, etc. Or one could design the refinement system's control strategy so that experiments generated for rules belonging in one of these sets would be attempted only if no experiments on other rules are found in a given refinement cycle or session.

Another important idea contained in the approach to rule refinement discussed in [8] is the notion of a set of rules or beliefs justifying another rule or belief. While [8] presents a variety of attributes of "the justification relation" that are of potential use in knowledge base refinement, here we will only show how the basic idea can be incorporated in refinement systems specifiable using RM. Consider the following RM commands:

  define-binary-relation Justifies (rules rule)
  Assert Justifies ({25 96} 101)

The first command informs RM that the user intends to supply and make use of ordered pairs of the form <rules,rule> under the relation name "Justifies," i.e., ordered pairs whose first element is a set of rules and whose second element is a rule. The second command then makes the assertion that the ordered pair <{25 96},101> belongs to this relation. Intuitively, this represents the fact that rules 25 and 96 together provide a justification for rule 101.

V Some Concrete Results

A version of SEEK2 has been specified in RM. The RM definitions and commands for doing so take up roughly 4 pages of readable text. By contrast, the hard-coded implementation of SEEK2 takes up about 50 pages of code. This is evidence that the primitives in RM have been well chosen: using them it is possible to write compact, easily understandable, but nevertheless powerful specifications. As we claimed earlier, a well-designed metalinguistic system allows one to easily specify what one wants done, without having to worry about the details of how it is achieved. In [3] we reported that the hard-coded version of SEEK2, working on a rheumatology knowledge base of 140 rules and 121 cases, was able to improve performance from a level of 73% (88 out of 121 cases diagnosed correctly) to a level of 98% (119 out of 121 cases diagnosed correctly) in roughly 18 minutes of CPU time on a VAX-785.
The RM version of SEEK2 achieves the same results on the same machine in roughly 2 hours of CPU time. This result is much better than was actually anticipated, since no attempt has been made to optimize the Lisp-code translations RM generates from its high-level specifications (beyond any optimizations that may be attributable to the Common Lisp compiler).

In addition, RM has been used to formulate and test alternative control strategies. For example, a modified version of SEEK2's control regime has been tested using the aforementioned knowledge base. Briefly, instead of selecting only the single refinement yielding the greatest net gain in a cycle, the modified control strategy allows for the selection of many successful refinements within a single cycle. For the knowledge base in question, this procedure converges to the same result as the simple hill-climbing approach, but does so in 1 hour of CPU time.

RM has also proven itself to be useful as both a debugging and design tool. Logical bugs in SEEK2 have been discovered through the use of RM, and a number of new refinement concepts and heuristics have recently been added to SEEK2 as a result of experimentation with RM [2]. RM has also been used as a tool for performing experiments concerning the statistical validity of the empirically-based heuristic approach to refinement generation employed in SEEK2***.

***The results of these experiments are encouraging and are reported in [2].

Following an accepted practice in statistical pattern recognition [1], to test a refinement system one can partition the set of domain cases into disjoint training and testing sets, and run the refinement system only on the training set. Afterwards, the resulting refined knowledge base(s) are run on the testing set. Such experiments are easily specified using RM.

VI Summary

A metalinguistic approach to the construction of knowledge base refinement systems has been described, motivated, and implemented. Concrete positive results have been achieved using a system based on this approach. In terms of future directions, it is reasonable to expect that similar metalinguistic approaches will be useful in designing more powerful refinement systems - e.g., systems that include a rule acquisition capability [4] - and that many of the key features of the metalanguage described here will be applicable to the design of metalanguages corresponding to richer object-languages.

The research on a metalinguistic framework presented here may be seen as an exploration of the consequences of applying the "knowledge is power" principle to the domain of knowledge acquisition itself, and more specifically, to knowledge base refinement. If domain knowledge gives a system problem-solving power, and if the domain of interest is itself the problem of making a given knowledge base fit certain given facts more closely, then it follows that metaknowledge about knowledge representation itself - e.g., knowledge of the ways in which formal objects can be used or altered to fit facts, knowledge of the sorts of evidence that can be gathered in support of certain classes of refinements, etc. - must be an essential ingredient of any successful automatic knowledge base refinement system. It also follows that just as there is a knowledge acquisition problem for ordinary "object-level" systems, so too there must be a metaknowledge acquisition problem for knowledge refinement and acquisition systems.
Therefore, just as the use of high-level formal languages has helped researchers to clarify issues and generalize from experiences with object-level knowledge acquisition, one would expect that the use of a high-level metalanguage would provide similar benefits with respect to the metaknowledge acquisition problem. It is hoped that the work presented here will be seen as justifying this expectation.

VII Acknowledgments

I want to thank Sholom Weiss, Casimir Kulikowski, Peter Politakis, and Alex Borgida for helpful criticisms of this work.

References

1. Fukunaga, K. Introduction to Statistical Pattern Recognition. Academic Press, New York, 1972.
2. Ginsberg, A. Refinement of Expert System Knowledge Bases: A Metalinguistic Framework for Heuristic Analysis. Ph.D. Thesis, Department of Computer Science, Rutgers University, 1986.
3. Ginsberg, A., Weiss, S., and Politakis, P. SEEK2: A Generalized Approach to Automatic Knowledge Base Refinement. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California, 1985, pp. 367-374.
4. Kahn, G., Nowlan, S., McDermott, J. MORE: An Intelligent Knowledge Acquisition Tool. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, CA, 1985, pp. 581-584.
5. Lenat, D. The Role of Heuristics in Learning By Discovery: Three Case Studies. In Machine Learning, Tioga Publishing Company, 1983.
6. Mitchell, T. "Generalization as Search". Artificial Intelligence 18 (1982), 203-226.
7. Politakis, P. and Weiss, S. "Using Empirical Analysis to Refine Expert System Knowledge Bases". Artificial Intelligence 22 (1984), 23-48.
8. Smith, R., Winston, H., Mitchell, T., and Buchanan, B. Representation and Use of Explicit Justification for Knowledge Base Refinement. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, California, 1985, pp. 673-680.
9. Weiss, S., and Kulikowski, C. EXPERT: A System for Developing Consultation Models. Proceedings of the Sixth International Joint Conference on Artificial Intelligence, Tokyo, Japan, 1979, pp. 942-947.
10. Weiss, S., Kulikowski, C., Amarel, S., and Safir, A. "A Model-Based Method for Computer-aided Medical Decision-Making". Artificial Intelligence 11, 1-2 (August 1978), 145-172.
The FERMI System: Inducing Iterative Macro-operators from Experience

Patricia W. Cheng and Jaime G. Carbonell
Computer Science Department
Carnegie-Mellon University
Pittsburgh PA 15213

Abstract

Automated methods of exploiting past experience to reduce search vary from analogical transfer to chunking control knowledge. In the latter category, various forms of composing problem-solving operators into larger units have been explored. However, the automated formulation of effective macro-operators requires more than the storage and parametrization of individual linear operator sequences. This paper addresses the issue of acquiring conditional and iterative operators, presenting a concrete example implemented in the FERMI problem-solving system. In essence, the process combines empirical recognition of cyclic patterns in the problem-solving trace with analytic validation and subsequent formulation of general iterative rules. Such rules can prove extremely effective in reducing search beyond linear macro-operators produced by past techniques.*

1. Introduction

Automated improvement of problem-solving behavior through experience has long been a central objective in both machine learning and problem solving. Starting from STRIPS [9], which acquired simple macro-operators by concatenation and parametrization of useful operator sequences, chunking control knowledge has proven a popular method for reducing search in solving future problems of like type. More comprehensive chunking disciplines have been studied; for instance, SOAR [18] chunks at all possible decision points in the problem solving, whereas MORRIS [15] and PRODIGY [14] are more selective in their formulation of useful macro-operators. Other forms of learning particularly relevant to problem solving include strategy acquisition [11, 17] and various forms of analogical reasoning. Transformational analogy [5] transfers expertise directly from the solution of past problems to new problems that bear close similarity, and derivational analogy [6] transfers problem-solving strategies across structurally similar problem-solving episodes. Both forms of analogy provide the positive and negative exemplar data required to formulate generalized plans [4, 7, 8, 11, 16, 17, 20].

This paper discusses the need for the formulation of a more general class of macro-operators that enable conditional branching and generalized iteration. It then presents a method for automated induction of such macro-operators from recursive and iterative problem-solving traces. Inducing iterative rules (macro-operators) from behavioral traces involves the detection of repetitive patterns in the subgoal structure of the problem-solving episodes. This process includes analysis of the trace to determine the common operations at an appropriate level of abstraction, and extraction of conditions necessary for success, moving solvability checks from late points in the problem-solving process to up-front left-hand-side (LHS) conditions.

*The research reported in this paper was funded in part by the Office of Naval Research under grants number N00014-82-50767 and number N00014-84-K-0345 and in part by a gift from the Hughes Corporation. We thank every member of the FERMI project -- Angela Gugliotta, Angela Hickman, Jill Larkin, Fred Reif, Peter Shell, and Chris Walton -- for their valuable discussions. We are especially grateful to Peter Shell and to Chris Walton for their indispensable help on using Rulekit and on programming, respectively.
For instance, an iterative rule acquired by FERMI solves independent linear equations in multiple unknowns by repeated substitution of expressions containing progressively fewer variables. This (or any other) method can yield a unique solution only if there are as many linearly independent equations as there are variables. Such a condition is deduced automatically by analysis of the problem and is subsequently added to the LHS of the iterative rule, eliminating the need to perform all the step-by-step substitutions in order to discover at the end of the process that there are remaining variables and no remaining equations, or that there is a contradiction. The techniques for developing and implementing this type of learning, as elaborated in subsequent sections, provide a useful addition to the repertoire of machine learning methods in problem solving.

2. Overview of FERMI

FERMI**, our experimental testbed for iterative rule induction, is a general problem solver in the natural sciences. Its flexible architecture has been described elsewhere [3, 13]; here we focus only on those aspects directly relevant to automated induction of iterative rules. FERMI separates problem-solving knowledge from domain knowledge, representing the former as strategies and the latter as factual frames at different levels of abstraction in a semantic frame network. Thus, general concepts such as conservation of mass or equilibrium conditions need be represented only once and inherited where appropriate. Similarly, problem-solving methods, such as iterative decomposition***, are encoded as general strategies applicable to a wide variety of problems. FERMI has successfully solved problems in areas as diverse as fluid statics, linear algebra, classical mechanics, and DC circuits, applying the same general problem-solving strategies and some of the same general domain concepts. Its solution traces record the goal-subgoal tree, the methods used to attack each subproblem, and the causes of success or failure at every intermediate step in the reasoning.

**FERMI is an acronym for Flexible Expert Reasoner with Multi-Domain Inference, and a tribute to Enrico Fermi, who displayed abilities to solve difficult problems in many of the natural sciences by the application of general domain principles and problem-solving strategies.

***Iterative decomposition proceeds as follows:
1. Isolate the largest manageable subproblem.
2. Solve that subproblem by direct means.
3. If nothing remains to be solved, compose the solutions to the subproblems into a solution to the original problem.
4. If part of the problem remains to be solved, check whether that remaining part is a reduced version of the original problem.
5. If so, go to 1, and if not halt with failure.

3. Acquiring Iterative Macro-operators

Many problems share an implicit recursive or iterative nature. These problems include mundane everyday activities, such as walking until a destination is reached and eating until hunger is satisfied, as well as many problems in mathematics and science. When given a trace that exhibits a fixed number of iterative cycles before solution is reached, current methods of forming macro-operators such as STRIPS, ACT*, and SOAR [1, 9, 10, 18] cannot produce operators that will generalize to an arbitrary number of iterations. Indeed, they cannot even detect the iterative nature of the problem. The MACROPS facility in STRIPS [9], for instance, would add all subsequences of primitive operators for as many cycles as the instance
problem required into its triangle table, generating huge numbers of macro-operators and failing to capture the cyclic nature of the solution. Anderson's ACT* [1, 2] would compile one (or more) linear macro-operators for each number of repetitions, also failing to capture the iteration. Thus, for any single cycle of iteration, existing macro-operator formation systems will, at best, produce macro-operators that apply to a predetermined number of iterations, which would not generalize to a fewer or greater number of cycles. Moreover, as we remarked earlier, each cycle may select different methods for solving the same subgoals, and the regularity exists at a higher level of abstraction in the subgoal trace. Most earlier systems (SOAR partially excepted) do not chunk problem-solving traces at a higher level of abstraction than the sequence of instantiated operators. As we will illustrate with an example problem in the familiar domain of solving simultaneous linear equations, the exact sequence of rules may vary from cycle to cycle while preserving an overall subgoal structure.

The learning in our program proceeds in three steps:
1. detection of an iterative pattern in the solution trace at the appropriate level of abstraction and granularity,
2. formation of a macro-operator that transforms a state at the beginning of a single iterative cycle to the state at the beginning of the next cycle, and
3. formation of an iterative operator that checks for generalized applicability conditions inferred from the macro-operator together with conditions immediately following the iterative sequence in the successful solution trace.

Below we elaborate on each step with illustrations drawn from our example problem on solving simultaneous linear equations.

3.1. Pattern Detection

What type of repetitive pattern in the solution trace would warrant the formation of an iterative rule? We think that requiring identical sequences of rules would be too restrictive, because partially matched sequences may nonetheless contain information on equivalent choices and orderings of operators. Consider repeated instances of the same subproblem - say, to establish a precondition on occasions when it is not already satisfied. The instances may (or may not) require different operators. In our algebra example, after the execution of the rule select-var+, if the problem state happens to include an equation that has the variable returned by select-var+ on its LHS, then the rule var-on-lhs+ would apply. Otherwise, rule var-on-lhs- would have to be executed before var-on-lhs+ applies. Thus, which rule follows select-var+ could vary depending on the particular problem state. Nonetheless, the specification that either var-on-lhs- or var-on-lhs+ - and not other operators irrelevant to variable substitution - follows the execution of select-var+ is useful. It reduces the number of matches to be done by an amount proportional to the number of operators excluded. Notice that the two alternative rules are different paths satisfying the same subgoal.

To capture information on sequencing that is common across differing circumstances, our pattern detector looks for consecutive, identical repetition of any sequence of changes in subgoals. Subgoals are generated and removed by rules during problem solving. Each column in Figure 3 shows a type of trace of the solution path for the example problem in Figure 2: a trace of the rules applied, a trace of the subgoals of those rules, and a trace of the sequence of changes in the subgoals. As can be seen, consecutive identical repetition is apparent only in the last type of trace. (Identical repetitions are bracketed in the figure.) The second type of trace varies across cycles because the number of rules required to satisfy a subgoal may vary from cycle to cycle.
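A minimal sketch of this detection step (not FERMI's actual implementation): scan the trace of subgoal changes for the shortest block that repeats consecutively and identically at least twice.

```python
def detect_cycle(subgoal_changes):
    """Find the shortest block that repeats consecutively and identically
    at least twice in the trace; return (start, length, repetitions)
    or None if the trace exhibits no such cycle."""
    n = len(subgoal_changes)
    for length in range(1, n // 2 + 1):
        for start in range(n - 2 * length + 1):
            block = subgoal_changes[start:start + length]
            reps = 1
            while subgoal_changes[start + reps * length:
                                  start + (reps + 1) * length] == block:
                reps += 1
            if reps >= 2:
                return start, length, reps
    return None

# Two identical cycles of subgoal additions (+) and removals (-):
trace = ["+select variable", "-select variable",
         "+get equation", "-get equation",
         "+form new equation", "-form new equation"] * 2
print(detect_cycle(trace))   # -> (0, 6, 2)
```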
3.2. Formation of Macro-operators with Conditionals

After an iterative pattern is detected, the program forms a macro-operator by composing the rules in a single cycle of the iteration, as an intermediate step towards forming an iterative operator. The sequence of operators to be composed is therefore determined by the pattern detected, and is less arbitrary than in systems such as STRIPS [9] and ACT* [1], in which any sequence can be a candidate for a macro-operator. Although our trigger for forming a macro-operator differs from others, the actual formation is in the tradition of macro-operator learning systems such as STRIPS and ACT*, with the exception that we allow for alternative actions conditional on the problem state within the same operator. The greater generality engendered by this feature helps avoid the proliferation of macro-operators in a problem solver [15, 14]. Assuming that each conditional consists of a simple if-then-else branch, that there is a series of n conditionals in a cycle of iteration, and that these conditionals are independent of each other, the number of traditional macro-operators - which do not allow internal conditionals - required to cover the same state space would be 2^n.

A set of operators for solving such systems of equations is listed in Figure 1. The operators are all in the form of standard condition-action rules. (Variables in the figure are preceded by an = sign, all LHS conditions are conjoined, and all RHS actions are evaluated sequentially.) A trace using these operators to solve an algebra problem is shown in Figure 2. The solution path involves:
1. selecting an appropriate variable,
2. rearranging an equation to express this variable in terms of others, if the equation does not already appear in that form,
3. substituting the equivalent expression for the variable wherever the variable occurs in the remaining set of equations,
4. eliminating the equation used for substitution, and
5. repeating the above steps until only an equation that contains no variable other than the desired unknown remains.

Internal conditionals are implemented in our program by an agenda control structure on the right-hand side (RHS) of the macro-operator. An agenda consists of an ordered list of "buckets" that contain ordered sets of operators. In our program the buckets, and the operators in each bucket, are tested in order. When an operator does not apply, the next operator in the same bucket is tested. When an operator does apply, control is returned to the first operator in the bucket. Control is passed on to the next bucket when no rule in the bucket applies or when a rule that applies halts execution of the bucket. When no rule in the last bucket applies or when it halts, control leaves the agenda and is returned to the top level.
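The bucket semantics just described can be pinned down with a small sketch; the operator protocol (each operator returns None when its conditions fail, "fired" when it applies, or "halt" to end its bucket) is an assumption made for illustration.

```python
def run_agenda(buckets, state):
    """Agenda control for a macro-operator's RHS: buckets are tried in
    order; within a bucket, any operator that fires sends control back
    to the bucket's first operator (so e.g. replace+ can keep firing
    until no occurrence of the selected variable remains); a 'halt'
    outcome, or a pass through the bucket in which nothing applies,
    moves control on to the next bucket."""
    for bucket in buckets:
        restart = True
        while restart:
            restart = False
            for op in bucket:
                outcome = op(state)
                if outcome == "halt":
                    break             # leave this bucket for the next
                if outcome == "fired":
                    restart = True    # re-test from the first operator
                    break
    return state
```

Placing the subgoal-satisfied test first in each bucket, with a "halt" outcome, is what yields the conditional-branch behavior described below.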
solve-unknown-1-equation
  LHS: the current goal is to solve for unknown; the number of equations is 1; the number of variables is 1; the desired unknown is =u; there is an equation =e that contains =u; there are no other equations
  RHS: solve for =u in =e, and pop success

solve-unknown-n-equations
  LHS: the current goal is to solve for unknown; the number of equations is > 1; the number of variables is > 1; the desired unknown is =u; there is no equation that contains =u and has no other variables in it
  RHS: set up subgoals to (1) select a variable for substitution, (2) get an equation with the selected variable on its LHS, (3) form a new equation by substitution, (4) solve for unknown

select-var+
  LHS: the current goal is to select a variable for substitution; the desired unknown is =u; there is a variable =v that is not =u, and =v appears in at least 2 equations
  RHS: mark =v as selected, and pop success

select-var-
  LHS: the current goal is to select a variable for substitution; there is no variable that is not the unknown and appears in at least 2 equations
  RHS: pop failure

var-on-lhs+
  LHS: the current goal is to get an equation with the selected variable on its LHS; the selected variable is =v; and there is an equation with =v on its LHS
  RHS: mark the equation as selected, and pop success

var-on-lhs-
  LHS: the current goal is to get an equation with the selected variable on its LHS; the selected variable is =v; and there is no equation with =v on its LHS, but there is an equation =e that contains =v
  RHS: rearrange =e so that =v is on its LHS

replace+
  LHS: the current goal is to form a new equation by substitution; the selected equation is =e1; the selected variable is =v; and =v occurs in a second equation =e2
  RHS: substitute occurrences of =v with the RHS of =e1

replace-
  LHS: the current goal is to form a new equation by substitution; the selected equation is =e; the selected variable is =v; no equation other than =e contains =v; the number of equations is =nume; and the number of variables is =numv
  RHS: remove =v from working memory, remove =e from working memory, set =nume to (=nume - 1), set =numv to (=numv - 1), and pop success

Figure 1: Operators for the algebra problem.

We exploit this control structure by placing in each agenda bucket a disjunctive set of operators for satisfying the same subgoal. The automated learner puts operators in a bucket in an agenda when it detects in the solution trace sets of operators that have the same subgoal on their LHS, but where each member of the set has condition elements that negate condition elements in each of the other members of the set. The negated condition elements obviously cannot be composed on the LHS of a macro-operator, and are instead left to form separate operators in a bucket on the RHS of the macro-operator. In each bucket the operator that checks for satisfaction of the subgoal - and therefore halts execution of the bucket when its conditions are satisfied - is placed in the first position. In this manner conditional branches are formulated in new macro-operators without altering the uniform top-level control structure. In our algebra example there are two sets of conditionals, with two operators in each set: var-on-lhs+ and var-on-lhs- in one set, and replace- and replace+ in the other set. The first set is a simple if-then-else conditional; the second set is repeatedly tested until replace- is applicable, i.e., when there are no more occurrences of the variable to be replaced.
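To make the running example concrete, here is a compact Python sketch of the five-step substitution procedure that the Figure 1 operators carry out; it is a direct implementation for illustration, not FERMI's rule-based machinery.

```python
from fractions import Fraction as F

# A linear equation  sum(coeffs[v] * v) = const  is a (coeffs, const) pair.

def solve_for(eq, var):
    """Rearrange eq so that `var` is alone on the LHS (cf. var-on-lhs-):
    returns (expr, c) meaning  var = sum(expr[w] * w) + c."""
    coeffs, const = eq
    k = coeffs[var]
    return {v: -c / k for v, c in coeffs.items() if v != var}, const / k

def substitute(eq, var, expr, c):
    """Replace occurrences of `var` in eq by the form (expr, c) (replace+)."""
    coeffs, const = eq
    if var not in coeffs:
        return eq
    k = coeffs[var]
    new = {v: cf for v, cf in coeffs.items() if v != var}
    for v, cf in expr.items():
        new[v] = new.get(v, F(0)) + k * cf
    return {v: cf for v, cf in new.items() if cf}, const - k * c

def solve(equations, unknown):
    eqs = [({v: F(c) for v, c in cs.items()}, F(k)) for cs, k in equations]
    while len(eqs) > 1:
        # select-var+: any variable other than the desired unknown
        var = next(v for cs, _ in eqs for v in cs if v != unknown)
        src = next(e for e in eqs if var in e[0])     # equation holding var
        expr, c = solve_for(src, var)                 # get var on a LHS
        # replace+ on every other equation; replace- drops the source
        eqs = [substitute(e, var, expr, c) for e in eqs if e is not src]
    coeffs, const = eqs[0]                            # solve-unknown-1-equation
    return const / coeffs[unknown]

# 3x + y = 10, 4z - 3y = 0 (i.e. z = 3y/4), 2x + y + 2z = 14; unknown x.
eqs = [({"x": 3, "y": 1}, 10), ({"z": 4, "y": -3}, 0),
       ({"x": 2, "y": 1, "z": 2}, 14)]
print(solve(eqs, "x"))   # -> 2
```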
A macro-operator with conditionals for our example algebra problem is shown in I;igure 4a. With the exception of negated condition elements and their corresponding RJHSs, whenever wc compose a sequence of operators, WC aggrcg&e the condition dnd action clcmcnts of the scqucncc of operators ‘rpplicd in the tr,icc. A\ in other proposed methods of forming macro-operators (c.g., [l]). We eliminate redundancies in the macro- opcrntor by deleting: 1. duplicate condition clcments, 2. condition elements that match a working memory element, including subgoals, cre‘lted by an earlier rule within the sequence, 3. action elements that create subgoals condition elements in the sequcncc, and matched by subsequent 4. condition and action elements whose sole variables bindings from one rule to the next. function is to pass -i92 / SCIENCE Equations: Variables: 3x+ -10 Goal Stack: 3- x: desired unkown solve for unknown z= yl4 Y 2x+y+22=14 x I failed opGGs e olve-unknown-n- \ (e.g., Solve-unknown-l-equation) eqUatiOnS untried operators (e.g., select-var+) [EquatIonsI* [Variables] Initial State Goal Stack: select variable get equation fom new esuations solve for &own I fori new’ wuations I solve for u&norm I Goal Stack: get equation form new equations solve for unknown Goal Stack: form new equation solve for unlulown y: selected variable failed o$Lators JY ar-on-lhs- \ unhied operators Equations: y=lO-3x z=3 14 2x+y+ =14 ai [Variables] / Jz ar-on-lhs+ \ Equations: y=lO-3x 2=3 f4 2x+y+ =14 lz [Variables] important class of iterative operators could not be rcprcscntcd or acquired. The saving due to internal branching on the number of equivalent traditional macro-operators increases cxpc~~cntinllq tiith the number of iteration cycles considered. For problems requiring exactly ,n itcrativc cycles with n indcpendcnt if-then-else conditionals in each cycle, the number of traditional macro-operators Icquired to cover the state space is 2”m. Thus, fur problems requiring iteration up to tn cycles, the total number of macro-operators grow5 to E, = 1 ,~ 2’“. In contrast, a single iterative, conditional macro-operator indcp‘cndcnt of m and II suffices. In view of the number of macro-opcratorc, rcyuircd, unless iterative operators arc formed, it could c’,Gly bc less cl’ficicnt to starch through the large spocc of macro-operators than the original space of operators in problem domains involving iteration. ‘Ihe incf?ciency is cxaccrbated by the fact that thy myriad specific macro- operators would share significant common substructure. Restricting ourselves to non-iterative operators would thercforc sevcrcly limit usctil learning in such domains. An important additional advantage of forming iterative operators is that certain algebraic modifications in the intermedi‘lte macro- operator can be related to the number of iterations i. Given a trace Of a successfU1 path, we can form equations hit11 variables in these algebraic modifications expressing conditions under which ;I solution will bc rc;lchcd through the itcrntivc proccctlurc. If thcrc arc multiple modilic;ltions of this sort, variables in thcsc modifications c;\n bc rcl;M to cilch other through i. ‘13~ infcrrcd rcl;ltic)n CJII help detect sulv,lhility by the itcrativc rule early, bclilrc itCratic\n is ;~ctunlly cntcrcd. I-et us illustrate this principle in our ;IlgCbril cx~rnplc. As can be seen in Figure 3, c~h cycle through this macro-opcI.;ltl)r rcduccs the number of equations and the number of v:\riablcs each by 1. 
3.3. Formation of Iterative Operators

The branching feature above is desirable for an iterative operator because it allows for variation from cycle to cycle. Without it, an important class of iterative operators could not be represented or acquired. The saving due to internal branching in the number of equivalent traditional macro-operators increases exponentially with the number of iteration cycles considered. For problems requiring exactly $m$ iterative cycles with $n$ independent if-then-else conditionals in each cycle, the number of traditional macro-operators required to cover the state space is $2^{nm}$. Thus, for problems requiring iteration up to $m$ cycles, the total number of macro-operators grows to $\sum_{i=1}^{m} 2^{ni}$. In contrast, a single iterative, conditional macro-operator, independent of $m$ and $n$, suffices. In view of the number of macro-operators required, unless iterative operators are formed it could easily be less efficient to search through the large space of macro-operators than through the original space of operators in problem domains involving iteration. The inefficiency is exacerbated by the fact that the myriad specific macro-operators would share significant common substructure. Restricting ourselves to non-iterative operators would therefore severely limit useful learning in such domains.

An important additional advantage of forming iterative operators is that certain algebraic modifications in the intermediate macro-operator can be related to the number of iterations i. Given a trace of a successful path, we can form equations with variables in these algebraic modifications expressing conditions under which a solution will be reached through the iterative procedure. If there are multiple modifications of this sort, variables in these modifications can be related to each other through i. The inferred relation can help detect solvability by the iterative rule early, before iteration is actually entered. Let us illustrate this principle in our algebra example. As can be seen in Figure 3, each cycle through this macro-operator reduces the number of equations and the number of variables by 1. The reduction for i cycles would be (q - i) and (v - i) respectively, where q is the number of equations given and v is the number of variables in the given equations. From the solution trace we know that when the number of equations and the number of variables are both 1, i.e., when q - i = 1 and v - i = 1, a solution can be reached. Eliminating i from these two equations, we get q = v. Putting this inferred relation in the LHS of the iterative rule helps screen out insoluble problems without actually iterating through the solution procedure. Other information can be similarly precomputed and fronted as operational conditions on the LHS of new iterative operators.

The iterative operator formed in our example problem is presented in Figure 4b. In FERMI, the LHS of iterative operators is formed by an aggregate of condition elements that need no iteration. They are:

1. condition elements in the LHS of the intermediate macro-operator with variables or constants that are not modified by the operator,
2. condition elements whose variables undergo simple algebraic modifications by the operator - modifications such as addition, multiplication, or division by a constant or variable - and
3. checks on relations between the above variables inferred through the successful solution trace and the number of iterations - checks such as equating the number of unknowns to the number of equations in our example.

The RHS of the iterative operator consists of:

1. a statement initializing a counter for the number of iterations,
2. an iterative agenda call to the intermediate macro-operator formed earlier (see Figure 4b), and
3. simple algebraic modifications based on the number of iterations. For instance, the number of equations in our example algebra problem is reduced by the number of iterations.

The macro-operator halts when its conditions are no longer satisfied. Note that the iterative call to the macro-operator is the only truly iterative component required.

Figure 3: Three types of traces of the example problem. Information extracted from the same problem-solving step appears in the same row. (The third column of the figure, the trace of changes in subgoals, is not reproduced here.)

  trace of rules                trace of subgoals of rules
  solve-unknown-n-equations     solve for unknown
  select-var+                   select variable
  var-on-lhs-                   get equation for substitution
  var-on-lhs+                   get equation for substitution
  replace+                      form new equation
  replace+                      form new equation
  replace-                      form new equation
  solve-unknown-n-equations     solve for unknown
  select-var+                   select variable
  var-on-lhs+                   get equation for substitution
  replace+                      form new equation
  replace-                      form new equation
  solve-unknown-1-equation      solve for unknown
  (success)                     (success)
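In execution, the LHS/RHS structure just described amounts to the following scheme. This is a hypothetical sketch of the operator shown in Figure 4b below; the working-memory dictionary and function names are our own, not FERMI's:

    def i_solve_unknown(wm, applicable, fire):
        """wm holds 'nume'/'numv'; applicable and fire are the LHS test and
        RHS actions of the intermediate macro-operator m-solve-unknown."""
        if not (wm["nume"] > 1 and wm["numv"] > 1
                and wm["nume"] == wm["numv"]):   # fronted relation q = v
            return False                         # screens out insoluble cases
        iterations = 0
        while applicable(wm):                    # the iterative agenda call
            fire(wm)
            iterations += 1
        wm["nume"] -= iterations                 # algebraic modifications done
        wm["numv"] -= iterations                 # once, from the iteration count
        return True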
To coordinate with the iterative operator, the intermediate macro-operator (in the RHS of the iterative operator) is modified as follows: the first two kinds of condition elements just listed are removed from its LHS and from the LHSs of the operators in the agenda in its RHS. These are elements that require no iteration and have been moved to the LHS of the iterative operator. Corresponding action elements - those that do simple algebraic modifications on the variables in the condition elements - are also removed. Such elements are no longer necessary because these modifications are done more efficiently in the RHS of the iterative operator, in a single step, by relating the modifications directly to the number of iterations. In their place a counter for the number of iterations is added.

Figure 4a: Intermediate macro-operator formed from a single cycle

m-solve-unknown
  LHS: the current goal is to solve for unknown;* the desired unknown is =u;* the number of equations =nume is > 1;* the number of variables =numv is > 1;* there is a variable =v that is not =u; =v appears in at least 2 equations; there is no equation that contains =u and has no other variables in it.
  RHS: call agenda with bucket 1: (m-var-on-lhs+ m-var-on-lhs-), bucket 2: (m-replace- m-replace+).

m-var-on-lhs+
  LHS: there is an equation =e with =v on its LHS.
  RHS: mark the equation as selected, and halt bucket.

m-var-on-lhs-
  LHS: there is no equation with =v on its LHS, but there is an equation =e that contains =v.
  RHS: rearrange =e so that =v is on its LHS.

m-replace-
  LHS: the selected equation is =e; no equation other than =e contains =v; the number of equations is =nume;* and the number of variables is =numv.*
  RHS: remove =e from working memory, remove =v from working memory, set =nume to (=nume - 1),* set =numv to (=numv - 1),* and halt bucket.

m-replace+
  LHS: the selected equation is =e1, and =v occurs in a second equation =e2.
  RHS: substitute occurrences of =v with the RHS of =e1.

* Elements marked with an asterisk are removed when the iterative operator is formed, and a counter =iterations for the number of iterations is added.

Figure 4b: Iterative operator

i-solve-unknown
  LHS: the current goal is to solve for unknown; the desired unknown is =u; the number of equations =nume is > 1; the number of variables =numv is > 1; =nume is equal to =numv.
  RHS: set =iterations to 0; call agenda with bucket: (m-solve-unknown); set =nume to (=nume - =iterations); set =numv to (=numv - =iterations).

FERMI solves many problems requiring iteration, including simultaneous algebraic equations and physics problems such as finding the pressure difference between two points a and b in a container holding multiple layers of liquids of various densities. A path from a to b is repeatedly decomposed until the requirements for applying the formula for pressure difference in a single liquid are met. The iterative operator to be learned from the problem-solving process is equivalent to the formula

  \Delta P_{ab} = g \sum_{i=1}^{n} \rho_i \, \Delta h_i

where $\Delta P_{ab}$ is the pressure difference between a and b, $g$ is the surface gravity, $i$ is the summation index, $n$ is the total number of liquids between a and b, $\rho_i$ is the density of liquid $i$, and $\Delta h_i$ is the change in height of a path from a to b in liquid $i$.
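The physics iteration just described can be made concrete with a small sketch (ours; the layer data and names are invented for illustration):

    def pressure_difference(layers, g=9.81):
        """layers: (density rho_i, height change dh_i) for each liquid on
        the path from a to b; one cycle per layer, as in the formula above."""
        delta_p = 0.0
        for rho, dh in layers:
            delta_p += g * rho * dh      # single-liquid formula, per cycle
        return delta_p

    # e.g. 0.3 m of water over 0.2 m of oil:
    # pressure_difference([(1000.0, 0.3), (900.0, 0.2)])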
4. Concluding Remarks

Learning in problem solving requires more than rote memorization of linear operator sequences into macro-operators [15]. Parametrizing the macro-operators and reducing redundant condition and action elements provide only the first step towards more general strategy learning. In the FERMI project we have gone two steps further:

1. automated generation of macro-operators with conditional branching, and
2. automated creation of iterative macro-operators to solve problems with cyclic subgoal structure.

The integrated implementation in FERMI of these two techniques, on top of the traditional macro-operator formation method, provides a theoretical enhancement of the power of macro-operators, and major savings in both the number of requisite macro-operators and the time required to search for applicable operators in future problem solving. Further work should be done on correcting over-generalization in iterative rules by learning from failure, on generalizing the types of inferences that can be made from the iterative trace to produce up-front LHS tests for an iterative operator, and on specifying the types of condition elements that can be transferred from the intermediate macro-operator to the LHS of the iterative operator, so as to improve early detection of solvability.

In addition to a larger-scale implementation and more extensive testing of our iterative macro-operator formation techniques, future directions for learning in FERMI include:

- Incorporating analogical reasoning techniques [5, 6], which can provide a basis for transferring powerful macro-operators across related domains, as well as the more traditional transfer of solution sequences across related problems.
- Exploring the role of automatic programming in the creation of ever more elaborate macro-operators. Thus far, our primary effort has been in the detection, analysis, and evaluation of problem-solving traces in order to extract all the information required to formulate useful, generalized macro-operators. But, as the complexity of the task increases, so does the necessity for principled automatic synthesis of such macro-operators.

5. References

1. Anderson, J. R., The Architecture of Cognition, Harvard University Press, Cambridge, MA, 1983.
2. Anderson, J. R., "Acquisition of Proof Skills in Geometry," in Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
3. Carbonell, J. G., Larkin, J. H. and Reif, F., "Towards a General Scientific Reasoning Engine," Tech. report CIP #445, Carnegie-Mellon University, Computer Science Department, 1983.
4. Carbonell, J. G., "Experiential Learning in Analogical Problem Solving," Proceedings of the Second Meeting of the American Association for Artificial Intelligence, Pittsburgh, PA, 1982.
5. Carbonell, J. G., "Learning by Analogy: Formulating and Generalizing Plans from Past Experience," in Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
6. Carbonell, J. G., "Derivational Analogy: A Theory of Reconstructive Problem Solving and Expertise Acquisition," in Machine Learning: An Artificial Intelligence Approach, Volume II, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Morgan Kaufmann, 1986.
7. Dietterich, T. and Michalski, R., "Inductive Learning of Structural Descriptions," Artificial Intelligence, Vol. 16, 1981.
8. Dietterich, T. G. and Michalski, R. S., "A Comparative Review of Selected Methods for Learning Structural Descriptions," in Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
9. Fikes, R. E. and Nilsson, N. J., "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, Vol. 2, 1971, pp. 189-208.
10. Laird, J. E., Rosenbloom, P. S. and Newell, A., "Chunking in SOAR: The Anatomy of a General Learning Mechanism," Machine Learning, Vol. 1, 1986.
11. Langley, P. and Carbonell, J. G., "Language Acquisition and Machine Learning," in Mechanisms for Language Acquisition, B. MacWhinney, ed., Lawrence Erlbaum Associates, 1986.
12. Larkin, J. H., "Enriching Formal Knowledge: A Model for Learning to Solve Problems in Physics," in Cognitive Skills and Their Acquisition, J. R. Anderson, ed., Lawrence Erlbaum Associates, Hillsdale, NJ, 1981.
13. Larkin, J., Reif, F. and Carbonell, J. G., "FERMI: A Flexible Expert Reasoner with Multi-Domain Inference," Cognitive Science, Vol. 9, 1986.
14. Minton, S., Carbonell, J. G., Knoblock, C., Kuokka, D. and Nordin, H., "Improving the Effectiveness of Explanation-Based Learning," Tech. report, Carnegie-Mellon University, Computer Science Department, 1986.
15. Minton, S., "Selectively Generalizing Plans for Problem Solving," Proceedings of IJCAI-85, 1985, pp. 596-599.
16. Mitchell, T. M., Version Spaces: An Approach to Concept Learning, PhD dissertation, Stanford University, December 1978.
17. Mitchell, T. M., Utgoff, P. E. and Banerji, R. B., "Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics," in Machine Learning: An Artificial Intelligence Approach, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
18. Rosenbloom, P. S. and Newell, A., "The Chunking of Goal Hierarchies: A Generalized Model of Practice," in Machine Learning: An Artificial Intelligence Approach, Volume II, R. S. Michalski, J. G. Carbonell and T. M. Mitchell, eds., Morgan Kaufmann, Los Altos, CA, 1986.
19. Shell, P. and Carbonell, J. G., "The RuleKit Reference Manual," CMU Computer Science Department internal paper.
20. Winston, P., Artificial Intelligence, Addison-Wesley, Reading, MA, 1977.
GENERALIZED PLAN RECOGNITION

Henry A. Kautz and James F. Allen
Department of Computer Science
University of Rochester
Rochester, New York 14627

(This work was supported in part by the Air Force Systems Command, Rome Air Development Center, Griffiss Air Force Base, and the Air Force Office of Scientific Research, under Contract No. F30602-85-C-0008, and by the National Science Foundation under grant DCR-8502481.)

ABSTRACT

This paper outlines a new theory of plan recognition that is significantly more powerful than previous approaches. Concurrent actions, shared steps between actions, and disjunctive information are all handled. The theory allows one to draw conclusions based on the class of possible plans being performed, rather than having to prematurely commit to a single interpretation. The theory employs circumscription to transform a first-order theory of action into an action taxonomy, which can be used to logically deduce the complex action(s) an agent is performing.

1. Introduction

A central issue in Artificial Intelligence is the representation of actions and plans. One of the major modes of reasoning about actions is called plan recognition, in which a set of observed or described actions is explained by constructing a plan that contains them. Such techniques are useful in many areas, including story understanding, discourse modeling, strategic planning, and modeling naive psychology. In story understanding, for example, the plans of the characters must be recognized from the described actions in order to answer questions based on the story. In strategic planning, the planner may need to recognize the plans of another agent in order to interact (co-operatively or competitively) with that agent.

Unlike planning, which often can be viewed as purely hypothetical reasoning (i.e., if I did A, then P would be true), plan recognition models must be able to represent actual events that have happened, as well as proposing hypothetical explanations of actions. In addition, plan recognition inherently involves more uncertainty than planning. Whereas in planning one is interested in finding any plan that achieves the desired goal, in plan recognition one must attempt to recognize the particular plan that another agent is performing. Previous plan recognition models, as we shall see, have been unable to deal with this form of uncertainty in any significant way.

A truly useful plan recognition system must, besides being well-defined, be able to handle various forms of uncertainty. In particular, often a given set of observed actions may not uniquely identify a particular plan, yet many important conclusions can still be drawn and predictions about future actions can still be made. For example, if we observe a person in a house picking up the car keys, we should be able to infer that they are going to leave the house to go to the car, even though we cannot tell if they plan to drive somewhere or simply to put the car in the garage. On the basis of this information, we might ask the person to take the garbage out when they leave. To accomplish this, a system cannot wait until a single plan is uniquely identified before drawing any conclusions. On the other hand, a plan recognizer should not prematurely jump to conclusions either. We do not want to handle the above example by simply inferring that the person is going to put the car into the garage when there is no evidence to support this interpretation over the one involving driving to the store.
In addition, a useful plan recognizer in many contexts cannot make simplistic assumptions about the temporal ordering of the observations either. In story understanding, for example, the actions might not be described in the actual order in which they occurred. In many domains, we must also allow actions to occur simultaneously with each other, or allow the temporal ordering not to be known at all. Finally, we must allow the possibility that an action may be executed as part of two independent plans. We might, for example, make enough pasta one night to prepare two separate meals over the next two days.

None of the previous models of plan recognition can handle more than a few of these situations in any general way. Part of the problem is that none of the frameworks have used a rich enough temporal model to support reasoning about temporally complex situations. But even if we were able to extend each framework with a general temporal reasoning facility, there would still be problems that remain. Let us consider three of the major approaches briefly, and discuss these other problems.

The explanation-based approaches, outlined formally by [Cha85], all attempt to explain a set of observations by finding a set of assumptions that entails the observations. The problem with this is that there may be many such sets of assumptions with this property, and the theory says nothing as to how to select among them. In practice, systems based on this framework (e.g., [Wil83]) will over-commit, and select the first explanation found, even though it is not uniquely identified by the observations. In addition, they are not able to handle disjunctive information.

The approaches based on parsing (e.g., [Huf82, Sid81]) view actions as sequences of subactions and essentially model this knowledge as context-free rules in an "action grammar." The primitive (i.e., non-decomposable) actions in the framework are the terminal symbols in the grammar. The observations are then treated as input to the parser, and it attempts to derive a parse tree to explain the observations. A system based on this model would suffer from the problem of over-commitment unless it generates the set of possible explanations (i.e., all possible parses). While some interesting temporal aspects of combining plans can be handled by using more powerful grammars such as shuffle grammars, each individual plan can only be modelled as a sequence of actions. In addition, every step of a plan must be observed - there is no capability for partial observation. It is not clear how more temporally complex plans could be modelled, such as those involving simultaneous actions, or how a single action could be viewed as being part of multiple plans.

The final approach to be discussed is based on the concept of "likely" inference (e.g., [All83, Pol84]). In these systems a set of rules is used of the form: "If one observes act A, then it may be that it is part of act B." Such rules outline a search space of actions that produces plans that include the observations. In practice, the control of this search is hidden in a set of heuristics and thus is hard to define precisely. It is also difficult to attach a semantics to such rules as the one above. This rule does not mean that if we observe A, then it is probable that B is being executed, or even that it is possible that B is being executed.
The rule is valid even in situations where it is impossible for B to be in execution. These issues are decided entirely by the heuristics. As such, it is hard to make precise claims as to the power of this formalism.

In this paper we outline a new theory of plan recognition that is significantly more powerful than these previous approaches in that it can handle many of the above issues in an intuitively satisfying way. Furthermore, there are no restrictions on the temporal relationships between the observations. Another important result is that the implicit assumptions that appear to underly all plan recognition processes are made explicit and precisely defined within a formal theory of action. Given these assumptions and a specific body of knowledge about the possible actions and plans to be considered, this theory will give us the strongest set of conclusions that can be made given a set of observations. As such, this work lays a firm base for future work in plan recognition.

Several problems often associated with plan recognition are not considered in the current approach, however. In particular, beyond some simple simplicity assumptions, the framework does not distinguish between a priori likely and non-likely plans. Each logically possible explanation, given the assumptions, is treated equally within the theory. It also can only recognize plans that are constructed out of the initial library of actions defined for a particular domain. As a result, novel situations that arise from a combination of existing plans may be recognized, but other situations that require generalization techniques or reasoning by analogy cannot be recognized.

2. A New View of Plan Recognition

It is not necessary to abandon logic, or to enter the depths of probabilistic inference, in order to handle the problematic cases of plan recognition described above. Instead, we propose that plan recognition be viewed as ordinary deductive inference, based on a set of observations, an action taxonomy, and one or more simplicity constraints.

An action taxonomy is an exhaustive description of the ways in which actions can be performed, and the ways in which any action can be used as a step of a more complex action. Because the taxonomy is complete, one can infer the disjunction of the set of possible plans which contain the observations, and then reason by cases to reduce this disjunction.

An action taxonomy is obtained by applying two closed-world assumptions to an axiomatization of an action hierarchy. The first assumption states that the known ways of performing an action are the only ways of performing that action. The assumption is actually a bit more general, in that it states that the known ways of specializing an action are the only ways. Each time an abstract action is specialized, more is known about how to perform it. For example, because the action type "throw" specializes the action type "transfer location," we can think of throwing as a way to transfer location.

The second assumption states that all actions are purposeful, and that all the possible reasons for performing an action are known. This assumption is realized by stating that if an action A occurs, and P is the set of more complex actions in which A occurs as a substep, then some member of P also occurs.

These assumptions can be stated using McCarthy's circumscription scheme. The action hierarchy is transformed by first circumscribing the ways of specializing an act, and then circumscribing the ways of using an act.
The precise formulation of this operation is described in section 6 below.

The simplicity constraints become important when we need to recognize a plan which integrates several observations. The top of the action taxonomy contains actions which are done for their own sake, rather than as steps of more complex actions. When several actions are observed, it is often a good heuristic to assume that the observations are all part of the same top-level act, rather than each being a step of an independent top-level act. The simplicity constraint which we will use asserts that as few top-level actions occur as possible. The simplicity constraint can be represented by a formula of second-order logic which is similar to the circumscription formula. In any particular case the constraint is instantiated as a first-order formula which asserts "there are no more than n top-level acts," in which n is a particular constant chosen to be as small as possible while still allowing the instantiated constraint to be consistent with the observations and taxonomy.

While one can imagine many other heuristic rules for choosing between interpretations of a set of observed actions, the few given here cover a great many common cases, and seem to capture the "obvious" inferences one might make. More fine-grained plan recognition tasks (such as strategic planning) would probably require some sort of quantitative reasoning.

3. Representing Action

The scheme just described requires a representation of action that includes:
-- the ability to assert that an action actually occurred at a time;
-- a specialization hierarchy;
-- a decomposition (substep) hierarchy.

Action instances are individuals which occur in the world, and are classified by action types. The example domain is the world of cooking, which includes a very rich action hierarchy, as well as a token bit of block stacking. (See Figure 1.) The broad arrows in the figure indicate that one action type is a specialization of another action type, whereas the thin arrows indicate the decomposition of an action into subactions. We will see how to represent this information in logic presently. The diagram does not indicate other conditions and constraints which are also part of an action decomposition. Instances of action types are also not shown.

We introduce particular instances of actions using formulas such as #(E9, MakePastaDish) to mean that E9 is a real action instance of type MakePastaDish. (The symbol # is the "occurs" predicate.) The structure of a particular action can be specified by a set of role functions. In particular, the function T applied to an action instance returns the interval of time over which the action instance occurs. Other roles of an action can also be represented by functions: e.g., Agent(E9) could be the agent causing the action, and Result(E9) could be the particular meal produced by E9. (For simplicity we will assume in this paper that all actions are performed by the same agent.) To record the observation of the agent making a pasta dish at time I7, one would assert:

  ∃e . #(e, MakePastaDish) & T(e) = I7

Action types need not all be constants, as they are here; often it is useful to use functions to construct types, such as Move(x,y). For simplicity, all the actions used in the examples in this paper use only constant action types.

Action specialization is easy to represent in this scheme. In the cooking world, the act of making a pasta dish specializes the act of preparing a meal, which in turn specializes the class of top-level acts.
Specialization statements are simply universally-quantified implications. For example, part of the hierarchy in Figure 1 is represented by the following axioms:

[1] ∀e . #(e, PrepareMeal) ⊃ #(e, TopLevelAct)
[2] ∀e . #(e, MakePastaDish) ⊃ #(e, PrepareMeal)
[3] ∀e . #(e, MakeFettuciniMarinara) ⊃ #(e, MakePastaDish)
[4] ∀e . #(e, MakeFettucini) ⊃ #(e, MakeNoodles)
[5] ∀e . #(e, MakeSpaghetti) ⊃ #(e, MakeNoodles)
[6] ∀e . #(e, MakeChickenMarinara) ⊃ #(e, MakeMeatDish)

The first statement, for example, means that any action instance which is a PrepareMeal is also a TopLevelAct.

The decomposition hierarchy is represented by implications which assert necessary (and perhaps sufficient) conditions for an action instance to occur. This may include the fact that some number of subactions occur, and that various facts hold at various times [All84]. These facts include the preconditions and effects of the action, as well as various constraints on the temporal relationships of the subactions [All83a]. For the level of analysis in the present paper, we do not need to distinguish the minimal necessary set of conditions for an action to occur from a larger set which may include facts which could be deduced from the components of the act. It is also convenient to eliminate some existentially quantified variables by introducing a function S(i,e) which names the i-th subaction (if any) of action e. (The actual numbers are not important; any constant symbols can be used.) For example, the MakePastaDish action is decomposed as follows:

[7] ∀e . #(e, MakePastaDish) ⊃
      ∃tn . #(S(1,e), MakeNoodles) & #(S(2,e), Boil) & #(S(3,e), MakeSauce) &
      Object(S(2,e)) = Result(S(1,e)) &
      hold(noodle(Result(S(1,e))), tn) &
      overlap(T(S(1,e)), tn) & during(T(S(2,e)), tn)

This states that every instance of MakePastaDish consists of (at least) three steps: making noodles, boiling them, and making a sauce. The result of making noodles is an object which is (naturally) of type noodle for some period of time which follows on the heels of the making. (Presumably the noodles cease being noodles after they are eaten.) Furthermore, the boiling action must occur while the noodles are, in fact, noodles. A complete decomposition of MakePastaDish would contain other facts, such as that the result of the MakeSauce act must be combined at some point with the noodles, after they are boiled. The constraint that all the subactions of an action occur during the time of the action is expressed for all acts by the axiom:

[8] ∀i,e . during(T(S(i,e)), T(e))

It is important to note that a decomposable action can still be further specialized. For example, the action type MakeFettuciniMarinara specializes MakePastaDish and adds additional constraints to the above definition. In particular, the type of noodles made in step 1 must be fettucini, while the sauce made in step 3 must be marinara sauce.

A final component of the action hierarchy are axioms which state action-type disjointedness. Such axioms are expressed with the connective "not and," written as ∇:

[9] ∀e . #(e, MakeFettuciniAlfredo) ∇ #(e, MakeFettuciniMarinara)

This simply says that a particular action cannot be both an instance of making fettucini Alfredo and an instance of making fettucini marinara. Disjointedness axioms can be compactly represented and used in resolution-based inference using techniques adapted from [Ten86].

Figure 1: Action hierarchy for the cooking domain. (Broad arrows in the original figure show specialization - TopLevelAct into PrepareMeal and StackBlocks, PrepareMeal into MakePastaDish and MakeMeatDish, MakeNoodles into MakeFettucini and MakeSpaghetti, and so on - while thin arrows show decomposition into subactions such as MakeNoodles, Boil, MakeSauce, MakeMarinara, and MakeChicken.)
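To make the representation concrete, the specialization links can be tabulated and chained mechanically. The following is an illustrative Python sketch of ours; the paper's reasoner is logic-based, not a lookup table:

    SPECIALIZES = {                      # child action type -> parent type
        "PrepareMeal": "TopLevelAct",
        "MakePastaDish": "PrepareMeal",
        "MakeMeatDish": "PrepareMeal",
        "MakeFettuciniMarinara": "MakePastaDish",
        "MakeFettucini": "MakeNoodles",
        "MakeSpaghetti": "MakeNoodles",
        "MakeChickenMarinara": "MakeMeatDish",
    }

    def abstractions(act_type):
        """Every type an instance of act_type also instantiates,
        by chaining axioms like [1]-[6] upward."""
        types = [act_type]
        while act_type in SPECIALIZES:
            act_type = SPECIALIZES[act_type]
            types.append(act_type)
        return types

    # abstractions("MakeFettuciniMarinara") ->
    # ['MakeFettuciniMarinara', 'MakePastaDish', 'PrepareMeal', 'TopLevelAct']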
4. Creating the Taxonomy

The assumptions necessary for plan recognition can now be specified more precisely by considering the full action hierarchy presented in Figure 1. Let KB1 be the set of axioms schematically represented by the graph, including the axioms mentioned above. KB1 will be transformed into a taxonomy by applying the completeness assumptions discussed above. The result of the first assumption (all ways of specializing an action are known) is the database KB2, which includes all of KB1, together with statements which assert specialization completeness. These include the following, where the symbol ⊕ is the connective "exclusive or":

[10] ∀e . #(e, TopLevelAct) ⊃ #(e, PrepareMeal) ⊕ #(e, StackBlocks)
[11] ∀e . #(e, PrepareMeal) ⊃ #(e, MakePastaDish) ⊕ #(e, MakeMeatDish)
[12] ∀e . #(e, MakePastaDish) ⊃ #(e, MakeFettuciniMarinara) ⊕ #(e, MakeFettuciniAlfredo) ⊕ #(e, MakeSpaghettiCarbonara)
[13] ∀e . #(e, MakeMeatDish) ⊃ #(e, MakeChickenMarinara) ⊕ #(e, MakeChickenPrimavera)
[14] ∀e . #(e, MakeNoodles) ⊃ #(e, MakeFettucini) ⊕ #(e, MakeSpaghetti)

These state that every top-level action is either a case of preparing a meal or of stacking blocks; that every meal preparation is either a case of making a pasta dish or of making a meat dish; and so on, all the way down to particular, basic types of meals. Not all actions, of course, are specializations of TopLevelAct. For example, axiom [14] states that every MakeNoodles can be further classified as a MakeFettucini or as a MakeSpaghetti, but it is not the case that any MakeNoodles can be classified as a TopLevelAct.

The second assumption asserts that the given decompositions are the only decompositions. KB2 is transformed into the final taxonomy KB3, which includes all of KB2, as well as:

[15] ∀e . #(e, MakeNoodles) ⊃ ∃a . #(a, MakePastaDish) & e = S(1,a)
[16] ∀e . #(e, MakeMarinara) ⊃ ∃a . [#(a, MakeFettuciniMarinara) & e = S(3,a)] ∨ [#(a, MakeChickenMarinara) & e = S(3,a)]
[17] ∀e . #(e, MakeFettucini) ⊃ ∃a . [#(a, MakeFettuciniMarinara) & e = S(1,a)] ∨ [#(a, MakeFettuciniAlfredo) & e = S(1,a)]

Axiom [15] states that whenever an instance of MakeNoodles occurs, it must be the case that some instance of MakePastaDish occurs. Furthermore, the MakeNoodles act which is required as a substep of the MakePastaDish is in fact the given instance of MakeNoodles. Cases like this, where an action can only be used in one possible super-action, usually occur at a high level of action abstraction. It is more common for many uses of an action to occur in the taxonomy. The given hierarchy has two distinct uses for the action MakeMarinara, and this is captured in axiom [16]. From the fact that the agent is making marinara sauce, one is justified in concluding that an action instance will occur which is either of type MakeFettuciniMarinara or of type MakeChickenMarinara.

All these transformations can easily be performed automatically, given an action hierarchy of the form described in the previous section. The formal basis for these transformations is described in section 6.
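The mechanical character of this transformation can be illustrated with a small sketch. It is ours, and deliberately simplistic: it assumes the tabulated children of each type exhaust its specializations, which is exactly the first completeness assumption:

    from collections import defaultdict

    def specialization_completeness(specializes):
        """From a child -> parent table, emit axioms in the style of
        [10]-[14]; '(+)' abbreviates exclusive or."""
        children = defaultdict(list)
        for child, parent in specializes.items():
            children[parent].append(child)
        axioms = []
        for parent, kids in sorted(children.items()):
            body = " (+) ".join(f"#(e, {k})" for k in sorted(kids))
            axioms.append(f"forall e . #(e, {parent}) -> {body}")
        return axioms

    # With the SPECIALIZES table of the previous sketch this yields, e.g.,
    # forall e . #(e, MakeNoodles) -> #(e, MakeFettucini) (+) #(e, MakeSpaghetti)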
5. Recognition Examples

We are now ready to work through some examples of plan recognition using the cooking taxonomy. In the steps that follow, existentially-quantified variables will be replaced by fresh constants. Constants introduced for observed action instances begin with E, and those for deduced action instances begin with K. Simple cases typical of standard plan recognition are easily accounted for. In this section we shall consider an extended example demonstrating some more problematic cases.

Let the first observation be disjunctive: the agent is observed to be either making fettucini or making spaghetti, but we cannot tell which. This is still enough information to make predictions about future actions. The observation is:

[18] #(E1, MakeFettucini) ∨ #(E1, MakeSpaghetti)

The abstraction axioms let us infer up the hierarchy:

[19] #(E1, MakeNoodles)        abstraction axioms [4], [5]
[20] #(K01, MakePastaDish)     decomposition axiom [15], and existential instantiation
[21] #(K01, PrepareMeal)       abstraction axiom [2]
[22] #(K01, TopLevelAct)       abstraction axiom [1]

Statement [20] together with [7] lets us make a prediction about the future: a boil will occur:

[23] #(S(2,K01), Boil) & Object(S(2,K01)) = Result(E1) & after(T(S(2,K01)), T(E1))

Thus even though the particular plan the agent is performing cannot be exactly identified, specific predictions about future activities can still be made.

The previous step showed how one could reason from a disjunctive observation up the abstraction hierarchy to a non-disjunctive conclusion. With the next observation, we see that going up the decomposition hierarchy from a non-disjunctive observation can lead to a disjunctive conclusion. Suppose the next observation is:

[24] #(E3, MakeMarinara)

Applying axiom [16], which was created by the second completeness assumption, leads to the conclusion:

[25] #(K02, MakeFettuciniMarinara) ∨ #(K02, MakeChickenMarinara)

The abstraction hierarchy can again be used to collapse this disjunction:

[26] #(K02, MakePastaDish) ∨ #(K02, MakeMeatDish)
[27] #(K02, PrepareMeal)
[28] #(K02, TopLevelAct)

At this point the simplicity constraint comes into play. The strongest form of the constraint, that there is only one top-level action in progress, is tried first:

[29] ∀e1,e2 . #(e1, TopLevelAct) & #(e2, TopLevelAct) ⊃ e1 = e2

Together with [22] and [28], this implies:

[30] K01 = K02

Substitution of equals yields:

[31] #(K02, MakePastaDish)

One of the disjointedness axioms from the original action hierarchy is:

[32] ∀e . #(e, MakePastaDish) ∇ #(e, MakeMeatDish)

Statements [31] and [32] let us deduce:

[33] ¬#(K02, MakeMeatDish)
[34] ¬#(K02, MakeChickenMarinara)

Finally, [34] and [25] let us conclude that only one plan, which contains both observations, is occurring:

[35] #(K02, MakeFettuciniMarinara)

Temporal constraints did not play a role in these examples, as they do in more complicated cases. For example, observations need not be received in the order in which the observed events occurred, or actions might be observed in an order where the violation of temporal constraints can allow the system to reject hypotheses. For example, if a Boil act at an unknown time were input, the system would assume that it was the boil act of the (already deduced) MakePastaDish act. If the Boil were constrained to occur before the initial MakeNoodles, then the strong simplicity constraint (and all deductions based upon it) would have to be withdrawn, and two distinct top-level actions postulated.
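The upward chaining from a disjunctive observation in steps [18]-[22] follows a simple pattern: chain each disjunct up the hierarchy and keep what every branch agrees on. A minimal sketch of ours, not the implemented reasoner (SPECIALIZES refers to the illustrative table given earlier):

    def common_abstractions(alternatives, specializes):
        """Conclusions that hold under every disjunct of an observation."""
        def up(t):
            seen = {t}
            while t in specializes:
                t = specializes[t]
                seen.add(t)
            return seen
        return set.intersection(*(up(t) for t in alternatives))

    # common_abstractions({"MakeFettucini", "MakeSpaghetti"}, SPECIALIZES)
    # -> {'MakeNoodles'}; axiom [15] then lifts this to MakePastaDish, and
    # axioms [2] and [1] to PrepareMeal and TopLevelAct, as in [19]-[22].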
Different top-level actions (or any actions, in fact) can share subactions, if such sharing is permitted by the particular domain axioms. For example, suppose every PrepareMeal action begins with a GotoKitchen, and the agent is constrained to remain in the kitchen for the duration of the act. If the agent is observed performing two different instances of PrepareMeal, and is further observed to remain in the kitchen for an interval which intersects the time of both actions, then we can deduce that both PrepareMeal actions share the same initial step. This example shows the importance of including observations that certain states hold over an interval. Without the fact that the agent remained in the kitchen, one could not conclude that the two PrepareMeal actions share a step, since it would be possible that the agent left the kitchen and then returned.

6. Closing the Action Hierarchy

Our formulation of plan recognition is based on explicitly asserting that the action hierarchy (also commonly called the "plan library") is complete. While the transformation of the hierarchy into a taxonomy can be automated, some details of the process are not obvious. It is not correct to simply apply predicate completion, in the style of [Cla78]. For example, even if action A is the only act which is stated to contain act B as a substep, it may not be correct to add the statement

  ∀e . #(e, B) ⊃ ∃e1 . #(e1, A)

if there is some act C which either specializes or generalizes B, and is used in an action other than A. For example, in our action hierarchy, the only explicit mention of MakeSauce appears in the decomposition of MakePastaDish. But the taxonomy should not contain the statement

  ∀e . #(e, MakeSauce) ⊃ ∃a . #(a, MakePastaDish) & e = S(3,a)

because a particular instance of MakeSauce may also be an instance of MakeMarinara, and occur in the decomposition of the action MakeChickenMarinara. Only the weaker statement

  ∀e . #(e, MakeSauce) ⊃ ∃a . [#(a, MakePastaDish) & e = S(3,a)] ∨ [#(a, MakeChickenMarinara) & e = S(3,a)]

is justified. It would be correct, however, to infer from an observation of MakeSauce which is known not to be an instance of MakeMarinara that MakePastaDish occurs.

We would like, therefore, a clear understanding of the semantics of closing the hierarchy. McCarthy's notion of minimal entailment and circumscription [McC85] provides a semantic and proof-theoretic model of the process. The implementation described in section 7 can be viewed as an efficient means for performing the sanctioned inferences. There is not space here to fully explain how and why the circumscription works; more details appear in [Kau85]. A familiarity with the technical vocabulary of circumscription is probably needed to make complete sense of the rest of this section.

Roughly, circumscribing a predicate minimizes its extension. Predicates whose extensions are allowed to change during the minimization are said to vary. All other predicates are called parameters to the circumscription. In anthropomorphic terms, the circumscribed predicate is trying to shrink, but is constrained by the parameters, which can choose to take on any values allowed by the original axiomatization. For example, circumscribing the predicate p in the theory

  ∀x . p(x) ≡ q(x)

where q acts as a parameter does nothing, because q can "force" the extension of p to be arbitrarily large. On the other hand, if q varies during the circumscription, then the circumscribed theory entails that the extension of p is empty.
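For reference, the second-order schema being applied is McCarthy's circumscription formula (our transcription, following [McC85]; P is the minimized predicate and Z the tuple of varied predicates):

    \mathrm{Circum}(A;\,P;\,Z) \;=\; A(P,Z) \,\wedge\, \neg\exists p\,z\,[\,A(p,z) \wedge p < P\,],
    \quad\text{where}\quad p \le P \;\equiv\; \forall x\,(p(x) \rightarrow P(x)),
    \qquad p < P \;\equiv\; (p \le P) \wedge \neg(P \le p).

In the p/q example above, letting q vary corresponds to placing it in Z.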
As demonstrated above, the first assumption states that the known ways of specializing an action are all the ways. Let us call all action types which are not further specialized basic. Then another way of putting the assumption is to say that the "occurs" predicate, #, holds of an instance and an abstract action type only if it holds because # holds of that instance and a basic action type which specializes the abstract type. So what we want is to circumscribe that part of # which applies to non-basic action types. This can be done by adding a predicate #basic which is true of all basic action instances, and letting this predicate act as a parameter during the circumscription of #. In our example domain, the following two statements, which we will call Ψ, define such a predicate:

  Ψ = { ∀x . basic(x) ≡ [x = MakeFettuciniMarinara ∨ x = Boil ∨ ...],
        ∀e,x . #basic(e,x) ≡ #(e,x) & basic(x) }

KB1 is the set of axioms which make up the original action hierarchy. KB2 is then defined to be the circumscription of # relative to KB1 together with Ψ, where all other predicates (including #basic) act as parameters:

  KB2 = Circumscribe(KB1 ∪ Ψ, #)

Interestingly, the process works even if there are many levels of abstraction hierarchy above the level of basic actions. Note that basic actions (such as MakeFettuciniMarinara) may be decomposable, even though they are not further specialized.

The second assumption states that any non-top-level action occurs only as part of the decomposition of some top-level action. Therefore we want to circumscribe that part of # which applies to non-top-level actions. This can be done by adding a predicate to KB2 which is true of all top-level action instances, and circumscribing # again. The predicate #basic added above must be allowed to vary in this circumscription. Let

  Φ = { ∀e . #toplevel(e) ⊃ #(e, TopLevelAct) }

  KB3 = Circumscribe(KB2 ∪ Φ, #, #basic)

As before, the process can percolate through many levels of the action decomposition hierarchy. Note that the concepts basic action and top-level action are not antonyms; for example, the type MakeFettuciniMarinara is basic (not specializable), yet any instance of it is also an instance of TopLevelAct.

Circumscription cannot be used to express the simplicity constraint. Instead, one must minimize the cardinality of the extension of # after the observations are recorded. [Kau85] describes the cardinality-minimization operator, which is similar to, but more powerful than, the circumscription operator.

7. Implementation Considerations

The formal theory described here has given a precise semantics to the plan recognition reasoning process by specifying a set of axioms from which all desired conclusions may be derived deductively. Although no universally-applicable methods are known for automating circumscription, by placing reasonable restrictions on the form of the action hierarchy axioms we can devise a special-purpose algorithm for computing the circumscriptions. As a result, in theory we could simply run a general-purpose theorem prover, given the resulting axioms, to prove any particular (valid) conclusion. In practice, since we often don't have a specific question to ask beyond "what is the agent's goal?" or "what will happen next?", it is considerably more useful to design a specialized forward-chaining reasoning process that essentially embodies a particular inference strategy over these axioms. We are in the process of constructing such a specialized reasoner.

The algorithm divides into two components: the preprocessing stage and the forward-chaining stage. The preprocessing stage is done once for any given domain. The two completeness assumptions from the previous section are realized by circumscribing the action hierarchy.
The result of the circumscription can be viewed as an enormously long logical formula, but it is quite compactly represented by a graph structure.

The forward-chaining stage begins when observations are received. This stage incorporates the assumption that as few top-level acts as possible are occurring. As each observation is received, the system chains up both the abstraction and decomposition hierarchies, until a top-level action is reached. The intermediate steps may include many disjunctive statements. The action hierarchy is used as a control graph to direct and limit this disjunctive reasoning. After more than one observation arrives, the system will have derived two or more (existentially instantiated) constants which refer to top-level actions. The simplicity assumption is applied by adding a statement that some subsets of these constants must be equal. Exclusive-or reasoning now propagates down the hierarchy, deriving a more restrictive set of assertions about the top-level acts and their subacts. If an inconsistency is detected, then the number of top-level acts is incremented, and the system backtracks to the point at which the simplicity assumption was applied.

This description of the implementation is admittedly sketchy. Many more details, including how the temporal constraint propagation system integrates with the forward-chaining reasoner, will appear in a forthcoming technical report.

8. Future Work

Future work involves completing the theoretical foundation and building a test implementation. The theoretical work includes a formal specification of the form of the action taxonomy so that its circumscription can always be effectively computed. Theorems guaranteeing the consistency and intuitive correctness of the circumscription will be completed.

More complex temporal interactions between simultaneously occurring actions will be investigated. We will show how the framework handles more complicated examples involving step-sharing and observations received out of temporal order (e.g., mystery stories). It will probably be necessary to develop a slightly more sophisticated simplicity constraint. Rather than stating that as few top-level actions as possible occur, it is more realistic to state that as few top-level actions as possible are occurring at any one time. In addition, observations of non-occurrences of events (e.g., the agent did not boil water) are an important source of information in plan recognition. Non-occurrences integrate nicely into our framework.

Many of the subsystems that are used by the plan recognizer (such as a temporal reasoner [All83a] and a lisp-based theorem prover which handles equality [All84a]) have been developed in previous work at Rochester, and construction of the complete implementation is under way.

References

[All83] James F. Allen, "Recognizing Intentions from Natural Language Utterances," in Computational Models of Discourse, M. Brady, ed., MIT Press, 1983.
[All83a] James F. Allen, "Maintaining Knowledge About Temporal Intervals," Communications of the ACM, no. 26, pp. 832-843, Nov. 1983.
[All84] James F. Allen, "Towards a General Theory of Action and Time," Artificial Intelligence, vol. 23, no. 2, pp. 123-154, July 1984.
[All84a] James F. Allen, Mark Giuliano, and Alan M. Frisch, "The HORNE Reasoning System," TR 126 Revised, Computer Science Department, University of Rochester, Sept. 1984.
[Cha85] Eugene Charniak and Drew McDermott, Introduction to Artificial Intelligence, Addison-Wesley, Reading, MA, 1985.
[Cla78] K. L. Clark, "Negation as Failure," in Logic and Databases, J. Minker, ed., Plenum Press, New York, 1978.
[Huf82] Karen Huff and Victor Lesser, "Knowledge-Based Command Understanding: An Example for the Software Development Environment," Technical Report 82-6, Computer and Information Sciences, University of Massachusetts at Amherst, Amherst, MA, 1982.
[Kau85] Henry A. Kautz, "Toward a Theory of Plan Recognition," TR 162, Department of Computer Science, University of Rochester, July 1985.
[McC85] John McCarthy, "Applications of Circumscription to Formalizing Common Sense Knowledge," in Proceedings of the Non-Monotonic Reasoning Workshop, AAAI, Oct. 1985.
[Pol84] Martha E. Pollack, "Generating Expert Answers Through Goal Inference," PhD thesis proposal, Department of Computer Science, University of Pennsylvania, August 1983/January 1984 (draft).
[Sid81] Candace L. Sidner and David J. Israel, "Recognizing Intended Meaning and Speakers' Plans," IJCAI, 1981.
[Ten86] Josh D. Tenenberg, "Reasoning Using Exclusion: An Extension of Clausal Form," TR 17, Department of Computer Science, University of Rochester, Jan. 1986.
[Wil83] Robert Wilensky, Planning and Understanding, Addison-Wesley, Reading, MA, 1983.
A LOGIC OF DELIBERATION

Marvin Belzer
Advanced Computational Methods Center
University of Georgia
Athens, GA 30602

ABSTRACT

Deliberation typically involves the formation of a plan or intention from a set of values and beliefs. I suggest that deliberation, or "practical reasoning," is a form of normative reasoning and that the understanding and construction of reasoning systems that can deliberate and act intentionally presupposes a theory of normative reasoning. The language and semantics of a deontic logic is used to develop a theory of defeasible reasoning in normative systems and belief systems. This theory may be applied in action theory and in artificial intelligence by identifying expressions of values, beliefs, and intentions with various types of modal sentences from the language.

While there have been some investigations of the structure of normative reasoning in deontic logic, Bayesian decision theory, and philosophical action theory and ethics, there does not yet exist a general theory of normative reasoning. Such a theory is necessary for the understanding and construction of decision-making systems that use normative principles and policies to form plans, strategies, and intentions. A general logic of the all-purpose "normative reasoner," or "deliberator," is needed.

Practical reasoning, or deliberation, in which intentions to act are formed from a set of desires and beliefs, may be a form of normative reasoning. Expressions of desires and intentions may be treated as rules (norms) or evaluative judgments (Davidson 1977). There also may be a normative component in belief systems. It has been suggested, for example, that the rules of thumb that enable a system to form tentative conclusions from incomplete information are expressions of "ratiocinative desires" (Doyle 1983a), and that "epistemic policies" guide an epistemic agent in revising beliefs in the light of new information (Stalnaker 1984). The expressions of desires and policies may be interpreted as norms, and therefore an understanding of normative reasoning would be useful in a theory of reasoning with incomplete or new information.

§1. The structure of normative reasoning.

Several features of normative systems must be respected by any adequate formal representation of normative reasoning. First, some rules are defeasible; that is, they are generally valid but may have exceptions. Secondly, there is a fundamental distinction between prima facie rules and all-things-considered normative commitments. The prima facie rules of a system, together with a set of facts or opinions, determine the system's all-things-considered commitments. Thirdly, for some sets of sentences the all-things-considered (a.t.c.) closure should be non-monotonic; that is, a set s may be included in a set s* while the a.t.c. closure of s is not included in the a.t.c. closure of s*.

These features of rules may be illustrated simply as follows. Suppose that Nixon told you a secret after you promised to comply with these requirements:

(a) You should not tell the secret to Reagan.
(b) You should not tell the secret to Gorbachev.
(c) You should tell Reagan if you tell Gorbachev.
(d) You should tell Gorbachev if you tell Reagan.

Suppose you break promise (b) by a certain time,

(e) You told the secret to Gorbachev,

and you are trying to decide whether you should tell Reagan. If no other rules or facts are relevant then, to comply with the requests as given, clearly you should tell the secret to Reagan, because of rule (c) - and in spite of (a).
The prima facie rule (a) is defeasible because of (c). After you have told Gorbachev you have an all-things-considered commitment expressed by the rule

(f) You should tell the secret to Reagan.

Rules (a) and (f) conflict, yet correct resolution is possible if we recognize that stipulation (a) is a valid prima facie rule, whereas (f) expresses a valid all-things-considered commitment after it is settled that you have violated rule (b) by telling Gorbachev.

A prima facie rule may be "defeated," in which case it cannot reliably be used to draw normative conclusions. In the example, after you told the secret to Gorbachev the rule (a) was defeated. To use a prima facie rule in particular circumstances to detach an all-things-considered normative conclusion, one needs to know that the prima facie rule is not defeated in those circumstances. If it is not defeated, then it can be used - as in the detachment of (f) from (c) and (e).

It does not appear possible to deal separately with the issues of defeasibility and normative reasoning, for even our simple story cannot be represented satisfactorily without defeasible rules - we cannot, for instance, replace (a) and (b) by

(a') You should not tell Reagan if you do not tell Gorbachev, and
(b') You should not tell Gorbachev if you do not tell Reagan,

without omitting from the analysis the significant fact that telling neither is preferable to telling both.

§2. The Deontic Logic 3-D.

Deontic logic is a branch of modal logic whose main goals are to provide a formal representation of rules - typically it does so with modal operators for "ought" and "permissible" - and to provide a semantics for such expressions. A satisfactory deontic logic must be able to represent the distinction between defeasible prima facie (p.f.) rules and all-things-considered (a.t.c.) rules. Moreover, it should not permit the detachment of all-things-considered conclusions from defeated rules, and it must have principles that state when such detachment is acceptable. The deontic logic 3-D (Loewer and Belzer 1983) meets these requirements.

The language of 3-D is a propositional language containing the unary connectives T and F, to which are added two dyadic deontic operators O(-/-) and !(-/-), a necessity operator L, and a dyadic operator U(-,-). The wffs of 3-D are characterized as follows: (a) propositional variables are wffs, and (b) if P, Q are wffs then the following (in addition to the usual truth-functional wffs) are wffs: O(Q/P), !(Q/P), LP, and U(Q,P). These statements may be read informally as follows:

O(Q/P): it ought prima facie to be that Q, given P.
!(Q/P): it ought all-things-considered to be that Q, given P.
LP: it is settled that P.
U(Q,P): P determines the normative status of Q.

For tautology T, let OQ = O(Q/T) and !Q = !(Q/T).

A 3-D model structure is a 6-tuple (W, T, H, I, ≤, F) where W is a set of momentary world stages, T is the set of natural numbers (the set of times), H is a subset of the set of functions from T into W (these functions are possible histories), I is a set (of "perspectives"), ≤ is a function from T x H x I into the set of weak orderings on H, and F is a function from T x H x I into the power set of H. Call v = <t,h,i>, for time t, history h, and perspective i, a temporal perspective. The weak ordering ≤v is a ranking of possible histories according to the extent to which the histories comply with the values of perspective i at time t in history h (cf. Lewis 1973, 1974). The most highly ranked histories are those at which no value or rule is violated. As one descends the ranking, more and/or more serious violations occur. This allows for the interpretation of prima facie rules.
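One concrete way to picture such a ranking (a toy encoding of ours, not part of the paper's formalism) is to grade histories by the rule violations they contain:

    def rank(histories, violation_score):
        """violation_score(h): number (or weighted severity) of rule
        violations in history h.  Returns tiers, best (score 0) first."""
        tiers = {}
        for h in histories:
            tiers.setdefault(violation_score(h), []).append(h)
        return [tiers[s] for s in sorted(tiers)]

    # For the promises of section 1, violation_score could count how many
    # of (a)-(d) a history breaks, weighting the more serious breaches.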
O(Q/P) is to hold relative to the temporal perspective v just in case Q is true at each of the most highly ranked P-histories in the p.f. ranking ≤v.

The set Fv is the set of histories accessible at v. For an objective interpretation we stipulate that F(<t,h,i>) = F(<t,h,i*>) for all i* (that is, in the objective interpretation the perspective i is not relevant to accessibility).* Let P be settled at v just in case P is true at each of the histories in the set Fv (cf. Thomason 1970). LP says that P is settled.

Now we want to use the p.f. ranking ≤v and the set Fv to define a new ranking ≤'v with which to interpret expressions of all-things-considered (a.t.c.) commitments !(Q/P). The main idea is that the a.t.c. ranking for v can be defined as the ranking that results when all histories that are inaccessible at v are removed from the p.f. ranking for v. Given an ordering x on H and a subset y of H, let the restriction of x to y be the ordering z that results by removing from x each element of H not in y.** Let ≤'v be the restriction of ≤v to Fv. !(Q/P) is to hold at v just in case Q is true at each most highly ranked history in ≤'v at which P is true.

An interpretation [ ] on a 3-D model structure is defined as follows: [ ] assigns to each propositional variable a subset of T x H x I, where we stipulate that for non-modal P: <t,h,i> ∈ [P] iff for all t* ∈ T and i* ∈ I, <t*,h,i*> ∈ [P]. In other words, only histories - and not perspectives or times - are relevant to the evaluation of non-modal propositional variables. Recursion clauses for the truth-functional connectives are as expected. Now let [Q/P] be the class of weak orderings ≤ on H such that ∃j (j ∈ [P&Q] and (∀k)(k ∈ [P&-Q] → k < j)); in other words, [Q/P] is the class of weak orderings on H in which some P&Q-history is ranked more highly than any P&-Q-history.*** For v = <t,h,i>:

v ∈ [O(Q/P)] iff ≤v ∈ [Q/P].
v ∈ [!(Q/P)] iff ≤'v ∈ [Q/P].
v ∈ [LP] iff Fv ⊆ [P].

For U(Q,P), let us say first that for x ⊆ H, f(v,x) is the set of most highly ranked histories in x according to ≤v. Also, for x,y ⊆ H and wff R, let x =R y say that x ⊆ [R] iff y ⊆ [R], and x ∩ [R] ≠ ∅ iff y ∩ [R] ≠ ∅. The recursion clause for U(Q,P) is as follows:

v ∈ [U(Q,P)] iff (∀x)(x ⊆ H & Fv ⊆ x → f(v, [P] ∩ x) =Q f(v, [P])).

This clause guarantees that if v ∈ [O(Q/P)] and v ∈ [U(Q,P)], then there is no R such that v ∈ [LR] and v ∈ [-O(Q/P&R)]. In reasoning on the basis of defeasible principles of the form O(Q/P), U(Q,P) plays the role of asserting, roughly, that "other things are equal" - more precisely, that relative to what is settled, P determines the normative status of Q.

The logic of both O(-/-) and !(-/-) is CD (van Fraassen 1972, Lewis 1974). The logic of the objective L is S3.

* Cf. §5 below for a subjective interpretation of L that does depend on i.
** For example, suppose that H = {1,2,3,4,5}, x is the ordering (4,5) < (1,2,3), and y = {1,2,4}. The restriction z of x to y is the ordering 4 < (1,2).
*** The proposition expressed by P is sometimes identified with the set of histories [P]. Analogously, the class of rankings [Q/P] may be identified with the norm expressed by the sentence "it ought to be that Q, given P," which contains no p.f. or a.t.c. qualifiers.
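The clauses above are directly executable over a finite model. The following toy evaluator (ours, not an implementation from the paper) checks O, !, and L at a fixed perspective and replays the secret-telling example of §1: while g is not settled, !-r holds, but once g is settled, !r holds instead - the non-monotonicity taken up below:

    def best(tiers, prop):
        """Most highly ranked histories satisfying prop."""
        for tier in tiers:
            hits = [h for h in tier if prop(h)]
            if hits:
                return hits
        return []

    def holds_O(tiers, q, p):                    # v in [O(Q/P)]
        return all(q(h) for h in best(tiers, p))

    def holds_bang(tiers, F_v, q, p):            # v in [!(Q/P)]
        restricted = [[h for h in t if h in F_v] for t in tiers]
        return holds_O([t for t in restricted if t], q, p)

    def holds_L(F_v, p):                         # v in [LP]
        return all(p(h) for h in F_v)

    # Histories are sets of true atoms; the p.f. ranking puts telling
    # neither secret first, telling both second, breaking a conditional last.
    tiers = [[frozenset()],
             [frozenset({"r", "g"})],
             [frozenset({"g"}), frozenset({"r"})]]
    H = [h for t in tiers for h in t]
    T = lambda h: True
    assert holds_O(tiers, lambda h: "r" in h, lambda h: "g" in h)   # O(r/g)
    assert holds_bang(tiers, set(H), lambda h: "r" not in h, T)     # !-r
    Fg = {h for h in H if "g" in h}                                 # Lg settled
    assert holds_bang(tiers, Fg, lambda h: "r" in h, T)             # now !r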
The question of a complete proof theory for 3-D remains open. Here are some important formulas that are valid in 3-D:

(1) O(Q/P) & LP & U(Q,P) → !Q.
(2) O(Q/P&R) & U(Q,P&R) & LP → !(Q/R).
(3) -O(Q/P&R) & U(-Q,P&R) & LP → -!(Q/R).
(4) -O(Q/P) & U(-Q,P) & -!P → -!Q.
(5) O(Q/P) & U(Q,P) → -L-Q.
(6) O(Q/P) & U(Q,P) → -L-P.
(7) LQ → !Q.
(8) !Q → -L-Q.
(9) U(Q,P) & LR → (O(Q/P) = O(Q/P&R)).
(10) U(Q,P) & U(R,P) → U(Q&R,P).
(11) U(Q,P) & LR → U(Q,P&R).
(12) !(Q/P&R) & LP → !(Q/R).
(13) -!(Q/P&R) & LP → -!(Q/R).
(14) !P & L(P → Q) → !Q.

An application of 3-D can be illustrated with the "promising" example introduced above (for other applications, cf. Loewer and Belzer 1983, 1986; Belzer 1986a). The relevant rules may be represented as (a#) O-r, (b#) O-g, (c#) O(r/g), and (d#) O(g/r), where r stands for 'You tell the secret to Reagan' and g for 'You tell the secret to Gorbachev'. Suppose that from your perspective each of these rules is true and that g is settled. If there is no settled c such that -O(r/g & c), then you are committed to !r. On the other hand, if -Lg holds then so also does !-r.

A.t.c. closure is non-monotonic in 3-D in the sense that a.t.c. commitments relative to a set s of p.f. rules and settled propositions may not hold relative to a superset of s. For instance, let s be the set that contains (a#)-(d#) and let s* include s and also contain Lg. !-r is contained in the a.t.c. closure of s but not in the a.t.c. closure of s* even though s* includes s. The non-monotonicity of a.t.c. closure is owing to the defeasibility of the p.f. rules, where O(Q/P) is defeasible at v iff there is some R such that -O(Q/P & R) holds at v (cf. Belzer 1985a). A rule O(Q/P) is defeated at v iff there is some R such that both -O(Q/P & R) and LR hold at v. For instance if g is settled at v, then O-g is defeated at v. To see the role of the U-statements in 3-D, suppose that O(Q/P) and LP hold at v. We cannot conclude that !Q, for -O(Q/P&R) and LR may hold at v; if so, O(Q/P) is defeated at v. However we can infer !Q at v if we know both that O(Q/P) and LP hold at v and that U(Q,P) also holds at v, for U(Q,P) guarantees that no proposition that holds at v defeats O(Q/P).

To complicate the example a bit more, suppose also that O(-r/t) is true, for t 'You tell the secret to Thatcher', and that t as well as g is settled. Is !r true now? It depends. O(r/g) and O(-r/t) may be equally important in the relevant system, or one may have more weight than the other. Such relationships can be expressed in 3-D, as is shown in the following section.

§3. Conflicts and Relative Weight.

The distinction between prima facie and all-things-considered reasons for an action is familiar to legal, ethical, and action theorists. Philosophers have stressed the importance of this distinction for formal deontic systems. Much of the work in deontic logic is of marginal interest to those concerned with practical reasoning because it ignores problems due to conflicts of prima facie reasons (Raz 1978). 3-D however is an exception to this claim, for it can be used to represent conflicting prima facie reasons. In a prima facie conflict, both O(Q/P) and O(S/R) are true and (P & R & -(S = Q)) is settled. The metaphorical notion of relative weight that is important in the resolution of conflicts can be defined as follows: O(Q/P) has greater relative weight than O(S/R) iff O(Q/P & R & -(S = Q)) (cf. Belzer 1985a,b).
In the example discussed above, suppose that O(r/g & t) is true; if so, then O(r/g) has greater relative weight than O(-r/t), so !r would hold if O(r/g & t) itself is not defeated. On the other hand, if neither O(r/g & t) nor O(-r/g & t) is true then neither O(r/g) nor O(-r/t) has greater weight than the other, and it is reasonable to suggest that neither !r nor !-r should hold.

The importance of being able to formulate precise expressions of relative weight between rules is that it may make possible a theory of practical reasoning and rational decision-making that does not depend on quantitative utility functions. While implementation of practical reasoning eventually may involve a combination of qualitative rules and numerical evaluation functions, Bayesian statistical decision making has played only a limited role in artificial intelligence. It often is pointed out that it is not easy to apply Bayesian techniques directly because of both the amount of information that must be supplied in the form of conditional probabilities, prior probabilities, and utilities and the awkwardness of modifying the formulation (Doyle 1983b). However, as Ginsberg (1985) suggests, these problems cannot be regarded as conclusive without having compared Bayesian implementations with others based on qualitative rules having differing relative weights.

§4. Applying 3-D in Belief Systems.

Belief systems also may be structured by a distinction that parallels the distinction between p.f. and a.t.c. norms in normative systems, for we can distinguish between p.f. and a.t.c. expectations. Expectations may be treated as rules or norms (for example, as expressions of "ratiocinative desires" or "epistemic policies" for belief revision). However, even if one does not accept the idea that there is a normative component in belief systems, a 3-D type semantics 3-Db can be used as follows to interpret defeasible reasoning in belief systems. Let the language of 3-Db be a propositional language containing T and F, and two dyadic belief operators B(-/-) and L(-/-), a monadic belief operator S, and a dyadic operator V(-,-).

B(Q/P): Q is p.f. expected, given P.
L(Q/P): Q is a.t.c. expected, given P.
SP: it is certain that P.
V(Q,P): P determines the doxastic status of Q.

For tautology T, let BQ = B(Q/T) and LQ = L(Q/T). A 3-Db model structure is a 6-tuple (W,T,H,I,$,G) where W, T, H, and I are as above, $ is a function from T x H x I into the set of weak orderings on H, and G is a function from T x H x I into the set of subsets of H. For a temporal perspective v = <t,h,i>, let $′v be defined as the restriction of $v to Gv. In an interpretation [ ] on a 3-Db model structure we have

v ∈ [B(Q/P)] iff $v ∈ [Q/P].
v ∈ [L(Q/P)] iff $′v ∈ [Q/P].
v ∈ [SP] iff Gv ⊆ [P].

The weak ordering $v is a ranking according to the p.f. expectations at v. All birds fly and every Quaker is a pacifist at each of the most highly ranked histories in $v, but non-pacifist Quakers and non-flying birds may be found at lower reaches. The prima facie rules of the form B(Q/P) express defeasible rules of thumb that are used in "the common practice of jumping to conclusions when actions demand decisions but solid knowledge fails" (Doyle 1983a, p.1) - they are used to form tentative expectations about the world in the absence of complete information. On the other hand sentences of the form SQ express one's "solid information." The "tentative expectations" to which one is committed by one's p.f. expectations and solid information may be expressed with sentences of the form L(Q/P).
The distinction between the operators S and L corresponds to the distinction between what one feels certain about and what one "expects" (but may not feel certain about) given one's certainties and p.f. expectations. For instance, one may see that a certain flower beneath a certain tree is yellow--one is certain about that--while merely "expecting a.t.c." that it grew from the seed one planted in the spring and that it would disappear were one to untie the goat.

To consider an example, suppose that you p.f. expect Quakers to be pacifists, so you p.f. expect that Nixon is a pacifist if he is a Quaker, (e#) B(p/q), and you are certain that Nixon is a Quaker, (f#) Sq. If B(p/q) is not defeated, then you are committed to the a.t.c. expectation that Nixon is a pacifist, Lp. Of course B(p/q) might be defeated, as happens if also you are certain that Nixon is a Republican, (g#) Se, while expecting p.f. that Nixon is not a pacifist if he is a Republican, (h#) B(-p/e), and if B(-p/e) has greater relative weight than B(p/q), that is, (i#) B(-p/e & q).

The logic of both B(-/-) and L(-/-) is CD while the logic of S is "weak S5."* Two important theorems about B (also L) may be used to test the conjecture that 3-Db is useful as a logic of defeasible expectations in belief systems:

(15) B(Q/P) & -B(Q/P&R) → B(-R/Q).
(16) B(Q/P) & -BQ → B(-P/-Q).
(15') L(Q/P) & -L(Q/P&R) → L(-R/Q).
(16') L(Q/P) & -LQ → L(-P/-Q).

For V(Q,P) let g(v,x) be the set of most highly ranked histories in x according to $v. The recursion clause for V(Q,P) is as follows:

v ∈ [V(Q,P)] iff (x)(x ⊆ H & Gv ⊆ x → g(v,[P] ∩ x) =Q g(v,[P])).

* S5 minus the reflexivity axiom, cf. Moore 1983. Even though S is used to express what one feels certain about, SQ & -Q nonetheless is consistent (i.e., S is a doxastic, not an epistemic, operator).

Other valid formulas of 3-Db include:

(17) B(Q/P) & SP & V(Q,P) → LQ.
(18) B(Q/P&R) & V(Q,P&R) & SP → L(Q/R).
(19) -B(Q/P&R) & V(-Q,P&R) & SP → -L(Q/R).
(20) -B(Q/P) & V(-Q,P) & -LP → -LQ.
(21) B(Q/P) & V(Q,P) → -S-Q.
(22) B(Q/P) & V(Q,P) → -S-P.
(23) SP → LP.
(24) LP → -S-P.
(25) V(Q,P) & SR → (B(Q/P) = B(Q/P&R)).
(26) V(Q,P) & V(R,P) → V(Q&R,P).
(27) V(Q,P) & SR → V(Q,P&R).
(28) L(Q/P&R) & SP → L(Q/R).
(29) -L(Q/P&R) & SP → -L(Q/R).
(30) LP & S(P → Q) → LQ.

Each of the operators B(-/-), L(-/-), and S is an "implicit" belief operator (cf. Levesque 1984) - both BQ and LQ for instance may hold at v even though Q is not "actively" or "explicitly" expected by the perspective v. The focus of this section is on the distinction between the differing types of expectations, and yet these or similar distinctions also may be necessary in a theory of "explicit" belief (cf. Nute 1986, Belzer 1986b).

§5. Practical Reasoning.

The 3-D and 3-Db systems provide the foundation for a general theory of practical reasoning. Let the language of 3-Dpr be the combined languages of 3-D and 3-Db. The values of an agent are expressed by sentences of the form O(Q/P) while B(Q/P), SQ, and L(Q/P) express various types of beliefs. Plans are expressed by sentences of the form !(Q/P), whereas an intention is a special type of plan (one whose expression !(Q/P) is such that the subject in Q is the reflexive "I-myself"). An intention in which the predicate of Q is qualified by "here and now" is a volition, or immediate intention (Brand, 1984). In 3-Dpr a 3-Db belief sub-system may be embedded into a 3-D normative reasoning system by imposing a subjective interpretation on the operator L of 3-D.
Recall that the interpretation of L in 3-D is objective if L is interpreted independent of perspectives. If so, 3-D sentences of the form !(Q/P) specify the "objective" a.t.c. commitments of a perspective i (that is, at least, the commitments relative to the settled facts at t and the values of i at t). In practical reasoning, however, we want to represent commitments based not on the settled facts at t but rather on the beliefs--in particular, the a.t.c. expectations--of i at t. L is the only operator shared by the languages of 3-D and 3-Db, and it is the key to embedding a belief system in a normative reasoning system. We give L a subjective interpretation by requiring that LQ ("it is settled that Q") holds at v = <t,h,i> in the 3-Dpr embedding system iff LQ ("it is a.t.c. expected that Q") holds at v in the 3-Db belief sub-system. The set of settled propositions in the practical reasoning system is to be identified at each time with the set of a.t.c. expectations to which the belief sub-system is committed.

A 3-Dpr model structure is an 8-tuple (W,T,H,I,≤,F,$,G) where W, T, H, I, ≤, and F are as in 3-D while $ and G are as in 3-Db. Let ∩$′v denote the set of most highly ranked histories in the a.t.c.-belief ranking $′v. The following condition on 3-Dpr model structures guarantees that L is interpreted coherently (and it guarantees that a proposition is settled at v just if it is a.t.c.-expected at v):

(F$′) Fv = ∩$′v.

Interpretations on 3-Dpr model structures are as in 3-D and 3-Db. The logic of O(-/-), !(-/-), B(-/-), and L(-/-) is CD. Weak S5 is the logic of S (as it is also for the monadic O, !, B, and L). Each of (1)-(30) is included among the valid formulas of 3-Dpr.

In a "well-balanced" agent there are interesting relationships between expressions of various mental states. 3-Dpr offers a conception of consistency between an agent's "implicit" values, beliefs (of three types), plans, intentions, and volitions; this is a conception of "internal rationality," that is, rationality independent of what one's values and beliefs happen to be. It may be argued for instance that !(Q/P) expresses an acceptable plan for an agent iff !(-Q/P) is not entailed by the agent's values and beliefs.

Given condition (F$′) and the subjective interpretation of L, some validities of 3-D are counter-intuitive in 3-Dpr, in particular, (7) and (8). (7) says that the a.t.c.-expectations of the well-balanced agent also are plans. But surely this is not necessarily so, since one may expect things about which one is indifferent. Similarly (31) SQ → !Q, which also holds in 3-Dpr, is unacceptable for the same reason. It is plausible even to hold that if one is certain that Q then one does not rationally plan for Q, that is, (7′) SQ → -!Q (cf. Feldman 1983, Loewer and Belzer 1986). On the other hand, according to (8) one reasonably plans that Q only if one does not a.t.c. expect -Q; but this also should fail because sometimes one reasonably acts intentionally to bring about the best even while expecting the worst. It is at least more plausible, however, to hold that if one is certain that -Q then one should not be planning that Q, i.e., (32) !Q → -S-Q, which also is valid. The truth condition for !(Q/P) can be revised so that (7) and (8) are rejected, and (7′) is validated while (32) is maintained:

v ∈ [!(Q/P)] iff ≤′v ∈ [Q/P] and not Gv ⊆ [P → Q].

Given this revision each of (1), (2), (13), and (14) also fail, but are replaced by

(1′) O(Q/P) & LP & U(Q,P) & -SQ → !Q.
(2′) O(Q/P&R) & LP & U(Q,P&R) & -S(R → Q) → !(Q/R).
(13′) -!(Q/P&R) & SP → -!(Q/R).
(14′) !P & L(P → Q) & -SQ → !Q.

The "promising" and "pacifist" examples given earlier can be combined to illustrate an application of 3-Dpr. Suppose that for your temporal perspective v the values (a#)-(d#), the certainties (f#) and (g#), and the p.f. expectations (e#), (h#), and (i#) all hold; and suppose also that for some odd reason you are certain that if Nixon is not a pacifist then you will tell the secret to Reagan, (j#) S(-p → r). Are you committed by these beliefs and values to telling Gorbachev? Assuming no other beliefs or values are relevant to that question, you first can conclude L-p, because B(-p/q&e) & S(q&e) & V(-p,q&e) → L-p is an instance of theorem (17), S(q&e) is entailed by Sq and Se, and V(-p,q&e) holds if (as supposed) no other beliefs and values are relevant. Yet L-p and (j#) together entail Lr, by (30). So indeed !g does hold, because O(g/r) & Lr & U(g,r) & -Sg → !g is an instance of (1′), and both U(g,r) and -Sg hold given the "no other things are relevant" assumption in the example. Your values and expectations commit you to telling Gorbachev the secret. This is an example of practical reasoning in which tentative a.t.c. expectations first are detached from p.f. expectations and certainties, and secondly a.t.c. commitments are detached from values and the a.t.c. expectations.

§6. Summary

A theory of normative reasoning needs to be able to handle the defeasibility of prima facie rules that, together with facts or beliefs, determine all-things-considered commitments. The semantics of 3-D characterizes these concepts in a formal system, and a similar system 3-Db characterizes related concepts in the context of belief. Deliberation, or practical reasoning, may be understood as a form of normative reasoning. The combined languages of 3-D and 3-Db thus are useful in expressing the mental states that figure in practical reasoning. The system 3-Dpr combines the semantics of 3-D and 3-Db with the condition (F$′) which embeds a belief system within a more general normative system. 3-Dpr offers a conception of consistency among implicit values, beliefs of three types, plans, intentions, and volitions.

Acknowledgments. I am grateful to Barry Loewer, Donald Nute, Andre Vellino, and Michael Lewis for discussions about the role of defeasible reasoning in normative systems, belief systems, and deliberation.

References.

Belzer, M. 1985a. Normative kinematics (I): a solution to a problem about permission. Law and Philosophy 4:257-287.
---. 1985b. Normative kinematics (II): the introduction of imperatives. Law and Philosophy 4:377-403.
---. 1986a. Reasoning with defeasible principles. Synthese 66:1-24.
---. 1986b. Reasons as norms in a theory of defeasibility and non-monotonic commitment. Athens, Ga.: Advanced Computational Methods Center research report 01-0008.
Brand, M. 1984. Intending and Acting. Cambridge, Mass.: MIT Press.
Davidson, D. 1978. Intending. In Yovel, Y., ed., Philosophy of History and Action. Dordrecht: D. Reidel.
Doyle, J. 1983a. Some theories of reasoned assumptions. Pittsburgh: Carnegie-Mellon University Computer Science Department technical report no. CMU-CS-83-125.
---. 1983b. What AI should want from the supercomputers. AI Magazine 4:33-35.
Feldman, F. 1983. Obligations -- absolute, conditioned, and conditional. Philosophia 12.
Ginsberg, M. 1985. Does probability have a place in non-monotonic reasoning? Proc. Ninth IJCAI, 107-110.
Lewis, D. 1973. Counterfactuals.
Oxford: Basil Blackwell.
---. 1974. Semantic analyses for dyadic deontic logic. In Stenlund, S., ed., Logical Theory and Semantic Analysis. Dordrecht: D. Reidel, 1-14.
Levesque, H. J. 1984. A logic of implicit and explicit belief. Proc. National Conference on Artificial Intelligence, 198-202.
Loewer, B. and Belzer, M. 1983. Dyadic deontic detachment. Synthese 54:295-319.
---. 1986. Help for the Good Samaritan paradox. To appear in Philosophical Studies 50.
Moore, R. 1983. Semantical considerations on nonmonotonic logic. Proc. Eighth IJCAI, 272-279.
Nute, D. 1985. A non-monotonic logic based on conditional logic. Athens, Ga.: Advanced Computational Methods Center research report 01-0007.
---. 1986. Defeasible reasoning. Athens, Ga.: Advanced Computational Methods Center. To appear.
Raz, J. 1978. Introduction. In Raz, J., ed., Practical Reasoning. Oxford: Oxford University Press, 1-17.
Stalnaker, R. 1984. Inquiry. Cambridge, Mass.: MIT Press.
Thomason, R. 1970. Indeterminist time and truth value gaps. Theoria 36:264-281.
van Fraassen, B. 1972. The logic of conditional obligation. Journal of Philosophical Logic 1:417-438.
1986
99
548
Donald C. Allen, Seth A. Steinberg, and Lawrence A. Stabile
BBN Advanced Computers, Inc.
10 Fawcett Street
Cambridge, Massachusetts 02238

Abstract

This paper describes recent enhancements to the Common Lisp system that BBN is developing¹ for its Butterfly multiprocessor. The BBN Butterfly is a shared memory multiprocessor that contains up to 256 processor nodes. The system provides a shared heap, parallel garbage collector, and window based I/O system. The 'future' construct is used to specify parallelism.

I. Introduction

BBN has been actively involved in the development of shared-memory multiprocessor computing systems since the early 1970s. The first such machine was the Pluribus, a bus-based architecture employing commercially-available minicomputers. The Butterfly™ system is BBN's second generation parallel processor and builds on BBN's experience with the Pluribus. In 1986, BBN formed a wholly-owned subsidiary, BBN Advanced Computers Inc., to further develop the Butterfly architecture. More than 70 Butterfly systems are currently used for applications such as complex simulation, symbolic processing, image understanding, speech recognition, signal processing and data communications.

For approximately two years, we have undertaken (with DARPA support) the development of a Lisp programming environment for the Butterfly. At AAAI '86 [Steinberg et al., 1986] we reported on the basic design of the system. In this paper, we discuss a variety of developments resulting from the work of the past year.

Figure 1: The Butterfly Multiprocessor

¹The work described herein was done at BBN Advanced Computers, Inc. under contract to the Defense Advanced Research Projects Agency of the Department of Defense.

II. The Butterfly Architecture

The Butterfly parallel processor includes from one to 256 processor nodes interconnected by a high performance logarithmic switching network (figure 1). Each processor node contains a Motorola 16-MHz MC68020 microprocessor; a Motorola MC68881 floating point coprocessor; up to four megabytes of memory; a general I/O port; and an interface to the Butterfly switch (see figure 2). Each node also contains a microcoded coprocessor, called the Processor Node Controller (PNC), that performs switch and memory management functions, as well as providing extensions to the 68020 instruction set in support of multiprocessing.

Figure 2: The Butterfly Processor Node

The memory management hardware, combined with the small latency of the Butterfly switch, permit the memories of the individual processor nodes to be treated as a pool of shared memory that is directly accessible by all processors. Remote memory references are made through the Butterfly switch and take approximately four microseconds to complete regardless of configuration size. This shared-memory capability is crucial to the implementation of Butterfly Lisp.

The Butterfly is designed to maintain a balance of processing power, memory capacity, and switch bandwidth over a wide range of configurations. The largest Butterfly system consists of 256 processor nodes and executes 250 million instructions per second, has a gigabyte of memory and provides eight gigabits per second of switch bandwidth. The Butterfly system's expandability is due predominantly to the design of the Butterfly switch network. The switch is built from intelligent four-by-four crossbars configured in a serial decision network. The cost of the switch and switch bandwidth grow almost linearly, preserving price-performance from the smallest to the largest configuration.
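As a quick check on the configuration figures quoted above (our arithmetic, written as Scheme expressions):

    (* 256 4)       ; megabytes of memory in a full configuration => 1024, i.e. ~1 gigabyte
    (/ 250e6 256)   ; implied instruction rate per node => ~976562, roughly 1 MIPS per 16-MHz 68020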
III. Butterfly Lisp Overview

Butterfly Lisp is a shared-memory, multiple-interpreter system. Rather than implementing a loosely coupled set of separate Lisps that communicate via some external message-passing protocol, we have chosen to capitalize on the Butterfly's ability to support memory-sharing by providing a single Lisp heap, mapped identically by all interpreters. This approach preserves the shared-memory quality that has always been characteristic of the Lisp language: data structures of arbitrary complexity are easily communicated from one context to another by simply transmitting a pointer, rather than by copying. We believe this approach has significant ease-of-programming and efficiency advantages.

Butterfly Lisp uses the "future" mechanism (first implemented at MIT by Professor Robert Halstead and students in Multilisp [Halstead, 1985]). Evaluating the form

    (future <lisp-expression>)

causes the system to record that a request has been made for the evaluation of <lisp-expression> and to commit resources to that evaluation when they become available. Control returns immediately to the caller, returning a new type of Lisp object called an "undetermined future". The "undetermined future" object serves as a placeholder for the value that the evaluation of <lisp-expression> will ultimately produce. An "undetermined future" may be manipulated as if it were an ordinary Lisp object: it may be stored as the value of a symbol, consed into a list, passed as an argument to a function, etc. If, however, it is subjected to an operation that requires the value of <lisp-expression> prior to its arrival, that operation will automatically be suspended until the value becomes available. The "future" mechanism provides an elegant abstraction for the synchronization required between the producer and consumer of a value. This preserves and encourages the applicative programming style so integral to Lisp programming and so important in a parallel machine, where carelessness with side-effecting operations can result in (difficult to diagnose) probabilistic bugs.
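A minimal sketch of the producer/consumer synchronization just described (our example, not BBN's code; the sum is an arbitrary stand-in computation):

    ;; Spawn a task to compute the sum; ANSWER is immediately bound to
    ;; an undetermined future while the addition proceeds elsewhere.
    (define answer (future (apply + '(1 2 3 4))))

    ;; Storing or passing the future does not synchronize...
    (define result-list (list answer))

    ;; ...but an operation that needs the value, such as this addition,
    ;; suspends until the future is determined, then yields 11.
    (+ (car result-list) 1)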
The Butterfly Lisp implementation is based on MIT CScheme, modified to support the future construct and other multiprocessing primitives (e.g., primitives for mutual exclusion). The system also includes a parallel stop-and-copy garbage collector. In addition to the Scheme dialect of Lisp, Butterfly Lisp also provides Common Lisp language support, implemented largely on top of Scheme. This implementation, which we discuss in detail below, uses significant portions of CMU's Spice Lisp. The Butterfly Lisp compiler will be based on LIAR (LIAR Imitates Apply Recursively), a Scheme compiler recently developed at MIT by members of the Scheme Team.

The Butterfly Lisp User Interface (which leads to the rather unfortunate acronym BLUI) is implemented on a Symbolics 3600-series Lisp Machine and communicates with the Butterfly using Internet protocols. This system provides a means for controlling and communicating with tasks running on the Butterfly, as well as providing a continuously updated display of the overall system status and performance. Special Butterfly Lisp interaction windows, associated with tasks running on the Butterfly, may be easily selected, moved, resized, or folded up into task icons. There is also a Butterfly Lisp mode provided for the ZMACS editor, which connects the various evaluation commands (e.g., evaluate region) to an evaluation service task running in the Butterfly Lisp system.

Each task is created with the potential to create an interaction window on the Lisp machine. The first time an operation is performed on one of the standard input or output streams a message is sent to the Lisp machine and the associated window is created. Output is directed to this window and any input typed while the window is selected may be read by the task. This multiple window approach makes it possible to use standard system utilities like the trace package and the debugger.

A pane at the top of the screen is used to display the system state. The system state information is collected by a Butterfly process that is separate from the Butterfly Lisp system, but has shared-memory access to important interpreter data structures. The major feature of this pane is a horizontal rectangle broken vertically into slices. Each slice shows the state of a particular processor. If the top half of the slice is black, then the processor is running, if gray, it is garbage collecting, and if white, it is idle. The bottom half of each slice is a bar graph that shows how much of each processor's portion of the heap is in use. The status pane also shows, in graphical form, the number of tasks awaiting execution. This display makes such performance problems as task starvation easy to recognize. A recently developed User Interface facility provides a display of a combined spawning and data-dependency graph, which is created from metering data collected during the running of a Lisp program on the Butterfly. This facility is described in detail below.

IV. Imaging Execution of Programs

A. Introduction

Computer software execution is both invisible and complex and thus it has always been a challenge to present an image of how programs are executing. Many tools have been developed to perform this function in uniprocessor programming environments. For example, debuggers present a picture of processor state frozen in time, while profilers provide a summary distribution of program execution time. Useful as these may be, the added dimension of concurrency renders the typical suite of such tools insufficient. This realization led us to the development of the capability described in this section.

B. Metering and Presentation Facilities

The Butterfly Lisp User Interface uses a transaction-oriented protocol that carries the various logical I/O streams between the Butterfly and the 3600. To support our new imaging facility, the protocol was extended so that the tasking system could transmit a packet for each major scheduling operation. When metering is enabled, a packet is sent when a task:
- is created,
- begins executing on a processor,
- requires the final value of another task which has not yet been computed,
- finishes executing,
- terminates, enabling another task to resume execution.

Each packet contains the time, the unique identifiers of the relevant tasks and the processor number. These packets allow the User Interface to construct a complete lifetime history of each task, reconstruct the task creation tree, and partially reconstruct the data dependency graph. For efficiency reasons, if the value of a future has been determined before it is needed, the reference is not recorded. The User Interface presents this information in a special window.
The top pane of the window contains the task histories in depth-first order of their creation. The bottom pane contains a derived chart of aggregate information about the tasks in the system. In both panes, the horizontal axis is the time axis and the two charts are presented in the same time scale. The history of each task is displayed as a horizontal rectangle ranging along the time axis from task creation time to task termination time. When vertical spacing permits, the gray level indicates the state of the task, which may be running (black), on the scheduler queue (gray) or idle (white). The uppermost task history rectangle in figure 3 shows a task that is created and placed on the scheduler queue (gray). It then runs (black), creating subtasks (see arrows), until it needs the value computed by another task (see arrow). At this point it goes idle (white). When the subtask has determined that value, this task is requeued (gray) and then runs (black) to completion. An arrow drawn from one task history to another indicates that either:
- a task has created another task, or
- a task needs the result of another task, or
- a terminating task has restarted another task.

Figure 3: Execution image of BBNACI function

Since the arrows are always associated with task state changes, the actual meaning of a particular arrow can easily be derived from the context. When a task is created, the creating task continues running so the history rectangle is black to the right of the creation arrow. When a task needs the result of another task it will go idle so the history rectangle will go from black to white. When a task terminates, the history rectangle ends, and any waiting tasks are restarted.

As the number of task histories to be shown increases, the history rectangles will be made smaller. If necessary, the black-gray-white shadings will not be shown. If still more data is available it will not be drawn, but can be displayed by using the mouse to scroll the window. The mouse can also be used to zoom in on the task histories by selecting a rectangular region, which will be expanded to fill the entire window. This zooming can be done repeatedly, and the image can be returned to its original scale by pressing the middle mouse button. As an additional aid to deciphering the task structure, pointing the mouse at a particular task history will "pop up" the text description of the task being computed. The more conventional graph in the lower pane displays the number of tasks in a particular state plotted against time. Ordinarily, this graph shows the total number of tasks in the system. There is a menu of graphing options that allows the user to select a plot of any combination of running, queued, or idling tasks.

The images in this paper were produced using an interpreted version of Butterfly Lisp running on a 16 processor Butterfly (with one node used for network communications). Figure 3 was produced by the execution of the following recursive Fibonacci algorithm:

    (define (bbnaci n)                      ; Pronounced bi-bin-acci
      (if (< n 2)
          n
          (+ (future (bbnaci (-1+ n)))
             (future (bbnaci (- n 2))))))

C. Example: Boyer-Moore Theorem Prover

The Boyer-Moore theorem prover (a classic early AI program and part of the Gabriel Lisp benchmark suite [Gabriel, 1985]) works by transforming the list structure representing the theorem into a canonical form using a series of rewrite rules called lemmas. The final form, which resembles a tree-structured truth table, is scanned by a simple tautology checker. The algorithm starts rewriting at the top level of the theorem and recursively applies the rewriter to its results. Parallelism is introduced by creating a subtask for each subtree that might need to be transformed.
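The following sketch suggests how such subtasks can be spawned with future (our illustration, not the BBN/Gabriel source; apply-lemmas, the node-level rewriter, is a hypothetical helper):

    ;; Rewrite every subtree of TERM in its own task; any operation that
    ;; later touches the resulting futures synchronizes with the subtasks.
    (define (parallel-rewrite term)
      (if (pair? term)
          (apply-lemmas
           (cons (car term)
                 (map (lambda (sub) (future (parallel-rewrite sub)))
                      (cdr term))))
          term))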
Figure 4 shows the proof of modus-ponens:

    (implies (and (implies a b) a) b)

Figure 4: Execution image of Boyer-Moore Theorem Prover

The proof performs 13 transformations that appear as the 13 clusters in the execution image. Two levels of parallelism are visible. At the finer level, various tasks attempt to apply rewrite rules, usually unsuccessfully. Many short-lived tasks are created, but parallelism is limited by the small size of the associated list structure. This limited parallelism appears within each cluster of tasks. At the coarser level, the various lemmas are applied and their dependencies are preserved. The first set of transformations must be performed serially because each depends on the results of the previous one. Later transformations can be performed in parallel on isolated subbranches of the expanded list structure. This parallelism appears in the arrangement of the clusters.

The rectangle drawn around cluster 6 is a region that has been selected for enlargement. Clicking the left mouse button changes the cursor from an arrow to the upper left hand corner of a rectangle. When this is clicked into place, the mouse allows the user to "rubber band" a rectangle by moving the lower right hand corner. The next click enlarges the region within the rectangle, filling the entire screen as shown in figure 5.

Figure 5: Execution image of Boyer-Moore Theorem Prover -- close-up view

D. Summary

The illustrations in this paper demonstrate the utility of this new parallel program execution imaging technique. In the future, we will add a variety of viewing modes to provide alternate images of execution and we plan to allow the user to explicitly annotate the diagram by sending appropriate data packets. It should also be possible to use the metering data to detect circular dependencies, unused task subtrees, task starvation, and other common pitfalls on the path to parallelism.

V. Butterfly Common Lisp

A key aspect of the Butterfly Lisp effort is the support of a concurrent variant of Common Lisp. In this section we discuss some of the issues that arise in building a Common Lisp on a Scheme base.

A. Scheme

Scheme is a dialect of Lisp developed at MIT by Gerald J. Sussman and Guy Steele in 1975. While the language has evolved during the past 12 years [Rees et al., 1986], it has, from the beginning, featured lexical-scoping, tail-recursion, and first-class procedural objects, which unify the operator and operand name spaces (the first element of a form is evaluated in exactly the same manner as the rest). Scheme is an excellent base on which to build a Common Lisp, having simple, powerful and well-defined semantics. For example, Scheme provides a very general facility for creating advanced control structures, i.e., the ability to create and manipulate 'continuations' (the future course of a computation) as first-class objects. This capability is critical to, and greatly simplifies, our implementations of Common Lisp tagbodies, block/return-from, and catch/throw.

In addition to the language features described in the Scheme report [Rees et al., 1986], Butterfly Common Lisp makes significant use of extensions to the language provided by MIT CScheme:
- Environments are first-class objects, i.e., environment objects, and operations on them, are directly accessible to the user.
- The Eval function exists, accepting an environment as a required second argument.
- CScheme provides many of the arithmetic types required by Common Lisp, including an efficient bignum implementation.
- Lambda lists that support rest and optional arguments: sufficient power on which to construct Common Lisp's more complex lambda lists.
- A macro facility of sufficient power to construct Defmacro.
- A simple protocol for adding new primitives, written in C, to support some of the more esoteric Common Lisp operations.
- Fluid-let, a dynamic-binding construct that is used in the implementation of Common Lisp special variables.
- Dynamic-wind, a generalization of Common Lisp unwind-protect.

B. Mapping from Common Lisp to Scheme

It was decided early in the evolution of our Common Lisp that we would avoid modifications to the Scheme interpreter and compiler, unless semantic or performance considerations dictated otherwise. In addition, we decided that we would capitalize as much as possible on CMU's excellent Common Lisp, Spice. Our task then became one of identifying and implementing enough of the basic Common Lisp semantics to be able to employ the CMU Common Lisp-in-Common Lisp code. Happily, in many cases, Scheme and Common Lisp were identical or nearly so. In others, language differences required resolution; Common Lisp's separate operator/operand name spaces are an example. Here, an obvious solution would have been to add a function definition cell to the symbol data structures, necessitating a non-trivial interpreter and compiler modification in the treatment of the evaluation of the first element of function-application forms. We chose a less invasive solution: when the Scheme syntaxer (which converts reader-produced list structure into S-code, the input to both the interpreter and the compiler) senses any form with a symbol in the function position, it substitutes a new, uninterned symbol having a pname that is derived from the pname of the original. Function-defining constructs, e.g., defun, (setf (symbol-function)), place the functional objects in the value cell of the corresponding mutated symbol. The symbol block has been expanded so that the pairs can point to each other.

Common Lisp packages are another example of the gulf between the two languages; Scheme relies exclusively on lexical scoping to solve name-conflict problems. The solution was simply a matter of extending the symbol table system of Scheme to accommodate the needs of the Spice Lisp package system and reader. This has amounted to little more than porting the Spice Lisp package system, and viewing Scheme symbols as belonging to the 'Lisp' package. To do this, the package operations are multiplexed between the Scheme and Spice data structures via a simple object-oriented tool.

Common Lisp provides a large set of control mechanisms, including looping and branching. Scheme has no inherent looping or branching, and instead relies on tail-recursion optimization for efficient iteration and on continuations for other types of control structures. The simpler looping forms of Common Lisp are easy to implement in Scheme. For example, examine the following simple use of dotimes, in which no go's appear:

    (dotimes (i n) (print (factorial i)))

transforms to the Scheme:

    (let ()
      (define (dotimes-loop-1234 i)
        (if (= i n)
            nil
            (begin (print (factorial i))
                   (dotimes-loop-1234 (1+ i)))))
      (dotimes-loop-1234 0))

The more complicated branching implied by tagbody is implemented using continuations. In essence, the sections of a tagbody between labels become zero-argument procedures that are applied in sequence, calling each other tail-recursively to perform a go.
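A hand expansion illustrating that translation scheme (our sketch, not the actual expansion emitted by the system; stmt1 through stmt3 stand for arbitrary statements):

    ;; (tagbody a <stmt1> (go c)
    ;;          b <stmt2>
    ;;          c <stmt3>)
    ;; becomes a set of label procedures that tail-call one another:
    (let ()
      (define (seg-a) (stmt1) (seg-c))    ; (go c) turns into a tail call
      (define (seg-b) (stmt2) (seg-c))    ; falling through to label c
      (define (seg-c) (stmt3) nil)        ; tagbody returns nil
      (seg-a))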
Mechanisms for parallelism extant in Butterfly Lisp are inherited by Common Lisp. These include future, the locking primitives, and all metering and user-interface facilities, such as the imaging system described in section 4.

C. Implementation Status

Currently, almost all of Common Lisp is implemented and runs in interpreted mode. Much of Portable Common Loops, a severe test of any Common Lisp implementation, has been ported.

VI. The Butterfly Lisp Compiler

The Butterfly Lisp compiler will be based on a new Scheme compiler, LIAR, recently developed at MIT. LIAR performs extensive analysis of the source program in order to produce highly optimized code. In particular, it does a careful examination of program structure in order to distinguish functions whose environment frames have dynamic extent, and are thus stack-allocatable, from those requiring frames of indefinite extent, whose storage must be allocated from the Lisp heap. This is an extremely important efficiency measure, dramatically reducing garbage collection time.

LIAR also deals efficiently with an interaction between lexical scoping and tail-recursion: a procedure called tail-recursively may make a free reference to a binding in the caller's frame, thus it is not always possible for the caller to pop his own frame prior to a tail-recursive call; the callee must do it. Furthermore, the compiler cannot always know how the called procedure will be called (tail-recursively or otherwise), and thus cannot statically emit code to clean up the stack, since the requirements are different in the two cases. This is a problem that requires a runtime decision about how to deal with the stack. LIAR handles this with an extremely efficient technique that preserves the semantics of lexical scoping and the constant-space attribute of tail-recursion.

At the time of this writing, LIAR is undergoing extensive testing both at MIT (on the HP Bobcat) and at BBN on the Sun 3. The port to the Butterfly computer will begin shortly and is expected to take a matter of weeks. In addition, the compiler will continue to be refined (certain basic optimizations, such as user control over in-lining of primitives -- the compiler presently in-lines only car, cdr, and cons -- have been omitted for the present, the strategy being to get a relatively unadorned version working first). We will also be modifying the compiler on behalf of Common Lisp. In particular, processing of Common Lisp's optional type, 'inline', and 'optimize' declarations can be critical in achieving acceptable Common Lisp performance on a processor such as the MC68020. It is hoped that we will be able to report results obtained from running compiled Lisp on the Butterfly at the time of the conference.

Acknowledgements

The authors would like to thank the following people for their contributions to our efforts:
- The MIT Scheme Team: Jim Miller, Chris Hanson, Bill Rozas, and Professor Gerald J. Sussman, for their tremendous skill, cooperation, and support.
- Laura Bagnall, the person responsible for all the magic that happens on the 3600 side of our system. Laura was a member of our team almost since the project's inception and recently left us to return to MIT in pursuit of a Ph.D.
- Jonathan Payne, a University of Rochester undergraduate who has worked with us during school vacations, and has made a very significant contribution to the project, particularly to the Common Lisp implementation.
- DARPA, for providing support for this work.

References

[Steinberg et al., 1986] Steinberg, S., Allen, D., Bagnall, L., Scott, C. The Butterfly Lisp System. Proceedings of the August 1986 AAAI, Volume 2, Philadelphia, PA, pp. 730.
[Crowther et al., 1985] Crowther, W., Goodhue, J., Starr, E., Milliken, W., Blackadar, T. Performance Measurements on a 128-Node Butterfly Parallel Processor. Internal Bolt, Beranek and Newman Paper, Cambridge, Massachusetts.
[Gabriel, 1985] Gabriel, R. Performance and Evaluation of Lisp Systems. MIT Press, 1985.
[Halstead, 1985] Halstead, R. Multilisp: A Language for Concurrent Symbolic Computation. ACM Transactions on Programming Languages and Systems.
[Rees et al., 1986] Rees, J., Clinger, W., et al. Revised3 Report on the Algorithmic Language Scheme. MIT report, Cambridge, Mass.
1987
1
549
CP as a general-purpose constraint-language

Vijay A. Saraswat
Computer Science Department, Carnegie-Mellon University, Pittsburgh Pa 15213
Carnegie Group Inc, Station Square, Pittsburgh Pa 15209

Abstract

In this paper we present the notion of concurrent, controllable constraint systems. We argue that purely declarative search formalisms, whether they are based on dependency-directed backtracking (as in Steele [Steele, 1980] or Bruynooghe [Bruynooghe and Pereira, 1985]) or bottom-up breadth-first (albeit incremental) definite clause theorem provers (as in deKleer's ATMS approach [deKleer, 1986]) or built-in general purpose heuristics (as in Lauriere's work [Lauriere, 1978]) are unlikely to be efficient enough to serve as the basis of a general purpose programming formalism which supports the notion of constraint-based computation. To that end we propose the programming language CP[↓, |, &], based on the concurrent interpretation of definite clauses, which allows the user to express domain-specific heuristics and control the forward search process based on eager propagation of constraints and early detection of determinacy and contradiction. This control follows naturally from the alternate metaphor of viewing constraints as processes that communicate by exchanging messages. The language, in addition, allows for the dynamic generation and hierarchical specification of constraints, for concurrent exploration of alternate solutions, for pruning and merging sub-spaces and for expressing preferences over which portions of the search space to explore next.

... a constraint is a declarative statement of relationship... (and) a computational device for enforcing the relationship... [From Steele [Steele, 1980]]¹

In this paper we examine the concurrent logic programming (CLP) language CP² which strongly supports the notion of concurrent, controllable constraint-based programming in the sense of [Steele, 1980]. Steele presented the programming model of constraint-based computation, and raised the possibility that some day a general purpose programming language may be constructed based on such a model. He noted that 'The constraint model of computation is not supported by any programming language in existence today; the closest approximation is probably Prolog.' We show that a concurrent interpretation of definite clause programs provides a computational paradigm that strongly supports constraint-based computation, and, in fact, naturally extends it to the notion of controllable constraint-based systems.

¹All quotations in this paper are from [Steele, 1980], unless otherwise noted.
²More specifically, the language discussed in this paper is CP[↓, |, &], which is one of a family of languages, all based on the concurrent interpretation of definite clauses. See e.g. [Saraswat, 1987c] for more details.

'The difficulty with general theorem provers is the combinatorial explosion which results from simply trying to deduce all possible consequences from a set of statements. There must be some means of limiting this explosion in a useful way. The challenge is to invent a limiting technique powerful enough to contain the explosion, permissive enough to allow deductions of useful results in most cases of interest, and simple enough that the programmer can understand the consequences of the limiting mechanism.'

This quote captures our attempt to design control structures to allow the programmer to control (limit) potential combinatorial explosions.
To the list above, we only wish to add that, in some appropriate sense, the control structures should be sound. The soundness of the control structures we present with respect to the logical interpretation of the clauses underlying the program has been proven in the author's thesis ([Saraswat, forthcoming]), in which the language is studied more extensively (this paper is an abstract of a chapter in it). A formal operational semantics may be found in [Saraswat, 1987c]. We recapitulate the essentials here.

A. Syntax

We take as primitive the usual logic programming notions of variables, functions, predicates and atomic formulas ('atoms'). For the purposes of this paper, a CP program consists of a sequence of clauses. A clause consists of a head and a body separated (in sequence) by the 'if' symbol ('←'), a guard and a commitment operation (which is one of '|', the don't care commit, or '&', the don't know commit). The guard is a conjunction of atoms for built-in predicates. For syntactic convenience, the guard and the commitment operation may be omitted: they default to the special goal true and '|' respectively. The body is a goal system g, which is either an atom (notated a), a simple goal system (g1.g2), an isolated atom ([a]) or an isolated goal system ([g1.g2]). A query is syntactically the same as the body of a clause. The head of a clause is a well-annotated atom in a language whose non-logical symbols contain a special unary function-symbol '↓'. If t is a term in the language, we say that t↓ is an annotated term (if t is a functional term, then its function symbol must be different from '↓'). A term is well-annotated if either it does not contain an annotated sub-term or every super-term of an annotated term is annotated. For simplicity of syntax, we assume that every super-term of an annotated term is annotated: hence only the innermost annotations need be explicitly written.

B. Informal semantics

Very roughly, the operational semantics of CP programs is the same as the operational semantics of pure Horn clauses (using SLD-refutation) except that in every step of the refutation process, the mgul of two atoms is used, instead of the mgu (most general unifier), and it must be possible to satisfy the (built-in) goals, if any, in the guard. The mgul of two terms, when it exists, is the same as the mgu of the two terms. To compute the mgul of two terms, one follows the same algorithm as for computing the mgu except that an annotated term t↓ can unify against another term t1 only if the term t1 is not a variable. In case t1 is a variable, unification is said to suspend until such time as some event causes t1 to be instantiated. Decorating a term in the head of a clause with a '↓' then ensures that the clause can be used only for goals in which the argument in the corresponding place is a non-variable. As an example consider the clauses, which may be taken to define a 'plus' constraint:

    X↓ + Y↓ = Z   ← Z is X+Y.
    X↓ + Y = Z↓   ← Y is Z-X.
    X + Y↓ = Z↓   ← X is Z-Y.

For a given goal A + B = C, these clauses are applicable only if at least two of the variables A, B, C are instantiated. The only 'built-in' predicates we consider in this paper are is/2 and ==/2. A goal (A is B) succeeds if B can be 'evaluated' as an arithmetic expression; the value is unified with A. The goal A == B suspends until either A is unified with B or A and B are (top-level) instantiated, whence it succeeds if A and B can be unified and fails otherwise. Consider the following clause:

    X * Y = Z↓ ← X == Y | X is sqrt(Z).

A goal A * B = C succeeds (with this clause) only if A and B are either the same variable or are instantiated, and C is instantiated; as a result of the execution of this clause X (and Y) are unified with sqrt(Z).
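To see suspension and data-driven reduction at work, consider the following query (our example, in the clause notation above; the % lines are annotation, not program text):

    ← A + B = C . A is 3 . B is 4.
    % A + B = C initially suspends: every clause head for 'plus' requires
    % at least two of the arguments to be non-variables.
    % Once A=3 and B=4 have been committed and published, the first clause
    % applies as 3↓ + 4↓ = C ← C is 3+4, and C is bound to 7.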
We now consider the execution cycle in more detail. Computation is initiated by the presentation of a query ← a0 ... an-1. Each goal ai will try to find a proof by '↓'-unifying against the head of a clause, and finding proofs for, first, the goals in the guard and, after committing, for the goals in the body of that clause. Unification results in bindings for the variables in the goal, which are communicated (applied) to sibling goals at commit-time. A goal can simultaneously attempt to unify against the heads of all the clauses; however different clause invocations for a goal must commit in some serializable order.

Commitment involves three operations: the atomic publication of the answer bindings, action on other OR-siblings and promotion of body goals. Atomic publication of bindings means that the bindings are (conceptually) instantaneously applied to all the goals in the body of the clause and to all AND-sibling goals of the committing goal ai. The extent of this publication is determined by goal system boundaries: the bindings are published only up to the smallest enclosing goal system boundary. Together with atomic publication, both the commit operations also cause the goals in the body goal-system to be executed as AND-siblings of the goals that were the siblings of the committing goal (i.e. their uncles). The commit operators differ in the actions they take with respect to other OR-siblings of the committing guard system, either in the same clause or in other clauses. The don't care commit kills them all. The don't know commit allows OR-siblings to keep on computing, in effect allowing a goal to commit multiple bindings. Each of these bindings is committed to a different copy of the rest of the goals in the smallest enclosing goal system. Thus, if a goal '&'-commits bindings θ, all the sibling goals in the smallest enclosing goal system are split into two at commit time.

Goal system boundaries are one-way walls: they allow bindings committed by a sibling of the goal system to enter the block, but prevent the bindings committed from within the goal system from leaving. As computation proceeds, the goal system within a block may thus repeatedly split. Two adjacent blocks (isolated atoms or isolated goal systems) are combined when there is no more progress to be made in either one, i.e. either each has terminated successfully or suspended for lack of input. The process of merging two blocks b1 and b2 is concerned with creating another block b which has one OR-branch for every compatible pair of OR-branches, one each from b1 and b2. An OR-branch contains a sequence of suspended blocks and goals, together with a (possibly vacuous) substitution, which is the composition of all the substitutions that have been committed internal to the goal system. Two OR-branches are compatible if their substitutions are compatible. Informally, two substitutions are compatible if they do not assign ununifiable values to the same variable. The substitution associated with an OR-branch of b is obtained from these two substitutions, and the sequence of blocks obtained by concatenating the two sequences. Conceptually, all the branches of one block may be merged with all the branches of another block in parallel. Finally, computation succeeds when a branch finds a solution (no more goals left to prove); it fails when all branches terminate in failure.
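As an illustration of '&'-commitment (our example; p is an arbitrary sibling goal):

    color(red)  ← true & true.
    color(blue) ← true & true.

    ← color(C) . p(C).
    % color(C) may commit both C=red and C=blue; the enclosing goal
    % system splits into two copies, one running p(red), one p(blue).
    % Had the clauses used '|' instead, the first commitment would kill
    % the other OR-sibling and only one alternative would survive.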
CP as a constraint-based language

The use of CP as a constraint language should now be clear: goals in the current resolvent correspond to constraints and the program axioms correspond to the rules of behaviour for a constraint. The versatility of CP as a constraint language arises from its simple solution for the control problem. The control problem in the context of constraint-based languages is: given an under-constrained system, which of a possible set of assumptions to make next? Another useful metaphor for programming in CP is to think of a goal as representing a process, processes communicating with each other by instantiating shared variables to structures which may contain other terms, called messages.³ (Recall that variables may be instantiated not just to constants - actually, just integers in Steele's language - but also to arbitrary terms.) This allows the user to solve the control problem by programming negotiations between various constraints. For example, in the case of a discrete constraint satisfaction problem, it is possible for the user to write constraints such that if local propagation does not yield a solution then the constraints cooperate to determine the problem-variable which is the most constrained (the so-called 'fail-first' heuristic) and have the constraint corresponding to this variable make assumptions about the possible values for the variable. (We discuss a specific example in the next section.) The elegance of the CP solution to the control problem lies in that such heuristic rules are expressed in the same language, using the same concepts and techniques as the constraint propagation rules.

³The versatility of CP as a concurrent programming language is demonstrated in e.g. [Saraswat, forthcoming] and [Saraswat, 1987a]. An overview of programming techniques in the related CLP language Concurrent Prolog may be found in [Shapiro, 1986].

The process metaphor provides another important benefit: in CLP languages, process behaviours are naturally specified in a recursive form. Such rules can describe in a succinct fashion arbitrary recursively constructed topologies and inter-connection patterns for constraints. For example, one constraint system may be specified for solving the N-queens problem, where N is an input: this system uses the value of N to spawn a network of appropriate size. Moreover constraint definitions (and not just connection structures) may recursively depend upon each other.

We now consider the design criteria Steele lays down for a constraint language.

Design Goal 1: As far as possible, the computational state of a constraint system should depend only upon the relationships stated so far, and not on the order in which they were stated.

In our language constraints are represented by means of goals in the current resolvent.
We now consider the design criteria Steele lays down for a constraint language.

Design Goal 1: As far as possible, the computational state of a constraint system should depend only upon the relationships stated so far, and not on the order in which they were stated.

In our language constraints are represented by means of goals in the current resolvent. All these goals are treated as AND-parallel siblings: hence any one goal can reduce at any given time, provided that it can find a matching behaviour.

3The versatility of CP as a concurrent programming language is demonstrated in, e.g., [Saraswat, forthcoming] and [Saraswat, 1987a]. An overview of programming techniques in the related CLP language Concurrent Prolog may be found in [Shapiro, 1986].

Design Goal 2: As far as possible, a constraint-based system shall perform its computations on the basis of locally available information only.

In our language the only information that can influence the behaviour of a constraint is information contained directly in the constraint in the form of variables, and their current bindings. Hence this design goal is trivially satisfied.

Design Goal 3: A constraint-based system should, so far as possible, be monotonic. The more is known, the more can be deduced, and once a value has been deduced, knowing more true things ought not to capriciously invalidate it.

The language CP is monotonic in a very important sense: as computation progresses, only bindings that are consistent with the ones already generated are produced. Moreover, if a constraint is known to be true, then providing more information (in the form of bindings) cannot invalidate the constraint.

Steele goes on to discuss the use of the term 'capriciously'. He wants to be able to allow the system to make assumptions which may later be retracted in the light of more information in a reasoned way. In CP, assumptions are made when a goal reduces using a rule with the don't know commit operation: the bindings associated with this resolution step constitute the assumptions made in this inference. In a sequential language such as Prolog, such bindings may be undone on backtracking; in CP, such bindings are always assumed to be made to a copy of the current resolvent. Hence taking a tentative step in CP always corresponds to splitting the current query into two disjoint queries. If future processing results in a contradiction being discovered, the current copy is merely discarded; meanwhile the other copy is free to make other derivations, and thus pursue other contexts.

The presence of the other control structures may also be motivated naturally. The '↓'-annotation is essential: without some such annotation on unification it is impossible to specify (efficiently) that a highly non-deterministic constraint should suspend until more bindings are available which reduce the number of possible solutions for the constraint. The don't-care commit is necessary to allow the user to specify that alternate solutions to the constraint are to be eschewed, thus pruning portions of the search-space. Both the '↓'-annotation and the '|' commit introduce incompleteness.

Finally, blocks allow the user to provide control information which may be quite important in solving loosely connected constraint systems efficiently. There are two important computational savings that blocks may introduce. First, in a system such as b, [g1], [g2], any determinate bindings introduced by b are shared by all the sub-contexts in g1 and g2. (This is quite analogous to pushing sub-contexts in Conniver-style languages, in which changes in the original context are visible in the sub-context as well.) The advantage here is that whenever g1 splits into two, b is not copied into both the sub-contexts (which would result in b making the same transition twice). The price paid is that no bindings that g1 produces can be communicated to b until merge-time. Second, in a system in which g1 and g2 spawn a large number (say c1 and c2 respectively) of alternate branches, only a few of which survive at the end (say b1 ≤ c1 and b2 ≤ c2), the number of contexts examined is c1 + c2 + b1 × b2 rather than c1 + c2 + c1 × c2.
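(A quick illustration, with numbers of our own choosing: if c1 = c2 = 100 and only b1 = b2 = 3 branches of each survive, the blocked computation examines 100 + 100 + 3 × 3 = 209 contexts, while the unblocked one examines 100 + 100 + 100 × 100 = 10,200.)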
If there are few interactions between two large constraint systems, it is preferable therefore to solve the constraint systems in isolation and then combine the results. (See [Saraswat, 1987c] for a discussion of an example used in [deKleer, 1986] to illustrate pathological behaviour by chronological backtracking systems.)

To sum up, our language design exhibits the following characteristics: it allows the user to express control over the constraint-propagation as well as the constraint-selection phase using naturally motivated concurrent programming idioms, it allows a natural notion of user-definable, hierarchical, mutually-recursive constraints, and it provides a problem-solving framework in which multiple solutions are possible, together with the possibility of simultaneously working in more than one context.

In the following we consider a solution to the N-queens example. We first consider a purely declarative program (with no search control) and then consider how to improve its performance by programming various heuristics.

A. A straightforward solution

We consider a solution (first presented and discussed in [Saraswat, 1987c]) in which there is a constraint for every square on the chess-board. We imagine that in order to solve the N-queens problem, we have spawned an N × N chess-board with one cell constraint for every square on the board. Each constraint has six parameters: its I and J coordinates, and four wires (variables), H, V, L and R. All the cells on the same row have the same H wire, on the same column the same V wire, on the same left-diagonal the same L wire and on the same right-diagonal the same R wire. (Each wire could thus have a fan-in/fan-out of up to N.)

There are just two behaviours for every cell. Each cell may either non-deterministically decide that it has a queen (in which case it sends its Id on all the four wires incident on it and terminates) or else it waits for some cell on the horizontal wire to declare that it has a queen and then it terminates. Note that as soon as a cell decides that it has a queen, no other cell that is dominated by it can decide that it has a queen (no two cells have the same Id). It should be clear that this solution is correct and complete: exactly the set of solutions to the N-queens problem may be obtained by following these behaviours. The specification for a cell is simply:

cell(I,J,J,I,I,I) ← true & true.
cell(I,J,H↓,V,L,R) ← true & true.

B. Doing local propagation before choosing

While the program given above is correct, it may not exhibit good run-time behaviour, for two reasons. First, there is no guarantee that when a cell asserts that it has a queen, all other cells which are dominated die immediately. If these cells remain they may be unnecessarily copied each time a new assumption is made. Second, it is preferable to detect as soon as possible when all the cells on a row or column have been dominated by queens already placed (and there is no queen on that row or column), because such a state is bound to lead to failure. Along the same lines, if a row or column has just one non-dominated cell left, then it is preferable for that cell to immediately decide that it has a queen, because, given the problem formulation, it must have one for a solution to exist. In a phrase, local propagation should precede making assumptions.

We obtain this effect as follows. We assume a mechanism (discussed in the next section) for serialising phases.
There will be N phases; in each phase, one queen is placed, and the next phase is not initiated until the previous phase quiesces. A phase is initiated when a cell (the leader for this phase) decides that it has a queen, and is considered to terminate when the leader detects quiescence. We now consider a topology in which each cell, besides having its I and J coordinates and the four wires, is also connected in four rings, one each along the horizontal, vertical, left-diagonal and right-diagonal axes. For each process, its ring-connections consist merely of two wires, one connecting it to its predecessor (the left connection) in the ring and the other connecting it to its successor (the right connection). (To be precise, the left connection of a cell is the same variable as the right connection of the cell to its left along the given axis; similarly for the other direction.)

As before, when a cell decides it has a queen, it sends its Id on the H, V, L and R wires. We would now like to force the cells that get dominated to die in the current phase. We can achieve this by using a variation of the so-called short-circuit technique for detecting distributed termination ([Saraswat, 1987a]). The idea is simple. When a cell is dominated, it should die; this implies that it should remove itself from all its rings. It can remove itself from a ring by shorting its left and right connections on that ring: by shorting two variables, we mean unifying them. After it does this, its right neighbour will become the right neighbour of its left neighbour, and vice versa. (This is analogous to removing an item from a linked list.) However, when a cell decides it has a queen, all the cells remaining on all its rings will remove themselves. After this occurs, the leader will find that, for each ring, its left and right connections are the same; it thus detects that the current phase has terminated.

We give a sample rule to show how straightforward this is to implement in CP. We assume that each cell is of the form:

cell(id(I,J), wire(H, V, L, R), rings(Hleft-Hright, Vleft-Vright), rings(Lleft-Lright, Rleft-Rright))

where the variable names should be self-explanatory.
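The shorting step itself can be pictured schematically (a Python sketch with names of our own; in CP the shorting is literally the unification of the two ring variables):

    class Cell:
        def __init__(self):
            self.left = self.right = self   # a one-element ring

        def short(self):
            """Remove this cell from the ring by shorting its left and
            right connections, as a dominated cell does when it dies."""
            self.left.right = self.right
            self.right.left = self.left

    def make_ring(cells):
        """Link the cells into a circular doubly-linked ring."""
        for a, b in zip(cells, cells[1:] + cells[:1]):
            a.right, b.left = b, a

    def phase_terminated(leader):
        """The leader detects quiescence when its ring has shorted
        down to the leader alone."""
        return leader.left is leader and leader.right is leader

    ring = [Cell() for _ in range(3)]
    make_ring(ring)
    ring[1].short()
    ring[2].short()
    assert phase_terminated(ring[0])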
Note that, as in the previous section, the leader process can detect that the current phase has quiesced exactly when all its four rings are shorted. With this new protocol, however, it is possible that the four rings are never shorted: this happens exactly when, as a result of placing this queen, some row or column which does not yet have a queen has no more cells left. This results in the current context being deadlocked and consequently abandoned by the problem-solver.

2. Early detection of determinacy

We leave as an exercise for the reader the problem of programming a protocol such that if in the current phase a cell is detected as being the only one in a row or column, that cell is forced to have a queen (in the current phase).

3. Choosing the next queen wisely

In the previous sections, the decision of which cell next decides to have a queen is still non-deterministic: any cell that is not yet dominated may so decide. We now sketch how the same techniques may be used to implement heuristics for making this control decision. Recall that the problem was formulated by having a cell constraint for every square on the chess-board. We now add an extra constraint, enable, one for each row in the chess-board. Each enable constraint is linked into the horizontal ring for that row, and all the enable constraints are connected together in another ring. Conceptually, a token will flow down the links of this privilege ring, which will ensure mutual exclusion, i.e. sequentialisation of each phase. A cell can decide that it has a queen only if the enable process on its horizontal ring has the token. When this cell detects the end of its quiescence phase, the token is passed on to the next enable constraint in the privilege ring. This simple protocol results in the queens being placed in row-major order.

Consider now the implementation of a heuristic (which is quite useful for the N-queens problem) that in each phase only a cell with the highest weight can decide to place a queen: the weight of a cell is the number of other cells this cell can dominate if it had a queen. It is straightforward to associate with each ring a count of the number of elements in the ring, to have each cell compute its weight from the counts of the rings incident on it, and to add to the cells a network of max devices which select in each phase the cell with the highest weight. This cell then becomes the leader for the next phase, and continues the cycle of waiting for local propagation to terminate (achieved by means of the end goal), enabling the selection phase for determining the next leader, and passing control to it.

Related work

We very briefly consider other related work. More details may be found in [Saraswat, forthcoming]. CP differs from other CLP languages such as GHC and Parlog in using atomic commitment, together with unification (as opposed to matching) during process execution. This ability seems fundamental to obtain the dynamic dataflow that characterises constraint-based computation. Concurrent Prolog is also based on unification, but it introduces a problematic capability annotation which does not seem to be directly relevant to modelling constraints. An alternative viewpoint related to constraints may be found in [Lassez and Jaffar, 1987]. The techniques in [van Hentenryck and Dincbas, 1986] seem to be easily representable, and are naturally generalised, in our framework. CP avoids the combination of chronological backtracking and pre-determined order for instantiating problem variables that plague the use of Prolog as a language for constraint-based computation ([deKleer, 1986]). By making sure that the opportunity is available to propagate all the consequences of a choice to all the constraints before making the next choice, we ensure that it is possible to write programs such that when a contradiction is discovered, the last choice made contributes to the contradiction.

Problem solvers based on reason-maintenance systems have recently been studied (e.g. [deKleer, 1986], [McDermott, 1983]). In such systems, as computation progresses, the problem solver informs the RMS of the assumptions it makes and of the justifications that it discovers. The (well-known) problem here is that it may be quite difficult for the problem-solver to determine which dependencies to capture in its justification of an inference, and also quite difficult for the problem-solver to exercise control.

I am grateful to Jon Doyle for extensive discussions, to Guy Steele, and to many others at CMU and CGI (particularly Gary Kahn, Dave Hornig and Mark Fox) who have discussed this work with me.

References

[Bruynooghe and Pereira, 1985] M. Bruynooghe and L.M. Pereira. Deduction revision by intelligent backtracking. In J.A.
Campbell, editor, Implementations of Prolog, Ellis Horwood, 1985.
[deKleer, 1986] J. deKleer. An assumption based TMS. Artificial Intelligence, 28:127-162, 1986.
[Lassez and Jaffar, 1987] J.-L. Lassez and J. Jaffar. Constraint logic programming. In Proceedings of the SIGACT-SIGPLAN Symposium on Principles of Programming Languages, ACM, January 1987.
[Lauriere, 1978] J.-L. Lauriere. A language and a program for stating and solving combinatorial problems. AI, 10:29-127, 1978.
[McDermott, 1983] D. McDermott. Contexts and data-dependencies: a synthesis. IEEE Trans. on Pattern Analysis and Machine Intelligence, 5(3), 1983.
[Saraswat, 1987a] V.A. Saraswat. Detecting distributed termination efficiently: the short-circuit technique in FCP(↓,|). February 1987. To be submitted.
[Saraswat, 1987c] V.A. Saraswat. The concurrent logic programming language CP: definition and operational semantics. In Proceedings of the SIGACT-SIGPLAN Symposium on Principles of Programming Languages, ACM, January 1987.
[Saraswat, forthcoming] V.A. Saraswat. Concurrent Logic Programming Languages. PhD thesis, Carnegie-Mellon University, forthcoming.
[Shapiro, 1986] E.Y. Shapiro. Concurrent Prolog: a progress report. IEEE Computer, pages 44-58, August 1986.
[Steele, 1980] G.L. Steele. The definition and implementation of a computer programming language based on constraints. PhD thesis, M.I.T., 1980.
[van Hentenryck and Dincbas, 1986] P. van Hentenryck and M. Dincbas. Domains in logic programming. In Proceedings of the AAAI, pages 759-765, 1986.
1987
10
550
Inference In Text Understanding
Peter Norvig
Computer Science Dept., Evans Hall
University of California, Berkeley
Berkeley CA 94720

Abstract

The problem of deciding what was implied by a written text, of "reading between the lines," is the problem of inference. To extract proper inferences from a text requires a great deal of general knowledge on the part of the reader. Past approaches have often postulated an algorithm tuned to process a particular kind of knowledge structure (such as a script, or a plan). An alternative, unified approach is proposed. The algorithm recognizes six very general classes of inference, classes that are not dependent on individual knowledge structures, but instead rely on patterns of connectivity between concepts. The complexity has been effectively shifted from the algorithm to the knowledge base; new kinds of knowledge structures can be added without modifying the algorithm.

The reader of a text is faced with a formidable task: recognizing the individual words of the text, deciding how they are structured into sentences, determining the explicit meaning of each sentence, and also making inferences about the likely implicit meaning of each sentence, and the implicit connections between sentences. An inference is defined to be any assertion which the reader comes to believe to be true as a result of reading the text, but which was not previously believed by the reader, and was not stated explicitly in the text. Note that inferences need not follow logically or necessarily from the text; the reader can jump to conclusions that seem likely but are not 100% certain.

In the past, there have been a variety of programs that handled inferences at the sentential and inter-sentential level. However, there has been a tendency to create new algorithms every time a new knowledge structure is proposed. For example, from the Yale school we see one program, MARGIE [Schank et al., 1973], that handled single-sentence inferences. Another program, SAM [Cullingford, 1978], was introduced to process stories referring to scripts, and yet another, PAM [Wilensky, 1978], dealt with plan/goal interactions. But in going from one program to the next a new algorithm always replaced the old one; it was not possible to incorporate previous results except by re-implementing them in the new formalism. Even individual researchers have been prone to introduce a series of distinct systems. Thus, we see Charniak going from demon-based [Charniak, 1972] to frame-based [Charniak, 1978] to marker-passer based [Charniak, 1986] systems. Granger has gone from a plan/goal based system [Granger, 1980] to a spreading activation model [Granger, Eiselt and Holbrook, 1984]. One could say that the researchers gained experience, but the programs did not. Both these researchers ended up with systems that are similar to the one outlined here.

I have implemented an inferencing algorithm in a program called FAUSTUS (Frame Activated Unified STory Understanding System). A preliminary version of this system was described in [Norvig, 1983], and a complete account is given in [Norvig, 1986]. The program is designed to handle a variety of texts, and to handle new subject matter by adding new knowledge rather than by changing the algorithm or adding new inference rules. Thus, the algorithm must work at a very general level.

This work was supported in part by National Science Foundation grant IST-8208602 and by Defense Advanced Research Projects Agency contract N00039-84-C-0089.
The knowledge base is modeled in the KODIAK representation language, a semantic net-based formalism with a fixed set of well-defined primitive links. We present a simplified version for expository reasons; see [Wilensky, 1986] for more details. KODIAK resembles KL-ONE [Brachman and Schmolze, 1985], and continues the renaissance of spreading activation approaches spurred by [Fahlman, 1979]. The algorithm makes use of six inference classes which are described in terms of the primitives of this language. The algorithm itself can be broken into steps as follows:

Step 0: Construct a knowledge base defining general concepts like actions, locations, and physical objects, as well as specific concepts like bicycles and tax deductions. The same knowledge base is applied to all texts, whereas steps 1-5 apply to an individual text.

Step 1: Construct a semantic representation of the next piece of the input text. Various conceptual analyzers (parsers) have been used for this, but the process will not be addressed in this paper. Occasionally the resulting representation is vague, and FAUSTUS resolves some ambiguities in the input using two kinds of non-marker-passing inferences.

Step 2: Pass markers from each concept in the semantic representation of the input text to adjacent nodes, following along links in the semantic net. Markers start out with a given amount of marker energy, and are spread recursively through the network, spawning new markers with less energy, and stopping when the energy value hits zero. (Each of the primitive link types in KODIAK has an energy cost associated with it.) Each marker points back to the marker that spawned it, so we can always trace the marker path from a given marker back to the original concept that initiated marker passing.

Step 3: Suggest inferences based on marker collisions. When two or more markers are passed to the same concept, a marker collision is said to have occurred. For each collision, look at the sequence of primitive link types along which markers were passed. This is called the path shape. If it matches one of six pre-defined path shapes then an inference is suggested. Suggestions are kept in a list called the agenda, rather than being evaluated immediately. Note that inferences are suggested solely on the basis of primitive link types, and are independent of the actual concepts mentioned in the text. The power of the algorithm comes from defining the right set of pre-defined path shapes (and associated suggestions).

Step 4: Evaluate potential inferences on the agenda. The result can be either making the suggested inference, rejecting it, or deferring the decision by keeping the suggestion on the agenda. If there is explicit contradictory evidence, an inference can be rejected immediately. If there are multiple potential inferences competing with one another, as when there are several possible referents for a pronoun, then if none of them is more plausible than the others, the decision is deferred. If there is no reason to reject or defer, then the suggested inference is accepted.

Step 5: Repeat steps 1-4 for each piece of the text.

Step 6: At the end of the text there may be some suggested inferences remaining on the agenda. Evaluate them to see if they lead to any more inferences.
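Steps 2 and 3 can be pictured with a small sketch (Python, over a toy network of our own; FAUSTUS itself works over KODIAK links, each with its own energy cost):

    # Toy semantic net: node -> list of (link type, neighbor).
    NET = {
        "lawyer": [("H", "professional")],
        "professional": [("S", "employed-by")],
        "employed-by": [("R", "employer")],
        "client": [("H", "employer")],
    }
    LINK_COST = {"H": 1, "S": 2, "R": 1}

    def spread(origin, energy):
        """Step 2: spread markers recursively, recording the sequence of
        link types traversed; stop when the energy runs out."""
        markers = []
        def go(node, path, e):
            markers.append((node, tuple(path)))
            for link, nbr in NET.get(node, []):
                if e - LINK_COST[link] >= 0:
                    go(nbr, path + [link], e - LINK_COST[link])
        go(origin, [], energy)
        return markers

    def collisions(m1, m2):
        """Step 3: report nodes reached from both origins, with the two
        path shapes that can then be matched against the six classes."""
        at1 = {node: path for node, path in m1}
        return [(n, at1[n], p) for n, p in m2 if n in at1]

    print(collisions(spread("lawyer", 5), spread("client", 5)))
    # [('employer', ('H', 'S', 'R'), ('H',))]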
3. Describing Inference Classes

FAUSTUS has six marker-passing inference classes. These classes will be more meaningful after examples are provided below, but I want to emphasize their formal definition in terms of path shapes. Each inference class is characterized in terms of the shapes of the two path-halves which lead to the marker collision. The path-half shapes are defined in terms of three primitive link types: H for hierarchical links between a concept and one of its super-categories, S for links between a concept and one of its "slots" or relations, and R for a link indicating a range restriction on a relation. A * marks indefinite repetition, and a -1 superscript marks traversal of an inverse link.

Inference Classes:

Inference Class        Path 1        Path 2
Elaboration            Ref           Elaboration
Double Elaboration     Elaboration   Elaboration
Reference Resolution   Ref           Ref
Concretion             Elaboration   Filler
Relation Concretion    Elaboration   Filler
View Application       Constraint    View

Path Shapes:

Path Name    Path Shape
Elaboration  origin → H* → S → R → H* → collision
Ref          origin → H* → collision
Filler       origin → S⁻¹ → S → H* → collision
Constraint   origin → H* → R* → collision
View         origin → H* → V → H* → R⁻¹ → collision

Non-Marker-Passing Inference Classes: Relation Classification, Relation Constraint.

For example, an elaboration collision can occur at concept X when one marker path starts at Y and goes up any number of H links to X, and another marker path starts at Z and goes up some H links, out across an S link to a slot, then along an R link to the category the filler of that slot must be, and then possibly along H links to X. Given a collision, FAUSTUS first looks at the shape of the two path-halves. If either shape is not one of the named shapes, then no inference can come of the collision. Even if both halves are named shapes, a suggestion is made only if the halves combine to one of the six inference classes, and the suggestion is accepted only if certain inference-class-specific criteria are met.
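The shape matching itself is mechanical; for instance (a Python sketch using a regular expression over the string of link types; the encoding is ours, not FAUSTUS's):

    import re

    SHAPES = {
        "Elaboration": r"H*SRH*",
        "Ref":         r"H*",
    }

    def classify_half(path):
        """Name the path-half shapes matching a sequence of link types."""
        s = "".join(path)
        return [name for name, pat in SHAPES.items() if re.fullmatch(pat, s)]

    print(classify_half(("H", "S", "R")))   # ['Elaboration']
    print(classify_half(("H", "H")))        # ['Ref']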
The rest of the paper will be devoted to showing a range of examples, and demonstrating that the general inference classes used by FAUSTUS can duplicate inferences made by various special-purpose algorithms.

One of the first inferencing programs was the Teachable Language Comprehender, or TLC [Quillian, 1969], which took as input single noun phrases or simple sentences, and related them to what was already stored in semantic memory. For example, given the input "lawyer for the client," the program could output "at this point we are discussing a lawyer who is employed by a client who is represented or advised by this lawyer in a legal matter." The examples given in [Quillian, 1969] show an ability to find the main relation between two concepts, but not to go beyond that. One problem with TLC was that it ignores the grammatical relations between concepts until the last moment, when it applies "form tests" to rule out certain inferences. For the purposes of generating inferences, TLC treats the input as if it had been just "Lawyer. Client". Quillian suggests this could lead to a potential problem. He presents the following examples:

lawyer for the enemy / enemy of the lawyer
lawyer for the wife / wife of the lawyer
lawyer for the client / client of the lawyer

In all the examples on the left hand side, the lawyer is employed by someone. However, among the examples on the right hand side, only the last should include the employment relation as part of the interpretation. While he suggests a solution in general terms, Quillian admits that TLC as it stood could not handle these examples.

FAUSTUS has a better way of combining information from syntax and semantics. Both TLC and FAUSTUS suggest inferences by spreading markers from components of the input, and looking for collisions. The difference is that TLC used syntactic relations only as a filter to eliminate certain suggestions, while FAUSTUS incorporates the meaning of these relations into the representation before spreading markers. Even vague relations denoted by for and of are represented as full-fledged concepts, and are the source of marker-passing.

Trace output #1 below shows that FAUSTUS can find a connection between lawyer and client without the for relation, just like TLC. Markers originating at the representations of "lawyer" and "client" collide at the concept employing-event. The shape of the marker path indicates that this is a double elaboration path, and the suggested inference, that the lawyer employs the client, is eventually accepted. In output #2 a non-marker-passing inference first classifies the for as an employed-by relation, because a lawyer is defined as a professional-service-provider, which includes an employed-by slot as a specialization of the for slot. This classification means the enemy must be classified as an employer. Once this is done, FAUSTUS can suggest the employing-event that mediates between an employee and an employer, just as it did in #1. Finally, in #3, the of is left with the vague interpretation related-to, so the enemy does not get classified as an employer, and no employment event is suggested.

Quillian #1
[1] Lawyer.
Rep: (LAWYER)
[2] Client.
Rep: (CLIENT)
Inferring: there is a EMPLOYING-EVENT such that the CLIENT is the EMPLOY-ER of it and the LAWYER is the EMPLOY-EE of it. This is a DOUBLE-ELABORATION inference.

Quillian #2
[1] lawyer for the enemy
Rep: (LAWYER (FOR = THE ENEMY))
Inferring: a FOR of the LAWYER is the EMPLOYED-BY. This is a RELATION-CLASSIFICATION inference.
Inferring: there is a EMPLOYING-EVENT such that the ENEMY is the EMPLOY-ER of it and the LAWYER is the EMPLOY-EE of it. This is a DOUBLE-ELABORATION inference.
Quillian #3
[1] enemy of the lawyer
Rep: (ENEMY (OF = THE LAWYER))
Inferring: a OF of the ENEMY is probably a RELATED-TO. This is a RELATION-CONCRETION inference.

It should be noted that [Charniak, 1986] has a marker-passing mechanism that also improves on Quillian, and is in many ways similar to FAUSTUS. Charniak integrates parsing, while FAUSTUS does not, but FAUSTUS has a larger knowledge base (about 1000 concepts compared to about 75). Another key difference is that Charniak uses marker strength to make decisions, while FAUSTUS only uses markers to find suggestions, and evaluates them with other means.

5. Script Based Inferences

The SAM (Script Applier Mechanism) program [Cullingford, 1978] was built to account for stories that refer to stereotypical situations, such as eating at a restaurant. A new algorithm was needed because Conceptual Dependency couldn't represent scripts directly. In KODIAK, there are no arbitrary distinctions between "primitive acts" and complex events, so eating-at-a-restaurant is just another event, much like eating or walking, except that it involves multiple agents and multiple sub-steps, with relations between the steps. Consider the following example:

The Waiter
[1] John was eating at a restaurant with Mary.
Rep: (EATING (ACTOR = JOHN) (SETTING = A RESTAURANT) (WITH = MARY))
Inferring: a WITH of the EATING is probably the ACCOMPANIER because Mary fits it best. This is a RELATION-CONCRETION inference.
Inferring: the EATING is a EAT-AT-RESTAURANT. This is a CONCRETION inference.
[2] The waiter spilled soup all over her.
Rep: (SPILLING (ACTOR = THE WAITER) (PATIENT = SOUP) (RECIPIENT = HER))
Inferring: there is a EAT-AT-RESTAURANT such that the SOUP is the FOOD-ROLE of it and the RESTAURANT is the SETTING of it. This is a DOUBLE-ELABORATION inference.
Inferring: there is a EATING such that the SOUP is the EATEN of it and it is the PURPOSE of the RESTAURANT. This is a DOUBLE-ELABORATION inference.
Inferring: there is a EAT-AT-RESTAURANT such that the WAITER is the WAITER-ROLE of it and the SOUP is the FOOD-ROLE of it. This is a DOUBLE-ELABORATION inference.

The set of inferences seems reasonable, but it is instructive to contrast them with the inferences SAM would have made. SAM would first notice the word restaurant and fetch the restaurant script. From there it would match the script against the input, filling in all possible information about restaurants with either an input or a default value, and ignoring input that didn't match the script. FAUSTUS does not mark words like restaurant or waiter as keywords. Instead it is able to use information associated with these words only when appropriate, to find connections to events in the text. Thus, FAUSTUS could handle John walked past a restaurant without inferring that he ordered, ate, and paid for a meal.

6. Plan/Goal Based Inferences

In the previous section we saw that FAUSTUS was able to make what have been called "script-based inferences" without any explicit script-processing control structure. This was enabled partially by adding causal information to the representation of script-like events. The theory of plans and goals as they relate to story understanding, specifically the work of Wilensky [Wilensky, 1978], was also an attempt to use causal information to understand stories that could not be comprehended using scripts alone. Consider story (4):

(4a) John was lost.
(4b) He pulled over to a farmer by the side of the road.
(4c) He asked him where he was.

Wilensky's PAM program processed this story as follows: from (4a) it infers that John will have the goal of knowing where he is. From that it infers he is trying to go somewhere, and that going somewhere is often instrumental to doing something there. From (4b) PAM infers that John wanted to be near the farmer, because he wanted to use the farmer for some purpose. Finally (4c) is processed. It is recognized that asking is a plan for knowing, and since it is known that John has the goal of knowing where he is, there is a match, and (4c) is explained. As a side effect of matching, the three pronouns in (4c) are disambiguated.

Besides resolving the pronouns, the two key inferences are that John has the goal of finding out where he is, and that asking the farmer is a plan to achieve that goal. In FAUSTUS, we can arrive at the same interpretation of the story by a very different method. (4a) does not generate any expectations, as it would in PAM, and FAUSTUS cannot find a connection between (4a) and (4b), although it does resolve the pronominal reference, because John is the only possible candidate. Finally, in (4c), FAUSTUS makes the two main inferences. The program recognizes that being near the farmer is related to asking him a question by a precondition relation (and resolves the pronominal references while making this connection).
FAUSTUS could find this connection because both the asking and the being-near are explicit inputs. The other connection is a little trickier. The goal of knowing where one is was not an explicit input, but "where he was" is part of (4c), and there is a collision between paths starting from the representation of that phrase and another path starting from the asking, which leads to the creation of the plan-for between John's asking where he is and his hypothetical knowing where he is.

The important conclusion, as far as FAUSTUS is concerned, is that both script- and goal-based processing can be reproduced by a system that has no explicit processing mechanism aimed at one type of story or another, but just looks for connections in the input as they relate to what is known in memory. For both scripts and goals, this involves defining situations largely in terms of their causal structure.

7. Coherence Relation Based Inferences

In this section we turn to inferences based on coherence relations, as exemplified by this example proposed by Kay and Fillmore [Kay, 1981]:

(5) A hiker bought a pair of boots from a cobbler.

From the definition of buying one could infer that the hiker now owns the boots that previously belonged to the cobbler, and the cobbler now has some money that previously belonged to the hiker. However, a more complete understanding of (5) should include the inference that the transaction probably took place in the cobbler's store, and that the hiker will probably use the boots in his avocation, rather than, say, give them as a gift to his sister. The first of these can be derived from concretion inferences once we have described what goes on at a shoe store. The problem is that we want to describe this in a neutral manner: to describe not "buying at a shoe store," which would be useless for "selling at a shoe store" or "paying for goods at a shoe store," but rather the general "shoe store transaction." This is done by using the commercial-event absolute, which dominates store-transaction on the one hand, and buying, selling and paying on the other. Each of these last three is also dominated by action. Assertions are made to indicate that the buyer of buying is both the actor of the action and the merchant of the commercial-event. The next step is to define shoe-store-transaction as a kind of store-transaction where the merchandise is constrained to be shoes. With that done, we get the following:

The Cobbler and the Hiker
[1] A cobbler sold a pair of boots to a hiker.
Rep: (SELLING (ACTOR = A COBBLER) (PATIENT = A BOOT) (RECIPIENT = A HIKER))
Inferring: the SELLING is a SHOE-STORE-TRANSACTION. This is a CONCRETION inference.
Inferring: there is a WALKING such that it is the PURPOSE of the BOOT and the HIKER is the OBJECT-MOVED of it. This is a DOUBLE-ELABORATION inference.

The program concludes that a selling involving shoes is a shoe store transaction, and although it was not printed, this means that it takes place in a shoe store, and the seller is an employee of the store. The second inference is based on a collision at the concept walking. The purpose of boots is walking, and the walking is to be done by the hiker, because that's what they do. Note that the representation is not sophisticated enough to distinguish between actual events and potential future events like this one.
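The concretion step itself is easy to picture (a Python sketch over a toy hierarchy of our own; FAUSTUS's actual knowledge base is a KODIAK network):

    # Toy hierarchy: category -> (parent, {slot: required filler}).
    KB = {
        "commercial-event":       (None, {}),
        "store-transaction":      ("commercial-event", {}),
        "shoe-store-transaction": ("store-transaction", {"merchandise": "shoes"}),
    }

    def isa(cat, ancestor):
        while cat is not None:
            if cat == ancestor:
                return True
            cat = KB[cat][0]
        return False

    def concretions(cat, fillers):
        """Subcategories of cat whose slot constraints the fillers satisfy."""
        return [c for c, (_, cons) in KB.items()
                if c != cat and isa(c, cat)
                and all(fillers.get(s) == v for s, v in cons.items())]

    print(concretions("commercial-event", {"merchandise": "shoes"}))
    # ['store-transaction', 'shoe-store-transaction']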
One hallmark of an AI program is to generate output that was not expected by the program's developer. The following text shows an example of this:

The President
[1] The president discussed Nicaragua.
Rep: (DISCUSSING (ACTOR = THE PRESIDENT) (CONTENT = NICARAGUA))
[2] He spoke for an hour.
Rep: (TALKING (ACTOR = HE) (DURATION = AN HOUR))
Inferring: 'HE' must be a PERSON, because it is the TALKER. This is a RELATION-CONSTRAINT inference.
Inferring: 'HE' refers to the PRESIDENT. This is a REFERENCE inference.
Inferring: the NICARAGUA is a COUNTRY such that it is the HABITAT of 'HE' and it is the COUNTRY of the PRESIDENT. This is a DOUBLE-ELABORATION inference.
Inferring: the TALKING refers to the DISCUSSING. This is a REFERENCE inference.

This example was meant to illustrate action/action co-reference. The talking in the second sentence refers to the same event as the discussing in the first sentence, but neither event is explicitly marked as definite or indefinite. FAUSTUS is able to make the inference that the two actions are co-referential, using the same mechanism that works for pronouns. The idea of treating actions under a theory of reference is discussed in [Lockman and Klappholz, 1980]. FAUSTUS correctly finds the coreference between the two actions, and infers that 'he' refers to the president. But FAUSTUS also infers that Nicaragua is the president's home or habitat and is the country of his constituency. This makes a certain amount of sense, since presidents must have such things, and Nicaragua is the only country mentioned. Of course, this was unexpected; we interpret the text as referring to the president of the United States because we are living in the U.S., and our president is a salient figure. Given FAUSTUS's lack of context, the inference is quite reasonable.

9. Conclusion

We have shown that a general marker-passing algorithm with a small number of inference classes can process a wide variety of texts. FAUSTUS shifts the complexity from the algorithm to the knowledge base to handle examples that other systems could do only by introducing specialized algorithms.

References

[Brachman and Schmolze, 1985] Ronald J. Brachman and James G. Schmolze, An overview of the KL-ONE knowledge representation system, Cognitive Science 9, 2 (1985), 171-216.
[Charniak, 1972] Eugene Charniak, Toward a Model of Children's Story Comprehension, AI Tech. Rep. 266, MIT AI Lab, Cambridge, MA, 1972.
[Charniak, 1978] Eugene Charniak, On the use of framed knowledge in language comprehension, Artificial Intelligence 11, 3 (1978), 225-266.
[Charniak, 1986] Eugene Charniak, A neat theory of marker passing, Proceedings of the Fifth National Conference on Artificial Intelligence, 1986, 584-588.
[Cullingford, 1978] R. E. Cullingford, Script Application: Computer understanding of newspaper stories, Research Report #116, Yale University Computer Science Dept., 1978.
[Fahlman, 1979] Scott E. Fahlman, NETL: A System for Representing and Using Real-World Knowledge, MIT Press, Cambridge, 1979.
[Granger, 1980] Richard H. Granger, Adaptive Understanding: Correcting Erroneous Inferences, Research Report #171, Yale Univ. Dept. of Computer Science, New Haven, CT, 1980.
[Granger, Eiselt and Holbrook, 1984] Richard H. Granger, Kurt P. Eiselt and Jennifer K. Holbrook, Parsing with parallelism: a spreading-activation model of inference processing during text understanding, Technical Report #228, Dept. of Information and Computer Science, UC Irvine, 1984.
[Kay, 1981] Paul Kay, Three Properties of the Ideal Reader, Berkeley Cognitive Science Program, 1981.
[Lockman and Klappholz, 1980] Abe Lockman and A.
David Klappholz, Toward a procedural model of contextual reference resolution, Discourse Processes 3 (1980), 25-71.
[Norvig, 1983] Peter Norvig, Frame Activated Inferences in a Story Understanding Program, Proceedings of the 8th Int. Joint Conf. on AI, Karlsruhe, West Germany, 1983.
[Norvig, 1986] Peter Norvig, A Unified Theory of Inference for Text Understanding, Ph.D. Thesis, University of California, Berkeley, 1986.
[Quillian, 1969] M. Ross Quillian, The teachable language comprehender: A simulation program and theory of language, Communications of the ACM, 1969, 459-476.
[Schank et al., 1973] Roger C. Schank, Neil Goldman, Charles Rieger and Christopher K. Riesbeck, MARGIE: Memory, analysis, response generation, and inference in English, 3rd Int. Joint Conf. on AI, 1973, 255-261.
[Wilensky, 1978] Robert W. Wilensky, Understanding Goal-based Stories, Yale University Computer Science Research Report, New Haven, CT, 1978.
[Wilensky, 1986] Robert Wilensky, Some Problems and Proposals for Knowledge Representation, Report No. UCB/Computer Science Dept. 86/294, Computer Science Division, UC Berkeley, 1986.
1987
100
551
James Pustejovsky and Sabine Bergler
Department of Computer Science
Brandeis University
Waltham, MA 02254
617-736-2709
jamesp@brandeis.csnet-relay

Abstract

There has recently been a great deal of interest in the structure of the lexicon for natural language understanding and generation. One of the major problems encountered has been the optimal organization of the enormous amounts of lexical knowledge necessary for robust NLP systems. Modifying machine readable dictionaries into semantically organized networks, therefore, has become a major research interest. In this paper we propose a representation language for lexical information in dictionaries, and describe an interactive learning approach to this problem, making use of extensive knowledge of the domain being learned. We compare our model to existing systems designed for automatic classification of lexical knowledge.

In this paper we describe an interactive machine learning approach to the problem of making machine readable dictionaries useful to natural language processing systems. This is accomplished in part by making extensive use of the knowledge of the domain being learned. The domain model we are assuming is the Extended Aspect Calculus [Pustejovsky, 1987], where possible word (verb) meanings are constrained by how arguments may bind to semantic types. In the case of lexical meanings for words, if the semantic theory constrains what a possible word meaning can be, then the learning task is greatly simplified, since the model specializes the most general rule descriptions. The system generates hypothesis instances for word meanings based on the domain model, for which the interactive user acts as credit assigner. Generalization proceeds according to the paths established by the model. We compare our framework to existing systems designed for automatic classification of lexical knowledge.

There are three points we wish to make in this paper:

- The semantic relations and connections between lexical items in a dictionary can be easily learned if a semantic model of the domain is used to bias the acquisition process.
- A theory of lexical semantics can act as this model, constraining what a possible word type is, just as a grammar constrains what an acceptable sentence is.
- An interactive knowledge acquisition device can improve the performance of purely algorithmic approaches to lexical hierarchy formation.

The paper will be organized as follows. In the second section we discuss the lexical information necessary for robust natural language processing systems. In section three we outline a framework for encoding the semantics associated with a word, the Extended Aspect Calculus. Then in section four we describe how to set up a dictionary environment for efficient lexical acquisition. Section five runs through the knowledge acquisition system, TULLY, which learns the semantic structure of verbs with the help of an interactive critic. Finally, we discuss how our system compares to previous attempts at lexical acquisition, and discuss directions for future research.

One of the central issues currently being addressed in natural language processing is: what information is needed in the lexicon for a system, in order to perform robust analysis and generation [Cf. Ingria, 1986, Cumming, 1986]? We examine this issue in detail here, and review what seems to be the minimum requirements for any lexicon.
Let us begin with one of the most central needs for analysis and parsing of almost any variety: knowing the polyadicity of a relation; that is, how many arguments a verb or predicate takes. Although this would appear to be a straightforward problem to solve, there is still very little agreement on how to specify what is and isn't an argument to a relation. For example, the verb butter can appear with two, three, four, or apparently five arguments, as illustrated below.

(1) a. John buttered the toast.
b. John buttered the toast with a knife.
c. John buttered the toast with a knife in the kitchen.
d. John buttered the toast with a knife in the kitchen on Tuesday.

Some indication must be given, either explicitly or implicitly, of how many NPs to expect for each verb form. Ignoring how each argument is interpreted for now, we could represent butter as butter(x,y), butter(x,y,z), butter(x,y,z,w), or butter(x,y,z,w,v). Generally, we can make a distinction between the real arguments and the modifiers of a predicate. Even with a clear method of determining what is a modifier, another problem is posed by verbs such as open, melt, sink, and close, called causative/inchoative pairs, discussed in [Atkins et al., 1986]. These verbs typically have both an intransitive, noncausative reading (2b), and a transitive, causative reading (2a).

(2) a. Susan opened the door.
b. The door opened.

The issue here is whether there should be two separate entries for verbs like open, open(x,y) and open(y), or one entry with a rule relating the two forms: open(x,y) ⇔ open(y). The arguments to nominal forms seem even more variable than those for verbs. In fact, they are in general entirely optional. For example, the nominal destruction can appear by itself (3a), with one argument (3b), or with two arguments (3c).

(3) a. The destruction was widespread.
b. The destruction of the city took place on Friday.
c. The army's destruction of the city took place on Friday.

We will not consider nominal forms in this paper, however.

Knowing the number of arguments for a verb is obviously no use if the lexicon gives no indication of where each one appears in the syntax. For example, in the simple active form of butter, the argument occupying the subject slot is always going to be the x in the argument list, while the passive form changes this. For verbs such as open, however, arguments which perform different functions can occupy the subject position using the same form of the verb. We will term this the external argument specification, which must somehow be given by a word entry. The other side of this is knowing how the complements of a verb are syntactically realized; this is termed the subcategorization problem. For example, give has two complement types, NP NP and NP PP, as shown in (4a) and (4b). That is, in one case, the VP contains an NP followed by an NP, and in the other, an NP followed by a PP.

(4) a. John gave Mary the book.
b. John gave the book to Mary.

In addition to these specifications, for some arguments it will be necessary to indicate certain selectional restrictions. For example, the verb put, as used in Put the book on the shelf, requires that the third argument be realized as a PP, and furthermore that this preposition be a locative ([+LOC]).
Likewise, many verbs of transfer, such as give, send, and present, require the indirect object to be marked with the preposition to ([+to]) if it follows the direct object in the syntax (cf. (4b)). This information must be associated with the verb somehow, presumably by a lexical specification of some sort.

What we have discussed so far is only the simplest syntactic information about a verb. The real difficulty comes when we try to give an account of the semantics. This is typically achieved in natural language processing systems by associating the arguments with "named relations" such as Agent, Patient, Instrument, Actor, etc. These are represented as case roles or thematic roles.1 With this additional information, the lexical entry for give, for example, will now look something like (5), ignoring the finer details.

(5) give(x, y, z): x = Agent, y = Patient, z = Goal, x = External. If x = External then z = [+to].

The information we have assembled thus far will still not be rich enough for the deep understanding or fluent generation of texts. For each lexical item, there are associated inferences which must be made, and those that can be made about a certain state of affairs. One class of inferences deals with the aspectual properties associated with a verb. This identifies a verb with a particular event-type: a state, a process, or an event. For example, (6a) entails (6b), while from (7a) no such inference (7b) follows.

(6) a. John is running.
b. ⊢ John has run.
(7) a. John is drawing a circle.
b. ⊬ John has drawn a circle.

What is at work here is the fact that the meanings of certain verbs seem to entail a termination or end point, while for other verbs this is not the case. Thus, "drawing a circle" and "building a house" are events which have logical culminations, while simply "running" or "walking" do not. These types of inferences interact crucially with tense information for the analysis of larger texts and discourses. For more detail see [Pustejovsky, 1987b].

Finally, to make lexicon entries useful for performing inferences about classes and categories, it is important to know how each entry fits into a semantic hierarchy or network. Cf. [Amsler, 1980; Touretzky, 1986].

Let us now review what requirements we have placed on the lexical specification of word entries (where we limit ourselves here to verbal forms). The lexicon should specify:

1. How many arguments the verb takes (the polyadicity).
2. An indication of where each argument appears in the syntax: i. which is the external argument; and ii. what the subcategorization frames are.
3. Optionality or obligatoriness of arguments.
4. Selectional properties of the verb on its arguments; i.e. what preposition types they must appear with, in addition to semantic features such as animate, count, mass, etc.
5. The case roles of the arguments, e.g. Agent, Patient, Instrument, etc.
6. The aspectual type of a verb; i.e. whether it is a state, process, or event.
7. Categorization or type information; e.g. as expressed in a semantic hierarchy.

Having reviewed the basic needs for a NLP lexicon, we will now outline a representation framework for encoding this information; the sketch below first collects these requirements in one place.
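(A minimal Python sketch of such an entry, with field names of our own invention; this is an illustration of requirements 1-7, not TULLY's actual format:)

    from dataclasses import dataclass, field

    @dataclass
    class LexEntry:
        """One verb sense, carrying the seven kinds of information above."""
        pred: str                     # predicate name
        args: list                    # argument variables (polyadicity)
        external: str                 # which argument is external
        subcat: list                  # complement frames, e.g. 'NP NP'
        selection: dict               # per-argument features / prep markings
        roles: dict                   # case roles per argument
        aspect: str                   # 'state' | 'process' | 'event'
        isa: list = field(default_factory=list)   # semantic-hierarchy parents

    give = LexEntry(
        pred="give", args=["x", "y", "z"], external="x",
        subcat=["NP NP", "NP PP"],
        selection={"z": "[+to]"},     # to-marking when z follows the object
        roles={"x": "Agent", "y": "Patient", "z": "Goal"},
        aspect="event",
        isa=["transfer-verb"],
    )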
In this section we outline the semantic framework which defines our domain for lexical acquisition. The model we have in mind acts to constrain the space of possible word meanings, by restricting the form that lexical decomposition can take. In this sense it is similar to the theory of lexical decomposition in [Dowty, 1979], but differs in some important respects.2 Lexical decomposition is a technique for assigning meanings to words in order to perform inferences between them. Generative semantics [Lakoff, 1972] took this technique to its limit in determining word semantics, but failed, however, to provide an adequate theory of meaning. In the AI literature, primitives have been suggested with varying degrees of success [Schank, 1975].

The model we present here, the Extended Aspect Calculus, is a partial decomposition model, indicating only a subset of the total interpretation of a lexical item. Yet, as we see in the next section, this partial specification is very useful for helping structure dictionary entries. For a more detailed description of our model, see [Pustejovsky, 1987b].

In the current linguistic literature on case roles or thematic relations, there is little discussion of what logical connection exists between one θ-role and another. The most that is claimed is that there is a repertoire of thematic relations, Agent, Theme, Patient, Goal, Source, Instrument, Locative, Benefactive, and that every NP must carry at least one role. It should be remembered, however, that thematic relations were originally conceived in terms of the argument positions of semantic predicates such as CAUSE and DO, present in the decomposition of verbs.3 For example, the causer of an event (following [Jackendoff, 1976]) is defined as an Agent: CAUSE(x,e) → Agent(x). Similarly, the first argument position of the predicate GO is interpreted as Theme, as in GO(x,y,z). The second argument here is the SOURCE and the third is called the GOAL.

Our model is a first-order logic that employs special symbols acting as operators over the standard logical vocabulary. These are taken from three distinct semantic fields: causal, spatial, and aspectual. The predicates associated with the causal field are: Causer (C1), Causee (C2), and Instrument (I). The spatial field has two predicate types: Locative and Theme. Finally, the aspectual field has three predicates, representing three temporal intervals: t1, beginning; t2, middle; and t3, end. From the interaction of these predicates all thematic types can be derived.4

Let us illustrate the workings of the calculus with a few examples. For each lexical item, we specify information relating to the argument structure and mappings that exist to each semantic field; we term this information the Thematic Mapping Index (TMI).5 Part of the semantic information specified lexically will include some classification into one of the event-types (cf. [Kenny, 1963], [Vendler, 1967], [Ryle, 1949], [Dowty, 1979], [Bach, 1986]; the taxonomy distinguishes, among others, protracted and momentaneous events). For example, the distinction between state, activity (or process), and accomplishment can be captured in the following way.

1I will use these terms interchangeably, although there are, strictly speaking, technical distinctions made by many people. For further discussion of case roles, see Fillmore's original work on the matter [Fillmore, 1968], as well as Gruber's treatment of thematic relations [Gruber, 1965], as extended by [Jackendoff, 1972].
2Space does not permit us to compare our approach here with that of [Dowty, 1979]; see [Pustejovsky, 1987b] for a full discussion.
3Cf. [Jackendoff, 1972, 1976] for a detailed elaboration of this theory.
4The presentation of the theory is simplified here, as we do not have the space for a complete discussion. See [Pustejovsky, 1987b] for discussion.
5[Marcus, 1987] suggests that the lexicon have some structure similar to what we are proposing. He states that a lexicon for generation or parsing should have the basic thematic information available to it.
A state can be thought of as reference to an unbounded interval, which we will simply call e_s; that is, the state spans this interval.6 An activity or process can be thought of as referring to a designated initial point and the ensuing process; in other words, the situation spans the two intervals t1 and t2. Finally, an event can be viewed as referring to both an activity and a designated terminating interval; that is, the event spans all three intervals, t1, t2, and t3.

We assume that part of the lexical information specified for a predicate in the dictionary is a classification into some event-type as well as the number and type of arguments it takes. For example, consider the verb run in sentence (8), and give in sentence (9).

(8) John ran yesterday.
(9) John gave the book to Mary.

We associate with the verb run an aspect structure P (for process) and an argument structure of simply run(x). For give we associate the aspect structure A (for accomplishment), and the argument structure give(x,y,z). The Thematic Mapping Index for each is given below in (10) and (11).

(10) run = [TMI diagram: x linked to Theme and C1, spanning the intervals t1 and t2]
(11) give = [TMI diagram: x linked to C1 and to the L at t1, y linked to Theme, z linked to the L at t3, the event spanning t1, t2, and t3]

The sentence in (8) represents a process with no logical culmination, and the one argument is linked to the named case role, Theme. The entire process is associated with both the initial interval t1 and the middle interval t2. The argument x is linked to C1 as well, indicating that it is an Actor as well as a moving object (i.e. Theme). This represents one TMI for an activity verb.

The structure in (9) specifies that the meaning of give carries with it the supposition that there is a logical culmination to the process of giving. This is captured by reference to the final subinterval, t3. The linking between x and the L associated with t1 is interpreted as Source, while the other linked arguments, y and z, are Theme (the book) and Goal, respectively. Furthermore, x is specified as a Causer and the object which is marked Theme is also an affected object (i.e. Patient). This will be one of the TMIs for an accomplishment.

In this fashion, we will encode the thematic and aspectual information about lexical items. This will prove to be a useful representation, as it allows for hierarchical organization among the indexes and will be central to our learning algorithm and the particular way specialization and generalization is performed on conceptual units. Essentially, the indexes define nodes in a tangled specialization tree, where the more explicitly defined the associations for a concept are, the lower in the hierarchy it will be.7 The most general concept types will be those indexes with a single link to one argument.

6This is a simplification of our model, but for our purposes the difference is moot. A state is actually interpreted as a primitive homogeneous event-sequence, with downward closure. Cf. [Pustejovsky, 1987b].
7[Miller, 1985] argues that something like this is psychologically plausible, as well.
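The TMI lends itself to a direct encoding. Here is a Python sketch, with link tuples and interval assignments of our own choosing, of the run and give indexes and of the "more explicit sits lower in the tangled tree" ordering:

    # A TMI as a set of (argument, field predicate, interval) links.
    RUN  = {("x", "Theme", "t1"), ("x", "C1", "t1")}
    GIVE = {("x", "C1", "t1"), ("x", "L", "t1"),   # Causer and Source
            ("y", "Theme", "t2"),                   # the object moved
            ("z", "L", "t3")}                       # Goal at the end point

    def specializes(tmi_a, tmi_b):
        """tmi_a sits below tmi_b in the hierarchy if it carries all of
        tmi_b's associations and more."""
        return tmi_b < tmi_a    # proper-subset test on the link sets

    assert specializes(GIVE, {("x", "C1", "t1")})   # give is more specific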
Before we describe the knowledge acquisition algorithm, we must define how to build the environment necessary for acquiring lexical information for a particular dictionary ([Ingria, 1986], [Calzolari, 1984], [Amsler, 1984]). Although the specifics of the environment-setting will vary from dictionary to dictionary and from task to task, we are able to give a set of parameterizable features which can be used in all cases.

For each dictionary and task, the set of semantic primitives must be selected and specified by hand. These include all the entries corresponding to the operators from Section 3, including move, cause, become, be, as well as aspectual operators such as begin, start, stop, etc.

For each primitive, we associate a thematic representation in terms of our model structure. For example, the particular word(s) in the dictionary that will refer to cause will have as their interpretation in the model the following partial thematic mapping index:

cause = [partial TMI: cause(x, y); x linked to C1 (the causer), y linked to C2 (the causee)]

This says that if cause is part of the definition of some term, we can assume that there are at least two argument places in that verb, and that one represents the causer and the other the causee. As another example, consider an entry with the primitive move in its definition. We can assume, in this case, that there is some argument which will associate with the Theme role.

move = [partial TMI: move(x); x linked to Theme]

Similar structures, what we term partial TMIs, can be defined for each primitive in the representation language.9

In addition to associating specific words in the dictionary with primitives in the representation language, we need to define the criteria by which we complete the association between arguments and thematic types. This is tantamount to claiming that case (or thematic) roles are simply sets of inferences associated with an argument of a verb.10 For example, suppose we want to know whether the first argument of some verb should be assigned the role of Agent or Instrument of causation. We need to determine whether the argument is animate and directly the cause. These features will be assigned in the interactive session of the acquisition phase, and the associated role(s) will be assigned. Similar tests are used to determine whether something is a moving object, the source of an action, etc. Aspectual tests will determine whether something is an activity or accomplishment, etc. These tests are heuristics and can be modified as necessary by the user for his purposes.

Finally, another parameterizable component of the system deals with extracting specialized information from the dictionary entry. These are called thematic specialists. For example, consider a definition that describes some motion and contains the phrase with the arm. This is an example where additional information specifies incorporated roles ([Gruber, 1965]): the instrument of the action is restricted to the arm. Another example would be a locative phrase, specifying a restriction on the location of the action.

9 One problem that we recognize, but have not addressed here, is multiple word sense. [Amsler, 1980] makes the point quite clear that to fully disambiguate each entry, and the entries defined in terms of them, is a major task.
10 This is hinted at in Miller and Johnson-Laird's 1976 pioneering work, and has recently been suggested, in a somewhat different form, by Dowty and Ladusaw (1986).
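As a concrete illustration, the sketch below encodes partial TMIs for the primitives cause and move and merges them for a word defined in terms of both. All names and the merging rule are our assumptions; in particular, identifying the x of move with the x of cause is a simplification that the interactive credit-assignment session described above would actually decide.

# A sketch of "partial TMIs" attached to primitives, and of how the
# primitives in a dictionary definition seed its thematic representation.

PARTIAL_TMIS = {
    # cause(x, y): at least two argument places, a Causer and a Causee.
    "cause": {"args": ["x", "y"], "links": {"x": ["C1"], "y": ["C2"]}},
    # move(x): some argument is linked to the Theme role.
    "move":  {"args": ["x"], "links": {"x": ["Theme"]}},
}

def seed_tmi(definition_primitives):
    """Merge the partial TMIs of every primitive found in a definition.
    Sharing variable names across primitives is a simplification; the
    interactive session decides the real argument identifications."""
    merged = {"args": set(), "links": {}}
    for prim in definition_primitives:
        partial = PARTIAL_TMIS[prim]
        merged["args"].update(partial["args"])
        for arg, roles in partial["links"].items():
            merged["links"].setdefault(arg, set()).update(roles)
    return merged

# 'propel' is defined in terms of both cause and move:
print(seed_tmi(["cause", "move"]))
# e.g. {'args': {'x', 'y'}, 'links': {'x': {'C1', 'Theme'}, 'y': {'C2'}}}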
Each thematic specialist looks for particular patterns in the definition and proposes the associated thematic linking to the user in the interactive session.

Now that we have described how to set up the dictionary environment, we turn to the acquisition algorithm itself. We will illustrate each step in the procedure with an example run.11

(1) Select a Primitive: choose a primitive, p, from the set of primitives, P, in the representation language. We begin with the intersective primitives cause and move.12 That is, we are narrowing our learning mechanism to words whose entries contain both of these primitives.

(2) Form Candidate Set: apply a modified head-finding algorithm (cf. [Chodorow et al., 1985]). This returns a set C of words with the primitives in their definitions, namely: cause ∪ move = {turn, shake, propel, walk}.

(3) Place this set C into a tangled hierarchy, dominated by both cause and move.

(4) Select a single candidate: from C pick a candidate, c. We select propel.

(5)-(6) Interactive Credit Assignment: the TMI is completed interactively, with the user acting as the credit assigner. Typical questions include: Is x animate?, Does y move?, etc. The new information includes the aspectual class, selectional information, and of course the complete thematic mapping. The system concludes that x is the first argument, that y and z are identical arguments, and that the aspectual class is activity. The system returns the total TMI, TMI(c).

(7) Apply the minimal TMI to the complete set C: return to (6). This applies the minimal thematic mapping over the set C. Check results interactively with the user as critic. The minimal thematic mapping for a word in this set is:

c ∈ C = [diagram: arguments x, y; y linked to Theme]

(8) Apply Thematic Specialists to C: these extract incorporated information from the entries. For example, in the definition of throw, we encounter after the head propel the phrase with motion of the arm. Two thematic specialists operate on this phrase, both for incorporation of the Instrument and for a secondary Theme, or moving object, i.e., the arm. This knowledge is represented in the Thematic Mapping Index explicitly, as a link to I for the Instrument.

(9) Update Primitive Set: add the words in C to the set P, forming a derived set, P1.

(10) Return to Step (1): repeat with a primitive selected from P1. At this point, the system can select a primitive or a derived primitive, such as propel, to do specialization over. Suppose we select propel. This will define the set containing throw, etc. The hierarchy being formed as a result will embed this tree within the tree formed from the previous run. We will discuss this process at more length in the final paper.

11 We have based our environment on hand-coded definitions from the American Heritage Dictionary [AHD, 1983]. Throughout the example, we have shortened the output.
12 There is good reason to begin the search with entries containing two primitive terms. First, this will generally pull out two-place verbs. Later, when a single primitive is used, these verbs will be defined already.
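The following sketch renders the ten-step loop schematically. The toy dictionary, the function standing in for the interactive credit-assignment session, and the data layout are all illustrative assumptions; thematic specialists (step 8) are omitted.

# A schematic rendering of the acquisition loop (steps 1-10 above).

def acquire(seed_primitives, dictionary, assign_tmi):
    """dictionary maps a word to the set of primitives in its definition;
    assign_tmi stands in for the interactive credit-assignment session."""
    hierarchy = {}                       # word -> TMI (steps 3 and 7)
    worklist = [set(seed_primitives)]    # step 1: intersective primitives
    done = set()
    while worklist:
        ps = frozenset(worklist.pop())
        if ps in done:
            continue
        done.add(ps)
        # step 2: candidate set -- words defined in terms of all of ps
        candidates = {w for w, prims in dictionary.items() if ps <= prims}
        if not candidates:
            continue
        sample = sorted(candidates)[0]   # step 4: pick one candidate
        tmi = assign_tmi(sample)         # steps 5-6: credit assignment
        for word in candidates:          # step 7: minimal TMI over C
            hierarchy.setdefault(word, tmi)
        # step 8 (thematic specialists) is omitted in this sketch
        for word in candidates:          # steps 9-10: derived primitives
            worklist.append({word})      # feed back into the search
    return hierarchy

toy_dictionary = {"propel": {"cause", "move"},
                  "walk":   {"cause", "move"},
                  "throw":  {"propel"}}
print(acquire({"cause", "move"}, toy_dictionary,
              lambda w: {"aspect": "activity", "args": ["x", "y"]}))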
6. Related Research and Conclusion

In this paper we have tried to motivate a particularly rich lexical structure for dictionary entries. Given this representation in terms of the Extended Aspect Calculus, we presented a knowledge acquisition system that generates robust lexical structures from Machine Readable Dictionaries. The knowledge added to an entry includes a full specification of the argument types, selectional properties, the aspectual classification of the verb, and the thematically incorporated information. The information we have hand-coded is richer than that provided by the Longman LDOCE ([Procter, 1978]), but it is quite feasible to automate the acquisition of this information with our system.13

We motivated the thematic mapping index as a useful way to generalize over argument structures and conceptual types. For example, it will be a convenient representation for lexical selection in the generation process (cf. [Pustejovsky et al., 1987]).

There are several problems we have been unable to address. First, how does the system determine the direction of the search in the acquisition mode? We suggested some heuristics in section four, but this is still an open question. Another issue left open is how general the thematic specialists are and can be. Eventually, one would like such information-extractors to be generalizable over different MRDs.

Finally, there is the issue of how this work relates to the machine learning literature. The generalization performed in steps (5) and (7) over the entire word set constitutes a conservative induction step, with interactive credit assignment. The semantic model limits what a possible word type can be, and this in effect specializes the most general rule descriptions, increasing the number of maximally-specific specializations. A more automated lexicon construction algorithm is our current research goal. We will also compare our work with that of [Haas and Hendrix, 1983], [Ballard and Stumberger, 1986], as well as [Anderson, 1986] and [Lebowitz, 1986]. As is, however, we hope the model here could act in conjunction with a system such as Amsler's [Amsler, 1980] for improved performance.

References

[AHD, 1983] The American Heritage Dictionary, Dell Publishing, New York, 1983.
[Amsler, 1980] Amsler, Robert, "The Structure of the Merriam-Webster Pocket Dictionary", Ph.D. Thesis, University of Texas, Austin, Texas, 1980.
[Atkins et al., 1986] Atkins, Beryl T., Judy Kegl, and Beth Levin, "Explicit and Implicit Information in Dictionaries", CSL Report 5, Princeton University, 1986.
[Bach, 1986] Bach, Emmon, "The Algebra of Events", in Linguistics and Philosophy, 1986.
[Chodorow et al., 1985] Chodorow, Martin S., Roy J. Byrd, and George E. Heidorn, "Extracting Semantic Hierarchies from a Large On-Line Dictionary", in Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, IL, 1985.
[Cumming, 1986] Cumming, Susanna, "The Distribution of Lexical Information in Text Generation", presented at the Workshop on Automating the Lexicon, Pisa, 1986.
[Dowty, 1979] Dowty, David R., Word Meaning and Montague Grammar, D. Reidel, Dordrecht, Holland, 1979.
[Ingria, 1986] Ingria, Robert, "Lexical Information for Parsing Systems: Points of Convergence and Divergence", Workshop on Automating the Lexicon, Pisa, 1986.
[Jackendoff, 1972] Jackendoff, Ray, Semantic Interpretation in Generative Grammar, MIT Press, Cambridge, MA, 1972.
[Pustejovsky, 1987] Pustejovsky, James, "The Extended Aspect Calculus", submitted to a special issue of Computational Linguistics, 1987.
[Pustejovsky et al., 1987b] Pustejovsky, James, and Sergei Nirenburg, "Lexical Selection in the Process of Generation", to appear in Proceedings of ACL, 1987b.
1987
101
552
Ambiguity Procrastination

Elaine Rich, Jim Barnett, Kent Wittenburg, and David Wroblewski
MCC
3500 West Balcones Center Drive
Austin, Texas 78759

Abstract

In this paper we present the procrastination approach to the treatment of ambiguity, particularly in the context of natural language interfaces. In this approach, we try not to notice ambiguity unless we have the knowledge to resolve it available. In order to support this approach, we have developed a collection of structures that describe sets of possible interpretations efficiently. We have implemented this approach using these structures in Lucy, an English interface program for knowledge-based systems.

In this paper, we will describe our work on Lucy, a program whose specific goal is to be a portable English front end subsystem for knowledge-based, interactive computer systems. The entire design of this system has been influenced by the decision to procrastinate resolving all kinds of ambiguity for as long as possible. The ambiguity procrastination approach is motivated by two concerns:

• The desire to minimize search by avoiding branching whenever possible.
• The desire to simplify the semantics of the processing routines by making them monotonic. By this we mean that we try to avoid making assertions that may need to be changed later.

Although Lucy is designed to be a portable system, it is currently implemented as a front end to a help system for a statistics program with an icon-based interface. Processing in Lucy is divided conceptually into the following parts: morphological analysis, syntactic analysis, semantic analysis, and discourse processing.1 It happens that these parts occur in this order as well, although there is no real commitment to that in the design of the system, and we intend to explore more flexible control structures. Interestingly, many of the problems that have traditionally befallen such lock-step language processing systems become less serious when the philosophy of ambiguity procrastination is followed carefully.

Ambiguity procrastination forces a novel treatment of most components of the language processing task. In particular, it forces a clear articulation, for each such component, of exactly what information that component contributes to the final interpretation. In the rest of this paper we will describe through examples the way that we have structured the main components of Lucy in order to support the notion that decisions should be made only when justified or necessary.

The first sentence we will consider is, "I compute the mean price data." Following morphological processing, which in this example is trivial, the sentence is parsed. The output of Lucy's parser is a description of the main constituents of the sentence. It is not a complete structural description, since such a description cannot be built using only syntactic knowledge. Rather than trying to exploit other modules that contain the required nonsyntactic knowledge (as is done, for example, in such systems as [Woods 80]), Lucy's parser simply does not attempt to form a complete structural description. A simplified version of the result of parsing this sentence in Lucy is shown in Figure 1. Lucy's parser is a best-first chart parser built on the unification formalism of [Shieber 84] ([Wittenburg 86a]). The grammatical framework it uses is a form of combinatory categorial grammar [Steedman 85, Wittenburg 86b].

1 In this paper we will focus primarily on syntactic and semantic analysis, because they are the best-developed parts of the system.
For this example, the parser determines that there is a verb compute, whose subject is I and whose direct object is the noun compound the mean price data. The parser does not attempt to determine "case role" assignment. Nor does it attempt to assign an internal structure to the noun compound. Instead, it represents the compound with the noncommittal structure of modifiers (mods) and domains of modification (doms):

[mod: [lex: mean]
 dom: [mod: [lex: price]
       dom: [lex: data]]]

This structure is interpreted by later parts of Lucy as representing the entire family of parses that would be found if all bracketings were enumerated. To implement this approach, the grammar must be designed to guarantee that only this structure can be built. If the low attachment structure were built, it would describe a different and smaller family of interpretations. In the Lucy grammar, this structure cannot arise because nouns cannot combine directly. To form a noun compound, a unary rule (of the form x → y) must convert the first noun into an adjective. This unary rule only applies to single-word constituents, which forces the right-branching structure shown above.

[cat: s
 pred: [cat: vp
        main-verb: [lex: compute]
        compls: [obj: [cat: np
                       spec: [lex: the]
                       head: [cat: cn
                              mod: [lex: mean]
                              dom: [cat: cn
                                    mod: [lex: price]
                                    dom: [cat: cn
                                          lex: data]]]]]]
 subj: [cat: np
        lex: I]]

The Simplified Output of the Parse
Figure 1: "I compute the mean price data."

1 ((I E-155 X-156)
2  (*COMPOUND E-155 X-158
3   ((MEAN E-155 Y-162)
4    (PRICE E-155 X-159)
5    (DATA E-155 Y-160)))
6  (COMPUTE E-155 X-156 X-158)
7  (THE E-155 X-158)
8  (*QUANT-LIST (E-155 X-156 X-158)))

The Initial Logical Form
Figure 2: "I compute the mean price data."

In this structure, it is significant that the object of the verb is the entire noun compound, not any one of its pieces. This matters because it can happen, and does in this example, that the referent of a compound noun phrase is not a thing of the type given by the head noun. The final interpretation of the phrase mean price data will turn out to be a mean whose input was price data.

The result of the parsing process is passed to semantic interpretation. The goal of this step is to map from a string of English words into a set of assertions that are stated in terms of the knowledge base used by the backend program to which we are providing an interface. Thus it is this step that provides the bridge between English and domain knowledge. The first step is to convert the graph produced during parsing into a logical form. This Initial Logical Form (or ILF) is similar to the ILF described in [Hobbs 85]. It contains the information from the parser output that may play a role in semantic interpretation.2 Specifically, it enumerates the entities contained in the sentence3 as well as the surface functional relationships among those entities. The production of ILF from the parse graph is straightforward and uses no additional knowledge (except for handling idioms, which we will discuss in the next example). The ILF for the sample sentence is shown in Figure 2.

The predicates in the ILF are still the English words from the sentence, or they are special system predicates, which are prefixed with *. The first argument of every predicate is a referent that corresponds to the event or state being described in the clause from which that predicate was derived. In the example, this is E-155.

2 There is, in addition, a separate syntax structure that provides additional information that is necessary for anaphora resolution, but we will ignore that here.
3 This list of entities includes a set of discourse referents in the sense of [Kamp 81] as well as a set of entities that do not have that status but that will eventually correspond to knowledge-base concepts.
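To illustrate how the noncommittal mod/dom encoding of the noun compound stands for a whole family of parses, here is a small sketch. The representation as nested Python dicts is our assumption; the bracketing enumeration simply makes explicit the family that Lucy's structure defers choosing among.

# A sketch of the noncommittal modifier/domain structure and of the
# family of bracketings it stands for. The data layout is assumed.

compound = {"mod": "mean", "dom": {"mod": "price", "dom": "data"}}

def leaves(node):
    """Flatten the right-branching mod/dom chain into the noun sequence."""
    if isinstance(node, str):
        return [node]
    return [node["mod"]] + leaves(node["dom"])

def bracketings(words):
    """All binary bracketings of a noun sequence: the family of parses
    that the single mod/dom structure represents."""
    if len(words) == 1:
        return [words[0]]
    out = []
    for i in range(1, len(words)):
        for left in bracketings(words[:i]):
            for right in bracketings(words[i:]):
                out.append((left, right))
    return out

print(bracketings(leaves(compound)))
# [('mean', ('price', 'data')), (('mean', 'price'), 'data')]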
Roughly, the important assertions in the ILF can be read as follows:

2-5: There is a noun compound, labelled X-158, composed of three parts: mean, price, and data.
6: The event of the sentence has been described by the verb compute. The subject (represented as the second argument of the assertion) of the verb is I (X-156), and the direct object of the verb is the compound noun phrase mean price data (X-158).

The second step of semantic interpretation in Lucy uses the expressions in the ILF as the basis for constructing the Final Logical Form (FLF). The FLF contains a description of the meaning of the sentence in terms defined within the backend knowledge base. The FLF construction algorithm selects expressions one at a time from the ILF. When an expression is chosen, the system's action is determined by the predicate in the expression. If the predicate is an English word, then the predicate is looked up in the semantic lexicon, which is where the connection between English words and objects in the backend knowledge base is defined. Each entry consists of a set of constraints that a meaning of a word imposes on the interpretation of a sentence containing the word. For example, the entry for the predicate COMPUTE is:

(compute x y e) → (isa x computational-process)
                  (has-agent x y)
                  (person y)
                  (output x z)
                  (computable-object z)

These constraints are added to the constraints that have been imposed on the interpretation of the sentence by the ILF forms that have already been processed.

If an English word is ambiguous with respect to the backend knowledge base, then there may be more than one set of constraints that could be added to the interpretation. This may result in a branch in the interpretation process if more than one set of constraints is internally consistent.4 When search is required, a best-first search procedure is used, where best is defined by priorities such as those given to lexical entries and to common attachment structures. In this example, the word mean is ambiguous. It has two interpretations with respect to the backend system: a concept (mean) corresponding to the mathematical notion of a mean, and a concept (mean-calculation) corresponding to the particular function that computes the mean in the system being discussed. Since only the former can be computed, the correct interpretation for the word mean can be found as soon as the constraints imposed by the word compute are posted. Examples such as this point up the importance of developing powerful mechanisms for handling ambiguity when the meaning of English sentences must be defined in terms of a detailed knowledge base that contains highly differentiated, specialized concepts with respect to which English words are usually highly overloaded.

If the predicate is a special system predicate, then a system-specified action occurs. In this example, *COMPOUND does occur, and it is here that a decision about both the structural associativity and the semantic connections among the words in the noun compound must be resolved. The ambiguity that was procrastinated during parsing can now be dealt with by exploiting semantic knowledge and the mechanisms of constraint satisfaction. To determine the structure of the noun compound mean price data, Lucy does the following. Starting at the right (where the most likely head noun is), it looks up each noun in the semantic lexicon to find its meaning(s) (stated as set(s) of constraints). It then looks to see if there is any information on how the concept represented by the word combines semantically with the concepts represented by the other words. This information must be stored with individual concepts because there is no general rule for computing such semantic relations (as shown by the classic example: olive oil, corn oil, peanut oil, baby oil). This is particularly true given the necessity to map correctly into semantic relationships appropriate to some externally specified knowledge base.5

The FLF for the entire sentence is simply the union of the individual sets of constraints, sorted by the objects to which they apply, and with more general assertions eliminated if more specific ones are present. The FLF for our first example sentence is shown in Figure 3. Actually, the complete FLF must also include the results of discourse processing, which, in this example, means finding the referent for the definite noun phrase the mean price data. This can be done by using the FLF shown in the figure as input to the procedure that finds objects in the current discourse context that match the constraints given in the noun phrase.

(ASSERT NIL
  (E-155 (ISA COMPUTATIONAL-PROCESS) (HAS-AGENT X-156) (OUTPUT X-158))
  (X-158 (ISA MEAN) (SPEC DEF) (INPUT Y-160))
  (Y-160 (ISA DATASET) (NAMED-BY-STRING X-159))
  (X-159 (ISA STRING-CONSTANT) (NAME PRICE))
  (X-156 (ISA USER) (CURRENT-USER)))

The Final Logical Form
Figure 3: "I compute the mean price data."

4 It is because of the possibility of branching that this algorithm is not purely a constraint satisfaction process such as that described in [Mellish 85]. It is, however, an instance of the generalized constraint satisfaction method described in [Rich 88]. In addition, we are exploring ways of decreasing branching at this point through the use of a richer constraint language.
5 This approach contrasts in spirit with approaches such as [Isabelle 84], in which an attempt is made to find a more principled basis for semantic composition.
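As a toy illustration of the constraint-posting step just described, the following sketch branches on the two senses of mean and shows how a computability constraint (such as the one contributed by compute) prunes one branch. The lexicon contents and the consistency test are invented simplifications, not Lucy's actual knowledge base.

# A toy sketch of best-first constraint posting during FLF construction.

SEMANTIC_LEXICON = {
    "mean": [{"isa": "mean"},               # the mathematical concept
             {"isa": "mean-calculation"}],  # the VSTAT function
}
COMPUTABLE = {"mean"}   # assumed: only the mathematical mean is computable

def post(interp, referent, word, must_be_computable=False):
    """Return every internally consistent extension of the current
    interpretation; more than one result is a branch in the search."""
    extensions = []
    for constraints in SEMANTIC_LEXICON[word]:
        merged = {**interp.get(referent, {}), **constraints}
        if must_be_computable and merged["isa"] not in COMPUTABLE:
            continue    # clashes with (computable-object z) from compute
        extensions.append({**interp, referent: merged})
    return extensions

# Posting 'mean' alone branches; posting it under compute's constraint
# (its output must be a computable object) leaves a single reading.
print(len(post({}, "X-158", "mean")))                       # 2
print(post({}, "X-158", "mean", must_be_computable=True))   # one reading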
The second sentence we will consider is, "The system looks up the word for the user." A simplified form of Lucy's parse of the sentence is shown in Figure 4(a). There are two structural ambiguities in this sentence:

• The attachment of the prepositional phrase for the user.
• The choice between look up as a two-word verb with a direct object, versus look as an intransitive verb followed by the prepositional phrase up the word.

Since decisions on these issues cannot be made using purely syntactic information, they are not made during parsing in Lucy. Postmodifier structures (such as prepositional phrases and relative clauses) are represented using a modifier/domain structure of the form:

[MOD: <postmodifier phrase>
 DOM: <tree of structure within which MOD attaches>]

This structure will be interpreted during ILF construction as allowing the modifier to attach anywhere on the rightmost branch of the structure given in DOM.6 We guarantee that this is the only structure that the parser can build by blocking the construction of any modifier that is itself modified.

Two-word verbs are handled by designing a grammar that guarantees that only a single interpretation can be produced. The word up will be treated as a preposition and not as a particle except when the particle interpretation is unambiguous (as it would be in look the word up).

6 Actually, the situation may be a bit more complicated in the case of extraposition [Wittenburg 87].
The parse result containing the prepositional phrase up the word will not be interpreted during ILF construction as actually committing to that interpretation, however.7

7 The hypothesis lurking behind this design is that it may be possible for a grammar used only for a first pass in processing to be unambiguous in its assignment of initial bracketings to strings. Subsequent processes that make use of semantic (and possibly discourse) information would then branch in assigning interpretations to the initial bracketing. While basic attachment ambiguities are relatively amenable to such a treatment, we have found the interactions to be complicated in other cases. In the case at hand, for instance, there are interactions in the grammar between the various complement structures possible for look, the ambiguity between up as a preposition or a particle, and the attachment of the preposition. It is difficult, if not impossible, to maintain monotonicity in the system when interactions are this complex and at the same time maintain traditional assumptions about syntactic bracketing. The working version of Lucy attempts to maintain an unambiguous grammar at the cost of a rather strained relation between syntactic bracketings and semantic interpretations. Forthcoming work will discuss the pros and cons of such an approach.
The *OR represents the choice between the idiomatrc (DICT-S verb phrase. CH) and the literal (LOOK) meanings of the Notice that the choice of an attachment point for the prepositional phrase for’ the user has still not been made. Instead, each of the verb phrase alternatives contains an *ATTaChment list, containing, in the order in which they occurred in the sentence, attachment points and attachable structures. In the case of the nonidiomatic reading, *ATT contains the referent corresponding to the entire event and a description of each of the attachable things in the sentence (namely the two prepostional phrases). This list will be interpreted as allowing the up phrase to attach only to the event and as allowing the for phrase to attach either to the event or to word Given the idiomatic reading (in which look up is a two word verb), there is only one attachable thing, which corresponds to the preposrtional phrase for the user. But *ATT also contains the referent corresponding to the entire event and the referent corresponding to word, since both of these are P ossible prepositiona P hrase attachment points for the final (for the user). Because the attachments o the prepositional been determined, it is not possib e to speci P hrases have not yet arguments of the relations 2 bboth HI: P repositions. represente The dummy argument ARGO is use J in the LF to represent the unknown argument. It will be bound when an attachment point for the phrase has been selected. Just as in the previous exam le, the predicates in the ILF still correspond to the Eng ish words in the surface P string. This means that some of the predicates carry little information on their own. In this sentence, an example of such a “vague predicate” [Martin 831 is FOR, which may end up specifying any of a large numbe+hyj concrete relationships between its arguments. choice, though, themselves. must depend on the arguments [cat: s mod: [eat: pp prep: [lex: for] pobj : [cat: np spee: [lex: the] head: [cat: cn lax: user] ] ] dom: [cat: s mod: [cat: pp prep: [lex: up] pobj: [cat: np spae: [lex: the] head: [cat: cn lex: word]]] dom: [cat: s pred: [lex: look] subjr [c&z: np spee : [ Pex : the] head:[lex:system]]]]] (a) The Simplified Output of the Parse ((SYSTEM E-171 X-176) (WORD E-171 X-174) (USER E-171 X-172) (*OR ((LOOK E-171 X-176) (*ATT (E-171) (UP E-171 ARG1 X-17 (FOR E-171 ARG1 X-1 ((DICT-SEARCH E-17.3 X-176 X- (*ATT (E-171) X-l (FOR E-171 Gl X-172))))) (THE E-171 X-176) (THE E-171 X-174) (THE E-171 X-172)) (*QUANT-LIST (E-171 X-176 X-174 X-172)) (b) The Initial Logical Form (ASSERT NIL (E-171(1 CH-FOR)(AGENT X-176) (X-172;: -172) (S&7ECT X-174)) WSER) (SPEC DEF)) (X-176(ISA PRGRM) VSTAT) ) (X-174(ISA LEX-IT (c) The Final Logical Form “The system looks up the word for the user.” The FLF (shown in Figure 4(c)) is built from the ILF using the best-first constraint postin P 1 rocedure described for the first example sentence. n t e current implementation, branching occurs as a result both of the *OR and of the *ATT. However, there may be ways of avoiding at least the latter of these through the use of an appropriate constraint language. 574 fUah.wal Language The third sentence we will consider is, “Every command computes a function.” The interesting ambiguit in this sentence is the scope of the every. -7 uantifier he FLF for this sentence is shown in ?- of wide igure 5. It represents an assi nment sco e to the quantifier evew, with t i! 
The third sentence we will consider is, "Every command computes a function." The interesting ambiguity in this sentence is the scope of the quantifier every. The FLF for this sentence is shown in Figure 5. It represents an assignment of wide scope to the quantifier every, with the result that the referent of a function must depend on a particular value of the referent of command. This is indicated by the F assertion that is made about the object X-148. This assertion states that the referent of X-148 depends on (is a function of) the referent of X-146. This reading was simply chosen as the default reading. This mechanism for choosing a representation is not very interesting. But the simple mechanism for representing the choice is interesting, among other reasons, because it can also be used to represent the fact that no choice has yet been made. In this case, no F assertions are specified.

(ASSERT NIL
  (X-148 (ISA FUNCTION) (F X-146))
  (X-146 (ISA USER-OPERATOR) (Q EVERY)))

The Final Logical Form
Figure 5: "Every command computes a function."

Representing an incomplete assignment of quantifier scope has been a problem for other natural language understanding systems [Hobbs 83, Woods 77]. The approach used in Lucy is an attempt to satisfy the major requirements for an acceptable representation, namely that it should support reasoning, that it should be easily modifiable as new information is obtained, and that it should allow a noun phrase such as a function to remain not explicitly quantified. Instead, existential quantification will arise whenever no other quantification applies. Thus a late commitment on this question, too, is possible. This explains the lack of an explicit quantificational statement in the FLF corresponding to the noun phrase a function.

The goal of this paper has been to show how a commitment to the principle of ambiguity procrastination can shape the design of a natural language understanding system. Implementing this commitment requires a clear articulation of the contribution of each part of the understanding system. It also requires the development of a family of representational and processing techniques that support the manipulation of incomplete structures. Some of the techniques that Lucy uses to do this have been derived from other work in this area. See, for example, [Church 80], [Church 82], [Marcus 83], [Pereira 83] for discussions of ways to represent specific kinds of syntactic ambiguity. What we have tried to do in the design of Lucy is to build on these techniques in a unified way to reduce the overall complexity of the language understanding process.

What we have actually succeeded in doing is to procrastinate several kinds of ambiguity through syntactic processing so that they do not show up until semantic processing time, when the knowledge to deal with them is available. Unfortunately, in the current system, it is often the case that branching does then occur. Two comments are worth making on this, since one might ask the question, "What have you gained if you eventually branch anyway?" It is possible (although we have not yet done so) to reduce the branching in such cases through the use of one vague interpretation rather than a set of specific ones. For example, rather than listing the alternative exact interpretations of the word mean, we could have stated only that some kind of averaging is involved, leaving the addition of the specific facts (namely the choice between a mathematical concept and a VSTAT function) to be added by some other part of the sentence (such as the verb compute). If the information required to choose among the detailed meanings must come from some other part of the sentence anyway, then it is unnecessary to add that information twice.8 If the entire search process within the logical assertion space is viewed as one of constraint satisfaction, then this follows naturally.
8 Another way of saying this is that, in a constraint satisfaction system, every inconsistency that arises during the solution of an initially consistent problem corresponds to a situation in which some module made an unjustified commitment to something. A good way to improve the performance of such a system is to eliminate such early commitments.

These two responses taken together suggest that if syntactic processing can be unambiguous, and if the right constraints can be articulated for semantic and pragmatic processing, then the total branching level may be able to be reduced. Of course, an alternative that produces the same result would be to allow ambiguity to be detected during syntactic processing but to redescribe syntactic processing as yet another source of constraints that can be applied in the same space as other processing. We are investigating both of these approaches.

References

[Church 80] Church, K. On Memory Limitations in Natural Language Processing. Master's thesis, MIT, 1980.
[Church 82] Church, K., and R. Patil. Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table. Journal of Computational Linguistics 8:139-149, 1982.
[Hobbs 83] Hobbs, J. R. An Improper Treatment of Quantification in Ordinary English. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 57-63, 1983.
[Hobbs 85] Hobbs, J. R. Ontological Promiscuity. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, 1985.
[Isabelle 84] Isabelle, P. Another Look at Nominal Compounds. In Proceedings of COLING-84, 1984.
[Kamp 81] Kamp, H. A Theory of Truth and Semantic Representation. In J. Groenendijk, T. Janssen, and M. Stokhof (editors), Formal Methods in the Study of Language, Part 1. Mathematisch Centrum, Amsterdam, The Netherlands, 1981.
[Marcus 83] Marcus, M. P., D. Hindle, and M. M. Fleck. D-theory: Talking about Talking about Trees. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 1983.
[Martin 83] Martin, P., D. Appelt, and F. Pereira. Transportability and Generality in a Natural-Language Interface System. In Proceedings IJCAI-83, 1983.
[Mellish 85] Mellish, C. S. Computer Interpretation of Natural Language Descriptions. Halsted Press, New York, 1985.
[Pereira 83] Pereira, F. Logic for Natural Language Analysis. Technical Report, SRI International, 1983.
[Rich 88] Rich, E. Artificial Intelligence, Second Edition. McGraw-Hill, New York, 1988.
[Shieber 84] Shieber, S. The Design of a Computer Language for Linguistic Information. In Proceedings of COLING-84, 1984.
[Steedman 85] Steedman, M. Dependency and Coordination in the Grammar of Dutch and English. Language 61:523-568, 1985.
[Wittenburg 86a] Wittenburg, K.
A Parser for Portable NL Interfaces Using Graph-Unification-Based Grammars. In Proceedings AAAI-86, 1986. Also available as Technical Report MCC HI-179-86.
[Wittenburg 86b] Wittenburg, K. Natural Language Parsing with Combinatory Categorial Grammar in a Graph Unification-Based Formalism. PhD thesis, Department of Linguistics, The University of Texas at Austin, 1986.
[Wittenburg 87] Wittenburg, K. Extraposition from NP as Anaphora. In Syntax and Semantics, Volume 20: Discontinuous Constituencies. Academic Press, New York, 1987. Also available as Technical Report MCC HI-118-85.
[Woods 77] Woods, W. Semantics and Quantification in Natural Language Question Answering. In Advances in Computers, Volume 17, pages 1-87. Academic Press, New York, 1977.
[Woods 80] Woods, W. A. Cascaded ATN Grammars. American Journal of Computational Linguistics 6(1):1-12, 1980.
1987
102
553
Memory-Based Reasoning Applied to English Pronunciation

Craig W. Stanfill
Thinking Machines Corporation
245 First Street
Cambridge, MA 02142

Abstract

Memory-based reasoning is a paradigm for AI in which best-match recall from memory is the primary inference mechanism. In its simplest form, it is a method of solving the inductive inference (learning) problem. The primary topics of this paper are a simple memory-based reasoning algorithm, the problem of pronouncing English words, and MBRtalk, a program which uses memory-based reasoning to solve the pronunciation problem. Experimental results demonstrate the properties of the algorithm as training-set size is varied, as distracting information is added, and as noise is added to the data.

The principal operation of memory-based reasoning is retrieving "the most relevant item" from memory.1 This requires an exhaustive search which, on a sequential machine, is prohibitively expensive for large databases. The only alternative is to index the database in a clever way (e.g. [Kolodner, 1980]). No truly general indexing scheme has yet been devised, so the intensive use of memory in reasoning has not been extensively studied. The recent development of the Connection Machine2 System [Hillis, 1985] has changed this situation: a CMS is capable of applying an arbitrary measure of relevance to a large database and retrieving the most relevant items in a few milliseconds.

1 A computational measure of relevance is the essence of implementing MBR.
2 Connection Machine is a registered trademark of Thinking Machines Corporation.

The first use of memory-based reasoning has been for the inductive inference task. Given a collection of data which has been partitioned into a set of disjoint classes (the training data) and a second collection of data which has not been classified (the test data), the task is to classify the test data according to patterns observed in the training data. To date, this task has been worked on in the connectionist paradigm (e.g. backpropagation learning [Sejnowski and Rosenberg, 1986]), the rule-based paradigm (e.g. building decision trees [Quinlan, 1979]), and the classifier-system paradigm (e.g. genetic algorithms [Holland et al., 1986]).

Experiments conducted over the last year now solidly confirm the applicability of memory-based reasoning to inductive inference. A program called MBRtalk, operating within the memory-based reasoning paradigm, has demonstrated strong performance on the task of inferring the pronunciation of English words from a relatively small sample. MBRtalk infers the pronunciation of novel words, given only a dictionary of 18,098 words. On a phoneme-by-phoneme basis, it is correct approximately 88% of the time. Furthermore, performance degrades gracefully, so that the pronunciation it generates is almost always plausible.

The most intensively studied setting of the inductive inference problem occurs in the rule-based systems paradigm, where it goes under the name "similarity-based learning."3 Here it takes the form of learning a set of rules from a collection of training data. For a recent survey, see [Carbonell et al., 1983]. For more in-depth treatments, see [Michalski et al., 1983] and [Michalski et al., 1986]. There is a closely related line of research which goes under the name case-based reasoning (see, e.g., [Kolodner, 1985], [Lehnert, 1987]).
It is similar to memory-based reasoning in that recall from memory plays a role in learning, but different in that it presupposes substantial knowledge about the target domain in the form of a deductive procedure. In addition, case-based reasoning operates within the rule-based paradigm, so that whatever knowledge is extracted from cases is stored in a rule-like form.

3 There is also "model-based" learning, which depends on the learner having a substantial amount of knowledge about the problem at hand.

Inductive inference has also been studied in the connectionist paradigm. Specifically, in backpropagation learning [Sejnowski and Rosenberg, 1986], classification is accomplished by a three-layer network, with weighted links connecting adjacent layers. Learning is accomplished by running the network against the training data, generating an error signal, and adjusting link weights.

A classifier system [Holland et al., 1986] is a collection of primitive classification rules, each consisting of a condition-action pair. Initially, the system contains random classifiers. Learning takes place through an evolutionary process, typically including genetic crossover and mutation operations; the right to reproduce is governed by the success of the classifier in correctly classifying the data.

III. Pronunciation as a Test Domain

Memory-based reasoning has been tested on the pronunciation problem: given the spelling of a word, determine its pronunciation. The training data for this problem is a dictionary, and the test data is a set of words not in that dictionary. There are a number of advantages to working in this domain. First, training data is available in large quantities. Second, the domain is rich and complex, so that any inductive algorithm is sure to be tested rigorously.

Unfortunately, perfect performance is fundamentally impossible. First, there are irregularities in English pronunciation, so that some words must always be learned by rote. Second, some words have different pronunciations, depending on whether they are used as nouns or verbs:4

live = "liv" or "līv"
object = "ob'jekt" or "eb jekt'"

Third, many words have several allowable pronunciations regardless of how they are used:

amenity = "e men'e tē" or "e mē'ne tē"

Fourth, many words of foreign origin have retained foreign pronunciations:

pizza = "pēt'se" vs. fizzy = "fiz'ē"
montage = "mon täzh'" vs. frontage = "frunt'ej"

4 The phonological symbols used here correspond to common usage in dictionaries. Due to font limitations, the symbol 'e' is used to stand for the unstressed vowel sound usually represented by a schwa.

The pronunciation task has been studied within the connectionist paradigm [Sejnowski and Rosenberg, 1986]. Backpropagation learning was applied to a transcription of speech by a child and to a subset (1000 words) of Webster's dictionary [Webster, 1974]. In each case, both a text and a phonetic transcription of that text were repeatedly presented to a network. The experiment was primarily evaluated according to how well it could reproduce the phonetic transcription given only the text; no novel text was introduced. Thus, although this experiment provides important insight into the properties of backpropagation learning as a form of self-organizing system, the results are not directly comparable to those from MBRtalk. The pronunciation task has also been studied in the case-based reasoning paradigm [Lehnert, 1987], with results similar to those reported below.

In order to apply memory-based reasoning to pronunciation, it is necessary to devise a representation for the words in the dictionary.
The representation used in MBRtalk is identical to that used in NETtalk.5 For every letter of every word in the database, we create a "frame," which consists of the letter, the previous four letters, the succeeding four letters, the phoneme corresponding to that letter, and the stress assigned to that letter.

5 With the exception that we use a 9-letter window; NETtalk used a 7-letter window.

Certain difficulties are associated with this representation, primarily due to the fact that the correspondence between letters and phonemes is not one-to-one. First, two letters sometimes produce a single phoneme, as the double 's' in 'kiss' (kis-). This is handled by using the letter '-' as a silent place holder. Second, the existence of diphthongs and glides may cause one letter to yield several phonemes, as the first 'u' in 'future' (fyoo'ch'r). This problem is solved by treating diphthongs as if they were single phonemes, so that the 'u' in 'future' becomes '~oo'. Finally, stress is not indicated by an accent mark, but by a separate stress field, which can contain '0' for unstressed vowels, '1' for primary stress, '2' for secondary stress, and '+' or '-' for consonants (rising and falling stress). Applying these principles, we get the following transcription for 'future':

Text      f  u    t   u  r   e
Phonemes  f  ~oo  ch  -  'r  -
Stress    +  1    -   -  o   -

As noted above, each letter of each word yields a frame consisting of the letter, the four preceding letters, the four succeeding letters, plus the phoneme code and the stress code corresponding to the letter. These fields are called n-4 through n-1 (the preceding four letters), n (the letter itself), n+1 through n+4 (the succeeding four letters), p (the phoneme), and s (the stress). Thus, the word 'future' yields the following 6 frames:

        f  utur   f    +
   f    u  ture   ~oo  1
   fu   t  ure    ch   -
   fut  u  re     -    -
   futu r  e      'r   o
   utur e         -    -
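The frame extraction just described is easy to state as code. The sketch below assumes the aligned phoneme and stress strings are supplied by the dictionary, and pads the edges of the window with '_' (an illustrative choice; any out-of-word marker would do).

# A sketch of frame extraction with a 9-letter window.

def frames(letters, phonemes, stresses, window=4):
    """Yield (n-4..n-1, n, n+1..n+4, p, s) frames for one word."""
    assert len(letters) == len(phonemes) == len(stresses)
    pad = "_" * window
    padded = pad + letters + pad
    out = []
    for i in range(len(letters)):
        left = padded[i:i + window]                       # n-4 .. n-1
        right = padded[i + window + 1:i + 2 * window + 1]  # n+1 .. n+4
        out.append((left, letters[i], right, phonemes[i], stresses[i]))
    return out

# 'future' -> f ~oo ch - 'r -  with stresses + 1 - - o -
for f in frames("future",
                ["f", "~oo", "ch", "-", "'r", "-"],
                ["+", "1", "-", "-", "o", "-"]):
    print(f)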
The exact form of this penalty function is contained in [Stanfill and Waltz, 19861. There are two variations on this metric: we can use the penalty function based on the contents of the target record or of the data record’. In the example above, we might ‘Duplicates may be present. ‘A third alternative is to use a penalty function depending on both values. Figure 1: Database Size use the penalty function associated with [n - 1 = 'f '1 or with.[n - 1 = ‘n’]. If the penalty function depends on the target record we have a uniform metric. If the pepalty function depends on the data record we have a variabZe metric. These two metrics were applied to the pronunciation task, and their sensitivity to database size, distraction, and noise was determined. The first task was to determine how the quality of the pronunciation varied as size of the database changed. The raw databases consisted of frames generated from ‘Web- ster’s dictionary [Webster, 19741. First, 1024 frames were extracted and set aside as test data. Second, various quan- tities of training data were extracted; the smallest sample was 4096 frames and the largest was 131,072. Memory- based reasoning, using the two different metrics noted above, was applied to the test data. The value MBR pre- dicted for the phoneme slot was then compared with the value already stored there. With the largest database, us- ing the uniform metric, the accuracy rate was 88%. Using the variable metric, the best accuracy was 83%. The per- formance of both algorithms degraded gracefully as the size of the database was reduced. With a sample of only 4K frames (approximately 700 words), MBR s&l1 managed to get the correct answer 76% of the time (Figure 1). The next task was to determine how well the two al- gorithms rejected spurious information (distraction). This was done by adding between 1 and 7 fields contal?ning Stanfill 579 0 1 234567 Figure 2: Distraction random values to each frame in a 64K-frame training database. The uniform metric’s performance degraded slightly, from 88% down to 83% correct. The variable met- ric’s performance did not change at all (Figure 2). The third task was to determine how well the metrics performed in the presence of noise. Two different types of noise must be considered: predictor noise and goal noise. N-% noise is added to a field by randomly choosing N- % of the records, then giving them a randomly selected value.8 In the predictor-noise test, a fixed percentage of noise was added to every predictor field in a 64K-record training database, and the results tabulated. The uniform metric was relatively unaffected: with 90% noise, perfor- mance declined from 88% to 79%, after which it quickly dropped to chance. The variable metric was somewhat sur- prising: performance was actually better with 10% - 50% noise than with none (Figure 3).’ When noise was added to goal fields, both algorithms’ performances dropped off more-or-less linearly (Figure 4). In summary, for the pronunciation task the uniform metric is always more accurate than the variable metric. It has fairly good resistance to distraction, and extremely good resistance to predictor noise. It does not resist goal noise particularly well. The variable metric does, however have some useful properties: it seems immune to distrac- 8These values were uniformly distributed. An alternative exper- iment would have been to select a random value having the same distribution as the data occuring in the field. 
These two metrics were applied to the pronunciation task, and their sensitivity to database size, distraction, and noise was determined.

The first task was to determine how the quality of the pronunciation varied as the size of the database changed. The raw databases consisted of frames generated from Webster's dictionary [Webster, 1974]. First, 1024 frames were extracted and set aside as test data. Second, various quantities of training data were extracted; the smallest sample was 4096 frames and the largest was 131,072. Memory-based reasoning, using the two different metrics noted above, was applied to the test data. The value MBR predicted for the phoneme slot was then compared with the value already stored there. With the largest database, using the uniform metric, the accuracy rate was 88%. Using the variable metric, the best accuracy was 83%. The performance of both algorithms degraded gracefully as the size of the database was reduced. With a sample of only 4K frames (approximately 700 words), MBR still managed to get the correct answer 76% of the time (Figure 1).

[Figure 1: Database Size]

The next task was to determine how well the two algorithms rejected spurious information (distraction). This was done by adding between 1 and 7 fields containing random values to each frame in a 64K-frame training database. The uniform metric's performance degraded slightly, from 88% down to 83% correct. The variable metric's performance did not change at all (Figure 2).

[Figure 2: Distraction]

The third task was to determine how well the metrics performed in the presence of noise. Two different types of noise must be considered: predictor noise and goal noise. N% noise is added to a field by randomly choosing N% of the records, then giving them a randomly selected value.8 In the predictor-noise test, a fixed percentage of noise was added to every predictor field in a 64K-record training database, and the results tabulated. The uniform metric was relatively unaffected: with 90% noise, performance declined from 88% to 79%, after which it quickly dropped to chance. The variable metric was somewhat surprising: performance was actually better with 10%-50% noise than with none (Figure 3).9 When noise was added to goal fields, both algorithms' performances dropped off more-or-less linearly (Figure 4).

[Figure 3: Predictor Noise]
[Figure 4: Goal Noise]

8 These values were uniformly distributed. An alternative experiment would have been to select a random value having the same distribution as the data occurring in the field.
9 For a discussion of the effect of noise on concept learning, see [Quinlan, 1986].

In summary, for the pronunciation task the uniform metric is always more accurate than the variable metric. It has fairly good resistance to distraction, and extremely good resistance to predictor noise. It does not resist goal noise particularly well. The variable metric does, however, have some useful properties: it seems immune to distraction, and has even better resistance to predictor noise. The anomalous improvement in performance as predictor noise increases up to 50% needs to be understood.

Discussion

Substantial work remains to be done on the mechanics of memory-based reasoning. First, a variety of metrics need to be studied. Second, MBR needs to be extended to work in domains with continuous variables. Third, tasks other than pronunciation need to be attacked. Fourth, there is a need for research into the effects of representation on learnability. Finally, a rigorous head-to-head comparison between MBR and other methods of inductive inference is needed.

The most striking aspect of this experiment is high performance on a difficult problem with a very simple mechanism. There are no rules, and neither complex algorithms nor complex data structures. If simplicity is a good indicator of the plausibility of a paradigm, then memory-based reasoning has a lot going for it.

The ultimate goal of memory-based reasoning remains to build intelligent systems based on memory. This experiment is an important first step in that direction. What has been demonstrated is that it is possible to use memory as an inference engine; that if an agency can store up experiences and then recall them on a best-match basis, it can learn to perform a complex action. Much remains to be done, but the memory-based reasoning paradigm has passed a crucial first test.

Many thanks to Dave Waltz, who is the co-originator of the memory-based reasoning paradigm; to George Robertson, who got learning research going at Thinking Machines Corporation; to Donna Fritesche and Robert Thau for assisting with software; to Danny Hillis for designing a very nice machine; and to Thinking Machines Corporation for supporting this research.

References

[Carbonell et al., 1983] Jaime Carbonell, Ryszard Michalski, and Tom Mitchell. Machine Learning: A Historical and Methodological Analysis. AI Magazine 4(3):69-79, 1983.
[Hillis, 1985] Danny Hillis. The Connection Machine. MIT Press, Cambridge, Massachusetts, 1985.
[Holland et al., 1986] John Holland, Keith Holyoak, Richard Nisbett, and Paul Thagard. Induction: Processes of Inference, Learning, and Discovery. MIT Press, Cambridge, Massachusetts, 1986.
[Kolodner, 1980] Janet Kolodner. "Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model." Technical Report 187, Yale University, Department of Computer Science, 1980 (Ph.D. Dissertation).
[Kolodner, 1985] Janet Kolodner and Robert Simpson. "A Process Model of Case-Based Reasoning in Problem Solving." In Proceedings IJCAI-85, Los Angeles, California, International Joint Committee for Artificial Intelligence, August 1985.
[Lehnert, 1987] Wendy Lehnert. Case-Based Problem Solving with a Large Knowledge Base of Learned Cases. In Proceedings AAAI-87, Seattle, Washington, American Association for Artificial Intelligence, 1987.
[Michalski et al., 1983] Ryszard Michalski, Jaime Carbonell, and Tom Mitchell, editors. Machine Learning. Morgan Kaufman, Los Altos, California, 1983.
[Michalski et al., 1986] Ryszard Michalski, Jaime Carbonell, and Tom Mitchell, editors. Machine Learning, Volume 2. Morgan Kaufman, Los Altos, California, 1986.
[Quinlan, 1979] Ross Quinlan.
"Discovering Rules from Large Collections of Examples: A Case Study." In Expert Systems in the Micro Electronic Age, Donald Michie, editor. Edinburgh University Press, Edinburgh, 1979.
[Quinlan, 1986] Ross Quinlan. "The Effect of Noise on Concept Learning." In Machine Learning, Volume 2, Ryszard Michalski et al., editors. Morgan Kaufman, Los Altos, California, 1986.
[Sejnowski and Rosenberg, 1986] Terry Sejnowski and Charles Rosenberg. "NETtalk: A Parallel Network that Learns to Read Aloud." Technical Report JHU/EECS-86, The Johns Hopkins University Electrical Engineering and Computer Science Department.
[Stanfill and Waltz, 1986] Craig Stanfill and David Waltz. "Toward Memory-Based Reasoning." Communications of the ACM 29(12):1213-1228, December 1986.
[Webster, 1974] Merriam Webster's Pocket Dictionary, 1974.
1987
103
554
Nondestructive Graph Unification

David A. Wroblewski
MCC
3500 West Balcones Center Drive
Austin, Texas 78759

Abstract

Graph unification is sometimes implemented as a destructive operation, making it necessary to copy the argument graphs before beginning the actual unification. Previous research on graph unification claimed that this copying is a computation sink, and has sought to correct this. In this paper I claim that the fundamental problem is in designing graph unification as a destructive operation. This forces it to both over copy and early copy. I present a nondestructive graph unification algorithm that minimizes over copying and eliminates early copying. This algorithm is significantly simpler than recently published solutions to copying problems, but maintains the essential efficiency gains of older techniques.

In this paper I will deal with unification of rooted, connected, acyclic graphs, or "DAGs". The efficiency of graph unification has recently received attention because of the popularity of graph-unification-based formalisms in computational linguistics ([Karttunen 84], [Wittenburg 86], [Pereira 85], [Karttunen and Kay 85]). In these parsers, the lexical entries and grammar rules are represented as DAGs, and graph unification is the mechanism whereby rules are applied to sentence constituents. Unfortunately, graph unification is an expensive process; any attempt to build a practical parser based on graph unification must address the issue of making it efficient. Past research has identified the copying involved in graph unification as a computational sink. This paper presents a graph unification algorithm that does not need to copy its argument graphs because it is nondestructive.

[a: b
 c: 1[d: e]
 f: <1>]

Figure 1: The Graph Matrix Notation

For the purposes of this paper I will use the matrix notation (whenever possible) for graphs, as used in [Wittenburg 86], which has become somewhat of a de facto standard in the field. In this notation, reentrant structures are indicated with a mark, a number preceding a subgraph, and a pointer, a number enclosed in '<' and '>'. When a pointer of the form <n> is encountered, it should be interpreted as: "The following graph is the very same graph as the one marked by the number n elsewhere in the graph." Figure 1 shows a graph in the matrix notation with a reentrant pointer, marked and pointed to by the number 1. Beside it is an equivalent picture of the same graph.

Abstractly, a DAG consists of a set of nodes, a set of arcs, and a special node designated the root. In practice, we implement DAGs with structures that are analogical; there is a node structure and an arc structure, as shown in Figure 2.1

Node Structure
+-----------+
| forward   |
| arc list  |
+-----------+

Figure 2: The Implementation of DAGs

Finally, in our implementation, nodes in a DAG can be forwarded to other nodes. For instance, the DAG graphically described in Figure 3 shows the root node forwarded to another node; beside it is the result of printing this graph in the matrix notation. Note that the contents of the forwarded node are completely ignored. For all purposes, the node being forwarded has been discarded and replaced with node B. It is important that all operations on graphs must honor a forwarding pointer above all else; forwarding is the highest-priority operation. The process of resolving these forwarding pointers (there may be chains of them) is known as "dereferencing" a node [Pereira 85].

[Figure 3: Forwarding Links Override Node Definitions]

1 Our implementation of the destructive unification algorithm presented in Section 3, and the structures shown in Figure 2, owe much to the work of Shieber [Shieber 84], Karttunen [Karttunen 84], and Wittenburg [Wittenburg 86].
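A minimal sketch of the node structure and of dereferencing follows. The class layout mirrors Figure 2 plus the forwarding behavior just described; representing the arc list as a Python dict is our simplification.

# A sketch of the node structure of Figure 2, with forwarding and
# the dereferencing operation that every graph operation performs first.

class Node:
    def __init__(self):
        self.forward = None   # forwarding pointer (chains are possible)
        self.arcs = {}        # arc list: label -> Node

def dereference(node):
    """Follow forwarding pointers to the node's current definition."""
    while node.forward is not None:
        node = node.forward
    return node

a, b = Node(), Node()
a.arcs["q"] = Node()
b.arcs["s"] = Node()
b.arcs["t"] = Node()
a.forward = b                       # discard a's contents in favor of b
print(sorted(dereference(a).arcs))  # ['s', 't']: a's own arc is ignored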
Figure 3: Forwarding Links Override Node Definitions

Conceptually, the algorithm for graph unification is quite simple, and much like the ordinary term unification used in theorem-proving programs [Warren 83], [Boyer and Moore 72], extended to the more complex structure of DAGs. Like term unification, graph unification both matches the argument DAGs and builds a new DAG. If the DAGs to be unified are somehow incompatible, then unification produces nothing; if the DAGs are compatible, then unification returns a new DAG which is compatible with, and more specific than, both of the argument DAGs. For instance, Figure 4 shows two DAGs and the result of unifying them; Figure 5 shows two pairs of DAGs over which unification would fail.

    DAG1              DAG2               RESULT
    [a: [b: c]        [a: 1[b: c]        [a: 1[b: c
     d: [e: f]]        d: <1>                   e: f]
                       g: [h: j]]         d: <1>
                                          g: [h: j]]

Figure 4: The Successful Unification of Two DAGs

    [c: d]  +  [c: e]  =>  Failure!

    [a: 1[x: y]    +    [a: [c: d]    =>  Failure!
     e: <1>]             e: [c: e]]

Figure 5: The Unsuccessful Unification of Two DAGs

The following version of the destructive unification algorithm is taken, with modification, from [Pereira 85]. It takes as input two nodes, initially the roots of the DAGs to be unified, and recurses on the subgraphs of each DAG until an "atomic" arc value is found. (Atomic cases are not considered here, since they are trivial; an actual implementation would include a type check for atomic DAGs and a clause testing for atomic equality. This has been left out for clarity.) The algorithm assumes the existence of two utility functions: complementarcs(d1, d2) takes two nodes and returns the arc labels that are unique to d1 with respect to d2; intersectarcs(d1, d2) takes two nodes and returns the arc labels that exist in both d1 and d2. These operations are equivalent to the set complement and set intersection, respectively, of the sets of arc labels of each node.

Unify1 modifies its argument DAGs in two ways. First, all the nodes from one DAG are forwarded to the other; subsequent operations on either of these DAGs will always dereference either node to the same structure. Second, arcs may be added to the d2 nodes, as in the arc labelled g in Figure 4.

    PROCEDURE Unify1(d1, d2)
      Dereference d1 and d2.
      IF d1 and d2 are identical THEN
        Success: return d1 or d2.
      ELSE
        new = complementarcs(d1, d2).
        shared = intersectarcs(d1, d2).
        Forward d1 to d2.
        FOR each arc in shared DO
          Find the corresponding arc in d2.
          Recursively Unify1 the arc values.
          IF Unify1 failed THEN
            Return failure.
          ELSE
            Replace the d2 arc value with the result.
          ENDIF
        FOR each arc in new DO
          Add this arc to d2.
        Return d2 or d1, arbitrarily.
      ENDIF
    ENDPROCEDURE

Note that the order in which the shared arcs are unified is a nondeterministic choice. Since Unify1 ravages its argument DAGs, they must be copied before it is invoked if the argument DAGs need to be preserved. For instance, if a grammar rule is represented as a DAG, then it surely should not be permanently changed during the application of the rule to a DAG representing a sentence constituent. Thus each DAG must be copied before the application of Unify1.
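The pseudocode translates almost line for line into a runnable sketch. The following is a hedged rendering, not the author's code: failure is signalled with an exception, and the atomic case the paper leaves out is handled by a simple equality test (so an atom unifies only with an identical atom).

    class UnificationFailure(Exception):
        pass

    def unify1(d1, d2):
        d1, d2 = dereference(d1), dereference(d2)
        if d1 is d2:
            return d1                              # identical: success
        if d1.value is not None or d2.value is not None:
            # atomic case, elided in the paper's pseudocode
            if d1.value != d2.value or d1.arcs or d2.arcs:
                raise UnificationFailure()
            d1.forward = d2
            return d2
        new = set(d1.arcs) - set(d2.arcs)          # complementarcs(d1, d2)
        shared = set(d1.arcs) & set(d2.arcs)       # intersectarcs(d1, d2)
        d1.forward = d2                            # the destructive step
        for label in shared:                       # order is nondeterministic
            d2.arcs[label] = unify1(d1.arcs[label], d2.arcs[label])
        for label in new:
            d2.arcs[label] = d1.arcs[label]        # arcs added to d2 (cf. arc g)
        return d2

On failure the exception simply propagates; since the forwarding step has already smashed the arguments together, a caller that needs the originals must copy them beforehand, which is exactly the problem the next section takes up.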
Previous research [Karttunen and Kay 85], [Pereira 85] has identified DAG copying as a significant overhead. However, some amount of copying must be done to create the result DAG. Exactly when is copying wrong, then? The answer is: when the algorithm copies too much, or copies too soon. Destructive unification makes both of these mistakes:

- Over Copying. Copies are made of both DAGs, and then these copies are ravaged by the unification algorithm to build a result DAG. This would appear to require the raw materials for two DAGs in order to create just one new DAG. A better algorithm would only allocate enough memory for the resulting DAG.

- Early Copying. The argument DAGs are copied before unification is started. If the unification fails, then some of the copying is wasted effort. A better algorithm would copy incrementally, so that if a failure occurred, only the minimal copying would be done before the failure was detected.

The important point here is that these are two distinct features of a unification algorithm, and they have no necessary connection. Previous attempts to improve unification have not acknowledged their independence, and are perhaps more complicated than they need to be because of that. The algorithm presented in the next section, on the other hand, deals completely with early copying but only partially with over copying; they are treated as independent problems.

In this section, I present a nondestructive graph unification algorithm that incrementally copies its argument graphs. It avoids, whenever possible, over copying, and completely eliminates early copying. The price to be paid for this is a slightly more complicated algorithm and graph representation. Intuitively, unification can be nondestructive if it builds the result DAG as it proceeds, making all changes in this new DAG and leaving the argument DAGs untouched. This means that we have to associate with each component of the argument DAGs its copy. If, at each step during unification, we return the copy structure as the result of the unification, then we will finally be left with a pointer to the root of the newly constructed DAG, or a failure indicator.

A. Incremental Copying Means More Bookkeeping

For this algorithm, we extend the representation of nodes somewhat, as shown in Figure 6. It is essentially the same as in Figure 2, except that a copy and a status field have been added. The copy field associates a node with its copy. The status field indicates whether a given node is part of a copy or part of the original graph; it holds one of two possible values, "copy" or "not-copy".

Figure 6: Nondestructive Unification Node (fields: forward, arc list, copy, status)

B. Graph Unification with Incremental Copying

The procedure Unify2 takes as input two nodes, initially the roots of the DAGs to be unified. It recurses on the subgraphs of each DAG until an "atomic" arc value is found. It differs from the algorithm Unify1 in that it never alters d1 or d2; rather, it records its changes in the new node being created, named copy.

    PROCEDURE Unify2(d1, d2)
      Dereference d1, d2.
      IF neither d1 nor d2 has a copy THEN
        copy = a new node.
        copy.status = "copy".
        d1.copy, d2.copy = copy.
        newd1 = complementarcs(d1, d2).
        newd2 = complementarcs(d2, d1).
        shared = intersectarcs(d1, d2).
        FOR each arc in shared DO
          Find the corresponding arc in d2.
          Recursively Unify2 the arc values.
          IF Unify2 failed THEN
            Return failure.
          ELSE
            Add a new arc to copy.
          ENDIF
        FOR each arc in union(newd1, newd2) DO
          Copy the arc value, honoring existing copies
          within, and place this value in copy.
        Return copy.
      ELSE IF d1 xor d2 has a copy THEN
        Without loss of generality, assume d1 has the copy.
        Unify1(d1.copy, d2), preserving d2.              (*)
        Return d1.copy.
      ELSE IF both d1 and d2 have copies THEN
        Unify1(d1.copy, d2.copy).
      ENDIF
    ENDPROCEDURE

Note that when a copy already exists for one graph or the other, but not both, this algorithm performs an operation very much like Unify1, except that no forwarding is done, since the changes can all be safely recorded in the copy. This is what is meant by the line marked with an asterisk.

C. An Example of Nondestructive Unification

To illustrate, we will partially walk through the unification of the graphs shown in Figure 4 using the procedure Unify2. In the following series of figures, dashed lines indicate the contents of the copy field, darkened circles represent "not-copy" nodes, and hollow circles represent nodes which are copies.

Figure 7 (Nondestructive Unification: Snapshot 1) shows the state of unification after the path (a, b) has been followed. Unify2 has recursed twice and returned to the top node; three new nodes have been created: a copy of the root, a copy of the node on the path (a), and a copy of the node on the path (a, b). The copy fields of the appropriate nodes in DAG1 and DAG2 have been filled with the copy nodes, as indicated by the dashed lines.

In Figure 8 (Nondestructive Unification: Snapshot 2), Unify2 has followed the path (d) on the argument DAGs. But notice that the nodes at the end of path (a) and at the end of path (d) in DAG2 are the same; a copy of this node was already made when traversing the path (a, b), so that copy is reused rather than allocating a new node. Subsequently, an arc labelled e is added to this reused copy. Finally, the Unify2 recursion unwinds back to the root node of both DAGs.

In Figure 9 (Nondestructive Unification: Final Result), Unify2 has added the arc labelled g in DAG2 to the result graph, making a copy of the subgraph at the end of that arc and placing it in the result graph. Notice that the subgraph [h: j] of DAG2 was copied even though there existed no corresponding subgraph in DAG1; later we will see that this leads to possible over copying on the part of Unify2 in some special cases. The result graph is shown in completed form in Figure 9. Notice that DAG1 and DAG2 have been left unchanged except for their copy fields. The new DAG can be returned, with a total of 6 new nodes and 6 new arcs created. To unify these DAGs nondestructively with procedure Unify1, 10 nodes and 9 arcs would have been created, i.e., a copy of both argument DAGs.

D. Advantages of Incremental Copying

Incrementally copying graphs during unification means that over copying is avoided and early copying is eliminated. This incremental copying scheme has the potential to be more efficient than destructive unification (including the preceding copying) in both space and speed. Even if the unification can be guaranteed to succeed, Unify2 potentially uses less space and time on copying than Unify1, because it avoids over copying.

E. Disadvantages of Incremental Copying

Unify2 is not a perfect algorithm: in some cases it can still over copy. Such a case is the unification of the DAGs in Figure 10, taken up after the following sketch.
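Before working through that case, here is a hedged sketch of Unify2 itself, again with invented names and with the argument DAGs assumed to be built from Node2. The branch for the case where exactly one argument already has a copy is a simplification of the asterisked step: it copies the other argument first and then calls the destructive unify1, which preserves the originals but is not literally the paper's operation.

    # Nondestructive unification by incremental copying (a sketch).
    class Node2(Node):
        def __init__(self, value=None):
            super().__init__(value)
            self.copy = None            # this node's copy, if any
            self.status = "not-copy"    # "copy" for result-graph nodes

    def copy_node(node):
        # Copy a subgraph, honoring copies that already exist within it.
        node = dereference(node)
        if node.copy is None:
            node.copy = Node2(node.value)
            node.copy.status = "copy"
            for label, target in node.arcs.items():
                node.copy.arcs[label] = copy_node(target)
        return node.copy

    def unify2(d1, d2):
        d1, d2 = dereference(d1), dereference(d2)
        if d1.copy is None and d2.copy is None:
            if d1.value is not None or d2.value is not None:  # atomic case
                if d1.value != d2.value or d1.arcs or d2.arcs:
                    raise UnificationFailure()
                d1.copy = d2.copy = Node2(d1.value)
                d1.copy.status = "copy"
                return d1.copy
            cp = Node2()
            cp.status = "copy"
            d1.copy = d2.copy = cp
            shared = set(d1.arcs) & set(d2.arcs)
            for label in shared:
                cp.arcs[label] = unify2(d1.arcs[label], d2.arcs[label])
            for label in (set(d1.arcs) | set(d2.arcs)) - shared:
                source = d1.arcs.get(label) or d2.arcs.get(label)
                cp.arcs[label] = copy_node(source)  # honors existing copies
            return cp
        elif d1.copy is None or d2.copy is None:
            has, lacks = (d1, d2) if d1.copy is not None else (d2, d1)
            return unify1(has.copy, copy_node(lacks))  # simplified (*) step
        else:
            return unify1(d1.copy, d2.copy)  # both copied: the Figure 10 case

A successful top-level call returns the root of the freshly built result; on a failure, only the copies made before the clash were wasted, which is precisely the saving over early copying.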
If the top-level arcs of the DAGs in Figure 10 are unified in the order x, then y, then z, double copying occurs during the unification of the z subgraph:

    DAG1               DAG2                RESULT
    [x: [a: b]         [x: 1[a: b]         [x: 1[a: b
     y: [c: d]          y: 2[c: d]              e: f
     z: [p: 1[e: f]     z: [p: <1>              c: d]
         q: <1>]]           q: <2>]]        y: <1>
                                            z: [p: <1>
                                                q: <1>]]

Figure 10: Two DAGs That Force Double Copying

To understand this, notice that when the x and y subgraphs are unified, new copies of the graphs [a: b] and [c: d] are made and associated with the original nodes in DAG1 and DAG2. When unification takes place along the path (z, p), a new arc/value [e: f] is combined with the existing copy of [a: b] to make the result subgraph 1[a: b e: f]. Finally, the reentrant structure in DAG1 forces the values at the ends of the paths (z, p) and (z, q) to be unified. But in this case, there is now a copied graph already associated with each of these paths! The correct result can be obtained by invoking the destructive unification routine Unify1 on both copies, as is done in the final conditional clause of Unify2. This provides the correct result DAG, but it is unsatisfying with respect to the goal of having a "perfect" unification algorithm, because the algorithm has still over copied, even though it produces the correct result. I have been unable to discover a way to retain the incremental copying scheme while completely avoiding this sort of over copying, although somehow combining "reversible unification" (discussed in the next section) with this algorithm seems to be a promising approach.

Several other graph unification algorithms that avoid early copying and over copying have been proposed and implemented. Each of them has emphasized the importance of dealing with copying efficiently. In this section I will compare the nondestructive unification algorithm presented here with these previous techniques.
Also, this technique ties each DAG to the derivational environment in which it was created; this appears to have been done as a efficievcy measure, in order to ;t$ztthe structure of the environments to the greatest . I found the environment/skeleton scheme hard to implement and extend in a Lisp environment. In fact, it was my discouraging experience when trying accelerate unification via structure-sharing that led to the design of the incremental copying scheme described here. In my implementation, most of the speed-advantages of the structure-sharing were cancelled by the speedArs;;i the log(d) node access overhead. disadvanta es of structure-sharing are avoided using incrementa copying. B Each node in the graph can be accessed in constant time, and the result of a unification is not necessarily tied to the derivational context in which the unification was done. Finally, it is significantly easier to implement and extend than the structure- sharing mechanism. B. Reversible Unifie Karttunen [Karttunen 861 has implemented a “reversible unification“ scheme in which the changes to the argumeht DAGs are made in a semi-permanent way. .After successful unification, a fresh graph is copied from the two. altered argument DAGs, and the argument graphs are then restored by undoing all the changes made durin unification. If the unification fails, ,then the argument % AGs are restored and no result graph is made. Reversible unification does not appear to be restricted to any special context. The most important difference between reversible unification and unify2 concerns the restoration process. ~nify2 only changes the original graphs in their copy fields. More radical unification changes are made in the copies themselves. Thus, restor!ng the argument DAGs is only a matter of InvalidatIng the copy fields of the ar ument DAGs. This can be done in constant time by a 8 din B a mark field which indicates the validity of the copy leid iff it is equal to some global counter; ail the currently valid copy fields can be simultaneously invalidated by incrementing the global counter.3. This trick is not possible for reversible unification, since it alters its argument DAGs more radically; instead the algorithm must consider each node separately when restoring. Another difference between reversible unification and uni%y2 is that reversible unification does not incrementally copy it’s argument DAGs. This forces it to add a constant-time ‘save” operation before all modifications and to make a second pass over the result DAGs to create the copy; in Unify2 this work is traded for a copy-dereferencing operation each time a node is examined. A possible ar ument for reversible unification. over wnify%l would % e its simplicity, possibly making it easier to implement, validate, and maintain. Reversible unification also avoids the need for adding two fields (COPY, status) to each node through the use of th.e restoration records. Further, reversible unification will never over copy, even in cases where Unify2 would. Graph unification is sometimes implemented as a destructive operation, making it neccesary to copy the argument graphs unification. before beginning the actual Previous research on graph unification showed that this copying is a computation sink, and has sought to correct this. In this paper I have claimed that the fundamental problem is in designing graph unification as a destructive operation. and early copy. 
Conclusion

Graph unification is sometimes implemented as a destructive operation, making it necessary to copy the argument graphs before beginning the actual unification. Previous research on graph unification showed that this copying is a computational sink and sought to correct it. In this paper I have claimed that the fundamental problem is in designing graph unification as a destructive operation, which forces it to both over copy and early copy, and I have presented a nondestructive graph unification algorithm that minimizes over copying and eliminates early copying. In retrospect, it can be seen that earlier attempts to fix the efficiency problems also addressed the problems of early copying and over copying. The new algorithm presented here is simpler than structure sharing, and it replaces the restoration process of reversible unification with a (small) constant-time operation.

There are clearly some tradeoffs to be considered in implementing graph unification. I have tried to outline the four that I know of: over copying, early copying, DAG access overhead, and restriction to certain contexts. Complicating this is the surprising complexity possible in the simple structure of a DAG under unification; implementing any graph unification algorithm and testing its correctness is a formidable task. One of the problems with the algorithm presented here is that it has not been proven correct (nor has any other graph unification algorithm, to my knowledge), although we have informally tested it and have been using it on a daily basis for about 5 months.

Future research in this area should strive toward understanding how various design decisions in unification-based parsers affect design decisions for unification. For instance, some parsers may be able to intelligently eliminate rule applications that would fail, without invoking unification; one such system is Astro [Wittenburg 86]. If it is known that unification will succeed most of the times it is applied, then one would prefer to optimize the successful case of the unification algorithm, and early copying might not be a bad design decision. Another consideration is the purpose to which the unification result will be put: some DAGs have a short lifespan, such as those on chart edges, while other DAGs produced via unification might have a relatively permanent existence, such as lexical definition graphs. Finally, sometimes one would like to provide detailed information about the causes of unification failure (for debugging grammars, say), while at other times space and time are at a premium and debugging information is not required. The author's experience suggests that the "perfect graph unification algorithm" may not exist, and is best thought of as a family of related algorithms optimized for different purposes.

Acknowledgements

This paper has been greatly improved by the thoughtful comments of Elaine Rich and Kent Wittenburg. I am also indebted to Elaine, Kent, and the rest of the MCC Lingo group for many interesting discussions on this topic, and to MCC for providing the computational and intellectual environment in which this work took place.

References

[Boyer and Moore 72] R. Boyer and J. Moore. The Sharing of Structure in Theorem-Proving Programs. In Machine Intelligence 7. John Wiley and Sons, New York, 1972.

[Karttunen 84] Lauri Karttunen. Features and Values. In Proceedings of Coling84, pages 28-33, 1984.

[Karttunen 86] Lauri Karttunen. D-PATR: A Development Environment for Unification-Based Grammars. Technical Report CSLI-86-61, Center for the Study of Language and Information, August 1986.

[Karttunen and Kay 85] L. Karttunen and M. Kay. Structure Sharing with Binary Trees. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pages 133-136, 1985.

[Pereira 85] Fernando C. N. Pereira. A Structure-Sharing Representation for Unification-Based Grammar Formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pages 137-144, 1985.
[Shieber 84] S. Shieber. The Design of a Computer Language for Linguistic Information. In Proceedings of Coling84, pages 362-366, 1984.

[Warren 83] David H. D. Warren. Applied Logic: Its Use and Implementation as a Programming Tool. Technical Report 290, SRI International, June 1983.

[Wittenburg 86] Kent B. Wittenburg. Natural Language Parsing with Combinatory Categorial Grammar in a Graph-Unification-Based Formalism. PhD thesis, University of Texas at Austin, August 1986.
Partial Choices in Constraint Satisfaction Problems

Sanjay Mittal and Felix Frayman
Intelligent Systems Laboratory, Xerox PARC, 3333 Coyote Hill Rd., Palo Alto, CA 94304

Abstract

Constraint problems derived from design and configuration tasks often use components (structured values) as the domains of constrained variables. Most existing methods are forced into unnecessary search because they assign complete components to variables. A notion of partial choice is introduced as a way to assign a part of a component. The basic idea is to work with descriptions of classes of solutions as opposed to the actual solutions. It is shown how this idea can reduce search and, in the best case, eliminate it. A distinction is made between a partial commitment (a partial choice that will not be retracted) and a partial guess. A particular way to implement partial-choice problem solvers is discussed; this method organizes choices in a taxonomic classification. Use of taxonomies not only helps in pruning the search space but also provides a compact language for describing solutions and no-goods and for representing constraints. It is also shown how multiple hierarchies can be used to avoid some of the problems associated with using a single hierarchy.

Introduction

A central component of many design tasks is a constraint satisfaction problem (CSP) as defined by Mackworth [Mackworth, 1977], i.e., finding a consistent assignment of values for a set of variables that together define the artifact which is the output of the design task. These variables are constrained by expressions derived from structural, functional, and performance requirements (see [Mittal and Araya, 1986] and [Araya and Mittal, 1987] for a more detailed articulation of this approach).

An important characteristic of such constraint problems, i.e., the ones formulated for design tasks, is the use of structured values to represent the domains of variables. Simply stated, a structured value has internal structure in terms of additional variables with corresponding values. It is not hard to see why this is convenient: in design tasks, one often has pre-defined components that are used to define the domains of some of the design variables. The use of certain pre-defined resistors in discrete circuit design, or the use of fixed sets of components in computer configuration, are examples. Unfortunately, most of the general-purpose methods for constraint satisfaction work at the level of making a complete choice for a variable and rely on some form of least commitment to defer making a guess, in order to minimize search. In this paper, we introduce the idea of making a partial choice, which is especially appropriate for variables that have structured values (henceforth components).

The paper is organized as follows. We start with a simple example involving the use of components and show how the existing methods are forced into unnecessary search because they have to choose a complete component. Next, we introduce the notion of partial choice and show how it reduces search. The basic idea of partial choice is to operate on descriptions of sets of solutions as opposed to actual solutions. In some situations, a partial choice can be viewed as a commitment that will not be retracted; a partial commitment may thus be viewed as an extension of the least commitment principle. We also discuss situations in which no partial commitment can be made and one has to resort to a partial guess. Making a partial choice affords benefits similar to hierarchical search, i.e., pruning of choices without having to examine all choices.
In the next section, we show that taxonomies are one way to implement partial choices. We also show how multiple taxonomies can be used simultaneously to avoid having a particular order in which partial choices are made, something that results from using a single hierarchy. Some of these ideas have been implemented in two design expert systems, Pride [Mittal et al., 1986] and Cossack [Frayman and Mittal, 1987].

In the first part of the paper we will use the following very simple constraint problem.

Example 1. There are two variables, X and Y. Each has two components as possible choices (in other words, its domain). The components in turn can be viewed as having two distinct fields, p1 and p2. We use curly braces ({}) to represent the choices for a variable and square brackets ([]) to represent a component. The choices for X and Y are:

    X: {[p1: a, p2: b], [p1: a, p2: c]} and Y: {[p1: d, p2: e], [p1: d, p2: f]}

We shall further use the dot notation to refer to nested variables; thus, X.p1 means the value of the p1 field of the component assigned to X. The constraints on X and Y are:

    C1: X.p1 = a iff Y.p2 = e;    C2: Y.p1 = d iff X.p2 = c

Constraint C1 reads, "X.p1 has value a if and only if Y.p2 has value e". This problem has four candidate solutions (2 choices for X times 2 choices for Y) and only one solution:

    X = [p1: a, p2: c] and Y = [p1: d, p2: e]

In this section, we briefly analyze the performance of some of the general-purpose methods for doing constraint search on the above problem.

A. Generate and test

Using generate and test (G&T), one would build a generator that produces candidate solutions by making all possible assignments to X and Y. The constraints would be used to test the acceptability of the candidates. It is easy to see that in this case the generator would produce 4 candidates, only one of which is acceptable.

B. Hierarchical generate and test

We define hierarchical G&T as follows: assign values to the variables in some order to define partial candidates, and prune a partial candidate as soon as any constraint can be applied. This improves over G&T in general. But in our example, the constraints can only be tested after both variables have been assigned values, and thus no benefit results. Note that chronological backtracking is a standard way to implement hierarchical G&T.

C. Constraint propagation

Another common technique is constraint propagation as described in [de Kleer and Brown, 1986]. This is like hierarchical G&T with the constraints folded into the generator in such a way that they can be used to "look ahead" for making more constrained choices for the other variables. In our example, constraints can be used inside the generator as follows: once a choice is made, say for X, constraint C1 can be used to make a choice for Y that satisfies that constraint. Clearly, this has advantages if the constraints can be so folded into the generator. However, only one of the two initial choices for X or Y is the correct one, so in half the cases one would still have to search. As we shall show, one can still do better.

D. Least commitment approaches

One approach that has sometimes been very effective is the use of some kind of least commitment problem solver, as implemented in Molgen [Stefik, 1981] or more recently in the Pride expert system [Mittal and Araya, 1986]. In both of these systems, least commitment is practiced by employing techniques for ordering the constraint problem in such a way that arbitrary choices can be minimized. In simple terms, one can think of least commitment as a technique for deferring a variable assignment as long as possible when multiple choices exist and not enough constraints are known to allow committing to the right choice; in other words, avoid making a guess as long as possible in order to minimize backtracking. In our example, the choices and constraints are such that no particular order of assigning values or checking the constraints helps in reducing the search.
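To make the cost concrete, the following sketch (our own illustrative code, not from the paper) runs plain generate and test on Example 1: all four candidates must be generated and tested, and exactly one survives.

    from itertools import product

    X_choices = [{"p1": "a", "p2": "b"}, {"p1": "a", "p2": "c"}]
    Y_choices = [{"p1": "d", "p2": "e"}, {"p1": "d", "p2": "f"}]

    def satisfies(x, y):
        c1 = (x["p1"] == "a") == (y["p2"] == "e")   # C1: X.p1 = a iff Y.p2 = e
        c2 = (y["p1"] == "d") == (x["p2"] == "c")   # C2: Y.p1 = d iff X.p2 = c
        return c1 and c2

    candidates = list(product(X_choices, Y_choices))
    solutions = [c for c in candidates if satisfies(*c)]
    print(len(candidates), len(solutions))   # 4 candidates, 1 solution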
The idea of making a partial choice is deceptively simple. However, instead of just stating it, we will motivate it by the following exercise.

A. "Flattening" the components

The basic common cause of search in the methods discussed in the previous section can be traced to the fact that each component really represents a pre-packaged assignment of values to many variables. This means that a problem solver that assigns such a component to a variable ends up with a larger commitment than is warranted. Consideration of later constraints may lead to a contradiction for some of these assignments, causing the problem solver to search. This is easy to see if we eliminate the use of components, i.e., "flatten" them, and transform the above problem into the following one. There are four independent variables, X.p1, X.p2, Y.p1, and Y.p2, with the following domains of values:

    X.p1: {a};  X.p2: {b, c};  Y.p1: {d};  Y.p2: {e, f}

One can easily build a least commitment problem solver that would minimize search and, in our example, find the correct solution without any search. In general, however, the above problem transformation would be incorrect. Consider the problem defined in Figure 1 (Example 2, in the Partial Guess subsection below). Simply "flattening" the components changes a 2-variable problem with a search space of 16 into a 4-variable problem with a search space of 64. This is because the use of components already represents the result of doing constraint satisfaction on this flattened set of variables with their own domains. In essence, components represent solutions to a set of constraints which can then be ignored, because they are already implicit in the components. Alternately, one can view components as the result of a kind of compilation of the constraints on subsets of variables, which now correspond to components. By going back to the flatter set of variables, we would have to re-introduce those constraints, undoing past work. Real design problems often have hundreds of variables with tens of components per variable.

In summary, flattening such problems affects the search efficiency in two ways. One, flattening the components causes the search space to grow exponentially; the increase comes both from increasing the number of variables and, possibly, from enlarging the domains of the variables. Two, flattening brings back the constraints which had been compiled away, increasing the time complexity at least linearly.

B. Partial choices: the best of both worlds

It is easy to state the notion of partial choice now.
Simply, it is a way to make a commitment to only a part of a component, in the expectation that such a partial choice will allow constraints to be considered that enable a better choice (one that minimizes backtracking) for the rest of the component at a later point in problem solving. The idea is that where one would resort to guessing, one now selects an appropriate description for a class of components. This description may not uniquely apply to a single component, but it should be specific enough to enable further inferences to be made; at the same time, it should not be so specific that it has to be retracted, at least avoidably so. We differentiate between partial commitment (a partial choice that does not have to be retracted) and partial guess (a partial choice that may have to be removed later in the processing); partial choice refers to both.

1. Partial Commitment

Consider Example 1 above. A partial commitment for X would be to commit to p1 = a, because that is common to all choices for X. Similarly, for Y, commit to p1 = d, because that is common to all choices for Y. With these partial commitments, the constraints C1 and C2 can be used to select the correct component for each of X and Y. In effect, by allowing partial commitments we get the best of both worlds: by continuing to operate on components, albeit partially, we preserve the advantages of components, i.e., prior solutions to other constraints and a reduced search space; at the same time, we get the benefit of being able to consider the fields of a component as independent variables, which allows finer-grain commitments to be made (or deferred).

At any moment during problem solving when there is more than one alternative to select from and least commitment has to resort to guessing, the notion of partial commitment is applicable. Partial commitment involves examining the set of alternatives and determining a common part present in all of them, i.e., the facts that will be true no matter which alternative is selected. Partial commitment is a monotonic inference: it never has to be retracted. Another way of viewing partial commitment is as a means of calculating the entailments of previously made decisions, or of making explicit facts already available in implicit form. A problem solver that uses partial commitment progressively commits to more fields of a component until all of them are assigned values. The major benefit of the partial commitment approach is reduced search (backtracking), since a potentially retractable full commitment is replaced by a safe partial commitment. Another benefit has to do with finding all solutions of the constraint satisfaction problem: since a partial commitment makes no unnecessary commitments that would have to be retracted later, the state of the computation implicitly carries all the solutions to the CSP.
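A partial commitment can be computed mechanically by intersecting the field/value pairs of the remaining alternatives. The sketch below is again our own illustrative code, reusing the Example 1 data from the earlier sketch.

    def partial_commitment(alternatives):
        # The common part: field/value pairs shared by every alternative.
        common = dict(alternatives[0])
        for alt in alternatives[1:]:
            common = {f: v for f, v in common.items() if alt.get(f) == v}
        return common

    print(partial_commitment(X_choices))   # {'p1': 'a'}
    print(partial_commitment(Y_choices))   # {'p1': 'd'}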
2. Partial Guess

The notion of partial commitment crucially depends on the existence of a common part for a set of alternatives. Often, there is nothing common among all the choices considered. We introduce the notion of partial guess to make the same idea work in this case. Consider the following example.

Example 2. There are two variables X and Y, each having two fields p1 and p2, with identical domains of possible choices:

    X: {[p1: a, p2: b], [p1: a, p2: c], [p1: d, p2: e], [p1: d, p2: f]}
    Y: {[p1: a, p2: b], [p1: a, p2: c], [p1: d, p2: e], [p1: d, p2: f]}

The domains of the variables internal to these components are p1: {a, d} and p2: {b, c, e, f}. The constraints on X and Y are:

    C1: X.p1 = a iff Y.p1 = d;    C2: Y.p2 = e iff X.p2 = c
    C3: X.p1 = d iff Y.p1 = a;    C4: Y.p2 = c iff X.p2 = e

Figure 1 (Choices and solutions for Example 2) shows all 16 candidate solutions, with the four solutions marked. It is not possible to use the partial commitment approach as introduced earlier, since there is no common part among the 4 possible choices for X and Y. It is necessary to modify the partial choice idea to make it work in this case; the modified approach will be called the partial guess approach. It is based on the idea that it is possible to introduce subsets of the original set that do have common parts, and to make commitments to the introduced subsets instead of committing to individual components. The subsets with common parts for our example are shown in Figure 2 (Choices for X and Y in Example 2, organized as a tree).

Since the common-part decomposition is based on p1, we use p1 for the first decision. There are multiple alternatives for both X.p1 and Y.p1; we arbitrarily select the assignment of a value to X.p1 first (to simplify the discussion, the problem was structured symmetrically, so selecting Y.p1 first would work similarly). There are two possible values, X.p1 = a or X.p1 = d; we arbitrarily choose X.p1 = a. Assuming that the problem solver can use constraints in the generation of alternatives (the ideas are still applicable if it cannot), constraint C1 can then be used to assign Y.p1 = d. Making the partial guess X.p1 = a effectively reduced the search space by eliminating the four choices in the top-left quadrant of Figure 1 from the viable solution candidates. Thus, making partial guesses cuts down the search space by providing the benefit of a hierarchical search, and a solution for Example 2 can be found easily by further examining the remaining candidates.

It is necessary to point out an important difference between making partial guesses with the introduced subsets and making a partial commitment as in Example 1: guesses with the introduced subsets are retractable, while partial commitments are monotonic and do not involve any guessing.
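The subset-forming step of a partial guess can be sketched the same way (illustrative code, assumed data layout): group the alternatives by the decomposition field and commit to one group, so that a retraction later discards the whole group at once.

    XY_choices = [{"p1": "a", "p2": "b"}, {"p1": "a", "p2": "c"},
                  {"p1": "d", "p2": "e"}, {"p1": "d", "p2": "f"}]

    def split_by(field, alternatives):
        groups = {}
        for alt in alternatives:
            groups.setdefault(alt[field], []).append(alt)
        return groups

    groups = split_by("p1", XY_choices)
    x_candidates = groups["a"]   # the partial guess X.p1 = a ...
    y_candidates = groups["d"]   # ... and its entailment via C1: Y.p1 = d
    # Only 4 of the 16 (X, Y) pairs remain; if the guess fails, the whole
    # group is retracted without examining its members one by one.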
In the examples we used for illustration the common part (either as a committment or as a guess) represented a description of a set of choices which could be instantiated by filling in the other variables of the components to obtain the complete components. Viewed this way, partial committment is the special case where a description applies to all members of a set. As a result, the description is a monotonic inference. Partial guess is the general case where the description may cover only a sub-set and may have to be retracted as the problem solving proceeds. We elaborate on this view in the next section where we present the use of taxonomies as a particular way of representing the descriptions of solution sets. There is a long history in Al of representing a set by abstracting the common parts and organizing the set in a taxonomic classification. The same basic idea can be used as a way of organizing choices for constraint problems. A set of component choices can be organized in a hierarchy. intermediate nodes in this hierarchy represent a subset of the choices characterized by some common description of all the choices. For example, in a computer configuration problem, the set of printers may be organized in a taxonomy of high speed, medium speed, and low speed printers. The high speed printer node represents the subset of printers that all have speeds greater than say 1OOcps. Notice that at this level of description of the set, nothing is said about other properties of printers such as technology of printing, quality of printing, cost, interface, etc. A. Searching with taxonomies We briefly sketch a method for using taxonomic grouping of choices for making partial choices. The basic ideas are as follows. First, the choices for variables are organized into taxonomies. Second, instead of committing to a complete component as the value of a variable, we allow a more abstract description to be assigned to a variable. This description is a node in a pre-defined taxonomy. Third, the constraints have to be written in such a way that they can operate on these taxonomic descriptions. Finally, the 634 Engineering Problem Solving search methods operate by moving down the taxonomy. If the correct taxonomic node cannot be committed to and a guess has to be made, it is made by selecting one of the nodes. If the choice later proves to be incorrect, then the problem solver backs up and selects another node. If a choice at some level does not allow all the applicable constraints to be processed, the problem solver descends to the next level and makes further choices. Notice that a retracted node allows a whole set of choices to be marked as no-goods (in ATMS terms [de Kleer, 19861) without having to examine them individually. Furthermore, the chosen node represents a partial choice that would allow further inferencing. Space limitations do not allow detailed consideration of the algorithmic choices for building such problem solvers. We summarize some of the advantages of using taxonomies for organizing choices. One, they allow partial choices to be made which leads to more efficient problem solving. Two, they provide a compact description of the no-good sets. For example, the no-goods determined from the constraint, “Class Foo of word-processing programs need letter-quality printers”, can be compactly expressed as: (((Word-Processing Foo) (Printer Dot-Matrix)), ((Word-Processing Foo) (Printer Thermal))) In other words, no-goods can be represented compactly by pairs of inconsistent classes. 
Without the use of taxonomies, one would need to enumerate the actual printers and word-processing programs that are now represented by the classes Dot-Matrix, Thermal, and X. Finally, taxonomies provide a compact language for describing the solutions to the constraint problem. For example, instead of enumerating the sets of components that are consistent, one might be able to express them more succinctly in terms of these taxonomies. For example, ((Word-Processing X) (Printer Letter-Quality)) compactly describes a potentially very large set of solutions. 5. atural taxonomies In practice, it may not be easy to automatically compute an appropriate way of organizing a set of components in a hierarchy that is most effective during search. There may be competing ways of decomposing a component, some offering search advantages over the others. Furthermore, common part decomposition involve multiple component properties. The decomposition search space for computing all possible ways of finding common parts is a power set over the set of properties in a component. Natural taxonomies that evolve over a period of use by different users in some domains provide some relief from those problems. Essentially, taxonomies can be used to represent pre-determined decisions about how to abstract common parts. The different levels of the taxonomy reflect a decision about which variables are more useful to commit to earlier. As we pointed out previously, this decision is intrinsically tied to the nature of the constraints and choices. One can conjecture that the evolution of these taxonomies reflects a continuing experimentation with ways to make the proper choices with respect to features to abstract. A stable taxonomy, as obtained from domain experts, represents a compilation of these prior experimentations. Another kind of decision embedded in a taxonomy is the grouping of variables in a node. Consider the case where a set of choices can be abstracted along different dimensions by finding different common parts. The constraint relationships might be such that instead of walking down a hierarchy, making progressively larger choices (i.e., involving more variables), it might make more sense to directly commit to two or more variables. These meta-level choices are again reflected in how the taxonomies are created. Note, that instead of viewing these kinds of properties of taxonomies as a given, one can use them actively in developing suitable taxonomies. Thus, there is a duality between characteristics of existing taxonomies and problem solving criteria for forming new ones. Use of taxonomies also helps in acquiring and maintaining the knowledge-base of components and constraints (see [Frayman and Mittal, 19871 for an extended discussion). . sin Use of taxonomies suffers from one potentially fatal flaw. Consider what happens if the choices and constraints are such that a suitable hierarchical classification cannot be made or is not made. In the worst case, all the choices may be grouped under a single class. Taxonomies would not only not help but may lead to much wasted effort while the other nodes are rejected. Another, and more common, situation can occur when there are alternate ways of classifying some choices and it cannot be pre-determined which of those ways is more useful. Figure 3 shows three alternate ways of classifying printers, each of which is useful in resolving some of the constraints some of the time. 
C. Using multiple hierarchies

Use of taxonomies suffers from one potentially fatal flaw. Consider what happens if the choices and constraints are such that a suitable hierarchical classification cannot be made, or is not made. In the worst case, all the choices may be grouped under a single class; taxonomies would then not only not help but could lead to much wasted effort while the other nodes are rejected. Another, and more common, situation occurs when there are alternate ways of classifying some choices and it cannot be pre-determined which of those ways is more useful. Figure 3 shows three alternate ways of classifying printers, each of which is useful in resolving some of the constraints some of the time; any single hierarchy which merges the three will lead to the wrong partial choice some of the time.

Figure 3: Alternate ways of classifying printers (by speed; by print quality: letter quality, near letter quality, draft quality; and by technology)

We briefly sketch a method whereby one can search using multiple alternate ways of organizing a set of choices. The basic idea is to extend the problem solver to simultaneously make partial choices in each of the alternate hierarchies. For example, instead of just selecting a node in a single hierarchy, the problem solver can select a node in whichever hierarchy is most relevant to the constraint being considered. Each such partial choice leads to certain constraints being processed, which in turn allows further inferencing. Once all the constraints have been satisfied, the choices described by the chosen nodes in each of the hierarchies can be intersected to find the actual solutions. For example, the problem solver could simultaneously make partial choices from each of the hierarchies shown in Figure 3; at the end of constraint satisfaction, the printer choices might be described by {HighSpeed & LetterQuality & Laser}. By intersecting the sets of printers described by these classes, one obtains the actual set of printers which constitute all the consistent solutions. Note that the same intersection technique can be used during intermediate stages of problem solving to quickly determine whether the current partial choices are mutually consistent. By allowing multiple hierarchies, one can avoid the problems associated with a single taxonomy.
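The intersection step can be sketched directly; the catalog here is again invented example data, with one partial choice per hierarchy followed by a set intersection over their members.

    CATALOG = {
        "P1": {"speed": "HighSpeed", "quality": "LetterQuality", "tech": "Laser"},
        "P2": {"speed": "HighSpeed", "quality": "DraftQuality", "tech": "DotMatrix"},
        "P3": {"speed": "LowSpeed", "quality": "LetterQuality", "tech": "Laser"},
    }

    def members(dimension, node):
        return {p for p, d in CATALOG.items() if d[dimension] == node}

    # {HighSpeed & LetterQuality & Laser}, as in the text:
    print(members("speed", "HighSpeed")
          & members("quality", "LetterQuality")
          & members("tech", "Laser"))          # {'P1'}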
1. Conclusion

This paper introduced the notion of partial choice for constraint satisfaction problems in structured domains. Partial choice represents an improvement over the least commitment strategy when least commitment has to rely on guessing to proceed. There are two flavors of partial choices: partial commitments, which do not have to be retracted, and partial guesses, which may be retracted. Partial commitments are applicable when the alternate choices have a common part, and they amount to computing the entailments of previously made decisions. Partial guesses are applicable when the alternative choices do not have a common part but can be divided into non-intersecting subsets with common parts; partial guesses select such subsets, which allows whole subsets to be ruled out without considering all their elements. A common thread through all the ideas presented here is the use of descriptions of classes of solutions as opposed to the actual solutions; working with such descriptions, in appropriate cases, can both reduce search and help find multiple solutions.

Our use of hierarchies for structuring a set of choices is similar in some ways to the use of hierarchical domains for arc consistency, i.e., the removal of local inconsistencies for binary constraints [Mackworth et al., 1985]. In some ways the notion of partial choice as applied to structured values is complementary to the use of hierarchies in [Mackworth et al., 1985]: there, hierarchies are an effective way to organize the choices for a variable without any consideration of the internal structure of the choices, whereas we are particularly concerned with situations where the choices have internal structure in terms of additional variables and constraints. Thus, hierarchies are a way of organizing a set of variables that effectively allows a set of constraints (i.e., ones internal to the set of variables comprising a component) to be ignored. We suspect that both of these ideas can be effectively used together. Another difference between the two works is that while arc consistency algorithms only remove some of the inconsistencies, partial choice methods are also effective in finding consistent solutions to the complete CSP. A continuing area of investigation for us is to extend some of the existing constraint reasoning methods in a way that incorporates partial choice ideas, especially the use of multiple taxonomies. We are also investigating whether our ideas can be incorporated into the various network consistency algorithms [Mackworth, 1977; Mackworth et al., 1985].

Acknowledgements

We are grateful to Agustin Araya and Mark Stefik for useful discussions of some of the ideas presented in this paper. Danny Bobrow, Johan de Kleer, and Mark Stefik gave insightful comments on an earlier draft of this paper. Danny pointed out the relationship to the work on minimization of boolean terms.

References

[Araya and Mittal, 1987] A. Araya and S. Mittal. Compiling design plans from descriptions of artifacts and problem solving heuristics. To appear in Proc. IJCAI-87, Milan, Italy, International Joint Conference on Artificial Intelligence, August 1987.

[de Kleer, 1986] J. de Kleer. An assumption-based TMS. Artificial Intelligence, 28(2):127-162, March 1986.

[de Kleer and Brown, 1986] J. de Kleer and J. S. Brown. Theories of Causal Ordering. Artificial Intelligence, 29(1):33-61, July 1986.

[Frayman and Mittal, 1987] F. Frayman and S. Mittal. Cossack: A constraints-based expert system for configuration tasks. To appear in Proc. 2nd Intl. Conf. on Applications of AI to Engineering, Boston, MA, August 1987.

[Mackworth, 1977] A. K. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8:99-118, 1977.

[Mackworth et al., 1985] A. K. Mackworth, J. A. Mulder, and W. S. Havens. Hierarchical arc consistency: exploiting structured domains in constraint satisfaction problems. Computational Intelligence, 1(3-4):118-126, August-November 1985.

[Mittal and Araya, 1986] S. Mittal and A. Araya. A Knowledge-Based Framework for Design. In Proc. AAAI-86, pages 856-865, Philadelphia, PA, American Association for Artificial Intelligence, August 1986.

[Mittal et al., 1986] S. Mittal, C. L. Dym, and M. Morjaria. PRIDE: An Expert System for the Design of Paper Handling Systems. Computer, 19(7):102-114, July 1986.

[Stefik, 1981] M. J. Stefik. Planning with constraints. Artificial Intelligence, 16(2):111-140, 1981.
PROMPT: An Innovative Design Tool

Seshashayee S. Murthy and Sanjaya Addanki
IBM T. J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598

Abstract

We describe a system, Prompt, used to design physical systems. Prompt employs a multi-level approach to design. When simple constraint propagation over prototypes [Adda85] fails, Prompt can significantly modify the prototypes by reasoning about their structure and physics. Prompt derives the behavior of a prototype from its structure using knowledge of physics stored in a Graph of Models. It then uses heuristics called Modification Operators to control the process of modifying the prototypes. In this paper we describe how our system works in the domain of structural design: the kinds of analysis Prompt performs on beams and how it makes innovative changes to prototypes.

1 Introduction

Most existing design systems [Srir85, Mitc85, Mitt86b] configure their designs from a fixed, finite library of atomic components. The design process in such systems is best thought of as hierarchical refinement: these systems start with a rough schematic that is successively refined at each step. Each stage of the refinement process generates constraints which are used to guide the partial design; successive refinements are performed, and in the final stage the atomic components and their parameter values are chosen so as to satisfy these constraints.

Such design systems use functional models [Joha85] of prototypes and therefore cannot reason about, or modify, the structure or behavior of the atomic components. It is often the case that the constraints generated by such a process cannot be satisfied by any combination of parameters of the components in the library. For example, it is evident that weight and torsional stiffness requirements conflict in the design of beams. If the system's entire library of beams consists of a solid circular beam parameterized by its radius and length, it is easy to conceive of a design specification that cannot be met by varying parameters over the library.

The solution to such design problems lies in modifying one or more prototypes in the library. In order to modify a prototype, the system must first be able to derive the behavior of the prototype from its structure and the underlying physics; it must then use the results of this analysis to appropriately modify the prototype. There are two major difficulties with developing a system that works in this manner. First, analysis is expensive: it requires representing large engineering domains, and prototypes can be analyzed along many different dimensions. Second, mapping the results of analysis to structural changes requires sophisticated reasoning about complex equations. In its full generality the Analyze-Modify loop is very powerful but inefficient.

Prompt is a design system that is capable of analyzing and modifying the prototypes in its library. Prompt introduces two mechanisms to help alleviate the difficulties of the Analyze-Modify loop. Graphs of Models [Adda87, Penb87] are helpful in reasoning about large engineering domains; these references describe the Graph of Models paradigm in detail. Modification Operators consist of three parts: a set of preconditions on the application of the operator, a set of heuristics that help focus the analysis of the prototype, and a set of heuristics that control the mapping from the results of the analysis to the structural changes.
In our example above, the second part of a Modification Operator directs Prompt to analyze the prototype of the solid circular beam with respect to stress distribution; the analysis is carried out using the Graph of Models paradigm. The third part of the operator directs Prompt to redistribute the mass of the rod, thus inventing the concept of a tube. More powerful than pure parameterization and less powerful than general reasoning from first principles, Modification Operators offer an intermediate level of reasoning about structural changes.

The rest of this paper describes the main ideas in Prompt by working through our example. Beam design is an interesting domain for various reasons. First, the theory is well understood, thus providing a firm foundation for analysis from first principles. Second, the domain is of some importance in robot design. Finally, the domain includes interesting aspects of reasoning about shape and mathematical equations.

2 Constraint Satisfaction

The first stage in Prompt is similar to existing design systems. Hierarchical decompositions and the atomic components are stored in structures called Prototypes [Adda85]. Prototypes are similar to the frame-like representation structures used in other systems [Srir85, Mitc85]. A component prototype contains the constraints on its interactions with other components. The prototype also contains the behavior of the component, stored as a function of pre-determined parameters. The difference between our prototypes and those in existing design systems is that our prototypes contain the structural description of the component; this knowledge helps Prompt analyze the behavior of components from the underlying principles. Figure 1 shows a sample prototype of a solid cylindrical beam:

    Structure: (solid-cylinder rod) (radius rod R) (length rod L)

    Ts = Mt/φ = (π/2) G R^4 / L     (1.1)
    W  = π R^2 L D                  (1.2)

Here Ts is the torsional stiffness of the rod, Mt is the external torque applied to the rod, φ is the angle by which the rod twists, L is the length of the rod, R is the radius of the rod, W is the weight of the rod, G is the shear modulus of the rod's material, and D is the density of that material.

Figure 1: The prototype for a solid cylindrical rod. The prototype has the following parameters: the radius R, the length L, and the material of which the rod is constructed; the material determines the density D and the shear modulus G. The behavior of the prototype as a function of these parameters is described by Eqns. 1.1 and 1.2.
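A prototype of this kind is easy to mimic as data plus behavior functions. The sketch below is ours, not Prompt's frame representation, and the steel properties are illustrative SI values.

    from math import pi

    MATERIALS = {"steel": {"G": 79e9, "D": 7850.0}}  # shear modulus (Pa), density (kg/m^3)

    def torsional_stiffness(R, L, material):
        # Eq. 1.1: Ts = (pi/2) * G * R**4 / L
        return (pi / 2) * MATERIALS[material]["G"] * R**4 / L

    def weight(R, L, material):
        # Eq. 1.2: W = pi * R**2 * L * D
        return pi * R**2 * L * MATERIALS[material]["D"]

Checking the constraints of Eqn. 2 below then amounts to evaluating these two functions over the admissible parameter ranges.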
3 Analysis
Prompt has a powerful analysis component that is capable of analyzing prototypes from first principles. The large amount of physics knowledge required for reasoning from first principles is stored in a Graph of Models [Adda87]. A model describes the behavior of the system under certain explicit assumptions. The models form the nodes in a graph. An edge in the graph represents a set of assumptions that must be added or relaxed to go between adjacent models. Problem solving in a given domain is now reduced to iterating between search for the appropriate model and search for the appropriate knowledge within the model. Searching for the appropriate model is simplified by explicitly representing assumptions and reasoning about conflicts with respect to assumptions. Searching within a model is simplified by the relatively small size of the different models. The Graph of Models structure and its advantages are described in detail in [Adda87, Penb87].

Each domain has a unique Graph of Models structure. Prompt will include the following types of analysis of structures: 1. bending loads; 2. torsional loads; 3. vibrational loads; 4. buckling; 5. distributed loads. Each of these forms of analysis is a domain and has its own Graph of Models. Figure 2 is a Graph of Models for beams under torsional loads [Cran72]. It is to be noted that additional models can be very easily added to this structure.

[Figure 2 shows models for slender members, constant cross-sections, elastic materials, rectangular cross-sections (a, b), solid beams, and hollow thin-walled shafts, connected by edges labeled with assumptions.]
Figure 2. The Graph of Models for beams under torsional loads. The assumptions are listed under the model name.

Given a simple structure that lends itself to analytical solution, Prompt obtains sets of equations that describe the behavior of the system. More commonly, the structure or its loading is complex, and it may not be possible to obtain closed form solutions. In such cases Prompt will have to use numerical techniques like finite-element analysis [Desa72] to obtain the numerical distributions of the relevant variables, e.g. stress, in the various parts of the structure. In our simple case Prompt generates the following equations to describe the behaviour of the rod in Figure 1 under the assumptions of Model 7. The equations describing a beam under Model 7 are listed below (3.1 - 3.6); the equations describing the rest of the models may be found in [Cran72].

τ_θz = G·γ_θz (3.1)
γ_θz = (dφ/dz)·r = (φ/L)·r (3.2)
df = τ_θz·dA (3.3)
dM = r·df (3.4)
M_t/φ = (1/L)·∫_A G·r^2·dA (3.5)
W = ∫∫ D·dA·dz (3.6)

Here τ_θz is the shear stress, γ_θz is the shear strain, dφ is the angle between two cross-sections a distance dz apart, r is the distance from the center of an area dA, df is the force exerted by the area dA, and dM is the torque exerted by the area dA.

Although in this case the results of the analysis are valid, Prompt may find that its analysis invalidates the initial selection of assumptions. For example, the analysis may show that the stresses caused by an external torque are greater than the yield stress [Cran72] of the material. Also, the modifications Prompt makes to prototypes may invalidate the current model. For example, the assumption of a uniform cross-section will be violated if Prompt tapers the cylindrical rod.
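When an assumption such as a uniform cross-section is violated, Prompt must move to a different model. The toy below (model names, assumption labels, and the edge rule are all invented for illustration) shows the two searches just described: moving to an adjacent model, and finding a model none of whose assumptions are violated:

```python
# a hypothetical miniature Graph of Models for beams under torsion
MODELS = {
    "model7": {"slender", "constant-cross-section", "elastic", "solid"},
    "hollow-shaft": {"slender", "constant-cross-section", "elastic", "thin-walled"},
    "tapered": {"slender", "elastic", "solid"},
}

def neighbors(model):
    """Models reachable by adding or relaxing at most two assumptions (an edge)."""
    base = MODELS[model]
    return [m for m, asms in MODELS.items()
            if m != model and len(base ^ asms) <= 2]

def find_model(violated):
    """Search for a model none of whose assumptions are violated."""
    return next((m for m, asms in MODELS.items() if not (asms & violated)), None)

print(neighbors("model7"))                      # ['hollow-shaft', 'tapered']
print(find_model({"constant-cross-section"}))   # 'tapered' survives a taper
```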
In such situations, Prompt can reason about the prototype with respect to the assumptions in order to find the appropriate model [Adda87, Penb87]. Note that, in general, analysis is very expensive because it requires reasoning through very large domains, and prototypes can typically be analyzed along many dimensions. In Prompt, as we shall shortly see, analysis is controlled by Modification Operators.

4 Modification Operators
Modifying a prototype requires reasoning about the results of the analysis in order to determine the appropriate modification. In our case, modifying the rod requires the system to realize that the important equations are 3.5 and 3.6, because they describe the stiffness and the mass of the beam in terms of its underlying structure. Then, examining these equations shows that the contribution of a given area dA to stiffness varies proportionally to r^2, while the contribution of the same area to mass is invariant with respect to r. This leads to the final conclusion that moving mass from areas of low r to areas of high r increases the stiffness of the beam but does not affect the mass of the beam.

The difficulty of directly implementing such a scheme is that it requires very complex and sophisticated reasoning that can explode combinatorially. Fortunately, of the many different types of changes that may be derived from first principles analysis, a small number find frequent use in meeting design constraints. These changes are notable in that they can be precompiled in the form of heuristics. We call these packaged heuristics Modification Operators. Our example above illustrates the Mass Redistribution Operator (MRO). The MRO captures the heuristic "Move mass from areas bearing a low load to areas bearing a high load". Modification Operators consist of three parts. The first part is a set of conditions under which the operator is likely to be useful, the second part consists of heuristics that direct the analysis of the prototypes, and the third part consists of heuristics that direct the changes that are to be made to the prototype. The three parts of the MRO can be summarized as: use in case of conflict between strength and mass; derive the stress distribution and the mass distribution; move mass from areas bearing a low load to areas bearing a high load.

Precompiling these changes results in a level of reasoning that forms a bridge between the purely parameterized level of component descriptions, and the very general, but inefficient, level of first principles analysis. Although Modification Operators may be regarded as parameterized operators, they are really more complex. Application of these Modification Operators requires the ability to derive the behavior of the component from its structure; this is quite different from the simple constraint propagation normally used for choosing parameters. Apart from efficiency, storing Modification Operators also allows Prompt to use operators that are non-trivial to derive from the underlying analysis. An example of such an operator is the Shape Modification Operator below.

Each design space has its own set of Modification Operators. We describe below a set of operators that we have identified for the beam and structural design problem. These are well known to civil and mechanical engineers.

1. Redistribution of mass. There are two aspects to this operator.
a. Removal of mass. Mass can be removed from regions bearing a low load. This decreases the weight of the structure without affecting its load bearing capacity adversely.
b. Addition of mass.
Mass can be added to a beam in regions where it would bear a high load. This increases the load-bearing capacity of the structure without increasing its weight adversely. These two operators acting in conjunction can be used to decrease the weight of a structure without affecting its load bearing capability.

2. Changing the material distribution. If an element is under high stress, a high strength material can be used to bolster the structure. If certain members are under low stress, a low strength material can be used to build them. Changing the material alters the density, the shear modulus, and the elastic modulus. The load bearing characteristics can be altered without affecting the volume of the beam. Different load mixes can be handled with composite materials.

3. Changes in shape. Changes in shape are used to remove areas of high stress in a beam, to change the connectivity of regions, and to prevent buckling. An example of the operator's use is given in Figure 3. Here the corners are rounded to decrease the high stress that occurs at corners under a torsional load. This is an example of an operator that is not easily derivable from the equations that describe the behaviors of structures.

Figure 3. Rounding the corners of a square to reduce stress in the corners.

4. Addition or removal of elements. Elements can be added to a structure at points of high deflection. They decrease the deflection and the stresses in the rest of the structure. Elements can also be added to make a structure fail-safe. Elements that are under low stress can be removed totally. Figure 4 gives an example of the use of this operator.

Figure 4. Addition of a beam at points of high deflection in a structure.

Modification Operators thus constitute the intermediate level of our design system. They are derivable from the underlying principles but are precompiled and stored for efficiency. The changes that they can effect are more general than those that can be effected by parameters. At the same time they are more expensive to apply.

5 Our Example Revisited
Let us now see how Modification Operators can be applied to alter the prototype of Figure 1 to bring it into conformity with the constraints in Eqn. 2. Analyzing the beam under a twisting moment M_t, Prompt obtains the stress distribution across the cross section of the beam. A graphical version of the distribution is shown in Figure 5. Matching its goals to the intended effects of the Modification Operators, Prompt chooses to apply Modification Operator #1, because one of the effects of this operator is reducing mass without significantly affecting the beam's load bearing capacity. The heuristic associated with the operator guides Prompt to remove mass from regions bearing a low load and to add mass in regions where it would bear a high load.

Figure 5. The loading diagram for a cylindrical beam under a torsional load (shading runs from low stress at the center to very high stress at the rim).

Eqn 3.4 shows that the stress distribution is radially symmetric around the axis of the beam and, further, that stress increases with radius. Prompt therefore chooses to remove a cylindrical shape of radius R_i from the center of the beam. By going through a couple of iterations of analysis and choosing R_i and R_o, it selects values that satisfy the weight constraint and the stiffness constraint. It thus comes up with the design shown in Figure 6. This can be stored as a prototype to be used in future designs.

Figure 6. Modification of the prototype to meet weight and torsional stiffness constraints.
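A numeric stand-in for this iteration is shown below. The hollow-shaft formulas T_s = (π/2)·G·(R_o^4 − R_i^4)/L and W = π·(R_o^2 − R_i^2)·L·D are standard [Cran72]; the material constants and requirements are the same invented values as before, and the brute-force search merely mimics Prompt's analyze-and-choose loop:

```python
import math

G, D, L1 = 79e9, 7850.0, 1.0       # hypothetical steel-like material, length 1 m
T1, W1 = 5.0e4, 8.0                # stiffness (N*m/rad) and weight (kg) requirements

def tube_stiffness(Ri, Ro):        # hollow shaft: Ts = (pi/2) G (Ro^4 - Ri^4) / L
    return (math.pi / 2) * G * (Ro**4 - Ri**4) / L1

def tube_weight(Ri, Ro):           # W = pi (Ro^2 - Ri^2) L D
    return math.pi * (Ro**2 - Ri**2) * L1 * D

def design_tube():
    """Brute-force stand-in for the analyze-choose iteration on Ri and Ro."""
    for ro_mm in range(1, 200):                 # outer radius in millimeters
        for ri_mm in range(ro_mm):              # hollow out as much as allowed
            Ri, Ro = ri_mm / 1000.0, ro_mm / 1000.0
            if tube_stiffness(Ri, Ro) >= T1 and tube_weight(Ri, Ro) <= W1:
                return Ri, Ro
    return None

print(design_tube())   # (0.023, 0.029) for these numbers: a thick-walled tube
```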
If the user now decides to add an additional constraint, Bending stiffness ≥ B_1, Prompt needs to modify the prototype again. Prompt analyzes the beam under both torsional and bending loads. The results are graphically depicted in Figure 7. Here Prompt has the goal of improving the bending stiffness in one direction. The Shape Modification Operator has this as one of its intended effects. The heuristic associated with the operator guides Prompt to elongate the shape in the direction of the bending load. It therefore decides to use an ellipse that has its major axis in the direction of the bending load. The result is shown in Figure 8.

Figure 7. Loading diagram under torsional and bending loads (shear stress shading runs from low to very high).
Figure 8. Modification of the beam to handle bending stress. Prompt uses the Shape Modification Operator to increase the bending stiffness while not compromising the torsional stiffness.

6 Status of Work
The envisioning apparatus within Prompt has been implemented for the domains of dynamics, kinematics, statics, and fluid dynamics. We are at present extending it to do stress analysis. The design system built on top of this is not complete at this point.

7 Comparison with Other Work
Current design systems do not attempt to analyze designs at more than one level. Expert human designers display this capability; they use handbooks to get standard parts, but they are capable of reanalyzing the parts to change them if the need arises. This ability is essential for a system to be able to do more than routine design.

It has been recognized that systems that perform the task of diagnosis should have more than surface rules. Chandrasekaran [Chan83] argues that deep knowledge is indispensable to a system performing diagnostic problem solving. Randall Davis [Davi85] describes a system that reasons from first principles, i.e., using knowledge of structure and behavior; the system has been implemented in the domain of troubleshooting digital electronic circuits. More recently Iwasaki [Iwas86] describes Acord, a model-based diagnostic program that reasons about physical device behavior in the domain of a coal power plant. However, ours is the first attempt at using such deep knowledge, derived from the structure of the device, to bring about changes that are not merely parameterized.

Dominic [Dixo86] embodies the idea that for physical systems, analysis is very important and that an iterative analyze-modify cycle is the correct way to approach design in these fields. However, the system is capable of only doing routine design; it can only vary the parameters of the prototype to bring it into conformity with the constraints. All-Rise [Srir85] is a knowledge-based expert system for preliminary structural design. Problem solving in All-Rise involves generating a solution tree of alternatives by processing through a static knowledge hierarchy. It does not attempt to modify the prototypes that are available in the library. Pride [Mitt86b, Mitt86a] is a system built to design paper transports in copiers. It uses data-dependency backtracking to do hierarchical refinement. We have not addressed this issue in our paper.

Edison [Dyer86] is a system designed to create a model for experimenting computationally with the processes of invention, analogy, and naive mechanical device representation.
The paper presents a theory of invention based on an episodic memory-based understanding of device functionality, memory generalization, analogical reasoning, and symbolic representation. Edison has three general strategies for creating novel devices: generalization, analogy, and mutation. However, Edison pays little attention to the problem of finding the behavior of a device based on its structure. Also, the mutation heuristics which the authors mention are not elaborated on. Modification Operators, by contrast, are firmly based in the physics of the domain we are considering.

Using the Graph of Models approach for representing knowledge, Prompt is able to analyze a prototype based on first principles and derive its behavior from its structure. It uses Modification Operators as a method of making changes to the prototype based on this analysis. By providing a means to alter prototypes that exist in the library, Prompt can handle situations where parameter variation alone will not suffice to satisfy the constraints. No library, however complete, can hold all possible changes to prototypes. By using Modification Operators, and when this fails, by discovering changes from first principles, Prompt is able to reach more points in the design space.

8 Conclusions
This paper introduces Modification Operators as an intermediate level in reasoning about satisfying design constraints. Modification Operators lie between simple constraint propagation and first principles analysis in power and efficiency. We showed, through an example, that constraint propagation can fail to satisfy given design specifications. We also showed that satisfying these specifications requires reasoning from first principles about the behavior of existing prototypes. Modification Operators are compiled versions of some of the changes derivable from such reasoning. Applying these operators requires some amount of first principles analysis to determine their applicability. The designs resulting from first principles reasoning, and from the application of Modification Operators, are novel in that they do not exist within the system library. Hence Prompt is an approach for moving towards systems that can produce "creative" designs for novel situations.

Bibliography
[Adda85] Addanki, Sanjaya and Davis, Ernest S. A Representation for Complex Domains. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 1, August 1985.
[Adda87] Addanki, Sanjaya, Penberthy, Scott, and Davis, Ernest. Envisioning as Problem Solving: An Application of Prompt. IBM Research Report, 1987.
[Chan83] Chandrasekaran, B. and Mittal, Sanjay. Deep Versus Compiled Knowledge Approaches to Diagnostic Problem-Solving. International Journal of Man-Machine Studies, (19):425-436, 1983.
[Cran72] Crandall, S. H., Dahl, N. C., and Lardner, T. J. An Introduction to the Mechanics of Solids. McGraw-Hill Book Company, 1972.
[Davi85] Davis, Randall. Diagnostic Reasoning Based on Structure and Behavior, pages 347-411. In Bobrow, Daniel G., Qualitative Reasoning about Physical Systems. The MIT Press, 1985.
[Desa72] Desai, Chandrakant S. and Abel, John F. Introduction to the Finite Element Method. Van Nostrand Reinhold, 1972.
[Dixo86] Dixon, John R. Artificial Intelligence and Design: A Mechanical Engineering View. Proceedings of the Fifth National Conference on Artificial Intelligence, 2, July 1986.
[Dyer86] Dyer, Michael G., Flowers, Margot, and Hodges, Jack.
Applications of Artificial Intelligence in Engineering Problems, chapter Edison: An Engineering Design Invention System Operating Naively, pages 327-341. Springer-Verlag, 1986.
[Iwas86] Iwasaki, Yumi. Model-Based Reasoning of Device Behaviour with Causal Ordering. CMU CS Tech Report, 1986. Thesis proposal.
[Joha85] de Kleer, Johan and Brown, John Seely. A Qualitative Physics Based on Confluences, pages 7-84. In Bobrow, Daniel G., Qualitative Reasoning about Physical Systems. The MIT Press, 1985.
[Mitc85] Mitchell, Tom M., Steinberg, Louis I., and Shulman, Jeffrey S. A Knowledge-based Approach to Design. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(5), September 1985.
[Mitt86b] Mittal, Sanjay and Araya, Augustin. A Knowledge-based Framework for Design. Proceedings of the Fifth National Conference on Artificial Intelligence, 2, July 1986.
[Mitt86a] Mittal, S., Dym, C. M., and Morjaria, M. PRIDE: An Expert System for the Design of Paper Handling Systems. Computer, July 1986.
[Penb87] Penberthy, J. Scott. Incremental Analysis in the Graph of Models: A First Step Towards Analysis in the Plumbers' World. S.M. Thesis, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1987.
[Srir85] Sriram, Duvvuru. Knowledge-Based Approaches for Structural Design. PhD thesis, Carnegie Mellon University, Civil Engineering Department, Pittsburgh, 1985.
1987
106
557
Reasoning about Discontinuous Change
Toyoaki Nishida and Shuji Doshita
Department of Information Science, Kyoto University, Japan

ABSTRACT
Intuitively, discontinuous changes can be seen as very rapid continuous changes. A couple of alternative methods based on this ontology are presented and compared. One, called the approximation method, approximates discontinuous change by a continuous function and then calculates a limit. The other, called the direct method, directly creates a chain of hypothetical intermediate states (mythical instants) which a given circuit is supposed to go through during a discontinuous change. Although the direct method may fail to predict certain properties of discontinuity and its applicability is limited, it is more efficient than the approximation method. The direct method has been fully implemented and incorporated into an existing qualitative reasoning program.

I. Introduction
Continuous change is a notion in which quantities are assumed to take a certain amount of time to change value. Discontinuous changes are those to which this assumption does not apply; quantities can change value in a moment. The notion of discontinuous change plays a crucial role in characterizing the behavior of dynamic systems, such as nonlinear oscillators or flip-flops, without worrying about unmotivated details. At the commonsense level, the notion of discontinuous change seems natural; things appear to suddenly stop moving, collide, disappear, and so on. Unfortunately, analysis of discontinuous changes is not easy. This is mainly because ordinary models for physical systems (e.g., circuit equations) do not always specify the system's behavior under discontinuous change in full detail. In textbooks, this problem is often solved by using an ontology in which discontinuous change is very rapid continuous change. A couple of alternative methods are possible to implement this view. One, called the approximation method, approximates discontinuous change by a continuous function and then calculates a limit. The other, called the direct method, uses a notion of mythical instants to describe hypothetical intermediate states which a given circuit is supposed to go through during a discontinuous change. In this paper, we present and compare these two algorithms. We base our theory on qualitative reasoning, a formal theory for causal understanding, and we choose electronic circuits as a subject domain. In the next section, we study properties of discontinuous changes. In section III, we briefly overview previous work in qualitative reasoning and see how discontinuity has been handled. In sections IV and V, we describe the two algorithms separately, and in section VI, we compare the two and summarize the discussion.

II. Discontinuous Change
A. Causes of Discontinuous Change
The varieties of discontinuous changes depend on the physical model employed. In this paper, we study discontinuous changes arising in piecewise linear equation models for electronic circuits, since the use of piecewise linear equations is one of the most popular techniques in the electronic circuit domain. In this modeling, nonlinear circuit elements, such as diodes or transistors, are described with multiple operating regions. Circuit devices modeled with multiple operating regions will be called multiple-mode devices. Figure 1 shows the models for diodes and transistors we employ for explanation in this paper. Although they might appear too simple, they suffice for the discussion below, since the same kind of phenomena arise even when more complex models are used, as will be seen below.
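Computationally, a multiple-mode device amounts to a set of guarded equation systems, one per operating region. A minimal Python sketch for the diode model of Figure 1(a) follows; the threshold value is invented for illustration:

```python
V0 = 0.6   # hypothetical threshold voltage v_0 (volts)

def diode_mode(v_d, i_d, eps=1e-9):
    """Return the operating region consistent with (v_d, i_d), or None."""
    if v_d < V0 and abs(i_d) < eps:
        return "OFF"              # [OFF] v_D < v_0, i_D = 0
    if abs(v_d - V0) < eps and i_d >= -eps:
        return "ON"               # [ON]  v_D = v_0, i_D >= 0
    return None                   # the state violates the piecewise model

print(diode_mode(0.3, 0.0), diode_mode(0.6, 0.02))   # OFF ON
```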
Possible causes of a single occurrence of discontinuous change arising in piecewise linear circuit models can be classified into three categories:
(A1) discontinuous input,
(A2) mode transition of a multiple-mode device, and
(A3) positive feedback without time delay.

(a) A model for diodes: [OFF] v_D < v_0, i_D = 0; [ON] v_D = v_0, i_D ≥ 0.
(b) A model for transistors: [OFF] v_BE < v_0, i_B = i_C = i_E = 0; [ON] v_BE = v_0, v_CE = 0, i_B ≥ 0, i_C ≥ 0. In both modes, i_B + i_C = i_E and v_BE = v_BC + v_CE.
Figure 1. Device models in piecewise linear equations (v_0 is a positive constant).

Discontinuous change caused by cause category Ai will be referred to as type Ai discontinuity. Properties of type A1 discontinuity have been studied in depth in the field of transient analysis. Mathematical techniques such as Laplace or Fourier transform methods are studied to see how discontinuous input affects a given circuit. However, humans seem to use a much simpler method when they characterize the behavior of many pulse circuits at a commonsense level.

Type A2 discontinuity results from the use of piecewise linear equations as a circuit model. For example, the diode model shown in Figure 1(a) causes derivatives of current or voltage to change discontinuously when the operating region of the diode transitions. If a detailed, nonlinear description of a circuit device is used, discontinuity of this type might disappear. However, abstract, less precise models are more useful in characterizing the behavior of nonlinear devices in such circuits as regulators, TTL, or Schmitt triggers. Note also that as long as we use a piecewise linear model, we cannot avoid discontinuous change (at least of first order derivatives) on mode transition.

Positive feedback without time delay may accelerate any small disturbance ad infinitum, resulting in type A3 discontinuity. Positive feedback is not rare in electronic circuits; it is observed in circuits containing devices such as tunnel diodes, which exhibit negative incremental resistance in some operating mode, or one can design a circuit with positive feedback to create a memory (or a stable state), to generate pulses, and so on. In fact, the latter is an important technique in digital circuit design. Note that a positive feedback will not result in a value jump if the feedback factor is less than 1. Note also that ordinary qualitative reasoners may produce an undesired result unless positive feedback is correctly recognized and handled.

B. Properties of Discontinuous Change
Discontinuous change arising from piecewise linear models has at least two properties.
(Property 1) The causal structure of the system may change during discontinuous change. Theories of qualitative reasoning view causality as a value dependency among variables. More computationally, it can be seen as information flow from the cause to the effect [de Kleer and Brown, 1984]. When a circuit consists only of passive elements, such as resistors or capacitors, its causal structure will not change in general. In contrast, when multiple-mode devices are involved, the causal structure of a circuit may change drastically, due to the transition of operating mode. An analysis program should be able to recognize and keep track of the change of causal structure so as to generate a causal explanation.
(Property 2) A number of discontinuous changes of different types may occur one after another.
Discontinuous change applied to the input of a given circuit will be propagated into other parts, possibly creating a complex chain of events. The problem here is that ordinary circuit equations do not contain sufficient information to explain this process in stepwise, causal terms. Consider the hypothetical circuit shown in Figure 2. In textbooks of electronic engineering, the behavior of this circuit with v_IN being raised from zero is usually explained as follows: when v_IN rises and reaches some level, TR1 will turn ON, causing TR2 to turn OFF, causing TR3 to turn ON.

Figure 2. A hypothetical transistor circuit.

This explanation implicitly assumes the following states:
state-1: TR1: OFF, TR2: ON, TR3: OFF
state-2: TR1: ON, TR2: ON, TR3: OFF
state-3: TR1: ON, TR2: OFF, TR3: OFF
state-4: TR1: ON, TR2: OFF, TR3: ON.
Among these, state-2 and state-3 are mythical in the sense that they do not satisfy the circuit equations employed here. By definition, mythical instants terminate instantaneously. Although one might want to use only legal states (namely, state-1 and state-4), the resulting explanation would be acausal and magical. This kind of situation arises as long as one uses an abstract model to capture the physical world. The direct method attempts to use mythical states to create a causal explanation.

III. Previous Work
So far, the mainstream of qualitative reasoning has been analysis under a continuity assumption [de Kleer, 1984] [Kuipers, 1984] [Williams, 1984]. Analysis of discontinuous change has received an inadequate treatment. In the domain of electronic circuits, J. de Kleer and B. Williams each have given algorithms for analyzing operating mode transition. De Kleer's EQUAL does not allow discontinuous change even when a mode transition occurs [de Kleer, 1984, p. 272]. Williams allows discontinuous change only when discontinuity is explicitly indicated in a device model. But neither of the two deals with type A1 or A3 discontinuity or property (2). Qualitative Process (QP) theory [Forbus, 1984] has more flexibility with discontinuous changes. Discontinuous change is allowed when processes are switched or when a special process, one for collision for instance, comes into play. Unfortunately, the process centered ontology of QP theory does not seem to match the device centered view taken by circuit engineers. Aggregation theory [Weld, 1985] also handles discontinuity, but from a different perspective.

IV. The Approximation Method
A. Outline of the Method
If discontinuous changes are very rapid continuous changes, it would be natural to approximate them by continuous change and then to calculate a limit. This idea can be implemented using a simple version of infinitesimal calculus. The analysis is carried out in two stages: 1. replacing discontinuous input by qualitatively continuous change in infinitesimal calculus; 2. carrying out envisionment using infinitesimal calculus. A number of techniques [Robinson, 1966] [Nishida et al., 1985] [Raiman, 1986] are available for this. For the purpose of this paper, our simpler method will do. We use a set of symbols {0, ε (infinitesimal), M (medium), ∞ (infinitely large)} to represent order of magnitude. Among these symbols, we can define some obvious rules, such as ε + ε = ε, ε + M = M, ε × M = ε, ε × ∞ = ?, etc. We use a value interval to represent the range of a changing value. For example, (−ε, M) means that the value is changing between some negative infinitesimal and some positive mid-range value. We abbreviate (a, a) as a.
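These rules can be made operational directly. The encoding below is our own; the "?" result marks the indeterminate product ε × ∞ noted above, and signs are left to the value intervals of the text:

```python
ORDER = {"0": 0, "EPS": 1, "MED": 2, "INF": 3}   # 0, epsilon, medium, infinite

def om_add(a, b):
    """eps + eps = eps, eps + M = M, M + inf = inf: the larger order wins."""
    return a if ORDER[a] >= ORDER[b] else b

def om_mul(a, b):
    if "0" in (a, b):
        return "0"
    if {a, b} == {"EPS", "INF"}:
        return "?"                # eps * inf is indeterminate
    if "EPS" in (a, b):
        return "EPS"              # otherwise an infinitesimal factor dominates
    return a if ORDER[a] >= ORDER[b] else b   # M*M = M, M*inf = inf

print(om_add("EPS", "MED"), om_mul("EPS", "MED"), om_mul("EPS", "INF"))
# MED EPS ?
```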
In order to see the possible behavior of a given system over time, we can make use of the following qualitative integration rule (on time): given a time interval I = [t0, t1] and a function f(t), the value of f(t1) is constrained by the following formula:

[f(t1)] ⊆ [f(t0)] + [length(I)] × [range_I(∂f)]

where length(I) is the length of the interval I and range_I(∂f) is the value range of ∂f during the interval I. Although this rule seems to be too underconstrained, it is useful when the length of interval I is infinitesimal.

B. Application of the Approximation Method
This method applies directly to type A1 discontinuity. Suppose a step input is applied to the circuit shown in Figure 3(a). This discontinuous input is approximated up to the second order derivatives, as shown in Figure 3(c).

(a) A circuit. (b) Circuit equations: v_IN = v_C + v_R; C·dv_C/dt = i; R·i = v_R. (c) The input and its approximation, over Instant1, Interval1, Instant2, Interval2, Instant3.
Figure 3. A sample circuit and approximation of discontinuous input.

Table 1 shows the result of envisionment. The three rows headed by ∂^i v_IN represent the approximated input. The six rows headed by ∂^i v_C or ∂^i v_R are the result of envisionment. They are derived by integrating various constraints.

Table 1. Analysis of type A1 discontinuity by the approximation method. [The rows are ∂^0 v_IN, ∂^1 v_IN, ∂^2 v_IN, ∂^0 v_C, ∂^1 v_C, ∂^2 v_C, ∂^0 v_R, ∂^1 v_R, ∂^2 v_R; the columns are Instant1, Interval1 (length ε), Instant2, Interval2 (length ε), Instant3; the individual entries are not recoverable from the source. a) ∂^1 v_C(Instant1) = 0 is assumed.]

For example, the value of ∂^0 v_C at Instant2 is constrained using the qualitative integration rule, as follows:

∂^0 v_C(Instant2) ⊆ ∂^0 v_C(Instant1) + ∫_{Interval1} ∂v_C dt

The right-hand side is simplified as follows: 0 + ε × (−ε, +M) = (−ε, +ε). For the value of ∂^0 v_R at Instant2, the value range +M obtained by applying the equation ∂^0 v_R(Instant2) = ∂^0 v_IN(Instant2) − ∂^0 v_C(Instant2) is more precise than that from applying the integration rule (namely, (−∞, +∞)); hence we have employed the former.

For type A2 and A3 discontinuity and property (1), however, the above idea does not suggest a solution. In order to handle these problems, we use a technique called dynamic causal stream analysis [Nishida et al., 1987], which has some similarity to the transition analysis of [Williams, 1984]. Details will be mentioned in section V-B, since the same algorithm is also used in the direct method.

Difficulty arises when a discontinuous change evolves from the inside of a given system. Suppose for instance the situation in Figure 4, in which discontinuous change evolves at variables v2, v3, and v4, due to positive feedback.

Figure 4. A situation in which discontinuous change evolves internally (o: an equation; *: direction of causality).

In order to apply the idea mentioned above likewise to this kind of situation, we must replace each internal occurrence of discontinuous change by continuous change before propagating them into other parts (involving v2', v3', and v4' in this case) of the system. For this purpose, the notion of local evolution of time [Williams, 1986] seems to provide an adequate ontology, though it is not explored in this paper. It is not obvious, however, whether or not the above algorithm will eventually terminate with a correct state description for the next stable state. The problem is complicated since mode transition may change the structure of the causal network during the above process. Although we have not proved it, this algorithm seems to work correctly for normal circuits.
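The qualitative integration rule stated at the beginning of this section can also be exercised numerically, with a small float standing in for ε and 1.0 for M; the example reproduces the Instant2 computation for ∂^0 v_C:

```python
def integrate_interval(f_t0, length, df_range):
    """f(t1) lies in f(t0) + length(I) * range_I(df), per the integration rule."""
    lo, hi = df_range
    return (f_t0 + length * lo, f_t0 + length * hi)

eps, M = 1e-6, 1.0                # numeric stand-ins for the orders epsilon and M
print(integrate_interval(0.0, eps, (-eps, M)))
# (-1e-12, 1e-06): an epsilon-sized interval, matching 0 + eps*(-eps, +M) = (-eps, +eps)
```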
V. The Direct Method
A. Outline of the Method
The direct method produces stepwise causal explanations for discontinuous changes by admitting intermediate mythical states which are not consistent with circuit equations. Mythical instants result from assuming as a default that the operating mode of multiple-mode devices and the value of variables constrained by integrals will not change unless otherwise specified. These assumptions are called persistence of operating mode and persistence of integrated quantity, respectively. Analysis with these default assumptions seems to coincide with our intuitions, at least in the electronic circuit domain. Unlike the approximation method, the direct method does not place any hypothetical time intervals of infinitesimal length between adjacent instantaneous states during discontinuous change; instead, it directly predicts the next instant by analyzing the current instant. This process is repeated until the algorithm encounters a normal instant without any inconsistency. As a result, the algorithm will produce a chain of successive mythical instants followed by a normal instant for each successive occurrence of discontinuous change. Our discontinuity-as-a-very-rapid-continuous-change ontology is used in various forms in this process. For example, we use the following rules:

[Continuity in discontinuous change] When the value of a variable x changes (either continuously or discontinuously) from one value a to another b (b ≠ a), change to any value c between a and b always occurs before that to b.

[Adjacency in operating mode transition] A multiple-mode device cannot transit from one operating mode to another in one transition, unless the next mode is adjacent to the original.

Like canonicality heuristics [de Kleer and Brown, 1984], these rules provide a basis for canonical explanation.

B. Analyzing Discontinuity Using the Direct Method
The key idea in analyzing type A1 discontinuity is to identify variables which will not be instantaneously affected by the discontinuous input. Causal analysis [Williams, 1984] [Nishida et al., 1987] helps us do this. If it is possible to assign to the equations for a given circuit causal directions in such a way that no differential causality (i.e., data flow from a variable to its derivative) is involved, we can safely say that the output of each integral causality will remain unchanged during discontinuous change. For example, we can predict that the voltage v_C across the capacitor C in the circuit in Figure 3 will not be affected by a discontinuous input, since we can consistently think of the value of v_C as being determined by integrating i/C. Notice that during the above process the predicted state may turn out to be inconsistent with the circuit equations. Inconsistencies encountered during the analysis are analyzed so as to predict the next state. Type A2 and A3 discontinuity is recognized during the analysis of inconsistency. It goes in three steps:

1. Singling out an incorrect assumption.
First, a set of assumed equations or inequalities that are relevant to the inconsistency (the suspect set) is built by tracing back the causal structure from a constraint in contradiction; then, if the suspect set contains more than one element, it is filtered by using canonicality heuristics for discontinuous change (some of which are mentioned in the last section) and the following preference rules: an inequality supported by a persistence-of-operating-mode assumption is most preferred as a culprit, then comes an equation supported by a persistence-of-operating-mode assumption, and finally an equation supported by a persistence-of-integrated-quantity assumption. It is possible that the suspect set may still contain more than one element. This will produce an ambiguous result.

2. Predicting the next state. If it turns out that a persistence-of-integrated-quantity assumption is supporting the culprit, the assumption is simply retracted. This will loosen the current constraints by one degree of freedom, resolving the inconsistency. If a persistence-of-operating-mode assumption is blamed for supporting the culprit, the next operating mode is sought by examining how circuit equations or inequalities get violated. In order to make this process run efficiently, we associate a suggestion about the next operating mode with each assumption-based constraint. For example, associated with an inequality v_D < v_0 (a condition for a diode to be OFF) is a note which suggests that if this condition is violated by a rise of v_D, then the next operating mode of the diode will be ON. Notice that the search for the next operating mode becomes crucial when multiple-mode devices are modeled with many operating modes.

3. Constructing a state description for the next state. The state description for the next state is obtained from the current state description, rather than recomputed from the beginning. First, the set of circuit equations and inequalities is updated by retracting those depending on the culprit and adding those associated with a new assumption. Then, the causal structure for the next state is reconstructed, which is used to compute the state description. If the state description is obtained successfully, the next state is judged to be a normal instant, followed by an interval. Otherwise, the causal structure for the new state is checked for positive feedback, to see a possibility of type A3 discontinuity. If this is the case, a special procedure is applied to determine the direction of the jump and to foresee a possible conclusion of the jump. Otherwise analysis of inconsistency is repeated.

A rule for predicting the direction of a value jump caused by positive feedback is as follows: if a variable in a positive feedback loop depends positively (negatively) on the primary cause, the value will jump in the reverse (same) direction. This rule is derived from an ontological ground. Consider for example a system which is modeled by the equations x = y + z and z = K·y (x: input, K: a constant), and let the constant K be set to K0 (> 0). A positive feedback comes into play if the constant K is changed to K1 (< −1) as a result of a mode transition. Although in piecewise linear models K changes instantaneously, it is beneficial to think about a hypothetical situation in which K changes gradually from K0 to K1. The closer K comes to −1, the bigger become y/x and −z/x, since y = (1/(1+K))·x and z = (K/(1+K))·x. Notice that the above rule for jumping values is exemplified here, since y depends negatively on x and z positively on x when K reaches K1 and a positive feedback comes into play.
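A few lines of arithmetic confirm the blow-up near K = −1 and the sign flip past it:

```python
def solve(x, K):                  # x = y + z and z = K*y  give  y = x/(1+K)
    y = x / (1 + K)
    return y, K * y

for K in (0.5, -0.5, -0.9, -0.99, -1.01, -2.0):
    y, z = solve(1.0, K)
    print(f"K = {K:+.2f}   y = {y:+9.2f}   z = {z:+9.2f}")
# y and -z grow without bound as K approaches -1; past K = -1 their signs flip,
# which is the jump-direction rule stated above
```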
Notice that the above rule for jumping values is exemplified, since y depends negatively on r and z positively on z when K reaches Kl and a positive feedback comes into play. C. Am Example In general, the direct method provides a simple but a powerful means for dealing with chains of discontinuous change. Let us see how it works for an unstable multi-vibrator shown in figure 5. Figure 5. An unstable multi-vibrator. 1. Initial condition. We assume TRl and TR2 are initially ON and OFF, respectively, and both UC, and vc2 are involved in an interval (I+,- vcc, v,), where V, is a threshold (see figure l(b)), and VCC~,XI. 2. Analyzing the initial state. It follows that the capacitor cl is being charged, raising the base voltage VTR,-B of the transistor TR2. Thus, it is foreseen that the condition vTR,-B <v, for TR2 to be OFF will eventually be violated, turning TR2 ON. Notice that C2 is being discharged, keeping IQ less than v,. This fact will be used in the next step. Notice also that although the base current into TRl is positive and is decreasing, it will not reach zero since before that happens the capacitor ~2 would be saturated. 3. Constructing a state description for the next instant, say instantl. It is assumed as a default that TRl remains ON (persistence of operating mode), and the values of ucl and uc2 will not be affected by the transition of operating mode (persistence of integrated quantities). Unfortunately, these assumptions turn out to be inconsistent, because VTR,-B must be below v,, on the one hand, since UTR,B=VC, +VTR,-C, vc2 CV, and VTR~-C=O, and VTR,-B must be equal to v,,, on the other hand, since TRI remains ON. Relevant to this Nishida and Doshita 647 contradiction are the assumptions that UC, remains unchanged and that TRl remains ON. The latter is preferred as a culprit (see the last section) and is retracted. Notice that it also follows that TRl will turn OFF since it is now assumed that UTR,-B drops below Q,. Thus, it has turned out that the current state is mythical and immediately followed by another instant, Say@zstant2. 4. Constructing a state description for instunt2. This time no inconsistency is encountered and instant2 is declared to be normal. Nence, it is followed by an interval, in which TRl is OFF, TR2 ON, Cl being discharged, and ~2 being charged, just symmetric with the initial situation. Notice that the above analysis predicts that a number of variables change their value discontinuously. For example, UTR,-C is expected to rise discontinuously from zero, UTR,-B drop discontinuously from u,,, and so on. VI. Comparissm imd @Qnduding These two methods differ in terms of preciseness and efficiency. Preciseness. In general, the approximation method seems to implement the discontinuity-as-a- very-rapid-continuous change ontology with more fidelity. The direct method may fail to characterize certain properties of the response to discontinuous input. If the direct method is applied to the example shown in figure 3, it will predict that doURI alUR and &-bR, willjmp to +, - and +, respectively, as a result of discontinuous input. Unlike the result shown in table-l, prediction by the direct method does not make explicit the fact that &UR(ir I) has several keen peaks of infinitely large magnitude. Fortunately, those peaks do not cause serious problems in the electronic circuit domain. 
In ordinary models of electronic circuits, the operating mode of each circuit element is determined only by variables on a0 level (those that stand either for voltage or for current). Therefore, a peak at al level plays a critical role only when differential causality is involved and the value of a a0 level variable is determined by that of a al level variable. First of all, circuits with differential causality are relatively rare. Second, existence of differential causality can be detected by causal analysis. It serves as a warning. Efficiency. A naive implementation of the approximation method will result in an inefficient algorithm because the approximation-limit process will be carried out uniformly for a discontinuous input irrespective of necessity. In contrast, the direct method is more efficient because the computation process is invoked only when inconsistency is detected. Compare also how type Al discontinuity is handled by each method (see table-l and description in section V -B). We have incorporated an algorithm based on the direct method into an existing qualitative reasoning program R-I [Nishida et al., 19871. It can analyze all the examples shown in this paper. Cur future direction is twofold: extension for differential causality and ill-formed circuits. The robustness against ill-formed circuit is crucially important in ICAI environments where students use the program for reviewing their circuits. References [de Kleer and Brown, 19841 de Kleer, J. and Brown, J. S., A Qualitative Physics Based on Confluences, Artificial Intelligence, 24,7-83,1984. [de Kleer, 19841 de Kleer, J., How Circuits Work, Artificial Intelligence 24,205280,1984. [Forbus, 19841 Forbus, K. D., Qualitative Process Theory, Artificial Intelligence, 24,85-168,1984. [Kuipers, 19841 Kuipers, B., Commonsense Reasoning about Causality: Deriving Behavior from Structure, Artificial Intelligence, 24, 169-203, 1984. [Nishida et al., 19851 Nishida, T., Kawamura, T. and Doshita, S., Dealing with Ambiguity and Discontinuity in Qualitative Reasoning, in Proceedings of Symposium on Knowledge Information Processing, IPSJ, 1985. [Nishida et al., 19871 Nishida, T., Kawamura, T. and Doshita, S., Dynamic Causal Stream Analysis for Electronic Circuits, Trans. IPM, 28(2), 1987. [Raiman, 19861 Raiman, C., Order of Magnitude Reasoning, in Proceedings AAAI-86, 100-104, 1986. [Robinson, 19663 Robinson, A., Non-Standard Analysis, North-Holland, Amsterdam, 1966. [Weld, 19851 Weld, D. S., Combining Discrete and Continuous Process Models, in Proceedings MCAI- 85,140-143,1985. Cwilliams, 19841 Williams, B. C., Qualitative Analysis of BIOS Circuits, Artificial Intelligence, 24,281-346,1984. [Williams, I.9861 Williams, B. C., Doing Time: Putting Qualitative Reasoning on Firmer Ground, in .Proceedings AAAI-86,105-112,1986. IPSJ: Information Processing Society of Japan. 648 Engineering Problem Solving
1987
107
558
Hierarchical Reasoning about Inequalities
Elisha Sacks¹
MIT Laboratory for Computer Science, 545 Technology Square, Room 370, Cambridge, MA 02139, USA

¹This research was supported (in part) by National Institutes of Health Grant No. R01 LM04493 from the National Library of Medicine and National Institutes of Health Grant No. R24 RR01320 from the Division of Research Resources.

Abstract
This paper describes a program called BOUNDER that proves inequalities between functions over finite sets of constraints. Previous inequality algorithms perform well on some subset of the elementary functions, but poorly elsewhere. To overcome this problem, BOUNDER maintains a hierarchy of increasingly complex algorithms. When one fails to resolve an inequality, it tries the next. This strategy resolves more inequalities than any single algorithm. It also performs well on hard problems without wasting time on easy ones. The current hierarchy consists of four algorithms: bounds propagation, substitution, derivative inspection, and iterative approximation. Propagation is an extension of interval arithmetic that takes linear time, but ignores constraints between variables and multiple occurrences of variables. The remaining algorithms consider these factors, but require exponential time. Substitution is a new, provably correct, algorithm for utilizing constraints between variables. The final two algorithms analyze multiple occurrences of variables. Inspection examines the signs of partial derivatives. Iteration is based on several earlier algorithms from interval arithmetic.

I. Introduction
This paper describes a program called BOUNDER that proves inequalities between functions over all points satisfying a finite set of constraints: equalities and inequalities between functions. BOUNDER manipulates extended elementary functions: polynomials and compositions of exponentials, logarithms, trigonometric functions, inverse trigonometric functions, absolute values, maxima, and minima. It tests whether a set of constraints, C, implies an inequality a ≤ b between the extended elementary functions a and b by calculating upper and lower bounds for a − b over all points satisfying C. It proves the inequality when the upper bound is negative or zero, refutes it when the lower bound is positive, and fails otherwise.

Previous bounding algorithms perform well on some subset of the extended elementary functions, but poorly elsewhere. For this reason, BOUNDER maintains a hierarchy of increasingly complex bounding algorithms. When one fails to resolve an inequality, it tries the next. Although complex algorithms derive tighter bounds than simple ones for most functions, exceptions exist. Hence, BOUNDER's hierarchy of algorithms derives tighter bounds than even its most powerful component. It also performs well on hard problems without wasting time on easier ones.

The purpose of BOUNDER is to resolve inequalities that arise in realistic modeling problems efficiently, not to derive deep theoretical results. It is an engineering utility, rather than a theorem-prover for pure mathematics. For this reason, it only addresses universally quantified inequalities, which make up the majority of practical problems, while ignoring the complexities of arbitrary quantification. BOUNDER helps PLR [Sacks, 1987b] explore the qualitative behavior of dynamic systems, such as stability and periodicity. For example, suppose a linear system contains symbolic parameters.
Given constraints on the parameters, one can use BOUNDER to reason about the locations of the system's poles and zeroes. PLR also enhances the performance of QMR [Sacks, 1985], a program that derives the qualitative properties of parameterized functions: signs of the first and second derivatives, discontinuities, singularities, and asymptotes.

BOUNDER consists of an inequality prover, a context manager, and four bounding algorithms: bounds propagation, substitution, derivative inspection, and iterative approximation. The prover uses the bounding algorithms to resolve inequalities, as described above. First, it reduces the original inequality to an equivalent but simpler one by canceling common terms and replacing monotonic functions with their arguments. For example, x + 1 ≤ y + 1 simplifies to x ≤ y, −x ≤ −y to x ≥ y, and e^x ≤ e^y to x ≤ y. The prover only cancels multiplicands whose signs it can determine by bounds propagation.

The context manager organizes constraint sets in the format required by the bounding algorithms. The bounding algorithms derive upper and lower bounds for a function over all points satisfying a constraint set. The context manager and bounding algorithms are described in the next two sections. The final two sections contain a review of literature and conclusions. I argue that current inequality provers are weak, brittle, or inefficient because they process all inputs uniformly, whereas BOUNDER avoids these shortcomings with its hierarchical strategy. While this paper discusses only inequality constraints and non-strict inequalities, BOUNDER implements boolean combinations of inequality constraints and strict inequalities analogously.

II. The Context Manager
The context manager derives an upper (lower) bound for a variable x from an inequality L ≤ R by reformulating it as x ≤ U (x ≥ U) with U free of x. It derives upper and lower bounds for x from an equality L = R by reformulating it as x = U. Inequality manipulation may depend on the signs of the expressions involved. For example, the constraint ax ≤ b can imply x ≤ b/a or x ≥ b/a depending on the sign of a. In such cases, the context manager attempts to derive the relevant signs from other members of the constraint set using bounds propagation. If it fails, it ignores the constraint. Constraints whose variables cannot be isolated, such as x < 2^x, are ignored as well. The number of variables in a constraint is linear in its length and each variable requires linear time to isolate. Isolation may require deriving the signs of all the subexpressions in the constraint. Theorem 1 implies that this process takes linear time. All told, processing each constraint requires quadratic time in its length. Subsequent complexity results exclude this time.

Two pairs of functions form the interface between the context manager and the bounding algorithms. Given a variable x and a set of constraints C, the functions VAR-LB_C(x) and VAR-UB_C(x) return the maximum of x's numeric lower bounds in C and the minimum of its numeric upper bounds.
The functions LOWER_C(x) and UPPER_C(x) return the maximum over all lower bounds, symbolic and numeric, and the minimum over all upper bounds. Both VAR-LB and LOWER derive lower bounds for x, whereas both VAR-UB and UPPER derive upper bounds. However, LOWER and UPPER produce tighter bounds than VAR-LB and VAR-UB because they take symbolic constraints into account. Examples of these functions appear in Table 1. All four functions run in constant time once the contexts are constructed.

Table 1: Bounds of {a ≥ 1, b ≤ 0, b ≥ −2, ab ≥ −4, c = b}
     VAR-LB   VAR-UB   LOWER               UPPER
a    1        ∞        1                   −4/b
b    −2       0        max{−2, −4/a, c}    min{0, c}
c    −∞       ∞        b                   b

III. The Bounding Algorithms
This section contains the details of the bounding algorithms. Each derives tighter bounds than its predecessor, but takes more time. Each invokes all of its predecessors for subtasks, except that derivative inspection never calls bounds propagation. The bounding algorithms define the extended elementary functions on the extended real numbers in the standard fashion, so that 1/±∞ = 0, log 0 = −∞, 2^∞ = ∞, and so on. Throughout this paper, "number" refers to an extended real number.

A. Bounds Propagation
The bounds propagation algorithm (BP) bounds a compound function by bounding its components recursively and combining the results. For example, the upper bound of a sum is the sum of the upper bounds of its addends. The recursion terminates when it reaches numbers and variables. Numbers are their own bounds, while VAR-LB and VAR-UB bound variables. Figures 1 and 2 contain the upper bound algorithm, UB_C(e), for a function e over a set of constraints C. The lower bound algorithm, LB_C(e), is analogous. One can represent e as an expression in its variables x_1, ..., x_n or as a function e(x) of the vector x = (x_1, ..., x_n). From here on, these forms are used interchangeably. The TRIG-UB algorithm, not shown here, uses periodicity and monotonicity information to derive upper bounds for trigonometric functions and their inverses.

e              UB_C(e)
a number       e
a variable     VAR-UB_C(e)
a + b          ub_a + ub_b
ab             max{lb_a·lb_b, lb_a·ub_b, ub_a·lb_b, ub_a·ub_b}
a^b            EXPT-UB_C(a, b)
min{a, b}      min(ub_a, ub_b)
max{a, b}      max(ub_a, ub_b)
log a          log ub_a
|a|            max{|lb_a|, |ub_a|}
trigonometric  TRIG-UB_C(e)
Figure 1: The UB_C(e) algorithm; lb_x and ub_x abbreviate LB_C(x) and UB_C(x).

[Figure 2 cases on whether a is known positive, in which case EXPT-UB_C(a, b) is e^{UB_C(b·log a)}; on whether b is a rational p/q with p, q odd or with p even, in which case the appropriate power of UB_C(a), LB_C(a), or e^{UB_C(b·log|a|)} is returned; and otherwise returns ∞.]
Figure 2: The EXPT-UB_C(a, b) algorithm.

The correctness and complexity of BP are summarized by the following theorem:

Theorem 1. For any extended elementary function e(x) and set of constraints C, bounds propagation derives numbers lb_e and ub_e satisfying
∀x. satisfies(x, C) ⇒ lb_e ≤ e(x) ≤ ub_e (1)
in time proportional to e's length.

The proof is by induction on e's length. It appears in a longer version of this paper [Sacks, 1987a], as do all subsequent proofs.

Bounds propagation achieves linear time-complexity by ignoring constraints among variables or multiple occurrences of a variable in an expression. It derives excessively loose bounds when these factors prevent all the constituents of an expression from varying independently over their ranges. For instance, the constraint a ≤ b implies that a − b cannot be positive. Yet given only this constraint, BP derives an upper bound of ∞ for a − b by adding the upper bounds of a and −b, both ∞. As another example, when no constraints exist, the joint occurrence of x in the constituents of x² + x implies a global minimum of −1/4. Yet BP deduces a lower bound of −∞ by adding the lower bounds of x² and x, 0 and −∞. Subsequent bounding algorithms derive optimal bounds for these examples.
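The flavor of BP, including its blindness to repeated variables, fits in a few lines. The s-expression encoding below is our own and covers only sums and products:

```python
import math
INF = math.inf

def ub(e, env):
    op, *args = e
    if op == "const": return args[0]
    if op == "var":   return env[args[0]][1]          # VAR-UB
    if op == "add":   return ub(args[0], env) + ub(args[1], env)
    if op == "mul":
        l0, l1 = lb(args[0], env), lb(args[1], env)
        u0, u1 = ub(args[0], env), ub(args[1], env)
        return max(l0 * l1, l0 * u1, u0 * l1, u0 * u1)

def lb(e, env):
    op, *args = e
    if op == "const": return args[0]
    if op == "var":   return env[args[0]][0]          # VAR-LB
    if op == "add":   return lb(args[0], env) + lb(args[1], env)
    if op == "mul":
        l0, l1 = lb(args[0], env), lb(args[1], env)
        u0, u1 = ub(args[0], env), ub(args[1], env)
        return min(l0 * l1, l0 * u1, u0 * l1, u0 * u1)

# x^2 + x, with x*x treated as an ordinary product of independent factors
expr = ("add", ("mul", ("var", "x"), ("var", "x")), ("var", "x"))
print(lb(expr, {"x": (0.0, 1.0)}))      # 0.0: tight here
print(lb(expr, {"x": (-INF, INF)}))     # -inf, although the true minimum is -1/4
```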
Substitution analyzes constraints among variables and the final two algorithms handle multiple occurrences of variables. All three obtain better results than BP, but pay an exponential time-complexity price.

B. Substitution
The substitution algorithm constructs bounds for an expression by replacing some of its variables with their bounds in terms of the other variables. Substitution exploits all solvable constraints, whereas bounds propagation limits itself to constraints between variables and numbers. In our previous example, substitution derives an upper bound of 0 for a − b from the constraint a ≤ b by bounding a from above with b, that is a − b ≤ b − b = 0. Substitution is performed by the algorithms SUP_C(e, H) and INF_C(e, H), which calculate upper and lower bounds on e over the constraint set C in terms of the variable set H. When H is empty, the bounds reduce to numbers.

Figures 3 and 4 contain the SUP function and its auxiliary, SUPP. The auxiliary functions EXPT-SUP and TRIG-SUP are derived from BP's exponential and trigonometric bounding algorithms by replacing UB_C(a) with SUP_C(a, H), LB_C(a) with INF_C(a, H), and so on for b. The expression v(e) denotes the variables contained in e and f(b, a, H) abbreviates v(b) − v(a) ⊆ H. In the remainder of this section, we will focus on SUP; INF is analogous. In step 1, SUP calculates the upper bounds of numbers and of variables included in H. It analyzes a variable, x, not in H by constructing an intermediate bound

B = SUP_C(UPPER_C(x), H ∪ {x}) (2)

for x and calling SUPP to derive a final bound. If possible, SUPP derives an upper bound for x in H directly from the inequality x ≤ B. Otherwise, it applies bounds propagation to B. For instance, the inequality x ≤ 1 − x yields a bound of 1/2, but x ≤ x² − 1 does not provide an upper bound, so SUPP returns UB_C(x² − 1).

[Figure 3 parallels Figure 1: numbers and variables in H bound themselves, variables outside H are bounded through SUPP, and sums, products, powers, min, max, logarithms, absolute values, and trigonometric functions are bounded recursively, substituting bounds for the variables of one operand when f(b, a, H) holds.]
Figure 3: The SUP_C(e, H) algorithm. The symbols s and i abbreviate SUP_C and INF_C respectively.

Case                                SUPP_C(x, B)
1. x ∉ v(B)                         B
2. B = rx + A, r a real, x ∉ v(A)
   2.1 r ≥ 1                        ∞
   2.2 r < 1                        A/(1 − r)
3. B = min(C, D)                    min{SUPP_C(x, C), SUPP_C(x, D)}
4. B = max(C, D)                    max{SUPP_C(x, C), SUPP_C(x, D)}
5. else                             UB_C(B)
Figure 4: The SUPP_C(x, B) algorithm.

SUP exploits constraints among variables to improve its bounds on sums and products. If b contains variables that a lacks, but which have bounds in a's variables, SUP constructs an intermediate upper bound, U, for a + b or ab by replacing b with these bounds. A recursive application of SUP to U produces a final upper bound. (Although not indicated explicitly in Figure 3, these steps are symmetric in a and b.) If a and b have the same variables, SUP bounds a + b and ab by recursively bounding a and b and applying bounds propagation to the results. For example, given the constraints c ≥ 1, d ≥ 1, and cd ≤ 4, SUP derives an intermediate bound of 3c/4 for c − 1/d by replacing −1/d with −c/4, its upper bound in c.
The following theorem establishes the correctness of substitution:

Theorem 2 For every extended elementary function e, variable set H, and constraint set C, the expressions i = INFc(e, H) and s = SUPc(e, H) satisfy the conditions:

    i and s are expressions in H    (4)
    ∀x. satisfies(x, C) ⇒ i(x) ≤ e(x) ≤ s(x)    (5)

Substitution utilizes constraints among variables to improve on the bounds of BP, but ignores constraints among multiple occurrences of variables. It performs identically to BP on the example of x^2 + x, deriving a lower bound of -∞. Yet that bound is overly pessimistic because no value of x minimizes both addends simultaneously. The last two bounding algorithms address this shortcoming.

C. Derivative Inspection

Derivative inspection calculates bounds for a function over a constraint set C from the signs of its partial derivatives. Let us define the range of xi in C as the interval

    Xi = [INFc(xi, {}), SUPc(xi, {})]    (6)

and the range of x = (x1, ..., xn) in C as the Cartesian product X = X1 × ... × Xn of its components' ranges.

Derivative inspection splits the range of a function f(x) into subregions by dividing the range of each xi into maximal intervals on which ∂f/∂xi is non-negative, non-positive, or of unknown sign. The maximum upper bound over all subregions bounds f from above on X. This bound is valid over all points satisfying C by Theorem 2. Each region can be collapsed to the upper (lower) bound of xi in every dimension i where ∂f/∂xi is non-negative (non-positive) without altering f's upper bounds. An analogous procedure derives lower bounds.

Derivative inspection takes time proportional to the number of regions into which f's domain splits. For this reason, it only applies to functions whose partial derivatives all have finitely many zeroes in X. When the signs of all partial derivatives are known, derivative inspection yields optimal bounds directly, since all regions reduce to points. For example, it derives an optimal lower bound of -1/4 for x^2 + x because the derivative of x^2 + x is non-positive on [-∞, -1/2] and non-negative on [-1/2, ∞]. Otherwise, one must use a second bounding algorithm to calculate bounds on the non-trivial subregions. This two-step approach generally yields tighter bounds than applying the second algorithm directly on f's entire domain, since the subregions are smaller and often reduce to points along some dimensions.
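In the univariate case with all derivative signs known, the collapse-to-points argument reduces to sampling endpoints and critical points. The following Python sketch is mine and assumes a finite interval (unlike the text's use of [-∞, ∞]) and that the caller supplies the zeroes of f'.

    def extreme_bounds(f, critical_points, lo, hi):
        """Optimal bounds of f on [lo, hi] given the zeroes of f' inside it.
        When every derivative sign is known, each subregion collapses to a
        point, so evaluating the endpoints and critical points is exact."""
        pts = [lo, hi] + [p for p in critical_points if lo < p < hi]
        vals = [f(p) for p in pts]
        return min(vals), max(vals)

    print(extreme_bounds(lambda x: x*x + x, [-0.5], -10.0, 10.0))  # (-0.25, 110.0)

On x^2 + x this recovers the optimal lower bound -1/4 that BP and substitution both miss.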
D. Iterative Approximation

Iterative approximation, like derivative inspection, reduces the errors in bounds propagation and substitution caused by multiple occurrences of variables. Instead of bounding a function over its entire range directly, it subdivides the regions under consideration and combines the results. Intuitively, BP's choice of multiple worst-case values for a variable causes less damage on smaller regions because all these values are less far apart. Figure 5 illustrates this idea for the function x^2 - x on the interval [0,1]. Part (a) demonstrates that BP derives an overly pessimistic lower bound on [0,1] because it minimizes both -x and x^2 independently. Part (b) shows that this factor is less significant on smaller intervals: combining the two lower bounds gives -3/4, a tighter bound for x^2 - x on [0,1] than that of part (a). One can obtain arbitrarily tight bounds by constructing sufficiently fine partitions.

Figure 5: Illustration of iterative approximation on [0,1]. The symbols m and n mark the values of x that minimize -x and x^2. The numbers below are LB(x^2 - x): -1 in part (a), -1/2 and -3/4 in part (b).

Iterative approximation generalizes interval subdivision to multivariate functions and increases its efficiency, using ideas from Moore [Moore, 1979] and Asaithambi et al. [Asaithambi et al., 1982]. As an additional optimization, it bounds functions over the regions generated by derivative inspection, rather than over their entire domains. Let f(x1, ..., xn) be continuously differentiable on a region X and let wi denote the width of the interval Xi. For every positive ε, iterative approximation derives an upper bound for f on X that exceeds the least upper bound by at most ε within

    (L/ε)(w1 + ... + wn)    (7)

iterations, where the constant L depends on f and X.

IV. Related Work

In this section, I discuss, in order of increasing generality, existing programs that derive bounds and prove inequalities. As one would expect, the broader the domain of functions and constraints, the slower the program. The first class of systems bounds linear functions subject to linear constraints. Valdés-Pérez [Valdés-Pérez, 1986] analyzes sets of simple linear inequalities of the form x - y ≥ n with x and y variables and n a number. He uses graph search to test their consistency in cv time for c constraints and v variables. Malik and Binford [Malik and Binford, 1983] and Bledsoe [Bledsoe, 1975] check sets of general linear constraints for consistency and calculate bounds on linear functions over consistent sets of constraints. Both methods require exponential time.² The former uses the Simplex algorithm, whereas the latter introduces preliminary versions of BOUNDER's substitution algorithms. Bledsoe defines SUP, SUPP, INF, and INFF for linear functions and constraints and proves the linear version of Theorem 2. In fact, these algorithms produce exact bounds, as Shostak [Shostak, 1977] proves.

²The Simplex algorithm performs better in practice. Also, a polynomial alternative exists.

The next class of systems bounds nonlinear functions, but allows only range constraints. All resemble BOUNDER's bounds propagation and all stem from Moore's [Moore, 1979] interval arithmetic. Moore introduces the rules for bounding elementary functions on finite domains by combining the bounds of their constituents. His algorithm takes linear time in the length of its input. Bundy [Bundy, 1984] implements an interval package that resembles BP closely. It generalizes the combination rules of interval arithmetic to any function that has a finite number of extrema. If the user specifies the sign of a function's derivative over its domain, Bundy's program can perform interval arithmetic on it. Unlike BOUNDER's derivative inspection algorithm, it cannot derive this information for itself. Many other implementations of interval arithmetic exist, some in hardware.
Bundy and Welham [Bundy and Welham, 1979] derive upper bounds for a variable x from an inequality L ≤ R by reformulating it as x ≤ U with U free of x. If U contains a single variable, they try to find its global maximum, M, by inspecting the sign of its second derivative at the zeroes of its first derivative. When successful, they bound x from above with M. Lower bounds and strict inequalities are treated analogously. They use a modified version of the PRESS equation solver [Bundy and Welham, 1981] to isolate x. As discussed in section II, inequality manipulation depends on the signs of the expressions involved. When this information is required, they use Bundy's interval package to try to derive it. The complexity of this algorithm is unclear, since PRESS can apply its simplification rules repeatedly, possibly producing large intermediate expressions. BOUNDER contains both steps of Bundy and Welham's bounding algorithm: its context manager derives bounds on variables from constraints, while its derivative inspection algorithm generalizes theirs to multivariate functions. PRESS may be able to exploit some constraints that BOUNDER ignores because it contains a stronger equation solver than does BOUNDER.

The final class of systems consists of theorem provers for predicate calculus that treat inequalities specially. These systems focus on general theorem proving, rather than problem-solving. They handle more logical connectives than BOUNDER, including disjunction and existential quantification, but fewer functions, typically just addition. Bledsoe and Hines [Bledsoe and Hines, 1980] derive a restricted form of resolution that contains a theory of dense linear orders without endpoints. Bledsoe et al. [Bledsoe et al., 1983] prove this form of resolution complete. Finally, Bledsoe et al. [Bledsoe et al., 1979] extend a natural deduction system with rules for inequalities. Although none of these authors discuss complexity, all their algorithms must be at least exponential.

Moore also proposes a simple form of iterative approximation, which Skelboe [Skelboe, 1974], Asaithambi et al. [Asaithambi et al., 1982], and Ratschek and Rokne [Ratschek and Rokne, 1984, ch. 4] improve. BOUNDER's iterative approximation algorithm draws on all these sources.

Simmons [Simmons, 1986] handles functions and constraints containing numbers, variables, and the four arithmetic operators. He augments interval arithmetic with simple algebraic simplification and inequality information. For example, suppose x lies in the interval [-1,1]. Simmons simplifies x - x to 0, whereas interval arithmetic produces the range [-2,2]. He also deduces that x ≤ z from the constraints x ≤ y and y ≤ z by finding a path from x to z in the graph of known inequalities. The algorithm is linear in the total number of constraints. Although more powerful than BOUNDER's bounds propagation, Simmons's program is weaker than substitution. For example, it cannot deduce that x^2 ≥ y^2 from the constraints x ≥ y and y ≥ 0.
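Simmons's transitive deduction is a plain reachability search. The sketch below is mine, in Python; constraints are encoded as pairs (a, b) meaning a ≤ b.

    def implies_leq(constraints, x, z):
        """Decide x <= z by searching the graph of known inequalities,
        in the style of Simmons (a minimal sketch)."""
        adj = {}
        for a, b in constraints:
            adj.setdefault(a, []).append(b)
        stack, seen = [x], {x}
        while stack:
            v = stack.pop()
            if v == z:
                return True
            for w in adj.get(v, []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False

    print(implies_leq([('x', 'y'), ('y', 'z')], 'x', 'z'))  # True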
Brooks [Brooks, 1981, sec. 3] extends Bundy's SUP and INF to nonlinear functions and argues informally that Theorem 2 holds for his algorithms. This argument must be faulty because his version of SUP(e, {}) recurses infinitely when e equals x + 1/x or x + x^2, for instance. Brooks's program only exploits constraints among the variables of sums rx + B and of products x^n·B with r real, x a variable of known sign, B an expression free of x, and n an integer. In other cases, it adds or multiplies the bounds of constituents, as in steps 3.1, 4.1.1, 4.2.1, and 4.3 of BOUNDER's SUP (Figure 3). These overly restrictive conditions rule out legitimate substitutions that steps 3.2, 4.1.2, and 4.2.2 permit. For example, BOUNDER can deduce that 1/x - 1/y ≥ 0 from the constraints y ≥ x and x ≥ 1, but Brooks's algorithm cannot. On some functions and non-empty sets H, his algorithm makes recursive calls with H empty. This produces needlessly loose bounds and sometimes causes an infinite recursion.

V. Conclusions

Current inequality reasoners are weak, brittle, or inefficient because they process all inputs uniformly. Interval arithmetic systems, such as Bundy's and Simmons's, run quickly, but generate exceedingly pessimistic bounds when dependencies exist among the components of functions. These dependencies are caused by constraints among variables or multiple occurrences of a variable, as discussed in Section III.A. The upper bound of a - b given a ≤ b demonstrates the first type, while the lower bound of x^2 + x given no constraints demonstrates the second. Each of the remaining systems is brittle because it takes only one type of dependency into account. Iterative approximation, suggested by Moore, and derivative inspection, performed in the univariate case by Bundy and Welham, address the second type of dependency, but ignore the first. Conversely, substitution, used (in a limited form) by Brooks and Simmons, exploits constraints among variables, while ignoring multiple occurrences of variables. All these systems are inefficient because they apply a complex algorithm to every input without trying a simple one first.

BOUNDER overcomes the limitations of current inequality reasoners with its hierarchical strategy. It uses substitution to analyze dependencies among variables and derivative inspection and iterative approximation to analyze multiple occurrences of variables. Together, these techniques cover far more cases than any single-algorithm system. Yet unlike those systems, BOUNDER does not waste time applying overly powerful methods to simple problems. It tries bounds propagation, which has linear time-complexity, before resorting to its other methods. An inequality reasoner like BOUNDER should be an important component of future general-purpose symbolic algebra packages.
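The cheapest-first dispatch itself is tiny. The sketch below is mine, in Python; the method names in the usage comment (bp_ub, subst_ub) are hypothetical placeholders for the four algorithms, which would be supplied in order of increasing cost.

    import math

    def upper_bound(e, ctx, methods, tight_enough):
        """Try bounding methods from cheapest to most expensive and stop as
        soon as the bound satisfies the caller, mirroring BOUNDER's
        hierarchical strategy (a sketch; the interfaces are assumed)."""
        best = math.inf
        for method in methods:
            best = min(best, method(e, ctx))
            if tight_enough(best):
                break
        return best

    # e.g. upper_bound(expr, ctx, [bp_ub, subst_ub], lambda b: b < math.inf)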
References

[Asaithambi et al., 1982] M. S. Asaithambi, Shen Zuhe, and R. E. Moore. On computing the range of values. Computing, 28:225-237, 1982.

[Bledsoe, 1975] W. W. Bledsoe. A new method for proving certain Presburger formulas. In Proceedings of the Fourth International Joint Conference on Artificial Intelligence, pages 15-21, 1975.

[Bledsoe and Hines, 1980] W. W. Bledsoe and Larry M. Hines. Variable elimination and chaining in a resolution-based prover for inequalities. In Proceedings of the Fifth Conference on Automated Deduction, Springer-Verlag, Les Arcs, France, July 1980.

[Bledsoe et al., 1979] W. W. Bledsoe, Peter Bruell, and Robert Shostak. A prover for general inequalities. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, pages 66-69, 1979.

[Bledsoe et al., 1983] W. W. Bledsoe, K. Kunen, and R. Shostak. Completeness results for inequality provers. ATP 65, University of Texas, 1983.

[Brooks, 1981] Rodney A. Brooks. Symbolic reasoning among 3-D models and 2-D images. Artificial Intelligence, 17:285-348, 1981.

[Bundy, 1984] Alan Bundy. A generalized interval package and its use for semantic checking. ACM Transactions on Mathematical Software, 10(4):397-409, December 1984.

[Bundy and Welham, 1979] Alan Bundy and Bob Welham. Using meta-level descriptions for selective application of multiple rewrite rules in algebraic manipulation. D.A.I. Working Paper 55, University of Edinburgh, Department of Artificial Intelligence, May 1979.

[Bundy and Welham, 1981] Alan Bundy and Bob Welham. Using meta-level descriptions for selective application of multiple rewrite rules in algebraic manipulation. Artificial Intelligence, 16(2):189-211, May 1981.

[Malik and Binford, 1983] J. Malik and T. Binford. Reasoning in time and space. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 343-345, August 1983.

[Moore, 1979] Ramon E. Moore. Methods and Applications of Interval Analysis. SIAM Studies in Applied Mathematics, SIAM, Philadelphia, 1979.

[Ratschek and Rokne, 1984] H. Ratschek and J. Rokne. Computer Methods for the Range of Functions. Halsted Press: a division of John Wiley and Sons, New York, 1984.

[Sacks, 1985] Elisha P. Sacks. Qualitative mathematical reasoning. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 137-139, 1985.

[Sacks, 1987a] Elisha P. Sacks. Hierarchical inequality reasoning. TM 312, Massachusetts Institute of Technology, Laboratory for Computer Science, 545 Technology Square, Cambridge, MA, 02139, 1987.

[Sacks, 1987b] Elisha P. Sacks. Piecewise linear reasoning. In Proceedings of the National Conference on Artificial Intelligence, American Association for Artificial Intelligence, 1987.

[Shostak, 1977] Robert E. Shostak. On the SUP-INF method for proving Presburger formulas. Journal of the ACM, 24:529-543, 1977.

[Simmons, 1986] Reid Gordon Simmons. "Commonsense" arithmetic reasoning. In Proceedings of the National Conference on Artificial Intelligence, pages 118-124, American Association for Artificial Intelligence, August 1986.

[Skelboe, 1974] S. Skelboe. Computation of rational functions. BIT, 14:87-95, 1974.

[Valdés-Pérez, 1986] Raúl Valdés-Pérez. Spatio-temporal reasoning and linear inequalities. AIM 875, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, May 1986.
Piecewise Linear Reasoning

Elisha Sacks¹
MIT Laboratory for Computer Science
545 Technology Square, Room 370
Cambridge, MA 02139, USA

¹This research was supported (in part) by National Institutes of Health Grant No. R01 LM04493 from the National Library of Medicine and National Institutes of Health Grant No. R24 RR01320 from the Division of Research Resources.

Abstract

This paper describes a new technique called piecewise linear reasoning (PLR) for analyzing dynamic systems describable by finite sets of ordinary differential equations. Current qualitative reasoning programs derive the abstract behavior of a system by simulating hand-crafted "qualitative" versions of the differential equations that characterize it and summarizing the results. PLR infers more detailed information by constructing and examining piecewise linear approximations of the original equations. As evidence that PLR can provide useful information to engineers, its analyses of the Lienard and van der Pol equations are presented.

I. Introduction

This paper describes a new technique called piecewise linear reasoning (PLR) for analyzing dynamic engineering systems. Engineers treat many devices as dynamic systems and model them with sets of ordinary differential equations. They derive the behavior of the devices by analyzing the associated equations. Rather than treat individual devices directly, engineers aggregate them into classes that share common sets of parameterized differential equations. They analyze device classes abstractly and instantiate the results with appropriate numbers. This approach avoids redundancy, provides global insight, and facilitates design. For example, the parameterized equation y'(t) = ay(t) describes the class of one-tank devices with instantaneous mixing. From the solution, y(t) = y0·e^(at), and the physical constraint y0 > 0, one sees that y increases toward infinity if a is positive, remains constant if a equals 0, and decreases asymptotically to 0 if a is negative. One can design a specific one-tank device by choosing an appropriate value of a.

PLR provides engineers with information they need about parameterized systems: local properties in interesting regions as well as global properties such as stability, periodicity, limit cycles, and asymptotic behavior. For systems of linear equations, this information can be derived through straightforward mathematical analysis. Nonlinear systems, however, generally require extremely sophisticated analysis and rarely yield to any known analytic technique. The central tenet of my research is to solve this problem by sacrificing generality for tractability: constructing and examining piecewise linear approximations of nonlinear systems instead of analyzing them directly. The next section describes the PLR methodology and the following two sections demonstrate its capabilities. The final three sections contain a review of previous work, PLR's implementation status, and conclusions.

II. PLR

Engineers need to know the properties of parameterized systems of differential equations that model device classes. This section explains how PLR derives that information from piecewise linear approximations of the equations. PLR can produce straightforward approximations automatically, including both examples in this paper, but the ultimate responsibility for constructing adequate approximations rests with the user. Precedents for this division of labor certainly exist.
Users of numerical packages must choose appropriate algorithms, error margins, initial guesses, and step sizes. Similarly, de Kleer and Brown [Bobrow, 1985, p. 26] note that qualitative reasoning requires users to derive the confluences for systems by themselves.

PLR determines the properties of a parameterized piecewise linear system in two analysis stages: local and global. Both stages employ a phase-space representation. Local analysis derives phase diagrams for each linear subregion of a piecewise linear system. It solves the differential equations symbolically with the familiar algorithm from the theory of linear systems (Laplace transform, partial fractions expansion, and inverse Laplace transform) and invokes the QMR mathematical reasoner [Sacks, 1985] to deduce the qualitative properties of the solutions: signs of the first and second derivatives, discontinuities, singularities, and asymptotes. It uses this information to construct a phase diagram consisting of one or more significant regions on which all solutions have identical qualitative properties. Global analysis infers the joint phase diagram for a system from the local phase diagrams through a combination of algebraic and geometric reasoning. First, it concatenates the relevant portions of the individual diagrams and determines the significant regions. Next, it tests whether trajectories can cross the boundaries between pairs of adjoining regions and summarizes the results in a transition graph whose nodes and links represent regions and possible transitions. Whenever the out-degree of a region exceeds 1, PLR attempts to split it into subregions of lower degree by case analysis. Each walk through the graph denotes a trajectory in the joint phase diagram. Loops denote trajectories that remain in one region forever, whereas longer cycles denote trajectories that continually shift between a sequence of regions. PLR completes the phase diagram by sketching trajectories for all walks.

PLR has exponential time-complexity in the number of nonlinear components in a system. It must combine the solutions of 2^n sets of linear equations to perform global analysis on a system of n piecewise linear equations each composed of two lines. The original system would have to contain n nonlinear components for this situation to arise. Engineers rarely analyze systems with large numbers of nonlinearities. Doing so is a challenge for any intelligent agent.

I will illustrate local and global analysis with two examples that frequently arise in nonlinear oscillators: the Lienard equation and the van der Pol equation. Both examples are simple enough for mathematicians to analyze directly. The solutions, described in Brauer and Nohel [Brauer and Nohel, 1969], afford a standard against which to measure PLR.

III. The Lienard Equation

The Lienard equation takes on many forms. We will discuss the version

    y'' + y' + y^2 + y = 0    (1)

in this section. Approximating the nonlinear term y^2 + y with two lines, as shown in Figure 1, yields the piecewise linear equations

    y'' + y' - y/2 - 1/2 = 0  for  y ≤ -1/2    (2)
    y'' + y' + y/2 = 0        for  y ≥ -1/2    (3)

PLR chooses this approximation by default because it is the simplest one that passes through the extrema and zeroes of y^2 + y. The results generalize to any bimodal linearization that contains these points. In this example, I have chosen numeric equations for expository ease.
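The default linearization is determined entirely by the zeroes of y^2 + y (at y = -1 and 0) and its minimum (at y = -1/2, value -1/4). A minimal Python sketch of that construction, with names of my own choosing:

    def lienard_linearization(y):
        """Default two-line approximation of g(y) = y^2 + y through its
        zeroes (y = -1, 0) and its minimum (-1/2, -1/4), as in Figure 1."""
        if y <= -0.5:
            return -0.5 * y - 0.5  # line through (-1, 0) and (-1/2, -1/4)
        return 0.5 * y             # line through (-1/2, -1/4) and (0, 0)

    for y in (-1.0, -0.5, 0.0):
        print(y, lienard_linearization(y), y*y + y)  # the two agree at these points

Substituting this function for y^2 + y in equation (1) yields exactly equations (2) and (3).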
An example of global analysis of parameterized equations appears in the next section.

Figure 2 shows the phase diagrams that local analysis constructs for equations (2) and (3). Equation (2) has the solution

    y1(t) = a(y0, y0')·e^(-(1+√3)t/2) + b(y0, y0')·e^((√3-1)t/2) - 1    (4)

with

    a(y, y') = (3 - √3)(y + 1) - 2√3·y'    (5)
    b(y, y') = (3 + √3)(y + 1) + 2√3·y'    (6)

where y0 and y0' denote the initial values of y and y'.

Figure 1: Piecewise linear approximation of y^2 + y

Figure 2: Phase diagrams for (a) equation (2) and (b) equation (3). The lines a(y, y') = 0 and b(y, y') = 0 are dotted.

Let us abbreviate a(y0, y0') by a0 and b(y0, y0') by b0. The function y1 has four possible behaviors depending on the signs of a0 and b0.² It increases monotonically toward infinity for a0 negative and b0 positive, since the a0 term increases toward 0 and the b0 term increases toward infinity. Inspection of derivatives establishes that y1 decreases to a minimum then increases toward infinity for a0 and b0 positive. By similar reasoning, it decreases toward negative infinity for a0 positive and b0 negative, and increases to a maximum then decreases toward negative infinity for both negative. Two lines delimit the significant regions in which these four behaviors occur: a0 is positive for points (y0, y0') below the line a(y, y') = 0 and negative for points above it, whereas b0 is negative for points below the line b(y, y') = 0 and positive for points above it. The remaining analysis of equations (2) and (3) is analogous.

²This discussion excludes the following degenerate cases: if a0 and b0 both equal 0, y1 equals -1 identically; if a0 equals 0, y1 moves away from -1 along the line b(y, y') = 0; and if b0 equals 0, y1 approaches -1 along the line a(y, y') = 0.

Figure 3 contains the transition graph and phase diagram produced by global analysis. The significant regions are labeled A-E with region E subdivided into E1, E2, and E3. Trajectories in region A cannot cross into any other region. Trajectories in region B cross into region E because they increase toward infinity in the y direction. If the height, h, at which a trajectory crosses into E is less than h1, it enters E1 and remains there forever, spiraling around the origin. For h between h1 and h2, the trajectory crosses into region E2 then into region C then back into E1 for good. It cannot cross from region C to E2 or E3 because the upper boundary of C, the line a(y, y') = 0, intersects the boundary of E below h1. For h greater than h2, the trajectory crosses from region B to E3 then enters region D and remains there. After performing this analysis, PLR sketches the phase diagram. We can verify the results by comparing Figure 3 with Figure 4, the phase diagram for the original Lienard equation (1), as given by Brauer and Nohel [Brauer and Nohel, 1969, p. 220].

Figure 3: Transition graph and phase diagram for the piecewise Lienard equations

IV. The Van der Pol Equation

Van der Pol equations often arise in oscillatory dynamic systems. Figure 5 depicts a simple example from network theory: a capacitor, an inductor, and a nonlinear resistor connected in series. By Kirchhoff's laws, the current through the circuit, I, obeys the equation

    I'' + (k/L)(3I^2 - 1)I' + (1/LC)I = 0    (7)

with C the capacitance, L the inductance, and k a positive scaling factor. Intuitively, the system oscillates because the nonlinear resistor adds energy to the circuit at low currents and drains energy at high currents.
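The four-way case split on the signs of a0 and b0 is mechanical once equations (5) and (6) are in hand. The following Python sketch is mine; it classifies an initial point for the linear regime of equation (2) using the reconstructed coefficient formulas above.

    import math

    def classify(y0, yp0):
        """Behavior of y1 in equation (2)'s regime, by the signs of a0, b0."""
        s3 = math.sqrt(3.0)
        a0 = (3 - s3) * (y0 + 1) - 2 * s3 * yp0
        b0 = (3 + s3) * (y0 + 1) + 2 * s3 * yp0
        if a0 < 0 and b0 > 0:
            return "increases monotonically toward infinity"
        if a0 > 0 and b0 > 0:
            return "decreases to a minimum, then increases toward infinity"
        if a0 > 0 and b0 < 0:
            return "decreases toward negative infinity"
        if a0 < 0 and b0 < 0:
            return "increases to a maximum, then decreases toward negative infinity"
        return "degenerate case (a0 = 0 or b0 = 0)"

    print(classify(-2.0, 1.0))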
One obtains a piecewise linear approximation of equation (7) by replacing the nonlinear resistor model with a piecewise linear one, as illustrated in Figure 6. The analysis of the resulting equations follows the general pattern described in the previous section, although the symbolic parameters, k, L, and C, complicate the process somewhat. PLR must consider two cases, depending on whether the characteristic equations have real or complex roots. I will discuss only real roots; the complex case is similar.

Figure 4: Phase diagram for the actual Lienard equation.

Figure 5: A circuit governed by van der Pol's equation

In the case of real characteristic roots, equation (8),

    I'' - (2k/3L)I' + (1/LC)I = 0  for  |I| ≤ 1/√3 ≈ .58,    (8)

has positive roots, while equation (9) has negative ones. Figure 7 depicts the complete phase diagrams for both equations along with their regions of applicability. Equation (8) holds in region G, while equation (9) holds in F and H. As with the Lienard equation, PLR infers the joint phase diagram (Figure 8) from the individual ones by constructing a transition graph. The significant regions are F, G, and H with region G subdivided into G1 above the I axis and G2 below.

Figure 6: Piecewise linear approximation of k(I^3 - I)

Figure 7b shows that trajectories in region F of the joint phase diagram eventually enter G1. From there, they go up and right until they enter H. Similarly, Figure 7a shows that trajectories in region H eventually enter G2 and continue down and left into F. Hence, the transition graph consists of a single cycle. The phase diagram contains a unique limit cycle toward which all non-periodic trajectories spiral, but PLR currently lacks the tools to derive this fact.

Figure 7: Phase diagrams for (a) equation (8) and (b) equation (9)

Figure 8: Transition graph and phase diagram for the piecewise van der Pol equations

V. Previous Work

Commonly used tools for analyzing nonlinear device models fall into the following categories: theoretical methods, experimentation, numeric simulation, piecewise linear approximation, and qualitative reasoning. Although theoretical methods can be extremely powerful, engineers try to avoid them because of their complexity and limited applicability. Experiments and simulations yield low-level, numeric data about individual devices. Engineers must interpret the data and generalize the results to device classes. This process becomes difficult for systems containing many parameters. In interpretation, engineers can miss important properties of the model for lack of raw data. For example, discontinuities and extrema might occur between the observed or simulated points, while asymptotes may arise beyond their range. Engineers can also overlook important properties due to the sheer volume of raw data. Generalization can fail too, since a model need not behave in a certain manner for all parameter values just because it does so for certain ones.

The third method of analyzing a nonlinear system consists of constructing a piecewise linear approximation, simulating it for various parameter values, and scrutinizing the results. Piecewise linear approximation offers a convenient representation for nonlinear engineered systems. However, analysis by simulation and scrutiny suffers from the same limitation as experimentation and simulation: it provides raw data about individual devices rather than abstract properties of device classes. PLR exploits the piecewise linear representation, but replaces the simulation algorithm with one that derives higher-level information.
PLR exploits the piece- wise linear representation, but replaces the simulation al- gorithm with one that derives higher-level information. Qualitative reasoning [Bobrow, 19851 (QR) derives the abstract behavior of dynamic systems by simulating hand- crafted “qualitative” versions of their differential equations and summarizing the results. In its current form, QR falls far short of telling an engineer what he needs to know about a nonlinear system. It can only provide extremely abstract descriptions, such as “the quantity f increases for a while, reaches a maximum, and decreases thereafter.” More information is required to design, analyze, and de- bug actual devices: local properties in interesting regions such as estimates of maxima, minima, and rates of change as well as global properties such as stability, periodicity, limit cycles, and asymptotic behavior. QR abstracts away the details required to derive this information by repre- senting dynamic systems with confluences instead of dif- ferential equations. It cannot even express many functional properties that engineers find useful, such as linearity, ex- ponential decay, asymptotic approach, oscillation, damped oscillation, stability and limit cycles. QR also generates spurious behaviors. One cause, de- scribed by Kuipers (Kuipers, 1985b], is the local charac- ter of its analysis. In addition, the abstract nature of confluences introduces ambiguities that differential equa- tions preclude. For example, the equation y’ = y - y2 implies that 9’ is negative whenever y exceeds 1, whereas 658 Engineering Problem Solving the corresponding confluence leaves the sign completely ambiguous.3 Consequently, QR concludes that y can in- crease toward infinity, even though it is bounded from above. Kuipers [Kuipers, 1985a] notes that this type of am- biguity crops up in almost every clinical system of second- order or higher. The same result holds for other domains. The problem is that QR focuses on the abstract behavior of extremely general systems, whereas engineers require de- tailed information about more-specific ones. It might be possible to attain this level of detail with an extended ver- sion of QR that included a richer set of confluences and stronger analysis algorithms. I have found it more promis- ing to extend the piecewise linear approach, although PLR takes ideas from QR as well. Ilytically, the prospc L-- ---l:-L:--L?,- ^ defy known analytic techniques. [Bobrow, 19851 Daniel G. Bobrow, editor. Qualitative Reasoning about Physical Systems. M. I. T. Press, 1985. [Brauer and Nohel, 19691 Fred Brauer and John A. No- hel. The Qualitative Theory of Qrdinary Diflerential Equations. W.A. Benjamin, Inc., New York, 1969. [Kuipers, 1985a] Benjamin Kuipers. Qualitative Sirnu- lation in Medical Physiology: A Progress Report. TM 280, Massachussetts Institute of Technology, Laboratory for Computer Science, 545 Technology Square, Cambridge, MA, 02139, June 1985. [Kuipers, 1985b] Benjamin J. Kuipers. The limits of qual- itative simulation. In Proceedings of the Ninth Inter- national Joint Conference on Artificial Intelligence, pages 128-136, August 1985. [Raiman, 19861 Olivier Raiman. Order of magnitude rea- soning. In Proceedings of the National Conference on Artificial Intelligence, pages 100-104, American As- sociation for Artificial Intelligence, 1986. [Sacks, 19851 Elisha P. Sacks. Qualitative mathematical reasoning. In Proceedings of the Ninth hternational Joint Conference on Artificial Intelligence, pages 137- 139, 1985. 
³Raiman [Raiman, 1986] addresses a special case of this problem by incorporating assertions of the form "quantity a is negligible in relation to quantity b" into QR. His extension does not solve our example because neither y^2 nor y is negligible with respect to the other.

References

[Bobrow, 1985] Daniel G. Bobrow, editor. Qualitative Reasoning about Physical Systems. MIT Press, 1985.

[Brauer and Nohel, 1969] Fred Brauer and John A. Nohel. The Qualitative Theory of Ordinary Differential Equations. W.A. Benjamin, Inc., New York, 1969.

[Kuipers, 1985a] Benjamin Kuipers. Qualitative Simulation in Medical Physiology: A Progress Report. TM 280, Massachusetts Institute of Technology, Laboratory for Computer Science, 545 Technology Square, Cambridge, MA, 02139, June 1985.

[Kuipers, 1985b] Benjamin J. Kuipers. The limits of qualitative simulation. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 128-136, August 1985.

[Raiman, 1986] Olivier Raiman. Order of magnitude reasoning. In Proceedings of the National Conference on Artificial Intelligence, pages 100-104, American Association for Artificial Intelligence, 1986.

[Sacks, 1985] Elisha P. Sacks. Qualitative mathematical reasoning. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 137-139, 1985.

[Sacks, 1987a] Elisha P. Sacks. Hierarchical reasoning about inequalities. In Proceedings of the National Conference on Artificial Intelligence, American Association for Artificial Intelligence, 1987.

[Sacks, 1987b] Elisha P. Sacks. Qualitative sketching of parameterized functions. In Proceedings of the Second International Conference on Applications of Artificial Intelligence in Engineering, August 1987. Forthcoming.
Non-Deterministic Lisp with Dependency-Directed Backtracking

Ramin Zabih†, David McAllester and David Chapman
Artificial Intelligence Laboratory
Massachusetts Institute of Technology

†Author's current address: Computer Science Department, Stanford University, Stanford, California, 94305. This paper describes research done at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, supported in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contracts N00014-80-C-0505 and N00014-86-K-0180, in part by National Science Foundation grant MCS-8117633, and in part by the IBM Corporation. Ramin Zabih is supported by a fellowship from the Fannie and John Hertz Foundation.

Abstract

Extending functional Lisp with McCarthy's non-deterministic operator AMB yields a language which can concisely express search problems. Dependency-directed backtracking is a powerful search strategy. We describe a non-deterministic Lisp dialect called SCHEMER and show that it can provide automatic dependency-directed backtracking. The resulting language provides a convenient interface to this efficient backtracking strategy.

Many problems in Artificial Intelligence involve search. SCHEMER is a Lisp-like language with non-determinism which provides a natural way to express search problems. Dependency-directed backtracking is a powerful strategy for solving search problems. We describe how to use dependency-directed backtracking to interpret SCHEMER. This provides SCHEMER programs with the benefits of dependency-directed backtracking automatically.

We begin by describing the SCHEMER language. We next provide an overview of dependency-directed backtracking and list its requirements. We then show how to meet these requirements in interpreting SCHEMER. Finally, we argue that SCHEMER with automatic dependency-directed backtracking would be a useful tool for Artificial Intelligence by comparing it with current methods for obtaining dependency-directed backtracking.

I. SCHEMER is Scheme with AMB

SCHEMER consists of functional Scheme [Rees et al. 1986] plus McCarthy's ambiguous operator AMB [McCarthy 1963] and the special form (FAIL). AMB takes two arguments and non-deterministically returns the value of one of them. Selecting the arguments of the AMB's in an expression determines a possible execution. Each SCHEMER expression is thus associated with a set of possible values. In the program below, the expression (ANY-NUMBER) non-deterministically returns some whole number.

(DEFINE (ANY-NUMBER)
  (AMB 0 (1+ (ANY-NUMBER))))

Similarly, (ANY-PRIME) non-deterministically returns some prime number.

(DEFINE (ANY-PRIME)
  (LET ((NUMBER (ANY-NUMBER)))
    (IF (PRIME? NUMBER)
        NUMBER
        (FAIL))))

ANY-PRIME eliminates certain possible values by evaluating (FAIL). The expression (FAIL) has no possible values.

A mathematically precise semantics for SCHEMER is beyond the scope of this paper; there are several possible semantics that differ in technical detail [Clinger 1982, Zabih et al. 1987]. Under all these semantics, however, the expression (FAIL) can be used to eliminate possible values; finding a possible value for a SCHEMER expression requires finding an execution that doesn't evaluate (FAIL).

For a given expression there may be a very large number of different ways of choosing the values of AMB expressions.
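The possible-value semantics of AMB and FAIL can be mimicked by exhaustive enumeration. The Python sketch below is mine and models chronological enumeration only (not the dependency-directed interpreter this paper describes); the recursion of (ANY-NUMBER) is truncated to a finite prefix so the example terminates.

    def amb(*choices):
        """All possible values of (AMB c1 c2 ...), enumerated in order."""
        for c in choices:
            yield c

    def any_number(limit):
        """A finite prefix of (ANY-NUMBER)'s possible values 0, 1, 2, ..."""
        yield from amb(*range(limit))

    def any_prime(limit):
        """(ANY-PRIME): (FAIL) contributes no values, so non-primes drop out."""
        for n in any_number(limit):
            if n > 1 and all(n % d for d in range(2, n)):
                yield n

    print(list(any_prime(10)))  # [2, 3, 5, 7]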
If there are n independent binary choices in the computation then there are 2^n different combinations of choices, and thus 2^n different executions. In certain expressions most combinations of choices result in failure. Finding one or more possible values for a SCHEMER expression requires searching the various possible combinations of choices.

Interpreting SCHEMER thus requires search. The semantics of the language do not specify a search strategy. Correct interpreters with different strategies will produce the same possible values for an expression, and can differ only in efficiency. It is straightforward to write a SCHEMER interpreter that searches all possible executions in a brute force manner by backtracking to the most recent non-exhausted choice in the event of a failure. Such an interpreter would use simple "chronological" backtracking. We describe a more sophisticated SCHEMER interpreter that automatically incorporates dependency analysis and dependency-directed backtracking. This interpreter, originally described in [Zabih 1987], allows programmers to gain the efficiency benefits of dependency-directed backtracking automatically for SCHEMER code.

Figure 1: A search tree. Failures are labeled "f".

Figure 2: The search tree after labeling and dependency analysis. Capital letters are labels. The dependency set for the leftmost failure is also shown.

II. Dependency-Directed Backtracking

Dependency-directed backtracking is a general search strategy invented by Stallman and Sussman [Stallman and Sussman 1977]. It can best be understood as a technique for pruning search trees. Consider an arbitrary search tree generated by some particular search. Such a tree is shown in Figure 1. The leaves of the tree labeled with the letter "f" represent search paths which lead to failure. Dependency-directed backtracking can be used to prune such a search tree, by detecting unsearched fragments of the tree which cannot contain solutions.

Dependency-directed backtracking requires that two additional pieces of information be added to the tree. First, the non-root nodes must be assigned labels. Second, each failing leaf node must be associated with a subset of the set of labels that appear above that leaf. For reasons to be explained, the process of assigning sets of labels to failing leaf nodes is called dependency analysis. Carrying out labeling and dependency analysis on the tree of Figure 1 could result in Figure 2.
Each label represents a statement that is known to be true of all leaf nodes beneath the labeled node. For example, suppose that the above tree represents the search for a coloring of a graph such that adjacent vertices have distinct colors, and suppose that n is a vertex in the graph. In this case the label A might represent the statement that n is assigned the color red. All candidate colorings under the search node labeled A would color n red.

The leftmost leaf node in the tree of Figure 2 has been assigned the dependency set {C,E}. This means that the failure was "caused" by the labels C and E. More specifically, it means that every leaf node which is beneath both a node labeled C and a node labeled E is guaranteed to be a failure. For example, in a graph coloring problem C may represent the statement that p is colored red and E may represent the statement that m is colored red, and we may know that no solution can color both p and m red. Such a set of labels is called a nogood.

Nogoods can be used to prune fragments of the search tree. In the above tree the nogood {C,E} prunes the first and second leaf nodes (counting from the left) as well as leaf nodes nine and ten. These represent about a quarter of the entire search tree. If the nogood had contained the single label C, about half of the tree would have been pruned by this one nogood. In general, the smaller the number of labels in a nogood, the larger the fragment of the search tree pruned by that nogood.

More formally, let N be a nogood, i.e. a set of labels. We say that N prunes a given leaf node if every label in N appears above that leaf node in the search tree. Dependency-directed backtracking maintains a set of nogoods, and never looks at nodes that are pruned by a nogood in this set. When the search process examines a leaf node that turns out to be a failure, dependency analysis is used to generate a new nogood; this is added to the set of nogoods and the process continues.

A particular method of node labeling and dependency analysis is called sound if the nogoods associated with failure nodes only prune failure nodes; solution nodes should never be pruned. When computation is required to determine failure, dependencies must be maintained in a way that ensures soundness. If a label contributes to a failure, but the contribution is overlooked, solutions can be missed. For example, if dependency analysis on the leftmost failure in Figure 2 overlooked the contribution of C, a nogood consisting of just {E} would be created, which would discard the only solution.

The next two sections describe techniques for automatic node labeling and dependency analysis in SCHEMER. The automatic dependency-directed backtracking provided by these techniques makes it possible for programmers to take advantage of dependency-directed tree-pruning without the necessity of writing their own code for search node labeling and dependency analysis.

III. Node Labeling

Finding one or more of the possible values for a given SCHEMER expression involves searching the possible executions for one which does not require evaluating (FAIL). The search has an associated binary search tree; each branch in the search tree corresponds to selecting either the first or second argument as the value of a particular AMB expression.

Recall that a label on a node in a search tree represents a statement that is true of all candidate solutions under that node. In SCHEMER search trees the labels represent statements of the form "AMB-37 chooses its first argument" where AMB-37 refers to a particular AMB expression. For this to work properly, we need to identify particular AMB expressions within a given SCHEMER expression; each AMB expression must be given a unique name.

Figure 3 shows an expression in which each AMB has been given a unique name. The corresponding search tree is also shown. The non-root nodes have been labeled with statements about particular AMB's choosing their left or right arguments, and dependency analysis has been performed on the leftmost failure. The label AMB-37-L, for example, represents the statement that the AMB expression AMB-37 chooses its first (left) argument, while the label AMB-37-R represents the statement that AMB-37 chooses its second (right) argument. In this tree the failure of the leftmost node is caused by the fact that AMB-39 chose its first argument. The nogood consisting of the single label AMB-39-L prunes the first, third and fifth leaf nodes.

The choices in a SCHEMER expression must be named before searching for possible values. If the naming is done during the search process, there is danger of giving the same AMB expression different names in different regions of the search tree. This problem can be avoided by naming all the choices in the expression before starting the search process.

(LET ((X (AMB-37 3 (AMB-38 4 5)))
      (Y (AMB-39 6 7)))
  (IF (= Y 6) (FAIL) (+ X Y)))

Figure 3: A SCHEMER expression with named choices and its labeled search tree.
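The pruning test from Section II is plain set containment over a node's path labels. A minimal Python sketch of my own, using the Figure 3 labels:

    def pruned(path_labels, nogoods):
        """A leaf is pruned if some nogood's labels all appear on its path."""
        return any(ng <= set(path_labels) for ng in nogoods)

    nogoods = [{'AMB-39-L'}]
    print(pruned({'AMB-37-L', 'AMB-39-L'}, nogoods))  # True: a pruned leaf
    print(pruned({'AMB-37-L', 'AMB-39-R'}, nogoods))  # False: a surviving leaf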
The label m-37-~, for example, represents the statement that the AMFI expres- cinn 8~1-27 rhnnax ita fipct fl&‘,\ nrallment while the label “ IVLI c..- “C “IlV”““” IV” I.L”U \“a.“, .a’~““‘“““, . . llll” Vll” lUVVl APfB-37-R represents the statement that Am-37 chooses its second (right) argument. In this tree the failure of the leftmost node is caused by the fact that AHEl-39 chose its first argument. The nogood consisting of the single label AHB-39-L prunes the first, third and fifth leaf nodes. The choices in a SCHEMER expression must be named before searching for possible values. If the nam- :,, :, -1--A -1....:,- &I-- ^^^-_ L - m^^^^^ &I--,. :- A--,-- -c 111g IS UUllt: uurIIll; Cllt: searc11 yruLe:as, bIleI-e 1s uan(;er Ul giving the same A~BB expression different names in different regions of the search tree. This problem can be avoided by naming all the choices in the expression before starting the search process. For TTnffi,rt,,natel.r th;c ;c nrrt PP PDC., PO ;t ,;crht CP~P- “nIL” IuuInuY.dIJ Vlll.2 1.2 ,,“ V CA.2 bu.JJ w 1” llll&ll” U\rLll,. example consider the expression (ANY-WMBER) defined (LET ‘;! ;A+!EJ-31 3 (Am-38 4 5))) ,. “..\ (1 (Al’lB-3Y e, 111) (IF (= Y 6) (FAIL) (+ X Y>)> f 1: f 12 Figure 3: A SCHEMER expression with named choices and its labeled search tree. previously. (DEFINE (AMY-NUHE!ER) (AMFI 0 (I+ (ANY-NWTBER)))) I-SC, nhnxm AWU nvr\rr\no;r\., n.,nmrr+ ai-ml.. l..,, :,.l,,t:fi,.A III-C a”““6 nrru ~*y’cxml”” cIa*lll”I, 31111yly UC IUC;‘IClIICU as, say, AkIB-52, because it is being used to make several dif- ferent choices in different recursive calls to the procedure AMY-WUMBER. Tt :, ..--Zl.1, I.-...-..-.. c,. <c..,...:_.rln LL- ---..--:-.- --ll- lb 13 pu331u1c, 1IUWtTVtx) LU UllW 111u lone 1-ecu1-s1ve Calls in the above expression and then to name each choice in- dependently. The resulting expression is called a named choice expression. The following infinite named choice es- --_-,-I-- :- LL- ---. .,L press1011 IS LII~ rebulc of iinroiiing ihe above definition. (A.W-52 0 Cl+ (APB-53 0 (l+ (AMEI-54 0 (1+ . . .>)>>)) In the above expression each distinct choice has been givc‘ll a distinct ~7 sme. Infinite expressions such as this one can he represented by lazy S-expressions. Lazy S-expressions are ?n~lc\~pr\,,a tr\ ct~~~m~ rAh,lar\n ?nrl C,,nnrv(~~ lc1Qc;l. iazv cblbQI”~“U.3 U” OUILccI,,D LIL “LI.3”II a.11u kJuuJJl,lcLll I J”“, ) S-expressions delay the computation of their parts until those parts must be computed. When a portion of a lazy S-expression is computed, the result is saved. I-, c...,-l cl., -,,,:11, ..,l..-- ,A- @~U37R6K’D -_----- IV 111lU bllC yuasl”lc V&lUG3 Ul a lJ~,nr>lViJsln tx Y res- sion, the expression is first converted to a named choice expression by giving all AMEI expressions names. In practice Znbih, McAllester, and Chapman 61 the result is a lazy S-expression whose parts are computed A set of assumptions about the choices in a named on demand and then saved. Conceptually, however, the choice expression E will assign E a value. The value entire named choice expression is created, and all choices is computed by replacing all the named choices by their are named, before the search process begins. The search first or second arguments, depending upon the assump- process then evaluates the resulting named choice expres- tion about that choice. The resulting expression contains sion. 
Nodes in the search tree are given labels of the form AMB-52-L, which means that AMB expression 52 chooses its first (left) argument.

Producing a named choice expression from a regular SCHEMER expression turns out to be difficult. β-substitution followed by textual naming of AMB's is sufficient for the examples we have mentioned, but does not preserve the semantics of SCHEMER. This is because substitution can result in multiple choices where there should be only one. Consider the procedure below.

(DEFINE (BETA)
  ((LAMBDA (X) (+ X X)) (AMB 1 2)))

The possible values of (BETA) should be 2 and 4. Performing β-substitution produces an expression with possible values 2, 3 and 4.

It turns out that it is possible to unwind a SCHEMER expression completely so that the resulting named choice expression has the same possible values as the original expression. The basic trick is to interleave β-substitution and textual choice-naming. However, there are several subtleties involved, and the solution is too complex to describe in the space available. Interested readers are referred to [Zabih et al. 1987], which contains a complete description of the problem and its solution. Unwinding SCHEMER expressions without violating the semantics of the language was the major technical contribution of [Zabih 1987]. For our present purposes it is only important that a solution exists.

IV. Dependency Analysis

Since SCHEMER expressions can be converted to named choice expressions, the problem of finding possible values for SCHEMER expressions is reduced to the problem of finding possible values for named choice expressions. It is possible to give a simple recursive definition for named choice expressions. A named choice expression is one of the following, where the E's denote named choice expressions.

- A constant
- Failure
- A named AMB expression of the form (AMB-n El Er)
- A conditional (IF Epred Econseq Ealter)
- A primitive application (P E1 E2), where P is a Scheme primitive such as +

A given named choice such as AMB-52 may appear in several different places in a given named choice expression. We require that when this happens the arguments to the AMB-52 are the same in all cases. Named choice expressions need not be finite; they are produced top down in a lazy manner.

A set of assumptions about the choices in a named choice expression E will assign E a value. The value is computed by replacing all the named choices by their first or second arguments, depending upon the assumption about that choice. The resulting expression contains no choices at all, and either fails or has a unique possible value.

As the search for possible values of a named choice expression proceeds, assumptions are made about the various choices in the expression. When a value for an expression (or subexpression) is found, dependency analysis is performed to determine the assumptions about choices which lead to this particular value.

Recall that the job of dependency analysis is to provide a set of labels that constitute a nogood. In SCHEMER, the labels are assumptions such as AMB-57-L. A justification for a value of a named choice expression is a set of such assumptions which ensures that the expression has that value. A justification for the value of failure will therefore be a valid nogood.

The justification for a value of a named choice expression can be defined recursively in terms of the justifications for its subexpressions. If the expression is a constant or failure, the justification for its value is empty. If the expression is a choice (AMB-n El Er), then the justification for its value is the assumption AMB-n-L or AMB-n-R, added to the justification for the value of El or Er, respectively. If the predicate of a conditional expression fails, then the entire conditional fails, and the justification for this failure is equal to the justification for the failure of the predicate. If the predicate does not fail, then the justification for the value of the conditional is the union of the justification for the value of the predicate and the justification for the value of whichever branch is taken. If any argument to a primitive application fails, then the application itself fails, and the justification for this failure equals the justification for the failure of the argument. If no argument fails, the justification for the value of the application is the union of the justifications for the arguments.

Justifications are calculated incrementally as the search progresses. When the search produces a leaf node, which is a value for the named choice expression, a justification for that value is also produced. If the value is failure, then the justification will be recorded as a nogood.

The search process maintains a list of nogoods, initially empty. Whenever the search discovers a failure, dependency analysis produces a nogood, i.e. a set of assumptions that ensures that the named choice expression fails. This new nogood is added to the list. The search process discards portions of the search tree that are pruned by any of the nogoods.
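The recursive justification rules translate almost line for line into code. The Python sketch below is mine (the tuple encoding is assumed, and it handles only finite expressions); run on the Figure 3 expression, it reproduces the nogood {AMB-39-L} from Section III.

    FAIL = object()

    def evaluate(e, assume):
        """Value and justification of a named choice expression under the
        assumptions in assume, e.g. {'AMB-39': 'L'}."""
        op = e[0]
        if op == 'const':
            return e[1], set()
        if op == 'fail':
            return FAIL, set()
        if op == 'amb':                      # ('amb', name, left, right)
            name, side = e[1], assume[e[1]]
            v, j = evaluate(e[2] if side == 'L' else e[3], assume)
            return v, j | {name + '-' + side}
        if op == 'if':                       # ('if', pred, conseq, alter)
            pv, pj = evaluate(e[1], assume)
            if pv is FAIL:                   # failing predicate: its justification
                return FAIL, pj
            bv, bj = evaluate(e[2] if pv else e[3], assume)
            return bv, pj | bj               # union of predicate and branch
        if op == 'prim':                     # ('prim', function, arg, ...)
            args, just = [], set()
            for sub in e[2:]:
                v, j = evaluate(sub, assume)
                if v is FAIL:                # failing argument: its justification
                    return FAIL, j
                args.append(v)
                just |= j
            return e[1](*args), just
        raise ValueError(op)

    # The Figure 3 expression, with Y = (AMB-39 6 7) appearing twice:
    Y = ('amb', 'AMB-39', ('const', 6), ('const', 7))
    X = ('amb', 'AMB-37', ('const', 3),
         ('amb', 'AMB-38', ('const', 4), ('const', 5)))
    expr = ('if', ('prim', lambda a: a == 6, Y), ('fail',),
            ('prim', lambda a, b: a + b, X, Y))
    v, j = evaluate(expr, {'AMB-37': 'L', 'AMB-38': 'L', 'AMB-39': 'L'})
    print(v is FAIL, sorted(j))  # True ['AMB-39-L']: the nogood from the text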
l A constant If any argument to a primitive application fails then the application itself fails, and the justification for this failure equals the justification for the failure of the ar- gument. If no argument fails, the justification for the value of the application is the union of the justifica- tions for the arguments. l Failure l A named AMB expression of the form (AHB-n El E,) l A conditional (IF Epred EcmJeq Eatter) l A primitive application (P El Es), where P is a Scheme primitive such as + Justifications are calculated incrementally as the search progresses. When the search produces a leaf node, which is a value for the named choice expression, a justification for that value is also produced. If the value is ~~;!llre: then the justification will be recorded as a nogood. A given named choice such as AMB-52 may appear in sev- eral different places in a given named choice expression. We require that when this happens the arguments to the AMB-52 are the same in all cases. Named choice expressions need not be finite; they are produced top down in a lazy manner. The search process maintains a list of nogoods, ini- tially empty. Whenever the search discovers a failure, de- pendency analysis produces a nogood, i.e. a set of assump- tions that ensures that the named choice expression fails. This new nogood is added to the list. The search process discards portions of the search tree that are pruned by any of the nogoods. 62 Al Architectures Automatic dependency-directed backtracking in the SCHEMER interpreter, as described above, is a special case of the general dependency-directed backt,racking pro- cedure mentioned earlier. This interpreter makes it pos- sible to gain the efficiency of dependency-directed back- tracking automatically while writing search programs in SCHEMER. A more detailed description of the above pro- cess can be found in [Zabih et al. 19871. v. Comparison with a fair amount of work on non-chronological backtrack- ing strategies within the Prolog community [Bruynooghe and Pereira 19841. While it is likely that much of our framework for providing dependency-directed backtrack- ing could be applied to Prolog, we have not yet done so. Complicating matters are several differences between SCHEMER and the “functional” subset of Prolog (i.e. pure horn clause logic). For example, SCHEMER has clo- sures while Prolog, which uses unification to implement parameter passing, potentially has data flowing both into and out of each parameter. SCHEMER is interesting because it provides automatic backtracking, without specifying a backtracking strategy, in a language that is almost Scheme. It can thus give the user dependency-directed backtracking in a highly t’rans- parent manner. Previously available methods for obtaining dependency-directed backtracking include the direct use of a ??uth Maintenance System (or TMS) [Doyle 19791, de- Kleer’s consumer architecture [deKleer 19863 and the lan- guage AMORD [deKleer et al. 1978). These methods, how- ever, require the user to explicitly use dependency-directed backtracking or to write in an unconventional language. They also necessitate special programming techniques, be- cause of the way they use the underlying TMS. The closest language to SCHEMER is Dependency- Directed Lisp (DDL), a Lisp-based language invented by Chapman to implement TWEAK [Chapman 19851. This is not surprising, since SCI-IEMER is based on DDL. There are two differences between DDL and SCHEMER that are worth describing. 
In particular, these methods force the user to provide node labeling and dependency analysis. Deciding which facts in the search problem should be assigned TMS nodes corresponds to node labeling. Providing the TMS with logical implications, so that it can determine the labels responsible for failures, corresponds to dependency analysis. If these implications are not carefully designed it is possible to overlook the contributions of some labels; this can result in unsound nogoods which prune solutions, as mentioned earlier.

Using a TMS directly does not provide a separate language layer at all. It is easy for the problem solver to neglect to inform the TMS of the labels responsible for some decision, leading to unsound nogoods. This is also inconvenient; the user must intersperse code to solve the search problem with calls to the TMS to ensure dependency-directed backtracking. SCHEMER, on the other hand, enforces a clean separation between the code that defines the search problem, which the user writes in SCHEMER, and the code that implements the search strategy, which the interpreter provides transparently.

AMORD provides a language layer, as does the consumer architecture (to a lesser extent). The language is rule-based, though, and thus lacks a single locus of control. Such an approach is well-suited to problems that can be easily expressed with rules and a global database of assertions. On the other hand, it is difficult to use on problems that are not easily converted into rule-based form. A major advantage of SCHEMER is that it allows the user to express search problems without forcing him to think in terms of a rule-set and a global database.

The closest language to SCHEMER is Dependency-Directed Lisp (DDL), a Lisp-based language invented by Chapman to implement TWEAK [Chapman 1985]. This is not surprising, since SCHEMER is based on DDL. There are two differences between DDL and SCHEMER that are worth describing.

First, DDL used a weaker dependency-directed backtracking strategy than SCHEMER does. DDL would never use a nogood more than once. This was because DDL labels never appeared more than once in the search tree. As a result DDL considers parts of the tree containing only failures, which SCHEMER would prune. This in turn was due to the difficulty of devising a choice-naming scheme that produces repeated labels without destroying the semantics of the language.

In addition, DDL had side-effects. Side-effects complicate dependency analysis by introducing too many dependencies. In SCHEMER, justifications can be computed incrementally. When the variable X is bound to the value of (FOO), all the choices that affect the value of X can be collected incrementally in the process of evaluating the body of FOO, and no other choice can affect the value of X. In the code below, the AMB shown is never part of the justification for the value of X.

(LET ((X (FOO)))
  (LET ((Y (AMB (F) (G))))
    (BAR X Y)))

In the presence of side-effects it is hard to prove that the value of X does not depend on whether Y is (F) or (G). This is because (G), for example, could side-effect data shared with X. This makes it difficult to design a method for dependency analysis which is sound in the presence of side-effects. Our (not very determined) attempts to design such a method for dependency analysis have produced such large nogoods that pruning never occurs.

VI. Conclusions

We have shown that SCHEMER, a non-deterministic language based on Lisp, can elegantly express search problems, and that it can provide automatic dependency-directed backtracking. The resulting interpreter allows users to gain the benefits of this backtracking strategy while writing in a remarkably conventional language.
VI. Conclusions

We have shown that SCHEMER, a non-deterministic language based on Lisp, can elegantly express search problems, and that it can provide automatic dependency-directed backtracking. The resulting interpreter allows users to gain the benefits of this backtracking strategy while writing in a remarkably conventional language. We suspect that many search programs could benefit from dependency-directed backtracking if it were only more accessible. It is our hope that SCHEMER will make dependency-directed backtracking a more popular search strategy in the AI community.

Acknowledgments

Alan Bawden, Mark Shirley and Gerald Sussman helped us considerably with SCHEMER. Phil Agre, Jonathan Rees, Jeff Siskind, Daniel Weise, Dan Weld and Brian Williams also contributed useful insights. John Lamping and Joe Weening read and commented on drafts of this paper.

References

[Abelson and Sussman 1985] Harold Abelson, Gerald Jay Sussman, and Julie Sussman. Structure and Interpretation of Computer Programs. MIT Press, Cambridge, Massachusetts, 1985.
[Bruynooghe and Pereira 1984] Maurice Bruynooghe and Luis Pereira. "Deduction Revision by Intelligent Backtracking". In Implementations of Prolog, J. Campbell (editor). Ellis Horwood, Chichester, 1984.
[Chapman 1985] David Chapman. "Planning for Conjunctive Goals". MIT AI Technical Report 802, November 1985. Revised version to appear in Artificial Intelligence.
[Clinger 1982] William Clinger. "Nondeterministic Call by Need is Neither Lazy Nor by Name". Proceedings of the ACM Conference on LISP and Functional Programming, 226-234, 1982.
[deKleer 1986] Johan deKleer. "Problem Solving with the ATMS". Artificial Intelligence 28(1986), 197-224.
[deKleer et al. 1978] Johan deKleer, Jon Doyle, Charles Rich, Guy Steele, and Gerald Jay Sussman. "AMORD, a Deductive Procedure System". MIT AI Memo 435, January 1978.
[Doyle 1979] Jon Doyle. "A Truth Maintenance System". Artificial Intelligence 12(1979), 231-272.
[McCarthy 1963] John McCarthy. "A basis for a mathematical theory of computation". In Computer Programming and Formal Systems, P. Braffort and D. Hirschberg (editors). North-Holland, Amsterdam, 1963.
[Rees et al. 1986] Jonathan Rees et al. "Revised^3 Report on the Algorithmic Language Scheme". SIGPLAN Notices 21(12), December 1986.
[Stallman and Sussman 1977] Richard Stallman and Gerald Jay Sussman. "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis". Artificial Intelligence 9(1977), 135-196.
[Warren et al. 1977] D. Warren, L. Pereira and F. Pereira. "Prolog - the language and its implementation compared with Lisp". ACM Symposium on Artificial Intelligence and Programming Languages, 1977.
[Zabih 1987] Ramin Zabih. Dependency-Directed Backtracking in Non-Deterministic Scheme. M.S. thesis, MIT Department of Electrical Engineering and Computer Science, January 1987. Revised version available as MIT AI Technical Report 956, July 1987.
[Zabih et al. 1987] Ramin Zabih, David McAllester and David Chapman. "Dependency-Directed Backtracking in Non-Deterministic Scheme". To appear in Artificial Intelligence. (Preliminary draft available from the authors.)
1987
11
561
Probabilistic Semantics for Qualitative Influences

Michael P. Wellman
MIT Laboratory for Computer Science
545 Technology Square
Cambridge, MA 02139

Abstract

What's in an influence link? To answer this foundational question, I propose a semantics for qualitative influences: a positively influences b if and only if the posterior distribution for b given a increases with a in the sense of first-order stochastic dominance. By requiring that this condition hold in all contexts, we gain the ability to perform inference across chains of qualitative influences. Under a set of basic desiderata, the proposed definition is necessary as well as sufficient for this desirable computational property.

I. Introduction

Innumerable AI programs incorporate constructs that are intended to capture the notion that one variable "causes" or influences another in some particular fashion, at precisions ranging from the mere direction of influence to exact numerical relationships. Although such terms as "cause" and "influence" are often defined rather loosely in knowledge language specifications, any inference procedure that manipulates models containing these terms imposes constraints on their possible meaning.

In the sections below, I investigate the constraints imposed on a semantics for qualitative probabilistic influences by the most basic properties of typical inference algorithms. Qualitative influences are those at the imprecise end of the spectrum, asserting only a direction of association among variables. In looking for a probabilistic semantics we admit models where the directions are not guaranteed, and the functional relationships are not deterministically fixed.

II. Example: The Therapy Advisor

Our discussion of qualitative influences is set in the context of a simple causal model taken from Swartout's program for digitalis therapy [Swartout, 1983]. The model, shown in Figure 1, is a fragment of the knowledge base that Swartout used to re-implement the Digitalis Therapy Advisor [Gorry et al., 1978] via an automatic programmer.¹

Figure 1: Part of the causal model for digitalis therapy. The direction on a link from a to b indicates the effect of an increase in a on b.

In the figure the elliptical nodes represent random variables. The rectangular node is a decision variable, in this case the dosage of digitalis administered to the patient. The hexagonal node is called the value node and represents the utility of the outcome to the patient. This terminology and notation are adapted from influence diagrams [Shachter, 1986], a probabilistic modeling formalism similar to Bayes networks [Pearl, 1986a].

Influences among the variables are indicated by dependence links, annotated with a sign denoting the direction of influence. Thus digitalis negatively influences conduction and positively influences automaticity. The former is the desired effect of the drug, because a decrease in conduction decreases the heart rate, which is considered beneficial for patients with tachycardia (tach), the population of interest here. The desirability of lower heart rates is represented by the negative influence on the value node (given tach), asserting that lower rates increase expected utility. The increase in automaticity is an undesired side-effect of digitalis because this variable is positively related to the probability of ventricular fibrillation (v. fib.), a life-threatening cardiac state. Calcium (Ca) and potassium (K) levels also influence the level of automaticity.

One of the primary advantages of encoding the digitalis model qualitatively is modularity, a knowledge representation issue of particular concern in the case of uncertainty [Heckerman and Horvitz, 1987].

¹ Supported by National Institutes of Health Grant No. R01 LM04493 from the National Library of Medicine.
While the exact probabilistic relationships among these variables vary from patient to patient, the directions of the relations are reliably taken as constant. Conclusions drawn from this model are therefore valid for a much broader class of patients.

The conclusions we would like our programs to derive from the digitalis model are those taken for granted in the description above. For example, we unthinkingly assumed that the effects of digitalis on conduction and of conduction on heart rate would combine to imply that digitalis reduces the heart rate. Further, because lower heart rates are desirable, digitalis is therapeutic along the upper path. Similarly, it is toxic along its lower path to the value node. The tradeoff between therapy and toxicity cannot be resolved by the mere qualitative influences in the model.

The remainder of this paper develops a semantics for qualitative influences that justifies the kinds of inferences we require while providing the maximum possible degree of modularity. A formalism for qualitative influences among binary events was introduced in a previous paper [Wellman, 1987]. In the sections below I present the basic definitions, extending them to cover multi-valued parameters. The resulting definition is shown to be the weakest that satisfies our inference desiderata.

III. Qualitative Influences

Consider two random variables, a and b. Informally, when a and b are dichotomous events, a qualitative influence is a statement of the form "a makes b more (or less) likely." This binary case is easy to capture in a probabilistic assertion. Let $A$ and $\bar{A}$ denote the assertions a = true and a = false, respectively, and similarly, $B$ and $\bar{B}$. Then we say "a positively influences b," written $S^+(a,b)$, if and only if

    $\forall x \;\; \Pr(B \mid Ax) \ge \Pr(B \mid \bar{A}x)$.   (1)

In the equation x ranges over all assignments to the other event variables consistent with both $A$ and $\bar{A}$. The quantification is necessary to assert that the influence holds in all contexts, not just marginally. Because of this context variable, $S^+$ holds in a particular influence network; programs that alter the structure of the network may exhibit non-monotonicity in $S^+$ [Grosof, 1987]. Conditions analogous to (1) and those following serve to define negative and zero influences, omitted here for brevity. For the dichotomous case, Bayes's rule implies that (1) is equivalent to

    $\forall x \;\; \Pr(A \mid Bx) \ge \Pr(A \mid \bar{B}x)$.   (2)

In the terminology of Bayesian revision, (1) is a condition on posteriors, while (2) is a condition on likelihoods. Notice that $S^+(a,b)$ is simply an assertion that the likelihood ratio is greater than or equal to unity.
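The equivalence of (1) and (2) follows from Bayes's rule in one step; the short derivation below (with the context x suppressed) is added here only for completeness:

    $\Pr(B \mid A) \ge \Pr(B \mid \bar{A})
    \iff \frac{\Pr(AB)}{\Pr(AB) + \Pr(A\bar{B})} \ge \frac{\Pr(\bar{A}B)}{\Pr(\bar{A}B) + \Pr(\bar{A}\bar{B})}
    \iff \Pr(AB)\,\Pr(\bar{A}\bar{B}) \ge \Pr(A\bar{B})\,\Pr(\bar{A}B)$,

and the final condition is symmetric under exchanging the roles of a and b, so it is likewise equivalent to $\Pr(A \mid B) \ge \Pr(A \mid \bar{B})$.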
Formalizing $S^+$ is not quite so straightforward when a and b take on more than two values. In such cases we want to capture the idea that "higher values of a make higher values of b more likely." An obvious prerequisite for such statements is some interpretation of "higher." Therefore, we require that each random variable be associated with an order $\ge$ on its values. For numeric variables such as "potassium concentration," this relation has the usual interpretation; for variables like "automaticity" a measurement scale and ordering relation must be contrived.

The more troublesome part of defining positive influences is in specifying what it means to "make higher values of b more likely." Intuitively, we want a statement that the probability distribution for b shifts toward higher values as a increases. To make such a statement, we need an ordering that, given any two cumulative probability density functions (CDFs) $G_1$ and $G_2$ over b, determines whether $G_1$ is "higher" than $G_2$.

However, not all probability distributions can be easily ordered according to the size of the random variable. Different rankings are obtained through comparing distributions by median, mean, or mean-log, for example. We require an ordering that is robust to changes of these measures because the random variables need be described by merely ordinal scales [Krantz et al., 1971]. An assertion that calcium concentration positively influences automaticity should hold whether calcium is measured on an absolute or logarithmic scale, and regardless of how automaticity is measured.

An ordering criterion with the robustness we desire is first-order stochastic dominance (FSD) [Whitmore and Findlay, 1978]. FSD holds for $G_1$ over $G_2$ if and only if the mean of $G_1$ is greater than the mean of $G_2$ for any monotonic transform of b. That is, for all monotonically increasing functions $\phi$,

    $\int \phi(b)\, dG_1(b) \ge \int \phi(b)\, dG_2(b)$.   (3)

A necessary and sufficient condition for (3) is

    $\forall b \;\; G_1(b) \le G_2(b)$.   (4)

That is, for any given value of b the probability of obtaining b or less is smaller for $G_1$ than for $G_2$. For further discussion and a proof that (4) is equivalent to (3), see [Fishburn and Vickson, 1978].

We are now ready to define qualitative influences. Let $F(b \mid a_i x)$ be the CDF for b given $a = a_i$ and context x.² Then $S^+(a,b)$ if and only if

    $\forall a_1, a_2, x \;\; a_1 \ge a_2 \Rightarrow F(b \mid a_1 x)$ FSD $F(b \mid a_2 x)$.   (5)

Adopting the convention for binary events that true > false, we can verify that (5) is a generalization of (1). Like (1), (5) is a condition on posteriors. Milgrom [Milgrom, 1981] proves that the equivalent likelihood condition is the Monotone Likelihood Ratio Property (MLRP) from statistics [Berger, 1985].

Finally, we need a special definition for influences on the value node. The variable a positively influences utility, $U^+(a)$, if and only if

    $\forall a_1, a_2, x \;\; a_1 \ge a_2 \Rightarrow u(a_1, x) \ge u(a_2, x)$,   (6)

where u is a utility function [Savage, 1972] defined over the event space.

² As above, x is an assignment to the remaining random variables consistent with the condition $a = a_i$. We need to include x here and in the definitions below because these conditions will be applied in situations where x is partially or totally known. If we had stated the conditions in marginal terms ("on average, a positively influences b"), it would not be valid to apply them in specific contexts.
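Condition (4) gives a direct way to test FSD, and hence $S^+$, on discrete data. The Python sketch below checks a toy conditional distribution; the numbers and function names are illustrative assumptions, not anything from the digitalis model.

    import numpy as np

    def fsd(cdf_hi, cdf_lo):
        # First-order stochastic dominance, condition (4):
        # CDF_hi(b) <= CDF_lo(b) pointwise.
        return bool(np.all(cdf_hi <= cdf_lo + 1e-12))

    # Rows are P(b | a) for increasing values of a; columns are ordered b.
    P = np.array([[0.5, 0.3, 0.2],    # a = low
                  [0.3, 0.4, 0.3],    # a = medium
                  [0.1, 0.3, 0.6]])   # a = high
    F = np.cumsum(P, axis=1)

    # S+(a, b) requires F(b | a1) FSD F(b | a2) whenever a1 >= a2.
    print(all(fsd(F[i], F[j]) for i in range(3) for j in range(i)))  # True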
IV. Chains of Influence

In an earlier paper on qualitative influences [Wellman, 1987], I considered networks of variables connected by influence links describing the direction of probabilistic dependence. There, I demonstrated for the binary case that, in the absence of direct links from a to c, $S^+(a,b) \wedge S^+(b,c) \Rightarrow S^+(a,c)$. From a computational perspective the ability to perform inference across influence chains is an essential property of a qualitative algebra. From the digitalis model, for example, we would like to deduce that increasing the dose of digitalis decreases the heart rate but increases the likelihood of v. fib. Indeed, most programs with models like this would make such an inference. Fortunately, the definition offered above for $S^+$ implies transitivity for multi-valued as well as binary variables.

Proposition 1. If a and c are not connected by any direct links, $S^+(a,b)$, and $S^+(b,c)$, then $S^+(a,c)$ holds in the network obtained by removing b.

Proof: Choose $a_1$ and $a_2$ such that $a_1 \ge a_2$, and an $x_0$ consistent with $a_1$, $a_2$, and all b. Let G denote the conditional CDF for c and $\underline{c}$ the minimal value of the variable. By the definition of cumulative probability we have

    $G(c_0 \mid a_i x_0) = \int_{\underline{c}}^{c_0} \int f_{bc}(bc \mid a_i x_0)\, db\, dc$.   (7)

Changing the order of integration and decomposing the joint probability yields

    $G(c_0 \mid a_i x_0) = \int \int_{\underline{c}}^{c_0} f_c(c \mid a_i b x_0)\, f_b(b \mid a_i x_0)\, dc\, db$.   (8)

Because a and c are conditionally independent given b and x, we can remove $a_i$ from the $f_c$ expression.³ Rewriting the density function as the derivative of a cumulative, we get

    $G(c_0 \mid a_i x_0) = \int \left[ \int_{\underline{c}}^{c_0} f_c(c \mid b x_0)\, dc \right] dF(b \mid a_i x_0)$.   (9)

The inner integral is simply the CDF for c given b:

    $G(c_0 \mid a_i x_0) = \int G(c_0 \mid b x_0)\, dF(b \mid a_i x_0)$.   (10)

Because b positively influences c, the pointwise FSD condition (4) implies that for any $c_0$, $G(c_0 \mid b x_0)$ is a decreasing function of b. And $S^+(a,b)$ entails FSD of $F(b \mid a_1 x_0)$ over $F(b \mid a_2 x_0)$. Therefore, (3) applies with the inequality reversed (negating $G(c_0 \mid b x_0)$ yields an increasing function), leading to the conclusion

    $\forall c_0 \;\; G(c_0 \mid a_1 x_0) \le G(c_0 \mid a_2 x_0)$,   (11)

implying FSD. Because $a_1$, $a_2$, and $x_0$ were chosen arbitrarily, we have finally $S^+(a,c)$.

Similar arguments with the appropriate signs and directions switched would reveal that chains of influences may be combined by sign multiplication.

³ The conditional independence follows from separation in the influence network. See Pearl [Pearl, 1986b] for a discussion of the independence properties of graphical probability representations.
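Proposition 1 is easy to illustrate numerically. In the discrete case the integral in (10) becomes a matrix product, so composing two links and re-testing condition (4) takes a few lines; the distributions below are made-up examples, not data from the paper.

    import numpy as np

    def satisfies_s_plus(P):
        # Condition (4) between every pair of rows ordered by the conditioning variable.
        F = np.cumsum(P, axis=1)
        return all(np.all(F[i] <= F[j] + 1e-12)
                   for i in range(len(F)) for j in range(i))

    P_b_a = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.2, 0.7]])   # P(b | a), rows: increasing a
    P_c_b = np.array([[0.7, 0.3],
                      [0.4, 0.6],
                      [0.2, 0.8]])        # P(c | b), rows: increasing b

    P_c_a = P_b_a @ P_c_b                 # discrete analogue of (10)
    print(satisfies_s_plus(P_b_a), satisfies_s_plus(P_c_b),
          satisfies_s_plus(P_c_a))        # True True True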
In the remainder of this section I present some simple desiderata for a qualitative influence definition that entail the necessity of FSD for chaining influences. We start by specifying the form such definitions must take. To capture the intent of "higher values of a make higher values of b more likely" in a probabilistic semantics, it seems reasonable to restrict our attention to conditions on the posterior distribution of b for increasing values of a. Therefore, we postulate that a definition of $S^+(a,b)$ must be of the form

    $\forall a_1, a_2, x \;\; a_1 \ge a_2 \Rightarrow F(b \mid a_1 x) \; R \; F(b \mid a_2 x)$,   (12)

where R is some relation on CDFs. This condition is exactly (5) with FSD replaced by the more abstract relation.

There are two basic desiderata that severely restrict the possible Rs. First, the definition for $S^+$ induced by R in (12) must satisfy Proposition 1. Without the ability to chain inferences, the qualitative influence formalism has little computational value. Second, the condition must be a generalization of the original definition of $S^+$ for dichotomous events (1). With only two possible values there does not appear to be a weaker monotonicity condition. These criteria lead to a sharp conclusion.

Proposition 2. Let $S^+(a,b)$ be defined by (12). Given the following conditions:

1. Proposition 1;
2. For binary b, $a_1 \ge a_2$, and x,

    $F(b \mid a_1 x) \; R \; F(b \mid a_2 x) \iff \Pr(B \mid a_1 x) \ge \Pr(B \mid a_2 x)$;   (13)

the weakest R is FSD.

Proof: First, note that FSD satisfies these conditions. Next, assume that R satisfies them but R does not entail FSD. We will start with an instantiation of Proposition 1 and derive a contradiction. Let a, b, and c be the only variables (so we can safely ignore x) with $S^+(a,b)$, $S^+(b,c)$, and no other direct links. For concreteness, let b range over the unit interval [0, 1] and c be binary with $\Pr(C \mid b) = \phi(b)$, for some monotonic $\phi : [0,1] \to [0,1]$. The monotonicity of $\phi$ guarantees $S^+(b,c)$. By assumption, Proposition 1 applies, yielding the conclusion $S^+(a,c)$ and therefore $F(c \mid a_1) \; R \; F(c \mid a_2)$. Because c is binary, (13) must hold. Expanding the expression for the posterior probability of c given a, the RHS of (13) becomes

    $\int_0^1 \phi(b)\, dF(b \mid a_1) \ge \int_0^1 \phi(b)\, dF(b \mid a_2)$.   (14)

Because $\phi$ may be any monotonic function, FSD is necessary for (14) and is therefore entailed by R.

The force of this result is weakened somewhat by the a priori restriction of definitions to those having the form of (12). Many statistical concepts of directional relation (based on correlation or joint expectations, for example) do not fit (12) yet appear to be plausible candidates for a definition of qualitative influence. Quadrant dependence [Lehmann, 1966] holds between a and b when⁴

    $\forall a_1, a_2 \;\; a_1 \ge a_2 \Rightarrow F(b \mid a \le a_1)$ FSD $F(b \mid a \le a_2)$.   (15)

Lehmann proves that quadrant dependence is necessary but not sufficient for regression dependence, which is his terminology for (5) without the quantification over contexts x. As quadrant dependence is weaker yet still exhibits transitivity,⁵ it seems to be an attractive alternative to regression dependence. To justify our choice of the latter, we must appeal to the decision-making implications of probabilistic models.

⁴ This is actually the condition Lehmann proposes as a strengthening of quadrant dependence. The basic quadrant dependence fixes $a_1$ at a's maximal value.
⁵ For transitivity we need to quantify over contexts in (15). The proof parallels that for Proposition 1.
V. Decisions

The prime motivation for adopting a probabilistic semantics is so that the behavior of our programs can be justified by Bayesian decision theory [Savage, 1972]. A decision is valid with respect to an influence model if expected utility is maximized. For example, if $U^+(a)$ and there are no indirect paths from a to the value node, then a decision of $a_1$ over $a_2$ is valid if and only if $a_1 \ge a_2$, by the definition of $U^+$ (6).⁶ Decision-making power is enhanced if we can deduce new influences on utility from chains of influences in the network. Our definition of qualitative influence is necessary as well as sufficient for such inferences.

Proposition 3. Consider a network where $U^+(b)$ holds and a and u are not connected by any direct links. A necessary and sufficient condition for $U^+(a)$ on removal of b is $S^+(a,b)$ as defined by (5).

Proof: The expected utility of $a_i$ with any x is given by

    $u(a_i, x) = \int u(b, x)\, dF(b \mid a_i x)$.   (16)

$U^+(a)$ is satisfied in the reduced network if and only if $u(a_i, x)$ is increasing in $a_i$. From (6) we know that $u(b, x)$ is monotonically increasing in b. In fact, it can be any monotonic function. Therefore, (16) is increasing in $a_i$ under the same conditions as (3), which is exactly $S^+$ as defined by (5).

Proposition 3 demonstrates that while conditions like quadrant dependence which are weaker than $S^+$ may be sufficient for propagating influences across chains, they are not adequate to justify decisions across chains. For choosing among alternatives, the relevant parameter is the utility function evaluated at a point; utilities conditioned on intervals of the decision variable (as in quadrant dependence) do not have the same decision-making import.

VI. Extensions

The basic definitions above can be extended in a variety of ways. Conditional influences, defined for binary events in a previous paper [Wellman, 1987], simply delimit the range of x in (5). For example, the negative influence of heart rate on utility in the digitalis model is conditional on tachycardia.

Swartout's XPLAIN knowledge base included the "domain principle" that if a state variable acts synergistically with the drug to induce toxicity, then smaller doses should be given for higher observations of the variable [Swartout, 1983]. This fact could be derived by a domain-independent inference procedure given a suitable definition for qualitative synergy. We can say that two variables synergistically influence a third if their joint influence is greater (in the sense of FSD) than separate statistically independent influences.⁷ In the digitalis example, we need to assert that digitalis acts at least independently with Ca and K deviations in increasing automaticity. In addition, we must specify that the decrease in utility for a given increase in automaticity is larger when automaticity is already high. Such a relationship can be captured by an assertion that automaticity is synergistic with itself in its toxic effects.

VII. Related Work

Philosophers have long attempted to develop mathematical definitions of causality. Motivated by computational rather than philosophical concerns, I have ignored in this treatment temporal properties, mechanisms, spuriousness, and other issues salient to causality. These concerns aside, Suppes [Suppes, 1970] proposes a probabilistic condition equivalent to (1) without the context quantification for binary events. For multi-valued variables, Suppes suggests quadrant dependence (15).

As suggested previously, ordering of random variables has also attracted considerable interest in statistics [Berger, 1985, Lehmann, 1966, Ross, 1983] and decision theory [Whitmore and Findlay, 1978]. Milgrom [Milgrom, 1981] demonstrates the application of MLRP to theoretical problems in informational economics.

The key difference between the $S^+$ definition proposed here and previous work is that we obtain transitivity by requiring the condition to hold in all contexts. Suppes shows that the causal algebra induced by his condition, defined only at the margin, does not possess the transitive property. As argued above, this is a computationally essential characteristic of qualitative influences.

⁶ The existence of other paths from a to utility would leave open the possibility that the net influence of a is negative. For example, we could summarize the therapeutic effect of digitalis through conduction and heart rate as a direct positive influence. But this might be outweighed by the indirect negative influence of digitalis via automaticity.
⁷ This type of relationship was exploited by NESTOR [Cooper, 1984], a diagnostic program based on probabilistic inequalities.
VIII. Conclusions

Despite the ubiquity of qualitative influence assertions in knowledge representation mechanisms, there has been little study of the semantics of such constructs. Previous work either denies the probabilistic nature of the relationships among variables in the model or takes for granted the ability to draw inferences by chaining influences in the network. I have defined a positive qualitative influence of a on b as an assertion that, in all contexts, the posterior probability distribution for b given a is stochastically increasing (FSD) in a. A series of propositions provided theoretical support for this $S^+$ definition:

• $S^+$ supports chaining of influences.
• $S^+$ is the weakest posterior condition that supports chaining of influences.
• $S^+$ is necessary and sufficient for chaining decisions across influences.

A semantics for qualitative influences should prove valuable for analyzing knowledge bases like the digitalis model of Figure 1, as well as knowledge representation theories that include similar constructs. In particular, the definition of $S^+$ can help to evaluate the potential of purely qualitative methods like Cohen's endorsement approach [Cohen, 1985] and to characterize the techniques from AI work on qualitative reasoning that are valid in probabilistic domains.

Acknowledgments

Zak Kohane, Bill Long, Ramesh Patil, Elisha Sacks, Kate Unrath, and Alex Yeh positively influenced this paper.

References

[Berger, 1985] James O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer-Verlag, second edition, 1985.
[Cohen, 1985] Paul R. Cohen. Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach. Volume 2 of Research Notes in Artificial Intelligence, Pitman, 1985.
[Cooper, 1984] Gregory Floyd Cooper. NESTOR: A computer-based medical diagnostic aid that integrates causal and probabilistic knowledge. PhD thesis, Stanford University, November 1984.
[Fishburn and Vickson, 1978] Peter C. Fishburn and Raymond G. Vickson. Theoretical foundations of stochastic dominance. In [Whitmore and Findlay, 1978].
[Gorry et al., 1978] G. Anthony Gorry, Howard Silverman, and Stephen G. Pauker. Capturing clinical expertise: A computer program that considers clinical responses to digitalis. American Journal of Medicine, 64:452-460, March 1978.
[Grosof, 1987] Benjamin N. Grosof. Non-monotonicity in probabilistic reasoning. In John F. Lemmer, editor, Uncertainty in Artificial Intelligence, North-Holland, 1987.
[Heckerman and Horvitz, 1987] David E. Heckerman and Eric J. Horvitz. The myth of modularity in rule-based systems. In John F. Lemmer, editor, Uncertainty in Artificial Intelligence, North-Holland, 1987.
[Krantz et al., 1971] David H. Krantz, R. Duncan Luce, Patrick Suppes, et al. Foundations of Measurement. Academic Press, New York, 1971.
[Lehmann, 1966] E. L. Lehmann. Some concepts of dependence. Annals of Mathematical Statistics, 37:1137-1153, 1966.
[Milgrom, 1981] Paul R. Milgrom. Good news and bad news: Representation theorems and applications. Bell Journal of Economics, 12:380-391, 1981.
[Pearl, 1986a] Judea Pearl. Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29:241-288, 1986.
[Pearl, 1986b] Judea Pearl. Markov and Bayes networks: A comparison of two graphical representations of probabilistic knowledge. Technical Report R-46, UCLA Computer Science Department, September 1986.
[Ross, 1983] Sheldon M. Ross. Stochastic Processes. John Wiley and Sons, 1983.
[Savage, 1972] Leonard J. Savage.
The Foundations of Statistics. Dover Publications, New York, second edition, 1972.
[Shachter, 1986] Ross D. Shachter. Evaluating influence diagrams. Operations Research, 34, 1986.
[Suppes, 1970] Patrick Suppes. A Probabilistic Theory of Causality. North-Holland Publishing Co., Amsterdam, 1970.
[Swartout, 1983] William R. Swartout. XPLAIN: A system for creating and explaining expert consulting programs. Artificial Intelligence, 21:285-325, 1983.
[Wellman, 1987] Michael P. Wellman. Qualitative probabilistic networks for planning under uncertainty. In John F. Lemmer, editor, Uncertainty in Artificial Intelligence, North-Holland, 1987.
[Whitmore and Findlay, 1978] G. A. Whitmore and M. C. Findlay, editors. Stochastic Dominance: An Approach to Decision Making Under Risk. D. C. Heath and Company, Lexington, MA, 1978.
1987
110
562
Extracting Qualitative Dynamics from Numerical Experiments

Kenneth Man-kam Yip
MIT Artificial Intelligence Laboratory
NE43-438, 545 Technology Square, Cambridge, MA 02139

Abstract

The phase space is a powerful tool for representing and reasoning about the qualitative behavior of nonlinear dynamical systems. Significant physical phenomena of the dynamical system - periodicity, recurrence, stability and the like - are reflected by outstanding geometric features of the trajectories in the phase space. Successful use of numerical computations to completely explore the dynamics of the phase space depends on the ability to (1) interpret the numerical results, and (2) control the numerical experiments. This paper presents an approach for the automatic reconstruction of the full dynamical behavior from the numerical results. The approach exploits knowledge of Dynamical Systems Theory which, for certain classes of dynamical systems, gives a complete classification of all the possible types of trajectories, and a list of bifurcation rules which govern the way trajectories can fit together in the phase space. These bifurcation rules are analogous to Waltz's consistency rules used in labeling of line drawings. The approach is applied to an important class of dynamical system: the area-preserving maps, which often arise from the study of Hamiltonian systems. Finally, the paper describes an implemented program which solves the interpretation problem by using techniques from computational geometry and computer vision.

I. Introduction

The theory of any functions begins naturally with its qualitative aspect, and thus the problem which first presents itself is the following: Construct the curves defined by differential equations. - Henri Poincare

Qualitative Physics is a young field. Progress is made when researchers formalize and implement their understanding of how certain qualitative reasoning tasks, such as prediction of future behavior and explanation of how the behavior comes about, are performed in particular problem domains. Two domains, among others, have received much attention: circuit analysis and design in the engineering domain, and simple boilers and fluid flow in commonsense physics. Early works in Qualitative Physics primarily dealt with incremental deviation from equilibrium states where time evolution is not explicitly considered [De Kleer, 1979]. More recent works attempt to extend De Kleer's qualitative algebra and incremental analysis to handle time-varying behavior [Forbus, 1984, Williams, 1984, Williams, 1986, Kuipers, 1984]. The machineries developed for qualitative reasoning - qualitative state vector, quantity space, and limit analysis - are largely applicable to systems which are piecewise well-approximated by low-order linear systems or by first order nonlinear differential equations.

The behavior of linear systems is particularly simple: the complete input-output behavior can be summarized in a single system transfer function. Consequently, if the response to one type of input is known, no more information is needed to determine responses for other input signals.

The situation in a nonlinear system is completely different: essential changes in the qualitative behavior of the system may occur as the amplitude of the input signal changes, or as the starting conditions are varied. More importantly, nonlinear systems have a far richer spectrum of dynamical behavior.
Simple equilibrium points, periodic and quasiperiodic motion, limit cycles, chaotic motion as unpredictable as a sequence of coin tosses - these are some of the behavior found in a typical nonlinear system.

Unfortunately, these nonlinear characteristics do not show up in first order nonlinear differential equations. This is because the continuity and (local) uniqueness of flow severely constrain the kind of behavior possible on the real line: the flow either tends towards an equilibrium, or goes off to infinity.

In this research, I therefore propose to look at dynamical systems - those typically encountered in Physics - to provide a new source of examples for investigation into the fundamental issues of descriptive language, style of reasoning, and representation techniques in qualitative reasoning about nonlinear dynamical systems. Specifically, I will consider two-dimensional discrete dynamical systems defined by area-preserving maps containing a single control parameter. The study of area-preserving maps - transformations of the plane which preserve area - began with the venerable problem of the stability of the solar system. I choose to investigate this simplest non-trivial type of conservative system because many important problems in physics - the restricted 3-body problem, orbits of particles in accelerators, and two coupled nonlinear oscillators, just to mention a few - can be reduced to the study of area-preserving maps.

To explore the complete dynamics of a nonlinear system over a large region of the phase space and parameter space is a fairly typical problem in the physics literature. A good illustration of this task is provided by Henon's well-known paper, "Numerical Study of Quadratic Area-Preserving Mappings" [Henon, 1969]. The goal of Henon's paper is to provide a description of the main properties of the quadratic map:

    $x_{n+1} = x_n \cos\alpha - (y_n - x_n^2)\sin\alpha$
    $y_{n+1} = x_n \sin\alpha + (y_n - x_n^2)\cos\alpha$

where x and y are the state variables, and $\alpha$ is the control parameter. As the control parameter varies, I want my program to automatically generate a family of phase portraits describing the main dynamical properties of the map for all initial conditions in U and parameter values in V. The main results of Henon's paper are shown in Figures 1(a)-(f), which display the output of many numerical simulations.

Figure 1: A partial list of phase portraits from numerical experiments. The figures are generated by plotting several hundred successive values of $(x_n, y_n)$. (a) $\alpha = 1.16$ (b) $\alpha = 1.33$ (c) $\alpha = 1.58$ (d) $\alpha = 2.0$ (e) $\alpha = 2.04$ (f) $\alpha = 2.21$. Dashed line: axis of symmetry.
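A direct numerical rendering of such a phase portrait takes only a few lines. The following Python sketch iterates the quadratic map from several initial conditions; the parameter value, the set of initial conditions, and the plotting choices are illustrative assumptions, not Henon's exact settings.

    import numpy as np
    import matplotlib.pyplot as plt

    def henon_orbit(x0, y0, alpha, n=500):
        # Iterate Henon's quadratic area-preserving map n times.
        c, s = np.cos(alpha), np.sin(alpha)
        pts = np.empty((n, 2))
        x, y = x0, y0
        for i in range(n):
            x, y = x*c - (y - x*x)*s, x*s + (y - x*x)*c
            if abs(x) > 1e6:          # an escape orbit; stop iterating early
                return pts[:i]
            pts[i] = x, y
        return pts

    alpha = 1.33                      # one of the values shown in Figure 1
    for x0 in np.linspace(0.05, 0.6, 8):
        orbit = henon_orbit(x0, 0.0, alpha)
        plt.plot(orbit[:, 0], orbit[:, 1], ".", markersize=1)
    plt.title(f"alpha = {alpha}")
    plt.show()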
The simplest approach to this problem is the brute force method: it divides the phase space and parameter space into small grids and tries every possible combination of initial conditions and parameter values. A simple calculation will show that this method involves an enormous amount of computation. For instance, if we choose a uniform grid size of 0.01, we have to compute approximately 300 x 300 x 600 = 54 million orbits. Assuming, on the average, 0.02 second is needed to compute a trajectory of 500 points, it will take over 300 hours of computation time to compute all the trajectories.

The brute force method suffers from two serious problems. First, it is grossly inefficient because most of the phase portraits computed will be qualitatively the same. Second, it is not reliable because there is always the danger of missing some important qualitative features when the change occurs at a resolution finer than the grid size.

A physicist often does much better than this. Figure 2 represents a flow-chart of what a professional physicist does during the numerical experiment. The flow-chart has two nested loops. The outer loop involves deciding when to stop the experiment; the inner loop, when to move on to the next parameter value. Controlling what experiment to do next, and interpreting the results of the simulation - these are the two most important decisions the experimenter has to make.

Figure 2: Flow-chart which describes the process of experimenting with a dynamical system.

The task of behavior prediction can now be summarized as this: to develop a picture of all possible solutions to the dynamical system from a limited amount of numerical experiments at a limited number of initial conditions and parameter values. The key observation is that knowledge of qualitative dynamics and their geometric manifestations in the phase space provides a strong constraint on the type of behavior possible. As we will see in the next section, this constraint translates into a dramatic reduction of the amount of search required to find those combinations of initial states and parameter values that lead to "interesting" phase portraits.

A. Terminology

The purpose of this section is to introduce some concepts and definitions from Dynamical Systems Theory [Hirsch and Smale, 1974]. A dynamical system consists of two parts: (1) the system state, and (2) the evolution law. The system state at any time $t_0$ is a minimum set of values of variables $\{x_1, \ldots, x_n\}$ which, along with the input to the system for $t \ge t_0$, is sufficient to determine the behavior of the system for all time $t \ge t_0$. The variables which define the system state are called state variables. The conceptual n-dimensional space with the n state variables as basis vectors is called the phase space. A state vector is a set of state variables considered as a vector in the phase space. As the system evolves with time, the state vector traces out a path in the phase space; the path is called an orbit or a trajectory. Finally, a phase portrait is a partition of the phase space into orbits.

The evolution law determines how the state vector evolves with time. In a finite dimensional discrete time system, the evolution law is given by difference equations. The difference equation is specified by a function $f : X \to X$ where X is the phase space of the discrete system. The function f which defines a discrete dynamical system is called a mapping, or a map, for short. The multipliers of the map f are the eigenvalues of the Jacobian of f. An area-preserving map is a map whose Jacobian has a unit determinant.

The set of iterates of x - that is, $x, f(x), f^2(x), f^3(x), \ldots, f^n(x)$ as n becomes large - is called the orbit of x relative to f; it captures the history of x as f is iterated. Two types of point have the simplest histories: fixed points and periodic points. The point x is a fixed point of f if $f(x) = x$. A fixed point x is called stable, or elliptic, if all the multipliers of f at x lie on the unit circle; it is called unstable, or hyperbolic, otherwise. The point x is a periodic point of period n if $f^n(x) = x$. The least positive n for which $f^n(x) = x$ is called the period of x. The set of all iterates of a periodic point forms a periodic orbit.
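The multiplier test in these definitions is straightforward to carry out numerically. The sketch below computes the Jacobian of Henon's quadratic map, whose determinant is identically 1 (the area-preserving property), and classifies the fixed point at the origin; the function names are illustrative only.

    import numpy as np

    def henon_jacobian(x, y, alpha):
        # Jacobian of the quadratic map at (x, y); its determinant is 1.
        c, s = np.cos(alpha), np.sin(alpha)
        return np.array([[c + 2*x*s, -s],
                         [s - 2*x*c,  c]])

    def classify(x, y, alpha):
        # Elliptic if the multipliers (eigenvalues) lie on the unit circle,
        # hyperbolic otherwise (borderline parabolic cases are ignored here).
        eig = np.linalg.eigvals(henon_jacobian(x, y, alpha))
        return "elliptic" if np.allclose(np.abs(eig), 1.0) else "hyperbolic"

    alpha = 1.33
    print(np.linalg.det(henon_jacobian(0.3, -0.2, alpha)))  # 1.0
    print(classify(0.0, 0.0, alpha))  # the origin is an elliptic fixed point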
Qualitative ynamics and their Types of Orbit ve 8 brief outline of rbits in area-preserv Yip 667 There are four ways in which an orbit generated by infinitely many iterations of the map can be explored in the phase space: 1. A finite number of N points are encountered re- peatedly, corresponding to a periodic orbit of period N. 2. The iterates fill a smooth curve, which is a topo- logical circle, in the phase space. This curve is called an invariant curve because the whole curve maps onto itself under the action of the map. 3. The iterates can form a random splatter of points that fills up some area of the phase space. This happens when the orbit evolves in a chaotic manner whose detail depends sensitively on the initial conditions. 4. The iterates leave the phase space after a finite number of iterations and escape to infinity in the end. These points are called escape points. Since the dynamics of are&preserving maps and Hamiltonian systems have a lot in common, these four types of geometric orbit have important physical interpretations. Due to space limitation, I just lit the interpretations below. The explanation of these interpretations can be found in [Yip, 19871. periodic points _ periodic motion invariant curve _ quasiperiodic motion chaotic region ++ chaotic motion escape points C unbounded motion 2. Bifurcations: Qualitative changes in the phase portrait Two phase portraits are qualitatively equivalent if there exists a homeomorphism between them which preserves fixed points, periodic points, invariant curves, and their stability. Bifurcation is said to oc- cur when the dynamical system goes through a qual- itative change in its phase portrait as the control pa- rameter is varied. I will focus on one important type of bifurcation: appearance and disappearance of pe- riodic orbits. Meyer Meyer, 19701 gives a complete classification of fhe generic bifurcations of periodic points for one- parameter area-preserving maps. Meyer has shown that generic bifurcations occur when the multiplier x nth root of unity where n = 1, 2, 3, 4, and 1 F The five types of generic bifurcation are: (1) extremal, (2) transitional, (3) phantom 3-k& (4) phantom Ckiss, and (5) emission. Because of space limitation, I only discuss the case of phantom J-~&W as an illustration of what the bifurcation geometry is. Again, detail of this can be found in [Yip, 19871. Phantom 3-kiss occurs when the multiplier X of the map is a cube root of unity. The region of stability of the elliptic fixed point shrinks to zero aa the hy- perbolic points of an unstable period-3 cycle “kiss” at, the origin. After the Ukiss”, the fixed point turns elliptic again, and a new unstable period-3 cycle is emitted. Note the change in orientation of the trim- g&r region around the elliptic point. The phantom S-kiss is often preceded by extremal bifurcations in a region a bit further away from the original elliptic fixed point, resulting in the formation of a pair of elliptic and hyperbolic period-3 points. The dynamics of the area-preserving maps severely constrain the way orbits can fit, together in the phase portraits. These constraints, which are encoded in the bifurcation patterns, are thus analogous to the consistency rules for line labeling in Waltz’s thesis [Waltz, 19751. A s we shall see later, the geometry of bifurcation allows us to decide whether a collection of phase portraits is consistent, and gives us clue to the types and locations of orbits that the program should be looking for. 
Figure 3: Five generic types of bifurcation geometry (columns: bifurcation type; discrete flow pattern). A periodic point bifurcates whenever its multipliers pass through an n-th root of unity.

IV. The Control Problem

A. How to start the numerical experiment?

Elliptic fixed points are good places to start. We expect that the orbits near an elliptic fixed point, where the linear terms of the map dominate, will be mostly invariant curves.
How can one recognize the shape of a curve? For example, is it, a 3-sided figure resem- bling a triangle? In the following, I will show how these four problems can be solved by applying techniques from computa- tional geometry and computer vision. Euclidean min- imal spanning tree (EMT) [Preparata and Shamos, 19851, and scale space image [Witkin, 19831 - these are the two important data structures used by the in- terpretation program. The a8 follows (see figure 4). main processing Figure 4: Main tion problem processing steps for solving the in terpl -eta- steps are e Step 1. The program computes a EMST from the input point set, using the Prim-Dijkstra al- gorithm. e Step 2. The program detects clusters in the EMST by looking for edges in l&he tree that are significantly longer than nearby edges. Such Yip 669 edges are called incontistent [Zahn, 19711. The criterion of edge inconsistency suggested by Zahn is used to detect inconsistent edges. Incon- sistent edges are then deleted, breaking up the EMST into connected sub-components. These sub-components are collected by a depth-first tree walk. l Step 9. For each sub-tree of the EMST, the pro- gram examines the degree of each of its nodes, where the degree of a node is the number of nodes connected to it in the sub-tree. For a smooth curve, the EMST consists of two ter- minal nodes of degree one; the rest, degree two. For a point set that fills an area, its correspond- ing EMST consists of many nodes having degree three or higher. e Step 4. To compute the area and centroid of the region bounded by a curve, the program generates an ordered sequence of points from the EMST, and spline-interpolates the sequence to obtain a smooth curve. The smooth curve is encoded using chain coding [Freeman, 19611. Straightforward algorithms are then applied to compute the area and centroid. l Step 5. A curve is parameterized by C(s) = (s(s), y(s)) where s is the arc length along the curve. The two functions x(s) and y(s) are com- puted from the chain code representation. Then, x(s) and y(s) are smoothed by the Gaussian and its first two derivatives at multiple spatial scales. Finally, the zero-crossings of the curvature func- tion R(S), and the signs of it(s) are computed to determine the locations and type of the extrema. Examples of orbit recognition can be found in [Yip, 19871. VI. Summary In this paper, I have studied the task of qualitative analysis of nonlinear area-preserving map by numer- ical experiments. I have also described how to ap preach the two major problems in automating the ex- perimenting process: (1) experiment control, and (2) result interpretation. The basic idea is that knowl- edge of qualitative dynamics and bifurcations pro- vides a strong constraint on the type of behavior possible. Finally, I have described a program which solves the interpretation problem by using techniques from computational geometry and computer vision. ACKNOWLEDGMENT The content of this paper benefited from many dis- cussions with Gerald Sussman, Hal Abelson, and Jack Wisdom. Ken Forbus and Brian Williams taught me their views of Qualitative Physics. I also thank Gen- eral Electric for their financial support during this research. TR 629, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1979. [Forbus, 19841 Kenneth Dale Forbus. Qualitative process theory. Artificial Intelligence, 24:83- 168, 1984. [Freeman, 19611 H. Freeman. On the encoding of ar- bitrary geometric configurations. IRE, Trans. Electron. Cornput., EC-lo, 1961. 
[Henon, 19691 M. Henon. Numerical study of quadratic area-preserving mappings. Quarterly of Applied Mathematics, 27, 1969. [Henon, 19811 M. Henon. Numerical exploration of hamiltonian systems. In Chaotic behavior of De- terministic Systems, North-Holland, 1981. [Hirsch and Smale, 19741 M.W. Hirsch and S. Smale. Diflerential Equations, Dynamical Systems, and Linear Algebra. Academic Press, 1974. [Kuipers, 19841 Benjamin Kuipers. Commonsense reasoning about causality: Deriving behavior from structure. Artificial Intelligence, 24:169- 204, 1984. [Meyer, 19701 K.R. Meyer. Generic bifurcations of periodic points. l%ansactions of American Mathematical Society, 149, 1970. [Preparata and Shamos, 19851 Franc0 Preparata and Michael Shamos. Com- putational Geometry. Springer-Verlag, 1985. [Waltz, 19751 David Waltz. Understanding line drawings of scenes with shadows. In The Psy- chology of Computer Vision, McGraw-Hill, 1975. (Williams, 19841 Brian Williams. Qualitative anal- ysis of mos circuits. Artificial Intelligwace, 24:281-346, 1984. [Williams, 19863 Brian Williams. Doing time: putting qualitative reasoning on firmer ground. In Proceedings AAAI-86, American Association for Artificial Intelligence, 1986. [Witkin, 19831 Andrew Witkin. Scale-space filter- ing. In Proceedings IJCAI-89, International Joint Conference on Artificial Intelligence, 1983. [Yip, 19871 Kenneth Yip. Extracting Qualitative Dy- namics from Numerical Experiments. AIM 950, Artificial Intelligence Laboratory, Massachusetts Institute of TechnoIogy, 1987. [Zahn, 19711 C.T. Zahn. Graph-theoretical meth- ods for detecting and describing gestalt clusters. IEEE Transactions on Computers, 29, 1971. 670 References [De Kleer, 19791 Johan De Kleer. Causal and tele- ological recrsoning in circuit recognition. A.& Engineering Problem Solving
1987
111
563
Reasoning About Fluids Via Molecular Collections

John Collins and Kenneth D. Forbus
Qualitative Reasoning Group
Department of Computer Science
University of Illinois

Abstract

Hayes has identified two distinct ontologies for reasoning about liquids. Most qualitative physics research has focused on applying and generalizing his contained-liquid ontology. This paper presents a technique for generating descriptions using the molecular collection (MC) ontology, a specialization of his alternate ontology which represents liquids in terms of little "pieces of stuff" traveling through a system. We claim that MC descriptions are parasitic on the Contained-Stuff ontology, and present rules for generating MC descriptions given a Qualitative Process theory model using contained stuffs. We illustrate these rules using several implemented examples and discuss how this representation can be used to draw complex conclusions.

I. Introduction

Sometimes two distinct but interrelated views of an object or system are needed to reason about the physical world. For example, sometimes an engineer must think of "the liquid in the container" as an object (the contained-liquid ontology) while also reasoning about a hypothetical collection of molecules traveling together through the system as an object (the piece-of-stuff ontology). Likewise, a river may be viewed either as a static container of water defined by its banks (i.e., the same river it was a century ago), or as a dynamic collection of little pieces of water, each of which retains its identity as it flows to the sea.

As Hayes [6, 7] notes, neither ontology alone suffices to explain commonsense reasoning about liquids. Similarly, neither ontology alone suffices for intelligent computer-aided engineering. It is easy to reason about "the pressure at a portal" in the contained-liquid ontology, but impossible to explain the details of a thermodynamic cycle without following a "piece of stuff" through the system. The piece-of-stuff ontology, as we shall show, makes explicit the notions of continuity of space and conservation of matter, but provides no mechanism for reasoning about the overall behavior of the system.

This paper presents a technique for generating and reasoning with descriptions of fluids as "pieces of stuff". We introduce the molecular collection (MC) ontology as a specialization of Hayes' piece-of-stuff ontology. We claim that the MC ontology is parasitic on the Contained-Stuff ontology, in that a description of a system in terms of contained stuffs is a prerequisite to computing its description in MC terms. We show implemented rules for performing this computation, and illustrate their use with several examples. We argue that this representation provides a basis for more complex inferences, and discuss some open problems. We begin by reviewing the original Hayes ontologies:

Contained-Liquid: Consider the liquid in a container as a single object. If the container is open then it is possible for liquid to leave the container and for new liquid to enter. Contained liquids have a continuous quantity Amount-of which may be influenced by various processes (e.g., flow, evaporation, condensation). They may disappear and reappear, as when a cup of coffee is emptied and refilled. In this ontology the two cups of coffee are viewed as the same object.

Piece-of-Stuff: Consider a particular collection of molecules as a unit traveling around inside a system. The collection of molecules will have a fixed mass and a continuous position in space, which is influenced by its velocity, which in turn is influenced by various forces acting upon the object. A piece of stuff is never created or destroyed (assuming conservation of mass), so there are fewer problems of changing existence from this ontology.

It is straightforward to generalize the Contained-Liquid ontology into a Contained-Stuff ontology that describes gasses and allows multiple substances as well [3]. Qualitative Process theory [3, 4, 5] can be used to generate descriptions of contained stuffs, and we build on those descriptions.

In [7] no restriction is made as to the size of a piece of stuff. We obtain the molecular collection (MC) ontology by stipulating that the collection be so small that we can assume it is never distributed over more than one place (we return to this later). This tiny piece of stuff is viewed as a collection of molecules - as opposed to a single molecule
The collection of molecules will have a fixed mass and a continuous position in space, which is influenced by its velocity, which in turn is influenced by various forces acting upon the object. A piece of stuff is never cre- ated or destroyed (assuming conservation of mass), so there are fewer problems of changing existence from this ontology. It. is straightforward to generalize the Contained- Liquid ontology into a Contained-Stuff ontology that de- scribes gasses and allows multiple substances as well .3’. Qualitative Process theory [3, 4, 51 can be used to gener- ate descriptions of contained stuffs, and we build on those descriptions. In i7j no restriction is made as to the size of a piece of stuff. We obtain the molecular collection (MC) ontology by stipulating that the collection be so small that we can assume it is never distributed over more than one place (we return to this later). This tiny piece of stuff is viewed as a collection of molecules - as opposed to a single molecule 590 Engirwkring Problem Sslvisrg From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. Figure 1: The SWOS problem This schematic of a Navy Propulsion plant provides an illustration of the importance of the MC ontology. A sophisticated question about this system is, “Given an increase in feedwater temperture, what happens to the steam temperature at the superheater outlet?“. Understanding what happens in this situation is one of the hardest problems given at the Surface Warefare Officers’ School, in New Port, R.I. The representation developed in this paper provides a basis for answering this question. - so that it may possess such macroscopic properties as temperature and pressure. Call the arbitrary collection of molecules to be considered as a unit MC. Any ontology must divide the world into individuals: For reasoning it is important that the number of individu- als be few. The Contained-Stuff ontology partitions a fluid system into a few discrete objects using the natural bound- aries provided by containment. But the Contained-Stuff ontology fails to preserve molecular identity. Considering individual molecules would be prohibitive and unnecessary, since all the billions of them act more or less alike. By con- sidering the possible behaviors of an anonymous collection of molecules, we constrain the possibilities for the whole by considering only one individual. We claim that the molecular collection ontology is par- asitic on the Contained-Stuff ontology. No one has suc- ceeded in saying anything coherent about establishing the conditions for reasoning with the molecular collection on- tology. We believe the reason for this failure is that the MC ontology alone is insufficient. Global information is re- quired to identify what MC is doing. In classical physics the notion of gradient provides a local method for determin- ing such motion. But establishing the gradient requires a global view of the physical system. The Contained- Stuff ontology provides this viewpoint for the MC ontology by establishing paths and conditions for flows and state changes. The reasoning based on molecular collections skirts with the results of the Contained-Stuff description. and consequently does not have to re-derive those conclu- sions. Consider the system shown in Figure 1. Figuring out how MC moves requires knowing the mass properties of the fluid, viewed with respect to the components of the sys- tem. 
Looking solely at MC, there is no way to establish the pressure differences between system components that Figure 2: Sample rules for generating MC movements These rules, associated with particular processes, describe how the MC's place and state change as a consequence of that process acting. Space limitations preclude showing the entire rule set. If Flow (source, destination, path) then if Location(MC. source> Transition(MC, PLACE, path) if Location(MC, path) Transition(MC, PLACE, destination) IfBoiling(substance, container) then Transition(MC, STATE, GAS) IfCondensation(substance. container) then Transition(MC, STATE, LIqUID) imply the direction of flow. Although MC must play a role in the solution of the problem, the molecular collection ontology is inadequate. To determine facts like flow direc- tion, the Contained-Stuff ontology must be used. Given a Contained-Stuff description, we can talk about pressure as a function of location rather than trying to find the pres- sure on an arbitrary collection of molecules. Here, a pump establishes a pressure gradient, causing a liquid flow into the boiler. Our goal is to construct a history for MC, describing the sequence of places it is in and what is happening to it in those places. For our purposes, MC is uniquely defined by the place it is in, the substance of which it is composed,l and its current phase (i.e., solid, liquid or gas). The place is the container or fluid path in which MC resides.2 Constructing the MC history occurs in five steps. The first step is to feed the domain knowledge and the specific example through the Qualitative Process Engine (QPE) to generate the total envisionment for the given configuration. The total envisionment consists of all consistent situations connected by the possible transitions between them.3 Second, a single situation is selected for which the MC history is desired. In order for the history to be meaningful and interesting, the situation should involve some active processes and should last for an interval of time. The third step finds the possible locations and states of MC and establishes how these properties can change. The critical observation is that each active process spec- ifies a fragment of MC's history. Processes operate on ob- jects, some of which (in fluid systems) will be contained stuffs. We can associate rules with each process to describe what, if anything, its activity implies about the location ‘In this paper only single substance systems are considered. “Potentially, this could be refined through the use of some coor- dinate system such as submerged-depth or the length along a path. 3Each situation r e p resents a unique set of active views and pro- cesses taken together with the signs of derivatives for all quantities. and phase of MC. For example, the rule associated with liquid-f low implies that when MC is in liquid form in the source, it can move into the path of the flow, and end up in the destination of the flow without changing state (see Figure 2). The rule associated with boiling implies that MC will undergo a liquid to gas phase transition within the same location. By combining these partial histories, we can compute the full spatial extent of MC’s travels and its associated phase transitions (if any) .* In the fourth step, the Ds values for MC’s quantities are computed. 5 By assumption, Ds [Amount-of (MC)] = 0. Pressure is simply inherited from the surrounding con- tained stuff. 
Changes in Heat, Temperature, Volume and Height are determined by rules associated with processes. For example, if the temperature of the destination of a liquid flow is greater than the temperature of the source, then both Heat and Temperature of MC will be increasing in the destination. During boiling Heat is increasing and during condensation it is decreasing. Finally, the fifth step constructs the graph defined by the relevant places and the possible movements between them. From this graph it is easy to recognize such phe- nomena as branching or cycles of flow. In real fluid systems these histories often branch. For example, steam coming out of a ship’s boiler is often tapped off for several different purposes, such as driving the propulsion turbines, running generators to produce electricity, and powering the ship’s laundry. The choice of which path to take will depend on the goal of the reasoning. Sometimes it is the properties of a specific path which are of interest. In other cases all paths must be considered. However, it will be assumed that MC retains its identity, i.e., that the molecules of MC never split themselves between two paths. This assump- tion is realistic if one considers MC to be a tiny subset of the liquid described in the Contained-Stuff view of the same sys tern. Two observations are relevant here. First, the MC- history generation algorithm is linear in the number of active process instances, making it quite fast.” Second, the relationship between the episodes in the MC history and the states of the envisionment is slightly complicated. One state in the envisionment can give rise to a number of episodes in the MC history. For example, the steady flow of working fluid in a refrigeration system would typically be described as a single state in the envisionment using the Contained-Stuff ontology. But viewed from the MC level, it will give rise to episodes involving heating, liquid-gas phase transition, compression, gas-liquid phase transition, etc. 4More than one history can be produced if there are disconnected components in the fluid system. Each history corresponds to a dif- ferent choice of subsystem for MC. ‘The Ds value of a quantity is the sign of its derivative. “Running the rules over a total envisionment takes roughly one minute: constructing an MC history for any situation afterwards takes 5- 10 seconds. Figure 3: A simple pumped-flow example III. The MC-history generation algorithm has been tested on a number of examples of varying complexity. For concrete- ness we describe several of them below. a. Pumped FPow Figure 3 illustrates a scenario consisting of two open con- tainers connected by a pump and a return fluid path. We choose from the envisionment the equilibrium situation, where liquid exists in both containers and the flow rates have equalized. Figure 4 shows the MC history for this situ- ation. The MC history is annotated with information about the type of process responsible for the movement, as well as the derivatives and state (phase) of MC at each place in the history. This information becomes more useful for complex examples, such as in the refrigerator example be- low. Even though the situation is in steady state (i.e., ail derivatives in the Contained-Stuff ontology are zero), the MC history shows that each little piece of stuff in the sys- tem undergoes continuous change, both in position and in pressure. 
However, since there is no way for MC to enter or leave the system, one can conclude that the total amount of stuff in the system is constant. B. A Refrigerator One of the motivations for looking at the MC ontology was to allow reasoning about complex thermodynamic cy- cles such as that used in a refrigerator. Figure 5 shows a simple refrigerator involving six seperate processes: two heat flows, two state changes (boiling and condensation), a compressor flow and a liquid flow. As in the pumped flow example, the situation selected for the MC history is the steady state, where all flows have equalized. Figure 6 shows the MC history. MC boils in the evapora- tor and then is pumped through the compressor to the con- denser. where it returns to the liquid phase and is finally forced through the expansion valve back into the evapo- rator. This representation provides the foundation for an important class of engineering conclusions. Since MC gains heat during boiling and loses it during condensation. it 592 Engineering Problem Solving Figure 4: The MC history for the pumped-flow example Figure 7: The SWOS MC history PWMPEO-FLOW 0 ” 0 :, c F k L W 0 W LIQUID-FLOW Location Can1 Pump Can2 F-P Ds [Heat] 0 0 0 0 , Ds [Temperature] 0 0 0 0 I t Ds [Pressure] 0 1 0 -1 Ds [Volume] 0 0 0 0 I Ds [Heiahtl I u I u I u IUI Figure 5: A refrigerator Location 1 Sea 1 Pump B B Pl S-H P2 1 Env State IL/ L ILIGIGIGIGIG I I I I I Ds [Heat] 0 0 1 0 -1 1 -1 0 Ds [Templ 0 0 1 0 -1 1 -1 0 Ds [Press] 0 1 0 0 -1 0 -1 0 Ds[VolumelI 0 1 0 11101 11 1 1’ 1 ’ Ds[Height]I 0 I 0 Ill11 0 lo lo 1 O must be moving more heat through the compresser than returns via the expansion valve, so there is a net heat flow from the evaporator to the condenser. Thus the refrigera- tor is pumping heat uphill to a higher temperature. 6. The SWOS probllem Here we return to the Navy propulsion plant scenario of Figure 1. Figure 7 shows the MC history. The result of an increased feedwater temperature can in principle be calcu- lated by a differential qualitative analysis (DQ) based on this history 141. Roughly, the increased temperature means that the boiling episode is shorter, making the steam gener- ation rate higher. The higher steam generation rate means the steam spends less time in the superheater, hence less heat will be transferred, implying a lower temperature at the superheater outlet. Weld (in press) describes a set of DQ rules which, combined with this representation, may be powerful enough to draw this conclusion. Figure 6: The refrigerator MC history Location I Evap Evap 1 Comp Cond 1 Cond 1 EValve , State Liq Gas ! Gas Gas 1 Liq 1 Liq -__ DsCHeatl ’ 1 0 j 1 -1 ( 0 I -1 Ds [Temp] -1 0 I 1 1 0 -1 ’ ___. - 1 ) 4 Ds[Pressl , 0 I 0 1 1 0 / 0 -1 1 1 DsCVol&nel ’ 1 i 0 ! DslHeishtl / -1 -1 1 4 0 0 4 I 1 ! 1 1 0 ! -1 ! -1 1 0 The ability to reason with multiple views of a situation provides significant advantages over using a single ontol- ogy. The Contained-Stuff ontology provides the conditions to determine which processes are active, and thereby de- termines the overall behavior of the system. The MC on- tology provides the complementary ability to reason about where a piece of stuff came from and where it might go. We demonstrated that MC histories can be easily computed from QP models of fluids organized around Contained- Stuffs, and argued that this representation pro\,ides the basis for several important engineering inferences (i. e., closed-cycles, recognition of heat pumps and differential analysis). 
Collins and Forbus 593 It is unclear whether or not growing an MC history across transitions between situations in the Contained- Stuff ontology is a good idea. If one is considering a liquid system that oscillates, for instance, then this could be nec- essary. However, most questions that arise in engineering concerning the MC history are about steady-state behavior, i.e., a single situation in the Contained-Stuff ontology. The MC ontology is based on infinitesimal pieces of fluid. It may be possible to generalize it to spatially ex- tended pieces of stuff. This generalization would provide the ability to, for example, identify the spread of a con- taminate through a fluid system. We have only begun to explore the reasoning poten- tial of the MC ontology. Currently we are implementing rules to calculate quantity space information involving MC parameters. Furthermore, we plan to augment the MC his- tory by associating equations with each movement. These equations will be combined to yield quantitative descrip- tions of relevant system parameters, such as efficiency or work output per pound of working fluid. A differential qualitative (DQ) analysis could then be performed to iden- tify how these parameters could be optimized, or in general how a change in one quantity will affect the behavior of the system. V. Acknowledgements Brian Falkenhainer, John Hogge, and Gordon Skorstad provided valuable comments, both in developing the theory and in writing the paper. John Hogge’s ZGRAPH program has been invaluable for displaying the MC histories and total envisionment graphs. This research was supported by the National Aeronautics and Space Administration, Contract No. NASA NAG 9137, and by an IBM Faculty Develop- ment award. References [l] de Kleer, J. “An Assumption-Based Truth Mainte- nance System”, Artificial Intelligence, 28, 1986 [2] de Kleer, J. and Brown, J. “A Qualitative Physics based on Confluences”, Artificial Intelligence, 24, 1984 [3] Forbus, K. “Qualitative Process Theory” Artificial In- telligence, 24, 1984 [4] Forbus, K. “Q ua i a 1 t t ive Process Theory” MIT AI Lab Technical report No. 789, July, 1984. [5] Forbus, K. “The Problem of Existence”, Proceedings of the Cognitive Science Society, 1985. [6] Hayes, P. “The Naive Physics Manifesto” in Expert systems in the Micro-Electronic Age, D. Michie (Ed.), Edinburgh University Press, 1979 !7] Hayes, P. “Naive Physics 1: Qntology for Liquids” in Hobbs, J. and Moore, B. (Eds.), Formal Theories of the Commonsense World, Ablex Publishing Corpora- tion, 1985. PI 191 WI PI PI Kuipers, B. “Common Sense Causality: Deriving Behavior from Structure” Artificial Intelligence, 24: 1984 Shearer, J., Murphy, A., and Richardson, H. Introduc- tion to System Dynamics Addison- Wesley Publishing Company, Reading, Massachusetts, 1967 Simmons, R. “Representing and reasoning about change in geologic interpretation”, MIT Artificial In- telligence Lab TR-749, December, 1983 Weld, D. “Switching Between Discrete and Continu- ous Process Models to Predict Genetic Activity”, MIT Artificial Intelligence Lab TR-793, October, 1984 Williams, B. “Qualitative Analysis of MQS Circuits”, Artificial Intelligence, 24, 1984 594 Engineering Problem Solving
1987
112
564
Extending the Mathematics in Qualitative Process Bruce D’Ambrosio Department of Computer Science Oregon State University Abstract We present a semi-quantitative extension to the quali- tative value and relationship representations in Quali- tative Process (QP) theory. Examination of a detailed example reveals a number of limitations in the current ability of QP theory to analyze physical situations. The source of those limitations is traced in part to the qualitative mathematics used in QP theory. An exten- sion to this mathematics is then presented and shown capable of eliminating many of these limitations, at the price of requiring additional system specific infor- mation about the system being modelled. I. Introduction Qualitative Process (QP) theory [Forbus, 19841 describes the form and structure of naive theories [Hayes, 19791 about the dynamics of physical systems. A key component of QP theory is the qualitative mathematics used to repre- sent values of continuous parameters and relationships be- tween them. A research strategy for developing this math- ematics has been to search for a qualitative mathematics capable of yielding significant results from a minimum of information about the situation being modelled. In the work described here, we ask a slightly different question: what kinds of information can we add to the base theory, and what new questions can we answer with this additional information? A. Mathematics in QP theory The representation for a continuous parameter in QP the- ory is a quantity. A quantity has four parts: 1. The magnitude of the amount of the quantity. 2. The sign of the amount {-, 0, +}. 3. The magnitude of the derivative. 4. The sign of the derivative. The use of the sign as a significant qualitative abstrac- tion is adopted from DeKleer [deKleer, 19791 [deKleer and Brown, 19841. Magnitudes are represented in a quantity spuc.e. The quantity space for a number consists of all those amounts to which it is potentially related in the sit- uation being modelled. The special value ZERO is always included in every quantity space, and relates the quantity space representation with sign information. ‘This research was performed while at UC Berkeley and FMC Corp., and supported in part by NASA Grant NCE2-275 and NSF Grant IST-8320416. Major support also provided by FMC Corp. Quantities are related to one another through I&&r- tions, which can be either ordering relations, functional relations, or influences. Functional relations are a quali- tat ive analog of normal mathematical functions whose do- main and range are real numbers. The following states that the level of water in a container is qualitatively pro- portional to the amount in the container: level(p) &+ amount-of(p) These are called Qualitative Proportionalities (Qprops) . A Process is the mechanism of change in QP theory. A pro- cess acts to change a situation by i~zfluencdng some param- eter(s) of objects in the situation. An Influence is similar in information content to a qualitative proportionality, but affects the derivative of the range variable, rather than its amount. For example, the primary effect of a fluid-flow process is on the derivatives of the source and destination fluid quantities. Qprops are often referred to as indirect in- fluences, since they provide pathways through which direct influences propagate. Forbus’ implementation of &P theory combines this basic domain information with an initial system descrip- tion to perform measurement interpretation and envision- ing. 
The basic infeyences required are: Elaboration, View and process .9tructPr be determination, Influence resolution, and Lit& oncrl&s We will primarily be concerned with influence resolution in this paper. For a discussion of the other inferences, see [Forbus, 19841. e Pe We now analyze a hypothetical model of a typical contin- uous flow industrial process, in order to demonstrat steps and identify the capabilities and limitations theory. Fig. 1 shows a simplified sketch of the process. Reactants in granular form enter through the port at the top left (a material flow process), and are heated to reac- tion temperature within the vessel (a heat-flow process). When the reactants reach reaction temperature, they un- dergo a state change (a reaction), in which they disappear and a fluid product and an off-gas are created. The off-gas exits through the port at the upper right (another material flow process). As the hot off-gas flows out of the reaction vessel, heat is transferred to the cool incoming reactants (counter-current heat flow). We will ignore the processes by which the product is extracted from the vessel and sim- ply allow it to accumulate at the bottom. The four basic processes crucial to understanding of the system described above, basic heat flow, the reaction, D’Ambrosio 595 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. Reactants Offgas -? 0 0 A\\- ,roduct Heat Source Figure 1: Reaction Vessel material flow, and counter-current heat flow, are described in detail in [D’Ambrosio, 19861. Given a suitable initial state description, the first two QP inferences identify three possible states for the situation described, (1) that nothing is happening, (2) that the only thing occurring is that the reaction vessel is being heated, or (3) that all processes are active. The state of interest is the one in which all processes are active. Using the third basic QP deduction, we can determine various facts about this state, such as: e If the heat input is increasing, rate will be increasing also. the off-gas generation e If the incoming reactant temperature is decreasing, the off-gas temp will be decreasing. However, we cannot determine: 1. Is the product constant? temperature increasing, decreasing, or 2. If the heat input is increasing, is the off-gas exit tem- perature increasing or decreasing? 3. If we increase the heat input a little, how much will the generation rate increase? 4. If the available observations do not uniquely identify a single state, which of the possible states is more likely? These limitations are the result of ambiguity in the conclusions derived using QP theory. I. Ambiguity in theory We identify two types of ambiguity in QP theory, Internal and Ezternd ambiguity. Internal ambiguity occurs when use of QP theory produces multiple descriptions of a single physical situation. External ambiguity is the dual of this, namely when a single QP theory description corresponds to several possible physical situations which must be distin- guished. Internal ambiguity is of two types. First, given a situation description, there may be ambiguity about which of several possible states a system is in (e.g., given a leaky bucket with water pouring in, is the water level rising or falling?). Second, given a specific state, there may be am- biguity about what state will follow it (e.g. - given a closed container containing water, and a heat source heating the container, will it explode?). 
External ambiguity is the inability to determine, on a scale meaningful to an external observer, the duration of a situation, as well as the magnitude and intra-situation evo- lution of the parameters of the situation (e.g., how fast is the water rising? How long before the container explodes?) These ambiguities are the result of four fundamen- tal limitations in QP theory representat ions and inference mechanisms: 1. 2 3 4. Inability to resolve conflicting functional dependen- cies. This is caused by the weak representation for functional form of dependencies, which captures only the sign but no strength information. Inability to order predicted state changes. This is caused by lack of ordering information on change rates, as well as lack of quantitative information on the magnitude of change needed for state change. Inability to quantify, even approximately, parameters significant to external observers during times between major state transitions. This is caused by a weak model of intrcGstate situation evolution. Time, quan- tity values, and functional dependencies are all repre- sented qualitatively in QP theory. Inability to represent non-boolean predicate and state possibilities. Solving these problems requires extending QP repre- sentations to capture more information about the system being modelled. We have studied three classes of exten- sions: extensions to the quantity representations, the rela- tionship representations, and the certainty representations. Specifically, we have developed an extension to QP theory which utilizes: Belief functions certainty representations - these will permit capture of partial or uncertain observational data, and estimates of state likelihood. Linguistic descriptions of influence sensitivities - to reduce undecidability during influence resolution. Linguistic characterizations of parameter values and ordering relationships - to permit capture of partial or uncertain observational data, and enable estimates of the effects of adjustments to continuous control pa- rameters. These extensions reason at the appropriate level of de- tail for the kinds of control actions typically needed, draw the needed distinctions, are computationally tractable, and can reason with the imprecise or uncertain data typically available. In this paper we concentrate on the second of these extensions, linguistic influence sensitivities, and present a way of annotationing the relationship represen- tation in QP theory to reduce ambiguity. Discussion of the integration of Dempster-Shafer belief functions with QP theory and the underlying ATMS can be found in [D’Ambrosio, 19871. Discussion of parameter value exten- sions can be found in [D’Ambrosio, 1986). See [Simmons, 19871 for an alternate extended quantity representation. IV. Linguistic ence Sensitivities The influence resolution rule used by Forbus states that if opposite influences impinge on a single parameter, then the net influence on the parameter is unknown. In or- der to reduce the number of situations in which conflicting functional dependencies cannot be resolved, we extend 596 Engineering Problem Solving tern -lost ie \ offgas temp/"+-$\ ;,"z;;,,";;P in furnace 43. * offgas temp in furnace ~~;~~;;;;, Q+ a Figure 2: Conflict Triangle theory functional descriptions with a linguistic influence sensitivity. Intuitively, this corresponds to distinguishing between first order, second order, etc., dependencies. 
With this extension we can now address the second question unanswerable earlier: if we increase the heat input, will the offgas temp increase or decrease? Forbus claims that if actual data about relative magni- tudes of the influences is available, it can be used to resolve conflicts. We might attempt to achieve this by extending direct and indirect influences with a strength parameter. This is inadequate, however, for two reasons. First, the overriding influence may not be local. Information may have to be propagated through several influences before reaching the parameter at which it is combined. Second, various sources of strength information have varying scopes of validity. In the following sections we first identify two basic influence subgraphs responsible for the ambiguity in our example, and argue that the ambiguity can be elimi- nated by annotating the subgraphs with influence sensitiv- ity and adding additional situation parameters. We then present extensions to the influence resolution algorithm for utilising the sensitivity annotations, and finally describe a control structure for managing acquisition and use of an- notation information. . dentifying internal causes of conflict in influence graphs We have identified two basic patterns of influences which account for the ambiguity previously encountered. These are the conflict triangle (Fig. 2) and the feedback loop (Fig. 3). The reason, for example, that the change in of- fgas temp in the offtake cannot be resolved is that there are two conflicting paths through which a single parameter (offgas temp in the reaction vessel) affects the.target pa- rameter. But the effect on temperature-lost is in this case smaller than the direct effect on the offtake temp, and can be ignored. We can indicate this by adding to the influence arc an annotation indicating temp-lost in counter-current heat flow is relatively insensitive to offgas temp in the fur- nace (Fig 2b). Another ambiguity in the QP theory analysis of the furnace is in the generation rate and associated variables. One of the causes of this ambiguity is the set of influences on Product temperature shown in Fig. 3. Since both the Q+- (strqg) Temp Figure 3: Feedback Loops generation rate and heat-flow rate are positive, the qualita- tive derivative of the product temperature is undecidable. This network is similar to one Kuipers [Kuipers, 19861 identifies as introducing a new landmark value, not in the original quantity space for the product temperature. This new value represents an equilibrium value towards which the temperature will tend. Recognition of the existence of an equilibrium value permits resolution of the effects of the conflicting influences on product temperature, depending on the assumed ordering between the actual product tem- perature and the equilibrium value. analysis can be taken one step further. Kuipers adds the equilibrium value to the set of fixed points in the quantity space for the origi- nal variable. We, however, add it as a new parameter of the model, subject to influences similar to those of the origi- nal quantity. Thus, we can represent and reason about change in both the actual value and the equilibrium value in response to active processes. For example, if the actual temperature is only slightly sensitive to the heat-flow rate, but the equilibrium temperature is very sensitive, then we might conclude that the system will be slow in returning to equilibrium once perturbed. The extended influence di- agram for the feedback loop is shown in Fig 3b. 
Sensitivity mnotations An influence is a partial function from the controlling vari- able to the controlled variable. In QP theory, computing a value for a controlled variable takes place in two phases: 1. All of the individual influences on the controlled vari- able must be identified and the effect of each of these must be computed. 2. The various effects must be combined to determine the composite effect on the controlled variable. This procedure relies on local propagation to perform influence resolution. If local propagation is to carry the burden of our extended influence resolution, then the prop agated value must somehow be extended to represent the sensitivity information. The value being propagated in influence resolution is a quantity, and the representation used in sign abstraction. Given our model of extended influences as describing the normalised sensitivity of one D’Ambrosio 597 variable to changes in another, we can simply extend the quantity representation for the influence quantity and use a discrete scale of influence magnitudes. We then repre- sent the actual value as a fuzzy set over this value space, to model the imprecision in the available sensitivity in- formation. While this procedure is conceptually simple, the question arises of how an appropriate discretization for this normalised change value, henceforth referred to as influence, can be determined. If we start with an n-level influence discretization and an m-level sensitivity discretization, then after k influence propagation steps we seemingly might need an nmk dis- cretization to avoid information loss. This worst case com- plexity can be substantially reduced, however, by the fol- lowing four observations: 1. We are only interested in the result at a resolution equivalent to the original n-level discretization. 2. Additional detail is only relevant when two annotated influences are being combined, to aid in influence res- olution if they conflict. 3. Rather than annotating all influences in a graph, we will only annotate those necessary to disambiguate pa- rameters of interest in a specific query. 4. The basic fuzzy relational influence algorithm can be designed so that failure to maintain a fully detailed discretization only increases the ambiguity of the re- sult, rather than produce incorrect results. Given this, we model sensitivity annotations as p& rameters of a standard fuzzy relational influence algorithm [Zadeh, 19731. We choose a fuzzy representation to allow simple modelling of the imprecision of these annotations2. We next detail the algorithms used to compute the conse- quences of this fuzzy algorithm. 1. Computing individual influences An influence of the form: (Influenced-variable Q+/- Influencing-variable, Sensitivity) is taken to specify a fuzzy relation between three amounts: C, the amount of the influencing variable; S, the amount of the influence sensitivity; and Iv, the amount of the in- fluence on the influenced variable. The value of Iv can be computed as follows: Iv = C(min(lcc,CcsrClq)lQl,c,s(C, S)) C.S &I,C,S(cj, Sk) = 8dgn( cj * Sk) * (abs(Cj * Sk)““) 2. Combining influences Sensitivity annotations provide us with a means of estimating influence magnitudes, which are directly com- parable. Below we show an algorithm for computing the combined effect of two influences. A rough translation is that an element is definitely a member of the set of possi- ble values for the combined influence if that element is a 2The underlying model is of a set of independent, linear influences. 
Fuzzy eet models of sensitivities permit us to allow for the inaccu- racies of this model. member of the value sets for both input values, or if it is a member of the value set for one input, and a weaker ele- ment of the same sign is a member of the value set for the other input. Also, an element of the discretization may be an element of the result set under two conditions. First, if it is a member of the value set of one input, and a element of the same magnitude but opposite sign is a member of the value set for the other input. Second, if an element of the same sign but greater magnitude is a member of one value set, and an element of the opposite sign and greater magnitude is a member of the other value set: PI”(i) = (PZ"l(i') A Clrvz(i)) V(Vj,ljl<lil(ClIul(i) APIv2(.i))) v(/.m(i) A /4clluz(--i) ~unknown) V(Vj,j>i Vk,k<-i (hrvl(j) A pI,z(k) A unknown)) Subsrcipts i, j, and k are assumed to be 0 for no in- fluence, increasing positive for positive influence elements, and increasing negative for increasing negative influence elements (e.g., -3, -2, -1, 0, 1, 2, 3 for a seven element dis- crete scale, with -3 the strongest negative inlluence). The above is only half of the formula actually used. The actual relation is symmetrical in the two influences Iv1 and Iv2. C. Annotation Mana In examining the sources of ambiguity in the reaction ves- sel example, we note that many of the annotations which could resolve the ambiguities are not universally valid. In fact, we identify four levels of validity for an annotation. These validity levels are determined primarily by opportu- nities in the implementation: 1. 2. 3. 4. the An annotation is universally valid when it can.be in- corporated directly into a view or process description, and correctly describes the functioning of a particular influence in all situations in which an instance of the view or process participates. These are rare. An annotation is scenario valid when it correctly de- scribes the operation of a particular influence in a particular view or process instance, for all qualitative states in which the instance is active. Product tem- perature annotations in the example are an instance of this annotation type. An annotation is state valid when it correctly de- scribes the operation of a particular influence in a view or process instance, only for a defined subset of the qualitative states of a system. Annotation is query valid when it correctly describes the operation of a particular influence in a view or pro- cess instance, only for a particular query. The conflict triangle annotation for determining off-gas tempera- ture in the offtake is an example of this type of anno- tation. The first type of annotation can simply be part of basic view or process detiition. The other three are added to the QP description of a scenario as needed during problem solving. A four step algorithm extends the basic QP theory influence resolution algorithm: 598 Engineering Problem Solving 1. Execute the basic influence resolution. 2. Check results for ambiguities in parameter values of interest. If all’ interesting parameter values are deter- mined uniquely, then problem solving is complete. 3. Otherwise, search the influence graph for instances of ambiguity causing subgraphs. If one is found, and the parameter for which it might create an ambiguity is ambiguous, then annotate the subgraph with influence sensitivity information if available. [Zadeh, 19731 Lofti A. Zadeh. Outline of a New Approach to the Analysis of Complex Systems. 
In IEEE Tkans- actions of Systems, Man, and Cybernetics, 3(1):28-44, 1973. 4. Re-execute the basic influence the now annotated graph. resolution algorithm on This algorithm assumes the extended QP reasoner is embedded in a larger system which has or can obtain the necessary problem specific information to resolve ambigui- ties. It provides a problem directed way of selecting aspects of the larger system’s problem specific knowledge relevant to the query being processed. . Summary We have described an extension to QP theory which in- creases the precision of results available, and still retains the inherent advantages of qualitative modelling. This extension derives its power from influence sensitivity an- notations, and a fuzzy mathematical model of influences which permits propagation of the effects ities throughout the influence chart. of these sensitiv- eferences [D’Ambrosio, 1986] Bruce D’Ambrosio. Qualitative Pro- cess Theory using Linguistic Variables. Phd thesis, University of California, Berkeley, 1986. [D’Ambrosio, 19871 Bruce D’Ambrosio. Truth Mainte- nance with Numeric Certainty Estimates. In Proceed- ings Third Conference on AI Applications, pages 244- 249, Kissimmee, Florida, Computer Society of the IEEE, February, 1987. [deKleer, 19791 Johan deKleer. Causal and Teleological Reasoning in Circuit Recognition. Technical Re- port AI-TR-529, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, October 1979. [deKleer and Brown, 19841 Johan deKleer and John Brown. A Qualitative Physics Based on Confluences. Artifi- cial Intelligence, 24( 1-3) :7-83, December 1984. [Forbus, 19841 Kenneth Forbus. Qualitative Process The- ory Phd Thesis, Massechusetts Institute of Technol- ogy, July, 1984. [Hayes, 19791 Patrick J. Bayes. The Naive Physics Mani- festo. IIn Expert Systems in the Micro-Electronic Age., D. Michie, (editor), Edinburgh University Press, Ed- inburgh, Scotland, 1979. [Kuipers, 1986] Benjamin J. Kuipers. Qualitative Simula- tion. Artificial Intelligence, 29(3):289-338, September 1986. [Simmons, 19871 Reid Simmons. Commonsense Arit h- metic Reasoning. In Proceedings AAAI-86, pages P18- 124; Philadelphia, Pennsylvania, American Associa- tion for Artificial Intelligence, August, 1986. D’Am brosio 599
1987
113
565
Philippe Dague and Qlivier Raiman IBM Scientific Ccntcr Electronique Serge Dass’nult 36 Ave R. Poincark 55 Quai M. Dassault 75116 Paris France S Cloud 92214 France Abstract This paper shows how order of magnitude reasoning has been Successfully used for troubleshooting complex analog circuits. The originality of this approach was to be able to remove the gap between the information required to apply a general theory of diagnosis and the limited infor- mation actually available. The expert’s ability to detect a defect by reasoning about the significant changes in be- havior it induces is extensively exploited here: as a kind of reasoning that justifies the qualitative modeling, as a heuristic that defines a strategy and as a working hypoth- esis that makes clear the scope of this approach. I. htroducth The challenge of troubleshooting is to localize, in a malfunc- tioning device, those faulty components (elementary physical elements having a well defined function) which can be re- placed or modified A classical approach would be to provide a set of depend- ency relations between failures and faults. The efficiency of such “shallow” reasoning relies on the description of all pos- sible failures. This knowledge is strongly dependent on a particular device and is often not complete. Troubleshooting another device with the same functioning principles requires reconsidering the knowledge base. The model-based paradigm [Davis et al., 19821, [Brown et al., 19821 leads to a more general approach, since only models of correct behavior for generic components have to be given. An interesting feature of this approach is that basically there is no need for either a fault model or a set of heuristically de- fined dependencies between failures and faults. The device specific knowledge is organized around a structural decom- position of the device. It is assumed that all correct behavior of a complex device can be predicted from its structure and the models of its components. Thus, a difference between the predicted behavior of a block (i.e. a set of connected compo- nents) which is presumed to be correct, and the observed be- havior indicates that there is at least one defect in the block. The task of troubleshooting is then to identify those differ- ences and to progres,sively refine their localization until a small faulty replaceable part has been located. But determining the differences between the presumed cor- rect behavior and the actual observations requires defining Philippe Dev6s relevant models of behavior for generic components. For an- alog circuits this is where problems arise, as explained in Sec- tion II. There is a lack of numerical models. Numerical models which are used in classical simulation algorithms to predict the behavior of a well functioning device are not ade- quate for troubleshooting purposes, once correct components work outside their normal functioning limits. In addition, basic qualitative models [De Kleer, 19841 that mainly handle signs of quantities are not powerful enough to lind inconsist- encies between the predicted behavior and observed behavior. Therefore in the two cases the predictive procedure may fail to detect conflicts. Modeling becomes the trouble. Section III shows how, in order to overcome this difliculty, we take ad- vantage of the expert’s ability to reason about the main changes in the behavior of a device. 
It allows us to make the fundamental assumption that a defect leads to significant changes in the behavior of a device and to exploit it by per- forming order of magnitude reasoning. Section IV demon- strates how this is used in the expert system DEDALE* to obtain a relevant qualitative modeling that distinguishes be- tween different patterns of behavior. An example of diagnosis is given in Section V. Stction VI explains that the funda- mental assumption also provides a troubleshooting strategy when the circuits are more complex. The difftculty in troubleshooting analog circuits is to charac- terize the correct behavior of a component in a malfunction- ing circuit. The main reason is that, in a malfunctioning analog circuit, a component can behave in a way which is radically different from its designed behavior, and yet be cor- rect. Thus knowing the designed behavior of components provides only part of the relevant modeling. In addition, the information required to numerically describe all possible cor- rect behavior of a component is often too complex or not available for troubleshooting purposes. The following prob- lems must be tackled:’ I DEDALE is an expert system for troubleshooting analog hybrid circuits, that has been jointly developed by Electronique Serge Dassault and IBM. a The case of intermittent failures is not taken into account in this paper: it is assumed the faulty circuit is in a steady electric state, and observa- tions are reproducible. 600 Engineering Problem Solving From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. - Multiple Correct Behavior Patterns The behavior of each analog electronic component de- pends, most of the time, on all the components which are connected to it. A defect on one component may change the functioning state of others. Thus, predicting the behavior of the different components quickly becomes very complex. For example, a transistor may behave very differently, as a current amplifier, a switch, etc depending on its electronic environ- ment. For instance, if the value of a resistor on the emitter of a transistor is too high the transistor changes from its normal (current amplifier) functioning state to an open (no current) functioning state. - Lack of Numerical Models Numerical models of behavior for each component are of- ten useless. Time dependence and non linearity make such models complex to use, except for some simple components (e.g. Ohm’s law for resistors). For example, a model of a transistor requires the specification of a dozen parameters. Today, such models are used only to simulate correct func- tioning. - Lack of Measurements The observation of analog electronic behavior means pro- viding the values of state variables, some of which cannot be measured. The inability, in an analog circuit, to measure currents is the most crucial of these limitations, because cur- rent is a state variable which is essential to distinguish be- tween the different models. problems is closely linked to the efficiency of the predictive procedure and of the strategy. The trouble for analog circuits is that (see I) the models of behavior required to have a predictive procedure are not available. The solution we propose is to take advantage of the fact that the troubleshooter reasons in terms of order of magnitude. This justifies the underlying assumption made in DEDALE that a defect leads to signilicant changes in the be- havior of the circuit. 
This assumption makes it possible to perform qualitative reasoning to: - use models of behavior based on order of magnitude re- lations, as defined by the expert, - search for significant symptoms. This is achieved by using a problem solver that checks the consistency of a set of order of magnitude equations. - define a strategy based on the concept of deviation. A de- viation is when a function behaves in a way which is signif- icantly different from its designed behavior. These deviations are looked for in the functional hierarchical decomposition of the circuit. It should be noticed that the checking process here is not a predictive procedure in the strict sense of the word. Such a procedure would lead to a qualitative “big crunch” [Brown, 19761 (a brute force approach) due to the multiplicity of cor- rect behavior patterns. In addition, the implication symptom + conflict is replaced for high level functions by the heuristic rule deviation + focusing. ehavior Setting the troubleshooting of analog circuits in a model- based approach raises a certain number of basic problems. In order to explain these difficulties, and to set the debate in a well defined framework, the General Diagnostic Engine [De Kleer and Williams, 1986-J is taken here as a reference. The General Diagnostic Engine consists essentially of three parts: a predictive procedure, an ATMS (Assumption-based Truth Maintenance System) and a measurement strategy. The predictive procedure uses models and structure to make be- havioral predictions from observations and assumptions of good functioning; it also detects symptoms (i.e. discrepancies between predictions or discrepancies between predictions and observations). A symptom is when at least one of a set of as- sumptions on the correct behavior of components is false. The ATMS manages these assumptions. From the symptoms it determines minimal conflicts (a conflict is a set of components, at least one of which is faulty) and generates a complete set of minimal cundidutes. A candidate is a set of components which, if they are faulty, explain, i.e. intersect, all the conjZicts3. The diagnosis procedure is incremental and guided by the measurement strategy. The adequacy of a GDE to real 3 Every superset of a conflict must be a conflict, and every superset of a candidate must be a candidate. Representing minimal conflicts and min- imal candidates is thus suflkient. A. Order of Magnitude Reasoning Reasoning about sign&ant changes in the behavior of a cir- cuit means performing qualitative reasoning. Models handl- ing only signs of quantities fail, even in simple cases, to distinguish between radically different patterns of behavior. In order to take signilicant changes into account, the qualita- tive value of a quantity that must be considered is both its sign and its relative order of magnitude. To describe order of magnitude relations, three key operators * , 2, - are de- lined, which represent the following intuitive concepts: A +Z B stands for A is negligible in comparison with B, A r B stands for A is close to B, i.e. (A - B) is negligible in comparison with B, A - B stands for A has the same order of magnitude as B. The underlying idea is that if A - B, then Be C implies A<<@ . A formal system FOG [Raiman, 19861 defmes a set of rules that can be applied to these relations (see Appendix). The basic axioms are the following: z and - are both equivalence relations and Y is fmer that -, e is a partial ordering be- tween the equivalence classes for -. 
Dague, Raiman, and Dev&s 601 Other operators useful in DEDALE are defined in terms of the three previous ones: A-+BstandsforA > B,A-B,and-(ArB) A m- B stands for A < B, A N B, and ‘(A z B). Library of Qualitative Models &me simple components have only one correct behavior, easily described by a unique model: Ohm’s law for resistors, Kirchoffs laws for nodes (remember that even a node is a component since it can be faulty). But, in general, generic components may have several possible correct behavior pat- terns, and different models are needed to describe all of them. A model Ml for a generic component consists of a set C, of constraints linking the electrical parameters attached to this component Voltages are linked by numerical constraints and currents by qualitative constraints. These models are based on physical laws and expertise. This expertise is required to describe all qualitatively correct behavior of complex compo- nents and to specify ranges for the numerical values of volt- ages corresponding to each behavior. For two different models of behavior MI and M, of the same component there is a significant change in terms of order of magnitude of at least one parameter. In order to reason about changes of behavior of the component, a set C, of constraints must be given for each pair (M,, M,) of models. These constraints ex- press the relative order of magnitude of any given parameter in M, and M’. Thus, each generic component has a set of models described by all the constraints C, and C,. For a given circuit and given input patterns’, each compo- nent will have a particular model, MN , (nominal) selected from its library. It is the model which corresponds to its de- signed behavior within the correct circuit. The values of the parameters for this particular model, called nominal values, are available by simulation or by measurement on a correct circuit. They are noted: P, IN,... Each model h4, is now de- scribed by its variation with respect to the reference model, MN, i.e. by the sets of constraints CN , C, and C,,. For instance, let us assume that the nominal model of a transistor is (this is the so called “normal” state in electronics):s 0.6 < Vf” < 0.9 0.2 < v& 1 With this nominal model, possible correct behavior patterns for the transistor are: “normal” state: no significant change in currents, but some possible changes in voltage. 4 We refer here to input patterns where the failure has been observed. s Base B, emitter E and collector C are the three terminals of a transistor. no : Zc z Zg C no : Z, z Zf 1 no : 0.6 < Vea -c If& + 0.4 B I( . . . . . . . . . . . no: I/cer VN, open state: great reduction in currents E 1 op : z,= Z$ c f -K: J op : I=== zg B op : ZBC zg . . . . . . . . . ., op : v,, < 0.5 i E J on state: limited increase in currents s state: same as on state, but with a lower collector current C. Assumptions and local consistency If we presume that a component is correct, we attach a qual- itative model of correct behavior to this component. This implies selecting a model from among the several models of correct behavior. This choice can be made only if relevant observations are available. Remember that the only observa- tions available are measurements for voltages, not for current intensity. Since there is not a one to one mapping between ranges for voltages and qualitative models of behavior, differ- ent models are generally consistent with the observations. Thus, selecting a model involves making an assumption. 
For example, consider the transistor Tl for which meas- urements indicate the following changes with respect to the nominal values: VN, = 0.74 v,, = 1 v& = 2 V CE = 0.25 Two models of correct behavior (on,s) for Tl are consistent with these observations. The two corresponding assumptions are noted Tl(on) and Tl(s). If we now presume that a block B, i.e. a set of connected components, is correct we attach a model of correct behavior to each of its components. Thus, an assumption for block B is a set a, of elementary assumptions for each of its compo- nents. A(B) stands for the set of all potential assumptions a, 602 Engineering Problem Solving for block B. The topology of a circuit makes it possible to define a set of links L(B) between the terminals of the com- ponents of block B. A link stands for a connection between two terminals of two diierent components, and is viewed as a constraints An assumption a, is consistent ifit satisfies all the constraints attached to L(B). FOG checks whether this set of constraints is satisfied or not. The set of assumptions a, that satisfy L(B), is noted C(B). If C(B) is empty, then the set of components in B is a con@?. This means that there is a defect in B. Minimal conflicts are searched for by focusing Iirst on minimal blocks, which are blocks composed of a node and the components connected to it. Such conflicts are minimal in that no available observation could reduce their size: EO be able to detect an inconsistency in Kirchoffs law for currents in a node, we need to know the order of magnitude of each of these currents, in other words to provide models of behav- ior of each component connected to that node. V. Example of Diagnosis Consider a simple basic block, called a voltage follower. It is composed of: two transistori (Tl and T2), five resistors (Rl-RS) and four internal nodes (N2-NS). Voltage is meas- ured at the terminals of the dinrerent components. These val- ues are available for the nominal behavior (see Fig. 1) and the actual behavior (see Fig. 2). With these nominal values, Tl and T2 are both in the no state. Values of resistors R4 and R5 and Ohm’s law imply that there are two main currents of the same order of magnitude (see Fig. 3 nominal behavior): IN - IN R4 FL5 Fig. 1 Nominal behavior Fig. 2 Malfunctioning Thus, C(B4) is empty. The same reasoning leads to: In this example, a deviation is observed for the voltage fol- lower: the observed voltage on input I is equal to its nominal value, but the voltage on output 0 is appreciably less than its 6 A link indicates that the same electrical signal is propagated on the two terminals. IO implies the same voltage on the hvo terminals, with opposite currents. nominal value. Because of deviation, attention is focused on the components of the follower. The assumptions of possible correct behavior for transistors, consistent with the observed measurements, are: Tl(on) and Tl(s), and T2(op). The sets of possible assumptions for the minimal blocks are? B2 = [N2,Tl,R2] A(B2) = { ( N2,Tl(on),R2 }, ( N2,Tl(s),R2 } } B3 = [N3,T2,R3) NW = { { N%-Wop),R3 ) ) B4 = [N4,T2,R2,R4] A(B4) = { ( N4,T2(op),R2,R4 } } BS = [NS,T2,RS] A(B5) = { { NS,T2(op),R5 } }. Let’s examine the consistency of A(B4). 
The nominal be- havior implies: (T2 no) (Ohm’s law, see above) (Kirchoff s law for N5) Using axioms of FOG (see Appendix), three relations: we deduce from these This relation and Kirchoffs law for N4 give: IS 2 ( - I&) (1) I:<< I& (2) For the actual behavior, the assumptions of correct be- havior for components of B4 give: T2(op) : 1,= 1: R2 : I, -+ IL R4 : IA4 z IL N4 : (I, + I,) z ( - I,4) (3) (4) (Ohnfs law) (5) (OhnYs law) (6) (Kirchoff s law) Using FOG once again, we obtain: (2) + (4) --, IpK I, (7) (1) + (4) + (5) + IIU -+ ( - 424) u-9 (3) + (7) -+ I,= IRl (9) (6)‘+ (9) -, Iw z (- IR4) (10) (8) + (10) + contradiction by deftnition of w+ C(B2) = { ( N2,Tl(on),R2 ) }. C(B3) = { ( N3,T2(op),R3 ) ). C(B4) = { ). C(B5) = { ). ’ Assumptions of correct behavior for resistors and nodes, that correspond to a unique model, are indicated by the name of the component. Dague, Raiman, and Devhs Since C(B4) and C(B5) are empty, at least one of the compo- nents N4, T2, R2 and R4 is faulty as is at least one of the components N5, T2 and R5. Thus, two minimal conflicts are identified: < N4,T2,R2,R4> and < N5,T2,R5 > . If the nodes are correct, then the three minimal candidates are [T2], [R2,R5] and [R4,R5]. This means that the set of defects of the circuit contains at least one of these three sets. With the more restrictive assumption that there is a unique defect, it is certain that T2 is the faulty component, because T2 is the only component that may cause both conflicts. But in addition we can also discover the kind of electrical defect occurring in T2. Finding the behavior of the faulty compo- nent is obtained by suppressing its assumption of correct be- havior. L-iere, suppressing the assumption that T2 is correct (in particular suppressing equation (3)) and applying FOG once again gives: The complete reasoning for the other terminals of T2 leads to: I,-= IB IE 2 I* This shows that the defect on T2 is a short-circuit between the base and the emitter. Unlike the situation in shallow reason- ing, where all possible faults have to be described beforehand, no models of misbehavior are needed here. Even better, such models can be discovered. Finally the qualitative reasoning describes the main changes in the behavior of the circuit due to the defect (see Fig.3). Notice that Tl is correct, although its behavior has changed: Tl(on) instead of Tl(norma1). nominal behavior Fig.3 Main short-circuit currents of T2 VI. Strategy The troubleshooting example described above does not re- quire using a strategy, since few components are involved. Troubleshooting circuits containing about a hundred compo- nents is more complex. Using once again the fundamental as- sumption that a defect in a component leads to significant changes in the behavior of the circuit allows us to define three basic strategies. A Top-Down Strategy According to the assumption, a defect in a component induces changes in the behavior of higher level blocks which contain this component. Most of the time, such changes occur in an observable way for at least one of these blocks, i.e. a deviation can be observed for this block. The top-down strategy con- sists then in focusing on functional hierarchical blocks where there are deviations, i.e. changes in order of magnitude be- tween the actual and the nominal behavior of the block. The process repeats itself until it reaches a basic function B, , for which the sub-functions are components. It is then possible to use models of behavior for these components 8. 
VI. Strategy

The troubleshooting example described above does not require a strategy, since few components are involved. Troubleshooting circuits containing about a hundred components is more complex. Using once again the fundamental assumption that a defect in a component leads to significant changes in the behavior of the circuit allows us to define three basic strategies.

A. Top-Down Strategy

According to this assumption, a defect in a component induces changes in the behavior of the higher level blocks which contain this component. Most of the time, such changes occur in an observable way for at least one of these blocks, i.e. a deviation can be observed for this block. The top-down strategy then consists in focusing on the functional hierarchical blocks where there are deviations, i.e. changes in order of magnitude between the actual and the nominal behavior of the block. The process repeats itself until it reaches a basic function B_F, for which the sub-functions are components. It is then possible to use models of behavior for these components. (For higher level functions, models of behavior describing exhaustively all correct functioning states are not available; knowledge of the nominal behavior only makes it possible to observe deviations. In particular, no assumption is made during the top-down process.)

The search for minimal conflicts inside B_F proceeds as in the above example, by looking first for minimal blocks in B_F. If no conflict has been detected inside B_F, we obtain a non-empty set of locally consistent assumptions C(B_F). Obtaining a non-empty set C(B_F) is possible because the deviation -> focusing rule is just a heuristic: a deviation for a function does not necessarily imply a defect in one of its components. Another block, B_K, must then be considered.

B. Horizontal Strategy

In fact, a deviation for a block may result from a defect in another block linked to it. Thus, the horizontal strategy means selecting a block B_K of the same hierarchical level as B_F, i.e. both are contained in the same higher level block B, and focusing first on a block B_K linked to B_F. Following the top-down strategy, we first search for a B_K for which there is a deviation. If no such B_K exists, we look for a block without any observable deviation, because a defect in one block does not necessarily imply a deviation for this block. If for all sub-functions B_l within B no C(B_l) is empty, we construct the set C(B), which is the subset of the product of the C(B_l) made up of assumptions that satisfy L(B), where L(B) is the set of links between the B_l sub-functions. If C(B) is empty, then B is a conflict. Since it is not usually minimal, we then look for minimal conflicts in B. Such conflicts obviously do not respect the hierarchy. To find them, we begin by taking the B_l sub-functions two by two. For each pair (B_l, B_k), we construct the set C(B_l ∪ B_k) of assumptions which satisfy the links between B_l and B_k. If C(B_l ∪ B_k) is empty, minimal conflicts in B_l ∪ B_k are searched for by considering first, for each link l between B_l and B_k, the minimal block of components linked by l. It should be pointed out that it is rare to have C(B_l ∪ B_k) empty while C(B_l) and C(B_k) are both non-empty. Indeed, if there actually is a defect in B_l, for example, it means that the observations of measurable parameters on B_l and on all its components do not make it possible to distinguish the behavior of the faulty block from a possible correct behavior of B_l.

C. Bottom-Up Strategy

It is possible to have C(B) non-empty because:
- a deviation for B does not imply a defect in one of its components,
- a defect in B may lead to a behavior of B consistent so far with correct patterns of behavior of its components.

This means there is a possible consistency at a higher level. The process repeats itself by searching for conflicts in another block at the same hierarchical level as B and, if no conflict has been detected so far, by considering the higher level block which contains B. This guarantees the detection of the smallest conflicts with respect to the functional decomposition of the circuit.

The expert system DEDALE has been implemented in VM/PROLOG [VM/PROLOG, 1985]. It has four components: (1) an object-oriented language with which to describe a circuit structurally and functionally; (2) a library of qualitative models for generic components; (3) FOG, a problem solver which performs order-of-magnitude reasoning; (4) strategic rules.
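As an illustration of the strategic layer, here is a minimal Python sketch of the top-down focusing rule (not DEDALE's actual code; the block tree, the deviation flags and all names are hypothetical):

    class Block:
        def __init__(self, name, children=(), deviates=False, basic=False):
            self.name, self.children = name, list(children)
            self.deviates, self.basic = deviates, basic

    def top_down(block):
        """Descend the functional hierarchy, focusing on blocks with an
        observable deviation, until a basic function (whose sub-functions
        are components) is reached."""
        if block.basic:
            return block  # hand over to the minimal-conflict search
        for child in block.children:
            if child.deviates:
                return top_down(child)
        return block      # no deviating child: fall back to other strategies

    follower = Block("voltage-follower", deviates=True, basic=True)
    board = Block("board", [follower], deviates=True)
    print(top_down(board).name)  # -> voltage-follower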
The expert system DEDALE is now being tested on real-size applications in a factory environment, to troubleshoot complex analog circuits. According to the first results, for about 75% of the investigated failures there are significant changes in the behavior of the circuit. In these cases, DEDALE is able to find the defects.

The remaining 25% of failures are not due to faulty components, but rather to components that work at the limits of their designed behavior. In such cases there are no significant deviations inside the circuit. Experience has shown that such failures are identified before trying a model-based approach. Specific heuristics can be added to DEDALE to try to handle these cases as well.

These results, coupled with the explicit statement of our working hypothesis, are, we hope, a step forward in knowing if and when qualitative reasoning techniques are efficient for real-size applications.

Acknowledgements

We wish to acknowledge Jean Pierre Adam, Jean Griffault, Pierre Luciani, Jean Pierre Marx and Patrick Taillibert for their contributions to the development of DEDALE. We also thank Patrick Taillibert and Vincent Tixier for their comments on early drafts.

Appendix

Here are some rules of FOG, written with A ≈ B for "A is close to B", A ~ B for "A is comparable to B", A << B for "A is negligible with respect to B", and [A] for the sign of A:

    A ≈ A
    A ≈ B  ->  B ≈ A
    A ≈ B, B ≈ C  ->  A ≈ C
    A ≈ B, [C] = [A]  ->  (A + C) ≈ (B + C)
    A ~ B  ->  B ~ A
    A ~ B, B ~ C  ->  A ~ C
    A ≈ B  ->  [A] = [B]
    A ≈ B  ->  A ~ B
    A << B, B << C  ->  A << C
    A << B, B ~ C  ->  A << C
    A ≈ B  ->  (A - B) << B
    A << B  ->  (B + A) ≈ B
    A << B  ->  -A << B
    A ~ B, [A] ≠ 0  ->  ¬(A << B)
    A ≈+ B  <->  A ~ B, ¬(A ≈ B), [A - B] = +
    A ≈- B  <->  A ~ B, ¬(A ≈ B), [A - B] = -

References

[Brown, 1976] A.L. Brown. Qualitative knowledge, causal reasoning, and the localization of failures. Technical Report TR-362, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 1976.

[Brown et al., 1982] J.S. Brown, R.R. Burton, and J. de Kleer. Pedagogical, natural language and knowledge engineering techniques in SOPHIE I, II and III. In D. Sleeman and J.S. Brown (Eds.), Intelligent Tutoring Systems, Academic Press, New York, 227-282, 1982.

[Davis et al., 1982] R. Davis, H. Shrobe, W. Hamscher, K. Wieckert, M. Shirley, and S. Polit. Diagnosis based on description of structure and function. In Proceedings AAAI-82, pages 137-142, Pittsburgh, PA, American Association for Artificial Intelligence, August 1982.

[De Kleer, 1984] J. De Kleer. How circuits work. Artificial Intelligence, 24:205-280, 1984.

[De Kleer and Williams, 1986] J. De Kleer and B.C. Williams. Reasoning about multiple faults. In Proceedings AAAI-86, pages 132-139, Philadelphia, PA, American Association for Artificial Intelligence, August 1986.

[Raiman, 1986] O. Raiman. Order of magnitude reasoning. In Proceedings AAAI-86, pages 100-104, Philadelphia, PA, American Association for Artificial Intelligence, August 1986.

[VM/PROLOG, 1985] VM/Programming in Logic. Program description and operations manual, IBM, 1985.
Explanation-Based Failure Recovery

Ajay Gupta
Hewlett-Packard Laboratories
Filton Road, Bristol BS12 6QZ, UK.
email: [email protected]

Abstract

Interactions are inherent in design-type problem-solving tasks where only partially compiled operators are available. Failures arising from such interactions can best be recovered from by explaining them in the underlying domain models. In this paper we explain how Explanation-Based Learning provides a framework for recovering in this manner. This approach also alleviates some of the problems associated with the least-commitment approach to design-type problem-solving.

I. Introduction

In the 'expert-system' literature, the need for declaratively represented 'causal' models along with compiled 'association'-based rules has been argued for reasons such as explanation, teaching and flexibility. In this paper we illustrate how compiled goal-oriented rules need to be supported by uncompiled domain principles when the problem-solver runs into failure. In this process, we also justify an observation made by Clancey [Clancey, 1983], which we believe to be true for most expert problem-solving:

    Principles are good for summarizing arguments, and good to fall back on when you've lost grasp on the problem, but they don't drive the process of [medical] reasoning.

Failures are particularly acute in synthesis tasks such as planning, design or control. These tasks usually require a problem-solving approach that involves construction of a solution, in contrast to the classification approach, which uses a pre-enumerated solution space. In these tasks, the problem-solver is given a set of goals and operators that suggest how certain goals can be achieved or refined. These goals are hierarchically refined by applying suitable operators to generate new subgoals until the desired level of detail is reached. Most real design-type problems have some non-independent goals, which lead to interactions.

Interactions can be avoided by writing operators in such a way that no unwanted relationship between the sub-modules is established while problem-solving. Compiling out all the interactions amounts to mapping the problem into one of classification. In general, however, it is infeasible to compile out all the interactions, as they increase exponentially with the number of modules.

Another way of avoiding interactions is to anticipate them by placing constraints on the choices where incompatibility is likely to occur. By sufficiently constraining a choice point the interactions can be obviated. For instance, MOLGEN [Stefik, 1980] uses the least-commitment strategy by employing such constraint-posting. But in order to use this technique the problem-solver needs to know all conceivable interactions and the constraints to pre-empt them.

In most domains, the operators used for planning or design are only partial models of real-world actions - in particular, not all postconditions of an action are known statically. This is particularly so for complex actions whose consequences depend upon the situation in which they are employed. For instance, such operators are required in order to build plans involving simultaneous actions. Here the traditional STRIPS model of operator representation breaks down, because the factors that can affect the global consequences of an operator become very large, and recording all of them would make the operator unwieldy to use.
Thus the local description of operators, required for flexibility and efficiency, necessitates only partial compilation of their consequences. Interactions arise during problem-solving because of the use of such partially compiled operators.

Furthermore, in any real-life design task it is extremely difficult to have all requirements identified in the initial specification. Design and prototype-evaluation is a cyclical process during which the specifications are modified several times. Thus a realistic design-type problem-solver must have the capability to deal with failures as they arise during problem-solving.

Backtracking - chronological or dependency-directed - is the last resort of recovery from problem-solving failures. As we will demonstrate on some examples in the sections that follow, both of these approaches suffer from the problem of thrashing, i.e. running into identical failures repeatedly.

An alternative approach that addresses some of the above issues employs partially-compiled operators that will 'normally' produce the desired goal without interactions; in the unusual cases when they do fail, the problem-solver attempts to recover gracefully by explaining the failure in the domain model. The technique of explaining a failure is very similar in spirit to that employed in other Explanation-Based Learning (EBL) work [deJong and Mooney, 1986], [Mitchell et al., 1986]. First we summarise the basic ideas of EBL, and then illustrate with an example how it provides a useful framework for failure-recovery. Later sections discuss relationships, advantages and issues for further research. This architecture has been implemented in a PROLOG-based planning system called TRAP [Gupta, 1985]. Figure 1 shows the top-level architecture of the implemented system.

Figure 1: TRAP Architecture (the simulator produces a proof-trace of a detected failure, from which the recovery module derives constraints for the planner).

II. Explanation-Based Learning

Explanation-based generalisation (EBG) is a technique for learning new concepts and rules from single examples. EBG, and more generally EBL, have been surveyed comprehensively in [deJong and Mooney, 1986], [Mitchell et al., 1986]. In brief, EBG requires the following information:

Goal Concept: a definition of the concept to be learned.
Training Example: a specific example.
Domain Theory: axioms for verifying if an example has a property.
Operationality Criteria: specifications on the representation of the concepts.

EBG attempts to construct an explanation of why the training example satisfies the goal concept. This explanation takes the form of a proof tree composed of domain-theory inference rules which proves that the training example is a member of the concept. This explanation is then generalised to obtain a set of sufficient conditions under which this explanation structure holds in general. The desired general preconditions are obtained by regressing the goal concept through the explanation structure.

III. Example

Consider a robot apprentice in a metallurgy workshop whose task is to plan the production of objects with some specified properties. The planning operators, which encode metallurgical processes such as hot-rolling and heat-treatment, constitute compiled information for achieving typical goals. The apprentice also has some operators for transporting the objects around the workspace, fixing them in machines, etc.
An example operator for fixing an Object in a Machine is:

    op( fixin(Machine,Object),             % name
        [gripped(Object)],                 % Usewhen preconditions
        [on_side(Hand)],                   % Whenuse condition
        [ ],                               % subgoals - if any
        [traverses(Object,Machine,Hand)])  % postconditions

Furthermore, the apprentice has domain knowledge about the basic properties of materials, machines and physical processes. This knowledge constitutes the domain theory, and also includes certain domain requirements, e.g. tools shouldn't be damaged, and accidents should be avoided:

- melting_point(gripper, 50).
- melts(X) & tool(X) -> "fail".
- traverses(Object,Machine,Side) & shielded(Machine,Side) -> collision(Object,Machine).

Consider a scenario where the apprentice has been given the task of making a hot-rolled bar out of a steel block. This requires heating the block to 400 degrees C, and then fixing it on a rolling machine. Using the compiled operators the planner reduces the goal in the following manner:

    make_bar(block)
      -> heat(block, 400)
      -> fixin(rolling_mc, block)
           -> pick_up(block, _, _)
           -> navigate_to(rolling_mc)

Figure 2: Hierarchical Plan

In this refinement the apprentice has the choice of holding the Object on its left- or right-hand side, and also of different means for picking up an object, which include using its grippers, polymer gloves or steel prongs. Because at this stage the apprentice lacks any knowledge to discriminate between the choices, it chooses to use its gripper on the right-hand side for picking up the block, leading to the following plan:

1. heat(block, 400)
2. pick_up(block, gripper, rhs)
3. navigate_to(rolling_mc)
4. fixin(roller, block)

Before this plan is approved for 'execution' in the real world, it needs to be tested by simulation. In this process it is noticed that while picking up the heated block, the gripper will melt, because its surface has a melting point lower than the temperature of the block being picked up. This forces the plan to be rejected, because it leads to a violation of the domain requirement that tools should not be damaged. Under dependency-directed backtracking, the problem-solver can revisit the choice of how to pick up the block and, instead of picking it up directly with its gripper, use some tool. But this time it can try to pick up the hot block using polymer gloves, which will again give rise to the same failure of a melting tool. Thus backtracking does not prevent the problem-solver from making a choice that leads to an identical failure, and hence it thrashes over a particular failure. Even in an assumption-based TMS [deKleer, 1986] the problem remains, because it only records the instantiated nogood set.

IV. Recovery by EBG

In order to avoid this thrashing we need to note the reason behind the particular instance of failure, so that the same kind of mistake is not repeated. In the above example the recovery mechanism should infer that there is a danger of damaging a tool if its melting point is lower than the temperature of the object it is picking up. The replacement technique that operationalises this behavior involves inferring sufficient conditions for the failure using EBG. These sufficient conditions are then rewritten into a constraint, using assumptions about the tenacity of various types of choices. Negation of this condition gives the necessary conditions to avoid that failure.

In terms of EBG, each instance of a failure constitutes a training example. The goal concept to be learnt is "fail".
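The failure check just described is simple to sketch. The following Python fragment is illustrative only (the state representation, the helper name and the melting points of the gloves and prongs are hypothetical, not TRAP's actual code):

    # State facts after step 1 of the plan: the block has been heated.
    state = {"temperature": {"block": 400},
             "melting_point": {"gripper": 50, "polymer_gloves": 120,
                               "steel_prongs": 1400},
             "tools": {"gripper", "polymer_gloves", "steel_prongs"}}

    def simulate_pick_up(obj, tool, state):
        """Apply the domain-theory axioms: touching makes a tool melt when
        its melting point is below the object's temperature, and a melted
        tool violates the requirement that tools shouldn't be damaged."""
        touches = True  # pick_up(Object, Tool, Side) => touches(Object, Tool)
        melts = touches and \
            state["melting_point"][tool] < state["temperature"][obj]
        return "fail" if melts and tool in state["tools"] else "ok"

    print(simulate_pick_up("block", "gripper", state))         # -> fail
    print(simulate_pick_up("block", "polymer_gloves", state))  # -> fail
    print(simulate_pick_up("block", "steel_prongs", state))    # -> ok

The second call shows the thrashing: plain backtracking would try the gloves and rediscover the same failure.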
A simulation of the plan results in a proof under the domain theory. For instance, the proof tree of the failure in the example above derives "fail" from pick_up(block, gripper, rhs), melting_point(gripper, 50), temperature(block, 400) and tool(gripper), via touches(block, gripper) and melts(gripper).

Figure 3: Proof Tree

The derivation of the above failure included the following axioms:

1. pick_up(Object, Tool, Side) => touches(Object, Tool).
2. touches(Object, Tool) & melting_point(Tool, Mp) & temperature(Object, Temp) & Mp < Temp => melts(Tool).
3. tool(Tool) & melts(Tool) => "fail".

On regressing the goal condition, we can obtain the initial premises which have to be satisfied for the proof to go through. In this process, we are identifying those properties of the objects that contributed to the derivation of the failure; and it is this set of properties that must be avoided to prevent another similar failure. The above proof can be summarized as an implication of the form:

    if pick_up(Object, Tool, Side) & melting_point(Tool, Mp) &
       temperature(Object, Temp) & Temp > Mp & tool(Tool)
    then "fail".

In order that this failure can be avoided in the future, the planner needs to be able to test for its satisfaction whenever a choice is to be made. To make this test more efficient, the sufficient conditions for the failure need to be further transformed to obtain a constraint that can act as a guard at selected choice points. The constraint must obviously be evaluable at the time the choice is being considered. Additional principles used in this transformation (essentially blame-assignment) record the tenacity of different conditions. An example of such a principle is the fact that aborting a goal (negating the goal condition) is more difficult than changing the operator used to achieve it. The order in which we would be willing to give up a choice is:

1. Variable instantiation (e.g. use gripper)
2. Operator to achieve a goal (e.g. pick up the Object)
3. Problem-solving goal (e.g. move from A to B)
4. Domain requirements (e.g. tools should not be damaged)

This ordering is largely pragmatic. It is reasonable to expect that a problem-solver should not be allowed to change domain laws simply because some choices conflict with them. However, it is conceivable that in certain situations one might be more keen on giving up the goal rather than resatisfying it (e.g. in time-critical planning), or that certain domain requirements can be waived to meet a critical goal. In our implementation these choices, which in the framework of EBG form the operationality criteria, have been hardwired so that goal regression stops at the operators. But in general a flexible control strategy can be used to dynamically compute these operationality criteria [deJong and Mooney, 1986].

In the above example, after applying the simplification suggested above, we get the following constraint that records the necessary conditions for avoiding the failure:

    action: pick_up(Object, Tool, Side)
    constraint: if melting_point(Tool, Mp) & temperature(Object, Temp)
                then not(Mp < Temp).

This constraint is added as another field in the operator definition and used as a guard over the choices, to prevent all instantiations of this derivation of the failure.

V. Compilation

In the above example the derived constraint did not depend upon any condition other than those that were present in the state in which the failure occurred. In general a failure may depend upon choices that have been made by past actions.
Continuing the above plan, the action pick_up(block, steel_prongs, rhs) is to be followed by the action fixin(rolling_mc, block). If the apprentice knows that shielded(rolling_mc, rhs), then the plan will fail because of collision(rolling_mc, block), which is recognised as a violation of a domain requirement:

1. pick_up(Object, Tool, Side) => on_hand(Object, Side).
2. fixin(Machine, Object) & on_hand(Object, Side) => traverses(Object, Machine, Side).
3. shielded(rolling_mc, rhs).
4. traverses(Object, Machine, Side) & shielded(Machine, Side) => collision(Machine, Object).
5. collision(Machine, Object) & machine(Machine) => "fail".

In this case the derived constraint would be placed on the action pick_up(), because that is where the only relevant variable-instantiation choice was made, but it depends upon the action fixin() that comes later in the plan. The scenario is illustrated in Figure 4.

Figure 4: Complex Failure (the choice is made at pick_up(); the failure arises later at fixin()).

In general such a constraint will not be evaluable at the action pick_up, because there is no a priori knowledge that it will be followed by the action fixin in a particular problem. Further, the action pick_up need not be immediately followed by the action fixin for this failure to occur. So long as the culprit condition, in this case on_hand(block, rhs), is not disturbed by intermediate actions, the same failure can be derived (essentially a consequence of the frame assumption).

In our current implementation we have taken the restricted generalisation approach [Mitchell et al., 1986]. We compile, in the form of an abstract operator, the subtree of the goal-refinement tree (generated while planning) that includes all the actions used in the proof of the failure. In the above example, from the goal-refinement tree shown, it is clear that the subtree subtended at roll() covers all the actions participating in the above failure condition. A new operator roll'() is created with its subgoals as the actions on the frontier of the subtree. As this new operator ensures that the sufficient condition for the failure would hold, we attach the derived constraint to this new operator. As has been recognised in [deJong and Mooney, 1986], this approach leads to under-generalisation, because now the constraint is applicable only if a specific sequence of actions is executed. In general, not only is it difficult to describe the predicate that represents whether a condition is carried from one action to another unchanged by intermediate actions, it is also impossible to evaluate such a condition while planning.

In the process of generating these constraints we are generalising from a failure instance. Compilation of these constraints into operators results in specialisation of the operators. Increasing compilation leads to increased efficiency, but in our implementation the general versions of the operators are retained for the purposes of flexibility.

For explanation-based failure recovery to be useful, the generated sufficient conditions for failure need to be more general than the actual sequence which led to the failure: otherwise the technique degenerates into dependency-directed backtracking. Non-trivial constraints can be inferred only if the domain model has been represented intensionally. For instance, if the problem-solver stored only the fact that touching the hot block with the gripper damages it, without recording the underlying reasons, it could only infer that the gripper should not be used for picking up hot blocks.
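The derivation of the guard from the intensional rules can be sketched as a tiny goal regression over the failure proof. The Python fragment below is a simplified illustration (it treats atoms as strings and omits unification; the rule encoding and names are hypothetical). It shows how the general melting-point guard falls out of the axioms, rather than the shallow gripper-specific fact:

    # Axioms of the failure proof, as conclusion -> premises; premises in
    # OPERATIONAL are evaluable at the time the choice is made.
    RULES = {
        "fail": ["melts(Tool)", "tool(Tool)"],
        "melts(Tool)": ["touches(Object, Tool)", "Mp < Temp"],
        "touches(Object, Tool)": ["pick_up(Object, Tool, Side)"],
    }
    OPERATIONAL = {"pick_up(Object, Tool, Side)", "tool(Tool)", "Mp < Temp"}

    def regress(goal):
        """Regress a goal through the rules until only operational
        premises remain: the weakest preconditions of 'fail'."""
        if goal in OPERATIONAL:
            return [goal]
        leaves = []
        for premise in RULES.get(goal, []):
            leaves.extend(regress(premise))
        return leaves

    conditions = regress("fail")
    # Guard attached to pick_up: negate the choice-dependent condition,
    # keeping the remaining conditions as the triggering context.
    print("if", " & ".join(c for c in conditions if c != "Mp < Temp"),
          "then not(Mp < Temp)")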
Backtracking makes the assumption that alternatives at a choice-point are 'independent', i.e. that they have no relation to one another, which need not be the case. We have noted that by using a domain model sufficiently deep that these relationships can be inferred, a more general condition than that encoded in the nogood sets can be rejected after a failure.

One of the outstanding problems with EBG, as pointed out in [Mitchell et al., 1986], is generating an explanation in an incomplete or undecidable domain model. In the discussion above, the simulation needs to be exhaustive enough to detect every failure derivable under the domain theory. It is clear that complete simulation of a scenario is computationally prohibitive. We need some mechanisms for guiding the search for the failures. In practice there are two approaches:

a. simulation can be localised to look for certain kinds of problems that are more likely to occur.
b. in situations where it is safe, the plan can be executed in the real world and its results explained if they turn out to be different from expectation.

In the context of EBL, [deJong and Mooney, 1986] have noted the first alternative as learning under external guidance, and the second as learning by observation. In terms of plan-time failure-recovery, the techniques of partial simulation and execution monitoring are being actively pursued. But how a proof is arrived at is independent of the techniques used in the recovery module.

The generalisation procedure currently used in TRAP is similar to that in [deJong and Mooney, 1986]. The simulator generates the SPECIFIC instance of the proof-tree, and records the axioms that were used in the process. Using this proof-tree and the recorded dependencies, the recovery mechanism generates the GENERAL version of the explanation structure.

VII. Relationships

In MOLGEN [Stefik, 1980], where the least-commitment strategy is used for object-selection, backtracking has been replaced by the forward-reasoning involved in constraint-propagation. It is well known that this forward reasoning can be equally computationally expensive if there is a sufficient number of applicable but irrelevant constraints in a situation. The problem is the identification of relevant constraints. Risk-and-recover contrasts with the very cautious approach of least-commitment, which saves the cost of failure-recovery but trades it off for the cost of making sure that a step is right. We have suggested how constraints can be generated dynamically to cover failures as they are encountered. In fact, if in the absence of some constraints the plan fails due to interactions, the resulting failure can be used to infer the constraints which would avoid it in the future. In this manner, the problem-solver becomes more careful after encountering a failure, and avoids being a 'pessimist' that tests for everything that can go wrong before taking a step.

Opportunistic planning, suggested as a model for human planning [Hayes-Roth, 1983], emphasises the need for data-directed reasoning along with goal-directed reasoning. The architecture presented in this paper is a restricted interpretation of the opportunistic model, where detailed refinements can suggest modifications to previously taken abstract decisions, but only when the problem-solver runs into failures.
By thus restricting the 'data-directed' guidance to be invoked only when the goal-directed approach does not quite work, we get a computationally realistic control strategy. The idea of using sufficient conditions for a failure has been proposed as the avoidance method in [Hayes-Roth, 1983].

VIII. Conclusions

We have presented a framework based on explanation-based learning that illustrates the role of a domain model in supporting the compiled goal-oriented operators in design-type problem-solving tasks. This support is required in an evolving system because, at any point in time, although the domain model can be expected to be complete, compilation of the goal-oriented operators will necessarily be incomplete. This framework alleviates the three difficulties mentioned in the introduction: compiling out all the interactions, identifying applicable constraints in order to use least-commitment, and the thrashing inherent in backtracking.

We are currently investigating reasoning about temporally ordered actions in generalised failure constraints, and integrating this technique with other recovery techniques, such as goal-reordering [Tate, 1977], for dealing with failures due to interacting sub-goals.

Acknowledgements

John Lumley and Stefek Zaba provided extensive comments on the presentation of this paper. The author was financially supported by an Inlaks Foundation Scholarship while at the University of Edinburgh.

References

[Clancey, 1983] W.J. Clancey. The Epistemology of a Rule-Based Expert System: A Framework for Explanation. Artificial Intelligence, 20(3):215-251, 1983.

[deJong and Mooney, 1986] G. deJong and R. Mooney. Explanation-Based Learning: An Alternative View. Machine Learning, 1(2):145-176, 1986.

[deKleer, 1986] J. deKleer. An Assumption-Based TMS. Artificial Intelligence, 28(2):127-162, 1986.

[Gupta, 1985] Ajay Gupta. Failure Recovery Using a Domain Model. MPhil Thesis, Dept. of AI, University of Edinburgh, 1985.

[Hayes-Roth, 1983] F. Hayes-Roth. Using Proofs and Refutations to Learn from Experience. In Machine Learning: An Artificial Intelligence Approach, R.S. Michalski et al. (eds.), Tioga, Palo Alto, CA, 1983.

[Mitchell et al., 1986] T.M. Mitchell et al. Explanation-Based Generalisation: A Unifying View. Machine Learning, 1(1):47-80, 1986.

[Stefik, 1980] Mark Stefik. Planning with Constraints. PhD Thesis, Dept. of Computer Science, Stanford University, 1980.

[Tate, 1977] Austin Tate. Generating Project Networks. In Proceedings IJCAI-77, pp. 888-893, MIT, Cambridge, International Joint Conference on Artificial Intelligence, 1977.
L. Joskowicz
Department of Computer Science
Courant Institute of Mathematical Sciences, New York University
251 Mercer Street, New York, NY 10012

Abstract

This paper describes a two-step algorithm for the qualitative analysis of mechanical devices. The first step takes the geometrical description of the parts and their initial position and produces a description of the possible relative motions of pairs in contact, by computing the configuration space of those pairs with respect to selected motions. Given the possible relative motions and an input motion, the second step computes the actual motion of each object for fixed-axis mechanisms, using a constraint-propagation, label-inferencing technique. The output is a state diagram describing the motion of each part in the mechanism.

Figure 1: The Driver System (Gear1 and Gear2 in a fixed frame of rigid objects).

I. Introduction

A central problem in qualitative reasoning is to predict the behavior of a physical device from its structure. In De Kleer and Brown's formalism [De Kleer and Brown 1985], a device consists of three types of constituents: materials, components and conduits. Components are elementary parts that operate on and change materials. Conduits are components that do not change materials: they transport the material from one component to the other. Behavior is achieved by transporting materials from one component to the other through conduits. Applying this paradigm to the domain of mechanical devices amounts to considering motions as materials that are transported by mechanical parts and that are modified by pairs of parts. For example, in a train of gears, the components are gear pairs that modify the material 'rotation', transported by individual gears and axes. The function of the particular pair configuration is stored in the component description. Individual parts are not considered components. This is in contrast with the modeling of electrical devices as described in [De Kleer and Brown 1985], where electrical parts (resistances, light bulbs, batteries) are components and their function does not change in different configurations (radios, heaters, etc.); a topological description of their connections is sufficient. For mechanical parts, the interaction between two gears is different from the interaction between a worm gear and a gear, leading to a different type of behavior. This requires that all the relations between two particular objects be defined in advance. Since the number of possible objects is very large (small changes in the geometry of objects can lead to radically different behaviors), an exhaustive list of such relations is implausible. Forbus' Qualitative Process Theory [Forbus 1985] suffers from the same limitations. A more general model that supports geometrical reasoning is required.

Quantitative geometrical reasoning about moving objects has been studied in relation to the motion planning problem [Schwartz et al., 1987] and [Lozano-Perez 1983]. Given an initial and a final position of the objects, the goal is to find a set of movements in space that describe the path (if such a path exists) from one position to the other. Finding the possible behaviors of the parts of a mechanism can be viewed as finding the set of all the possible paths the parts of a mechanism can have, and the relationships between these paths. This has been shown to be possible in principle, but only very specific simplifying cases have been fully analyzed [Schwartz et al., 1987].
II. An Algorithm for the Analysis of Mechanisms

We propose a two-step algorithm for the analysis of mechanisms. First, given a geometrical description of the objects and their initial positions, Local Interactions Analysis finds the possible relative motions of all pairs of objects that are in contact. Possible relative motions of objects in contact are expressed in terms of a small set of parametrized motion predicates (such as rotates(A, axis(O), parameter) for object A), and a set of algebraic relations between parameters that indicate the dependencies between the motions. The second step is the Global Interactions Analysis. Given the pairwise possible relative motions and an input motion, it determines the actual motion of each object.

Two classes of mechanisms are distinguished here: fixed-axis mechanisms and movable-axis mechanisms. Fixed-axis mechanisms are those mechanisms whose rotary axes do not move in space (the Driver System, for example). Movable-axis mechanisms have at least one rotary axis that moves in space (such as linkages). For fixed-axis mechanisms, we build a constraint-propagation network where each object is represented as a node, and each pairwise relation as a constraint edge between two objects. The initial motion is propagated as a label in the network, and nodes are labeled with possible motion predicates according to the pairwise relations. When the propagation halts, each node has a label that corresponds to the motion(s) of the object represented by the node. For movable-axis mechanisms, we provide a heuristic rule to determine a lower bound on the degrees of freedom of the entire mechanism.

III. Local Interactions

We categorize pairs of objects in contact (kinematic pairs) following Reuleaux's classification [Reuleaux 1876]. Two objects in contact can form either a lower pair or a higher pair. Lower pairs are pairs in which the contact between the two objects takes place along a surface. Higher pairs are pairs in which the contact takes place along a line or a point. There are only six types of lower pairs, as illustrated in Figure 2, and infinitely many higher pairs. A typical higher pair is the pair formed by two meshing parallel gears. For each lower pair there is a simple motion predicate that describes the possible relative motions of the parts. For example, if A and B form a prismatic pair along an axis parallel to O, this relation can be stated as:

    prism(A, B, O) <=> translation(A, O, Xa), translation(B, O, Xb),
                       0 <= Xa + Xb <= | length(A, O) - length(B, O) |

that is, the possible relative motion of A is a translation along axis O by a distance Xa, and that of B is a translation along axis O by a distance Xb. The inequality involving Xa and Xb must always be satisfied, where length(A, O) denotes the length of the prismatic section of A along axis O. The relations for revolute(A, B, O), helical(A, B, O), cylindric(A, B, O), spheric(A, B, point) and planar(A, B, plane) are similarly defined.

To find whether two parts form a lower pair, we compute the configuration space of the translation and rotation of A with respect to B, which is fixed, as defined in [Lozano-Perez 1983]. The configuration space of a moving object A with respect to a fixed object B is the set of all the positions of A such that A does not overlap with B. Figure 2 shows the configuration spaces (properly projected) for each one of the lower pairs. Two parts form a lower pair if their configuration space is one of the six configuration spaces shown in this figure. If the resulting configuration space is a point, then the two objects are attached.

Objects are assumed to be three-dimensional objects that can be described by the union, intersection and difference of simple forms such as cylinders, cones and polyhedra, as in Constructive Solid Geometry. A three-dimensional object has six degrees of freedom in space, and therefore its configuration space with respect to other fixed objects is 6-dimensional. Since it is impractical to compute the full 6-dimensional space, we compute the two-dimensional configuration space with respect to translation and rotation along a particular axis. By computing the configuration spaces for a small number of axes, and taking unions and intersections of them, we find the configuration space of the pair. This is a heuristic method, since it depends on the right choice of axes to analyze. It is valid for all lower pairs except the helical one.

The recognition of higher pairs does not lend itself to a general method like the one described above. Two approaches are suggested here: a functional approach and a differential approach.
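Before turning to those two approaches, note that the lower-pair predicates above are directly checkable. A minimal Python sketch of the prismatic-pair constraint (the helper name and example lengths are hypothetical):

    def prism_ok(xa, xb, len_a, len_b):
        """Check the prismatic-pair relation prism(A, B, O): translations
        Xa and Xb along the common axis O must satisfy
        0 <= Xa + Xb <= |length(A, O) - length(B, O)|."""
        return 0 <= xa + xb <= abs(len_a - len_b)

    # A 10-unit part sliding inside a 4-unit sleeve leaves 6 units of travel:
    print(prism_ok(2, 3, 10, 4))  # -> True  (2 + 3 <= 6)
    print(prism_ok(4, 3, 10, 4))  # -> False (4 + 3 >  6)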
If the resulting configuration space is a point, then the two objects are attached. Objects are assumed to be three dimensional objects that can be described by the union, intersection and dif- ference of simple forms such as cylinders, cones and poly- hedra, as in Constructive Solid Geometry. A three di- mensional object has six degrees of freedom in space and therefore its configuration space with respect to other fixed objects is 6-dimensional. Since it is impractical to compute the full g-dimensional space, we compute the two dimen- sional configuration space with respect to translation and rotation along a particular axis. By computing the con- figuration spaces for a small number of axis, and taking unions and intersections of them, we find the configura- tion space of the pair. This is a heuristic method since it depends on the right choice of axes to analyze. It is valid for all lower pairs except the helical one. The recognition of higher pairs does not lend itself to a general method as the one described above. Two approaches are suggested here: a functional approach and 612 Engineering Problem Solving (a) Circle (b) Line :-rll (I) (d) Cylinder (e) Sphere I 3 (c) Helix A lz (f) 3D Space Figure 2: The six lower pairs: (a) revolute (b) prism (c) helical (d) cylindric (e) sph eric (f) planar and their config- uration space a differential approach. In the functional approach, objects are described by properties. For example, a gear can be approximated by a cylinder with a number of properties such as the number of teeth, radius, etc: cylindrical-gear(G) is defined by the properties radius(G), origin(G), cyZinder( G), number_of -teeth(G), sire-of -teeth(G), axis_of rotation(G), beZongs(origin(G) , axis-ofrotation( G)) A higher pair is then described as a predicate that relates two functionally described objects. Two cylindrical gears that are mounted in parallel can be described by the parallel-gears(d, B) predicate. If the preconditions: isa(cyZindricaZ,gear(A)), isa(cyZindricaZ-gear(B)) size-of-teeth(A) = size-of-teeth(B) fixed(axis-ofrotation( A)), f ixed(axis-of rotation(B)) paraZZeZ(axis-of rotation(A), axis-of rotation(B)) distance(origin( A), origin(B)) = radius( -4) + radius(B) are satisfied, then the relation between the possible mo- tions is: rotation(A, axis-of-rotation(A), origin(A), 0,) e rotation( B, axis-of rotation( B), origin(B), BB ) 9 A=-eB xnumber-of-teeth(A)/number-of-teeth(B) and objects A and B are said to form a parallel gear pair. For unknown higher pairs, several rules can be used to deduce the differential behavior of the two parts. Their in-, tegral behavior can then be deduced from their differential behavior. The analysis at the differential level consists of determining the behavior of the two parts at the next in- finitesimal instant. The analysis at the integral level deter- mines the behavior over a period of time. The predicates for lower and higher pairs shown before p-ism(A, B, 0) and paraZZeZ_gears(A, B) d escribe both integral behaviors. It is possible to infer the differential and integral behavior of a higher pair using a set of differential behavior rules for solid objects. To illustrate how such an analysis can be made, suppose we want to infer (and not just to state) the relation between two parallel gears. The following ar- gument can be used: Let 5% and Tb be the two teeth in contact, where Tu belongs to gear A and Tb belongs to gear B. A rotation of gear A causes 2% to move along a circu- lar path by a distance of dl. 
Since Ta is in contact with Tb and there are no obstacles that interfere with the motion of B, Tb will move along another circular path by the same distance dl. This constitutes the differential behavior of both gears. By determining how long Ta and Tb will be in contact, we can integrate this behavior. The behavior during an interval of time I is that B rotates together with A, in opposite directions, by an angle θ_B that equals - θ_A × number_of_teeth(A) / number_of_teeth(B). Let Ta' and Tb' be the two teeth following Ta and Tb in the direction of the motion of A and B respectively. Since Ta' and Tb' are part of A and B, they move with A and B respectively. Therefore they will be in contact before Ta and Tb stop being in contact (assuming the spacing between teeth is such that this is true). Another integration of behavior can now be based on symmetry arguments: B will turn when A turns, in opposite directions. The same angle relationship as described above will hold for any time interval I. We thus obtain the relation for parallel gears. This argument is made more precise by using a set of rules that support this deduction. An example of such a rule is the differential Contact Rule. This rule states how a force is transmitted between two planar surfaces:

    Contact Rule: let S1, S2 be two planar surfaces of two distinct objects
    O1 and O2 in contact. Let N1, N2 be the two normals to the point (or
    surface) of contact of S1 and S2 respectively. Then if a force is
    applied to O1, it will be transmitted to O2 in the direction of the
    normal to the point (or surface) of contact, provided that O1 can move
    in the direction of the force (assuming no obstacles and forces greater
    than friction).

Using this set of rules, with additional geometrical reasoning, we showed how to deduce the behavior of a worm gear meshed with a cylindrical gear. Although this method is not general, it can be used in some simple cases, especially in the domain of gears.

Kinematic pairs can be either simple or complex. Simple pairs are the ones described above, i.e. those which have a single state corresponding to a single relative qualitative behavior. A complex pair is a pair that is described by several simple pairs, each corresponding to a different relative qualitative behavior. Each possible relative qualitative behavior is represented by a local state. A local state is created by a change in the contact points or surfaces between the two parts. Each new local topology is analyzed as a simple pair, and the transitions between states are conditions on the positional parameters of the objects. The resulting collection of states and transitions is called the local state diagram for the kinematic pair.

The output of the Local Interactions Analysis is a set of local state diagrams containing relative motion predicates, one for each pair of objects originally in contact. This description corresponds to a functional description of the kinematic pairs.

IV. Global Interactions Analysis

Given the pairwise possible relative motions and an input motion, the task of the Global Interactions Analysis is to find the behavior of the mechanism in terms of the motions of its individual parts. We will first provide an algorithm for the propagation of motion in mechanisms whose axes of rotation do not move in space (they are spatially fixed). We will then analyze the criteria necessary for mechanisms in which axes move in space.
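Before turning to the propagation algorithm, here is one such pairwise relation from the Local Interactions Analysis, the parallel-gear pair, in executable form. This Python sketch is illustrative only (the datatypes and names are not the paper's Franz Lisp implementation):

    from dataclasses import dataclass

    @dataclass
    class Gear:
        radius: float
        teeth: int
        tooth_size: float
        axis: str          # identifier of a fixed, parallel rotation axis
        origin: tuple      # (x, y) position of the gear center

    def parallel_gears(a, b, tol=1e-9):
        """Preconditions of the parallel_gears(A, B) predicate."""
        dist = ((a.origin[0] - b.origin[0]) ** 2 +
                (a.origin[1] - b.origin[1]) ** 2) ** 0.5
        return (a.tooth_size == b.tooth_size
                and a.axis != b.axis   # two distinct parallel fixed axes
                and abs(dist - (a.radius + b.radius)) < tol)

    def induced_rotation(a, b, theta_a):
        """theta_B = - theta_A * teeth(A) / teeth(B) for meshing gears."""
        assert parallel_gears(a, b)
        return -theta_a * a.teeth / b.teeth

    g1 = Gear(radius=2.0, teeth=20, tooth_size=0.1, axis="O1", origin=(0, 0))
    g2 = Gear(radius=1.0, teeth=10, tooth_size=0.1, axis="O2", origin=(3, 0))
    print(induced_rotation(g1, g2, 90.0))  # -> -180.0: B turns twice as fast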
A. Motion Propagation Algorithm for Fixed-Axis Mechanisms

We will first consider mechanisms for which each pair has a single local state (simple pairs), and whose global topology does not change as the parts move. The problem of determining the motions of all the parts of a mechanism, given an initial input motion, can be viewed as a constraint-propagation, label-inferencing problem. Given a set of terms with initial labelings and a set of constraints relating the terms, the goal is to find a final term labeling that is consistent with the constraints. A constraint network for the Global Analysis is built by having one node for each part in the mechanism. The labels are the possible parametrized motions of the part (rotation, translation, fixed, undetermined, etc.) and their constraints. The constraints are the relations between pairs of objects found in the Local Interactions Analysis (parallel_gears(A,B), prism(A,B), etc.). A dummy node is introduced to represent the 'source' of the input motion. All nodes are initially labeled fixed, fixed_axis or undetermined, except for the input node, which is labeled with the initial motion. Figure 3 shows the constraint network for the mechanism in Figure 1.

Figure 3: The initial constraint network for the Driver System, with labels fixed_axis(Gear1, O1), fixed_axis(Gear2, O2) and Fixed.

The input motion is propagated by starting at the input node and examining all its successors. The new labeling of a successor is determined by intersecting the label found in it (including its bounds) with the possible motion of that object, as found in the constraint. In the example of Figure 3, the input motion is rotation(I, O, θ), the initial labeling of Gear1 is fixed_axis(Gear1, O), and the relation between I and Gear1 is attached(I, Gear1) (i.e. motion(I) <=> motion(Gear1)). Since motion(I) is rotation(I, O, θ) and motion(Gear1) is rotation(Gear1, O, θ), the intersection of rotation(Gear1, O, θ) and fixed_axis(Gear1, O) yields the new label, rotation(Gear1, O, θ), for Gear1. The intersection between two possible motions is defined by intersection rules such as the following: let L be the label of an object and rotation(A, O, θ_A) its possible motion. Then their intersection is:

    if L = rotation(A, O, θ_L) or L = fixed_axis(A, O)
       then rotation(A, O, θ_A) and restrictions(θ_A) ∩ restrictions(θ_L)
    else if L = rotation(A, O', θ_L), where O' ≠ O, then ∅
    else if L = translation(A, O', Xa) then ∅
    else if L = undetermined(A) then rotation(A, O, θ_A) and restrictions(θ_A)
    else if L = fixed(A) then fixed(A)

The algorithm propagates the motion in a breadth-first manner to all nodes. If a label modification occurs for a node, the node and all its neighbors are added to a list of nodes to be updated. The algorithm stops when this list is empty, i.e. when the node labels cannot be modified any further. For each part, the label represents the possible motion of the object and its relation (via parameters) to the motions of the neighboring parts. The output is a single global state that contains the behavior of each part. This algorithm has been implemented in Franz Lisp.
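A minimal Python sketch of the propagation loop (a worklist version with a drastically simplified label representation; all names are illustrative, and the parameter restrictions of the intersection rules above are omitted):

    from collections import deque

    # Each directed edge carries a function mapping the source part's
    # motion to the motion it induces on the target part (the pairwise
    # relation found by the Local Interactions Analysis).
    def attached(motion):                  # same axis, same angle
        return motion

    def gear_pair(ratio, target_axis):     # theta_B = -theta_A * ratio
        def f(motion):
            _kind, _axis, theta = motion
            return ("rotation", target_axis, -theta * ratio)
        return f

    EDGES = {("I", "Gear1"): attached,
             ("Gear1", "Gear2"): gear_pair(20 / 10, "O2")}
    NEIGHBORS = {"I": ["Gear1"], "Gear1": ["Gear2"], "Gear2": []}

    def propagate(source, motion):
        labels = {n: ("undetermined",) for n in NEIGHBORS}
        labels[source] = motion
        queue = deque([source])
        while queue:                       # breadth-first worklist
            node = queue.popleft()
            for nxt in NEIGHBORS[node]:
                new = EDGES[(node, nxt)](labels[node])
                if labels[nxt] != new:     # label changed: re-examine nxt
                    labels[nxt] = new
                    queue.append(nxt)
        return labels

    print(propagate("I", ("rotation", "O1", 90.0)))
    # -> Gear1 rotates by 90 about O1; Gear2 by -180 about O2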
We have provided an algorithm for the global analysis of fixed axis mechanisms based on a constraint propagation, label inferencing technique and a heuristics for the analysis of movable axis mechanisms. This algorithm can be extended to deal with mecha- nisms that have complex pairs by building a set of global states consisting of the cross product of local states. Tran- sitions between global states are constructed as the com- bination of the local transitions. To find the behavior of each part in a global state, each global state is analyzed using the algorithm described above. Some global states and transitions will be detected as infeasible, and thus be deleted from the global state graph. I wish to thank Sanjaya Addanki and Ernest Davis for helping me clarify the ideas presented in this paper. Different global states can also be produced by changes in the topology of the mechanism when new con- tacts between parts are created or when old ones disap- pear. After the motion propagation algorithm has been executed, topological changes can be detected by comput- ing the motion envelope (all the positions in space that an object occupies while moving) of each part. If two or more motion envelopes intersect, a new contact is created. This means that the bounds of the motions of the parts must be updated by propagation in the constraint network. The new contacts are analyzed locally as new pairs are cre- ated and a new constraint graph is built to correspond to the new global state. The transition between the current global state and the new one is specified as a condition on the positional parameters of the objects that came into contact. The resulting global state diagram is similar to the state diagram produced in [De Kleer and Brown 19851 and to the graph of transitions in [Forbus 19851 used to explain the behavior of a physical system. We are presently working on a formalization of the problem in terms of a decomposition of the configuration space into a set of disjoint connected regions that will re- flect the possibility of simple motions. This decomposition can later be used to construct the state diagram of the mechanism for a given initial position and input motion. cknowledgments 0 echanisms with movable axes eferences [De Kleer and Brown 19851 Johan de Kleer and John. S. Brown. A Qualitative Physics based on Confluences In Qualitative Reasoning about Physical Systems D. Bobrow editor, MIT Press 1985 1985. [Forbus 19851 K en or F b us. Qualitative Process Theory In Qualitative Reasoning about PhysicaZ Systems D. Bo- brow editor, MIT Press 1985 [Lozano-Perez 19831 Tom&s Lozano-Perez. Spatial Plan- ning: A Configuration Space Approach IEEE Truns- actions on Computers Vol. C-32, Number 2, pp 108- 120, 1983. The algorithm described in the previous section cannot be generalized for mechanisms with movable axes since the combination of two simple motions (rotation, trans- lation ,etc,) can result in a complex motion. Neverthe- less, the algorithm can still be applied to the parts of the mechanism with fixed axis, isolating the parts that have movable axes. A possible criteria for movable axes is the Kutzbach criteria [Shigley and Uicker 19801, origi- nally developed to determine the mobility of linkages. This criteria gives a lower bound on the number of degrees of freedom a mechanism has, based solely on the number of links, the number of higher pairs and the number of lower pairs. 
V. Conclusions and Future Work

We have presented a two-step algorithm for the analysis of mechanical devices. The first step computes the possible relative motions of pairs of objects initially in contact, producing a functional description of each kinematic pair. We have provided an algorithm for the global analysis of fixed-axis mechanisms, based on a constraint-propagation, label-inferencing technique, and a heuristic for the analysis of movable-axis mechanisms.

This algorithm can be extended to deal with mechanisms that have complex pairs by building a set of global states consisting of the cross product of the local states. Transitions between global states are constructed as the combination of the local transitions. To find the behavior of each part in a global state, each global state is analyzed using the algorithm described above. Some global states and transitions will be detected as infeasible, and thus be deleted from the global state graph.

Different global states can also be produced by changes in the topology of the mechanism, when new contacts between parts are created or when old ones disappear. After the motion propagation algorithm has been executed, topological changes can be detected by computing the motion envelope (all the positions in space that an object occupies while moving) of each part. If two or more motion envelopes intersect, a new contact is created. This means that the bounds of the motions of the parts must be updated by propagation in the constraint network. The new contacts are analyzed locally as new pairs, and a new constraint graph is built to correspond to the new global state. The transition between the current global state and the new one is specified as a condition on the positional parameters of the objects that came into contact. The resulting global state diagram is similar to the state diagram produced in [De Kleer and Brown 1985] and to the graph of transitions in [Forbus 1985] used to explain the behavior of a physical system.

We are presently working on a formalization of the problem in terms of a decomposition of the configuration space into a set of disjoint connected regions that reflect the possibility of simple motions. This decomposition can later be used to construct the state diagram of the mechanism for a given initial position and input motion.

Acknowledgments

I wish to thank Sanjaya Addanki and Ernest Davis for helping me clarify the ideas presented in this paper.

References

[De Kleer and Brown 1985] Johan de Kleer and John S. Brown. A Qualitative Physics Based on Confluences. In Qualitative Reasoning about Physical Systems, D. Bobrow, editor, MIT Press, 1985.

[Forbus 1985] Ken Forbus. Qualitative Process Theory. In Qualitative Reasoning about Physical Systems, D. Bobrow, editor, MIT Press, 1985.

[Lozano-Perez 1983] Tomas Lozano-Perez. Spatial Planning: A Configuration Space Approach. IEEE Transactions on Computers, Vol. C-32, Number 2, pp. 108-120, 1983.

[Reuleaux 1876] Franz Reuleaux. The Kinematics of Machinery: Outline of a Theory of Machines. Published in 1876; reprinted by Dover Publications Inc., 1963.

[Schwartz et al., 1987] Jacob T. Schwartz, Micha Sharir and John Hopcroft. Planning, Geometry and Complexity of Robot Motion. Ablex Series in Artificial Intelligence, Ablex Publishing Co., 1987.

[Shigley and Uicker 1980] J.E. Shigley and J. Uicker. Theory of Machines and Mechanisms. McGraw-Hill Inc., 1980.
CRITICAL HYPERSURFACES AND THE QUANTITY SPACE

Mieczyslaw M. Kokar
Northeastern University
360 Huntington Avenue
Boston, Massachusetts 02115
KOKAR&[email protected]

ABSTRACT

Qualitative reasoning about physical processes is based on the notion of "quantity space" [Forbus, 1984a, 1984b]. The question is how to construct a quantity space for a particular physical process. One line of research is to establish a set of so-called "landmark points" by selecting some values of the continuous physical parameters characterizing the physical process under consideration [Kuipers, 1985a, 1985b]. The landmark points are to delimit operating regions of qualitative processes. In most practical situations it is impossible to find a finite set of such points. This is because the operating regions of physical processes are delimited not by some specific values of physical parameters, but by some hypersurfaces in the cross-product of the parameters; they are called here "critical hypersurfaces". The paper presents a relatively complete methodology for establishing critical hypersurfaces.

1. INTRODUCTION

Qualitative reasoning about physical processes has gained much attention in the last several years. Not only does this reflect a general spirit of AI, which is symbolic reasoning, but it complements the existing methodology of modelling and simulation of physical processes, which has been limited to quantitative analysis of numerical models. Experience shows us that in many cases quantitative simulations are not feasible because of the very high complexity of the quantitative models. Qualitative simulation can then come to the rescue with methods that generate less specific results, but which are feasible. This is not the only reason for using qualitative simulations. In many situations quantitative simulations are feasible, but not necessary. If we are interested in whether a particular physical parameter is going to stay within its allowable range following some changes in the controls, then we should try to answer just this question, and not try to calculate the exact value of this parameter. This exact value would be discarded anyway, and only the qualitative information that "the parameter will (or will not) stay within the range" will be utilized to make a control decision. In such a case, why should we perform the full quantitative analysis when several simple logical operations might do?

On the other hand, one should not go to the other extreme and try to solve all problems with qualitative methods only. Trying to resolve all inequalities in predicting the behavior of a physical process would inevitably lead to simulations of differential equations and real numbers. And this would definitely lead to higher-complexity problems than when using classical quantitative methods, even though defining a quantitative parameter requires a measurement method that is defined in terms of qualitative operations. Ultimately, there is a need for understanding the relations between qualitative and quantitative methodologies. Integration of quantitative and qualitative methodologies is one of the key issues of this paper.

In the qualitative simulation methodology, states of processes are characterized by parameters which can take on a limited number of nominal values. These values are usually related to some quantitative parameters.
The relationships among the qualitative parameters are described in terms of the "quantity space" [Forbus, 1984a, 1984b]. When the relations among these parameters change, some "processes" are started or stopped. Kuipers [Kuipers, 1985] uses terms like "critical points", "landmark points", or "characteristic points" to describe some specific values of physical parameters. The qualitative simulation methodology involves moving from one qualitative state to another, or from one set of critical points characterizing a given physical system to another.

The question is, however, how do we establish such a quantity space? In the literature on qualitative simulations the critical points are selected on the basis of some semantics related to the values of physical parameters. Examples of those are boiling temperature, melting temperature, etc. The qualitative processes start or stop when the inequalities between the physical parameters, like temperature, and those critical points change their signs. In fact, this is a very restricted approach. These processes usually depend not only on the particular physical parameters, but on the relationships among them. For instance, a water temperature of 100 C does not necessarily mean the
Therefore, one could try to find two critical velocities, v1 and v2, delimiting the three flow regions. Unfortunately, this can work only for a very limited range of situations, namely when the pipe diameter D, fluid density ρ, and fluid viscosity η are constant. For instance, for water at room temperature in a 1-cm-diameter pipe, turbulent flow occurs above about 0.2 m/s, while for air the critical speed is of the order of 4.0 m/s [Young, 1964]. Thus the critical velocity in the latter case is 20 times higher than in the former one. This clearly shows that to reason about qualitative states of physical processes one cannot restrict the set of notions to critical points only.

In hydromechanics the transition from laminar to turbulent flow is characterized in terms of the so-called "Reynolds number". The Reynolds number (usually denoted R) is defined as a function of the above-referenced parameters of density ρ, average velocity v, pipe diameter D, and viscosity η as:

    R = ρ·v·D/η.

It is found experimentally that laminar flow occurs whenever R is less than about 2000. When R is greater than about 3000, the flow is nearly always turbulent, and in the region between 2000 and 3000 the flow is unstable, changing from one form of flow to the other (here we call it "transitional"). The relationships R = 2000 and R = 3000 describe two hypersurfaces in the space defined by the cross-product of the continuous parameters ρ, v, D, and η. The two hypersurfaces divide the space into three operational regions: laminar, turbulent and transitional.

This example shows that critical hypersurfaces play an important role in qualitative reasoning about physical processes. There are, however, three major problems with this approach:

- how to know which physical parameters should be considered in the search for critical hypersurfaces,
- how to know what kind of relationships should be taken into account in the search for critical hypersurfaces,
- how to determine the critical points of the function that describes critical hypersurfaces (for instance, how do we know that the critical values of R are 2000 and 3000).

These three problems will be referred to as the complete relevance problem, the relationship problem, and the semantics problem, respectively. They are discussed in the following sections.

3. SEMANTICS OF CRITICAL HYPERSURFACES

In this section we concentrate on the third of the problems listed in Section 2. Assume for a while that we know both the parameters and the form of the relationship that describes the hypersurfaces. We need to find a set of critical points of the function describing the relationship. In the literature on qualitative reasoning several semantics rules are known: sign semantics, derivative semantics [de Kleer and Brown, 1982], [Forbus, 1984a, 1984b], Kuipers' QSIM semantics [Kuipers, 1985a, 1985b], order-of-magnitude semantics [Raiman, 1986]. For the example presented in the previous section none of these seems right: neither the Reynolds number R nor its derivative changes its direction of growth, yet the flow changes from laminar to turbulent. Therefore some other approach is required.

One possibility is to find a relationship between a parameter, like the Reynolds number, and some observable qualitative parameter for which the semantics of qualitative states is explicit. In our example the character of flow (laminar, turbulent, or transitional) can be determined by directly observing the profile of the speed of flow.
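Written out in Python (the fluid properties are rough room-temperature values, assumed for illustration), the regime is decided by the position of the state (ρ, v, D, η) relative to the hypersurfaces R = 2000 and R = 3000, and the per-fluid critical velocity falls out of the single surface exactly as described above:

```python
def reynolds(rho, v, D, eta):
    """Reynolds number R = rho*v*D/eta for pipe flow."""
    return rho * v * D / eta

def regime(R):
    """Position of the state relative to the hypersurfaces R = 2000, R = 3000."""
    if R < 2000.0:
        return "laminar"
    if R > 3000.0:
        return "turbulent"
    return "transitional"

D = 0.01  # 1-cm-diameter pipe
water = dict(rho=1000.0, eta=1.0e-3)  # approximate SI properties
air = dict(rho=1.2, eta=1.8e-5)

for name, f in [("water", water), ("air", air)]:
    v_crit = 2000.0 * f["eta"] / (f["rho"] * D)  # velocity on the R = 2000 surface
    print(name, round(v_crit, 3), regime(reynolds(f["rho"], 1.0, D, f["eta"])))
# water: v_crit = 0.2 m/s; air: v_crit = 3.0 m/s (same order as the ~4 m/s
# quoted above): no single critical velocity works for both fluids, but the
# single hypersurface R = 2000 does.
```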
As a matter of fact, that is how the relationship defining the Reynolds number was discovered. The drawback of this approach is that the semantics is specific to a particular physical process; it is hard to apply it as a general method.

Another approach would be to find a continuous physical parameter functionally dependent on the value of the function defining the hypersurfaces, for which we can utilize one of the general semantics for qualitative states (signs, derivatives, or orders of magnitude). Once critical points have been established for the new parameter, we can transform them into critical points of the value of the function defining the hypersurfaces. In the case of the critical points of the Reynolds number we could utilize the dependency of the rate of heat transfer on the Reynolds number. It is a known fact that the rate of heat transfer from/to the liquid in a pipe to/from the outside strongly depends on the Reynolds number; it is much more intensive for R > 3000 than for R < 2000. If we take the heat exchange rate as an indicator, then we can utilize, for instance, the semantics of the sign of the derivative, or we can apply Kuipers' semantics to the derivative of the heat transfer rate.

In all of these cases we suggested using a parameter which depends on the value of the function characterizing the hypersurfaces. The choice of the parameter depends on which process we are interested in. When we were interested in fluid flow, we took the speed profile. In the second example our attention was concentrated on heat transfer, or more precisely, on the rate of heat transfer; therefore we selected the characteristic parameter for that process. This seems to be a reasonable methodology.

Mathematically, we are looking for a hypersurface in the cross-product of some parameters x1, ..., xm, described as f(x1, ..., xm) = C. A critical hypersurface is defined by fixing the value of C. If we are interested in a parameter Z which depends on C, i.e., Z = g(C), and if we have some semantics (critical points) for Z, then we can determine critical points for C by applying the inverse of the dependency g to the critical points. In our example, Z represents the heat transfer rate and C represents the Reynolds number. If we are able to determine some critical points for the heat exchange rate, then we can transfer them onto the Reynolds number.
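A small numerical sketch of this transfer, with an invented monotone stand-in for the dependency g (the exponent below is illustrative, not taken from the paper): critical points stated for Z are pulled back to critical values of C by inverting g with bisection.

```python
def invert_monotone(g, z_target, lo, hi, tol=1e-9):
    """Find C in [lo, hi] with g(C) = z_target, for increasing g."""
    while hi - lo > tol * max(1.0, abs(hi)):
        mid = 0.5 * (lo + hi)
        if g(mid) < z_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative stand-in for "heat transfer rate as a function of Reynolds
# number"; any increasing g works for the pull-back, this one is made up.
g = lambda R: R ** 0.8

# Suppose the semantics applied to Z yields critical points g(2000), g(3000);
# pulling them back through g recovers the critical values of C = R.
for z_crit in (g(2000.0), g(3000.0)):
    print(round(invert_monotone(g, z_crit, 1.0, 1e6), 1))
# -> 2000.0 and 3000.0
```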
4. THE RELATIONSHIP PROBLEM

In this section we concentrate on the forms of the relationships that describe critical hypersurfaces. We assume now that all the relevant physical parameters are known; the question is how to combine them into a formula that defines a critical hypersurface in the cross-product of the parameters.

The interpretation of critical hypersurfaces is such that they describe boundaries between two regions of qualitatively different behaviors of a physical system. In other words, the physical system behaves similarly within a region, even though the quantitative parameters characterizing the system take on different values at different points within the region. Similar behaviors of physical systems are the subject of similarity (or similitude) theory (e.g., [Birkhoff, 1960], [Drobot, 1953]). Similarity theory gives rules for combining physical parameters into monomials called similarity numbers (similarity modules, dimensionless numbers). A physical system is characterized by one or more similarity numbers. Two states of a system are called similar if all the similarity numbers for the two states are equal.

The Reynolds number referenced in the previous sections is one example of such a similarity number; it characterizes the process of flow of liquids. A set of states for which all the similarity numbers are equal is a hypersurface. The cross-product of physical parameters describing a particular process is subdivided by the relation of similarity into classes (hypersurfaces). We are concerned only with some special classes, the ones that we call critical hypersurfaces.

In this paper we present a method for determining similarity numbers from only the knowledge of the physical parameters uniquely characterizing the process, and their dimensions. The rules for doing this are given by the theory of dimensional analysis ([Birkhoff, 1960], [Drobot, 1953], [Whitney, 1968]). In order to figure out the forms of the similarity numbers one needs to analyze the dimensions of all physical parameters involved. Dimensions are expressed in terms of units of measure and exponents, e.g., 5 kg^1 m^-2 s^-2. A number of physical parameters from the set of relevant parameters for the particular process are chosen as a so-called "dimensional base". A dimensional base can consist of at most as many parameters as there are units of measure in use (usually three). A set of parameters can constitute a dimensional base if the determinant of the exponent matrix for these parameters is not null. Usually several subsets of the parameters involved satisfy this condition.

After selecting a base we take each parameter and combine it with the base by multiplying or dividing it by elements of the base raised to real-number powers. Those powers are selected in such a way that the resulting monomial is dimensionless. This procedure is repeated for each parameter not in the dimensional base, which means that we can generate as many similarity numbers as there are parameters remaining in the set after removing the ones that constitute the base.

In our example the relevant parameters are expressed in terms of units of mass (kg), length (m), and time (s) as:

    v = x_v kg^0 m^1 s^-1,   η = x_η kg^1 m^-1 s^-1,
    ρ = x_ρ kg^1 m^-3 s^0,   D = x_D kg^0 m^1 s^0.

The numbers x_v, x_η, x_ρ, and x_D represent the numerical scales of the particular parameters.

Suppose ρ, η, and D have been selected as the dimensional base. This can be done because the determinant of the following matrix of exponents is not null:

    | 1  -3   0 |
    | 1  -1  -1 |
    | 0   1   0 |

The remaining parameter, v, can be combined with the base into a dimensionless monomial in the following way: v·ρ^1·η^-1·D^1. Note that this is the only possible combination of the exponents, given a dimensional base. This is how we derive the functional formula that represents the Reynolds number. Selecting a different dimensional base would result in a slightly different formula. Fortunately, for the purpose of similarity it does not really matter which base has been chosen: the states that are similar in one dimensional base remain similar in any other base that we choose. More formally, similarity of the states of a physical process is invariant with respect to the choice of dimensional base.
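The derivation can be mechanized as linear algebra. In the Python sketch below (using NumPy), the determinant test selects a valid base and the exponent solve reproduces the Reynolds monomial; only the procedure, not the code, comes from the paper.

```python
import numpy as np

# Dimension vectors over (kg, m, s) for the pipe-flow parameters.
dims = {
    "v":   np.array([0.0,  1.0, -1.0]),   # velocity
    "rho": np.array([1.0, -3.0,  0.0]),   # density
    "eta": np.array([1.0, -1.0, -1.0]),   # viscosity
    "D":   np.array([0.0,  1.0,  0.0]),   # diameter
}

base = ["rho", "eta", "D"]
M = np.column_stack([dims[p] for p in base])
assert abs(np.linalg.det(M)) > 1e-12, "chosen parameters do not form a base"

# Find exponents (a, b, c) making v * rho^a * eta^b * D^c dimensionless:
# dims[v] + M @ (a, b, c) = 0.
a, b, c = np.linalg.solve(M, -dims["v"])
print({"rho": round(a, 6), "eta": round(b, 6), "D": round(c, 6)})
# -> {'rho': 1.0, 'eta': -1.0, 'D': 1.0}, i.e., the Reynolds number rho*v*D/eta
```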
5. THE COMPLETE RELEVANCE PROBLEM

The last main problem with establishing a set of critical hypersurfaces is how to determine all relevant physical parameters characterizing a given physical phenomenon. This seems to be the most difficult problem. It is beyond the scope of qualitative simulations, but it is closely related to qualitative reasoning, as it is one of the steps that lead to establishing operating regions for qualitative processes.

This problem has been intensively investigated in modelling and simulation, in systems science, and in the area of artificial intelligence as well. In any approach to this problem one must accept some kind of "closed world assumption". In this paper we take a goal-oriented approach in which the closed world assumption is closely related to the goal of the process under consideration. This is manifested in the way the relevant parameters are selected: only those parameters are selected which can uniquely determine the value of the parameter that characterizes the behavior of the system. Mathematically, we are looking for a functional dependency between a parameter explicitly chosen as a characteristic of the behavior of the system and a set of arguments of this function. A set of parameters that satisfies the requirement of functionality is a complete set of relevant parameters.

Essentially, there are three main ways of determining the relevant parameters for a given physical process. One of them is the existing knowledge of the process being modelled. In many cases this knowledge is available and all the relevant parameters can be listed by experts in the given field; the only problem is then the qualitative analysis of the possible behaviors of the process. In more difficult cases the list of the relevant parameters is not readily available, or at least the experts in the given field cannot come to a consensus on it. The approach in such cases consists of listing all the suspected hypothetical parameters as candidates.

6. A METHOD FOR ESTABLISHING CRITICAL HYPERSURFACES

In this section we summarize the results of the previous three sections by describing several steps in which critical hypersurfaces can be established.

1. Determine the characteristic parameter (goal parameter, output parameter, dependent parameter) of the process which is to be modelled.
2. Determine what parameters are known to be relevant to this particular characteristic parameter. Use expert knowledge to this aim.
3. Collect some experimental data: measurements of the characteristic parameter for a number of combinations of values of the relevant parameters.
4. Analyze the completeness of the set of relevant parameters (a toy data-driven test of this condition is sketched after this list). If the completeness condition is fulfilled, then analyze the redundancy of some of the parameters.
5. If the set of parameters is not complete, then use one of the methods for generating descriptions of relevant parameters.
6. Using the methods of dimensional analysis, derive a set of similarity numbers as functions of the physical parameters.
7. Using one of the available semantics (e.g., signs of derivatives), determine critical values of the derived functions.
8. The expressions equating the monomials representing the similarity numbers with the critical values constitute descriptions of the critical hypersurfaces.
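A toy version of the completeness test of step 4, under the simplifying assumption that completeness is checked as a functional dependency on finite, exactly repeated measurements (real data would need binning and noise tolerance); the records below are invented for illustration:

```python
def is_complete(records, candidates, target, tol=1e-6):
    """Crude functional-dependency test: the candidate parameters are
    complete only if equal argument tuples never map to different values
    of the characteristic parameter."""
    seen = {}
    for r in records:
        key = tuple(r[p] for p in candidates)
        y = r[target]
        if key in seen and abs(seen[key] - y) > tol:
            return False
        seen[key] = y
    return True

# Toy data: flow regime (0 = laminar, 1 = turbulent) for water and air at the
# same velocity and diameter.
data = [
    {"v": 0.5, "D": 0.01, "fluid": "water", "regime": 1},
    {"v": 0.5, "D": 0.01, "fluid": "air",   "regime": 0},
]
print(is_complete(data, ["v", "D"], "regime"))           # False: v, D not enough
print(is_complete(data, ["v", "D", "fluid"], "regime"))  # True on this sample
```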
7. CONCLUSIONS

One of the most important problems in qualitative reasoning is how to establish an appropriate quantity space in which to conduct the simulations, or in other words, how to establish operating regions for qualitative processes. One possible way is to apply some semantics to the quantitative parameters characterizing the physical system under consideration, resulting in a set of critical points for every physical parameter involved. A set of critical points is applicable in some limited situations only, namely when some relevant parameters remain constant.

The objective of this paper is to extend this approach to a wider domain of situations, one that accounts for the variability of all relevant parameters. The generalization of the methodology is achieved through the introduction of the notion of critical hypersurfaces in place of critical points. These critical hypersurfaces delimit the operating regions of qualitative processes. The paper presents a relatively complete methodology for finding critical hypersurfaces. The methodology is based on the theory of dimensional analysis. A part of this methodology is implemented in the system for discovery of physical parameters called COPER.

REFERENCES

[Birkhoff, 1960] Birkhoff, G. Hydrodynamics: A Study in Logic, Fact and Similitude. Princeton University Press, Princeton, 1960.
[de Kleer and Brown, 1982] de Kleer, J., and Brown, J. S. Foundations of Envisioning. Proceedings of the National Conference on Artificial Intelligence AAAI-82, 209-212, 1982.
[Drobot, 1953] Drobot, S. On the foundations of dimensional analysis. Studia Mathematica, 14, 84-89, 1953.
[Falkenhainer and Michalski, 1986] Falkenhainer, B., and Michalski, R. S. Integrating quantitative and qualitative discovery: The ABACUS system. Machine Learning, 4, 1986.
[Forbus, 1984a] Forbus, K. D. Qualitative Process Theory. Artificial Intelligence, 24, 85-168, 1984.
[Forbus, 1984b] Forbus, K. D. Qualitative Process Theory. Technical Report 789, MIT Artificial Intelligence Laboratory, 1984.
[Forbus, 1986] Forbus, K. D. Interpreting measurements of physical systems. Proceedings of AAAI-86, Fifth National Conference on Artificial Intelligence, Philadelphia, 113-117, 1986.
[Kokar, 1986a] Kokar, M. M. Determining Arguments of Invariant Functional Descriptions. Machine Learning, 1, 1986.
[Kokar, 1986b] Kokar, M. M. Discovering functional formulas through changing representation base. Proceedings of the Fifth National Conference on Artificial Intelligence, Philadelphia, PA, 1986.
[Kuipers, 1985a] Kuipers, B. J. Qualitative Simulation of Mechanisms. MIT Laboratory for Computer Science, TM-274, Cambridge, MA, 1985.
[Kuipers, 1985b] Kuipers, B. J. The Limits of Qualitative Simulation. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Los Angeles, 128-136, 1985.
[Langley, 1981] Langley, P. Data-Driven Discovery of Physical Laws. Cognitive Science, 5, 31-54, 1981.
[Monin and Yaglom, 1973] Monin, A. S., and Yaglom, A. M. Statistical Fluid Mechanics: Mechanics of Turbulence. The MIT Press, Cambridge, MA, and London, England, 1973.
[Raiman, 1986] Raiman, O. Order of Magnitude Reasoning. Proceedings of the National Conference on Artificial Intelligence, Philadelphia, PA, 100-104, 1986.
[Whitney, 1968] Whitney, H. The Mathematics of Physical Quantities, Parts I and II. American Mathematical Monthly, pp. 115-138 and 227-256, 1968.
[Young, 1964] Young, H. D. Fundamentals of Mechanics and Heat. Second Edition, McGraw-Hill Book Company, 1964.
Abstraction by Time-Scale in Qualitative Simulation

Benjamin Kuipers
Department of Computer Sciences
University of Texas at Austin
Austin, Texas 78712

Abstract

Qualitative simulation faces an intrinsic problem of scale: the number of limit hypotheses grows exponentially with the number of parameters approaching limits. We present a method called Time-Scale Abstraction for structuring a complex system as a hierarchy of smaller, interacting equilibrium mechanisms. Within this hierarchy, a given mechanism views a slower one as being constant, and a faster one as being instantaneous. A perturbation to a fast mechanism may be seen by a slower mechanism as a displacement of a monotonic function constraint. We demonstrate the time-scale abstraction hierarchy using the interaction between the water and sodium balance mechanisms in medical physiology, an example drawn from a larger, fully implemented program. Where the structure of a large system permits decomposition by time-scale, this abstraction method permits qualitative simulation of otherwise intractably complex systems.

1 The Problem of Scale

Qualitative simulation is a promising method for reasoning with incomplete knowledge about the structure and behavior of physical systems [de Kleer and Brown, 1984; Forbus, 1984; Kuipers, 1984, 1985, 1986]. The structure of a system is described in terms of a collection of continuous parameters and constraints among them. Behavior is described in terms of changes to position and direction in qualitative quantity spaces. Such a constraint model may be derived from a component-connection description [de Kleer and Brown, 1984], from a process-view description [Forbus, 1984], or be given as part of the problem-solver's model of the domain [Kuipers, 1984; Kuipers and Kassirer, 1984]. The advantage of these qualitative reasoning methods is their ability to express and reason with incomplete knowledge of functional relationships. For example, one may say that wind resistance increases monotonically with velocity, without needing to know or assume their exact relationship: resistance = M+(velocity).

*This research was supported in part by the National Science Foundation through grants DCR-8512779 and DCR-8602665, and by the National Library of Medicine through NIH Grants LM 04374 and LM 04515.

A fundamental operation in qualitative simulation is limit analysis: when several variables are changing, and moving toward limiting values, the constraints are analyzed to determine which limits may be reached, and hence which qualitative states may come next. For the small to moderate-sized systems examined thus far in the literature, the natural constraint model is often sufficiently powerful to limit the possibilities to a reasonable set.

Unfortunately, there is an intrinsic problem of scale. When dealing with a large system, the number of changing variables moving toward limits may be very large. The set of global limit hypotheses grows exponentially with the number of variables. During a period when two variables in the system do not interact, the temporal reasoning methods of Williams [1986] can isolate them. However, we are frequently faced with large systems consisting of variables that do interact, which appear intractable to current qualitative reasoning methods.

Numerous examples throughout AI and computer science demonstrate that a powerful method for handling a complex problem is to impose a modular, hierarchical structure that allows it to be solved in pieces of a manageable size.
In order to apply this method, we need to define a valid hierarchical structure that breaks a complex system into a collection of tractable mechanisms. The structure must also support a discipline for moving the focus of attention among the individual mechanisms in the hierarchy, and a mapping relation for communicating information meaningfully among the mechanisms. This paper presents one such structure.

We have encountered this problem of scale in our studies of the expert physician's knowledge of human physiology, especially the systems whereby the body regulates its sodium and water balances [Kuipers and Kassirer, 1984; Kuipers, 1985]. The examples presented in this paper will draw on our models of these physiological mechanisms, but the techniques have more general applicability to qualitative modeling and simulation of large-scale systems.

2 Time-Scale Abstraction

Looking at expert physicians for our inspiration, we observe that although the human regulatory systems are immensely complicated, the experts reason effectively about them by focusing on one aspect at a time. One important method for distinguishing closely related mechanisms within the same large system is the time-scale at which they operate.

[Figure 1: Constraint model for the Water Balance mechanism.]

For example, two closely related mechanisms in the kidney help regulate the body's sodium and water balances [Valtin, 1973].

- The water balance mechanism responds to changes in plasma water volume by adjusting water excretion, through a hormone called antidiuretic hormone or ADH. Water volume is not sensed directly, but through its effect on sodium concentration. The water balance mechanism responds to changes within a period of minutes. (Figure 1)

- The sodium balance mechanism responds to changes in the amount of sodium in the plasma by adjusting sodium excretion through a hormone called aldosterone. The amount of sodium is not sensed directly, but through its effects on water volume. The sodium balance mechanism responds to changes over a period of hours to days. (Figure 2)

Figures 1 and 2 give a graphical representation of the QSIM constraint models of the water balance and sodium balance mechanisms, respectively. The separation in time-scales of these two mechanisms allows physicians to reason about them separately. For example, in discussing the related but distinct problem of blood pressure regulation, Guyton [1981] presents graphs of the responses of eight different mechanisms, with time-scales ranging from seconds to days (Figure 3).

[Figure 2: Constraint model for the Sodium Balance mechanism.]

These observations lead us to define the concept of time-scale abstraction applied to a complex system made up of interacting equilibrium mechanisms:

If a complex system can be decomposed into equilibrium mechanisms that operate at widely separated time-scales, then a particular mechanism can view a faster one as being instantaneous, and a slower one as being constant.

When a faster mechanism views a slower one as constant, the slower one can simply be treated as a source of values for certain parameters.
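The defining rule is simple enough to state as code. In the Python sketch below, the numeric time scales (in seconds) and the separation threshold are rough values assumed for illustration; only the constant/instantaneous asymmetry comes from the paper.

```python
# A mechanism views a much slower one as constant and a much faster one as
# instantaneous; time scales here are rough orders of magnitude.
MECHANISMS = {
    "water balance":  {"time_scale": 60.0},     # responds within minutes
    "sodium balance": {"time_scale": 86400.0},  # responds over hours to days
}

def view(of, other, separation=10.0):
    """How mechanism `of` sees mechanism `other` under time-scale abstraction."""
    ratio = MECHANISMS[other]["time_scale"] / MECHANISMS[of]["time_scale"]
    if ratio >= separation:
        return "constant"         # other is much slower: a source of fixed values
    if ratio <= 1.0 / separation:
        return "instantaneous"    # other is much faster: a functional relationship
    return "peer"                 # time scales not widely separated: no abstraction

print(view("water balance", "sodium balance"))   # constant
print(view("sodium balance", "water balance"))   # instantaneous
```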
When a slower mechanism views a faster one as instantaneous, a relation among shared variables may be treated by the fast mechanism as the result of a process over time, and by the slow mechanism as a functional relationship.

[Figure 3: Time-scales of physiological processes, from [Guyton, 1981].]

Consider the relationship between the (slow) sodium and (fast) water balance mechanisms:

- The water balance mechanism (Figure 1) includes the following parameters (P stands for the plasma compartment of the body fluids, and the N in ANP stands for sodium (Na)):

      AWP     amt(water, P)                  dependent
      ANP     amt(sodium, P)                 independent
      NFWIP   net flow(water, ingest -> P)   independent

  ANP and NFWIP are independent, or "context", parameters of the water balance mechanism. The parameters AWP and ANP are shared with the sodium balance mechanism (Figure 2), where they are both dependent variables.

- From the point of view of the water balance mechanism, an externally given increase in sodium (ANP) results in the water balance moving, over some period of time, to a new equilibrium where the amount of water (AWP) is also increased (Figure 4a).

- From the point of view of the sodium balance mechanism, the relationship between ANP and AWP is seen as instantaneous, and is expressed by the monotonic function constraint, AWP = M+(ANP) (Figure 4b).

Thus, different levels of the time-scale hierarchy view the relation between two parameters in quite different ways. A structural constraint at one level is the result of an embedded process at a faster level.

3 Communicating Across Time-Scales

In order to use a hierarchical model linked by time-scale abstraction for qualitative simulation of a complex system, information must be transmitted through shared variables among mechanisms operating at different time-scales.

3.1 The Pattern of Shifting Focus

We need a discipline for shifting the focus of attention among different time-scales and for making valid use of previously derived information in subsequent computations. The two directions of shift in focus from a given mechanism require different methods.

- Faster to Slower. Given an initial perturbation to its environment, qualitative simulation predicts the resulting equilibrium state of the fast mechanism, and shifts attention to the next slower one. The final values of parameters that are shared with the slower mechanism can be treated as part of the initial state of the slower mechanism. There are also effects on the constraints, which will be treated in the next section.

- Slower to Faster. After a slower mechanism has reached equilibrium, the environment it provides for a faster mechanism may have changed. However, the faster mechanism, by definition, must have tracked the slower mechanism on its way to equilibrium. Thus, the fast mechanism is already in equilibrium, and simulation is not necessary. By combining the values of shared variables, the fact that the mechanism is in equilibrium, and other context information, a complete description of the equilibrium state of the fast mechanism can be derived by propagation.

[Figure 4: The relationship between ANP and AWP. (a) From the point of view of the Water Balance mechanism (Figure 1), a change to ANP causes a subsequent change to AWP.
(b) From the point of view of the Sodium Balance mechanism (Figure 2), the monotonic function constraint AWP = M+(ANP) requires the two parameters to change together.]

Figure 5 shows the pattern of control for a three-level time-scale hierarchy, deriving the effect of an initial perturbation throughout the system. Upward arrows initiate simulation to a new equilibrium, and downward arrows initiate propagation to a complete description of an existing equilibrium state. The algorithm is as follows. After simulating a mechanism, QSIM identifies the faster mechanisms which share parameters with the current mechanism, and propagates that information to determine the equilibrium states of the faster mechanisms. Once this is done, the slower mechanisms sharing parameters are identified. The current values of parameters shared with such a mechanism are used to define the initial state for it to be simulated. The process repeats recursively.

[Figure 5: Control of focus of attention. Each bead represents a qualitative state, so simulation produces a string of beads, and propagation of an equilibrium state produces a single bead. Changes in focus of attention take place in the sequence shown. (1) The equilibrium state of the fastest mechanism provides values for initializing a simulation of the next slower mechanism. (2) The final state of the second simulation is first used to propagate a new equilibrium state for the fastest mechanism. (3) Then values from both faster mechanisms are available to initialize the slowest mechanism. And so on.]

In order for the abstraction hierarchy to support correct simulation, control of the focus of attention must be combined with an appropriate interpretation of information from one level of the hierarchy, as viewed from another. In particular, if some change causes a fast mechanism to behave abnormally, this is viewed from the slower mechanism as a displacement of a monotonic function.
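A compact sketch of this control discipline; `simulate` and `propagate` stand in for the corresponding QSIM operations and are assumed callbacks, not the program's actual interface.

```python
def settle(mechanisms, state, simulate, propagate):
    """Drive a perturbation through a hierarchy ordered fastest to slowest,
    following the bead pattern of Figure 5: simulate each mechanism to its
    new equilibrium, then re-derive the already-tracking faster mechanisms
    by propagation before moving one level slower."""
    for i, mech in enumerate(mechanisms):
        state = simulate(mech, state)         # a string of beads: run to equilibrium
        for faster in reversed(mechanisms[:i]):
            state = propagate(faster, state)  # a single bead: equilibrium by propagation
    return state

# Dummy callbacks just record the order of operations.
log = []
sim = lambda mech, s: (log.append("simulate " + mech), s)[1]
prop = lambda mech, s: (log.append("propagate " + mech), s)[1]
settle(["fast", "medium", "slow"], {}, sim, prop)
print(log)
# ['simulate fast', 'simulate medium', 'propagate fast',
#  'simulate slow', 'propagate medium', 'propagate fast']
```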
3.2 Changing the Monotonic Function Constraints

As we have discussed, the slower sodium balance mechanism (Figure 2) includes the monotonic function constraint, AWP = M+(ANP). In addition to the monotonically increasing direction of the relationship, the constraint specifies corresponding values. In the normal situation, each parameter has a normal value, called AWP* and ANP* respectively, and the monotonic function includes the point (ANP*, AWP*). Figure 4b shows this relationship.

During the response of the sodium balance mechanism to different initial conditions, the values of ANP and AWP move along this curve. These corresponding values, and those on the other constraints, provide critical information about the possible transient and equilibrium states of the sodium balance mechanism. The faster water balance mechanism acts to move the values back to this curve if they are displaced from it.

Notice, however, that the abstracted monotonic function constraint, AWP = M+(ANP), and especially its corresponding values, also depend on the value of the context parameter NFWIP, representing the rate of water intake, which appears only in the water balance mechanism. If NFWIP is shifted to a value higher than normal, then the monotonicity of the relationship AWP = M+(ANP) is preserved, but the corresponding values are changed to (ANP*, AWP+), where AWP+ > AWP*. Figure 6 shows how this change means that the relationship has been shifted upward.

In the water and sodium balance systems, we can see how a change can propagate within the hierarchy. An externally imposed change affects the fast mechanism, say an increase to the rate of water intake, NFWIP. The external change is not visible to the slower mechanism, which has abstracted away the changed variable, NFWIP. However, QSIM determines that the change to the water balance mechanism results in a shift of the monotonic function constraint, AWP = M+(ANP). The slower mechanism adjusts to the shifted monotonic function constraint by finding a new equilibrium point. In this case, the sodium balance mechanism excretes sodium to bring the water volume, AWP, down to its normal level, AWP*, even at the cost of reducing the amount of sodium, ANP, below normal, to ANP-. (Figure 6)

Using the time-scale abstraction hierarchy, we thus derive a single qualitative prediction for the behavior resulting from increased water intake: water volume rises quickly, followed by a slower process of sodium excretion (with simultaneous water excretion) until water volume returns to normal. In this final equilibrium state, total sodium and sodium concentration are below normal. A "flat" model derived from the same set of constraints produces an intractably branching set of predicted behaviors.

[Figure 6: Normal and shifted monotonic function constraints. The sodium balance mechanism (Figure 2) moves to bring AWP back to its normal value AWP*. If the relation AWP = M+(ANP) is shifted upward, ANP will reach equilibrium at a value lower than normal, ANP- < ANP*.]

3.3 Implementation Considerations

The time-scale abstraction methods have been implemented as extensions to QSIM¹, developed and tested on a three-level time-scale hierarchy consisting of the water and sodium balance mechanisms and the Starling equilibrium mechanism governing the balance of water between the plasma and interstitial compartments [Kuipers and Kassirer, 1984]. A preliminary model of control of heart rate and output has also been developed in isolation [Kuipers and Kassirer, 1985] and is being incorporated into the hierarchy. In future work, we plan to extend the hierarchy to include the mechanisms referred to in Figure 3. The ultimate purpose of this physiological model is to support "deep model" reasoning and hypothesis testing in medical diagnosis.

The extensions required to the knowledge given to QSIM are minor:

- The time-scale ordering of the mechanisms making up a system is given explicitly. Shared variables and shifted corresponding values are computed automatically when information is mapped from one mechanism to another.

- In order to map a qualitative value from one mechanism description to another, the landmarks in the quantity space have explicitly associated meanings, such as zero, infinity, or normal, which can be matched across two symbol structures representing the same quantity space (a toy sketch of such matching follows this list).

- At the moment, with a small hierarchy, simulation continues until all related mechanisms have been considered. With a large knowledge base, a method for cutting off simulation at some lowest level of detail will be required.
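A toy illustration of the landmark matching mentioned in the second item above; the landmark names and the dictionary representation are invented, and only the idea of matching by explicitly associated meaning comes from the paper.

```python
# Landmarks carry explicitly associated meanings (zero, normal, infinity),
# so a value stated against one quantity space can be restated against another.
MEANINGS = {"0": "zero", "AWP*": "normal", "INF": "infinity"}

fast_space = ["0", "AWP*", "INF"]          # AWP landmarks in the fast mechanism
slow_space = ["0", "AWP-", "AWP*", "INF"]  # the slow mechanism's richer space

def map_value(landmark, dst_space):
    """Match a landmark into dst_space by shared meaning; otherwise report
    only the weaker, interval-valued fact that no named landmark matches."""
    meaning = MEANINGS.get(landmark)
    for lm in dst_space:
        if meaning is not None and MEANINGS.get(lm) == meaning:
            return lm
    return ("interval", landmark)

print(map_value("AWP*", slow_space))  # 'AWP*' (matched via the meaning 'normal')
print(map_value("AWP-", fast_space))  # ('interval', 'AWP-'): no shared meaning
```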
4 Conclusions

In the medical physiology domains we have discussed, the natural system appears to have a suitable modular structure for imposing a time-scale hierarchy. This is not necessarily always the case. Perrow [1984] argues that certain engineered systems such as nuclear power plants are simply too complex and highly interactive for human comprehension, especially under emergency circumstances. For some systems, we suspect that the modularity by time-scale necessary for this kind of hierarchical structure does not exist, and cannot validly be imposed.

In this paper, we have presented methods for qualitative simulation of complex systems that can be structured as time-scale hierarchies of interacting mechanisms. Another important application of time-scale abstraction, discussed in [Kuipers, 1987], is the use of the abstracted view of a process to determine the cause of a branching behavioral prediction, identifying a new distinction in the quantity space of some independent variable, and making the simulation deterministic.

We believe that these results, along with other recent developments in qualitative simulation (e.g. Williams [1986], Weld [1986], and Kuipers and Chiu [1987]), are significant steps towards robust qualitative reasoning methods capable of being applied to complex problems in the real world.

5 References

1. J. de Kleer and J. S. Brown. A qualitative physics based on confluences. Artificial Intelligence 24: 7-83, 1984.
2. K. D. Forbus. Qualitative process theory. Artificial Intelligence 24: 85-168, 1984.
3. A. C. Guyton. 1981. Textbook of Medical Physiology. Philadelphia: W. B. Saunders.
4. B. J. Kuipers. 1984. Commonsense reasoning about causality: deriving behavior from structure. Artificial Intelligence 24: 169-204.
5. B. J. Kuipers. 1985. The limits of qualitative simulation. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85). William Kaufman, Los Altos, CA.
6. B. J. Kuipers. 1986. Qualitative simulation. Artificial Intelligence 29: 289-338.
7. B. Kuipers. 1987. Qualitative simulation as causal explanation. To appear in IEEE Transactions on Systems, Man, and Cybernetics 17, No. 3, 1987; special issue on Causal and Strategic Aspects of Diagnostic Reasoning.
8. B. Kuipers and C. Chiu. 1987. Taming intractable branching in qualitative simulation. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87). Los Altos, CA: Morgan Kaufman Publishers.
9. B. J. Kuipers and J. P. Kassirer. 1984. Causal reasoning in medicine: analysis of a protocol. Cognitive Science 8: 363-385.
10. B. J. Kuipers and J. P. Kassirer. 1985. Qualitative simulation in medical physiology: a progress report. MIT Laboratory for Computer Science TM-280.
11. Charles Perrow. 1984. Normal Accidents: Living With High-Risk Technologies. New York: Basic Books.
12. H. Valtin. 1973. Renal Function: Mechanisms Preserving Fluid and Solute Balance in Health. Boston: Little, Brown.
13. Daniel S. Weld. 1986. The use of aggregation in causal simulation. Artificial Intelligence 30: 1-34.
14. Brian Williams. 1986. Doing time: putting qualitative reasoning on firmer ground. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86). Los Altos, CA: Morgan Kaufman Publishers, pp. 105-112.

Acknowledgments

My long-time medical collaborator, Dr. J. P. Kassirer, MD, of the New England Medical Center, has guided my attempts to become literate in medical physiology. He deserves major credit for the correct parts of the medical knowledge incorporated here. I am responsible for any errors.
¹As with our previous work, the program demonstrating the capabilities described in this paper is available to interested researchers. The program, named Q, is an extended version of QSIM implemented in Common Lisp.
Michael L. Mavrovouniotis and George Stephanopoulos
Laboratory for Intelligent Systems in Process Engineering
Department of Chemical Engineering, Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

Abstract

The O[M] formalism for representing orders of magnitude and approximate relations is described, based on seven primitive relations among quantities. Along with 21 compound relations, they permit expression and solution of engineering problems without explicit disjunction or negation. In the semantics of the relations, strict interpretation allows exact inferences, while heuristic interpretation allows inferences that are more aggressive and human-like but not necessarily error-free. Inference strategies within O[M] are based on propagation of order-of-magnitude relations through properties of the relations, solved or unsolved algebraic constraints, and rules. Assumption-based truth-maintenance is used, and the physical dimensions of quantities efficiently constrain the inferences. Statement of goals allows more effective employment of the constraints and focuses the system's opportunistic forward reasoning. Examples on the analysis of biochemical pathways are presented.

1. Introduction

Numerous efforts have been made to apply Qualitative Reasoning to Physical Systems [Bobrow 84]. Major difficulties encountered in the reasoning effort, particularly in engineering applications, stem from the ambiguity inherent [de Kleer and Brown 84] in the qualitative values (-, 0, +) normally used. The incorporation of inequality relations through the quantity-space notion [Forbus 84] only partially resolves the ambiguities. In engineering, apart from signs of quantities there is more partial knowledge available on rough relative magnitudes of quantities. It is thus desirable to examine ways of introducing more quantitativeness in qualitative reasoning, and to employ this type of partial knowledge.

A quantitative approach for digital circuit diagnosis [Davis 84] uses a hierarchic representation of time with several time granularities. The longest delay until quiescence at the finer level determines how many fine-grain units correspond to one coarse-grain unit, while events whose duration is shorter than the current granularity level are not represented. A similar concept in qualitative reasoning is mythical time [de Kleer 84a], a finer time granularity that can distinguish cause and effect among simultaneous events. Underlying time granularities and mythical time is the notion of different orders of magnitude in time scales. It was recently pointed out that explicit Order-of-Magnitude reasoning, not just with time scales but with all variables, is the key to successful qualitative reasoning in engineering [Raiman 86], and the FOG formal system was introduced with three basic relations:

- A Ne B: A is negligible in relation to B.
- A Vo B: A is close to B (and has the same sign as B).
- A Co B: A has the same sign and Order-of-Magnitude as B.

The system has 30 rules of reasoning with its basic relations, classical qualitative values, addition, and multiplication. Although FOG is a good initial approach, it fails in several points particularly important in engineering applications:

1. It does not provide concrete semantics. If one does not intuitively understand what "A Co B" means, there is no further explanation available.
2. Its set of rules appears arbitrary, and it is not clear how it can be extended, e.g. to exponentials or integrals.
3. It does not allow incorporation of partial quantitative information, often available in engineering applications. For example, if FOG is told that "A Vo 0.1" and "B Vo 1000" it is unable to infer the obvious "A Ne B".
4. It lumps signs and magnitudes in single relations. The relation "A Co B" carries unnecessary sign connotations: since the signs are kept track of separately anyway, why should this relation carry sign information? The engineer's intuitive Order-of-Magnitude notion does not carry such sign connotations.
5. It requires negation and disjunction to fulfill its reasoning even for very simple problems.
6. It uses knowledge only in the form of rules, and equations involving addition and multiplication.

The problem of applying qualitative reasoning in engineering we address here with the O[M] formalism for reasoning about orders of magnitude and approximations. We believe that O[M] lacks the basic faults of FOG described above. We will first describe Order-of-Magnitude relations and their semantics. After we mention the additional concepts of assignments, constraints, and rules, we will discuss how inferences in O[M] are guided and maintained. We will close with examples and a discussion of O[M]'s potential.

2. O[M] Formalism

A variable in O[M] refers to a specific physical quantity, with known physical dimensions but unknown numerical value. Knowledge about the sign (-, 0, +) of the variable is kept as assertions, termed sign specs, stored within the variable. A landmark is similar to a variable, but it has known sign and value. Variables and landmarks are collectively called quantities. Two quantities are compatible if they have the same physical dimensions. Within each quantity, there are links, each representing a compatible pair of quantities that can be interrelated. A link contains all the Order-of-Magnitude relations asserted between the two quantities, and information on where such relations can be obtained from and where they can be used (e.g. relevant constraints and rules, as we will describe later).

2.1. Primitive and Compound Relations

Order-of-Magnitude relations relate the non-negative magnitudes of quantities, regardless of their sign. Thus, there is no interference between signs and magnitudes, and reasoning with signs can be carried out with the normal qualitative reasoning principles. We introduce seven primitive irreducible binary relations among quantities, shown in Table 1.

Table 1: Primitive relations of the O[M] formalism

    O[M] RELATION     VERBAL EXPLANATION
    r1: A << B        A is much smaller than B
    r2: A -< B        A is moderately smaller than B
    r3: A ~< B        A is slightly smaller than B
    r4: A == B        A is exactly equal to B
    r5: A >~ B        A is slightly larger than B
    r6: A >- B        A is moderately larger than B
    r7: A >> B        A is much larger than B

We accept as a compound relation any implicit disjunction of two or more successive primitive relations. It should be emphasized that this restricted disjunction refers mainly to the semantics of the relations, and no syntactic disjunction is allowed. There are in total 21 compound relations. The notation for a compound relation produced from the primitives rn through rn+k is rn..rn+k. The compound relation standing for "A less than B" would thus be represented as "A <<..~< B".

The 7 primitive relations and the 21 compound relations give a set R with a total of 28 legitimate relations r1, ..., r28. This relation set allows full expressiveness without disjunction or negation. The inverse of every legitimate relation is also a legitimate relation. The negation of a legitimate relation is a legitimate relation if and only if that relation includes either << or >>. All of the 28 relations are physically meaningful and each can be given a short and intuitively appealing verbal description. They are powerful enough to express quantity-space partial ordering, all of FOG's relations, and other relations that engineers use in Order-of-Magnitude arguments. Negations of such commonsense relations are usually (but not always) expressible. For example, the relation "less than or approximately equal to", frequently used in engineering, is expressed as <<..>~, and its negation as >-..>>. The relation ≈, "roughly equal to", is expressed as ~<..>~, but its negation cannot be expressed. Table 2 shows the correspondence of O[M] relations to commonsense and FOG relations.

Table 2: O[M]-representation of relations from other systems

    CLASSICAL COMMONSENSE RELATIONS            O[M]
    less than (<)                              <<..~<
    less than or equal to (<=)                 <<..==
    greater than (>)                           >~..>>
    greater than or equal to (>=)              ==..>>
    equal to (=)                               ==
    approximately equal to (≈)                 ~<..>~
    less than or approximately equal to        <<..>~
    greater than or approximately equal to     ~<..>>
    much less than                             <<
    much greater than                          >>

    FOG RELATIONS                              O[M]
    Negligible in relation to (Ne)             <<
    Very close to (Vo)                         ~<..>~
    Comparable to (Co)                         -<..>-

2.2. Strict Interpretation Semantics

A relation A rn B is equivalent to (A/B) rn 1 and signifies an interval for the (A/B) ratio, as shown in Fig. 1. To sanction the symmetry of the relations,

    A >- B ≡ B -< A    (1)
    A >> B ≡ B << A    (2)

we impose the restrictions e3 = 1/e2 and e4 = 1/e1. To sanction the intuition that for A > B > 0,

    A - B << B ≡ A >~ B,

we further impose e3 - 1 = e1. (3)

[Figure 1: Strict interpretation of the relation A rn B. The ratio A/B falls into one of the intervals delimited by the boundary points e1, e2, 1, e3, e4, corresponding to <<, -<, ~<, ==, >~, >-, and >>.]

Under this strict semantics, the above constraints leave only one degree of freedom for the interpretation of our relations, as depicted in Fig. 2. We let the "accuracy" parameter e unspecified because it depends on the application domain. In the preliminary design of chemical processes, for example, the designer tends to think of e between 0.05 and 0.20; a physicist, on the other hand, would only consider a parameter e < 0.01. For many domains, this interval semantics (with some particular value of e) reflects the way human experts carry out their approximations and Order-of-Magnitude reasoning.

With this clear semantics there is no need for prespecified rules, since they can be derived from the intervals, which moreover allow incorporation of quantitative information. We named this interpretation strict because its solid intervals support only accurate, correct inferences. For any primitive or compound relation the corresponding interval is continuous. The intervals produced from inferences are also continuous, and the consequent relations can be expressed without disjunctions.

2.3. Heuristic Interpretation Semantics

The strict interpretation is accurate, but too strict compared to human reasoning. For example, from the relations A >~ B and B >~ C the strict interpretation can only conclude A >~..>- C, while human commonsense would aggressively conclude A >~ C.
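The interval semantics can be exercised directly. The Python sketch below encodes the strict boundaries e, 1/(1+e), 1, 1+e, 1/e implied by restrictions (1)-(3), composes two relations by multiplying their A/B intervals, and approximates the heuristic interpretation by letting each primitive's endpoints stretch by a factor of (1+e); the stretching rule is our simplification of the overlapping intervals of Figure 4, not the paper's exact boundary choice.

```python
E = 0.1
INF = float("inf")
NAMES = ["<<", "-<", "~<", "==", ">~", ">-", ">>"]
# Strict-interpretation A/B intervals with boundaries e, 1/(1+e), 1, 1+e, 1/e:
BOUNDS = [(0.0, E), (E, 1/(1+E)), (1/(1+E), 1.0), (1.0, 1.0),
          (1.0, 1+E), (1+E, 1/E), (1/E, INF)]

def compose(r1, r2):
    """A r1 B and B r2 C: the A/C ratio interval is the product of intervals."""
    (a1, b1), (a2, b2) = BOUNDS[NAMES.index(r1)], BOUNDS[NAMES.index(r2)]
    return (a1 * a2, b1 * b2)

def strict(lo, hi):
    """Smallest compound relation whose intervals cover (lo, hi)."""
    idx = [i for i, (a, b) in enumerate(BOUNDS) if not (hi <= a or lo >= b)]
    return NAMES[idx[0]] if len(idx) == 1 else NAMES[idx[0]] + ".." + NAMES[idx[-1]]

def heuristic(lo, hi):
    """Tightest single primitive whose (1+e)-stretched interval contains
    (lo, hi); this stands in for the overlapping intervals of Figure 4."""
    s = 1 + E
    ok = [i for i, (a, b) in enumerate(BOUNDS) if a < b and a/s <= lo and hi <= b*s]
    return NAMES[min(ok, key=lambda i: BOUNDS[i][1] - BOUNDS[i][0])] if ok \
        else strict(lo, hi)

lo, hi = compose(">~", ">~")   # A >~ B and B >~ C
print(strict(lo, hi))          # '>~..>-'  (the cautious strict conclusion)
print(heuristic(lo, hi))       # '>~'      (the aggressive human-like conclusion)
```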
Any mechanism that can accommodate this resutt will have to accept the risk of wrong conclusions as the price for more aggressive inferences. Note that FOG sanctions even the more aggressive inference (ACoB and BNeC) + ANeC (4) a subcase of which in our notation would be (A>-B and B<cC) + AecC (5) We feel this inference is too aggressive and error-prone, so we choose not to sanction it. The heuristic interpretation we adopt replaces the boundary points of the intervals with regions (Fig. 3). We then construct two sets of primitive intervals: A set of non-exhaustive intervals and a set of overlapping ones, shown in Fig. 4. The following heuristic inference convention is adopted: For every inference step, assume the antecedent relations to denote non-exhaustive intervals, but allow the consequent relations to denote ovetiapping intervals. Thus, when the consequents are used as antecedents at a later step their intervals are “shrunk” and therein lies the power and the risk. Note that for compound relations this mechanism refers only to the end points of the compound intervals (i.e. the compound intervals do not have “holes”). The good properties that were mentioned for the strict interpretation are preserved by this transformation, with the exception of lost guaranteed accuracy of the inferences. The heuristic inference procedure resembles closely human reasoning. In the previous example it would infer AS-B and BYC + A>-C (6) Once an inference is made, people use the consequent without reconsidering its uncertainty and would infer further AS-C and CS-D + A>-D. (7) Hence the “shrinking” of the expanded intervals when a consequent is used further. To choose the new interval boundaries, we sanction the symmetry of the relations, as before, and the following inferences: A>-B + A-BccB (8) A>-B and BYC + A>-C (9) A>-B and A>>C + B>>C. (10) The interval boundary regions in the final form are shown in Fig. 5. The exact choice of e depends on the domain of application. A very large number of inferences are valid regardless of the value of 8. Apart from inferences based on addition and subtraction, this group also includes inferences with other functions: x C< 1 + exp(x) >w 1+x (11) x ec 1 4 sin(x) >w x. (12) 2.4. Assignments, Constraints, and Rules Assignments are “solved” algebraic relations that allow some quantities to produce relations among other quantities. The left hand side of an assignment can be either a ratio (link) of two quantities, or just a single variable. The right hand side of an assignment, called expression, cannot be any arbitrary algebraic expression. It can involve only links, landmarks and numerical constants. The system attempts to automatically convert algebraic expressions to the acceptable form. The success of the automatic parsing depends on the form of the algebraic expression. Constraints are “unsolved” algebraic relations among quantities. As with assignments, there are requirements on the form of the expressions, and the system attempts automatic conversion. The first way to use constraints, is to simply “test” them, and accept or reject assumptions based on the outcome. The second is to form a set assignments by solving the constraint in all obvious ways. By “obvious” solutions we mean simply getting hold of one occurrence of a variable in the expression and solving with respect to that, regardless of its other occurrences. The O[M] system can apply automatically both approaches. Knowledge of highly empirical nature often cannot be expressed in algebraic form. 
O[M] can accept knowledge in the form of simple if-then rules without free variables. 3. Control of Reasoning We will briefly describe here how the system maintains consistency and how lt expands and prunes the inference tree. The basic strategy of O[M] is depth-first data-driven reasoning. Any new fact is first checked for redundancy, created and used immediately, regardless of whether the use of its “parent” has been completed. It invokes all possible scenarios for further reasoning: 1. From the conjunction of relations new relations are inferred and redundant ones are retracted. From the symmetry and transitivity of relations new relations are inferred. 2. For relations between a variable and a landmark, numeric transitivity is applied. The idea is that if we find another variable related to another landmark compatible to the original one we can infer a relation between the two variables. 3. When a relation can serve as the antecedent of rules, the rules are invoked. 4. When a relation (actually its link) participates in the expression of assignments or constraints, these are invoked. Applying an assignment can yield knowledge about the magnitude as well as the sign of a variable. In the domain of chemical engineering (our primary interest) there are many different kinds of variables present: temperatures, pressures, volumes, flowrates, masses, concentrations, etc. The requirement that only compatible quantities can be linked reduces I -I I-I- I I ---J-I I-1 << << -< ma< = >- >- >> Figure 3: “Fuzzy” interval boundaries for the heuristic interpretation -4 >- >> -< >- I---I I II I I _I-=-__ I I 1-1 - - << -< -< >- >- >> Figure 4: Overlapping intervals (top), and non-exhaustive intervals (bottom) for the heuristic interpr ‘etation 628 Engineering Problem Solving the search space in all inference scenarios. 3.1= Truth-Maintenance and Resolution of Contradictions Assertions can be stated as assumptions rather than known facts. They can also be stated as dependent on assuming of other assertions. For each inference step then we form the assumption set under which the conclusion is valid and allow several relations between two quantities to coexist. Assumption-based Truth-Maintenance is carried out using de Kleer’s ATMS approach [de Kleer 84b], which avoids some serious problems of other truth-maintenance systems that use dependency-directed backtracking. In ATMS there is no backtracking involved, and important assumption sets can be parsed after the main problem-solving effort. The resolution of contradictions requires more care in O[M] because with the heuristic interpretation, neighboring relations that apparently conflict may actually be both valid heuristically (since neighboring heuristic intervals are overlapping). We will delineate here the alternative ways of handling apparent contradictions. The first way is to forbid any special treatment of neighboring conflicting relations. This would cause all kinds of assumption sets and eventually the whole problem (i.e. the empty assumption set) would be marked inconsistent, without being truly so. The second way is to simply allow neighboring relations to coexist, and mark them in a special way as non-conflicting. Since they will both propagate, this aggressive strategy amounts to implicitly asserting that indeed the overlapping part of the two neighboring intervals represents the “true” relation. 
The third and most conservative way is to disclaim both relations (and mark them to avoid recurrence of the problem) and replace them by the compound relation representing their disjunction. If the initial relations are compound one need only consider the two primitive components (one from each initial relation) that are neighboring and take their disjunction. We can try to take advantage of these pseudo-contradictions in the special case where one of the quantities involved is a variable and the other a landmark. After we apply the third strategy outlined above, we can assume that the true relation was indeed in the overlapping part of the intervals, select or create another compatible landmark, and relate it to the variable by a tighter relation. 3.2. Goal Direction The search mode for O[M] is opportunistic forward chaining, but there are two ways to induce search for a particular relation. By stating that the goal is to relate two particular quantities, the user can induce additional ways to use constraints and assignments. e -2 1 --- e (l+e) --- 1 l+e l+e Whenever one of the two goal quantities occur, the system uses the other one as well (for example, it divides both sides of the constraint by that variable). Alternatively, the user may state that alternative relations between two quantities should be examined. Then, the system can create seven assumptions, one for each of the seven primitive relations and check them for consistency with available knowledge. The implementation of the OIM] system was done in Symbolics Common LISP, on Symbolics 3650 computers, running the Genera 7.0 environment. The Flavors Object-Oriented Programming system was heavily employed. All entities (quantities, relations, constraints, etc.) are implemented as objects. Each of the simple problems on which O[fvl] was tested (such as reasoning about a single equipment piece or a three-reaction segment in a biochemical pathway) was handled in at most a few seconds. We have not yet tested the system on complex problems. Having many assumptions slows the system down, because expensive set-operations are required by ATWIS. This problem can be remedied by using ordered data structures for assumption sets [de Kleer 84b]. 5. Reasoning about The expressive power of O[M] is illustrated by the following relations involving sizes of molecules of biochemical interest. e Enzymes have much larger Molecular Weight than small molecules: M, >> Ms. 0 In turn, H+ has much smaller Molecular Weight than any other compound of biochemical interest: M,, << Ms. a The molecular radius of an enzyme is only moderately larger than that of a small molecule (other than H+): rE B- rs. 0 For the molecular radius of H+: rH+ << rE and rH+ -C rs. A higher concept in the analysis of biochemical pathways is that of the rate-limiting step of biochemical pathways, the “bottleneck’” that limits the overall observable rate of the pathway. For a ltnear pathway P=(r,,r2, . . . . r,}, where rt is the irh bioreaction of the pathway, K, is the equilibrium constant of rt, Q, is the mass action ratio of rt, a consistent observable rate-limiting step HL=rL is a member of P such that the following relations are consistent with all knowledge available on P: Via [l ,L-11: K, v Q,, Vi E [L+l , n]: K, >N..B Q,, and K, >> QL. 
As a specific application, we will examine three consecutive reactions from the pathway of glycolysis, to test the hypothesis that the first step is rate-limiting. We abbreviate Fructose Diphosphate as FDP, Dihydroxyacetone Phosphate as DHAP, Glyceraldehyde Phosphate as GAP, the reduced and oxidized Nicotinamide Cofactors as NADH and NAD, Inorganic Phosphate as PI, Hydrogen Cations as H, and Diphosphoglycerate as DPG. The steps of interest are:

1. FDP -> GAP + DHAP
2. DHAP -> GAP
3. GAP + NAD + PI -> DPG + NADH + H

The knowledge we have here is:

- The algebraic definition of the mass-action ratios [e.g., for the first reaction: G1 = (GAP * DHAP) / FDP] and the catabolic reduction charge [CRC = NADH / (NADH + NAD)].
- Constant values for H, PI, CRC, and the equilibrium constants (KE1, KE2, KE3).
- All concentrations are of the order of 100 uM.
- For the reactions to proceed in the specified direction, the mass-action ratios must be smaller than the equilibrium constants [e.g., G3 <<..-< KE3].
- The goal to pursue relations among GAP, DPG, and landmark concentrations.
- The hypothesis that the first step is rate-limiting: G1 << KE1.

In this example, O[M] would use the knowledge we provided to conclude that the hypothesis that the first step is rate-limiting is inconsistent. O[M] first narrows the range of GAP, using the first two reactions. The assumption yields that GAP -< 100 uM. Propagating this through the last reaction step, O[M] obtains DPG << 100 uM, which conflicts with the given relation DPG -<..>- 100 uM.

[Figure 5: Final intervals for the heuristic interpretation]

6. Discussion

In the real world, there are always many positive and negative effects on any aggregate result. An intelligent approach to dealing with them must concentrate on deciding which of the effects are important and which are not. Only then should it attempt to determine the sign of the overall result. The O[M] formalism is aimed exactly at sorting out dominant effects.

Even in quantitative reasoning, people use order-of-magnitude arguments to reduce algebraic complexity. This is often done systematically: as terms are dropped from equations, a term of the form O(x) does the bookkeeping, denoting that the largest dropped term is "of order x". Numerical constants are not introduced in the O(x) term. This type of reasoning resembles the O[M] formalism, with the understanding that we keep track of orders O(e), and we additionally distinguish between O(e) and O(-e), but terms of order O(e^2) or higher are neglected.

The risks of the aggressive heuristic interpretation were pointed out earlier. Indeed, the worst-case behavior of the strategy is miserable, but for real-world cases it performs much better. There is an additional safeguard in normal use of the strategy: we are normally interested in the relations -<, >-, <<, and >>, which are separated by the "buffer" regions ~< and >~. It takes extremely bad cases for the error to propagate through the whole buffer region and convert, e.g., a >- to a >>.

We believe the O[M] formalism bridges the gap between traditional qualitative reasoning (with signs) and full quantitative reasoning (with numbers), as it can use mixed (quantitative and qualitative) knowledge. It will be suitable in many domains where extensive knowledge is naturally expressible in order-of-magnitude relations, especially since it is capable of handling numerical and algebraic knowledge as well.
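To illustrate the O(e) bookkeeping described above, here is a small sketch (our own illustration, not code from the paper) that multiplies polynomials in the small parameter e while discarding everything of order e^2 and higher:

def mul_truncated(p, q):
    """Multiply two polynomials in e, each given as {power: coefficient},
    keeping only the constant and O(e) terms; terms of order e^2 or higher
    are dropped, which is exactly the bookkeeping the O(x) notation does."""
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            if i + j <= 1:                     # neglect O(e^2) and beyond
                out[i + j] = out.get(i + j, 0) + a * b
    return out

one_plus_e = {0: 1, 1: 1}                      # the polynomial 1 + e
print(mul_truncated(one_plus_e, one_plus_e))   # {0: 1, 1: 2}: (1+e)^2 ~ 1 + 2e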
REFERENCES

[Bobrow 84] Bobrow, D.G. Qualitative Reasoning about Physical Systems: An Introduction. Artificial Intelligence 24:1-7, 1984.
[Davis 84] Davis, R. Diagnostic Reasoning Based on Structure and Behavior. Artificial Intelligence 24:347-410, 1984.
[de Kleer 84a] de Kleer, J. How Circuits Work. Artificial Intelligence 24:205-280, 1984.
[de Kleer 84b] de Kleer, J. Choices Without Backtracking. In Proceedings AAAI-84, pages 79-85. American Association for Artificial Intelligence, 1984.
[de Kleer and Brown 84] de Kleer, J., and Brown, J.S. A Qualitative Physics Based on Confluences. Artificial Intelligence 24:7-83, 1984.
[Forbus 84] Forbus, K.D. Qualitative Process Theory. Artificial Intelligence 24:85-168, 1984.
[Raiman 86] Raiman, O. Order of Magnitude Reasoning. In Proceedings AAAI-86, pages 100-104. American Association for Artificial Intelligence, 1986.
AN INTELLIGENT TUTORING SYSTEM FOR INTERPRETING GROUND TRACKS

Dr. Kathleen Swigger*, Lt. Col. Hugh Burns, Harry Loveland, Capt. Terresa Jackson
Air Force Human Resources Laboratory, Intelligent Systems Branch
Brooks Air Force Base, Texas 78235

Abstract

This paper describes an intelligent tutoring system for the space domain. The system was developed on a Xerox 1108 using LOOPS and provides an environment for discovering principles of ground tracks as a direct function of the orbital elements. The system was designed to teach students how to "deduce" a satellite's orbital elements by looking at a graphic display of a satellite's ground track. The system also teaches students how to use more systematic behaviors to explore this domain. Since the system is equipped with a number of online tools that were specially designed to help students better understand facts, principles, and relationships, the student is free to investigate different options and learn at his own pace.

I. Introduction

A. General Introduction

One of the nine basic operational missions for the Air Force is the continuous monitoring of the exoatmospheric arena through ground and space surveillance. NORAD, through its Space Defense Center, maintains a worldwide network that senses, tracks, and analyzes the characteristics of orbiting systems. In order to monitor and plan for satellite missions, the Air Force crew must be able to read and understand ground tracks. Ground tracks are two-dimensional displays that show the portion of the earth that a satellite covers in one orbit. If you can imagine being placed inside a satellite and being able to look directly down on the earth, then the "ground track" is that portion of the earth that you would see as you travelled through space. The ground track is a direct function of the orbital elements, so a proper understanding of these functions and of the interactions between orbital elements is critical for anyone interested in satellite operations.

One way to teach students how to deduce orbital elements from a satellite's ground track is to present the various mathematical formulas that are used to compute the orbital elements and then show how to apply these formulas to situation-specific tracks [Bate et al., 1971; Astronautics, 1985]. In contrast to this approach, we discovered that experts store ground tracks as graphical representations, indexed by feature and shape. Based on previous experience, experts learn how to detect specific features such as size, number of loops, direction, etc., and then use this information to "estimate" the orbital elements. In order to duplicate this process, we decided to build a qualitative model of how the expert predicts orbital elements, and then use this model within a microworld, or simulated environment, that allows the student to manipulate various orbital elements and observe how each of the parameters affects the shape of the ground track.

*The research reported herein was supported, in part, with funds from the Air Force Office of Scientific Research (AFOSR), and sponsored by the Air Force Human Resources Laboratory, Brooks AFB, Texas.

B. Student/Computer Interaction

As previously mentioned, the microworld for the Ground Track problem offers a number of online tools that permit students to discover relationships between orbital parameters and ground tracks.
This environment consists of an elaborate ground track display (Figure 1) and a number of interactive tools designed to encourage systematic behaviors for investigating ground-track-related problems. The student initiates a discovery activity by changing one or more orbital parameters or changing the injection parameters. This task is accomplished by positioning the cursor over the individual parameters and pressing the left mouse button to increase the values or the middle button to decrease the values. The injection point is changed by positioning the cursor over a particular point on the map and pressing the left mouse button, which automatically sets both the longitude and latitude. A student can observe the results of these changes by selecting Generate a Ground Trace from the main menu. After investigating the effects of changing different parameter values for different ground tracks, the student can advance to the Prediction window, where he can make a hypothesis regarding the particular shape of a ground track.

In the Prediction portion of the program, the system displays a list of words that describe various features of ground tracks, such as shape, size, and symmetry (Figure 2). From this list of descriptors, the student selects the words that "best" describe the current ground track under discussion. The student then tests his prediction by selecting this option from the menu and comparing his inputs to the Expert's conclusions. The student can then interrogate the Expert System by placing the cursor over any of the descriptors and pressing the left mouse button. A "Why" pop-up menu appears on the screen, which the student can mouse to receive an explanation of the expert's reason for the correct descriptor. The student can continue this iterative process of changing parameters, making predictions, and asking why until he understands the various relationships between orbital parameters and ground tracks.

After making several successful predictions, the student enters a Test environment which is designed to check the student's predictive powers by asking him to perform a task in the reverse order of the one described above. The student is shown a specific type of ground track and asked to enter a "guess-estimate" of the corresponding orbital parameters. If the student is successful, then he can continue to explore different types of ground tracks. If the student is unsuccessful, then he receives information about why his answers are incorrect.

C. Tool Description

There are three major online tools that can be used by the student to gather information and to understand concepts and principles about ground tracks. These tools are a) a History Tool that allows the students to overlay previously generated ground tracks and note relationships between parameters; b) an Orbit Window that displays a two-dimensional representation of the orbit (Figure 1); and c) a Definition/Example tool which displays factual information about the different orbital parameters (Figure 1).

The History tool is specifically designed to help students recognize relevant patterns between and among previously generated ground tracks. As the student generates various ground tracks, the system collects and stores each transaction. The student can retrieve any of this data by selecting the History option from the main menu.
A list of the past twenty ground tracks appears on the screen, from which the student can select one or more related ground tracks. The system then overlays the selected ground tracks onto a single map. Again, the student observes the results of this exercise.

For any given set of orbital parameters, the student can obtain a two-dimensional display which shows the position of the satellite in relationship to the earth. The student selects the option labelled Orbit Window and gains immediate access to this particular display. The Orbit Window is especially useful for demonstrating the relationship between the ground track and the actual orbit and for illustrating the effect of perigee on elliptical orbits.

The Definition/Example tool provides the student with factual knowledge about the various parameters. A student can obtain definitions and examples for both the orbital parameters and the shape descriptors by simply placing the cursor over the keyword in question and pressing the right mouse button. A pop-up menu appears on the screen from which the student can select either the definition or the example.

Thus, by using the available tools, a student can obtain facts about the orbital world (through the Definition/Example tool), see relationships between different ground tracks (through the History window), and understand certain principles about satellite operations (through the Orbit Window). A student has the option of using any of these tools at any time during the computer/student interaction. If, however, the student is not making sufficient progress, the system interrupts and directs the student to use a specific tool to achieve an objective.

II. Design of the System

A. Overview

The system is composed of six major parts: (1) the Expert Module, (2) the Curriculum Module, (3) the State Module, (4) the Diagnostician, (5) the Student Model, and (6) the Coach. The Expert Module includes the rules and inference procedures used to deduce shape descriptors from a set of orbital parameters. The Curriculum Module contains the major concepts associated with the ground track domain. The State Module contains a list of appropriate behaviors for exploring the microworld. The Diagnostician is a set of software procedures which evaluates the student's answer, analyzes student errors, and updates the Student and Curriculum Models. The Student Model stores the student's current state of knowledge of both ground tracks and effective tool use. The Coach contains the instructional rules that tell the system when to intervene. The Coach makes this decision based on information it receives from the Student, Curriculum, and State Models regarding the student's current state of knowledge. A more detailed description of each module is presented below.

B. The Expert Module

This module contains the rules and procedures used to deduce shape descriptors (e.g., closed-body, symmetrical, vertical, compressed, lean-right, hinge-symmetry, with loops) from a set of orbital parameters (eccentricity, period, semi-major axis, argument of periapsis, inclination). The Expert Module is invoked only when the student is making a prediction or is in the Testing mode. The Expert Module works by posting a series of goals which determine the various shape descriptors. The general problem-solving strategy employed by the Expert Module is to determine a shape descriptor by examining a specific orbital element.
If this fails, then the system looks at another shape descriptor and attempts to find its value, or looks at a combination of two or more orbital elements to see if the system can deduce a shape descriptor. For example, the Expert Module determines the symmetry shape goal by asking whether this is a circular orbit. If the orbit is classified as a circular orbit, then its eccentricity must be equal to zero. If the orbit is elliptical, then its eccentricity is not equal to zero and the Expert Module must look at the orientation descriptor, which in turn must look at the argument of periapsis. In this manner, the Expert Module can determine a set of shape descriptors for a given set of orbital parameters (and vice versa). During the process of deducing shape descriptors, the Expert Module also determines the optimal "procedure" for deriving the shape descriptors. Thus both declarative and procedural knowledge are available to the rest of the tutor.

Another function of the Expert Module is to deduce parameter descriptors (such as a Circular, Synchronous orbit) at the same time that the system is deducing the shape descriptors. These parameter descriptors are used by the Curriculum Module to determine the essential skills that are necessary to understand a given ground track. Since the rules for determining the Curriculum Skills are embedded within the Expert Module rules, we now describe the organization of the Curriculum Module.

C. The Curriculum Module

Along with knowledge about shape descriptors for ground tracks, a student must also understand how this information relates to specific orbit types. For example, an orbit which has a semi-major axis equal to 42,250 kilometers is said to be in a synchronous orbit. This term applies to all ground tracks that have a semi-major axis equal to 42,250 kilometers, regardless of the numbers that might appear for the other orbital parameters. Thus it is important that students recognize the relationship between the specific domain knowledge and the qualitative model produced by the Expert Module. The Curriculum Module, therefore, contains the specific content that is used to categorize different orbit types. This knowledge is stored in the Curriculum Module according to how it is used (and deduced) by the Expert Module. For example, the Expert System determines whether an orbit is circular or elliptical as it deduces the symmetry goal. The knowledge about shapes and orbit types is part of the Expert System.

The Expert Module also provides a very powerful tool for organizing the content areas and for determining various levels of difficulty. For example, the rules that determine the shape descriptors associated with circular orbits tend to have fewer constraints attached to them, also tend to be fired first, and, as a result, tend to be easier for the student to learn. The hierarchy of orbit types as represented in the Curriculum Module shows both the order in which the knowledge should be learned and the relationships between the pieces of knowledge. This information is used by the Coach to recommend easier problems whenever the student becomes confused.
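The symmetry-goal example above can be pictured with a small sketch (ours; the thresholds, names, and rule order are illustrative assumptions, not the system's actual LOOPS rules):

def deduce_shape_descriptors(params):
    """Derive shape descriptors from orbital parameters, mimicking the
    goal-posting style described above: try the cheapest rule first, and
    fall back to other elements when a descriptor cannot be decided."""
    descriptors = []
    if params['eccentricity'] == 0:
        descriptors.append('symmetrical')        # circular orbit
    else:
        # Elliptical: symmetry depends on orientation, which in turn
        # depends on the argument of periapsis.
        if params['argument_of_periapsis'] in (90, 270):
            descriptors.append('hinge-symmetry')
        else:
            descriptors.append('lean-right')
    if abs(params['semi_major_axis'] - 42250) < 1:
        descriptors.append('synchronous')        # a parameter descriptor
    return descriptors

print(deduce_shape_descriptors(
    {'eccentricity': 0, 'argument_of_periapsis': 0,
     'semi_major_axis': 42250.0}))   # ['symmetrical', 'synchronous']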
D. The State Module

The State Module contains a list of goals and subgoals which presumably indicate acceptable procedures for exploring the microworld. As the student proceeds through each of the states, the tutor records his or her actions. The authors have hypothesized that a student exhibits appropriate experimental behaviors if he first explores the microworld. The student explores a microworld by generating ground traces. The student then moves on to "making predictions," followed by testing and validating tests, and then generalizing these principles. Each one of these states, in turn, has separate subgoals which may or may not be met.

The tutor uses the State Module in two ways. First, if the student is performing poorly, then the Coach checks to see if the student has proceeded through each state in an appropriate manner. Second, the Coach uses the State Module to reflect different "instructional" strategies. For example, if the student is conducting experiments (defined as "making predictions"), then the system gives a higher status to using tools correctly. If the student is "testing," then the Coach will switch its strategy and try rules that check for skill deficiencies.

E. The Diagnostician

The major purpose of the Diagnostician is to analyze the student's responses and update the Student and Curriculum Models. Whenever the student enters a prediction from the Prediction Window or changes parameters from the Testing environment, the Diagnostician compares the student's answer to the Expert's answer and determines exactly which rules the student understands and does not understand. This information is then transmitted to the Student Model which, in turn, stores it for further processing.

The Diagnostician is also responsible for identifying the student's errors and ill-defined strategies. The Diagnostician does this by combining information obtained from the Expert Module, the History files, and a series of high-level rules that generate students' errors. For example, if the student enters an erroneous prediction for the orientation shape descriptor, the Diagnostician looks at the Expert Module and obtains a list of the orbital elements which were used to make a correct prediction. The Diagnostician then looks at the student's History file to see if the student is manipulating the correct parameters. If not, then the Diagnostician invokes some high-level rules that try to generate an error that matches the student's input. Some of these high-order rules are: look at the rules that are used to deduce this shape type, drop the AND portions of the rules, and change them to ORs (student bug: an overgeneralization of a rule); look at all the rules that deduce this shape type, find the "easiest" rules (i.e., rules with one or two constraints), and see if this is the parameter that the student is manipulating (student bug: if a rule works in one case, it works in all cases).

The Diagnostician also monitors the student's use of the various tools. Every time the student selects a different activity, this information is passed to the Student Model.

F. The Student Model

The Student Model contains a record of the student's current understanding of both the domain knowledge and investigative behaviors. Whenever the student tests a prediction or changes parameters in the Testing Mode, the Diagnostician sends the Student Model a list of the rules that the student understands. The Student Model maintains a series of counters for each rule indicating the number of times a rule is used appropriately, used inappropriately, or ignored (a "missed opportunity" as defined in [Carr and Goldstein, 1977]). If the missed-opportunity counter exceeds the used-appropriately counter, then the Coach recommends intervention. The system also records the number of times that an online tool is invoked. In addition to this counter, an effectiveness measure is maintained for both the History Tool and the Orbit Window. If the student demonstrates inefficient behavior as indicated by one of the effectiveness measures, then the Coach intervenes and offers advice.
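A minimal sketch of this bookkeeping (ours; the class and method names are assumptions) makes the intervention rule explicit:

from collections import defaultdict

class StudentModel:
    """Per-rule counters as described above: appropriate use, inappropriate
    use, and missed opportunities."""
    def __init__(self):
        self.counts = defaultdict(lambda: {'used': 0, 'misused': 0, 'missed': 0})

    def record(self, rule, outcome):
        self.counts[rule][outcome] += 1

    def needs_intervention(self, rule):
        c = self.counts[rule]
        return c['missed'] > c['used']   # missed opportunities dominate

model = StudentModel()
model.record('symmetry-from-eccentricity', 'missed')
model.record('symmetry-from-eccentricity', 'missed')
model.record('symmetry-from-eccentricity', 'used')
print(model.needs_intervention('symmetry-from-eccentricity'))  # True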
G. The Coach

The Coach maintains the rules and procedures that direct the teaching portion of the tutor. The Ground Track Microworld is designed for two major purposes: 1) to teach students about the relationships among orbital elements and ground tracks, and 2) to teach students how to use systematic behaviors to investigate this domain. Thus, the Coach intervenes when either one of these conditions is not satisfied. The Coach monitors the student's actions and determines when the student needs advice. Intervention occurs only when the student is making erroneous predictions or entering incorrect parameters in the Test Mode. The general, high-level teaching strategy for the Coach is as follows:

If the student has made no errors and the student is completing the curriculum materials efficiently, then record progress.
If the student has made no errors and the student is NOT completing the curriculum materials efficiently, then recommend an easier curriculum.
If the student has made an error, then a) check the ruleset for satisfaction of preconditions, b) check the ruleset for correct tool use, and c) check the ruleset for skill remediations.

The authors made the general assumption that when the student is in the Prediction Mode, the Coach should help students discover the objectives by having them use the tools correctly. If this fails, then the system should address individual skill errors. This strategy is reversed whenever the student enters the Testing state.

The Coach's overall intervention strategy is to check whether the student has completed the necessary preconditions (as determined by the values stored in the State Module). If the student has satisfied all the preconditions for an exercise, then the Coach checks the measures for effective inquiry skills. The list of effective inquiry skills, as originally defined in Shute and Glaser [1987], includes: systematic experimental behaviors, such as making sufficiently large/small increments to orbital parameters; inductive/generalization strategies, such as replicating a test or prediction; complexity of data organization, such as isolating similar traces in the History file and selecting relevant ground traces in the History file; and strategies for disconfirming evidence, such as re-doing the experiment or adjusting orbital parameters to fit a new prediction.

Every time a student enters a prediction or estimates the orbital parameters in the Test Mode, the Coach evaluates the Student Model and determines if intervention is required. If the student's effectiveness measures are low, then the Coach proposes possible remediation and offers assistance. In the event that the student fails to attain a level of proficiency after receiving instruction on effective tool use, the Coach addresses the student's domain knowledge inadequacies. At the present time, the Coach uses the information stored in both the Tool Objects and the Expert Module to advise the student concerning errors. Initially, the system suggests that the student use one of the available tools to correct his errors. If the student continues to have difficulty, then the Coach may display the definitions or examples, or explicitly state the relationships between the various parameters.
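Read as code, the high-level strategy above might look like the following sketch (ours; the function and parameter names are illustrative assumptions):

def coach_step(errors, efficient, state, preconditions_met, tools_used_well, skills_ok):
    """One decision cycle of the Coach, following the rules stated above.
    In Prediction mode tool use is checked before skills; in Testing mode
    the order is reversed."""
    if not errors:
        return 'record progress' if efficient else 'recommend easier curriculum'
    if not preconditions_met:
        return 'remediate preconditions'
    checks = [('advise on tool use', tools_used_well),
              ('remediate skill', skills_ok)]
    if state == 'testing':
        checks.reverse()                 # skill diagnosis first while testing
    for advice, ok in checks:
        if not ok:
            return advice
    return 'no intervention'

print(coach_step(errors=True, efficient=True, state='prediction',
                 preconditions_met=True, tools_used_well=False, skills_ok=True))
# -> 'advise on tool use'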
III. Summary and Future Research

The current ground track microworld uses a qualitative model to teach the basic concepts of orbital mechanics. This microworld provides the student with a discovery environment which allows him to explore relationships between orbital parameters and ground tracks. The microworld also has intelligence. It knows about the domain, about how to estimate orbital parameters from a ground track, and about how to use the inquiry tools effectively to achieve goals. As a result, if the student fails to make satisfactory progress toward the stated goals, then the system intervenes and offers appropriate assistance. This type of intelligent simulation provides a more active and adaptive environment for reinforcing training skills.

The initial prototype is now complete and has been formatively evaluated by members of the NORAD crew and instructors at the Space School. The authors performed further tests during the spring semester of 1987 with students from the Space School at Lowry Air Force Base and from the Air Force Academy to determine whether the tutor is more effective than traditional classroom experience. This data will also be used to improve the diagnostic portion of the tutor.

Several areas of research are also being investigated using the ground track domain. The intelligent tutor for this domain closely resembles an intelligent tutoring system developed by Shute and Glaser [1987] which is currently used at Lackland Air Force Base, San Antonio, Texas, to identify individual cognitive differences among students. We are planning to test the effectiveness of the acquisition of inquiry skills by comparing airmen who use both the Shute and Glaser Economics Tutor and the Ground Track Tutor. From this data, we will be able to determine the extent to which individuals transfer experimental behavior. Because one of the primary purposes of this tutor was to create a vehicle for testing hypotheses about training effectiveness, we want to investigate specific questions dealing with this area, such as: What happens in an instructional environment when you vary the order of the State Module? (Is it better to state a hypothesis and then conduct experiments?) What happens in the instructional environment when you vary the order of remediation (tool use versus skill diagnosis)? Finally, how can the information we obtain from these studies be made a dynamic part of the system so that it can adapt to individual students' needs? These and other issues will be explored in the coming months and should contribute to our understanding of how to build more effective training systems.

Acknowledgements

The authors gratefully acknowledge Dr. Valerie Shute, whose contributions to this project were invaluable. The authors would also like to thank the instructors and students at the Unified Space Training School (UST) for their assistance with this project.

References

[Astronautics, 1985] Astronautics 332. USAF Academy, Colorado, 1985.
[Bate et al., 1971] Roger R. Bate, Donald D. Mueller, and Jerry E. White. Fundamentals of Astrodynamics. Dover Publications, New York, 1971.
[Carr and Goldstein, 1977] B. Carr and Ira Goldstein. Overlays: A Theory of Modelling for Computer Aided Instruction. MIT AI Memo 406, 1977.
[Shute and Glaser, 1987] Valerie Shute and Robert Glaser. An Intelligent Tutoring System for Exploring Principles of Economics. In press.
An Architecture for Intelligent Task Automation

Jeffrey M. Becker and Fred E. Garrett
Martin Marietta Denver Aerospace
P.O. Box 179, M.S. 0428, Denver, CO 80201

Abstract

This report discusses the Martin Marietta Intelligent Task Automation Project (ITA). The purpose of the ITA project is to integrate Artificial Intelligence (AI) task planning, path planning, vision, and robotics technologies into a system designed to autonomously perform manufacturing tasks in dynamic or unstructured environments. The application domain chosen for primary demonstrations is dimensional measurement of an F-15 bulkhead. The overall goal is to be able to perform the inspection an order of magnitude faster than the current manual method, which takes about 24 hours for about 1000 inspection points. The project was conducted in two phases. Phase I, completed in December 1984, demonstrated the readiness of the technologies in each of the areas making up the ITA system. Phase II, which was mostly complete in June 1987, demonstrated that the technologies can be integrated into a working system and that the system can be transferred to other applications. The architecture of the ITA system is discussed with an emphasis on the AI components making up the system. The strengths and weaknesses of the architecture and the AI techniques applied are discussed.

I. Introduction

Artificial Intelligence and robotics technologies have advanced to the state where combining them into an intelligent system for performing industrial tasks is feasible. The purpose of this paper is to give a broad overview of the Martin Marietta Intelligent Task Automation (ITA) project so the reader can gain an understanding of its overall architecture and the AI technologies applied.

Phase I, which started in January 1983, demonstrated the readiness of the component technologies of the ITA system. Sequence planning (the "traveling salesman" problem), task planning, and path planning systems were developed and demonstrated. Vision capabilities demonstrated included edge extraction and classification, planar region extraction, object recognition [Magee and Nathan, 1985], and dimensional measurement, all from laser scanner range data. Plan execution was demonstrated by performing a tool pickup and several measurement actions using a Cincinnati Milacron T3-746 arm and the 6-degree-of-freedom control system developed during the program. An approach to the problems of execution monitoring and exception handling [Van Baalen, 1984] was also developed and implemented.

(This work was performed at the Intelligent Task Automation Project facilities of Martin Marietta Denver Aerospace. This work was supported by the Air Force Wright Aeronautical Laboratories and the Defense Advanced Research Projects Agency under Contract F33615-82-C-5139.)

Phase II, which started in December 1985, demonstrated that the technologies developed in Phase I could be integrated into a working system. Most of the code developed for Phase I was rewritten under Phase II to incorporate lessons learned. Figure 1 illustrates the hardware configuration for the bulkhead inspection demonstration task.

The ITA Phase II system architecture is a heterogeneous hierarchical planning and plan execution system consisting of a sequence level, a task level, a geometric level, and a physical level. The software consists of the thirteen components (boxes) shown in Figure 2. These components access the seven knowledge bases (cylinders) shown. The general sequence of operations is as follows.
Measurements to be performed are entered using the Offline Measurement Entry component. The measurement specifications are preprocessed to generate the measurement knowledge base, the sequence plan, and the Operation Planner MACROPs (generalized plans) using the ITA system in Offline Simulation mode. Though not strictly necessary, preprocessing improves the speed of online operations in a production environment. Offline simulation also provides a safe means for verifying correct system operation.

When started up online via the System Monitor, Top Level executes the sequence-level plan by getting the next measurement to be performed from the Sequence Planner, getting a task plan to perform the measurement from the Operation Planner, passing the operation plan to the Plan Executive/Monitor for execution, and passing the result of plan execution back to the System Monitor for archiving. The Plan Executive/Monitor uses the Geometric Reasoner to translate qualitative parameters of the plan to quantitative values. It sends commands for robot actions and ultrasonic measurements to the Path Planner. The Path Planner uses the Collision Avoidance component to determine if a proposed path intersects with any object in the workspace, and sends commands to the Robot Controls component to execute a path plan. Commands for Scanning Laser Ranging Assembly (SLRA) measurements are sent by the Executive to the Vision component. If either the Path Planner or the Vision component returns an error message for a command result, the Executive invokes the Exception Handler to diagnose the problem and generate a recovery plan. A description of each component follows.

[Figure 1: ITA Phase II Hardware Configuration]
[Figure 2: ITA Phase II Functional Diagram]

The Offline components are used to enter and edit specifications of the measurements to be performed to inspect a part, to generate problem sets for testing the path planner, and to provide a 3-D graphics simulation capability for display of path planner output. The simulation may be driven instead of the actual robot for overall system verification. Measurement specifications are entered using a graphics display of the bulkhead to select measurement locations, and using a menu-oriented interface for entering additional parameters such as dimensions and tolerances.

The System Monitor serves as the user interface to the ITA online system, and may also be used to monitor preprocessing activities. Commands are included for starting, stopping, interrupting, and continuing inspection activities, and for displaying the results of the inspection. Graphics interfaces are provided for the Sequence Planner, the Operation Planner, the Plan Executive/Monitor, the Exception Handler, and the Path Planner to allow for detailed examination of system activities.

The Top Level component executes sequence-level plans as described above. Top Level also watches for STOP and INTERRUPT commands from the System Monitor and will either halt all system activity immediately or interrupt activity after the current plan completes execution, accordingly.
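The Top Level cycle described above amounts to a simple fetch-plan-execute loop; the sketch below is our own illustration of that control flow (the component interfaces shown are assumptions, not the ITA code):

def top_level(sequence_planner, operation_planner, executive, monitor):
    """Fetch-plan-execute loop: get the next measurement, obtain a task
    plan for it, execute the plan, and archive the result, until the
    sequence is exhausted or the operator stops the system."""
    while not monitor.stop_requested():
        measurement = sequence_planner.next_measurement()
        if measurement is None:
            break                                  # sequence complete
        plan = operation_planner.plan_for(measurement)
        result = executive.execute(plan)           # may trigger exception handling
        monitor.archive(measurement, result)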
The Sequence Planner generates a sequence plan from an unordered set of measurement specifications. It also looks up the next measurement goal in the stored sequence plan on request from Top Level. Generating a sequence plan consists of partitioning the remaining measurements into groups according to the measuring tool to be used, and then ordering the points within each group. Partitioning of measurements is performed using a set of rules for tool selection coded in MRS (Meta-level Representation System) [Genesereth et al., 1984]. Measurements that are already done, or that cannot be done because of the unavailability of the correct tool, are placed in separate groups. Ordering of the measurement points is accomplished using a near-optimal solution to the traveling salesman problem known as the "Convex Hull" algorithm, followed by 2-Optimal Edge Exchange and Peephole optimizations [Golden et al., 1980]. Looking up the next step consists of popping the next measurement specification off the stored sequence plan and verifying that the required resources are available. If they are not, resequencing is performed and the next measurement (if any) from the new sequence is returned.

The Operation Planner consists of a Task Planner, a Plan Generalization component that creates a macro-operator (MACROP) from a plan, and a MACROP Lookup component that finds and instantiates a MACROP for a given initial state and goal conditions. The Operation Planner first tries MACROP Lookup. If no applicable MACROP can be found, the Task Planner is called on to generate the plan from scratch. The plan is then generalized and stored as a MACROP for future reference.

The Task Planner is a hierarchical, nonlinear, backward-chaining planner that uses hill-climbing search (backtracking is chronological). For a treatment of related planners, see ABSTRIPS [Sacerdoti, 1974], Nonlin [Tate, 1977], NOAH [Sacerdoti, 1977], and SIPE [Wilkins, 1984]. The Task Planner is hierarchical in the sense of ABSTRIPS - goals are weighted and only the highest-level unsatisfied goals are worked on. Nonlinear plans are achieved by (1) allowing operators to be ordered in parallel with other operators in the plan if there are no interactions and (2) allowing serendipitous goal reduction. Deductive operators are used to replace explicit delete lists in the operator descriptions. Unary and n-ary constraints on operator variables are provided to generate and to test candidate bindings for operator variables, respectively. Figure 3 shows an operator declaration and a deductive operator for the ITA domain.

;;; Operator for Ultrasonic Measurements:
(static-operator
  :name-and-format (us-measure $tool $arm $id)
  :preconditions ((couplant-applied (goal-point (meas $id)))
                  (at $arm (in-contact-point (meas $id)))
                  (holding $arm $tool))
  :adders ((measured $tool $arm (meas $id)))
  :unary-constraints ((type $arm arm) (type $tool us-tool))
  :n-ary-constraints ((can-lift $arm $w1) (weight $tool $w2) (<= $w2 $w1))
  :resources ($tool $arm)
  :message-pattern (MEASURE (min-value (meas $id)) (max-value (meas $id)))
  :command-stream (command-stream path-planner $arm)
  :reply-pattern (VALUE $val)
  :result-pattern (dimension $id $tool $val 0.0))

;;; Propositions denied when a tool is picked up:
(deductive-operator
  :name-and-format (holding $arm $tool)
  :denied ((location $tool $arm in-rack)
           (holding $arm (n= $tool))))

Figure 3: Example of Static and Deductive Operators

Plan Generalization involves replacing certain constants in a plan by variables, finding the overall preconditions and adders of the plan, collecting unary and n-ary constraints, and creating additional resource constraints. Figure 4 shows an example of a generalized plan (a MACROP for the ultrasonic measurement sequence):

(macrop
  :name-and-format (MEASURED $ID1 $TOOL1 $ARM1 $ID2)
  :purpose ((MEASURED $TOOL1 $ARM1 (MEAS $ID2)))
  :preconditions ((AT $ARM1 (IN-CONTACT-POINT (MEAS $ID1)))
                  (HOLDING $ARM1 $TOOL1))
  :adders ((AT $ARM1 (IN-CONTACT-POINT (MEAS $ID2)))
           (MEASURED $TOOL1 $ARM1 (MEAS $ID2)))
  :unary-constraints ((TYPE $TOOL1 US-TOOL) (TYPE $ARM1 ARM))
  :n-ary-constraints ((CAN-LIFT $ARM1 $W1) (WEIGHT $TOOL1 $W2) (<= $W2 $W1))
  :resources ($TOOL1 $ARM1)
  :plan ((1 (MOVE-RETRACT $ARM1 $ID1) NIL)
         (2 (MOVE $ARM1 (MEAS $ID1) (MEAS $ID2)) (1))
         (3 (APPLY-COUPLANT $ARM1 $ID2) (2))
         (4 (MOVE-CONTACT $ARM1 $ID2) (3))
         (5 (US-MEASURE $TOOL1 $ARM1 $ID2) (4))))

Figure 4: Example of a MACROP
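A toy version of the constant-to-variable substitution at the heart of plan generalization (our sketch only; the real component also lifts preconditions, adders, and constraints, as Figure 4 shows) might look like this:

def generalize(plan_steps, constants):
    """Replace each designated constant with a fresh variable, consistently
    across the whole plan, so (us-measure probe-3 arm-1 pt-7) becomes a
    reusable schema."""
    table = {}
    def lift(term):
        if isinstance(term, tuple):
            return tuple(lift(t) for t in term)
        if term in constants:
            table.setdefault(term, '$VAR%d' % (len(table) + 1))
            return table[term]
        return term
    return [lift(step) for step in plan_steps], table

steps = [('move-contact', 'arm-1', 'pt-7'),
         ('us-measure', 'probe-3', 'arm-1', 'pt-7')]
print(generalize(steps, {'arm-1', 'probe-3', 'pt-7'})[0])
# [('move-contact', '$VAR1', '$VAR2'), ('us-measure', '$VAR3', '$VAR1', '$VAR2')]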
MACROP Lookup is a straightforward process of comparing each MACROP to the given initial state and goal conditions, and then determining whether the constraints are satisfied. The MACROP is then plugged with the bindings found. MACROP Lookup is roughly two orders of magnitude faster than generating the same plan from scratch (about 0.1 sec versus about 10.0 sec for a typical ITA domain plan). This capability is essential for meeting the production-environment timing constraints of the ITA project.

The Geometric Reasoner is responsible for creating, accessing, and maintaining the Measurement Knowledge Base (MKB). The MKB contains information about where the arm can be positioned to perform each measurement, the approach position in free space for ultrasonic measurements, and parameters for performing SLRA measurements such as patch sizes and locations in the field of view. This information is derived from geometric constraint and preference information.

The Plan Executive/Monitor executes a plan by sending commands to the Path Planner, which controls robot motion and ultrasonic measurements, and to Vision, which controls SLRA measurements. The Executive splits a plan into separate command streams, one for each independently controllable sensor or effector. The Path Planning component uses a lookahead queue to do smoothing where continuous motion is possible over several commands, so it receives all of its commands from a plan at once. To synchronize a commanded process that uses a lookahead queue with other processes, the Executive inserts WAIT commands before any command that has a predecessor belonging to another command stream. The Executive sends a CONTINUE command for a WAIT command when the appropriate predecessor commands have been completed. The reply to a command can be either a normal reply or an exception reply. A command may also "time out" if a reply is not sent within a reasonable period of time. When an exception reply or timeout occurs, execution of the plan is stopped, and all relevant information about the exception is passed to the Exception Handler.

The Exception Handler is responsible for diagnosing the cause of the exception, updating the world model to correspond to the current state of the world, and generating a recovery plan. For diagnosis, the Exception Handler is given a knowledge base (MRS rules) containing information about possible causes for each fault, the number of times each exception has occurred, the assertions that each available test can verify, the preconditions of each test, and an estimated cost for each test. When an exception message is received, the certainty of assertions associated with possible causes is reduced. Tests are selected, executed, and the results interpreted until a single cause is isolated. The next test to execute is selected by dynamically generating a near-minimal decision tree according to fault frequency, test cost, and test precondition information.
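A greedy approximation of that test-selection policy (our illustration only; the actual component builds a decision tree from MRS rules) could score applicable tests by cost per unit of expected discrimination:

def next_test(tests, candidate_faults, state):
    """Pick the cheapest applicable test relative to how much fault
    probability mass it can discriminate. Each test is a dict with
    'cost', 'preconditions' (facts required in `state`), and 'verifies'
    (the faults whose presence it can confirm or rule out)."""
    best, best_score = None, float('inf')
    for t in tests:
        if not t['preconditions'] <= state:
            continue                       # test not applicable now
        mass = sum(candidate_faults.get(f, 0.0) for f in t['verifies'])
        if mass == 0:
            continue                       # test would tell us nothing
        score = t['cost'] / mass           # lower is better
        if score < best_score:
            best, best_score = t, score
    return best

faults = {'tool-broken': 0.6, 'part-shifted': 0.3, 'sensor-noise': 0.1}
tests = [{'name': 'rescan', 'cost': 5.0, 'preconditions': set(),
          'verifies': {'sensor-noise'}},
         {'name': 'tool-check', 'cost': 2.0, 'preconditions': set(),
          'verifies': {'tool-broken'}}]
print(next_test(tests, faults, set())['name'])   # tool-check (2.0/0.6 < 5.0/0.1)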
Replanning is done by the Operation Planner, using the current state as the initial state and the original goals of the failed plan as the goal conditions.

The Path Planner functions as the interface between the task-plan Executive/Monitor and the real-time robot controller. The Path Planner first verifies that the goal position is reachable. It then generates collision-free paths for the robot using a dual-level algorithm. First, a potential collision-free path for the end effector (modeled as a point) is found using the "visibility lines" method [Lozano-Perez, 1979], with goal optimization for producing graph nodes and A* search for selecting the node sequence. The prospective path is then checked at incremental positions to see if any collisions involving intermediate links of the arm will occur. If a collision could occur, new intermediate subgoals are proposed and evaluated until a collision-free path is found. A third, trajectory-planning phase, involving profile smoothing and velocity selection, is handled in the Robot Controller.

The Collision Avoidance Model is the geometric representation of the workcell (objects, tools, robot parts) used by the Path Planner. The Collision Avoidance Model provides for determining if a point or line segment intersects any workcell object, determining if a robot in a particular position intersects its own links or a workcell object, and updating the model to reflect changes in the real world. The basic representation structure is a region tree. A region tree (actually a directed graph) is a hierarchical structuring of part of space into arbitrarily oriented regions. A region can be a sphere, a tube (a cylinder with spheres of the same radius at both ends), or a rectangular parallelepiped. At the leaves are solid regions representing actual workcell objects. Regions need not completely contain their children, but all regions except for roots must be completely contained in some set of ancestors. Region tree nodes contain shape, size, position, orientation, and solidity information.

The Vision component is responsible for processing SLRA images to obtain dimensions for the observed parts of the bulkhead. The SLRA was developed by the Environmental Research Institute of Michigan (ERIM) under subcontract to Martin Marietta Corporation during Phase I of the ITA contract. It uses a modulated laser light source to determine the range to the target. The range is computed by determining the phase change that results when the light travels from the sensor to the target and back. The resulting 3-D range information can be used for dimensional measurement and object classification. Each measurement involves positioning rectangular patches in the image to correspond to critical areas of the part being measured. Measurements are obtained by a variety of techniques, depending on the type of measurement to be performed.
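To picture the region-tree query described above, here is a minimal sketch (ours; real nodes also carry orientation and support tubes and boxes, and we simplify by assuming each child lies fully inside its parent, which the actual model does not require):

import math

class Region:
    """A spherical region node; leaves marked solid represent real objects."""
    def __init__(self, center, radius, solid=False, children=()):
        self.center, self.radius = center, radius
        self.solid, self.children = solid, list(children)

    def contains(self, p):
        return math.dist(p, self.center) <= self.radius

def hits_solid(node, p):
    """True if point p lies inside some solid leaf; whole subtrees are
    pruned when the enclosing region misses the point (valid only under
    the full-containment simplification noted above)."""
    if not node.contains(p):
        return False
    if node.solid:
        return True
    return any(hits_solid(c, p) for c in node.children)

tool_rack = Region((0, 0, 0), 5, children=[
    Region((1, 0, 0), 1, solid=True),     # a tool
    Region((-2, 0, 0), 1, solid=True)])   # another object
print(hits_solid(tool_rack, (1.2, 0, 0)))  # True
print(hits_solid(tool_rack, (4, 4, 4)))    # False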
In a separate research task funded under the ITA program, coordinated dual-arm control algorithms were demonstrated. (Further details, not available at the time of writing, will be given at the conference.) Although major strides were made in building an integrated intelligent robot system, the system is still not as flexible nor as powerful as we would like for truly general-purpose manipulator automation. For example, to make the system more flexible, the Top Level component, which is currently hard coded for the inspection domain, should be replaced by a high-level planner that can call on special- purpose functions such as the current sequence planner as tools. Because of the heterogenous hierarchical architecture used, the task planner only has to plan for a single measurement at a time. This makes the task planner’s job much easier. In fact, we have found the branching factor of the ITA measurement domain to be less than that of the standard blocks world domain for task planning. Even so, ITA task plans share many subsequences. We would like to add the capabilities of selectively generalizin plan as in Morris f interesting subsequences of a Minton, 19851, and of using MACROPs in addition to primitive operators for constructing a task plan. We are also looking into incremental task plan revision techniques [Simmons, 19851 as an alternate means of replanning following an exception under a research task associated with the ITA project. Overall, richer representations of domain objects and robot actions are needed to allow more powerful. knowledge-based task planning for more difficult domains. We have fo;;ia;k;t truly robust exception handling in robotics requires powerful sensory capabilities, espeicially vision. Reasoning can do little to replace perception when it comes to determining the state of an environment subject to external influences. Our choice of a break-and-resume approach to exception handling was based on the (correct) assumption that high-level sensing operations could not generally be done in real time. Given a fast vision system for real-time hand-eye control, many problems that are now treated as exceptions (e.g., bumping into something because of positioning inaccuracy) could be easily avoided. We hope that a second arm and a more general vision component can be added back to the system in follow-on work. Object recognition research conducted during Phase I could be applied to such an effort. for controlling an industrial robot in a real-world domain. Being able to integrate such a system is very much a team effort and requires organizational commitment as well as technological expertise. Martin Marietta is currently assessing the possibility of making the Intelligent Task Automation system available as a test bed for outside research in the areas of planning, compliant and multi-arm controls, and integrating vision with robotics. Acknowledgements The Martin Marietta Intelligent Task Automation Project is a team effort, and this paper is based upon the efforts of and has received input from many people. Our thanks to the entire ITA team. Special thanks to Dennis Haley (project manager), Don Mathis, and Mark Thomas. References Brooks, R. A., Lozano-Perez, T., 1983. “A Subdivision Algorithm in Configuration Space for Findpath with Rotation”, Proceedings IJCAI-83, Karlsruhe, West Germany, 1983, pp 799-806. Genesereth, M., R. Greiner, M. Grinberg, and D. Smith, 1984. The MRS Dictionary, Heuristic Programming Project Report No. 
HPP-80-24, Stanford University, Stanford, CA, January 1984.
Golden, B., L. Bodin, T. Doyle, and W. Stewart Jr., 1980. "Approximate Traveling Salesman Algorithms", Operations Research, Vol. 28, No. 3, Part II, May-June 1980, pp 694-711.
Lozano-Perez, T., 1979. "An Algorithm for Planning Collision-Free Paths Among Polyhedral Obstacles", Communications of the ACM, Vol. 22, No. 10, October 1979, pp 560-570.
Magee, M., and M. Nathan, 1985. "A Rule Based System for Pattern Recognition that Exploits Topological Constraints", Proceedings of IEEE CVPR-85, June 1985.
Minton, S., 1985. "Selectively Generalizing Plans for Problem Solving", Proceedings IJCAI-85, Los Angeles, CA, 1985, pp 596-599.
Sacerdoti, E., 1977. A Structure for Plans and Behavior, North-Holland, New York, 1977.
Simmons, R., 1985. Knowledge Intensive Plan Debugging, Internal Report, MIT AI Laboratory, January 3, 1985.
Tate, A., 1977. "Generating Project Networks", Proceedings IJCAI-77, Cambridge, MA, 1977, pp 888-893.
Wilkins, D., 1984. "Domain-Independent Planning: Representation and Plan Generation", Artificial Intelligence 22, 1984, pp 269-301.
Van Baalen, J., 1984. "Exception Handling in a Robot Planning System", IEEE Workshop on Principles of Knowledge-Based Systems, Denver, CO, December 1984. (Not published - late submission.)
REACTIVE REASONING AND PLANNING

Michael P. Georgeff and Amy L. Lansky
Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, California
Center for the Study of Language and Information, Stanford University

Abstract

In this paper, the reasoning and planning capabilities of an autonomous mobile robot are described. The reasoning system that controls the robot is designed to exhibit the kind of behavior expected of a rational agent, and is endowed with the psychological attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. Unlike most planning systems, the plans or intentions formed by the robot need only be partly elaborated before it decides to act. This allows the robot to avoid overly strong expectations about the environment, overly constrained plans of action, and other forms of overcommitment common to previous planners. In addition, the robot is continuously reactive and has the ability to change its goals and intentions as situations warrant. The system has been tested with SRI's autonomous robot (Flakey) in a space station scenario involving navigation and the performance of emergency tasks.

1 Introduction

The ability to act appropriately in dynamic environments is critical for the survival of all living creatures. For lower life forms, it seems that sufficient capability is provided by stimulus-response and feedback mechanisms. Higher life forms, however, must be able to anticipate future events and situations, and form plans of action to achieve their goals. The design of reasoning and planning systems that are embedded in the world and must operate effectively under real-time constraints can thus be seen as fundamental to the development of intelligent autonomous machines.

In this paper, we describe a system for reasoning about and performing complex tasks in dynamic environments, and show how it can be applied to the control of an autonomous mobile robot. The system, called a Procedural Reasoning System (PRS), is endowed with the attitudes of belief, desire, and intention. At any given instant, the actions being considered by PRS depend not only on its current desires or goals, but also on its beliefs and previously formed intentions. PRS also has the ability to reason about its own internal state - that is, to reflect upon its own beliefs, desires, and intentions, modifying these as it chooses. This architecture allows PRS to reason about means and ends in much the same way as do traditional planners, but provides the reactivity that is essential for survival in highly dynamic and uncertain worlds.

(This research has been made possible by a gift from the System Development Foundation, by the Office of Naval Research under Contract N00014-85-C-0251, by the National Aeronautics and Space Administration, Ames Research Center, under Contract NAS2-12521, and by FMC under Contract FMC-147466.)

For our task domain, we envisaged a robot in a space station, fulfilling the role of an astronaut's assistant. When asked to get a wrench, for example, the robot determines where the wrench is kept, plans a route to that location, and goes there. If the wrench is not where expected, the robot may reason further about how to obtain information as to its whereabouts. It then either returns to the astronaut with the desired tool or explains why it could not be retrieved. In another scenario, the robot may be midway through the task of retrieving the wrench when it notices a malfunction light for one of the jets in the reactant control system of the space station. It reasons that handling this malfunction is a higher-priority task than retrieving the wrench and therefore sets about diagnosing the fault and correcting it. Having done this, it resumes its original task, finally telling the astronaut.

To accomplish these tasks, the robot must not only be able to create and execute plans, but must be willing to interrupt or abandon a plan when circumstances demand it. Moreover, because the robot's world is continuously changing and other agents and processes can issue demands at arbitrary times, performance of these tasks requires an architecture that is both highly reactive and goal-directed.

We have used PRS with the new SRI robot, Flakey, to exhibit much of the behavior described in the foregoing scenarios, including both the navigational and malfunction-handling tasks [8]. In this paper, we concentrate on the navigational task; the knowledge base used for jet malfunction handling is described elsewhere [6,7].

2 Previous Approaches

Most existing architectures for embedded planning systems consist of a plan constructor and a plan executor. As a rule, the plan constructor formulates an entire course of action before commencing execution of the plan [5,12,14]. The plan itself is typically composed of primitive actions - that is, actions that are directly performable by the system. The rationale for this approach, of course, is to ensure that the planned sequence of actions will actually achieve the prescribed goal. As the plan is executed, the system performs these primitive actions by calling various low-level routines. Execution is usually monitored to ensure that these routines will culminate in the desired effects;
In another scenario, the robot may be midway through the ta.sk of retrieving the wrench when it notices a malfunction light for one of the jets in the reactant control system of the space station. It reasons that handling this malfunction is a higher-priority task than retrieving the wrench and therefore sets about diagnosing the fault and correcting it. Having done this, it resumes its original ta.sk, finally telling the astronaut. To accomplish these tasks, the robot must not only be able to create and execute plans, but must be willing to interrupt or abandon a plan when circumstances demand it. Moreover, because the robot’s world is continuously changing and other agents and processes can issue demands at arbitrary times, per- formance of these tasks requires an architecture that is both highly reactive and goal-directed. We have used PRS with the new SRI robot, Flakey, to ex- hibit much of the behavior described in the foregoing scenarios, including both the navigational and malfunction-handling tasks [S]. In this paper, we concentrate on the navigational task; the know&edge base used for jet malfunction handling is described elsewhere [G ,7]. 2 Previous Approaches Most existing architectures for embedded planning systems con sist of a plan constructor and a plan executor. As a rule, the plan constructor formulates an entire course of action before commencing execution of the plan [5,12,14]. The plan itself is typically composed of primitive actions - that is, actions that are directly performable by the system. The rationale foflthis approach, of course, is to ensure that the planned sequence of actions will actually achieve the prescribed goal. As the plan is executed, the system performs these primitive actions by calling various low-level routines. Execution is usually monitored to ensure that these routines will culminate in the desired effects; Georgeff and Lansky 677 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. if they do not, the system can return control to the plan con- structor so that it may modify the existing plan appropriately. One problem with these schemes is that, in many domains, much of the information about how best to achieve a given goal is acquired during plan execution. For example, in planning to get from home to the airport, the particular sequence of actions to be performed depends on information acquired on the way - such as which turnoff to take, which lane to get into, when to slow down or speed up, and so on. To overcome this problem, at least in part, there has been some work on developing planning systems that interleave plan formation and execution [3,4]. Such systems are better suited to uncertain worlds than the kind of system described above, as decisions can be deferred until they have to be made. The reason for deferring decisions is that an agent can acquire more information as time passes; thus, the quality of its decisions can be expected only to improve. Of course, because of the need to coordinate some activities in advance and because of practical restrictions on the amount of decision-making that ca.n be accommodated during task execution, there are limitations on the degree to which such decisions may be deferred. Real-time constraints pose yet further problems for tradition- ally structured systems. First, the planning techniques typically used by these systems are very time-consuming, requiring ex- ponential search through potentially enormous problem spaces. 
While this may be acceptable in some situations, it is not suited to domains where replanning is frequently necessary and where system viability depends on readiness to act. In addition, most existing systems are overcommitted to the planning phase of their operations; no matter what the situation or how urgent the need for action, these systems always spend as much time as necessary to plan and reason about achieving a given goal before performing any external actions whatsoever. They lack the ability to decide when to stop planning or to reason about possible compromises between further planning and longer available execution time.

Traditional planning systems also rely excessively on constructing plans solely from knowledge about the primitive actions performable by the robot. However, many plans are not constructed from first principles, but have been acquired in a variety of other ways - for example, by being told, by learning, or through training. Furthermore, these plans may be very complex, involving a variety of control constructs (such as iteration and recursion) that are normally not part of the repertoire of conventional planning systems. Thus, although it is obviously desirable that an embedded system be capable of forming plans from first principles, it is also important that the system possess a wealth of precompiled procedural knowledge about how to function in the world [6].

The real-time constraints imposed by dynamic environments also require that a situated system be able to react quickly to environmental changes. This means that the system should be able to notice critical changes in the environment within an appropriately small interval of time. However, most embedded planning systems provide no mechanisms for reacting in a timely manner to new situations or goals during plan execution, let alone during plan formation.

Another disadvantage of most systems is that they commit themselves strongly to the plans they have adopted. While such systems may be reactive in the limited sense of being able to replan so as to accomplish fixed goals, they are unable to change their focus completely and pursue new goals when the situation warrants. Indeed, the very survival of an autonomous system may depend on its ability to modify its goals and intentions according to the situation.

A number of systems developed for the control of robots do have a high degree of reactivity [1]. Even SHAKEY [10] utilized reactive procedures (ILAs) to realize the primitive actions of the high-level planner (STRIPS). This idea is pursued further in some recent work by Nilsson [11]. Another approach is advocated by Brooks [2], who proposes decomposition of the problem into task-achieving units whereby distinct behaviors of the robot are realized separately, each making use of the robot's sensors, effectors, and reasoning capabilities as needed. Kaelbling [9] proposes an interesting hybrid architecture based on similar ideas.

These kinds of architectures could lead to more viable and robust systems than the traditional robot-control systems. Yet most of this work has not addressed the issues of general problem-solving and commonsense reasoning; the research is instead almost exclusively devoted to problems of navigation and the execution of low-level actions. These techniques have yet to be extended or integrated with systems that can change goal priorities completely; modify, defer, or abandon their plans; and reason about what is best to do in light of the immediate situation.

In sum, existing planning systems incorporate many useful techniques for constructing plans of action in a great variety of domains. However, most approaches to embedding these planners in dynamic environments are neither robust enough nor sufficiently reactive to be useful in many real-world applications. On the other hand, the more reactive systems developed in robotics are well suited to handling the low-level sensor and effector activities of a robot. Nevertheless, it is not yet clear how these techniques could be used for performing some of the higher-level reasoning desired of complex problem-solving systems. To reconcile these two extremes, it is necessary to develop reactive reasoning and planning systems that can utilize both kinds of capabilities whenever they are needed.
3 A Reactive Planning System

The system we used for controlling and carrying out the high-level reasoning of the robot is called a Procedural Reasoning System (PRS) [6,7]. The system consists of a data base containing current beliefs or facts about the world, a set of current goals or desires to be realized, a set of procedures (which, for historical reasons, are called knowledge areas or KAs) describing how certain sequences of actions and tests may be performed to achieve given goals or to react to particular situations, and an interpreter (or inference mechanism) for manipulating these components. At any moment, the system will also have a process stack (containing all currently active KAs) which can be viewed as the system's current intentions for achieving its goals or reacting to some observed situation. The basic structure of PRS is shown in Figure 1. A brief description of each component and its usage is given below.

Figure 1: System Structure

3.1 The System Data Base

The contents of the PRS data base may be viewed as representing the current beliefs of the system. Some of these beliefs may be provided initially by the system user. Typically, these will include facts about static properties of the application domain - for example, the structure of some subsystem, or the physical laws that some mechanical components must obey. Other beliefs are derived by PRS itself as it executes its KAs. These will typically be current observations about the world or conclusions derived by the system from these observations.

The data base itself consists of a set of state descriptions describing what is believed to be true at the current instant of time. We use first-order predicate calculus for the state description language. Data base queries are handled using unification over the set of data base facts. State descriptions that describe internal system states are called metalevel expressions. The basic metalevel predicates and functions are predefined by the system. For example, the metalevel expression (goal g) is true if g is a current goal of the system.
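A minimal sketch of such a data base, assuming ground facts stored as tuples and queries answered by unification-style matching of '$'-prefixed variables (the representation is ours; only the idea of unification over stored facts comes from the text):

```python
# Minimal sketch of a PRS-style belief data base; illustrative only.
# Variables are symbols beginning with "$"; stored facts are ground.

def is_var(t):
    return isinstance(t, str) and t.startswith("$")

def match(pattern, fact, bindings):
    """Match a query pattern against a ground fact, extending bindings;
    return None on mismatch (repeated variables must agree)."""
    if len(pattern) != len(fact):
        return None
    for p, f in zip(pattern, fact):
        p = bindings.get(p, p)          # use an earlier binding if any
        if is_var(p):
            bindings = {**bindings, p: f}
        elif p != f:
            return None
    return bindings

class Database:
    def __init__(self):
        self.facts = set()
    def add(self, fact):
        self.facts.add(fact)
    def query(self, pattern):
        """Yield every set of bindings under which pattern is believed."""
        for fact in self.facts:
            b = match(pattern, fact, {})
            if b is not None:
                yield b

db = Database()
db.add(("in-hall", "room-12", "hall-3"))
print(list(db.query(("in-hall", "$room", "hall-3"))))   # [{'$room': 'room-12'}]
```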
3.2 Goals

Goals appear both on the system goal stack and in the representation of KAs. Unlike most AI planning systems, PRS goals represent desired behaviors of the system, rather than static world states that are to be [eventually] achieved. Hence goals are expressed as conditions on some interval of time (i.e., on some sequence of world states). Goal behaviors may be described in two ways. One is to apply a temporal predicate to an n-tuple of terms. Each temporal predicate denotes an action type or a set of state sequences. That is, an expression like (walk a b) can be considered to denote the set of state sequences which embody walking actions from point a to b.

A behavior description can also be formed by applying a temporal operator to a state description. Three temporal operators are currently used. The expression (! p), where p is some state description (possibly involving logical connectives), is true of a sequence of states if p is true of the last state in the sequence; that is, it denotes those behaviors that achieve p. Thus we might use the behavior description (! (walked a b)) rather than (walk a b). Similarly, (? p) is true if p is true of the first state in the sequence - that is, it can be considered to denote those behaviors that result from a successful test for p. Finally, (# p) is true if p is preserved (maintained invariant) throughout the sequence. Behavior descriptions can be combined using the logical operators A (and) and V (or). These denote, respectively, the intersection and union of the composite behaviors.

As with state descriptions, behavior descriptions are not restricted to describing the external environment, but can also be used to describe the internal behavior of the system. Such behavior specifications are called metalevel behavior specifications. One important metalevel behavior is described by an expression of the form (=> p). This specifies a behavior that places the state description p in the system data base. Another way of describing this behavior might be (! (belief p)).
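As a minimal illustration of these semantics (our construction, with states modeled as sets of true propositions; PRS itself uses first-order state descriptions), the three operators can be evaluated over an explicit finite state sequence:

```python
# Illustrative semantics for the behavior operators (! p), (? p), (# p)
# over a finite state sequence; each state is a set of true propositions.

def achieves(p, states):   # (! p): p holds in the last state
    return p in states[-1]

def tests(p, states):      # (? p): p holds in the first state
    return p in states[0]

def maintains(p, states):  # (# p): p is preserved throughout
    return all(p in s for s in states)

walk = [{"at-a"}, {"in-hall", "safe"}, {"walked-a-b", "safe"}]
assert achieves("walked-a-b", walk)
assert tests("at-a", walk)
assert not maintains("safe", walk)   # "safe" fails in the first state
```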
3.3 Knowledge Areas

Knowledge about how to accomplish given goals or react to certain situations is represented in PRS by declarative procedure specifications called Knowledge Areas (KAs). Each KA consists of a body, which describes the steps of the procedure, and an invocation condition that specifies under what situations the KA is useful. The body of a KA is represented as a graphic network and can be viewed as a plan or plan schema. However, it differs in a very important way from the plans produced by most AI planners: it does not consist of possible sequences of primitive actions, but rather of possible sequences of subgoals to be achieved. Thus, the bodies of KAs are much more like the high-level "operators" used in traditional planning systems [13]. They differ in that (1) the subgoals appearing in the body can be described by complex temporal expressions and (2) the allowed control constructs are richer and include conditionals, loops, and recursion.

The invocation part of a KA contains an arbitrarily complex logical expression describing under what conditions the KA is useful. Usually this consists of some conditions on current system goals (in which case, the KA is invoked in a goal-directed fashion) or current system beliefs (resulting in data-directed or reactive invocation), and may involve both. Together the invocation condition and body of a KA express a declarative fact about the effects of performing certain sequences of actions under certain conditions.

The set of KAs in a PRS application system not only consists of procedural knowledge about a specific domain, but also includes metalevel KAs - that is, information about the manipulation of the beliefs, desires, and intentions of PRS itself. For example, typical metalevel KAs encode various methods for choosing among multiple relevant KAs, determining how to achieve a conjunction of goals, and computing the amount of additional reasoning that can be undertaken, given the real-time constraints of the problem domain. Metalevel KAs may of course utilize knowledge specifically related to the problem domain. In addition to user-supplied KAs, each PRS application contains a set of system-defined default KAs. These are typically domain-independent metalevel KAs.

3.4 The System Interpreter

The PRS interpreter runs the entire system. From a conceptual standpoint, it operates in a relatively simple way. At any particular time, certain goals are active in the system and certain beliefs are held in the system data base. Given these extant goals and beliefs, a subset of KAs in the system will be relevant (i.e., applicable). One of these relevant KAs will then be chosen for execution by placing it on the process stack.

Figure 2: The Top-Level Strategy. (The body of the GO-TO KA; the graphic itself is not recoverable from the scan. Legible fragments mention beliefs such as (office $person $troom), (in-hall $troom $thall $tside $tpos), and (robot-in-room $froom); the utterance "Just a moment, I'm planning my path"; and goals including (! (plan-path $thall $troom)), (! (room-left $froom)), and (! (follow-plan)).)

In the course of executing the chosen KA, new subgoals will be posted and new beliefs derived. When new goals are pushed onto the goal stack, the interpreter checks to see if any new KAs are relevant, chooses one, places it on the process stack, and begins executing it. Likewise, whenever a new belief is added to the data base, the interpreter will perform appropriate consistency maintenance procedures and possibly activate other relevant KAs. During this process, various metalevel KAs may also be called upon to make choices among alternative paths of execution, choose among multiple applicable KAs, decompose composite goals into achievable components, and make other decisions.

This results in an interleaving of plan selection, formation, and execution. In essence, the system forms a partial overall plan, determines a means of accomplishing the first subgoal of the plan, acts on this, further expands the near-term plan of action, executes further, and so on. At any time, the plans the system is intending to execute (i.e., the selected KAs) are both partial and hierarchical - that is, while certain general goals have been decided upon, the specific means for achieving these ends have been left open for future deliberation.
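The cycle just described can be pictured with a small sketch (ours, not SRI's implementation); KA selection is reduced to matching a trigger against the current goal, with the choose parameter standing in for the metalevel KAs that would make the choice in a real PRS:

```python
# Sketch of the PRS interpreter cycle described above; illustrative only.
from collections import namedtuple

KA = namedtuple("KA", "name trigger body")   # trigger: a goal pattern

def interpret(goals, kas, choose=lambda relevant: relevant[0]):
    stack = []                                # the process stack (intentions)
    while goals:
        goal = goals.pop()
        relevant = [ka for ka in kas if ka.trigger == goal]
        if not relevant:
            continue                          # no applicable procedure
        ka = choose(relevant)                 # metalevel choice point
        stack.append(ka.name)
        for step in ka.body:                  # each step posts a subgoal,
            goals.append(step)                # interleaving selection,
    return stack                              # formation, and execution

kas = [KA("go-to", "at-target", ["plan-path", "follow-plan"]),
       KA("planner", "plan-path", []), KA("follower", "follow-plan", [])]
print(interpret(["at-target"], kas))          # ['go-to', 'follower', 'planner']
```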
Unless some new fact or request activates some new KA, PRS will try to fulfill any intentions it has previously decided upon. But if some important new fact or request does become known, PRS will reassess its goals and intentions, and then perhaps choose to work on something else. Thus, not all options that are considered by PRS arise as a result of means-end reasoning. Changes in the environment may lead to changes in the system's beliefs, which in turn may result in the consideration of new plans that are not means to any already intended end. PRS is therefore able to change its focus completely and pursue new goals when the situation warrants it. PRS can even alter its intentions regarding its own reasoning processes - for example, it may decide that, given the current situation, it has no time for further reasoning and so must act immediately.

3.5 Multiple Asynchronous PRSs

In some applications, it is necessary to monitor and process many sources of information at the same time. Because of this, PRS was designed to allow several instantiations of the basic system to run in parallel. Each PRS instantiation has its own data base, goals, and KAs, and operates asynchronously relative to other PRS instantiations, communicating with them by sending messages. The messages are written into the data base of the receiving PRS, which must then decide what to do, if anything, with the new information. As a rule, this decision is made by a fact-invoked KA (in the receiving PRS), which responds upon receipt of the external message. In accordance with such factors as the reliability of the sender, the type of message, and the beliefs, goals, and current intentions of the receiver, it is determined what to do about the message - for example, to acquire a new belief, establish a new goal, or modify intentions.

4 The Domain Knowledge

The scenario described in the introduction includes problems of route planning, navigation to maintain the route, and such tasks as malfunction handling and requests for information. We shall concentrate herein on the tasks of route planning and navigation. However, it is important to realize that the knowledge representation provided by PRS is used for reasoning about all tasks performed by the system.

The way the robot (under the control of PRS) solves the tasks of the space station scenario is roughly as follows. To reach a particular destination, it knows that it must first plan a route and then navigate to the desired location (see the KA depicted in Figure 2). In planning the route, the robot uses knowledge of the station's topology to work out a path to the target location, as is typically done in navigational tasks for autonomous robots. The topological knowledge is not detailed, stating simply which rooms are in which corridors and how the latter are connected. The route plan formed by the robot is also high-level, typically having the following form: "Travel to the end of the corridor, turn right, then go to the third room on the left." The robot's knowledge of the problem domain's topology is stored in its data base, while its knowledge of how to plan a route is represented in various route-planning KAs. Throughout this predictive-planning stage, the robot remains continuously reactive. Thus, for example, should the robot notice indication of a jet failure on the space station, it may well decide to interrupt its route planning and attend instead to the task of remedying the jet problem.

Once a plan is formed by the route-planning KAs, that plan must be used to guide the activities of the robot. To achieve this, we defined a group of KAs that react to the presence of a plan (in the data base) by translating it into the appropriate sequence of subgoals. Each leg of the original route plan generates subgoals - such as turning a corner, travelling along the hallway, and updating the data base to indicate progress. The second group of navigational KAs reacts to these goals by actually doing the work of reading the sonars, interpreting the readings, counting doorways, aligning the robot in the hallway, and watching for obstacles up ahead.

A third group of KAs reacts to contingencies encountered by the robot as it interprets and follows its path. These will include KAs that respond to the presence of an obstacle ahead or the fact that an emergency light has been seen. Such reactive KAs are invoked solely on the basis of certain facts' becoming known to the robot. Implicit in their invocation, however, is an underlying goal to "avoid obstacles" or "remain safe."

Yet other KAs perform the various other tasks required of the robot [7]. Metalevel KAs choose among different means of realizing any given goal and determine the respective priority of tasks when mutually inconsistent goals arise (such as diagnosing a jet failure and fetching a wrench). Each KA manifests a self-contained behavior, possibly including both sensory and effector components. Many of these KAs can be simultaneously active, performing their function whenever they may be applicable. Thus, while trying to follow a path down a hallway, an obstacle avoidance procedure may simultaneously cause the robot to veer from its original path. We elsewhere provide a more detailed description of the KAs used by the robot [8].

Figure 3: Route Navigation KA. (The ROOM-LEFT KA; the graphic itself is not recoverable from the scan. Legible fragments refer to motion commands involving speed and acceleration settings and a 180-degree bearing, beliefs such as (robot-in-room $froom) and (in-hall $froom $fhall $fside $fpos), and a concluding data base update of the form (current-origin $froom $fhall).)

Figure 4: Plan Interpretation KA. (The FOLLOW-PLAN KA; the graphic itself is not recoverable from the scan. Legible fragments refer to the destination (destination $troom $thall $twing) and to beliefs of the form (current-origin $locale $spot).)

For example, let us consider the KAs in Figures 3 and 4. After having used the KA in Figure 2 to plan a path, the robot acquires the goal (! (room-left $froom)), where the variable $froom is bound to some particular constant representing the room that the robot is trying to leave. The KA in Figure 3 will respond, causing the robot to perform the steps for leaving the given room. The last step in this KA will insert a fact into the system data base of the form (current-origin $froom $fhall), where the variables are again bound to specific constants. Next, the KA in Figure 2 issues the command (! (follow-plan)). This activates the KA in Figure 4, which assures that each leg of the plan is followed until the goal destination is reached. Beliefs of the form (current-origin $locale $spot) are repeatedly updated to readjust the robot's bearings and knowledge about its whereabouts.
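The division of labor among these navigational KAs can be illustrated with a small sketch (ours; the leg and subgoal vocabulary is invented for the example):

```python
# Illustrative expansion of a high-level route plan into navigation
# subgoals, in the spirit of the plan-following KAs described above.

def expand_leg(leg):
    """Translate one leg of a route plan into an ordered list of subgoals."""
    kind, arg = leg
    if kind == "travel":        # e.g. ("travel", "end-of-corridor")
        return [("align-in-hallway",), ("watch-for-obstacles",),
                ("count-doorways", arg), ("update-origin", arg)]
    if kind == "turn":          # e.g. ("turn", "right")
        return [("turn-corner", arg), ("update-origin", arg)]
    if kind == "enter":         # e.g. ("enter", "third-room-on-left")
        return [("find-door", arg), ("enter-room", arg)]
    raise ValueError(f"unknown leg kind: {kind}")

route = [("travel", "end-of-corridor"), ("turn", "right"),
         ("enter", "third-room-on-left")]
subgoals = [g for leg in route for g in expand_leg(leg)]
```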
5 Discussion

The system as described here was implemented using the new SRI robot, Flakey, to accomplish much of the two scenarios described in the introduction. In particular, the robot managed to plan a path to the target room, maneuver its way out of the room in which it was stationed, and navigate to its destination via a variety of hallways, intersections, and corners. It maintained alignment in the hallways, avoided obstacles, and stopped whenever its path was completely blocked. If it noticed a jet malfunction on the space station (simulated by human interaction via the keyboard), it would interrupt whatever it was doing (route planning, navigating the hallways, etc.) and attend to diagnosing the problem. The diagnosis performed by the robot was quite complex and followed actual procedures used for NASA's space shuttle [7].

The features of PRS that, we believe, contributed most to this success were (1) its partial planning strategy, (2) its reactivity, (3) its use of procedural knowledge, and (4) its metalevel (reflective) capabilities. The partial hierarchical planning strategy and the reflective reasoning capabilities of PRS proved to be well suited to the robot application, yet still allowed the system to plan ahead when necessary. By finding and executing relevant procedures only when sufficient information was available, the system stood a better chance of achieving its goals under the stringent real-time constraints of the domain. For example, the method for determining the robot's course was dynamically influenced by the situation, such as whether the robot was between two hallway walls, adjacent to an open door, at a T-intersection, or passing an unknown obstacle.

Acknowledgments

Marcel Schoppers carried out the experiment described here. Pierre Bessiere, Joshua Singer, and Mabry Tyson helped in the development of PRS. Stan Reifel and Sandy Wells designed Flakey and its interfaces, and assisted with the implementation described herein. We have also benefited from our participation and interactions with members of CSLI's Rational Agency Group (RATAG), particularly Michael Bratman, Phil Cohen, Kurt Konolige, David Israel, and Martha Pollack. Leslie Pack Kaelbling, Stan Rosenschein, and Dave Wilkins also provided helpful advice and interesting comments.

References

[1] J. S. Albus. Brains, Behavior, and Robotics. McGraw-Hill, Peterborough, New Hampshire, 1981.
[2] R. A. Brooks. A Robust Layered Control System for a Mobile Robot. Technical Report 864, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1985.
[3] P. R. Davis and R. T. Chien. Using and reusing partial plans. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, page 494, Cambridge, Massachusetts, 1977.
[4] E. H. Durfee and V. R. Lesser. Incremental planning to control a blackboard-based problem solver. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 58-64, Philadelphia, Pennsylvania, 1986.
[5] R. E. Fikes and N. J. Nilsson. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971.
[6] M. P. Georgeff and A. L. Lansky. Procedural knowledge. Proceedings of the IEEE, Special Issue on Knowledge Representation, 74:1383-1398, 1986.
[7] M. P. Georgeff and A. L. Lansky. A System for Reasoning in Dynamic Domains: Fault Diagnosis on the Space Shuttle. Technical Note 375, Artificial Intelligence Center, SRI International, Menlo Park, California, 1986.
[8] M. P. Georgeff, A. L. Lansky, and M. Schoppers. Reasoning and Planning in Dynamic Domains: An Experiment with a Mobile Robot. Technical Note 380, Artificial Intelligence Center, SRI International, Menlo Park, California, 1987.
[9] L. P. Kaelbling. An architecture for intelligent reactive systems. In Reasoning about Actions and Plans: Proceedings of the 1986 Workshop, Morgan Kaufmann, Los Altos, California, 1987.
[10] N. J. Nilsson. Shakey the Robot. Technical Note 323, Artificial Intelligence Center, SRI International, Menlo Park, California, 1984.
[11] N. J. Nilsson. Triangle Tables: A Proposal for a Robot Programming Language. Technical Note 347, Artificial Intelligence Center, SRI International, Menlo Park, California, 1985.
[12] S. Vere. Planning in time: windows and durations for activities and goals. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(3):246-267, 1983.
[13] D. E. Wilkins. Domain independent planning: representation and plan generation. Artificial Intelligence, 22:269-301, 1984.
[14] D. E. Wilkins. Recovering from execution errors in SIPE. Computational Intelligence, 1:33-45, 1985.
VISUAL GRAMMARS FOR VISUAL LANGUAGES

Fred Lakin
Center for the Study of Language and Information, Stanford University
Center for Design Research, Stanford University
Rehabilitation R&D Center, Palo Alto Veterans Hospital
3801 Miranda Ave, Palo Alto, California 94304
ARPAnet: lakin@csli.stanford.edu

ABSTRACT

In modern user interfaces, graphics play an important role in the communication between human and computer. When a person employs text and graphic objects in communication, those objects have meaning under a system of interpretation, or "visual language." Formal visual languages are ones which have been explicitly designed to be syntactically and semantically unambiguous. The research described in this paper aims at spatially parsing expressions in formal visual languages to recover their underlying syntactic structure. Such "spatial parsing" allows a general purpose graphics editor to be used as a visual language interface, giving the user the freedom to first simply create some text and graphics, and later have the system process those objects under a particular system of interpretation. The task of spatial parsing can be simplified for the interface designer/implementer through the use of visual grammars. For each of the four formal visual languages described in this paper, there is a specifiable set of spatial arrangements of elements for well-formed visual expressions in that language. Visual Grammar Notation is a way to describe those sets of spatial arrangements; the context-free grammars expressed in this notation are not only visual, but also machine-readable, and are used directly to guide the parsing.

I. VISUAL LANGUAGES IN HUMAN/COMPUTER INTERACTION

When a person employs a text object in communication, that object has meaning under a system of interpretation, or "visual language." Visual languages can be used to communicate with computers, and are becoming an important kind of human/computer interaction. Phrases in a formal visual language can be used to direct searches in a data base [Odesta85]; construct simulations [Budge82]; provide communication for aphasics [Steele85]; or serve as expressions in a general purpose programming language [Sutherland65, Christianson69, Futrelle78, Lakin80c, Robinett81, Tanimoto82, Lanier84, Kim84, Glinert84]. Using a general purpose graphics editor as a visual language interface offers flexibility, but necessitates computer processing of visual languages. This paper describes the use of visual grammars in parsing phrases from visual languages. Both the visual grammars and the phrases were constructed in the vmacs graphics editor for the PAM graphics system. The grammars are machine-readable and are employed directly by the parser; examples of grammars and parsing for four different visual languages will be given.

II. DRAWBACKS OF SPECIAL PURPOSE VISUAL LANGUAGE INTERFACES

A visual language interface should provide the user with two capabilities: the agility to create and modify phrases in the visual language, and the processing power to interpret the phrase and take appropriate action. All of the interfaces currently available (to the author's knowledge) which allow creation and processing of visual objects employ some kind of special purpose editor which is syntax-driven. Such editors achieve graphical agility and interpretative power, but at the expense of generality. In lieu of understanding, these editors substitute restriction. From a practical point of view, they limit the user's freedom: he can't spontaneously arrange text and graphics in new ways, or add a piece of text to an object already defined as graphical, or edit the text in a pull-down menu, or create a new kind of diagram. From a theoretical point of view, such editors never deal with the general issues of understanding diagrams: the meaning has been built into the structures and procedures of the predefined object categories (footnote 1).

Footnote 1: A parallel can be drawn between special purpose, syntax-driven graphics editors and menu-driven so-called 'natural language' interfaces to data bases. The latter interfaces allow the user to construct a natural language query through choosing from a series of menus containing predefined natural language fragments. As each succeeding fragment is selected, it can be put immediately into its proper place in the final query because the offering of subsequent menus is guided by the logical form of a query. Parsing (and understanding) has been finessed. Compare this approach to a general purpose natural language front end such as LUNAR [Woods74] or TEAM [Martin83]. These "English understanding" systems are much more complex, but allow the user to type in query sentences that he or she makes up. The menu-driven approach has short-term practical advantages: it will run faster on cheaper computers; malformed sentences are not permitted so they don't have to be handled. On the other hand, the comprehensive approach used by the LUNAR and TEAM projects has long-term advantages: it gives the user freedom and tries to handle the sentence he was thinking of as opposed to forcing construction from predefined pieces; it can handle arbitrary embedding of phrases; and insofar as the projects are successful, general principles about computer understanding of natural language will be discovered.

III. SPECIAL PURPOSE UTILITY FROM A GENERAL PURPOSE EDITOR

A general purpose editor could be used to construct visual language phrases, giving the user more graphic freedom. But of course the deficiency of general purpose graphics editors is that although we can draw anything we want, there is no specialized help for drawing special purpose things (by definition). Added to this is the fact that when we're finished we can't do anything with the drawing. Spatial parsing offers a way to cure these deficiencies and obtain special purpose utility from a general purpose graphics editor. Spatial parsing recovers underlying syntactic structure so that a spatial arrangement of visual objects can be interpreted as a phrase in a particular visual language. Interpretation consists of parsing and then semantic processing so that appropriate action can be taken in response to the visual phrase. Appropriate action may include: assistance for agile manual manipulation of objects, compilation into an internal form representing the semantics, translation into another text-graphic language, or simply execution as an instruction to the computer.

Previous work [Lakin86a] has shown that recovering the underlying structure of the elements in the phrase is the more difficult part of the problem. Once a parse tree has been constructed, then semantic processing - at least for the formal visual languages considered in this paper - is relatively straightforward. Through spatial parsing the system can do semantic processing of visual phrases, and thus the user can have the advantages of employing a general purpose graphics editor as a visual language interface. The user simply creates some text and graphics, and
This differs from syntax-driven editors which force the user to select the semantic type of objects before they are created. By choosing the system of interpretation and when it is to be applied, the user gets flexi- bility. This is like the freedom in the separate but tightly coupled interaction between text editing and interpretation compilation in emacs-based LISP programming environments StallmanSl]. i Text is typed into the editor at timel, and then at time2 it is selected for interpretation/compilation “as” a LISP expres- sion. Such fre&om in a graphics editor allows different visual languages to be used in the same’ image, blackboard-style. IV. SPATIAL PARSING FOB VISUAL LANGUAGES The previous section introduced the notion of spatial parsing for visual 1,anguages and explained some advantages. This section will discuss such parsing in more d&ail as a user interface tech- nique. The following section then presents visual grammars as a way to accomplish spatial parsing. A. Definitions The purpose of spatial parsing is to aid in the processing of visual languages. As an operational definition of visual language, we say: A visual language is a set of spatial arrangements of text-graphie symbols with a semantic interpre- tation that is used in carrying out communicative actions in the worZd2. Spatial parsing deals with the spatial arrangement of the text-graphic symbols in a visual phrase from a visual language: Spatial parsing is the proiess of recovering the underlying syn- tactic structure of a visual communication object from its spatial arrangemeni?. B. Examples of Visual Languages Examples of communi- cation objects (or visual phrases) from five different visual lan- guages are shown in Figure 1 (images were constructed in the vmacs graphics editor, Section VIII). The first four communica- tion objects are from formal visual languages; the fifth object is a piece of informal conversational graphics to remind us of the theoretical context of this work. An expression in a simple Bar chart language is in the upper left corner of Figure 1. Feature Structures are a notation employing brackets of differing sizes Figure 1. terns. Visual communication objects from 5 different sys- he was thinking of’ as opposed to forcing construction from predefined pieces; it can handle arbitrary embedding of phrases; and insofar as the projects are successful, general principles about computer under- standing of natural language will be discovered. 2 Note that if we substitute “strings of textual symbols” for %patial arrangemknts of text-graphic symbols” we have something strikingly similar to a characterization of written natural language. Interestingly, the text-grapliic definition includes the textual one. A paragraph of text is one kind of arrangement of text-graphic symbols. 3 Again the definition is parallel to one for textual parsing: textual parsing is the (rule-governed) process of recovering the underlying syn- tactic structure from the linear form of a sentence. to encode information for a natural (textual) language expres- sion. The Visual Grammar Notation uses text and graphics to represent a context-free grammar for a visual language. SIB- TRAN is a fixed set of graphic devices which organize textual sentence fragments and provide additional semantic information. And the title block in the lower right corner is from an Informal graphic conversation discussed in Section VII. C. 
C. Parsing and Interpretation of Visual Phrases in the User Interface

The text-graphic objects representing parses for four of the visual communication objects are shown in Figure 2 (the 'spiderwebs' are a concise way of diagramming the tree structure recovered by parsing; the more traditional notation will be shown later). The four parses were all performed in the same image space by the same function: parse-appropriately tries spatial-parse with each grammar from a list until the parse succeeds (if no parse succeeds, then "unrecognized visual expression" is signaled). The claim is that parse-appropriately represents progress in building user interfaces for formal visual languages: we now have one general purpose function with a list of 'known' formal visual languages for which it can handle visual expressions. The elements for the expressions were created in a general purpose graphics editor, and then the spatial syntactic context of the elements was utilized in parsing them. After parsing a visual expression, the structure thus recovered is then used directly by a specialized semantic processor in taking appropriate communicative actions. Interpretations based on the parses in Figure 2 are shown in Figure 3. Again, the four interpretations were all performed in the same image space by a single interpretation function which calls parse-appropriately and selects the proper semantic processor based on the result of the parse.

Figure 3. Appropriate interpretations for different visual communication objects residing in the same image.

D. Spatial Parsing Versus Image Processing

Note that the visual communication objects under discussion are fundamentally symbolic objects, although the symbols are text-graphic. In spite of the fact that they are referred to as 'images,' they are very different from the raster images produced by a TV camera scanning a diagram on a blackboard (footnote 4). The entire lower level of interpretation called recognition has been finessed when a human constructs a visual phrase within the vmacs graphics editor. Figure 4 presents the taxonomy of visual objects available to the vmacs user. Text-graphic objects are either lines or patterns: lines are visual atoms (single drawlines or textlines); and patterns are groups of text-graphic objects. Since the user employs vmacs to construct phrases, they are input as drawn lines and pieces of text (i.e., atomic text-graphic symbols) (footnote 5) in spatial juxtaposition. Thus the focus of the research is on the recognition/parsing of the spatial arrangement of atomic visual elements serving as terminals in visual phrases. This is roughly analogous to language understanding work which begins with a typed-in sentence as input rather than starting with an aural image.

Figure 4. Taxonomy for graphic objects available in the vmacs graphics editor.

Footnote 4: Robert Futrelle has described an expert system under development which will parse X-Y plots in the form of digital binary images [Futrelle85].

Footnote 5: The point of spatial parsing is exactly that the user does not have to manually create higher-level pattern structures (even though vmacs offers that capability).

V. VISUAL GRAMMAR-DIRECTED PARSING OF VISUAL LANGUAGES

The problem with procedurally directed parsing is that knowledge about the syntax of each language is embedded implicitly in procedures, making it hard to understand and modify. In the current work, a spatial parser has been written that utilizes context-free grammars which are both visual and machine-readable. The parser takes two inputs: a region of image space and a visual grammar. The parser employs the grammar in recovering the structure for the elements of the graphic communication object lying within the region.
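Functionally, the arrangement of parse-appropriately over spatial-parse amounts to the following sketch (ours, in Python rather than the ZetaLISP of the actual system; spatial_parse is assumed to return a parse tree or None):

```python
# Illustrative sketch of parse-appropriately: hand the region to each
# known grammar's spatial parser in turn; the first successful parse wins.

class UnrecognizedVisualExpression(Exception):
    pass

def parse_appropriately(region, grammars, spatial_parse):
    """spatial_parse(region, grammar) is assumed to return a parse tree
    for the objects in the region, or None if the parse fails."""
    for grammar in grammars:          # the list of 'known' visual languages
        tree = spatial_parse(region, grammar)
        if tree is not None:
            return grammar, tree      # grammar that accounts for the region
    raise UnrecognizedVisualExpression("unrecognized visual expression")
```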
One advantage of a visual grammar is that it makes the syntactic features of the visual language it portrays explicit and obvious. Grammars also increase modularity - by parameterizing one parser with different grammars, it is easy to change the behavior of the parser to handle new visual languages.

To illustrate spatial parsing using visual grammars, consider the visually notated context-free grammar for the very simple family of bar charts shown in Figure 5. The input region to be parsed is marked by a dashed box. Two different views of the successful parse are shown: the concise spiderweb diagram, and the traditional parse tree with labeled nodes (which unfortunately violates the spatial integrity of the arrangement of input elements; the more concise spiderwebs will be used for the remaining examples). Once a visual expression has been parsed as a bar chart, then semantic processing is straightforward. The piece of text at the lower left of Figure 5 represents the interpretation of the bar chart as a table. Parsing also facilitates other kinds of processing; next to the table is a copy of the original bar chart which has been automatically 'prettified'.

Figure 5. A visual grammar for a very simple family of bar charts, the input region, and two views of the resulting parse tree; and then, based on the parse, textual interpretation and automatic 'prettifying'.

A. Visually Notated Context-Free Grammars

A context-free grammar is a 4-tuple consisting of a terminal vocabulary, a non-terminal vocabulary, a start symbol, and a set of productions (or rules). The terminal vocabulary is the set of all simple (atomic) visual objects that might be found in an expression to be parsed, and the non-terminal vocabulary is the set of symbols used to represent combinations of terminals and non-terminals. The start symbol is the non-terminal which is the name of the topmost entity being parsed (for instance, "S" for sentence). A production has a non-terminal on the left hand side, and a combination of terminals and non-terminals on the right hand side.

The first three parts of a context-free grammar have been implicitly expressed in the visual representation of the productions, which we call the Visual Grammar Notation. Thus in the grammar for bar charts (Figure 5), the start symbol is the symbol on the left hand side of the topmost rule, i.e., *bar-chart*. Terminals are visual literals appearing only on the right hand side, such as the bar and the horizontal line. Non-terminals are symbols that appear on the left hand side of rules, such as *bar-chart* and *bar-list*.

B. A Spatial Parser Which Uses Visual Grammars

The function spatial-parse takes two inputs: a region of space and a visual grammar. It then tries to parse whatever objects are in the region using the grammar. As used by spatial-parse, the rules or productions in a visual grammar are visual in two ways: first they are 'about' visual matters; and second they are themselves spatial objects whose spatiality is integral to their functioning as rules.
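For concreteness, the bar chart grammar of Figure 5 might be encoded as data along the following lines; the relation names ("above", "left-of") and the whole encoding are our assumptions, since in the actual system the rules are themselves text-graphic objects:

```python
# Assumed data encoding of the visually notated bar chart grammar.
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    lhs: str            # non-terminal being defined, e.g. "*bar-list*"
    arrangement: str    # required spatial relation among the constituents
    rhs: List[str]      # constituent non-terminals, literals, or predicates

BAR_CHART_GRAMMAR = [
    Rule("*bar-chart*", "above", ["*bar-list*", "horizontal-line"]),
    Rule("*bar-list*", "left-of", ["bar", "*bar-list*"]),   # recursive rule
    Rule("*bar-list*", "single", ["bar"]),                  # ends the recursion
]
```

The two same-named *bar-list* rules are tried in top-down order, which is exactly the mechanism for terminating recursion described next.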
Terminals are visual literals appearing only on the right hand side, such as the bar and the horizontal line. Non-terminals are symbols that appear on the left hand side of rules, such as *bar- chart* and *bar-list*. B. A Spatial Parser Which Uses Visual Grammars The function spatial-parse takes two inputs: a region of space and a visual grammar. It then tries to parse whatever objects are in the region using the grammar. As used by spatial-parse, the rules or productions in a visual grammar are visual in two ways: first they are ‘about’ visual matters; and second they are themselves spatial objects whose spatiality is integral to their functioning as rules. Lakin 685 into a pattern with its members in the proper tree structure (as described by the grammar). In the case of failure, the parser gives up unless there are any other rules of the same name as yet untried in the grammar, in which case it tries the next one in top-down order ? Having two rules with the same name per- mits termination of recursion, an important feature of context- free grammars. For example, because the second rule in the bar chart grammar defines the non-terminal *bar-list* recursively, the grammar can handle bar charts with an arbitrary number of bars. The recur ive rule keeps succeeding until there is just one bar left, in whi c!i case it fails and the simple rule is tried, which succeeds and terminates the parsing of the *bar-list*. VI. EXAMPLES OF SPATIAL PARSING USING VI- SUAL GRAMMARS A. Feature Structures (Directed Acyclic Graph Notation Used by Linguists) Visual language description: Feature Structures are a no- tation for directed acyclic graphs of a certain type used by lin- guists. The Feature Structure (FS) in Figure 6 encodes various kinds of syntactic information (barlevel, category, nezt) and se- mantic information (functor) for a natural language expression (further details on the meaning and function of FS’s may be found in [Shieber85]). Looking at the grammar for the FS notation, we see that compared to the grammar for bar charts it uses enclo- sures and is deeply recursive. The basic FS (*f-s*) is a pair of brackets surrounding an atribute-value pair list (*a-v-p-list *). An attribute-value pair list is either an attribute-value pair (*a- v-pair*) on top of an attribute-value pair list, or simply one attribute-value pair. :F-S-CRRHllRR* (opstIsl-pwrs Figure 6. Grammar, parse and interpretation for an expres- sion in Feature Structure notation. Action taken based on the parse: The parse tree from the FS notation is easily processed to produce an isomorphic LISP s- expression which is then used as input to a function that creates an internal structure of the kind employed by the PATR natural language understanding system at SRI [Shieber85]. B. SIBTRAN (Graphic Devices for Organizing Textual Sentence Fragments) Visual language description: David Sibbet is a San Fran- cisco based graphic designer who makes his living by writing and drawing on walls to help groups think. He is shown at work in Figure 7. As a first step in dealing with the richness of the infor- mal conversational graphics in Figure 7, a formal visual language was devised. This language, called SIBTRAN, formalizes a lim- ited subset of the visual imagery system developed by Sibbet. SIBTRAN is a fixed set of graphic devices to organize textual sentence fragments under a system which provides a layer of ad- ditional semantic information (beyond the meaning of the words in the fragments . 
B. SIBTRAN (Graphic Devices for Organizing Textual Sentence Fragments)

Visual language description: David Sibbet is a San Francisco based graphic designer who makes his living by writing and drawing on walls to help groups think. He is shown at work in Figure 7. As a first step in dealing with the richness of the informal conversational graphics in Figure 7, a formal visual language was devised. This language, called SIBTRAN, formalizes a limited subset of the visual imagery system developed by Sibbet. SIBTRAN is a fixed set of graphic devices to organize textual sentence fragments under a system which provides a layer of additional semantic information (beyond the meaning of the words in the fragments). A SIBTRAN expression consists of one or more of the graphic elements (hollow arrows, bullets or straight arrows) placed in specified spatial relationships with either pieces of text (sentence fragments of typically six words or less) or other SIBTRAN expressions. The visual grammar for SIBTRAN expressions is shown in Figure 8. The parse and interpretation (text translation derived from meanings used by Sibbet in his work) for a standard SIBTRAN expression were presented back in Figures 2 and 3. This leaves us free to show an extension to SIBTRAN in Figure 8: expressions from two other formal visual languages, Bar charts and Feature Structures, are defined in the grammar as proper SIBTRAN expressions. Thus Visual Grammar Notation allows us to express heterogeneous embedding, using SIBTRAN as a meta-level schema. Figure 8 shows the parse and interpretation for a mixed language SIBTRAN expression with a Feature Structure phrase to the right of the hollow arrow.

Figure 7. An example of informal conversational graphics. (The remaining content of this figure is not recoverable from the scan.)

Figure 8. Grammar, parse and interpretation for a mixed language SIBTRAN expression.

Action taken based on the parse: The SIBTRAN-assistant is a helpful interactive software module designed to facilitate graphic communication. The assistant uses the grammar in recognition and parsing. Its job is first to recognize when an arrangement of graphic objects is a SIBTRAN expression, and then to issue special prompts depending on the identity of the expression. The functioning of the SIBTRAN-assistant and its bearing on the conversational graphics problem (Section VII) is discussed in [Lakin86a,86b].

C. VISUAL GRAMMAR NOTATION

Visual language description: The Visual Grammar Notation is also a formal visual language. All of the visual grammars used in this paper are described by the grammar presented in Figure 9. Reading informally, an expression in Visual Grammar Notation is a piece of text with the visual literal "::=" on its right, and a list of rules below it. A list of rules is either a rule on top of a list of rules, or just a rule by itself. And a rule is a piece of text with a straight arrow to the right, and any visual object (drawn line, piece of text or group of visual objects) (footnote 7) to the right of that. On the left of Figure 9 is a region containing notation for a visual grammar (using the Visual Grammar Notation to describe itself) and on the right is the proper parse tree for the expression in that region.

Figure 9. Grammar, parse and interpretation for a visually notated grammar.

Footnote 7: Any non-atomic visual objects which serve as right hand sides for rules must be grouped manually by the linguist user when inputting a visual grammar (thus in Figure 9 the right hand sides in the input region to the parser already have spiderwebs). Machine grouping of these objects is very difficult because the tree structure of the right hand side is how the linguist specifies both the spatial search sequence for recognition of that kind of expression, and the tree structure to be returned in case of a successful parse.

Action taken based on the parse: Once a parse tree has been produced for an expression in Visual Grammar Notation, the grammar compiler can then take the tree and convert the grammar to the internal form which is used by the parser. The compiler returns the piece of text at the lower right of Figure 9, showing the name of the grammar and the rules in it. All of the parsing presented in this paper was done using grammars compiled in the above fashion.
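A grammar compiler of this kind might look as follows in outline (our sketch; the (label, children...) tree encoding is assumed):

```python
# Sketch of the grammar-compilation step: walk the parse tree of a Visual
# Grammar Notation expression and emit internal rules for the parser.

def compile_grammar(tree):
    tag, name, *rule_nodes = tree            # ("*grammar*", name, rules...)
    assert tag == "*grammar*"
    rules = []
    for _tag, lhs, rhs in rule_nodes:        # each ("*rule*", lhs, rhs)
        rules.append((lhs, rhs))
    return {"name": name, "rules": rules}

g = ("*grammar*", "*bar-chart-grammar*",
     ("*rule*", "*bar-chart*", ("*bar-list*", "horizontal-line")),
     ("*rule*", "*bar-list*", ("bar", "*bar-list*")),
     ("*rule*", "*bar-list*", ("bar",)))
compiled = compile_grammar(g)   # usable by a parser like the one sketched earlier
```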
VII. CONTEXT: UNDERSTANDING GRAPHICS

The overall goal of this research is effective computer participation in human graphic communication activity like that which takes place on blackboards. Blackboard activity is a kind of graphic conversation, involving the spontaneous generation and manipulation of text and graphics for the purpose of communication. Figure 7 shows a group participating in conversational graphics. The image in Figure 10-a is the final frame in the text-graphic performance of Figure 7. For purposes of study, that image was transcribed into text-graphic symbols using the vmacs graphics editor, Figure 10-b, becoming the corpus for further analysis [Lakin80a,86a]. As a general purpose editor, vmacs is a tool for exploring the rules used by humans to collect elementary visual objects into conceptual groups. One possible underlying grouping structure for the image from Figure 10-b is shown in Figure 10-c, and future work will attempt to recover these structures. In the meantime, since phrases from special purpose, formal visual languages are often embedded in the imagery of conversational graphics, parsing such languages in order to assist in their use is the immediate goal of the research. We expect that strategies and tools developed for processing visual communication objects in these languages can then be taken 'back' and applied to the more unrestricted, informal domain. Visual Grammar Notation is one such tool, useful for perspicuously describing the patterns of spatial arrangement in specialized subsets of visual communication activity.

Figure 10-a. Final frame in the conversational graphics performance depicted in Figure 7.

Figure 10-b. vmacs transcription of the image from Figure 10-a.

Figure 10-c. Underlying grouping structures for visual objects in Figure 10-b.

VIII. SOFTWARE FRAMEWORK

The basic software for the research is PAM, a LISP-based interactive graphics environment. PAM stands for PAttern Manipulation, and is an extension of LISP from computing with symbolic expressions to computing with text-graphic forms [Lakin80a,80c,83a]. The PAM graphics language/system provides tree structured graphic objects together with functions for manipulating them. vmacs, the graphics editor in the PAM environment [Lakin84a,86a,86b], is the means for arranging PAM's objects into visual language phrases. PAM functions can compute with visual objects created in vmacs - including both the visual objects representing the grammars and the elements in the visual communication objects to be parsed. The vmacs/PAM graphics system is implemented in ZetaLISP on a Symbolics 36xx.
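As a rough model of what "tree structured graphic objects" means here (ours, in Python rather than the system's ZetaLISP):

```python
# Rough model of PAM-style tree-structured text-graphic objects:
# lines are visual atoms, patterns are groups of text-graphic objects.
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class TextLine:                      # an atomic piece of text
    string: str
    position: Tuple[float, float]

@dataclass
class DrawLine:                      # an atomic drawn line
    points: List[Tuple[float, float]]

@dataclass
class Pattern:                       # a group: interior node of the tree
    members: List[Union["Pattern", TextLine, DrawLine]] = field(default_factory=list)

# A tiny bar-chart phrase: two bars grouped as a bar-list above a baseline.
chart = Pattern([
    Pattern([DrawLine([(0, 0), (0, 3)]), DrawLine([(2, 0), (2, 5)])]),
    DrawLine([(-1, 0), (3, 0)]),
])
```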
Use of grammars to analyze formal visual languages was investigated some time ago. Shi-Kuo Chang parsed 2-D mathematical expressions using a "picture-processing grammar" [Chang71]; however, the grammar itself was in a non-visual, tabular form and the rules were encoded manually. King Sun Fu extended 1-D string grammar notation with 2-D concatenation operators and graphic objects as the terminal symbols [Fu71]; a graphical grammar to analyze stylized sketches of houses was described, but apparently never implemented. Alan Mackworth has done interpretation of maps sketched freehand on a graphical data tablet; primary local cues are interpreted with the help of a grammar-like cue catalog [Mackworth83]. vmacs and PAM improve on these earlier researches because they support grammar notations which are both visual and machine-readable. That is, the linguist can directly input a perspicuous notation using vmacs. In addition, the visual language users employ vmacs to generate the images to be parsed.

X. CONCLUSION

Visual grammars can be useful in the spatial parsing of formal visual languages. Spatial parsing allows a general purpose graphics editor to be used as a visual language interface. This provides the user with the freedom to use different visual languages in the same image, blackboard-style. He or she can first simply create some text and graphics, and later have the system process those objects under a particular system of interpretation. The task of spatial parsing can be simplified for the interface designer/programmer through the use of visual grammars. For each of the formal visual languages described in this paper, there is a specifiable set of spatial arrangements of elements for well-formed visual expressions in that language. Visual Grammar Notation is a way to describe the spatial criteria (or rules) which distinguish those sets of spatial arrangements and the associated underlying structures; the context-free grammars expressed in this notation are not only visual, but also machine-readable, and are used directly to guide the parsing. Once a visual grammar has been written for a formal visual language, parsing can be accomplished. And once parsed, expressions can then be processed semantically and appropriate action taken. Visual grammars and semantic processing for four formal visual languages have been presented.

Understanding informal conversational graphics taking place in a general purpose graphics editor is the broader theoretical context for this work. Enroute to the overall goal of computer participation in conversational graphics (such as blackboard activity), we began with the parsing of special purpose visual languages. Not only are they simpler, but since they are often embedded in (and may have grown out of) general purpose graphics activity, lessons learned there will likely be applicable to the more difficult problem.

ACKNOWLEDGMENTS

The development of spatial parsing within vmacs has profited from contributions by Harlyn Baker, John Bear, Pascal Fua, Scott Kim, Larry Leifer, Mike Lowry, Paul Martin, Rob Myers, Alex Pentland, Lynn Quam, Fernando Pereira, Warren Robinett, Ted Selker, Stuart Shieber, Josh Singer, Richard Steele, Hans Uszkoreit, Mabry Tyson, and Machiel Van der Loos.

REFERENCES

[Budge82] Budge, William, "Pinball Construction Kit" software, BudgeCo, Berkeley, CA, 1982.
[Chang71] Chang, S.K., "Picture Processing Grammar and its Applications," INFORMATION SCIENCES, Vol. 3, 1971, pp. 121-148.
[Christianson69] Christianson, Carlos, and Henderson, Austin, "AMBIT-G," Lincoln Labs, 1969.
[Fu71] Fu, K.S., and Swain, P.H., "On Syntactic Pattern Recognition," SOFTWARE ENGINEERING, Vol. 2, ed. by J.T. Tou, Academic Press, 1971.
[Futrelle78] Futrelle, R.P. and Barta, G., "Towards the Design of an Intrinsically Graphical Language," SIGGRAPH '78 Proceedings, pages 28-32, August 1978.
[Futrelle85] Futrelle, R.P., "Towards Understanding Technical Documents and Graphics," IEEE/MITRE Conference on Expert Systems in Government, November 1985.
[Glinert84] Glinert, Ephraim P. and Tanimoto, Steven L., “PICT: An Interactive, Graphical Programming Environ- ment,,, COMPUTER, 1’7 (11):7-25, November 1984. [Kim841 Kim, Scott, “VIEWPOINT: A dissertation proposal towards an interdisciplinary PhD in Computers and Graphic Design,,, Stanford University, August 28, 1984. [Lakin80a] Lakin, Fred, “A Structure from Manipulation for Text-Graphic Objects,,, published in the proceedings of SIGGRAPH ‘80, Seattle, Washington, July, 1980. [Lakin80b] L k a in, Fred, “Diagramming a Project on the Electric Blackboard,,, video tape for SIGGRAPH ‘80, July 1980. [Lakin$Oc] Lakin, Fred, ‘Computing with Text-Graphic Forms,” published in the proceedings of the LISP Conference at Stanford University, August 1980. [Lakin83a] Lakin, Fred, “A Graphic Communication Environ- ment for the Cognitively Disabled,” published in the pro- ceedings of IEEE Spring Compcon ‘83, San Francisco, March 1983. [ Lakin83c] Lakin , Fred, “Measuring Text-Graphic Activity,” published in the proceedings of GRAPHICS INTER- FACE ‘83, Edmonton, Alberta, May 1983. [Lakin84a] Lakin, Fred, “Visual Communication Expert,,, Public Broadcast television segment on COMPUTER CHRON- ICLES KCSM, as part of show on Artificial Intelligence, March 22, 1984. [Lakin84b] Lakin, Fred, “A VISUAL COMMUNICATION LAB- ORATORY for the Study of Graphic Aids,” RRandD Merit Review Proposal, VACO, Rehab R&D Center, Palo Alto VA, February 1984. [Lakin86a] Lakin, Fred, “Spatial Parsing for Visual Languages,” chaoter in VISUAL LANGUAGES. edited bv Shi-Kuo Chang, Tadao Ichikawa, and Panos. A. Ligomenides, Plenum Press, 233 Spring Street, NY NY, 1986. [ Lakin86bl Lakin, Fred, “A Performing Medium for Working Group Graphics,” published in the proceedings of the CONFERENCE ON COMPUTER-SUPPORTED CO- OPERATIVE WORK, Austin, Texas, December 3-5, 1986. [Lanier84] Lanier, Jaron, Mandalla visual programming lan- guage as illustrated on the cover of SCIENTIFIC AMER- ICAN, special issue on computer languages, August 1984. [Mackworth83] Mackworth, Alan, “On Reading Sketch Maps,” Proceedings of IJCAI-77, Cambridge, Mass, August 1977. [Martin831 Martin, Paul, Appelt, Douglas, and Pereira, Fer- nando, ?I’ransportabilitv and Generality in a Natural Language Interface System,” PROC. 8TH INTERNA- TIONAL JOINT CONFERENCE ON ARTIFICIAL IN- TELLIGENCE, p 573, IJCAI, Karlsruhe, West-Germany, August 1983. [Odesta85] Odesta Company, “Helix” software, Northbrook, Illi- nois, 1985. [Robinett81] Robinett., Warren, ROCKY’S BOOTS visual cir- cuit programmmg video game, The Learning Company, 1981. [Shieber85] Shieber, S.M., “The Design of a Computer Language for Linguistic Information,” in PROCEEDINGS OF THE 22ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS. Universitv of ’ w Chicago, Chicago, Illinois, July, 1985. [StallmanBl] Stallman, Richard, “EMACS, the Extensible, Cus- tomizable Self-Documenting Display Editor,,, MIT AI Memo 519a, March 1981. [Sutherland651 Sutherland, Ivan E., ‘Computer Graphics: Ten Unsolved Problems,” Datamation, pages 22-27, May 1966. [Steele851 Steele, Richard, Illes, Judy, Weinrich, Michael, Lakin, Fred, “Towards Computer-Aided Visual Communication for Aphasics: Report of Studies,” Submitted to the Re- habilitation Engineering Society of North America 8th Annual Conference in Memphis, Tennessee, June 1985. [TanimotoBS] Tanimoto, Steven L. and Glinert, Ephraim P. 
"Programs Made of Pictures: Interactive Graphics Makes Programming Easy," Technical Report 82-03-03, Dept of Computer Science FR-35, University of Washington, Seattle, Washington 98195, March 1982.
[Woods74] Woods, William, Kaplan, R. M., and Nash-Webber, B., "The Lunar Sciences Natural Language Information System: Final Report," Report 3438, Bolt Beranek and Newman Inc., June 1972.
QUALITATIVE LANDMARK-BASED PATH PLANNING AND FOLLOWING

Tod S. Levitt, Daryl T. Lawton, David M. Chelberg, Philip C. Nelson
Advanced Decision Systems, Mountain View, California 94040

ABSTRACT

This paper develops a theory for path planning and following using visual landmark recognition for the representation of environmental locations. It encodes local perceptual knowledge in structures called viewframes and orientation regions. Rigorous representations of places as visual events are developed in a uniform framework that smoothly integrates a qualitative version of path planning with inference over traditional metric representations. Paths in the world are represented as sequences of sets of landmarks, viewframes, orientation boundary crossings, and other distinctive visual events. Approximate headings are computed between viewframes that have lines of sight to common landmarks. Orientation regions are range-free, topological descriptions of place that are rigorously abstracted from viewframes. They yield a coordinate-free model of visual landmark memory that can also be used for path planning and following. With this approach, a robot can opportunistically observe and execute visually cued "shortcuts".

1. INTRODUCTION

The questions that define the problems of path planning and following are: "Where am I?", "Where are other places relative to me?", and "How do I get to other places from here?". A robot that moves about the world must be able to compute answers to these questions. This paper is concerned with the structure and processing for robotic visual memory that yields visual path inference. The input data is assumed to be percepts extracted from imagery, and a database, i.e., memory, of models for visual recognition. A priori model and map data is only relevant insofar as it provides a basis for runtime recognition of observable events. This is distinguished from path traversability planning, where the guidance questions concern computing shortest distances between points under constraints of support of the ground or surrounding environment for the robotic vehicle.

Existing robot navigation techniques include triangulation [Matthies and Shafer, 1986], ranging sensors [Hebert and Kanade, 1986], auto-focus [Pentland, 1985], stereo techniques [Lucas and Kanade, 1984], dead reckoning, inertial navigation, geo-satellite location, correspondence of map data with the robot's location, and local obstacle avoidance techniques. These approaches tend to be brittle [Bajcsy et al., 1986], accumulate error [Smith and Cheeseman, 1985], are limited by the range of an active sensor, depend on accurate measurement of distance/direction perceived or traveled, and are non-perceptual, or only utilize weak perceptual models. Furthermore, these theories are largely concerned with the problem of measurement and do not centrally address issues of map or visual memory and the use of this memory for inference in vision-based path planning or following. Exceptions to this are the work of [Davis, 1986], [McDermott and Davis, 1984], and [Kuipers, 1977]. Davis addressed the problem of representation and assimilation of 2D geometric memory, but assumed an orthographic view of the world and did not consider navigation or guidance.
McDermott and Davis developed an ad hoc mixture of vector and topological based route planning, but assumed a map, rather than a vision derived world (in their assumptions of knowledge of boundaries, their shapes, and spatial relationships), had no formal theory relating the multiple levels of representation, and consequently did not derive or implement results about path execution. Kuipers developed qualitative techniques for path planning and following that were the inspiration for our approach. He assumed capability of landmark recognition, as we do, but relied on dead-reckoning and constraint to one-dimensional (road) networks to permit path planning and execution.

We develop representation and inference for relative geographic position information that: builds a memory of the environment the robot passes through; contains sufficient information to allow the robot to re-trace its paths; can be used to construct or update an a posteriori map of the geographic area the robot has passed through; and can utilize all available information, including that from runtime perceptual inferences and a priori map data, to perform path planning and following. The robust, qualitative properties and formal mathematical basis of the representation and inference processes presented herein are suggestive of the path planning and following behavior in animals and humans [Schone, 1984]. However, we make no claims of biological foundations for this approach.

2. TOPOLOGICAL LANDMARK NETWORK REPRESENTATIONS

A viewframe encodes the observable landmark information in a stationary panorama. To generate a viewframe, relative solid angles between distinguished points on landmarks are computed using a sensor-centered spherical coordinate system. We can pan from left to right, recognizing landmarks, L, storing the ...

A natural environmental representation based on viewframes, orientation regions, and LPB crossings, recorded while following a path, is given by a list of the ordered sequence of viewframes collected on the path, and another list of the set of landmarks observed on the path. For efficiency, the landmark list can be formed as a database that can be accessed based on spatial and/or visual proximity. When a new viewpath is added to the database of perceptual knowledge, additional links between viewpaths are constructed based on landmarks seen on both paths. Using coarse range estimates to common landmarks, viewframe headings are computed between viewframes on different viewpaths. This structure is pictured in Figure 7. Figure 7(a) shows two viewpaths, while Figure 7(b) shows the paths augmented with the additional links. It is this augmented visual memory over which path plans are generated prior to path execution.

[Figure 7: Visual Memory Linking; (a) two viewpaths, (b) the paths augmented with additional links]

The top level loop for landmark-based path planning and following is to: determine a destination-goal, compute and select a current heading, and execute the heading while building up an environmental representation. The destination-goals implement a recursive goal-decomposition approach to perceptual path planning.
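To make the augmented visual memory and the top-level planning loop concrete, here is a minimal sketch in Python. It is ours, not the authors': the names Viewframe, VisualMemory, add_viewpath, and plan_route are hypothetical, as are the unit cost on within-path links and the use of summed coarse ranges as cross-link costs.

    import heapq

    class Viewframe:
        """A stationary panorama: landmark id -> coarse range estimate."""
        def __init__(self, landmarks):
            self.landmarks = dict(landmarks)

    class VisualMemory:
        """Viewpaths linked into a graph through landmarks seen on both paths."""
        def __init__(self):
            self.frames = []      # all viewframes, in order of acquisition
            self.edges = {}       # frame index -> {neighbor index: traversal cost}
            self.seen_at = {}     # landmark id -> indices of frames that saw it

        def _link(self, i, j, cost):
            self.edges[i][j] = min(self.edges[i].get(j, cost), cost)
            self.edges[j][i] = self.edges[i][j]

        def add_viewpath(self, viewframes):
            prev = None
            for vf in viewframes:
                i = len(self.frames)
                self.frames.append(vf)
                self.edges[i] = {}
                if prev is not None:            # consecutive frames on one path
                    self._link(prev, i, 1.0)
                for lm, rng in vf.landmarks.items():
                    for j in self.seen_at.get(lm, []):
                        # cross-link: both frames have a line of sight to lm,
                        # so a viewframe heading can be computed between them
                        self._link(i, j, rng + self.frames[j].landmarks[lm])
                    self.seen_at.setdefault(lm, []).append(i)
                prev = i

        def plan_route(self, start, goal):
            """A* over the augmented memory graph (zero heuristic, i.e.
            Dijkstra, since the frames have no metric embedding)."""
            frontier, done = [(0.0, start, [start])], set()
            while frontier:
                g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path
                if node in done:
                    continue
                done.add(node)
                for nbr, c in self.edges[node].items():
                    if nbr not in done:
                        heapq.heappush(frontier, (g + c, nbr, path + [nbr]))
            return None

With a zero heuristic the A* search reduces to Dijkstra's algorithm, which reflects the fact that the coordinate-free memory graph supplies no admissible metric heuristic.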
The concept underlying the path planning/following strategies encoded in these rules is to mix the following approaches as knowledge is available or can be inferred:

- find landmarks in common between viewframes between point of origin and viewframe-destination and compute vector (i.e., direction and approximate range) headings between viewframes,
- locate and get on the correct side of LPB's specified in an orientation-destination, or
- associate visible and goal landmarks with map data and compute a metric heading between current location and goal.

Each of these strategies provably reaches its goal, up to the perceptual re-acquisition of landmarks and the traversability of intervening terrain.

    if viewframe goal landmarks visible -->
        compute viewframe-heading
        if at least one LPB has an incorrect orientation relative to our
          viewframe-destination-goal
        then follow heading for approximate distance estimated by the
          viewframe-heading
        else maintain heading by control-feedback path following on
          relative angles between landmarks
        build a new viewpath to destination goal, using the existing
          landmark list where possible

    if viewframe goal landmarks not visible and viewpaths exist -->
        make a viewframe of the currently visible region
        chain back through viewpaths until common landmarks are located
        chain forward through viewframes setting up intermediate
          destination-goals
        recursively execute viewframe headings to reach the destination
          goals corresponding to visible landmarks

    if viewframe goal landmarks not visible and no viewpath exists -->
        set goal to find a metric heading

We have implemented these rules with routines that use A* to plan an initial route to a destination based on data in visual memory. This route is executed using vision, with re-planning based on the currently perceived viewframe at each step. Figure 8(a) shows the plan over visual memory to move between two points. The executed route is shown in Figure 8(b). Notice how much smoother it is. Figure 8(c) shows an original plan, while 8(d) shows a dramatic re-plan based on observing a "short-cut" at runtime.

4. SUMMARY AND FUTURE WORK

A rigorous theory of qualitative, landmark-based path planning and following for a mobile robot has been developed. It is based upon a theory of representation of spatial relationships between visual events that smoothly integrates topological, interval-based, and metric information. The rule-based inference processes opportunistically plan and execute routes using visual memory and whatever data is currently available from visual recognition, range estimates, and a priori map or other metric data.

This document was prepared by Advanced Decision Systems (ADS) of Mountain View, California, under U.S. Government contract number DACA76-85-C-0005 for the U.S. Army Engineer Topographic Laboratories (ETL), Fort Belvoir, Virginia, and the Defense Advanced Research Projects Agency (DARPA), Arlington, Virginia. The authors wish to thank Angela Erickson for providing administration, coordination, and document preparation support.
[Figure 8: Path Planning and Following Results]

REFERENCES

[Bajcsy et al., 1986] - R. Bajcsy, E. Krotkov, and M. Mintz, "Models of Errors and Mistakes in Machine Perception", University of Pennsylvania, Computer and Info. Science Technical Report, MS-CIS-86-26, GRASP LAB 64, 1986.
[Davis, 1986] - E. Davis, "Representing and Acquiring Geographic Knowledge", Courant Institute of Mathematical Sciences, New York University, Morgan Kaufmann Publishers, Inc., 1986.
[Hebert and Kanade, 1986] - M. Hebert and T. Kanade, Proceedings Image Understanding Workshop, Miami Beach, Florida, December 9-10, 1985, pp. 224-
[Lucas and Kanade, 1984] - B. Lucas and T. Kanade, "Optical Navigation by the Method of Differences", Proceedings Image Understanding Workshop, New Orleans, Louisiana, October 3-4, 1984, pp. 272-281.
[Matthies and Shafer, 1986] - L. Matthies and S. Shafer, "Error Modelling in Stereo Navigation", Carnegie-Mellon University, Computer Science Department, Technical Report, CMU-CS-86-140, 1986.
[McDermott and Davis, 1984] - D. McDermott and E. Davis, "Planning Routes through Uncertain Territory", Artificial Intelligence - An International Journal, Vol. 22, No. 2, March 1984, pp. 107-156.
[Massey, 1967] - R. Massey, "Introduction to Algebraic Topology", Addison-Wesley, 1967.
[Pentland, 1985] - A. Pentland, "A New Sense for Depth of Field", Proceedings of the Ninth International Joint Conference on Artificial Intelligence, IJCAI-85, Los Angeles, California, August 18-23, 1985, pp. 988-994.
[Schone, 1984] - H. Schone, "Spatial Orientation - The Spatial Control of Behavior in Animals and Man", Princeton Series in Neurobiology and Behavior, R. Capranica, P. Marler, and N. Adler (Eds.), 1984.
[Smith and Cheeseman, 1985] - R. Smith and P. Cheeseman, "On the Representation and Estimation of Spatial Uncertainty", SRI International Robotics Laboratory Technical Paper, Grant ECS-8200615, September 1985.
[Forbus, 1984] - K.D. Forbus, "Qualitative Process Theory", Artificial Intelligence, Vol. 24, December 1984.
Insertions Using Geometric Analysis and Hybrid Force-Position Control with a PUMA 560

David R. Strip
Intelligent Machine Principles Division 1411
Sandia National Laboratories
P.O. Box 5800, Albuquerque, New Mexico 87185

Abstract

Automatic programming of insertions is an essential step in achieving a truly flexible manufacturing environment. We present techniques based on active compliance implemented with hybrid force-position control capable of inserting a wide variety of shaped pegs. These techniques provide a significant step towards an automatically programmed flexible manufacturing environment.

It will be necessary to reduce the programming difficulty of key tasks before robots can be conveniently used to perform assembly operations in truly flexible manufacturing operations. One of these critical operations is insertion, exemplified by the familiar "peg-in-the-hole" problem. Much has been written about solving the case of a chamfered round peg in a round hole. Little is known about solving this problem for more complex shapes, let alone threaded or bayonet insertions. In our work we have developed a general approach to oriented insertions that uses geometric properties of the object to control the behavior of a hybrid force-position controlled robot.

Mason introduced a model for position and force control for manipulators [Mason, 1981]. In this model the degrees of freedom of a manipulator are partitioned into orthogonal subspaces representing the force controlled and the position controlled motions of the manipulator. This model provides a concise means of describing complex tasks, although in some cases the description is difficult to interpret. Raibert and Craig implemented a controller based on Mason's model and performed some experiments within the capability of a two degree of freedom manipulator [Raibert and Craig, 1981]. In our work we have developed a means of hybrid force-position control for a PUMA 560 using the VAL II controller. Our technique allows six dimensional subspace partitioning into force and position controlled subspaces. The current implementation is restricted to subspace components being associated with the Cartesian axes of the tool frame. Our implementation extends Mason's model in that it provides a "guarded move" [Will, 1975] capability for both force and position constrained movements. In this paper we describe how we implemented hybrid force-position control and how we applied it to performing force-directed oriented insertions based on geometric constraints.

The relevant portion of the Sandia Intelligent Robotic Assembly System (SIRAS) is comprised of a PUMA 560 six degree of freedom manipulator equipped with an Astek (now Barry Wright Corp.) FS6-120A 6-axis force-torque sensing wrist, an unmodified Unimation VAL II controller, a PDP 11/73 arm monitor, and a DEC microVax II task control computer. All user interaction is through the microVax in SCHEME, a dialect of LISP. The microVax communicates with the PDP 11/73 monitor which handles all communications to and from the Unimation controller. The arm monitor also provides the interface between the force sensing wrist and the Unimation controller.

The VAL II language includes an ALTER mode in which the controller polls the ALTER port every 28 ms (the basic timing cycle of the controller) for a set of translational and rotational offsets for the tool frame from the nominal position dictated by the current movement command. This mode continues until an END ALTER command is received.
These offsets can be either cumulative or not, causing the manipulator to act as a dashpot or a spring, respectively. Our approach to hybrid force-position control was to implement a program on the arm monitor that calls the ALTER program on the Unimation controller and provides cumulative offsets to the ALTER port based on readings from the force sensing wrist and the parameters from the SCHEME command. The format of the SCHEME command is

    (MCOMPLY GAIN BIAS THRESHOLD CONSTRAINT)

where GAIN, BIAS, THRESHOLD, and CONSTRAINT are 1 x 6 vectors. The arm monitor interprets this command to mean "move for the next time interval at speed = force x gain + bias (where these terms are multiplied on a component by component basis, with one component for each translation and rotation about the tool frame axes). If the absolute value of any force component exceeds its threshold or if the absolute value of the cumulative movement exceeds the constraint, end alter and return a completion signal." In our implementation a 0 value for any component in the threshold or constraint vectors implies unlimited threshold or constraint.

[Figure 1: Stages in Insertion; a. Approach, b. Chamfer Crossing, c. One Point Contact, d. Two Point Contact]

This implementation provides a full six degrees of freedom of hybrid force-position control. It assumes, however, that forces and torques can only affect translations and rotations about the axis with which the force or torque is associated. A more general implementation in which the gain vector is replaced by a full 6 x 6 (accommodation) matrix would allow forces and torques to have effects off their natural axis. This more general implementation would allow solutions such as Starr's edge following, which was also based on the VAL ALTER command [Starr, 1986].

III. Application to the Peg in the Hole Problem

Whitney provides an analysis of the forces and torques encountered during the various phases of the insertion of a round peg into a chamfered hole [Whitney, 1982]. Whitney's analysis provides the means for establishing the design parameters of a remote center compliance (RCC), a device for providing passive compliance on an otherwise rigid manipulator. A program for performing the peg-in-the-hole task was written using the hybrid force-position control command described above. With the peg positioned above the hole (Figure 1a) by means of a vision system, the arm is given the command

    (MCOMPLY (1 1 1 .01 .01 .01) (0 0 10 0 0 0) (0 0 30 0 0 0) (0 0 0 0 0 0))

The force sensor and the tool frame origins are both translated to the center of the bottom of the peg. This set of gains and biases commands the peg to move at a nominal speed of 10 mm/sec in the positive z-direction only. On encountering the chamfer (Figure 1b) the sensor will see x- and y-forces (and in practice also small torques) that when multiplied by the gains will cause the peg to translate towards the center of the hole. As the peg drops into the hole (Figure 1c) any binding will create a z-force which, when multiplied by the gain, will slow the insertion rate while the associated torques about the x- and y-axes will cause the peg z-axis to tilt into alignment with the axis of the hole. When the peg hits the bottom the z-force should build up to -10 Newtons, which, when multiplied by the gain, would negate the bias and cause the arm to stop moving without exceeding the threshold.
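As a rough illustration of the semantics just described, the following sketch is our Python rendering of the MCOMPLY interpretation cycle; read_wrench and send_offsets are hypothetical stand-ins for the wrist and ALTER-port interfaces, which the paper does not spell out.

    CYCLE = 0.028  # the VAL II controller polls the ALTER port every 28 ms

    def mcomply(gain, bias, threshold, constraint, read_wrench, send_offsets):
        """Interpret (MCOMPLY GAIN BIAS THRESHOLD CONSTRAINT): each cycle,
        command speed = force * gain + bias, component by component (three
        translations, three rotations in the tool frame), stopping when a
        force exceeds its threshold or the cumulative offset exceeds its
        constraint.  A zero threshold or constraint component means
        'unlimited', as in the paper's implementation."""
        cumulative = [0.0] * 6
        while True:
            wrench = read_wrench()                 # forces and torques, 1 x 6
            if any(t and abs(f) > t for f, t in zip(wrench, threshold)):
                return "force threshold exceeded"
            speed = [f * g + b for f, g, b in zip(wrench, gain, bias)]
            step = [s * CYCLE for s in speed]
            cumulative = [c + d for c, d in zip(cumulative, step)]
            if any(k and abs(c) > k for c, k in zip(cumulative, constraint)):
                return "movement constraint exceeded"
            send_offsets(step)   # cumulative ALTER offsets: a dashpot, not a spring

    # The peg-in-hole call from the text, in this notation:
    # mcomply(gain=(1, 1, 1, .01, .01, .01), bias=(0, 0, 10, 0, 0, 0),
    #         threshold=(0, 0, 30, 0, 0, 0), constraint=(0, 0, 0, 0, 0, 0),
    #         read_wrench=wrist.read, send_offsets=alter_port.write)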
In a well-behaved system the program's stopping criterion would not be reached and the arm would appear to be "hung." Because the VAL II controller only samples the ALTER port every 28 ms and the force sensing wrist samples at 16 ms intervals, there can be a considerable lag between the time a force measurement is made and the time it impacts the arm movement. This time delay combined with the stiffness of the arm requires that the z-threshold in our demonstration be 30 Newtons, although a threshold of less than 10 Newtons would be required to achieve the stopping criterion in a well behaved system. This threshold value always succeeds in stopping the system, contrary to intuition, although it does occasionally allow the peg to "bounce" one or two times at the bottom of the hole when the z-velocity is such that it hits bottom with greater than 10 Newtons force, which causes a negative z-velocity, but less than the 30 Newtons stopping threshold. These characteristics of the implementation demonstrate the limitations of implementing force controlled manipulation using commercial parts linked by software. Because the manipulator controller and the force sensor are not synchronized and operate at different sampling rates, the time lag from sensing to action is a random variable. The stiffness of the arm and workpiece are such that unless one is willing to perform the task at extremely low rates of movement (under 1 mm per second z-travel), it is not possible to analytically develop the parameters for the MCOMPLY command given an analysis of the problem. An additional limitation in applying this force-position control technique is the inability to rotate the reference frame of the force sensor. While the reference frame can be translated to a new location, it cannot be rotated. Since the implementation is constrained to programs with independent effects on all the axes, an ability to rotate the reference frame would allow the solution of problems that can be represented by orthogonal force-position programs, but which are not aligned with the natural axes of the force sensor reference frame.

IV. Oriented Insertions

RCCs provide a practical means of performing insertions using a single robotic motion and without the use of precision jigs. It does not appear to be practical (or, in some cases, possible) to generalize the RCC design to allow insertion, in a single robotic motion, of unchamfered round pegs or pegs which are not round in cross section and therefore require orientation. A multi-stage strategy for performing oriented insertions was developed based on observation of human strategy for the same task. The underlying principle of using constraints imposed by the geometry of the object is shared with the approach used by Shariat, Coifeet, and Fournier to plan a strategy for an inaccurate, flexible robot [Shariat et al., 1985]. This strategy is based on the assumption that the objects being inserted are "large" in comparison to the scale of error in the vision and manipulator systems. If this does not hold, it would not be possible to determine orientation information about the object from the vision system and manipulation would require an entirely different approach.
The multi-stage strategy consists of three steps: approach, orientation, and insertion. In the approach step the "peg" is brought into contact with the block containing the hole. In making this approach the peg is oriented to match the orientation of the hole within the limits of the vision system and the manipulator. (In a factory environment these locations may be known through the use of jigs. The adaptability of the technique, however, would allow the use of fairly low precision (and thus low cost) jigs in contrast to traditional high tolerance jigging techniques.) In our laboratory this amounts to about 1/4 inch linear displacement and 4 or 5 degrees angular displacement. In bringing the parts into contact the peg is deliberately shifted to insure a "target point" of the object is over the hole. For an object like the isosceles triangle shown in Figure 2, this point is the corner with the sharpest angle. (There is more discussion of how to select this point later.) The object is then tilted into the hole as shown in Figure 2a. This ends the approach stage. The first stage of the insertion does not require compliance, active or passive, although force sensing may be used to simplify the programming of the approach since contact forces may be used to detect that the peg has contacted the block.

The orientation stage is broken down into two parts. During the first part the target point is driven towards its matching point in the hole. If the target point has been properly selected, active or passive compliance combined with the appropriate manipulator motion will move the point of the peg into the corner of the hole. The peg will rotate to approximately the correct orientation due to the torques on the peg from the contact with the side of the hole (Figure 2b). In the second part of the orientation step the peg is rotated about a line through the target point and perpendicular to the direction of travel into the corner. This rotation will return the peg to an approximately vertical position (Figure 2c). If this rotation is made compliantly with constant force maintained between the target point of the peg and the corner of the hole, the lower edge of the peg will (in general) meet the edges of the hole at an angle, introducing a torque on the peg that will further correct its orientation.

[Figure 2: Stages in Oriented Insertion (top view and side view, cut through center); a. Approach, b. Orientation - Stage 1, c. Orientation - Stage 2]

The insertion stage for orientable objects is the same as the final stage of insertion for round pegs as analyzed by Whitney and described in Section III [Whitney, 1982]. A program using this strategy was implemented in our laboratory using the hybrid force-position control technique described above. The use of hybrid force-position control instead of passive compliance encourages us to learn about the forces involved in the insertion process and leads to a more general understanding than we might get using passive compliance devices. In addition, the error of our vision system (particularly with respect to determining orientation angle) is greater than the travel limits of commercial RCCs known to us and therefore precluded their use without some form of active compliance. Figure 3 shows the variety of peg shapes successfully inserted with this insertion program. By using hybrid force-position control a single program can be used to perform insertion of a wide variety of shapes.
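Read as a script of compliant moves, the three stages might look as follows. This is a hypothetical Python sketch built on the mcomply routine sketched earlier; tilt and rotate_upright stand in for ordinary manipulator motions, and every numeric gain, bias, and threshold here is illustrative rather than taken from the paper.

    def oriented_insertion(mcomply, move_to_contact, tilt, rotate_upright):
        """The three stages as a script of guarded and compliant moves."""
        # Approach: pre-tilt so the target point is over the hole, then
        # descend until contact forces are detected (a guarded move).
        tilt(target_point_down=True)
        move_to_contact(max_force=5.0)
        # Orientation, part 1: drive the target point toward its corner with
        # lateral compliance; contact torques swing the peg into rough
        # alignment with the hole.
        mcomply(gain=(1, 1, 0, .01, .01, .01), bias=(3, 0, 0, 0, 0, 0),
                threshold=(10, 10, 0, 0, 0, 0), constraint=(0, 0, 0, 0, 0, 0))
        # Orientation, part 2: rotate upright about the line through the
        # target point, holding contact force so that edge contact corrects
        # the remaining misalignment.
        rotate_upright(hold_contact_force=True)
        # Insertion: identical to the final compliant descent for round pegs.
        mcomply(gain=(1, 1, 1, .01, .01, .01), bias=(0, 0, 10, 0, 0, 0),
                threshold=(0, 0, 30, 0, 0, 0), constraint=(0, 0, 0, 0, 0, 0))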
While conceptually similar to the approach in [Shariat et al., 1985], the use of hybrid force-position control (even for an inaccurate flexible robot) considerably simplifies the implementation of the insertion technique and allows an identical program to insert a variety of shapes. Hybrid force-position control also allows the programmer control over the forces exerted on the workpieces, which can be critical when manipulating fragile objects.

[Figure 3: Shapes Successfully Inserted]

V. Selecting the Target Point

In the section above we referred to a "target point" that was central to all the stages but did not explain how to select such a point. Humans have an intuitive understanding that allows them to select this point without conscious thought. In order to have our robot select these points, we need to understand their properties. The key role of the target point is to induce torques and forces on the peg from its contact with the edge of the hole. These torques and forces should be such that when the peg is rotated or translated to zero out these forces and torques as the target point approaches its corner in the hole, the orientation of the peg should move to alignment with the hole. For convex polyhedral objects the point with the smallest interior angle appears generally to be a good choice. A good target point for a convex object with a smooth boundary is the point with the smallest radius of curvature.

VI. Future Work

We have several directions in which we are taking these results. We would like to be able to automatically determine the "target point" and insertion strategy from a description of the object to be inserted. We want to be able to prove that a given technique is necessary and/or sufficient for performing insertions. Automated assembly will require the insertion of threaded and bayonet parts. Small parts (small relative to the scale of the manipulator and vision system accuracy) and parts with extremely tight tolerances will have to be handled. We are working in each of these areas to develop a complete capability for automated insertions.

In related research we are examining the role of active versus passive compliance and the requirements for controllers to provide hybrid force-position control. We are also examining the potential for fine control at the end of the manipulator to perform the small movements required in insertions, rather than relying on moving the entire manipulator. In this vein we have mounted a Salisbury hand on a PUMA 560 robot, operating the hand in hybrid force-position control using a controller built in our lab. We are developing a dextrous end-effector based on our experiences with the Salisbury hand that will provide for fine movements, but with reduced degrees of freedom relative to the Salisbury hand to simplify control.

Acknowledgements

I would like to acknowledge the efforts of Greg Starr, who developed the concept for hybrid force-position control using the VAL-II ALTER command and provided the first implementation, and Bill Davidson, who implemented the improved version of the hybrid force-position control used in our lab.

References

[Mason, 1981] M. T. Mason, "Compliance and Force Control for Computer Controlled Manipulators," IEEE Trans. Systems, Man, and Cybernetics, Vol. SMC-11, No. 6, June 1981.
[Raibert and Craig, 1981] M. H. Raibert and J. J. Craig, "Hybrid Force-Position Control of Manipulators," Trans. ASME J. Dynamic Systems, Measurement, and Control, Vol. 102, No. 2, June 1981.
[Shariat et al., 1985] B. Shariat, P. Coifeet, and A. Fournier, "A Strategy to Achieve an Assembly by Means of an Inaccurate, Flexible Robot," Computing Techniques for Robots, Chapman and Hall, N.Y., N.Y., 1985.
[Starr, 1986] G. P. Starr, "Edge Following with a PUMA 560 Manipulator Using Val-II," Proc. IEEE Int'l Conf. Robotics and Automation, San Francisco, CA., April 1986.
[Whitney, 1982] D. E. Whitney, "Quasi-Static Assembly of Compliantly Supported Rigid Parts," Trans. ASME J. Dynamic Systems, Measurement, and Control, Vol. 104, No. 1, March 1982.
[Will, 1975] P. M. Will and D. D. Grossman, "An Experimental System for Computer Controlled Mechanical Assembly," IEEE Trans. Comput., Vol. C-24, No. 9, Sept. 1975.
Bounds on translational and angular velocity components from first order derivatives of image flow

Muralidhara Subbarao
Department of Electrical Engineering, State University of New York, Stony Brook, NY 11794.

Abstract

A moving rigid object produces a moving image on the retina of an observer. It is shown that only the first order spatial derivatives of image motion are sufficient to determine (i) the maximum and minimum velocities of the object towards the observer, and (ii) the maximum and minimum angular velocities of the object along the direction of view. The second or higher order derivatives whose estimation is expensive and unreliable are not necessary. (The second order derivatives are necessary to determine the actual motion of the object; many researchers have worked on this problem.) These results are interpreted in the image domain in terms of three differential invariants of the image flow field: divergence, curl, and shear magnitude. In the world domain, the above results are interpreted in terms of the motion and local surface orientation of the object. In particular, the result that the maximum velocity of approach of an object can be determined from only the first order derivatives has a fundamental significance to both biological and machine vision systems. It implies that an organism (or a robot) can quickly respond to avoid collision with a moving object from only coarse information. This capability exists irrespective of the shape or motion of the object. The only restriction is that motion should be rigid.

1. Introduction

The relative motion of an observer with respect to an object produces a time-varying image on the observer's retina. This time-varying image contains valuable information about the three-dimensional (3D) shape and motion of the object. Recovering this information from the time-varying imagery is an important problem in computer vision. The time-variation of an image can be represented by an image velocity field or an image flow field. An image flow field is a two-dimensional velocity field defined over the eye's retina (or image plane in the case of a camera). The velocity at any point is the instantaneous velocity of the image element at that point. Some authors refer to image flow as optical flow. Methods for the computation of image flow from time-varying images have been proposed by Horn and Schunck (1980), Hildreth (1983), Waxman and Wohn (1985), and others. The problem of three-dimensional interpretation of image flow has been addressed by many researchers (Longuet-Higgins and Prazdny, 1980; Longuet-Higgins, 1984; Kanatani, 1985; Waxman and Ullman, 1985; Subbarao and Waxman, 1986; Waxman, Kamgar-Parsi, and Subbarao, 1986; Subbarao, 1986a,b,c). In all these approaches, up to second order derivatives of image flow are used to recover the three-dimensional shape and motion of objects. The reliable estimation of the second order derivatives requires significant computation and very high quality images in terms of both spatial and gray level resolution. The human eye is very likely capable of exploiting the second order derivatives, but the present day machine vision systems are far from it (Adiv, 1985; Waxman and Wohn, 1985; Wohn and Waxman, 1985). Thus the requirements of high quality images and computational power have been major obstacles to using the already known theoretical results of image flow analysis in actual machine vision systems.
Obtaining a complete description of the shape and motion of an object may require a knowledge of the second or even higher order image flow derivatives, but some very useful information can be inferred from only up to the first order derivatives. For a given spatial and gray level resolution of the images, up to first order image flow derivatives can be recovered significantly more robustly than the second and higher order derivatives (Waxman and Wohn, 1985; Wohn and Waxman, 1985). In this paper we show that the first order flow derivatives are sufficient to determine the bounds on: (i) the velocity of approach of an object towards the observer, and (ii) the angular velocity of the object along the direction of view. An interpretation of these two results is given in the image domain in terms of three differential invariants of the image flow field: divergence, curl, and shear magnitude. The boundary values of the translational and rotational velocities are related to these invariants by simple linear relations. The boundary values are also interpreted in the world domain in terms of the motion and local surface orientation of the object.

An object moving towards an observer could potentially collide and hurt the observer, or, in the case of a robot, damage the camera system. Therefore, in particular, the result that the maximum velocity of approach of an object can be determined from only the first order derivatives of image flow is of significance to both biological and machine vision systems. It implies that an organism (or a robot) can respond quickly to avoid collision with a moving object from only coarse information. This capability exists irrespective of the shape or motion of the object. The only restriction is that motion should be rigid.

For the special case where an observer is moving in a static environment, our results have an interesting consequence. (Examples of such a case are flying bees, birds, and helicopters.) In this case, by determining the bounds on the translational and angular velocities along some three mutually orthogonal viewing directions, bounds on the over all translational and rotational velocities of the observer can be determined from only first order image flow derivatives. The results in this paper are potentially useful for collision avoidance by a robot in a dynamic environment and for robot navigation. Interestingly, biological vision systems have been found to be very quick in responding to approaching objects. This has been called the "looming effect" (Schiff, Caviness, and Gibson, 1962). In the remaining part of this paper we derive the main results and give their interpretation in both the image domain and the world domain.

2. Basic equations and notation

A first approximation to the human eye is a pin-hole camera. For a global image flow analysis we suggest using a pin-hole camera with a spherical projection screen whose center is at the pin-hole or the focus. For this camera model, due to symmetry, the image flow analysis is identical at all points on the projection screen. However, here we do only a local analysis in a small field of view and in this field of view we consider the spherical screen to be approximated by a plane tangential to the spherical surface at the center of the field of view. The geometry of the screen is entirely a matter of convenience and does not affect our results.
Note that there is a one to one correspondence between an image on a curved screen such as a spherical screen and an image on a planar screen. In our analysis using a planar projection screen, note that the image flow being analyzed always corresponds to an object which is along a line normal to the image plane and passing through the focus. We call this line the line of sight or the optical axis or the direction of view.

The camera model is illustrated in Figure 1. The origin of a Cartesian coordinate system OXYZ forms the focus and the Z-axis is aligned with the optical axis. The image plane is assumed to be at unit distance from the origin perpendicular to the optical axis. The image coordinate system oxy on the image plane has its origin at (0,0,1) and is aligned such that the x and y axes are, respectively, parallel to the X and Y axes. Let the relative motion of the camera with respect to a rigid surface along the optical axis be described by translational velocity $(V_X, V_Y, V_Z)$ and rotational velocity $(\Omega_X, \Omega_Y, \Omega_Z)$ around the focus. Also, let $Z = f(X, Y)$ represent the surface along the optical axis. The surface is assumed to be smooth. Let $Z_x$, $Z_y$ be the slopes of the surface at $(X, Y) = (0, 0)$ with respect to the X and Y axes respectively. Due to the relative motion of the camera with respect to the surface, a two-dimensional image flow is created by the perspective image on the image plane. At any point $(x, y)$ on the image plane, let $u, v$ be the components of image velocity along the x and y axes respectively. For the situation described here, Longuet-Higgins and Prazdny (1980) have derived the equations relating the derivatives of $u, v$ at the image origin (up to second order) to the relative motion and shape of the surface. In these equations the translational velocity is always scaled by a quantity which cannot be determined. (This indeterminacy is due to the fact that absolute distance of objects cannot be determined using a monocular pin-hole camera. Therefore, a nearby object moving slowly and a distant object moving fast could both give rise to identical image flows.) The scaling factor is usually chosen such that the distance of the surface along the optical axis is unity. Let the translational velocity scaled by this quantity be $(V_x, V_y, V_z)$. At the image origin, let $(u_0, v_0)$ be the image velocity and $u_x, u_y, v_x, v_y$ be the partial derivatives of $u, v$ with respect to the indicated subscripts. The image velocity and its partial derivatives at the image origin describe the image flow in a small image region around the image origin. The following equations, originally derived by Longuet-Higgins and Prazdny (1980), represent the relation between the image flow and the shape and motion of the surface in a small field of view around the optical axis:

$$u_0 = -V_x - \Omega_y, \qquad v_0 = -V_y + \Omega_x, \tag{1a,b}$$
$$u_x = V_z + V_x Z_x, \qquad v_y = V_z + V_y Z_y, \tag{1c,d}$$
$$u_y = \Omega_z + V_x Z_y, \qquad v_x = -\Omega_z + V_y Z_x. \tag{1e,f}$$

Above we have six equations in eight unknowns, hence an under constrained system of equations. We need more information to get a sufficiently constrained system of equations (e.g. see Longuet-Higgins and Prazdny, 1980; Waxman, Kamgar-Parsi, and Subbarao, 1986; Subbarao, 1986c). However we shall see that we can obtain bounds on the velocity of approach $V_z$ and the angular velocity $\Omega_z$ along the direction of view from these equations.
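Equations (1a-f) are easy to exercise numerically. The following small forward model is our own illustration, not part of the paper; it simply maps a chosen scaled motion and surface slope to the six first order flow quantities.

    def flow_derivatives(V, Omega, Zx, Zy):
        """Equations (1a-f): image velocity and its first derivatives at the
        image origin, for scaled translation V = (Vx, Vy, Vz), rotation
        Omega = (Ox, Oy, Oz), and surface slopes Zx, Zy."""
        Vx, Vy, Vz = V
        Ox, Oy, Oz = Omega
        u0 = -Vx - Oy            # (1a)
        v0 = -Vy + Ox            # (1b)
        ux = Vz + Vx * Zx        # (1c)
        vy = Vz + Vy * Zy        # (1d)
        uy = Oz + Vx * Zy        # (1e)
        vx = -Oz + Vy * Zx       # (1f)
        return u0, v0, ux, uy, vx, vy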
3. Bounds on the velocity of approach

First we state and prove a theorem which will be used later to establish bounds on the velocity of approach.

Theorem 1: Suppose that translation parallel to the image plane is not zero and let $\tau$ and $\theta$ be such that

$$V_x = \tau \cos\theta \quad \text{and} \quad V_y = \tau \sin\theta \tag{2a,b}$$

for $-\pi/2 < \theta \le \pi/2$. (Note: $\tau$ is the signed magnitude of translation parallel to the image plane and $\theta$ is the direction of translation parallel to the image plane.) Then,

$$V_z = u_x \sin^2\theta + v_y \cos^2\theta - (u_y + v_x)\cos\theta \sin\theta. \tag{3}$$

Proof: From relations (1c-f) and (2a,b) we can get

$$u_y + v_x = \tau \cos\theta\, Z_y + \tau \sin\theta\, Z_x \quad \text{and} \quad u_x - v_y = \tau \cos\theta\, Z_x - \tau \sin\theta\, Z_y. \tag{4}$$

Solving for $Z_x$ and $Z_y$ from the above equations we get

$$Z_x = \frac{1}{\tau}\left\{ (u_y + v_x)\sin\theta + (u_x - v_y)\cos\theta \right\} \tag{5a}$$

and

$$Z_y = \frac{1}{\tau}\left\{ (u_y + v_x)\cos\theta - (u_x - v_y)\sin\theta \right\}. \tag{5b}$$

Now, from relations (1c), (2a), and (5a) we can get

$$V_z = u_x - (u_y + v_x)\cos\theta \sin\theta - (u_x - v_y)\cos^2\theta. \tag{6}$$

Or, using the identity $\sin^2\theta + \cos^2\theta = 1$,

$$V_z = u_x(\sin^2\theta + \cos^2\theta) - (u_y + v_x)\cos\theta \sin\theta - (u_x - v_y)\cos^2\theta. \tag{7}$$

Relation (3) can be obtained from the above relation. Notice that $V_z$, the velocity of approach along the direction of view, is given only in terms of $\theta$. Therefore it can be determined if $\theta$ is known. Also it can be used to establish upper and lower limits on $V_z$.

Theorem 2: The first order flow derivatives determine lower and upper bounds on the velocity of approach $V_z$ of a surface along the line of sight. The bounds are

$$V_z^{(max/min)} = \frac{u_x + v_y}{2} \pm \frac{1}{2}\sqrt{(u_y + v_x)^2 + (u_x - v_y)^2}. \tag{8}$$

Proof: By some trigonometric manipulation, expression (3) for $V_z$ can be written as

$$V_z = \frac{u_x + v_y}{2} - \frac{u_y + v_x}{2}\sin 2\theta - \frac{u_x - v_y}{2}\cos 2\theta. \tag{9}$$

Differentiating the right hand side above and equating the resulting expression to zero we can show that the $\theta$s corresponding to the extrema of $V_z$ are given by

$$\tan 2\theta = \frac{u_y + v_x}{u_x - v_y}. \tag{10}$$

From the above expression we have

$$\sin 2\theta = \frac{u_y + v_x}{\sqrt{(u_y + v_x)^2 + (u_x - v_y)^2}} \quad \text{and} \quad \cos 2\theta = \frac{u_x - v_y}{\sqrt{(u_y + v_x)^2 + (u_x - v_y)^2}} \quad \text{for } 0 < 2\theta \le 2\pi. \tag{11a,b}$$

Substituting for $\sin 2\theta$ and $\cos 2\theta$ from the above expressions in expression (9) we can get relation (8).
Note that all terms on the right hand side of relation (8) are only first order flow derivatives; no second or higher order derivatives are involved. Further, the above limits hold irrespective of the surface shape (except that the surface should be smooth, because the image flow has been assumed to be differentiable).

4. Bounds on the angular velocity along the direction of view

$\Omega_z$ is the angular velocity along the direction of view. By following steps similar to the previous section, it can be shown that

$$\Omega_z = u_y \sin^2\theta - v_x \cos^2\theta + (u_x - v_y)\cos\theta \sin\theta \tag{12}$$

and

$$\Omega_z^{(max/min)} = \frac{u_y - v_x}{2} \pm \frac{1}{2}\sqrt{(u_y + v_x)^2 + (u_x - v_y)^2}. \tag{13}$$

5. Interpretation of the bounds in the image domain

In order to interpret the bounds on $V_z$ and $\Omega_z$ we make the following observation. To a first order, the image velocity field in a small field of view around the direction of view can be described by

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} u_x & u_y \\ v_x & v_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \tag{14}$$

The above expression represents an affine transformation. In this expression, the vector $[u_0, v_0]^T$ gives the pure translation of the image region at the image origin; the 2x2 tensor on the right hand side is the velocity gradient tensor. This tensor can be expressed uniquely as the sum of a symmetric tensor and an anti-symmetric tensor as below:

$$\begin{bmatrix} u_x & u_y \\ v_x & v_y \end{bmatrix} = \begin{bmatrix} u_x & (u_y + v_x)/2 \\ (u_y + v_x)/2 & v_y \end{bmatrix} + \begin{bmatrix} 0 & (u_y - v_x)/2 \\ -(u_y - v_x)/2 & 0 \end{bmatrix}. \tag{15}$$

In Fluid Mechanics literature (e.g. Aris, 1962), the symmetric tensor of a velocity gradient tensor is called the deformation or rate of strain tensor and the anti-symmetric tensor is called the spin tensor. These tensors have nice physical interpretations. We will borrow these well known ideas from Fluid Mechanics to interpret our results. Such an interpretation of image flow has already been described by many others in the computer vision area (Koenderink and Van Doorn, 1975, 1976; Waxman and Ullman, 1985; Kanatani, 1986).

The independent parameter $u_y - v_x$ of the spin tensor is called the spin or vorticity. It is also the negative curl of the image velocity field at the image origin, i.e.

$$-\text{curl} = u_y - v_x. \tag{16}$$

This can be easily verified from relation (14). It gives the rigid body rotation of the image neighborhood at the image origin. By setting all terms except the curl term to zero, i.e.

$$u_0 = v_0 = (v_x + u_y) = u_x = v_y = 0, \tag{17}$$

we can obtain the image flow field corresponding to this term. The term results in a purely rotational flow field.

The deformation tensor gives the deformation of the image neighborhood at the image origin. We can interpret this tensor in terms of its eigen values. The two eigen values of this tensor are in fact $V_z^{max}$, $V_z^{min}$, given by relation (8). The sum of the eigen values (which is also the trace of the original tensor) is the divergence of the image velocity field at the image origin, i.e.

$$\text{divergence} = u_x + v_y. \tag{18}$$

This can be easily verified from relation (14). This quantity gives the isotropic expansion or contraction of the image neighborhood at the image origin. The image flow corresponding to the divergence term is obtained by setting other terms to zero, i.e.

$$u_0 = v_0 = u_y = v_x = (u_x - v_y) = 0. \tag{19}$$

The result is a purely divergent flow.

The difference of the two eigen values of the deformation tensor is the magnitude of pure shear of the image neighborhood at the image origin, i.e.,

$$\text{Shear magnitude} = \sqrt{(u_y + v_x)^2 + (u_x - v_y)^2}. \tag{20}$$

The image neighborhood undergoes a contraction along one direction and an expansion orthogonal to it under constant area. The directions of contraction and expansion are aligned with the two eigen vectors of the deformation tensor. The image flow corresponding to a pure shear transformation is obtained by setting all but the shear terms to zero, i.e.,

$$u_0 = v_0 = u_x + v_y = u_y - v_x = 0. \tag{21}$$

An example of a pure shear flow is shown in Figure 2.

In summary, a small circular image element at the image origin translates rigidly with velocity $[u_0, v_0]^T$, rotates as a rigid area with spin $u_y - v_x$, dilates according to the sum of the eigen values of the deformation tensor, and undergoes a stretch and compression at constant area according to the difference of the eigen values of the deformation tensor (along mutually orthogonal axes aligned with the eigen vectors) (Koenderink and Van Doorn, 1975, 1976; Waxman and Wohn, 1986).

In view of our above discussion and equations (16,18,20), equations (8,13) which give bounds on $V_z$ and $\Omega_z$ can be expressed as below.

$$\text{Maximum/Minimum approach velocity} = \frac{1}{2}\,(\text{Divergence} \pm \text{Shear magnitude}). \tag{22}$$

$$\text{Maximum/Minimum angular velocity around the viewing direction} = \frac{1}{2}\,(-\text{Curl} \pm \text{Shear magnitude}). \tag{23}$$

The quantities divergence, curl, and shear magnitude are all invariant with respect to the orientation of the image axes. Their values are unaffected by a rotation of the image coordinate system. This can be easily shown by considering how the image flow derivatives $u_x, u_y, v_x, v_y$ are transformed by a rotation of the image coordinate system (e.g. see Kanatani, 1986). Hence they are called differential invariants of image flow.

6. Interpretation in the world domain

Let us now interpret what the bounds mean in the world domain. For this sake we introduce two vectors, $\tau$ which is the direction of translation parallel to the image plane, and $\rho$ which is the gradient of the object's surface with respect to the image plane. More specifically, if $\hat{i}$, $\hat{j}$ are unit vectors along the X, Y axes respectively, then, let

$$\tau = \hat{i}\,V_x + \hat{j}\,V_y, \quad \text{and} \quad \rho = \hat{i}\,Z_x + \hat{j}\,Z_y. \tag{24a,b}$$

Now, from equations (1c,d,18,24a,b) we can show that

$$\text{divergence} = 2 V_z + \tau \cdot \rho. \tag{25a}$$

Let $\hat{k}$ be a unit vector along the Z axis. Then, from equations (1e,f,16,24a,b) we can show that

$$-\text{curl} = 2\Omega_z + (\tau \times \rho) \cdot \hat{k}. \tag{25b}$$

Also, from equations (1c-f,20,24a,b) we can show that

$$\text{Shear magnitude} = |\tau|\,|\rho|. \tag{25c}$$

The above relations (25a-c) show how the differential invariants of image flow are related to the three dimensional motion and surface orientation. Some of the terms in these relations are in agreement with our intuition, for example the appearance of $V_z$ in divergence and $\Omega_z$ in curl.
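Relations (16), (18), (20) and the boxed relations (22) and (23) amount to a few lines of arithmetic. The sketch below is ours: it computes the three invariants and the resulting bounds, and the closing comment records a spot check against the forward model given after equations (1a-f).

    import math

    def bounds_from_flow(ux, uy, vx, vy):
        """Equations (16), (18), (20), (22), (23): the three differential
        invariants and the bounds on Vz and Omega_z, from first order flow
        derivatives alone."""
        divergence = ux + vy
        neg_curl = uy - vx
        shear = math.hypot(uy + vx, ux - vy)
        vz_bounds = ((divergence + shear) / 2, (divergence - shear) / 2)
        oz_bounds = ((neg_curl + shear) / 2, (neg_curl - shear) / 2)
        return divergence, neg_curl, shear, vz_bounds, oz_bounds

    # Spot check: with V = (1, .5, .3), Omega = (0, 0, .2), Zx = .4, Zy = -.1
    # in the forward model, the true Vz = 0.3 lies inside vz_bounds of about
    # (0.71, 0.24), and the true Omega_z = 0.2 lies inside oz_bounds of about
    # (0.28, -0.18), as the theorems require.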
Now, from equations (22,23,25a-c) we can show that

$$V_z^{(max/min)} = V_z + \frac{1}{2}\left( \tau \cdot \rho \pm |\tau|\,|\rho| \right) \tag{26a}$$

and

$$\Omega_z^{(max/min)} = \Omega_z + \frac{1}{2}\left( (\tau \times \rho) \cdot \hat{k} \pm |\tau|\,|\rho| \right). \tag{26b}$$

The above relations show how the bounds are related to the translation parallel to the image plane $\tau$ and the surface gradient with respect to the image plane. We are not able to give a straightforward physical interpretation of the above two relations, but they seem to have a pleasing form. We believe that an interpretation of these equations is related to the discussion in Koenderink and Van Doorn (1975) about the different types of image flows generated depending on the eigen values of the velocity gradient tensor.

7. Conclusion

We have shown that using only the first order derivatives of the image flow of an object, a monocular observer can determine the bounds on (i) the translational velocity of the object towards the observer, and (ii) the angular velocity of the object in the direction of its position with respect to the observer. These bounds are
Regularization uses Fractal Priors

Richard Szeliski
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

Many of the processing tasks arising in early vision involve the solution of ill-posed inverse problems. Two techniques that are often used to solve these inverse problems are regularization and Bayesian modeling. Regularization is used to find a solution that both fits the data and is also sufficiently smooth. Bayesian modeling uses a statistical prior model of the field being estimated to determine an optimal solution. One convenient way of specifying the prior model is to associate an energy function with each possible solution, and to use a Boltzmann distribution to relate the solution energy to its probability. This paper shows that regularization is an example of Bayesian modeling, and that using the regularization energy function for the surface interpolation problem results in a prior model that is fractal (self-affine over a range of scales). We derive an algorithm for generating typical (fractal) estimates from the posterior distribution. We also show how this algorithm can be used to estimate the uncertainty associated with a regularized solution, and how this uncertainty can be used at later stages of processing.

I. Introduction

Much of the processing that occurs in the early stages of vision deals with the solution of inverse problems [Horn, 1977]. The physics of image formation confounds many different phenomena such as lighting, surface reflectivity, surface geometry and projective geometry. Early visual processing attempts to recover some or all of these features from the sampled image array by making assumptions about the world being seen. For example, when solving the surface interpolation problem, i.e. the determination of a dense depth map from a sparse set of depth points (such as those provided by stereo matching), the assumption is made that surfaces vary smoothly in depth (except at object or part boundaries).

The inverse problems arising in early vision are generally ill-posed [Poggio and Torre, 1984], i.e. the data insufficiently constrains the desired solution. One approach to this problem, called regularization, imposes additional constraints in the form of smoothness assumptions. Another approach, Bayesian modeling [Geman and Geman, 1984], assumes a prior statistical distribution on the data being estimated, and models the image and sensing phenomena as stochastic (noisy) processes. Regularization can be viewed as a type of Bayesian modeling where the prior model is a Boltzmann distribution using the same energy function as the regularization. This paper shows that the average or most likely (optimal) estimate from the resulting posterior distribution is the same as the regularized solution. However, a typical sample from the posterior distribution is fractal, i.e. it exhibits self-similarity (and roughness) over a large range of scales [Pentland, 1984]. The fractal nature of the posterior distribution can be used to generate "realistic" fractal scenes with local control over elevation, discontinuities (either in depth or orientation) and fractal statistics. This paper presents a new algorithm for generating a sample from this distribution. This algorithm is a multigrid version of the Gibbs Sampler that is normally used for solving optimization problems whose energy function has many local minima [Szeliski, 1986]. We show that by using this algorithm we can also estimate the uncertainty associated with a regularized solution, for example by calculating the covariance matrix of the posterior distribution. The resulting error model can be used at later stages of processing along with the optimal estimate.

The remainder of this paper is structured as follows. Section II. reviews regularization techniques and shows an example of their application to the surface interpolation problem. Section III. discusses the application of Bayesian modeling to the solution of ill-posed problems, and shows that models that are Markov Random Fields can be specified by the choice of energy functions. Section IV. analyses the effects of regularization in the frequency domain, and derives the spectral characteristics of the Markov Random Fields that use the same energy functions. Section V. introduces fractal processes, and shows that the Markov Random Fields previously introduced are actually fractal. Section VI. gives a new algorithm for generating these fractals using multi-grid stochastic relaxation. Section VII. shows how this algorithm can be used to estimate the uncertainty inherent in regularized solutions. Section VIII. concludes with a discussion of possible applications of the results presented in this paper.

II. Regularization

Regularization is a mathematical technique used to solve ill-posed problems that imposes smoothness constraints on possible solutions [Tikhonov and Arsenin, 1977]. Given a set of data d from which we wish to recover the solution u, we define an energy function $E_d(u, d)$ which measures the compatibility between the solution and the sampled data. We then add a stabilizing function $E_p(u)$ which embodies the desired smoothness constraint, and find the solution $u^*$ that minimizes
We show that by using this algorithm we can also estimate the uncertainty associated with the regularized solution, for example by calculating the covariance matrix of the posterior distribution. The resulting error model can be used at later stages of processing along with the optimal estimate. The remainder of this paper is structured as follows. Sec- tion II. reviews reguhuization techniques and shows an exam- ple of their application to the surface interpolation problem. Section III. discusses the application of Bayesian modeling to the solution of ill-posed problems, and shows that mod- els that are Markov Random Fields can be specified by the choice of energy functions. Section IV. analyses the effects of regularization in the frequency domain, and derives the spec- tral characteristics of the Markov Random Fields that use the same energy functions. Section V. introduces fractal processes, and shows that the Markov Random Fields previously intro- duced are actually fractal. Section VI. gives a new algorithm for generating these fractals using multi-grid stochastic relax- ation. Section VII. shows how this algorithm can be used to estimate the uncertainty inherent in regularized solutions. Sec- tion VIII. concludes with a discussion of possible applications of the results presented in this paper. 0 ria Regularization is a mathematical technique used to solve ill- posed problems that imposes smoothness constraints on pos- sible solutions[Tilchonov and Arsenin, 19771. Given a set of data d from which we wish to recover the solution u, we define an energy function I?& d) which measures the com- patibility between the solution and the sampled data. We then add a stabilizing function E,(U) which embodies the desired smoothness constraint, and find the solution U* that minimizes Szeliski 749 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. Figure 1: Sample data points the total energy E(u) = Ed@, d) + XE,(u) (1) The regularization parameter X controls the amount of smooth- ing performed. In general, the data term d and solution u can be vectors, fields (two-dimensional arrays of data such as im- ages or depth maps), or analytic functions (in which case the energy is a functional). For the surface interpolation problem, the data is usually a sparse set of points { di}, and the desired solution is a two- dimensional function U(X, y). The data compatibility term can be written as a weighted sum of squares EI(u, d) = ; C WiCub, Yd - &I2 (2) Two examples of possible smoothness functionals are the mem- brane model [Terzopoulos, 19841 Ep(u) = ; JJ <u:+4) hdr (3) which is a small deflection approximation of the surface area, and the thin plate model Ep(u) = ; JJ (2&+2z&+4 dxdy which is a small deflection approximation of the surface curva- ture (note that here the subscripts indicate partial derivatives). These two models can be combined into a single functional by using additional “rigidity” and “tension” functions, in order to introduce depth or orientation discontinuities perzopoulos, 19861. As an example of a controlled-continuity regularizer, con- sider the nine data points shown in Figure 1. The regularized solution using a thin plate model is shown in Figure 2. Note that a depth discontinuity has been introduced along the left edge, an orientation discontinuity along the right, and that the regularized solution is very smooth away from these disconti- nuities. 
The above stabilizer E’(u) is an example of the more general controlled-continuity constraint Figure 2: Regularized (thin plate) solution where x is the (multi-dimensional) domain of the function u. This general formulation will be used in Section IV. to derive the spectral (frequency domain) characteristics of the stabilizer. III. ayesia The Bayesian modeling approach uses an Q priori distribu- tion p(m) on the data being estimated, and a stochastic process p(d(u) relating the sampled data (input image) to the original data. According to Bayes’ Rule, we have p(uld) = P(dlulP(u) p(d) (6) In its usual application [Geman and Geman, 19841, Bayesian modeling is used to find the Maximum A Posteriori (MAP) estimate, i.e. the value of u which maximizes the conditional probability p(uld). In the more general case, the optimal es- timator u* is the value that minimizes the expected value of a loss function L(u, u*) with respect to this conditional prob- ability. Recently, Bayesian models that use Markov Random Fields have been used to solve ill-posed problems such as im- age restoration [Geman and Geman, 19841 and stereo matching [Szeliski, 19861. A Markov Random Field (MRF) is a distribu- tion where the probability of any one variable Ui is dependent on only a few neighbors, POJilU) =P<Uil{Uj}), j E Ni (7) In this case, the joint probability distribution p(u) can be writ- ten as a Boltzmann (or Gibbs) distribution P(U) 0~ exp [-E,(NI~] (8) where T is called the “temperature”. The “energy function” Ep(u) can be written as a sum of local clique energies where each clique energy EC(u) depends only on a few neigh- bors. Typically, the clique energy characterizes the local vio- lation of the prior model or smoothness constraint. The random vector IU is sampled by a sensor which pro- duces a data vector d. We will model the measurement process 750 Vision as having additive (multivariate) Gaussian noise p(dlN QC exp ’ (w - d)rA(u - d) -2 1 = exp [-Ed@, d)] (10) From Bayes rule, we have P(+o = PWp(dl N p(d) 0~ exp bW)l where E(u) = &&O/T + Ed@, f& (12) so that the posterior distribution is itself a Markov Random Field. Thus MAP estimation is equivalent to finding the mini- mum energy state. This shows that regularization is an exam- ple of the more general MRF approach to optimal estimation. The smoothing term (stabilizer) I?&(W) corresponds to the a priori distribution, and the data compatibility term Ed(u, d) corresponds to the measurement process. While Bayesian modeling has previously been used in computer vision to find an optimal estimate, it has not been used to generate an error model. We propose to estimate ad- ditional (second order) statistics using this model, and to use these additional statistics at later stages of processing. For ex- ample, we can use these statistics when matching for object recognition or pose detection, or to optimally integrate new knowledge or measurements (by using Kalman filtering [Smith and Cheeseman, 19851). We present a method for calculating these statistics in Section VII.. By taking a Fourier transform of the function u(x) and ex- pressing the energy equations in the frequency domain, we can analyse the filtering behaviour of regularization and the spec- tral characteristics of the prior model. To simplify the analysis, we will set the weighting function w,(x) used in Equation 5 to a constant. 
While this analysis does not strictly apply to the general case, it provides an approximation to the local behaviour of the regularized system away from boundaries and discontinuities.

The Fourier transform [Bracewell, 1978] of a multidimensional signal h(x) is defined by

$$\mathcal{F}\{h\} \equiv \int h(\mathbf{x}) \exp(2\pi i\, \mathbf{f} \cdot \mathbf{x})\, d\mathbf{x} = H(\mathbf{f}) \qquad (13)$$

and the transform of its partial derivative is given by

$$\mathcal{F}\{h_{x_k}\} = (2\pi i f_k)\, H(\mathbf{f}) \qquad (14)$$

By using Parseval's theorem

$$\int |h(\mathbf{x})|^2 d\mathbf{x} = \int |H(\mathbf{f})|^2 d\mathbf{f} \qquad (15)$$

we can derive the smoothness functional $E_p$ in terms of the Fourier transform $U(\mathbf{f}) = \mathcal{F}\{u\}$. The notation $E_p(U)$ denotes the energy associated with a signal U, which is derived from the original definition of $E_p(u)$ (in this case by using a Fourier transform). Applying Equations 14 and 15 to Equation 5, we obtain

$$E_p(U) = \frac{1}{2} \int |G(\mathbf{f})|^2 |U(\mathbf{f})|^2 d\mathbf{f} \qquad (16)$$

where

$$|G(\mathbf{f})|^2 = \sum_m w_m |2\pi \mathbf{f}|^{2m} \qquad (17)$$

For example, the membrane interpolator has $|G(\mathbf{f})|^2 \propto |2\pi \mathbf{f}|^2$ and the thin plate model has $|G(\mathbf{f})|^2 \propto |2\pi \mathbf{f}|^4$.

Since the Fourier transform is a linear operation, if u(x) is Boltzmann distributed with energy $E_p(u)$, then U(f) is also Boltzmann distributed with energy $E_p(U)$. Thus we have

$$p(U) \propto \exp\left[ -\frac{1}{2T} \int |G(\mathbf{f})|^2 |U(\mathbf{f})|^2 d\mathbf{f} \right] \qquad (18)$$

from which we see that the probability distribution at any frequency f is

$$p(U(\mathbf{f})) \propto \exp\left[ -\frac{1}{2T} |G(\mathbf{f})|^2 |U(\mathbf{f})|^2 \right] \qquad (19)$$

Thus, U(f) is a random Gaussian variable with variance proportional to $|G(\mathbf{f})|^{-2}$, and the signal u(x) is correlated Gaussian noise with a spectral distribution

$$S(\mathbf{f}) = |G(\mathbf{f})|^{-2} \qquad (20)$$

We can also use the same Fourier analysis techniques to determine the frequency response of regularization viewed as linear filtering. The result of this analysis (see [Szeliski, 1987] for details) is that the effective smoothing filter has a frequency response

$$H(\mathbf{f}) = \frac{1}{1 + \lambda \sigma^2 |G(\mathbf{f})|^2} \qquad (21)$$

where $\sigma$ is the standard deviation of the sensor noise (with uniform dense sensing). For the case of the membrane model and the thin plate model, the shape of the frequency response is qualitatively similar to that of Gaussian filtering. The overall posterior distribution (when the data confidence and prior model are spatially uniform) is the superposition of the regularized (smooth) solution and some correlated Gaussian noise. Fourier analysis can also be used to examine the convergence properties of the iterative algorithms discussed in Section VI [Szeliski, 1987].

V. Fractal Processes

Fractals are objects (geometric designs, coastlines, mountain surfaces) that exhibit self-similarity over a range of scales [Mandelbrot, 1982]. Fractals have been used to generate "realistic" images of terrain or surfaces that exhibit roughness, and to analyse certain types of structured noise. Brownian fractals are random processes or random fields that exhibit similar statistics over a range of scales. One common way to characterize such a fractal is to say that it follows a power law in its spectral density

$$S(\mathbf{f}) \propto 1/f^{\beta} \qquad (22)$$

This spectral density characterizes a fractal Brownian function $v_H(\mathbf{x})$ with $2H = \beta - E$, whose fractal dimension is $D = E + 1 - H$ (where E is the dimension of the Euclidean space) [Voss, 1985].

[Figure 3: Fractal (random) solution]

The spectral density of the regularization based prior models examined in the previous section is $|G(\mathbf{f})|^{-2}$. For a membrane interpolator, we have

$$S_{membrane}(\mathbf{f}) \propto |2\pi \mathbf{f}|^{-2} \qquad (23)$$

while for a thin plate interpolator, we have

$$S_{thin\text{-}plate}(\mathbf{f}) \propto |2\pi \mathbf{f}|^{-4} \qquad (24)$$

Thus, the prior models for a membrane and a thin plate are indeed fractal, since the spectral density is a power of the frequency.

The significance of this connection between regularization methods, Bayesian models and fractal models is two-fold.
First, it shows that the smoothness assumptions embedded in regularization methods are equivalent to assuming that the underlying processes are fractal. When regularization techniques are used, it is usual to find the minimum energy solution (Figure 2), which also corresponds to the mean value solution for those cases where the energy functions are quadratic. Thus, the fractal nature of the process is not evident. A far more representative solution can be generated if a random (fractal) sample is taken from this distribution. Figure 3 shows such a random sample, generated by the algorithm that will be explained in Section VI. The amount of noise (and hence "bumpiness") that is desirable or appropriate can be derived from the data [Szeliski, 1987].

Second, the connection between Bayesian models and fractal models gives us a powerful new technique (described in Section VI) for generating fractal surfaces for computer graphics applications. Previous techniques for generating fractals use either recursive subdivision algorithms [Fournier et al., 1982] or the addition of correlated (pink) noise to some initial data [Pentland, 1984]. While the latter algorithm is equivalent to Bayesian modeling with uniform data and prior models, the Bayesian modeling approach can be extended to non-uniform data and the full controlled-continuity constraint.

Thus, it is possible to constrain the desired fractal by placing control points at selected locations (using the discrete data formulation), or to introduce discontinuities such as cliffs or ridges. For example, the fractal in Figure 3 has been required to pass through the points in Figure 1, and has a depth discontinuity along the left edge and an orientation discontinuity along the right. The introduction of data points affects the local noise characteristics of the fractal without affecting the prior statistics. It thus generates a representative random sample that is true both to the fractal statistics being used and to the sampled (or desired) data points. This approach can also be used for doing interpolation of digital terrain models. Interpolators that have a smoothing behaviour between that of a membrane and a thin plate are better able to model the correct smoothness (fractal dimension) of natural terrain.

VI. Multi-grid Stochastic Relaxation

To simulate the Markov Random Field (or equivalently, to find the minimum energy solution) on a digital or analog computer, it is necessary to discretize the domain of the solution u(x) by using a finite number of nodal variables. The usual and most flexible approach is to use finite element analysis [Terzopoulos, 1984]. We will restrict our attention to rectangular domains on which a rectangular mesh has been applied. As well, the input data points will be constrained to lie on this mesh.

As an example, let us examine the finite element approximation for the surface interpolation problem. Using a triangular conforming element for the membrane, and a non-conforming rectangular element for the thin plate (as in [Terzopoulos, 1984]), we can derive the energy equations

$$E_p^{membrane}(u) = \frac{1}{2} \sum_{(x,y)} \left[ (u_{x+1,y} - u_{x,y})^2 + (u_{x,y+1} - u_{x,y})^2 \right] \qquad (25)$$

for the membrane and

$$E_p^{thin\text{-}plate}(u) = \frac{1}{2} |\Delta x|^{-2} \sum_{(x,y)} \left[ (u_{x+1,y} - 2u_{x,y} + u_{x-1,y})^2 + 2(u_{x+1,y+1} - u_{x,y+1} - u_{x+1,y} + u_{x,y})^2 + (u_{x,y+1} - 2u_{x,y} + u_{x,y-1})^2 \right] \qquad (26)$$

for the thin plate, where $|\Delta x|$ is the size of the mesh (isotropic in x and y). These equations hold at the interior of the surface, i.e. away from the border points and discontinuities.
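The thin plate stencil of Equation 26 translates directly into array operations; here is a small sketch (ours, not from the paper), with the indexing convention (axis 0 = x, axis 1 = y) taken as an assumption:

```python
import numpy as np

def thin_plate_energy(u, dx=1.0):
    """Discrete thin plate energy (Eq. 26) summed over interior nodes."""
    uxx = u[2:, :] - 2 * u[1:-1, :] + u[:-2, :]               # second difference in x
    uyy = u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]               # second difference in y
    uxy = u[1:, 1:] - u[:-1, 1:] - u[1:, :-1] + u[:-1, :-1]   # mixed difference
    return 0.5 * dx ** -2 * (np.sum(uxx ** 2)
                             + 2 * np.sum(uxy ** 2)
                             + np.sum(uyy ** 2))
```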
Near border points or discontinuities some of the energy terms are dropped or replaced by lower continuity terms (see [Szeliski, 1987] for details). The equation for the data compatibility term is simply

$$E_d(u, d) = \frac{1}{2} \sum_{(x,y)} w_{x,y} (u_{x,y} - d_{x,y})^2 \qquad (27)$$

with $w_{x,y} = 0$ at points where there is no input data.

If we concatenate all the nodal variables $\{u_{x,y}\}$ into one vector u, we can write the prior energy model as one quadratic form

$$E_p(u) = \frac{1}{2} u^T A_p u \qquad (28)$$

This quadratic form is valid for any controlled continuity stabilizer, though the coefficients differ. Similarly, for the data compatibility term we can write

$$E_d(u, d) = \frac{1}{2} (u - d)^T A_d (u - d) \qquad (29)$$

where $A_d$ is usually diagonal (for uncorrelated sensor noise). The resulting overall energy function E(u) is quadratic in u

$$E(u) = \frac{1}{2} u^T A u - u^T b + c \qquad (30)$$

where

$$A = A_p + A_d \quad \text{and} \quad b = A_d d \qquad (31)$$

and has a minimum at $u^*$

$$u^* = A^{-1} b \qquad (32)$$

Once the parameters of the energy function have been determined, we can calculate the minimum energy solution $u^*$ by using relaxation. For faster convergence on a serial machine, we use Gauss-Seidel relaxation where nodes are updated one at a time. At each step the selected node is set to the value that (locally) minimizes the energy function. The energy function for node $u_i$ (with all other nodes fixed) is

$$E(u_i) = \frac{1}{2} a_{ii} u_i^2 + u_i \Big( \sum_{j \in N_i} a_{ij} u_j - b_i \Big) + k \qquad (33)$$

and so the new node variable value is

$$u_i^* = \frac{b_i - \sum_{j \in N_i} a_{ij} u_j}{a_{ii}} \qquad (34)$$

The result of executing this iterative algorithm on the nine data points in Figure 1 is shown in Figure 2. Note that it is possible to use a parallel version of Gauss-Seidel relaxation so long as nodes that are dependent (have a non-zero $a_{ij}$ entry) are not updated simultaneously. This parallel version can be implemented on a mesh of processors for greater computational speed.

The stochastic version of Gauss-Seidel relaxation is known as the "Gibbs Sampler" [Geman and Geman, 1984] or Boltzmann Machine [Hinton et al., 1984]. Nodes are updated sequentially (or asynchronously), with the new nodal value selected from the local Boltzmann distribution

$$p(u_i) \propto \exp\left[ -E(u_i)/T \right] \qquad (35)$$

Since the local energy is quadratic

$$E(u_i) = \frac{1}{2} a_{ii} (u_i - u_i^*)^2 + k \qquad (36)$$

this distribution is a Gaussian with a mean equal to the deterministic update value $u_i^*$ and a variance equal to $T/a_{ii}$. Thus, the Gibbs Sampler is equivalent to the usual relaxation algorithm with the addition of some locally controlled Gaussian noise at each step. The resulting surface exhibits the rough (wrinkled) look of fractals (Figure 3). The amount of roughness can be controlled by the "temperature" parameter T. The "best" value for T can be determined by using parameter estimation techniques [Szeliski, 1987].

While the above iterative algorithms will eventually converge to the correct estimate, their performance in practice is unacceptably slow. To overcome this, multigrid techniques [Terzopoulos, 1984] can be used. The problem is first solved on a coarser mesh, then this solution is used as a starting point for the next finer level (thus this is a coarse-to-fine algorithm). In previous work [Terzopoulos, 1984] a more complex inter-level coordination strategy was used, but in this instance it has not been found to be necessary. The application of multigrid techniques to stochastic algorithms requires some care, since the energy equations must be preserved when mapping from a fine to a coarse level [Szeliski, 1987].

The application of a multigrid Gibbs Sampler to the generation of samples from a Markov Random Field with fractal priors results in a new algorithm for fractal generation.
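The node updates of Equations 34-36 can be prototyped in a few lines; the sketch below (ours, with A stored as a dense matrix purely for clarity, where a sparse stencil would be used in practice) performs one Gauss-Seidel sweep and becomes the Gibbs Sampler when T > 0:

```python
import numpy as np

def sweep(A, b, u, T=0.0, rng=None):
    """One relaxation sweep over all nodes.

    T == 0: deterministic Gauss-Seidel update u_i* (Eq. 34).
    T  > 0: Gibbs Sampler; each node is drawn from a Gaussian with
            mean u_i* and variance T / a_ii (Eqs. 35-36).
    """
    rng = rng or np.random.default_rng()
    for i in range(len(u)):
        # Local minimum of the quadratic energy for node i (Eq. 33).
        u_star = (b[i] - (A[i] @ u - A[i, i] * u[i])) / A[i, i]
        u[i] = u_star if T == 0.0 else rng.normal(u_star, np.sqrt(T / A[i, i]))
    return u
```

Repeated sweeps at T = 0 converge to $u^* = A^{-1}b$; at T > 0 they produce correlated samples of the posterior, which is how fractal surfaces such as the one in Figure 3 are drawn.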
Like other commonly used techniques (random midpoint displacement, successive random additions [Voss, 1985]), it is a coarse-to-fine algorithm. It uses the interpolated coarse level solution as a starting point for the next finer level, just like successive random additions. However, the noise that is added at each stage is highly correlated. Since control points and discontinuities can be imposed at arbitrary locations, it gives more control over the fractal generation process.

VII. Uncertainty Estimation

The preceding section has discussed how to obtain representative samples from the estimated posterior distribution. While this ability is useful in computer graphics, it is less relevant to the problems associated with computer vision. What is of interest is the optimal (or average) estimate, and also the uncertainty associated with this estimate. These uncertainty estimates can be used to integrate new data, guide search (set disparity limits in stereo matching), or dictate where more sensing is required.

For the Markov Random Field with a quadratic energy function (Equation 30), the probability distribution is a multivariate Gaussian with mean $u^*$ and covariance $A^{-1}$. Thus, to obtain the covariance matrix, we need only invert the A matrix. One way of doing this is to use the multigrid algorithm presented in the previous section to calculate the covariance matrix one row at a time [Szeliski, 1987]. However, this approach is time consuming, and storing all the covariance fields is impractical because of their large size (for a 512 x 512 image, the covariance matrix has $6.8 \times 10^{10}$ entries).

An alternative to this deterministic algorithm is to run the multigrid Gibbs Sampler at a non-zero temperature, and to estimate the desired statistics (this is a Monte Carlo approach). For example, we can estimate the variance at each point (the diagonal of the covariance matrix) simply by keeping a running total of the depth values and their squares. Figure 4 shows the variance estimate corresponding to the regularized solution of Figure 2 (note how the variance increases near the edges and discontinuities). These variance values are an estimate of the confidence associated with each point in the regularized solution. Alternatively, they can be viewed as the amount of fluctuation at a point in the Markov Random Field (the "wobble" in the thin plate). Note that this error model is dense, since a measure of uncertainty is available at every point in the image. Error modeling in computer vision has not previously been applied to systems with such a large number of parameters.

[Figure 4: Variance estimate]

The straightforward application of the Gibbs Sampler results in estimates that are biased or take extremely long to converge. This is because the Gibbs Sampler is a multi-dimensional version of the Markov random walk, so that successive samples are highly correlated, and time averages are ergodic only over a very large time scale. To help decorrelate the signal, we can use successive coarse-to-fine iterations, and only gather a few statistics at the fine level each time [Szeliski, 1987]. The stochastic estimation technique can also be used with systems that have non-quadratic (and non-convex) energy functions. In this case, the mean and covariance are not sufficient to completely characterise the distribution, but they can still be estimated.
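The running-total variance estimator is equally simple to state; here is a sketch (ours) built on the `sweep` routine sketched earlier, with the number of decorrelating sweeps per sample as an assumed tuning parameter:

```python
import numpy as np

def estimate_variance(A, b, u0, T, n_samples, sweeps_per_sample=5):
    """Monte Carlo estimate of the per-node posterior variance,
    using running totals of the sampled values and their squares."""
    u = u0.copy()
    total = np.zeros_like(u)
    total_sq = np.zeros_like(u)
    for _ in range(n_samples):
        for _ in range(sweeps_per_sample):   # helps decorrelate samples
            u = sweep(A, b, u, T=T)
        total += u
        total_sq += u ** 2
    mean = total / n_samples
    return total_sq / n_samples - mean ** 2  # Var[u] = E[u^2] - (E[u])^2
```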
For the example of stereo matching, once the best match has been found (by using simulated annealing), it may still be useful to estimate the variance in the depth values. Alternatively, stochastic estimation may be used to provide a whole distribution of possible solutions, perhaps to be disambiguated by a higher level process.

VIII. Conclusions

This paper has shown that regularization can be viewed as a special case of Bayesian modeling, and that such an interpretation results in prior models that are fractal. We have shown how this can be used to generate typical solutions to inverse problems, and also to generate constrained fractals with local control over continuity and fractal dimension. We have devised and implemented a multigrid stochastic algorithm that allows for the efficient simulation of the posterior distribution (which is a Markov Random Field). The same approach has been extended to estimate the uncertainty associated with a regularized solution in order to build an error model. This information can be used at later stages of processing for sensor integration, search guidance, and on-line estimation. Work is currently under way [Szeliski, 1987] in studying related issues, such as the estimation of the model parameters, analysis of algorithm convergence rates, on-line estimation of depth and motion using Kalman filtering, and the integration of the multiple resolution levels into a single representation.

References

[Bracewell, 1978] R. N. Bracewell. The Fourier Transform and Its Applications. McGraw-Hill, New York, 2nd edition, 1978.

[Fournier et al., 1982] A. Fournier, D. Fussel, and L. Carpenter. Computer rendering of stochastic models. Commun. ACM, 25(6):371-384, 1982.

[Geman and Geman, 1984] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell., PAMI-6(6):721-741, November 1984.

[Hinton et al., 1984] G. E. Hinton, T. J. Sejnowski, and D. H. Ackley. Boltzmann Machines: Constraint Satisfaction Networks that Learn. Technical Report CMU-CS-84-119, Carnegie-Mellon University, May 1984.

[Horn, 1977] B. K. P. Horn. Understanding image intensities. Artificial Intelligence, 8:201-231, April 1977.

[Mandelbrot, 1982] B. B. Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman, San Francisco, 1982.

[Pentland, 1984] A. P. Pentland. Fractal-based description of natural scenes. IEEE Trans. Pattern Anal. Machine Intell., PAMI-6(6):661-674, November 1984.

[Poggio and Torre, 1984] T. Poggio and V. Torre. Ill-posed problems and regularization analysis in early vision. In IUS Workshop, pages 257-263, ARPA, October 1984.

[Smith and Cheeseman, 1985] R. C. Smith and P. Cheeseman. On the Representation and Estimation of Spatial Uncertainty. Technical Report (draft), SRI International, 1985.

[Szeliski, 1986] R. Szeliski. Cooperative Algorithms for Solving Random-Dot Stereograms. Technical Report CMU-CS-86-133, Department of Computer Science, Carnegie-Mellon University, June 1986.

[Szeliski, 1987] R. Szeliski. Uncertainty in Low Level Representations. PhD thesis, Carnegie Mellon University, (in preparation) 1987.

[Terzopoulos, 1984] D. Terzopoulos. Multiresolution Computation of Visible-Surface Representations. PhD thesis, Massachusetts Institute of Technology, January 1984.

[Terzopoulos, 1986] D. Terzopoulos. Regularization of inverse visual problems involving discontinuities. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(4):413-424, July 1986.
[Tikhonov and Arsenin, 1977] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-Posed Problems. V. H. Winston and Sons, Washington, D.C., 1977.

[Voss, 1985] R. F. Voss. Random fractal forgeries. In R. A. Earnshaw, editor, Fundamental Algorithms for Computer Graphics, Springer-Verlag, Berlin, 1985.

Acknowledgements

I would like to thank Geoff Hinton and Takeo Kanade for their guidance in this research and their helpful comments on the paper. This work is supported in part by NSF grant IST 8520359, and by ARPA Order No. 4976, monitored by the Air Force Avionics Laboratory under contract F33615-84-K-1520.
1987
126
579
ENERGY CONSTRAINTS ON DEFORMABLE MODELS: Recovering Shape and Non-Rigid Motion

Demetri Terzopoulos, Andrew Witkin, Michael Kass
Schlumberger Palo Alto Research
3340 Hillview Avenue, Palo Alto, CA 94304

Abstract

We propose a paradigm for shape and motion reconstruction based on dynamic energy constraints. Objects are modeled as deformable elastic bodies and constraints derived from image data are modeled as external forces applied to these bodies. The external constraint forces are designed to mold a deformable body into a configuration that satisfies the constraints, making the model consistent with the images. We present a particular shape model whose internal forces induce a preference for surface continuity and axial symmetry. We develop a constraint force for dynamic stereo images and present results for the recovery of shape and non-rigid motion from natural imagery.

1. Introduction

To reconstruct the shapes and motions of 3D objects from their images it is necessary to synthesize 3D models that simultaneously satisfy a bewildering variety of constraints. Some of these constraints derive from the immediate content of the image. Others reflect background knowledge of the image-forming process and of the shape and behavior of real-world objects. We propose a paradigm for shape and non-rigid motion reconstruction in which objects are modeled as deformable elastic bodies and constraints are modeled as dynamic external forces applied to the bodies. The external forces are designed to mold the deformable body into a configuration that satisfies the constraints. This minimum-energy configuration is computed by numerically solving the equations of motion for the deformable body.

In this paper, we consider the reconstruction of the shape and non-rigid motion of objects possessing rough axial symmetries. The input data consists of a temporal sequence of stereo image pairs. Several researchers have investigated motion-stereo fusion as a means of facilitating the recovery of 3D scene information [Nevatia, 1976; Regan and Beverley, 1979; Ballard and Kimball, 1983; Richards, 1985; Waxman and Sinha, 1986]. In our approach, an energy functional is defined which varies temporally according to the evolving stereo image pair. To reconstruct the shape and motion of a non-rigid object of interest, the dynamically deforming model maintains consistency with the image data by continually seeking lower energy states. An interesting feature of our procedure (though by no means a necessary consequence of our energy constraint methods in general) is that dynamic 3D object models are computed directly from image data without an intervening 2.5D surface representation.

Our deformable model of shape is governed by internal forces that imbue it with a preference for surface continuity as well as a preference for axial symmetry. In the latter regard, our model is close in spirit to generalized cylinder representations [Nevatia and Binford, 1977; Marr, 1977]. However, while generalized cylinders impose exact symmetries on any object they represent, our energy-based model is symmetry-seeking: It is capable of representing any shape, but those with axial symmetry have lower energy and hence are preferred.

Reconstruction is accomplished by applying image-derived forces to the symmetry-seeking deformable model. For each image we compute a local measure of the intensity gradient magnitude.
After an appropriate linear transformation, the local minima of the resulting potential functions indicate locally highest contrast and are interpreted as silhouettes (occluding contours). By de-projecting the gradient of these image potentials through the binocular camera model, a time-varying force field is created in 3-space.

Given the camera parameters and the model's current state, only unoccluded points where the lines of sight from either the left or right eye graze its surface (occluding boundaries) are sensitive to the force field. The forces move boundary points laterally and in depth such that their binocular projections are consistent with both the left and right image silhouettes. Consistency is achieved when the projected boundary points rest at local minima of the image potentials. The shape over the remainder of the surface is determined by the model's internal continuity and symmetry forces. It has been observed that occluding boundaries present difficulties to standard stereo matching methods, largely because occluding contours in the two images correspond to different occluding boundary curves on smooth objects. Our method overcomes these difficulties by applying separate forces to points along the left and right boundary curves.

The work in this paper is an instance of a dynamic energy constraints paradigm which has been successfully applied to a variety of problems in graphics and modeling as well as vision: In [Terzopoulos, Platt, Barr, and Fleischer, 1987] energy constraints are applied to deformable curve, surface, and solid models to build and simulate objects made of rubber, cloth, and similar materials. In [Witkin, Fleischer, and Barr, 1987] energy constraints are applied to parameterized shape models such as cylinders or spheres to automatically dimension, assemble, and animate objects made of collections of such parts. In [Barzel and Barr, 1987] articulated objects are assembled and simulated with accurate Newtonian dynamics. In [Kass, Witkin, and Terzopoulos, 1987] image energy constraints are applied to deformable plane curves to interactively locate and track edges and other image features. In [Witkin, Terzopoulos, and Kass, 1986] a deformable sheet in image coordinates is subjected to forces derived from area correlation to perform stereo reconstruction in the style of the 2.5D sketch (Fig. 1). In [Terzopoulos, Witkin, and Kass, 1987] symmetry-seeking models are used in a limited way to perform object reconstruction from static monocular silhouettes (Fig. 1). In [Platt, 1987] a deformable space curve model is extended into a space-time surface, and used to recover rigid motion.

[Figure 1. Reconstructions of a still life scene. Stereo images (top). 2.5D reconstruction of stereo pair using a deformable sheet disparity model (bottom left). 3D reconstruction of objects in left image using a symmetry-seeking deformable model (bottom right).]

The remainder of the paper is organized as follows: Section 2 describes the geometry and dynamics of the deformable symmetry-seeking model. Section 3 describes the stereo-motion image force. Section 4 discusses the implementation and Section 5 presents results.

2. The Symmetry-Seeking Model
Before we review the formulation of the symmetry-seeking model proposed in [Terzopoulos, Witkin, and Kass, 1987], here is an informal description: Imagine a deformable sheet made of elastic material (a blending of a membrane and thin plate). Take this sheet and roll it into a tube. Next, pass through the tube a deformable wire spine made of the same material and at regularly spaced points along its length couple it to the tube with radially projecting Hookean springs. The spring strengths can be adjusted so as to maintain the spine in approximately axial position within the tube. Additional forces are introduced that coerce the tube into a quasi-symmetric shape around the wire. Extra control is provided through additional compression/expansion forces radiating from the spine. The rigidity of the spine and the tube can be controlled independently, and their natural rest metrics and curvatures can be prescribed in advance or modified dynamically. For instance, if the circumferential metric of the tube is set to zero, the tube will tend to shrink around the spine unless expansion forces prevail; the model will shorten or lengthen as the longitudinal metrics of the tube and spine are modified. Hence, a wide range of interesting behavior can be obtained by adjusting the control parameters of the model.

The spine is a deformable space curve defined by mapping a 1-dimensional parametric domain $s \in [0, 1]$ into Euclidean 3-space: $\mathbf{v}(s) = (X(s), Y(s), Z(s))$. The tube is made from a deformable space sheet defined by mapping a 2-dimensional parametric domain $(x, y) \in [0, 1]^2$ into 3-space: $\mathbf{v}(x, y) = (X(x,y), Y(x,y), Z(x,y))$. In this paper, the mapping functions represent 3-space positions (alternatively they may represent displacements away from a prescribed rest configuration in 3-space).

The mapping is governed by the minimum of an energy functional

$$\mathcal{E}(\mathbf{v}) = \int_{\Omega} \left[ \mathcal{E}(\mathbf{v}(\mathbf{x})) + \mathcal{P}(\mathbf{v}(\mathbf{x})) \right] d\mathbf{x} \qquad (1)$$

where x is a point in the parametric domain $\Omega$. Here, $\mathcal{E}$ is the internal potential energy density of deformation and $\mathcal{P}$ is a generalized potential function associated with an externally applied force field. In our deformable model, $\mathcal{E}$ is an instance of the controlled-continuity constraint kernels [Terzopoulos, 1986]. The deformation energy associated with the spine mapping $\mathbf{v}(s)$ is given by

$$\mathcal{E}_S(\mathbf{v}) = \int_0^1 \frac{1}{2} \left( w_1(s) |\mathbf{v}_s|^2 + w_2(s) |\mathbf{v}_{ss}|^2 \right) + \mathcal{P}_S(\mathbf{v}) \, ds \qquad (2)$$

[Figure 2. Parameterization of the 3D model.]

The weighting functions control the material properties: $w_1(s)$ determines the metric properties and tension along the spine, while $w_2(s)$ determines the curvature properties and rigidity of the spine.
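A sketch (ours, not the authors' code) of the deformation part of Equation 2 on a discrete spine of N nodes, omitting the external potential; uniform parametric spacing and the array layout are assumptions:

```python
import numpy as np

def spine_deformation_energy(v, w1, w2, ds):
    """Finite difference approximation of the spine energy (Eq. 2).

    v  -- (N, 3) array of 3-space spine node positions
    w1 -- (N,) tension weights (metric properties)
    w2 -- (N,) rigidity weights (curvature properties)
    ds -- parametric spacing between adjacent nodes
    """
    vs = (v[1:] - v[:-1]) / ds                       # v_s, first derivative
    vss = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds ** 2   # v_ss, second derivative
    tension = np.sum(w1[:-1, None] * vs ** 2)
    rigidity = np.sum(w2[1:-1, None] * vss ** 2)
    return 0.5 * (tension + rigidity) * ds
```

Raising w2 stiffens the spine against bending, while raising w1 penalizes stretching; the tube energy of Equation 3 below is discretized analogously on its two-dimensional grid.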
The deformation energy of the sheet mapping $\mathbf{v}(x, y)$ is given by the functional

$$\mathcal{E}_T(\mathbf{v}) = \int_0^1\!\!\int_0^1 \frac{1}{2} \left( w_{1,0} |\mathbf{v}_x|^2 + w_{0,1} |\mathbf{v}_y|^2 + w_{2,0} |\mathbf{v}_{xx}|^2 + 2 w_{1,1} |\mathbf{v}_{xy}|^2 + w_{0,2} |\mathbf{v}_{yy}|^2 \right) + \mathcal{P}_T(\mathbf{v}) \, dx\, dy \qquad (3)$$

The functions $w_{1,0}(x,y)$ and $w_{0,1}(x,y)$ determine the metric of the sheet along each parameter curve, while $w_{2,0}(x,y)$, $w_{1,1}(x,y)$, and $w_{0,2}(x,y)$ determine its natural curvature and rigidity. The tube is formed by prescribing boundary conditions on two opposite edges of the sheet that "seam" these edges together. We seam the edge x = 0 to the edge x = 1, letting y span the length of the tube. The required periodic boundary conditions are

$$\mathbf{v}(0, y) = \mathbf{v}(1, y), \qquad \mathbf{v}_x(0, y) = \mathbf{v}_x(1, y). \qquad (4)$$

To couple the two models together, we first identify $y \equiv s$, which brings into correspondence the spine parameter with the parameter along the tube (Fig. 2). We then distinguish the configuration vector function of the spine $\mathbf{v}^S$ from that of the tube $\mathbf{v}^T$. The spine is coerced into an axial position within the tube by introducing the interaction potential energy functional

$$\mathcal{E}_P(\mathbf{v}^S, \mathbf{v}^T) = \frac{a}{2} \int_0^1 \left| \bar{\mathbf{v}}^T(s) - \mathbf{v}^S(s) \right|^2 ds, \qquad (5)$$

where

$$\bar{\mathbf{v}}^T(s) = \int_0^1 \mathbf{v}^T(x, s) \, dx \qquad (6)$$

and a is the strength of the interaction.

To make the tube prefer symmetry with respect to the spine, we first define the radial vector anywhere on the tube as $\mathbf{r}(x,s) = \mathbf{v}^T(x,s) - \mathbf{v}^S(s)$, the unit radial vector as $\hat{\mathbf{r}}(x,s) = \mathbf{r}(x,s)/|\mathbf{r}(x,s)|$, and $\bar{r}(s) = \int_0^1 |\mathbf{r}(x,s)| \, dx$. The potential energy functional is then given by

$$\mathcal{E}_Q(\mathbf{v}^S, \mathbf{v}^T) = b \int_0^1\!\!\int_0^1 \left[ \left( \mathbf{r}(x,s) - \bar{r}(s)\, \hat{\mathbf{r}}(x,s) \right) \cdot \hat{\mathbf{r}}(x,s) \right]^2 dx\, ds, \qquad (7)$$

where b is the strength of the symmetrizing force. Finally, we want to provide control over the expansion or shrinkage of the tube around the spine. This is accomplished by introducing the functional

$$\mathcal{E}_R(\mathbf{v}^S, \mathbf{v}^T) = \int_0^1\!\!\int_0^1 c(s)\, \mathbf{r}(x,s) \cdot \hat{\mathbf{r}}(x,s) \, dx\, ds. \qquad (8)$$

Here, c(s) is the strength of the radial force; the tube will inflate if c > 0 and deflate if c < 0. In particular, an end of the tube can be cinched shut by setting an endpoint factor c(0) or c(1) to be a large positive value.

The potential energy of deformation of the model is then obtained by combining the potential energy of deformation of the spine and tube models with the three coupling energies:

$$\mathcal{E}(\mathbf{v}^S, \mathbf{v}^T) = \mathcal{E}_S(\mathbf{v}^S) + \mathcal{E}_T(\mathbf{v}^T) + \mathcal{E}_P(\mathbf{v}^S, \mathbf{v}^T) + \mathcal{E}_Q(\mathbf{v}^S, \mathbf{v}^T) + \mathcal{E}_R(\mathbf{v}^S, \mathbf{v}^T) \qquad (9)$$

The variational principle involves the minimization of (9) within a space of suitably differentiable deformations. The associated Euler-Lagrange equations are given in [Terzopoulos, Witkin, and Kass, 1987].

3. The Stereo-Motion Image Force

The symmetry-seeking model has the freedom to deform and to undergo translations and rotations in 3-space. The model is coupled to the dynamically evolving stereopair via a coupling energy term. The energy term is designed to impart forces that dictate the model's deformations and motions such that it remains maximally consistent with an object of interest in the dynamic stereopair.

Our goal in the present paper is to match the deformable model to an object's occluding contours in the time-varying left and right images $I_L(\eta_L, \xi_L)$ and $I_R(\eta_R, \xi_R)$. We assume that the object is imaged in front of a contrasting background, so that we can formulate a simple force field of attraction towards strong intensity gradients which, by assumption, will include the occluding contours. Then, the occluding boundaries of the deformable tube are made sensitive to this force field. We shall show in the next section that in spite of its simplicity this force field nonetheless yields interesting results.

To couple the model to the image potential function, we stereoscopically project the material points of the tube into the left and right image planes through binocular imaging equations. The points sense the image potential at the projected locations. The material points of the spine are projected as well, but in our current implementation this is done simply for display purposes; the spine experiences no image forces.

Although it is possible to use a general binocular camera model (see, e.g., [Duda and Hart, 1973, Sec. 10.5]), its parameters need not be known with great accuracy for our approach to work. Consequently, we have found it convenient to employ a simplified perspective stereoprojection with eye vergence at infinity.
Specifically, letting $\Pi_L[\mathbf{v}^T(x,s)]$ and $\Pi_R[\mathbf{v}^T(x,s)]$ denote the stereoprojection of the tube material point 3-space coordinates $(X^T(x,s), Y^T(x,s), Z^T(x,s))$ into the image planes $(\eta_L, \xi_L)$ and $(\eta_R, \xi_R)$ respectively, we employ

$$\Pi_L: \ (\eta_L(x,s), \xi_L(x,s)) = (X^T(x,s) + \alpha Z^T(x,s),\ Y^T(x,s)),$$
$$\Pi_R: \ (\eta_R(x,s), \xi_R(x,s)) = (X^T(x,s) - \alpha Z^T(x,s),\ Y^T(x,s)), \qquad (10)$$

where $\alpha$ is a constant. The coupling between the force field and the tube is through the external potential function $\mathcal{P}_T$ (see Eq. 3). We define

$$\mathcal{P}_T[\mathbf{v}^T(x,s)] = -\beta_L(x,s) \left| \nabla\big( G_\sigma * I_L(\Pi_L[\mathbf{v}^T(x,s)]) \big) \right| - \beta_R(x,s) \left| \nabla\big( G_\sigma * I_R(\Pi_R[\mathbf{v}^T(x,s)]) \big) \right|, \qquad (11)$$

which imparts on the tube boundary an affinity for steep image intensity changes. Here, $G_\sigma * I$ denotes the image convolved with a (Gaussian) smoothing filter whose characteristic width is $\sigma$.

When partial occlusions occur between multiple objects (e.g., Fig. 1 in which the potato partially occludes the pear) only the unoccluded surface patches should be sensitive to image forces. We use 3D ray casting from each viewpoint to test surface patches for visibility in each image. Hence, the weighting functions $\beta_L(x,s)$ and $\beta_R(x,s)$ are non-zero only for visible material points (x,s) near occluding boundaries of the tube. Occluding boundary points are selected in the left image by setting

$$\beta_L(x,s) = 1 - \left| \mathbf{n}_L \cdot \mathbf{n}(x,s) \right| \qquad (12)$$

if the dot product is small (< 0.05), and $\beta_L(x,s) = 0$ otherwise, where $\mathbf{n}(x,s)$ is the unit normal of the tube at $\mathbf{v}^T(x,s)$ and $\mathbf{n}_L$ is the unit normal to the left image plane. The analogous weighting function is used for the right image.

4. Implementation

In our implementation, the time evolution of the model is governed by an initial value problem involving the equations

$$\gamma \frac{\partial \mathbf{v}}{\partial t} = -\frac{\delta \mathcal{E}}{\delta \mathbf{v}} \qquad (13)$$

where $\gamma$ is a damping factor. These first-order equations describe the motion of massless material in a viscous medium. The formulation of a second-order dynamic system incorporating mass density is also straightforward [Terzopoulos, 1987], but (13) has served well for the time being.

The components of the energy gradient $\delta \mathcal{E}/\delta \mathbf{v}^S$ are approximated using standard finite difference expressions on a linear array of $N_s$ nodes, while an $N_x \times N_s$ array is used to similarly approximate the components of $\delta \mathcal{E}/\delta \mathbf{v}^T$. The external force components $\nabla \mathcal{P}_T(\mathbf{v}^T)$ are computed numerically in the image domains $(\eta, \xi)$ using bilinear interpolation between centrally differenced pixel values.

We use an iterative procedure of the alternating direction implicit (ADI) type [Press, Flannery, Teukolsky, and Vetterling, 1986] to solve the discrete equations of motion. This efficient procedure exploits the fact that we have a rectangular grid of nodes. Each time step of the ADI procedure involves (i) a sweep in the x direction solving $N_s$ independent systems of algebraic equations in $N_x$ unknowns, followed by (ii) a sweep in the s direction solving $N_x$ independent systems in $N_s$ unknowns. The ADI method is independently applied to each of the three tube position components $(X^T, Y^T, Z^T)$. The spine gives rise to an additional system of equations in $N_s$ unknowns for each of its position components $(X^S, Y^S, Z^S)$.

As a consequence of the controlled-continuity deformation model, each of the unidimensional systems of equations has a pentadiagonal matrix of coefficients, and it can be solved efficiently (linear-order in the number of unknowns) using direct solution methods. We employ a normalized Cholesky decomposition step followed by a forward-back resolution step.
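Each unidimensional ADI system is pentadiagonal and symmetric positive definite, so the decomposition/resolution pair can be prototyped with a banded solver. Here is a sketch (ours, not the authors' implementation) using SciPy, with the band contents left as inputs since they derive from the particular controlled-continuity stencil:

```python
import numpy as np
from scipy.linalg import solveh_banded

def solve_pentadiagonal(main, off1, off2, rhs):
    """Direct solution of a symmetric positive definite pentadiagonal
    system A x = rhs, the per-line solve inside each ADI sweep.

    main -- (N,)   diagonal of A
    off1 -- (N-1,) first off-diagonal
    off2 -- (N-2,) second off-diagonal
    """
    n = len(main)
    ab = np.zeros((3, n))    # upper banded storage expected by SciPy
    ab[0, 2:] = off2         # second superdiagonal
    ab[1, 1:] = off1         # first superdiagonal
    ab[2, :] = main          # main diagonal
    return solveh_banded(ab, rhs)   # Cholesky-type factorization + resolution
```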
See [Terzopoulos, 1987] for a derivation of the pentadiagonal matrix and for a discussion of the algorithm, and [Kass, Witkin, and Terzopoulos, 1987] for its application to "snakes." Resolution, an inexpensive step, is performed at every ADI iteration as the applied forces change. Matrix decomposition is somewhat more expensive, but it is required only when the material properties of the model are altered (e.g., to increase rigidity or to introduce discontinuities). Currently, we perform only an initial decomposition because we have not yet experimented with the variation of material properties during solution.

We find that for larger grid sizes and increasingly rigid material the ADI method evolves solutions faster than the successive over-relaxation (SOR) method that we employed previously [Terzopoulos, Witkin, and Kass, 1987]. This is attributable to the fact that the direct solution of each unidimensional system in the ADI method "immediately" distributes to all nodes along two perpendicular parametric grid lines the effects of forces acting on their common node.

5. Results

The reconstruction method was applied to a stereo motion sequence consisting of 40 video fields portraying the 3D motion of a human finger. The imaging apparatus was a beam-splitting stereo adaptor mounted on a CCD camera. An initial axis was specified on the first stereopair by the user, and the shell initialized to a cylinder around the axis (Fig. 3). The system's equations of motion were solved to equilibrium on the initial frame (requiring about 40 iterations), thus reconstructing the shape of the object in proper depth. The initial shape is rendered from several viewpoints in Fig. 3. The equilibrium solution then evolved over the remaining frames of the stereo sequence (using 20 iterations per frame), producing a dynamic 3D reconstruction of the finger's shape and motion. Fig. 4 shows six representative frames of the sequence along with the corresponding reconstructed shapes.

6. Conclusion

Our results illustrate the usefulness of dynamic energy constraints applied to deformable models as a means of recovering object shape and non-rigid motion.

A shortcoming of our current system is due to the fact that silhouette information alone, even with stereo and motion, provides limited information about objects. With large portions of the object's surface left unspecified, the symmetry-seeking material tends to make the reconstructed shape more symmetric than the actual one. Also, it is difficult to detect rotations around the object's axis only from silhouette information. However, a key advantage of the energy constraints approach is the ease with which additional constraints can be integrated into the solution. A focus of our current work is the formulation and implementation of energy constraints that exploit shading and texture information over the entire visible surface, as well as constraints that make more effective use of motion information.

Our approach suggests energy constraint mechanisms for bringing higher-level knowledge to bear on the reconstruction problem. This remains a topic for future research. For the time being, the system is interactive; the user supplies an initial condition by instantiating a cylindrical surface about an approximate axis.

[Figure 3. Initial 3D reconstruction of a finger. Finger stereopair for first time instant (top left). User-initialized cylinder (top right). Initial reconstructed shape from three viewpoints (bottom).]

We are investigating the use of scale-space continuation
methods [Witkin, Terzopoulos, and Kass, 1986] to partially automate the initialization. We anticipate the ability to incorporate analytic camera models of greater sophistication into the energy functional and to automatically solve for the camera parameters as part of the minimization procedure.

Acknowledgments

Kurt Fleischer provided generous assistance in picture creation. John Platt contributed to the development of our ideas. Marty Tenenbaum provided suggestive interpretations of some of the figures.

References

Ballard, D.H., and Kimball, O.A. [1983], "Rigid body motion from depth and optical flow," Computer Vision, Graphics, and Image Processing, 22, 95-115.

Barzel, R., and Barr, A. [1987], "Dynamic constraints," to appear.

Duda, R.O., and Hart, P.E. [1973], Pattern Classification and Scene Analysis, Wiley, New York.

Kass, M., Witkin, A., and Terzopoulos, D. [1987], "Snakes: Active contour models," Proc. First Int. Conf. on Computer Vision, London, UK; to appear in Int. J. of Computer Vision, 1(4), 1987.

Marr, D. [1977], "Analysis of occluding contour," Proc. R. Soc. Lond. B, 197, 441-475.

Nevatia, R. [1976], "Depth measurement from motion stereo," Computer Vision, Graphics, and Image Processing, 9, 203-214.

Nevatia, R., and Binford, T.O. [1977], "Description and recognition of curved objects," Artificial Intelligence, 8, 77-98.

Platt, J. [1987], "An elastic model for interpreting 3D structure from motion of a curve," to appear.

Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T. [1986], Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge, UK.

Regan, D., and Beverley, K.I. [1979], "Binocular and monocular stimuli for motion in depth: changing disparity and changing size feed the same motion in depth stage," Vision Res., 18, 1331-1342.

Richards, W. [1985], "Structure from stereo and motion," J. Opt. Soc. Am. A, 2, 343-349.

Terzopoulos, D. [1986], "Regularization of inverse visual problems involving discontinuities," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 413-424.

Terzopoulos, D. [1987], "Matching deformable models to images: Direct and iterative solutions," Topical Meeting on Machine Vision, Technical Digest Series, Vol. 12, Optical Society of America, Washington, DC, 160-167.

Terzopoulos, D., Platt, J., Barr, A., and Fleischer, K. [1987], "Elastically deformable models," Proc. ACM SIGGRAPH-87 Conf.

Terzopoulos, D., Witkin, A., and Kass, M. [1987], "Symmetry-seeking models for 3D object reconstruction," Proc. First Int. Conf. on Computer Vision, London, UK; to appear in Int. J. of Computer Vision, 1(3), 1987.

Waxman, A.M., and Sinha, S.S. [1986], "Dynamic stereo: Passive ranging to moving objects from relative image flows," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 406-412.

Witkin, A., Fleischer, K., and Barr, A. [1987], "Energy constraints on parameterized models," Proc. ACM SIGGRAPH-87 Conf.

Witkin, A., Kass, M., Terzopoulos, D., and Barr, A. [1987], "Perception and graphics: Modeling with dynamic constraints," in Images and Understanding, H. Barlow, C. Blakemore, and M. Weston-Smith (eds.), Cambridge University Press, to appear.

Witkin, A., Terzopoulos, D., and Kass, M. [1986], "Signal matching through scale space," Proc. National Conf. on Artificial Intelligence, AAAI-86, Philadelphia, PA, 714-719; to appear in Int. J. of Computer Vision, 1(2), 1987.
[Figure 4. Evolution of the reconstructed 3D model through time. Six frames of the stereo sequence are shown (top) along with the evolving shape and motion of the model (bottom).]
1987
127
580
Shadow Stereo -- Locating Object Boundaries Using Shadows

William B. Thompson, Computer Science Department, University of Minnesota, Minneapolis, MN 55455
Michael T. Cheeky, Computer Science Department, University of Minnesota, Minneapolis, MN 55455
William F. Kaemmerer, Systems Development Division, Honeywell Inc., Golden Valley, MN 55427

Abstract

Shadows are a useful source of information about object structure. Shadows cast under oblique lighting often indicate the location of the silhouette of an object. This paper describes a method for reliably detecting shadow edges corresponding to object edges. It is able to distinguish between detected edges due to shadows and those due to surface markings. The basis of the technique is to observe the differences in shadows due to changes in the direction of illumination. Analysis is further aided by a simple stereo technique that does not require a solution to the general correspondence problem. Both the multi-light source and multi-camera methods can be implemented in an extremely efficient manner.

I. Introduction

This paper outlines a method for finding part boundaries using an approach combining structured lighting and stereo techniques. The method uses multiple light sources and multiple cameras to determine object boundaries based on detected shadows in the images. It is effective at distinguishing dark portions of an image due to shadows from those due to surface markings. The combined approach allows for significant simplifications in each component technique. The structured light required consists of collimated illumination from a small number of fixed light sources, rather than a more complex requirement that patterned illumination be projected onto or scanned over the part. Likewise, since the multi-camera portion of the analysis is used only to determine whether a surface point is on or off of the ground plane, the correspondence problem involved is simpler than that inherent in most other stereo techniques.

The basis of the method is that obliquely lit objects cast shadows in such a way that shadow boundaries in an image are often coincident with object boundaries. The method deals in a direct way with two critical problems in analyzing shadows: 1) simple thresholding is insufficient to accurately recognize shadows in situations with significant variations in surface reflectance, and 2) many shadows are physically detached from the objects generating them. The use of multiple light sources allows a simple and computationally efficient filtering of dark portions of the image, leaving only those regions corresponding to true, attached shadows. The use of multiple views is able to further filter shadow edges, eliminating those due to internal structure of the object and leaving only the object silhouette.

In its current form, the method is designed to find the boundaries of isolated parts on a flat supporting surface. The method is subject to some limitations on the geometry of the parts. However, a strength of the technique is the ability to cope with supporting surfaces which are not visually distinct from the parts. The system is appropriate for the control of automated pick and place operations off a typical conveyor belt or pallet, with an almost arbitrary surface pattern and coloring.

(This work was supported by National Science Foundation Grant DCR-8500899. William F. Kaemmerer was with the Control Data Corporation Image Systems Technology Center during his participation in this work.)

The use of shadows has received surprisingly little attention within the computer vision community, and as a result many of the vision methods which have been developed work effectively only in situations in which the illumination is highly diffuse. Methods which do deal directly with shadows tend to be computationally complex. In addition, most assume that shadows can be easily detected by simple thresholding operations. In fact, thresholding is often ineffective due to variations in surface reflectance, secondary reflections, and other related effects. [Waltz, 1975] demonstrated that shadow information can add constraints that simplify the analysis of simple blocks world problems. [Huertas and Nevatia, 1983] and [Hambrick and Loew, 1985] infer the shapes of objects casting shadows based on an analysis of shadow shape and knowledge of the direction of illumination. [Shafer and Kanade, 1983] use shadow shape to infer surface orientation. [Kender and Smith, 1986] introduces the idea of using moving shadows (generated by moving light sources) to aid in determining surface shape.

We are interested in inferring the location of the silhouette of an object from the locations of apparent shadow boundaries in the image. To do this, we need to classify each apparent shadow boundary as arising either from an "actual" object boundary, or from some other factor in the scene. We refer to object boundaries making up the silhouette as exterior boundaries. Non-convex objects may generate shadows due to surface structure unrelated to the object's silhouette. The object structures generating such shadows will be
The use of shadows has received surprisingly little attention within the computer vision community, and as a result many of the vision methods which have been developed work effectively only in situations in which the illumination is highly diffuse. Methods which do deal directly with shadows tend to be computationally complex. In addition, most assume that shadows can be easily detected by simple thresholding operations. In fact, thresholding is often ineffective due to variations in surface reflectance, secondary reflections, and other related effects. [Waltz, 19751 demonstrated that shadow information can add con- straints that simplify the analysis of simple blocks world problems. puertas and Nevatia, 19831 and [Hambrick and Loew, 19851 infer the shapes of objects casting shadows based on an analysis of shadow shape and knowledge of the direction of illumination. [Shafer and Kanade, 19831 use sha- dow shape to infer surface orientation. [Kender and Smith, 19861 introduces the idea of using moving shadows (gen- erated by moving light sources) to aid in determining surface shape. We are interested in inferring the location of the silhouette of an object from the locations of apparent shadow boundaries in the image. To do this, we need to classify each apparent shadow boundary as arising either from an “actual” object boundary, or from some other factor in the scene. We refer to object boundaries making up the silhouette as exte- rior boundaries. Non-convex objects may generate shadows due to surface structure unrelated to the object’s silhouette. The object structures generating such shadows will be Thompson, ChEdcy, and Kaemmerer From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. referred to as interior boundaries. faint enough to be ignored. Shadow regions identified in an image (for example, by thresholding) can be true shadows, correlated to actual sha- dows in the scene, or false shadows resulting, for example, from surface markings. Each point on the boundary of a true shadow region can be either attached or detached. An attached shadow boundary point is coincident (in the scene) with the object generating the shadow. Detached shadow boundary points correspond either to the “far” side of the shadow, or to parts of “cast” shadows associated with non- convex objects. Attached shadow boundary points always are associated with a point on objects. Detached shadow boundary points may lie on either the supporting ground place, or an object in the scene (possibly the same as the object casting the shadow.) 2) 31 4) A supporting ground plane at a known location is assumed to exist. All objects are assumed to be resting directly on the ground plane. The surface curvature of the objects at the points corresponding to their exterior boundaries in the image is assumed to be high, relative to moderate changes in the angle of illumination of the scene. In particular, moderate changes in the angle of illumination should not significantly affect the location of attached shadow boundaries in the image. 5) Any automatic procedure identifying shadows from raw intensity data will also find false shadow regions. These are regions of dark intensity in the image which arise from sur- face markings, that is, spatial patterns in the colors on a sur- face in the scene, rather than from differences in the illumina- tion falling on the surface. Such markings can occur on either objects or on the ground plane. Finally, shadow boundaries can be generated by occlu- sions. 
One type of such a boundary arises from the occlusion of a portion of a shadow region by another illuminated surface nearer to the image plane. The resulting shadow boundary is false in the sense that it does not reflect the shape of the true shadow in the scene; nevertheless, it provides useful information about the location of an external boundary of an object in the image. Another type arises from a self-occlusion, which occurs when a shadowed surface in the scene curves out of view in front of an illuminated background.

III. Assumptions

A number of simplifying assumptions are made to facilitate the analysis:

1) Only primary illumination effects are considered; secondary illuminations from reflections are assumed to be faint enough to be ignored.

2) A supporting ground plane at a known location is assumed to exist.

3) All objects are assumed to be resting directly on the ground plane.

4) It is assumed that it is sufficient to find the location of boundaries defined as the image of silhouettes produced by a projection along the direction of illumination of the scene, rather than by a projection from the point of view. In many applications, the relative positions of light sources, cameras, and the angle of the object surface at exterior boundaries are such that the differences between projections are within the tolerance of any actions based on the visual analysis.

5) The surface curvature of the objects at the points corresponding to their exterior boundaries in the image is assumed to be high, relative to moderate changes in the angle of illumination of the scene. In particular, moderate changes in the angle of illumination should not significantly affect the location of attached shadow boundaries in the image.

One further assumption will be introduced during the course of the analysis to lead to the final solution:

6) Detected shadow regions are assumed to be a manifestation of a single type of underlying cause. For example, it is presumed that detected regions are not combinations of real shadows and surface markings. (See Section VII.)

IV. Leading and Trailing Edges

In summary, a consideration of the ways shadow regions can appear in an image leads to seven types of apparent shadow boundaries:

1) attached shadow boundaries
2) detached shadow boundaries on ground plane
3) detached shadow boundaries on an object
4) surface markings on ground plane
5) surface markings on object
6) occlusion of shadow by object
7) self-occlusion of shadowed surface

A useful first step in classifying the apparent shadow boundaries in an image is to identify leading versus trailing shadow edges [Hambrick and Loew, 1985], as follows: Define a projected illumination direction by projecting the direction of illumination onto the image plane. Define a shadow leading edge as a transition from light to dark while moving in the projected illumination direction. A trailing edge is a transition from dark to light, while moving in the same direction. Leading edge-trailing edge pairs are identified by associating with each leading edge point the first trailing edge point found by moving in the projected illumination direction.

Of these seven, types 1, 6, and 7 provide information on the location of external boundaries of objects in the scene. A goal of the following analysis is to develop a technique for identifying these types of shadow boundaries among all the shadow boundaries apparent in an image.

Each leading edge and each trailing edge in the image is part of an apparent shadow boundary, and hence must be one of the seven types described in the last section. As a result, seven types of leading edges, and seven types of trailing edges may be conceptually identified, for a total of 49 cases. These cases are shown in Table 1. Ideally, one would like to have an image analysis technique which allows all of the unique cases among the 49 to be distinguished. However, for the current purpose it is sufficient to have a technique which will distinguish cases in which leading edges provide information about boundaries from among the other cases:
However, for the current purpose it is sufficient to have a technique which will distinguish cases in which leading edges provide infor- mation about boundaries from among the other cases: 762 Vision l- 8- 13 - 15 - 20 - 22 - 27 - 29 - 34 - 36 - 41 - 43 - Exterior boundary of back lit object. Exterior boundary (casting “typical” shadow). Shadow cast by an overhanging part of an object, partly occluded by another overhanging part of an object. 47 - Surface mark on self-shadowed object that curves out of view. V. Mulltiple Light Source -- Multiple Camera Technique. Two sources of information are used to assist in inter- Interior boundary (surface protrusion of object cast- ing shadow onto the same object) or shadow cast by an object onto another nearby object. Shadow cast by an overhanging part of an object onto another object, partly occluded by an overhanging part of a third object (possible with highly oblique lighting). preting the shadow boundaries in an image. Multiple light sources -- Shadow cast onto a surface marking on ground. Surface mark on ground partly occluded by (unmarked) object. Shadow cast onto a surface marking on an object. Surface mark on object partly occluded by an overhanging part of an object. Shadow cast by an object partly occluded by an overhanging part of an object. Shadow of one object viewed through hole in an overhanging object, or gap between overhanging parts of objects. A set of images is taken from the same camera position but using different illumination directions. Jllumination is varied in such a way that the direction of the projection of the illumination direction vector onto the ground plane remains constant. The image of detached shadow boundaries will move when the illumination direction changes, because the location of the detached boundary is a trigonometric function of illumination angle. Attached, occluded, and surface mark- ing boundaries will not move due to small changes in ilhuni- nation. (There is an exception for attached boundaries due to low curvature surfaces -- but note assumption 5, above.) It is useful to have several sets of illumination sources, each set individually satisfying the requirement for a common pro- jected direction. In this way, all exterior boundaries of objects in the scene cast shadows in at least some of the images, and it is possible to deal with object boundaries parallel to the projected direction of illumination. Multiple cameras-- Self-shadowed surface curving out of view before light background. Imaging the scene using multiple cameras allows stereo techniques to be used to determine whether boundaries are on Leading Edge: Trailing II attached detached detached sug. mark sur$ mark occlusion self- Edge: on ground on object on ground on object of region occlusion attached 1 2 3 4 5 6 7 detached on ground 8 9 10 11 12 13 14 detached on object 15 16 17 18 19 20 21 surf. mark on ground 22 23 24 25 26 27 28 surf. mark on object 29 30 31 32 33 34 35 occlusion of region 36 37 38 39 40 41 42 selj- occlusion 43 44 45 46 47 48 49 Tshb 1: Citnntinnn mubrlvino rnrnhinntinnc raf badino -W.-a v -1 L-.erll m-m1 Y ..mm- .e”J”‘b .e-r--Iv..-..I.“I-” -1 ‘w-..“‘a edge - trailing edge apparent shadow boundary types. Thompson, Cheeky, and Kaemmerer 763 or off of the ground plane. The analysis is simple, because stereo triangulation is required only at shadow boundaries and because only an on versus off ground plane determina- tion is requiredY- not an actual depth meas-urement. 
On-@ stereo solves this problem without the need for a solution to the On the fact correspondence problem. The technique is based that different views of the ground plane will be distorted in systematic ways due to me camera projection functions. Knowing the &mera models, it is possible to determine the view of the ground plane seen from one camera given the view in the other. In fact it is not necessary to know the camera models. A target such as a checkerboard can be placed on the ground plane and then conventional image warping techniques can be used to determine the transformation between two different views. (This involves a ‘ ‘correspondence problem,” but one of a very simple sort.) The warping will be the same for any patterns on the ground plane. Given two views of a shadow edge lying on the ground plane, the location of the edge in one view will be the same as the transformed location of the same edge in the second view. If the edge is at a height different from the ground plane, however, the warping transformation will not accurately predict the change between different views. To identify edges not on the ground plane, we need only warp one image to correspond to the viewing point of the other, and then look for edges that “move.” VI. IS-IM/CS-CM Designation. Given the change in the illumination direction from image-to-image, and the determination of on/off ground plane from the stereo views, each apparent shadow boundary in an image of the scene may be labeled as moving or station- ary with respect to the lighting and camera position changes. Shadows which move with changes in illumination may be labeled LM, those which do not may be labeled LS. Shadows which move (relative to their predicted location if they were at the level of the ground plane) with a change in camera viewpoint may be labeled CM; those which do not may be labeled CS. Each apparent shadow boundary thus may be given one of four labels based on information from the images: LS-CS, LS-CM, LM-CS, or LM-CM. When these labels are applied to leading-trailing edge pairs, sixteen possible cases result. These are the sixteen cases which may be distinguished using information available in the image. The goal of the current analysis is attained if the 49 different situations (from Table 1) can be mapped onto these 16 cases in a manner such that the shadows arising from exterior object boundaries are shown to be distinguishable from all other shadows, using the information available in an image. The desired mapping can be constructed by assigning an LS-LM/CS-CM label to each of the seven types of apparent shadow *boundaries, based on a consideration of their behavior under changes in lighting and camera positions. Attached shadow boundaries do not move with lighting changes, but they move with changes in camera position, since actual three-dimensional structure to which they are attached must’ be above the ground plane (LS-CM). Detached shadow boundaries do move with changes in illumination direction. Those on the ground plane appear sta- tionary from different camera viewpoints (LM-CS), and those falling onto objects appear to move (LM-CM). The boundaries of surface markings are stationary regardless of illumination direction, and move with camera viewpoint depending on whether they are on the ground plane (LS-CS) or an object (LS-CM). VII. Leading Edge Interpretation and Filtering. 
VI. LS-LM/CS-CM Designation.

Given the change in the illumination direction from image to image, and the determination of on/off ground plane from the stereo views, each apparent shadow boundary in an image of the scene may be labeled as moving or stationary with respect to the lighting and camera position changes. Shadows which move with changes in illumination may be labeled LM; those which do not may be labeled LS. Shadows which move (relative to their predicted location if they were at the level of the ground plane) with a change in camera viewpoint may be labeled CM; those which do not may be labeled CS. Each apparent shadow boundary thus may be given one of four labels based on information from the images: LS-CS, LS-CM, LM-CS, or LM-CM.

When these labels are applied to leading-trailing edge pairs, sixteen possible cases result. These are the sixteen cases which may be distinguished using information available in the image. The goal of the current analysis is attained if the 49 different situations (from Table 1) can be mapped onto these 16 cases in a manner such that the shadows arising from exterior object boundaries are shown to be distinguishable from all other shadows, using the information available in an image.

The desired mapping can be constructed by assigning an LS-LM/CS-CM label to each of the seven types of apparent shadow boundaries, based on a consideration of their behavior under changes in lighting and camera positions. Attached shadow boundaries do not move with lighting changes, but they move with changes in camera position, since the actual three-dimensional structure to which they are attached must be above the ground plane (LS-CM). Detached shadow boundaries do move with changes in illumination direction. Those on the ground plane appear stationary from different camera viewpoints (LM-CS), and those falling onto objects appear to move (LM-CM). The boundaries of surface markings are stationary regardless of illumination direction, and move with camera viewpoint depending on whether they are on the ground plane (LS-CS) or an object (LS-CM).

VII. Leading Edge Interpretation and Filtering.

Given this labeling of the types of apparent shadow boundaries, the mapping of the 49 situations to the 16 cases involves a straightforward "collapsing" of the rows and columns of Table 1, as shown in Table 2. (Cells in Table 1 which are not physically realizable are omitted.)

                  Trailing Edge:
Leading Edge:   LS-CS    LS-CM                            LM-CS    LM-CM
LS-CS           25       22, 26, 27                       23       24
LS-CM           32, 39   1, 29, 33, 34, 36, 40, 41, 43,   30, 37   31, 38, 45
                         47, 48
LM-CS           11       8, 12, 13                        9        10
LM-CM           18       15, 19, 20                       16       17

Table 2: Mapping of Shadow-Producing Situations Onto Labeling Possibilities for Leading-Trailing Edge Pairs. Numbers in cells correspond to the situations in Table 1. Italicized numbers are those situations for which the leading edge of the pair can provide information on the location of an exterior boundary.

In Table 2, the situation numbers which are italicized are those for which the leading edge of the apparent shadow boundary pair is indicative of the location of an actual exterior object boundary in the scene. Inspection of this table shows that there is no image information condition (i.e., joint labeling of leading and trailing edges as LS-LM/CS-CM types) which uniquely selects the situations providing the location of exterior boundaries. However, the additional simplifying assumption that shadow regions in the image are due to a single manifestation rules out the more "esoteric" shadow-producing situations (situation numbers 22, 29, and 47). The assumption that shadows are due to a single manifestation is not as exclusive as it might at first appear. In fact, we do not require that the whole shadow satisfy this constraint, but only that each leading/trailing edge pair be due to a single cause. Table 3 shows the mapping when these situations are eliminated. Under this assumption, there are three joint labeling conditions which may be interpreted as providing information (from the leading edge) concerning the location of exterior boundaries.
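The collapsing itself is mechanical once each edge type carries a label. A sketch (ours): the labels for the occlusion and self-occlusion types are not stated in the surviving text, so their LS-CM assignment below is our inference (stationary under lighting, above the ground plane), consistent with the reconstructed Table 2.

```python
# Sketch (ours): collapse Table 1's 49 situations into the 16 label-pair
# cases of Table 2.
EDGE_TYPES = ["attached", "detached on ground", "detached on object",
              "surf. mark on ground", "surf. mark on object",
              "occlusion of region", "self-occlusion"]

LABEL = {
    "attached":             "LS-CM",
    "detached on ground":   "LM-CS",
    "detached on object":   "LM-CM",
    "surf. mark on ground": "LS-CS",
    "surf. mark on object": "LS-CM",
    "occlusion of region":  "LS-CM",  # assumed (see lead-in)
    "self-occlusion":       "LS-CM",  # assumed (see lead-in)
}

def table2() -> dict:
    """Group situation numbers by (leading label, trailing label)."""
    cases = {}
    for i, lead in enumerate(EDGE_TYPES):
        for j, trail in enumerate(EDGE_TYPES):
            key = (LABEL[lead], LABEL[trail])
            cases.setdefault(key, []).append(i * 7 + j + 1)
    return cases

# table2()[("LM-CS", "LS-CM")] -> [8, 12, 13, 14]; removing the cells that
# are not physically realizable yields the {8, 12, 13} entry of Table 2.
```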
Perceptual Significance Hierarchy: A Computer Vision Theory for Color Separation

Deborah Walters and Ganapathy Krishnan
Computer Science Department, University at Buffalo (SUNY), Buffalo, NY

This work supported in part by National Science Foundation grant ET-8409827 awarded to the senior author.

Abstract

A Perceptual Significance Hierarchy (PSH) for line art images is developed which represents the relative perceptual significance of each image component. This is possible through the use of a set of image features which are used by the human visual system. The PSH and related rho-space computer vision algorithms can be used to automate the fake color separation process used by the printing industry. This is accomplished by adding rudimentary visual processing capabilities to a computer graphics system.

This paper describes an application of Artificial Intelligence techniques to a pre-press problem in the printing industry, color separation. This application area is interesting as it is one where expert systems techniques are not useful, where rule-based reasoning is inappropriate, and where relational knowledge bases make no sense. Instead, AI techniques based on basic visual perceptual computations and the parallel processing of visual information are required. This can be accomplished through the use of a Perceptual Significance Hierarchy (PSH), as described below.

A. The Color Separation Application

The printing industry is rapidly becoming an entirely electronic computer-based industry. However not all of the pre-press processes have been successfully computerized. Color separation is one such process. Before a colored image can be printed, it must go through the color separation process, which creates three or four separate plates - one each for printing cyan, magenta, yellow, and if required, black. There are two types of color separation: process and fake. Process color separation is used for photographic images, and has been successfully automated by using colored filters to optically separate the colors projected from the negative of a color photograph. Examples of images printed from process color separation are the color photographs in weekly news magazines. Fake color separation is used for line-art images such as the Sunday comics or commercial art. In this case, the printing company receives black and white art-work, and information about the color for each image region. The task is then to color in, by number, each region in the image. Despite the simple concept, this is a difficult problem.

At present there are two techniques for fake color separation: one is completely manual; and the other uses computer graphics techniques. In the manual technique, for each particular desired color in the image, a sheet of transparent acetate is laid on top of the original line drawing art, and blocks of translucent red cellophane tape are laid over each region to be colored the particular color. The red tape is then cut to the exact shape of the areas to be colored with an Xacto knife, and the excess tape peeled away. The acetate sheets are kept aligned with each other and with the original artwork by punching holes in their margins which fit over precisely aligned registration pins. This labor intensive manual color separation is still widely used. In the latter technique the artwork is digitized and displayed on a computer graphics workstation. The user can specify a particular color using a color palette, and then can indicate which region should be filled with that color by pointing to the region with the cursor.
A seed-fill algorithm can then be used to fill the desired region with the selected color [1]. After the user has interactively colored in all of the image, the graphics system can compute the three or four required printing plates.

There are several basic problems with the computer graphics based techniques, as illustrated by the simple line drawing in Figure 1:

1) First, extra lines in a region can cause problems. For example, if the bottom of the container was to be colored blue, then the user would have to separately select and fill each of the four regions of the bottom of the container. The human visual system can readily segment the entire bottom of the container into a single region, and it would be preferable if the user could fill such a "perceptual region" with a single fill command. For this particular example, the added cost of having to fill each image region may not be too large, but in many images this problem is more severe as artists often use lines to indicate texture, patterns, and interior lines, and these lines can divide a single perceptual region into many separate regions. Even in complex cases the user could point separately to each small region, but quite rightly, users refuse to use such an inefficient procedure.

2) This problem of subdivided regions can also arise when lines from separate objects intersect. In Figure 1 the stem of the flower intersects the top boundary of the container, and divides the container top into two regions, which would again require separate fill commands.

3) Third, many regions are not surrounded by a closed contour. For example, there is a gap in the boundary of the top of the container. Such gaps may not seem significant to human observers, who understand that the boundary encloses a single region. But such gaps create problems for seed-fill algorithms, as the color assigned to a region will "leak" out through any gaps and fill the surrounding region (see the sketch after this list).

4) The fourth problem is that some regions in line drawings are defined, not by lines, but by illusory contours, as in the center of the flower in Figure 1. Again, in such situations seed-fill algorithms are useless.

[Figure 1: example line drawing of a flower in a container. Figure 2: (a) a scene containing a large rectangle with interior lines and unconnected diagonal lines; (b) its PSH, with top, intermediate, and bottom levels.]
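To make the leak problem of item 3 concrete, here is a minimal seed-fill sketch (ours; the paper's cited algorithm is Smith's tint fill [1], which is more sophisticated). A four-connected flood fill from a seed pixel colors everything reachable without crossing a boundary pixel, so a one-pixel gap in a contour lets the fill escape into the surrounding region:

```python
# Minimal 4-connected seed fill (ours, illustrating the "leak" problem).
from collections import deque

def seed_fill(img, seed, color, boundary=1):
    """img: 2-D list of ints; pixels equal to `boundary` block the fill.
    Any gap in a contour lets the fill leak into the outside region."""
    h, w = len(img), len(img[0])
    target = img[seed[0]][seed[1]]
    if target == boundary or target == color:
        return img
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and img[y][x] == target:
            img[y][x] = color
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return img
```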
There are two basic solutions currently used by the printing industry to solve these problems. The first solution is to simply make restrictions on the type of artwork that can be separated using the computer graphics techniques. One potential means would be to require that all artists and cartoonists that submit line drawings for color separation ensure that none of the four problems would arise in their work. This approach is obviously not possible as it would dramatically interfere with artistic license. Some pre-press companies do require that artwork be generated on a computer graphics workstation using techniques which do not allow images with any of the four problems to be produced. But this solution is only possible when the pre-press industry handles both the color separation and the image creation stages. All other types of images have to be separated using manual techniques.

Other pre-press companies handle these problems by manually retouching artwork before it is digitized. A technician will "white-out" the extra texture and pattern lines that divide single regions, and will draw in black connecting lines wherever there are gaps or illusory contours. The retouched art-work can then be digitized and colored in interactively. This is much more efficient than the completely manual technique, but is still very labor-intensive. In fact, the retouching stage requires as much time as the interactive coloring, which makes the system both expensive and slow.

B. Goal for AI Techniques

The goal of this research is to apply AI techniques to remove the retouching stage of color separation by enabling a computer to perform much of that preprocessing. Current electronic color separation systems represent a division of labor between human and machine, where a human visual system performs the segmentation of an image into objects or significant parts of objects to be colored - this is done in the retouching stage - and the machine takes care of most of the detail of filling in the segments with color and separating the colors into their primary components. The goal of this research was to give the machine a rudimentary visual sense, so that it can perform some, but not necessarily all, of the segmentation process. Any portion of the retouching stage that could not be handled by the machine vision preprocessing system would be done by the user at the graphics workstation. The goal was not to create a computer image understanding or scene analysis system, as the machine does not have to know what an object is in order to color it, but only that a given region constitutes an object. The computer vision system just needs to be able to segment an image into regions which correspond to objects or object parts. And this can even be done with human interaction.

C. Perceptual Significance Hierarchy

One theoretical approach to this problem can be expressed in terms of a Perceptual Significance Hierarchy (PSH). There are various hierarchical image processing and image interpretation techniques. For example, many pyramid algorithms are based on the hierarchy of spatial resolution [2]. Another example would be hierarchical object representation, where an object is represented at various levels of detail [3]. In contrast, the PSH is based upon the perceptual significance of image components. Thus instead of representing an object at various levels of detail, the PSH would have a top level where only the most significant image components were represented, intermediate levels where the next most significant components would appear, and a bottom level where all image components would be represented. For example, most observers say that the large rectangle is the most perceptually significant component of the scene in Fig. 2a, and that the unconnected diagonal lines are the least significant components. This could be represented in a PSH as shown in part b of Fig. 2, with the rectangle represented as being most significant by its presence at the top level. Similarly, the interior lines of the rectangle are represented in the PSH as being of intermediate significance, and the diagonal lines as having the least perceptual significance.

A PSH would have several uses. First, if a PSH can be computed in parallel over an entire image (as is possible using the method described in Section III), then the PSH can be used to focus the attention of subsequent non-parallel techniques. For example, the PSH could be used to improve the efficiency of model-matching by having only the most perceptually significant image components matched first. Another use of a PSH could be to aid in segmentation (see Section IV).

The goal of the theoretical portion of this research is to develop computer vision algorithms which produce a PSH. The approach taken is to look to the human visual system for inspiration. This approach will not always be useful in computer vision as the solutions used by the human visual system may be based on limitations of the neural hardware which do not exist in computer systems. However, in other cases, the human solution may be based on the general problem of vision, and thus be useful for machine vision systems as well.
Using the human visual system for inspiration is especially relevant for the color separation application, as artists may use a semantics of line art that is based on the semantics used by humans in the visual perception of line art.

How do humans perceive line art; what aspects of line drawings are perceptually significant? Psychophysical experiments have been used to provide answers to these questions [4]. The experimental results show that the local connections between the ends of lines and curves are important. In fact, before the action of context or other cognitive effects, the perceptual significance of a line or curve is based on the type of connection at its two ends. There is actually a hierarchy of connection types as illustrated in Figure 3, where an instance of each of the three possible types of end connections is shown. Lines terminating in Type A connections are the most perceptually significant, those with B are the next most significant, and so on [5].

[Figure 3: instances of the line end connection types.]

The hierarchy of line end connections contains some of the connections which have been previously used in computer vision algorithms: corners [6], and 'L', 'T', and 'fork' junctions [7-10]. But this end connection hierarchy is different in several ways: first, with the addition of one additional connection type it can be applied to both straight and curved lines; second, it is domain independent; third, only four types of connections are present; and fourth, this hierarchy has been proven to be geometrically complete, thus all potential connections between the end and interjacent points of lines can be classified as one of the four connection types A, B, C or D [5]. However, the most important difference is that these end connection features will not be used for constraint labeling as a part of object interpretation, but rather for a lower level task such as image segmentation.

These end connection features can be used to develop a PSH. But first it is interesting to ask "Why should the end connection features be significant? What scene invariants do they capture?" This can be explored by making two simple assumptions. First, assume that the viewpoint is representative [11, 12]. This means we assume we are not looking at a scene from a limited number of viewpoints which cause the objects to appear to be accidentally aligned. The second assumption is that the objects are not accidentally aligned. From these assumptions it is possible to make various inferences about the perceptual significance of the end connection features [13]. For example, it can be shown that lines associated with the type A connection have a higher probability of having arisen from the bounding contour of an object, while the type B are more likely to have arisen from objects than non-objects.
The type C connections are most likely to represent one object occluding another, and the type D connections would most likely arise from the intersection of two wires or transparent edges. So the hypothesis is that these features are important because it is more probable that such features would arise from the physical properties of objects in a scene than from other causes. Thus the lines associated with these features should contain more information about objects in a scene than lines lacking these features, and thus be more perceptually significant.

We can use this hypothesis to develop a technique for selectively enhancing an image by categorizing lines and curves in terms of their end connections. The algorithm for producing the PSH assumes that the perceptual significance of a line is determined by the type of connection at each end of the line. This assumes that a line has exactly two ends, which is possible if lines are assumed to lack discontinuities in orientation, and if branching lines are assumed to be broken at branching intersections [14]. It will be further assumed that a single type A connection makes a line more perceptually significant than having two type B connections, and that a single type B connection implies more significance than a single type C connection. Thus a PSH can be created by assigning those lines which have type A connections at both ends to the top level of the hierarchy (which will be referred to as Level 1), and similarly assigning all lines to a level of the PSH based on the type of connections at each end of the line. The PSH will then have ten levels. Figure 4 shows an example of a PSH: the PSH of the drawing seen in Figure 1. Note that the outer contours of objects are judged to be more significant than the inner contours, or lines perceived as texture or pattern. (This example also shows the results of processes which fill in the gaps in lines and generate illusory contours. Both processes are discussed in later sections.)

[Figure 4: the PSH of the drawing in Figure 1.]
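The ten levels are just the ten unordered pairs drawn from the four connection types. A minimal sketch (ours): the paper fixes the ranking A > B > C (a single A outranks two Bs); the exact ordering of the remaining mixed pairs below is our assumption.

```python
# Sketch (ours): assign each line to one of the ten PSH levels from the
# unordered pair of its end connection types.
from itertools import combinations_with_replacement

RANK = {"A": 0, "B": 1, "C": 2, "D": 3}

# Levels 1..10: (A,A), (A,B), (A,C), (A,D), (B,B), (B,C), (B,D),
#               (C,C), (C,D), (D,D) -- mixed-pair order is assumed.
PAIRS = sorted(combinations_with_replacement("ABCD", 2),
               key=lambda p: (RANK[p[0]], RANK[p[1]]))
LEVEL = {pair: i + 1 for i, pair in enumerate(PAIRS)}

def psh_level(end1: str, end2: str) -> int:
    """PSH level of a line from its two end connection types."""
    a, b = sorted((end1, end2), key=RANK.get)
    return LEVEL[(a, b)]

assert psh_level("A", "A") == 1                    # top level
assert psh_level("A", "D") < psh_level("B", "B")   # one A beats two Bs
```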
The PSH can be used to create a generic segmentation algorithm, which will segment a line drawing into perceptually significant segments [13]. The output of the algorithm is a labeled version of the line drawing where each label corresponds to a separate segment. The segmentation algorithm is described in [15]. Figure 5 shows an example of the results of the segmentation algorithm. The two figures differ only by one small line, but we see the one as one object and the other as two. The algorithm results agree with this perception, as the first figure is represented as a single segment, yet the second is represented as two segments (as indicated by the different line styles). Note that this segmentation is different from that done by Hoffman and Richards [16]. They segment a single closed contour into subparts, but the algorithm discussed here can segment an entire image into parts.

[Figure 5: two drawings differing by one small line, segmented as one and as two objects.]

Representation

An important issue remains to be discussed -- is it possible to detect and represent the end connection features in images? Assume that a line drawing can be turned into a binary image. It may have lines which vary in width, which means some method of extracting a representation of lines must be used. However, standard thinning techniques cannot be used as they alter the connectivity of lines. Similarly there are many types of edge detectors that cannot be used, as they alter the connectivity of lines or regions. For example, the type B end connection would never occur with circularly symmetric Gaussian based edge operators. Finally, the techniques for representation must be able to provide an interpretation for intersecting lines in which the constituent lines are not necessarily connected.

A technique which meets these criteria was developed - it is the rho-space representation of oriented edges as described in Walters [13, 14]. In rho-space the x and y dimensions correspond to the spatial dimensions of the image, and the rho dimension is the local orientation dimension. Rho-space is assumed to be a discretely sampled space, thus there are only a limited number of orientations explicitly represented. The x and y dimensions are also discretely sampled.

[Figure 6: stages of rho-space processing (parts a-f). Figure 7: the two-vase picture.]

The algorithms developed for the rho-space representation assume there is a single processor associated with each point or pixel in space, and that each processor is locally connected only to those processors in its local 3-D neighborhood. Thus rho-space might be implemented using a 3-D mesh connected computer. The input to rho-space is the nonthresholded output of oriented edge operators. The basic idea is that local computations based on the value of points within a local neighborhood can be used to process an image and detect and represent the end connection features. Thus, although the response of a single edge operator cannot be unambiguously interpreted, through the local interactions between neighboring responses an unambiguous interpretation is possible.

The local parallel computations performed on the rho-space representation can be loosely described as thinning the line representations, filling short gaps in lines, and removing noise. An example of these excitatory and inhibitory processes is seen in Figure 6, where part a contains the original image. Part b shows the nonzero responses of the edge operators. Part c shows the results after lateral inhibition, which removes many spurious responses of the edge operators. Part d is after short-range linear inhibition, which removes more nonzero line points. Part e shows the small gaps being filled in by short-range linear excitation, while part f shows the short, unconnected lines removed after mid-range linear inhibition, yielding the "clean" image. (More details of these excitatory and inhibitory interactions can be found in [14].)

Figure 6 also shows how these parallel algorithms have the useful property of generating illusory contours. If Figure 6a is viewed from the appropriate distance, the center may appear darker than the background. This can be explained by an illusory contour being formed around the central region. Note that in Figure 6f an illusory contour is generated as a result of the rho-space computations. Notice the short lines that remain in part c after lateral inhibition. Such lines were considered to be an inherent problem of oriented edge operators by Marr and Hildreth [17], but in fact these orthogonal end lines may play a key role in producing illusory contours. It is also interesting to note that the PSH for this image indicates that the illusory contour is more perceptually significant than the straight lines in the pattern.
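As an illustration only (ours, not the paper's algorithms): rho-space is simply a 3-D array indexed by orientation, row, and column, and each processing stage is a local operation over a 3-D neighborhood. The sketch below shows the data layout and one lateral-inhibition step across neighboring orientations; the number of orientations and the inhibition weight are assumptions.

```python
# Sketch (ours) of the rho-space volume and one local operation; the full
# pipeline [14] chains several excitatory and inhibitory stages.
import numpy as np

N_ORIENT = 8  # discretely sampled orientation (rho) dimension (assumed)

def to_rho_space(edge_responses: np.ndarray) -> np.ndarray:
    """edge_responses: (N_ORIENT, H, W) nonthresholded outputs of oriented
    edge operators; rho-space is exactly this 3-D array."""
    assert edge_responses.shape[0] == N_ORIENT
    return edge_responses.astype(np.float32)

def lateral_inhibition(rho: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Each (rho, y, x) cell is suppressed by the responses of the two
    neighboring orientations at the same pixel (a local 3-D neighborhood),
    removing spurious edge-operator responses as in Figure 6c."""
    neighbors = np.roll(rho, 1, axis=0) + np.roll(rho, -1, axis=0)
    return np.maximum(rho - weight * neighbors, 0.0)
```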
The PSH and rho-space algorithms can be used to automate the color separation process by removing the necessity for the manual touch-up stage previously required to solve the four problems listed in Section 1.1. As some of the initial computations such as convolution are computationally intensive, they can be performed non-interactively in a preprocessing stage. After preprocessing, the resultant image can be sent to graphics workstations for interactive processing.

A. Preprocessing

The line drawings to be color separated are digitized. In order to apply the vision algorithms, the images must then be convolved with a set of oriented line detectors, to transform the image into the rho-space representation. The next stage of processing involves the local parallel operations performed on the rho-space representation. The final stage of preprocessing is the detection and representation of instances of the end-connection features in the image. Using these features, the PSH algorithm attaches a label to each image line which indicates its perceptual significance level. These labels will be used in the interactive processing stage. The labeled image is now ready for the interactive stage of processing.

B. Interactive Color Separation

The preprocessed line-drawing images can be viewed by the user on the monitor at the graphics workstation. As a supplement to the currently available graphics techniques, the user has three interactive techniques available which are based on these computer vision algorithms: 1) PSH, 2) Segmentation, and 3) Line Extension.

The PSH can be used to solve the texture and the intersecting lines problems and to solve some contour intersection problems. The user can select the PSH, and then by moving a slider view it at any level of significance. To color in the bottom of the container in Figure 1, the user could select the PSH shown in Figure 4. The user can then select a particular sub-image. For example, using a slider value of 5, 6, or 7, the user could select the image shown in Figure 4 and a single fill operation can be performed. Similarly, to fill the top of the container, a value of 5, 6 or 7 could be chosen to allow the fill of the perceptual region with one command. But not all contour intersection problems are solved by this technique. For example, with the two vase picture seen in Figure 7, all lines would appear at level 1. In that case, the user could select the segmentation image and then view the segments individually, which would allow filling in one step.

The rho-space algorithms can fill gaps in lines. In addition, the system can interactively extend lines or curves to fill gaps. This is done by extending each line which is not connected at its end, in a direction determined by its local curvature. The rho-space representation makes this easily implementable. Any new end connections are detected and the associated lines temporarily relabeled with their new end-connection enhancement values. The amount that lines are extended is controlled by the user using a graphical potentiometer. Once an extended line meets another line, the line is not further extended.

Finally, illusory contours can be formed by rho-space processing. These contours then become just as real as any other contour, and thus can be used for filling.
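A minimal sketch of the slider (ours): each line carries the significance level assigned during preprocessing, and the sub-image displayed at a given slider setting is just the set of lines whose level does not exceed it. The `lines` structure below is a hypothetical representation, not the system's.

```python
# Sketch (ours): the PSH "slider" of the interactive stage. Lines are
# keyed by id to (psh_level, pixel_list); level 1 is most significant.
def visible_lines(lines: dict, slider: int) -> dict:
    """Sub-image shown at this slider setting: all lines whose PSH level
    is at or above the cutoff (i.e. numerically <= slider)."""
    return {lid: px for lid, (level, px) in lines.items() if level <= slider}

# e.g. visible_lines(image_lines, 7) keeps levels 1-7, dropping texture
# and pattern lines so one seed fill covers the whole perceptual region.
```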
C. Success of the Vision Algorithms

It is estimated that the use of the computer vision algorithms in the interactive fake color separation process can accomplish 80% of the tasks presently done during the manual retouching stage. This means that it is now more economical to dispense with the labor-intensive retouching stage, and to handle the gap-filling and removal of extraneous lines interactively using the vision/graphics system.

The success of these vision algorithms appears to arise from their basis in features used by the human visual system, which makes them ideally suited for dealing with human-generated line art. Another reason for their success is that these vision algorithms are general-purpose: they do not require any model-based processing or domain-specific knowledge, but depend instead on generic knowledge about line art. A final reason for the success of these vision algorithms is that they generally provide more than one technique for solving a given color separation problem, and this redundancy improves the chances of finding a solution.

References

1. Alvy Ray Smith, "Tint Fill," SIGGRAPH Conference Proceedings, pp. 276-283 (August 1979).
2. S. Tanimoto and T. Pavlidis, "A hierarchical data structure for picture processing," CGIP 4(2), pp. 104-119 (June 1975).
3. D. Marr and H.K. Nishihara, "Representation and recognition of the spatial organization of three-dimensional shapes," Proc. R. Soc. Lond. B 200, pp. 269-294 (1978).
4. D.K.W. Walters and N. Weisstein, "Perceived brightness is influenced by structure of line drawings," Investigative Ophthalmology and Visual Science 22, p. 124 (1982).
5. D.K.W. Walters, "Object interpretation using boundary based perceptually valid features," Proc. of SPIE Applications of AI III 635, pp. 196-202 (1986).
6. K. Paler, J. Foglein, J. Illingworth, and J. Kittler, "Local ordered grey levels as an aid to corner detection," Pattern Recognition 17(5), pp. 535-543 (1984).
7. D.A. Huffman, "Impossible objects as nonsense sentences," pp. 295-323 in Machine Intelligence 6, ed. D. Michie, Edinburgh University Press, Edinburgh (1971).
8. M.B. Clowes, "On seeing things," Artificial Intelligence 2, pp. 79-116 (1971).
9. D.L. Waltz, "Understanding line drawings of scenes with shadows," in The Psychology of Computer Vision, ed. P.H. Winston, McGraw-Hill, New York (1975).
10. I. Chakravarty, "A generalized line and junction labeling scheme with applications to scene analysis," IEEE Trans. Pattern Anal. Machine Intell. 1, pp. 202-205 (1979).
11. R.I.D. Cowie, "The viewer's place in theories of vision," IJCAI Proceedings (1983).
12. D. Lowe and T. Binford, "Segmentation and aggregation: figure ground phenomenon," Workshop on Human and Machine Vision, Montreal (1984).
13. D.K.W. Walters, "Selection of image primitives for general-purpose visual processing," Computer Vision, Graphics & Image Processing 37(3), pp. 261-298 (1987).
14. D.K.W. Walters, "Parallel Computations in Rho-Space," Proceedings of the First International Conference on Neural Networks, San Diego, CA (June 1987).
15. D.K.W. Walters, "Selection and use of image features for segmentation of boundary images," Proceedings IEEE CVPR (1986).
16. D.D. Hoffman and W. Richards, "Parts of recognition," MIT AI Memo 737 (1983).
17. D. Marr and E. Hildreth, "Theory of edge detection," Proceedings of the Royal Society of London B 207, pp. 187-217 (1980).
Plan Inference and Student Modeling

Y.M. Visetti
CNRS / LIMSI, Universite Paris-Sud (Batiment 508), Orsay 91406, France

P. Dague
IBM Research Center, 36 av. R. Poincare, Paris 75016, France

ABSTRACT

This paper addresses the problem of building user models within the framework of Intelligent Computer Assisted Instruction (ICAI), and more particularly for systems teaching elementary arithmetic or algebra. By "model building" we mean the understanding of the student's performances, as well as a global description and evaluation of his/her ability (competence), including a representation of some errors. As an application domain we have here retained the learning of "calculus" in the field of rational numbers, as an intermediate area between arithmetic and algebra. The aim of our system is to control the way in which the pupil solves exercises. In the light of the particular nature of the chosen application, the main points to be stressed are the following:

- calculations are described as plan generation and execution; consequently the student's modelling consists primarily in plan inferencing
- the system takes into account the non deterministic nature of the task, and recognizes valid variants of expert calculation plans
- numerous errors are detected and categorized
- the system accepts that the student write the calculations in a more or less elliptic manner; whenever ambiguities occur, the student is precisely asked about implicit steps of his calculations, and the system uses the answers given to reduce the uncertainties
- a global model of the student is generated, which incorporates observations and appreciations; this model, in turn, determines the subsequent interpretations.

All these questions are discussed both at the fundamental and the methodological levels.

0. INTRODUCTION

It is generally recognized (see [Sleeman et al. 82] or [Zissos 85]) that an Intelligent Tutoring System (ITS) should be composed of at least 4 components respectively in charge of:

- the domain knowledge to be taught
- the understanding of the student's performances and the progressive building of a certain image (or model) of his/her competence
- the pedagogical rules
- the control of the dialogue with the student (including linguistic and graphic interface).

A minimal architecture for a problem solving oriented ITS is presented in Figure 1.

[Fig. 1: Architecture for a problem solving ITS - expert module, test generation, analysis of the student's questions.]

The following two examples will illustrate all these functions (and particularly the modelling activity) of an ITS teaching the calculus in the field of rational numbers. We assume here that the student has already a basic knowledge of fractions and operations on fractions. The aim of the system is then to test the student's ability to perform operations at a purely technical/procedural level.

Exercise 1

System1: Compute 4/6 + 5/7 !
User1: ... I find: 29/21
S2: that's right, but show me how you have simplified
U2: 4/6 + 5/7 = 58/42 = 29/21
S3: that's right, but you could have simplified before adding up, as follows: 4/6 + 5/7 = 2/3 + 5/7 = (2x7 + 3x5)/(3x7) = 29/21

This example shows that the system must not only be capable of generating (like in S1) and solving exercises. It also must accept from the pupil, and of course understand, more or less developed solutions (compare answers U1 and U2).
Because of the non deterministic nature of the task, uncertainties unavoidably occur, concerning certain implicit steps in the actual calculations of the pupil. The system must then be able to recognize the possible alternatives, and if it deems it important, to ask the pupil a question focusing on the specific step, or task involved (see S2). The system will then use the new answer (U2) in order to reduce its uncertainties and thus to trigger off the appropriate comments (see S3).

Exercise 2

S4: compute 4/6 + 2/5, and don't forget to simplify as soon as possible !
U3: 4/6 + 2/5 = 2/3 + 2/5 = 2/(3x5) = 2/15
S5: let's take your first equality: that's good, you have properly simplified this time
now let's consider the second equality: the addition of 2/3 + 2/5 is wrong ! There is nothing particular here, you should have done this addition exactly in the same way as you did in Exercise 1. You should have written: 2/3 + 2/5 = (2x5 + 3x2) / (3x5) = 16/15

It is to be noted that, when formulating the second exercise, the system takes into account the previous performances of the pupil (see S4), and it refers to them in its comments (S4 and S5). On the other hand, the pupil has made a mistake, possibly because of the particular pattern of the fractions which he had to add up (after simplification both fractions have the same numerators!). Nevertheless the system recognizes it as an "attempted addition" carried out with a deviant procedure (like: a/b + a/d --> a/(bxd)), so that it can categorize it in its comments as being a "wrong addition" (see S5).

The problem arises now of knowing what memory the system should keep of these performances, how it uses this memory in order to build up a global image of the pupil's competence, and finally how the global image is to influence the interpretations of future calculations.

We shall now proceed to explain some partial answers among all those which are required to build up a system capable of reacting exactly as stated in the previous examples. We underline that we are only concerned here with the problem of the student's modelling, which we view both as a local and a global activity: analysis of the performances and synthesis (description and evaluation) of the pupil's competence. We do not deal with the process of dialogue at the level of natural language generation or comprehension; we shall only indicate how calculations must be analyzed in order to permit clarification dialogues such as those presented in the above examples.

1. AN APPLICATION DOMAIN

We tackle the problem of student's modelling within the framework of a particular area: calculus in the field of rational numbers. Although this calculus may seem simple, it nevertheless presents important processing difficulties, both for the learning pupil and for the teaching system. First the calculation processes are not completely deterministic: for instance, one can perform an addition of fractions either before or after possible simplifications. Secondly objects and rules are not really accessible outside the symbolic (and not only numeric) framework in which they are defined.
Thirdly the pupil is given the opportunity of making a number of stereotyped errors, which an ITS cannot completely ignore (as shown in the Introduction).

To reflect the non deterministic nature of the calculations, we have to specify which tasks are obligatory, and which are optional. Optional tasks (like simplifications, factorizations, etc.) may be ignored or postponed without entailing a complete failure of the main task, which consists in evaluating the proposed expressions. However whenever an optional task has not been performed while it was possible, the system will have to notice this significant fact. On the other hand, obligatory tasks are those which must mandatorily be performed when the right moment comes: the process of reducing the expressions would otherwise be blocked. For instance if we want to reduce a sum of two fractions, we have to perform their addition; or if we want to reduce a ratio A/B we must reduce A into A', B into B', and only then A'/B' into the final answer (but possible simplifications of the successive ratios can be performed or not at different moments).

Stereotyped errors of this kind are well known from studies of high school algebra (see [Matz 82]), where pupils wrongly apply general-looking rules such as:

1a: (A+B)/(B+C) = A/C    1b: A/B + C/D = (A+C)/(B+D)    1c: A/(B+C) = A/B + A/C

The same type of errors is to be found currently in the calculus of rational numbers. Let's take examples similar to those quoted above:

2a: (2+5)/(5+3) = 2/3    2b: 2/(3+7) = 2/3 + 2/7    2c: 2/3 + 2/5 = (2+2)/(3+5)

The similarity of these errors is not surprising, since the pupil, facing the "concrete" numerical expressions, tries to recognize them as instances of general symbolic patterns which he thinks he can transform according to some general rules. This means that certain situations, which we can legitimately describe in a formal symbolic manner (like in 1a, 1b, 1c), induce some pupils to "apply" non valid calculation methods: a wrong simplification like in 2a, a strange splitting like in 2b, or a wrong performance of the addition like in 2c. Some of these actions may be considered as wrong executions of legitimate tasks (which could otherwise be properly performed); other actions must be considered as the execution of totally illegitimate "tasks" (which should in no case whatsoever be performed). Actually this categorization into legitimate and illegitimate tasks is up to a certain point arbitrary. It essentially depends upon the manner in which the system will use it in its comments: sometimes it may be preferable to indicate to the pupil the similarity between his errors and right procedures; some other times it is better to consider them as thoroughly absurd.

Several authors (see [Brown 78/80], [Sleeman 84], [Resnick et al. 85]) have endeavoured to explain the psychological nature of these erroneous cognitive processes. We shall not go here into this aspect of the matter. In our system we use a purely formal/procedural description of errors, made in the same style as the description of valid processing rules.

2. KNOWLEDGE REPRESENTATION AND THE GLOBAL MODEL OF THE STUDENT

To sum up, calculations are for the system the succession of a certain number of tasks carried out on various expressions. This means that we adopt the plan paradigm in order to describe their execution (quite in the same line as [Genesereth 82]). When the system analyzes one of the equalities, whether valid or not, stated by the pupil, let us say E1 = E2, it infers one or may be several plans which, applied to E1, result in E2. We call context a pair composed of a task and a situation where the task is to be applied. In fact in our system the situation is completely specified by formal characteristics of the processed expressions, such as their symbolic pattern.
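Before turning to the frames themselves, here is a minimal sketch (ours, not the paper's Vmprolog implementation) of how a student's addition step can be categorized by executing the valid procedure and a few deviant ones on the same operands. The two deviant patterns shown are the one cited above (a/b + a/d --> a/(bxd)) and the 1b-style addition; a real system would store many more.

```python
# Sketch (ours): categorize a pupil's step for a/b + c/d by comparing the
# student's result against valid and deviant execution procedures.
from fractions import Fraction

def valid_add(a, b, c, d):
    """a/b + c/d by the taught rule."""
    return Fraction(a * d + b * c, b * d)

DEVIANT_ADDS = {
    "a/b + a/d -> a/(b*d)":
        lambda a, b, c, d: Fraction(a, b * d) if a == c else None,
    "a/b + c/d -> (a+c)/(b+d)":
        lambda a, b, c, d: Fraction(a + c, b + d),
}

def categorize_addition(a, b, c, d, result):
    """Label the student's result for a/b + c/d."""
    if result == valid_add(a, b, c, d):
        return "valid addition"
    for name, rule in DEVIANT_ADDS.items():
        if rule(a, b, c, d) == result:
            return "wrong addition (" + name + ")"
    return "unrecognized step"

# The error in U3: categorize_addition(2, 3, 2, 5, Fraction(2, 15))
# returns "wrong addition (a/b + a/d -> a/(b*d))".
```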
In each context (understood as task + situation) occurring during the analysis of an equality, the system looks into the global model of the student, where all the possible methods for this context are represented. It selects some of these (according to certain heuristics which we shall explain later), and executes them. Of course, due to the recursive nature of plans, some of these methods call upon the execution of several other tasks, while other methods consist of one single final procedure which the system executes without analyzing it further.

This process is non deterministic in that, in a given context, all possible methods will be tried. If none is successful, or if the heuristic does not allow to test any method at all, the system reports failure in the case of an obligatory task; but for an optional task, the process will go on and the next task in the plan will be examined. Furthermore, execution contexts for a given task can be ordered according to the generality of their input and output filters, so that execution methods are inherited along this particularization link.

There is now an abundant AI literature about plan generation and inference. The representations we have used are simple, and the best we can do is to give now some examples. For more fundamental information on plan representation, the reader is invited to consult [Allen 80] or [Charniak and McDermott 85].

Figure 2 shows the general form of a context frame. The slots of the frame define the task to be executed and the relevant situation, i.e. the constraints ruling the input and output expressions (the latter information being particularly useful for plan inference). They also specify all valid methods on the one hand, and on the other hand all the possible methods (valid or not) which might be used by the pupil. Each of these methods is assigned a certain level from 1 to 4, the signification of which is as follows:

- level 1: in the course of the most recent occurrences of the context, the method has been predominantly used by the pupil among all those which are declared for that context
- level 2: the method has been occasionally used in the course of the most recent occurrences
- level 3: the method has been used, but not recently
- level 4: the method has never been used.

These "most recent occurrences" of the context define the scope of the short-term memory of the system (for us this scope has been set at 5 occurrences). Level 3 represents the long-term memory of the system. If the context has not yet occurred for a given student, levels are however assigned to the possible methods; but they only represent the general expectation of the system concerning a typical student.

The distribution of all possible methods in four levels is immediately obtained from the list (with repetition) of the methods most recently used by the pupil. This list is the value to be found in the OCCURRENCE-TRACE slot of the context frame. Another slot, called LAST-CHANGE, contains an integer, which is incremented each time the pupil uses a method belonging to levels 1 or 2. If he uses some other method, this counter is reset at value 0.

Furthermore, in order to be able to trigger off its tutoring rules, the system must also find in the global model some qualitative evaluations of the pupil's behavior in each context. For that reason the slot COHERENCE contains an association list pairing session identificators with some appreciations.
Briefly the coherence is appreciated as being all the higher as there are fewer methods (valid or not) declared at levels 1 or 2 (contexts should be defined with enough accuracy so that this be meaningful). Similar information is given in the QUALITY slot, where used methods are evaluated on the basis of their validity.

Fig. 2: representation of a task execution context

<context-name>
  TASK : <task-name>
  TYPE : obligatory/optional
  INPUT-FILTER : <pattern of exp> <additional conditions>
  OUTPUT-FILTER : <pattern of exp> <conditions>
  VALID-METHODS : a list of <methods>
  METHODS : a list of pairs <level,method>
    where <level> = 1/2/3/4 and each <method> has the following format:
    ( (<method-name>, <input>, <output>) :
      (<task1>,<input1>,<output1>), ... (<taskn>,<inputn>,<outputn>) )
  OCCURRENCE-TRACE : list of <method-name>
  LAST-CHANGE : <integer>
  COHERENCE : a list of <session-identificator, appreciation>
  QUALITY : a list of <session-identificator, appreciation>
    where <appreciation> = good/average/bad ...
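One way to hold the frame of Fig. 2 in code is sketched below (ours; the slot names follow the figure, the types and the update rule for the short-term trace are our assumptions):

```python
# Sketch (ours): a context frame and its short-term memory bookkeeping.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ContextFrame:
    name: str
    task: str
    obligatory: bool                      # TYPE slot
    input_filter: Callable[[str], bool]   # matches the input expression
    output_filter: Callable[[str], bool]
    valid_methods: List[str]
    methods: List[Tuple[int, str]]        # (level 1-4, method name)
    occurrence_trace: List[str] = field(default_factory=list)
    last_change: int = 0
    coherence: List[Tuple[str, str]] = field(default_factory=list)
    quality: List[Tuple[str, str]] = field(default_factory=list)

    def record_use(self, method: str, short_term: int = 5) -> None:
        """Update the 5-occurrence short-term trace and LAST-CHANGE:
        incremented for a level 1-2 method, reset to 0 otherwise."""
        recent = {m for lvl, m in self.methods if lvl <= 2}
        self.last_change = self.last_change + 1 if method in recent else 0
        self.occurrence_trace.append(method)
        del self.occurrence_trace[:-short_term]
```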
Figure 3 presents some typical contexts concerning the reduction of a sum of 2 fractions.

Fig. 3: some typical contexts for the sum of 2 fractions

Context68 (a context for the reduction of a pair of fractions)
  TASK : TRANSFORM
  INPUT-FILTER : a/b ope c/d , with operator(ope), fraction(a/b), fraction(c/d)
  OUTPUT-FILTER : result , with fraction(result) or integer(result)
  METHOD : ((level1,METHOD1)) with
    ((METHOD1, a/b ope c/d , result) :
      (SIMPLIF, a/b , a'/b')
      (SIMPLIF, c/d , c'/d')
      (CROSS-SIMPLIF, a'/b' ope c'/d' , a''/b'' ope c''/d'')
      (OPERATE, a''/b'' ope c''/d'' , num/den)
      (EVAL, num , n)
      (EVAL, den , d)
      (SIMPLIF , n/d , result))

Context31 (a context for the simplification of a fraction)
  TASK : SIMPLIF
  TYPE : OPTIONAL
  INPUT-FILTER : a/b , fraction(a/b)
  OUTPUT-FILTER : res , fraction(res) or integer(res)
  VALID-METHODS : SIMP , NON-EXEC
  METHODS : ((level1,SIMP) (level1,NON-EXEC))
    where SIMP(a,b) is the name of the simplification procedure for a pair (a,b)
    and NON-EXEC is the "empty" procedure representing non-execution of optional tasks

Context32 (a context for the erroneous cross-simplification of a pair of fractions)
  TASK : CROSS-SIMPLIF
  TYPE : OPTIONAL
  INPUT-FILTER : a/b + c/d , fraction(a/b) , fraction(c/d)
  OUTPUT-FILTER : ...
  VALID-METHODS : NON-EXEC
  METHODS : ((level1,NON-EXEC) (level4,CROSSIMP))
    where CROSSIMP(a,b,c,d) is the name of the procedure (here erroneous) for
    "cross-simplifications" of a quadruplet (example: 2/9 + 6/5 = 2/3 + 2/5 !)

Context6 (a context for the addition of 2 fractions)
  TASK : OPERATE
  TYPE : OBLIGATORY
  INPUT-FILTER : a/b + c/d , ...
  OUTPUT-FILTER : ...
  VALID-METHODS : A1
  METHODS : ((level1,A1) (level4,D1) (level4,D2) ...)
    where A1(a,b,c,d) = (ad + bc) / (bd)
    D1(a,b,c,d) = (a + c) / (b + d)
    D2(a,b,c,d) = (a + c) / (bd) ...

The context frames set forth above are part of the model of the student, because they are progressively individualized. But the model also contains other frames which we shall mention here briefly. The pupil's competence is tested by numerous exercises which are generated from types. Each type addresses particular contexts, and tries to avoid some other ones: for instance one can test the addition of fractions with or without the possibility of simplifications. For each exercise type the global model contains various counters such as the number of exercises already done, the ratio of successes of the pupil, etc. Lastly the system produces for each session a frame containing, among other slots, the list of the appeared contexts, and the list of contexts where the pupil's behavior has changed during the session.

In summary the "student's model" contains, in a redundant way, local (episodic) and global (qualitative) information about the pupil's behavior, which is observed and evaluated context by context. The updating of this model takes place at different moments of a session.
3. PLAN INFERENCE

When confronted with an equality E1 = E2 asserted by the pupil, the system first examines whether the equality is valid. To do that, the plan interpreter executes on the expression E1 a general TRANSFORM task, by selecting only those context frames whose input and output filters match E1 and E2 respectively. Once such a frame has been chosen, only the valid methods are tried and the possible results are compared with E2. In all cases, whether the equality is found valid or not, the system starts again the whole analysis, this time using wider heuristics (indeed, certain valid equalities may be obtained through erroneous operations). If the analysis succeeds, the system now possesses one or several plans explaining the equality. It is on the basis of these plans that the system focuses its comments. In the following section we shall see how it is possible to interrogate the pupil when there are uncertainties about the plan he has adopted. For the moment we present the heuristics used by the system for plan inference.

For any one of these heuristics, there is no restriction on the choice of a relevant context: any frame whose task and filters match will be tried. Restrictions are imposed only upon the accessible methods. We have defined 3 heuristics for the choice of methods. Going from the first to the third, we progressively put in question the previously observed behavior.

Heuristic 1 assumes the pupil's regularity, i.e. the plan interpreter tries to apply in each context the methods declared at level 1 or 2, or the methods already applied in the current session (they possibly have not yet been recorded at level 1 or 2 of the model). Moreover the system always tries to apply the valid methods, in the assumption that the pupil has benefited from the teaching! All the possible plans compatible with this heuristic are inferred. If no plan is found, and only in that case, the system tries heuristic 2.

Heuristic 2 makes available all the methods of heuristic 1, plus the methods mentioned at level 3 in the global model. Furthermore it gives access to certain erroneous patterns which have possibly never been used before, but could be at any moment by an inadvertent pupil (for instance, forgetting a sign '-' in the result). If no plan is found with heuristic 2, and only in that case, the system goes on to heuristic 3.

Heuristic 3 simply gives access to all possible methods.

In fact, to avoid a combinatorial explosion, and also to avoid inferring totally absurd plans, the set of possible methods has to be filtered (especially for heuristic 3). This filtering takes into account the already inferred part of the ongoing plan. Criteria for this filtering are:

- maximum number of errors mentioned by a plan (one has here to distinguish between errors made "sequentially" or "in parallel")
- size of the analyzed data
- local coherence (if the same context recurs within the ongoing plan, the same method is applied)
- use of certain "crazy" errors only if the rest of the plan is correct
- execution in a normalized order of certain universally commuting actions (but then if the plan succeeds, the system must find out all the previously inhibited variants).

Note that the internal structure of a plan is that of a tree, whose nodes are instances of context frames, labelled by a context name, the input and output expressions, and the name of the executed method.
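The three heuristics above amount to a widening filter over the declared methods. A minimal sketch (ours; the additional plan-level filtering criteria are not shown):

```python
# Sketch (ours): the widening ladder of heuristics 1-3 as a method filter.
# `model_levels` maps method name -> level 1-4 for the current context;
# `valid` and `session_used` come from the frame and the session trace.
def accessible_methods(model_levels, valid, session_used, heuristic):
    """Return the method names the plan interpreter may try."""
    if heuristic == 1:
        # pupil's regularity (levels 1-2, current session) + taught methods
        recent = {m for m, lvl in model_levels.items() if lvl <= 2}
        return recent | set(valid) | set(session_used)
    if heuristic == 2:
        # add the long-term memory (level 3); the paper also admits a few
        # never-used "slip" patterns here, which this sketch omits
        return accessible_methods(model_levels, valid, session_used, 1) | \
               {m for m, lvl in model_levels.items() if lvl == 3}
    return set(model_levels)  # heuristic 3: all possible methods

# The interpreter tries heuristic 1 first, widening to 2 and then 3 only
# when no plan at all can be inferred.
```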
At the end of this first phase of the analysis, it may happen that several plans are candidates to the explanation of the current equality. Figure 4a presents a simple example of such a situation. We shall now show how the system reacts to this uncertainty.

[Fig. 4a: The pupil is asked to reduce 4/6 + 5/7; he answers by the equality 4/6 + 5/7 = 29/21. The plan interpreter infers 2 possible plans for this valid result: in the first, SIMPLIF in Cont31 transforms 4/6 into 2/3 and the addition is then performed on 2/3 + 5/7; in the second, SIMPLIF in Cont31 is not executed (method non-exec, out = 4/6), OPE in Cont6 performs the addition, and the result is simplified afterwards.]

Fig. 4b: Both plans are plausible; the system knows that an addition and a simplification have been performed, but does not know in what order they were effected; it wants to explain to the pupil that it is better to simplify first. To clarify the situation, the system can focus on either of the 2 following calculation steps:

Step 1 -- Task: Simplif; Expression: 4/6; Appreciation: valid; Possible <context,method>: (Cont31,SIMP) (Cont31,NON-EXEC)
Step 2 -- Task: Addition; Expression: +; Appreciation: valid; Possible <context,method>: (Cont6,A1)

If Step 1 is selected, the system asks the pupil to explicit his simplification. He answers by the equality: 58/42 = 29/21. This equality is analyzed, and by cross-checking with the two candidate plans, the system is now sure of the simplification time. In a similar way, the pupil could have been asked to explicit Step 2, thus writing: 4/6 + 5/7 = (4x7 + 6x5) / (6x7). In this case also the system would have reached the same conclusion.

The existence of alternative plans simply indicates that the analyzed equality is ambiguous: it may be interpreted in several ways. This ambiguity however may not affect all the task execution contexts mentioned by the different plans. To be able to comment upon the pupil's performances, the system needs to know which are the contexts affected by the ambiguity, and for each of these contexts, it must have the most accurate possible description of the (implicit) calculation steps whose complete clarification would eliminate the uncertainty concerning the context.

Firstly, it is easy to define the contexts affected by the ambiguity: they are those which are not mentioned by all candidate plans, or those which, although mentioned in all plans, are processed by different methods depending upon the plan.

Secondly, what are the (implicit) calculation steps about which the system can on good grounds decide they have actually been processed by the pupil, and on which it could focus a question?

In order to answer this question, let us first recall that the tree structure of a plan reflects the logical and temporal structure of a task execution. Assuming that we climb down along a branch starting from the root, we note at each node the task name Ti and the input expression Ei. We obtain a sequence: <T1,E1>, <T2,E2>, ..., <Ti,Ei>, ..., <Tn,En>. In other words the execution of T1 on E1 has required (among other tasks) the execution of T2 on E2, ..., Ti on Ei, ..., Tn on En. If all candidate plans possess in common the same sequence, we can take for granted that this hierarchy of situations has actually occurred. The system can "develop" this sequence for the pupil and designate without ambiguity the step corresponding to the lowest node <Tn,En>. This gives rise to a first type of question: "how did you execute Tn on En?". It is yet possible to go further down in the trees, but questions then become less precise. Let us suppose for example (with the previous notations) that all plans mention, among all the sons of the node <Tn,En>, a node <Tp,*>. This means that the execution of Tn on En has required the execution of Tp; but depending on the plan, Tp has been executed on different expressions. The system can however, after having designated the "higher" step <Tn,En>, ask the question: "show me how you did Tp". An obvious variant of this second type of question is obtained by assuming that it is not the processed expression which varies from one plan to another, but the task. So the system could similarly ask: "show me what you did when you obtained expression Ep".

Of course this investigation into the implicit (but "accessible") calculation steps can only be meaningful if it is carried out on a small number of not very deep trees. Thus, given a certain task execution context affected by the ambiguity of the current equality, the system extracts the "accessible" calculation steps corresponding to nodes as close as possible to those labelled by the context under examination. Any one of these steps may be an opportunity to ask clarifications from the pupil. The pupil gives this clarification in the form of new equalities expliciting some details of his former equality. These new equalities are in turn analyzed, and the inferred plans are cross-checked with the previous candidate plans. Thus the system reduces its uncertainties (see Fig. 4b).
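The search for the unambiguous step <Tn,En> is a common-prefix computation over the candidate plan trees. A minimal sketch (ours; the node representation is an assumption):

```python
# Sketch (ours): find the deepest <task, input expression> chain shared by
# all candidate plan trees. A plan node is a tuple (task, expr, children).
def root_chains(node, prefix=()):
    """All root-to-node chains of (task, input expression) pairs."""
    chain = prefix + ((node[0], node[1]),)
    yield chain
    for child in node[2]:
        yield from root_chains(child, chain)

def deepest_common_step(plans):
    """Longest chain present in every candidate plan; its last pair
    <Tn,En> is the step for "how did you execute Tn on En?"."""
    common = set(root_chains(plans[0]))
    for p in plans[1:]:
        common &= set(root_chains(p))
    return max(common, key=len)[-1] if common else None
```

For the Fig. 4a example the two plans share only the root TRANSFORM step on 4/6 + 5/7, which is why the system must fall back on the less precise second type of question about the SIMPLIF and OPERATE tasks.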
CONCLUSION AND FUTURE DIRECTIONS

In this paper we have addressed the problem of modelling a student who performs non-deterministic calculations (meaning that the calculations are not absolutely constrained by the assigned task), and who addresses them to the system in a more or less elliptic way. Calculus being a well structured activity, the plan formalism is adequate for its representation. Non-determinism is reflected by the existence of optional tasks and/or the variety of possible methods in a given context. Since the drafting of calculations is not entirely normalized either, a certain ambiguity is unavoidable. We explained in Section 4 what are, in our view, the only chances for the system to manifest to the student a partial comprehension of his calculations, and to ask relevant questions in order to improve this comprehension.

The approach presented here has been the basis for an implementation in Vmprolog carried out at the IBM Research Center in Paris. Only a part of the architecture given in Figure 1 has been achieved, namely an expert problem solving module (covering the four operators +, -, *, /), a knowledge base (including incorrect knowledge), a student model, and a modelizer analyzing and appreciating equalities asserted by a (simulated) pupil. The modelizer also updates the student model.

Even without mentioning all the discourse understanding and tutoring strategies problems, there is still obviously much to be done, especially for the local modelling of the student's calculations, even if they are analyzed from a strictly "procedural" point of view. Here we have given methods for analyzing one equality, but we have not said how to process complex calculations which may spread over many equalities. Between those two abilities there is, if a metaphor may be permitted here, as long a distance as between understanding a sentence and understanding a discourse.

A last word of caution regarding the concept of plan which we have used in this paper. We do not pretend that the plans in our system exactly reflect the intentions of the pupil. They are only a way to describe his actions. He is not supposed to acknowledge completely this description, more particularly when this description mentions what we have called "erroneous methods", which by definition have never been taught to him. The general relationship between an "execution method", written in symbolic form as in the knowledge base of the system, and a "computation act" performed on numbers, must not be a priori considered as the intentional application of a rule, but only as a resemblance relation between two patterns. Similarly we think that it is perhaps better not to use (as we did here) the terms task and method, which carry too much intentionality, but rather to speak of actions and decompositions of actions. So, unless the same pattern of error returns several times, or unless the pupil is prompted to express this pattern in a symbolic (literal) form to justify his calculation, the system should merely categorize the observed performance as a "wrong addition", "wrong simplification", etc., without going into further details regarding the origin of the mistake.

Note: the work presented here is part of the first author's thesis [Visetti 86].

References
[Allen 80] J. Allen, C. Perrault. Analyzing intention in utterances. Artificial Intelligence 15 (1980), 143-178.
[Brown 78] J.S. Brown, R. Burton. Diagnostic models for procedural bugs in basic mathematical skills. Cognitive Science 2 (1978), 155-192.
[Brown 80] J.S. Brown, K. VanLehn. Repair theory: a generative theory of bugs in procedural skills. Cognitive Science 4 (1980), 379-426.
[Charniak-McDermott 85] E. Charniak, D. McDermott. Introduction to Artificial Intelligence. Addison-Wesley, 1985.
[Genesereth 82] M.R. Genesereth. The role of plans in intelligent teaching systems. Intelligent Tutoring Systems (eds. Sleeman and Brown). Academic Press, 1982.
[Matz 82] M. Matz. Toward a process model for high school algebra errors. Intelligent Tutoring Systems (eds. Sleeman and Brown). Academic Press, 1982.
[Resnick et al. 85] L. Resnick, E. Cauzinille-Marmeche, J. Mathieu. Understanding algebra. International Seminar on Cognitive Processes in Maths and Maths Learning, University of Keele, 1985.
[Sleeman et al. 82] D. Sleeman, J.S. Brown (editors). Intelligent Tutoring Systems. Academic Press, 1982.
[Sleeman 84] D. Sleeman. An attempt to understand students' understanding of basic algebra.
Cognitive Science 8 (1984), 387-412.
[Visetti 86] Y.M. Visetti. User modeling in ICAI. Doctorat de l'Universite Paris 6, 1986.
[Vmprolog 85] VM/Programming in Logic. IBM Programmer's Manual, 1985.
[Zissos 85] A. Zissos, I. Witten. User modelling for a computer coach. Int. J. of Man Machine Studies 23 (1985), 729-750.
Visual Estimation of 3-D Line Segments From Motion - A Mobile Robot Vision System (1)

William M. Wells III
Artificial Intelligence Center, SRI International
333 Ravenswood Avenue, Menlo Park, CA 94025
ARPANET: Wells@ai.ai.sri.com

Abstract

An efficient technique is presented for detecting, tracking and locating three-dimensional (3-D) line segments. The utility of this technique has been demonstrated by the SRI mobile robot, which uses it to locate features in an office environment in real time (one Hz frame rate). A formulation of Structure-from-Motion using line segments is described. The formulation uses longitudinal as well as transverse information about the endpoints of image line segments. Although two images suffice to form an estimate of a world line segment, more images are used here to obtain a better estimate. The system operates in a sequential fashion, using prediction-based feature detection to eliminate the need for global image processing.

I. Introduction

Three-dimensional (3-D) visual sensing is a useful capability for mobile robot navigation. However, the need for real-time operation using compact, on-board equipment imposes constraints on the design of 3-D vision systems. For the SRI mobile robot, we have chosen to use a feature-based system whose features are image and world line segments. Line segments as features provide a practical compromise between curves, which are complex to analyze, and point features, which are often sparse in man-made environments.

We use a relatively fast frame rate (one Hz) to reduce the complexity of the feature correspondence problem. Because features don't move very far in closely spaced images, little searching is needed to find a feature's successor. Combining a fast frame rate with prediction-based feature detection can greatly reduce the portion of the image to which feature detectors must be applied. Another benefit of tracking world features in closely spaced images is that volumetric free-space information is readily available.

Real-time 3-D vision may be further simplified by avoiding the Motion-from-Structure [Ullman, 1979] problem. We derive camera poses from odometry. (Inertial navigation systems are becoming increasingly practical for this purpose.) Because the vision system is used for navigation among stable objects, we need be concerned only with estimating the locations of stable features in the world. We use other sensors for rapidly moving objects.

With these design parameters, we are faced with a problem of Structure-from-Motion [Ullman, 1979] in estimating a static world feature from its observation in a sequence of images as the camera is moved. We have devised a simple formulation of Structure-from-Motion that is based on line segments. It uses simple vector and 3-by-3 matrix operations. The most complicated aspect of the formulation is the inversion of 3-by-3 matrices.

(1) This research was supported in part by contract SCA 50-1B from General Motors Corporation.

II. Overview

Here we describe the vision system as it is implemented on the SRI mobile robot.

Figure 1: SRI Mobile Robot

The SRI mobile robot [Reifel, 1987] is equipped with an on-board video camera, frame buffer, and 68010 computer system (Figure 1). Optical shaft encoders coupled to the two main drive wheels provide odometric data that are used to derive camera poses. We use closely spaced images to reduce the complexity of the feature correspondence problem.
Combining closely spaced images with prediction-driven feature detection allows the application of edge operators to be limited to small areas of the image that are near predictions, thus eliminating the need for global image processing. (Prediction-based feature detection was used to advantage in Goad's model-based vision system [Goad, 1986].) Image line segments are detected by a software edge tracker that provides least-squares fits of line segments to linear image edge features. The edge tracker is directed by prototype image segments whose origin will be described below. The tracker finds sets of candidate segments that are close to each prototype. (The measure of such closeness is discussed in section III.B.) We require candidate edge segments to have the same gradient sense or "contrast polarity" as their predecessors.

Our system uses a sequential 3-D line segment estimator to infer world line segments from sequences of corresponding image line segments. The system operates in three phases: "prospecting," "bootstrapping," and "sequential updating." "Prospecting" segments, the first prototype segments the system uses, are generated so that the feature detection component will find new image features. The "bootstrapping" phase is then used as a prelude to the "sequential updating" phase. All prototypes generated during bootstrapping are segments that were detected in the preceding image. While bootstrapping, we entertain alternative hypotheses about a feature's successors in a small tree of possible correspondence sequences. When the tree achieves a minimum depth, we use a nonsequential form of the 3-D segment estimator (described in section III.D.) to generate a world feature estimate as well as a consistency measure for each sequence in the tree. If the most consistent sequence meets a minimum consistency threshold, it is promoted to sequential updating; otherwise, it is discarded. During the "sequential updating" phase, we use the sequential form of the 3-D segment estimator (section III.D.). Newly detected image features are folded into world feature estimates as each new picture arrives. Previous 3-D estimates are used to generate prototype segments to direct the feature detector. The prototype segments are generated by taking the central projections of the previous 3-D segment estimates into the image plane using the new camera pose. The detected image feature that is closest to the prototype is, if close enough, used as the successor. In this way the system tracks a set of environmental features.

The robot finds walls by fitting planes to sets of perceived 3-D segments. These segments are grouped using a co-planarity measure. Once the walls have been located, the robot servos to a path which is centered between the walls.

Figure 2: Hallway

Figure 2 shows an intensity image the robot saw in a hallway. Figure 3 displays a stereo view of an unedited collection of line segments that were estimated by the robot and used to guide its path down the hallway. The frame rate was one Hz, while the robot moved at 30 mm/s. Most of the segments that the robot gathered were vertical. This is a consequence of the way the "prospecting" segments are arranged, the motion of the robot, and the characteristics of the hallway.
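The prediction step lends itself to a brief sketch. The fragment below is a simplified, hypothetical rendering: it assumes a pinhole camera with known pose, and uses plain Euclidean endpoint distance as the closeness measure, standing in for the error measure of section III.B. All function and parameter names are invented for illustration.

import numpy as np

def project_point(P, R, t, f):
    """Central projection of world point P into a camera with pose
    (R, t) mapping world to camera coordinates and focal length f."""
    Pc = R @ P + t
    return f * Pc[:2] / Pc[2]

def prototype_segment(segment_3d, R, t, f):
    """Project both endpoints of a 3-D segment estimate to obtain the
    2-D prototype used to direct the edge tracker."""
    return [project_point(P, R, t, f) for P in segment_3d]

def match_successor(prototype, detections, max_dist):
    """Return the detected segment closest to the prototype, or None
    if nothing lies within max_dist (no successor found)."""
    best, best_d = None, max_dist
    for seg in detections:
        d = max(np.linalg.norm(p - s) for p, s in zip(prototype, seg))
        if d < best_d:
            best, best_d = seg, d
    return best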
Occasionally the system will encounter a seemingly consistent set of miscorrespondences, which will lead to an incorrect hypothesis surviving to the sequential updating phase. Such hypotheses fail quickly when subjected to the long-term consistency requirement.

In the future, we plan to investigate the use of acquired models within this framework. Such models may provide a means to measure the motion of the robot using Motion-from-Structure. Models may also make it possible to track moving objects. We plan to increase the frame rate of the vision system by installing a 68020 computer in the robot, perhaps using several CPU boards.

III. Estimation of 3-D Segments

In this section, we present a simple formulation of Structure-from-Motion that is based on line segments. It uses longitudinal as well as transverse information about segment endpoints. Given a sequence of images with corresponding line segment features, we estimate a 3-D line segment that minimizes an image error measure summed over the sequence of images. Camera poses are assumed to be known.

In section III.A., we discuss the choice of line segments as features to be used within the paradigm of Structure-from-Motion. We then define an image-based measure of the discrepancy between two line segments (section III.B.). In section III.C., we express the error measure in terms of a world line segment and its projection as detected in the image. We then estimate a 3-D segment which best fits a sequence of observations by varying the segment to minimize the error measure summed over a sequence of images. This yields a problem of nonlinear minimization. In section III.D., we describe a sequential estimator that linearizes the problem. The robot uses an implementation of this linearized sequential estimator to estimate 3-D world line segments.

Figure 3: Estimated Line Segments

A. Simple Structure-from-Motion Using Line Segments

Structure-from-Motion is a useful and popular paradigm for robotic visual perception [Aggarwal, 1986]. Early work in feature-based motion analysis was based on world and image points [Roach and Aggarwal, 1979] [Longuet-Higgins, 1981] [Hannah, 1980] [Gennery, 1982], while later research focused on straight lines [Yen and Huang, 1983]. Points and lines also have been used widely in robotic vision [Goad, 1986] [Lowe, 1985].

Straight line segments are useful features for motion analysis and robotic vision applications [Ullman, 1979]. Point features are as simple to analyze, but unfortunately, prominent point features can be scarce, particularly in man-made environments. Cultural and industrial scenes usually contain prominent linear features that can be reliably detected by edge finders. Although cultural and industrial scenes often also have significant curved features, such features are more difficult to analyze than points or lines.

Edge finders are very good at determining the transverse position of a linear feature in an image. They are less accurate at finding the longitudinal (along the edge) position of the ends of a linear feature, as they usually use thresholds to locate feature terminations. Although the longitudinal information is less reliable than the transverse information, we believe that it is still useful information, which would be lost if linear features were abstracted into straight lines rather than line segments. Line segments carrying endpoint information present a balance between analytical simplicity and practicality as image features.
B. Image Error Measure

We propose the following as a component of the measure of the discrepancy between a pair of image line segments (Figure 4):

  ε = [α(P − S)·û]² + [β(P − S)·v̂]² .   (1)

Figure 4: Image Error Measure

Here ε represents the squared error due to one pair of corresponding endpoints. The total error for the corresponding segments is the sum of the errors for both corresponding endpoint pairs. P and S are two-vectors describing the image locations of endpoints of line segments σ and τ respectively. û is a unit vector parallel to σ, while v̂ is a unit vector perpendicular to σ. The longitudinal and perpendicular components are weighted by α and β.

We have settled on β/α = 16 empirically, giving perpendicular errors 16 times the weight of longitudinal errors. This was deemed to be the smallest weighting of longitudinal errors that provided estimates that were "reasonably" accurate longitudinally, while not overly disturbing the transverse components of the estimates with less reliable longitudinal information.

If an image line segment is clipped by the boundaries of an image, that endpoint has little meaningful longitudinal information. One strategy for this case sets α to zero, ignoring the longitudinal information in that particular image.

C. 3-D Error Measure

Figure 5: Imaging Geometry

We may recast Eq. (1) in terms of world 3-vectors (Figure 5). An endpoint of the 3-D segment τ is P; their central projections into the image plane are σ and p respectively. The endpoint of σ that corresponds to p is s. Here, p and s are 3-vectors that refer to locations of image points in the 3-space in which the image plane is embedded. C is the projection center of the camera, and f is the focal length of the camera. The image plane and σ define an orthonormal basis composed of ĉ, which is a 3-D unit vector parallel to σ; n̂, which is a 3-D unit vector normal to the image plane; and ô, which is perpendicular to both ĉ and n̂. Two additional unit vectors are defined by r̂, which is the normalization of (P − C), and ŝ, which is the normalization of (s − C).

The image error measure may be rewritten as:

  ε = α²[(p − s)·ĉ]² + β²[(p − s)·ô]² .   (2)

Next, we express the error measure in terms of more convenient unit vectors. Let

  ĥ = normalize(ŝ × ĉ) ,   l̂ = normalize(ŝ × ô) .

Then we can express l̂ and ĥ in terms of ĉ, ô and n̂:

  l̂ = λ_c ĉ + λ_n n̂ ,   ĥ = ω_o ô + ω_n n̂ .

Noting that n̂·(p − s) = 0, so that (p − s)·l̂ = λ_c (p − s)·ĉ and (p − s)·ĥ = ω_o (p − s)·ô, we may rewrite Eq. (2) as

  ε = (α²/λ_c²)[(p − s)·l̂]² + (β²/ω_o²)[(p − s)·ĥ]² .

Since s = C + δŝ for some δ, and ŝ·l̂ = 0 and ŝ·ĥ = 0, we may write

  ε = (α²/λ_c²)[(p − C)·l̂]² + (β²/ω_o²)[(p − C)·ĥ]² .   (3)

Now we will use a relation of central projection to get the error in terms of P rather than p. The standard "z-division" form of central projection may be written as follows (Figure 5):

  (p − C) = f (P − C)/z ,   where z = (P − C)·n̂ .

Letting

  a = αf/λ_c ,   b = βf/ω_o ,

Eq. (3) may be written as

  ε = (1/z²){ a²[(P − C)·l̂]² + b²[(P − C)·ĥ]² } .   (4)
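A small sketch of the endpoint error of Eq. (1) may be helpful; it is an illustrative reading of the measure, not the SRI implementation, and the default weights simply encode the β/α = 16 ratio quoted above. Setting an endpoint's α to zero discards its longitudinal term, as suggested for clipped segments. All names are invented for the example.

import numpy as np

def endpoint_error(P, S, u_hat, alpha, beta):
    """Eq. (1): squared discrepancy for one pair of corresponding
    endpoints; u_hat is the unit direction of segment sigma and the
    transverse direction is u_hat rotated by 90 degrees."""
    v_hat = np.array([-u_hat[1], u_hat[0]])
    d = P - S
    return (alpha * (d @ u_hat)) ** 2 + (beta * (d @ v_hat)) ** 2

def segment_error(sigma, tau, alphas=(1.0, 1.0), beta=16.0):
    """Total discrepancy between image segments sigma and tau: the sum
    over both endpoint pairs.  alphas[i] = 0 ignores the longitudinal
    information of an endpoint clipped by the image boundary."""
    u = sigma[1] - sigma[0]
    u_hat = u / np.linalg.norm(u)
    return sum(endpoint_error(p, s, u_hat, a, beta)
               for p, s, a in zip(sigma, tau, alphas))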
If we consider ε_i to be the squared error for a given endpoint due to detection in the ith member of a set of images, then the total error for that endpoint is given by

  E = Σ_i (1/z_i²){ a_i²[(P − C_i)·l̂_i]² + b_i²[(P − C_i)·ĥ_i]² } .

Varying P to minimize E will yield an estimate for the 3-D segment endpoint. This is a nonlinear estimation problem by virtue of the factor 1/z_i².

D. Approximation and Minimization

There are many ways to minimize E. We will discuss a sequential method that works well in practice, which is designed for an application where a set of images arrives sequentially and where an estimate of the 3-D feature is desired after each image. This is often the case in robotic guidance.

The technique involves approximating z_i = z(P) by z_i⁻ = z(P⁻), where P⁻ is the previous estimate of P. The process may be bootstrapped by using a nominal starting value for z_i. This method essentially substitutes a "pseudo-orthographic" approximation (a different approximation for each image) for perspective projection. The error terms ε_i become invariant to translations of P along n̂_i. The approximation is exact for points on one plane in the world, namely the plane containing P⁻ that is parallel to the image plane. Within the framework of the minimization, this is also equivalent to replacing the (unsquared) error functions of P by second-order Taylor expansions. The expansions are about the point where the ray emanating from C_i along ŝ_i pierces the previously mentioned plane. The approximated squared error measure is also easy to visualize, as it is the weighted sum of the squared perpendicular distances of P from a pair of planes. The two planes both contain the camera center and the endpoint s of σ. One contains the segment σ, while the unit vector ô lies in the other.

After this approximation, the ith error (Eq. (4)) may be written as

  ε_i = (1/(z_i⁻)²){ a_i²[(P − C_i)·l̂_i]² + b_i²[(P − C_i)·ĥ_i]² } .

This is quadratic in P and its sum is easy to minimize. In matrix notation,

  ε_i = (a_i²/(z_i⁻)²)(P − C_i)ᵀ l̂_i l̂_iᵀ (P − C_i) + (b_i²/(z_i⁻)²)(P − C_i)ᵀ ĥ_i ĥ_iᵀ (P − C_i) ,

or

  ε_i = (P − C_i)ᵀ M_i (P − C_i) = Pᵀ M_i P − 2Pᵀ M_i C_i + C_iᵀ M_i C_i .
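Under stated assumptions (unit vectors l̂_i, ĥ_i, weights a_i, b_i and depth estimates z_i⁻ already computed per image), a minimal sketch of the per-image quadratic form M_i and of the accumulate-and-solve estimator derived in the next few equations might look as follows. The class and its names are hypothetical, and the condition-number threshold is an arbitrary illustration of the degeneracy test discussed below.

import numpy as np

def quadratic_form(l_hat, h_hat, a, b, z):
    """Per-image matrix M_i: eps_i = (P - C_i)^T M_i (P - C_i),
    with z the pseudo-orthographic depth estimate z_i^-."""
    return (a**2 * np.outer(l_hat, l_hat) +
            b**2 * np.outer(h_hat, h_hat)) / z**2

class EndpointEstimator:
    """Sequential estimate of a 3-D segment endpoint: accumulate
    M = sum M_i and V = sum M_i C_i, then solve P = M^-1 V."""
    def __init__(self):
        self.M = np.zeros((3, 3))
        self.V = np.zeros(3)

    def update(self, C, l_hat, h_hat, a, b, z):
        Mi = quadratic_form(l_hat, h_hat, a, b, z)
        self.M += Mi
        self.V += Mi @ C

    def estimate(self):
        # A (nearly) singular M signals a degenerate combination of
        # segment orientation and camera motion: no depth estimate.
        if np.linalg.cond(self.M) > 1e8:
            return None
        return np.linalg.solve(self.M, self.V)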
Defining

  M = Σ_i M_i ,   V = Σ_i M_i C_i ,   k = Σ_i C_iᵀ M_i C_i

allows us to write the total squared error as

  E = Pᵀ M P − 2Pᵀ V + k .

Setting the gradient of E with respect to P to zero,

  0 = ∇_P E = 2MP − 2V ,   or   P = M⁻¹V ,

provides an easily computed estimate of a 3-D line segment endpoint viewed in a sequence of images.

Two images are sufficient for computing an estimate of a line segment. If the camera motion is slight, making the effective baseline short, then the estimate may be somewhat inaccurate in depth. If more images are used and the camera moves appreciably about some feature in the world, then the estimate of that feature improves and the consistency of the estimate may be better evaluated. There are combinations of line segment orientation and camera motion which are degenerate and preclude depth estimates. In these situations M will be singular, or nearly so in the presence of noise.

IV. Conclusion

We have described an efficient technique for detecting, tracking, and locating three-dimensional line segments as demonstrated on the SRI mobile robot. As the robot moves about, it makes good estimates of environmental 3-D line segments using Structure-from-Motion.

In the future, we plan to investigate whether the statistical characteristics of the image line segment detector can provide a maximum-likelihood basis for the estimator. This would also yield values for the weights α and β which appear above.

Acknowledgments

I thank David Marimont for our many conversations about robotic vision, including some on the general topic of the longitudinal information of endpoints of line segments. Marimont's thesis [Marimont, 1986] provides a good discussion of feature estimation for robotics.

References

[Aggarwal, 1986] J. K. Aggarwal. Motion and time-varying imagery - an overview. In Workshop on Motion: Representation and Analysis, pages 1-6, IEEE, Charleston, South Carolina, May 1986.
[Gennery, 1982] Donald B. Gennery. Tracking known three-dimensional objects. In Proceedings of the National Conference on Artificial Intelligence, pages 13-17, August 18-20, 1982.
[Goad, 1986] Chris Goad. Fast 3-D Model Based Vision. In Alex P. Pentland, editor, From Pixels to Predicates, Ablex Publishing Co., 1986.
[Hannah, 1980] Marsha Jo Hannah. Bootstrap stereo. In Proceedings of the First Annual National Conference on Artificial Intelligence, pages 38-40, American Association for Artificial Intelligence, 1980.
[Longuet-Higgins, 1981] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133-135, 10 September 1981.
[Lowe, 1985] David G. Lowe. Perceptual Organization and Visual Recognition. Kluwer Academic Publishers, 1985.
[Marimont, 1986] David Henry Marimont. Inferring Spatial Structure from Feature Correspondences. Ph.D. thesis, Stanford University, 1986.
[Reifel, 1987] Stanley W. Reifel. The SRI Mobile Robot Testbed, A Preliminary Report. Technical Report 413, SRI International Artificial Intelligence Center, 1987.
[Roach and Aggarwal, 1979] J. W. Roach and J. K. Aggarwal. Computer tracking of objects moving in space. IEEE Transactions PAMI, PAMI-1(2):127-135, April 1979.
[Ullman, 1979] Shimon Ullman. The Interpretation of Visual Motion. MIT Press, 1979.
[Yen and Huang, 1983] B. L. Yen and T. S. Huang. Determining 3-D motion and structure of a rigid body using straight line correspondences. In Proceedings of the International Joint Conference on Acoustics, Speech and Signal Processing, March 1983.
The Sensitivity of Motion and Structure Computations

John L. Barron, Allan D. Jepson* and John K. Tsotsos*
Department of Computer Science, University of Toronto, Toronto, Canada, M5S 1A4
* Also, Canadian Institute for Advanced Research

Abstract

We address the problem of interpreting image velocity fields generated by a moving monocular observer viewing a stationary environment under perspective projection to obtain 3-D information about the relative motion of the observer (egomotion) and the relative depth of environmental surface points (environmental layout). The algorithm presented in this paper involves computing motion and structure from a spatio-temporal distribution of image velocities that are hypothesized to belong to the same 3-D planar surface. However, the main result of this paper is not just another motion and structure algorithm that exhibits some novel features but rather an extensive error analysis of the algorithm's performance for various types of noise in the image velocities.

Waxman and Ullman [83] have devised an algorithm for computing motion and structure using image velocity and its 1st and 2nd order spatial derivatives at one image point. We generalize this result to include derivative information in time as well. Further, we show the approximate equivalence of reconstruction algorithms that use only image velocities and those that use one image velocity and its 1st and/or 2nd order spatio-temporal derivatives at one image point. The main question addressed in this paper is: "How accurate do the input image velocities have to be?" or equivalently, "How accurate does the input image velocity and its 1st and 2nd order derivatives have to be?". The answer to this question involves worst case error analysis. We end the paper by drawing some conclusions about the feasibility of motion and structure calculations in general.

1.1 Introduction

In this paper, we present an algorithm for computing the motion and structure parameters that describe egomotion and environmental layout from image velocity fields generated by a moving monocular observer viewing a stationary environment. Egomotion is defined as the motion of the observer relative to his environment and can be described by 6 parameters: 3 depth-scaled translational parameters, u, and 3 rotation parameters, ω. Environmental layout refers to the 3-D shape and location of objects in the environment. For monocular image sequences, environmental layout is described by the normalized surface gradient, α, at each image point. To determine these motion and structure parameters we derive nonlinear equations relating image velocity at some image point y(ỹ,t′) to the underlying motion and structure parameters at y(ŷ,t). The computation of egomotion and environmental layout from image velocity is sometimes called the reconstruction problem; we reconstruct the observer's motion, and the layout of his environment, from (time-varying) image velocity. A lot of research has been devoted to devising reconstruction algorithms. However, a little-addressed issue concerns their performance for noisy input: how accurate does the input image velocity have to be to get useful output?
1.2 Previous Work

The most common approach to monocular reconstruction involves solving (generally nonlinear) systems of equations relating image velocity (or image displacement) to a set of motion and structure parameters ([Longuet-Higgins 81], [Tsai and Huang 84], [Prazdny 79], [Roach and Aggarwal 80], [Webb and Aggarwal 81], [Fang and Huang 84a,b], [Buxton et al 84], [Williams 80], [Dreschler and Nagel 82], [Lawton 83]). Some of the issues that arise for these algorithms are the need for good initial guesses of the solutions, the possibility of multiple solutions and the need for accurate input. The latter is by far the most important issue if the reconstruction approach is to be judged a success. As Waxman and Ullman [85] and others have noted, reconstruction techniques that use image velocities of neighbouring image points require accurate differences of these similar velocities. That is, solving systems of equations effectively requires subtraction of very similar quantities: the error in the quantities themselves may be quite small but, since the magnitudes of these differences are quite small, the relative error in the differences can be quite large. Hence, such techniques can be expected to be sensitive to input errors.

A second approach to reconstruction involves solving nonlinear systems of equations relating local image velocity information (one image velocity and its 1st and 2nd order spatial derivatives) to the underlying motion and structure parameters ([Longuet-Higgins and Prazdny 80], [Waxman and Ullman 85]). The rationale is that using local information about one image point means that the problem of similar neighbouring image velocities can be averted. However, this is replaced with the problem of computing these 1st and 2nd order spatial derivatives. Waxman and Wohn [85] propose that these derivatives be found by solving linear systems of equations, where each equation specifies the normal component of image velocity on a moving non-occluding contour in terms of a Taylor series expansion of the x and y components of image velocity. In effect, their reconstruction algorithm divides the computation into two steps: use a normal velocity distribution to compute image velocity and its 1st and 2nd order spatial derivatives at an image point, and then use these as input to an algorithm that solves the non-linear equations relating motion and structure to the image velocity and its 1st and 2nd order derivatives.

Only recently have researchers begun to address the use of temporal information, such as temporal derivatives, in reconstruction ([Subbarao 86], [Bandyopadhyay and Aloimonos 85]). We note that others' use of temporal derivative information and our use of time-varying image velocities are effectively equivalent; image velocity fields (at least locally) can be derived from one image velocity and its 1st and/or 2nd order spatial and temporal derivatives and vice-versa. Indeed, image velocity fields are often used in the derivation of spatial and temporal image velocity information.

It is somewhat disappointing that almost none of these reconstruction techniques have been successfully applied to flow fields calculated from realistic scenes. Primarily, the problem is the difficulty in computing accurate flow fields.
There has been little or no error analysis in previous monocular reconstruction work, although some researchers, such as [Waxman and Ullman 85], [Buxton et al 84], [Aloimonos and Rigoutsos 86], [Snyder 86] and [Subbarao 86], have begun to consider the inherent sensitivity of their algorithms to random noise in the input. See [Barron 84, 87] for a more detailed survey of reconstruction techniques and their problems.

1.3 Underlying Assumptions

In order to relate a spatio-temporal distribution of image velocities to the motion and structure parameters at some image point we need to make some assumptions:
(a) 3-D objects are assumed to be rigid. The rigidity assumption ensures that the image velocity of an object's point is due entirely to the point's motion with respect to the observer and not due to changes in the object's shape.
(b) The 3-D surfaces of objects can be described locally as planes. The local planarity assumption means curved surfaces are treated as collections of adjacent planes.
(c) The observer rotates with a constant angular velocity for some small time interval. Webb and Aggarwal [81] call this the fixed axis assumption.
(d) The spatio-temporal distribution of image velocity results from 3-D points on the same planar surface. We call this the same surface assumption.
We investiga2 the+unplification of input errors for the whole solution and for-g, a and o along with the relationship between worst case image velocity error and the error in the Taylor series expansion coefficients. We also investigate the algorithm’s perfor- mance when there is a maximum ofX% worst case error in any of the image velocities. (2) We conduct best and worst case error analysis by adding worst case error directly to the Taylor series expansion coefficients. We are interested in how the algorithm performs when there is a maximum of x% worst case error in any of the Taylor series coefficient pairs. (In general, the worst case error directions for the image velocities and the Taylor series coefficients are different.) 2 Mathematical Preliminaries In this section we present a brief description of our algorithm. Complete details are in marron 871. 2.1 Notation 2.2 PhysicA Setup We use notation ?(t;@ to indicate a 3-D point measured at time t with reyct to a coordinate system ?(T). Similarly, X3(?,t;~) is the depth of P(t 3). ?(&) is the image of &;t). (1) The use of a spatio-temporal distribution of image velocity rather than a purely spatial distribution of image velocity generally reduced the amplification of input to output error. As well, increasing the spatial extent of the image points where image velocity are measured also reduced error amplification. (2) It appears that the accuracy with which image velocities can be computed is much more important that the satisfaction of the various assumptions. The solutions are not especially sensitive to small vio- lations of the assumptions. (3) The error in the initial guess (required for Newton’s method) can be quite large (100% and more) and convergence can still be obtained for most cases when image velocity error is present. (4) For exact image velocities, we found multiple solutions even though theoretical results suggested unique solutions. The analysis showed that it is possible for 2 distinct sets of flow fields to have 4 image velocities in common. (5) We conducted a best, random and worst case error analysis for a related set of motion and structure parameters. (The 3 error types were scaled for comparison purposes.) The difference between the best, random and worst case results was significant. This suggests that tests based on a random noise alone are inadequate. (6) The use of time allowed us to analyze motion and structure com- binations that were singular at one time. For example, the motion and structure: 8=(0,0, lOOO), j&(0,0,1) and &(0.2,0,0) is unanalyzable at time 0 but can be solved given image velocities distributed over a short time interval. We have also devised a binocular reconstruction [Barron et al 87b] that contains the monocular algorithm presented in this paper as a special case. 15 Contributions of this Paper The algorithm presented in this paper involves solving non- linear system of equations that relate a spatio-temporal distribution of image velocity to a set of motion and structure parameters at some image point at a particular time. We conduct a Taylor series expan- sion analysis and show the equivalence of using a mean image velo- city and its 1” and/or P order spatio-temporal derivatives to using 4 (1) Of course, we must still be able to solve surface correspondence, i.e. we must be able to group together alI image velocities distributed locally in space and time that belong to the same planar surface. See [Adiv 841 for one ap- preach to this problem. 
We adopt a right-handed coordinate system as in Longuet-Higgins and Prazdny [80], which is shown in Figure 2.1. U=(U₁,U₂,U₃) is the translational velocity of the observer and ω=(ω₁,ω₂,ω₃) is the angular velocity of the coordinate system of the observer, centered at the origin of the observer.

Figure 2.1: The Observer-Based Coordinate System. U=(U₁,U₂,U₃) is the observer's 3-D translational velocity while ω=(ω₁,ω₂,ω₃) is his 3-D rotational velocity. The image plane is at depth 1. The image of P is located at y=(y₁,y₂,1). The origin of the image plane is (0,0,1). The X₃ axis is the line of sight.

2.3 The General Monocular Image Velocity Equation

We can write an equation, (2.3-1), relating the image velocity at some image point y(ỹ,t′) to the monocular motion and structure parameters at some image point y(ŷ,t), where P̃ and P̂ are 3-D points on the same planar surface and generally y(ỹ,t′) ≠ y(ŷ,t). The terms appearing in (2.3-1) include

  A₂(y(t)) = [ y₁y₂    −(1+y₁²)   y₂
               1+y₂²   −y₁y₂     −y₁
               0        0         0  ] ,   (2.3-3)

h(y(t)), the perspective correction function, which specifies the ratio between the depth of P(t;t), X₃, and its 3-D distance from the observation point, ‖P(t;t)‖₂ = (P(t;t)·P(t;t))^(1/2), i.e.

  h(y(t)) = X₃(P,t;t) / ‖P(t;t)‖₂ ,   (2.3-4)

and u′(y(t),t;t), the distance-scaled translational velocity of the observer,

  u′(y(t),t;t) = U(t) / ‖P(t;t)‖₂ .   (2.3-5)

One of the advantages of using a single instantaneous image velocity field is that no assumptions about the observer's motion, for example his acceleration, have to be made. However, the use of a spatio-temporal distribution of image velocities requires that we relate the motion and structure parameters at one time to those at another time. Hence, we need to make assumptions about the observer's motion. In this paper, we consider two specific types of motion, although we emphasize that our treatment can be generalized to other motions as well. The two types of motion considered are:

Type 1 (Linear Motion, Rotating Observer): A vehicle is moving with constant translational velocity and has a camera mounted on it that is rotating with constant angular velocity.
Type 2 (Circular Motion, Fixed Observer): A vehicle with a fixed mounted camera is moving with constant translational and angular velocity.

Ω₁(ω,t,t′) = Rᵀ(ω,t′)R(ω,t) and Ω₂(ω,t,t′) = I (the identity matrix) for Type 1 and Type 2 motion respectively. R(ω,t) is an orthogonal matrix specifying the rotation of the coordinate system at time t with respect to the coordinate system at time 0.

S_M, the monocular spatial scaling function (2.3-6), specifies the depth ratio of two 3-D points, P̃ and P̂, on the same planar surface at the same time. The monocular temporal scaling function, T_M (2.3-7), specifies the depth ratio of the 3-D points P(t;t) and P(t′;t′), i.e. of the same surface point at two times.

In special cases, equation (2.3-1) reduces to either a purely spatial or a purely temporal image velocity equation, when S_M = 1 or T_M = 1. Given eight distinct components of image velocity distributed over space and time, but on the same 3-D planar surface, we can construct and solve a non-linear system of equations to determine the motion and structure parameters.
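The rotational term (2.3-3) is easy to render directly in code. The sketch below simply evaluates A₂ at an image point and applies it to a rotation ω; the names are illustrative, and the sign conventions follow the matrix as printed above.

import numpy as np

def A2(y1, y2):
    """Rotational flow matrix of Eq. (2.3-3) at image point y = (y1, y2, 1)."""
    return np.array([
        [y1 * y2,     -(1.0 + y1**2),  y2],
        [1.0 + y2**2, -y1 * y2,       -y1],
        [0.0,          0.0,            0.0],
    ])

def rotational_velocity(y1, y2, omega):
    """Image-velocity contribution of the rotation omega = (w1, w2, w3)."""
    return A2(y1, y2) @ np.asarray(omega, dtype=float)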
2.4 The Non-Uniqueness of the Solutions

Because we are solving non-linear systems of equations we need to be concerned about the uniqueness of our solution. Hay [66] was the first to investigate the inherent non-uniqueness of the visual interpretation of a stationary planar surface. He showed that for any planar surface there are at most two sets of motion and structure parameters that give rise to the same image velocity field for that surface. Hay also showed that given two views of such a surface, only one unique set of motion and structure parameters was capable of correctly describing the image velocity field. Waxman and Ullman [83] carried this result one step further by showing the dual nature of the solutions: given one set of motion and structure parameters it is possible to derive a second set in terms of the first analytically. If this second solution is then substituted back into the equations specifying the duality, the first solution is obtained. Given one set of motion and structure parameters, u₁, α₁ and ω₁, we can derive expressions for the dual solution, u₂, α₂ and ω₂, at some point y(ŷ,t) (equations (2.4-1a), (2.4-1b) and (2.4-1c)); roughly speaking, the dual solution interchanges the direction of translation with the surface normal. Obviously, when α = u/‖u‖₂ the solution is unique, as the dual solution reduces to the first solution. Subbarao and Waxman [85] have also shown the uniqueness of the motion and structure parameters over time as well. These theoretical results suggest that the possibility of multiple (non-dual in the spatial case) solutions is non-existent. However, they only hold when the whole flow field is considered. It is possible for two distinct image velocity fields to have four common image points at four times with the same image velocity values. Hence, the analysis of the four image velocities may give rise to any of the sets of motion and structure parameters having those four image velocities in common. An example of such a situation is shown in [Barron et al 87a].

2.5 Singularity

If u=(0,0,0) then the system of equations is singular. In fact, as u→0, its condition number becomes very large; very small input error causes instability in the solution technique. Also, Fang and Huang [84a] and others have shown that the solution does not exist using the image velocities at three or more collinear 3-D points (as the determinant of J is 0). We have also observed that the solution using two image velocities at each of two 3-D points on the same planar surface at two distinct times cannot be obtained. The motion of 2 points can be caused by an infinite number of motion and structure combinations. As well, there are particular motion and structure combinations that cannot be recovered at one time. For example, if u=(0,0,a), α=(0,0,1) and ω=(0,b,0) at time 0, then the values of constants a and b can be arbitrarily set to yield the same image velocity field; hence, it is impossible to distinguish the translational and rotational components of image velocity from each other. Other conditions of singularity are under active investigation.

3 Experimental Technique

In this section we discuss the implementation of our algorithm and present details of our sensitivity analysis.

3.1 Implementation

Newton's method is used to solve the systems of non-linear equations. Since only 2 components of α are independent, i.e. ‖α‖₂=1, we add an extra row to the Jacobian matrix J to ensure the computed α is normalized; hence J is a full rank 9x9 matrix. The corresponding component of f_M, the measured image velocities, is then set to 1.

When ω is known to be zero, i.e. in the case of pure translation (Type 1 and Type 2 motions are equivalent here), we solve a 6x6 Jacobian instead of a 9x9 one. We compute a 9x6 Jacobian (the 3 columns corresponding to ω are not computed). We let the LU decomposition of J choose the best 6 rows of J, with the provision that the normalization row is always one of the chosen rows.

3.2 Sensitivity Analysis

We compute an error vector Δf_M which, when added to f_M, yields the perturbed input f_M′ = f_M + Δf_M. For X% random case error, we compute four random 2-component unit vectors, û_j, j=1,...,4, and then set the jth velocity pair of Δf_M (components i and i+1 of Δf_M) to

  (X/100) ‖v_j‖₂ û_j ,   j = 1,...,4 ,  i = 2j−1 ,   (3.2-1)

where v_j is the jth measured image velocity.
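A minimal sketch of the random perturbation (3.2-1) follows, assuming f_M stores the four velocity pairs followed by the normalization constant; the function name and the use of NumPy's default generator are illustrative choices, not the paper's implementation.

import numpy as np

def perturb_random(f_M, x_percent, rng=None):
    """Add X% random error to each of the four measured image
    velocities (Eq. (3.2-1)); the 9th (normalization) component is
    left untouched."""
    rng = rng or np.random.default_rng()
    f = np.array(f_M, dtype=float)
    for j in range(4):
        i = 2 * j                      # i = 2j - 1 in 1-based indexing
        v = f[i:i + 2]
        theta = rng.uniform(0.0, 2.0 * np.pi)
        u_hat = np.array([np.cos(theta), np.sin(theta)])
        f[i:i + 2] = v + (x_percent / 100.0) * np.linalg.norm(v) * u_hat
    return f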
For X% random case error, possible to derive a second set in terms of the first analytically. If we Compute four random 2-compon.nt unit vectors, ij, j=l,...,4, and this second solution is then substituted back into the equations speci- then compute each i” component of AfM as fying the duality, the first solution+@ obtained. Given one set of AAM motion and structure parameters,iZ, ,3 and q, we can defve expres- sions for the dual solution, ?2, z and ca,, at some point ?(P,t) as I 1 AX.+1 M = $ ij 1 13 1 12, j=l,...,d, i=jti-1. (3.2-l) 702 Vision Thus X% random error is added to each image velocity. Afw is 0, i.e. we do not add error to the normalization constant. Using AfM for ran- dom error we compute Af- = I I&, 1 12. We use forward and inverse c2) iteration on J to compute normal%ed best and worst case error directions, &, and &. We cornJute AfM = & Afnomr as X % scaled best case image velocity error and AfM = & Af- as X 8 scaled worst case image velocig error. In either case AfM is scaled t&) *the same size as the random Afar for comparison purposes. When w is known to be 0 the appropriate 6x6 Jacobian is used in the forward and inverse itera- tion calculation. We can also sompute X% relative c orst case image velocity error by computing AfM so that the image velocity with the largest error is X96. We perform a Taylor series expansion of image velocity. In the spatial case, we can write (3.2-2) for small spatial extents. Here2 is given as [ av, av2 av, av2 a%, a2v g's= 2 %l1,bm2,-r-,-r-,-r ah ay, aY2 ah hay, ahaY2 1 = @,.?2z?3j)3rz+4?4) and A, is given as c AY:I AYIIAYI 12 I2Ay11 IZAYIZ AyyI,Ay,2 AyYt2 3‘ [ AY& AYY~IAY~ I2 I2Ay21 I2A~22 Ay2,Ayn AY:~ 3 A, = [ AY!, AYSIAY~ * 12 12Ay31 12Ay32 AYSAY~ AY:Z 3 AYY:, AY.~IAY’Y~ 12 12AY41 12Ay42 AY 41 AY 42 A~y”42 For the spatio-temporal case, we can write J)M =A,% for small spatio-temporal extents. Here z= 1 av, av2 av, av2 av, av2 %@lr%2,-r-,-,-,-,- ah ah ay, aY2 at at 1 and = @lrji+2*?3r??4) (3.2-6) I2 I,AYY,I IZAYIZ I2At1 12 IzA~21 I2Ayn I,& At= I2 IZAy31 z2Ay32 I2&3 . i 1 12 I2Ay41 12Ay42 12b4 (3.2-7) We can compute the Taylor series expansion using both perfect J)M and noisyTM’, to get? andTfrespectively. zg is simply?-3 (3.2-3) (3.2-4) (3.2-5) We compute X% relative best case and x8 relative worst case error in A; by performing forward and inverse iteration on A*-‘./ wherg .A*_ is an 9x9 matrix computed using A =A3 or A=A, as where OR and 0, are 8 component row and column vec- all O’s and P is simply @‘,I>. These best and worst directions are then scaled so that+there is a maximsm of X% in any of the Taylor series coefficients, A&, i=1,...,4 where A& is the error in 3. When 2 is zero, we cannot conduct this error analysis as the 9x9 Jacobian is singular. C&gut error in the computed solutiondand in its components, ‘it 2 and o are computed simply as the ~~ norm of their difference from the correct solution/component over the L2 norm of the correct (2) We note that the best and worst directions so calculated are for the initial linear system of equations, JzO=TO. It is possible that the best and worst direc- tions for the nonlinear system of equations are different, although we expect these directions to be quite close to the computed best and worst directions for small input errors. solution/component. In purely spatial cases, we also compute tie dual of 3 3),, using (3.4) and co$pute the output error as the mhhnum Of error in?or$. 
Siye a’ and -2 specify he sami: sur- face gradient, we dwgs “flip” (r’ befog the output error is calcula- tio3Jf the ll+i;1Eped a’ is closer to (r. than the original 2, i.e. I I~+~‘1 12~ I I~--01’ll2. The error in the various &, i=~ to 4 and $ is simply computed as the L2 norm of the difference over the L, norm of the correct value. 4 Experimental Results We use the motion and structure described in the table below for the experimental results presented in this paper. X3 is 2000 in all cases. Image coordinates are measured on a 2.56~256 display device and so pixel coordinates are scaled by 256 to produce the corresponding f coordinates. Thus the solution point, 2, is (20,20) in pixels or (0.078125,0.078125,1) in f units. For a temporal extent O-t we measure image velocities at the following image point offsets and times where$?,+(AyiI,Ayi2,0), i=l. to 4. The viewing augle of these points, which we call their spatial extent, is computed as the maximum diagonal angle subtended by the points, i.e. 33.09. Temporal extent O-t is varied by varying t from 0 to 1 or 0.3 to 1 in 8 equal steps; a temporal extent of O-O corresponc$ to the purely spatial case. (As we have already pointed out, when co=(0,0.2,0), the motion is singular for temper-. extent O-O but is can be solved at other temporal extents. When o 1s known to be zero we+ca+n solve this motion for temporal extent O-O provided we enforce cu=O.) Image velocity error is varied from O-l .4% in 0.2% steps while Taylor series cofficient error is varied from 0- 14% in 2% steps. In the first experiment we vary image velocity error against temporal extent. Tables 1, 2 and 3 shows the overall amplification factors (3), their standard deviations and the number of solved runs when the image velocity error was n3t O%_lout of a maximum of 56) computed for the 3 motions for?,?, a and w for all solved runs. Best case results are quite good, especially when compared with the corresponding random and worst case results. Random output is about % the worst case output error. We include random results only to show the inadequacy of an error analyisis that only involves pr- forming a few runs with random noise in the input. Unless a particu- lar run is made n times (n a sufficiently large integer) for random noise in the input we cannot draw any useful conclusions. The larg- est output error for these n runs should approach worst case results while the average output error for the n runs comprises average case results. Table 4 shows the overall amplification factors for the 3 motions when O-1.4% relative worst case image velocity error is used instead. These results indicate that worst case error of 1.4% and smaller can produce unusable output. It seems that we need image velocity measurements to be quite accurate. An examination of the error velocity means and differences for he above runs confirms the hypothesis about image velocity error presented in section 1.2. For best case results, tlhe error in the means is larger t&n the error in the differences while for worst case results he error in the differences is larger. In another experiment (see [Bar- mn 871) we a&j4 worst case error directly to the means and differ- ences of the image velocities. The results further confirm’the hypothesis as worst case mem error produced very small error amplification while worst case difference error produced much larger ones. 
Indeed, even when large worst case mean error was used (UP (3) Overall amplification factors are computed as the average of output error di- vi&d by input emor for all solved runs where hI.PUt mOr is not 0%. Barron, Jepson, and Tsotsos 703 to 49% error in image velociGes) worst case mean amplification fac- tors were still less than 1. In fact, worst case mean error results are almost as good as best case error results. The second experiment involves adding relative best and worst case error to the Taylor coefficients, 2, and then computing motion and structure from the resulting image velocity fields. Preliminary results am presented in Table 5 for the 2”’ and 3ti motions. The stan- dard deviations are much larger here. This is because the magnitude of the input error increases significantly in time; hence, the output error also increases in time. For the smaller temporal extents the amplification is typically 2-3. Again we note that mean error (inzl) is larger in best case results than in worst case results while deriva- tive error (in22, z3 and z4 is larger in worst case results than in best case results. These and earlier results suggest that adding error to velocity means has only a minimal effect on the error amplification but that adding error to the velocity differences/derivatives has a much greater effect on the error amplification. It is interesting to note the relationship between image velocity error and error in the Taylor coefficients for that same image velocity field. Table 6 shows the error in $1, Tf2, 23, 24 and 3 for temporal extents O-Q.3 and O-l for the 3 motions when 1.4% scaled worst case image velocity error is present in the input. Its seems that error inz4 is by far the largest. Overall the error in zis 2-3 times the error in the image velocities (4-6 times if we look at the ~~ norm error in the input) while the error in the various 2 can be lo- 15 times larger. [Waxman and Ullman 851 note that in the spatial case recovery of motion and structure when there is large error in24 is quite robust. Changing 7, to (0,O) from (20,20), we conduct a last set of tests for the 3d motion, varying spatial extent (the magnitudes of the coor- dinates of the four comers on the square) to have values lo, 14’ and 70' (the full image). We vary t from 0.3 to 1 for these tests. Tables 7 and 8 show the overall amplification factors for these tests. We can solve most of the runs, even when the spatial extent is only 1’ and the relative error is 14%. Of course, the corresponding image velo- city error is quite small. As expected, best case results are quite good. On the other hand worst case results are not nearly as good. Large amplification factors means that (effectively) the output is not useful even when a solution is obtained as is the case for most of the runs. Again, the magnitude of the actual image velocity error increased with time and is entirely due to the error in 34=-$. The errors in ?1, Z2 and & are quite small relative to this error. When the spatial extent is either l& or 70° we observe an improvement in time. 5 Conclusions The results of the above experiments suggest that reconstruc- tion algorithms that use individual image velocities need them to within 1% or better or equivalently those algorithms that use local image velocity information (for example, [waxman and Ullman 831) need their input to within 10% accuracy. 
Derivative information is usually calculated directly from velocity fields, for example by using a least squares computation relating normal image velocity to the a's (see [Waxman and Ullman 83]). Current techniques for measuring image velocity cannot produce this required accuracy. This may appear to call into question the feasibility of the reconstruction approach. However, an alternative approach is suggested by the experimental results:
(1) Compute one mean image velocity that corresponds to a₁ = v̄ (as we have seen, the error in this velocity can be quite large) using one of the many conventional image velocity measurement techniques available (for example [Horn and Schunck 81] or [Barnard and Thompson 80]).
(2) Use separate techniques to measure spatio-temporal derivative information directly from raw time-varying intensity data.
It may well be that such techniques will be able to measure the derivative information within the required accuracy. We advocate the design of such measurement techniques as a new research area. The derivative data could then be used directly in the motion and structure calculation (such as Waxman and Ullman's algorithm) or first be converted into time-varying image velocity fields which, in turn, are used as input to, say, the algorithm presented in this paper.

Bibliography
(1) Adiv G., 1984, "Determining 3-D Motion and Structure from Optical Flow Generated by Several Moving Objects", COINS Technical Report 84-07, University of Massachusetts, April.
(2) Aloimonos J.Y. and I. Rigoutsos, 1986, "Determining the 3-D Motion of a Rigid Planar Patch Without Correspondence, Under Perspective Projection", Proc. Workshop on Motion: Representation and Analysis, May 7-9.
(3) Bandyopadhyay A. and J.Y. Aloimonos, 1985, "Perception of Rigid Motion from Spatio-Temporal Derivatives of Optical Flow", TR 157, Dept. of Computer Science, University of Rochester, NY, March.
(4) Barnard S.T. and W.B. Thompson, 1980, "Disparity Analysis of Images", PAMI-2, No. 4, July, pp. 333-340.
(5) Barron J.L., 1984, "A Survey of Approaches for Determining Optic Flow, Environmental Layout and Egomotion", RBCV-TR-84-5, Dept. of Computer Science, University of Toronto, November.
(6) Barron J.L., 1987, "Determination of Egomotion and Environmental Layout From Noisy Time-Varying Image Velocity in Monocular and Binocular Image Sequences", forthcoming PhD thesis, Dept. of Computer Science, University of Toronto.
(7) Barron J.L., A.D. Jepson and J.K. Tsotsos, 1987a, "Determining Egomotion and Environmental Layout From Noisy Time-Varying Image Velocity in Monocular Image Sequences", submitted for publication.
(8) Barron J.L., A.D. Jepson and J.K. Tsotsos, 1987b, "Determining Egomotion and Environmental Layout From Noisy Time-Varying Image Velocity in Binocular Image Sequences", IJCAI87, August, Milan, Italy.
(9) Buxton B.F., H. Buxton, D.W. Murray and N.S. Williams, 1984, "3-D Solutions to the Aperture Problem", in Advances in Artificial Intelligence, T. O'Shea (editor), Elsevier Science Publishers B.V. (North Holland), pp. 631-640.
(10) Dreschler L.S. and H.-H. Nagel, 1982, "Volumetric Model and 3-D Trajectory of a Moving Car Derived from Monocular TV-Frame Sequences of a Street Scene", CGIP 20, pp. 199-228.
(11) Fang J.-Q. and T.S. Huang, 1984a, "Solving Three-Dimensional Small-Rotation Motion Equations: Uniqueness, Algorithms and Numerical Results", CVGIP 26, pp. 183-206.
(12) Fang J.-Q. and T.S.
(13) Hay J.C., 1966, "Optical Motions and Space Perception: An Extension of Gibson's Analysis", Psychological Review, Vol. 73, No. 6, pp. 550-565.
(14) Horn B.K.P. and B.G. Schunck, 1981, "Determining Optical Flow", AI 17, pp. 185-203.
(15) Kanatani K., 1985, "Structure from Motion without Correspondence: General Principle", Proc. IJCAI85, pp. 886-888.
(16) Lawton D.T., 1983, "Processing Translational Motion Sequences", CGIP 22, pp. 116-144.
(17) Longuet-Higgins H.C., 1981, "A Computer Algorithm for Reconstructing a Scene from Two Projections", Nature 293, Sept., pp. 133-135.
(18) Longuet-Higgins H.C. and K. Prazdny, 1980, "The Interpretation of a Moving Image", Proc. Royal Society of London, B208, 1980, pp. 385-397.
(19) Prazdny K., 1979, "Motion and Structure From Optical Flow", IJCAI79, pp. 702-704.
(20) Roach J.W. and J.K. Aggarwal, 1980, "Determining the Movement of Objects from a Sequence of Images", PAMI, Vol. 2, No. 6, Nov., pp. 554-562.
(21) Snyder M.A., 1986, "The Accuracy of 3-D Parameters in Correspondence-Based Techniques: Startup and Updating", Proc. Workshop on Motion: Representation and Analysis, May 7-9.
(22) Subbarao M. and A.M. Waxman, 1985, "On the Uniqueness of Image Flow Solutions for Planar Surfaces in Motion", CAR-TR-114 (CS-TR-1485), Center for Automation Research, University of Maryland. (Also, 3rd Workshop on Computer Vision: Representation and Control, 1985.)
(23) Subbarao M., 1986, "Interpretation of Image Motion Fields: A Spatio-Temporal Approach", Proc. Workshop on Motion: Representation and Analysis, May 7-9.
(24) Tsai R.Y. and T.S. Huang, 1984, "Uniqueness and Estimation of Three-Dimensional Motion Parameters of Rigid Objects with Curved Surfaces", IEEE PAMI, Vol. 6, No. 1, pp. 13-27.
(25) Ullman S., 1979, The Interpretation of Visual Motion, MIT Press, Cambridge, MA.
(26) Waxman A.M. and S. Ullman, 1985, "Surface Structure and 3-D Motion from Image Flow Kinematics", Intl. Journal of Robotics Research, Vol. 4, No. 3, pp. 72-94.
(27) Waxman A.M. and K. Wohn, 1984, "Contour Evolution, Neighbourhood Deformation and Global Image Flow: Planar Surfaces in Motion", Intl. Journal of Robotics Research, Vol. 4, No. 3, pp. 95-108.
(28) Webb J.A. and J.K. Aggarwal, 1981, "Visually Interpreting the Motion of Objects in Space", IEEE Computer, Aug., pp. 40-46.
(29) Williams T.D., 1980, "Depth from Camera Motion in a Real World Scene", PAMI-2, No. 6, Nov., pp. 511-515.

Table 2: Scaled Random Error Amplification Factors
Motion | Amp. (St. Dev.) | Amp. (St. Dev.) | Amp. (St. Dev.) | Amp. (St. Dev.) | Runs
1      | 4.066 (0.163)   | 0.476 (0.017)   | 4.532 (0.182)   | -               | 56
2      | 15.792 (0.067)  | 15.653 (0.953)  | 13.159 (0.498)  | 38.380 (2.234)  | 55
3      | 8.499 (0.035)   | 3.344 (0.195)   | 9.183 (0.373)   | 9.947 (0.537)   | 56

[Table 4: Relative Worst Case Error Amplification Factors]

Table 5: Amplification Factors for Taylor Series Coefficient Error
Motion | Best Case Error | St. Dev. | Runs | Worst Case Error | St. Dev. | Runs
2      | 0.093           | 0.046    | 56   | 7.955            | 3.244    | 33
3      | 0.179           | 0.086    | 56   | 5.072            | 2.136    | 54

Table 6: Error in Taylor Coefficients for 1.4% Scaled Worst Case Error
Motion | Temporal Extent | z1    | z2   | z3   | z4    | v
1      | 0-0.3           | 0.10  | 2.25 | 1.13 | 0.32  | 0.98
1      | 0-1             | 1.162 | 0.81 | 1.21 | 1.32  | 1.19
2      | 0-0.3           | 2.27  | 2.60 | 4.07 | 17.55 | 4.86
2      | 0-1             | 1.93  | 2.57 | 4.41 | 28.61 | 3.80
3      | 0-0.3           | 3.66  | 2.10 | 5.69 | 21.12 | 6.99
3      | 0-1             | 3.29  | 1.47 | 5.28 | 7.20  | 4.12

Table 7: Amplification Factors for Relative Best Case Error
Spatial Extent | Amplification | St. Dev. | Runs
1°             | 0.567         | 0.295    | 56
14°            | 0.144         | 0.069    | 56
70°            | 0.147         | 0.070    | 56
[Table 8: Amplification Factors for Relative Worst Case Error]
Using Generic Geometric Models for Intelligent Shape Extraction
Pascal Fua and Andrew J. Hanson*
Artificial Intelligence Center, Computer and Information Sciences Division, SRI International

Abstract
Object delineation that is based only on low-level segmentation or edge-finding algorithms is difficult because typical edge maps have either too few object edges or too many irrelevant edges, while object-containing regions are generally oversegmented or undersegmented. We correct these shortcomings by using model-based geometric constraints to produce delineations belonging to generic shape classes. Our work thus supplies an essential link between low-level and high-level image-understanding techniques. We show representative results achieved when our models for buildings, roads, and trees are applied to aerial images.

I. Introduction
Our goal is to find and delineate probable instances of generic object classes in real images. The shape delineation task described here is critical for the extraction of objects from images that are too complex to be handled by syntactic approaches alone.
We choose as our application domain aerial images of intermediate resolution, that is, images with resolution adequate for humans to perceive shapes clearly, but not so fine that small details and textures would dominate the description given by a human observer. In Figure 1, we present a typical aerial image of this class that contains a combination of suburban features, along with a corresponding edge image [Canny, 1986] and a segmentation [Laws, 1984].
A standard low-level approach to the task of extracting objects such as buildings from Figure 1a would attempt to match region boundaries or edge groups with the edges of a building template. However, when we examine the data, we see that neither regions nor edges correspond reliably to building objects. The segmentation boundaries tend either to break a building roof into pieces, or to merge extraneous areas with those identifiable as roofs. The Canny edges, on the other hand, do not include several critical edges in the center building or the road, even though these are extracted as region boundaries by the segmentation.
* This research was supported in part by the Defense Advanced Research Projects Agency under Contract No. MDA903-86-C-0084.
Clearly, no single parameter setting for conventional segmentation or edge-finding techniques can be expected to handle all the desired objects in one image, much less in multiple images. Therefore, an intermediate step is required for complex scenes: model-based shape parsing procedures must be provided in order to generate object delineations that are sufficiently reliable to be useful for applications such as context-specific labeling systems [see, e.g., Brooks, 1981; McKeown et al., 1985].
The key elements of our approach to solving this problem are the following:
• Define Generic Shape Models. We avoid the drawbacks of rigid template models and produce delineations that are not necessarily tied to any specific labeling scheme by defining shape models for generic classes of objects. When we supplement low-level data with the predictive power of such models, we are able to recover information that is more likely to be semantically meaningful.
• Integrate Edge-Based and Area-Based Geometric Constraints. Both the edges and areas of a feature contain geometric information relevant to the task of identifying it as an instance of a generic model.
We use edges to generate overall geometry and to provide estimated area outlines. Areas that are associated with edges are tested for compatibility with the object model and with one another; we use the RANSAC random sample consensus technique [Fischler and Bolles, 1981] to compute optimal model fits that systematically discount gross anomalies. Furthermore, multiple sources of information are incorporated in the geometric search procedure by using a set of segmentations produced by a progression of parameter settings.
• Predict and Verify Model Components. Missing components of models are predicted and checked using model-based adaptive search procedures; our implementation uses a gradient ascent method [Leclerc and Fua, 1987] to search for predicted edges with the required geometry. Thus, for example, we can reconstruct and locate building boundaries and road edges that might be unrecoverable using conventional methods; if one chose an edge-detector parameter setting weak enough to find such missing edges at the beginning, the edge map would be dominated by irrelevant noise.

II. Generic Shape Modeling
People can classify instances of various object categories accurately even though a particular instance may have a unique shape they have never seen before. Generic shape classes provide an effective approach to automating this human ability. Generic models that we have found useful for analysis of real images possess the following characteristics:
• Strong edge geometry. The elementary edge or line data extractable from an image must be related in some direct and computable way to the object. In particular, the model must suggest explicit rules for dealing with anomalies and predicting likely locations of missing geometrical components. Typical models include edge geometry characterized by long, straight edge segments, by edges or lines with uniform local curvature, and by edges with good statistical signatures characterizing their jaggedness. In addition, there must be mechanisms for the production of coherent area-enclosing structures. Thus, for example, parallel edges, corners, equidistant curved lines, and edges outlining a compact shape are reasonable geometric substructures that can be used to delineate areas that are portions of the larger structure.
• Strong area signature. Areas contained within a substructure of a generic object should be characterizable by a computable signature. If anomalies are expected, they should be clearly distinguishable by using the area signature and should ideally comprise no more than a small fraction of the area. Examples of such areas are parking lots with cars or roofs with chimneys. Typical area signatures would be the presence of uniform or uniformly changing intensity values or textures. Anomalies in such a background are easily located and discounted using a RANSAC procedure to fit planes to the intensity values within the delineated area.
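A minimal sketch of such a RANSAC planar intensity fit is given below (Python with NumPy). The iteration count, the inlier threshold in gray levels, and the data layout are illustrative assumptions, not the paper's values:

```python
import numpy as np

def ransac_plane_fit(xs, ys, intensities, n_iter=100, tol=5.0, seed=0):
    """Fit I(x, y) ~ a*x + b*y + c to the pixel intensities of an area,
    RANSAC-style, so that gross anomalies (chimneys, cars, ...) are
    discounted rather than allowed to skew the fit."""
    rng = np.random.default_rng(seed)
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    I = np.asarray(intensities, dtype=float)
    pts = np.column_stack([xs, ys, np.ones_like(xs)])
    best = None
    for _ in range(n_iter):
        idx = rng.choice(len(I), size=3, replace=False)   # minimal sample
        try:
            plane = np.linalg.solve(pts[idx], I[idx])
        except np.linalg.LinAlgError:                     # degenerate sample
            continue
        inliers = np.abs(pts @ plane - I) < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # Final least-squares fit on the consensus set only.
    plane, *_ = np.linalg.lstsq(pts[best], I[best], rcond=None)
    return plane, best
```

The returned inlier mask also gives a direct measure of how much of the area obeys the planar-intensity signature, which is the kind of consistency test the area-signature criterion calls for.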
The models we have implemented - buildings, roads, and trees - contain the following universal components: (1) edge definition, (2) composite structure definition, (3) linking geometry specification for composite structures, (4) area signature specification, and (5) a geometric completion model. The components of each of these models are summarized in Table I.
The most general model-parsing procedure that we have needed to interpret each of these models in an image includes the following steps:
1. Build the edges according to the edge definition.
2. Find edge relationships based on geometric constraints combined with tests on signatures of enclosed areas.
3. Build closed graphs of related edges that enclose consistent areas as well as matching the model geometry.
4. Predict, search for, and fill in missing elements of the model geometry.
5. Compare the resulting delineation with the characteristics of the original model.
The overall approach can clearly be extended to any other object for which appropriate characteristics can be formulated, e.g., cylindrical oil tanks, drainage patterns, and buildings with perspective distortion.
In the following subsections, we outline the features of our models for buildings, roads, and trees, and illustrate how these models fit into the general framework. Where space allows, we mention some details of the individual requirements of the model-parsing framework outlined above.

A. Buildings - Rectilinear Networks
Our most extensive work so far has been devoted to the task of delineating rectilinear, presumably cultural, structures [Fua and Hanson, 1985, 1986].
We characterize buildings and related cultural structures (e.g., parking lots, patios, gardens, and courtyards) as rectilinear networks of adjacent or joinable area-enclosing straight-edge groups that contain areas with planar intensity.
The basic parsing procedure for generic rectilinear structures follows the pattern given above. Since region boundaries of a histogram-based segmentation [Laws, 1984; Ohlander et al., 1978] tend to correspond to high image gradients, the straight edges are extracted as sequences of pixels with consistent gradient directions. While single segmentations often have inadequate characteristics, segmentations with increasingly permissive parameters produce regions that are first undersegmented, then well segmented, and finally oversegmented, as shown in Figure 2. The multiple data sources lead to a network of geometrically consistent straight edges, shown in Figure 2f, that serve as a basis for the geometric processes.
In practice, region boundaries may deviate by a few pixels from the actual edge location; we optimize their locations using the gradient ascent procedure. In each of the segmentation regions, edges that are parallel or perpendicular are singled out for special consideration. These associated edges, together with the region they come from, define areas in the image. Areas are tested for consistency with a RANSAC planar fit in intensity space, and edges that generate qualifying areas are retained for further parsing.
We note that the same edge can belong to several structures. If the structures are compatible with respect to the structure-linking specification and enclosed-area characteristics, new geometric relationships between edges, such as collinearity, are instantiated. The result is that edges are grouped into networks defined by graphs of the geometric relations among them.
Rectilinear geometric relationships are used to predict how the edges should be linked and where missing edges might be.
The predictions are fed to the adaptive straight-edge finder [Leclerc and Fua, 1987] or to the F* edge finder [Fischler et al., 1981] if a straight edge link is not found.
The networks of compatible composite structures are then connected to form closed contours and define new semantically motivated regions that are the final output of the current system. The candidate features can be scored using a measure of the closeness of the delineation characteristics to those expected in an ideal model instance.
In practice, the information required to assign meaningful labels to candidate cultural structures can be very primitive; we shall give some examples below in which even such simple techniques as clustering based on region-similarity measures are quite effective.

B. Roads - Curvilinear Parallel Networks
It is straightforward to modify the rectilinear cultural-feature model to include smoothly curving road segments. The edges of such roads are almost straight in most places and can be detected locally using the techniques given above. They are then grouped into parallel structures and linked into elongated networks that may have large-scale curvature. To deal with winding roads, the straight edges can be replaced by smoothly curved edges while the rest of the approach is retained. See Table I for a summary.
The only major change in the road model is the rule employed to predict missing components of the geometric structure. First, the initial network of parallel edges is used to estimate the location of the road's center and width. Next, we fit a spline to the estimated center of the road and use it to define two parallel splines that correspond to the road's edges. Using the gradient ascent method, we optimize the location of the two splines under the constraint that they must remain parallel. This is a powerful technique because, wherever one side of the road is lost due to poor photometry or occlusions, the edge information present on the other side can still be utilized to guide the optimization procedure.
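The construction of the two parallel road-edge curves from the estimated center can be sketched as follows (Python with SciPy). A cubic spline and the parameterization are our assumptions, since the paper does not specify its spline; the subsequent parallel-constrained gradient ascent is omitted:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def parallel_road_edges(center_pts, half_width, n_samples=200):
    """From an estimated road center line, build the two parallel
    curves that should coincide with the road's edges - the starting
    point for the constrained refinement described above."""
    center_pts = np.asarray(center_pts, dtype=float)
    t = np.linspace(0.0, 1.0, len(center_pts))
    spline = CubicSpline(t, center_pts)          # 2-D parametric spline
    ts = np.linspace(0.0, 1.0, n_samples)
    c = spline(ts)                               # sampled center curve
    d = spline(ts, 1)                            # tangents
    n = np.column_stack([-d[:, 1], d[:, 0]])     # rotate tangents by 90 deg
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return c + half_width * n, c - half_width * n
```

Because both offset curves are generated from the single center spline, moving the center (or changing the width) during optimization automatically keeps the two edges parallel, which is exactly the constraint the road model exploits when one side is occluded.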
C. Trees - Irregular Clumps
Clumps of vegetation, typically small groups of trees, are characterizable as being complementary to the regular cultural-object models we have described so far. Since their edges are typically jagged and irregular, any compact object that has no components resembling roads or buildings could be a candidate for vegetation. Other irregular objects such as rock outcroppings, bodies of water, and drainage patterns would have similar signatures. The tree model is summarized in Table I.
The parsing procedure for vegetation clumps starts with the boundary of a given segmentation region and optimizes its location using gradient ascent. In contrast to the building model, jagged parts of the boundary, rather than linear parts, are selected as edge candidates. Within single regions, jagged edges that have consistent neighboring areas computed by local chamfering are incorporated into composite structures. The F* general linear-feature utility is then used to connect the edges along the path with the strongest image gradient. This generates the required closed regions delineating the vegetation candidate.

III. Results
In this section we present some representative results of applying our approach to aerial imagery.
To illustrate the behavior of the system on buildings, we have chosen two images that are especially challenging in terms of shape complexity and faint edge photometry. In Figure 3, we show the results of analyzing the image shown in Figure 1a. Figure 3a illustrates the initial set of networks, selected in this case on the basis of a size filter; if we add a selection criterion based upon clustering areas with similar intensity characteristics, one of the clusters is the set of house candidates in Figure 3b.
Figure 4a shows another example of an image containing difficult-to-parse cultural structures; in particular, note the extreme weakness of many relevant roof edges. Figure 4b contains a cluster of bright enclosures that can be identified as sunlit roofs, Figure 4c shows a corresponding cluster of shaded roof sections, and Figure 4d gives the complete roof structures.
Turning our attention now to linear features, we take the image shown in Figure 1a and apply the model for generic road segments. The system finds the initial set of straight edges shown in Figure 5a, groups them into equidistant parallels, connects those that seem to be collinear or smoothly curving, and then uses them to predict the approximate delineation of the road as shown in Figure 5b. Finally, the predicted shape is optimized with respect to variations in the global width and local curve skeleton, thus yielding Figure 5c.
Finally, we apply the parsing procedure to vegetation clumps. In Figure 6a, we show an image containing typical vegetation clumps, along with one of a set of segmentations in Figure 6b. The initial candidates for vegetation clumps are shown in Figure 6c, and a final selection filtered with respect to image intensity in Figure 6d.

IV. Conclusions
In this work, we have proposed an approach based on generic models and a combination of edge-driven and photometry-based geometric reasoning to delineate several classes of objects in aerial images. Such delineations may be utilized in a variety of ways, but are especially appropriate as input to high-level knowledge-based systems. Since these structures are generic, there is no a priori commitment to any particular labeling or modeling system.
We have devised methods for the following:
• Integration of Multiple Geometric Data Sources. Data-driven edge extraction and image segmentation processes do not perform well on multiple target objects. We combine multiple information sources and use both edge geometry and enclosed-area characteristics to generate and verify shape hypotheses; we thus make efficient use of the available geometric information in the image.
• Generic Shape Extraction. For many important tasks, the exact shapes of objects of interest are not known. We define and use generic models to deal with whole classes of objects. Within the context of such models, we recover expected but missing model components using adaptive search techniques, and compensate for photometric anomalies. In particular, we have proposed models for cultural structures, roads, and vegetation clumps, all of which fit into a universal format for model definition and parsing.
The system's effectiveness derives from the definition and use of generic shape models to refine and interpret low-level image information. The clear delineations that we can produce are an essential step toward application-oriented parsing schemes, and provide an adequate basis for rule-based labeling systems that could not function with traditional low-level data alone.

References
R.A. Brooks, "Symbolic Reasoning Among 3-D Models and 2-D Images," Artificial Intelligence Journal 16 (1981).
M.A. Fischler and R.C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," CACM, Vol. 24, No. 6, pp. 381-395 (June 1981).
M.A. Fischler, J.M. Tenenbaum, and H.C. Wolf, "Detection of Roads and Linear Structures in Low-Resolution Aerial Imagery Using a Multisource Knowledge Integration Technique," Computer Graphics and Image Processing 15, pp. 201-223 (1981).
P. Fua and A.J. Hanson, "Locating Cultural Regions in Aerial Imagery Using Geometric Cues," Proceedings of the Image Understanding Workshop, pp. 271-278 (December 1985).
P. Fua and A.J. Hanson, "Resegmentation Using Generic Shape: Locating General Cultural Objects," Pattern Recognition Letters (1986), in press.
K.I. Laws, "Goal-Directed Texture Segmentation," Technical Note 334, Artificial Intelligence Center, SRI International, Menlo Park, California (September 1984).
Y. Leclerc and P. Fua, "Finding Object Boundaries Using Guided Gradient Ascent," Proceedings of the 1987 Topical Meeting on Machine Vision, Lake Tahoe, CA, pp. 168-171 (March 1987).
D. McKeown, W.A. Harvey, and J. McDermott, "Rule-Based Interpretation of Aerial Imagery," IEEE Trans. PAMI 7, pp. 570-585 (1985).
R. Ohlander, K. Price, and D.R. Reddy, "Picture Segmentation Using a Recursive Region Splitting Method," Computer Graphics and Image Processing 8, pp. 313-333 (1978).
J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. PAMI 8, pp. 679-698 (1986).

Figure 1: (a) A typical aerial image with suburban features. (b) A Canny edge map. (c) A Laws histogram-based segmentation.

Table I. Summary of the characteristics of each of the three models described in the text.
Model Component                | Buildings                  | Roads              | Trees
Edge definition                | Straight                   | Curved             | Jagged
Composite structure definition | Parallel and perpendicular | Parallel           | Cluster
Linking geometry specification | Rectilinear                | Curvilinear        | Free form
Area signature specification   | Planar intensity           | Planar intensity   | Planar intensity
Geometric completion model     | Straight edge search       | Curved edge search | Connecting path search

Figure 2: (a) A small image portion containing a cultural structure. (b) An extreme undersegmented partition. (c) An undersegmented partition. (d) An optimum partition for detecting the structure. (e) A highly oversegmented partition. (f) The set of long straight edges extracted from the partition boundaries using the criterion that the edges enclose as large a uniform rectilinear area as possible. These edges form a network.
Figure 3: (a) Rectilinear networks meeting a size criterion. (b) House-like networks found by imposing an additional region-uniformity filter.
Figure 4: (a) An image containing complex buildings with some faint edges. (b) A sunlit roof cluster. (c) A shaded roof cluster. (d) House candidates constructed by merging the sunlit and shaded roof candidates.
Figure 5: An example of a road segment. (a) The edges that are originally grouped together as a possible road structure. (b) Intermediate prediction of the road path given only the initial edges. (c) Final road position optimized to choose the best path with the same (variable) width for the entire length.
Figure 6: (a) An image containing vegetation clumps. (b) One of a family of segmentations used to derive edge candidates. (c) The initial set of vegetation clump candidates. (d) Vegetation candidates selected on the basis of the intensity of the enclosed area.
Detecting Runways in Aerial Images
A. Huertas, W. Cole and R. Nevatia
Institute for Robotics and Intelligent Systems, University of Southern California, Los Angeles, California 90089-0273

Abstract
We are pursuing the detection of runways in aerial images as part of a project to automatically map complex cultural areas such as a major commercial airport complex. This task is much more difficult than it appears at first. We use a hypothesize-and-test paradigm. Hypotheses are formed by looking for instances of long rectangular shapes, possibly interrupted by other long rectangles. We use runway markings, mandated by standards for runway construction, to verify our hypotheses.

I. Introduction
Our aim is to develop general techniques for automated mapping and photointerpretation tasks. We have chosen major commercial airports as a test domain that has a variety of interesting characteristics.
Airports contain a variety of objects, such as the transportation network (runways, taxiways, and roads), a variety of building structures (hangars, terminals, storage warehouses), and a variety of mobile objects (automobiles, aircraft, humans). Further, airport complexes are under continual change, usually due to expansion. The images themselves are rather complex due to the large number of objects present in them. The mapping of this domain thus offers a variety of challenging problems.
Our goal is to map all of the interesting objects in the scene and also to devise integrated descriptions that include the functional relationships of the objects in the scene. In this paper we concentrate on the mapping of runways (we are pursuing the mapping of buildings in separate work [Huertas and Nevatia, 1987]). The runways and taxiways may appear to be modeled easily, namely as long, thin, rectangular strips of uniform brightness. However, the real images are much more complex, as shown in figure 1, a portion (LOGAN: 800 x 2200 resolution) of Logan International Airport in Boston, and in figure 2, a portion (JFK: 1500 x 2600 resolution) of John F. Kennedy International Airport in New York. These help illustrate the following:
• Object complexity: Runways have a variety of markings. These are applied to the paved areas of runways and taxiways to clearly identify the functions of these areas, to delimit the physical areas for safe operation, and to aid pilots. In many cases there are visible signs of heavy use, such as tire tread marks, oil spots, and exhaust fume smears. Also, runways have shoulders of various widths.
• Object composition: Runways may not be of uniform material. The landing surface and the shoulders may be of the same or of different materials for different runways in the same airport. Runways may be extended using different materials. In certain geographical locations, the runway surfaces develop defects that need to be repaired; the repair work, usually in the form of patches, is not necessarily homogeneous with the original surface material, and can have random shapes.
• Object functionality: Runway surfaces may be occluded by trucks and aircraft. Runways have access taxiways and service roads in a variety of positions with respect to the runway. Runways can intersect with other runways. Also, old runways or portions of them may now be used for other purposes.
One of the major causes of difficulties in detecting runways and other objects in real aerial scenes is that the low-level segmentation rarely gives complete and accurate results.
In our work we have chosen to work primarily with the line segments computed from the intensity edges in the image. These lines may be fragmented, due in part to inadequacies in the line detection process, and in part to actual structures in the image. In general, we assume that the images are of fairly good quality and of adequate resolution.
Our method uses the hypothesis formation and verification paradigm to detect runways. Our approach uses a generic model of the objects of interest derived from the following sources of knowledge:
• Geometry and Shape: We know that we are looking for instances of objects whose outlines represent a rectangular shape having a large aspect ratio of length to width. We know that runways have ends, as opposed to nearby straight stretches of highways and roads.
• Specific Knowledge of airport design: We know the features that make a visible long strip in the image an airport runway: the standard markings applied to the surfaces, according to FAA specifications. From airport engineering we also know the range of angles between runways, the range of widths, and so on.
• Photometric Knowledge: Intensity data may be of some help in verifying runway hypotheses when runway markings are nonexistent or not available due to lack of contrast or lack of resolution. Our current implementation does not make use of this knowledge but only uses the image resolution information.
In the work reported here, our verification step consists only of finding the various markings we expect. We have not yet combined the different criteria to give an overall confidence value. This process should, ideally, take place in the context of the larger system that is also reasoning about other objects in the scene, such as the remainder of the transportation network, buildings, and the mobile objects. The locations of these objects will mutually affect the confidence levels of the descriptions of the other objects. Thus, the system described here should be viewed as a module for the larger system to operate on.
The software architecture in our system consists of collections of functions that operate on linear features on the basis of constraints imposed by the object's geometry. Extensive work on rule-based systems for aerial image analysis has been reported by McKeown at CMU (see for example [McKeown et al., 1987]). Their approach, however, is based on region features rather than linear features.

II. Description of the Method
A. Formation of Runway Hypotheses
1. Detection of Line Segments and Apars
We have chosen to work primarily with line segments extracted from the image. Geometric knowledge of the desired structures indicates that they should be characterized by parallel lines of opposite contrast. We call such pairs of lines "anti-parallel", and abbreviate them as apars. Apars form the basic unit of our further analysis.
We use the USC "LINEAR" line detection system [Nevatia and Babu, 1980] to obtain line segments and apars. Each linear segment is described by its length, orientation, contrast, and the positions of its end points. Additionally, we also know if a segment connects to another segment at either end. Figure 3 shows the 8,262 line segments computed from our LOGAN example.
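A minimal sketch of how anti-parallel pairs might be formed from such segments is shown below (Python). The segment field names and the exact pairing rules are our reading of the description, not the LINEAR system's implementation; the 1-60 pixel separation range is the one quoted in the next paragraph:

```python
import numpy as np

def find_apars(segments, min_width=1.0, max_width=60.0,
               angle_tol=np.deg2rad(10)):
    """Pair roughly anti-parallel line segments into apars.

    Each segment is assumed to be a dict with 'p1', 'p2' (end points),
    'theta' (orientation in radians, pointing along the segment) and
    'contrast' (signed edge polarity).  The tolerance on orientation
    is an illustrative choice."""
    apars = []
    for i, a in enumerate(segments):
        for b in segments[i + 1:]:
            # Anti-parallel: orientations differ by about 180 degrees.
            d = abs((a['theta'] - b['theta'] + np.pi) % (2 * np.pi) - np.pi)
            if abs(d - np.pi) > angle_tol:
                continue
            if a['contrast'] * b['contrast'] >= 0:   # need opposite contrast
                continue
            # Perpendicular distance from a's line to b's midpoint.
            mid_b = (np.asarray(b['p1'], float) + np.asarray(b['p2'], float)) / 2.0
            n = np.array([-np.sin(a['theta']), np.cos(a['theta'])])
            width = abs(n @ (mid_b - np.asarray(a['p1'], float)))
            if min_width <= width <= max_width:
                apars.append({'pair': (a, b), 'width': width,
                              'theta': a['theta'],
                              'color': 'bright' if a['contrast'] > 0 else 'dark'})
    return apars
```

The quadratic pairing loop is only for clarity; in practice the separation bound makes a spatial index the natural way to limit candidate pairs.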
The center axis lines of 9,498 apars, shown in figure 4, were computed from the LOGAN segments by specifying the minimum (in our examples, 1 pixel) and maximum (60 pixels) distance between the anti-parallel pairs of segments. The range is derived from the known image resolution. The apars are described by their length, orientation, end points, width, and color (brighter or darker than the surround). We also know if apars are connected to other apars at either end.

Figure 3: Line Segments from LOGAN image.
Figure 4: Anti-Parallels from Segments in LOGAN image.

2. Reduction of Search Space
Each line segment may contribute to many apars, as is the case along runway features where there may be a large number of linear features parallel to the runway. This leads to a large search space that we reduce by implementing a focus-of-attention mechanism that facilitates the detection of "targets" in the presence of a large number of "distractors". We accomplish this by computing estimates of the directions and widths of potential runways. Using these estimates we extract, from the set of apars, those in the selected directions and having a range of widths, and form sets of apars presumably representing fragments of runways.
First, we estimate the direction of the runways by computing a length-weighted histogram of the apar orientations. The histogram for the LOGAN apars is shown in figure 5. The three sharp peaks denote the dominant orientations of the linear features (including runways) in the image.
To estimate the runway widths we compute a length-weighted histogram of the apar widths, including only those apars oriented in the estimated runway directions. This histogram (not shown) typically shows three width groups: a group of wide apars including runway and shoulder fragments, a middle group including taxiways, service roads and, in some cases, narrow shoulders, and a group of thin apars including the surface markings.

Figure 5: Length-Weighted Histogram of Apar Orientations.
Figure 6: Apars representing initial set of Runway Fragments.

We extract from the set of apars those in the selected directions and belonging to the width group for possible runways. We construct one set of runway fragments for each orientation peak, allowing for a tolerance of 5° on both sides of the peaks. The three sets for the LOGAN example are combined and shown in figure 6. We show the apars as rectangles to depict their width. A comparison of the original set of 9,498 apars to the 518 shown in figure 6 gives, in this example, a 94% reduction in the search space.
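The direction estimate reduces to a simple weighted histogram. A sketch follows (Python); the bin size and the crude peak picking are illustrative choices, and the apar length and orientation fields are as assumed above:

```python
import numpy as np

def dominant_directions(apars, bin_deg=1.0, top_k=3):
    """Length-weighted histogram of apar orientations; its sharpest
    peaks are taken as candidate runway directions (in degrees)."""
    thetas = np.array([np.rad2deg(a['theta']) % 180.0 for a in apars])
    lengths = np.array([a.get('length', 1.0) for a in apars])
    bins = np.arange(0.0, 180.0 + bin_deg, bin_deg)
    hist, edges = np.histogram(thetas, bins=bins, weights=lengths)
    peaks = np.argsort(hist)[-top_k:]       # crude: k tallest bins
    return sorted(float(edges[p]) for p in peaks)
```

The same routine, run on apar widths restricted to the peak directions, gives the width histogram from which the runway-width group is selected.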
3. Joining Apars on the Basis of Continuity
Apars are usually broken due to noise in the image and inadequacies in the low-level processes. However, some of the breaks are due to real structures in the image. Consider, for example, where taxiways join runways. One of the boundaries of the runway is continuous while the other boundary is broken at the junctions. The runway portions on both sides of the junction form collinear apars having the same width. We join these apars allowing a 5° tolerance in collinearity and a 5-pixel tolerance in width. The resulting longer apar must have an orientation that is compatible with the estimated direction of the runway within a small tolerance (5°).
In some cases, as in our LOGAN example, there is sufficient resolution and contrast in the image for the edge detector to be able to resolve both boundaries of the white side stripes that bound the landing surfaces of some runways. In these cases the outside boundaries of the side stripes result in apars that contain apars resulting from the inside boundaries of the same side stripes. We remove properly contained apars from the sets. Apars that overlap, however, are preserved. We also remove apars having an aspect ratio smaller than 1, as they are considered unreliable. The result of these processes is shown in figure 7.

Figure 7: Apars joined on the basis of boundary continuity and filtered on containment and aspect ratio.

4. Joining Apars on Collinearity and Analysis of Gap Texture
Next we join collinear (within 5°) apars that have similar widths (within 5 pixels) on the basis of examining the gap between the fragments. Many runway apars may remain fragmented due to noise and occlusion. Consider, for example, where two runways cross or when there are aircraft on the runways.
In general, this process is quite liberal in the analysis of the information in the gaps, as long as the resulting apar has a direction consistent (within a small tolerance) with the hypothesized runway direction. For instance, if the gap contains mostly segments that are oriented in the direction of the apars, we join them. If the gap contains mostly segments oriented at an angle consistent with the angles allowed between crossing runways, then we join them. However, as in our JFK example, repair work, changes in surface material, signs of heavy use, oil spots, and tire tread marks can result in basically random arrangements of segments (texture) in the gaps. Thus, we also consider the lengths of the apar candidates and the size of the gap. A more precise way to support these decisions would include the use of 3-D information to determine if the surface is smooth and flat. The result of this process for our LOGAN example is shown in figure 8.

Figure 8: Apars joined on segment texture and gap analysis.

5. Final Runway Hypotheses
At the end of the joining process, short apars are removed from the sets if they have an aspect ratio smaller than 20%. This preserves those apars possibly representing partially visible runways. The resulting apars constitute the instances of the shapes found in the image that match our geometric model for airport runways. These are shown in figure 9.

Figure 9: Runway Hypotheses.
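The continuity and collinearity tests used in the two joining steps above reduce to simple geometric predicates. A minimal sketch follows (Python); the 5° and 5-pixel tolerances are the ones quoted in the text, while the apar field names and the reuse of the width tolerance as the collinearity offset bound are our assumptions:

```python
import numpy as np

def joinable(a, b, angle_tol=5.0, width_tol=5.0):
    """Decide whether two apar fragments may be joined: similar widths,
    orientations within tolerance, and b's midpoint lying close to the
    line through a.  Apars are dicts with 'theta' (radians), 'width',
    and 'mid' (axis midpoint); the gap-texture analysis and the check
    against the estimated runway direction are left to the caller."""
    if abs(a['width'] - b['width']) > width_tol:
        return False
    d = abs(np.rad2deg(a['theta'] - b['theta'])) % 180.0
    if min(d, 180.0 - d) > angle_tol:
        return False
    u = np.array([np.cos(a['theta']), np.sin(a['theta'])])  # a's axis direction
    off = np.asarray(b['mid'], float) - np.asarray(a['mid'], float)
    perp = off - (off @ u) * u          # component normal to a's axis
    # Reuse width_tol as the collinearity offset bound (an assumption).
    return bool(np.linalg.norm(perp) <= width_tol)
```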
B. Runway Verification
Hypothesis disambiguation and verification of runways is accomplished primarily by the detection and identification of runway markings. We currently look for centerlines, side stripes, threshold marks, touchdown marks, distance marks, and blast pad marks. Most of these are shown in figure 10 (from [Ashford and Wright, 1984]). We have specific knowledge of their dimensions and positions [Federal Aviation Administration, 1980].
We map this knowledge onto the image's coordinate system for the available image resolution. Fractions of a pixel indicate lack of resolution and, instead of looking for, say, two close markings, we look for one wider marking, equivalent to the fusion of the individual non-resolved markings.
The visibility of runway markings is primarily determined by the following factors:
• Image Resolution: Determines if the markings can be resolved.
• Surface Material: The contrast between markings and background depends on the underlying surface. White markings on a dark asphalt surface are quite visible. Concrete runways are brighter and perhaps make it more difficult to detect the markings. In some cases contrast depends also on the material in the runway shoulders.
• Usage and Upkeep: Tire tread marks, oil spots, and exhaust fumes obscure the markings along and at the ends of runways. On the other hand, tire tread marks form quite visible, high-contrast dark regions in the center of concrete runways, and can be used for verification purposes.
Our current technique relies on markings detected elsewhere to predict the presence of obscured markings. The size and position of each runway hypothesis determine the window where we search for the markings. To find them we first look for thin bright apars in the window. If necessary we also look at the line segments. Figure 11 shows the markings found for our LOGAN example. The two overlapping (competing) hypotheses in figure 9 are disambiguated early because the incorrect hypothesis has only a few centerlines compared to those in the hypothesis that remains valid.

Figure 10: Standard Runway Markings.
Figure 11: LOGAN Runways with Markings Detected.

1. Detection of Runway Centerlines
Centerlines are equally spaced along the landing surface of runways. To detect these we look in the middle of the hypothesized runway for bright apars having the desired dimensions. We do not enforce the separation constraint between centerlines, to allow the detection of broken or incomplete individual markings due to exhaust burns, tread marks, etc. We also look for individual segments (that do not form thin apars) down the middle of the runway.

2. Detection of Side Stripe Markings
The sides of the landing surface of runways are bounded by side stripes. They result in thin bright apars. We look for these, at or near the boundaries of our runway hypotheses, and test that they are oriented parallel to the estimated runway direction. These thin apars are often broken, mostly due to lack of contrast, and we do not attempt to join them. We do, however, require that the fragments be collinear and that they have a consistent width.

3. Threshold Mark Detection
The threshold marks are probably the most important set of markings that can be used to verify a runway; they give pilots the position of the start and end of the runway. Often, these marks are partially worn away by exhaust fumes due to their position, so we expect our search to look for partial markings.
At the resolution of our examples the threshold marks appear as white rectangles separated by a dark zone. This results in two bright wide apars for each mark and a dark apar between them. In our search we first look for the bright apars. These apars must be oriented in the direction of the runway (within a small tolerance). If we find only one of these apars, we hypothesize the position of the missing mark, and look in the line segment information for line segments to support our hypothesis. If no bright apar is found we look for the dark apar. It must meet the length and orientation constraints for the dark zone between the threshold marks. From its position and orientation we predict the positions and orientations of the two threshold marks, and look for supporting evidence in the set of line segments.

4. Touchdown Mark Detection
Touchdown marks are located at a specific distance from the threshold marks, on each side of the runway. When present, at the resolution in our examples, they generate two bright apars and a bright apar between them. We look for these, and test them for consistent orientation.

5. Distance Marking Detection
Runways have a series of distance markings extending from the touchdown marks, equally spaced but of varying width.
They generate specific bright and dark apars that we can look for. We look for the first (large) pair of distance marks first. For this we rely on the position of the threshold marks to predict their approximate position. Locating the small distance markings proceeds in a similar manner. We estimate their position from the large distance marks (if these are available; otherwise we use the position of the threshold marks) and search the area for apars of the desired characteristics.

6. Blast Pad Mark Detection
Blast pad markings are optionally located at the ends of runways. They consist of pairs of white lines oriented at 45° angles with respect to the runways, which meet at the runway's central axis. The separation between these pairs of lines varies; thus, we detect them by looking for thin bright apars in the proper configuration.

III. More Results
The runways at LOGAN consist of dark asphalt, well-maintained surfaces and markings, while JFK presents a wide variety of problems. We therefore selected a portion of this airport as our second example. The level of complexity of most major commercial airports lies between our two examples.
In our JFK example, the partially visible apparent runways have no discernible markings on them. The complete runway running across the image shows increasing amounts of repair work, of a different material than that of the original surface. The darker material, however, makes some of the markings more visible. On the left side of the runway, the end of the runway becomes narrower as it turns into a taxiway. The accurate detection of the runway end thus depends on being able to locate the threshold markings. As shown below, we were able to locate them.

Figure 12: Line Segments from JFK Image.
Figure 13: Initial Set of Runway Fragments in JFK Image.

The line segments computed from the JFK image are shown in figure 12. The reduced search space and the apars representing the initial set of runway fragments are shown in figure 13. The runway hypotheses are shown in figure 14. Figure 15 shows the results of the verification process.

IV. Conclusion
We have described a technique, based on geometry and shape as the sources of knowledge, suitable for forming and testing hypotheses representing instances of a known object shape, airport runways, using linear features.
We presented results on two very different airports to show the strength of the hypothesis formation process. Together with a sound search-space reduction mechanism and an object-specific feature verification technique, our method represents the state of the art in runway detection. We have tested the technique on images of several major airports, varying in complexity between our two examples, with very encouraging results. In all our tests the system parameters were the same.
Our basic technique can be easily extended to use the intensity image if necessary, feedback mechanisms, and the analysis of non-standard markings. We point out that our hypothesis formation/verification technique can be useful for similar tasks, such as road detection and, in general, transportation network detection.

Figure 14: Runway Hypotheses.
Figure 15: JFK Runway and Markings Detected.

We have not yet combined the different criteria to give confidence values. This process should, ideally, take place in the context of the larger system that is also reasoning about other objects in the scene, such as the remainder of the transportation network, buildings, and the mobile objects.
The locations of these objects will mutually affect the confidence levels of the descriptions of the other objects. Thus, the system described here should be viewed as a module for the larger system to operate on.

References
[Ashford and Wright, 1984] N. Ashford and P.H. Wright. Airport Engineering, 2nd Ed. Wiley and Sons, 1984.
[Federal Aviation Administration, 1980] FAA Advisory Circular. AC 150/5340-13, November 4, 1980.
[Huertas and Nevatia, 1987] A. Huertas and R. Nevatia. Detecting Buildings in Aerial Images. To appear in Computer Vision, Graphics and Image Processing.
[McKeown et al., 1987] D.M. McKeown and W.A. Harvey. Automatic Knowledge Acquisition for Aerial Image Interpretation. In Proceedings, Image Understanding Workshop, Vol. 1, February 1987, pp. 205-226.
[Nevatia and Babu, 1980] R. Nevatia and R. Babu. Linear Feature Extraction and Description. In Computer Vision, Graphics and Image Processing, Vol. 13, 1980, pp. 257-269.
HYPOTHESIS TESTING IN A COMPUTATIONAL THEORY OF VISUAL WORD RECOGNITION
Jonathan J. Hull
Department of Computer Science, State University of New York at Buffalo, Buffalo, New York 14260
[email protected]

ABSTRACT
A computational theory of reading and an algorithmic realization of the theory is presented that illustrates the application of the methodology of a computational theory to an engineering problem. The theory is based on past studies of how people read that show there are two steps of visual processing in reading and that these steps are influenced by cognitive processes. This paper discusses the development of a similar set of algorithms. A gross visual description of a word is used to suggest a set of hypotheses about its identity. These then drive further selective analysis of the image that can be altered by knowledge of language characteristics such as syntax. This is not a character recognition algorithm, since an explicit segmentation of a word and a recognition of its isolated characters is avoided. This paper presents a unified discussion of this methodology with a concentration on the second stage of selective image analysis. An algorithm is presented that determines the minimum number of tests that have to be programmed under the constraint that the minimum number of tests are to be executed. This is used to compare the proposed technique to a similar character recognition algorithm.

1. Introduction
The fluent reading of text by computer without human intervention remains an elusive goal of Artificial Intelligence research. Fluent reading is the transformation of an arbitrary page of text, which could contain a mixture of machine-printed, hand-printed, or handwritten text, from its representation as a two-dimensional image into a form understandable by a computer, such as ASCII code. The current lack of a technique with these capabilities is interesting in light of the relative ease with which people read and the many years of investigation into computer reading algorithms, the methods people use to read text, and the long history of Artificial Intelligence research into computer vision [12].
The parallel between algorithms for reading text and explanations for human performance is most interesting. With some notable exceptions, most reading algorithms use a character recognition approach in which words are segmented into isolated characters that are individually recognized. For these algorithms reading is equivalent to a sequence of character recognitions.
The way people read is significantly different from character recognition. We bring to reading a wealth of information about the world and expectations about what we will read. This is mixed with knowledge about how text is arranged on a page, knowledge of the syntax and semantics of language, and visual knowledge about letters and words. The recognition processes that take place during fluent reading use visual information from much more than just isolated characters. Whole words or groups of characters are recognized by processes that, in some cases, do not even require detailed visual processing. This is because fluent human reading uses many knowledge sources to develop an understanding of a text while it is being recognized. This integration of understanding and recognition is responsible for human performance in fluent reading.
The fact that few reading algorithms have utilized the many disparate knowledge sources or the recognition strategy of a human reader might explain the gap between the reading proficiency of algorithms and people. Although some character recognition techniques have been augmented with knowledge about words, no reading algorithm has been proposed that fully utilizes the sorts of knowledge routinely employed by a human reader [11]. Such an algorithm would have the potential of yielding substantial improvements in performance.

2. A Computational Theory and Algorithm for Reading
The mechanism of a computational theory and its algorithm are chosen as the vehicle for the present investigation of reading because reading is an information processing task to which this mechanism applies [9]. The proposed computational theory of reading is based on previous studies of human performance. It shows what is computed by people when they read, why this is important, and general guidelines of how this should be carried out. Since reading is a complex information processing task involving interactions of knowledge from many different sources, algorithms are developed that implement only a subset of these interactions. However, these algorithms are sufficient to illustrate that if the complete version of the theory were implemented, a robust "reading machine" would result.
The computational theory of reading proposed here is derived from work on human reading that includes studies of human eye movements [10]. To a person who reads a line of text, it seems as if their eyes move smoothly from left to right. However, this is not completely true. In reality, our eyes move in ballistic jumps called saccades from one fixation point to the next. During a saccade the text is blurred and unreadable. (This is not apparent to the reader.) Therefore, most of the visual processing of reading takes place during the fixations. Usually there are about one to three fixations near the beginning of a word. However, interestingly enough, some words are never fixated. The sequence of fixations is approximately from left to right across a line of text; however, regressions occur frequently. Figure 1 shows the sequence of fixations in a line of text [1].
There are two types of visual processing in reading. In the first type of processing, information from peripheral vision provides a gross visual description of words to the right of the current fixation point. This information is used to form expectations about the words. The second stage of processing occurs on a subsequent fixation when these expectations are integrated with other visual information.
The fact that few reading algorithms have utilized the many disparate knowledge sources or the recognition strategy of a human reader might explain the gap between the reading proficiency of algorithms and people. Although some character recognition techniques have been augmented with knowledge about words, no reading algorithm has been proposed that fully utilizes the sorts of knowledge routinely employed by a human reader [ll]. Such an algorithm would have the potential of yielding substantial improvements in performance. 2. A Computational Theory and Algorithm for Reading The mechanism of a computational theory and its algo- rithm are chosen as the vehicle for the present investigation of reading because reading is an information processing task to which this mechanism applies [9]. The proposed computational theory of reading is based on previous studies of human perfor- mance. It shows what is computed by people when they read, why this is important, and general guidelines of how this should be carried out. Since reading is a complex information processing task involving interactions of knowledge from many different sources, algorithms are developed that implement only a subset of these interactions. However, these algorithms are sufficient to illustrate that if the complete version of the theory were imple mented, a robust “reading machine” would result. The computational theory of reading proposed here IS derived from work on human reading that includes studies of human eye movements [lo]. To a person who reads a line of text, it seems to them as if their eyes move smoothly from left to right. However, this is not completely true. In reality, our eyes move in ballistic jumps called saccades from one Jxation point to the next. During a saccade the text is blurred and unreadable. (This is not apparent to the reader.) Therefore, most of the visual processing of reading takes place during the fixations. Usually there are about one to three fixations near the beginning of the word. However, interestingly enough, some words are never fixated. The sequence of fixations is approximately from left to right across a line of text, however, regressions do occur fre- quently. Figure 1 shows the sequence of fixations in a line of text [l]. There are two types of visual processing in reading. In the first type of processing, information from peripheral vision pro- vides a gross visual description of words to the right of the current fixation point. This information is used to form expecta- tions about the words. The second stage of processing occurs on a subsequent fixation when these expectations are integrated with other visual information. 718 Vision From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. ADMIRAL KAZNAKBFF, A Ml%BER OF THE 2 3 The testing sequence is structured as a tree that specifies the order and locations in which tests are performed. The result of a test determines successive tests and reduces the number of words that could match the input image. It is assumed that the input image contains a word in the neighborhood. This constrains the features that could occur at locations in the image. If a set of features are defined a-priori, this constraint determines the subset that could be present. The features used in this paper for hypothesis testing are shown below: Figure 1. Sequence of fixations in a line of text [I]. 
The visual processing is influenced by many high-level fac- tors that include the reason a person is reading the passage of text, as well as the familiarity of the reader with the subject and his or her skill level. A more skilled reader uses visual informa- tion more economically than a less skilled one [3]. That is, a skilled reader uses less visual processing than a poor reader. Recent work has also shown that syntactic processing also influences the visual processing of a text [2]. The proposed computational theory of reading contains three stages that are similar to those of human reading. The first stage generates hypotheses about words from a gross visual description. This is similar to the visual processing of words to the right of a fixation point and is an essential component of one theory of human reading [4]. The second stage uses these hypotheses to determine a feature testing sequence that can be executed on the image to recognize the word. This sequence is adaptable to different high-level influences and can be executed at different physical locations in the word. This stage is similar to the detailed visual processing that takes place at a fixation. The third stage of the theory concerns high-level processing. This stage captures the influence of the various non-visual processes that influence reading such as syntax and semantics. These processes remove word-hypotheses from consideration that do not agree with the high-level constraints. This is a way to represent the influence of many high-level knowledge sources. The remainder of this paper discusses algorithms that implement two of the three stages outlined above. A hypothesis generation procedure is briefly presented. A global contextual analysis procedure is not fully discussed here. Instead, the reader is referred to a technique that uses knowledge about transitions between words to improve the performance of the hypothesis testing portion of this algorithm [5]. A hypothesis testing com- ponent is fully presented and an algorithm is discussed that determines the minimum number of feature tests needed by this component. 3. Hypothesis Generation The hypothesis generation component of the algorithm uses a description of the gross visual characteristics of a word image to index into a dictionary and retrieve a subset of words called a neighborhood that have the same description. The description is the left-to-right sequence of occurrence of a small number of features. The features are simple and easy to extract to increase the reliability of the technique in the presence of noise. This approach is suitable for generating hypotheses about an input word since a small number of features can partition a large dic- tionary into a limited number of small neighborhoods [6]. This is less error-prone than using many features to carry out complete recognition. 4. Hypothesis Testing The hypothesis testing component of the algorithm uses the words in a neighborhood to determine a feature testing sequence. feature code E 6 10 EE description empty space; closed at both the top and bottom, e.g. “0”; closed at the top, e.g. “n”; closed at the bottom, e.g. “u”; left of a short vertical bar m an “a” right of a short vertical bar m a “c” right of the short vertical bar in “en right of a long vertical bar m an “f” between two short vertical bar in a “g” right of a long vertical bar in a “k” right of a short vertical bar in an “r” large empty space containing one of { s,v,wx,yz 1. 
A discrimination test decides which member of a subset of these features is present at a given location. A list is used to show the features discriminated by a test. For example, (1 2) is a test that discriminates between feature 1 (closed at both the top and bottom) and feature 2 (closed at the top).
The locations used by hypothesis testing are the areas between the features discovered by the neighborhood calculation. Several of the hypothesis testing features can be adjacent to one another in these locations. For example, in the sequence "ba", the area between the short vertical bar in the "b" and the short vertical bar in the "a" contains the hypothesis testing feature E4.
An example is shown in Figure 2 of how the discrimination tests are arranged in a tree. The hypothesis generation procedure determined that there were four features in the input word and four locations (1 through 4) between those features at which a discrimination test could be applied. The features of the hypothesis generation stage are numbered 2110 in the second line of Figure 2. The 2 refers to an ascender, the 1's to short vertical bars, and the 0 to a significant vertical space that does not contain a vertical bar. These are all present in the same sequence in the neighborhood { be, has, he }.
The nodes at the first level of the tree are the tests that could be applied at each of the four locations. The result of a test either determines a recognition or the next set of tests that could be applied. In Figure 2, if location 3 is considered and the discrimination between features E and E4 is performed, "has" is recognized if E4 is present. Otherwise, if E is present, the choices are narrowed down to "be" or "he". If feature 1 is then found in position 2, "be" is recognized. Otherwise, if feature 2 is found in position 2, "he" is recognized.
Of particular interest are the shortest paths in a hypothesis testing tree. These are paths from a top-level node to a terminal node (all of whose descendents are word decisions). A shortest path contains the fewest tests needed to recognize the words in a hypothesis testing tree. There are four shortest paths in Figure 2. They are from position 2 to position 3 (contains tests (1 2) and (E E4)), from position 2 to position 4 (contains tests (1 2) and (6E EE)), from position 3 to position 2 (contains tests (1 2) and (E E4)), and from position 4 to position 2 (contains tests (1 2) and (6E EE)). Therefore, there are two different minimum sets of tests that can be used to recognize the words in this tree. They are { (1 2), (E E4) } and { (1 2), (6E EE) }. Henceforth, a shortest path will also mean the set of tests it contains.
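One branch of the Figure 2 tree can be written down directly as a small data structure, which makes the recognition walk explicit. The representation below is ours, not the paper's (Python):

```python
# One written-out branch of the tree for { be, has, he }:
# test at location 3 first, then (if needed) at location 2.
tree = ('loc3', ('E', 'E4'), {
    'E4': 'has',                              # feature E4 at loc 3 -> "has"
    'E':  ('loc2', ('1', '2'), {'1': 'be',    # then discriminate at loc 2
                                '2': 'he'}),
})

def recognize(node, probe):
    """Walk a testing tree; `probe(location, candidates)` performs one
    discrimination test on the image and returns the feature found."""
    while not isinstance(node, str):          # string leaves are word decisions
        loc, feats, children = node
        node = children[probe(loc, feats)]
    return node
```

Driving it with a probe that reports E at location 3 and feature 2 at location 2 returns "he", matching the walk-through above. Note that whole words are recognized with two discrimination tests and no character segmentation.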
The size of MT indicates the computational effort needed to recognize the words in the dictionary and gives us a way to compare this methodology to other reading algorithms: the more efficient technique requires fewer tests and thus has a smaller MT.

Several of the issues involved in finding MT are illustrated by the following example. The top 100 most frequent words in the Brown Corpus were chosen and the testing trees for these words were determined. The Brown Corpus is a text of over 1,000,000 words that was designed to represent modern edited American English [8]. Of the 100 most frequent words in the Corpus, 55 are uniquely recognized by the hypothesis generation computation. The other 45 words are contained in 16 trees. These words are shown in Figure 3 along with the tests on the shortest paths in their trees. A shortest path is shown as a list of tests, and a tree is represented by a list of the shortest paths it contains. In this example, there are as few as one (trees 5, 8, 10-13, and 16) and as many as eleven different shortest paths (tree 7) in a testing tree. Overall, there are 25 different tests on the shortest paths of the 16 trees in Figure 3. They are:

(1 2) (E E4) (6E EE) (EE E4) (1 E E4) (6E EE E) (1 E) (6E E) (EE EE4 E4) (6E 6E4) (10E E) (10E 6E) (10E 2) (6E 6E4 E) (10E 2 E) (10EE 6E EE) (6EE 6E) (EE E) (EE 1E4) (10E 6EE EE) (1 2 5E4 6EE) (1 6E E E4) (1 3) (2 3) (1 3 E)

The objective is to choose the smallest subset of these tests that contains the tests on at least one of the shortest paths in each tree.

tree  words                    shortest paths
1     be, has, he              (((1 2) (E E4)) ((1 2) (6E EE)))
2     have, her, for           (((1 E E4)) ((6E EE E)))
3     what, who                (((E E4)) ((1 E)))
4     well, all                (((EE E4)) ((6E E)))
5     was, way, we, as         (((EE EE4 E4)))
6     so, at                   (((EE E4)) ((1 E)))
7     years, were, are, any    (((6E 6E4) (EE E4)) ((10E E) (EE E4)) ((10EE 6E) (EE E4)) ((10E 2) (EE 1E4)) ((6E EE) (EE E4)) ((10E 2) (6E 6E4 E)) ((6E 6E4 E) (6E EE)) ((10E 2 E) (EE 1E4)) ((10E 2 E) (6E E)) ((10EE 6E EE) (EE E4)) ((10EE 6E EE) (6E 2)))
8     they, the                (((6EE 6E)))
9     down, than, then         (((1 E E4)) ((6E EE E)))
10    two, to                  (((EE E)))
11    or, my, new              (((10E 6EE EE)))
12    on, no, can, even        (((1 2 5E4 6EE)))
13    me, may, now, over       (((1 6E E E4)))
14    out, not                 (((1 2)) ((1 3)))
15    one, our                 (((2 3)) ((10E 6E)))
16    man, most, must          (((1 3 E)))

Figure 3. The shortest paths in the testing trees for the 16 neighborhoods computed from the top 100 most frequent words in the Brown Corpus.

The brute force algorithm is to evaluate every possible subset of tests and determine whether the tests in it can solve every tree. However, this is obviously unsuitable because of its exponential complexity. Therefore, an approach is needed that reduces the computation. An algorithm that achieves this objective is proposed here. The algorithm takes the shortest paths from a number of trees as input (Figure 3 is an example). It iteratively adds the tests on a shortest path to the solution set (initially empty) until the solution contains tests that can be used to recognize the words in every tree. The solution set is then output. It should be noted that all the tests on one of the shortest paths in each tree must be in the solution; otherwise, the words in that tree could not be recognized. For example, in tree one of Figure 3, either both (1 2) and (E E4) or both (1 2) and (6E EE) must be in the solution. A more precise description of the algorithm is:

(1) Take the union of the shortest paths.

(2) Determine the number of trees that are solved by the tests in each shortest path.
(Solving a tree is equivalent to having the tests on one of its shortest paths in the solution.)

(3) Find the shortest paths that contain the most tests and solve the maximum number of trees. In the above example there are eight shortest paths that contain two tests and solve three trees: for example, tests (1 2) and (E E4) solve trees 1, 3, and 14; (6E 6E4) and (EE E4) solve trees 4, 6, and 7; and so on.

(4) Add the tests from one of the shortest paths discovered in step 3 to the solution. Remove the trees that are solved from consideration. Derive a reduced set of shortest paths by removing tests that are in the solution.

(5) If such a reduction yields an empty set of shortest paths, output the tests in the solution. Stop only if one of the possibly many solutions is desired; otherwise continue with step one.

This algorithm must terminate because every reduction is guaranteed to produce a smaller set of shortest paths. There are 16 different solutions to the example set of shortest paths in Figure 3. Each solution contains 13 tests. One solution is:

(1 2) (E E4) (EE EE4 E4) (EE E4) (6E 6E4) (1 E E4) (2 3) (6EE 6E) (EE E) (10E 6EE EE) (1 2 5E4 6EE) (1 6E E E4) (1 3 E)

The 16 solutions to this example illustrate that there may be many solutions that contain the minimum number of tests. Choosing the "easiest" among them is largely a matter of judgement and intuition. The algorithm could be tuned to expand only the most promising paths, i.e., those that are found early on to contain "easy" tests.

A version of the above procedure was implemented that found the first set of tests on a shortest path. This shows the smallest number of tests that must be executed to recognize a given vocabulary. It was applied to two vocabularies: the first consisted of subsets of the Brown Corpus determined by frequency; the second consisted of the subject categories, or genres, that make up the Brown Corpus. These results are shown in Table 1. They are interesting because they show that at most 347 different tests are needed to recognize the text in any subject category of the Brown Corpus. The experiments that increased dictionary size show a linear effect of the number of dictionary words on the number of tests needed to recognize those words: usually, doubling the dictionary size increases the number of tests by about fifty percent. Another interesting result of this study is the difference between genres. Only 184 tests are needed to recognize the 34,495 words in genre D. This is very interesting in comparison to the 220 tests needed to recognize nearly the same number of words (35,466) in genre C; i.e., 36 more tests are needed to recognize only 1000 more words. This could be attributable to the difference in subject categories (genre C contains press reviews and genre D contains religious material). However, it is more likely due to the difference in the number of words in the dictionaries of the genres: the dictionaries for genres C and D contain 7751 words and 5733 words, respectively. Overall, the results indicate that the hypothesis testing methodology is economical and requires far fewer than the theoretical maximum of 2^12 tests to be programmed. In fact, so few tests are needed that a high-performance implementation should be achievable.
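A minimal Python sketch of the greedy covering procedure described earlier in this section is given below. It scores candidate shortest paths only by the number of trees they would solve (a simplification of step 3, which also prefers longer paths); the input is trees 1, 3, and 14 of Figure 3.

```python
def greedy_minimum_tests(trees):
    # Each tree is a list of shortest paths; each path is a frozenset of
    # tests. Repeatedly merge the path that solves the most remaining
    # trees into the solution until every tree is solved.
    solution, unsolved = set(), list(trees)
    while unsolved:
        best_path, best_solved = None, []
        for tree in unsolved:
            for path in tree:
                merged = solution | path
                solved = [t for t in unsolved if any(p <= merged for p in t)]
                if len(solved) > len(best_solved):
                    best_path, best_solved = path, solved
        solution |= best_path
        unsolved = [t for t in unsolved if t not in best_solved]
    return solution

trees = [
    [frozenset({"(1 2)", "(E E4)"}), frozenset({"(1 2)", "(6E EE)"})],  # tree 1
    [frozenset({"(E E4)"}), frozenset({"(1 E)"})],                      # tree 3
    [frozenset({"(1 2)"}), frozenset({"(1 3)"})],                       # tree 14
]
print(greedy_minimum_tests(trees))  # {'(1 2)', '(E E4)'}
```

On this input the very first path chosen is {(1 2), (E E4)}, which solves all three trees, matching the worked example in step 3.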
6. Comparison to Character Recognition

The reading algorithm proposed in this paper has been compared to a character recognition algorithm of similar design [7]. This was done by designing a character recognition algorithm that was comparable to the proposed reading algorithm and determining the minimum number of tests needed by both approaches.

The character recognition algorithm was designed to contain both a hypothesis generation and a hypothesis testing phase. The hypothesis generation stage was the same as that of the proposed technique. Because a character recognition algorithm requires that individual characters be isolated, an additional assumption was made that the characters in the input words could be perfectly segmented and, thus, that the number of characters in each word could be determined. Therefore, the contents of each neighborhood were constrained to be words with the same number of characters.

The hypothesis testing phase of the character recognition algorithm used the same design as the proposed technique. The only difference was in the discrimination tests: they were between characters rather than between features. This is the point where the two methods differed. For example, in the neighborhood {may, now} three character discriminations were possible: (m n), (a o), and (y w). In contrast, the proposed technique would discriminate among the features that occur between the vertical bars. The character recognition algorithm can be summarized:

1. Assume that an input word can be perfectly segmented and that it comes from a given, fixed vocabulary. Identify the locations of each character.

2. Determine the neighborhood of the input word in the given vocabulary. The same features and feature extraction procedure are used as in the proposed methodology, except that the neighborhood must contain words with the same number of characters.

3. Use the proposed method of hypothesis testing to recognize the input. Because this is a character recognition approach, the discriminations are between characters that occur in the positions specified by step 1.

One modification to the proposed method was needed to make the neighborhoods calculated by the two methods the same: the hypothesis generation algorithm used an additional parameter, the number of characters in each word. Note that the modification requires only that the number of characters in a word be determined. This can be much easier than segmenting a word into isolated characters.

dict. size   words of text   no. of tests
50           413,565         6
100          483,355         13
250          566,288         27
500          632,693         42
750          674,112         51
1000         705,045         61
2000         781,581         92
3000         827,178         128
4000         858,306         155
5000         880,802         172
6000         898,145         197
7000         912,068         205
8000         923,358         223
9000         932,831         246
10,000       940,871         252
15,000       968,004         331
20,000       984,105         368
25,000       994,105         425

genre   words of text   no. of tests
A       88,051          297
B       54,662          225
C       35,466          220
D       34,495          184
E       72,529          256
F       97,658          294
G       152,662         347
H       61,659          206
J       160,877         303
K       58,650          226
L       48,462          219
M       12,127          127
N       58,790          235
P       59,014          238
R       18,447          158

Table 1. The minimum number of tests that must be executed to recognize the indicated vocabularies.

The minimum number of tests needed by the algorithm proposed in this paper and by the character recognition technique was determined for the genres of the Brown Corpus. It was discovered that the proposed algorithm would have to execute from one percent to fourteen percent fewer tests to recognize the same texts. However, from three percent to seven percent of the running text could not be completely recognized by the proposed technique.
These were words where the hypothesis testing algorithm could not reduce the neighborhood to a unique choice. Usually, there were only two or three choices for such a word, and these choices were from different syntactic or semantic categories. It is hoped that future work with such higher-level knowledge will allow these ambiguities to be reduced. The conclusion of this experiment was that the proposed algorithm would execute significantly fewer tests than a similar character recognition technique, at a tolerable cost in text that could not be completely recognized.

7. Experimental Results

The viability of the hypothesis testing strategy and its ability to adapt to different input conditions were demonstrated by its implementation for a dictionary of 630 words from a randomly selected sample of 2003 running words from the Corpus. Each of these words was generated in ten different fonts. The fonts were 24 pt. samples digitized as binary images at 500 pixels per inch on a laser drum scanner. Word images were generated by appending the images of the appropriate characters. Word images were also generated by appending the characters and moving them horizontally until they touched; this is very difficult for some techniques to compensate for. This resulted in 12,600 input images. Five of the ten fonts were used as training data. The other fonts were subjected to no image processing before they were used for recognition testing. The 6300 words in the training data were recognized correctly 98% of the time when the characters were not touching and 95% of the time when the characters were touching; the other cases were errors. The 6300 words not in the training data were correctly recognized in 95% of the cases when the characters were not touching and 92% of the time when the characters were touching.

8. Discussion and Conclusions

A computational theory and algorithm for fluent reading were presented. The work presented in this paper sought to bridge the gap between theory and methods and to bring to reading algorithms the benefits of many years of psychological investigation of human reading. The mechanism used to effect this transfer was a computational theory and its related algorithms. This is an example of applying the theoretical constructs of Artificial Intelligence to an engineering problem.

It was seen that people do not read by recognizing isolated characters, as most current techniques do. Instead, people recognize larger groups of letters or words. This recognition process uses at least two stages. One uses a gross visual description to develop expectations about words in a running text. The other integrates these expectations with detailed visual processing to form complete perceptions of the words in the text. This stage of processing is very individualized and subject to change based on many external factors. Another process that occurs during reading uses high-level knowledge to affect the visual processing.

This paper discussed algorithms that performed the two stages of visual processing. A method for hypothesis generation was presented that extracted a gross visual description of words and used it to return from a dictionary a number of hypotheses containing the word in the input image. A technique for hypothesis testing was also presented. This method was structured as a tree search of discrimination tests. The result of a discrimination test was either a recognition of the input word or another set of tests that could be applied to the image.
The tree search methodology was set up so that different testing strategies could be used to recognize a word, as a human reader is capable of doing. An algorithm was presented that determined the minimum number of different discrimination tests needed to recognize the words in a large vocabulary. It was shown that this technique requires a small number of tests to recognize any word in large subsets of text. Recognition experiments on many different fonts showed that about 95% correct recognition was achieved on 12,600 word images. This demonstrates the ability of this methodology to tolerate different formats and its potential to reach high levels of performance.

ACKNOWLEDGEMENTS

The author is grateful to Sargur N. Srihari for valuable consultations. We acknowledge the support of the Office of Advanced Technology of the United States Postal Service under contract BOA 104230-84-0962.

REFERENCES

1. W. K. Estes, "On the interaction of perception and memory in reading," in Basic Processes in Reading: Perception and Comprehension, D. LaBerge and S. J. Samuels (eds.), Lawrence Erlbaum Associates, Hillside, New Jersey, 1977.

2. L. Frazier and K. Rayner, "Making and correcting errors during sentence comprehension: eye movements in the analysis of structurally ambiguous sentences," Cognitive Psychology 14 (1982), 178-210.

3. R. N. Haber and L. R. Haber, "Visual components of the reading process," Visible Language XV, 2 (1981), 147-181.

4. J. Hochberg, "Components of literacy: Speculations and exploratory research," in Basic Studies on Reading, H. Levin and J. P. Williams (eds.), Basic Books, Inc., New York, 1970, 74-89.

5. J. J. Hull, "Inter-word constraints in visual word recognition," Proceedings of the Conference of the Canadian Society for Computational Studies of Intelligence, Montreal, Canada, May 21-23, 1986, 134-138.

6. J. J. Hull, "Hypothesis generation in a computational model for visual word recognition," IEEE Expert, August 1986, 63-70.

7. J. J. Hull, "A computational theory of visual word recognition," Technical Report, Department of Computer Science, SUNY at Buffalo, 1987.

8. H. Kucera and W. N. Francis, Computational Analysis of Present-Day American English, Brown University Press, Providence, Rhode Island, 1967.

9. D. Marr, Vision, W. H. Freeman and Company, San Francisco, 1982.

10. K. Rayner, Eye Movements in Reading: Perceptual and Language Processes, Academic Press, New York, 1983.

11. J. Schurmann, "Reading machines," Proceedings of the 6th International Conference on Pattern Recognition, Munich, West Germany, October 19-22, 1982, 1031-1044.

12. L. G. Shapiro, "The role of AI in computer vision," The Second IEEE Conference on Artificial Intelligence Applications, Miami Beach, Florida, December 11-13, 1985, 76-81.
1987
134
588
Department of Computer Science
Columbia University
New York, N.Y. 10027

This paper describes an approach which integrates several conflicting and corroborating shape-from-texture methods in a single system. The system uses a new data structure, the augmented texel, which combines multiple constraints on orientation in a compact notation for a single surface patch. The augmented texels initially store weighted orientation constraints that are generated by the system's several independent shape-from-texture components. These components, which run autonomously and may run in parallel, derive constraints by any of the currently existing shape-from-texture approaches, e.g. shape-from-uniform-texel-spacing. For each surface patch the augmented texel then combines the potentially inconsistent orientation data, using a Hough transform-like method on a tessellated Gaussian sphere, resulting in an estimate of the most likely orientation for the patch. The system then defines which patches are part of the same surface, simplifying surface reconstruction. This knowledge fusion approach is illustrated by a system that integrates information from two different shape-from-texture methods, shape-from-uniform-texel-spacing and shape-from-uniform-texel-size. The system is demonstrated on camera images of artificial and natural textures.

1 INTRODUCTION

This paper describes a new approach to the problem of defining and reconstructing surfaces based on multiple independent textural cues. The generality of this approach is due to the interaction between textural cues, allowing the methodology to extract shape information from a wider range of textured surfaces than any individual method. The method, as shown in figure 1, consists of three major phases: the calculation of orientation constraints and the generation of texel patches², the consolidation of constraints into a "most likely" orientation per patch, and finally the reconstruction of the surface.

During the first phase the different shape-from-texture components generate texel patches and augmented texels. Each augmented texel consists of the 2-D description of the texel patch and a list of weighted orientation constraints for the patch. The orientation constraints for each patch are potentially inconsistent or incorrect because the shape-from methods are locally based and utilize an unsegmented, noisy image. In the second phase, all the orientation constraints for each augmented texel are consolidated into a single "most likely" orientation by a Hough-like transformation on a tessellated Gaussian sphere. During this phase the system will also merge together all augmented texels that cover the same area of the image. This is necessary because some of the shape-from components define "texel" similarly, and the constraints they generate should also be merged. Finally, the system reanalyzes the orientation constraints to determine which augmented texels are part of the same constraint family and groups them together. In effect, this segments the image into regions of similar orientation.

¹This research was supported in part by ARPA grant #N~XW-C-0165, by a NSF Presidential Young Investigator Award, and by Faculty Development Awards from AT&T, Ford Motor Co., and Digital Equipment Corporation.

²A texel patch is a 2-D description of a subimage that contains one or more textural elements. The number of elements that compose a patch depends on the shape-from-texture algorithm.
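The paper specifies the contents of an augmented texel but not an implementation; one possible rendering as a small record type is sketched below in Python, with every field name an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class OrientationConstraint:
    source: str            # which shape-from-texture component produced it
    vanishing_point: tuple # (x, y) location in the image plane
    weight: float          # assurity weighting of the constraint

@dataclass
class AugmentedTexel:
    patch: list            # 2-D description of the texel patch (here, a polygon)
    constraints: list = field(default_factory=list)

    def add_constraint(self, c: OrientationConstraint):
        self.constraints.append(c)

texel = AugmentedTexel(patch=[(10, 12), (18, 12), (18, 20), (10, 20)])
texel.add_constraint(OrientationConstraint("uniform-texel-spacing", (240.0, 55.0), 0.8))
texel.add_constraint(OrientationConstraint("uniform-texel-size", (251.0, 60.0), 0.6))
```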
In order to build a complete system one may also want to reconstruct surfaces from these surface patches [Boult 86]. The robustness of this approach is illustrated by a system that fuses the orientation constraints of two existing shape-from methods: shape-from-uniform-texel-spacing [Moerdler 85] and shape-from-uniform-texel-size [Ohta et al. 81]. These two methods generate orientation constraints for different, overlapping classes of textures.

Figure 1: Integrating multiple shape-from methods.

Current methods to derive shape-from-texture are based on measuring a distortion that occurs when a textured surface is viewed under perspective. This perspective distortion is imaged as a change in some aspect of the texture. In order to simplify the recovery of the orientation parameters from this distortion, researchers have imposed limitations on the applicable class of textured surfaces. Some of the limiting assumptions include uniform texel spacing [Kender 80; Kender 83; Moerdler 85], uniform texel size [Ikeuchi 80; Ohta et al. 81; Aloimonos 85], uniform texel density [Aloimonos 86], and texel isotropy [Witkin 80; Davis et al. 83]. Each of these is a strong limitation, causing methods based on them to be applicable to only a limited range of real images.

The generation of orientation constraints from perspective distortion uses one or more image texels. The orientation constraints can be considered as local, defining the orientation of individual surface patches (called texel patches³), each of which covers a texel or group of texels. This definition allows a simple extension of the existing shape-from methods beyond their current limitation to planar surfaces or simple non-planar surfaces based on a single textural cue. The problem can then be considered as one of intelligently fusing the orientation constraints per patch. Ikeuchi [Ikeuchi 80] and Aloimonos [Aloimonos 85] attempt a similar extension based on constraint propagation and relaxation for planar and non-planar surfaces, but using only a single shape-from-texture method.

The process of fusing orientation constraints and generating surfaces can be broken down into the following three phases:

1. The creation of texel patches and multiple orientation constraints for each patch.

2. The unification of the orientation constraints per patch into a "most likely" orientation.

3. The formation of surfaces from the texel patches.

The first phase of the system consists of multiple shape-from-texture components which generate augmented texels, each augmented texel consisting of a texel patch, orientation constraints for the texel patch, and an assurity weighting per constraint. The orientation constraints are stored in the augmented texel mathematically as vanishing points, which are equivalent to a class of other orientation notations (e.g. tilt and pan as gradient constraints) [Shafer et al. 83]. Moreover, they are simple to generate and compact to store.

³Texel patches are defined by how each method utilizes the texels. Some methods (e.g. uniform texel size) use a measured change between two texels; in this case the texel patches are the texels themselves. Other methods (e.g. uniform texel density) use a change between two areas of the image; in this case the texel patches are these predefined areas.
The assurity weighting is defined separately for each shape-from method and is based upon the intrinsic error of the method. For example, shape-from-uniform-texel-spacing's assurity weighting is a function of the total distance between the texel patches used to generate the constraint. A low assurity value is given when the inter-texel distance is small (one texel distance), because under these conditions a small digitization error causes a large orientation error. Above this threshold the assurity weighting is set high and then starts to decrease as the inter-texel distance increases. (The optimal shape of this assurity function is under investigation.)

Once the orientation constraints have been generated for each augmented texel, the next step consists of unifying the constraints into one orientation per augmented texel. The major difficulty in deriving this "most likely" orientation is that the constraints are errorful, inconsistent, and potentially incorrect. A simple and computationally feasible solution is to use a Gaussian sphere which maps the orientation constraints to points on the sphere [Shafer et al. 83]. A single vanishing point circumscribes a great circle on the Gaussian sphere; two different constraints generate two great circles that overlap at two points, uniquely defining the orientation of both the visible and invisible sides of the surface patch.

The Gaussian sphere is approximated within the system by a hierarchically tessellated Gaussian sphere based on trixels, triangular-shaped faces [Ballard et al. 82; Fekete et al. 84; Korn et al. 86] (see figure 2). The top level of the hierarchy is the icosahedron. At each level of the hierarchy other than the lowest, each trixel has four children. This hierarchical methodology allows the user to specify the accuracy to which the orientation should be calculated by defining the number of levels of tessellation that are created.

Figure 2: The trixelated Gaussian sphere.

The system generates the "most likely" orientation for each texel patch by accumulating evidence for all the constraints for the patch. For each constraint, it recursively visits each trixel to check if the constraint's great circle falls on the trixel, and then visits the children if the result is positive. At each leaf trixel the likelihood value of the trixel is incremented by the constraint's weight. Although this is a search process, the hierarchical nature of this approach limits the number of trixels that need to be visited. Once all of the constraints for a texel patch have been considered, a peak finding program smears the likelihood values at the leaves. Currently, this is done heuristically by a rough approximation to a Gaussian blur. The "most likely" orientation is defined to be the trixel with the largest smeared value.

In shape-from-uniform-texel-spacing, the constraint calculation is as follows. Given any two texels T1 and T2 (see figure 4) whose inter-texel distance is defined as D, if the distance from T1 to a mid-texel T3 is equal to L and the distance from T2 to the same mid-texel T3 is equal to R, the distance from texel T1 to a vanishing point is given exactly by:

X = [D + (R x D)] / [L - R]
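A direct transcription of this relation into Python, with argument names taken from the text, might look as follows; the numbers in the example call are made up.

```python
def vanishing_point_distance(D, L, R):
    """Distance from texel T1 to the vanishing point under the
    uniform-texel-spacing assumption: X = [D + (R x D)] / [L - R],
    transcribed from the relation in the text."""
    if L <= R:
        raise ValueError("L must exceed R for a finite vanishing point")
    return (D + R * D) / (L - R)

# Two texels 22 units apart whose distances to a mid-texel are 12 and 10:
print(vanishing_point_distance(D=22.0, L=12.0, R=10.0))  # 121.0
```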
In doing this, the surface generation is also performing a first approximation to a surface separation and segmentation. The reanalysis consists of iterating through each augmented texel, considering all its orientation constraints, and determining which constraints aided in defining the “correct” orientation for the texel patch as described in the previous phase. If an orientation constraint correctly determined the orientation of all the texels that were used in generating the constraint, then these augmented texels arc considered as part of the same surface. The knowledge fusion approach outlined in the previous section has been applied to a test system that contains two shape-from-texture methods, shape-from-uniform-texel- spacing woerdler 851, and shape-from-uniform-texel-size [Ohta et. al. 811. Each of the methods is based on a different, limited type of texture. Shape-from-uniform-texel-spacing derives orientation constraints based on the assumption that the texels on the surface are of arbitrary shape but are equally spaced. Shape-from-uniform-texel-size is based on the unrelated criteria that the spacing between texels can be arbitrary but the size of all of the texels are equivalent but u~own. In shape-from-uniform-texel-size if the distance from the center of mass of texel T, to texel T2 (see figure 3) is defined as D then the distance from the center of texel Ta to a point on the vanishing line can be written as : F2 = D x S21f3 I (s,1’3-~~“3) Figulre 3: The calculation of shape-from-uniform-texel-size Under certain conditions either method may generate incorrect constraints, which the system will ignored. On textures that are solvable by both methods, they cooperate and correctly define the textured surface or surfaces in the image. Some images are not solvable by either method by itself but can only be correctly segmented and the surfaces defined by the interaction of the cues (i.e. the upper right texel of figure 13). E EFFECTS OF N Real images contain noise and shadows which are effectly ignored by the system in many cases. The system treats shadows as potential surface texels (see texels 9 and 13 in figure 5) and uses them to compute orientation constraints. Since many texels are used in generating the orientation for each individual texel the effect of shadow texels is minimized. Even under the conditions where many shadow texels are found they do not effect the computed orientation of surface texels so iong as the placement mimic perspective distortion. of the shadow does not -G Figure 4: A geometrical representation of back-projecting. Noise can occur in many ways: it can create texels, and it can change the shape, size, or position of texels. If noise texels are sufficiently small then they are ignored in the texel finding components of the shape-from methods. When they are large, they are treated in much the same way as shadow texels and thus often do not affect the orientation of the surface texel patches. Since many texels are used and more than one shape-from method is employed, noise-created changes in the shape of texels can perturb the orientation results, but the effect appears negligible as shown in the experimental results. 6EX ENTAL s The system has been tested over a range of both synthetic and natural textured surfaces, and appears to show robustness and generality. Three examples are given on real, noisy images that demonstrate the cooperation among the shape-from methods. 
Future enhancements to the system would include the addition of other shape-from-texture modules, investigation of other means of fusing information (such as object model approaches), analysis of curved surfaces, studies of error behavior, and optimization of the fusion approach, especially in a parallel processing environment.

Figure 11: Orientation values for the coins — measured versus actual (p, q) values per texel, with errors of 0 or 8 degrees.

Figure 12: Surface normals generated for the coins.

Figure 13: A box of breakfast buns with one bun missing.

REFERENCES

[Aloimonos 85] John Aloimonos and Michael J. Swain. Shape from Texture. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. IJCAI, 1985.

[Aloimonos 86] John Aloimonos. Detection of Surface Orientation and Motion from Texture: 1. The Case of Planes. Proceedings of the Computer Vision and Pattern Recognition Conference, 1986.

[Ballard et al. 82] Dana Ballard and Christopher Brown. Computer Vision. Prentice-Hall Inc., 1982.

[Boult 86] Terrance E. Boult. Information Based Complexity in Non-Linear Equations and Computer Vision. PhD thesis, Department of Computer Science, Columbia University, 1986.

[Davis et al. 83] L. Davis, L. Janos, and S. Dunn. Efficient Recovery of Shape from Texture. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5(5), 1983.

[Fekete et al. 84] Gyorgy Fekete and Larry S. Davis. Property Spheres: A New Representation for 3-D Object Recognition. Proceedings of the Workshop on Computer Vision Representation and Control, 192-201, 1984.

[Gibson 50] James J. Gibson. Perception of the Visual World. Riverside Press, 1950.

[Ikeuchi 80] Katsushi Ikeuchi. Shape from Regular Patterns (an Example from Constraint Propagation in Vision). Proceedings of the International Conference on Pattern Recognition, 1032-1039, December 1980.

[Kender 80] John R. Kender. Shape from Texture. PhD thesis, C.M.U., 1980.

[Kender 83] John R. Kender. Surface Constraints from Linear Extents. Proceedings of the National Conference on Artificial Intelligence, March 1983.

[Korn et al. 86] M. Korn and C. Dyer. 3-D Multiview Object Representation for Model-Based Object Recognition. Technical Report RC 11760, IBM T.J. Watson Research Center, 1986.

[Moerdler 85] Mark L. Moerdler and John R. Kender. Surface Orientation and Segmentation from Perspective Views of Parallel-Line Textures. Technical Report, Columbia University, 1985.

[Ohta et al. 81] Y. Ohta, K. Maenobu, and T. Sakai. Obtaining Surface Orientation from Texels under Perspective Projection. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence. IJCAI, 1981.

[Pentland 86] Alex P. Pentland. Shading into Texture. Artificial Intelligence (2):147-170, August 1986.

[Shafer et al. 83] S. Shafer, T. Kanade, and J. Kender. Gradient Space under Orthography and Perspective. Computer Vision, Graphics and Image Processing (24), 1983.

[Witkin 80] Andrew P. Witkin. Recovering Surface Shape from Orientation and Texture. In Michael Brady (editor), Computer Vision, pages 17-45. North-Holland Publishing Company, 1980.
1987
135
589
K. Prazdny

Translation-, rotation-, and scale-invariant recognition of multiple, superimposed, partially specified or occluded objects can be accomplished in a fast, simple, distributed and parallel fashion using localizable features with intrinsic orientation. All known objects are recognized, localized, and segmented simultaneously. The method is robust and efficient.

Classification of signals on the basis of their shape is a fundamental problem in biological and machine information processing. Interactive activation models view this process as a continuous transaction between the incoming information and stored knowledge. For example, McClelland and Rumelhart (1981) developed an interactive activation model of perceiving letters within the context of words. When a stimulus (a particular spatial distribution of features) is presented to the system, the relevant letter units are activated. The letter units then activate word units that in turn reinforce the activation of the appropriate letter units. In this model, the connections between the letter and word units are fixed. In a more recent model (McClelland, 1986) the connections are "programmable" in the sense that they can be switched on and off by multiplicative signals from the "knowledge" modules. The model uses both bottom-up and top-down processing simultaneously to home in on a consistent interpretation of the stimulation. An important feature of such models is parallelism: processing occurs both between and within levels simultaneously. This is one of the few known methods permitting large scale exploitation of mutual, often only partially valid constraints in a distributed and parallel fashion.

Such a connectionist approach is powerful, but unfortunately it also has some drawbacks. The sheer number of connections and units necessary to implement a full scale "programmable" similitude-invariant (i.e., translation-, rotation-, and scale-invariant) pattern recognizer is simply enormous. Another problem is that these models usually implicitly assume that the segmentation (i.e. the knowledge of what features belong to the same object) has already been achieved. For example, most word recognition systems of the connectionist variety (e.g. McClelland, 1986) use the spatial locality constraint: a feature at a position x cannot possibly interact with a feature at y because they are separated by more than z units (i.e. they cannot belong to the same letter). In other words, not only the segmentation but also the scale is given (e.g. Hinton & Lang, 1985). In general, however, different objects may be superimposed or occlude each other, and spatially widely separated features may cooperate to define an object. The problem, then, is not the recognition of an isolated object (for which many techniques are available [e.g. Hu, 1962; Casasent & Psaltis, 1976]) but recognition of multiple objects without a prior segmentation or knowledge of which objects are present in the image.

Sometimes, segmentation can be achieved on the basis of "peripheral" information without the involvement of pattern specific evidence. For example, a selective attention mechanism may (at least partially) extract one voice from the jumble of noise and other voices at a party (the cocktail party effect [Cherry, 1953]) based on stimulus onset synchrony or other distinguishing components (e.g. pitch) of one sound.
Similarly, in motion and stereopsis, objects can be separated from the background and from each other using only motion or disparity information, without being recognized first (Julesz, 1971; Prazdny, 1985). Often, such peripheral segmentation based on similarity of a local signal quality is not sufficient, and more central evidence based on the geometrical/topological disposition of the individual features has to be invoked. In short, in many (and perhaps most) situations, segmentation, recognition and localization of objects occur simultaneously as the perceptual system tries to "explain" the world.

This paper presents a simple but surprisingly powerful technique for position-, rotation-, and scale-invariant pattern recognition of two-dimensional objects. It is based on parallel distributed processing and uses message passing instead of fixed connections as the primary means of communication. In general, message passing is more economical, versatile, and powerful than "programming" through conjunctive connections.

People and animals (Hollard & Delius, 1982; Rock, 1973) can recognize objects from a variety of viewpoints, even when the objects were never seen from those viewpoints previously. The problem addressed in such a task can be stated simply: given a set of model objects, each specified independently and in isolation, locate (possibly) occluded, overlapping and/or partially specified instances of the models in an image.

Given such encoding, the recognition process is surprisingly simple, mainly because the relational specification does not involve absolute locations or angles (i.e. the relational coding takes care of the position and rotation invariance). First, image features are extracted. Each feature is associated with a processor or unit that can send to, and accept messages from, all other feature processors. Each processor, i, also has access to the knowledge sources (the model catalogue) and can determine a set of admissible correspondences {cijk} between itself and the model features. A model is instantiated when a match cijk is found that maps an image feature (processor) i to a feature j of model k. Two features match if they are of the same type (e.g., both are corners) and have approximately the same value (in our implementation, the image and model corner angles have to be within x degrees of each other, where x is a user-defined parameter).

Each putative correspondence cijk immediately allows the deployment of the knowledge, because each model contains information about all other object features relative to the local reference frame of the model feature j. Thus, each cijk results in a set of messages being broadcast (FIGURE 2a). Each direction can be thought of as a priming or gating signal along which information is propagated that can be "digested" only by specific "receptors" within its path. Each message stipulates exactly which feature with what orientation is expected at each direction; features outside the "attention" beams are ignored (this is in many ways analogous to expectation-driven parsing). This is a very general way to obtain programmable "connections" not available in contemporary connectionist schemes. Connections are not fixed but rather generated "on the fly". It is similar in many ways to the earlier proposals of Waltz (1978).
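A minimal sketch of this broadcasting step is given below; the model catalogue format and the triangle model's numbers are assumptions made for illustration.

```python
import math

# Assumed catalogue: for each feature j of a model, the relative angle
# (radians) and relative distance to every other feature of that model.
MODELS = {
    "triangle": {
        0: [(1, 0.30, 1.0), (2, 1.05, 1.0)],
        1: [(0, -2.84, 1.0), (2, 2.09, 1.0)],
        2: [(0, -2.09, 1.0), (1, -1.05, 1.0)],
    },
}

def broadcast(image_feature_angle, i, j, k):
    """For a putative correspondence c_ijk, emit one message per other
    model feature: 'if image feature i is feature j of model k, then
    feature x lies somewhere along alpha + beta'."""
    messages = []
    for x, rel_angle, rel_dist in MODELS[k][j]:
        messages.append({"sender": i, "model": k, "expected_feature": x,
                         "direction": image_feature_angle + rel_angle,
                         "model_distance": rel_dist})
    return messages

for msg in broadcast(image_feature_angle=math.pi / 6, i=7, j=0, k="triangle"):
    print(msg)
```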
A message is accepted by a processor if the feature type, value, and direction specified by the message agree with the processor's feature specification. When a message is accepted it is in effect a vote for a particular correspondence cabc of the feature processor a that accepted the message. It can be thought of as arguing: "the sender of this message says that if you accept this, then you are feature b of model c". Accepting a message also immediately defines the scale of the object relative to the model: the scale is simply the ratio of the original model distance to the signalled distance between the sender and the receiver.

FIGURE 2. Messages generated by the correct feature-to-model correspondence. (a) Each admissible correspondence cijk between an image feature i and a feature j of model k generates a set of messages along directions determined by the local reference frame imposed by the image feature i and the relative directions of all other features of the object k with respect to the feature j. Each message in effect says: "if the image feature i (oriented along angle a) is the feature j of model k, then the feature x of the same model (which lies at an angle p relative to j) has to lie somewhere along a + p." Each message, denoted by a small angle in the figure, can be caught only by a processor whose feature is sufficiently similar (in type, value, and orientation) to the message specification. An "attention" beam (instead of a single radial) centered at the predicted direction is used to overcome various forms of noise (measurement errors and deformations). (b) When the scale is known, the message delivery area can be considerably narrowed to a region around the predicted feature location. The message specification (the type/value pair and the feature direction) is denoted by the thick lines.

Each processor i monitors the support for all feasible correspondences cijk. After accepting all relevant messages and updating its state accordingly, it chooses the C*ijk (and the associated scale s*ijk) with the largest support as the "correct" correspondence. All subsequent messages are sent only on the basis of this best "hypothesized" correspondence C*ijk (this is a "winner-takes-all" computation). The knowledge of the "correct" scale s*ijk enables the transmitting processor to considerably narrow the message target area: instead of the "attention beams", small regions around the predicted locations are used (FIGURE 2b). That is, the first iteration uses the "attention beams" for message propagation, while all subsequent iterations use the more refined estimates based on the knowledge of s*ijk and the model distance d*iak between i and a under k. The radius of the message area is proportional to the distance between the centre of the target area and the sender. This is like sending a message with a delayed "fuse" that "explodes" in a "fireball" some time after its launch, as opposed to one that is continuously active.

Each processor thus simultaneously and continuously sends and receives messages, and updates its "beliefs" about the "correct" image-to-model feature correspondence based on the information supplied by the other features. After only a few such iterations a global order emerges due to mutual "coercion"; the "conspiracy of partial evidence" drives the system to a stable state (i.e. the state where, for all i, C*ijk(t) = C*ijk(t-1)) (FIGURE 3).
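The per-processor bookkeeping just described — accept matching messages, tally weighted votes per correspondence, pick the winner — can be sketched as follows; the message fields and the tolerance value are illustrative assumptions.

```python
from collections import defaultdict

class FeatureProcessor:
    """Sketch of a feature processor: accept messages whose feature
    specification matches the local feature, tally weighted votes per
    correspondence, and pick the best-supported one (winner-takes-all)."""
    def __init__(self, feature_type, value, tolerance=5.0):
        self.feature_type, self.value, self.tolerance = feature_type, value, tolerance
        self.votes = defaultdict(float)

    def accept(self, message, weight=1.0):
        # A message is accepted only if its specification agrees with
        # this processor's own feature (type and value).
        if (message["expected_type"] == self.feature_type
                and abs(message["expected_value"] - self.value) <= self.tolerance):
            key = (message["model"], message["model_feature"])
            self.votes[key] += weight

    def best_correspondence(self):
        return max(self.votes, key=self.votes.get) if self.votes else None

p = FeatureProcessor("corner", 90.0)
p.accept({"expected_type": "corner", "expected_value": 88.0,
          "model": "box", "model_feature": 3})
p.accept({"expected_type": "corner", "expected_value": 92.0,
          "model": "box", "model_feature": 3}, weight=0.5)
print(p.best_correspondence())  # ('box', 3)
```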
In our experiments we have found that, in general, no more than three iterations are required; mostly, the first pass already results in a correct assignment. The mutual support that features belonging to the same object provide to each other is also reminiscent of the concept of synergistic reverberations.

FIGURE 3. The final state of the system. The models (thick lines denote model features) are superimposed on the image. Numbers identify the models (10 models were in the catalogue in this experiment). Circles around the features identify the features of the same object. The lower right polygon (image features 14-19) has not been recognized, segmented or localized, since no model in the catalogue was supported by the spatial distribution of those image features.

Assume that each image feature matches, on the average, n features of m models, and that each model has, on the average, p features. Then each feature processor generates, on the average, mnp messages. Only a small fraction of such messages is actually accepted by the image feature processors. The computations inside each processor are very simple. The feasibility of the algorithm is thus mainly determined by the speed and costs associated with message broadcasting.

The parallel distributed approach described above is fast and robust: if there are features that are mutually consistent under some mapping to the same model, they will support each other and survive. The approach leaves one additional problem to solve, however: while we know which image feature corresponds to which feature of which model, we do not explicitly know which features belong to the same object! In effect, we have solved for the location and identity of objects without segmentation. One simple way to perform such labeling is through support coincidence. Every correspondence or explanation C*ijk generates a set of locations where it expects to find other image features corresponding to the same object. All other correspondences C*opk (o ≠ i, p ≠ j) generating the same set of expectations are thus instances of the same object. Note that for this computation it is immaterial whether there are any image features found at the predicted locations! This "labelling" strategy is fast and convenient and does not have the problems of most methods based on consensus. For example, the accumulator peak coincidence problem of the generalized Hough transform (Ballard, 1981) — which arises when two shapes, each specified with respect to a reference point, are positioned so that their reference points coincide — is handled in a natural way. Observe also that multiple instances of the same model are handled in the same way as instances of different models.

Experiments with the algorithm (implemented as a simulation on a Symbolics 3670 Lisp machine) revealed that it is fast, robust and efficient. It is robust because any mutually consistent set of features will reinforce each other and survive the "winner-takes-all" competition. It is efficient because of the automatic narrowing of the focus of attention during the search. The system performance is, of course, limited by the accuracy of its input, mainly by the reliability of the image feature direction measurements.

In conclusion, position-, rotation-, and scale-invariant pattern recognition can be performed in a fast, parallel, distributed fashion using features only slightly more complicated than points or edge segments.
The algorithm's power derives from relational coding, from the distribution of processing into many interacting but independent modules performing identical computations, and from the use of directed message passing. This approach is well suited for a fine-grained, massively parallel architecture, e.g. the Connection Machine (Hillis, 1985).

REFERENCES

Attneave F., "Some informational aspects of perception", Psychological Review, 61, 1954

Baird M.L., "SIGHT-1: a computer system for automated IC chip manufacture", IEEE Trans. Systems, Man, Cyber., 8, 133-139, 1978

Ballard D., "Generalizing the Hough transform to detect arbitrary shapes", Pattern Recognition, 13, 111-122, 1981

Bolles R., "Robust feature matching through maximal cliques", SPIE Technical Symposium on Imaging and Assembly, December 1979

Casasent D., Psaltis D., "Position, rotation, and scale invariant optical correlation", Applied Optics, 15, 1795-1799, 1976

Cherry E.C., "Some experiments on the recognition of speech", Journal Acoustical Soc. America, 25, 975-979, 1953

Hillis D., The Connection Machine, MIT Press, 1985

Hinton G., Lang K.J., "Shape recognition and illusory conjunctions", Proceedings IJCAI, 252-259, 1985

Hollard V.D., Delius J.D., "Rotational invariance in visual pattern recognition by pigeons and humans", Science, 218, 804-806, 1982

Hu M.K., "Visual pattern recognition by moment invariants", IEEE Trans. Inf. Theory, IT-8, 179-187, 1962

Julesz B., Foundations of Cyclopean Perception, University of Chicago Press, 1971

McClelland J.M., "The programmable blackboard model of reading", in: Parallel Distributed Processing, McClelland J.M. and Rumelhart D.E. (eds), MIT Press, 1986

McClelland J.M., Rumelhart D.E., "An interactive activation model of context effects in letter perception", Psychological Review, 88, 375-407, 1981

Prazdny K., "Detection of binocular disparities", Biological Cybernetics, 52, 387-395, 1985

Rock I., Orientation and Form, Academic Press, 1973

Waltz D.L., "A parallel model for low-level vision", in Computer Vision Systems, Hanson A.R., Riseman E.M. (eds), 175-186, Academic Press, 1978
1987
136
590
GRASP Laboratory
Department of Computer and Information Sciences
University of Pennsylvania
Philadelphia, Pennsylvania 19104-6389, USA

Abstract

Although mail pieces can be classified by shape into parallelepipeds and cylinders, they do not conform exactly to these perfect geometrical shapes due to rounded edges, distorted corners, and bulging sides. Segmentation and classification of mail pieces hence cannot rely on a limited set of specific models. Variations and deformations of shape can be conveniently expressed using superquadrics. We show how to recover superquadric models for mail pieces and segment the range image at the same time.

Postal services are currently facing the problem of automating mail piece handling. At present only letter handling is fully automated. The rest of the mail pieces are handled at least partially, if not completely, by hand due to their large variability in size and shape [Owen, 1986]. Any automatic system for handling mail pieces has to determine the location, orientation, size, and shape of mail pieces in order to manipulate them accordingly. Computer vision is a promising way to satisfy these requirements.

The problem of characterizing mail pieces is somewhere between scene description and object recognition. For scene description, a unique description of objects is not necessary. It is generally sufficient to generate, using a bottom-up strategy, a succession of representations that depend on the viewing direction and orientation of objects, resulting in a geometric representation such as surface patches or polyhedral approximations. On the other hand, to recognize an object in the scene as one from a set of predefined models, a computer vision system must have models of these objects which it compares to the input data. For recognition of 3-D objects, viewpoint-independent 3-D models are required. Most working recognition systems rely on fixed, definitive models intended only for environments where a limited, preselected number of objects are encountered. Mail pieces, however, do not come in just a few uniform shapes and sizes. Thus, having individual models for each mail piece is not feasible. This is why segmenting and representing mail pieces is not object recognition in the strict sense, which is normally understood as selecting the right ready-made model from a predefined set of models.

Classifying mail pieces is related to categorization. People form categories by picking out the essential and separating it from the accidental [Rosch, 1978]. This sorting of instances into categories reflects the structure of the world [Pentland, 1986a; Bajcsy and Solina, 1987]. Like any other objects, mail pieces can be grouped into classes or categories. The shape classification which is used for manual handling of mail pieces, and which identifies parcels, flats, tubes, rolls, and irregular packages, reflects such structure. An automated mail handling system must also divide mail pieces into appropriate classes, give their shape description by identifying the necessary parameters of the class model, and provide the position and orientation in a world coordinate system.

The difficulty in modeling mail pieces is their nonuniform shape and size. They do not conform to perfect geometrical shapes because of rounded edges, distorted corners, bulging sides, and wrinkled wrapping. With standard 3-D shape representations, like generalized cylinders or polyhedral approximations, such degradations from ideal prototypes are difficult to express. Superquadrics, on the other hand, have the advantages of generalized cylinders and direct control over the roundness/squareness of edges. In general, only a single superquadric model is required for a single mail piece.

The rest of the paper is organized as follows: we first describe the recovery of superquadric models from range data, outline the recognition procedure, including some new ideas and preliminary results about segmentation, and, at the end, compare our recovery method with other related work and discuss future research.

Superquadrics are a family of parametric shapes that were invented by the Danish designer Piet Hein [Gardiner, 1965] as an extension of basic quadric surfaces and solids (see also [Barr, 1981]). Pentland [Pentland, 1986a] first suggested them for the analysis of scenes in computer vision. A superquadric surface is defined by the following implicit equation:

[(xs/a1)^(2/e2) + (ys/a2)^(2/e2)]^(e2/e1) + (zs/a3)^(2/e1) = 1     (1)

where xs, ys, and zs are coordinates of a point on the superquadric surface. The subscript S indicates a superquadric-centered coordinate system.

This research was made possible by the following grants and contracts: USPS 104230-87&0001/h&0195, ONR Subcontract SB35923-0, NSF/DCR-8410771, ARMY/DAAG-29-84-K-0061, NSF-CER/DCR 82-19196 A02, Air Force/F49620-85-K-0018, and DARPA/ONR. We wish to thank Sandy Pentland for his continuous encouragement to use superquadric models and Max Mintz for helping with the minimization procedure.
Superquadrics, on the other hand, have the advantages of generalized cylinders and direct control over the roundness/squareness of edges. In general, only a single super- quadric model is required for a single mail piece. The rest of the paper is organized as follows: we first describe the recovery of superquadric models from range data, outline the recognition procedure, including some new ideas and preliminary results about segmentation and, at the end, compam our recovery methcxf with other related work and discuss future research. Superquadrics are a family of parametric shapes that were invented by the Danish designer Peit IIein [Gardiner, 19651 as an extension of basic quadric surfaces and solids (see also [Barr, 19813). Pentland Ipentland, 1986a] suggested them first for analysis of scenes in computer vision. A superquadric surface is defined by the following implicit equation: . \ e?. jp]*+p]“J~+[+ (1) dliC %9 Ys, andzsare coordinates of a point on the superqua- Surface. Subscript S indicates a superquadric centered This research was made possible by the following grants and contracts: ?JSPS 104230-87&0001/h&0195, QNR Subcontract SB35923-0, I’RWDCR-8410771, ARMYDAAG-29-84-K-0061, NSPCEWDCR 82-19196 A02, Airforce/p49620-85-K-0018, and DAIWAIONR. We wish to Sandy Pentland for his eontinuo~s encouragement to use superquadric models and Max Mintz for helping with the minimization procedure. Solina and Bajcsy 733 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. coordinate system. Parameters a I, a2, a3 define the superquadric size in x, y and z directions, respectively. el is the squareness parameter along the z axis and e2 is the squareness parameter in the x-y plane. By changing the two shape parameters, superqua- dries can model a large set of standard building blocks, like ellip- soids, cylinders, parallelepipeds, and all shapes in between. Glo- bal deformations like tapering, twisting, and bending further enhance superquadric modeling capabilities [Barr, 19841. We define the “inside-outside” function for superquadrics IJhen F (G, YS, zd = 1, the point (xs, ys, zs) is on the surface of a superquadric. If F (xs, ys, zs) > 1, the correspond- ing point lies outside and if F (xs, ys, zs) < 1, inside the super- quadric. With the outermost exponent E~ we force F to grow qua- dratically instead of exponentially. This ensures faster conver- gence during model recovery. Superquadrics are suitable models for computer vision because we can form overconstrained estimates of their parame- ters. This overconstraint comes from using models defined by a F (xw, YW, zw) = (4) F (XW, YW, ZW, al, a2, a3, ~1, ~2, h 8, y,gl, ~2, p3 ) The independent parameters expressed in vector notation are:?? = [aal, a2, . . . , a 11 IT. Suppose we have N 3-D surface points (xw, yw, zw) which we want to model with a superquadric. Eq. (4) predicts the position of a point (xw, yw, zw) relative to the surface of the model. We want to vary the 11 adjustable parame- ters aj, j = 1, . . . , 11 in eq. (4) to get such values for aj’s that most of the 3-D points will lay on or close to the model’s surface. Since for points on the surface of a superquadric: F(xw,yw,zW;al, *.. 9all) = l,weachievethisbyminimiz- ing : N x [1 -F (Xwj, Ywit zwi; al, . . . 9 ad I2 i=O However, due to self-occlusion the solution to eq.(5) is unbounded in the sense that an infinite number of superquadric models of different sizi fit objects like cylinders or parallelo- pipeds. 
We introduce here a relatively fast iterative fitting procedure based on the "inside-outside" function. Eq. (2) defines the surface in a superquadric centered coordinate system \((x_s, y_s, z_s)\). 3-D points from passive stereo or range imaging, however, are given in a world coordinate system \((x_w, y_w, z_w)\). We express these 3-D points in the superquadric centered coordinate system by a translation and a sequence of rotations. A convenient way of expressing such a transformation in homogeneous coordinates is with a 4 x 4 matrix T:

\[
\begin{bmatrix} x_s \\ y_s \\ z_s \\ 1 \end{bmatrix}
= T \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix},
\tag{3}
\]

where \(T = \mathrm{Trans}(p_1, p_2, p_3)\cdot\mathrm{Rot}(\varphi, \theta, \psi)\). We use Euler angles to express the orientation in terms of a rotation \(\varphi\) about the z axis, followed by a rotation \(\theta\) about the new y axis, and finally a rotation \(\psi\) about the new z axis. Substituting eq. (3) into eq. (2) we get the "inside-outside" function for a superquadric in general position and orientation:

\[
F(x_w, y_w, z_w) = F(x_w, y_w, z_w;\; a_1, a_2, a_3, \varepsilon_1, \varepsilon_2, \varphi, \theta, \psi, p_1, p_2, p_3).
\tag{4}
\]

The independent parameters expressed in vector notation are \(\Lambda = [a_1, a_2, \ldots, a_{11}]^T\). Suppose we have N 3-D surface points \((x_w, y_w, z_w)\) which we want to model with a superquadric. Eq. (4) predicts the position of a point \((x_w, y_w, z_w)\) relative to the surface of the model. We want to vary the 11 adjustable parameters \(a_j\), j = 1, ..., 11 in eq. (4) to get such values for the \(a_j\)'s that most of the 3-D points will lie on or close to the model's surface. Since for points on the surface of a superquadric \(F(x_w, y_w, z_w; a_1, \ldots, a_{11}) = 1\), we achieve this by minimizing

\[
\sum_{i=1}^{N} \left[\, 1 - F(x_{wi}, y_{wi}, z_{wi};\; a_1, \ldots, a_{11}) \,\right]^2 .
\tag{5}
\]

However, due to self-occlusion the solution to eq. (5) is unbounded, in the sense that an infinite number of superquadric models of different size fit objects like cylinders or parallelepipeds. Obviously only the model with the smallest possible volume that still fits the given points is the desired solution. We want a modified fitting function which has a minimum corresponding to the smallest superquadric that fits a set of 3-D points, and such that the function value for surface points is known before the minimization. Using the function

\[
R = a_1 a_2 a_3 \,(F - 1),
\tag{6}
\]

we fulfill the first requirement with the factor \(a_1 a_2 a_3\), which corresponds to the superquadric size. The second requirement is met by the factor \((F - 1)\), since the function R has value 0 for all points on the surface and does not depend on knowing the correct size. Now we have to minimize

\[
\sum_{i=1}^{N} R^2(x_{wi}, y_{wi}, z_{wi};\; a_1, \ldots, a_{11}).
\tag{7}
\]

Since R is a nonlinear function of the 11 parameters \(a_j\), j = 1, ..., 11, the minimization must proceed iteratively. Given trial values for \(\Lambda\), we evaluate eq. (6) and employ a procedure to improve the trial solution. The procedure is then repeated with new trial values until the sum of least squares (eq. 7) stops decreasing, or the changes are statistically meaningless. Since the first derivatives \(\partial R / \partial a_j\), for j = 1, ..., 11, can be computed, we use the Levenberg-Marquardt method for nonlinear least squares [Press et al., 1986]. The first trial set of parameters, \(\Lambda^0\), must be set experimentally to some initial estimates. We found out that very rough estimates for position, size and orientation are sufficient. Initial estimates for both shape parameters, \(\varepsilon_1\) and \(\varepsilon_2\), can always be 1, while position, orientation, and size can be estimated by computing the center of gravity and moments of inertia for the given 3-D points. During the fitting procedure we introduce "jitter" by adding Poisson distributed noise to the evaluation of function R. Small local minima caused by the complicated topology of the fitting function and the noise in the input data are thus avoided and global convergence is assured [Pentland, 1986b].

Deformed superquadrics can be recovered using the same technique of minimizing the "inside-outside" function (Fig. 1); global deformations like tapering, bending, and twisting require just a few additional parameters. Any shape deformation can be recovered in this way as long as the inverse transformation is available [Bajcsy and Solina, 1987].
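The recovery loop of eqs. (5)-(7) can be sketched with an off-the-shelf Levenberg-Marquardt solver. The sketch below is a simplified illustration under stated assumptions: only size, shape, and position are fitted (the three Euler angles of eq. (3) and the Poisson "jitter" are omitted for brevity), and scipy's `least_squares` stands in for the paper's own implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts):
    """One residual R = a1*a2*a3*(F - 1) of eq. (6) per range point."""
    a1, a2, a3, e1, e2, px, py, pz = params
    q = pts - np.array([px, py, pz])          # translate into the model frame
    f = (np.abs(q[:, 0] / a1) ** (2 / e2) +
         np.abs(q[:, 1] / a2) ** (2 / e2)) ** (e2 / e1) + \
        np.abs(q[:, 2] / a3) ** (2 / e1)
    return a1 * a2 * a3 * (f ** e1 - 1.0)     # zero for points on the surface

def fit_superquadric(pts):
    """Minimize eq. (7), starting from the rough initial estimates
    suggested in the text."""
    c = pts.mean(axis=0)                       # center of gravity -> position
    s = np.maximum(pts.std(axis=0), 1e-3)      # extents -> size estimate
    x0 = np.concatenate([s, [1.0, 1.0], c])    # eps1 = eps2 = 1 initially
    return least_squares(residuals, x0, args=(pts,), method='lm').x
```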
Figure 1: Recovery of a tapered cylinder with an iterative process through which the estimated shape converges to the actual range data. The initial estimate and some of the following iterations (solid lines) are shown superimposed on the superquadric (broken lines) that represents the input data (300 3-D points). A total of 13 model parameters (11 + 2 for tapering) were adjusted simultaneously to achieve a least squares fit. The whole fitting procedure took about three minutes on a VAX 785.

We tested the fitting procedure on synthetic (Fig. 1) and real range data (Fig. 2). The described recovery procedure is fast and stable, in the sense that it always converges to a good approximation of the actual object. We are able to fit all 11 parameters simultaneously and achieve a good fit in just a few iterations (Fig. 3). Speed depends on the number of 3-D points for which the fitting function and its derivatives must be evaluated, on the number of necessary parameters, and on the accuracy of the initial parameter estimates. We investigated the robustness of the minimization procedure by studying the relation between the independent parameters of the fitting function and the sum of least squares (Fig. 4).

III. Recognition

The goal of a vision system for mail piece handling is to classify each mail piece into a class of like objects and report its position, orientation and size, so that appropriate manipulation can be performed. The whole process can be divided into image acquisition, segmentation, model recovery and classification. Model recovery was already described; the rest of this section is devoted to segmentation and classification.

Figure 2: Interpretation of a real range image [Hansen and Henderson, 1986] with superquadric models. On top are the initial model estimates, on the bottom the recovered models after the 12th iteration. Segmentation into individual objects was done by hand.

Figure 3: Rate of convergence (fitting function on a log scale versus the number of iterations, 5 to 30) for the cylinder of Fig. 1. The notch around the 13th iteration is due to the addition of Poisson distributed noise, which pushed the fitting process out of a local minimum and towards a better solution. One iteration using about 50 range points took about 15 seconds on a VAX 785.

A. Scene segmentation

Under the assumption that only single mail pieces are present in the scene, segmentation consists of removing the supporting surface. The remaining range points are then used for model recovery. If several, possibly overlapping, mail pieces are present, segmentation must divide the scene into regions corresponding to single objects. Segmentation is a data driven process and normally applies image formation models like edges, corners, regions, normals, and surfaces to the image. A review of low level range image processing research [Besl and Jain, 1985] reveals that there are two principal approaches: one extracts edges, the other segments surfaces into planar or cylindrical patches. The "edges first" approach is successful when the objects have nice, clear edges. Mail pieces, however, have crumpled edges and beaten corners, and this shape noise degrades the performance of edge finders. Crumpled paper on mail pieces can also mislead a region growing algorithm, causing it to subdivide a single face into a number of small surface patches. Using extracted features, regions corresponding to a single object or part can be hypothesized and verified by model fitting.
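For the single-piece case, one plausible realization of the supporting-surface removal is a sampled plane fit followed by thresholding. The paper does not spell out its segmentation code, so the RANSAC-style loop and the tolerance below are our assumptions:

```python
import numpy as np

def remove_support_plane(pts, n_trials=200, tol=2.0, seed=0):
    """Find the dominant plane among the range points `pts` (N x 3) by
    random sampling and drop the points within `tol` of it; whatever
    remains is passed on to the superquadric recovery."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_trials):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                            # degenerate triple, resample
        n = n / np.linalg.norm(n)
        inliers = np.abs((pts - p0) @ n) < tol  # distance to candidate plane
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return pts[~best]                           # everything off the table
```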
Figure 4: Influence of the inside-outside function parameters on the fitting function for the cylinder in Fig. 2: 2-D plots of the fitting function (log scale) against the product of the size parameters V = a_1 a_2 a_3, the shape parameter ε_1, the shape parameter ε_2, and the second Euler angle. Although all parameters are interdependent, these plots give some insight into the behavior of the inside-outside function. Note that the factor a_1 a_2 a_3 in the modified function R (eq. 6) introduced a new minimum when any of the a's is 0. If the initial values for size are not too badly underestimated, this does not cause a problem.

We are currently investigating the use of superquadric models for segmentation also. The recovery procedure described in the previous section uses a fixed number of range points which are assumed to belong to the same mail piece. Now consider the case where only a very gross segmentation is available, or even no segmentation at all: the whole scene is treated as a large block, and like a sculptor we carve out the objects or parts that make up the scene. The shape of the recoverable parts depends on the capabilities of our models. Superquadric primitives combined with some global deformations can describe a large class of man-made and natural objects [Pentland, 1986a]. The problem can be interpreted as a global minimization problem over the space of model parameters and the number of models. First, we want to recover the model that accounts for the largest number of data points, and then repeat the process for the remaining chunks until an appropriate level of representation for the task at hand is reached. The number of points during model recovery is not fixed. Points that are too far outside the model's surface in the current iteration do not contribute to the estimation of model parameters, while other points, not used in a previous iteration but close enough in the present iteration, are used again. The changing number of points from iteration to iteration must be taken into account when comparing the goodness of fit. (Instead of comparing the sums of least squares directly, we divide each sum first by the number of participating points. The threshold for rejecting points that are too far outside the model's surface is a function of the goodness of fit: the better the fit, the stricter the rejection criteria.)

Classification of mail pieces is necessary because differently shaped mail pieces require different handling. A classification scheme must reflect the shape of mail pieces, but it can also depend on the nature of the automated manipulation (robot arms equipped with grippers or suction pumps, fixed automation). Using the recovered superquadric parameters, different geometric classification schemes can easily be designed; a sketch of one such scheme follows below. For example, the classification currently used for manual handling is:

- letters and flats (a_3 much smaller than a_1, a_2, and ε_1, ε_2 < 1),
- box-like packages (ε_1, ε_2 << 1),
- tubes and rolls (ε_1 << 1 and ε_2 = 1),
- irregular objects (ε_1, ε_2 > 2, global deformations).

Pentland has shown that superquadric primitives can describe a large class of man-made and natural objects [Pentland, 1986a]. We believe that they are appropriate as part-based models, especially for the class of basic categories, since the prototypes and deformations paradigm common in human perception can easily be applied [Bajcsy and Solina, 1987]. At that level, very detailed shape descriptions are not necessary. With a small set of parameters, a large set of primitives can be uniformly handled. Superquadrics model the whole object, including parts hidden by self-occlusion and parts occluded by other objects, by assuming symmetry. Verification, which normally comes as an afterthought, is here an integral part of model recovery.
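As a rough illustration, such a scheme can be driven directly by the recovered parameters. In the sketch below the numeric thresholds and the sorting convention are our own placeholders, not values from the paper:

```python
def classify_mail_piece(a1, a2, a3, eps1, eps2, deformed=False):
    """Map recovered superquadric parameters to the classes used in
    manual handling: letters/flats, boxes, tubes/rolls, irregular."""
    a1, a2, a3 = sorted((a1, a2, a3), reverse=True)  # so that a1 >= a2 >= a3
    if deformed or eps1 > 2 or eps2 > 2:
        return "irregular package"      # pinched shapes or global deformations
    if a3 < 0.1 * a2 and eps1 < 1 and eps2 < 1:
        return "letter or flat"         # one axis much thinner than the others
    if eps1 < 0.5 and abs(eps2 - 1.0) < 0.2:
        return "tube or roll"           # flat ends, round cross-section
    if eps1 < 0.5 and eps2 < 0.5:
        return "box-like package"       # square edges all around
    return "irregular package"
```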
Figure 5: Segmentation by model recovery. The image sequence shows the iterative process through which the estimated shape, based on the non-segmented range image, converges to a model that accounts for the largest part in the scene. The small cube on top of the large one can then be recovered simply by applying the fitting to the remaining range points.

To recover superquadric models, Pentland [Pentland, 1986a] first suggested an analytic solution of the parametric superquadric equations: using linear regression, one could compute parameter values that provide the best fit. Pentland [Pentland, 1986b] currently recovers superquadrics from range data by computing a heuristic "goodness-of-fit" functional in a coarse grain search over the entire parameter space. We believe that, due to its complexity, an analytic solution for superquadric parameters is not practical. A heuristic approach, on the other hand, lacks precision, and global search is computationally expensive. Recovery using the "inside-outside" function and a steepest descent method combined with the addition of Poisson noise has proved to be more efficient. The speed of the fitting procedure depends on the number of range points, the number of function parameters, and the accuracy of the first parameter estimates. Since the "inside-outside" function and its partial derivatives can be evaluated for all range points in parallel, the fitting procedure may be sped up on a parallel architecture.

Segmentation by model recovery looks promising, but more research is in order. Global search is a possible but costly proposal [Pentland, 1986b]. We will investigate whether the method would benefit from using simulated annealing.

References

[Bajcsy and Solina, 1987] R. Bajcsy and F. Solina, "Three Dimensional Shape Representation Revisited," Proceedings ICCV, London, England (June 1987).

[Barr, 1981] A. H. Barr, "Superquadrics and angle-preserving transformations," IEEE Computer Graphics and Applications 1, pp. 11-23 (1981).

[Barr, 1984] A. H. Barr, "Global and local deformations of solid primitives," Computer Graphics 18(3), pp. 21-30 (1984).

[Besl and Jain, 1985] P. Besl and R. Jain, "Range Image Understanding," IEEE Proceedings on Computer Vision and Pattern Recognition (June 1985).

[Gardiner, 1965] M. Gardiner, "The superellipse: a curve that lies between the ellipse and the rectangle," Scientific American (September 1965).

[Hansen and Henderson, 1986] C. Hansen and T. Henderson, "UTAH Range Database," Technical Report UUCS-86-113, Computer Science Department, University of Utah, Salt Lake City (April 1986).

[Owen, 1986] J. Owen, "Characterization of Live Mail," Proceedings USPS Advanced Technology Conference, Washington, DC, pp. 2-22 (1986).

[Pentland, 1986a] A. P. Pentland, "Perceptual Organization and the Representation of Natural Form," Artificial Intelligence 28(3), pp. 293-331 (1986).
[Pentland, 1986b] A. Pentland, "Recognition by parts," SRI Technical Note No. 406, Menlo Park, CA (1986).

[Press et al., 1986] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, Cambridge University Press, Cambridge, England (1986).

[Rosch, 1978] E. Rosch, "Principles of Categorization," in Cognition and Categorization, ed. E. Rosch and B. B. Lloyd, Lawrence Erlbaum, Hillsdale, NJ (1978).
1987
137
591
CLOSED FORM SOLUTION TO THE STRUCTURE FROM MOTION PROBLEM FROM LINE CORRESPONDENCES

Minas E. Spetsakis
John (Yiannis) Aloimonos
Center for Automation Research
University of Maryland
College Park, MD 20742

ABSTRACT

A theory is presented for the computation of three dimensional motion and structure from dynamic imagery, using only line correspondences. The traditional approach of corresponding microfeatures (interesting points: highlights, corners, high curvature points, etc.) is reviewed and its shortcomings are discussed. Then, a theory is presented that describes a closed form solution to the motion and structure determination problem from line correspondences in three views. The theory is compared with previous ones that are based on nonlinear equations and iterative methods.

1. Introduction

The importance of the estimation of the three dimensional motion of a moving object (or of the sensor) from a sequence of images in robotics (visual input to a manipulator, proprioceptive abilities, navigation, structure computation for recognition, etc.) can hardly be overemphasized. Up to now there have been three approaches toward the solution of the problem of computing three dimensional motion from a sequence of images:

1) The first method assumes the dynamic image to be a three dimensional function of two spatial arguments and a temporal argument. Then, if this function is locally well behaved and its spatiotemporal gradients are computable, the image velocity or optical flow may be computed [7][9][31].

2) The second method considers cases where the motion is "large" and the previous technique is not applicable. In these instances the measurement technique relies upon isolating and tracking features in the image through time. These features can be microfeatures (highlights, corners, points of high curvature, interest points) or macrofeatures (contours, areas, lines, etc.). In other words, operators are applied to both images which output a set of features in each image, and then the correspondence problem between these two sets of features has to be solved (i.e., finding which features in both dynamic images are projections of the same world feature). In both of the above approaches, after the optic flow field, the discrete displacement field (which can be sparse), or the correspondence between macrofeatures is computed, algorithms are constructed for the determination of the three dimensional motion, based on the image flow or on the correspondence [29] [1] [23] [18] [5] [6] [8] [13] [3].

(The support of the Defense Advanced Research Projects Agency and the U.S. Army Night Vision and Electro-Optics Laboratory under Contract DAAB07-86-K-F073 is acknowledged.)

3) In the third method, the three dimensional motion parameters are directly computed from the spatial and temporal derivatives of the image intensity function. In other words, if f is the intensity function and (u, v) the optic flow at a point, then the equation \(f_x u + f_y v + f_t = 0\) holds approximately. All methods in this category are based on substitution of the optic flow values, in terms of the three dimensional motion parameters, into the above equation, and there is promising work in this direction [10] [20] [4]. Also, there is work on "correspondenceless" motion detection in the discrete case, where a set of points is put into correspondence with another set of points (the sets correspond, not the individual points) [2].
As the problem has been formulated over the years, one camera is used, and so the number of three dimensional motion parameters that have to be and can be computed is five: two for the direction of translation and three for the rotation.

In this paper we present a theory for the determination of three dimensional motion and structure from line correspondences in three views. A line is represented by its slope and intercept, and not by its endpoints, even if such points exist.

2. Motivation and previous work

The basic motivation for this research is the fact that optical flow (or discrete displacement) fields produced from real images by existing techniques are corrupted by noise and are partially incorrect. Most of the algorithms in the literature that use the retinal motion field to recover three dimensional motion, or are based on the correspondence of microfeatures, fail when the input (retinal motion) is noisy. Some algorithms work reasonably well for images in a specific domain. Some researchers [8] [22] [19] have developed sets of nonlinear equations with the three dimensional motion parameters as unknowns, which are solved by initial guessing and iteration. These methods are very sensitive to noise, as reported in [22] [8]. On the other hand, other researchers [18] have developed methods that do not require the solution of nonlinear systems, but only of linear ones. Despite this, in the presence of noise the results are not satisfactory [18]. Prazdny, Rieger and Lawton presented methods based on the separation of the optical flow field into translational and rotational components, under different assumptions [23] [24]. But difficulties are reported with the approach of Prazdny in the presence of noise [12], while the methods of Rieger and Lawton require the presence of occluding boundaries in the scene, something that cannot be guaranteed a priori. Finally, Ullman in his pioneering work [29] presented a local analysis, but his approach seems to be sensitive to noise, because of its local nature. Several other authors [17] use the optic flow field and its first and second spatial derivatives at corresponding points to obtain the motion parameters. But these derivatives seem to be unreliable when noise is present, and there is no known algorithm that can determine them reliably in real images. At this point it is worth noting that all the aforementioned methods assume an unrestricted motion (translation and rotation). In the case of restricted motion (translation only) some robust algorithms have been reported [14]. All in all, most of the methods presented up to now for the computation of three dimensional motion depend on the value of flow or retinal displacements. Certainly, there does not yet exist an algorithm that can compute retinal motion reasonably (for example with 5% accuracy) in real images [30].

Even if we had some way, however, to compute retinal motion acceptably, say with at most an error of 10%, we believe that all the algorithms proposed to date that use retinal motion as input (and one camera) would still produce non-robust results. The reason is that the motion constraint (i.e., the relation between three dimensional motion and retinal displacements) is very sensitive to small perturbations [27].
The third approach, which computes the motion parameters directly from the spatiotemporal derivatives of the image intensity function, gets rid of the correspondence problem and seems very promising. In [13] [10] [20] the behavior of these methods with respect to noise is not discussed. Of course, research on this topic is still at an early stage, but recent results [11] [21] as well as ongoing work [25] indicate the potential of the approach.

So, as the structure from motion problem has been formulated (for a monocular observer), it seems to be very difficult. A possible solution to this difficulty is as follows: instead of using correspondences between microfeatures such as points, why not try to use correspondences of macrofeatures? In this case, on the one hand the retinal correspondence process will be much easier, greatly reducing false matches, and on the other hand the constraint that relates three dimensional motion to retinal motion will be different and perhaps not as sensitive to the small perturbations resulting from discretization effects. As macrofeatures, we can use lines or contours, since they appear in a rich variety of natural images. The contour based approach has been examined in [2]. Research on the problem of motion interpretation based on line correspondences has been carried out by T. S. Huang and his colleagues [16][15]. There, the problem of three dimensional motion computation has been successfully addressed in the restricted cases of only rotational or only translational motion. In the case of unrestricted rigid motion some good results have been obtained in [15], but the solution is obtained iteratively from a system of nonlinear equations, and convergence of the solution to a unique value is not guaranteed if the initial value that is fed to the iterative procedure is not close to the actual solution.

3. Statement of the problem

The problem we are addressing is to compute the 3-D motion and structure of a rigid object from its successive perspective projections. Since the structure can easily be computed when the motion is known, we will first derive the equation of a 3-D line given the motion parameters and the images of the line in two successive frames. Then, using this, we show how to recover 3-D motion from line correspondences.

The imaging geometry is the usual one: the system OXYZ is the object space coordinate system, with the image plane perpendicular to the optical axis (Z axis) at the point \(o = [0, 0, 1]^T\), the focal length being 1. Let ox, oy be the axes of the naturally induced coordinate system on the image plane (ox parallel to OX, oy parallel to OY). The focal point (nodal point of the eye) is O, and so an object point \([X, Y, Z]^T\) is projected onto the point \([x, y]^T\) on the image plane, where

\[
x = \frac{X}{Z}, \qquad y = \frac{Y}{Z}.
\tag{1}
\]

Finding structure is equivalent to finding the equations of all the 3-D lines of interest. These equations have the following form:

\[
E_i:\; X = A_{xi} Z + B_{xi}, \quad Y = A_{yi} Z + B_{yi}, \qquad i = 1, 2, \ldots
\]

We use as motion parameters the rotation matrix R, representing a rotation around an axis that passes through the origin, and the translation vector T, where

\[
R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \qquad T = [t_x, t_y, t_z]^T,
\]

with \(r_1, r_2\), etc. as defined in [28] [26]; \(n_i\), i = 1, 2, 3, are the direction cosines of the rotation axis. A point \([X, Y, Z]^T\) before the motion is related to itself, \([X', Y', Z']^T\), after the motion by

\[
[X', Y', Z']^T = R\,[X, Y, Z]^T + T.
\]

The above is enough to describe any rigid motion. The images are known 2-D lines of the form

\[
e_{il}:\; y = a_{il}\, x + b_{il}, \qquad i = 1, 2, 3, \ldots, \quad l = a, b, c.
\]

Frames are denoted by letters.
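Under this imaging geometry, the mapping from a 3-D line and a rigid motion to its image lines is easy to state in code. The following is a small numpy sketch of eq. (1) and the motion model (the function names are ours, not the paper's):

```python
import numpy as np

def image_line(Ax, Bx, Ay, By):
    """Project the line X = Ax*Z + Bx, Y = Ay*Z + By through the unit
    focal length pinhole x = X/Z, y = Y/Z and return the image line
    y = a*x + b (assumes Bx != 0)."""
    a = By / Bx
    b = (Ay * Bx - Ax * By) / Bx
    return a, b

def move_line(Ax, Bx, Ay, By, R, T):
    """Apply the rigid motion p' = R p + T to the line and return the
    parameters (Ax', Bx', Ay', By') of the moved line (assumes the
    moved direction is not parallel to the image plane)."""
    d = R @ np.array([Ax, Ay, 1.0])           # moved direction vector
    d = d / d[2]                              # renormalize so that dz = 1
    p0 = R @ np.array([Bx, By, 0.0]) + T      # image of the point at Z = 0
    return d[0], p0[0] - d[0] * p0[2], d[1], p0[1] - d[1] * p0[2]
```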
Also, the lines \(e_{ia}\) and \(e_{ib}\) correspond to the same line \(E_i\) in space. We are also going to use another representation of the lines, in vector form, which, although dual to the equation form, can make the rotation computations look more natural. We represent an image line by the vector normal to the plane defined by the origin and the object line (which also contains the image line): for the image line

\[
e_{il}:\; y = a_{il}\, x + b_{il}, \qquad i = 1, 2, 3, \ldots, \quad l = a, b, c,
\]

the vector form is \(e_{il} = [a_{il}, -1, b_{il}]^T\). We use a displacement and a direction vector to represent the object line:

\[
E_i:\; X = A_{xi} Z + B_{xi}, \quad Y = A_{yi} Z + B_{yi},
\]

or

\[
E_i:\; f_i + Z\, d_i, \qquad \text{with } f_i = [B_{xi}, B_{yi}, 0]^T \text{ and } d_i = [A_{xi}, A_{yi}, 1]^T.
\]

The first problem for which we propose a solution is that of finding structure from motion and line correspondence, i.e., finding the equation, before the motion, of a line in 3-D, given the equations of two successive images of it as well as its motion parameters R, T. We first consider the case of no rotation and then we introduce rotation. The second problem that we will solve is that of finding the motion and structure knowing only the line correspondences over three frames. If we solve the first problem twice, first for frames 1 and 2 and then for frames 1 and 3, clearly we should obtain the same line representation. This is the only constraint on the motion parameters of the problem, and it is enough to solve it if we have an adequate number of line correspondences. (We need a minimum of 6, as pointed out in [16][15], and in order to have linear equations it seems that we need 13.)

4. Structure from motion and correspondence in the pure translation case

A line E: \(X = A_x Z + B_x\), \(Y = A_y Z + B_y\), when translated by \(T = [t_x, t_y, t_z]^T\), becomes

\[
X = A_x Z + B_x + t_x - A_x t_z, \qquad Y = A_y Z + B_y + t_y - A_y t_z.
\tag{2}
\]

The images are known to be

\[
e_a:\; y = a_a x + b_a, \tag{3}
\]
\[
e_b:\; y = a_b x + b_b, \tag{4}
\]

for the two frames respectively. From (2) and the relations of perspective projection (1), by eliminating X, Y, Z, we find that

\[
y = \frac{B_y}{B_x}\, x + \frac{A_y B_x - A_x B_y}{B_x},
\tag{5}
\]

\[
y = \frac{B_y + t_y - A_y t_z}{B_x + t_x - A_x t_z}\, x + \frac{A_y (B_x + t_x - A_x t_z) - A_x (B_y + t_y - A_y t_z)}{B_x + t_x - A_x t_z}.
\tag{6}
\]

By equating the x, y coefficients of (3)-(5) and (4)-(6) we get four equations in four unknowns (the parameters of the 3-D line). Solving them we get only two solutions (one is spurious [26]). The valid one can be written in vector form as

\[
d = \frac{e_a \times e_b}{\hat{z} \cdot (e_a \times e_b)},
\tag{7}
\]

\[
f = \frac{T \cdot e_b}{\hat{z} \cdot (e_a \times e_b)}\, (e_a \times \hat{z}),
\tag{8}
\]

where \(\hat{z}\) is the unit vector along the z axis.

5. Introducing rotation

The general case with both rotation and translation can be derived directly from the pure translation case quite easily. We first establish the following result, which is also used in [16][15]: an image line \(e_a\) (in vector form) of a line in space that is rotating with rotation R around the origin is transformed into the image line \(R \cdot e_a\). PROOF: see [16] [26].

The importance of the above result is that the rotated image can be found without any knowledge about the object line, which implies that no constraint can be derived from the pure rotation case to lead to a solution similar to that in the pure translational case. So we now consider the general case of both rotation and translation. The movement of the line consists of a rotation followed by a translation. So if we rotate the first image \(e_a\) to \(R \cdot e_a\), then we can solve the pure translation case with the image of the first frame being \(R \cdot e_a\) and the image of the second being \(e_b\), and what we get is the object line rotated by R. All we need then is to rotate back by \(R^T\).
This way equations (7), (8) become

\[
d = \frac{e_a \times (R^T e_b)}{\hat{z} \cdot (e_a \times (R^T e_b))},
\tag{9}
\]

\[
f = \frac{T \cdot e_b}{\hat{z} \cdot (e_a \times (R^T e_b))}\, (e_a \times \hat{z}).
\tag{10}
\]

In the above expressions the z components of the vectors are 1 and 0 respectively. This not only makes the duality of the vector and equation forms obvious, but it is also a sufficient property to guarantee that two equal lines are always represented by the same pair of vectors, a fact that we use in the next section.

6. Motion and structure from line correspondences

In the previous section we showed how to compute the structure given the line correspondences and the motion. Here we are concerned with finding motion from line correspondences alone. Given the images of one line in three successive frames (a, b, c), the solution (as a function of the R and T parameters) must be the same for both pairs of frames a-b and a-c. So

\[
f_{ab}(R_a, T_a) = f_{ac}(R_b, T_b),
\tag{11}
\]
\[
d_{ab}(R_a, T_a) = d_{ac}(R_b, T_b),
\tag{12}
\]

where \(e_c\) is the image of the line in the third frame, \(f_{ab}, d_{ab}\) and \(f_{ac}, d_{ac}\) are the representations of eqs. (9)-(10) computed from the corresponding image pairs, and \(T_a, R_a\) and \(T_b, R_b\) represent the translation and rotation for frames a-b and frames a-c respectively. We now simplify these vector equations, since they represent four equations, only two of which are independent. (The proof is omitted here; instead an intuitive explanation is given.) The vector f represents the point where the line cuts the plane z = 0. This point belongs to this plane and to the plane defined by the origin and the image line, which of course contains the object line; so it belongs to their intersection, which we can find from the image alone. Thus, given the x (y) component of the f vector, the y (x) component can be found. This implies that a second equation in f is superfluous. For the d vector we know that it has z = 1 and is orthogonal to the image line vector. The only additional information we need to specify it is one of the other two components; the third can then be found. So we can have only one independent equation in the d vector.

Equations (11), (12) can be expanded, and from them we choose the ones that come from equating the x components of the vectors. There is no reason for this, other than the fact that they lead to simpler equations and are independent. We can write them, using scalar triple products, as eqs. (13) and (14), where \((\cdot, \cdot, \cdot)\) denotes the scalar triple product of vectors. By simplifying the triple products, substituting, and cross multiplying, with

\[
K = R_{a1} T_b^T - T_a R_{b1}^T, \qquad
L = R_{a2} T_b^T - T_a R_{b2}^T, \qquad
M = R_{a3} T_b^T - T_a R_{b3}^T,
\]

where \(R_{a1}\) is the first column of the matrix \(R_a\), etc., we get

\[
a_a\, (e_b^T L\, e_c) + (e_b^T K\, e_c) = 0,
\tag{15}
\]
\[
b_a\, (e_b^T L\, e_c) - (e_b^T M\, e_c) = 0,
\tag{16}
\]

from eqs. (13), (14) respectively. The above equations are nonlinear in terms of the motion parameters but linear in terms of the elements of the matrices K, L, M, and they come from considering just one line. By using 13 lines we can get 26 linear equations, set any one of the 27 elements of the matrices to 1, and solve the resulting 26 x 26 system; then we can find the elements of the K, L, M matrices, which in terms of the motion parameters are

\[
K = \begin{bmatrix}
r_{a1} t_{bx} - r_{b1} t_{ax} & r_{a1} t_{by} - r_{b4} t_{ax} & r_{a1} t_{bz} - r_{b7} t_{ax} \\
r_{a4} t_{bx} - r_{b1} t_{ay} & r_{a4} t_{by} - r_{b4} t_{ay} & r_{a4} t_{bz} - r_{b7} t_{ay} \\
r_{a7} t_{bx} - r_{b1} t_{az} & r_{a7} t_{by} - r_{b4} t_{az} & r_{a7} t_{bz} - r_{b7} t_{az}
\end{bmatrix}
\]

and similarly L, M. In this way it is easy to find the numerical values of the three matrices. By equating their values with the functions of the motion parameters that they represent, we get 27 nonlinear equations involving the motion parameters only. By setting one of the values to 1 we actually set the scale factor of the solution to some value.
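The linear estimation step can then be sketched as follows, assuming the constraint form of eqs. (15)-(16) as given above; in this sketch the scale is fixed by setting the last element of M to 1, whereas the text fixes an arbitrary element:

```python
import numpy as np

def solve_KLM(lines):
    """Estimate the 27 entries of K, L, M from >= 13 line correspondences.
    `lines` is a list of triples (ea, eb, ec), each the vector form
    [a, -1, b] of the line's image in frames a, b, c."""
    rows = []
    for ea, eb, ec in lines:
        aa, ba = ea[0], ea[2]          # slope and intercept in frame a
        q = np.kron(eb, ec)            # coefficients of eb^T X ec, X flattened
        rows.append(np.concatenate([q, aa * q, np.zeros(9)]))   # eq. (15)
        rows.append(np.concatenate([np.zeros(9), ba * q, -q]))  # eq. (16)
    A = np.array(rows)                 # 26 x 27 for 13 lines
    v, *_ = np.linalg.lstsq(A[:, :-1], -A[:, -1], rcond=None)   # fix v[26] = 1
    v = np.append(v, 1.0)
    return v[:9].reshape(3, 3), v[9:18].reshape(3, 3), v[18:].reshape(3, 3)
```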
7. Solving for the motion parameters

Now what we have to do is solve for the motion parameters, given that we know the K, L, M matrices. The procedure to find them is the following: first find the direction of the translation and the directions of the column vectors of the rotation matrices, and then the magnitude of the translation and the polarities of the rotation columns. The second part needs more explanation. It is well known that this family of problems has an inherent ambiguity in the estimation of the translations and absolute positions. These can be found up to a scale factor only, and there is nothing we can do about this. But the magnitude of the translation we compute does not represent anything more than the arbitrary choice of one of the 27 elements of the matrices to be unity. The only thing we need these magnitudes for is their ratio, which is valid since the common scale factor is eliminated. For the rotation columns we don't need to find their magnitude, since it is 1, but we have to find their polarity, which can be found easily. The three matrices can be written as

\[
K = R_{a1} T_b^T - T_a R_{b1}^T,
\tag{17}
\]

and similarly L, M. The eigenvector that corresponds to the eigenvalue zero of the matrix M must be orthogonal to \(T_b\) and to \(R_{b3}\), and the same holds for the other two of them. If we consider the transpose of the matrix M, then the eigenvector is orthogonal to \(T_a\) and to \(R_{a3}\). Let these six vectors be \(f_1, f_2, f_3\) and \(f'_1, f'_2, f'_3\), for the a movement and the b movement and the three matrices respectively. The cross product of any two of them, if they are not collinear, yields the direction of the corresponding translation. The following theorem provides the conditions for this: the direction of translation a can be estimated from the cross product of two of the f's when the following hold: a) \(R_{b1}\) and \(T_b\) are linearly independent; b) the analog of condition (a) with circular substitution of 1, 2 and 3. PROOF: for proof and discussion see [26].

The vectors \(f_1, f_2, f_3\) provide sufficient constraints for the recovery of the rotation column vectors too. The problem for this recovery can be stated as follows: given three vectors \(f_1, f_2, f_3\), find three pairwise orthogonal vectors \(r_1, r_2, r_3\) such that \(f_i\) is orthogonal to \(r_i\) for i = 1, 2, 3. This problem has two solutions in general. Before we present a way to find them, we try to give a more visual description of the problem. The vectors \(f_1, f_2, f_3\) define three planes that meet along a line parallel to the translation vector. Obviously each of \(r_1, r_2, r_3\) belongs to one of these planes. So the problem is equivalent to fitting an orthogonal system into three planes that meet along a line. In order to find the solution it is enough to find the solution for \(r_1\), because \(r_2\) is orthogonal to \(r_1\) and \(f_2\), so it is parallel to their cross product, and its length is known to be unity. The same holds true for the third vector. In order to find \(r_1\) we define a vector k that is supposed to be orthogonal to \(r_1\); the only constraint we have is that

\[
((k \times f_1) \times f_2) \cdot ((k \times f_1) \times f_3) = 0.
\]

The above scalar equation states all the necessary conditions for the problem. There are infinitely many solutions to this equation, and all the nonzero ones are equivalent for our purpose. To find just one we can arbitrarily set two of the components of the k vector to any convenient values. We choose the y, z components to be 1, 0, respectively, because these values are simple and do not, in general, lead to a degenerate solution.
If such a solution is detected (the cross product of \(f_1\) and k should then be zero), the reason might be that the values chosen were bad or that the problem itself is ambiguous. (One such case of ambiguity is when \(f_1, f_2\) are parallel and \(f_3\) is orthogonal to them, and similarly for all its cyclically symmetrical cases.) The first of the two cases is easily detected, and we can then repeat the process by choosing better arbitrary components for the k vector, ones that cannot lead to k vectors parallel to \(f_1\). After the substitutions and the simplifications are done, the equation we get is

\[
\alpha k_x^2 + \beta k_x + \gamma = 0,
\tag{18}
\]

where \(\alpha, \beta, \gamma\) are functions of the components of \(f_1, f_2\), etc. [26]. Equation (18) has two solutions. These give two values of the k vector that lead to two sets of directions for the column vectors of the matrix R. We already know that these columns have length 1, so what remains to be found is their orientations along their axes. The method we present finds one valid set of orientations for only one value of the k vector (the rest of the orientations or values are rejected because either they are not compatible with the initial equations or they lead to rotation matrices with negative determinants). One of the two solutions for the rotation matrix turns out to be spurious, without any physical interpretation and incompatible with the initial equations, but we haven't yet established why there should be only one solution. Yet the spurious one is easily identified, because it leads to an inconsistent linear system.

The magnitude of the translation is computed as follows. Let \(T'_a\) be the unit vector representing the direction of translation a and \(T'_b\) the unit vector representing the direction of translation b. Let \(\rho_{ta}\) and \(\rho_{tb}\) be the corresponding polarities of the translations, and \(R'_{ai}, R'_{bi}\) and \(\rho_{ai}, \rho_{bi}\) the directions and the polarities of the rotation columns i, the polarities taking on only +1 or -1 as their values. If we do the substitutions (\(\rho_{ta} T'_a\) for \(T_a\), etc.) we get that

\[
K = \rho_1\, R'_{a1} T_b'^T - \sigma_1\, T'_a R_{b1}'^T,
\]

and similarly for L, M, where \(\rho_1 = \rho_{tb}\,\rho_{a1}\) and \(\sigma_1 = \rho_{ta}\,\rho_{b1}\), and the same for the rest. The equations above are three systems of linear equations that have more equations than unknowns, and so it is easy to check for incompatibility of the spurious solution. For the other solution we will get unique values for \(\rho_1, \rho_2, \rho_3\), from which we can infer the individual polarities up to an overall sign, since each polarity is either +1 or -1. It is clear that there are two sets of signs that satisfy the above constraints. One corresponds to a left and the other to a right handed coordinate system, only the second being of interest to us. In order to check which one is left-handed, we form the rotation matrices and find their determinants, and we keep the solution that gives the positive determinant. This is the only solution that we get, and our simulations show that indeed it is the correct one.
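The first stage of this recovery, the null vectors of K, L, M and the translation directions from their cross products, can be sketched via the SVD. This is our illustration; the labels follow the reconstruction of eq. (17), and the degenerate configurations covered by the theorem of section 7 are not handled:

```python
import numpy as np

def translation_directions(K, L, M):
    """Return unit direction estimates for the two translations together
    with the six null vectors f_i, f'_i used in section 7."""
    def null_vec(A):
        # right singular vector of the smallest singular value ~ null space
        return np.linalg.svd(A)[2][-1]
    f = [null_vec(X) for X in (K, L, M)]      # each orthogonal to Tb and Rb_i
    fp = [null_vec(X.T) for X in (K, L, M)]   # each orthogonal to Ta and Ra_i
    Tb = np.cross(f[0], f[1]);  Tb /= np.linalg.norm(Tb)
    Ta = np.cross(fp[0], fp[1]); Ta /= np.linalg.norm(Ta)
    return Ta, Tb, f, fp
```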
8. Experiments

We have done several experiments using randomly generated lines and motion parameters. The results were very accurate in the absence of noise. Due to lack of space the results are not reported here; they can be found in [26]. In the case of noise the results are affected. We are currently doing systematic experiments and working on the development of a mathematical theory of the stability of the algorithm.

9. Conclusions

We have presented a method for computing structure and motion from line correspondences. The method, briefly, is as follows: extract 13 lines from the image, approximate their equations, and then form a 26 x 26 matrix to find the elements of the K, L, M matrices. Some preliminary experiments indicate sensitivity to noisy input (by noisy here we mean inaccurate parameters of the image lines, not bad correspondence, since the possibility of the latter type of error is very small). The sensitivity of the solution of the linear system seems to be very high and might cancel the advantage we get by using lines (the parameters of which can be computed with better accuracy than in the case of points). The model for the noise we used wasn't good enough to permit comparison of the point and line correspondence methods.

It is worth noting that the method gives a unique solution in general, unless the lines we choose result in a system with determinant very close to 0, as became evident from the experiments. We are working towards establishing both experimental and theoretical results on the stability of the proposed algorithm and conditions for the uniqueness of the solution.

Acknowledgements

We wish to thank Prof. Rosenfeld for his constructive criticism. Our thanks also go to the great physicist V. A. Basios.

References

1. G. Adiv, "Determining 3-D Motion and Structure from optical flow generated from several moving objects", COINS Tech. Rep. 84-07, July 1984.
2. J. Aloimonos, "Low level visual computations", Ph.D. thesis, Dept. of Computer Science, University of Rochester, August 1986.
3. J. Aloimonos and C. M. Brown, "The relationship between optical flow and surface orientation", Proc. of the 7th ICPR, Montreal, Canada, 1984.
4. J. Aloimonos and C. M. Brown, "Direct Processing of Curvilinear Sensor Motion from a Sequence of Perspective Images", Proc. IEEE Workshop on Computer Vision: Representation and Control, Annapolis, MD, 1984.
5. A. Bandopadhay and J. Aloimonos, "Perception of rigid motion from spatio-temporal derivatives of optical flow", Tech. Rep. 157, Dept. of Computer Science, Univ. of Rochester, 1985.
6. A. Bruss and B. K. P. Horn, "Passive Navigation", CVGIP 21 (1983), 3-20.
7. L. S. Davis, Z. Wu and H. Sun, "Contour Based Motion Estimation", CVGIP 23 (1983), 313-326.
8. J. Q. Fang and T. S. Huang, "Solving three dimensional small-rotation motion equations: Uniqueness, algorithms, and numerical results", CVGIP 26 (1984), 183-206.
9. B. K. P. Horn and B. G. Schunck, "Determining Optical Flow", Artificial Intelligence 17 (1981), 185-204.
10. T. S. Huang, "Three-dimensional motion analysis by direct matching", Proc. Topical Meeting on Machine Vision, Optical Society of America, Lake Tahoe, NV, March 1985.
11. E. Ito and J. Aloimonos, "Computing Transformation Parameters from Images", Proc. IEEE Conference on Robotics and Automation, 1987.
12. C. Jerian and R. Jain, "Determining motion parameters for scenes with translation and rotation", Proc. Workshop on Motion, Toronto, Canada, 1983.
13. T. Kanade, "Camera Motion from Image differentials", Proc. Annual Meeting, Opt. Soc. of America, Lake Tahoe, March 1985.
14. D. Lawton, "Motion analysis via local translational processing", Proc. Workshop in Computer Vision, Rindge, NH, 1982.
15. Y. Liu and T. S. Huang, "Estimation of Rigid Body Motion Using Straight Line Correspondences", Proc. ICPR, Paris, France, October 1986.
16. Y. Liu and T. S. Huang, "Estimation of Rigid Body Motion Using Straight Line Correspondences", IEEE Workshop on Motion: Representation and Analysis, Kiawah Island, SC, May 1986.
17. H. C. Longuet-Higgins and K. Prazdny, "The interpretation of a moving retinal image", Proc. Royal Soc. London B 208 (1980), 385-397.
18. H. C. Longuet-Higgins, "A Computer Algorithm for Reconstructing a Scene from Two Projections", Nature 293 (September 1981), 133-135.
19. H. H. Nagel, "On the derivation of three dimensional rigid point configurations from image sequences", Proc. PRIP, Dallas, TX, 1981.
20. S. Negahdaripour and B. K. P. Horn, "Determining three dimensional motion of planar objects from image brightness patterns", Proc. 9th IJCAI, Los Angeles, CA, 1985, 898-901.
21. S. Negahdaripour, Ph.D. thesis, AI Lab, MIT.
22. K. Prazdny, "Egomotion and Relative Depth Map from Optical Flow", Biol. Cybernetics 36 (1980), 87-102.
23. K. Prazdny, "Determining the instantaneous direction of motion from optical flow generated by a curvilinearly moving observer", CVGIP 17 (1981), 94-97.
24. J. H. Rieger and D. T. Lawton, "Determining the Instantaneous Axis of Translation from Optic Flow Generated by Arbitrary Sensor Motion", COINS Tech. Rep. 83-1, January 1983.
25. M. Spetsakis, "Correspondenceless Motion Detection", forthcoming Tech. Rep., Center for Automation Research, University of Maryland.
26. M. Spetsakis and J. Aloimonos, "Closed Form Solution to the Structure from Motion Problem from Line Correspondences", CAR Tech. Rep. 274, Computer Vision Laboratory, University of Maryland, 1987.
27. R. Y. Tsai and T. S. Huang, "Uniqueness and estimation of three dimensional motion parameters of rigid objects", in Image Understanding 1984, S. Ullman and W. Richards (editors), Ablex Publishing Co., New Jersey, 1984.
28. R. Y. Tsai and T. S. Huang, "Uniqueness and Estimation of Three Dimensional Motion Parameters of Rigid Objects with Curved Surfaces", IEEE Trans. PAMI 6 (January 1984), 13-27.
29. S. Ullman, "The Interpretation of Visual Motion", Ph.D. Thesis, 1977.
30. S. Ullman, "Analysis of visual motion by biological and computer systems", IEEE Computer 14(8) (1981), 57-69.
31. S. Ullman and E. Hildreth, "The Measurement of Visual Motion", in Physical and Biological Processing of Images (Proc. Int. Symp. Rank Prize Funds, London), O. J. Braddick and A. C. Sleigh (editors), Springer-Verlag, September 1982, 154-176.
1987
138
592
Data Validation During Diagnosis: A Step Beyond Traditional Sensor Validation

B. Chandrasekaran and W. F. Punch III
Laboratory for Artificial Intelligence Research
The Ohio State University

Abstract

A well known problem in diagnosis is the difficulty of providing correct diagnostic conclusions in the light of incorrect or missing data. Traditional approaches to solving this problem, as typified in the domains of various complex mechanical systems, validate data by using various kinds of redundancy in sensor hardware. While such techniques are useful, we propose that another level of redundancy exists beyond the hardware level: the redundancy provided by expectations derived during diagnosis. That is, in the process of exploring the space of possible malfunctions, initial data and intermediate conclusions set up expectations about the characteristics of the final answer. These expectations then provide a basis for judging the validity of the derived answer. We will show how such expectation-based data validation is a natural part of diagnosis as performed by hierarchical classification expert systems.

(We gratefully acknowledge the support of grants from the Air Force Office of Scientific Research, AFOSR-82-0255, and the National Science Foundation, CPE-8400840.)

1. Introduction

Diagnosis is the process of mapping system observations into zero or more possible malfunctions of the system's components. Most of the work in AI on diagnosis assumes that the observations given to an expert system are reliable. However, in real-world situations data is often unreliable, and real-world diagnostic systems must be capable of taking this into account just as the human expert must. In this paper, we will discuss a knowledge-based approach to validation that relies on diagnostic expectations, derived from the diagnostic process itself, to identify possibly unreliable data points.

Present-day aids for human experts performing diagnosis attempt to validate data before diagnosis begins. In the domains of various complex mechanical systems (nuclear power plants, chemical manufacturing plants, etc.), such aid is based on the concept of hardware redundancy of sensors. Each important system datum (pressure, temperature, etc.) is monitored with a number of hardware sensors, providing a redundancy of information from which a composite, more reliable value is extracted. Based on this hardware redundancy, a number of techniques were developed to validate a datum's value:

1. Providing multiple sensors of the same kind to monitor a datum. Loss of one sensor therefore does not preclude data gathering, and any disagreements among the sensors can be resolved statistically. For example, to measure the temperature of a chemical reaction, multiple temperature sensors could be used in the reactor and their statistical average given as the overall temperature value.

2. Providing different kinds of sensors to monitor a datum. This situation provides the same redundancy as (1) as well as minimizing the possibility of some kinds of common fault problems. That is, certain events that inactivate one sensor type will not affect sensors of a different type. Continuing with the example of (1) above, half of the sensors might be thermocouples while the other half might be mechanical temperature sensors.

3. Using sensors in several different locations to infer a datum value. In this situation, data values are monitored both directly and inferred from other system data based on well-established relationships. For example, while the temperature of a closed vessel may be directly monitored, it can be inferred from the measurement of the pressure using the PV = nRT equation.
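As a toy illustration of techniques (1) and (3), the sketch below averages redundant direct readings, using a simple median-based outlier rule that we have assumed (the paper does not prescribe one), and folds in a value inferred from PV = nRT:

```python
import numpy as np

def fused_temperature(direct_readings, pressure, volume, moles, R=8.314):
    """Combine redundant direct temperature sensors with a temperature
    inferred from the ideal gas law (all SI units)."""
    x = np.asarray(direct_readings, dtype=float)
    dev = np.abs(x - np.median(x))
    keep = x[dev <= 3.0 * (np.median(dev) + 1e-9)]   # drop gross outliers
    inferred = pressure * volume / (moles * R)       # indirect estimate
    return float(np.mean(np.append(keep, inferred)))
```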
Such hardware redundancy allows some data validation, but this approach has some limitations:

1. The expense of installing and maintaining multiple sensors for each important datum greatly increases the cost of the mechanical system.

2. Common fault failures still happen, despite the cautions mentioned above, especially as the result of severe operation malfunctions.

3. Human operators and engineers resolve many such diagnostic problems despite incorrect and even absent data. In other words, human experts are more tolerant of bad data, whether it has been validated or not.

Therefore, while hardware redundancy does solve part of the problem, more sophisticated techniques are required to complete the job. The following simple example will help in examining point (3) and other ideas. Consider the mechanical system diagrammed in Figure 1, with data values indicated in Figure 2. It is a closed vessel with two subsystems, a cooling system and a pressure relief system. The vessel is a reactor which contains some process (nuclear fission, chemical reactions, etc.) that produces both heat and pressure. The data values of Figure 2 indicate that the temperature of the reactor vessel is above acceptable limits. (Note that the ideas presented here have been used on more complicated real-world systems [5,7]. This example has been condensed from them for expository clarity.)

Assume for the example that two possible causes of the high reactor temperature exist: either the cooling system has failed, or the pressure relief system has failed and the added heat has overpowered the functioning cooling system. Given the sensor readings, what would be the diagnostic conclusion? The data conflict lies between the normal pressure and cooling system readings and the abnormal pressure relief system readings. The failure of the pressure relief system is plausible, since the data indicate its failure and no other system failure, but such a failure leads one to expect the pressure to be high! The step to take is to assume both that the pressure relief system failed and that the pressure sensor is incorrect.

The process shown above demonstrates data validation at a higher level than that of simple sensor hardware validation. In the example, the pressure system has failed despite the lack of a high pressure datum. However, there is other strong evidence that the pressure system has indeed failed. (In the example, this evidence is that there is a failure of the relief valve system, which is part of the pressure system.) The human reasoner expects the pressure datum to be high, since the preponderance of other data indicate a malfunction. That is, the human reasoner, in pursuing likely diagnostic conclusions, discovers a plausible diagnostic conclusion that meets all but (in this case) one expectation. The important points to note are:

1. A diagnostic conclusion can and should be made based on the preponderance of other evidence.

2. The datum value that does not meet expectation should be questioned and further investigation of its true value made.

Note that this process involves redundancy not at the level of sensor hardware, but at the level of diagnostic expectation. This is a redundancy of information that allows questioning (and subsequent validation) of data based on multiple expectations of diagnostic conclusions.
If a conclusion is likely, but not all of its expectations are met, then the now questionable values are investigated by more computationally expensive techniques. Such expectations can be the result of one of a number of processes. Deep models can provide information on expected data patterns for any diagnostic conclusion; from this information, judgments on the reliability of any of the actual data values can be made. Information provided by such deep models can be incorporated into compiled structures that can also provide information on data reliability. Finally, the expert himself can provide the information on data reliability to the diagnosis system, based on his expert judgment of the particular diagnostic process, in effect acting as the deep model for the system.

Figure 1: An Example Mechanical System

In this paper we will discuss compiled diagnostic systems that deal with conflicting data at the level of diagnostic expectation indicated above. These are diagnostic systems that make conclusions based on diagnostic knowledge and some judgment on the validity of the data provided. In particular, we will show how redundancy of diagnostic expectation is a natural extension to the hierarchical classification diagnostic model.
Further note that the sub nodes of any node are more particular kinds of the super node. For example, a Relief Value Failure is a particular kind of Pressure System Failure. Therefore, as one traverses the hierarchy in a top down fashion, one examines more detailed hypotheses about what malfunction has occurred in the system. 4Space limits this paper to the compiled system issues, see reference [91 for a detailed discussion of the deep model issues and computational strategies. 5Though the simple examples of this paper use only a single hierarchy, other work [lOI recognizes that multiple hierarchies may be required to properly represent all system malfunctions. Chandrasekaran and Bunch 779 Each node in the hierarchy has knowledge about the conditions under which the malfunction hypothesis it represents is plausible. Each node of the malfunction hierarchy is therefore a small expert system that evaluates whether the malfunction hypothesis it represents is present given the data. While there are a number of ways this could be accomplished, conceptually what is required is pattern- matching based on data features Each node contains a set of features that are compared against the data. The results of this comparison indicate the likelihood of that particular r alfunction being present. The pattern matching structure of a n de in the CSRL language [l] is called a knowledge group. The knowledge groups compare relevant features against the data and yield a symbolic likelihood. System Failure Pressure System Failllre Cooling System FailIN-e Relief Valve FflilUl-e Valve Control Failure Condenser Failure Feed System FailUl-2 Figure 3: Hierarchy of malfunctions from Figure 1 Consider the knowledge group depicted in Figure 4 taken from the Cooling System Failure node of our example. The first, section represents three queries about a datum value6. Each column of the table underneath represents a possible answer to each question (column 1 to question 1, etc.). The match value assigned to the knowledge group is based on the value located at the end of each row7. In our example when the answer to question 1 is True, the answer to question 2 is either High or Low and regardless of the answer to question 3, row 1 assigns a value of 3 to the knowledge group. The rows of the table are evaluated in order until either a row of queries matches or no row matches and a default value is assigned. Thus, when the data pattern of row 1 exists, the knowledge group (and thus the malfunction) is established at a high level of confidence. Finally, the control strategy of a hierarchical classifier is termed establish-refine. In this strategy, each node is asked to establish how likely the malfunction hypothesis it represents is given the data. The node, using knowledge groups, determines an overall measure of likelihood. If the node establishes, i.e,. the malfunction is judged likely, then each sub of that node are asked to try and establish themselves. If a node is found to be unlikely, then that node is ruled-out and none its subs are evaluated. Consider the example using the data from Figure 2 and the hierarchy of Figure 3. The top node is established since the temperature is high. Each of the subnodes is then asked to establish. Cooling System Failure rules out and none of its subnodes are examined. Pressure Relief Failure establishes and its subs are asked to try and establish themselves. 
This process %n the present CSRL implementation, these values are fetched from a database, though other means may be used, such as calls to deep models, simulations, etc. 71n this case, the values assigned are on a discrete scale from -3 to 3, - 3 representing ruled- out and 3 representing confirmed. r- I : 3 ) Is theTemperature Alarm on? ) What is theTemperature above the Condenser ) What is the Temperature of the Cooling Water Out? Figure 4: A Knowledge Group from Cooling System Failure continues until there are no more nodes to examine. In this way, the most specific malfunctions that can be confirmed are given as the diagnostic conclusion8. 3. Data Validation Two important methods are available for validating data in conjunction with hierarchical classification. First, it is possible to establish a malfunction based on a preponderance of other evidence. If the node can establish but not all the data it expects is present., the data not meeting expectation is subject to question. In the original example, the Pressure Relief System Failure established despite a normal pressure reading based on a preponderance or other evidence. Secondly, intermediate diagnostic conclusions from other nodes provide a context to evaluate data. If the Pressure System Failure does establish, its subs can expect the pressure reading to be abnormal. If it is not, they can also question the pressure reading. In the remainder of this section, we will discuss the following aspects of a data validation system: 1. How data is questioned based on diagnostic expectations. 2. The various methodologies available that could resolve the questionable data. 3. How the normal control flow of diagnostic probiem solving is affected. Discovering a questionable datum involves two steps. First, set some expectations, using local knowledge or the context of other nodes. Second, use those expectations to flag some particular data value as questionable. The expectations of a malfunction are embodied in the knowledge group. The knowledge group mechanism was designed to give a rating of pattern fit to data. If the fit is not as expected, those data values not meeting expectations are identified as questionable. In the example of Pressure Relief Valve Failure, evidence exists that the valve has failed even though the pressure is normal. The lack of fit between data and pattern allow the pressure value to be identified as questionable. Diagnosis continues despite apparent data conflict since enough evidence exists for establishing the malfunction hypothesis. 8Note that if multiple conclusions are reached, then either multiple independent malfunctions have occurred or multiple dependent malfunctions must be resolved into a smaller set [81. 780 Expert Systems Furthermore, the expectations of previous nodes create a context of expectation for the node currently being examined. Consider the example hierarchy of Figure 3. In order to establish the malfunction hypothesis Relief Value Failure, the malfunction hypothesis Pressure System Failure must have established. In the context of considering the Valve Failure, some expectations were created based on the establishment of the Pressure System Failure node and other ancestors. Since these expectations always exist when considering Valve Failure, i.e., you can’t get to Valve Failure without establishing Pressure System Failure, they can be coded into the Valve Failure Node. How expectations are used for the Pressure Relief System Failure node is shown in Figure 5. 
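Before turning to the modified knowledge group itself, here is a hedged sketch of the machinery described so far: table-driven knowledge groups on the -3..3 scale of footnote 7, the establish-refine walk, and (anticipating Figure 5) a per-row list of data values to question. This is our illustration, not CSRL; the wildcard convention, the "establishes at a match value of 2 or more" threshold, and all names are assumptions.

# A sketch of the mechanisms described above -- not the CSRL implementation.
ANY = object()  # wildcard: the row matches regardless of this answer

def evaluate_knowledge_group(rows, answers, default=-3):
    """Try rows in order, as in Figure 4; the first row whose patterns all
    match the answers supplies the match value (-3 ruled out .. 3 confirmed)
    and the datum names to place on the questionable-data list."""
    for *patterns, value, questioned in rows:
        if all(p is ANY or (a in p if isinstance(p, tuple) else a == p)
               for p, a in zip(patterns, answers)):
            return value, list(questioned)
    return default, []

class Node:
    """One malfunction hypothesis: its queries, one knowledge group (real
    nodes may combine several), and more specific sub-hypotheses."""
    def __init__(self, name, queries, rows, children=()):
        self.name, self.queries = name, queries
        self.rows, self.children = rows, children

    def evaluate(self, data):
        return evaluate_knowledge_group(self.rows,
                                        [data[q] for q in self.queries])

def establish_refine(node, data, questionable):
    """Establish-refine: an established node asks its subs to establish;
    a ruled-out node's subtree is never examined.  Returns the most
    specific hypotheses that could be established."""
    value, questioned = node.evaluate(data)
    questionable.extend(questioned)
    if value < 2:                       # hypothetical establishment threshold
        return []
    deeper = [hit for sub in node.children
                  for hit in establish_refine(sub, data, questionable)]
    return deeper or [node.name]

# Roughly the knowledge group of Figure 4: if the temperature alarm is on
# and the temperature above the condenser is High or Low, row 1 assigns 3
# whatever the cooling water temperature is.  A Figure-5-style row such as
# (True, "Normal", ANY, 2, ["pressure"]) would both establish the node and
# place "pressure" on the questionable-data list.
cooling = Node("Cooling System Failure",
               queries=["temp_alarm_on", "temp_above_condenser",
                        "temp_water_out"],
               rows=[(True, ("High", "Low"), ANY, 3, [])])

questionable = []
print(establish_refine(cooling,
                       {"temp_alarm_on": False,
                        "temp_above_condenser": "Normal",
                        "temp_water_out": "Normal"},
                       questionable))  # [] : ruled out, so subs are pruned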
A modification is made to the standard knowledge group of Figure 4 that allows the expert to indicate both a match value for the group and a set of data that do not meet the expectations established at this stage of the problem solving. Thus, Pressure Relief Failure establishes (based on other data features) despite the lack of a change of pressure. However, in establishing the node, one should question why the pressure did not change. This is done by placing the pressure value in the rightmost column of the matching row. If that row is matched, then the match value is returned but the indicated data value is placed in a list of questionable data values which will be examined later. If the match value is high enough, the node establishes despite the existence of conflicting data. That is, if there is enough evidence to show a malfunction despite a conflicting value, the problem solving may continue. However, it may be the case that the value being questioned is of vital importance to establishing the node. The match value will reflect this, the node will not establish, but the data will still be placed on the questionable data list. After an initial run of the problem solver, the questionable data list will consist of data values that did not meet the expectations of some node.

1) Is the Pressure Alarm on?
2) What is the Pressure above the Condenser?
3) Is the Temperature Alarm activated?
Figure 5: Knowledge Group Modified for Data Validation

3.2. Explosion

The knowledge engineer is responsible for providing the node with both feature matching data and datum values that do not meet expectations of the malfunction hypothesis at that point. This, as mentioned previously, is a compiled approach to data validation. It may appear that such a compilation of possibilities will result in a combinatorial explosion. However, not all possible combinations of sensor readings need be encoded; only those situations which the expert deems reasonable or necessary need be placed in the knowledge groups. More importantly, in our approach to the use of knowledge groups, a hierarchy of abstractions is used to go from the data to the classificatory conclusion [3]. Thus, the set of data elements needed for any node in the hierarchy is limited to only those relevant for the malfunction hypothesis it represents. Furthermore, the data elements of each node are subdivided among the knowledge groups that need them. That is, even within the node, the data is further partitioned to only those knowledge groups that will use that data. The knowledge engineer is therefore presented with a much simpler task. Only those combinations of a few data items that present a missed expectation need be encoded into the diagnostic system.

3.4. Control Issues

While sections 3.1 and 3.3 have discussed discovering and resolving possibly invalid data, this section addresses the issues of control flow changes to the normal establish-refine strategy.

1. What happens to data whose values have been proven to be either incorrect or unresolved? If found to be incorrect, the value in the data base must be modified, i.e., the central data base value modified, to indicate the change. If unresolved, it must be flagged as such in hopes of being resolved later.

2. If incorrect data has been found, then it is possible that the problem solver made some mistakes in its diagnosis. It may be necessary to re-run the hierarchy to see if the diagnostic conclusions change. Furthermore, any unresolved data may be resolved as a result of the new information.

3. The basic control strategy would then look like the following cycle:
a. Run the hierarchy, finding questionable values.
b. Run the resolution techniques of section 3.3 to resolve the values if possible.
c. Update the data base with any changes.
This cycle continues until either no data is questioned or the user sees an answer that is satisfactory.

4. Other control strategies are also available. The resolution techniques of section 3.3 could be run as soon as a data item is questioned, i.e., right in the middle of the problem solving. This requires a backtracking scheme that re-runs any other node that used that value. Finally, the operator can be directly involved in changing data values at any step of the process based on his/her expert opinion of the situation.

4. Implementation to Date

The majority of the structures and strategies indicated in the paper have been included in a version of the CSRL [1] tool as it has been applied to diagnosis of Cooling System Accidents in a Boiling Water Nuclear Power Plant. The knowledge groups have been modified as indicated in Figure 5 such that a datum value is questioned whenever it does not meet expectation. A number of scenarios of data conflict concerning the plant have been encoded into this system and more are being added. The control strategies that allow the hierarchy to be re-run until either no data is questioned or the user is satisfied have been implemented. Future work will concentrate on problems of backtracking to prevent re-running the entire hierarchy. With regard to data value resolution, present work is focusing on methodologies for resolving questionable sensors as indicated in Section 3.3. To date, each kind of sensor has associated with it a set of hardware checks that are not normally invoked. Furthermore, if the additional hardware checks do not resolve the problem, the user is presented with the conflict and information on all the nodes that used that value, including whether that value was questioned there or not.

5. Conclusion

While some data validation can be done using the standard techniques of hardware redundancy, a higher level of redundancy based on diagnostic expectations can address more of the issues of conflicting data in a more sensible manner. Intermediate diagnostic conclusions from hierarchical diagnosticians provide expectations that can indicate invalid data values. Note that the method is deceptively simple. This is due to the focus provided by chunking the match and expectation knowledge into the smaller, more manageable parts in the knowledge groups. Furthermore, the hierarchy also acts to simplify the complexity of the match and expectation knowledge due to the context of information provided by the establishment or rejection of parent nodes. Despite this, much power can be gained in both diagnosis and sensor/data validation by use of these simple methods. Some advantages include:

1. Questioning data based on diagnostic expectations provides a way to focus on only some data, as opposed to hardware methods which must worry about all data a priori.

2. Even if a data value is questioned, the diagnostic process can continue if other evidence exists for the failure. Thus the system can work with conflicting data.

3. Such a system integrates well with existing systems that presently rely solely on hardware redundancy by using those values as data for both diagnosis and a higher level of data validation.

4. The programming by a user of such a system is facilitated by existing tools (CSRL) that need only minor modifications.

Acknowledgments

We would like to thank Don Miller, Brian Hajek and Sia Hashemi of the Division of Nuclear Engineering at the Ohio State University for their help in developing and implementing these ideas. Also, one of the authors (Punch) would like to thank Mike Tanner for his invaluable aid in criticizing this paper and sharpening its ideas.

References

1. T. C. Bylander / S. Mittal. CSRL: A Language for Classificatory Problem Solving and Uncertainty Handling. AI Magazine 7 (Summer 1986).
2. B. Chandrasekaran / S. Mittal. Conceptual Representation of Medical Knowledge for Diagnosis by Computer: MDX and Related Systems. In Advances in Computers, M. Yovits, Ed., Academic Press, 1983, pp. 217-293.
3. B. Chandrasekaran. From Numbers to Symbols to Knowledge Structures: Pattern Recognition and Artificial Intelligence Perspectives on the Classification Task. In Pattern Recognition in Practice II, North Holland Publishing, 1986, pp. 547-559.
4. W. J. Clancey. Heuristic Classification. Artificial Intelligence 27, 3 (1985), 289-350.
5. J. F. Davis / W. F. Punch III / S. K. Shum / B. Chandrasekaran. Application of Knowledge-Based Systems for the Diagnosis of Operating Problems. Presented at the AIChE Annual Meeting, Chicago, Ill., 1985.
6. Gomez, F. / Chandrasekaran, B. Knowledge Organization and Distribution for Medical Diagnosis. IEEE Transactions on Systems, Man, and Cybernetics SMC-11, 1 (January 1981), 34-42.
7. S. Hashemi, B. K. Hajek, D. W. Miller, B. Chandrasekaran, and J. R. Josephson. Expert Systems Application to Plant Diagnosis and Sensor Data Validation. Proc. of the Sixth Power Plant Dynamics, Control and Testing Symposium, Knoxville, Tennessee, April, 1986.
8. J. R. Josephson, B. Chandrasekaran and J. W. Smith, M.D. Abduction by Classification, Assembly, and Criticism. Revision of an earlier version called "Abduction by Classification and Assembly" which appears in Proc. Philosophy of Science Association, Vol. 1, Biennial Meeting, 1986.
9. V. Sembugamoorthy / B. Chandrasekaran. Functional Representation of Devices and Compilation of Diagnostic Problem Solving Systems. In Experience, Memory and Reasoning, J. L. Kolodner and C. K. Riesbeck, Eds., Erlbaum, 1986, pp. 47-73.
10. J. Sticklen. MDX2: An Integrated Diagnostic Approach. Dissertation in progress, Ohio State University, 1987.
Building a Community Memory for Intelligent Tutoring Systems

Beverly Woolf and Pat Cunningham†
Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003
†The Hartford Graduate Center, Hartford, Conn 06101

Abstract

This article discusses the need for multiple experts to work together to develop knowledge representation systems for intelligent tutors. Three case studies are examined in which the need for a pragmatic approach to the problem of knowledge acquisition has become apparent. Example methodologies for building tools for the knowledge acquisition phase are described, including specific tasks and criteria that might be used to transfer expertise from several experts to an intelligent tutoring system.

I. A Community Memory

Building intelligent tutoring systems requires community knowledge, i.e., multiple experts working together to encode individual expertise in an intelligent tutor. This knowledge acquisition phase might span months or years. Thus, we need a framework to simplify changing knowledge in the tutors as well as a suite of programming tools for browsing and summarizing knowledge, for tracing and explaining the student model, and for tracking reasoning about teaching strategies. In short, tools and methodologies are needed that can be used specifically for knowledge acquisition activities within an intelligent tutor. In this paper we share our experience of building three intelligent tutors and describe the criteria for, and in some cases, the emerging tools used within this acquisition process.

The concept of a community memory for intelligent tutors reflects the fact that knowledge of tutoring is often distributed, incomplete, and acquired incrementally [Bobrow, Mittal and Stefik, 1986] and thus requires contributions from several experts. This is especially true in tutoring systems because the domain expert, cognitive scientist, and teaching expert are typically not the same person. Given multiple experts who contribute to building the system and the need for a large amount of testing and modification to fine-tune the tutor, completion of a tutor can not be the "final" step in development of a single system, but rather must be a forcing function between the completion of one system and the beginning of another. A completed knowledge base provides grit for our collective grinder, forcing us to further clarify and amplify teaching and learning knowledge and to improve communication between those experts who contribute to it.

1This work was supported in part by the Air Force Systems Command, Rome Air Development Center, Griffiss AFB, New York, 13441 and the Air Force Office of Scientific Research, Bolling AFB, DC 20332 under contract No. F30602-85-C-0008. This contract supports the Northeast Artificial Intelligence Consortium (NAIC). Partial support also from URI University Research Initiative Contract No. N00014-86-K-0764.

Articulating and incorporating communal knowledge into a tutor reveals a great deal about each area of expertise and about the tools used by the experts to perform problem solving in the domain. For example, building the boiler tutor described in Section 2.1 indicated several weaknesses in the tools available to industrial boiler operators. We therefore developed simulation tools, including abstract meters and trends (Figure 1), that might ultimately be integrated into the equipment used by boiler operators.
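As a toy illustration of what such an abstract meter might compute - the paper does not give RBT's actual formulas, so the parameters, target bands, weights, and scoring rule below are invented - consider a single synthetic reading derived from several raw boiler values:

# A hypothetical "abstract meter": one 0..100 synthetic reading combining
# several raw boiler parameters.  Not RBT's real formulas.

def abstract_meter(readings, targets, weights):
    """Score how close each raw reading is to its target band, then
    combine the per-parameter scores into one weighted meter value."""
    total, score = 0.0, 0.0
    for name, value in readings.items():
        lo, hi = targets[name]
        if lo <= value <= hi:
            s = 1.0                                    # inside the band
        else:
            edge = lo if value < lo else hi
            s = max(0.0, 1.0 - abs(value - edge) / edge)  # fades outside
        total += weights[name]
        score += weights[name] * s
    return 100.0 * score / total

# An invented "safety" meter over three invented parameters.
readings = {"steam_pressure": 62.0, "drum_level": 48.0, "o2_percent": 2.4}
targets  = {"steam_pressure": (55, 60), "drum_level": (45, 55),
            "o2_percent": (2, 4)}
weights  = {"steam_pressure": 0.5, "drum_level": 0.3, "o2_percent": 0.2}
print(round(abstract_meter(readings, targets, weights), 1))  # ~98.3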
Similarly, in building a geometry tutor, Anderson, Boyle, and Yost [1985] provided an environment that would be a valuable aid to motivated learners, even without help from any on-line tutor. Anderson introduced visualization and forward and backward reasoning templates that would facilitate geometry problem-solving independent of teaching media.

In the next section, we briefly describe our three intelligent tutors and in Section III indicate some methodologies for how knowledge can be acquired from multiple experts to build additional tutors.

2.1. Teaching Complex Industrial Systems

The first tutor to be discussed is fully implemented, tested, and now used for training in nearly 60 industrial sites across America. The Recovery Boiler Tutor, RBT2, is described elsewhere [Woolf, Blegen, Jansen and Verloop, 1986], and will only be summarized here. It provides multiple explanations and tutoring facilities tempered to the individual user, a control room operator. The tutor is based on a mathematically accurate formulation of the boiler and provides an interactive simulation (Figure 1) complete with help, hints, explanations, and tutoring.

2RBT was built by J. H. Jansen Co., Inc., Steam and Power Engineers, Woodinville (Seattle) Washington and sponsored by The American Paper Institute, a non-profit trade institution for the pulp, paper, and paperboard industry in the United States, Energy Materials Department, 260 Madison Ave., New York, NY, 10016.

Figure 1: Several Views of the Recovery Boiler Tutor (simulation screens with meters for liquor flow, feedwater flow, steam flow, air flow, steam pressure, drum level, and O2)

The tutor challenges operators to solve boiler emergencies while monitoring their actions and advising them about the optimality of their solutions. The tutor recognizes less than optimal and clearly irrelevant actions and modifies its response accordingly. Operators can continue their freewheeling or purposeful problem-solving behavior while the tutor offers help, hints, explanations, and tutoring advice when needed or when requested. Operators gain experience in recognizing the impact of their actions on the simulated boiler and learn to react before the tutor advises them regarding potential problems.

Meters, as shown on the left side of screens in Figure 1, record the state of the boiler using synthetic measures for safety, emissions, efficiency, and reliability of the boiler. The meter readings are calculated from complex mathematical formulas that would rarely (if ever) be used by operators to evaluate the boiler. The meters have already proved effective as training aids in industrial training sites and could possibly be incorporated into actual control panels.

Operators have reported using the system as much as 70 hours in three months to practice solving emergencies. They handle the simulation with extreme care, behaving as they might if they were in actual control of the pulp mill panel, slowly changing parameters, checking each action, and examining several meter readings before moving on to the next action.

2.2. Caleb for Teaching a Second Language

Our second intelligent tutor teaches languages based on a powerful pedagogy called the "silent way" - a method developed by Caleb Gattegno. The system uses non-verbal communications within a controlled environment to teach Spanish [Cunningham, 1986].
It uses graphical Cuisenaire rods3 to generate linguistic situations in which the rod plays various roles. For example, it is used as an object to be given or taken by a student, or it is used to brush teeth. As a new rod is presented, the student theorizes about what situation is encountered and types the appropriate phrase below the picture. In the case illustrated at the top of Figure 2, the tutor presents a rod in the center box. The student responds by typing the word for the new piece at the cursor. In the bottom figure, the tutor corrects a student who places an adjective before rather than after a noun. In this exercise, students might have classified the word "blanca" as an adjective referring to the size of the rod before knowing its meaning. The tutor does not clarify students' conjectures. Students can later change a hypothetical definition if in fact the new word turns out to define the color of the rod. Meanwhile, they will have learned to write the word, spell it, and place it correctly in a sentence.

2.3. ESE for Teaching Physics

A third tutor is now in the early implementation stage. It is part of a program to develop interactive and monitored simulations to teach physics at the high school or college level.4 One of these tutors teaches the second law of thermodynamics5 and provides a rich environment at the atomic level through which the principles of equilibrium, entropy, and thermal diffusion can be observed and tested [Atkins, 1982]. Students are shown (and are able to construct) collections of atoms that transfer heat to other atoms through random collision (see Figure 3). They can create areas of high-energy atoms, indicated by dark squares, along with variously shaped regions within which the high energy atoms can be monitored. Concepts such as temperature, energy density, and thermal equilibrium can be plotted against each other and against time.

3Originally developed by Gattegno for teaching arithmetic.
4These tutors are being built by the Exploring Systems Earth (ESE) consortium, a group of three universities working together to develop intelligent tutors. The schools include the University of Massachusetts, San Francisco State University, and the University of Hawaii.
5The second law states that heat cannot be absorbed from a reservoir and completely converted into mechanical work.

Figure 2: Caleb: A System for Teaching Second Languages

The tutor uses all student activities - including questions, responses, and requests - to formulate its next teaching goal and activity. It uses student actions to determine whether to show an extreme or near-miss example, whether to give an analogy or whether to ask a question. To refine the tutor's response, we are now studying student misconceptions and common errors in learning thermodynamics and statistics.

III. Tools for Knowledge Acquisition

Given the complex heterogeneous nature of the knowledge required to build each of these systems, we need methodologies and tools to transfer teaching and learning knowledge from human experts to systems under construction. Few such tools exist.

Figure 3: Systems Moving Towards Equilibrium

Expert system shells contain a framework for building knowledge bases about concepts and rules and for making inferences about them. However, they are limited as specific tools for designing and storing tutoring knowledge. They are frequently based on production rules and are limited in representing history and dependency of the tutoring interaction.
Also, they inadequately represent tutoring and misconception knowledge such as how to reason about teaching strategies, how to update and assess student models, how to select a path through domain concepts, and how to remediate for misconceptions. In this section, we describe the criteria for developing tools specific to this knowledge acquisition process.

A. Environment Expert

The first expert needed to build an intelligent tutor is the environmental expert. This person often uses a majority of system memory [Bobrow, Mittal and Stefik, 1986] to provide an envelope within which students and system interact. The environment provides specific tools and operators for solving domain problems or for performing domain activities.

Environmental, teaching, cognitive, and domain expert contributions interact strongly with each other - especially those from the environmental expert. For example, a system that asks students to record entrance and exit angles for light in an optics experiment assumes that the environment supplies such measuring devices.

The following criteria for developing a tutoring environment have begun to emerge:

1) Environments should be intuitive, obvious, and fun. Student energy should be spent learning the material, not learning how to use the environment [Cunningham, 1986]. For example, to indicate errors, express feelings or convey meaning, the second-language tutor's visual activities mimic the human Silent Way teacher's gestures, facial expressions, and rods.

2) Environments should record not only what students do, but what they did, intended to do, might have forgotten to do, or were unable to do [Burton, in press]. Environments should provide a "wide bandwidth" within which multiple student activities can be entered and analyzed. For example, the Pascal tutor developed by Johnson and Soloway [1984] processed and analyzed an entire student program before offering advice.

3) Environments should be motivated by teaching and cognitive knowledge about how experts perform tasks and the nature of those tasks. For example, Anderson [1981] performed extensive research with geometry students before developing his geometry tutor interface, and Woolf et al. [1986] incorporated knowledge from experts with more than 30 years experience working with boiler operations before building the RBT interface.

4) Environments must maintain physical fidelity6 [Hollan, Hutchins and Weitzman, 1984]. The RBT tutor presents a mathematically exact duplicate of the industrial process. It models and updates over 100 parameters every two seconds. Visual components of the industrial process, such as alarm boards, control panels, dials, and reports, are duplicated from the actual control room.

5) Environments should be responsive, permissive, and consistent [Apple, 1985]. They should target applications based on skills that people already have, such as moving icons, rather than forcing people to learn new skills. By responsive, we mean that student actions should have direct results - that students need not perform rigid sets of actions in rigid and unspecific order to achieve goals. By permissive, we mean that students may do anything reasonable and that multiple ways should exist for taking action. By consistent, we mean that moving from one application to another (for example, from editing text to developing graphics) should not require learning new interfaces.
All tools should be based on similar interface devices, such as pull-down menus or single and double mouse clicks.

No one environment is appropriate for every domain. We must study each domain to determine how experts function in that domain, how novices might behave differently, and how novices can be helped to attain expert behavior.

B. Teaching Expert

Acquiring sufficient and correct teaching expertise is a long-term problem for builders of tutoring systems - in part, because sophisticated knowledge about learning, teaching, and domain knowledge remains an active area of research in most domains. Teaching expertise includes decision logic and rules that guide the tutor's intervention with the student. Tools to facilitate teasing apart and encoding teaching knowledge are just beginning to emerge. For example, we have developed a framework for managing discourse in an intelligent tutor [Woolf and Murray, 1987] that reasons dynamically about discourse, student response, and tutor moves.

The framework (Figure 4) reasons about which pedagogical response to produce and which alternative discourse move to make. It custom-tailors the tutor's response in the form of examples, analogies, and simulations. Discourse schemas, or collections of activities and response profiles, are responsible for actually generating system actions and for interpreting student behavior. The number and type of schemas used is dependent on context. We used empirical criteria to define discourse schemas: tutoring responses were analyzed from empirical studies of teaching and learning and from general rules of discourse structure [Grosz and Sidner, 1985]. The framework is flexible and domain-independent; it is designed to be rebuilt - decision points and machine actions are modifiable for fine-tuning system response.

We are now using this framework to improve the physics tutor's response to idiosyncratic student behavior. Response decisions and machine actions, explicitly represented in the system, can be modified through an editor. Appropriate machine response can be assessed continuously and improved. In the long term, we intend to make this reasoning process available to human teachers, who can then modify the tutor for use in a classroom.

No single teaching strategy is appropriate for every domain. For example, Anderson et al. [1985] built geometry and Lisp tutors that responded immediately to incorrect student answers. These authors argued that immediate computer feedback was needed to avoid fruitless student effort.

This pedagogy was opposite to that used by Cunningham [1986] and Woolf et al. [1986]. These latter tutors' advice was passive, not intrusive. The strategy was to subordinate teaching to learning, and to allow students to experiment while developing hypotheses about the domain. The tutors guided their students toward developing their own intuitions, but did not correct them so long as their performance appeared to be attaining a precise goal.

In industrial settings, particularly, trainees must learn to generate multiple hypotheses and to evaluate their own performance based on how their actions affect the industrial process. For example, no human tutor is available during normal boiler operation.

C. Cognitive Expert

At present, the role of the cognitive scientist is incompletely understood; in part, this expert seeks to discover how people learn and teach in a given domain.
For example, cognitive science research in thermodynamics will enable systems to recognize common errors, tease apart probable misconceptions, and provide effective remediation. Cognitive science research provides the tutor with a basis for selecting instructional strategies. The importance of addressing common errors and misconceptions in physics is well documented, and the tutor's intelligence hinges on making that knowledge explicit.

6Fidelity measures how closely simulated environments match the real world. High fidelity identifies a system as almost indistinguishable from the real world.

Figure 4: A Framework for Managing Tutoring Discourse

We want a tutoring system to help students generate those hypotheses that are necessary precursors to expanding their intuition, developing their own models of the physical world, and discovering and "listening to" their own scientific intuitions. To do this, we rely on work done by cognitive scientists who study how students reason about qualitative processes, how teachers impart propaedeutic principles (or the knowledge needed for learning some art or science) [Halff, in press], and what tools are being used by experts working in the field.

For example, the cognitive science experiments that must be performed to build our thermodynamics tutor include (1) investigation of real-world tools currently used by physicists, (2) examination of studies that focus on cognitive processes used by novices and experts, and (3) comparison of novice with expert understanding of thermodynamics.

RBT articulates cognitive knowledge by explicitly recording student attempts to solve emergencies. It shows students their false paths and gives reasons behind particular rule-of-thumb knowledge used to solve problems. RBT also provides students with various examples from which they can explore problem-solving activities - perhaps in time showing students their own underlying cognitive processes. By using such knowledge, a tutor can begin to help students learn how to learn.

D. Domain Expert

An in-house domain expert is critical to building an intelligent tutoring system. By "in-house", we mean that the domain expert must join the project team for anywhere from six months to several years while domain knowledge is being acquired. Any less commitment than that of full-fledged team member suggests a less than adequate transfer of domain knowledge.

In the tutors described above, the domain experts were (and are) integral to the programming effort. The programmer, project manager, and director of RBT were themselves chemical engineers. More than 30 years of theoretical and practical knowledge about boiler design and teaching strategies were incorporated into the system. Development time for this project would have been much longer than 18 months if these experts had not previously identified the boiler's chemical, physical, and thermodynamic characteristics and collected examples of successful teaching activities.

The second language tutor was developed by a person who holds a graduate degree in teaching English as a second language and has spent more than 7 years using the Silent Way to teach intensive English courses to foreigners living in America and to teach Nepali to American Peace Corps volunteers living in Nepal.
Based on the numerous expert systems projects, the following criteria for acquiring domain knowledge are well understood:

1) Domain experts should be true experts - if possible, the best in the field [Bobrow, Mittal and Stefik, 1986].

2) Domain experts are expensive. Gaining the attention of knowledgeable people is expensive and time consuming. However, the willingness and availability of such experts to participate is critical to the knowledge-engineering process. Assigning the task to a person of lesser ability (or worse, to persons with "time on their hands") might doom a project to failure.

3) Individual domain experts may have incomplete knowledge or conceptual vacuums; therefore multiple experts are needed for testing and modifying domain knowledge throughout the tutor's life.

4) Similarly, domain knowledge can be overly distributed and spread so diffusely among different experts as to leave severely restricted any system that uses only a single expert [Bobrow, Mittal and Stefik, 1986]. Thus domain knowledge must be acquired incrementally and must be prototyped, refined, augmented and reimplemented. The time needed to build a tutoring system "should be measured in years, not months, and in tens of worker-years, not worker-months" [Bobrow, Mittal and Stefik, 1986].

5) Domain knowledge as found in textbooks is incomplete and idealized [Bobrow, Mittal and Stefik, 1986]. Textbooks rarely contain the commonsense knowledge - the know-how used by expert tutors or professionals in the field - to help choose another teaching strategy or solve difficult problems. Books tend to present clean, uncomplicated concepts and results. To teach or solve real-world problems, tutors must know messy but necessary details of real or perceived links between concepts and unpublished rules of teaching and learning.

Communities of experts are needed to provide a focus for articulating distributed knowledge in an intelligent tutor. The resultant machine tutor should include recent as well as historical research about thinking, teaching, and learning in the domain. Evaluating such an articulation would, in itself, contribute to education - and ultimately to communication between experts.

Compiling diverse research results from environmental, teaching, cognitive, and domain experts is currently hampered by lack of explicit tools to help authors transfer their knowledge to a system. Based on criteria set out above, we intend to continue to develop and integrate knowledge acquisition tools to facilitate assimilation of teaching and learning knowledge into intelligent tutors.

References

[Anderson, 1981] Anderson, J., Tuning of Search of the Problem Space for Geometry Proofs, International Joint Conference on Artificial Intelligence, British Columbia, 1981.

[Anderson, Boyle, and Yost, 1985] Anderson, J., Boyle, C., and Yost, G., The Geometry Tutor, in Proceedings of the International Joint Conference on Artificial Intelligence, Los Angeles, 1985.

[Apple, 1985] Apple Corp., Inside Macintosh, Vol. 1, Addison-Wesley, Reading, Mass., 1985.

[Atkins, 1982] Atkins, T., The Second Law, Freeman Publishers, San Francisco, CA, Scientific American Series, 1982.

[Bobrow, Mittal and Stefik, 1986] Bobrow, D., Mittal, S., and Stefik, M., Expert Systems: Perils and Promise, in Communications of the ACM, Vol. 29, No. 9, 1986.

[Burton, in press] Burton, R., Instructional Environments, in Polson, M. and Richardson, J. (Eds.),
Foundations of Instructional Tutoring Systems, Lawrence Erlbaum Associates, Hillsdale, NJ, in press.

[Cunningham, 1986] Cunningham, P., Caleb: A Silent Second Language Tutor: The Knowledge Acquisition Phase, Master's thesis, Tech. Report #87-6, Rensselaer Polytechnic Institute, Troy, NY, 1986.

[Grosz and Sidner, 1985] Grosz, B., and Sidner, C., The Structures of Discourse Structure, Proceedings of the American Association of Artificial Intelligence, 1985.

[Johnson and Soloway, 1984] Johnson, L. and Soloway, E., Intention-Based Diagnosis of Programming Errors, in Proceedings of the American Association of Artificial Intelligence, AAAI-84, Austin, TX, 1984.

[Halff, in press] Halff, H., Curriculum and Instruction in Automated Tutors, in Polson, M. and Richardson, J. (Eds.), Foundations of Instructional Tutoring Systems, Lawrence Erlbaum Associates, Hillsdale, NJ, in press.

[Hollan, Hutchins and Weitzman, 1984] Hollan, J., Hutchins, E., and Weitzman, L., STEAMER: An Interactive Inspectable Simulation-based Training System, in The AI Magazine, 1984.

[Mittal and Dym, 1985] Mittal, S. and Dym, C., Knowledge Acquisition from Multiple Experts, in AI Magazine, 6(2), 1985.

[Woolf and Murray, 1987] Woolf, B., and Murray, T., A Framework for Representing Tutorial Discourse, International Joint Conference on Artificial Intelligence, Milan, Italy, 1987.

[Woolf, Blegen, Jansen and Verloop, 1986] Woolf, B., Blegen, D., Jansen, J., and Verloop, A., Teaching a Complex Industrial Process, Proceedings of the National Association of Artificial Intelligence, Philadelphia, PA, 1986.

[Woolf and McDonald, 1984] Woolf, B., and McDonald, D., Design Issues in Building a Computer Tutor, in IEEE Computer, 1984.
MU: A Development Environment for Prospective Reasoning Systems

Paul R. Cohen, Michael Greenberg, and Jefferson DeLisio
Experimental Knowledge Systems Laboratory
Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003

Abstract

We describe a style of problem solving, prospective reasoning, and a development environment, MU, for building prospective reasoning systems. Prospective reasoning is a form of planning in which knowledge of the state of the world and the effects of actions is incomplete. We illustrate one implementation of prospective reasoning in MU with examples from medical diagnosis.

I. Introduction

MU is a development environment for knowledge systems that reason with incomplete knowledge. It has evolved from a program called MUM that planned diagnostic sequences of questions, tests, and treatments for chest and abdominal pain [Cohen et al., 1987]. This task is called prospective diagnosis, because it emphasizes the selection of actions based on their potential outcomes and the current state of the patient. Prospective diagnosis is uncertain because the precise outcomes of actions cannot be predicted, in part because knowledge of the state of the patient is incomplete. Yet we have found that physicians have rich strategic knowledge with which they plan diagnoses in spite of their uncertainty. MU does not provide a knowledge engineer with any particular strategies, but rather provides an environment in which it is easy to acquire, represent, and experiment with a wide variety of strategies for prospective diagnosis and other prospective reasoning tasks.

Three goals underlie our research and motivate the MU system. First, MU is intended to provide knowledge-engineering tools to help acquire expert problem-solving strategies. MU allows us to define explicit control features, which are the terms an expert uses to discuss strategies. Control features in medical diagnosis include degrees of belief in disease hypotheses, monetary costs of evidence, the consequences of incorrect conclusions, and "intangibles" such as anxiety and discomfort. Some, like degrees of belief, have values that change dynamically during problem solving. MU helps the knowledge engineer define the functions that compute these dynamic values and keeps the values accessible during problem solving. For example, with MU we can easily define a control feature called criticality in terms of two others, say dangerousness and degree of belief, and acquire a function for dynamically assessing the criticality of a hypothesis as its degree of belief changes.

1We thank Carole Beal for many helpful comments on drafts of this paper. This research is funded by DARPA/RADC Contract F30602-85-C-0014 and by NSF Grant IST 8409623.

Second, we want to show that strategies enable a prospective reasoning system to produce solutions that are efficient in the sense of minimizing the costs of attaining given levels of certainty. MU has no "built in" problem solving strategies, but we have been able to acquire and implement efficient, expert strategies in MU because we can define explicit control features that represent the various costs of actions, as well as the levels of certainty in the evidence produced by actions.

Third, we want to implement in MU a task-level architecture for prospective reasoning [Gruber and Cohen, 1987], an environment for building systems that plan efficient sequences of actions, despite uncertainty about their outcomes.
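For concreteness, here is a minimal sketch of the kind of control-feature definition just described. MU itself was a Lisp environment; the value scales and the combining rule below are invented illustrations, not MU's.

# A hypothetical control feature "criticality" computed dynamically from
# two other features, dangerousness and degree of belief.

DEGREES = ["disconfirmed", "detracted", "unknown", "supported", "confirmed"]

def criticality(dangerousness, degree_of_belief):
    """Invented combining function: a hypothesis is critical in proportion
    to how dangerous it is and how strongly it is currently believed."""
    belief_rank = DEGREES.index(degree_of_belief)   # 0 .. 4
    if dangerousness == "high" and belief_rank >= 3:
        return "critical"
    if dangerousness == "high" or belief_rank >= 3:
        return "watch"
    return "routine"

# As the degree of belief in angina changes, its criticality is recomputed.
print(criticality("high", "unknown"))    # -> watch
print(criticality("high", "supported"))  # -> critical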
After working in the domains of medicine and plant pathology, we now think that many control features pertain to diagnostic tasks in general. Moreover, diagnos- ticians in many fields seem to use similar strategies to solve problems efficiently. This view is influenced by the recent trend in AI toward defining generic tasks [Chandrasakeran, 19861 such as classification [Clancey, 19851 and the archi- tectures that support their implementation. MU shares the orientation toward explicit control efforts such as BB* [Hayes-Roth, 1985, Hayes-Roth et al., 19861 and Heracles [Clancey, 19861 but emphasizes control features that are appropriate for prospective reasoning. In sum, MU is a tool for representing and providing access to the knowledge that underlies efficient prospective reasoning. This paper begins with an analysis of prospec- tive reasoning, then describes the MU environment first as a program, emphasizing its structure and function, then from the perspective of the knowledge engineer who uses it. As an illustration, we describe how MUM was reimple- mented in MU. We conclude with a summary of current work. 0 eas Prospective reasoning is reasoning about the question “What shall I do next,” given that 1. knowledge complete, about the current state of the world is in- Cohen, Greenberg, and DeLisio 783 From: AAAI-87 Proceedings. Copyright ©1987, AAAI (www.aaai.org). All rights reserved. 2. the outcomes of actions are uncertain, 3. there are tradeoffs between the costs of actions with respect to the problem solver’s goals and the utility of the evidence they provide, 4. states of knowledge that result from ence the utility of other actions. actions can An example characteristics: from medical diagnosis illustrates these influ- A middle-aged man reports episodes of chest pain that could be either angina or esophageal spasm; the physician orders an EKG, but it pro- vides no evidence about either hypothesis; then he prescribes a trial prescription of vasodilators; the patient has no further episodes of pain, so the physician keeps him on long-acting vasodila- tors and eventually suggests a modified stress test to gauge the patient’s exercise tolerance. The first and second characteristics of prospective rea- soning are clearly seen in this case: Knowledge about the state of the patient is incomplete throughout diagnosis, and the outcomes of actions (the EKG, trial therapy, stress test) are uncertain until they are performed and are some- times ambiguous afterwards. Less obvious is the third characteristic, the tradeoffs inherent in each action. Sta- tistically, an EKG is not likely to provide useful evidence, but if it does, the evidence will be completely diagnostic. The EKG is given because its minimal costs (e.g., time, money, risk, and anxiety) are offset by the possibility of obtaining diagnostic evidence2. Similarly, trial therapy satisfies many goals; it protects the patient, costs little, has few side-effects and, if successful, is good evidence for the angina hypothesis. The fourth characteristic of prospective reasoning is that states of knowledge that result from actions can af- fect the utility other actions. This is because the costs and benefits of actions are judged in the context of what is already known about the patient. For example, trial ther- apy is worthwhile if the EKG does not produce diagnostic evidence, but is redundant otherwise. The outcome of an EKG thus affects the utility of trial therapy. 
This implies a dependency between the actions, and suggests a strat- egy: do the EKG first because, if it is positive, then trial therapy will be unnecessary. Dependencies between actions help the prospective reasoner to order actions. We call this planning, though it is not planning in the usual AI sense of the word [Sacer- doti, 1979, Cohen and Feigenbaum, 19821. The differences are due to the first and second characteristics of prospec- tive reasoning: the state of the world and the effects of actions are both uncertain. The prospective planner must “feel its way” by estimating the likely outcomes of one or more actions, executing them, then checking whether the actual state of the world is as expected. Plans in prospec- tive reasoning tend to be short. In contrast, uncertainty is excised from most AI planners by assuming that the initial state of the world and the effects of all actions are com- pletely known (e.g., the STRIPS assumption, [Fikes, Hart, and Nilsson, 19721. AI planners can proceed by “dead- reckoning,” because it follows from these assumptions that every state of the world is completely known. All further discussions of planning in this paper refer to the “feel your way” variety, not to “dead reckoning.” Prospective diagnosis requires a planner to select ac- tions based on their costs and utility given the current state of knowledge about the patient. We have described prospective reasoning as planning because the evidence from one action may affect the utility of another. Alter- natively, prospective reasoning can be viewed as a series of decisions about actions, each conditioned on the cur- rent state of knowledge about the patient. We consid- ered decision analysis [Raiffa, 1970, Howard, 19661 as a mechanism for selecting actions in prospective reasoning, but rejected it for two reasons. First, collapsing control features such as monetary expense, time, and criticality into a single measure of utility negates our goals of ex- plicit control and providing a task-level architecture for prospective reasoning [Cohen, 1985, Gruber and Cohen, 19871. Second, decision analysis requires too many num- bers - a complete, combinatorial model of each decision. The expected utility of each potential action can only be calculated from the joint probability distribution of the possible outcomes of the previous actions. Hut although we do not implement prospective reasoning with decision analysis, MU is designed to provide qualitative versions of several decision-analytic concepts, including the utility of evidence and sensitivity analysis. verview A coarse view of MU’s structure reveals these components: o a frame-based representation language, o tools for building inference networks, e an interface for defining control features and the func- tions that maintain their values, 8 a language for asking questions about the state of a problem and how to change its state. * a user interface solving, for acquiring data during problem- With these tools, a knowledge engineer can build a knowledge system with a planner for prospective reason- ing. MU does not “come with” any particular planners, but it provides tools for building planners and incorporat- ing expert problem-solving strategies within them. Among MU’s tools is an editor for encoding domain inferences, such as if EKG shows ischemic changes then angina is confirmed, in an inference network. 
MU does not dictate what the nodes in the inference network should rep- resent, except in the weak sense that nodes “lower” in the 2This example oversimplifies the reasons for giving an EKG, but not the cost/benefit analysis that underlies the decision. 784 Expert Systems network - relative to the direction of inference - provide evidence for those “higher” up. However, the nodes in the network are usually differentiated; for example, in Figure 1 some nodes represent raw data, others represent combina- tions of data (called clusters), and a third class represents hypotheses. In the medical domain, data nodes represent individual questions, tests, or treatments. Clusters com- bine several data; for example, the risk-factors-for-aPtgina cluster combines the patient’s blood pressure, family his- tory, past medical history, gender, and so on. Hypothesis nodes represent diseases such as angina. Since MU does not provide a planner, the knowledge engineer is required to build one. The planner should an- swer two questions: e Which node(s) in the network should be in the focus set, and which of these should be the immediate focus of attention? e Which actions are applicable, given the focus set, and which of these should be taken? For example, in the medical domain the focus set might in- clude all disease hypotheses that have some support, and the immediate focus of attention might be the most dan- gerous one. The potential actions might be the leaf nodes of the tree rooted at the focus of attention (Fig. I), and the selected action might be the cheapest of the potential actions. An Inference Net in MU I C.F. = Combining Function Figure P: Organization of Knowledge Within MU MU provides an interface to help the knowledge engi- neer define control features such as the degree of belief in hypotheses, the dangerousness of diseases, and the costs of diagnostic actions. It also provides a language with which a planner can query the values of features and ask about actions that would change those values. IPlanners can ask, for example, “What is the current level of belief in angina?” or “Tell me all the inexpensive ways to increase the level of belief in angina,” or even the hypothetical ques- tion, “Would the level of belief in angina change if blood pressure was high?” The relationship between these functions of MU and the functions of a planner are shown in Figure 2. Us- ing MU, a knowledge engineer can: define a control fea- ture such as criticality in terms of other features such as dangerousness and degree of belief; specify a combining function for calculating dynamically the value of critical- ity from these other features during problem solving; asso- ciate criticality and its combining function with a class of nodes, such as diseases, and have each member of the class inherit the definitions; and write a planner that encodes an expert strategy for dealing with critical or potentially- critical diseases. MU facilitates the development of plan- ners, and makes their behavior explicit and efficient, but the design of planners, and the acquisition of strategies and the control features on which they depend, is the job of the knowledge engineer. MIJ System Gluer I es User Figure 2: Mu System Schematic IV. The MU Environment - Knowledge representation in MU centers around features. Features and their values are the information with which planning decisions are made. 
Each node in a MU inference network can have several features; for example, the node that represents trial therapy for angina includes features for monetary cost and risk to the patient. Features are defined in the normal course of knowledge engineering to support expert strategies for prospective reasoning. We have identified four classes of features, digerentiated by their value types, how they are calculated, and the opera- tions that MU can perform on them: Static The value of a static feature is specified by the expert and does not change at run time. AJoaetary cost is a typical static feature, as the cost of an action does not change during a session. Datum The value of a datum feature is acquired at run time by asking the user questions. Data are often the results of actions; for example EKG shows ischemic changes is a potential result of performing an EKG. Dynamic The value of a dynamic feature is computed from the values of other feature values in the network. Cohen, Greenberg, and Delisio 785 The value of each dynamic feature is calculated by a combining function, acquired through knowledge en- gineering. A dynamic feature of every hypothesis is its degree of belief - a function of the degrees of belief of its evidence. FOCUS The value of a focus feature is a set of nodes whose features satisfy a user-defined predicate. Focus fea- tures are a subclass of dynamic features. In medicine, the diflerential focus feature can be defined as the list of all triggered hypotheses that are not confirmed or disconfirmed. Feature values can belong to several data types, in- cluding integers, sets, normal (one of an unordered set of possible values), ordinal (one of an ordered set of possible values), boolean, and relational (e.g., isa). Four operations are defined for features: one can set a feature value (e.g., assert that the monetary cost of a test is high) get a feature value (e.g., ask for the cost of a test), ask how to change a feature value, and ask what are the eflects of changing a feature value. Planners need answers to these kinds of questions to help them select actions (see Section 5 for further examples.) All combinations of feature type, value type, and op- erations are not possible. Figure 3 summarizes the legal combinations. MU provides an interface for defining features. A full definition includes the feature type, value type, its range of values, and the domain of its combining func- tions. For instance, the dynamic feature level of support is defined to have seven values on an ordinal scale: dis- confirmed, strongly-detracted, detracted, unknown, sup- ported, strongly-supported and confirmed. Figure 4 shows the definition of level of support. Instances of this feature (and others) are associated with individual hypotheses, each of which may have its own, local function for calculating level of support, and its own, dynamic value for the feature3. For example, Fig- ure 5 shows part of the frame for the angina hypothesis, encompassing an instance of the level of support feature, and showing a fragment of the function for calculating its value for angina. 
Level-Of-Support Feature-type: Dynamic Value-Type: Ordinal Value-restriction: (disconfirmed strongly-detracted detracted unknown supported strongly-supported confirmed) Combination-function-slot: local to each hypothesis Value: the current level of support of the hypothesis Figure 4: Definition of Level-Of-Support Angina Feature-list: (level-of-support severity) Current-level-of-support: strongly-supported Combination-function: IF value of ekg is ischemic-changes THEN angina is confirmed ELSEIF episode-incited-by contains exertion r!s%actors-for-angina are supported THEN angina is strongly-supported . . . Figure 5: Part of the Angina Frame With Local Combining Function Combining functions calculate values for dynamic fea- tures such as level of belief, criticality, elapsed time, and so on. They serve two important functions: First, they keep the state of MU’s inference network up-to-date; for example, when the result of an EKG becomes available, the combining function for the angina node updates the value of its level of support feature accordingly. Second, and perhaps more important from the stand- point of a planner, combining functions provide a prospec- tive view of the effects of actions; for example, the combin- ing function for angina can be interpreted prospectively to say that EKG can potentially confirm angina. The same r Data Types Questions Feature Number Set Ordinal Normal Get Set How To Effect Of static x x x X X datum x x x X x x X dynamic X X X X X focus X X Figure 3: Capabilities By Feature Type SNot all feature values are calculated locally, but, for reasons dis- cussed in [Cohen, Shafer, and Shenoy, 19871 and [Cohen eb al., 19871 level8 of belief are. 786 Expert Systems point holds for the combining functions for other features: MU can prospectively assess the potential effects of actions on all dynamic features. A planner can ask MU, “If EKG is negative, what changes?” and get back a list of all the features of all data, clusters, and hypotheses that are in some way affected by the value of EKG. The effects of ac- tions are assessed in the context of MU’s current state of knowledge (i.e., the state of its network). For example, if an EKG has been given and its results were negative, then MU knows that the answer to the previous question is that nothing changes. The syntax of combining functions is relatively unim- portant provided they are declarative, so MU’s question- answering interface can read them, and experts can easily specify and modify them. Currently, combining functions look like rules, but we are experimenting with tabular and graphic forms [Cohen, Shafer, and Shenoy, 19871. The two major classes of combining functions are Zo- cal and global. A local function for a node such as angina refers only to the nodes in the inference network that are directly connected to angina. In contrast, global functions survey the state of MU’s entire inference network. l?unc- tions for focus features take a global perspective because the value of a focus feature is the subset of nodes in the network whose features satisfy some predicate. For ex- ample, Figure 6 illustrates the combining function for the diferential focus feature. Any node that represents a dis- ease hypothesis, and is triggered, but is neither confirmed nor disconfirmed is a member of the differential. 
Differential
  Feature-list: (focus-feature)
  Current-focus: (angina prinz-metal ulcer)
  Combining-function:
    Set-of $node$ member-of disease
    Such-that $node$ is triggered
              AND level-of-support of $node$ is not confirmed
              AND level-of-support of $node$ is not disconfirmed

Figure 6: Part of the Global Focus-Feature Differential

The knowledge engineer can define many focus features, each corresponding to a class of nodes that a planner may want to monitor. Besides the differential, a planner might maintain the set of critical hypotheses (e.g., all dangerous hypotheses that have moderate support or better), or the set of hypotheses that have relatively high prior probability, or the set of all supported clusters that potentially confirm a particular hypothesis. MU supports set intersection, union, and sorting on the sets of nodes maintained by focus features. A planner's current focus of attention is represented in terms of the results of these operations.

MU is a development environment for prospective reasoning systems. We began our research on prospective reasoning when we were building a system, MUM, for prospective diagnosis [Cohen et al., 1987], and realized that we lacked the knowledge engineering tools to acquire and modify diagnostic strategies. An example will illustrate the knowledge engineering issues in building MU:

MUM had several strategic phases, each of which specified how to assess a focus of attention and select an action. One phase, called initial assessment, directed MUM to focus on triggered hypotheses one by one and take inexpensive actions that potentially support each. This covered a wide range of situations, and maintained the efficiency of diagnoses by focusing on low-cost evidence, but it made little sense for very dangerous disease hypotheses. For these, diagnosticity, not cost, is the most important criterion for selecting actions. Once the expert explained this, we should have immediately added a new strategic phase, run the system, and iterated if its performance was incorrect. Unfortunately, control features such as criticality and diagnosticity did not have declarative representations in MUM, were implemented in lisp, and could not easily be composed from other control features. Operations such as sorting a list of critical hypotheses by their level of support were also implemented in lisp. Each strategic phase required a day or two to write and debug. From the standpoint of the expert, it was an unacceptable delay.

The MUM project showed us that MU should facilitate acquisition of control features, maintain their values efficiently, and support a broad range of questions about the state of the inference network. MU allows a planner to ask 6 classes of questions:

Questions about state are concerned with the current values of features. For example:
Q1: "What is the current level of support for angina?"
Q2: "Is an ulcer dangerous?"
Q3: "What is the cost of performing an angiogram?"

Another class of questions is asked to find out how to achieve a goal. Examples of questions about goals are:
Q4: "Given what I know now, which tests might confirm angina?"
Q5: "What are all of the tests that might have some bearing on heart disease?"

These questions help a planner identify relevant actions and select among them. Those that pertain to levels of belief are answered by referring to the appropriate combining functions and current levels of belief. For example, the answer to the question about angina is "EKG," if an EKG has not already been performed (Fig. 5).
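The focus-feature machinery lends itself to a compact sketch. In the hypothetical Common Lisp below, hypotheses are property lists and a focus feature is just a filtering predicate over them; the node names come from the running example, but the representation and helper names are invented for illustration.

(defparameter *hypotheses*
  '((angina       :triggered t   :level-of-support strongly-supported :dangerous t)
    (prinz-metal  :triggered t   :level-of-support supported          :dangerous nil)
    (ulcer        :triggered t   :level-of-support unknown            :dangerous nil)
    (colon-cancer :triggered nil :level-of-support unknown            :dangerous t)))

(defun focus (predicate)
  "A focus feature's value: the nodes whose features satisfy PREDICATE."
  (mapcar #'first (remove-if-not predicate *hypotheses*)))

(defun differential ()
  (focus (lambda (h)
           (and (getf (rest h) :triggered)
                (not (member (getf (rest h) :level-of-support)
                             '(confirmed disconfirmed)))))))

(defun critical ()
  (focus (lambda (h) (getf (rest h) :dangerous))))

;; MU-style set operations compose focus features:
;; (differential)                           => (ANGINA PRINZ-METAL ULCER)
;; (intersection (differential) (critical)) => (ANGINA)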
Questions about the effects of actions allow a planner to understand the ramifications of an action. For example:
Q6: "Which disease hypotheses are affected by performing an EKG?"
Q7: "What are the possible results of an angiogram?"
Q8: "Does age have an effect on the criticality of colon cancer?"

MU answers these questions by traversing the relations between actions and nodes "higher" in the inference network. For example, Q6 is answered by finding all the nodes for which EKG provides evidence. The planner may ask either for the immediate consequence of knowing EKG, or for the consequences to any desired depth of inference.
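A sketch of how a Q6-style effects question might be answered by walking the evidence relation upward, to any desired depth. The miniature network and the function names below are invented, and the sketch assumes the network is acyclic.

(defparameter *provides-evidence-for*
  '((ekg       . (angina heart-disease))
    (angiogram . (coronary-occlusion))
    (angina    . (heart-disease)))
  "Alist mapping each action or node to the nodes it supports.")

(defun affected-by (node &optional (depth most-positive-fixnum))
  "All nodes whose features could change if NODE's value changes,
to the requested depth of inference."
  (when (> depth 0)
    (let ((direct (cdr (assoc node *provides-evidence-for*))))
      (remove-duplicates
       (append direct
               (mapcan (lambda (n) (affected-by n (1- depth)))
                       direct))))))

;; (affected-by 'ekg 1) => (ANGINA HEART-DISEASE)  ; immediate consequences
;; (affected-by 'ekg)   => (ANGINA HEART-DISEASE)  ; closed under inference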
Focus questions help a planner establish focus of attention. For example:
Q9: "Give me all diseases that are triggered and dangerous."
Q10: "What are all of the critical diseases for which I have no information?"
Q11: "Are any hypotheses confirmed?"

Questions about multiple effects allow the planner to combine the previous question types into more complex queries such as "What tests can discriminate between angina and esophageal spasm?" In this case, the term discriminate is defined to mean "simultaneously increase the level of belief in one disease and lower it in another."

Hypothetical questions allow the planner to identify dependencies among actions. For example, one can ask, "Suppose the response to trial therapy is positive. Now, could a stress test still have any bearing on my belief in angina?"

With the ability to define control features and answer such questions, we quickly reimplemented MUM's strategic phase planner. Most of the effort went into adding declarative definitions of control features and their combining functions to MUM's medical inference network.

MU supports the construction of systems that have the characteristics of prospective reasoning identified in Section 2: Prospective reasoning involves answering the question, "What shall I do next," given uncertainty about the state of the world, the effects of actions, tradeoffs between the costs and benefits of actions, and precondition relations between actions. The six classes of questions, discussed above, help planners to decide on courses of action despite uncertainty. Questions about state make uncertainty about hypotheses explicit. Hypothetical questions and questions about effects make uncertainty about the outcomes of actions explicit. Questions about goals and multiple effects help a planner identify the tradeoffs between actions. And hypothetical questions make dependencies between actions explicit.

We are currently extending MU's abilities in several ways. One project seeks to automate the process of acquiring strategies. It attempts to infer strategies from cases, asking the expert to supply new control features when the current set is insufficient to represent the conditions under which strategies are appropriate. We are also building an interface to help acquire combining functions. This task becomes confusing for the expert and knowledge engineer alike when levels of belief must be specified for combinations of many data. We discuss related work on the design of functions to extrapolate from user-specified combining functions in [Cohen, Shafer, and Shenoy, 1987]. A third project is to implement sensitivity analysis in MU. The goal is to add a seventh class of queries, of the form, "To which data and/or intermediate conclusions is my current level of belief in a hypothesis most sensitive?" This will facilitate prospective reasoning by giving the planner a dynamic picture not only of its belief in hypotheses, but also of its confidence in these beliefs. With sensitivity analysis the prospective reasoner will be able to find weak spots in its edifice of inferences and shore them up (or let them collapse) before they become the basis of unwarranted conclusions.

References

[Chandrasekaran, 1986] Chandrasekaran, B. Generic tasks in knowledge-based reasoning: high-level building blocks for expert system design. IEEE Expert, Fall:23-30, 1986.
[Clancey, 1985] Clancey, W.J. Heuristic classification. Artificial Intelligence, 27:289-350, 1985.
[Clancey, 1986] Clancey, W.J. From Guidon to Neomycin and Heracles in twenty short lessons. AI Magazine, 7(3):40-60, 1986.
[Cohen et al., 1987] Cohen, P., Day, D., Delisio, J., Greenberg, M., Kjeldsen, R., Suthers, D., & Berman, P. Management of uncertainty in medicine. International Journal of Approximate Reasoning, 1(1), forthcoming.
[Cohen and Feigenbaum, 1982] Cohen, P.R., and Feigenbaum, E.A. The Handbook of Artificial Intelligence, Vol. 3. Addison-Wesley, Reading, Massachusetts, 1982.
[Cohen, 1985] Cohen, P.R. Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach. Pitman Advanced Research Notes. Pitman Publishing, London, 1985.
[Cohen, Shafer, and Shenoy, 1987] Cohen, P.R., Shafer, G., and Shenoy, P. Modifiable combining functions. EKSL Report 87-05, Department of Computer and Information Science, University of Massachusetts, Amherst, MA, 1987.
[Fikes, Hart, and Nilsson, 1972] Fikes, R., Hart, P., and Nilsson, N. Learning and executing generalized robot plans. Artificial Intelligence, 3(4):251-288, 1972.
[Gruber and Cohen, 1987] Gruber, T.R. & Cohen, P.R. Design for acquisition: principles of knowledge system design to facilitate knowledge acquisition. To appear in the International Journal of Man-Machine Studies, 1987.
[Hayes-Roth, 1985] Hayes-Roth, B. A blackboard architecture for control. Artificial Intelligence, 26:251-321, 1985.
[Hayes-Roth et al., 1986] Hayes-Roth, B., Garvey, A., Johnson, M.V., and Hewett, M. A layered environment for reasoning about action. KSL Report No. 86-98, Department of Computer Science, Stanford University, 1986.
[Howard, 1966] Howard, R.A. Decision Analysis: Applied Decision Theory. In D.B. Hertz and J. Melese, editors, Proceedings of the Fourth International Conference on Operational Research, pages 55-71, Wiley, New York, 1966.
[Raiffa, 1970] Raiffa, H. Decision Analysis: Introductory Lectures on Choices Under Uncertainty. Addison-Wesley, Reading, Massachusetts, 1970.
[Sacerdoti, 1979] Sacerdoti, E. Problem solving tactics. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, pages 1077-1085, 1979.
Diagnostic Improvement via Sensitivity Analysis and Aggregation

Keith L. Downing*
Department of Computer and Information Science
University of Oregon
Eugene, Oregon 97403

Abstract

This paper lays the foundation for a diagnostic system that improves its performance by deriving symptom-fault associations from an underlying causal model and then utilizes those relationships to impose further structure upon the "deep" model. A qualitative version of sensitivity analysis is introduced to extract the implicit symptom-fault information from a set of local constraints. Parameter aggregation triggered by this new information then simplifies diagnosis by forming a more abstract causal representation. The resulting diagnostician thus employs both an experiential and a first-principle approach, where in this case "experiences" are compiled directly from first principles. Key issues include the roles of knowledge compilation and abstraction in refining qualitative models of physical systems.

Reiter (1987) recognizes two approaches to automated diagnosis:

1. Experiential methods in which direct symptom-fault links distilled from human expert knowledge facilitate quick diagnoses requiring little or no in-depth causal reasoning.

2. First-principle reasoning whereby explicit "deep" system models are used to derive the causal pathways from faults to symptoms.

Drawbacks of the experiential method include the generation of multiple fault hypotheses, often resolvable only by weak probabilistic means, and limited explanation capabilities. First-principle diagnosis provides causal explanations at the price of extensive reasoning and/or simulation. However, a deep-model diagnostic system that retains its derived symptom-fault associations can reduce future diagnostic effort without sacrificing explanation abilities.

*This work was supported by FIPSE grant G008440474-02 and a Tektronix Graduate Fellowship.

This research involves the compilation of symptom-fault relationships from a mechanical model of the circulatory system with the intention of reusing that information to simplify later diagnosis and therapy. This approach partitions the acquisition of diagnostic skill into two stages:

1. The derivation of symptom-fault connections by applying constraint satisfaction to the causal model.

2. The use of these associations to support the aggregation of structures and parameters to simplify the original model.

Through this process, an automated diagnostician can acquire both rational heuristics supported by an underlying causal model and useful abstractions of that model. This avoids the standard expert-system dependence upon shallow, ad hoc rules and ill-defined symptom and disease hierarchies. This paper discusses a qualitative version of Campbell's (1983) sensitivity analysis as a knowledge-compilation methodology for simplifying qualitative reasoning about complex physical systems.

The quantitative cardiovascular model developed by Peterson and Campbell (1985) serves as the physical system to undergo diagnosis. In this simulation environment, observable parameters are partitioned into "properties" and "variables". The former represent the relatively static values of a real circulatory system such as vascular resistance to blood flow, or heart strength. Property deviations constitute "faults" and result only from the actions of external factors not represented in the model. Hence, they are always independent parameters in causal relationships.
Variables, such as cardiac blood flow or atrial pressure, shift value either in direct response to property changes or indirectly through other variable changes. In either case, they represent the dependent system parameters whose deviations constitute "symptoms".

Figure 1: Basic Circulatory Topology

Figure 1 (Peterson and Campbell, 1985) portrays the basic circulatory topology. Briefly, the left heart pumps oxygen-rich blood through the systemic (body) arteries to the body's capillary beds, the narrow capillaries being the major source of resistance to blood flow. So blood that exited the left heart at a pressure of approximately 100 mm Hg returns to the right heart via the systemic veins at a pressure close to 5 mm Hg. This carbon-dioxide-laden blood gets pumped through the pulmonary arteries to the lungs, where it becomes re-oxygenated before returning to the left heart. After filling with low-pressure blood, the left and right hearts contract with both inflow and outflow valves closed, thereby developing enough pressure to open the arterial valves and send blood flowing toward the body and lungs respectively. Although the blood volume of the circulatory system remains constant over the short time span of these events, the amount of "active" blood in the systemic and pulmonary loops varies inversely with venous compliance. A highly compliant vein stretches or sags to accommodate more blood without drastically raising its pressure, thus functioning like an electrical capacitor. Because active-blood volume is the single most important factor in circulatory behavior, and only the veins have dynamic compliances (regulated by the nervous system), venous compliance is a crucial property whose changes incur inverse changes to all pressure and flow variables in the circulatory system.

Using the standard mapping of pressure to voltage and flow to current, Figure 2 (Campbell, 1983) abstracts the left or right heart and its load into a simple electrical model. A flow source of maximal internal pressure, P0, and internal conductance, G, outputs flow, Q, against pressure, P, induced by resistance, R. Intuitively, Q varies directly with the pressure differential, (P0 - P), and with pump (heart) strength. As a simple diagnostic example, if P increases while Q decreases, then P = RQ indicates that R must have increased. In the absence of compensatory mechanisms, this corresponds to a diagnosis of a clogged artery, a common source of super-normal resistance, to account for depressed cardiac output and elevated arterial pressure.

Properties:
  R: Total Circulatory Resistance
  G: Heart Contractility
  P0: Maximal Heart Pressure
Variables:
  Q: Cardiac Output
  P: Arterial Pressure
Constraints:
  Q = G(P0 - P)
  P = RQ

Figure 2: Heart Pump and Hydraulic Load
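Before turning to qualitative sensitivities, the quantitative version is worth checking numerically. The short Common Lisp sketch below (with invented numeric values) solves the two constraints for P, which gives P = G·R·P0/(1 + G·R), and estimates the sensitivity of P to R by finite differences; it should reproduce the closed form 1/(1 + G·R) quoted in the next section.

(defun arterial-pressure (g p0 r)
  ;; Solving Q = G(P0 - P) and P = RQ simultaneously for P.
  (/ (* g r p0) (+ 1 (* g r))))

(defun sensitivity-of-p-to-r (g p0 r &optional (dr 1e-6))
  "Finite-difference estimate of (dP/P)/(dR/R)."
  (let* ((p      (arterial-pressure g p0 r))
         (p-bump (arterial-pressure g p0 (+ r dr))))
    (/ (/ (- p-bump p) p)
       (/ dr r))))

;; (sensitivity-of-p-to-r 0.1 100.0 10.0) => ~0.5, i.e. 1/(1 + 0.1*10)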
3 Qualitative Sensitivity Analysis

Due to the presence of feedback, via both the cyclic flow of blood and the bi-directionality of component interactions, cardiovascular variables are sensitive to changes in many properties. Hence, a great many implicit constraints relating single properties to single variables underlie the causal model. By uncovering these associations, many of which are non-local, the diagnostician can circumvent causal reasoning and take advantage of the highly-constrained model to identify faults after minimal testing. Campbell (1983) introduces quantitative sensitivities to express the dependence of variables on properties. Calculated as the ratio of partial differentials, the sensitivity of P to R, for instance, is:

  (∂P/P)/(∂R/R) = 1/(1 + G·R)    (1)

In Figure 2, this represents the system-wide sensitivity of P to changes in R under the single-fault assumption that no other properties have changed. Sensitivities provide useful diagnostic pointers from changing variables (i.e. symptoms) to their most strongly-coupled properties, where a "strong" sensitivity has an absolute value close to (but never above) 1. However, during most diagnostic reasoning, the salient aspect of any sensitivity is its sign. Does the property affect the variable directly (positive sensitivity value), inversely (negative sensitivity value) or not at all (sensitivity close to 0)? In short, a good deal of diagnostic reasoning exploits only qualitative information: "If the arterial pressure is up, then the vascular compliance might be down, or the arterial resistance might be up." Quantitative sensitivities exceed informational needs while charging a large computational cost.

Qualitative sensitivity analysis (QUALSA) extracts only the necessary diagnostic information from a set of constraints. QUALSA begins by converting all system constraints to "mixed" confluences (De Kleer and Brown, 1985). Next, a set of parameter assumptions is created to define a qualitative state of the system. These assumptions then enable a one-to-one mapping from mixed confluences to "pure" confluences. The latter characterize the dynamic behavior of the qualitative state. Now, to test the system-wide qualitative sensitivities of all variables to a selected property φ, ∂φ is set to either + or -, and all other property derivatives are set to 0. By restricting the values of all confluence terms to the (-,0,+) quantity space (Forbus, 1985), QUALSA encounters the ambiguities of qualitative arithmetic (Simmons, 1986). Using a constraint-satisfaction technique capable of dealing with these ambiguities (Thyagarajan, 1987), all valid interpretations of the qualitative state's confluence set are found subject to the previous assignment of property-derivative values. The collection of qualitative variable values from each interpretation serves as a fault-table index for ∂φ = x, where x is the original setting of ∂φ. For each interpretation, a comparison of ∂X to ∂φ for any variable X yields a qualitative sensitivity defined as:

              +  if ∂X = ∂φ
  QLS(X,φ) =  -  if ∂X ≠ ∂φ and ∂X ≠ 0    (2)
              0  if ∂X = 0

4 Application

When applied to the abstract cardiovascular model of Figure 2, QUALSA proceeds as follows:

I. Initial equations with properties = (G, P0, R) and variables = (P, Q):

   Q = G(P0 - P)    (3)
   P = RQ           (4)

II. Differentiate and transform to mixed confluences. [x] represents the sign of x, whether +, - or 0:

   ∂Q = ∂G·[P0 - P] + [G]·(∂P0 - ∂P)    (5)
   ∂P = ∂R·[Q] + [R]·∂Q                 (6)

III. Make parameter assumptions: P0 > P, G > 0, Q > 0, R > 0

IV. Apply parameter assumptions to the mixed confluences to derive pure confluences:

   ∂Q = ∂G + ∂P0 - ∂P    (7)
   ∂P = ∂R + ∂Q          (8)

V. Modify a property (plant a fault): ∂G ← +

VI. Apply the Single-Fault Assumption: ∂P0 ← 0, ∂R ← 0

VII. Call the constraint satisfier with simultaneous confluences 7 and 8 and instantiated properties. Receive a unique valid interpretation: (∂Q+, ∂P+)

VIII. Calculate Qualitative Sensitivities of both variables to G using Definition 2:

   (∂P = ∂G) ⇒ (QLS(P,G) ← +)
   (∂Q = ∂G) ⇒ (QLS(Q,G) ← +)
IX. Repeated calls to the constraint satisfier under the single-fault assumption with each property faulted high and low yield a complete fault table:

               ∂P+             ∂P0     ∂P-
   ∂Q+    ∂G+ or ∂P0+          nil     ∂R-
   ∂Q0         nil             nil     nil
   ∂Q-         ∂R+             nil     ∂G- or ∂P0-

   Table 1: Faults indexed by symptoms

X. Calculate all qualitative sensitivities, which in this case remain unambiguous over the 6 interpretations returned by the 6 calls to the constraint satisfier:

          G     P0     R
   P      +     +      +
   Q      +     +      -

   Table 2: Qualitative Sensitivities

5 Ambiguities

The ambiguity of qualitative arithmetic often spawns an abundance of interpretations, any two of which will have conflicting sensitivities QLS(X, φ, I1) and QLS(X, φ, I2) for at least one variable X and property φ. Hence, qualitative sensitivities are sometimes indeterminate. In addition, selected properties φ1 and φ2 can yield one or more of the same interpretations when individually instantiated to + or - and run through QUALSA. This creates a one-to-many mapping of indices to faults in the fault table. Multiple interpretations for a single property setting (fault) contribute to a many-to-one mapping, but this creates no additional ambiguity for the backward causal reasoning indigenous to diagnosis.

6 Aggregation

The sensitivities of Table 2 indicate that G and P0 have identical qualitative effects upon variables P and Q. In fact, a more detailed cardiovascular model reveals further similarities in their induced sensitivities. These similarities, along with their common location, the left heart, make P0 and G excellent candidates for a simplifying aggregation. Let H represent a general heart strength and define it as:

   H = G·P0    (9)

Under the assumptions G > 0, P0 > 0, steps II-IV of QUALSA produce the pure confluence:

   ∂H = ∂G + ∂P0    (10)

Substituting Equation 10 into Equation 7, a legal substitution in qualitative arithmetic (De Kleer and Brown, 1986) since the coefficient of G·P0 is the same in both Equation 9 and Equation 3, yields:

   ∂Q = ∂H - ∂P    (11)

Four applications (two for each property, H and R) of QUALSA steps V-VIII to confluence Equations 11 and 8 generate simplified fault and sensitivity tables:

          ∂P+     ∂P0     ∂P-
   ∂Q+    ∂H+     nil     ∂R-
   ∂Q0    nil     nil     nil
   ∂Q-    ∂R+     nil     ∂H-

   Table 3: Aggregated Fault Table

          H      R
   P      +      +
   Q      +      -

   Table 4: Aggregated Sensitivity Table

Now, diagnosis can proceed in the abstracted fault space containing only H and R. Only if the fault is localized to H will granularity shift to the level of P0 and G, where variables such as the left heart's diastolic ("filling") and systolic ("emptying") blood volumes will discriminate between the two primitive faults.
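In the aggregated space, diagnosis reduces to table lookup. The sketch below transcribes Table 3 into Common Lisp data; the representation is invented, and under the single-fault assumption all other symptom pairs index nil.

(defparameter *aggregated-fault-table*
  '(((+ +) (dH+))    ; P up,   Q up   => heart strength up
    ((- +) (dR-))    ; P down, Q up   => resistance down
    ((+ -) (dR+))    ; P up,   Q down => resistance up
    ((- -) (dH-)))   ; P down, Q down => heart strength down
  "Symptom pair (dP dQ) mapped to its candidate faults.")

(defun diagnose (dp dq)
  (second (assoc (list dp dq) *aggregated-fault-table* :test #'equal)))

;; (diagnose '+ '-) => (DR+), cf. the clogged-artery example earlier.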
Under the strong assumption that no confluence contains more than a single property derivative, a "comprehensive" multiple-fault table (i.e. one that covers from 0 to n faults, where n is the number of model properties) can be efficiently generated. Normally, this would require 3^n calls to the constraint satisfier to test the effects of all combinations of property-derivative settings (+, -, and 0). But under the single-property assumption, only 3n such calls are required, where each call involves only the confluences containing a specified property. The interpretations returned from each such call are then intersected with the interpretations from other calls to create indices into the comprehensive fault table. For example, Equations 11 and 8 have only single property derivatives. Setting

   ∂H ← +

and passing Equation 11 to the constraint satisfier creates interpretation set:

   ISET1 = {(∂P+, ∂Q-), (∂P+, ∂Q0), (∂P+, ∂Q+), (∂P0, ∂Q+), (∂P-, ∂Q+)}    (12)

Next, let

   ∂R ← -

and pass Equation 8 to the constraint satisfier. This returns:

   ISET2 = {(∂P-, ∂Q-), (∂P-, ∂Q0), (∂P-, ∂Q+), (∂P0, ∂Q+), (∂P+, ∂Q+)}    (13)

After intersecting ISET1 and ISET2 to yield:

   ISET3 = {(∂P-, ∂Q+), (∂P0, ∂Q+), (∂P+, ∂Q+)}    (14)

use each of the three ISET3 interpretations as an index for the double fault (∂H+ and ∂R-).

The nature of complex systems precludes the use of only local behavioral knowledge to diagnose faults efficiently. QUALSA exploits local behavioral constraints to uncover implicit interactions, both local and global, between properties and variables. Its qualitative basis avoids the algebraic complexities of quantitative methods at the cost of increased ambiguity both in the forward causal reasoning indigenous to simulation and in backward diagnostic reasoning. By exploiting empirical ordinal relationships, such as the fact that arterial pressure normally greatly exceeds venous pressure, and organizing them in a quantity lattice (Simmons, 1986) or similar structure, I expect to reduce this nondeterminism considerably.

The background empirical-knowledge needs of a diagnostician equipped with QUALSA shift from subjective "causal" connections to more objective ordinal relationships. But deep models (no matter how deep) serve only as convenient abstractions of real systems; QUALSA cannot entirely replace the induction of causal rules from empirical observations, especially in complex domains such as human physiology. Rather, by using QUALSA-derived symptom-fault associations to support or refute those obtained empirically, a true integration of first-principle and experiential diagnosis results. Not only are both deep and high-level models used, but the high-level information is derived both experientially and analytically. Also, QUALSA outputs may inspire a re-interpretation of empirical data in search of support for previously-overlooked causal relationships. It can thus add top-down control to data analysis by providing well-founded causal expectations. In short, rather than treating empirical associations as second-class information, QUALSA can fortify the inductive processes that generate them.

Campbell (1983) has detailed the drastic sensitivity alterations incurred by minor modifications to the component topology, while de Kleer and Brown (1983) have illustrated the importance of locality (and more generally, no function in structure) for robust modelling. Thus, sensitivities can discriminate among the behaviors of structurally similar models and thereby capture the behavioral ramifications of minor structural adjustments; and their derivation from local constraints enhances robustness. In short, sensitivities are well suited for models of evolving systems.

In addition to supporting structural changes, sensitivities can suggest structural abstractions to strengthen diagnostic capabilities. These aggregations embody a more organized understanding of the modeled system, an understanding recognized and implemented in diagnostic systems such as ABEL (Patil et al., 1982) and INTERNIST-II (Pople, 1982) but supplied externally. By deriving structure from within, the integration of QUALSA and aggregation exhibits a theory of self-contained diagnostic learning that unites experiential and first-principle techniques for their mutual enrichment.
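The sign arithmetic behind steps V-VII and the interpretation-set intersection above is small enough to sketch directly. Everything below is an illustrative Common Lisp rendering, not Thyagarajan's constraint satisfier, but it does reproduce ISET1, ISET2 and their intersection for the double fault.

(defparameter *signs* '(+ 0 -))

(defun q+ (a b)
  "The set of possible signs of the qualitative sum a + b."
  (cond ((eql a 0) (list b))
        ((eql b 0) (list a))
        ((eql a b) (list a))
        (t *signs*)))            ; + plus - is ambiguous

(defun qneg (a) (case a (+ '-) (- '+) (t 0)))

(defun interpretations (valid-p)
  "All (dP dQ) sign pairs accepted by VALID-P."
  (loop for dp in *signs* append
        (loop for dq in *signs*
              when (funcall valid-p dp dq)
              collect (list dp dq))))

;; dH = + constrains confluence 11 (dQ = dH - dP, i.e. dH in dQ + dP):
(defparameter *iset1*
  (interpretations (lambda (dp dq) (member '+ (q+ dq dp)))))

;; dR = - constrains confluence 8 (dP = dR + dQ, i.e. dR in dP - dQ):
(defparameter *iset2*
  (interpretations (lambda (dp dq) (member '- (q+ dp (qneg dq))))))

;; (intersection *iset1* *iset2* :test #'equal)
;; => ((+ +) (0 +) (- +)), in some order: the three ISET3 indices.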
Acknowledgements

I owe a special thanks to Nils Peterson, whose original version of qualitative sensitivity analysis inspired a deeper investigation into its implications for diagnostic reasoning, knowledge compilation and aggregation. Also, I must thank P. Thyagarajan for his constraint satisfier. His ideas, along with those of Art Farley, Steve Finer and Nils, fueled our initial discussions of qualitative reasoning and diagnosis. My greatest debt is to my advisor, Sally Douglas, whose relentless criticism and encouragement have guided me through not only this research but most of my graduate experience.

References

Campbell, Kenneth, The Physical Basis of the Problem Environment in Cardiovascular Diagnosis, January 1983, unpublished.
De Kleer, J. and Brown, J.S., Assumptions and Ambiguities in Mechanistic Mental Models, in: D. Gentner and A.L. Stevens (Eds.), Mental Models (Erlbaum, Hillsdale, NJ, 1983) 155-199.
De Kleer, J. and Brown, J.S., Theories of Causal Ordering, Artificial Intelligence 29(1), 1986.
Forbus, K.D., Qualitative Reasoning About Physical Processes, Artificial Intelligence 24, 1984.
Peterson, Nils S. and Campbell, Kenneth B., Teaching Cardiovascular Integrations with Computer Laboratories, The Physiologist, Vol. 28, No. 3, 1985.
Patil, Ramesh, Peter Szolovits and William Schwartz, Modeling Knowledge of the Patient in Acid-Base and Electrolyte Disorders, in Artificial Intelligence in Medicine, ed. Peter Szolovits, Westview Press, 1982.
Pople, Harry E. Jr., Heuristic Methods for Imposing Structure on Ill-structured Problems: The Structuring of Medical Diagnosis, in Artificial Intelligence in Medicine, ed. Peter Szolovits, Westview Press, 1982.
Reiter, Raymond, A Theory of Diagnosis from First Principles, Univ. of Toronto, Tech. Rep. No. 187/86; to appear in Artificial Intelligence, 1987.
Simmons, Reid, "Commonsense" Arithmetic Reasoning, Proceedings of the Fifth National Conference on Artificial Intelligence, 1986.
Thyagarajan, P., forthcoming master's thesis, The Univ. of Oregon, 1987, unpublished.
Weld, D.S., The Use of Aggregation in Causal Simulation, Artificial Intelligence 30(1), 1986.
CAMEX - AN EXPERT SYSTEM FOR PROCESS PLANNING ON CNC MACHINES

O. Eliyahu, L. Zaidenberg and M. Ben-Bassat
IET - Intelligent Electronics, 14 Esser Tachanot St, Ramat Hachayal, Tel Aviv, Israel 69719
and Faculty of Management, Tel Aviv University, Tel Aviv, Israel 69788
B25@TAUNIVM.BITNET

ABSTRACT

CAMEX is an expert system designed to plan machining processes for CNC (Computerized Numerical Control) cutting machines. At the present state of development it is constrained to parts for which a 2 1/2 D description is sufficient. For these kinds of parts, CAMEX is able to read a drawing of a workpiece from an ordinary CAD file, to understand its 3-dimensional structure and to generate a plan for producing the workpiece. CAMEX is implemented in FRANZ LISP on an APOLLO workstation.

KEYWORDS

Expert systems, CAD/CAM, mechanical engineering, CNC machines, rule-based systems.

Topic: Expert Systems for Mechanical Engineering.

I. INTRODUCTION

A CNC (Computerized Numerical Control) cutting machine obtains as input a block of material, e.g. a rectangular block of aluminium, and produces a workpiece with a desired shape by a series of repeated cuts. A cut is characterized by several parameters including size and shape of the cutter, whether it is a rough or final cut, offset values, etc. Cut selection and ordering have not yet been automated. The person who decides on the machining process essentially bridges the gap between sophisticated CAD systems that are used to draw the desired workpiece, and sophisticated CAM systems that are capable of executing a given plan, once it is decided upon.

The CNC industry uses the term "technology" to describe the plan for producing a given workpiece, i.e. the sequence of cutters and their characterizing parameters. This term may be somewhat confusing out of the CNC context. Nevertheless, we elected to adopt it throughout the paper.

Generating a technology for complex and/or large parts may take a highly qualified expert weeks of intensive effort, and this establishes the need for some degree of automation, or, at least, decision support tools. The need for automation is enhanced by the realization that mistakes in the design are non-recoverable (one cannot "fill" material which has been removed by mistake) and very costly (CNC machine time is very expensive). The planning process cannot be described by closed algorithms and/or formulas. It is based for the most part on human expertise, i.e. detailed knowledge about the characteristics of materials and machine capabilities, as well as experience and problem-solving skills.

CAMEX is an expert system designed to plan machining processes ("technologies") for CNC cutting machines. At the present stage of development, the use of CAMEX is restricted to parts for which a 2 1/2 D description is sufficient. These are parts which may be fully described by one projection, e.g. view from above, and an associated one-valued function for defining the height (or depth) at each point. For parts of this kind, CAMEX obtains as input an ordinary CAD file for the desired workpiece and generates as output a technology for producing it.

CAMEX is implemented in FRANZ LISP on an APOLLO workstation (only the user-interface part of the code is machine-dependent). It consists of about 12000 lines of LISP and about 3000 lines of "C" code.
II. GENERAL APPROACH

Three main components are required for a system such as CAMEX:

1. Problem representation, that is the system's ability to perceive (to "see") a workpiece in the various stages of its production and to recognize the legitimate tools and their capabilities.

2. Knowledge base, that is a machine representation of the knowledge used by human experts in designing a CNC technology.

3. Inference and control mechanism, that is algorithms that, upon "understanding" a given workpiece, access relevant parts of the knowledge base and construct a technology for producing it.

III. PROBLEM REPRESENTATION

The starting point of a human expert is a technical drawing on paper or on computer screen displays. Looking at the drawings, he creates in his mind a 3-D model of the desired workpiece and then proceeds to generate the technology. The first step in CAMEX development was to provide it with a 3-D model of the desired workpiece. We started by developing a language for describing the geometrical properties of the workpiece. The idea behind the language was the notion of a human CNC expert who has become blind and now requires an assistant to describe the desired workpiece to him.

In this language, the workpiece is described as a list of geometrical primitives with their attributes and the relations between them. Two types of primitives exist in the language: subparts and surfaces. Subparts are classified into 2 types: cavities and material. For example, profile is a primitive of type cavity, and by external profile we mean all the material that must be removed from the initial block in order to reach the external wall of the workpiece. Most of the primitives were chosen by virtue of their representing basic technological structures (pocket, profile, hole, bay etc.). The attributes of a primitive are described in terms of Dmax (diameter of largest possible cutter), Dmin (smallest corner diameter), etc. The relations between primitives are described in such terms as is-above, is-below, is-aside, is-limited-by, etc. Figure 1 provides a description of a workpiece using the language. A similar approach to the workpiece-geometry description was used by Descotte and Latombe [Descotte and Latombe, 1981].

It was soon realised, however, that describing real-world workpieces in such a language is a very time-consuming and error-prone activity. Worse still was the fact that CAMEX could in no way verify the user-supplied description. On the other hand, it was discovered that a substantial amount of the relevant geometrical information could be extracted from CAD files of the technical drawings. We proceeded therefore to developing a preprocessor that would generate a workpiece representation directly from the CAD files.

Today, CAD systems describe part geometry as a collection of low-level geometrical primitives: line segments, polylines (strings), circles and arcs, and splines (see Figure 2). This collection, when presented graphically to the human eye, allows the human brain to imagine the 3-dimensional geometry of the part. In addition to "real" geometrical primitives such as points and line segments which are actually seen when viewing the workpiece, a CAD file also contains a large amount of auxiliary information, such as dimension lines and textual material. Such information must be identified and removed from the CAD file if we are going to attempt automatic interpretation.
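A sketch of this first, "cleaning" pass in Common Lisp; the entity kinds and slots below are invented stand-ins for whatever the real CAD format provides.

(defstruct cad-entity kind layer data)

(defun real-geometry-p (entity)
  "True for the primitives that draw the part itself."
  (member (cad-entity-kind entity)
          '(:line :polyline :circle :arc :spline)))

(defun clean-cad-file (entities)
  "Drop dimension lines, text and other auxiliary annotation before
attempting automatic interpretation."
  (remove-if-not #'real-geometry-p entities))

;; (clean-cad-file (list (make-cad-entity :kind :line)
;;                       (make-cad-entity :kind :dimension)
;;                       (make-cad-entity :kind :text)))
;; => a list containing just the :line entity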
Other problems in real-world CAD files are limited precision (can we assume 85.4 and 85.5 to be the same number?), overlapping lines, etc.

The CAMEX preprocessor "cleans" the CAD file of non-relevant elements, scans it and produces as output a geometrical database that describes the workpiece in terms of higher-level primitives, such as pockets, holes, profiles, etc. These primitives are displayed to the user, who is requested to supply the height (depth) of each primitive. The resulting workpiece description is equivalent to the language description mentioned above. The preprocessor goes beyond the mere identification of the basic geometrical entities. It also searches the database for thin walls, i.e. walls with a width less than some prespecified threshold. Thin walls play an important part in technological decisions, and it is more efficient to collect information about them in the preprocessing stage.

A one-to-one link is maintained between the original CAD primitives and the higher-level geometrical primitives. Thus the user may point to any region in the drawing and make queries in the form:

  is a specified region a wall? a hole? a pocket?
  what are the neighbors of a specified region?

(Such queries may also be used to check the claim that CAMEX really "understands" the workpiece.)

************************************
             profile-b
************************************
is-a: profile
is-above: ()
is-below: ()
is-aside-of: (wall-b7 wall-b8)
is-limited-from-above-by: ()
is-limited-from-below-by: ()
is-limited-from-side-by: ()
z-high: 20
z-low: 0
d-max: 50
d-min: 20
r-fillet: nil
depth: 20
width: 60
clearance: nil
tolerance: nil
area: 10000

Figure 1: A description of a workpiece using the CAMEX language.

Figure 2: A typical CAD file.

IV. REPRESENTATION OF KNOWLEDGE

Geometrical knowledge is embedded in the CAMEX problem representation. Additionally, knowledge about the use of CNC machines for a variety of workpieces made of various types of material is provided in the form of rules. Here are a few examples of the rules:

IF total-volume of nc-jobs for tool is less than 50000
   AND nc-jobs for tool are pockets only
   AND tool is larger than 16
   AND smaller tool is with rad 0
THEN change tool to smaller tool.

IF nc-entity is pocket or profile
   AND wall-thickness is less than 2.0
THEN set wall-offset to 0.05
   AND DEFINE wall-finish

The rule base is treated not as a static collection of knowledge chunks, but as a kind of very high-level programming language for describing the technology-generation process. This is achieved by rules that guide the control strategy and assist in breaking down the problem into subproblems and in determining the order in which the subproblems are to be solved. For instance, the rule

IF tool was changed
THEN PERFORM select nc-entity tools

guides the system to use a set of rules relevant for tool selection for a single nc-entity.

Such an organization has the advantage of efficiency, because at every step of technology generation only a small group of rules is eligible for checking. Thus the cycle time of each rule application is independent of the total number of rules in the rule base. The primary disadvantage is inconvenience in the debugging of the knowledge base, because the meaning of some rules may depend on the organization of the rule base.

The rules are formulated by the experts in structured English.
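One plausible shape for such a rule after translation, ahead of the rule translator described next. CAMEX's real internal form is not shown in the paper, so the structures and names below are illustrative only; the example encodes the thin-wall rule quoted above.

(defstruct (nc-entity (:conc-name nil))
  entity-kind wall-thickness wall-offset (pending-ops nil))

(defstruct rule name test action)

(defparameter *thin-wall-rule*
  (make-rule
   :name 'thin-wall-finish
   :test (lambda (e)
           (and (member (entity-kind e) '(pocket profile))
                (< (wall-thickness e) 2.0)))
   :action (lambda (e)
             (setf (wall-offset e) 0.05)
             (push 'wall-finish (pending-ops e)))))

(defun fire-rules (rules entity)
  "Forward-chain: fire every rule whose test holds for ENTITY."
  (dolist (r rules entity)
    (when (funcall (rule-test r) entity)
      (funcall (rule-action r) entity))))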
CAMEX has a rule translator module that automatically translates rules into LISP and adds them to the knowledge base (see Figure 3). This provides the experts with a great deal of independence in maintaining the knowledge base and in checking the impact of rule modifications.

V. INFERENCE MECHANISMS

CAMEX works in four main steps:

Step 1. Removal identification: CAMEX starts by identifying a set of fillers (cylinders with arbitrary cross-sections), which, upon removal from the initial block of material, produce the workpiece. There are generally many such sets (one is obtained by defining a filler for each region with z-coordinate less than the height of the initial block), but technological considerations make some sets illegal, and some preferable. The rules for choosing fillers are constraints on legal removals. For example, in Figure 4, alternative (a) consists of the two cylinders above the regions A and C with z-extent from 3mm to 16mm, and the cylinder above the region B with z-extent from 12mm to 16mm. Alternative (b) consists of the cylinder above regions A, B and C with z-extent from 12mm to 16mm, and the two cylinders above regions A and C with z-extent from 3mm to 12mm. The process is basically a depth-first search, and in most practical cases the search space is quite small (perhaps tens of possibilities). To handle the rare cases where the search space is large, rules of thumb are used to limit the space. At the end of this search we have a set of fillers each of which corresponds to a removal (pocket, profile, top-of-wall etc.). The resulting list of removals approximately corresponds to the list of nc-primitives which, in the early version of CAMEX, were explicitly entered by the user (see the section on Problem Representation and Figure 1). Size parameters (such as Dmin for pockets and profiles) and spatial relations between removals (aside-of, above, etc.), which previously were explicitly specified by the user, are now determined easily from the geometrical database on an as-needed basis.

Step 2. Technology generation for individual removals: Rules such as

IF nc-entity is pocket
   AND fillet rad is smaller than 5.0
   AND floor thickness is greater than 3.0
THEN set floor offset to 0.0
   AND set wall offset to fillet rad

are applied in a forward-chaining manner to produce a list of operations for each removal. Each removal (nc-entity) defines one or more operations (cuts, nc-jobs). Each operation is defined by the relevant removal, the diameter and corner radius of the cutter, and other parameters.

Step 3. Cutter optimization: The purpose of this step is to achieve better utilization of the cutters by taking a global view of the workpiece. The relevant rules have the form:

IF cutter is used only once
THEN remove it from part-tools-list
   AND retry select-part-tools.

Different criteria may be used for cutter optimization. For example, time of processing of each workpiece may be crucial for large production lines; however, for prototyping, time of generating a feasible technology (not necessarily an efficient one) may be more important.

Step 4. Sorting of operations: Basically, the relevant rules for this step are constraints on the legal order of operations. Two kinds of constraints exist: "must be" rules and "should be" rules. For instance:

wall-top MUST BE before profile or pocket with the same wall

nc-jobs for the same tools SHOULD BE sequenced by decreasing volume

Both kinds of rules imply a partial order of the operations.
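Step 4 can be read as a constraint-respecting selection sort: MUST-BE pairs are hard precedence constraints, and the SHOULD-BE preference breaks ties among the operations that are currently unconstrained. The Common Lisp below is a sketch under that reading; the operation names in the usage comment are invented.

(defun sort-operations (ops must-precede prefer)
  "MUST-PRECEDE is a list of (a b) pairs, a before b; PREFER orders
the operations whose predecessors have all been emitted."
  (let ((remaining (copy-list ops))
        (result '()))
    (loop while remaining do
      (let* ((ready (remove-if
                     (lambda (op)
                       (some (lambda (c)
                               (and (eq (second c) op)
                                    (member (first c) remaining)))
                             must-precede))
                     remaining))
             (next (first (sort ready prefer))))
        (unless next (error "Cyclic MUST-BE constraints"))
        (push next result)
        (setf remaining (remove next remaining))))
    (nreverse result)))

;; (sort-operations '(pocket-b wall-top-b profile-b)
;;                  '((wall-top-b profile-b) (wall-top-b pocket-b))
;;                  (lambda (a b) (string< (string a) (string b))))
;; => (WALL-TOP-B POCKET-B PROFILE-B)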
It is usually not really important which operation comes first, as long as all constraints are satisfied.

VI. IMPLEMENTATION

The main system modules are:

Geometry Understanding Module. Reads the CAD system output file and builds a data structure describing the workpiece geometry. Because of the "noise" included in real world drawings (even computerized ones), a certain amount of interaction with the user is needed at this stage.

Rule Translator Module. This module accepts rules from the user in structured English and translates them into internal LISP form. This module may be considered as a compiler from a high-level language describing technology generation, into LISP.

Control and Inference Module. This module controls the overall operation of CAMEX; calls procedurally implemented steps; and triggers and fires rules from the rule-base.

Explanation Module. This module traces the process of rule application: it translates rules from internal (LISP) form to English and generates explanations regarding the system's reasoning process. It allows the user to ask questions such as why a particular operation was added, and why, or in what context, a particular rule was used, etc. This module may be considered as a kind of symbolic domain-oriented debugger.

In Figure 5 we show the end result of CAMEX, i.e. a technology for the entire workpiece.

VII. ACKNOWLEDGEMENT

This project was supported by the Engineering Division of Israel Aircraft Industries, Inc. and performed in collaboration with their staff. The authors are specifically grateful to Mr. R. Razi for many useful discussions.

REFERENCES

[Descotte and Latombe, 1981] Y. Descotte & J. Latombe. "GARI: A problem solver that plans how to machine mechanical parts". In Proc. IJCAI-81, Vancouver, 1981, pp. 766-772.

Figure 3: Automatic rule translation
Figure 4: The possible sets of removals that yield the same end product.
Figure 5: A technology for the entire workpiece.
A Multiple Representation Approach to Understanding Digital Circuits

Robert J. Hall                    Richard H. Lathrop                Robert S. Kirk
Artificial Intelligence Lab       Artificial Intelligence Lab       Semiconductor Division
M.I.T.                            M.I.T.                            Gould, Inc.
Cambridge, MA 02139               Cambridge, MA 02139               Santa Clara, CA 95051

Abstract

We put forth a multiple representation approach to deriving the behavioral model of a digital circuit automatically from its structure and the behavioral simulation models of its components. One representation supports temporal reasoning for composition and simplification, another supports simulation, and a third helps to partition the translation problem. A working prototype, FUNSTRUX, is described.

I. Introduction

The function (time behavior) of a system is determined by the functions of its parts, together with their structural connections. Unfortunately, understanding the function of the whole by understanding the parts is difficult and poorly understood. The domain of digital circuits is convenient for investigating this problem because many of its objects already have well-defined, machine-readable behavior descriptions. These reside in the simulation model libraries used by the design community.

We have developed a multiple representation approach to automatically deriving models of overall device behavior from the interconnection structure of its components, together with their behavior models. Because one of our behavior representations is the executable simulation model, our system can transform executable program code describing the components' behavior into executable program code describing the behavior of the device as a whole.

The naive solution, invisibly simulating the components, has no value. Instead, we transform each component's behavior to a representation that highlights value dependencies over time. These are propagated by substitution according to the circuit structure and simplified to produce an overall behavior description.

We exploit different representations to facilitate different reasoning tasks:

o The temporal equation representation facilitates reasoning about dependencies between values at times.
o The code representation is the executable simulator model for the circuit.
o The abstract event representation helps partition the translations between code and equations.

Figure 1: A dynamic storage cell. Its three components (l-r) are a clocked inverter with delay 1.7 nsec, a storage node, and an inverter with delay 0.5 nsec.

A working prototype, FUNSTRUX, accepts as input a circuit's component interconnections and simulator models, and produces an executable simulator model for the entire circuit. The input components' descriptions (and the final output) can also be in either of the equivalent abstract event or temporal equation representations.

FUNSTRUX has been tested successfully on the SCORE [Alexander, 1986] standard cell library generator system, and has successfully generated a behavioral model for one bit-slice of an AMD 2901 [AMD, 1985] (an arithmetic-logic unit with memory and control circuitry, having 370 gate-level components and 365 interconnecting buses). Generating the functional model for this large circuit required 7 hours of real time on a Symbolics Lisp Machine. The resulting module uses less than a third as much time to simulate and schedules an order of magnitude fewer events than the full circuit at component level [Lathrop et al., 1987].
Behavioral simulators are applications of well-known event-driven simulation techniques to digital circuits. Each circuit component has an algorithmic description which dictates how it propagates value-changes (events).

Consider the circuit shown in Figure 1. For the purposes of this example, we choose a simple set of three logic levels (values): 1, 0, and * ("no value," e.g. the value of a tri-stated device). The inverter unconditionally puts out the negation of its input at a delay of 0.5. (Throughout, times will be in units of nanoseconds.) If the clocked inverter's φ input is 1, then its output becomes the logical negation of its a input 1.7 time units into the future. Otherwise, its output will be * 1.7 into the future. The storage node holds the most recent non-* value. (This example has been simplified from [Lathrop et al., 1987] and reflects minor improvements in FUNSTRUX' code since that paper.)

Figure 2: The FUNSTRUX system has two sorts of module. Representation conversion modules change one view of a circuit into another. Substitute/simplify modules perform various sorts of simplification.

FUNSTRUX produces the following code from this example. (Only the code for the y output is shown here.)

(defun dynamic-storage-cell-fcn (self a y phi)
  (depends-on '(a phi) '()
    (if (logone (read-bus phi '(bit)))
        (put-my-state self '(a-state) (read-bus a '(bit)) 0.0)))
  (depends-on '() '(a-state)
    (drive-bus y self '(bit) (get-my-state self '(a-state)) 2.2)))

This says that "when either a or phi changes, if phi is 1, schedule an event at the same time to change a-state to the current value of a. Whenever a-state changes, schedule an event 2.2 later to drive y to the value of a-state." This description omits the double negation and the details of the storage node changing value.

III. Representations and Conversions

The FUNSTRUX system's multiple representation scheme (see [Rich, 1985]) is shown in Figure 2. It runs as one component of the STAR design system [Kirk et al., 1987] and continues an investigation into function, structure, and their relationships. The STAR system integrates behavioral simulation [Lathrop and Kirk, 1985], netlist manipulation [Lathrop and Kirk, 1986], and parameterized cell generators [Alexander, 1986]. STAR's LISP-based behavioral simulator, SIMMER, is our target.

SIMULATOR CODE REPRESENTATION. The code representation is described in [Lathrop and Kirk, 1985]. The SIMMER model for the storage node is

(defun storage-node-fcn (self b c)
  "b input, c output"
  (let* ((b-value (read-bus b '(bit))))
    (if (#? * b-value)
        (put-my-state self '(bit) b-value))
    (drive-bus c self '(bit) (get-my-state self '(bit)))))

ABSTRACT EVENTS REPRESENTATION. Different event-driven simulators have differing semantics and coding conventions; however, they share a common event-driven core. The abstract event representation partitions the problem of translating from code to equations (and back) into (1) a simulator-dependent segment which abstracts from the particular semantics of the simulator into abstract events, and (2) a simulator-independent segment which translates from the event representation to the equation representation. This will make it easier to re-target the system for different behavior descriptions.
An abstract event description consists of the event condition predicate, the variable to be scheduled for change, the relative delay, and the symbolic expression for the new value. The event condition predicate consists of two parts, the enablement predicate (ep) and the changing-variable (cv) list. The event occurs if either the enablement predicate becomes true or one of the changing-variables changes value while the enablement predicate is true. [Meinen, 1979] used a notation similar to our enablement predicate (ep). The abstract event representation of the storage node, produced from the code above, is

(Event-cell inputs: (b)
  event: ((cv (b) ep (#? * b))
          (schedule bstate at 0 equal-to b))
  event: ((cv (bstate) ep T)
          (schedule c at 0 equal-to bstate)))

This represents a cell with one input, b. bstate represents the memory of the storage node, and c is the output. The first event clause indicates that when either the value of b changes and the predicate [#? * b] is true, or when the predicate changes from false to true, an event is scheduled at the same time which sets bstate to the value of b. (Note that in the example b is not * only when the clocked inverter is driving.) The second event clause schedules a change of c, but since the enablement predicate is T it happens on any change of bstate.
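The Event-cell form renders naturally as data with the firing condition as code. The hypothetical Common Lisp below does exactly that for the storage node; for brevity it omits the edge-triggered case where the enablement predicate itself becomes true, and all names are invented.

(defun lookup (var env) (cdr (assoc var env)))

(defstruct event cv ep schedules delay new-value)

(defparameter *storage-node-events*
  (list (make-event :cv '(b)
                    :ep (lambda (env) (not (eq (lookup 'b env) '*)))
                    :schedules 'bstate :delay 0 :new-value 'b)
        (make-event :cv '(bstate)
                    :ep (lambda (env) t)
                    :schedules 'c :delay 0 :new-value 'bstate)))

(defun fires-p (event changed-var env)
  "An event fires when one of its changing-variables changes while
the enablement predicate holds."
  (and (member changed-var (event-cv event))
       (funcall (event-ep event) env)))

;; (fires-p (first *storage-node-events*) 'b '((b . 1))) => truthy
;; (fires-p (first *storage-node-events*) 'b '((b . *))) => nil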
The code is symbolically ex- ecuted to associate each symbol with a formula which computes its value under the appropriate conditions. Unknown forms are treated as “black-box” functions according to the semantics of pure LISP. Forms which perform side-effects (e.g., reading or setting a global variable) are not modeled correctly. o Abstract Events --+ Equations. A value doesn’t change between events, so the value at time t is the value of whichever event expression occurred most recently. By constructing a predicate which indicates when a variable’s value last changed, we are able to reason about the last time an event would have triggered. e Equations -+ Abstract Events. “State objects” may be created to conditionally delay the values of the inputs. e Abstract Events + Code. Each variable can be re- solved into either an I/O port or a state object, using the circuit structure. Language constructs can then be generated which produce the effect of each event. SElW4NTIC CONNECTIONS BETWEEN THE REPW ESENTATIONS. A multiple representation scheme must at some point answer the question of seman- tic equivalence of different representations. We view the equation representation as a notation for a denotational se- mantics for the circuit structure. A circuit, together with inputs and initialization, denotes the unique solution to the simultaneous equations. The abstract event representation (and the simulator code) are endowed with event-based op- erational semantics. ]Hall, 19871 shows that our equation representation is equivalent to an essentially similar ab- stract events representation as long as zero-delay loops are disallowed. Proving equivalence of the code and abstract events representations is an open problem. MPOSITION. Composing the behaviors of the compo- nents in equation representation amounts to algebraic sub- stitution of equations. This process maintains locality of reference: when a reference to b(t) is expanded by replac- ing it with b’s definition, the variables on which b depends appear explicitly everywhere b was used. Here is the fully substituted example: y(t) = 1 (if (=? 1 d({Z [#? * (if (=? 1 @(u-1.7)) ;test 1 a(u-1.7) fthen {)I ;else 1 -O.S} 1 A({$ - 1.7)) ‘f? * (if (=? 1 d(v-1.7)) ;test 7 a(z.- 1.7) ;then *h ;else t -0.5) -1.7) *) ; final else clause On circuits of even moderate complexity, the combi- natorial explosion is much worse; hence, simplification is needed. FUNSTRUX interleaves substitution and simpli- fication to reduce intermediate expression size. PATTERN-ACTION SIMPLIFIERS. Our system’s syntactically local representation supports simple pattern- action expression transformations, similar to those used by [Darlington, 19811 for program optimization. The rules in FUNSTRUX are tailored to simplification. They form a terminating rule set, so the system applies them until no more are applicable. Experience has shown that we do not need to search different application orders. One simplifier applicable to the equation above is (#? * (if p re zca e value *)) d’ t simplifies to - (AND predicate (#? * value)) We have implemented a symbolic simplifier which uses ap- proximately 50-75 rules of this type. It typically reduces the size of tl-a expressions by about 90%. Syntactic lbcal- ity is cruciai LO the efficiency of this technique, as hunting all over a non-local representation would slow down the pattern matchers. Furthermore, the action parts would be less efficient, as relatively major surgery would be required. 
Simplifiers free of time operators could also be applied to the abstract event and/or code representations. However, the same can not be said for pattern-action simplification which involves time relationships.

REASONING ABOUT TIME RELATIONSHIPS. Locality of reference among time relationships of variables is also important. First, there are several useful time-based pattern-action simplifiers. For example, this is applicable to the equation above:

    {← [predicate(u-y)] t} - y
      simplifies to
    {← [predicate(u)] (t-y)}

Subtracting y from the most recent time u ≤ t such that predicate is true at u - y, is the same as the most recent time u ≤ t - y that predicate is true.

Applying all of the system's pattern-action simplifiers to the example, we get

    y(t) = (if (=? 1 φ({← [=? 1 φ(u)] (t-2.2)}))   ;test
               a({← [=? 1 φ(u)] (t-2.2)})           ;then
               ¬ *)                                 ;else

Another crucial feature of the locality property is that it exposes exactly the set (relative to t) of time points which are relevant to the output value. These are just the ones explicitly mentioned in the equation (t, (t-2.2), and {← [=? 1 φ(u)] (t-2.2)}), plus -∞. Note that we have reduced the problem from reasoning about ∀t and ∃ to the much easier problem of propositional reasoning about a finite number of time points. It is an open question whether our system will need to reason about any other times than these for simplification; however, we have not yet come across any examples which indicate that it will.
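The time-based simplifier shown above can be rendered as a small standalone transformation. The sketch below is ours, not the paper's code; the prefix encoding and the name MOST-RECENT for the ← operator are assumptions, and the predicate is assumed to take a single time argument.

    ;; Rewrite (- (most-recent (p (- u y)) t) y)
    ;;      as (most-recent (p u) (- t y)),
    ;; i.e., the paper's  {<- [P(u-y)] t} - y  =>  {<- [P(u)] (t-y)}.
    (defun shift-most-recent (expr)
      (if (and (consp expr) (eq (first expr) '-)
               (consp (second expr))
               (eq (first (second expr)) 'most-recent))
          (let* ((mr   (second expr))   ; (most-recent (p (- u y)) t)
                 (y    (third expr))    ; the delay being subtracted
                 (pred (second mr))     ; (p (- u y))
                 (ref  (third mr))      ; the reference time t
                 (arg  (second pred)))  ; (- u y), if it has that shape
            (if (and (consp arg) (eq (first arg) '-)
                     (equal (third arg) y))
                (list 'most-recent
                      (list (first pred) (second arg)) ; P applied at u
                      (list '- ref y))                 ; relative to t - y
                expr))
          expr))

Applied to (- (most-recent (p (- u 1.7)) t) 1.7), this returns (MOST-RECENT (P U) (- T 1.7)), exactly the rewriting used to collapse the substituted example above.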
We have implemented a propositional reasoner to support reasoning about the truth of predicates at time points. This is needed in order to incorporate some types of background knowledge, such as "if a value is known to be 1 at a time, it is not also 0 at that time;" to handle certain kinds of simplifying assumptions [Feldman and Rich, 1986]; and to support simplifications based on logical conditions implied by the context of an expression, for example, predicates which are true due to nesting within a conditional.

In the example, (=? 1 φ({← [=? 1 φ(u)] (t-2.2)})) can not be simplified to TRUE, because it could be that φ has never been 1 prior to t - 2.2. Frequently, however, we wish to consider only the normal-case behavior of the circuit, in which φ will have been 1 prior to t - 2.2 for any t under consideration. We can communicate this simplifying assumption to our system by the axiom

    (∀t > -∞) . {← [=? 1 φ(u)] t} > -∞

The system reduces this axiom from a universal quantification to a proposition for each time point in the equation and concludes, through propositional reasoning, that the predicate is true. With the other simplifiers, this produces

    y(t) = a({← [=? 1 φ(u)] (t-2.2)})

as the simplified equation for the example. Converting this to abstract events, the system produces

    (Event-cell inputs: (a φ)
      event: ((cv (a φ) ep (=? 1 φ))
              (schedule astate at 0.0 equal-to a))
      event: ((cv (astate) ep T)
              (schedule y at 2.2 equal-to astate)))

The system then converts this to the code in Section 2.
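The reduction of the universally quantified axiom to finitely many propositions depends only on being able to enumerate the time expressions appearing explicitly in an equation. Here is a minimal sketch of that enumeration (ours, not the system's; the encoding and names are assumptions, and a real implementation would additionally filter out expressions over bound variables such as the u inside MOST-RECENT):

    (defun relevant-time-points (equation)
      "Collect the explicit time expressions in EQUATION: the symbol T
    itself and any subexpression headed by - or MOST-RECENT."
      (let ((points '()))
        (labels ((walk (e)
                   (cond ((eq e 't) (pushnew e points :test #'equal))
                         ((atom e))
                         ((member (first e) '(- most-recent))
                          (pushnew e points :test #'equal)
                          (mapc #'walk (rest e)))
                         (t (mapc #'walk (rest e))))))
          (walk equation)
          points)))

    ;; For the simplified example, encoded as
    ;;   (a (most-recent ((=? 1 phi) u) (- t 2.2))),
    ;; this yields T, (- T 2.2), and the MOST-RECENT expression itself --
    ;; the three explicit points named in the text (with -infinity
    ;; handled separately). Instantiating an axiom at each point then
    ;; yields the plain propositions handed to the reasoner:
    (defun instantiate-axiom (axiom-fn equation)
      (mapcar axiom-fn (relevant-time-points equation)))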
V. Conclusions and Future Work

We have explained our multiple representation approach to understanding the time behavior of digital circuits. To our knowledge, this is the first system to accept program code for the functional models of the circuit components, together with their structural connections, and produce the program code for the circuit model as a whole.

• The equation-based representation makes easier several forms of reasoning about the time behavior of digital circuits. Locality of reference is the crucial property of this representation.
• The code-based representation makes simulation efficient.
• The abstract event-based representation partitions the translation problem between code and equations.
• Unknown forms in the program code are treated as black-boxes according to the semantics of pure LISP.
• The system's local representations support efficient pattern-action simplification.
• The finitely-many relevant time points are made explicit, allowing propositional reasoning for time-based simplification.

This work is preliminary and represents only a first step toward our goal. Currently, FUNSTRUX is limited in the class of circuits to which it can be applied. The restrictions are (1) busses can connect only to blocks (not to each other), (2) busses change state only when driven by a block, and (3) zero-delay loops are disallowed (in particular, this disallows zero-delay bidirectional elements).

Here are a few issues for further research.

• There are several interesting reasoning tasks in the realm of digital circuits to which we hope to extend our representation scheme. Some examples: design optimization [Steinberg and Mitchell, 1984], troubleshooting [Davis and Shrobe, 1983], testing [Shirley, 1986], and learning about design [Hall, 1986]. Each of these tasks requires its own representations.
• There are several ways the system could be improved: it currently does not find closed forms for the recursion equations which result from feedback; it could allocate state objects for simulation better than it currently does; it could recognize low-level implementations of higher level functions, such as integer +.
• The code which is output by FUNSTRUX is not organized for readability.
• It may be useful to incorporate work on pattern-action simplification of VLSI structure [Lathrop and Kirk, 1986].
• What constraints must be met by a particular simulator in order that the abstract event representation be able to capture an equivalent meaning?

Acknowledgments

The authors would like to acknowledge helpful discussions with Mark Alexander, Walter Hamscher, Chuck Rich, Ron Rivest, Brian Williams, and Patrick Winston. Personal support for the first author was furnished by an NSF Graduate Fellowship. Personal support for the second author was furnished by an IBM Graduate Fellowship, and during the early stages of this research by an NSF Graduate Fellowship. This paper was prepared jointly at the Gould Semiconductors CAD Research Laboratory and at the MIT Artificial Intelligence Laboratory. Support for the MIT Artificial Intelligence Laboratory's research is provided in part by the Office of Naval Research under contract N00014-80-C-0505.

References

[Alexander, 1986] Mark Alexander. A spatial reasoning approach to cell layout generation. In Proceedings of the IEEE 1986 Custom Integrated Circuits Conference (CICC-86), IEEE, May 1986.
[Amblard et al., 1985] P. Amblard, P. Caspi, and N. Halbwachs. Describing and reasoning about circuits behavior by means of time functions. In Proceedings of the 7th International Symposium on Computer Hardware Description Languages and their Applications, IFIP, 1985.
[AMD, 1985] AMD. Bipolar Microprocessor Logic and Interface, AM2900 Family Databook. Advanced Micro Devices, 1985.
[Darlington, 1981] J. Darlington. An experimental program transformation and synthesis system. Artificial Intelligence, 16, 1981.
[Davis, 1983] Randall Davis. Diagnosis via causal reasoning: paths of interaction and the locality principle. In Proceedings of the Third National Conference on Artificial Intelligence (AAAI-83), AAAI, 1983.
[Davis and Shrobe, 1983] Randall Davis and Howard Shrobe. Representing structure and behavior of digital hardware. Computer, 16(10), October 1983.
[Feldman and Rich, 1986] Yishai Feldman and Charles Rich. Reasoning with simplifying assumptions: a methodology and example. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), AAAI, 1986.
[Hall, 1986] Robert Joseph Hall.
Learning by failing to explain. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), AAAI, 1986.
[Hall, 1987] Robert J. Hall. A fully abstract denotational semantics for event-based simulation. In Proceedings of the Fifteenth Conference on Applied Simulation and Modelling, IASTED, 1987.
[Kelly and Steinberg, 1982] Van E. Kelly and Louis Steinberg. The CRITTER system: analyzing digital circuits by propagating behaviors and specifications. In Proceedings of the Second National Conference on Artificial Intelligence (AAAI-82), AAAI, 1982.
[Kirk et al., 1987] Robert S. Kirk, Robert J. Hall, and Richard H. Lathrop. SCORE cell development environment. In Proceedings of the IEEE Custom Integrated Circuits Conference (CICC-87), IEEE, 1987.
[Lathrop and Kirk, 1985] Richard H. Lathrop and Robert S. Kirk. An extensible object-oriented mixed-mode functional simulation system. In Proceedings of the 22nd Design Automation Conference, IEEE, 1985.
[Lathrop and Kirk, 1986] Richard H. Lathrop and Robert S. Kirk. A system which uses examples to learn VLSI structure manipulation. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), AAAI, 1986.
[Lathrop et al., 1987] Richard H. Lathrop, Robert J. Hall, and Robert S. Kirk. Functional abstraction from structure in VLSI simulation models. In Proceedings of the 24th Design Automation Conference, IEEE, 1987.
[Meinen, 1979] P. Meinen. Formal semantic description of register transfer language elements and mechanized simulator construction. In Proceedings of the 4th IEEE International Symposium on Computer Hardware Description Languages, IEEE, 1979.
[Rich, 1985] Charles Rich. The layered architecture of a system for reasoning about programs. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 1985.
[Schwartz et al., 1983] R. L. Schwartz, P. M. Melliar-Smith, F. H. Vogt, and D. A. Plaisted. An Interval Logic for Higher-Level Temporal Reasoning. Contractors Report: Contract Number NAS1-17067, National Aeronautics and Space Administration, 1983.
[Shirley, 1986] Mark H. Shirley. Generating tests by exploiting designed behavior. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), AAAI, 1986.
[Steinberg and Mitchell, 1984] Louis I. Steinberg and Tom M. Mitchell. A knowledge based approach to VLSI CAD: the REDESIGN system. In Proceedings of the 21st Design Automation Conference, IEEE, 1984.
Assistant Professor
Department of Civil Engineering
Stanford University
Stanford, CA 94305

Expert Technologies, Inc.
Pittsburgh, PA 15213

Abstract

Database management systems (DBMSs) are important components of existing integrated computer-aided engineering (CAE) systems. Expert systems (ESs) are being applied to a broad range of engineering problems. However, most of the prototype expert system applications have been restricted to limited amounts of data and have no facility for sophisticated data management. KADBASE is a flexible, knowledge-based interface in which multiple expert systems and multiple databases can communicate as independent, self-descriptive components within an integrated, distributed engineering computing environment.

1. Introduction

Integrated engineering computing systems have evolved into sets of algorithmic programs that revolve around a central database management system (DBMS). The DBMS frees the application subsystems from the details of managing data storage and retrieval while providing a common pool of information that cooperating subsystems can share. Now the character of integrated engineering systems is changing. Knowledge-based programming techniques, specifically expert systems (ES), are being applied to a broad range of engineering problems. However, most of the prototype expert system applications have been restricted to limited amounts of data and have no facility for sophisticated data management. As expert systems are integrated into engineering computing environments, the data management capabilities of the integrated systems must be adapted to serve these new components. Likewise, expert systems must evolve to incorporate capabilities to access large shared databases.

Integrated systems are built from a combination of individual programs (both algorithmic and knowledge-based). Each of these has its own data structures, databases and information models. Integrating these disparate data models into a single central common database is complex and, in many cases, unrealistic or undesirable. An alternative approach is to develop an integrated system which recognizes this disparity and is designed to deal with multiple databases. Such a collection of databases is likely to be quite heterogeneous, i.e., a variety of DBMSs with different data models, varying implementations, and operating on different hardware. Several systems have been proposed to support networks of heterogeneous database management systems [Adiba 78, Cardenas 80, Smith 81, Jakobson 86]. These DBMS networks provide users with access to multiple databases on a computer network while hiding the details of network communications and allowing the users (or application programs) to treat data within the contexts of their own data representations. These techniques can be applied to the design of an engineering expert system-database interface by adapting the model to accommodate the data access needs of expert systems and extending the data representation capabilities to account for the complexities of engineering data.

KADBASE [Howard 86a]¹ is a prototype of a flexible, knowledge-based interface in which multiple expert systems, or, more generally, knowledge-based systems (KBSs), and multiple databases can communicate as independent, self-descriptive components within an integrated, distributed engineering computing system. The interface takes a data request from an expert system, performs the indicated operations using the available DBMSs, and returns a reply to the expert system.
Each expert system and database is linked only to the interface; therefore, new ESs and DBMSs can be added to the integrated environment with ease. KADBASE can be generalized to serve all the components of the engineering application, both algorithmic and non-algorithmic, providing the basis for a large-scale integrated engineering environment composed of diverse software systems running on heterogeneous hardware.

II.

KADBASE is a prototype distributed network database interface between database management systems and knowledge-based system components of an integrated CAE system. The interface processor responds to data requests by using the declarative knowledge about the data spaces of the components being interfaced in conjunction with its own general knowledge about processing requests and interpreting the components' data descriptions. Because the information required for reasoning about each component is represented separately as descriptive knowledge, the interface is more flexible than purely algorithmic linkages in which the descriptive information is embedded in the processing instructions. Furthermore, each expert system and database is linked only to the interface; therefore, new ESs and DBMSs can be added to the integrated environment with ease.

¹The issues and motivation behind the development of KADBASE have been explored in earlier papers [Rehak 85, Howard 85, Howard 86b].

The multiple DBMS networks mentioned earlier provide a conceptual model for the organization of KADBASE. The information contained in the schemata of the individual engineering databases is integrated into a single global schema that is based on a semantic database model. A request for data issued by a knowledge-based system (KBS) is translated (mapped) from the data manipulation language (syntax) and data structure (semantics) of the requesting component into a global syntax referencing the global schema. The mapping process is formally divided into two separate processes: first, a syntactic translation from the KBS data manipulation language to the global data manipulation language; and second, a semantic translation from the KBS data structure to the global data structure. After the request is mapped, the interface processor identifies a set of target databases that contain information required to answer the query and generates subqueries to those databases to gather that information. Each subquery to a target database is translated to the specific database syntax and semantics, and the corresponding database manager is invoked to process the resultant subquery. Inverse mappings return the results to the requesting component.

KADBASE is divided into three basic components as described below.

• The Knowledge-Based System Interface (KBSI) is a part of each knowledge-based system. It formulates the queries and updates sent to the network data access manager and processes the replies from the network data access manager. The KBSI possesses knowledge about the schema of the KBS context (data space) and uses that knowledge to perform semantic (and syntactic) translations for the KBS.

• The Knowledge-Based Database Interface (KBDBI) is a part of each "standard" DBMS. It accepts queries and updates from the network data access manager and returns the appropriate replies. Like the KBSI, the KBDBI possesses knowledge about the local database schema and the local language for data manipulation requests.
It uses that knowledge to perform semantic and syntactic translations for the local DBMS.

• The Network Data Access Manager (NDAM) provides the actual interface. It receives requests (queries and updates) expressed in terms of the global schema from the knowledge-based systems (through their KBSIs). Using information associated with the global schema, the NDAM locates sources for the data referenced in a request and decomposes each request into a set of subqueries or updates to the individual target databases. The subrequests are sent to the corresponding knowledge-based database interfaces (KBDBIs) for processing. The replies from the KBDBIs are combined to form a single reply to the original request and sent to the requesting application through its KBSI.

In KADBASE, components are organized into knowledge-based systems, with knowledge grouped into knowledge modules (KMs) (processing knowledge about particular subproblems) and knowledge sources (KSs) (passive, descriptive information about the knowledge-based component). KMs typically perform control and translation tasks, while KSs are used to represent schema descriptions. Throughout KADBASE, a frame data model is used to represent various types of knowledge, including schema definitions, syntactic translation procedures, queries and replies. As used in KADBASE, the frame data model uses frames to represent objects. A frame consists of slots that contain values describing the object. Slots may be attributes, which contain simple descriptive values, or relationships, which serve to link the object to another type of object to provide inheritance. In the remainder of this paper, frame names are typeset in bold face, and slot names are typeset in SMALL CAPITALS. The remainder of this section presents an overview of the knowledge representation and translation processing in KADBASE. The first subsection discusses the syntactic translation process. The next two subsections describe the organization of the schema description knowledge and the semantic translation process.

A. Syntactic Translation

The syntactic translation in KADBASE is the transformation of requests (queries and updates) between the external data manipulation languages (e.g., QUEL, SQL) and the internal KADBASE request representation. In KADBASE, the local system (KBSI or KBDBI) is responsible for performing the syntactic translation. The syntactic translation is dependent only on the local data manipulation language; the same syntactic processors may be used for multiple applications of the same database management system or expert system building tool. Each syntactic processor maps requests between two fixed languages: the component data manipulation language and the KADBASE internal representation. Since the translation task does not vary with the applications, the syntactic processor may be implemented as a special-purpose program using an algorithmic approach.

Internally, KADBASE uses networks of frames to represent requests. The organization of the request frame representation serves as the global data manipulation language, paralleling the global schema. The precise character of the internal request representation is not important to the discussion of the remainder of the translation process and is therefore omitted from this paper.

B. Schema Description Knowledge

KADBASE integrates the components' data spaces through a global schema based on a frame data model. The basic unit of the frame model used in KADBASE is the entity, represented by a frame.
An entity is "any distinguishable object -- where the 'object' in question may be as concrete or as abstract as we please" [Date 83]. The schema description knowledge required by the interface is partitioned into three levels: the component's local data representation expressed in the component's own data model (hierarchical, network, relational, frame-based, object-oriented, etc.); the component's local data representation expressed in terms of the global frame model; and the global schema expressed in terms of the global frame model. This information is represented as knowledge sources within the knowledge-based components of KADBASE. The three types of schema description knowledge sources are described more fully below:

• The Local Schema (LS) describes the organization of a component's local data structure in terms of the local data model. The character of the local schema is highly dependent on the type of database management system (DBMS) or KBS being described. In a relational DBMS, the local schema consists of the definitions of the relations and their attributes in the database. In an object-oriented expert system, the local schema consists of the definitions of the hierarchy of object classes in the context.

• The Local Frame-Based Schema (LFBS) represents the local schema in the semantics of the frame data model. The LFBS should be a fully updatable view of the underlying data structure expressed in the local terminology (the names for entities and attributes). The organization of the LFBS may differ from that of the local schema because the LFBS may group slots from several underlying data structures (relations, frames, etc.) into a single entity if those data structures have the same primary or candidate keys. The LFBS should also contain or be capable of referencing all the information in the local schema with respect to constraints, key attributes, and domain properties (data types, ranges, dimensions, etc.). The LFBS may contain information about semantic relationships between entities not found in the underlying data models; thus, the LFBS may be used to provide an enhanced semantic data model when the local model lacks the capabilities to express relationships between entities.

• The Global Schema (GS) represents the common data space shared by all components in the integrated system. In effect, it is the union of the frames and slots in the LFBSs. Since LFBSs may differ with respect to terminology (the names for common frames and slots) and slot domains (data types and dimensions), the establishment of the global schema involves the selection of a single set of global names and domains. This selection is performed by a global schema administrator, who is responsible for the consistency and completeness of the global schema.

The three schema description knowledge sources serve to represent a specific data structure, each in terms of a specific data model. They do not contain knowledge about how to relate that data structure to the other schema representations, i.e., how to map between schemata (e.g., LS to LFBS and LFBS to GS). That schema mapping knowledge is contained in two additional knowledge sources:

• The Local Frame-Based Mapping (LFBM) records how the LFBS groups slots from several underlying LS data structures (relations, frames, etc.) into a single entity when those data structures have the same primary or candidate keys. It contains the information relating each slot in the LFBS with its counterparts in the LS.
• The Local Integration Mapping (LIM) contains mapping information necessary to integrate the LFBS into the global schema (GS). The LIM is necessary because the LFBS can differ from the global schema in two ways: in the names for entities and slots, and in the domains of the attribute values. The LIM represents the former with a set of terminology (name) mappings for the entity and slot names, and the latter with domain mappings for attribute values (tables or functions that map local values into the corresponding global values when the attribute domains are different). Domain mappings are required whenever the local component represents the value of an attribute in different terms than the global schema. For instance, the global schema may represent the value of an enumerated type with a descriptive string (e.g., "3000 psi concrete") while a database may store that value as an integer code (e.g., 34).

Two additional mapping knowledge sources are required at the global level.

• The Global Data Source Map relates each slot (data item) in the global schema to the list of databases and KBS contexts in which it may be found.

• The second global knowledge source contains constraints and functions relating the slots in the global schema. The mappings represent mathematical and logical interrelationships between attributes found in different databases and KBS contexts. A constraint such as "AREA = WIDTH * BREADTH" may represent a mapping between one database that describes rectangular entities by area alone and another that uses width and breadth instead.

In the KADBASE prototype, the integration of the local frame-based schemata into the global schema is performed manually. The task of defining a global view and relating it to each local view of data requires substantial domain knowledge and intelligent reasoning. The implementation of an intelligent schema integrator was not attempted in this project. Therefore, KADBASE requires that a global database administrator standardize the terminology for the global entities and slots, select global slot domain properties (data types and units), and define global relationships and constraints.

C. Semantic Translation

The semantic translation of requests (queries and updates) in KADBASE is independent of the types of components involved. Semantic processing is based on the information provided in the schema description and mapping knowledge sources described in the previous section. Therefore, the semantic translation is performed by an application-independent knowledge module that may be invoked by any component. The semantic translation process can be divided into two steps: local schema (LS) to local frame-based schema (LFBS), and local frame-based schema to global schema (GS). The first step involves only local information (LS, LFBS, LFBM) and, therefore, should be performed by the knowledge-based system interface or knowledge-based database interface using the application-independent semantic translation module. The second step involves both local and global information and may be implemented either locally or globally.²

²In the prototype implementation, LFBS to GS translation is performed at the local level.

In addition to the two steps described above, the semantic translation process can be divided into two types (requests and replies) and two directions (local to global and global to local). For convenience, these divisions can be grouped according to the flow of requests and replies through the system as follows (a small sketch of a domain mapping follows this list):

• Request Translation:
  - LS to LFBS (KBS) -- The only differences between the LS and LFBS that affect requests are organizational differences; i.e., the attributes of the entities may be distributed in separate data structures in the ES (e.g., the single entity beam may be represented by two objects in the context: beam-location and beam-type). These organizational differences are represented by knowledge in the LFBM. Semantic translation at this level consists of changing references to entities and slots from their designations in the LS to their designations in the LFBS, and removing clauses from the qualifier that denote links between local data structures that are represented as a single entity in the LFBS.
  - LFBS to GS -- The semantic mappings between the LFBS and the GS involve name changes for the entities and slots as well as domain mappings for the slot values. These mappings are represented in the LIM. The name changes in the LFBS to GS translation are trivial, one-to-one mappings. These name changes are required wherever a slot or entity reference appears in the request. Domain mapping may involve data type conversions (e.g., real-to-integer), unit conversions (e.g., inches-to-feet), tabular mappings (representing one-to-one correspondences between local and global domain values) or functional mappings (non-one-to-one correspondences). Data type and unit conversions are implemented by replacing the slot reference with a mathematical expression representing the mapped value (e.g., a rounding expression for real-to-integer, "length/12" for inches-to-feet, etc.). Tabular and functional domain mappings are implemented only for qualifier expressions of the form "<slot with domain mapping> <comparison operator> <value>"; in those expressions, the "<value>" is mapped from the local domain into the global domain (e.g., "color = 6" is mapped to the corresponding global value).
  - GS to LFBS -- The semantic translation of domain mappings from the GS to the LFBS is basically the same as the LFBS to GS mapping described above, only implemented in reverse.
  - LFBS to LS -- The only differences between the LS and LFBS that affect requests are organizational differences, as described previously. Once again, these organizational differences are represented by knowledge in the LFBM. The semantic translation at this level consists of changing references to entities and slots from their designations in the LFBS to correspond to their designations in the LS and adding clauses to the qualifier to denote links between local data structures that are represented as a single entity in the LFBS.

• Reply Translation (now the object of the translation is a set of data):
  - LS to LFBS -- Since the mapping between the LS and the LFBS is purely one of organization and naming, no reply translation is required.
  - LFBS to GS -- For replies, only domain mappings are required in the translation between the LFBS and the GS. The types of domain mappings are the same as for request mappings (data type, unit, tabular, and functional mappings), but for replies the conversions are applied directly to the data being returned.
  - GS to LFBS -- As above, the required translations between the GS and LFBS consist of domain mappings applied directly to the data.
  - LFBS to LS (KBS) -- Since the only differences between the LFBS and the LS are in terms of attribute grouping, no semantic translation of replies is required at this level.
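As an illustration of these domain mappings, here is a small Common Lisp sketch. It is our own (the prototype itself was written in Franz Lisp), and the slot names, code values, and function names are invented, apart from the "3000 psi concrete"/34 example quoted earlier.

    ;; Unit conversion: replace the slot reference by an expression.
    (defun map-unit-slot (slot-ref)
      "Inches-to-feet conversion for a LENGTH slot reference."
      (if (eq slot-ref 'length)
          '(/ length 12)        ; the "length/12" rewriting in the text
          slot-ref))

    ;; Tabular mapping: a one-to-one table from local codes to global
    ;; values, applied to the <value> side of "<slot> <op> <value>".
    (defparameter *concrete-grade-table*
      '((34 . "3000 psi concrete")      ; the example quoted in the text
        (35 . "4000 psi concrete")))    ; an invented second entry

    (defun map-qualifier (qualifier)
      "Map (slot op value) from the local domain into the global domain."
      (destructuring-bind (slot op value) qualifier
        (let ((global (cdr (assoc value *concrete-grade-table*))))
          (if (and (eq slot 'concrete-grade) global)
              (list slot op global)
              qualifier))))

For example, (map-qualifier '(concrete-grade = 34)) returns (CONCRETE-GRADE = "3000 psi concrete"), the kind of value-side rewriting applied to qualifier expressions.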
KADBASE provides the mechanism to develop a distributed, integrated engineering computing system composed of the components described above. Communication between components is isolated in a communications module associated with each component. This module hides the physical message passing mechanism. Thus, the distributed nature of the KADBASE architecture is hidden from the user and applications (databases and knowledge-based systems may co-exist on a single machine or on multiple, heterogeneous machines). In a distributed environment, each component may be implemented as a separate process. In that case, the NDAM and KBDBIs function as servers responding to incoming requests.

The KADBASE prototype has been tested in conjunction with two knowledge-based structural engineering application systems:

• SPEX (Standards Processing Expert) [Garrett 86] is a knowledge-based, structural component design system developed by James Garrett. SPEX uses KADBASE to provide access to a database of standard structural steel members for use in its component design process.

• HICOST is a knowledge-based cost estimator for detailed building designs, developed to demonstrate KADBASE. HICOST uses KADBASE to access multiple databases (a building design database, a project management database, and a library database of unit costs).

KADBASE and its demonstration applications are implemented in a distributed computing environment consisting of a DEC VAX 11/750 and several MicroVAXs using the Mach operating system and linked by Ethernet. Franz Lisp [Foderaro 82] is the principal programming language used in implementation. The KADBASE sample databases are supported by the INGRES database management system [Stonebraker 76].

References

[Adiba 78] Adiba, M., and Portal, D., "A Cooperation System for Heterogeneous Data Base Management Systems," Information Systems, Vol. 3, No. 3, pp. 209-215, 1978.
[Cardenas 80] Cardenas, A., and Pirahesh, M. H., "Data Base Communication in a Heterogenous Data Base Management System Network," Information Systems, Vol. 5, No. 1, pp. 55-79, 1980.
[Date 83] Date, C. J., An Introduction to Database Systems, Vol. II, The System Programming Series, Addison-Wesley Publishing Co., Reading, Massachusetts, 1983.
[Foderaro 82] Foderaro, J. K., and Sklower, K. L., The FRANZ LISP Manual, University of California at Berkeley, 1982.
[Garrett 86] Garrett, J. H., SPEX -- A Knowledge-Based Standards Processor for Structural Component Design, unpublished Ph.D. Dissertation, Department of Civil Engineering, Carnegie-Mellon University, Pittsburgh, PA, September 1986.
[Howard 85] Howard, H. C., and Rehak, D. R., "Knowledge Based Database Management for Expert Systems," ACM SIGART Newsletter, Special Interest Group on Artificial Intelligence, Association for Computing Machinery, Spring 1985.
[Howard 86a] Howard, H. C., and Rehak, D. R., Interfacing Databases and Knowledge Based Systems for Structural Engineering Applications, Technical Report EDRC-12-06-86, Engineering Design Research Center, Carnegie-Mellon University, Pittsburgh, PA, November 1986.
[Howard 86b] Howard, H. C., and Rehak, D. R., "Expert Systems and CAD Databases," Knowledge Engineering and Computer Modelling in CAD, CAD86: Seventh International Conference on the Computer as a Design Tool, London, pp. 236-248, September 1986.
[Jakobson 86] Jakobson, G., Lafond, C., Nyberg, E., and Piatetsky-Shapiro, G., "An Intelligent Database Assistant," IEEE Expert, Vol. 1, No. 2, pp. 65-78, Summer 1986.
[Rehak 85] Rehak, D. R., and Howard, H. C., "Interfacing Expert Systems with Design Databases in Integrated CAD Systems," Computer-Aided Design, November 1985.
[Smith 81] Smith, J. M., et al., "Multibase -- Integrating Heterogenous Distributed Database Systems," AFIPS Conference Proceedings, Vol. 50, pp. 487-499, 1981.
[Stonebraker 76] Stonebraker, M., Wong, E., and Kreps, P., "The Design and Implementation of INGRES," ACM Transactions on Database Systems, Vol. 1, No. 3, pp. 189-222, September 1976.
Artificial Intelligence Department
Honeywell Corporate Systems Development Division
1000 Boone Avenue North
Golden Valley, Minnesota 55427

A system for continuously providing advice about the operation of some other device or process, rather than just problem diagnoses, must not only function in real time, but also cope with dynamic problem courses. The reasoning technique underlying such a system must not assume that faults have single causes, that queries to the user will be answered and advice to the user will be followed, nor that aspects of a problem, once resolved, will not reoccur. This paper presents a reasoning technique that can be used in conjunction with an inference engine to model the state of a problem situation throughout the entire problem-handling process, from discovery to final resolution. The technique has been implemented and installed on-line in a factory control room, as part of a real time expert system for advising the operators of a manufacturing process.

There are many potential practical applications for a reasoning technique enabling a knowledge-based system to provide continuous "coaching" to the operators of a complex device or process. For example, manufacturing operations might benefit from advisory systems providing operators with continuous assistance in monitoring and troubleshooting process behavior; similarly, computer installations might provide better service by using expert systems to assist operators in managing the systems' performance. However, the goal of continuously providing a user with operational advice, as well as problem diagnoses, makes unique demands on the reasoning technique to be employed. Such an advisory system must not only function in real time, but also cope with dynamic situations and unpredictable interactions with the user.

The goal of a real time expert advisory system is not only to monitor the target system to detect, diagnose, and suggest a remedy for problems, but also to continue to advise the operator on further actions to take as the problem resolves. Functioning in a dynamic situation requires the ability to revoke or update remedial advice if the corresponding problem resolves of its own accord, or if the remedy is no longer appropriate to the situation. The advisory system also should not rely on assumptions that problems have single causes, or that individual aspects of a problem situation, once resolved, will not reoccur.

The ability for an expert advisory system to function interactively with an operator is required, even if the system matures to the point people are willing to "close the loop" and allow it to exert control over the target system. This is because, in most applications, there will always be some actions that cannot be performed without human intervention (e.g., replacing broken parts, operating manual valves, etc.). Thus, the reasoning technique used by such systems must be able to cope with the unpredictability of operator behavior. The system cannot be based on assumptions that the operator will always approve and comply with recommended actions, respond to queries for information not obtainable through instrumentation, or even be available at the time advice is issued. In many application environments, it is also important that the advisory system not interact with the operator unnecessarily.
This paper presents a reasoning technique we have found suitable for providing the problem-monitoring and the advice-giving functions of a real time, interactive expert advisory system meeting the above requirements.

In related research, Griesmer and others [Griesmer et al., 1984; Kastner et al., 1986] discuss a real time expert system for assisting in the management of a computer installation using the MVS operating system. They describe features added to a forward-chaining inference engine to handle the initiation of actions at the appropriate times, to manage communications among system components, and to exert controls that prevent sequences of remedial actions from being interrupted. However, they do not present methods for interrupting, retracting, or revising advice when it is appropriate to do so, nor for coordinating the treatment of multiple faults arising in the same episode.

A method for reasoning about multiple faults is presented by [deKleer and Williams, 1986]. Their research addresses the problems of coping with very large search spaces comprised of combinations of failed components, and performing diagnostic reasoning from a model of the structure and function of the target system, in a static data environment. Our work focuses on managing diagnostic and remedial efforts over time, in a dynamic environment.

Nelson [Nelson, 1982] has utilized a "response tree" technique as part of an expert system to dynamically select among the possible responses that operators of a nuclear reactor might take in a failure situation. However, the main goal of this approach is to efficiently encode and utilize "precompiled" knowledge about responses that will lead to a safe system shutdown. Our work has been in less critical application domains, and is directed toward methods to help operators keep a system functioning.

The technique we present serves as an adjunct to the inference engine of an expert advisory system, in a similar manner as various Truth Maintenance Systems (TMS, ATMS, etc.) can serve as an adjunct to a deductive reasoner. The latter systems (e.g., [deKleer, 1984]) are used for problems in which the assertions given to the system are relatively unchanging; the elements of the problem space are inferences based on the givens plus additional assumptions which may later be found incorrect. Thus, dependencies of inferences on assumptions are recorded, so that when a contradiction is discovered, the inferences based on the contradictory set of assumptions are readily identified, and may be explicitly or implicitly "undone." Our technique is appropriate for problems in which the assertions (data) given to the system change frequently; the elements of the problem space are states of affairs that are causally related, but which may or may not hold given the next round of data. Thus, dependencies of the current problem state on the state of antecedent causes are recorded, so that when the status of a cause changes, the effect on the overall course of the problem episode is readily identified.

II. Problem Analysis

Consideration of the type of behavior desired from an expert advisory system leads to several conclusions about the required features of the reasoning technique and knowledge representation to be used.
Because the reasoning is to be performed in real time, and is to be about the status of a dynamic target system, the reasoning approach must utilize some form of multi-valued logic. At a minimum, logic predicates in the system must be permitted to take on an "unknown" value, as well as true/false, whenever the data involved is too obsolete to be considered a valid descriptor of the corresponding aspect of the target system. Likewise, the advisory system cannot halt and await a response from the user when the value of a non-instrumented variable is required; hence the reasoning approach must be able to proceed while some data values are unknown.

The reasoning technique must also be nonmonotonic, both in what Ginsberg terms the "truth value" (t-nonmonotonic) and "knowledge" (k-nonmonotonic) sense [Ginsberg, 1986]. The world in which an expert advisory system functions is t-nonmonotonic, in that the truth value of conclusions changes over time. For example, problem situations can spontaneously resolve (e.g., if a stuck valve frees itself), default assumptions can prove incorrect (e.g., a manual valve normally open may have been closed), or the operator of the system can resolve a problem independently of the advisory system. As a result, the reasoning technique must be able to correctly "back up" in the state of affairs concluded. The advisory system's world is also k-nonmonotonic, because the amount of information known for certain to the system decays over time, as the data on which it is based ages.

As a result, reasoning by an expert advisory system must be interruptable. The system cannot afford to suspend data scanning for an indefinite period of time until its inference engine reaches conclusions; data scanning and updates must occur regularly. Although data collection and inferencing can proceed in parallel machine processes, the inference engine must operate on a stable "snapshot" of data, in order to ensure that the data it is using, and hence its conclusions, are internally consistent. Thus, it must be possible to interrupt the reasoning process periodically to allow data updates to occur, then resume. Upon resumption, the reasoning process should not necessarily proceed to follow its prior reasoning paths, which may no longer be productive given the new data, nor can it "start over" each time it receives new data, lest it never reach useful conclusions at all, given the time slice it has available.

These considerations suggest that an effective reasoning approach for an advisory system is one based on a representation of the states a problem can attain during the problem-solving process. Transitions among these states should permit the system to proceed despite incomplete data whenever possible, and enable the system to handle "nonmonotonic progress" through the problem-solving states. (By nonmonotonic progress, we mean transitioning that returns to a previously visited state in the path from the start state to the final state in a problem episode.) Use of a representation of intermediate states in the problem-solving process makes the inference engine interruptable. The reasoning process can be suspended any time the representational structures are in an internally consistent condition. The problem-solving process will be responsive to data changes that occur during the problem-solving, since upon resumption, the next state transitions will be a function of the newly updated data.
In contrast, for example, if a backward-chaining inference engine is interrupted for a data update, and its state (goal stack) saved and restored, the "line of reasoning" that the inferencing will initially pursue is still a function of the goal stack alone. By defining the state transitions in a way that allows transitioning to occur in some parts of the problem despite unknown data values in other parts, the advisory system can proceed to offer some advice to the operator, even though it must await more data to draw conclusions about other aspects of the problem. As a practical matter, we have found that if the application domain involves problems in which the various potential contributors to a problem are weakly connected (that is, the cause-effect connections from problems to their potential, underlying causes form more of a tree structure than a lattice), the advisory system can use a strict, three-valued logic, and still generate useful advice while some desired data are unknown. Otherwise, it may be necessary to resort to a more complex logic approach, involving guesses and default values that are subject to later belief revision. Finally, by defining a state transition network that allows cyclic paths to be followed during a problem episode, the t-nonmonotonic nature of problem-solving in dynamic situations (e.g., the possibility that a subproblem will reoccur within a given overall problem episode) is represented.

III. Technique Used

The "problem status monitoring system" (PSMS) we have developed is for use in conjunction with an inference engine capable of detecting problem conditions and incrementally generating the search space of possible antecedent causes. These antecedent causes are the nodes of the search space; each node has associated with it a single state label from the set defined in PSMS (see below). We assume that the descendants of any given node in the search space, if found to be an actual cause of the current problem, must be remedied or otherwise rendered harmless before their ancestors can be remedied.

The PSMS approach is based on an augmented transition network, consisting of a set of state labels applied to each node of the search space as the problem-solving progresses, and lists attached as properties of each node. The lists are used to record the status of the problem-solving (and remedying) with respect to that node's descendants. Problem nodes transition from state to state depending upon data, the knowledge base of the advisory system, and the status of these property lists. In turn, state transitions are augmented by actions that update the properties of a node's ancestors in the search space. A node can be in one and only one state at any given time. The states, and their corresponding labels, are as follows:

nil: No problem-solving from this node has yet begun.

pending: The descendants of this node are under investigation, to be ruled in or out as actual causes of the current problem situation.

diagnosed: At least one of the descendants of this node has been confirmed as a cause of the current problem situation.

ready: All the descendants of this node that were confirmed as causes have been "fixed;" hence, the cause represented by this node is ready to be remedied.

no-remedy: One or more descendants of this node has been confirmed as a cause, but no remedy has been effective, and/or no remedy for the cause represented by this node is known.
resolved: The cause represented by this node has been remedied, or ruled out as a contributor to the current problem situation.

uncle: The cause represented by this node has been confirmed as a cause, but no remedy has been found; the advisory system cannot help the user with this aspect of the problem.

Four lists are attached as properties to each node of the problem space. These lists are the lists of "confirmed," "rejected," "fixed," and "can't-be-fixed" descendants of the node. If a node is confirmed as a contributing cause of the problem situation, it is entered on its parents' "confirmed" lists. (Note that a node may have more than one immediate parent in the problem space.) Conversely, if the node is rejected as a contributing cause, it is entered on its parents' "rejected" lists. Likewise, once a node is confirmed, if the cause it represents in the application domain is remedied, the node is entered on its parents' "fixed" lists. Alternatively, if the advisory system exhausts its supply of recommendations to the user, and the cause remains problematic, the corresponding node is entered on its parents' "can't-be-fixed" lists. The management of these lists obeys the following four constraints:

(1) Set-Union(Confirmed, Rejected) ⊆ {descendants}
(2) Set-Intersection(Confirmed, Rejected) = null
(3) Set-Union(Fixed, Cant-be-fixed) ⊆ {Confirmed}
(4) Set-Intersection(Fixed, Cant-be-fixed) = null

The test used to determine the state transition to be undergone by a node in the problem space involves both the advisory system's knowledge base and the status of these property lists. This transition test consists of a maximum of seven steps, as follows. (The letters in brackets [ ] correspond to the rows of the state transition table found in Table 1.)

1. The inference engine is called upon to determine whether the problem (cause) represented by the node has been remedied; if so [A], the node transitions to Resolved.
2. Otherwise, if new direct descendants of the node can be generated [B], they are added, and the node transitions to Pending.
3. Otherwise, if some of the node's descendants are not on either its Confirmed or Rejected lists [C], no transition is made (the jury is still out on some antecedent causes).
4. Otherwise, if the node's Confirmed list is empty, then if the knowledge base contains some remedial advice associated with this node [D], transition to Ready; else [E] transition to No-Remedy.
5. Otherwise, if not all members of the node's Confirmed list are on either its Fixed or Can't-be-fixed lists [F], the node is labeled Diagnosed (we've confirmed at least one cause, but we're still waiting for some antecedent cause to be remedied).
6. Otherwise, if the node's Can't-be-fixed list is not empty [G], and the node is not already labeled No-Remedy, transition to No-Remedy; else, transition to Uncle.
7. Otherwise, if the knowledge base contains some remedial advice associated with this node, transition to Ready [H]; else [I], if the node is not already labeled No-Remedy, transition to No-Remedy; otherwise, transition to Uncle.

By defining the state transition network to include a No-Remedy state as a "way-station" on the way to the Uncle state, a "hook" is provided allowing the advisory system to have a second chance at problem-solving before "giving up." This is useful if an initial attempt at problem solving without involving querying of the user is desirable, to avoid unnecessary interactions with the user.
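By way of illustration only, the seven-step test can be rendered directly as code. The following Common Lisp sketch is our own reconstruction (the fielded system was written in Zetalisp, and the names PSMS-NODE and TRANSITION are hypothetical); REMEDIED-P, GENERATE-DESCENDANTS, and ADVICE-AVAILABLE-P stand for calls into the inference engine and knowledge base:

    (defstruct psms-node
      (state 'nil*)  ; NIL* PENDING DIAGNOSED READY NO-REMEDY RESOLVED UNCLE
      descendants confirmed rejected fixed cant-be-fixed)

    (defun transition (node remedied-p generate-descendants advice-available-p)
      "Return the node's next state per the seven-step test;
    the letters name the rows of Table 1."
      (let ((state (psms-node-state node)))
        (cond
          ;; 1. [A] The represented cause has been remedied.
          ((funcall remedied-p node) 'resolved)
          ;; 2. [B] New direct descendants can be generated.
          ((let ((new (funcall generate-descendants node)))
             (when new
               (setf (psms-node-descendants node)
                     (append new (psms-node-descendants node)))
               t))
           'pending)
          ;; 3. [C] Some descendants not yet ruled in or out: no change.
          ((set-difference (psms-node-descendants node)
                           (union (psms-node-confirmed node)
                                  (psms-node-rejected node)))
           state)
          ;; 4. [D]/[E] No confirmed causes remain.
          ((null (psms-node-confirmed node))
           (if (funcall advice-available-p node) 'ready 'no-remedy))
          ;; 5. [F] Some confirmed cause is neither fixed nor given up on.
          ((set-difference (psms-node-confirmed node)
                           (union (psms-node-fixed node)
                                  (psms-node-cant-be-fixed node)))
           'diagnosed)
          ;; 6. [G] Some confirmed cause can't be fixed.
          ((psms-node-cant-be-fixed node)
           (if (eq state 'no-remedy) 'uncle 'no-remedy))
          ;; 7. [H]/[I] All confirmed causes fixed.
          ((funcall advice-available-p node) 'ready)
          ((eq state 'no-remedy) 'uncle)
          (t 'no-remedy))))

In this sketch, the parent-list bookkeeping attached to the transitions (entering the node on its parents' Confirmed, Rejected, Fixed, or Can't-be-fixed lists, as described next) would be performed by the caller whenever the returned state differs from the old one.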
(Specific ways of implementing this approach, and integrating PSMS with the rest of an expert advisory system, are beyond the scope of this paper.)

Table 1 summarizes the PSMS state-transition table. Entries in this table indicate the resulting state that a node assumes, based on its current state (column), and the result of the above test (row). The state transitions are augmented by actions to update the property lists of the node's parents. Whenever a node transitions from Pending to Resolved, it is entered on its parents' Rejected lists, as this corresponds to "ruling out" the associated cause as a culprit in the current problem. Whenever a node makes a transition from Pending to any other state except Resolved, it is entered on its parents' Confirmed lists, as it is now known to be a contributor to the problem situation. Similarly, a transition of a node to Resolved from any state (other than Pending) causes it to be entered on its parents' Fixed lists. Any transition to the No-Remedy state causes the node to be entered on its parents' Cant-be-fixed lists. The effect of these actions is to propagate findings about all causes of the problem situation, and readiness for remedial action, from the fringe to the root of the problem search space lattice. To the extent that this lattice is weakly interconnected, progress in problem-solving and advice-giving can proceed along one path from fringe to root, even while other paths are awaiting the results of further data collection and inferencing.

The transition from the Ready state back to itself (row H) is notable. It is here that the advisory system can issue additional advice to the operator regarding how to remedy the corresponding problem, since presumably any previously issued advice has been ineffective (else the transition in row A, to Resolved, would have occurred).

The ability of PSMS to support nonmonotonic progress in problem-resolution is based on row B of the state transition table. This row indicates that at any point in a problem episode, a node may transition "back" to the pending state. This transition is augmented as follows: When returning to the Pending state, the node is removed from its parents' property lists. If, as a result, a parent's Confirmed list becomes empty, that parent transitions to the Pending state, and the updating of property lists proceeds recursively toward the root of the problem lattice. Otherwise, the parent transitions to the Diagnosed state. Unlike the other state transitions in PSMS, this series of propagated transitions must be uninterrupted in order for the representation to be internally consistent. (Otherwise, for example, the parent might remain in a Diagnosed state even though none of its direct descendants are now Confirmed.) However, the propagation may be accomplished in Order(n log n) time, where n is the number of nodes in the problem lattice. Thus, this poses little difficulty for practical real time applications. Of course, if an upper bound for n in the application domain is known, an upper bound for an invocation of PSMS can be determined.

The reasoning technique of PSMS has a type of completeness property that is useful in advisory systems. Assuming that the inference engine it is used with employs a logically complete method for generating the search space and diagnosing individual causes, the PSMS approach assures that if advice to the user is needed and available in the knowledge base, the advice will be issued.
The reasoning technique of PSMS has a type of completeness property that is useful in advisory systems. Assuming that the inference engine it is used with employs a logically complete method for generating the search space and diagnosing individual causes, the PSMS approach assures that if advice to the user is needed and available in the knowledge base, the advice will be issued. Likewise, if no advice for the problem situation exists in the knowledge base, the user will be informed of that fact.

Table 1. PSMS State Transitions from Current State to New State

                                            Current State
Result of Transition Test*              Nil     Pend.   Diag.   Ready   NoRem.  Resol.  Uncle
Problem remedied [A]                    Nil     **      Resol.  Resol.  Resol.  Nil     Resol.
New direct descendant [B]               Resol.  **      Pend.   Pend.   Pend.   Pend.   Nil
Some desc. not conf. or rej. [C]        Pend.   Pend.   Diag.   **      **      **      **
Confirmed=nil, remedy exists [D]        Pend.   Ready   **      Ready   **      **      **
Confirmed=nil, no remedy known [E]      **      NoRem.  **      NoRem.  Uncle   **      **
A confirmed desc. not yet fixed [F]     **      Diag.   Diag.   **      **      **      **
Some conf. cause can't be fixed [G]     **      **      NoRem.  **      Uncle   **      **
Conf. desc. fixed, remedy exists [H]    **      **      Ready   Ready   **      **      **
Conf. desc. fixed, no remedy exists [I] **      **      NoRem.  NoRem.  Uncle   **      **

* For an interpretation of the row labels, see the text.
** Empty cells are unreachable state/condition combinations.

Justification for these claims follows from inspection of the state-transition network: PSMS will cause the advisory system to generate pertinent advice when it exists, so long as there is no path to the Uncle state for nodes that have advice associated with them except through the Ready state. Table 1 shows there is no path to Uncle except through No-Remedy, and while there are paths into the No-Remedy state from Pending, Diagnosed, and Ready, rows D and H of the table show that there is no path from nodes with advice associated with them to the No-Remedy state except through the Ready state. Similarly, as long as advice to the user is needed (i.e., a problem node has not entered the Resolved state), the node will not enter the Uncle state except through the No-Remedy state, at which point the user can be notified that the knowledge base contains no further pertinent advice for the problem.

A PSMS component has been included in a real-time expert advisory system we have implemented and installed in the control room of a factory of a major manufacturer of consumer products. The expert system is interfaced to the plant's process control computer and obtains on-line sensor data from the manufacturing process on a continuous basis. The expert system monitors these data, detects emerging problems with the manufacturing process, and advises the operator regarding actions to take to avoid or recover from them. It then continues to monitor the process, updating and/or retracting advice as the problem situation evolves. The expert system monitors and provides advice on four parallel manufacturing lines simultaneously.

The system is currently implemented in Zetalisp on a Symbolics computer. The operator interface, data collection component, and inference engine (with embedded PSMS component) run as separate processes, passing messages and data among them. The amount of process data being scanned by the system varies with the state of the manufacturing process; typically, 60-70 data points are being monitored at any given time. Within the inference engine process, the main tasks are emptying the input data buffer from the data collection component, monitoring the manufacturing process for emerging problems, and advancing the problem-solving process (including advancing each problem node through a state transition). On average, these tasks require 900, 477, and 530 milliseconds, respectively, for a total top-level inference engine cycle of about 2 seconds.
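The shape of that top-level cycle might be organized roughly as follows (again an illustrative Python sketch, not the Zetalisp original; the data_buffer and monitor interfaces, and the explicit pause discussed below, are assumptions):

import time

def inference_engine_cycle(data_buffer, monitor, problem_nodes, engine, kb,
                           pause_seconds=2.0):
    """One top-level inference engine cycle; the three tasks correspond to
    the roughly 900, 477, and 530 millisecond figures above."""
    readings = data_buffer.drain()      # empty the input data buffer
    monitor.scan(readings)              # watch for emerging problems
    for node in problem_nodes:          # advance the problem-solving process
        new_state = transition_test(node, engine, kb)
        if new_state is not None:
            node.state = new_state      # plus the parent-list updates above
    time.sleep(pause_seconds)           # deliberate pause so other processes
                                        # get ample time to run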
In the manufacturer's application domain, a typical problem search space (lattice) is 2 to 5 plies deep from detected problem to "ultimate" cause. Generating one ply per inference engine cycle, and allowing for the 2 to 3 transitions required for a node to reach the Ready state, the typical amount of processing from problem detection to the first advice to the operator is 4 to 8 inference engine cycles. Thus, if the inference engine had exclusive use of the machine, its "reaction time" to problems would be 8 to 16 seconds. In practice, a multiple-second delay was deliberately built into the inference engine cycle to guarantee other processes (operator interface, incremental garbage collection, etc.) ample time to run, yielding a reaction time of about 30 to 60 seconds. This speed is sufficient for the manufacturing application involved.

We have presented a Problem-State Monitoring System, consisting of an augmented transition network of problem states, useful as an adjunct to inference engines for real-time expert advisory systems. The defined transitions allow the system to model a real-time problem resolution process, even if it follows a nonmonotonic course with subproblems recurring in the same episode. The PSMS approach also supports the requirement that an advisory system be capable of updating its recommendations in real time, retracting advice that has become unnecessary.

Coupled with the ability to interrupt and resume the problem-solving process, the existence of cyclic paths in the transition network allows PSMS to model recurring problems. However, this situation could also lead to undesirable cycles in advisory system behavior, with the advisory system repeatedly recommending remedial actions that only temporarily manage a persistent problem. This behavior has not been observed in our application. An interesting direction for further research would be to extend the PSMS approach with a meta-level reasoning component that detects such cycles and produces advice for resolving the problem on a more permanent basis. Such a system could be one more step toward the goal of a genuinely "expert" assistant to process operators.