QUESTION ORDERING IN MIXED INITIATIVE PROGRAM SPECIFICATION DIALOGUE

Louis Steinberg
Department of Computer Science
Rutgers University
New Brunswick, N.J. 08903

ABSTRACT

It would be nice if a computer system could accept a program specification in the form of a mixed initiative dialogue. One capability such a system must have is the ability to ask questions in a coherent order. We will see a number of reasons it is better if such a system produces all the questions it can and has a "dialogue moderator" choose which to ask next, than if the system asks the first question it thinks of. DM [9], the dialogue moderator of PSI [5], chooses questions by searching a network model of a program, under control of a set of heuristic rules. This technique is simple and flexible.

I. Introduction

When you need a computer program, it is usually easier to tell a human being what the program should do than to specify the program directly to a computer system (e.g. a compiler). There are a number of reasons for this, including the knowledge and reasoning ability that a human has. We will concentrate here, however, on another advantage of communicating with humans, their ability to engage in a mixed initiative dialogue, and on one particular capability required for carrying on such a dialogue,* the ability to ask questions in a coherent order.

* This work was supported by the Defense Advanced Research Projects Agency at the Department of Defense under contract MDA 903-76-C-0206. The author was also partially supported by an IBM Graduate Fellowship. The work reported here would not have been possible without the environment provided by Cordell Green and the other members of the PSI project. I would also like to thank N. S. Sridharan for his comments on an earlier draft of this paper.

A mixed initiative dialogue is one in which either party may take initiative. From the perspective of the work reported here, to "take initiative" in a dialogue is to alter the structure of the dialogue. This definition is essentially equivalent to that of Bobrow et al. [1], who define taking initiative as establishing or violating expectations about what will come next, since it is precisely the structure of a dialogue which gives rise to such expectations. In particular, we will be concerned here with "topic structure", the order and relationships of the topics covered in the dialogue, and with "topic initiative", the ability to affect topic structure.

The work described here [9] has been done in the context of the PSI program synthesis system [5]. PSI acquires program specifications via mixed initiative, natural language dialogue.

II. The General Scheme

In order to ask questions, such a system must be able to do two things: it has to decide what aspects of the specification are as yet incomplete, and it has to decide which one of these aspects to ask about next. We will refer to the latter problem, deciding which question to ask next, as the task of "question ordering".

A. Order from the Reasoning Process

One common way to handle question ordering might be summarized as asking the first question the system thinks of.
In this scheme, the system goes through its normal reasoning process, and at some point comes across a fact which it wants to know, but cannot deduce. Whenever this happens, the system stops and asks the user. (See, for example, [1] and [4].)

Note that the system stops whenever it finds any question to ask. Thus, the system asks each question as it comes up, and the order is determined by the reasoning process. If a system's reasoning process seems natural to the user, then this scheme produces a question order which seems natural, at least to a first approximation. However, there are some problems.

The basic problem is that this scheme ties the topic structure of the dialogue to the reasoning procedures of the system. This makes topic structure harder to change, since any change in topic structure requires a change in the reasoning procedure. It can also make it hard to transfer the question ordering methods to another system that uses a different reasoning method. Finally, this method of question ordering assumes that there is a single, sequential reasoning process, and is not possible in a system structure such as that of HEARSAY-II [7].

B. Order from a Dialogue Moderator

A better scheme is to have the reasoning process produce as many questions as it can, and to use some other mechanism to select a single one of them to ask next. This scheme largely avoids the problems of the previous one. Its main drawback is that it requires a reasoning process which is able to produce more than one question at a time. An additional advantage of this scheme is that it allows us to implement question ordering in a separate module, with a clearly defined interface to the rest of the system. I have termed such a module a "dialogue moderator".

Thus, the dialogue moderator is given a list of all the questions currently open, and must choose which one is to be asked next, so as to keep the dialogue well structured. Much recent research (e.g. [2], [6], [8]) has shown that the structure of a dialogue is closely tied to the structure of the goals and plans being pursued by the dialogue's participants. One might therefore imagine that the dialogue moderator needs a complete model of goals and plans, both those of the system and those of the user. However, in a program specification dialogue, the goals and plans of both participants are tied very closely to the structure of the program. As will be seen, it has been possible in PSI to use a simple model of the program structure instead of a complex model of goals and plans.

(It might be argued that any system which handles natural language will eventually need the full model of goals and plans anyway, so using a simpler model here is no savings in the long run. It should be noted, however, that mixed initiative does not necessarily imply natural language. A useful system might be constructed which handles mixed initiative dialogue in some formal language.)

III. The Specific Method

DM is the dialogue moderator of the PSI system. As noted above, DM maintains a simplified model of the program being specified. The program is viewed as a structured set of objects.
Each object is either a piece of algorithm or a piece of data structure; the pieces of algorithm correspond roughly to the executable statements of a program, and the pieces of data structure correspond roughly to the variable declarations. A specific loop or a specific input operation might be algorithmic objects, while a set or a 5-tuple might be data structure objects.

These objects are structured by two relationships: an object may be a subpart of another (e.g. an input operation might be a step of a loop, and thus one of its subparts), and an algorithm object may use a data structure object (e.g. an input operation "uses" the data structure it inputs).

DM represents this structure in a standard network form; nodes represent the objects, and arcs represent the relations subpart/superpart and uses/used-by. Each node also has associated with it a list of questions about the object it represents. (A question asks about some attribute of some specific object. The objects, relations, and questions come from other modules of PSI.)

In order to choose the next question to ask, DM searches the net, starting at the "present topic". The present topic is the object currently being discussed. Determining which object this is is a difficult and important problem in its own right, involving the syntax of the user's sentences as well as the status of the program specification, and has not been seriously dealt with in this work. Instead, some simple heuristics are used, the main one being to assume that most of the time the user will be talking about the object that the system just asked about.

Once the present topic has been chosen, the search proceeds, under control of a set of rules. (The rules are listed in the appendix. See [9] for a discussion of the specific rules.) Each time the search reaches an object, a list of rules is chosen (depending on whether the object is a piece of algorithm or data structure) and these rules are applied in order. Some say to look for a specific kind of question about the current object. Others say to move along some particular kind of arc from the current object, and recursively apply the rules on the object we reach. If no question is found by this recursive application, we come back and continue applying the rules here. If at any point a rule that looks for questions finds one, that question is the one to ask, and the search stops.

This scheme of moving through the net and looking for questions, under control of a set of rules, has proven to be simple and flexible.

A related technique was used in SCHOLAR [3]. SCHOLAR is a CAI system which teaches geography by engaging in a mixed initiative dialogue with the student. Both participants may ask and answer questions. SCHOLAR chooses which question to ask by a random (rather than rule directed) walk on a net which encodes its knowledge about geography. As ultimately envisioned, SCHOLAR would teach in a Socratic manner, that is, by asking a carefully designed sequence of questions. However, the structure of goals and plans in such a dialogue is probably very different from the structure of the net as discussed in [3]. Because of this, a scheme of moving through this net is unlikely to be useful for producing such a sequence of questions.
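The search regime described above is small enough to sketch. The following reconstruction is ours, in modern Common Lisp, not DM's actual code (the real rules are LISP code not reproduced in the paper); it encodes the appendix's two rule lists as data and assumes a hypothetical object representation:

    (defstruct obj
      kind        ; :algorithm or :data-structure
      questions   ; alist mapping an attribute to a pending question
      arcs)       ; plist of arc types: :sub-parts :super-parts :uses :used-by

    ;; Rules A1-A7 and D1-D5 from the appendix.  :other stands for "any
    ;; question not named elsewhere"; it is treated as a literal attribute
    ;; key here for brevity.
    (defparameter *algorithm-rules*
      '((:ask . :name) (:move . :used-by) (:ask . :other)
        (:ask . :prompt-or-format) (:move . :sub-parts)
        (:ask . :exit-test) (:move . :super-parts)))

    (defparameter *data-structure-rules*
      '((:move . :sub-parts) (:ask . :structure) (:ask . :other)
        (:move . :super-parts) (:move . :uses)))

    (defun choose-question (obj &optional visited)
      ;; Apply the rule list for OBJ's kind in order: an :ask rule looks for
      ;; a pending question on OBJ itself, a :move rule recursively searches
      ;; the objects reached along one kind of arc.  The first question
      ;; found is the one to ask; VISITED keeps the walk from looping.
      (unless (member obj visited)
        (push obj visited)
        (dolist (rule (if (eq (obj-kind obj) :algorithm)
                          *algorithm-rules*
                          *data-structure-rules*))
          (let ((q (ecase (car rule)
                     (:ask (cdr (assoc (cdr rule) (obj-questions obj))))
                     (:move (some (lambda (next)
                                    (choose-question next visited))
                                  (getf (obj-arcs obj) (cdr rule)))))))
            (when q (return q))))))

Called on the node for the present topic, this walks the net exactly as the rules dictate, falling back to the next rule whenever a recursive move finds nothing, and stops at the first pending question.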
DM's question ordering behavior has been tested in two ways. First, a log of runs of PSI was surveyed. This log included 42 dialogues which were essentially complete. Each dialogue was checked, both to see if the user complained about the question ordering (there is a comment feature that can be used for such complaints), and also to see if the question order was subjectively acceptable. Except for one instance, later traced to a program bug, DM's behavior was correct. This test was too subjective, however, so a simulated dialogue was recorded, with myself playing the role of PSI and a programmer from outside the PSI group as the user. The inputs DM would have gotten during this dialogue were hand coded and given to DM, and the questions DM chose were compared with those I had chosen. DM had to choose a question at sixteen points, with two to seven questions to choose from. The correct question was chosen at thirteen of these points. An analysis of the errors indicates that they could be removed by some straightforward extensions of the current methodology, particularly by maintaining more history of how the dialogue got to the present topic.

IV. Conclusions

Thus we see that it is advantageous for a system which engages in mixed initiative dialogue to have the reasoning modules produce all the questions they can at each point in the dialogue, and to have a separate dialogue moderator choose which one to ask next. In such a system, the question ordering mechanism is decoupled from the reasoning process, so that either can be modified without changing the other. A given mechanism for selecting one of the proposed questions can be more easily transferred to a system with a very different reasoning mechanism. Also, multiple parallel reasoning processes can be used with this scheme.

DM, the dialogue moderator of PSI, represents the program as a simple net of objects and relations. It chooses a question by starting at the node representing the present topic of the dialogue, and searching the net, under control of a set of rules. It is possible to use a simple model of the program, rather than a complex model of goals and plans, because in the program specification task, the participants' goals and plans are so closely tied to the program structure. This general scheme of rule based search is advantageous because it is simple and flexible. These techniques are probably applicable to other settings where the structure of goals and plans can be tied to some simple task related structure.

APPENDIX: Question Choice Rules

(These are slightly simplified versions of the content of the rules. The actual rules consist of LISP code.)

Rules for Algorithms

A1) Are there questions about the NAME of this object?
A2) Look at all objects that are USED-BY this object.
A3) Are there questions about this object other than EXIT-TEST, PROMPT, or FORMAT?
A4) Are there questions about the PROMPT or FORMAT for this object?
A5) Look at all objects that are SUB-PARTS of this object.
A6) Are there questions about the EXIT-TEST of this object?
A7) Look at all objects that are SUPER-PARTS of this object.

Rules for Data Structures

D1) Look at all objects that are SUB-PARTS of this object.
D2) Are there questions about the STRUCTURE of this object?
D3) Are there OTHER questions about this object?
D4) Look at all objects that are SUPER-PARTS of this object.
D5) Look at all objects that USE this object.

REFERENCES

[1] Bobrow, D., Kaplan, R., Kay, M., Norman, D., Thompson, H., Winograd, T., "GUS, A Frame-Driven Dialogue System." Artificial Intelligence 8 (1977) 155-173.

[2] Brown, G., "A Framework for Processing Dialogue", Technical Report 182, MIT Laboratory for Computer Science, June 1977.

[3] Carbonell, J. R., "AI in CAI: An Artificial Intelligence Approach to Computer-Aided Instruction." IEEE Trans. Man-Machine Syst. 11 (1970) 190-202.

[4] Davis, R., Buchanan, B., Shortliffe, E., Production Rules as a Representation for a Knowledge-Based Consultation Program, Memo AIM-266, Stanford Artificial Intelligence Laboratory, October 1975.

[5] Green, C., A Summary of the PSI Program Synthesis System, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, August 1977, 380-381.

[6] Grosz, B., The Representation and Use of Focus in a System for Understanding Dialogues, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, August 1977, 67-76.

[7] Lesser, V., Erman, L., A Retrospective View of the HEARSAY-II Architecture, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, August 1977, 380-381.

[8] Mann, W., Man-Machine Communication Research: Final Report, ISI/RR-77-57, USC Information Sciences Institute, February 1977.

[9] Steinberg, L., A Dialogue Moderator for Program Specification Dialogues in the PSI System, PhD dissertation, Stanford University, in progress.
AUTOMATIC GENERATION OF SEMANTIC ATTACHMENTS IN FOL

Luigia Aiello
Computer Science Department
Stanford University
Stanford, California 94305

ABSTRACT

Semantic attachment is provided by FOL as a means for associating model values (i.e. LISP code) to symbols of a first order language. This paper presents an algorithm that automatically generates semantic attachments in FOL and discusses the advantages deriving from its use.

* The research reported here has been carried out while the author was visiting with the Computer Science Department of Stanford University, on leave from IEI of CNR, Pisa, Italy. Author's permanent address: IEI-CNR, via S. Maria 46, I-56100 Pisa, Italy.

I INTRODUCTION

In FOL (the mechanized reasoning system developed by R. Weyhrauch at the Stanford A.I. Laboratory [4,5,6]), the knowledge about a given domain of discourse is represented in the form of an L/S structure.

An L/S structure is the FOL counterpart of the logician's notion of a theory/model pair. It is a triple <L,S,F> where L is a sorted first order language with equality, S is a simulation structure (i.e. a computable part of a model for a first order theory), and F is a finite set of facts (i.e. axioms and theorems).

Semantic attachment is one of the characterizing features of FOL. It allows for the construction of a simulation structure S by attaching a "model value" (i.e. a LISP data structure) to (some of) the constant, function and predicate symbols of a first order language. Note that the intended semantics of a given theory can be specified only partially, i.e. not necessarily all the symbols of the language need to be given an attachment.

The FOL evaluator, when evaluating a term (or wff), uses both the semantic and the syntactic information provided within an L/S structure. It uses the semantic attachments by directly invoking the LISP evaluator for computing the value of ground sub-terms of the term (wff). It uses a simplification set, i.e. a user-defined set of rewrite rules, to do symbolic evaluations on the term (wff). Semantic information and syntactic information are repeatedly used, in this order, until no further simplification is possible.

Semantic attachment has been vital in the generation of many FOL proofs, by significantly increasing the efficiency of evaluations. The idea of speeding up a theorem prover by directly invoking the evaluator of the underlying system to compute some functions (predicates) has been used in other proof generating systems. FOL is different from other systems in that it provides the user with the capability of explicitly telling FOL which semantic information he wants to state and use about a given theory. This approach has many advantages, mostly epistemological, that are too long to be discussed here.
II AUTOMATIC GENERATION OF SEMANTIC ATTACHMENTS

It is common experience among the FOL users that they tend to build L/S structures providing much more syntactic information (by specifying axioms and deriving theorems) than semantic information (by attaching LISP code to symbols). In recent applications of FOL, L/S structures are big, and (since the information is essentially syntactic) the dimension of the simplification sets is rather large. The unpleasant consequence is that the evaluations tend to be very slow, if feasible at all.

This has prompted us to devise and implement an extension of the FOL system, namely, a compiling algorithm from FOL into LISP, which allows for a direct evaluation in LISP of functions and predicates defined in First Order Logic. The compilation of systems of function (predicate) definitions from FOL into LISP allows FOL to transform syntactic information into semantic information. In other words, the compiling algorithm allows FOL to automatically build parts of a model for a theory, starting from a syntactic description.

Semantic attachment has often been criticised as error prone. In fact, the possibility of directly attaching LISP code to symbols of the language allows the FOL user to set up the semantic part of an L/S structure in a language different from that of first order logic. This forbids him to use FOL itself to check the relative consistency of the syntactic and semantic parts of an L/S structure.

The automatic transformation of FOL axioms (or, in general, facts) into semantic attachments, besides the above mentioned advantage of substantially increasing the efficiency of the evaluator, has the advantage of guaranteeing the consistency between the syntactic and semantic specifications of an FOL domain of discourse, or at least, of keeping to a minimum the user's freedom of introducing non-detectable inconsistencies.

The semantic attachment for a function (predicate) symbol can be automatically generated through a compilation if such a symbol appears in the syntactic part of an L/S structure as a definiendum in a system of (possibly mutually recursive) definitions of the following form:

    ∀x1 ... xr . fi(x1,...,xr) = τi(F, P, c, x1,...,xr)
    ∀y1 ... ys . Pj(y1,...,ys) ≡ ξj(F, P, c, y1,...,ys)

Here the fi are function symbols and the Pj are predicate symbols. The τi are terms in the F's, P's, c's and x's; the ξj are wffs in the F's, P's, c's and y's. By c we denote a tuple of constant symbols. By F (resp. P) we denote a tuple of function (resp. predicate) symbols. F (resp. P) may contain some of the fi (resp. Pj), but it is not necessarily limited to them, i.e. other function and predicate symbols besides the definienda can appear in each definiens.
The compilation algorithm, when provided with a system of definitions, first performs a well-formedness check, then a compilability check.

The well-formedness check tests whether or not all the facts to be compiled are definitions, i.e. if they have one of the two following forms (note that here we use the word "definition" in a broader sense than logicians do):

    ∀x1 ... xr . fi(x1,...,xr) = ...
    ∀y1 ... ys . Pj(y1,...,ys) ≡ ...

The compilability check consists in verifying that a) each definition is a closed wff, i.e. no free variable occurs in it; b) all the individual constants and the function (predicate) symbols appearing in the definiens either are one of the definienda or are attached to a model value (the first case allows for recursion or mutual recursion); c) the definiens can contain logical constants, conditionals and logical connectives, but no quantifiers.

When the FOL evaluator invokes the LISP evaluator, it expects a model value to be returned; it does not know how to handle errors occurring at the LISP level. This, for various reasons too long to be reported here, justifies the three above restrictions. Actually, the second and the third restrictions can be weakened with an appropriate extension of the FOL evaluator and of the compiler (respectively) to cope with the new situation. More details are presented in [1].

To present a simple example of compilation, consider the following facts:

    ∀y x. f(x,y) = if P(x) then g(x,x) else f(y,x)
    ∀y x. g(x,y) = x + y

If we tell FOL to compile them in an L/S structure where a semantic attachment exists both for the symbol P and for the symbol + (let them be two LISP functions named C-P and PLUS, respectively), it produces the following LISP code:

    (DE C-f (x y)
        (COND ((C-P x) (C-g x x))
              (T (C-f y x))))

    (DE C-g (x y) (PLUS x y))

and attaches it to the function symbols f and g, respectively.

III SOUNDNESS OF THE COMPILATION

The compiling algorithm is pretty straightforward, hence, its correctness should not constitute a problem. Conversely, a legitimate question is the following: Is the compilation process sound? In other words: Who guarantees that running the FOL evaluator syntactically on a system of definitions gives the same result as running the LISP evaluator on their (compiled) semantic attachments?

The answer is that the two evaluations are weakly equivalent, i.e. if both terminate, they produce the same result. This is because the FOL evaluator uses a leftmost outermost strategy of function invocation (which corresponds to call-by-name) while the mechanism used by the LISP evaluator is call-by-value. Hence, compiling a function can introduce some nonterminating computations that would not happen if the same function were evaluated symbolically.
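To make the weak-equivalence point concrete, here is a small example of our own, not from the paper. Given the definitions

    ∀x y. k(x,y) = x
    ∀x. loop(x) = loop(x)

a compilation along the lines shown above would yield something like:

    (DE C-k (x y) x)
    (DE C-loop (x) (C-loop x))

Symbolic evaluation of k(1, loop(0)) under the leftmost-outermost strategy rewrites the term at once to 1, never examining the second argument, whereas the compiled call (C-k 1 (C-loop 0)) must first evaluate (C-loop 0) and so never returns. Whenever both evaluations do terminate, they agree.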
This, however, does not constitute a serious problem and it will be overcome in the next version of FOL. In fact, it will be implemented in a purely applicative, call-by-need dialect of LISP (note that call-by-need is strongly equivalent to call-by-name in purely applicative languages).

IV CONCLUSION

FOL is an experimental system and, as is often the case with such systems, it evolves through the experience of its designer and users. Particular attention is paid to extending FOL only with new features that either improve its proving power or allow for a more natural interaction between the user and the system (or both) in a uniform way. The addition of the compiling algorithm sketched in the previous sections is in this spirit. This extension of FOL has been very useful in recent applications (see, for instance, [2]).

Experience has shown that the largest part of the syntactic information in an L/S structure can be compiled. This suggests a further improvement to be made to FOL evaluations. The use of the compiling algorithm leads to L/S structures where (almost) all the function (predicate) symbols of the language have an attachment. Hence, the strategy of the FOL evaluator of using semantic information first (which was the most reasonable one when semantic attachments were very few and symbolic evaluations could be rather long) is in our opinion no longer the best one. In fact, sometimes, properties of functions (stated as axioms or theorems in the syntactic part of the L/S structure) can be used to avoid long computations before invoking the LISP evaluator to compute that function.

Finally, a comment on related work. Recently (and independently), Boyer and Moore have added to their theorem prover the possibility of introducing meta-functions, proving them correct and using them to enhance the proving power of their system [3]. This is very much in the spirit of the use of META in FOL and of the compiling algorithm described here.

ACKNOWLEDGMENTS

The members of the Formal Reasoning Group of the Stanford A.I. Lab are acknowledged for useful discussions. Richard Weyhrauch deserves special thanks for interesting and stimulating conversations about FOL.

The financial support of both the Italian National Research Council and ARPA (through Grant No. MDA903-80-C-0102) is acknowledged.

REFERENCES

[1] Aiello, L., "Evaluating Functions Defined in First Order Logic." Proc. of the Logic Programming Workshop, Debrecen, Hungary, 1980.

[2] Aiello, L., and Weyhrauch, R. W., "Using Meta-theoretic Reasoning to do Algebra." Proc. of the 5th Automated Deduction Conf., Les Arcs, France, 1980.

[3] Boyer, R.S., and Moore, J.S., "Metafunctions: Proving them correct and using them efficiently as new proof procedures." C.S. Lab, SRI International, Menlo Park, California, 1979.
[4] Weyhrauch, R.W., "FOL: A Proof Checker for First-order Logic." Stanford A.I. Lab, Memo AIM-235.1, 1977.

[5] Weyhrauch, R.W., "The Uses of Logic in Artificial Intelligence." Lecture Notes of the Summer School on the Foundations of Artificial Intelligence and Computer Science (FAICS '78), Pisa, Italy, 1978.

[6] Weyhrauch, R.W., "Prolegomena to a Mechanized Theory of Formal Reasoning." Stanford A.I. Lab, Memo AIM-315, 1979; Artificial Intelligence Journal, to appear, 1980.
HCPRVR: AN INTERPRETER FOR LOGIC PROGRAMS

Daniel Chester
Department of Computer Sciences
University of Texas at Austin

ABSTRACT

An overview of a logic program interpreter written in Lisp is presented. The interpreter is a Horn clause-based theorem prover augmented by Lisp functions attached to some predicate names. Its application to natural language processing is discussed. The theory of operation is explained, including the high level organization of the PROVE function and an efficient version of unification. The paper concludes with comments on the overall efficiency of the interpreter.

I INTRODUCTION

HCPRVR, a Horn Clause theorem PRoVeR, is a Lisp program that interprets a simple logical formalism as a programming language. It has been used for over a year now at the University of Texas at Austin to write natural language processing systems. Like Kowalski [1], we find that programming in logic is an efficient way to write programs that are easy to comprehend. Although we now have an interpreter/compiler for the logic programming language Prolog [2], we continue to use HCPRVR because it allows us to remain in a Lisp environment where there is greater flexibility and a more familiar notation.

This paper outlines how HCPRVR works to provide logic programming in a Lisp environment. The syntax of logic programs is given, followed by a description of how such programs are invoked. Then attachment of Lisp functions to predicates is explained. Our approach to processing natural language in logic programs is outlined briefly. The operation of HCPRVR is presented by giving details of the PROVE and MATCH functions. The paper closes with some remarks on efficiency.

* This work was supported by NSF Grant MCS 74-24918.

II LOGIC PROGRAM SYNTAX

A logic program is an ordered list of axioms. An axiom is either an atomic formula, which can be referred to as a fact, or an expression of the form

    ( <conclusion> < <premiss1> ... <premissN> )

where both the conclusion and the premisses are atomic formulas. The symbol "<" is intended to be a left-pointing arrow.

An atomic formula is an arbitrary Lisp expression beginning with a Lisp atom. That atom is referred to as a relation or predicate name. Some of the other atoms in the expression may be designated as variables by a flag on their property lists.

III CALLING LOGIC PROGRAMS

There are two ways to call a logic program in HCPRVR. One way is to apply the EXPR function TRY to an atomic formula. The other way is to apply the FEXPR function ? to a list of one or more atomic formulas, i.e., by evaluating an expression of the form

    ( ? <formula1> ... <formulaN> )

In either case the PROVE function is called to try to find values for the variables in the formulas that make them into theorems implied by the axioms.
If it finds a set of values, it displays the formulas to the interactive user and asks him whether another set of values should be sought. When told not to seek further, it terminates after assigning the formulas, with the variables replaced by their values, to the Lisp atom VAL.

IV PREDICATE NAMES AS FUNCTIONS

Occasionally it is useful to let a predicate name be a Lisp function that gets called instead of letting HCPRVR prove the formula in the usual way. The predicate name NEQ*, for example, tests its two arguments for inequality by means of a Lisp function because it would be impractical to have axioms of the form (NEQ* X Y) for every pair of constants X and Y such that X does not equal Y. Predicate names that are also functions are FEXPRs and expect that their arguments have been expanded into lists in which all bound variables have been replaced by their values. These predicate names must be marked as functions by having the Lisp property FN set to T, e.g., executing (PUT '<predicate name> 'FN T), so that HCPRVR will interpret them as functions.

V NATURAL LANGUAGE PROCESSING

By letting syntactic categories be predicates with three arguments, we can make axioms that pull phrases off of a list of words until we get a sentence that consumes the whole list. In addition, arbitrary tests can be performed on the phrase representations to check whether they can be semantically combined. Usually the phrase representation in the conclusion part of an axiom tells how the component representations are combined, while the premisses tell how the phrase should be factored into the component phrases, what their representations should be, and what restrictions they have. Thus, the axiom

    ((S X (U ACTOR V . W) Z) < (NP X V Y)
                               (VP Y (U . W) Z)
                               (NUMBER V N1)
                               (NUMBER U N2)
                               (EQ N1 N2))

says that an initial segment of word list X is a sentence if first there is a noun phrase ending where word list Y begins, followed by a verb phrase ending where word list Z begins, and both phrases agree in number (singular or plural). Furthermore, the noun phrase representation V is made the actor of the verb U in the verb phrase, and the rest of the verb phrase representation, W, is carried along in the representation for the sentence.

After suitable axioms have been stored, the sentence THE CAT IS ON THE MAT can be parsed by typing

    (? (S (THE CAT IS ON THE MAT) X NIL))

The result of this computation is the theorem

    (S (THE CAT IS ON THE MAT)
       (IS ACTOR (CAT DET THE)
           LOC (ON LOC (MAT DET THE)))
       NIL)

VI THEORY OF OPERATION

A. General Organization

HCPRVR works essentially by the problem reduction principle. Each atomic formula can be thought of as a problem.
Those that appear as facts in the list of axioms represent problems that have been solved, while those that appear as conclusions can be reduced to the list of problems represented by the premisses. Starting from the formula to be proved, HCPRVR reduces each problem to lists of subproblems and then reduces each of the subproblems in turn until they have all been reduced to the previously solved problems, the "facts" on the axiom list. The key functions in HCPRVR that do all this are PROVE and MATCH.

B. The PROVE Function

PROVE is the function that controls the problem reduction process. It has one argument, a stack of subproblem structures. Each subproblem structure has the following format:

    ( <list of subproblems> . <binding list> )

where the list of subproblems is a sublist of the premisses in some axiom and the CAR of the binding list is a list of variables occurring in the subproblems, paired with their assigned values. When PROVE is initially called by TRY, it begins with the stack

    ( ( ( <formula> ) NIL ) )

The algorithm of PROVE works in depth-first fashion, solving subproblems in the same left-to-right order as they occur in the axioms and applying the axioms as problem reduction rules in the same order as they are listed. PROVE begins by examining the first subproblem structure on its stack. If the list of subproblems in that structure is empty, PROVE either returns the binding list, if there are no other structures on the stack, i.e., if the original problem has been solved, or removes the first structure from the stack and examines the stack again. If the list of subproblems of the first subproblem structure is not empty, PROVE examines the first subproblem on the list. If the predicate name in it is a function, the function is applied to the arguments. If the function returns NIL, PROVE fails; otherwise the subproblem is removed from the list and PROVE begins all over again with the modified structure.

When the predicate name of the first subproblem in the list in the first subproblem structure is not a function, PROVE gets all the axioms that are stored under that predicate name and assigns them to the local variable Y. At this point PROVE goes into a loop in which it tries to apply each axiom in turn until one is found that leads to a solution to the original problem. It does this by calling the MATCH function to compare the conclusion of an axiom with the first subproblem. If the match fails, it tries the next axiom. If the match succeeds, the first subproblem is removed from the first subproblem structure, then a new subproblem structure is put on the stack in front of that structure. This new subproblem structure consists of the list of premisses from the axiom and the binding list that was created at the time MATCH was called. Then PROVE calls itself with this newly formed stack. If this call returns a binding list, it is returned as the value of PROVE. If the call returns NIL, everything is restored to what it was before the axiom was applied and PROVE tries to apply the next axiom.

The way that PROVE applies an axiom might be better understood by considering the following illustration. Suppose that the stack looks like this:

    ( ( (C1 C2) . <blist> ) ... )

The first subproblem in the first subproblem structure is C1. Let the axiom to be applied be

    (C < P1 P2 P3)

PROVE applies it by creating a new binding list blist', initially empty, and then matching C with C1 with the call (MATCH C <blist'> C1 <blist>). If this call is successful, the following stack is formed:

    ( ( (P1 P2 P3) . <blist'> ) ( (C2) . <blist> ) ... )

Thus problem C1 has been reduced to problems P1, P2 and P3 as modified by the binding list blist'. PROVE now applies PROVE to this stack in the hope that all the subproblems in it can be solved.

In the event that the axiom to be applied is (C), that is, the axiom is just a fact, the new stack that is formed is

    ( ( () . <blist'> ) ( (C2) . <blist> ) ... )

When PROVE is called with this stack, it removes the first subproblem structure and begins working on problem C2.
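The loop just described is compact enough to sketch. The following reconstruction is ours, in modern Common Lisp, not HCPRVR's actual code; it flattens the two-binding-list scheme into a single immutable substitution, so nothing needs to be undone on backtracking. AXIOMS-FOR, RENAME-AXIOM, APPLY-LISP-PREDICATE and UNIFY are assumed helpers, with UNIFY returning an extended substitution or :FAIL.

    (defun prove (goals subst)
      ;; GOALS is the list of open subproblems, leftmost first.  Returns
      ;; (LIST subst) on success -- wrapped so that success with an empty
      ;; substitution is still non-NIL -- or NIL on failure.
      (if (null goals)
          (list subst)                 ; every subproblem reduced to facts
          (let ((goal (first goals)))
            (if (get (first goal) 'fn)
                ;; A predicate name flagged FN is a Lisp function: call it
                ;; directly instead of searching the axioms.
                (and (apply-lisp-predicate goal subst)
                     (prove (rest goals) subst))
                ;; Otherwise try each axiom (conclusion . premisses) stored
                ;; under the predicate name, in the order listed, reducing
                ;; GOAL to the axiom's premisses -- depth first, left to right.
                (some (lambda (axiom)
                        (let ((s (unify (first axiom) goal subst)))
                          (unless (eq s :fail)
                            (prove (append (rest axiom) (rest goals)) s))))
                      (mapcar #'rename-axiom (axioms-for (first goal))))))))

Backtracking is implicit here: when a recursive call returns NIL, SOME simply moves on to the next axiom, playing the role of HCPRVR's explicit restoration via the SAVE list.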
C. The MATCH Function

The MATCH function is a version of the unification algorithm that has been modified so that renaming of variables and substitutions of variable values back into formulas are avoided. The key idea is that the identity of a variable is determined by both the variable name and the binding list on which its value will be stored. The value of a variable is also a pair: the term that will replace the variable and the binding list associated with the term. The binding list associated with the term is used to find the values of variables occurring in the term when needed. Notice that variables do not have to be renamed because MATCH is always called (initially) with two distinct binding lists, giving distinct identities to the variables in the two expressions to be matched, even if the same variable name occurs in both of them.

MATCH assigns a value to a variable by CONSing it to the CAR of the variable's binding list using the RPLACA function; it also puts that binding list on the list bound to the Lisp variable SAVE in PROVE. This is done so that the effects of MATCH can be undone when PROVE backtracks to recover from a failed application of an axiom.
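The binding-list idea can likewise be sketched (again our reconstruction, not the original code). A variable occurrence is identified by the pair of its name and its binding list, and a value is stored as a (term . binding-list) pair, so the same variable name occurring in two axioms never collides and terms are never rewritten; the VARIABLE property flag is an assumed name for the flag mentioned in Section II.

    (defun var-p (x)
      ;; variables are atoms flagged on their property lists (Section II)
      (and (symbolp x) (get x 'variable)))

    (defun match (t1 b1 t2 b2)
      ;; Unify term T1 under binding list B1 with T2 under B2, binding
      ;; variables by destructively extending the alist in the CAR of
      ;; their binding list; returns T on success.  (The original also
      ;; records each such RPLACA on the SAVE list so that PROVE can undo
      ;; the bindings when an axiom application fails.)
      (cond ((var-p t1)
             (let ((v (cdr (assoc t1 (car b1)))))   ; v = (term . blist)
               (if v
                   (match (car v) (cdr v) t2 b2)    ; chase the stored value
                   (progn (rplaca b1 (acons t1 (cons t2 b2) (car b1)))
                          t))))
            ((var-p t2) (match t2 b2 t1 b1))
            ((atom t1) (eql t1 t2))
            ((atom t2) nil)
            (t (and (match (car t1) b1 (car t2) b2)
                    (match (cdr t1) b1 (cdr t2) b2)))))

A fresh binding list for an axiom application is just (LIST NIL); two occurrences of the variable X on distinct binding lists are distinct variables, which is what makes renaming unnecessary.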
VII EFFICIENCY

HCPRVR is surprisingly efficient for its simplicity. The compiled code fits in 2000 octal words of binary programming space and runs as fast as the Prolog interpreter. Although the speed could be further improved by more sophisticated programming, we have not done so because it is adequate for our present needs. A version of HCPRVR has been written in C; it occupies 4k words on a PDP11/60 and appears to run about half as fast as the compiled Lisp version does on a DEC KI10.

The most important kind of efficiency we have noticed, however, is program development efficiency, the ease with which logic programs can be written and debugged. We have found it easier to write natural language processing systems in logic than in any other formalism we have tried. Grammar rules can be easily written as axioms, with an unrestricted mixture of syntactic and non-syntactic computations. Furthermore, the same grammar rules can be used for parsing or generation of sentences with no change in the algorithm that applies them. Other forms of natural language processing are similarly easy to program in logic, including schema instantiation, question-answering and text summary. We have found HCPRVR very useful for gaining experience in writing logic programs.

REFERENCES

[1] Kowalski, R. A. "Algorithm = logic + control." CACM 22, 7, July 1979, 424-436.

[2] Warren, D. H., L. M. Pereira, and F. Pereira. "PROLOG - the language and its implementation compared with Lisp." Proc. Symp. AI and Prog. Langs., SIGPLAN Notices 12, 8 / SIGART Newsletter 64, August 1977, 109-115.
FIRST EXPERIMENTS WITH RUE AUTOMATED DEDUCTION

Vincent J. Digricoli
The Courant Institute and Hofstra University
251 Mercer Street, New York, N.Y. 10012

ABSTRACT

RUE resolution represents a reformulation of binary resolution so that the basic rules of inference (RUE and NRF) incorporate the axioms of equality. An RUE theorem prover has been implemented, and experimental results indicate that this method represents a significant advance in the handling of equality in resolution.

A. Introduction

In (1) the author presented the complete theory of Resolution by Unification and Equality, which incorporates the axioms of equality into two inference rules which are sound and complete to prove E-unsatisfiability. Our purpose here is to present systematically the results of experiments with an RUE theorem prover.

The experiments chosen were those of McCharen, Overbeek and Wos (2), and in particular we are interested in comparing the results achieved by these two theorem provers.

In MOW, the equality axioms were used explicitly for all theorems involving equality, and apparently no use was made of paramodulation. In RUE, where proofs are much shorter, the inference rules themselves make implicit use of the equality axioms, which do not appear in a refutation, and also no use of paramodulation is made. Both systems are pure resolution-based systems.

Before considering the experiments, we first review and summarize the theory of resolution by unification and equality as presented in (1). There we define the concept of a disagreement set, the inference rules RUE and NRF, the notion of viability, the RUE unifying substitution and an equality restriction which inhibits redundant inferences. Here we simply introduce the concept of a disagreement set and define the rules of inference.

A disagreement set of a pair of terms (t1,t2) is defined in the following manner: if (t1,t2) are identical, the empty set is the only disagreement set, and if (t1,t2) differ, the set of one pair {(t1,t2)} is the origin disagreement set. Furthermore, if t1 has the form f(a1,...,ak) and t2 the form f(b1,...,bk), then the set of pairs of corresponding arguments which are not identical is the topmost disagreement set. In the simple example:

    t1 = f( a,  g(b, h(c))  )
    t2 = f( a', g(b',h(c')) )

besides the origin disagreement, there are the disagreement sets:

    D1 = { (a,a'), (g(b,h(c)), g(b',h(c'))) }
    D2 = { (a,a'), (b,b'), (h(c),h(c')) }
    D3 = { (a,a'), (b,b'), (c,c') }

This definition merely defines all possible ways of proving t1 = t2, i.e. we can prove t1 = t2 by proving equality in every pair of any one disagreement set. An input clause set, for example, may imply equality in D1 or D3 but not in D2, or it may most directly prove t1 = t2 by proving equality in D3.

We proceed to define a disagreement set of complementary literals

    P(s1,...,sn) ,  P̄(t1,...,tn)

as the union of disagreement sets

    D = ∪ (i=1,n) Di

where Di is a disagreement set of (si,ti).
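The definition is directly computable. Here is a small sketch of our own (not from the paper), with terms written as Lisp lists, so that f(a,g(b)) is (f a (g b)):

    (defun topmost-disagreement (t1 t2)
      ;; Returns NIL if T1 and T2 are identical, the keyword :ORIGIN when
      ;; they fail to share a top-level function symbol (so that the origin
      ;; set {(t1,t2)} is the only disagreement set), and otherwise the
      ;; topmost disagreement set as a list of pairs of unequal arguments.
      (cond ((equal t1 t2) nil)
            ((and (consp t1) (consp t2)
                  (eql (car t1) (car t2))
                  (= (length t1) (length t2)))
             (loop for a in (cdr t1)
                   for b in (cdr t2)
                   unless (equal a b) collect (cons a b)))
            (t :origin)))

On the example above this returns D1; the deeper sets D2 and D3 are obtained by repeatedly replacing a pair in the set with the topmost disagreement set of its own two members.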
We see immediately that

    P(s1,...,sn) ∧ P̄(t1,...,tn) → D

where D now represents the disjunction of inequalities specified by a disagreement set of P, P̄, and furthermore, that

    f(a1,...,ak) ≠ f(b1,...,bk) → D

where D is the disjunction of inequalities specified by a disagreement set of f(a1,...,ak), f(b1,...,bk). For example,

    P(f(a,g(b,h(c)))) ∧ P̄(f(a',g(b',h(c')))) → a≠a' ∨ b≠b' ∨ c≠c'.

The reader is invited to read (1), which states the complete theory of RUE resolution with many examples. Our primary concern here is to discuss experiments with an RUE theorem prover and to begin to assess the effectiveness of this inference system.

B. Experiments

Our experiments deal with Boolean algebra, where we are asked to prove, from the eight axioms

    A1: x + 0 = x
    A2: x * 1 = x
    A3: x + x̄ = 1
    A4: x * x̄ = 0
    A5: x(y+z) = xy + xz
    A6: x + yz = (x+y)(x+z)
    A7: x + y = y + x
    A8: x * y = y * x

(we are denoting logical or by +, logical and by * or juxtaposition, and negation by overbar), the following theorems:

    T1:  0̄ = 1
    T2:  x + 1 = 1
    T3:  x * 0 = 0
    T4:  x + xy = x
    T5:  x(x+y) = x
    T6:  x + x = x
    T7:  x * x = x
    T8:  (x+y) + z = x + (y+z)
    T9:  (x*y) * z = x * (y*z)
    T10: the complement of x is unique:
         (x*a=0) (x+a=1) (x*b=0) (x+b=1) → a = b
    T11: (x̄)¯ = x
    T12: (x+y)¯ = x̄ * ȳ     De Morgan's Law I
    T13: (x*y)¯ = x̄ + ȳ     De Morgan's Law II

These theorems are stated in the order of increasing complexity of proof, with 0̄ = 1 being trivially easy for a human to prove and De Morgan's Laws being very difficult for a human to deduce from the axioms. George and Garrett Birkhoff have a paper on the above proofs published in the Transactions of the American Mathematical Society (3), and Halmos comments on the significantly difficult character of the proofs in his Lectures on Boolean Algebras (4).

The following is a machine-deduced, five-step RUE refutation which proves x*0 = 0:

    a*0 ≠ 0                        negated theorem
    x*x̄ = 0                        σ = {a/x}
      a*ā ≠ a*0
    x(y+z) = xy + xz               σ = {a/x}
      y+z ≠ ā  ∨  ay+az ≠ a*0
    x + 0 = x                      σ = {ā/y, 0/z, a/x}
      a*ā + a*0 ≠ a*0
    0 + x = x                      σ = {a*0/x}
      0 ≠ a*ā
    0 = x*x̄                        σ = {a/x}
      □

The above experiments, together with many others (dealing with group theory, ring theory, geometry, Henkin models, set theory and program verification), were proposed as benchmarks by McCharen, Overbeek and Wos, who in (2) published the results of their own experiments.

We here tabulate the comparative performance of the RUE and MOW theorem provers on the above theorems. The MOW theorem prover uses binary resolution with explicit use of the equality axioms and is implemented in Assembly language on the IBM System 370 Model 195. Great effort was made to enhance the efficiency of their theorem prover, and this is described in (2). The RUE theorem prover, on the other hand, represents a first implementation in PL1 on a CDC 6600 machine, which is much slower than the Model 195.

In the experiments each theorem is treated as an independent problem and cannot use earlier theorems as lemmas, so that for example in proving associativity (T8) we need to prove (T2,T3,T4,T5) as sub-theorems. The total number of unifications performed is suggested as the primary measure of comparison rather than time. The comparative results are given in Table 1.
From T1 to T7 the RUE theorem prover was very successful, but at T8 (associativity) results have yet to be obtained, since refinements in the heuristic pruning procedure are required and are being developed, with the expectation that more advanced results will be available at the conference.

RUE represents one of several important methods for handling equality in resolution, and it is important to emphasize that it is a complete method whose power is currently being tested in stand-alone fashion. However, it is not precluded that we can combine this method with other techniques such as demodulation, paramodulation and reduction theory to achieve a mutually enhanced effect.

TABLE 1.

    THEOREM                     TOTAL UNIFICATIONS        TIME (SECONDS)      PROOF LENGTH
                                RUE : MOW                 RUE : MOW           RUE : MOW*
    T1   0̄ = 1                  77 : 26,702               10.1 : 27.5         7
    T2   x + 1 = 1              688 : 46,137              51.5 : "            12
    T3   x * 0 = 0              676 : 46,371              102.9 : 57.0        24
    T4   x + xy = x             3,152 : see below         41.6 : see below    13
    T5   x(x+y) = x             3,113 : "
    T7   x * x = x              2,145 : "
    T6∧T7                       4,326 (Note 1) : 105,839
    T8   (x+y)+z = x+(y+z)      IP : 413,455
    T9   (x*y)*z = x*(y*z)      IP : NPR
    T10  complement unique      IP : NPR
    T11  (x̄)¯ = x               IP : NPR
    T12  (x+y)¯ = x̄ * ȳ         IP : NPR
    T13  (x*y)¯ = x̄ + ȳ         IP : NPR

Note 1: To prove the double theorem T4∧T5, x+xy=x ∧ x(x+y)=x, we add the negated theorem as a single clause, a+ab≠a ∨ a(a+b)≠a, to the input clause set. It is evident that the erasure of these two literals in a refutation decomposes into two independent subproblems, since no variables appear in the clause. Hence, the refutations for a+ab≠a and a(a+b)≠a obtained in the separate experiments T4, T5 can be concatenated, and the results of these experiments simply summed, which is what we have done to state the RUE results for the double theorem. The same holds true for T6∧T7.

* The estimated length of MOW proofs with the equality axioms is twice as long as corresponding RUE proofs.

REFERENCES:

1. "Automatic Deduction and Equality" by Vincent J. Digricoli, Proceedings of the Oct. 1979 Annual Conference of the ACM, 240-250.

2. "Problems and Experiments for and with Automated Theorem-Proving Programs", by McCharen, Overbeek and Wos, IEEE Transactions on Computers, Vol. C-25, No. 8, August 1976.

3. "Distributive Postulates for Systems Like Boolean Algebras", Transactions of the American Mathematical Society, Volume 60, July-Dec. 1946.

4. "Lectures on Boolean Algebras", by Paul R. Halmos, Section 2, pages 3-5, D. Van Nostrand.

The completion of these Boolean experiments, together with other experiments and a more fully delineated comparative analysis with systems other than MOW, represents work the author will report upon in the future.
WHAT'S WRONG WITH NON-MONOTONIC LOGIC?

David J. Israel
Bolt Beranek and Newman Inc.
50 Moulton St.
Cambridge, Mass. 02238

ABSTRACT

In this paper, I ask, and attempt to answer, the following question: What's wrong with non-monotonic logic? The answer, briefly, is that the motivation behind the wonderfully impressive work involved in its development is based on a confusion of proof-theoretic with epistemological issues.

* The research reported in this paper was supported in part by the Advanced Research Projects Agency, and was monitored by ONR under Contract No. N00014-77-C-0378.

What's wrong with non-monotonic logic (and, for that matter, with the logic of default reasoning)? The first question we should ask is: What's supposed to be wrong with "standard", monotonic logic? In recent, and extremely impressive, work, Doyle and McDermott [1], McDermott [2], and Reiter [3] have argued that classical logic, in virtue of its monotonicity, is incapable of adequately capturing or representing certain crucial features of real live reasoning and inference. In particular, they note that our knowledge is always incomplete, and is almost always known to be so; that, in pursuing our goals, both practical and theoretical, we are forced to make assumptions or to draw conclusions on the basis of incomplete evidence; conclusions and assumptions which we may have to withdraw in the light of either new evidence or further cogitation on what we already believe. An essential point here is that new evidence or new inference may lead us to reject previously held beliefs, especially those that we knew to be inadequately supported or merely presumptively assumed. In sum, our theories of the world are revisable; and thus our attitudes towards at least some of our beliefs must likewise be revisable.

Now what has all this to do with logic and its monotonicity? Both Reiter and Doyle-McDermott characterize the monotonicity of standard logic in syntactic or proof-theoretic terms. If A and B are two theories, and A is a subset of B, then the theorems of A are a subset of the theorems of B; adding new axioms never forces the withdrawal of a conclusion already drawn.

To remedy this lack, Doyle and McDermott introduce into an otherwise standard first order language a modal operator "M" which, they say, is to be read as "It is consistent with everything that is believed that...". (Reiter's "M", which is not a symbol of the object language, is also supposed to be read "It is consistent to assume that...". I think there is some unclarity on Reiter's part about his "M".
Now in fact this reading isn't quite right.** The suggested reading doesn't capture the notion Doyle-McDermott and Reiter seem to have in mind. What they have in mind is, to put it non-linguistically (and hence, of course, non-syntactically): that property that a belief has just in case it is both compatible with everything a given subject believes at a given time and remains so when the subject's belief set undergoes certain kinds of changes under the pressure of both new information and further thought, and where those changes are the result of rational epistemic policies.

I've put the notion in this very epistemologically oriented way precisely to hone in on what I take to be the basic misconception underlying the work on non-monotonic logic and the logic of default reasoning. The researchers in question seem to believe that logic - deductive logic, for there is no other kind - is centrally and crucially involved in the fixation and revision of belief. Or to put it more poignantly, they mistake so-called deductive rules of inference for real, honest-to-goodness rules of inference. Real rules of inference are precisely rules of belief fixation and revision; deductive rules of transformation are precisely not.

Consider that old favorite: modus (ponendo) ponens. It is not a rule that should be understood as enjoining us as follows: whenever you believe that p and believe that if p then q, then believe that q. This, after all, is one lousy policy. What if you have overwhelmingly good reasons for rejecting the belief that q? All logic tells you is that you had best reconsider your belief that p and/or your belief that if p then q (or, to be fair, your previously settled beliefs on the basis of which you were convinced that not-q); it is perforce silent on how to revise your set of beliefs so as to... to what?
Surely, to come up with a good theory that fits the evidence, is coherent, simple, of general applicability, reliable, fruitful of further testable hypotheses, etc. Nor is it the case that if one is justified in believing that p and justified in believing that if p then q (or even justified in believing that p entails q), one is justified in believing (inferring) that q. Unless, of course, one has no other relevant beliefs. But one always does.

The rule of modus ponens is, first and foremost, a rule that permits one to perform certain kinds of syntactical transformations on (sets of) formally characterized syntactic entities. (Actually, first and foremost, it is not really a rule at all; it is "really" just a two-place relation between, on the one hand, an ordered pair of wffs., and on the other, a wff.) It is an important fact about it that, relative to any one of a family of interpretations of the conditional, the rule is provably sound, that is, truth (in an interpretation)-preserving. The crucial point here, though, is that adherence to a set of deductive rules of transformation is not a sufficient condition for rational belief; it is sufficient (and necessary) only for producing derivations in some formal system or other. Real rules of inference are rules (better: policies) guiding belief fixation and revision. Indeed, if one is sufficiently simple-minded, one can even substitute for the phrase "good rules of inference" the phrase "(rules of) scientific procedure" or even "scientific method".

** Nor is it quite clear. By "consistent" are we to mean syntactically consistent in the standard monotonic sense of syntactic derivability or in the to-be-explicated non-monotonic sense? Or is it semantic consistency of one brand or another that is in question? This unclarity is fairly quickly remedied. We are to understand by "consistency" standard syntactic consistency, which in standard systems can be understood either as follows: a theory is syntactically consistent iff there is no formula p of its language such that both p and its negation are theorems; or as follows: iff there is at least one sentence of its language which is not a theorem. There are otherwise standard, that is, monotonic, systems for which the equivalence of these two notions does not hold; and note that the first applies only to a theory whose language includes a negation operator.
And, of course, there is no clear sense to the phrase "good rules of transformation". (Unless "good" here means "complete" - but with respect to what? Truth?)

Given this conception of the problem to which Doyle-McDermott and Reiter are addressing themselves, certain of the strange properties of, on the one hand, non-monotonic logic and, on the other, the logic of default reasoning, are only to be expected. In particular, the fact that the proof relation is not in general decidable. The way the "M" operator is understood, we believers are represented as follows: to make an assumption that p or to put forth a presumption that p is to believe a proposition to the effect that p is consistent with everything that is presently believed and that it will remain so even as my beliefs undergo certain kinds of revisions. And in general we can prove that p only if we can prove at least that p is consistent with everything we now believe. But, of course, by Church's theorem there is no uniform decision procedure for settling the question of the consistency of a set of first-order formulae. (Never mind that the problem of determining the consistency of arbitrary sets of formulae of the sentential calculus is NP-complete.)

This is surely wrong-headed: assumptions or hypotheses or presumptions are not propositions we accept only after deciding that they are compatible with everything else we believe, not to speak of having to establish that they won't be discredited by future evidence or further reasoning. When we assume p, it is just p that we assume, not some complicated proposition about the semantic relations in which it stands to all our other beliefs, and certainly not some complicated belief about the syntactic relations any one of its linguistic expressions has to the sentences which express all those other beliefs. (Indeed, there is a problem with respect to the consistency requirement, especially if we allow beliefs about beliefs. Surely, any rational subject will believe that s/he has some false beliefs, or more to the point, any such subject will be disposed to accept that belief upon reflection. By doing so, however, the subject guarantees itself an inconsistent belief-set; there is no possible interpretation under which all of its beliefs are true. Should this fact by itself worry it (or us?).)
After Reiter has proved that the problem of determining whether an arbitrary sentence is in an extension for a given default theory is undecidable, he comments:

(A)ny proof theory whatever for... default theories must somehow appeal to some inherently non semi-decidable process. [That is, the proof-relation, not just the proof predicate, is non-recursive; the proofs, not just the theorems, are not recursively enumerable. Why such a beast is to be called a logic is somewhat beyond me - D.I.] This extremely pessimistic result forces the conclusion that any computational treatment of defaults must necessarily have an heuristic component and will, on occasion, lead to mistaken beliefs. Given the faulty nature of human common sense reasoning, this is perhaps the best one could hope for in any event.

Now once again substitute in the above "(scientific or common sense) reasoning" for "default", and then reflect on how odd it is to think that there could be a purely proof-theoretic treatment of scientific reasoning. A heuristic treatment, that is, a treatment in terms of rational epistemic policies, is not just the best we could hope for. It is the only thing that makes sense. (Of course, if we are very fortunate we may be able to develop a "syntactic" encoding of these policies; but we certainly mustn't expect to come up with rules for rational belief fixation that are actually provably truth-preserving. Once again, the only thing that makes sense is to hope to formulate a set of rules which, from within our current theory of the world and of ourselves as both objects within and inquirers about that world, can be argued to embody rational policies for extending our admittedly imperfect grasp of things.)

Inference (reasoning) is non-monotonic: new information (evidence) and further reasoning on old beliefs (including, but by no means limited to, reasoning about the semantic relationships - e.g., of entailment - among beliefs) can and does lead to the revision of our theories and, of course, to revision by "subtraction" as well as by "addition". Entailment and derivability are monotonic. That is, logic - the logic we have, know, and - if we understand its place in the scheme of things - have every reason to love, is monotonic.

BRIEF POSTSCRIPT

I've been told that the tone of this paper is overly critical; or rather, that it lacks constructive content. A brief postscript is not the appropriate locus for correcting this defect; but it may be an appropriate place for casting my vote for a suggestion made by John McCarthy. In his "Epistemological Problems of Artificial Intelligence" [4], McCarthy characterizes the epistemological part of "the AI problem" as follows: "(it) studies what kinds of facts about the world are available to an observer with given opportunities to observe, how these facts can be represented in the memory of a computer, and what rules permit legitimate conclusions to be drawn from these facts." [Emphasis added.] This, though brief, is just about right, except for a perhaps studied ambiguity in that final clause. Are the conclusions legitimate because they are entailed by the facts? (Are the rules provably sound rules of transformation?) Or are the conclusions legitimate because they constitute essential (non-redundant) parts of the best of the competing explanatory accounts of the original data; the best by our own, no doubt somewhat dim, lights? (Are the rules arguably rules of rational acceptance?)

At the conclusion of his paper, McCarthy disambiguates and opts for the right reading. In the context of an imaginative discussion of the Game of Life cellular automaton, he notes that "the program in such a computer could study the physics of its world by making theories and experiments to test them and might eventually come up with the theory that its fundamental physics is that of the Life cellular automaton. We can test our theories of epistemology and common sense reasoning by asking if they would permit the Life-world computer to conclude, on the basis of its experiments, that its physics was that of Life." McCarthy continues: "More generally, we can imagine a metaphilosophy that has the same relation to philosophy that metamathematics has to mathematics. Metaphilosophy would study mathematical (? - D.I.) systems consisting of an 'epistemologist' seeking knowledge in accordance with the epistemology to be tested and interacting with a 'world'. It would study what information about the world a given philosophy would obtain. This would depend also on the structure of the world and the 'epistemologist's' opportunities to interact. AI could benefit from building some very simple systems of this kind, and so might philosophy."

Amen; but might I note that such a metaphilosophy does exist. Do some substituting again: for "philosophy" (except in its last occurrence), substitute "science"; for "epistemologist", "scientist"; for "epistemology", either "philosophy of science" or "scientific methodology". The moral is, I hope, clear. Here is my constructive proposal: AI researchers interested in "the epistemological problem" should look neither to formal semantics nor to proof theory, but to - of all things - the philosophy of science and epistemology.

REFERENCES

[1] McDermott, D., Doyle, J. "Non-Monotonic Logic I", AI Memo 486, MIT Artificial Intelligence Laboratory, Cambridge, Mass., August 1978.
[2] McDermott, D. "Non-Monotonic Logic II", Research Report 174, Yale University Department of Computer Science, New Haven, Conn., February 1980.
[3] Reiter, R. "A Logic for Default Reasoning", Technical Report 79-8, University of British Columbia Department of Computer Science, Vancouver, B.C., July 1979.
[4] McCarthy, J. "Epistemological Problems of Artificial Intelligence", In Proc. IJCAI-77, Cambridge, Mass., August 1977, pp. 1038-1044.
 | 
	1980 
 | 
	13 
 | 
					
5 
							 | 
PATHOLOGY ON GAME TREES: A SUMMARY OF RESULTS*

Dana S. Nau
Department of Computer Science
University of Maryland
College Park, MD 20742

ABSTRACT

Game trees are widely used as models of various decision-making situations. Empirical results with game-playing computer programs have led to the general belief that searching deeper on a game tree improves the quality of a decision. The surprising result of the research summarized in this paper is that there is an infinite class of game trees for which increasing the search depth does not improve the decision quality, but instead makes the decision more and more random.

I INTRODUCTION

Many decision-making processes are naturally modeled as perfect information games between two players [3, 7]. Such games are generally represented as trees whose paths represent various courses the game might take. In artificial intelligence, the well-known minimax procedure [2, 7] is generally used to choose moves on such trees.

If a correct decision is to be guaranteed using minimaxing, substantial portions of the game tree must be searched, even when using tree-pruning techniques such as alpha-beta [2, 7]. This is physically impossible for large game trees. However, good results have been obtained by searching the tree to some limited depth, estimating the minimax values of the nodes at that depth using a heuristic evaluation function, and computing the minimax values for shallower nodes as if the estimated values were correct [2, 7]. There is almost universal agreement that when this is done, increasing the search depth increases the quality of the decision. This has been dramatically illustrated with game-playing computer programs [1, 8, 9], but such results are purely empirical.

The author has developed a mathematical theory modeling the effects of search depth on the probability of making a correct decision. This research has produced the surprising result that there is an infinite class of game trees for which, as long as the search does not reach the end of the tree (in which case the best possible decision could be guaranteed), deeper search does not improve the decision quality, but instead makes the decision more and more random. For example, Figure 1 illustrates how the probability of correct decision varies.

[FIGURE 1. Probability of correct decision as a function of search depth (0 through 16) on the game tree G(1,1), for five different evaluation functions. On G(1,1), the probability of correct decision is 0.5 if the choice is made at random. For each of the five functions, this value is approached as the search depth increases.]

* This work was supported in part by a National Science Foundation graduate fellowship, in part by a James B. Duke graduate fellowship, and in part by N.S.F. grant number ENG-7822159 to the Laboratory for Pattern Analysis at the University of Maryland. The results discussed in this paper are presented in detail in the author's Ph.D. dissertation [4].

Section 2 of this paper summarizes the mathematical model used in this research, Section 3 presents the main result, and Section 4 contains concluding remarks.

II THE MATHEMATICAL MODEL

Let G be a game tree for a game between two players named Max and Min. Nodes where it is Max's or Min's move are called max and min nodes, respectively. Assume that G has no draws (this restriction can easily be removed, but it simplifies the mathematics). Then if G is finite, every node of G is a forced win for either Max or Min. Such nodes are called "+" nodes and "-" nodes, respectively. If G is infinite, not every node need be a "+" or "-" node, but the "+" and "-" labeling can easily be extended to all nodes of G in a way which is consistent with all finite truncations of G.

Correct decisions for Max and Min are moves leading to "+" and "-" nodes, respectively. "+" max nodes (which we call S nodes) may have both "+" and "-" children; "+" min nodes (T nodes) have only "+" children; "-" min nodes (U nodes) may have both "+" and "-" children; and "-" max nodes (V nodes) have only "-" children. Thus it is only at the S and U nodes that it makes a difference what decision is made. These nodes are called critical nodes.

An evaluation function on G may be any mapping e from the nodes of G into a set of numbers indicating how good the positions are estimated to be. For computer implementation, the range of e must be finite. We take this finite set to be {0, 1, ..., r}, where r is an integer. Ideally, e(g) would equal r if g were a "+" node and 0 if g were a "-" node, but evaluation functions are usually somewhat (and sometimes drastically) in error. Increasing the error means decreasing e(g) if g is a "+" node and increasing e(g) if g is a "-" node. Thus if we assume that the errors made by e are independent and identically distributed, the p.d.f. f for the values e returns on "+" nodes is a mirror image of the p.d.f. h for the values e returns on "-" nodes; i.e., f(x) = h(r-x), x = 0, 1, ..., r. f may be represented by the vector P = (f(0), f(1), ..., f(r)), which is called the probability vector for e.

III RESULTS

The probability vector for e induces probability vectors on the minimax values of the nodes of G, and the probability of making a correct decision at any critical node g of G is a function of the probability vectors for the minimax values of the children of g.
This probability is thus determined by the structure of the subtree rooted at g, and little can be said about it in general. However, if G has a sufficiently regular structure, the properties of this probability can be analyzed.

Let m and n be positive integers, and let G(m,n) be the unique game tree for which

1. the root is an S node (this choice is arbitrary and the results to follow are independent of it);
2. each critical node has m children of the same sign and n children of opposite sign;
3. every node has m+n children.

G(m,n) is illustrated in Figure 2.

[FIGURE 2. The game tree G(m,n). Min nodes are indicated by the horizontal line segments drawn beneath them.]

If moves are chosen at random on G(m,n), the probability that a correct choice is made at a critical node is obviously m/(m+n). If the choice is made using a depth d minimax search and an evaluation function with probability vector P, it is proved [4, 5] that the probability that the decision is correct depends only on m, n, P, and d. We denote this probability by p_{m,n}(P,d). The trees G(m,n) have the following surprising property.

Theorem 1. For almost every* probability vector P and for all but finitely many values of m and n,

    lim_{d→∞} p_{m,n}(P,d) = m/(m+n).

Thus, as the search depth increases, the probability of correct decision converges to what it would be if moves were being chosen at random. This pathological behavior occurs because as the search depth increases it becomes increasingly likely that all children of a critical node receive the same minimax value, whence a choice must be made at random among them.

Figure 1 illustrates Theorem 1 on the game tree G(1,1), using five different values of P. The significance of Theorem 1 for finite games is that infinitely many finite games can be generated by truncating G(m,n) in whatever way desired. Deeper search on these trees will yield increasingly random decisions as long as the search does not reach the end of the tree.

Additional theoretical and experimental results reported elsewhere [4, 5, 6] provide additional information about which of the G(m,n) are pathological and why. Theorem 1 almost certainly extends to a much larger class of game trees, but the irregular structure of most game trees would require a much more complicated proof.
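The convergence claimed in Theorem 1 is easy to observe empirically. The following is a minimal sketch, not code from the paper: it expands G(1,1) to a given depth, scores frontier nodes with a crude invented evaluation function (one that misjudges a node's sign with a fixed probability, giving mirror-image p.d.f.s as in the model), and estimates how often a depth-d search at the root S node prefers the correct child. All names (noisy_eval, prob_correct, the ERROR rate) are hypothetical.

import random

# Node types of G(1,1): S = "+" max, T = "+" min, U = "-" min, V = "-" max.
# Each critical node (S, U) has one child of each sign; the others do not.
CHILDREN = {'S': ('T', 'U'), 'T': ('S', 'S'), 'U': ('V', 'S'), 'V': ('U', 'U')}
IS_MAX = {'S': True, 'V': True, 'T': False, 'U': False}
IS_PLUS = {'S': True, 'T': True, 'U': False, 'V': False}

R = 10        # evaluation values range over {0, ..., R}
ERROR = 0.2   # hypothetical chance that e misjudges a node's sign

def noisy_eval(node):
    """Invented evaluation function e: usually high on "+" nodes and
    low on "-" nodes, but wrong with probability ERROR."""
    high = random.randint(R // 2 + 1, R)
    low = random.randint(0, R // 2)
    wrong = random.random() < ERROR
    return (low if wrong else high) if IS_PLUS[node] else (high if wrong else low)

def minimax(node, depth):
    """Depth-limited minimax with e applied at the frontier."""
    if depth == 0:
        return noisy_eval(node)
    values = [minimax(c, depth - 1) for c in CHILDREN[node]]
    return max(values) if IS_MAX[node] else min(values)

def prob_correct(depth, trials=1000):
    """Estimate the probability that searching to the given depth below
    the root's children prefers the correct "+" child T over the "-"
    child U; ties are broken at random (counted as 0.5)."""
    score = 0.0
    for _ in range(trials):
        t, u = minimax('T', depth), minimax('U', depth)
        score += 1.0 if t > u else 0.5 if t == u else 0.0
    return score / trials

for d in range(0, 11, 2):
    print(f"depth {d:2d}: estimated P(correct) = {prob_correct(d):.3f}")

Under these assumptions the printed estimates drift from well above 0.5 at depth 0 toward m/(m+n) = 0.5 as the depth grows, which is the pathology of Theorem 1.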
IV CONCLUSIONS

The author believes that the pathology of the trees G(m,n) indicates an underlying pathological tendency present in most game trees. However, in most games this tendency appears to be overridden by other factors. Pathology does not appear to occur in games such as chess or checkers [1, 8, 9], but it is no longer possible blithely to assume (as has been done in the past) that searching deeper will always result in a better decision.

REFERENCES

1. Biermann, A. W. Theoretical issues related to computer game playing programs. Personal Comput. (Sept. 1978), 86-88.
2. Knuth, D. E., and Moore, R. W. An analysis of alpha-beta pruning. Artif. Intel. 6 (1975), 293-326.
3. LaValle, I. H. Fundamentals of Decision Analysis. Holt, Rinehart and Winston, New York, 1978.
4. Nau, D. S. Quality of decision versus depth of search on game trees. Ph.D. Dissertation, Duke University (Aug. 1979).
5. Nau, D. S. Decision quality as a function of search depth on game trees. Tech. Report TR-866, Computer Sci. Dept., Univ. of Md. (Feb. 1980). Submitted for publication.
6. Nau, D. S. The last player theorem. Tech. Report TR-865, Computer Sci. Dept., Univ. of Md. (Feb. 1980). Submitted for publication.
7. Nilsson, N. J. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill, New York, 1971.
8. Robinson, A. L. Tournament competition fuels computer chess. Science 204 (1979), 1396-1398.
9. Truscott, T. R. Minimum variance tree searching. Proc. First Int. Symp. on Policy Anal. and Inf. Syst. (1979), 203-209.

* A property holds for almost every member of a set if it holds everywhere but on a subset of measure zero. Thus for any continuous p.d.f. on the set, the probability of choosing a member of the set to which the property does not apply is 0.
 | 
	1980 
 | 
	14 
 | 
					
6 
							 | 
MAX-MIN CHAINING OF WEIGHTED ASSERTIONS IS LOOP-FREE

S. W. Ng and Adrian Walker
Work performed at Rutgers University*

ABSTRACT

If a system uses assertions of the general form "x causes y" (e.g. MYCIN rules), then loop situations in which X1 causes X2, X2 causes X3, ..., Xn causes X1 are, intuitively, best avoided. If an assertion has an attached confidence weight, as in "x (0.8)-causes y", then one can choose to say that the confidence in a chain of such assertions is as strong as the weakest link in the chain. If there are several chains of assertions from X to Z, then one can choose to say that X causes Z with a confidence equal to that of the strongest chain. From these choices, it follows that the confidence that X causes Z corresponds to a loop-free chain of assertions. This is true even if there are chains from X to Z with common subchains and loops within loops. An algorithm for computing the confidence is described.

I INTRODUCTION and TERMINOLOGY

There is currently considerable interest in representing knowledge about a practical situation in the form of weighted cause-effect or situation-action rules, and in using the knowledge so represented in decision-making systems. For example, in medical decision making systems, the rules may represent causal trends in a disease process in a patient [6], or the rules may represent trends in the decision process of a physician who is diagnosing and treating a patient [2,4]. In such representations, the chaining together of rules can be written as a weighted, directed graph. In MYCIN [2] the graphs are like and-or trees, while in OCKHAM [3,4,5] the graphs may have loops. This paper presents a result which appears in [1]. From the result it follows that, using the max and min operations, a graph containing loops can be interpreted as though it were loop-free.

* Authors' present addresses: S. W. Ng, 6F Wing Hing Street, Hong Kong. Adrian Walker, Bell Laboratories, Murray Hill, NJ.

The kind of graph in question is a stochastic graph (sg) consisting of nodes N = {1, 2, ..., n} and a function P from N x N to the real numbers w, 0 ≤ w ≤ 1. P is such that, for each i ∈ N, ∑_{j=1}^{n} P(i,j) ≤ 1. If P(i,j) = w, then w is the weight of the arc from node i to node j. A path in an sg is a string n_1 ... n_l ∈ N+ such that P(n_k, n_{k+1}) > 0 for 1 ≤ k < l. n_2, ..., n_{l-1} are intermediate nodes of n_1 ... n_l. A path n_1 ... n_l of a graph is said to have a loop if n_i = n_j for some i, j such that either 1 ≤ i < j < l or 1 < i < j ≤ l. Otherwise the path is loop-free. The weight of a path n_1 n_2 ... n_l of an sg is the minimum over 1 ≤ i < l of the weight of the arc from n_i to n_{i+1}.
The k-weight w_ij^k from node i to node j of a graph is the maximum of the weights of all the paths from i to j having no intermediate node with number higher than k. The weight w_ij from node i to node j of an sg is w_ij^n.

II EXAMPLES

This section gives examples of potential causal loops, in MYCIN [2] and in OCKHAM [3,4,5], and it shows how these loops are avoided by the use of the maximum and minimum operations.

A. A MYCIN Example

Consider a set of MYCIN rules

  B ∧ C (1.0)→ A
  B (1.0)→ D
  D ∨ E (0.5)→ B
  G ∧ H (0.5)→ B

and suppose that C, E, G, and H are known with confidences 0.9, 0.8, 0.5, 0.4, respectively. Writing c(X) for the confidence in X, confidences propagate through rules by:

  c(Z) = w · max(c(X), c(Y))  for  X ∨ Y (w)→ Z

and

  c(Z) = w · min(c(X), c(Y))  for  X ∧ Y (w)→ Z.

The greatest confidence which can be computed in A is c(A) = 0.4 by the tree

  A ← ((B ← (D ← (B ← G ∧ H)) ∨ E) ∧ C)

B occurs twice, so the tree can be thought of as a graph with a loop. However, the value of c(A) depends only on the loop-free path EBA.

B. An OCKHAM Example

The following set of OCKHAM [3,4,5] rules is intended to show a strategy of a person who is deciding whether to stay at home, go to the office, or to try for a standby flight to go on vacation. The external factors project deadline, snowstorm, project completed, and another flight influence the choice by placing the arc(s) so labelled in a stochastic graph. The rules are:

  HOME (project deadline, 1.0)→ OFFICE
  OFFICE (snowstorm, 0.5)→ HOME
  OFFICE (project completed, 0.5)→ AIRPORT-STANDBY
  AIRPORT-STANDBY (another flight, 0.25)→ AIRPORT-STANDBY
  AIRPORT-STANDBY (snowstorm, 0.75)→ HOME

These rules make up a stochastic graph with nodes HOME, OFFICE, and AIRPORT-STANDBY. If all of the external factors project deadline, snowstorm, project completed, and another flight are true, then the graph has five arcs and multiple loops. If the weight from HOME to AIRPORT-STANDBY is considered, then it turns out to be 0.5. The corresponding path, HOME-OFFICE-AIRPORT-STANDBY, is loop-free.

III ALGORITHM and RESULTS

The algorithm MAXMIN, shown below, computes the weight from a given node to another node in an sg. Note that, by Step 2, MAXMIN runs in O(n^3) time.

MAXMIN
Input: A stochastic graph of n nodes
Output: n^2 real numbers
Step 1: for 1 ≤ i,j ≤ n do B_ij^0 := P(i,j)
Step 2: for k := 1 to n do
          for 1 ≤ i,j ≤ n do
            B_ij^k := max(B_ij^{k-1}, min(B_ik^{k-1}, B_kj^{k-1}))
Step 3: for 1 ≤ i,j ≤ n do output B_ij^n
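MAXMIN is the (max, min) analogue of the Floyd-Warshall shortest-path recurrence, and it transcribes almost directly into code. The sketch below is mine, not the authors': it assumes the graph is given as an n x n matrix in which a weight of 0.0 means "no arc", and it updates B in place rather than keeping B^{k-1} and B^k distinct, a standard simplification that does not change the result. It is checked against the OCKHAM example above.

def maxmin(P):
    """Given the arc-weight matrix P of a stochastic graph
    (P[i][j] = 0.0 meaning no arc from i to j), return the matrix
    of max-min weights w_ij, in the manner of algorithm MAXMIN."""
    n = len(P)
    B = [row[:] for row in P]          # Step 1: B := P
    for k in range(n):                 # Step 2: admit node k as an
        for i in range(n):             #   intermediate node
            for j in range(n):
                B[i][j] = max(B[i][j], min(B[i][k], B[k][j]))
    return B                           # Step 3: B[i][j] = w_ij

# The OCKHAM example with all external factors true;
# nodes: 0 = HOME, 1 = OFFICE, 2 = AIRPORT-STANDBY.
P = [[0.0,  1.0,  0.0],
     [0.5,  0.0,  0.5],
     [0.75, 0.0,  0.25]]
print(maxmin(P)[0][2])  # 0.5, via the loop-free path HOME-OFFICE-AIRPORT-STANDBY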
The properties of paths, path weights, and the values B_ij^k, described in the Lemma below, are established in Appendix I.

Lemma. In an sg of n nodes, the following statements hold for 1 ≤ i,j ≤ n and for 0 ≤ k ≤ n:

(i) If w_ij^k > 0, then there exists a loop-free path from i to j whose weight is w_ij^k;
(ii) B_ij^k = w_ij^k.

Setting k = n in parts (i) and (ii) of the Lemma yields the two results:

Result 1. In any sg the weight w_ij, that is, the maximum path weight over all paths from i to j, is equal to the maximum over only the loop-free paths from i to j.

Result 2. If MAXMIN receives as input an sg with n nodes, then, for 1 ≤ i,j ≤ n, the output B_ij^n is equal to the weight w_ij from node i to node j of the graph.

Result 1 establishes a property of any sg, namely that the weight from one node to another is the weight of some loop-free path, while Result 2 establishes that MAXMIN is one possible algorithm for finding such weights.

IV CONCLUSIONS

In a system in which weighted causal assertions can be combined into causal paths and graphs, causal loops can occur. Common sense about everyday causality suggests that such loops are best avoided. If the weight of a path is chosen to be the minimum of the individual arc weights, and the net effect of a start node on a final node is chosen to be the maximum of the path weights from the start node to the final node, then the weights (by whatever algorithm they are computed) are independent of the presence of loops in the underlying graph. There is a simple O(n^3) algorithm to compute these weights.

ACKNOWLEDGEMENT

Thanks are due to A. Van der Mude for helpful comments.

APPENDIX I

Proof of Lemma. Let k = 0. If w_ij^0 > 0, then by definition, there exists a path from i to j having no intermediate nodes, whose weight is w_ij^0. Clearly this path is ij, which is loop-free. So we may write γ_ij^0 = ij, where γ_ij^k denotes a path from i to j having no intermediate node with number greater than k. Then w_ij^0 = P(i,j) = B_ij^0. If w_ij^0 = 0, then there is no such path, and B_ij^0 = 0.

Suppose, by way of inductive hypothesis, that for 1 ≤ i,j ≤ n and for some (k-1) < n,

(i) if w_ij^{k-1} > 0 then there is a loop-free path γ_ij^{k-1} from i to j with each intermediate node at most k-1, whose weight is w_ij^{k-1}, and
(ii) B_ij^{k-1} = w_ij^{k-1}.

If w_ij^k > 0 then there is a path γ from i to j whose weight is w_ij^k. γ is such that either

(A) each intermediate node of γ is at most (k-1), or
(B) γ goes from i to k; from k to k some number of times; then from k to j, with each intermediate node of each subpath being at most (k-1).

This is because (A) and (B) exhaust all possibilities. In case (A) it is clear that w_ij^k = w_ij^{k-1}, and the inductive step for part (i) of the Lemma is completed with γ_ij^k = γ_ij^{k-1}. In case (B), it follows from our induction hypothesis that there exist loop-free paths γ_ik^{k-1}, γ_kk^{k-1}, γ_kj^{k-1} with weights w_ik^{k-1}, w_kk^{k-1}, w_kj^{k-1} respectively.
Let w = min(w_ik^{k-1}, w_kj^{k-1}) and w' = w_kk^{k-1}, and consider the subcases (B1) in which γ goes from k to k zero times, and (B2) in which γ goes from k to k one or more times. In (B1) the weight of γ is clearly w, while in (B2) it is min(w, w'). Hence, from the definition of w_ij^k, we have w_ij^k = max(w, min(w, w')), which is simply w. So part (i) of the Lemma holds with γ_ij^k = γ_ik^{k-1} γ_kj^{k-1}. From part (ii) of the inductive hypothesis, and from Step 2 of the MAXMIN algorithm, it follows that B_ij^k = max(w_ij^{k-1}, w). So B_ij^k = max(w_ij^{k-1}, w) = w_ij^k, since it follows from the definition of w_ij^k that w_ij^k ≥ w_ij^{k-1}. So in either of the cases (A) and (B), B_ij^k = w_ij^k, which establishes part (ii) of the Lemma for the case w_ij^k > 0.

If w_ij^k = 0 then there is no path from i to j. Suppose B_ij^k ≠ 0. Then either w_ij^{k-1} ≠ 0, or both of w_ik^{k-1}, w_kj^{k-1} are nonzero. In each case there is a path from i to j, a contradiction. So if w_ij^k = 0 then B_ij^k = w_ij^k. □

REFERENCES

[1] Ng, S. W., and A. Walker. "Max-min Chaining of Weighted Assertions is Loop-free", CBM-TR-73, Dept. of Comp. Sci., Rutgers University, 1977.
[2] Shortliffe, E. Computer Based Medical Consultations: MYCIN. American Elsevier, 1976.
[3] Van der Mude, A. and A. Walker. "On the Inference of Stochastic Regular Grammars", Information and Control, 38:3 (1978), 310-329.
[4] Walker, A. "A Framework for Model Construction and Model Based Deduction in a System with Causal Loops", In Proc. Third Illinois Conf. Med. Info. Syst., 1976.
[5] Walker, A. "On the Induction of a Decision Making System from a Data Base", CBM-TR-80, Dept. of Comp. Sci., Rutgers University, 1977.
[6] Weiss, S. "Medical Modeling and Decision Making", CBM-TR-27, Dept. of Comp. Sci., Rutgers University, 1974.
 | 
	1980 
 | 
	15 
 | 
					
7 
							 | 
Applying General Induction Methods to the Card Game Eleusis

Thomas G. Dietterich
Department of Computer Science
Stanford University
Stanford, CA 94305

Abstract

Research was undertaken with the goal of applying general universally-applicable induction methods to complex real-world problems. The goal was only partially met. The chosen domain - the card game Eleusis - was still somewhat artificial, and the universally-applicable induction methods were found to be lacking in important ways. However, the resulting Eleusis program does show that by using knowledge-based data interpretation and rule evaluation techniques and model-fitting induction techniques, general induction methods can be used to solve complex problems.

Introduction

Work in the area of computer induction is characterized by a continuum from general, universally-applicable methods [5, 6, 7, 9, 10, 12] to specific, problem-oriented methods [2, 8, 11]. The general-purpose methods have been criticized for lacking the power to operate in real-world domains. Problem-oriented methods have been criticized for being too specialized to be applied to any problems outside their original domains. This paper describes an attempt to bridge this gap by applying general-purpose induction algorithms to the problem of inducing secret rules in the card game Eleusis. Further details are available in [3].

A Program for Eleusis

Eleusis (developed by Robert Abbott [1, 4]) is a card game in which players attempt to induce a secret rule invented by the dealer. The secret rule describes a linear sequence of cards. In their turns, the players attempt to extend this sequence by playing additional cards from their hands. The dealer gives no information aside from indicating whether or not each play is correct. Players are penalized for incorrect plays by having additional cards added to their hands. The game ends when a player empties his hand.

A record of the play is maintained as a layout (Figure 1) in which the top row, or mainline, contains all of the correctly-played cards in sequence. Incorrect cards are placed in side lines below the main line card which they follow.

  mainline: 3H QS 4C JD 2C 10D 8H 7H 2C 5H
  sidelines: JD AH AS 10H 5D 8H 10S QD

  Rule 1: "If the last card is odd, play black; if the last card is even, play red."

  Figure 1. Sample Eleusis Layout (after [1]).

This research sought to develop a program which could serve as an intelligent assistant to a human Eleusis player. The program needed to be able to:

- discover rules which plausibly describe the layout,
- accept rules typed by the user and test them against the layout,
- extend the layout by suggesting cards to be played from the player's hand.

Although Eleusis is artificial and noise-free, it is sufficiently complex to provide a reasonable test bed for inductive techniques. The development of an intelligent assistant required not only basic induction methods but also extensive deduction techniques for testing rules and extending the layout.
Problems with Existing Induction Methods

While designing the rule-discovery portion of the Eleusis program, existing induction algorithms [5, 6, 7, 9, 10, 12] were examined and found to be lacking in three fundamental ways. The first major problem with some of these algorithms is their emphasis on conjunctive generalizations. Many Eleusis rules are disjunctive. For example, Rule 1 can be written as:

  ∀i {odd(card_{i-1}) ∧ black(card_i) ∨ even(card_{i-1}) ∧ red(card_i)}

The second major problem with these algorithms is that they make implicit assumptions concerning plausible generalizations - assumptions which are not easily modified. Of the algorithms examined, only Mitchell's version space algorithm [10] maintains information concerning all rules consistent with the data (and his algorithm is still oriented toward conjunctive generalization). The algorithms of Hayes-Roth and Vere both seek the most specific rule consistent with the data, while Michalski's Aq algorithm seeks a disjunctive description with the fewest conjunctive terms. In contrast, the plausibility heuristics for Eleusis are:

- Choose rules with intermediate degree of generality. (Justification: the dealer is unlikely to choose a rule which is overly general because it would be too difficult to discover. Conversely, overly specific rules are easily discovered because they lead to the creation of numerous counter-examples during play.)
- Choose disjunctive rules based on symmetry. (Justification: Rule 1 is an excellent example of a symmetric disjunctive rule. Most often in Eleusis, the terms of a disjunction define mutually exclusive cases which have some symmetric relationship to each other. The dealer is very likely to choose such rules because they are not too hard - nor too easy - to discover.)

(These plausibility heuristics are based on the assumption that the dealer is rational and that he is attempting to maximize his own score (according to the rules of the game). This is an artificial assumption. It is very rare in science that we have such insight into nature. However, in all domains plausibility criteria must be available - otherwise, we don't know what we are searching for.)

The third major problem with using general-purpose induction techniques in Eleusis is that the raw data of the Eleusis layout are not in a form suitable for generalization. (Many researchers [2, 11] have pointed out this problem in other domains.) One aspect of this problem is evident in Rule 1: neither color nor parity is explicit in the representation of the cards. Another difficulty is that the sequential ordering of the cards is implicit in their position in the layout. It must be made explicit in order to discover rules like Rule 1.

Two techniques were developed to address these problems. First, in order to avoid an exhaustive search of rule space and at the same time avoid the "tunnel vision" of existing algorithms, rule models were developed to guide the induction process. Secondly, in order to transform the input data into a form appropriate for generalization, a series of knowledge-based processing layers were created.
Induction by Model-Fitting

By analogy with traditional statistical time-series analysis, the program uses a model-fitting approach to induction. The term model denotes a syntactic or functional skeleton which is fleshed out by the induction algorithms to form a rule. In traditional regression analysis, for example, the model is the regression polynomial whose coefficients must be determined by induction from the data. Properly chosen models can strongly constrain the search required for induction. After looking at several Eleusis games, the following models were designed for the Eleusis program:

- Decomposition. This model specifies that the rule must take the form of an exclusive disjunction of if-then rules. The condition parts of the rules must refer only to cards prior to the card to be predicted. The action parts of the if-then rules describe correct plays given that the condition parts are true. The condition parts must be mutually exclusive conjunctive descriptions. The action parts are also conjunctions. Rule 1 fits the decomposition model:

    ∀i odd(card_{i-1}) ⇒ black(card_i) ∨ even(card_{i-1}) ⇒ red(card_i)

- Periodic. A rule of this model describes the layout as a periodic function. For example, Rule 2 (Figure 2) is a periodic rule. The layout is split into phases according to the length of the proposed period. The periodic model requires that each phase have a conjunctive description.

    JC 4D QH 3S QD 9H QC 7H QD 9D QC 3H KC 5S 4S 10D 7S
    phase 0: JC QH QD QC QD QC 5S 4S 10D 7S
    phase 1: 4D 3S 9H 7H 9D 3H KC

    Rule 2 (periodic rule with length 2):
    phase 0: ∀i faced(card_i)
    phase 1: ∀i nonfaced(card_i)

    Figure 2. A Periodic Rule.

- Disjunctive Normal Form (DNF) with fewest terms. The Aq algorithm (Michalski [9]) is used to discover rules which have the fewest number of separate conjunctions. The Aq algorithm was given heuristics to guide it towards symmetric, disjoint disjunctive terms.

By definition, not all Eleusis rules can be represented using these three models. But these models, when combined with segmentation (see below), cover all but one or two of the Eleusis rules which I have seen.

For each of these models, an algorithm was developed to fit the data to the model. In order to fit the data to the decomposition model, the program must determine which variables to decompose on, i.e. which variables to test in the condition part of the rule (Rule 1 decomposes on parity ∈ {odd, even}). The program must also decide how far into the past this decomposition should apply (i.e. do we look at just the most recent card, or the two most recent cards, ..., etc.). Once the decomposition variables and the degree of lookback are determined, the algorithm must find a conjunctive description for the action parts of the rules.

The program uses a generate-and-test approach. First, it considers rules which look back only one card, then two cards, and so on until a rule consistent with the data is found. To determine the decomposition variable(s), it generates trial rules by decomposing on each variable in turn and chooses the variable which gives the simplest rule.
If the resulting rule is not consistent with the data, the layout is decomposed into sub-layouts based on the chosen variable, and a second decomposition variable is again determined by generating trial decompositions and selecting the simplest. This process is repeated until a rule consistent with the data is found. (This is a beam search with a beam width of 1.)

In order to fit the periodic model, the program chooses a length for the period, splits the layout into separate phases, and finds a conjunctive description of each phase. Since the rule is more plausible if the descriptions of each phase are mutually exclusive, the algorithm attempts to remove overlapping conditions in the descriptions of the different phases. Again, a generate-and-test approach is used to generate periodic rules with different length periods (from length 1 upwards) until an acceptable rule is discovered. The Aq algorithm is used to fit data to the DNF model.
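As an illustration of the periodic-model fitting just described, here is a minimal sketch; it is mine, not code from the program. It assumes a small hypothetical feature extractor standing in for layer 4 (covering only a few of the attributes the program actually computes), uses the most specific conjunction shared by a phase as that phase's description, and omits the mutual-exclusivity repair step. The names features, conjunctive_description, and fit_periodic are invented.

def features(card):
    """Hypothetical stand-in for layer 4: make a few card attributes
    explicit.  Cards are strings like "QH" or "10D"."""
    rank, suit = card[:-1], card[-1]
    return {
        "color": "red" if suit in "HD" else "black",
        "faced": rank in ("J", "Q", "K"),
        "parity": "odd" if rank in ("A", "3", "5", "7", "9", "J", "K")
                  else "even",
    }

def conjunctive_description(cards):
    """Most specific conjunction covering every card in one phase:
    keep only the feature-value pairs shared by all of the cards."""
    descs = [features(c) for c in cards]
    return {k: v for k, v in descs[0].items()
            if all(d[k] == v for d in descs[1:])}

def fit_periodic(layout, max_period=4):
    """Generate-and-test over period lengths, shortest first: split the
    layout into phases and accept the first period for which every
    phase has a non-empty conjunctive description."""
    for p in range(1, max_period + 1):
        phases = [layout[i::p] for i in range(p)]
        rules = [conjunctive_description(ph) for ph in phases]
        if all(rules):  # every phase must have some shared description
            return p, rules
    return None

# A faced/non-faced alternation in the spirit of Rule 2 (period 2):
layout = ["JC", "4D", "QH", "3S", "QD", "9H", "QC", "7H", "QD", "9D", "QC", "3H"]
print(fit_periodic(layout))
# -> (2, [{'faced': True}, {'faced': False}])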
The result of the preprocessing of layers 5 through 2 is that layer 1 is called with a specific model for which the degree of lookback and (optionally) length of period have been specified, and with a set of unordered events to which the model is to be fitted. Layer 1 actually performs the model-fitting using one of the three model-fitting induction algorithms.

Once the rules have been induced, they are passed back up through the layers for evaluation. Layer 2 evaluates the rules using knowledge about ordering (e.g. guaranteeing that the rule doesn't lead to a dead end). Layer 3 checks that the rules are consistent with the segmentation it performed (in particular, the boundary values cause some problems). Layer 4 evaluates the rules according to the heuristics for plausible rules in Eleusis. Finally, layer 5 prints any rules which survive this evaluation process.

    Layout: AH 7C 6C 9S 10H 7H 10D JC AD 4H 8D 7C KD 6S QD 3S JH

    Rule 3: "Play odd-length strings of cards where color is constant
    within each string."

    The segmented layout looks like this (color, length):
    (red, 1) (black, 3) (red, 3) (black, 1) (red, 3)

    Figure 5. A Segmentation-based Rule.

The program works well. The three rule models, when combined with segmentation, span a search space of roughly 10^183 possible rules (several control parameters affect the size of this space). The program generates and tests roughly 19 different parameterizations of the three models in order to choose three to five plausible rules. It runs quite quickly (less than seven seconds, on a Cyber 175, in the worst case so far). The rules developed are similar to those invented by humans playing the same games (15 complete games have been analyzed).

Conclusion

General induction techniques can be used to solve complex learning tasks, but they form only part of the solution. In the Eleusis domain, data interpretation, rule evaluation, and model-directed induction were all required to develop a satisfactory program.

A degree of generality was obtained by segregating the functions of the program into layers according to the generality of the knowledge they required. This should allow the program to be applied to similar tasks merely by "peeling off" and replacing its outer layers.
Acknowledgments

Thanks go to R. S. Michalski, my M.S. thesis advisor, for suggesting Eleusis as a domain and for providing numerous ideas including the basic idea for the decomposition algorithm. Thanks also to David Shur for proofreading this paper. This research was supported by NSF grant no. MCS-76-22940.

References

[1] Abbott, Robert, "The New Eleusis," available from Abbott at Box 1175, General Post Office, New York, NY 10001 ($1.00).

[2] Buchanan, B. G., D. H. Smith, W. C. White, R. J. Gritter, E. A. Feigenbaum, J. Lederberg, and C. Djerassi, Journal of the American Chemical Society, 98 (1976), p. 6168.

[3] Dietterich, Thomas G., "The Methodology of Knowledge Layers for Inducing Descriptions of Sequentially Ordered Events," M.S. Thesis, Dept. of Comp. Sci., Univ. of Illinois, Urbana, October 1979.

[4] Gardner, Martin, "On Playing the New Eleusis, the game that simulates the search for truth," Scientific American, 237, October 1977, pp. 18-25.

[5] Hayes-Roth, F., and J. McDermott, "An Interference Matching Technique for Inducing Abstractions," Communications of the ACM, 21:5, 1978, pp. 401-410.

[6] Hunt, E. B., Experiments in Induction, Academic Press, 1966.

[7] Larson, J., "Inductive Inference in the Variable Valued Predicate Logic System VL21: Methodology and Computer Implementation," Rept. No. 869, Dept. of Comp. Sci., Univ. of Ill., Urbana, May 1977.

[8] Lenat, D., "AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search," Comp. Sci. Dept., Rept. STAN-CS-76-570, Stanford University, July 1976.

[9] Michalski, R. S., "Algorithm Aq for the Quasi-Minimal Solution of the Covering Problem," Archiwum Automatyki i Telemechaniki, No. 4, Polish Academy of Sciences, 1969 (in Polish).

[10] Mitchell, T. M., "Version Spaces: An Approach to Concept Learning," Comp. Sci. Dept. Rept. STAN-CS-78-711, Stanford University, December 1978.

[11] Soloway, E., "Learning = Interpretation + Generalization: A Case Study in Knowledge-directed Learning," PhD Thesis, COINS TR 78-13, University of Massachusetts, Amherst, MA, 1978.

[12] Vere, S. A., "Induction of Relational Productions in the Presence of Background Information," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, MIT, Cambridge, MA, 1977.
 | 
	1980 
 | 
	16 
 | 
					
8 
							 | 
MODELLING STUDENT ACQUISITION OF PROBLEM-SOLVING SKILLS

Robert Smith
Department of Computer Science
Rutgers University
New Brunswick, N. J. 08903

ABSTRACT

This paper describes the design of a system that simulates a human student learning to prove theorems in logic by interacting with a curriculum designed to teach those skills. The paper argues that sequences in this curriculum use instructional strategies, and that the student recognizes these strategies in driving the learning process.

I. INTRODUCTION

A central issue in the design of learning systems (LS's) is the classification of the sources of information that the system uses for its acquisition. The general notion is that an LS begins with certain knowledge and capabilities, and then extracts information from training sequences or experiments in the acquisition process. Implicit within this general characterization is the idea that the LS enforces some kind of interpretation on the training sequence by way of driving the acquisition process. Often the nature of the interpretation is left implicit. An example of an LS that makes somewhat explicit the interpretation of its training sequence is Winston's program for learning structure descriptions, where the program interprets the near miss example as providing key information about the structure being learned [4].

We speculate that much human learning takes place in a more richly structured environment, wherein the human learner is interpreting the instructional sequences provided to him in a richer way than LS's typically envision. Indeed, most LS's have made few if any explicit assumptions about the structure of the environment in which the training sequence occurs. One particularly rich environment is learning via teaching. We suggest that teachers use certain instructional strategies in presenting material, and that students recognize these strategies.

This paper describes the motivation for an LS called REDHOT. REDHOT is a simulation of a student acquiring the skill of constructing proofs in elementary logic. We characterize this skill as consisting of (1) primitive operators in the form of natural-deduction rules of inference, (2) "macro moves" consisting of several rules of inference, and (3) application heuristics that describe when to use the rules. The central theme of this research is to model the acquisition of these skills around the recognition of instructional strategies in a curriculum designed to teach the student.

--------
* I would like to thank Phyllis Walker for analysis of the curriculum and student protocols; Saul Amarel, Tom Mitchell, Don Smith, and N. Sridharan for many ideas and assistance. The research reported here is sponsored by the Office of Naval Research under contract N00014-79-C-0780. We gratefully acknowledge their support for this work.

II. CURRICULUM FOR REDHOT

We are using the curriculum from the computer-assisted instruction (CAI) course developed at Stanford University by Patrick Suppes and co-workers. (See [3] for details.) This CAI system is used as the sole mode of instruction for approximately 300 Stanford students each year.
We chose a CAI curriculum because we thought that the explicitness inherent in a successful CAI system developed and tested over a number of years might make the instructional strategies relatively clear.

The curriculum contains explanatory text, examples, exercises, and hints. The explanatory text is rather large, and uses both computer-generated audio and display as modes of presentation. The presentation strategy used by the actual CAI system is linear through the curriculum. For use with REDHOT, we have developed a stylized curriculum in an artificial language CL. It contains the examples, exercises, partially formed rules, and hints.

The exercises are the central part of the curriculum. There are approximately 500 theorems that the student is asked to prove, with about 200 in propositional logic, 200 in elementary algebra, and 100 in the theory of quantification.

The human student performs these exercises by giving the steps of the developing proof to an interactive proof checker. This proof checker is the heart of the original instructional system. We developed a version of this proof checker for use with the REDHOT student simulation.

III. THE DESIGN OF REDHOT

REDHOT learns rules for proving theorems. These rules are initially the natural deduction rules of many logic systems. The student improves upon these rules by building macro operators and by adding heuristics to existing rules--i.e., giving strategic advice in the left-hand-sides of the production rules.*

For example, the rule AA ("affirm the antecedent", the system's version of modus ponens) can be stated as the following rule:

    Rule AA
      GOAL:          Derive Q
      Prerequisites: P already on some line i
                     P -> Q on some line j
      Method:        AA command on lines i and j
      Heuristics:    None (yet)
      Effectiveness: Perfect

In the above, we have adopted a style of rule presentation that is similar to the rules of a production system. The letters P and Q stand for arbitrary formulas, and i and j for arbitrary lines of the already existing proof. The goal tells what the rule will produce. The prerequisites tell us that two formulas of the appropriate forms are needed, and the method gives the schematic command to the proof checker that will accomplish this. The heuristics associated with a rule are learned by the system, and indicate how the rule should be used. Effectiveness of the rule is also learned, and indicates how effective the rule will be in achieving its goal given its prerequisites. The effectiveness of this rule is "perfect" since the rule is given as such in the curriculum.

The underlying problem solver for REDHOT is a depth-first, heuristic problem solver over the existing rules. It is assumed that: (a) the system has sufficient memory and CPU facilities for this search, if necessary; and (b) the underlying pattern matchers are sufficient to match arbitrary formulas, line numbers, etc. Both of these assumptions are open to criticism on the grounds of being psychologically unrealistic. One of the goals of the construction of REDHOT is to decide on plausible ways to restrict the problem solver and pattern matcher to make a more realistic system.
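To make the rule format concrete, here is one hypothetical way such a rule could be represented as a data structure. The field names follow the paper's presentation; the Python encoding is illustrative only, since REDHOT itself is a production system.

```python
# Hypothetical encoding of a REDHOT-style rule.  Field names follow the
# paper's rule presentation; the Python record is illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rule:
    name: str
    goal: str                 # what the rule produces, e.g. "Derive Q"
    prerequisites: List[str]  # formula patterns required on existing lines
    method: str               # schematic command to the proof checker
    heuristics: List[str] = field(default_factory=list)  # learned advice
    effectiveness: str = "Perfect"                       # learned estimate

AA = Rule(name="AA",  # "affirm the antecedent": the system's modus ponens
          goal="Derive Q",
          prerequisites=["P already on some line i", "P -> Q on some line j"],
          method="AA command on lines i and j")
```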
REDHOT learns the heuristics for the application of the rules. These heuristics are stated in a heuristic language HL, which is strongly tied to the curriculum language CL. The heuristics associated with a rule guide the student as to when to try that rule or not to try it.

For example, the curriculum appears to teach the student that the AA rule is a plausible thing to try when the prerequisites are available (whether or not the goal is immediately desired). This is one of the primitives of the HL heuristics language.

--------
* See [1] for a conceptual discussion of the levels through which this process might proceed. One way to regard this research is as a suggestion of the mechanism for this acquisition of heuristics and macro moves.

An example of a macro operator that is not a primitive rule is the "multiple AA" macro move. A paraphrase of this macro operator might be:

    Multiple-AA Macro Move
      IF   you want to prove Q
      AND  have P, P -> P1, P1 -> P2, ..., Pn -> Q
      THEN make multiple AA applications,
           which is guaranteed to succeed

We discuss below the training sequence that teaches this macro move.

IV. THE RECOGNITION OF INSTRUCTIONAL STRATEGIES

REDHOT bases its acquisition of application heuristics and macro operators on its recognition of instructional strategies in the training sequence. For example, consider the sequence of exercises in Figure 1, taken from the actual curriculum. The sequence, which is at the beginning of the whole curriculum, gives the primitive rule of inference AA, then shows successive elaborations of the use of that rule of inference. REDHOT detects this to be a use of a strategy called focus and elaborate, in which a rule is first focussed, and then a particular elaboration is given.

    Teacher: Here is a rule called AA.
    Teacher: Here are some exercises involving AA:
    1.    Goal: Q    Premises: S -> Q, S
    2-5.  [Several more exercises with different formulas.]
    6.    Goal: W    Premises: S -> Q, Q -> W, S
    7.    Goal: S    Premises: R -> S, Q -> W, W -> R, Q
    8-9.  [Two more similar exercises involving multiple applications of AA.]

    Figure 1. Sequence of Exercises for Learning Multiple Application of AA Command

In the above training sequence, REDHOT takes steps 1-5 as focussing on the AA rule, and steps 6-9 as providing a useful elaboration of that rule, in the form of the macro operator for multiple application.
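The paper does not spell out the detection mechanism at this level, but the recognition step can be imagined schematically as follows: a run of exercises each solvable by a single application of one rule, followed by exercises requiring repeated applications of that same rule, triggers the proposal of a multiple-application macro. The helper solve and all names below are hypothetical.

```python
# Purely speculative sketch of recognizing "focus and elaborate"; `solve` is
# an assumed helper returning the list of rule applications used on an exercise.
def recognize_focus_and_elaborate(exercises, solve):
    focused = None
    for exercise in exercises:
        rules_used = solve(exercise)        # e.g. ['AA'] or ['AA', 'AA', 'AA']
        if len(set(rules_used)) != 1:
            return None                     # more than one rule: no clear focus
        rule = rules_used[0]
        if focused is None:
            focused = rule                  # steps 1-5: focus on one rule
        elif rule != focused:
            return None
        if len(rules_used) > 1:             # steps 6-9: elaboration detected,
            return ('macro', focused)       # propose a multiple-application macro
    return None

# Toy demo with a stubbed solver: five one-step exercises, then multi-step ones.
uses = [['AA']] * 5 + [['AA', 'AA'], ['AA', 'AA', 'AA']]
print(recognize_focus_and_elaborate(range(len(uses)), lambda i: uses[i]))
# ('macro', 'AA')
```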
A second example of the use of an instructional strategy concerns removing possible bugs in learned heuristics and macro operators. We illustrate this with the macro rule for conditional proof, a common strategy in logic and mathematics, which we paraphrase as follows:

    Conditional Proof Macro Move
      IF you want to prove P -> Q
      THEN
        ASSUME P as a working premise;
        PROVE Q (likely from P);
        APPLY "CP" rule, removing premise

The actual instructional sequence goes to great length to teach this principle, and students typically have a difficult time with the principle; one defective version of the rule that students seem to learn is the following:

    "Defective" Macro Move
      IF you have a formula P -> Q
      AND you want to prove Q
      THEN
        ASSUME P as a working premise;
        PROVE Q using AA

This is a very poor strategy; but the evidence suggests that over half of the students learn it. The following exercise seems to help students debug the rule "Defective":

    Derive:  S -> (Q OR R)
    Premise: (R -> R) -> (Q OR R)

In examining student protocols, we see that many students will try several times to apply the "Defective" rule to this exercise. Finally, (we speculate) they realize that (R -> R) is already something that they know how to prove, using a previously learned macro operator. Then the actual proof becomes obvious, the student corrects the defective rule, and goes on to handle similar exercises correctly. We call this instructional strategy "focus and frustrate", wherein a student discovers--somewhat traumatically--that a rule he learned is defective.

Therefore, an exercise such as the above is not just randomly selected, but instead tests possible student "bugs" in an exact way. Notice that it is one of the simplest exercises that will discriminate between the correct and defective formulations of the macro rule for conditional proof. (See [2] for a discussion of "debugging" student skills.)

V. REDHOT AND LEARNING SYSTEMS

Like many LS's, REDHOT starts with the ability to state everything it will "learn", in some sense at least. The initial rules for solving the problem (the natural deduction rules for logic) are complete with respect to the underlying problem solver--unless it is restricted in time/space (in practice it is). The heuristic and macro languages are also given in advance, and they of course define a space of the possible rules that might be learned. So, the object is to select among heuristics and macro rules in this space. One way to formulate doing this is by experimentation or exploration. REDHOT selects objects from this meta-space by being taught.

Learning by being taught consists of the "teacher" laying out exercises in an organized and structured way, and the student recognizing something of that structure. The student makes--believes that he is entitled to make--fairly bold hypotheses about the rules he is learning, and relies on the training sequence to contain exercises that will check for common errors that he, the student, may have made in formulating these rules. REDHOT compares somewhat radically to many LS's that rely on a somewhat slow, computationally coherent delimitation of the rule (or concept) involved.
We speculate that learning by "discovery" or "experimentation" is a slow process for even humans, done over the eons of time and through social interaction. Most human learning is by being taught, and one can argue that AI should give attention to the relation between learning and teaching, in terms of modelling the acquisition of concepts, problem-solving skills, and natural language. We further speculate that learning by "discovery" will be aided by extracting as much information as possible from the structure of the environment in which the LS operates.

REFERENCES

[1] Amarel, Saul, "An Approach to Problem Solving and Theorem Proving in the Propositional Calculus," in Systems and Computer Science (Hart and Takasu, eds.), Toronto: University of Toronto Press, 1967.

[2] Brown, John Seely, Burton, Richard R., and Larkin, Kathy M., "Representing and Using Procedural Bugs for Educational Purposes," in Proceedings of the 1977 Annual Conference of the Association for Computing Machinery, New York, 1977.

[3] Suppes, P., Smith, R. L., and Beard, M., "University-level Computer-assisted Instruction at Stanford: 1975," Instructional Science, 1977, 4, 151-185.

[4] Winston, Patrick Henry, "Learning Structural Descriptions from Examples," Ph.D. thesis, in The Psychology of Computer Vision (Patrick Henry Winston, ed.), McGraw-Hill, New York, 1975.
 | 
	1980 
 | 
	17 
 | 
					
9 
							 | 
A Computer Model of Child Language Learning

Mallory Selfridge
Yale University

Abstract

A computer program modelling a child between the ages of 1 and 2 years is described. This program is based on observations of the knowledge this child had at age 1, the comprehension abilities he had at age 2, and the language experiences he had between these ages. The computer program described begins at the age 1 level, is given similar language experiences, and uses inference and learning rules to acquire comprehension at the age 2 level.

Introduction

This paper describes a computer model of the development of comprehension abilities in a child, Joshua, between the ages of one and two years. The program begins with the kind of knowledge that Joshua had at age 1, when he understood no language, and learns to understand commands involving action, object, and spatial relation words at Joshua's age 2 level. It does so by being given the kind of language experiences Joshua had between the ages of 1 and 2, and making use of rules to 1) infer the meaning of utterances, 2) attend to words, and 3) learn language meaning and structure. The program passes through a reasonable developmental sequence and makes the same kind of errors that children make at intermediate stages.

This work suggests that language learning to the 2 year old level can be accounted for primarily by the learning of word meaning and structure, that world knowledge is crucial to enable the child to infer the meaning of utterances, and that children hear language in situations which enable them to perform such inferences. The success of the program in modelling Joshua's language development -- both its progression and its errors -- suggests that it embodies a plausible theory of how Joshua learned to understand language. While there are several aspects of the model which are unrealistic (for example, segmented input, no ambiguous words, no simultaneous conceptual development), there is reason to believe that future work can successfully address these issues. Further details can be found in Selfridge (1980).

This paper first considers Joshua's initial state of knowledge at age 1, and then his comprehension abilities at age 2. It describes the kind of language experiences he had, and several kinds of learning rules which can account for Joshua's development. The computer program incorporating these observations and rules is described, and finally some conclusions are presented.

Joshua's Initial Knowledge

The first component of a computer model of the development of Joshua's comprehension is Joshua's knowledge prior to his language learning. Observations like the following suggest that Joshua had considerable knowledge of objects, actions, spatial relations, and gestures at age 1 (ages are given in YEARS:MONTHS:DAYS):

0:11:19 Joshua and I are in the playroom.
I build a few block towers for him to knock down, but he doesn't do so; rather, he dismantles them, removing the blocks from the top, one at a time.

1:0:16 Joshua and I are in the playroom. Joshua takes a toy cup, and pretends to drink out of it.

1:2 Joshua is sitting in the living room playing with a ball. I hold my hand out to him, and he gives me the ball.

The above observations show that Joshua knew the properties and functions of objects like blocks, cups and balls. He knew actions that could be performed with them, and various spatial relations that they could enter into. Finally, he knew that behavior can be signaled through gestures by other people. Thus, a language learning program must be equipped with this kind of knowledge.

Joshua's Comprehension Abilities at Age 2

At age 2, Joshua could respond correctly to commands with unlikely meaning and structure. His correct responses suggest full understanding of them. For example, consider the following:

2:0:5 We walk into the living room and Joshua shows us his slippers. His mother says "Put your slippers on the piano." Joshua picks up the slippers and puts them on the piano keys, looking at his mother. She laughs and says "That's silly." Joshua removes the slippers.

The meaning of this utterance is unlikely since slippers do not generally go on piano keys, and piano keys don't generally have things put on them. His response suggests that he was guided by full understanding of the meanings of the words in "Put your slippers on the piano."

At age 2 Joshua also understood language structure, as the following example shows:

2:0:0 Joshua and I are in the playroom, my tape recorder is on the floor in front of me. I say "Get on the tape recorder, Joshua". Joshua looks at me oddly, and looks at the tape recorder. I repeat "Get on the tape recorder." Joshua moves next to the tape recorder. I once more repeat "Get on the tape recorder." Joshua watches me intently, and lifts his foot up and slowly moves it over the tape recorder to step on it. I laugh and pull the tape recorder away.

It seems that Joshua understood "Get on the tape recorder" the first time I said it, and that his reluctance to comply reflected his knowledge that what I was asking was very unlikely. That is, Joshua understood that the tape recorder was the object to be underneath him, although this is unlikely given his experience with it. This, in turn, suggests that Joshua understood the structure of the word "on", namely, that the word whose meaning is the supporting surface follows "on". Thus a program modelling Joshua at age 2 must understand utterances using language structure.

Joshua's Language Experiences

In the year between the ages of 1 and 2, Joshua experienced situations which allowed him to make inferences concerning the utterances he heard. In this section, three examples of such situations are given, and inference rules accounting for Joshua's response and attention to words are presented.
In the first example, I am using an utterance and simultaneously signalling the meaning of that utterance through gestures:

1:2:17 We are sitting in the living room, Joshua is holding a book. I look at Joshua, maintain eye contact for a moment, hold my hand out to him and say "Give me the book, Joshua." Joshua holds the book out to me.

In this situation, Joshua probably inferred that the meaning of "Give me the book, Joshua." was the same as that signalled by the gestures. The following rule captures this idea:

    Gestural Meaning Inference
    If an utterance is accompanied by gestures with associated meanings,
    then infer that the utterance means the same as the gestures.

Knowledge of object function and properties helped Joshua infer responses in other situations. In the following, Joshua used his knowledge that books can be opened in his response:

1:0:9 Joshua has a book in his hand, and is looking at it, turning it over, and examining it. His mother says "Open the book, open the book..." Joshua opens the book. She says, "Good Joshua, good."

A rule summarizing this inference is the following:

    Function/Property Inference
    If an utterance is heard while interacting with an object, then the
    meaning of the utterance involves a function or property of that object.

Parent speech to children possesses many attention-focussing characteristics (e.g. Newport, 1976). The following example is typical:

1:18:0 Joshua's father is trying to demonstrate that Joshua knows the names of the upstairs rooms, and has put a toy lawnmower in the bathroom. He says "Where is the lawnmower, Josh? It's in the BATHROOM. The LAWNMOWER is in the BATHROOM. BATHROOM!"

Joshua's attention to "bathroom" in this example can be explained by the following rule:

    Attention Inference
    If a word is emphasized, repeated, or said in isolation,
    then attend to it.

These are the kind of rules which I postulate enabled Joshua to infer the meaning of utterances from context, and attend to part of the utterance. The program must be equipped with such rules and must be given input in similar contexts.

Learning Rules

This section will consider Joshua's learning of action, object, and relation words, and language structure. It presents accounts of how Joshua might have learned each of these. Most of the rules have their roots in the learning strategies proposed by Bruner, Goodnow, and Austin (1956). One way Joshua learned the names of objects is by having them named for him, as in the following example:

1:0:0 Joshua is crying. His mother picks him up and goes over to the refrigerator. She gets some juice, holds it up, and asks, "Do you want some JUICE?" Joshua keeps crying. She gets a banana and asks, "Do you want some BANANA, Joshua?" Joshua reaches for it.

The following rule models Joshua's ability to learn by having objects named:

    Direct Naming Inference
    If a word and an object are both brought to attention,
    infer the word is the object's name.

This rule, and other object word learning rules, can account for how Joshua learned object words such as "slippers", "piano", "ball", and "table".
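A minimal sketch of how the Attention and Direct Naming inferences might compose, with word meanings kept in a simple lexicon. The encoding is hypothetical; Selfridge's program is written in LISP and represents meanings in Conceptual Dependency.

```python
# Hypothetical sketch of the Attention and Direct Naming inferences.
lexicon = {}   # word -> name of the object it denotes

def hear(utterance, emphasized=(), objects_at_attention=()):
    for word in (w.strip('?,.!').lower() for w in utterance.split()):
        # Attention Inference: attend to emphasized (or repeated) words.
        if word.upper() in emphasized:
            # Direct Naming Inference: word and object both at attention.
            for obj in objects_at_attention:
                lexicon.setdefault(word, obj)
    return lexicon

# The refrigerator episode: "JUICE" is emphasized while the juice is held up.
hear("Do you want some JUICE?", emphasized=("JUICE",),
     objects_at_attention=("JUICE1",))
print(lexicon)   # {'juice': 'JUICE1'}
```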
Action words can be learned via inferences about other known words in the utterance. In the following example, summarized from Schank and Selfridge (1977), Hana could have inferred the meaning of "put" based on her knowledge of the meanings of "finger" and "ear":

(age 1) Hana knows the words "finger" and "ear", but not "put." She was asked to "Put your finger in your ear," and she did so.

The following two rules can account for learning "put" in situations like this. The first suggests that "put" would initially be learned as "put something in something else." The second, applied after the first in a slightly different situation, would refine the meaning of "put" to "put something someplace".

    Response Completion Inference
    Infer the meaning of an unknown word to be the meaning of the entire
    utterance with the meanings of the known words factored out.

    Meaning Refinement Inference
    If part of the meaning of a word is not part of the meaning of an
    utterance it occurs in, remove that part from the word's meaning.

Rules like the above can account for Joshua learning action words like "put", "bring", "give", and so on. However, they can also account for Joshua learning relation words, such as "on" and "in". If Joshua knew "put", "ball", and "box", say, and was asked to "put the ball in the box", these rules would account for his learning that "in" referred to the "contained" relation.

These, then, are the sort of rules the program uses to learn word meanings. The program's rule for learning language structure is more direct. It is based around the two structural predicates, PRECEDES and FOLLOWS, which relate the positions of words and concepts in short-term memory. This rule models Joshua's acquisition of structural information upon hearing utterances he understands, and appears below:

    Structure Learning Rule
    If a slot filler occurs preceding or following a word's meaning,
    then update the word's definition with that information.

This rule accounts for Joshua learning that the filler of the VAL slot of "in"'s meaning -- (CONT VAL (NIL)) -- is found FOLLOWing "in" in the utterance.
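A sketch of the Response Completion and Meaning Refinement inferences described above, flattening meanings to sets of primitives for brevity (the program itself manipulates Conceptual Dependency structures; all names here are illustrative):

```python
# Illustrative sketch of Response Completion and Meaning Refinement.
lexicon = {'finger': {'FINGER'}, 'ear': {'EAR'}, 'in': {'CONTAINED'}}

def response_completion(words, utterance_meaning):
    """Unknown word := utterance meaning minus the known words' meanings."""
    known = set().union(*(lexicon.get(w, set()) for w in words))
    unknown = [w for w in words if w not in lexicon]
    if len(unknown) == 1:
        lexicon[unknown[0]] = utterance_meaning - known

def meaning_refinement(word, utterance_meaning):
    """Drop parts of a word's meaning absent from this utterance's meaning."""
    lexicon[word] &= utterance_meaning

# "Put your finger in your ear", with the meaning inferred from context:
response_completion(['put', 'finger', 'in', 'ear'],
                    {'PTRANS', 'FINGER', 'EAR', 'CONTAINED'})
print(lexicon['put'])   # {'PTRANS'}

# Had "in" been unknown, "put" would first absorb CONTAINED as well;
# a later utterance such as "put the ball on the box" then strips it:
lexicon['put'] = {'PTRANS', 'CONTAINED'}
meaning_refinement('put', {'PTRANS', 'BALL', 'BOX', 'TOP'})
print(lexicon['put'])   # {'PTRANS'}
```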
The Program

This section presents four excerpts from a run of the program, written in LISP on a DECSYSTEM-20. Each represents the program at a different stage in development as it progresses from Joshua's age 1 abilities to Joshua's age 2 abilities, using the inference rules described previously. The knowledge representation used is Conceptual Dependency (Schank, 1973), and the language understanding process embedded in the program is similar to that in Birnbaum and Selfridge (1979).

The first stage of the program corresponds to Joshua at age 1. At this stage, the program had only the knowledge ascribed to Joshua at that age. In the excerpt below, the "parent" types a lower-case utterance to the program, and the program responds with a message stating its lack of understanding. When the parent provides gestures via simulated visual input, however, the program understands, and prints the CD representation of its response.

    | PARENT SAYS: give me the ball
    | CHILD STARES BLANKLY AT PARENT
    | CHILD RETURNS TO PLAY
    | CHILD SEES: (PARENT HOLDS OUT HAND) (PARENT LOOKS AT BALL1)
    | CHILD INFERS RESPONSE USING RULE: GESTURAL MEANING
    | CHILD RESPONDS: (ATRANS ACTOR (CHILD) OBJECT (BALL1) TO (POSS VAL (PARENT)))

In the second stage, shown in the excerpt below, the program has learned the meaning of several words, and understands some utterances correctly. In this case, it has learned the words "put", "ball", and "box". However, notice that although it responds correctly to the first utterance given by the parent, it misunderstands the second. This sort of error is reported in Hoogenraad et al. (1976). Not knowing "on", the program incorrectly infers that the appropriate relationship is containment.

    | PARENT SAYS: put the ball in the box
    | CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING, FUNCTION/PROPERTY
    | CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (BALL1) TO (CONT VAL (BOX1)))
    | PARENT SAYS: put the ball on the box
    | CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING, FUNCTION/PROPERTY
    | CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (BALL1) TO (CONT VAL (BOX1)))

The transition from the second stage to the third is accomplished by teaching the program more words. In this case it has learned the additional words "slippers", "on", "piano", "ball", and "table". At this stage, the program can now understand "Put the slippers on the piano", whereas at any earlier stage it would not have. The program also prints out a message showing that it recognizes this as an unusual request.

However, although this stage represents Joshua's age 2 understanding of word meaning, the program has not yet learned language structure. The program interprets the second utterance incorrectly, in accord with its knowledge of the usual relationships between objects. This sort of error is similar to that reported in Strohner and Nelson (1974).

    | PARENT SAYS: put the slippers on the piano
    | CHILD LOOKS AT PARENT STRANGELY
    | CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING
    | CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (SLIPPERS1) TO (TOP VAL (PIANO1)))
    | PARENT SAYS: put the table on the ball
    | CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING
    | CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (BALL1) TO (TOP VAL (TABLE1)))
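The stage 3 error above is exactly what the Structure Learning Rule repairs: once the utterance is finally understood, the program records on which side of "on" the supporting-surface filler appeared. A minimal sketch with hypothetical names (the original operates over Conceptual Dependency structures in LISP):

```python
# Minimal sketch of the Structure Learning Rule using the paper's PRECEDES
# and FOLLOWS predicates.
structure = {}   # word -> {slot: 'PRECEDES' or 'FOLLOWS'}

def learn_structure(words, relation_word, slot, filler_word):
    """After an utterance is understood, record on which side of the
    relation word the filler of `slot` appeared."""
    rel, fil = words.index(relation_word), words.index(filler_word)
    structure.setdefault(relation_word, {})[slot] = (
        'FOLLOWS' if fil > rel else 'PRECEDES')
    return structure

# "put the table on the ball": the supporting surface (the ball) follows "on".
print(learn_structure(['put', 'the', 'table', 'on', 'the', 'ball'],
                      relation_word='on', slot='VAL', filler_word='ball'))
# {'on': {'VAL': 'FOLLOWS'}}
```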
The fourth stage is shown in the excerpt below. The program has now learned the structure of "on", and can hence correctly understand "Put the table on the ball." In addition, it prints out a message indicating its awareness of the peculiarity of this request.

    | PARENT SAYS: put the table on the ball
    | CHILD LAUGHS AT UNUSUAL REQUEST
    | CHILD INFERS RESPONSE USING RULE: UTTERANCE UNDERSTANDING
    | CHILD RESPONDS: (PTRANS ACTOR (CHILD) OBJECT (TABLE1) TO (TOP VAL (BALL1)))

At the fourth stage, the program has successfully learned to understand a subset of language at Joshua's age 2 level. It began with world knowledge similar to that Joshua began with, was equipped with reasonable learning and inference rules, and progressed as he did by being given language experiences similar to those he experienced.

Conclusions

This paper has described a computer model of a child learning to understand commands involving action, object, and relation words. The program learns language meaning and structure to the level attained by the child at age 2, by being initially given the same kind of knowledge the child had and by being exposed to language in the same kind of contexts as the child was. The program learned language according to a reasonable progression, making the same sort of errors that children do at intermediate stages. No parts of speech or traditional grammatical constructions are learned. It also acquires structural knowledge after knowledge of meaning, because no structural knowledge can be associated with a word until the meaning of that word is learned. This aspect of the model offers an explanation for why children learn structure following meaning (Wetstone and Friedlander, 1973). In addition to English, the program has been tested on comparable subsets of Japanese, Russian, Chinese, Hebrew, and Spanish. Its performance with these languages was equivalent to its learning of English, suggesting that the program has no English-specific mechanisms.

This research suggests several conclusions. It suggests that a large part of the language learning problem lies in accounting for how the child infers the meaning of the language he hears. It argues that the mechanisms underlying the learning of meaning and structure are the same. It questions the role of traditional grammatical models both in language learning and language understanding, and suggests that models of language learning must be based on strong models of language understanding. In particular, it questions Chomsky's (1980) position that language is not learned. This work suggests that plausible learning models of language development are possible.

Further research should proceed in many directions. In particular, the program discussed here should be extended to model the development of comprehension of more complex constructions, such as relative clauses, and the generation of language.

Acknowledgements

Dr. Roger Schank's assistance in this work was invaluable. Peter Selfridge provided useful comments on this paper.

Bibliography

Birnbaum, L., and Selfridge, M. (1979).
Problems in Conceptual Analysis of Natural Language. Research Report 168, Department of Computer Science, Yale University.

Bruner, J. S., Goodnow, J. J., and Austin, G. A. (1956). A Study of Thinking. John Wiley and Sons, New York.

Chomsky, N. (1980). Rules and Representations, excerpted from Rules and Representations. Columbia University Press, New York.

Hoogenraad, R., Grieve, R., Baldwin, P., and Campbell, R. (1976). Comprehension as an Interactive Process. In R. N. Campbell and P. T. Smith (eds.), Recent Advances in the Psychology of Language. Plenum Press, New York.

Newport, E. L. (1976). Motherese: the Speech of Mothers to Young Children. In N. J. Castellan, D. B. Pisoni, and G. R. Potts (eds.), Cognitive Theory: Vol. II. Lawrence Erlbaum Assoc., Hillsdale, N.J.

Schank, R. C. (1973). Identification of Conceptualizations Underlying Natural Language. In R. C. Schank and K. M. Colby (eds.), Computer Models of Thought and Language. W. H. Freeman and Co., San Francisco.

Schank, R. C., and Selfridge, M. (1977). How to Learn/What to Learn. In Proceedings of the International Joint Conference on Artificial Intelligence, Cambridge, Mass.

Selfridge, M. (1980). A Process Model of Language Acquisition. Computer Science Technical Report 172, Yale University, New Haven, Ct.

Strohner, H., and Nelson, K. E. (1974). The Young Child's Development of Sentence Comprehension: Influence of Event Probability, Non-verbal Context, Syntactic Form, and Strategies. Child Development, 45:567-576.

Wetstone, H., and Friedlander, B. (1973). The Effect of Word Order on Young Children's Responses to Simple Questions and Commands. Child Development, 44:734-740.
 | 
	1980 
 | 
	18 
 | 
					
10 
							 | 
APPROACHES TO KNOWLEDGE ACQUISITION: THE INSTRUCTABLE PRODUCTION SYSTEM PROJECT

Michael D. Rychener
Carnegie-Mellon University
Department of Computer Science
Schenley Park
Pittsburgh, PA 15213

Abstract

Progress in building systems that acquire knowledge from a variety of sources depends on determining certain functional requirements and ways for them to be met. Experiments have been performed with learning systems having a variety of functional components. The results of these experiments have brought to light deficiencies of various sorts, in systems with various degrees of effectiveness. The components considered here are: interaction language; organization of procedural elements; explanation of system behavior; accommodation to new knowledge; connection of goals with system capabilities; reformulation (mapping) of knowledge; evaluation of behavior; and compilation to achieve efficiency and automaticity. A number of approaches to knowledge acquisition tried within the Instructable Production System (IPS) Project are sketched.*

1. The Instructable Production System Project

The IPS project [6] attempts to build a knowledge acquisition system under a number of constraints. The instructor of the system gains all information about IPS by observing its interactions with its environment (including the instructor). Interaction is to take place in (restricted) natural language. The interaction is mixed initiative, with both participants free to try to influence the direction. Instruction may be about any topic or phenomenon in the system's external or internal environment. Knowledge accumulates over the lifetime of the system.

--------
* This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

Throughout these IPS experiments, the underlying knowledge organization has been Production Systems (PSs) [2], a form of rule-based system in which learning is formulated as the addition to, and modification of, an unstructured collection of production rules. Behavior is obtained through a simple recognize-act cycle with a sophisticated set of principles for resolving conflicts among rules. The dynamic short-term memory of the system is the Working Memory (WM), whose contents are matched each cycle to the conditions of rules in the long-term memory, Production Memory.
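For readers unfamiliar with PSs, the recognize-act cycle can be sketched as follows. This is illustrative only, and far simpler than a production-system implementation such as the OPS4 interpreter [2] cited above; the trivial conflict-resolution stand-in is an assumption of the sketch.

```python
# Illustrative recognize-act loop: match rules against Working Memory, pick
# one matching instantiation by conflict resolution, fire it, repeat until
# quiescence (no rule's conditions match).
class Rule:
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

def recognize_act(rules, wm, max_cycles=100):
    for _ in range(max_cycles):
        conflict_set = [r for r in rules if r.condition(wm)]
        if not conflict_set:
            break                       # quiescence: nothing matches WM
        conflict_set[0].action(wm)      # trivial stand-in for conflict resolution
    return wm

# Toy demo: a goal element in WM evokes a means rule that achieves it.
rules = [Rule('derive-Q',
              lambda wm: ('goal', 'derive-Q') in wm and ('fact', 'Q') not in wm,
              lambda wm: wm.add(('fact', 'Q')))]
print(recognize_act(rules, {('goal', 'derive-Q')}))
# {('goal', 'derive-Q'), ('fact', 'Q')}
```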
Study of seven major attempts to construct instructable PSs with various orientations leads to recognizing the centrality of eight functional components. Listing the components and their embodiment in various versions of IPS can contribute to research on learning systems in general, by clarifying some of the important subproblems. This discussion is the first overview of the work of the project to date, and indicates its evolutionary development. Members of the IPS project are no longer working together intensively to build an instructable PS, but individual studies that will add to our knowledge about one or more of these components are continuing. Progress on the problem of efficiency of PSs has been important to the IPS project [3], but will not be discussed further here.

2. Essential Functional Components of Instructable Systems

The components listed in this section are to be interpreted loosely as dimensions along which learning systems might vary.

Interaction. The content and form of communications between instructor and IPS can have a lot to do with ease and effectiveness of instruction. In particular, it is important to know how closely communications correspond to internal IPS structures. Similarly, we must ask how well the manifest behavior of IPS indicates its progress on a task. An IPS can have various orientations towards interactions, ranging from passive to active, with maintenance of consistency and assimilation into existing structures.

Organization. Each version of IPS approaches the issue of obtaining coherent behavior by adopting some form of organization of its 'procedural' knowledge. This may involve such techniques as collecting sets of rules into 'methods' and using signal conventions for sequencing. Whether IPS can explain its static organization and whether the instructor can see the details of procedural control are important subissues.

Explanation. A key operation in an instructable system is that of explaining how the system has arrived at some behavior, be it correct or incorrect. In the case of wrong behavior, IPS must reveal enough of its processing to allow the more intelligent instructor to determine what knowledge IPS lacks.

Accommodation. Once corrections to IPS's knowledge have been formulated by the instructor, it remains for further interactions with IPS to augment or modify itself. In the IPS framework, these modifications are taken to be changes to the rules of the system, rather than changes to the less permanent WM. As with interaction, IPS can take a passive or active approach to this process.

Connection. Manifest errors are not the only way a system indicates a need for instruction: inability to connect a current problem with existing knowledge that might help in solving it is perhaps a more fundamental one. An IPS needs ways to assimilate problems into an existing knowledge framework, and ways to recognize the applicability of, and discriminate among, existing methods.

Reformulation. Another way that IPS can avoid requiring instruction is for it to reformulate existing knowledge to apply in new circumstances.
There are two aspects to this function: finding knowledge that is potentially suitable for mapping, and performing the actual mapping. In contrast to connection, this component involves transformation of knowledge in rules, either permanently or dynamically.

Evaluation. Since the instructor has limited access to what IPS is doing, it is important for IPS to be able to evaluate its own progress, recognizing deficiencies and errors as they occur so that instruction can take place as closely as possible to the dynamic point of error. Defining what progress is and formulating relevant questions to ask to fill gaps in knowledge are two subissues.

Compilation. Rules initially formed as a result of the instructor's input may be amenable to refinements that improve IPS's efficiency. This follows from several factors: during instruction, IPS may be engaged in search or other 'interpretive' executions; instruction may provide IPS with fragments that can only be assembled into efficient form later; and IPS may form rules that are either too general or too specific. Improvement with practice is the psychological analog of this capability. Anderson et al [1] have formulated several approaches to compilation.

3. Survey of Approaches

Each attempt to build an IPS has been based on the idea of an initial hand-coded kernel system, with enough structure in it to support all further growth by instruction. A kernel establishes the internal representations and the overall approach to instruction. The following are presented in roughly chronological order. Kernel1, ANA, Kernel2 and IPMSL have been fully implemented. The others were suspended at various earlier stages of development, for reasons that were rarely related to substantive or conceptual difficulties.

Kernel Version 1. The starting point for IPS is the adoption of a pure means-ends strategy: given explicit goals, rules are the means to reducing or solving them. Four classes of rules are distinguished: means rules; recognizers of success; recognizers of failure; and evocation of goals from goal-free data. The Kernel1 [6] approach further organizes rules into methods, which group together (via patterns for the same goal) a number of means, tests and failure rules. Interaction consists of language strings that roughly correspond to these methods and to system goals (among which are queries). Keywords in the language give rise to the method sequencing tags and also serve to classify and bound rules. Explanation derives from the piecing together of various goals in WM, along with associated data. The major burden of putting together raw data that may be sufficient for explanation rests on the instructor, a serious weakness.

Additive Successive Approximations (ASA). Some of the drawbacks of Kernel1 can be remedied* by orienting instruction towards fragments of methods that can be more readily refined at later times. Interaction consists of having the instructor point at items in IPS's environment (especially WM) in four ways: condition (for data to be tested), action (for appropriate operators), relevant (for essential data items), and entity (to create a symbol for a new knowledge expression). These designations result in methods that are very loose collections of rules, each of which contributes some small amount towards achievement of the goal.
Accommodation is done as post-modification of an existing method in its dynamic execution context, through ten method-modification methods.

* These ideas were introduced by A. Newell in October, 1977.

Analogy. A concerted attempt to deal with issues of connection and accommodation is represented by McDermott's ANA program [4]. ANA starts out with the ability to solve a few very specific problems, and attacks subsequent similar problems by using the methods it has analogically. The starting methods are hand-coded. Connection of a new goal to an existing method takes place via special method description rules that are designed to respond to the full class of goals that appear possible for a method to deal with by analogy. An analogy is set up by finding paths through a semantic network containing known objects and actions. As a method operating by analogy executes, rules recognize points where an analogy breaks down. Then general analogy methods are able either to patch the method directly with specific mappings or to query the instructor for new means-ends rules.

Problem Spaces. Problem spaces [5]* provide a novel basis for IPS by embedding all behavior and interactions in search. A problem space consists of a collection of knowledge elements that compose states in a space, plus a collection of operators that produce new states from known ones. A problem consists of an initial state, a desired state, and possibly path constraints. Newell's Problem Space Hypothesis (ibid.) claims that all goal-oriented cognitive activity occurs in a problem space, not just activity that is sufficiently problematical. Interaction consists of giving IPS problems and search control knowledge (hints as to how to search specific spaces). Every kernel component must be a problem space too, and thus subject to the same modification processes. The concrete proposal as it now stands concentrates on interaction, explanation (which involves sources of knowledge about the present state of the search), and organization.

* This approach was formulated by A. Newell and J. Laird in October of 1978.

Schemas. The use of schemas as a basis for an IPS kernel* makes slot-filling the primary information-gathering operation. A slot is implemented as a set of rules. The slots are: executable method; test of completion; assimilation (connects present WM with the schema for a goal); initialization (gathers operands for a method); model (records the instruction episode for later reference); accommodation (records patches to the method); status (records gaps in the knowledge); monitoring (allows careful execution); and organization (records method structure). Orientation towards instruction is active, as in ASA.
Explanation consists of interpreting the model slot, and accommodation, of fitting additions into the model. Connection is via a discrimination network composed of the aggregated assimilation slots of all schemas. Compilation is needed here, to map model to method.

* Schemas were first proposed for IPS by Rychener, May, 1978.

Kernel Version 2. An approach with basic ideas similar to ASA and to Waterman's Exemplary Programming [8], Kernel2 [7] focusses on the process of IPS interacting with the instructor to build rules in a dynamic execution context. The instructor essentially steps through the process of achieving a goal, with IPS noting what is done and marking elements for inclusion in the rules to be built when the goal is achieved. Kernel2 includes a semantic network of information about its methods, for use as a 'help' facility. Kernel2 is the basis from which the IPMSL system, below, is built.

Semantic Network. Viewing accumulation of knowledge as additions to a semantic network is the approach taken by the IPMSL system [7]. Interaction consists of definition and modification of nodes in a net, where such nodes are represented completely as rules. Display and net search facilities are provided as aids to explanation and accommodation. The availability of traditional semantic network inferences makes it possible for IPMSL to develop an approach to connection and reformulation, since they provide a set of tools for relating and mapping knowledge into more tractable expressions.

4. Conclusions

The IPS project has not yet succeeded in combining effective versions of components as discussed above, to produce an effective IPS. The components as presently understood and developed, in fact, probably fall short of complete adequacy for such a system. But we have explored and developed a number of approaches to instructability, an exploration that has added to the stock of techniques for exploiting the advantages of PSs. We are encouraged by the ability of the basic PS architecture to enable explorations in a variety of directions and to assume a variety of representations and organizations.

Acknowledgments. Much of the work sketched has been done jointly over the course of several years. Other project members are (in approximate order of duration of commitment to it): Allen Newell, John McDermott, Charles L. Forgy, Kamesh Ramakrishna, Pat Langley, Paul Rosenbloom, and John Laird. Helpful comments on this paper were made by Allen Newell, Jaime Carbonell, David Neves and Robert Akscyn.

References

1. Anderson, J. R., Kline, P. J., and Beasley, C. M. Jr. A Theory of the Acquisition of Cognitive Skills. Tech. Rept. 77-1, Yale University, Dept. of Psychology, January, 1978.

2. Forgy, C. L. OPS4 User's Manual. Tech. Rept. CMU-CS-79-132, Carnegie-Mellon University, Dept. of Computer Science, July, 1979.

3. Forgy, C. L. On the Efficient Implementation of Production Systems. Ph.D. Th., Carnegie-Mellon University, Dept.
of Computer Science, February, 1979.

4. McDermott, J. ANA: An Assimilating and Accommodating Production System. Tech. Rept. CMU-CS-78-156, Carnegie-Mellon University, Dept. of Computer Science, December, 1978. Also appeared in IJCAI-79.

5. Newell, A. Reasoning, problem solving and decision processes: the problem space as a fundamental category. In Attention and Performance VIII, Nickerson, R., Ed., Erlbaum, Hillsdale, NJ, 1980.

6. Rychener, M. D. and Newell, A. An instructable production system: basic design issues. In Pattern-Directed Inference Systems, Waterman, D. A. and Hayes-Roth, F., Eds., Academic, New York, NY, 1978, pp. 135-153.

7. Rychener, M. D. A Semantic Network of Production Rules in a System for Describing Computer Structures. Tech. Rept. CMU-CS-79-130, Carnegie-Mellon University, Dept. of Computer Science, June, 1979. Also appeared in IJCAI-79.

8. Waterman, D. A. Rule-Directed Interactive Transaction Agents: An Approach to Knowledge Acquisition. Tech. Rept. R-2171-ARPA, The Rand Corp., February, 1978.
 | 
	1980 
 | 
	19 
 | 
					
11 
							 | 
SOME ALGORITHM DESIGN METHODS

Steve Tappel
Systems Control, Inc., 1801 Page Mill Road, Palo Alto, California 94304
and Computer Science Department, Stanford University

Abstract

Algorithm design may be defined as the task of finding an efficient data and control structure that implements a given input-output specification. This paper describes a methodology for control structure design, applicable to combinatorial algorithms involving search or minimization. The methodology includes an abstract process representation based on generators, constraints, mappings and orderings, and a set of plans and transformations by which to obtain an efficient algorithm. As an example, the derivation of a shortest-path algorithm is shown. The methods have been developed with automatic programming systems in mind, but should also be useful to human programmers.

I. Introduction

The general goal of automatic programming research is to find methods for constructing efficient implementations of high-level program specifications. (Conventional compilers embody such methods to a very limited extent.) This paper describes some methods for the design of efficient control structures, within a stepwise refinement paradigm. In stepwise refinement (see for instance [1,2]), we view the program specification itself as an algorithm, albeit a very inefficient one. Through a chain of transformation steps, we seek to obtain an efficient algorithm.

Specification -> Alg -> ... -> Algorithm

Each transformation step preserves input-output equivalence, so the final algorithm requires no additional verification.

Algorithm design is a difficult artificial intelligence task involving representation and planning issues. First, in reasoning about a complicated object like an algorithm it is essential to divide it into parts that interact in relatively simple ways. We have chosen asynchronous processes, communicating via data channels, as an appropriate representation for algorithms. Details are in Section II. Second, to avoid blind search each design step must be clearly motivated, which in practice requires organization of the transformation sequence according to high-level plans. An outline of the plans and transformations we have developed is given in Section III, followed in Section IV by the sample derivation of a shortest-path algorithm. Sections V and VI discuss extensions and conclude.

This methodology is intended for eventual implementation within the CHI program synthesis system [3], which is under development at Systems Control Inc.

This research is supported in part by the Defense Advanced Research Projects Agency under DARPA Order 3687, Contract N00014-79-C-0127, which is monitored by the Office of Naval Research. The views and conclusions contained in this paper are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of SCI, DARPA, ONR or the US Government.

II. Process graph representation of algorithms

Our choice of representation is motivated largely by our concentration on the earlier phases of algorithm design, in which global restructurings of the algorithm take place.
Most data structure decisions can be safely left for a later phase, so we consider only simple, abstract data types like sets, sequences and relations. More importantly, we observe that conventional high-level languages impose a linear order on computations which is irrelevant to the structure of many algorithms and in other cases forces a premature commitment to a particular algorithm. To avoid this problem, we have chosen a communicating-process representation in which each process is a node in a directed graph and processes communicate by sending data items along the edges, which act as FIFO queues. Cycles are common and correspond to loops in a conventional language.

The use of generators (or producers) in algorithm design was suggested by [4]. Our representation is essentially a specialized version of the language for process networks described in [5]. Rather than strive for a general programming language we use only a small set of process types, chosen so that: (1) the specifications and algorithms we wish to deal with are compactly represented, and (2) plans and transformations can be expressed in terms of adding, deleting or moving process nodes. The four process types are:

Generator: produces elements one by one on its output edge.

Constraint: acts as a filter; elements that satisfy the constraint pass through.

Mapping: takes each input element and produces some function of it. If the function value is a set, its elements are produced one by one.

Ordering: permutes its input elements and produces them in the specified order.

The representation is recursive, a very important property. There can be generators of constraints, constraints on constraints, mappings producing generators, etc. Most of the same design methods will apply to these "meta-processes".

To illustrate the representation, we encode the specification for our sample problem of finding the shortest path from a to b in a graph.

Notation and terminology for shortest path. A directed graph is defined by a finite vertex set V and a binary relation Edge(u,v). A path p is a sequence of vertices (p1 ... pn), in which Edge(pi,pi+1) holds for each pair. The "." operator is used to construct sequences: (u ... v).w = (u ... v w). Every edge of the graph is labelled with a positive weight W(u,v), and the weight of an entire path is then Weight(p) = W(p1,p2) + ... + W(pn-1,pn). The shortest path from a to b is just the one that minimizes Weight.

A specification should be as simple as possible to ensure correctness. Shortest path can be simply specified as: generate all paths from a to b, and select the shortest. We express selection of the shortest path in a rather odd way, feeding all the paths into an ordering process whose very first output will cause the algorithm to stop. The point is that by using a full ordering for this comparatively minor task, we can apply all the plans and transformations for orderings. As for the paths from a to b, they are defined as a certain kind of sequence of vertices, so we introduce a generator of all vertex sequences and place constraints after it to eliminate non-paths. This completes the specification.
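To make the four process types concrete, here is a minimal Python sketch using lazy generators; it is our own rendering, not the paper's notation, and the names (constraint, mapping, ordering, sequences) are ours. A specification is then just a pipeline of such processes, directly executable on small finite instances.

# Minimal sketches of the four process types as stream transformers.
def constraint(pred, items):
    """Filter: pass through only items satisfying pred."""
    return (x for x in items if pred(x))

def mapping(f, items):
    """Apply f to each item; if f yields a set, emit its members one by one."""
    for x in items:
        y = f(x)
        yield from y if isinstance(y, (set, list, tuple)) else (y,)

def ordering(key, items):
    """Permute items into the order given by key (forces the whole stream)."""
    return iter(sorted(items, key=key))

def sequences(V, max_len):
    """Generator of all vertex sequences up to max_len: the mapping cycle
    of the paper, unrolled here as breadth-first enumeration."""
    layer = [()]
    while layer:
        for s in layer:
            yield s
        layer = [s + (v,) for s in layer for v in V if len(s) < max_len]

# The generate-and-test specification of shortest path (finite toy graph):
V = {'a', 'c', 'b'}
W = {('a','c'): 1, ('c','b'): 1, ('a','b'): 3}  # Edge(u,v) iff (u,v) in W
def is_path(s):
    return len(s) >= 1 and s[0] == 'a' and s[-1] == 'b' and \
           all((u, v) in W for u, v in zip(s, s[1:]))
def weight(s):
    return sum(W[u, v] for u, v in zip(s, s[1:]))

paths = constraint(is_path, sequences(V, max_len=3))
print(next(ordering(weight, paths)))  # ('a', 'c', 'b') -- the shortest path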
Selection of an appropriate internal structure for the generator of Sequences(V) is actually part of the design process, but to simplify the example we will take as a default the usual recursive definition of sequences:

() is in Sequences(V)
if s is in Sequences(V) and v is in V, then s.v is in Sequences(V)

The recursion in the definition corresponds to a cycle in the process graph, containing the mapping {s.v | v in V}. The generation process starts when the empty sequence () is produced on the "s" edge. From the "s" edge it goes to the constraint and also to the mapping, which produces the set of all one-vertex sequences ().v, for v in V. These are fed back to generate two-vertex sequences, and so on. A mapping cycle like this is a very common kind of generator.

III. Methods for algorithm design

The program specification from which design starts is typically written as an exhaustive generate-and-test (or generate-and-minimize) process, and bears little resemblance to the algorithm it will become. The design methods all have the goal of incorporating constraints, orderings or mappings into the generator, or else the goal of planning or preparing to do so. To incorporate a constraint means to modify the generator so that it only generates items which already satisfy the constraint; to incorporate an ordering means to modify the generator so it generates elements directly in that order; and to incorporate a mapping f means to generate elements f(x) instead of elements x. Accordingly, the methods fall into three main classes, briefly described below.
Superimposed upon this class division is a hierarchy (not strict) with multi-step plans at the higher levels and a large number of specific syntactic transformations at the bottom. The hierarchy is organized according to goals and subgoals. Heuristics and deduction rules are required to support the planning activity. At the time of writing, a total of about 20 methods have been formulated, not counting low-level syntactic transformations.

Constraint methods. The goal of constraint methods is to reduce the number of elements generated. The top-level plan for constraints says to:

1. propagate constraints through the process graph to bring them adjacent to a generator,
2. incorporate constraints into a generator whenever possible, and if the results are not satisfactory,
3. deduce new constraints beyond those given in the specification, and repeat.

Each of the three subtasks is nontrivial in itself and is carried out according to a set of (roughly speaking) intermediate-level methods. For (2), an intermediate-level method that we use several times in the Shortest Path derivation is the constraint incorporation plan ConstrainComponent. ConstrainComponent applies when a constraint on composite objects x (sets, sequences, not numbers) is reducible to a constraint on a single component c of x, i.e. P(x) <=> P'(xc). ConstrainComponent then gives a plan:

1. Inside the generator, find the sub-generator of values for component c. If necessary, manipulate the process graph to isolate this generator. Again, other methods must be called upon.
2. Remove constraint P and add constraint P' to the sub-generator.

Ordering methods. Another group of methods is concerned with the deduction, propagation and incorporation of orderings on a generated set. These methods are analogous to the methods for constraints but more complicated. In the Shortest Path derivation we use a powerful transformation, explained rather sketchily here (a code sketch follows at the end of this section): the ordering incorporation transformation InterleaveOrder. InterleaveOrder applies when an ordering R is adjacent to a generator consisting of a mapping cycle, in which the mapping f has the property R(x, f(x)) for all x. In other words, f(x) is greater than x under the ordering R. InterleaveOrder moves the ordering inside the mapping cycle and adds a synchronization signal to make the ordering and mapping operate as coroutines. The ordering produces an element x, the mapping receives it and produces its successors f(x) (there would be no need for the ordering at all if f were single-valued), then the ordering produces the next element, and so on.

Mapping methods. The methods for incorporating a mapping into a generator are mostly based upon recent work in the "formal differentiation of algorithms" [6] and are related to the well-known optimizing technique of reduction in operator strength. (They are not used in our sample design.) Some syntactic transformations and other methods described in this section will appear in the derivation.
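The InterleaveOrder transformation promised above can be sketched in a few lines; this is our Python rendering, with a priority queue standing in for the ordering process and a plain generator for the synchronized pair of coroutines. It is correct exactly under the stated condition that every successor is greater than its parent under the ordering.

import heapq

def interleave_order(initial, f, key):
    """Ordering and mapping as coroutines: repeatedly emit the least
    unproduced element x (under key), then feed its successors f(x) back in.
    Correct when key(y) >= key(x) for every successor y in f(x)."""
    heap = [(key(x), i, x) for i, x in enumerate(initial)]
    heapq.heapify(heap)
    counter = len(heap)                  # tiebreaker for non-comparable payloads
    while heap:
        _, _, x = heapq.heappop(heap)
        yield x                          # the ordering produces x ...
        for y in f(x):                   # ... the mapping extends it ...
            heapq.heappush(heap, (key(y), counter, y))  # ... results re-enter
            counter += 1

# Example: paths from 'a' emerge in order of increasing weight.
W = {('a','c'): 1, ('c','b'): 1, ('a','b'): 3}
f = lambda p: [p + (v,) for (u, v) in W if u == p[-1]]
key = lambda p: sum(W[e] for e in zip(p, p[1:]))
gen = interleave_order([('a',)], f, key)
print([next(gen) for _ in range(4)])  # ('a',), ('a','c'), ('a','c','b'), ('a','b')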
IV. Example: design of a shortest-path algorithm

In the design which follows, the specification will be transformed from an inefficient generate-and-minimize scheme into a dynamic programming algorithm. The final algorithm grows paths out from vertex a, extending only the shortest path to each intermediate vertex, until reaching b. Of necessity we omit many details of the design.

IV.1. Constraint methods.

Since the specification's constraints are already next to the generator (step 1), the overall plan for constraints says to try to incorporate them (step 2). We will follow the heuristic of incorporating the strongest constraint first.

Incorporate the Edge constraint. More detail will be shown in this first step than in later derivation steps. ConstrainComponent applies because once a vertex si has been added to a sequence, the constraint Edge(si,si+1) reduces to a constraint on the single component si+1. (This reasoning step is really the application of another method, not described here.) Step (1) in the ConstrainComponent plan says to find the generator of values for components si+1. Though we have written it in linear form for convenience, the expression {s.v | v in V} is really a generator followed by a mapping. Unfortunately "v in V" generates s1 as well as the desired si+1 values, so we have to unroll one cycle of the graph to isolate the generator of si+1 values. (Again, we have applied methods not described in this paper.) Step (2) is now possible and consists in constraining v to satisfy Edge(sn,v). With the Edge constraint incorporated, only paths are now being generated, so we change s to p in the diagram; the mapping in the cycle becomes {p.v | v in V and Edge(pn,v)}.

Incorporate the constraint that p1 = a. Since the p1 = a constraint refers only to a component of p, ConstrainComponent applies again. We constrain v in the first "v in V" generator to be equal to a, and simplify.

Incorporate the constraint that pn = b. Once again ConstrainComponent applies. This time, however, we are unable to isolate a generator for the last vertex of paths. The last vertex of one path is the next-to-last vertex of another, and so on. ConstrainComponent fails, other methods fail too; we leave the pn = b constraint unincorporated.

Deduce new constraint. In accordance with the general constraint plan (step 3) we now try to deduce more constraints. One method for deducing new constraints asks: do certain of the generated elements have no effect whatsoever upon the result of the algorithm? If the answer is "yes", try to find a predicate that is false on the useless elements, true on others. Motive: if we later succeed in incorporating this constraint into the generator, the useless elements will never be produced.

Now consider the Ordering + STOP combination. Because all it does is select the shortest path, any path which is not shortest will have no effect! The corresponding constraint says: p is a shortest path from a to b. A further deduction gives the even stronger constraint that every subpath of p must be a shortest path (between its endpoints). Incorporation of this constraint is complex and is deferred till after incorporation of the Weight ordering.

IV.2. Ordering methods.

So far paths are generated according to the partial order of path inclusion; path p is generated before path q if q = p.u...v for some vertices u,...,v. We may generate a lot of paths to b before generating the shortest one (possibly an infinite number). However, if the Weight ordering can be incorporated into the path generator, then only a single path to b (the shortest one) will ever be generated.

Propagate ordering. Before applying an incorporation method we need to bring the Weight ordering next to the generator. Constraints and orderings commute, so this is easy.

Incorporate the ordering into the generator. The InterleaveOrder method applies, because Weight(p.v) is greater than Weight(p). It moves the ordering from outside the generator cycle to inside, and also causes the ordering to wait for the mapping to finish extending the previous path before it produces another.

Incorporate new constraint. The "p is a shortest path" constraint is readily incorporated now: the shortest path to any vertex will be the first path to that vertex. Any later path q with the same last vertex, qn = pn, can be eliminated by a new constraint C(p) = (lambda q. qn /= pn). We introduce a mapping to produce these new constraints C(p), and now we have a generator of constraints.

After these last three steps, the algorithm is a breadth-first search for a path to b, with elimination of non-optimal paths at every vertex.
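Putting the incorporated ordering and the deduced constraints together gives, in effect, the familiar uniform-cost scheme (essentially Dijkstra's algorithm). The following compact executable sketch is ours, not the paper's: paths leave the ordered generator by increasing weight, and the accumulated constraints C(p) discard any later path to an already-reached vertex.

import heapq

def shortest_path(W, a, b):
    """Derived algorithm: grow paths from a in order of weight, keeping only
    the first (hence shortest) path to reach each vertex. W maps edges (u, v)
    to positive weights; Edge(u, v) holds iff (u, v) is in W."""
    heap = [(0, (a,))]          # the Weight ordering, interleaved with generation
    reached = set()             # accumulated constraints C(p): one per vertex
    while heap:
        w, p = heapq.heappop(heap)
        last = p[-1]
        if last in reached:     # a shorter path to `last` was already produced
            continue
        reached.add(last)
        if last == b:           # the ordering's first output to b stops everything
            return w, p
        for (u, v), wt in W.items():
            if u == last and v not in reached:
                heapq.heappush(heap, (w + wt, p + (v,)))
    return None

W = {('a','c'): 1, ('c','b'): 1, ('a','b'): 3}
print(shortest_path(W, 'a', 'b'))  # (2, ('a', 'c', 'b'))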
Despite various inefficiencies that remain, the essential structure of a dynamic programming algorithm is present. One interesting improvement comes from incorporating the generated constraints C(p) into the generator of paths, using ConstrainComponent. To complete the derivation would require data structure selection and finally a translation into some conventional programming language.

V. Other results and limitations

Besides the Shortest Path algorithm shown here (and variants of it), the algorithm design methods have been used [7] to derive a simple maximum-finding algorithm and several variants on prime finding, including the Sieve of Eratosthenes and a more sophisticated linear-time algorithm. In these additional derivations, no new process types and only a few new methods had to be used. Single and multiple processor implementations have informally been obtained from process graph algorithms, for both prime finding and Shortest Path.

More algorithms need to be tried before specific claims about generality can be made. The intended domain of application is combinatorial algorithms, especially those naturally specified as an exhaustive search (possibly over an infinite space) for objects meeting some stated criteria, which can include being minimal with respect to a defined ordering. Backtrack algorithms, sieves, many graph algorithms and others are of this kind [8].

The methods described here are quite narrow in the sense that a practical automatic programming system would have to combine them with knowledge of:

1. Standard generators for different kinds of objects. Our methods can only modify an existing generator, not design one.
2. Data structure selection and basic operations such as searching a list.
3. Efficiency analysis to determine if an incorporation really gives a speedup.
4. Domain-specific facts, e.g., about divisibility if designing a prime-finding algorithm.
5. How to carry out the final mapping of process graph into a conventional programming (or multiprogramming) language.

VI. Discussion and Conclusions

The main lesson to be learned from this work is the importance of using an abstract and modular representation of programs during algorithm design. Details of data structure, low-level operations and computation sequencing should be avoided, if possible, until the basic algorithm has been obtained. (Since some algorithms depend crucially upon a well-chosen data structure, this will not always be possible.) Further, it is advantageous to represent algorithms in terms of a small number of standard kinds of process, for which a relatively large number of design methods will exist. The results so far indicate that just four standard processes suffice to encode a moderate range of different specifications and algorithms. Presumably others will be required as the range is extended, and it is an important question whether (or how long) the number can be kept small. A similar question can be asked about the design methods.

One would not expect methods based upon such general constructs as generators, constraints, orderings and mappings to have much power for the derivation of worthwhile algorithms.
For instance, if we had explicitly invoked the idea of dynamic programming, our derivation of a shortest path algorithm would have been shorter. For really difficult algorithms, the general methods may be of little use by themselves. We suggest that they should still serve as a useful complement to more specific methods, by finding speedups (based on incorporation of whatever constraints, orderings and mappings may be present) in an algorithm obtained by the specific methods.

As a final issue, it is interesting to speculate how the stepwise refinement approach to programming might be used by human programmers. Use of a standard set of process types and correctness-preserving transformations would be analogous to the formal manipulations one performs in solving integrals or other equations. If that were too restrictive, perhaps one could use the methods as a guide, without attempting to maintain strict correctness. After obtaining a good algorithm, one could review and complete the design, checking correctness of each transformation step. The result would be a formally correct but also well-motivated derivation.

Acknowledgements

Many helpful ideas and criticisms were provided by Cordell Green, Elaine Kant, Jorge Phillips, Bernard Mont-Reynaud, Steve Westfold, Tom Pressburger and Sue Angebranndt. Thanks also to Bob Floyd for sharing his insights on algorithm design.

References

1. Balzer, Robert; Goldman, Neil; and Wile, David. "On the transformational implementation approach to programming", Proc. 2nd Int'l Conference on Software Engineering (1976) 337-344.
2. Barstow, David R. Knowledge Based Program Construction, Elsevier North-Holland, New York, 1979.
3. Phillips, Jorge and Green, Cordell. "Towards Self-Described Programming Environments", Technical Report, Computer Science Dept., Systems Control, Inc., Palo Alto, California, April 1980.
4. Green, Cordell and Barstow, David R. "On Program Synthesis Knowledge", Artificial Intelligence, 10 (1978) 241-279.
5. Kahn, Gilles and MacQueen, David B. "Coroutines and Networks of Parallel Processes", Information Processing 77, IFIP, North-Holland Publishing Company, Amsterdam (1977) 993-998.
6. Paige, Robert. "Expression Continuity and the Formal Differentiation of Algorithms", Courant Computer Science Report #15 (1979) 269-658.
7. Tappel, Steve. "Algorithm Design: a representation and methodology for control structure synthesis", Technical Report, Computer Science Dept., Systems Control, Inc., Palo Alto, California, August 1980.
8. Reingold, Edward M., Nievergelt, Jurg, and Deo, Narsingh. Combinatorial Algorithms: Theory and Practice, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1977.
9. Elschlager, Robert and Phillips, Jorge. "Automatic Programming" (a section of the Handbook of Artificial Intelligence, edited by Avron Barr and Edward A. Feigenbaum), Stanford Computer Science Dept., Technical Report 758, 1979.
10. Floyd, R. W. "The Paradigms of Programming" (Turing Award Lecture), CACM 22:8 (1979) 455-460.
 | 
	1980 
 | 
	2 
 | 
					
12 
							 | 
USING A MATCHER TO MAKE AN EXPERT CONSULTATION SYSTEM BEHAVE INTELLIGENTLY*

Rene Reboh
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

This paper describes how even a simple matcher, if it can detect simple relationships between statements in the knowledge base, will support many features that will make a knowledge-based consultation system appear to behave more intelligently. We describe three features that are useful during the knowledge acquisition phase (involving the building and testing of the knowledge base), and four features that are of assistance during a consultation. Although these features are described in terms of the Prospector environment [2], it will be clear to the reader how these features can be transferred to other environments.

I INTRODUCTION

Partial matching (also referred to as interference matching, correspondence mapping, ...) touches many issues of representation and efficiency in many AI systems [3]. Its role in Prospector is significant because it is involved in many aspects of consultation and knowledge acquisition. Given any two statements S1 and S2 of the knowledge base, the Semantic Network Matcher of Prospector determines which of the following situations applies:

S1 and S2 are identical (S1 = S2)
S1 is a restriction of S2 (S1 is contained in S2), or S2 is a generalization of S1 (S2 contains S1)
S1 and S2 are disjoint statements (the intersection of S1 and S2 is empty)
S1 overlaps S2 (otherwise)

For instance, suppose the knowledge base contains the following statements:

S1: "rhyolite is present"
S2: "a rhyolite plug is present"
S3: "an igneous intrusive is present"
S4: "rhyolite or dacite is present"
S5: "pyrite is present"

As these statements are being added to the knowledge base, the Matcher will conclude that: S2 is a restriction of S1; S2 is a restriction of S3 (rhyolite is an igneous rock and a plug is a special kind of intrusive); S1 and S3 overlap (rhyolite is an igneous rock, but need not be an intrusive); S1 is a restriction of S4; S2 is a restriction of S4 (by transitivity); S3 and S4 overlap; S5 is disjoint from S1, S2, S3 and S4. A detailed description of how the Matcher operates in the Prospector environment can be found in [6].

* This work was supported by the Office of Resource Analysis of the U.S. Geological Survey under Contract No. 14-08-0001-15985. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author, and do not necessarily reflect the views of the U.S.G.S.
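A toy rendering of the Matcher's four verdicts, assuming, as a deliberate simplification of ours, that each statement reduces to the set of primitive taxonomy constraints it asserts: a statement carrying a superset of constraints describes a subset of situations, so restriction shows up as set containment. The real Matcher works over partitioned semantic networks; this is only a cartoon of its interface.

def relate(s1, s2):
    """Classify the relation between two statements, each modeled here as a
    frozen set of primitive constraints (a deliberate simplification)."""
    if s1 == s2:
        return 'identical'
    if s2 >= s1:                 # s2 carries every constraint of s1, plus more
        return 's2 restricts s1'
    if s1 >= s2:
        return 's1 restricts s2'
    if s1 & s2:
        return 'overlap'
    return 'disjoint'

# The rhyolite examples above, hand-reduced to constraint sets:
S1 = frozenset({'igneous', 'rhyolite'})                      # rhyolite is present
S2 = frozenset({'igneous', 'rhyolite', 'intrusive', 'plug'}) # a rhyolite plug is present
S3 = frozenset({'igneous', 'intrusive'})                     # an igneous intrusive is present
S5 = frozenset({'pyrite'})                                   # pyrite is present

print(relate(S1, S2))  # s2 restricts s1
print(relate(S1, S3))  # overlap
print(relate(S1, S5))  # disjoint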
We mention briefly here that the Matcher views each statement as a set of constraints corresponding to a set of assertions about the existence of physical entities or processes and their attributes. Partitioned semantic networks [5] are used to represent statements in the knowledge base, whereby these assertions are expressed in terms of relations and entries in taxonomies of the domain of application (in this case geology). Let us examine some of the features of a knowledge-based system that can be supported by such a Matcher.

II USE OF THE MATCHER IN KNOWLEDGE ACQUISITION

A. Aid in Maintaining Probabilistic Consistency of the Inference Network

The knowledge bases of many expert systems are organized as explicit or implicit networks of statements connected by rules or logical constructs. Because such networks provide the framework for judgmental reasoning, various numerical values, such as probabilities or certainties, are often maintained in them. A major concern of expert systems is the difficulty of keeping the knowledge base error-free and consistent in form and content as it grows. Let us examine how the Matcher assists Prospector in maintaining probabilistic consistency in the case where S1 is the most recently entered statement, and S2, which already exists in the knowledge base, is a restriction of S1.

(a) Because S2 is a restriction of S1, the probability of S2 can never exceed that of S1. In particular, if the prior probabilities supplied by the domain specialist (DS) do not satisfy this constraint, the Matcher will detect the violation and a correction will be required. Thus, before a consultation begins, we can assume that P(S2) <= P(S1).

(b) Unfortunately, even though all the probabilistic constraints are initially satisfied, the probability changes that follow from the use of inference rules may not maintain them. For example, if S1 and S2 are the hypotheses (right-hand sides) of two rules E1 --> S1 and E2 --> S2, and if the evidence (left-hand side) E1 is sufficiently unfavorable for S1, we may have P(S1|E1) < P(S2). Similarly, if the evidence E2 is sufficiently favorable for S2, we may have P(S1) < P(S2|E2). In essence, the problem is that when the DS provided the rule saying that the evidence E1 is unfavorable for S1 (rhyolite), he overlooked the fact that E1 is also unfavorable for S2 (a rhyolite plug), and did not supply a rule of the form E1 --> S2. Similarly, when he supplied the rule saying that the evidence E2 is favorable for a rhyolite plug, he overlooked the fact that E2 is also favorable for rhyolite, and did not supply a rule of the form E2 --> S1. Indeed, the DS should not be asked to supply rules for such obvious deductions. The Matcher helps to detect these situations. It is the responsibility of the consultation system to take the appropriate actions to maintain probabilistic consistency in the inference networks. In [1] it is shown how Prospector uses results from the Matcher to create additional rules ensuring that the probabilistic constraints will be maintained at run time when inference rules are applied.
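Point (a) above is easy to mechanize. The sketch below is our code, not Prospector's: it scans every recorded restriction link and flags prior probabilities that violate P(S2) <= P(S1), so they can be handed back to the DS for correction.

def check_priors(prior, restrictions):
    """prior: statement -> prior probability supplied by the domain specialist.
    restrictions: pairs (s2, s1) meaning s2 is a restriction of s1, as
    computed by the Matcher. Returns the violating pairs."""
    return [(s2, s1) for s2, s1 in restrictions if prior[s2] > prior[s1]]

prior = {'rhyolite': 0.2, 'rhyolite plug': 0.3}        # inconsistent on purpose
print(check_priors(prior, [('rhyolite plug', 'rhyolite')]))
# [('rhyolite plug', 'rhyolite')] -- must be corrected before consultation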
B. Aid in Designing the Semantic Representation

Because statements in the knowledge base may be arbitrarily complex, their semantic encoding is often entered manually during the knowledge acquisition phase. During a consultation, however, the user is allowed to volunteer information to the system, and a parser [4] is used to create the semantic representations corresponding to the volunteered statements. The kinds of statements that can be translated by the parser depend upon taxonomy contents and an external grammar.

Whether the semantic representation of statements is entered manually or constructed by a parser, the knowledge engineer needs to determine if the resulting representation is adequate. He must ensure that it reflects the intentions of the DS in all situations that could occur during a consultation. Statements can be combined to form larger statements or broken into smaller units, and their semantic representation need not always be elaborate. Which representation is finally chosen depends upon what other statements are in the knowledge base and how they are related, as well as what the DS thinks is most appropriate for each particular situation. Because the Matcher can be used to analyze how statements are related, it can assist in choosing an appropriate representation.
In particular, no elaborate semantic representation may be needed for a statement (or a portion of a statement) that is unrelated to any other statement in the knowledge base. Because such a statement is unlikely to have a major effect on the consultation, a simple text-string representation would be adequate for most purposes.

In addition to determining if a restriction/generalization relation exists between two statements, the Semantic Network Matcher in Prospector can identify corresponding elements of the statements and point out the nature of their differences. This feature has been exploited to some extent in the knowledge acquisition module of Prospector [2], where it was used to choose a representation for a statement from possible alternative representations. For instance, a conjunction "X and Y" can be encoded either as a single statement or as two statements, "X" and "Y," connected by a logical AND in the inference network. The first alternative is chosen if a statement already exists in the knowledge base that is equal to or is a restriction of "X and Y," or if "X and Y" is not related to any existing statement. The second alternative is chosen otherwise. We believe this approach can be generalized, and that an automatic procedure using the Matcher can be devised to assist in the uniform, and perhaps optimal, encoding of all statements in the knowledge base.

C. As a Search Feature: Accessing by Contents

Development and testing of a knowledge base typically extend over long periods. The knowledge engineer cannot be expected to remember all the statements (or any labels assigned to them) that he or another knowledge engineer developing the same knowledge base has already entered. The Matcher can be used as a search feature allowing the knowledge engineer to access statements by specifying a partial description of their (semantic) contents. In effect, the Matcher search (a command of the knowledge acquisition system) will allow the knowledge engineer to say something like: "Now I want to work on that portion of the inference network that deals with sulfide mineralization." The search-by-content feature is accomplished by matching the partial description specified by the knowledge engineer with statements currently in the knowledge base.
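In our constraint-set cartoon from above (again ours, not Prospector's actual machinery), the search-by-content command reduces to running the matcher between a partial description and every stored statement, keeping those that equal or restrict the description.

def search_by_content(query, knowledge_base):
    """Return statements matching the partial description: every constraint
    in `query` must appear in the stored statement, i.e. the statement
    equals or restricts the description."""
    return [name for name, stmt in knowledge_base.items() if stmt >= query]

kb = {
    'S1': frozenset({'sulfide', 'mineralization'}),
    'S2': frozenset({'sulfide', 'mineralization', 'disseminated'}),
    'S3': frozenset({'igneous', 'intrusive'}),
}
print(search_by_content(frozenset({'sulfide'}), kb))  # ['S1', 'S2']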
III USE OF THE MATCHER IN CONSULTATION

A. As a Tool for Maintaining Consistency of the User's Answers

1. Discovering Inconsistencies

If a user is allowed to volunteer information to an expert system, logical inconsistencies in the input could result. For example, suppose the user volunteers the two statements S1 and S2 concerning rhyolite and a rhyolite plug. Because S2 is a restriction of S1, if his statements imply that P(S2) > P(S1), he will be reminded that his answers are contradictory. This is the case, for instance, if the user says that "there are igneous rocks" with some degree of certainty, but later volunteers that "there is a rhyolite plug" with a higher degree of certainty. The contradictions occurring in a consultation often involve several levels of inference and long chains of restriction/generalization links; detecting them has sometimes embarrassed our expert users, while impressing them with Prospector's ability to spot the inconsistencies.

2. Changing Answers

A significant advantage of the Bayesian method used in Prospector for computing probabilities is the ease with which answers to questions can be changed without having to repeat all previous calculations. Basically, all that is required in changing an answer to a question about any evidence E is to change the probability for E and to propagate the results through the inference network. The possibility of violating the restriction/generalization probabilistic constraints causes the only complication in this process. However, by keeping a record of how statements are related as computed by the Matcher, the answer-changing program knows that it may also have to change the probabilities of some of the related statements in order to maintain consistency.

For instance, if the inference network contains the two rules S1 --> H1 and S2 --> H2, and the user gives a negative answer to a question about S1, the probability of H1 will be updated (in accordance with the rule strengths associated with the first rule). In addition, because S2 is a restriction of S1, the probability of H2 must also be updated (in accordance with the rule strengths of the second rule) as if a negative answer had been given for S2 as well. When the user then changes his answer for S1, the probabilities of both H1 and H2 will be automatically updated and propagated through the inference network.

By changing an answer, the user may contradict some of his earlier assertions, and changing these assertions may give rise to still further contradictions. This can confuse the user, but poses no problem for the answer-changing program, which is recursive and will make sure no contradictions are introduced before resuming a consultation.
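A sketch of the answer-changing bookkeeping, in our own drastic simplification: changing the probability of evidence E re-fires every rule whose left-hand side is E, and the recorded restriction links force the same update on the restricted statements, exactly as in the S1/S2 example above. The toy update functions stand in for Prospector's Bayesian rule strengths, and unlike the real program this version is neither recursive nor does it reconcile the restricted statements' own probabilities.

def change_answer(E, p_new, prob, rules, restrictions):
    """Change the answer for evidence E and propagate one step. `rules` maps
    an evidence statement to (hypothesis, update) pairs, where update is a
    function p_evidence -> p_hypothesis. `restrictions` maps a statement to
    the statements that restrict it; their rules fire as if answered too."""
    prob[E] = p_new
    for s in [E] + restrictions.get(E, []):   # E and its restrictions
        for hypothesis, update in rules.get(s, []):
            prob[hypothesis] = update(prob[E])
    return prob

rules = {
    'S1': [('H1', lambda p: 0.1 + 0.8 * p)],  # toy stand-ins for rule strengths
    'S2': [('H2', lambda p: 0.2 + 0.6 * p)],
}
prob = {'S1': 0.5, 'S2': 0.5, 'H1': 0.5, 'H2': 0.5}
# A negative answer for S1 also updates H2, because S2 restricts S1:
print(change_answer('S1', 0.0, prob, rules, {'S1': ['S2']}))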
B. Use of the Matcher as a Dialog Management Tool

1. Mixed Initiative, Volunteering Information

Prospector can work in either of two modes: the consequent mode or the antecedent mode. In the consequent mode Prospector attempts to establish (or rule out) an hypothesis, and it traces backward through the inference network and searches for appropriate evidence to ask about. In the antecedent mode, the inference network is used in a forward direction to propagate the consequences of input information. Prospector is a mixed-initiative system whereby the user has the option of taking control at any time during the consultation session to inform Prospector about facts he believes to be relevant. The Matcher makes this possible by relating the volunteered information to the current knowledge base in the same fashion as it did during the knowledge acquisition phase.

2. Control Strategy and Goal Selection

The information volunteered by the user is often relevant to several hypotheses. In Prospector, a simple scoring criterion is used to select the goal hypothesis. Among other things, this criterion takes into account the volunteered statements (whose effect on the hypotheses may be encouraging or discouraging) that are linked to each hypothesis as recorded by the Matcher.

3. Interaction Psychology

Before the user is asked about the evidence E selected by the control strategy, Prospector reminds him about any facts it thinks relevant. The information needed to recognize these facts consists of the links relating E to other statements in the knowledge base, computed by the Matcher and recorded at some earlier phase of the consultation or during knowledge acquisition. How these facts are presented to the user depends upon the current "state" of the statements involved. The state of a space is determined by its certainty value and how that certainty was established: whether it was inferred by using rules, volunteered by the user, or inferred through restriction/generalization links by the Matcher.
Depending upon the actual situation, one of several standard phrases is displayed before the question is asked, and an appropriate phrase is selected to ask the question. The following are some of the standard phrases used in a Prospector consultation:

- You told me about ...
- You suspected ...
- I know you doubted ...
- Your statements imply ...
- I know there is reason to doubt ...
- I have reason to suspect ...
- I have reason to doubt ...

Thus, the program might preface a question about a piece of evidence E by saying: "I have reason to doubt E. What is your degree of belief about that?" Clearly, these stock phrases are simple attempts to inform the user about the implications of his earlier statements. Although they have no effect on the function of Prospector and are not necessary in any logical sense, they enhance communication between the user and the consultation system and often serve to make the logical processes of the consultation system more evident.

The Matcher has been an important tool for the design of the interaction environment in all phases of development and use of the Prospector knowledge-based system. It is particularly important in the "psychology" of man-machine interaction in consultation systems that the user does not feel ignored and that the dialogs are not totally dictated by the system. Whenever possible, the user should be shown evidence that the system listens to him, understands what he says, and sometimes can even use the information he supplied!

IV CONCLUSION

By providing means to relate the statements in the knowledge base to each other, the semantic network Matcher in Prospector has been an important instrument in supporting many of the features that constitute the AI content of the system. We believe that the approach is a general one, and can enhance the intelligent behaviour of any knowledge-based system.

REFERENCES

[1] Duda, R.O., P.E. Hart, N.J. Nilsson, R. Reboh, J. Slocum and G.L. Sutherland, "Development of a Computer-Based Consultant for Mineral Exploration," Annual Report, SRI Projects 5821 and 6415, SRI International, Menlo Park, California (October 1977).

[2] Duda, R.O., P.E. Hart, K. Konolige and R. Reboh, "A Computer-Based Consultant for Mineral Exploration," Final Report, SRI Project 6415, SRI International, Menlo Park, California (September 1979).

[3] Hayes-Roth, F., "The Role of Partial and Best Matches in Knowledge Systems," in Pattern-Directed Inference Systems, D.A. Waterman and F. Hayes-Roth, eds., pp. 557-574 (Academic Press, New York, 1978).

[4] Hendrix, G.G., "LIFER: A Natural Language Interface Facility," SIGART Newsletter, No. 61, pp. 25-26 (February 1977).

[5] Hendrix, G.G., "Encoding Knowledge in Partitioned Networks," in Associative Networks -- The Representation and Use of Knowledge in Computers, N.V. Findler, ed., Academic Press, New York (1979).
[6] Reboh, R., "A Knowledge Acquisition Environment for Expert Consultation Systems," Ph.D. Dissertation, Department of Mathematics, Linkoping University, Sweden (to appear 1980).

[7] Waterman, D.A. and F. Hayes-Roth, eds., Pattern-Directed Inference Systems (Academic Press, New York, 1978).
 | 
	1980 
 | 
	20 
 | 
					
13 
							 | 
AN APPROACH TO ACQUIRING AND APPLYING KNOWLEDGE*

Norman Haas and Gary G. Hendrix
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025

ABSTRACT

The problem addressed in this paper is how to enable a computer system to acquire facts about new domains from tutors who are experts in their respective fields, but who have little or no training in computer science. The information to be acquired is that needed to support question-answering activities. The basic acquisition approach is "learning by being told." We have been especially interested in exploring the notion of simultaneously learning not only new concepts, but also the linguistic constructions used to express those concepts. As a research vehicle we have developed a system that is preprogrammed with deductive algorithms and a fixed set of syntactic/semantic rules covering a small subset of English. It has been endowed with sufficient seed concepts and seed vocabulary to support effective tutorial interaction. Furthermore, the system is capable of learning new concepts and vocabulary, and can apply its acquired knowledge in a prescribed range of problem-solving situations.

I INTRODUCTION

Virtually any nontrivial artificial intelligence (AI) system requires a large body of machine-usable knowledge about its domain of application. Construction of a knowledge base is currently a tedious and time-consuming operation that must be performed by people familiar with knowledge representation techniques. The problem addressed in this paper is how to enable computer systems to acquire sets of facts about totally new domains from tutors who are experts in their own fields, but have little or no training in computer science. In an attempt to find a practical solution to this problem, we have developed a pilot system for knowledge acquisition, which, along with several related research issues, is discussed below.

* This research was supported by the Defense Advanced Research Projects Agency under contract N00039-79-C-0118 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

The kinds of information we are most interested in acquiring are those needed to support what have been called "question-answering" or "fact-retrieval" systems. In particular, our interest is in collecting and organizing relatively large aggregations of individual facts about new domains, rather than in acquiring rules for judgmental reasoning. This is in contrast to previous work on such systems as those of Davis [1] and Dietterich and Michalski [2], which treat knowledge not so much as a collection of facts, but as a set of instructions for controlling the behavior of an engine.

The type of acquisition process we are exploring is "learning by being told," in contrast to the idea of "learning by example." It is this latter concept that has formed the basis of research by other investigators in this area, such as Winston [11] and Mitchell [8].
Our interest in knowledge acquisition is motivated by the desire to create computer-based systems that can aid their users in managing information. The core idea is that of a system that can talk to a user about his problems and subsequently apply other types of software to meet his needs. Such software would include data base management systems, report generators, planners, simulators, statistical packages, and the like. Interactive dialogs in natural language appear the most convenient means for obtaining most of the application-specific knowledge needed by such intelligent systems.

II KNOWLEDGE ACQUISITION THROUGH ENGLISH DIALOGS

Systems that acquire knowledge about new domains through natural-language dialogs must have two kinds of special capabilities. First, they must be capable of simultaneously learning both new concepts and the linguistic constructions used to express those concepts. (This need for simultaneous acquisition of concepts and language reflects the integral connection between language and reasoning.) Second, such systems must support interactive, mixed-initiative dialogs. Because a tutor may provide new knowledge in an incremental and incomplete manner, the system must keep track of what it has already been told so that it can deduce the existence of missing information and explicitly ask the tutor to supply it.

We are exploring the feasibility of such ideas by developing a series of Knowledge-Learning and -Using Systems (KLAUS). A KLAUS is an interactive computer system that possesses a basic knowledge of the English language, is capable of learning the concepts and vocabulary of new subject domains, and has sufficient expertise to apply its acquired knowledge effectively in problem-solving situations.

III RESEARCH ISSUES FOR KNOWLEDGE ACQUISITION

To create systems capable of acquiring knowledge through tutorial dialogs in English, several fundamental research problems must be resolved:

- A powerful natural-language processing capability is required. Although much progress has been made in recent years, previous work has assumed a complete knowledge base. Knowledge-acquisition dialogs require several adaptations and extensions.

- Seed concepts and seed vocabulary must be identified for inclusion in a core system. It is not at all obvious which words and concepts will be most useful in helping tutors describe the concepts of new domains.

- A structure for lexical entries must be specified so that the system can acquire new lexical information. Because such information provides a key link between surface linguistic form and underlying meaning, structural specification is a very challenging task for certain categories of words, particularly verbs.

- The linguistic constructions that people use in introducing new concepts must be identified and analyzed so they can be interpreted correctly by the natural-language processing system. Such constructions range from simple syntactic patterns to complex uses of analogy.

- A flexible scheme of knowledge representation is necessary. The representation must have general expressive power, since it may be applied to many different domains and must support the addition of new information.
It should include inherent features that can aid in organizing knowledge and in supporting the incremental acquisition of knowledge.

An efficient problem-solving capability is needed to answer questions and to draw inferences for integrating newly acquired information. This capability must be based on general principles, because no application-specific problem-solving procedures will be included in the system. (How to acquire such procedures is a separate and interesting research question.)

A methodology is needed for integrating new concepts into the system's knowledge base. Because tutors will often provide only partial descriptions of new concepts, methods have to be devised for ascertaining what additional facts must be sought from the tutor to insure proper linkage between the new concepts and those previously acquired.

A set of readily understandable questions is needed for eliciting information from tutors. The length and number of questions should be minimized to impose as small a burden on tutors as possible.

These problems must be dealt with in an integrated manner, balancing the requirements of one facet of the system against those of others. Our initial attempts to cope with this complex of issues are presented below.

OK, now I have assimilated LENGTH.
74> A foot is a unit of length
FOOTS are the first units of LENGTH you've defined. Now I have assimilated FOOT.
75> The plural of foot is feet
FEET - got it. Thanks.
76> A meter is a unit of length
How is it related to FOOT? >>There are 3.3 feet in a meter
Now I have assimilated METER.
77> A physical object has a length
So PHYSICAL OBJECTs have LENGTHs.

Exactly what to use as seed concepts is problematical for a system that must bootstrap its entry into a new domain. Most of the seed concepts in NANOKLAUS are classes of THINGS and RELATIONS. They have been included in the system either simply to avoid forcing tutors to relate everything to the most general concept (THING), or because they have a special status in English. For example, because pronominal usage depends on gender, the class MALE-BEING is defined and associated with the pronoun "he." One might consider defining as seed concepts a set of primitives, to which all other concepts must be reduced, but such a reductionist approach is probably unworkable [3].

NANOKLAUS uses five principles of knowledge organization to integrate new knowledge: (1) there are things; (2) there are subclasses of things (i.e., things can be subclassified); (3) there are relations among things; (4) there are subclasses of relations; (5) some of the relations are functions. The concepts of uniqueness and equality also play important roles. NANOKLAUS is not programmed to hold explicit conversations about these concepts, but rather to use them in its internal operations.

C. The Natural-Language Component

The natural-language component of NANOKLAUS uses a pragmatic grammar in the style of LADDER [6]. Although most of the linguistic processing performed by the system follows fairly standard practice, the pragmatic grammar is distinguished by its explicit identification of a number of syntactic structures used principally to define new concepts.
As an oversimplified example, NANOKLAUS might be thought of as looking for the syntactic pattern

<S> => <A> <NEW-WORD> <BE> <A> <KNOWN-COUNT-NOUN>

to account for such inputs as

A CARRIER IS A SHIP.

When one of these concept-defining patterns is recognized, an acquisition procedure associated with the pattern is invoked. This procedure generally adds new facts to the system's set of wffs and generates new entries in its lexicon. The various acquisition procedures also have provisions for responding to the tutor. Response generation is accomplished through the use of preprogrammed phrases and templates.
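To make the pattern-driven acquisition concrete, the following is a minimal sketch in Python (NANOKLAUS itself was not written in Python) of recognizing the concept-defining pattern above and updating a toy lexicon and fact set. The data structures, the acquire function, and the response template are illustrative assumptions, not the system's actual code.

    # A minimal sketch (assumed, not NANOKLAUS's code) of the pattern
    #   <S> => <A> <NEW-WORD> <BE> <A> <KNOWN-COUNT-NOUN>
    # The lexicon, wff list, and response wording are all illustrative.

    ARTICLES = {"a", "an"}
    BE_FORMS = {"is", "are"}

    lexicon = {"ship": "count-noun"}      # seed vocabulary
    wffs = []                             # the system's growing set of facts

    def acquire(sentence):
        words = sentence.lower().rstrip(".").split()
        if (len(words) == 5
                and words[0] in ARTICLES and words[3] in ARTICLES
                and words[2] in BE_FORMS
                and words[1] not in lexicon                  # <NEW-WORD>
                and lexicon.get(words[4]) == "count-noun"):  # <KNOWN-COUNT-NOUN>
            new, known = words[1], words[4]
            lexicon[new] = "count-noun"              # new lexical entry
            wffs.append(("subclass", new, known))    # new fact to assimilate
            return ("You're saying that anything that is a %s is also a %s."
                    % (new.upper(), known.upper()))
        return "I don't understand."

    print(acquire("A carrier is a ship"))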
D. Using Dialog to Aid Acquisition and Assimilation

By and large, it is unreasonable to expect tutors to volunteer all the information NANOKLAUS needs to assimilate new concepts. In particular, tutors cannot be expected to know what conclusions NANOKLAUS will draw about a newly taught concept from its previous knowledge, since they know neither the details of its knowledge nor the even more esoteric details of its assimilation procedures. NANOKLAUS must ask the tutor for the information it needs. But wording the system's questions in an understandable form involves a number of human engineering difficulties.

The acquisition of new verbs offers a cogent illustration of the problem. English verbs are highly idiosyncratic; making proper entries for them in a lexicon can be a formidable task. Among other considerations, one must ascertain a verb's passivity and transitivity, determine whether its indirect object can become the object of a FOR or TO prepositional phrase, whether it is reflexive or nonreflexive, and how the syntactic cases of its arguments may be "normalized" when the verb appears in different syntactic constructions. NANOKLAUS's tutors, of course, cannot be expected to describe verbs in linguistic terms, so the system asks a series of questions tutors can understand to elicit the same information. The following is a typical verb-acquisition exchange:

92> An officer can assign a unit
Give the -ed and -en forms of ASSIGN (for example, for GO: WENT GONE): >>assigned assigned
If an OFFICER can ASSIGN an UNIT something, type the most general description of that something, otherwise <return> >> (user typed a carriage return)
If an OFFICER can ASSIGN something an UNIT, type the most general description of that something, otherwise <return> >>a task group
Can an OFFICER ASSIGN an UNIT TO a TASK GROUP? >>yes
Could an UNIT be ASSIGNED a TASK GROUP by an OFFICER? >>yes
Can an OFFICER ASSIGN an UNIT with a TASK GROUP? >>no
Can an OFFICER ASSIGN a TASK GROUP? >>no
Can an OFFICER ASSIGN? >>no

Verbs that produce rather different answers to the above questions include GIVE, BUY, RUN (run a machine), SERVE and SUPPLY.*

* NANOKLAUS does not consider modal verbs (e.g., "want" and "know"), or verbs with particles (e.g., "pick up", "preside over") or sentential objects (e.g., "The captain requested that the ship change course").

E. The Knowledge Representation System

NANOKLAUS's knowledge representation system uses a many-sorted first-order logic that combines features from [9] and [5]. The backbone of the system is a treelike data structure reflecting the hierarchy of sorts (or classes) used by the system. This tree is "tangled" in that any given sort may have multiple ancestors. It is no accident that a sort hierarchy should serve as the primary data structure for an English-speaking acquisition system. "ISA" hierarchies are used by many natural-language processing systems; it appears that something very similar to a sort hierarchy plays a central role in the way humans organize their knowledge [7].

A fundamental task of the deductive system is to determine whether or not a given entity belongs to a particular sort. To aid in this task, NANOKLAUS maintains information about the direct descendants of each sort, such as whether some or all of the descendants of an ancestor exhaust (span) it, and whether two or more sibling sorts are mutually exclusive.

The utilization of sort information is clearly illustrated in the following dialog with NANOKLAUS:

80> A ship is a unit
You're saying that anything that is a SHIP is also an UNIT. What is an UNIT? >>a thing
You're saying that anything that is an UNIT is also a THING. Which of the following, if any, are proper subclasses of UNIT: PHYSICAL OBJECT, LEGAL PERSON, or MEASURE? >>none of them
Which of the following classes, if any, could never have any members in common with UNIT: PHYSICAL OBJECT, LEGAL PERSON, or MEASURE? >>none
OK, now I have assimilated UNIT.
OK, now I have assimilated SHIP.

81> A ship is a physical object
You're saying that anything that is a SHIP is also a PHYSICAL OBJECT. Is ANIMATE OBJECT a proper subclass of SHIP? >>no
Is ANIMATE OBJECT necessarily composed of entirely different members from SHIP? >>yes
Do ANIMATE OBJECT and SHIP span the set of all PHYSICAL OBJECTS? >>no
OK, now I have assimilated SHIP.

82> A submarine is a ship
You're saying that anything that is a SUBMARINE is also a SHIP. OK, now I have assimilated SUBMARINE.

83> A destroyer is a ship
You're saying that anything that is a DESTROYER is also a SHIP. Is SUBMARINE a proper subclass of DESTROYER? >>no
Is SUBMARINE necessarily composed of entirely different members from DESTROYER? >>yes
Do SUBMARINE and DESTROYER span the set of all SHIPS? >>no
OK, now I have assimilated DESTROYER.

84> A carrier is a ship
You're saying that anything that is a CARRIER is also a SHIP. Which of the following, if any, are proper subclasses of CARRIER: SUBMARINE or DESTROYER? >>neither of them
Which of the following classes, if any, could never have any members in common with CARRIER: SUBMARINE or DESTROYER? >>both of them
What combination of the following subclasses, if any, together with CARRIER, span the class of all SHIPS (with or without overlapping): SUBMARINE or DESTROYER? >>neither of them
OK, I have assimilated CARRIER.

85> A Kittyhawk is a carrier
You're saying that anything that is a KITTYHAWK is also a CARRIER. OK, now I have assimilated KITTYHAWK.
...
98> A carrier has a doctor who is a person
So CARRIERS have PEOPLE for DOCTORS. Must a CARRIER have exactly one DOCTOR? >>no

99> The Constellation is a Kittyhawk
You're saying that CONSTELLATION is one instance of a KITTYHAWK. OK, now I know about CONSTELLATION.

100> The JFK is a Kittyhawk
You're saying that JFK is one instance of a KITTYHAWK. OK, now I know about JFK.

101> The JFK is commanded by Moffet
ASSERT THAT (MOFFET COMMANDS JFK)
102> The length of the JFK is 1072 feet
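The sort machinery exercised in this dialog can be approximated with a small data structure. The following Python sketch is my own construction, not the paper's implementation: it records multiple ancestors per sort (the "tangled" tree) plus mutual-exclusion groups, which is enough to answer the kind of membership and exclusion questions NANOKLAUS asks. Names and the spanning slot are illustrative assumptions.

    # Illustrative sketch of a "tangled" sort hierarchy with the sibling
    # information NANOKLAUS keeps (mutual exclusion, spanning).

    class SortHierarchy:
        def __init__(self):
            self.parents = {}     # sort -> set of direct ancestors
            self.disjoint = []    # groups of mutually exclusive sorts
            self.spanning = {}    # parent -> groups of subsorts that span it

        def add(self, sort, *ancestors):
            self.parents.setdefault(sort, set()).update(ancestors)

        def is_a(self, sort, ancestor):
            # A DAG walk: a sort may reach an ancestor along several paths.
            if sort == ancestor:
                return True
            return any(self.is_a(p, ancestor)
                       for p in self.parents.get(sort, ()))

        def excluded(self, s1, s2):
            return any(s1 in group and s2 in group for group in self.disjoint)

    h = SortHierarchy()
    h.add("unit", "thing")
    h.add("physical-object", "thing")
    h.add("ship", "unit")
    h.add("ship", "physical-object")       # tangled: SHIP has two ancestors
    h.add("carrier", "ship")
    h.add("submarine", "ship")
    h.disjoint.append({"carrier", "submarine", "destroyer"})

    print(h.is_a("carrier", "thing"))          # True, via either path
    print(h.excluded("carrier", "submarine"))  # True, as asserted at 84>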
V FUTURE PROSPECTS

At this time NANOKLAUS can be best described as a fragile proof-of-concept system still in its early developmental stage. During this coming year, we plan to greatly expand its linguistic coverage by replacing our current pragmatic grammar with Robinson's [10] DIAGRAM grammar. Once this has been accomplished and NANOKLAUS's verb acquisition package extended to accept particles and prepositional phrases, we believe NANOKLAUS can serve as a useful tool for aiding AI researchers in the construction of knowledge bases for other AI systems, a task that currently consumes an inordinate proportion of research effort.

As suggested in the introduction, one of our long-term objectives is the extension of KLAUS to knowing about diverse types of external software packages. Given knowledge of such packages, a KLAUS could serve as an agent that interacts with them on a user's behalf. To explore these possibilities, we plan in the near future to provide NANOKLAUS with the capability of using a conventional data base management system. In this configuration, a user should be able to tell NANOKLAUS about a new domain, about a data base containing information pertaining to that domain, and about the interrelationship of the two. The new system would then be able to use the data base in answering questions regarding the domain.

Our work in the area of knowledge acquisition per se has really just begun. As development proceeds, we plan to turn our attention to making provisions for learning by analogy, for acquiring and reasoning about the internal structures of processes, for dealing with causality, and for dealing with mass terms.

ACKNOWLEDGMENTS

The deduction system supporting NANOKLAUS was developed in large part by Mabry Tyson, with Robert Moore, Nils Nilsson and Richard Waldinger acting as advisors. Beth Levin made major contributions to NANOKLAUS's verb-acquisition algorithm. Paul Asente assisted in the testing of the demonstration system. Barbara Grosz, Earl Sacerdoti, and Daniel Sagalowicz provided very useful criticisms of early drafts of this paper.

REFERENCES

1. R. Davis, "Interactive Transfer of Expertise: Acquisition of New Inference Rules," Proc. 5th International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, pp. 321-328 (August 1977).

2. T. G. Dietterich and R. S. Michalski, "Learning and Generalization of Characteristic Descriptions: Evaluation Criteria and Comparative Review of Selected Methods," Proc. 6th International Joint Conference on Artificial Intelligence, Tokyo, Japan, pp. 223-231 (August 1979).

3. J. A. Fodor, The Language of Thought, pp. 124-156 (Thomas Y. Crowell Co., New York, New York 1975).

4. G. G. Hendrix, "The LIFER Manual: A Guide to Building Practical Natural Language Interfaces," Technical Note 138, Artificial Intelligence Center, Stanford Research Institute, Menlo Park, California (February 1977).

5. G. G. Hendrix, "Encoding Knowledge in Partitioned Networks," in Associative Networks - The Representation and Use of Knowledge in Computers, N. V. Findler, ed. (Academic Press, New York, New York 1979).
6. G. G. Hendrix, E. D. Sacerdoti, D. Sagalowicz and J. Slocum, "Developing a Natural Language Interface to Complex Data," ACM Transactions on Database Systems, Vol. 3, No. 2 (June 1978).

7. P. H. Lindsay and D. A. Norman, Human Information Processing (Academic Press, New York, New York, 1972).

8. T. M. Mitchell, "Version Spaces: A Candidate Elimination Approach to Rule Learning," Proc. 5th International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, pp. 305-310 (August 1977).

9. R. Moore, "Reasoning from Incomplete Knowledge in a Procedural Deduction System," AI-TR-347, MIT Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts (1975).

10. J. J. Robinson, "DIAGRAM: An Extendable Grammar for Natural Language Dialogue," Technical Note 205, Artificial Intelligence Center, SRI International, Menlo Park, California (February 1980).

11. P. H. Winston, "Learning Structural Descriptions from Examples," Chapter 5 in The Psychology of Computer Vision, P. H. Winston, ed. (McGraw-Hill Book Company, New York, New York 1975).
 | 
	1980 
 | 
	21 
 | 
					
14 
							 | 
SELF-CORRECTING GENERALIZATION

Stephen B. Whitehill
Department of Information and Computer Science
University of California at Irvine
Irvine, Ca 92717

ABSTRACT

A system is described which creates and generalizes rules from examples. The system can recover from an initially misleading input sequence by keeping evidence which supports (or doesn't support) a given generalization. By undoing over-generalizations, the system maintains a minimal set of rules for a given set of inputs.

I GENERALIZATION

Many programs have been written which generalize examples into rules. Soloway [5] generalizes the rules of baseball from examples. Hayes-Roth [3] and Winston [8] generalize common properties in structural descriptions. Vere [6] has formalized generalization for several applications.

If a program maintains a current hypothesis about the rule set as it sees new examples, it is said to generalize incrementally. A program that incrementally forms generalizations may be sensitive to the order in which examples are presented. If exceptional examples are encountered first, the program may over-generalize. If the program is to recover and undo the over-generalization, it must have a certain amount of knowledge about why the over-generalization was made. The system to be described here has this type of self-knowledge. By associating positive and negative evidence with each generalization, the system is able to reorganize its rules to recover from over-generalizations. Even if it is initially misled by an unusual sequence of inputs, it still discovers the most reasonable set of rules.

II THE PROBLEM DOMAIN

The problem domain chosen for the system is learning language morphology from examples. For example, from the words "jumped", "walked" and "kissed" we can deduce the rule that the English past tense is formed by adding "ed".

The Concept Description Language consists of a set of rules. Each rule is a production consisting of a pair of character strings. When the left-hand side is matched, the right-hand side is returned. The left-hand string may optionally contain a '*' which will match zero or more characters. In this case the right-hand character string may contain a '*', and the value the star on the left-hand side matched is substituted for the star on the right-hand side. For example, the production for the example above looks like: *->*ED. This *->*ED rule does not always work. From "baked", "related" and "hoped" we see that for words ending in "e" we need only to add a "d". This rule is written as *E->*ED. For this domain, the problem bears resemblance to the Grammatical Inference Problem [4].

III RELATIONSHIPS BETWEEN RULES

Rule Containment.
Given rules P1 and P2, let S1 be the set of strings which match the left-hand side of P1 and S2 the set of strings which match the left-hand side of P2. If S1 is a subset of S2 then we say that P1 is contained by P2. This forms a partial ordering of the rules.

The Is-a-generalization-of Operator. Given rules P1 and P2, let S1 be the set of strings which match the left-hand side of P1 and S2 the set which match the left-hand side of P2. If P1 contains P2 and if P1 and P2 produce the same result for every element in S2, then P1 is a generalization of P2. This is also a partial ordering.

Note the distinction between the containment operator and the is-a-generalization-of operator. Basically, containment deals with the left-hand side of a rule. Is-a-generalization-of deals with both sides. An example will clarify this:

*->*S contains *K->*KS
*->*S is a generalization of *K->*KS
*->*S contains *CH->*CHES
*->*S and *CH->*CHES are unrelated by generalization

By definition, if P1 is a generalization of P2, P1 contains P2. The converse is not necessarily true.

If P1 is a generalization of P2 and P1 is a generalization of P3, then P1 is a common generalization of P2 and P3. A maximal common generalization is a common generalization that is a generalization of no other common generalization. Roughly, the maximal common generalization is the one which captures all common features of the rules being generalized. For example, given WALK->WALKED and TALK->TALKED, possible generalizations are: *->*ED, *K->*KED, *LK->*LKED and *ALK->*ALKED. The last one, *ALK->*ALKED, is the maximal one. In the concept description language we are using, all common generalizations are related on the is-a-generalization-of operator. Therefore in our domain the maximal common generalization is unique.

IV ORGANIZATION OF RULES

The rules and their evidence are organized in a tree structure. At the top level the rules are organized as a rule list. A rule list is a list of rules partially ordered on the containment operator. No rule may be contained by a rule which precedes it in the list. Associated with most rules is some evidence, which is itself in the form of another rule list. The only rules without evidence are the example pairs, whose evidence is fact. These correspond to terminal nodes in the evidence tree. If a rule R1 would contain a rule R2 which follows it, then R1 is marked as being blocked by R2. If R1 blocks R2 then evidence for R1 is negative evidence for R2.

The positive evidence consists of those rules which were generalized into the current generalization. Negative evidence for a generalization G is all the evidence of generalizations that are blocked by G. Thus when *->*ES blocks (*N->*NS + *K->*KS ==> *->*S), that is negative evidence for *->*ES.
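Because the rules are simple suffix rewrites, both orderings can be computed directly. The sketch below is a hypothetical Python rendering of these definitions (the system itself is written in MLISP); it represents a rule *X->*Y as the pair ("X", "Y"), so that matching is just a suffix test, and it reproduces the examples above.

    def longest_common_suffix(a, b):
        n = 0
        while n < min(len(a), len(b)) and a[len(a) - 1 - n] == b[len(b) - 1 - n]:
            n += 1
        return a[len(a) - n:]

    def contains(p1, p2):
        # P1 contains P2 iff everything matching *<left of P2> also matches
        # *<left of P1>, i.e. P2's left suffix ends with P1's left suffix.
        return p2[0].endswith(p1[0])

    def generalizes(p1, p2):
        # Containment, plus identical results on every string P2 matches.
        (a, b), (c, d) = p1, p2
        return contains(p1, p2) and d == c[:len(c) - len(a)] + b

    def maximal_common_generalization(ex1, ex2):
        # For two example pairs, keep the longest shared left suffix whose
        # induced right-hand side agrees with both examples.
        left = longest_common_suffix(ex1[0], ex2[0])
        right = ex1[1][len(ex1[0]) - len(left):]
        if ex2[1] == ex2[0][:len(ex2[0]) - len(left)] + right:
            return (left, right)
        return None

    print(generalizes(("", "S"), ("K", "KS")))      # True
    print(generalizes(("", "S"), ("CH", "CHES")))   # False: unrelated
    print(maximal_common_generalization(("WALK", "WALKED"),
                                        ("TALK", "TALKED")))  # ('ALK', 'ALKED')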
The evidence described here is much like evidence used for explanation in [1] or to maintain beliefs in [2]. In our system the evidence is used for reorganization, but it could be used for these other purposes as well.

Rule Application and Conflict Resolution. When the rule interpreter produces a response, it is as if it finds all rules which match the given input and then uses the one which doesn't contain any of the others (informally, the one with the least general left-hand side). In reality the rules and rule interpreter are organized so that the first rule that matches is the desired one.

Inserting New Rules. If a rule has produced the correct result, the new example pair is inserted into the evidence list for the rule. If the rule system has not produced the correct result, the rule is inserted in the main rule list before the first rule with which it will generalize. If it will not generalize with any rule, it is inserted before the first rule that contains it. The same rule insertion algorithm is used to insert new rules or evidence. This means that generalizations take place in an evidence list in the same way that they do in the main rule list.

V SYSTEM REORGANIZATION

Each blocked generalization has knowledge about which generalization is blocking it. Whenever evidence for a blocked generalization G1 is entered into the rule structure, we check to see if there is now more evidence for G1 than for the blocking generalization G2. If so, G2 is moved to the position in the rule list immediately preceding G1, G2 is marked as being blocked by G1, and G1 is no longer marked as being blocked.

There are several choices on how to compare positive and negative evidence. The one chosen is to count how much direct evidence there is for a rule. Direct evidence is that evidence found in the top level rule list in the evidence tree. Another method, which was rejected for this application, is to count the total number of pieces of evidence in the evidence tree. The first method was chosen because *CH->*CHES and *X->*XES are exceptions to *->*S (rather than *A->*AS, *B->*BS, *C->*CS, etc. being exceptions to *->*ES), because there is more direct evidence for *->*S (rules like *A->*AS) than for *->*ES. Even if half the words in English used *CH->*CHES, this would still be an exception to *->*S. This method produces the most reasonable set of rules.

The system has been tested on pluralizing French adjectives. French has a much more complicated morphology than English, having not only exceptions to rules but also exceptions to exceptions. The system was found to produce the same rules for pluralizing French adjectives as those found in a French-English dictionary. A detailed example of this appears in [7].
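A compact sketch of the interpreter's first-match discipline and the direct-evidence comparison that drives reorganization might look like the following. The dictionary-based rule representation is an assumption for illustration, not the MLISP implementation.

    def make_rule(lhs, rhs):
        return {"lhs": lhs, "rhs": rhs, "blocked": False, "evidence": []}

    def apply_rules(rules, word):
        # Rules are ordered so the first unblocked match is the least
        # general applicable rule, as the interpreter requires.
        for rule in rules:
            if not rule["blocked"] and word.endswith(rule["lhs"]):
                return word[:len(word) - len(rule["lhs"])] + rule["rhs"]
        return None

    def maybe_reorder(blocked, blocker, rules):
        # Evidence for a blocked rule is negative evidence for its blocker.
        # Compare only direct (top-level) evidence, per the paper's choice;
        # the blocker moves to just before the rule it used to block.
        if len(blocked["evidence"]) > len(blocker["evidence"]):
            rules.remove(blocker)
            rules.insert(rules.index(blocked), blocker)
            blocker["blocked"], blocked["blocked"] = True, False

    ches = make_rule("ch", "ches")
    s = make_rule("", "s")
    print(apply_rules([ches, s], "church"))   # 'churches'
    print(apply_rules([ches, s], "car"))      # 'cars'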
VI UNDOING GENERALIZATIONS - AN EXAMPLE

The system is written in MLISP and runs on UCI LISP. The following example was annotated from a trace of the system in operation. The rules and evidence are printed as a tree. The evidence for a node is indented from the parent node in the printout.

INPUT WORD? church
WHAT IS RESPONSE? churches
INPUT WORD? match
WHAT IS RESPONSE? matches
INPUT WORD? bus
WHAT IS RESPONSE? buses

RULES:
*->*ES
  BUS->BUSES
  *CH->*CHES
    MATCH->MATCHES
    CHURCH->CHURCHES

At this point we have over-generalized. We will find this out later. The only rule seen by the rule interpreter is *->*ES. BUS->BUSES and *CH->*CHES are evidence for *->*ES. MATCH->MATCHES and CHURCH->CHURCHES are evidence for the *CH->*CHES rule (which is itself evidence).

INPUT WORD? book
IS RESPONSE BOOKES? n
WHAT IS RESPONSE? books
INPUT WORD? back
IS RESPONSE BACKES? n
WHAT IS RESPONSE? backs

RULES:
B*K->B*KS
  BACK->BACKS
  BOOK->BOOKS
*->*ES
  BUS->BUSES
  *CH->*CHES
    MATCH->MATCHES
    CHURCH->CHURCHES

What should be regular cases are treated as exceptions.

INPUT WORD? car
IS RESPONSE CARES? n
WHAT IS RESPONSE? cars

RULES:
(*->*S)
  CAR->CARS
  B*K->B*KS
    BACK->BACKS
    BOOK->BOOKS
*->*ES
  BUS->BUSES
  *CH->*CHES
    MATCH->MATCHES
    CHURCH->CHURCHES

At this point we want to make the generalization *->*S, but this generalization is blocked by *->*ES. We make the generalization but mark it as blocked. The parentheses indicate that the rule is blocked. The only productions seen by the production system are: CAR->CARS, B*K->B*KS and *->*ES. The blockage of *->*S is negative evidence for *->*ES. The system will detect that the rules are in the wrong order when there is more evidence for *->*S (and hence against *->*ES) than there is for *->*ES. At this point there is just as much negative evidence as positive (looking down one level in the evidence tree).

INPUT WORD? bat
IS RESPONSE BATES? n
WHAT IS RESPONSE? bats

RULES:
(*->*ES)
  BUS->BUSES
  *CH->*CHES
    MATCH->MATCHES
    CHURCH->CHURCHES
*->*S
  CAR->CARS
  B*K->B*KS
    BACK->BACKS
    BOOK->BOOKS
  BAT->BATS

This additional negative evidence for *->*ES has caused a reordering of the rules. *->*ES is now blocked by *->*S (as it should be).

INPUT WORD? house
IS RESPONSE HOUSES? y
INPUT WORD? bunch
IS RESPONSE BUNCHES? y

The system now has a properly ordered rule set and can handle both regular and irregular cases.

VII CONCLUSIONS

By giving a generalization program some self-knowledge, it can recover from initially misleading input sequences. This introspection can be achieved by associating positive and negative evidence with generalizations. Without knowledge about what led to a generalization, it is not possible to undo the generalization.
The system described here discovers the most reasonable set of morphological rules for a given language construct (the set found in a dictionary) regardless of the input sequence. The choice of language morphology as the problem domain was arbitrary. Any domain with a concept description language whose maximal common generalization is unique would serve just as well. Further work is needed for concept description languages whose maximal common generalization is not necessarily unique. Any incremental generalization program could improve its ability to recover from misleading input by applying the techniques described.

REFERENCES

[1] Bechtel, R., Morris, P. and Kibler, D., "Incremental Deduction in a Real-time Environment", Canadian Society for the Computational Studies of Intelligence (May 1980).

[2] Doyle, J., "A Glimpse of Truth Maintenance", 6-IJCAI (1979), 232-237.

[3] Hayes-Roth, F. and McDermott, J., "Knowledge Acquisition from Structural Descriptions", Department of Computer Science, Carnegie-Mellon Univ. (1976).

[4] Hunt, E., Artificial Intelligence (1975).

[5] Soloway, E. and Riseman, E., "Levels of Pattern Description in Learning", 5-IJCAI (1977), 801-811.

[6] Vere, S., "Induction of Relational Productions in the Presence of Background Information", 5-IJCAI (1977), 349-355.

[7] Whitehill, S., "Self-correcting Generalization", U.C. Irvine C.S. Tech Report no. 149 (June 1980).

[8] Winston, P., "Learning Structural Descriptions From Examples", MIT-AI Technical Report 231 (1970).
 | 
	1980 
 | 
	22 
 | 
					
15 
							 | 
Intelligent Retrieval Planning

Jonathan J. King
Computer Science Department
Stanford University

A. Introduction

Intelligent retrieval planning is the application of artificial intelligence techniques to the task of efficient retrieval of information from very large databases.* Using such techniques, significant increases in efficiency can be obtained. Some of these improvements are not available through standard methods of database query optimization. Intelligent retrieval planning presents interesting issues related to other artificial intelligence planning research: planning with limited resources [2], optimizing the combined planning and execution process [9], and pursuing plans whose success depends upon the current contents of the database [5]. An experimental system has been implemented to demonstrate the novel kinds of query optimizations and to test strategies for controlling the inference of constraints.

The problem of query optimization has arisen with the development of high level logical data models and nonprocedural query languages ([1], [3]). These free a user from the need to understand the physical organization of the database when posing a query. However, the user's statement of the query may lead to very inefficient processing. Standard techniques of query optimization ([8], [11], [12]) manipulate the set of retrieval operations contained in the query to find a relatively inexpensive sequence. The manipulations are independent of the meaning of the query, depending entirely on such factors as the size of the referenced files.

The essential advance of intelligent retrieval planning over standard techniques of database query optimization is to combine knowledge about the semantics of the application domain with knowledge about the physical organization of the database. Domain knowledge makes it possible to use the constraints in a database query to infer additional constraints which the retrieved data must satisfy. These additional constraints may make it possible to use more efficient retrieval operations or permit the execution of a sequence of operations that has a lower cost. Knowledge of the physical organization of the database can be used to limit the attempts to make such inferences so that the combined process of retrieval and inference is cost effective.

* The research described here is part of the Knowledge Base Management System Project at Stanford and SRI, supported by the Advanced Research Projects Agency of the Department of Defense under contract MDA908-77-C-0822.

B. Finding semantic equivalents of a database query

The techniques of intelligent retrieval planning will be illustrated with a simple example relational database with data about the deliveries of cargoes by ships to ports.
The database contains three files, SHIPS, PORTS, and VISITS, with the attributes indicated:

SHIPS: (Shipname Type Length Draft Capacity)
PORTS: (Portname Country Depth Facilities)
VISITS: (Ship Port Date Cargo Quantity)

Semantic knowledge of the application domain is represented as a set of rules. The database is forced, via update restrictions, to conform to this set of rules. The general semantic knowledge for our sample database consists of these rules:

Rule R1. "A ship can visit a port only if the ship's draft is less than the channel depth of the port."
Rule R2. "A ship can deliver no more cargo than its rated capacity."
Rule R3. "Only liquefied natural gas (LNG) is delivered to ports that are specialized LNG terminals."
Rule R4. "Only tankers deliver oil."
Rule R5. "Only tankers can be over 600 feet long."

During intelligent retrieval planning, the use of the rules is shifted from checking updates to inferring constraints. That is, given certain query constraints, it is possible to infer new constraints that the desired items must meet. For example, suppose a query requests the names of all ships that are longer than 650 feet. By rule R5, it can be inferred that a semantically equivalent retrieval request is for the names of tankers that are longer than 650 feet. This inferred description of the items to be retrieved may permit more efficient processing than the original description.

C. The physical organization of a database

Inferred semantically equivalent sets of constraints can be exploited for intelligent retrieval only if the physical organization of the database, and hence the cost of processing queries, is taken into account. Often, the physical organization has been arranged so that the cost of retrieving a restricted subset of data depends upon the data attributes that have been restricted. For instance, a file may have an auxiliary "index" on one of its attributes. If such an index exists, then the data pages that contain items that meet a constraint on that attribute can be identified directly, and only those pages will be fetched. An indexed scan will be much less expensive than a scan through an entire file, measured in terms of pages fetched from disk. A discussion of retrieval costs for different physical database organizations is contained in [4].

Thus, given a query that constrains only unindexed attributes, a reasonable semantic retrieval strategy (subject to qualifications discussed in [4]) is to attempt to infer constraints on indexed attributes. Suppose that the SHIPS file has an index on the Type attribute. In that case, the best way to retrieve all the ships longer than 650 feet would be to fetch all the tankers by means of an indexed scan on Type, and then to check the Length value of each record fetched into main memory by that scan.
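As a concrete illustration, rule R5 can be written as a constraint-inference step: a lower bound on Length above 600 feet licenses adding Type = tanker to the query. The Python encoding below, with constraints keyed by (file, attribute, comparator), is a hypothetical sketch of mine, not the experimental system's representation.

    def apply_r5(constraints):
        """Return a semantically equivalent, possibly augmented, query."""
        inferred = dict(constraints)
        length_lower_bound = constraints.get(("SHIPS", "Length", ">"))
        if length_lower_bound is not None and length_lower_bound >= 600:
            # "Only tankers can be over 600 feet long."
            inferred[("SHIPS", "Type", "=")] = "tanker"
        return inferred

    query = {("SHIPS", "Length", ">"): 650}
    print(apply_r5(query))
    # Adds ('SHIPS', 'Type', '=') -> 'tanker', enabling the indexed scan.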
D. Novel query optimization based on the use of domain semantics

A query optimization method that uses domain semantics is interesting to the extent that it achieves significant increases in efficiency that are not available by other methods. One unique strategy that can arise when semantics are considered is the inclusion of an extra file in the set of files examined when a query is processed.

For example, suppose a query requests the quantity of liquefied natural gas delivered for each known visit to ports with a channel depth of less than 20 feet. With no inference, a typical query processor would retrieve all PORTS records with a Depth value of less than 20. For each one, it would retrieve all VISITS whose Port attribute was the same as the Portname for the PORTS record and whose Cargo attribute was liquefied natural gas. The cost of the retrieval varies as the product of the sizes of the PORTS and VISITS files.

However, with appropriate rules and indexes, intelligent retrieval planning can provide a much faster retrieval method. Suppose that the VISITS file has an index on the Ship attribute. In effect, this means that the database has been set up to provide inexpensive access from each ship to the set of its visits, while the set of visits to a specific port is much costlier to retrieve. Using rule R1, it can be inferred that the visits requested by the query could have been made only by ships with a draft of less than 20 feet. It is now possible to retrieve SHIPS with Draft less than 20, then retrieve their associated VISITS (using the index), and finally, for each VISITS record with a Cargo value of liquefied natural gas, retrieve the associated PORTS record to check the value of Depth. If the Draft constraint substantially restricts SHIPS (and therefore the associated VISITS as well), then the overall cost will be much lower than that of the straightforward method, despite the fact that an extra file and an extra retrieval operation have been added. In a simulation test of this method, using a cost model based on the System R relational database system [7] in which the VISITS file is much larger than the PORTS and SHIPS files, the simulated retrieval cost was reduced by more than an order of magnitude.
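A rough record-fetch model shows why the inferred plan wins. Every number below is invented purely for illustration; only the shape of the comparison follows the paper's argument.

    # Made-up cost model of the two plans for the LNG query.
    PORTS, SHIPS, VISITS = 200, 1000, 100000      # records per file

    def straightforward_cost():
        shallow_ports = int(PORTS * 0.10)          # Depth < 20
        # Scan PORTS once, then scan VISITS once per qualifying port.
        return PORTS + shallow_ports * VISITS

    def inferred_plan_cost():
        shallow_ships = int(SHIPS * 0.10)          # via rule R1: Draft < 20
        visits_per_ship = 5                        # fetched through the index
        lng_visits = int(shallow_ships * visits_per_ship * 0.20)
        # Scan SHIPS, probe VISITS through the Ship index, then check PORTS.
        return SHIPS + shallow_ships * visits_per_ship + lng_visits

    print(straightforward_cost(), inferred_plan_cost())
    # 2000200 vs. 1600: the extra file still wins by orders of magnitude.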
E. Controlling the inference of additional constraints

Intelligent retrieval planning is complicated by the need to weigh possible gains in retrieval efficiency against the cost of performing inferences. The amount of planning done by the intelligent retrieval planning system in processing a particular query is determined by the cost of answering the unimproved query and the possible improvements. The inference control mechanism has these key features:

(1) The specific retrieval problem determines which constraints to try to infer (for example, an attempt is made to add constraints to indexed fields).

(2) Knowledge about both the structure and the content of the database determines the effort to devote to attempting some inference.

(3) Retrieval from the database is an inherent part of the inference process. The ability to carry out an inference (and hence the shape of the whole retrieval plan) may depend upon the current contents of the database.

These features can be illustrated briefly in another example. Suppose the VISITS file is indexed only on Cargo, and a query requests data on visits to the port of Zamboanga. The retrieval strategy mentioned in section C suggests an attempt to infer a constraint on Cargo from the given constraint on Port.

Given the number of records in the VISITS file, it is possible to compute the effort needed to perform a sequential scan. The effort allotted to inference will be a function of this. There is no guarantee that a helpful constraint can be found for any particular query. This suggests a policy of allotting to the inference process a fixed small fraction of the effort which the original retrieval would take. With such a policy, the effort to plan the retrieval will result in a minor increase in response time if the inference attempt fails, but may provide a major improvement if it succeeds. Although the policy is intuitively plausible, other strategies for allotting effort during problem solving under uncertainty, such as those discussed in [9], are being investigated.
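The fixed-fraction policy is easy to state as a sketch. PLANNING_FRACTION and the step interface below are my assumptions, not the system's; the point is only that planning stops when either a strong constraint is found or the budget is spent.

    PLANNING_FRACTION = 0.05    # assumed fraction of the unimproved cost

    def plan_with_budget(unimproved_cost, inference_steps):
        budget = PLANNING_FRACTION * unimproved_cost
        spent = 0.0
        for step in inference_steps:      # each step reports (cost, result)
            cost, constraint = step()
            spent += cost
            if constraint is not None:
                return constraint, spent  # strong goal constraint found
            if spent >= budget:
                break                     # planning resources exhausted
        return None, spent                # fall back to the original plan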
Control of the inference process can be viewed as control of the moves in a space of constraints on attributes. Constraints can be moved either by applying a rule, by retrieving items restricted on one attribute and observing their values on other attributes, or by matching constraints on attributes defined on the same underlying set of entities. Continuing the example, starting with a constraint on the Port attribute of VISITS, new constraints can be found by retrieving from VISITS or by assigning the value "Zamboanga" to the Portname field of PORTS. The first choice is rejected because the objective is to reduce the cost of that very retrieval. With a constraint on Portname in PORTS, a retrieval from PORTS can be performed. In this case, just a single record will be obtained, because Portname is the unique identifier in that file. With appropriate access methods, such as hashing, the retrieval will be very inexpensive.

When the PORTS record for "Zamboanga" has been obtained, rules R1 and R3 may apply. If rule R3 applies, that is, if Zamboanga is a specialized liquefied natural gas terminal, then a strong constraint will be obtained on the goal attribute Cargo, and retrieval from VISITS will take place by means of an indexed scan rather than by means of a more expensive complete scan. If the data on Zamboanga does not support that inference, then other inference paths will have to be considered. This illustrates the possible dependence of retrieval planning on the current contents of the database.

The cost of each inference step (generating new inference path nodes, testing rules, and retrieving from the database itself) is taken from the allotment of planning resources. Planning terminates if a strong goal constraint is found, if no potential inference path can be extended, or if planning resources are exhausted.

F. Conclusion

Intelligent retrieval planning can provide novel and significant improvements in query processing efficiency. It draws on a knowledge of the physical organization of the database and on semantic knowledge of the application modelled in the database. The outcome of retrieval planning, both the retrieval method chosen and its cost, can depend upon the current contents of the database.

An experimental system exists that performs inferences on queries stated in a subset of the SODA relational database query language [6]. The system uses a simple retrieval cost model to select the least expensive semantically equivalent expression of the retrieval request. The cost model is used in conjunction with a planning executive to limit the inference of additional constraints.
Work is under way to codify intelligent retrieval strategies which, though they are specific to a given class of physical database organizations, are independent of the application domain. The eventual aim of this work is to develop a system which, given the set of domain rules and the description of the physical organization for a database, can provide the functions of intelligent retrieval planning described in this paper, much as the EMYCIN system [10] provides knowledge acquisition functions independently of the knowledge base to be built.

References

1. Codd, E. F., A relational model for large shared data banks, Commun. ACM 13:6 (1970), 377-387.

2. Garvey, Thomas D., Perceptual strategies for purposive vision, Technical Note 117, SRI International, Menlo Park, California, September 1976.

3. Kim, Won, Relational database systems, ACM Computing Surveys 11:3 (1979), 186-212.

4. King, Jonathan J., Exploring the use of domain knowledge for query processing efficiency, Technical Report HPP-79-30, Heuristic Programming Project, Computer Science Department, Stanford University, December 1979.

5. Klahr, Philip, Planning techniques for rule selection in deductive question-answering, in Pattern Directed Inference Systems, D. A. Waterman and F. Hayes-Roth (Eds.), Academic Press, 1978.

6. Moore, Robert C., Handling complex queries in a distributed data base, Technical Note 170, SRI International, Menlo Park, California, October 1979.

7. Selinger, P. Griffiths, et al., Access path selection in a relational database management system, in Proc. ACM-SIGMOD 1979, Boston, Mass., pp. 23-34.

8. Smith, J. M. and P. Chang, Optimizing the performance of a relational algebra data base interface, Commun. ACM 18:10 (1975), 568-579.

9. Sproull, Robert F., Strategy construction using a synthesis of heuristic and decision-theoretic methods, Report CSL-77-2, Xerox Palo Alto Research Center, Palo Alto, California, July 1977.

10. Van Melle, William, A domain-independent production-rule system for consultation programs, in Proc. IJCAI-79, Tokyo, Japan, 1979, pp. 923-926.

11. Yao, S. Bing, Optimization of query evaluation algorithms, ACM Transactions on Database Systems 4:2 (1979), 133-155.

12. Youssefi, Karel A. Allen, Query processing for a relational database system, Memorandum UCB/ERL M78/3, Electronics Research Laboratory, University of California, Berkeley, California, January 1978.

Acknowledgments

Many thanks for perceptive comments by Jim Bennett, Jim Davidson, Larry Fagan and Jerry Kaplan of Stanford University, and Barbara Grosz of SRI International.
 | 
	1980 
 | 
	23 
 | 
					
16 
							 | 
A THEORY OF METRIC SPATIAL INFERENCE

Drew McDermott
Yale University
Department of Computer Science
New Haven, Connecticut

ABSTRACT*

Efficient and robust spatial reasoning requires that the properties of real space be taken seriously. One approach to doing this is to assimilate facts into a "fuzzy map" of the positions and orientations of the objects involved in those facts. Then many inferences about distances and directions may be made by "just looking" at the map, to determine bounds on quantities of interest. For flexibility, there must be many frames of reference with respect to which coordinates are measured. The resulting representation supports many tasks, including finding routes from one place to another.

In the past, AI researchers have often sought to reduce spatial reasoning to topological reasoning [4, 6, 8]. For example, the important problem of finding routes was analyzed as the problem of finding a path through a network or tree of known places. This sort of formulation throws away the basic fact that a route exists in real physical space regardless of our knowledge of any of the places along the way. So a network-based algorithm will fail to exhibit two important phenomena of route-finding:

> Often you know roughly what direction to go in without having any idea of the details of the path, or even if the path is physically possible.

> You can tell immediately that you don't know how to get to a place, just by verifying that you don't know the direction to that place.

There are many other problems that a topological approach fails to treat adequately. Here are some of the problems we (Ernie Davis, Mark Zbikowski and I) have worked on:

> How are metric facts, such as "A is about 5 miles from B" or "The direction from A to B is north", to be stored?

> How are queries such as "Is it farther from A to B than from A to C?" to be answered?

> Given a large set of objects and facts relating them, how do you find the objects that might be near some position? or with some orientation?

Some of these problems have received more of our attention than others. In what follows, I will sketch our approach, the details of various algorithms and data structures, and the results we have so far.

All of our solutions revolve around keeping track of the fuzzy coordinates of objects in various frames of reference. That is, to store metric facts about objects, the system tries to find, for each object, the ranges in which quantities like its X and Y coordinates, orientation and dimensions lie, with respect to convenient coordinate systems. The set of all the frames and coordinates is called a fuzzy map. We represent shapes as prototypes plus modifications [3, 5]. The domain we have used is the map of Yale University, from which most of my examples will be taken.

* Research supported by NSF under contract MCS7803599

To date we have written programs to do the following tasks:

(1) Given a stream of metric relationships, create a fuzzy map of the objects involved.

(2) Given a fuzzy map, test the consistency of a relationship or find the value of a term.

(3) Given a fuzzy map, find objects with a position and orientation near some given value.
(4) Plot a course around objects or through conduits discovered using (3).

So far we have invested most of our effort in the study of task (2), what I described as "just looking" at the map to see what's true. This actually involves using hill climbing to see if a relationship can be satisfied, or to find the possible range of values of a term. So, in Figure 1, to answer the query "What's the distance from Kline to Sterling in meters?" the system plunks down two points in the fuzz boxes of Kline and Sterling, and moves them as close together, then as far apart, as it can. To answer the query "Is Kline closer to Dunham than to Sterling?" it looks for a configuration of points from the fuzz boxes of Kline, Dunham and Sterling in which Kline is further from Dunham than Sterling. (Since it fails to find it, the answer to the query is "Yes.")
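For the simple case of axis-aligned fuzz boxes, the minimum and maximum separations the hill climber searches for actually have a closed form, which the following Python sketch computes. The box coordinates are invented for illustration, and the real system's hill climbing also handles configurations this shortcut does not.

    import math

    def distance_bounds(box_a, box_b):
        """Boxes are ((xlo, xhi), (ylo, yhi)); returns (min, max) distance."""
        min_sq = max_sq = 0.0
        for (alo, ahi), (blo, bhi) in zip(box_a, box_b):
            gap = max(blo - ahi, alo - bhi, 0.0)   # 0 when intervals overlap
            far = max(ahi - blo, bhi - alo)        # widest possible separation
            min_sq += gap * gap
            max_sq += far * far
        return math.sqrt(min_sq), math.sqrt(max_sq)

    kline = ((0, 40), (100, 160))        # illustrative fuzz boxes, in meters
    sterling = ((300, 360), (80, 150))
    print(distance_bounds(kline, sterling))   # (260.0, ~368.78)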
The machinery introduced so far enables us to retrieve characteristics of given objects. It is also important to be able to retrieve objects given their spatial characteristics (task (3)). For example, if you are trying to get from one place to another in a city, you will want to know what streets to use, i.e., how to find the nearest "long porous objects" with approximately the right position and orientation. This is a job for k-d trees of the kind devised by Bentley. [1, 2] In these trees, a large set of objects is broken down into manageable chunks by an obvious generalization of binary search: repeatedly discriminate on each coordinate. An example is shown in Figure 2.

The original version of k-d trees was designed to work on data bases in which all primitive terms have known values. In our application, most primitive terms can only be assigned ranges. To deal with this problem, we take the following tack: if a given attribute of an object is "very fuzzy" (e.g., its orientation is known only to lie between 0 and 2 pi), then we do not index it on that attribute. But if it is only moderately fuzzy, then we index it as though its value were the midpoint of its range. This requires that on retrieval we be willing to look around a little for objects fitting our requirements. That is, if we need to find a street with a given orientation, we keep relaxing the requirement until we find one. Obviously, a street found after many relaxations is only a plausible candidate, and must be proven to actually work; and the process must be terminated when it is unlikely to give worthwhile results.

This algorithm for finding objects that might have given characteristics is used by our route-finding programs. Exactly what finding a route means depends on the density of the region to be traversed. If it is mostly open, then the problem is to plan to avoid obstacles; if it is mostly obstacle, then the problem is to plan to use conduits. Either way, the system must find objects with known properties (e.g., "open and pointing in the right direction" or "filled and lying in my way").

To summarize, our main results so far are these: representing space as a structure of multiple frames of reference, within which objects have fuzzy positions, is efficient and robust. In this context, assimilation is the process of constricting fuzz or creating new frames to capture a new fact. Route finding involves computing a fuzzy vector from where you are to where you want to be, then finding objects which can help or hinder your progress, and altering the plan to take them into account.

Many problems still remain. The assimilation algorithm needs improvement. The route finder has not yet been completed or connected with the assimilation and retrieval algorithms. As yet we have not implemented a (simulated) route executor, although this is a high priority.

Acknowledgements: Ernie Davis and Mark Zbikowski have helped develop many of the ideas in this paper, and made suggestions for improving the exposition.

REFERENCES

[1] Jon Bentley 1975 Multidimensional binary search trees used for associative searching, Comm. ACM 18, no. 9, pp. 509-517
[2] Jon Bentley and Jerome Friedman 1979 Data structures for range searching, Comput. Surveys 11, no. 4, pp. 397-409
[3] John Hollerbach 1975 Hierarchical shape description of objects by selection and modification of prototypes, Cambridge: MIT AI Laboratory Technical Report 346
[4] Benjamin Kuipers 1978 Modeling spatial knowledge, Cognitive Science 2, no. 2, p. 129
[5] David Marr and H. Keith Nishihara 1977 Representation and recognition of the spatial organization of three dimensional shapes, Cambridge: MIT AI Laboratory Memo 416
[6] Drew McDermott 1974 Assimilation of new information by a natural language-understanding system, Cambridge: MIT AI Laboratory Technical Report 291
[7] Drew McDermott 1980 Spatial inferences with ground, metric formulas on simple objects, New Haven: Yale Computer Science Research Report 173
[8] James Meehan 1976 The metanovel: writing stories by computer, New Haven: Yale Computer Science Research Report 74
| 1980 | 24 |
17 |
DESIGN SKETCH FOR A MILLION-ELEMENT NETL MACHINE

Scott E. Fahlman
Carnegie-Mellon University, Department of Computer Science
Pittsburgh, Pennsylvania 15213

Abstract

This paper describes (very briefly) a parallel hardware implementation for NETL-type semantic network memories. A million-element system can be built with about 7000 IC chips, including 4000 64K RAM chips. This compares favorably with the hardware cost of holding the same body of knowledge in a standard computer memory, and offers significant advantages in flexibility of access and the speed of performing certain searches and deductions.

1. Introduction

In [1] I presented a scheme for representing real-world knowledge in the form of a hardware semantic network.(1) In this scheme, called NETL, each node and link in the network is a very simple hardware processing element capable of passing single-bit markers through the network in parallel. This marker-passing is under the overall control of an external serial processor. By exploiting the parallelism of this marker-passing operation, we can perform searches, set intersections, inheritance of properties and descriptions, multiple-context operations, and certain other important operations much faster than is possible on a serial machine. These new abilities make it possible to dispense with hand-crafted search-guiding heuristics for each domain and with many of the other procedural attachments found in the standard AI approaches to representing knowledge. In addition to the difficulty of creating such procedures, and the very great difficulty of getting the machine to create them automatically, I argue that the heuristic systems are brittle because they gain their efficiency by ignoring much of the search space. NETL, on the other hand, looks at every piece of information that might be relevant to the problem at hand and can afford to do so because it does not have to look at each piece of information serially.

(1) This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3507, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

NETL has been viewed by many in the AI community as an interesting metaphor and a promising direction for future research, but not as a practical solution to current AI problems because of the apparently impossible cost of implementing a large NETL system with current technology. The problem is not that the hardware for the nodes and links is too costly -- hundreds or even thousands of these elements can be packed onto a single VLSI chip. Rather, the problem is in forming new private-line connections (wires) between particular nodes and links as new information is added to the network. These connections cannot be implemented as signals on a single shared bus, since then all of the parallelism would be lost.
Indeed, it is in the pattern of connecting wires, and not in the hardware nodes and links, that most of the information in the semantic network memory resides. A large switching network, similar to the telephone switching network, can be used in place of physical wires, but for a network of a million elements one would need the functional equivalent of a crossbar switch with 4 x 10^12 crosspoints. Such a switch would be impossibly expensive to build by conventional means.

In the past year I have developed a multi-level time-shared organization for switching networks which makes it possible to implement large NETL systems very cheaply. This interconnection scheme, which I call a hashnet because some of its internal connections are wired up in a random pattern, has many possible uses in non-AI applications; it is described in its general form in another paper [2]. In this paper I will briefly describe a preliminary design, based on the hashnet scheme, for a semantic network memory with 10^6 NETL elements. (An "element" in NETL is the combination of a single node and a four-wire link.) A million-element NETL system is 10-20 times larger than the largest AI knowledge bases in current use, and it offers substantial advantages in speed and flexibility of access. It is an open question whether a knowledge base of this size will be adequate for common-sense story understanding, but a system of 10^6 NETL elements should hold enough knowledge for substantial expertise in a variety of more specialized domains. In a paper of this length I will be able to sketch only the broad outlines of the design -- for a more complete account see [3].

The NETL machine itself, excluding the serial control computer, requires about 7000 VLSI chips, 4000 of which are commercial 64K dynamic RAM chips. (See the parts list, Table 1.) As we will see later, with the same 64K memory technology it would require a comparable number of chips to store the same body of information in a conventional Planner-style data base, assuming that the entire data base is kept in a computer's main memory and not on disk. So, far from being impossibly expensive, this scheme is quite competitive with standard random-access memory organizations. I am about to seek funding to build a million-element prototype machine within the next two or three years. The 64K RAM chips are not available today in sufficient quantities, and may be quite expensive for the next couple of years. The prototype machine will be designed so that 16K RAMs can be substituted if necessary, giving us a 256K-element machine to use until the 64K RAM chips are obtained.

2. Requirements of NETL

Figure 1 shows a basic NETL element as it was originally conceived. Commands from the control computer are received over the common party-line bus. The applicability of any command to a given element depends on the element's unique serial number, on the state of 16 write-once flag bits which indicate what type of node or link the element represents, and on the state of 16 read-write marker bits which indicate the current state of the element. These marker bits represent the short-term memory in the system. Also present are some number (4 in this design) of distinct link wires, and a node terminal to which link wires from other elements can be connected. Commands typically specify that all elements with a certain bit-pattern should send a one-bit signal across incoming or outgoing link wires, and that any element receiving such a signal should set or clear certain marker bits. It is also possible to address a command to a specific element, or to get any element with a certain marker pattern to report its serial number over the common bus. Using these commands, it is possible to propagate markers through the network Quillian-style or to control the marker propagation in any number of more precise ways. For details, see [1], especially section 2.3 and appendix A.1.
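To make the command style concrete, here is a minimal software sketch, invented for this exposition rather than taken from the NETL design, of one marker-propagation command: every element carrying a given marker signals across its outgoing link wires, and every receiver sets a marker. Iterating such commands propagates markers through the network Quillian-style under serial control.

```python
class Element:
    def __init__(self):
        self.flags = 0      # 16 write-once type bits (simplified)
        self.markers = 0    # 16 read-write marker bits (short-term memory)

def propagate(elements, links, test_bit, set_bit):
    """One propagation cycle: each element with marker `test_bit` signals
    across its outgoing links; receivers set marker `set_bit`.
    `links` maps an element id to the ids its link wires point at."""
    senders = [e for e in elements if elements[e].markers & (1 << test_bit)]
    for e in senders:                        # conceptually all in parallel
        for target in links.get(e, ()):
            elements[target].markers |= 1 << set_bit

# Mark everything reachable from element 0 with marker bit 1.
elements = {i: Element() for i in range(5)}
links = {0: [1, 2], 2: [3], 3: [4]}
elements[0].markers |= 1 << 1
for _ in range(len(elements)):               # enough cycles to reach a fixpoint
    propagate(elements, links, test_bit=1, set_bit=1)
print(sorted(e for e in elements if elements[e].markers & 2))   # [0, 1, 2, 3, 4]
```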
In a million-element design, then, we have 4 sets of 10^6 link wires to connect to 10^6 node terminals, each by a private line. A link wire is connected to only one node terminal, but a node terminal may have any number of link wires attached to it. Unlike the telephone system, this network must support all 4 million connections simultaneously; once a connection is made, it becomes part of the system's long-term memory, and a connection is seldom, if ever, released. As the system learns new information, new links are wired up one at a time, and this must be done without disturbing the connections already in use. If the same network hardware is to be used for different knowledge bases at different times, it must be possible to drop one set of connections and replace them with a new set.

A few additional constraints will help us to separate interesting designs from uninteresting ones. If we want to obtain a roughly human-like level of performance in our knowledge base, the transmission of a bit from one element to another can take as long as a few milliseconds. Since answering a simple question -- for example, determining whether an elephant can also be a cabbage -- takes something like 20 to 60 basic marker-propagation cycles, a propagation time of 5 milliseconds gives a response time of .1 to .3 seconds. This figure is independent of the total size of the network. This means that some degree of parallelism is essential, but that with microsecond-speed technologies there is room for some time-sharing of the hardware as well.

New connections are established individually, and setting them up can take somewhat more time than simple propagations: humans are able to add only a few items to long-term memory per second. If an attempt to create a new connection should fail occasionally, nothing disastrous occurs -- the system simply skips over the link it is trying to wire up and goes on to the next free one.
3. The NETL Machine Design

As mentioned earlier, the key problem here is to build a switching network for connecting link wires to nodes. Since the four wires of a link are used at different times, we can think of this switch as four separate sub-networks, each with 10^6 inputs, 10^6 outputs, and with all 10^6 connections operating at once. This network must be, in the jargon of network theory, a seldom-blocking network. It must be possible to add new connections one by one, without disturbing any connections that are already present, but some small chance of failure in establishing new connections can be tolerated. Once established, a connection must work reliably, and must be able to transmit one-bit signals in either direction. Note that by "connection" here I mean a setting of the network switches that establishes a path from a given input to a given output; the physical wiring of the network is of course not altered during use.

The basic concept used in this design is to build a 960 x 960 seldom-blocking switching network, then to time-share this network 1024 ways. The number 960 arises from packaging considerations; this gives us a total of 983,040 virtual connections, close enough to one million for our purposes. The 1024 time slices roll by in a regular cycle; a different set of switch settings is used during each slice. There are four sets of 1024 switch settings, corresponding to the four link-wire sub-networks. The bits describing the 4096 settings for each switch are stored in random access memory chips between uses. The NETL elements are likewise implemented in a time-shared fashion: 960 element units are implemented in hardware (four to a chip with shared bus decoders), and each of these element devices is time-shared 1024 ways. Each NETL element exists in hardware only during its assigned time slice; most of the time, it exists only as 32 bits of state in a memory chip.

Let us begin by considering the construction of a 960 x 960 hashnet without time-sharing. The basic unit of construction is the 15-way selector cell shown in Figure 2a. This cell connects its input to any of its 15 outputs, according to the contents of a four-bit state register. A value of 0 indicates that the cell is currently unused and that its input is not connected to any output. A single integrated circuit chip can easily hold 15 of these selector cells; the corresponding outputs from each cell are wired together internally, as shown in Figure 2b. With assorted power and control lines, this 15 x 15 switching element requires a 48-pin package.

To build a 960 x 960 seldom-blocking network out of these elements, we arrange them in four layers with 1920 selector cells (128 chips) in each. (See Figure 3.) The outputs of each layer are wired to the inputs of the next layer with a fixed but randomly chosen pattern of wires. Each of the input terminals of the hashnet is wired to 2 selector cells in the first layer; each of the outputs of the hashnet is wired to 2 output lines from the last layer of cells. Initially all of the selector cells are in the non-busy state. As paths through the network are set up, each one uses up one of the selector cells in each layer. Note, however, that half of the selector cells remain unused even when all 960 connections are in place; this spare capacity ensures that the network will have a low chance of blocking even for the last few connections.
To set up a new connection from a given input to a given output, we first broadcast a marking signal through the network from the input to all of the selector cells and outputs that can be reached. Only non-busy cells play a role in this process. If this signal reaches the desired output, one of the marked paths is traced back toward the source, with the selector cells along the way being set up appropriately. These cells become busy and will not participate in any other connections. Since the inter-layer wiring of the network is random, and since we are using many fewer switches than are needed for a strictly non-blocking network, we cannot guarantee that a desired connection can be found. We can guarantee that the probability of being unable to find a desired connection is very small. In simulation tests of this design, 100 complete sets of connections were attempted -- 10,000 connections in all -- with only 2 failures. As noted earlier, an occasional failure to find a connection is not disastrous in the NETL application; we just try again with a different link.
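The flood-then-trace-back setup procedure can be sketched in software as follows. This is a toy model, not the hardware algorithm beyond what the text states: the sizes, random seed, and data layout are invented, and only the logic of marking reachable non-busy cells forward and claiming one path backward follows the design.

```python
import random

LAYERS, CELLS, FAN = 4, 64, 15   # cells per layer, outputs per selector cell

random.seed(0)
# wiring[layer][cell] = the FAN next-layer cells this cell's outputs reach
wiring = [[random.sample(range(CELLS), FAN) for _ in range(CELLS)]
          for _ in range(LAYERS)]
busy = [[False] * CELLS for _ in range(LAYERS)]

def connect(entry_cells, exit_cells):
    """Try to claim one non-busy cell per layer linking an input (wired to
    2 first-layer cells) to an output (wired to 2 last-layer cells).
    Returns the path as (layer, cell) pairs, or None if blocked."""
    reached = [set() for _ in range(LAYERS)]
    reached[0] = {c for c in entry_cells if not busy[0][c]}
    for layer in range(1, LAYERS):                # forward marking wave
        for c in reached[layer - 1]:
            reached[layer].update(n for n in wiring[layer - 1][c]
                                  if not busy[layer][n])
    hits = reached[LAYERS - 1] & set(exit_cells)
    if not hits:
        return None                               # blocked: retry with another link
    path = [hits.pop()]
    for layer in range(LAYERS - 2, -1, -1):       # trace one marked path backward
        path.append(next(c for c in reached[layer]
                         if path[-1] in wiring[layer][c]))
    path = list(zip(range(LAYERS), reversed(path)))
    for layer, c in path:
        busy[layer][c] = True                     # claimed cells leave the free pool
    return path

print(connect(entry_cells=[0, 1], exit_cells=[5, 6]))
```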
Instead of using four layers of selector cells, it is possible to get the same effect using only one layer. The output wires of this layer are randomly shuffled and fed back to the inputs; these wires also go to the network's output terminals. Four sets of switch settings are used, corresponding to the four layers of the original network. The signal bits are read into this layer of cells and latched. Using the first-layer switch settings, they are sent out over the appropriate output wires and shuffled back to the inputs, where these bits are latched again. Then the second-layer setup is used and the signal bits are shuffled again. After four such shuffles, the bits are in the right place and are read from the network's outputs. We have traded more time for fewer switches and wires; the same number of setup bits are used in either case.

To go from a thousand connections to a million, we time-share this network 1024 ways. This requires two modifications to the network. First, instead of having only 4 bits of state for each selector cell (or 16 bits if the shuffling scheme is in use), we need a different set of bits for each time slice. These bits are read from external memory chips; 4 bits of setup are read in through each selector cell input, followed by the signal bit. Second, since the NETL elements are permanently tied to their assigned time slices, we need some way to move the signal bits from one time slice to another. This operation is carried out by time-slice shifter chips. Each of these devices is essentially a 1024-bit shift register. During one cycle of time slices this shift register is loaded: during each slice, a signal bit is received along with a 10-bit address indicating the slice that the signal bit is to go out on. The address governs where in the shift register the signal bit is loaded. On the next cycle of slices, the bits are shifted out in their new order. An entire layer of 1920 time-shifters is needed (packed 5 to a chip), along with memory chips to hold the 10-bit time-slice addresses. The chance of blocking is minimized if these are placed in the center of the network, between the second and third layers of cells. Some additional chance of blocking is introduced by addition of the shifters to the network, but not too much. In our simulations of an almost-full network with time sharing, we encountered 37 blocked connections in 110,000 attempts.

4. Cost and Performance

As can be seen from the parts list, most of the cost of the NETL machine is in the memory chips. In order to keep the number of chips below 10,000 -- a larger machine would be very hard to build and maintain in a university research environment -- 64K dynamic RAM chips have been used wherever possible in the design. The memory associated with the element chips must be either 2K x 8 static RAMs or fast 16K dynamic RAMs for timing reasons. In fact, the limited bit-transfer rate of the memory chips is the principal factor limiting the speed of the network; if 16K x 4 chips were available, the system could be redesigned to run four times as fast.

As it currently stands, the system has a basic propagation time of about 5 milliseconds. This is the time required to accept 10^6 bits and to steer each of these to its independent destination. This assumes the use of 2K x 8 static RAMs for the element memories and allows for a rather conservative page-mode access time of 200 nanoseconds for the 64K RAM chips. (In page mode, half of the address bits remain constant from one reference to the next.) The 256K-element version, using 16K RAM chips, should have a propagation time of 1.25 milliseconds.

Since the million-element machine contains 4000 64K RAM chips, the parts cost of the machine is tied closely to the price of these chips. Currently, if you can get 64K RAMs at all, they cost over $100, but the price is likely to drop rapidly in the next two years. If we assume a price of $20 for the 64K RAMs and $10 for the 2Kx8 RAMs, we get a total of roughly $100,000 for the memory chips in the machine. It is also hard to know what price to assign to the three custom chips in the design. An initial layout cost of $150,000 for all three would probably be reasonable, but this cost would occur only once, and the chips themselves should be easy to fabricate. We have not yet thought hard about board-level layout, but rough calculations suggest that the machine would fit onto about 52 super-hex boards of two types. Two standard equipment racks ought to be sufficient for the whole machine.

For comparison, it might be worthwhile to calculate the cost of storing the same information in a Planner-style data base in the memory of a serial machine. To make the comparison fair, let us assume that the entire data base is to be kept in main memory, and not paged out onto disk. To store the information in the network itself, in the simplest possible form, would require 80 million bits of memory: 4 million pointers of 20 bits each. This would require 1250 64K RAM chips. A simple table of pointers, of course, would be very slow to use. If we add back-pointers, the figure doubles. If we add even the most limited sort of indexing structure, or store the entries in a hash table or linked list, the amount of memory doubles again. At this point, we have reached 4000 64K RAM chips, the same number used in the NETL machine. The moral of the story would seem to be that the greater power and flexibility of the NETL organization can be had at little, if any, extra cost.
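A quick check of the comparison arithmetic (mine, not from the paper, which rounds the figures) follows; the exact chip counts land in the same neighborhood as the paper's 1250-then-roughly-4000 progression.

```python
import math

CHIP_BITS = 64 * 1024                    # one 64K dynamic RAM chip
pointers = 4_000_000
bits = pointers * 20                     # 80 million bits
base = math.ceil(bits / CHIP_BITS)       # 1221 chips; the paper rounds to 1250
with_back_pointers = 2 * base            # pointer table plus back-pointers
with_indexing = 2 * with_back_pointers   # plus hash-table / linked-list overhead
print(base, with_back_pointers, with_indexing)   # 1221 2442 4884
```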
Acknowledgements

Ed Frank and Hank Walker provided me with much of the technical information required by this design. The network simulations were programmed and run by Leonard Zubkoff.

References

[1] Fahlman, S. E. NETL: A System for Representing and Using Real-World Knowledge. MIT Press, Cambridge, Mass., 1979.
[2] Fahlman, S. E. The Hashnet Interconnection Scheme. Technical Report, Carnegie-Mellon University, Department of Computer Science, 1980.
[3] Fahlman, S. E. Preliminary Design for a Million-Element NETL Machine. Technical Report, Carnegie-Mellon University, Department of Computer Science, 1980 (forthcoming).

Table 1: Parts List

Device Type                              Number
Custom 15 x 15 Selector (48 pin)            128
Custom 5 x 1K Time Shifter (16 pin)         384
Custom 4x NETL Element Chip (48 pin)        240
64K Dynamic RAM (for selectors)            2176
64K Dynamic RAM (for shifters)             1920
2K x 8 Static RAM (for elements)           2160
Total Device Count                         7008

Figure 1: NETL Element
Figure 2A: The Selector Cell
Figure 2B: Selectors on a 15 x 15 Chip (drawn as a 4 x 4 chip)
Figure 3: The Basic Hashnet Arrangement (simplified)
| 1980 | 25 |
18 |
PERCEPTUAL REASONING IN A HOSTILE ENVIRONMENT*

Thomas D. Garvey and Martin A. Fischler
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

The thesis of this paper is that perception requires reasoning mechanisms beyond those typically employed in deductive systems. We briefly present some arguments to support this contention, and then offer a framework for a system capable of perceptual reasoning, using sensor-derived information, to survive in a hostile environment. Some of these ideas have been incorporated in a computer program and tested in a simulated environment; a summary of this work and current results are included.

I INTRODUCTION

Living organisms routinely satisfy critical needs such as recognizing threats, potential mates, food sources, and navigable areas, by extracting relevant information from huge quantities of data assimilated by their senses. How are such "relevant" data detected?

We suggest that a reasoning approach that capitalizes on the goal-oriented nature of perception is necessary to define and recognize relevant data. Perception can be characterized as imposing an interpretation on sensory data, within a context defined by a set of loosely specified models. The ability to select appropriate models and match them to physical situations appears to require capabilities beyond those provided by such "standard" paradigms as logical deduction or probabilistic reasoning.

The need for extended reasoning techniques for perception is due to certain critical aspects of the problem, several of which we summarize here:

* The validity of a perceptual inference (interpretation) is determined solely by the adequacy of the interpretation for successfully carrying out some desired interaction with the environment (as opposed to verification within a "closed" formal axiomatic system).

* Since it is impossible to abstractly model the complete physical environment, the degree to which purely abstract reasoning will be satisfactory is limited. Instead, perception requires tight interaction between modeling/hypothesizing, experimenting (accessing information from the environment), and reasoning/verifying.

* Reasoning processes that embody concepts from physics, geometry, topology, causation, and temporal and spatial ordering are critical components of any attempt to "understand" an ongoing physical situation. Explicit representations appropriate to these concepts are necessary for a perceptual system that must provide this understanding. These representations are incommensurate and it is not reasonable to attempt to force them into a single monolithic model.

* There is typically no single, absolutely correct interpretation for sensory data. What is necessary is a "maximally consistent" interpretation, leading to the concept of perception as an optimization problem [1, 2] rather than a deductive problem.

* This work was supported by the Defense Advanced Research Projects Agency under Contracts MDA903-79-C-0588, F33615-77-C-1250, and F33615-80-C-1110.

Research in perception and image processing at SRI and elsewhere has addressed many of these issues.
An early effort focused upon the goal-directed aspect of perception to develop a program capable of planning and executing special-purpose strategies for locating objects in office scenes [3]. Research addressing interpretation as an optimization problem includes [1, 2, 4]. Current research on an expert system for image interpretation [5] has also considered the strategy-related aspects of determining location in situations involving uncertainty.

The most recent work (at SRI) on perceptual reasoning has addressed the problem of assessing the status of a hostile air-defense environment on the basis of information received from a variety of controllable sensors [6]. This work led us to attempt to formulate a theory of perceptual reasoning that highlighted explicit reasoning processes and that dealt with those aspects of perception just described. In the following section, we will use this work as a vehicle to illustrate a paradigm for perceptual reasoning.

II PERCEPTUAL REASONING IN A SURVIVAL SITUATION

The specific problem addressed was to design a system able to interpret the disposition and operation (i.e., the order of battle or OB) of hostile air-defense units, based on information supplied by sensors carried aboard a penetrating aircraft [6]. The situation may be summarized as follows. A friendly aircraft is faced with the task of penetrating hostile airspace en route to a target behind enemy lines. Along the way, the aircraft will be threatened by a dense network of surface-to-air missiles (SAMs) and antiaircraft artillery (AAAs). The likelihood of safe penetration and return is directly related to the quality of acquired or deduced information about the defense systems.

Partial information is furnished by an initial OB, listing known threats at, say, one hour before the flight. Additional knowledge is available in the form of descriptions of enemy equipment, typical deployments, and standard operating procedures. Since the prior OB will not be completely accurate, the information must be augmented with real-time sensory data. The OB forms the starting point for this augmentation.

The explicit goal of the overall system is to produce and maintain an accurate OB, detecting and identifying each threat prior to entering its lethal envelope. The density of threats means that this goal will result in conflicting subgoals, from which selection must then be made to ensure that critical data will be received. This must be accomplished by integrating data from imperfect sensors with prior knowledge. The paradigm that was developed for this task is summarized below:

(1) Available knowledge is used to create an hypothesized OB that anticipates the developing situation.

(2) A plan that attempts to allocate sensors to detect or verify the presence of threats, in an optimal way, is constructed. Sensors are then allocated and operated.

(3) Information returned from the sensors is interpreted in the context established by the anticipated situation. Interpretations modify the current OB, and the process is iterated.

We will briefly discuss each of these steps.
B. Experimentation (Accessing Information from the Environment)

The goal of this step is to access information needed to detect or verify the presence of threats inferred in the anticipation step, but not available in the "internal" knowledge base of the system. In general, it might be necessary to define and execute one or more experiments to extract this needed information from the environment. In the more limited context of model instantiation by "passive" sensing, the problem reduces to that of allocating sensor resources to maximize the overall utility of the system; sensing is a specific instance of the more general process of experimentation.

First the list of data-acquisition goals is ordered, based on the current state of information about each threat and its lethality. The allocator attempts to assign (a time-sliced segment of) a sensor to satisfy each request based on the expected performance of the sensor for that task. Sensor detection capabilities are modeled by a matrix of conditional probabilities. These represent the likelihood that the sensor will correctly identify each threat type, given that at least one instance thereof is in the sensor's field of view. This matrix represents performance under optimal environmental conditions (for the sensor) and is modified for suboptimal conditions by means of a specialized procedure. This representation is compact and circumvents the need to store complete, explicit models describing sensor operation in all possible situations. Similar models describe each sensor's identification and location capabilities.

The sensor models are used to compute the utility of allocating each sensor to each of the highest priority threats. These utilities form the basis for the final allocation, which is carried out by a straightforward optimization routine. At the same time, the program determines how the sensor should be directed (for example, by pointing or tuning). Appropriate control commands are then sent to the simulated sensors.
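The sketch below illustrates the flavor of this allocation step. The probabilities, priorities, and greedy policy are invented for illustration; the paper says only that a conditional-probability matrix, a condition-degradation procedure, and a "straightforward optimization routine" are used.

```python
# Utility-based sensor allocation: utility is the probability of a correct
# identification, degraded for conditions and weighted by threat priority.

# P(sensor correctly identifies threat type | an instance is in view),
# under optimal environmental conditions (values invented).
detect_prob = {
    ("radar_warning", "SAM"): 0.90, ("radar_warning", "AAA"): 0.40,
    ("ir_sensor",     "SAM"): 0.55, ("ir_sensor",     "AAA"): 0.75,
}
conditions_factor = 0.8               # stand-in for the suboptimal-conditions procedure
priority = {"SAM": 1.0, "AAA": 0.6}   # from lethality and current information state

def allocate(sensors, threats):
    """Greedily assign each sensor to the unserved threat of highest utility."""
    assignment, unserved = {}, set(threats)
    for s in sensors:
        if not unserved:
            break
        best = max(unserved,
                   key=lambda t: detect_prob[s, t] * conditions_factor * priority[t])
        assignment[s] = best
        unserved.discard(best)
    return assignment

print(allocate(["radar_warning", "ir_sensor"], ["SAM", "AAA"]))
# {'radar_warning': 'SAM', 'ir_sensor': 'AAA'}
```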
C. Interpretation (Hypothesis Validation; Model Instantiation)

In this phase, the program attempts to interpret sensor data in the context of threats that were anticipated earlier. It first tries to determine whether sensor data are consistent with specifically anticipated threats, then with general weapon types expected in the area. Since sensor data are inherently ambiguous (particularly if environmental conditions are suboptimal), this step attempts to determine the most likely interpretation.

Inference techniques used for interpretation include production rule procedures, probabilistic computations, and geometric reasoning. Production rules are used to infer probable weapon operation (e.g., target tracking, missile guidance) on the basis of such information as past status, environmental conditions, and distance from the aircraft. Probabilistic updating of identification likelihoods is based on the consistency of actual sensor data with expected data, and on agreement (or disagreement) among sensors with overlapping coverage. Geometric reasoning introduces a concept of global consistency to improve identification by comparing inferred identifications and locations of threat system components with geometric models of typical, known system deployments.

The interpretation phase brings a great deal of a priori knowledge to bear on the problem of determining the most likely threats the sensors are responding to. This results in much better identifications than those produced by the sensors alone. Confident identifications are entered into the OB and the entire process is continued.

D. Performance

An experimental test of the system, using a simulated threat environment, allowed a comparison between two modes of operation -- an "undirected" mode and one based on perceptual reasoning. A scoring technique that measured the effectiveness with which the system detected, identified, and located hostile systems in a timely fashion was used to grade performance. The ability of the perceptual reasoning system to use external knowledge sources effectively, and to integrate information from multiple sensors, produced superior capabilities under this measure. These capabilities showed themselves even more prominently in situations where environmental conditions tended to degrade sensor performance, rendering it critical that attention be focused sharply.

III DISCUSSION

Our approach to perceptual reasoning suggests that the problem of perception actually involves the solution of a variety of distinct types of subproblems, rather than repeated instances of the same general problem. The system we described utilizes a nonmonolithic collection of representations and reasoning techniques, tailored to specific subproblems. These techniques include both logical deduction and probabilistic reasoning approaches, as well as procedures capable of geometric reasoning and subjective inference.

We have discussed several key aspects of the general problem of perceptual reasoning, including the assertion that perception is goal oriented, and inductive and interpretative rather than deductive and descriptive; that because complete modeling of the physical world is not practical, "experimentation" is a critical aspect of perception; and finally, that multiple representations and corresponding reasoning techniques, rather than a single monolithic approach, are required.

The specific system discussed above constitutes an attempt to address the reasoning requirements of perception in a systematic way and, to our knowledge, represents one of the few attempts to do so. While systems that truly interact with the physical world in an intelligent manner will certainly assume a variety of forms, we believe they will all ultimately have to resolve those aspects of the problem that have been described here.
REFERENCES

1. M. A. Fischler and R. A. Elschlager, "The Representation and Matching of Pictorial Structures," IEEE Transactions on Computers, Vol. C-22, No. 1, pp. 67-92 (January 1973)
2. H. G. Barrow and J. M. Tenenbaum, "MSYS: A System for Reasoning About Scenes," Artificial Intelligence Center Technical Note No. 121, SRI International, Menlo Park, California (1976)
3. T. D. Garvey, "Perceptual Strategies for Purposive Vision," Artificial Intelligence Center Technical Note No. 117, SRI International, Menlo Park, California (1976)
4. A. Rosenfeld, "Iterative Methods in Image Analysis," Pattern Recognition, Vol. 10, No. 4, pp. 181-187 (1978)
5. R. C. Bolles et al., "The SRI Road Expert: Image-to-Database Correspondence," Proc. DARPA Image Understanding Workshop, Science Applications, Inc., Report No. SAI-79-814-WA, pp. 163-174 (November 1978)
6. T. D. Garvey and M. A. Fischler, "Machine-Intelligence-Based Multisensor ESM System," Technical Report AFAL-TR-79-1162, Air Force Avionics Laboratory, Wright-Patterson AFB, OH 45433 (Final Report DARPA Project 3383) (October 1979)
| 1980 | 26 |
19 |
OVERVIEW OF AN EXAMPLE GENERATION SYSTEM

Edwina L. Rissland
Elliot M. Soloway
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003

ABSTRACT

This paper addresses the process of generating examples which meet specified criteria; we call this activity CONSTRAINED EXAMPLE GENERATION (CEG). We present the motivation for and architecture of an existing example generator which solves CEG problems in several domains of mathematics and computer science, e.g., the generation of LISP test data, simple recursive programs, and piecewise linear functions.

II THE CEG MODEL

From protocol analyses of experts and novices working CEG problems in mathematics and computer science, we developed the following description of the CEG task [10]. When an example is called for, one can search through one's storehouse of known examples for an example that can be JUDGEd to satisfy the desiderata. If a satisfactory match is found, then the problem has been solved through SEARCH and RETRIEVAL. If a match is not found, one can MODIFY an existing example, judged to be close or having the potential for fulfilling the desiderata. If generation through modification fails, one often switches to another mode in which one CONSTRUCTS an example by instantiating certain models and principles or combining more elementary exemplars.

The CEG model we use thus consists of processes that RETRIEVE, JUDGE, MODIFY and CONSTRUCT examples.

III THE CEG SYSTEM

The CEG system described here is written in LISP and runs on a VAX 11/780. In addition to solving CEG problems concerning data and simple programs in the LISP domain [11], it is being used to solve CEG problems in a number of other domains [12]: the generation of descriptions of scenes for use in conjunction with the VISIONS scene interpretation system [2]; the generation of action sequences in games; and the generation of piecewise linear functions.

The flow of control in the Example Generator is directed by an EXECUTIVE process. In addition, there is: (1) the RETRIEVER, which searches and retrieves examples from a data base of examples; (2) the MODIFIER, which applies modification techniques to an example; (3) the CONS'ER, which instantiates general "model" examples, such as code "templates" ([9], [16]); (4) the JUDGE, which determines how well an example satisfies the problem desiderata; (5) the AGENDA-KEEPER, which maintains an agenda of examples to be modified, based for instance on the degree to which they meet the desiderata or possess "epistemological" attributes ([8], [9]).

The components use a common knowledge base consisting of two parts: a "permanent" knowledge base which has an Examples-space [9] containing known examples, and a temporary knowledge base which contains information gathered in the solution of a specific CEG problem. In the Examples-space, an example is represented by a frame, which contains information describing various attributes of that example, e.g., epistemological class, worth-rating. Examples are linked together by the relation of "constructional derivation," i.e., Example1 ---> Example2 means that Example1 is used in the construction of Example2. The temporary knowledge base contains such information as evaluation data generated by the JUDGE and proposed candidate examples created from known examples by the MODIFIER.
IV AN EXAMPLE OF A CEG PROBLEM

In the context of examples of LISP data elements, an example of a simple CEG problem would be the following:

Give an example of a LISP list of length 3 with the depth of its first atom equal to 2.

(Examples such as this are needed when debugging and teaching.)

Suppose the permanent knowledge base only contains the lists

(A B C), (0 1), (A), ( );

the first two lists are standard "reference" examples, the third a "start-up" example, and the fourth a known "counter example," i.e., an example which often is handled incorrectly by programs. Since the knowledge base does not contain an example which completely satisfies the desiderata (i.e., the RETRIEVAL phase fails), the system enters the MODIFICATION phase. Since the list (A B C) satisfies two of the three constraints, and thus has a higher "constraint-satisfaction-count" than the other examples, it is placed as the top-ranking candidate for MODIFICATION by the AGENDA-KEEPER, and modifications are tried on it first.

The candidate example (A B C) is analyzed and found to be lacking in one respect, namely, the depth of its first atom, A, must be made deeper by 1. The system accomplishes this by adding parentheses around the atom A to create the new example ((A) B C). This list meets all the constraints specified; as it is a solution to the problem, it is entered into the Examples-space as a new example constructionally derived from (A B C). Thus, if the following problem is asked of the system,

Give an example of a list of length 3, the depth of the first atom is 2, and the depth of the last is 3.

the system can use the newly constructed example in its attempt to satisfy the new problem.
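The retrieve-judge-modify cycle on this worked problem can be sketched as follows. The data structures are ours, not the system's LISP internals: LISP lists become nested Python lists, and constraints become predicates whose satisfactions the JUDGE counts.

```python
def depth_of_first_atom(lst):
    """Nesting depth at which the first atom sits: 1 in (A B C), 2 in ((A) B C)."""
    d = 0
    while isinstance(lst, list) and lst:
        lst, d = lst[0], d + 1
    return d

constraints = [
    lambda x: isinstance(x, list),            # a LISP list
    lambda x: len(x) == 3,                    # of length 3
    lambda x: depth_of_first_atom(x) == 2,    # first atom at depth 2
]

def judge(example):
    return sum(c(example) for c in constraints)   # constraint-satisfaction count

knowledge_base = [["A", "B", "C"], [0, 1], ["A"], []]
best = max(knowledge_base, key=judge)             # agenda's top-ranking candidate
assert judge(best) < len(constraints)             # retrieval fails: must modify
modified = [[best[0]]] + best[1:]                 # wrap the first atom in parentheses
print(best, "->", modified, "judge:", judge(modified))
# ['A', 'B', 'C'] -> [['A'], 'B', 'C'] judge: 3
```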
V THE HANDLING OF CONSTRAINTS

The handling of constraints is especially important in the MODIFICATION phase, where an existing example is modified to create a new example that is no longer deficient with respect to the constraints. For example, if there were more than one unsatisfied constraint, one could attempt to rectify a candidate example along each of the unsatisfied dimensions. Consider the following problem:

Give an example of a list of 0's and 1's, which is longer than 2, and which has at least one element deeper than depth 1.

One could modify the list (0 1) to meet the unsatisfied second and third constraints by adding at least one more element, say another 1, to give the list (0 1 1), and then modify this new list by adding parens around one of the elements. Alternatively, one could add parens and then fix the length, or even do both 'at once' by appending an element such as (1) to the original list (0 1).

In this example, there are many ways to modify the original list to meet the constraints, and the order of modification does not much matter. One can 'divide-and-conquer' the work to be done in a GPS-like manner of difference-assessment followed by difference-reduction [5], since the constraints are independent. However, in other cases the order of difference reduction matters greatly. For instance, consider the problem:

Give an example of a list of length 5 with an embedded sublist of length 2.

Suppose one is working with the list (A B C). If one first rectifies the length by adding two more elements, such as 1 and 2, to create the list (A B C 1 2), and then modifies this list to have the needed embedded list by "making-a-group" of length 2 around any of the first four elements, say to arrive at the list (A B C (1 2)), one has also modified the length as a side-effect of the grouping modification, i.e., one has messed up the length constraint. In other circumstances, it is possible to set up totally contradictory constraints where satisfaction of one precludes satisfaction of the other. Thus a purely GPS approach is not sufficient to handle the complexity of constraint interaction.

We are currently investigating the constraint interaction problem, as well as issues concerning maintenance of agendas. For example, we are looking to employ planning-type control mechanisms for dealing with the constraint interaction problem ([13], [3]).
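A small demonstration of the ordering problem just described (our code, with invented helper names) shows how the grouping repair silently undoes an already-satisfied length constraint:

```python
def lengthen(lst, target):
    """Pad a list toward the target length with filler elements."""
    return lst + [1, 2][: max(0, target - len(lst))]

def group_last_two(lst):
    """'Make-a-group' of length 2 from the last two top-level elements."""
    return lst[:-2] + [lst[-2:]]

start = ["A", "B", "C"]
step1 = lengthen(start, 5)        # ['A', 'B', 'C', 1, 2]    length satisfied
step2 = group_last_two(step1)     # ['A', 'B', 'C', [1, 2]]  sublist satisfied,
print(len(step2))                 # 4, not 5: the grouping broke the length

redo = lengthen(step2, 5)         # re-applying the length repair after grouping
print(redo, len(redo))            # ['A', 'B', 'C', [1, 2], 1] 5: both satisfied
```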
Hayes-Roth    (1978)    Cognitive Process in Planning, Rand Report    R-2366-ONR, The Rand Corporation, CA.    Lakatos, I.    ( 1963) Proofs and Refutations,    -    -    British Journal for the Philosophy of Science,    Vol.    19, May 1963.    Also    published    by    Cambridge University Press, London, 1976.    Newell, A., Shaw, J., and Simon, H.    (1959)    Report on a General Problem-Solving Program.    Proc.    of the International Conference on    Information Processing. UNESCO House, Paris.    163    Polya, G .    (1973)    now To Solve St, Second    Edition, Princeton UXJ. Press,    N.J,    E7l    Polya, G.    (1968) Mathematics and Plausible    Reasoning, Volunes I and II, S&?&d Edition,    Princeton Univ. Press, N.J.    C83    Rissland (Michener) , E. (1978a) Understanding    Understanding Mathematics, Cognitive Science,    Vol. 2, No.    4.    [91 Rissland (Michener), E. (1978b) The Structure    of Mathematical Knowledge, Technical Report No.    472,    M.1.T    Artificial    Intelligence Lab,    Cambridge.    Cl01 Rissland, E.    (1979) Protocols of    Ex mple    Generation 1 internal report, M.I.T., Cambr idge.    [ill Rissland, E. and E.    Soloway (1980) Generating    Examples in LISP:    Data and Programs, COINS    Technical Report 80-07,    Univ.    of    Mass,    (submitted for publication).    Cl21 Rissland, E., Soloway, E. ,    O’Connor,    S.,    Waisbrot, S., Wall, R. , Wesley, L., and T.    Weymouth (1980) Examples of Exaple    Generation    using the CEG Architecture. COINS Technical    Report, in preparation.    cl31 Sacerdoti, E. (1975) The Nonlinear Nature of    Plans,    Proc.    4th.    Int.    Joint    Conf.    Artificial Intelligence, Tbilisi, USSR.    Cl41 Soloway.    E.    ( 1980)    The    Development    and    Ev aluafiion of Instructional Strategies for an    Intelligent    Computer-Assisted    Instruction    System, COINS Technical Report 80-04, Univ.    of    Mass., Amherst.    El51 Soloway, E., and E.    Rissland (1980)    The    Representation ar,d Organization of a Knowledge    Base About LISP Programming for an ICAI System,    COINS Technical Report 80-08, Univ of Mass., in    preparation.    [161 Soloway,    E., and Woolf, B.    ( 1980) Problems,    Plans and Programs, Proc. of the ACM Eleventh    SIGCSE Technical Symposium, Kansas City.    258     
| 1980 | 27 |
20 |
STRUCTURE COMPARISON AND SEMANTIC INTERPRETATION OF DIFFERENCES*

Wellington Yu Chiu
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, California 90291

ABSTRACT

Frequently situations are encountered where the ability to differentiate between objects is necessary. The typical situation is one in which one is in a current state and wishes to achieve a goal state. Abstractly, the problem we shall address is that of comparing two data structures and determining all differences between the two structures. Current comparison techniques work well in determining all syntactic differences but fall short of addressing the semantic issues. We address this gap by applying AI techniques that yield semantic interpretations of differences.

* This work was supported by National Science Foundation Grant MCS 7683880. The views expressed are those of the author.

I INTRODUCTION

One frequently encounters situations where the ability to differentiate between objects is necessary. The typical situation is one in which one is in a current state and wishes to achieve a goal state. Such situations are encountered under several guises in our transformation based programming system research [1, 2, 3]. A simple case is one in which we are given two program states and need to discover the changes from one to the other. Another case is one in which a transformation we wish to apply to effect a desired change does not match in the current state and one wishes to identify the differences. An extension of this second case is the situation where a sequence of transformations, called a development, is to be applied to (replayed on) a slightly different problem.

Abstractly, the problem we shall address is that of comparing two data structures and determining all differences between the two structures. Current comparison techniques work well in determining all syntactic differences but fall short of addressing the semantic issues. In the replay and state differencing situations, comparisons must be more semantically oriented. We address this gap by applying AI techniques that yield semantic interpretations of differences. This paper describes part of my thesis: the design of a semantic based differencer and its ongoing implementation.

II AN EXAMPLE

The following example is presented to show the types of information used to infer and deduce the semantic differences. Below are the before and after program states from a transformational development [1].
BEFORE:

    while there exists character in text do            1
        if character is linefeed                       2
        then replace character in text by space;       3
    while there exists character in text do            4
        if P(character)                                5
        then remove character from text;               6

AFTER:

    while there exists character in text do            a
    begin                                              b
        if character is linefeed                       c
        then replace character in text by space;       d
        if character in text                           e
        then if P(character)                           f
             then remove character from text           g
    end;                                               h

The after state above was produced from the before state via application of a transformation, but the following explanations of differences were generated without knowledge of the transformation.

- The current syntactic differencing techniques [4, 5, 6, 7, 8, 9, 10] typically explain differences in the following terms:

  For BEFORE: Delete while in line 4. Delete there in line 4. ...
  For AFTER: Insert if in line f. Insert then in line f. ...

- A higher level syntactic explanation is achieved by generalizing and combining the techniques for syntactic differencing to explain differences in terms of embeds, extracts, swaps and associations, in addition to inserts and deletes, and by incorporating syntactic information about the structures being compared [3]: The second loop is coerced into a conditional. The conditional is embedded into the first loop.

- The proposed explanation of the semantic difference is: Loops merged.

The following is the derivation of the syntactic explanation. It is presented to show the mechanisms upon which the semantic differencer will be based.

- Syntactically, 2,3 (first loop body of the before state) is equivalent to c,d (part of the loop body of the after state) and 5,6 is equivalent to f,g.

- Infer composite structure 1,2,3 similar to composite structure a,b,c,d,e,f,g,h based on 2,3 being equivalent to c,d.

- Infer an embed of composite structure 4,5,6 into the composite structure 1,2,3 to produce a,b,c,d,e,f,g. The support for this inference comes from 5,6 being equivalent to f,g, the adjacency of 1,2,3 to 4,5,6, and the adjacency of c,d to f,g.

- Infer coercion of loop 4,5,6 to conditional e,f,g based on 5,6 being equivalent to f,g, and the similarity of the loop generator to the conditional predicate.

- Conclude: second loop embedded in the first loop.

Our current syntactic differencer produces this type of difference explanation.
It extends the techniques currently available by imposing structure on the text strings being compared, thereby making use of structural constraints and the extra structural dimensions in its analysis [3]. Despite this advance, the explanations fall short of the desired semantics of the changes. In the SCENARIO section below, an explanation from the proposed semantic based differencer is presented.

The explanation generated by our syntactic differencer is not plausible, because it doesn't make sense to transform a loop into a conditional only to embed this new conditional into a similar loop. The following is the desired explanation: The body of the second loop is embedded in a conditional that tests for loop generator consistency. This is done without changing the functionality of the body. The two adjacent loops can now be merged, subject to any side effects, caused by the first loop body, that will not be caught by the loop generator consistency check around the second loop body.

The domain we are dealing with is one where changes are made for the following reasons:

1. To optimize code.
2. To prepare code for optimization.

Within such a domain, we can use the constraints on the semantics of changes to derive the semantic explanation "loops merged" and at the same time rule out the syntactic explanation.

Building on the mechanisms that generated the derivation above, the following is a proposed derivation of the comparison that yields the semantic interpretation "loops merged".

- Syntactically, 2,3 (the first loop body) is equivalent to c,d and 5,6 (the second loop body) is equivalent to f,g. The context trees (i.e. super trees) containing 2,3 and 5,6 and the context tree for c,d,f,g are the same. Infer FACTORING of context trees, with supports being the 2 to 1 mapping of substructures and the equivalence of context trees.
- Infer loop merge from the fact that the context trees for 2,3, 5,6 and c,d,f,g are loop generators.
- Infer the similarity of 5,6 to e,f,g from the syntactic equivalence of 5,6 to f,g; 5,6 is embedded in a test for generator consistency, inferred from semantic knowledge of loop merging.
- Conclusion: Loops merged.

III DESIGN OF THE SEMANTIC BASED DIFFERENCER

We start by defining relations (profiles) on objects, where objects are the substructures of the structures we are comparing (see Appendix A). The information provided by this profile consists of:

- Sequence of nonterminals from the left hand side of productions used in generating the context tree of the substructure. A context tree (i.e. super tree) is that part of the parse tree with the substructure part deleted.
- The sequence of productions used in the derivation of the substructure, given the context tree above as the starting point.
- Positional information.
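To make the context tree notion concrete, here is a minimal sketch, assuming parse trees represented as nested Python lists and a substructure named by a path of child indices; both representations are illustrative assumptions, not the differencer's actual ones.

    # Compute the context (super) tree: the parse tree with the substructure
    # at `path` deleted and replaced by a hole marker.
    def context_tree(tree, path, hole="<HOLE>"):
        if not path:
            return hole                      # delete the substructure itself
        head, rest = path[0], path[1:]
        return [context_tree(child, rest, hole) if k == head else child
                for k, child in enumerate(tree)]

    # Deleting the second loop from a two-loop program skeleton:
    program = ["seq", ["loop1", "body1"], ["loop2", "body2"]]
    print(context_tree(program, (2,)))
    # -> ['seq', ['loop1', 'body1'], '<HOLE>']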
Our syntactic differencer makes use of the information provided by the object profile to determine base matches. The techniques used in the differencer to determine base matches are combinations of the common unique, common residual and common super tree techniques described in [3, 4, 6, 7, 10]. Below is a brief description of the common unique and common residual techniques for linear structures.

- Longest Common Subsequence (LCS): The main concern with LCS is that of finding the minimal edit distance between two linear structures. The only edit operations allowed are: delete a substructure, or insert a substructure [6, 8]. For the above example, the LCS is 1,2,3,5,6 matching a,c,d,f,g.
- Common Unique (CU): The key to this technique is the use of substructures that are common to both structures and unique within each as anchors [4]. For the above example, the common unique substructures are: linefeed, replace, space, P, remove.

We then build on these base matches by inferring matches of substructures that contain substructures that are matches. An example of this is inferring that 1,2,3 is similar to a,b,c,d,e,f,g,h from the assertion that 2,3 matches c,d. There are two types of inferred matches: those without syntactic boundary conflicts and those with conflicts. Syntactic boundary conflicts result from embedding, extracting or associating substructures.

The third type of profile is one that describes the relationship between substructures within a given structure. Considerations here are: adjacency, containment, and relative positioning.

There are several semantic rules that describe a majority of structure changes. Some are: factoring, distributing, commuting, associating, extracting, embedding, folding, and unfolding. A partial description of the factor semantics can be found in Appendix A. Factors currently considered by our semantic rules are: support for matches, the generating grammar, object profiles, and relations between substructures of the same structure. With each set of examples tried, we add to our set of semantic rules, and our intuitive guess, given our domain of changes due to optimization or preparation for optimization, is that this set will be fairly small when compared to the set of transformations needed in a transformation based programming system.

IV A SCENARIO

Our syntactic differencer makes use of structural information. For LISP programs it knows about S-expressions. For programs written in our specification language, differencing is performed on the parse trees. The differencer first tries to isolate differences into the smallest composite substructure containing all changes. With this reduced scope, the differencer uses the common unique technique to assert relations on content base matches. In our example, substructure 2,3 is equivalent to c,d and substructure 5,6 is equivalent to f,g.
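A minimal sketch of the common unique anchoring step just used, assuming the structures are flattened to token sequences (the real differencer works over parse trees):

    from collections import Counter

    # Tokens common to both structures and unique within each become anchors.
    def common_unique(before, after):
        cb, ca = Counter(before), Counter(after)
        anchors = [t for t in before if cb[t] == 1 and ca.get(t, 0) == 1]
        return [(before.index(t), after.index(t), t) for t in anchors]

    before = "while linefeed replace space while P remove".split()
    after = "while linefeed replace space if P remove".split()
    print(common_unique(before, after))
    # anchors: linefeed, replace, space, P, remove, as in the example above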
Once all of these possible assertions have been made, we use them as anchors to assert relations based on positional constraints and content matches (residual structure matches). This residual technique works well as a relaxation of the uniqueness condition in the common unique requirement, and acts as a backup in case no substructures are common to both structures and unique within each. The super tree technique is used as a default when neither of the above techniques applies. The intuitive explanation for this third technique has to do with both objects occupying the same position with respect to a common super tree. With the super tree technique, content equivalence is relaxed.

With the two asserted relations regarding substructures 2,3 being equivalent to c,d and 5,6 being equivalent to f,g, we now infer that 4,5,6 is similar to e,f,g, because 5,6 is common unique with f,g and, without conflicting evidence (i.e. boundary violations), the assertion is made. Once made, further analysis of this reduced scope shows relationships between the loop generator 4 and the conditional predicate e.

Since we are given, via the super tree technique, that 1,2,3,4,5,6 matches a,b,c,d,e,f,g,h, we assert the inference 2,3 to c,d even though conflicts due to boundary violations arise. The boundary violation in this instance is the mapping of two substructures into one (i.e. the segmental or n-m problem). Given that we want to produce all plausible explanations, we assert that 1,2,3 is similar to a,b,c,d,e,f,g,h because 2,3 and c,d are common unique matches. With this assertion, we could with our Embed Semantics say that the second loop is embedded in the first. But our knowledge about optimizations makes more plausible the Factor Semantics that is triggered by the segmental matching.

When the Factor Semantics is triggered, the relationships within a given structure, such as adjacency and relative positioning, are asserted (see Appendix A). All the requirements except for body2 being equivalent to body4 are met. But once the cases are considered, we discover that the operator being factored is indeed a loop generator and that we can relax the requirement that body2 be equivalent to body4 to that of similarity of the two bodies. This follows from the support for the relationship between body2 and body4. Final analysis reveals that body2 is embedded in body4.
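The triggering-with-relaxation step just described can be sketched as follows; the requirement names mirror the Factor Semantics template in Appendix A, and the boolean encoding is an assumption.

    # Check a semantic rule's requirements, allowing named relaxations.
    def check_factor(reqs, relaxations):
        failed = [name for name, ok in reqs.items() if not ok]
        if not failed:
            return "triggered"
        if all(r in relaxations for r in failed):
            return "triggered with relaxations: " + ", ".join(failed)
        return "not triggered"

    reqs = {"op1=op2": True, "op1=op3": True, "body1=body3": True,
            "body2=body4": False,             # only similarity holds here
            "adjacency": True, "relative positioning": True}
    print(check_factor(reqs, {"body2=body4"}))
    # -> triggered with relaxations: body2=body4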
V CONCLUSION

Given the small example above, we see that the derivation of the semantic difference involves syntactic and semantic knowledge of the structure domain, as well as techniques for managing and applying the knowledge. We present a design that addresses the issue of managing and applying both syntactic and semantic knowledge of the structures being compared so as to provide a semantic interpretation of changes. This allows us to bridge the gap that exists between the information provided by current differencers and the information needed by current differencing tasks.

VI APPENDIX A: TEMPLATES FROM THE SEMANTIC DIFFERENCER

Every substructure of the structures being compared has associated with it an Object Profile, a record with the following role names as fields:

    Content:            Value of the substructure.
    Type Context:       Sequence of nonterminals of productions of the
                        grammar used in the derivation of the substructure.
    Positional Context: Position in the parse tree, i.e. a sequence of
                        directions for reaching the substructure from the
                        root of the parse tree.
    Abstraction:        If a grammar is used to generate the substructure,
                        the sequence of productions used to generate the
                        substructure itself.

A Relation Profile describes the relationships between two substructures, one from each of the structures being compared. The role names of this record are:

    Base Matches:       Content (i.e. common unique); Context (i.e.
                        positional, determined from context trees);
                        Positional constraint and Content (i.e. from the
                        largest common residual substructure technique,
                        where uniqueness of context matches is relaxed).
    Inferred Matches:   Conflict free (i.e. no syntactic boundary
                        violations); With conflicts (inferences depend on
                        heuristics regarding current substructure
                        abstraction and weights associated with
                        substructure matches).
    Mappings:           1-1, 2-1 and n-m substructure matches.

A second relation profile is one between substructures within a given structure. Some of the considerations here are: adjacency of two substructures, containment, and relative positioning.

There are several semantic rules that describe a majority of structure changes. Some are factoring, distributing, commuting, associating, extracting, embedding, folding and unfolding.
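Rendered as records, the two templates above might look like this minimal sketch; the field names follow the prose, while the concrete Python types are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class ObjectProfile:
        content: object            # value of the substructure
        type_context: tuple        # nonterminals deriving the context tree
        positional_context: tuple  # directions from the parse-tree root
        abstraction: tuple         # productions deriving the substructure

    @dataclass
    class RelationProfile:
        base_matches: dict = field(default_factory=dict)      # content / context / residual
        inferred_matches: dict = field(default_factory=dict)  # conflict free / with conflicts
        mappings: list = field(default_factory=list)          # 1-1, 2-1, n-m matches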
Below is a partial description of the Factor Semantics used in generating the semantic interpretation above:

    FACTOR SEMANTICS:
        FORM:
            LHS: op1 body1 op2 body2
            RHS: op3 body3 body4
        KEY:
            Segmental matching
        REQUIREMENTS:
            op1 = op2
            op1 = op3
            body1 = body3
            body2 = body4
            relative positioning of body1 to body2 holds for body3 to body4
            adjacency of body1 to body2 holds for body3 to body4
        CASES:
            op1 is a loop generator:
                relax body2 = body4 to body2 similar to body4
        RELAXATIONS:
            adjacency requirement
            relative positioning
            equivalence (=) to similar

VII REFERENCES

1. Balzer, R., Goldman, N., and Wile, D., "On the Transformational Implementation Approach to Programming," in Second International Conference on Software Engineering, pp. 337-344, IEEE, October 1976.
2. Balzer, R., Transformational Implementation: An Example, Information Sciences Institute, Research Report 79-79, September 1979.
3. Chiu, W., Structure Comparison, 1979. Presented at the Second International Transformational Implementation Workshop, Boston, September 1979.
4. Heckel, P., "A Technique for Isolating Differences Between Files," Communications of the ACM 21, (4), April 1978, 264-268.
5. Hirschberg, D., The Longest Common Subsequence Problem, Ph.D. thesis, Princeton University, August 1975.
6. Hirschberg, D., "Algorithms for the Longest Common Subsequence Problem," Journal of the ACM 24, (4), October 1977, 664-675.
7. Hunt, J., An Algorithm for Differential File Comparison, Bell Laboratories, Computer Science Technical Report 41, 1976.
8. Hunt, J., "A Fast Algorithm for Computing Longest Common Subsequences," Communications of the ACM 20, (5), May 1977, 350-353.
9. Tai, K., Syntactic Error Correction in Programming Languages, Ph.D. thesis, Cornell University, January 1977.
10. Tai, K., "The Tree-to-Tree Correction Problem," Journal of the ACM 26, (3), July 1979, 422-433.
Performing Inferences over Recursive Data Bases

Shamim A. Naqvi and Lawrence J. Henschen
Dept. of Electrical Engineering and Computer Science
Northwestern University
Evanston, Illinois 60201

Abstract

The research reported in this paper presents a solution to an open problem which arises in systems that use recursive production rules to represent knowledge. The problem can be stated as follows: "Given a recursive definition, how can we derive an equivalent non-recursive program with well-defined termination conditions". Our solution uses connection graphs to first detect occurrences of recursive definitions and then synthesizes a non-recursive program from such a definition.

I. Introduction

In recent years, attention has focused on adding inferential capability to Codd's relational model of data (Codd 1970). This usually takes the form of defining new relations in terms of existing relations in the data base. The defined relations constituting the Intensional Data Base describe general rules about the data, whereas explicit facts stored in the data base as base relations comprise the Extensional Data Base. This paper is concerned with the problem of finding a finite inference mechanism for a defined relation.

Reiter (1977) suggests that for non-recursive data bases the essentially logical operations involved in unifying and resolving intensional literals can be taken care of, i.e. "compiled", before receiving queries, leaving only those operations specifically related to information retrieval from the extensional data base. We propose to extend this idea to the general case by analyzing what resolutions are possible that can lead to answers for a particular kind of query. In the case of recursive axioms this involves finding a pattern of data base retrievals instead of just a single data retrieval as in Reiter (1977).

II. Problem Representation

We shall view a data base as the ground unit clauses of a first order theory without function signs. The words literal and relation will be used interchangeably and all variables are assumed to be universally quantified.

We propose to solve the problem of recursive definitions by using connection graphs like those of Sickel (1976), in which nodes represent intensional axioms and edges connect unifiable literals of opposite signs. A loop is a Potential Recursive Loop (PRL) if the substitutions around the loop are such that the two literals at both ends of the loop are not the same literal and are not in the same instance of the clause partition. Figure 1 shows an example PRL in which E and F are base relations; letters at the end of the alphabet denote variables, whereas letters at the start of the alphabet denote constants.

In this case, starting from A(a,x,z,p) and resolving around the loop (separating variables as we go) we eventually come back to clause 1, yielding an ultimate resolvent ¬E1(x,y) A(a,y,z,b) B(y,y') ¬E1(y,y'), in which the literal at the end of the cycle, A(a,y,z,b), is a different literal than the one we started the loop with. Two features of this loop traversal are noteworthy. First, the literal E causes data base accesses which provide possibly new values for y.
Second, these values of y instantiate x for the next traversal around the loop and also cause data base accesses for F, which provides answers to the query.

[Figure 1. Example PRL]

III. Derivation of the Iterative Program

Since non-atomic queries can be decomposed into equivalent atomic queries, we shall only consider atomic queries in this paper. Before describing our method of deriving an iterative program for a recursive definition, we notice that two kinds of edges exist in a PRL. A cycle edge contributes to the PRL, whereas an exit edge provides an exit from the PRL. Extensional literals reached by traversing exit edges are called Exit Extensional Literals and those reached by traversing cycle edges are called Cycle Extensional Literals. For example, in Figure 1, edges 2, 3 and 4 are cycle edges and edges 5 and 6 are exit edges; ¬E1(x,y) is a Cycle Extensional Literal whereas ¬E3(u,v) and ¬E4(q,r) are Exit Extensional Literals. We make the following observations about a PRL.

Observation 1: A PRL must have an exit edge, which corresponds to the presence of a basis case for a recursive definition, in order for its clauses to contribute an answer to a query. In Figure 1 the basis case is A(a,q,r,b) ¬F(q,r). Notice that a literal having an exit edge has a non-exit edge which contributes to the cycle.

Observation 2: In Horn data bases, if a PRL exists for a literal Q, then a literal must exist which provides the closing edge for the PRL.

We represent the defined relations as a connection graph and, in a preprocessing step, identify all PRLs. A set of answer expressions corresponding to a PRL is derived as follows. We note that the exit edges of Observation 1 above must be connected to cycle literals. Starting from the intensional axiom from which we expect to get a query, we first delete the literal which would resolve against the query. We then resolve around the cycle until we come to an exit edge. At this point the exit literal represents an expression which can be considered as a call to a procedure. This procedure provides one way of obtaining some of the answers. Having derived the expression for the first exit, we proceed to successive exits in the same manner. These expressions are called answer expressions. In Figure 1 the answer expressions are ¬E1(x,y) OR ¬E3(y,z) and ¬E1(x,y) OR ¬E4(y,z).

A loop residue is obtained by resolving around the loop, starting from the intensional axiom from which we expect to get a query, and traversing only the cycle edges of the PRL. The ultimate resolvent is of the form

    E := ¬(E1(arg1,...) & ... & Ei(argi,...))

where the Ei (i ≥ 0) are base or defined relations. This expression is called the loop residue. In Figure 1 the loop residue is ¬E1(x,y).
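A PRL can be located by cycle search over the connection graph. The sketch below, a minimal assumption-laden rendering, finds cycles in an adjacency map and deliberately omits the substitution test that distinguishes a PRL from an ordinary loop.

    # Connection graph assumed as an adjacency map from clause ids to the
    # clauses their literals unify with; cycles are candidate PRLs.
    def find_cycles(graph):
        cycles, path, seen = [], [], set()
        def visit(node):
            if node in path:
                cycles.append(path[path.index(node):] + [node])
                return
            if node in seen:
                return
            seen.add(node)
            path.append(node)
            for succ in graph.get(node, ()):
                visit(succ)
            path.pop()
        for n in graph:
            visit(n)
        return cycles

    # Clause 1 resolves against clause 2 and clause 2 back against clause 1:
    print(find_cycles({1: [2], 2: [1], 3: [1]}))   # -> [[1, 2, 1]]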
In order to derive a program from a PRL we use an algorithm given in Naqvi (1980). In this section we shall illustrate the working of this algorithm by considering two similar definitions of the ancestor relation. Consider the first definition given below; the corresponding connection graph is shown in Figure 2.

    7. ¬ANCESTOR(x,y) ¬FATHER(y,z) ANCESTOR(x,z)
    8. ¬MOTHER(u,v) ANCESTOR(u,v)

It is straightforward to show that with the query ¬ANCESTOR(w,a) we can generate the resolvent

    9. ¬ANCESTOR(w,y') ¬FATHER(y',y'') ... ¬FATHER(y,a)

which corresponds to a left recursive definition of the ancestor relation. In this case the basis statement is used in the end to get the expression ¬MOTHER(w,y') ¬FATHER(y',y'') ... ¬FATHER(y,a). The data retrieval pattern is to find successive fathers of 'a' and then find a mother. In terms of the connection graph this corresponds to traversing the loop a certain number of times and then taking the exit edge. Examining the PRL we find that z, which is the variable that is expected to be the driver, is replaced in the loop by y. Moreover, z determines y through the extensional evaluation of FATHER(y,z). This determination occurs within the loop without recourse to the basis statement.

Our algorithm does the above kind of analysis and uses the answer expression derived from the loop and the substitutions from the closing edge of the loop to derive a program for the PRL. For this example it derives the following program fragment:

    z := a
    ENQUE(q, z)              /* q is a queue */
    while (q != empty) do
        z := DEQUE(q)
        x := ¬(MOTHER(x,y) & FATHER(y,z))
        ENQUE(q, y)
    od

Now, consider the second definition of the ancestor relation given below; the corresponding connection graph is shown in Figure 3.

    11. ¬FATHER(x,y) ¬ANCESTOR(y,z) ANCESTOR(x,z)
    12. ¬MOTHER(u,v) ANCESTOR(u,v)

Once again, the query ¬ANCESTOR(w,a) and 11 can be shown to generate the resolvent

    13. ¬FATHER(x,y) ¬FATHER(y,y') ... ¬ANCESTOR(y'',a)

In this case, our first answers come from resolving (12) and (13), which corresponds in Figure 3 to taking the exit edge to the basis case. Subsequent answers are derived by finding the successive fathers, which corresponds to going around the loop a certain number of times. Examining the PRL we find that the values of y derive the next set of answers, x. The expected driver variable z does not participate in this process. Our algorithm uses the resolvent of the basis case and the query to start the loop. The substitutions at the closing edge of the PRL identify the correct variables which drive the loop and serve as place holders for answers. The loop residue derives all the subsequent answers. The program is as shown below.

    x := w
    ¬MOTHER(w,a)
    ENQUE(q, w)
    while (q != empty) do
        y := DEQUE(q)
        ¬FATHER(x,y)
        ENQUE(q, x)
    od

[Figure 2. First Definition of Ancestor Relation]

[Figure 3. Second Definition of Ancestor Relation]

To review then, our algorithm analyzes the PRL of a recursive definition to determine the loop residue, the answer expressions, the resolvent of the query and the basis case, and whether the definition is left or right recursive. It then derives a program whose structure corresponds to one of the two program structures outlined above.
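For concreteness, a runnable Python rendering of the first derived fragment might look as follows. FATHER and MOTHER are toy extensional relations, the names are illustrative, and the seen set makes explicit the "unique values only" queue discipline discussed next.

    from collections import deque

    FATHER = {("b", "a"), ("c", "b")}   # FATHER(y, z): y is the father of z
    MOTHER = {("m", "c")}               # MOTHER(x, y): x is the mother of y

    def ancestors_of(a):
        answers = {x for (x, v) in MOTHER if v == a}   # basis case MOTHER(x, a)
        q, seen = deque([a]), {a}
        while q:
            z = q.popleft()
            for (y, z2) in FATHER:
                if z2 != z:
                    continue
                # answer expression: MOTHER(x, y) & FATHER(y, z)
                answers.update(x for (x, y2) in MOTHER if y2 == y)
                if y not in seen:           # enqueue unique values only
                    seen.add(y)
                    q.append(y)
        return answers

    print(ancestors_of("a"))                # -> {'m'}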
It now remains to discuss the termination conditions of the derived programs. Our termination conditions are designed for data bases without function signs. Briefly, we use a queue to store all unique values of the variables, indicated by the loop analysis, during each iteration. Each new iteration dequeues a previously enqueued value to generate some possibly new answers. Since the domain of discourse is finite in the absence of function signs, the number of unique answers in the data base is finite. Thus the program queue will ultimately become empty. It should be noted that our technique for the detection of, and generation of programs for, recursive definitions works in the presence of function signs. However, the termination condition does not guarantee finite computations in this case.

V. Summary and Conclusions

We have outlined an algorithm which derives iterative programs for recursively defined relations. The case where a defined relation is mutually recursive with some other definition (e.g. X & R -> R and R & Y -> X) leads to the derivation of mutually recursive programs. Transitive recursive axioms (e.g. ancestor of an ancestor is an ancestor) lead to the derivation of recursive programs. These situations require a fairly complicated control mechanism for execution time invocation of the derived programs. This is discussed in detail in Naqvi (1980) and the algorithm for deriving the programs is also given there. We can show the finiteness and completeness of our method (Naqvi 1980). Although we have considered a first order theory without function signs, the method is applicable to data bases containing function signs. The termination condition, however, may not be rigorous in this case. This is an obvious area for further research.

References

Chang, C. L. (1979) "On Evaluation of Queries Containing Derived Relations in a Relational Data Base", Workshop on Formal Bases for Data Bases, Toulouse, France.

Codd, E. F. (1970) "A Relational Model of Data for Large Shared Data Banks", CACM 13, 6, 377-387.

Naqvi, S. (1980) "Deductive Question-Answering in Recursive Rule-Based Systems", Ph.D. Diss., Northwestern University (in preparation).

Reiter, R. (1977) "An Approach to Deductive Question Answering", BBN Tech. Report No. 3649.

Sickel, S. (1976) "A Search Technique for Clause Interconnectivity Graphs", IEEE Trans. on Computers, Vol. C-25, No. 8.
Automatic Goal-Directed Program Transformation

Stephen Fickas
USC/Information Sciences Institute*
Marina del Rey, CA 90291

1. INTRODUCTION

This paper focuses on a major problem faced by the user of a semi-automatic, transformation-based program-development system: management of low level details. I will argue that it is feasible to take some of this burden off of the user by automating portions of the development sequence. A prototype system is introduced which employs knowledge of the transformation domain in achieving a given program goal state. It is assumed that such a system will run in a real environment containing a large library of both generalized low level and specialized high level transformations.

2. THE TI APPROACH TO PROGRAM DEVELOPMENT

The research discussed here is part of a larger context of program development through Transformational Implementation (or TI) [1, 2]. Briefly, the TI approach to programming involves refining and optimizing a program specification written in a high level specification language (currently, the GIST program specification language [8] is being used for this purpose) to a particular base language (e.g. LISP). Refinement and optimization are carried out by applying transformations to program fragments. This process is semi-automatic in that a programmer must both choose the transformation to apply and the context in which to apply it; the TI system ensures that the left hand side (LHS) of the transformation is applicable and actually applies the transformation. The TI system provides a facility for reverting to some previous point in the development sequence, from which point the programmer can explore various alternative lines of reasoning.

3. CONCEPTUAL TRANSFORMATIONS AND JITTERING TRANSFORMATIONS

In using the TI system, programmers generally employ only a small number of high level "conceptual" transformations, ones that produce a large refinement or optimization. Examples are changing a control structure from iterative to recursive, merging a number of loops into one, maintaining a set incrementally, or making non-determinism explicit. Typically these transformations have complex effects on the program; they may even have to interact with the user. Although only a relatively small number of conceptual transformations are employed in a typical TI development, the final sequence is generally quite lengthy.

* This research was supported by Defense Advanced Research Projects Agency contract DAHC15 72 C 0308. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any other person or agency connected with them.
Because    the    applicability    conditions    of    a conceptual    transformation    may    be    9;    This research    was supported    by Defense Advaked    Research Projects    Agency    contract DAHCIS    72 C 0308    Views    and conclusions contained in this document are    fhooe of the authors    and should not be interpreted    as repreeenting    the official    opinion or policy of DARPA, the U.S. Government, or any other person    or agency    connected wilh    them.    quite    specialized,    usually    with    a number    of properties    to prove,    much    of    the    development    sequence    is made    up of lower    level    transformations    which    massage    the    program    into    states    where    the set of conceptual    transformations    can be applied.    I call these    preparatory    steps    “jittering”    steps.    Examples    of quite    low level    jittering    steps    include    commuting    two    adjacent    statements    or    unfolding    nested    blocks.    More    difficult    jittering    steps    include    moving    a statement    within    a block    from    the kth position    to the    first    position,    merging    two    statements    (eg. conditionals,loops)    into one, or making two loop generators    equivalent.    4. AN AUTOMATIC    JITTERING    SYSTEM    Requiring    the    programmer    to carry    out    the    Jittering    process    detracls    from    his performance    in several    ways:    it consumes    a    large    portion    of his time    and effort;    it disrupts    his high    level    planning    by forcing    him to attend    to a myriad    of details.    There    is    then    strong    motivation    for    automating    some    or    all    of    the    Jittering    process.    The following    sections    will    discuss    the    types    of    mechanisms    used    to    actually    implement    such    a    system    thenceforth    known    as the Jitterer).    4.1. BLACK    BOX DESCRIPTION    The Jitterer    is initially    invoked    whenever    the TI system    is unable    to match    the LHS of a transformation    Tk selected    by the    user.    The    Jitterer’s    inputs    are    1) the current    program    state    C, 2) a    goal state    G corresponding    to the mismatched    LHS of Tk, and 3)    a library    of    transformations    L to use in the    jittering    process.    The    Jltterer’s    return    value    is either    a failure    message    or    a    program    state    S matching    G and a sequence    of instantiated    transformations    from L which    when    started    in C will result    in S.    If the    Jltterer    is successful,    then    Tk will    be removed    from    its    suspended    state    and applied    as specified    in state    S.    If G is a conjunction    of sub-goals    then currently    a simple    STRIPS    like    approach    is employed    in solving    each    in some    determined    order.    This approach    can be both    inefficient    and unbale    to find    solwtlons    for    certain    jittering    problems.    Improving    this    process,    possibly    using    some    of    the    techniques    proposed    by    problem    solvers    in    other    domains    (see    Sacerdoti    [9]    for    a survey),    remains    a high priority    item for future    research.    4.2. THE GOAL LANGUAGE    Contained    in the TI system    is a subsystem    called    the Differencer    [53. 
4.2. THE GOAL LANGUAGE

Contained in the TI system is a subsystem called the Differencer [5]. The Differencer takes as input a current state pattern and a goal pattern, and returns a list of difference-descriptions between the two (an empty list for an exact match). Each element of the list is a description taken at a particular level of detail, and is written in a goal language I call GL1. For example, suppose that the Differencer was passed "if P then FOO" as a current pattern, and "if Q then $A" as a goal pattern (the notation $X stands for a variable pattern matching a single statement). The output of the Differencer would be the following three descriptions in GL1:

    CHANGE-PREDICATE(P Q);

    CHANGE-ACTION("if P then FOO" "if Q then $A"), i.e. change from one
    conditional to another without going out of the current context;

    PRODUCE-ACTION-AT-POSITION("if Q then $A" J), i.e. use any means
    necessary, including looking at the surrounding context, to produce a
    conditional matching "if Q then $A" at position J, J being bound by the
    Differencer.

The above three descriptions form a disjunction of goals, each of which gives a slightly wider perspective. Currently the Jitterer attempts to solve the most narrow goal first. If it is successful, then control returns to TI. If not, it attempts to solve the next higher goal, and so on until the list is exhausted.

Other goals of GL1 not included in the above example include EMBED, EXTRACT, DELETE, ADD, COMMUTE, DISTRIBUTE, PROVE-PROPERTY, FOLD and UNFOLD. Because GL1 is the language used both to describe pattern differences by the Differencer and to describe desired goal states by the Jitterer, it acts as a common interface between the two.

5. JITTERING PLANS

A small number of jittering plans are capable of solving many of the goals of GL1. Suppose we take for example the goal of the previous section, CHANGE-PREDICATE(P Q). This is a specific instance of the more general goal CHANGE-PREDICATE(pattern1 pattern2).
There are three basic plans for solving this more general goal: 1) embed pattern1 in pattern2, 2) extract pattern1 from pattern2, or 3) first embed pattern1 in something containing pattern2, and then extract pattern2. Since in our case each pattern is a simple variable, we can rule out the second plan. Similar plans exist for the other two goals of the example. In general, jittering plans can be defined for all but the most detailed goals of GL1.

5.1. STRATEGY AND TACTICS

The plans available for solving a particular goal have been organized into a construct I call a STRATEGY. Each of the plans is known as a Tactic. Each STRATEGY construct takes the following form (see [4], [7] for similar planning constructs in other domains):

    STRATEGY name
        Relevant-goal: goal the strategy matches
        Applicability-condition: used to narrow strategy's scope
        Default-bindings: bind any unbound goal parameters
        Tactic(1) ...
        Tactic(n) ...

Each Tactic construct is composed of a set of constraints which further limit its applicability and an Action for achieving the matching goal. To illustrate, let us package the three plans described informally for solving the CHANGE-PREDICATE goal into a STRATEGY. Our three plans become formally:

    STRATEGY change-predicate-from-a-to-b
        Relevant-goal: CHANGE-PREDICATE(Pred1 Pred2)
        Applicability-condition: none
        Default-bindings: none
        Tactic(1) embed
            Applicability-condition: none
            Action: POST(EMBED(Pred1 Pred2))
        Tactic(2) extract
            Applicability-condition: NOT(VARIABLE(Pred1))
            Action: POST(EXTRACT(Pred2 Pred1))
        Tactic(3) embed-and-then-extract
            Applicability-condition: none
            Action: SEQ(POST(EMBED(Pred1 (#ANY-PRED Pred1 Pred2)))
                        POST(EXTRACT(Pred2 (#ANY-PRED Pred1 Pred2))))

Here, POST(G) says mark the goal G as a sub-goal to be achieved. #ANY-PRED will match either AND or OR. SEQUENCE(A1 A2 ... An) says execute the list of actions sequentially; if Ai is of the form POST(G), then do not move on to Ai+1 until G has been achieved. There exist similar functions for executing a sequence of actions in parallel and in disjunctive form. In general, an Action can specify an arbitrarily ordered list of actions to take to achieve the STRATEGY's goal.
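A minimal sketch of how such a construct might be represented, with POST modeled as returning subgoal tuples; the Python rendering is an assumption, not the Hearsay-III encoding used by the actual system.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Tactic:
        name: str
        applicable: Callable[[dict], bool]   # constraints on the bindings
        action: Callable[[dict], list]       # the subgoals this tactic POSTs

    @dataclass
    class Strategy:
        name: str
        relevant_goal: str
        applicable: Callable[[dict], bool]
        tactics: list = field(default_factory=list)

    change_predicate = Strategy(
        "change-predicate-from-a-to-b", "CHANGE-PREDICATE", lambda b: True,
        [Tactic("embed", lambda b: True,
                lambda b: [("EMBED", b["Pred1"], b["Pred2"])]),
         Tactic("extract", lambda b: not b["Pred1-is-variable"],
                lambda b: [("EXTRACT", b["Pred2"], b["Pred1"])])])

    bindings = {"Pred1": "P", "Pred2": "Q", "Pred1-is-variable": False}
    for t in change_predicate.tactics:
        if t.applicable(bindings):
            print(t.name, "->", t.action(bindings))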
Suppose that we wanted to rule out working on goals that attempt to change True to False or False to True, and that we expected both of CHANGE-PREDICATE's parameters to be bound. We would get the following STRATEGY header:

    STRATEGY change-predicate-from-a-to-b
        Relevant-goal: CHANGE-PREDICATE(Pred1 Pred2)
        Applicability-conditions: NOT(Pred1=True AND Pred2=False) AND
                                  NOT(Pred1=False AND Pred2=True)

5.2. BACKWARD CHAINING

The transformations available for jittering, along with the Tactics introduced through the STRATEGY construct, define the methods available for solving a particular goal. Transformations are applied in backward chaining fashion: once a transformation's RHS matches a goal G, variables are bound and the LHS is instantiated. The Differencer is then called in to see if a match exists between the current state and the instantiated LHS. If so, then G is marked as achieved by the application of the transformation. If there is a mismatch, then the disjunction of sub-goals produced by the Differencer will be marked as awaiting achievement.

6. THE JITTERING SCHEDULER

Whenever a new goal is posted (marked to be achieved), the Jitterer attaches to it a list of methods (transformations and Tactics) which might solve it. It is at this point that the Scheduler is called in. The Scheduler first must decide among all active goals (ones with at least one untried method) which to work on next. Once a goal is chosen, it must decide which of the untried methods to employ. A set of domain dependent metrics which help the Scheduler make an intelligent decision in both cases has been identified.
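A minimal sketch of the Scheduler's two decisions, assuming each metric has already been reduced to a number; the metric names echo sections 6.1 and 6.2 below, and the numeric encoding is an assumption.

    # Choose the most promising active goal, then the cheapest untried method.
    def choose_goal(goals):
        active = [g for g in goals if g["untried_methods"]]
        return min(active, key=lambda g: (g["path_length"] > 10,  # long paths fade
                                          g["conflict"],          # high-level conflicts
                                          g["est_cost"]))         # ease of achieving

    def choose_method(methods):
        return min(methods, key=lambda m: (m["user_assistance"],  # avoid the user
                                           m["side_effects"],     # prefer small effects
                                           m["difficulty"]))      # ease of application

    goal = choose_goal([{"untried_methods": [{"user_assistance": 0,
                                              "side_effects": 1, "difficulty": 2}],
                         "path_length": 3, "conflict": 0, "est_cost": 2}])
    print(choose_method(goal["untried_methods"]))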
Prefer    small    side effects    over    large ones.    Tactic ordering    rules:    a STRATEGY writer    may provide    rules    for    ordering    Tactics.    These    rules    take    the    form    “if Condition then Ordering”, where    Condition can refer    to any    piece    of knowledge    known    to the STRATEGY    at match    time    and    Ordering takes    the form    “Try    Tactic(J) before    Tactic(K)”    or “Try    TactdN)    last”.    The    Scheduler    makes    use of this    information    when    choosing    among competing    tactics for a particular    goal.    7. CONCLUSION    The    purpose    of    this    paper    has    been    to    present    a prototype    Jitterer    with    enough    domain    knowledge    to deal competently    with    the    types    of    jittering    problems    typically    encountered    in a TI    environment.    The    prototype    Jitterer    described    is    currently    being    implemented    in the    Hearsay-ill    knowledge    representation    language    [3]    and    represents    a    preliminary    system.    Future    systems    will    deal    much more    with    performance    issues    (see    for    example    the next    section).    8. FURTHER    RESEARCH    There    are    generally    many    ways    of achieving    a given    jittering    goal.    The metrics    of section    5 give some help in ordering    them.    Even    so, in some cases    the Jitterer    will    produce    a solution    not    acceptable    to the user.    A simple    minded    approach    (and the one    currently    employed)    is to    keep    presenting    solutions    until    the    user    ik satisfied.    A better    approach    is to    allow    the    user    to    specify    what    he didn’t    like about    a particular    solution    and allow    lhts information    to guide    the search    for subsequent    solutions.    in    fact,    the    user    may not    want    to wait    until    an incorrect    solution    has    been    presented,    but    give    jittering    guidance    when    the    Jitterer    is    initially    invoked    (Feather    [6]    proposes    such    a    guidance    mechanism    for    a    fold/unfold    type    transformation    system).    Still another    approach    may be to “delay”    jittering    until    more    high    level    contextual    information    can    be obtained.    Both    user    guidance    and delayed    reasoning    are being    actively    studied    for inclusion    in future    jittering    systems.    Acknowledgments    I would    like to thank    Bob Balzer,    Martin    Feather,    Phil London    and    Dave    Wile    for    their    contributions    to this    work.    Lee Erman    and    Neil    Goldman    have    provided    willing    and able    assistance    to the    Hearsay    Ill implementation    effort.    9, REFERENCES    1. Balzer,    R., Goldman,    N., and Wile, D. On the Transformational    Implementation    Approach    to programming.    Second    International    Conference    on Software    Engineering,    October,    1976.    2.    Balzer,    R. TI: An example.    Research    Report    RR-79-79,    Information    Sciences    Institute,    1979.    3.    Aalzcr,    R., Erman, L., London,    P., Williams, C. Hearsay-Ill:    A    Domain-Independent    Framework    for Expert    Systems.    First    National    Conference    on Artificial    Intelligence,    1980.    4.    Bulrss-Rozas,    J. 
9. REFERENCES

1. Balzer, R., Goldman, N., and Wile, D. On the Transformational Implementation Approach to Programming. Second International Conference on Software Engineering, October 1976.
2. Balzer, R. TI: An Example. Research Report RR-79-79, Information Sciences Institute, 1979.
3. Balzer, R., Erman, L., London, P., Williams, C. Hearsay-III: A Domain-Independent Framework for Expert Systems. First National Conference on Artificial Intelligence, 1980.
4. Bulnes-Rozas, J. GOAL: A Goal Oriented Command Language for Interactive Proof Construction. Ph.D. Th., Computer Science Dept., Stanford University, 1979.
5. Chiu, W. Structure Comparison and Semantic Interpretation of Differences. First National Conference on Artificial Intelligence, 1980.
6. Feather, M. A System For Developing Programs by Transformation. Ph.D. Th., Dept. of Artificial Intelligence, University of Edinburgh, 1979.
7. Friedland, P. Knowledge-based Hierarchical Planning in Molecular Genetics. Ph.D. Th., Computer Science Dept., Stanford University, 1979.
8. Goldman, N., and Wile, D. A Data Base Specification. International Conference on the Entity-Relational Approach to Systems Analysis and Design, UCLA, 1979.
9. Sacerdoti, E. Problem Solving Tactics. Technical Note 189, SRI, July 1979.
PIAGET AND ARTIFICIAL INTELLIGENCE

Jarrett K. Rosenberg
Department of Psychology
University of California, Berkeley 94720

ABSTRACT

Piaget's Genetic Epistemology and Artificial Intelligence can be of great use to each other, since the strengths of each approach complement the weaknesses of the other. Two ways of bringing the two approaches together are suggested: elaborating the parallels between them (such as the concept of schemata), and building AI models based directly on Piaget's theory. How this might benefit AI is illustrated by examining how Piagetian research on problem solving can suggest new ways of building programs that learn.

I. INTRODUCTION

My thesis in this paper is that two such superficially disparate areas as Piaget's Genetic Epistemology and Artificial Intelligence can inform each other immensely. This fact has already been noticed, as long ago as Seymour Papert's work at Geneva [1] and as recently as Margaret Boden's writings on Piaget [2, 3]. Nevertheless, I think it still needs to be pointed out how the two fields both parallel and complement each other, and how one might go about trying to connect them. Piaget's work is immense, stretching over half a century and dozens of volumes, and so only a glimpse of it can be given here, but hopefully it will be enough to stimulate some AI people (and maybe psychologists too).

II. WHY SHOULD PIAGET AND AI BE RELEVANT TO EACH OTHER?

To address the problem of artificial intelligence is to presuppose what natural intelligence is, but AI work in general doesn't really treat the latter in any comprehensive way; it's assumed that playing chess is intelligent, or planning, or ordering a hamburger. But can we really hope to understand intelligent behavior, natural or artificial, without deciding what criteria we use for intelligence? If the answer is No, then it seems natural to look to psychology for a theory of intelligence. The only theory I find satisfying, and I suspect most AI people will as well, is Piaget's.

Piaget's theory is relevant to AI because it is a fully developed, motivated theory of intelligence which is general enough to subsume not only human and animal intelligence, but purely artificial intelligence as well. By motivated I mean that the form of Piaget's theory of knowledge is based on both the logical requirements of epistemology, and on the biological requirements of behavioral evolution. Hence there are good reasons for accepting his approach to intelligence, just as there are few, if any, compelling reasons to believe the (often implicit) definitions given in AI (or psychology, for that matter). Part of my argument, then, will be that AI researchers might be able to use this foundation as a basis for their own work.

Piaget, on the other hand, can learn a lot from AI, since it has developed ways of looking at the detailed structures and processes of complex behavior, thus suggesting ways of elaborating Piaget's theory into one with a more "psychological" flavor. Piaget's theory has always been more epistemologically than psychologically oriented (in the sense of information-processing psychology), and it has only been recently that Genevan researchers have addressed the question of the procedures involved in the acquisition and use of knowledge-structures.
Table 1 gives a general (and biased!) comparison of the strengths and weaknesses of Piaget's theory and AI work taken as a whole. The important thing to note is that the strengths and weaknesses of the two approaches complement each other nicely. Let me discuss them briefly.

The advantages of Piaget's theory are:

- It is both epistemologically and biologically motivated; that is, it starts from the most central issues of a theory of knowledge (e.g., how space, time, and causality are constructed), and places this analysis within the biological context of human evolution. This contrasts with most AI work, which considers intelligence outside of both its philosophical and biological context.
- It is highly theoretical and comprehensive. As a biologist with philosophical interests, Piaget resolved to study the "natural history of knowledge", and so developed a general theoretical framework which he then applied to a number of areas, particularly those of physical and logico-mathematical knowledge. This contrasts with the general lack of interest in AI for large-scale theories or applications.
- It has a large amount of empirical validation: virtually everything he has proposed has been extensively replicated and consistently extended.
- It proposes a "fundamental unit" for cognition, the scheme, from which all knowledge-structures are constructed. This supplies a uniform analytic framework for all research in cognition.
- It has a formalism for the structures of knowledge. This formalism is an algebraic, rather than a computational one, and is only in its beginnings, but is nevertheless valuable.
- It is a developmental theory of knowledge, promising a way to understand how knowledge-structures can radically change and yet preserve their integrity. It can thus provide an additional, developmental, set of constraints on AI theories: both structural ones (certain logical properties of cognition, with a logical and developmental ordering on them), and functional/procedural ones on the way those structures are constructed and used (e.g., forms of coordination). This should be one of its greatest attractions for AI.

The disadvantages of Piaget's theory are:

- It is not very detailed about how knowledge is represented, or how it is processed; to someone looking for a theory at that level, Piaget's seems mostly descriptive.
- It says little about the procedures used in cognition; only recently has this issue been addressed at Geneva.
- Its emphasis is on physical and logico-mathematical knowledge, rather than other kinds such as social or linguistic.

Here the field of AI has to offer precisely what Piagetian research lacks, and a glance at the table shows that I consider the converse to be true.

III. HOW CAN WE PUT THEM TOGETHER?

Given that we want to bring these two areas into contact, two possible ways of doing it come immediately to mind.

First, there are a number of parallels between Piagetian concepts and ones introduced in AI, the most obvious being that between Piaget's notion of a scheme and the schema/frame/script ideas in vogue the past few years in AI. The most important such parallels are:

- Schemata. For Piaget, the fundamental unit of knowledge is the scheme, a generalizable action-pattern.
- Schematically-based processing. As a consequence of his scheme-based theory, Piaget views cognitive processes as primarily top-down.
- Knowledge as action. In Piaget's epistemology, no knowledge is gained without acting and its consequent transformation of experience.

These parallels can be used not only to suggest new ways of developing the AI notions, but as ways of formulating the Piagetian concepts in more detail.

Second, we can attempt to create AI models of Piaget's theory (not tasks, as has sometimes been done). This will introduce another paradigm into AI research, no worse than the others and hopefully better. Simultaneously, such work can serve to rigorously develop the processing aspects of Piaget's theory, using all the insights of AI research. Let me give a brief example.

One of the ways that Piagetian theory is relevant to AI is that it can give both theoretical and empirical suggestions for how to design programs that learn, i.e., develop new cognitive structures in order to solve problems. Current AI learning programs work by assuming some initial body of knowledge and learning techniques, and then showing how additional knowledge and techniques could be learned. However, this creates a regress: how the starting knowledge and techniques are acquired is left for someone else to explain. In addition, the initial knowledge used is often much more advanced than what is to be acquired; for example, Sussman's HACKER uses fairly sophisticated debugging techniques to learn something children acquire by age two. Piaget's theory, on the other hand, tries to account for all of cognitive development starting with only the simplest of structures.

Piagetian research reveals a number of important characteristics of problem solving that are not fully achieved until adolescence, being gradually constructed starting around age five (ignoring the sensori-motor precursors). They are:

- The ability to construct and overgeneralize theories, and as a corollary, to interpret phenomena in terms of confirmation or disconfirmation of those theories.
- The ability to construct refutations as tests, as well as confirmations.
- The presence of articulated part-whole relations, e.g., as reflected in an appropriate decomposition of the problem.
- The ability to reflect on procedures in order to switch or modify them.
- The ability to coordinate goals and subgoals.

Table 1. A Comparison of Piagetian Theory and Artificial Intelligence.

    Piaget                                      AI
    Strengths:                                  Strengths:
      1. Motivated (biologically                  1. Precise/formal
         & epistemologically)                     2. Process-oriented
      2. Comprehensive theory                     3. Detailed
      3. Lots of empirical validation             4. Body of techniques
      4. Fundamental unit of knowledge
      5. Formal model of knowledge structures
      6. Developmental theory
    Weaknesses:                                 Weaknesses:
      1. Not detailed                             1. Unmotivated
      2. Not procedural                           2. Not theoretical
                                                  3. Little empirical validation
                                                  4. Little agreement
                                                  5. Narrow
                                                  6. No model of development

Piagetian studies of how these abilities are constructed can provide useful ideas for AI. Consider, for example, how children learn to balance blocks (see [4]).
For very young children, actions of objects are more or less random unless assimilated to their own actions and intentions. Their attempts to balance blocks ignore completely the relevant factors in favor of pushing down on the block to maintain a balance. Somewhat older children, after much trial and error, recognize a regularity in the outcomes of their efforts, namely, that most things balance at their geometrical centers. Once this idea (a primitive theory) is grasped, it is rapidly generalized, to the point where exceptions to it are dismissed as chance. The eventual overgeneralization of this notion of regularity, and consequent repeated failures to confirm it, give rise to the idea that the exceptions to the theory may have regularities themselves. At this point, children will retain their original theory where it works, but construct an additional one for the counterexamples (balance based on weight distribution). As their powers of conceptual coordination grow, they can combine the two theories into a single, more general, one. Then finally the idea of a refutation test usually appears (the preference for confirmations stemming from the same source as that of overgeneralizing in the first place).

What does this suggest for AI work on learning and problem solving? If Piaget is correct, then in order to be generally successful, AI programs should have the abilities mentioned, and they should be able to develop them in the manner given above. In particular, we can imagine a class of AI programs that approach problem solving in the following way. Faced with a new problem they would:

- engage extensively in trial-and-error behavior,
- strive to find regularities in the outcomes (including regularities in the procedures used),
- use the regularities to construct and then overgeneralize a theory,
- construct confirmation tests of the theory and note the exceptions,
- construct a theory for the exceptions, if they are numerous enough,
- try to coordinate the two theories into a larger one,
- test this resultant theory by confirmation and refutation tests.

Besides theories of problem situations, the programs would also construct theories of the problem-solving procedures themselves, i.e., they would study the relationships of procedures among themselves and with respect to their outcomes, rather than just theories of the relationships among states of the world.

Some of the above is partially captured in various AI programs (Sussman's HACKER, Winston's arch builder, etc.), but none of them are configured in exactly this way, and all of them assume fairly sophisticated techniques and knowledge as a basis for learning. The proposal above is obviously not the last word in learning and problem solving, but at least provides a useful place to start.
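To make the proposal concrete, the loop can be sketched in a modern programming idiom on the block-balancing example itself. Everything below (the encoding of blocks, the tolerance, all function names) is an invented illustration, not anything the text proposes:

    # A self-contained toy rendition of the Piagetian learning loop
    # above, applied to block balancing (cf. [4]). All encodings are
    # illustrative assumptions, not the paper's.
    import random

    def true_balance_point(block):
        """A block is (length, hidden_weight_offset); it balances at
        its geometric center shifted by the hidden offset."""
        length, offset = block
        return length / 2 + offset

    def learn_to_balance(blocks, tolerance=0.25):
        # 1. Trial and error: probe positions until each block balances.
        outcomes = []
        for block in blocks:
            for _ in range(200):
                guess = random.uniform(0, block[0])
                if abs(guess - true_balance_point(block)) < tolerance:
                    outcomes.append((block, guess))
                    break

        # 2. Regularity, overgeneralized into a primitive theory:
        #    "everything balances at its geometric center".
        center_theory = lambda block: block[0] / 2

        # 3. Confirmation tests: note the exceptions to the theory.
        exceptions = [(b, g) for b, g in outcomes
                      if abs(center_theory(b) - g) >= tolerance]

        # 4. A second theory for the exceptions: the balance point
        #    deviates from the center (weight distribution).
        corrections = {b: g - center_theory(b) for b, g in exceptions}

        # 5. Coordinate the two theories into a single, more general one.
        def coordinated(block):
            return center_theory(block) + corrections.get(block, 0.0)
        return coordinated

    blocks = [(10, 0), (8, 0), (12, 3), (6, -1)]  # two uniform, two weighted
    theory = learn_to_balance(blocks)
    print([round(theory(b), 2) for b in blocks])

A refutation step, in this toy setting, would amount to deliberately constructing blocks the coordinated theory should fail on and checking whether it does.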
In addition, the vast amount of empirical data that Piagetians have collected can be invaluable as an empirical testing ground for AI models. Up to now, AI researchers have only attempted to model performance on Piagetian tasks, without realizing that it is both the empirical and the theoretical constraints that would make Piagetian work of use to AI: were a model to simulate performance on a Piagetian task, and do so in a way fully consistent with the theory, it would have met a very strong test of its validity. Moreover, the tasks themselves have a high degree of ecological validity and epistemological significance (for example, conservation of matter and number).

Thus, by making use of the existing parallels between the two approaches, and by recasting Piagetian concepts in AI terms, the two might produce a hybrid that would have all the advantages of a Piagetian foundation as well as all the benefits of AI's hard-won insights into knowledge-representation and information-processing.

For this to happen, however, it will be necessary for both Piagetians and AI researchers to learn more about each other's work; as mentioned above, previous AI work has neglected Piaget's theory in favor of his tasks, while Piagetians have only the vaguest notion of what a computational model is all about. And of course, Piaget's theory is not some sort of panacea, psychology's gift to AI. On the contrary, it needs to be much more developed, along the lines that AI work is pursuing. It is precisely because each could profit from the other that I've presented these points here.

ACKNOWLEDGEMENTS

I would like to thank Jonas Langer, John Seely Brown, and Tom Moran for their comments. No endorsements implied.

REFERENCES

[1] Papert, S. "Etude comparee de l'intelligence chez l'enfant et chez le robot." In La Filiation des Structures. Etudes d'Epistemologie Genetique, vol. 15. Paris: P.U.F., 1963.

[2] Boden, M. "Artificial Intelligence and Piagetian Theory." Synthese, 38: 389-414, 1978.

[3] Boden, M. Piaget. New York: Viking, 1979.

[4] Karmiloff-Smith, A. and B. Inhelder. "If you want to get ahead, get a theory." Cognition, 3: 195-212, 1975.
 | 
	1980 
 | 
	30 
 | 
					
24 
							 | 
R1: an Expert in the Computer Systems Domain¹

John McDermott
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

INTRODUCTION. R1² is a rule-based system that has much in common with other domain-specific systems that have been developed over the past several years [1, 8]. It differs from these systems primarily in its use of Match rather than Generate-and-Test as its central problem solving method [2]; rather than exploring several hypotheses until an acceptable one is found, it exploits its knowledge of its task domain to generate a single acceptable solution. R1's domain of expertise is configuring Digital Equipment Corporation's VAX-11/780 systems. Its input is a customer's order and its output is a set of diagrams displaying the spatial relationships among the components on the order; these diagrams are used by the technician who physically assembles the system. Since an order frequently lacks one or more components required for system functionality, a major part of R1's task is to notice what components are missing and add them to the order. R1 is currently being used on a regular basis by DEC's manufacturing organization.³

    ¹ This paper describes R1 as it exists in June of 1980; it is a highly condensed version of [5].
    ² Four years ago I couldn't even say "knowledge engineer", now I . . .
    ³ The development of R1 was supported by Digital Equipment Corporation. The research that led to the development of OPS4, the language in which R1 is written, was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, and monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1151. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of Digital Equipment Corporation, the Defense Advanced Research Projects Agency, or the U.S. Government. VAX, PDP-11, UNIBUS, and MASSBUS are trademarks of Digital Equipment Corporation.

THE DOMAIN. The VAX-11/780 is the first implementation of DEC's VAX-11 architecture. The VAX-11/780 uses a high speed synchronous bus, the sbi, as its primary interconnect; the central processor, one or two memory control units, up to four massbus interfaces, and up to four unibus interfaces can be connected to the sbi. The massbuses and particularly the unibuses can support a wide variety of peripheral devices. A typical system contains about 90 components; these include cabinets, peripheral devices, drivers for the devices, and cables. There are a large number of rules that constrain the ways in which these components may be associated.

R1'S DEFINING CHARACTERISTICS. R1 is implemented in OPS4, a production system language developed at Carnegie-Mellon University [3, 7]. An OPS4 production system consists of a set of productions held in production memory and a set of data elements (eg, state descriptions) held in working memory.
A production is a rule composed of conditions and actions:

    Pi:  (C1 C2 ... Cn --> A1 A2 ... Am)

Conditions are forms that are instantiated by memory elements. Actions add elements to working memory or modify existing elements. The recognize-act cycle repeatedly finds all production instantiations and executes one of them.⁴ R1 exploits this recognition match. Its rules have conditions that recognize situations in which a particular type of extension to a particular type of partial configuration is permissible or required; the actions then effect that extension.

    ⁴ OPS4's cycle time, though it is essentially independent of the size of both production memory and working memory [4], depends on particular features of the production system (eg, the number and complexity of the conditions and actions in each production); the average cycle time for OPS4 interpreting R1 is about 150 milliseconds. OPS4 is implemented in MACLISP; R1 is run on a PDP-10 (model KL) and loads in 412 pages of core.

OPS4's two memories have been augmented, for this application, with a third. This memory, the data base, contains descriptions of each of the 420 components currently supported for the VAX. Each data base entry consists of the name of a component and a set of eight or so attribute/value pairs that indicate the properties of the component that are relevant for the configuration task. As R1 begins to configure an order, it retrieves the relevant component descriptions. As the configuration is generated, working memory grows to contain descriptions of partial configurations, results of various computations, and context symbols that identify the current subtask.

Production memory contains all of R1's permanent knowledge about how to configure VAX systems. R1 currently has 772 rules that enable it to perform the task.⁵ These rules can be viewed as state transition operators. The conditional part of each rule describes features that a state must possess in order for the rule to be applied. The action part of the rule indicates what features of that state have to be modified or what features have to be added in order for a new state that is on a solution path to be generated. Each rule is a more or less autonomous piece of knowledge that watches for a state that it recognizes to be generated. Whenever that happens, it can effect a state transition. If all goes well, this new state will, in turn, be recognized by one or more rules; one of these rules will effect another state transition, and so on until the system is configured.

    ⁵ Only 480 of these rules are "configuration rules"; the remainder contain more general (non domain-specific) knowledge that R1 needs in order to use the configuration rules.
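The recognize-act cycle can be suggested by a minimal sketch in Python; everything below (the dict representation of working memory elements, the toy rule about disk controllers) is an invented illustration, not OPS4 or R1 itself:

    # Minimal forward-chaining recognize-act loop, sketched as an
    # illustration of the cycle described above (not OPS4 itself).
    # A working memory element is a dict; a production pairs a
    # condition predicate with an action that adds or modifies elements.

    class Production:
        def __init__(self, name, condition, action):
            self.name, self.condition, self.action = name, condition, action

    def recognize_act(productions, working_memory, max_cycles=1000):
        for _ in range(max_cycles):
            # Recognize: collect all productions whose conditions are
            # satisfied by the current working memory.
            conflict_set = [p for p in productions
                            if p.condition(working_memory)]
            if not conflict_set:
                return working_memory      # no rule fires: quiescence
            # Act: execute one instantiation (here, simply the first).
            conflict_set[0].action(working_memory)
        return working_memory

    # Toy rule: assign a controller to any disk drive that lacks one.
    def needs_controller(wm):
        return any(e["type"] == "disk-drive" and not e.get("controller")
                   for e in wm)

    def add_controller(wm):
        for e in wm:
            if e["type"] == "disk-drive" and not e.get("controller"):
                e["controller"] = "assigned"
                wm.append({"type": "controller", "serves": e["name"]})
                return

    wm = [{"type": "disk-drive", "name": "drive-1"}]
    recognize_act([Production("assign-controller", needs_controller,
                              add_controller)], wm)
    print(wm)

R1's rules differ from this toy in an essential way: their conditions are rich enough that the one instantiation executed is (almost) always on a solution path, which is what the Match discussion following Figure 1 turns on.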
English translations of two sample rules are shown in Figure 1.

    ASSIGN-UB-MODULES-EXCEPT-THOSE-CONNECTING-TO-PANELS-4
    IF:   THE CURRENT CONTEXT IS ASSIGNING DEVICES TO UNIBUS MODULES
          AND THERE IS AN UNASSIGNED DUAL PORT DISK DRIVE
          AND THE TYPE OF CONTROLLER IT REQUIRES IS KNOWN
          AND THERE ARE TWO SUCH CONTROLLERS, NEITHER OF WHICH HAS
              ANY DEVICES ASSIGNED TO IT
          AND THE NUMBER OF DEVICES THAT THESE CONTROLLERS CAN
              SUPPORT IS KNOWN
    THEN: ASSIGN THE DISK DRIVE TO EACH OF THE CONTROLLERS
          AND NOTE THAT THE TWO CONTROLLERS HAVE BEEN ASSOCIATED
              AND THAT EACH SUPPORTS ONE DEVICE

    PUT-UB-MODULE-6
    IF:   THE CURRENT CONTEXT IS PUTTING UNIBUS MODULES IN
              BACKPLANES IN SOME BOX
          AND IT HAS BEEN DETERMINED WHICH MODULE TO TRY TO PUT IN
              A BACKPLANE
          AND THAT MODULE IS A MULTIPLEXER TERMINAL INTERFACE
          AND IT HAS NOT BEEN ASSOCIATED WITH ANY PANEL SPACE
          AND THE TYPE AND NUMBER OF BACKPLANE SLOTS IT REQUIRES IS KNOWN
          AND THERE ARE AT LEAST THAT MANY SLOTS AVAILABLE IN A
              BACKPLANE OF THE APPROPRIATE TYPE
          AND THE CURRENT UNIBUS LOAD ON THAT BACKPLANE IS KNOWN
          AND THE POSITION OF THE BACKPLANE IN THE BOX IS KNOWN
    THEN: ENTER THE CONTEXT OF VERIFYING PANEL SPACE FOR A MULTIPLEXER

    Figure 1: Two Sample Rules

It is usual to distinguish the matching of forms and data from search: for example, in discussing the amount of search occurring in a resolution theorem prover, the unification of clauses is considered to be part of the elementary search step. But Match is also a method for doing search in a state space [6]; it is analogous to methods such as Hill Climbing or Means-ends Analysis, though much more powerful. The characteristic that distinguishes Match from other Heuristic Search methods is that in the case of Match the conditions (tests) associated with each state are sufficient to guarantee that if a state transition is permissible, then the new state will be on a solution path (if there is a solution path). Thus with Match, false paths are never generated, and so backtracking is never required. Match is well suited for the configuration task because, with a single exception, the knowledge that is available at each step is sufficient to distinguish between acceptable and unacceptable paths. The subtask that cannot always be done with Match alone is placing modules on the unibus in an acceptable sequence; to perform this subtask, R1 must occasionally generate several candidate sequences.

The fan-in and fan-out of R1's rules provide a measure of the degree of conditionality in the configuration task. The fan-in of a rule is the number of distinct rules that could fire immediately before that rule; the fan-out is the number of distinct rules that could fire immediately after the rule. The average fan-in and fan-out of R1's rules is 3. The graph of possible rule firing sequences, then, has 666 nodes, one for each of the rules (excluding the 106 output generation rules); each of these nodes has, on the average, three edges coming into it and three going out. It should be clear that unless the selection of which edge to follow can be highly constrained, the cost (in nodes visited) of finding an adequate configuration (an appropriate path through the rules) will be enormous. It is in this context that the power of the Match method used by R1 becomes apparent. When R1 can configure a system without backtracking, it finds a single path that consists, on the average, of about 800 nodes. When R1 must backtrack, it visits an additional N nodes, where N is the product of the number of unsuccessful unibus module sequences it tries (which is rarely more than 2) and the number of nodes that must be expanded to generate a candidate unibus module configuration (which is rarely more than 300).
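Given a rule firing graph of the kind just described, the fan-in and fan-out statistics are straightforward to compute; the following Python fragment is illustrative only (the four-rule graph is invented, not R1's):

    # Average fan-in and fan-out over a graph of possible rule firing
    # sequences, as defined above. The tiny example graph is made up;
    # R1's actual graph has 666 nodes with average degree 3.

    def fan_in_out(edges):
        """edges: set of (rule_before, rule_after) pairs."""
        rules = {r for e in edges for r in e}
        fan_in = {r: sum(1 for _, b in edges if b == r) for r in rules}
        fan_out = {r: sum(1 for a, _ in edges if a == r) for r in rules}
        n = len(rules)
        return sum(fan_in.values()) / n, sum(fan_out.values()) / n

    edges = {("r1", "r2"), ("r1", "r3"), ("r2", "r3"), ("r3", "r1")}
    print(fan_in_out(edges))   # the two averages are always equal,
                               # since both sums count the edges once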
R1'S EVOLUTION. In a period of less than a year, R1 went from an idea, to a demonstration system that had most of the basic knowledge required in the domain but lacked the ability to deal with complex orders, to a system that possesses true expertise. Its development parallels, in many respects, the development of the several domain-specific systems engineered by Stanford University's Heuristic Programming Project [2]. R1's implementation history divides quite naturally into two stages. During the first stage, which began in December of 1978 and lasted for about four months, I spent five or six days being tutored in the basics of VAX system configuration, read and reread the two manuals that describe many of the VAX configuration constraints, and implemented an initial version of R1 (consisting of fewer than 200 domain rules) that could configure the simplest of orders correctly, but made numerous mistakes when it tried to tackle more complex orders.⁶ The second stage, which lasted for another four months, was spent in asking people who were expert in the VAX configuration task to examine R1's output, point out R1's mistakes, and indicate what knowledge R1 was lacking. R1 was sufficiently ignorant that finding mistakes was no problem. Given a criticism of some aspect of the configuration by an expert, all that was necessary in order to refine R1's knowledge was to find the offending rule, ask the expert to point out the problem with the condition elements in the rule, and then either modify the rule or split it into two rules that would discriminate between two previously undifferentiated states. During this stage, R1's domain knowledge almost tripled.

    ⁶ During this first stage, R1's name was XCON.

VALIDATION. During October and November of 1979, R1 was involved in a formal validation procedure. Over the two month period, R1 was given 50 orders to configure. A team of six experts examined R1's output, spending from one to two hours on each order. In the course of examining the configurations, 12 pieces of errorful knowledge were uncovered. The rules responsible for the errors were modified and the orders were resubmitted to R1 and were all configured correctly. Each of these 50 orders contained, on the average, 90 components; R1 fired an average of 1056 rules
and used an average of 2.5 minutes of cpu time in configuring each order. Since January of 1980, R1 has configured over 500 orders. It is now integrated into DEC's manufacturing organization. It has also begun to be used by DEC's sales organization to configure orders on the day they are booked.

CONCLUDING REMARKS. R1 has proven itself to be a highly competent configurer of VAX-11/780 systems. The configurations that it produces are consistently adequate, and the information that it makes available to the technicians who physically assemble systems is far more detailed than that produced by the humans who do the task. There are, however, some obvious ways in which to enlarge its domain so that it can become a more helpful system. Work has already begun on augmenting R1's knowledge to enable it to configure other computer systems manufactured by DEC. In addition, we plan to augment its knowledge so that it will be able to help with the scheduling of system delivery dates. We also plan to augment R1's knowledge so that it will be able to provide interactive assistance to a customer or salesperson that will allow him, if he wishes, to specify some of the capabilities of the system he wants and let R1 select the set of components that will provide those capabilities. Ultimately we hope to develop a salesperson's assistant, an R1 that can help a customer identify the system that best suits his needs.

ACKNOWLEDGEMENTS. Many people have provided help in various forms. Jon Bentley, Scott Fahlman, Charles Forgy, Betsy Herk, Jill Larkin, Allen Newell, Paul Rosenbloom, and Mike Rychener gave me much encouragement and many valuable ideas. Dave Barstow, Bruce Buchanan, Bob Englemore, Penny Nii, Ted Shortliffe, and Mark Stefik contributed their knowledge engineering expertise. Finally, Jim Baratz, Alan Belancik, Dick Caruso, Sam Fuller, Linda Marshall, Kent McNaughton, Vaidis Mongirdas, Dennis O'Connor, and Mike Powell, all of whom are at DEC, assisted in bringing R1 up to industry standards.

REFERENCES

1. Amarel, S. et al. Reports of panel on applications of artificial intelligence. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, MIT, 1977, pp. 994-1006.
2. Feigenbaum, E. A. The art of artificial intelligence. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, MIT, 1977, pp. 1014-1029.
3. Forgy, C. L. and J. McDermott. OPS, a domain-independent production system language. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, MIT, 1977, pp. 933-939.
4. Forgy, C. L. RETE: A fast algorithm for the many pattern/many object pattern match problem. Carnegie-Mellon University, Department of Computer Science, 1980.
5. McDermott, J. R1: a rule-based configurer of computer systems. Carnegie-Mellon University, Department of Computer Science, 1980.
6. Newell, A. Heuristic programming: ill-structured problems. In Progress in Operations Research, Aronofsky, J. S., Ed., John Wiley and Sons, 1969, pp. 361-414.
7. Newell, A. Knowledge representation aspects of production systems. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, MIT, 1977, pp. 987-988.
8. Waterman, D. A. and F. Hayes-Roth. Pattern-Directed Inference Systems. Academic Press, 1978.
 | 
	1980 
 | 
	31 
 | 
					
25 
							 | 
RULE-BASED MODELS OF LEGAL EXPERTISE

D. A. Waterman and Mark Peterson
The Rand Corporation
1700 Main Street
Santa Monica, California

ABSTRACT

This paper describes a rule-based legal decisionmaking system (LDS) that embodies the skills and knowledge of an expert in product liability law. The system is being used to study the effect of changes in legal doctrine on settlement strategies and practices. LDS is implemented in ROSIE, a rule-oriented language designed to facilitate the development of large expert systems. The ROSIE language is briefly described and our approach to modeling legal expertise using a prototype version of LDS is presented.

I. INTRODUCTION

We are currently engaged in designing and building rule-based models of legal expertise. A rule-based model of expertise is a computer program organized as a collection of antecedent-consequent rules [1] that embodies the skills and knowledge of an expert in some domain. The primary goal of our work is to develop rule-based models of the decisionmaking processes of attorneys and claims adjusters involved in product liability litigation. We will use these models to study the effect of changes in legal doctrine on settlement strategies and practices.

Some progress has already been made in developing computer systems to perform legal analysis. The LEGOL Project [2] has been working for a number of years on the construction of a language for expressing legislation. In addition, systems have been developed for analyzing cases on the basis of legal doctrine [3, 4], investigating the tax consequences of corporate transactions [5], automating the assembly of form legal documents [6], and performing knowledge-based legal information retrieval [7].

Our legal decisionmaking system (LDS) is being implemented in ROSIE, a rule-oriented language designed to facilitate the development of large expert systems. In section II the ROSIE language is briefly described. Section III discusses our approach to modeling legal expertise and describes the operation of our prototype version of LDS. The conclusions are presented in section IV.

II. METHODOLOGY

A rule-oriented system for implementing expertise (ROSIE) is currently under development to provide a tool for building expert systems in complex domains [8]. ROSIE is a direct descendant of RITA [9] and more distantly MYCIN [10] in that the models created are rule-based with data-directed control [11], and are expressed in an English-like syntax. In addition, the models use special language primitives and pattern matching routines that facilitate interaction with external computer systems.
The ROSIE design also includes features not found in these predecessor systems, such as a hierarchical data structure capable of supporting abstraction and inheritance in a general way, partitioned rulesets that can be called as subroutines or functions, a clearer differentiation between rule antecedent matching and iterative control by permitting actions that involve looping through the data base, and a user support environment with extended facilities for editing and explanation.

In the latest version of ROSIE, the pattern-directed modules used to examine and modify the data structure are divided into the imperative knowledge or rules and the declarative knowledge or facts. Both rules and facts are represented as antecedent-consequent pairs, where the consequent is either an action to be executed (for rules) or a statement to be deduced (for facts). Rules operate via forward chaining and are of two basic types: existence-driven (IF-THEN) as in RITA, and event-driven (WHEN-THEN) as in ARS [12]. Facts, on the other hand, operate via backward chaining and are represented only as IF-THEN pairs. The facts in ROSIE are similar to RITA goals, but are more general since they are implicitly referenced by the rules and automatically executed whenever the rules need information the facts can supply. In effect, the information that can be inferred from the facts is a "virtual data base" or extension to the standard ROSIE data base.
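The division of labor between forward-chaining rules and backward-chaining facts can be suggested with a small sketch; this is hypothetical Python, not ROSIE, and the predicate names are invented for illustration:

    # Hypothetical sketch of ROSIE's rule/fact division, not its actual
    # implementation: IF-THEN rules fire by forward chaining, while
    # "facts" are consulted by backward chaining whenever a rule needs
    # a statement not already asserted (the "virtual data base").

    facts = {  # declarative knowledge: statement -> premises implying it
        "defendant-is-liable": ["product-was-defective",
                                "use-was-reasonable"],
    }

    def holds(statement, db):
        """Backward chaining: a statement holds if asserted, or if all
        premises of some fact that concludes it hold."""
        if statement in db:
            return True
        premises = facts.get(statement)
        return premises is not None and all(holds(p, db) for p in premises)

    def run_rules(rules, db):
        """Forward chaining: repeatedly fire any rule whose condition
        holds and whose assertion is new."""
        changed = True
        while changed:
            changed = False
            for condition, assertion in rules:
                if holds(condition, db) and assertion not in db:
                    db.add(assertion)
                    changed = True
        return db

    rules = [("defendant-is-liable", "compute-settlement-value")]
    db = {"product-was-defective", "use-was-reasonable"}
    print(run_rules(rules, db))

Here the rule never mentions how liability is established; it is deduced on demand from the fact base, which is the sense in which ROSIE's facts extend the data base implicitly.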
The current ROSIE syntax is more English-like than that of RITA or earlier versions of ROSIE. It is intended to facilitate model creation, modification and explanation. This syntax is illustrated in Figure 1, which shows our definition of strict liability in the product liability domain.

    IF:
    THEN: assert the defendant is liable under the
          theory of strict-liability.

    FIGURE 1. Definition of Strict Liability in ROSIE
    (the antecedent of the rule did not survive reproduction)

III. LEGAL MODEL

The model of legal decisionmaking we are building will contain five basic types of rules: those based on formal doctrine, informal principles, strategies, subjective considerations and secondary effects (see Figure 2). The formal doctrine evolves from court decisions and statutes, while the informal principles, strategies, etc. are shaped by example and experience. Sources for these rules include legal literature, case histories and interviews with experts. By separating the rules as described we can study both the relevant inference mechanisms and the influence of each type of knowledge on the decisionmaking process.

    FORMAL DOCTRINE: rules used as the basis for legal judgements,
    such as legislation and common law.

    INFORMAL PRINCIPLES: rules that don't carry the weight of formal
    law but are generally agreed upon by legal practitioners. This
    includes ambiguous concepts (e.g., reasonable and proper) and
    generally accepted practices (e.g., pain and suffering = 3 x
    medical expenses).

    STRATEGIES: methods used by legal practitioners to accomplish a
    goal, e.g., proving a product defective.

    SUBJECTIVE CONSIDERATIONS: rules that anticipate the subjective
    responses of people involved in legal interactions, e.g., the
    effect of plaintiff attractiveness on the amount of money
    awarded, or the effects of extreme injuries on liability
    decisions.

    SECONDARY EFFECTS: rules that describe the interactions between
    rules, e.g., a change in the law from contributory negligence to
    comparative negligence may change other rules such as strategies
    for settlement or anticipated behavior of juries.

    FIGURE 2. Components of Legal Decisionmaking

We are using our model of legal decisionmaking to systematically describe how legal practitioners reach settlement decisions and to test the effect of changes in the legal system on these decisions. Individual cases are analyzed by comparing the chains of reasoning (the chains of rules) that lead to the outcomes to similar chains in prototypical cases. This helps clarify the relationships existing between the formal doctrine, informal practices and strategies used in the decisionmaking. We are examining the effects of changes in legal doctrine, procedures and strategies on the processing of cases by modifying appropriate rules in the model and noting the effect on the operation of the model when applied to a body of selected test cases. This can provide insights that will suggest useful changes in legal doctrine and practices.

Our current implementation of LDS is a small prototype model of legal decisionmaking containing rules representing negligence and liability laws. This prototype contains rules describing formal doctrine and informal principles in product liability. Future versions of the system will incorporate the other rule types shown in Figure 2. The model consists of approximately 90 rules, half of which represent legal doctrine and principles. Given a description of a product liability case the model attempts to determine what theory of liability applies, whether or not the defendant is liable, how much the case is worth, and what an equitable value for settlement would be. Once a decision is reached the user may ask for an explanation in terms of the rules used to reach the decision.

We will now describe the use of LDS to test the effect of a legislative change on a case outcome. The case is briefly summarized in Figure 3, while the operation of the model on this case is illustrated in Figure 4. The system was first applied using the definition of strict liability given in Figure 1.
    The plaintiff was cleaning a bathtub drain with a liquid cleaner
    when the cleaner exploded out of the drain causing severe burns
    and permanent scarring to his left arm. Medical expenses for the
    plaintiff were $6000, and he was unable to work for 200 working
    days, during which time his rate of pay was $47 per day. The
    cleaner was manufactured and sold by the defendant, Stanway
    Chemical Company. The contents of the product were judged not to
    be defective by experts retained by the defendant. The product's
    label warned of potentially explosive chemical reactions from
    improper use of the product, but did not give a satisfactory
    description of means to avoid chemical reactions. The plaintiff
    was familiar with the product but did not flush out the drain
    before using the cleaner. The amount of the claim was $40,000.

    FIGURE 3. Description of Drain Cleaner Case
    (Note: the model actually used a much more detailed description
    of the case than is shown here.)

It was determined that the defendant was partially liable for damages under the theory of comparative negligence, with the amount of liability lying somewhere between $21,000 and $29,000. The case was valued between $35,000 and $41,000. After the definition of strict liability was modified to state that the product must be unreasonably dangerous for strict liability to apply, the defendant was found to be not liable. In this prototype implementation of LDS a somewhat more restrictive ROSIE rule syntax was used than is shown in Figure 1.

    FIGURE 4. Inference Process for Drain Cleaner Case (crosshatched
    area shows inference before law change). [The diagram itself did
    not survive reproduction; its legible node labels are: defendant
    manufactured product; product was defective; product not
    unreasonably dangerous; use was reasonable; use was foreseeable;
    no strict liability; victim was not a minor; victim knew of
    hazard; location not dangerous; comparative negligence; victim's
    responsibility = .4; defendant's liability = .6; medical expenses
    were $6136; lost 228 working days; base pay of $47 per day; total
    amount of loss is between $35,000 and $41,000.]
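The legislative-change experiment just described amounts to swapping one doctrine definition and re-running the same case facts. A hypothetical Python sketch of the idea (the condition set for strict liability is assumed here, since Figure 1's antecedent did not survive reproduction, and the predicates are crude stand-ins for LDS's much richer case description):

    # Illustrative sketch (not ROSIE) of testing a legislative change:
    # the same case is run against two alternative definitions of
    # strict liability, mirroring the drain cleaner experiment above.

    case = {"product-was-defective": True,
            "use-was-reasonable": True,
            "use-was-foreseeable": True,
            "product-unreasonably-dangerous": False}

    def strict_liability_old(c):
        # Assumed antecedent, for illustration only.
        return (c["product-was-defective"] and c["use-was-reasonable"]
                and c["use-was-foreseeable"])

    def strict_liability_new(c):
        # Modified doctrine: the product must also be unreasonably
        # dangerous for strict liability to apply.
        return strict_liability_old(c) and c["product-unreasonably-dangerous"]

    for name, rule in [("old law", strict_liability_old),
                       ("new law", strict_liability_new)]:
        print(name, "-> liable" if rule(case) else "-> not liable")

In LDS itself the first run leads on past strict liability into comparative negligence and damage valuation, as Figure 4 indicates; the sketch shows only the doctrine-swapping mechanism.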
Rules of this    sort    can be organized as relatively unordered sets    that are processed with a simple    control    scheme.    Most    of    the    action takes place in calls to other    rule sets representing    definitions    of terms used by    the initial set.    This simple control structure fa-    cilitates rule modification    and explanation.    In this application    area improved methods    are    needed for dealing with vague or ambiguous concepts    used in the rules.    It is    sometimes    difficult    to    decide whether or not these concepts are applicable    in a particular    case, e.g., whether the use of    the    product    was actually "reasonable    and proper." Pos-    sibilities    include    gradual    refinement:    a query    scheme    involving presenting the user with increas-    ingly specific sets of questions, each of which may    have    ambiguous    terms that will be further refined    by even more specific    query    lists,    and    analogy:    displaying    case histories involving similar proto-    typical concepts and having the user select the one    closest to the term in question.    ACKNOWLEDGMENTS    This work has been supported by a    grant    from    the    Civil    Justice    Institute at the Rand Corpora-    tion, Santa Monica, California.    [71    [al    [91    (101    [Ill    [=I    Process: A Computer That Uses Regulations    and    Statutes to Draft Legal Documents."    American    Bar    Foundation    Research    Journal,    No.    1,    (1979) 3-81.    --    Hafner, C.D., "Representation    of Knowledge    in    a Legal    Information    Retrieval    System"    In    Research and Development    in    Information    Re-    trieval.    Proceedings    of the Third Annual SI-    GIR Conference,    Cambridge, England, 1980.    Waterman, D.A., Anderson,    R.H.,    Hayes-Roth,    F    s:;. ,    Klahr, P., Martins, G., and Rosenschein,    Design of 2    Rule-Oriented    System    for    Implementing    Expertise.    -    N-~~s~-ARPA,    The    Rand Corporation,    Santa    Monica,    California,    1979.    Anderson, R.H., and Gillogly, J.J., Rand    In-    -    -    telligent    Terminal Agent (RITA): Design Phi-    losophy.    R-1809-ARPA,    The Rand    Corporation,    Santa Monica, California,    1976.    Shortliffe, E.H., Computer-Based    Medical Con-    ---    sultations:    MYCIN.    New York: American El-    sevier, 1976.    Waterman, D.A., and Hayes-Roth,    F.    Pattern-    Directed    Inference    Systems.    New    York:    Academic Press, 1978.    Sussman, G.J.    "Electrical Circuit Design:    A    Problem    for    Artificial    1977    Intelligence    Research."    In Proceedings    of the 5th    Inter-    ---    -    national    Joint    Conference on Artificial    In-    telligence,    Cambridge,    Massachusetts,    1977,    894-900.    REFERENCES    [II    [21    [31    [41    [51    [61    Waterman, D. A., "User-Oriented    Systems    for    Capturing Expertise:    A Rule-Based Approach."    In D. Michie (ed.) Expert Systems in the    Mi-    --    -    cro    Electronic    &.    Press, 1979.    Edinburgh    University    Jones,    S.,    Mason    P.J.,    &    Stamper,    R.K.    "LEGOL-2.0:    A    Relational    Specification    Language for Complex Rules." Information Sys-    terns, 4:4, (1979).    Meldman,    J. A.    (( A    Structural    Model    for    Computer-aided    Legal    Analysis."    Journal of    Computers and Law, 6 (1977) 27-71. 
REFERENCES

[1] Waterman, D. A., "User-Oriented Systems for Capturing Expertise: A Rule-Based Approach." In D. Michie (ed.), Expert Systems in the Micro Electronic Age. Edinburgh University Press, 1979.

[2] Jones, S., Mason, P.J., & Stamper, R.K. "LEGOL-2.0: A Relational Specification Language for Complex Rules." Information Systems, 4:4 (1979).

[3] Meldman, J. A. "A Structural Model for Computer-aided Legal Analysis." Journal of Computers and Law, 6 (1977) 27-71.

[4] Popp, W.G., and Schlink, B., "JUDITH: A Computer Program to Advise Lawyers in Reasoning a Case." Jurimetrics Journal, 15:4 (1975) 303-314.

[5] McCarty, L.T., and Sridharan, N.S., "The Representation of an Evolving System of Legal Concepts, Part One: Logical Templates." In Proceedings of the Third National Conference of the Canadian Society for Computational Studies of Intelligence, Victoria, British Columbia, 1980.

[6] Sprowl, J.A., "Automating the Legal Reasoning Process: A Computer That Uses Regulations and Statutes to Draft Legal Documents." American Bar Foundation Research Journal, No. 1, (1979) 3-81.

[7] Hafner, C.D., "Representation of Knowledge in a Legal Information Retrieval System." In Research and Development in Information Retrieval, Proceedings of the Third Annual SIGIR Conference, Cambridge, England, 1980.

[8] Waterman, D.A., Anderson, R.H., Hayes-Roth, F., Klahr, P., Martins, G., and Rosenschein, S., Design of a Rule-Oriented System for Implementing Expertise. N-1158-ARPA, The Rand Corporation, Santa Monica, California, 1979.

[9] Anderson, R.H., and Gillogly, J.J., Rand Intelligent Terminal Agent (RITA): Design Philosophy. R-1809-ARPA, The Rand Corporation, Santa Monica, California, 1976.

[10] Shortliffe, E.H., Computer-Based Medical Consultations: MYCIN. New York: American Elsevier, 1976.

[11] Waterman, D.A., and Hayes-Roth, F. Pattern-Directed Inference Systems. New York: Academic Press, 1978.

[12] Sussman, G.J. "Electrical Circuit Design: A Problem for Artificial Intelligence Research." In Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, 1977, 894-900.
 | 
	1980 
 | 
	32 
 | 
					
26 
							 | 
EXPLOITING A DOMAIN MODEL IN AN EXPERT SPECTRAL ANALYSIS PROGRAM

David R. Barstow
Schlumberger-Doll Research, Ridgefield, Connecticut 06877

ABSTRACT

Gamma ray activation spectra are used by nuclear physicists to identify the elemental composition of unknown substances. Neutron bombardment causes some of the atoms of a sample to change into unstable isotopes, which then decay, emitting gamma radiation at characteristic energies and intensities. By identifying these isotopes, the composition of the original substance can be determined. GAMMA is an expert system for performing this interpretation task. It has a detailed model of its domain and can exploit this model for a variety of purposes, including ratings for individual isotopes and elements, ratings based on multiple spectra, complete interpretations, and even calibration. GAMMA's performance is generally quite good when compared with human performance.

1. INTRODUCTION

Gamma ray spectra are commonly used by nuclear physicists to identify the elemental composition of a substance. One kind of gamma ray spectrum (an "activation spectrum") is produced by bombarding a sample of the substance with neutrons. This causes certain changes in some of the atoms in the sample, many of which result in unstable isotopes that then begin to decay. As they decay, the unstable isotopes emit gamma rays at characteristic energies and intensities. By measuring these, the unstable isotopes (and from these, the elements of the original sample) can be identified. For example, Figure 1 shows such a spectrum, with peaks identified and labeled by a physicist. In this case, the peaks were produced by emissions from the isotopes Na-24, Cl-37, and S-37; the original substance was a sample of salt.

An expert system, called GAMMA, has been developed to perform this task, and GAMMA's performance compares well with human interpreters. The basic strategy employed in developing GAMMA was to develop a detailed model of the domain and then to exploit this model for a variety of tasks and situations. Early work on GAMMA was discussed in another paper [1]; in this paper, recent progress will be discussed.

2. The Domain Model

GAMMA's domain model was described in detail in the earlier paper and will only be summarized here. Basically, the process that produces gamma ray activation spectra can be seen at six different levels as follows:

    Figure 1: Gamma Ray Activation Spectrum

(1) elements in the original sample
(2) isotopes in the original sample
(3) isotopes after bombardment by neutrons
(4) decays of unstable isotopes
(5) gamma ray emissions during decay
(6) gamma ray detections during decay

Level 6 represents the actual spectrum and level 1 represents the ultimate goal of the interpretation.
Level 3 is a convenient intermediate level used by most practicing nuclear physicists.

GAMMA's use of this domain model involves considering hypotheses at each of the levels. A hypothesis consists of a set of triples, each triple consisting of an object appropriate to the level (e.g., a naturally occurring isotope for level 2; gamma ray emissions at a particular energy for level 5), an estimated concentration for the object (e.g., the number of decays of a given unstable isotope for level 4), and a label which encodes the path from level 1 to the triple (e.g., "NA-23/NG/NA-24", denoting that the unstable isotope Na-24 was produced when the naturally occurring isotope Na-23 underwent an n-γ transition).

Relationships between levels can be expressed in terms of several formulae that have been derived from both theoretical considerations and empirical observations. These formulae involve such parameters as the likelihood of particular isotopic transitions during neutron bombardment, the half-lives of unstable isotopes, and the characteristic gamma ray energies and intensities for different isotopes. Nuclear physicists consult published reports when they need such information; the data from one such source [2] has been converted into a LISP data base for GAMMA's use. Further details of the formulae and data base are available elsewhere [1].

In GAMMA's case, the formulae are all used predictively; that is, given an hypothesis at one level, the appropriate formula can be used to predict hypotheses at the next level down. By chaining such predictions together, GAMMA can go from hypothetical interpretations at levels 1 or 3 down to predicted gamma ray detections that can be compared against spectral data.
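The shape of this prediction chain can be suggested with a short sketch. GAMMA itself is a LISP program, and the emission data and detector efficiency below are crude stand-ins for its data base and formulae; the Na-24 line energies are representative published values used here only for illustration:

    # Illustrative sketch of chaining level-to-level predictions.

    # A level-4 hypothesis: (object, concentration, label)
    hypothesis = ("Na-24", 1.0e6, "NA-23/NG/NA-24")  # decays of Na-24

    # Stand-in data base entry: characteristic emission lines of Na-24
    # (energy in keV, relative intensity).
    emissions = {"Na-24": [(1368.6, 1.00), (2754.0, 0.99)]}

    def predict_level5(hyp):
        """Level 4 -> level 5: predicted gamma ray emissions."""
        isotope, n_decays, label = hyp
        return [(energy, n_decays * intensity, label)
                for energy, intensity in emissions[isotope]]

    def predict_level6(level5, efficiency=0.05):
        """Level 5 -> level 6: predicted detections, scaled by a crude
        detector efficiency; these are compared to the spectrum."""
        return [(e, counts * efficiency, label)
                for e, counts, label in level5]

    print(predict_level6(predict_level5(hypothesis)))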
3. Applications of the Domain Model

The accuracy with which predictions can be made and the high resolution of this particular detector enable GAMMA to exploit the domain model in a variety of tasks and situations. Some of these were discussed earlier [1], and will be mentioned here only briefly.

3.1. Isotopic and Elemental Ratings

GAMMA was first used to "rate" the likelihood that any particular unstable isotope was present after neutron bombardment. This was done by hypothesizing decays of that isotope at level 4, predicting detections at level 6, and using an evaluation function to compare the predictions with the spectral data. The evaluation function was designed to take positive and negative evidence into account, allowing both for background radiation and for noise and errors in the prediction and detection processes. The peaks were individually rated for both energy and intensity, and the final rating was the average of the individual ratings, weighted by intensity (i.e., stronger peaks were more important). When a predicted peak had a corresponding detected peak, a positive rating was given; when no peak was detected, a negative rating was assessed, unless the predicted intensity was low enough that the peak could have been obscured by background radiation. Noise and errors were taken into account by using what we call the trapezoidal rule. For example, the trapezoidal rule for peak energies is shown in Figure 2. If a peak was predicted at energy E, then a detected peak within the range (E-δ1, E+δ1) was considered a perfect match, peaks outside the range (E-δ2, E+δ2) were not matched at all, and peaks in the ranges (E-δ2, E-δ1) and (E+δ1, E+δ2) were scaled to provide a continuous function. Such trapezoidal rules were used throughout GAMMA's evaluation function, and the approach has proved quite adequate. GAMMA's performance at the isotopic rating task was moderately good compared with that of human experts: although it gave high ratings to isotopes identified by experts, it also occasionally gave such ratings to isotopes considered implausible by the experts.

GAMMA's second task was to do a similar rating for elements in the original sample (i.e., hypotheses at level 1). The same predict-and-match technique was used, and GAMMA's performance was again moderately good, although not quite as good as in the isotopic case: fewer implausible elements were rated but some elements identified by the human experts received low ratings. This was due largely to certain simplifying assumptions in the formulae relating levels 2 through 4. Further details of GAMMA's rating scheme are given elsewhere [1].

Recently, GAMMA's repertoire has been expanded to include several other tasks, and its performance seems to have improved with age.

    Figure 2: Trapezoidal Rule for Peak Energies
    (axes: rating, from -1.0 to 1.0, vs. energy)
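A sketch of one such trapezoidal rating function, consistent with the description above (the δ values here are arbitrary illustrations, not GAMMA's settings, and the match score is normalized to [0, 1] rather than GAMMA's signed rating):

    # Trapezoidal rating for peak energies, as described above: a
    # perfect match within E +/- d1, no match outside E +/- d2,
    # linear and continuous in between.

    def trapezoidal_rating(predicted_e, detected_e, d1=1.0, d2=3.0):
        diff = abs(detected_e - predicted_e)
        if diff <= d1:
            return 1.0                      # perfect match
        if diff >= d2:
            return 0.0                      # no match at all
        return (d2 - diff) / (d2 - d1)      # scaled in between

    print(trapezoidal_rating(1368.6, 1369.0))   # -> 1.0
    print(trapezoidal_rating(1368.6, 1370.5))   # -> 0.55

The same shape applies to peak intensities, and the design choice is the point of the maxim in the next section: a small measurement error moves the rating a little, never from full credit to none.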
3.2. Ratings for Multiple Spectra

GAMMA's next major task was to do similar ratings for individual isotopes and elements, but to do so on the basis of multiple spectra: in a typical experimental situation, not one but several spectra are recorded, each for a different time interval. Generally, the first few spectra are for comparatively short time periods (10 to 30 seconds), and the later spectra may be for periods as long as several hours. The primary advantage of multiple spectra is that they permit greater use of half-life information: unstable isotopes with short half-lives will appear on the earlier spectra but not on the later ones; isotopes with longer half-lives emit gamma rays at roughly constant rates, so they appear most distinctly on the later spectra (for which the detection time is longer).

The technique used by GAMMA is to find the isotopic (or elemental) ratings for the individual spectra, use these to hypothesize an initial concentration for the isotope, redo the predictions based on this concentration, and finally combine the ratings for these predictions into a single overall rating. The hypothetical concentration for an isotope is determined by considering all spectra for which the isotope's rating is sufficiently high, taking the concentrations (byproducts of the original prediction-and-match rating) that agree sufficiently well (within one order of magnitude), and then computing the average. This technique is designed to ignore those results which, for any of several reasons, deviate from the norm, and in practice seems to work quite well. Given this hypothesized concentration, the prediction-and-match rating is again computed for each individual spectrum. These ratings are then averaged to determine the overall "multiple spectra" rating for the isotope.

In our first attempt to average these ratings, we weighted them by the total predicted intensity for a spectrum, as was done for ratings within individual spectra. But this seemed to attach too much weight to spectra with high predicted intensities, so on our second attempt we took the simple average of the ratings for all spectra for which the evidence was significant (either positive or negative), and the results were much better. (Interestingly, the first symptom of this problem was due to an INTERLISP error: under certain circumstances, INTERLISP ignores floating point overflow and underflow, thereby producing a very large number when multiplying two very small ones. With simple averaging, such isolated erroneous computations no longer have much overall effect. In fact, we now take this as a maxim: no numeric rating scheme should depend too heavily on any single data point.)

GAMMA's performance on multiple spectra is generally much better than on individual spectra, primarily because of the value of half-life information. GAMMA's ratings generally compare well with those of human experts, and implausible isotopes (or elements) are only rarely given high ratings.
3.3. Producing a Complete Interpretation

The major problem with the tasks described so far is that the ratings are given to isotopes and elements as if they were totally independent of each other. The fact that the same peak may be caused by emissions from two different isotopes does not detract from the rating of either one. The ultimate interpretation of spectral data should not be ratings for individual elements, but rather a set of elements (and concentrations) which, taken together, explain the data well. A first pass at coming up with such a complete interpretation might be to take all and only those elements with sufficiently high ratings, but that does not take into account the interaction between the elements, and is simply inadequate. GAMMA's solution to this problem is essentially a hill-climbing algorithm designed to maximize an "interpretation measure". For this algorithm, a complete interpretation is defined to be a set of <element, concentration> pairs, and a mapping of detected peaks to sets of labels. (The labels describe the path from one of the <element, concentration> pairs to the detected peak. Under this definition, the same detected peak may have several different labels, a situation which actually occurs in the spectra under consideration.)

The interpretation measure that GAMMA currently uses is based on two different considerations. First, the individual spectra are rated in terms of (1) how many peaks have no labels (i.e., are there peaks which are not explained by the interpretation?), (2) how many labels have no peaks (i.e., are there predictions which do not appear in the detected spectra?), and (3) how well the peaks and associated labels match (i.e., do the energies and intensities of the detected peaks match well with the energies and intensities predicted for the associated labels?). The second consideration is that the relative concentrations of the elements be plausible. This is used only as negative evidence: if the concentration of an element is high (relative to the concentrations of the other elements), but the rating for that element is low, then the interpretation is suspect, since the detector and model can be expected to be quite accurate for relatively pure substances; if the concentration is below a certain threshold, then the interpretation is also suspect, since the detector simply cannot be expected to find elements in such small concentrations. The task is thus to find the set of <element, concentration> pairs that maximizes this measure.
GAMMA uses the following hill-climbing algorithm:

    INTERPRETATION := {};
    CANDIDATES := {<element, concentration> | rating is above a threshold};
    repeat
        consider all interpretations formed by moving one element
            from CANDIDATES to INTERPRETATION
            or from INTERPRETATION to CANDIDATES;
        if no such interpretation increases the measure
            then quit
            else select that which maximizes the measure;
                 update INTERPRETATION and CANDIDATES;

While we have no theoretical basis for claiming that this algorithm does, indeed, find the subset of candidates with maximal measure, our experience indicates that it performs very well, and the interpretations that GAMMA produces are quite good.
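A runnable rendition of this loop, for illustration; the toy measure below is a stub standing in for GAMMA's interpretation measure, and the candidate names are invented:

    # Runnable rendition of the hill-climbing loop above. In GAMMA the
    # candidates would be <element, concentration> pairs with ratings
    # above a threshold, and measure() the interpretation measure.

    def hill_climb(candidates, measure):
        interpretation, pool = set(), set(candidates)
        while True:
            best_move, best_score = None, measure(interpretation)
            # Consider moving one element in either direction.
            for elem in list(pool):
                score = measure(interpretation | {elem})
                if score > best_score:
                    best_move, best_score = ("add", elem), score
            for elem in list(interpretation):
                score = measure(interpretation - {elem})
                if score > best_score:
                    best_move, best_score = ("drop", elem), score
            if best_move is None:
                return interpretation   # no move increases the measure
            kind, elem = best_move
            if kind == "add":
                pool.remove(elem); interpretation.add(elem)
            else:
                interpretation.remove(elem); pool.add(elem)

    # Stub measure: reward explaining peaks, penalize unmatched labels.
    peaks = {"Na-24", "Cl-38"}
    def toy_measure(interp):
        return 2 * len(interp & peaks) - len(interp - peaks)

    print(hill_climb({"Na-24", "Cl-38", "Fe-59"}, toy_measure))

Because elements can be dropped as well as added, the loop can back out of an inclusion that looked good in isolation but hurts the joint explanation, which is exactly the interaction effect that motivated the algorithm.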
4. Shallow and Deep Domain Models

GAMMA's success is due largely to its use of a relatively detailed model of its domain. This may be compared with systems such as MYCIN [3], whose success is due largely to shallow models that encode, in a sense, an expert's compiled version of a more detailed model that the expert may or may not know explicitly. In comparing these two approaches, several observations can be made. First, while a deep model can be put to great use (as it was in GAMMA), there are several circumstances in which a shallow domain model is necessary: (1) a deep model doesn't exist (e.g., there is no computational theory of the entire human body); and (2) a deep model is not computationally feasible (e.g., one cannot hope to do weather prediction based on the fundamental properties of gases). Second, although shallow models will often suffice, it seems likely that future expert systems based on shallow models will require access to deep models for difficult cases. Third, "deep" and "shallow" are obviously relative terms: GAMMA's deep model is shallow when viewed as a model of subatomic physics. The relationship between deep and shallow models seems to be an important topic for future work on expert systems.

5. References

[1] Barstow, D.R. "Knowledge engineering in nuclear physics." In Sixth International Joint Conference on Artificial Intelligence, pages 34-36. Stanford Computer Science Department, 1979.

[2] Erdtmann, G. Neutron Activation Tables. Verlag Chemie, New York, 1976.

[3] Shortliffe, E.H. MYCIN: Computer-Based Medical Consultations. American Elsevier, 1975.
 | 
	1980 
 | 
	33 
 | 
					
27 
							 | 
Project EPISTLE: A System for the Automatic Analysis of Business Correspondence

Lance A. Miller
IBM Thomas J. Watson Research Center
Yorktown Heights, New York 10598

ABSTRACT: The developing system described here is planned to provide the business executive with useful applications for the computer processing of correspondence in the office environment. Applications will include the synopsis and abstraction of incoming mail and a variety of critiques of newly-generated letters, all based upon the capability of understanding the natural language text at least to a level corresponding to customary business communication. Successive sections of the paper describe the Background and Prior Work, the planned System Output, and Implementation.

I. BACKGROUND AND PRIOR WORK

We conclude from these behavioral findings that there are indeed extensive regularities in the characteristics of business letters, determined primarily by the purpose objectives. It is these constraints that most strongly indicate to us the feasibility of developing automatic means for recognizing content-themes and purposes from the letter text (as well as the converse, generating letter text from information about purposes).

Other analyses have been undertaken to estimate the linguistic complexity and regularities of the texts. The average letter appears to contain 8 sentences, with an average of 18 words each; in the 400 letter-bodies there are roughly 57,900 words and 4,500 unique words total. An ongoing hand analysis of the syntactic structure of sentences in a 50-letter sample reveals a relatively high frequency of subject-verb inversions (about 1 per letter) and complex lengthy complementizers (1-4 per letter). These features, along with very frequent noun phrase and sentence coordination, accompanied by a wide variety of grammatical but unsystematic structure deletions, indicate an exceptionally high level of grammatical complexity in our texts. With respect to overall text syntax we have analyzed 10 letters for text cohesion, using a modification of Halliday and Hasan's coding scheme [4]; 82 percent of the instances of cohesion detected were accounted for by 4 categories: lexical repetitions (29%), pronouns (28%), nominal substitutions (9%, e.g., "one", "same"), and lexical collocations (words related via their semantics, 16%). In an extension of this discourse structure analysis we are analyzing 50 letters, coding all occurrences of functional nouns in terms of (1) the grammatical case function served and (2) the cohesive relation to prior nouns. Preliminary results indicate consistent patterns of case-shift and type of cohesion as a function of the pragmatic and content themes.
The results of these linguistic analyses will help determine the strategy ultimately adopted for selecting surface parses and meaning interpretations.

II. SYSTEM OUTPUT

The planned system will provide the following for each letter in our database: (1) surface syntactic parses for each sentence; (2) meaning interpretations for each sentence, adjusted to the context of prior sentences; (3) a condensed synopsis of the overall meaning content of the letter; (4) a critique of each letter's spelling, punctuation, and grammaticality; (5) a mapping of the meaning content onto common business communication themes; and (6) some characterization of the author's style and tone. In addition to the above, we plan to develop a limited facility to generate short letters of a certain type (e.g., information requests) conforming to a particular author's normal "style" and "tone".

III. IMPLEMENTATION COMPONENTS

A. Semantic Representations: Many of the lexical items in our texts appear to have two or more literal word-sense usages (as well as occasional non-literal ones); it also appears that the discriminating semantic features among highly-related lexical items cannot be ignored if the intended letter nuances are to be preserved. We therefore do not expect much reduction in cardinality when mapping from the lexical to the concept space; we also anticipate that our representations will have to be unusually rich, in terms of both a large number of features distinguishing the concepts underlying lexical items and the capability to relate different concepts together. Among the most important anticipated semantic features are those describing the preconditions and consequences of ACTIONS and those characterizing the internal states of ACTORS (e.g., their intentions, expectations, and reactions).

B. Parsing: We will employ a system called NLP as the basic "operating system" for our application development. This system has been used for other understanding projects and provides all of the general components required, including a word-stem dictionary, a parser, knowledge representation, and natural language generation [2, 5]. In particular, the parser proceeds left to right, character by character, in processing a sentence, generating all possible descriptions of text segments in a bottom-up fashion by application of rules from an augmented phrase structure grammar -- essentially a set of context-free rules augmented with arbitrary conditions and structure-building actions. In writing the grammar we are attempting to keep to an absolute minimum the use of semantic information, to increase the applicability of the parser over a variety of semantic domains.
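As an illustration of the kind of rule involved (not EPISTLE's actual notation, which the paper does not give), an augmented phrase structure rule can be thought of as a context-free rewrite plus an arbitrary condition test and a structure-building action; a minimal Python rendering:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        """A context-free rule augmented with a condition and an action."""
        lhs: str             # constituent produced, e.g. "NP"
        rhs: tuple           # constituents consumed, e.g. ("DET", "NOUN")
        condition: Callable  # arbitrary test on the matched segments
        action: Callable     # builds the structure for the new node

    # Hypothetical example: NP -> DET NOUN, requiring number agreement.
    np_rule = Rule(
        lhs="NP",
        rhs=("DET", "NOUN"),
        condition=lambda det, noun: det["number"] == noun["number"],
        action=lambda det, noun: {"cat": "NP", "head": noun, "det": det,
                                  "number": noun["number"]},
    )

    det = {"cat": "DET", "word": "these", "number": "plural"}
    noun = {"cat": "NOUN", "word": "letters", "number": "plural"}
    if np_rule.condition(det, noun):
        node = np_rule.action(det, noun)  # bottom-up: build the NP description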
Our general strategy to minimize the number of surface parses (and increase parsing efficiency) is to attach lower syntactic constituents (e.g., post-nominal prepositional phrases) to the highest-possible unit (e.g., to the verb-phrase rather than the noun-phrase), with the final decision as to the most appropriate attachment to be resolved by the interpretation components.

C. Meaning-Assignment: Our planned strategy for choosing the meaning to assign to a sentence is basically to find that action-concept whose case-relations are most completely "satisfied" by all of the concepts implied by the sentence. In the expected frequent case of only partial "fits" to several action-concepts, preference among these will be based on several factors, including: (1) the number of case relations of a concept "filled" or "unfilled" by elements of the present (or prior) text, and the relative importance in the intensional definition of each of these; (2) the "directness" of the mappings of text segments to underlying concepts; and (3) the syntactic structure of the sentence (e.g., syntactically "higher" components usually will be preferred to "lower" ones).

D. Text-Interpretation: We propose to keep separate lists of each action-concept and entity-concept encountered in the text. Following the meaning-assignment to a sentence, the sentence will be re-examined to determine if it supplies qualification information for any prior-mentioned action or entity; if so, these separate representations will be so augmented, a process called "updating". Statistics of each such updating of information will be kept for each sentence for subsequent characterization of style. Next, these separate entity/action representations will be examined directly to determine whether they can be combined as elements of some broader concept. By this process we will therefore be able to update and condense our representations as we go along, facilitating eventual synopsis and abstraction of content-themes.

In addition to the above semantic interpretations for the complete text, we will also build up a composite representation of the syntactic structure of the text. We are, first, hopeful of being able to discover a relatively small number of schemas for characterizing syntactic structures within sentences; we then believe that the syntax of letters can be accounted for in terms of frequently occurring patterns of these schemas.

E. Adaptation: As a unique feature, we plan to implement the capability to dynamically modify or adapt our system so as to change the manner in which word-senses are selected or meanings assigned as a function of the system's experience with various kinds of texts.
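A toy sketch of the case-relation scoring idea in section C; the weights and the scoring formula are assumptions, since the paper does not commit to a particular formula:

    def fit_score(action_concept, text_concepts):
        """Score how completely a sentence's concepts satisfy an
        action-concept's case relations; the 0.5 penalty is hypothetical.

        `action_concept` maps case names (AGENT, OBJECT, ...) to their
        importance in the concept's intensional definition;
        `text_concepts` maps case names to the text segments filling them.
        """
        filled = sum(w for case, w in action_concept.items()
                     if case in text_concepts)
        unfilled = sum(w for case, w in action_concept.items()
                       if case not in text_concepts)
        return filled - 0.5 * unfilled  # penalize important unfilled cases

    # Hypothetical concepts competing for "We request a copy of the report."
    REQUEST = {"AGENT": 1.0, "OBJECT": 1.0, "BENEFICIARY": 0.3}
    SEND = {"AGENT": 1.0, "OBJECT": 1.0, "DESTINATION": 0.8}
    found = {"AGENT": "we", "OBJECT": "a copy of the report"}

    best = max([REQUEST, SEND], key=lambda c: fit_score(c, found))  # REQUEST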
This would be accomplished by assigning "weight" attributes to each lexical item and to each underlying concept (and its attribute-values); the weight-values along the path finally selected for mapping text to concepts would then be incremented by some amount. Given that preference ordering of text-to-concept paths is determined by such overall path-weights, the system could thus achieve self-adaptation to the word-usages of particular application domains. This facility could also be employed to characterize individual authors' styles.

F. Abstraction and Critique of Letters: Concerning Content-Themes and Purposes, we plan to map the system's meaning interpretations onto a set of common business content-themes and communication purposes, and we are presently conducting behavioral and analytical studies to determine these. With respect to Grammaticality, we anticipate being able to detect incomplete sentences, subject-verb disagreements, and inappropriate shifts in verb tenses; in addition, we will be able to identify ambiguities and some instances of clearly "awkward" syntax. Spelling errors of the "non-word" type are easily caught, and certain spelling errors in which the misspelled word is in the dictionary may also be caught if they contain sufficient syntactic information. In addition, some fraction of "spelling" errors involving semantically inappropriate words should be detectable. Finally, we may be able to discover a number of punctuation errors.

The last aspect of critiques is that of style and tone. We are aware of the several "indices" for measuring various aspects of these but consider them to be at best very crude indicators [6]. As a starting point we have identified five dimensions for each concept, and we will implement the capability to assess texts on these dimensions until we are better informed. For Style, defined as "the organizational strategy for conveying content", the dimensions are: sentence precision, sentence readability, reference clarity, information-value, and cohesion. Tone, defined as "the connotations of interpersonal attitudes", is to be rated on the dimensions of: personal-ness, positive-ness, informal-ness, concrete-ness, and strength. We plan to output and highlight those text segments which fall below a certain level of acceptability on these measures.
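A minimal sketch of the adaptation mechanism in section E, with an assumed flat increment (the paper leaves the amount unspecified):

    class AdaptiveLexicon:
        """Weights on lexical items and concepts, bumped along chosen paths."""

        def __init__(self, increment=1.0):
            self.weights = {}  # item or concept name -> weight
            self.increment = increment

        def path_weight(self, path):
            # Preference ordering of text-to-concept paths uses total weight.
            return sum(self.weights.get(node, 0.0) for node in path)

        def reinforce(self, chosen_path):
            # Increment every node on the path finally selected.
            for node in chosen_path:
                self.weights[node] = self.weights.get(node, 0.0) + self.increment

    # Usage: pick the heaviest path, then reinforce it; future texts in
    # this domain will now prefer the reinforced word-sense.
    lex = AdaptiveLexicon()
    paths = [("order", "REQUEST-ACT"), ("order", "SEQUENCE")]
    best = max(paths, key=lex.path_weight)
    lex.reinforce(best)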
REFERENCES

[1] Miller, L. A. "Behavioral studies of the programming process." IBM Research Report RC 7367, 1978.

[2] Heidorn, G. E. "Augmented phrase structure grammars." In Nash-Webber, B.L. and Schank, R.C. (Eds.), Theoretical Issues in Natural Language Processing. Association for Computational Linguistics, 1975.

[3] Miller, L. A. and Daiute, C. "A taxonomic analysis of business letters." IBM Research Report, in preparation, 1980.

[4] Halliday, M. A. K. and Hasan, R. Cohesion in English. London: Longman Group Ltd., 1976.

[5] Heidorn, G. E. "Automatic programming through natural language dialogue: A survey." IBM Journal of Research and Development, 1976, 20, 302-313.

[6] Dyer, F. C. Executive's Guide to Effective Speaking and Writing. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1962.
 | 
	1980 
 | 
	34 
 | 
					
28 
							 | 
A KNOWLEDGE BASED DESIGN SYSTEM FOR DIGITAL ELECTRONICS*

Milton R. Grinberg
Department of Computer Science
University of Maryland
College Park, Maryland 20742

1. Overview and Goals of the SADD System

Consider the goal of a human expert, such as "build a digital display interface into a [...]". This is a problem solving activity in which the expert's general purpose problem solving abilities interact with his rich knowledge of the digital world. By translating the initial ideas through successively more refined "sketches", the human expert gradually arrives at a chip-level digital schematic that realizes the initial goals, and which may have bugs that will only be discovered after the circuit has been simulated or built and tested.

The Semi-Automatic Digital Designer (SADD) is an experimental interactive knowledge-based design system whose domain of expertise is digital electronics. The SADD project has two goals. First, I want to provide an intermediate digital design problem solver for which a human expert can interactively provide the high-level functional description of a circuit and which can take the high-level circuit description and refine it into a circuit schematic that performs the required task. Second, I am attempting to discover and express in computer form the knowledge of an expert in digital design that makes him a good designer. This includes generic information for each high-level digital function (e.g., counter, clock) and the means to transform these functional descriptions into realizable circuits. As a first case study in design, I adopted a relatively sophisticated TV video display circuit called the Screensplitter. This is a real circuit of moderate overall complexity.

2. Digital Design

How does a digital designer go about the task of designing a circuit? The designer generally starts with a well-defined goal for a circuit with only an ill-defined solution for accomplishing that goal. Using his design experience, the designer can pinpoint some of the required and hence pivotal components needed in the circuit. The circuit is slowly sketched around these pivotal components. The sketching process allows the designer to specify where the inputs come from and where the outputs go. During this sketch phase, the designer often makes notes concerning characteristics of components. As the expert discovers that components are needed, he adds them to the design. Eventually the designer starts to refine each component by selecting and interconnecting chips to implement the component.

What are the "primitives" that a designer uses? At the implementation (i.e., final circuit) level there are three primitives: (1) a chip, (2) a wire, and (3) a signal. A chip's outputs are defined by its current state and current inputs. A wire is a physical link defining an electrical equivalence. A signal is the electrical information present on a wire. At the sketch pad level there are two additional "primitives": (1) a function and (2) a connection.

*This research is supported by the Office of Naval Research under Grant N00014-76-C-0477.
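A rough Python rendering of these two description levels (the names and fields are illustrative, not SADD's internal representation):

    from dataclasses import dataclass, field

    # Sketch-pad level: functions joined by conditional information paths.
    @dataclass
    class Function:
        name: str     # e.g. "DCC", a counter
        kind: str     # e.g. "COUNTER", "MEMORY", "CLOCK"
        aspects: dict = field(default_factory=dict)  # characteristics

    @dataclass
    class Connection:
        source: tuple  # (function name, port) the information comes from
        dest: tuple    # (function name, port) the information goes to
        condition: str  # when information is allowed to flow

    # Implementation level: chips joined by wires carrying signals.
    @dataclass
    class Chip:
        part: str      # e.g. "74161"
        pins: dict = field(default_factory=dict)

    @dataclass
    class Wire:
        pins: list     # electrically equivalent (chip, pin) pairs

    dcc = Function("DCC", "COUNTER", {"COUNT-SEQUENCE": (0, 3519)})
    link = Connection(("DCC", "COUNT"), ("DM", "ADDRESS"), condition="T")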
A function performs some designated digital task. I have concentrated on the nine different functions (i.e., counter, shifter, memory, combinational logic, divider, selector, and clock) in the Screensplitter. These are the components that a designer uses in the sketch phase. A connection is an information path between two functions. It identifies where information flows and when it is allowed to flow.

3. SADD Organization

There are three distinct design phases in the SADD system.

In the specification acquisition phase, the user describes the functional structure of the circuit in English. During this phase, frames are introduced, relevant information about the frames is filled in, and the interrelationships among the frames are established. From this description, SADD builds a model of the circuit, expressed as a semantic net.

In the circuit implementation phase, an implementation strategy for each component is selected using the conceptual function characteristics (which may have to be deduced from characteristics provided by the designer and the function's relationship to the other functions in the model). The circuit schematic is then implemented using the selected strategy and the conceptual function description.

In the circuit simulation phase, the correctness of the circuit is determined by simulating the circuit on a conceptual simulator. If the designed circuit proves not to fulfill the goal of the designer during the simulation phase, it can be modified and redesigned.

3.1. Screensplitter Circuit

In order to develop SADD, the Screensplitter was chosen as a benchmark. Fig. 1 illustrates one of the 12 logical components (the Display Character Counter) from the original Screensplitter circuit schematic.

Fig. 1. Schematic for the DCC Counter

From the circuit schematic a verbal design scenario was developed that describes the functional components, the characteristics of these functional components, and the interconnections. Fig. 2 shows the relevant portion of the input description for the Display Character Counter. The complete scenario is 41 sentences.

1. THE DISPLAY CHARACTER COUNTER (DCC) COUNTS FROM 0 TO 3519.
2. WHEN THE COUNT OF THE SLC EQUALS 4, THE COUNT OF THE DCC CAPTURED-IN THE LOAD OF THE DCC-BASE REGISTER (DCCBR) WHEN THE HORIZONTAL BLANK (HBLANK) BEGINS.
3. THE DCC CAPTURED-FROM THE DCC-BASE REGISTER (DCCBR) WHEN HORIZONTAL BLANK (HBLANK) BEGINS.
4. EACH-TIME THE COUNT OF THE PIXEL COUNTER (PC) EQUALS 5 ENDS, THE DCC INCREMENTS.
5. THE COUNT OF THE DCC CAPTURED-IN THE ADDRESS OF THE DISPLAY MEMORY (DM).

Fig. 2. Scenario for the DCC Counter

3.2. Parser

A phrase-keyword parser using procedurally encoded case-frameworks for the verbs was developed to interpret the input. Phrase-keyword means that the parser is always trying to build up one of seven phrase types that are common in the digital design world. It uses the keywords to identify the beginnings and endings of phrases. After a sentence has been parsed into phrases, the procedure associated with the verb in the sentence is applied.
This processing first verifies that the sentence is semantically acceptable (that the phrase types are legitimate for the verb). Then those circuit objects not currently in the circuit model are introduced into the model, and the verb's manipulations of the model are processed.

The parser uses 5 directives to manipulate the model. These directives either add new information to the model or interrelate existing parts of the model. The directives are:

(a) Specify a function - a function (e.g., clock, counter) is introduced into the model.
(b) Assign a value to a function's aspect - one of the function's characteristics is assigned a value.
(c) Define a conceptual signal - a global name is assumed to be a port of either a known or unknown function.
(d) Define the source of a conceptual signal - the source function of a global signal is identified.
(e) Make a connection - identify an information path between two functions and a condition gating the information flow.

3.3. Example - Specification Acquisition

To illustrate the specification acquisition phase, the five sentences shown above for the Display Character Counter are reduced to their effects on the model. The effects for each sentence are preceded by a letter which references the corresponding directive type from the above list of directives.

1. THE DISPLAY CHARACTER COUNTER (DCC) COUNTS FROM 0 TO 3519.
(a) Introduce a COUNTER function named DISPLAY CHARACTER COUNTER (DCC).
(b) Assign to the aspect COUNT-SEQUENCE the value (0 3519).

2. WHEN THE COUNT OF THE SLC EQUALS 4, THE COUNT OF THE DCC CAPTURED-IN THE LOAD OF THE DCC-BASE REGISTER (DCCBR) WHEN THE HORIZONTAL BLANK (HBLANK) BEGINS.
(c) Define a conceptual signal named HORIZONTAL BLANK (HBLANK).
(e) Make a connection between (DCCBR LOAD) and (DCC COUNT) under the condition (AND (HIGH SLC DS4) (RISING UNKNOWN HBLANK)).

3. THE DCC CAPTURED-FROM THE DCC-BASE REGISTER (DCCBR) WHEN HORIZONTAL BLANK (HBLANK) BEGINS.
(e) Make a connection between (DCC LOAD) and (DCCBR READ) under the condition (RISING UNKNOWN HBLANK).

4. EACH-TIME THE COUNT OF THE PIXEL COUNTER (PC) EQUALS 5 ENDS, THE DCC INCREMENTS.
(a) Introduce a COUNTER function named PIXEL COUNTER (PC).
(e) Make a connection between (DCC COUNT-UP) and T under the condition (FALLING PC DS5).

5. THE COUNT OF THE DCC CAPTURED-IN THE ADDRESS OF THE DISPLAY MEMORY (DM).
(a) Introduce a MEMORY function named DISPLAY MEMORY (DM).
(e) Make a connection between (DM ADDRESS) and (DCC COUNT).

The model of the circuit after just these five sentences are processed is shown in Fig. 3.

Fig. 3. State of the Model - (1, 2, 3, 4, and 5)

After this description, the DCC has been completely specified, and its implementation is discussed in the next section.

3.4. Function Frames

Each function type has an associated frame structure that provides the prototypical knowledge about that function. There are two components to the function frame: the ASPECT and the CONPT. The ASPECT identifies the important characteristics of the function which might be mentioned by the user in the input scenario. The CONPT identifies the ports (the input and output lines) that are associated with the function. The width of a CONPT is also identified, as either 1 or the value of one of the ASPECTs of the function.
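A small sketch of such a prototype frame and its instantiation, patterned on the DCC example that follows (the copying details are assumed):

    # Prototype frame for a COUNTER: ASPECTs (characteristics the user may
    # mention) and CONPTs (ports, each of width 1 or the value of an ASPECT).
    COUNTER_PROTOTYPE = {
        "ASPECTS": ["WIDTH", "OUTPUT-CODING", "COUNT-SEQUENCE",
                    "COUNT-DIRECTION", "DISTINGUISHED-STATES",
                    "LOADABLE", "RESETABLE"],
        "CONPTS": {"LOAD": "WIDTH", "READ": "WIDTH", "COUNT-UP": 1,
                   "COUNT-DOWN": 1, "LOAD-LINE": 1, "CARRY": 1},
    }

    def instantiate(prototype, name):
        """Copy the prototype into the model under the function's name,
        with every ASPECT initially unfilled (NIL)."""
        return {
            "NAME": name,
            "ASPECTS": {a: None for a in prototype["ASPECTS"]},
            "CONPTS": dict(prototype["CONPTS"]),
        }

    dcc = instantiate(COUNTER_PROTOTYPE, "DCC")
    dcc["ASPECTS"]["COUNT-SEQUENCE"] = (0, 3519)  # from sentence 1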
When a particular function is introduced into the model, a copy of the prototype is instantiated with the name of the function and entered in the database. Fig. 4 is the instantiation of the DCC counter as represented in the model after sentences 1, 2, 3, 4, and 5 have been processed. The asterisked items are those that were modified during the sentence processing.

  (COMPONENT DCC SADD COUNTER)
* (TYPE DCC DEVICE (DISPLAY CHARACTER COUNTER))
* (DESC-VAL WIDTH DCC IO-2)
* (DESC-VAL COUNT-SEQUENCE DCC (0 3519))
  (ASPECT WIDTH DCC NIL)
  (ASPECT OUTPUT-CODING DCC NIL)
  (ASPECT COUNT-SEQUENCE DCC NIL)
  (ASPECT COUNT-DIRECTION DCC NIL)
  (ASPECT DISTINGUISHED-STATES DCC NIL)
  (ASPECT LOADABLE DCC NIL)
  (ASPECT RESETABLE DCC NIL)
* (CONPT LOAD DCC WIDTH (G150))
* (CONPT READ DCC WIDTH (G265 G171))
* (CONPT COUNT-UP DCC 1 (G...))
  (CONPT COUNT-DOWN DCC 1 NIL)
  (CONPT LOAD-LINE DCC 1 NIL)
  (CONPT CARRY DCC 1 NIL)

where IO-2 = 12 and the Gnnn atoms name the CONNECT forms established by the scenario sentences, e.g., G150 = (CONNECT (DCCBR LOAD) (DCC COUNT) (AND (HIGH SLC DS4) (RISING UNKNOWN HBLANK))) and G265 = (CONNECT (DM ADDRESS) (DCC COUNT) T).

Fig. 4. DCC Counter Function - (1, 2, 3, 4, and 5)

The expert has a conceptual view of the component being described. The function frame with the associated filled-in aspect values represents that conceptual view of the component.

4. Designing a Circuit for a Function

There are two phases involved in the implementation of a function. First a method for implementing the function must be selected. Then the selected method must be processed to produce the chip-level design.

4.1. Implementation Method

The first step in the selection process is to deduce values for as many of the ASPECTs associated with the function as possible, and to verify that there are no inconsistencies. The deductions made for the ASPECT values of the DCC are that it is loadable, resetable, has a binary output, and that the count direction is up. The set of implementation methods associated with the type of function is then considered.

An implementation method has 4 components: Prerequisites, Eliminators, Traits, and Procedure. Fig. 5 illustrates two implementation methods for a counter, one based on a 74161 loadable counter and the other on a 7493 counter.

(STRATEGY BIN74161 COUNTER          (STRATEGY BIN7493 COUNTER
  (PREREQUISITES                      (PREREQUISITES
    (COUNT-DIRECTION UP)                (COUNT-DIRECTION UP)
    (OUTPUT-CODING BIN))                (OUTPUT-CODING BIN))
  (ELIMINATORS )                      (ELIMINATORS (LOADABLE YES))
  (TRAITS (LOADABLE YES)              (TRAITS (RESETABLE YES))
          (RESETABLE YES))            (PROCEDURE $BIN7493))
  (PROCEDURE $BIN74161))

Fig. 5. Example Counter Implementation Methods

The Prerequisites are a list of ASPECTs and their values that must be valid for the function in order to select the method. In the BIN74161 implementation method, the function being implemented must require a binary output coding that only counts up; the same is true for the BIN7493 implementation method. The Eliminators are a list of ASPECTs and their values which, if matched by the function under implementation, eliminate that method from being selected to implement the function. The Traits are a list of ASPECTs and their values that are true if the method is selected, and a list of ports that can be explicitly available if the method is selected. The Procedure is the recipe used to build the circuit. It is represented procedurally in LISP code and is referenced by name in Fig. 5.

Each method that can be used to implement a function is processed in the selection phase by checking the Prerequisites and Eliminators. A list of all acceptable methods is compiled during the selection. If there is only one acceptable method, it is used to implement the function. If more than one method is acceptable, then the Traits of all candidates are used to find the most appropriate method. If this does not narrow the list to a single method, then one is randomly chosen or the user chooses the method. In this example for the DCC, assuming that the two implementation methods from Fig. 5 are the only available ways to implement a counter, the BIN74161 method is the only acceptable method and is selected.

The procedure associated with the selected method is then run. This procedure consults the frame associated with the function and the information involving the function's connections, and from this constructs a real digital circuit, introducing and interconnecting chips that collectively implement the function.
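A sketch of this selection phase in Python; the tie-breaking by Traits is simplified to counting matches, which the paper does not spell out:

    def acceptable(method, aspects):
        """A method is acceptable if every Prerequisite holds for the
        function and no Eliminator matches it."""
        pre_ok = all(aspects.get(a) == v for a, v in method["PREREQUISITES"])
        eliminated = any(aspects.get(a) == v for a, v in method["ELIMINATORS"])
        return pre_ok and not eliminated

    def select_method(methods, aspects):
        ok = [m for m in methods if acceptable(m, aspects)]
        if len(ok) <= 1:
            return ok[0] if ok else None
        # Simplified Trait comparison: prefer the method whose Traits
        # agree most with the function's deduced ASPECT values.
        return max(ok, key=lambda m: sum(aspects.get(a) == v
                                         for a, v in m["TRAITS"]))

    BIN74161 = {"NAME": "BIN74161",
                "PREREQUISITES": [("COUNT-DIRECTION", "UP"),
                                  ("OUTPUT-CODING", "BIN")],
                "ELIMINATORS": [],
                "TRAITS": [("LOADABLE", "YES"), ("RESETABLE", "YES")]}
    BIN7493 = {"NAME": "BIN7493",
               "PREREQUISITES": [("COUNT-DIRECTION", "UP"),
                                 ("OUTPUT-CODING", "BIN")],
               "ELIMINATORS": [("LOADABLE", "YES")],
               "TRAITS": [("RESETABLE", "YES")]}

    dcc_aspects = {"COUNT-DIRECTION": "UP", "OUTPUT-CODING": "BIN",
                   "LOADABLE": "YES", "RESETABLE": "YES"}
    chosen = select_method([BIN74161, BIN7493], dcc_aspects)  # -> BIN74161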
4.2. Implemented Circuit

After the first two phases, there are three levels in the DCC design. At the top level there is the DCC function and the connections. At the bottom level are the chips and wires used to implement the DCC. The intermediate level interfaces the other two levels by providing interconnections between chip pins or wires and the connection points on the function. Hence the hierarchy of the components is maintained and, if necessary, any circuit fragment can be altered without much effect on the rest of the circuit. The implementation for the DCC as designed by the BIN74161 implementation method is illustrated in Fig. 6.

Fig. 6. DCC Implementation

5. Conclusion

SADD is a general purpose design system based on the ideas of structured, modular circuit design via an interactive user interface. The current 41 sentences in the input scenario have been run successfully through the parser, creating a database of approximately 600 entries. The designs of the 3 counters in the Screensplitter are completed, and circuits functionally equivalent to those actually used in the Screensplitter have been designed. The design of a symbolic simulator is currently in progress. This simulator will allow the designer to test and debug the circuits and will complete the design cycle envisioned for SADD. When completed, SADD will provide a general-purpose and extensible knowledge-based system for digital design.
 | 
	1980 
 | 
	35 
 | 
					
29 
							 | 
THEORY DIRECTED READING DIAGNOSIS RESEARCH USING COMPUTER SIMULATION

Christian C. Wagner
John F. Vinsonhaler
The Institute for Research on Teaching
Michigan State University
East Lansing, Michigan 48823

Abstract

A five year project studying the diagnosis and remediation of children with reading problems is described. The discussion includes observational studies of reading diagnosticians at work, observations of diagnostician training programs, and computer simulation of theories about decision making in reading diagnosis. The results of the observational studies are mentioned, and the theories and systems for computer simulated diagnosis are described.

I Introduction

The Institute for Research on Teaching is a federally funded project whose purpose is to investigate teaching, where teaching is conceptualized as an information processing task. The Clinical Studies project of the Institute is just finishing its first five year plan studying teachers who diagnose and remediate children with reading problems. This paper should serve as a short introduction of this work to the AI community.

The Clinical Studies project has been primarily concerned with understanding reading diagnosis and remediation - whether performed by classroom teacher, reading specialist, learning disability specialist or school psychologist. Theories or models have been developed to account for the significant behaviors that occur when one of the teaching professionals works with a child. These theories have been tested against 1) direct observational studies of the specialists working with cases, 2) training studies observing the instruction of new specialists, and 3) computer simulation studies observing the behaviors implied by the theory through simulation. Results of these studies have shown that an individual's decision making may be very unreliable - suggesting that individual behavior may not warrant simulation. Before turning to the computer simulation studies, consider briefly the theory and results of the other studies.

II Theories and Models

The content-independent theory that attempts to account for the problem solving behavior of the clinicians is termed the Inquiry Theory [1]. The reading clinician-child interaction is viewed by this theory as follows:

1) The case is considered to be an information processing system that must perform a certain set of tasks. Some information processing abilities are critical to the adequate performance of these tasks. An adequate diagnosis of a problematic case must contain a statement relevant to each such critical ability. There may exist prerequisites for these critical abilities. An adequate diagnosis will include a statement relevant to all prerequisites for each deficient critical ability. Finally, a good diagnosis will include a statement of the history of the case that has led to current deficiencies.

2) The clinician must diagnose a case as described above. This is accomplished by the application of elementary information processing tasks to an understanding of how a case performs its function.
The elementary tasks include ones such as hypothesis generation, hypothesis testing, cue collection, cue interpretation, etcetera [2].

The content-dependent model of reading is termed the Model of Reading and Learning (MORAL) [3]. Beyond viewing reading as an information processing task, the MORAL describes other significant aspects of behavior required, such as the allocation of information processing capacity and conditions of information overload.

At the current time the MORAL describes the critical abilities for human reading. It further details what other factors might affect these abilities and cause deficiencies, and how correction of any deficiency might be attempted. To date it appears to be quite effective in determining the reading problems in a case. The MORAL does not at the current time make any attempt to describe how reading comprehension takes place. Instead, the various types of reading or listening comprehension tasks which people must perform to be good readers are listed. Of course the MORAL is at best incomplete and possibly incorrect. But it does serve to direct research and diagnose cases of reading difficulty.

One final note with respect to theory - it is the intersection of these two relatively independent models (a content independent model of clinical problem solving and a content dependent model of the reading process) that is our area of concern. The questions that arise include: how do the specialists diagnose and treat children, how should they do it, how can they be trained to do what they should, what impact does the model of reading have on the decision making process, etc.

III Observational and Application Studies

The observational studies of teaching professionals and clinicians in reading have basically indicated one thing - the specialists are not reliable. Careful observation in seven interconnected studies of these professionals diagnosing a simulated case of reading difficulty has shown correlations generally not significantly different from zero, whether from the same clinician diagnosing the same case twice or different clinicians each diagnosing the same case once. This finding holds for professionals selected by their peers as the best; for classroom teachers, reading specialists, and learning disability specialists. After eliminating most counter explanations of low reliability through carefully designed replication studies, the reliabilities are still of borderline significance from zero (e.g., 0.10 - 0.15) [4].

Training studies have indicated more optimistic possibilities - in thirty hours of instruction, the correlations may be raised from 0.1 to 0.4 - a value close to physician reliability. Close observation of the training process and its transfer to work settings will hopefully uncover means by which reliability may be raised further.

IV Computer Simulation Studies

With this backdrop, consider the contribution of computer simulation to this program of research. For this discussion we will ignore the use of case simulation, which has been so vital for stimulus control in our experimental design, and turn instead to clinician simulation based on the theories described earlier.
All of our simulations of reading specialists have been based on the Inquiry Theory and the Model of Reading and Learning to read described earlier. In this way, the results of each simulation can be used to expand and refine a model that directs our research efforts in simulation, training and observation. All studies described here were run on an interpretive procedural language whose primitives were based on the Inquiry Theory. This system is entitled BMIS - the Basic Management Information System. Effectively, a system subroutine was created for each elementary information processing task described by the Inquiry Theory (e.g., hypothesis generation, cue collection, diagnostic determination, etcetera). Each subroutine could be called up by a command in an interpretive language. An initial hypothesis directed program was set up in which the hypotheses generated about a case direct the collection of information about the case and its interpretation, which generated more hypotheses, and so on. On the basis of any decision that was made (accept hypothesis X as part of the diagnosis, reject hypothesis Y, etc.), sub-procedures might be activated to handle the peculiarities of the particular decision.

This system was designed for theory investigation and was not intended to be easy to use or flashy. Furthermore, there were many restrictions on its input to bypass the natural language communication problems. As time permitted, a new system was created to rectify these and other identified shortcomings of BMIS. The new production oriented system with similar types of primitives is entitled MOSES, the Multiple Object Simulated Encounter System. Both systems are available through TELENET on the Wayne State University Amdahl 470V6.

*The SIMCLIN Modeling Study: The first simulation study was basically a modeling study. Given the framework provided by the Inquiry Theory, memory structures were created by systematic interview with a senior reading clinician. Such things as hypotheses, cues, observations, diagnoses, strategies, etc. were defined. The goal was the creation of a simulation that would closely emulate this specialist's problem solving behavior.

Comparisons were drawn between the human specialist and the computer analog as they diagnosed the simulated cases of reading difficulties mentioned earlier. The results indicated that the simulation was a very effective model in terms of all measures used - the number and order of cues collected, the diagnosis and suggested remedial plan, etc.

*The Pilot SIMCLIN Reliability Study: With the very low human clinician reliability, it became clear that modeling of individual people was a pointless procedure. Instead we directed our efforts to the simulation of the behavior of groups of clinicians; i.e., to the simulation of models of diagnosis agreed upon by clinicians. At this point, then, the emphasis turned to the creation of intelligence that would be reliable and valid with respect to group reading diagnosis and still be teachable to unaided human specialists.
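A minimal sketch of the hypothesis-directed control cycle that BMIS implements (described at the start of this section), with the Inquiry Theory primitives stubbed out as caller-supplied functions; the loop structure is an assumption inferred from the prose:

    def diagnose(case, generate, collect, interpret, decide):
        """Hypothesis-directed inquiry: hypotheses drive cue collection,
        interpretation of cues yields new hypotheses, and so on, while a
        decision step accepts or rejects each hypothesis in turn.

        The four callables stand in for BMIS's Inquiry Theory subroutines
        (hypothesis generation, cue collection, cue interpretation, and
        diagnostic determination)."""
        diagnosis = []
        pending = list(generate(case))
        processed, seen_cues = set(), []
        while pending:
            hyp = pending.pop(0)
            if hyp in processed:
                continue
            processed.add(hyp)
            cues = collect(case, hyp)        # hypothesis directs cue collection
            seen_cues.extend(cues)
            pending.extend(interpret(cues))  # interpretation -> new hypotheses
            if decide(hyp, seen_cues) == "accept":
                diagnosis.append(hyp)        # accepted into the diagnosis
        return diagnosis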
It was at this point that the development of the Model of Reading and Learning was begun - it would serve to define the content of clinician memory.

This study examined the reliability of a computer diagnostic system that was based on the Inquiry Theory and the newly developed MORAL. The simulated clinician (SIMCLIN) was set up and asked to diagnose four simulated cases twice (no SIMCLIN memory of previous runs was allowed, but different initial contact settings were used). These diagnoses were compared with respect to reliability with the diagnoses of human clinicians.

The results were that the SIMCLIN had a reliability of 0.65 compared to human reliability of 0.10. Further, commonality scores - which indicate how an individual agrees with a criterion group diagnosis - indicated that the SIMCLIN included 80% of the categories agreed upon by the group of human clinicians, while the mean for individual human clinicians was 50%.

*The Pilot SIMCLIN Validity Study: Finally, a simulation study has been run to get a first measure of the validity of the SIMCLIN's diagnostic decisions when those decisions are directed by the Inquiry Theory and MORAL. Reading case records were taken from Michigan State University's reading clinic for SIMCLIN workup. Records were selected which indicated correct diagnosis and others that indicated poor diagnosis (as measured by the child's response to treatment). The areas of concern were the adequacy of the SIMCLIN as an embodiment of the theories, the reliability of the SIMCLIN diagnosis, and the validity of the SIMCLIN diagnosis. It was hoped that the SIMCLIN would agree closely with the clinic's diagnosis for the correctly diagnosed case and not as closely for the poorly diagnosed one.

The SIMCLIN did, in fact, behave as dictated by the MORAL - the simulation checked out the critical abilities of reading and the prerequisite factors and past history of those that were problematic. The reliability of the diagnostic decisions was essentially 1. Adherence to the MORAL almost guarantees this. With respect to the SIMCLIN diagnosis on the well and poorly diagnosed cases, the results were equivocal. The reason for this is that data required by the SIMCLIN was not present in the clinic files. Such things as classroom observation of engaged academic time, listening comprehension scores, and change scores over time were not available. In fact, indications are that these types of data are not routinely collected by reading clinicians, although the SIMCLIN considers them significant. The model and its simulation might well demonstrate inadequacies in the state of the art in reading diagnosis.

V Conclusion

In conclusion, the research paradigm described here has been quite effective. Models and theories direct and focus research designs. These designs - whether observational, training or simulation - reflect back to expand and refine the theories. Substantial data has shown that an individual's decisions may be very unreliable. Training in decision making models and content area theories can improve the reliability.
But the key to effective problem solving seems to be the validity of the theories that are used to direct decision making. One effective means for examining the validity of such theories is through computer simulation. The next step will be the completion of a production oriented SIMCLIN that will be used as a preceptor during instruction of student clinicians and as a decision aid by reading specialists in schools. The validity of the MORAL SIMCLIN will be checked by following its recommendations and watching the results for real children. The research will continue to be theory oriented. Further information on many aspects of this research program may be obtained by contacting the Institute for Research on Teaching at Michigan State University.

References

[1] Wagner, C.C. and J.F. Vinsonhaler. "The Inquiry Theory of Clinical Problem Solving: 1980." The Institute for Research on Teaching, Michigan State University, 1980.

[2] Elstein, A.S., L.S. Shulman, S. Sprafka, H. Jason, N. Kagan, L.K. Akkak, M.J. Gordon, M.J. Loupe, and R.D. Jordon. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge: Harvard University Press, 1978.

[3] Sherman, G.B., J.F. Vinsonhaler, and A.B. Weinshank. "A Model of Reading and Learning." The Institute for Research on Teaching, Michigan State University, 1979.

[4] Vinsonhaler, J.F. "The Consistency of Reading Diagnosis." The Institute for Research on Teaching, Michigan State University, 1980.
 | 
	1980 
 | 
	36 
 | 
					
30 
							 | 
A WORD-FINDING ALGORITHM WITH A DYNAMIC LEXICAL-SEMANTIC MEMORY FOR PATIENTS WITH ANOMIA USING A SPEECH PROSTHESIS

Kenneth Mark Colby, Daniel Christinaz, Santiago Graham, Roger C. Parkison
The Neuropsychiatric Institute/UCLA
760 Westwood Plaza
Los Angeles, California 90024

ABSTRACT

Word-finding problems (anomia) are common in brain-damaged patients suffering from various types of aphasia. An algorithm is described which finds words for patients using a portable microprocessor-based speech prosthesis. The data-structures consist of a lexical-semantic memory which becomes reorganized over time depending on usage. The algorithm finds words based on partial information about them which is input by the user.

I WORD RETRIEVAL PROBLEMS

We are developing an "intelligent" speech prosthesis (ISP) for people with speech impairments [1]. An ISP consists of a small, portable computer programmed to serve a number of functions and interfaced with a speech synthesizer. ISPs can be operated by pressing keys on a keyboard or by eye-movements using a specially-designed pair of spectacles.

How words are stored, organized, and retrieved from human lexical memories constitutes a lively area of research in current psychology, computational linguistics, neurology, aphasiology and other cognitive sciences [2], [3], [4], [5], [6], [7], [8]. Words in a lexical memory can be associated to other words by means of several relations - synonymy, antonymy, part-whole, spatio-temporal contiguity, etc. [3]. It can also be assumed that the process of word-finding for production begins with semantic concepts to which word-signs are connected. Once the word-representation for the concept is found, it is subjected to phonological rules if it is to be spoken, and to graphological rules if it is to be written. In the final stage of output, articulatory rules governing muscle movements for speech are utilized, or rules for hand and finger movements for writing are applied.

Impairment in word expression can be due to failures at any stage of this process from concept to utterance. Our interest here is in those instances in which the speaker has the word, or part of the word, or some information about the word, in consciousness but cannot produce the target word. Our efforts have been directed towards writing and field-testing a computer program which can find words in a lexical-semantic memory, given some working information about the target word.

This research was supported by Grant #MCS78-09900 from the Intelligent Systems Program of the Mathematics and Computer Science Division of the National Science Foundation.

It has long been known that some aphasic patients, who cannot produce a word, can indicate how many syllables the word contains by finger tapping or squeezing the examiner's hand [9]. Both Barton [10] and Goodglass et al. [11] reported that aphasics know some generic properties of the target word such as its first letter, last letter, etc. Our own experience with patients having anomic problems has confirmed and extended these observations.
The word-expression disorders in which we are interested are commonly divided into two groups which are (weakly) correlated with the locations of brain lesions and accompanying signs and symptoms. The first group consists of patients with lesions in the anterior portion of the dominant cerebral hemisphere. These patients have many concrete and picturable words in consciousness, and they perform well, although slowly, on naming tasks [12], [13]. The naming disruption is part of a more generalized disturbance which includes laborious articulation and often a disruption of productive syntax.

The second group of disorders in this classification scheme involves lesions in the posterior region of the dominant hemisphere. Although these patients often fail to provide the correct name of an object, their substituted response can be related to the target word, as described in the studies of Rinnert and Whitaker [6]. Since the substitute word is so often systematically related to the target word, we thought it might be usable as a clue or pointer in a lexical search for the target word.

II A WORD-FINDING ALGORITHM

The first step involves constructing the data base, a lexicon of words stored on linked lists at various levels. The highest level is a "Topic Area" (TA), representing the idea being talked about. Currently we use 15 topic areas. The TA consists of a list of words, each connected into a large lexicon of words falling within that topic area. Each word within a particular topic area is also linked to a list of selected words, as in the following example:

(BODY (ACHE (HEAD HURT PAIN STOMACH))
      (ANKLE (FOOT LEG))
      (WOUND (BLOOD CUT HURT)))

The organization of the word lists changes dynamically over time according to their usage, as will be described below.

In attempting to find a word, the program first asks the user to identify the topic-area from a list presented to him on a display. We start with an initial set of word-lists but change them in accordance with the individual user's environment. A topic-area specifies where the user is linguistically in the discourse, not where he is physically or socio-psychologically. He is then asked about certain properties of the target word, the questions appearing on the display as follows:

(1) What is the topic area?
(2) What is the first letter of the word?
(3) What is the last letter of the word?
(4) What letters are in the middle of the word?
(5) What word does this word go with?

Question (5) attempts to obtain a word associated to the target word, however idiosyncratic it might be. It may even resemble the target word in sound.

Our starting lexical memory of about 1800 words was based on known high-frequency discourse words, associations from Deese [8], word association norms [14], and word-pairs from Rinnert and Whitaker [6]. Word-associations can be highly idiosyncratic to the individual.
For example, if one asks 1,000 college students which word they associate to the word table, 691 say chair, but the remainder of the responses are distributed over 32 other words including jewels like big, cards, and tennis [15]. Hence, with each user, we add his particular associations and words that do not appear in our starting lexicon. This data is collected by a spouse, friend, research assistant or speech pathologist in conversations with the user.

After the user has offered clues about the target word, we can represent the responses using a clue pattern of the form:

CLUE = TA + L + L + STRING + GWW

where

TA = Topic area
L = Letter
STRING = One or more letters
GWW = GOESWITH word

Assuming that each slot in the pattern is correct and not null, the program first finds a short list of numbered candidate target words (zero to a maximum of 20 words) meeting the input criteria of the clue pattern. The search will be illustrated by cases.

Case (1). Suppose the target word were steak. The clue pattern might be:

CLUE = FOOD + S + K + A + meat.

The word meat is looked up on the FOOD list and then a search is made on the list of words linked with the word meat which begin with S and end with K and have an A in between. If steak were on the list of words linked to meat, it would be displayed visually and auditorily as:

1. STEAK

When he sees or hears the target, the user enters its number (in this case "1.") and the word is then inserted in the utterance of the ISP. (Sometimes, on seeing or hearing the target, the user can utter the word himself.) In this first illustration of the program's operations, we have assumed full and correct entries have been made in the clue pattern and that steak was on the list of words which [GOESWITH] meat. But suppose steak is not on the meat list of FOOD.

Case (2). If steak were not found on the meat list of FOOD, the "meat" part of the clue is ignored and all words under FOOD beginning with (S), ending in (K), and with an (A) in between are retrieved. The word steak might appear in this group and if so, the program will add it to the meat list after the user signifies this is his target word. Thus the lexical memory becomes automatically reorganized over time.

Case (3). If still no acceptable word is retrieved, the "FOOD" part of the clue is ignored and a search is made on all topic-area lists for (S + K + A) words. The word steak might appear in the ANIMAL topic-area associated with the word cow. After the user indicates this is the target word, steak is added to the meat list under FOOD. With repeated usage, steak becomes promoted to the top of the meat list.

Case (4). If the target word is still not retrieved by an exhaustive search of all topic-areas, the word does not exist in the lexicon and the program ends. One might consider varying the constraints of (S + K + A) clues and searching further, but in our experience this is rarely productive. Time is not a problem for the program, since an exhaustive search requires only a few seconds. But a large number of candidate words are retrieved when most of the clues are ignored. And it is too time-consuming for the user to search through the list of candidates looking for the desired word.

Case (5). It might be that the user cannot completely answer the questions required by the clue pattern, or the entries may be in error. For example, he may not know the first and last letters of steak, but he does know the topic-area is FOOD and the [GOESWITH] word is meat. If steak is on the meat list, the search will succeed. If not, there is no point in displaying all the words under FOOD because the candidate-set is too large. In our experience thus far we have found that at least 2-3 pieces of information are necessary to retrieve a target word.
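A condensed Python sketch of this staged relaxation (topic plus GOESWITH word, then topic only, then all topics), with the lexicon shaped as in the earlier BODY example:

    def matches(word, first, last, middle):
        """Letter constraints from the clue pattern."""
        return (word.startswith(first) and word.endswith(last)
                and middle in word[1:-1])

    def find_word(lexicon, ta, first, last, middle, goeswith):
        """Staged search: ignore the GOESWITH clue, then the topic area,
        relaxing until some candidates (at most 20) turn up."""
        stages = [
            lexicon.get(ta, {}).get(goeswith, []),      # Case 1
            [w for lst in lexicon.get(ta, {}).values()  # Case 2
             for w in lst],
            [w for topic in lexicon.values()            # Case 3
             for lst in topic.values() for w in lst],
        ]
        for words in stages:
            hits = [w for w in words if matches(w, first, last, middle)]
            if hits:
                return hits[:20]  # numbered candidates shown to the user
        return []                 # Case 4: the word is not in the lexicon

    lexicon = {"FOOD": {"meat": ["STEAK", "SAUSAGE"]},
               "ANIMAL": {"cow": ["STEAK", "MILK"]}}
    print(find_word(lexicon, "FOOD", "S", "K", "A", "meat"))  # ['STEAK']

The dynamic reorganization of Cases 2 and 3 (adding the confirmed word to the GOESWITH list and promoting it with use) would be a small update on top of this lookup.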
Case (5). It might be that the user cannot answer completely the questions required by the clue pattern, or the entries may be in error. For example, he may not know the first and last letters of steak, but he does know the topic-area is FOOD and the [GOESWITH] word is meat. If steak is on the meat list, the search will succeed. If not, there is no point to displaying all the words under FOOD, because the candidate set is too large. In our experience thus far we have found that at least 2-3 pieces of information are necessary to retrieve a target word.

Further experience may indicate that some users can benefit from variations in the clue pattern. For example, with some patients we have found it expedient to ask them first for the topic-area and the [GOESWITH] word. If this fails, the program then asks the letter questions. With other patients, questions regarding word-size, number of syllables, and "what do you use it for?" may be helpful. We are currently field testing the program with a variety of anomic patients using keyboard or ocular control to study the value of utilizing different clue patterns.* In the meantime, others in the field of communication disorders may wish to utilize and improve on the word-finding algorithm reported here.

Such a program can also be used by speech pathologists as a therapeutic aid. The patient can be given linguistic tasks to practice on using the ISP at home. Repeated exercise and home practice may facilitate the patient's own word-finding functions.

III SUMMARY

We have described a computer program for an intelligent speech prosthesis which can find words in a lexical-semantic memory given some information about the target words. The program is currently being tested with a variety of anomic patients who have word-finding difficulties. A future report will describe the results of this field-testing.

--------
*We are grateful to Carol Karp and patients of the Speech Pathology Division of Northridge Hospital Foundation, Northridge, California (Pam Schiffmacher, Director) for their collaboration.

REFERENCES

[1] Colby, K.M., Christinaz, D., Graham, S. "A Computer-Driven Personal, Portable and Intelligent Speech Prosthesis." Computers and Biomedical Research 11 (1978) 337-343.

[2] Caramazza, A., Berndt, R.S. "Semantic and Syntactic Processes in Aphasia: A Review of the Literature." Psychological Bulletin 85 (1978) 898-918.

[3] Miller, G.A., Johnson-Laird, P.N. Language and Perception. Cambridge: Harvard University Press, 1976.

[4] Goodglass, H., Baker, E. "Semantic Field, Naming and Auditory Comprehension in Aphasia." Brain and Language 3 (1976) 359-374.

[5] Zurif, E., Caramazza, A., Myerson, R., Galvin, J. "Semantic Feature Representation for Normal and Aphasic Language." Brain and Language 1 (1974) 167-187.

[6] Rinnert, C., Whitaker, H.A. "Semantic Confusions by Aphasic Patients." Cortex 9 (1973) 56-81.

[7] Geschwind, N. "The Varieties of Naming Errors." Cortex 3 (1967) 97-112.

[8] Deese, J.
The Structure of Associations in Language and Thought. Baltimore: Johns Hopkins Press, 1965.

[9] Lichtheim, L. "On Aphasia." Brain 7 (1885) 433-484.

[10] Barton, M. "Recall of Generic Properties of Words in Aphasic Patients." Cortex 7 (1971) 73-82.

[11] Goodglass, H., Kaplan, E., Weintraub, S., Ackerman, N. "The "Tip-of-the-Tongue" Phenomenon in Aphasia." Cortex 12 (1976) 145-153.

[12] Wepman, J.M., Bock, R.D., Jones, L.V., Van Pelt, D. "Psycholinguistic Study of the Concept of Anomia." Journal of Speech and Hearing Disorders 21 (1956) 468-477.

[13] Marshall, J.C., Newcome, F. "Syntactic and Semantic Errors in Paralexia." Neuropsychologia 4 (1966) 169-176.

[14] Postman, L., Keppel, G. (Eds.). Norms of Word Association. New York: Academic Press, 1970.

[15] Palermo, D.S., Jenkins, J. Word Association Norms. Minneapolis: University of Minnesota Press, 1964.
 | 
	1980 
 | 
	37 
 | 
					
31 
							 | 
TROUBLE-SHOOTING BY PLAUSIBLE INFERENCE*

Leonard Friedman
Jet Propulsion Laboratory, California Institute of Technology
Pasadena, CA 91103

ABSTRACT

The PI system has been implemented with the ability to reason in both directions. This is combined with truth maintenance, dependency directed backtracking, and time-varying contexts to permit modelling dynamic situations. Credibility is propagated in a semantic network, and the belief transfer factors can be modified by the system, unlike previous systems for inexact reasoning.

I TROUBLE-SHOOTING LOGIC

The PI (for Plausible Inference) system enables a user to trouble-shoot physical systems in a very general way. The trouble-shooting process requires that our user first define a physical model which represents what takes place in normal operation with everything functioning correctly. If this physical model is translatable to the representation used by PI, it can be stored in computer memory and used to guide the search for the most likely failure. In order to make the process clearer, we shall describe a few of the many methods of reasoning employed by human beings in their trouble-shooting. These methods are the ones we can at present imitate in the PI system.

Suppose we have a desired goal state defined in our physical model, and this state depends on three conditions being true to attain the goal. If we execute the process and observe that the goal state was not attained, we conclude that at least one of the three conditions on which it depended must have been false, and all are possibly false. If we then perform some test designed to verify whether one of the conditions is actually true, and the test shows that it is indeed true, we conclude that at least one of the remaining two untested conditions must be false. If all but one of the remaining conditions has been eliminated from consideration by further testing, we may conclude that the single condition remaining must be the guilty party. The process of elimination just described is the one normally employed by humans, and it is this process we have implemented on the computer. Of course, the three conditions may in turn have conditions on which they depend. In that case the method just described may be applied recursively to narrow the fault down further, at least to the granularity of the conditions employed in the representation.

This method fails if there are conditions on which the goal state depends for realization and which are not explicitly represented in the model.

--------
*This paper presents the results of one phase of research performed at the Jet Propulsion Laboratory, California Institute of Technology, sponsored by the National Aeronautics and Space Administration under Contract NAS 7-100.
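As a concrete illustration, here is a minimal Python sketch of this process of elimination over a dependency tree; the encoding and names are ours, not PI's (PI stores such dependencies in a semantic network).

# Each state maps to the conditions it depends on; a hypothetical tree.
DEPENDS_ON = {
    "goal": ["cond-a", "cond-b", "cond-c"],
    "cond-b": ["cond-b1", "cond-b2"],
}

def suspects(state, verified_true):
    # When a state fails, every condition it depends on is possibly
    # false; conditions verified true by a test are eliminated, and the
    # method recurses into subtrees down to the granularity of the model.
    remaining = []
    for cond in DEPENDS_ON.get(state, []):
        if cond in verified_true:
            continue                      # eliminated by a test
        if cond in DEPENDS_ON:            # has its own preconditions
            remaining.extend(suspects(cond, verified_true))
        else:
            remaining.append(cond)
    return remaining

# "goal" was not attained; tests showed cond-a and cond-b1 to be true.
print(suspects("goal", {"cond-a", "cond-b1"}))   # ['cond-b2', 'cond-c']

If further testing eliminated cond-c as well, the single remaining suspect, cond-b2, would be declared the guilty party.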
Nevertheless, the exercise may serve as a valuable guide to help a user to focus attention on specific, possibly false, areas as likely sources of failure. Another difficulty with the method is the fact that either a test does not exist to determine whether a specific sub-goal was reached, or the sub-goal state in question was changed by a later event occurring in the model. In this case it is difficult to verify whether the changed sub-goal state was ever achieved. Only if there were long-lasting side-effects will it be possible to verify. Such difficulties plague human trouble-shooters as well. The present implementation can not reason about such "vanished" events, in a hypothetical mode, from a past context.

II IMPLEMENTATION AND THEORY

The PI system is part of a larger system called ERIS, named for the Greek goddess of argument. The basic module of the system, described in [1], performs deduction and modeling in the Propositional Calculus. A planning module has been built by M. Creeger by augmenting the basic deduction module with many special features, including a "causal" connective that supplements the standard logical connectives (AND, OR, etc.). Similarly, the PI module has been built by augmenting Propositional Calculus with extra rules of inference, and another belief besides truth-value associated with each assertion. This additional belief, which we call credibility, is a subjective numerical measure of the confidence in the truth-value, with values between -1 and 1.

The basic ERIS module generates a network of nodes linked by connectives as it reads in its knowledge base of assertions; this feature is retained in the other modules. The techniques used in ERIS make it possible to perform deduction without rewriting. Instead, "specialists" for each connective propagate the correct values of the beliefs to the assertions which they link. A theoretical foundation for this approach, applying both to Propositional Calculus and First Order Predicate Calculus, is given by the method of Analytic Tableaux [2]. Because rewriting is totally avoided, inference, planning, model-revision, and dependency-directed backtracking can be performed in a single integrated system.

[Fig. 1, Transfer Factors in a Semantic Net: arcs link antecedent and consequent in an implication, each arc carrying a transfer factor per mode, e.g. :T; +.5 and :F; -.5.]

Plausible Inference introduces two rules of inference besides Modus Ponens and Modus Tollens, called the Method of Confirmation and the Method of Denial. These extra modes permit the propagation of truth values, true or false, even in directions forbidden by Propositional Calculus.
Simultaneously, the associated credibilities are propagated through the net, employing all four modes as appropriate. A calculus of credibility transfer between arbitrary logical expressions has been worked out to specify exactly the process of credibility propagation through the net. The calculus is described in [3], and is based on equations employed in MYCIN [4].

The basic quantities controlling propagation between antecedents and consequents in implication are transfer factors, or DELTA's, and there are four for each antecedent/consequent pair, one for each mode (see Fig. 1). Both MYCIN and PROSPECTOR are limited to a single reasoning mode and transfer factor for each implication and use a static transfer factor structure that is specified by human "experts". The PI system, in trouble-shooting, recalculates the appropriate transfer factors on the basis of incoming evidence. In addition to a dynamic transfer factor structure, PI also incorporates the use of default values for the transfer factors in trouble-shooting when the users do not have better information.

III APPLICATION EXAMPLE

Our example of automated trouble-shooting uses a toy case selected from the application domain, the mission control of spacecraft. It is a simplified representation of signal transmission from a spacecraft to Earth. The desired goal state is "Signal Received" from the S/C. Fig. 2 shows a plan to accomplish this which has been generated by the ERIS planner. The user supplies three basic "action CAUSES state" relations: (1) "Ground Antenna Receiving" CAUSES "Signal Received", (2) "Point S/C" CAUSES "Pointed Correctly", and (3) "Transmit Signal" CAUSES "Signal Transmitted". The pre-conditions for the first relation are the end states achieved in the second and third. In effect we are saying, "In order for an action to achieve the goal state certain pre-conditions must be true. In order to make those pre-conditions true certain actions will cause them, which may also have pre-conditions to succeed". Thus complex sequences may be built up.

[Fig. 2, Plan for Command Sequence Generation: a network in which "Signal Received" depends on "Ground Antenna Receiving", "S/C Pointed Correctly", "Transmit Signal", "RSS Power On", "Correct Data Mode", and "Signal Transmitted", with a priori credibilities such as -0.001 marked on the branching lines.]

The ERIS planner links the basic strategies in the manner shown, using the pre-conditions as hooks. It then collects the actions in a list, and supplies that list as the desired command sequence. The plan generated in this way is a descriptive model of the signal transmission process and constitutes our trouble-shooting knowledge base.
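The credibility calculus itself is given in [3]; purely as an illustration of how a transfer factor scales belief along one arc, here is a sketch using the standard MYCIN-style certainty-factor forms [4]. The function names and the choice of combination rule are our own; PI's actual equations, and its dynamic recalculation of the DELTA's, are not reproduced here.

def propagate(cred_antecedent, delta):
    # Belief flows along an arc only to the degree the antecedent is
    # believed; the transfer factor (DELTA) for the active mode scales
    # it, in the spirit of CF(consequent) = CF(rule) * max(0, CF(ante)).
    return delta * max(0.0, cred_antecedent)

def combine(c1, c2):
    # MYCIN-style merge of two credibilities for the same assertion,
    # each in [-1, 1]; agreement reinforces, disagreement attenuates.
    if c1 >= 0 and c2 >= 0:
        return c1 + c2 * (1 - c1)
    if c1 <= 0 and c2 <= 0:
        return c1 + c2 * (1 + c1)
    return (c1 + c2) / (1 - min(abs(c1), abs(c2)))

# One arc of Fig. 1: a :T-mode transfer factor of +.5 applied to an
# antecedent currently believed true with credibility 0.8.
print(propagate(0.8, 0.5))   # 0.4
print(combine(0.4, 0.3))     # 0.58

The default partition of suspicion (.5 each for two events, .33 for three, and so on) and its renormalization as events are eliminated are described in the example below.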
The propagation of beliefs that takes place with the CAUSES connective is identical to the belief propagations of an implication as defined in [3], although the timing of belief propagation of CAUSES is different. We define the belief-propagation-equivalent implication form of the CAUSES relation as (IMPLIES (AND action precondition1 ... preconditionN) goal-state). At the start, the assumption is made that all states are true.

Suppose that the sequence is executed and the ground station fails to receive the signal. Then "Signal Received" is false, and this can be entered into the data base. The effects of this change of belief are propagated through the data base by a modified Modus Tollens, making all the events on which "Signal Received" depends Possibly-False (or PF). If a test is then performed by the human controllers, like causing the S/C to roll while transmitting, and a signal is received during the roll, we may conclude that the action "Point space craft" worked, "Signal Transmitted" is true, and "Ground Antenna Receiving" has been verified. Inputting these facts into the data base causes the PI system to do two things:

(1) Those preconditions required by "Signal Transmitted" are changed from PF to T by Modus Ponens (Possibly False to True).

(2) "Pointed Correctly" is changed to False, F, rather than PF. In addition, the PI system raises the credibility of failure for those events on which "Pointed Correctly" depends. Their truth value remains Possibly False because there are multiple possibilities.

If one of these latter events is shown to be true by testing, the remaining one may be the only possibility left. For example, if the Sun Sensor and Canopus Sensor can be shown to work, and their truth status is input, the system will conclude that the Starmap must be at fault, even though the a priori credibility of such a mistake was extremely low.

How the credibilities change at various stages of operation can be described now. At the start there are two possibilities: either a priori credibilities may be entered or default credibilities generated. Figure 2 shows a priori credibilities entered on the branching lines. These are subjective measures of the likelihood of failure of the respective events to which they lead, given that the state which depends on them is false. Thus, for "Signal Received", "Pointed Correctly" has an a priori credibility associated with false of -.7, "Signal Transmitted" a value of -.3, and "Point space craft" a value of -.001.

When we start, the assumption is that every state is true (at the appropriate time) with a credibility of 1.0 (equivalent to certainty). At the next stage, when all we know is that "Signal Received" is false, the a priori credibilities are assigned to all the states.
If we used the default mode, credibility would be assigned equally; i.e., for two events, each would get .5, for three .33, for four .25, etc. Whenever an event is eliminated as true, the remaining credibilities are raised by an empirical formula that reflects a reasonable sharing of suspicion, based either on the a priori splits or an equal partition. Thus, in our example, "Starmap Accurate" went from true (cred 1.0) to Possibly False (cred -.007) to Possibly False (cred -.01) to False (cred -1.0).

This, in a simplified way, describes the operation of PI in trouble-shooting using reasoning by Plausible Inference. Of course, humans employ many other methods in trouble shooting, such as analogy. For example, a person may say "This problem resembles one I encountered in another area. Maybe it has the same cause I deduced then." By such techniques, humans can often vector in on a problem, bypassing step-by-step elimination. We hope to implement some of these techniques eventually.

References

[1] Thompson, A. M., "Logical Support in a Time-Varying Model", in Lecture Notes in Computer Science, Springer-Verlag, Berlin, 1980. Proc. Fifth Conf. on Automated Deduction, July 8-11, Les Arcs, Savoie, France.

[2] Smullyan, R. M., First-Order Logic, Springer-Verlag, Berlin, 1968.

[3] Friedman, L., "Reasoning by Plausible Inference", in Lecture Notes in Computer Science, Springer-Verlag, Berlin, 1980. Proc. Fifth Conf. on Automated Deduction, July 8-11, Les Arcs, Savoie, France.

[4] Shortliffe, E. H. and Buchanan, B. G., "A Model of Inexact Reasoning in Medicine", Math. Biosci. 23, pp. 351-379, 1975. Also chapt. 4 of Computer-Based Medical Consultations: MYCIN, Elsevier, New York, 1976.
 | 
	1980 
 | 
	38 
 | 
					
32 
							 | 
AN APPLICATION OF THE PROSPECTOR SYSTEM TO DOE'S NATIONAL URANIUM RESOURCE EVALUATION*

John Gaschnig
Artificial Intelligence Center
SRI International
Menlo Park, CA 94025

Abstract

A practical criterion for the success of a knowledge-based problem-solving system is its usefulness as a tool to those working in its specialized domain of expertise. Here we describe an application of the Prospector consultation system to the task of estimating the favorability of several test regions for occurrence of uranium deposits. This pilot study was conducted for the National Uranium Resource Estimate program of the U.S. Department of Energy. For credibility, the study was preceded by a performance evaluation of the relevant portion of Prospector's knowledge base, which showed that Prospector's conclusions agreed very closely with those of the model designer over a broad range of conditions and levels of detail. We comment on characteristics of the Prospector system that are relevant to the issue of inducing geologists to use the system.

1. Introduction

This paper describes an evaluation and an application of a knowledge-based system, the Prospector consultant for mineral exploration. Prospector is a rule-based judgmental reasoning system that evaluates the mineral potential of a site or region with respect to inference network models of specific classes of ore deposits. Here we describe one such model, for a class of "Western states" sandstone uranium deposits, and report the results of extensive quantitative tests measuring how faithfully it captures the reasoning of its designer across a set of specific sites (used as case studies in fine-tuning the model), and with respect to the detailed subconclusions of the model as well as its overall conclusions. Having so validated the performance of this model (called RWSSU), we then describe a pilot study performed in conjunction with the National Uranium Resource Evaluation (NURE) program of the U.S. Department of Energy. The pilot study applied the RWSSU model to evaluate and compare five target regions, using input data provided by DOE and USGS geologists (using the medium of a model-specific questionnaire generated by Prospector). The results of the experiment not only rank the test regions, but also measure the sensitivity of the conclusions to more certain or less certain variations in the input data.
One interesting facet of this study is that several geologists provided input data independently about each test region. Since input data about each region varies among the responding geologists, so do the conclusions; we demonstrate how Prospector is used to identify and resolve the disagreements about input data that are most significantly responsible for differences in the resulting overall conclusions. This paper is a condensation of portions of a larger report [4].

*This research was supported by the U.S. Geological Survey under USGS Contract No. 14-08-0001-17227. Any opinions, findings, and conclusions or recommendations expressed in this report are those of the author and do not necessarily reflect the views of the U.S. Geological Survey.

2. Validation of the Model

The practical usefulness of an expert system is limited if those working in its domain of expertise do not or will not use it. Before they will accept and use the system as a working tool, such people (we shall call them the "domain users") usually expect some evidence that the performance of the system is adequate for their needs (e.g., see [8]). Accordingly, considerable effort has been devoted to evaluating the performance of the Prospector system and of its various models [2, 3]. In the present case, we first needed to validate the performance of the uranium model to be used in the pilot study for the U.S. Department of Energy.

The methodology used to evaluate Prospector's performance is discussed in detail elsewhere [2, 3]. For brevity, here we outline a few relevant factors. The Prospector knowledge base contains a distinct inference network model for each of a number of different classes of ore deposits, and a separate performance evaluation is performed for each model. Here we are concerned with one such model, called the regional-scale "Western states" sandstone uranium model (RWSSU), designed by Mr. Ruffin Rackley. Since there exist no objective quantitative measures of the performance of human geologists against which to compare that of Prospector, we instead use a relative comparison of the conclusions of a Prospector model against those of the expert geologist who designed it. To do so, first a number of test regions are chosen, some being exemplars of the model and others having a poor or less good match against the model.
For each such case, a questionnaire is completed detailing the observable characteristics that the model requests as inputs for its deliberation. Prospector evaluates each such data set and derives its conclusion for that test case, which is expressed on a scale from -5 to 5. As a basis of comparison, we also independently elicit the model designer's conclusion about each test case, based on the same input data, and expressed on the same -5 to 5 scale. Then we compare Prospector's predictions against the target values provided by the model designer. Table 1 compares the top-level conclusions of Prospector (using the RWSSU model) against those of the model designer for eight test regions.

Table 1. Comparison of RWSSU Model with Designer for Eight Test Cases

Test Region      Designer's Target   Prospector Score   Difference
-------------------------------------------------------------------
Black Hills           3.50               4.33              -0.83
Crooks Gap            4.70               4.26               0.44
Gas Hills             4.90               4.37               0.53
Shirley Basin         4.95               4.13               0.82
Ambrosia Lake         5.00               4.39               0.61
Powder River          4.40               4.40               0.00
Fox Hills             1.50               2.17              -0.67
Oil Mountain          1.70               3.32              -1.62
-------------------------------------------------------------------
Average: 0.69

Table 1 indicates that the average difference between the Prospector score and the corresponding target value for these eight cases is 0.69, which is 6.9% of the -5 to 5 scale.

Besides the overall conclusions reported above, quite detailed information about Prospector's conclusions was collected for each test case. One feature of the Prospector system is the ability to explain its conclusions at any desired level of detail. In its normal interactive mode, the user can interrogate Prospector's conclusions by indicating which conclusions or subconclusions he wishes to see more information about. The same sort of information is presented in Table 2 (using the Gas Hills region as an example), in the form of Prospector's overall evaluation, the major conclusions on which the overall evaluation is based, and the subconclusions that support each major conclusion. For brevity, each section of the RWSSU model represented in Table 2 is identified by its symbolic name, which is indented to show its place in the hierarchy of the model. For comparison, we first elicited from the model designer his target values for each section of the model listed in Table 2; these values are included in Table 2.
Table 2. Detailed Comparison of RWSSU Model with Designer for Gas Hills

Section        Designer's Target   Prospector Score   Difference
-----------------------------------------------------------------
RWSSU               4.90                4.37              .53
  FTRC              4.80                4.64              .16
    TECTON          4.50                4.50              .00
    AHR             5.00                4.95              .05
    FAVHOST         4.80                5.00             -.20
    SEDTECT         4.80                4.88             -.08
  FAVSED            4.90                4.68              .22
    FLUVSED         4.90                4.68              .22
    MARINESED      -3.54               -2.03            -1.43
    AEOLSED        -2.50               -2.10             -.40
  FMA               4.95                4.41              .54
    RBZONE          5.00                4.60              .40
    AIZONE          4.00                4.77             -.77
    MINZONE         5.00                5.00              .00
-----------------------------------------------------------------
Average difference = 0.36 (average of absolute values)

The data in Table 2 indicate that Prospector not only reaches essentially the same numerical conclusions as its designer, but does so for similar reasons. This detailed comparison was repeated for each of the eight cases, resulting in 112 distinct comparisons between Prospector's prediction and designer's target value (i.e., 8 test regions times 14 sections of the model). The average difference between Prospector's score and designer's target value over these 112 cases was 0.70, or 7.0% of our standard 10-point scale.** Gaschnig [4] also reports sensitivity analysis experiments showing the models to be rather stable in their conclusions: for the RWSSU model, a 10% perturbation in the input certainties caused only a 1.2% change in the output certainties.

3. Results of the NURE Pilot Study

Having established the credibility of the RWSSU model by the test results just discussed, we then undertook an evaluation of five test regions selected by the Department of Energy. For this purpose USGS and DOE geologists completed questionnaires for this model. As a sensitivity test, several geologists independently completed questionnaires for each test region. For comparison, the model designer, R. Rackley, also completed questionnaires for the five test regions. The overall results are reported in Table 3, in which the abbreviations M.H., P.B., Mo., N.G., and W.R. denote the names of the test regions, namely Monument Hill, Pumpkin Buttes, Moorcroft, Northwest Gillette, and White River, respectively.

Table 3. Overall Conclusions for Five Test Regions

Test Region   Scores (geologists A-D, USGS team, Rackley data)   Range
-----------------------------------------------------------------------
M.H.          4.17, 3.32, 3.97, 4.40                              1.08
P.B.          4.20, 3.30, 4.19, 4.40                              1.10
Mo.           3.92, 3.88, 4.00                                    0.12
N.G.          3.64, 0.10, 3.42                                    3.54
W.R.          0.13, 0.01                                          0.12
The results in Table 3 indicate that the Monument Hill, Pumpkin Buttes, and Moorcroft regions are very favorable, and about equally favorable, for occurrence of "Western states" sandstone uranium deposits. Northwest Gillette is scored as moderately favorable, whereas White River is neutral (balanced positive and negative indicators).

Note that each respondent has had different exposure to the target regions, in terms of both first-hand, on-site experience and familiarity with field data reported in the literature. These differences in experience are reflected in their answers on the questionnaires. Since different inputs yield different conclusions, one would expect a spread in the certainties about each region, reflecting the differences in input data provided by the various geologists. Inspection of Table 3 reveals, however, that the scores derived from different geologists' input data about the same region agree rather closely for each region except Northwest Gillette (see the column labeled "Range"). These generally close agreements reflect the capability of Prospector models to synthesize many diverse factors, mechanically ascertaining general commonalities without being unduly distracted by occasional disparities. In cases such as Northwest Gillette in which a large difference in conclusions occurs, it is easy to trace the source of the disagreement by comparing the individual conclusions for different sections of the model (representing different geological subconclusions), as in Table 4.

Table 4. Comparison of Detailed Conclusions About Northwest Gillette

Section        Geologist C   Geologist D   Rackley data   Avg.
----------------------------------------------------------------
RWSSU               .10          3.66          3.42        3.56
  FTRC             4.67          3.80          4.63        4.37
    TECTON         4.90          4.50          4.50        4.63
    AHR            4.95          1.03          4.94        3.64
    FAVHOST        5.00          5.00          5.00        5.00
    SEDTECT        4.98          4.33          4.78        4.69
  FAVSED            .04          3.92          4.79        2.92
    FLUVSED         .04          3.92          4.79        2.92
    MARINESED     -4.60          3.34           .02        -.41
    AEOLSED       -4.99         -2.10         -3.23       -3.44
  FMA               .27          2.45          1.33        2.18
    RBZONE         4.10          4.83          4.73        4.55
    AIZONE        -3.29          2.40          0.00       -0.30
    MINZONE         .41          2.82          2.59        1.94

Inspection of Table 4 reveals that the conclusions agree fairly closely for the FTRC section of the model, and less closely for the FAVSED and FMA sections. Tracing the differences deeper, one sees that of the three factors on which FMA depends, there is fairly good agreement about RBZONE, but larger differences in the cases of the AIZONE and MINZONE sections.
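A minimal sketch of this kind of disagreement tracing, in Python, under our own assumptions about how per-section scores are stored (Prospector's actual model is an inference network, not this pair of dictionaries):

# Hypothetical per-section scores for two respondents (cf. Table 4).
SUBSECTIONS = {"RWSSU": ["FTRC", "FAVSED", "FMA"],
               "FMA": ["RBZONE", "AIZONE", "MINZONE"]}
SCORES = {
    "C": {"RWSSU": 0.10, "FTRC": 4.67, "FAVSED": 0.04, "FMA": 0.27,
          "RBZONE": 4.10, "AIZONE": -3.29, "MINZONE": 0.41},
    "D": {"RWSSU": 3.66, "FTRC": 3.80, "FAVSED": 3.92, "FMA": 2.45,
          "RBZONE": 4.83, "AIZONE": 2.40, "MINZONE": 2.82},
}

def trace(section, a, b, threshold=2.0, depth=0):
    # Report sections where the two respondents' conclusions differ by
    # more than the threshold, then descend into their subsections to
    # find the disagreements actually responsible.
    gap = abs(SCORES[a][section] - SCORES[b][section])
    if gap > threshold:
        print("  " * depth + f"{section}: disagreement of {gap:.2f}")
        for sub in SUBSECTIONS.get(section, []):
            trace(sub, a, b, threshold, depth + 1)

trace("RWSSU", "C", "D")

Run on the Table 4 figures for geologists C and D, this flags FAVSED and FMA, and within FMA the AIZONE and MINZONE sections, mirroring the analysis above.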
In some cases, such a detailed analysis can isolate the source of overall disagreement to a few key questions about which the respondents disagreed. These can then be resolved by the respondents without the need to be concerned with other disagreements in their questionnaire inputs that did not significantly affect the overall conclusions.

Prospector has also been applied to several other practical tasks. One evaluated several regions on the Alaskan Peninsula for uranium potential [1], as one of the bases for deciding their ultimate disposition (e.g., wilderness status versus commercial exploitation). Another application was concerned with measuring quantitatively the economic value of a geological map, resulting in statistically significant results [7].

4. Discussion

We have measured Prospector's expertise explicitly and presented a practical application to a national project, demonstrating in particular how the Prospector approach deals effectively with the variabilities and uncertainties inherent in the task of resource assessment. This work illustrates that expert systems intended for actual practical use must accommodate the special characteristics of the domain of expertise. In the case of economic geology, it is not rare for field geologists to disagree to some extent about their observations at a given site. Accordingly, the use of various sorts of sensitivity analysis is stressed in Prospector to bound the impact of such disagreements and to isolate their sources. In so doing, we provide geologists with new quantitative techniques by which to address an important issue, thus adding to the attractiveness of Prospector as a working tool. Other domains of expertise will have their own peculiarities, which must be accommodated by designers of expert systems for those domains.

A more mundane, but nevertheless important, example concerns the use of a questionnaire as a medium for obtaining input data to Prospector from geologists. Most geologists have little or no experience with computers; furthermore, access to a central computer from a remote site may be problematic in practice. On the other hand, geologists seem to be quite comfortable with questionnaires. Our point is simply that issues ancillary to AI usually have to be addressed to ensure the practical success of knowledge-based AI systems.

References

1. Cox, D. P., D. E.
Detterman, "Mineral Resources of the Chignik and Sutwick Island Quadrangles, Alaska," U.S. Geological Survey Map MF-1053K, 1980, in press.

2. Duda, R. O., P. E. Hart, P. Barrett, J. Gaschnig, K. Konolige, R. Reboh, and J. Slocum, "Development of the Prospector Consultation System for Mineral Exploration," Final Report, SRI Projects 5821 and 6415, Artificial Intelligence Center, SRI International, Menlo Park, California, October 1978.

3. Gaschnig, J. G., "Preliminary Performance Analysis of the Prospector Consultant System for Mineral Exploration," Proc. Sixth International Joint Conference on Artificial Intelligence, Tokyo, August 1979.

4. Gaschnig, J. G., "Development of Uranium Exploration Models for the Prospector Consultant System," SRI Project 7856, Artificial Intelligence Center, SRI International, Menlo Park, California, March 1980.

5. National Uranium Resource Evaluation, Interim Report, U.S. Department of Energy, Report GJO-111(79), Grand Junction, Colorado, June 1979.

6. Roach, C. H., "Overview of NURE Progress Fiscal Year 1979," Preprint of Proceedings of the Uranium Industry Seminar, U.S. Department of Energy, Grand Junction, Colorado, October 16-17, 1979.

7. Shapiro, C., and W. Watson, "An Interim Report on the Value of Geologic Maps," Preliminary Draft Report, Director's Office, U.S. Geological Survey, Reston, Virginia, 1979.

8. Yu, V. L., et al., "Evaluating the Performance of a Computer-Based Consultant," Heuristic Programming Project Memo HPP-78-17, Dept. of Computer Science, Stanford University, September 1978.
 | 
	1980 
 | 
	39 
 | 
					
33 
							 | 
INCREMENTAL, INFORMAL PROGRAM ACQUISITION*

Brian P. McCune
Advanced Information & Decision Systems
201 San Antonio Circle, Suite 286
Mountain View, California 94040

Abstract. Program acquisition is the transformation of a program specification into an executable, but not necessarily efficient, program that meets the given specification. This paper presents a solution to one aspect of the program acquisition problem, the incremental construction of program models from informal descriptions [1], in the form of a framework that includes (1) a formal language for expressing program fragments that contain informalities, (2) a control structure for the incremental recognition and assimilation of such fragments, and (3) a knowledge base of rules for acquiring programs specified with informalities.

1. Introduction

The paper describes a LISP based computer system called the Program Model Builder (abbreviated "PMB"), which receives informal program fragments incrementally and assembles them into a very high level program model that is complete, semantically consistent, unambiguous, and executable. The program specification comes in the form of partial program fragments that arrive in any order and may exhibit such informalities as inconsistencies and ambiguous references. The program fragment language used for specifications is a superset of the language in which program models are built. This program modelling language is a very high level programming language for symbolic processing that deals with such information structures as sets and mappings.

2. The Problem

The two key problems faced by PMB come from processing fragments that specify programs incrementally and informally. The notion of incremental program specification means that the fragments specifying a program may be received in an arbitrary order and may contain an arbitrarily small amount of new information. The user thus has the most flexibility to provide new knowledge about any part of the program at any time. For example, a single fragment conveying a small number of pieces of information is the statement "A is a collection." This identifies an information structure called A and defines it as a collection of objects. However, the fragment says nothing about the type of objects, their number, etc. These details are provided in program fragments occurring before or after this one.

Informality means that fragments may be incomplete, semantically inconsistent, or ambiguous; may use generic operators; and may provide more than one equivalent way of expressing a program part. An incomplete program model part may be completed either by use of a default value, by inference by PMB, or from later fragments from the user.

--------
*This paper describes research done at the Stanford Artificial Intelligence Laboratory and Systems Control, Inc., and supported by DARPA under Contracts MDA903-76-C-0206 and N00014-79-C-0127, monitored by ONR. Fellowship support was provided by NSF, IBM, and the De Karman Trust. The views and conclusions contained in this paper are those of the author.

Program model consistency is monitored at all times. PMB tries to resolve inconsistencies first; otherwise, it reports them to the user.
For example, the membership test fragment x ∈ A requires that either A have elements of the same type as x (whenever the types of A and x finally become known) or that their types be inferred to be the same.

Because a fragment may possess ambiguities, its interpretation depends upon the model context. So PMB specializes a generic operator into the appropriate primitive operation, based upon the information structure used. For example, part-of(x,A) (a Boolean operation that checks if information structure x is somehow contained within A) becomes x ∈ A, if A is a collection with elements of the same type as x, and an is-component if A is a plex (record structure).

PMB is capable of canonization, the transformation of equivalent information and procedural structures into concise, high level, canonical forms. This allows subsequent automatic coding the greatest freedom in choosing implementations. Interesting patterns are detected by specific rules set up to watch for them. For example, expressions that are quantified over elements of a set are canonized to the corresponding expression in set notation.

3. Control Structure

The model building problem is to acquire knowledge in the form of a program model. The control structure of PMB is based upon the "recognition" paradigm [2], in which a system watches for new information, recognizes the information based upon knowledge of the domain and the current situation, and then integrates the new knowledge into its knowledge base. PMB has one key feature: subgoals may be dealt with in an order chosen by the user, rather than dictated by the system. Subgoals are satisfied either externally or internally to PMB. The two cases are handled by the two kinds of data driven antecedent rules, response rules and demons, which are triggered respectively by the input of new fragments or changes in the partial model. When new information arrives in fragments, appropriate response rules are triggered to process the information, update the model being built, and perhaps create more subgoals and associated response rules. Each time a subgoal is generated, an associated "question" asking for new fragments containing a solution to the subgoal is sent out. This process continues until no further information is required to complete the model. To process subgoals that are completely internal to PMB, demon rules are created that delay execution until their prerequisite information in the model has been filled in by response rules or perhaps other demons.

4. Knowledge Base

PMB has a knowledge base of rules for handling constructs of the program modelling language, processing informalities in fragments, monitoring consistency of the model, and doing limited forms of program canonization. Rules about the modelling language include facts about five different information structures, six control structures, and approximately twenty primitive operations. The control structures are ones that are common to most high level languages. The modelling language's real power comes from its very high level operators for information structures such as sets, lists, mappings, records, and alternatives of these.
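Before turning to example rules, the response-rule/demon machinery of Section 3 can be pictured with a small sketch. Everything below (the fragment encoding, the way demons poll the model) is our own illustrative reconstruction in Python, not PMB's LISP implementation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Demon:
    needs: tuple                 # model slots that must be filled first
    fire: Callable[[dict], None]

def run(fragments, response_rules, demons):
    model = {}                   # partial program model: slot -> value
    for kind, data in fragments:
        for rule_kind, fire in response_rules:
            if rule_kind == kind:
                fire(data, model)        # may fill slots in the model
        # Any model change may awaken demons whose prerequisite
        # information now exists in the partial model.
        for d in list(demons):
            if all(s in model for s in d.needs):
                d.fire(model)
                demons.remove(d)
    return model

# Response rules record fragments; the demon waits until both the loop
# body and the exit test are known, then checks containment (in the
# spirit of Demon 1 in the trace of Section 5).
rules = [("loop-body", lambda data, m: m.update({"loop-body": data})),
         ("exit-test", lambda data, m: m.update({"exit-test": data}))]
demons = [Demon(("loop-body", "exit-test"),
                lambda m: print("exit test inside body:",
                                m["exit-test"] in m["loop-body"]))]
print(run([("exit-test", "exit-condition"),
           ("loop-body", ["loop-input", "exit-condition"])],
          rules, demons))

Note that the order of the two fragments does not matter: the demon simply delays until both slots of the partial model exist, which is the behavior Demons 1 and 2 exhibit in the trace of Section 5.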
Below are English paraphrases of three rules that exemplify the major types of rules used in PMB. Rule 1 is a response rule for processing a new loop. Rule 2 is a demon that checks that the arguments of an is-subset operation are consistent. Rule 3 is a canonization demon that transforms a case into a test when appropriate.

[1] A loop consists of an optional initialization, required body, and required pairs of exit tests and exit blocks. Each exit test must be a Boolean expression occurring within the body.

[2] Require that the two arguments of an is-subset operation both return collections of the same prototypic element.

[3] If the statement is a case, the case has two condition/action pairs, and the first condition is the negation of the second condition, then change the case into a test.

Both response rules and simple demons are procedural. Compound demons (i.e., those whose antecedents test more than one object in the model) use declarative antecedent patterns that are expanded automatically into procedural form by a rule "compiler".

5. Example of PMB in Operation

The model building excerpt below displays (1) growth of the program model tree in a fashion that is generally top down, but data driven, and (2) completion and monitoring of parts of the model by demons. Note that this excerpt does justice neither to the concept of arbitrary order of fragments nor to the types of programming knowledge in PMB.

The trace discusses three program fragments generated from an English dialog. Each fragment is followed by a description of how it was processed by PMB, a snapshot of the partial model at that point, and a list of the outstanding demons. A detailed trace for the first fragment shows PMB focusing on individual slots of a fragment, creating model templates, and creating subgoals. The other fragments emphasize the creation and triggering of demons. Names preceding colons are unique template names that allow fragments to refer to different parts of the model. Missing parts of the partial model are denoted by "???". Newly added or changed lines are denoted by the character "|" at the right margin.

[The excerpt starts after the first fragment has already caused the partial program model shown below to be created. It only contains the names of the model, CLASSIFY, and the main algorithm, "algorithm-body". No demons exist.]

Current program model:
  program classify;
    algorithm-body: ???

Current demons active:
  None

[The second fragment describes the top level algorithm as a control structure having type composite and two steps called "input-concept" and "classify-loop". This fragment might have arisen from a sentence from the user such as "The algorithm first inputs the concept and then classifies it."]

Inputting fragment:
  algorithm-body:
    begin
      input-concept
      classify-loop
    end

[A composite is a compound statement with an optional partial ordering on the execution of its subparts. The response rule that processes the composite creates the following two subgoals, along with response rules to handle them (not shown).]
Processing ALGORITHM-BODY.TYPE = COMPOSITE
  Creating subgoal: ALGORITHM-BODY.SUBPARTS = ???
  Creating subgoal: ALGORITHM-BODY.ORDERINGS = ???
Done processing ALGORITHM-BODY.TYPE = COMPOSITE

[Within the same fragment the two subparts are defined as operational units with unique names, but of unknown types. An operational unit can be any control structure, primitive operation, or procedure call. Two new templates are created and their types are requested.]

Processing ALGORITHM-BODY.SUBPARTS = (INPUT-CONCEPT CLASSIFY-LOOP)
  Creating template INPUT-CONCEPT with value INPUT-CONCEPT.CLASS = OPERATIONAL-UNIT
  Creating subgoal: INPUT-CONCEPT.TYPE = ???
  Creating template CLASSIFY-LOOP with value CLASSIFY-LOOP.CLASS = OPERATIONAL-UNIT
  Creating subgoal: CLASSIFY-LOOP.TYPE = ???
Done processing ALGORITHM-BODY.SUBPARTS = (INPUT-CONCEPT CLASSIFY-LOOP)

[At this point, the model is missing the definitions of the two parts of the composite.]

Current program model:
  program classify;
    begin
      input-concept: ???;
      classify-loop: ???
    end

Current demons active:
  None

[The third fragment, which defines "input-concept" to be an input primitive operation, is omitted. Information structures from this fragment are not shown in the models below. The fourth fragment defines the second step of the composite. This fragment might have come from "The classification step is a loop with a single exit condition."]

Inputting fragment:
  classify-loop:
    until exit (exit-condition)
    repeat loop-body
    finally exit:
    endloop

[This fragment defines a loop that repeats "loop-body" (as yet undefined) until a Boolean expression called "exit-condition" is true. At such time, the loop is exited to the empty exit block, called "exit", which is associated with "exit-condition". Since PMB doesn't know precisely where the test of "exit-condition" will be located, it is shown separately from the main algorithm below. The response rule that processes the loop needs to guarantee that "exit-condition" is contained within the body of the loop. Since this can't be determined until the location of "exit-condition" is defined in a fragment, the response rule attaches Demon 1 to the template for "exit-condition" to await this event. Similarly, Demon 2 is created to await the location of "exit-condition" and then put it inside a test with an assert-exit-condition as its true branch. This will cause the loop to be exited when the exit condition becomes true.]

Current program model:
  program classify;
    begin
      concept ← input(concept-prototype, user, concept-prompt);
      until exit                                                 |
      repeat                                                     |
        loop-body: ???                                           |
      finally                                                    |
        exit:                                                    |
      endloop                                                    |
    end
  exit-condition: ???                                            |

Current demons active:
  Demon 1: awaiting control structure containing "exit-condition" |
  Demon 2: awaiting control structure containing "exit-condition" |

[The fifth fragment defines the body of the loop, thus triggering the two demons set up previously.
[The fifth fragment defines the body of the loop, thus triggering the two demons set up previously. A possible source of this fragment is "The loop first inputs a scene, tests whether the datum that was input is really the signal to exit the loop, classifies the scene, and then outputs this classification to the user."]

Inputting fragment:
    loop-body:
        begin
        loop-input;
        exit-condition;
        classification;
        output-classification
        end

["Loop-body" is a composite with four named steps. PMB now knows where "exit-condition" occurs and that it must return a Boolean value. Demon 1 is awakened to find that "exit-condition" is located inside the composite "loop-body". Since this isn't a loop, Demon 1 continues up the tree of nested control constructs. It immediately finds that the parent of "loop-body" is the desired loop, and succeeds. Demon 2 is also awakened. Since it now knows "exit-condition" and its parent, Demon 2 can create a new template between them. The demon creates a test with "exit-condition" as its predicate and an assert-exit-condition that will leave the loop as its true action.]

Current program model:
    program classify;
        begin
        concept <- input(concept-prototype, user, concept-prompt);
        until exit
        repeat
            begin
            loop-input: ???;                                       |
            if exit-condition: ???                                 |
                then assert-exit-condition(exit);                  |
            classification: ???;                                   |
            output-classification: ???                             |
            end
        finally
            exit:
        endloop
        end

Current demons active:
    None                                                           |

[At the end of the excerpt, five of 32 fragments have been processed.]

6. Role of PMB in a Program Synthesis System

PMB was designed to operate as part of a more complete program synthesis system with two distinct phases: acquisition and automatic coding. In such a system the program model would serve as the interface between the two phases. Automatic coding is the process of transforming a model into an efficient program without human intervention. The model is acquired during the acquisition phase; the model is coded only when it is complete and consistent.

PMB may work within a robust acquisition environment. In such an environment, program fragments may come from many other knowledge sources, such as those expert in traces and examples, natural language, and specific programming domains. However, the operation of PMB is not predicated on the existence of other modules: all fragments to PMB could be produced by a straightforward deterministic parser for a surface language such as the one used to express fragments.

7. Conclusion

PMB has been used both as a module of the PSI program synthesis system [3] and independently. Models built as part of PSI have been acquired via natural language dialogs and execution traces and have been automatically coded into LISP by other PSI modules. PMB has successfully built a number of moderately complex programs for symbolic computation.

The most important topics for future work in this area include (1) extending and revising the knowledge base, (2) providing an efficient mechanism for testing alternate hypotheses and allowing program modification, and (3) providing a general mechanism for specifying where in the program model a program fragment is to go. The last problem has resulted in a proposed program reference language [1].
8. References

[1] Brian P. McCune, Building Program Models Incrementally from Informal Descriptions, Ph.D. thesis, AIM-333, STAN-CS-79-772, AI Lab., CS Dept., Stanford Univ., Stanford, CA, Oct. 1979.

[2] Daniel G. Bobrow and Terry Winograd, "An Overview of KRL, a Knowledge Representation Language", Cognitive Science, Vol. 1, No. 1, Jan. 1977, pp. 3-46.

[3] Cordell Green, Richard P. Gabriel, Elaine Kant, Beverly I. Kedzierski, Brian P. McCune, Jorge V. Phillips, Steve T. Tappel, and Stephen J. Westfold, "Results in Knowledge Based Program Synthesis", IJCAI-79: Proceedings of the Sixth International Joint Conference on Artificial Intelligence, Vol. 1, CS Dept., Stanford Univ., Stanford, CA, Aug. 1979, pp. 342-344.
 | 
	1980 
 | 
	4 
 | 
					
34 
							 | 
SOME REQUIREMENTS FOR A COMPUTER-BASED LEGAL CONSULTANT

L. Thorne McCarty
Faculty of Law, SUNY at Buffalo
Laboratory for Computer Science Research, Rutgers

Although the literature on computer-based consultation systems has often suggested the possibility of building an expert system in the field of law (see, e.g., [2]), it is only recently that several AI researchers have begun to explore this possibility seriously. Recent projects include: the development of a computational theory of legal reasoning, using corporate tax law as an experimental problem domain [6] [7] [8]; the development of a language for expressing legal rules within a data-base management environment [4]; the design of an information retrieval system based on a computer model of legal knowledge [3]; and the design of an artificial intelligence system to analyze simple tort cases [10]. This paper attempts to identify the principal obstacles to the development of a legal consultation system, given the current state of artificial intelligence research, and argues that there are only certain areas of the law which are amenable to such treatment at the present time. The paper then suggests several criteria for selecting the most promising areas of application, and indicates the kinds of results that might be expected, using our current work on the TAXMAN project [7] as an example.

I. Potential Applications.

One can imagine numerous applications of artificial intelligence techniques, in several diverse areas of law, but most of these would fall into one of the following categories:

(1.) Retrieval Systems. There are a number of systems in operation today which maintain data bases of statutes and decided cases, in full text, and which are capable of searching these texts for combinations of key words, using standard information retrieval techniques. (For a comparative survey of the principal domestic systems, LEXIS and WESTLAW, see [12].) These retrieval systems have turned out to be useful for certain kinds of legal research tasks, but only when used in conjunction with the traditional manual digests and indices, all of which are organized according to a rigid conceptual classification of the law (usually: the West "key number system").

* This research has been funded by the National Science Foundation through Grant SOC-78-11408 and Grant MCS-79-21471 (1979-81).
With the use of artificial intelligence techniques, however, the retrieval systems could be augmented to provide a form of automated conceptual searching as well, and without the rigidities of the manual indices. For a discussion of these possibilities, see [6] and [3].

(2.) Legal Analysis and Planning Systems. A step more sophisticated than a retrieval system, a legal analysis and planning system would actually analyze a set of facts, or propose a sequence of transactions, in accordance with the applicable body of legal rules. This is the kind of system that most often comes to mind when one speculates about computer-based legal consultation, for it is the system most similar to the successful systems in chemical and medical domains: a lawyer, engaged in a dialogue with a computer, would describe the facts of his case, and the computer would suggest an analysis or a possible course of action. In fact, there are systems of this sort under development today, using techniques much less powerful than those available to the artificial intelligence community, and they seem close to commercial application: see, e.g., [13]. The advantages of artificial intelligence techniques for these applications have been discussed by [6].

(3.) Integrated Legal Information Systems. Instead of looking only at the tasks of the private attorney, we could focus our attention more broadly on the legal system as a whole. One of the tasks of the legal system is to manage information and to make decisions about the rights and obligations of various individual actors, and there seems to be no reason, in principle, why some of this information and some of these decisions could not be represented entirely within a computer system. For a current example, using conventional programming technology, consider the computerized title registration systems which are now being used to manage real estate transactions (see, e.g., [5]). With the availability of artificial intelligence techniques, a large number of additional applications come to mind: financial transactions, securities registration, corporate taxation, etc. At present it appears that these possibilities are being taken more seriously by European lawyers than by American lawyers (see, e.g., [1] and [11]).

If we consider the potential role of artificial intelligence techniques in all of these applications, a basic paradigm emerges.
A computer-based legal consultation system must represent the "facts" of a case at a comfortable level of abstraction, and it must represent the "law" in the chosen area of application. The "law" would consist of a system of "concepts" and "rules" with the following characteristics: (a.) they would be relatively abstract, that is, they would subsume large classes of lower-level factual descriptions; and (b.) they would have normative implications, that is, they would specify which actions were permitted and which actions were obligatory in a given situation. Legal analysis, in its simplest form, would then be a process of applying the "law" to the "facts." Put this way, the paradigm seems to be an ideal candidate for an artificial intelligence approach: the "facts" would be represented in a lower-level semantic network, perhaps; the "law" would be represented in a higher-level semantic description; and the process of legal analysis would be represented by a pattern-matching routine.

The difficult problems with this paradigm, however, are the representation problems. In the existing knowledge-based systems, in other domains, the representation of the "facts" and the "law" has been relatively straightforward. In DENDRAL, for example, the ground-level description of all possible chemical structures could be represented in a simple graphical notation, and the rules for splitting these structures in a mass spectrograph could be represented as simple operations on the links of the graphs. In MYCIN, the basic facts of a case could be represented as a set of features listing the presence or absence of certain symptoms and the results of certain laboratory tests, and the diagnostic rules could then be represented as a probabilistic judgment that a given symptom or test result implied a certain disease. By contrast, the facts of a legal case typically involve all the complexities of daily life: human actions, beliefs, intentions, motivations, etc., in a world of ordinary objects like houses and automobiles, and complex institutions like businesses and courts. Even if the facts of a particular case could be represented in a computer system, the rules themselves would often be problematical. Some rules, usually those embodied in statutes, have a precise logical structure, and this makes them amenable to the existing artificial intelligence techniques; a toy sketch of this simple pattern-matching paradigm appears below.
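In the simplest reading of the paradigm, the "facts" are ground assertions in a semantic network and a statutory "rule" is a pattern matched against them. The fact relations and the rule below are invented for this illustration (in Python); a real statute, and TAXMAN's actual representation, are far richer.

    # Toy illustration of the law-as-pattern-matching paradigm.
    # Facts are ground relational assertions; the "rule" is a pattern.

    facts = {
        ("transfers-property", "ShareholderB", "CorpA"),
        ("receives-stock", "ShareholderB", "CorpA"),
    }

    def nontaxable_exchange(facts, person, corp):
        """Hypothetical rule: a transfer of property to a corporation
        solely in exchange for its stock is classified as nontaxable."""
        return (("transfers-property", person, corp) in facts and
                ("receives-stock", person, corp) in facts)

    print(nontaxable_exchange(facts, "ShareholderB", "CorpA"))  # True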
But it is a commonplace among lawyers that the most important legal rules do not have this form at all: instead they are said to have an "open texture"; their boundaries are not fixed, but are "constructed" and "modified" as they are applied to particular factual situations. A sophisticated legal consultation system would not be able to ignore these complexities, but would have to address them directly.

II. Possible Approaches.

Since the representation problems for a legal consultation system are so difficult, it is tempting to start with the "simplest" possible legal issues, such as the subject matter of the first-year law school courses. We might therefore be tempted to investigate assault and battery cases from the first-year torts course [10], or offer and acceptance cases from the first-year contracts course. But these cases are "simple" for law students primarily because they draw upon ordinary human experience, and this is precisely what makes them so difficult for an artificial intelligence system. To understand tort cases, we must understand all the ways in which human beings can be injured, intentionally and unintentionally, mentally and physically, with and without justification. To understand contract cases, we must understand the expectations of real people in concrete business situations, and the ambiguities of human language in expressing particular contractual intentions. If we abstract away these details, we will miss entirely the central features of legal reasoning, and our consultation systems will tend to produce only the more trivial results.

Paradoxically, the cases that are most tractable for an artificial intelligence system are those cases, usually involving commercial and corporate matters, which a lawyer finds most complex. There is a simple reason why this is so. A mature legal system in an industrialized democracy is composed of many levels of legal abstractions: the law initially defines "rights" in terms of concrete objects and ordinary human actions, but these rights are then treated as "objects" themselves, and made subject to further human "actions"; by repeating this process of reification many times, a complex body of commercial law can be developed. Because of their technical complexity, the legal rules at the top levels of this conceptual hierarchy are difficult for most lawyers to comprehend, but this would be no obstacle for an artificial intelligence system.
The commercial abstractions, in fact, are artificial and formal systems themselves, drained of much of the content of the ordinary world; and because of the commercial pressures for precision and uniformity, they are, by legal standards, well structured. A reasonable strategy for developing a computer-based legal consultation system, then, would be to start here.

This is the strategy we have followed in the TAXMAN project [6] [7]. The TAXMAN system operates in the field of corporate tax law, which is very near the apex of the hierarchy of commercial abstractions. The basic "facts" of a corporate tax case can be captured in a relatively straightforward representation: corporations issue securities, transfer property, distribute dividends, etc. Below this level there is an expanded representation of the meaning of a security interest in terms of its component rights and obligations: the owners of the shares of a common stock, for example, have certain rights to the "earnings", the "assets", and the "control" of the corporation. Above this level there is the "law": the statutory rules which classify transactions as taxable or nontaxable, ordinary income or capital gains, dividend distributions or stock redemptions, etc. Although these rules are certainly complex, the underlying representations are manageable, and we have concluded from our earlier work that the construction of an expert consultation system in this area of the law is a feasible proposition [6].

In our current work [7] we are taking this model one step further, in an attempt to account for the "open texture" of legal concepts and the process by which a legal concept is "constructed" and "modified" during the course of an argument over a contested case. In many areas of the law this would be an impossible task: the complexity of the representation would be overwhelming, and the structural and dynamic properties of the concepts would be obscured. But in the world of corporate abstractions the task appears to be feasible. Looking at a series of corporate tax cases before the United States Supreme Court in the 1920's and 1930's, we have been able to construct a model of the concept of "taxable income" as it appeared to the lawyers and judges at the time (see [9]).
Although the concept is sometimes represented as a "logical" pattern which can be "matched" to a lower-level semantic network (we call this a logical template structure), the more important representation for the process of legal analysis consists of a prototype structure and a sequence of deformations of the prototype. We are currently involved in an implementation of these ideas, and we will describe them in detail in a future paper (for an initial description of the implementation, see [8]). We believe that the "prototype-plus-deformation" structure is an essential component of a system of legal rules, and that it ought to play an important role in a sophisticated legal consultation system.

III. Prospects.

This paper has emphasized the difficulties in constructing a computer-based legal consultation system, but it has also suggested some feasible approaches. The main difficulty is the representation problem: the factual situations in a legal problem domain involve complex human actions, and the most important legal rules tend to contain the most amorphous and malleable legal concepts. By selecting legal problems from the commercial and corporate areas, however, we can construct a representation of the legally relevant facts without having to model the entire human world, and we can begin to develop the necessary structures for the representation of the higher-level legal rules. We have had some success with this strategy in the TAXMAN project, and we believe it can be applied elsewhere as well.

REFERENCES

[1] Bing, J., and Harvold, T., Legal Decisions and Information Systems (Universitetsforlaget, Oslo, 1977).

[2] Buchanan, B.G., and Headrick, T.E., "Some Speculation About Artificial Intelligence and Legal Reasoning," 23 Stanford Law Review 40 (1970).

[3] Hafner, C., "An Information Retrieval System Based on a Computer Model of Legal Knowledge," Ph.D. Dissertation, University of Michigan (1978).

[4] Jones, S., Mason, P., and Stamper, R., "LEGOL 2.0: A Relational Specification Language for Complex Rules," 4 Information Systems 293-305 (1979).

[5] Maggs, P.B., "Automating the Land Title System," 22 American University Law Review 369-91 (1973).

[6] McCarty, L.T., "Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning," 90 Harvard Law Review 837-93 (1977).

[7] McCarty, L.T., "The TAXMAN Project: Towards a Cognitive Theory of Legal Argument," in B. Niblett, ed., Computer Science and Law (Cambridge University Press, forthcoming 1980).
[8] McCarty, L.T., and Sridharan, N.S., "The Representation of an Evolving System of Legal Concepts: I. Logical Templates," in Proceedings, Third National Conference of the Canadian Society for Computational Studies of Intelligence, Victoria, British Columbia, May 14-16, 1980.

[9] McCarty, L.T., Sridharan, N.S., and Sangster, B.C., "The Implementation of TAXMAN II: An Experiment in Artificial Intelligence and Legal Reasoning," Report LRP-TR-2, Laboratory for Computer Science Research, Rutgers University (1979).

[10] Meldman, J.A., "A Preliminary Study in Computer-Aided Legal Analysis," Ph.D. Dissertation, Massachusetts Institute of Technology, Technical Report No. MAC-TR-157 (November, 1975).

[11] Seipel, P., Computing Law (LiberForlag, Stockholm, 1977).

[12] Sprowl, J.A., A Manual for Computer-Assisted Legal Research (American Bar Foundation, 1976).

[13] Sprowl, J.A., "Automating the Legal Reasoning Process: A Computer that Uses Regulations and Statutes to Draft Legal Documents," 1979 American Bar Foundation Research Journal 1-81.
 | 
	1980 
 | 
	40 
 | 
					
35 
							 | 
A FRAME-BASED PRODUCTION SYSTEM ARCHITECTURE

David E. Smith and Jan E. Clayton
Heuristic Programming Project*
Department of Computer Science
Stanford University

ABSTRACT

We propose a flexible frame-structured representation and agenda-based control mechanism for the construction of production-type systems. Advantages of this architecture include uniformity, control freedom, and extensibility. We also describe an experimental system, named WHEEZE, that uses this formalism.

The success of MYCIN-like production systems [4] [7] [9] has demonstrated that a variety of types of expertise can be successfully captured in rules. In some cases, however, rules alone are inadequate, necessitating the use of auxiliary representations (e.g. property lists for parameters in MYCIN). Other limitations result from the use of goal-directed control.

In this paper we outline a flexible schema for constructing high performance production-like systems. The architecture consists of two components:

1. An extensible representation (utilizing a frame-structured language) which captures production rule knowledge.

2. An agenda-based control mechanism allowing considerable freedom in tailoring control flow.

We have used this architecture in the development of a system named WHEEZE, which performs medical pulmonary function diagnosis based on clinical test results. This system is based on two earlier efforts, PUFF [7], an EMYCIN-based production rule system [11], and CENTAUR [1] [2], a system constructed of both rules and prototypes.

AN ALTERNATIVE REPRESENTATION FOR PRODUCTIONS

Figure 1 shows how a typical PUFF rule would be transformed into our representation. Each assertion is represented as a frame in the knowledge base, with antecedent sub-assertions appearing in its Manifestation slot. The number associated with each manifestation is its corresponding importance. Similarly, the certainty factor and findings from the rule are given separate slots in the assertion. Assertions appearing in the SuggestiveOf and ComplementaryTo slots are those worth investigating if the original assertion is confirmed or denied, respectively (numbers following these assertions are suggestivities). Implicit in the production rule representation is a function which indicates how to compute the "belief" of the consequent assertions given belief in the antecedent assertion. Unfortunately, evaluation of the antecedent assertion involves modal logic (since greater shading is required than simple binary values for belief and disbelief). Therefore, a "HowToDetermineBelief" slot is associated with each assertion indicating how its belief is to be computed.

If: 1) The severity of Obstructive Airways Disease of the patient is greater than or equal to mild, and
    2) The number of pack-years smoked is greater than 0, and
    3) The number of years ago that the patient quit smoking is 0
Then: It is definite (1000) that the following is one of the conclusion statements about this interpretation: Discontinuation of smoking should help relieve the symptoms.

* This research was supported in part by ONR contract N00014-79C-0302 and by NIH Biotechnology Resource Grant RR-00785.
OADwithSmoking:
    Manifestation:        ((OAD-Present 10) (PatientHasSmoked 10) (PatientStillSmoking 10))
    SuggestiveOf:         ((SmokingExacerbatedOAD 5) (SmokingInducedOAD 5))
    ComplementaryTo:      ((OADwithSmoking-None 5))
    Certainty:            1000
    Findings:             "Discontinuation of smoking should help relieve the symptoms."
    HowToDetermineBelief: function for computing the minimum of the beliefs of the manifestations

Figure 1. English translation of PUFF rule (top) and corresponding WHEEZE frame for OADwithSmoking (bottom). Numbers appearing in the Manifestation, SuggestiveOf and ComplementaryTo slots are importance and suggestivity weightings.

The declarative nature of this representation facilitates modification and extension. For example, the addition of related knowledge, such as justifications, explanations, and instructional material, can be accomplished by the addition of slots to already existing assertions. The single uniform structure alleviates the need for any auxiliary means of representation.

Considerable efficiency has been gained by the use of rule compilation on production systems [10] [11]. We feel that a technique similar to this could also be used effectively on our representation but have not yet fully investigated this possibility.

AN AGENDA-BASED CONTROL MECHANISM

Depth-first, goal-directed search is often used in production systems because questions asked by the system are focused on specific topics. Thus, the system appears to follow a coherent line of reasoning, more closely mimicking that of human diagnosticians. There are, however, many widely recognized limitations. No mechanism is provided for dynamically selecting or ordering the initial set of goals. Consequently, the system may explore many "red herrings" and ask irrelevant questions before encountering a good hypothesis. In addition, a startling piece of evidence (strongly suggesting a different hypothesis) cannot cause suspension of the current investigation and pursuit of the alternative.

Expert diagnosticians use more than simple goal-directed reasoning. They seem to work by alternately constructing and verifying hypotheses, corresponding to a mix of data- and goal-directed search. Furthermore, users expect these systems to reason in an analogous manner. It is desirable, therefore, that the system builder have control over the dynamic reasoning behavior of the system.

To provide this control, we employ a simple relaxation of goal- and data-directed mechanisms. This is facilitated by the use of an agenda to keep track of the set of goals to be examined, and their relative priorities. The control strategy is:

1. Examine the top assertion on the agenda.

2. If its sub-assertions (manifestations) are known, the relative belief of the assertion is determined. If confirmed, any assertions that it is suggestive of are placed on the agenda according to a specified measure of suggestivity. If denied, complementary assertions are placed on the agenda according to a measure of suggestivity.
3. If it cannot be immediately verified or rejected, then its unknown sub-assertions are placed on the agenda according to a measure of importance, and according to the agenda level of the original assertion.

By varying the importance factors, SuggestiveOf values, and the initial items placed on the agenda, numerous strategies are possible. For example, if high-level goals are initially placed on the agenda and subgoals are always placed at the top of the agenda, depth-first goal-directed behavior will result. Alternatively, if low-level data are placed on the agenda initially, and assertions suggested by these data assertions are always placed below them on the agenda, breadth-first data-driven behavior will result.

More commonly, what is desired is a mixture of the two, in which assertions suggest others as being likely, and goal-directed verification is employed to investigate the likely assertions. The example below illustrates how this can be done.

[Figure 2. A simplified portion of the WHEEZE knowledge base. The solid lines indicate Manifestation links (e.g. OAD is a manifestation of Asthma), the dashed lines represent SuggestiveOf links. The numbers associated with the links are the corresponding "importances" and "suggestivities" of the connections.]

In the knowledge base of figure 2, suppose that RDX-ALS is confirmed, suggesting RLD to the agenda at level 6 and ALS at level 4. RLD is then examined, and since its manifestations are unknown, they are placed at the specified level on the agenda. The agenda now contains FEV1/FVC>80 at level 8, RV<80 and RLD at level 6, and ALS at level 4. FEV1/FVC>80 is therefore selected, and suppose that it is found to be false. Its complementary assertion (FEV1/FVC<80) is placed at level 8 on the agenda, and is immediately investigated. It is, of course, true, causing OAD to be placed at level 8 on the agenda. The diagnosis proceeds by investigating the manifestations of OAD; and, if OAD is confirmed, Asthma and Bronchitis are investigated.

While many subtleties have been glossed over in this example, it is important to note that:

1. The manipulation of SuggestiveOf and importance values can change the order in which assertions are examined, therefore changing the order in which questions are asked and results printed out. (In the example, FEV1/FVC was asked for before RV.)

2. Surprise data (data contrary to the hypothesis currently being investigated) may suggest goals to the agenda high enough to cause suspension of the current investigation. (The surprise FEV1/FVC value caused suspension of the RLD investigation in favor of the OAD investigation. If the suggestivity of the link from FEV1/FVC<80 to OAD were not as high, this would not have occurred.)

3. Low-level data assertions cause the suggestion of high-level goals, thus selecting and ordering goals to avoid irrelevant questions. (In the example, RLD and ALS were suggested and ordered by the low-level assertion RDX-ALS.)

Thus, extreme control flexibility is provided by this mechanism.
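The control strategy is compact enough to prototype directly. The sketch below is a loose reconstruction, not WHEEZE itself: the frames mirror the Figure 2 fragment, the weights and stub test results are illustrative, and belief is collapsed to three values (confirmed, denied, unknown) rather than WHEEZE's graded beliefs.

    import heapq

    # Illustrative assertion frames (Manifestation, SuggestiveOf,
    # ComplementaryTo slots, as in Figure 1) and stub patient data.
    FRAMES = {
        "RDX-ALS":     dict(manif=[], sugg=[("RLD", 6), ("ALS", 4)], compl=[]),
        "RLD":         dict(manif=[("FEV1/FVC>80", 8), ("RV<80", 6)], sugg=[], compl=[]),
        "ALS":         dict(manif=[("RDX-ALS", 9)], sugg=[], compl=[]),
        "OAD":         dict(manif=[("FEV1/FVC<80", 8)], sugg=[("Asthma", 7)], compl=[]),
        "Asthma":      dict(manif=[("OAD", 9)], sugg=[], compl=[]),
        "FEV1/FVC>80": dict(manif=[], sugg=[], compl=[("FEV1/FVC<80", 8)]),
        "FEV1/FVC<80": dict(manif=[], sugg=[("OAD", 8)], compl=[]),
        "RV<80":       dict(manif=[], sugg=[], compl=[]),
    }
    DATA = {"RDX-ALS": True, "FEV1/FVC>80": False, "FEV1/FVC<80": True}

    def belief(name, confirmed, denied):
        """Data assertions come from test results; a compound assertion
        is believed once all of its manifestations are confirmed."""
        if name in DATA:
            return DATA[name]
        manif = FRAMES[name]["manif"]
        if manif and all(m in confirmed for m, _ in manif):
            return True
        if any(m in denied for m, _ in manif):
            return False
        return None

    def diagnose(initial):
        agenda = [(-lvl, name) for name, lvl in initial]  # max-heap via negation
        heapq.heapify(agenda)
        confirmed, denied, retried = set(), set(), set()
        while agenda:
            lvl, name = heapq.heappop(agenda)
            if name in confirmed or name in denied:
                continue
            b = belief(name, confirmed, denied)
            if b is True:          # confirmed: place suggested assertions
                confirmed.add(name)
                for nxt, s in FRAMES[name]["sugg"]:
                    heapq.heappush(agenda, (-s, nxt))
            elif b is False:       # denied: place complementary assertions
                denied.add(name)
                for nxt, s in FRAMES[name]["compl"]:
                    heapq.heappush(agenda, (-s, nxt))
            elif name not in retried:   # unknown: subgoal, then retry once
                retried.add(name)
                for sub, imp in FRAMES[name]["manif"]:
                    heapq.heappush(agenda, (-imp, sub))
                heapq.heappush(agenda, (lvl + 1, name))  # lower-priority retry
            # assertions still unknown after a retry are simply dropped
        return confirmed

    print(diagnose([("RDX-ALS", 9)]))
    # The surprise FEV1/FVC>80 denial redirects the agenda through
    # FEV1/FVC<80 to OAD and Asthma, as in the narrative above.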
Besides the mechanism proposed above, there have been several other attempts to augment simple goal-directed search. Meta-rules [5] can be used to encode strategic information, such as how to order or prune the hypothesis space. They could also be used, in principle, to suspend a current investigation when strong alternatives were discovered. In practice, however, meta-rules for accomplishing this task could be quite clumsy. In the CENTAUR system [1] [2], procedural attachment mechanisms (in disease prototypes) are used to capture the control information explicitly, and "triggering" rules serve to order the initial hypothesis space. Our solution differs from these earlier attempts by proposing a single uniform control mechanism. It is sufficiently straightforward that tailoring of the control flow could potentially be turned over to the domain expert.

RESULTS

Not surprisingly, WHEEZE exhibits the same diagnostic behavior as its predecessors, PUFF and CENTAUR, on a standard set of 10 patient test cases. In refining the knowledge base, suggestivities and importance factors were used to great advantage to change the order in which questions were asked and conclusions printed out. This eliminated the need to carefully order sets of antecedent assertions.

The representation described has proven adequate for capturing the domain knowledge. In some cases, several rules were collapsed into a single assertion. In addition, the combination of representation and control structure eliminated the need for many awkward interdependent rules (e.g. rules with screening clauses). Representation of both the rule and non-rule knowledge of the PUFF and CENTAUR systems has been facilitated by the flexibility of the architecture described. This flexibility is the direct result of the uniform representation and control mechanism. Further exploitations of this architecture appear possible, providing directions for future research.

ACKNOWLEDGEMENTS

We would like to thank Jan Aikins, Avron Barr, James Bennett, Bruce Buchanan, Mike Genesereth, Russ Greiner, and Doug Lenat for their help and comments.

REFERENCES

[1] Jan S. Aikins. Prototypes and Production Rules: an Approach to Knowledge Representation for Hypothesis Formation. Proc 6th IJCAI, 1979. Pp. 1-3.

[2] Jan S. Aikins. Prototypes and Production Rules: A Knowledge Representation for Computer Consultations. Doctoral dissertation, Dept. of Computer Science, Stanford University.

[3] Jan S. Aikins. Representation of Control Knowledge in Expert Systems. Proc 1st AAAI, 1980.

[4] James Bennett, Lewis Creary, Robert Englemore and Robert Melosh. SACON: A Knowledge Based Consultant for Structural Analysis. Computer Science Report CS-78-699, Dept. of Computer Science, Stanford University, September 1978.

[5] Randall Davis and Bruce G. Buchanan. Meta-Level Knowledge: Overview and Applications. Proc 5th IJCAI, 1977. Pp. 920-927.

[6] Russell Greiner, Douglas Lenat. A Representation Language Language. Proc 1st AAAI, 1980.

[7] J. C. Kunz, R. J. Fallat, et al. A Physiological Rule Based System for Interpreting Pulmonary Function Test Results. HPP-78-19 (Working Paper), Heuristic Programming Project, Dept. of Computer Science, Stanford University, December 1978.

[8] Stephen G.
Pauker and Peter Szolovits. Analyzing and Simulating Taking the History of the Present Illness: Context Formation. In Schneider and Sagvall Hein (Eds.), Computational Linguistics in Medicine. North-Holland, 1977. Pp. 109-118.

[9] E. M. Shortliffe. Computer-based Medical Consultations: MYCIN. New York: American Elsevier, 1976.

[10] William van Melle. A Domain-independent Production-rule System for Consultation Programs. Proc 6th IJCAI, 1979. Pp. 923-925.

[11] William van Melle. A Domain-independent System that Aids in Constructing Knowledge-based Consultation Programs. Doctoral dissertation, Dept. of Computer Science, Stanford University.
 | 
	1980 
 | 
	41 
 | 
					
36 
							 | 
Knowledge Embedding in the Description System Omega

Carl Hewitt, Giuseppe Attardi, and Maria Simi
M.I.T.
545 Technology Square
Cambridge, Mass 02139

ABSTRACT

Omega is a description system for knowledge embedding which combines mechanisms of the predicate calculus, type systems, and pattern matching systems. It can express arbitrary predicates (achieving the power of the ω-order quantificational calculus), type declarations in programming systems (Algol, Simula, etc.), and pattern matching languages (Planner, Merlin, KRL, etc.). Omega gains much of its power by unifying these mechanisms in a single formalism.

In this paper we present an axiomatization of basic constructs in Omega which serves as an important component of the interface between implementors and users. Omega is based on a small number of primitive concepts. It is sufficiently powerful to be able to express its own rules of inference. In this way Omega represents a self-describing system in which a great deal of knowledge about itself can be embedded. The techniques in Omega represent an important advance in the creation of self-describing systems without engendering the problems discovered by Russell. Meta-descriptions (in the sense used in mathematical logic) are ordinary descriptions in Omega.

Together with Jerry Barber we have constructed a preliminary implementation of Omega on the M.I.T. CADR System and used it in the development of an office workstation prototype.

1 -- Introduction

First Order Logic is a powerful formalism for representing mathematical theories and formalizing hypotheses about the world. Logicians have developed a mathematical semantics in which a number of important results have been established, such as completeness. These circumstances have motivated the development of deductive systems based on first order predicate calculus [FOL, PROLOG, Bledsoe's Verifier, etc.]. However, First Order Logic is unsatisfactory as a language for embedding knowledge in computer systems. Therefore many recent reasoning systems have tried to develop their own formalisms [PLANNER, FRL, KL-ONE, KRL, LMS, NETL, AMORD, XPRT, ETHER]. The semantics and deductive theory of these new systems, however, has not been satisfactorily developed. The only rigorous description of most of them has been their implementations, which are rather large and convoluted programs.
2 -- Overview

The syntax of Omega is a version of template English. For example we use the indefinite article in instance descriptions such as the one below:

(a Son)

Instance descriptions like the previous one in general describe a whole category of objects, like the category of sons in this example.

Such a description can however be made more specific, by prescribing particular attributes for the instance description. So for example,

(a Son (with father Paul) (with mother Mary))

describes a son with father Paul and with mother Mary.

Omega differs from systems based on records with attached procedures (SIMULA and its descendants), generalized property lists (FRL, XRL, etc.), frames (Minsky), and units (KRL) in several important respects. One of the most important differences is that instance descriptions in Omega cannot be updated. This is a consequence of the monotonicity of knowledge accumulation in Omega. Change in Omega is modeled through the use of viewpoints [Barber: 1980]. Another difference is that in Omega an instance description can have more than one attribution with the same relation. For example

(a Human (with child Jack) (with child Jill))

is a description of a human with a child Jack and a child Jill.

Statements can be deduced because of the transitivity of the inheritance relation. For example

(John is (a Man))

can be deduced from the following statements

(John is (a Son))
((a Son) is (a Man))

In order to aid readability we will freely mix infix and prefix notations. For example the statement

(John is (a Man))

is completely equivalent to

(is John (a Man))

3 -- Inheritance

The inheritance relation in Omega differs somewhat from the usual ISA relation typically found in semantic networks. For example from

(John is (a Human))
((a Human) is (a Mammal))
(Human is (a Species))

we can deduce

(John is (a Mammal))

but cannot conclude that (John is (a Species)). However we can deduce that

(John is (a (a Species)))

which says that John is something which is a species.

We can often avoid the use of explicit universal quantifiers. For instance the following sentence in quantificational calculus

∀x Man(x) => Mortal(x)

can be expressed as

((a Man) is (a Mortal))

In this case we avoid the need to restrict the range of a universal quantifier by means of a predicate, as is usually necessary in the quantificational calculus. When unrestricted quantification is required, or when we need to give a name to a description which occurs several times in the same expression, we denote a universally quantified variable by prefixing the symbol = to the name of the variable, wherever it appears in the statement, as in:

(=x is (a Man)) => (=x is (a Mortal))

The scope of such a variable is the whole statement.
Thus the above statement is an abbreviation for

(for-all =x ((=x is (a Man)) => (=x is (a Mortal))))

Occasionally it is necessary to use a for-all in the interior of a statement. For example the following statement expresses the Axiom of Extensionality, which is one of the most fundamental principles in Omega:

((for-all =d ((=d is =d1) => (=d is =d2))) => (=d1 is =d2))

In the above statement, the scope of =d1 and =d2 is the whole statement, while the scope of =d is only the statement ((=d is =d1) => (=d is =d2)).

A form of existential quantification is implicit in the use of attributions. For instance

(Pat is (a Man (with father (an Irishman))))

says that there is an Irishman who is Pat's father.

Omega makes good use of the ability to place nontrivial descriptions on the left hand side of an inheritance relation. For example from the following statements

((a Teacher (with subject =s)) is (an Expert (with field =s)))
(John is (a Teacher (with subject Music)))

we get the following by transitivity of inheritance:

(John is (an Expert (with field Music)))

Note that statements like the following

(is (and (a WarmBloodedAnimal) (a BearerOfLiveYoung)) (a Mammal))

are much more difficult to express in systems such as KRL and FRL which are based on generalized records and property lists.

If it happens that two descriptions inherit from each other, we will say they are the same. For example if

((a Woman) is (a Human (with sex female)))
((a Human (with sex female)) is (a Woman))

then we can conclude

((a Woman) same (a Human (with sex female)))

We can express the general principle being used here in Omega as follows

((=d1 same =d2) <=> (∧ (=d1 is =d2) (=d2 is =d1)))

4 -- Lattice Operators and Logical Operators

The domain of descriptions constitutes a complemented lattice, with respect to the inheritance ordering is, meet and join operations and and or, complementation operation not, and Nothing and Something as the bottom and top respectively of the lattice. Some axioms are required to express these relations. For example all descriptions inherit from Something:

(=d is Something)

Furthermore Nothing is the complement of Something:

(Nothing same (not Something))

The usual logical operators on statements are ∧, ∨, ¬, => for conjunction, disjunction, negation, and implication respectively. The description operators not, and, or, etc. apply to all descriptions including statements. It is very important not to confuse the logical operators with the lattice operators in Omega. Note for example:

((∧ true false) is false)
((and true false) is Nothing)
((and true true) is true)

Unfortunately most "knowledge representation languages" have not carefully distinguished between lattice operators and logical operators, leading to a great deal of confusion.
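One way to see the lattice structure concretely is a purely extensional toy gloss, in which a description is modeled as the set of its instances, so that is behaves as the subset order (compare the Axiom of Extensionality in section 5.1 below). Python sets are of course only stand-ins for Omega's descriptions, which need not be extensional objects.

    # A toy extensional gloss of the description lattice: a description
    # is modeled as the set of its instances, "is" as the subset order,
    # and the lattice operators and/or/not as meet/join/complement.

    UNIVERSE = frozenset(range(10))        # stand-in for Something
    NOTHING, SOMETHING = frozenset(), UNIVERSE

    def is_(d1, d2):  return d1 <= d2      # inheritance as the lattice order
    def and_(d1, d2): return d1 & d2       # meet
    def or_(d1, d2):  return d1 | d2       # join
    def not_(d):      return UNIVERSE - d  # complement

    woman = frozenset({1, 2, 3})
    human = frozenset(range(8))
    assert is_(woman, human)                         # (a Woman) is (a Human)
    assert is_(and_(woman, not_(woman)), NOTHING)    # meet with complement
    assert or_(woman, not_(woman)) == SOMETHING
    assert is_(woman, SOMETHING) and is_(NOTHING, woman)

    # Contrast with the logical operators on statements: Python's
    # (True and False) is statement-level conjunction, which is simply
    # false, whereas and_ above is the lattice meet, whose value on
    # disjoint descriptions is Nothing rather than false.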
For example    ((Nixon    iS (a UnindictedCoConspirator))    is true)    (((a Price (with merchandise Tea) (with place China)) is 81) is true)    does not imply    (same    (Nixon is (a UnindictedCoConspirator))    (a Price (with merchandise Tea) (with place China)))    5 -- Basic Axioms    We will state    some of the axioms for the description    system.    The axioms for a theory are usually stated in a    metalanguage    of    the    theory.    However,    since    our    language    contains    its metalanguage,    we can here give the    axioms    as ordinary    statements    in the description system    itself.    5.1 Extensionality    Inheritance    obeys an Axiom of Extensionality    which is    one of the most fundamental    axioms of Omega.    Many    important    properties    can be derived from extensionality    which can be expressed in Omega as follows:    (W    (=descriptionl    iS =description2)    (for-a//    =d (=j    (=d iS =descriptionl)    (=d iS zdescription2))))    Note that the meaning of the above statement would be    drastically    changed    if we simply omitted the universal    quantifier    as follows    (W    (=descriptionl    iS =description2)    (3    (=d iS =descriptionl)    (=d is =description2)))    The    axiom    extensionality    illustrates    the    utility    of    explicitly    incorporating    quantification    in the language in    contrast    to some programming    languages which claim to    be based on logic.    From    this axiom alone we are able to derive most of    the    lattice-theoretic    properties    of    descriptions.    In    particular    we can    deduce    that    is is a reflexive and    transitive    relation.    The following    (=description    iS =description)    expresses    the    reflexivity    of inheritance    whereas    the    following    (4    (A    (=descriptionl    /S =description2)    (=description2    is =descriptions))    kdescriptionl    is zdescriptiong))    expresses the transitivity    of inheritance.    5.2 Commutativity    Commutativity    says that the order in which attributions    Of a concept    are written    is irrelevant    We use the    notation    that    an    expression    of the form XC..>> is a    sequence    of 0 or more elements    159    (same    (a =descriptionl    <<=attributionsl>>    =attribution2    <<=attributions3>>    =attributionq    <<=attributionsg>>)    (a =descriptionl    <<=attributionsl>>    =attributionq    <<=attributionsg>>    =attribution2    <<=attributionsg>>))    (Susan is (a Mother    (with    child Jim)    (with    father Bill)    (with    child (a    Female))))    5.5 Monotonicity    of Atributes    Monotonicity    of attributes    is a fundamental    property of    instance    descriptions    which    is    close)y    related    to    transitivity    of inheritance.    (=descriptionl    is =description2)    (is    (a =concept (With =attribute =descriptionl))    (a =concept (With =attribute =descriptionp))))    For example    ((a Father    (with    child Henry) (With mother Martha)) same    (a Father    (with    mother Martha) (with    child Henry)))    For example if    5.3 Deletion    (Fred    is (an American))    (Bill is (a Person (with    The    axiom    of    Deletion    is that    attributions    of an    instance    description    can be deleted to produce a more    general    instance    description.    
(is    (a =descriptionl    <<=attributions-l>>    zattribution-2    <<=ettributiona-3    (a =descriptionl    <<=attributions-I>>    <<=attributions-3))))    father Fred)))    then    (Bill is (a Person (With    father (an kkTWkaI7~~~~    Note    that    the    complementation    in    monotonic.    For example    Omega    is    not    For example    ((a Bostonian)    is (a NewEnglander))    (is    (a Father    (With    child Henry)    (With    mother Martha))    (a Father    (with    mother Martha)))    does not imply that    ((not    (a Bostonian))    is (not (a NewEnglander)))    5.6 Constraints    5.4 Merging    Constraints    can be used to restrict    the objects    will satisfy certain    attributions.    For example    which    One    of    the    most    fundamental    axioms in Omega is    Merging    which    says    that    attributions    of the    same    concept    can be merged.    (a Human (withh?straint    child (a Male)))    describes    humans    who have    Axiom    for Constraints    is    only male children.    The    (=descriptionl    is (a =description2 <<=attributions-l>>))    kdescriptionl    is (a =description2 <<=attributions-2)))))    (is    =descriptionl    (a =description2    <<=attributions-1>> <<=attributions-2)))))    ((a =C (withconstraint    =R =dl)    (with    =R =d2)) is    (a =C (with    =R (and=dl    =d2))))    If    For example if    (Joan is (a Human    (WithConstraint    child (a Male))    (With child Jean)))    (Susan is (a Mother (with    child Jim)))    (Susan is (a Mother    (with    father Bill)))    (Susan is (a Mother    (With    child (a Female))))    160    then    6.2 Projective    Relations    (Joan iS (a Human (with    child (SfK! (a Male) Jean))))    Note that solely from the statement    (Ann iS (a Human (with child (a Male)) (With child Jean)))    no important    conclusions    can be drawn in Omega.    It    entirely    possible that Jean is a female with a brother.    is    We have found the constrained    attributions    in Omega to    be    useful    generalizations    of the    increasingly    popular    “constraint    languages” which propagate values through a    network    of property lists.    If    (2 is (a Complex (with    real-part (> 0))))    and    (2 iS (a Complex (With    real-part (an Integer))))    then    by    merging    it    follows    that    (z iS (a Complex (With    real-part    (> 0)) (with real-part    (an Integer)))).    However    in order to be able to conclude that    (z iS (a Complex (With    real-part    (and (> 0) (an Integer)))))    some    additional    information    is needed.    One    very    general    way to provide this information    is by    (rsalgart    iS (a Projective-relation    (with    concept Complex)))    6 -- Higher    Order Capabilities    and by the statement    In this section    we present examples which illustrate    power of the higher-order    capabilities of Omega.    the    6.1 Transitive    Relations    If    (3 is ($7 Integer    (with    larger 4)))    and    (4 is (an Integer    (with    larger 5))),    we    can    conclude    by    monotonicity    that    (3 is (an Integer    (With    larger (an Integer (with larger 5)))))    From    the above statement,    we would like to be able to    conclude    that    (3 is (an Integer    (with    larger 5))). This goal    can be accomplished    by the statement    (larger    is (a Transitive-relation    (with    concept Integer)))    which    says that    larger is a transitive    relation    for the    concept    Integer.    
6 -- Higher Order Capabilities

In this section we present examples which illustrate the power of the higher-order capabilities of Omega.

6.1 Transitive Relations

If

(3 is (an Integer (With larger 4)))

and

(4 is (an Integer (With larger 5)))

we can conclude by monotonicity that

(3 is (an Integer (With larger (an Integer (With larger 5)))))

From the above statement, we would like to be able to conclude that (3 is (an Integer (With larger 5))). This goal can be accomplished by the statement

(larger is (a Transitive-relation (With concept Integer)))

which says that larger is a transitive relation for the concept Integer. The Axiom for Transitive Relations states that if R is a transitive relation for a concept C, and x is an instance of C which is R-related to an instance of C which is R-related to m, then x is R-related to m:

(=> (=R is (a Transitive-relation (With concept =C)))
    (is (a =C (With =R (a =C (With =R =m))))
        (a =C (With =R =m))))

The desired conclusion can be reached by using the above axiom with =C bound to Integer, =R bound to larger, and =m bound to 5.

6.2 Projective Relations

If

(z is (a Complex (With real-part (> 0))))

and

(z is (a Complex (With real-part (an Integer))))

then by merging it follows that

(z is (a Complex (With real-part (> 0)) (With real-part (an Integer))))

However, in order to be able to conclude that

(z is (a Complex (With real-part (and (> 0) (an Integer)))))

some additional information is needed. One very general way to provide this information is by

(real-part is (a Projective-relation (With concept Complex)))

together with the Axiom for Projective Relations:

(=> (=R is (a Projective-relation (With concept =C)))
    (is (a =C (With =R =d))
        (a =C (WithConstraint =R =d))))

The desired conclusion is reached by using the above axiom with =C bound to Complex, =R bound to real-part, and =d bound to (> 0); the Axiom for Constraints then combines (> 0) with (an Integer).

6.3 Inversion

Inverting relations for efficiency of retrieval is a standard technique in data base organization. Inversion makes use of the converse of a relation with respect to a concept, which satisfies the following Axiom for Converse:

(=R same (a Converse (With relation (a Converse (With relation =R)
                                                (With concept =C)))
          (With concept =C)))

The Axiom of Inversion expresses how to invert inheritance relations for constrained instance descriptions:

(<=> (=d1 is (a =C (WithConstraint =R (an =d2))))
     ((a =R (With (a Converse (With relation =R) (With concept =C)) =d1))
      is (an =d2)))

For example suppose

((a Converse (With relation son) (With concept Person)) same Parent)

Then we can conclude

(Sally is (a Person (WithConstraint son (an American))))

if and only if

((a Son (With parent Sally)) is (an American))

We have found inversion to be a useful generalization of the generalized selection mechanisms in Simula, SmallTalk, and KRL, as well as the generalized getprop mechanism in FRL. The interested reader might try to define the transitivity, projectivity, and converse relations in other "knowledge representation languages".
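As an informal illustration of what a transitive-relation declaration licenses, the following toy Python fragment computes the closure that the Axiom for Transitive Relations sanctions. It is not Omega's inference engine; the triple store and the tacit typing of every term as an Integer are assumptions for the sake of the example.

    # A toy illustration (not Omega's inference engine): once a relation is
    # declared transitive for a concept, nested attributions collapse.
    transitive = {("larger", "Integer")}          # (relation, concept) declarations

    facts = {("3", "larger", "4"), ("4", "larger", "5")}

    def closure(facts, transitive):
        """Repeatedly apply the Axiom for Transitive Relations until no new
        R-attributions appear (here every term is tacitly an Integer)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for (x, r, y) in list(facts):
                for (y2, r2, z) in list(facts):
                    if y == y2 and r == r2 and (r, "Integer") in transitive:
                        if (x, r, z) not in facts:
                            facts.add((x, r, z))
                            changed = True
        return facts

    print(("3", "larger", "5") in closure(facts, transitive))   # True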
7 -- Conclusions

Omega encompasses the capabilities of the ω-order quantificational calculus, type theory, and pattern matching languages in a unified way. We have illustrated how Omega is more powerful than First Order Logic by showing how it can directly express important properties of relations, such as transitivity, projectivity, and converse, that are not first order definable.

Omega is based on a small number of primitive concepts including inheritance, instantiation, attribution, viewpoint, logical operations (conjunction, disjunction, negation, quantification, etc.) and lattice operations (meet, join, complement, etc.). It makes use of inheritance and attribution between descriptions to build a network of descriptions in which knowledge can be embedded.

Omega is sufficiently powerful to be able to express its own rules of inference. In this way Omega represents a self-describing system in which a great deal of knowledge about itself can be embedded. Because of its expressive power, we have to be very careful in the axiom system for Omega in order to avoid Russell's paradox. Omega uses mechanisms which combine ideas from the Lambda Calculus and Intuitionistic Logic to avoid contradictions in the use of self reference.

We have found axiomatization to be a powerful technique in the development, design, and use of Omega. Axiomatization has enabled us to evolve the design of Omega by removing many bugs which have shown up as undesirable consequences of the axioms. The axiomatization has acted as a contract between the implementors and users of the system. The axioms provide a succinct specification of the rules of inference that can be invoked. The development of Omega has focused on the goals of conceptual simplicity and power. The axiomatization of Omega is in itself a measure of our progress in achieving these goals.

8 -- Related Work

The intellectual roots of our description system go back to von Neumann-Bernays-Goedel set theory [Goedel: 1940], the ω-order quantificational calculus, and the lambda calculus. Its development has been influenced by the property lists of LISP, the pattern matching constructs in PLANNER-71 and its descendants QA-4, POPLER, CONNIVER, etc., the multiple descriptions and beta structures of MERLIN, the class mechanism of SIMULA, the frame theory of Minsky, the packagers of PLASMA, the stereotypes in [Hewitt: 1975], the tangled hierarchies of NETL, the attribute grammars of Knuth, the type system of CLU, the descriptive mechanisms of KRL-0, the partitioned semantic networks of [Fikes and Hendrix: 1977], the conceptual representations of [Yonezawa: 1977], the class mechanism of SMALLTALK [Ingalls: 1978], the goblets of Knowledge Representation Semantics [Smith: 1978], the selector notation of BETA, the inheritance mechanism of OWL, the mathematical semantics of actors [Hewitt and Attardi: 1978], the type system in Edinburgh LCF, the XPRT system of Luc Steels, and the constraints in [Borning: 1977, 1979] and [Steele and Sussman: 1978].

9 -- Further Work

We have also developed an Omega Machine (which is not described in this paper) that formalizes the operational semantics of Omega.

Mike Brady has suggested that it might be possible to develop a denotational semantics for Omega along the lines of Scott's model of the lambda calculus. This development is one possible approach to establishing the consistency of Omega.

10 -- Acknowledgments

We are grateful to Dana Scott for pointing out a few axioms that were incorrectly stated in a preliminary version of this paper. Jerry Barber has been extremely helpful in aiding us in developing and debugging Omega. Brian Smith and Gene Ciccarelli helped us to clear up some important ambiguities.
Conversations with Alan Borning, Scott Fahlman, William Martin, Allen Newell, Alan Perlis, Dana Scott, Brian Smith, and the participants in the "Message Passing Systems" seminar were extremely helpful in getting the description system nailed down. Richard Weyhrauch has raised our interest in meta-theories; his system FOL is one of the first to exploit the classical logical notion of metatheory in AI systems. Several discussions with Luc Steels have been the source of cross-fertilization between the ideas in our system and his XPRT system. Roger Duffey and Ken Forbus have served as extremely able teaching assistants in helping to develop this material for the Paradigms for Problem Solving course at MIT. Comments by Peter Deutsch and Peter Szolovits have materially helped to improve the presentation.

Our logical rules of inference are a further development of a natural deduction system by Kalish and Montague. Some of the axioms for inheritance were inspired by Set Theory.

11 -- Bibliography

Barber, G. "Reasoning About Change in Knowledgeable Office Systems" 1980.

Birtwistle, G. M.; Dahl, O.; Myhrhaug, B.; and Nygaard, K. "SIMULA Begin" Auerbach, 1973.

Bobrow, D. G. and Winograd, T. "An Overview of KRL-0, a Knowledge Representation Language" Cognitive Science, Vol. 1, No. 1, 1977.

Borning, A. "ThingLab -- An Object-Oriented System for Building Simulations Using Constraints" Proceedings of IJCAI-77, August 1977.

Bourbaki, N. "Theory of Sets" Book I of Elements of Mathematics, Addison-Wesley, 1968.

Church, A. "A Formulation of the Simple Theory of Types" 1941.

Dahl, O. J. and Nygaard, K. "Class and Subclass Declarations" in Simulation Programming Languages, J. N. Buxton (Ed.), North Holland, 1968, pp. 158-174.

Fahlman, Scott "Thesis Progress Report" MIT AI Memo 331, May 1975.

Fikes, R. and Hendrix, G. "A Network-Based Knowledge Representation and its Natural Deduction System" IJCAI-77, Cambridge, Mass., August 1977, pp. 235-246.

Goedel, K. "The Consistency of the Axiom of Choice and of the Generalized Continuum Hypothesis with the Axioms of Set Theory" Annals of Mathematics Studies No. 3, Princeton, 1940.

Hammer, M. and McLeod, D. "The Semantic Data Model: A Modeling Mechanism for Data Base Applications" SIGMOD Conference on the Management of Data, Austin, Texas, May 31-June 2, 1978.

Hawkinson, Lowell "The Representation of Concepts in OWL" Proceedings of IJCAI-75, Tbilisi, Georgia, USSR, September 1975, pp. 107-114.

Hewitt, C. "Stereotypes as an ACTOR Approach Towards Solving the Problem of Procedural Attachment in FRAME Theories" Proceedings of the Interdisciplinary Workshop on Theoretical Issues in Natural Language Processing, Cambridge, June 1975.

Kalish and Montague.

Kristensen, B. B.; Madsen, O. L.; Moller-Pedersen, B.; and Nygaard, K. "A Definition of the BETA Language" Technical Report TR-8, Aarhus University, February 1979.
Moore, J. and Newell, A. "How Can MERLIN Understand?" CMU AI Memo, November 1973.

Quine, W. V. "New Foundations for Mathematical Logic" 1952.

Burstall, R. and Goguen, J. "Putting Theories Together to Make Specifications" Proceedings of IJCAI-77, August 1977.

Rulifson, J. F.; Derksen, J. A.; and Waldinger, R. J. "QA4: A Procedural Calculus for Intuitive Reasoning" SRI Technical Note 73, November 1972.

Schubert, L. K. "Extending the Expressive Power of Semantic Networks" Artificial Intelligence 7.

Steele, G. L. and Sussman, G. J. "Constraints" MIT Artificial Intelligence Memo 502, November 1978.

Steels, L. Master's Thesis, MIT, 1979.

Weyhrauch, R. "Prolegomena to a Theory of Formal Reasoning" Stanford AI Memo AIM-315, December 1978. Forthcoming in AI Journal.
A Representation Language Language

Russell Greiner and Douglas B. Lenat
Computer Science Department
Stanford University

ABSTRACT

The field of AI is strewn with knowledge representation languages. The language designer typically has one particular application domain in mind; as subsequent types of applications are tried, what had originally been useful features become undesirable limitations, and the language is overhauled or scrapped. One remedy to this bleak cycle might be to construct a representational scheme whose domain is the field of representational languages itself. Toward this end, we designed and implemented RLL, a frame-based Representation Language Language. The components of representation languages in general (such as slots and inheritance mechanisms) and of RLL itself are encoded declaratively as frames. Modifying these frames can change the semantics of RLL, by altering the behavior of the RLL environment.

1. MOTIVATION

"One ring to rule them all... and in the darkness bind them."

Often a large Artificial Intelligence project begins by designing and implementing a high-level language in which to easily and precisely specify the nuances of the task. The language designer typically builds his Representation Language around the one particular highlighted application (such as molecular biology for Units [Stefik], or natural language understanding for KRL [Bobrow & Winograd] and OWL [Szolovits, et al.]). For this reason, his language is often inadequate for any subsequent applications (except those which can be cast in a form similar in structure to the initial task): what had originally been useful features become undesirable limitations (such as Units' explicit copying of inherited facts, or KRL's sophisticated but slow matcher).

Building a new language seems cleaner than modifying the flawed one, so the designer scraps his "extensible, general" language after its one use. The size of the February 1980 SIGART shows how many similar yet incompatible representation schemes have followed this evolutionary path.

One remedy to this bleak cycle might be to construct a representation scheme whose domain is the field of representational languages itself: a program which could then be tailored to suit many specific applications. Toward this end, we are designing and implementing RLL, an object-centered¹ Representation Language Language.² This paper reports on the current state of our ideas and our implementation.

¹ This "object-centering" does not represent a loss in generality. We will soon see that each part of the full system, including procedural information, is reified as a unit.

² As RLL is itself a completely self-descriptive representation language, there is no need for an RLLL.

2. INTRODUCTION

RLL explicitly represents (i.e. contains units³ for) the components of representation languages in general and of itself in particular. The programming language LISP derives its flexibility in a similar manner: it, too, encodes many of its constructs within its own formalisms.
Representation languages aim at easy, natural interfacing to users; therefore their primitive building blocks are larger, more abstract, and more complex than the primitives of programming languages.

Building blocks of a representation language include such things as control regimes (ExhaustiveBackwardChaining, Agendas), methods of associating procedures with relevant knowledge (Footnotes, Demons), fundamental access functions (Put/Get, Assert/Match), automatic inference mechanisms (InheritFromEvery2ndGeneration, InheritButPermitExceptions), and even specifications of the intended semantics and epistemology of the components (ConsistencyConstraint, EmpiricalHeuristic).

RLL is designed to help manage these complexities by providing (1) an organized library of such representation language components, and (2) tools for manipulating, modifying, and combining them. Rather than produce a new representation language as the "output" of a session with RLL, it is rather the RLL language itself, the environment the user sees, which changes gradually in accord with his commands.

3. HOW IS A REPRESENTATION LANGUAGE LIKE AN ORGAN?

When the user starts RLL, he finds himself in an environment very much like the Units package [Stefik], with one major difference. If he desires a new type of inheritance mechanism, he need only create a new Inheritance-type of unit and initialize it with the desired property; that new mode of inheritance will automatically be enabled. This he can do using the same editor and accessing functions he uses for entering and codifying his domain knowledge (say, poultry inspection); only here the information pertains to the actual Knowledge Base system itself, not turkeys.

The Units package has Get and Put as its fundamental storage and retrieval functions; therefore RLL also begins in that state. But there is nothing sacred about even these two "primitives". Get and Put are encoded as modifiable units: if they are altered, the nature of accessing a slot's value will change correspondingly. In short, by issuing a small number of commands the user can radically alter the character of the RLL environment, molding it to his personal preferences and to the specific needs of his application. RLL is responsible for performing the necessary "truth maintenance" operations (e.g. retroactive updates) to preserve the correctness of the system as a whole. For example, Get and Put can be transformed into units which, when run as functions, resemble Assert (store proposition) and Match (retrieve propositions), and the user need never again mention "slots" at all.

³ RLL is a frame-based system [Minsky], whose building blocks are called Units [Stefik], [Bobrow & Winograd]. Each unit consists of a set of Slots with their respective values.
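The flavor of this can be suggested with a small Python sketch; it is a stand-in only (RLL itself is built in InterLisp, and the dictionary layout here is invented). The point is that the accessors live in the knowledge base, so editing the entry for Get changes every subsequent access.

    # Sketch of the idea that Get and Put are themselves units in the
    # knowledge base: replacing the function stored under "Get" changes
    # the behavior of all access. The unit names and layout are invented.
    units = {
        "Get": {"ToCompute": lambda u, s: units[u].get(s)},
        "Put": {"ToCompute": lambda u, s, v: units[u].__setitem__(s, v)},
        "Fido": {"Color": "brown"},
    }

    def get_value(unit, slot):
        return units["Get"]["ToCompute"](unit, slot)

    print(get_value("Fido", "Color"))          # brown

    # Altering the unit for Get alters the nature of every access:
    trace = lambda u, s: print("accessing", s, "of", u) or units[u].get(s)
    units["Get"]["ToCompute"] = trace
    get_value("Fido", "Color")                 # now prints a trace line first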
RLL is more like a stop organ than a piano. Each stop corresponds to a "pre-fabricated" representational part (e.g. a slot, inheritance, format, control regime, etc.), which resides in the overall RLL system. The initial RLL is simply one configuration of this organ, with certain stops "pulled out" to mimic the Units package. These particular stops reflect our intuitions of what constitutes a general, powerful system. Some of the units initially "pulled out" (activated) define more or less standard inheritance regimes, such as Inherit-Along-IS-A-Links, which enables Fido to gather default information from AnyDog.

We chose to include a large assortment of common slots. One hundred and six types of slots, including IS-A, SuperClass, BroaderHeuristics, and TypicalExamples, are used to hierarchically organize the units. That number grows daily, as we refine the organizing relationships which were originally "smeared" together into just one or two kinds of slots (e.g. A-Kind-Of). An additional fifteen types of slots, including ToGetValue, ToPutValue, ToKillUnit, and ToAddValue, collectively define the accessing/updating functions.

This bootstrapping system (the initial configuration of "organ stops") does not span the scope of RLL's capabilities: many of its stops are initially in the dormant position. Just as a competent musician can produce a radically different sound by manipulating an organ's stops, so a sophisticated RLL user can define his own representation by turning off some features and activating others. For instance, an FRL devotee may notice -- and choose to use exclusively -- the kind of slot called A-Kind-Of, which is the smearing together of Is-A, SuperSet, Abstraction, TypicalExampleOf, PartOf, etc. He may then deactivate those more specialized units from his system permanently. A user who does not want to see his system as a hierarchy at all can simply deactivate the A-Kind-Of unit and its progeny. The user need not worry about the various immediate and indirect consequences of this alteration (e.g., deleting the Inherit-Along-IS-A-Links unit); RLL will take care of them. By selectively pushing and pulling, he should be able to construct a system resembling almost any currently used representational language, such as KRL, OWL, and KLONE;⁴ after all, an organ can be made to sound like a piano.

Unlike musical organs, RLL also provides its user with mechanisms for building his own stops (or even types of stops, or even mechanisms for building stops). With experience, one can use RLL to build his own new components. Rather than building them from scratch (e.g., from CAR, CDR, and CONS), he can modify some existing units of RLL (employing other units which are themselves tools designed for just such manipulations).

⁴ This particular task, of actually simulating various existing representation languages, has not yet been done. It is high on our agenda of things to do. We anticipate it will require the addition of many new components (and types of components) to RLL, many representing orthogonal decompositions of the space of knowledge representation.

The following examples convey the flavor of what can currently be done with the default settings of the RLL "organ stops".

4. EXAMPLES

4.1. EXAMPLE: Creating a New Slot

In the following example, the user wishes to define a Father slot, in a sexist genealogical knowledge base which contains only the primitive slots Mother and Spouse.
As RLL devotes a unit to storing the necessary knowledge associated with each kind of slot (see Figure 1), defining a new kind of slot means creating and initializing one new unit. In our experience, the new desired slot is frequently quite similar to some other slot(s), with but a few distinguishing differences. We exploited this regularity in developing a high level "slot-defining" language, by which a new slot can be defined precisely and succinctly in a single declarative statement.

Name: IS-A
Description: Lists the classes I AM-A member of.
Format: List-of-Entries
Datatype: Each entry represents a class of objects.
Inverse: Examples
IS-A: (AnySlot)
UsedByInheritance: Inherit-Along-IS-A-Links
MyTimeOfCreation: 1 April 1979, 12:01 AM
MyCreator: D.B.Lenat

Figure 1 - Unit devoted to the "IS-A" slot. There are many other slots which are appropriate for this unit, whose values will be deduced automatically (e.g. inherited from AnySlot) if requested.

Creating a new slot for Father is easy: we create a new unit called Father, and fill its HighLevelDefn slot with the value (Composition Spouse Mother). Composition is the name of a unit in our initial system, a so-called "slot-combiner" which knows how to compose two slots (regarding each slot as a function from one unit to another). We also fill the new unit's Isa slot, deriving the unit shown in Figure 2.

Name: Father
IS-A: (AnySlot)
HighLevelDefn: (Composition Spouse Mother)

Figure 2 - Slots filled in by hand when creating the unit devoted to the "Father" slot. Several other slots (e.g., the syntactic slots MyCreator, MyTimeOfCreation) are filled in automatically at this time.

The user now asks for KarlPhilippEmanuel's father, by typing (GetValue 'KPE 'Father). GetValue first tries a simple associative lookup (GET), but finds there is no Father property stored on KPE, the unit representing KarlPhilippEmanuel. GetValue then tries a more sophisticated approach: ask the Father unit how to compute the Father of any person. Thus the call becomes

[Apply* (GetValue 'Father 'ToCompute) 'KPE].

Notice this calls on GetValue recursively, and once again there is no value stored on the ToCompute slot of the unit called Father. The call now has expanded into

[Apply* (Apply* (GetValue 'ToCompute 'ToCompute) 'Father) 'KPE].

Luckily, there is a value on the ToCompute slot of the unit ToCompute: it says to find the HighLevelDefn, find the slot-combiner it employs, find its ToCompute⁵, and ask it. Our call is now expanded out into the following:

[Apply* (Apply* (GetValue 'Composition 'ToCompute) 'Spouse 'Mother) 'KPE].

The unit called Composition does indeed have a ToCompute slot; after applying it, we have:

[Apply* '(λ (x) (GetValue (GetValue x 'Mother) 'Spouse)) 'KPE].

This asks for the Mother slot of KPE, which is always physically stored in our knowledge base, and then asks for the value stored in her Spouse slot. The final result, JohannSebastian, is then returned. It is also cached (stored redundantly for future use) on the Father slot of the unit KPE. See [Lenat et al., 1979] for details.

⁵ Each unit which represents a function has a ToCompute slot, which holds the actual LISP function it encodes. Associating such a ToCompute slot with each slot reflects our view that each slot is a function, whose argument happens to be a unit, and whose computed value may be cached away.
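The trace above can be rendered as a few lines of Python (a hypothetical re-coding, not RLL's InterLisp implementation; the dictionary layout and function names are ours). The recursion bottoms out in the slot-combiner's ToCompute, and the result is cached exactly as described.

    # Hypothetical Python rendering of the GetValue trace above.
    units = {
        "KPE":           {"Mother": "AnnaMagdelena"},
        "AnnaMagdelena": {"Spouse": "JohannSebastian"},
        "Father":        {"HighLevelDefn": ("Composition", "Spouse", "Mother")},
        "Composition":   {"ToCompute":
            lambda s1, s2: (lambda u: get_value(get_value(u, s2), s1))},
    }

    def get_value(unit, slot):
        if slot in units[unit]:                       # simple associative lookup
            return units[unit][slot]
        combiner, *args = units[slot]["HighLevelDefn"]
        fn = units[combiner]["ToCompute"](*args)      # ask the slot how to compute
        value = fn(unit)
        units[unit][slot] = value                     # cache for future use
        return value

    print(get_value("KPE", "Father"))   # JohannSebastian (now cached on KPE)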
Several other slots (besides ToCompute) are deduced automatically by RLL from the HighLevelDefn of Father (see Figure 3) as they are called for. The Format of each Father slot must be a single entry, which is the name of a unit which represents a person. The only units which may have a Father slot are those which may legally have a Mother slot, viz., any person. Also, since Father is defined in terms of both Mother and Spouse, using the slot combiner Composition, a value stored on KPE:Father must be erased if ever we change the value for KPE's Mother or AnnaMagdelena's Spouse, or the definition (that is, ToCompute) of Composition.

Name: Father
IS-A: (AnySlot)
HighLevelDefn: (Composition Spouse Mother)
Description: Value of unit's Mother's Spouse.
Format: SingleEntry
Datatype: Each entry is a unit representing a person.
MakesSenseFor: AnyPerson
DefinedInTermsOf: (Spouse Mother)
DefinedUsing: Composition
ToCompute: (λ (x) (GetValue (GetValue x 'Mother) 'Spouse))

Figure 3 - Later form of the Father unit, showing slots filled in automatically.

Notice the ease with which a user can currently "extend his representation", enlarging his vocabulary of new slots. A similar, though more extravagant example would be to define FavoriteAunt as (SingleMost (Unioning (Composition Sister Parents) (Composition Spouse Brother Parents)) $$). Note that "Unioning" and "SingleMost" are two of the slot combiners which come with RLL, whose definition and range can be inferred from this example. It is usually no harder to create a new type of slot format (OrderedNonemptySet), slot combiner (TwoMost, Starring), or datatype (MustBePersonOver16) than it was to create a new slot type or inheritance mechanism. Explicitly encoding such information helps the user (and us) understand the precise function of each of the various components. We do not yet feel that we have a complete set of any of these components, but are encouraged by empirical results like the following: the first two hundred slots we defined required us to define thirteen slot combiners, yet the last two hundred slots required only five new slot combiners.
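The following Python fragment guesses at the semantics these examples suggest for three slot combiners, treating each slot as a function from a unit to a list of values; the definitions are our reading of the examples, not RLL's code, and the scoring argument standing in for "$$" is an assumption.

    # Guessed semantics for three slot combiners, as higher-order functions
    # from slot-functions to a slot-function (each slot maps a unit to values).
    def composition(*slots):
        def slot(u):
            vals = [u]
            for s in reversed(slots):            # apply the rightmost slot first
                vals = [w for v in vals for w in s(v)]
            return vals
        return slot

    def unioning(*slots):
        return lambda u: [v for s in slots for v in s(u)]

    def single_most(slot, score):                # 'score' stands in for "$$"
        return lambda u: max(slot(u), key=score)

    # e.g. Father = (Composition Spouse Mother):
    mother = lambda u: [kb[u]["Mother"]]
    spouse = lambda u: [kb[u]["Spouse"]]
    kb = {"KPE": {"Mother": "AnnaMagdelena"},
          "AnnaMagdelena": {"Spouse": "JohannSebastian"}}
    father = composition(spouse, mother)
    print(father("KPE"))                         # ['JohannSebastian']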
4.2. EXAMPLE: Creating a New Inheritance Mode

Suppose a geneticist wishes to have a type of inheritance which skips every second ancestor. He browses through the hierarchy of units descending from the general one called Inheritance, finds the closest existing unit, InheritSelectively, which he copies into a new unit, InheritFromEvery2ndGeneration. Editing this copy, he finds a high level description of the path taken during the inheritance, wherein he replaces the single occurrence of "Parent" by "GrandParent" (or by two occurrences of Parent, or by the phrase (Composition Parent Parent)). After exiting from the edit, the new type of inheritance will be active; RLL will have translated the slight change in the unit's high-level description into a multitude of low-level changes. If the geneticist now specifies that Organism#34 is an "InheritFromEvery2ndGeneration offspring" of Organism#20, this will mean the right thing. Note that the tools used (browser, editor, translator, etc.) are themselves encoded as units in RLL.

4.3. EXAMPLE: Epistemological Status

To represent the fact that John believes that Mary is 37 years old, RLL adds the ordered pair (SeeUnit AgeOfMary0001) to the Age slot of the Mary unit. RLL creates a unit called AgeOfMary0001, fills its *value* slot with 37 and its EpiStatus slot with "John believes". See Figure 4. Note this mechanism suffices to represent belief about belief (just a second chained SeeUnit pointer), quoted belief ("John thinks he knows Mary's age", by omitting the *value* 37 slot in AgeOfMary0001), situational fluents, etc. This mechanism can also be used to represent arbitrary n-ary relations, escaping the associative triple (i.e. Unit/Slot/value) limitation.

Name: Mary
IS-A: (Person Female ContraryActor)
Description: The grower of silver bells etc.
Age: ((SeeUnit AgeOfMary0001) (SeeUnit AgeOfMary0002))

Name: AgeOfMary0001                Name: AgeOfMary0002
Isa: (UnitForASlotFiller)          Isa: (UnitForASlotFiller)
LivesInUnit: Mary                  LivesInUnit: Mary
LivesInSlot: Age                   LivesInSlot: Age
*value*: 37                        *value*: 21
EpiStatus: John believes           During: Wedding805
Teleology: Epistemic               Teleology: Historic

Figure 4 - Representing "John believes that Mary is 37, but she's really 39. When she was married, she was 21".
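A minimal sketch of the SeeUnit escape in Python (the layout and the resolution function are our assumptions, not RLL's accessors) shows how a slot value can be an indirection to a unit carrying the real *value* together with its epistemological status.

    # Sketch of the SeeUnit escape: a slot's value may point at a unit that
    # carries the real *value* plus its epistemological status.
    units = {
        "Mary": {"Age": ("SeeUnit", "AgeOfMary0001")},
        "AgeOfMary0001": {"*value*": 37, "EpiStatus": "John believes"},
    }

    def get_with_status(unit, slot):
        v = units[unit][slot]
        if isinstance(v, tuple) and v[0] == "SeeUnit":
            filler = units[v[1]]
            return filler.get("*value*"), filler.get("EpiStatus")
        return v, None                      # plain value: no special status

    print(get_with_status("Mary", "Age"))   # (37, 'John believes')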
4.4. EXAMPLE: Enforcing Semantics

Suppose that Lee, a user of RLL, is constructing HearSayXXIV, a representation language which contains cooperating knowledge sources (KSs). He specifies that each unit representing a knowledge source should have some very precise applicability criteria (he defines a FullRelevancy slot) and also a much quicker, rougher check of its potential relevance (he defines a PrePreConditions slot). If HearSayXXIV users employ these two slots in just the way he intended, they will be rewarded with a very efficiently-running program.

But how can Lee be sure that users of HearSayXXIV will use these two new slots the way he intended? He also defines a new kind of slot called Semantics. The unit for each type of slot can have a Semantics slot, in which case it should contain criteria that the values stored in such slots are expected to satisfy.

Lee fills the Semantics slot of the unit called PrePreConditions with a piece of code that checks that the PrePreConditions slot of every KS unit is filled by a Lisp predicate which is very quick to execute, which (empirically) correlates highly to the FullRelevancy predicate, and which rarely returns NIL when the latter would return T. This bundle of constraints captures what he "really means" by PrePreConditions.

A user of HearSayXXIV, say Bob, now builds and runs a speech understanding program containing a large collection of cooperating knowledge sources. As he does so, statistics are gathered empirically. Suppose Bob frequently circumvents the PrePreConditions slot entirely, by placing a pointer there to the same long, slow, complete criteria he has written for the FullRelevancy slot of that KS. This is empirically caught as a violation of one of the constraints which Lee recorded in the Semantics slot of the unit PrePreConditions. As a result, the Semantics slot of the Semantics unit will be consulted to find an appropriate reaction; the code therein might direct it to print a warning message to Bob: "The PrePreConditions slot of a KS is meant to run very quickly, compared with the FullRelevancy slot, but 70% of yours don't; please change your PrePreConditions slots, or your FullRelevancy slots, or (if you insist) the Semantics slot of the PrePreConditions unit".⁶

⁶ This work has led us to realize the impossibility of unambiguously stating semantics. Consider the case of the semantics of the Lisp function "OR". Suppose one person believes it evaluates its arguments left to right until a non-null value is found; a second person believes it evaluates right to left; a third person believes it evaluates all simultaneously. They go to the Semantics slot of the unit called OR to settle the question. Therein they find this expression: (OR (Evaluate the args left to right) (Evaluate the args right to left)). Person #3 is convinced now that he was wrong, but persons 1 and 2 point to each other and exclaim in unison "See, I told you!" The point of all this is that even storing a Lisp predicate in the Semantics slot only specifies the meaning of a slot up to a set of fixed points. One approaches the description of the semantics with some preconceived ideas, and there may be more than one set of such hypotheses which are consistent with everything stored therein. See [Genesereth & Lenat].

5. SPECIFICATIONS FOR ANY REPRESENTATION LANGUAGE LANGUAGE

The following are some of the core constraints around which RLL was designed. One can issue commands to RLL which effectively "turn off" some of these features, but in that case the user is left with an inflexible system we would no longer call a representation language language. Further details may be found in [Lenat, Hayes-Roth, & Klahr] and in [Genesereth & Lenat].

Self-description: No part of the RLL system is opaque; even the primitive Get and Put and Evaluate functions are represented by individual units describing their operation.² Current status: complete (to a base language level).

Self-modification: Changes in the high-level description of an RLL process automatically result in changes in the Lisp code for -- and hence behavior of -- RLL. Current status: this works for changes in definition, format, etc. of units representing slots and control processes. Much additional effort is required.

Codification of Representation Knowledge: Taxonomies of inheritance, function invocation, etc., and tools for manipulating and creating same. These correspond to the stops of the organ, illustrated above. Current status: this is some of the most exciting research we foresee; only a smattering of representation knowledge has yet been captured.

6. INITIAL "ORGAN STOPS"

The following characteristics pertain especially to the initial state of the current RLL system, wherein all "organ stops" are set at their default positions. Each RLL user will doubtless settle upon some different settings, more suited to the representation environment he wishes to be in while constructing his application program. For details, see [Greiner].

Cognitive economy: Decision-making about what intermediate values to cache away, when to recompute values, expectation-filtering. Current status: simple reasoning is done to determine each of these decisions; the hooks for more complex procedures exist, but they have not been used yet.

Syntactic vs Semantic slots: Clyde should inherit values for many slots from TypicalElephant, such as Color, Diet, Size; but not from slots which refer to TypicalElephant qua data structure, slots such as NumberOfFilledInSlots and DateCreated. Current status: RLL correctly treats these two classes of slots differently, e.g. when initializing a new unit.
Onion field of languages: RLL contains a collection of features (e.g., automatically adding inverse links) which can be individually enabled or disabled, rather than a strict linear sequence of higher and higher level languages. Thus it is more like an onion field than the standard "skins of an onion" layering. Current status: done. Three of the most commonly used settings are bundled together as CORLL, ERLL, and BRLL.

Economy via Appropriate Placement: Each fact, heuristic, comment, etc. is placed on the unit (or set of units) which is as general and abstract as possible. Frequently, new units are created just to facilitate such appropriate placement. In the long run, this reduces the need for duplication of information. One example of this is the use of appropriate conceptual units:

Clarity of Conceptual Units: RLL can distinguish (i.e. devote a separate unit to each of) the following concepts: TheSetOfAllElephants (whose associated properties describe this as a set -- such as #OfMembers or SubCategories), TypicalElephant (on which we might store Expected-TuskLength or DefaultColor slots), ElephantSpecies (which EvolvedAsASpecies some 60 million years ago and is CloselyRelatedTo the HippopotamusSpecies), ElephantConcept (which QualifiesAsA BeastOfBurden and a TuskedPachyderm), and ArchetypicalElephant (which represents an elephant in the real world which best exemplifies the notion of "Elephant-ness"). It is important for RLL to be able to represent them distinctly, yet still record the relations among them. On the other hand, to facilitate interactions with a human user, RLL can accept a vague term (Elephant) from the user or from another unit, and automatically refine it into a precise term. This is vital, since a term which is regarded as precise today may be regarded as a vague catchall tomorrow. Current status: distinct representations pose no problem; but only an exhaustive solution to the problem of automatic disambiguation has been implemented.

7. CONCLUSION

"...in Mordor, where the Shadow lies."

The system is currently usable, and only through use will directions for future effort be revealed. Requests for documentation and access to RLL are encouraged.
There are still many areas for further development of RLL. Some require merely a large amount of work (e.g., incorporating other researchers' representational schemes and conventions); others require new ideas (e.g., handling intensional objects). To provide evidence for our arguments, we should exhibit a large collection of distinct representation languages which were built out of RLL; this we cannot yet do. Several specific application systems live in (or are proposed to live in) RLL; these include EURISKO (discovery of heuristic rules), E&E (combat gaming), FUNNEL (taxonomy of Lisp objects, with an aim toward automatic programming), ROGET (Jim Bennett: guiding a medical expert to directly construct a knowledge based system), VLSI (Mark Stefik and Harold Brown: a foray of AI into the VLSI layout area), and WHEEZE (Jan Clayton and Dave Smith: diagnosis of pulmonary function disorders, reported in [Smith & Clayton]).

Experience in AI research has repeatedly shown the need for a flexible and extensible language -- one in which the very vocabulary can be easily and usefully augmented. Our representation language language addresses this challenge. We leave the pieces of a representation in an explicit and modifiable state. By performing simple modifications to these representational parts (using specially-designed manipulation tools), new representation languages can be quickly created, debugged, modified, and combined. This should ultimately obviate the need for dozens of similar yet incompatible representation languages, each usable for but a narrow spectrum of tasks.

ACKNOWLEDGEMENTS

The work reported here represents a snapshot of the current state of an on-going research effort conducted at Stanford University. Researchers from SAIL and HPP are examining a variety of issues concerning representational schemes in general, and their construction in particular (viz., [Nii & Aiello]). Mark Stefik and Michael Genesereth provided frequent insights into many of the underlying issues. We thank Terry Winograd for critiquing a draft of this paper. He, Danny Bobrow, and Rich Fikes conveyed enough of the good and bad aspects of KRL to guide us along the path to RLL. Greg Harris implemented an early system which performed the task described in Section 4.1. Others who have directly or indirectly influenced this work include Bob Balzer, John Brown, Cordell Green, Johan deKleer, and Rick Hayes-Roth. To sidestep InterLisp's space limitation, Dave Smith implemented a demand unit swapping package (see [Smith]). The research is supported by NSF Grant #MCS-79-01954 and ONR Contract #N00014-80-C-0609.

BIBLIOGRAPHY

Aikins, Jan, "Prototypes and Production Rules: An Approach to Knowledge Representation from Hypothesis Formation", HPP Working Paper HPP-79-10, Computer Science Dept., Stanford University, July 1979.

Bobrow, D. G. and Winograd, T., "An Overview of KRL, a Knowledge Representation Language", 5IJCAI, August 1977.

Brachman, Ron, "What's in a Concept: Structural Foundations for Semantic Networks", BBN Report 3433, October 1976.

Findler, Nicholas V. (ed.), Associative Networks, NY: Academic Press, 1979.
Genesereth, Michael, and Lenat, Douglas B., "Self-Description and -Modification in a Knowledge Representation Language", HPP Working Paper HPP-80-10, June 1980.

Greiner, Russell, "A Representation Language Language", HPP Working Paper HPP-80-9, Computer Science Dept., Stanford University, June 1980.

Lenat, Douglas B., "AM: Automated Discovery in Mathematics", 5IJCAI, August 1977.

Lenat, D. B., Hayes-Roth, F., and Klahr, P., "Cognitive Economy", Stanford HPP Report HPP-79-15, Computer Science Dept., Stanford University, June 1979.

Minsky, Marvin, "A Framework for Representing Knowledge", in The Psychology of Computer Vision, P. Winston (ed.), McGraw-Hill, New York, 1975.

Nii, H. Penny, and Aiello, N., "AGE (Attempt to Generalize): A Knowledge-Based Program for Building Knowledge-Based Programs", 6IJCAI, August 1979.

SIGART Newsletter, February 1980 (Special Representation Issue; Brachman & Smith, eds.).

Smith, David and Clayton, Jan, "A Frame-based Production System Architecture", AAAI Conference, 1980.

Smith, David, "CORLL: A Demand Paging System for Units", HPP Working Paper HPP-80-8, June 1980.

Stefik, Mark J., "An Examination of a Frame-Structured Representation System", 5IJCAI, August 1977.

Szolovits, Peter, Hawkinson, Lowell B., and Martin, William A., "An Overview of OWL, A Language for Knowledge Representation", MIT/LCS/TM-86, Massachusetts Institute of Technology, June 1977.

Woods, W. A., "What's in a Link: Foundations for Semantic Networks", in D. G. Bobrow & A. M. Collins (eds.), Representation and Understanding, Academic Press, 1975.
SPATIAL AND QUALITATIVE ASPECTS OF REASONING ABOUT MOTION

Kenneth D. Forbus
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, Mass. 02139

ABSTRACT

Reasoning about motion is an important part of common sense knowledge. The spatial and qualitative aspects of reasoning about motion through free space are studied through the construction of a program to perform such reasoning. An analog geometry representation serves as a diagram, and descriptions of both the actual motion of a ball and envisioning are used in answering simple questions.

I Introduction

People reason fluently about motion through space. For example, we know that if two balls are thrown into a well they might collide, but if one ball is always outside and the other always inside they cannot. The knowledge involved in this qualitative kind of reasoning seems to be simpler than formal mechanics and appears to be based on our experience in the physical world. Since this knowledge is an important part of our common sense, capturing it will help us to understand how people think and enable us to make machines smarter. The issues involved in reasoning about motion through space were studied by constructing a program, called FROB, that reasons about motion in a simple domain. I believe three important ideas have been illustrated by this work:

1. A quantitative geometric representation simplifies reasoning about space. It can provide a simple method for answering a class of geometric questions. The descriptions of space required for qualitative reasoning can be defined using the quantitative representation, making it a communication device between several representations of a situation.

2. Describing the actual motion of an object can be thought of as creating a network of descriptions of qualitatively distinct types of motion, linked by descriptions of the state of the object before and after each of these motions. This network can be used to analyze the motion and in some cases can be constructed by a process of simulation.

3. The description of the kinds of motion possible from some state (called the envisionment) is useful for answering certain questions and for checking an actual motion against assumptions made about it. The assimilation of assumptions about global properties of the motion into this description makes heavy use of spatial descriptions.

FROB reasons about motion in a simplified domain called the "Bouncing Ball" world. A situation in the Bouncing Ball world consists of a two dimensional scene with surfaces represented by line segments, and one or more balls which are modeled as point masses. We ignore the exact shape of balls, motion after two balls collide, spin, motion in a third spatial dimension, air resistance, sliding motion, and all forces other than gravity.

The initial description of a situation is a diagram containing a description of the surfaces and one or more balls, as in Figure 1.

Figure 1 - A typical scene from the Bouncing Ball world. A situation in the Bouncing Ball world consists of a diagram that specifies surfaces and one or more balls. This drawing only shows the geometric aspects of the descriptions involved.
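For concreteness, one plausible encoding of such a scene follows; it is entirely hypothetical (FROB's actual data structures are not described at this level), with the field names and the elasticity default being our assumptions.

    # One plausible encoding (not FROB's) of a Bouncing Ball scene:
    # surfaces are line segments, balls are point masses.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        x1: float; y1: float; x2: float; y2: float

    @dataclass
    class Ball:
        x: float; y: float
        vx: float = 0.0; vy: float = 0.0
        elasticity: float = 0.8        # fraction of speed kept per collision

    scene = {
        "surfaces": [Segment(0, 0, 10, 0),      # floor
                     Segment(4, 0, 4, 3),       # left wall of a well
                     Segment(7, 0, 7, 3)],      # right wall of a well
        "balls": [Ball(x=2.0, y=5.0, vx=1.0)],
    }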
When given a description of a situation in the Bouncing Ball world, FROB analyzes the surface geometry and computes qualitative descriptions of the free space in the diagram. The person using the program can describe balls, properties of their states of motion, request simulations, and make global assumptions about the motion. FROB incrementally creates and updates its descriptions to accommodate this information, complaining if inconsistencies are detected. Questions may be asked by calling procedures that interrogate these descriptions. The four basic questions FROB can answer are: (1) What can it (a ball) do next?, (2) Where can it go next?, (3) Where can it end up?, and (4) Can these two balls collide?

II Spatial Descriptions

We do not yet know why people are so good at reasoning about space. Theorem proving and symbolic manipulation of algebraic expressions do not seem to account for this ability. Arguments against the former may be found in [1], while the sheer complexity of algebraic manipulations argues against the latter. I conjecture that the fluency people exhibit in dealing with space comes mainly from using their visual apparatus. One example is the use of diagrams. The marks in a diagram reflect the spatial relations between the things they represent, which allows us to use our visual apparatus to interpret these relationships as we would with real objects. In this case, perception provides a simple (at least for the processes that use it) decision procedure for a class of spatial questions.

We do not yet understand the complexities of human vision, but the techniques of analytic geometry can be used to provide decision procedures for geometrically simple cases. FROB uses a Metric Diagram, which is a representation of geometry that combines symbolic and numerical information. The geometric aspects of a problem are represented by symbolic elements whose parameters are numbers in a bounded global coordinate system. The representation is used to answer questions about basic spatial relationships between elements, such as which side of a line a particular point lies on or whether or not two lines touch. Calculation based on properties of the elements suffices to answer these questions.
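Two such decision procedures are easy to state with analytic geometry. The following Python sketch (our illustration, ignoring collinear edge cases) decides which side of a directed line a point lies on, and whether two segments touch.

    # Sketch of two Metric Diagram decision procedures via analytic geometry.
    def side_of_line(px, py, x1, y1, x2, y2):
        """Sign of the cross product: +1 left of the directed line from
        (x1,y1) to (x2,y2), -1 right, 0 on the line."""
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        return (cross > 0) - (cross < 0)

    def segments_touch(a, b):
        """True when segments a and b intersect: each segment's endpoints
        must not lie strictly on the same side of the other's line
        (collinear edge cases are ignored in this sketch)."""
        def straddles(s, t):
            s1 = side_of_line(t[0], t[1], *s)
            s2 = side_of_line(t[2], t[3], *s)
            return s1 * s2 <= 0
        return straddles(a, b) and straddles(b, a)

    print(side_of_line(0, 1, 0, 0, 1, 0))                 # 1: above the x-axis
    print(segments_touch((0, 0, 2, 2), (0, 2, 2, 0)))     # True: they cross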
My conjecture about qualitative spatial reasoning is that it involves a vocabulary of PLACES whose relationships are described in symbolic terms. By PLACE, I mean a piece of space (point, line, region, volume, etc.) such that all parts of it share some property. The nature of a domain determines the notion of place appropriate to it. In FROB the Space Graph provides the vocabulary of places. Since all balls are point masses and are subject to the same forces, the Space Graph is independent of them and depends only on the surface geometry in the Metric Diagram. Free space is divided into regions in a way that insures the description of qualitative state (described below) will be unique and simple, and these regions and the edges that bound them are the nodes of the Space Graph. These nodes are connected by arcs that are labelled with the name of the relationship between them (such as LEFT or UP). Any other place required for qualitative reasoning can be described by composing these nodes, and the graph structure provides a framework for efficient processing (see Section IV). An example of the places in a scene and the graph structure they produce is contained in Figure 2.

Figure 2 - Space Graph for a scene. The free space in the diagram is broken up into regions in a way that simplifies the description of the kinds of motion possible. The labels on the pointers which indicate the spatial relationships between the nodes are not shown due to lack of space.

III Describing a Particular Motion

When we watch an object move, we generally couch our description in terms of a sequence of qualitatively distinct motion types. We will call a network built from descriptions of motions, linked by descriptions of the state of the object before and after each motion, an Action Sequence. The knowledge associated with each type of motion allows it to be further analyzed, allows the consistency of the proposed description to be checked, and permits making predictions about what will happen next. A drawn trajectory of motion in the Bouncing Ball domain and the schema of its associated Action Sequence is illustrated in Figure 3.

Figure 3 - Action Sequence Schema for Bouncing Balls. This schema describes the motion depicted in Figure 1. The PHYSOB constraint describes the state of the ball at some instant in time, and the ACT constraints describe a piece of the ball's history.

The two basic types of motion in the Bouncing Ball world are FLY and COLLIDE. The difference in computing boundary conditions between flying up and flying down requires their consideration as separate acts in the sequence, and additional motion types are defined for transformations to motions outside the Bouncing Ball world (such as CONTINUE for leaving the diagram and SLIDE/STOP when a ball is travelling along a surface). The description of a ball's state includes such information as its velocity (quantitative if known, or just in terms of a rough heading like (LEFT UP)) and what it is touching.

In FROB the Action Sequence descriptions are embedded in a constraint language (see [2] for an overview), and include equations describing projectile motion to compute numerical values if numerical descriptions of the state parameters are obtained. The use of quantitative parameters in the qualitative description of motion makes possible a different kind of simulation from the usual incremental time simulations used in physics. When numbers are provided, an Action Sequence can be produced by generating a description of the next motion from the last known state of motion. The time to generate the description, as well as the complexity of the result, depends on the qualitative complexity of the motion rather than some fixed increment of time used to evolve a set of state parameters.

Simulation is not the only way an Action Sequence can be created. A network of constraints can be built to describe some proposed motion, and the knowledge of the equations of motion can be used to analyze it to see if it is consistent. The dependence on quantitative parameters in FROB's analysis is a drawback.
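The idea of simulating by whole qualitative episodes rather than fixed time steps can be sketched in one dimension; this is a toy Python rendering, not FROB (the elasticity model and the rest threshold are assumptions, and a single FLY act here lumps the up and down phases the paper separates). Each loop iteration appends an entire FLY act, computed in closed form, followed by a COLLIDE.

    import math

    def action_sequence(y, vy, elasticity=0.5, g=9.8, vmin=0.1):
        """Simulate by whole episodes: each FLY act runs until the floor
        (y = 0), each COLLIDE reverses and scales the velocity (1-D toy)."""
        seq = []
        while True:
            t = (vy + math.sqrt(vy * vy + 2 * g * y)) / g   # time to reach floor
            seq.append(("FLY", t))
            vy = -elasticity * (vy - g * t)                 # impact speed, reversed
            y = 0.0
            if vy < vmin:                                   # too slow: comes to rest
                seq.append(("STOP",))
                return seq
            seq.append(("COLLIDE",))

    print(action_sequence(y=5.0, vy=0.0))

The cost and the length of the result grow with the number of qualitatively distinct acts, not with elapsed time, which is the contrast with incremental time simulation drawn above.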
For example, FROB can detect that the situation in Figure 4 is inconsistent only after being given some final height for the ball and a value for the elasticity. People can argue that this proposed motion is impossible with simpler arguments that require less information.

Figure 4 - An inconsistent description of motion. This motion is impossible because the ball could not get as high as it does after the second collision unless it had gone higher on the first. If it had gone higher after the first, the second collision would not even have happened. To discover that this description is inconsistent FROB requires a specific velocity at the highest point and a specific value for the elasticity of the ball, as well as the coordinates of the collision points.

The basic idea of an Action Sequence seems highly suited as a target representation for parsing quantitative data about motion, perhaps gleaned by perception. For this purpose a more qualitative set of methods for analysis would have to be encoded. An example of such a rule for the Bouncing Ball domain would be "A ball cannot increase its energy from one act to the next".

IV Describing Possible Motions

The quantitative state of a ball consists of its position and velocity. A notion of qualitative state can be defined which generalizes position to be a PLACE, generalizes the velocity to be a symbolic heading (such as (RIGHT DOWN)), and makes explicit the type of motion that occurs. A set of simulation rules can be written to operate on qualitative state descriptions, but because of the ambiguity in the description the rules may yield several motions possible from any given state. Since there are only a small number of places and a small number of possible qualitative states at each place, all the possible kinds of motion from some given initial qualitative state can easily be computed. This description is called the envisionment (after [3]) for that state. deKleer used this description to answer simple questions about motion directly and plan algebraic solutions to physics problems.

In FROB envisioning results in the Sequence Graph, which uses the Space Graph for its spatial framework (see Figure 5).

Figure 5 - A Sequence Graph. Initial state: (FLY (SREGION1) (LEFT ...)). The arrows represent the direction of a qualitative state at the place the arrow is drawn. Circles represent states without well defined directions. The pointers expressing the possible temporal orderings of the states are not shown.
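Envisioning itself is just an exhaustive search over the finite set of qualitative states. A minimal Python sketch follows; the state encoding and the transition rules are toy assumptions standing in for FROB's simulation rules, which may return several successors per state.

    # Sketch of envisioning: exhaustive search over qualitative states.
    # A state is (place, heading); successors() stands in for FROB's
    # simulation rules and may return several possibilities per state.
    def envision(initial, successors):
        seen, frontier, edges = set(), [initial], []
        while frontier:
            s = frontier.pop()
            if s in seen:
                continue
            seen.add(s)
            for s2 in successors(s):
                edges.append((s, s2))
                frontier.append(s2)
        return seen, edges            # the Sequence Graph: states + transitions

    # Toy rules for a ball falling over two regions of free space:
    rules = {
        ("region1", "down"): [("floor1", "up")],          # falls, then bounces
        ("floor1", "up"):    [("region1", "down"),        # straight back up, or
                              ("region2", "down")],       # over into region2
        ("region2", "down"): [("floor2", "rest")],
        ("floor2", "rest"):  [],
    }
    states, edges = envision(("region1", "down"), lambda s: rules.get(s, []))
    print(sorted(states))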
Each of the constraints above directly rules out some states of motion. The full consequences of eliminating such states are determined by methods that rely on specific properties of space and motion. Among these properties are the fact that a motion of an object must be "continuous" in its state path (which means that the active part of a Sequence Graph must be a single connected component) and that the space it moves in must be connected (which is useful because there are many fewer places than qualitative states in any problem). Dependency information is stored so that the effects of specific assumptions may be traced. Conflicting assumptions, overconstraint, and conflicts between a description of the actual motion (as specified by an Action Sequence) and its constrained possibilities are detected by FROB and the underlying assumptions are offered up for inspection and possible correction.

V Answering Questions

Many of the questions that could be asked of the Bouncing Ball domain can be answered by direct examination of the descriptions built by FROB. These include questions (1) and (2) above. The three levels of description of motion in FROB (the Action Sequence, the Sequence Graph, and the path of qualitative states corresponding to the Action Sequence) allow some kind of answer to be given even with partial information. More complicated questions (such as (3) and (4) above) can be answered with additional computation using these descriptions. The Sequence Graph is used for summarizing properties of the long term motion of an object, evaluating collision possibilities, and assimilating assumptions about the global properties of motion. Only the assimilation of assumptions will be discussed here.

Determining whether or not a ball is trapped in a well (see figure 6) can be done by examining a Sequence Graph for the last state in an Action Sequence to see if it is possible to be moving outside the places that comprise the well.

Fig. 6. Summarizing Motion
->>(motion-summary-for b1)
FOR B1
THE BALL WILL EVENTUALLY STOP
IT IS TRAPPED INSIDE (WELL0)
AND IT WILL STOP FLYING AT ONE OF (SEGMENT11)

Often a collision between two balls can be ruled out because the two balls are never in the same PLACE, as determined by examining their Sequence Graphs. With the Action Sequence description of motion it is possible to compute exactly where and when two balls collide if they do at all. Figure 7 contains the answers given by the program to collision questions in a simple situation.

Fig. 7. Collision Problems
->>(collide? f g)
(POSSIBLE AT SEGMENT13 SREGION1 ...)
->>(cannot-be-at f segment31)
(SEGMENT31)
UPDATING ASSUMPTIONS FOR (>> INITIAL-STATE F)
CHECKING PATH OF MOTION AGAINST ASSUMPTIONS
->>(collide? f g)
NO
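The place-disjointness argument for ruling out collisions can be sketched directly (illustrative Python with an assumed data layout, not FROB's code):

    def places(sequence_graph):
        """Qualitative states are (motion-type, place, heading) tuples."""
        return {place for (_, place, _) in sequence_graph}

    def collide_possible(graph_f, graph_g):
        """Two balls can only collide at a PLACE both can occupy."""
        shared = places(graph_f) & places(graph_g)
        return ("POSSIBLE", sorted(shared)) if shared else "NO"

    f = {("FLY", "SREGION1", "LEFT"), ("COLLIDE", "SEGMENT13", None)}
    g = {("FLY", "SREGION2", "RIGHT"), ("COLLIDE", "SEGMENT31", None)}
    print(collide_possible(f, g))   # -> NO

Pruning a state from one ball's Sequence Graph (as with cannot-be-at in Figure 7) can flip an earlier POSSIBLE verdict to NO.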
VI Relation to Other Work

The focus of this work is very different from that of [4],[5],[6], which are mainly concerned with modeling students solving textbook physics problems. All of the problems dealt with in these programs were static, and the representation of geometry expressed connectivity rather than free space. Issues such as getting algebraic solutions and doing only the minimal amount of work required to answer a particular question about a situation were ignored here in order to better deal with the questions of spatial reasoning and the semantics of motion.

The process of formalizing common sense knowledge is much in the spirit of the Naive Physics effort of Hayes (described in [7]). The Action Sequence, for example, may be viewed as the history for a ball since it contains explicit spatial and temporal limits. However, this work is concerned with computational issues as well as issues of representation. Unlike this work, Hayes (see [8]) explicitly avoids the use of metric representations for space. I suspect that a metric representation will be required to make the concept of a history useful, in that to compare them requires having a common coordinate frame.

The Metric Diagram has much in common with the descriptions used as targets for language translation of [1] and the imagery theory of [9]. Arguments against the traditional "pure relational" geometric representations used in AI and the "naive analog" representations used by [10],[11] may be found in [12].

The concept of envisioning was first introduced in [3] as a technique for answering simple questions about a scene directly and as a planning device for algebraic solutions. The inclusion of dissipative forces, a true two dimensional domain, interactions of more than one moving object, and its use in assimilation of global constraints on motion are the envisioning advances incorporated in this work.

VII Bibliography

[1] Waltz, D. and Boggess, L. "Visual Analog Representations for Natural Language Understanding" in Proc. IJCAI-79, Tokyo, Japan, August 1979.
[2] Steele, G. and Sussman, G. "Constraints" Memo No. 502, MIT AI Lab, Cambridge, Massachusetts, November 1978.
[3] deKleer, Johan "Qualitative and Quantitative Knowledge in Classical Mechanics" Technical Report 352, MIT AI Lab, Cambridge, Massachusetts, 1975.
[4] Bundy, A. et al. "MECHO: Year One" Research Report No. 22, Department of Artificial Intelligence, Edinburgh, 1976.
[5] Novak, G. "Computer Understanding of Physics Problems Stated in Natural Language" Technical Report NL-30, Computer Science Department, The University of Texas at Austin, 1976.
[6] McDermott, J. and Larkin, J. "Re-representing Textbook Physics Problems" in Proc. of the 2nd National Conference of the Canadian Society for Computational Studies of Intelligence, Toronto, 1978.
[7] Hayes, Patrick J. "The Naive Physics Manifesto" unpublished, May 1978.
[8] Hayes, Patrick J. "Naive Physics 1: Ontology for Liquids" unpublished, August 1978.
[9] Hinton, G. "Some Demonstrations of the Effects of Structural Descriptions in Mental Imagery" Cognitive Science, Vol. 3, No. 3, July-September 1979.
[10] Funt, B. V. "WHISPER: A Computer Implementation Using Analogues in Reasoning" Ph.D. Thesis, University of British Columbia, 1976.
[11] Kosslyn & Schwartz, "A Simulation of Visual Imagery" Cognitive Science, Vol. 1, No. 3, July 1977.
[12] Forbus, K. "A Study of Qualitative and Geometric Knowledge in Reasoning about Motion" MIT AI Lab Technical Report, in preparation.
 | 
	1980 
 | 
	44 
 | 
					
39 
							 | 
COMPUTER INTERPRETATION OF HUMAN STICK FIGURES

Martin Herman
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

ABSTRACT

A computer program which generates context-sensitive descriptions of human stick figures is described. Three categories of knowledge important for the task are discussed: (1) the 3-D description of the figures, (2) the conceptual description of the scene, and (3) heuristic rules used to generate the above two descriptions. The program's representation for these descriptions is also discussed.

1. Introduction

This paper describes a computer program, called SKELETUN, which generates context-sensitive descriptions of 2-D, static, human stick figures. The motivating interest is to study the process of extracting information communicated by body postures. Stick figures have been chosen to approximate the human form because they eliminate the problems involved in processing fleshed-out human figures (e.g., extracting them from the image, identifying and labeling body parts), yet they maintain the overall form conveyed by gross body posture.

SKELETUN currently operates in two domains, emotions (figures may be sad, happy, depressed, etc.) and baseball (batting, catching, etc.). Its knowledge of baseball is much more complete, however. It can accept any figure and interpret it in terms of the following baseball activities: (1) batting, (2) throwing, (3) running, (4) catching a high ball with one or both arms, (5) catching a ball at torso height with one or both arms, (6) fielding a grounder with one or both arms.

An example of how a figure is interpreted in the baseball domain is shown in Fig. 1, where hand-generated English descriptions are shown alongside the computer-generated output.

Fig. 1a
1. Two arms catching torso-high ball (very good confidence)
2. Batting (fair confidence)
3. Two arms fielding grounder (poor confidence)

PHYSICAL DESCRIPTION
The figure is in a vertical orientation with the feet below the head. The figure is facing left and the face is pointing left. The torso is bent forward. The elbow of arm1 is in-middle and down. It can be considered either as partly or half bent. The elbow of arm2 is in-middle and down. It can be considered either as partly or half bent. The knee of leg1 is forward and partly bent. The knee of leg2 is down and partly bent. The lower body is in a configuration similar to "feet well planted." The figure can also be considered in a diagonal orientation with the feet to the lower right of the head (but with lower confidence than vertical). In this case, it is facing lower left with the face pointing lower left. The following then changes from the previous description: the elbow of arm1 can be considered as either down or forward. The knee of leg2 is forward.

MEANING-BASED DESCRIPTION
The figure is catching a ball at torso height with two arms, with very good confidence. It may also be viewed as batting, but with only fair confidence. Finally, it may be fielding a grounder with two arms, but with only poor confidence.
Fig. 1b
((vertical orientation) verygood)
((feet to bottom of head) verygood)
((facing left) good)
((face pointing left) good)
((torso bent forward) good)
((elbow1 is in-middle) good)
((elbow1 is down) verygood)
((elbow1 partly bent) good)
((elbow1 half bent) good)
((elbow2 is in-middle) good)
((elbow2 is down) verygood)
((elbow2 partly bent) good)
((elbow2 half bent) good)
((knee1 is forward) verygood)
((knee1 partly bent) good)
((knee2 is down) verygood)
((knee2 partly bent) good)
((both legs "feet well planted" cfig) good)
((diagonal orientation) good)
((feet to lowerright of head) good)
((facing lowerleft) fair)
((face pointing lowerleft) fair)
((elbow1 is down) good)
((elbow1 is forward) good)
((knee2 is forward) verygood)
((two-arms-catching-torso-high-ball) verygood)
((batting) fair)
((two-arms-fielding-grounder) poor)

SKELETUN's primary purpose is to generate a description of what is communicated by body posture - the "meaning-based" description. In the process of generating this description, it also provides the 3-D configuration of the figures - the physical description. Briefly, the notation in the example is as follows. If a figure is viewed from the front or back, each elbow or knee can be either out from the torso, in to the torso (i.e., crossing the torso), or in-middle (i.e., along the same line as the torso). If the figure is viewed from the side, each elbow or knee can be either up, forward, down, backward, or back-up (i.e., backward and up). All assertions in the descriptions have discrete confidence values.

The input to SKELETUN is a hand-encoding of the x, y coordinates of the end points of the line segments of each figure, plus the center of the circle representing the head. SKELETUN assumes that all figures are complete and valid, and that no objects other than figures are in the scene (a scene may have two figures).

This paper gives an overview of the types of information conveyed by gross body postures, SKELETUN's representation for this information, and some inference rules used to generate this information from 2-D scenes. See [7] for details.
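The assertions of Fig. 1b suggest a simple representation: a predicate paired with a value from a small discrete confidence scale. A minimal sketch (hypothetical names, not SKELETUN's actual data structures):

    CONFIDENCE = {"poor": 0, "fair": 1, "good": 2, "verygood": 3}

    def ranked(assertions):
        """Order (predicate, confidence) assertions strongest-first."""
        return sorted(assertions, key=lambda a: CONFIDENCE[a[1]], reverse=True)

    meaning = [("two-arms-catching-torso-high-ball", "verygood"),
               ("batting", "fair"),
               ("two-arms-fielding-grounder", "poor")]
    print(ranked(meaning)[0])   # -> ('two-arms-catching-torso-high-ball', 'verygood')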
1.1 Background

This work views vision as a medium of communication, recognizing that an important goal of the visual process is to provide the viewer with a "meaning" description of the external world. Most scene analysis systems are primarily concerned with identifying objects and other entities in a scene and specifying the spatial configuration of these entities [6, 3, 8, 4]. Given a scene with human figures, such systems would tend to identify the individual figures, their body parts, and other objects, and then specify the spatial relationships of these entities [9, 10, 1]. SKELETUN goes one step further in the interpretation process. It tries to determine what the people are doing, and perhaps why they are doing it.

Although some previous work has taken the point of view of vision as communication [2, 14, 15], their primary purpose was to analyze and describe motion scenes, rather than to study how body posture conveys information.

2. Knowledge categories

Five categories of knowledge have been identified as important in the process of generating descriptions of 2-D scenes of stick figures. The first three represent important levels at which the scene should be described.

o Two-Dimensional Description - a low-level description involving the direction of each body part (each is a straight line segment), the angle of each joint (in the 2-D plane), and body parts which overlap (required for establishing touching relationships).

o Physical Space Description - a 3-D description of the physical configurations of the figures.

o Meaning Space Description - a description in terms of the information communicated by the figures (e.g., running, fighting, crying). The concepts here are said to be in Meaning Space since "meaning" (or "conceptual" information) is extracted from the scene.

The next two categories involve knowledge used to extract the physical and meaning space descriptions from the 2-D description.

o Human Physical Structure - information dealing with the various parts of the stick figure body and components of these parts.

o Inference Rules - heuristic rules used to obtain the 3-D configuration of the figures from the 2-D scene, and to determine what the figures are doing based on the 2-D and 3-D configurations of the limbs.

The following sections will further discuss the 2nd, 3rd, and 5th categories. More details than can be provided here on all of the categories may be found in [7].

3. Physical Space Description

In order to infer what is being communicated by a figure's body posture, there must be knowledge of at least part of its 3-D configuration, for it is a 2-D figure interpreted as being in 3-D space to which meaning is applied.

It is convenient to have two different levels of physical space descriptions. One, called the lower level physical space description, deals with the 3-D positions of the individual body parts. The second, called the higher level physical space description, deals with frequently occurring positions of groups of body parts. Only the first description will be discussed in this paper (see [7] for more details).

Although a figure's 3-D configuration may be represented many ways, the representation to be described next was chosen for two reasons:

1. Its purpose is to describe the figure in a manner useful for generating meaning-based interpretations. If the resolution is too fine (as in [10]), it will contain much information not significant for the task, thus burdening the system. If the resolution is too coarse, it will not contain enough information to perform the task.

2. It is convenient for SKELETUN to be able to express a figure's 3-D configuration in a manner easily understood by humans. The current representation makes this kind of information explicit.
3.1 Descriptions relative to the torso

The 3-D descriptions in SKELETUN are object-centered, as opposed to viewer-centered. That is, locations and directions of parts of the figure are indicated with respect to the figure, rather than the viewer. A viewer-centered description depends not only on the figure being described, but also on its orientation. An object-centered description, however, depends only on the figure being described, resulting in a smaller set of possible descriptions [11].

Accordingly, the positions of the upper arms and legs are represented relative to the torso, and the shape of the torso is represented relative to the overall orientation of the figure. SKELETUN uses the predicates OUT, IN-MIDDLE, and IN to describe the position of each elbow or knee as viewed from the front, and UP, FORWARD, DOWN, BACKWARD, and BACK-UP to describe the positions as viewed from the side. These predicates are adequate to completely specify (within the resolution of the representation) the 3-D position of any elbow or knee (i.e., upper arm or leg). SKELETUN uses the predicates BENT-FORWARD and BENT-BACKWARD to specify how the torso joints are bent.

3.2 Hierarchy of object-centered descriptions

The positions of the lower arms and legs are represented relative to the upper arms and legs, respectively. Note that a representation of the lower limbs relative to the torso would result in a much larger set of possible descriptions than a representation relative to the upper limbs, since a different description of the lower limb would be required for each position of the upper limb relative to the torso, even if the position of the lower relative to the upper limb were to remain constant.

Since similar arguments apply to describing positions of other body parts, such as hands, fingers, feet, etc., we conclude that each body part should be represented relative to the part it is connected to, resulting in a hierarchy of descriptions [10].

SKELETUN represents the positions of the lower arms and legs by specifying the 3-D angle of the elbow and knee joints. The predicates used are PARTLY-BENT, HALF-BENT, FULLY-BENT, and NOT-BENT.

3.3 Orientation relative to viewer

Thus far, all descriptions have been relative to parts of the figure. The whole figure must also be placed in 3-D space, relative to the viewer. The predicate ORIENTATION describes the overall orientation of the figure either as vertical, horizontal, or diagonal. Given one of these orientations, the predicate DIR-OF-FEET-TO-HEAD specifies the direction of the feet relative to the head. Finally, the predicates DIR-FACING and DIR-FACE-IS-POINTING specify the direction the figure is facing and the direction the face is pointing.
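The object-centered hierarchy of sections 3.1-3.3 can be pictured as nested records; the sketch below is hypothetical Python, not SKELETUN's representation:

    FRONT_VIEW = {"OUT", "IN-MIDDLE", "IN"}            # upper limb vs. torso
    SIDE_VIEW = {"UP", "FORWARD", "DOWN", "BACKWARD", "BACK-UP"}
    JOINT_BEND = {"NOT-BENT", "PARTLY-BENT", "HALF-BENT", "FULLY-BENT"}

    figure = {
        "orientation": "vertical",                # viewer-relative (3.3)
        "dir-of-feet-to-head": "bottom",
        "arm1": {"elbow": {"front": "IN-MIDDLE",  # upper arm vs. torso (3.1)
                           "side": "DOWN"},
                 "lower": "HALF-BENT"},           # lower arm vs. upper arm (3.2)
        "leg2": {"knee": {"front": "IN-MIDDLE", "side": "DOWN"},
                 "lower": "PARTLY-BENT"},
    }

    assert figure["arm1"]["elbow"]["side"] in SIDE_VIEW
    assert figure["arm1"]["lower"] in JOINT_BEND

Only the orientation and facing predicates depend on the viewer; everything below them moves with the figure, which is what keeps the set of possible descriptions small.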
3.4 Physical space inference rules

These rules generate the physical space description. They are domain-independent, for they depend only on the 3-D configuration of the figures. As an example of the knowledge in these rules, consider how SKELETUN determines the overall orientation of the figure. A figure is horizontal if both feet are east or west of the head (as in lying). A figure is diagonal if both feet are southeast, southwest, northeast, or northwest of the head. There are two types of vertical orientations, upright and upside-down. (SKELETUN currently cannot handle upside-down figures.) Fig. 2 shows three extremes of upright figures. In Fig. 2a, both feet are south of the head. In Fig. 2b, both feet are not south of the head; the point midway between the feet is south of the head. In Fig. 2c, the midway point is not south of the head; only one foot is south of the head. Rules which determine whether a figure is upright must examine these three types of cases, as sketched below. For more details on these and other inference rules, see [7].

Fig. 2. Three upright stick figures.
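A sketch of the upright test covering the three cases of Fig. 2 (illustrative Python; the paper does not give coordinate conventions, so the assumption here is image coordinates with y increasing downward, which makes "south of" a simple comparison):

    def south_of(p, head):
        return p[1] > head[1]

    def upright(head, foot1, foot2):
        if south_of(foot1, head) and south_of(foot2, head):
            return True                          # Fig. 2a: both feet south
        mid = ((foot1[0] + foot2[0]) / 2, (foot1[1] + foot2[1]) / 2)
        if south_of(mid, head):
            return True                          # Fig. 2b: midpoint south
        return south_of(foot1, head) or south_of(foot2, head)   # Fig. 2c

    print(upright(head=(5, 0), foot1=(2, 9), foot2=(8, 9)))     # -> True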
4. Meaning space description

4.1 Representation

Meaning space concepts in SKELETUN are not represented explicitly in terms of simpler concepts and relationships between them (as in Conceptual Dependency [13]), since SKELETUN's concern is not to extract all the details of each concept. Instead, they are represented as labels (RUNNING, CRYING, WALKING, etc.), where the meaning is represented implicitly in terms of the inference rules which may assert the concept and those which may use the concept to assert other concepts. This is because SKELETUN's concern is to discover and make use of relationships among concepts [12].

Two important classes of information that can be extracted from the body postures of stick figures deal with (1) the physical states of the figures (running, walking, throwing, standing, etc.) and (2) the mental or emotional states of the figures (weeping, happy, thinking, etc.).

Two types of physical states can be distinguished, active and passive. Active physical states involve activities requiring motion, such as running, dancing, or hitting. Passive physical states involve no motion; examples are standing, pointing, and watching.

Mental-emotional states can also be divided into two categories, negative and positive. Negative states generally involve feelings or tendencies such as painful excitement, destruction, dullness, loneliness, discomfort, tension, incompetence, dissatisfaction, and helplessness (e.g., anger, sadness, apathy, panic, hate, grief, disgust). Positive states generally involve feelings or tendencies such as vitality, empathy toward others, comfort, and self-confidence (e.g., cheerfulness, enjoyment, happiness, hope, love, pride) [5]. The negative and positive states can each be further subdivided into passive and active. These will not be pursued here (see [7]).

4.2 Meaning space inference rules

These rules generate the meaning space description. They tend to be domain-dependent, since most meaning-space concepts are applicable only in limited domains. As an example of the knowledge in these rules, consider how SKELETUN determines that a figure is fielding a grounder (assuming that the domain is baseball). (See Fig. 3 for examples.) First, one or both arms must be in a "fielding grounder" configuration (a higher level physical configuration described in [7]). In addition, the lower body should be in a configuration similar to "kneeling on one knee" (Fig. 3b), "kneeling on both knees", "feet well planted" (Fig. 3c), or "crouching" (Fig. 3d) [7] and the figure should be vertical. If the figure's orientation is diagonal, its lower body should be in a "crouching" configuration and it must be facing either lower-left or lower-right. Finally, if both arms are in a "fielding grounder" configuration and the figure is running, it is also fielding a grounder, i.e., running after a ground ball (Fig. 3a). A sketch of this rule follows.

Fig. 3. Each figure is fielding a grounder.
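The rule can be sketched as code (hypothetical Python over already-asserted higher-level configurations; SKELETUN's actual rules also carry confidence values, omitted here):

    LOWER_BODY_OK = {"kneeling-on-one-knee", "kneeling-on-both-knees",
                     "feet-well-planted", "crouching"}

    def fielding_grounder(fig):
        arms = fig["arms-in-fielding-grounder-config"]    # 0, 1, or 2
        if arms == 0:
            return False
        if arms == 2 and fig.get("running", False):
            return True                                   # chasing a ground ball
        if fig["orientation"] == "vertical":
            return fig["lower-body"] in LOWER_BODY_OK
        if fig["orientation"] == "diagonal":
            return (fig["lower-body"] == "crouching"
                    and fig["facing"] in {"lower-left", "lower-right"})
        return False

    print(fielding_grounder({"arms-in-fielding-grounder-config": 2,
                             "orientation": "vertical",
                             "lower-body": "crouching"}))   # -> True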
Acknowledgement

This research is part of the author's Ph.D. thesis done at the University of Maryland, under the guidance of Chuck Rieger and Azriel Rosenfeld. The support of the National Science Foundation under Grant MCS-76-23763 is gratefully acknowledged, as is Mike Shneier for valuable comments, and Ernie Harris for help in preparing this paper.

References

1. Adler, M. Computer interpretation of Peanuts cartoons. Proc. 5th IJCAI, Cambridge, MA, 1977.
2. Badler, N. I. Temporal scene analysis: conceptual descriptions of object movements. Tech. Rept. 80, Dept. of Computer Science, University of Toronto, 1975.
3. Bajcsy, R., and Joshi, A. K. A partially ordered world model and natural outdoor scenes. In Computer Vision Systems, Hanson and Riseman, Eds., Academic Press, 1978.
4. Barrow, H. G., and Tenenbaum, J. M. MSYS: A system for reasoning about scenes. Artificial Intelligence Center Technical Note 121, Stanford Research Institute, 1976.
5. Davitz, J. R. The Language of Emotion. Academic Press, 1969.
6. Hanson, A. R., and Riseman, E. M. VISIONS: a computer system for interpreting scenes. In Computer Vision Systems, Hanson and Riseman, Eds., Academic Press, 1978.
7. Herman, M. Understanding body postures of human stick figures. Tech. Rept. 836, Computer Science Center, University of Maryland, College Park, MD, 1979.
8. Levine, M. D. A knowledge-based computer vision system. In Computer Vision Systems, Hanson and Riseman, Eds., Academic Press, 1978.
9. Marr, D., and Nishihara, H. K. Spatial disposition of axes in a generalized cylinder representation of objects that do not encompass the viewer. AIM 341, MIT, 1975.
10. Marr, D., and Nishihara, H. K. Representation and recognition of the spatial organization of three-dimensional shapes. AIM 416, MIT, 1977.
11. Nishihara, H. K. Intensity, visible surface, and volumetric representations. Workshop on the Representation of Three-Dimensional Objects, Univ. of Pennsylvania, Philadelphia, PA, 1979.
12. Rieger, C. Five aspects of a full-scale story comprehension model. In Associative Networks: The Representation and Use of Knowledge in Computers, N. Findler, Ed., Academic Press, 1978.
13. Schank, R. C. Identification of conceptualizations underlying natural language. In Computer Models of Thought and Language, Schank and Colby, Eds., W. H. Freeman and Co., 1973.
14. Tsuji, S., Morizono, A., and Kuroda, S. Understanding a simple cartoon film by a computer vision system. Proc. 5th IJCAI, Cambridge, MA, 1977.
15. Weir, S. The perception of motion: actions, motives, and feelings. Progress in Perception Research Report No. 13, Dept. of Artificial Intelligence, University of Edinburgh, 1975.
 | 
	1980 
 | 
	45 
 | 
					
40 
							 | 
RESEARCH ON EXPERT PROBLEM SOLVING IN PHYSICS

Gordon S. Novak Jr. and Agustin A. Araya
Computer Science Department
University of Texas at Austin
Austin, Texas 78712

ABSTRACT

Physics problems cannot in general be solved by methods of deductive search in which the laws of physics are stated as axioms. In solving a real physics problem, it is necessary to treat the problem as a "nearly decomposable system" and to design a method of analysis which accounts for the salient factors in the problem while ignoring insignificant factors. The analysis method which is chosen will depend not only on the objects in the problem and their interactions, but also on the context, the accuracy needed, the factors which are known, the factors which are desired, and the magnitudes of certain quantities. Expert problem solvers are able to recognize many frequently occurring problem types and use analysis methods which solve such problems efficiently. Methods by which a program might learn such expertise through practice are discussed.

I INTRODUCTION

We are investigating the cognitive processes and knowledge structures needed for expert-level problem solving in physics.* We believe that physics is a particularly fruitful area for the investigation of general issues of problem solving, for several reasons. The laws of physics are well formalized, and there is a large set of textbook physics problems (often with answers and example solutions) available for analysis and for testing a problem solving program. The application of physical laws is considered to be well defined, so that both the method of analysis of a problem and the final answer can be judged as either correct or incorrect. At the same time, physics is considered to be a difficult subject; students find problem solving especially difficult, even when the equations which express the physical laws are available for reference. Although the laws of physics are "well known", nobody has yet produced a program which can approach expert-level problem solving in physics. Such a program would potentially have great value for applications, since the types of reasoning used in computer science and engineering are closely related to those used in solving physics problems. Such a program could also be of value in education, because many of the crucial skills used in solving physics problems are now taught only implicitly, by example; students who are unable to infer the skills from the examples do poorly in physics.

* This research was supported by NSF award No. SED-7912803 in the Joint National Institute of Education - National Science Foundation Program of Research on Cognitive Processes and the Structure of Knowledge in Science and Mathematics.

PB SCHAUM PAGE 25 NUMBER 19
(THE FOOT OF A LADDER RESTS AGAINST A VERTICAL WALL AND ON A HORIZONTAL FLOOR)
(THE TOP OF THE LADDER IS SUPPORTED FROM THE WALL BY A HORIZONTAL ROPE 30 FT LONG)
(THE LADDER IS 50 FT LONG, WEIGHS 100 LB WITH ITS CENTER OF GRAVITY 20 FT FROM THE FOOT,
AND A 150 LB MAN IS 10 FT FROM THE TOP)
(DETERMINE THE TENSION IN THE ROPE)
ANSWER: 120.00000 LB

Figure 1: Example of Output of ISAAC Program

The first author has previously written a program which can solve physics problems stated in English in the limited area of rigid body statics [1,2]; an example of its output is shown in Figure 1. This program, which uses a general formulation of the laws of rigid body statics (similar to the form of the laws presented in modern textbooks), produces between nine and fifteen equations for simple textbook problems for which human problem solvers generate only one or two equations. This somewhat surprising result indicates that the expert human problem solver does not slavishly apply the general forms of physical laws as taught in textbooks, but instead recognizes that a particular problem can be solved by applying a special case of the more general law and writes only the equations appropriate for the special case. By doing so, the expert greatly reduces the algebraic complexity of the problem.
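The answer in Figure 1 can be checked with a single moment balance about the foot of the ladder; the Python below is an independent check, not ISAAC's method. The 50 ft ladder's top sits 30 ft from the wall (the rope length), hence 40 ft up, and each load's moment arm is its distance along the ladder scaled by 30/50:

    import math

    ladder, rope = 50.0, 30.0
    height = math.sqrt(ladder**2 - rope**2)    # 40 ft up the wall

    def arm(s):
        """Horizontal moment arm of a load s ft up the ladder."""
        return s * (rope / ladder)

    moments = 100.0 * arm(20.0) + 150.0 * arm(ladder - 10.0)
    print(moments / height)                    # tension = 4800/40 -> 120.0 lb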
What is the    intellectual    content    of    a    physics course?    Without continued practice,    students forget the equations soon after taking the    course;    what    is    it    that    they retain that makes    taking the course worthwhile?    We believe that methods which employ deductive    search    and express the laws of physics directly as    predicate    calculus    clauses    (01: the    equivalent)    cannot    account    for    expert-level    problem solving    ability when a variety of physical    principles    are    involved    (say,    the    principles    covered    in    a    first-year    college physics    course).    Indeed,    the    best    ways    of    solving    certain    problems    are    self-contradictory    if    examined    too    closely.    Consider the following problem (from [5], p.    67):    A rifle with a muzzle velocity of    1500    ft/s    shoots a bullet at a target 150 ft away.    How    high above the target must the rifle be aimed    so that the bullet will hit the target?    An "expert" solution to this problem might    proceed    as    follows :    "It    takes    the bullet 0.1 second to    reach the target.    During    this    time,    the    bullet    falls    distance    d = (1/2)*g*t**2    or    (1/2)*32*70.1)**2    ft, or 0.16 ft.    So we aim'up    by    that amount to cancel the fall."    In this solution,    the    "expert"    has    made    several    conflicting    assumptions:    first,    that the bullet travels in a    straight line; second, that the bullet    falls    from    that path as it travels; and third, that the bullet    is    aimed    upward    to    cancel    the    fall.    Each    succeeding    assumption    invalidates    previous    assumptions    upon which it is based.    In    fact,    the    final    answer    is    not    exactly    right; however, it    differs from a more    careful    calculation    for    the    parabolic    path    actually followed by the bullet by    only about one part in a million.    The    "expert"    has    not    solved    this    problem    precisely,    using all the applicable    physical laws,    but instead has treated the problem    as    a    "nearly    decomposable    system"    [61.    Using    qualitative    knowledge    that    bullets    move    approximately    in    a    straight line, the expert has decomposed    the motion    of    the    bullet    into    the    dominant    straight-line    motion and the much smaller fall and upward motion.    If we look harder, other decomposition    assumptions    can be found, viz.    that air friction is negligible    and that the earth is flat (i.e., that    gravity    is    "straight    down").    In    fact,    the laws of physics    cannot be used directly in a deductive    fashion    to    solve    problems.    For example, Newton's law, which    we    write    compactly    as    "F = ma",    relates    the    acceleration    of    a    body    (relative to an inertial    reference frame) to the vector sum of all forces on    the    body;    however, there are infinitely many such    forces, and    the    frame    of    reference    (e.go,    the    earth) isn't really inertial.    Fortunately,    in real    problems most of the forces on the body    are    small    and    can    be ignored, and the frame of reference is    nearly inertial; using an appropriate    decomposition    of    the problem, a good approximation    of the answer    to the problem can be found.    
Thus, solution of a real physics problem always involves treating a nearly decomposable system as if it were actually decomposable. Programs which use deduction to solve physics problems in a "microworld" are able to do so only because the decomposition decisions have been made, by the choice of the microworld and/or by the form in which the problem to be solved is encoded; however, this limits the extension of such a program to wider problem domains where other decompositions of "similar" problems are required.

This view of problem solving in physics suggests answers to the questions posed earlier. Physics is hard because it is necessary to learn not only the equations, but also ways of decomposing actual problems so that application of the equations is tractable. The expert can solve problems better than the novice because the expert recognizes that a given (sub)problem is an instance of a class which can be decomposed in a particular way; this knowledge of how to decompose a real-world problem along lines suggested by the "fundamental concepts" of physics is a large part of what is sometimes called "physical intuition" [4]. The knowledge of how to decompose problems may be retained by the student even though the formulas have been forgotten, and allows problems to be solved correctly even though the formulas have to be looked up again. The expert works forwards rather than backwards because the first order of business is not to deduce the answer from the given information (which is likely to be grossly inadequate at the beginning), but to find an appropriate decomposition or way of modeling the interactions of the objects in the problem. Once an appropriate decomposition has been found, solution of the problem is often straightforward.

III RESEARCH ON PROBLEM SOLVING AND LEARNING TO BE EXPERT

We are currently writing a program to solve physics problems involving a variety of physical principles. Our work on this program is concentrating on the representation of problems, recognition of (sub)problem types which can be decomposed in particular ways, and learning of problem solving expertise through experience. Each of these areas is discussed briefly below.

To insure that the problem solver is not told how to solve the problem by the manner in which the problem is stated to it, we assume that the input will be English or a formal language which could reasonably be expected as the output of an English parser such as the parser in [1,2]. For example, a car in a problem will be presented to the problem solver as "a car"; whether the car should be viewed as a location, a point mass, a rigid body, an energy conversion machine, etc. must be decided by the problem solver.
We are developing a representation language which will allow multiple views of objects and variable levels of detail. For example, a block sliding on an inclined plane may be viewed as a weight, a participant in a frictional contact relation, and a location. A car might be viewed simply as a point mass, or as an object with its own geometry and complex internal structure, depending on the needs of a particular problem.

The expert problem solver does not have a single, general representation of each physical law, but instead is able to recognize a number of special cases of each physical principle and use a rich set of redundant methods for dealing with them. For example, in addition to the general rigid body problem, the expert recognizes special cases such as a pivoted linear rigid body acted on by forces perpendicular to its axis. Such special cases often allow a desired unknown to be found using a single equation rather than many, or simplify analysis, e.g. by approximating a nonlinear equation with a linear one. Recognition of special cases is based on context, on what information is known, and on what answers are desired, as well as on the type of object or interaction. Our approach to recognition of special cases is to use a discrimination net, in which tests of features of the problem alternate with construction of new views of objects and collection of information into the appropriate form for a "schema" or "frame" representation of the view. Such a discrimination net can also be viewed as a hierarchical production system, or as a generalization of an Augmented Transition Network [7].
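A minimal sketch of such a net (hypothetical Python; the feature names and the splicing scheme are assumptions, not the authors' implementation):

    class Test:
        """Interior node: test one feature of the problem."""
        def __init__(self, feature, branches, default):
            self.feature, self.branches, self.default = feature, branches, default
        def classify(self, problem):
            child = self.branches.get(problem.get(self.feature), self.default)
            return child.classify(problem)

    class Schema:
        """Leaf: the recognized view; a real system would build a frame here."""
        def __init__(self, name):
            self.name = name
        def classify(self, problem):
            return self.name

    general = Schema("rigid body: sum forces and moments")
    # A special case is added by splicing a test above the general case.
    pivoted = Test("forces-perpendicular-to-axis",
                   {True: Schema("pivoted lever: single moment equation")},
                   default=general)
    net = Test("pivoted", {True: pivoted}, default=general)

    print(net.classify({"pivoted": True,
                        "forces-perpendicular-to-axis": True}))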
If the recognition of the type of a (sub)problem is done by means of a discrimination net, special cases of a particular kind of physical system can be added to the net by adding discriminating tests for the special cases "above" the point in the net at which the more general case is recognized. We are investigating ways in which knowledge for handling such special cases could be learned automatically from experience in solving problems. One method of initiating such learning is data flow analysis of solutions using the more general solution. For example, if a problem involving a pivoted lever were solved using the general rigid body laws, data flow analysis would indicate that with a particular choice of the point about which to sum moments (the pivot), the "sum of forces" equations would not play a part in reaching the solution. A special case method could then be constructed from the general method by adding tests for the special case to the discrimination net and adding a corresponding "action" part which writes only the moment equation. Other opportunities for special case learning include elimination of zero terms (rather than eliminating them later algebraically), elimination of forces which turn out to be small when calculated, linearization of "almost linear" equations, use of small-angle approximations, and selection of simplified views of objects under appropriate conditions.

REFERENCES

1. Novak, G. "Computer Understanding of Physics Problems Stated in Natural Language", American Journal of Computational Linguistics, Microfiche 53, 1976.
2. Novak, G. "Representations of Knowledge in a Program for Solving Physics Problems", Proc. 5th IJCAI, Cambridge, Mass., Aug. 1977.
3. Bundy, A., Byrd, L., Luger, G., Mellish, C. and Palmer, M. "Solving Mechanics Problems Using Meta-Level Inference", Proc. 6th IJCAI, Tokyo, 1979.
4. Larkin, J., McDermott, J., Simon, D. P., and Simon, H. A. "Expert and Novice Performance in Solving Physics Problems", Science, Vol. 208, No. 4450 (20 June 1980).
5. Halliday, D. and Resnick, R. Physics. New York: Wiley, 1978.
6. Simon, H. A. The Sciences of the Artificial, M.I.T. Press, 1969.
7. Woods, W. A. "Transition Network Grammars for Natural Language Analysis", Comm. ACM, Vol. 13, No. 10 (Oct. 1970), pp. 591-606.
 | 
	1980 
 | 
	46 
 | 
					
41 
							 | 
KNOWLEDGE-BASED SIMULATION

Philip Klahr and William S. Faught
The Rand Corporation
Santa Monica, California 90406

ABSTRACT

Knowledge engineering has been successfully applied in many domains to create knowledge-based "expert" systems. We have applied this technology to the area of large-scale simulation and have implemented ROSS, a Rule-Oriented Simulation System, that simulates military air battles. Alternative decision-making behaviors have been extracted from experts and encoded as object-oriented rules. Browsing of the knowledge and explanation of events occur at various levels of abstraction.

I. INTRODUCTION

Large-scale simulators have been plagued with problems of intelligibility (hidden embedded assumptions, limited descriptive power), modifiability (behaviors and rules buried in code), credibility (minimal explanation facilities), and speed (slow to build, to run, to interpret). The area of large-scale simulation provides a rich environment for the application and development of artificial intelligence techniques, as well as for the discovery of new ones.

The field of knowledge engineering [2] is developing tools for use in building intelligent knowledge-based expert systems. A human expert communicates his expertise about a particular domain in terms of simple English-like IF-THEN rules which are then incorporated into a computer-based expert system. The rules are understandable, modifiable, and self-documenting. Knowledge-based systems provide explanation facilities, efficient knowledge structuring and sharing, and interfaces that are amiable for system building and knowledge refinement.

Our approach to simulation views a decision-based simulator as a knowledge-based system. The behaviors and interactions of objects, the decision-making rules, the communication channels are all pieces of knowledge that can be made explicit, understandable, modifiable, and can be used to explain simulation results.

For our work in simulation, we chose the domain of military air battles. Current large-scale simulators in this domain exhibit exactly the simulation problems we discussed above and thus provide a good area in which to demonstrate the feasibility and potential of knowledge-based simulation.

II. KNOWLEDGE REPRESENTATION

Our domain experts typically centered their discussions of military knowledge around the domain objects. For example, a particular type of aircraft has certain attributes associated with it such as maximum velocity, altitude ranges, time needed to refuel, etc. Similarly, individual planes have given positions, speeds, altitudes, routes, etc. In addition, experts defined the behaviors of objects relative to object types or categories. For example, they defined what actions aircraft take when they enter radar coverages, what ground radars do when they detect new aircraft (who they notify, what they communicate), etc. It became clear that an object-oriented (Simula-like [1]) programming language would provide a natural environment in which to encode such descriptions.

We chose Director [6] for our initial programming language. Director is an object-oriented message-passing system that has been used primarily for computer graphics and animation. It offered considerable promise for our use in simulation, both in how knowledge is structured and how it is executed. Each object (class or individual) has its own data base containing its properties and behaviors. Objects are defined and organized hierarchically, allowing knowledge to be inherited, i.e., offsprings of objects automatically assume (unless otherwise modified) the properties and behaviors of their parents.

In Director, as in other message-passing systems (e.g., Smalltalk [3] and Plasma [5]), objects communicate with each other by sending messages. The Director format for defining behaviors is

(ask <object> do when receiving <message-pattern> <actions>),

i.e., when the object receives a message matching the pattern, it performs the associated actions. In ROSS, we have added the capability of specifying IF-THEN rules of the form

(IF <conditions> THEN <actions> ELSE <actions>)

as part of an object's behavior. The conditions typically test for values (numeric, boolean) of data items while actions change data items or send messages to objects. Since Director is written in Maclisp, one may insert any Lisp s-expression as a condition or action. The following behavioral rule contains all of these options:
For example, they defined what actions    aircraft take when they enter radar coverages, what    ground radars do when they detect new aircraft (who    they    notify,    what    they    communicate),    etc.    It    became    clear    that an object-oriented    (Simula-like    [l]) programming    language would provide    a natural    environment    in which to encode such descriptions.    We    chose    Director    [6]    for    our    initial    programming    language.    Director    is    an    object-    oriented message-passing    system that has been    used    primarily    for computer graphics and animation.    It    offered    considerable    promise    for    our    use    in    simulation,    both in how knowledge is structured and    how    it    is    executed.    Each    object    (class    or    individual)    has    its    own data base containing    its    properties    and behaviors.    Objects are defined    and    organized    hierarchically,    allowing knowledge to be    inherited,    i.e.,    offsprings    of    objects    automatically    assume    (unless    otherwise modified)    the properties    and behaviors of their parents.    In Director,    as    in    other    message-passing    systems    (e.g.,    Smalltalk    [3] and    Plasma    [5]),    objects communicate    with    each    other    by    sending    messages.    The    Director    format    for    defining    behaviors is    (ask <object> do when receiving <message-pattern>    <actions>),    i.e., when the object receives a message    matching    the    pattern,    it performs the associated actions.    In ROSS, we have added the capability of specifying    IF-THEN rules of the form    (IF <conditions>    THEN <actions> ELSE <actions>)    as part of an object's    behavior.    The    conditions    typically    test    for    values    (numeric, boolean) of    data items while actions change data items or    send    messages    to objects.    Since Director is written in    Maclisp, one may insert any Lisp s-expression    as    a    condition or action.    The following behavioral    rule    contains all of these options:    181    From: AAAI-80 Proceedings. Copyright © 1980, AAAI (www.aaai.org). All rights reserved.    (ask RADAR do when receiving    (IN RADAR RANGE ?AC)    (SCRIPT    (IF    (lessp    (length (ask MYSELF recall your OBJECTS-IN-RANGE))    (ask MYSELF recall your CAPACITY))    THEN    (ask    (ask    (ask    (ask    ELSE    (ask    >>I    HISTORIAN at ,STIME MYSELF detects ,AC)    MYSELF add ,AC to your list of    OBJECTS-IN-RANGE)    ,(ask MYSELF recall your SUPERIOR)    MYSELF detects ,AC)    MYSELF monitor ,AC while in coverage)    HISTORIAN at ,STIME MYSELF doesn't detect ,AC)    This rule is activated when a radar receives a    message    that    an    aircraft    (AC)    is    in its radar    range.    The radar    tests    whether    the    number    of    objects    currently    in    its    radar coverage is less    that its capacity.    If its capacity    is not    full,    then    the radar tells the historian that it detects    the aircraft at time STIME (the current    simulation    time),    it    records    the    aircraft    in    its log, it    notifies    its    superior    that    it    detected    the    aircraft,    and it continues to monitor the aircraft    through its radar coverage.    
III    -*    BEHAVIORAL DESCRIPTIONS    A    difficult    problem    with    large-scale    simulators    is    that they contain complex behaviors    that are hard to    describe    and    understand.    Each    object    has many potential actions it can take, and    there may be hundreds of objects whose behavior the    user may wish to examine and summarize at differing    levels of generality    and detail.    To alleviate this    problem,    we    organized    the simulator's    behavioral    descriptions    along two lines:    1.    Static descriptions:    descriptions    of    the    major    events    simulated and the rules governing    each object's potential actions.    The events and    rules    are    organized    so that users can quickly    peruse and understand    the knowledge.    2.    Dynamic descriptions:    descriptions    of    each    object's behavior as the simulator runs.    Events    are organized so the user is not overwhelmed    by    a mass of event reports.    Events are reported as    patterns with    detail    eliminated    and    selected    events highlighted.    To organize the descriptions,    we    constructed    scenarios    representing    sequential    event    chains.    Each major event in ROSS has an    associated    llevent    descriptor"    (ED).    EDs are collected into chains,    where each chain is a linear list of EDs.    Each    ED    is causally associated with its immediate neighbors    in the list: a preceding ED is necessary    to    cause    its    successor    in    that chain, but not necessarily    sufficient.    ([7] discusses the use of such    chains    in constructing    proofs.)    ED    chains    are    further    organized    into    Ilactivities," e.g., radar detection of an aircraft.    Each activity is a tree of EDs.    The    root    of    the    tree corresponds    to    the    event    that    starts    the    activity.    Each path from the root to a leaf is an    ED chain.    Logically,    each ED chain corresponds    to    one possible scenario of events that could occur in    a simulation.    The scenario structure is used    for    both static and dynamic behavior descriptions.    !!*    BROWSING KNOWLEDGE    Static    behavior    descriptions    are    given    by    ROSS's    "browse"    function,    an on-line interactive    facility    with    which    users    can    examine    ROSS's    knowledge base.    The user is initially given a list    of all activities.    He then    selects    an    activity,    and    the browse function prints a list of the names    of all EDs in that activity.    The user can ask    for    a more    complete    description,    in which case the    browse function prints a tree of the EDs.    The user    can then select a particular ED to examine, and the    browse function prints a simplified description    of    the    event.    If    the user asks for a more complete    description,    the browse function prints the    actual    code    for    the    corresponding    behavior    (as in the    example above).    At any point the user    can    select    the    next    ED,    or    go    up    or    down activity/event    levels.    The composition    of ED chains, i.e., which    EDs    are    members    of    which    chains, is selected by the    system developers    for "naturalness"    or appeal to    a    user's    intuitive    structure of the simulator.    
The system itself constructs the actual ED chains from the simulator's source code and a list of start and end points for each chain. The facility appears to be quite useful in our domain, where the objects have interdependent yet self-determined behavior.

B. EVENT REPORTING

ROSS contains several objects that have been defined to organize, select, and report events to users. The Historian receives event reports from other objects (as exemplified in the behavioral rule above) and, upon request, supplies a history of any particular object, i.e., a list of events involving the object up to the current simulation time. The Historian also sends event reports to the Reporter, who selectively reports events to the user on his terminal as the simulator is running. The user can request to see all reports or only those involving a particular object or objects of a particular class. In addition, a Statistician accumulates statistics about particular fixed events (e.g., the total number of radar detections). (We have also interfaced ROSS to a color graphics system which visually displays simulation runs. The graphics facility has been an indispensable tool for understanding ROSS and the simulations it produces, and for debugging.)

C. EXPLANATION USING SCENARIOS

To explain "why" certain results occur, traditional rule-based systems typically show the rules that were used, one by one, backchaining from the conclusions. We have taken a different approach to explanation in ROSS. Rather than displaying individual rules to explain events, ROSS presents higher-level behavioral descriptions of what happened, akin to the activity and event descriptions used for browsing.

It is often the case that an event occurring in a simulation run can be explained by specifying the chain of events leading up to the current event. This is accomplished simply by comparing event histories (gathered by the Historian) to the ED trees described above. The user can then browse the events and activities specified to obtain the applicable rules.

What is perhaps more interesting in simulation is explaining why certain events do not occur. Oftentimes simulations are run with expectations, and the user is particularly interested in those cases where the expectations are violated. ([4] describes how such expectations can be used to drive a learning mechanism.) We have developed an initial capability for such "expectation-based" explanation. Explanations are given relative to specified ED chains. One event chain is designated as the "expected" chain. An "analyzer" reports deviations from this chain, i.e., it determines the point (event) at which the ED chain no longer matches the simulation events. Typical responses from the analyzer are: aircraft in radar range but not detected; radar sent message to superior but it was not received; command center requested aircraft assignment but none available. Such analysis can occur at any time within a simulation run to determine the current status of expected event chains.
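A plausible core of such an analyzer is an in-order comparison of an object's event history against the expected chain, reporting the first event at which they diverge. A hypothetical Python sketch (simplified to exact, ordered event names; not the ROSS analyzer itself):

    def first_deviation(expected_chain, history):
        """Return (position, event) where the history stops matching the
        expected ED chain, or None if every expected event occurred in order."""
        remaining = iter(history)
        for pos, ed in enumerate(expected_chain):
            if not any(event == ed for event in remaining):
                return pos, ed          # expectation violated at this event
        return None

    expected = ["aircraft-enters-coverage", "radar-detects-aircraft",
                "radar-notifies-superior"]
    history = ["aircraft-enters-coverage", "radar-detects-aircraft"]
    print(first_deviation(expected, history))
    # -> (2, 'radar-notifies-superior'): detected, but the superior was never notified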
It is important to note that expectations need not be specified prior to a simulation run (although this could focus the simulator's reporting activity). Users can analyze events with respect to any number of existing scenarios. Each analysis provides a simplified description and explanation of the simulator's operation from a different point of view (e.g., from the radar's view, from the aircraft's view, from a decision maker's view). This feature has also been extremely useful in debugging ROSS's knowledge base.

IV. SUMMARY AND FUTURE RESEARCH

ROSS currently embodies approximately 75 behavioral rules and 10 object types, and has been run with up to 250 individual objects. To show ROSS's flexibility, we have developed a set of alternative rule sets which encompass various strategies and tactics. Simulation runs using alternative rules show quite different behaviors and results.

Future research will include scaling ROSS up both in complexity and in numbers of objects. Our goal is to turn ROSS into a realistic, usable tool. A more user-oriented, English-like rule-based language will be required for users to express behaviors and strategies. We are looking toward ROSIE [8], or a hybrid ROSIE object-oriented language, for this purpose.

Scaling up will necessitate enhancements in speed. We plan to explore parallel processing, abstraction (e.g., aggregating objects, adaptive precision), sampling, and focusing on user queries to avoid irrelevant processing.

In summary, we have applied knowledge engineering to large-scale simulation and implemented ROSS, an interactive knowledge-based system that simulates the interactions, communications, and decision-making behavior within the domain of military air battles and command and control. We have shown the feasibility and payoff of this approach and hope to apply it to other domains in the future.

ACKNOWLEDGMENTS

We wish to thank Ed Feigenbaum, Rick Hayes-Roth, Ken Kahn, Alan Kay, Doug Lenat, Gary Martins, Raj Reddy, and Stan Rosenschein for their helpful discussions and suggestions during ROSS's development. We thank our domain experts Walter Matyskiela and Carolyn Huber for their continuing help and patience, and William Giarla and Dan Gorlin for their work on graphics and event reporting.

REFERENCES

1. Dahl, O.-J. and Nygaard, K. Simula -- an Algol-based simulation language. Communications of the ACM, 9 (9), 1966, 671-678.

2. Feigenbaum, E. A. The art of artificial intelligence: themes and case studies in knowledge engineering. Proc.
IJCAI-77, MIT, 1977, 1014-1049.

3. Goldberg, A. and Kay, A. Smalltalk-72 Instruction Manual, SSL 76-6, Xerox Palo Alto Research Center, 1976.

4. Hayes-Roth, F., Klahr, P., and Mostow, D. J. Knowledge acquisition, knowledge programming, and knowledge refinement. R-2540-NSF, Rand Corporation, Santa Monica, 1980.

5. Hewitt, C. Viewing control structures as patterns of passing messages. Artificial Intelligence, 8 (3), 1977, 323-364.

6. Kahn, K. M. Director Guide, AI Memo 482B, Artificial Intelligence Lab, MIT, 1979.

7. Klahr, P. Planning techniques for rule selection in deductive question-answering. In Pattern-Directed Inference Systems, D. A. Waterman and F. Hayes-Roth (Eds.), Academic Press, New York, 1978, 223-239.

8. Waterman, D. A., Anderson, R. H., Hayes-Roth, F., Klahr, P., Martins, G., and Rosenschein, S. J. Design of a rule-oriented system for implementing expertise. N-1158-1-ARPA, Rand Corporation, Santa Monica, 1979.
 | 
	1980 
 | 
	47 
 | 
					
42 
							 | 
INTERACTIVE FRAME INSTANTIATION

Carl Engelman
Ethan A. Scarl
Charles H. Berg*

The MITRE Corporation
P.O. Box 208
Bedford, MA 01730

*Current affiliation, AUTOMATIX Inc.

ABSTRACT

This paper discusses the requirements that interactive frame instantiation imposes on constraint verification. The representations and algorithms of an implemented software solution are presented.

INTRODUCTION

A number of frame representation languages or data access packages, seven of which are discussed in [STEFIK], have been developed as LISP extensions. In most applications of these languages, frame instantiation -- the creation of a new frame which represents an "instance", i.e., a more specific example, of a given "generic" frame -- occurs as a major theme. Yet, these languages do not really provide control structures sufficient to support interactive frame instantiation.

Most of this paper will be concerned with constraint verification. A frame representation language typically provides the programmer with a way of attaching a constraint as a "facet" of a given slot. It will reject any proposed values for the slot which fail that constraint, screaming a bit to the user. Such constraints attached to a slot in some generic frame also obtain automatically for slots of the same name occurring within its progeny. That is, they are "inherited". And that's about it. To explain why we felt the need for more control of constraint verification and, in fact, of the whole dynamics of interactive frame instantiation, we must present just a bit of our application.

THE APPLICATION

The KNOBS project [ENGELMAN] is directed towards the development of experimental consultant systems for tactical air command and control. We chose to focus first on what seemed to be a very simple type of aid. Imagine an Air Force officer is trying to plan a mission to strike some particular target. We are providing a program which interactively accepts the target, the airbase from which to fly the mission, the type of plane, the time of take-off, etc., and checks the input for inconsistencies and oversights. Such missions are stereotypes which are represented naturally as frames, and the checks are constraints among the possible slot values in such frames.

DATA BASE/LANGUAGE SETTING

We first translated FRL [ROBERTSJULY, ROBERTSSEPT] from MACLISP to INTERLISP [ERICSON]. Data bases of targets and resources have been implemented as nets of individual and generic frames. An individual target frame, for example, contains information, e.g., location, specific to a particular target, while a generic target frame contains information true about classes of targets, for instance, the type of radar normally associated with a particular kind of surface-to-air missile. In all, the data base currently contains some 1400 frames.

We have introduced several upwards-compatible extensions to FRL, e.g., programmer-controlled parallel inheritance along paths defined by a specified set of slot names and the controlled automatic invocation of "$IF-NEEDED" procedures during attempts to retrieve missing data.
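The flavor of slot retrieval with inheritance and "$IF-NEEDED" invocation can be suggested by a small Python sketch (in the spirit of, but not the syntax of, FRL; all names here are invented, and a single parent link stands in for the AKO/AIO machinery):

    class Frame:
        def __init__(self, name, ako=None):
            self.name, self.ako = name, ako   # single parent link, for brevity
            self.slots = {}   # slot name -> facets, e.g. {"$VALUE": v} or {"$IF-NEEDED": fn}

        def get(self, slot):
            frame = self
            while frame is not None:          # inherit up the parent chain
                facets = frame.slots.get(slot)
                if facets:
                    if "$VALUE" in facets:
                        return facets["$VALUE"]
                    if "$IF-NEEDED" in facets:
                        return facets["$IF-NEEDED"](self)  # compute it, or ask the user
                frame = frame.ako
            return None

    target = Frame("TARGET")
    target.slots["RADAR-TYPE"] = {"$IF-NEEDED": lambda f: input(f"{f.name} radar type? ")}
    sam_site = Frame("SAM-SITE-12", ako=target)
    # sam_site.get("RADAR-TYPE") runs the inherited $IF-NEEDED procedure.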
We split the concept of generic frame: those which we continue to refer to as "generic frames" contain information (defaults, attached demons, etc.) applicable to all their instances. Their slots are not necessarily in correspondence with those of their instances. What we refer to as a "template", on the other hand, is a prototypical representation of an instance of the associated generic frame. Its slots correspond to those to be found in an instance, but contain "$IF-NEEDED" procedures where the instance would contain values. It also contains constraints on the values that may appear in an instantiation.

We also differentiate frames representing fully specified objects from those representing subclasses. The former is referred to as "instance" in [STEFIK] and [SZOLOVITS], and we call it "individual". Such a frame is identified by having an "AIO" (An-Individual-Of) slot pointing back to its generic frame. AIO corresponds to set membership, while AKO (A-Kind-Of) corresponds to set inclusion. While the frame instantiation procedures are designed for use at any level, we have thus far employed them only in the creation of individual frames.

DESIDERATA

Our goal is a demonstration which, like a good shortstop, makes it look easy. Some requirements are:

1. The system must know what information is needed and how to ask for it. It must also know when the instantiation is complete and what to do then.

2. The user must be able to enter a value for any slot in any frame at any time, and the system must know what must be checked or rechecked and what remains to be done.

3. There should be a general facility to suggest choices for a slot which are consistent with the current values of the other slots.

4. The system should complain as soon as the slot values become inconsistent. It must show dynamic discretion in explaining enough, but not too much, of the difficulty.

5. The user must be able to ask questions about the data base and about the status of the instantiation.

6. The instantiation of one frame must be able to initiate the instantiation of another.
7. Constraint satisfaction must be maintained after the original instantiation whenever a slot value is changed or a template is changed.

REPRESENTATION

Templates

Templates are represented by frames with slots whose names will be replicated in the instantiated frames and which contain either $IF-NEEDED or $VALUE facets. The $IF-NEEDED procedures are responsible for deciding whether the value can be computed or is to be requested from the user. It is our normal practice to have the $IF-NEEDED procedures also perform type checking. The presence of a $VALUE facet causes a recursive call to the instantiator. The template also contains two special slots, named CONSTRAINTS (discussed below) and BOOKKEEP. The BOOKKEEP slot contains procedures to be run upon completion of the frame instantiation, a sort of "IF-ADDED" mechanism at the frame level.

The interpreter either steps through the template filling slots or fills those commanded by the user. Moreover, the user may interact at any time with LISP or with a natural language Q/A system for retrieving facts from the data base, including those inferred through "inheritance". The latter is implemented as an ATN parser, whose syntax and semantics are intimately related to the structure of the frame net. In addition, there are a number of amenities: spelling correctors, synonym and word truncation recognizers, and facilities for viewing the current status of the instantiation process, i.e., a presentation of the current slot values, distinguishing with explanation those which violate constraints.

Constraints

The form of a constraint is: (name domain expr), where "name" is an atom used to index a user-oriented explanation of the constraint, "domain" (the terminology is suggested in [STANSFIELD]) is the list of slot names whose interrelations are tested by the constraint, and "expr" is a predicate to be satisfied. "Expr" can refer to the current values of slots being instantiated simply by reference to their slot names. We define a bucket as an unordered list of constraints. The contents of the CONSTRAINTS slot in the template is a list of buckets, ordered to express priority. All constraints in the same bucket are of equal priority. The attachment of constraints to the template at the slot level, rather than the usual attachment to slots at the facet level, reflects our view that all the action is in the interaction of the slots and that it is presumptive to make a decision -- especially a static one -- as to which slot is "bad".

EXECUTION

Assuming a sequence of values is being suggested for the instantiated slots, the algorithm is as follows: Initially all constraints are unmarked. At any time, there is a current slot name, the one for which a value has been most recently proposed. A constraint is considered timely if its domain includes the current slot name and if all the other slot names in its domain have values already assigned. The interpreter passes through the buckets in decreasing priority until it discovers a timely constraint. If the constraint fails, the interpreter marks the constraint and traps the slots in its domain, i.e., renders their values unknown to constraints in lower priority buckets, which are not tested. If these lower priority constraints are already marked, they become unmarked since they are no longer timely. Other failed constraints in the current bucket are marked and their domains trapped. If a previously marked constraint now succeeds, then it is unmarked. If all the constraints in a bucket having a given slot name become unmarked, the slot name is released, i.e., pushed down to lower priority buckets along with the current slot name. The process normally terminates when all the slots are filled and none of the constraints are marked.

CONSISTENCY MAINTENANCE

Should a slot value in an instantiated frame or a constraint in a template be changed, the system makes appropriate checks.
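The constraint representation and the timeliness test lend themselves to a compact sketch. The following Python rendering is a simplified, hypothetical version of the marking/trapping pass (the KNOBS implementation is in INTERLISP/FRL, and the release/push-down bookkeeping described above is omitted here):

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Constraint:
        name: str                   # indexes a user-oriented explanation
        domain: List[str]           # slot names whose interrelations are tested
        expr: Callable[..., bool]   # predicate over the domain slots' values
        marked: bool = False

    def timely(c, current, visible):
        # Timely: the domain includes the current slot, and every other
        # domain slot already has a (visible) value.
        return current in c.domain and all(s in visible for s in c.domain if s != current)

    def check_buckets(buckets, current, values):
        """Scan buckets in decreasing priority; `values` maps every filled slot,
        including the current one, to its proposed value."""
        trapped = set()
        failed = []
        for bucket in buckets:
            visible = {s: v for s, v in values.items() if s not in trapped}
            for c in bucket:
                if any(s in trapped for s in c.domain):
                    continue            # a domain slot was trapped by a higher bucket
                if not timely(c, current, visible):
                    continue
                if c.expr(**{s: visible[s] for s in c.domain}):
                    c.marked = False    # a previously failed constraint now succeeds
                else:
                    c.marked = True
                    failed.append(c)
                    trapped.update(c.domain)  # hide these slots from lower buckets
        return failed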
COMPLEX CONSTRAINTS

The discussion above deals with constraints on the related slots in a given frame. Such constraints are called simple in [STANSFIELD]. What he calls complex constraints, those involving slots in different frames, are of great importance to us. For example, we must be concerned with the timing constraints needed to synchronize a primary mission with its support missions (refueling, defense suppression, etc.). We are currently engaged in the design and implementation of suitable representations and algorithms. We believe that a purely recursive (depth first) sequence of frame instantiations is not acceptable, and that we shall have to provide flexible control of interleaved "co-instantiations".

CHOICE GENERATION

When cueing the user for a slot value, we would often like to present a list of values consistent with those already chosen for other slots. This is, in general, computationally impossible. It turns out, however, to be in the nature of our application that we frequently can produce a list of consistent values. Furthermore, we can do this by a fairly general method, generating the choices, in fact, from the constraints. The key point -- and this is obviously application dependent -- is that many of our constraints are of the form

    (name (A B1 -- Bn) (MEMBER A (foo B1 -- Bn))),

where, if the code is to make sense, (foo B1 -- Bn) is a computable, finite list. So, for example, one constraint might mean:

(1) The airbase is one of our airbases in Europe.

and another might mean:

(2) The chosen fighter wing is located at the chosen airbase.

We say a constraint enumerates a slot, S, absolutely iff it is of the form: (name (S) (MEMBER S ----)). Constraint (1), above, enumerates airbases absolutely. A constraint such as (2) enumerates a slot relative to the values of the other slots in its domain.

To make choice generation more efficient, we optimize the constraints within the current context by collecting those which are timely and "compiling" them into a function of only the current slot, essentially by pre-evaluating all subexpressions which do not contain this slot.

CRITICISM AND FUTURE DIRECTIONS

1) We need to design and implement a comparable system for the complex constraints.

2) The only relative priorities we can express between constraints are static. This might, someday, prove inadequate.

3) There is danger of an existential trap. The simplest example occurs when the first slot filled is A and the candidate value satisfies every constraint whose domain is (A). There may, however, be a constraint whose domain is (A B) which cannot be satisfied with the proposed value of A and any value of B. Our interpreter does not see this until B is selected. The choice generation scheme discussed above could also be employed to test (perhaps very expensively) for such traps.
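Under the stated assumptions, choice generation can be sketched as: intersect the candidate lists produced by the timely MEMBER-form constraints on the current slot, then filter by the remaining timely constraints. A hypothetical Python sketch, reusing the Constraint record from the sketch above (the naive filtering here stands in for the "compiling" optimization):

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class MemberConstraint:
        slot: str                        # the slot A being enumerated
        depends: List[str]               # the slots B1 .. Bn
        candidates: Callable[..., list]  # (foo B1 .. Bn): a computable, finite list

    def suggest(current, members, others, values):
        # Pool the candidates from every applicable MEMBER constraint on `current`.
        pool = None
        for m in members:
            if m.slot == current and all(s in values for s in m.depends):
                cands = m.candidates(*[values[s] for s in m.depends])
                pool = cands if pool is None else [c for c in pool if c in cands]
        if pool is None:
            return []
        # Keep only candidates passing the other timely constraints.
        consistent = []
        for cand in pool:
            trial = dict(values, **{current: cand})
            if all(c.expr(**{s: trial[s] for s in c.domain})
                   for c in others
                   if current in c.domain and all(s in trial for s in c.domain)):
                consistent.append(cand)
        return consistent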
ACKNOWLEDGEMENTS

This work was supported by the Rome Air Development Center under Air Force contract F19628-80-C-0001.

The FRL system was originally created at MIT by R. Bruce Roberts and Ira P. Goldstein. Roberts assisted us by defining and creating an "export nucleus" of FRL. The work reported here is built directly on the INTERLISP version of FRL translated and extended by Lars W. Ericson, while he was at MITRE. We are deeply indebted to him.

REFERENCES

[ENGELMAN] Engelman, C., Berg, Charles H., and Bischoff, Miriam, "KNOBS: An Experimental Knowledge Based Tactical Air Mission Planning System and a Rule Based Aircraft Identification Simulation Facility", Proc. Sixth Inter. Joint Conf. on Artificial Intelligence, Tokyo, 1979, pp. 247-249.

[ERICSON] Ericson, Lars W., "Translation of Programs from MACLISP to INTERLISP", MTR-3874, The MITRE Corporation, Bedford, MA, Nov. 1979.

[ROBERTSJULY] Roberts, R. Bruce, and Goldstein, Ira P., "The FRL Primer", MIT AI Lab. Memo 408, July 1977.

[ROBERTSSEPT] Roberts, R. Bruce, and Goldstein, Ira P., "The FRL Manual", MIT AI Lab. Memo 409, September 1977.

[STANSFIELD] Stansfield, James L., "Developing Support Systems for Information Analysis", in Artificial Intelligence: An MIT Perspective, Winston, P. H., and Brown, R. H. (Eds.), The MIT Press, Cambridge, MA, 1979.

[STEFIK] Stefik, Mark, "An Examination of a Frame-Structured Representation System", Proc. Sixth Inter. Joint Conf. on Artificial Intelligence, Tokyo, 1979, pp. 845-852.

[SZOLOVITS] Szolovits, P., Hawkinson, L. B., and Martin, W. A., "An Overview of OWL, A Language for Knowledge Representation", MIT/LCS/TM-86, MIT, Cambridge, MA, June 1977.
 | 
	1980 
 | 
	48 
 | 
					
43 
							 | 
DESCRIPTIONS FOR A PROGRAMMING ENVIRONMENT

Ira P. Goldstein and Daniel G. Bobrow
Xerox Palo Alto Research Center
Palo Alto, California 94304, U.S.A.

Abstract

PIE is an experimental personal information environment implemented in Smalltalk that uses a description language to support the interactive development of programs. PIE contains a network of nodes, each of which can be assigned several perspectives. Each perspective describes a different aspect of the program structure represented by the node, and provides specialized actions from that point of view. Contracts can be created that monitor nodes describing different parts of a program's description. Contractual agreements are expressible as formal constraints, or, to make the system failsoft, as English text interpretable by the user. Contexts and layers are used to represent alternative designs for programs described in the network. The layered network database also facilitates cooperative program design by a group, and coordinated, structured documentation.

Introduction

In most programming environments, there is support for the text editing of program specifications, and support for building the program in bits and pieces. However, there is usually no way of linking these interrelated descriptions into a single integrated structure. The English descriptions of the program, its rationale, general structure, and tradeoffs are second class citizens at best, kept in separate files, on scraps of paper next to the terminal, or, for a while, in the back of the implementor's head.

Furthermore, as the software evolves, there is no way of noting the history of changes, except in some primitive fashion, such as the history list of Interlisp [10]. A history list provides little support for recording the purpose of a change other than supplying a comment. But such comments are inadequate to describe the rationale for coordinated sets of changes that are part of some overall plan for modifying a system. Yet recording such rationales is necessary if a programmer is to be able to come to a system and understand the basis for its present form.

Developing programs involves the exploration of alternative designs. But most programming environments provide little support for switching between alternative designs or comparing their similarities and differences. They do not allow alternative definitions of procedures and data structures to exist simultaneously in the programming environment; nor do they provide a representation for the evolution of a particular set of definitions across time.

In this paper we argue that by making descriptions first class objects in a programming environment, one can make life easier for the programmer through the life cycle of a piece of software.
Our argument is based on our experience with PIE, a description-based programming environment that supports the design, development, and documentation of Smalltalk programs.

Networks

The PIE environment is based on a network of nodes which describe different types of entities. We believe such networks provide a better basis for describing systems than files. Nodes provide a uniform way of describing entities of many sizes, from small pieces such as a single procedure to much larger conceptual entities. In our programming environment, nodes are used to describe code in individual methods, classes, categories of classes, and configurations of the system to do a particular job. Sharing structures between configurations is made natural and efficient by sharing regions of the network.

Nodes are also used to describe the specifications for different parts of the system. The programmer and designer work in the same environment, and the network links elements of the program to elements of the design and specification. The documentation on how to use the system is embedded in the network also. Using the network allows multiple views of the documentation. For example, a primer and a reference manual can share many of the same nodes while using different organizations suited to their different purposes.

In applying networks to the description of software, we are following a tradition of employing semantic networks for knowledge representation. Nodes in our network have the usual characteristics that we have come to expect in a representation language -- for example, defaults, constraints, multiple perspectives, and context-sensitive value assignments.

There is one respect in which the representation machinery developed in PIE is novel: it is implemented in an object-oriented language. Most representation research has been done in Lisp. Two advantages derive from this change of soil. The first is that there is a smaller gap between the primitives of the representation language and the primitives of the implementation language. Objects are closer to nodes (frames, units) than lists. This simplifies the implementation and gains some advantages in space and time costs. The second is that the goal of representing software is simplified. Software is built of objects whose resemblance to frames makes them natural to describe in a frame-based knowledge representation.

Perspectives

Attributes of nodes are grouped into perspectives. Each perspective reflects a different view of the entity represented by the node.
For example, one view of a Smalltalk class provides a definition of the structure of each instance, specifying the fields it must contain; another describes a hierarchical organization of the methods of the class; a third specifies various external methods called from the class; a fourth contains user documentation of the behavior of the class.

The attribute names of each perspective are local to the perspective. Originally, this was not the case. Perspectives accessed a common pool of attributes attached to the node. However, this conflicted with an important property that design environments should have, namely, that different agents can create perspectives independently. Since one agent cannot know the names chosen by another, we were led to make the name space of each perspective on a node independent.

Perspectives may provide partial views which are not necessarily independent. For example, the organization perspective that categorizes the methods of a class and the documentation perspective that describes the public messages of a class are interdependent. Attached procedures are used to maintain consistency between such perspectives.

Each perspective supplies a set of specialized actions appropriate to its point of view. For example, the print action of the structure perspective of a class knows how to prettyprint its fields and class variables, whereas the organization perspective knows how to prettyprint the methods of the class. These actions are implemented directly through messages understood by the Smalltalk classes defining the perspective.

Messages understood by perspectives represent one of the advantages obtained from developing a knowledge representation language within an object-oriented environment. In most knowledge representation languages, procedures can be attached to attributes. Messages constitute a generalization: they are attached to the perspective as a whole. Furthermore, the machinery of the object language allows these messages to be defined locally for the perspective. Lisp would insist on global function names.

Contexts and Layers

All values of attributes of a perspective are relative to a context. Context as we use the term derives from Conniver [9]. When one retrieves the values of attributes of a node, one does so in a particular context, and only the values assigned in that context are visible. Therefore it is natural to create alternative contexts in which different values are stored for attributes in a number of nodes. The user can then examine these alternative designs, or compare them without leaving the design environment. Since there is an explicit model of the differences between contexts, PIE can highlight differences between designs. PIE also provides tools for the user to choose or create appropriate values for merging two designs.
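A rough Python analogue of nodes with perspectives (each with a local attribute name space and its own specialized actions) may help fix the idea; it is only a sketch, not PIE's Smalltalk implementation, and it ignores contexts for the moment:

    class Perspective:
        def __init__(self):
            self.attributes = {}        # name space local to this perspective

    class StructurePerspective(Perspective):
        def print_action(self, node):
            # Knows how to prettyprint fields and class variables.
            print(node.name, "fields:", self.attributes.get("fields", []))

    class OrganizationPerspective(Perspective):
        def print_action(self, node):
            # Knows how to prettyprint the methods of the class, by category.
            print(node.name, "methods:", self.attributes.get("categories", {}))

    class Node:
        def __init__(self, name):
            self.name = name
            self.perspectives = {}      # independent views of the same entity

    node = Node("Point")
    node.perspectives["structure"] = StructurePerspective()
    node.perspectives["structure"].attributes["fields"] = ["x", "y"]
    node.perspectives["structure"].print_action(node)   # -> Point fields: ['x', 'y']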
Design involves more than the consideration of alternatives. It also involves the incremental development of a single alternative. A context is structured as a sequence of layers. It is these layers that allow the state of a context to evolve. The assignment of a value to a property is done in a particular layer. Thus the assertion that a particular procedure has a certain source code definition is made in a layer. Retrieval from a context is done by looking up the value of an attribute, layer by layer. If a value is asserted for the attribute in the first layer of the context, then this value is returned. If not, the next layer is examined. This process is repeated until the layers are exhausted.

Extending a context by creating a new layer is an operation that is sometimes done by the system, and sometimes by the user. The current PIE system adds a layer to a context the first time the context is modified in a new session. Thus, a user can easily back up to the state of a design during a previous working session. The user can create layers at will. This may be done when he or she feels that a given group of changes should be coordinated. Typically, the user will group dependent changes in the same layer.

Layers and contexts are themselves nodes in the network. Describing layers in the network allows the user to build a description of the rationale for the set of coordinated changes stored in the layer in the same fashion as he builds descriptions for any other node in the network. Contexts provide a way of grouping the incremental changes, and describing the rationale for the group as a whole. Describing contexts in the network also allows the layers of a context to themselves be asserted in a context-sensitive fashion (since all descriptions in the network are context-sensitive). As a result, super-contexts can be created that act as big switches for altering designs by altering the layers of many sub-contexts.
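The layer-by-layer retrieval rule is simple enough to state as code. A minimal sketch (invented names, not PIE's Smalltalk classes) in which an alternative context shares the older layers of a base context:

    class Layer:
        def __init__(self):
            self.assignments = {}   # (node, perspective, attribute) -> value

    class Context:
        def __init__(self, layers=None):
            self.layers = layers if layers is not None else [Layer()]  # index 0 = newest

        def lookup(self, key):
            for layer in self.layers:        # first layer asserting a value wins
                if key in layer.assignments:
                    return layer.assignments[key]
            return None

        def extend(self):
            # e.g. at the first modification in a new session, or at the user's request
            self.layers.insert(0, Layer())

    base = Context()
    base.layers[0].assignments[("fact", "code", "source")] = "version 1"
    alt = Context([Layer()] + base.layers)   # alternative design shares base's layers
    alt.layers[0].assignments[("fact", "code", "source")] = "version 2"
    print(base.lookup(("fact", "code", "source")))  # -> version 1
    print(alt.lookup(("fact", "code", "source")))   # -> version 2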
Contracts and Constraints

In any system, there are dependencies between different elements of the system. If one changes, the other should change in some corresponding way. We employ contracts between nodes to describe these dependencies. Implementing contracts raises issues involving 1) the knowledge of which elements are dependent; 2) the way of specifying the agreement; 3) the method of enforcement of the agreement; 4) the time when the agreement is to be enforced.

PIE provides a number of different mechanisms for expressing and implementing contracts. At the implementation level, the user can attach a procedure to any attribute of a perspective (see [2] for a fuller discussion of attached procedures); this allows change of one attribute to update corresponding values of others. At a higher level, one can write simple constraints in the description language (e.g. two attributes should always have identical values), specifying the dependent attributes. The system creates attached procedures that maintain the constraint.

There are constraints and contracts which cannot now be expressed in any formal language. Hence, we want to be able to express that a set of participants are interdependent, but not be required to give a formal predicate specifying the contract. PIE allows us to do this. Attached procedures are created for such contracts that notify the user if any of the participants change, but which do not take any action on their own to maintain consistency. Text can be attached to such informal contracts that is displayed to the user when the contract is triggered. This provides a useful inter-programmer means of communication and preserves a failsoft quality of the environment when formal descriptions are not available.

Ordinarily such non-formal contracts would be of little interest in artificial intelligence. They are, after all, outside the comprehension of a reasoning program. However, our thrust has been to build towards an artificially intelligent system through successive stages of man-machine symbiosis. This approach has the advantage that it allows us to observe human reasoning in the controlled setting of interacting with the system. Furthermore, it allows us to investigate a direction generally not taken in AI applications: namely the design of memory-support rather than reasoning-support systems.

An issue in contract maintenance is deciding when to allow a contract to interrupt the user or to propagate consistency modifications. We use the closure of a layer as the time when contracts are checked. The notion is that a layer is intended to contain a set of consistent values. While the user is working within a layer, the system is generally in an inconsistent state. Closing a layer is an operation that declares that the layer is complete. After contracts are checked, a closed layer is immutable. Subsequent changes must be made in new layers appended to the appropriate contexts.

Coordinating designs

So far we have emphasized that aspect of design which consists of a single individual manipulating alternatives. A complementary facet of the design process involves merging two partial designs. This task inevitably arises when the design process is undertaken by a team rather than an individual. To coordinate partial designs, one needs an environment in which potentially overlapping partial designs can be examined without overwriting one another. This is accomplished by the convention that different designers place their contributions in separate layers. Thus, where an overlap occurred, the divergent values for some common attributes are in distinct layers.

Merging two designs is accomplished by creating a new layer into which are placed the desired values for attributes as selected from two or more competing contexts. For complex designs, the merge process is, of course, non-trivial.
We do not, and indeed cannot, claim that PIE eliminates this complexity. What it does provide is a more finely grained descriptive structure than files in which to manipulate the pieces of the design. Layers created by a merger have associated descriptions in the network specifying the contexts participating in the merger and the basis for the merger.

Meta-description

Nodes can be assigned meta-nodes whose purpose is to describe defaults, constraints, and other information about their object node. Information in the meta-node is used to resolve ambiguities when a command is sent to a node having multiple perspectives.

One situation in which ambiguity frequently arises is when the PIE interface is employed by a user to browse through the network. When the user selects a node for inspection, the interface examines the meta-node to determine which information should be automatically displayed for the user. By appropriate use of meta-information, we have made the default display of the PIE browser identical to one used in Smalltalk. (Smalltalk code is organized into a simple four-level hierarchy, and the Smalltalk browser allows examination and modification of Smalltalk code using this taxonomy.) As a result, a novice PIE user finds the environment similar to the standard Smalltalk programming environment which he has already learned.

Simplifying the presentation and manipulation of the layered network underlying the PIE environment remains an important research goal, if the programming environment supported by PIE is to be useful as well as powerful. We have found use of a meta-level of descriptions to guide the presentation of the network to be a powerful device to achieve this utility.

Conclusion

PIE has been used to describe itself, and to aid in its own development. Specialized perspectives have been developed to aid in the description of Smalltalk code, and for PIE perspectives themselves. On-line documentation is integrated into the descriptive network. The implementors find this network-based approach to developing and documenting programs superior to the present Smalltalk programming environment. A small number of other people have begun to use the system.

This paper presents only a sketch of PIE from a single perspective. The PIE description language is the result of transplanting the ideas of KRL [2] and FRL [6] into the object-oriented programming environment of Smalltalk [8], [7]. A more extensive discussion of the system in terms of the design process can be found in [1] and [4]. A view of the PIE description language as an extension of the object-oriented programming metaphor can be found in [5]. Finally, the use of PIE as a prototype office information system is described in [3].

References

[1] Bobrow, D.G. and Goldstein, I.P.
"Representing Design Alternatives", Proceedings of the AISB Conference, Amsterdam, 1980.

[2] Bobrow, D.G. and Winograd, T. "An overview of KRL, a knowledge representation language", Cognitive Science 1, 1, 1977.

[3] Goldstein, I.P. "PIE: A network-based personal information environment", Proceedings of the Office Semantics Workshop, Chatham, Mass., June, 1980.

[4] Goldstein, I.P. and Bobrow, D.G., "A layered approach to software design", Xerox Palo Alto Research Center CSL-80-5, 1980a.

[5] Goldstein, I.P. and Bobrow, D.G., "Extending Object Oriented Programming in Smalltalk", Proceedings of the Lisp Conference, Stanford University, 1980b.

[6] Goldstein, I.P. and Roberts, R.B. "NUDGE, A knowledge-based scheduling program", Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, 1977, 257-263.

[7] Ingalls, Daniel H., "The Smalltalk-76 Programming System: Design and Implementation," Conference Record of the Fifth Annual ACM Symposium on Principles of Programming Languages, Tucson, Arizona, January 1978, pp 9-16.

[8] Kay, A. and Goldberg, A. "Personal Dynamic Media", IEEE Computer, March, 1977.

[9] Sussman, G., and McDermott, D. "From PLANNER to CONNIVER -- A genetic approach". Fall Joint Computer Conference. Montvale, N.J.: AFIPS Press, 1972.

[10] Teitelman, W., The Interlisp Manual, Xerox Palo Alto Research Center, 1978.
 | 
	1980 
 | 
	49 
 | 
					
44 
							 | 
A BASIS FOR A THEORY OF PROGRAM SYNTHESIS*

P.A. Subrahmanyam
USC/Information Sciences Institute
and
Department of Computer Science
University of Utah, Salt Lake City, Utah 84112

1. Introduction and Summary

In order to obtain a quantum jump in the quality and reliability of software, it is imperative to have a coherent theory of program synthesis which can serve as the basis for a sophisticated (interactive) software development tool. We argue that viewing the problem of (automatic) program synthesis as that of (automatically) synthesizing implementations of abstract data types provides a viable basis for a general theory of program synthesis. We briefly describe the salient features of such a theory [5, 6], and conclude by listing some of the applications of the theory.

1.1. Requirements for an Acceptable Theory of Program Synthesis

We view some of the essential requirements of an acceptable theory of program synthesis to be the following:

- the theory should be general;

- it must adhere to a coherent set of underlying principles, and not be based on an ad hoc collection of heuristics;

- it must be based on a sound mathematical framework;

- it must account for the "state of the art" of program synthesis; in particular, it must allow for the generation of "efficient" programs.

Further, if a theory is to be useful, we desire that it possess the following additional attributes:

- it should serve as the basis for a program development system which can generate provably correct non-trivial programs;

- it should possess adequate flexibility to admit being tailored to specific application tasks;

- it should provide new and insightful perspectives into the nature of programming and problem solving.

* This work was supported in part by an IBM Fellowship.

With these requirements in mind, we now examine the nature of the programming process in an attempt to characterize the basic principles that underly the construction of "good" programs.

2. A Basis for a Theory of Program Synthesis

Intuitively, the abstraction of a problem can be viewed as consisting of an appropriate set of functions to be performed on an associated set of objects. Such a collection of objects and functions is an "abstract data type" and has the important advantage of providing a representation-independent characterization of a problem. Although the illustrations that most readily come to mind are commonly employed data structures such as a stack, a file, a queue, a symbol table, etc., any partial recursive function can be presented as an abstract data type.

Programming involves representing the abstractions of objects and operations relevant to a given problem domain using primitives that are presumed to be already available; ultimately, such primitives are those that are provided by the available hardware.
Various programming methodologies advocate ways of achieving "good" organizations of layers of such representations, in attempting to provide an effective means of coping with the complexity of programs. There exists, therefore, compelling evidence in favor of viewing the process of program synthesis as one of obtaining an implementation for the data type corresponding to the problem of interest (the "type of interest") in terms of another data type that corresponds to some representation (the "target type"). This perspective is further supported by the following basic principles that we think should underly the synthesis process (if reliable programs are to be produced consistently):

1. The programming process should essentially be one of program synthesis proceeding from the specifications of a problem, rather than being primarily analytic (e.g. constructing a program and then verifying it) or empirical (e.g. constructing a program and then testing it).

2. The specification of a problem should be representation independent. This serves to guarantee complete freedom in the program synthesis process, in that no particular program is excluded a priori due to overspecification of the problem caused by representation dependencies.

3. The synthesis should be guided primarily by the semantics of the problem specification.

4. The level of reasoning used by the synthesis paradigm should be appropriate to "human reasoning," rather than being machine oriented (see [6]). In addition to making the paradigm computationally more feasible, this has two major advantages:

a. existing paradigms of programming such as "stepwise refinement" can be viewed in a mathematical framework;

b. user interaction with the system becomes more viable, since the level of reasoning is now "visible" to the user.

The above principles led us to adopt an algebraic formulation for the development of our theory [1, 2], [3, 4]. An important consequence of this decision was that the synthesis paradigm is independent of any assumptions relating to the nature of the underlying hardware. In fact, it can even point to target types suited to particular problems of interest, i.e., target machine architectures which aid efficient implementations.

3. The Proposed Paradigm for Program Synthesis

We adopt the view that any object representing an instance of a type is completely characterized by its "externally observable behavior".
The notion of an implementation of one data type (the type of interest) in terms of another (the target type) is then defined as a map between the functions and objects of the two types which preserves the observable behavior of the type of interest. The objective, then, is to develop methods to automate the synthesis of such implementations based on the specifications of the type of interest and the target type.

Intuitively, the crux of the proposed paradigm lies in "mathematically" incorporating the principle of stepwise refinement into automatic programming. This is done by appropriately interpreting both the syntactic and semantic structure inherent in a problem. An important distinction from most transformation-based systems is that the refinement is guided by the semantics of the functions defined on the type of interest, rather than by a fixed set of rules (e.g. [7]). A formal characterization of some of the pivotal steps in the synthesis process is provided, and an attempt is made to pinpoint those stages where there is leeway for making alternative choices based upon externally imposed requirements. (An example of such a requirement is the relative efficiency desired for the implementations of different functions depending upon their relative frequency of use.) This separation of the constraints imposed by (a) the structure inherent in the problem specification, (b) the requirements demanded by the context of use, and (c) the interface of these two, serves to further subdivide the complexity of the synthesis task -- it becomes possible now to seek to build modules which attempt to aid in each of these tasks in a relatively independent manner.

In summary, our goal was to seek, in as far as is possible, a mathematically sound and computationally feasible theory of program synthesis. The formal mathematical framework underlying our theory is algebraic. The programs synthesized are primarily applicative in nature; they are provably correct, and are obtained without the use of backtracking. There is adequate leeway in the underlying formalism that allows for the incorporation of different "environment dependent" criteria relating to the "efficiency" of implementations. The objectives of the theory include that conventional programs be admitted as valid outcomes of the proposed theory. This is in consonance with our belief that any truly viable theory of synthesis should approximate as a limiting case already existing empirical data relevant to its domain.
4. An Example: The Synthesis of a Block-Structured Symbol Table Using an Indexed Array

To illustrate some aspects of the paradigm for program synthesis we outline the synthesis of a block-structured SymbolTable using an indexed array as a target type (cf. [4]). The sorts of objects involved are instances of SymbolTable, Identifier, Attributes, Boolean, etc.; our primary interest here is in the manipulation of instances of SymbolTables. The functions that are defined for manipulating a SymbolTable include: NEWST (spawn a new instance of a symbol table for the outermost scope), ENTERBLOCK (enter a new local naming scope), ADDID (add an identifier and associated attributes to the symbol table), LEAVEBLOCK (discard the identifier entries from the most current scope, re-establish the next outer scope), ISINBLOCK (test to see if an identifier has already been declared in the current block), and RETRIEVE (retrieve the attributes associated with the most recent definition of an identifier). The formal specifications may be found in [4] (see also [6]). Although implementations for more complex definitions of SymbolTables have been generated (which include tests for "Global" identifiers), we have chosen this definition because of its familiarity.

The overall synthesis proceeds by first categorizing the functions defined on the type of interest (here, the SymbolTable) into one of the following three categories: (i) Base constructor functions that serve to spawn new instances of the type (e.g. NEWST); (ii) Constructor functions that serve to generate new instances from existing ones (e.g. ADDID, ENTERBLOCK, LEAVEBLOCK); and (iii) Extractor functions that return instances of types other than SymbolTable (e.g. RETRIEVE, ISINBLOCK).

The next step is to identify a subset of these functions (termed kernel functions) which serve to generate all instances of SymbolTables: these are NEWST, ADDID, and ENTERBLOCK. A major step in obtaining an implementation for the type of interest is to provide an implementation for the kernel functions. Since no model for the kernel functions is explicit in the specification of a type, a suitable model must be inferred from the behavior of the functions defined on the type. Such an inference follows from an examination of the axioms defining the extraction functions. Specifically, the domain of the terms of type SymbolTable is partitioned into (its) equivalence classes by the extractors defined on the type -- and this is precisely what an implementation is attempting to capture.
The defining equations of each function indicate how it "contributes" towards this partitioning, and therefore how this "semantic structure" imposed upon the terms of the SymbolTable is related to the syntactic structure of the underlying terms. Due to lack of space, we omit the details of how this is done. One of the implementations generated (automatically) is shown in figure 1, wherein θ denotes the implementation map. We note that an auxiliary data type which is almost isomorphic to a Stack (of integers) was (automatically) defined in the course of the implementation; this Stack can, in turn, be synthesized in terms of an indexed Array by a recursive invocation of the synthesis procedures. Other implementations generated for the SymbolTable include an implementation using a "Block Mark" to identify the application of the function ENTERBLOCK, and an implementation similar to a "hash table" implementation suggested upon examining the semantics of the functions defined on the SymbolTable.

We list below one of the implementations generated for a SymbolTable, using an Array as the initially specified target type. The final representation consists of the triple <Array, Integer, ADT1>: the integer represents the current index into the array, whereas ADT1 (for Auxiliary Data Type-1) is introduced in the course of the synthesis process, and is isomorphic to a Stack that records the index-values corresponding to each ENTERBLOCK performed on a particular instance of a SymbolTable. We denote this by writing θ(s) = <a, i, adt1>. Informally, ENTERBLOCK.ADT1 serves to "push" the current index value onto the stack adt1, LEAVEBLOCK.ADT1 serves to "pop" the stack, and D.ADT1 returns the topmost element in the Stack, returning a zero if the stack is empty. SUCC and PRED are the successor and predecessor functions on Integers.

θ(NEWST) = <NEWARRAY, ZERO, NEWADT1>
θ(ADDID(s, id, al)) = <ASSIGN(a, SUCC(i), <id, al>), SUCC(i), adt1>
θ(ENTERBLOCK(s)) = <a, i, ENTERBLOCK.ADT1(adt1, i)>
θ(LEAVEBLOCK(s)) = <a, D.ADT1(adt1), LEAVEBLOCK.ADT1(adt1)>
θ(ISINBLOCK(s, id1)) = ISINBLOCKTT(<a, i, adt1>, id1)
θ(RETRIEVE(s, id1)) = RETRIEVETT(<a, i, adt1>, id1)

ISINBLOCKTT and RETRIEVETT are defined as follows:

ISINBLOCKTT(<a, i, adt1>, id1) =
  if i = ZERO then FALSE
  else if D.ADT1(adt1) < i
    then if proj(1, DATA(a, i)) = id1 then TRUE
         else ISINBLOCKTT(<a, PRED(i), adt1>, id1)
  else FALSE

RETRIEVETT(<a, i, adt1>, id1) =
  if i = ZERO then UNDEFINED
  else if proj(1, DATA(a, i)) = id1 then proj(2, DATA(a, i))
  else RETRIEVETT(<a, PRED(i), adt1>, id1)

Here, proj(i, <x1 .. xn>) = xi.

Figure 1. A SymbolTable Implementation
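For readers who prefer running code, here is a small executable rendering (our sketch, with hypothetical Python names; a Python list stands in for the indexed Array) of the derived triple <a, i, adt1> from figure 1:

class SymbolTableTT:
    def __init__(self):                      # theta(NEWST)
        self.a = []          # the Array: <id, attrs> pairs
        self.i = 0           # current index into the array
        self.adt1 = []       # ADT1: stack of saved index values

    def enterblock(self):                    # theta(ENTERBLOCK)
        self.adt1.append(self.i)             # "push" the current index

    def addid(self, ident, attrs):           # theta(ADDID)
        self.a = self.a[:self.i] + [(ident, attrs)]
        self.i += 1                          # SUCC(i)

    def leaveblock(self):                    # theta(LEAVEBLOCK)
        self.i = self.adt1.pop() if self.adt1 else 0   # D.ADT1, then pop

    def isinblock(self, ident):              # ISINBLOCKTT
        top = self.adt1[-1] if self.adt1 else 0
        return any(self.a[k][0] == ident for k in range(top, self.i))

    def retrieve(self, ident):               # RETRIEVETT
        for k in range(self.i - 1, -1, -1):  # scan from most recent entry
            if self.a[k][0] == ident:
                return self.a[k][1]
        return None                          # UNDEFINED

For example, after st.addid('x', 'INT') followed by st.enterblock(), st.isinblock('x') is false while st.retrieve('x') still returns 'INT', matching the scoping axioms for the type.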
Other Examples of Applications of the Synthesis Paradigm

Several programs have been synthesized by direct applications of the synthesis algorithms developed so far. These include implementations for a Stack, a Queue, a Deque, a Block-Structured SymbolTable, an interactive line-oriented Text-Editor, a text formatter, a hidden surface elimination algorithm for graphical displays, and an execution engine for a data-driven machine.

References

[1] J. Goguen, J. Thatcher, E. Wagner, J. Wright. Initial Algebra Semantics and Continuous Algebras. JACM 24:68-95, 1977.
[2] J. Goguen, J. Thatcher, E. Wagner. An Initial Algebra Approach to the Specification, Correctness, and Implementation of Abstract Data Types. In Current Trends in Programming Methodology, Vol. IV, Ed. R. Yeh, Prentice-Hall, NJ, 1979, pages 80-149.
[3] J. Guttag, E. Horowitz, D. Musser. The Design of Data Type Specifications. In Current Trends in Programming Methodology, Vol. IV, Ed. R. Yeh, Prentice-Hall, NJ, 1979.
[4] J. Guttag, E. Horowitz, D. Musser. Abstract Data Types and Software Validation. CACM 21:1048-64, 1978.
[5] P.A. Subrahmanyam. Towards a Theory of Program Synthesis: Automating Implementations of Abstract Data Types. PhD thesis, Dept. of Comp. Sc., State University of New York at Stony Brook, August 1979.
[6] P.A. Subrahmanyam. A Basis for a Theory of Program Synthesis. Technical Report, Dept. of Computer Science, University of Utah, February 1980.
[7] D. Barstow. Knowledge-Based Program Construction. Elsevier North-Holland Inc., NY, 1979.
| 1980 | 5 |
45 |
Rule-Based Inference in Large Knowledge Bases *

William Mark
USC/Information Sciences Institute

Having gained some experience with knowledge-based systems (e.g., [3], [9], [11]), our aspirations are growing. Future systems (for VLSI design, office automation, etc.) will have to model more of the knowledge of their domains and do more interesting things with it. This means larger, more structured knowledge bases and inference mechanisms capable of manipulating the structures these knowledge bases contain. The necessarily large investment in building these systems, and the very nature of some of the applications (e.g., data base query, cooperative interactive systems), also require these systems to be more adaptable than before to new domains within their purview (e.g., a new data base, a new interactive tool).

II RULE-BASED INFERENCE

The need for adaptability argues strongly for perspicuity and modularity in the inference engine: the adapter must be able to see what must be changed (or added) and be able to make the alterations quickly. Inference mechanisms based on rules have these characteristics. Unfortunately, most rule-based approaches rely on small, simply structured system knowledge bases (e.g., the rule-based formalism used in [10] and [4] is dependent on representation in terms of triples). As rule-based systems grow to encompass a large number of rules, and as they are forced to work on complex knowledge structures to keep pace with modern knowledge base organizations, two major problems arise:

o The inference mechanism becomes inefficient: it is hard to find the right rule to apply if there are many possibilities.

o Rules begin to lose their properties of modularity and perspicuity: the "meaning" of a rule, especially in the sense of how it affects overall system behavior, becomes lost if the rule can interact with many other rules in unstructured ways.

The remainder of this paper describes an approach to solving these problems based on a philosophy of doing inference that is closely coupled with a principle of rule base organization. This approach is discussed in the context of two implementation technologies.

* This research was supported in part by the Defense Advanced Research Projects Agency under Contract No. DAHC15 72 C 0308, ARPA Order No. 2223, and in part by General Motors Research Laboratories. Views and conclusions contained in this paper are the author's and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government or any person or agency connected with them.
III PHILOSOPHY

The inference methodology described here is a reaction to the use of rules as pattern/action pairs in a rule base that is not intimately related to the program's knowledge base. The basic philosophy [6] is to treat expert system inference as a process of redescription: the system starts with some kind of problem specification (usually a single user request) and redescribes it until it fits a known solution pattern (a data base access command or an implemented program function in the two examples described below). The control structure for this redescription depends on a knowledge representation that explicitly includes the inference rules in the knowledge base.

The redescription effort proceeds in two modes: a narrowing-down process that simply uses any applicable rules to redescribe the input as something more amenable to the program's expertise; and a homing-in process that takes the description remaining from the first process (i.e., when no more applicable rules can be found), finds the most closely related solution pattern, and then uses that solution as a goal. In this homing-in mode, the inference procedure uses consequent rules to resolve the differences between the current description and the desired end.

The narrowing-down phase focuses solely on the input and transforms it into something closer to the system's solution model. Depending on the input, the result of narrowing-down might be an actual solution (meaning the input request was quite close to the system expectations), or something that the system has no idea of how to deal with (meaning that it was very far from what was expected). Given that the user has some idea of what the system can do, and that he wants to make himself understood, we can assume that the result of narrowing-down will often be only a minor perturbation of a system solution. This enables the homing-in process to find a closely related solution to use as a goal. Very specific consequent rules can then be used to resolve the few remaining differences.

Assuming rule-based inference and a structured system knowledge base, the above philosophy can be restated as follows:

o Inference is the transformation of one significant knowledge base structure into another.

o The organization of the knowledge base organizes the rule base.

o The control structure supports both straight rule application anywhere in the knowledge base and consequent rule application in the context of a well defined goal.

This approach makes rule application more efficient, even in a large knowledge base, because highly specific rules are not used--are not even looked for--until they have a good chance of success.
That is, the system does not have to look through everything it knows every time it must apply a rule. This efficiency is further enhanced by the fact that inference is modeled as redescription and rules are tied closely to the system knowledge base. The inference mechanism will not even look at rules that do not relate directly to the kind of knowledge structures found in the description to be transformed. This makes the problem of finding the right rule to apply far less dependent on the number of rules in the system. The highly structured and detailed nature of the knowledge base now works for efficiency of rule application rather than against it.

Separating the inference process into modes and tying rules directly to the structures of the knowledge base also enhances the modularity and perspicuity of the rule base. This methodology has been used to implement the inference components of two knowledge-based systems using quite different technologies. The first, a pattern-match design, is simply sketched, while the second, a network-based scheme, is presented in more detail.

A. Inference in Askit

The Askit system currently being developed at General Motors Research Laboratories ([7], [8]) is a natural language data base query facility designed to be adaptable to new domains (e.g., a new project scheduling data base or a new inventory management data base). The inference task is to translate the user's query into the appropriate set of data base commands to provide the needed data. Askit's knowledge base consists of case structures representing user request types, data base commands, individual English words, etc. Rules are expressed as transformations between system case frames. The condition part of the rule is a partially instantiated case frame that is treated as a pattern to be further instantiated (i.e., "matched") by a description (a fully instantiated case frame) from the program's current state. The conclusion is a case frame to be filled on the basis of the instantiation of the condition part. When the rule is applied, the instantiating description is replaced in the current state by the new structure generated from the conclusion part of the rule. Rules therefore represent allowable redescriptions of case frames for certain choices of case fillers.
For example, the following is a rule for redescribing restrictions in certain user requests as SUBSET commands to the database system:

(REQUEST (OBJECT <table-data>:obj)
         (RESTRICTION =obj (<column-data>:property :relation :value)))
->
(SUBSET =obj WHERE =property =relation =value)

That is, the condition of the rule applies to any request which deals with something that is classified as "table data" in the system knowledge base, and which expresses a restriction of that table data in terms of "column data". The conclusion states that such a request can be redescribed as a SUBSET command. When the rule is applied, the SUBSET command replaces the request in Askit's current state.

Rules are organized into "packets" based on the case frames in the knowledge base: all rules whose conditions are partial instantiations of the same case frame are grouped in the same packet. For example, all rules that deal with restrictions in user requests would form a packet. The packet is represented by a packet pattern which states the common case structure dealt with by the rules of the packet, i.e., a generalization of their condition parts. The packet pattern for the "restriction in requests" packet would be:

(REQUEST (OBJECT :obj)
         (RESTRICTION =obj :rstr))

Packets play a key role in Askit's rule application process. The process begins by matching a description in Askit's current state against the packet patterns known to the system. If a match is found, the individual rules of the packet are tried. If one of the rules is successfully matched, the input structure is redescribed and placed back in the current state.

If no rule matches, Askit goes into homing-in mode. It posits the discrepant part of a not-fully-matched rule from the packet as a new description in the current state, and looks for consequent rules to match this description. Consequent rules are also organized into packets (but based on their conclusions rather than their conditions). If Askit finds a matching rule in the consequent packet, and if the condition part of the consequent rule matches other descriptions in the current state, the discrepancy between the original description and the partially matched rule is considered to be resolved, and processing continues. Otherwise, other possible discrepancies or other partially matched rules are posited, and homing-in is tried again.

Thus, the narrowing-down process in this system is represented by normal packet application, in which an initial description is matched against packets and successively redescribed until it can be seen as an instantiation of one or more system case structures representing data base commands.
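The packet discipline is easy to caricature in code. The following sketch (ours, not Askit's implementation; case frames are reduced to nested dictionaries, with ":x" binding a variable and "=x" referencing one, echoing the rule notation above) shows narrowing-down as packet-gated pattern matching:

def match(pattern, desc, env):
    """Instantiate pattern against desc, extending the binding env."""
    if isinstance(pattern, str) and pattern.startswith(":"):
        env[pattern[1:]] = desc                 # bind a variable
        return env
    if isinstance(pattern, str) and pattern.startswith("="):
        return env if env.get(pattern[1:]) == desc else None
    if isinstance(pattern, dict) and isinstance(desc, dict):
        for key, sub in pattern.items():        # partial instantiation:
            if key not in desc:                 # desc may have extra roles
                return None
            env = match(sub, desc[key], env)
            if env is None:
                return None
        return env
    return env if pattern == desc else None     # literal case filler

def narrow_down(desc, packets):
    """Redescribe desc with applicable rules until none fires."""
    changed = True
    while changed:
        changed = False
        for packet in packets:
            if match(packet["pattern"], desc, {}) is None:
                continue                        # skip the whole packet cheaply
            for cond, conclude in packet["rules"]:
                env = match(cond, desc, {})
                if env is not None:
                    desc = conclude(env)        # replace in the current state
                    changed = True
                    break
    return desc

# One toy rule in the spirit of the SUBSET example above:
rule = ({"REQUEST": {"OBJECT": ":obj", "RESTRICTION": ":rstr"}},
        lambda env: {"SUBSET": {"OBJECT": env["obj"], "WHERE": env["rstr"]}})
packets = [{"pattern": {"REQUEST": {"OBJECT": ":obj"}}, "rules": [rule]}]
# narrow_down({"REQUEST": {"OBJECT": "EMPLOYEE-TABLE",
#                          "RESTRICTION": ("SALARY", ">", "50000")}}, packets)
# -> {"SUBSET": {"OBJECT": "EMPLOYEE-TABLE",
#                "WHERE": ("SALARY", ">", "50000")}}

In Askit itself, the failure case (packet matched, no rule matched) is where homing-in would take over, as described next.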
If this process breaks down, i.e., if a packet pattern is matched but no rule in the packet can be matched, the input description is treated as a perturbation of one of the case structures represented by rule conditions in the packet. Consequent rules are then used to home-in the discrepant description on the desired case structure. Successful resolution of the discrepancies allows normal processing (i.e., narrowing-down) to continue.

B. Inference in Consul

The Consul system is being designed to support cooperative interaction between users and a set of online tools (for text manipulation, message handling, network file transmission, etc.). This "cooperative" interaction includes natural language requests for system action and explanation of system activities. Since user requests may differ radically from the input form expected by the tool designer, Consul's inference task is to translate the user's natural form of requesting system action into the system's required input for actually performing that action.

In the Consul system, the dependence of rule representation and organization on the system knowledge base is carried much further than in Askit. On the other hand, the control structure does not have to be as complex. Consul's knowledge base is a KL-ONE [2] representation of both tool-independent and tool-dependent knowledge. The major organizational framework of the knowledge base is set by the tool-independent knowledge, with the tool-dependent elements instantiating it.

Inference in Consul is a process of taking an existing description in the knowledge base, redescribing it in accordance with an inference rule, and then reclassifying* it in the knowledge base. If the description can be classified as the invocation of an actual tool function in the system, then the description is "executable", and inference is complete. A key aspect of Consul inference is that inference rules are also represented in KL-ONE in the knowledge base; the same process of classification that determines the status of a description also determines the applicability of a rule (compare [5]).

For example, let us examine Consul's treatment of the user request "Show me a list of messages." Parsing (using the PSI-KLONE system [1]) results in the structure headed by ShowAct in figure 1. Consul classifies this structure in its knowledge base, finding, as shown in figure 1, that it is a subconcept of the condition of Rule1. This means that Consul can redescribe the request ShowAct as a call to a "display operation" according to the conclusion of Rule1. The result is a new description, DisplayOperationInvocation1.1, which Consul then classifies in its knowledge base.
If, via this classification process, the new description is found to be a subconcept of an executable function, inference is complete--DisplayOperationInvocation1.1 can simply be invoked by the Consul interpreter. Otherwise Consul will have to use additional rules to further refine the description until it can be seen as an actual call on some other function or functions.

* The classification algorithm was written by Tom Lipkis.

Figure 1: Rule Application in Consul

Figure 2 shows the classification of DisplayOperationInvocation1.1 in Consul's knowledge base. Unfortunately, the system does not know very much about it in this position: it is not a subconcept of an executable function (shown shaded), nor is it a subconcept of any rule condition. Since no applicable function or rule can handle the current description, the system seeks a "closely related" function description to use as a mapping target. The definition of "closely related" is that the two descriptions must share a common ancestor that is a "basic" concept in Consul's knowledge base; the ancestor cannot be a rule itself, nor a newly generated description. Here, the function description DisplayMessageSummaryInvocation is closely related to the current description because they share an appropriate common ancestor, DisplayOperationInvocation.

Once a related tool function description is found, it is used as a goal for consequent reasoning. First, Consul must find the discrepancies between the current description and the desired result. These discrepancies are simply the differences that prevent the current description from being classified as an instance of the desired description: in this example, the parts of DisplayOperationInvocation1.1 that prevent it from instantiating DisplayMessageSummaryInvocation. The discrepancy is that the "input" role of the current description is filled with a list of messages (MessageList.1), while the executable function DisplayMessageSummaryInvocation requires a list of summaries (SummaryList).

Figure 2: Finding a Closely Related Function Description

Consul must now redescribe the current description if it hopes to see it as an invocation of an executable function. There are two ways to redescribe the current description: rules and executable functions (since functions produce new descriptions via output and side-effects). Rules are preferable because they save tool execution time; Consul therefore looks for rules first. In this case, there is a quite general rule that produces the desired effect. Users frequently ask to see a list of things (messages, files, etc.) when they really want to see a list of summaries of these things (surveys, directories, etc.). Consul therefore contains a rule to make this transformation, if necessary.
The rule, shown as Rule2 in figure 3, says that if the current description is a display operation to be invoked on a list of "summarizable objects", then it can be redescribed as a display operation on a list of summaries of those objects. Consul finds this rule by looking in the knowledge base for rules (or executable functions) that produce the "target" part of the discrepancies found earlier. As shown in figure 3, Rule2 can be found by looking "up" the generalization hierarchy (not all of which is shown) from SummaryList.

Consul must next be sure that the condition part of the rule is met in the current state of the knowledge base (this includes the current description, descriptions left by previous rule applications and tool function executions, and the original information in the knowledge base). If the condition is not satisfied, Consul will try to produce the needed state through further rule application and function execution--i.e., through recursive application of the consequent reasoning process.

In this example, the condition of Rule2 is entirely met in the current description (see figure 3). Furthermore, the conclusion of Rule2 resolves the entire discrepancy at hand. In general, a combination of rules and functions is needed to resolve discrepancies. Therefore, with the application of Rule2, Consul has successfully redescribed the initial user request as an executable function. The inference process passes the resulting description on to the interpreter for execution, and the user's request is fulfilled.

Narrowing-down and homing-in are much the same in Consul as they are in Askit. In Consul, however, relationships between rules and notions such as "perturbation" and "closely related structure" come directly from the existing knowledge base representation; a superimposed packet structure is not necessary. The control structure of rule application therefore need not consider the separate issues of matching packets, setting up packet environments, matching rules, etc. Instead, classification includes matching, as applicable rules are found "automatically" when a newly generated description is put in its proper place in the knowledge base. Finding related knowledge structures and consequent rules are also classification problems, representing only a slightly different use of the classification algorithms.

Figure 3: Using Consequent Reasoning
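With the example complete, Consul's control regime can be caricatured as classification-driven rewriting. The sketch below (ours, not Consul; it flattens KL-ONE concepts with roles into bare concept names, so Rule2's "summarizable objects" condition is elided and the concept names only echo figures 1-3) shows the essential loop: a rule fires exactly when the current description classifies at or below the rule's condition:

SUBSUMES = {  # parent concept -> child concepts (a toy generalization lattice)
    "Action": ["ShowAct", "DisplayOperationInvocation"],
    "DisplayOperationInvocation": ["DisplayMessageSummaryInvocation"],
}

RULES = [  # (condition concept, conclusion concept)
    ("ShowAct", "DisplayOperationInvocation"),                          # Rule1
    ("DisplayOperationInvocation", "DisplayMessageSummaryInvocation"),  # Rule2
]

EXECUTABLE = {"DisplayMessageSummaryInvocation"}  # actual tool functions

def is_subconcept(concept, ancestor):
    """Classification reduced to reachability in the taxonomy."""
    return concept == ancestor or any(
        is_subconcept(concept, child) for child in SUBSUMES.get(ancestor, []))

def infer(description):
    """Redescribe and reclassify until the description is executable."""
    while description not in EXECUTABLE:
        for condition, conclusion in RULES:
            if is_subconcept(description, condition):
                description = conclusion   # redescribe per the rule, then
                break                      # reclassify on the next pass
        else:
            return None  # no applicable rule: Consul would begin homing-in
    return description

# infer("ShowAct") -> "DisplayMessageSummaryInvocation"

In Consul itself, Rule2 is actually located by consequent reasoning: the `else` branch above would find a closely related executable concept, compute the discrepancies, and search for rules whose conclusions produce the discrepant parts.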
REFERENCES

[1] Rusty Bobrow and Bonnie Webber, "PSI-KLONE: Parsing and Semantic Interpretation in the BBN Natural Language Understanding System," in Proceedings of the 1980 Conference of the Canadian Society for Computational Studies of Intelligence, CSCSI/SCEIO, 1980.

[2] Ronald Brachman, A Structural Paradigm for Representing Knowledge, Bolt, Beranek, and Newman, Inc., Technical Report, 1978.

[3] Bruce Buchanan, et al., Heuristic DENDRAL: A Program for Generating Explanatory Hypotheses in Organic Chemistry, Edinburgh University Press, 1969.

[4] Randall Davis, Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases, Stanford Artificial Intelligence Laboratory, Technical Report, 1976.

[5] Richard Fikes and Gary Hendrix, "A Network-Based Knowledge Representation and its Natural Deduction System," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, IJCAI, 1977.

[6] William Mark, The Reformulation Model of Expertise, MIT Laboratory for Computer Science, Technical Report, 1976.

[7] William Mark, The "Askit" English Database Query Facility, General Motors Research Laboratories, Technical Report GMR-2977, June 1979.

[8] William Mark, A Rule-Based Inference System for Natural Language Database Query, General Motors Research Laboratories, Technical Report GMR-3290, May 1980.

[9] William Martin and Richard Fateman, "The MACSYMA System," Proceedings of the Second Symposium on Symbolic and Algebraic Manipulation, 1971.

[10] Edward Shortliffe, MYCIN: Computer-Based Medical Consultations, American Elsevier, 1976.

[11] William Swartout, A Digitalis Therapy Advisor with Explanations, MIT Laboratory for Computer Science, Technical Report, February 1977.
| 1980 | 50 |
46 |
A PROCESS FOR EVALUATING TREE-CONSISTENCY

John L. Goodson
Departments of Psychology and Computer Science
Rutgers University, New Brunswick, N.J. 08903

ABSTRACT

General knowledge about conceptual classes represented in a concept hierarchy can provide a basis for various types of inferences about an individual. However, the various sources of inference may not lead to a consistent set of conclusions about the individual. This paper provides a brief glimpse at how we represent beliefs about specific individuals and conceptual knowledge, discusses some of the sources of inference we have defined, and describes procedures and structures that can be used to evaluate agreement among sources whose conclusions can be viewed as advocating various values in a tree partition of alternate values.*

I. INTRODUCTION

Recent work by several investigators ([1], [3], [4] and [7]) has focused on the importance of augmenting deductive problem solvers with default knowledge. Their work provides some of the logical foundations for using such knowledge to make non-deductive inferences and for dealing with the side effects of such inferences. Currently, we are pursuing how conceptual knowledge about general classes of persons, locations, objects, and their corresponding properties can be represented and used by a planning process and a plan recognition process to make deductive and non-deductive inferences about particular persons, objects and locations in an incompletely specified situation (see [5] and [6]).

General knowledge about conceptual classes represented as a concept hierarchy provides a basis for various types of inferences about individuals. The definition of a conceptual class might be believed to hold for an individual, x1, that is believed to be a member of that class (Definitional Inference). The definitions of concept classes which include the class of which x1 is a member might be believed to hold for x1 (Inheritance Inference). The definition of some class that is a subset of the class to which x1 belongs might be used as a source of potential inferences (a kind of Plausible Inference). Additionally, information might be stored directly about x1 (Memory Inference) and there may be other inference types based on different strategies of deductive or plausible inference (for a more detailed discussion see [6]).

However, these sources of inference may not lead to a consistent set of conclusions about the individual. For default theories in general, Reiter [4] has shown that when a default theory is used to extend a set of beliefs, determining the consistency of the extensions is an intractable problem. In this paper we are concerned with a very local and focused subproblem involved in evaluating the agreement or consistency of a set of conclusions and with a strategy for dealing with belief inconsistency. The focus arises from considering these issues in the context of a concept class hierarchy where the classes form a tree partition.

* This research is supported by Grant RR00643-08 from the Division of Research Resources, BRP, NIH to the Laboratory for Computer Science Research, Rutgers University.
Before we discuss this restricted case of belief consistency, we provide a brief glimpse at how we represent beliefs about specific individuals and at several types of class hierarchies used to represent concepts.

II. REPRESENTATION OF BELIEFS AND CONCEPTS

Beliefs about specific objects, persons, etc. are represented as binary relations of the form ((x r y).[T, F or Q]) where: r is a relation defined between two basic classes of entities X and Y; x is an instance of X; and y is either an instance of Y or is a concept that is part of the concept hierarchy with Y as its root. An example of the former relation is ((DON LOC NYC).F) where DON is an instance of PERSON and NYC is an instance of LOCATION. The latter form is exemplified by ((DON AGEIS YOUNG).T) where YOUNG is a concept that is part of an AGE hierarchy. T, F or Q represents the truth value in the current situation.

Concepts are organized into inclusion hierarchies which have as their root one of several basic classes, such as PERSON, AGE, LOCATION, OBJECT, which may have instances in a specific situation. A particular individual may be an instance of several basic classes, e.g. a person DON may be viewed as an OBJECT or a PERSON. Two simplified hierarchies are given below in graph form. Note that no claim is being made here about the adequacy or naturalness of the knowledge represented in these examples.

PERSON                           AGE
 |-- STUDENT                      |-- YOUNG
 |    |-- HIGH SCHOOL STUDENT     |    |-- 15YR
 |    |-- COLLEGE STUDENT         |    |-- 25YR
 |-- ATHLETE                      |-- OLD
      |-- BASEBALL PLAYER              |-- 40YR
      |-- FOOTBALL PLAYER              |-- 80YR

Hierarchies may contain subtree partitions, as might be the case for the PERSON hierarchy shown above. A particular person may be a high school student and a football player. Some hierarchies, especially those representing properties, may form tree partitions, as is the case for the AGE hierarchy shown above.

There are several types of information that can be associated with a concept. One type, the concept definition, provides an intensional characterization of the elements of the conceptual class. A definition is a conjunctive set of descriptions of the form (<basic class> (r1 y1)(r2 z1)...) that are the necessary and sufficient conditions for an instance of the basic class to be considered an element of the conceptual class. The relation that represents the hierarchical structure among concepts is termed COVERS and appears in the above example as a line joining pairs of concepts. This relation implies that for each description in the higher level concept definition (e.g. (r1 y1)), there is a corresponding description (e.g. (r1 y2)) in the lower level concept definition and either y1=y2 or (y1 COVERS y2) in the Y (basic class) hierarchy; and the lower or more specific concept definition must contain at least one description that is more specific (where (y1 COVERS y2)) than the corresponding description in the more general concept definition. As an example, consider these possible definitions for the concept ATHLETE and a more specific concept, FOOTBALL-PLAYER.

(ATHLETE DEF (PERSON (PHYSICALSTATE SOUND)
                     (PLAYSON TEAM)))
  COVERS
(FOOTBALL-PLAYER DEF (PERSON (PHYSICALSTATE SOUND)
                             (PLAYSON FOOTBALL-TEAM)))

An example of a type of plausible inference can be given using these definitions. If the beliefs in memory about DON satisfy the descriptions in the definition of ATHLETE, then there is a basis for believing (DON PLAYSON FOOTBALL-TEAM) with truth value T. However, it is not necessarily the case that this inferred belief is consistent with those in memory. Thus two sources of information about DON, memory and plausible inference, may not agree.
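As a small illustration (our rendering, not AIMDS syntax; all Python names are ours), beliefs, COVERS hierarchies, and definitional satisfaction can be sketched as follows, where `satisfies` is the memory-side test that licenses the plausible inference just discussed:

COVERS = {  # parent -> children, as in the AGE hierarchy drawn above
    "AGE": ["YOUNG", "OLD"],
    "YOUNG": ["15YR", "25YR"],
    "OLD": ["40YR", "80YR"],
}

DEFS = {  # concept definitions: conjunctive sets of descriptions
    "ATHLETE": [("PHYSICALSTATE", "SOUND"), ("PLAYSON", "TEAM")],
    "FOOTBALL-PLAYER": [("PHYSICALSTATE", "SOUND"),
                        ("PLAYSON", "FOOTBALL-TEAM")],
}

beliefs = {  # ((x r y) . tv) entries, e.g. ((DON AGEIS YOUNG).T)
    ("DON", "AGEIS", "YOUNG"): "T",
    ("DON", "PHYSICALSTATE", "SOUND"): "T",
    ("DON", "PLAYSON", "TEAM"): "T",
}

def satisfies(x, concept):
    """Memory inference: do the stored beliefs satisfy concept's definition?"""
    return all(beliefs.get((x, r, y)) == "T" for r, y in DEFS[concept])

# satisfies("DON", "ATHLETE") is True here, so plausible inference may
# propose ((DON PLAYSON FOOTBALL-TEAM).T) -- a belief that memory need
# not confirm, which is exactly how the two sources can disagree.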
III. CONSISTENCY OF INFERENCE SOURCES

We now consider structures that can be used to determine the consistency of a set of sources contributing beliefs relevant to a proposition about an individual. The following paradigm provides a more specific context in which to discuss the problems and mechanisms we have considered. Assume that the task specification is:

1) A goal proposition is given whose truth value is desired by a higher level process. The goal is a statement about a particular individual represented as a binary relation between the individual and a concept, called the goal-target. The goal-target is a member of a tree partition of concepts called the target-tree. This "family" of concepts is the set of potential targets defined for the relation occurring in the goal. It should be noted that there may be several trees rooted in the same basic class. For now we limit our consideration to the case where there is only one tree.

2) Several sources of inference are consulted for beliefs relevant to the goal, that is, beliefs relating the individual to concepts in the target-tree.

3) Since the target-tree is a partition, the "tree-consistency" of the beliefs can be evaluated and used to determine a truth value for the goal (one of T for true, F for false or Q for question, determinate T or F not assignable). This evaluation amounts to assessing the agreement among the sources on a truth value for the goal.

4) The structures created in 3 can be used to record the sources of inference drawn upon in the attempt to achieve a conclusion about the goal.

Many of the structures and procedures discussed in relation to this paradigm are implemented in the knowledge representation system AIMDS (see [8]).

The target-tree can be represented as a Truth Value Tree (TVT) where each node represents a concept in the target-tree. Each node has a slot, TV, for one of the determinate truth values, T or F, and slots for two lists, a true-list and a false-list. These lists consist of two inner lists. The true-list contains one list for recording the sources that support the truth value T and one for recording nodes that require the node to have the truth value T. The false-list has the same structure and records the information relevant to the truth value F. An example will serve to clarify how this structure is used to evaluate tree-consistency.

Assume that the goal is (DON AGEIS 40YR). The TVT for the goal represents the AGE tree presented in a previous example. Initially each node in the TVT has the three empty slots shown below:

(TV NIL)         Truth Value
(T () ())        True-list  (supported by, required by)
(F () ())        False-list (supported by, required by)

If ((DON AGEIS YOUNG).T) is contributed by memory, the following cycle of actions takes place.

1) The truth value slot of each node is made NIL.

2) T is entered as the TV of the node representing YOUNG.

3) Tree-consistency rules propagate truth value requirements to other nodes. Note that these rules depend on interpreting the AGE hierarchy as a tree partition of a finite and closed set of values. One rule propagates T up the tree (to all ancestors), thus the AGE node receives the truth value T. Another rule gives the node OLD the truth value F since the relation AGEIS has been defined such that only one path (leaf to root) of targets in the tree can be true for an individual at any given time, and all others must be false. Finally, a third rule propagates F down the tree (to all descendents), thus the nodes 40YR and 80YR receive the truth value F. In the general case, these rules are looped through until no additional nodes can be given a truth value. Two other rules not applicable here are: propagate T from a parent to a daughter if all other siblings are F; and if all daughters are F, propagate F to the parent.

4) The source of the truth value T for the node YOUNG, i.e. Memory Inference (MI), is registered as support on its true-list. For each of the remaining nodes with a non-null TV, YOUNG is registered as a requirement on the true-list or false-list, depending on the truth value required by YOUNG's being T.

At the end of this cycle, the nodes in the AGE TVT have the following form.

YOUNG:  TV T    (T (MI) ())   (F () ())
OLD:    TV F    (T () ())     (F () (YOUNG))
15YR:   TV NIL  (T () ())     (F () ())
25YR:   TV NIL  (T () ())     (F () ())
40YR:   TV F    (T () ())     (F () (YOUNG))
80YR:   TV F    (T () ())     (F () (YOUNG))

This cycle can be repeated for each of the relevant beliefs that have been contributed. Note that a cycle generates the deductive consequences of a truth value assignment, T or F, to a single node. We are not concerned with propagating changes to previously assigned truth values based on a new assignment from a second relevant belief. Such a mechanism would be required for updating a model and might utilize the antecedent and consequent propagation proposed by London [2].

The TVT created by this cycle is inspected for a truth value for the goal. In our example, the node 40YR represents the goal-target and it has an entry for a single truth value, F. Thus F is returned as the truth value of the goal. However, the assignment of a truth value to the goal can be made contingent on the consistency of the target-tree. Tree-consistency, and thus agreement among the sources, is easily determined: if any node has entries in both the true and false lists, then an inconsistency exists and the value Q (indeterminate truth value) should be returned even if a determinate truth value is indicated for the goal-target. Conversely, if no such node can be found, then the tree, and thus the set of beliefs contributed, are consistent and the truth value indicated for the goal-target may be returned.
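A compact sketch of the propagation cycle (ours, not the AIMDS implementation; the source and requirement bookkeeping on the true/false lists is elided in favor of a plain node-to-truth-value map):

parent = {"YOUNG": "AGE", "OLD": "AGE", "15YR": "YOUNG", "25YR": "YOUNG",
          "40YR": "OLD", "80YR": "OLD"}
children = {"AGE": ["YOUNG", "OLD"], "YOUNG": ["15YR", "25YR"],
            "OLD": ["40YR", "80YR"]}

def propagate(tv):
    """Loop the tree-consistency rules over tv (node -> 'T'/'F')
    until no additional node can be given a truth value."""
    changed = True
    while changed:
        changed = False
        for n, v in list(tv.items()):
            p = parent.get(n)
            if v == "T" and p is not None:
                if tv.get(p) != "T":
                    tv[p] = "T"; changed = True          # T propagates to ancestors
                for s in children[p]:                    # single true path:
                    if s != n and tv.get(s) != "F":      # siblings of a T node
                        tv[s] = "F"; changed = True      # must be F
            if v == "F":
                for c in children.get(n, []):
                    if tv.get(c) != "F":
                        tv[c] = "F"; changed = True      # F propagates to descendents
        for n, cs in children.items():
            if all(tv.get(c) == "F" for c in cs) and tv.get(n) != "F":
                tv[n] = "F"; changed = True              # all daughters F => parent F
            if tv.get(n) == "T":
                open_ = [c for c in cs if tv.get(c) != "F"]
                if len(open_) == 1 and tv.get(open_[0]) != "T":
                    tv[open_[0]] = "T"; changed = True   # lone open daughter gets T
    return tv

# ((DON AGEIS YOUNG).T):
# propagate({"YOUNG": "T"}) ==
# {"YOUNG": "T", "AGE": "T", "OLD": "F", "40YR": "F", "80YR": "F"}

Reading off the goal (DON AGEIS 40YR) from the result gives F, exactly as in the worked example; a full implementation would also thread the supporting sources and requiring nodes through the true-lists and false-lists.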
One way to find a determinate truth value for the goal when the tree is inconsistent is to seek a subtree of the TVT such that:

1) the subtree is rooted in the top node;
2) the subtree is tree-consistent;
3) the subtree contains the node representing the goal-target; and
4) this node has a true-list or false-list entry.

Then a determinate value can be assigned to the goal. An intuitive example of this case is where you can be fairly sure that Don is young even though you have conflicting information about whether he is ten or fifteen years old. If the consistent subtree does not indicate a determinate truth value for the node representing the goal-target, the tree-consistency requirement can be relaxed in order to find a determinate truth value.

The consistent subtree can be extended such that the resulting subtree is tree-consistent, each node has a determinate truth value and the set of truth values is maximally supported by the sources contributing information. One extension procedure involves the following steps (a schematic version appears at the end of this section):

1) For each node in the consistent subtree, assign the truth value indicated by its non-empty list.

2) From the set of nodes without a truth value (TV NIL), select the node with the maximum support for a truth value that is tree-consistent with the current subtree.

3) Assign this node the truth value indicated and apply the tree-consistency rules to further extend the consistent subtree.

4) Continue at 2 until all nodes have a truth value or the basis for deciding 2 does not exist (some nodes may have empty support and cannot be assigned a determinate truth value by the rules).

Several issues must be addressed in carrying out step 2. First, since ties are possible, a decision procedure must be provided. Second, the degree to which a node is required to have a particular truth value might be taken as a measure of indirect support for that truth value. Since the set of nodes that could possibly require a node to have a truth value is partially dependent on its position in the tree, positional bias must be taken into account in deciding degree of indirect support.

This procedure can be applied even when a consistent subtree cannot be found. In this case the top node is given the truth value T to provide a trivial consistent subtree from which to extend. The staged relaxation of the tree-consistency constraint is particularly important when all of the beliefs relevant to a goal are drawn from default knowledge (arrived at through non-deductive inferences). There may be little basis for expecting this knowledge to be consistent, yet it may be rich enough to suggest that one truth value is more plausible than the other.
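The following schematic loop (ours; `support`, `consistent`, and `propagate` are assumed helpers standing for the source bookkeeping, the both-lists inconsistency check, and the rule cycle sketched earlier) renders the extension procedure:

def extend(tv, nodes, support, consistent, propagate):
    """Greedily extend a consistent subtree: repeatedly commit the unlabeled
    node whose best-supported truth value keeps the tree consistent."""
    while True:
        candidates = [(support(n, v), n, v)
                      for n in nodes if n not in tv
                      for v in ("T", "F")
                      if support(n, v) > 0
                      and consistent(propagate({**tv, n: v}))]
        if not candidates:
            return tv                  # step 4: no supported consistent choice
        _, n, v = max(candidates)      # step 2: maximal support (ties would
                                       # need a real decision procedure)
        tv = propagate({**tv, n: v})   # step 3: assign and re-apply the rules

Positional bias and indirect support would refine the `support` measure; this sketch uses raw source counts only.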
ACKNOWLEDGEMENTS

My collaborators, C.F. Schmidt and N.S. Sridharan, have contributed to the evolution and implementation of the ideas expressed here.

REFERENCES

[1] Doyle, J. A Truth Maintenance System. Artificial Intelligence 12 (1979) 90-96.
[2] London, P. A Dependency-based Modelling Mechanism for Problem Solving. AFIPS Proc. Vol. 47, NCC-78, Anaheim, Ca., (June, 1978) 263-274.
[3] McDermott, D. & Doyle, J. Non-Monotonic Logic I. Proc. of the Fourth Workshop on Automated Deduction, Austin, Texas, (Feb., 1979) 26-35.
[4] Reiter, R. A Logic for Default Reasoning. Technical Report 79-8, Dept. of Computer Science, University of British Columbia, (July, 1979).
[5] Schmidt, C.F. The Role of Object Knowledge in Human Planning. Report CBM-TM-87, Dept. of Computer Science, Rutgers University, (June, 1980).
[6] Schmidt, C.F., Sridharan, N.S. & Goodson, J.L. Plausible Inference in a Structured Concept Space. Report CBM-TR-108, Dept. of Computer Science, Rutgers University, (May, 1980).
[7] Shrobe, H.E. Explicit Control of Reasoning in the Programmer's Apprentice. Proc. of the Fourth Workshop on Automated Deduction, Austin, Texas, (Feb., 1979) 97-102.
[8] Sridharan, N.S. Representational Facilities of AIMDS: A Sampling. Report CBM-TM-86, Dept. of Computer Science, Rutgers University, (May, 1980).
[9] Sridharan, N.S., Schmidt, C.F., & Goodson, J.L. The Role of World Knowledge in Planning. Proc. AISB-80, Amsterdam, Holland, (July, 1980).
| 1980 | 51 |
47 |
Reasoning About Change in Knowledgeable Office Systems

Gerald R. Barber
Room 800b
Massachusetts Institute of Technology
545 Technology Square
Cambridge, Mass. 02139
(617) 253-5857

ABSTRACT

Managing and reasoning about dynamic processes is a central aspect of much activity in the office. We present a brief description of our view of office systems and why change is of central importance in the office. A description system used to describe the structure of the office and office activity is discussed. A viewpoint mechanism within the description system is presented and we discuss how this mechanism is used to describe and reason about change in the office. A general scenario is described in which viewpoints are illustrated as a means of describing change. Previous technologies for accommodating change in knowledge embedding languages are characterized. We contrast the approach using viewpoints with previous technologies where change is propagated by pushing and pulling information between slots of data structures.

I. Introduction

The computer has been used in the office environment for many years with its application mainly limited to highly structured and repetitive tasks in a non-interactive mode. With the ever-decreasing cost of hardware, computers can potentially be used in the future to aid office workers in a wider variety of tasks. Indeed, the computer based office system is today seen as both a motivation for achieving a new understanding of office work and as a medium within which to integrate new tools and knowledge into a coherent system. This has led to the realization that there is enormous potential in the use of the computer in the office in novel and as yet unforeseen ways. These new uses will impact the way office work is done in fundamental ways, demanding new ideas about how to manage information in an office and a new conceptualization of what office work is in the presence of powerful computational capabilities.

As a step toward developing computer systems that may effectively support office workers in their tasks we employ two paradigms from Artificial Intelligence, those of knowledge embedding and problem solving. We are developing a description system called OMEGA [Hewitt 80] to be used to embed knowledge about the structure of the office and office work in an office system. One of our objectives is to support the problem solving activities of individuals in an office. Much of the problem solving activity within an office concerns reasoning about change. We have developed mechanisms in OMEGA to describe changing situations. In the following section we present a short description of our model of the office. Following this we discuss the importance of change in the office and the mechanism within OMEGA to deal with change.
‘I’hc    approach    WC take    is    compared    with other approaches to the problem of accommodating    and    managing change.    II. The Knowledgeable    Office    System    We view    an office    system    in terms    of the two    dominant    structures    in the office, the applicalion s(nlcfure    and the organiza~iunal    struclure. l’hc    application    structure    of an office    system concerns    the    subject    domain    of the office.    It includes    the rules and objects    that    compose    the intrinsic    functions    of a particular    office    system.    As an    example,    in an office    concerned    with    loans the application    structure    includes such cntitics    as loan applications.    credit ratings and such rules    as criteria    for accepting or rejecting loans.    In an insurance company    the    application    structure    is concerned    with insurance    politics,    claims and    actuarial    tables.    ‘Ihc    application    structure    explains    the scope of the    functionality    an offrcc    system has on a subject    domain    as well    as    providing    a model by which those functions    arc charncteriir.cd.    OVCI tly,    the application    structure    is the primary    reason for the existence    of the    office system.    In contrast to the application    structure    is the social structure    of    the office    system as an organi/.ation    [Katz    781. Our concern    with this    aspect of an office    system stems from the fact that the activity    in the    application    domain    of    an    ofl?cc    system    is    IcalLed    by    people    coopcrating    in a social system.    ‘I’hc structure    of this social system    involves    such aspects of an organization    as the roles of the individual    participants.    the interaction    of roles, the social norms of the office and    the various subsystems    that make up the organization.    WC bicw the    office system as a functioning    organism    in an environment    from which    it extracts    resources    of various    kinds    and to which    it dclivcrs    the    products of its mechanisms.    OMEGA’s    descriptions    arc    the    hlndamental    entities    upon    which the Knowlcdgcnblc    Office System is based. ‘I’hc emphasis of our    approach    is on a description    manipulation    system    for    cmbcdding    knowledge    as opposed    to a forms manipulation    system.    Descriptions    are    used    to    express    the    relationships    bctwcen    objects    in    the    Knowledgcablc    Office    System.    Descriptions    are more    fimdamental    than    electronic    forms,    in particular,    electronic    forms    arc a way of    viewing descriptions,    a visual manifestations    of descriptions.    One of the goals of our work    is to support    office    workers    in    their problem    solving activity.    Problem    solving is a pervasive aspect of    office work that has been ncglcctcd    until recently [Wynn    79, Suchman    791.    Office    work    is naturally    characteri/.cd    as goal oriented    activity.    ‘The office proccdurc    is mcrcly a suggested way by which to accomplish    a particular    goal.    WC believe that this is one reason why it has proved    to bc difficult    to dcscribc office work from a procedural    point of view.    
The formalism we are developing allows us to describe and reason about the application and organizational structures of office systems as well as the interaction between these structures. The major benefits of OMEGA with relevance to our discussion here are that a computational system can support problem solving in dynamic environments that are weakly structured and knowledge rich. OMEGA also provides a precise language within which to characterize the static and dynamic aspects of office systems.

A central problem in an office system is reasoning about and managing change. This is a recurrent theme at several levels of abstraction. Viewing the organization in relation to its environment, the organization must evolve in order to adapt to the changing environment. For example, an accounting office must adapt to new tax laws or an office must adapt to new technology. Viewing the organization as producer of some product, the organization must adjust its production output to the demand for the product which it produces in light of the resources available to the organization and the constraints under which it must operate. The individuals that make up an organization are faced with such tasks as reasoning about processes that have produced anomalous results, maintaining system constraints as the state of the constrained parts change and analyzing the implications of hypothesized processes.

OMEGA has a viewpoint mechanism that is used to describe and reason about change. The viewpoint mechanism provides a means to present time varying processes to office workers for analysis, be these processes historical, in progress or postulated. Changing environmental dependencies and changing aspects of the organization can be captured in descriptions using the viewpoint mechanism. In the remainder of this paper we describe OMEGA and its viewpoint mechanism.

III. The Viewpoint Mechanism

OMEGA is a system with which a structure of descriptions is built. The system is designed to be incremental; new knowledge can be incorporated into the system as it is discovered or as the need for it arises. There is no minimal amount of information needed before the system is usable. The system is monotonic in the sense that nothing is lost when new information is added. As is explained in the following paragraphs, knowledge is relativized to viewpoints; information that is inconsistent with information in a particular viewpoint can be placed in a different viewpoint. This accommodates aspects of non-monotonic systems [McDermott 79]--where new information may invalidate previously held beliefs--without the need for a notion of global consistency.

OMEGA's fundamental rule of inference is merging: new descriptions are merged with previous descriptions.
OMEGA is used to build, maintain and reason over a lattice of descriptions. Descriptions are related via an inheritance relation called the is relation. The is relation is relativized to a viewpoint that indicates the conditions under which the is relation holds. Intuitively, a viewpoint represents the conditions under which the inheritance relation holds. In this respect it is reminiscent of McCarthy's situational calculus [McCarthy 69] and the contexts of QA4 [Rulifson 72].

A major difference between these approaches and viewpoints is that viewpoints are descriptions and thus are subject to the full descriptive power of OMEGA. Viewpoints may be embedded in structures expressing complex inheritance relationships relating viewpoints to one another. Other aspects of OMEGA include higher order capabilities, such as the ability to describe properties like transitivity for relations in the system, and meta-description capabilities to talk about the parts of descriptions.

IV. Dealing With Change

A key property of viewpoints is that information is only added to them and is never changed. Consider, for example, a description which is the underlying representation of a form. The description is relativized to a viewpoint and information is added to this description, increasing its specificity. Descriptions may contain constraints between attributes; as information is added, further information may be deduced. Should the information in a field of a form be changed, then the following scenario might occur:

1. A new viewpoint is created and described as being a successor to the old viewpoint.
2. All information that was not derived from the changed information is copied to the new viewpoint.
3. The new information is added in the new viewpoint, and deductions resulting from this information are made.
4. The description in the new viewpoint represents the most recent contents of the form.

In this case the new viewpoint inherits from the old viewpoint all but the changed information and the information deduced from the changed information. What actions are taken when information in a viewpoint is changed is specified via meta-descriptions.
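The following minimal sketch, under our own assumptions, walks through this four-step scenario; the derived_from map is an illustrative stand-in for OMEGA's meta-descriptions.

```python
# Information is never overwritten in a viewpoint; a change creates a
# successor viewpoint instead, and deductions based on the changed
# information are dropped so they can be re-derived.

class Viewpoint:
    def __init__(self, predecessor=None):
        self.predecessor = predecessor
        self.facts = {}          # attribute -> value, add-only
        self.derived_from = {}   # attribute -> attribute it was deduced from

    def add(self, attr, value, derived_from=None):
        assert attr not in self.facts, "information is only ever added"
        self.facts[attr] = value
        if derived_from is not None:
            self.derived_from[attr] = derived_from

def change(old, attr, new_value):
    """Steps 1-4: successor viewpoint, copy underived info, add the change."""
    new = Viewpoint(predecessor=old)                        # step 1
    stale = {a for a, src in old.derived_from.items() if src == attr}
    for a, v in old.facts.items():                          # step 2
        if a != attr and a not in stale:
            new.add(a, v, old.derived_from.get(a))
    new.add(attr, new_value)                                # step 3
    return new                                              # step 4

v0 = Viewpoint()
v0.add("state", "California")
v0.add("sales_tax", "6%", derived_from="state")
v1 = change(v0, "state", "Oregon")
print(v1.facts)   # the old deduction is gone; it must be re-derived
```

The old viewpoint survives intact, which is what gives the system the historical character discussed below.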
Previous approaches to the problem of accommodating changing information have been to perform updates to the data structures in question. Systems based on property lists, such as LISP, have used put and get operations to update and read database information. These have the disadvantage that deductions based on updated information must be handled explicitly, leading to unacceptable complexity and modularity problems. Languages like FRL [Goldstein 77] use triggers on data structure slots to propagate changes. The disadvantage here is that there is no support for keeping track of what was deduced and why. This makes changes difficult because information dependencies are not recorded.

The language KRL has been used to implement a knowledge-based personal assistant called ODYSSEY [Fikes 80]. ODYSSEY aids a user in the planning of trips. In this system pushers and pullers are used to propagate deductions as a result of updates and to make deductions on reads. A simple dependency mechanism is used to record information dependencies. In this case it is necessary to be very careful about the order in which triggers fire, for as updates are made there is both new and old information in the database, making it difficult to prevent anomalous results due to inconsistencies.

OMEGA separates new and old information into different viewpoints. Information consistency is maintained within viewpoints. The propagation of information between viewpoints is controlled via meta-description. An advantage of the approach using viewpoints is that the system has a historical character. This is an important step toward our goal of aiding office workers in problem solving about dynamic processes. Viewpoints can be used as historical records of past processes, as an aid in tracking ongoing processes and as an aid to determine the implications of postulated actions.

VI. Conclusion

We have presented the viewpoint mechanism of the description system OMEGA along with some examples of its use to describe a changing form in an accounting office. The viewpoint mechanism has proved useful in describing objects whose properties vary with time as well as a means with which to interpret statements about the system's description structure. The viewpoint mechanism presented here is related to that in ETHER [Kornfeld 79] and to the layers of the PIE system [Goldstein 80]. Viewpoints are a powerful unifying mechanism which combine aspects of McCarthy's situational tags [McCarthy 69] and the contexts of QA4 [Rulifson 72]. They serve as a replacement for update and pusher-puller mechanisms.

OMEGA is monotonic, using merging of descriptions as a fundamental rule of inference. It uses viewpoints to keep track of different possibilities. This aspect causes it to differ substantially from systems based on property lists [IPL, Lisp, etc.] which are based on operations to put and get attributions in data structures. These differences carry over to more recent systems [SIR, SIMULA, FRL, KRL, etc.] based on record structures with attached procedures that execute when a put (update) or get (retrieval) operation is performed.

References

[Fikes 80] Fikes, Richard. Odyssey: A Knowledge-Based Assistant. To appear in Artificial Intelligence.

[Goldstein 77] Goldstein, I. P. and Roberts, R. B. NUDGE, a Knowledge-Based Scheduling Program. Proceedings of the Fifth International Joint Conference on Artificial Intelligence.

[Goldstein 80] Goldstein, Ira. PIE: A Network-Based Personal Information Environment.
Presented at the Office Semantics Workshop, Chatham, Mass., June 15-18.

[Hewitt 80] Hewitt, C., Attardi, G., and Simi, M. Knowledge Embedding with a Description System. AI Memo, MIT, August, 1980. To appear.

[Katz 78] Katz, D. and Kahn, R. The Social Psychology of Organizations. John Wiley and Sons, 1978.

[McCarthy 69] McCarthy, J. and Hayes, P. J. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969.

[McDermott 79] McDermott, D. and Doyle, J. Non-Monotonic Logic I. AI Memo 486b, MIT, July, 1979.

[Rulifson 72] Rulifson, J., Derksen, J. and Waldinger, R. QA4: A Procedural Calculus for Intuitive Reasoning. Artificial Intelligence Center Technical Note 73, Stanford Research Institute, November, 1972.

[Suchman 79] Suchman, L. Office Procedures as Practical Action: A Case Study. Technical Report, XEROX PARC, September, 1979.

[Wynn 79] Wynn, E. Office Conversation as an Information Medium. PhD thesis, Department of Anthropology, University of California, Berkeley, 1979.

[Kornfeld 79] Kornfeld, W. Using Parallel Processing for Problem Solving. AI Memo 561, MIT, December, 1979.
On Supporting the Use of Procedures in Office Work

Richard E. Fikes and D. Austin Henderson, Jr.
Systems Sciences Laboratory
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California 94304

Abstract

In this paper, we discuss the utility of AI techniques in the construction of computer-based systems that support the specification and use of procedures in office work. We begin by arguing that the real work of carrying out office procedures is different in kind from the standard computer science notions of procedure "execution". Specifically, office work often requires planning and problem solving in particular situations to determine what is to be done. This planning is based on the goals of the tasks with which the procedures are associated and takes place in the context of an inherently open-ended body of world knowledge. We explore some of the ways in which a system can provide support for such work and discuss the requirements that the nature of the work places on such support systems. We argue that the AI research fields of planning and knowledge representation provide useful paradigms and techniques for meeting those requirements, and that the requirements, in turn, present new research problems in those fields. Finally, we advocate an approach to designing such office systems that emphasizes a symbiotic relationship between system and office worker.

Introduction

We are interested in developing office systems that would make use of a knowledge base describing what tasks are to be done, who is to do them, and how they are to be done. Such descriptions specify the functions of an office and how it is organized to achieve that functionality. We claim that such a knowledge base can form the basis for a broad range of system support in an office. In this paper, we discuss some of the ways in which AI paradigms and techniques are relevant to the support of office work by such computer-based systems.

We begin by describing some of the support functions we have in mind, and then address what we consider to be the primary issue; namely: what is the nature and structure of the information in such a knowledge base? We are guided in addressing that issue by considering the nature of the work that occurs in an office and how such information is used in that work.

We first argue that the work involved in carrying out office procedures is different in kind from the "execution" of a procedure that one might expect by drawing analogies with the behavior of a computer executing a program. We illustrate and support this claim by presenting a typical case of office work and analyzing the actions that take place there. From this argument we derive a requirement for systems which support office work: namely, that they be flexible enough to support the variety of behavior occasioned by the unpredictable details of particular situations.

We then turn to the relevance of AI for achieving this functionality. We develop the idea that the paradigms from the AI literature for automatic planning and execution monitoring of plans provide a more suitable alternative to the procedure execution model of office work; and furthermore that the demands of supporting office work require extensions to those paradigms.
Second, we argue that the knowledge representation problems presented by the open-ended office domain are unsolved and challenging. We suggest that they can be attacked by the use of specialization-based representations and facilities for storing "semi-formal" structures in which uninterpreted text is intermixed with data whose semantics is understood by the system.

Finally, we argue that the whole enterprise of supporting office work can only hope to succeed if we regard the office systems as functioning in a partnership with the office workers. Due to the open-endedness of the domain, the system cannot hope to "understand" the full import of the information which it is handling, and so must rely on human aid. Furthermore, to fully support the users, the system must be able to represent, although not necessarily understand, any of the information in the domain. We conclude by advocating an approach of "symbiotic processing" between system and office worker and the use of AI techniques in constructing systems to support office work.

Supporting the Production and Use of Procedural Descriptions

We begin by considering some of the ways in which computer-based office systems could facilitate the effective production and use of descriptions of what tasks are to be done, who is to do them, and how they are to be done. There are two groups of people whom an office system dealing with such descriptions can support: the producers and the users. However, the production and use phases are often tightly interwoven, with the same people often involved in both (despite what managers may choose to think).

The producers of these "what, who, and how" specifications (typically managers and planners) are engaged in a process of organizing the work in the office so that the office's goals and commitments will be met. That process involves defining the tasks to be done, designing procedures for doing those tasks, and assigning individuals to carry out those procedures.

A system can support these specification processes by providing a descriptive framework in which to express the specifications and by helping to manage the complexity that arises from the interactions of the tasks, constraints, procedures, and policies being specified. The descriptive framework would provide a guide as to what information needs to be specified (based on the intended purpose and uses of the specifications) and a terminology for expressing that information. For example, the system might provide a template for describing a task that would include fields for the task's goals, inputs, outputs, responsible agent, activation conditions, etc., and a description language for filling those fields. The system could also indicate direct implications of a description, such as the subtasks implied by a task description of recognizing the task's activation events, obtaining the task's inputs, or communicating its outputs.

The system could aid in managing the complexity of the specifications primarily by monitoring interface requirements among interacting components to help assure that those interfaces are well specified and the requirements are met. For example, if the description of a task included the source of each of the task's inputs and the destination of each of its outputs, then the system could alert the specifier when those input-output connections between tasks are inconsistent (e.g., when some input is not an output of the source task), and could prompt for a specification of how input-outputs are to be communicated.
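As a concrete illustration of such a template and consistency check, here is a minimal sketch under our own assumptions; the Task fields and the check are illustrative, not the system described in this paper.

```python
# A task-description template and the input-output consistency check
# described above: alert when a task's declared input is not produced
# by its declared source task.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    goals: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)   # input name -> source task
    outputs: list = field(default_factory=list)
    agent: str = "unassigned"
    activation: str = ""                         # activation condition

def check_connections(tasks):
    """Alert the specifier about inconsistent input-output connections."""
    by_name = {t.name: t for t in tasks}
    for t in tasks:
        for item, source in t.inputs.items():
            if item not in by_name[source].outputs:
                print(f"ALERT: {t.name} expects {item!r} from {source}, "
                      f"but {source} does not produce it")

order = Task("take-order", goals=["record customer order"],
             outputs=["order-form"], agent="order entry clerk")
bill = Task("bill-customer", agent="billing clerk",
            inputs={"order-form": "take-order",
                    "credit-check": "take-order"})
check_connections([order, bill])   # flags the missing credit-check output
```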
The other group that the system can support is the users of the "what, who, how" specifications. That support would include facilities for accessing the specifications in useful ways, for adding informal notes to the specifications, for monitoring the carrying out of tasks, and for doing some of the procedural steps.

One useful way in which the system would act as an information source is in providing "how-to" information to a person who has a task to do and doesn't know how to carry it out. For example, when a secretary is given a task that he is not familiar with (such as "order some business cards for me" or "obtain a consulting contract and arrange the travel plans for this person"), the system could provide him with a description of what needs to be done, how to do it, and who the people are who will play a role in getting it done. One could amplify the system's usefulness in this role as a how-to resource by allowing its users to add informal notes to the descriptions. Then the system also becomes a repository for the accumulated societal wisdom concerning the realities of actually carrying out the tasks.

The functionality we have discussed thus far has only required knowledge of the procedures in general. The system's usefulness can be further enhanced by providing it with the capability of knowing about specific instances. With this capability the system could participate in the work in one of two ways: by tracking the progress of tasks, and by carrying out tasks itself.

A task tracking facility would allow the system to:

* be a source of information regarding the task's status, history, and plan;
* send requests to the agents who are to do the next steps, and make available to them a description of what they are to do, what has been done, and pointers to the relevant documents;
* send out reminders and alerts when expected completion times of task steps have passed; and
* ask for intervention by the appropriate agent when problems arise.

A system which is tracking tasks in this way is participating as a partner in doing the work. Once that symbiotic relationship has been established between system and office worker, there are many steps in office procedures that the system could do itself. Such tasks would certainly include communication activities (e.g., using electronic mail), and maintenance of consistency in structured information bases (e.g., automatically filling in fields of electronic forms, see [Fikes], 1980).
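A minimal sketch of such a tracking facility follows, under our own assumptions; the TaskTracker type and its behavior are illustrative, not the authors' system.

```python
# The tracker answers status queries, sends a request to the agent
# responsible for the next step, and raises reminders when a step's
# expected completion time has passed.

import datetime

class TaskTracker:
    def __init__(self, steps):
        # steps: list of (step name, responsible agent, due date)
        self.steps = steps
        self.done = set()
        self.history = []

    def complete(self, step):
        self.done.add(step)
        self.history.append((step, datetime.date.today()))
        nxt = self.next_step()
        if nxt:
            name, agent, _ = nxt
            print(f"request to {agent}: please do {name!r}")

    def next_step(self):
        for s in self.steps:
            if s[0] not in self.done:
                return s
        return None

    def reminders(self, today):
        for name, agent, due in self.steps:
            if name not in self.done and today > due:
                print(f"reminder to {agent}: {name!r} was due {due}")

t = TaskTracker([("take order", "order clerk", datetime.date(1980, 5, 1)),
                 ("bill customer", "billing clerk", datetime.date(1980, 5, 8))])
t.complete("take order")
t.reminders(datetime.date(1980, 5, 10))
```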
Office Work and Office Procedures

With this class of intended systems in mind, we now turn to the question of the nature of the office work that we hope to support. In so doing, our goal is to determine the nature and structure of the information needed by our intended systems to support that work.

A Procedure Execution Model

A common model of office work considers an office worker to be a processor with a collection of tasks to be done and a procedure for doing each task. The work, in this model, involves executing each procedure and "time sharing" among them. However, studies of office work reveal a complexity and diversity of behavior far beyond what would be predicted by this model (for example, see [Suchman], [Wynn], and [Zimmerman]). In this section we explore the nature of this apparent discrepancy as a way of exposing characteristics of office work that we think have important implications in the design of systems to support that work. {Note: The potential usefulness of this discrepancy was suggested to us by [Suchman].}

First, consider the office worker's ongoing (meta-)task of determining how to allocate his resources among the collection of tasks assigned to him. His work involves the planning, scheduling, and context switching associated with time sharing among those tasks. However, he can exercise options in carrying out his scheduling task that are not available to the scheduler in a computerized time sharing system. In particular, he can modify the tasks themselves. For example, the worker may choose to

* ignore some of the requirements of a task,
* renegotiate the requirements of a task,
* get someone else to do a task,
* create and follow a new procedure for doing a task.

Hence, office work includes, in addition to the carrying out of tasks, the determination of when a task should be done, how the task is to be done, and whether the task will be done at all. {Note: The office worker also has goals other than the completion of assigned tasks. For example, he has career goals (try to get ahead in the company), company goals (maximize profit), personal goals (keep from being bored), social goals (be regarded as good company), and societal goals (be honest).}

Second, we take it as obvious that the domain with which office systems must deal is open-ended: truly anything may become relevant to the workings of offices at one time or another. This fact implies that a procedure which implements a task is necessarily an inadequate description of all the actions which must be done to achieve the task's goals in all the various situations that can (and inevitably will) occur. That is, at the time the procedure is defined which implements a task, one cannot predict either the range of situations that will be encountered in an office or the extent of the knowledge, activities, and considerations that will be needed to carry out the task. Hence, for any given procedure, situations may occur in which the procedure does not indicate what is to be done, or in which what is indicated in the procedure cannot be done. For example, situations may occur in which:

* a case analysis in the procedure does not include the current situation,
* assumptions about the form and availability of inputs for a step are not met,
* resources required to do a step are not available,
* the action described in a step will not have the intended effects.

The procedures associated with each task serve as a guide in that they indicate one way of doing the task under a particular set of assumptions. The office worker has the responsibility of deciding in each particular situation whether the procedure's assumptions are satisfied and whether he wants to carry out the task in the way specified by the procedure.

Third,
the office worker has the problem of interpreting abstract specifications of what is to be done. For example, it is not uncommon in procedure specifications to find phrases like "include any other pertinent data", "send forms approximately six weeks in advance of the deadline", and "arrange for employee to receive benefit checks at home". What is "any other pertinent data", when is "approximately six weeks in advance of the deadline", and how is one to "arrange for the employee to receive benefit checks at home"? The specification of the procedure doesn't say. Hence, a necessary part of the work of following office procedures is determining what the abstract specification implies is to be done in each particular case.

We conclude from these observations that the standard model of procedure execution is inadequate for describing office work. The original procedure specification serves only as a guide in this process and can be thought of as the first approximation to a plan for the particular task at hand. It is the responsibility of the office worker in each particular case to determine the suitability of the procedure, fill in missing details, and modify it where necessary to achieve the goals of the task.

An Example of Office Work

To make these points more tangible, let us now look at an example of the everyday work which goes on in an office (this is an elaboration of an actual case of office work reported by [Wynn], p. 49). This example exhibits the problematic nature of the work, and the need for reflecting upon the specifications of the procedures.

Xerox sells supplies for its copiers - paper, toner, and such. Customer orders for supplies are taken over the phone by a "customer order entry clerk" (COEC). The COEC talks to the customer and fills out a form which records the order. This order form is used by other clerks to bill the customer and to deliver the supplies. The form has a field for recording the address at which the copier is located, and there is an underlying assumption that this is the address to which the supplies are to be delivered.

In the particular incident of interest, the customer informed the COEC that he could not supply an address for the copier because it was located on an ocean-going barge(!). This situation, of course, raised the question of what should be put into the address field of the order form.

The clerk realized that the intended use of the address was to specify where the supplies were to be delivered, and that because the copier was on a barge the needed address was dependent upon when the delivery of the supplies was to be made. Since he could not predict that date, he obtained from the customer a telephone number that could be called when the delivery was about to be made to obtain the address of the current location of the barge. He entered that telephone number into the field of the form and added a notation indicating how the number was to be used.

The story continues: When the billing clerk was making up the bill, the question arose as to whether or not to charge California sales tax. The answer depends on whether or not the supplies were to be delivered out-of-state.
The address field of the order form was examined, as per the usual procedure for answering the question, and of course no information about the state was available. What now?

The billing clerk read the notation, called the telephone number, and asked the respondent whether the delivery was to be made in or out of California. Again, the date of the delivery was crucial in determining the answer. However, the billing clerk knew approximately when the supplies would be available, and therefore was able to determine from the person called that the delivery would be made in California, even though the precise delivery address was still not known. An addition was made to the information in the address field of the order form indicating that the delivery was to be made in California, and the bill was prepared and sent.

Finally, the shipping clerk, with the supplies in hand, repeated the telephone call when preparing the shipping label. The address was then known, the address was added to the form, and the supplies were delivered.

Analysis of This Example

What we have here is a case of a blown assumption. The procedures in which all three of these clerks were playing a role were designed on the assumption that copiers do not move and therefore have a fixed address. The particular case violated that assumption.

The COEC was confronted with a problem because he could not carry out a step of a procedure (i.e., he could not fill in an address for the copier). There are several things he could have done at that point, including ignoring the step or telling the customer that unless he provided an address the order could not be taken. Instead, he chose to stop "executing" the procedure and to step back and reason about it. In particular, he considered what were the intended uses for the problematical address; i.e., what was the goal of filling in the form's address field. Using that information, he created a plan involving both himself and the shipping clerk that was within the spirit, although not within the letter, of the established procedures. That is, he devised an alternative that would satisfy the goals of the intended users of the address, as he perceived them. Hence, those goals were the crucial information that the COEC needed in order to determine suitable alternative actions when the unexpected situation occurred.

Note that the COEC was apparently not aware of the billing clerk's use of the address field to determine state sales tax. Hence, the COEC's alternative plan did not indicate how the billing clerk was to deal with this situation. The billing clerk, like the COEC, was confronted with a problem of not being able to carry out a step in a procedure (because the address field of the order form did not contain an address). Again, as was the case with the COEC, he did not ignore the problem or reject the situation as unacceptable. Instead, he attempted to find suitable alternative actions that would satisfy his task goals and allow the billing to proceed. His planning involved understanding the alternative procedure for the shipping clerk that had been formulated by the COEC, and realizing that he could use the telephone number included in that formulation to satisfy his goals.

Consider the nature of the information involved in this example.
Note the unpredictability at the time the form was designed of the kinds of information that would be put on the form. Note also that the information on the form regarding the address was changing throughout the procedure. First there was a note describing a procedure for obtaining the address, then a partial address containing only the state added to that note, and finally a complete description of the address. Another form of partial description that played a role in the example was approximation; in particular, the clerks' knowledge of the approximate delivery date. The strength and certainty of those approximations determined when and to what extent the delivery address was obtained.

Supporting the Work Requires Flexibility

We have presented the idea that the work that actually goes on in offices is not routine. It consists of many particular cases of applying the given procedures to the details of those cases. This work involves dealing with unsatisfied assumptions, doing planning, understanding goals, and using information that is partial, approximate, and changing. The illusion that office workers execute procedures in a manner that is analogous to the way computers execute procedures ignores these realities of the situation. Given that picture of office work, we now turn our attention to the requirements placed on the design of computer-based systems to support such work.

A primary design challenge is to find ways of providing the flexibility that is needed to allow for the application of established procedures to the circumstances of particular cases. With respect to information being supplied to the system by users, this flexibility involves dealing with cases where information is missing, information is provided in unexpected forms, and/or information in addition to what was expected is supplied. With respect to the procedural steps being carried out, this flexibility involves dealing with cases where steps are omitted, steps are done in different order, and/or additional steps are done. When office systems lack the flexibility to deal with these contingencies, they severely restrict the options of their users and thereby become yet another bureaucratic barrier to be overcome in "getting the work done".

Consider, for example, an electronic forms system for supporting the work of the COEC. When the "copier on a barge" problem arose, the COEC would have needed that system to be flexible enough to allow entries other than addresses in the form's address field. In particular, the COEC needed to be able to say to the system, in effect, "I can't give you an address in this case. Instead, I'll give you a note for the shipping clerk." If the system also used its descriptions of the procedures being followed to provide instructions to the clerks regarding what is to be done, then the system would need to be able to accept the COEC's decision to omit the step of providing an address in the form's address field, and to incorporate into the shipping clerk's procedure an instruction to read the COEC's note the first time the address was needed.
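To make this concrete, here is a minimal sketch under our own assumptions of a field flexible enough to hold either an address or an informal note; the types and the procedure hook are illustrative, not the authors' system.

```python
# An address field whose consumers must be prepared for a non-address,
# as the barge case required.

class Address:
    def __init__(self, street, city, state):
        self.street, self.city, self.state = street, city, state

class Note:
    """Informal content standing in for an expected value."""
    def __init__(self, text, instruction_for):
        self.text = text
        self.instruction_for = instruction_for  # e.g. "shipping clerk"

def delivery_state(field):
    """A consumer of the field copes when no real address is present."""
    if isinstance(field, Address):
        return field.state
    print(f"no address on file; see note for {field.instruction_for}: "
          f"{field.text}")
    return None   # the office worker takes over from here

order_form = {"address": Note("call 555-0142 just before delivery "
                              "to learn the barge's location",
                              instruction_for="shipping clerk")}
delivery_state(order_form["address"])
```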
In addition to being able to accept such alternative inputs, any of the system's facilities for doing computations based on those inputs (e.g., to compute the state sales tax on customer orders using the address on the order form) must be designed to deal with cases in which those inputs have some unexpected structure or are not available at all. The challenge in the design of such processing facilities is to provide ways for the system, in cooperation with the office worker who is being supported, to overcome the difficulty posed by the failed computation so that work on the task can continue in a productive manner.

One often hears the argument that this need for flexibility and variation from established procedures could be overcome by doing a more thorough analysis of the office tasks and thereby producing complete procedures that would cover all of the cases that could occur. Our claim is that because of the open-ended nature of the office domain, one cannot anticipate all of the situations that will occur in the carrying out of a given task, and therefore cannot totally characterize the inputs that will be available or the actions that might be taken to satisfy the task's goals.

The Relevance of AI to Supporting the Work

An Alternative Model: Planning and Plan Execution

The observations we have presented on the characteristics of office work have led us to seek an alternative to the procedure execution model to guide us in building a knowledge base for office support systems. We have found what we think to be a suitable alternative in the paradigms from the AI literature for automatic planning and execution monitoring of plans. That is, we take the viewpoint that we are confronted not so much with the problems of representing and supporting the execution of procedures, but with the problems of representing plans and supporting the monitoring and replanning that occurs during their execution.

This viewpoint provides us with a conceptual framework for understanding the use of procedures in an office, an understanding that we feel is critical to dealing with the problems of designing systems to actually support that work. In the following paragraphs we present some of the key aspects of this point of view and discuss the ways in which it suggests that a system could provide useful support.

The basic requirement on a data base describing these plans is that it provide the information needed to monitor a plan's execution and to do whatever replanning might be required. What information is needed during those operations? By referring to the planning paradigm used in the STRIPS systems ([Fikes] et al., 1972), we obtain the suggestions that execution monitoring requires descriptions of the expected results of each operator, the intended use of each operator result, the preconditions of each operator, and the assumptions made by the planner about the world at each step of the plan. Planning involves the use of descriptions of the current state of the world, the operators available as potential plan steps, and the goals to be achieved by the plan.

This planning paradigm characterizes some of the information that might be useful in the doing of office tasks and therefore suggests what to include in the description of office tasks and their associated procedures. In particular, it suggests the inclusion of information about task goals, intended uses of operator results, and precondition assumptions of operators. For example, the COEC employed information regarding the intended use of the address by the shipping clerk to determine an alternative plan when the address was not available. If the COEC had also known about the billing clerk's intended use of the address, then he would have tried to obtain the information needed for that use (i.e., the state in which the delivery would be made) and, if successful, would have eliminated the difficulties that the billing clerk had in the example.

One of the major ways we see for a system to provide support is by serving as an information source for office workers during the execution of plans and during any replanning that may be required. Hence, the planning paradigm suggests what information to include in the system's representation of the tasks and procedures, and provides us with a basis for characterizing the questions that the user may ask of the system.
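A minimal sketch, under our own assumptions, of a plan-step record carrying this STRIPS-inspired information follows; the field names are illustrative, not the authors' representation.

```python
# A plan-step record: expected results, intended uses of those results,
# preconditions, and planner assumptions, plus a monitor that names who
# is affected when an expected result fails to appear.

from dataclasses import dataclass, field

@dataclass
class PlanStep:
    operator: str
    preconditions: list
    expected_results: list
    intended_uses: dict = field(default_factory=dict)  # result -> consumers
    assumptions: list = field(default_factory=list)

take_order = PlanStep(
    operator="record customer order",
    preconditions=["customer on phone"],
    expected_results=["copier address"],
    intended_uses={"copier address": ["delivery by shipping clerk",
                                      "sales-tax decision by billing clerk"]},
    assumptions=["copiers do not move and so have a fixed address"])

def monitor(step, observed_results):
    """Flag expected results that did not materialize, and who needs them."""
    for result in step.expected_results:
        if result not in observed_results:
            users = step.intended_uses.get(result, [])
            print(f"step {step.operator!r} did not produce {result!r}; "
                  f"replanning must still serve: {users}")

monitor(take_order, observed_results=[])   # the barge case: no address
```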
The paradigm of hierarchical planning (e.g., see [Sacerdoti]) also applies here and can be used in our characterization of office work. That paradigm would suggest that we consider each individual step in a plan as being a task with its own inputs, enabling conditions, goals, etc. There may or may not be a plan associated with any given step's task. In the cases where there are, these plans form a tree and then get combined in various ways to form a planning network. Such a network represents a hierarchical plan, where the top of the hierarchy describes a top level task and each successive level of the hierarchy describes increasingly detailed subtasks. In the standard non-hierarchical planning case, there is a plan for each step and each plan consists of a single operator; hence, there is a one-to-one correspondence between plan steps and operators. In the hierarchical planning paradigm and in the office, that one-to-one correspondence need not exist.

Hierarchical planning networks appear to be an important device for representing office plans for several reasons. They are a useful structure for representing the task-subtask and goal-subgoal relationships that need to be known about during execution monitoring and replanning, and they provide the basic descriptive framework for indicating how the work is to be organized.

Also, since there are effectively no primitive operators in the office, there is a need for describing office plans at varying levels of detail, depending on the specific needs of the describer and users of the descriptions. That flexibility in the level of detail of specification is therefore needed in an office system's representation facilities. The system can then be involved in the office work at varying levels of detail. For example, the system may know that a travel request needs to be authorized, but know nothing about the subtasks involved in obtaining the authorization. Such flexibility is also an important tool for enabling the system to participate in situations that it does not understand. For example, if a plan that the system is helping to monitor fails, the user may not describe to the system the alternative plan he decides to use. However, the system knows about the goals of the original plan that the alternative must also satisfy and can therefore monitor the accomplishment of those goals, even though it now has no model of how those goals are being achieved.
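The following minimal sketch, under our own assumptions, shows such a hierarchical plan network; the OfficeTask type and the example tasks are illustrative.

```python
# Each step is itself a task that may or may not carry its own plan,
# so the network can describe the work at varying levels of detail.

class OfficeTask:
    def __init__(self, name, goal, plan=None):
        self.name = name
        self.goal = goal
        self.plan = plan or []   # subtasks; empty means no detail known

    def outline(self, depth=0):
        detail = "" if self.plan else "   (no subtask detail known)"
        print("  " * depth + f"{self.name}: {self.goal}{detail}")
        for sub in self.plan:
            sub.outline(depth + 1)

trip = OfficeTask("arrange trip", "traveler attends meeting", plan=[
    OfficeTask("authorize travel", "travel request is approved"),
    OfficeTask("book travel", "tickets and lodging in hand", plan=[
        OfficeTask("reserve flight", "flight reserved"),
        OfficeTask("reserve hotel", "room reserved")])])

trip.outline()
```

Note that "authorize travel" carries goals but no subtasks, which is exactly the situation described above: the system can still monitor the goal without a model of how it is achieved.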
An important way in which office work motivates extension of current AI planning paradigms is that office work is done in a multi-processor environment. That is, one can consider each agent (person or system) in an office to be a processor that is accomplishing tasks by carrying out plans that have either been given to it or created by it. Any given plan may involve the participation of several agents, each agent acting as an independent processor executing one or more of the plan's steps.

The processors make commitments to each other regarding the goals they will achieve during the execution of a plan [Flores]. Therefore, in creating a plan to be carried out by multiple agents, the commitments that those agents will make to each other are a crucial part of a multi-processor plan.

Furthermore, the commitments an agent has made and that have been made to him form for him a set of constraints within which he must work. In particular, these commitments form the context in which replanning takes place, in that any new plan must satisfy those commitments. However, the agent also has the options during replanning of renegotiating the commitments he has made or of ignoring them altogether.

Any system that is to participate in the replanning process needs to support this commitment-based view of planning and take commitments into consideration. In particular, a system could help an agent keep track of the commitments he is involved with that relate to a particular task and indicate his options for changing them during replanning. To support this tracking, the system's plan representation needs to include for each plan step both the commitments made by and to the agent responsible for doing the step.

The multi-processing nature of office work also implies that steps of a plan can be done in parallel. Hence, representations for office plans need to allow specification of partial orderings for plan steps. That requirement and the pervasiveness of replanning that occurs during plan execution suggest that task descriptions should include a set of necessary and sufficient "enabling conditions" for beginning the task so that it can be determined when the task can be begun irrespective of the order or nature of the steps that achieved those conditions.
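A minimal sketch of such enabling conditions, under our own assumptions (the condition and step names are illustrative):

```python
# A step may begin as soon as its enabling conditions hold, irrespective
# of which steps, which agents, or what order established them.

def ready_steps(steps, established):
    """steps: dict of step name -> set of enabling conditions."""
    return [name for name, conditions in steps.items()
            if conditions <= established]

steps = {
    "bill customer": {"order recorded", "delivery state known"},
    "ship supplies": {"order recorded", "delivery address known"},
}

# The barge case: the state became known by an unanticipated route
# (a phone call), yet billing is enabled all the same.
established = {"order recorded", "delivery state known"}
print(ready_steps(steps, established))   # ['bill customer']
```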
Up to this point we have not considered perhaps the most immediate question that arises out of looking at office work from a planning point of view: to what extent can we expect a system to automatically do the planning and replanning that is needed for office tasks? The primary limitation on such automatic planning seems to be the open-endedness of the office domain. That is, the extent to which there are considerations relevant to the formation of the plan of which the system has no understanding will limit the system's effectiveness at determining an appropriate plan. For example, the system may not know about possible operators in the situation, the costs or chances for success of renegotiating or ignoring existing commitments, other goals that interact with the task's goals, or the implications of an unexpected situation. These limitations have led us to focus on a symbiotic relationship between system and office worker during planning and replanning, where the system plays primarily a role of supporting the planning being done by the users by helping represent, manage, and communicate the resulting plans.

In conclusion, then, we are claiming that

* a multi-processor hierarchical planning model is a useful one for understanding office work and therefore for structuring an office system's knowledge base, and
* the demands of supporting office work motivate new research in multi-processor commitment-based planning.

Knowledge Representation Challenges

We turn now to the demands that supporting office work makes on the representation of the knowledge which systems have of the office domain. We then discuss two techniques that arise from work in the AI community that provide particularly promising starting points for confronting those demands.

The single most salient demand of such representation schemes is that they be able to respond to the need for change in conceptualization of the work. As we have seen, the domain of office work is inherently open-ended (e.g., before the barge case there was no notion of addresses being time-dependent; afterwards there was). Consequently there is no way to anticipate the full range of subject matter with which the system will have to deal. In consequence, the representation scheme must be able to handle any conceivable conceptualization which, over the course of the life of the office, the users of the office system choose to enter into the system.

Furthermore, as time passes, this conceptualization will change to meet the changing understanding of the office domain which the users of the system have. Sometimes these changes will be small; at other times, there will be major "re-thinkings" of the information. The system must not only be able to represent this changing pattern of thought, but must also be able to simultaneously represent the pattern of changing thought: to support office work, it will have to be able to support the history of what has happened previously, and consequently will have to be able to hold simultaneously the old conceptualizations, for supporting the understanding of the past, and the new.

The second demand placed by office systems on the mechanisms for representing the knowledge which is within them is to support partial knowledge of their domain. This incompleteness comes in at least three forms: the support of a subset of some expected body of knowledge (e.g., the state in which an address is may be known, but nothing more); the support of an abstraction of the knowledge (e.g., the supplies ordered are a paper product, but which one is unknown); and the support of an approximation (e.g., the date of delivery is between mid-April and mid-May). In particular, this ability to support partial knowledge will permit the entry into the system of all that one knows, even though that may only be part of what is desired in a complete description.
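The three forms of partial knowledge, and the resulting demand on access mechanisms discussed next, can be sketched minimally as follows, under our own assumptions; the descriptions and the lookup function are illustrative.

```python
# Subset, abstraction, and approximation as partial descriptions, plus
# an accessor prepared for the expected information not to be there.

import datetime

# Subset: only the state portion of the address is known.
address = {"state": "California"}          # street and city missing

# Abstraction: the supplies are known only to be a paper product.
supplies = {"category": "paper product"}   # exact item unknown

# Approximation: delivery falls somewhere in a known interval.
delivery = {"earliest": datetime.date(1980, 4, 15),
            "latest":   datetime.date(1980, 5, 15)}

def lookup(description, field):
    """Detect absence rather than fail; coping happens downstream."""
    if field in description:
        return description[field]
    print(f"{field!r} is not (yet) known; coping is required")
    return None

lookup(address, "state")    # known
lookup(address, "street")   # absent: detected, not a crash
```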
A resultant demand is that the mechanisms which access information must be prepared for the expected information not to be there (e.g., the state portion of an address is missing from the available information when the billing clerk tries to bill the barge owner). This preparation involves the representation scheme in at least being able to detect the absence, and further, in having some means of coping with the resulting problems.

The third demand results from the fact that knowledge of things often accumulates over time. Sometimes such an accumulation of partial descriptions can be reformed into a coherent whole. But more often the pieces are better retained as independent, uncoordinated facts. Indeed, rather than think about one description of an entity, it is often useful to view the object as having multiple descriptions. Thus, for example, the knowledge about the address of the copier might at some point include three distinct descriptions of the address: as having California as its state portion, as being a changing thing, and as being something which can be further determined by carrying out the procedure "call this number and ask".

The final demand arises from the expectation that the system should provide a general model of office work. This model could be crafted by experts on the organizational structuring of offices, and would then be available as a conceptual framework to support the description of more particular details. In fact, these concepts become the terms in which the details are not only described, but understood. Thus, for example, the concepts of task, goal, procedure, plan, agent, post, authorization, commitment, and data repository might be provided as a very general framework for modeling offices. Particular offices would have their own particular tasks, goals, etc.

These demands pose challenging research problems in knowledge representation which we are not claiming to have solved. However, we discuss in the following paragraphs two starting points for confronting these problems that look particularly promising to us and that we are using in our work.

Our first starting point for responding to the demands of supporting office work is the use of a specialization-based knowledge representation formalism (see, for example, [Brachman]). This, and similar, schemes for formally and precisely representing knowledge take as their goals the first three of our needs: support for a changing conceptual structure, partial description, and multiple description. The major structuring principle for their representations is definition by specialization: any concept in a knowledge base can be taken as a basis for defining a new concept which is a special case of the old one. Thus, describing the details of a particular office can be done in this formalism by specializing more general descriptions of offices. This specialization can be done in steps, thus permitting the tailoring of the conceptualization in various ways to produce progressively less abstract descriptions. In the end, the most specific descriptions are understood through their place in a taxonomic lattice (a concept can specialize more than one abstraction) of more abstract concepts. Not only does this enrich the understanding of the domain, for one can understand similarities between concepts in terms of common abstractions, but it also provides locations for attaching knowledge about abstractions which will immediately apply to all special cases of those abstractions.
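A minimal sketch, under our own assumptions, of definition by specialization over a taxonomic lattice; the Concept type is an illustrative stand-in, not KLONE's actual constructs.

```python
# A concept may specialize several abstractions at once, and knowledge
# attached to an abstraction applies to all of its special cases.

class Concept:
    def __init__(self, name, parents=(), **local_facts):
        self.name = name
        self.parents = list(parents)   # a lattice: several parents allowed
        self.local_facts = local_facts

    def facts(self):
        """Inherited facts, with more specific concepts taking precedence."""
        merged = {}
        for parent in self.parents:
            merged.update(parent.facts())
        merged.update(self.local_facts)
        return merged

task = Concept("task", requires_agent=True)
approval = Concept("approval step", (task,), requires_signature=True)
purchase = Concept("purchase task", (task,), uses_funds=True)
# A special case of two abstractions at once:
purchase_approval = Concept("purchase approval", (approval, purchase))
print(purchase_approval.facts())
```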
Our second response to the knowledge representation needs presented by the office domain is the use of semi-formal descriptions. That is, we are using the formal knowledge representation mechanisms in a style which permits us to capture information that the system does not "understand" in such a way that it can be usefully employed by human users of the system. For example, paragraphs of English prose or diagrams can be associated with concepts; the only use the system will be able to make of them will be to present them to a human user, and permit him to read and modify them.

We view these mixtures of formal (the structure is understood by the system) and informal (the structure is understood only by humans) descriptions as an essential "escape valve" for our knowledge representation systems: if it were required that the system had to understand the conceptualization underlying all the information in its knowledge structure, then the cost of entering information which is currently beyond the system's understanding would be very high. Instead, by escaping into informal description, the system can still be used as a repository for all of the information about the situation at hand, and yet permit the work to proceed. The system becomes primarily a communications device for supporting interaction between people carrying out office work. However, because the informal descriptions are represented in the same description formalism as the formal descriptions, they can be integrated into the knowledge base in a consistent manner.
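A minimal sketch of such a semi-formal description, under our own assumptions; the class and its fields are illustrative.

```python
# Formal fields the system can compute with sit alongside informal text
# it can only present to a human reader.

class SemiFormalDescription:
    def __init__(self):
        self.formal = {}     # fields whose semantics the system knows
        self.informal = []   # uninterpreted prose, for human eyes only

    def set_field(self, name, value):
        self.formal[name] = value

    def attach_note(self, text):
        self.informal.append(text)

    def present(self):
        for name, value in self.formal.items():
            print(f"{name}: {value}")
        for note in self.informal:
            print(f"[note for human reader] {note}")

order = SemiFormalDescription()
order.set_field("customer", "Barge Lines, Inc.")
order.attach_note("No fixed address: call 555-0142 before delivery "
                  "to learn where the barge will be.")
order.present()
```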
An Approach to System Building: Symbiotic Processing

In the account given above of our vision of the behavior and properties of a system to support the real work of offices, there has been repeated reference to interaction between the system and its human users. This will be an important aspect of successfully completing the planning which must be done in office work. Also, in representing knowledge within the system, we have argued that a "semi-formal" mixture of information will be important for achieving practical systems. That is, it takes both man and machine to understand the information held within the system.

Changing the emphasis, we prefer to think of the humans and the computer as cooperating processing engines carrying out the "computations" of the system in partnership. The idea here is that both processing engines, each with its own processing capabilities, knowledge, and memory structures, are essential to getting the task done. Neither could effectively do the work without the other. In biology, such interdependence is called symbiosis, and a system composed of two or more interdependent organisms is called a symbiotic system. We therefore use the same term to refer to the sort of office systems envisioned here: there is a symbiosis of human and machine.

Why is it that this quite obvious co-operation between man and machine has not so far been the dominating pattern of computer use in office (as well as other) systems? Our theory is this: when batch processing was the only economically feasible form of business (and therefore office) systems, such interaction was impossible - the human partners were simply not around when they were needed to help the computer in its tasks. However, the pervasiveness of the belief among designers of computer systems that all procedures in offices were "routine" obscured the need for truly cooperative interaction. A result of this belief, and of the introduction and widespread use of batch processing systems in the business environment, has been the establishment and buttressing of the now-accepted notion that there is something fundamental about partitioning the world into routine cases and exceptions.

We view this distinction between routine and exception as quite artificial: routine cases are those cases where no action is required of the human partner in the symbiosis; exceptions are everything else. And, in fact, it is even worse than that: the distinction breeds viewing the world this way, which in turn enhances the distinction. To get the proportion of routine cases up to the point where batch systems could be justified, many cases which require some small amount of human processing are handled by forcing the human processing to be done before the cases are "entered into" the system. This is often done at the expense of some capacity of the system (as a whole) to handle not-quite-standard cases: these cases are wedged into the mold of the "routine". And when exceptions are not even permitted - when everything has to go through the system as a routine case - the mold can well become a straight-jacket.

We propose that office systems need not make this distinction between the routine and the exceptional. Instead, it should be possible, and is desirable, to return to the "good old days" when all cases were processed in the same way, some with more effort than others. We believe that, armed with the understanding of offices presented here, and supported by studies in the AI fields of automatic planning and knowledge representation, a modern version of the "good old world" can be achieved through systems built around the notion that all cases are handled by the powerful symbiosis of humans and machines.

Acknowledgements

We would like to thank Lucy Suchman and Eleanor Wynn for early contributions to the ideas in this paper. In particular, they made available to us transcripts of their interviews with office workers and observations of office work, and helped open our eyes to what those interviews and observations had to say about office work. We would also like to thank Lucy for her insightful participation in the continuing research that has led to the observations in this paper.

References

Brachman, R., et al. KLONE Reference Manual. BBN Report No. 3848, July 1978.

Fikes, R. E., Hart, P. E., and Nilsson, N. J. "Learning and Executing Generalized Robot Plans". Artificial Intelligence, 3(4), winter 1972, pp. 251-288.

Fikes, R. E. "Odyssey: A Knowledge-Based Personal Assistant". To appear in Artificial Intelligence.

Flores, F. Univ. of California at Berkeley, personal communication.
Sacerdoti, E. D. A Structure for Plans and Behavior, New York: American Elsevier, 1977.

Suchman, L. A. "Office Procedures as Practical Action: A Case Study". SSL Internal Report, Xerox Palo Alto Research Center, Sept. 1979.

Wynn, Eleanor Herasimchuk. "Office Conversation as an Information Medium". Department of Anthropology, University of California at Berkeley, May 1979.

Zimmerman, D. H. "The Practicalities of Rule Use". in J. D. Douglas (Ed.), Understanding Everyday Life, Aldine Publishing Company, Chicago, pp 221-237.
Metaphors and Models

Michael R. Genesereth
Computer Science Department
Stanford University
Stanford, California 94305

1. Introduction

Much of one's knowledge of a task domain is in the form of simple facts and procedures. While these facts and procedures may vary from domain to domain, there is often substantial similarity in the "abstract structure" of the knowledge. For example, the notion of a hierarchy is found in biological taxonomy, the geological classification of time, and the organization chart of a corporation. One advantage of recognizing such abstractions is that they can be used in selecting metaphors and models that are computationally very powerful and efficient. This power and efficiency can be used in evaluating plausible hypotheses about new domains and can thereby motivate the induction of abstractions even in the face of partial or inconsistent data. Furthermore, there is a seductive argument for how such information processing criteria can be used in characterizing "intuitive" thought and in explaining the cogency of causal arguments.

The idea of large-scale, unified knowledge structures like abstractions is not a new one. The gestalt psychologists (e.g. [Kohler]) had the intuition decades ago, and recently Kuhn [Kuhn], Minsky [Minsky], and Schank [Schank & Abelson] have embodied similar intuitions in their notions of paradigms, frames, and scripts. (See also [Bobrow & Norman] and [Moore & Newell] for related ideas.) The novelty here lies in the use of such structures to select good metaphors and models and in the effects of the resulting power and efficiency on cognitive behavior.

This paper describes a particular formalization of abstractions in a knowledge representation system called ANALOG and shows how abstractions can be used in model building, understanding and generating analogies, and theory formation. The presentation here is necessarily brief and mentions only the highlights. The next section defines the notions of abstraction and simulation structure. Section 3 describes the use of abstractions in building computational models, and section 4 shows how abstractions can be used to gain power as well as efficiency.

2. Abstractions and Simulation Structures

Formally, an abstraction is a set of symbols for relations, functions, constants, and actions together with a set of axioms relating these symbols to each other. Abstractions include not only small, simple concepts like hierarchies but also more complex notions like concavity and convexity or particles and waves. A model for an abstraction is essentially an interpretation for the symbols that satisfies the associated axioms.
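As a concrete rendering of this definition, the hierarchy abstraction might be written down as a vocabulary of symbols plus a set of axioms. The notation below is our own illustrative sketch in Lisp data, not ANALOG's actual syntax:

    ;; A hypothetical rendering of the hierarchy abstraction: a vocabulary
    ;; of symbols together with axioms relating them.  The notation is
    ;; invented for illustration; it is not ANALOG's actual format.
    (defvar *hierarchy-abstraction*
      '(abstraction hierarchy
         (symbols (relation rel)       ; x is an ancestor of y
                  (relation parent)    ; immediate parent
                  (constant root))
         (axioms
           ;; Rel is the transitive closure of Parent.
           (forall (x y) (implies (parent x y) (rel x y)))
           (forall (x y z) (implies (and (rel x y) (rel y z)) (rel x z)))
           ;; No cycles: Rel is irreflexive.
           (forall (x) (not (rel x x)))
           ;; Every non-root object has exactly one parent.
           (forall (x) (implies (not (eq x root))
                                (exists-unique (y) (parent x y)))))))

A model for this abstraction is then any interpretation of Rel, Parent, and Root satisfying these axioms, whether the domain is a taxonomy, geological time, or an organization chart.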
Different task domains can be models of the same abstraction (as biological taxonomy, geological time, and organization charts are instances of hierarchies); or, said the other way around, each abstraction can have a number of different models. Importantly, there are multiple computational models for most abstractions. In order to distinguish computer models from the task domains they are designed to mimic, they are hereafter termed simulation structures, following Weyhrauch [Weyhrauch].

There is a strong relationship between abstractions and metaphors, or analogies. Many analogies are best understood as statements that the situations being compared share a common abstraction. For example, when one asserts that the organization chart of a corporation is like a tree or like the taxonomy of animals in biology, what he is saying is that they are all hierarchies. With this view, the problem of understanding an analogy becomes one of recognizing the shared abstraction.

Of course there are an infinite number of abstractions. What gives the idea force is that the simulation structures for certain abstractions have representations that are particularly economical, algorithms that are particularly efficient, or theorems that are particularly powerful, e.g. hierarchies, grids, partial orders, rings, groups, monoids. Consequently, there is advantage to be gained from recognizing the applicability of one of these special abstractions rather than synthesizing a new one.

Even when the applicability of such special abstractions and simulation structures cannot be determined with certainty (say, in the face of incomplete or faulty information), there is advantage in hypothesizing them. Until one is forced to switch abstractions due to incontrovertible data, one has an economical representation and powerful problem solving methods. By biasing the early choice of abstractions in this way, these criteria can have qualitative effects on theory formation.

3. Models

The importance of abstractions and their associated measures of economy, efficiency, and power is clearest in the context of a concrete implementation like the ANALOG knowledge representation system. The interesting feature of ANALOG is that it utilizes a variety of simulation structures for representing different portions of the knowledge of a task domain. This setup is graphically illustrated in figure 1. The user asserts facts in the system's uniform, domain-independent formalism, and the system stores them by modifying the appropriate simulation structure. Facts for which no simulation structure is appropriate are simply filed away in the uniform representation. (ANALOG currently uses a semantic network representation called DB [Genesereth 76].
The formalism allows one to encode sentences in the predicate calculus of any order and provides a rich meta-level vocabulary.) Descriptions of each of ANALOG's abstractions and simulation structures are also encoded within the DB representation.

Figure 1 - An Overview of ANALOG

This approach departs from the custom in knowledge representation systems of using uniform, domain independent formalisms. While there are advantages to uniformity, in many cases the representations are less economical than specialized data structures, and the associated general procedures (like resolution) are less efficient or less powerful than specialized algorithms. For example, a set in a small universe can be efficiently represented as a bit vector in which the setting of each bit determines whether the corresponding object is in the set. Union and intersection computations in this representation can be done in a single machine cycle by hardware or microcoded boolean operations. By contrast, a frame-like representation of sets would consume more space, and the union and intersection algorithms would have running times linear or quadratic in the sizes of the sets. The distinction here is essentially that between "Fregean" and "analogical" representations, as described by Balzer [Balzer].
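The bit-vector simulation structure can be exhibited directly. The following minimal sketch uses Common Lisp's built-in bit-vector operations as a software stand-in for the hardware boolean operations mentioned above; the universe and its indexing are our own illustration:

    ;; Sets over a small universe as bit vectors: bit i is 1 iff object i
    ;; is in the set.  Union and intersection reduce to boolean word
    ;; operations, in contrast to list- or frame-based representations.
    (defvar *universe* #(alice bob carol dave))   ; object i <-> index i

    (defun make-set (&rest indices)
      (let ((bv (make-array (length *universe*)
                            :element-type 'bit :initial-element 0)))
        (dolist (i indices bv) (setf (sbit bv i) 1))))

    (defun set-union (s1 s2)        (bit-ior s1 s2))
    (defun set-intersection (s1 s2) (bit-and s1 s2))
    (defun member-p (i s)           (= 1 (sbit s i)))

    ;; (set-union (make-set 0 1) (make-set 1 3))  =>  #*1101

Note that the representation is economical (one bit per object) and the algorithms are constant-time per machine word, exactly the kind of advantage the text claims for specialized simulation structures.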
Note that ANALOG's approach is perfectly compatible with uniform knowledge representation systems like DB and RLL [Greiner & Lenat]. The addition of abstractions and simulation structures can be viewed as an incremental improvement to such systems, and their absence or inapplicability can be handled gracefully by using the uniform representation.

It's important to realize that ANALOG is not necessarily charged with inventing these clever representations and algorithms, only recognizing their applicability and applying them. The approach is very much in the spirit of the work done by Green, Barstow, Kant, and Low in that there is a knowledge base describing some of the best data representations and algorithms known to computer science. This knowledge base is used in selecting good data representations and efficient algorithms for computing in a new task domain. One difference with their approach is that in ANALOG there is a catchall representation for encoding assertions when no simulation structure is applicable. Other differences include an emerging theory of representation necessary in designing new simulation structures (see section 3.2) and the use of the criteria of economy, efficiency, and power in theory formation.

ANALOG's use of simulation structures is in a very real sense an instance of model building. Architects and ship designers use physical models to get answers that would be too difficult or too expensive to obtain using purely formal methods. ANALOG uses simulation structures in much the same way. In fact, there is no essential reason why the simulation structures it uses couldn't be physical models. Furthermore, as VLSI dissolves what John Backus calls the vonNeumann bottleneck, the number of abstractions with especially efficient simulation structures should grow dramatically.

3.1 Building a Model

As an example of modeling, consider the problem of encoding the organization chart of a corporation. The first step in building a model for a new task domain is finding an appropriate abstraction and simulation structure. The knowledge engineer may directly name the abstraction or identify it with an analogy, or the system may be able to infer it from an examination of the data. In this case, the hierarchy abstraction is appropriate, and there are several appropriate simulation structures. One of these is shown in figure 2. Each object in the universe is represented as a "cons" cell in which the "car" points to the object's parent. The relation (here called Rel) is just the transitive closure of the Car relation, and Nil is the root. For the purposes of this example, the "cdr" of each cell may be ignored.

Figure 2 - A Simulation Structure for the Hierarchy Abstraction

An important requirement for a simulation structure is that it be modifiable. Therefore, it must include actions that the model builder can use in encoding knowledge of the task domain. Usually, this requires the ability to create new objects and to achieve relations among them. In this case, the Ncons subroutine creates a new object, and Rplaca changes an object's parent.

Part of the task of finding an appropriate abstraction and simulation structure is setting it up for use in encoding knowledge of the task domain. This includes three kinds of information. The first is an index so that the system can determine the simulation structure appropriate to a new assertion. (This index is necessary since several domains and simulation structures may be in use simultaneously.) Secondly, there must be a procedure for mapping each assertion into its corresponding assertion about the simulation structure. And, finally, the system must have information about how to achieve the new assertion.

Once the simulation structure is chosen and set up, the system builder can begin to assert facts about the task domain, and the system will automatically modify the simulation structure. As an example of this procedure, consider how the system would handle the assertion of the fact (Boss-of Carleton Bertram). First, it would use its index to determine that the simulation structure of figure 2 is being used and to recover the mapping information. Then it would map the assertion into the simulation domain. In this case, let's say that Arthur is the boss of Bertram and Beatrice, while Carleton has been installed in the model as Beatrice's employee. Then the new assertion would be (Rel ((Nil)) (Nil)), where the first argument is the object representing Carleton and the second represents Bertram. By examining the meta-level information about the simulation structure, the system retrieves a canned procedure (Rplaca) for achieving this fact and executes it, with the result that Carleton's "car" is redirected from Beatrice to Bertram.
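This example is easy to render directly in Lisp. The sketch below follows figure 2 (the "car" of each cell points to its parent; Rel is the transitive closure of Car); the symbol table and the rendering of Ncons are our own scaffolding for the example, not ANALOG's actual code:

    ;; Figure 2's simulation structure: each object is a cons cell whose
    ;; car points to its parent.  A distinguished cell stands in for Nil,
    ;; the root.  Rel is the transitive closure of Car.
    (defvar *objects* (make-hash-table))

    (defun new-object (name parent-cell)       ; the paper's Ncons
      (setf (gethash name *objects*) (cons parent-cell nil)))

    (defun rel-p (lower upper)                 ; transitive closure of Car
      (let ((p (car lower)))
        (and p (or (eq p upper) (rel-p p upper)))))

    ;; Build the state described in the text: Arthur over Bertram and
    ;; Beatrice, with Carleton (incorrectly) installed under Beatrice.
    (defvar *root*     (cons nil nil))
    (defvar *arthur*   (new-object 'arthur   *root*))
    (defvar *bertram*  (new-object 'bertram  *arthur*))
    (defvar *beatrice* (new-object 'beatrice *arthur*))
    (defvar *carleton* (new-object 'carleton *beatrice*))

    ;; Asserting (Boss-of Carleton Bertram) maps to one canned procedure:
    (rplaca *carleton* *bertram*)              ; redirect Carleton's "car"
    (rel-p *carleton* *arthur*)                ; => T, via Bertram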
An interesting aspect of model building is that complete information is often required. For example, in adding a node to the simulation structure of figure 2, the system must know the object's parent in order to succeed. (It has to put something in the "car" of the cell.) This problem can sometimes be handled by the addition of new objects and relations that capture the ambiguity. For example, one could add the special token Unknown as a place filler in the simulation structure above. (Of course, the resulting structure would no longer be a hierarchy.) Another example is using the concept of uncle as a union of father's brother and mother's brother. Unfortunately, this approach increases the size of the model and makes deductions more difficult. Unless there are strong properties associated with such disjunctive concepts, it is usually better to carry the ambiguity at the meta-level (i.e. outside the model, in the neutral language of the knowledge representation system) until the uncertainty is resolved.

Another interesting aspect of the use of simulation structures is the automatic enforcement of the axioms of the abstraction. For example, in the simulation structure of figure 2, it is impossible to assert two parents for any node simply because a "cons" cell has one and only one "car". Where this is not the case (as when a simulation structure is drawn from a more general abstraction), the axioms can still be used to check the consistency and completeness of the assertions a system builder makes in describing his task domain. For example, if the system knew that a group of assertions was intended to describe a hierarchy, it could detect inconsistent data such as cycles and incomplete data such as nodes without parents.
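Such checks are easy to state over raw parent assertions. The following minimal sketch (our own formulation, not ANALOG's) detects both kinds of bad data:

    ;; Consistency/completeness checks for data that is *intended* to be
    ;; a hierarchy, given as an alist of (child . parent) assertions.
    ;; Detects cycles (inconsistent) and unrecorded parents (incomplete).
    (defun parent-of (x parents) (cdr (assoc x parents)))

    (defun on-cycle-p (x parents)
      ;; Follow parent links from x; in a true hierarchy every chain
      ;; terminates.  Two-pointer (Floyd) cycle detection.
      (loop with slow = x and fast = (parent-of x parents)
            while fast
            when (eq slow fast) return t
            do (setf slow (parent-of slow parents)
                     fast (parent-of (parent-of fast parents) parents))))

    (defun check-hierarchy (parents)
      (loop for (child . parent) in parents
            when (on-cycle-p child parents)
              collect (list 'cycle-through child)
            when (and parent (not (assoc parent parents)))
              collect (list 'no-parent-recorded-for parent)))

    ;; (check-hierarchy '((a . b) (b . a)))
    ;;   => ((CYCLE-THROUGH A) (CYCLE-THROUGH B))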
3.2 Designing a Simulation Structure for an Abstraction

The only essential criteria for simulation structures are representational adequacy and structure appropriate to their abstractions. For every assertion about the task domain in the language of the abstraction, there must be an assertion about the simulation structure; and the structure must satisfy the axioms of the abstraction.

In creating a simulation structure, one good heuristic is to try to set up a homomorphism. Sometimes, the objects of the simulation structure can be used directly, as in the case of using "cons" cells to represent nodes in a hierarchy. In the example above, the mapping of objects from the corporation domain into the domain of list structure was one-to-one, i.e. the corporate objects were all represented by distinct pieces of list structure, and the relations and actions all mapped nicely into one another. Of course, this need not always be the case. Consider, for example, the state vector representation of the Blocks World proposed by McCarthy, in which the Supports relation between each pair of blocks is represented by a distinct bit in a bit vector. (Think of the vector as a matrix in which the (i,j)th bit is on if and only if block i is on block j.) In this representation the fact (Supports A B) would translate into something like (On Bit-AB Vector-1), and there would be no distinct representations of the blocks A and B.

In other cases, more complex objects may be necessary in order to provide enough relations. When a domain does not provide an adequate set of relations, it's a good idea to synthesize complex structures from simpler ones. For example, a simulation structure for an abstraction with three binary relations could be built in the list world by representing objects as pairs of "cons" cells in which the "car" represents the first relation, the "cdr" points to the second cell, the "cadr" represents the second relation, and the "cddr" represents the third. This approach is facilitated by programming languages with extensible data structures.

Obviously, it pays to economize by using the predefined relations of the simulation domain where possible. For example, a good representation for a univariate polynomial is a list of its coefficients, and one gets the degree of the polynomial for free (the length of the list minus 1). One advantage is representational economy; another is automatic enforcement of the abstraction's axioms, as described in the last section.

In order to use a simulation structure, it may be necessary to transform objects into a canonical form. For example, one can represent a univariate polynomial as a list of coefficients, but the polynomial must be in expanded form. There is a large body of literature on canonical forms for specific algebraic structures, while [Genesereth 79] gives a general but weak technique for inventing such forms directly from an abstraction's axioms.
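For instance, here is a minimal sketch of the coefficient-list representation (with the constant term first, an ordering of our own choosing):

    ;; A univariate polynomial as its list of coefficients, constant term
    ;; first: 3 + 2x + x^2  <->  (3 2 1).  The degree comes free as the
    ;; length of the list minus 1, and the representation forces the
    ;; polynomial to be in expanded (canonical) form.
    (defun degree (poly) (1- (length poly)))

    (defun eval-poly (poly x)          ; Horner's rule over the list
      (reduce (lambda (coeff acc) (+ coeff (* x acc)))
              poly :from-end t :initial-value 0))

    (defun add-poly (p q)              ; pad the shorter with zeros
      (let ((n (max (length p) (length q))))
        (loop for i below n
              collect (+ (if (< i (length p)) (nth i p) 0)
                         (if (< i (length q)) (nth i q) 0)))))

    ;; (degree '(3 2 1))       => 2
    ;; (eval-poly '(3 2 1) 2)  => 11    ; 3 + 2*2 + 1*4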
In using a simulation structure, there is a tradeoff between the amount of work done by the model and the amount done by the knowledge representation system. For example, in the simulation structure of figure 2, one must loop over the parent relation to determine whether two objects are related. This can be done either by the knowledge representation system or by a LISP procedure in the simulation structure. Obviously, it's a good idea to have the simulation structure do as much work as possible.

3.3 Interfacing Simulation Structures

For most interesting task domains, the chances are that a single simulation structure is not sufficient. In such cases, it is sometimes possible to piece together several different simulation structures. The simplest situation arises when the objects of the task domain form a hierarchy under the "part" relation. Then one can choose one representation for the "topmost" objects and a different representation for the parts. The spirit of this approach is very similar to that of "object-oriented" programming in which each object retains information about how it is to be processed. One disadvantage of this approach is that each object must have explicit "type" information stored with it. Barton, Genesereth, Moses, and Zippel have recently developed a scheme that eliminates this need by separating the processing information from each object and passing it around in a separate "tree" of operations. ANALOG uses this scheme for encoding the operations associated with each simulation structure.

Task domains with several relations are sometimes decomposable into several abstractions, and these relations can then be represented independently. More often the relations are interdependent; and, when this is the case, the interdependence must be dealt with in the uniform representation.

Even when a single abstraction would fit the task domain, it may be advisable to use several. Consider, for example, a partial order that nicely decomposes into two trees. Furthermore, there are often advantages to multiple representations of objects, as argued by Moses [Moses].

4. Thinking With Abstractions and Simulation Structures

The use of specialized simulation structures gives ANALOG an economy and efficiency not possible with a uniform representation. The economy can be expressed in terms of the space saved by representing assertions in the simulation structure rather than the uniform representation. This economy derives from the elimination of the overhead inherent in uniform formalisms and the use of relations implicit in the simulation structure (as the length of a list reflects the degree of the polynomial it represents). The efficiency refers to the time involved in doing deductions and solving problems. This efficiency may be attributable to clever algorithms, or it may be the result of long familiarity with the domain from which the abstraction evolved (due to the memory of many special case heuristics). Lenat [Lenat et. al.] discusses how a computer might improve its own performance by self-monitoring.

An interesting possibility suggested by this economy and efficiency is for the program to use these criteria in evaluating plausible hypotheses about a new domain.
In the face of incomplete or contradictory data, the program should favor the more economical abstraction. Clearly, there is some evidence for this sort of behavior in human cognition. Consider, for example, Mendeleev's invention of the periodic table of the elements. He was convinced of the correctness of the format in spite of contradictory data, for reasons that can only be identified as simplicity.

These criteria of economy and efficiency are also of use in characterizing why it is easier to solve problems from one point of view than another, e.g. proving a theorem using automata theory rather than formal grammars. Part of what makes causal arguments (see [deKleer] for example) so compelling is that they are easy to compute with. The reason for this is that a causal argument is an instance of a cognitively efficient abstraction, namely a directed graph. One is tempted, therefore, to generalize deKleer's notion of causal envisionment as finding economical and efficient abstractions (perhaps identified with analogies) in which the desired conclusions are reached via simple computations.

The idea can be carried a bit further and generalized to include the criterion of problem solving power. In particular, one should favor an abstraction for its ability to solve a pending problem despite insufficient data. The obvious difficulty is that the assumption may be wrong or there may be several abstractions that are equally probable and useful. Consider, for example, the following arguments for determining the distance between the observer and the middle vertex of a Necker cube. "Well, the lines form a cube, and so the middle vertex must be closer to me than the top edge." "No, not at all, the figure is concave, and so the middle vertex must be further away." Both arguments are consistent with the data and refer to a single abstraction, and in each case the conclusion is deductively related to that view. A second example is evident in the particulate-wave controversy. The particulate view is a simple abstraction that accounts for much of the data and allows one to solve outstanding problems. Of course, the same can be said for the wave view. Unfortunately, the predictions don't agree. A similar argument explains the inferential leap a child makes in declaring that the wind is caused by the trees waving their leaves. When the child waves his hand, it makes a breeze; the trees wave when the wind blows; so they must have volition and motive power; and that would account for the wind.

The reasoning in these examples is usually termed "analogical". The key is the recognition of a known abstraction common to the situations being compared. This conception of analogy differs markedly from that of Hayes-Roth and Winston.
In their view two situations are analogous if there is any match between the two that satisfies the facts of both worlds. If the match is good, the facts or heuristics of one world may be transferred to the other. The problem is that these facts may have nothing to do with the analogy. Just because two balls are big and plastic, one can't infer because one ball is red that the other is also red. Abstractions are ways of capturing the necessary interdependence of facts. For example, the size and material of a ball do affect its mechanical behavior, and so the skills useful for bouncing one should be of value in bouncing the other. Also note that the match need not be close in order for there to be a useful analogy. Linnaean taxonomy and organization charts have few superficial details in common, but the analogy is nonetheless compelling, and as a result the algorithms for reasoning about one can be transferred to the other. The work of Hayes-Roth and Winston is, however, applicable where no abstractions exist yet. Their matching algorithms and the techniques of Buchanan, Mitchell, Dietterich and Michalski, and Lenat should be important in inducing new abstractions.

An important consumer for these ideas is the field of computer-aided instruction. There is a current surge of interest in producing a "generative theory of cognitive bugs" (see [Brown], [Genesereth 80a], and [Matz]). The use of abstractions and the criteria of economy, efficiency, and power in theory formation is very seductive in this regard. Unfortunately, there is no reason to believe that the hardware of a vonNeumann computer in any way resembles the specialized capabilities of the human brain. (Indeed, psychologists are still debating whether there are any analogical processes in the brain at all. See, for example, [Kosslyn & Pomerantz], [Kosslyn & Schwartz], [Pylyshyn], and [Shepard & Metzler].) Thus, the idea at present is not so much a model for human cognitive behavior as a metaphor.

5. Conclusion

The ANALOG system was developed over a period of time to test the ideas presented here. One program accepts an analogy and infers the appropriate abstraction; another builds a model of the task domain as assertions are entered; and a third uses the model to answer questions. There is a sketchy implementation of the simulation structure designer, but no effort has been made to build the theory formation program.

In summary, the key ideas are (1) the role of abstractions in understanding metaphors and selecting good models for task domains, (2) the use of models to acquire economy, efficiency, and problem solving power, and (3) the importance of these criteria in theory formation.
Abstractions and simulation structures make for a knowledge representation discipline that facilitates the construction of powerful, efficient AI programs. The approach suggests a program for much future work in AI and Computer Science, viz. the identification of useful abstractions and the implementation of corresponding simulation structures that take advantage of the special computational characteristics of the vonNeumann machine and its successors.

Acknowledgements

The content of this paper was substantially influenced by the author's discussions with Bruce Buchanan, Rick Hayes-Roth, Doug Lenat, Earl Sacerdoti, and Mark Stefik, though they may no longer recognize the ideas. Jim Bennett, Paul Cohen, Russ Greiner, and Dave Smith read early drafts and made significant suggestions to improve the presentation. The work was supported in part by grants from ARPA, NLM, and ONR.

References

Balzer, R. Automatic Programming, Institute Technical Memo, University of Southern California / Information Sciences Institute, 1973.

Barstow, D. R. Knowledge Based Program Construction, Elsevier North-Holland, 1979.

Brown, J. S. & vanLehn, K. forthcoming paper on learning.

Buchanan, B. & Feigenbaum, E. A. Dendral and Meta-Dendral: Their Applications Dimension, Artificial Intelligence, Vol. 11, 1978, pp 5-24.

deKleer, J. The Origin and Resolution of Ambiguities in Causal Arguments, Proc. of the Sixth International Joint Conference on Artificial Intelligence, 1979, pp 197-203.

Hayes-Roth, F. & McDermott, J. An Interference Matching Technique for Inducing Abstractions, Comm. of the ACM, Vol. 21 No. 5, May 1978, pp 401-411.

Genesereth, M. R. A Fast Inference Algorithm for Semantic Networks, Memo 5, Mass. Inst. of Tech. Mathlab Group, 1976.

Genesereth, M. R. The Canonicality of Rule Systems, Proc. of the 1979 Symposium on Symbolic and Algebraic Manipulation, Springer Verlag, 1979.

Genesereth, M. R. The Role of Plans in Intelligent Teaching Systems, in Intelligent Teaching Systems, D. Sleeman, ed. 1980.

Genesereth, M. R. & Lenat, D. B. Self-Description and Self-Modification in a Knowledge Representation System, HPP-80-10, Stanford University Computer Science Dept., 1980.

Green, C. C. The Design of the PSI Program Synthesis System, Proc. of the Second International Conference on Software Engineering, Oct. 1976, pp 4-18.

Greiner, R. D. & Lenat, D. B. A Representation Language Language, submitted for inclusion in the Proc. of the First Conference of the American Association for Artificial Intelligence, Aug. 1979.

Kant, E. Efficiency Considerations in Program Synthesis: A Knowledge Based Approach, doctoral dissertation, Stanford Univ. Computer Science Dept., 1979.

Kohler, W. Gestalt Psychology: An Introduction to New Concepts in Modern Psychology, Liveright, 1947.

Kosslyn, S. M. & Pomerantz, J. R. Imagery, Propositions, and the Form of Internal Representations, Cognitive Psychology, Vol. 9 No. 1, 1977, pp 52-76.

Kosslyn, S. M. & Schwartz, S. P.
A Simulation of Visual Imagery, Cognitive Science, Vol. 1, 1977, pp 265-295.

Kuhn, T. The Structure of Scientific Revolutions, Univ. of Chicago Press, 1962.

Lenat, D. B. Automated Theory Formation in Mathematics, Proc. of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp 833-842.

Lenat, D. B., Hayes-Roth, F., Klahr, P. Cognitive Economy in Artificial Intelligence Systems, Proc. of the Sixth International Joint Conference on Artificial Intelligence, 1979, pp 531-536.

Low, J. R. Automatic Coding: Choice of Data Structures, ISR 15, Birkhauser Verlag, 1976.

Matz, M. A Generative Theory of High School Algebra Errors, in Intelligent Teaching Systems, D. Sleeman, ed. 1980.

McCarthy, J. Finite State Search Problems, unpublished paper.

Minsky, M. A Framework for Representing Knowledge, in The Psychology of Computer Vision, P. H. Winston, ed., McGraw-Hill, 1975.

Mitchell, T. Version Spaces: A Candidate Elimination Approach to Rule Learning, Proc. of the Fifth International Joint Conference on Artificial Intelligence, 1977.

Moore, J. & Newell, A. How can MERLIN Understand, in L. W. Gregg, ed., Knowledge and Cognition, Lawrence Erlbaum, 1974.

Moses, J. Algebraic Simplification: A Guide for the Perplexed, Comm. of the ACM, Vol. 14 No. 8, 1971, pp 527-537.

Pylyshyn, Z. W. What the Mind's Eye Tells the Mind's Brain: A Critique of Mental Imagery, Psychological Bulletin, Vol. 80, 1973, pp 1-24.

Schank, R. & Abelson, R. Scripts, Plans, and Knowledge, Proc. of the Fourth International Joint Conference on Artificial Intelligence, 1975, pp 151-157.

Shepard, R. N. & Metzler, J. Mental Rotation of Three-dimensional Objects, Science, Vol. 171, 1971, pp 701-703.

Thorndyke, P. W. & Hayes-Roth, B. The Use of Schemata in the Acquisition and Transfer of Knowledge, Cognitive Psychology, Vol. 11, 1979, pp 82-106.

Weyhrauch, R. Prolegomena to a Theory of Formal Reasoning, STAN-CS-78-687, Stanford Univ. Computer Science Dept., Dec. 1978.

Winston, P. H. Understanding Analogies, Mass. Inst. of Tech. Artificial Intelligence Laboratory, Apr. 1979.

Dietterich, T. G. & Michalski, R. S. Learning and Generalization of Characteristic Descriptions: Evaluation Criteria and Comparative Review of Selected Methods, Proc. of the Sixth International Joint Conference on Artificial Intelligence, 1979, pp 223-231.
EVERYTHING YOU ALWAYS WANTED TO KNOW ABOUT AUTHORITY STRUCTURES BUT WERE UNABLE TO REPRESENT

James R. Meehan
Dept. of Information and Computer Science
University of California
Irvine CA 92717

If we're ever to get programs to reason intelligently about activities that are governed by laws, we'll need a clear understanding of such concepts as obligations, politics, disputes, and enforcement. Our work on this topic started when we tried to build a program that could understand some legal correspondence from 17th-century England, and its current application is in a program that simulates social interaction among small groups such as families and schools. While the long-range applications are in natural language processing, this work is primarily concerned with fundamental issues of representation.

We first attempted to adapt the work by Schank and Carbonell [1,2] but later chose alternative definitions and representations, because we were looking at a wider variety of legal systems, where legal authority, for example, did not always imply the power of enforcement, and while they focused on the resolution of disputes, we were more interested in how decisions are influenced.

Authority structures can be found everywhere, in law courts, AI conferences, bridge games, friendships, and even restaurants. The language is used metaphorically even in such domains as everyday physics.

We define an authority structure in terms of a group of people who participate in some set of activities. There are many structures one could impose on a group for purposes of analysis, such as the sequencing of events, or the plans and goals that motivate the participants, and different contexts will provide different answers to the same question. For example, if we ask, "Why did John order a hamburger?" we might answer with "Because he was hungry" or "So that the waitress would tell the cook" or "In order to initiate a contract with the restaurant" depending on the context.

An authority structure, then, is associated with a group, to pick a neutral term. A group has a set of participants, connected by a social net which specifies the attitudes they have about their acquaintances in the group. Every group has a set of normal procedures or activities, in which the participants take certain roles. For our present purposes, it doesn't matter whether those activities are highly predictable, goal-driven, or structured in any particular way. Some of those acts change the social net ("social" acts); others involve the exchange of goods and services ("economic" acts); and others are acts of authority (to be defined shortly).

An individual belongs to many groups at the same time.
In fact, a pair of individuals may belong to two groups and relate to each other in different ways (e.g., role conflict).

Any group-associated act may have a legal status, an indication of its conformance with the laws. We define 6 types of legality:

1. An act (in the past, present, or future) is explicitly legal, requiring no permission. [Example: free speech.]

2. An act is legal only if permission has been given. [You need a license to practice medicine.]

3. An act is legal only if it is commanded. [A six-year old child taking medicine.]

4. An act is legal only if you are acting as someone's agent (for whom the act may or may not be legal). [A judge authorizes a police officer to search a house, even though he is not allowed to search it himself.]

5. An act is legally required. [In bridge, you must follow suit if you can.]

6. An act is explicitly forbidden. [You may not submit more than one paper to this conference.]
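A minimal sketch of how these six types might be encoded and consulted follows (our own rendering in Lisp; the alist of rules and the lists recording permissions, commands, and agencies are hypothetical stand-ins, not the authors' program):

    ;; The six legality types as data, with a sketch of the status check.
    (defvar *group-rules*
      '((speak-freely      . :free)           ; type 1: explicitly legal
        (practice-medicine . :by-permission)  ; type 2: needs permission
        (take-medicine     . :by-command)     ; type 3: must be commanded
        (search-house      . :as-agent)       ; type 4: as someone's agent
        (follow-suit       . :required)       ; type 5: legally required
        (double-submit     . :forbidden)))    ; type 6: explicitly forbidden

    (defun legal-p (actor act &key permissions commands agencies)
      "Is ACT legal for ACTOR?  The keyword arguments are lists of
    (actor . act) pairs recording the group's current state."
      (case (cdr (assoc act *group-rules*))
        (:free          t)
        (:by-permission (and (member (cons actor act) permissions
                                     :test #'equal) t))
        (:by-command    (and (member (cons actor act) commands
                                     :test #'equal) t))
        (:as-agent      (and (member (cons actor act) agencies
                                     :test #'equal) t))
        (:required      t)    ; legal; indeed, omitting it would be illegal
        (:forbidden     nil)
        (t              nil))) ; no local rule: see embedding, below

    ;; (legal-p 'dr-jones 'practice-medicine
    ;;          :permissions '((dr-jones . practice-medicine)))  => T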
Of the many possible states, some are of concern to the law, such as ownership, legal responsibility, and right of claim (debt or injury). Disputes are questions either about the legal status of some act [Am I required to file a tax return by April 15th if I'm getting a refund?] or about the truth value of some law-related state [The defendant pleaded not guilty].

The acts of authority ("primitives") are:

1. to define or decide the legal status of an act, according to the six types listed above, such as commanding or obliging someone to do something [Clean up your room]

2. to enforce a decision [Papers received after the program committee meets will be rejected automatically]

3. to create and revise the rules themselves [The voters approved the ERA]

4. to resolve disputes [The jury acquitted the defendant]

5. to change the position of someone in the group [You're hired]

Breaking the rules may entail punishment; non-compliance may entail enforcement; but then again, maybe not. In Tudor England, if a British shipping merchant were unable to obtain payment for goods he had delivered to a Dutch merchant, he could ask the Court of the Admiralty for a "Letter of Reprisal," a document that permitted him to seize any Dutch ship and to share in the worth of the ship and its cargo. Two aspects are important here. First, the Court in no way enforced its decision; it was not the British Navy that went out to capture a Dutch ship, but the merchant himself (or more likely, his agents). Second, he was permitted to commit what would otherwise be a highly illegal act.

Authority structures are often embedded in one another, and if a group has no rules governing certain actions, then it may inherit the rules from an embedding structure. For example, a contract is an embedded system specifying little more than mutual obligations. The embedding system takes care of the rest. Embedding is not universal, however. You can't sue your friends if they fail to show up for an invited dinner.

In general, your power within a group is measured by your ability to "make things happen," to cause social, economic, legal, or other kinds of actions. You incur a debt when someone acts to increase your power, and it is expected that you will reply, though not necessarily in kind. A bribe, for example, is an exchange of economic power (money) for legal power (position of authority).

Many activities call for decisions to be made. Some of these decisions are based solely on evidence and are simply evaluations, but in other cases, real choices must be made. A crucial part of the description of any participant in a group is the ability he has to influence decisions by whatever means, and we treat this as a special kind of power, categorized by the kind of decision being influenced. Politics is defined as the influencing of authority-related decisions, i.e., whether (or in what manner) to perform one of the acts of authority, such as revising the laws or admitting a new member to the group. Attempts to influence economic decisions range from friendly advice ["Try the cheeseburger"] to hard-sell advertising techniques. Finally, you might attempt to influence a decision about a social act, such as arranging a blind date for someone.

In some groups, the rules cannot be changed, in which case attempts to influence decisions are absurd. [I'll let you capture my rook if you let me ignore the fact that I'm in check for the next few moves.] On the other hand, in a highly reticulated authority structure, where the rules can all be changed, politics are likely. The simplest political acts are those that attempt to change the rules by following the appropriate procedures [working within the establishment]. Another method is to behave as if the rules had changed in the way you seek [defiance], which may mean that you've committed an illegal act. You might decide not to fill a role in a group whose rules you disagree with [Boycott Coors and Carl's Jr.], and you can also exploit the embedding of inconsistent authority structures [sit-ins at lunch counters].
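The embedding of authority structures suggests a simple computational reading: a rule lookup that climbs the embedding chain when a group is silent on some act. A minimal sketch, with all group and rule names invented for illustration:

    ;; Groups embedded in one another inherit rules they do not
    ;; themselves define: look up a rule locally, then climb the
    ;; embedding chain.
    (defstruct group
      name
      rules        ; alist of (act-type . legality-type)
      embedded-in) ; enclosing group, or NIL

    (defun rule-for (act-type group)
      (cond ((null group) nil)                      ; no rule anywhere
            ((assoc act-type (group-rules group)))  ; local rule wins
            (t (rule-for act-type (group-embedded-in group)))))

    ;; A contract specifies little more than mutual obligations; the
    ;; embedding legal system supplies everything else.
    (defvar *law*      (make-group :name 'legal-system
                                   :rules '((breach . forbidden))))
    (defvar *contract* (make-group :name 'contract
                                   :rules '((deliver-goods . required))
                                   :embedded-in *law*))

    ;; (rule-for 'deliver-goods *contract*)  => (DELIVER-GOODS . REQUIRED)
    ;; (rule-for 'breach *contract*)         => (BREACH . FORBIDDEN)

Note that the dinner-invitation example above is exactly a case where the embedded-in link should be absent: embedding is a modeling choice, not a universal.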
APPLICATIONS

Perhaps the clearest applications of this work would be in a natural language understanding program, and would be visible in at least three places. First, acts would be categorized by their legal status. Some of these would be explicit [Johnny is allowed to stay up until 10 o'clock], but most are quantified to some degree [All personnel actions require the manager's approval]. We would have a more accurate meaning representation than we now do for examples such as "Mommy, can I go to the movies?" For kids (i.e., in the family-group), going to the movies might be directly labeled as requiring permission, but we might also be able to infer that from knowing that it involves leaving home and spending money, both of which require permission.

Second, understanding an authority structure would enable a program to make the inferences needed to connect and explain events in a story. Example: "Mary forgot to renew a book from the library. They sent her a bill." Without some understanding of the library rules, the second sentence is difficult to explain.

Third, we can use authority structures to make predictions about people's actions. If Mary orders her son Johnny to go to bed, we can make a set of reasonable predictions about what he might do and how Mary will respond. If Johnny's little sister orders Johnny to go to bed, the predictions are quite different since the authority relationship is obviously different. If Sue loans Tom some money, thus increasing his economic power, we can infer a state of indebtedness and expect him to repay her in some way. If a student slips a $20 bill in an examination book, he probably intends to induce a state of indebtedness on the part of his professor.

CONCLUSIONS

Our goal here has been to organize general information about authority structures, providing a common framework in which we can describe specific cultural instances and define the relevant inferences. We envision using authority structures as a necessary part of the representation for groups of people whose behavior pattern is shared.

ACKNOWLEDGMENTS

My thanks to Lamar Hill, Gene Fisher, Bob Bechtel, Dave Keirsey, and Steve Heiss for their constructive (and lively) discussions on this topic.

REFERENCES

[1] Jaime G. Carbonell. Subjective Understanding: Computer Models of Belief Systems, PhD dissertation, Yale University, 1979. Research Report 150, Yale Computer Science Department.

[2] Roger C. Schank and Jaime G. Carbonell. Re: The Gettysburg Address: Representing social and political acts. Research Report 127, Yale Computer Science Department, January 1978.
REAL TIME CAUSAL MONITORS FOR COMPLEX PHYSICAL SITES*

Chuck Rieger and Craig Stanfill
Department of Computer Science
University of Maryland
College Park, MD 20742

* The research described here is funded by NASA. Their support is gratefully acknowledged.

ABSTRACT

Some general and specific ideas are advanced about the design and implementation of causal monitoring systems for complex sites such as a power plant or a NASA mission control room. Such "causal monitors" are interesting from both theoretical and engineering viewpoints, and could greatly improve existing man-machine interfaces to complex systems.

INTRODUCTION

Human understanding of a large, complex physical facility, such as a NASA mission control room or a nuclear power plant, is often a haphazard affair. Traditionally, once a site is built, knowledge about it resides in technical manuals (which tend to sit on the shelf) and in the minds of the experts. As technical staff turnover occurs over an extended period of time, this knowledge tends to get misplaced, rearranged, or altogether forgotten, at least as day-to-day working knowledge. The result is usually that no single individual fully appreciates or understands the system as a whole; the operators learn simply to cope with it to the extent required for daily operations and maintenance. Of course, this means that when an emergency occurs, or when there is a need to perform some unusual maneuver, the required human expertise is often hard to locate. Manuals on the shelf are virtually worthless in most contexts, especially those in which time is critical. Even if the expert is on the site, it may take him too long to perform the synthesis of a large set of parameters necessary to perceive the context of an emergency and to diagnose the problem.

Given this state of high technology, which produces systems too deeply or broadly complex for an individual or reasonably sized group of individuals to comprehend, there is a clear need for "intelligent" secondary systems for monitoring the primary systems. To develop such intelligent systems, we need (1) flexible representations for physical causality, (2) good human interfaces that accept descriptions of the physical world and encode these descriptions in these representations, (3) comprehension processes that are capable of relating the myriad states of the site to the causal description, then passing along only relevant and important information to the human operators, and (4) efficient symbolic computation environments to support items 1-3 in real time.
This paper briefly describes the CAM (Causal Monitor) Project, whose goal is to develop a framework for the construction of intelligent, causally-based real-time monitors for arbitrary physical sites. The project is under funding by NASA Goddard Space Flight Center, and will result in a prototype causal monitor generator system. Our aim here is to advance some specific ideas about the conceptual architecture of such a system. Background ideas on the concept of causal modeling for man-made devices can be found in [6].

GOALS

An ideal causal monitoring system does not replace humans, but rather broadens and enhances their abilities, especially their ability to collect and synthesize relevant information in time critical situations. The ideal system would appear to the human controllers as a small number of color CRT displays and a keyboard. The system would

1. continually sense and symbolically characterize all sensors in a way that reflects their relative importance and acceptable ranges in the current operating context

2. continually verify that causally related sensor groups obey symbolic rules expressing the nature of their causal relationship

3. be aware of the human operator's expressed intent (e.g., "running preventative maintenance check 32", "executing morning power-up") and adjust causal relations and expectations and lower-level parametric tolerances accordingly

4. have a knowledge of component and sensor failure modes and limit violations, their detectable precursors, their probable indicators, and their corrective procedures (both in the form of recommendations to the human and automatic correction algorithms)

5. have a knowledge of standard "maneuvers", expressed as action sequences with stepwise confirmable consequences (for both automatic execution and as an archival reference for the human)

6. continually synthesize all important aspects of the system state and, from this synthesis, identify which aspects of the system to display to the human controllers (in the absence of specific requests from the humans)

7. decide on the most appropriate screen allocation and display technique for each piece of information in the current context

Most procedural knowledge about the primary site would be on-line, and in a form meaningful to both the human and the computer. The system would play the role of the intelligent watcher, continually monitoring hundreds of sensors and relating them to the causal model for the current operating context. The system would summarize the site's state via well-managed CRT displays, and would be capable not only of detecting irregularities and suggesting probable causes, consequences and corrective measures, but also of automatically carrying out both routine and emergency corrective measures when given the go-ahead.
ARCHITECTURE OF THE CAM GENERATOR SYSTEM

In the CAM Project we are concerned not with modeling a specific site, but rather with developing a general-purpose model that can be imported to any site. Once there, it will interact with the site experts as they define the site's causal structure to the system. From these interactions, via a frame-driven acquisition phase, the specific site knowledge of the CAM is generated.

The CAM generator system consists of several pieces:

1. A collection of frame-like knowledge units that collectively incorporate virtually any physical site: Actuator, ActionStep, Component, FailureMode, DisplayPacket, Maneuver, OperatingContext, OperatorIntent. Others will emerge as the project progresses.

2. A collection of procedures for interacting with site engineers and scientists, who will interactively describe the site by filling in frames. The interface is semi-active, eliciting information in a methodical way from the site engineers. The information compiled by the frame-driven interface results in a collection of data objects and production rule-like knowledge about their interrelationships.

3. A collection of display primitives and display handling techniques.

4. A collection of primitive sensor-reading and actuator-driving schemata (i.e., code schemata).

5. A system for compiling the production rule description of the site onto an efficient lattice-like data structure that obviates most run-time pattern matching.

6. A run-time maintenance system for coordinating the real-time operation of the symbolic causal monitor.

The frames and the nature of the knowledge acquisition interface are described in [3]. Since the real-time efficiency of the generated system is a critical issue, we devote the remainder of the discussion here to the architecture of the real-time monitor system, which we have termed the Propagation Driven Machine (PDM).

PROPAGATION DRIVEN MACHINES

CAM will require an extremely high throughput from its runtime system. For this reason, we cannot tolerate much general pattern matching in the basic machine cycle. To eliminate the need for pattern matching, we base our machine on a form of dependency lattice similar in structure to those described in [1], [2], [4] and [5]. The essence of such a scheme is to represent antecedent-consequent relations explicitly, which is useful in both forward and backward directions.

The central data structure of the PDM is a graph whose nodes represent the current value of some parameter or proposition (e.g., "the current value of sensor 23 is X", "the pressure condition in chamber 19 is high"). The lowest nodes of the network are the primitive sensor readers and real time clocks whose spontaneous ticking sequences all propagation in the net.

Nodes are connected by Above and Below links which reflect computational dependencies among nodes. For example, if node N1 represents knowledge of the form (A and B and C) causes D, then in the PDM net N1 will be Above each of A, B, and C.

Some event in the model (e.g., operator start-up, or a computation at a node in the net) identifies a rule set, R, that deals with some aspect of the modeled environment, e.g., the rules that describe the causal relationships in the Pressure System. Rules in this set are then queued on Q, from which they will eventually all be popped and evaluated.

An important PDM concept is structural augmentation of the network as a natural byproduct of a node's evaluation: evaluation of a newly queued rule causes the rule to be structurally knit into the PDM network. During a node's evaluation, any expressions not already present in the net are created as new net nodes, linked to the referencing node via Above links, and placed on Q for expansion themselves. This "downward chaining" process structurally introduces the rule into the PDM net. Once installed by downward chaining, the rule's value begins to receive constant PDM evaluation via "upward chaining": changes in lower nodes' values in the net will propagate upward through this structure, causing higher nodes to be reevaluated. Nodes subject to reevaluation by this mechanism are queued on Q. By using the same queue both for structural integration of rules into the current context and for the propagation of values, the PDM scheme allows a graceful and dynamic context-shifting mechanism, and it obviates the need for most run-time pattern matching.

To illustrate, suppose some node's evaluation has called for a rule set for monitoring the degree of openness of Relief-Valve-14 (RV-14) to be swapped in. The PDM system will, say, then queue up and evaluate (among others) a rule, R1, for comparing the openness of RV-14 against the pressure of Pipe-P4. This leads to a knitting process: R1 locates or creates the nodes for RV-14 and PP-4, after which changes in their values can trigger a higher node, e.g., a display rule for communicating the comparison's results to the operators.

This dual-purpose use of the PDM queue (staking out net structure and performing computations in net nodes) makes the propagation of changes in net node values a dynamic system: new structure is always sprouting topologically upward, while other structure is atrophying and being garbage collected. The system is essentially a compiled production rule system, but one in which "compilation" is as dynamic as the propagation of values. Rule sets can come and go as naturally as values can be changed.
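The propagation cycle can be made concrete with a small sketch. The Python fragment below is a minimal illustration under our own assumptions: the class, the link fields, the queue discipline, and the RV-14 fragment are hypothetical stand-ins, not the CAM implementation.

from collections import deque

class PDMNode:
    """One PDM net node: a current value plus Above/Below dependency links."""
    def __init__(self, name, evaluate=None, permanent=False):
        self.name = name
        self.value = None
        self.evaluate = evaluate   # function of Below values -> new value
        self.above = []            # nodes whose values depend on this one
        self.below = []            # nodes this one depends on
        self.permanent = permanent # exempt from garbage collection

    def depends_on(self, *nodes):
        # Knit this node into the net, as downward chaining would.
        for n in nodes:
            self.below.append(n)
            n.above.append(self)

Q = deque()  # the single shared queue for knitting and propagation

def set_value(node, value):
    """A sensor reading or clock tick deposits a value and queues every
    node Above it for reevaluation (upward chaining)."""
    if value != node.value:
        node.value = value
        Q.extend(node.above)

def run(limit=1000):
    """Pop and reevaluate queued nodes, propagating changes upward."""
    while Q and limit > 0:
        limit -= 1
        node = Q.popleft()
        if node.evaluate is not None:
            new = node.evaluate([b.value for b in node.below])
            if new != node.value:
                node.value = new
                Q.extend(node.above)

# Hypothetical fragment of the RV-14 example: rule R1 compares valve
# openness against pipe pressure.
rv14 = PDMNode("RV-14-openness")
pp4 = PDMNode("PP-4-pressure")
r1 = PDMNode("R1-openness-vs-pressure",
             evaluate=lambda v: None not in v and v[0] < v[1])
r1.depends_on(rv14, pp4)

set_value(rv14, 0.3)
set_value(pp4, 0.9)
run()
print(r1.value)  # True: pressure exceeds openness; could trigger a display node

In this rendering, garbage collection would amount to discarding non-permanent nodes left with no Above links, mirroring the cutting of Below links described next.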
Semantically, the lowest nodes of the PDM network, and those that are present initially, are spontaneously ticking real-time clocks whose Above pointers go to sensor-reading nodes. Additionally, certain monitoring expressions are represented by nodes initially in the net. As the site's state changes, these "caretaker" rules (net nodes) will make reference to other rules not yet structurally part of the net; these references cause the referenced rules to be queued up for evaluation, and their evaluation draws them into the PDM net, where they begin to participate in the monitoring process. When the context shifts in a way that causes the caretaker node to be uninterested in the paged-in rule set, the caretaker node drops its reference to it, effectively cutting the Below link between itself and the top members of the rule set. Nodes with no Above pointers and which are not marked as "permanent" are subject to PDM garbage collection. Garbage collection is the inverse of the original downward chaining that knit in a rule set: it structurally removes the entire rule set by following Below pointers.

SUMMARY

A CAM generator system is a special-purpose system for knowledge acquisition about causality in large, complex physical sites. It requires (1) models of knowledge elicitation from the site experts, (2) frame-like models of the concepts common to physical sites and their causal topology, and (3) an efficient, yet flexible engineering solution to the problems of controlling a large symbolic knowledge base in real time. The production rule-like PDM model appears to be a realistic approach to real-time monitoring: it exhibits the breadth and thoroughness of a classical production rule system, but without the usual problems of pattern matching and database maintenance.

The area of data driven real-time causal models of complex physical sites appears to be a very fertile and largely unexplored area of AI research. Research in this area will help bring the following areas of AI closer together: frame-based systems, knowledge acquisition, causal modeling, and efficient implementation techniques for real-time symbol manipulation. The domain is manageable because we are modeling systems in which deep problem solving is not a key issue, and theoretically interesting because of its breadth-wise complexity.

REFERENCES

[1] London, P. E., Dependency Networks as a Representation for Modelling in General Problem Solvers, Department of Computer Science, University of Maryland, Technical Report TR-698, 1978.

[2] McDermott, D., Flexibility and Efficiency in a Computer Program for Designing Circuits, MIT AI Lab, AIM-402, 1977.

[3] Rieger, C. and Stanfill, C., A Causal Monitor Generator System, Forthcoming TR, Department of Computer Science, University of Maryland, 1980.

[4] Shortliffe, E. H., MYCIN: A Rule Based Computer Program for Advising Physicians Regarding Antimicrobial Therapy Selection, Memo AIM-251, Artificial Intelligence Laboratory, Stanford University, 1974.

[5] Sussman, G. J., Holloway, J., and Knight, T. F., Computer Aided Evolutionary Design for Digital Integrated Systems, MIT AI Lab, AIM-526, 1979.

[6] Rieger, C. and Grinberg, M., A System of Cause-Effect Representation and Simulation for Computer-Aided Design, in Artificial Intelligence and Pattern Recognition in Computer Aided Design, 1978.
| 1980 | 56 |
52 |
GENERATING RELEVANT EXPLANATIONS: NATURAL LANGUAGE RESPONSES TO QUESTIONS ABOUT DATABASE STRUCTURE*

Kathleen R. McKeown
Department of Computer and Information Science
The Moore School
University of Pennsylvania
Philadelphia, Pa. 19104

ABSTRACT

The research described here is aimed at unresolved problems in both natural language generation and natural language interfaces to database systems. How relevant information is selected and then organized for the generation of responses to questions about database structure is examined. Due to limited space, this paper reports on only one method of explanation, called "compare and contrast". In particular, it describes a specific constraint on relevancy and organization that can be used for this response type.

* This work was partially supported by NSF Grant MCS 79-08401 and an IBM Fellowship.

I INTRODUCTION

Following Thompson [14], the process of generating natural language may be divided into two interacting phases: (1) determining the content, force, and shape of what is to be said (the "strategic component") and (2) transforming that message from an internal representation into English (the "tactical component"). The decisions made in the strategic component are the focal point of the current work. These decisions are being investigated through the development of a system for answering questions about database structure that require some type of explanation or description. This work, therefore, has two goals: (1) providing a facility that is lacking in many natural language interfaces to database systems, and (2) exercising theories about the nature of natural language generation. The system has been designed and implementation is in its beginning stages [12].

The decisions that the strategic component of a natural language generator must make are of two different types: decisions of a semantic/pragmatic nature and decisions that are structural in nature. Given a question, the strategic component must select only that information relevant to its answer (semantic/pragmatic decisions). What is selected must then be organized appropriately (structural decisions). These two types of decisions are the issues this work addresses. Not covered in this paper are the syntactic issues and problems of lexical choice that a tactical component must address.

Structural issues are important since the generation of text and not simply the generation of single sentences is being considered. A number of organizational principles that can be used for structuring expository text have been identified [12]. These are termed compare and contrast, top-down description, illustration through example, definition, bottom-up description, and analogy. In this paper, discussion is limited to compare and contrast and its effect on the organization and selection processes.

II THE APPLICATION

Current database systems, including those enhanced by a natural language interface (e.g. [6]), are, in most cases, limited in their responses to providing lists or tables of objects in the database.* Thus, allowable questions are those which place restrictions upon a class of objects occurring in the database. To ask these kinds of questions, a user must already know what kind of information is stored in the database and must be aware of how that information is structured.

The system whose design I am describing will answer questions about the structure and organization of the database (i.e., meta-questions)**. The classes of meta-questions which will be accepted by the system include requests for definitions, requests for descriptions of information available in the database, questions about the differences between entity-classes, and questions about relations that hold between entities. Typical of such meta-questions are the following, taken from Malhotra [7]:

What kind of data do you have?
What do you know about unit cost?
What is the difference between material cost and production cost?
What is production cost?

* Note that in some systems, the list (especially in cases where it consists of only one object) may be embedded in a sentence, or a table may be introduced by a sentence which has been generated by the system (e.g. [4]).
** I am not addressing the problem of deciding whether the question is about structure or contents.

III KNOWLEDGE REPRESENTATION

In order for a system to answer meta-questions, it requires information beyond that normally encoded in a database schema. The knowledge base used in this system will be based on a logical database schema similar to that described by Mays [9]. It will be augmented by definitional information, specifying restrictions on class membership, and contingent information, specifying attribute-values which hold for all members of a single class. A generalization hierarchy, with mutual exclusion and exhaustion on sub-classes, will be used to provide further organization for the information. For more detail on the knowledge representation to be used, see [12].

IV SAMPLE QUESTIONS

Textual responses to meta-questions must be organized according to some principle in order to convey information appropriately. The compare and contrast principle is effective in answering questions that ask explicitly about the difference between entity-classes occurring in the database. (It is also effective in augmenting definitions, but this would require a paper in itself.) In this paper, the following two questions will be used to illustrate how the strategic component operates:

(1) What is the difference between a part-time and a full-time student?
(2) What is the difference between a raven and a writing desk?

V SELECTION OF RELEVANT INFORMATION

Questions about the difference between entities require an assumption on the part of the speaker that there is some similarity between the items in question. This similarity must be determined before the ways in which the entities differ can be pointed out.

Entities can be contrasted along several different dimensions, all of which will not necessarily be required in a single response. These include:

attributes
super-classes
subclasses
relations
related entities

For some entities, a comparison along the lines of one information type is more appropriate than along others. For example, comparing the attributes of part-time and full-time students (as in (A) below) can reasonably be part of an answer to question (1), but a comparison of the attributes of raven and writing desk yields a ludicrous answer to question (2) (see (B) below).

(A) A part-time student takes 2 or 3 courses/semester while a full-time student takes 3 or 4.
(B) A writing desk has 4 legs while a raven has only 2.

One factor influencing the type of information to be described is the "conceptual closeness" of the entities in question. The degree of closeness is indicated by the distance between the entity-classes in the knowledge base. Three features of the knowledge base are used in determining distance: the generalization hierarchy, database relationships, and definitional attributes. A test for closeness is made first via the generalization hierarchy and, if that fails, then via relationships and definitional attributes.

A successful generalization hierarchy test indicates the highest degree of closeness. Usually, this will apply to questions about two sub-types of a common class, as in:

What is the difference between production cost and material cost?
What is the difference between a part-time and a full-time student?

In the generalization hierarchy, distance is determined by two factors: (1) the path between the entity-classes in question and the nearest common super-class; and (2) the generality of the common super-class (the path between the common super-class and the root node of the hierarchy). The path is measured by considering its depth and breadth in the generalization hierarchy, as well as the reasons for the branches taken (provided by the definitional attributes). Entities are considered close in concept if path (1) is sufficiently short and path (2) sufficiently long. If the test succeeds, a discussion of the similarity in the hierarchical class structure of the entities, as well as a comparison of their distinguishing attributes, is appropriate.

Although the entities are not as close in concept if this test fails, some similarities may nevertheless exist between them (e.g., consider the difference between a graduate student and a teacher). A discussion of similarities may be based on relationships both participate in (e.g., teaching) or entities both are related to (e.g., courses). In other cases, similarities may be based on definitional attributes which hold for both entities. For both cases, a discussion of the similarities should be augmented by a description of the difference in hierarchical class structure.

Entities that satisfy none of these tests are very different in concept, and a discussion of the class structure which separates them is informative. For example, for question (2) above, indicating that ravens belong to the class of animate objects, while writing desks are inanimate, results in a better answer than a discussion of their attributes.
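The two-factor hierarchy test lends itself to a short procedural restatement. The sketch below is a minimal Python rendering; the toy hierarchy, the thresholds, and the function names are illustrative assumptions, and the measure described above additionally weighs branch reasons and definitional attributes, which this sketch omits.

# Hypothetical toy hierarchy: each class maps to its parent (root maps to None).
PARENT = {
    "entity": None, "animate": "entity", "inanimate": "entity",
    "person": "animate", "student": "person", "grad-student": "student",
    "part-time": "grad-student", "full-time": "grad-student",
    "bird": "animate", "raven": "bird",
    "furniture": "inanimate", "writing-desk": "furniture",
}

def path_to_root(cls):
    chain = []
    while cls is not None:
        chain.append(cls)
        cls = PARENT[cls]
    return chain

def conceptually_close(a, b, max_up=2, min_depth=2):
    """Close if (1) the path from each entity to the nearest common
    super-class is short, and (2) that super-class is itself deep in
    the hierarchy, i.e., not too general."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = next(c for c in pa if c in pb)          # nearest common super-class
    up_a, up_b = pa.index(common), pb.index(common)  # factor (1)
    depth = len(path_to_root(common)) - 1            # factor (2)
    return max(up_a, up_b) <= max_up and depth >= min_depth

print(conceptually_close("part-time", "full-time"))  # True: near siblings
print(conceptually_close("raven", "writing-desk"))   # False: only the root shared

With these assumed thresholds, part-time and full-time students pass the test via their shared graduate-student super-class, while raven and writing desk fail because their only common ancestor is the root of the hierarchy.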
VI TEXT ORGANIZATION

There are several ways in which a text can be organized to achieve the compare and contrast orientation. One approach is to describe similarities between the entities in question, followed by differences. Alternatively, the response can be organized around the entities themselves; a discussion of the characterizing attributes of one entity may be followed by a discussion of the second. Finally, although the question may ask about the difference between entities, it may be impossible to compare them on any basis, and the compare and contrast principle must be rejected.

The determination of the specific text outline is made by the structural processor of the strategic component. On the basis of the input question, the structural processor selects the organizing principle to be used (for the two sample questions, compare and contrast is selected). Then, on the basis of information available in the knowledge base, the decision is reevaluated and a commitment made to one of the outlines described above. Because of this reliance on semantic information to resolve structural problems, a high degree of interaction must exist between the structural processor and the processor which addresses semantic and pragmatic issues.

One type of semantic information which the structural processor uses in selecting an outline is, again, the distance between entity-classes in the knowledge base. For entities relatively close in concept, like the part-time and the full-time student, the text is organized by first presenting similarities and then differences. By first describing similarities, the response confirms the questioner's initial assumption that the entities are similar and provides the basis for contrasting them. Two entities which are very different in concept can be described by presenting first a discussion of one, followed by a discussion of the other. Entities which cannot be described using the compare and contrast organization are those which have very little or no differences. For example, if one entity is a sub-concept of another, the two are essentially identical, and the compare and contrast organizing principle must be rejected and a new one selected. (A procedural sketch of this outline choice appears after the references below.)

VII STRATEGIC PROCESSING

Although dialogue facilities between the structural processor (STR) of the strategic component and the semantic/pragmatic processor (S&P) have not yet been implemented, the following hypothetical dialogue gives an idea of the intended result.

Question (1): What is the difference between a part-time and a full-time student?

STR: notes form of query and selects COMPARE AND CONTRAST
S&P: queries knowledge base:
    DISTANCE(part-time, full-time) --> very close (same immediate super-classes)
STR: retains COMPARE AND CONTRAST
    selects outline:
        SIMILARITIES
        DIFFERENCES: ATTRIBUTE-TYPE1 ... ATTRIBUTE-TYPEn
        CONSEQUENCES*
S&P: queries knowledge base and fills in outline:
    SIMILARITIES
        super-classes(part-time, full-time) --> graduate student
        attribute/value(part-time, full-time) --> degree-sought = MS or PhD
    DIFFERENCES
        attribute/value(part-time, full-time)
            -> courses-required = part-time: 1 or 2/semester; full-time: 3 or 4/semester
            -> source-of-income = part-time: full-time job; full-time: unknown
    CONSEQUENCES
        none
STR: further organizational tasks, not described here, include determining paragraph breaks (see [12]). Here there is 1 paragraph.

The tactical component, with additional information from the strategic component, might translate this into:

    Both are graduate students going for a masters or PhD. A full-time student, however, takes 3 or 4 courses per semester, while a part-time student takes only 1 or 2 in addition to holding a full-time job.

After engaging in similar dialogue for question (2), the strategic component might produce outline (C) below, which the tactical component could translate as (D):

(C) RAVEN FACTS:
        super-classes(raven) = raven E bird E animate object
    WRITING DESK FACTS:
        super-classes(writing desk) = writing desk E furniture E inanimate object
    CONSEQUENCES:
        bird and furniture incompatible
        2 different objects

(D) A raven is a bird and birds belong to the class of animate objects. A writing desk is a piece of furniture and furniture belongs to the class of inanimate objects. A bird can't be a piece of furniture and a piece of furniture can't be a bird since one is animate and the other isn't. A raven and a writing desk, therefore, are 2 very different things.

* CONSEQUENCES here involve only minimal inferences that can be made about the class structure.

VIII RELATED RESEARCH IN GENERATION

Those working on generation have concentrated on the syntactic and lexical choice problems that are associated with the tactical component (for example, [10], [3], [13], [11]). Research on planning and generation ([1], [2]) comes closer to the problems I am addressing, although it does not address the problem of relevancy and high-level text organization. Mann and Moore [8] deal with text organization for one particular domain in their generation system, but avoid the issue of relevancy. The selection of relevant information has been discussed by Hobbs and Robinson [5], who are interested in appropriate definitions.

IX CONCLUSIONS

The effects of a specific metric, the "conceptual closeness" of the items being compared, were shown on the organization and selection of relevant information for meta-question response generation. Other factors which influence the response, but were not discussed here, include information about the user's knowledge and the preceding discourse. Further research will attempt to identify specific constraints from these two sources which shape the response.

The research described here differs from previous work in generation in the following ways:

1. Previous work has concentrated on the problems in the tactical component of a generator. This work focusses on the strategic component: selecting and organizing relevant information for appropriate explanation.

2. While previous work has dealt, for the most part, with the generation of single sentences, here the emphasis is on the generation of multi-sentence strings.

When implemented, the application for generation will provide a facility for answering questions which the user of a database system has been shown to have about the structure of the database. In the process of describing or explaining structural properties of the database, theories about the nature of text structure and generation can be tested.

ACKNOWLEDGEMENTS

I would like to thank Dr. Aravind K. Joshi, Dr. Bonnie Webber, and Eric Mays for their detailed comments on the content and style of this paper.

REFERENCES

[1] Appelt, D. E., "Problem Solving Applied to Language Generation", in Proc. of the 18th Annual Meeting of the ACL, Philadelphia, Pa., 1980.

[2] Cohen, P. R., "On Knowing What to Say: Planning Speech Acts", Technical Report #118, University of Toronto, Toronto, Canada, 1978.

[3] Goldman, N., "Conceptual Generation", in R. C. Schank (ed.), Conceptual Information Processing, North-Holland Publishing Co., Amsterdam, 1975.

[4] Grishman, R., "Response Generation in Question-Answering Systems", in Proc. of the 17th Annual Meeting of the ACL, La Jolla, Ca., August 1979, pp. 99-101.

[5] Hobbs, J. R. and J. J. Robinson, "Why Ask?", Technical Note 169, SRI International, Menlo Park, Ca., October 1978.

[6] Kaplan, S. J., "Cooperative Responses From a Portable Natural Language Data Base Query System", Ph.D. Dissertation, Computer and Information Science Department, University of Pennsylvania, Philadelphia, Pa., 1979.

[7] Malhotra, A., "Design Criteria for a Knowledge-based English Language System for Management: an Experimental Analysis", MAC TR-146, MIT, Cambridge, Ma., 1975.

[8] Mann, W. C. and J. A. Moore, "Computer as Author - Results and Prospects", ISI/RR-79-82, ISI, Marina del Rey, Ca., 1980.

[9] Mays, E., "Correcting Misconceptions About Database Structure", in Proc. of the Conference of the CSCSI, Victoria, British Columbia, Canada, May 1980, pp. 123-128.

[10] McDonald, D., "Steps Toward a Psycholinguistic Model of Language Production", MIT AI Lab Working Paper 193, MIT, Cambridge, Ma., April 1979.

[11] McKeown, K. R., "Paraphrasing Using Given and New Information in a Question-Answer System", in Proc. of the 17th Annual Meeting of the ACL, La Jolla, Ca., August 1979, pp. 67-72.

[12] McKeown, K. R., "Generating Explanations and Descriptions: Applications to Questions about Database Structure", Technical Report # MS-CIS-80-9, University of Pennsylvania, Philadelphia, Pa., 1979.

[13] Simmons, R. and J. Slocum, "Generating English Discourse from Semantic Networks", CACM 15:10 (1972), pp. 891-905.

[14] Thompson, H., "Strategy and Tactics: A Model for Language Production", in Papers from the 13th Regional Meeting, Chicago Linguistic Society, 1977.
| 1980 | 57 |
53 |
THE SEMANTIC INTERPRETATION OF NOMINAL COMPOUNDS*

Timothy Wilking Finin
Coordinated Science Laboratory
University of Illinois
Urbana IL 61801

ABSTRACT

This paper briefly introduces an approach to the problem of building semantic interpretations of nominal compounds, i.e. sequences of two or more nouns related through modification. Examples of the kinds of nominal compounds dealt with are: "engine repairs", "aircraft flight arrival", "aluminum water pump", and "noun noun modification".

* Author's current address: Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104. This work was supported by the Office of Naval Research under Contract N00014-75-C-0612.

I INTRODUCTION

This paper briefly introduces an approach to the problem of building semantic interpretations of nominal compounds, i.e. sequences of two or more nouns related through modification. The work presented in this paper is discussed in more detail in [3] and [4].

The semantics of nominal compounds have been studied, either directly or indirectly, by linguists and AI researchers. In an early study, Lees [8] developed an impressive taxonomy of the forms. More recently, Levi [7] and Downing [2] have attempted to capture the linguistic regularities evidenced by nominal compounding. Rhyne explored the problem of generating compounds from an underlying representation in [10]. Brachman [1] used the problem of interpreting and representing nominal compounds as an example domain in the development of his SI-Net representational formalism. Gershman [6] and McDonald and Hayes-Roth [9] attempt to handle noun-noun modification in the context of more general semantic systems.

In this work, the interpretation of nominal compounds is divided into three intertwined subproblems: lexical interpretation (mapping words into concepts), modifier parsing (discovering the structure of compounds with more than two nominals) and concept modification (assigning an interpretation to the modification of one concept by another). This last problem is the focus of this paper. The essential feature of this form of modification is that the underlying semantic relationship which exists between the two concepts is not explicit. Moreover, a large number of relationships might, in principle, exist between the two concepts. The selection of the most appropriate one can depend, in general, on a host of semantic, pragmatic and contextual factors.

As a part of this research, a computer program has been written which builds an appropriate semantic interpretation when given a string of nouns. This program has been designed as one component of the natural language question answering system JETS [5], a successor to the PLANES query system [13]. The interpretation is done by a set of semantic interpretation rules. Some of the rules are very specific, capturing the meaning of idioms and canned phrases. Other rules are very general, representing fundamental case-like relationships which can hold between concepts. A strong attempt has been made to handle as much as possible with the more general, highly productive interpretation rules.

The approach has been built around a frame-based representational system (derived from FRL [11]) which represents concepts and the relationships between them. The concepts are organized into an abstraction hierarchy which supports inheritance of attributes. The same representational system is used to encode the semantic interpretation rules. An important part of the system is the concept matcher which, given two concepts, determines whether the first describes the second and, if it does, how well.

II THE PROBLEM

Let's restrict our attention for a moment to the simplest of compounds, those made up of just two nouns, both of which unambiguously refer to objects that we know and understand. What is the fundamental problem in interpreting the modification of the second noun by the first? The problem is to find the underlying relationship that the utterer intends to hold between the two concepts that the nouns denote. For example, in the compound "aircraft engine" the relationship is part of; in "meeting room" it is location; in "salt water" it is dissolved in.

There are several aspects to this problem which make it difficult. First, the relationship is not always evident in the surface form of the compound. What is it about the compound "GM cars" which suggests the relationship made by? The correct interpretation of this compound depends on our knowledge of several facts. We must know that GM is the name of an organization that manufactures things, and in particular, automobiles. Another fact that helps to select this interpretation is that the identity of an artifact's manufacturer is a salient fact. It is even more important when the artifact is an automobile (as opposed to, say, a pencil).

A second source of difficulty is the general lack of syntactic clues to guide the interpretation process. The interpretation of clauses involves discovering and making explicit the relationships between the verb and its "arguments", e.g. the subject, direct object, tense marker, aspect, etc. Clauses have well developed systems of syntactic clues and markers to guide interpretation. These include word order (e.g. the agent is usually expressed as the subject, which comes before an active verb), prepositions which suggest case roles, and morphemic markers. None of these clues exists in the case of nominal compounds.

Third, even when the constituents are unambiguous, the result of compounding them may be multiply ambiguous. For example, a woman doctor may be a doctor who is a woman or a doctor whose patients are women. Similarly, Chicago flights may be those bound for Chicago, coming from Chicago, or even those making a stop in Chicago.

A fourth aspect is that compounds exhibit a variable degree of lexicalization and idiomaticity. In general, the same compound form is used for lexical items (e.g. duck soup, hanger queen) and completely productive expressions (e.g. engine maintenance, faculty meeting).

Finally, I point out that it is possible for any two nouns to be combined as a compound and be meaningful in some context. In fact, there can be arbitrarily many possible relationships between the two nouns, each relationship appropriate for a particular context.

III THE INTERPRETATION RULES

The implemented system contains three components, one for each of the three sub-problems mentioned in the introduction. The lexical interpreter maps the incoming surface words into one or more underlying concepts. The concept modifier takes a head concept and a potential modifying concept and produces a set of possible interpretations. Each interpretation has an associated score which rates its "likelihood". Finally, the modifier parser applies a parsing strategy which compares and combines the local decisions made by the other two components to produce a strong interpretation for the entire compound, without evaluating all of the possible structures (the number of which increases exponentially with the number of nouns in the compound). The remainder of this paper discusses some of the interpretation rules that have been developed to drive the concept modifier.

Three general classes of interpretation rules have been used for the interpretation of nominal compounds. The first class contains idiomatic rules, rules in which the relationship created is totally dependent on the identity of the rule's constituents. These rules will typically match surface lexical items directly. Often, the compounds will have an idiomatic or exocentric* meaning. As an example, consider the Navy's term for a plane with a very poor maintenance record, a "hanger queen". The rule to interpret this phrase has a pattern which requires an exact match to the words "hanger" and "queen".

* An exocentric compound is one in which the modifier changes the basic semantic category of the head noun, as in hot dog and lady finger.

The second class consists of productive rules. These rules attempt to capture forms of modification which are productive in the sense of defining a general pattern which can produce many instantiations. They are characterized by the semantic relationships they create between the modifying and modified concepts. That is, the nature of the relationship is a property of the rule and not the constituent concepts. The nature of the concepts only determines whether or not the rule applies and, perhaps, how strong the resulting interpretation is. For example, a rule for dissolved in could build interpretations of such compounds as "salt water" and "sugar water" and be triggered by compounds matching the description:

(a NominalCompound with
    Modifier matching (a ChemicalCompound)
    Modified matching (a Liquid) preferably (a Water))

The third class contains the structural rules. These rules are characterized by the structural relationships they create between the modifying and modified concepts. The semantic nature of the relationship that a structural rule creates is a function of the concepts involved in the modification. Many of these rules are particularly useful for analyzing compounds which contain nominalized verbs.

IV STRUCTURAL RULES

I have found this last class to be the most interesting and important, at least from a theoretical perspective. This class contains the most general semantic interpretation rules, precisely the ones which help to achieve a degree of closure with respect to semantic coverage [5]. Similar structural rules form the basis of the approaches of Brachman [1] and McDonald and Hayes-Roth [9]. This section presents some of the structural rules I have catalogued. Each rule handles a compound with two constituents.

RULE: RoleValue + Concept. The first structural rule that I present is the most common. It interprets the modifying concept as specifying or filling one of the roles of the modified concept. Some examples of compounds which can be successfully interpreted by this rule are:

engine repair   (a to-repair with object = (an engine))
January flight  (a to-fly with time = (a January))
F4 flight       (a to-fly with vehicle = (an F4))
engine housing  (a housing with superpart = (an engine))
iron wheel      (a wheel with raw-material = (an iron))

Note that when the compound fits the form "subject+verb" or "object+verb" this works very nicely. The applicability of this rule is not limited to such compounds, however, as the last two examples demonstrate.

To apply this rule we must be able to answer two questions. First, which of the modified concept's roles can the modifier fill? Obviously some roles of the modified concept may be inappropriate. The concept for the to-repair event has many roles, such as an agent doing the repairing, an object being repaired, an instrument, a location, a time, etc. The concept representing an engine is clearly inappropriate as the filler for the agent and time roles, probably inappropriate as a filler for the location and instrument roles, and highly appropriate as the object's filler.

Secondly, given that we have found a set of roles that the modifier may fill, how do we select the best one? Moreover, is there a way to measure how well the modifier fits a role? Having such a figure of merit allows one to rate the overall interpretation. The process of determining which roles of a concept another may fill and assigning scores to the alternatives is called role fitting. This process returns a list of the roles that the modifier can fill and, for each, a measure of how "good" the fit is. Each possibility in this list represents one possible interpretation. Not all of the possibilities are worthy of becoming interpretations, however. A selection process is applied which takes into account the number of possible interpretations, their absolute scores and their scores relative to each other. Making a role fit into an interpretation involves making a new instantiation of the modified concept, and filling the appropriate role with the modifier. Details of this process are presented in the next section.

RULE: Concept + RoleValue. This rule is similar to the first, except that the concepts change places. In interpretations produced by this rule, the modified concept is seen as filling a role in the modifier concept. Note that the object referred to by the compound is still an instance of the modified concept.
Some examples where this rule yields the most appropriate interpretation are:

drinking water   (a water which is (an object of (a to-drink)))
washing machine  (a machine which is (an instrument of (a to-wash)))
maintenance crew (a crew which is (an agent of (a to-maintain)))

Again, the application of this rule is mediated by the role fitting process.

RULE: Concept + RoleNominal. This rule is applied when the modified concept is in the class I call role nominals, nouns that refer to roles of other underlying concepts. English has but one productive system for naming role nominals: the agent of a verb can commonly be referenced by adding the -er or -or suffix to the verb stem. This should not hide the possibility of interpreting many concepts as referring to a role in another related concept. Some examples are: a student is the recipient of a teaching, a pipe is the conduit of a flowing, a pump is the instrument of a pumping, and a book is the object of a reading.

This rule tries to find an interpretation in which the modifier actually modifies the underlying concept to which the role nominal refers. For example, given "F4 pilot", the rule notes that "pilot" is a role nominal referring to the agent role of the to-fly event and attempts to find an interpretation in which "F4" modifies that to-fly event. The result is something like "an F4 pilot is the agent of a to-fly event in which the vehicle is an F4". Some other examples are:

cat food  (an object of (a to-eat with agent = (a cat)))
oil pump  (an instrument of (a to-pump with object = (an oil)))
dog house (a location of (a to-dwell with agent = (a dog)))

Viewing a concept as a role nominal (e.g. food as the object of eating) serves to tie the concept to a characteristic activity in which it participates. It is very much like a relative clause except that the characteristic or habitual nature of the relationship is emphasized.

RULE: RoleNominal + Concept. This rule is very similar to the previous one except that it applies when the modifying concept is a role nominal. The action is to attempt an interpretation in which the modification is done, not by the first concept, but by the underlying concept to which it refers. For example, given the compound "pilot school", we can derive the concept for "an organization that teaches people to fly". This is done by noting that pilot refers to the agent of a to-fly event and then trying to modify "school" by this "to-fly". This, in turn, can be interpreted by the Concept + RoleNominal rule if school is defined as "an organization which is the agent of a to-teach". This leads to an attempt to interpret to-fly modifying to-teach. The RoleValue + Concept rule interprets to-fly as filling the object (or discipline) role of to-teach. Some other examples of compounds that benefit from this interpretation rule are newspaper glasses (glasses used to read a newspaper), driver education (teaching people to drive), and food bowl (a bowl used to eat food out of).

Other Structural Rules. Other structural interpretation rules that I have identified include Specific+Generic, which applies when the modifier is a specialization of the modified concept (e.g. F4 planes, boy child); Generic+Specific, which applies when the modifier is a generalization of the modified concept (e.g. Building NE43, the integer three); Equivalence, in which the resulting concept is descendant from both the modifier and modified concepts (e.g. woman doctor); and Attribute Transfer, in which a salient attribute of the modifier is transferred to the modified concept (e.g. iron will, crescent wrench).

V ROLE FITTING

The process of role fitting is one in which we are given two concepts, a RoleValue and a Host, and attempt to find appropriate roles in the Host concept in which the RoleValue concept can be placed. Briefly, the steps carried out by the program are: [1] Collect the local and inherited roles of the Host concept; [2] Filter out any inappropriate ones (e.g. structural ones); [3] For each remaining role, compute a score for accepting the RoleValue concept; [4] Select the most appropriate role(s).

In the third step, the goodness-of-fit score is represented by a signed integer. Each role of a concept is divided into an arbitrary number of facets, each one representing a different aspect of the role. In computing the goodness of fit measure, each facet contributes to the overall score via a characteristic scoring function. The facets which currently participate include the following:

Requirements   descriptions the candidate value must match.
Preferences    descriptions the candidate value should match.
DefaultValue   a default value.
TypicalValues  other very common values for this role.
Modality       one of Optional, Mandatory, Dependent or Prohibited.
Multiplicity   maximum and minimum number of values.
Salience       a measure of the role's importance with respect to the concept.

For example, the scoring function for the requirements facet yields a score increment of +1 for each requirement that the candidate value matches and a negative infinity for any mismatch. For the preferences facet, we get a +4 for each matching preference description and a -1 for each mismatching description. The salience facet holds a value from a 5 point scale (i.e. VeryLow, Low, Medium, High, VeryHigh). Its scoring function maps these into the integers -1, 0, 2, 4, 8. (A procedural sketch of this scoring appears after the references below.)

VI SUMMARY

This paper is a brief introduction to an approach to the task of building semantic interpretations of nominal compounds. A nominal compound is a sequence of two or more nouns or nominal adjectives (i.e. non-predicating) related through modification. The concepts which the nouns (and the compound) denote are expressed in a frame-based representation system. The knowledge which drives the interpretation comes from the knowledge of the concepts themselves and from three classes of interpretation rules. Examples of the most general class of interpretation rules have been given.

REFERENCES

[12] Tennant, H., "Experience with the Evaluation of Natural Language Question Answerers", Proc. IJCAI-79, Tokyo, Japan, 1979.

[13] Waltz, D., "An English Language Question Answering System for a Large Relational Data Base", CACM, vol. 21, pp. 526-539, 1978.
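The facet scoring described in section V can be sketched directly from the figures given there. The fragment below is a minimal Python illustration: the dictionary encoding of roles and the stub matcher are our assumptions, while the weights (+1 per met requirement, negative infinity on a violation, +4/-1 for preferences, and the salience mapping -1, 0, 2, 4, 8) follow the text.

NEG_INF = float("-inf")
SALIENCE = {"VeryLow": -1, "Low": 0, "Medium": 2, "High": 4, "VeryHigh": 8}

def role_fit_score(role, candidate, matches):
    """Score how well `candidate` fills `role`. `matches(desc, value)` stands
    in for the concept matcher: True if the description describes the value."""
    score = 0
    for req in role.get("requirements", []):
        if matches(req, candidate):
            score += 1
        else:
            return NEG_INF  # a violated requirement rules the role out entirely
    for pref in role.get("preferences", []):
        score += 4 if matches(pref, candidate) else -1
    score += SALIENCE[role.get("salience", "Medium")]
    return score

def fit_roles(host_roles, candidate, matches):
    """Steps [1]-[4]: score each remaining role and rank the survivors."""
    scored = [(name, role_fit_score(role, candidate, matches))
              for name, role in host_roles.items()]
    return sorted((s for s in scored if s[1] > NEG_INF), key=lambda s: -s[1])

# Toy use: fitting "engine" into the roles of to-repair.
matches = lambda desc, value: desc == value  # stand-in for the real matcher
to_repair = {
    "object": {"requirements": ["engine"], "salience": "High"},
    "agent":  {"requirements": ["person"], "salience": "High"},
}
print(fit_roles(to_repair, "engine", matches))  # [('object', 5)]

Here "engine" meets the object role's requirement (+1) and picks up the High salience bonus (+4), while the agent role is eliminated by its violated requirement, matching the paper's intuition about "engine repair".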
| 1980 | 58 |
54 |
TOWARDS AN AI MODEL OF ARGUMENTATION

Lawrence Birnbaum, Margot Flowers, and Rod McGuire
Yale University
Department of Computer Science
New Haven, Connecticut

Abstract

This paper describes a process model of human argumentation, and provides examples of its operation as implemented in a computer program. Our main concerns include such issues as the rules and structures underlying argumentation, how these relate to conversational rules, how reasoning is used in arguments, and how arguing and reasoning interact.

Introduction

Engaging in an argument is a task of considerable complexity, involving the coordination of many different abilities and knowledge sources. Some questions that arise in trying to construct a process model of argumentation include:

What sub-tasks comprise argumentation?
What are the rules underlying argumentation?
What representations of argument structure are necessary to support these argument rules?
What are the conversational rules required for dealing with arguments as a specific type of dialogue?
How is the ability to reason about the world used in the argumentation process?
How do reasoning and argumentation interact?

To address these questions, we are in the process of building a system, called ABDUL/ILANA, that can adopt a point of view and engage in a political argument, supporting its beliefs with argument techniques using appropriate facts and reasoning. The program takes natural language input and produces natural language output. All of the rules and mechanisms described in this paper have been implemented, and using them the program is capable of participating in the following argument fragment, concerning the question of who was responsible for the 1967 Arab-Israeli war. The program can assume either the Israeli or the Arab point of view.

[1] Arab: Who started the 1967 War?
[2] Israeli: The Arabs did, by blockading the Straits of Tiran.
[3] Arab: But Israel attacked first.
[4] Israeli: According to international law, blockades are acts of war.
[5] Arab: Were we supposed to let you import American arms through the Straits?
[6] Israeli: Israel was not importing arms through the Straits. The reason for the blockade was to keep Israel from importing oil from Iran.

No matter which point of view the program adopts, it uses the same principles of argumentation, abstract knowledge, and historical facts. In this way, the argument is not over the facts but instead over the interpretation of the facts, slanted by the sympathies of each arguer.

This work was supported in part by the Advanced Research Projects Agency of the Department of Defense, monitored by the Office of Naval Research under contract N00014-75-C-1111, and in part by the National Science Foundation under contract IST7918463.
Arpument    tasks    What tasks must an arguer perform    when    faced    with    his    opponent's    responses?    Before anything    else, each input must be transformed    into a meaning    representation    for    use in further processing    (see    Birnbaum and Selfridge    (1980)).    But much    more    is    necessary.    An arguer must understand    how new input    relates to the    structure    of    the    argument    as    a    whole,    and    this    task    usually    requires    making    non-trivial    inferences.    For example, consider what    the    program (in the role of an Israeli) must do to    relate the Arab's input [31, "But    Israel    attacked    first",    to    the utterances    which preceded it.    The    Israeli must realize that [31 constitutes    evidence    for the unstated claim that Israel started the 1967    War, which is contrary to his claim in [21.    An arguer must also relate new input    to    what    he knows about the domain, for two reasons.    First,    this allows the arguer to determine whether or    not    313    From: AAAI-80 Proceedings. Copyright © 1980, AAAI (www.aaai.org). All rights reserved.    he    believes    that    some    claim put to him is true.    knowledge    of how to look through this network for a    Second, in    doing    this    he    can    uncover    relevant    point    which    can    be    attacked or defended.    As an    information    which    may    be    useful    in    forming    a    example of the use of such rules, consider how    the    response.    program    (operating    as    the    Arab)    generates    a    Once the understanding    sub-tasks    sketched    out    response to input 141:    above have been accomplished,    an arguer must decide    [41 Israeli:    According    to    international    how to respond.    This involves    several    levels    of    law, blockades    are acts of war.    decision.    First,    he    must    make    large-scale    strategic choices, such as whether    to    attack    his    opponent's    claims,    defend    his    own,    change    the    This point stands in    a    support    relation    to    the    previous Israeli claims of utterance    [21:    subject, and so forth.    Second, he    must    determine    which    points    should be taken up, and how evidence    [2al The Arabs started the war.    can    be    provided    for    or    against    those    points.    A    A    Third,    he    must    use    his    knowledge    and reasoning    I    I    ability in order to actually produce such evidence.    attack    This    final    step    ends    with    the    generation    of a    response in natural language (see McGuire (1980)).    I    support    I ---[3al The Israelis    ArPument    structures and rules    --    I    started the war.    A    There are basically    two ways that propositions    in an argument can relate to each other.    One point    [2bl The Arab blockade    led to the war.    6    I    support    can be evidence for another point,    in which    case    I    the    argument relationship    is one of support, or it    I    [3bl The Israeli attack    can challenge the other point, in which    case    the    support    led to the war.    relationship    is one of attack.    Thus, for example,    the ultimate analysis of the Israeli's    statement:    I    141 Blockades are acts of war.    [ 21 Israeli:    The    Arabs    did,    by    blockading    the Straits of Tiran.    
One motivation for having these argument relations is that their local "geometry" can be used by argument tactics in determining how to respond to the input. These are rules that describe the options as to how to go about attacking or defending a proposition based on the argument relations in which it takes part. For example, one such rule coordinates the three ways to attack a simple support relationship:

(a) Attack the main point directly;
(b) Attack the supporting evidence;
(c) Attack the claim that the evidence gives support for the main point.

The representation of the entire history of the argument, as a network of propositions connected by argument relations, forms an argument graph. Argument graph search rules embody knowledge of how to look through this network for a point which can be attacked or defended. As an example of the use of such rules, consider how the program (operating as the Arab) generates a response to input [4]:

[4] Israeli: According to international law, blockades are acts of war.

This point stands in a support relation to the previous Israeli claims of utterance [2]:

[2a] The Arabs started the war.   <--attack--   [3a] The Israelis started the war.
        ^                                               ^
        | support                                       | support
        |                                               |
[2b] The Arab blockade led to the war.          [3b] The Israeli attack led to the war.
        ^
        | support
        |
[4] Blockades are acts of war.

Once the program has decided to follow an attack strategy, it needs to find a weak point in the opponent's argument. In this case a search rule suggests traversing up the support links in the graph, starting with the most recent input. The first item considered is [4], the Israeli's claim that blockades are acts of war. However, this proposition was already checked during the understanding phase and found to be one of the program's beliefs. Thus, it is not a good candidate for attack. Traversing the support link leads to [2b], the proposition that the Arab blockade led to the war. However, this too was verified during understanding. Following one more link leads to [2a], the claim that the Arabs started the war, which the Arab does not believe to be true. Hence, this is a good candidate for attack.

Now the program considers the three tactics for attacking this proposition's simple support relation. Tactic (a), attacking the main point, has already been used once, as can be determined by inspecting the argument graph. Tactic (b), attacking the evidence, can't be used in this case since the evidence has already been rejected as a candidate for attack by the argument graph search rule. This leaves tactic (c), attacking the claim that the evidence is adequate support for the main point.
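Continuing the sketch above (same invented classes), the search rule and the tactic choice might be rendered as follows; this is an illustrative reconstruction, not ABDUL/ILANA's actual code.

def find_attack_point(recent):
    """Traverse support links upward from the most recent input,
    skipping propositions already verified as beliefs."""
    rejected = []
    node = recent
    while node.believed and node.supports:
        rejected.append(node)      # checked during understanding
        node = node.supports[0]    # follow the support link up
    return (node if not node.believed else None), rejected

def choose_tactic(point, evidence, rejected):
    # Coordinate the three ways to attack a simple support relation.
    if not point.attacked:
        return "(a) attack the main point directly"
    if evidence not in rejected:
        return "(b) attack the supporting evidence"
    return "(c) attack the claim that the evidence supports the point"

point, rejected = find_attack_point(p4)        # [4] -> [2b] -> [2a]
print(point.label)                             # "2a": a good candidate
print(choose_tactic(point, p2b, rejected))     # "(c) ...", as in the text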
Having decided to attack this support relation between proposition [2a], that the Arabs started the war, and [2b], that the Arab blockade led to the war, the program now attempts to do that by inferring a justification for the blockade, using its abstract knowledge of blockades and its factual knowledge of U.S. military aid to Israel. This justification is ultimately generated as:

[5] Arab: Were we supposed to let you import American arms through the Straits?

An interesting point about question [5] is that it has the form of the standard argument gambit of asking one's opponent to support or justify a position. What makes the question rhetorical is the assumption that, in this case, there is no justification for demanding that arms importation be allowed.

Reasoning and memory in arguments

We have been particularly concerned with investigating how reasoning and memory search interact with the argument process. Reasoning in an argument is not simply blind inference: requirements imposed by the structure of the argument constrain when and how inferences should be made. For example, consider how the program, when adopting the Israeli point of view, responds to question [1]:

[1] Arab: Who started the 1967 War?
[2a] Israeli: The Arabs did,
[2b] Israeli: ... by blockading the Straits of Tiran.

In this case, the generation of [2a] does not require the use of argument rules or episodic memory retrieval. Instead it is derived by use of a "gut reaction" rule that always assigns blame for "bad" events to some other participant. However, as soon as such a claim is put forth in a serious argument, it must be supported. The support [2b] is then produced by a more complex use of inferential memory, activated only in response to the argument goal of providing support.

Conversely, reasoning and memory guide the argument process by discovering information that affects subsequent argument choices. In particular, good rebuttals may often be found in the course of understanding the input. Consider how the program (in the role of the Arab) processes utterance [2]:

[2] Israeli: The Arabs did, by blockading the Straits of Tiran.

To understand this input, the Arab must relate it to what he knows in order to verify its truth, and perhaps more importantly, uncover his relevant knowledge. How does the program verify the claim that the Arab blockade led to the war?

Access to historical events in memory depends upon their organization into temporally ordered chains of related events (see Schank (1979)). These chains are searched by a process called causal-temporal (CT) search. CT search contains the system's knowledge about relations between causality and temporal ordering. Such knowledge includes, for example, the rule that any event up to and including the first event of an episodic sequence can be a cause of that sequence, but no subsequent event can. The program checks the plausibility of [2] above by employing CT search backwards in time from the start of the war, looking for the Arab blockade. However, in the course of this search it naturally discovers the initial event of the war episode, the Israeli attack, which is then noted as a possible cause of the war. Search continues until the Arab blockade is verified.
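The following sketch shows the flavor of CT search on a toy event chain; the chain and the simplified causality rule are invented for illustration, and the real system's episodic memory is far richer than a flat list.

# Events of the (toy) war episode, in temporal order.
WAR_EPISODE = ["Arab blockade of the Straits",
               "Israeli attack",            # initial event of the war
               "start of the 1967 War"]

def ct_search(chain, start, target):
    """Search backwards in time from `start` for `target`, noting
    every earlier event passed over as a possible cause."""
    noticed = []
    for event in reversed(chain[:chain.index(start)]):
        if event == target:
            return True, noticed
        noticed.append(event)
    return False, noticed

verified, noticed = ct_search(WAR_EPISODE, "start of the 1967 War",
                              "Arab blockade of the Straits")
# verified is True, and `noticed` already contains "Israeli attack":
# a possible rebuttal found as a side-effect, with no extra search.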
Then, when the time comes to respond to input [2], the Israeli attack has already been found as a possible rebuttal, so that no explicit search need be initiated.

Thus a prime method for finding possible responses is simply to make use of facts that have been noticed by prior inferential memory processing. This mechanism also enables the system to deny false presuppositions without the necessity for a special rule. This is because in the course of relating input which contains a false presupposition to long-term memory, the program would discover the falsehood, and often the reason why it's false. Having this in hand, it can then be used in a rebuttal.

Conclusion

Engaging in an argument requires several distinct kinds of knowledge: knowledge of the domain, knowledge of how to reason, and knowledge of how to argue. The proper coordination of these disparate knowledge sources can have a mutually beneficial effect in reducing undirected processing. This kind of result would be impossible in a simplistic model based on strictly separate processing stages.

References

Birnbaum, L., and Selfridge, M. 1980. Conceptual Analysis of Natural Language, in R. Schank and C. Riesbeck, eds., Inside Computer Understanding, Lawrence Erlbaum Associates, Hillsdale, N.J.

de Kleer, J., Doyle, J., Steele, G., and Sussman, G. 1977. AMORD: Explicit Control of Reasoning, in Proc. ACM Symp. on Artificial Intelligence and Programming Languages, Rochester, N.Y.

Doyle, J. 1979. A Truth Maintenance System, Artificial Intelligence, vol. 12, no. 3.

McGuire, R. 1980. Political Primaries and Words of Pain, unpublished ms., Yale University, Dept. of Computer Science, New Haven, CT.

Schank, R. 1979. Reminding and Memory Organization: An Introduction to MOPs, Research Report no. 170, Yale University, Dept. of Computer Science, New Haven, CT.
 | 
	1980 
 | 
	59 
 | 
					
55 
							 | 
A PROGRAM MODEL AND KNOWLEDGE BASE FOR COMPUTER AIDED PROGRAM SYNTHESIS

Richard J. Wood
Department of Computer Science
University of Maryland
College Park, Maryland 20742

1. Introduction

Program synthesis is a complex task comprising many interacting subactivities and requiring access to a variety of knowledge sources. Recent investigations have discovered the inadequacies of current synthesis techniques to keep pace with the increasing difficulties of managing large, intricate problem solutions. An alternative approach to software methodologies is the development of intelligent computer systems that manage the vast amount of information assimilated and accessed during this process. The system's "intelligence" is characterized not by an innate ability to invent solutions, but by the incorporation of an internal model of the problem domain and corresponding program solution.

This project casts program synthesis as a cooperation between a domain expert (the client) and an expert programmer (the consultant). By casting this investigation in the client-consultant paradigm, the techniques and knowledge general to programming can be isolated and examined. In the cooperative framework of program synthesis the following major categories of activities have been identified:

Acquisition: elicitation of the problem domain from the client.

Formulation: transformation, completion, and refinement of the problem requirements into terms recognized by the system.

Solution structuring: the selection of known techniques for solving a task's subproblems and the combination of these solution fragments to form an overall task solution.

Code production: the instantiation of language construct schemata that correspond to steps of the solution and whose execution will achieve the overall program behavior.

A system capable of supporting computer aided synthesis must have components corresponding to each of the above activities. Additionally, an automated consultant must access a rich knowledge base of programming information in constructing a model of the evolving program. This model must record the activities of the synthesis and must be retained and consulted through to the final program.

The Interactive Program Synthesizer (IPS) is a system designed to fulfill the role of a consultant. (The research described in this report was funded by the Office of Naval Research under grant N00014-76C-0471. Their support and encouragement are gratefully acknowledged.) This report focuses on the nature of the program model and knowledge base required for a successful synthesis. Specifically, the architecture of the Interactive Program Synthesizer, under current development, is described.

2. The Program Model

The central data structure of the IPS system is the program model, which represents the developing program during synthesis. The organization of the program model must accommodate operations that include the introduction of new terms from the user's problem description, the refinement and further definition of existing terms, the detection of inconsistencies in the model, and the efficient retraction of inconsistent assertions.
These activities occur in client-consultant programming and correspond to the initial problem description by the client, the explanations and clarifications required by the consultant, and the rejection of unfruitful partial solutions. The program model is a record of all the assertions, inferences, and deductions made during the synthesis and the justifications for each assertion.

The IPS program model is encoded as a semantic network, a data structure which facilitates the processing of synthesis activities. The nodes of the network represent the objects and operations of the developing program; links between nodes describe, for example, the correspondence between an operation and the set of states requisite for its achievement.

When the sentence "The screen is thought of as a 40 by 86 byte array." is processed by the system, two objects are introduced into the model: 1) an object whose name is "screen", and 2) an object which is an instantiation of the two dimensional array frame, filled with the information presented in the sentence (e.g., the extents of each dimension and the type of the array entries). These two objects are joined by a *DEF link that reflects the client's decision to consider the abstract object "screen" as a two dimensional array. Similarly, the processing of the sentence "To clear the screen store a blank in every position of the screen." introduces two objects corresponding to the domain operation "clear" and the array-store operation, and links them via a *REF link. The *REP and *RED links are used in a similar manner to represent system generated decisions (inferences) as opposed to user specifications.

The division of link types parallels the distinction between the two domains of expertise of the client and the consultant. Clarifying requests to the user are expressed in the terminology of *DEF and *REF objects (e.g., "screen" and "clear" in the above example), while the system automatically infers information about *RED objects (e.g., 2-D array and array store). When an inimical interaction between two states in the model is detected, the system unravels the current solution [Rieger&London, 1977] and selects alternative strategies for achieving a *RED state before causing a retraction of that state. If the offender is a *REF state the system must appeal to the user for a restatement of the goal: the system cannot judge the soundness of a user-supplied decomposition and must turn to the user for an alternate decomposition.

Other link types exist in the program model (e.g., dependency, inference, feature-description) but are beyond the scope of this report. The reader is directed to other projects that investigate the foundations of semantic networks (e.g., [Brachman, 1977] and [Fahlman, 1979]).

The program model is constructed by instantiating schematic programming knowledge with problem specific data presented by the user.
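A minimal sketch of such a network, using the four link types just described but with otherwise invented names, might look like this in Python:

class ProgramModel:
    """Semantic-network program model: named nodes plus typed links."""
    USER_LINKS = {"*DEF", "*REF"}      # client-supplied decisions
    SYSTEM_LINKS = {"*REP", "*RED"}    # system-inferred decisions

    def __init__(self):
        self.links = []                # (source, link_type, target)

    def link(self, source, link_type, target):
        self.links.append((source, link_type, target))

    def needs_user_restatement(self, node):
        """A conflicting user-supplied state can only be repaired by
        asking the client; system-inferred states can be re-derived."""
        return any(src == node and ltype in self.USER_LINKS
                   for src, ltype, _ in self.links)

model = ProgramModel()
# "The screen is thought of as a 40 by 86 byte array."
model.link("screen", "*DEF", "2-D byte array, extents 40 x 86")
# "To clear the screen store a blank in every position of the screen."
model.link("clear", "*REF", "array-store(blank, all positions)")
print(model.needs_user_restatement("clear"))   # True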
The programming knowledge base consists of facts and program construction techniques considered primitive to programming and employed during synthesis. This collection includes: (1) descriptions of data types and rules for their combination to form new abstract types; (2) criteria required by a type description; (3) techniques for problem decomposition and recognition of subproblem interactions; and (4) methods for construction of expressions, statements, conditionals, and input and output operations. A fundamental characteristic of the knowledge base is that the facts are applicable to many programming domains.

The IPS knowledge base is organized in a hierarchical frame system, an efficient organization of knowledge for two synthesis activities: recognition and inference. Features presented during the user's behavioral task description suggest potential programming objects to represent abstract domain objects. Identification of a particular programming object supplies information normally associated with the object but not stated in the user's discourse. These inferences provide a basis for queries to the user requesting additional information, or for the selection of a particular object from a set of candidates.

The programming frames contain information describing the defining characteristics and potential roles of an object in a program. While processing the sentence "The screen is thought of as a 40 by 86 byte array.", for example, the prototypical two dimensional array frame is retrieved and instantiated with the data presented in the sentence. Additionally, the array frame provides default information about some characteristics, such as commonly used terminology for referencing the indices of the dimensions and the components of the array (e.g., "row" and "column" for the dimensions). It also suggests queries to the user about requisite information (e.g., is the size fixed? is this a type description or a specific individual? what values are contained in the array?). Features describing the potential uses of the array (e.g., the ability to define subregions of an array) are included in the frame but not processed immediately. If later assertions refer to these roles they can be retrieved from the prototypical frame and instantiated.

The knowledge base contains frames for both programming objects and operations. Object frames contain the defining characteristics and potential roles of an object, while operations are described by the set of states requisite for their correct execution and the post conditions and side-effects of the action.

(The IPS is designed to communicate with the user in English. Currently the sentences are translated manually into corresponding model manipulation functions. This transformation will ultimately utilize a keyword parser built around a dictionary of programming terminology.)
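The frame retrieval and instantiation described above might be sketched as follows; the frame contents are paraphrased from the example, and all names are invented:

# Prototypical 2-D array frame: defaults, user queries, deferred roles.
ARRAY_2D_FRAME = {
    "dimension-terms": ("row", "column"),       # default terminology
    "queries": ["Is the size fixed?",
                "Type description or specific individual?",
                "What values are contained in the array?"],
    "deferred-roles": ["subregion"],            # used only if referenced
}

def instantiate(prototype, **fillers):
    """Copy the prototype's defaults, then overlay sentence data."""
    instance = dict(prototype)
    instance.update(fillers)
    return instance

# "The screen is thought of as a 40 by 86 byte array."
screen = instantiate(ARRAY_2D_FRAME,
                     name="screen", extents=(40, 86), entry_type="byte")
for query in screen["queries"]:
    print(query)        # questions the consultant can put to the user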
This note describes the nature of two components of computer aided program synthesis: the internal program model of the developing program and the knowledge base of programming information. They are part of a larger project [Wood, 1980] directed towards the development of a theoretical model of program synthesis and an implementation of a programming system which incorporates this model. The project is investigating simple microprocessor assembly language programming, examining the activities and knowledge used by a consultant during the construction of a software package that manages a video display buffer. By examining program synthesis in the concrete and uncluttered realm of assembly language programs (as contrasted to abstract high-level languages), progress towards a successful computer aided programming system can advance in much the same manner that advances in general problem solving resulted from investigations into the blocks-world domain.

Thanks to Chuck Rieger, Steve Small, and Randy for reading drafts of this paper and making helpful comments.

References

[Brachman, 1977] Brachman, R.J. A Structural Paradigm for Representing Knowledge. Bolt Beranek and Newman, Inc., Report 3605, May 1978.

[Fahlman, 1979] Fahlman, S.E. NETL: A System for Representing and Using Real-World Knowledge. MIT Press, 1979.

[Rieger&London, 1977] Rieger, C., and London, P. Subgoal Protection and Unravelling during Plan Synthesis. Proc. IJCAI-77, Cambridge, Mass., 1977.

[Wood, 1980] Wood, R.J. Computer Aided Program Synthesis. Univ. of Maryland, TR-861, Jan. 1980.
 | 
	1980 
 | 
	6 
 | 
					
56 
							 | 
Knowledge Representation for Syntactic/Semantic Processing

Robert J. Bobrow
Bolt Beranek and Newman Inc.
50 Moulton St.
Cambridge, Mass. 02238

Bonnie L. Webber
Department of Computer Science
The Moore School of Electrical Engineering D2
University of Pennsylvania
Philadelphia, Pa. 19104

1. Introduction

This paper discusses some theoretical implications of recasting parsing and semantic interpretation as a type of inference process which we call incremental description refinement. It draws upon our recent experience with RUS, a framework for natural language processing developed at BBN and in use in several different natural language systems across the country (for details see [1], [2] and [7]). RUS is a very practical system that is as efficient as a semantic grammar like the SOPHIE parser [6] and as flexible and extensible as a modular syntactic/semantic processor like LUNAR [10]. It achieves this combination of efficiency and flexibility by cascading [12] syntactic and semantic processing, producing the semantic interpretation of an input utterance incrementally during the parsing process, and using it to guide the operation of the parser.

(The research reported in this paper was supported in part by the Advanced Research Projects Agency, and was monitored by ONR under Contract No. N00014-77-C-0378.)

Because RUS provides a very clean interface between syntactic and semantic processing, it has been possible to experiment with a variety of knowledge representations in the different implementations noted above. The most recent such implementation uses the KL-ONE formalism [3], [4] to represent the knowledge needed for incremental processing. (This implementation has been dubbed PSI-KLONE, for "Parsing and Semantic Interpretation using KL-ONE".) KL-ONE is a uniform object-centered representational scheme based on the idea of structured inheritance in a lattice-structured taxonomy of generic knowledge. As we shall discuss later, PSI-KLONE takes advantage of KL-ONE's taxonomic lattice [11], which combines the properties of an inheritance network with those of a discrimination net.

The next section of this paper describes the syntactic/semantic cascade in general terms, and then gives a short example of its operation in the PSI-KLONE implementation. We then define the concept of an incremental description refinement (IDR) process to use as a paradigm for understanding the operation of the semantic component of the cascade. This introduces the last section of the paper, which discusses the requirements on a general frame-like knowledge representation if it is to be capable of supporting such an IDR process.

2. The Syntactic/Semantic Cascade

Within the RUS framework, the interaction between the parser and the semantic interpreter (the interpreter) takes place incrementally as the parser scans the input string from left to right, one word at a time. The semantic interpretation of each syntactic constituent is produced in parallel with the determination of its syntactic structure. Knowledge developed in the course of producing the interpretation is used to control further action by the parser.
Parsing supports the processes of semantic interpretation and discourse inference (not discussed in this paper) by finding the constituents of each phrase, determining their syntactic structure, and labelling their functional relationship to the phrase as a whole (the matrix). (We use an extended notion of functional relation here that includes surface syntactic relations, logical syntactic (or shallow case structure) relations, and relations useful for determining discourse structures such as primary focus.) These labels are proposed purely on the basis of syntactic information, but are intended to reflect a constituent's functional role in the matrix, and not simply its internal syntactic structure. We will refer to these labels as functional or syntactic labels for constituents.

The parser and interpreter engage in a dialogue consisting of transmissions from the parser and responses from the interpreter. A transmission is a proposal by syntax that some specific functional relation holds between a previously parsed and interpreted constituent and the matrix phrase whose parsing and interpretation is in progress. The proposal takes the form of a matrix/label/constituent triple. The interpreter either rejects the proposal or accepts it and returns a pointer to a KL-ONE data structure which represents its knowledge of the resulting phrase. (This pointer is not analyzed by the parser, but is rather used in the description of the matrix that syntax includes in its next proposal (transmission) to extend the matrix.) The parser is implemented as an ATN [9], and transmissions occur as actions on the arcs of the ATN grammar. The failure of an arc because of a semantic rejection of a transmission is treated exactly like the failure of an arc because of a syntactic mismatch; alternative arcs on the source state are attempted, and if none are successful, a back-up occurs.

2.1. The role of the semantic interpreter in a cascaded system

The PSI-KLONE interpreter must perform two related tasks:

1. provide feedback to the parser by checking the semantic plausibility of proposed syntactic labels for constituents of a phrase, and

2. build semantic interpretations for individual phrases.

The mechanism for performing both these tasks is based on the idea of mapping between the (syntactic) functional labels provided by the parser and a set of extended case-frame or semantic relations (defined by the interpreter) that can hold between a constituent and its matrix phrase. The mapping of functional labels to semantic relations is clearly one to many. For example, the logical subject of a clause whose main verb is "hit" might be the agent of the act (e.g. "The boy hit ...") or the instrument (e.g. "The brick hit ..."). A semantic relation (or semantic role), on the other hand, must completely specify the role played by the interpretation of the constituent in the interpretation of the matrix phrase. For example, a noun phrase (NP) can serve various functions in a clause, including logical subject (LSUBJ), logical object (LOBJ), surface subject (SSUBJ), and first NP (FIRSTNP).
The task of the interpreter is to determine which, if any, semantic relation could hold between a matrix phrase and a parsed and interpreted constituent, given a functional label proposed by the parser. This task is accomplished with the aid of a set of pattern-action relation mapping rules (RMRULES) that specify how a given functional label can be mapped into a semantic relation. An RMRULE has a pattern (a matrix/label/constituent triple) that specifies the context in which it applies, in terms of:

o the syntactic shape of the matrix (e.g. "It is a transitive clause whose main verb is 'run'."), and the interpretation and semantic role assigned to other constituents (e.g. "The logical subject must be a person and be the Agent of the clause"; that is, the constituent labelled LSUBJ must be interpretable as a person),

o the proposed functional label, and

o the interpretation of the constituent to be added.

The action of the RMRULE is to map the given functional label onto a semantic relation.

A proposed syntactic label is semantically plausible if its proposal triple matches the pattern triple(s) of some RMRULE(s). KL-ONE is a good language for describing structured objects such as phrases built up out of constituents, and for representing classes of objects such as the matrix/label/constituent triples that satisfy the constraints given by RMRULE patterns. In PSI-KLONE, each RMRULE pattern is represented as a KL-ONE structure called a Generic Concept (see section 2.2). These Concepts are arranged in a taxonomy that is used as a discrimination net to determine the set of patterns which match each triple. We refer to this as the taxonomy of syntactic/semantic shapes; note that it is generally a lattice and not simply a tree structure.

Associated with each semantic relation is a rule (an IRULE) that specifies how the interpretation of the constituent is to be used in building the interpretation of the matrix phrase. When all the constituents of a matrix have been assigned appropriate semantic relations, the interpretation of a phrase is built up by executing all of the IRULEs that apply to the phrase. The separation of RMRULEs from IRULEs allows PSI-KLONE to take full advantage of the properties of the syntactic/semantic cascade. As each new constituent is proposed by the parser, the interpreter uses the RMRULEs to determine which IRULEs apply to that constituent; but it does not actually apply them until the parser indicates that all constituents have been found. This buys efficiency by rejecting constituent label assignments which have no hope of semantic interpretation, while deferring the construction of an interpretation until the syntactic well-formedness of the entire phrase is verified.
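A minimal sketch of this matching, with invented predicates standing in for KL-ONE classification, is:

from dataclasses import dataclass
from typing import Callable

@dataclass
class RMRule:
    matrix_test: Callable       # e.g. clause headed by "run"
    label: str                  # proposed functional label
    constituent_test: Callable  # e.g. interpretable as a person
    relation: str               # semantic relation the label maps onto

def plausible_relations(rules, matrix, label, constituent):
    """Semantic relations licensed by a matrix/label/constituent
    proposal; an empty list means the proposal is rejected."""
    return [r.relation for r in rules
            if r.label == label
            and r.matrix_test(matrix)
            and r.constituent_test(constituent)]

rules = [RMRule(lambda m: m["head"] == "run", "LSUBJ",
                lambda c: c["shape"] == "PersonNP", "Agent"),
         RMRule(lambda m: m["head"] == "hit", "LSUBJ",
                lambda c: c["shape"] == "BrickNP", "Instrument")]

print(plausible_relations(rules, {"head": "run"},
                          "LSUBJ", {"shape": "PersonNP"}))  # ['Agent']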
2.2. An example of the cascade

As a simplified example of the parser-interpreter interaction, and the use of the KL-ONE taxonomy of syntactic/semantic shapes in this interaction, we will briefly describe the process of parsing the clause "John ran the drill press." The simplified ATN grammar we use for this example is shown in Fig. 2-1.

[Figure 2-1: A simplified ATN]

For readers unfamiliar with KL-ONE, we will explain three of its major constructs as we note the information represented in the simple taxonomy shown in Fig. 2-2. In KL-ONE, Generic Concepts (ovals in the diagram, boldface in text) represent description templates, from which individual descriptions or Individual Concepts (shaded ovals, also boldface in text) are formed. In Fig. 2-2, the most general description is SYNTACTIC-CONSTITUENT, which is specialized by the two descriptions, PHRASE and WORD. All KL-ONE descriptions are structured objects. The only structuring device of concern here is the Role. A Role (drawn as a small square, and underlined in text) represents a type of relationship between two objects, such as the relation between a whole and one of its parts. Every Role on a Generic Concept indicates what type of object can fill the Role, and how many distinct instances of the relation represented by the Role can occur. The restriction on fillers of a Role is given by a pointer to a Generic Concept, and the number of possible instances of the Role is shown by a number facet (indicated in the form "M < # < N" in the figures). In our diagram we indicate that every PHRASE has a WORD associated with it which fills the Head Role of the PHRASE, and may have 0 or more other SYNTACTIC-CONSTITUENTS which are Modifiers. The double arrow or SuperC Cable between PHRASE and SYNTACTIC-CONSTITUENT indicates that every instance of PHRASE is thereby a SYNTACTIC-CONSTITUENT.

[Figure 2-2: A simple KL-ONE network]

The simplified taxonomy for our example is given in Fig. 2-3. This indicates that any CLAUSE whose Head is the verb "run" (independent of tense and person/number agreement) is an example of a RunCLAUSE. There are two classes of RunCLAUSEs represented in the taxonomy: those whose LSUBJ is a person (the PersonRunCLAUSEs), and those whose LSUBJ is a machine (the MachineRunCLAUSEs). The class of PersonRunCLAUSEs is again sub-divided, and its subclasses are RunMachineCLAUSE (in which the LOBJ must be a machine), RunRaceCLAUSE (in which the LOBJ is a race), and SimpleRunCLAUSE (which has no LOBJ).

[Figure 2-3: A simple KL-ONE Syntactic Taxonomy. To reduce clutter, several superC cables to the Concept NP have been left out.]

If we get an active sentence like "John ran the drill press", the first stage in the parsing is to PUSH for an NP from the CLAUSE network. For simplicity we assume that the result of this is to parse the noun phrase "John" and produce a pointer to NP1, an Individual Concept which is an instance of the Generic pattern PersonNP. This is the result of interaction of the parser and interpreter at a lower level of the ATN.

Since it is not yet clear what role NP1 plays in the clause (i.e. because the clause may be active or passive), the parser must hold onto NP1 until it has analyzed the verb. Thus the first transmission from the parser to the interpreter at this level is the proposal that "run" (the root of "ran") is the Head of a CLAUSE.
The interpreter accepts this and returns a pointer to a new Individual Concept CL1 which it places as an instance of RunCLAUSE. (Actually, the interpreter creates a Generic subConcept of RunCLAUSE, in order to facilitate sharing of information between alternative paths in the parse, but we will ignore this detail in the remainder of the example.)

Since the parser has by now determined that the clause is a simple active clause, it can now transmit the proposal that NP1 is the LSUBJ of CL1. Because NP1 is an instance of a PersonNP, the interpreter can tell that it satisfies the restrictions on the LSUBJ of one of the specializations of RunCLAUSE, and thus it is a semantically plausible assignment. The interpreter fills in the LSUBJ Role of CL1 with NP1, and connects CL1 to PersonRunCLAUSE, since that is the only subConcept of RunCLAUSE which can have a PersonNP as its LSUBJ.

Finally, the parser PUSHes for an NP, resulting in a pointer to NP2, an instance of MachineNP. This is transmitted to the interpreter as the LOBJ of CL1. Since CL1 is a PersonRunCLAUSE, the taxonomy indicates that it can be either an instance of a RunRaceCLAUSE, or a RunMachineCLAUSE, or a SimpleRunCLAUSE. Since NP2 has been classified as an instance of MachineNP, it is not compatible with being the LOBJ of a RunRaceCLAUSE (whose LOBJ must be interpretable as a race). On the other hand NP2 is compatible with the restriction on the filler of the LOBJ Role of RunMachineCLAUSE.

We assume that the taxonomy indicates all the acceptable subcategories of PersonRunCLAUSE. Thus it is only semantically plausible for NP2 to fill the LOBJ Role of CL1 if CL1 is an instance of RunMachineCLAUSE. This being the case, the interpreter can join CL1 to RunMachineCLAUSE and fill its LOBJ Role with NP2, creating a new version of CL1 which it returns to the parser.

At this point, since there are no more words in the string, the parser transmits a special message to the interpreter, indicating that there are no more constituents to be added to CL1. The interpreter responds by finding the IRULEs inherited by CL1 from RunMachineCLAUSE, PersonRunCLAUSE, etc. and using the actions on those IRULEs to create the interpretation of CL1. It associates that interpretation with CL1 and returns a pointer to CL1, now a fully parsed and interpreted clause, to the parser.
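The taxonomy walk in this example can be sketched as a table-driven refinement; the dictionary below paraphrases Fig. 2-3, and a subconcept qualifies only if it carries a Role with the proposed label whose restriction the constituent's class satisfies. This is an illustration, not the PSI-KLONE classifier.

# Each subconcept maps a functional label to the class its filler must have.
TAXONOMY = {
    "RunCLAUSE":       {"PersonRunCLAUSE":  {"LSUBJ": "PersonNP"},
                        "MachineRunCLAUSE": {"LSUBJ": "MachineNP"}},
    "PersonRunCLAUSE": {"RunMachineCLAUSE": {"LOBJ": "MachineNP"},
                        "RunRaceCLAUSE":    {"LOBJ": "RaceNP"},
                        "SimpleRunCLAUSE":  {}},   # no LOBJ Role at all
}

def refine(concept, label, filler_class):
    """Subconcepts of `concept` compatible with the proposed triple."""
    return [sub for sub, roles in TAXONOMY.get(concept, {}).items()
            if roles.get(label) == filler_class]

step1 = refine("RunCLAUSE", "LSUBJ", "PersonNP")   # ['PersonRunCLAUSE']
step2 = refine(step1[0], "LOBJ", "MachineNP")      # ['RunMachineCLAUSE']
# An empty result is a semantic rejection, which makes the ATN arc
# fail and forces the parser to back up.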
3. Incremental Description Refinement

We view the cascaded interaction of syntactic analysis and semantic interpretation as implementing a recognition paradigm we refer to as incremental description refinement. In this paradigm we assume we are initially given a domain of structured objects, a space of descriptions, and rules that determine which descriptions apply to each object in the domain. (We assume that at least one description applies to each object in the domain.) As an example, consider the domain to be strings of words, the structured descriptions to be the parse trees of some grammar, and say that a parse tree applies to a string of words if the leaves in the tree correspond to the sequence of words in the string. In general we assume each description is structured, not only describing the object as a whole, but having components that describe the parts of the object and their relationship to the whole as well.

We consider a situation that corresponds to left-to-right parsing. A machine is presented with facts about an object or its parts in some specified order, such as learning the words in a string one by one in a left-to-right order. As it learns more properties of the object the machine must determine which descriptions are compatible with its current knowledge of the properties of the object and its parts. Incremental description refinement (IDR) is the process of:

o determining the set of descriptions compatible with an object known to have a given set of properties, and

o refining the set of descriptions as more properties are learned.

More precisely, for every set of properties P = {p1,...,pn} of some object O or its parts, there is an associated set of descriptions C(P), the descriptive cover of P. The descriptive cover of P consists of those descriptions which might possibly be applicable to O, given that O has the properties p1,...,pn; that is, the set of descriptions which apply to at least one object which has all the properties in P.

As one learns more about some object, the set of descriptions consistent with that knowledge shrinks. Hence, the basic step of any IDR process is to take (1) a set of properties P, and (2) its cover C(P), and (3) some extension of P into a set P', and to produce C(P') by removing inapplicable elements from C(P). The difficulty is that it is usually impractical, if not impossible, to represent C(P) extensionally: in many cases C(P) will be infinite. (For example, until the number of words in a string is learned, the number of parse trees in C(P) remains infinite no matter how many words in the string are known.) Thus, the covering set must be represented intensionally, with the consequence that "removing elements" becomes an inference process which determines the intensional representation of C(P') given the intensional representation of C(P). Note that just as any element of C(P), represented extensionally, may be structured, so may the intensional representation of C(P) be structured as well.

The trick in designing an efficient and effective IDR process is to choose a synergistic inference process/intensional representation pair. One example is the use of a discrimination tree. In such a tree each terminal node represents an individual description, and each non-terminal node represents the set of descriptions corresponding to the terminals below it. Every branch indicates a test or discrimination based on some property (or properties) of the object to be described. Each newly learned property of an object allows the IDR process to take a single step down the tree, as long as the properties are learned in an order compatible with the tree's structure. Each step thus reduces the set of descriptions subsumed.
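Under the assumption that properties arrive in an order compatible with the tree, one IDR step is just a branch selection. A toy sketch, reusing the run-clause descriptions from the earlier example:

# A discrimination tree: (test, branches); leaves are descriptions.
TREE = ("has an LOBJ?", {
    "yes": ("class of the LOBJ?", {"machine": "RunMachineCLAUSE",
                                   "race":    "RunRaceCLAUSE"}),
    "no":  "SimpleRunCLAUSE",
})

def idr_step(cover, learned_value):
    """Refine the intensional cover C(P) to C(P') from one new property."""
    _test, branches = cover
    return branches[learned_value]   # a subtree is still a cover

cover = TREE                       # C(P) for the empty property set
cover = idr_step(cover, "yes")     # learned: the clause has an LOBJ
cover = idr_step(cover, "machine") # learned: the LOBJ is a machine
print(cover)                       # 'RunMachineCLAUSE'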
The facts learned about an object are either links    between previously    known nodes, or label sets which    specify the possible labels at a single node. A    descriptive cover is simply the cross-product    of    some collection    of node label sets. The refinement    operation consists of (I) extending    the analysis to    a new node, (2) removing all incompatible labels    from adjacent nodes and (3)    propagating the    effects. Unlike the use of a discrimination    net,    constraint propagation does not require that    information about nodes be considered in some a    priori fixed order.    -    As mentioned earlier, in the RUS framework we    are attempting to refine the semantic description    of an utterance in parallel with determining its    syntactic structure. The relevant properties for    this IDR process include the descriptions of    various constituents    and their functional    relations    to their matrix (cf. Section 2). Unfortunately,    surface variations such as passive forms and dative    movement make it difficult to assume any particular    order of discovery of properties as the parser    considers    words in a left to right order. However,    the taxonomic lattice of KL-ONE can be used as a    generalization    of a discrimination    tree which is    order independent. The actual operation used in    PSI-KLONE    involves an extended notion of constraint    propagation operating on nodes in the taxonomic    lattice, and thus the resulting system has    interesting    analogies to both simpler forms of IDR    processes.    The complete algorithm for the IDR process in    PSI-KLONE is too ccmplex to cover in this paper,    and will be described in more detail in a    forthcoming report. However, the reader is urged    to return to the example in Sec. 2.2 and reconsider    320    it as an IDR process as we have described above.    Briefly, we can view the KL-ONE taxonomy of    syntactic/semantic shapes    as    a    set    of    discrimination    trees, each with a root labelled by    some syntactic phrase type. At each level in a    tree the branches make discriminations    based on the    properties of some single labelled constituent    (El, such as the LSUBJ of a CLAUSE.    The parser first proposes a phrase type such    as CLAUSE, and the IDR process determines which    tree has a root with that label. That root beccmes    the current active node in the IDR process. All    further refim    isone    within the subtree    dominated by an active node.    As the parser    proposes and transmits new LC's to the IDR, the IDR    may respond in one of two ways:    1. it may reject the LC because it is not    compatiblewith any branch below the    currently active node(s), or    2. it may accept the LC, and replace the    current -58376 node(s) with the (set of)    node(s) which can be reached by branches    whose discriminations    are compatible    with    the LC.    4. The IDR Process and Knowledge Representation    We-have-    identified    four    critical    characteristics of any general representation    scheme that can support an IDR process in which    descriptions are    structured and    covering    descriptions are represented intensionally. In    such a scheme it must be possible to efficiently    infer from the representation:    1. what properties of a structured object    provide sufficient information to    guarantee the    applicability of    a    description to (some portion of) that    object - i.e., criteriality    conditions,    2. 
4. The IDR Process and Knowledge Representation

We have identified four critical characteristics of any general representation scheme that can support an IDR process in which descriptions are structured and covering descriptions are represented intensionally. In such a scheme it must be possible to efficiently infer from the representation:

1. what properties of a structured object provide sufficient information to guarantee the applicability of a description to (some portion of) that object, i.e., criteriality conditions,

2. what mappings are possible between classes of relations, e.g. how functional relationships between syntactic constituents map onto semantic relationships,

3. which pairs of descriptions are mutually incompatible, i.e., cannot both apply to a single individual, and

4. which sub-categorizations of descriptions are exhaustive, i.e., at least one of the sub-categories applies to anything to which the more general description applies.

Our analysis of the assumptions implicit in the current implementation of PSI-KLONE has led us to an understanding of the importance of these four points in an IDR. By making these four points explicit in the next implementation we expect to be able to deal with a larger class of phenomena than the current system handles. In the following sections we illustrate these four points in terms of the behavior of the current version of PSI-KLONE and the improvements we expect to be able to make with more types of information explicit.

4.1. Criteriality Conditions

The point here is an obvious one, but bears repeating. If a taxonomy is to be used for recognition, then there must be some way, based on partial evidence, to get into it at the right place for the recognition (IDR) process to begin. That is, for any ultimately recognizable phrase there must be at least one criterial condition, i.e. a collection of facts which is sufficient to ensure the applicability of some particular description. In the syntactic/semantic taxonomy, the criterial condition for a phrase is often the properties of belonging to a particular syntactic category (e.g., noun phrase, clause, etc.) and having a particular lexical item as head. Recalling the example given in Section 2.2, the evidence that the input had the shape of a CLAUSE and had the verb "run" as its head constituted sufficient conditions to enter the taxonomy at the node RunCLAUSE, i.e., a specialization of CLAUSE whose head is filled by the verb "run". Without the notion of criterial properties, we cannot ensure the applicability of any description and therefore have no way of continuing the recognition process.

4.2. Mapping Syntactic to Semantic Relations

In RUS, the parser intermittently sends messages to the interpreter asking whether it is semantically plausible for a constituent to fill a specified functional role. The interpreter's ability to answer this question comes from its RMRULEs and their organization. This is based on the assumption that a potential constituent can fill some functional role in the matrix phrase if and only if it also fills a semantic role compatible with:

o that functional role,

o the interpretation of that constituent,

o the head of that matrix phrase,

o the roles filled by the other constituents of that phrase, and

o other syntactic/semantic properties of that phrase and its constituents.

With respect to the first of these points, one effective way of representing the compatibility restrictions between syntactic and semantic relations derives from the fact that each purely syntactic relation can be viewed as an abstraction of the syntactic properties shared by some class of semantic relations (i.e., that they have syntactically identical arguments).
If

1. a general frame-like system is used to represent the system's syntactic/semantic knowledge,

2. possible syntactic and semantic relations are represented therein as "slots" in a frame, and

3. there is an abstraction hierarchy among slots (the Role hierarchy in KL-ONE), as well as the more common IS-A hierarchy among frames (the SUPERC link between concepts in KL-ONE),

then the interpreter can make use of this abstraction hierarchy in answering questions from the parser.

As an example, consider a question from the parser, loosely translatable as "Can the PP 'on Sunday' be a PP-modifier of the NP being built whose head is 'party'?" (see Fig. 4-1).

[Figure 4-1: A Simple NP Taxonomy]

We assume the NP headed by "party" has already been classified by the interpreter as a CelebrationNP. As indicated in Fig. 4-1 this concept inherits from EventNP two specializations of the general PP-modifier relation applicable to NP: location-PP-modifier and time-PP-modifier. (In this, as in Fig. 4-1, we assume for simplicity that only these two semantic relations are consistent with the syntactic relation PP-modifier for an NP whose head is "party".) Thus "on Sunday" can be one of its PP-modifiers iff it can be either its location-PP-modifier or its time-PP-modifier. The next section will discuss how that decision can be made. The point here is that there must be some indication of which syntactic relations can map onto which semantic ones, and under what circumstances. An abstraction hierarchy among Roles provides one method of doing so.

4.3. Explicit compatibility/incompatibility annotations

As noted above, the semantic interpreter must be able to decide if the interpretation assigned to the already parsed constituent is compatible with the type restrictions on the arguments of a semantic relation. For example, the PP "on Sunday" can be a PP-modifier of an NP whose Head is "party" if it is compatible with being either a time-PP, and hence capable of instantiating the relation time-PP-modifier, or a location-PP, and hence instantiating the relation location-PP-modifier.

There are two plausible strategies for formalizing the somewhat informal notion of compatibility:

1. a constituent is judged compatible with a restriction if its syntactic/semantic shape (and hence interpretation) guarantees consistency with the type restrictions, or

2. it is judged compatible if its interpretation does not guarantee inconsistency.

Consider the problem of rejecting "on Sunday" as a location-PP-modifier. Conceivably one could reject it on the grounds that "Sunday" doesn't have a syntactic/semantic shape that guarantees that it is a location-NP. This is essentially the strategy followed by the current version of PSI-KLONE. More specifically, the PSI-KLONE system searches along the superC cables of a constituent to find just those semantic relations which are guaranteed to be compatible with the interpretation of the constituent and matrix.
However, that strategy would have to reject "birthday present" as being compatible with apparel-NP (thereby rejecting "Mary wore her birthday present to New York"), vehicle-NP (thereby rejecting "Mary drove her birthday present from Boston to Philadelphia"), animate-NP (thereby rejecting "Mary fed her birthday present some Little Friskies"), etc. Thus, we believe that future systems should incorporate the second strategy, at least as a fall-back when no interpretation is found using only the first strategy. This strategy also makes it easier for the system to handle pronouns and other semantically empty NPs (e.g. "thing", "stuff", etc.) whose syntactic/semantic shapes guarantee almost nothing, but which are compatible with many semantic interpretations.

The implication here for both language processing and knowledge representation is that:

1. incompatibility must be marked explicitly in the representation, and

2. the most useful strategy for determining compatibility involves not being able to show explicit incompatibility.

One caveat and one further observation: this strategy is not by itself effective in certain cases of metonymy, which Webster's defines as "the use of the name of one thing for that of another associated with or suggested by it." For example, semantics would reject "the hamburger" as the subject of a clause like "the hamburger is getting impatient", which might occur in a conversation between a waiter and a short-order cook. (We assume something like "hamburger" being an instance of definite-food-NP, which is marked as incompatible with animate-NP, the restriction on the subject of "impatient".) However, the taxonomy would be able to provide the information needed to resolve the metonymy, since it would indicate that "the hamburger" is possibly being used metonymously to refer to some discourse entity which is both describable by an animate-NP and associated with some (unique) hamburger.

The observation concerns the way in which semantic interpretation was done in LUNAR [10], which was to judge semantic compatibility solely on the basis of positive syntactic/semantic evidence. A semantic interpretation rule could only be applied to a fragment of a parse tree if the rule's left-hand side, a syntactic/semantic template, could be matched against the fragment. The only kinds of semantic constraints actually used in the LUNAR templates were predicates on the head of some tree constituent, e.g. that the head of the NP object of a PP constituent were of class element, rock, etc. Given this restriction, LUNAR would not be able to handle an utterance like "give me analyses of aluminum in NASA's gift to the Royal Academy", where clearly "gift to the Royal Academy" is not incompatible with rock.
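The second strategy can be sketched with an explicit incompatibility table; the markings below are invented, following the hamburger example:

# Pairs of descriptions explicitly marked as mutually incompatible.
INCOMPATIBLE = {frozenset(("definite-food-NP", "animate-NP"))}

def compatible(description, restriction):
    """Strategy 2: compatible unless provably incompatible."""
    return frozenset((description, restriction)) not in INCOMPATIBLE

# "Mary fed her birthday present some Little Friskies": accepted,
# since nothing marks a birthday present as non-animate.
print(compatible("birthday-present-NP", "animate-NP"))   # True
# "The hamburger is getting impatient": rejected (absent a metonymy
# mechanism), because the incompatibility is explicitly marked.
print(compatible("definite-food-NP", "animate-NP"))      # False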
4.4. Explicit marking of exhaustive sub-categorization in the taxonomy

The algorithm we have developed for incremental description refinement requires that the IDR process be able to distinguish exhaustive from non-exhaustive sub-categorization in the taxonomy of syntactic/semantic shapes. Exhaustiveness marking plays a role similar to that played by inclusive or in a logical framework. That is, it justifies the application of case-analysis techniques to the problem of determining if a proposed constituent is compatible with a given syntactic role. The interpreter is justified in rejecting a proposed label for a constituent only if it has considered all possible ways in which it can correspond to a semantic relation.

Exhaustiveness marking also makes it possible to infer positive information from negative information, as was done in the example in section 2.2. There, the interpreter inferred that the clause was a RunMachineCLAUSE, because it was known to be a PersonRunCLAUSE and the proposed LOBJ was incompatible with it being a RunRaceCLAUSE. Such reasoning is justified only if the subcategories RunMachineCLAUSE, SimpleRunCLAUSE and RunRaceCLAUSE exhaust the possibilities under PersonRunCLAUSE.

These types of inference do not always come up in systems that are primarily used to reason about inherited properties and defaults. For example, as long as one knows that DOG and CAT are both specializations of PET, one knows what properties they inherit from PET. It is irrelevant to an inheritance process whether there are any other kinds of PET such as MONKEY, BOA-CONSTRICTOR or TARANTULA.

Many formalisms, including KL-ONE, do not require the sub-categorization of a node to be exhaustive. So there are two options vis-a-vis the way exhaustiveness can be indicated. A recognition algorithm can act as if every node were exhaustively sub-categorized, a type of Closed World Assumption [8]; this is essentially the way
5. Conclusion

The approach we have taken in RUS is midway between completely decoupled syntactic and semantic processing and the totally merged processing that is characteristic of semantic grammars. RUS has already proven the robustness of this approach in several different systems, each using different knowledge representation techniques for the semantic component. The RUS grammar is a substantial and general grammar for English, more extensive than the grammar in the LUNAR system [10]. Although the grammar is represented as an ATN, we have been able to greatly reduce the backtracking that normally occurs in the operation of an ATN parser, allowing RUS to approach the performance of a "deterministic" parser [2]. With the aid of a "grammar compiler" [5] this makes it possible to achieve parsing times on the order of .X CPU seconds, on a DEC KL10, for twenty word sentences.

In this paper we have focused on the latest embodiment of the RUS framework in the PSI-KLONE system -- in particular on the nature of its cascaded syntactic/semantic interactions and the incremental description refinement process they support. We believe that what we are learning about such cascaded structures and IDR processes in building PSI-KLONE is of value for the design of both natural language systems and knowledge representation systems.

ACKNOWLEDGEMENTS

Our work on the PSI-KLONE system has not been done in vacuo -- our colleagues in this research effort include Ed Barton, Madeleine Bates, Ron Brachman, Phil Cohen, David Israel, Hector Levesque, Candy Sidner and Bill Woods. We hope this paper reflects well on our joint effort.

The authors wish to thank Madeleine Bates, Danny Bobrow, Ron Brachman, David Israel, Candy Sidner, Brian Smith, and Dave Waltz for their helpful comments on earlier versions of this paper. Our special thanks go to Susan Chase, whose gourmet feasts and general support made the preparation of this paper much more enjoyable than it might otherwise have been.

REFERENCES

[1] Bobrow, R. J. The RUS System. BBN Report 3878, Bolt Beranek and Newman Inc., 1978.
[2] Bobrow, R. J. & Webber, B. L. PSI-KLONE - Parsing and Semantic Interpretation in the BBN Natural Language Understanding System. In CSCSI/SCEIO Annual Conference, 1980.
[3] Brachman, R. J. On the Epistemological Status of Semantic Networks. In Findler, Nicholas V. (editor), Associative Networks - The Representation and Use of Knowledge in Computers, Academic Press, New York, 1979.
[4] Brachman, R. J. An Introduction to KL-ONE. In Brachman, R. J., et al. (editors), Research in Natural Language Understanding, Annual Report (1 Sept. 78 - 31 Aug. 79), Bolt Beranek and Newman Inc., Cambridge, MA, 1980.
[5] Burton, R. & Woods, W. A. A Compiling System for Augmented Transition Networks. In COLING 76, Sixth International Conference on Computational Linguistics, Ottawa, Canada, June, 1976.
[6] Burton, R. & Seely Brown, J. Semantic Grammar: A Technique for Constructing Natural Language Interfaces to Instructional Systems. BBN Report 3587, Bolt Beranek and Newman Inc., May, 1977.
[7] Mark, W. S. & Barton, G. E. The RUSGrammar Parsing System. Report 3243, General Motors Research Laboratories, 1980.
[8] Reiter, R. Closed World Data Bases. In Gallaire, H. & Minker, J. (editors), Logic and Data Bases, Plenum Press, 1978.
[9] Woods, W. A. Transition Network Grammars for Natural Language Analysis. CACM 13(10), October, 1970.
[10] Woods, W. A., Kaplan, R. M. & Nash-Webber, B. The Lunar Sciences Natural Language Information System: Final Report. BBN Report 2378, Bolt Beranek and Newman Inc., June, 1972.
[11] Woods, W. A. Taxonomic Lattice Structures for Situation Recognition. In Theoretical Issues in Natural Language Processing-2, ACL and SIGART, July, 1978.
[12] Woods, W. A. Cascaded ATN Grammars. Amer. J. Computational Linguistics 6(1), Jan.-Mar., 1980.
| 1980 | 60 |
57 |
LANGUAGE AND MEMORY: GENERALIZATION AS A PART OF UNDERSTANDING

Michael Lebowitz
Department of Computer Science
Yale University, P.O. Box 2158
New Haven, Connecticut 06520

ABSTRACT

This paper presents the Integrated Partial Parser (IPP), a computer model that combines text understanding and memory of events. An extended example of the program's ability to understand newspaper stories and make generalizations that are useful for memory organization is presented.

INTRODUCTION

Memory of specific events has not been a serious part of previous Artificial Intelligence language understanding systems. Most understanding systems have been designed to simply parse natural language into an internal representation, use the representation for output tasks, and then discard it. This has been true even for systems primarily concerned with creating high-level story representations in terms of structures such as scripts [2] [3] [6], plans and goals [6] [7], frames [1] and other such structures, as well as more syntactically oriented systems.

The Integrated Partial Parser (IPP) is an understanding program that addresses the problems of integrating parsing and memory updating. By making use of memory, IPP is able to achieve a high level of performance when understanding texts that it has not been specially prepared for. IPP has been designed to read and remember news stories about international terrorism taken directly from newspapers and the UPI news wire.

IPP performs six different understanding tasks in a single, integrated process. These tasks include the addition of new events to long-term memory, being reminded of previous stories, noticing the interesting aspects of stories, making generalizations to improve the quality of its world knowledge, predicting likely future events, and, of course, parsing stories from natural language into an internal representation.

In this paper, I will mention IPP's parsing abilities only in passing. The interested reader is referred to [4] for more details of the parsing process. Here I will present the program's ability to remember events and make generalizations about the stories it reads.

MEMORY IN UNDERSTANDING

The addition of information to long-term memory is an integral part of IPP's operation. Furthermore, the memory update process actively changes the structure of memory by noticing similarities among events, creating new memory structures based upon generalizations about these events, and using these new structures in storing events.

In order to illustrate IPP's memory, I will present three stories taken directly from newspapers and show how IPP incorporates them into memory. The stories all describe kidnappings that took place in Italy.

(S1) Boston Globe, 5 February 79, Italy
Three gunmen kidnapped a 67 year-old retired industrialist yesterday outside his house near this north Italian town, police said.

(S2) New York Times, 15 April 79, Italy
A building contractor kidnapped here on Jan. 17 was released last night after payment of an undisclosed ransom, the police said.

(S3) New York Times, 25 June 79, Italy
Kidnappers released an Italian shoe manufacturer here today after payment of an undisclosed ransom, the police said.

After a person has read these three stories he would have undoubtedly drawn some conclusions about kidnapping in Italy.

This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored under the Office of Naval Research under contract N00014-75-C-1111.
The similar nature of the victims - all businessmen of one sort or another - is immediately apparent. In what follows I will show the actual generalizations IPP made upon reading this sequence of stories. The output that follows is from IPP's operation on the three stories above.

Story: S1 (2 5 79) ITALY
--------------------
(THREE GUNMEN KIDNAPPED A 67 YEAR-OLD RETIRED INDUSTRIALIST YESTERDAY OUTSIDE HIS HOUSE NEAR THIS NORTH ITALIAN TOWN POLICE SAID)

*** Parsing and incremental memory access ***

>>> Beginning final memory incorporation . . .

Story analysis: EV1 (S-MOP = S-EXTORT)
  HOSTAGES
    AGE              OLD
    OCCUPATION-TYPE  RETIRED
    ROLE             BUSINESSMAN
    POL-POS          ESTAB
    GENDER           MALE
  ACTOR
    NUMBER           3
  METHODS
    SCRIPT           SKIDNAP
  NATION
    LOCATION         ITALY

Indexing EV1 as variant of S-EXTORT

>>> Memory incorporation complete

Stories are represented in IPP by MOPs (Memory Organization Packets) [5]. MOPs were designed to represent events in memory for the purpose of maintaining a permanent episodic memory, and are thus particularly well suited to our needs. S1 is represented by S-EXTORT, a simple MOP (S-MOP) that is intended to capture the common aspects of extortion, and SKIDNAP, a script that represents the low-level events of a kidnapping.

IPP's long term memory is organized around S-MOPs, structures that represent generalities among scripts, and the ways in which different stories vary from the S-MOPs. The first step in adding a new story to memory is to extract the various features of each instantiated S-MOP. These features consist of the scripts that are instantiated subordinate to the S-MOP (methods and results, typically), and features from each of the role fillers (such as the actor and victim). The output above shows the features that IPP accumulated about the S-EXTORT S-MOP as it read S1. Since there were no similar events in memory when IPP read this story, its addition to memory consisted simply of indexing it under S-EXTORT using each of the features as an index.

Now consider the way IPP reads S2. The collection of the features of the S-EXTORT occurs during parsing of the story.
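The indexing step just described can be sketched as follows. The data structures are invented for illustration (IPP itself was not written this way); features are written as (role, attribute, value) triples like those in the trace above.

    # A sketch of feature-based indexing of an event under an S-MOP.
    # Indexing an event under each of its features also turns up any
    # previously stored events sharing those features -- the raw
    # material for reminding and generalization.

    from collections import defaultdict

    class SMOP:
        def __init__(self, name):
            self.name = name
            self.index = defaultdict(list)   # feature -> events indexed by it

        def incorporate(self, event, features):
            remindings = set()
            for f in features:
                remindings.update(self.index[f])
                self.index[f].append(event)
            return remindings

    s_extort = SMOP("S-EXTORT")

    EV1 = {("HOSTAGES", "ROLE", "BUSINESSMAN"),
           ("HOSTAGES", "GENDER", "MALE"),
           ("METHODS", "SCRIPT", "SKIDNAP"),
           ("NATION", "LOCATION", "ITALY")}

    print(s_extort.incorporate("EV1", EV1))   # set(): nothing to be reminded of yet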
Story: S2 (4 15 79) ITALY
--------------------
(A BUILDING CONTRACTOR KIDNAPPED HERE ON JAN 17 WAS RELEASED LAST NIGHT AFTER PAYMENT OF AN UNDISCLOSED RANSOM THE POLICE SAID)

*** Parsing and incremental memory access ***

>>> Beginning final memory incorporation . . .

Story analysis: EV3 (S-MOP = S-EXTORT)
  RESULTS
    SCRIPT    SS-GET-RANSOM
    SCRIPT    SS-RELEASE-HOSTAGES
  HOSTAGES
    ROLE      BUSINESSMAN
    POL-POS   ESTAB
    GENDER    MALE
  METHODS
    SCRIPT    SKIDNAP
  NATION
    LOCATION  ITALY

Creating more specific S-EXTORT (SpM1) from events EV3 EV1 with features:
  HOSTAGES
    ROLE      BUSINESSMAN
    POL-POS   ESTAB
    GENDER    MALE
  METHODS
    SCRIPT    SKIDNAP
  NATION
    LOCATION  ITALY

Reminded of: EV1 (during spec-MOP creation)
[EV1 is from S1]

>>> Memory incorporation complete

While reading S2, IPP adds it to long-term memory. However, as it uses the indexing procedure described above, it naturally finds S1, which is already indexed under S-EXTORT in many of the same ways as S2 would be. Thus IPP is reminded of S1, and what is more, since there are such a large number of features in common, IPP makes a tentative generalization that these features normally occur together. Informally speaking, IPP has concluded that the targets of kidnappings in Italy are frequently businessmen.

The new generalization causes the creation of a new node in memory, known as a spec-MOP, to be used to remember events that are examples of the generalization. A spec-MOP is simply a new MOP, equivalent in kind to the S-MOPs that IPP starts with, that is used to embody the generalizations that have been made. The events that went into the making of the generalization are then indexed off of the spec-MOP by any additional features they may have.

Story: S3 (6 25 79) ITALY
--------------------
(KIDNAPPERS RELEASED AN ITALIAN SHOE MANUFACTURER HERE TODAY AFTER PAYMENT OF AN UNDISCLOSED RANSOM THE POLICE SAID)

Processing:
KIDNAPPERS : Interesting token - KIDNAPPERS
Instantiated SKIDNAP -- S-EXTORT

>>> Beginning memory update . . .

New features: EV5 (S-EXTORT)
  METHODS
    SCRIPT    SKIDNAP
  NATION
    LOCATION  ITALY

Best existing S-MOP(s) -- SpM1
[the spec-MOP just created]

Predicted features (from SpM1)
  HOSTAGES
    GENDER    MALE
    POL-POS   ESTAB
    ROLE      BUSINESSMAN

>>> Memory update complete

. . . [rest of the parsing process]

>>> Beginning final memory incorporation . . .

Story analysis: EV5 (S-MOP = S-EXTORT)
  RESULTS
    SCRIPT       SS-GET-RANSOM
    SCRIPT       SS-RELEASE-HOSTAGES
  HOSTAGES
    NATIONALITY  ITALY
    ROLE         BUSINESSMAN
    POL-POS      ESTAB
    GENDER       MALE
  METHODS
    SCRIPT       SKIDNAP
  NATION
    LOCATION     ITALY

Creating more specific S-EXTORT (SpM2) than SpM1 from events EV5 EV3 with features:
  RESULTS
    SCRIPT    SS-GET-RANSOM
    SCRIPT    SS-RELEASE-HOSTAGES
  HOSTAGES
    ROLE      BUSINESSMAN
    POL-POS   ESTAB
    GENDER    MALE
  METHODS
    SCRIPT    SKIDNAP
  NATION
    LOCATION  ITALY

Reminded of: EV3 (during spec-MOP creation)
[EV3 is from S2]

>>> Memory incorporation complete

As it finishes reading S3, IPP completes its addition of the new event to memory. The processing is basically the same as that we saw for S2, but instead of beginning the updating process with the basic S-EXTORT S-MOP, IPP has already decided that the new story should be considered as a variant of the first spec-MOP created. IPP is able to create another new spec-MOP from S3, this one including all the features of the first spec-MOP, plus the RELEASE-HOSTAGES and GET-RANSOM results of S-EXTORT. IPP has noticed that these are frequent results of kidnappings of businessmen in Italy.
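The generalization step can be sketched as below, continuing the toy feature representation from the earlier fragment; the threshold and data structures are invented stand-ins, not IPP's own.

    # A sketch of spec-MOP creation: when a new event shares many features
    # with a remembered one, the shared features become a new, more specific
    # MOP, and the events are indexed under it by their remaining features.

    GENERALIZATION_THRESHOLD = 4   # invented stand-in for "a large number"

    def maybe_create_spec_mop(name, parent, ev_a, feats_a, ev_b, feats_b):
        shared = feats_a & feats_b
        if len(shared) < GENERALIZATION_THRESHOLD:
            return None
        return {"name": name,
                "parent": parent,
                "features": shared,                  # the tentative generalization
                "index": {ev_a: feats_a - shared,    # events kept by their
                          ev_b: feats_b - shared}}   # distinguishing features

    EV1 = {("HOSTAGES", "ROLE", "BUSINESSMAN"), ("HOSTAGES", "GENDER", "MALE"),
           ("HOSTAGES", "POL-POS", "ESTAB"), ("METHODS", "SCRIPT", "SKIDNAP"),
           ("NATION", "LOCATION", "ITALY"), ("HOSTAGES", "AGE", "OLD")}
    EV3 = {("HOSTAGES", "ROLE", "BUSINESSMAN"), ("HOSTAGES", "GENDER", "MALE"),
           ("HOSTAGES", "POL-POS", "ESTAB"), ("METHODS", "SCRIPT", "SKIDNAP"),
           ("NATION", "LOCATION", "ITALY"), ("RESULTS", "SCRIPT", "SS-GET-RANSOM")}

    spm1 = maybe_create_spec_mop("SpM1", "S-EXTORT", "EV3", EV3, "EV1", EV1)
    print(sorted(spm1["features"]))   # the five shared features of SpM1 above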
So after having read these three stories, IPP has begun to create a model of kidnapping in Italy. This is the sort of behavior displayed by interested human readers. It is also the kind of behavior that cannot be captured by an understanding system that does not involve long-term memory.

CONCLUSION

The inclusion of long-term memory as a part of IPP has been a key factor in allowing the program to be a powerful, robust understanding system. (To date it has a vocabulary of about 3200 words and has processed over 500 stories from newspapers and the UPI news wire, producing an accurate representation for 60 - 70%.) Furthermore, the addition of memory makes possible the inclusion of many familiar language-related phenomena, such as reminding and generalization, into a model of understanding.

As shown in the examples in this paper, memory updating can be implemented as a natural part of language processing. In fact, without it the ability of a program to fully understand a story is doubtful, since it cannot improve its knowledge of the world and adapt its processing to fit the knowledge it has obtained - important measures of the human understanding ability. A program without episodic memory can never conclude that Italian kidnappings are often against businessmen, or any of a myriad of other generalizations that people make all the time. And that is a crucial part of understanding.

REFERENCES

[1] Charniak, E. On the use of framed knowledge in language comprehension. Research Report #137, Department of Computer Science, Yale University, 1978.
[2] Cullingford, R. Script application: computer understanding of newspaper stories. Research Report #116, Department of Computer Science, Yale University, 1978.
[3] DeJong, G. F. (1979) Skimming stories in real time: An experiment in integrated understanding. Research Report #158, Department of Computer Science, Yale University.
[4] Lebowitz, M. Reading with a purpose. In Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, San Diego, CA, 1979.
[5] Schank, R. C. Reminding and memory organization: An introduction to MOPs. Research Report #170, Department of Computer Science, Yale University, 1979.
[6] Schank, R. C. and Abelson, R. P. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1977.
[7] Wilensky, R. Understanding Goal-Based Stories. Research Report #140, Department of Computer Science, Yale University, 1978.
| 1980 | 61 |
58 |
FAILURES IN NATURAL LANGUAGE SYSTEMS: APPLICATIONS TO DATA BASE QUERY SYSTEMS

Eric Mays
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT

A significant class of failures in interactions with data base query systems are attributable to misconceptions or incomplete knowledge regarding the domain of discourse on the part of the user. This paper describes several types of user failures, namely, intensional failures of presumptions. These failures are distinguished from extensional failures of presumptions since they are dependent on the structure rather than the contents of the data base. A knowledge representation has been developed for the recognition of intensional failures that are due to the assumption of non-existent relationships between entities. Several other intensional failures which depend on more sophisticated knowledge representations are also discussed. Appropriate forms of corrective behavior are outlined which would enable the user to formulate queries directed to the solution of his/her particular task and compatible with the knowledge organization.

I INTRODUCTION

An important aspect of natural language interaction with intelligent systems is the ability to deal constructively with failure.* Failures can be viewed as being of two types. One can be ascribed to a lack of syntactic, semantic, or pragmatic coverage by the system. This will be termed system failure, and manifests itself in the inability of the system to assign an interpretation to the user's input. Recent work has been done in responding to these types of failures; see, for example, Weischedel and Black [8], and Kwasny and Sondheimer [3]. A second class of failures may be termed user failures. A user failure arises when his/her beliefs about the domain of discourse diverge from those of the system.**

*This work is partially supported by a grant from the National Science Foundation, NSF-MCS 79-08401.

**Some user beliefs regarding the domain of discourse are implicitly encoded in questions posed to the system. The beliefs held by the system are explicit in its knowledge representation, either procedurally or declaratively.

To avoid confusion, a clear distinction should be made between failures and errors. An error occurs when the system's response to an input is incorrect. Errors generally manifest themselves as incorrect resolution of ambiguities in word sense or modifier placement. These errors would usually be detected by the user when presented with a paraphrase that differs in a meaningful way from the original input [6]. More serious errors result from incorrect coding of domain knowledge, and are often undetectable by the user.

This paper concerns itself with the recognition and correction of user failures in natural language data base query systems -- in particular, failures that arise from the user's beliefs about the structure, rather than the content, of the data base. The data base model that has been implemented for the recognition and correction of simple user failures about the data base structure is presented. Several other failures which depend on more sophisticated knowledge representation are also discussed.
II PRESUPPOSITION AND PRESUMPTION

The linguistic notion of presupposition provides a formal basis for the inference of a significant class of user beliefs. There is a less restrictive notion, presumption, which allows the inference of a larger class of user beliefs, namely, that knowledge which the user must assume when posing a question.

A presupposition is a proposition that is entailed by all the direct answers of a question.*** A presumption is either a presupposition or it is a proposition that is entailed by all but one of the direct answers of a question [2]. Hence, presupposition is a stronger version of presumption, and a presupposition is a presumption by definition. For example, question (1a) has several direct answers such as "John", "Sue", etc., and, of course, "no one". Proposition (1b) is entailed by all the direct answers to (1a) except the last one, i.e., "no one". Therefore, (1b) is a presumption of (1a). Proposition (1d) is a presupposition of (1c), since it is entailed by all of the question's direct answers.

***The complete definition of presupposition includes the condition that the negation of a question, direct answer pair entails the presupposition.

(1a) Which faculty members teach CSE110?
(1b) Faculty members teach CSE110.
(1c) When does John take CSE110?
(1d) John takes CSE110.

Presumptions can be classified on the basis of what is asserted -- i.e., an "intensional" statement about the structure of the data base or an "extensional" statement about its contents. Thus an extensional failure of a presumption occurs based on the current contents of the data base, while an intensional failure occurs based on the structure or organization. For example, question (2a) presumes propositions (2b), (2c), and (2d). Presumption (2b) is subject to intensional failure if the data base does not allow for the relation "teach" to hold between "faculty" and "course" entities. An extensional failure of presumption (2b) would occur if the data base did not contain any "faculty member" that "teaches" a "course". Also note that the truth of (2b) is a pre-condition for the truth of (2c).

(2a) Which faculty members teach CSE110?
(2b) Faculty members teach courses.
(2c) Faculty members teach CSE110.
(2d) CSE110 is a course.

Although a presumption which fails intensionally will of necessity fail extensionally, it is important to differentiate between them, since an intensional failure will occur consistently for a given data base structure, whereas extensional failure is a transitory function of the current contents of the data base. This is not meant to imply that a data base structure is not subject to change. However, such a change usually represents a fundamental modification of the organization of the enterprise that is modelled. One can observe that structural modifications occur over long periods of time (many months to years, for example), while the data base contents are subject to change over relatively shorter periods of time (hourly, daily, or monthly, for example).
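To make the distinction concrete, here is a small sketch with an invented university schema: an intensional check consults only the data base structure, while an extensional check consults its current contents. The schema, contents, and query representation are all assumptions made for illustration.

    # A sketch of the intensional/extensional distinction for a presumption
    # that relation <rel> holds between two entity sets.

    SCHEMA = {("faculty", "teach", "course"),
              ("student", "take", "course")}

    CONTENTS = {("take", "John", "CSE110")}   # (relation, subject, object) rows

    def check_presumption(subj_set, rel, obj_set, subj=None, obj=None):
        if (subj_set, rel, obj_set) not in SCHEMA:
            return "intensional failure"       # fails for any data base state
        for (r, s, o) in CONTENTS:
            if r == rel and (subj is None or s == subj) and (obj is None or o == obj):
                return "holds"
        return "extensional failure"           # a function of current contents

    print(check_presumption("faculty", "take", "course"))                # intensional failure
    print(check_presumption("faculty", "teach", "course"))               # extensional failure
    print(check_presumption("student", "take", "course", obj="CSE110"))  # holds

The intensional verdict is stable until the schema itself changes, which is exactly why the paper argues it deserves a different corrective response than the transitory extensional case.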
Kaplan [2] has investigated the computation and correction of extensional failures of presumptions. The approach taken there involves accessing the contents of the data base to determine if a presumption has a non-empty extension. The remainder of this paper discusses several ways a presumption might be subject to intensional failure. These inferences are made from the structural information of the data base.

III DATA BASE MODEL

In order to recognize failures of presumptions concerning the structure of the data base, it is necessary to use a robust data model. The discussion here will assume a data base model similar to that proposed by Lee and Gerritsen [4], which incorporates the generalization dimension developed by Smith and Smith [7] into the entity-relationship model [1]. Basically, entities participate in relationships along two orthogonal dimensions, aggregation (among dissimilar entities) and generalization (among similar entities), as well as having attributes that assume values. As an example of this type of structure, consider the data base model fragment for a typical university in figure 1. Entity sets are designated by ovals, aggregation relationships by diamonds, and generalization relationships by edges from the super-entity set to the sub-entity set. The parallel arcs denote mutual exclusion.

Mutual exclusion is used to infer the difference between "men that are also faculty" (a possibly non-empty set) and "men that are also women" (an empty set by definition), given figure 1 for example. This distinction can be made by prohibiting the traversal of a path in the data model that includes two entity sets which are mutually exclusive. Furthermore, the path in the generalization dimension is restricted to "upward" traversals followed by "downward" traversals. An upward (downward) traversal is from a sub-entity (super-entity) set to a super-entity (sub-entity) set. This restriction is made to prevent over-specialization of an entity set when traversing downward edges. The set of inferences that can be made in the presence of this restriction is not overly constrained, since any two entity sets that have a common intersection (sub-entity set) will also have a common union (super-entity set).*

*See [5] for a more detailed description.

IV INTENSIONAL FAILURES

A. Non-existent Relationships

The most basic intensional failure that can occur is the presumption of a non-existent relationship between entity sets. In the university data base model fragment given above, such a failure occurs in the question "Which faculty take courses?". This question presumes that a "take" relationship could exist between "faculty" and "courses" entities. Since no such relationship can be established, that presumption has failed intensionally. Recognizing the failure is only part of the problem -- it is also useful to provide the user with related intensional knowledge. Given a relation R, entities X and Y, and a failed presumption (R X Y), salient intensional knowledge can be found by abstracting on R, X, or Y to create a new relation. For example, consider the following exchange:

Q: "Which faculty take courses?"
A: "I don't believe that faculty can take courses. Faculty teach courses. Students take courses."
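The corrective behavior in this exchange can be sketched as follows; the model fragment and the matching rule (abstracting on exactly one of R, X, Y) are illustrative assumptions, not the implemented representation.

    # A sketch of corrective response generation: when a presumed
    # relationship (R X Y) cannot be established, abstract on exactly one of
    # R, X, or Y to find the related relationships the response should cite.

    RELATIONSHIPS = {("faculty", "teach", "course"),
                     ("student", "take", "course"),
                     ("faculty", "advise", "student")}

    def check_and_correct(x, r, y):
        if (x, r, y) in RELATIONSHIPS:
            return "presumption holds intensionally"
        return {"failed": (x, r, y),
                "related": [(a, b, c) for (a, b, c) in RELATIONSHIPS
                            if [a == x, b == r, c == y].count(True) == 2]}

    print(check_and_correct("faculty", "take", "course"))
    # failed: (faculty, take, course); related: (faculty, teach, course) and
    # (student, take, course) -- the facts cited in the answer above
    # (set iteration order may vary)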
A similar failure occurs in the presumption of a non-existent attribute of an entity set. For example, "What is the cost of all courses taught by teaching assistants?" incorrectly presumes that in this data base "cost" is an attribute of "courses".

B. Inapplicable Functions

Intensional failures may also occur when attempting to apply a function on a domain. The question "What is the average grade in CSE110?" will cause no processing problems provided grades are assigned over the real numbers. But if grades ranged from A to F, then the system should inform the user that averages can not be performed on character data. (Note that the clever system designer might trap this case and make numerical assignments to the letter grades.) A more significant aspect is the notion of a function being meaningful over a particular domain. That is, certain operations, even though they might be applicable, may not be meaningful. An example would be "average social security number". The user who requested such a computation does not really understand what the data is supposed to represent. In such cases a short explanation regarding the function of the data would be appropriate. To achieve this type of behavior, of course, the data base model must be augmented to include type and functional information.

C. Higher Order Failures

The mutual exclusion operator allows the detection of a failure when the question specifies a restriction of an entity set by any two of its mutually exclusive sub-entity sets. For example, "Which teachers that advise students take courses?" presumes that there could be some "teachers" that are both "faculty" and "students". Since this situation could never arise, given the structure in figure 1, it should be communicated to the user as an intensional failure. If an exhaustiveness operator is incorporated as well, unnecessary restrictions of an entity set by disjunction of all of its exhaustive sub-entity sets can be detected. Although this would not constitute a failure, it does indicate that there is some misconception regarding the structure of the data base on the part of the user. If the sub-entity sets were known to be exhaustive by the user, there would be no reason to make the restriction. As an example, the addition of the fact that "grads" and "undergrads" were exhaustive sub-entity sets of "students" would cause this misconception to arise in the question "Which students are either grads or undergrads?". The following behavior would be desired in these cases:

Q: "Which teachers that advise students take courses?"
A: "Faculty advise students. Students take courses. I don't believe that a teacher can be both a faculty member and a student."
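Both higher-order checks can be sketched as below; the mutual exclusion and exhaustiveness markings are invented for illustration, following the university fragment.

    # Sketches of the two higher-order checks: (1) an intensional failure
    # from conjoining mutually exclusive sub-entity sets, and (2) a
    # misconception from disjoining all of an exhaustive sub-categorization.

    MUTUALLY_EXCLUSIVE = {frozenset(["faculty", "student"]),
                          frozenset(["man", "woman"])}

    EXHAUSTIVE = {"student": {"grad", "undergrad"}}   # exhaustive sub-entity sets

    def conjunction_failure(restrictions):
        """Intensional failure: restriction by two mutually exclusive sets."""
        return any(frozenset([a, b]) in MUTUALLY_EXCLUSIVE
                   for a in restrictions for b in restrictions)

    def vacuous_disjunction(entity_set, alternatives):
        """Misconception: disjunction over all exhaustive sub-entity sets."""
        return set(alternatives) == EXHAUSTIVE.get(entity_set)

    # "Which teachers that advise students take courses?"
    print(conjunction_failure(["faculty", "student"]))            # True
    # "Which students are either grads or undergrads?"
    print(vacuous_disjunction("student", ["grad", "undergrad"]))  # True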
D. Data Currency

Some failures depend on the currency of the data. One such example occurs in a naval data base about ships, subs, and aircraft. The question "What is the position of the Kitty Hawk?" presumes that timely data is maintained. Actually, positions of friendly vessels are current, while those of enemy ships might be hopelessly out of date. In this case, the failures would be extensional, since the last update of the attribute must be checked for currency. It may be the case that some data is current while other data is not. However, the update processing time lag from actual event occurrence to capture in the data base might be sufficiently long that such presumptions might be subject to intensional failure. Thus the user could be made aware that current data was never available.

V CONCLUSION

In this paper we have discussed several kinds of failures of presumptions that depend on knowledge about the structure or organization of the data base. It is important to distinguish between structure and content, since there is a significant difference in the rate at which they change. When responding to intensional failures of presumptions, simply pointing out the failure is in most cases inadequate. The user must also be informed with regard to related knowledge about the structure of the data base in order to formulate queries directed at solving his/her particular problem. The technique described for recognizing intensional failures that are due to the presumption of non-existent relationships between entities and attributes of entities has been implemented. Further work is aimed at developing knowledge representations for temporal and functional information. We hope to eventually develop a general account of user failures in natural language query systems.

I would like to thank Peter Buneman, Aravind Joshi, Kathy McKeown, and Bonnie Webber for their valuable comments on an earlier draft of this paper.

REFERENCES

[1] Chen, P.P.S., "The Entity-Relationship Model -- Towards a Unified View of Data", ACM Transactions on Database Systems, Vol. 1, No. 1, March 1976.
[2] Kaplan, S.J., Cooperative Responses From a Portable Natural Language Data Base Query System, Ph.D. Dissertation, Computer and Information Science Department, University of Pennsylvania, Philadelphia, PA, 1979.
[3] Kwasny, S.C., and Sondheimer, N.K., "Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems", Proceedings of the Conference of the Association for Computational Linguistics, La Jolla, CA, August 1979.
[4] Lee, R.M. and Gerritsen, R., "A Hybrid Representation for Database Semantics", Working Paper 78-01-01, Decision Sciences Department, University of Pennsylvania, 1978.
[5] Mays, E., "Correcting Misconceptions About Data Base Structure", Proceedings of the Conference of the Canadian Society for Computational Studies of Intelligence, Victoria, British Columbia, Canada, May 1980.
[6] McKeown, K., "Paraphrasing Using Given and New Information in a Question-Answer System", Proceedings of the Conference of the Association for Computational Linguistics, La Jolla, CA, August 1979.
[7] Smith, J.M. and Smith, D.C.P., "Database Abstractions: Aggregation and Generalization", ACM Transactions on Database Systems, Vol. 2, No. 2, June 1977.
[8] Weischedel, R.M., and Black, J., "Responding Intelligently to Unparseable Sentences", American Journal of Computational Linguistics, Vol. 6, No. 2, April-June 1980.
| 1980 | 62 |
59 |
WHEN EXPECTATION FAILS: Towards a self-correcting inference system

Richard H. Granger, Jr.
Artificial Intelligence Project
Department of Information and Computer Science
University of California
Irvine, California 92717

ABSTRACT

Contextual understanding depends on a reader's ability to correctly infer a context within which to interpret the events in a story. This "context-selection problem" has traditionally been expressed in terms of heuristics for making the correct initial selection of a story context. This paper presents a view of context selection as an ongoing process spread throughout the understanding process. This view requires that the understander be capable of recognizing and correcting erroneous initial context inferences. A computer program called ARTHUR is described, which selects the correct context for a story by dynamically re-evaluating its own initial inferences in light of subsequent information in a story.

INTRODUCTION

Consider the following simple story:

[1] Geoffrey Huggins walked into the Roger Sherman movie theater. He went up to the balcony, where Willy North was waiting with a gram of cocaine. Geoff paid Willy in large bills and left quickly.

Why did Geoff go into the movie theater? Most people infer that he did so in order to buy some coke, since that was the outcome of the story. The alternative possibility, that Geoff went to the theater to see a movie and then coincidentally ran into Willy and decided to buy some coke from him, seems to go virtually unnoticed by most readers in informal experiments. On the basis of pure logic, either of these inferences is equally plausible. However, people overwhelmingly choose the first inference to explain this story, maintaining that Geoff did not go into the theater to see a movie.

The problem is that the most plausible initial inference from the story's first sentence is that Geoff did go inside to see a movie. Hence, selection of the correct inference about Geoff's goal in this story requires rejection of this initial inference. This paper describes a program called ARTHUR (A Reader THat Understands Reflectively) which understands stories like [1] by generating tentative initial context inferences and then re-evaluating its own inferences in light of subsequent information in the story. By this process ARTHUR understands misleading and surprising stories, and expresses its surprise in English. For example, from the above story, ARTHUR answers the following question about Geoff's intentions:

Q) Why did Geoff go into the movie theater?
A) AT FIRST I THOUGHT IT WAS BECAUSE HE WANTED TO WATCH A MOVIE, BUT ACTUALLY IT'S BECAUSE HE WANTED TO BUY COCAINE.

(For a much more complete description of ARTHUR, see Granger [1980].)

We call the problem of finding the correct inference in a story the "context-selection problem" (after the "script-selection problem" in Cullingford [1978] and DeJong [1979], which is a special case (see Section 4.2)). All the "contexts" (or "context inferences") referred to in this paper are goals, plans or scripts, as presented by Schank and Abelson [1977].
Other theories of contextual understanding (Charniak [1972], Schank [1973], Wilks [1975], Schank and Abelson [1977], Cullingford [1978], Wilensky [1978]) involve the selection of a context which is then used to interpret subsequent events in the story, but these theories fail to understand stories such as [1], in which the initially selected context turns out to be incorrect. ARTHUR operates by maintaining an "inference-fate graph", containing the tentative inferences generated during story processing, along with information about the current status of each inference.

BACKGROUND: SCRIPTS, PLANS, GOALS & UNDERSTANDING

2.1 Contextual understanding

ARTHUR's representational scheme is adopted from Schank and Abelson's [1977] framework for representing human intentions (goals) and methods of achieving those goals (plans and scripts). The problem ARTHUR addresses is the process by which a given story representation is generated from a story. It will be seen that this process of mapping a story onto a representation is not straightforward, and may involve the generation of a number of intermediate representations which are discarded by the time the final story representation is complete.

Recall the first sentence of story [1]: "Geoffrey Huggins walked into the Roger Sherman movie theater." ARTHUR's attempt to infer a context for this event is based on knowledge of typical functions associated with objects and locations. In this instance, a movie theater is a known location with an associated "scripty" activity: viewing a movie. Hence, whenever a story character goes to such a location, one of the plausible inferences from this action is that the character may intend to perform this activity. Seeing a movie also has a default goal associated with it: being entertained. Thus, ARTHUR infers that Geoff plans to see a movie to entertain himself.

When the next sentence is read, "He went up to the balcony, where Willy North was waiting with a gram of cocaine", ARTHUR again performs this bottom-up inference process, resulting in the inference that Geoff may have been planning to take part in a drug sale. Now ARTHUR attempts to connect this inference with the previously inferred goal of watching a movie for entertainment. Now, however, ARTHUR fails to find a connection between the goal of wanting to see a movie and the action of meeting a cocaine dealer. Understanding the story requires ARTHUR to resolve this connection failure.

2.2 Correcting an erroneous inference

Having failed to specify a connecting inferential path between the initial goal and the new action, ARTHUR generates an alternative goal inference from the action. In this case, the new inference is that Geoff wanted to entertain himself by intoxicating himself with cocaine. (Note that this inference too is only a tentative inference, and could be supplanted if it failed to account for the other events in the story.) ARTHUR now has a disconnected representation for the story so far: it has generated two separate goal inferences to explain Geoff's two actions.
ARTHUR thinks that Geoff went to the theater in order to see a movie, but that he then met up with Willy in order to buy some coke. This is not an adequate representation for the story at this point. The correct representation would indicate that Geoff performed both of his actions in service of a single goal of getting coke, and that he never intended to see a movie there at all; the theater was just a meeting place.

Hence, ARTHUR instead infers that Geoff's action of going into the theater was in service of the newly inferred goal, and discards the initial inference (wanting to see a movie) which previously explained this action. We call this process supplanting an inference: ARTHUR supplants its initial "see-movie" inference by the new "get-coke" inference, as the explanation for Geoff's two actions.

ARTHUR's representation of the story now consists of a single inference about Geoff's intentions (he wanted to acquire some coke) and two plans performed in service of that goal (getting to the movie theater and getting to Willy), each of which was carried out by a physical action (PTRANSing to the theater and PTRANSing to Willy). At this point, the initial goal inference (that Geoff wanted to see a movie) has been supplanted: it is no longer considered to be a valid inference about Geoff's intentions in light of the events in the story.

OPERATION OF THE ARTHUR PROGRAM

3.1 Annotated run-time output

The following represents actual annotated run-time output of the ARTHUR program. The input to the program is the following deceptively simple story:

[2] Mary picked up a magazine. She swatted a fly.

The first sentence causes ARTHUR to generate the plausible inference that Mary plans to read the magazine for entertainment, since that is stored in ARTHUR's memory as the default use for a magazine. ARTHUR's internal representation of this situation consists of an "explanation triple": a goal (being entertained), an event (picking up the magazine), and an inferential path connecting the event and goal (reading the magazine).
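This connect-or-supplant control cycle can be sketched in miniature as follows, using story [1]; the action and goal names and the explanation table are invented for illustration, not ARTHUR's own structures.

    # A sketch of the connect-or-supplant cycle: connect each new action to
    # the current goal inference if possible; otherwise infer a new goal and
    # supplant the old one if the new goal also explains the earlier actions.

    EXPLAINS = {
        "see-movie": {"go-to-theater"},
        "get-coke":  {"go-to-theater", "meet-dealer"},   # theater as meeting place
    }

    def default_goal_for(action):
        """Bottom-up inference from an action to a plausible goal (toy table)."""
        return {"go-to-theater": "see-movie", "meet-dealer": "get-coke"}[action]

    def understand(actions):
        goal, explained = None, []
        for act in actions:
            if goal is not None and act in EXPLAINS[goal]:
                explained.append(act)          # connects to the current context
                continue
            new_goal = default_goal_for(act)   # alternative goal inference
            if all(a in EXPLAINS[new_goal] for a in explained):
                if goal is not None:
                    print("supplanting", goal, "with", new_goal)
                goal, explained = new_goal, explained + [act]
        return goal, explained

    print(understand(["go-to-theater", "meet-dealer"]))
    # supplanting see-movie with get-coke
    # ('get-coke', ['go-to-theater', 'meet-dealer'])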
The following ARTHUR output is generated from the processing of the second sentence. (ARTHUR's output has been shortened and simplified here for pedagogical and financial reasons.)

:CURRENT EXPLANATION-GRAPH:
GOAL0: (E-ENTERTAIN (PLANNER MARY) (OBJECT MAG))
EV0: (GRASP (ACTOR MARY) (OBJECT MAG))
PATH0: (READ (PLANNER MARY) (OBJECT MAG))

    ARTHUR's explanation of the first sentence has a goal (being ENTERTAINed), an act (GRASPing a magazine) and an inferential path connecting the action and goal (READing the magazine).

:NEXT SENTENCE CD:
(PROPEL (ACTOR MARY) (OBJECT NIL) (TO FLY))

    The Conceptual Dependency for Mary's action: she struck a fly with an unknown object.

:FAILURE TO CONNECT TO EXISTING GOAL CONTEXT:

    ARTHUR's initial goal inference (Mary planned to entertain herself by reading the magazine) fails to explain her action of swatting a fly.

:SUPPLANTING WITH NEW PLAUSIBLE GOAL CONTEXT:
(PHYS-STATE (PLANNER MARY) (OBJECT FLY) (VAL -10))

    ARTHUR now generates an alternative goal on the basis of Mary's new action: she may want to destroy the fly, i.e., want its physical state to be -10. This new goal also serves to explain her previous action (getting a magazine) as a precondition to the action of swatting the fly, once ARTHUR infers that the magazine was the INSTRument in Mary's plan to damage the fly.

:FINAL EXPLANATION-TRIPLE:
GOAL1: (PHYS-STATE (PLANNER MARY) (OBJECT FLY) (VAL -10))
EV1: (GRASP (ACTOR MARY) (OBJECT MAG))
PATH1: (DELTA-CONTROL (PLANNER MARY) (OBJECT MAG))
EV2: (PROPEL (ACTOR MARY) (OBJECT MAG) (TO FLY))
PATH2: (CHANGE-PHYS-STATE (PLANNER MARY) (OBJECT FLY) (DIRECTION NEG) (INSTR MAG))
This is    true even when it requires the reader to do the    extra work of replacing    one of its own previous    inferences,    as in example [2].    CATEGORIES    OF ERRONEOUS INEERENCES    4.1 Goals    ARTHUR is capable of    recognizing and    correcting erroneous context inferences    in order    to maintain a parsimonious    explanation    of a story.    The examples given so far have dealt only with    erroneous goal inferences, but    other conceptual    categories of    inferences can be generated    erroneously    as well. In this section, examples of    other classes of erroneous inferences will be    given, and it will be    shown why each different    class presents its own unique difficulties    to    ARTHUR's correction    processes.    4.2 Plans and scripts    --    Consider the following    simple story:    [3] Carl was bored. He picked up    the    newspaper. He reached under it to get    the tennis racket that the newspaper    had    been covering.    This is an example in which ARTHUR correctly    infers the goal of the story character, but    erroneously infers the plan that he is going to    perform in service of his goal. ARTHUR first    infers that Carl will read the newspaper to    alleviate his boredom, but this inference    fails to    explain why Carl then gets his tennis racket.    ARTHUR at this point attempts to supplant the    initial goal inference,    but    in    this    case    ARTHUR    knows that that goal was correctly inferred,    because    it    was    implicitly stated in the first    sentence of the story (that Carl was bored).    Hence ARTHUR infers instead that it erroneously    inferred the plan by    which Carl intended to    satisfy his goal (reading    the newspaper). Rather,    Carl planned to alleviate his boredom    by playing    tennis.    The problem now is to connect Carl's action    of picking up the newspaper with his plan of    playing tennis. Instead of using the newspaper as    a    functional object (in this case, reading    material), Carl has treated it as an instrumental    object    that    must be moved as a precondition    to the    implementation of    his    intended    plan.    (Preconditions    are discussed in Schank and Abelson    [1977])    e ARTHUR recognizes    that an object can be    used either functionally or instrumentally.    Furthermore,    when an action is performed as a    precondition    to a plan, typically the objects used    in the action are used instrumentally,    as in [31    .    ARTHUR's initial inference    about Carl's plan was    based on the functionality    of a newspaper. It is    able to supplant this inference    by an inference    that Carl    instead    used    the    newspaper    instrumentally, as a precondition    to getting to    his tennis racket, which in turn was a presumed    precondition to using the racket to play tennis    with. Hence, correcting this erroneous plan    inference required ARTHUR to re-evaluate its    inference    about the intended use of a functional    object.    4.3 Causal state changes    --    Consider the following example:    [4] Kathy and Chris were playing golf. Kathy    hit a shot deep into the rough.    We assume that Kathy did not intend to hit her    ball into the rough, since she's playing golf,    which implies that she probably has a goal of    winning or at least playing well. 
Her action,    therefore, is probably not goal-oriented    behavior,    but is accidental: that is, it is an action which    causally results in some state which may have an    effect on her goal.    This situation differs from stories like    [l],    121 and [31, in that ARTHUR does not change its    mind about its inference    of Kathy's goal. Rather    than assuming that .the goal inference was    erroneous,    ARTHUR infers that the causal result    state hinders the achievement of Kathy’s    goal.    Any causal    state    which affects a character's    goal,    either positively or negatively, appears in    ARTHUR's story representation in one of    the    following four relationships    to an existing    goal:    l- the state    helps the achievement    of the goal;    2 - the state hinders achievement of the goal;    3 - the    state    aZG%Z    the goal entirely; or    4- the state    thwarts the goal entirely.    If ARTHUR did assume that Kathy's shot was    intentional, then the concomitant inference    is    that she didn't really want to win the game at    all;    or, in other words, that the initial    inference    was erroneous. This is the case in the    following example:    [S] Kathy and Chris were playing golf. Kathy    hit a shot deep into the rough. She    wanted to let her good friend Chris win    the game.    Understanding    this story requires    ARTHUR first to    infer that Kathy intends to win the game; then to    notice that her action has hindered her goal, and    finally to    recognize that the initial goal    inference    was erroneous, and to supplant it by the    inference    that Kathy actually    intended to lose the    game, not win it.    4.4 Travelling    down the garden path . . .    --    If the correct context inference    for a story    remains unknown until some significant    fraction of    the story has been read, the story can be thought    of as a "garden path" story. This term is    borrowed from so-called garden path sentences, in    which the correct representation    of the sentence    is not resolved until relatively late in the    sentence. We will    call    a garden path story any    story which causes the reader to generte an    initial inference    which turns out to be erroneous    on the basis of subsequent    story events. Obvious    examples of garden path stories are those in which    we experience    a surprise ending, e.g., mystery    stories, jokes, fables.    Since ARTHUR operates by generating    tentative    initial inferences and then re-evaluating    those    inferences    in light of subsequent information,    ARTHUR understands simple garden path stories.    Not all garden path stories cause us to experience    surprise. For example    , many readers of story [2]    do not notice that Mary might have been planning    to read the magazine, unless that intermediate    inference is pinted out to them. Hence we    hypothesize that the processes involved in    understanding    stories with surprise endings must    differ from the processes of understanding    other    garden    path    stories.    Hence,    ARTHUR's    understanding mechanism    is    not entirely    psychologically    plausible in that it does not    differentiate between stories with surprise    endings and other garden path stories.    A more sophisticated    version of ARTHUR (call    it    “Macro-MTHUR”) might differentiate between    "strong" default inferences    and "weak" tentative    inferences when generating an initial context    inference. 
If a strong initial inference is    generated, then MacARTHUR would consciously    "notice" this inference    being supplanted, thereby    experiencing surprise that the inference was    incorrect. Conversely, if the initial inference    is weak,    MacAKTHUR may not commit itself to that    inference,    but rather may choose to keep around    other possible alternatives. In this case    MacARTHUR would    onlv    exmrience    further    specification of the initial- tentative set of    inferences, rather than supplanting a    sinqle    strong inference. The question of-when readers    processes consciously    versus unconsciously is    still    an open question in psychology. Future    psychological    studies of the cognitive phenomena    underlying human story understanding    (such as in    Thorndyke [19761, [1977], Norman and Bobrow    [19751 ,    and Hayes-Roth [1977],    to name a few) may    be able    to provide data which will shed further    light on this issue.    304    a)NCLUSIONS:    WHERE WE'VE BEEN/ WHERE WE'RE HEADING    REFERENCES    5.1 Process and representation    in understanding    This paper has presented a process for    [ll Cullingford,    R. (1978).    Application:    Script    Computer Understanding    of Newspaper Stories.    Ph.D. Thesis, Yale University, New Haven,    building story representations    which contain    inferences    not explicitly stated in the story.    The representations    themselves    are not new; they    are based on those presented by Schank and Abelson    [19771. What is new here is the process of    corm m    [2l Granger, R. (19813). Adaptive Understanding:    Correcting Erroneous Inferences. Ph.D.    Thesis, Yale University,    New Haven, Conn.    arriving at    a    story representation. Most    contextual understanders (e.g. Charniak E19751,    Cullingford [1978],    Wilensky [1978])    would fail to    arrive at the correct story representations    for    any of the examples in this paper, because initial    statements in the examples trigger inferences    which prove to be erroneous in light of subsequent    story statements. ARTHUR's processing    of these    examples shows that arriving at a given story    representation may require the reader to generate    a nunber of intermediate inferences which get    discarded along the way, and which therefore    play    no role in the final representation    of the story.    [3] Hayes-Roth,    B. (1977). Implications    of human    pattern processing for the desiyn of    artificial    knowledge systems. In    Pattern-directed inference    systems (Waterman    and Hayes-Roth,    eds.). Academic Press, N.Y.    [4] Kintsch, h. (1977)    e Memory @    Cognition.    Wiley, New York.    [S] Norman, D. and Bobrow, D (1975). On the role    of active memory processes in perception    and    cognition. In The structure    of human memory    --    (Coffer,    C., ed.). Freeman, San Francisco.    Thus a final story representation may not    completely wecify the process by which it was    generated, since there may have been intermediate    inferences which are not contained in the final    [6] Schank, R. C. and Abelson, R. P. (1977).    Scripts, Plans, Goals and Understanding.    Erlbaum Press, Hill=,    E.    representation. Yet we know that when people have    understood    one of these examples, they can express    these intermediate    inferences with phrases like    "At first I thought X, but actually it's Y."    
REFERENCES

[1] Cullingford, R. (1978). Script Application: Computer Understanding of Newspaper Stories. Ph.D. Thesis, Yale University, New Haven, Conn.
[2] Granger, R. (1980). Adaptive Understanding: Correcting Erroneous Inferences. Ph.D. Thesis, Yale University, New Haven, Conn.
[3] Hayes-Roth, B. (1977). Implications of human pattern processing for the design of artificial knowledge systems. In Pattern-directed inference systems (Waterman and Hayes-Roth, eds.). Academic Press, N.Y.
[4] Kintsch, W. (1977). Memory and Cognition. Wiley, New York.
[5] Norman, D. and Bobrow, D. (1975). On the role of active memory processes in perception and cognition. In The structure of human memory (Cofer, C., ed.). Freeman, San Francisco.
[6] Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals and Understanding. Erlbaum Press, Hillsdale, N.J.
[7] Thorndyke, P. (1977). Pattern-directed processing of knowledge from texts. In Pattern-directed inference systems (Waterman and Hayes-Roth, eds.). Academic Press, N.Y.
[8] Wilensky, R. (1978). Understanding Goal-based Stories. Ph.D. Thesis, Yale University, New Haven, Conn.
[9] Wilks, Y. (1975). Seven Theses on Artificial Intelligence and Natural Language. Research Report No. 17, Istituto per gli Studi Semantici e Cognitivi, Castagnola, Switzerland.
 | 
	1980 
 | 
	63 
 | 
					
60 
							 | 
ORGANIZING MEMORY AND KEEPING IT ORGANIZED

Janet L. Kolodner
Dept. of Computer Science
Yale University, P.O. Box 2158
New Haven, CT 06520

ABSTRACT

Maintaining good memory organization is important in large memory systems. This paper presents a scheme for automatically reorganizing event information in memory. The processes are implemented in a computer program called CYRUS.

INTRODUCTION

People are quite good at retrieving episodes from their long term memories. In fact, they are much better at information retrieval than any current computer system. Psychologists have described human memory as a reconstructive process (e.g., [3]). When people attempt to remember events and episodes from their lives, they often must go through a complicated reasoning and search process ([6] and [5]). These processes are dependent on good memory organization.

In order to keep memory well organized as new data is added, memory organization must support the creation of new memory categories and the building up of generalized knowledge. If a memory held only 10 events that could be described as meetings, a "meetings" category would be useful. But, unless new meeting sub-categories were created as additional meetings were added to the memory, retrieval of meetings would become very inefficient. Thus, a memory system needs the ability to create new categories automatically from old ones.

CYRUS is a computer program which implements a theory of human memory organization and retrieval. The program is designed to store and retrieve episodic information about important people, and is based on a theory of the way people organize and remember information about themselves. Right now, it holds information about Secretaries of State Cyrus Vance and Edmund Muskie. CYRUS answers questions posed to it in English, using search strategies [5] to search its memory and a set of constructive strategies [2] to construct search keys. CYRUS is connected to the FRUMP program [1], which produces summaries of stories off the UPI wire.* When FRUMP sends CYRUS new information about Vance or Muskie, CYRUS automatically updates its memory. As it does that updating, it reorganizes its memory and builds up generalized knowledge about the information it stores.

*This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored under the Office of Naval Research under contract N00014-75-C-1111.

CYRUS' memory is organized around episodes using Memory Organization Packets (MOPs) [4]. Because episodes include references to persons who participated in them, their locations, other episodes they are related to, etc., they are good global organizers of otherwise disjoint information. For example, Camp David and Menachem Begin have in common that Camp David was the location of an important summit conference that Begin attended.

There are a number of problems that must be addressed in maintaining a self-updating memory:

1. what constitutes a good category in memory, i.e., what attributes does a good category have?
2. what kind of knowledge must be stored about each category to enable retrieval and new category creation?
3. how do categories relate to each other?
4. when is it appropriate to reorganize a category into smaller pieces?
5. how can generalized knowledge be added to new categories?
The remainder of this paper will address some of these problems.

RECOGNIZING SIMILARITIES BETWEEN EPISODES

People notice common aspects of episodes and make generalizations in the normal course of understanding. Reorganization of memory requires noticing similarities and making generalizations based on those similarities. Generalized knowledge is needed to predict future occurrences, to elaborate on a context being understood, to help direct memory update, and as an aid in directing search during retrieval. Like people, CYRUS notices similarities between episodes and makes generalizations from them.

Similar episodes in CYRUS are stored in the same MOP, along with the generalized knowledge built up from them. MOPs act as event categories in memory, holding episodes and knowledge about those episodes. The generalized information a MOP holds resides in its "content frame" [4] and includes such things as typical preconditions and enablement conditions for its episodes, their typical sequence of events, larger episodes they are usually part of, their usual results, typical location, duration, participants, etc.

The structure of individual episodes provides a framework for deciding whether two episodes are similar to each other. If, on two different trips, Vance is welcomed at the airport and then has a meeting with the president of the country he is in, then the episodic structure of the two trips will look alike, and we can say that the two episodes are similar. While on the second trip, he might be reminded [4] of the first one because of their similarities. In the same way, a person hearing about the two trips might be reminded of the first when hearing about the second. If the result of the first trip had been a signed accord, then he may predict, or at least hope, that an accord would be signed at the end of this trip also. If an accord is signed, he will generalize that when the first event of a diplomatic trip is a meeting with the head of state, then an accord will be reached. Later, he will be able to use that knowledge in understanding and retrieval.

REORGANIZING EVENT CATEGORIES

In order for such reminding and subsequent generalization to occur in CYRUS, its MOPs must be highly structured internally. Episodes are indexed in MOPs according to their important aspects. Indexing in a MOP is by content-frame components and includes sequence of events, location, participants, duration, etc. When an event is indexed similarly to an event already in memory, reminding occurs and generalizations are made based on their similarities. As a result, a sub-MOP of the larger MOP is formed to hold those episodes and their generalizations. With the addition of more episodes, new sub-MOPs are further divided. In creating new MOPs and building up generalized information, new knowledge, which can be used for later understanding and retrieval, is added to the data base.
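The memory structures just described can be sketched as follows. This is an illustrative Python rendering with hypothetical field names; CYRUS itself stores considerably richer content frames.

# A minimal sketch of a MOP and an episode as described above.

class MOP:
    def __init__(self, name):
        self.name = name
        self.content_frame = {}  # generalized knowledge:
                                 #   feature -> set of typical values
        self.indices = {}        # (feature, value) -> episode or sub-MOP
        self.episodes = []       # episodes held directly by this MOP

class Episode:
    def __init__(self, **features):
        self.features = features  # e.g., actor, others, topic, place

diplomatic_meetings = MOP("diplomatic meetings")
meeting = Episode(actor="Vance",
                  others="defense minister of Israel",
                  topic="military aid to Israel",
                  place="Jerusalem")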
The actual processing when an event is added to a MOP depends on its relationship to events already in the MOP. One of the following four things is true about each component of an event description:

1. It is unique to this event in the MOP.
2. It is semi-unique to this event in the MOP (it has happened once or a small number of times before).
3. It is often true of events in the MOP.
4. It is typical of events in the MOP.

In case 1, when the descriptive property is unique, the event is indexed under that aspect in the MOP. For instance, one of the discriminations CYRUS makes on meetings is the topic of the contract being discussed. A meeting about the Camp David Accords is indexed under "contract topic = peace", and a meeting about military aid to Pakistan is indexed under "contract topic = military aid". The first time CYRUS hears about a meeting in which Vance discusses military aid, it will index that meeting uniquely in the "diplomatic meetings" MOP under the property "contract topic = military aid". If it were also the first meeting he had with a defense minister, then it would also be indexed uniquely under the occupation of its participants (because meetings are occupational).

If a property being indexed has occurred once before in a MOP (case 2), then reminding occurs, the two events are compared to see which other common aspects they have, and generalizations are made. When Vance meets again about military aid, CYRUS is reminded of the prior meeting because both have the same topic. It checks the descriptions of both to see what other similarities they have. If both, for example, are with defense ministers, it will conclude that meetings about military aid are usually with defense ministers. It also begins indexing within the new MOP:

*************
Adding
  $MEET actor (Vance)
        others (defense minister of Israel)
        topic (military aid to Israel)
        place (Jerusalem)
to memory ...
Reminded of
  $MEET actor (Vance)
        others (defense minister of Pakistan)
        topic (military aid to Pakistan)
        place (Washington)
because both are "diplomatic meetings"
and both have contract topic "military aid"
creating new MOP: meetings about military aid
generalizing that when Vance meets about military aid,
often he meets with a defense minister
*************

Later, if CYRUS hears about a third meeting whose topic is military aid, it will assume that the meeting is with the defense minister of the country requesting aid (unless it is given contrary information). If asked for the participants of that event, it will be able to answer "probably the defense minister". If, on the other hand, a number of meetings about military aid with participants other than defense ministers are added to memory, CYRUS will remove that generalization and attempt a better one instead.

On entering the next meetings about military aid to memory, CYRUS will index them along with other events already indexed there. A new meeting about military aid will be entered into the "meetings about military aid" sub-MOP of "diplomatic meetings" and will be indexed within that MOP (case 3). In this way, reminding, generalization, and new MOP creation will occur within newly created MOPs. If a new meeting about military aid to Pakistan is added to memory, CYRUS will be reminded of the first because both will be indexed under "contract sides = Pakistan" in the "meetings about military aid" MOP.

No discrimination is done on properties that are typical (case 4) of events in a MOP (i.e., almost all events in the MOP fit that description). In that way, generalization can control the expansion of MOPs in memory. If memory has generalized that meetings are called to discuss contracts, then the fact that the topic of a later meeting is a contract will never be indexed. Appropriate aspects of the contract, however, will be indexed.
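The four cases translate directly into an indexing procedure. The sketch below assumes the MOP and Episode classes from the previous sketch; the generalization step is a deliberate simplification of CYRUS' behavior.

# A sketch of the four-case decision made when an event is added to a MOP.

def add_to_mop(mop, episode):
    for feature, value in episode.features.items():
        if value in mop.content_frame.get(feature, ()):
            continue                      # case 4: typical -- never indexed
        key = (feature, value)
        prior = mop.indices.get(key)
        if prior is None:                 # case 1: unique -- index the event
            mop.indices[key] = episode
        elif isinstance(prior, Episode):  # case 2: semi-unique -- reminding
            sub = MOP(mop.name + ": " + feature + "=" + str(value))
            generalize(sub, prior, episode)
            sub.episodes = [prior, episode]
            mop.indices[key] = sub        # a new sub-MOP replaces the event
        else:                             # case 3: often true -- index the
            add_to_mop(prior, episode)    # event within the sub-MOP

def generalize(sub_mop, first, second):
    # Shared aspects of the two reminding episodes become tentative
    # generalizations in the new sub-MOP's content frame.
    for feature, value in first.features.items():
        if second.features.get(feature) == value:
            sub_mop.content_frame.setdefault(feature, set()).add(value)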
Thus, if a new event with no unique aspects is added to memory, reminding of individual events does not occur, but generalizations already made are confirmed or disconfirmed. Disconfirmed generalizations are removed. When CYRUS hears about yet another meeting in the Mid East about the Camp David Accords, it will not be reminded of any specific episodes, but will put the new meeting into the MOPs it fits into:

*************
Adding
  $MEET actor (Vance)
        others (Begin)
        topic (Camp David Accords)
        place (Jerusalem)
to memory ...
Putting it into MOP: meetings with Begin in Israel
  confirming generalizations
Putting it into MOP: meetings about the Camp David Accords with Israeli participants
  confirming generalizations
Putting it into MOP: meetings in Israel
  confirming generalizations
...
*************

IMPLICATIONS IN RETRIEVAL

What are the implications of this indexing scheme in retrieval? The retrievability of an event depends on how distinct its description is, or how many of its features turn out to be significant. As events with similar properties are added to memory, their common aspects lose significance as good retrieval cues and category specifiers (case 4). An event with no unique or semi-unique descriptors will become lost in memory or "forgotten". Since events are indexed by their differences, they can be retrieved whenever an appropriate set of those differences is specified, but specification of only common aspects of events will not allow distinct episodes to be retrieved.

Generalized knowledge can be used during retrieval in a number of ways. One important use is in guiding search strategy application (see [5]). Generalized knowledge can also be used for search key elaboration. There is not always enough information given in a question to direct search to a relevant MOP or to a unique episode within a MOP. Generalizations and a MOP's indexing scheme can be used to direct the filling in of missing details. Only those aspects of a MOP that are indexed need be elaborated.

Generalized information can be used to answer questions when more specific information can't be found. If CYRUS has made a generalization that Gromyko is usually a participant in SALT negotiations, it will be able to give the answer "probably Gromyko" to "Last time Vance negotiated SALT, who did he meet with?", even if it could not retrieve the last negotiating episode. In the same way, generalizations can be used for making predictions during understanding.
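When specific episodes cannot be found, the content frame still supports hedged answers. The following sketch is our own illustration, reusing the earlier structures; the salt_negotiations MOP named in the comment is hypothetical.

# A sketch of answering from generalizations when no specific episode
# can be retrieved.

def retrieve_participant(mop, question_features):
    # First try the discriminating indices for a specific episode.
    for key, entry in mop.indices.items():
        if key in question_features.items() and isinstance(entry, Episode):
            return entry.features.get("others")
    # Fall back on generalized knowledge, hedged with "probably".
    typical = mop.content_frame.get("others")
    if typical:
        return "probably " + sorted(typical)[0]
    return None

# e.g., retrieve_participant(salt_negotiations, {"topic": "SALT"})
# could answer "probably Gromyko" even when the last negotiating
# episode itself has become unretrievable.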
CONCLUSIONS

Good memory organization is crucial in a large information system. Some important processes memory organization must support include dynamic reorganization of memory, creation of new memory categories, and generalization. In this paper, we've tried to show how a long-term memory for episodic information can keep itself usefully organized. This requires a good initial organization plus rules for reorganization. Reorganization of memory happens through reminding, or noticing similarities between episodes, and generalization. The generalizations produced are useful both for controlling later memory reorganization and for retrieval. Some related problems which have not been addressed in this paper, but which are important, are updating generalized knowledge (particularly recovery from bad generalizations), judging the usefulness of new categories, and guiding indexing properly so that new MOPs and generalizations are useful and relevant. Those topics are currently being addressed.

REFERENCES

[1] DeJong, G. F. (1979). Skimming stories in real time: An experiment in integrated understanding. Research Report #158, Department of Computer Science, Yale University.
[2] Kolodner, J. L. (1980). Organization and Retrieval from Long Term Episodic Memory. Ph.D. Thesis (forthcoming), Department of Computer Science, Yale University.
[3] Norman, D. A. and Bobrow, D. G. (1977). Descriptions: a basis for memory acquisition and retrieval. Report #7703, Center for Human Information Processing, La Jolla, California.
[4] Schank, R. C. (1979). Reminding and memory organization: An introduction to MOPs. Research Report #170, Department of Computer Science, Yale University.
[5] Schank, R. C. and Kolodner, J. L. (1979). Retrieving Information from an Episodic Memory. Research Report #159, Department of Computer Science, Yale University. Also in IJCAI-6.
[6] Williams, M. D. (1978). The process of retrieval from very long term memory. Center for Human Information Processing Memo CHIP-75, La Jolla, California.
 | 
	1980 
 | 
	64 
 | 
					
61 
							 | 
Meta-planning

Robert Wilensky
Computer Science Division
Department of EECS
University of California, Berkeley
Berkeley, California 94720

1.0 INTRODUCTION

This paper is concerned with the problems of planning and understanding. These problems are related because a natural language understander must apply knowledge about people's goals and plans in order to make the inferences necessary to explain the behavior of a character in a story (Wilensky, 1978a). Thus, while a story understander is not itself a planner, it must embody a theory of planning knowledge. I have described elsewhere the construction of PAM, a story understanding program. This paper is concerned not with the understanding mechanism itself, but with that part of its planning knowledge which is independent of whether that knowledge is used to explain someone's behavior or to generate a plan for one's own use.

One part of this theory of planning knowledge is essentially world knowledge. This includes a classification of intentional elements (e.g., plans are used to achieve goals) and an actual body of knowledge about particular elements (e.g., asking for something is a way of getting something from someone).

When one attempts to use this world knowledge to understand the intentions of a story's characters, a number of problems soon become apparent. In particular, what is difficult in understanding a person's behavior is not so much understanding the goal and plan he is operating under, but the fact that there are usually numerous goals and plans present in a situation. It is the interactions between these intentional elements that cause much of the complexity in both understanding and planning.

For example, consider the following stories:

(1) John was in a hurry to get to Las Vegas, but he noticed that there were a lot of cops around so he stuck to the speed limit.

(2) John was eating dinner when he noticed that a thief was trying to break in to his house. After he finished his dessert, John called the police.

Understanding (1) requires noticing the interaction between John's goal of getting to Las Vegas quickly and his goal of avoiding a speeding ticket. Likewise, (2) strikes most readers as strange, since John should have reacted to the intruder more strongly. The unusualness of this story is due not to knowledge about the plans and goals involved, but to the apparently unproductive scheduling of these plans. A more intelligent planner would have dealt with the threat immediately, and then perhaps returned to his meal when that situation had been disposed of.

Thus, to understand the behavior of a character, or to generate an intelligent plan, it is necessary to take into account the interactions between goals. Previous planning programs (e.g., Sussman, 1975; Sacerdoti, 1977) deal with goal interactions by providing specific program mechanisms to deal with particular situations. For example, Sussman's HACKER has a celebrated critic that knows about goals clobbering "brother goals", and detects this bug in plans suggested by the plan synthesizer. The difficulty with this type of solution is that burying this knowledge about how to plan in a procedure assures that such knowledge cannot be shared by a program that wishes to use it to understand someone else's behavior in a complicated situation.
In addition, as I hope to show, there is a lot of structure to this knowledge that is missed in this fashion, and which is extremely useful both to the task of planning and to that of plan understanding.

2.0 META-PLANNING

One solution to this problem is to create a second body of planning knowledge that is called meta-planning. By this I mean that knowledge about how to plan should itself be expressed as a set of goals for the planning process (meta-goals) and a set of plans to achieve them (meta-plans). The same planning mechanism (or plan understander) that is used to produce a plan of action (or an explanation) from ordinary plans can then be applied to meta-goals and meta-plans as well.

For example, consider the following situation, either from the point of view of plan understanding or plan generation:

(7) John's wife called him and told him they were all out of milk. He decided to pick some up on his way home from work.

Most intelligent planners would come up with John's plan, assuming they knew that they pass by a grocery store on the route home. In order to produce this plan, it is necessary to go through the following processes (a code sketch of the integration step follows this discussion):

1. Noticing that the new "get milk" goal overlaps with the plan for an existing goal.
2. Adjusting one's plans accordingly. In this case, the plan is modified so as to produce a route that takes the planner near the grocery store.
3. The "go home" plan is suspended at the point at which the grocery store is reached.
4. The "get milk" plan is executed.
5. The "go home" plan is resumed.

In terms of meta-planning, this situation has the following structure: there is a goal overlap, a situation relevant to the important meta-goal "Don't Waste Resources". Plan Piggybacking (finding a new plan that fulfills both goals simultaneously) is a meta-plan that fulfills the "Don't Waste Resources" meta-goal.

The advantage of the meta-planning approach is that the problem of how to deal with complex goal interactions can be stated as a problem to be solved by the same planning mechanism one applies to "ordinary" goals. For example, one may first try out a number of canned solutions, then some standard planning procedures, and if all else fails, try to construct a novel solution.

Note that there are at least three important differences between meta-planning and planning using constraints. Constraints and plan generators are unlike meta-goals in that constraints reject plans, but they don't themselves propose new ones. In contrast, meta-goals not only pick up violations, but suggest new plans to fix the problem. Meta-goals are declarative structures, and thus may be used in the explanation process as well as in planning. In addition, meta-goals are domain independent, encoding only knowledge about planning in general.
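The goal-overlap case can be sketched concretely. This is illustrative code of our own, not PAM or PANDORA; the route representation and step names are hypothetical.

# A sketch of the Plan Integration ("piggybacking") meta-plan applied to
# the milk example.

go_home = {"goal": "be at home",
           "route": ["office", "grocery store", "home"]}
get_milk = {"goal": "have milk", "needs_stop_at": "grocery store"}

def integrate(primary, errand):
    # Splice an errand into a plan whose route already passes its stop,
    # fulfilling the "Don't Waste Resources" meta-goal.
    stop = errand["needs_stop_at"]
    if stop not in primary["route"]:
        return None   # no goal overlap; some other meta-plan must apply
    steps = []
    for place in primary["route"]:
        steps.append(("travel to", place))
        if place == stop:
            steps.append(("suspend", primary["goal"]))
            steps.append(("execute", errand["goal"]))
            steps.append(("resume", primary["goal"]))
    return steps

print(integrate(go_home, get_milk))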
McDermott's notion of a policy, or secondary task, comes closest to the meta-planning I propose here. A policy is essentially an explicitly represented constraint. The primary differences between a policy and a meta-goal are that meta-goals include goals that are not necessarily constraints; meta-goals refer only to facts about planning as their domain, while policies may include domain specific information; and policies often entail the creation of pseudo-tasks, whereas meta-goals have meta-plans that deviate less from the structure of normal plans.

Hayes-Roth and Hayes-Roth (1978) use the term meta-planning to refer to decisions about the planning process. While my use of the term is similar to theirs, they include all types of planning decisions under that name, and their meta-planning is not formulated in terms of explicit meta-goals and meta-plans. I use the term to refer to only a subset of this knowledge, and only when that knowledge is conveniently expressable in terms of explicit meta-goals and meta-plans.

2.1 Kinds Of Meta-goals

The following is a brief description of the more important meta-goals encountered so far, along with the situations in which they arise and some standard plans applicable to them. This list is not meant to be complete; it merely reflects the current state of our analysis. (The sketch following the list renders a sample of this taxonomy as declarative structures.)

META-GOALS, SITUATIONS, AND META-PLANS

1. Don't Waste Resources
   Situations to detect:
   1. Goal Overlap
      Associated meta-plans:
      1. Schedule Common Subgoals First
      2. Plan Integration
   2. Multiple Planning Options (more than one plan is applicable to a goal)
      Associated meta-plans:
      1. Select Less Costly Plan
   3. Plan Non-integrability (situations in which the execution of two plans will adversely affect one another, e.g., one undoes a subgoal established by the other)
      Associated meta-plans:
      1. Schedule Sequentially
   4. Recurring Goals (a goal arises repeatedly)
      Associated meta-plans:
      1. Subsume Recurring Goal (establish a state that fulfills a precondition for a plan for the goal and which endures over a period of time -- see Wilensky, 1978)
   5. Recursive Subgoals (a subgoal is identical to a higher level goal, causing a potential infinite loop)
      Associated meta-plans:
      1. Select Alternate Plan

2. Achieve As Many Goals As Possible
   Situations to detect:
   1. Goal Conflict
      Associated meta-plans:
      various conflict resolution plans (see Wilensky 1978a)
   2. Asymmetrical Goal Conflict (both goals can be accomplished if A is performed before B, but not if B is performed first)
      Associated meta-plans:
      1. Schedule Innocuous Plan First
      2. Plan Splicing (if one plan has already been started, suspend it and divert to the other plan, returning to the original plan when the new plan has been executed)
   3. Goal Competition (goal interference with the goal of another planner)
      Associated meta-plans:
      1. various anti-plans (plans to deal specifically with opposition)
      2. various plans for resolving the competition

3. Maximize the Value of the Achieved Goals
   Situations to detect:
   1. Unresolvable Goal Conflict
      Associated meta-plans:
      1. Abandon Less Important Goal

4. Don't Violate Desirable States
   Situations to detect:
   1. Danger
      Associated meta-plans:
      1. Create a Preservation Goal
   2. Maintenance Time
      Associated meta-plans:
      1. Perform Maintenance
   3. Anticipated Preservation Goal (performance of another plan will cause the planner to have a preservation goal)
      Associated meta-plans:
      1. Select Alternate Plan
      2. Protective Modification (modify the original plan so as not to provoke the preservation goal)
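Because meta-goals are declarative, the taxonomy can be written down as data rather than buried in procedures. The encoding below is our own illustration of that point, not Wilensky's program; it covers only a sample of the situations listed above.

# A sketch of the meta-goal taxonomy as declarative structures: each
# situation names the meta-goal it bears on and meta-plans that address it.

META_KNOWLEDGE = {
    "goal overlap": {
        "meta_goal": "Don't Waste Resources",
        "meta_plans": ["schedule common subgoals first", "plan integration"],
    },
    "recurring goal": {
        "meta_goal": "Don't Waste Resources",
        "meta_plans": ["subsume recurring goal"],
    },
    "asymmetrical goal conflict": {
        "meta_goal": "Achieve As Many Goals As Possible",
        "meta_plans": ["schedule innocuous plan first", "plan splicing"],
    },
    "unresolvable goal conflict": {
        "meta_goal": "Maximize the Value of the Achieved Goals",
        "meta_plans": ["abandon less important goal"],
    },
    "anticipated preservation goal": {
        "meta_goal": "Don't Violate Desirable States",
        "meta_plans": ["select alternate plan", "protective modification"],
    },
}

def handle(situation):
    # The same table can drive a planner (pick a meta-plan to pursue) or
    # an understander (explain observed behavior), since nothing here is
    # procedural.
    entry = META_KNOWLEDGE[situation]
    return entry["meta_goal"], entry["meta_plans"]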
As an illustration of how these meta-goals interact, suppose rain threatens a plan that requires going outside, so that staying dry becomes an anticipated preservation goal. If a stored plan for this protective goal is to wear protective clothing, it would be scheduled before the initial plan. If not, then we could establish a subgoal of getting a raincoat. This might spawn a plan that involves going outside, which would violate the Recursive Subgoals condition. The meta-plan here is to choose another plan. If the planner cannot find one, the "Achieve As Many Goals As Possible" meta-goal is activated, as a goal conflict is now seen to exist. The meta-plans for goal conflict resolution are attempted. If they fail, then an unresolvable goal conflict situation exists, and Maximize the Value of the Achieved Goals is activated. The meta-plan here is to abandon the less important goal. The planner selects whichever goal he values more and then abandons the other.

3.0 APPLICATIONS

We are currently attempting to use meta-planning in two programs. PAM, a story understanding program, uses knowledge about meta-goals to understand stories involving complex goal interactions. As PAM has been discussed at length elsewhere, we will forego a discussion of its use of meta-planning here.

Meta-planning is also being used in the development of a planning program called PANDORA (Plan ANalyzer with Dynamic Organization, Revision and Application). PANDORA is given a description of a situation and creates a plan for the goals it may have in that situation. PANDORA may be told about new developments, and it changes its plans accordingly.

References

[1] Hayes-Roth, B. and Hayes-Roth, F. (1978). Cognitive Processes in Planning. RAND Report R-2366-ONR.
[2] McDermott, D. (1978). Planning and Acting. Cognitive Science, vol. 2, no. 2.
[3] Sacerdoti, E. D. (1977). A Structure for Plans and Behavior. Elsevier North-Holland, Amsterdam.
[4] Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals and Understanding. Erlbaum Press, Hillsdale, N.J.
[5] Sussman, G. J. (1975). A Computer Model of Skill Acquisition. American Elsevier, New York.
[6] Wilensky, R. (1978). Understanding goal-based stories. Yale University Research Report #140.
 | 
	1980 
 | 
	65 
 | 
					
62 
							 | 
NARRATIVE TEXT SUMMARIZATION

Wendy Lehnert
Yale University
Department of Computer Science
New Haven, Connecticut

ABSTRACT

In order to summarize a story it is necessary to access a high level analysis that highlights the story's central concepts. A technique of memory representation based on affect units appears to provide the necessary foundation for such an analysis. Affect units are conceptual structures that overlap with each other when a narrative is cohesive. When overlapping intersections are interpreted as arcs in a graph of affect units, the resulting graph encodes the plot of the story. Structural features of the graph then reveal which concepts are central to the story. Affect unit analysis is currently being investigated as a processing strategy for narrative summarization.

When a reader summarizes a story, vast amounts of information in memory are selectively ignored in order to produce a distilled version of the original narrative. This process of simplification relies on a global structuring of memory that allows search procedures to concentrate on central elements of the story while ignoring peripheral details. It is apparent that some hierarchical structure is holding memory together, but the precise formulation of this structure is much more elusive. How is the hierarchical ordering of a memory representation constructed at the time of understanding? Exactly what elements of the memory representation are critical in building this structure? What search processes examine memory during summarization? How are summaries produced after memory has been accessed? In this paper we will outline a strategy for narrative summarization that addresses each of these issues.

This proposed representation for high level narrative analysis relies on affect units. An affect unit is an abstract structure built from three kinds of affect states and four kinds of affect links:*

    AFFECT STATES           AFFECT LINKS
    Positive Events (+)     Motivation (m)
    Negative Events (-)     Actualization (a)
    Mental States (M)       Termination (t)
                            Equivalence (e)

*This work was supported in part by ARPA contract N00014-75-C-1111 and in part by NSF contract IST7918463.

For example, if John wants to buy a house, his desire is a mental state (M). If John subsequently buys the house, his desire is actualized by a positive event (+). But if someone else buys it instead, John will experience that transaction as a negative event (-) signalling actualization failure. These particular affect states are derived by recognizing an initiated goal (M), an achieved goal (+), and a thwarted goal (-). The status of a goal is just one way that an affect state can be recognized. A more complete account of affect state recognition is presented in [3].

All affect states are relative to a particular character. If another buyer (Mary) takes the house, we have a negative event for John, and a positive event for Mary. We use a diagonal cross-character link to identify their two affect states as reactions to the same event:

    [Diagram: John's mental state "wants to buy" (M) is actualized (a) by the negative event "house is sold" (-); Mary's mental state "wants to buy" (M) is actualized (a) by the positive event "buys house" (+); a diagonal cross-character link joins the (-) and the (+) as reactions to the same event.]

The above configuration of four states and three links is the affect unit for "competition." Two actors have a goal, and success for one means failure for the other. "Success" and "failure" are primitive affect units contained within the competition unit. Success is recognized whenever a mental state is actualized by a positive event. Failure is the non-actualization of a mental state through a negative event.
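The state-and-link notation supports a simple recognition procedure for primitive units. The encoding below is our own: states are (character, sign) pairs and links are triples; see [3] for the full recognition account.

# A sketch of recognizing "success", "failure", and "competition" from
# affect states and links.

def success(links):
    # A mental state actualized by a positive event.
    return [(s, t) for s, t, k in links
            if k == "a" and s[1] == "M" and t[1] == "+"]

def failure(links):
    # Non-actualization of a mental state through a negative event.
    return [(s, t) for s, t, k in links
            if k == "a" and s[1] == "M" and t[1] == "-"]

def competition(links, cross_links):
    # Success for one character cross-linked to failure for another,
    # as reactions to the same event.
    units = []
    for win_m, win_plus in success(links):
        for lose_m, lose_minus in failure(links):
            if (win_m[0] != lose_m[0]
                    and (lose_minus, win_plus) in cross_links):
                units.append((lose_m, lose_minus, win_m, win_plus))
    return units

john_m, john_minus = ("John", "M"), ("John", "-")
mary_m, mary_plus = ("Mary", "M"), ("Mary", "+")
links = [(john_m, john_minus, "a"), (mary_m, mary_plus, "a")]
print(competition(links, cross_links=[(john_minus, mary_plus)]))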
Now suppose John decides to get even by setting the house on fire. And suppose further that it takes two tries to get it going.

    [Diagram: for John, "wants to buy" (M) is actualized (a) by "house is sold" (-), which motivates (m) "desires fire" (M); that state is actualized (a) by "can't set fire" (-), persists in an equivalent (e) mental state "desires fire" (M), which is actualized (a) by "gets fire going" (+). For Mary, "wants to buy" (M) is actualized (a) by "buys house" (+), which is terminated (t) by "house burns down" (-). Cross-character links pair "house is sold" (-) with "buys house" (+), and "gets fire going" (+) with "house burns down" (-).]

The sale of the house to Mary motivates John to set the house on fire (M). This mental state fails to be actualized (-) the first time he tries to commit arson. But his desire persists in an equivalent mental state (M) and is then successfully actualized (+) by John setting the fire. This fire is a positive event (+) for John, but a negative event (-) for Mary, who suffers a loss.

"Loss" is an affect unit that occurs whenever a negative event terminates a positive event, in the sense of removing whatever satisfaction was derived from that positive event. When a loss wipes out a previous success, we get the affect unit for "fleeting success." When a smaller unit is embedded in a larger unit (e.g., "loss" is embedded in "fleeting success"), we recognize the structure of the larger unit as a "top level" affect unit. Using this convention, our story about John and Mary contains 4 top level affect units: (1) "competition", (2) "fleeting success", and (3) "perseverance after failure", plus (4) "retaliation", which is recognized by merging the two equivalent mental states of John:

    [Diagram of the retaliation unit: a negative event for one character, cross-linked to an arbitrary affect state (X) of the other character, motivates (m) a mental state that is actualized (a) by a positive event, which is cross-linked to a negative event for the other character.]

The unspecified (X) in the retaliation unit can be any affect state. In our story, John's negative event happened to be a positive event for Mary.
Top level affect units for a narrative can be used as the basis for a graph structure that describes narrative cohesion. The nodes of the graph represent top level affect units, and an arc exists between two nodes whenever the corresponding affect units share at least one common affect state. The affect unit graph structure for our simple story looks like:

    [Figure: the affect unit graph for the John-and-Mary story, where C = "competition", F = "fleeting success", R = "retaliation", and P = "perseverance after failure".]

In general, the affect unit graph for a cohesive narrative will be connected. And in many cases, the graph will have a unique node whose degree (number of incident arcs) is maximal over all nodes in the graph. In our example, the retaliation unit has a uniquely maximal degree of 3. We will call any node of maximal degree a "pivotal unit." If a story has a unique pivotal unit, then that unit encodes the "gist" of the story. A good summary for the story will be based on the pivotal unit and its adjacent units.

We first derive a baseline summary from the pivotal unit by accessing a "generational frame" associated with the pivotal unit. For example, a generational frame for retaliation is:

    "When Y caused a negative event for X, X caused a negative event for Y."

This is a conceptually abstract description of retaliation. To produce a reasonable summary, we must (1) instantiate the generational frame, and (2) augment it with information from units adjacent to the pivotal unit. We will try to convey what's involved by showing how a baseline summary evolves into a reasonable summary with the addition of information from adjacent units. (This sequence is not intended to reflect actual processing stages.)

S1 = Retaliation (the baseline summary)
"When Mary prevented John from getting something he wanted, John set her house on fire."

S2 = S1 + Competition
"When Mary bought something that John wanted, John set her house on fire."

S3 = S2 + Fleeting Success
"When Mary bought a house that John wanted, John set the house on fire."

S4 = S3 + Perseverance After Failure
"When Mary bought a house that John wanted, John set the house on fire after two tries."

If the information from the perseverance unit seems less important than the other contributions, there is a good reason for this. "Perseverance after failure" resides between two equivalent mental states that are merged within the retaliation unit. It is often desirable to ignore units that are lost when equivalent mental states are merged.

Suppose for comparison that John gave up on his intended arson after the first unsuccessful attempt. Then our affect analysis for the story would be a truncated version of the original:

    [Diagram: the first four states of the original analysis, ending with John's "desires fire" (M) actualized (a) by "can't set fire" (-).]

We still have a competition unit, but the other top level units are now "motivation" (a negative event motivating a mental state) and "failure". The affect unit graph now contains three connected units, with motivation acting as the pivotal unit. The baseline summary is therefore built from a generational frame associated with motivation:

    "When a negative event happened to X, X wanted Z."

Augmenting this baseline summary with information from the competition and failure units, we derive a reasonable summary:

S1 = Motivation (the baseline summary)
"When Mary prevented John from getting something he wanted, John wanted to set her house on fire."

S2 = S1 + Competition
"When Mary bought a house that John wanted, John wanted to set it on fire."

S3 = S2 + Failure
"When Mary bought a house that John wanted, John unsuccessfully tried to set it on fire."

These two examples illustrate how pivotal units and their adjacent units can be used to drive processes of narrative summarization. While many simple stories will succumb to an algorithm that uses a pivotal unit for the baseline summary, other stories yield affect unit graphs that do not have unique pivotal units.
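The degree heuristic is easy to state as code. The sketch below is our own illustration; the adjacency structure follows the John-and-Mary example, and the "generational frames" are pre-written strings standing in for the paper's frame instantiation process.

# A sketch of pivotal-unit selection and summary assembly.

def pivotal_units(graph):
    # All nodes of maximal degree (number of incident arcs).
    degree = {node: len(adj) for node, adj in graph.items()}
    best = max(degree.values())
    return [node for node, d in degree.items() if d == best]

def summarize(graph, frames):
    pivots = pivotal_units(graph)
    if len(pivots) != 1:
        return None   # no unique pivot: deeper structural analysis needed
    pivot = pivots[0]
    summary = frames[pivot]          # baseline from the generational frame
    for unit in graph[pivot]:        # augment from each adjacent unit; the
        summary = frames.get((pivot, unit), summary)  # frames accumulate
    return summary

# R(etaliation) is adjacent to C(ompetition), F(leeting success), and
# P(erseverance after failure); C and F share Mary's positive event.
graph = {"C": ["R", "F"], "F": ["C", "R"], "P": ["R"], "R": ["C", "F", "P"]}
frames = {
    "R": "When Mary prevented John from getting something he wanted, "
         "John set her house on fire.",
    ("R", "C"): "When Mary bought something that John wanted, "
                "John set her house on fire.",
    ("R", "F"): "When Mary bought a house that John wanted, "
                "John set the house on fire.",
    ("R", "P"): "When Mary bought a house that John wanted, "
                "John set the house on fire after two tries.",
}
print(summarize(graph, frames))   # the S4-style summary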
For example, consider "The Gift of the Magi" by O. Henry. In this story a young couple want to buy each other Christmas presents. They are very poor, but Della has long beautiful hair, and Jim has a prized pocket watch. To get money for the presents, Della sells her hair and Jim sells his pocket watch. Then she buys him a gold watch chain, and he buys her expensive ornaments for her hair. When they realize what they've done, they feel consoled by the love behind each other's sacrifices.

The affect unit analysis is perfectly symmetrical across the two characters. Both characters have affect units for nested subgoals, a regrettable mistake, two distinct losses, and a hidden blessing. The affect unit graph for this story is connected, but there is no unique pivotal unit:

    [Figure: the affect unit graph for "The Gift of the Magi".]

Both "HM" and "WM" are pivotal units; these units correspond to the two characters' regrettable mistakes. Let the family of a node N be the set of nodes adjacent to N. Then this graph can be partitioned into the families of "HM" and "WM". "HN", "WN", "HL1", and "WL1" are boundary units in the sense that each of their families cross this partition. It is not easy to come up with a one sentence summary of "The Gift of the Magi," but it can be done by concentrating on the boundary units of maximal degree ("HN" and "WN"). These are the units for their nested subgoals:

"Della sold her long locks of hair to buy her husband a watch chain, and he sold his watch to buy her ornaments for her hair."

This example shows how the summarization algorithm must be sensitive to structural features of affect unit graphs. In this case the connected graph can be partitioned into the families of two pivotal units, and the simplest summary originates from the boundary units of maximal degree.

The process of narrative text summarization relies on (1) a high level of conceptual representation that readily encodes coherence within the narrative, and (2) a process of language generation that can easily be driven by that high level memory representation. In this paper we have attempted to show how affect units and their resulting graph structures are well-suited to these requirements.

We have necessarily omitted important explanations concerning techniques of recognition for affect units and the processes of generation that express target summaries in English. The representational system itself requires further explication concerning which affect unit configurations are legitimate (there are 15 legal configurations of the form "state" - "link" - "state" rather than the combinatorially possible 36). Using these 15 primitive configurations, we can represent speech acts, voluntary compliance, coerced compliance, the notion of a double-cross, and similar abstractions of equivalent conceptual complexity [3].

The use of affect units in narrative summarization is currently being explored by psychological experiments on text comprehension and within a computer implementation for the BORIS system [2]. While related work on text summarization has been conducted using story grammars, there are serious flaws in that approach due to the top-down nature of story grammars [1]. These difficulties will not arise with the affect unit approach because affect units are constructed by bottom-up processing at the time of understanding. The resulting affect unit graphs are consequently far more flexible in their content and structure than the rigid hierarchies of fixed story grammars. This flexibility is the key to recognizing a diverse range of plot structures without recourse to an a priori taxonomy of all possible plot types.
REFERENCES

[1] Black, J. B., and Wilensky, R. (1979). "An Evaluation of Story Grammars." Cognitive Science, Vol. 3, No. 3, pp. 213-230.
[2] Dyer, M. and Lehnert, W. (1980). "Memory Organization and Search Processes for Narratives." Department of Computer Science TR #175, Yale University, New Haven, Conn.
[3] Lehnert, W. (1980). "Affect Units and Narrative Summarization." Department of Computer Science TR #179, Yale University, New Haven, Conn.
 | 
	1980 
 | 
	66 
 | 
					
63 
							 | 
HEARSAY-III: A Domain-Independent Framework for Expert Systems

Robert Balzer
Lee Erman
Philip London
Chuck Williams

USC/Information Sciences Institute*
Marina del Rey, CA 90291

Abstract

Hearsay-III is a conceptually simple extension of the basic ideas in the Hearsay-II speech-understanding system [3]. That domain-dependent expert system was, in turn, a product of a tradition of increasingly sophisticated production-rule-based expert systems. The use of production systems to encapsulate expert knowledge in manageable and relatively independent chunks has been a strong recurrent theme in AI. These systems have steadily grown more sophisticated in their pattern-match and action languages, and in their conflict-resolution mechanisms [13]. In this paper, we describe the Hearsay-III framework, concentrating on its departures from Hearsay-II.

1. The Heritage From Hearsay-II

Hearsay-II provided two major advances -- the structuring of the workspace, called the blackboard in Hearsay, and the structuring of the search, via scheduling mechanisms. The blackboard provided a two-dimensional structure for incrementally building hierarchical interpretations of the utterance:

- levels, which contained different representations (and levels of abstraction) of the domain (phones, syllables, words, phrases, etc.).
- a location dimension (the time within the spoken utterance) which positioned each partial interpretation within its level.

Knowledge sources (KSs), relatively large production rules, were agents which reacted to blackboard changes produced by other KSs and in turn produced new changes. The expertise was thus organized around the activity of building higher-level, more encompassing partial interpretations from several nearby lower-level partial interpretations (e.g., aggregating three contiguous syllables into a word) and producing lower-level ones from higher-level ones (e.g., predicting an adjacent word on the basis of an existing phrase interpretation).

Within this aggregation-based interpretation-building paradigm, Hearsay-II also provided a method for exploring alternative interpretations, i.e., handling search. Interpretations conflicted if they occupied the same or overlapping locations of a level; conflicting interpretations competed as alternatives. Thus, in addition to organizing activity around the interpretation-building process, Hearsay-II also had to allocate resources among competing interpretations. This required expertise in the form of critics and evaluators, and necessitated a more complex scheduler, which at each point chose for execution the action of one previously-matched KS.**

*This research was supported by Defense Advanced Research Projects Agency contract DAHC15 72 C 0308. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any other person or agency connected with them.

**A good discussion of scheduling in Hearsay-II can be found in [5].
2. The Directions for Hearsay-III

To this heritage, we bring two notions that motivate most of our changes:

- Through simple generalization, the Hearsay approach can be made domain independent.
- Scheduling is itself so complex a task that the Hearsay blackboard-oriented knowledge-based approach is needed to build adequate schedulers.***

Our generalizations consist of systematizing the main blackboard activities:

- aggregating several interpretations at one level into a composite interpretation at a higher level,
- manipulating alternative interpretations (by creating a placeholder for an unmade decision, indicating the alternatives of that decision, and ultimately replacing the placeholder by a selected alternative), and
- criticizing proposed interpretations.

The complexity of scheduling is handled by introducing a separate scheduling blackboard whose base data is the dynamically created activation records of KSs. These include both the domain-dependent KSs, which react to the regular, domain blackboard, and scheduling KSs, which react to changes on the scheduling blackboard as well. The organization of these activations (with agendas, priorities, etc.) is left to the application writer; Hearsay-III provides only the basic mechanisms for building expert systems. Thus domain KSs can be viewed as the legal move generators (competence knowledge), with the scheduling KSs controlling search (performance knowledge).

3. Blackboard Structure

In Hearsay-II, nodes on the blackboard, which represented partial interpretations, were called hypotheses. In Hearsay-III, we adopt the more neutral term unit. Hearsay-III provides primitives for creating units and aggregating them, i.e., associating them hierarchically. The blackboard is implemented in a general-purpose, typed, relational database system (built on top of INTERLISP), called AP3. AP3 has a pattern-matching language; this is used for retrieval from the blackboard. AP3 also has demons; the triggering pattern which the application writer supplies as part of the definition of a KS is turned into an AP3 demon.

***This notion, in one form or another, is common to a number of others, for example, [6], [2], and [11].
The blackboard levels of Hearsay-II have been generalized somewhat into a tree-structure of classes. Each unit is created permanently as an instance of some class. The unit is, by inheritance, also an instance of all superclasses of that class. The apex of the class tree is the general class Unit. The immediate subclasses of Unit are DomainUnit and SchedulingUnit; these classes serve to define the domain and scheduling blackboards. All other subclasses are declared by the application writer, appropriate to his domain. For example, in the SAFE application, which is a system for building formal specifications of programs from informal specifications [1], one of the subclasses of DomainUnit is ControlFragment, and it has subclasses Sequence, Parallel, Loop, Conditional, Demon, etc. The semantics of the unit classes other than Unit, DomainUnit, and SchedulingUnit are left to the application writer.

Any unit may serve to denote competing alternative interpretations. Such a unit, called a Choice Set, represents a choice point in the problem-solving. The Choice Set is a placeholder for the interpretation it represents; it can be dealt with as any other unit, including its incorporation as a component into higher-level units. Associated with a Choice Set unit are the alternatives of the choice. These may be explicit existing units or they may be implicit in a generator function associated with the Choice Set. When appropriate, a KS may execute a Select operation on a Choice Set, replacing it with the selected alternative. The Selection can be done in a destructive, irrevocable manner, or it can be done in a new context, retaining the ability to Select another alternative. Contexts are described more in Section 5.

Hearsay-II's location dimension (e.g., time-within-utterance in the speech-understanding domain) is not imposed on the Hearsay-III blackboard. The application writer may create such a dimension, either inherently in the interconnection structure of units or explicitly as values associated with the units. The flexibility of the underlying relational database system allows such constructs to have first-class status, for example, to be used in KS triggering patterns.
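The class tree and the Choice Set protocol can be illustrated with a short sketch. Hearsay-III itself is built on AP3 in INTERLISP; the Python rendering below is our own, and everything other than the class names taken from the paper is hypothetical.

# An illustrative sketch of Hearsay-III's unit class tree and Choice Sets.

class Unit: ...                      # apex of the class tree
class DomainUnit(Unit): ...          # defines the domain blackboard
class SchedulingUnit(Unit): ...      # defines the scheduling blackboard

# Application-writer classes, following the SAFE example:
class ControlFragment(DomainUnit): ...
class Sequence(ControlFragment): ...
class Demon(ControlFragment): ...

class ChoiceSet(DomainUnit):
    # A placeholder unit standing for an unmade decision; it can be
    # aggregated into higher-level units like any other unit.
    def __init__(self, alternatives=None, generator=None):
        self.alternatives = alternatives or []  # explicit existing units
        self.generator = generator              # or an implicit generator
        self.selected = None

    def select(self, alternative):
        # The destructive, irrevocable form of Select; the revocable form
        # would record the choice in a new context instead.
        assert alternative in self.alternatives
        self.selected = alternative
        return alternative

fragment = ChoiceSet(alternatives=[Sequence(), Demon()])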
4. Scheduling

Hearsay-III retains Hearsay-II's basic sequencing of KS execution: when the triggering pattern of a KS is matched by a configuration of data on the blackboard, an activation record is created containing the information needed to execute the KS in the environment of the match. At some later time, the activation record may be selected and executed, i.e., the KS's action, which is arbitrary code, is run. The executing KS has available to it the blackboard data that triggered it, which usually serves as the initial focus for the activity of the execution.

Each KS execution is indivisible; it runs to completion and is not interrupted for the execution of any other KS activation. The effect of a KS execution is an updated blackboard. Independent activations of the same KS can pursue the same exploration by retrieving (potentially private) state information from the blackboard.

The scheduling problem is: given the current state of the system, select the appropriate activation record to execute next. The separation of KS execution from triggering allows for complex scheduling schemes (i.e., a large collection of activations may be available from which to select). To allow the application writer to use the Hearsay problem-solving features for building such schemes, several mechanisms were added in Hearsay-III:

- Each activation record is a unit on the scheduling blackboard. The application writer supplies, as part of the definition of each KS, code to be executed when the triggering pattern is matched; this code computes a scheduling-blackboard class (level) in which the activation record will be created.

- When executed, scheduling KSs are expected to make changes to the scheduling blackboard to facilitate organizing the selection of activation records. In addition to triggering on changes to the domain blackboard, scheduling KSs can trigger on changes to the scheduling blackboard, including the creation of activation records. The actions a scheduling KS may take include associating information with activation records (e.g., assigning priorities) and creating new units that represent meta-information about the domain blackboard (e.g., pointers to the current highest-rated units on the domain blackboard). The scheduling blackboard is the database for the scheduling problem.

- The application writer provides a base scheduler procedure that actually calls the primitive Execute operation for executing KS activations. We intend the base scheduler to be very simple; most of the knowledge about scheduling should be in the scheduling KSs. For example, if the scheduling KSs organize the activation records into a queue, the base scheduler need consist simply of a loop that removes the first element from the queue and calls for its execution. If the queue is ever empty, the base scheduler simply terminates, marking the end of system execution.
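The base scheduler's intended simplicity is easy to show. The sketch below is our own Python rendering of the queue example just described, not Hearsay-III's INTERLISP code; the record fields are illustrative.

# A sketch of a trivial base scheduler: scheduling KSs organize the
# queue; the base loop merely executes what they have ordered.

from collections import deque

class ActivationRecord:
    def __init__(self, ks, trigger_data):
        self.ks = ks                      # the matched knowledge source
        self.trigger_data = trigger_data  # blackboard data that matched

def execute(record):
    # Indivisible KS execution: the action is arbitrary code that updates
    # a blackboard and may cause new activation records to be enqueued.
    record.ks(record.trigger_data)

def base_scheduler(queue):
    while queue:                  # an empty queue marks end of execution
        record = queue.popleft()  # scheduling KSs maintain the order
        execute(record)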
5. Context Mechanism

While Choice Sets provide a means for representing an unmade decision about alternative interpretations, we still need a method of investigating those alternatives independently. For that, Hearsay-III supports a context mechanism similar to those found in AI programming languages such as QA4 [10] and CONNIVER [9]. The method by which KS triggering interacts with the context mechanism allows controlled pursuit of alternative lines of reasoning. A KS triggers in the most general context (highest in the tree) in which its pattern matches. Execution of that KS occurs in the same context and, unless it explicitly switches contexts, its changes are made in that context and are inherited down toward the leaves.

Contexts are sometimes denoted as unsuitable for executing KSs -- a condition called poisoned. Poisoned contexts arise from the violation of a Hearsay constraint (e.g., attempting to aggregate conflicting units). In addition, a KS can explicitly poison a context if, for example, the KS discovers a violated domain constraint. A KS activation whose execution context is poisoned is placed in a wait state until the context is unpoisoned. Special KSs, called poison handlers, are allowed to run in poisoned contexts, and specifically serve to diagnose and correct the problems that gave rise to the poisoning.

A common application for the context mechanism arises when alternative interpretations lack good "locality". First consider the example of SAFE's Planning Phase, which uses Choice Sets to represent alternative interpretations for control fragments. In the case of the input sentence

    "Send an acknowledgment to the imp and pass the message on to the host."

a Choice Set served well. The possible interpretations for this sentence include being put in parallel or in sequence with an existing structure; since all alternatives would be positioned identically in the existing aggregate structure, the Choice Set unit can be placed where the chosen interpretation eventually will go.

In some cases, however, locality is lacking. An example is the input sentence,

    "After receiving the message, the imp passes it to the host."

The possible interpretations for this include a demon ("The occurrence of x triggers y") and a sequence to be embedded in an existing procedure ("After x do y"). Since the demon interpretation resides at the same structural level as the procedure into which the sequence would be embedded, there is no convenient place to put the Choice Set representing these alternatives. Instead, the KSs producing these alternative interpretations put them in brother contexts, so that each can be pursued independently.

6. Relational Database

As mentioned earlier, the blackboard and all publicly accessible Hearsay-III data structures are represented in the AP3 relational database. In addition, any domain information which is to cause KS firing must also be represented in the database. This is because KSs are AP3 demons, and their triggering is controlled by activity in the database. The AP3 database is similar to those available in languages such as PLANNER [7], but also includes strong typing for each of the relational arguments in both assertion and retrieval. These typed relational capabilities are available for modeling the application domain directly.

7. Implementation and Current Status

The Hearsay-III system is implemented in AP3, which in turn is implemented in INTERLISP [12].
AP3 was chosen as an implementation language because it already contained the mechanisms needed to support Hearsay-III (e.g., contexts, demons and constraints, and strong typing). In fact, the design of Hearsay-III's initial implementation was almost trivial, being largely a set of AP3 usage conventions. However, efficiency considerations have since forced a substantial implementation effort.

Hearsay-III has been tested on two small applications: a cryptarithmetic problem and a cryptogram decoding problem. Three major implementation efforts are currently underway. The first of these, as described above, is the reimplementation of the SAFE system [1]. Second, Hearsay is being used as the basis for a system for producing natural language descriptions of expert-system data structures [8]. Finally, the system is being used as the basis for a "jitterer" which automatically transforms a program so that a transformation chosen by a user is applicable [4].

The Hearsay-III architecture seems to be a helpful one. The separation of competence knowledge from performance knowledge helps in rapidly formulating the expert knowledge required for a solution. Preliminary experience with the larger applications now under development seems to bear this out, and seems to indicate that performance (scheduling) is a difficult issue. The flexibility that the Hearsay-III architecture gives toward developing scheduling algorithms will undoubtedly go a long way toward simplifying this aspect of the overall problem-solving process.

Acknowledgments

We wish to thank Jeff Barnett, Mark Fox, and Bill Mann for their contributions to the Hearsay-III design. Neil Goldman has provided excellent and responsive support of the AP3 relational database system. Steve Fickas, Neil Goldman, Bill Mann, Jim Moore, and Dave Wile have served as helpful and patient initial users of the Hearsay-III system.

References

1. Balzer, R., N. Goldman, and D. Wile, "Informality in Program Specifications," IEEE Trans. Software Eng. SE-4, (2), March 1978.
2. Davis, R., Meta-Rules: Reasoning About Control, MIT AI Laboratory, AI Memo 576, March 1980.
3. Erman, L. D., F. Hayes-Roth, V. R. Lesser, and D. R. Reddy, "The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty," Computing Surveys 12, (2), June 1980. (To appear)
4. Fickas, S., "Automatic Goal-Directed Program Transformation," in 1st National Artificial Intelligence Conf., Palo Alto, CA, August 1980. (submitted)
5. Hayes-Roth, F., and V. R. Lesser, "Focus of Attention in the Hearsay-II System," in Proc. 5th International Joint Conference on Artificial Intelligence, pp. 27-35, Cambridge, MA, 1977.
6. Hayes-Roth, B., and F. Hayes-Roth, Cognitive Processes in Planning, The Rand Corporation, Technical Report R-2366-ONR, 1979.
7. Hewitt, C. E., Description and Theoretical Analysis (Using Schemata) of PLANNER: A Language for Proving Theorems and Manipulating Models in a Robot, MIT AI Laboratory, Technical Report TR-258, 1972.
8. Mann, W. C., and J. A. Moore, Computer as Author -- Results and Prospects, USC/Information Sciences Institute, Technical Report RR-79-82, 1979.
9. McDermott, D., and G. J. Sussman, The CONNIVER Reference Manual, MIT AI Laboratory, Memo 259a, 1974.
10. Rulifson, J. F., R. J. Waldinger, and J. A. Derksen, "A Language for Writing Problem-Solving Programs," in IFIP 71, pp. 201-205, North-Holland, Amsterdam, 1972.
11. Stefik, M., Planning with Constraints, Ph.D. thesis, Stanford University, Computer Science Department, January 1980.
12. Teitelman, W., Interlisp Reference Manual, Xerox Palo Alto Research Center, 1978.
13. Waterman, D. A., and F. Hayes-Roth, Pattern-Directed Inference Systems, Academic Press, New York, 1978.
 | 
	1980 
 | 
	67 
 | 
					
64 
							 | 
QUANTIFYING AND SIMULATING THE BEHAVIOR OF KNOWLEDGE-BASED INTERPRETATION SYSTEMS*

V. R. Lesser, S. Reed and J. Pavlin
Computer and Information Science Department
University of Massachusetts
Amherst, Mass. 01003

ABSTRACT

The beginnings of a methodology for quantifying the performance of knowledge-sources (KSs) and schedulers in a knowledge-based interpretation system are presented. As part of this methodology, measures for the "reliability" of an intermediate state of system processing and the effectiveness of KSs and schedulers are developed. Based on the measures, techniques for simulating KSs and schedulers of arbitrary effectiveness are described.

I INTRODUCTION

The development and performance-tuning of a knowledge-based interpretation system like the Hearsay-II speech understanding system [1] is still an art. There currently does not exist sufficient formal methodology for relating the performance characteristics of such a system to the performance characteristics of its components, i.e., knowledge-sources (KSs) and schedulers.** For that matter, there does not even exist an adequate framework for quantifying the performance of the system and its components in a uniform and integrated way. Thus, when the initial operational configuration, C1, of the Hearsay-II speech understanding system had poor performance, there existed no methodology for detailing in a quantifiable way what types of performance improvements in specific components would be needed to improve significantly the overall system performance. Therefore, the development of the completely reorganized C2 configuration, which turned out to have much superior performance, was based on "seat of the pants" intuitions. These intuitions were testable only when the new set of KSs were integrated together into a working system.

In the following sections we present the beginnings of a methodology for quantifying the performance of KSs and schedulers.

* This work was supported in part by the National Science Foundation under Grant ...-0412 and by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Defense Advanced Research Projects Agency, or the US Government.

** Fox [2] has made some efforts in this direction, but we feel that his model is too abstract to capture adequately the important issues. (A detailed discussion can be found in [4].)
We then show how this methodology can be used to simulate the performance of an upgraded component in a working system, so that more accurate estimates can be made of the overall performance improvement that would be realized if the component were actually upgraded.

II A MODEL FOR A HEARSAY-LIKE KNOWLEDGE-BASED SYSTEM

In our model, lower-level hypotheses are aggregated into more abstract and encompassing higher-level hypotheses (partial interpretations). The lower-level hypotheses are said to support the higher-level hypotheses. This aggregation process, accomplished by synthesis KSs, involves the detection of local consistency (or inconsistency) relationships among hypotheses.* In KS processing, the belief-values of supporting hypotheses are not changed as a result of detection of local consistency, as would be the case in a relaxation process. The construction of a higher-level hypothesis and its associated belief-value is an explicit encoding of the nature and degree of consistency found among its supporting hypotheses. The hope is that this incremental aggregation process resolves the uncertainty among competing interpretations and, simultaneously, distinguishes between correct and incorrect interpretations.

We have not discussed KSs as solely reducing the uncertainty in the system, because uncertainty is a measure of the distribution of belief-values and does not reflect the accuracy of hypotheses. We feel that KS processing causes changes in both certainty and accuracy in a system's database, and we have developed a measure, called "reliability", that combines the two. A good KS will produce higher-level hypotheses which are more reliable than the lower-level hypotheses supporting them.

III MEASURING THE SYSTEM STATE

Basic to our view of processing in knowledge-based systems is the concept of system state. The system state at any point in time is the current set of hypotheses and their relationships to the input data. It is through measures of the system state that we can talk in a uniform manner about the performance of KSs and schedulers.

For purposes of measuring reliability, we associate a hidden attribute with each hypothesis which we call its truth-value. This attribute measures the closeness of the hypothesized event to the correct event.

* To simplify this presentation, we focus here on synthesis KSs only, though prediction, verification, and extrapolation KSs also have a place in our model.
For task domains in which a solution is either totally correct or incorrect, we quantify truth-value as either 1 (true) or 0 (false), while in domains in which there are solutions of varying degrees of acceptability, truth-values range between 1 and 0.

One way of evaluating an intermediate state of processing is by measuring the reliability of the set of all possible complete interpretations (final answers) that are supported (at least in part) by hypotheses in the current state. We feel that a direct measure of this sort is not feasible, because it is very difficult in general to relate the set of partial interpretations to the very large set of complete interpretations they can support. We take an alternative approach, called reflecting-back, which is based on two premises. First, the creation of a high-level hypothesis is a result of detecting the consistency among its supporting hypotheses. This creation process is an alternative to actually changing the belief-values of the supporting hypotheses, as occurs in the relaxation paradigm. Thus, the creation of a high-level hypothesis implicitly changes the reliability of its supporting hypotheses. This change can be traced down to the input hypotheses, whose reliability is implicitly improved to the extent that they are aggregated into reliable high-level hypotheses. Second, we assume that processing which implicitly improves the reliability of the input hypotheses also improves the reliability of the complete interpretations supported by these hypotheses.

In the reflecting-back approach, we associate with each input hypothesis the highest-belief hypothesis it supports. The truth- and belief-values of this highest-belief hypothesis are reflected back to the input hypothesis. The process is illustrated in Figure 1. It should be stressed that the hypotheses' truth-values and reflected-back values are used only for measuring the system state; they are not available to KSs during processing.

Our measure for the reliability of an intermediate system state is based on a measure of the reliability of input competitor sets, computed from the reflected-back belief- and truth-values. Intuitively, a measure of reliability for a competitor set should have the following properties, based on both the belief- and truth-values of its hypotheses:
1. With respect to accuracy, reliability should be high if a true hypothesis has a high belief-value or if a false hypothesis has a low belief-value, while reliability should be low if a true hypothesis has a low belief-value or a false hypothesis has a high belief-value;

2. With respect to uncertainty, reliability should be high if one hypothesis in a competitor set has a high belief-value and the rest have low belief-values, while reliability should be low if all the hypotheses have similar belief-values.

A measure for the reliability, RC(S), of a competitor set, S, that captures and adequately combines both of these properties is:

    RC(S) = 1 - avg_{h in S} |TV(h) - BV(h)|

which is equivalent, in the case of binary truth-values, to the correlation of truth- and belief-values:

    RC(S) = avg_{h in S} [BV(h)*TV(h) + (1-BV(h))*(1-TV(h))]

where BV(h) is the belief-value of hypothesis h and TV(h) is the truth-value of hypothesis h. Other measures may also be applicable, but of those we considered this one best captures our intuitive notion of reliability.

Based on this measure of competitor-set reliability, we can construct, for instance, a measure of processing effectiveness associated with the current intermediate system state. This measure is the average, over the input competitor sets, of the difference between the initial reliability and the reliability of the intermediate state. The initial reliability is the average reliability of the competitor sets formed from input hypotheses, using the initial belief- and truth-values. The reliability of the intermediate state is the average reliability of the competitor sets formed from the input hypotheses, where the reflected-back truth- and belief-values of hypotheses are used in place of the original ones. In Figure 1, if the three competitor sets were the only ones on the input level and each one contained just the two hypotheses shown, then the initial reliability, RI, is .517, the reliability of the intermediate state, RS, is .575, and the processing effectiveness, PE, is .058:

    RI = 1/3 [(.7+.6)/2 + (.5+.4)/2 + (.7+.2)/2] = .517
    RS = 1/3 [(.6+.5)/2 + (.7+.5)/2 + (.45+.7)/2] = .575
    PE = RS - RI = .058

A positive PE value, as in this example, indicates that some uncertainty and error has been resolved. The larger the value, the more resolution, and the more effective the processing.
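To make the arithmetic concrete, here is a small Python check of the measure (the paper itself gives only the mathematics). Each hypothesis is a (belief, truth) pair; the particular pairs below are one assignment consistent with the per-hypothesis terms printed above, and they reproduce RI = .517, RS = .575, and PE = .058.

    def rc(s):
        # RC(S) = 1 - avg over h in S of |TV(h) - BV(h)|
        return 1 - sum(abs(tv - bv) for bv, tv in s) / len(s)

    # Three competitor sets of two hypotheses each: (belief, truth).
    initial   = [[(.7, 1), (.4, 0)], [(.5, 1), (.6, 0)], [(.7, 1), (.8, 0)]]
    reflected = [[(.6, 1), (.5, 0)], [(.7, 1), (.5, 0)], [(.45, 1), (.3, 0)]]

    ri = sum(rc(s) for s in initial) / len(initial)        # 0.5167 -> .517
    rs = sum(rc(s) for s in reflected) / len(reflected)    # 0.575
    pe = rs - ri                                           # 0.058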
IV MEASURING THE RESOLVING POWER OF A KS

We define the instantaneous resolving power of a KS as the change in reliability due to a single KS execution. This change is measured on competitor sets constructed from the KS input hypotheses. Thus, instead of calculating the reflected-back reliability of the entire system state, the procedure is localized to the subset of the state directly affected by the KS execution. We measure the change in reliability as a result of KS execution by measuring, before KS processing, the reliability of the KS input competitor sets; then measuring, after KS processing, the reliability of these sets based on values reflected back from the KS output hypotheses; and finally taking the difference between the results obtained.

[Figure 1: An example of the reflecting-back process. The system state contains three competitor sets (represented by boxes) on the input level, and a number of higher-level hypotheses. The hypotheses are represented as circles with belief-values on the left and truth-values (T=1 and F=0) on the right. The initial values are in the bottom half of each circle; the reflected-back values are in the top half. Thick lines show the support link to the highest belief-value hypothesis supported by each input hypothesis, and indicate the source of the reflected-back values. Hypotheses which intervene between the inputs and their highest-belief supported hypotheses are not shown.]

The resolving power of a KS can now be defined as its average instantaneous resolving power over a series of system executions. This view of KS resolving power does not take into account the global impact of KS execution on the entire system state and on future processing. Rather, it is a local, instantaneous measure of the effects of KS execution. The global effects occur through KS interactions, which we believe should be separated from our measure of the resolving power of a single KS.
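The before/after measurement just described can be sketched directly in terms of the rc() function from the previous fragment. The reflect_back() argument here is a hypothetical helper standing in for the tracing of each input hypothesis to its highest-belief supported hypothesis.

    def instantaneous_resolving_power(input_sets, reflect_back, ks_execute):
        # Reliability of the KS's input competitor sets before execution...
        before = sum(rc(s) for s in input_sets) / len(input_sets)
        ks_execute()                    # run the KS; outputs go on the blackboard
        # ...then again using values reflected back from the KS outputs.
        reflected = [reflect_back(s) for s in input_sets]
        after = sum(rc(s) for s in reflected) / len(reflected)
        return after - before           # the change in reliability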
V SIMULATING A KS

Given a formal measure of KS resolving power, we can simulate KSs of any desired power.* This is accomplished by introducing an "oracle" which knows how to judge the closeness of a hypothesized interpretation to the correct interpretation (this is the source of the truth-values). Our reliability measures can thus be calculated during processing, rather than in retrospect after the system has completed processing. Therefore, a system does not have to complete execution in order to be evaluated. A KS is simulated in stages: a candidate evaluator rates candidate output hypotheses using simple local knowledge, and a resolver then adjusts these ratings, with reference to the oracle, to achieve the desired resolving power.

We believe that our approach to simulating KSs of different resolving power, which makes heavy use of an oracle, will prove useful in designing and debugging knowledge-based systems.** However, there are some limitations:

- Our simulation of KS resolving power is based on a combination of simple knowledge about local consistency and reference to an oracle, while real KSs infer truth from local consistency alone (and falsehood from local inconsistency).

- The behavior of different simulated KSs sharing similar errors in knowledge will not be correlated, due to our statistical approach to KS simulation.

Given these limitations, we do not expect a simulated KS to behave exactly the same as a real KS. We hope, however, that the essential behavior of a KS has been captured, so that system phenomena are adequately modelled. In order to validate our models of KS power, we plan to analyze the behavior of KSs in some existing knowledge-based systems. A measure of KS power will be taken for an existing KS; then the KS will be replaced by a simulated KS of the same power, and the overall system behavior compared in the two cases. The results of these experiments should give us some understanding of the extent to which data derived from our simulation studies can be used to predict the behavior of real systems.

* In many cases, it is relatively easy to design a KS which provides moderate accuracy. Most of the effort in knowledge engineering is spent in increasing this accuracy to gain superior performance.

** The work of Paxton [5] comes the closest to our approach, but was much more limited.
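The following Python fragment is an illustrative sketch of the two-stage scheme, and is our own construction rather than the paper's algorithm: naive candidate ratings stand in for the candidate evaluator, and a simple blend-toward-truth rule with random noise stands in for the statistical resolver. All names and the blending rule are assumptions made for illustration.

    import random

    def simulated_ks(candidates, oracle, power, noise=0.1):
        # candidates: (hypothesis, naive belief-value) pairs from the
        # candidate evaluator; oracle(h) returns the truth-value (0 or 1).
        outputs = []
        for h, naive_bv in candidates:
            target = oracle(h)
            # `power` in [0, 1] controls how strongly truth shows through
            # the naive rating; noise keeps the behavior statistical.
            bv = (1 - power) * naive_bv + power * target
            bv = min(1.0, max(0.0, bv + random.uniform(-noise, noise)))
            outputs.append((h, bv))
        return outputs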
VI SIMULATION OF ACCURACY IN THE SCHEDULER

Reliability measures can also be used in the simulation of a scheduler of a specific accuracy. The task of a scheduler is choosing a KS instantiation for execution. A KS instantiation is a KS-stimulus pair, where the stimulus is the set of hypotheses which caused the KS to be considered for scheduling. The scheduler evaluates alternative instantiations according to its knowledge of the characteristics of the KSs, the stimuli, and the current state of processing. The effects of future processing are not factored into this model of scheduling; we take an instantaneous view of scheduling decisions. Because of this, we are unable to model scheduling algorithms, such as the "shortfall density scoring method" [7], which use information about future processing. We hope to develop a formulation that includes this type of information.

A good scheduler chooses for execution the KS instantiation that will most improve the reliability of the current system state. The accuracy of a single scheduling decision is defined relative to the performance of an optimum scheduler, which uses accurate information about the resolving power of the KSs and the reliability of the KS stimuli and system state. The accuracy of a scheduler is the average of the accuracy of many scheduling decisions.

We view the optimum scheduling process in two steps:

1. For each KS instantiation on the scheduling queue, make accurate predictions concerning its instantaneous resolving power. These predictions involve determining the truth-values of the stimulus hypotheses (using the oracle) and knowledge of the resolving power of the KS.

2. Make accurate predictions as to the global system state which would result from scheduling each instantiation, given the predictions of step 1. These predictions will determine optimum ratings for the instantiations and result in an optimum schedule.

Our approach to modelling the scheduler is to obtain statistically accurate ratings for the instantiations, based on the optimum schedule, and then choose for execution an instantiation from within the ordering which results. The position in the ordering of the chosen instantiation depends on the desired accuracy of the scheduler being modelled; the closer to the top of the order, the more accurate the scheduler.
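A minimal sketch of this statistical scheduler model, assuming an optimum_rating function derived from the optimum schedule; the particular rule mapping accuracy to a position in the ordering is our own simplification, chosen so that accuracy 1.0 always takes the top choice and lower accuracies drift down the order.

    import random

    def simulated_scheduler(instantiations, optimum_rating, accuracy):
        # Rank pending instantiations by their optimum ratings, best first.
        ordered = sorted(instantiations, key=optimum_rating, reverse=True)
        # Lower accuracy widens the window of positions we may pick from.
        max_pos = int(round((1.0 - accuracy) * (len(ordered) - 1)))
        return ordered[random.randint(0, max_pos)]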
We feel it would be an error to model scheduling only as a function of the truth-values of stimulus hypotheses. Real schedulers do not have access to the truth-values of hypotheses, but only infer truth from belief-values and processing history. The point is that two instantiations of the same KS, whose stimulus hypotheses have equivalent characteristics (same belief-value, level of abstraction, database region, processing history, etc.) except for their truth-values, would be rated the same by even the best scheduler.

Additionally, in order to determine the rating of a KS instantiation, real schedulers [3] consider other factors besides the characteristics of the stimulus hypotheses. For example, schedulers take into account such factors as the balance between depth-first vs. breadth-first processing, or between executing KSs that work in areas with rich processing history vs. executing KSs that work where little processing has been done. These additional considerations are, in fact, heuristics which attempt to capture the concept of improvement in the reliability of the system state. Thus, in our view, a scheduler should be characterized in terms of its ability to estimate the improvement in system-state reliability, rather than its ability to detect the truthfulness of the instantiation's stimulus hypotheses.

We could have modelled the scheduler just as we modelled KSs, with a candidate evaluator and a scheduling resolver. The candidate evaluator would take the generated KS instantiations and give them ratings based on simple scheduling knowledge. The scheduling resolver would minimally alter these ratings (with statistical perturbation) to produce an ordering for the instantiations which corresponds to a desired scheduler accuracy. For several reasons, too complicated to discuss in this short paper, we have not used such an approach for modelling schedulers. Further details of this issue and a more detailed formulation of scheduling measures are discussed in an extended version of this paper [4].

VII SUMMARY

This work represents the beginnings of a methodology for understanding in quantitative terms the relationship between the performance of a knowledge-based system and the characteristics of its components. This quantification may also allow us to develop simulations of these systems which can accurately predict the performance of alternative designs.

ACKNOWLEDGMENTS

We would like to recognize the helpful comments on various drafts of this paper given by Daniel Corkill and Lee Erman.

REFERENCES

[1] Erman, L. D., F. Hayes-Roth, V. R. Lesser and R. Reddy (1980), "The Hearsay-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty," Computing Surveys, 12:2, June 1980.

[2] Fox, M. S. (1979), "Organizational Structuring: Designing Large Complex Software," Technical Report CMU-CS-79-155, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Pennsylvania.

[3] Hayes-Roth, F. and V. R. Lesser (1977), "Focus of Attention in the Hearsay-II Speech Understanding System," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pp. 27-35, Cambridge, Massachusetts, 1977.

[4] Lesser, V. R., J. Pavlin and S. Reed (1980), "First Steps Towards Quantifying the Behavior of Knowledge-Based Systems," Technical Report, Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts.
[5] Paxton, W. H. (1978), "The Executive System," in D. E. Walker (editor), Understanding Spoken Language, Elsevier, North-Holland, N.Y., 1978.

[6] Rosenfeld, A. R., R. A. Hummel, and S. W. Zucker (1976), "Scene Labeling by Relaxation Operators," IEEE Transactions on Systems, Man and Cybernetics, SMC-6, pp. 420-433, 1976.

[7] Woods, W. A. (1977), "Shortfall and Density Scoring Strategies for Speech Understanding Control," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pp. 18-26, Cambridge, Massachusetts, 1977.
 | 
	1980 
 | 
	68 
 | 
					
65 
							 | 
Representation of Task-Specific Knowledge in a Gracefully Interacting User Interface

Eugene Ball and Phil Hayes
Computer Science Department, Carnegie-Mellon University
Pittsburgh, PA 15213, USA

Abstract

Command interfaces to current interactive systems often appear inflexible and unfriendly to casual and expert users alike.* We are constructing an interface that will behave more cooperatively (by correcting spelling and grammatical errors, asking the user to resolve ambiguities in subparts of commands, etc.). Given that present-day interfaces often absorb a major portion of implementation effort, such a gracefully interacting interface can only be practical if it is independent of the specific tool or functional subsystem with which it is used.

* This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

Our interface is tool-independent in the sense that all its information about a particular tool is expressed in a declarative tool description. This tool description contains schemas for each operation that the tool can perform, and for each kind of object known to the system. The operation schemas describe the relevant parameters, their types and defaults, and the object schemas give corresponding structural descriptions in terms of defining and derived subcomponents. The schemas also include input syntax, display formats, and explanatory text. We discuss how these schemas can be used by the tool-independent interface to provide a graceful interface to the tool they describe.

1. Introduction

Command interfaces to most current interactive computer systems tend to be inflexible and unfriendly. If the user of such a system issues a command with a trivial (to a human) syntactic error, he is likely to receive an uninformative error message, and must re-enter the entire command. The system is incapable of correcting the error in the "obvious" way, or of asking him to retype only the erroneous segment, or of providing an explanation of what the correct syntax really is. Anyone who has used an interactive computing system is only too familiar with such situations, and knows well how frustrating and time-consuming they are, for expert as well as novice users.

We are involved in a project to build an interface which will behave in a more flexible and friendly way, one that will interact gracefully. As we have described in earlier work [3, 4], graceful interaction involves a number of relatively independent skills, including:

- the parsing of ungrammatical input, either to correct it or to recognize any grammatical substrings;
- robust communication techniques to ensure that any assumptions the system makes about the user's intentions are implicitly or explicitly confirmed by the user;

- the ability to give explanations of how to use the system or of the system's current state;

- interacting to resolve ambiguities or contradictions in the user's specification of objects known to the system;

- keeping track of the user's focus of attention;

- describing system objects in terms appropriate to the current dialogue context.

Providing these facilities is clearly a major programming task requiring extensive use of Artificial Intelligence techniques (see [2] for just the flexible parsing aspect). We believe that it is unrealistic to expect the designers of each interactive sub-system (or tool) to implement a user interface with these capabilities. Therefore, instead of constructing a gracefully interacting interface for a single application, we are attempting to build a tool-independent system, which can serve as the user interface for a variety of functional sub-systems.

The availability of a tool-independent user interface would greatly simplify the construction of new computer sub-systems. Currently, even if the new system is not intended to be gracefully interacting but merely to perform according to minimal standards, a large amount of implementation effort must be devoted to user interface issues. The system designers must decide on a style of interaction with the user, select the general format and detailed syntax of all commands, and provide for the detection of illegal input. The command language must then be thoroughly checked to ensure that it does not contain ambiguities or misleading constructions, and that likely error sequences will not be misinterpreted and cause unrecoverable system actions. Often, the design can only be completed after an initial implementation of the system has produced feedback about the usability of the human interface.

This design process represents the minimum effort necessary to produce a system that is even usable by a large number of people; if a superior (but still far from gracefully interacting) interface, or one which can be used by non-programmers, is required, much more work must be expended. Editing facilities, which are required in most interactive systems (at least for correction of typed input), must be fully integrated into the sub-system; compatibility with other editors in common use on the computer must be considered, even though this may lead to difficult interactions with the sub-system command language. Error detection and reporting must be improved; generating coherent diagnostics for the inexperienced user can be very difficult indeed. Online documentation must be provided,
The complexity    of this task    often    means    that    most of the    implementation    effort in adding a new tool to a computer system is    absorbed by the user interface.    Technological    trends are aggravating    the problem by raising    the level of performance    expected    of an interface.    In particular, as    high-resolution    graphics displays equipped    with pointing devices    become available, users expect to use menu-selection    and other    more sophisticated    forms of input, and to see system output    displayed    in an attractive    graphical    format.    The very recent, but    growing,    availability    of speech input and output will intensify this    pressure for sophistication.    An additional    reason    for    constructing    a tool-independent    interface    is to make the computer    system as a whole appear    consistent    to the user.    If the interfaces    for different    tools use    different    conventions,    then no matter how sophisticated    each of    them is individually.    the user is likely to be confused as he moves    from one to another because the expectations    raised by one may    not be filled by the other.    For all these reasons, we are attempting    to make our gracefully    interacting    interface    as tool-independent    as possible.    In the    remainder    of this paper we outline the system structure    we have    developed,    and go on to give further details about one component    of this structure, the declarative format in which information    about    the tool is made available to the interface, together with sketches    of    how    the    tool-independent    part    of the    system    uses    the    information thus represented.    2.    System    Structure    The basis for our system structure    is the requirement    that the    interface    contain    no tool-dependent    information.    All such    information    must be contained    in a declarative    data base called    the tool description.    In an effort further to improve portability and    reduce duplication    of effort between    interfaces    implemented    on    different    hardware configurations,    we have made a second major    separation    between the device-dependent    and -independent    parts    of the interface.    The resulting structure is illustrated in figure 1.    User    Agent    Figure    1. User Interface    System    Structure    The intelligent functions of the interface, those itemized above,    are isolated in a tool and device independent    User Agenf,    which    interacts    with    the    tool    through    a narrow    interface    that    is    completely    specified    by    the    declarative    tool    description.    Communication    between the Agent and the user is not direct, but    goes via a device-dependent    Front-End,    which allows the Agent to    specify its output in a high-level device-Independent    manner, and    which    preprocesses    the    user’s    input    into    a    standard,    device-independent,    format.    Communication    between the Agent    and Front-End is thus restricted to a well-defined    format of input    and output requests.    Display formats in which to realize the tool’s    and Agent’s high-level output requests are specified declaratively    in the tool description.    
The basic function of the Agent is to establish from the user's input what functional capability of the tool the user wishes to invoke, and with what parameters he wishes to invoke it. Once this is established, the Agent issues the appropriate request to the tool and reports to the user relevant portions of the tool's response. To make this possible, the tool description includes a specification of all the operations provided by the tool in terms of their parameters and their types, defaults, etc., plus specifications of all the abstract objects manipulated by the tool in terms of their defining (and descriptive) sub-components. This representation of operations and objects follows Minsky's frames paradigm [7] in the spirit of KRL [1] or FRL [13]. The representation allows the Agent to follow the user's focus of attention down to arbitrarily deeply nested aspects of object or operation descriptions to resolve ambiguities or contradictions. This facility depends on the tool to provide resolution of object descriptions into sets of referents.

The tool description also specifies the syntax for the user's input descriptions of the objects and operations. The Agent applies the grammar thus specified to the user's input (as pre-processed by the Front-End) in a flexible way, providing the kinds of flexible parsing facilities mentioned above. The user may also request information about the tool or other help, and the Agent will attempt to answer the query with information extracted from the tool description and displayed according to tool-independent rules.

Besides requesting that the Front-End output text strings to the user, the Agent may also specify instances of system objects. The Front-End will then display the objects according to a display format specified in the tool description. For the Front-End we are using, which operates through a graphics display equipped with a pointing device, this allows the user to refer directly to system objects by pointing. The Front-End reports such pointing events to the Agent in terms of the system object referred to. Typed input is pre-processed into a stream of lexical items, and other pointing events, such as to menus, can also be reported as lexical items. We are also experimenting with a limited-vocabulary, single-word (or phrase) speech recognizer, isolated in the Front-End. Its output can also be reported as a lexical item.

This concludes the overview of the system structure. For the remainder of the paper, we will concentrate on the representation employed in the tool description, and the way the information thus represented is used by the remainder of the system. Our examples will be in terms of the tool being used as a test-bed for the development of the Agent and Front-End: a multi-media message system, capable of transmitting, receiving, filing, and retrieving pieces of electronic mail whose bodies contain mixtures of text, speech, graphics, and fax.
3. Representation of Task-Specific Information

A functional sub-system, or tool, is characterized for the user interface program by a database which describes the objects it manipulates and the operations it can perform. This tool description is a static information structure (provided by the sub-system implementor) which specifies everything that the User Agent needs to know about the tool. We'll first give a brief overview of the structure of the tool description and how the Agent uses it to provide an interface to the sub-system. Then the content of the database will be explained in more detail, with examples showing how this information is utilized in the processing of commands from the human user. The tool description consists of:

- Declarations of the data objects used by the tool. These declarations specify the internal structure of each object type defined within a particular sub-system. The tool may also contain references to object types that are defined in a global database and are used by many different tools (e.g. files, user names, dates and times). The object declaration provides rules for displaying an instance of the object, syntax for descriptions of it in commands, and documentation that can be used to explain its function to the user.

- Descriptions of the operations which the tool can perform. Each operation entry specifies the parameters that the Agent must provide to the tool to invoke that action. It also defines the legal syntax for the command, provides some simple measures of its cost and reversibility, and supplies a text explanation of its purpose.

As mentioned earlier, the primary goal of the Agent is to help the human user to specify sub-system operations to be executed. To carry out this function, it parses the user's commands (including text, pointing, and possibly spoken input) according to the syntax specifications in the tool description. It decides which operation has been selected and attempts to fill out the parameter template associated with it. This process may involve interpreting descriptions of sub-system objects, negotiating with the user about errors or ambiguities that are discovered, and explaining the meaning of command options.

3.1. Object Descriptions

The tool description contains a declaration of each data type that is defined within that sub-system. Data objects which will be manipulated by both the Agent and the tool are represented as lists and property lists (sets of name-value pairs), using the formalism defined by Postel for the communication of Internet messages [9]. This representation is self-describing, in that the structure and type of each data element is represented explicitly in the object. Thus, complexly structured objects can be transferred between the User Agent and the tool, and the Agent can interpret them according to the information contained in the tool description. For example, the following is the internal representation of a simple message (primitive elements are integers or text strings, and brackets are used to delimit sets of name-value pairs):

    [ StructureType: ObjectInstance
      ObjectName: Message
      Sender:    [ PersonName: [ First: John  Middle: Eugene  Last: Ball ]
                   Host: [ Site: CMU  Machine: A ] ]
      Recipient: [ PersonName: [ First: Phil  Last: Hayes ]
                   Host: [ Site: CMU  Machine: A ] ]
      Copies: []
      Date:    [ Year: 1980  Month: April  Day: 10  Weekday: Thursday
                 AMPM: AM  Hour: 11  Minutes: 16  Seconds: 37 ]
      Subject: "Meeting tomorrow?"
      Body:    "Phil, Could we meet tomorrow at 1 pm? -Gene"
    ]
For    example,    the    following    is    the    internal    representation    of a simple    message    (primitive    elements    are    integers or text strings, and brackets are used to delimit sets of    name-value pairs):    [    StructureType:    Objectlnstance    ObjectName: Message    Sender:    [    PersonName:    [ First: John    Middle: Eugene    Last: Bali]    Host:    [ Site: CMU    Machine: A ]    1    Recipient:    [    PersonName:    [ First: Phil    Last: Hayes ]    Host:    [ Site: CMU    Machine: A ]    1    Copies: []    Date: [ Year:1980    Month:April    Day:10 Weekday:Thursday    AMPM: AM    Hour: 11    Minutes: 16    Seconds: 371    Subject: “Meeting tomorrow?”    Body: “Phil, Could we meet tommorrow    at 1 pm? -Gene”    1    The structure of a Message    is defined in the tool data base by    a message schema.    This schema declares each legal field in the    object and its type, which may be a primitive type like TEXT or an    object type defined    by another schema    in the tool description.    The schema also specifies the number of values that each field    may have, and may declare default values for new instances of the    object.    The following is a simplified schema for a message object    and some of its components:    StructureType:    ObjectSchema    ObjectName:    Message    DescriptionEvaluation:    ToolKnows    Schema:    Sender: [ FillerType: Mailbox ]    Recipient:    [ FillerType: Mailbox    Number: OneOrMore ]    Copies:    [ FillerType: Mailbox    Number: NoneOrMore    ]    Date:    [ FillerType: Date ]    Subject:    [ FillerType: MultiMediaDocument    ]    Body:    [ FillerType: MultiMediaDocument    ]    After:    [ FillerType: Date    UseAs: DescriptionOnly    ]    Before:    [ FillerType: Date    UseAs: DescriptionOnly    ]    StructureType:    ObjectSchema    ObjectName:    Mailbox    DescriptionEvaluation:    ToolKnows    Schema:    [    PersonName:    [ FillerType: PersonName ]    Host:    [ FillerType: Host ]    11    StructureType:    ObjectSchema    ObjectName:    PersonName    DescriptionEvaluation:    OpenEnded    Schema:    [    First:    [ FillerType: TEXT ]    Middle:    [ FillerType: TEXT    Number: NoneOrMore]    Last:    [ FillerType: TEXT ]    II    StructureType:    ObjectSchema    ObjectName:    Host    DescriptionEvaluation:    ToolKnows    Schema:    [    Site:    [ FillerType: TEXT Default: CMU ]    Machine:    [ FillerType: TEXT Default: A ]    11    In addition to defining the structure of an instance of a message    object, the schema includes fields which are used by the Agent to    interpret descriptions    of messages.    The DescriptionEvaluation    field tells the Agent how to evaluate a description    of an object in    this    class.    For    example,    ToolKnows    indicates    that    the    sub-system    is prepared    to evaluate    a description    structure    and    return a list of instances matching the description.    A description    structure is an instance of the object with special wild card values    for some fields and with possible extra DescriptionOnly    entries,    such as the After field in the example above.    Since a description    of one object may reference    other objects (“the messages from    members of the GI project”),    the Agent uses the hierarchy defined    by the object declarations    to guide its evaluation    of the user’s    commands.    
Each level generates a new sub-task in the Agent which processes that portion of the description, and is responsible for resolving ambiguities that it may encounter. This structure also makes it possible to follow the user's focus of attention, since new input may apply to any of the currently active subgoals ("No, only the ones since October" or "No, only the ones at ISI").

Each object declaration also includes information which is used by the Front-End module to display instances of that object type. Several different formats may be defined; the tool (or Agent) selects an appropriate format by name each time it displays an object. The format declaration specifies which fields of the object to display and their layout; it may also provide parameters to the Front-End which invoke special capabilities (highlighting, font selection) of the display hardware. For example, the following section of the description for a Message defines two different styles of message header display.

    DisplayFormat: [
      ShortHeader:   // style: From Hayes on 18-Mar
        [ Text: "From &Sndr& on &Day&"
          Sndr: [ Field: Sender/PersonName/Last  FaceCode: Bold ]
          Day:  [ Field: Date  Style: ShortDay ] ]
      FullHeader:    // From Eugene Ball on 10-Apr-80 11:16am about 'Meeting tomorrow?'
        [ Text: "From &Sndr& on &Day& about '&Subj&'"
          Sndr: [ Field: Sender/PersonName  Style: FullName ]
          Day:  [ Field: Date  Style: FullDate ]
          Subj: [ Field: Subject  Style: FullSubj ] ] ]

Each object declaration also defines the legal syntax that can be used in commands that refer to objects of that type. In order to understand a phrase like "the messages from Phil since March", the Agent uses the syntax definition associated with the object type Message, which in turn refers to the syntax for other objects like Date and PersonName. In the example below, question marks indicate optional syntactic elements, asterisks mark fields that can be repeated, slashes indicate word-class identifiers, and ampersands mark places where the syntax for other object types is to be expanded.
Some    syntactic    entries    also    specify    correspondences    to particular fields in the object; as a command    is parsed, the Agent builds description    structures    which represent    the objects referred to in the command.    Thus, the phrase “since    March”    results in an appropriate    After clause in the Message    description.    The grammar    defined    by these syntax    entries    is    applied to the user’s input in a flexible way [2], so that grammatical    deviations such as misspellings,    words run together,    fragmentary    input, etc. can still be parsed correctly.    Related object types:    Mailbox    Display    Multi Media    Document    Operations:    Send Date    Edit    3.2.    Operation    Descriptions    Each sub-system operation which can be invoked by the Agent    is also described    by an entry in the tool data base. An operation    entry specifies the parameters that the Agent must provide to the    tool to have it perform    that action.    The object type of each    parameter    is declared    and the tool description    can optionally    indicate that a parameter    position may be filled by a set of such    objects.    In addition, constraints    on the legal values of a parameter    are sometimes    provided,    which    can help the Agent    to avoid    requesting an illegal operation.    Syntax:    [    Pattern:    (?/Determiner    /MessageHead    */MessageCase)    Determiner:    (the (all ?of ?the) every)    MessageHead:    (messages notes letters mail)    MessageCase:    [    StructureType:    Operation    OperationName:    Forward    Reversible:    false    Cost: moderate    Parameters:    [    Message:    [ FillerType: Message    Number: OneOrMore ]    Recipient:    [ FillerType: Mailbox    Number: OneOrMore ]    Forwarder:    [ FillerType: Mailbox    MustBe: CurrentUser ]    ( [ Syntax:    (/From &Mailbox)    StructureToAdd:    [ Sender: &Mailbox ]]    [ Syntax:    (/Before    &Date)    StructureToAdd:    [ Before: &Date]]    [Syntax:    (/After &Date)    1    Syntax:    [    Pattern:    (/Forward    %Message to %Recipient)    Forward:    (forward send mail (pass on) deliver redeliver)    1    Explanation:    “Message Forwarding    StructureToAdd:    [ After: &Date ]]    1    From:    (from (arriving from) (that came from) (/Mailed by))    Mailed:    (mailed sent delivered)    Before:    (before (dated before) (that arrived before))    After:    (after since (dated after))    1    A copy of a message that was delivered to you can be sent to    another person with the Forward command.    You must specify    the message to forward and the destination mailbox. Sample    syntax:    ‘Forward the last message from Phil to Adams at ISIE”’    Finally, the object description    provides    information    which    is    used    to    automatically    construct    documentation    and    provide    answers to user requests for help.    Each object contains a brief    text explanation    of its structure    and purpose in the sub-system,    which can be presented to the user in response to a request for    information.    The documentation    is also placed into a Zog [14]    The example entry for the forward    operation    also mcludes a    declaration    of the legal syntax for the command, and a text entry    which will be included in its documentation    frame. It also indicates    that this command is not reversible (once executed    it cannot be    undone),    and that it is moderately    expensive    to execute.    
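The following Python sketch illustrates, under similar assumptions, how StructureToAdd-style entries can fold parsed case phrases into a description structure. The trigger table and the crude token scan are invented stand-ins for the flexible parser [2] actually used.

    # Minimal sketch of folding parsed case phrases into a description
    # structure via StructureToAdd-style entries, as in the Message
    # syntax above. Table and parser are illustrative assumptions.

    MESSAGE_CASES = [          # (trigger words, field to add)
        (("from",), "Sender"),
        (("before", "dated"), "Before"),
        (("after", "since"), "After"),
    ]

    def describe(phrase):
        """Build a Message description structure from a crude token scan."""
        desc = {"ObjectName": "Message"}
        tokens = phrase.lower().split()
        for i, tok in enumerate(tokens):
            for triggers, field in MESSAGE_CASES:
                if tok in triggers and i + 1 < len(tokens):
                    desc[field] = tokens[i + 1]   # e.g. After: "march"
        return desc

    print(describe("the messages from Phil since March"))
    # -> {'ObjectName': 'Message', 'Sender': 'phil', 'After': 'march'}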
Finally, the object description provides information which is used to automatically construct documentation and provide answers to user requests for help. Each object contains a brief text explanation of its structure and purpose in the sub-system, which can be presented to the user in response to a request for information. The documentation is also placed into a Zog [14] information network at a node representing the data type, which is connected to Zog frames for other objects referenced by that type. The user can find information about a related sub-system object by choosing a link to follow (with a menu selection); that frame is quickly displayed. The legal syntax for descriptions of the object and links to frames for operations which manipulate it are also included in the Zog network. In the following example documentation frame for Message, the italicized entries are buttons which can be activated to request more information about a specific topic:

    Message: (Multi-Media Message System)
      Each message contains the author and date of origination and is
      addressed to (one or more) destination mailboxes; copies may
      optionally be sent to additional destinations. The body of a
      message may contain uninterpreted text, images, sketches, and
      voice recordings.
      Example syntax:
        'messages [from Person] [before/after Date] [about String]'
      Detailed syntax
      Related object types: Mailbox  Display  MultiMediaDocument
      Operations: Send  Date  Edit

3.2. Operation Descriptions

Each sub-system operation which can be invoked by the Agent is also described by an entry in the tool data base. An operation entry specifies the parameters that the Agent must provide to the tool to have it perform that action. The object type of each parameter is declared, and the tool description can optionally indicate that a parameter position may be filled by a set of such objects. In addition, constraints on the legal values of a parameter are sometimes provided, which can help the Agent to avoid requesting an illegal operation.

    [ StructureType: Operation
      OperationName: Forward
      Reversible: false
      Cost: moderate
      Parameters: [
        Message:   [ FillerType: Message  Number: OneOrMore ]
        Recipient: [ FillerType: Mailbox  Number: OneOrMore ]
        Forwarder: [ FillerType: Mailbox  MustBe: CurrentUser ] ]
      Syntax: [
        Pattern: (/Forward %Message to %Recipient)
        Forward: (forward send mail (pass on) deliver redeliver) ]
      Explanation: "Message Forwarding
        A copy of a message that was delivered to you can be sent to
        another person with the Forward command. You must specify the
        message to forward and the destination mailbox. Sample syntax:
        'Forward the last message from Phil to Adams at ISIE'" ]

The example entry for the Forward operation also includes a declaration of the legal syntax for the command, and a text entry which will be included in its documentation frame. It also indicates that this command is not reversible (once executed it cannot be undone), and that it is moderately expensive to execute. This information is used by the Agent to select an appropriate style of interaction with the user; for example, irreversible operations will usually require explicit confirmation before the request is given to the sub-system for execution.

4. Conclusion

The design and implementation of a good user interface for a computer sub-system is a difficult and time-consuming task; as new techniques for communication with computers (especially high resolution displays and speech) gain widespread use, we expect this task to become even more expensive. However, we also feel that the typical user interface must be made much more robust, graceful, and intelligent. For this goal to be feasible, substantial portions of the interaction with the user must be independent of the details of the application, so that the development cost of the user interface code can be shared by many sub-systems.

Therefore, we are designing a generalized User Agent which can be used to control a variety of different sub-systems. The Agent carries on a dialog with the human user; it can understand a variety of different command styles, recognize and correct minor syntactic or spelling errors, supply default values for command arguments based on context, and provide explanations when requested. All of the information that the Agent needs to know about the application system is explicitly stored in a tool description provided by the sub-system implementor. This paper has concentrated on the content of that data base, detailing the information represented there and demonstrating how the Agent can apply it to provide a sophisticated interface to a specific application system.

The tool description is represented in a unified formalism, which enables us to maintain a single data base which specifies all of the task-specific attributes of a particular sub-system. Because the information is stored in a single format, it can easily be utilized by multiple portions of the interface system. For example, a single syntax description is used to parse user commands, to generate explanations of system actions, and to construct documentation of the options available in the tool.

The initial implementation of the Agent will provide the user interface for the Multi-Media Message System, an electronic mail facility which manipulates messages containing mixtures of text, recorded speech, graphics, and images. The system is being implemented as a multiple machine, multiple language distributed system: screen management (multiple windows) and graphics support are provided in Bcpl [11] on a Xerox Alto [15] with a high resolution raster display and pointing device; audio recording and playback is controlled by a DEC PDP-11 in C [5]; the Front-End module and User Agent are implemented in C and LISP respectively, on a VAX-11/780 running Unix [12]; and the tool (message system) runs in C on the VAX. The system modules communicate using a message based Inter-Process Communication facility [10] within Unix, and a packet broadcast network (Xerox Ethernet [6]) between machines. Most of the system components are currently running as individual modules; the first version of a single integrated system should be completed by June 1980. Because of our goal of a smoothly working, robust, and graceful system, we expect to continue tuning and improving the implementation for at least another year. The system will eventually be moved to a single powerful personal computer, where we expect it to make substantial contributions to the CMU Spice (Scientific Personal Integrated Computing Environment [8]) development effort.

References

1. Bobrow, D. G. and Winograd, T. "An Overview of KRL-0, a Knowledge Representation Language." Cognitive Science 1, 1 (1977).
2. Hayes, P. J. and Mouradian, G. V. Flexible Parsing. Proc. of 18th Annual Meeting of the Assoc. for Comput. Ling., Philadelphia, June, 1980.
3. Hayes, P. J., and Reddy, R. Graceful Interaction in Man-Machine Communication. Proc. Sixth Int. Jt. Conf. on Artificial Intelligence, Tokyo, 1979, pp. 372-374.
4. Hayes, P. J., and Reddy, R. An Anatomy of Graceful Interaction in Man-Machine Communication. Tech. report, Computer Science Department, Carnegie-Mellon University, 1979.
5. Kernighan, Brian W. and Ritchie, Dennis M. The C Programming Language. Prentice-Hall, Inc., 1978.
6. Metcalfe, Robert and Boggs, David. "Ethernet: Distributed Packet Switching for Local Computer Networks." Comm. ACM 19, 7 (July 1976), 395-404.
7. Minsky, M. A Framework for Representing Knowledge. In Winston, P., Ed., The Psychology of Computer Vision, McGraw-Hill, 1975, pp. 211-277.
8. Newell, A., Fahlman, S., and Sproull, R. F. Proposal for a joint effort in personal scientific computing. Tech. Rept., Computer Science Department, Carnegie-Mellon University, August, 1979.
9. Postel, J. Internet Message Protocol. Draft Internet Experiment Note, Information Sciences Institute, Univ. of Southern California, April, 1980.
10. Rashid, R. A proposed DARPA standard inter-process communication facility for UNIX version seven. Tech. Rept., Computer Science Department, Carnegie-Mellon University, February, 1980.
11. Richards, M. BCPL: A tool for compiler writing and systems programming. Proceedings of the Spring Joint Computer Conference, AFIPS, May, 1969, pp. 557-566.
12. Ritchie, D. M. and Thompson, K. "The UNIX Time-Sharing System." Comm. ACM 17, 7 (July 1974), 365-375.
13. Roberts, R. B. and Goldstein, I. P. The FRL Manual. A. I. Memo 409, MIT AI Lab, Cambridge, Mass., 1977.
14. Robertson, G., Newell, A., and Ramakrishna, K. ZOG: A Man-Machine Communication Philosophy. Tech. Rept., Carnegie-Mellon University Computer Science Department, August, 1977.
15. Thacker, C. P., McCreight, E. M., Lampson, B. W., Sproull, R. F., and Boggs, D. R. Alto: A personal computer. In Computer Structures: Readings and Examples, McGraw-Hill, 1980. Edited by D. Siewiorek, C. G. Bell, and A. Newell, second edition, in press.
 | 
	1980 
 | 
	69 
 | 
					
66 
							 | 
AN EFFICIENT RELEVANCE CRITERION FOR MECHANICAL THEOREM PROVING*

David A. Plaisted
Department of Computer Science
University of Illinois
Urbana, Illinois 61801

* This research was partially supported by the National Science Foundation under grant MCS-79-04897.

ABSTRACT

To solve problems in the presence of large knowledge bases, it is important to be able to decide which knowledge is relevant to the problem at hand. This issue is discussed in [1]. We present efficient algorithms for selecting a relevant subset of knowledge. These algorithms are presented in terms of resolution theorem proving in the first-order predicate calculus, but the concepts are sufficiently general to apply to other logics and other inference rules as well. These ideas should be particularly important when there are tens or hundreds of thousands of input clauses. We also present a complete theorem proving strategy which selects at each step the resolvents that appear most relevant. This strategy is compatible with arbitrary conventional strategies such as P1-deduction, locking resolution, et cetera. Also, this strategy uses nontrivial semantic information and "associations" between facts in a way similar to human problem-solving processes.

I RELEVANCE FUNCTIONS

Definition. A support set for a set S of clauses is a subset Ti of S such that S - Ti is consistent. A support class for S is a set {T1, ..., Tk} of support sets for S.

Definition. A (resolution) proof of C from S is a sequence C1, C2, ..., Cn of clauses in which Cn is C and each clause Ci is either an element of S (an input clause) or a resolvent of two preceding clauses in S. (Possibly both parents of Ci are identical.) The length of such a proof is n. A refutation from S is a proof of NIL (the empty clause) from S.

Definition. A relevance function is a function R which, given a set S of clauses, a support class T for S, and an integer n, yields a subset Rn(S, T) of S having the following property: if there is a length n refutation from S, then there is a refutation from Rn(S, T) of length n or less. Thus if we are searching for length n refutations from S, we need only search for length n refutations from Rn(S, T). In fact, the derivation from Rn(S, T) will be a subderivation of the derivation from S, for all relevance functions considered here. Thus if there is a length n P1-deduction from S, there will be a P1-deduction of length n or less from Rn(S, T), and similarly for other complete strategies.

Definition. Suppose S is a set of clauses. The connection graph of S, denoted G(S), is the graph whose nodes are the clauses of S, and which has a directed arc from C1 to C2 labeled (L1, L2) if there are literals L1 ∈ C1 and L2 ∈ C2 such that L1 and ¬L2 are unifiable. Such graphs have been introduced and discussed in [2]. Note that there will also be an arc labeled (L2, L1) from C2 to C1 in the above case.

Definition. A path from C1 to Cn in G(S) is a sequence C1, C2, ..., Cn of clauses of S such that there is an arc from Ci to Ci+1 in G(S), for 1 ≤ i < n. Also, the length of the path is n.
Definition. The distance d(C1, C2) between C1 and C2 in G(S) is the length of the shortest path from C1 to C2 in G(S), and ∞ if no such path exists.

Definition. If S is a set of clauses, T is a support class for S, and n is a nonnegative integer, then Qn(S, T) is {C ∈ S : d(C, Ti) ≤ n in G(S) for all Ti in T}, where d(C, Ti) is min{d(C, D) : D ∈ Ti}.

Intuitively, if d(C1, C2) is small, C1 and C2 are "closely related." Also, Qn(S, T) is the set of clauses that are closely related to all the support sets. Typically we will know that several clauses are essential to prove a theorem, and each such clause by itself can be made into a support set.

Definition. A set S of clauses is fully matched if for all C1 ∈ S and all literals L1 ∈ C1, there exist C2 ∈ S and L2 ∈ C2 such that L1 and ¬L2 are unifiable.

Definition. Rn(S, T) is the largest fully matched subset of Qn(S, T). Thus we obtain Rn(S, T) from Qn(S, T) by repeatedly deleting clauses containing "unmatched" literals. This definition is not ambiguous, since if Qn(S, T) contains more than one nonempty fully matched subset, then Rn(S, T) is the union of all such subsets.

Theorem 1. The function R is a relevance function. That is, if there is a length n refutation from S, and T is a support class, then there is a refutation from Rn(S, T) of length n or less. In fact, there is such a refutation from R⌈n/2⌉(S, T).

Proof. Assume without loss of generality that NIL appears only once in the refutation and that every clause in the refutation contributes to the derivation of NIL. Let S1 be the set of input clauses appearing in the refutation. Then S1 is connected, intersects all the support sets, and has at most n elements. Using properties of binary trees we can show that S1 has at most ⌈n/2⌉ elements.

Note that Rn(S, T) is a "global" relevance criterion. That is, it depends in a nontrivial way on all the input clauses and on interactions between all the support sets in T.

II EXAMPLES

Let S consist of the clauses {P1}, {P2}, {¬P1, ¬P2, P3}, {¬P3, P4}, and {¬P4}, together with additional clauses over Q1, ..., Q7 (such as {¬P1, Q1} and {¬P2, Q2}) linking the Qi to the Pi. Here ¬P1 Q1 denotes the clause {¬P1, Q1}, i.e., ¬P1 ∨ Q1, et cetera. Let T1 be {{P1}, {P2}}, let T2 be {{¬P4}}, and T = {T1, T2}. Then R1(S, T) = R2(S, T) = R3(S, T) = ∅ but R4(S, T) = {{P1}, {P2}, {¬P1, ¬P2, P3}, {¬P3, P4}, {¬P4}}, which is in fact a minimal inconsistent subset of S.

For a second example, let S be the following:

    IN(a,box)
    IN(x,box) ⊃ IN(x,room)
    IN(x,room) ⊃ IN(x,house)
    ¬IN(x,house)
    ON(x,box) ⊃ ¬IN(x,box)
    ON(x,street) ⊃ ¬IN(x,house)
    IN(x,house) ⊃ IN(x,village)
    ¬ON(house,box)
    AT(house,street)
    ON(b,box)
    ON(c,street)
    ¬IN(d,village)

Let T1 be {{IN(a,box)}} and let T2 be {{¬IN(x,house)}}. Also, T = {T1, T2}. Then R1(S, T) = R2(S, T) = R3(S, T) = ∅ but R4(S, T) = {{IN(a,box)}, {¬IN(x,box), IN(x,room)}, {¬IN(x,room), IN(x,house)}, {¬IN(x,house)}}. This is a minimal inconsistent subset of S. Here "box", "room", "house", "a", et cetera are constants and x is a variable.
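To make the definitions concrete, here is a small Python sketch that builds a propositional connection graph, measures clause distances by breadth-first search, and computes Qn. The clause encoding and helper names are assumptions made for this illustration; the test data mirror the first example above.

    from collections import deque

    # Clauses are frozensets of signed literals (sign, symbol); G(S)
    # links clauses containing complementary literals, and d counts
    # clauses along a shortest path, as in the definitions above.

    def lit(sym, sign=+1): return (sign, sym)
    def neg(l): return (-l[0], l[1])

    def connection_graph(S):
        return {C: [D for D in S if D is not C and
                    any(neg(L) in D for L in C)] for C in S}

    def dist(G, C1, C2):
        """Shortest-path length; a path of n clauses has length n."""
        frontier, seen, n = deque([C1]), {C1}, 1
        while frontier:
            nxt = deque()
            for C in frontier:
                if C == C2: return n
                for D in G[C]:
                    if D not in seen: seen.add(D); nxt.append(D)
            frontier, n = nxt, n + 1
        return float("inf")

    def Q(S, T, n):
        G = connection_graph(S)
        return {C for C in S
                if all(min(dist(G, C, D) for D in Ti) <= n for Ti in T)}

    P = [None] + [lit(f"P{i}") for i in range(1, 5)]
    S = [frozenset({P[1]}), frozenset({P[2]}),
         frozenset({neg(P[1]), neg(P[2]), P[3]}),
         frozenset({neg(P[3]), P[4]}), frozenset({neg(P[4])})]
    T = [[S[0], S[1]], [S[4]]]
    print(len(Q(S, T, 4)))   # all five clauses lie within distance 4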
Note that we cannot always guarantee that this relevance criterion will yield minimal inconsistent sets as in these examples.

III ALGORITHMS

Suppose S is a set of clauses. Let ||S|| be the length of S in characters when written in the usual way. Let Lits(S) be the sum, over all clauses C in S, of the number of literals in C.

If S is a set of propositional clauses and ||S|| = m, then G(S) may have O(m²) arcs. However, we can construct a modified version G1(S) of G(S) which has the same distances between clauses as G(S) does but which has only O(m) arcs. The idea is to reduce the number of arcs as follows: suppose C1, ..., Cj are the clauses containing L and D1, ..., Dk are the clauses containing ¬L. Then we add a node NL, with arcs of length 0 from each Ci to NL and arcs of length 1 from NL to each Di; similarly, there are arcs of the form Di to N¬L (length 0) and N¬L to Ci (length 1). Although G1(S) is not a connection graph, and has arcs of length 0 and 1, it preserves distances between clauses as in G(S). Using this modified connection graph, we have linear time algorithms to do the following, if S is a set of propositional clauses, T = {T1, ..., Tk} is a support class, Ti is a support set, and n is a positive integer:

1. Construct G1(S) from S.
2. Find {C ∈ S : d(C, Ti) ≤ n}.
3. Given Qn(S, T) for support class T, find Rn(S, T).

Since step 2 must be performed |T| times to obtain Qn(S, T), the total algorithm to obtain Rn(S, T) requires O(|T| · ||S||) time. (Here |T| is the number of support sets.)

The algorithm to find {C ∈ S : d(C, Ti) ≤ n} is a simple modification of standard shortest-path algorithms. For a presentation of these standard algorithms see [3]. This can be done in linear time because the edge lengths are all 0 and 1. We compute Rn(S, T) as follows:

Definition. If S is a set of clauses, let M(S) be the largest fully matched subset of S. Note that Rn(S, T) = M(Qn(S, T)).

The following algorithm M1 computes M(S) for a set S of propositional clauses in linear time. This algorithm can therefore be used to compute Rn(S, T) if S is a set of propositional clauses. Note that t is a push-down stack.

    procedure M1(S);
      t ← empty stack;
      for all L such that L ∈ C or ¬L ∈ C for some C ∈ S do
        clauses(L) ← {C ∈ S : L ∈ C};
        count(L) ← |clauses(L)|
      od;
      for all C ∈ S do
        member(C) ← TRUE;
        for all L ∈ C do
          if count(¬L) = 0 then push C on t; member(C) ← FALSE fi
        od
      od;
      while t not empty do
        pop C off t;
        for all L ∈ C do
          count(L) ← count(L) - 1;
          if count(L) = 0 then
            for all C1 ∈ clauses(¬L) do
              if member(C1) then push C1 on t; member(C1) ← FALSE fi
            od
          fi
        od
      od;
      return({C ∈ S : member(C) = TRUE});
    end M1;
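The same computation in runnable form: the following Python function is a worklist rendering of the M1 idea for propositional clauses (repeatedly discard clauses containing a literal whose complement occurs in no remaining clause). It is an illustration written for this text, not a reconstruction of the published procedure.

    def neg(l): return (-l[0], l[1])

    def fully_matched_subset(S):
        """Largest fully matched subset of S (clauses = frozensets of
        signed literals), via stack-driven deletion of unmatched clauses."""
        clauses = {}                      # literal -> clauses containing it
        for C in S:
            for L in C:
                clauses.setdefault(L, set()).add(C)
        count = {L: len(Cs) for L, Cs in clauses.items()}
        member = {C: True for C in S}
        stack = [C for C in S if any(count.get(neg(L), 0) == 0 for L in C)]
        for C in stack: member[C] = False
        while stack:
            C = stack.pop()
            for L in C:
                count[L] -= 1
                if count[L] == 0:         # clauses matched only via L die
                    for C1 in clauses.get(neg(L), ()):
                        if member[C1]:
                            member[C1] = False
                            stack.append(C1)
        return {C for C in S if member[C]}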
If S is a set of first-order clauses, then G(S) can be constructed in O(Lits(S) · ||S||) time using a linear unification method [4]. This bound results because a unification must be attempted between all pairs of literals of S. The number of edges in G(S) is at most Lits(S)². Given G(S), we can find {C ∈ S : d(C, Ti) ≤ n} in time proportional to the number of edges in G(S) (since all edge lengths are one). Also, given Qn(S, T), we can find Rn(S, T) in time proportional to the number of edges in G(S) by a procedure similar to M1 above. The total time to find Rn(S, T) is therefore O(|T| · (Lits(S)² + ||S|| · Lits(S))). If ||S|| = m then the time to find Rn(S, T) is O(m²|T|). By considering only the predicate symbols of literals, the propositional calculus algorithms can be used as a linear time preprocessing step to eliminate some clauses from S. An interesting problem is to compute Rn(S, T) efficiently for many values of n at the same time. There are methods for doing this, but we do not discuss them here.

IV REFINEMENTS

A. Connected Components

Proposition 1. If there is a length n refutation from S, and T is a support class for S, then there is a length n refutation from one of the connected components of R⌈n/2⌉(S, T). Also, the connected components can be found in linear time [3].

B. Iteration

Definition. If T = {T1, T2, ..., Tk} is a support class for S and S1 is a subset of S, then T|S1 (T restricted to S1) is {T1 ∩ S1, ..., Tk ∩ S1}. It may be that T|Rn(S, T) ≠ T, or that distances in G(Rn(S, T)) are different than distances in G(S). This motivates the following definitions.

Definition. Rn^0(S, T) = S, Rn^1(S, T) = Rn(S, T), and if i > 1 then Rn^i(S, T) = Rn(Rn^{i-1}(S, T), T|Rn^{i-1}(S, T)). Also, Rn^∞(S, T) is the limit of the sequence Rn^1(S, T), Rn^2(S, T), Rn^3(S, T), ...

Proposition 2. Rn^{i+1}(S, T) ⊆ Rn^i(S, T) for i ≥ 1. Therefore the limit Rn^∞(S, T) exists. Also, Rn^∞(S, T) can be computed in at most |S| iterations of Rn(·,·). Can it be computed more efficiently than this?

Theorem 2. If there is a length n refutation from S, and T is a support class for S, then there is a length n refutation from one of the connected components of R⌈n/2⌉^∞(S, T).

Proposition 3. For all i > 0 there exist n, S and T such that Rn^{i+1}(S, T) = Rn^i(S, T) ≠ Rn^{i-1}(S, T). Thus this computation can take arbitrarily long to converge.

Proof. Let n = 2, S = {Pi ⊃ Pi+1 : 1 ≤ i < k} ∪ {Pi+1 ⊃ Pi : 1 ≤ i < k}. Let T = {T1, T2, T3} where Tj = {Pi ⊃ Pi+1 : i ≡ j (mod 3)} ∪ {Pi+1 ⊃ Pi : i ≡ j (mod 3)}. Then R2^a(S, T) = ∅ if 2a ≥ k but R2^a(S, T) ≠ ∅ if 2a < k.
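A small sketch of the iteration just defined, reusing the Q and fully_matched_subset sketches given earlier (so it is not self-contained on its own); combining them as Rn = M(Qn) follows the definition above, and the fixpoint loop mirrors Proposition 2.

    def R(S, T, n):
        return fully_matched_subset(Q(set(S), T, n))

    def R_inf(S, T, n):
        """Iterate Rn on its own output, restricting the support class
        each time, until a fixpoint is reached (at most |S| rounds)."""
        current = set(S)
        while True:
            T_restricted = [[C for C in Ti if C in current] for Ti in T]
            if any(not Ti for Ti in T_restricted):  # a support set vanished
                return set()
            nxt = R(current, T_restricted, n)
            if nxt == current:
                return current
            current = nxt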
Fu&herm;re,    the    81    above approach is useful when the support sets Ti    are not known in advance but are obtained one    clause at a time.    We now give a recursive procedure    "re13" for    generating    all the sets as in Thereom 3.    The idea    is to order the Ti so as to reduce the branching    factor as much as possible near the beginning    of    the search.    Thus we use the principle of "least    commitment."    This procedure has as input sets Sl    and S of clauses, integer n, and support class T    Ri(S, {(D1),...,    {Dj}, {Cl}, . . . . {Ck})) having    ,...,Ck} as a subset, for Ci e Ti,    ere is a length n refutation    from S,    and T is a support class for S, then there will be    a length n refutation    from some set output by    rel3(lb, rtl, S, T).    Definition.    If S = {Dl,    = {{D,) , . . . . {Dj%    . . ..Dj) then Single(S)    procedure    rel3(S,,n,S,T)    S2 -+ <(S,    T 6 Single(S1))    if s1 C S2 then    if T = fl then output (S2) else    -    T,+- TG    I    choose 'I$ e Tl minimizing    I T21;    for all C E T2 do    od:    re13(S1U    {C),n,S2,Tl-T2)    fi;-’    fi;    end re13;    By searching    for such sets Sl, we can some-    times obtain much better relevance criteria than by    previous methods.    The use of centers insures that    elements of Sl will be closer together than in pre-    vious methods.    To implement this method, let S2 be Q p+2,@    T) . For each C E S2, let S3 be RrnT2 (S,{{b}}4).'    -I    1-p    If S3 intersects    all support sets, then it is a    candidate set of input clauses for a length n refu-    tation.    Here S, is a set of possible    centers.    Note that two clauses of S will have distance at    most rtl + 1 in G(S3).    No?e also that if n=6 then    IF1    = 2 and if n = 10 then r?l    = 3.    Thus we    can get somewhat nontrivial    refutations with quite    small distance bounds.    E.    Typing Variables    For these relevance    criteria to be useful,    there must exist clauses C    and C    of S such that    dC+    C2) is large.    Howe&r,    if z he axiom x=y 1    y=x is in S then two clauses of the form tl = t2 d    D    and t3 # t4    Tib    v D2 will have distance 3 or less.    is may cause everything    to be close to everything    else.    To reduce this problem, we propose that all    variables be typed as integer, Boolean, list,    string, et cetera and unifications    only succeed if    the types match.    Thus the above clauses would not    necessarily    be within distance 3 if tl and t4 or t2    and t3 have different    types.    The use of types may    increase the number of clauses, since more than one    copy of some    clauses    may    be needed.    However,    the overall effect may still be beneficial.    F.    Logical Consequences    D.    Center Clauses    By using the idea that graphs have "centers,"    we can reduce the distance needed to search for    relevant clauses by another factor of 2.    Theorem 4.    Suppose there is a length n refu-    tation from set S of clauses, and T is a support    class for S.    Then there exists a clause C E S and    a set S 1 c S having the following properties:    1.    ;:    There is a length n refutation    from S1    Sl is fully matched    4.    Sl intersects    all the support sets in T    C E Sl and for all Cl E Sl, d(C, Cl) 2    +1    in G(Sl).    Proof.    Let Sl be the input clauses actually    used insome    minimal refutation    from S. 
F. Logical Consequences

The preceding ideas can also be applied to derivations of clauses other than NIL from S.

Definition. A support set for S relative to C is a subset V of S such that C is not a logical consequence of S - V. A support class for S relative to C is a collection of support sets for S relative to C. For example, if I is an interpretation of S in which C is false, and V is the set of clauses of S that are false in I, then V is a support set for S relative to C.

Definition. M(S, C) is the largest subset of S in which all literals are matched, except possibly those having literals of C as instances.

Definition. Rn(S, T, C) is M(Qn(S, T), C).

Theorem 5. If there is a length n derivation of something subsuming C from S, and T is a support class for S relative to C, then there is a length n derivation of something subsuming C from R⌈n/2⌉(S, T, C).

As before, we can introduce Rn^∞(S, T, C) and other relevance criteria.

G. Procedures

To incorporate procedural and heuristic information, we may add clauses expressing the assertion A(x) ⊃ (∃y)B(x,y), where A and B are input and output assertions for the procedure and x and y are input and output variables. To account for the fact that heuristics may fail, we assign probabilities of truth to clauses. The task then is to find a set S1 of clauses from which the desired consequence can possibly be derived, subject to the condition that the product of the probabilities of the clauses in S1 is as large as possible. One way to do this is to run many trials, generating relevant subsets of S, where the clauses of S are chosen to be present or absent with the appropriate probability. We then select a relevant set of clauses from among those clauses that have been found to be relevant in many of the trials (a sketch of this sampling scheme appears at the end of this section). Note that if procedures are encoded as above, then a short proof may correspond to a solution using a few procedure calls, but each procedure may require much time to execute.

H. Subgoals

If procedures are encoded as above, then each procedure may call the whole theorem prover recursively. This provides a possible subgoal mechanism. By storing the clauses from all subgoals in a common knowledge base, we may get interesting interactions between the subgoals. By noticing when subgoals are simpler than the original problem in some well-founded ordering, we may be able to get mathematical induction in the system. The use of clauses, procedures, subgoals, and relevance criteria as indicated here provides a candidate for a general top-level control structure for an artificial intelligence system.
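The sampling scheme suggested in subsection G can be sketched as follows, reusing the R_inf sketch above; the probability model, trial count, and scoring are illustrative assumptions, not part of the published method.

    import random

    def sample_relevance(S_with_p, T, n, trials=200, seed=0):
        """Clauses carry probabilities of truth; over many trials sample
        a sub-base, run the relevance criterion, and rank clauses by how
        often they turn up relevant."""
        rng = random.Random(seed)
        hits = {C: 0 for C, _ in S_with_p}
        for _ in range(trials):
            sampled = {C for C, p in S_with_p if rng.random() < p}
            if any(not set(Ti) <= sampled for Ti in T):
                continue                    # support sets must survive
            for C in R_inf(sampled, T, n):
                hits[C] += 1
        return sorted(hits, key=hits.get, reverse=True)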
V A COMPLETE STRATEGY

The following procedure attempts to construct a refutation from set S of first-order clauses:

    procedure refute(S);
      for d = 1 step 1 until (NIL is derived) do
        for j = 1 step 1 until (j > d) do
          refl(S, j, d)
        od
      od;
    end refute;

    procedure refl(S, i, d);
      let T be a support class for S;
      R ← Ri^∞(S, T);
      if R is empty then return fi;
      V ← R ∪ {level 1 resolvents from R};
      if NIL ∈ V or d = 1 then return fi;
      for j = 1 step 1 until (NIL is derived) do
        refl(V, j, d - 1)
      od;
    end refl;

This procedure selects at each step the clauses that seem most relevant and attempts to construct a refutation from them. Similar procedures can be given using other of the relevance functions described earlier.

A. Generating Support Sets

One way to generate support sets for the above procedure is to let each support set be the subset of S in which specified predicate symbols occur with specified signs. This would yield 2n support sets for n predicate symbols. Of course, it is not necessary to use all of these support sets. A more interesting possibility is to have a collection {I1, I2, ..., Ik} of interpretations of S and to let Tj be the set of clauses that are false in Ij. If Ij has a finite domain then Tj can be computed by exhaustive testing. Otherwise, special methods may be necessary to determine if a clause C is true in Ij. If Ij has an infinite domain, a possible heuristic is to let Tj be the set of clauses that are false on some finite subset of the domain. If f is an abstraction mapping or a weak abstraction mapping [5] and I is an interpretation, then {C ∈ S : some clause in f(C) is false in I} is a support set for S. This approach may allow the use of nontrivial support sets which are easy to compute, especially if all elements of f(C) are ground clauses for all C in S. Note that T may include support sets obtained both syntactically and semantically. Although it may require much work to test if C is true in Ij, this kind of effort is of the kind that humans seem to do when searching for proofs. Also, this provides a meaningful way of incorporating nontrivial semantic information into the theorem prover. The arcs in the connection graph resemble "associations" between facts, providing another similarity with human problem solving methods.

REFERENCES

[1] Gallaire, H. and J. Minker, eds. Logic and Data Bases. New York: Plenum Press, 1978.
[2] Kowalski, R., "A Proof Procedure Using Connection Graphs". J. ACM 22 (1975), 572-595.
[3] Reingold, E. M., J. Nievergelt, and N. Deo. Combinatorial Algorithms: Theory and Practice. Englewood Cliffs, New Jersey: Prentice-Hall, 1977.
[4] Paterson, M. S. and M. N. Wegman, "Linear Unification", IBM Research Report 5304, IBM, 1976.
[5] Plaisted, D., "Theorem Proving with Abstraction, Part I", Departmental Report UIUCDCS-R-79-961, University of Illinois, February 1979.
 | 
	1980 
 | 
	7 
 | 
					
67 
							 | 
REPRESENTATION OF CONTROL KNOWLEDGE IN EXPERT SYSTEMS

Janice S. Aikins
Computer Science Department
Stanford University
Stanford, California 94305

ABSTRACT

This paper presents the results of research done on the representation of control knowledge in rule-based expert systems.¹ It discusses the problems of representing control knowledge implicitly in object-level inference rules and presents specific examples from a MYCIN-like consultation system called PUFF. As an alternative, the explicit representation of control knowledge in slots of a frame-like data structure is demonstrated in the CENTAUR system. Explicit representation of control knowledge has significant advantages both for the acquisition and modification of domain knowledge and for explanations of how knowledge is used in the expert system.

I INTRODUCTION

This paper emphasizes the importance of representing domain-specific control knowledge explicitly and separately from other forms of domain knowledge in expert systems. The particular focus of research on this topic has been MYCIN-like consultation systems [6] which represent their domain knowledge in the form of condition-action or production rules. Examples in this paper are taken from the PUFF system [4] which performs consultations in the domain of pulmonary (lung) physiology.

The CENTAUR system was created in response to several knowledge representation and control structure problems in the rule-based systems, among which were the problems caused by the implicit representation of control knowledge. CENTAUR provides a framework for performing tasks using an hypothesize and match approach [5] to problem solving. This approach focuses the search for new information around recognized patterns of knowledge in the domain, a strategy that was not represented in the rule-based systems. Knowledge in CENTAUR is represented in the form of frame-like structures, called prototypes, which represent the expected patterns of knowledge, and in production rules, which serve as a stylized form of procedural attachment and are used to infer values or "fill in" slots in the prototype. This knowledge of prototypical situations is used for control of the consultation, for explanation of system performance, and also as a guide for acquiring additional knowledge and for modifying the existing knowledge base.

¹ This work was supported by the Advanced Research Projects Agency under contract MDA 903-77-C-0322. Computer facilities were provided by the SUMEX-AIM facility at Stanford University under National Institutes of Health grant RR-00785-07. The author is sponsored by the Xerox Corporation under the direction of the Xerox Palo Alto Research Center.

CENTAUR's combination of prototypes and rules results in a knowledge representation that is expressive enough to allow the many kinds of domain knowledge necessary for system performance to be explicitly represented. Control knowledge for the consultation is represented in slots associated with each prototype, separately from the inference rules. Rules are associated with prototypes as the explicit contexts in which the rules are applied.
The slots in the prototype specify the function of the attached rules, such as to summarize data already given or to refine an interim diagnosis. Other details of the CENTAUR system and a full discussion of the knowledge representation and control structure problems in the rule-based systems can be found in [1].

II THE PUFF SYSTEM

One such rule-based system is the PUFF system, which was created using a MYCIN-like framework. PUFF's domain-specific knowledge is represented by a set of approximately 60 production rules. The "IF" part of the production states a set of conditions (the premise clauses) in which the rule is applicable. The action, or "THEN" part of the production, states the appropriate conclusions. The goal in PUFF is to interpret a set of lung function tests performed on a patient, and to produce a diagnosis of pulmonary disease in that patient. Each rule clause is a LISP predicate acting on associative (object-attribute-value) triples in the data base. In PUFF there is a single object, the patient. The attributes (or clinical parameters) are the lung function tests and other information about the patient. The PUFF control structure is primarily a goal-directed, backward chaining of the production rules as it attempts to determine a value for a given clinical parameter. A complete description of this mechanism is given in [6].

III IMPLICIT CONTROL IN THE RULES

Production rules, in theory, are modular pieces of knowledge, each one capturing some "chunk" of domain-specific expertise. Indeed, one of the advantages of using production rules [3] is that there need be no direct interaction of one rule with the others, a characteristic which facilitates adding rules to the knowledge base or modifying existing rules. In practice, however, there are significant interactions among rules. Executing one rule will in turn cause others to be tried when the information needed for the first rule is not already known. Therefore, the order of the premise clauses of a rule affects the order in which other rules are executed. Further, in an interactive system such as PUFF, in which the user is asked for information that cannot be inferred by rules, the order of the premise clauses also determines the order in which questions are asked.

This means of controlling question order by placing premise clauses in a specific order is, in fact, exploited by experts who recognize the need for ordering the questions that are asked, but have only this implicit and indirect mechanism for achieving their goal. In this case, the production rule framework itself becomes a programming language where the rules have multiple functions; some rules represent independent chunks of expertise with premise clauses specified in an arbitrary order, while others serve a controlling function with premise clauses that cannot be permuted without altering the behavior of the system.

An example of implicit control knowledge is illustrated by the PUFF rule in Figure 1 below. This rule invokes other rules in an attempt to determine whether there is Obstructive Airways Disease
(Clause One), and if so, to determine the subtype (Clause Two) and findings associated with the disease (Clause Three). If Clause One were inadvertently placed after either Clause Two or Three, the system's questions of the user would probe for more detailed information about Obstructive Airways Disease without having confirmed that the disease is present. For example, by reordering the clauses in RULE002, PUFF might begin its consultation by asking about the patient's smoking history, one of the findings associated with Obstructive Airways Disease, and a question that would be inappropriate in a patient without a smoking-related disease. However, this rule contains no explicit indication that the order of the clauses is critical. The problem with implicit representation of control knowledge becomes apparent in working with the knowledge base, either to modify the knowledge or to explain its use in the system.

    RULE002
    -------
    If:   1) An attempt has been made to deduce the degree of
             obstructive airways disease of the patient,
          2) An attempt has been made to deduce the subtype of
             obstructive airways disease, and
          3) An attempt has been made to deduce the findings about
             the diagnosis of obstructive airways disease
    Then: It is definite (1.0) that there is an interpretation of
          potential obstructive airways disease

    FIGURE 1. PUFF Rule--Implicit Control

Modifying rules is a normal part of system development. Clauses often must be added to or removed from rules in response to perceived errors or omissions in the performance of the system. However, removing or modifying the clauses of a controlling rule can alter the system's behavior in unexpected ways, since the implicit control knowledge also will be altered. Therefore, modifications can be safely done only by persons intimately familiar with the knowledge base. This factor not only limits the set of people who can make modifications, and of course precludes the success of automatic knowledge acquisition systems in which each rule is considered individually, but it also limits the size of the knowledge base, as even the best of knowledge engineers can retain familiarity with only a limited number of rules at a time.

A system's explanations of its own performance also suffer when information critical to performance is not represented explicitly. The rule-based systems studied generate explanations of why questions are being asked using direct translations of those rules which were being used when the question was asked. (See [2] for details.) There is no distinction made between rules that control a line of reasoning, as opposed to rules that infer a piece of information. However, users of the system should be able to ask both kinds of questions in order to obtain justifications of the system's reasoning process as well as justifications of its inference rules. The uniform representation of control and inference knowledge in rule-based systems further confuses the user by mixing the two kinds of explanations.
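To see concretely how premise-clause order dictates question order, consider this toy Python backward chainer over object-attribute-value parameters. The rule and parameter names are invented stand-ins, not actual PUFF rules.

    # Toy goal-directed backward chaining: premise clauses are pursued
    # in order, and any parameter no rule can conclude is asked of the
    # user, so clause order is question order.

    RULES = {
        "oad-interpretation": [["oad-present", "oad-subtype",
                                "smoking-history"]],
    }
    known = {}

    def ask(param):
        print(f"System asks: What is {param}?")
        return "user-answer"

    def find_out(param):
        """Try rules that conclude `param`; otherwise ask the user."""
        if param in known:
            return known[param]
        for premises in RULES.get(param, []):
            for clause in premises:       # clause order = question order
                find_out(clause)
            known[param] = f"deduced({param})"
            return known[param]
        known[param] = ask(param)         # no rule applies: ask
        return known[param]

    # Questions follow the premise list: oad-present is pursued before
    # the more detailed oad-subtype and smoking-history questions.
    find_out("oad-interpretation")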
IV CONTROL KNOWLEDGE IN CENTAUR

Control knowledge about the process of pursuing an hypothesis in CENTAUR is represented in slots associated with each prototype, separate from the inference knowledge, represented as production rules, which will actually confirm or deny the hypothesis. Each slot specifies one or more LISP clauses, or control tasks, that are executed at specific points during the consultation as defined by a top-level prototype representing the "typical" consultation (the CONSULTATION Prototype).

For the pulmonary function domain, prototypes correspond to specific pulmonary diseases. During a CENTAUR consultation, initial case data suggest one or more disease prototypes as likely matches. Control knowledge in these prototypes then guides the consultation by specifying what information should be sought next. Expected data values in each prototype enable CENTAUR to pinpoint inconsistent or erroneous information during the consultation. Final conclusions are presented in terms of the prototypical situations determined to be present in the case, and any inconsistencies are noted.

Thus the system developer can specify "what to do" in a given prototype context as an important part of the knowledge about the domain that is distinct from the inferential knowledge used in the consultation. These control tasks are specified as LISP functions, and the system developer can define any new functions as they are required. For example, Figure 2 shows CENTAUR's representation of the control knowledge in the PUFF rule shown in Figure 1. The control knowledge is represented in two of the control slots associated with the Obstructive Airways Disease (OAD) prototype. They specify that when OAD is confirmed (the If-Confirmed Slot), the next tasks are to deduce a degree and a subtype for OAD, and, at a later stage in the consultation (when the prototype ACTION slots are executed), to deduce and print findings associated with OAD.

    If-Confirmed Slot:
      Deduce the Degree of OAD
      Deduce the Subtype of OAD

    Action Slot:
      Deduce any Findings associated with OAD
      Print the Findings associated with OAD

    FIGURE 2. OAD Prototype Control Slots

Prototypes not only represent the domain-specific knowledge of a particular application, but also represent domain-independent knowledge about the operation of the CENTAUR system. At the highest level in CENTAUR, the Consultation Prototype lists the various stages of the consultation (e.g., entering initial information, suggesting likely prototypes, filling in prototypes) in its control slots. The advantages of explicit representation of control knowledge thus extend to control of the consultation process itself.

V ADVANTAGES OF THE CENTAUR APPROACH

The association of control knowledge with individual prototypes allows control to be specific to the prototype being explored. Thus domain experts can specify a different set of control tasks for each prototypical situation. In the pulmonary domain, for example, the expert proceeds in a different way if he has confirmed OAD rather than some other disease in the patient.
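A minimal Python sketch of the slot machinery in Figure 2: control tasks live in named slots on the prototype, separate from inference rules, and are executed at fixed points in the consultation. The task functions are illustrative stand-ins, not CENTAUR's LISP code.

    def deduce(param): print(f"deducing {param}")
    def show(param):   print(f"printing {param}")

    OAD_PROTOTYPE = {
        "name": "Obstructive Airways Disease",
        "if-confirmed": [lambda: deduce("degree of OAD"),
                         lambda: deduce("subtype of OAD")],
        "action":       [lambda: deduce("findings associated with OAD"),
                         lambda: show("findings associated with OAD")],
    }

    def run_slot(prototype, slot):
        """Execute the control tasks stored in one slot of a prototype."""
        for task in prototype.get(slot, []):
            task()

    run_slot(OAD_PROTOTYPE, "if-confirmed")  # when OAD is confirmed
    run_slot(OAD_PROTOTYPE, "action")        # at the later ACTION stage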
Further, because this control knowledge is separate from the inference rules, the expert does not have to anticipate and correct incidental interactions between control and inference knowledge.

Representing the entire consultation process itself as a prototype has additional advantages. First, the system designer's conception of the consultation process is clearly defined for all system users. Second, representing each stage of the consultation as a separate control task allows stages to be added or removed from the consultation process. For example, the Refinement Stage, which uses additional expertise to improve upon an interim conclusion, was omitted during early stages of system development for the pulmonary function problem. "Filling in" a consultation prototype with user-specified options, such as a choice of strategy for choosing the current best prototype (for example, confirmation, elimination, or fixed-order), results in a control structure that can be tailored to the desires of each individual user.

The organization of knowledge into prototypical situations allows the user to more easily identify the affected set of knowledge when changes to the knowledge base are desired. Points at which specific control knowledge is used during the consultation are clearly defined, with the result that it is easier to predict the effects of any control modifications that may be made.

Explicit representation of control knowledge also facilitates explanations about that knowledge. In addition to the HOW and WHY keywords available in MYCIN, a new keyword, CONTROL, has been defined so that a user of the system can inquire about the control task motivating the current line of reasoning. For example, if the user types "CONTROL" in response to a system question about the patient's smoking history, the system would respond: The current control task is to determine the findings associated with OAD.

VI SUMMARY

This paper has discussed the importance of representing control knowledge explicitly, particularly as it affects knowledge acquisition and explanation in a knowledge-based system. The representation of control knowledge as slots in a prototype in the CENTAUR system demonstrates one feasible approach. Augmenting the rule representation to include rules that function exclusively as control rules might be another. The critical lesson learned from working with the rule-based systems is that the system's representation structures must be expressive enough to represent control knowledge explicitly, so that it will not be inaccessible to the system and to the knowledge engineer.

ACKNOWLEDGMENTS

Many thanks to Doug Aikins, Avron Barr, Jim Bennett, Bruce Buchanan, and Bill Clancey for their helpful advice and comments on earlier versions of this paper.

REFERENCES

[1] Aikins, J. Prototypes and Production Rules: A Knowledge Representation for Computer Consultations. (Forthcoming Ph.D. Thesis), Heuristic Programming Project, Dept. of Computer Science, Stanford University, 1980.
[2] Davis, R. Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases. STAN-CS-76-552, Stanford University, July 1976.
[3] Davis, R., and King, J. An Overview of Production Systems. In E. W. Elcock and D. Michie (Eds.), Machine Intelligence 8. New York: Wiley & Sons, 1977. Pp. 300-332.
[4] Kunz, J., Fallat, R., McClung, D., Osborn, J., Votteri, B., Nii, H., Aikins, J., Fagan, L., and Feigenbaum, E. A Physiological Rule Based System for Interpreting Pulmonary Function Test Results. HPP-78-19 (Working Paper), Heuristic Programming Project, Dept. of Computer Science, Stanford University, December 1978.
[5] Newell, A. Artificial Intelligence and the Concept of Mind. In R. Schank and K. Colby (Eds.), Computer Models of Thought and Language. San Francisco: W. H. Freeman and Company, 1973. Pp. 1-60.
[6] Shortliffe, E. H. MYCIN: A Rule-based Computer Program for Advising Physicians Regarding Antimicrobial Therapy Selection. Ph.D. dissertation in Medical Information Sciences, Stanford University, 1974. (Also, Computer-Based Medical Consultations: MYCIN. New York: American-Elsevier, 1976.)
 | 
	1980 
 | 
	70 
 | 
					
68 
							 | 
DELTA-MIN: A Search-Control Method for Information-Gathering Problems

Jaime G. Carbonell
Computer Science Department
Carnegie-Mellon University

Abstract

The Δ-MIN method consists of a best-first backtracking algorithm applicable to a large class of information-gathering problems, such as most natural language analyzers, many speech understanding systems, and some forms of planning and automated knowledge acquisition. This paper focuses on the general Δ-MIN search-control method and characterizes the problem spaces to which it may apply. Essentially, Δ-MIN provides a best-first search mechanism over the space of alternate interpretations of an input sequence, where the interpreter is assumed to be organized as a set of cooperating expert modules.¹

1. Introduction

A present trend in AI is to design large systems as cooperating collections of experts, whose separate contributions must be integrated in the performance of a task. Examples of such systems include HEARSAY-II [4], POLITICS [1], PSI [5], and SAM [3]. The division of task responsibility and grain size of the experts differs markedly. For instance, the latter two systems contain few large-scale experts, which are invoked in a largely predetermined order, while the former two systems contain a larger number of smaller modules whose order of invocation is an integral part of the problem solving task itself.

In this paper I discuss a new search method, called Δ-MIN, that incorporates some of the desirable features from best-first search and some properties of gradient search. (Gradient search is locally-optimized hill-climbing.) The primary objective is to make global control decisions based on local knowledge provided by each expert module. No module is required to know either the internal structure of another module, or the overall controlling search mechanism. In this way, I depart somewhat from the freer blackboard control structure of HEARSAY-II, where search was controlled by the experts themselves. The module that "shouted loudest" was given control; hence each module had to know when and how loud to shout with respect to other expert modules. In addition, there was a "focus knowledge source" [6] that helped guide forward search. This method acquires its flexibility by placing a substantial amount of global control responsibility on local experts. Moreover, it entails no externally-transparent search discipline. Finally, the primary emphasis is on forward search, not reconsidering wrong decisions in favor of choosing an alternate interpretation. In light of these considerations, I attempted to factor domain knowledge (what the experts know) from search discipline (when to pursue alternate paths suggested by different experts), so that each problem may be investigated in its own right.

¹ This research was sponsored in part by the Office of Naval Research (ONR) under grant number N00014-79-C-0661.
Here, I focus on the search control aspect, and consider the internal structure of each domain expert as a virtual "black box".

To simplify matters, I confine my discussion to tasks whose terminating condition is defined by processing an input sequence to completion without error. This class of problems is exemplified by natural language analysis, where an input sentence is processed left-to-right and the goal state is the formation of a consistent semantic representation of the input. Clearly, this is a satisficing rather than optimizing task [8], in the sense that only the first of potentially many solutions is sought. Since I want the language analysis to give the same parse of the sentence as a human would, the process must be biased to favor reaching the appropriate solution first. This biasing process, based on local decisions made by expert modules, is the primary input to the Δ-MIN search method described below. It must be noted, however, that the left-to-right processing assumption is more restrictive than the HEARSAY paradigm, where "islands of interpretation" could grow anywhere in the input sequence and, when possible, were later merged into larger islands until the entire input sequence was covered [6, 4].

2. An Information-Gathering Search Space

Consider a search space for the task of processing a finite sequence of input symbols (such as an English sentence) and producing an integrated representation incorporating all the information extracted from the input (such as a semantic representation of the meaning encoded in the sentence). The problem solver consists of a set of experts that may be applied at many different processing stages, without fixed constraints on their order of application. For instance, in the language analysis domain, one can conceive of a verb-case expert, a morphological-transformation expert, an extra-sentential referent-identifier expert (or several such experts based on different knowledge sources), a dialog-context expert, an immediate-semantics expert, a syntactic-transformation expert, etc. A robust language analyzer must be capable of invoking any subset of these and other experts according to dynamically determined needs in analyzing the sentence at hand.

Now, let us back off the natural language domain and consider the general class of problem spaces to which an information-gathering,² cooperating-expert approach appears useful. First, we draw a mapping between the general problem solving terminology and the expert module approach. The search space outlined below is a considerably constrained version of a general search space. This property is exploited in the Δ-MIN search method described in the following section.

² "Information gathering" is a term coined by Raj Reddy to refer to search problems where progress towards a goal state is characterized by accruing and integrating information from outside sources.

- The operators in the search space are the individual expert modules.
Each module may search its own space internally, but I am concerned only with the macro-structure search space. Each expert has conditions of strict applicability and of preference of applicability. The latter are used for conflict-resolution decisions when more than one expert is applicable.

• A state in the space consists of the total knowledge gathered by invoking the set of experts that caused the state transitions from the initial state to the present. This definition has two significant implications. First, an expert that adds no new knowledge when invoked does not generate a new state; therefore, it can be ignored by the search control. Second, there is a monotonicity property, in that each step away from the initial state adds information to the analysis, and therefore is guaranteed to "climb" to a potential final state. (Left-to-right, single-pass natural language analysis can exhibit such monotonic behavior.)

• A final state is defined by having reached the end of the input sequence without violating a path constraint, when no expert can add more information (i.e., no transition in the state space is possible).

• A path constraint is violated if either a new segment of the input cannot be incorporated or an expert asserts information that contradicts that which is already part of the current state. When this situation arises, directed backtracking becomes necessary.

• The initial state is a (possibly empty) set of constraints that must be satisfied by any interpretation of the input sequence. For instance, a dialog or story context constrains the interpretation of an utterance in many natural language tasks.

• Each expert can draw more than one conclusion when applied. Choosing the appropriate conclusion and minimizing backtracking on alternatives is where the real search problem lies. Choosing the next expert to apply is not a real problem, as the final interpretation is often independent of the order of application of experts. That is, since information-gathering is in principle additive, different application sequences of the same experts should converge to the same final state. The experts preselect themselves as to applicability. Selecting the expert who thinks it can add the most information (as in HEARSAY-II) only tends to shorten the path to the final state. The real search lies in considering alternate interpretations of the input, which can only be resolved by establishing consistency with later information gathered by other experts. Finally, given the possibility of focused backtracking from a dead end in the forward search, less effort needs to be directed at finding the "one and only correct expert" to apply.

3. The Δ-MIN Search Method

Δ-MIN is a heuristic search method specifically tailored to the class of search spaces described above. It combines some of the more desirable features of gradient search and best-first search with the modularized information sources of a cooperating-expert paradigm.
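Before turning to the algorithm itself, it may help to fix the interface this search space assumes. The following Python fragment is purely illustrative; the paper gives no code at this level, and all names are invented:

    from dataclasses import dataclass, field

    @dataclass
    class Alternative:
        """One interpretation an expert proposes, with its local likelihood."""
        info: dict      # the new information this alternative would add
        value: float    # likelihood-of-correctness assigned by the expert

    @dataclass
    class State:
        """Total knowledge gathered so far; grows monotonically."""
        facts: dict = field(default_factory=dict)

        def merge(self, alt):
            """Return a new state extending this one, or None on contradiction
            (i.e., when a path constraint is violated)."""
            merged = dict(self.facts)
            for key, val in alt.info.items():
                if key in merged and merged[key] != val:
                    return None
                merged[key] = val
            return State(merged)

    class Expert:
        """Black-box module: tests applicability, returns ranked alternatives."""
        def applicable(self, state, symbol):
            raise NotImplementedError
        def apply(self, state, symbol):
            """Return a list of Alternatives (empty if nothing new to add)."""
            raise NotImplementedError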
Figure 3-1 gives the search-control strategy algorithm in an MLISP-style form. Subsequently I discuss how Δ-MIN works, exemplifying the discussion with a sample search-tree diagram.

Every expert is responsible for assigning a likelihood-of-correctness value to each alternative in the interpretation it outputs. These values are only used to determine how much better the best alternative is than the next-best alternatives, an item of information crucial to the backtrack-control mechanism. There is no global evaluation function (the outputs of different experts are not directly comparable; it makes no sense to ask questions like: "Is this anaphoric referent specification better than that syntactic segmentation?"). Nor is there any mechanism to compute differences between the present state and the goal state. (Recall our definition of goal state: only upon completion of the input processing can the goal state be established.)

    PROCEDURE Δ-MIN(initial-state, experts)
        altlist := NULL
        globaldelta := 0
        state := initial-state
        input := READ(first input)
    NEXTOP:
        IF NULL(input) THEN RETURN(state)
        ELSE operator := SELECTBEST(APPLICABLE(experts))
        IF NULL(operator)
            THEN input := READ(next input)
            ALSO GO NEXTOP
        ELSE alts := APPLY(operator, state)
        IF NULL(alts)
            THEN MARK(operator, 'NOT-APPLICABLE, 'TEMP)
            ALSO GO NEXTOP
        bestalt := SELECTMAX(alts)
        IF ||alts|| > 1
            THEN alts := FOR-EACH alt IN REMOVE(bestalt, alts)
                     COLLECT <'ALT: alt, 'STATE: state,
                              'DELTA: globaldelta + VALUE(bestalt) - VALUE(alt)>
                 altlist := APPEND(alts, altlist)
    NEWSTATE:
        state := MERGE-INFORMATION(state, bestalt)
        IF NOT(state = 'ERROR) GO NEXTOP   ; if no error, continue gradient
                                           ; search, else delta-min backup below
        WHILE state has no viable alternatives DO
        BEGIN
            MARK(state, 'DEAD-END, 'PERM)  ; delete dead ends
            state := PARENT(state)         ; from search tree
        END
        backup-point := SELECT-DELTA-MIN(altlist)
        state := GET(backup-point, 'STATE:)
        globaldelta := GET(backup-point, 'DELTA:)
        bestalt := GET(backup-point, 'ALT:)
        altlist := REMOVE(backup-point, altlist)
        GO NEWSTATE
    END Δ-MIN

Figure 3-1: The Δ-MIN Search-Control Algorithm

Let us see how Δ-MIN can be applied to an abstract example, following the diagram in figure 3-2. The roman numerals on the arcs reflect the order in which they are traversed. At the initial state, expert-4 applies and generates three alternate interpretations of the input. One alternative is ranked as most likely. A "Δ" value is computed for the remaining alternatives, encoding the difference in confidence that expert-4 had between them and the most likely alternative. The more sure expert-4 is of its best choice relative to the other alternatives, the larger the Δ values.
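For readers who prefer a runnable form, here is a rough Python transcription of Figure 3-1. This is my sketch, not Carbonell's code: it assumes the State/Expert interface sketched earlier, collapses the input-position bookkeeping (a faithful version would also restore the saved input position on backup), and uses a heap to implement SELECT-DELTA-MIN:

    import heapq

    def delta_min(initial_state, experts, symbols):
        """Gradient search forward; back up to the suspended alternative
        with minimal cumulative delta whenever an inconsistency arises."""
        altlist = []   # suspended alternatives: (delta, tiebreak, state, alt)
        state, global_delta = initial_state, 0.0
        for symbol in symbols:
            progress = True
            while progress:        # apply experts until none adds information
                progress = False
                for expert in experts:
                    if not expert.applicable(state, symbol):
                        continue
                    alts = expert.apply(state, symbol)
                    if not alts:
                        continue
                    alts.sort(key=lambda a: a.value, reverse=True)
                    best = alts[0]
                    for alt in alts[1:]:   # suspend the suboptimal choices
                        delta = global_delta + best.value - alt.value
                        heapq.heappush(altlist, (delta, id(alt), state, alt))
                    new_state = state.merge(best)
                    while new_state is None:    # path constraint violated:
                        if not altlist:         # directed Δ-MIN backup
                            return None         # all interpretations exhausted
                        global_delta, _, state, alt = heapq.heappop(altlist)
                        new_state = state.merge(alt)
                    state = new_state
                    progress = True
        return state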
The best interpretation generated by expert-4 is integrated with the initial state constraints and found to be consistent. At this point, a new state has been generated, and expert-2 applies to this state, generating no new information. More input is read and expert-1 applies, generating only one alternative, which is found to be consistent with the information in the present state. In a similar fashion, the rest of the tree in figure 3-2 is generated.

Up to now, we have witnessed an instance of gradient search, where a different evaluation function is applied at each node. (The local evaluation function is, in effect, the expert who generated the likelihood values.) If no error occurs (i.e., if the interpretation of the input remains consistent), no backup is needed. The likelihood rankings clearly minimize the chance of error as compared to straightforward depth-first search. Now, let us consider the possibility of an inconsistency in the interpretation, as we continue to examine figure 3-2.

[Figure 3-2: Δ-MIN Search Tree With Directed Backup]

The most likely interpretation generated by expert-6 was found to be inconsistent with the information in the present state. Backup is therefore necessary, given the depth-first nature of the search. But where do we back up to? Normally, one might consider a depth-first unwinding of the search tree; but is this the most reasonable strategy? Expert-5 was much less certain in its choice of best alternative than expert-6 (Δ = 2 vs. Δ = 4). It seems more reasonable to doubt expert-5's decision. Therefore, one wants to back up to the point where the probability of having chosen a wrong branch is highest, namely to the choice point with the minimal Δ (hence the name Δ-MIN).

Continuing with figure 3-2, we restore the state at expert-5 and incorporate the Δ = 2 interpretation. It is found to be consistent, and we apply expert-1 to the new state. The best interpretation of expert-1 leads to error, and backup is again required. Where to now? The minimal Δ is at expert-1, but this would mean choosing a non-optimal branch of a non-optimal branch. Lack of confidence in the choice from expert-5 should be propagated to the present invocation of expert-1. Hence, we add the two Δs in the path from the initial state and get the value Δ = 3, which is greater than the minimal Δ at expert-4 (Δ = 2). Therefore, we back up to expert-4. This process continues until a consistent interpretation of the entire input is found (i.e., a goal state is reached), or the search exhausts all viable alternate interpretations.

Essentially, Δ-MIN is a method for finding one globally consistent interpretation of an input sequence processed in a predetermined order. In natural language analysis, the problem is to find a semantically, syntactically, and contextually consistent parse of a sentence. In speech understanding the constraint of formulating legal phonemes and words is added, but the nature of the problem and the applicability of the Δ-MIN approach remain the same.
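The backup rule the example illustrates can be stated compactly (this formalization is mine, read off the example). If a suspended alternative a at some choice point lies below earlier suboptimal choices with deltas Δ1, ..., Δk, its cumulative cost is

    Δcum(a) = Δlocal(a) + Δ1 + ... + Δk

and Δ-MIN always resumes at the suspended alternative with minimal Δcum. In the example, the remaining expert-1 alternative must have Δlocal = 1 beneath expert-5's Δ = 2 choice, giving Δcum = 3, so the expert-4 alternative at Δcum = 2 is preferred.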
For instance, Δ-MIN is an alternate control structure to HARPY's beam search [7], which also processes a sequence of symbols left to right, seeking a globally consistent interpretation.

4. Concluding Remarks

To summarize, the Δ-MIN method exhibits the following properties:

• Δ-MIN is equivalent to gradient search while no error occurs. Path length (from the initial state) is not a factor in the decision function.

• The backtracking mechanism is directed to undo the choice most likely to have caused an interpretation error. This method compares all active nodes in the tree, as in best-first search, but only when an error occurs (unlike best-first search).

• Perseverance in one search path is rewarded as long as the interpretation remains consistent, while compounding less-than-optimal alternate choices is penalized. This behavior falls out of the way in which Δ values are accrued.

• No global evaluation function forces direct comparisons among information gathered by different knowledge sources. Such an evaluation function would necessarily need to encode much of the information contained in the separate experts, thus defeating the purpose of a modular cooperating-expert approach. The Δ comparisons contrast only the differences between locally-optimal and locally-suboptimal decisions. These differences are computed by local experts, but the comparisons themselves are only between relative ratings on the desirability of alternate decisions.

Additional discussion of implementation, analysis, and details of the Δ-MIN search method may be found in [2], where an effective application of Δ-MIN is discussed for constraining search in a natural language processing task.

References

1. Carbonell, J. G., "POLITICS: An Experiment in Subjective Understanding and Integrated Reasoning," in Inside Computer Understanding: Five Programs Plus Miniatures, R. C. Schank and C. K. Riesbeck, eds., New Jersey: Erlbaum, 1980.
2. Carbonell, J. G., "Search in a Non-Homogeneous Problem Space: The Δ-MIN Algorithm," Tech. report, Dept. of Computer Science, Carnegie-Mellon University, 1980.
3. Cullingford, R., Script Application: Computer Understanding of Newspaper Stories, PhD dissertation, Yale University, Sept. 1977.
4. Erman, L. D. and Lesser, V. R., "HEARSAY-II: Tutorial Introduction & Retrospective View," Tech. report, Dept. of Computer Science, Carnegie-Mellon University, May 1978.
5. Green, C. C., "The Design of the PSI Program Synthesis System," Proceedings of the Second International Conference on Software Engineering, October 1976, pp. 4-18.
6. Hayes-Roth, F. and Lesser, V. R., "Focus of Attention in the Hearsay-II Speech Understanding System," Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp. 27-35.
7. Lowerre, B., "The HARPY Speech Recognition System," Tech. report, Computer Science Department, Carnegie-Mellon University, April 1976.
8. Newell, A. and Simon, H. A., Human Problem Solving, New Jersey: Prentice-Hall, 1972.
 | 1980 | 71 |
69 | 
ON WAITING

Arthur M. Farley
Dept. of Computer and Information Science
University of Oregon
Eugene, Oregon

ABSTRACT

Waiting is the activity of maintaining selected aspects of a current situation over some period of time in order that certain goal-related actions can be performed in the future. Initial steps toward the formalization of notions relevant to waiting are presented. Conditions and information pertinent to waiting decisions are defined.

Introduction

Waiting is the activity of maintaining selected aspects of a current situation over some period of time in order that certain goal-related actions can be performed in the future. Those aspects of the current situation which are maintained serve as preconditions for the desired future actions. Waiting extends from a decision to maintain these preconditions until (attempted) execution of the goal-related actions (or a decision to forego them). The act of waiting is closely associated with the future actions; a problem solver is said to be "waiting to" (do) these actions.

The primary function of waiting is to improve the efficiency with which problem solving plans are executed. Waiting avoids the need to reestablish preconditions for desired future actions. Not only may less effort be expended during problem solving, but performance can become more coherent. The problem solver can avoid frequent side- and back-tracking in multiple-goal situations. Waiting also allows for a certain degree of parallelism in problem solving. Waiting can be engaged in simultaneously with other actions which do not destroy satisfaction of the preconditions being maintained.

Waiting is an important and frequent problem solving activity in the real world. By real world, we mean an ongoing, continuing, schedule-based context, within which cooperative, as well as competitive, efforts among groups of problem solving systems normally occur. Waiting minimally requires such a context to be effective. A decision to wait implies that other, as yet unsatisfied, preconditions of anticipated goal-related actions are expected to be met by means other than direct intervention by the waiting system.

Waiting as a problem solving activity has been largely (if not totally) ignored by AI research to date. This is primarily because real-world contexts as defined here have only recently been considered. This paper outlines initial steps toward formalisms within which issues of waiting can be addressed. The research represents extensions to a knowledge-based problem solving system previously described by the author [3,4]. We briefly review important aspects of that system before describing straightforward extensions which aid our understanding of waiting. We conclude by discussing related research and suggesting future work.

Knowledge-based Problem Solving

The form of its representation of the environment influences all other aspects of a problem solving system. Let a situation of the relevant environment at any point in time (past, present, or future) be represented by a situation state.
A situation state consists of a finite set of propositions which are true with respect to the specific environmental situation(s) which the state is said to represent. The current state represents the present environmental situation. A goal state is a situation state which the problem solving system desires the current state to satisfy (i.e., be consistent with). A problem exists for a problem solving system when the current state does not satisfy constraints specified by a goal state. Problem solving refers to any activity undertaken in attempts to eliminate differences between current and goal states. A problem is solved when differences between its goal state and the current state no longer exist.

A knowledge-based problem solving system solves most problems by instantiation and execution of known general solution plans. A general solution plan describes a process which is capable of satisfying a set of goal states from any of a set of current states. A general solution plan is represented as a rooted, directed, labelled tree of plan states. The root of the tree is the goal state. Plan states of the tree are interconnected by directed arcs, each labelled by an operator. An operator is a description of an action (or process), represented as sets of add, delete, and precondition propositions [7]. The operator labelling an arc is capable of transforming the plan state at the tail of the arc into the plan state at the head of the arc. The plan state at the tail satisfies preconditions of the operator, while the one at the head reflects the results of additions and deletions associated with the operator. The maximal directed path from any plan state ends at the goal state.

A plan state is a situation state whose propositions have been partitioned into three components. For each proposition in the SELFOP component, the problem solving system has one or more operators capable of satisfying the proposition. Furthermore, the system normally expects (prefers) to satisfy a SELFOP proposition itself by executing one of the operators. For each proposition in the OTHEROP component, (costly) operators may exist allowing the system to satisfy the proposition itself, but the system normally expects the proposition to be satisfied by other problem solving systems in the environment. Finally, for each proposition in the NOOP component, neither the system itself nor any other problem solving system has control over satisfaction of the proposition (e.g., weather conditions).

Though the structure of a general solution plan is made clear by its tree, representing the plan as a production system can facilitate its execution. A production system [6] is a collection of condition-action pairs called rules. In the production system representation of a general solution plan, each plan state serves as the condition part of a rule whose action part is the operator labelling the arc leaving that plan state. In [4], the author describes how this representation can be useful in a selective approach to coordinating the execution of multiple plans.
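As a concrete, purely illustrative rendering of these definitions (the paper itself gives no code), the partitioned plan states and production rules might be encoded as follows:

    from dataclasses import dataclass, field

    @dataclass
    class PlanState:
        """A situation state partitioned as in the paper: SELFOP propositions
        the system expects to satisfy itself, OTHEROP ones it expects other
        agents to satisfy, NOOP ones nobody controls (e.g., time, weather)."""
        selfop: set = field(default_factory=set)
        otherop: set = field(default_factory=set)
        noop: set = field(default_factory=set)

    @dataclass
    class Rule:
        """Production-system form of one plan arc: condition = plan state,
        action = the operator labelling the arc leaving that state."""
        condition: PlanState
        operator: str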
Rules from general solution plans associated with as yet unsatisfied goals are combined to form one production system. A rule classification scheme is defined which provides for the avoidance of most inter-plan conflicts while allowing for responsive, efficient problem solving. For example, a goal state is classified as ungrounded if it denies satisfaction of conditions from a (critical) rule of another plan (in a conjunctive goal set). Rules from plans for ungrounded goal states are not executed until conflicting plans have been completed.

Waiting

We are concerned here with the control of waiting during plan execution by knowledge-based problem solving systems operating in real-world contexts. We first consider the question: what situations trigger consideration of waiting? A rule is self-satisfied if all propositions in its SELFOP component are satisfied. Whenever a self-satisfied rule exists during plan execution, a problem solving system may consider waiting, maintaining those conditions of the rule it has satisfied until the rule can be executed. A rule whose conditions are completely satisfied, and thus can be executed during a current system cycle, is classified as ready. Ready rules can suggest waiting. The system may wait to execute one ready rule, selecting another which leaves the rule ready (through conflict resolution).

During plan execution, a rule corresponding to a plan state whose NOOP component is not satisfied is classified as irrelevant. Such a rule can only fire if uncontrollable aspects of the environment change favorably. An irrelevant rule which is otherwise satisfied is a contingent rule. Being self-satisfied, contingent rules suggest consideration of waiting. Any NOOP condition must be satisfied occasionally; otherwise the rule would be fantasy and not considered part of a problem solving system's executable plans.

An important class of NOOP propositions are those which deal with time. In a general solution plan reflecting real-world time constraints (a scheduled plan), the NOOP component of a plan state will contain one or both of the propositions ISNOWBEFORE(t) and ISNOWAFTER(t), where t is a time specification. The truth of a time proposition is determined relative to the current time, considered to be an ever-present aspect of the current state. Time propositions of plan states derive from time constraints placed upon goal state satisfaction. They are propagated to other states (rules) of a scheduled plan, with time parameters adjusted to reflect time estimates for traversed operators.

In a scheduled plan, time propositions alone may determine the relevancy status of rules. Rules which are irrelevant solely due to unsatisfied time constraints have their NOOP components classified as early or late, depending upon which time proposition is not satisfied. A rule with an early NOOP component, but which is otherwise ready, is classified as imminent. Whenever an imminent rule exists during plan execution, the problem solving system may consider waiting while time passes until the rule becomes relevant and can be fired.
One source of imminent rules is overestimation of time requirements for completed, prior operations from a scheduled plan. For example, you estimate a 30 minute drive to a shopping mall, but it only takes 15; you arrive before the mall opens. The rule by which you would enter the mall is imminent.

Another situation which may prompt waiting is the existence of a dependent rule. A relevant, self-satisfied rule corresponding to a plan state whose OTHEROP component is not satisfied is classified as dependent. Whenever a dependent rule exists during plan execution, the system may consider waiting until the expected (necessary) assistance arrives and the rule can be fired. Dependent rules can often arise during cooperative problem solving efforts. For example, you are painting a house with a friend and turn around, expecting him to hand you a new can of paint, but he hasn't finished opening it. The rule by which you would take the paint is dependent.

Finally, rules which have only their SELFOP component satisfied are classified as needy. Needy rules arise frequently when coordinating with public, problem solving support systems, such as mass transportation. To board a bus, a problem solver must be at a bus stop (SELFOP), at the scheduled time (NOOP), with the bus being at the stop (OTHEROP). If a problem solver arrives at a stop five minutes early, the rule by which it boards the bus is needy.

Given the existence of a self-satisfied rule, what information is pertinent to subsequent waiting decisions? With each self-satisfied rule, the problem solving system can associate three values: a waiting period, a set of compatible rules, and a set of contending rules. The waiting period is an estimate of the length of time before a self-satisfied rule will become ready. A compatible rule is a ready rule which requires less (estimated) time than the waiting period, and would not destroy satisfaction of the self-satisfied rule's conditions. A contending rule is a ready rule from another scheduled plan which would destroy the other's self-satisfaction, requires more time than the waiting period, but will become classified late if it is not fired within the waiting period. Determining these two sets of rules would not require dramatic additional computational effort. They only contain ready rules, rules which the system always determines before selecting the next rule to execute.

The simplest policy for waiting can be stated in terms of these two sets of rules: if there is a contending rule associated with a self-satisfied rule, do not wait; otherwise wait, firing a compatible rule, if any exist. Though this may produce effective behavior in many circumstances, a moment's thought suggests further considerations. A goal state is nearby to a self-satisfied rule if time estimates indicate that the system could satisfy the goal state and reestablish conditions of the self-satisfied rule within the expected waiting period. A more complex waiting policy could have a system elect to wait in the face of contending rules, especially when compatible rules and/or nearby goal states allow active waiting.
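The rule classes and the simple policy just described lend themselves to a compact sketch. The Python below is illustrative only: `current` stands for the set of currently true propositions, `early_time` for unsatisfied time propositions whose time has merely not yet arrived, and the helper predicates for the plan-time estimates the paper assumes.

    def classify(rule, current, early_time=frozenset()):
        """Classify a rule per the paper's scheme (illustrative encoding)."""
        c = rule.condition
        self_ok = c.selfop <= current
        other_ok = c.otherop <= current
        noop_missing = c.noop - current
        if noop_missing:                  # irrelevant family: NOOP unsatisfied
            if self_ok and other_ok:
                return "imminent" if noop_missing <= early_time else "contingent"
            if self_ok:
                return "needy"            # only the SELFOP component holds
            return "irrelevant"
        if self_ok and other_ok:
            return "ready"                # can fire on the current cycle
        if self_ok:
            return "dependent"            # awaiting another agent's help
        return "unsatisfied"

The simplest waiting policy then reads:

    def should_wait(rule, ready_rules, waiting_period, est_time,
                    destroys_conditions, becomes_late_within):
        """Do not wait if any contending rule exists; otherwise wait,
        filling the time with a compatible rule if one is available."""
        compatible = [r for r in ready_rules
                      if est_time(r) < waiting_period
                      and not destroys_conditions(r, rule)]
        contending = [r for r in ready_rules
                      if destroys_conditions(r, rule)
                      and est_time(r) > waiting_period
                      and becomes_late_within(r, waiting_period)]
        if contending:
            return False, None            # pursue the contender instead
        return True, (compatible[0] if compatible else None)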
Relative importance of goal states would influence such a policy. Finally, rather than an expected waiting period, a cumulative probability function representing the likelihood that a self-satisfied rule will be able to fire within a given period of time could add further sophistication to waiting policies.

Conclusion

Research on distributed problem solving has used waiting as a coordination primitive [1]. Coordination is realized in our model through ungrounded goal states and unsatisfied OTHEROP components. Postponing actions until others have completed their responsibilities may or may not result in waiting as defined here. Research on medical diagnosis has dealt with time-duration propositions in rule conditions [2]. These suspend judgement until time lends its support. Again, time-based postponement differs from the issues of waiting addressed here.

In this paper we have specified formalisms which add waiting to the repertoire of capabilities of problem solving systems dealing with real-world contexts. An (incomplete) set of conditions under which waiting may be reasonably considered is defined, as are aspects of the subsequent waiting decision process. We are continuing investigation of waiting by considering a particular context: that of satisfying a set of work and personal goals on a given day within an office and city setting. This context has received attention in recent research on plan formulation [5]. It appears to be just as fruitful a source of ideas on plan execution and the role of waiting. Meeting schedules and office hours, going shopping, and using public transportation are actions which require frequent, even planned, waiting. A simulation to evaluate various waiting policies is planned.

REFERENCES

[1] Corkill, D., "Hierarchical planning in a distributed environment", Proceedings IJCAI-79, Tokyo, 1979, pp. 168-175.
[2] Fagan, L. et al., "Representation of dynamic clinical knowledge", Proceedings IJCAI-79, Tokyo, 1979, pp. 260-262.
[3] Farley, A.M., "The coordination of multiple goal satisfaction", Proceedings IJCAI-77, MIT, 1977, p. 495.
[4] Farley, A.M., "Issues in knowledge-based problem solving", to appear in IEEE Transactions on Systems, Man, and Cybernetics, August 1980.
[5] Hayes-Roth, B. and Hayes-Roth, F., "A cognitive model of planning", Cognitive Science, 3 (1979), pp. 275-310.
[6] Newell, A. and Simon, H.A., Human Problem Solving, Prentice-Hall: Englewood Cliffs, NJ, 1972.
[7] Sacerdoti, E.D., "Planning in a hierarchy of abstraction spaces", Artificial Intelligence, 5, 1974, pp. 115-135.
 | 1980 | 72 |
70 | 
Douglas E. Appelt
Stanford University, Stanford, California
SRI International, Menlo Park, California

ABSTRACT

This paper reports recent results of research on planning systems that have the ability to deal with multiple agents and to reason about their knowledge and the actions they perform. The planner uses a knowledge representation based on the possible-worlds-semantics axiomatization of knowledge, belief, and action advocated by Moore [5]. This work has been motivated by the need for such capabilities in natural language processing systems that will plan speech acts and natural language utterances [1, 2]. The sophisticated use of natural language requires reasoning about other agents, what they might do and what they believe, and therefore provides a suitable domain for planning to achieve goals involving belief. This paper does not directly address issues of language per se, but focuses on the problem-solving requirements of a language-using system, and describes a working system, KAMP (Knowledge And Modalities Planner), that embodies the ideas reported herein.

I. WHAT A KNOWLEDGE PLANNER MUST DO

Consider the following problem: A robot named Rob and a man named John are in a room that is adjacent to a hallway containing a clock. Both Rob and John are capable of moving, reading clocks, and talking to each other, and they each know that the other is capable of performing these actions. They both know that they are in the room, and they both know where the hallway is. Neither Rob nor John knows what time it is. Suppose that Rob knows that the clock is in the hall, but John does not. Suppose further that John wants to know what time it is, and Rob knows he does. Furthermore, Rob is helpful, and wants to do what he can to insure that John achieves his goal. Rob's planning system must come up with a plan, perhaps involving actions by both Rob and John, that will result in John knowing what time it is.

We would like to see Rob devise a plan that consists of a choice between two alternatives. First, if John could find out where the clock was, he could go to the clock and read it, and in the resulting state would know the time. So, Rob might tell John where the clock was, reasoning that this information is sufficient for John to form and execute a plan that would achieve his goal. The second alternative is for Rob to move into the hall and read the clock himself, move back into the room, and tell John the time.

Existing planning mechanisms such as NOAH [6] or STRIPS [3] are incapable of dealing with this sort of problem.

This research was supported by the Defense Advanced Research Projects Agency under contract N00039-79-C-0118 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
First, to solve this problem a system must reason effectively about propositional attitudes such as know, believe, and want. Existing planning systems are based on knowledge representations that are not adequate for that purpose. Moreover, they are not equipped to handle the integration of the actions of multiple agents into a single plan. In the solution to the above problem, the first choice consists of actions by both John and Rob: Rob does an informing act, and John moves into the hall and reads the clock. This means that Rob has planned for events to occur that are beyond its immediate control and which involve knowledge about the capabilities of another agent.

The KAMP system solves problems such as the example above. It adopts a knowledge representation based on possible worlds semantics, which is capable of representing the knowledge needed for the task. By reasoning about the knowledge and wants of other agents, KAMP determines what courses of action other agents can be expected to take in the future, and incorporates those actions into its own plans.

II. REPRESENTING KNOWLEDGE ABOUT BELIEF

It is important that a planner be based on a knowledge representation that is adequate for representing different kinds of facts about knowing, wanting, believing, etc. For example, it may be necessary to represent that someone knows the value of a term, without the system itself knowing what that value is, or the system may need to represent that a person knows ~P as opposed to not knowing whether P. A variety of strategies have been suggested for representing such knowledge, but just representing the knowledge is not sufficient. It is also necessary that the system be able to reason with the knowledge efficiently. Many of the alternatives that have been proposed for representing knowledge are fundamentally lacking in either representational adequacy or efficiency. Moore [5] discusses some of the specific proposals and their shortcomings.

The representation which has been selected for the KAMP system is based on Moore's axiomatization of possible worlds semantics. This approach has a great deal of power to represent and reason efficiently with modal operators, and it is particularly elegant in describing the relation between action and knowledge. Because the design of the planner is largely motivated by the design of the knowledge representation, I will briefly outline Moore's strategy for representing knowledge about belief and how it relates to action. For comparison with a system that uses a different knowledge representation for planning to influence belief, see Konolige and Nilsson [4].

The representation consists of a modal object language that has operators such as believe and know.
This object language is translated into a meta-language that is based on a first-order axiomatization of the possible-worlds semantics of the modal logic. All the planning and deduction takes place at the level of the meta-language. In this paper I will adopt the convention of writing the meta-language translations of object-language terms and predicates in boldface.

For example, to represent the fact that John knows P, one asserts that P is true in every possible world that is compatible with John's knowledge. If K(A, w1, w2) is a predicate that means that w2 is a possible world which is compatible with what A knows in w1, and T(w, P) means that P is true in the possible world w, and W0 is the actual world, then the statement that John knows P is represented by the formula:

(1) ∀w K(John, W0, w) ⊃ T(w, P)

This states that in any world which is compatible with what John knows in the actual world, P is true.

Just as knowledge defines a relation on possible worlds, actions have a similar effect. The predicate R(Do(A, P), w1, w2) represents the fact that world w2 is related to world w1 by agent A performing action P in w1. Thus possible worlds can be used in a manner similar to state variables in a state calculus.

Using a combination of the K and R predicates, it is possible to develop simple, elegant axiom schemata which clearly state the relationship between an action and how it affects knowledge. For example, it would be possible to axiomatize an informing action with two simple axiom schemata as follows:

(3) ∀w1 ∀w2 ∀a ∀b ∀P R(Do(a, Inform(b, P)), w1, w2) ⊃ [∀w3 K(a, w1, w3) ⊃ T(w3, P)]

(4) ∀w1 ∀w2 ∀a ∀b ∀P R(Do(a, Inform(b, P)), w1, w2) ⊃ [∀w3 K(b, w2, w3) ⊃ ∃w4 K(b, w1, w4) ∧ R(Do(a, Inform(b, P)), w4, w3)]

Axiom (3) is a precondition axiom. It says that it is true in all possible worlds (i.e., that it is universally known) that when someone does an informing action, he must know that what he is informing is in fact the case. Axiom (4) says that it is universally known that in the situation resulting from an informing act, the hearer knows that the inform has taken place. If the hearer knows that an inform has taken place, then according to axiom (3) he knows that the speaker knew P was true. From making that deduction, the hearer also knows P.

III. USING THE POSSIBLE WORLDS KNOWLEDGE REPRESENTATION IN PLANNING

The possible worlds approach to representing facts about knowledge has many advantages, but it presents some problems for the design of a planning system. Axiomatizing knowledge as accessibility relations between possible worlds makes it possible for a first-order logic deduction system to reason about knowledge, but it carries the price of forcing the planner to deal with infinite sets of possible worlds. Planning for someone to know something means making a proposition true in an infinite number of possible worlds.
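Formula (1) suggests a direct finite-model reading, which can be spelled out in a few lines of Python. This is only an illustration of the semantics: KAMP itself reasons with the axioms in a first-order deduction system rather than by enumerating worlds, and all names below are invented.

    def knows(agent, prop, w0, K, T):
        """Finite-model check of formula (1): agent knows prop in w0 iff prop
        is true in every world K-accessible from w0. K maps (agent, world) to
        a set of worlds; T maps a world to the set of propositions true there."""
        return all(prop in T[w] for w in K[(agent, w0)])

    # Toy model: in every world compatible with John's knowledge the clock is
    # in the hall, so John knows it, even though which world is actual is open.
    T = {"w0": {"clock-in-hall"}, "w1": {"clock-in-hall", "raining"}}
    K = {("John", "w0"): {"w0", "w1"}}
    assert knows("John", "clock-in-hall", "w0", K, T)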
The goal wff, instead of being a ground-level proposition, as is the case in other planning systems developed to date, becomes an expression with an implication and a universally quantified variable, similar to (1).

Another problem arises from the way in which actions are axiomatized. Efficient axioms for deduction require the assumption that people can reason with their knowledge. The effects of some actions manifest themselves through this reasoning process. For example, this occurs in the axiomatization of INFORM given in (3) and (4). Speakers usually execute informing actions to get someone to know something. However, the hearer does not know what the speaker is informing directly as a result of the inform, but from realizing that the speaker has performed an informing act. (See Searle [7].) The only effect we know of for an informing act is that the hearer knows that the speaker has performed the act. When KAMP has a goal of someone knowing something, it must be able to determine somehow that INFORM is a reasonable thing to do, even though that is not obvious from the effects of the action.

A related problem is how to allow for the possibility that people can reason with their knowledge. If the system has the goal Know(A, Q), and this goal is not achievable directly, and the system knows that A knows that P ⊃ Q, then Know(A, P) should be generated as a subgoal. Examples of this sort of situation occur whenever Q is some proposition that is not directly observable. If P is some observable proposition that entails Q, then the planner can perform some action that will result in knowing P, from which Q can be inferred. This is the basis of planning experiments.

Since there is a tradeoff between being able to take full advantage of the logical power of the formalism and being able to efficiently construct a plan, a strategy has been adopted that attempts to strike a balance. The strategy is to have a module propose actions for the planner to incorporate into a plan to achieve the current goal. This module can be thought of as a "plausible move generator". It proposes actions that are likely to succeed in achieving the goal. The system then uses its deduction component to verify that the suggested action actually does achieve the goal.

To facilitate the action generator's search for reasonable actions, the preconditions and effects of actions are collected into STRIPS-like action summaries. These summaries highlight the direct and indirect effects of actions that are most likely to be needed in forming a plan. For example, the action summary of the INFORM action would include the hearer knowing P as one of the (indirect) effects of INFORM. The effects of actions as they are represented in the action summaries can be any well-formed formula.
So it is possible to state effects as implications which can match the implications that are the meta-language translations of the Know operator. Using action summaries is not equivalent to recasting the knowledge representation as a STRIPS-like system. The action summaries are only used to suggest alternatives to be tried.

To allow the possibility of agents reasoning with their knowledge, KAMP follows this process whenever a goal involving a knowledge or belief state is encountered: the system tries to invoke an operator that will achieve the goal directly. If this fails, then the system's base of consequent rules is examined to find subgoals that could be achieved and that would allow the agent to deduce the desired conclusion. Although this approach has the advantage of simplicity, the number of subgoals that the planner has to consider at each step can grow exponentially. It seems intuitively correct to consider subgoals in order of the length of inference that the agent has to go through to reach the desired conclusion. The planner needs good criteria to prune, or at least postpone, consideration of less likely paths of inference. This is an interesting area for further research.

IV. UNIVERSAL KNOWLEDGE PRECONDITIONS

When planning actions on a strictly physical level, there are few if any preconditions which can be said to apply universally to all actions. However, when dealing with the knowledge required to perform an action, as well as its physical enabling conditions, there are a sufficient number of interesting universal preconditions that their treatment by the planner as a special case is warranted. Universal knowledge preconditions can be summarized by the statement that an agent has to have an executable description of a procedure in order to do anything. For example, if an agent wishes to perform an INFORM action, it is necessary for him to know what it is that he is informing, how to do informing, and who the intended hearer is. Since these preconditions apply to all actions, instead of including them in the axioms for each action, the planner automatically sets them up as subgoals in every case.

V. THE REPRESENTATION OF THE PLAN WITHIN THE SYSTEM

The KAMP planner uses a system of plan representation similar to that of Sacerdoti's procedural networks [6]. The major difference is that CHOICE nodes (OR-SPLITS) are factored out into a disjunctive normal form. Since a choice may occur within an AND-SPLIT that affects how identical goals and actions are expanded after the choice, all the choices are factored out to the top level, and each choice is treated as a separate alternative plan to be evaluated independently (but perhaps sharing some subplans as "subroutines" with other branches).
As in Sacerdoti's system, nodes can specify goals to be achieved or actions to be performed, or they may be "phantoms": goals which coincidentally happen to already be satisfied. Each node of the procedural network is associated with a world. This world represents the real world at that particular stage in the execution of the plan. At the beginning of the plan, the first node is associated with W0. If the first action is that the robot moves from A to B, then the node representing that action would have an associated world W1, and an assertion of the form R(Do(Robot, Move(A, B)), W0, W1) would be added to the data base.

VI. CONTROL STRUCTURE

The planner's control structure is similar to that of Sacerdoti's NOAH system. A procedural network is created out of the initial goal. The planner then attempts to assign worlds to each node of the procedural network as follows. First, the initial node is assigned W0, the initial actual world. Then, iteratively, when the planner proposes that a subsequent action is performed in a world to reach a new world, a name is generated for the new world, and an R relation between the original world and the new world is asserted in the deducer's data base. Then all goal nodes that have worlds assigned are evaluated; i.e., the planner attempts to prove that the goal is true using the world assigned to that node as the current state of the actual world. Any goal for which the proof succeeds is marked as a phantom (achieved) goal.

Next, all the unexpanded nodes in the network that have been assigned worlds, and which are not phantoms, are examined. Some of them may be high-level actions for which a procedure exists to determine the appropriate expansion. These procedures are invoked if they exist; otherwise the node is an unsatisfied goal node, and the action generator is invoked to find a set of actions which might be performed to achieve the goal.
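The world bookkeeping for a linear chain of action nodes is simple enough to sketch. The encoding below is hypothetical; KAMP's actual data structures are not given in the paper:

    def assign_worlds(actions, initial_world="W0"):
        """For a linear action sequence, generate the R assertions described
        above: R(Do(agent, act), W_i, W_i+1). Hypothetical encoding."""
        assertions, world = [], initial_world
        for i, (agent, act) in enumerate(actions, start=1):
            new_world = f"W{i}"
            assertions.append(("R", ("Do", agent, act), world, new_world))
            world = new_world
        return assertions

    # assign_worlds([("Robot", "Move(A,B)")]) yields
    #   [("R", ("Do", "Robot", "Move(A,B)"), "W0", "W1")]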
If an action is found, it is inserted into the procedural network along with its preconditions, both the universal ones and those specific to the particular action. After the nodes are expanded to the next level, a set of critic procedures is invoked, which can examine the plan for global interactions and take corrective action if needed. This entire process is repeated until the plan has been expanded to the point where every unexpanded node is either a phantom goal or an executable action.

VII. CURRENT STATUS OF RESEARCH

The KAMP planning system has been implemented and tested on several examples. It has been used to solve the problem of John, Rob and the clock cited earlier in the paper. All the critics of Sacerdoti's NOAH have either been implemented or are currently undergoing implementation. Further development of the planner will be dictated by the needs of a language planning and generation system currently under development. It is expected that this language generation task will make full use of the unique features of this system.

KAMP is a first attempt at developing a planner that is capable of using the possible worlds semantics approach to representing knowledge about belief. Combining a planner with a very powerful knowledge representation will enable problem-solving techniques to be applied to a variety of domains, such as language generation and planning to acquire and distribute knowledge, in which they have played a relatively small role in the past.

ACKNOWLEDGEMENTS

The author is grateful to Barbara Grosz, Gary Hendrix and Terry Winograd for comments on earlier drafts of this paper.

REFERENCES

[1] Appelt, Douglas E., Problem Solving Applied to Natural Language Generation, Proceedings of the Annual Conference of the Association for Computational Linguistics, 1980.
[2] Cohen, Philip, On Knowing What to Say: Planning Speech Acts, University of Toronto Technical Report No. 118, 1978.
[3] Fikes, Richard E., and N. Nilsson, STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving, Artificial Intelligence, No. 2, 1971.
[4] Konolige, Kurt, and N. Nilsson, Planning in a Multiple Agent Environment, Proceedings of the First Annual Conference of the American Association for Artificial Intelligence, August 1980.
[5] Moore, Robert C., Reasoning About Knowledge and Action, Massachusetts Institute of Technology Artificial Intelligence Laboratory Technical Report TR-???, 1979.
[6] Sacerdoti, Earl, A Structure for Plans and Behavior, Elsevier North-Holland, Inc., Amsterdam, The Netherlands, 1977.
[7] Searle, John, Speech Acts, Cambridge University Press, 1969.
 | 1980 | 73 |
71 | 
Making Judgments

Hans J. Berliner
Computer Science Department
Carnegie-Mellon University
Pittsburgh, Pa. 15213

This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551.

Abstract

Reasoning-based problem solving deals with discrete entities and manipulates these to derive new entities or produce branching behavior in order to discover a solution. This paradigm has some basic difficulties when applied to certain types of problems. Properly constructed arithmetic functions, such as those using our SNAC principles, can do such problems very well. SNAC constructions have considerable generality and robustness, and thus tend to outperform hand-coded case statements as domains get larger. We show how a SNAC function can avoid getting stuck on a sub-optimal hill while hill-climbing. A clever move made by our backgammon program in defeating the World Champion is analyzed to show some aspects of the method.

1 Introduction

Problem solving research and examples usually deal with sequential reasoning toward a conclusion or required response. For such situations, criteria exist that make it possible to identify the correct response and possibly order other responses with respect to their goodness. However, in most domains such a paradigm is not possible because the number of states in the domain is so large that it is next to impossible to describe the properties of an arbitrary state with sufficient accuracy to be able to reason about it. Expertise in such domains appears to require judgment. We consider judgment to be the ability to produce graded responses to small changes in the stimulus environment. In judgment domains several responses may be considered adequate, while reasoned decisions would appear to be only correct or incorrect.

The ability to reliably judge small differences in chess positions is what separates the top players from their nearest competitors. Even though a decision procedure exists for determining whether one position is better than another, it is intractable. It is this intractability, or the inability to isolate features that can be used in a clear reasoning process, that distinguishes the judgment domain from the reasoning domain. The boundary between the two is certainly fuzzy, and undoubtedly changes as new information about any particular domain is developed. It seems that the larger the domain and the less precise the methods of making comparisons between elements of the domain, the less adequate are reasoning techniques.

2 The Problem

There are a number of techniques available to allow a program to make comparisons, i.e., to discriminate good from bad from indifferent in selecting among courses of action and among potential outcomes. However, while these techniques are fine for doing simple comparisons, most of them break down with even small additional complexity. Consider the syllogism:

1) The more friends a person has, the happier he is.
2) John has more friends than Fred.
Therefore: John is happier than Fred.

So far so good. However, adding just a small amount of complexity with the two additional propositions:

3) The more money a person has, the happier he is.
4) Fred has more money than John.

makes it possible to derive two contradictory conclusions from the premises. This is a most unsatisfactory state of affairs.
Especially so, since recoding the premises into first order predicate calculus does not help either. Neither will using productions or the branching logic of programming languages. For such representations, the most likely formulation would be that X will be happier than Y if he is superior in all applicable categories. Another formulation would have X happier than Y if he is superior in a majority of categories (with a tie being undefined). Such "voting" techniques can be shown to be deficient if we further increase the complexity of the decision that is to be made. If premises 2 and 4 were restated as:

2a) John has 25 friends and Fred has 20.
4a) Fred has $20,000 and John has $500.

most people would agree that Fred was happier according to our definitions of happiness. Yet, the only machinery available for coming to grips with problems such as this in systems that reason is to produce a large number of additional axioms that contain compound conditions, or to define degrees of difference so that degrees of happiness can be ascertained and summed.

-------
This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33815-78-C-1551.
-------

The world of reasoning is a world of quantized perception and action. These systems are discrete and do business on a "case" basis. In order to achieve expertise it is necessary to react differentially to states that were formerly considered equivalent. Thus, the number of distinct perceptions and actions gets larger with expertise. This makes it more expensive to find the applicable rule or pattern, and creates difficulty in keeping new rules from interfering in unintended ways with the effects of older rules. Further, the possibility that more than one pattern will match grows as complexity grows, and default conditions are usually defined for states that fail to match specific rules or patterns. This makes adding new knowledge a formidable task for even moderate-size domains [3]. So unless a method is found for automatically appending viable rules to such a system, there seems to be a definite limit on the expertise it can achieve.

Because it is easier to pay attention to only a few things at one time, reasoning systems seem to have more of a sub-optimization nature than is necessary in sequential problem solving. The need to solve the top level goal can obscure the fact that it could possibly be solved later with greater facility. For instance, a plan for taking a trip by car could include:

1. Get suitcase
2. Pack clothes in suitcase
3. Put suitcase in car

If the raincoat is already in the car, this would involve getting it from the car only to bring it back later inside the suitcase. Conceivably, it would be simpler to bring the packed suitcase to the car and put the raincoat inside it at that time. This shows that goals need not have an immutable hierarchy. Further, there are times when achieving several low level goals is more desirable than achieving the top level goal.

In addition to the above there is another problem that exists in domains that interface to the real world, where sensed parameters that have a quasi-continuous character may have to be quantized.
Premature quantization of variables loses information and can cause problems when the variable is to be used later for making decisions. For instance, if day/night is a binary variable and it is advantageous to be in day, a program may arrange its problem solving behavior so that it samples the environment just before day turns to night (by the system definition), and, being satisfied with what it finds, pronounces this branch of the solution search as favorable. If it had been forced to continue the branch even a few steps, it would have come to a different conclusion as night was closing in. However, quantization of the relatively continuous day/night variable causes the blemish effect [2], a behavior anomaly similar to the horizon effect [1], but with the step size of the variable rather than the fixed depth of the search being the culprit. This problem can be prevented by retaining a variable in its quasi-continuous state as long as possible. However, if a variable has a very large range it is impractical to create tests for each value in the range. Resorting to the testing of sub-ranges merely recreates the problem. Thus, discrete treatment of such a variable can cause problems, no matter how it is done.

3 A Better Way

Arithmetic functions can do all the above things easily and cheaply if they are constructed in the right way. A polynomial of terms that represent important features in the domain is constructed. We have described our SNAC method of constructing such polynomials and shown [2, 4] that:

- It is important that the values of terms vary smoothly.
- Non-linearity of terms is extremely important for expertise.
- Some method must exist for determining the degree to which each feature is applicable in the present situation. This is done with slowly varying variables that we call application coefficients.

The SNAC method also makes it possible to avoid the previously vexing problem of getting stuck on a sub-optimal hill while hill-climbing. Figure 1 shows how getting stuck on a hill is avoided.

[Figure 1: Effect of SNAC on Hill Shape]

With non-linear functions, the peaks of hills can be rounded so that retaining the peak becomes less desirable, especially if some other high ground is in view of the searching process. Further, with application coefficients it is possible to change the contour of the hill even as it is being climbed. This is shown in a - c, the arrow showing the location of the current state. As the hill is being climbed, one or more application coefficients that sense the global environment cause the goal of achieving the hilltop to become less important since it is very near being achieved. The change in value of the application coefficients causes the contour of the hill to begin to flatten, making the achievement of the summit less desirable, and resulting in the program looking for the next set of goals before even fully achieving the current set. Thus application coefficients can direct progress by reducing the importance of goals that are near being achieved, have already been achieved, or are no longer important.

The above is achieved mathematically as follows: the function X^2 + Y^2 = C^2, for -C <= X <= C and Y >= 0, will produce a semi-circle similar to Figure 1a. If we now change the function to X^2 + A^2 Y^2 = C^2, where A >= 1 is an application coefficient (a variable), we can flatten the semi-circle into a semi-ellipse of arbitrary flatness. Here, A is driven by a slowly varying variable OLDX that tracks progress up the hill, so that the contour flattens as OLDX increases in value. The construction is finalized by only recognizing values of A while OLDX is in the range of (say) -2C to +2C; otherwise, climbing the hill would never seem a desirable thing to do because the program could not tell the difference between getting there when it was far away or already very close.

4 An Example of SNAC Sensitivity

The backgammon position in Figure 2 occurred in the final game of the match in which my program, BKG 9.8, beat World Champion Luigi Villa in July, 1979. In this position, BKG 9.8 had to play a 5,1. There are four factors that must be considered here:

1. Black has established a strong defensive backgame position with points made on the 20 and 22 points.

2. In backgame positions timing is very important. Black is counting on hitting White when he brings his men around and home. At such time he must be prepared to contain the man sent back. This can only be done if the rest of his army is not too far advanced, so it can form a containing pocket in front of the sent-back man. At the moment Black would not mind having additional men sent back in order to delay himself further and improve his timing.

[Figure 2: Black to Play a 5,1]

3. There is also a possibility that Black could win by containing the man that is already back, but this is rather slim since White can escape with any 5 or 6. However, blockading this man is of some value in case White rolls no 5's or 6's in the near future.

4. In case the sole White back man does not escape, there is a possibility of bringing up the remainder of Black's men not used for the backgame and trying to win with an attack against the back man.

In view of the above it is very difficult to determine the right move, and none of the watching experts succeeded in finding it. The most frequently mentioned move was 13-7, which brings a man into the attack and hopes he is hit so as to gain timing (delay one's inevitable advance). However, BKG 9.8 made the better move of playing 13-8, 3-2, breaking up its blockade somewhat in order to get more attack, and attempting to actively contain the White back man. It did not worry about the increased chance of being hit, as this only helps with later defense. This gives the program two chances to win: if the attack succeeds, and, by getting more men sent back, if the attack fails it improves the likelihood of success of its backgame.

I have not seen this concept in this form before in books or games. Humans tend to not want to break up the blockade that they have painstakingly built up, even though it is now the least valuable asset that Black has.

It is instructive to see how the program arrived at the judgment it made; one that it had never been tested for. Black has 28 legal moves. The top choices of the program were (points of the scoring polynomial in parentheses): 13-8, 3-2 (687); 10-5, 3-2 (682); 13-8, 10-9 (672); and 13-7 (667). The third and fourth choices were the ones most frequently mentioned by watching experts, thus showing they missed the idea of breaking up the blockade; the thing common to the program's top two choices.
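As a small illustration of the flattening construction just described (a sketch under stated assumptions: the coupling of A to the search, and the particular values of C and A, are illustrative, not taken from the program), the hill Y = sqrt(C^2 - X^2) / A loses height and slope as the application coefficient A grows:

```python
import math

# Sketch of the semi-circle-to-semi-ellipse flattening: X^2 + (A*Y)^2 = C^2,
# so Y = sqrt(C^2 - X^2) / A. As the application coefficient A grows past 1,
# the hill flattens and the summit becomes less worth pursuing.

def hill_value(x, c=6.0, a=1.0):
    return math.sqrt(max(c * c - x * x, 0.0)) / a

for a in (1.0, 2.0, 4.0):                     # progressively flattened contours
    peak = hill_value(0.0, a=a)
    gain = peak - hill_value(3.0, a=a)        # remaining gain from x=3 to summit
    print(f"A={a}: peak {peak:.2f}, gain from x=3 to summit {gain:.2f}")
```

With A = 1 the summit is still worth 0.80 over the point x = 3; with A = 4 that margin shrinks to 0.20, so a search maximizing the polynomial starts attending to other goals before the summit is fully reached.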
Let us see why it judged the move actually played as better than the third choice (13-8, 10-9). The program considers many factors (polynomial terms) in its judgments and quite a few of these are non-linear. The six factors on which the two moves differed were (points for each and difference in parentheses):

1. Containment of enemy man (177, 131, +46). The move made does hinder his escape more. Containment is always desirable unless one is far ahead and the end of the game is nearing.

2. Condition of our home board (96, 110, -14). It breaks up one home board point. Breaking up the board (points 1 thru 6) is never considered desirable.

3. Attack (37, 21, +16). It is the best attacking move. Attack is desirable unless we are jeopardizing a sure win in the process.

4. Defensive situation (246, 260, -14). The move slows White down, thus could reduce the effectiveness of the backgame.

5. Long-term positional (-2, 11, -13). It puts a man on the 2 point, which is undesirable when the game still has a long way to go because it is too far advanced to be able to influence enemy men from there.

6. Safety of men (-12, -4, -8). The move made is dangerous. The program realizes this, but also understands that with a secure defensive position such danger is not serious. However, all other things being equal, it would prefer the least dangerous move.

Thus the better containment and attack are considered to be more important than the weakening of the home board, the temporary slowing down of White, the long-term positional weakness, and the safety of the men. The difference between the first and second choice was that in the first choice the attack is slightly stronger.

The importance of each of the above terms varies with the situation. In the example, a backgame is established; else the safety term would outweigh the attack term, and BKG 9.8 would not leave two blots in its home board. It does recognize the degree of danger, however, and will not make a more dangerous move unless it has compensating benefits. This is typical of the influence that application coefficients exert in getting a term to respond to the global situation.

5 Perspective

We have been employing the SNAC method of making judgments for over two years now, and are struck with its simplicity and power. The happiness example posed earlier is solved trivially in all its forms with SNAC. If the above travel planning problem were solved as a search problem using SNAC functions that measure the economy of effort of the steps used, then undoubtedly SNAC would also do better than sequential planning based on rules, with no evaluation of outcome other than success or failure.

At the moment it is difficult to determine what role, if any, SNAC-like mechanisms have in human thinking. We have constructed them to simulate lower-level "intuitive" types of behavior, and they appear to work admirably in capturing good judgment in the large domain of backgammon. We conjecture that as variables become more and more discrete in character and as criteria for success become more obvious, reasoning gradually replaces such judgment making.

At present our backgammon program is being modified to be able to interpret its own functions with the aim of being able to explain its actions, and ultimately being able to identify its failures by type and modifying the culprit functions.
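The paper does not spell out the SNAC polynomial that settles the happiness example, but a minimal sketch shows the flavor: replace the discrete axioms with smooth, non-linear terms and the graded comparison falls out. The logarithmic form and the equal weighting below are illustrative assumptions, not Berliner's actual construction.

```python
import math

# A smooth, non-linear "happiness" function in the spirit of section 3.
# Diminishing returns: each term varies smoothly with its input.

def happiness(friends, dollars):
    return math.log(1 + friends) + math.log(1 + dollars)

john = happiness(friends=25, dollars=500)      # premises 2a and 4a
fred = happiness(friends=20, dollars=20_000)

print(f"John: {john:.2f}, Fred: {fred:.2f}")   # Fred scores higher
```

This yields roughly 9.5 for John and 12.9 for Fred, matching the judgment most people make, with no compound-condition axioms and no voting rule; restating the premises only changes the inputs, not the machinery.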
References

[1] Berliner, H. J. "Some Necessary Conditions for a Master Chess Program." In Third International Joint Conference on Artificial Intelligence, pages 77-85. IJCAI, 1973.

[2] Berliner, H. "On the Construction of Evaluation Functions for Large Domains." In Sixth International Joint Conference on Artificial Intelligence, pages 53-55. IJCAI, 1979.

[3] Berliner, H. "Some Observations on Problem Solving." In Proceedings of the Third CSCSI Conference. Canadian Society for Computational Studies of Intelligence, 1980.

[4] Berliner, H. J. "Backgammon Computer Program Beats World Champion." Artificial Intelligence 14(1), 1980.
 | 
	1980 
 | 
	74 
 | 
					
72 
							 | 
MULTIPLE-AGENT PLANNING SYSTEMS

Kurt Konolige
Nils J. Nilsson
SRI International, Menlo Park, California

ABSTRACT

We analyze problems confronted by computer agents that synthesize plans that take into account (and employ) the plans of other, similar, cooperative agents. From the point of view of each of these agents, the others are dynamic entities that possess information about the world, have goals, make plans to achieve these goals, and execute these plans. Thus, each agent must represent not only the usual information about objects in the world and the preconditions and effects of its own actions, but it must also represent and reason about what other agents believe and what they may do. We describe a planning system that addresses these issues and show how it solves a sample problem.

INTRODUCTION

Certain tasks can be more advantageously performed by a system composed of several "loosely coupled," cooperating artificial intelligence (AI) agents than by a single, tightly integrated system. These multiple agents might be distributed in space to match the distributed nature of the task. Such systems are often called distributed artificial intelligence (DAI) systems [1]. We are interested here in systems where the component agents themselves are rather complex AI systems that can generate and execute plans, make inferences, and communicate with each other.

Among the potential advantages of such DAI systems are graceful (fail-soft) degradation characteristics (no single agent need be indispensable), upward extensibility (new agents can be added without requiring major system redesign), and communication efficiency (a message-sending agent can plan its "communication acts" carefully, taking into account the planning and inference abilities of the receiving agents).

In planning its actions, each agent must consider the potential actions of the other agents. Previous AI research on systems for generating and executing plans of actions assumed a single planning agent operating in a world that was static except for the effects of the actions of the planning agent itself. Examples of such systems include STRIPS [2], NOAH [3], and NONLIN [4]. Several important extensions must be made to planning systems such as these if they are to function appropriately in an environment populated by other planning/execution systems.

First, each agent must be able to represent certain features of the other agents as well as the usual information about static objects in the world. Each agent must have a representation for what the other agents "believe" about themselves, the world, and other agents. Each agent must have a representation for the planning, plan-execution, and reasoning abilities of the other agents. These requirements presuppose techniques for representing the "propositional attitudes" believe and want. Second, among the actions of each agent are "communication actions" that are used to inform other agents about beliefs and goals and to request information. Finally, each agent must be able to generate plans in a world where actions not planned by that agent spontaneously occur.
We introduce here the notion of spontaneous operators to model such actions.

In this paper, we give a brief summary of our approach toward building DAI systems of this sort. It should be apparent that the work we are describing also has applications beyond DAI. For example, our multiple agents plan, execute, and understand communication acts in a manner that could illuminate fundamental processes in natural-language generation and understanding. (In fact, some excellent work has already been done on the subject of planning "speech acts" [5-6].) Work on models of active agents should also contribute to more sophisticated and helpful "user models" for interactive computer systems. The development of multiagent systems might also stimulate the development of more detailed and useful theories in social psychology--just as previous AI work has contributed to cognitive psychology. At this early stage of our research, we are not yet investigating the effects of differing "social organizations" of the multiple agents. Our work to date has been focussed on representational problems for such systems independent of how a society of agents is organized.

A MULTIPLE-AGENT FORMALISM

Each agent must be able to represent other agents' beliefs, plans, goals, and introspections about other agents. Several representational formalisms might be used. McCarthy's formalism for first-order theories of individual concepts and propositions [7] is one possibility, although certain problems involving quantified expressions in that formalism have not yet been fully worked out. Another candidate is Moore's first-order axiomatization of the possible world semantics for the modal logic of knowledge and action [8-9]. Appelt [10] has implemented a system called KAMP that uses Moore's approach for generating and reasoning about plans involving two agents. We find Moore's technique somewhat unintuitive, and it seems needlessly complex when used in reasoning about ordinary (nonattitudinal) propositions. Here we develop a representation for each agent based on Weyhrauch's notion of multiple first-order theories and metatheories [11].

Using Weyhrauch's terminology, each computer individual is defined by the combination of a first-order language, a simulation structure or partial model for that language, a set of Facts (expressed in the language), and a Goal Structure that represents a goal for the agent and a plan for achieving it. We assume that each agent has a deductive system (a combination of a theorem-prover and attached procedures defined by the simulation structure) used for deriving new facts from the initial set and for attempting to determine whether goals and subgoals follow from the set of facts. Each agent is also assumed to have a planning system (such as STRIPS) for creating plans to achieve goals.
Using a typical "blocks-world" example, we diagram an agent's structure in the following way:

    A1 (agent's name):

        Facts            Goal
        -----            ----
        HOLDING(A1,A)    HOLDING(A1,B)
        CLEAR(B)

Viewed as a computational entity, an agent's structure is typically not static. Its deductive and sensory processes may expand its set of facts, or its planning system may create a plan to achieve a goal. Also, once a plan exists for achieving a goal, the agent interacts with its environment by executing its plan.

In this summary, we deal only with the planning processes of agents. In the example above, the occurrence of the goal HOLDING(A1,B) in A1's goal structure triggers the computation of a plan to achieve it. Once generated, this plan is represented in the goal structure of agent A1 as follows:

    Goal
    ----
             HOLDING(A1,B)
                  |
             pickup(A1,B)
             /           \
    HANDEMPTY(A1)      CLEAR(B)
             |
      putdown(A1,A)
             |
      HOLDING(A1,A)

Plans are represented by goal/subgoal trees composed of planning operators and their preconditions. We assume a depth-first ordering of the operators in the plan.

Now let us introduce another agent, A0. Agent A0 can have the same sort of structure as A1, including its own first-order language, a description of the world by wffs in that language (facts), a simulation structure, a goal structure, and a planner and deducer. Some of A0's facts are descriptions of A1's structure and processes. By making inferences from these facts, A0 can reason about the planning and deductive activities of A1 and thus take A1 into account in forming its own plans. Also, a structure similar to A1's actual structure, and procedures similar to A1's deducer and planner, can be used as components of A0's simulation structure. Procedural attachment to these "models" of A1 can often be employed as an alternative method of reasoning about A1. (Of course, A0 may have an incomplete or inaccurate model of A1.)

Because A1's structure is a first-order language (augmented by certain other structures), A0 can use a formal metalanguage to describe it along the lines suggested, for example, by Kleene [12] and developed in FOL by Weyhrauch [11]. A0 has terms for any sentence that can occur in A1's language or for any of A1's goals or plans; the predicates FACT and GOAL are used to assert that some sentences are in A1's Facts list or goal structure. Consider the following example: assume A0 is holding block A, block B is clear, and A0 believes these facts and further believes that A1 believes A0 is holding A and that A1 believes B is not clear. A0 would have the following structure:

    A0:
        Facts
        -----
        HOLDING(A0,A)
        CLEAR(B)
        FACT(A1,'HOLDING(A0,A)')
        FACT(A1,'~CLEAR(B)')

We use quote marks to delimit strings, which may have embedded string variables. The denotation of a ground string is the string itself. Thus, the intended interpretation of FACT(A1,'HOLDING(A0,A)') is that the wff HOLDING(A0,A) is part of the facts list of A1 (that is, A1 "believes" that A0 is holding A). By using the FACT predicate and terms denoting other agents and wffs, any facts list for other agents can be described. (A0 can describe its own beliefs in the same manner.)
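A small sketch may help fix the representation. The Agent class below is a hypothetical illustration of the facts-plus-goal-structure layout and the quoted-wff convention, not the authors' implementation:

```python
# A minimal sketch of the agent structure and quoted-string beliefs.
# Class and method names are illustrative assumptions.

class Agent:
    def __init__(self, name, facts, goals):
        self.name = name
        self.facts = set(facts)    # wffs, represented as strings
        self.goals = list(goals)

    def believes(self, wff):
        return wff in self.facts

# A0's view of the example: beliefs about A1 are ordinary facts whose
# arguments include a *quoted* wff, so arbitrary nesting comes for free.
a0 = Agent("A0",
           facts=["HOLDING(A0,A)",
                  "CLEAR(B)",
                  "FACT(A1,'HOLDING(A0,A)')",   # A1 believes A0 holds A
                  "FACT(A1,'~CLEAR(B)')"],      # A1 believes B is not clear
           goals=["HOLDING(A0,A)"])

# A0's own belief and A0's belief about A1's belief can disagree:
print(a0.believes("CLEAR(B)"))                  # True
print(a0.believes("FACT(A1,'CLEAR(B)')"))       # False
```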
We purposely use "believe" instead of "know" because we are particularly interested in situations where agents may be mistaken in their representation of the world and other agents. In the above example, A0's opinions about its own and A1's belief about whether or not block B is clear are inconsistent. We avoid formalizing "know" and thus do not take a position about the relationship between knowledge and belief (such as "knowledge is justified true belief"). We can describe some of the usual properties of belief by axioms like FACT(x,p) => FACT(x,'FACT(x,p)'); i.e., if an agent believes p, it believes that it believes p. We do not, however, use an axiom to the effect that agents believe the logical consequences of their beliefs, because we want to admit the possibility that different agents use different procedures for making inferences. In particular, we want to emphasize that the deductive capabilities of all agents are limited.

While the static structure of A1 is described, for A0, by FACT and GOAL predicates, the action of A1's deductive system and planner can also be axiomatized (for A0) at the metalevel (see Kowalski [13] for an example). This axiomatization allows A0 to simulate A1's deducer or planner by purely syntactic theorem-proving. Thus A0 might use predicates such as ISPROOF(x,p) and ISPLAN(x,p) to make assertions about whether certain proof or plan structures are proofs or plans for other agents (or for itself).

In certain cases, A0 can find out if A1 can deduce a particular theorem (or if A1 can create a plan) by running its procedural model of A1's deducer (or planner) directly, rather than by reasoning with its own facts. This is accomplished by semantic attachments of models of A1's deducer and planner to the predicates ISPROOF and ISPLAN in A0's metalanguage. Semantic attachment thus allows A0 to "think like A1" by directly executing its model of A1's planner and deducer. (Here, we follow an approach pioneered by Weyhrauch [11] in his FOL system of using semantic attachments to data structures and programs in partial models.) The same kind of attachment strategy can be used to enable A0 to reason about its own planning abilities.

The usual problems associated with formalizing propositional attitudes [8,14] can be handled nicely using the FACT predicate. For example, the atomic formula FACT(A1,'CLEAR(A) v CLEAR(B)') asserts the proposition that A1 believes that A is clear or B is clear, and is not confused with the formula [FACT(A1,'CLEAR(A)') v FACT(A1,'CLEAR(B)')], which asserts the different proposition that A1 believes that A is clear or A1 believes that B is clear. Furthermore, semantic attachment methods confer the advantages of the so-called "data base approach" [8] when appropriate.

Of particular importance among statements concerning A0's beliefs about A1 are those that involve "quantifying in," i.e., where a quantified variable appears inside the term of a FACT predicate. We follow the general approach of Kaplan [15] toward this topic.
For example, the sentence (Ex)FACT(A1,'HOLDING(A1,x)') occurring among A0's facts asserts that A1 is holding an identified (for A1) block without identifying it (for A0).

AN EXAMPLE

We can illustrate some of the ideas we are exploring by a short example. Suppose that there are two agents, A0 and A1, each equipped with a hand for holding blocks. Initially A1 is holding a block, A, and A0 wants to be holding A. Suppose that A0 believes these initial facts, but (to make our example more interesting) A0 has no information about whether or not A1 itself believes it is holding A. Thus, the initial structure for A0 is:

    A0:
        Facts              Goal
        -----              ----
        HANDEMPTY(A0)      HOLDING(A0,A)
        HOLDING(A1,A)

Let us assume the following planning operators (for both A0 and A1). We use standard STRIPS notation [16]. ('P&D' denotes the precondition and delete lists; 'A' denotes the add list.)

    putdown(x,b)          agent x puts block b on the table
      P&D:  HOLDING(x,b)
      A:    ONTABLE(b) & CLEAR(b) & HANDEMPTY(x)

    pickup(x,b)           agent x picks up block b
      P&D:  CLEAR(b) & HANDEMPTY(x)
      A:    HOLDING(x,b)

    asktoachieve(x,y,g)   agent x gives agent y the goal denoted by string g
      P:    T
      A:    GOAL(y,g)

    tell(x,y,s)           agent x tells agent y the expression denoted by string s
      P:    FACT(x,s)
      A:    FACT(y,s)

Agents take into account the possible actions of other agents by assuming that other agents generate and execute plans to achieve their goals. The action of another agent generating and executing a plan is modelled by a "spontaneous operator." A spontaneous operator is like an ordinary planning operator except that whenever its preconditions are satisfied, the action corresponding to it is presumed automatically executed. Thus, by planning to achieve the preconditions of a spontaneous operator, a planning agent can incorporate such an operator into its plan. Let us assume that agent A0 can use the operator "achieve" as a spontaneous operator that models the action of another agent generating and executing a plan:

    achieve(x,g)          agent x achieves goal g by creating and executing
                          a plan to achieve g.
      PC:   GOAL(x,g) & ISPLAN(x,p,g,x) & ISPLAN(x,p,g,A0)
      D:    **the delete list is computed from the plan, p**
      A:    FACT(x,g)
            FACT(A0,g)

The expression ISPLAN(x,p,g,f) is intended to mean that there is a plan p, to achieve goal g, using agent x's planner, with facts belonging to agent f. Our precondition for achieve ensures that before A0 can assume that condition g will be spontaneously achieved by agent x, A0 has to prove both that agent x can generate a plan from its own facts and that agent x could generate a plan from A0's facts (to ensure that the plan is valid as far as A0 is concerned).

Here are some axioms about ISPLAN that A0 will need:

1) FACT(x,w) => ISPLAN(x,NIL,w,x)
   (If x already believes w, then x has a plan, namely NIL, for achieving w.)

2) [ISPLAN(x,u,y,x) & PC(z,y) & OP(x,z,g)] => ISPLAN(x,extend(u,z),g,x)
   (If x has a plan, namely u, to achieve the preconditions, y, of its operator, z, with add list containing g, then x has a plan, namely extend(u,z), for achieving g. The functional expression, extend(u,z), denotes that plan formed by concatenating plan z after plan u.)
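To make the operator machinery concrete, here is a minimal Python sketch of the four ordinary operators and of how a spontaneous operator fires once its preconditions hold. The dict-based representation and the function names are illustrative assumptions, not the authors' system:

```python
# STRIPS-style operators as dicts of precondition, delete, and add lists.

def putdown(x, b):
    return {"pre": [f"HOLDING({x},{b})"],
            "del": [f"HOLDING({x},{b})"],
            "add": [f"ONTABLE({b})", f"CLEAR({b})", f"HANDEMPTY({x})"]}

def pickup(x, b):
    return {"pre": [f"CLEAR({b})", f"HANDEMPTY({x})"],
            "del": [f"CLEAR({b})", f"HANDEMPTY({x})"],
            "add": [f"HOLDING({x},{b})"]}

def asktoachieve(x, y, g):
    return {"pre": [], "del": [], "add": [f"GOAL({y},{g!r})"]}

def tell(x, y, s):
    return {"pre": [f"FACT({x},{s!r})"], "del": [], "add": [f"FACT({y},{s!r})"]}

def applicable(op, facts):
    return all(p in facts for p in op["pre"])

def apply_op(op, facts):
    return (facts - set(op["del"])) | set(op["add"])

# A spontaneous operator is presumed to execute by itself once its
# preconditions are satisfied; the planner only needs to achieve them.
def fire_spontaneous(ops, facts):
    for op in ops:
        if applicable(op, facts):
            facts = apply_op(op, facts)
    return facts

# Example: an ordinary operator application.
facts = {"CLEAR(A)", "HANDEMPTY(A0)"}
facts = apply_op(pickup("A0", "A"), facts)    # now HOLDING(A0,A) holds
print(facts)
```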
The planning tree in Figure 1 shows a possible plan that A0 might generate using its facts (including axioms about ISPLAN) and operators. The sequence of operators in this plan is {tell(A0,A1,'HOLDING(A1,A)'), asktoachieve(A0,A1,'CLEAR(A)'), achieve(A1,'CLEAR(A)'), pickup(A0,A)}. Note that semantic attachment processes were used in several places in generating this plan. We leave to the control strategy of the system the decision about whether to attempt to prove a wff by semantic attachment or by ordinary syntactic methods. We are now in the process of designing a system for generating and executing plans of this sort. Space prohibits describing some additional features of our system, including its control strategy for generating plans. We plan to experiment with a complex of several agents, each incorporating planning systems like that briefly described here.

We gratefully acknowledge helpful discussions with Doug Appelt, Bob Moore, Earl Sacerdoti, Carolyn Talcott and Richard Weyhrauch. This research is supported by the Office of Naval Research under Contract No. N00014-80-C-0296.

REFERENCES

[1] Sacerdoti, E. D., "What Language Understanding Research Suggests about Distributed Artificial Intelligence," in Distributed Sensor Nets, pp. 8-11. Paper presented at the DARPA Workshop, Carnegie-Mellon University, Pittsburgh, Pennsylvania (December 7-8, 1978).

[2] Fikes, R. E. and N. J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence, 2(3/4), pp. 189-208 (1971).

[3] Sacerdoti, E. D., A Structure for Plans and Behavior (New York: Elsevier, 1977).

[4] Tate, A., "Generating Project Networks," in IJCAI-5, pp. 888-893 (1977).

[5] Searle, J. R., "A Taxonomy of Illocutionary Acts," in Language, Mind and Knowledge, K. Gunderson (Ed.) (University of Minnesota Press, 1976).

[6] Cohen, P. R. and C. R. Perrault, "Elements of a Plan-Based Theory of Speech Acts," Cognitive Science, 3(3), pp. 177-212 (1979).

[7] McCarthy, J., "First Order Theories of Individual Concepts and Propositions," in Machine Intelligence 9, pp. 120-147, J. Hayes and D. Michie (Eds.) (New York: Halsted Press, 1979).

[8] Moore, R. C., "Reasoning About Knowledge and Action," in IJCAI-5, pp. 223-227 (1977).

[9] Moore, R. C., "Reasoning About Knowledge and Action," Artificial Intelligence Center Technical Note 191, SRI International, Menlo Park, California (1980).

[13] Kowalski, R., Logic for Problem Solving (New York: North-Holland, 1979).

[14] Quine, W. V. O., "Quantifiers and Propositional Attitudes," in Reference and Modality, L. Linsky (Ed.), pp. 101-111 (London: Oxford University Press, 1971).

[15] Kaplan, D., "Quantifying In," in Reference and Modality, L. Linsky (Ed.), pp. 112-144 (London: Oxford University Press, 1971).

[16] Nilsson, N. J., Principles of Artificial Intelligence (Menlo Park: Tioga Publishing Co., 1980).
    HOLDING(A0,A)
      pickup(A0,A)
        HANDEMPTY(A0)                       (initial fact)
        CLEAR(A)
          achieve(A1,'CLEAR(A)')
            GOAL(A1,'CLEAR(A)')
              asktoachieve(A0,A1,'CLEAR(A)')
            ISPLAN(A1,p,'CLEAR(A)',A0)      (verified by proc. attach. to a model
                                             of A1's planner using A0's facts,
                                             after substituting for p)
            ISPLAN(A1,p,'CLEAR(A)',A1)
              plan axiom  {extend(u,z)/p}
              [ISPLAN(A1,u,y,A1) & PC(z,y) & OP(A1,z,'CLEAR(A)')]
                proc. attach. to PC and OP  {'putdown(A1,A)'/z, 'HOLDING(A1,A)'/y}
              ISPLAN(A1,u,'HOLDING(A1,A)',A1)
                plan axiom  {NIL/u}
                FACT(A1,'HOLDING(A1,A)')
                  tell(A0,A1,'HOLDING(A1,A)')
                    FACT(A0,'HOLDING(A1,A)')  (verified by proc. attach. to
                                               A0's "fact finder")

    Figure 1: A0's planning tree for the example.
 | 
	1980 
 | 
	75 
 | 
					
73 
							 | 
SCOUT: A SIMPLE GAME-SEARCHING ALGORITHM WITH PROVEN OPTIMAL PROPERTIES

Judea Pearl
Cognitive Systems Laboratory
School of Engineering and Applied Science
University of California
Los Angeles, California 90024

ABSTRACT

This paper describes a new algorithm for searching games which is conceptually simple, space efficient, and analytically tractable. It possesses optimal asymptotic properties and may offer practical advantages over α-β for deep searches.

I. INTRODUCTION

We consider a class of two-person perfect information games in which two players, called MAX and MIN, take alternate turns in selecting one out of d legal moves. We assume that the game is searched to a depth h, at which point the terminal positions are assigned a static evaluation function V0. The task is to evaluate the minimax value, Vh, of the root node by examining, on the average, the least number of terminal nodes.

SCOUT, the algorithm described in this paper, has evolved as a purely theoretical tool for analyzing the mean complexity of game-searching tasks where the terminal nodes are assigned random and independent values [1]. With the aid of SCOUT we were able to show that such games can be evaluated with a branching factor of P*/(1-P*), where P* is the root of x^d + x - 1 = 0, and that no directional algorithm (e.g., ALPHA-BETA) can do better. We have recently tested the performance of SCOUT on a 'real' game (i.e., the game of Kalah) and were somewhat surprised to find that, even for low values of h, the efficiency of SCOUT surpasses that of the α-β procedure [2]. The purpose of this paper is to call the attention of game-playing practitioners to the potential of SCOUT as a practical game-searching tool.

Section II describes the operation of SCOUT in conceptual terms, avoiding algorithmic details. Section III presents, without proofs, some of the mathematical properties of SCOUT and compares them to those of the α-β procedure. Finally, empirical results are reported comparing the performances of SCOUT and α-β for both random and dynamic orderings.

* Supported in part by NSF Grants MCS 78-07468 and MCS 78-18924.

II. THE SCOUT ALGORITHM

SCOUT invokes two recursive procedures called EVAL and TEST. The main procedure EVAL(S) returns V(S), the minimax value of position S, whereas the function of TEST(S, v, >) is to validate (or refute) the truth of the inequality V(S) > v, where v is some given reference value.

Procedure: TEST(S, v, >)

To test whether S satisfies the inequality V(S) > v, start applying the same test (calling itself) to its successors from left to right:

If S is MAX, return TRUE as soon as one successor is found to be larger than v; return FALSE if all successors are smaller than or equal to v.

If S is MIN, return FALSE as soon as one successor is found to be smaller than or equal to v; return TRUE if all successors are larger than v.

An identical procedure, called TEST(S, v, >=), can be used to verify the inequality V(S) >= v, with the obvious revisions induced by the equality sign.
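Read operationally, TEST is a short recursive routine. The following Python is a minimal sketch of it under an assumed node interface; the Node class here is an illustration, since the paper gives flow-charts in [1] rather than code:

```python
# A minimal sketch of TEST. Short-circuiting any()/all() reproduce the
# left-to-right cutoffs described above.

class Node:
    def __init__(self, value=None, children=(), is_max=True):
        self.value, self.successors, self.is_max = value, tuple(children), is_max
        self.is_terminal = not self.successors

def test(node, v, strict=True):
    """True iff V(node) > v (strict) or V(node) >= v (non-strict)."""
    if node.is_terminal:
        return node.value > v if strict else node.value >= v
    if node.is_max:
        # Cutoff: TRUE as soon as one successor passes the test.
        return any(test(s, v, strict) for s in node.successors)
    # MIN node -- cutoff: FALSE as soon as one successor fails it.
    return all(test(s, v, strict) for s in node.successors)
```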
Procedure: EVAL(S)

EVAL evaluates a MAX position S by first evaluating (calling itself) its leftmost successor S1, then 'scouting' the remaining successors, from left to right, to determine (calling TEST) if any meets the condition V(Sk) > V(S1). If the inequality is found to hold for Sk, this node is then evaluated exactly (calling EVAL(Sk)) and its value V(Sk) is used for subsequent 'scouting' tests. Otherwise Sk is exempted from evaluation and Sk+1 is selected for a test. When all successors have been either evaluated or tested and found unworthy of evaluation, the last value obtained is issued as V(S).

An identical procedure is used for evaluating a MIN position S, save for the fact that the event V(Sk) >= V(S1) now constitutes grounds for exempting Sk from evaluation. Flow-charts describing both SCOUT and TEST in algorithmic detail can be found in [1].

At first glance it appears that SCOUT is very wasteful; any node Sk which is found to fail a test criterion is submitted back for evaluation. The terminal nodes inspected during such a test may (and in general will) be revisited during the evaluation phase. An exact mathematical analysis, however, reveals that the amount of waste is not substantial and that SCOUT, in spite of some duplicated effort, still achieves the optimal branching factor P*/(1-P*), as will be demonstrated in Section III.

Two factors work in favor of SCOUT: (1) most tests would result in exempting the tested node (and all its descendants) from any further evaluation, and (2) testing for inequality using the TEST(S, v) procedure is relatively speedy. The speed of TEST stems from the fact that it induces many cutoffs not necessarily permitted by EVAL or any other evaluation scheme. As soon as one successor of a MAX node meets the criterion V(Sk) > v, all other successors can be ignored. EVAL, by contrast, would necessitate a further examination of the remaining successors to determine if any would possess a value higher than V(Sk).

Several improvements could be applied to the SCOUT algorithm to render it more efficient. For example, when a TEST procedure issues a non-exempt verdict, it could also return a new reference value and some information regarding how the decision was obtained, in order to minimize the number of nodes to be inspected by EVAL. However, the analysis presented in Section III, as well as the simulation tests, were conducted on the original version described above. These studies show that, even in its unpolished form, SCOUT is asymptotically optimal over all directional algorithms and is somewhat more efficient than the α-β procedure for the game tested (i.e., Kalah).
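For concreteness, here is a matching sketch of EVAL in the same assumed node interface, reusing test() from the sketch above; it follows the evaluate-then-scout scheme just described, with the >= exemption criterion at MIN nodes:

```python
# A minimal sketch of EVAL; the Node interface is assumed, as before.

def eval_scout(node):
    """Return the minimax value V(node)."""
    if node.is_terminal:
        return node.value
    succ = node.successors
    best = eval_scout(succ[0])                 # evaluate leftmost successor
    for s in succ[1:]:                         # scout the rest, left to right
        if node.is_max:
            if test(s, best, strict=True):     # V(s) > best is possible...
                best = eval_scout(s)           # ...so evaluate s exactly
        elif not test(s, best, strict=False):  # at MIN: V(s) < best
            best = eval_scout(s)
    return best
```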
Recently, Stockman [3] has also introduced an algorithm which examines fewer nodes than α-β. However, Stockman's algorithm requires an enormous storage space for tracing back a large number of potential strategies. SCOUT, by contrast, has storage requirements similar to those of α-β; at any point in time it only maintains pointers along one single path connecting the root to the currently expanded node.

III. ANALYSIS OF SCOUT'S EXPECTED PERFORMANCE

In this section we present, without proofs, some mathematical results related to the expected number of nodes examined by SCOUT and α-β. Additional results, including proofs, can be found in reference [1]. The model used for evaluating these algorithms consists of a uniform tree of height h (h even) and branching factor d, where the terminal positions are assigned random values, independently drawn from a common distribution F. We shall refer to such a tree as a (h, d, F)-tree.

Theorem 1: The root value of a (h, d, F)-tree with continuous strictly increasing terminal distribution F converges, as h -> infinity (in probability), to the (1-P*)-fractile of F, where P* is the solution of x^d + x - 1 = 0. If the terminal values are discrete, v1 < v2 < ... < vM, then the root value converges to a definite limit iff 1-P* != F(v_i) for all i, in which case the limit is the smallest v_i satisfying 1-P* < F(v_i).

Definition: Let A be a deterministic algorithm which searches the (h, d, F)-game and let I_A(h, d, F) denote the expected number of terminal positions examined by A. The quantity:

    r_A(d, F) = lim_{h -> infinity} [I_A(h, d, F)]^(1/h)

is called the branching factor corresponding to the algorithm A.

Definition: Let C be a class of algorithms capable of searching a general (h, d, F)-tree. An algorithm A is said to be asymptotically optimal over C if for all d, F, and B in C, r_A(d, F) <= r_B(d, F).

Definition: An algorithm A is said to be directional if for some linear arrangement of the terminal nodes it never selects for examination a node situated to the left of a previously examined node.

Theorem 2: The expected number of terminal positions examined by the TEST algorithm in testing the proposition "V(S) > v" for the root of a (h, d, F)-tree has a branching factor d^(1/2) if v != v* and P*/(1-P*) if v = v*, where v* satisfies F(v*) = 1-P* and P* is the root of x^d + x - 1 = 0.

Theorem 3: TEST is asymptotically optimal over all directional algorithms which test whether the root node of a (h, d, F)-tree exceeds a specified reference v.

Corollary 1: Any procedure which evaluates a (h, d)-game tree must examine at least 2d^(h/2) - 1 nodes.

Corollary 2: The expected number of terminal positions examined by any directional algorithm which evaluates a (h, d)-game tree with continuous terminal values must have a branching factor greater than or equal to P*/(1-P*).

The quantity P*/(1-P*) was shown by Baudet [4] to be a lower bound for the branching factor of the α-β procedure. Corollary 2 extends the bound to all directional game-evaluating algorithms.

Theorem 4: The expected number of terminal examinations performed by SCOUT in the evaluation of (h, d)-game trees with continuous terminal values has a branching factor of P*/(1-P*).
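As a quick numerical illustration of these quantities (not from the paper), one can solve x^d + x - 1 = 0 by bisection and tabulate the optimal branching factor P*/(1-P*) against the full width d:

```python
# Solve x^d + x - 1 = 0 on (0, 1) by bisection and print P*/(1-P*).

def p_star(d, tol=1e-12):
    lo, hi = 0.0, 1.0                  # f(0) = -1 < 0 < f(1) = 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** d + mid - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for d in (2, 4, 8, 16, 32):
    p = p_star(d)
    print(f"d={d:2d}  P*={p:.4f}  P*/(1-P*)={p / (1 - p):7.3f}")
```

For d = 2, for example, P* is about 0.618 and the branching factor about 1.618, well below the full width of 2.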
Theorem 5: The expected number of terminal examinations performed by SCOUT in evaluating a (h, d, F)-game with discrete terminal values has a branching factor d^(1/2), with exceptions only when one of the discrete values, v*, satisfies F(v*) = 1-P*.

Corollary 3: For games with discrete terminal values satisfying the conditions of Theorem 5, the SCOUT procedure is asymptotically optimal over all evaluation algorithms.

The improvement in efficiency due to the discrete nature of the terminal values manifests itself only when the search depth h is larger than log M / log[d(1-P*)/P*], where M is the quantization density in the neighborhood of V0 = v*.

The branching factor of α-β is less tractable than that of SCOUT. At the time this paper was first written, the tightest bounds on r_αβ were those delineated by Baudet [4], giving the lower bound r_αβ >= P*/(1-P*) (a special case of Corollary 2) and an upper bound which is about 20 percent higher over the range 2 <= d <= 32. Thus, SCOUT was the first algorithm known to achieve the bound P*/(1-P*), and we were questioning whether α-β would enjoy a comparable asymptotic performance. Moreover, it can be shown that neither SCOUT nor α-β dominates the other on a node-by-node basis; i.e., nodes examined by SCOUT may be skipped by α-β and vice versa [1].

The uncertainty regarding the branching factor of the α-β procedure has recently been resolved [5]. Evidently, α-β and SCOUT are asymptotically equivalent; r_αβ equals P*/(1-P*) for continuous-valued trees and d^(1/2) for games with discrete values.

For low values of h the branching factor is no longer an adequate criterion of efficiency, and the comparison between SCOUT and α-β must be done empirically. The following table represents the number of node inspections spent by both algorithms on the game of Kalah (1-in-a-hole version) [2]:

    Search |        Random Ordering        |       Dynamic Ordering
    Depth  |  SCOUT   α-β   % Improvement  |  SCOUT   α-β   % Improvement
    -------+-------------------------------+------------------------------
       2   |     82     70      -17.0      |     39     37       -5.4
       3   |    394    380       -3.7      |     62     61       -1.6
       4   |   1173   1322      +11.3      |     91     96       +5.2
       5   |   2514   4198      +40.1      |    279    336      +17.0
       6   |   5111   6944      +26.4      |    371    440      +15.7

It appears that as the search depth increases, SCOUT offers some advantage over α-β. Experiments with higher numbers of stones in each hole indicate that this advantage may deteriorate for large d. We suppose, therefore, that SCOUT may be found useful in searching games with high h/d ratios.

REFERENCES

[1] Pearl, J. "Asymptotic Properties of Minimax Trees and Game-Searching Procedures." UCLA-ENG-CSL-7981, University of California, Los Angeles, March 1980; to be published in Artificial Intelligence.

[2] Noe, T. "A Comparison of the Alpha-Beta and SCOUT Algorithms Using the Game of Kalah." UCLA-ENG-CSL-8017, University of California, Los Angeles, April 1980.

[3] Stockman, G. "A Minimax Algorithm Better Than Alpha-Beta?" Artificial Intelligence 12, 1979, 179-196.

[4] Baudet, G. M. "On the Branching Factor of the Alpha-Beta Pruning Algorithm." Artificial Intelligence 10, 1978, 173-199.

[5] Pearl, J. "The Solution for the Branching Factor of the Alpha-Beta Pruning Algorithm." UCLA-ENG-CSL-8019, University of California, Los Angeles, May 1980.
 | 
	1980 
 | 
	76 
 | 
					
74 
							 | 
Problem Solving in Frame-Structured Systems Using Interactive Dialog

Harry C. Reinstein
IBM Palo Alto Scientific Center
1530 Page Mill Road
Palo Alto, Ca. 94304

ABSTRACT

This paper provides an overview of the process by which problem solving in a particular frame-like knowledge-based system is accomplished. The inter-relationship between specialization traversal and entity processing is addressed and the specific role of the user interaction is described.

I INTRODUCTION

Semantic networks [1] and frame-like systems have emerged as powerful tools in a variety of problem domains [2,6]. In many of these systems an initial knowledge base is used to drive an interactive dialog session, the goal of which is the instantiation of the particular knowledge base elements which represent a solution to the problem being addressed. In a system developed at the IBM Scientific Center in Palo Alto [3,4], a dialog is generated from a KRL-based [5] semantic network for the purpose of generating a well-formed definition of a medical sensor-based application program. It is intended that the user of the system be conversant with the problem to be solved by the application but not that they be a computer programmer. The overall logic of this process is the subject of this paper.

II THE DIALOG LOGIC

The ultimate goal of the problem-solving dialog session is the complete instantiation of all entities (knowledge units) relevant to the problem solution. To do this the system must be able to create new work contexts (in our case entities) from existing ones and be able to traverse the specialization hierarchies rooted at these entities to accomplish complete instantiation. The logic governing the interrelationships between these two tasks, and the methods of user interaction, tend to characterize frame-based dialog systems.

One could, for example, choose to pursue a path of 'least commitment' by processing all relevant references at their highest levels in the specialization hierarchies before attempting deeper specialization traversal [6,7]. This approach seems well-suited to problem domains where the solutions are highly dependent on the interaction of constraints between the processed entities. In the case of our application development system it was felt that the solution process would be enhanced if individual entities were completely specialized as they were encountered, with outside references posted as pending work contexts for subsequent processing. This 'greatest commitment' approach seems well-suited to semantic networks in which specialization traversal in one hierarchy provides increasingly more specialized references to other hierarchies, and where the constraints between hierarchies are not potentially inconsistent.
An example of this can be seen in the relationship between our SENSOR hierarchy, which provides descriptive knowledge about the analog, digital, and keyboard-entry sensors available in the laboratory, and the DATA hierarchy, which describes the kinds of data which can be processed. In these interdependent hierarchies, one finds that BLOOD PRESSURE is measurable only by a subset of PRESSURE SENSORS, and that ARTERIAL BLOOD PRESSURE (a specialization of BLOOD PRESSURE) makes even further specializations on that set. Traversal of either hierarchy will implicitly specialize the related hierarchy. Downward traversal of specialization trees is the main driving force of the dialog system.

III MECHANICS OF SPECIALIZATION TRAVERSAL

Entities intended for inclusion in the problem solution are dynamically replicated and inserted into the knowledge structure as an immediate specialization of the entity from which they are copied. As more becomes known about them, these new entities are moved down the specialization hierarchy, always appearing as a specialization of the most constrained model available in the initial knowledge base. These dynamic entities have exactly the same representation as the initial entities and differ from them only in that their constraints can be overwritten in the process of instantiation. If, for example, one of the attributes of a DEVICE is its user-supplied name, then the value obtained for that name during the dialog would be placed in the dynamic entity while the corresponding entity/attribute in the initial knowledge base is only constrained to be a name.

The mechanism for migrating a dynamic entity down the associated specialization hierarchy may require user interaction. This interaction is accomplished using a video character display with full screen data entry facilities, a light pen, and program function keys. It has been our experience that non-computer-trained users are very sensitive to the level of human factors provided and it is well worth any effort one can make to facilitate interaction. First the user is prompted to supply values for attributes which have been declared in the knowledge base to be user-sourced. (This is equivalent to the 'lab data' assertion in MYCIN [8].) Having obtained these, a pattern-matching search is performed to see if specialization is possible. If not, the next step is to attempt specialization by allowing the user to choose from a list of the names of the immediate descendants at the current level in the hierarchy. If the user is unable to select the specialization by name, he or she is interrogated for selected attribute values which, if known, would determine a specialization path. This process continues until a terminus in the hierarchy is reached.
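As an illustration of this traversal loop, the following is a minimal sketch under assumed data structures; Entity, ask_user, and match_children are hypothetical names, and the paper describes only the behavior, not this code:

```python
# A sketch of the specialization-traversal dialog of Section III.

class Entity:
    def __init__(self, name, constraints, children=()):
        self.name = name
        self.constraints = dict(constraints)   # attribute -> required value
        self.children = list(children)

def ask_user(prompt):
    return input(prompt + " ")

def match_children(entity, values):
    """Pattern-matching step: children whose constraints agree with known values."""
    return [c for c in entity.children
            if all(values.get(a) in (None, v) for a, v in c.constraints.items())]

def specialize(entity, user_sourced):
    """Migrate a dynamic copy of `entity` down to a terminus of the hierarchy."""
    values = {a: ask_user(f"Value for {a}?") for a in user_sourced}      # step 1
    while entity.children:
        matches = match_children(entity, values)                         # step 2
        if len(matches) == 1:
            entity = matches[0]
            continue
        names = [c.name for c in entity.children]                        # step 3
        choice = ask_user(f"Which of {names}? (blank if unsure)")
        if choice in names:
            entity = next(c for c in entity.children if c.name == choice)
            continue
        unknown = {a for c in entity.children for a in c.constraints} - values.keys()
        if not unknown:
            break                                  # cannot discriminate further
        attr = sorted(unknown)[0]                  # step 4: interrogate attributes
        values[attr] = ask_user(f"Value for {attr}?")
    return entity, values          # terminus reached: fully specialized entity
```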
IV PROCESSING ENTITY REFERENCES

During the traversal process any references to other entities must be resolved, and these references generate additional work contexts for the system. It is particularly important that the resolution process be able to determine if the reference should resolve to an already existing dynamic entity or if it should resolve to an entity in the initial knowledge base. Some considerations relevant to this problem are discussed below.

When a reference resolves to a single entity, one of three situations prevails: 1) the reference is to an exactly matching dynamic entity, 2) the reference is to an existing dynamic entity which is less constrained than desired, or 3) the reference is to an entity in the initial knowledge base. In the first of these cases no further processing is required. In the second case, the more constrained form of the attribute is forced into the dynamic entity and a search is performed to see if this new form permits further migration down the specialization hierarchy. In this way values of attributes obtained from specialization down one hierarchy can implicitly cause specialization activity in related hierarchies. In the third case a new dynamic entity would be created.
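A compact sketch of the three resolution cases, assuming entities are modeled as dictionaries of attribute constraints; the names below are hypothetical, not the system's.

```python
# Hypothetical rendering of the three-way reference resolution described above.
def resolve_reference(ref, dynamic_entities, initial_kb_model):
    # Case 1: the reference exactly matches an existing dynamic entity.
    for e in dynamic_entities:
        if e["constraints"] == ref:
            return e
    # Case 2: an existing dynamic entity is less constrained than desired:
    # force the more constrained form in; a further migration search would follow.
    for e in dynamic_entities:
        if all(e["constraints"].get(k, v) == v for k, v in ref.items()):
            e["constraints"].update(ref)
            return e
    # Case 3: the reference resolves to the initial knowledge base:
    # replicate the most constrained initial model as a new dynamic entity.
    new_entity = {"constraints": dict(initial_kb_model, **ref), "dynamic": True}
    dynamic_entities.append(new_entity)
    return new_entity
```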
V SUMMARY

Processing entities to their most specialized form is a valid driving function in some knowledge bases, and generating user interaction specifically for this traversal can be a sufficient involvement for the user in the problem solving process. Representing the results of the problem solving session in the same form as, and in direct association with, the initial knowledge base has many positive features. Included among these is the ability to use a single search/resolution mechanism to select from either set of entities when building the problem solution. In general, frame-structured knowledge bases in conjunction with user interaction can provide a powerful problem solving facility.

REFERENCES

[1] Fikes, R. and Hendrix, G. "A Network-Based Representation and Its Natural Deduction System" In Proc. IJCAI-77, Cambridge, Massachusetts, August 1977, pp. 235-245.

[2] Waterman, D.A. and Hayes-Roth, F. (Eds.) Pattern-Directed Inference Systems. New York: Academic Press, 1978.

[3] Hollander, C.R. and Reinstein, H.C. "A Knowledge-based Application Definition System" In Proc. IJCAI-79, Tokyo, Japan, August 1979, pp. 397-399.

[4] Reinstein, H.C. and Hollander, C.R. "A Knowledge-based Approach to Application Development for Non-programmers", Report G320-3390, IBM Scientific Center, Palo Alto, California, July 1979.

[5] Bobrow, D. and Winograd, T. "An Overview of KRL, a Knowledge Representation Language." Cognitive Science 1:1 (1977), 3-46.

[6] Martin, N. et al. "Knowledge Management for Experiment Planning in Molecular Genetics" In Proc. IJCAI-77, Cambridge, Massachusetts, August 1977, pp. 882-887.

[7] Stefik, M.J. Planning With Constraints. Ph.D. Thesis, Stanford University, 1980 (available from Computer Science Dept., Stanford University, Report STAN-CS-80-784).

[8] Shortliffe, E.H. MYCIN: Computer-based Medical Consultations. New York: American Elsevier, 1976.
 | 
	1980 
 | 
	77 
 | 
					
75 
							 | 
REPRESENTING KNOWLEDGE IN AN INTERACTIVE PLANNER

Ann E. Robinson and David E. Wilkins
Artificial Intelligence Center
SRI International
Menlo Park, California 94025

ABSTRACT

This note discusses the representation for actions and plans being developed as part of the current planning research at SRI. Described is a method for uniformly representing actions that can take place both in the domain and during planning. The representation accommodates descriptions of abstract (hypothetical) objects.

I. INTRODUCTION

A principal goal of current planning and plan-execution research at SRI is development of a planning system that interacts with a person, allowing that person to: (1) explore alternative plans for performing some activity, (2) monitor the execution of a plan that has been produced, and (3) modify the plan as needed during execution. Described here is the knowledge representation being developed.

Our research builds directly on previous planning research and on research in representing the domain knowledge necessary for participating in natural-language dialogs about tasks. In particular, some of our representation ideas are based on the process model formalism described in [2] and [3]. The basic approach to planning is to work within the hierarchical planning paradigm, representing plans in procedural networks, as has been done in NOAH [4] and other systems.

Unlike its predecessors, our new system is being designed to allow interaction with users throughout the planning and plan-execution processes. The user will be able to watch and, when desired, guide and/or control the planning process. During execution of a plan, some person or computer system monitoring the execution will be able to specify what actions have been performed and what changes have occurred in the world being modeled. On the basis of this, the plan can be interactively updated to accommodate unanticipated occurrences. Planning and plan-execution can be intermingled by producing a plan for part of an activity and then executing some or all of that plan before working out remaining details.

We are extending planning research in several major directions. One of the key directions, the one discussed here, is a method for representing actions that can take place both in the domain and during planning. Action descriptions (often referred to as operators), procedural networks, and knowledge about domain objects and their interrelationships are represented in the same formalism -- a hierarchy of nodes with attributes.

+ The research reported here is supported by Air Force Office of Scientific Research Contract F49620-79-C-0188 and by Office of Naval Research Contract N00014-80-C-0300.
This uniform representation provides the ability to encode partial descriptions of unspecified objects as well as objects in the domain model. Thus, operator descriptions referring to abstract (unbound) objects can be represented in the same formalism as procedural network nodes referring to specific objects in the domain model. (Partial descriptions of unspecified objects will be described here as constraints on the possible values of a variable representing the object.)

Operators can be encoded at several levels of abstraction. Each one contains information for planning at the next level of detail. We have already encoded many domain operators for a construction task; planning operators will be encoded shortly. The domain operators provide the planning system with information about producing a plan in the domain. The planning operators provide the planning system with information so it can reason about its own planning process (meta-planning). They also provide a major part of the interface between the planning system and the user, who will be able to direct the planning process via the planning operators.

The uniformity of representation for domain knowledge, specific plans of action, and all operators will facilitate both the user's ability to interact with and control the planning system, and the system's ability to incorporate (learn) new operators from plans it has already produced. We will describe the representation in more detail below.

II. THE FORMALISM

The formalism for representing knowledge about actions, plans, and domain objects consists of typed nodes linked in a hierarchy. Each node can have attributes associated with it. There are four node types for representing objects: CLASS, INSTANCE, INDEFINITE, and DESCRIPTION. These will not be discussed in more detail here since they are similar to those occurring in representation formalisms such as KRL, FRL, and UNITS [5].

The node types for representing actions can be grouped into four categories:

OPERATOR, for encoding operators;
PNET, for representing specific actions (nodes in the procedural network);
PLOT, for describing how to expand a given OPERATOR, i.e., a description of an action in greater detail;
PNET.ACTION, for encoding plan steps (procedural network nodes) that have been 'executed' and thus represent actions assumed to have occurred in the world being modeled.

Nodes can have lists of attributes and can be connected into a hierarchy through CLASS and SUBCLASS links. Attributes of nodes for representing actions include the resources and arguments of the action (i.e., the objects that participate in the action), the action's goal, the action's effects on the domain when it is performed, and the action's preconditions.
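As a rough illustration of this formalism, the Python sketch below models typed nodes with attribute lists and a CLASS/SUBCLASS link; all field names are assumptions made for the example, not the system's actual encoding.

```python
# Hypothetical rendering of typed nodes linked into a hierarchy.
from dataclasses import dataclass, field
from typing import Optional

NODE_TYPES = {"CLASS", "INSTANCE", "INDEFINITE", "DESCRIPTION",   # object nodes
              "OPERATOR", "PNET", "PLOT", "PNET.ACTION"}          # action nodes

@dataclass
class Node:
    name: str
    node_type: str
    superclass: Optional["Node"] = None            # CLASS/SUBCLASS link
    attributes: dict = field(default_factory=dict)

# An action-oriented node carries the attributes named in the text:
fix_meal = Node("FIX.MEAL", "OPERATOR", attributes={
    "resources": ["meat1", "veg1"],   # objects participating in the action
    "arguments": [],
    "goal": None,
    "effects": [],
    "preconditions": [],
    "plot": [],                       # PLOT nodes describing how to expand it
})
```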
OPERATOR nodes have a plot attribute which specifies PLOT nodes for carrying out the operator.

The PLOT of an operator can be described not only in terms of GOALs to be achieved, but also in terms of PROCESSes to be invoked. (Previous systems would represent a PROCESS as a goal with only a single choice for an action to perform.) The ability to describe operators in terms of both GOALs and PROCESSes will help simplify encoding of operators and will allow the planning system to reason about alternative action sequences more efficiently.

Figure 1 shows a sample operator and a PNET it might produce. The figure illustrates the uniformity across different types of nodes in our formalism. The nodes are expressed in the same formalism and, for the most part, have the same attributes (e.g., resources, shared-resources, arguments, preconditions, purpose) with similar values for these attributes. "Similar values" means that the values refer to the same types of objects -- often the value of an attribute for some node will be more constrained than the value of the same attribute in a corresponding node of a different category. The next two paragraphs illustrate this in detail, after which we describe two instances where the uniformity of the representation is advantageous.

Attributes in OPERATOR and PLOT nodes generally refer to variables rather than specific objects, since these are uninstantiated operators that may be instantiated into PNET nodes in different ways during planning. For example, in Figure 1 resource variables meat1 and veg1 in the operator FIX.MEAL refer to objects of the meat and vegetable class, respectively. In the expansion of FIX.MEAL, meat1 has been constrained to be a fish (denoted by calling it "fish1") since it was so constrained in the node being expanded.

[Figure 1: the operator FIX.MEAL with resources meat1 and veg1; its plot SPLITs into the goals (PREPARED meat1) and (PREPARED veg1), followed by the process SERVE and a JOIN with resources meat1 and veg1; a PNET node representing the use of FIX.MEAL in a plan, with resources fish1 and veg1; and the expansion of that PNET node at the next level using FIX.MEAL.]

In our formalism, such variables are described by INDEFINITE nodes with constraints on their possible values. For PNET nodes, attributes frequently refer both to variables (which will often be more constrained in this case) and to completely specified objects. For PNET.ACTION nodes, attributes generally refer to specific objects in the domain model. The system's ability to use INDEFINITE nodes to partially describe objects is important for representing objects with varying degrees of abstractness in the same formalism.
Few previous planning systems have used this approach (e.g., NOAH cannot partially describe objects and has different formalisms for describing operators and procedural nets). Stefik's [5] system does allow abstract descriptions and constraints on partially described arguments, but arguments are required to be fully instantiated before the constraints can be evaluated. (See also Hayes-Roth et al. [1].)

The uniformity of representation between PLOT and PNET nodes permits the description of operators as what amounts to generalized fragments of procedural network. This turns problem solving into a process of incremental instantiation. During planning, PNET nodes are incrementally expanded to a greater level of detail by selecting an appropriate operator, determining which of its variables match those in the node being expanded, creating new variable records for those variables not matched, adding any new constraints to these variables, and following the operator's plot description to create new procedural network nodes. The uniformity of representation facilitates this production of PNET nodes from PLOT nodes.
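A minimal sketch of that incremental-instantiation step, under assumed dictionary layouts for operators and PNET nodes (nothing below is the planner's real code):

```python
# Hypothetical sketch of expanding a PNET node via an operator's plot.
def expand(pnet_node, operator, bindings):
    """bindings: operator variable -> existing PNET variable record (partial)."""
    env = {}
    for var in operator["variables"]:
        rec = bindings.get(var)
        if rec is None:                                   # new variable record
            rec = {"name": var, "constraints": set()}
        rec["constraints"] |= operator["constraints"].get(var, set())
        env[var] = rec
    # follow the plot description to create new procedural-network nodes
    return [{"type": "PNET", "parent": pnet_node, "action": step["action"],
             "args": [env[v] for v in step["vars"]]}
            for step in operator["plot"]]

# E.g., expanding a node that uses FIX.MEAL with meat1 already constrained:
# expand(node, fix_meal_op, {"meat1": {"name": "fish1", "constraints": {"fish"}}})
```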
Once a plan has been successfully constructed, it may be desirable to save it for subsequent planning activities, incorporating it into the system as a new operator. We expect to develop algorithms for doing this, i.e., producing an operator (with its associated PLOT nodes) from PNET fragments. For each control node and each action-oriented node in a procedural network, a corresponding PLOT node can be easily created for the operator because of the uniformity of representation. The major task remaining in producing an operator would be generalizing the constraints on values for variables in the procedural network nodes into looser constraints in the new operator.

An additional uniformity between descriptions of specific actions and operators facilitates the matching of an operator to the node it is to expand. Thus PROCESS and GOAL nodes in a procedural network or plot will have attributes similar to those of the OPERATOR node which represents a more detailed description of their corresponding action. The similarity of representation of all action-oriented nodes facilitates interaction with the user, who can talk in the same way about operators, steps in operator plots, and nodes in the procedural network. Similarly, description of actions is facilitated by this uniformity.

Organizing the representation as nodes with attributes is, of course, not new and is not essential. The representation could also be expressed in a formal logic (a translation to logic would be fairly straightforward). We have chosen to represent things as nodes with attributes because this blends well with our plans for interaction with the user.

III. PARTIAL DESCRIPTION USING CONSTRAINTS

Stefik's system [5], one of the few existing planning systems with the ability to construct partial descriptions of an object without identifying the object, contains a constraint-posting mechanism that allows partial descriptions similar to those described above. Our system also provides for partial description using constraints, and extends Stefik's approach in two ways.

Unlike Stefik's system, our system permits evaluation of constraints on partially described objects. Both CLASSes and INSTANCEs can have constraints. For example, a set can be created which can be constrained to be only bolts, then to be longer than one inch and shorter than two inches, and then to have hex heads.

Our system also provides for partial descriptions that vary with the context, thus permitting consideration of alternative plans simultaneously. A context mechanism has been developed to allow for alternative constraints on variables relative to different plan steps. The constraints on a variable's value, as well as the binding of a variable to a particular instance (possibly determined during the solution of a general constraint-satisfaction problem), can only be retrieved relative to a particular context. This permits the user to easily shift focus back and forth between alternatives. Hayes-Roth et al. [1] describe the use of a blackboard model for allowing shifting of focus between alternatives. Such focus shifting cannot be done in systems using a backtracking algorithm, where descriptions built up during expansion of one alternative are removed during the backtracking process before another alternative is investigated. Most other planning systems either do not allow alternatives (e.g., NOAH [4]) or use a backtracking algorithm (e.g., Stefik [5], Tate [6]).
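The context mechanism can be sketched as follows; the Context and Variable classes and their methods are invented for illustration, not taken from the system:

```python
# Hypothetical sketch of context-relative constraint posting and retrieval.
class Context:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

class Variable:
    def __init__(self, name):
        self.name = name
        self._constraints = {}                 # context name -> constraint list
        self._binding = {}                     # context name -> bound instance

    def post(self, ctx, constraint):
        self._constraints.setdefault(ctx.name, []).append(constraint)

    def bind(self, ctx, instance):
        self._binding[ctx.name] = instance

    def constraints(self, ctx):
        out = []                               # a context inherits its parents'
        while ctx is not None:
            out.extend(self._constraints.get(ctx.name, []))
            ctx = ctx.parent
        return out

# Example: a set of bolts constrained differently in two alternative plan steps.
bolts = Variable("bolts1")
root = Context("root")
alt1, alt2 = Context("alt1", root), Context("alt2", root)
bolts.post(root, "type = bolt")
bolts.post(alt1, "length > 1in")
bolts.post(alt2, "head = hex")
assert bolts.constraints(alt1) == ["length > 1in", "type = bolt"]
```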
IV. CONCLUSION

We have described some properties of the knowledge representation developed for our new planning system. Most of the planner is still under development (e.g., critics, reasoning about resources, and search control have yet to be implemented). The central idea discussed here is the uniform representation of the domain operators, planning operators, procedural networks, and knowledge about domain objects. Ways to exploit this uniformity are pointed to. These include a rich interaction with the user, meta-planning, and having the system learn new operators from plans it has constructed.

REFERENCES

1. Hayes-Roth, B., F. Hayes-Roth, S. Rosenschein, S. Cammarata, "Modeling Planning as an Incremental, Opportunistic Process", In Proc. IJCAI-79, Tokyo, Japan, August 1979, pp. 375-383.

3. Robinson, A.E., D. Appelt, B. Grosz, G. Hendrix, and J. Robinson, "Interpreting Natural-Language Utterances in Dialogs About Tasks", Technical Note 210, SRI International, Menlo Park, California, March 1980.

4. Sacerdoti, E., A Structure for Plans and Behavior. Elsevier North-Holland, New York, 1977.

5. Stefik, M., Planning With Constraints. Report STAN-CS-80-784, Computer Science Department, Stanford University, Ph.D. Dissertation, 1980.

6. Tate, A., "Generating Project Networks", In Proc. IJCAI-77, Cambridge, Mass., August 1977.
 | 
	1980 
 | 
	78 
 | 
					
76 
							 | 
INFERENCE WITH RECURSIVE RULES

Stuart C. Shapiro and Donald P. McKay
Department of Computer Science
State University of New York at Buffalo
Amherst, New York 14226

ABSTRACT

Recursive rules, such as "Your parents' ancestors are your ancestors", although very useful for theorem proving, natural language understanding, question-answering and information retrieval systems, present problems for many such systems, either causing infinite loops or requiring that arbitrarily many copies of them be made. We have written an inference system that can use recursive rules without either of these problems. The solution appeared automatically from a technique designed to avoid redundant work. A recursive rule causes a cycle to be built in an AND/OR graph of active processes, each pass of data through the cycle resulting in another answer. Cycling stops as soon as either the desired answer is produced, no more answers can be produced, or resource bounds are exceeded.

Introduction

Recursive rules, such as "your parents' ancestors are your ancestors", occur naturally in inference systems used for theorem proving, question answering, natural language understanding, and information retrieval. Transitive relations, e.g. ∀(x,y,z)[ANCESTOR(x,y) & ANCESTOR(y,z) → ANCESTOR(x,z)], inheritance rules, e.g. ∀(x,y,p)[ISA(x,y) & HAS(y,p) → HAS(x,p)], circular definitions and equivalences are all occurrences of recursive rules. Yet recursive rules present problems for system implementors. Inference systems which use a "naive chaining" algorithm can go into an infinite loop, like a left-to-right top-down parser given a left recursive grammar [4]. Some systems will fail to use a recursive rule more than once, i.e. are incomplete [6,12]. Other systems build tree-like data structures (connection graphs) containing branches the length of which depends on the number of times the recursive rule is to be applied [2,13]. Since some of these build the structure before using it, the correct length of these branches is problematic. Some systems eliminate recursive rules by deriving and adding to the data base all implications of the recursive rules in a special pass before normal inference is done [9].

(This work was supported in part by the National Science Foundation under Grant No. MCS78-02274.)

The inference system of SNePS [11] was designed to use rules stored in a fully indexed data base. When a question is asked, the system retrieves relevant rules and builds a data structure of processes which attempt to derive the answer from the rules and other information stored in the data base. Since we are using a semantic network to represent all declarative information available in the system, we do not make a distinction between "extensional" and "intensional" data bases, i.e. non-rules and rules are stored in the same data base. More significantly, we do not distinguish "base" from "defined" relations. Specific instances of ANCESTOR may be stored as well as a rule defining ANCESTOR. This point of view contrasts with the basic assumption of several data base question answering systems [3,8,9].
In addition, the inference system described here does not restrict the left hand side of rules to contain only one literal which is a derived relation [3], does not need to recognize cycles in a graph [3,8], and does not require that there be at least one exit from a cycle [8].

The structure of processes may be viewed as an AND/OR problem reduction graph in which the process working on the original question is the root, and rules are problem reduction operators. Partly influenced by Kaplan's producer-consumer model [5], we designed the system so that if a process working on some problem is about to create a process for a subproblem, and there is another process already working on that subproblem, the parent process can make use of the extant process and so avoid solving the same problem again. The method we employ handles recursive rules with no additional mechanism. The structure of processes may be viewed as an active connection graph, but, as will be seen below, the size of the resulting structure need not depend on the number of times a recursive rule will be used.

This paper describes how our system handles recursive rules. Aspects of the system not directly relevant to this issue will be abbreviated or omitted. In particular, details of the match routine which retrieves formulas unifiable with a given formula will not be discussed (but see [10]).

The Inference System

The SNePS inference system builds a graph of processes [7,11] to answer a question (derive instances of a given formula) based on a data base of assertions (ground atomic formulas) and rules (non-atomic formulas). Each process has a set of registers which contain data, and each process may send messages to other processes. Since, in this system, the messages are all answers to some question, we will call a process P2 a boss of a process P1 if P1 sends messages to P2. Some processes, called data collectors, are distinguished by two features: 1) they can have more than one boss; 2) they store all messages they have sent to their bosses. The stored messages are used for two purposes: a) it allows the data collector to avoid sending the same message twice; b) it allows the data collector to be given a new boss, which can immediately be brought up to date by being given all the messages already sent to the other bosses.

Four types of processes are important to the discussion of recursive rules. They are called INFER, CHAIN, SWITCH and FILTER. INFER and CHAIN are data collectors; SWITCH and FILTER are not.

Four Processes

An INFER process is created to derive instances of a formula, Q. It first matches Q against the data base to find all formulas unifiable with Q. The result of this match is a list of triples, <T,τ,σ>, where T is a retrieved formula called the target, and τ and σ are substitutions called target binding and source binding respectively. Essentially τ and σ are factored versions of the most general unifier (mgu) of Q and T. Pairs of the mgu whose variables are in Q appear in σ, while those whose variables are in T appear in τ. Any variable in term position is taken from T. Factoring the mgu obviates the need for renaming variables.
For example, if Q = P(x,a,y) and T = P(b,y,x), we would have σ = {b/x, x/y} and τ = {a/y, x/x} (the pair x/x is included to make our algorithms easier to describe). Note that Qσ = Tτ = P(b,a,x), the variables in the variable position of the substitution pairs of σ are all and only the variables in Q, the variables in the variable position of τ are all and only the variables in T, all terms of σ come from T, and the non-variables in τ come from Q.

For each match <T,τ,σ> that an INFER finds for Q, there are two possibilities we shall consider. First, T might be an assertion in the data base. In this case, σ is an answer (Qσ has been derived). If the INFER has already stored σ, it is ignored. Otherwise, σ is stored by the INFER and the pair <Q,σ> is sent to all the INFER's bosses. For σ to be a reasonable answer, it is crucial that all its variables occur in Q. The other case we shall consider is the one in which T is the consequent of some rule of the form A1 & ... & An → T. (Our system allows other forms of rules, but consideration of this one will suffice for explaining how we handle recursive rules.) In this case, the INFER creates two other processes, a SWITCH and a CHAIN, to derive instances of Tτ. The SWITCH is made the CHAIN's boss, and the INFER the SWITCH's boss. It may be the case that an already extant CHAIN may be used instead of a new one. This will be discussed below.

The SWITCH process has a register which is set to the source binding, σ. The answers it receives from the CHAIN are substitutions β, signifying that Tτβ has been derived. SWITCH sends to its boss the application σ\β, the substitution derived from σ by replacing each term t in σ by tβ. The effect of the SWITCH is to change the answer from the context of the variables of T to the context of the variables of Q. In our example, the CHAIN might send the answer β = {c/x}. SWITCH would then send σ\β = {b/x, x/y}\{c/x} = {b/x, c/y} to the INFER, indicating that Qσ\β = P(x,a,y){b/x, c/y} = P(b,a,c) has been derived. The importance of the factoring of the mgu of Q and T into the source binding σ and the target binding τ -- a separation which the SWITCH repairs -- is that the CHAIN can work on T in the context of its original variables and report to many bosses, each through its own SWITCH.

A CHAIN process is created to use a particular substitution instance, τ, of a particular formula, A1 & ... & Ak → T, to deduce instances of Tτ. Its answers, which will be sent to a SWITCH, will be substitutions β such that Tτβ has been deduced using the rule. For each Ai, 1 ≤ i ≤ k, the CHAIN tries to discover if Aiτ is deducible by creating an INFER process for it. However, an INFER process might already be working on Aiα. If α = τ, the already extant INFER is just what the CHAIN wants. It takes all the data the INFER has already collected, and adds itself to the INFER's bosses so that it will also get future answers. If α is more general than τ, the INFER will produce all the data the CHAIN wants, but unwanted data as well. In this case the CHAIN creates a FILTER process to stand between it and the INFER.
The FILTER stores a substitution consisting of those pairs of τ for which the term is a constant, and when it receives an answer substitution from the INFER, it passes it along to its CHAIN only if the stored substitution is a subset of the answer. For example, if τ were {a/x, y/z, b/w} and α were {u/x, v/z, v/w}, a FILTER would be created with a substitution of {a/x, b/w}, insuring that unwanted answers such as {c/x, d/z, b/w} produced by the more general INFER were filtered out. If α is not compatible with τ, or is less general than τ, a new INFER must be created. However, if α is less general than τ, the old INFER might already have collected answers that the new one can use. These are taken by the new INFER and sent to its bosses. Also, since the new INFER will produce all the additional answers that the old one would (plus others), the old INFER is eliminated and its bosses given to the new INFER with intervening FILTERs. The net result is that the same structure of processes is created regardless of whether the more general or less general question was asked first.

A CHAIN receives answers from INFERs (possibly filtered) in the form of pairs <Ai,βi> indicating that Aiβi, an instance of the antecedent Ai, has been deduced. Whenever the CHAIN collects a set of consistent substitutions {β1,...,βk}, one for each antecedent, it sends an answer to its bosses consisting of the combination of β1,...,βk (where the combination of β1 = {t11/v11,...,t1n1/v1n1}, ..., βk = {tk1/vk1,...,tknk/vknk} is the mgu of the expressions (v11,...,v1n1,...,vk1,...,vknk) and (t11,...,t1n1,...,tk1,...,tknk) [1, p.187]).
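The data-collector discipline that these processes follow can be illustrated in a few lines. The sketch below (a toy in Python, not SNePS code) shows the two properties that matter for the next section: an answer is never sent twice, and a boss added later is immediately brought up to date.

```python
# Toy illustration of data collectors; names and structure are ours.
class DataCollector:
    def __init__(self, name):
        self.name = name
        self.sent = []        # all answers ever sent (kept for new bosses)
        self.bosses = []

    def add_boss(self, boss):
        self.bosses.append(boss)
        for answer in self.sent:       # bring the new boss up to date
            boss.receive(answer, self)

    def receive(self, answer, source):
        self.emit(answer)              # trivial relay, enough for the demo

    def emit(self, answer):
        if answer in self.sent:        # never send the same answer twice:
            return                     # this is what terminates cycles
        self.sent.append(answer)
        for boss in self.bosses:
            boss.receive(answer, self)

# Two collectors reporting to each other form a cycle, as an INFER and a
# CHAIN do for a recursive rule; the duplicate check stops the loop.
infer, chain = DataCollector("INFER"), DataCollector("CHAIN")
infer.add_boss(chain)
chain.add_boss(infer)
infer.emit("ANCESTOR(William, John)")  # circulates around the cycle once, then stops
```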
Recursive Rules Cause Cycles

Just as a CHAIN can make use of already existing INFERs, an INFER can make use of already existing CHAINs, filtered if necessary. A recursive rule is a chain of the form A1 & ... & Ak → B1, B1 & ... & Bn → ... → C, with C unifiable with at least one of the antecedents, A1 say. When an INFER operates on A1, it will find that C matches A1, and it may find that it can use the CHAIN already created for C. Since this CHAIN is in fact the INFER's boss, this will result in a cycle of processes. The cycle will produce more answers as new data is passed around the cycle, but no infinite loop will be created since no data collector sends any answer more than once. (If an infinite set of Skolem constants is generated, the process will still terminate if the root goal had a finite number of desired answers specified [11, p.194].)

Figure 1 shows a structure of processes which we consider an active connection graph. It is built to derive instances of ANCESTOR(William,w) from the rules ∀(x,y)[PARENT(x,y) → ANCESTOR(x,y)] and ∀(x,y,z)[ANCESTOR(x,y) & PARENT(y,z) → ANCESTOR(x,z)]. The notation for the rule instances is similar to that presented in [3]. Note particularly the SWITCH in the cycle which allows newly derived instances of the goal ANCESTOR(William,w) to be treated as additional instances of the antecedent ANCESTOR(William,y). A similar structure would be built regardless of the order of asserting the two rules, the order of antecedents in the two-antecedent rule, the order of execution of the processes, whether the query had zero, either one, or both variables ground, or if the two-antecedent rule used ANCESTOR for both antecedents.

[Figure 1: an active connection graph deriving instances of ANCESTOR(William,z), showing the rule instances with antecedent nodes PARENT(William,y) and ANCESTOR(William,y).]

In the SNePS inference system, recursive rules cause cycles to be built in a graph structure of processes. The key features of the inference system which allow recursive rules to be handled are: 1) the processes that produce derivations (INFER and CHAIN) are data collectors; 2) data collectors never send the same answer more than once; 3) a data collector may report to more than one boss; 4) a new boss may be assigned to a data collector at any time -- it will immediately be given all previously collected data; 5) variable contexts are localized, SWITCH changing contexts dynamically as data flows around the graph; 6) FILTERs allow more general producers to be used by less general consumers.

References

1. Chang, C.-L., and Lee, R.C.-T., Symbolic Logic and Mechanical Theorem Proving, Academic Press, New York, 1973.
2. Chang, C.-L., and Slagle, J.R., Using rewriting rules for connection graphs to prove theorems, Artificial Intelligence 12, 2 (August 1979), 159-180.
3. Chang, C.-L., On evaluation of queries containing derived relations in a relational data base. In Formal Bases for Data Bases, Gallaire, H., Minker, J. and Nicolas, J. (eds.), Plenum, New York, 1980.
4. Fikes, R.E., and Hendrix, G.G., The deduction component. In Understanding Spoken Language, Walker, D.E., ed., Elsevier North-Holland, 1978, 355-374.
5. Kaplan, R.M., A multi-processing approach to natural language understanding. Proc. National Computer Conference, AFIPS Press, Montvale, NJ, 1973, 435-440.
6. Klahr, P., Planning techniques for rule selection in deductive question-answering. In Pattern-Directed Inference Systems, Waterman, D.A., and Hayes-Roth, F., eds., Academic Press, New York, 1978, 223-239.
7. McKay, D.P., and Shapiro, S.C., MULTI -- a LISP based multiprocessing system. Proc. 1980 LISP Conference, Stanford University, 1980.
8. Naqvi, S.A., and Henschen, L.J., Performing inferences over recursive data bases. Proc. First AAAI Conference, Stanford University, 1980.
9. Reiter, R., On structuring a first order data base. Proc. Second National Conference, Canadian Society for Computational Studies of Intelligence, 1978, 50-99.
10. Shapiro, S.C., Representing and locating deduction rules in a semantic network. Proc. Workshop on Pattern-Directed Inference Systems, SIGART Newsletter 63 (June 1977), 14-18.
11. Shapiro, S.C., The SNePS semantic network processing system. In Associative Networks: The Representation and Use of Knowledge by Computers, Findler, N.V., ed., Academic Press, New York, 1979, 179-203.
12. Shortliffe, E.H., Computer Based Medical Consultations: MYCIN, American Elsevier, New York, 1976.
13. Sickel, S., A search technique for clause interconnectivity graphs, IEEE Transactions on Computers C-25, 8 (August 1976), 823-835.
 | 
	1980 
 | 
	79 
 | 
					
77 
							 | 
ON PROVING LAWS OF THE ALGEBRA OF FP-SYSTEMS IN EDINBURGH LCF

Jacek Leszczylowski
Polish Academy of Sciences
Institute of Computer Science
P.O. Box 22, 00-901 Warszawa PKiN, POLAND

I INTRODUCTION

J. Backus, in CACM 21/8, defined a class of applicative programming systems called FP /functional programming/ systems, in which a user has: 1. objects built recursively from atoms, UU /an undefined element/ and objects by a strict /i.e. a UU-preserving/ "list" operator, 2. elementary functions over objects, 3. tools for building functions out of already defined functions.

One can think of machine support while working with FP systems and proving facts about FP systems as well as facts concerning the functions being defined. The choice of EDINBURGH LCF is rather natural because it is an interactive computer system /implemented in LISP/ for reasoning about functions /see [6]/. It consists of two parts. The first part is a family of calculi, each of which is characterized by four factors: 1. type operators /representing domains in the sense of Scott's theory; see [2]/, 2. constants /representing continuous functions/, 3. axioms, 4. inference rules. One of them, PPLAMBDA, is given as the "initial" calculus, and other calculi may be built by users as extensions of existing calculi. The second part is a high level programming language ML which is fully higher order and is strongly typed. Its polymorphic types make it as convenient as typeless languages.

This paper is a short report on the application of EDINBURGH LCF to proving the laws of the algebra of FP systems listed by Backus in [1]. Actually, we generalized FP-systems, and the laws are formulated in stronger form than was done by Backus. We briefly describe /sec. II/ the style of proving with the system, then /sec. III/ comment on the strategies used in the proofs, giving only their specifications. The summing-up remarks are given in sec. IV. A more detailed report on the project is given in [9].

II STYLE OF PROVING

As mentioned, there are inference rules associated with each of the calculi of the system; the inference rules of PPLAMBDA are primitive, and derived rules may be programmed by users. The inference rules are represented as ML-functions taking theorems /being a data type in ML/ as arguments and giving values which are theorems as well; an example is the computational induction rule INDUCT /it is the Scott induction rule; for more details see [2]/. We could prove our theorems by applying the inference rules in an appropriate order, but it is not a convenient style of proving.

We base our proofs on partial subgoaling methods, called tactics; these mean that, given a formula to be proved, we attempt to transform it into "simpler" formulae /which in turn have to be proved/ and the proof justifying the "transformation".
The system can support this kind of proving via predefined types: goal, tactic and proof, defined as follows:

goal   = form # simpset # form list
proof  = thm list -> thm
tactic = goal -> goal list # proof

The first element of the Cartesian product defining the type goal is for stating the formula which is going to be proved, the third one for listing assumptions; the second one is /from the user's point of view/ an abstract type consisting of simplification rules; these are /possibly conditional/ equivalences of terms to be used as left-to-right rewriting rules.

We shall explain now the use of the subgoaling methods. Let us define when a theorem A' |- f' /A' - hypotheses, f' - conclusion/ achieves a goal (f, ss, A). This is the case when, up to renaming of bound variables, f' is identical with f and each member of A' is either in A or is in the hypotheses of a member of the simplification set ss. Then, we say, a theorem list achieves a goal list if the first element of the theorem list achieves the first element of the goal list, etc. Thus, a tactic T will work "properly" if for any goal g and any goal list gl and any proof p such that

T(g) = gl, p

the following holds: if we have a theorem list thml which achieves the goal list gl, then p(thml) will achieve the goal g. An important special case is when gl is empty, for then we need only apply p to the empty theorem list - i.e. evaluate p(nil) - to obtain a theorem achieving our original goal g.

We shall use two of the standard tactics. The first is SIMPTAC; applied to any goal (w, ss, A), SIMPTAC produces a singleton list of goals

[(w', ss, A)]

and a proof p, where w' is the simplification of w by the rewriting rules in ss and p justifies all the simplifications made. In the case where w' is a tautology the goal list is null. The second one is CONDCASESTAC; for any goal (w, ss, wl) it finds a term t of the type tr /truth values domain/ which is free in w and occurs as the conditional expression, and then produces three subgoals with formula w as their first element and the simplification sets extended by the new assumptions /cases when t equals UU, true and false respectively/.

Now we introduce the last tool for designing proofs: tacticals - mechanisms for combining tactics to form larger ones. We shall use only one of the standard tacticals of the system, called THEN. For any tactics T1 and T2, the composed tactic T1 THEN T2 applies T1 to the goal and then applies T2 to all resulting subgoals produced by T1; the proof function returned is the composition of the proof functions produced by T1 and T2.
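For readers unfamiliar with ML, the following Python sketch mirrors the goal/proof/tactic types and the THEN tactical; the type names are direct transliterations of the definitions above, while everything else is an assumption of the sketch, not LCF's implementation.

```python
# Python transliteration (ours) of the LCF types and the THEN tactical.
from typing import Callable, List, Tuple

Goal = Tuple[str, list, list]          # (formula, simpset, assumption list)
Thm = str                              # stand-in for the LCF theorem type
Proof = Callable[[List[Thm]], Thm]     # thm list -> thm
Tactic = Callable[[Goal], Tuple[List[Goal], Proof]]

def THEN(t1: Tactic, t2: Tactic) -> Tactic:
    """Apply t1, then t2 to every resulting subgoal, composing the proofs."""
    def composed(g: Goal) -> Tuple[List[Goal], Proof]:
        subgoals, p1 = t1(g)
        expansions = [t2(sg) for sg in subgoals]          # t2 on each subgoal
        goal_list = [sg for gl, _ in expansions for sg in gl]

        def proof(thms: List[Thm]) -> Thm:
            # split the achieving theorems among the t2-proofs, then hand
            # the intermediate theorems to t1's proof function
            partial, i = [], 0
            for gl, p2 in expansions:
                partial.append(p2(thms[i:i + len(gl)]))
                i += len(gl)
            return p1(partial)

        return goal_list, proof
    return composed
```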
III COMMENTS ON THE PROOFS

There are three groups of proofs of the laws of the FP-system algebra listed by Backus in [1]. The first one is based on SIMPTAC; for example, to prove II.1 of [1] we used the tactic APTAC THEN SIMPTAC, where APTAC is one of the programmed tactics and is specified as follows:

APTAC ("u =< v", ss, wl) = ["!X. u X = v X", ss, wl], p

where:
! - universal quantification
=< stands for equality or inequality of terms
X - a new variable

The second group of proofs is based on CONDCASESTAC. The composed tactic used in the proofs was

APTAC THEN CONDCASESTAC THEN SIMPTAC,

for example in the proofs of the laws II.2 and II.3.1 presented in [1].

The third group of proofs involves the use of the computational induction rule INDUCT in an indirect way. The proofs are based on the programmed tactic INDTAC, which is an "inverse" of the structural induction rule over the type of lists, which in turn is derived from the computational induction rule INDUCT. The specification of INDTAC is the following:

where:
X - a new variable
ss' is ss extended by the assumption w simplified by ss
p uses the structural induction rule on lists derived from INDUCT
nil and cons are the constants over lists

Let us present one of the proofs using INDTAC. Suppose we want to prove

"ToALL(G o F) = (ToALL G) o (ToALL F)"

where:
o - the composition of functions
ToALL takes two arguments, H - a function - and a list, and produces a list of the results; the following axioms are satisfied:
"!H. ToALL H UU = UU"
"!H. ToALL H nil = nil"
"!H.!X.!L. ToALL H (cons X L) = cons (H X) (ToALL H L)"

From the shape of subgoals produced by INDTAC we know that we need to simplify the formulae by the above axioms as well as by the definition of the composition operator. Suppose we created the simplification set ss including the desired rewriting rules; now we can specify our goal in the following way:

g = "ToALL (G o F) = (ToALL G) o (ToALL F)", ss, nil

Let T = APTAC THEN INDTAC THEN SIMPTAC be the tactic we want to use to tackle our problem. If we apply our tactic T to our goal g we get a pair as result. Let us store the value in the variables p and gl; we can do it in ML in the following way:

gl,p := T g

It turns out that our tactic was fully successful and the system responds that gl is the empty list. Thus, see sec. II, we can apply p to the empty theorem list and the produced value is the theorem we wanted to prove. In general cases, when we are not so clever, the subgoal list is not empty and we have to apply another tactic to solve the subgoals.

IV FINAL REMARKS

As we mentioned in sec. I, the aim of the paper was to present an application of EDINBURGH LCF. But the aim of any application itself is to find general tactics which can be used in totally different examples. Why? Because EDINBURGH LCF is essentially a tool placed somewhere between a theorem prover and a proof checker; this is why we cannot rely on built-in strategies, as is the case with theorem provers, but we are interested in looking for general purpose ones which, when found, can be programmed in ML and can be used to tackle other goals. The tactics used to prove the laws of the FP-system algebra are of general use and were involved in proving properties of the functions defined in FP-systems; see [9]. The generalization of FP systems is briefly described in [12] and presented in more detailed version in [9]. Examples of more powerful and complex tactics can be found in [8].

Let us compare EDINBURGH LCF with other implemented systems. On one hand, the explicit presence of the logic of the system makes EDINBURGH LCF "human-oriented" and easy to extend.
On the other hand, for example, the Boyer-Moore theorem prover /see [3]/ is very efficient in its use of built-in strategies, but difficult to extend; by contrast, the need to conduct all inferences through the basic inference rules /as ML-procedures/, which appears necessary if we wish to allow users to extend the system reliably by programming, leads to some inefficiency in LCF. This we have found tolerable, and indeed it can be reduced significantly by direct implementation of ML /which at present is compiled into LISP/. Another way of making the system quick is by running it on multiprocessor machines, which is done for example at the Royal Technical University, Stockholm, Sweden. For a nice general comparison of these two systems see [5].

The EDINBURGH LCF style of proving, which consists in solving problems by means of programmed proof strategies, seems to be natural. It took the author 2 months to be able to work with the system. This style fits pretty well to doing large proofs with machine assistance. By this we mean neither that a large proof is submitted step by step and merely checked by the machine /see [7]/, nor that the system discovers the large proof by itself, but that the problem may be split into smaller parts, each of which is tackled semiautomatically by a subgoaling method. A nice example of such an application of EDINBURGH LCF is the compiler correctness proof presented in [4].

I wish to thank Avra Cohn, Mike Gordon, Robin Milner and Chris Wadsworth for their friendly help during my stay in Edinburgh, and especially Robin Milner for his support while preparing the draft version of the paper.

REFERENCES

[1] Backus, J., "Can programming be liberated from the von Neumann style? A functional programming style and its algebra of programs", Comm. ACM 21:8, 1978.
[2] Bird, R., Programs and Machines: An Introduction to the Theory of Computation, Wiley, 1976.
[3] Boyer, R.S. and Moore, J S., A Computational Logic, Academic Press, New York, 1979.
[4] Cohn, A., "High level proof in LCF", Proc. 4th Workshop on Automated Deduction, Austin, Texas, 1979.
[5] Cohn, A., "Remarks on Machine Proof", manuscript, 1980.
[6] Gordon, M., Milner, R. and Wadsworth, C., EDINBURGH LCF, Springer Verlag, 1979.
[7] van Benthem Jutting, L.S., "Checking Landau's 'Grundlagen' in the AUTOMATH system", Tech. Hogeschool, Eindhoven, The Netherlands, 1977.
[8] Leszczylowski, J., "An experiment with EDINBURGH LCF", Proc. CADE-5, Les Arcs, France, 1980.
[9] Leszczylowski, J., "Theory of FP systems in EDINBURGH LCF", Internal Report, Comp. Sci. Dept., Edinburgh University, Edinburgh, Scotland, 1980.
[10] Milner, R., "LCF: a way of doing proofs with a machine", Proc. MFCS-8 Symposium, Olomouc, Czechoslovakia, 1979.
[11] Milner, R., "Implementation and application of Scott's logic for computable functions", Proc. ACM Conference on Proving Assertions about Programs, SIGPLAN Notices, 1972.
[12] Leszczylowski, J., "EDINBURGH LCF supporting FP systems", Proc. Annual Conference of the Gesellschaft für Informatik, Universität des Saarlandes, 1980.
 | 
	1980 
 | 
	8 
 | 
					
78 
							 | 
Mapping Image Properties into Shape Constraints: Skewed Symmetry, Affine-Transformable Patterns, and the Shape-from-Texture Paradigm

John R. Kender and Takeo Kanade
Computer Science Department
Carnegie-Mellon University
Pittsburgh, Pa. 15213

1. Introduction

Certain image properties, such as parallelisms, symmetries, and repeated patterns, provide cues for perceiving the 3-D shape from a 2-D picture. This paper demonstrates how we can map those image properties into 3-D shape constraints by associating appropriate assumptions with them and by using appropriate computational and representational tools.

We begin with the exploration of how one specific image property, "skewed symmetry", can be defined and formulated to serve as a cue to the determination of surface orientations. Then we will discuss the issue from two new, broader viewpoints. One is the class of Affine-transformable patterns. It has various interesting properties, and includes skewed symmetry as a special case. The other is the computational paradigm of shape-from-texture. Skewed symmetry is derived in a second, independent way, as an instance of the application of the paradigm.

This paper further claims that the ideas and techniques presented here are applicable to many other properties, under the general framework of the shape-from-texture paradigm, with the underlying meta-heuristic of non-accidental image properties.

2. Skewed Symmetry

In this section we assume the standard orthographic projections from scene to image, and a knowledge of the gradient space (see [4]).

Symmetry in a 2-D picture has an axis for which the opposite sides are reflective; in other words, the symmetrical properties are found along the transverse lines perpendicular to the symmetry axis. The concept skewed symmetry is introduced by Kanade [1] by relaxing this condition a little. It means a class of 2-D shapes in which the symmetry is found along lines not necessarily perpendicular to the axis, but at a fixed angle to it. Formally, such shapes can be defined as 2-D Affine transforms of real symmetries. Figures 2-1(a)(b) show a few key examples.

[Figure 2-1: examples of skewed symmetry (a), (b), and the definition of the skewed-symmetry and skewed-transverse axes (c).]

Stevens [5] presents a good body of psychological experiments which suggests that human observers can perceive surface orientations from figures with this property. This is probably because such qualitative symmetry in the image is often due to real symmetry in the scene. Thus let us associate the following assumption with this image property:

"A skewed symmetry depicts a real symmetry viewed from some unknown view angle."

Note that the converse of this assumption is always true under orthographic projection.

We can transform this assumption into constraints in the gradient space. As shown in Figure 2-1, a skewed symmetry defines two directions: let us call them the skewed-symmetry axis and the skewed-transverse axis, and denote their directional angles in the picture by α and β, respectively (Figure 2-1(c)). Let G = (p,q) be the gradient of the plane which includes the skewed symmetry. We will show that

p'² cos²((α-β)/2) - q'² sin²((α-β)/2) = -cos(α-β)    (1)

where

p' = p cos λ + q sin λ
q' = -p sin λ + q cos λ
λ = (α+β)/2.

Thus, the (p,q)'s are on a hyperbola. That is, the skewed symmetry defined by α and β in the picture can be a projection of a real symmetry if and only if the gradient is on this hyperbola. The skewed symmetry thus imposes a one-dimensional family of constraints on the underlying surface orientation (p,q).
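Numerically, the constraint is easy to check. The following sketch (ours, not the paper's) tests whether a gradient (p,q) lies on the hyperbola of equation (1):

```python
# Our numerical check of the skewed-symmetry constraint (1).
import math

def on_skewed_symmetry_hyperbola(p, q, alpha, beta, tol=1e-9):
    """True if gradient (p,q) could arise from a real symmetry whose axes
    appear at image angles alpha (skewed-symmetry) and beta (skewed-transverse)."""
    lam = (alpha + beta) / 2.0
    p1 = p * math.cos(lam) + q * math.sin(lam)    # p' in the rotated gradient space
    q1 = -p * math.sin(lam) + q * math.cos(lam)   # q'
    d = (alpha - beta) / 2.0
    lhs = p1**2 * math.cos(d)**2 - q1**2 * math.sin(d)**2
    return abs(lhs + math.cos(alpha - beta)) < tol

# Sanity check: an upright symmetry (alpha=0, beta=pi/2) tilted about its
# own axis has gradient (0, q), which satisfies the constraint for any q.
assert on_skewed_symmetry_hyperbola(0.0, 1.7, 0.0, math.pi / 2)
```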
3. Affine-Transformable Patterns

In texture analysis we often consider small patterns (texel = texture element) whose repetition constitutes "texture". Suppose we have a pair of texel patterns in which one is a 2-D Affine transform of the other; we call them a pair of Affine-transformable patterns. Let us assume that

"A pair of Affine-transformable patterns in the picture are projections of similar patterns in the 3-D space (i.e., they can be overlapped by scale change, rotation, and translation)."

Note that, as in the case of skewed symmetry, the converse of this assumption is always true under orthographic projections. The above assumption can be schematized by Figure 3-1.

Consider two texel patterns P1 and P2 in the picture, and place the origins of the x-y coordinates at their centers, respectively. The transform from P2 to P1 can be expressed by a regular 2×2 matrix A = (aij). P1 and P2 are projections of patterns P'1 and P'2 which are drawn on the 3-D surfaces. We assume that P'1 and P'2 are small enough so that we can regard them as being drawn on small planes. Let us denote the gradients of those small planes by G1 = (p1,q1) and G2 = (p2,q2), respectively; i.e., P'1 is drawn on a plane -z = p1x + q1y and P'2 on -z = p2x + q2y.

Now, our assumption amounts to saying that P'1 is transformable from P'2 by a scalar scale factor σ and a rotation matrix R = (cos δ  -sin δ; sin δ  cos δ). (We can omit the translation from our consideration, since for each pattern the origin of the coordinates is placed at its gravity center, which is invariant under the Affine transform.) Thinking about a pattern drawn on a small plane, -z = px + qy, is equivalent to viewing the pattern from directly overhead; that is, rotating the x-y-z coordinates so that the normal vector of the plane is along the new z-axis (line of sight). For this purpose we rotate the coordinates first by φ around the y-axis and then by θ around the x-axis. We have the following relations among φ, θ, p, and q:

sin φ = p/√(1+p²)         cos φ = 1/√(1+p²)                     (2)
sin θ = q/√(1+p²+q²)      cos θ = √(1+p²)/√(1+p²+q²)

The plane which was represented as -z = px + qy in the old coordinates is, of course, now represented as -z' = 0 in the new coordinates.
3. Affine-Transformable Patterns

In texture analysis we often consider small patterns (texel = texture element) whose repetition constitutes "texture". Suppose we have a pair of texel patterns in which one is a 2-D Affine transform of the other; we call them a pair of Affine-transformable patterns. Let us assume that

"A pair of Affine-transformable patterns in the picture are projections of similar patterns in the 3-D space (i.e., they can be overlapped by scale change, rotation, and translation)."

Note that, as in the case of skewed symmetry, the converse of this assumption is always true under orthographic projection. The above assumption can be schematized by Figure 3-1.

(Figure 3-1: A schematic diagram showing the assumptions on Affine-transformable patterns.)

Consider two texel patterns P₁ and P₂ in the picture, and place the origins of the x-y coordinates at their centers, respectively. The transform from P₂ to P₁ can be expressed by a regular 2x2 matrix A = (aᵢⱼ). P₁ and P₂ are projections of patterns P′₁ and P′₂ which are drawn on the 3-D surfaces. We assume that P′₁ and P′₂ are small enough so that we can regard them as being drawn on small planes. Let us denote the gradients of those small planes by G₁ = (p₁,q₁) and G₂ = (p₂,q₂), respectively; i.e., P′₁ is drawn on a plane −z = p₁x + q₁y and P′₂ on −z = p₂x + q₂y.

Now, our assumption amounts to saying that P′₁ is transformable from P′₂ by a scalar scale factor σ and a rotation matrix R (a pure 2-D rotation). (We can omit the translation from our consideration, since for each pattern the origin of the coordinates is placed at its gravity center, which is invariant under the Affine transform.)

Thinking about a pattern drawn on a small plane, −z = px + qy, is equivalent to viewing the pattern from directly overhead; that is, rotating the x-y-z coordinates so that the normal vector of the plane is along the new z-axis (line of sight). For this purpose we rotate the coordinates first by φ around the y-axis and then by θ around the x-axis. We have the following relations among φ, θ, p, and q:

sin φ = p/√(1 + p²)        cos φ = 1/√(1 + p²)
sin θ = q/√(1 + p² + q²)   cos θ = √(1 + p²)/√(1 + p² + q²)    (2)

The plane which was represented as −z = px + qy in the old coordinates is, of course, now represented as −z′ = 0 in the new coordinates.

Let us denote the angles of the coordinate rotations to obtain P′₁ and P′₂ in Figure 3-1 by (φ₁,θ₁) and (φ₂,θ₂), individually. The 2-D mapping from P′ᵢ (x′-y′ plane) to Pᵢ (x-y plane) can be conveniently represented by the following 2x2 matrix Tᵢ, which is actually a submatrix of the usual 3-D rotation matrix:

Tᵢ = ( cos φᵢ   −sin φᵢ sin θᵢ )
     (   0         cos θᵢ     )

Now, in order for the schematic diagram of Figure 3-1 to hold, what relationships have to be satisfied among the matrix A = (aᵢⱼ), the gradients Gᵢ = (pᵢ,qᵢ) for i = 1,2, the angles (φᵢ,θᵢ) for i = 1,2, the scale factor σ, and the matrix R? We equate the two transforms that start from P′₂ to reach P₁: one following the diagram counter-clockwise, P′₂ -> P₂ -> P₁; the other clockwise, P′₂ -> P′₁ -> P₁. We obtain

A T₂ = T₁ σR.

By eliminating σ and R, and substituting for sines and cosines of φᵢ and θᵢ by (2), we have two (fairly complex) equations in terms of pᵢ, qᵢ, and the elements of A. We therefore find that the assumption of Affine-transformable patterns yields a constraint determined solely by the matrix A. The matrix is determined by the relation between P₁ and P₂ observable in the picture: without knowing either the original patterns (P′₁ and P′₂) or their relationships (σ and R) in the 3-D space.

The Affine transform from P₂ to P₁ is more intuitively understood by how a pair of perpendicular unit-length vectors (typically along the x and y coordinate axes) are mapped into their transformed vectors. Two angles (α and β) and two lengths (τ and ρ) can characterize the transform. Components of the transformation matrix A = (aᵢⱼ) are represented by:

a₁₁ = τ cos α    a₁₂ = ρ cos β
a₂₁ = τ sin α    a₂₂ = ρ sin β    (3)

Let us consider the case that α and β are known, but τ and ρ are not. Using (3), eliminate τ and ρ. Then we obtain

(p₁ cos α + q₁ sin α)(p₁ cos β + q₁ sin β) + cos(α−β) = 0

which is exactly the same as the hyperbola (1).
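The relation A T₂ = T₁ σR can be checked numerically. In the sketch below (our own illustration; the gradients, scale factor, rotation angle, and random texel are hypothetical values chosen for the example), A is constructed from postulated scene quantities and shown to map the image of one texel onto the image of its similar partner:

```python
import numpy as np

def T(p, q):
    """2x2 projection matrix built from eq. (2): maps the overhead (x', y')
    view of a plane -z = px + qy to its orthographic image (x, y)."""
    sphi, cphi = p / np.sqrt(1 + p**2), 1 / np.sqrt(1 + p**2)
    sth = q / np.sqrt(1 + p**2 + q**2)
    cth = np.sqrt(1 + p**2) / np.sqrt(1 + p**2 + q**2)
    return np.array([[cphi, -sphi * sth],
                     [0.0,   cth]])

rng = np.random.default_rng(0)
G1, G2 = (0.5, -0.3), (1.2, 0.8)           # gradients of the two small planes
sigma = 1.7                                 # 3-D scale factor
w = np.deg2rad(25.0)                        # 3-D rotation angle of R
R = np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])

# A is fixed by A T2 = T1 sigma R, i.e. A = T1 (sigma R) T2^-1.
A = T(*G1) @ (sigma * R) @ np.linalg.inv(T(*G2))

# Check: for any overhead texel point x', the images satisfy P1 = A P2.
xprime = rng.standard_normal((2, 5))        # a random small texel, overhead view
P2 = T(*G2) @ xprime                        # its image on plane 2
P1 = T(*G1) @ (sigma * R @ xprime)          # the similar pattern's image on plane 1
print(np.allclose(A @ P2, P1))              # True: A relates the two images
```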
4. The Shape-from-Texture Paradigm

This section derives the same skewed-symmetry constraints from a second theory, different from the Affine-transformable patterns. The shape-from-texture paradigm is briefly presented here; a fuller discussion can be found in [3].

The paradigm has two major portions. In the first, a given image textural property is "normalized" to give a general class of surface orientation constraints. In the second, the normalized values are used in conjunction with assumed scene relations to refine the constraints. Only two texels are required, and only one assumption (equality of scenic texture objects, or some other simple relation) to generate a well-behaved one-dimensional family of possible surface orientations.

The first step in the paradigm is the normalization of a given texel property. The goal is to create a normalized texture property map (NTPM), which is a representational and computational tool relating image properties to scene properties. The NTPM summarizes the many different conditions that may have occurred in the scene leading to the formation of the given texel. In general, the NTPM of a certain property is a scalar-valued function of two variables. The two input variables describe the postulated surface orientation in the scene (top-bottom and left-right slants: (p,q) when we use the gradient space). The NTPM returns the value of the property that the textural object would have had in the scene, in order for the image to have the observed textural property. As an example, the NTPM for a horizontal unit line length in the image summarizes the lengths of lines that would have been necessary in 3-D space under various orientations: at surface orientation (p,q), it would have to be √(1 + p²).

More specifically, the NTPM is formed by selecting a texel and a texel property, back-projecting the texel through the known imaging geometry onto all conceivable surface orientations, and measuring the texel property there.

In the second phase of the paradigm, the NTPM is refined in the following way. Texels usually have various orientations in the image, and there are many different texel types. Each texel generates its own image-scene relationships, summarized in its NTPM. If, however, assumptions can be made to relate one texel to another, then their NTPMs can also be related; in most cases only a few scenic surface orientations can satisfy both texels' requirements. Some examples of the assumptions that relate texels are: both lie in the same plane, both are equal in textural measure (length, area, etc.), one is k times the other in measure, etc. Relating texels in this manner forces more stringent demands on the scene. If enough relations are invoked, the orientation of the local surface supporting two or more related texels can be very precisely determined.

What we now show is that the skewed symmetry method is a special case of the shape-from-texture paradigm; it can be derived from considerations of texel slope.

To normalize the slope of a texel, it is back-projected onto a plane with the postulated orientation. The angle that the back-projected slope makes with respect to the gradient vector of the plane is one good choice (out of many) for the normalized slope measure. Under perspective, the normalized value depends on the image position and camera focal length; under orthography it is much simpler.

Using the construction in Figure 4-1, together with several lemmas relating surfaces in perspective to their local vanishing lines, slope is normalized as follows. Assume a slope is parallel to the p axis; the image and gradient space can always be rotated into such a position. (If rotation is necessary, the resulting NTPM can be de-rotated into the original position using the standard two-by-two orthonormal matrix.) Also assume that the slope is somewhere along the line y = yₛ, where the unit of measurement in the image is equal to one focal length. Then, the normalized slope value -- the normalized texture property map -- is given by

[q − yₛ(p² + q²)] / [p√(1 + p² + q²)].

This normalized value can be exploited in several ways.
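A minimal sketch of the orthographic normalization, assuming the angle-to-gradient slope measure described above (the function name is ours); it computes the measure by explicit back-projection and compares it with the orthographic closed form given in (4) below:

```python
import numpy as np

def normalized_slope(p, q):
    """Orthographic normalized slope for an image slope parallel to the
    p (x) axis: tangent of the angle between the back-projected slope and
    the gradient direction of the postulated plane -z = px + qy."""
    v = np.array([1.0, 0.0, -p])              # back-projected unit image step
    g = np.array([p, q, -(p**2 + q**2)])      # steepest direction on the plane
    return np.linalg.norm(np.cross(v, g)) / np.dot(v, g)

p, q = 0.7, 1.1
print(normalized_slope(p, q))                  # explicit back-projection
print(q / (p * np.sqrt(1 + p**2 + q**2)))      # closed form (4): same value
```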
Most important is the result that is obtained when one has two slopes in the image that are assumed to arise from equal slopes in the scene. Under this assumption, their normalized property maps can be equated. The resulting constraint, surprisingly, is a simple straight line in the gradient space.

Under orthography, nearly everything simplifies. The normalized slope of a texel becomes

q / [p√(1 + p² + q²)].    (4)

It is independent of yₛ; in effect, all slopes are at the focal point.

(Figure 4-1: Back-projecting an image slope onto a plane with gradient (p, q).)

Consider Figure 2-1(a). Given the angle that the two texels form (β−α), rotate the gradient space so that the positive p axis bisects the angle. Let each half-angle be δ, so δ = (β−α)/2. The normalized value of either slope is obtained directly from the standard normalized slope formula, corrected for the displacements of +δ and −δ respectively. That is, for the slope at the positive δ orientation, instead of formula (4), we use the formula under the substitution p cos δ + q sin δ for p, and −p sin δ + q cos δ for q. We do similarly for the slope at −δ.

The fact that the normalized slopes are assumed to be perpendicular in the scene allows us to set one of the normalized values equal to the negative reciprocal of the other. The resultant equation becomes

p² cos²δ − q² sin²δ = sin²δ − cos²δ = −cos 2δ.

This is exactly the hyperbola (1) with 2δ = β−α.

5. Conclusion

The assumptions we used for the skewed symmetry, the Affine-transformable patterns, and texture analysis can be generalized as

"Properties observable in the picture are not by accident, but are projections of some preferred corresponding 3-D properties."

This provides a useful meta-heuristic for exploiting image properties: we can call it the meta-heuristic of non-accidental image properties. It can be regarded as a generalization of the assumption of general view directions, often used in the blocks world to exclude the cases of accidental line alignments.

Instances that can fall within this meta-heuristic include: parallel lines in the picture vs. parallel lines in the scene, texture gradients, and lines convergent to a vanishing point.

One of the most essential points of our technique is that we relate certain image properties to certain 3-D space properties, and that we map the relationships into convenient representations of shape constraints. We explicitly incorporate assumptions based either on the meta-heuristic or on a priori knowledge of the world. The shape-from-texture paradigm provides a computational framework for our technique. In most of our discussion we assumed orthography. Similar -- though more involved and less intuitive -- results can be obtained under perspective projections.

This work is discussed further in a technical report with the same title as this paper.
References

[1] Kanade, T. Recovery of the 3-Dimensional Shape of an Object from a Single View. Technical Report CMU-CS-79-153, Computer Science Department, Carnegie-Mellon University, Oct. 1979.

[3] Kender, J.R. Shape from Texture. PhD thesis, Computer Science Department, Carnegie-Mellon University, 1980.

[4] Mackworth, A.K. Interpreting Pictures of Polyhedral Scenes. Artificial Intelligence 4(2), 1973.

[5] Stevens, K.A. Surface Perception from Local Analysis of Texture and Contour. Technical Report AI-TR-512, MIT-AI, 1980.
 | 
	1980 
 | 
	80 
 | 
					
79 
							 | 
WHAT SHOULD BE COMPUTED IN LOW LEVEL VISION SYSTEMS

William B. Thompson
Albert Yonas
University of Minnesota
Minneapolis, Minnesota 55455

ABSTRACT

Recently, there has been a trend towards developing low level vision models based on an analysis of the mapping of a three dimensional scene into a two dimensional image. Emphasis has been placed on recovering precise metric spatial information about the scene. While we agree with this approach, we suggest that more attention be paid to what should be computed. Psychophysical scaling, adaptation, and direct determination of higher order relations may be as useful in the perception of spatial layout as in other perceptual domains. When applied to computer vision systems, such processes may reduce dependence on overly specific scene constraints.

1. Introduction

The following is a position paper directed at several aspects of low-level visual processing. The current trend towards focusing on the determination of exact, three-dimensional form in a scene is questioned. Both analysis of representative scene domains and experience with human vision suggest that less precise form properties may be sufficient for most problems. Several computational issues are also briefly discussed.

(This research was supported in part by the National Science Foundation under Grant MCS-78-20780 and by the National Institute of Child Health and Human Development under Grants HD-01136 and HD-05027.)

2. Alternate Approaches to "Low-Level" Analysis

Computer vision systems have traditionally been divided into segmentation and interpretation components. A multiplicity of image features have been investigated in the hope that they would facilitate the partitioning of an image into regions corresponding to "objects" or "surfaces" in the original scene. Only after this two-dimensional segmentation operation was completed would procedures be applied in an attempt to determine the original three-dimensional structure of the scene. Recently, an alternative approach to implementing the lower levels of a computational vision model has been developed. Its basic premise is that the determination of three-dimensional structure is such an integral part of the scene description processes that it should be carried out at all levels of the analysis [1,2,3,4,5].

Proponents of this approach usually employ a well structured methodology for developing computational models of form perception:

1. Precisely describe a carefully constrained scene domain.
2. Identify important scene properties.
3. Determine the function which maps these scene properties into an image.
4. Develop computationally feasible mechanisms for recovering the "important" scene properties from the image.

Great emphasis is placed on determining what scene properties are computable, given a set of constraints on the scene.
Scene properties normally considered essential to the analysis include object boundaries, three-dimensional position, and surface orientation. In many cases, the determination of these features requires that properties such as surface reflectance and illumination must also be found. A key distinction between the classical techniques and this newer approach is that in the latter, analysis procedures are developed analytically from an understanding of how scene properties affect the image, rather than from ad hoc assumptions about how image properties might relate to scene structure.

The representational structures which have been used to implement form based analysis have, for the most part, been iconic. The features represented are almost always metric properties of the corresponding point on the surface: distance from the observer, orientation with respect to either the observer or a ground plane, reflectance, incident illumination, and so on. To determine relative effects (e.g., which of two points is farther away), absolute properties are compared.

The determination of these metric scene properties requires that the possible scenes be highly constrained. Usually, the analysis depends on restrictions both on the types of objects allowed and on the surface properties of the objects. For example, a "blocks world" assumption (or alternately, the assumption of a "Play-Doh" world made entirely of smooth surfaces) might be made. In addition, it is commonly assumed that surfaces are all lambertian reflectors and that, for a given surface, the coefficient of reflectance is constant. Illumination is often limited to a single distant point source, possibly coupled with a diffuse illuminator. Secondary illumination effects are usually presumed to be negligible.

3. Absolute Scene Properties Are Not Always Needed

The proponents of form based analysis presume the need for finding exact shape properties of a scene. They concentrate on investigating how constraints on scenes affect what properties are computable and how they can be determined. We suggest that more attention be paid towards what properties should be computed. We argue that for a wide variety of problem areas, absolute metric information about scene shape is not required. Instead, relative properties such as flat/curved, convex/concave, farther-away/closer, etc. are both sufficient and easier to compute.

Most tasks involving description of a visual environment depend on generalized shape properties. In fact, much effort has been spent searching for shape characterizations that embody those relationships useful for description but not the enormous amount of irrelevant detail contained in any representation based on specific position.
Even in task domains such as object manipulation and obstacle avoidance, precise positional information is frequently not necessary. Both these task areas contain significant sub-problems involving object identification - a descriptive task often possible with approximate and/or relative information about shape. Even when actual position is needed, feedback control can be used to minimize the need for highly accurate positional determinations.

A second argument for emphasizing the difficulty of determining metric properties comes from our experience with human perception. The psychological literature contains many references to the effects of the scaling process that relates the physical domain to the psychological [6,7], the effects of adaptation to stimulation [8], and the effects of practice on variable error [9]. By investigating the competence of the human visual system in determining primitive shape effects, we can gain insight into sufficient (but not necessary) properties for more complex analysis. In our own work on perception of surfaces, preliminary results from one set of experiments seem relevant to the development of computational models.

We synthesized a frontal view of a surface, the profile of which is shown in figure 1. Lighting was assumed to be a combination of a single distant point source and a perfectly diffuse source. A simple reflectance model was used, and secondary illumination effects were not considered. A series of synthesized images was produced with the intention of examining the perception of single displays and the ability to determine differences between displays. The "object" in our images was an ellipsoid with semi-axes A, B, and C. (A was in the horizontal direction as seen by the viewer, B was in the vertical direction, and C was in the direction along the line of sight.) The object was presented against a black background, and thus no cast shadows were present. In one set of experiments, A and B were held constant, producing a circular occluding contour. Subjects were asked to estimate the value of C for a number of different displays, with true values of C ranging from one half of A to four times A. On initial trials, subjects tended to see the same shape independently of the actual value of C. On subsequent trials, performance improved, but with a significant, systematic underestimation of the true value. As a final note, when subjects were asked to qualitatively describe the changes in the scene as C was varied, they often indicated that they felt that the change was due to differences in illumination, not shape.

It is certainly premature to make any definitive conclusions from our results.
Nevertheless, we suggest the following conjecture: Subjects appear to see a specific shape (as opposed to simply a "round" object); however, the metric properties they estimate for that shape are not necessarily consistent with the "true" values. The subjects do appear to be better at ranking displays based on different values of C.

4. Non-metric Scene Properties

We suggest that requiring specific, accurate determination of scene properties may be unnecessarily restrictive. Less precise and/or purely relative qualities are sufficient for many situations. By concentrating on these characteristics, we may be able to significantly relax the constraints under which our computational vision models must operate. Finally, human vision is often quite inaccurate in determining metric values for these same properties. Rather than indicating a deficiency in human vision, this suggests that alternative (and presumably more useful) characteristics are being computed by the human perceiver.

Two approaches to structuring computer vision models based on these observations seem relevant. First of all, it may be possible to directly compute properties of interest, rather than deriving them from more "primitive" characteristics (see [10,11]). For example, we might look for ways of estimating surface curvature that do not depend on first determining depth and then taking the second derivative.

A second possibility is to presume that estimation of shape properties is subject to the same scaling processes as most other perceptual phenomena. Thus, our model would estimate some non-linear but monotonic transformation of characteristics such as depth. The transformations would be adaptive, but in general not known by higher level analysis procedures. Thus, the precise metric three-dimensional structure can not be recovered. For many tasks, the scaled values are sufficient, and the need for highly constrained, photometric analysis of the image is reduced. With appropriate standardization, precise scene properties may be determined. Without standardization, relative characteristics are still computable. Ordinal relationships are determinable over a wide range, while quantitative comparisons are possible over a more limited range. (E.g., it may be possible to judge that A is twice as far as B but not that C is 100 times as far as D.)

5. Computational Models

Recently, much attention has been focused on using parallel process models to specify the computational structure of low-level vision systems. An image is partitioned into a set of neighborhoods, with one process associated with each region. The processes compute an estimate of scene properties corresponding to the region using the image features in the region and whatever is known about surrounding scene structure. The circularity of form estimation for one point depending on the form of neighboring points can be dealt with in several ways. A variable resolution technique may be employed: first, large, non-interacting neighborhoods are used; then, progressively smaller neighborhoods are used, each depending on scene properties computed using previously analyzed, larger regions. (Marr's stereo model is an example [12].) Alternately, an iterative technique can be used to find crude estimates of scene properties, and then those values are fed back into the process to produce more refined estimates. (Examples include many "relaxation labeling" applications [13].) In either case, the determination of absolute scene properties usually requires a set of boundary values - image points at which the scene constraints allow direct determination of the properties. The computational process must then propagate these constraints to other image regions.

The robustness of these parallel process models may be significantly increased if they are only required to compute relative properties. The need for accurately propagating scene information is greatly reduced. Furthermore, photometric analysis of the image will usually not be required. For instance, general characteristics of the intensity gradient may be all that is required for analysis. As an example, for a reasonably general class of scene types, a discontinuity in the luminance gradient will usually correspond to a shadow, an occlusion boundary, or the common boundary between two surfaces. Continuous but non-zero gradient values indicate either surface curvature or illumination variation. In neither case is the actual magnitude of the gradient required.

Finally, many of the problems in low-level vision are underspecified. No single "correct" solution exists because insufficient information is available to derive the original scene properties. Thus, computational models must either naturally embody default assumptions or allow for ambiguous representations. (There is reason to expect that both approaches are useful.) Even more important, the control structures used by the models must not impose any arbitrary input/output assumptions. For example, consider again the relationship between luminance gradient, illumination direction, and surface curvature. For a given gradient, knowing either illumination or curvature allows determination of the other. The model must be able to account for this symmetry.
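To make the boundary-value propagation described above concrete, here is a minimal sketch (ours, not from the paper; the grid, boundary values, and function name are invented for the example). Note how the ordinal left-to-right structure stabilizes long before the absolute values converge, which is the authors' point about relative properties:

```python
import numpy as np

def propagate(boundary, mask, iters=500):
    """Minimal sketch of the iterative scheme: scene values known at
    boundary points (mask == True) are propagated to the rest of the
    image by repeated neighborhood averaging."""
    est = np.where(mask, boundary, 0.0)
    for _ in range(iters):
        # Average of the four neighbors (replicated at the image border).
        p = np.pad(est, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        est = np.where(mask, boundary, avg)   # boundary values stay clamped
    return est

# Toy example: depth known only along the left and right image columns.
h, w = 32, 32
boundary = np.zeros((h, w)); mask = np.zeros((h, w), dtype=bool)
boundary[:, 0], mask[:, 0] = 1.0, True       # near surface on the left
boundary[:, -1], mask[:, -1] = 5.0, True     # far surface on the right
depth = propagate(boundary, mask)
print(np.all(np.diff(depth.mean(axis=0)) > 0))   # ordinal order already stable
```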
6. Conclusions

When attempting to construct computational models of low-level vision systems, we need to pay as much attention to what should be computed as we do to how it is computed. We may investigate this problem in at least three ways. The first is a computational approach: we can determine what is computable given a set of constraints about the scene and the imaging process. The second is an ecological approach: we catalog the range of problem domains in which our system is expected to function and then determine the primitive scene properties needed for analysis. The third is metaphorical: study a working visual system (e.g., human) in order to determine which low-level scene properties it is able to perceive. These properties then define a sufficient set for analysis.

Much current work focuses on estimating exact positional information about a scene. We argue that in many cases, these metric properties cannot be easily determined. Even more importantly, however, they often need not be determined. Simple relative properties may be sufficient for analysis and be much easier to compute.

BIBLIOGRAPHY

[1] D. Marr, "Representing and computing visual information," Artificial Intelligence: An MIT Perspective, P.H. Winston and R.H. Brown, eds., pp. 17-82, 1979.

[2] H.G. Barrow and J.M. Tenenbaum, "Recovering intrinsic scene characteristics from images," in Computer Vision Systems, A.R. Hanson and E.M. Riseman, eds., New York: Academic Press, 1978.

[3] B. Horn, "Obtaining shape from shading information," in The Psychology of Computer Vision, P.H. Winston, ed., New York: McGraw-Hill, 1975.

[4] S. Ullman, The Interpretation of Visual Motion, Cambridge: MIT Press, 1979.

[5] K.A. Stevens, "Surface perception from local analysis of texture and contour," Ph.D. Thesis, MIT, Feb. 1979.

[6] G.T. Fechner, Elemente der Psychophysik, Leipzig: Breitkopf and Hartel, 1860. (Reissued Amsterdam: Bonset, 1964.)

[7] S.S. Stevens, "Perceptual magnitude and its measurement," in Handbook of Perception, Vol. II, Psychophysical Judgement and Measurement, Carterette and Friedman, eds., New York: Academic Press, 1974.

[8] H. Helson, Adaptation Level Theory, New York: Harper, 1964.

[9] E.J. Gibson, Perceptual Learning and Development, New York: Appleton-Century-Crofts, 1969.

[10] J.J. Gibson, The Senses Considered as Perceptual Systems, Boston: Houghton Mifflin, 1966.

[11] J.J. Gibson, The Ecological Approach to Visual Perception, Boston: Houghton Mifflin, 1979.

[12] D. Marr and T. Poggio, "A theory of human stereo vision," MIT AI Lab. Memo 451, Nov. 1977.

[13] A. Rosenfeld, R. Hummel, and S. Zucker, "Scene labeling by relaxation operations," IEEE Trans. Systems, Man, and Cybernetics, vol. 6, pp. 420-433, June 1976.

(Figure 1: Profile of the surface used in the synthesized displays.)
 | 
	1980 
 | 
	81 
 | 
					
80 
							 | 
INTERPRETING LINE DRAWINGS AS THREE-DIMENSIONAL SURFACES

Harry G. Barrow and Jay M. Tenenbaum
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

We propose a computational model for interpreting line drawings as three-dimensional surfaces, based on constraints on local surface orientation along extremal and discontinuity boundaries. Specific techniques are described for two key processes: recovering the three-dimensional conformation of a space curve (e.g., a surface boundary) from its two-dimensional projection in an image, and interpolating smooth surfaces from orientation constraints along extremal boundaries.

INTRODUCTION

Our objective is the development of a computer model for interpreting two-dimensional line drawings, such as Figure 1, as three-dimensional surfaces and surface boundaries. Line drawings depict intensity discontinuities at surface boundaries, which, in many cases, are the primary source of surface information available in an image: i.e., in areas of shadow, complex (secondary) illumination, or specular surfaces, analytic photometry is inappropriate. Understanding how line drawings convey three-dimensionality is thus of fundamental importance.

Given a perspectively correct line drawing depicting discontinuities of smooth surfaces, we desire as output arrays containing values for orientation and relative range at each point on the implied surfaces. This objective is distinct from that of earlier work on interpretation in terms of object models (e.g., [1]) and more basic. No knowledge of plants is required to understand the three-dimensional structure of Figure 1, as can be demonstrated by viewing fragments out of context (through a mask, for example).

(This research was supported by funds from DARPA, NASA, and NSF.)

Ambiguity and Constraints

The central problem in perceiving line drawings is one of ambiguity: in theory, each two-dimensional line in the image could correspond to a possible projection of an infinitude of three-dimensional space curves (see Figure 2). Yet people are not aware of this massive ambiguity. When asked to provide a three-dimensional interpretation of an ellipse, the overwhelming response is a tilted circle, not some bizarrely twisting curve (or even a discontinuous one) that has the same image. What assumptions about the scene and the imaging process are invoked to constrain to this unique interpretation?

We observe that although all the lines in Figure 1 look fundamentally alike, two distinct types of scene event are depicted: extremal boundaries (e.g., the sides of the vase), where a surface turns smoothly away from the viewer, and discontinuity boundaries (e.g., the edges of the leaves), where smooth surfaces terminate or intersect. Each type provides different constraints on three-dimensional interpretation. At an extremal boundary, the surface orientation can be inferred exactly; at every point along the boundary, orientation is normal to the line of sight and to the tangent to the curve in the image. A discontinuity boundary, by contrast, does not directly constrain surface orientation. However, its local curvature in the image does provide a statistical constraint on the three-dimensional tangent of the corresponding space curve.
The local surface normal is constrained only to be orthogonal to this tangent, and is thus free to swing about it as shown in Figure 3.

The ability to infer 3-D surface structure from extremal and discontinuity boundaries suggests a three-step model for line drawing interpretation, analogous to those involved in our intrinsic image model [2]: line sorting, boundary interpretation, and surface interpolation. Each line is first classified according to the type of surface boundary it represents (i.e., extremal versus discontinuity). Surface contours are interpreted as three-dimensional space curves, providing relative 3-D distances along each curve; local surface normals are assigned along the extremal boundaries. Finally, three-dimensional surfaces consistent with these boundary conditions are constructed by interpolation. (For an alternative model, see Stevens [3].) This paper addresses some important aspects of three-dimensional recovery and interpolation (see [1] and [4] for approaches to line classification).

INTERPRETATION OF DISCONTINUITY BOUNDARIES

To recover the three-dimensional conformation of a surface discontinuity boundary from its image, we invoke two assumptions: surface smoothness and general position. The smoothness assumption implies that the space curve bounding a surface will also be smooth. The assumption that the scene is viewed from a general position implies that a smooth curve in the image results from a smooth curve in space, and not from an accident of viewpoint. In Figure 2, for example, the sharply receding curve projects into a smooth ellipse from only one viewpoint. Thus, such a curve would be a highly improbable three-dimensional interpretation of an ellipse.

The problem now is to determine which smooth space curve is most likely. For the special case of a wire curved in space, we conjectured that, of all projectively-equivalent space curves, humans perceive that curve having the most uniform curvature and the least torsion [2]; i.e., they perceive the space curve that is smoothest and most planar. Consistent findings were reported in recent work by Witkin [5] at MIT on human interpretation of the orientation of planar closed curves.

Measures of Smoothness

The smoothness of a space curve is expressed quantitatively in terms of intrinsic characteristics such as differential curvature (k) and torsion (t), as well as vectors giving intrinsic axes of the curve: tangent (T), principal normal (N), and binormal (B). A simple measure for the smoothness of a space curve is uniformity of curvature. Thus, one might seek the space curve corresponding to a given image curve for which the integral of k' (the spatial derivative of k) was minimum. This alone, however, is insufficient, since the integral of k' could be made arbitrarily small by stretching out the space curve so that it approaches a twisting straight line (see Figure 4). Uniformity of curvature also does not indicate whether a circular arc in the image should correspond to a 3-D circular arc or to part of a helix.
A necessary additional constraint in both cases is that the space curve corresponding to a given image curve should be as planar as possible, or more precisely that the integral of its torsion should also be minimized.

Integral (1) expresses both the smoothness and planarity of a space curve in terms of a single, locally computed differential measure d(kB)/ds:

∫ |d(kB)/ds|² ds = ∫ (k′² + k²t²) ds    (1)

Intuitively, minimizing this integral corresponds to finding the three-dimensional projection of an image curve that most closely approximates a planar, circular arc, for which k′ and t are both everywhere zero.

Recovery Techniques

A computer model of this recovery theory was implemented to test its competence. The program accepts a description of an input curve as a sequence of two-dimensional image coordinates. Each input point, in conjunction with an assumed center of projection, defines a ray in space along which the corresponding space curve point is constrained to lie. The program can adjust the distance associated with each space curve point by sliding it along its ray like a bead on a wire. From the resulting 3-D coordinates, it can compute local estimates for curvature k, intrinsic axes T, N, and B, and the smoothness measure d(kB)/ds. An iterative optimization procedure then adjusts distance for each point to determine the configuration of points that minimizes the integral in (1).

The program was tested using input coordinates synthesized from known 3-D space curves so that results could be readily evaluated. Correct 3-D interpretations were produced for simple open and closed curves such as an ellipse, which was interpreted as a tilted circle, and a trapezoid, which was interpreted as a tilted rectangle. However, convergence was slow and somewhat dependent on the initial choice of z-values. For example, the program had difficulty converging to the "tilted-circle" interpretation of an ellipse if started either with all z-values in a plane parallel to the image plane or all randomized to be highly nonplanar.
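A minimal sketch of the bead-on-a-ray recovery just described, under our own discretization choices (finite differences for the Frenet quantities, SciPy's Powell minimizer, and a pinned mean depth to remove the overall scale freedom, a detail the paper does not specify):

```python
import numpy as np
from scipy.optimize import minimize

def smoothness(z, image_pts, center):
    """Discrete form of integral (1): sum of |d(kB)/ds|^2 ds along the curve.
    z[i] slides the i-th point along its ray, like a bead on a wire."""
    z = z / np.mean(z)                       # pin mean depth (our choice):
                                             # removes the overall scale freedom
    rays = image_pts - center
    pts = center + z[:, None] * rays         # candidate 3-D space curve
    d1 = np.gradient(pts, axis=0)
    ds = np.linalg.norm(d1, axis=1, keepdims=True)
    T = d1 / ds                              # unit tangent
    dT = np.gradient(T, axis=0) / ds
    k = np.linalg.norm(dT, axis=1)           # curvature
    N = dT / np.maximum(k[:, None], 1e-9)    # principal normal
    B = np.cross(T, N)                       # binormal
    dkB = np.gradient(k[:, None] * B, axis=0) / ds
    return float(np.sum(np.sum(dkB**2, axis=1) * ds[:, 0]))

# Test input: an ellipse, as the image of some unknown 3-D curve.
s = np.linspace(0.0, 2 * np.pi, 48, endpoint=False)
image_pts = np.stack([np.cos(s), 0.5 * np.sin(s), np.zeros_like(s)], axis=1)
center = np.array([0.0, 0.0, -20.0])         # assumed center of projection

z0 = 1.0 + 0.01 * np.random.default_rng(1).standard_normal(len(s))
res = minimize(smoothness, z0, args=(image_pts, center), method="Powell")
print("smoothness before/after:", smoothness(z0, image_pts, center), res.fun)
```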
SURFACE INTERPOLATION

Given constraints on orientation along extremal and discontinuity boundaries, the next problem is to interpolate smooth surfaces consistent with these boundary conditions. The problem of surface interpolation is not peculiar to contour interpretation, but is fundamental to surface reconstruction, since data is generally not available at every point in the image. We have implemented a solution for an important case: the interpolation of approximately uniformly-curved surfaces from initial orientation values and constraints on orientation [6].

The input is assumed to be in the form of sparse arrays, containing local estimates of surface range and orientation, in a viewer-centered coordinate frame, clustered along the curves corresponding to surface boundaries. The desired output is simply filled arrays of range and surface orientation representing the most likely surfaces consistent with the input data. These output arrays are analogous to our intrinsic images [2] or Marr's 2.5D sketch [7].

For any given set of input data, an infinitude of possible surfaces can be found to fit arbitrarily well. Which of these is best (i.e., smoothest) depends upon assumptions about the nature of surfaces in the world and the image formation process. For example, surfaces formed by elastic membranes (e.g., soap films) are constrained to minimum energy configurations characterized by minimum area and zero mean curvature; surfaces formed by bending sheets of inelastic material (e.g., paper or sheet metal) are characterized by zero Gaussian curvature; surfaces formed by many machining operations (e.g., planes, cylinders, and spheres) have constant principal curvatures.

Uniformly Curved Surfaces

We concentrate here on surfaces that are locally spherical or cylindrical (which have uniform curvature according to any of the above criteria). These cases are important because they require reconstructions that are symmetric in three dimensions and independent of viewpoint. Many simple interpolation techniques fail this test, producing surfaces that are too flat or too peaked. An interpolation algorithm that performs correctly on spherical and cylindrical surfaces can be expected to yield reasonable results for arbitrary surfaces.

Our approach exploits an observation that components of the unit normal vary linearly across the images of surfaces of uniform curvature. Consider a three-dimensional spherical surface, as shown in Figure 5. The radius and normal vectors are aligned, and so from similar figures we have: Nx = x/R, Ny = y/R, Nz = z/R. A similar derivation for the right circular cylinder is to be found in [6]. The point to be noted is that for both the cylinder and the sphere, Nx and Ny are linear functions of x and y, and Nz can be derived from Nx and Ny.

An Interpolation Technique

We have implemented an interpolation process that exploits the above observations to derive the orientation and range over a surface from boundary values. It uses parallel local operations at each point in the orientation array to make the two observable components of the normal, Nx and Ny, each vary as linearly as possible in both x and y. This could be performed by a standard numerical relaxation technique that replaces the value at each point by an average over a two-dimensional neighborhood. However, difficulties arise near surface boundaries where orientation is discontinuous. We decompose the two-dimensional averaging process into several one-dimensional ones, by considering a set of line segments passing through the central point, as shown in Figure 6a. Along each line we fit a linear function, and thus estimate a corrected value for the point. The independent estimates produced from the set of line segments are then averaged. Only the line segments that do not extend across a boundary are used: in the interior of a region, symmetric line segments are used (Figure 6a) to interpolate a central value; at boundaries, an asymmetric pattern allows values to be extrapolated (Figure 6b).

The interpolation process was applied to test cases in which surface orientations were defined around a circular outline, corresponding to the extremal boundary of a sphere, or along two parallel lines, corresponding to the extremal boundary of a right circular cylinder. Essentially exact reconstructions were obtained, even when boundary values were extremely sparse or only partially constrained. Results for other smooth surfaces, such as ellipsoids, seemed in reasonable agreement with human perception.
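A sketch of the interpolation scheme for the spherical test case, assuming only symmetric 3-point segments (the asymmetric boundary extrapolation of Figure 6b is omitted); the array names and sizes are our own:

```python
import numpy as np

def interpolate_normals(N, known, iters=3000):
    """Sketch of the line-segment scheme: each point is replaced by the
    average of linear interpolants along four directions (for a symmetric
    3-point segment, the linear fit predicts the mean of the two ends)."""
    est = np.where(known, N, 0.0)
    for _ in range(iters):
        p = np.pad(est, 1, mode="edge")
        fit = ((p[:-2, 1:-1] + p[2:, 1:-1]) / 2 +   # vertical segment
               (p[1:-1, :-2] + p[1:-1, 2:]) / 2 +   # horizontal segment
               (p[:-2, :-2] + p[2:, 2:]) / 2 +      # diagonal segment
               (p[:-2, 2:] + p[2:, :-2]) / 2) / 4   # other diagonal
        est = np.where(known, N, fit)               # clamp boundary values
    return est

# Sphere test: normal components given only on a circular extremal boundary.
R = 28.0
ys, xs = np.mgrid[-32:33, -32:33].astype(float)
boundary = np.abs(np.hypot(xs, ys) - R) < 0.7
Nx_true = xs / R                                    # linear in x (and y)
Nx = interpolate_normals(np.where(boundary, Nx_true, 0.0), boundary)
inside = np.hypot(xs, ys) < R - 1
print(np.abs(Nx - Nx_true)[inside].max())           # small: linear field recovered
# Nz then follows from the unit-normal constraint: Nz = sqrt(1 - Nx^2 - Ny^2).
```

Because the update is an average of linear interpolants, any field that is exactly linear in x and y is a fixed point, which is why the scheme reconstructs spheres and cylinders essentially exactly.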
Current work is aimed at extending the approach to partially constrained orientations along surface discontinuities, which will permit interpretation of general solid objects.

REFERENCES

1. K. Turner, "Computer Perception of Curved Objects Using a Television Camera," Ph.D. thesis, Department of Machine Intelligence and Perception, University of Edinburgh, Edinburgh, Scotland (1974).

2. H. G. Barrow and J. M. Tenenbaum, "Recovering Intrinsic Scene Characteristics from Images," in Computer Vision Systems, A. Hanson and E. Riseman, eds. (Academic Press, New York, New York, 1978).

3. K. Stevens, "Constraints on the Visual Interpretation of Surface Contours," A.I. Memo 522, M.I.T., Cambridge, Massachusetts (March 1979).

4. I. Chakravarty, "A Generalized Line and Junction Labeling Scheme with Applications to Scene Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-1, No. 2 (April 1979).

5. A. Witkin, Department of Psychology, M.I.T., Cambridge, Massachusetts (private communication).

6. H. G. Barrow and J. M. Tenenbaum, "Reconstructing Smooth Surfaces from Partial, Noisy Information," Proc. ARPA Image Understanding Workshop, U.S.C., Los Angeles, California.

7. D. Marr, "Representing Visual Information," in Computer Vision Systems, A. Hanson and E. Riseman, eds. (Academic Press, New York, New York, 1978).

(Figure 1: Line drawing of a three-dimensional scene. Surface and boundary structure are distinctly perceived despite the ambiguity inherent in the imaging process.)
(Figure 2: The three-dimensional conformation of lines depicted in a line drawing is inherently ambiguous. All of the space curves in this figure project into an ellipse in the image plane, but they are not all equally likely interpretations.)
(Figure 3: An abstract three-dimensional surface conveyed by a line drawing. Note that surface orientation is constrained to one degree of freedom along discontinuity boundaries.)
(Figure 4: An interpretation that maximizes uniformity of curvature.)
(Figure 5: Linear variation of N on a sphere.)
(Figure 6: Linear interpolation operators: (a) symmetric, (b) asymmetric.)
 | 
	1980 
 | 
	82 
 | 
					
81 
							 | 
SHAPE ENCODING AND SUBJECTIVE CONTOURS

Mike Brady, W. E. L. Grimson
Artificial Intelligence Laboratory, MIT
and D. J. Langridge
Division of Computing Research, CSIRO, Canberra, ACT, Australia

1. Introduction

Ullman [15] has investigated the shape of subjective contours (see for example [7], [4], [5], [12]). In fact, the work is more generally applicable to other cases of perceptual shape completion, in which the visual system is not constrained by actual physical intensity changes. Examples include patterns formed from dots, incomplete line drawings, and alphabetical characters.

Ullman proposes that subjective contours consist of two circles which meet smoothly and which are tangential to the contrast boundaries from which they originate. The form of the solution derives from a number of premises, one of which Ullman calls "the locality hypothesis". This is "based in part on experimental observations, and partly on a theoretical consideration" ([15] p. 2). The "experimental observation" referred to is the following: suppose that A' is a point near A on the filled-in contour AB as shown in Figure 1. If the process by which AB was constructed is applied to A'B, it is claimed that it generates the portion of AB between A' and B. Let us call this property "extensibility". Ullman argues that extensibility, together with the properties of isotropy, smoothness, and having minimal integral curvature, logically entails a solution consisting of two circles which meet smoothly. In the full version of this paper, we analyze the two-circle solution and formulate the condition for minimal integral curvature. This can be solved by any descent method such as Newton-Raphson. A program has been written which computes the minimum integral curvature two-circle solution given the boundary angles φ, θ, and AB, and which returns as a result the point at which they meet and at which the curvature is discontinuous (the 'knot point'). A tortuous version of this simple proof and program recently appeared in [13]. We then show by example that the two circle solution is not in fact extensible.

(Figure 1: Ullman's extensibility property. If the process by which AB was constructed is applied to A'B, it is claimed that it generates the portion of AB between A' and B.)

Interestingly, Knuth [8], in his discussion of mathematical typography, initially proposes that "most pleasing curves" should be extensible, isotropic, cyclically symmetric (that is, the solution should not depend on the order of the contributing points), smooth, and be generable by a local (in his case four point) process. In fact, Knuth ([8], Lemma 1) shows that these requirements are mutually inconsistent, and he argues that it is most reasonable to drop the assumption of extensibility. He chooses the mathematically convenient process of cubic spline fitting as the basis of his character typography. This is an interesting choice in the light of what follows here.

We conclude from the above that either Ullman's solution is correct, despite the erroneous justification he gives, in which case we should like to be able to offer an alternative derivation, or else it is incorrect, in which case we should like to propose an alternative.

For a longer discussion of the various suggestions regarding shape completion which have appeared in the literature, the reader is referred to [1].
Often one finds that they are singularly lacking in justification beyond the vague claim that they seem to generate "reasonable" shapes. On a more precise note, Pavlidis ([11], chapters 7 and 8) surveys the many versions of polygonal approximation, spline fitting, and Fourier domain descriptors which have been proposed in the considerable pattern recognition literature. Two criteria have tended to dominate the selection of curves for shape completion, namely mathematical tractability and computational efficiency, the latter implicitly assuming some level of computational power on conventional computers. The enormous computational power of the human perceptual system advocates caution in the application of such arguments. Indeed, a somewhat different approach is to base the selection of a class of curves on an analysis of some problem that the human visual system is trying to solve, and to isolate and examine the constraints within which that problem is posed. Several examples of this approach have appeared over the past few years, mainly in the work of Marr and his collaborators (see [9]). The work of Grimson, described briefly in Section 2, is of this sort. Ullman ([16] section 3.3) makes some preliminary comments about the biological feasibility of perceptual computations.

One of the difficulties which surrounds the choice of a class of curves in the specific case of subjective contours is that the differences between alternative choices are often quite small at the angular extent of most examples of subjective contours. This in turn means that the issue is very difficult to resolve by psychophysical experimentation of the sort used by Ullman ([15] page 3).

In [1], we pursue an alternative approach, which was inspired by the second section of Ullman's paper [15]. He develops a local algorithm to compute the two circle solution which minimizes integral absolute curvature. This naturally suggests dispensing with the extensibility assumption entirely, just as Knuth did, and proceeding to find a solution which minimizes some "performance index" related to curvature κ, such as abs(κ) or κ². In order to test this idea, we apply the ideas of modern control theory (see [14], [2]). Since κ and abs(κ) are non-conservative, we consider minimizing κ². We develop the Hamiltonian and show that it leads to a particularly intractable differential equation, involving a Lagrange multiplier and a constant of integration, which almost certainly does not have a closed form analytical solution. One possible line of approach at this juncture would be to base a local computation on one of the known techniques for the numerical solution of ordinary differential equations. Although shooting methods [3] are an obvious method on which to base such a local computation, we have not yet explored the idea.

In order to proceed, we suggest in Section 2 that the shape completion problem considered here is a two-dimensional analogue of the problem of interpolating a three-dimensional surface, for example from the relatively sparse set of disparity points generated by the Marr-Poggio [10] stereo algorithm. This problem has recently been studied by Grimson [6]. He proposes that the performance index should be a semi-norm and shows that many of the obvious performance indices related to curvature, such as κ² and abs(κ), do not have this property. He notes that the quadratic variation, defined by f²ₓₓ + 2f²ₓᵧ + f²ᵧᵧ in two dimensions, is not only a semi-norm but is a close approximation to curvature when the gradient is small (subscripts indicate partial derivatives). This is a reasonable condition to impose in the case of subjective contours. Accordingly, we set up the Hamiltonian for the quadratic variation and show that it leads to a cubic, which reduces to a parabola in the case of equal angles. This is particularly interesting in view of the comments made about Knuth's work above.

2. Minimizing quadratic variation

In order to proceed, we suggest that the shape completion problem considered here is a two-dimensional analogue of the problem of interpolating a three-dimensional surface, for example from the relatively sparse set of disparity points generated by the Marr-Poggio [10] stereo algorithm. More generally, we suggest that the process by which subjective contours are generated results from the "mis-application" to two-dimensional figures of a process whose main purpose is the interpolation of three-dimensional surfaces. This idea requires some justification, which space here does not permit (see [1]). It also gives a different perspective on subjective contours, and leads to some fascinating, testable hypotheses.

The three-dimensional interpolation problem has recently been studied by Grimson [6] in the context of human stereopsis. He observes that a prerequisite to finding the optimal function satisfying a given set of boundary conditions (namely that they should all pass through the given set of sparse points) is that the functions be comparable. Translating this into mathematics, he proposes that the performance index should be a semi-norm. Most importantly, he shows that many of the obvious performance indices related to curvature, such as κ² and abs(κ), do not have this property. He notes that the quadratic variation is not only a semi-norm but is a close approximation to curvature when the gradient is small. This is a reasonable condition to impose in the case of subjective contours, a point which we argue at greater length in [1].

We proceed to set up the Hamiltonian as usual. The boundary conditions are as follows: (1) x = 0, y = 0, w = tan θ; (2) x = t, y = 0, w = −tan φ, where w = dy/dx, and the plant equations are:

y′ = w
w′ = u

while the performance index is ∫ u² dx (the one-dimensional quadratic variation). The Hamiltonian state function is

H = u² + λ₁w + λ₂u

where λ₁ and λ₂ are the Lagrange multipliers. Setting ∂H/∂u equal to zero in the usual way, and solving the costate equations λ₁′ = −∂H/∂y and λ₂′ = −∂H/∂w, leads to

u = λx − μ,

where μ is a constant of integration. This integrates easily to yield the cubic solution

y = λx³/6 − μx²/2 + σx + τ,

where σ and τ are further constants of integration. Inserting the boundary conditions enables us to solve for λ, μ, σ, and τ. We get finally

y = (x³/t²)(tan θ − tan φ) + (x²/t)(tan φ − 2 tan θ) + x tan θ.

In Figure 2, we showed Ullman's solution for the case θ = 30°, φ = 20°. In Figure 3 we show the curve generated by our method, and in Figure 4 we overlay the two. Clearly, the difference between the solutions is quite small at the angular extent shown in the figures. (Further examples are given in [1].) As we pointed out in the Introduction, this limits the usefulness of the kind of psychophysical experimentation used by Ullman to decide which solution is adopted by humans.

In case the angles φ and θ are equal, the cubic term is zero and the solution reduces to a parabola, namely

y = −(x²/t) tan θ + x tan θ.

The apex is at (t/2, (t/4) tan θ) and the focal length is t/(4 tan θ). Hence the focus is below the line AB so long as θ < 45°, which is normally the case for subjective contours.
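The closed-form solution is easy to exercise numerically; the sketch below (ours, not part of the paper) evaluates the cubic for the Figure 2 boundary angles, checks the endpoint slopes, and confirms the equal-angle reduction to the parabola:

```python
import numpy as np

def completion_curve(x, t, theta, phi):
    """Cubic of Section 2: minimizes the 1-D quadratic variation (integral
    of y''^2) subject to y(0)=y(t)=0, y'(0)=tan(theta), y'(t)=-tan(phi)."""
    a, b = np.tan(theta), np.tan(phi)
    return (x**3 / t**2) * (a - b) + (x**2 / t) * (b - 2 * a) + x * a

t, theta, phi = 1.0, np.deg2rad(30.0), np.deg2rad(20.0)   # Figure 2/3 case
x = np.linspace(0.0, t, 1001)
y = completion_curve(x, t, theta, phi)

# Check the boundary slopes by finite differences.
print((y[1] - y[0]) / (x[1] - x[0]), np.tan(theta))       # ~ tan(30 deg)
print((y[-1] - y[-2]) / (x[-1] - x[-2]), -np.tan(phi))    # ~ -tan(20 deg)

# Equal angles: the cubic term vanishes and the curve is a parabola.
y_eq = completion_curve(x, t, theta, theta)
assert np.allclose(y_eq, -(x**2 / t) * np.tan(theta) + x * np.tan(theta))
```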
In the case θ = φ, it is straightforward to compute the difference between the parabola generated by our method and the circle predicted by Ullman. Using the approximation (1 - ε)^(1/2) ≈ 1 - ε/2 for the circular arc, this reduces to an expression in x tan θ and x² tan³ θ / t. The difference between the solutions is essentially given by the second term, whose maximum is at x = t/2. Dividing by the horizontal extent t of the curve, the relative difference is of order tan³ θ. For θ < 1 this is negligible.

3. Acknowledgements

This report describes research done in part at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643. The authors would like to thank the following people for valuable discussions at various stages of the research described here: John Greenman, Pat Hayes, David Marr, Slava Prazdny, Chris Rowbury, Robin Stanton, and Shimon Ullman.

* This paper will appear as MIT AI memo 582. Here intelligibility and content have been sacrificed in the interests of strict page limit brevity.

4. References

[1] Brady, Grimson, and Langridge, "Shape encoding and subjective contours," (1980) to appear.
[2] Bryson and Ho, Applied Optimal Control, Ginn, Waltham MA, 1969.
[3] Conte and de Boor, Elementary Numerical Analysis, McGraw-Hill, Tokyo, 1972.
[4] Coren, "Subjective contours and apparent depth," Psychol. Review 79 (1972), 359-367.
[5] Frisby and Clatworthy, "Illusory contours: curious cases of simultaneous brightness contrast?" Perception 4 (1975), 349-357.
[6] Grimson, Computing Shape Using a Theory of Human Stereo Vision, Ph.D. Thesis, Department of Mathematics, MIT, 1980.
[7] Kanizsa, "Subjective contours," Sci. Amer. 234 (1976), 48-52.
[8] Knuth, "Mathematical typography," Bull. Amer. Math. Soc. (new series) 1 (1979), 337-372.
[9] Marr, Vision, Freeman, San Francisco, 1980.
[10] Marr and Poggio, "A theory of human stereo vision," Proc. R. Soc. Lond. B 204 (1979), 301-328.
[11] Pavlidis, Structural Pattern Recognition, Springer Verlag, Berlin, 1977.
[12] Rowbury, Apparent Contours, Univ. of Essex, UK, 1978.
[13] Rutkowski, "Shape completion," Computer Graphics and Image Processing 9 (1979), 89-101.
[14] Schultz and Melsa, State Functions and Linear Control Systems, McGraw-Hill, New York, 1967.
[15] Ullman, "Filling the gaps: The shape of subjective contours and a model for their generation," Biol. Cyb. 25 (1976), 1-6.
[16] Ullman, "Relaxation and constrained optimisation by local processes," Computer Graphics and Image Processing 10 (1979), 115-125.

Figure 2. Ullman's solution for the case of boundary angles θ = 30°, φ = 20°.

Figure 3. The solution generated by the method proposed in this paper.
The boundary conditions are the same as those of Figure 2.

Figure 4. The solutions of Figure 2 and Figure 3 are overlaid to demonstrate the difference between them.
 | 
	1980 
 | 
	83 
 | 
					
82 
							 | 
A STATISTICAL TECHNIQUE FOR RECOVERING SURFACE ORIENTATION FROM TEXTURE IN NATURAL IMAGERY

Andrew P. Witkin
Artificial Intelligence Center
SRI International, Menlo Park, CA 94025

ABSTRACT

A statistical method is reported for inferring the shape and orientation of irregularly marked surfaces using image geometry. The basis for solving this problem lies in an understanding of projective geometry, coupled with simple statistical models of the contour generating process. This approach is first applied to the special case of surfaces known to be planar. The distortion of contour shape imposed by projection is treated as a signal to be estimated, and variations of non-projective origin are treated as noise. The resulting method is next extended to the estimation of curved surfaces, and applied successfully to natural images. The statistical estimation strategy is then experimentally compared to human perception of orientation: human observers' judgments of tilt correspond closely to the estimates produced by the planar strategy.

I INTRODUCTION

Projective geometry lawfully relates the shape of a surface marking, the orientation of the surface on which the marking lies, and the shape of the marking's projection in the image: given any two, the third can usually be recovered. This paper addresses the problem of recovering the shape and orientation of an irregularly marked surface from the shapes of the projected markings. Of the three constituents of the projective relation, one--projected shape--is given, and another--surface orientation--is to be recovered. Since two of the constituents must be known to recover the third, the problem has no solution unless something is assumed about the unprojected shapes of the surface markings. For instance, if those shapes were known exactly, recovering surface orientation would usually be a straightforward exercise in projective geometry.

More interesting and more general is the case of a surface about which nothing specific is known in advance. To recover surface orientation in this case, some assumption must be made about the geometry of surface markings that is general enough to apply to a broad range of surfaces, powerful enough to determine a solution for surface orientation, and true enough to determine the right solution.

To meet these requirements, the inference of surface shape will be treated as a problem of statistical estimation, combining constraints from projective geometry with simple statistical models of the processes by which surface markings are formed. The distortion imposed on shapes by projection will be treated, quite literally, as a signal, and the shapes themselves, as noise. Both the "signal" and the "noise" contribute to the geometry of the image, and statistical models of the noise permit the projective component to be isolated.

II ESTIMATING THE ORIENTATION OF PLANAR SURFACES

The estimation problem will first be considered subject to the artificial restriction that the surface is known to be planar.
While not realistic, this limited case provides the groundwork from which more general methods will be developed.

The shape of a surface marking is the shape of its bounding curve. The shapes of curves are naturally described by tangent direction as a function of position. Since tangent direction in the image is subject to projective distortion, this representation is suitable for estimating projective distortion. The mapping between a tangent direction on a surface and the corresponding direction in the image is readily expressed as a function of surface orientation, by means of simple projective geometry.

By examining a region of the image, a distribution of projected tangent directions may be collected. The form of this distribution depends in part on the distribution of directions over the corresponding surface region, but undergoes systematic distortion depending on the orientation of the surface: the projection of an inclined shape is foreshortened, i.e. compressed in the direction of steepest inclination (the tilt direction). The amount of compression varies with the angle between the image plane and the plane of the shape (the slant angle). The effect of this distortion on the distribution of tangent directions may be illustrated by the simple case of a circular marking. The distribution of tangent directions on the original circle, measured by arc length, is uniform. The orthographic projection of a circle is an ellipse, whose minor axis lies parallel to the tilt direction, and whose eccentricity varies with the slant angle. The distribution of tangent directions on an ellipse is not uniform, but assumes minima and maxima in the directions of the minor and major axes respectively. The degree of nonuniformity increases with the eccentricity of the ellipse.

In other words, projection systematically "pushes" the image tangents away from the direction of tilt. The greater the slant angle, the more the tangents are "pushed." This systematic distortion of the tangent distribution is the projective "signal" that encodes surface orientation. Its "phase" (the direction toward which the tangents gravitate) varies with surface tilt, and its "amplitude" (the amount of distortion) with surface slant. To estimate the phase and amplitude of the projective signal is to estimate surface orientation.

Statistical estimation of the projective component requires a model of the "noise", i.e. of the expected distribution of tangent directions prior to projective distortion. With no prior knowledge of the surface, there is no reason to expect any one tangent direction to be more likely than any other. In other words, it is natural to assume that all tangent directions on the surface are equally likely. (Note that the amount of projective distortion increases with surface slant, effectively increasing the signal-to-noise ratio. Therefore, as slant increases, the exact form assumed for the noise distribution becomes less critical.)
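The "pushing" effect is easy to reproduce numerically. The following sketch makes the stated assumptions explicit: tangent directions uniform on the surface, and orthographic projection compressing the component along the tilt direction by cos(slant). All names are ours and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    slant, tilt = np.deg2rad(60.0), 0.0       # tilt taken along the image x axis

    beta = rng.uniform(0.0, np.pi, 10000)     # uniform surface tangent directions
    x = np.cos(beta) * np.cos(slant)          # component along tilt is compressed
    yc = np.sin(beta)
    alpha = np.mod(np.arctan2(yc, x), np.pi)  # projected image tangent directions

    hist, _ = np.histogram(alpha, bins=6, range=(0.0, np.pi))
    print(hist)  # counts pile up in the middle bins, i.e. away from the tilt direction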
Together with the geometric relation, this simple statistical model defines a probability density function for surface orientation, given a set of tangent directions measured in the image. The surface orientation value at which this function assumes a maximum is the maximum likelihood estimate for surface orientation, given the model. The integral of the function over a range of surface orientations is the probability that the actual orientation lies in that range.

Conceptually, the distribution of tangent directions in the image can be projected onto an arbitrarily oriented planar surface. We seek that orientation for which the distribution of projected tangent directions is most nearly isotropic.

The estimator was first applied to geographic contours: projections of coastlines drawn from a digitized world map. This choice of data circumvents the problem of contour detection, and allows the actual orientation to be precisely controlled. The overall accord between estimated and actual orientation was excellent, and, equally important, the confidence measures generated by the estimator effectively distinguished the accurate estimates from the inaccurate ones.

The same technique was then applied to natural images, using zero-crossing contours in the convolution of the image with the Laplacian of a Gaussian [1], [2]. While the veridical orientations were not independently measured, the maximum likelihood estimates are in close accord with the perceived orientations (see Figure 1).

Figure 1 - Two photographs of roughly planar surfaces, and the orientation estimates obtained from them. The estimated orientations are indicated by ellipses, representing the projected appearance a circle lying on the surface would have if the maximum likelihood estimate were correct.

III EXTENSION TO CURVED SURFACES

To apply the methods developed in the planar case to curved surfaces without additional assumptions, it would be necessary to obtain at each point in the image a measure of the distribution of tangent directions. But such a local measure is never available, because the density of the contour data is limited. On the other hand, a distribution can be taken at each point of the data in a surrounding region, as small as possible, but large enough to provide a reasonable sample. This spatially extended distribution may be represented as a three dimensional convolution of the image data with a summation function.

To understand how such a distribution should be applied to estimate surface disposition, it is helpful to distinguish the intuitive, perceptual notion of surface orientation from the strict definition of differential geometry. It may be argued that surface orientation is not a unique property of the surface, but must be regarded as a function of scale. The scale at which orientation is described corresponds to the spatial extent over which it is measured. Thus, by measuring orientation over a large extent, the surface is described at a coarse scale. The scale at which the surface is estimated therefore depends on the spatial extent over which the distribution is computed. Since that extent must be sufficiently large compared to the density of the data, the density effectively limits the spatial resolution of the estimate.
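Returning to the planar case, a minimal sketch of a maximum-likelihood estimator in this spirit is given below. It assumes the back-projection relation tan(alpha - tilt) = tan(beta)/cos(slant) with the surface directions beta uniform, from which the image density of tangent directions follows as |d beta / d alpha|; the paper's exact likelihood may differ in detail, and all names are illustrative.

    import numpy as np

    def log_likelihood(alphas, slant, tilt):
        a = alphas - tilt
        c = np.cos(slant)
        # |d beta / d alpha|: image density of tangents for uniform beta
        dens = c / (np.cos(a) ** 2 + (c * np.sin(a)) ** 2)
        return np.sum(np.log(dens / np.pi))

    def estimate(alphas, n_s=64, n_t=128):
        slants = np.linspace(0.01, np.pi / 2 - 0.01, n_s)
        tilts = np.linspace(0.0, np.pi, n_t, endpoint=False)
        ll = np.array([[log_likelihood(alphas, s, t) for t in tilts]
                       for s in slants])
        i, j = np.unravel_index(np.argmax(ll), ll.shape)
        return slants[i], tilts[j]

Applied to image tangents generated as in the previous sketch, the grid search recovers the simulated slant and tilt up to the grid resolution, and seeking the maximum of this likelihood is one concrete way of asking for the orientation whose back-projected tangent distribution is most nearly isotropic.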
On this view of orientation as a scale-dependent property, close parallels can be drawn to Horn's [3] method for inferring shape from shading. The local estimator for orientation is a geometric analogue to the photometric reflectivity function.

The strategy was implemented, and applied to natural images. Contours were extracted as in the planar case, and the spatially extended distribution approximated by a series of two-dimensional convolutions with a "pillbox" mask. The estimated surfaces were in close accord with those perceived by the human observer (see Figure 2).

IV RELATION TO HUMAN PERCEPTION

A psychophysical experiment was performed to examine the relation between human observers' judgments of the orientations of curves perceived as planar, and the estimates obtained from the estimation strategy outlined above.

A series of "random" curves were generated using a function with pseudorandom parameters. Although such curves have no "real" orientation outside the picture plane, they often appear inclined in space. Observers' judgments of orientation were obtained by matching to a simple probe shape. The judgments of tilt (direction of steepest descent from the viewer) were highly consistent across observers, while the slant judgments (rate of descent) were much more variable.

Orientation estimates for the same shapes were computed using the planar estimator, and these estimates proved to be in close accord with those of the human observers, although the shapes had no "real" orientation.

While no conclusion may be drawn about the mechanism by which human observers judge orientation from contours, or about the measures they take on the image, this result provides evidence that the human strategy, and the one developed on geometric and statistical grounds, are at the least close computational relatives.

REFERENCES

1. Marr, D. & Poggio, T., "A computational theory of human stereo vision," Proc. Roy. Soc. Lond. B, Vol. 204, pp. 301-328 (1979)
2. Marr, D. & Hildreth, E., "A theory of edge detection," MIT AI Memo 518 (1979)
3. Horn, B. K. P., "Understanding image intensities," Artificial Intelligence, Vol. 8, pp. 201-231 (1977)

Figure 2 - A complex image, and the estimate obtained. The ellipses represent the appearance the estimated surface would have were it covered with circles. Note that the estimate correctly distinguishes the highly slanted foreground from the nearly frontal background. The upward pitch of the right foreground is also detected.
 | 
	1980 
 | 
	84 
 | 
					
83 
							 | 
INFORMATION NEEDED TO LABEL A SCENE

Eugene C. Freuder
Dept. of Mathematics and Computer Science
University of New Hampshire
Durham, NH 03824

ABSTRACT

I analyze the information content of scene labels and provide a measure for the complexity of line drawings. The Huffman-Clowes label set is found to contain surprisingly little additional information as compared to more basic label sets. The complexity of a line drawing is measured in terms of the amount of local labeling required to determine global labeling. A bound is obtained on the number of lines which must be labeled before a full labeling of a line drawing is uniquely determined. I present an algorithm which combines local sensory probing with knowledge of labeling constraints to proceed directly to a labeling analysis of a given scene.

I INTRODUCTION

Huffman [4] and Clowes [2] developed (independently) a basic labeling scheme for blocks world picture graphs. Given a basic labeling set: + (convex), - (concave), → (occluding, region on arrowhead side), and a standard set of simplifying restrictions on scene content and viewpoint, the physically realizable junction labelings are just those shown in the last column of Fig. 1.

Waltz [5] explored the richer label sets obtained by including additional information in the labels (and loosening the scene restrictions). In this paper I explore weaker, cruder label sets. I identify three stages of scene labels, of which the standard set is the third and richest. I then explore the increase in information content embodied in each successive stage. Rather surprisingly I find that there is very little real information gain as we move from stage to stage. A first stage scene labeling may well determine a unique second stage labeling. If it does not, it will come quite close to doing so, and I am able to identify precisely the additional information that is necessary and sufficient to complete the second stage labeling. Similar results are obtained for the transition from Stage II to Stage III.

These results supply some theoretical insight into the nature and strength of the basic line labels and physical constraints.

I go on in Section III to analyze the amount of information required to obtain a Stage I labeling. The information is measured in terms of the number of line labels which must be determined in order for labeling constraints to unambiguously imply a unique labeling of the entire line drawing. In practice the required line labels can be obtained by local sensory probing of the physical scene. I obtain a bound on the number of labels required to imply a full labeling of an arbitrary line drawing. Finally I discuss an algorithm that effectively combines sensory probing for labels with knowledge of labeling constraints.

Fig. 1. Junction labelings at each stage (Stage I, Stage II, Stage III) for L, fork, arrow, and T junctions.
The algorithm proceeds directly to a full labeling, reflecting a presented physical scene, while requiring neither a complete sensory scan for every line label nor a consideration of all possible physical realizations of the line drawing.

II LABELING STAGES

The standard label set is a refinement of a cruder categorization of lines as representing a physical edge of either one or two visible faces. I consider three labeling stages. In Stage I the only labels are the numbers 1 and 2, indicating the number of associated faces. In Stage II, the number 1 is replaced by occlusion labels (→) indicating which side the single face is on. They will be termed Stage II labels. A Stage II labeling will be one that utilizes → and 2 labels. In Stage III the number 2 is replaced by + and - as the distinction is made between convex and concave edges. The labels + and - will be termed Stage III labels. A Stage III labeling is one that utilizes +, - and → labels.

At Stage I there are only 9 distinct junction labels. At Stage II the L labelings are differentiated, at Stage III the fork and arrow labels are differentiated. Fig. 1 shows the physically realizable labelings at each stage. T labelings are added at each stage, but notice that fork and arrow labelings do not increase in moving from Stage I to Stage II, and the number of L labelings does not increase in moving from Stage II to Stage III. Thus we really do not know any more about forks and arrows at Stage II than we do at Stage I, nor more about L's at Stage III than at Stage II. Once we have labelled a fork 2,1,1 for example, we really know already that it can be labelled 2,→,→.

My interest in Stage I labeling was aroused by the work of Chakravarty [1] who utilized information about the number of regions associated with lines and junctions, in connection with a more elaborate labeling scheme.

A. The Picture Graph

A blocks world line drawing is, of course, a graph. For the purposes of our analysis we will modify picture graphs by "separating" T junctions, removing T junctions by pulling the shafts away from the crossbars. After labeling a scene separated in this fashion the T junction labelings are easily recovered by rejoining the T junctions to form the original scene. The separation reflects the fact that information does not pass through a T junction, and will permit us to identify independent segments of the scene as connected components of the (separated) picture graph. The segments are independent in the sense that each can be labeled independently; a label in one segment can have no bearing on possible labelings for the other segment.

The connected components of a graph are the maximal connected subgraphs, where a graph is connected if there is a chain of edges between any two vertices.

B. Stage I to Stage II

Theorem 1. Given a picture graph with a Stage I labeling (separated at T junctions). Further separate the graph by separating L junctions that have a 2 label on one line, i.e. pulling the two sides of each such L apart to remove the junction. The Stage I labeling uniquely determines a Stage II labeling on all connected components of the resulting graph except those consisting solely of 1-labeled lines, none of which is a crossbar of a T. A unique labeling for the exceptions may be determined by specifying the Stage II label of a single line in each such component.

For proofs of the theorems in this paper see [3].
C. Stage II to Stage III

Theorem 2. Given a picture graph with a Stage II labeling (separated at T junctions). The Stage II labeling uniquely determines a Stage III labeling on all connected components except those consisting solely of 2-labeled lines. A unique labeling for the exceptions may be determined by specifying the Stage III label of a single line in each component.

III OBTAINING A STAGE I LABELING

Given labels for a sufficient number of lines the physical constraints on the labeling process will imply a unique labeling for the remainder of the scene. A bound on this "sufficient number" will provide a bound on the complexity and potential ambiguity of the picture graph, and on the effort required to label it.

The limitations on physically realizable labelings summarized in Fig. 1 easily give rise to a set of "implication rules" for completing a junction labeling given the labelings of one or two of its lines. These rules are listed in Fig. 2. Note first that labels for shafts of arrows and crossbars of T's can be derived immediately (2's and 1's respectively), without any previous labels. Labels for two of the lines of a fork or arrow imply the third. (Thus, in effect, a single line label, other than for the shaft, determines an arrow labeling.)

Fig. 2. Implication rules.

I will say that the labeling of a subset of picture graph lines implies the labeling of the entire graph, if repeated application of the implication rules of Fig. 2, starting with the given subset, leads to a complete, unique labeling of the graph. The labeling of a subset of lines is sufficient if the labeling implies the labeling of the graph. A subset of lines is sufficient if any consistent labeling of the subset is sufficient. The minimal number of lines in a sufficient subset will be called the sufficient number of the picture graph. The sufficient number must, of course, be determined relative to a specified label set. We will be dealing with sufficiency for Stage I labeling in this paper.

In Section A I obtain an upper bound on the sufficient number of a picture graph. In [3] I discuss means of obtaining sufficient sets of lines (and thus, bounds on sufficient numbers) for individual graphs; I modify one of these methods to provide a test for the sufficiency of a set of lines or line labels, and a labeling algorithm. This algorithm is discussed in Section B.

Note that deduction can "propagate": we may deduce a label for a line which in turn permits deduction of the label for an adjoining line, etc. There is room for heuristic tuning in choosing which lines to probe for labels.
Fig. 3 demonstrates this labeling algorithm on a simple scene. The fortuitous choice of a 2-labeled L line to probe for labeling permits us to get away with a single sensory probe. (Actually the choice need not be entirely fortuitous. It is reasonable to suspect that the "bottom lines" will be 2-labeled.)

A. A Bound on the Sufficient Number of a Picture Graph

Theorem 3. The sufficient number of a picture graph is no more than the number of forks and arrows plus the number of connected components of the graph separated at T's and L's.

The bound provided by the theorem is tight, in the sense that I can exhibit a picture graph with sufficient number equal to the given bound. A simple triangular figure may consist of all occluding lines, or one of the lines may have a 2 label. Knowing the labeling of two lines is not sufficient to imply the label of the third for all possible labelings. Thus, the sufficient number is three, which equals the number of forks and arrows (0) plus the number of connected components in the graph separated at T's and L's (3). (In general the sufficient number of a picture graph will be considerably less than the bound provided by the theorem.)

B. A Labeling Algorithm

Algorithm:
1. Obtain any labels that can be deduced by repeated application of the implication rules of Figure 2. (Note arrow shafts and T crossbars can be labeled immediately.)
2. While unlabeled lines remain:
2.1. Pick an unlabeled line and "probe" the physical scene to determine its label. (This information could be obtained from visual, tactile, or range finding data.)
2.2. Deduce any further labels that can be obtained by repeated applications of the implication rules.

Fig. 3. Labeling algorithm.

REFERENCES

[1] Chakravarty, I. A generalized line and junction labeling scheme with applications to scene analysis, IEEE Trans. PAMI-1 (1979) 202-205.
[2] Clowes, M.B. On seeing things, Artificial Intelligence 2 (1971) 79-116.
[3] Freuder, E. On the Knowledge Required to Label a Picture Graph. Artificial Intelligence, in press.
[4] Huffman, D.A. Impossible objects as nonsense sentences, in: Meltzer, B. and Michie, D. (Eds.), Machine Intelligence 6 (Edinburgh Univ. Press, Edinburgh, 1971) 295-324.
[5] Waltz, D. Understanding line drawings of scenes with shadows, in: Winston, P.H. (Ed.), The Psychology of Computer Vision (McGraw-Hill, New York, 1975) 19-91.
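A minimal sketch of the probe-and-propagate loop of the algorithm above is given below in Python. The junction tables of Figs. 1 and 2 are abstracted into caller-supplied sets of allowed label tuples, and the probe is an oracle standing in for visual, tactile, or range-finding data; the names, the data layout, and the example tuples are ours, not the paper's.

    def propagate(labels, junctions):
        # steps 1 and 2.2: apply the implication rules until nothing changes;
        # a label is forced whenever every still-possible junction labeling
        # agrees on that line
        changed = True
        while changed:
            changed = False
            for lines, allowed in junctions:
                consistent = [tpl for tpl in allowed
                              if all(labels.get(l, v) == v
                                     for l, v in zip(lines, tpl))]
                for i, line in enumerate(lines):
                    vals = {tpl[i] for tpl in consistent}
                    if line not in labels and len(vals) == 1:
                        labels[line] = vals.pop()
                        changed = True
        return labels

    def label_scene(all_lines, junctions, probe):
        labels = propagate({}, junctions)     # step 1: free deductions
        for line in all_lines:                # step 2
            if line not in labels:
                labels[line] = probe(line)    # step 2.1: sensory probe
                propagate(labels, junctions)  # step 2.2: re-deduce
        return labels

    # illustrative dummy junction, not the paper's Fig. 1 table: an arrow
    # whose allowed Stage I labelings all carry 2 on the shaft, so step 1
    # labels the shaft with no probing at all
    arrow = (("barb1", "shaft", "barb2"), {(1, 2, 1), (2, 2, 1), (1, 2, 2)})

Because a forced label can enable further deductions at neighboring junctions, the inner loop runs to quiescence; this is exactly the propagation noted in Section III.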
 | 
	1980 
 | 
	85 
 | 
					
84 
							 | 
INTERPRETIVE VISION AND RESTRICTION GRAPHS

Rodney A. Brooks and Thomas O. Binford
Artificial Intelligence Laboratory, Computer Science Department
Stanford University, Stanford, California 94305

ABSTRACT

We describe an approach to image interpretation which uses a dynamically determined interaction of prediction and observation. We provide a representational mechanism, built on our geometric modeling scheme, which facilitates the computational processes necessary for image interpretation. The mechanism implements generic object classes and specializations of models, enables case analysis in reasoning about incompletely specified situations, and manages multiple hypothesized instantiations of modeled objects in a single image. It is based on restriction nodes and quantified variables. A natural partial order on restriction nodes can be defined by comparing the satisfiability of their constraints. Nodes are arranged in an incomplete restriction graph whose arcs represent relations of nodes under the partial order. Predictions are matched to descriptions by finding maximal isomorphic subgraphs of a prediction graph and an observation graph [8], subject to a naturally associated infimum of restriction nodes being satisfiable. In this manner constraints implied by local two dimensional matches of image features to predicted features are propagated back to the three dimensional model, enforcing global consistency.

I INTRODUCTION

A. Image Interpretation

A descriptive process is one which takes an image and produces a description of image features and their relations found within that image. A predictive process is one which uses models of objects expected in an image to predict what features and their relations will be present in the image.

We have constructed a working model-based vision system called ACRONYM [8]. The interaction between prediction and description is stronger than in previous systems. A central aspect of ACRONYM is that it interprets images at the level of three dimensional models.

Here we describe a new layer of representation built on the ACRONYM system. We briefly describe a mechanism for reasoning about incompletely specified geometric situations. All this has been implemented. We describe how matching of two dimensional features can be mapped back to constrain geometric uncertainties in three dimensions in order to obtain a three dimensional understanding of an image. This aspect of the new system is still being implemented (June 1980).

B. The ACRONYM System

ACRONYM itself covers a wider range of tasks than vision (see [4], [10]). The reader is referred to [8] for a complete overview of the system and a description of geometric models based on generalized cones as the nodes of a subpart tree. We give here a brief overview of ACRONYM's vision related modules.
A user interacts with the system via a high level modeling language and an interactive editor to provide three dimensional descriptions of objects and object classes which are viewpoint independent, and to partially model general scenes. The result is the object graph. A library provides useful prototypes and a graphics module provides valuable feedback to the user.

A rule-based module, the predictor and planner, takes models of objects and scenes and produces the prediction graph, which is a prediction of the appearance of objects expected within the scene. It predicts observables in the image over the expected range of variations. It provides a plan, or instructions, for lower level descriptive processes and the matcher to find instances of the objects within the image. The process of prediction and planning is repeated as first coarse interpretations are found, more predictions are carried out, and finer, less ambiguous interpretations are produced.

The descriptive aspect of ACRONYM is currently provided by the edge mapper [6], which describes monocular pictures as primitive shape elements (ribbons) and their spatial relationships. It is goal-directed and is thus programmed by the predictor and planner. The observation graph is the result. As with the predictor and planner, the edge mapper may be invoked many times during the course of an interpretation as finer levels of description become desirable. ACRONYM will incorporate stereo and advanced edge-based description modules (Baker [2] and Arnold and Binford [1]). This will provide three dimensional cues directly.

The matcher interfaces description and prediction. It finds maximal subgraphs of the observation graph isomorphic with subgraphs of the prediction graph, which also meet global consistency requirements. In the new implementation the matching process is mapped back to three dimensional models. Such higher level understanding ensures global consistency and enables deductions about three dimensional structures from a single monocular image. The matcher re-invokes the predictor and planner and the edge mapper to extend the two graphs which it is matching in the context of successfully completed submatches. This provides direction to both prediction and description and reduces dramatically the number of possibilities each must consider.

Lowe [9] has implemented a system which determines parameters of models including articulations from correspondences of a match. This module provides predictions from a tentative interpretation to guide detailed verification.

We have concentrated on two classes of images in our development work: aerial images of airport scenes, and scenes of industrial workstations. Together they provide a wide range of challenges and force us to look for general methods and solutions to problems, since they are sufficiently dissimilar that special purpose solutions will fail on one of the two.

II REPRESENTATION
A. Requirements

We have chosen to describe the world to ACRONYM in terms of three dimensional models of objects, and their expected three dimensional spatial relationships ([8] contains details). The representation is a coarse to fine description of objects as subpart trees of generalized cones [3]. In this paper we are concerned with ways to represent variations within models, and how to keep track of multiple inconsistent instances which may arise during image interpretation. Thus what follows does not rely on generalized cones as the representational primitive.

In structured situations the exact dimensions of objects may be known in advance, but their orientation may be uncertain. For instance it may be known that a bin is full of a particular type of part, but the parts may have been dropped in with arbitrary orientations. In an industrial automation domain such descriptions may already be available from a Computer Aided Design data-base. In less structured environments, not even the dimensions of objects will be known exactly. For instance for some aerial image interpretation tasks it is desirable to represent both the class of wide bodied passenger jet aircraft, and particular types of aircraft such as a Boeing 747, and even more particular models such as a 747-B. Thus it is necessary to represent constrained variations in shape, size and even structure (e.g. different aircraft have different engine configurations), and constrained variations in spatial relations between objects. Consider also that an F-111 aircraft can have variable wing geometry. A manipulator arm has even more complex variations in spatial relations between its subparts.

The appearance of an object may change qualitatively, rather than merely quantitatively, over the allowed variations in its size, shape, structure or orientation relative to the camera. Thus it will often be necessary to carry out some case analysis in prediction of appearance, and put further constraints on models, and to keep such mutually conflicting hypotheses in the prediction graph, until such time as they can be confirmed or denied by descriptive processes. The prediction graph represents cases as combinations of components instead of explicit enumeration of all cases.

As interpretation of an image proceeds, constraints on the exact values of variations within a model will be derived from the matches made in the image. However there may be multiple instances of a modeled object within the image. Parts on a conveyor belt will have different orientations. Aircraft at a passenger terminal will have different lengths and wing spans. Thus multiple instances of objects must be representable in the interpretation graph.

B. Representing Variations

In the following discussion we will consider the problem of modeling both the generic class of wide-bodied passenger jet aircraft, and specific wide-bodied passenger jet aircraft, such as the Boeing 747, Lockheed L-1011, McDonnell-Douglas DC-10 and the Airbus Consortium A-300.
We will then discuss a wider situation where such aircraft are on runways and taxiways, and there are undetermined variables in the camera model.

We need to represent both variations in size (e.g. different aircraft subclasses will have different fuselage lengths), and variations in structure (e.g. different aircraft subclasses will have different engine configurations). In both cases we want to represent the range of allowable variations. We consider the broader problem of quantification of sets. Furthermore, there will sometimes be interdependencies between these variations (e.g. a scaling between fuselage length and wing span).

Node: FUSELAGE-CONE
  NAME: SIMPLE-CONE
  SPINE: Z0005
  SWEEPING-RULE: CONSTANT-SWEEPING-RULE
  CROSS-SECTION: Z0004

Node: Z0005
  NAME: SPINE
  TYPE: STRAIGHT
  LENGTH: FUSELAGE-LENGTH

Node: CONSTANT-SWEEPING-RULE
  NAME: SWEEPING-RULE
  TYPE: CONSTANT

Node: Z0004
  NAME: CROSS-SECTION
  TYPE: CIRCLE
  RADIUS: FUSELAGE-RADIUS

Figure 1. Generalized cone representation of fuselage.

The primitive representational mechanism used in ACRONYM is that of units and slots. Objects are represented by units, as are generalized cones, cross-sections, sweeping-rules, spines, rotations and translations, to name the more important ones. Figure 1 shows four units with their slots and fillers from a particular ACRONYM model. They describe the generalized cone representing the fuselage of the generic wide-bodied passenger jet aircraft. Note that units are referred to as "Nodes" because they are nodes of the object graph of Figure 1. The NAME slot is a distinguished slot which all units possess. It describes the entity represented by the unit and corresponds to the SELF slot of KRL units [5]. Units identified by "Z" followed by a four digit number are those which were given no explicit identifier by the user who modeled the object. The modeling language parser has generated unique identifiers for them.
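A minimal sketch of the unit/slot mechanism of Figure 1, with quantifiers appearing simply as symbolic fillers whose values are pinned down by constraints elsewhere; this is illustrative only, not ACRONYM's implementation, and the class name is ours.

    class Unit:
        """A unit is a bundle of named slots; NAME is the distinguished slot."""
        def __init__(self, name, **slots):
            self.slots = {"NAME": name, **slots}
        def __getitem__(self, slot):
            return self.slots[slot]

    # the four units of Figure 1; FUSELAGE-LENGTH and FUSELAGE-RADIUS are
    # quantifiers, i.e. symbolic fillers rather than numbers
    FUSELAGE_CONE = Unit("SIMPLE-CONE", SPINE="Z0005",
                         SWEEPING_RULE="CONSTANT-SWEEPING-RULE",
                         CROSS_SECTION="Z0004")
    Z0005 = Unit("SPINE", TYPE="STRAIGHT", LENGTH="FUSELAGE-LENGTH")
    Z0004 = Unit("CROSS-SECTION", TYPE="CIRCLE", RADIUS="FUSELAGE-RADIUS")

    print(Z0005["LENGTH"])   # -> FUSELAGE-LENGTH: a quantifier, not a value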
The    following    constraints    might    be imposed    upon    FUSELAGE-LENGTH    and    FUSELAGE-RADIUS    when    modeling    the class of wide-bodied passenger jet aircraft:    Node:    JET-AIRCRAFT    NAME 8    OBJECT    SUBPARTS:    (STARBOARD-W    I NG PORT-W I NG    FUSELAGE    1    QUANTIFIERS:    (F-ENG-QUANT ENGINE-LENGTH    ENGINE-RADIUS    WING-ATTACHMENT    ENG-OUT    ONE-WING-SPAN    WING-SWEEP-BACK    WING-LENGTH    WING-RATIO    WING-WIDTH WING-THICK)    Node : STARBOARD-W    I NG    (s    40.0    FUSELAGE-LENGTH)    (s FUSELAGE-LENGTH    70.0)    fs    2.5    FUSELAGE-RADIUS)    (s FUSELAGE-RADIUS    3.5)    (s    15.6    (QUOTIENT    FUSELAGE-LENGTH    FUSELAGE-RADIUS))    NAME :    OBJECT    SUBPARTS:    ( (SP-DES    F-ENG-QUANT    e    STARBOARD-ENGINE))    CONE-DESCRIPTOR:    STARBOARD-WING-CONE    These constrain the range of allowable length and radius,    and express    a lower bound on the ratio of length to radius.    Quantifiers    express allowable variations in dimensions of    objects    and    in the structure    of objects.    Figure 2 gives the    complete    subpart    tree for a model of generic wide-bodied    passenger    jet aircraft.    For brevity, not all the slots of the    OBJECT    units are shown here. The QUANTIFIERS    slot is    explained    later. The SUBPARTS    slot of an OBJECT unit is    filled with a list of subparts giving the next level of description    of the object. Entries in the list can be simple pointers to other    OBJECT    units (e.g. JET-AIRCRAFT    has three substructures:    STARBOARD-WING,    PORT-WING    and    FUSELAGE).    They can also be more complex such as the single entry for the    subparts    of    STARBOARD-WING,    which    speiifies    a    quantification    of subparts    called STARBOARD-ENGINE.    In    this case the quantification    is the quantifier F-ENG-QUANT.    Note    that    PORT-WING    has    a    quantification    of    PORT-ENGINES    as subparts,    which is represented    by the    same    quantifier    F-ENG-QUANT.    This explicitly represents    one    aspect    of the symmetry of the aircraft: it has the same    number    of engines attached to each wing. Constraints on this    quantifier    and    on    R-ENG-QUANT,    the number    of rear    engines    might be:    Node:    STARBOARD-ENGINE    NAME I    OBJECT    CONE-DESCRIPTOR8    PORT-ENGINE-CONE    Node : PORT-W I NG    NAME r    OBJECT    SUBPARTS:    ( (SP-DES    F-ENG-QUANT    .    PORT-ENGINE)    1    CONE-DESCRIPTOR:    PORT-WING-CONE    Node:    PORT-ENGINE    NAME :    OBJECT    CONE-DESCRIPTOR:    PORT-ENGINE-CONE    Node:    FUSELAGE    NAME :    OBJECT    SUBPARTS:    (RUDDER    STARBOARD-STABILIZER    PORT-STABILIZER)    QUANTIFIERS:    (STAB-ATTACH    STAB-WIOTH    STAB-THICK    STAB-SPAN    STAB-SWEEP-BACK    STAB-RAT 101    CONE-DESCRIPTOR:    FUSELAGE-CONE    (5 1 F-ENG-QUANT)    ts 2 F-ENG-QUANT)    (s 0 R-ENG-QUANT)    (I    1 R-ENG-QUANTI    (b 3 (PLUS    F-ENG-QUANT    R-ENG-QUANTI)    These    say that there must be either    one or two engines    on each wing, -zero or one at the rear of the aircraft, and if    there are two on each wing then there are zero at the rear.    Symmetry    of size (such as length of the wings) can    likewise be represented    by using the same quantifier as a place    holder in the appropriate    pair of slots.    Node : RUDDER    NAME :    SUBPARTS    :    OBJECT    ( (SP-DES    R-ENG-QUANT    .    
Node: REAR-ENGINE
  NAME: OBJECT
  CONE-DESCRIPTOR: REAR-ENGINE-CONE

Node: STARBOARD-STABILIZER
  NAME: OBJECT
  CONE-DESCRIPTOR: STARBOARD-STABILIZER-CONE

Node: PORT-STABILIZER
  NAME: OBJECT
  CONE-DESCRIPTOR: PORT-STABILIZER-CONE

Figure 2. Subpart tree of generic passenger jet.

Our complete model for a generic wide-bodied passenger jet aircraft has 28 quantifiers describing allowable variations in size and structure.

C. Representing Classes

It should be clear that to model a subclass of wide-bodied passenger jet aircraft we need only provide a different (more restrictive) set of constraints for the quantifiers used in the general model. To model a specific type of aircraft we could force the constraints to be completely specific (e.g. (= FUSELAGE-LENGTH 52.8)). Thus we will not need to distinguish between specialization of the general model to a subclass, or an individual.

Given that subclasses use different sets of constraints, the problem arises of how to represent multiple subclasses simultaneously. We introduce a new type of node to the representation: a restriction node. These are the embodiment of specialization.

A restriction node has a set of constraints associated with it. If a set of values can be found for all the quantifiers mentioned in the constraints such that all the constraints are simultaneously satisfied, then we say the restriction node is satisfiable. A partial order can be defined on restriction nodes by saying that one restriction node is more restrictive than another if its set of sets of satisfying values is a subset of that of the second node.

For the example of the generic wide-bodied passenger jet aircraft the constraints are associated with some restriction node, GENERIC-JET-AIRCRAFT say. To represent the class of 747s a more restrictive node can be included; e.g.:

Node: BOEING-747
  NAME: RESTRICTION
  SUPREMA: (GENERIC-JET-AIRCRAFT)
  TYPE: MODEL-SPECIALIZATION
  CONSTRAINTS: <list of constraints>

It is constructed by taking the constraints associated with the GENERIC-JET-AIRCRAFT restriction node, and merging in additional constraints to specialize to a BOEING-747.

A model is always accessed in the context of a restriction node. Thus when reasoning about the generic class of wide-bodied aircraft, the predictor and planner will access the JET-AIRCRAFT model and base its reasoning on the constraints given by the GENERIC-JET-AIRCRAFT restriction node. When reasoning about Boeing 747s it will base its reasoning about the JET-AIRCRAFT model on the constraints given by the BOEING-747 restriction node. Figure 4 conveys the flavor of viewing the JET-AIRCRAFT through different restriction nodes to see different models. (And in fact, the drawings of the two types of aircraft were produced by ACRONYM from the indicated restriction nodes.) In modeling subclasses, restriction nodes typically form a tree rather than a graph.

Figure 4. Different views of the generic model.
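Restriction nodes can be sketched with constraints simplified to interval bounds on individual quantifiers. ACRONYM's constraints are more general (they may relate several quantifiers, as the ratio constraint above does), but intervals suffice to show satisfiability, the partial order, and the merge that builds a specialization such as BOEING-747; all names below are illustrative.

    class Restriction:
        def __init__(self, bounds):               # {quantifier: (lo, hi)}
            self.bounds = dict(bounds)

        def satisfiable(self):
            return all(lo <= hi for lo, hi in self.bounds.values())

        def infimum(self, other):                  # merge two constraint sets
            merged = dict(self.bounds)
            for q, (lo, hi) in other.bounds.items():
                mlo, mhi = merged.get(q, (float("-inf"), float("inf")))
                merged[q] = (max(mlo, lo), min(mhi, hi))
            return Restriction(merged)

        def more_restrictive_than(self, other):
            # every bound of `other` is implied by a tighter bound here
            return all(q in self.bounds
                       and other.bounds[q][0] <= self.bounds[q][0]
                       and self.bounds[q][1] <= other.bounds[q][1]
                       for q in other.bounds)

    GENERIC = Restriction({"FUSELAGE-LENGTH": (40.0, 70.0),
                           "FUSELAGE-RADIUS": (2.5, 3.5)})
    B747 = GENERIC.infimum(Restriction({"FUSELAGE-LENGTH": (52.8, 52.8)}))
    print(B747.satisfiable(), B747.more_restrictive_than(GENERIC))  # True True

An unsatisfiable infimum (an empty interval for some quantifier) is exactly the acceptance test used during matching in Section III-C.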
D. Representing Spatial Relations

Affixments are coordinate transforms between local coordinate systems of objects. They are comprised of a rotation and a translation.

Sometimes affixments vary over an object class. For instance in the generic wide-bodied passenger jet aircraft model the position along the fuselage at which the wings will be attached will vary with particular types of aircraft. Articulated objects are modeled by variable affixments. Variable affixments can also be useful for modeling spatial relationships between two objects - for instance an aircraft is on a runway.

We represent a vector as a triple (a,b,c) where a, b and c are scalars. We represent a rotation as a pair <v,m> where v is a unit vector, and m a scalar magnitude. An affixment will be written as a pair (r,t) where r is a rotation and t a translation vector. We will use some special vectors also: x̂, ŷ and ẑ. We use * for the composition of rotations, and ⊗ for the application of a rotation to a vector.

In ACRONYM we use the quantifier mechanism to represent affixments which describe a class of coordinate transforms. This gives symbolic representations for rotations and translations.

Consider the problem of representing the fact that an aircraft is somewhere on a runway. Suppose the runway has its x axis along its length, the y axis perpendicular at one end, and the positive z direction vertically upward. Suppose that the coordinate system for the aircraft has its x axis running along the spine of the fuselage and has its z axis skyward for the standard orientation of an aircraft. Then to represent the aircraft on the runway we could affix it with the affixment:

  (<ẑ, ORI>, (JET-RUNWAY-X, JET-RUNWAY-Y, 0))

where ORI, JET-RUNWAY-X and JET-RUNWAY-Y are quantifiers with the following constraints:

  (≤ 0 JET-RUNWAY-X)
  (≤ JET-RUNWAY-X RUNWAY-LENGTH)
  (≤ 0 JET-RUNWAY-Y)
  (≤ JET-RUNWAY-Y RUNWAY-WIDTH)

Notice that ORI is unconstrained. The aircraft is constrained to be on the runway, in the normal orientation for an aircraft (e.g. not upside down), but it does not constrain the direction in which the aircraft is pointed. If we wished to constrain the aircraft to approximately line up in the direction of the runway we could include a constraint on the quantifier ORI, allowing for some small uncertainty. In general, constraints on a single quantifier may derive from different coordinate systems.

III PREDICTION AND MATCHING
A. Using Constraints

Prediction consists of examining the model in order to find features which are invariant over the range of variations in the model [8], or over a sufficient range to allow a small number of cases. A mixture of reasoning directly about the model, and reasoning about the constraints, is needed to find invariants.

Consider the electric screwdriver holder and electric screwdriver in Figure 5. This is a display of an ACRONYM model of a tool for the Stanford hand-eye table. The position and orientation (about the vertical axis) are not known. Neither are the exact camera pan and tilt known.

Figure 5. Electric screwdriver and holder.

Under these conditions the expression for the orientation of the screwdriver tool relative to the camera, as obtained directly from the model, is:

  <x̂,TILT>*<ẑ,(- PAN)>*<ẑ,3π/2>*<ŷ,3π/2>*I*<ẑ,ORI>*I*<ŷ,3π/2>*<ŷ,π/2>*<ẑ,π/2>*I*I*I*I

where I is the identity rotation, PAN and TILT are quantifiers associated with the camera orientation, and ORI is the unconstrained orientation of the screwdriver holder about the vertical axis.

We have implemented a set of rules in ACRONYM which simplify such expressions to a canonical form, using identities for re-ordering products of rotations. The details can be found in [7]. The canonical form aids in the detection of invariants. E.g. the above expression is transformed to the equivalent expression:

  <ẑ,3π/2>*<ŷ,TILT>*<x̂,(+ PAN (- ORI))>

The left most rotation corresponds to a rotation in the image plane and can be ignored when predicting image shape - i.e. shape is invariant with respect to rotation in the image plane. The right most rotation expression is applied directly to the cylindrical tool. But it is a rotation about the x axis, which is the linear axis of a cylinder in our representation [8], and the appearance of a cylinder is invariant with respect to a rotation about its linear axis. Thus for shape prediction we need only consider:

  <ŷ,TILT>

Consider the problem of predicting the appearance of the cylinder in the image. We outline below the chain of reasoning intended for ACRONYM. (The previous predictor and planner rule set carried out a slightly less general but still powerful line of reasoning. For instance from the knowledge that the aircraft is on the runway ACRONYM deduces its image in an aerial photograph is determined up to a rotation about the vertical plus a translation. The rules necessary for a class of computations including the simple example below will be implemented over summer 1980.)

If TILT is sufficiently constrained (as in this example) it may be possible to predict the shape directly. The prediction takes the form of expected image features, their relations, and what constraints local matches to such features produce on the three dimensional model. See Section III-C for an example. But note here that the prediction is a conjunction of expected features.

B. Adding Constraints

If TILT in the above example is not sufficiently constrained there may be more than one qualitatively different shape possible for the cylinder (e.g. an end view of a cylinder is quite different from a side view). If so it is necessary to make a disjunction of predictions. Note however, that all views need not be explicitly expanded - they can still share much structure.

Each prediction is associated with a new, more restrictive restriction node. It is obtained by adding a new constraint which restricts the model sufficiently to necessitate only a single prediction. Figure 6 gives an indication of the structure of a local prediction, with two different cases considered. Not indicated in that diagram are arcs between the feature predictions which specify relations which should hold between instances of those features in the image.

Figure 6. Local graph structure for feature predictions: cone appearance prediction.

C. Generating Restrictions During Matching

In this section we give an example of an image prediction which generates restriction nodes during matching. We predict the appearance of a ribbon generated by the fuselage of Figure 1 in an aerial image. A prediction says what a match of a local feature must imply about both the observed feature, and the aspect of the object whose image generated that feature. Checking this implication (deciding whether the new restriction node is satisfiable) provides an acceptance test for a hypothesized match.
C. Generating Restrictions During Matching

In this section we give an example of an image prediction which generates restriction nodes during matching. We predict the appearance, in an aerial image, of a ribbon generated by the fuselage of figure 1. A prediction says what a match of a local feature must imply about both the observed feature and the aspect of the object whose image generated that feature. Checking this implication (deciding whether the new restriction node is satisfiable) provides an acceptance test for a hypothesized match.

For a projective imaging system the observed distance m between two points a distance l apart on the ground is given (approximately) by:

m = c * l / h

where c is a constant dependent on the focal distance of the camera and h is the height of the camera above the ground. Thus if the camera is at height HEIGHT (a quantifier) and M1 and M2 are the measurements obtained for length and width from a hypothesized match of a ribbon in the observation graph, then the following constraints must hold if the match is correct:

1. (= M1 (QUOTIENT (TIMES CAM-CONST FUSELAGE-LENGTH) HEIGHT))
2. (= M2 (QUOTIENT (TIMES 2 CAM-CONST FUSELAGE-RADIUS) HEIGHT))

In general M1 and M2 will not be given exactly by the observation graph; rather, an interval estimate will be supplied. Thus they can be represented by quantifiers with constraints on their upper and lower bounds. If CAM-CONST is known in advance, its numeric value can be substituted into the constraints generated by the match. If it too is a quantifier, then it is just one more constrained unknown in the system. At the time of hypothesizing a match, a new restriction node is generated by adding constraints 1 and 2 to the constraints of the restriction node associated with the prediction (see figure 5). If the new node can be shown to be unsatisfiable, then the match is rejected.

The following is not meant to indicate a proposed reasoning chain for ACRONYM; rather, it is illustrative of how constraints can imply that a hypothesized match is incorrect. Suppose for some hypothesized match, where CAM-CONST is known to be 100.0, the observed M1 lies between 4.0 and 5.0. Then given the constraints on the fuselage size of section II-B, the height of the camera must be between 800.0 and 1750.0 due to constraint 1 above. If this is inconsistent with a priori constraints on HEIGHT, the match can be rejected. In fact, a priori constraints on HEIGHT may also put further restrictions on the possible range for FUSELAGE-LENGTH. Similarly, measurement M2 and constraint 2 above will lead to restrictions on HEIGHT. If these restrictions are inconsistent with the 800.0 to 1750.0 bounds already obtained, the match should be rejected.
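The acceptance test can be realized with simple interval arithmetic. The sketch below is ours, not ACRONYM's constraint machinery, and the fuselage-length bounds of 40.0 to 70.0 are an assumption standing in for section II-B's actual constraints; with them, constraint 1 reproduces the 800.0-1750.0 HEIGHT interval of the example.

# A minimal sketch, with invented names, of the acceptance test described
# above: constraint 1 couples the observed length M1 to FUSELAGE-LENGTH and
# HEIGHT, so interval bounds on any two of them restrict the third.  A match
# is rejected when the implied HEIGHT interval misses the a priori one.

def implied_height(cam_const, m1_lo, m1_hi, fus_lo, fus_hi):
    # From M1 = cam_const * L / h:  h = cam_const * L / M1 (monotone in L, 1/M1)
    return (cam_const * fus_lo / m1_hi, cam_const * fus_hi / m1_lo)

def consistent(a, b):
    """Do two closed intervals intersect?"""
    return max(a[0], b[0]) <= min(a[1], b[1])

# Numbers from the example: CAM-CONST = 100.0, M1 in [4.0, 5.0],
# FUSELAGE-LENGTH in [40.0, 70.0] (assumed bounds for section II-B).
h = implied_height(100.0, 4.0, 5.0, 40.0, 70.0)
print(h)                                    # -> (800.0, 1750.0)
print(consistent(h, (500.0, 1200.0)))       # a priori HEIGHT bounds: accept
print(consistent(h, (0.0, 700.0)))          # camera known to be low: reject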
D. From Local to Global

After each phase of local matching, the matcher combines local matches into more global interpretations. This involves finding consistent subgraphs of matches. Previously, consistency concerned only the existence of arcs describing relations between matched ribbons. With the introduction of constraints on quantifiers during the ribbon matching process, these too must be checked for consistency.

Constraints on a quantifier at different HYPOTHESIS-MATCH restriction nodes may actually refer to different quantities in the scene. For instance, each potential match for an aircraft may have constraints on FUSELAGE-LENGTH and on HEIGHT. When combining the matches for aircraft to produce an interpretation of the image, there is no reason to require that the constraints on FUSELAGE-LENGTH at these different nodes be mutually consistent: different instances of wide-bodied passenger jet aircraft will have different lengths. However, all the constraints on HEIGHT should be mutually consistent, as there is only one HEIGHT of the camera.

Sometimes, when constraints on quantifiers actually correspond to different quantities in the world, it may be that these quantities should have the same value. For instance, ENGINE-LENGTH for the port and starboard engines corresponds to physical measurements of different objects in the world. However, since aircraft are symmetric, the constraints given by the matches on possible values of ENGINE-LENGTH for each engine should be consistent. Thus when clumping the local matches for an aircraft, the ENGINE-LENGTH constraints from each submatch should be checked for consistency. If they are not consistent, the particular set of local matches should be rejected as inconsistent.

A slot is provided in object units to represent which quantifiers matched at a lower level should be held consistent for interpretation of an object. This is the QUANTIFIERS slot shown in figure 2. As the matcher is combining local matches it looks up the subpart tree. Any quantifier mentioned in a QUANTIFIERS slot of any ancestor of the object has its constraints copied into the restriction node for the new, more global node. As each constraint is introduced it is checked for consistency.

This process is not quite straightforward. Sometimes a constraint on a quantifier involves another quantifier which is not being brought into the new match. Such is the case for FUSELAGE-LENGTH and HEIGHT in the example of the previous section when a global interpretation is being made involving many aircraft. Each aircraft provides a constraint on HEIGHT, but each is in terms of the instance of FUSELAGE-LENGTH of the individual aircraft. One solution is to generate a new unique identifier for the quantifier which is not to be constrained by newly imposed constraints. Its role is to ensure the continued satisfiability of the local match in light of new global constraints on other quantifiers involved in that match. Other solutions exist, which may miss some constraint inconsistencies in return for much simplified constraint analysis.
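A minimal sketch of this combination step, under assumptions of our own (constraints reduced to interval bounds, and a set of shared quantifier names standing in for the QUANTIFIERS slot): shared quantifiers are intersected across submatches, while all others are renamed per instance so they cannot clash.

# A sketch (invented representation) of combining local matches: constraints
# are kept as {quantifier: (lo, hi)} interval maps.  Quantifiers listed in a
# QUANTIFIERS slot are shared across submatches and must stay consistent;
# all others are renamed per instance, as suggested in the text.

import itertools
_uid = itertools.count()

def merge(matches, shared):
    merged = {}
    for m in matches:
        for q, (lo, hi) in m.items():
            key = q if q in shared else f"{q}#{next(_uid)}"  # unique per instance
            if key in merged:
                lo2, hi2 = merged[key]
                lo, hi = max(lo, lo2), min(hi, hi2)
                if lo > hi:
                    return None        # inconsistent: reject this set of matches
            merged[key] = (lo, hi)
    return merged

a1 = {"HEIGHT": (800.0, 1750.0), "FUSELAGE-LENGTH": (40.0, 55.0)}
a2 = {"HEIGHT": (900.0, 2000.0), "FUSELAGE-LENGTH": (60.0, 70.0)}
print(merge([a1, a2], shared={"HEIGHT"}))   # HEIGHT intersected, lengths kept apart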
IV REMARKS

We have described a single part of ACRONYM and have ignored many important issues involved in the construction of a vision system based on the representations given. In particular, we have not discussed the analytic power necessary to decide whether constraint sets are satisfiable. We believe that quite weak analytic methods can lead to powerful interpretive capabilities even though they fail to detect large classes of inconsistencies. Nor have we described in detail the methods to carry out the necessary geometric reasoning. These have been discussed in [7], which includes explicit rules for symbolic geometric reasoning in states of uncertain knowledge.

We have provided a representational scheme which facilitates the computational processes necessary for interpretation. The scheme uses restriction graphs to provide specializations of models, to enable case analysis in reasoning about incompletely specified situations, and to manage multiple hypothesized instantiations of modeled objects in a single image.

ACKNOWLEDGEMENTS

This research was sponsored by ARPA contract MDA-903-76-C-0206 and NSF contract DAR-78 15914. Support was also provided by the ALCOA Foundation.

REFERENCES

[1] Arnold, R. David and Thomas O. Binford, "Geometric Constraints in Stereo Vision," Proc. SPIE Meeting, San Diego, July 1980.

[2] Baker, H. Harlyn, "Edge Based Stereo Correlation," Proc. ARPA Image Understanding Workshop, Baltimore, Apr. 1980, 168-175.

[3] Binford, Thomas O., "Visual Perception by Computer," invited paper at IEEE Systems Science and Cybernetics Conference, Miami, Dec. 1971.

[4] Binford, Thomas O., Proc. NSF Grantees Conference, Cornell Univ., Sep. 1979.

[5] Bobrow, Daniel G. and Terry Winograd, "An Overview of KRL, a Knowledge Representation Language," Cognitive Science 1, 1977, 3-46.

[6] Brooks, Rodney A., "Goal-Directed Edge Linking and Ribbon Finding," Proc. ARPA Image Understanding Workshop, Palo Alto, Apr. 1979, 72-78.

[7] Brooks, Rodney A. and Thomas O. Binford, "Representing and Reasoning About Partially Specified Scenes," Proc. ARPA Image Understanding Workshop, Baltimore, Apr. 1980, 95-103.

[8] Brooks, Rodney A., Russell Greiner and Thomas O. Binford, "The ACRONYM Model-Based Vision System," Proc. of IJCAI-79, Tokyo, Aug. 1979, 105-113.

[9] Lowe, David, "Solving for the Parameters of Object Models from Image Descriptions," Proc. ARPA Image Understanding Workshop, Baltimore, Apr. 1980, 121-127.

[10] Soroka, Barry I., "Debugging Manipulator Programs with a Simulator," to be presented at CAD/CAM 8, Anaheim, Nov. 1980.
| 1980 | 86 |
85 |
Sticks, Plates, and Blobs: A Three-Dimensional Object Representation for Scene Analysis

Linda G. Shapiro
John D. Moriarty
Prasanna G. Mulgaonkar
Robert M. Haralick

Virginia Polytechnic Institute and State University
Department of Computer Science

ABSTRACT

In this paper, we describe a relational modeling technique which categorizes three-dimensional objects at a gross level. These models may then be used to classify and recognize two-dimensional views of the object in a scene analysis system.

I. Introduction

The recognition of three-dimensional objects from two-dimensional views is an important and still largely unsolved problem in scene analysis. This problem would be difficult even if the two-dimensional data were perfect, but the data can be noisy, distorted, occluded, shadowed and poorly segmented, making recognition much harder. Since the data is so rough, it seems reasonable that very rough models of three-dimensional objects should be used in the process of trying to classify such data. In this paper we describe a relational model and discuss its use in a scene analysis system.

There have been many approaches to modeling three-dimensional objects. For a comprehensive collection see the proceedings of the Workshop on Representation of Three-Dimensional Objects [13]. Also see Voelcker and Requicha [11] and Brown [4] for mechanical design; York et al. [14] for curved surface modeling using the Coons surface patch [5]; Horn [6] and Waltz [12] for the study of light and shadows; Badler et al. [2] for the study of human body modeling; and Agin and Binford [1] and Nevatia and Binford [7] for the generalized cylinder approach. The models we suggest are related to the generalized cylinder models, but are rougher descriptions that specify less detail about three-dimensional shape than do generalized cylinders.

(This research was supported by the National Science Foundation under grants MCS-7923827 and MCS-7919741.)

II. Sticks, Plates, and Blobs in Relational Descriptions

A relational description of an object consists of a set of parts of the object, the attributes of the parts, and a set of relations that describe how the parts fit together. Our models have three kinds of three-dimensional parts: sticks, plates, and blobs. Sticks are long, thin parts with only one significant dimension. Plates are flattish, wide parts consisting of two nearly flat surfaces connected by a thin edge between them. Plates have two significant dimensions. Blobs are neither thin nor flat; they have three significant dimensions. All three kinds of parts are "near convex"; that is, a stick cannot bend very much, the surfaces of a plate cannot fold too much, and a blob can be bumpy but cannot have large concavities. Figure 1 shows several examples of sticks, plates, and blobs.

[Figure 1. Several examples each of sticks, plates, and blobs.]

In describing an object, we must list the parts, their types (stick, plate, or blob), and their relative sizes, and we must specify how the parts fit together. For any two primitive parts that connect, we specify the type of connection and up to three angle constraints.
The type of connection can be end-end, end-interior, end-center, end-edge, interior-center, or center-center, where "end" refers to an end of a stick, "interior" refers to the interior of a stick or the surface of a plate or blob, "edge" refers to the edge of a plate, and "center" refers to the center of mass of any part.

For each type of pairwise connection, there are one, two, or three angles that, when specified as single values, completely describe the connection. For example, for a stick and a plate in the end-edge type of connection, two angles are required: the angle between the stick and its projection on the plane of the plate, and the angle between that projection and the line from the connection point to the center of mass of the plate.

Requiring exact angles is not in the spirit of our rough models. Instead we specify permissible ranges for each required angle. In our relational model, binary connections are described in the CONNECTS/SUPPORTS relation, which contains 10-tuples of the form (Part1, Part2, SUPPORTS, HOW, VL1, VH1, VL2, VH2, VL3, VH3) where Part1 connects to Part2, SUPPORTS is true if Part1 supports Part2, HOW gives the connection type, VLi gives the low value in the permissible range of angle i, and VHi gives the high value in the permissible range of angle i, i = 1, 2, 3.

The CONNECTS/SUPPORTS relation is not sufficient to describe a three-dimensional object. One shortcoming is its failure to place any global constraints on the resulting object. We can make the model more powerful merely by considering triples of parts (s1,s2,s3) where s1 and s3 both touch s2, and describing the spatial relationship between s1 and s3 with respect to s2. Such a description appears in the TRIPLE CONSTRAINT relation and has two components: 1) a boolean which is true if s1 and s3 meet s2 on the same end (or surface) and 2) a constraint on the angle subtended by the centers of mass of s1 and s3 at the center of mass of s2. The angle constraint is also in the form of a range.

Our current relational description for an object consists of ten relations. The A/V relation, or attribute-value table, contains global properties of the object. Our A/V relations currently contain the following attributes: 1) number of base supports, 2) type of topmost part, 3) number of sticks, 4) number of plates, 5) number of blobs, 6) number of upright parts, 7) number of horizontal parts, 8) number of slanted parts. The A/V relation is a simple numeric vector, including none of the structural information in the other relations. It will be used as a screening relation in matching; if two objects have very different A/V relations, there is no point in comparing the structure-describing relations. We are also using the A/V relations as feature vectors to input to a clustering algorithm. The resulting clusters represent groups of objects which are similar.
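As a concrete (and entirely illustrative) rendering of these relations, the sketch below holds one CONNECTS/SUPPORTS 10-tuple per connection and uses the A/V vector for screening; the part names, connection choices, and tolerance are inventions of ours, not values from the authors' database.

# One way (ours, not necessarily the authors') to hold the model described
# above: a CONNECTS/SUPPORTS record per binary connection, plus the A/V
# vector used for screening before any structural matching is attempted.

from dataclasses import dataclass

@dataclass
class Connection:                      # the 10-tuple of the CONNECTS/SUPPORTS relation
    part1: str
    part2: str
    supports: bool                     # True if part1 supports part2
    how: str                           # e.g. "end-interior", "interior-center", ...
    angle_ranges: tuple                # up to three (low, high) angle ranges

chair = [
    Connection("SEAT", "BACK", True, "interior-center", ((80.0, 100.0),)),
    Connection("LEG1", "SEAT", True, "end-interior", ((85.0, 95.0),)),
]

def av_screen(av1, av2, tol=2):
    """Compare two A/V feature vectors; skip structural matching if they differ."""
    return all(abs(a - b) <= tol for a, b in zip(av1, av2))

# (supports, topmost, #sticks, #plates, #blobs, #upright, #horizontal, #slanted)
print(av_screen((4, 1, 4, 2, 0, 5, 1, 0), (4, 1, 4, 2, 0, 4, 2, 0)))  # -> True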
Matching can then be performed on cluster centroids instead of on the entire database of models. The other relations are SIMPLE PARTS, PARALLEL PAIRS, PERPENDICULAR PAIRS, LENGTH CONSTRAINT, BINARY ANGLE CONSTRAINT, AREA CONSTRAINT, VOLUME CONSTRAINT, TRIPLE CONSTRAINT and CONNECTS/SUPPORTS.

III. Matching

Relational matching of two-dimensional objects to two-dimensional models is a well-defined operation. See Barrow, Ambler, and Burstall [3] for a discussion of exact relational matching, Shapiro [8] for relational shape matching, and Shapiro and Haralick [10] for inexact matching. Our problem in scene analysis is to match two-dimensional perspective projections of objects (as found in an image) to the three-dimensional models stored in the database. Our approach to this problem is to analyze a single two-dimensional view of an object, produce a two-dimensional structural shape description, use the two-dimensional description to infer as much as possible about the corresponding three-dimensional description, and then use inexact matching techniques in trying to match incomplete and possibly erroneous three-dimensional object descriptions to our stored three-dimensional relational models.

We decompose a two-dimensional view into simple parts by a graph-theoretic clustering scheme as described in [9]. To match a two-dimensional object description to a three-dimensional model is to find a mapping from the two-dimensional simple parts of the object to the sticks, plates and blobs of the model so that the relationships among the two-dimensional parts are not inconsistent with the relationships among the three-dimensional parts. For example, a binary CONNECTS relation can be constructed for the two-dimensional parts. Given a pair (p1,p2) of three-dimensional model parts where (p1,p2,*,*,*,*,*,*,*,*) is an element of the CONNECTS/SUPPORTS relation and a mapping h from three-dimensional model parts to two-dimensional object parts, if (h(p1),h(p2)) is not an element of the two-dimensional CONNECTS relation, then an error has occurred. If a mapping accumulates too many errors from various n-tuples of various relations not being satisfied, that mapping cannot be considered a match.

As an example, suppose the three-dimensional model of a simple chair contains two plates (the back B and seat S) and four sticks (legs L1, L2, L3, L4). The relation obtained from just the first two columns of the CONNECTS/SUPPORTS relation is {(S,B), (B,S), (L1,S), (S,L1), (L2,S), (S,L2), (L3,S), (S,L3), (L4,S), (S,L4)}. Now consider the two-dimensional decomposition of Figure 2. We can construct the hypothetical connection relation C = {(s1,s2), (s2,s1), (s3,s2), (s2,s3), (s3,s1), (s1,s3), (s4,s2), (s2,s4), (s4,s1), (s1,s4), (s5,s2), (s2,s5)}. Then the mapping f defined by {(S,s2), (B,s1), (L1,s3), (L2,s4), (L3,s5), (L4,s4)} accumulates no error, while the mapping g defined by {(S,s1), (B,s2), (L1,s3), (L2,s4), (L3,s5), (L4,s4)} accumulates error since (L3,S) is in the model, but (g(L3),g(S)) = (s5,s1) is not in C.
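The error count in this example can be checked mechanically. The following sketch is our transcription of the chair example (only the CONNECTS pairs are used, not the full 10-tuples); it charges a candidate mapping one error per model pair whose image is absent from the observed relation C.

# A small sketch of the error count used above: a candidate mapping h from
# model parts to image parts is charged one error for every CONNECTS pair in
# the model whose image under h is missing from the observed relation C.

def mapping_errors(model_connects, observed_connects, h):
    return sum((h[p1], h[p2]) not in observed_connects
               for p1, p2 in model_connects)

model = {("S","B"),("B","S"),("L1","S"),("S","L1"),("L2","S"),("S","L2"),
         ("L3","S"),("S","L3"),("L4","S"),("S","L4")}
C = {("s1","s2"),("s2","s1"),("s3","s2"),("s2","s3"),("s3","s1"),("s1","s3"),
     ("s4","s2"),("s2","s4"),("s4","s1"),("s1","s4"),("s5","s2"),("s2","s5")}

f = {"S":"s2","B":"s1","L1":"s3","L2":"s4","L3":"s5","L4":"s4"}
g = {"S":"s1","B":"s2","L1":"s3","L2":"s4","L3":"s5","L4":"s4"}
print(mapping_errors(model, C, f))   # -> 0
print(mapping_errors(model, C, g))   # -> 2: (L3,S) and (S,L3) both fail under g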
Not all of the three-dimensional relations can be directly constructed from two-dimensional data. (If they could, the entire scene analysis problem would be much easier.) For example, only an estimate of whether one part supports another can be computed. Relations like PARALLEL PAIRS and LENGTH CONSTRAINT can also be estimated. Relations involving angles are probably the most difficult, since a perspective projection will change the angles between parts. Such information should be left out of initial matching attempts and used later to try to validate a given match or to choose between several possible matches. The precise definition of an inexact match from a two-dimensional description to a three-dimensional description is the subject of our current research.

[Figure 2 illustrates the decomposition of a two-dimensional chair by graph-theoretic clustering.]

IV. Summary of Current and Future Research

We have described a relational model for three-dimensional objects, in which the parts of an object are very grossly described as sticks, plates, or blobs. We are building a database of three-dimensional object models. The objects in the database are being clustered into groups, using a graph-theoretic clustering algorithm. Instead of comparing a two-dimensional view to every object in the database, it will be compared initially only to the centroid object in each group. Only in those groups where the unknown object is most highly related to the centroid will any full relational matching take place.

Relational matching will be a form of the inexact matching we described in [10]. The general method will be to obtain estimates of the three-dimensional relations from the two-dimensional shape and match these estimates against the three-dimensional models. Deriving the algorithms and heuristics for the matching is one of our most challenging tasks.

References

1. Agin, G.J. and T.O. Binford, "Computer Descriptions of Curved Objects", IEEE Transactions on Computers, Vol. C-25, No. 4, April 1976.

2. Badler, N.I., J. O'Rourke, and H. Toltzis, "A Spherical Representation of a Human Body for Visualizing Movement", Proceedings of the IEEE, Oct. 1979.

3. Barrow, H.G., A.P. Ambler, and R.M. Burstall, "Some Techniques for Recognizing Structure in Pictures", Frontiers of Pattern Recognition, S. Watanabe (ed.), Academic Press, New York, 1972, pp. 1-29.

4. Brown, C.M., A.A.G. Requicha and H.B. Voelcker, "Geometric Modeling Systems for Mechanical Design and Manufacturing", Proc. ACM 1978, Washington, D.C., Dec. 4-6, 1978.

5. Coons, S.A., "Surfaces for Computer-Aided Design of Space Forms," M.I.T. Project MAC, MAC-TR-41, June 1967 (AD 663504).

6. Horn, B., "Obtaining Shape From Shading Information", in The Psychology of Computer Vision (P.H. Winston, ed.), McGraw-Hill, New York, 1975, pp. 115-155.

7. Nevatia, R.K. and T.O. Binford, "Structured Descriptions of Complex Objects", Proc. Third International Joint Conference on Artificial Intelligence, Stanford, 1973.

8. Shapiro, L.G., "A Structural Model of Shape", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-2, No. 2, 1980.

9. Shapiro, L.G. and R.M. Haralick, "Decomposition of Two-Dimensional Shapes by Graph-Theoretic Clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-1, 1979, pp. 10-20.

10. Shapiro, L.G. and R.M. Haralick, "Structural Descriptions and Inexact Matching", Technical Report CS79011-R, Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, Nov. 1979.

11. Voelcker, H.B. and A.A.G. Requicha, "Geometric Modeling of Mechanical Parts and Processes", Computer, Vol. 10, No. 12, Dec. 1977, pp. 48-57.

12. Waltz, D., "Understanding Line Drawings of Scenes with Shadows", in The Psychology of Computer Vision (P.H. Winston, ed.), McGraw-Hill, New York, 1975.

13. Workshop on Representation of Three-Dimensional Objects (R. Bajcsy, Director), University of Pennsylvania, May 1-2, 1979.

14. York, B., A. Hanson, and E.M. Riseman, "Representation of Three-Dimensional Objects with Coons Patches and Cubic B-Splines", Department of Computer and Information Science, University of Massachusetts, Amherst, MA 01003.
| 1980 | 87 |
86 |
CONSTRAINT-BASED INFERENCE FROM IMAGE MOTION

Daryl T. Lawton
Computer and Information Science Department
University of Massachusetts
Amherst, Massachusetts 01003

ABSTRACT

We deal with the inference of environmental information (position and velocity) from a sequence of images formed during relative motion of an observer and the environment. A simple method is used to transform relations between environmental points into equations expressed in terms of constants determined from the images and unknown depth values. This is used to develop equations for environmental inference from several cases of rigid body motion, some having direct solutions. Also considered are the problems of non-unique solutions and the necessity of decomposing the inferred motion into natural components.

Inference from optic flow is based upon the analysis of the relative motions of points in images formed over time. Here we deal with environmental inferences from optic flow for several cases of rigid body motion and consider extensions to linked systems of rigid bodies. Since locality of processing is very important, we attempt to determine the smallest number of points necessary to infer environmental structure for different types of motion.

I INTRODUCTION

The processing of motion information from a sequence of images is of fundamental importance. It allows the inference of environmental information at a low level, using local, parallel computations across successive images. Our concern is with processing a particular type of image motion, termed optic flow, to yield environmental information. Optic flow [1] is the set of velocity vectors formed on an imaging surface by the moving projections of environmental points. It is important to note that there are several types of image transformations, caused by environmental motion, which are not optic flow: for example, image lightness changes due to motion relative to light sources, the motion of features produced by surface occlusion, moving shadows, and a host of transduction effects. The occurrence of these different types of image transformations requires explicit recognition so the appropriate inference technique can be applied to each.

(This work was supported by NIH Grant No. R01 NS14971-02 COM and ONR Grant No. N00014-75-C-0459.)

II CAMERA MODEL AND METHOD

The camera model is based upon a 3-D Cartesian coordinate system whose origin is the focal point (refer to figure 1 throughout this section). The image plane (or retina) is positioned in the positive direction along, and perpendicular to, the Z-axis. The retinal coordinate axes are A and B. They are aligned with, and parallel to, the X and Y axes respectively. For simplicity and without loss of generality, the focal length is set to 1.

A point indexed by the number i in the environment at time m is denoted by Pmi. The time index will generally correspond to a frame number from a sequence of images. The projection of an environmental point Pmi onto the retina is determined by the intersection of the retinal surface with the line containing the focal point and Pmi.
The position of this intersection in the 3-D coordinate system is represented by the position vector Imi. In this paper, any subscripted I, A, or B is a constant determined directly from an image. The significant relations concerning Pmi and Imi are:

1) Pmi = (Xmi, Ymi, Zmi)
2) Imi = (Ami, Bmi, 1)
3) Ami = Xmi / Zmi    and    Bmi = Ymi / Zmi
4) Pmi = Zmi Imi

In the method used here, Equation 4 is used to transform expressed relations between environmental points into a set of equations in terms of image position vectors and unknown Z values. Solving these equations yields a set of Z values which provide a consistent interpretation for the positions, over time, of the corresponding set of environmental points under the assumed relations.

[Fig. 1: the camera model, with the focal point at the origin and the retina perpendicular to the Z-axis.]

III INFERENCE FROM RIGID BODY MOTION

A. Arbitrary Motion of Rigid Bodies

The constraint equations developed for this case reflect the preservation of distances between pairs of points on a rigid body during motion. For two points i and j on a rigid body at times m and n, the preservation of distance yields

5) (Pmi - Pmj) . (Pmi - Pmj) = (Pni - Pnj) . (Pni - Pnj)

which expands into the image-based equation

6) Zmi^2 (Imi . Imi) - 2 Zmi Zmj (Imi . Imj) + Zmj^2 (Imj . Imj) - Zni^2 (Ini . Ini) + 2 Zni Znj (Ini . Inj) - Znj^2 (Inj . Inj) = 0

To determine a solution, we find the minimum number of points and frames for which the number of independent constraints (in the form of equation 6) equals or exceeds the number of unknown Z values. It is then necessary to solve the resulting set of simultaneous equations. Note that each such constraint is a second-degree polynomial in 4 unknowns.

We begin with the number of unknown Z values. For N (N>2) points in K (K>1) frames there are (NK)-1 unknown Z values. The minus 1 term reflects the degree of freedom due to the loss of absolute scale information; thus one of the Z values can be set to an arbitrary value.

The number of rigidity constraints generated by a set of N (N>2) points in K (K>1) frames is the product of 3(N-2) and (K-1). The first term is the minimum number of unique distances which must be specified between pairs of points, in a body of N points, to assure its rigidity. Thus 4 points require 6 pairwise distances (all that are possible). For configurations of more than 4 points, it is necessary to specify the distance of each additional point to only 3 other points to assure rigidity. The second term is the number of interframe intervals: each distance specified must be maintained over each interframe interval.

The number of constraints is greater than or equal to the number of unknowns when

3(N-2)(K-1) >= NK - 1

Thus minimal solutions (but not necessarily unique! see below) can be found when (N=5, K=2, number of constraint equations = 9) or (N=4, K=3, number of constraint equations = 12), in agreement with [2].
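Equation 6 is just equation 5 with P = Z I substituted, which the following sketch (our illustration, using numpy) makes concrete: the residual is zero exactly when an assignment of depths preserves the distance between the two points across frames.

# A direct transcription of the rigidity constraint: with P = Z * I
# (equation 4), preserving the distance between points i and j from frame m
# to frame n (equation 5) gives the polynomial residual of equation 6.
# Image vectors are numpy arrays (A, B, 1); the Z values are the unknowns.

import numpy as np

def rigidity_residual(Imi, Imj, Ini, Inj, Zmi, Zmj, Zni, Znj):
    d_m = Zmi * Imi - Zmj * Imj        # P_mi - P_mj
    d_n = Zni * Ini - Znj * Inj        # P_ni - P_nj
    return d_m @ d_m - d_n @ d_n       # zero iff the distance is preserved

# Two points on a rigid rod, translated between frames: residual is zero.
I = lambda a, b: np.array([a, b, 1.0])
print(rigidity_residual(I(0.0, 0.0), I(0.1, 0.0),   # frame m images
                        I(0.2, 0.0), I(0.3, 0.0),   # frame n images
                        2.0, 2.0, 2.0, 2.0))        # depths -> 0.0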
The rigidity equations can be simplified by adding restrictions on the allowable motions of environmental points. In the following sections we investigate two such restrictions.

B. Motion Parallel to the XZ Plane

Here the Y component of an environmental point is assumed to remain constant over time; otherwise its motion is unrestricted. This corresponds to an observer moving along an arbitrary path in a plane, maintaining his retina at an orientation perpendicular to the plane, with the motion of objects also so restricted. For point i at times m and n this is expressed as

9) Ymi = Zmi Bmi = Zni Bni = Yni

10) Zni = Zmi (Bmi / Bni)

This allows a substitution, for points i and j, which simplifies the rigidity constraint to

11) Zmi^2 [(Imi . Imi) - (Bmi/Bni)^2 (Ini . Ini)] - 2 Zmi Zmj [(Imi . Imj) - (Bmi Bmj / (Bni Bnj)) (Ini . Inj)] + Zmj^2 [(Imj . Imj) - (Bmj/Bnj)^2 (Inj . Inj)] = 0

where the bracketed expressions are constants determinable from an image. This case has a direct solution using 2 points in 2 frames. To see this, consider points 1 and 2 at times 1 and 2. This yields a system of 4 unknowns: Z11, Z12, Z21, Z22. The substitution allowed by equation 10 reduces it to a system of 2 unknowns, Z11 and Z12. Z11 can then be set to an arbitrary value, reflecting scale independence. Z12 is then determined from a constraint of the form of equation 11 relating Z12 and the evaluated variable Z11. This is a quadratic equation in Z12.

C. Translations

The constraint expressing the translation of points i and j on a rigid body at times m and n is

12) Pmi - Pmj = Pni - Pnj

13) Zmi Imi - Zmj Imj = Zni Ini - Znj Inj

where the operation is vector subtraction. This reflects the preservation of length and orientation under translation. Setting Zmi to a constant value C, to reflect scale independence, in equation 13 yields 3 simultaneous linear equations in 3 unknowns:

14) C Ami = Zmj Amj + Zni Ani - Znj Anj
    C Bmi = Zmj Bmj + Zni Bni - Znj Bnj
    C     = Zmj     + Zni     - Znj

Thus environmental inference from translation requires 2 points in 2 frames. A potential implication of this case is for interpreting arbitrary, and not necessarily rigid body, environmental motion. If the resolution of detail and the rate of image formation relative to environmental motion are both very high, then, in general, the motion of nearby points in images can be locally approximated as the result of translational motion in the environment.
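The translation case admits a one-line solution. The sketch below (ours; the synthetic scene and helper names are invented) sets up the three linear equations of (14) and solves them with numpy, recovering the true depths up to the chosen scale C.

# The translation case solved directly: fixing Zmi = C for scale, the three
# linear equations above determine Zmj, Zni, Znj.

import numpy as np

def solve_translation(Imi, Imj, Ini, Inj, C=1.0):
    # C * Imi = Zmj*Imj + Zni*Ini - Znj*Inj, componentwise (A, B, and f=1 rows)
    M = np.column_stack([Imj, Ini, -Inj])
    return np.linalg.solve(M, C * Imi)   # -> (Zmj, Zni, Znj)

# Two points translated by (0.5, 0.2, 1) between frames, camera at the origin:
Pm_i, Pm_j = np.array([0.0, 0.0, 2.0]), np.array([0.4, 0.4, 2.0])
t = np.array([0.5, 0.2, 1.0])
proj = lambda P: P / P[2]                 # image vector (A, B, 1)
Zs = solve_translation(proj(Pm_i), proj(Pm_j), proj(Pm_i + t), proj(Pm_j + t),
                       C=Pm_i[2])
print(Zs)                                 # -> [2. 3. 3.], the true depths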
D. Solving the Constraints

The rigidity constraints are easily differentiable and can be solved using conventional optimization methods (taking care to avoid the solution where all the Z values equal zero). There are, however, in the case of arbitrary rigid body motion, generally many solutions. Here we consider ways of dealing with this.

One way utilizes feedforward. It is crucial to note that the rigidity equations needn't be solved anew at each point in time. If the environmental structure has been determined at time t and an image is then formed at time t+1, half the unknowns in the system of constraint equations disappear. This greatly simplifies finding a solution. Additionally, the solution process can be further simplified by extrapolation of inferred motion, if enough frames have been processed. But how can the positions of the environmental points be determined initially? 1) Prior knowledge of the environment could supply the initial estimates of the relative positions. 2) There may be a small number (perhaps less than 50) of generic patterns of image motion (which may be termed flow fields), each associated with a particular class of environmental motion. For example, translational motion is characterized by straight motion paths which radiate from or converge to a single point on the retina. Other flow fields we have analyzed also have such distinguishing characteristics. These characteristics would be used to recognize particular types of image motion, associated with particular types of environmental motion, to initialize and constrain the more detailed solution process based upon solving the constraints for arbitrary rigid body motion. 3) The observer could constrain his own motion for one sampling period to a case of motion for which environmental structure can be unambiguously determined. For example, by stabilizing the retina of a moving observer with respect to rotations relative to a stationary environment, all image motion could be interpreted as the result of translation.

Another possibility is to use more than the minimum required number of points in the inference process to supply additional constraints.

IV APPROACHES TO OTHER CASES OF MOTION

A. Sub-Minimal Rigid Configurations

A sub-minimal configuration is one consisting of 1, 2 or 3 points over an arbitrary number of frames, or 4 points in 2 frames. Human subjects can get an impression of 3-D rigid motion from displays of such configurations [4], even though there are not sufficient generated constraints, by the above analysis, for a solution. How is this possible?

Other assumptions must be used for the inference. A potential one reflects an assumption of smoothness in the 3-D motion. This can be had by minimizing an approximation of the acceleration of a given point. For a point i at times t, t+1, t+2 this can be expressed as

15) |Pt,i - Pt+1,i| - |Pt+1,i - Pt+2,i|

Perhaps a sum of such expressions, formed using the substitution of equation 4, should be minimized for a set of points over several time periods, along with the satisfaction of expressible rigidity constraints.

B. Johansson Human Dot Figures

Experiments initiated by Johansson have shown the ability of subjects to infer the structure and changing spatial disposition of humans performing various tasks with only joint positions displayed over time [5]. Here we consider how such inference could be performed.

First, it is necessary to determine which points are rigidly connected in the environment. Work by Rashid [6] has shown that this is possible on the basis of image properties only, using relative rates of motion between images - that is, without any inference of environmental structure.

Given the determination of rigid linkages, it is necessary to find the relative spatial disposition of the limbs. An approach is to infer environmental position, for each limb, using the rigidity constraints and optimizing the smooth motion measure discussed above. If the figure is recognized as being human, several object-specific constraints can also be used. These would involve such things as allowable angles of articulation between limbs, their relative lengths, and body symmetries.

ACKNOWLEDGEMENTS

I greatly appreciate Ed Riseman, Al Hanson, and Michael Arbib for their support and the intellectual environment they have created.
I would also like to acknowledge the last minute (micro-nano-second!) heroics of Tom Vaughan, Earl Billingsley, Maria de LaVega, Steve Epstein, and Gwyn Mitchell.

REFERENCES

[1] Gibson, J.J. The Perception of the Visual World. Boston: Houghton Mifflin and Co. 1950.

[2] Ullman, S. The Interpretation of Visual Motion. Cambridge, Massachusetts: The MIT Press. 1979.

[3] Rogers, D.F. and Adams, J.A. Mathematical Elements for Computer Graphics. McGraw-Hill Book Company. 1976.

[4] Johansson, G., and Jansson, G. "Perceived Rotary Motion from Changes in a Straight Line." Perception and Psychophysics, 1968, Vol. 4 (3).

[5] Johansson, G. "Visual Perception of Biological Motion and a Model for its Analysis." Perception and Psychophysics 14:2 (1973) 201-211.

[6] Rashid, R.F. "Towards a System for the Interpretation of Moving Light Displays", Technical Report 53, Department of Computer Science, University of Rochester, Rochester, New York 14627, May 1979.
| 1980 | 88 |
87 |
STATIC ANALYSIS OF MOVING JOINTED OBJECTS

Jon A. Webb
Department of Computer Science
University of Texas at Austin

ABSTRACT

The problem of interpreting images of moving jointed objects is considered. Assuming the existence of a connectedness model, an algorithm is presented for calculating the rigid part lengths and motion of the jointed objects just from the positions of the joints and some depth information. The algorithm is proved correct.

I INTRODUCTION

Vision research has only recently begun considering the three-dimensional motion of jointed objects, but progress has been relatively rapid. This paper presents a method for using a very general model to discover the motion and structure of a jointed object. The method is proved correct under reasonable conditions, which are stated precisely. These conditions are found to be satisfied in most normal observation of normal jointed object movement.

Jointed objects are important because they include most of the significant moving objects in this world, e.g. rigid objects, humans, and animals. The method to be described allows the recovery of a wealth of information by a single monocular observer of a moving jointed object. This information could aid recognition from a distance.

This paper, like most other research in three-dimensional motion ([1-4]), adopts the feature point model. In this model only the positions of points rigidly attached to the object are recorded. This makes the mathematical analysis more direct. Moreover, psychological research has shown that humans can readily interpret movies of people where only the joints can be seen [5-7]. It is therefore reasonable to try to construct a program that could interpret such images.

(This research was supported by the Air Force Office of Scientific Research under grant number AFOSR 77-3190.)

II THE MODEL

A. Introduction

This paper assumes the existence of a connectedness model. This model could be constructed by the methods of [3] or by other methods under development. The jointed object model for a jointed object consists of three parts: joints, rigid parts, and feature points. The feature points are fixed on the rigid parts, which are connected by the joints. In this paper, it will be assumed that the jointed object forms a tree (i.e., that it has no cycles) and that the feature points coincide with the joints. The rigid parts are not allowed to bend or stretch. The lengths of the rigid parts are unknown, but are calculated by the algorithm through observation of the jointed object.

A connectedness model for a humanoid figure is shown in figure 1, with feature points indicated by letters.

[Figure 1: a humanoid connectedness model with feature points labeled a through m.]

B. Input Description

The analysis proposed in this paper applies equally well whether the central projection or parallel projection model of vision is used, but central projection will be assumed, as it most accurately describes the way cameras work. The camera will be assumed to be at the origin, with the focal plane at (0,0,f). Figure 2 shows this model.

[Figure 2: the central projection camera model.]
The correspondence between model and image feature points must be established. The correspondence problem for moving objects has been considered in [2-4]. These correspondence algorithms are based on nearest neighbor, and work well ([3] reports 98% accuracy) for frames with small time intervals between them.

The algorithm to be described requires a z coordinate for some feature point in every frame. This point will be called the reference point. For simplicity, it will be assumed that the reference point is the same in every frame. The z coordinate of the reference point can be obtained by several means, including the support assumption (used in [1] for this purpose and proposed for psychological reasons in [9]), but no method is entirely satisfactory. This will be discussed briefly in section IV.

III THE ALGORITHM

A. Introduction

The algorithm treats the model as a tree, with the root being the reference point. Figure 3 shows this tree for the humanoid model. The starting point of a rigid part is its joint nearest the reference point (in this tree); its ending point is the joint farthest from the reference point. A first rigid part is said to be above a second if it lies on a path from the second to the reference point. Similarly, the second is said to be below the first.

[Figure 3: the tree for the humanoid model, rooted at the reference point.]

The algorithm works by calculating the lengths and the positions of the ending points of the topmost rigid parts (these ending points are m, c, e, and a in figure 3). Next, rigid part lengths and ending point positions immediately below these rigid parts are calculated. The process continues until the positions of all the joints and the lengths of all the rigid parts have been calculated.

The calculation of the lengths of rigid parts is done using known lower bounds on their lengths. These lower bounds are obtained from previous frames. (In the first frame a lower bound of zero is used.) If the lower bound is too small to account for the observed positions of the joints, the smallest rigid part length that will work is calculated and a new lower bound is established.

B. Formal Statement of the Algorithm

For each frame, do the following for each rigid part in the tree, going from top to bottom:

1. Let the position of the starting point of this rigid part be (x,y,z), the observed coordinates of the ending point be (u,v), and the lower bound on the rigid part length be r. If the rigid part length is exactly r, then the ending point lies on a sphere of radius r with center at (x,y,z). At the same time, the ending point lies on a line through the origin and (u,v,f), where f is the focal length. This situation is shown in figure 4. The coordinates of the ending point under these assumptions can easily be calculated using the quadratic formula.

[Figure 4: the ending point at the intersection of the sphere of radius r about the starting point and the line of sight through (u,v,f).]

This method gives two values for the position of the end of the rigid part. These two values represent two reflections of the rigid part that could account for the observed position of the ending point. For the algorithm to work, and calculate the correct rigid part lengths, the correct reflection must be chosen.
It is assumed that the correct reflection is always chosen by some process. While deciding which of the reflections is correct might be a hard problem (see section IV), once the correct reflection is chosen it can be tracked fairly easily, since the two reflections normally differ greatly in the z coordinate and in the angle they make at the starting point.

2. If the quadratic formula yields no value for the position of the end of the rigid part, this means that the rigid part length must be longer than r. Calculate a new lower bound on the rigid part length by the formula

(1) r = SQRT[(x - pu)^2 + (y - pv)^2 + (z - pf)^2]

where

(2) p = (ux + vy + fz) / (u^2 + v^2 + f^2)

The coordinates of the ending point are then (pu, pv, pf). The situation giving rise to this formula is shown in figure 5.

[Figure 5: the line of sight misses the sphere; the new lower bound is the distance from the starting point to the nearest point on the line of sight.]

Whenever a rigid part length is changed, the previously calculated lower bounds on rigid part lengths below the changed rigid part become invalid, so they must be set to zero. This action introduces an order dependence into the algorithm; for the algorithm to work correctly, the proper view of a rigid part must be seen after the proper views of rigid parts above it are seen. This restriction will be discussed in greater detail later.

C. Experimental Results

An experiment was run using three hand-drawn humanoid figures and the algorithm given above. The figures were drawn with specific rigid part lengths in mind. The rigid part lengths were recovered by the algorithm to within an average relative error of about 10-15%.

D. Proof of the Algorithm

It will now be shown that the algorithm will eventually calculate the correct rigid part lengths and three-dimensional joint positions. In order to show this, these assumptions are necessary:

1. The correct reflections of the joints must be known.

2. Each rigid part must be seen at some time in a position that satisfies figure 6. That is, the angle between the origin, the endpoint, and the starting point of the rigid part must be a right angle.

3. If rigid part A is above rigid part B, condition 2 must be satisfied for B after it is satisfied for A.

Theorem. Under the above conditions, the given algorithm will correctly calculate the length and endpoint position for every rigid part.

Proof. Let R be a rigid part. The proof will be by induction on the number of rigid parts above R. If there are no rigid parts above R then R is attached to the reference point. As soon as condition (2) is satisfied for R, formula (1) will correctly calculate R's length and R's endpoint will be correct.

If there are any rigid parts above R, then their correct lengths and endpoint positions will eventually be found. Once this has happened, condition (3) guarantees that condition (2) will be satisfied for R, at which time formula (1) will be used to correctly calculate R's length. This completes the proof.
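Steps 1 and 2 reduce to intersecting a sight ray with a sphere. The sketch below is our reading of those steps (function and variable names are ours): it returns the two candidate reflections when the ray meets the sphere, and otherwise raises the lower bound using formulas (1) and (2).

# A compact sketch of steps 1 and 2: intersect the sight ray through (u, v)
# with the sphere of radius r about the known starting point; if the ray
# misses the sphere, raise the lower bound on the part length via formulas
# (1) and (2).  Reflection choice is left to the caller, as in the text.

import math

def endpoint(start, u, v, f, r):
    """Return (positions, new_r): candidate endpoints on the ray t*(u,v,f)
    at distance r from start, or a raised lower bound when none exist."""
    x, y, z = start
    a = u*u + v*v + f*f
    b = -2.0 * (u*x + v*y + f*z)
    c = x*x + y*y + z*z - r*r
    disc = b*b - 4*a*c
    if disc < 0:                          # ray misses the sphere: formula (2)
        p = (u*x + v*y + f*z) / a
        closest = (p*u, p*v, p*f)
        new_r = math.dist(start, closest) # formula (1): new lower bound
        return [closest], new_r
    ts = [(-b + s*math.sqrt(disc)) / (2*a) for s in (1, -1)]
    return [(t*u, t*v, t*f) for t in ts], r   # two reflections, bound unchanged

# Starting point known at depth 2, endpoint observed at (u,v) = (0.25, 0):
print(endpoint((0.0, 0.0, 2.0), 0.25, 0.0, 1.0, 0.5))   # hits: two reflections
print(endpoint((0.0, 0.0, 2.0), 0.25, 0.0, 1.0, 0.2))   # misses: bound ~0.485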
IV EXTENSIONS TO THE ALGORITHM

There are several restrictions placed on the data available to the system that are undesirable in the sense that humans cannot make them in their observation of jointed objects. The most serious restrictions are the necessity of a connectedness model for the jointed object, needing a z-coordinate for the reference point in every frame, the necessity of knowing the correct reflections of the rigid parts, and the order dependence in rigid part views. These restrictions are necessary because the analysis of the moving object is only static, and does not take into account invariants in the object's motion. Dynamic analysis of the moving object is under active investigation and is yielding quite encouraging results that suggest that most, and perhaps all, of these restrictions can be removed.

V SUMMARY

A mathematical approach to the problem of jointed object observation has been presented. Given a connectedness model of the jointed object to be observed, the actual three-dimensional motion and rigid part lengths of the jointed object can be discovered by observation. This is done by constantly making minimizing assumptions about the object.

Further research must take into account the actual motion of the object in a more sophisticated way. In order to overcome the deficiencies of the currently proposed method it is necessary to have a more complete understanding of how objects can be expected to move.

ACKNOWLEDGEMENTS

Fruitful discussions with J. K. Aggarwal, Larry Davis, Worthy Martin, and John Roach are gratefully acknowledged.

REFERENCES

1. Roberts, L.G., "Machine perception of three-dimensional solids." In Optical and Electro-Optical Information Processing, J.T. Tippett, et al., Eds., 159-197. 1965.

2. Ullman, S., The Interpretation of Visual Motion. The MIT Press, Cambridge, MA. 1979.

3. Rashid, R.F., "Lights: A study in motion." In Proc. of the ARPA Image Understanding Workshop, Los Angeles, CA, 57-68. November 1979.

4. Roach, J.W., Determining the Three-Dimensional Motion and Model of Objects from a Sequence of Images. Ph.D. dissertation, University of Texas at Austin, Department of Computer Science. May 1980.

5. Johansson, G., "Visual perception of biological motion and a model for its analysis." Perception and Psychophysics, 14, (2), 201-211. 1973.

6. Kozlowski, L.T. and J.E. Cutting, "Recognizing the sex of a walker from dynamic point-light displays." Perception and Psychophysics, 21, (6), 575-580. 1977.

7. Johansson, G., "Spatio-temporal differentiation and integration in visual motion perception." Psychological Research, 38, 379-383. 1976.

8. Johansson, G. and G. Jansson, "Perceived rotary motion from changes in a straight line." Perception and Psychophysics, 4, (3), 165-170. 1968.

9. Gibson, J.J., The Perception of the Visual World. Houghton Mifflin Co., Boston, 1950.
| 1980 | 89 |
88 |
A TECHNIQUE FOR ESTABLISHING COMPLETENESS RESULTS IN THEOREM PROVING WITH EQUALITY

Gerald E. Peterson
Department of Mathematical Sciences
University of Missouri at St. Louis
St. Louis, MO 63121

ABSTRACT

This is a summary of the methods and results of a longer paper of the same name which will appear elsewhere.

The main result is that an automatic theorem proving system consisting of resolution, paramodulation, factoring, equality reversal, simplification and subsumption removal is complete in first-order logic with equality. When restricted to sets of equality units, the resulting system is very much like the Knuth-Bendix procedure. The completeness of resolution and paramodulation without the functionally reflexive axioms is a corollary. The methods used are based upon the familiar ideas of reduction and semantic trees, and should be helpful in showing that other theorem proving systems with equality are complete.

I INTRODUCTION

A. Paramodulation

Attempts to incorporate equality into automatic theorem provers began about 1969 when Robinson and Wos [6] introduced paramodulation and proved that if the functionally reflexive axioms were added to the set of clauses, then resolution and paramodulation constituted a complete set of inference rules. In 1975 Brand [1] showed that resolution and paramodulation are complete even without the functionally reflexive axioms. Unfortunately, the usefulness of these results is limited because unrestricted paramodulation is a weak inference rule which rapidly produces mountains of irrelevant clauses.

B. The Knuth-Bendix Procedure

In 1970 Knuth and Bendix [2], working independently of Robinson and Wos, created a very effective procedure for deriving useful consequences from equality units. Their process used paramodulation, but since it also used simplification and subsumption removal, most of the derived equalities were discarded and the search space remained small. The main defects of this procedure are that each equality must be construed as a reduction, so the commutative law is excluded, and the process works only on equality units, so most mathematical theories, including field theory, are excluded.

C. The Goal

Since resolution and paramodulation constitute a complete set of inference rules, their use will provide a proof of any valid theorem if given sufficient (usually very large) time and space. On the other hand, the Knuth-Bendix process is effective (usually small time and space) on a small class of theorems. We need to combine these two approaches and produce, if possible, an effective, complete prover.

Some progress toward this goal has been reported. For example, the commutative law can be incorporated into the Knuth-Bendix procedure by using associative-commutative unification [4,5]. Also, restricted completeness results (i.e. the set of clauses must have a certain form) have been obtained for systems which appear to be more effective than resolution and paramodulation [3].

D. Contributions of this Paper

An impediment to progress toward the goal has been the lack of an easily used technique for obtaining completeness results. We show here how the use of semantic trees can be generalized to provide completeness proofs for systems involving equality. We use this technique to obtain unrestricted completeness results for a system which is thought to be fairly effective. The verification of effectiveness will require experiments which have not yet been performed.

II METHODS AND RESULTS

A. Semantic Trees

One approach to obtaining completeness theorems is the use of semantic trees. To obtain a semantic tree T(S) for a set S of clauses, we first order the atoms of the Herbrand base B, say B = {B1, B2, ...}.
We use this technique to obtain unrestricted completeness results for a system which is thought to be fairly effective. The verification of effectiveness will require experiments which have not yet been performed.

II METHODS AND RESULTS

A. Semantic Trees

One approach to obtaining completeness theorems is the use of semantic trees. To obtain a semantic tree T(S) for a set S of clauses, we first order the atoms of the Herbrand base B, say B = {B1, B2, ...}. Then we build the binary tree T by giving each node at level k-1 two sons labelled Bk and ¬Bk, respectively. There will then be a one-to-one correspondence between the branches of T and the Herbrand interpretations.

If the set S is unsatisfiable, then it will be falsified by every branch of T, and as we move down a branch b of T we will come to a node nb at which it first becomes clear that b does not satisfy S. The node nb is called a failure node of T. The portion of T(S) from the root, extending up to and including the failure nodes, is called the closed semantic tree for S, τ(S). An inference node of τ(S) is a node whose children are both failure nodes.

Every failure node nb has an associated clause Cb in S which caused the failure. That is, there is a ground instance Cbθ of Cb such that if L is a literal of Cbθ, then ¬L occurs on b at or above nb, with one such ¬L occurring at nb.

It can be shown that the two clauses associated with the children of an inference node will resolve to produce a new clause C which causes failure at or above the inference node, and therefore τ(S ∪ C) is smaller than τ(S). By performing a sequence of such resolutions we can eventually get the closed semantic tree to shrink to a single node, and this will imply that the empty clause has been inferred.

B. Incorporating Equality

Problems arise when we attempt to use this process to obtain completeness results for systems involving equality. If S is E-unsatisfiable, then S is falsified by every E-interpretation but not necessarily by every interpretation. Thus it will only be on branches which are E-interpretations that failure nodes will exist in the usual sense. The other branches must be handled in some other manner.

1. Altering Interpretations

The approach we use is to alter an arbitrary interpretation I in a way such that the resulting interpretation I* is an E-interpretation. If I is itself an E-interpretation, then no alteration is needed, I* = I.

The alteration is made as follows. First order B in a way such that each equality atom occurs before any atom which contains either side of the equality as a subterm. (Other restrictions are also needed on this order.) For an arbitrary interpretation I, define a partial order →(I) on B such that A→B means essentially that B has been obtained from A by replacing a subterm s of A by a term t and I(s=t) = T. Now define I*(A) as I(A) if A is irreducible, and as I*(B) if A→B.

2. Substitutions

For ground substitutions θ, θ' (θ = {v1←t1, ..., vk←tk}), we write θ→θ' if θ' is identical to θ except that one term tj of θ has been replaced by tj' and tj→tj'. We say that θ is irreducible if every term ti of θ is irreducible.
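To make the semantic-tree machinery above concrete, here is a minimal propositional sketch (not from the paper): atoms stand in for the ordered Herbrand base B, a branch is a truth assignment built one atom at a time, and a failure node is the shallowest node at which some clause of S is already falsified. The clause encoding and all names below are illustrative assumptions.

    # A clause is a set of literals; the literal ('P', True) means atom P,
    # and ('P', False) means its negation.

    def falsifies(partial, clause):
        """True if the partial assignment already falsifies every literal."""
        return all(atom in partial and partial[atom] != sign
                   for atom, sign in clause)

    def failure_nodes(atoms, clauses, partial=None, depth=0, found=None):
        """Walk the binary tree over `atoms`, collecting (depth, assignment)
        pairs at which some clause of `clauses` first becomes falsified."""
        partial = {} if partial is None else partial
        found = [] if found is None else found
        for clause in clauses:
            if falsifies(partial, clause):
                found.append((depth, dict(partial)))   # failure node: stop here
                return found
        if depth == len(atoms):                        # branch satisfies S
            return found
        for value in (True, False):
            partial[atoms[depth]] = value
            failure_nodes(atoms, clauses, partial, depth + 1, found)
            del partial[atoms[depth]]
        return found

    # S = {P, ~P v Q, ~Q} is unsatisfiable, so every branch ends in failure:
    S = [{('P', True)}, {('P', False), ('Q', True)}, {('Q', False)}]
    print(failure_nodes(['P', 'Q'], S))

The three failure nodes returned make up the closed semantic tree; the two at depth 2 share a parent, which is therefore an inference node in the sense defined above.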
Suppose Cθ1 and Cθ2 are ground instances of a clause C. If θ1→θ2 then I*(Cθ1) = I*(Cθ2).

3. Failure Nodes

Let Ib be the interpretation associated with a branch b of T(S). Then Ib* will be an E-interpretation and will, therefore, be falsified by some clause in S. That is, there will be a ground instance Cθ of a clause C in S such that Ib*(Cθ) is false and θ is irreducible (Ib). (If θ were reducible we could, by the previous paragraph, reduce it to an irreducible θ' such that Ib*(Cθ') = F.) Every literal L of Cθ will be falsified by Ib*, and there will exist a failure node nb such that ¬L occurs on b at or above nb, with one such ¬L occurring at nb. These failure nodes can be split into two categories as follows. An R failure node, nb, is one such that the associated clause C is irreducible (Ib) (thus Ib(C) = F), and a P failure node is any failure node which is not an R failure node.

4. Inference Nodes

The two categories of failure nodes lead to two categories of inference nodes. A resolution inference node is a node with two R failure node children, and is essentially the same thing as an inference node in a semantic tree for a set without equality. A paramodulation inference node is a P failure node nb such that every equality-node ancestor of nb has a brother which is an R failure node.

5. Summary of the Completeness Proof

It is easy to show that if S has no E-model, then τ(S), the closed semantic tree for S, has either a resolution or a paramodulation inference node. If τ(S) has a resolution inference node, then there will be a resolvent C of two clauses of S such that τ(S ∪ C) is smaller than τ(S).

If τ(S) has a paramodulation inference node nb, then there is a clause C2θ such that Ib*(C2θ) = F and C2θ is reducible (Ib), say C2θ→E. Now C2θ reduces to E using some equality s=t such that Ib(s=t) = T. Since s=t occurs in the ordering of B before the atom in C2θ to which it applies, a node labelled s=t occurs on b above nb. This node has a brother which is an R failure node, and hence there is a clause C1θ such that s=t is a literal of C1θ and if L is any other literal of C1θ, then Ib(L) = F. It follows that C1θ and C2θ have a paramodulant C'. This ground paramodulation can be lifted to the general level since θ is irreducible and therefore s must start somewhere in C2. (The lifting lemma for paramodulation holds only in this case.) Thus there is a clause C which is obtained by paramodulating C1 into C2 and which has a ground instance C' which is more reduced than C2θ. This greater reduction can be the basis for an ordering of the closed semantic trees involved, and in the sense of this order, the tree for S ∪ C will be smaller than the tree for S.

C. Deletion of Unnecessary Clauses

1. Subsumption

Completeness is not lost if subsumed clauses are deleted from S as the proof proceeds.

2. Simplification

If C1 = (s=t), C2 contains an instance sσ of s as a subterm, and sσθ > tσθ for all ground substitutions θ, then the clause C = C2[tσ] is a simplification of C2 using C1. If a clause has been simplified, then it may be deleted. (Our proof of this fails when the atom simplified is an equality of a certain form, but there are other reasons for believing it is still valid in this case.)
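The simplification rule above is essentially ground demodulation: an instance of the left side s of an equality s=t is rewritten to the corresponding instance of t, under an ordering that orients the rewrite. The following sketch of one rewrite step is illustrative only; the term encoding and the helper names (match, apply_subst, simplify_once) are assumptions, not the paper's notation.

    # Terms are tuples ('f', arg1, ...) for function applications and
    # constants, or bare strings for variables.

    def match(pattern, term, subst):
        """Extend `subst` so that pattern*subst == term, or return None."""
        if isinstance(pattern, str):                       # variable
            if pattern in subst:
                return subst if subst[pattern] == term else None
            subst = dict(subst)
            subst[pattern] = term
            return subst
        if isinstance(term, str) or pattern[0] != term[0] or len(pattern) != len(term):
            return None
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst

    def apply_subst(subst, term):
        if isinstance(term, str):
            return subst.get(term, term)
        return (term[0],) + tuple(apply_subst(subst, a) for a in term[1:])

    def simplify_once(term, lhs, rhs):
        """Rewrite the first subterm matching lhs; None if irreducible."""
        s = match(lhs, term, {})
        if s is not None:
            return apply_subst(s, rhs)
        if isinstance(term, str):
            return None
        for i, arg in enumerate(term[1:], start=1):
            new = simplify_once(arg, lhs, rhs)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
        return None

    # f(g(a)) simplified by the oriented equality g(x) = x gives f(a):
    print(simplify_once(('f', ('g', ('a',))), ('g', 'x'), 'x'))

A full prover would, as restriction 1 of the final result below requires, apply such steps (and subsumption removal) with priority, since they never increase the size of S.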
D. The Final Result

A complete system for first-order logic with equality may consist of resolution, paramodulation, factoring, equality reversal, simplification, and subsumption removal with the following restrictions.

1. Simplification and subsumption removal are given priority since they do not increase the size of S.

2. No paramodulation into variables.

3. All paramodulations replace s by t where sθ > tθ for at least one ground substitution θ.

4. If s > t then no reversal of the equality (s=t) will be necessary, and if C1 is obtained from C by reversing (t=s) then C may be deleted.

REFERENCES

[1] D. Brand, "Proving Theorems with the Modification Method," SIAM J. Comput. 4 (1975) 412-430.

[2] D. E. Knuth and P. B. Bendix, "Simple Word Problems in Universal Algebras," in Leech, J. (ed.), Computational Problems in Abstract Algebras, Pergamon Press, 1970, 263-297.

[3] D. S. Lankford and A. M. Ballantyne, "The Refutation Completeness of Blocked Permutative Narrowing and Resolution," presented at the Fourth Workshop on Automated Deduction, February 1-3, 1979.

[4] D. S. Lankford and A. M. Ballantyne, "Decision Procedures for Simple Equational Theories with Commutative-Associative Axioms: Complete Sets of Commutative-Associative Reductions," Technical Report, Mathematics Department, University of Texas at Austin, August 1977.

[5] G. E. Peterson and M. E. Stickel, "Complete Sets of Reductions for Some Equational Theories," to appear in JACM.

[6] G. A. Robinson and L. Wos, "Paramodulation and Theorem Proving in First Order Theories with Equality," Machine Intelligence 4, American Elsevier, New York, 1969, 135-150.
 | 
	1980 
 | 
	9 
 | 
					
89 
							 | 
BOOTSTRAP STEREO

Marsha Jo Hannah
Lockheed Palo Alto Research Laboratory
Department 52-53, Building 204
3251 Hanover Street, Palo Alto, CA 94304

ABSTRACT

Lockheed has been working on techniques for navigation of an autonomous aerial vehicle using passively sensed images. One technique which shows promise is bootstrap stereo, in which the vehicle's position is determined from the perceived locations of known ground control points, then two known vehicle camera positions are used to locate corresponding image points on the ground, creating new control points. This paper describes the components of bootstrap stereo.

I INTRODUCTION

Before the advent of sophisticated navigation aids such as radio beacons, barnstorming pilots relied primarily on visual navigation. A pilot would look out the window of his airplane, see landmarks below him, and know where he was. He would watch the ground passing beneath him and estimate how fast and in what direction he was moving.

Today, there exist applications for which a computer implementation of this simple, visually oriented form of navigation would be useful. One scenario hypothesizes a small, unmanned vehicle which must fly accurately from its launch point to its target under possibly hostile circumstances.

Figure 1 Navigation Using Bootstrap Stereo. (The figure's table shows that at Time 0 the craft's position is derived from known features a, b; at Time 1 from a, b, locating new features c, d; and at Time 2 from c, d, locating e, f.)

Our overall approach to the problem involves providing the vehicle with a Navigation Expert having approximately the sophistication of an early barnstorming pilot. This expert will navigate partly by its simple instruments (altimeter, airspeed indicator, and attitude gyros), but mostly by what it sees of the terrain below it. This paper covers one aspect of the Navigation Expert, a technique which we call bootstrap stereo.

II THE BOOTSTRAP STEREO CONCEPT

Given a set of ground control points with known real-world positions, and given the locations of the projections of these points onto the image plane, it is possible to determine the position and orientation of the camera which collected the image. Conversely, given the positions and orientations of two cameras and the locations of corresponding point-pairs in the two image planes, the real-world locations of the viewed ground points can be determined [1]. Combining these two techniques iteratively produces the basis for bootstrap stereo.

Figure 1 shows an Autonomous Aerial Vehicle (AAV) which has obtained images at three points in its trajectory. The bootstrap stereo process begins with a set of landmark points, simplified here to two points a and b, whose real-world coordinates are known. From these, the camera position and orientation are determined for the image frame taken at Time 0. Standard image-matching correlation techniques [2] are then used to locate these same points in the second, overlapping frame taken at Time 1. This permits the second camera position and orientation to be determined.
Because the aircraft will soon be out of sight of the known landmarks, new landmark points must be established whenever possible. For this purpose, "interesting points" -- points with a high likelihood of being matched [3] -- are selected in the first image and matched in the second image. Successfully matched points have their real-world locations calculated from the camera position and orientation data, then join the landmarks list. In Figure 1, landmarks c and d are located in this manner at Time 1; these new points are later used to position the aircraft at Time 2. Similarly, at Time 2, new landmarks e and f join the list; old landmarks a and b, which are no longer in the field of view, are dropped from the landmarks list.

Once initialized from a set of known landmarks, bootstrap stereo has four components -- camera calibration, new landmark selection, point matching, and control point positioning. Because camera calibration and control point positioning have been well covered in the photogrammetric and imaging literatures (e.g., [1], [4], [5], [6]), we will discuss only landmark selection and point matching in the following sections.

III NEW LANDMARK SELECTION

Because the aircraft rapidly moves beyond the known landmarks, new landmark points must constantly be established. For this purpose, "interesting points" -- points with a high likelihood of being matched [3] -- are selected in the old image of each pair, then matched with their corresponding points in the new image and located on the terrain.

Matching is done on the basis of the normalized cross-correlation between small windows of data (typically 11 x 11) around the two points in question. Matching has trouble in areas that contain little information or whose only information results from a strong linear edge; therefore such areas make poor candidate landmarks.

To avoid mismatches from attempting to use such areas, various measures of the information in the window have been used, including the simple statistical variance of the image intensities over the window [2] and the minimum of the directed variances over the window [3]. We have combined these into another interest measure, which we call edged variance, which appears to perform better than either of its components [7].

We have defined our interesting points to be those which are local peaks in our interest measure, with a lower bound established to reject undesirable areas. Figure 2 includes some examples of the application of this interest measure.

IV POINT MATCHING

The actual matching of points in an image pair is done by maximizing normalized cross-correlation over small windows surrounding the points. Given an approximation to the displacement which describes the match, a simple spiraling grid search is a fairly efficient way to refine the precise match [2]. To provide that initial approximation, we have employed a form of reduction matching [3].

We first create a hierarchy of N-ary reduction images. Each N x N square of pixels in an image is averaged to form a single pixel at the next level. This reduction process is repeated at each level, stopping when the image becomes approximately the size of the correlation windows being used.
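The edged variance measure of [7] is not specified in this paper, so the sketch below shows only one of the ingredients named above: the minimum of the directed variances over a window, in the spirit of Moravec's interest operator [3]. The window size, the four directions, and all names are illustrative assumptions.

    import numpy as np

    def min_directed_variance(img, r=5):
        """For each pixel, the minimum over four directions of the mean
        squared difference between neighboring pixels in a (2r+1)^2 window."""
        img = img.astype(float)
        h, w = img.shape
        out = np.zeros((h, w))
        shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]   # E, S, SE, SW neighbors
        for y in range(r, h - r - 1):
            for x in range(r + 1, w - r - 1):
                win = img[y - r:y + r + 1, x - r:x + r + 1]
                out[y, x] = min(
                    np.mean((win - img[y - r + dy:y + r + 1 + dy,
                                       x - r + dx:x + r + 1 + dx]) ** 2)
                    for dy, dx in shifts)
        return out

    # Interesting points are then local peaks of the measure above a threshold:
    img = np.random.rand(24, 24) * 255
    print(min_directed_variance(img).max() > 0)      # True on random texture

Taking the minimum over directions is what rejects strong linear edges: along the edge direction the variance collapses, so an edge cannot masquerade as a well-localizable point.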
Matching then begins at the smallest images, with the center point of the first image being matched via a spiral search. Thereafter, each matched point spawns four points around itself, offset by half a window radius along the diagonals of the window. These are mapped down to the next level of images, carrying their parent's displacement (suitably magnified) as their suggested match approximation. These matches are refined by a spiraling search before spawning new points. This process continues until the largest images are reached, effectively setting up a grid of matched points.

In our implementation of bootstrap stereo, reduction matching is used to determine approximate registration of the images and to initialize the second-order match prediction polynomials. Matching of old landmarks and of interesting points to create new landmarks uses these polynomials to predict an approximate match, which is then refined by a local search. Autocorrelation thresholding is used to test the reliability of the match; then points are located more closely than the image grid permits by parabolic interpolation of the X- and Y-slices of the correlation values.

V AN EXAMPLE

In Figure 2, we present an example of the control-point handling portion of bootstrap stereo. The original data set, a sequence of 3 images from a video tape taken over the Night Vision Laboratory terrain model, is shown in Figure 2a.

Figure 2b shows the interesting points in the first image, indicated by + overlays. If these were the control points from a landmark processor, we would use them to locate the first camera. These landmark points are next matched with their corresponding points in the second image; Figure 2c shows the successful matches overlaid on the first and second images. From the image plane positions of these points, the position and orientation of the second camera are determined.

Next, the areas of the second image which were not covered by matches are blocked out and interesting points are found in the uncovered areas, as seen in Figure 2d. The old landmark points and the interesting points are then matched in the third image, as shown in Figure 2e. The old control points from the second image are used to calibrate the third camera; the camera calibrations are then used to locate the matched interesting points on the ground, forming new control points. These two steps are then repeated for subsequent pairs of images in longer sequences.

VI CONCLUSIONS

When an autonomous aerial vehicle must navigate without using external signals or radiating energy, a visual navigator is an enticing possibility. We have proposed a Navigation Expert capable of emulating the behavior of an early barnstorming pilot in using terrain imagery. One tool such a Navigation Expert could use is bootstrap stereo. This is a technique by which the vehicle's position is determined from the perceived positions of known landmarks, and two known camera positions are then used to locate real-world points which serve as new landmarks.

The components of bootstrap stereo are well established in the photogrammetry and image processing literature. We have combined these, with improvement, into a workable system. We are working on an error simulation, to determine how the errors propagate and accumulate.
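Two of the steps above are simple enough to make concrete: building the N-ary reduction pyramid, and the parabolic sub-pixel interpolation of a correlation slice. The sketch below assumes square block averaging and a three-point parabola fit; function names and parameters are illustrative, not Hannah's implementation.

    import numpy as np

    def build_pyramid(img, n=2, min_size=16):
        """Average each n x n block into one pixel; repeat until the image
        is roughly the size of the correlation window."""
        levels = [img.astype(float)]
        while min(levels[-1].shape) // n >= min_size:
            a = levels[-1]
            h, w = (a.shape[0] // n) * n, (a.shape[1] // n) * n
            levels.append(a[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3)))
        return levels

    def parabolic_offset(c_minus, c_peak, c_plus):
        """Sub-pixel offset of the maximum of the parabola through three
        equally spaced correlation samples, the middle one being the
        discrete peak."""
        denom = c_minus - 2.0 * c_peak + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    print(len(build_pyramid(np.zeros((256, 256)))))   # 5 levels: 256 ... 16
    print(parabolic_offset(0.80, 0.95, 0.90))         # +0.25: peak right of center

Applying parabolic_offset separately to the X- and Y-slices of the correlation surface around the integer peak gives the sub-pixel location described in the text.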
VII REFERENCES

[1] Thompson, M. M., Manual of Photogrammetry, American Society of Photogrammetry, Falls Church, Virginia, 1944.

[2] Hannah, M. J., Computer Matching of Areas in Stereo Images, PhD Thesis, AIM-239, Computer Science Department, Stanford University, California, 1974.

[3] Moravec, H. P., "Visual Mapping by a Robot Rover," Proceedings of the 6th IJCAI, Tokyo, Japan, 1979.

[4] Duda, R. O. and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley and Sons, New York, New York, 1973.

[5] Fischler, M. A. and R. C. Bolles, "Random Sampling Consensus," Proceedings: Image Understanding Workshop, College Park, Maryland, April 30, 1980.

[6] Gennery, D. B., "A Stereo Vision System for an Autonomous Vehicle," Proceedings of the 5th IJCAI, Cambridge, Massachusetts, 1977.

[7] Hannah, M. J., "Bootstrap Stereo," Proceedings: Image Understanding Workshop, College Park, Maryland, April 30, 1980.

Figure 2 An Example of the Control-Point Handling for Bootstrap Stereo
a) The original sequence of 3 images.
b) The interesting points in Image 1.
c) The matched points between Images 1 and 2.
d) The areas of Image 2 covered by matches, with interesting points found in the uncovered areas.
e) The control points in Image 2 matched to Image 3.
 | 
	1980 
 | 
	90 
 | 
					
90 
							 | 
LOCATING PARTIALLY VISIBLE OBJECTS: THE LOCAL FEATURE FOCUS METHOD

Robert C. Bolles
SRI International, Menlo Park, California 94025

ABSTRACT

The process is robust because it bases its decisions on groups of mutually consistent features, and it is relatively fast because it concentrates on key features that are automatically selected on the basis of a detailed analysis of CAD-type models of the objects.

I. INTRODUCTION

There are several tasks that involve locating partially visible objects. They range from relatively easy tasks, such as locating a single two-dimensional object, to the extremely difficult task of locating and identifying three-dimensional objects jumbled together in a bin. In this paper we describe a technique to locate and identify partially overlapping two-dimensional objects on the basis of two-dimensional models.

Sequential [1,2,3] and parallel [4,5] approaches have been taken to solve this problem. In the sequential approach, one feature after another is located and as much information as possible is derived from the position and orientation of each feature. This approach is fast because it locates the minimum number of features; however, if the objects are complicated, determining the order of the features to be located may be difficult. Development of the location strategy becomes even more difficult when mistakes are taken into account.

In the parallel approach, all the features in an image are located, and then large groups of them are used to recognize objects; graph-matching [6], relaxation [5,7], and histogram techniques [8,9] can be used to determine the feature groups. This approach is robust because it bases its decisions on all the available information, and the location strategy is straightforward because all the features are used. For even moderately complex objects, however, the quantity of data to be processed makes use of this approach impractical on current computers.

Described here is a method, called the Local Feature Focus (LFF) method, that combines the advantages of the sequential and parallel approaches while avoiding some of their disadvantages. This is achieved by careful analysis of the object models and selection of the best features.

II. LOCAL FEATURE FOCUS METHOD

The basic principle of the LFF method is to locate one relatively reliable feature and use it to partially define a coordinate system within which a group of other key features is located. Enough of the secondary features are located to uniquely identify the focus feature and determine the position and orientation of the object of which it is a part. Robustness is achieved by using a parallel matching scheme to make the final decisions, and speed is achieved by carefully selecting information-rich features.

The idea of concentrating on one feature is not new; it has been used in special-purpose vision programs. What is new here is the ability to generate focus features and their secondary features automatically from object models. This automatic feature selection, when perfected, will significantly reduce the need for people to program recognition procedures and thus make possible quick and inexpensive application of the LFF method to new objects.
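The basic principle can be sketched schematically: a detected focus feature anchors the comparison, and nearby detected features are checked against the expected secondary features by type and distance. The toy count below stands in for the paper's full maximal-clique matching scheme; all data structures and numbers are invented for illustration.

    from math import hypot

    def consistent(focus_xy, detections, expected, tol=2.0):
        """Count expected secondary features (type, distance-from-focus)
        explained by some detected feature (type, x, y)."""
        hits = 0
        for ftype, dist in expected:
            if any(t == ftype and
                   abs(hypot(x - focus_xy[0], y - focus_xy[1]) - dist) <= tol
                   for t, x, y in detections):
                hits += 1
        return hits

    # Hypothetical model: a focus hole with two slots expected at distance 40.
    expected = [("slot", 40.0), ("slot", 40.0)]
    detections = [("slot", 140.0, 100.0), ("slot", 60.0, 100.0), ("hole", 90.0, 95.0)]
    print(consistent((100.0, 100.0), detections, expected))   # 2

A real system must also keep the pairing of detections to expectations one-to-one and mutually consistent in angle as well as distance, which is exactly what the maximal-clique matching described below provides.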
Figure 1 THE TOP-LEVEL BLOCK DIAGRAM (object models are analyzed at training time into focus features and their secondary features; execution-time processing then turns an image of objects into object identities and positions)

As Figure 1 shows, the analysis of object models is performed once, during training time, and the results of the analysis are used repeatedly during execution time, making this approach particularly attractive when large numbers of objects are to be processed. In the rest of this paper, we concentrate on the training-time analysis.

III. ANALYSIS

The goal of the analysis is to examine a model of an object (or objects) such as the one in Figure 2, and generate a list of focus features and their associated secondary features. Given this information and a picture such as the one in Figure 3, the execution-time system tries to locate occurrences of the objects. In the current implementation of the system, objects are modeled as structures of regions, each of which is bounded by a sequence of line segments and arcs of circles. The execution-time system uses a maximal-clique graph-matching method to locate the groups of features that correspond to occurrences of the objects. Therefore, the analysis is tailored to produce the information required by the maximal-clique matching system. In particular, the description of each secondary feature includes the feature type, its distance from the focus feature, and a list of the possible identities for the feature.

Figure 2 AN OBJECT MODEL

Figure 3 AN EXAMPLE TO BE PROCESSED

The analysis to produce this information is performed in five steps:

(1) Location of interesting features
(2) Grouping of similar features
(3) Rotational symmetry analysis of each object
(4) Selection of secondary features
(5) Ranking of focus features.

The purpose of the first step is to generate the set of all features of the objects that could be located at execution time. Typical features include holes, corners, protrusions, and intrusions. For the model in Figure 2, the set of features contains all 14 internal holes.

In the second step, the set of features is partitioned into subsets of "similar" features. Features are defined to be similar if they are likely to be indistinguishable at execution time. For the model in Figure 2, feature detectors can distinguish at most three types of holes: "slots," "small holes," and "large holes." Therefore, the set of interesting features is partitioned into three subsets, each defining a possible focus feature.

In the third step, a complete rotational symmetry analysis of each object is performed [12]. The rotational symmetry is used to determine the number of structurally different occurrences of each feature. Because the model in Figure 2 is twofold rotationally symmetric, the features occur in pairs, the members of which are indistinguishable on the basis of the relative positions of other features of the object. Instead of four types of small holes, there are only two, one on the axis between the slots and one off that axis.
Figure 4 SECONDARY FEATURES FOR SMALL HOLES

The fourth step in the analysis is the most complicated. The goal is to select secondary features for each focus feature. The secondary features must distinguish between the structurally different occurrences of the focus feature and determine the position and orientation of the object. In Figure 2, for example, given an occurrence of a small hole, what nearby features could be used to determine whether it is one of the holes on the axis or off of it? There are two slots close to the small hole on the axis and only one near the off-axis occurrence. In addition, the slots are at different distances from the holes. Let D1 be the distance between the on-axis small hole and its slots, and let D2 be the distance from the off-axis small hole to the nearest slot. Figure 4 shows circles of radii D1 and D2 centered on the two different types of small holes. Tabulated below are the feature occurrences that are sufficient to determine the type of the small hole and compute the position and orientation of the object.

ON-AXIS SMALL HOLE -- two slots at D1; no slots at D2
OFF-AXIS SMALL HOLE -- no slots at D1; one slot at D2

The analysis in step 4 locates secondary features in two substeps. First it performs a rotational symmetry analysis centered on each structurally different occurrence of a focus feature. This analysis builds a description of the object in terms of groups of features that are similar and equidistant from the focus feature. Figure 5 shows the groups of features produced by the current system when focusing on one of the small holes. In the second substep, the analysis iteratively selects groups of features from these descriptions to be included in the set of secondary features associated with the focus feature. Groups are selected for their contribution to identifying an occurrence of the focus feature or determining the position and orientation of the object.

Figure 5 FEATURE GROUPS ABOUT A SMALL HOLE

The fifth and final step in the training-time analysis is the ranking of the focus features. The goal is to determine the order in which the focus features should be checked at execution time. The current system simply ranks them according to the number of secondary features required at execution time.

IV. DISCUSSION

The LFF method is a simple combination of the sequential and parallel approaches. It offers the reliability of a parallel approach and most of the speed of a sequential approach. The speed is achieved by using the location of the focus feature to define a coordinate system within which the other features are located. Quickly establishing a coordinate system significantly reduces the time required to find secondary features.

The utility of the LFF method depends on the reliability of locating focus features and the number of structurally different occurrences of these features in the objects. Fortunately, most industrial parts have good candidates for focus features. The problem is to find them at training time so they can be used at execution time.
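The tabulation for small holes reduces, in the simplest reading, to a counting test, sketched below with invented distances and tolerance. A real system would feed such evidence to the maximal-clique matcher rather than rely on hard counts, but the sketch shows how little information the selected secondary features need to carry.

    from math import hypot

    def classify_small_hole(hole, slots, d1, d2, tol=1.5):
        """Classify an occurrence of a small hole by counting slots at the
        two model distances D1 and D2 (see the table above)."""
        def count_at(d):
            return sum(abs(hypot(sx - hole[0], sy - hole[1]) - d) <= tol
                       for sx, sy in slots)
        n1, n2 = count_at(d1), count_at(d2)
        if n1 == 2 and n2 == 0:
            return "on-axis"
        if n1 == 0 and n2 == 1:
            return "off-axis"
        return "unknown"

    # One slot 25 units from the hole (matching D2) -> off-axis occurrence:
    print(classify_small_hole((0, 0), [(25, 0)], d1=18.0, d2=25.0))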
In fact, the more information gathered at training time, the more efficient the system is at execution time. Also, as the training-time analysis is made more automatic, correspondingly less time is required of a human programmer.

The current implementation of the training-time analysis forms the basis for a completely automatic feature selection system. Several extensions are possible. For example, the system could select extra features to guarantee that the execution-time system would function properly even if a prespecified number of mistakes were made by the feature detectors. The system could use the orientation of a focus feature, if it exists, to determine the orientation of the feature-centered coordinate system. The system could also select two or more groups of features at one time, which is necessary for some more difficult tasks such as distinguishing an object from its mirror image. Finally, the system could incorporate the cost and reliability of locating a feature in the evaluation of the feature.

In conclusion, the LFF method is a combination of the sequential and parallel approaches that provides speed and reliability for many two-dimensional location tasks. The automatic selection of features makes it particularly attractive for industries such as the aircraft industry that have hundreds of thousands of different parts and cannot afford a special-purpose program for each one.

REFERENCES

[1] S. Tsuji and A. Nakamura, "Recognition of an Object in a Stack of Industrial Parts," Proc. IJCAI-75, Tbilisi, Georgia, USSR, pp. 811-818 (August 1975).

[2] S. W. Holland, "A Programmable Computer Vision System Based on Spatial Relationships," General Motors Research Pub. GMR-2078 (February 1976).

[3] W. A. Perkins, "A Model-Based Vision System for Industrial Parts," IEEE Transactions on Computers, Vol. C-27, pp. 126-143 (February 1978).

[4] A. P. Ambler et al., "A Versatile Computer-Controlled Assembly System," Proc. IJCAI-73, Stanford, California (August 1973).

[5] S. W. Zucker and R. A. Hummel, "Toward a Low-Level Description of Dot Clusters: Labeling Edge, Interior, and Noise Points," Computer Graphics and Image Processing, Vol. 9, No. 5.

[6] R. C. Bolles, "Robust Feature Matching Through Maximal Cliques," Proc. SPIE's Technical Symposium on Imaging Applications for Automated Industrial Inspection and Assembly, Washington, D.C. (April 1979).

[7] S. T. Barnard and W. B. Thompson, "Disparity Analysis of Images," to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (July 1980).

[8] R. O. Duda and P. E. Hart, "Use of the Hough Transform To Detect Lines and Curves in Pictures," CACM, Vol. 15, No. 1, pp. 11-15 (January 1972).

[9] S. Tsuji and F. Matsumoto, "Detection of Ellipses by a Modified Hough Transformation," IEEE Transactions on Computers, Vol. C-27, No. 8, pp. 777-781 (August 1978).

[10] J. T. Olsztyn and L. Rossol, "An Application of Computer Vision to a Simulated Assembly Task," Proc. IJCPR-73, Washington, D.C. (October 1973).

[11] D. F. McGhie, "Programmable Part Presentation," in the SRI Ninth Report on Machine Intelligence Research Applied to Industrial Automation, pp. 39-44 (August 1979).

[12] R. C. Bolles, "Symmetry Analysis of Two-Dimensional Patterns for Computer Vision," Proc. IJCAI-79, Tokyo, Japan, pp. 70-72 (August 1979).
 | 
	1980 
 | 
	91 
 | 
					
91 
							 | 
INTERFERENCE DETECTION AND COLLISION AVOIDANCE AMONG THREE DIMENSIONAL OBJECTS*

N. Ahuja, R. T. Chien, R. Yen, and N. Bridwell
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

ABSTRACT

Two methods for detecting intersections among three dimensional objects are described. The first method involves detecting overlap among the projections of the objects on a given set of planes. The second method uses a three dimensional octree representation of the objects. Intersections are detected by traversing the trees for the obstacles and the moving objects. Application of the methods to collision avoidance is discussed.

I INTRODUCTION

An important problem in the robotic manipulation of an environment is that of following collision-free trajectories when objects have to be moved. A representation or model of the spatial configuration of objects is necessary to obtain safe solutions, i.e. those which avoid collisions. In addition, a capability to plan an efficient trajectory, say, one following the shortest available path between the starting and the goal configurations, is desirable. In either case, a procedure is required to decide if an arbitrary configuration involves occupancy of a given region of space by multiple objects.

The problem of interference detection usually involves static objects, whereas collision detection refers to a situation where at least one object is in motion. Collision detection may be viewed as a sequence of intersection checks among appropriately defined static objects, at appropriate time intervals. Thus the basic problem appears to be that of detecting intersection among a set of objects.

Given a configuration of objects, it is not hard to develop an algorithm to check any intersections among the objects. However, in a reasonably complex environment and for reasonable speeds of object manipulation, e.g. translation, rotation, etc., the availability of a limited computational power may demand efficient procedures. At the heart of the design of such procedures lies the need for a good representation of the objects. Algorithms must be developed that take advantage of the properties of the chosen representation to efficiently track the dynamic environment.

In the past, most of the related work has used polyhedral approximations of objects [1,2,4,5,11]. This facilitates a treatment of a complex solid in terms of a set of relatively simpler, planar patches used to approximate its surface. For the case of convex objects, Comba [2] obtains a pseudo-characteristic function in terms of expressions for the planar patches. The function assumes nonpositive values in regions that approximate the parts of the space where all the objects intersect. Maruyama [5] compares the minimal boxes containing the objects. Boyse [1] considers three different types of objects: solids, surfaces and containers. Intersections are detected by checking if an edge of one object intersects a face of another. Collisions are detected by checking interference among the obstacles and the surfaces traced by the edges and faces of the moving objects.

Udupa [11] uses two different representations for the manipulator and the environment.
The obstacles are modelled by polyhedra (not necessarily convex) in Cartesian space. The manipulator links of the Scheinman arm are approximated by the minimum bounding cylinders. Abstractions are introduced by replacing the links by their axes and enlarging the obstacles proportionately, by retaining only the major link, etc. Lozano-Perez and Wesley [4] also grow the obstacles and shrink the moving parts such that collisions in the modified representation occur if and only if they occur in the original space.

It is clear that the representation of the objects plays a major role in determining the feasibility and performance of any intersection or collision detection method using that representation. This paper discusses two methods of representation with the aim of making the resulting interference checking operations more efficient.

Section II examines a method based upon a set of two dimensional projections of the objects. It uses a conservative criterion to detect the occurrence of interference. Section III discusses a representation of the objects by regions of a fixed shape, but varying sizes, determined by a recursive decomposition of the space into successively smaller regions. Section IV outlines the application of the representations described in Sections II and III to collision avoidance. Section V presents some concluding remarks.

II PLANAR PROJECTIONS

A planar projection of a three dimensional configuration of objects will always show an overlap between the projections of any objects that intersect. However, the reverse is not necessarily true, i.e., overlapping projections do not necessarily correspond to intersecting objects. Such false alarms may sometimes be decreased by considering projections on a collection of planes. A configuration then is adjudged to be safe if each pair of objects has nonoverlapping projections on at least one plane. However, for any given number and choice of planes, spatial arrangements of noninterfering objects may be devised whose projections overlap. An increase in the number of planes may only decrease, but not eliminate, the probability of erroneous decisions. The error may also be lower for objects with simpler shapes, e.g., convex objects.

A. Basic Method

We will make the usual and reasonable assumption that objects may be represented by polyhedra. Each polyhedron is uniquely described by the coordinates of, and the adjacency relationships among, its vertices. The projection of a polyhedron on any plane is determined by the projections of the vertices, using the original adjacency relationships. The set of vectors OV'i for various i defines the vertices of the projection. To determine the shape of the polygonal projection, we must determine its border edges. This is done by obtaining the convex hull of the projections of the vertices.
For convex polyhedra, the edges of the convex hull will be a subset of the set of straight lines V'iV'j where Vi and Vj are adjacent vertices. On the other hand, nonconvex polyhedra may give rise to nonconvex polygonal projections. The convex hull of the corresponding projected vertices will include edges V'iV'j where Vi and Vj are not adjacent. The actual border of the projection can be found by replacing each such edge V'iV'j by a sequence of edges V'j1V'j2, V'j2V'j3, ..., V'j(k-1)V'jk, where k > 2, j1 = i, jk = j, and Vjr and Vj(r+1) are adjacent, 1 <= r <= k-1 (fig. 1). Obtaining the convex hull of N projected vertices takes O(N log N) time [10]. This also determines the overall complexity of obtaining a polygonal projection, since the computations required to obtain the vertices V'i are O(N). Thus, assuming that the fraction of edges of a polygonal projection that form concave cavities is low, the entire set of operations takes O(N log N) time.

Given a set of M planes, projections of all the objects are obtained on each of the planes. The polygons corresponding to a given pair of objects to be examined for intersection are then checked for overlap in the different planes. If a plane is found in which the two objects do not overlap, noninterference between them is guaranteed. Otherwise the objects may or may not intersect.

Shamos [10] gives an O(m+n) algorithm to find the intersection of a convex m-gon with a convex n-gon. Shamos [10] also describes a simpler algorithm suggested by Hoey. We use the latter. We are not interested in obtaining the exact region of intersection, but only in knowing if two given polygons intersect. For an m-gon and an n-gon, this needs only O(m+n) computations, compared to the O(m log m) and O(n log n) computations required to obtain the individual polygons.

When the projection of an object is nonconvex, we must extract a description in terms of convex polygons in order to make use of the above algorithm. One obvious way is to use the convex hull of each nonconvex m-gon. However, this introduces an additional source of false alarm, since the cavities in an object, appearing as concavities in its projection, are treated as solids.

An alternative way is to decompose a nonconvex polygon into a set of convex polygons (fig. 2). Each of these convex polygons must then be considered in each of the intersection tests where the parent polygon is involved. Pavlidis [7], and Feng and Pavlidis [3], give algorithms for convex decomposition. Schachter [9] constructs a partial Delaunay triangulation of an m-gon to obtain an O(rm) algorithm, where r is the number of concave vertices.

Figure 1 A nonconvex polyhedron and its nonconvex polygonal projection. To obtain the actual projection from its convex hull, the dotted line must be replaced by the remaining two sides of the triangle.

Figure 2 Decomposition of a nonconvex polygonal projection into convex polygons.
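For readers who want the two ingredients of the basic method in executable form, the sketch below pairs Andrew's monotone-chain convex hull (an O(N log N) method, standing in for the algorithm of [10]) with a linear-time separating-axis overlap test for convex polygons (a stand-in for the Shamos/Hoey test used in the paper, answering the same yes/no question). The conventions -- coordinate tuples and counterclockwise vertex lists -- are assumptions for illustration.

    def convex_hull(pts):
        """Andrew's monotone chain; returns the hull in CCW order."""
        pts = sorted(set(pts))
        if len(pts) <= 2:
            return pts
        def half(points):
            h = []
            for p in points:
                while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                       (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                    h.pop()
                h.append(p)
            return h
        lower, upper = half(pts), half(pts[::-1])
        return lower[:-1] + upper[:-1]

    def convex_overlap(a, b):
        """Separating-axis test for convex polygons given as CCW vertex lists."""
        for poly, other in ((a, b), (b, a)):
            for i in range(len(poly)):
                (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
                nx, ny = y2 - y1, x1 - x2                # outward edge normal
                if max(nx * px + ny * py for px, py in poly) < \
                   min(nx * px + ny * py for px, py in other):
                    return False                         # separating line found
        return True

    print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # interior point dropped
    sq = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(convex_overlap(sq, [(1, 1), (3, 1), (3, 3), (1, 3)]),   # True: overlap
          convex_overlap(sq, [(5, 0), (7, 0), (7, 2), (5, 2)]))   # False: disjoint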
B. Improving Efficiency

The complexity of intersection detection increases linearly with the number of projection planes used, since each plane must be examined for overlap until a zero overlap is found, or the planes are exhausted. To avoid applying the intersection algorithm to pairs of polygons that are far apart, coarse tests of intersection on the envelopes of the polygons may be used. For example, rectangular envelopes may be obtained by identifying the points having minimum and maximum x or y coordinate values. Similarly, a circular envelope may be obtained in terms of the diameter of the polygon. An overlap between two envelopes can be detected in constant time. However, obtaining either of the two envelopes itself takes O(m) time for an m-gon, and hence such a coarse test may be of marginal utility, since the intersection algorithm for polygons is already linear in the total number of vertices.

Suppose we are considering a pair of objects that do not intersect. Also suppose that their projections on at least one of the planes do not overlap. Then we would like to discover such a plane at the earliest. It may be useful to order the planes for examination according to some measure of the likelihood that a given plane demonstrates the desired relationships. An example of such a measure is the total black (polygonal) area. It takes O(m) time to compute the area of an m-gon. Thus the planes may be ordered for examination by performing a computation on each plane which is linear in the number of vertices in the plane.

The choice of the appropriate planes depends upon the given configuration. In general, a minimum of three planes would appear to be desirable to work with three dimensional objects. The computation of projections is trivial when the three orthogonal planes are used. The three projections are obtained by successively replacing one of the three coordinates by zero. Other planes with convenient orientations may also be used.

III THREE DIMENSIONAL REPRESENTATIONS

Extensions of the methods for representing two dimensional images [8] may be used for the representation of three dimensional objects. Thus MAT (medial axis transform), generalized cylinders and recursive subdivision of space may all be used, among others. In this paper, we will be concerned with the third of these methods.

A. Octrees

Just as the plane is recursively divided into squares in the quadtree representation of images [12,13], the three dimensional space may be subdivided into octants [14]. We start with the entire space as one block. If a block is completely contained within the object whose representation is sought, it is left alone. Otherwise, it is divided into eight octants (fig. 3a), each of which is treated similarly. The splitting continues until all the blocks are either completely within or completely outside the object (fig. 3a), or a block of minimum allowed size is reached, reflecting the limits on resolution. A method of subdivision of space was also employed by Maruyama [6]. However, he used rectanguloids.
A rectangular block was divided into two, along one of the three axes, as determined by the region of intersection of the block with the object.

The recursive subdivision allows a tree description of the occupancy of the space (fig. 3b). Each block corresponds to a node in the tree. Let us label a node black or white if it corresponds to a block which is completely contained within the (black) object or the (white) free space, respectively. Otherwise the node is labelled gray. The gray nodes have children unless they are of the minimum allowed size, in which case they are relabelled black. The free space covered by such nodes is thus treated as part of the object, in order to be conservative in detecting interference. The final tree has only black or white leaves.

Figure 3 (a) An object and its representation by recursive subdivision of space into octants. (b) Octree for the object in (a). The north-east, north-west, south-west and south-east octants in the upper layer correspond to the children having labels 1, 2, 3 and 4, respectively. The nodes corresponding to the octants in the lower layer are labelled 5, 6, 7 and 8. Dark (white) circles indicate black (white) leaves. The children nodes are arranged in increasing order of label values from left to right.

B. Interference Detection

Suppose we are given the octree representations of a pair of objects. Clearly, the objects intersect if there exists at least one pair of corresponding nodes in the two trees such that one of them is black, and the other is black or gray. Thus the existence of interference can be determined by traversing the two trees in parallel. Let A and B denote a pair of corresponding nodes at any time during the traversal. If either A or B is white, we do not traverse their children, and continue the traversal of the remaining nodes. The depth of the traversal along each path down from the root is determined by the shallower of the two paths terminating in a white leaf. The time required is proportional to the number of nodes in the subset of the nodes traversed.

To detect interference among n objects, n > 2, n trees must be traversed. The traversal below a certain node in a tree stops only if either the node is white, or the corresponding nodes in the remaining n-1 trees are all white. The time required depends upon the actual number of nodes visited.
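The parallel traversal just described is compact enough to sketch directly. In the sketch below, a node is 'B' (black), 'W' (white), or a gray node written as a list of eight children in a fixed octant order; this encoding is an illustrative assumption, not the authors' data structure.

    def interferes(a, b):
        """Parallel traversal of two octrees for the pairwise test above."""
        if a == 'W' or b == 'W':
            return False                      # free space intersects nothing
        if a == 'B' or b == 'B':
            return True                       # black meets black-or-gray
        return any(interferes(ca, cb) for ca, cb in zip(a, b))

    obj1 = ['B', 'W', 'W', 'W', 'W', 'W', 'W', 'W']   # black in octant 1 only
    obj3 = ['W', 'B', 'W', 'W', 'W', 'W', 'W', 'W']   # black in octant 2 only
    print(interferes(obj1, 'B'), interferes(obj1, obj3))   # True False

Note how the recursion stops as soon as either node is white, which is exactly the property that makes the cost proportional to the number of nodes actually traversed rather than to the full tree sizes.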
IV COLLISION AVOIDANCE

The use of interference detection in robotic manipulation lies in planning trajectories of moving objects in a given environment. The occupancy of any part of the space by more than one object can be used as indicative of a collision. To avoid a collision, its imminence must be foreseen. Therefore, intersections should be detected in a modified space. Suppose that it requires at least a distance d to brake the speed of a moving object or to change its course to avoid a collision. The value of d may depend upon different factors, including the speed. Then we must detect any obstacles within a distance d along the path of a moving object. For the obstacles off the path of the object, the required safe distance can be easily computed in terms of the relative location of the obstacles. Similarly, for two moving objects the required safe distance is 2d if they are approaching each other head on, and larger otherwise, given by the sizes of the objects, their speeds and directions of motion.

To use intersection detection for collision avoidance, the moving objects are grown [1,11] by a thickness d. Any intersection among the modified set of objects produces a timely warning of a possible collision.

The representations of the (static) obstacles are computed only once. Thus, we have a fixed set of projections of the obstacles on the planes being used, and a single octree representing the space occupied by the obstacles. Each moving object is represented separately. Every time an interference check has to be made, the representations of the moving objects are obtained. Thus, in the case of the planar projections, the current state of each of the moving objects is projected on each of the planes, which already have the appropriate projections of the obstacles. The usual procedure to check intersection is then carried out. In the case of the octree representation, a new tree is generated for each of the moving objects. A parallel traversal of these trees and the obstacle tree detects any interference present.

The above checks for intersection are applied to a snapshot of the potentially continuously varying spatial configuration of objects. Hence care must be taken to ensure that no collisions will occur between any two successive executions of the algorithm. This requires that not only should we detect any existing intersections, but also note if some objects are too close, and might collide before the instant when the algorithm is applied again. The safe distance D between any two objects is proportional to the speeds and relative locations of the objects involved. The interval T between two successive applications of the algorithm is controlled such that the objects do not come closer than D during the time interval T. This is accomplished by growing the objects further by a distance D. Then the problem of collision avoidance only involves periodic intersection detection among the modified objects.
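The timing rule above amounts to simple arithmetic, sketched below under a worst-case head-on assumption; the speeds and periods in the example are illustrative, not from the paper.

    def required_growth(braking_dist, v1, v2, check_period):
        """Distance by which to grow moving objects so that an intersection
        check every `check_period` seconds still gives a timely warning
        (worst case: two objects closing head-on at combined speed v1 + v2)."""
        return braking_dist + (v1 + v2) * check_period

    # 0.5 m braking distance, 2 m/s and 1 m/s objects, checks every 0.1 s:
    print(required_growth(0.5, 2.0, 1.0, 0.1))   # 0.8 (metres, illustrative)

Growing the objects by this amount turns the continuous-time avoidance problem into the purely periodic, discrete intersection test that the two representations of Sections II and III support.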
V CONCLUDING REMARKS

We have discussed two approaches to detecting interference and collision among three dimensional objects. The first method involves detecting overlaps among projections of the objects on a given set of planes. It may thus be viewed as a two and a half dimensional approach. The criterion used for detecting interference is a conservative one, since for any given set of planes there exist many spatial configurations of noninterfering objects whose projections overlap on each of the planes. The second method uses a three dimensional octree representation of the objects, and does not suffer an information loss from the reduction in dimensionality like the first method. Here, the interference is detected by a parallel traversal of the octrees for the obstacles and for each of the moving objects.

Computer controlled manipulation often involves the use of a manipulator such as the Scheinman arm. Its degrees of freedom are given by the boom length and the joint angles. These parameters thus define a natural representation of the arm. In particular, the updating of the representation as the arm moves becomes trivial. However, for the other objects in the environment, the above representation is not very useful. In addition, the origin of the cylindrical coordinate system above moves with the manipulator. Therefore, the representation of the whole environment must be modified for each new position of the manipulator, as noted by Udupa [11]. The use of Cartesian space for the representation of all objects requires regeneration of the octree for the moving objects only.

Use of hardware such as proximity sensors may significantly improve the efficiency of the procedures. For example, a "cluttered zone" signal from a certain sensor may channel the available computational power to a detailed investigation of the desired region of space, leaving the relatively safe areas unexamined. This essentially amounts to a hardware implementation of comparing coarse envelopes of objects in parallel.

The computation of projections of the polyhedra in Section II may also be replaced by cameras in the appropriate planes. This will provide real time projection generation. The projections are then treated as a collection of two dimensional binary images. The projections of the moving objects are tracked and their positions checked against the projections of the obstacles.

We have not addressed the problem of finding good trajectories for moving an object from a given source position to a given goal position. The representations discussed here, in conjunction with the planning strategies discussed in [4,11], may be used to develop the desired trajectories. Experiments employing the methods described in this paper, and two Scheinman arms, are being carried out currently.

ACKNOWLEDGEMENTS

This work was supported in part by the United States Department of Transportation and Federal Aviation Administration under Contract DOT FA79-WA-4360 and the Joint Services Electronics Program (U.S. Army, U.S. Navy and U.S. Air Force) under Contract N00014-79-C-0424.

REFERENCES

[1] J. W. Boyse, "Interference Detection among Solids and Surfaces," Comm. ACM 22, January 1979, pp. 3-9.

[2] P. G. Comba, "A Procedure for Detecting Intersections of Three-Dimensional Objects," Jnl. ACM 15, July 1968, pp. 354-366.

[3] H. Feng and T. Pavlidis, "Decomposition of Polygons into Simpler Components," IEEE Trans. Comp. C-24, June 1975, pp. 636-650.

[4] T. Lozano-Perez and M. A. Wesley, "An Algorithm for Planning Collision-free Paths among Polyhedral Obstacles," Comm. ACM 22, October 1979, pp. 560-570.

[5] K. Maruyama, "A Procedure to Determine Intersections between Polyhedral Objects," Int. Jnl. Comp. Inf. Sci. 1, 3, 1972, pp. 255-266.
[6] K. Maruyama, "A Procedure for Detecting Intersections and its Application," University of Illinois Computer Science Technical Report No. 449, May 1971.
[7] T. Pavlidis, "Analysis of Set Patterns," Pattern Recognition 1, 1968, pp. 165-178.
[8] A. Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press, New York, 1976.
[9] B. J. Schachter, "Decomposition of Polygons into Convex Sets," IEEE Trans. Comp. C-27, November 1978, pp. 1078-1082.
[10] M. I. Shamos, Computational Geometry, Springer-Verlag, New York, 1977.
[11] S. Udupa, "Collision Detection and Avoidance in Computer Controlled Manipulators," Proc. 5th Int. Joint Conf. Art. Intel., Cambridge, Massachusetts, 1977, pp. 737-748.
[12] G. M. Hunter and K. Steiglitz, "Operations on Images Using Quadtrees," IEEE Trans. Pattern Analysis Mach. Int. 1, April 1979, pp. 145-153.
[13] A. Klinger and C. R. Dyer, "Experiments in Picture Representation using Regular Decomposition," Computer Graphics and Image Processing 5, 1975, pp. 68-105.
[14] C. L. Jackins and S. L. Tanimoto, "Oct-trees and their Use in Representing Three-dimensional Objects," University of Washington Computer Science Technical Report, January 1979.
 | 1980 | 92 |
AUTOMATED INSPECTION USING GRAY-SCALE STATISTICS

Stephen T. Barnard
SRI International, Menlo Park, California

ABSTRACT

A method for using gray-scale statistics for the inspection of assemblies is described. A test image of an assembly under inspection is registered with a model image of a nondefective assembly and the two images are compared on the basis of two statistical tests: a χ² test of the two marginal gray-level distributions and the correlation coefficient of the joint distribution. These tests are made in local subareas that correspond to important structure, such as parts and subassemblies. The subareas are compiled in an off-line training phase. The χ² measure is most sensitive to missing or damaged parts, whereas the correlation coefficient is most sensitive to mispositioned parts. It is also possible to detect overall lighting changes and misregistration with these measures. Two examples are presented that show how the tests detect two types of defects.

I INTRODUCTION

Binary machine-vision techniques have received a great deal of attention for industrial inspection [1,2,3,4]. High-contrast lighting and thresholding may be used to obtain an accurate silhouette that can be processed at video rates to yield useful features, such as area, perimeter, centroid, and higher moments. In addition, structural information is available in the geometric relationships between the local features of the outline (holes, corners, and so on). This kind of information is sometimes sufficient for some industrial automation (IA) tasks, such as part identification and acquisition. Other tasks, however, are not so easily approached. Although many simple parts can be adequately represented by a mere outline, most assemblies cannot because they are typically composites of several overlapping parts or subassemblies. Binary techniques will not be effective in such cases because thresholding will not, in general, separate the important components. Thorough binary inspection of even simple parts may not be feasible if one wishes to find defects in surface finish or other types of defects not limited to the outline.

Gray-scale techniques have lately received more attention [5,6,7]. Compared to binary methods, there is a great variety of ways to use gray-scale information. This paper describes an approach for exploiting gray-scale information for inspection in a very basic form. Statistical tests of marginal and joint intensity distributions are used to compare test assemblies with an ideal model assembly. These tests are very efficiently computed and they are sensitive to defects and discrepancies that are not easily handled in the binary domain.

II REPRESENTATION

We use a representation that directly specifies the expected appearance of the assembly. Important structures of the assembly are represented as subareas that are tagged for special consideration. Using this representation, an image of a test assembly is more-or-less directly compared to a model with a minimum of preprocessing. It is necessary to do some sort of photometric and geometric normalization to the test image to bring it into correspondence with the model.

At the lowest level, an assembly is represented by a gray-scale image.
At the highest level, an assembly is represented by a list of named subareas, each of which corresponds to a particularly meaningful segment. Attached to each of these subareas are properties that specify the important characteristics; these characteristics identify the corresponding segment as "good" or "defective." Ideally, these subareas could have arbitrary shapes and sizes, but, for now, think of them as rectangular windows. Each has a specific location and size in the normal reference frame. The inspection system begins with a representation of an ideal assembly, called the model. This may be constructed interactively in a training phase. Given a low-level representation of a test case (i.e., an image), the system proceeds to build a high-level representation by comparing segments of the test image to segments of the model.

The first step is to bring the test image into geometric registration with the model image. This is not strictly necessary. We could directly compare regions in the normal reference frame (i.e., in the model image) with regions in the translated and shifted reference frame. Nevertheless, geometric registration simplifies further processing by establishing a one-to-one correspondence between model pixels and test pixels.

We have assumed that the positional variation of the test assemblies is restricted to translation and rotation. We must therefore determine three parameters: Δx, Δy, and Δθ. There are several ways to approach this problem. Binary techniques may be adequate to determine the normalization parameters to subpixel accuracy with a system such as the SRI vision module [3]. In the gray-scale domain, one may search for maximal cross-correlation, although this will probably be very time-consuming without special hardware. A potentially more efficient method is to find distinguishing local features in the test image and then match them to their counterparts in the model [8]. Once the translation and rotation have been determined it is easy to register the test image using a linear interpolation [9].

III STATISTICAL COMPARISON

Two statistical measures that are useful for comparing model and test subareas are the χ² test and the correlation coefficient. They are both extremely efficient and simple to implement, and they are sufficiently different to distinguish two broad classes of defects.

A. The χ² Test

The χ² test measures the difference between two frequency distributions. Let hm(k) be the frequency distribution of gray-scale intensities in a model window. Let ht(k) be the frequency distribution of a test window. We can consider hm to be a hypothetical ideal distribution. The χ² test gives a measure of how far ht deviates from the hypothetical distribution hm. The significance of the test depends on the number of samples.

    χ² = Σk (hm(k) − ht(k))² / ht(k)

This yields a measure of difference, but, to be consistent with what follows, we want a measure of similarity. Let

    τ = e^(−χ²/c)

where c is some positive constant. τ is a measure of the similarity of two distributions (in the χ² sense). If the distributions are identical, then τ will be unity; if they are very different, τ will be close to zero.
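As a concrete illustration, the windowed χ² similarity might be computed as in the following sketch. The histogram binning, the handling of empty bins, and the default c = 800 (the value used in the experiment tables below) are assumptions for illustration, not details specified by the paper.

    import numpy as np

    def chi_square_similarity(model_win, test_win, c=800.0, bins=256):
        """Similarity tau = exp(-chi^2 / c) between the gray-level
        histograms of a model window and a test window (2-D arrays)."""
        hm, _ = np.histogram(model_win, bins=bins, range=(0, bins))
        ht, _ = np.histogram(test_win, bins=bins, range=(0, bins))
        nonzero = ht > 0                  # skip empty bins to avoid dividing by zero
        chi2 = np.sum((hm[nonzero] - ht[nonzero]) ** 2 / ht[nonzero])
        return np.exp(-chi2 / c)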
B. The Correlation Coefficient

Let hmt be the joint frequency distribution of the model and test windows. That is, hmt(u,v) is the frequency with which a model pixel has gray value u and its corresponding test pixel has gray value v. Let m1 be the mean of hm and m2 be the mean of ht. Let σ1 be the standard deviation of hm, and σ2 be the standard deviation of ht. The central moments of the joint distribution hmt are

    μ(i,j) = (1/n) Σk ( (Xm(k) − m1)^i · (Xt(k) − m2)^j )

where Xm(k) and Xt(k) are gray values of the kth pixel in the model image and the test image, respectively. The correlation coefficient, ρ, is

    ρ = μ(1,1) / (σ1 σ2)

ρ is in the interval [-1,1]. If it has the value +1 or -1 the total "mass" in the joint frequency distribution must lie on a straight line. This will be the case when the test image and the model image are identical, and ρ will be +1. In general, if there is a linear functional dependence between the test and model windows, ρ will be +1 (or, in the extremely unlikely case that one window is a "negative" of the other, ρ will be -1). If the windows are independent distributions, however, ρ will be 0. We can reasonably expect that intermediate values will measure the degree of dependence between the two windows.

C. Comparing τ and ρ

τ is not sensitive to the location of pixels. It simply measures the degree of similarity between two marginal distributions. ρ, on the other hand, measures the degree to which the model pixels agree with their corresponding test pixels; therefore, it is sensitive to location. This implies that τ is a good test for missing and severely damaged parts (for they are likely to change the distribution of the test pixels compared to the distribution of the model pixels), while ρ is a good test for the proper location of parts (for misalignment will change the joint distribution).

A systematic change in lighting can also be detected. τ would be small because the lighting change would change the intensity distributions in the test windows, but ρ would be large because the test and model distributions would still be well-correlated. If this observation were made for only one window, it would not be very meaningful. However, if we make the reasonable assumption that most of the windows locate regions that are not defective, this leads to the observation that a systematic pattern of small τ and large ρ indicates a lighting change.

Small misregistration errors are also detectable. Small misregistration would produce large τ because the marginal distributions of the test windows would not be much different from the model windows. On the other hand, ρ would be smaller than if the registration were good because the windows would not correlate as well. The same result for a single window would be caused by a misplaced part, but, again using the assumption that most of the windows locate non-defective regions, a systematic pattern of large τ and small ρ over many windows would indicate an overall misregistration.

These relationships are summarized in Table 1.

Table 1
Defect Pattern vs. τ and ρ

         OK      Missing   Misplaced   Lighting       Registration
    τ    LARGE   SMALL     LARGE       SMALL          LARGE
    ρ    LARGE   SMALL     SMALL       LARGE          SMALL
                                       (systematic)   (systematic)
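A corresponding sketch for ρ, computed directly from corresponding pixels of the registered windows (equivalent to the moment formulation above); again a minimal illustration, not the paper's implementation:

    import numpy as np

    def correlation_coefficient(model_win, test_win):
        """Correlation rho between corresponding pixels of a registered
        model window and test window (2-D arrays of equal shape)."""
        xm = model_win.astype(float).ravel()
        xt = test_win.astype(float).ravel()
        mu11 = np.mean((xm - xm.mean()) * (xt - xt.mean()))  # central moment mu(1,1)
        return mu11 / (xm.std() * xt.std())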
D. A Two-Stage System

The gray-scale statistics discussed above provide a basis for an integrated minimal system for inspection that is composed of two modules: a training module that is used off-line and that allows a human operator to build a high-level description of the assembly, and an inspection module that matches images of test assemblies to the model built in the training phase. In the training phase the operator works with an image of an ideal assembly, identifying the important parts that are likely to be defective and describing the modes of defects that may be relevant. For example, the location of a particular part may not be precisely fixed, but rather permitted to range over a rather large area. In this case the operator might indicate that τ (the location insensitive measure) is a relevant test, but not ρ. In another case there may exist areas that have extremely variable appearance, perhaps because of individual part identification markings, and these areas might be excluded from testing altogether. In the normal case, however, a part will be fixed in one location and orientation, perhaps with some tolerance, and the operator will merely specify allowable limits for τ and ρ.

The on-line inspection phase is entirely automatic. The knowledge about the assembly collected in the training phase is applied to specific test assemblies and a judgment is made as to how well what is known fits what is seen. The τ and ρ measures are computed for each area and are used to derive probability estimates of the various types of defects.

IV EXPERIMENTS

We have tried the τ and ρ tests on two assemblies.

Figure 1 and Table 2 show the results for a water pump. The upper left portion of Figure 1 is the "model" image of a nondefective pump in a normal position and orientation. Several windows representing important parts of the assembly are also shown. The upper right portion of Figure 1 is a "test" image of a defective assembly. The round, dark pulley in the center is missing. In the lower left part of Figure 1 the test image has been registered with the model. The lower right image is the difference between the model image and the registered test image, and has been included to indicate how close the registration is. Table 2 shows the τ and ρ values for the various windows. Note that τ and ρ are both very small for the (missing) pulley compared to the other (nondefective) parts, just as predicted. τ is also small for the window representing the total assembly because this includes the defective pulley.

Figure 1.

Table 2
Pump Statistics

              τ (c=800)   ρ
    Total     .492        .801
    Pulley    .236        .354    <= Defect
    Link      .981        .824
    Spout     .919        .904
    Clip1     .862        .879
    Clip2     .978        .780
    Clip3     .949        .898
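The on-line decision step of the two-stage system reduces to comparing each window's measures against the operator-specified limits. A minimal sketch of that step, reusing the two functions sketched above; the window descriptor fields and the crop helper are hypothetical names for illustration:

    def crop(image, rect):
        x0, y0, x1, y1 = rect
        return image[y0:y1, x0:x1]

    def inspect(windows, model_image, test_image):
        """windows: list of dicts with 'name', 'rect', 'tau_min', 'rho_min',
        and flags 'use_tau', 'use_rho' chosen during the training phase."""
        verdicts = {}
        for w in windows:
            m = crop(model_image, w['rect'])
            t = crop(test_image, w['rect'])
            ok = True
            if w['use_tau'] and chi_square_similarity(m, t) < w['tau_min']:
                ok = False
            if w['use_rho'] and correlation_coefficient(m, t) < w['rho_min']:
                ok = False
            verdicts[w['name']] = 'good' if ok else 'defective'
        return verdicts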
Figure 2 and Table 3 show the results for a hardcopy computer terminal. In this case the write head is not positioned in the correct place. Note that the window for the "head" includes the entire area where it might be located. As predicted, the τ value is high, while the ρ value is small. In practice this might not be considered a defect because it may be permissible for the head to be located anywhere along the track. If this were the case, the window could be tagged to indicate that ρ is not a relevant test.

Figure 2. Terminal

Table 3
Terminal Statistics

              τ (c=800)   ρ
    Total     .746        .910
    Platen    .674        .868
    Head      .890        .458    <= Defect
    Keys      .740        .923

REFERENCES

1. M. Ejiri, T. Uno, M. Mese, and S. Ikeda, "A Process for Detecting Defects in Complicated Patterns," Computer Graphics and Image Processing, Vol. 2, pp. 326-339 (1973).
2. W. C. Lin and C. Chan, "Feasibility Study of Automatic Assembly and Inspection of Light Bulb Filaments," Proc. IEEE, Vol. 63, pp. 1437-1445 (October 1975).
3. G. J. Gleason and G. J. Agin, "A Modular Vision System for Sensor Controlled Manipulation and Inspection," A.I. Center Technical Note 178, SRI International, Menlo Park, California (March 1979).
4. G. J. Gleason, A. E. Brain, and D. McGhie, "An Optical Method for Measuring Fastener Edge Distances," A.I. Center Technical Note 190, SRI International, Menlo Park, California (July 1979).
5. M. L. Baird, "SIGHT-I: A Computer Vision System for Automated IC Chip Manufacture," IEEE Trans. on Syst., Man, Cybern., Vol. SMC-8 (February 1978).
6. W. Perkins, "A Model Based Vision System for Industrial Parts," IEEE Trans. Comput., Vol. C-27, pp. 126-143 (1978).
7. R. J. Woodham, "Reflectance Map Techniques for Analyzing Surface Defects in Metal Castings," Ph.D. Dissertation, Massachusetts Institute of Technology, Cambridge, Massachusetts (September 1977).
8. S. T. Barnard, "The Image Correspondence Problem," Ph.D. Dissertation, University of Minnesota, Minneapolis, Minnesota (December 1979).
9. A. Rosenfeld and A. Kak, Digital Picture Processing (New York: Academic Press, 1976).
 | 1980 | 93 |
HUMAN MOVEMENT UNDERSTANDING: A VARIETY OF PERSPECTIVES

Norman I. Badler, Joseph O'Rourke, Stephen Platt, Mary Ann Morris
Department of Computer and Information Science
Moore School D2
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT

Our laboratory is examining human movement from a variety of perspectives: synthesis of animated movements and analysis of moving images. Both gestural (limb and hand) and facial movements are being explored.

Key Words and Phrases: Human movement, movement representation, motion analysis, computer vision, facial expression, facial analysis, constraint network.

I HUMAN MOVEMENT SYNTHESIS

Our laboratory is examining human movement representation from a variety of perspectives, including synthesis of three-dimensional animated movements and analysis of moving images. These broad areas are further refined into gestural (limb and hand) and facial movements since very different sorts of actions take place in jointed skeletal movement and "rubber-sheet" facial distortions. Our human body model [5] (Fig. 1) and hand model [2] (Fig. 2) are based on spherical decompositions of three-dimensional objects. Our goals have been to develop representations for human movements which permit both analysis and synthesis, are "complete" in terms of the range of human movements, and yet provide a "high-level" interface suitable for specifying or describing movement. Many of these issues are addressed in a recent survey by Badler and Smoliar [6], so we shall emphasize only the more recent work in human motion analysis and facial expression synthesis.

II HUMAN MOVEMENT ANALYSIS

There have been a rather small number of attempts to analyze complex movement presented in time-varying images. Rashid [13] uses moving light displays to track body joints and infer three-dimensional connections. Tsotsos [14] describes non-articulated shape changes. Badler [1] attempts conceptual descriptions of rigid jointed movements. Recently O'Rourke [9,10] describes a computer system which accepts a sequence of two-dimensional images of a solid, three-dimensional body [5] performing some motion sequence (Fig. 3). The output of the system is a description of the motion as coordinate-versus-time trackings of all body joints and as movement instructions suitable for controlling the simulation of a human body model [3,4,15]. The simulation includes a detailed model of a human body which incorporates the structural relationships between the body parts and the physical limitations to relative movement between the parts. This model is used to guide the image analysis through a prediction/verification control cycle. Predictions are made at a high level on the basis of previous analysis and the properties of the human model. The low level image analysis then verifies the prediction and the model is adjusted according to any discrepancies found or new knowledge acquired. This cycle is repeated for each input frame.

The information extracted from the image is integrated into the current position of the model through a constraint network operating on real-valued coordinate regions in three-dimensional space. Possible positions of body features are represented as unions of orthogonally-oriented rectangular boxes. Relationships among body parts, for example, distance constraints imposed by the body skeleton, are enforced in the network. As new joint positional information is extracted from the image it is added to the network and its geometrical consequences immediately propagated throughout the network. Only head, hands, and feet are located in the image space, yet all remaining body joints may be tracked by the geometric inference process in the network. Figure 4 shows the constraint boxes for each joint of the body given the moving images of Fig. 3.
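The flavor of this geometric inference can be conveyed by a small runnable sketch. It keeps a single axis-aligned box per joint and propagates skeletal distance constraints until the boxes stabilize; the full system in [9,10] uses unions of boxes and a richer propagation scheme, so this is an illustrative simplification only.

    # Minimal sketch of box-based constraint propagation for joint
    # tracking. Each joint's possible position is one axis-aligned box
    # (lo, hi); the system in [9,10] uses unions of such boxes.

    def intersect(box_a, box_b):
        lo = [max(a, b) for a, b in zip(box_a[0], box_b[0])]
        hi = [min(a, b) for a, b in zip(box_a[1], box_b[1])]
        return (lo, hi) if all(l <= h for l, h in zip(lo, hi)) else None

    def expand(box, r):
        # All points within distance r of the box, approximated by a box
        # (a conservative max-norm enlargement of the true sphere).
        return ([c - r for c in box[0]], [c + r for c in box[1]])

    def propagate(boxes, skeleton):
        """boxes: {joint: (lo, hi)}; skeleton: [(joint_a, joint_b, length)].
        Shrink each joint's box until the distance constraints stabilize."""
        for _ in range(100):                      # bounded number of sweeps
            changed = False
            for a, b, length in skeleton:
                for src, dst in ((a, b), (b, a)):
                    allowed = expand(boxes[src], length)
                    new = intersect(boxes[dst], allowed)
                    if new is not None and new != boxes[dst]:
                        boxes[dst] = new
                        changed = True
            if not changed:
                break
        return boxes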
III FACIAL ANIMATION

We are also investigating the representation and simulation of the actions performable on the human face. The face presents an interesting problem for simulation, as it is composed of mostly independent sections of non-rigid masses of skin and muscle, such as the areas around the eyes, mouth, and cheeks. This type of action is basically different from gross body movement in that a facial action will affect the visible results of other actions performed in the same area of the face.

Our internal representation of the face is based on FACS, the Facial Action Coding System [7]. The system categorizes basic actions performable and recognizable (by a human notator) on the face. It is also easily translated into a state-description of the face, in terms of muscle contractions. A complete recognition and simulation system for the face would consist of a camera, computer processing to obtain an idealized internal representation of the action, and a simulation of the action performed on a graphic output device. Once the camera image is obtained, analysis is performed to produce the AU (FACS Action Unit) state of the face. This analysis is relatively simple, as it consists of identifying the presence/absence of "features" such as wrinkles and bulges on the face. (Note that this analysis does not require "recognition" of a particular face, just good comparison between successive images of the same face.) The current technique under investigation uses an improved method of "rubber-sheet" matching [8]. Each AU affects only a small set of muscles; their union gives the muscle-status of the face. The specified muscle contractions are then simulated on the face.

The face is represented by a network of points and interconnecting arcs (Figs. 5 and 6) [12]. It also has a higher level of organization which partitions the network into the skin surface and specific muscles. (It is this muscle organization which distinguishes our work from that of Parke [11].) The skin is a "warped" two-dimensional surface of points and arcs.
The points represent the basic surface, while the arcs contain information specific to their locale, such as the elasticity ("stretchiness") of the skin between the arc's points. Stretching the skin (by contracting a muscle) causes first local motion, followed by propagation of the skin distortion. Muscles are also points, connected to but beneath the skin. They are distinguished by being the initiation of the distortion of the skin surface. An AU is thus merely a set of muscles, with appropriate magnitudes of initial force of contraction.

IV FUTURES

Our research into human movement understanding has the joint goals of achieving conceptual descriptions of human activities and producing effective animations of three-dimensional human movement from formal specification or notation systems [6]. One application of this effort is in the synthesis and analysis of American Sign Language. A prototype synthesizer for ASL is being designed to facilitate experimentation with human dynamics and movement variations which have linguistic import for facial and manual communication.

ACKNOWLEDGEMENTS

The support of NSF Grants MCS76-19464, MCS78-07466, and O'Rourke's IBM Fellowship are gratefully acknowledged.

REFERENCES

[1] Badler, N.I. Temporal scene analysis: Conceptual descriptions of object movements. Tech. Rep. 80, Dept. of Computer Science, Univ. of Toronto, Feb. 1975.
[2] Badler, N.I. and J. O'Rourke. Representation of articulable, quasi-rigid, three-dimensional objects. Proc. Workshop on Representation of Three-Dimensional Objects, Philadelphia, PA, May 1979.
[3] Badler, N.I., J. O'Rourke, and B. Kaufman. Special problems in human movement simulation. Computer Graphics, Summer 1980.
[4] Badler, N.I., J. O'Rourke, S.W. Smoliar, and L. Weber. The simulation of human movement by computer. Tech. Rep., Dept. of Computer and Information Science, Univ. of Pennsylvania, July 1978.
[5] Badler, N.I., J. O'Rourke, and H. Toltzis. A spherical representation of a human body for visualizing movement. IEEE Proceedings 67:10 (1979), pp. 1397-1403.
[6] Badler, N.I. and Smoliar, S.W. Digital representations of human movement. Computing Surveys 11:1 (1979), pp. 19-38.
[7] Ekman, P. and W. Friesen. Facial Action Coding System. Consulting Psychologists Press, Palo Alto, CA, 1978.
[8] Fischler, M.A. and R.A. Elschlager. The representation and matching of pictorial structures. IEEE Tr. on Computers C-22:1 (1973).
[9] O'Rourke, J. Image analysis of human motion. Ph.D. Diss., Dept. of Computer and Information Science, Univ. of Pennsylvania, 1980.
[10] O'Rourke, J. and N.I. Badler. Human motion analysis using constraint propagation. To appear IEEE-PAMI, November 1980.
[11] Parke, F.I. Animation of faces. Proc. ACM Annual Conf. 1972, pp. 451-457.
[12] Platt, S. Animating facial expressions. MSE Thesis, Dept. of Computer and Information Science, Univ. of Pennsylvania, 1980.
[13] Rashid, R. Lights: A system for the interpretation of moving light displays. To appear IEEE-PAMI, Nov. 1980.
[14] Tsotsos, J. A framework for visual motion analysis. Ph.D. Diss., Dept. of Computer Science, Univ. of Toronto, 1980.
[15] Weber, L., S.W. Smoliar, and N.I. Badler. An architecture for the simulation of human movement. Proc. ACM Annual Conf. 1978, pp. 737-745.

Figure 3. Simulated movement.
Figure 4. Constraint boxes for movements in Fig. 3.
Figures 5-6. Face network, showing some of the outline and muscles.
Figure 7. Upper portion of the face muscles; the right brow has been pulled.
 | 1980 | 94 |
AN OPTIMIZATION APPROACH FOR USING CONTEXTUAL INFORMATION IN COMPUTER VISION

Olivier D. Faugeras
Image Processing Institute
University of Southern California
Los Angeles, California 90007, U.S.A.

ABSTRACT

Local parallel processes are a very efficient way of using contextual information in a very large class of problems commonly encountered in Computer Vision. An approach to the design and analysis of such processes based on the minimization of a global criterion by local computation is presented.

INTRODUCTION

The problem of assigning names or labels to a set of units/objects is central to the fields of Pattern Recognition, Scene Analysis and Artificial Intelligence. Of course, not all possible names are possible for every unit and constraints exist that limit the number of valid assignments. These constraints may be thought of as contextual information that is brought to bear on the particular problem, or more boldly as a world model to help us decide whether any particular assignment of names to units makes sense or not.

Depending upon the type of world model that we are using, the problem can be attacked by discrete methods (search and discrete relaxation) or continuous methods (continuous relaxation). In the first case our contextual information consists of a description of consistent/compatible labels for some pairs, or more generally n-tuples, of units. In the second case the description includes a numerical measure of their compatibility that may or may not be stated in a probabilistic framework. Initial estimates of likelihoods of name assignments can be obtained from measurements performed on the data to be analyzed. Usually, because of noise, these initial estimates are ambiguous and inconsistent with the world model. Continuous relaxation (also sometimes called probabilistic relaxation or stochastic labeling) is thus concerned with the design and study of algorithms that will update the original estimates in such a way that ambiguity is decreased and consistency (in terms of the world model) is increased.

(This work was supported in part by the Defense ARPA contract F-33615-76-C-1203. The author is with the Image Processing Institute and Department of Electrical Engineering, USC, Los Angeles, California 90007.)

More precisely, let us denote by U the finite set of N units and by L the finite set of M possible labels. In the discrete case, the world model consists of an n-ary relation R ⊆ (U × L)^n. The fact that the n-tuple {(u1,ℓ1),...,(un,ℓn)} belongs to R means that it is valid to assign name ℓi to unit ui for i=1,...,n. In the continuous case, the world model consists of a function c of (U × L)^n into a closed interval [a,b] of the real line:

    c: (U × L)^n → [a,b]

In most applications [a,b]=[0,1] or [-1,1] and n=2. The numbers c(u1,ℓ1,...,un,ℓn) measure the compatibility of assigning label ℓi to unit ui for i=1,...,n. Good compatibility is reflected by large values of c, incompatibility by small values.

We will present in this paper two ways of measuring the inadequacy of a given labeling of units with respect to a world model and show that these measures can be minimized using only local cooperative computation. We will compare this approach with the original probabilistic relaxation scheme proposed by Rosenfeld, Hummel and Zucker [3] and a matching scheme proposed by Ullman [6]. To conclude, we will discuss the possibility of using Decentralization and Decomposition techniques to alleviate the curse of dimensionality and show how the Optimization approach can be extended very easily to the analysis of multilevel, possibly hierarchical, systems.

We will not discuss in this paper any specific application. For an early application to scene analysis and discussion of some of the issues addressed in this paper, see [2]. For recent surveys, see [1] and [4]. For an application to graph matching, see [18].

I. Basic Optimization Based Probabilistic Relaxation Scheme

We assume that attached to every unit ui are measures of certainty pi(ℓ), for ℓ in L, that can be thought of loosely as probabilities:

    Σ_{ℓ in L} pi(ℓ) = 1        (1)
We will compare this    approach with the original probabilistic relaxation    scheme proposed by Rosenfeld, Hummel and Zucker    [3] and a matching scheme proposed by Ullman [6].    To conclude the section, we will discuss the    possibility of using Decentralization and    Decomposition techniques to alleviate the curse of    dimensionality and show how the Optimization    approach can be extended very easily to the    analysis of multilevel, possibly hierarchical,    systems.    We will not discuss in this paper any    specific application. For an early application to    scene analysis and discussion of some of the    issues addressed in this paper, see [2]. For    recent surveys, see [l] and [4]. For an applica-    tion to graph matching, see [18].    I. Basic Optimization Based Probabilistic    Relaxation Scheme    We assume that attached to every unit ui are    measures of certainty pi(a), for kin gthat can be    thought of loosely as probabilities    c    Pi(R) = '    (1)    R in ;;e    56    From: AAAI-80 Proceedings. Copyright © 1980, AAAI (www.aaai.org). All rights reserved.    The wor_ld    model is embedded in a function c mapping    m .aL    into [O,l], Again, c(ul,R,u2,m) measures    the compatibility of calling unit ul,~ and unit    u2,m. This function also allows us to define a    topological structure on the set of units by    assigning to every unit Ui and label R in ZT!Z.    a set    Vi(R) of related units uj for which these exists    at least one label m in 2 such that c(ui,R,uj,m)    is defined.    A compatibility vector di is then computed    for every unit ui that measures the compatibility    in each label R in Xwith the current labeling at    related units in Vi(a). The simplest way of    defining Qi(a) is [l]:    c    Qi(a) = +I    uj in v (a) Qij(R)    (2)    i    where /Vi(g) 1 is the number of units in the set    Vi(a) and Qij(") is given by:    Qij (‘) =    c    C(Ui,R,U    j    ,m)Pj    (ml    (3)    m in 2    Loosely speaking, Qi(a) will be large if for many    units U* in Vi(a), the compatible labels (that is    the labgls m such that c(ui,R,uj,m) is close to    1) have high probabilisties, and low otherwise.    In some cases the compatibility coefficients    may be given a probabilistic interpretation, that    is c(ui,R,uj,m) is the conditional probability    pij(R/m) that unit ui is labeled R given that unit    uj is labeled m.    The next step in designing the Relaxation    scheme is to specify a way of combining the two    sources of information thatwe can use, i.e. the    initial probabilities and the contextual informa-    tion, to update the label probabilities. This    updating should result in a less ambiguous and    more compatible overall labeling in a sense that    will remain vague until later on. Rosenfeld et al.    [3] proposed the following iterative algorithm:    for every unit ui and every label R in di set    p(n+l) (a) =    i    (4)    The denominator of the right hand side is simply    a normalizing factor to ensure that numbers    p("+l)(~) still add up to one. Intuitively, the    libels R whose compatibility Qi(a) is larger than    others will see their probability increase whereas    the labels with smaller compatibility will see    their probability decrease.    One criticism with this approach is that it    does not take explicitly into account measures of    the two most important characteristics of a    labeling of units, namely its consistency and its    ambiguity. 
Faugeras and Berthod [5,7,8] have proposed several such measures and turned the labeling task into a well defined optimization problem which can be solved by local computations in a network of processors.

We saw before that we can associate with every unit ui a probability vector p_i and a compatibility vector Q_i whose components are given by equation (2). In general, the vectors Q_i are not probability vectors in that their components do not sum to 1. This can be easily changed by normalization and we can define:

    qi(ℓ) = Qi(ℓ) / Σ_{m in L} Qi(m)        (5)

The vectors q_i are now probability vectors and a measure of consistency for unit ui (local measure) can be defined as the vector norm

    Ci = ||p_i − q_i||²        (6)

where ||·|| can be any norm (in practice the Euclidean norm). Similarly a local measure of ambiguity can be defined as

    Hi = Σ_{ℓ in L} pi(ℓ)(1 − pi(ℓ)) = 1 − ||p_i||₂²        (7)

where ||·||₂ is the Euclidean norm. Combining the two measures yields a local criterion

    Ji = α Ci + β Hi        (8)

where α and β weight the relative importance we attribute to ambiguity versus consistency. A global measure can then be defined over the whole set of units by averaging the local measures. Using the arithmetic average for example, we define

    J = (1/N) Σ_{all units ui} Ji        (9)

The labeling problem can then be stated as follows: given an initial labeling {p_i^(0)} of the set of units U, find the local minimum of the function J closest to the initial conditions, subject to the constraints that the vectors p_i are probability vectors. More precisely, this implies that

    Σ_{ℓ in L} pi(ℓ) = 1 and pi(ℓ) ≥ 0 for all units ui        (9a)

Aside from the fact that we are now confronted with a well-defined mathematical problem which can be tackled using Optimization techniques, we are also sure that some weighted measure of inconsistency and ambiguity is going to decrease.
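A direct transcription of the local measures (5)-(8) and the global criterion (9), under the same dense-compatibility assumptions as the previous sketch:

    import numpy as np

    def global_criterion(p, c, alpha=1.0, beta=1.0):
        """Global criterion J of eq. (9), built from local consistency
        C_i (6) and ambiguity H_i (7), weighted as in eq. (8)."""
        n_units = p.shape[0]
        big_q = np.einsum('iljm,jm->il', c, p) / n_units     # eqs. (2)-(3)
        q = big_q / big_q.sum(axis=1, keepdims=True)         # eq. (5)
        consistency = np.sum((p - q) ** 2, axis=1)           # C_i, eq. (6)
        ambiguity = 1.0 - np.sum(p ** 2, axis=1)             # H_i, eq. (7)
        return np.mean(alpha * consistency + beta * ambiguity)  # eqs. (8)-(9)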
As pointed out in [8], one minor drawback with the definition (6) of consistency is that its minimization implicitly implies the minimization of the norms of q_i and p_i and therefore the maximization of the entropy term Hi (equation (7)). Thus there is an inherent problem with the definition (8) in the sense that consistency and ambiguity tend to go in opposite directions. One very simple way of resolving that contradiction is to define a local measure of both ambiguity and consistency as

    J'i = −q_i · p_i        (10)

where · denotes the vector inner product. The definition of a global criterion proceeds now as before:

    J' = Σ_{all units ui} J'i        (11)

and the labeling problem can be stated as before, replacing J with J'. This is similar to the minimal mapping theory developed by Ullman [6] for motion correspondence. Given two image frames with elements ui in the first one (our units) and elements k (our names) in the second one, he studied the problem of selecting the most plausible correspondence between the two frames. Defining the cost qi(k) of pairing element ui with element k and the variables pi(k) equal to 1 if ui is paired with k and 0 otherwise, he rephrased the motion correspondence problem as a linear programming problem by defining the cost function

    J_LP = Σ_{all elements ui} Σ_{all elements k} pi(k) qi(k)        (12)

which is precisely equation (11). The important difference between criteria J' and J_LP is that the costs qi(k) in (12) are not functions of the variables pi(k) whereas in (11) they are. In particular, minimizing J' is not an LP problem, in general. Nevertheless, the parallel between the two approaches is interesting and confirms that what we called the compatibility coefficients qi(k) defined in Eq. (5) are also a measure of the satisfaction/profit implied in assigning name k to unit ui.

II. Computational Requirements: Locality, Parallelism, Convergence

As described in [7,9], imagine that we attach to the set U of units and the sets Vi = ∪_{ℓ in L} Vi(ℓ) a simple network, that is a pair <G,R> where G is a connected graph and R a set of processors, one for each node in the graph. There is a one to one correspondence between the units and the nodes of the graph on one hand, and the nodes of the graph and the processors on the other hand. This in turn implies that there is a one to one correspondence between the neighbors of the ith processor ri, i.e., the processors that are connected by arcs of G to ri, and units in Vi.

As shown in [7,8], the minimization of criteria J or J' can be achieved by using only local computation. More precisely, denoting by J* (a function of all the vectors p_i) either criterion J or J', we can attach to every unit ui a local gradient vector

    g_i = ∂J*/∂p_i = Fi(p_j)        (13)

where Fi is a function of the vectors p_j of units uj in the set Vi of neighbors previously defined. Explicit formulas for the functions Fi can be found in [4,5,7,8]. The iterative scheme is then defined as

    p_i^(n+1) = p_i^(n) − ρn P_i g_i^(n)        (14)

where ρn is a positive stepsize and P_i a linear projection operator determined by the constraints imposed on the vector p_i [5,7] (for example, that it is a probability vector). The main point is that both the functions Fi and the operator P_i can be computed by processor ri by communicating only with neighboring processors (local computation) while guaranteeing that the cost function J* will decrease globally.

It was stated before that a large amount of parallelism can be introduced in the process of minimizing criteria J and J'. This is achieved by attaching to every unit ui a processor ri connected only to processors rj attached to units uj related to ui. The global criterion can then be minimized by having the processors ri perform simple operations mostly in parallel while a simple sequential communication process allows them to work toward the final goal in a coordinated fashion.
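The projection in (14) keeps each p_i on the probability simplex. A minimal sketch of one such update, assuming a simple "subtract the mean" projection onto the hyperplane Σ pi(ℓ) = 1 followed by clipping; this is one common choice, not necessarily the exact operator specified in [5,7]:

    import numpy as np

    def projected_gradient_step(p, grad, rho):
        """One update of the form (14): move each probability vector
        against its local gradient, then restore the simplex constraints.
        p, grad: N x M arrays; rho: positive stepsize."""
        # Project the gradient onto the hyperplane sum_l p_i(l) = 1.
        tangent = grad - grad.mean(axis=1, keepdims=True)
        new_p = p - rho * tangent
        # Crude handling of the positivity constraints (a sketch only):
        new_p = np.clip(new_p, 0.0, None)
        return new_p / new_p.sum(axis=1, keepdims=True)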
If nonetheless our supply of processors is limited, we may want to split our original problem into several pieces and assign sequentially our pool of processors to the different pieces. The net result has of course to be the minimization of the original global criterion and some coordination must therefore take place.

Solutions to this problem can be found in the so-called Decomposition and Decentralization techniques which have been developed to solve similar problems in Economics, Numerical Analysis, Systems Theory and Optimal Control [12,13,14,15]. Decomposition techniques proceed from an algorithmic standpoint: we are confronted with a problem of large dimensionality and try to substitute for it a sequence of problems of smaller dimensionalities.

Decentralization techniques take a different viewpoint: we are confronted with a global problem and have at our disposal P decision centers. The question is whether it is possible to solve the global problem while letting the decision centers solve only local problems. The structure of criteria J and J' as sums of local measures allows us to develop both types of techniques [12]. The key idea is to partition the set of units. For detailed algorithms, see [16].

III. Extension to Hierarchical Systems, Conclusions

The optimization approach presented in Section I can be extended to the case where several labeling problems are present and embedded in a pyramid or cone structure with, for example, L levels.

The different levels can be the same picture at different spatial resolutions as in [17] or represent different states of abstraction. For example the lowest level could be the edge element level, then the link level [10], then the level dealing with elementary shapes like straight lines, ellipses, cubics, etc. These different levels form a multilevel system, each level having to solve a stochastic labeling problem. Let v_i be the command vector for level i; that is, v_i is a NiMi dimensional vector, if there are Ni units and Mi possible classes, obtained by concatenating the probability vectors p_j, j=1,...,Ni. At level i we have to minimize a criterion Ji(v_1,v_2,...,v_L). The fact that criterion Ji depends upon the command vectors at other levels accounts for the interaction between the levels.

A natural, but not always rational, way of solving this multilevel problem is to assume that every level i (i=1,...,L) considers as given the command vectors of the other levels and minimizes its own criterion. The result is a non-cooperative equilibrium [12] or Nash point (û_1,...,û_L) verifying:

    Ji(û_1,...,û_{i-1},û_i,û_{i+1},...,û_L) ≤ Ji(û_1,...,û_{i-1},v_i,û_{i+1},...,û_L)

for all i and v_i. This notion can certainly be criticized because by cooperating each of the L levels can, in general, improve its situation compared with the non-cooperative case. In other words, the following situation is possible: if (û_1,...,û_L) is a Nash point, there exists another set (ũ_1,...,ũ_L) of command vectors such that

    Ji(ũ_1,...,ũ_L) ≤ Ji(û_1,...,û_L) for all i.

This introduces the notion of Pareto points which, intuitively, are optimal in the sense that it is impossible to find another set of L command vectors that will decrease all criteria. It is possible to show that under very general conditions [12], Pareto points can be obtained by minimizing only one criterion! In other words if ū=(ū_1,...,ū_L) is a Pareto point, then there exist L positive numbers λ1,...,λL such that ū is a minimum of the criterion

    J_λ(v_1,...,v_L) = Σ_{i=1}^{L} λi Ji(v_1,...,v_L)

The λi's can therefore be interpreted as weighting factors the different levels have agreed upon.

Another interesting possibility is to assume a hierarchical structure within the L levels, level 1 being the lowest and level L the highest. We then have a cascade of optimization problems similar to what happens in the price decentralization technique mentioned in section II, where level 1 considers v_2,...,v_L as given and computes

    u_1 = min_{v_1} J1(v_1,v_2,...,v_L)

This defines u_1 as a function of v_2,...,v_L.
Then level 2 solves the problem of minimizing criterion J2(u_1(v_2,...,v_L),v_2,...,v_L) with respect to v_2, and so on.

Even though the theory of hierarchical multilevel systems is still in its infancy it has been recognized for some time now [11] that it carries the possibility of solving many difficult problems in Economics, Physiology, Biology [13,14,15], Numerical Analysis and Systems Theory [12], and Optimal Control. It is clear that this theory is relevant to Image Analysis.

In conclusion, we think that probabilistic relaxation techniques will play a growing role in the near future as building blocks of more and more complex vision systems. The need to quantify the behavior of these relaxation processes will become more and more pressing as the complexity of the tasks at hand rapidly increases, and the global optimization framework offers a solid basis for this analysis.

REFERENCES

[1] L.S. Davis and A. Rosenfeld, "Cooperating Processes for Low-level Vision: A Survey," TR-123, Department of Computer Sciences, University of Texas, Austin, January 1980.
[2] H.G. Barrow and J.M. Tenenbaum, "MSYS: A System for Reasoning About Scenes," Tech. Note 121, AIC-SRI Int., Menlo Park, Ca., 1976.
[3] A. Rosenfeld, R.A. Hummel and S.W. Zucker, "Scene Labeling by Relaxation Operations," IEEE Trans. on Syst., Man, and Cybern., SMC-6, No. 6, pp. 420-433, June 1976.
[4] O.D. Faugeras, "An Overview of Probabilistic Relaxation Theory and Applications," Proceedings of the NATO ASI, Bonas, France, June-July 1980, D. Reidel Publishing Co.
[5] O.D. Faugeras and M. Berthod, "Scene Labeling: An Optimization Approach," Proc. of 1979 PRIP Conference, pp. 318-326.
[6] S. Ullman, The Interpretation of Visual Motion, MIT Press, 1979.
[7] O.D. Faugeras and M. Berthod, "Improving Consistency and Reducing Ambiguity in Stochastic Labeling: An Optimization Approach," to appear in the IEEE Trans. on Pattern Analysis and Machine Intelligence, 1980.
[8] M. Berthod and O.D. Faugeras, "Using Context in the Global Recognition of a Set of Objects: An Optimization Approach," 8th World Computer Congress (IFIP 80).
[9] S. Ullman, "Relaxation and Constrained Optimization by Local Processes," Computer Graphics and Image Processing, 10, pp. 115-125, 1979.
[10] S.W. Zucker and J.L. Mohammed, "A Hierarchical System for Line Labeling and Grouping," Proc. of the 1978 IEEE Computer Society Conference on Pattern Recognition and Image Processing, pp. 410-415, Chicago, 1978.
[11] M.D. Mesarovic, D. Macko and Y. Takahara, Theory of Hierarchical Multilevel Systems, Academic Press, 1970.
[12] J.L. Lions and G.I. Marchuk, Sur Les Methodes Numeriques En Sciences Physiques Et Economiques, Collection Methodes Mathematiques de L'Informatique, Dunod, 1976.
[13] Goldstein, "Levels and Ontogeny," Am. Scientist 50, 1, 1962.
[14] M. Polanyi, "Life's Irreducible Structures," Science 160, 3884, 1969.
[15] D.F. Bradley, "Multilevel Systems and Biology - View of a Submolecular Biologist," in Systems Theory and Biology (M.D. Mesarovic, ed.), Springer, 1968.
[16] O.D. Faugeras, "Decomposition and Decentralization Techniques in Relaxation Labeling," to appear in Computer Graphics and Image Processing, 1980.
[17] A.R. Hanson and E.M. Riseman, "Segmentation of Natural Scenes," in A. Hanson and E. Riseman, eds., Computer Vision Systems, Academic Press, NY, 1978, pp. 129-163.
[18] O.D. Faugeras and K. Price, "Semantic Description of Aerial Images Using Stochastic Labeling," submitted to the 5th ICPR.
 | 1980 | 95 |
SWIRL: AN OBJECT-ORIENTED AIR BATTLE SIMULATOR

Philip Klahr, David McArthur and Sanjai Narain*
The Rand Corporation
1700 Main Street
Santa Monica, California 90406

* Views expressed in this paper are the authors' own and are not necessarily shared by Rand or its research sponsors.

ABSTRACT

We describe a program called SWIRL designed for simulating military air battles between offensive and defensive forces. SWIRL is written in an object-oriented language (ROSS) where the knowledge base consists of a set of objects and their associated behaviors. We discuss some of the problems we encountered in designing SWIRL and present our approaches to them.

I INTRODUCTION

Object-oriented programming languages such as SMALLTALK [2], PLASMA [4], and DIRECTOR [5], as well as ROSS [8], enforce a 'message-passing' style of programming. A program in these languages consists of a set of objects called actors that interact with one another via the transmission of messages. Each actor has a set of attributes and a set of message templates. Associated with each message template is a behavior that is invoked when the actor receives a message that matches that template. A behavior is itself a set of message transmissions to other actors.
Computation is the selective invocation of actor behaviors via message passing.

This style of computation is especially suited to simulation in domains that may be thought of as consisting of autonomous interacting components. In such domains one can discern a natural mapping of their constituent components onto actors and of their interactions onto message transmissions. Indeed, experts in many domains may find the object-oriented metaphor a natural one around which to organize and express their knowledge. In addition, object-oriented simulations can achieve high intelligibility, modifiability and credibility [1,6,9]. However, while these languages provide a potentially powerful simulation environment, they can easily be misused, since good programming style in object-oriented languages is not as well-defined as in more standard procedural languages.

In this paper we describe a program called SWIRL, designed for simulations in the domain of air battles, and use SWIRL to demonstrate effective simulation programming techniques in an object-oriented language. SWIRL is written in ROSS, an object-oriented language that has evolved over the last two years as part of the knowledge-based simulation research at Rand [1,3,6,7,8,9]. In the following sections we discuss the goal of SWIRL, outline the main objects in the air-battle domain, and note how those objects and their behaviors map onto the ROSS objects and ROSS behaviors that constitute the SWIRL program. We discuss some of the problems encountered in designing an object-oriented simulation, and present our solutions to these problems.

II THE GOAL OF SWIRL

The goal of SWIRL is to provide a prototype of a design tool for military strategists in the domain of air battles. SWIRL embeds knowledge about offensive and defensive battle strategies and tactics. SWIRL accepts from the user a simulation environment representing offensive and defensive forces, and uses the specifications in its knowledge base to produce a simulation of an air battle. SWIRL also enables the user to observe, by means of a graphical interface, the progress of the air battle in time. Finally, SWIRL provides some limited user aids. Chief among these are an interactive browsing and documentation facility, written in ROSS, for reading and understanding SWIRL code, and an interactive history recording facility for analyzing simulation runs. This, coupled with ROSS's ability to easily modify simulation objects and their behaviors, encourages the user to explore a wide variety of alternatives in the space of offensive and defensive strategies and to discover increasingly effective options in that space.

III SWIRL'S DOMAIN

In our air-battle domain, penetrators enter an airspace with a pre-planned route and bombing mission. The goal of the defensive forces is to eliminate those penetrators. Below we list the objects that comprise this domain and briefly outline their behaviors.

1. Penetrators. These are the primary offensive objects. They are assumed to enter the defensive air space with a mission plan and route.

2. GCIs. "Ground control intercept" radars detect incoming penetrators and guide fighters to intercept penetrators.
Figure 1 shows an example snapshot of an air-battle simulation. [Figure 1: Graphical Snapshot of SWIRL Simulation.] A complete description of the SWIRL domain can be found in [7].

IV THE DESIGN OF SWIRL

In this section we outline how the above flow of command and control among the different kinds of objects is modeled in ROSS. The first step in modeling in an object-oriented language such as ROSS is to decide upon the generic actors and their behaviors. A generic object or actor in ROSS represents an object type or class and includes the attributes and behaviors of all instances of that class. For example, the generic object FIGHTER represents each of the individual fighters that may be present in any simulation environment. Second, one may need to design a set of auxiliary actors to take care of modeling any important phenomena that are unaccounted for by the generic objects.

A. The Basic Objects

We begin by defining one ROSS generic object for each of the kinds of real-world objects mentioned in the previous section. We call these objects basic objects. Each of these has several different attributes representing the structural knowledge associated with that type of object. For example, to express our structural knowledge of penetrators in ROSS, we create a generic object called PENETRATOR and define its attributes using the following ROSS command:

(ask MOVING-OBJECT create generic PENETRATOR with
    position     'a position'
    max-speed    'a maximum speed'
    speed        'current speed'
    bombs        'current number of bombs'
    status       'a status'
    flight-plan  'a flight plan')

where phrases in single-quotes represent variables.

To capture the behaviors of each kind of real-world object, we begin by asking what different kinds of input messages each of these real-world objects could receive. For example, a fighter can receive a message (a) from its fighter base telling it to chase a penetrator under guidance from a radar, (b) from that radar telling it to vector to a projected intercept point with the penetrator, or (c) an 'in-range' message informing it that the penetrator is in its radar range. Each of these messages then becomes the basis for a fighter behavior written in ROSS. To determine the structure of each of these behaviors we ask what messages the object transmits in response to each of its inputs. For example, in response to a 'chase' message from its fighter base, a fighter will send a message to itself to take off, and then send a message to the specified radar requesting guidance to the penetrator. The following ROSS command captures this behavior:

(ask FIGHTER when receiving
    (chase 'penetrator guided by >gci)
    (-you unplan all (land))
    (-you set your status to scrambled)
    (if (-you are on the ground)
        then (-you take off))
    (-requiring (-your guide-time) tell -the gci
        guide -yourself to -the penetrator))

(The '-'s signal abbreviations. The ROSS abbreviations package [8] enables the user to introduce English-like expressions into his programs and to tailor the expressions to his own preference. This approach towards readability is particularly flexible, since the user is not restricted to a system-defined English interface.)

B. Organizing Objects Into a Hierarchy

The behaviors of basic objects often have many commonalities that are revealed in the process of defining their behaviors. For example, GCIs, AWACS and SAMs all share the ability to detect, and their detection behaviors are identical. We can take advantage of ROSS's inheritance hierarchy (see [8]) to reorganize object behaviors in a way that both emphasizes these conceptual similarities and eliminates redundant code. For example, for objects that have the behaviors of detection in common, we define a more abstract generic object called RADAR to store these common behaviors. We then place it above GCI, AWACS and SAM in the hierarchy, so that they automatically inherit the behavior for detection whenever necessary. Hence we avoid writing these behaviors separately three times. (The entire hierarchical organization for SWIRL is given in [7].)

Each object type in the class hierarchy can be construed as a description or view of the objects below it. One object (AWACS) happens to inherit its behaviors along more than one branch of the hierarchy (via RADAR and MOVING-OBJECT). Such 'multiple views' or 'multiple inheritance' is possible in ROSS but not in most other object-oriented programming environments.
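The hierarchy idea maps directly onto class inheritance in most modern languages. The following Python sketch is our own analogue of the organization described above, not SWIRL code: shared detection behavior lives on an abstract Radar class, and AWACS inherits along two branches, mirroring the paper's multiple-inheritance example.

# Hypothetical Python analogue of the behavior hierarchy; method names
# are invented stand-ins.

class MovingObject:
    def move(self):
        print(f"{type(self).__name__} updates its position")

class Radar:
    def detect(self, penetrator):
        # One shared behavior instead of three copies on GCI, AWACS, SAM.
        print(f"{type(self).__name__} detects {penetrator}")

class GCI(Radar): pass
class SAM(Radar): pass
class AWACS(Radar, MovingObject):   # inherits along both branches
    pass

AWACS().detect("PENETRATOR-7")
AWACS().move()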
C. Modeling Non-Intentional Events

The basic objects and their behaviors have a clear correspondence to real-world objects and their responses to the deliberate actions of others. These actions comprise most of the significant events that we wish to simulate. However, there are several important kinds of events that represent side effects of deliberate actions (e.g., a penetrator appearing as a blip on a radar screen is a side effect of the penetrator flying its course and entering a radar range). Such events are important since they may trigger other actions (e.g., a radar detecting a penetrator and notifying a filter center). However, these non-intentional events do not correspond to real-world message transmissions (e.g., a penetrator does not notify a radar that it has entered the radar's range). An important issue in the development of SWIRL has been how to capture these non-intentional events in an object-oriented framework (i.e., via message transmissions).

One method of capturing non-intentional events could be to refine the grain of simulation. The grain of a simulation is determined by the kind of real-world object one chooses to represent as a ROSS object. A division of the air-battle domain into objects like penetrators and radars is relatively coarse grained; a finer grain is possible. In particular, one could choose to create objects that represent small parts of the airspace through which penetrators fly. Then, as penetrators move they would send messages to those sectors that they were entering or leaving (just as objects moving through real space impact that space). Sectors could be associated with radars whose ranges they define, and act as intermediary objects to notify radars when penetrators enter their ranges. Essentially this solution proposes modeling the situation at an almost 'molecule-strikes-molecule' level of detail since, by adopting this level, one can achieve a strict mechanical cause-and-effect chain that is simple to model via message transmissions between real objects.

However, although this method solves one modeling problem, it causes two others that make it intractable. First, the method entails a prohibitive amount of computation. Second, in most cases, the extra detail would make modeling very awkward and unnatural (at least for our purposes in building and using SWIRL). The natural level of decomposition is that of 'coarse objects' such as penetrator and fighter. To the extent we stray from this, we make the simulation writer's job more difficult since he can no longer conceive of the task in the way that comes simplest or most naturally to him. In summary, we reject this technique because it violates the following principle that we have found useful in designing object-oriented simulators:

THE APPROPRIATE DECOMPOSITION PRINCIPLE: Select a level of object decomposition that is 'natural' and at a level of detail commensurate with the goals and purposes of the model.

A second solution for modeling non-intentional events would be to allow the basic objects themselves to transmit the appropriate messages. For example, if we allow a penetrator (with a fixed route) to see the position and ranges of all radars, it could compute when it would enter those ranges and send the appropriate 'in-range' messages to the radars. This solution is computationally tractable. However, it has the important drawback that it allows the penetrator to access pieces of knowledge that, in reality, it cannot access. Penetrators in the real world know their routes but they may not know the location of all enemy radars. Even if they did, they do not send messages to radars telling the radars about themselves. In short, we reject this technique because it violates another useful principle that can be formulated as follows:

THE APPROPRIATE KNOWLEDGE PRINCIPLE: Try to embed in your objects only legitimate knowledge, i.e., knowledge that can be directly accessed by the real-world objects that are being modeled.

We feel that the above principles should be considered by anyone attempting to develop an object-oriented simulation, as they are critical to insure readable and conceptually clear code. The solution we offer in SWIRL represents one technique that adheres to both principles.

D. Auxiliary Objects

After we decompose the domain into a set of basic objects, we create auxiliary objects to handle non-intentional events. Auxiliary objects are full objects in the ROSS sense. However, they do not have real-world correlates. Nevertheless, such objects provide a useful device for handling certain computations that cannot be naturally delegated to real-world objects. We have included two auxiliary objects in SWIRL, the SCHEDULER and the PHYSICIST.

The SCHEDULER represents an omniscient, god-like being which, given current information, anticipates non-intentional events in the future and informs the appropriate objects as to their occurrence. The PHYSICIST models non-intentional events involving physical phenomena such as bomb explosions and ecm (electronic counter measures). Although we have now introduced objects which have no real-world correlates, the objects that do have real-world correlates adhere to the above principles. Hence, code for the basic objects remains realistic and transparent.
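The scheduler idea can be sketched compactly. The fragment below is our own illustrative reconstruction, not SWIRL code: an omniscient scheduler computes, from a penetrator's route and a radar's position, when the penetrator will come into range, and delivers the 'in-range' message at that point, so neither basic object needs knowledge it could not have in reality. The route model and all names are assumptions.

import math

# Hypothetical sketch of an omniscient SCHEDULER-style auxiliary object.
# It alone sees both the penetrator's route and the radar's coverage,
# and it notifies the radar when the route first enters the range.

def first_in_range_step(route, radar_pos, radar_range):
    """Return the index of the first waypoint inside radar range, or None."""
    for i, (x, y) in enumerate(route):
        if math.hypot(x - radar_pos[0], y - radar_pos[1]) <= radar_range:
            return i
    return None

route = [(0, 0), (2, 0), (4, 0), (6, 0)]      # penetrator's planned path
step = first_in_range_step(route, radar_pos=(5, 0), radar_range=1.5)
if step is not None:
    # Stand-in for sending an 'in-range' message to the radar object.
    print(f"SCHEDULER: at step {step}, tell RADAR (in-range PENETRATOR-7)")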
V PERFORMANCE FIGURES

The ROSS interpreter is written in MACLISP and runs under the TOPS-20 operating system on a DEC20 (KL10). The space requirement for this interpreter is about 112K 36-bit words. SWIRL currently contains the basic and auxiliary objects mentioned above, along with approximately 175 behaviors. Compiled SWIRL code uses about 48K words. A typical SWIRL simulation environment contains well over 100 objects and the file defining these objects uses about 3K words. Total CPU usage for the simulation of an air-battle about three hours long is about 95 seconds. This includes the time needed to drive the graphics interface.

VI CONCLUSIONS

We have found the object-oriented environment afforded by ROSS to be a rich and powerful medium in which to develop a non-trivial simulation program in the domain of air-battles. The program adheres to the criteria of intelligibility, modifiability and credibility laid out in [1,6,9]. Liberal use of the abbreviations package has led to highly English-like code, almost self-documenting in nature. By adhering to several stylistic principles for object-oriented programming, such as the Appropriate Knowledge Principle and the Appropriate Decomposition Principle, we have been able to further enhance SWIRL's modifiability. This has enabled us to easily experiment with a wide range of air-battle strategies. Coupled with a graphics interface which allows us to quickly trace a simulation run and test the credibility of its behaviors, SWIRL provides a powerful environment in which to develop and debug military strategies and tactics.

REFERENCES

[1] Faught, W. S., Klahr, P. and Martins, G. R. "An Artificial Intelligence Approach To Large-Scale Simulation." In Proc. 1980 Summer Computer Simulation Conference, Seattle, 1980, 231-235.
[2] Goldberg, A. and Kay, A. "Smalltalk-72 Instruction Manual." SSL 76-6, Xerox PARC, Palo Alto, 1976.
[3] Goldin, S. E. and Klahr, P. "Learning and Abstraction in Simulation." In Proc. IJCAI-81, Vancouver, 1981, 212-214.
[4] Hewitt, C. "Viewing Control Structures as Patterns of Message Passing." Artificial Intelligence 8 (1977), 323-364.
[5] Kahn, K. M. "Director Guide." AI Memo 482B, MIT, 1979.
[6] Klahr, P. and Faught, W. S. "Knowledge-Based Simulation." Proc. AAAI-80, Palo Alto, 1980, 181-183.
[7] Klahr, P., McArthur, D., Narain, S. and Best, E. "SWIRL: Simulating Warfare in the ROSS Language." The Rand Corporation, 1982.
[8] McArthur, D. and Klahr, P. "The ROSS Language Manual." N-1854-AF, The Rand Corporation, Santa Monica, 1982.
[9] McArthur, D. and Sowizral, H. "An Object-Oriented Language for Constructing Simulations." Proc. IJCAI-81, Vancouver, 1981, 809-814.
 | 1982 | 1 |
96 |
	SPEX: A Second-Generation Experiment Design System

Yumi Iwasaki and Peter Friedland
Heuristic Programming Project, Computer Science Department
Stanford University, Stanford, Ca. 94305

Abstract

The design of laboratory experiments is a complex and important scientific task. The MOLGEN project has been developing computer systems for automating the design process in the domain of molecular biology. SPEX is a second-generation system which synthesizes the best ideas of two previous MOLGEN hierarchical planning systems: stepwise refinement of skeletal plans and a layered control structure. It has been tested successfully on several problems in the task domain and promises to serve as a testbed for future work in explanation, experiment debugging, and empirical evaluation of different basic design strategies.

1. Introduction

Experiment design is the process of choosing an ordered set of laboratory operations to accomplish some given analytical or synthetic goal. This process is one of the fundamental tasks of experimental scientists; it involves large amounts of domain expertise and specialized design heuristics. The design of such experiment plans has been one of the fundamental research efforts of the MOLGEN project at Stanford. SPEX (Skeletal Planner of EXperiments) is a second-generation experiment design system. It is a synthesis of the best ideas of two previous planning systems, and will serve as a "laboratory" for the empirical testing of design strategies at many levels. SPEX will also be used for MOLGEN work on experiment verification, optimization, and debugging. This paper is a report of the work in progress on SPEX.

1.1. Previous MOLGEN Planning Systems

Friedland developed an experiment planning system using the methodology of hierarchical planning by progressive refinement of skeletal plans [1] [2]. A skeletal plan is a linear sequence of several abstract steps; actual plans are generated by refining each of the abstract steps to use specific techniques and objects by going down a general-to-specific hierarchy of laboratory operations stored within a knowledge base built by expert molecular biologists. Friedland's experiment planner chooses a skeletal plan suitable for the given goal of an experiment and refines each step by choosing the best specialization of the laboratory method at each level of abstraction.

Stefik developed another experiment design system [3]. His hierarchical planner first constructs an abstract plan by simple difference reduction, and then generates a specific plan from that abstract plan by propagating constraints. It has a multi-layered control structure to separate out different levels of decisions to be made by the planner [4].

The two systems were complementary. Friedland's system made efficient use of large amounts of domain knowledge to produce practical, but not necessarily optimal, experiment designs for a large subset of analytical tasks in molecular biology. The assumption of near-independence of abstract plan-steps worked well in the great majority of cases. Stefik's system took much longer to plan reasonable experiments, but worked better when plan-steps were highly dependent, and kept a much richer description of the planning process, because of its well-designed control structure.
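Skeletal-plan refinement, as used in Friedland's system, can be pictured in a few lines. The sketch below is our own toy illustration; the hierarchy, the technique names, and the scoring rule are invented stand-ins, not MOLGEN's knowledge base or selection heuristics.

# Hypothetical sketch of skeletal-plan refinement: each abstract step is
# refined by descending a general-to-specific hierarchy of laboratory
# operations and keeping the best-scoring specialization.

HIERARCHY = {
    "amplify": ["clone-in-plasmid", "clone-in-phage"],
    "sequence": ["maxam-gilbert", "sanger"],
}

def refine(skeletal_plan, score):
    """Replace each abstract step by its best-scoring specialization."""
    return [max(HIERARCHY.get(step, [step]), key=score)
            for step in skeletal_plan]

# A toy preference standing in for real selection heuristics.
print(refine(["amplify", "sequence"], score=len))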
SPEX was developed to synthesize two fundamental ideas from these planners, namely Friedland's skeletal-plan refinement and Stefik's multi-layered control structure, in the hope of making further progress in the construction of a design system that would be used by experts. In addition, SPEX has a greatly enhanced capacity to simulate the changing world state during an experiment. The remainder of this paper describes the layered control structure and the simulation mechanism used by SPEX.

Like Friedland's and Stefik's systems, the knowledge base of SPEX is constructed using the Unit System [5], a frame-based knowledge representation system developed in the MOLGEN project. In SPEX, the Unit System is also used to represent a trace of the planning process and the changing states of objects in the world.

2. Method

2.1. Layers of Control

In order to leave a trace of a planning process, it is necessary to identify the different kinds of operations the planner is expected to perform and to represent the entire process as a sequence of such operations and their consequences. The notion of a multi-layered control structure was introduced and operations at three different levels in the planning process were identified and represented within SPEX. The bottom level, called the Domain Space by Stefik, consists of the objects and operators in the task domain, termed lab-steps. They are experiment goals, skeletal plans, and laboratory techniques. On top of the Domain Space exists the Design Space, which consists of the various planning tasks performed by SPEX, for example, the tasks of finding an appropriate skeletal plan, expanding a skeletal plan, or refining a technique. These are termed plan-steps. When such tasks are executed, they create or delete lab-steps in the Domain Space, or create new tasks in the Design Space. Finally, the third layer, the Strategy Space, consists of several different strategies to control the execution of tasks in the Design Space.

Different types of decisions are made in the three different spaces. In the Domain Space, decisions are biology-oriented. The two major types of decisions are environmental, i.e. whether environmental conditions and structural properties allow a given laboratory technique to be used, and detailed selection, i.e. the process of deciding on the basis of selection heuristics among several techniques all of which will carry out the specified experimental goal. In the Design Space, decisions are more goal-oriented. These decisions relate the specific goal of a step in the skeletal plan to the potential utility of a laboratory technique for satisfying that goal. Finally, in the Strategy Space, choices are made among various design strategies, whether to refine in a breadth-first, depth-first, or heuristic manner, for example.

2.2. Modeling World Objects

Neither previous MOLGEN design system did a thorough job of monitoring the state of the laboratory environment during the simulated execution of an experiment plan. But the laboratory environment, i.e. the physical conditions and the molecular structures involved in the experiment, undergoes changes in the course of carrying out the experiment. Therefore, for the planning of an experiment design to be consistently successful, it is essential to predict the changes caused by carrying out a step in the plan. SPEX simulates those changes, using the part of its knowledge base that contains the potential effects of carrying out a laboratory technique. The predicted states of the world at certain points in the experiment are used as part of the selection criteria in choosing the appropriate technique for the next step in the plan. They are also used to make sure that the preconditions for a chosen technique are met before the application of the technique. If any of the preconditions are not satisfied, a sub-plan to modify the world state is produced and inserted in the original plan.

3. Implementation

SPEX is implemented in Interlisp, making extensive use of the Unit System in order to represent each operation in the different planning spaces. In the Design Space, there are at present seven types of plan-steps: obtaining the goal of an experiment, choosing a skeletal plan, expanding a skeletal plan, refining a laboratory technique, ruling in or out a laboratory technique, comparing alternative laboratory techniques, and checking the world state to verify that the preconditions for application of a technique are satisfied.

When a new task needs to be generated, a plan-step unit is created to represent the task and to leave a record of SPEX's performance. A plan-step unit has slots containing such information as when the task was created, when it was performed and what the consequence was. Some of the slots in the prototypical plan-step unit are shown in Figure 3-1.

Unit: PLAN-STEP
-----
NEXT-PLAN-STEP: <UNIT>
    The next plan-step fetched after this plan-step.
-----
LAST-PLAN-STEP: <UNIT>
    The previous plan-step fetched.
-----
STATUS: <STRING>
    The status of this plan-step; either "succeeded", "postponed", or "created".

Figure 3-1: Slots in a plan-step

Lab-steps in the Domain Space are also represented by units. Lab-step units initially record the Domain Space results of decisions made in the Design Space, but may also record later decisions made in the Domain Space. Some of the slots in the prototypical lab-step unit are shown in Figure 3-2.

Unit: LAB-STEP
-----
FOLLOWING-STEP: <UNIT>
    A pointer to the next lab-step in the plan.
-----
PRECEDING-STEP: <UNIT>
    A pointer to the preceding lab-step in the plan.
-----
CREATED-BY: <UNIT>
    A pointer to the plan-step which created this lab-step.
-----
STATUS: <STRING>
    When a lab-step has just been created, it is given the status "just-created". Later when it is ruled in or out, the status is changed to "ruled-in" or "ruled-out".

Figure 3-2: Slots in a lab-step

SPEX uses an agenda mechanism to keep track of all the pending tasks. The Strategy Space of SPEX consists of simple strategies about how to choose the task from the agenda. Currently, a strategy is chosen at the beginning of a session by the user and it is used throughout the session. There are three strategies currently available in the system. With Strategy 1, the agenda is used like a queue and tasks are fetched in first-in, first-out manner. With Strategy 2, the agenda is normally used like a stack and tasks are fetched in first-in, last-out fashion. With Strategy 3, the plan-steps are given priorities according to their types. Tasks of a higher priority are executed before any tasks of lower priorities. We are currently experimenting with various prioritizing schemes for the different plan-step types.
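The three agenda strategies are easy to picture in code. The sketch below is a hypothetical reconstruction (the task names, the priority table, and the function are ours, not SPEX's Interlisp): one agenda, three fetch disciplines.

from collections import deque

# Hypothetical sketch of SPEX's three agenda strategies.

PRIORITY = {"choose-skeletal-plan": 0, "expand-plan": 1, "refine-technique": 2}

def fetch(agenda, strategy):
    """Remove and return the next task under the chosen strategy."""
    if strategy == 1:                      # queue: first-in, first-out
        return agenda.popleft()
    if strategy == 2:                      # stack: first-in, last-out
        return agenda.pop()
    # Strategy 3: by plan-step type priority (lower number runs first)
    task = min(agenda, key=lambda t: PRIORITY[t])
    agenda.remove(task)
    return task

agenda = deque(["expand-plan", "choose-skeletal-plan", "refine-technique"])
print(fetch(agenda, 3))   # -> choose-skeletal-plan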
3.1. Simulating the World State

At the beginning of a planning session, the user is asked to provide a description of the world objects. This includes the current physical environment (temperature, pH, etc.) of the experiment and what he knows about the detailed molecular structure of his experimental objects, normally nucleic acid sequences. A unit is created to represent the initial description for each world object. When a skeletal plan is expanded to individual steps, units are created to represent the simulated state of each world object before and after application of each step. This is done by utilizing simulation information stored in the laboratory-techniques hierarchy of the knowledge base. Figure 3-3 shows some of the slots in the prototypical unit for describing nucleic acid structures. This unit is called DNA-STRUCTURE. Its composition has evolved over several years of collaborative work by several of the molecular biologists associated with the MOLGEN project. It represents all of the potential information a scientist might wish to supply about a particular nucleic acid structure. In an actual experiment, a scientist would be able to fill only a few of the over 50 slots in the prototypical unit. Figure 3-4 shows the actual values given to the sample slots during the planning of an experiment design by SPEX.

Unit: DNA-STRUCTURE
-----
STRANDEDNESS: <STRING>  One of: ["HYBRID" "SS" "DS"]
-----
LENGTH: <INTEGER>  [1 10M]  MEASUREMENT UNIT: BASE-PAIRS
-----
#-TERMINI: <INTEGER>  [0 4]
-----
#-FORKS: <INTEGER>  [0 2]
-----
TOPOLOGY: <STRING>  One of: ["DELTA-FORM" "THETA-FORM" "EYE-FORM" "Y-FORM" "LINEAR" "CIRCULAR"]
-----
TYPE: <STRING>  One of: ["HYBRID" "RNA" "DNA"]

Figure 3-3: Slots in DNA-STRUCTURE

Unit: STRUC-1
-----
STRANDEDNESS: <STRING>  "SS"
-----
LENGTH: <INTEGER>  300  MEASUREMENT UNIT: BASE-PAIRS
-----
#-TERMINI: <INTEGER>  2
-----
#-FORKS: <INTEGER>  0
-----
TOPOLOGY: <STRING>  "LINEAR"
-----
TYPE: <STRING>  "DNA"

Figure 3-4: Slots in an instance of DNA-STRUCTURE
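For readers unfamiliar with frame-based representation, the unit idea reduces to a prototype with range-restricted slots and instances that fill a few of them. This Python sketch is our own illustration of that idea, not the Unit System's actual interface; the slot table and the checking function are assumptions.

# Hypothetical sketch of a frame-style unit: a prototype declares slots
# with legal values, and an instance fills a subset of them.

DNA_STRUCTURE = {
    "STRANDEDNESS": {"HYBRID", "SS", "DS"},
    "TOPOLOGY": {"DELTA-FORM", "THETA-FORM", "EYE-FORM",
                 "Y-FORM", "LINEAR", "CIRCULAR"},
    "TYPE": {"HYBRID", "RNA", "DNA"},
}

def make_instance(prototype, **fills):
    """Create an instance, rejecting values outside a slot's legal set."""
    for slot, value in fills.items():
        legal = prototype.get(slot)
        if legal is not None and value not in legal:
            raise ValueError(f"{value!r} not legal for slot {slot}")
    return dict(fills)

struc_1 = make_instance(DNA_STRUCTURE, STRANDEDNESS="SS",
                        TOPOLOGY="LINEAR", TYPE="DNA")
print(struc_1)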
SPEX has been successfully tested on several of the same problems used as test cases in Friedland's thesis [1]. Combining the ideas of skeletal-plan refinement and multi-layered control structure proved useful in keeping the Domain Space efficiency of Friedland's system, while introducing the Design and Strategy Space flexibility of Stefik's system. The greatly improved simulation mechanism increased the reliability of the decisions made and allowed SPEX to suggest, in detail, ways of correcting low-level incompatibilities between chosen laboratory techniques.

SPEX keeps a clear trace of all the decisions made during the experiment design process. This trace will now be used for a variety of purposes in ongoing MOLGEN research. One goal of this research is to add a detailed explanation capability to SPEX. Earlier MOLGEN experiment design systems only provided explanation by listing the domain-specific rules used to make decisions at the level of laboratory techniques. The planning trace and the modular nature of the different planning spaces will allow this explanation facility to be expanded. The user will be able to direct his questioning to domain, design, or strategic motivations. We envision an initial explanation facility similar to that used by MYCIN [6]. The user asks questions using a set of key words such as "why", "when", "how", etc. We believe that besides promoting the use of SPEX among expert users for whom full explanations are a necessity, the explanation facility will also greatly facilitate debugging of the knowledge base.

A necessary complement to the task of experiment design is the task of experiment debugging. Initial experiment designs produced by even the very best of scientists rarely work perfectly the first time. We are certain that the same will be true of SPEX-produced experiment designs. Given the sequence of techniques employed in an experiment, a debugger will compare the predicted states of the world before and after each step and the actual world states during the experiment (supplied by the molecular biologist user). Then, it will point out the steps which might possibly have gone wrong and suggest solutions: alternative techniques or a remedial procedure to be performed. If the "buggy" experiment design had come from a human scientist, then the debugging information will enable him to correct his personal "knowledge base." In a similar manner, we believe that the comparison of actual laboratory results to the previously predicted results should allow the automatic improvement of the knowledge base in the case of a SPEX-generated experiment design. We do not anticipate major difficulties in building the experiment debugger given the existing mechanisms of SPEX.

In summary, SPEX represents a synthesis of several methodologies for design, namely skeletal-plan refinement and the multi-layered control structure. It is a framework for a general-purpose design testing and debugging system, which can be easily tailored to do planning in any specific application domain. There are no molecular-biology specific mechanisms inherent in SPEX; all of the domain-specific knowledge is in the associated knowledge base. SPEX can also be used to test different basic design strategies by the implementation of many additional strategies in its Strategy Space. We believe that the best way to determine the efficacy of the many different potential strategies is empirical, and SPEX will be useful as a laboratory for these experiments.
This work is part of the MOLGEN project, a joint research effort among the Departments of Computer Science, Medicine, and Biochemistry at Stanford University. The research has been supported under NSF grant MCS78-16247. Computational resources have been provided by the SUMEX-AIM National Biomedical Research Resource, NIH grant RR-00785-08, and by the Department of Computer Science.

We wish to thank our many enthusiastic MOLGEN collaborators for their assistance in this work. We are especially grateful to Kent Bach and Larry Kedes for providing the molecular biology expertise necessary to test SPEX and to Bruce Buchanan and Mike Genesereth for advice on the artificial intelligence methodologies employed.

References

1. Friedland, P.E., Knowledge-Based Experiment Design In Molecular Genetics, PhD dissertation, Stanford University, October 1979.
2. Friedland, P.E., "Knowledge-Based Experiment Design in Molecular Genetics," IJCAI-79, The International Joint Conference on Artificial Intelligence, 1979, pp. 285-287.
3. Stefik, M.J., Planning with Constraints, PhD dissertation, Stanford University, January 1980.
4. Stefik, M.J., "Planning and Meta-Planning," HPP-memo HPP-80-13, Stanford University Heuristic Programming Project, 1980.
5. Smith, R.G., Friedland, P.E., "Unit Package User's Guide," HPP-memo HPP-80-28, Stanford University Heuristic Programming Project, 1980.
6. Scott, A.C., Clancey, W.J., Davis, R., Shortliffe, E.H., "Explanation Capabilities of Production-Based Consultation Systems," American Journal of Computational Linguistics, 1979.
 | 1982 | 10 |
97 |
	CIRCUMSCRIPTION IMPLIES PREDICATE COMPLETION (SOMETIMES)

Raymond Reiter
Department of Computer Science
Rutgers University
New Brunswick, N. J. 08903

ABSTRACT

Predicate completion is an approach to closed world reasoning which assumes that the given sufficient conditions on a predicate are also necessary. Circumscription is a formal device characterizing minimal reasoning, i.e. reasoning in minimal models, and is realized by an axiom schema. The basic result of this paper is that for first order theories which are Horn in a predicate P, the circumscription of P logically implies P's completion axiom.

Predicate completion [Clark 1978, Kowalski 1978] is a device for "closing off" a first order representation. This concept stems from the observation that frequently a world description provides sufficient, but not necessary, conditions on one or more of its predicates and hence is an incomplete description of that world. In reasoning about such worlds, one often appeals to a convention of common sense reasoning which sanctions the assumption - the so-called closed world assumption [Reiter 1978] - that the information given about a certain predicate is all and only the relevant information about that predicate. Clark interprets this assumption formally as the assumption that the sufficient conditions on the predicate, which are explicitly given by the world description, are also necessary.

The idea is best illustrated by an example, so consider the following simple blocks world description:

    A and B are distinct blocks.
    A is on the table.                                        (1)
    B is on A.

These statements translate naturally into the following first order theory with equality, assuming the availability of general knowledge to the effect that blocks cannot be tables:

    BLOCK(A)
    BLOCK(B)
    ON(A,TABLE)
    ON(B,A)                                                   (2)
    A ≠ B
    A ≠ TABLE
    B ≠ TABLE

Notice that we cannot, from (2), prove that nothing is on B, i.e., (2) ⊬ (x) -ON(x,B), yet there is a common sense convention about the description (1) which should admit this conclusion. This convention holds that, roughly speaking, (1) is a description of all and only the relevant information about this world. To see how Clark understands this convention, consider the formulae

    (x). x=A V x=B ⊃ BLOCK(x)                                 (3)
    (xy). x=A & y=TABLE V x=B & y=A ⊃ ON(x,y)

which are equivalent, respectively, to the facts about the predicate BLOCK, and the predicate ON, in (2). These can be read as "if halves", or sufficient conditions, of the predicates BLOCK and ON. Clark identifies the closed world assumption with the assumption that these sufficient conditions are also necessary. This assumption can be made explicit by augmenting the representation (2) by the "only if halves", or necessary conditions, of BLOCK and ON:

    (x). BLOCK(x) ⊃ x=A V x=B
    (xy). ON(x,y) ⊃ x=A & y=TABLE V x=B & y=A

Clark refers to these "only if" formulae as the completions of the predicates BLOCK and ON respectively. It now follows that the first order representation (1) under the closed world assumption is

    (x). BLOCK(x) ≡ x=A V x=B
    (xy). ON(x,y) ≡ x=A & y=TABLE V x=B & y=A
    A ≠ B
    A ≠ TABLE
    B ≠ TABLE

From this theory we can prove that nothing is on B - (x) -ON(x,B) - a fact which was not derivable from the original theory (2).

Circumscription [McCarthy 1980] is a different approach to the problem of "closing off" a first order representation. McCarthy's intuitions about the closed world assumption are essentially semantic. For him, those statements derivable from a first order theory T under the closed world assumption about a predicate P are just the statements true in all models of T which are minimal with respect to P. Roughly speaking, these are models in which P's extension is minimal. McCarthy forces the consideration of only such models by augmenting T with the following axiom schema, called the circumscription of P in T:

    T(φ) & [(x). φ(x) ⊃ P(x)] ⊃ (x). P(x) ⊃ φ(x)

Here, if P is an n-ary predicate, then φ is an n-ary predicate parameter. T(φ) is the conjunction of the formulae of T with each occurrence of P replaced by φ. Reasoning about the theory T under the closed world assumption about P is formally identified with first order deductions from the theory T together with this axiom schema. This enlarged theory, denoted by CLOSURE_P(T), is called the closure of T with respect to P. Typically, the way this schema is used is to "guess" a suitable instance of φ, one which permits the derivation of something useful.

To see how this all works in practice, consider the blocks world theory (2), which we shall denote by T. To close T with respect to ON, augment T with the circumscription schema

    T(φ) & [(xy). φ(x,y) ⊃ ON(x,y)] ⊃ (xy). ON(x,y) ⊃ φ(x,y)          (4)

Here φ is a 2-place predicate parameter. Intuitively, this schema says that if φ is a predicate satisfying the same axioms in T as does ON, and if φ's extension is a subset of ON's, then ON's extension is a subset of φ's, i.e., ON has the minimal extension of all predicates satisfying the same axioms as ON.

To see how one might reason with the theory CLOSURE_ON(T), consider the following choice of the parameter φ in the schema (4):

    φ(x,y) ≡ x=A & y=TABLE V x=B & y=A                                 (5)

Then T(φ) is

    BLOCK(A) & BLOCK(B) & [A=A & TABLE=TABLE V A=B & TABLE=A]
    & [B=A & A=TABLE V B=B & A=A] & A ≠ B & A ≠ TABLE & B ≠ TABLE

so that, for this choice of φ, CLOSURE_ON(T) ⊢ T(φ). It is also easy to see that, for this choice of φ, CLOSURE_ON(T) ⊢ (xy). φ(x,y) ⊃ ON(x,y). Thus, the antecedent of (4) is provable, whence CLOSURE_ON(T) ⊢ (xy). ON(x,y) ⊃ φ(x,y), i.e.

    CLOSURE_ON(T) ⊢ (xy). ON(x,y) ⊃ x=A & y=TABLE V x=B & y=A          (6)

i.e. the only instances of ON are (A,TABLE) and (B,A). It is now a simple matter to show that nothing is on B, i.e. CLOSURE_ON(T) ⊢ (x). -ON(x,B), a fact which is not derivable from the original theory T.

Notice that in order to make this work, a judicious choice of the predicate parameter φ, namely (5), was required. Notice also that this choice of φ is precisely the antecedent of the "if half" (3) of ON and that, by (6), the "only if half" - the completion of ON - is derivable from the closure of T with respect to ON. For this example, circumscription is at least as powerful as predicate completion.
In fact, this example is an instance of a large class of first order theories for which circumscription implies predicate completion. Let T be a first order theory in clausal form (so that existential quantifiers have been eliminated in favour of Skolem functions, all variables are universally quantified, and each formula of T is a disjunct of literals). If P is a predicate symbol occurring in some clause of T, then T is said to be Horn in P iff every clause of T contains at most one positive literal in the predicate P. Notice that the definition allows any number of positive literals in the clauses of T so long as their predicates are distinct from P. Any such theory T may be partitioned into two disjoint sets:

    T_P:     those clauses of T containing exactly one positive literal in P, and
    T - T_P: those clauses of T containing no positive (but possibly negative) literals in P.

Clark (1978) provides a simple effective procedure for transforming a set of clauses of the form T_P into a single, logically equivalent formula of the form (x). A(x) ⊃ P(x). The converse of this formula, namely (x). P(x) ⊃ A(x), Clark calls the completion axiom for the predicate P, and he argues that augmenting T with P's completion axiom is the appropriate formalization of the notion of "closing off" a theory with respect to P. Our basic result relates this notion of closure with McCarthy's, as follows:

Theorem: Let T be a first order theory in clausal form, Horn in the predicate P. Let (x). P(x) ⊃ A(x) be P's completion axiom. Then

    CLOSURE_P(T) ⊢ (x). P(x) ⊃ A(x)

i.e. P's completion axiom is derivable by circumscription.
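To make Clark's transformation concrete for the ground case used in the blocks example, here is a small sketch. It is our own illustration, not code from the paper, and the representation of clauses as tuples of constant names is an assumption: given the ground atomic facts defining a predicate, it prints the "if half", the completion axiom, and the resulting biconditional.

# Hypothetical sketch of Clark completion for ground atomic facts,
# in the style of the blocks-world example.

def completion(predicate, facts):
    """Build the if-half, completion axiom, and biconditional for
    a predicate defined only by ground facts like ON(A,TABLE)."""
    n = len(facts[0])
    vars_ = "xyz"[:n]
    disjuncts = " V ".join(
        " & ".join(f"{v}={c}" for v, c in zip(vars_, args)) for args in facts)
    head = f"{predicate}({','.join(vars_)})"
    print(f"if half:    ({vars_}). {disjuncts} => {head}")
    print(f"completion: ({vars_}). {head} => {disjuncts}")
    print(f"closed:     ({vars_}). {head} <=> {disjuncts}")

completion("ON", [("A", "TABLE"), ("B", "A")])
# From the closed form one can, e.g., refute ON(x,B) for every x,
# mirroring the paper's conclusion that nothing is on B.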
Discussion

Circumscription and predicate completion are two seemingly different approaches to the formalization of certain forms of common sense reasoning, a problem which has recently become of major concern in Artificial Intelligence (see e.g. [AI 1980]). That circumscription subsumes predicate completion for a wide class of first order theories is thus of some theoretical interest. Moreover, circumscription is a new formalism, one whose properties are little understood. Predicate completion, on the other hand, has a solid intuitive foundation, namely, assume that the given sufficient conditions on a predicate are also necessary. The fact that predicate completion is at least sometimes implied by circumscription lends support to the hypothesis that circumscription is an appropriate formalization of the notion of closing off a first order representation.

Finally, the theorem has computational import. Notice that in order to reason with McCarthy's circumscription schema it is first necessary to determine a suitable instance of the predicate parameter φ. This is the central computational problem with circumscription. Without a mechanism for determining "good φ's", one cannot feasibly use the circumscription schema in reasoning about the closure of a representation. This problem of determining useful φ's is very like that of determining suitable predicates on which to perform induction in, say, number theory. Number theory provides an induction axiom schema, but no rules for instantiating this schema in order to derive interesting theorems. In this respect, the circumscription schema acts like an induction schema.

Now the above theorem provides a useful heuristic for computing with the closure of first order Horn theories. For we know a priori, without having to guess a φ at all, at least one non-trivial consequence of the circumscription schema, namely the completion axiom. Clearly, one should first try reasoning with this axiom before invoking the full power of circumscription by "guessing φ's".

REFERENCES

AI (1980). Special issue on non-monotonic logic, Artificial Intelligence 13 (1,2), April.

Clark, K. L. (1978). Negation as failure, in Logic and Data Bases, H. Gallaire and J. Minker (eds.), Plenum Press, NY, 293-322.

Kowalski, R. (1978). Logic for data description, in Logic and Data Bases, H. Gallaire and J. Minker (eds.), Plenum Press, NY, 77-103.

McCarthy, J. (1980). Circumscription - a form of non-monotonic reasoning, Artificial Intelligence 13, 27-39.

Reiter, R. (1978). On closed world data bases, in Logic and Data Bases, H. Gallaire and J. Minker (eds.), Plenum Press, NY, 55-76.
 | 1982 | 100 |
98 |
	Monitors as Responses to Questions: Determining Competence*

Eric Mays
Department of Computer and Information Science
Moore School of Electrical Engineering/D2
University of Pennsylvania
Philadelphia, Pa. 19104

ABSTRACT

This paper discusses the application of a propositional temporal logic to determining the competence of a monitor offer as an extended response by a question-answering system. Determining monitor competence involves reasoning about the possibility of some future state given a description of the current state and possible transitions.

I INTRODUCTION

The offer of a monitor as a response becomes possible when the system views the knowledge base (KB) as a dynamic rather than a static entity. That is, in addition to answering a user's question on the basis of the information the system currently contains, a system with a dynamic view can offer to monitor for additional relevant information which it will provide to the user if and when it learns of it. Such additional information could be about some possible future event or some previous event about which the system's knowledge is currently incomplete. In the following question-answer pairs, Q-A1 illustrates a monitor for some possible future event. The pair Q-A2 is an example of a monitor for some additional information about a previous event. Responses such as Q-A2 require reasoning that some event, of which knowledge regarding its outcome would enable us to answer the question, has taken place. At some point in the future we will learn of its outcome, when we will then answer.

Q1: Has John registered for CSE110?
A1: No, shall I let you know if he does?

Q2: Did John pass CSE110?
A2: I don't know yet. The semester has ended, but Prof. Tardy has not turned in his grades. Shall I let you know when I find out?

In order to offer monitors as extended responses the system must be competent to offer to monitor for only those events which might possibly occur or, if the system has incomplete knowledge of some event that has occurred, only that additional information it may learn of.** This requires some notion of what events are possible or what additional information may be acquired given the current state of the knowledge base. For example, ignorance of the stages through which undergraduates proceed in the university would leave a system attempting to offer monitors unable to discriminate between the following two cases.

Q1: Is John a sophomore?
A1: No, he's a freshman. Shall I let you know when he becomes a sophomore?

Q2: Is Mary a sophomore?
A2: No, she's a junior. Shall I let you know when she becomes a sophomore?

The remainder of this paper is concerned with determining monitor competence with regard to possible future events. We leave open for now the question of competence for those monitors that require reasoning about incomplete knowledge of some previous event.

* This work is partially supported by a grant from the National Science Foundation, NSF-MCS 81-07290.

** In either case it must be able to identify those future conditions which are relevant to the user's intentions. The discussion here will be limited to determination of monitor competence. See [3] for a brief discussion on relevance.
II REPRESENTATION

Temporal logic [3] is a modal logic for reasoning about the relative possibility of some state to some other state, where the relative possibility is with respect to time or sequences of events. (In contrast to, for example, relative possibility with respect to obligation or belief.) Although one might develop a suitable first order theory to deal with the problems discussed here, it seems worthwhile to study this problem within the framework of temporal logic for reasons of conceptual clarity. Restriction to the propositional setting enables us to concentrate on those issues involved with reasoning about possible change.

We model the evolution of a KB in a propositional temporal logic. The future fragment is a unified branching temporal logic [1] which makes it possible to describe properties on some or all futures. By merging the existential operators with the universal operators, a linear temporal logic is formed for the past fragment (i.e. AXp <-> EXp).

A. Syntax

Formulas are composed from the symbols,

- A set P of atomic propositions.
- Boolean connectives: v, -.
- Temporal operators: AX (every next), EX (some next), AG (every always), EG (some always), AF (every eventually), EF (some eventually), L (immediately past), P (sometime past), H (always past).

using the rules,

- If p ∈ P, then p is a formula.
- If p and q are formulas, then (-p), (p v q) are formulas.
- If m is a temporal operator and p is a formula, then (m)p is a formula.

Parentheses will occasionally be omitted, and &, ->, <-> used as abbreviations.

B. Semantics

A structure T is a triple (S, π, R) where,

- S is a set of states.
- π: S -> 2^P is an assignment of atomic propositions to states.
- R ⊆ S x S is an accessibility relation on S. Each state is required to have at least one successor and exactly one predecessor, ∀s (∃t (sRt) & ∃!t (tRs)). Define an s-branch b = (..., s(-1), s=s(0), s(1), ...) such that s(i) R s(i+1).

The satisfaction of a formula p at a node s in a structure T, <T, s> |= p, is defined as follows (here "∈" denotes set membership):

<T, s> |= p      iff  p ∈ π(s), for p a proposition
<T, s> |= -p     iff  not <T, s> |= p
<T, s> |= p v q  iff  <T, s> |= p or <T, s> |= q
<T, s> |= AGp    iff  ∀b ∀t ((t ∈ b & t ≥ s) -> <T, t> |= p)   (p is true at every time of every future)
<T, s> |= AFp    iff  ∀b ∃t (t ∈ b & t ≥ s & <T, t> |= p)      (p is true at some time of every future)
<T, s> |= AXp    iff  ∀t (sRt -> <T, t> |= p)                  (p is true at every immediate future)
<T, s> |= EGp    iff  ∃b ∀t ((t ∈ b & t ≥ s) -> <T, t> |= p)   (p is true at every time of some future)
<T, s> |= EFp    iff  ∃b ∃t (t ∈ b & t ≥ s & <T, t> |= p)      (p is true at some time of some future)
<T, s> |= EXp    iff  ∃t (sRt & <T, t> |= p)                   (p is true at some immediate future)
<T, s> |= Hp     iff  ∀b ∀t ((t ∈ b & t ≤ s) -> <T, t> |= p)   (p is true at every time of the past)
<T, s> |= Pp     iff  ∀b ∃t (t ∈ b & t < s & <T, t> |= p)      (p is true at some time of the past)
<T, s> |= Lp     iff  ∃t (tRs & <T, t> |= p)                   (p is true at the immediate past)

A formula p is valid if for every structure T and every node s in T, <T, s> |= p.

C. Axioms

In the following, axioms D1-3, A1-4, E1-4 and rules R1-3 form a complete deductive system for the future fragment [1]. Similarly, D4-5, P1-4, R1, R2, R4 are complete for the past fragment. The idea of U1 and U2 is that the relationship between the past and the future may be described locally.

D1)  AFp <-> -EG-p
D2)  EFp <-> -AG-p
D3)  AXp <-> -EX-p
D4)  Pp <-> -H-p
D5)  Lp <-> -L-p

A1)  AG(p -> q) -> (AGp -> AGq)
A2)  AX(p -> q) -> (AXp -> AXq)
A3)  AGp -> p & AXp & AX(AGp)
A4)  AG(p -> AXp) -> (p -> AGp)

E1)  AG(p -> q) -> (EGp -> EGq)
E2)  EGp -> p & EXp & EX(EGp)
E3)  AGp -> EGp
E4)  AG(p -> EXp) -> (p -> EGp)

P1)  H(p -> q) -> (Hp -> Hq)
P2)  L(p -> q) -> (Lp -> Lq)
P3)  Hp -> p & Lp & L(Hp)
P4)  H(p -> Lp) -> (p -> Hp)

U1)  L(AXp) -> p
U2)  p -> AX(Lp)

R1)  If p is a tautology, then |- p.
R2)  If |- p and |- (p -> q), then |- q.
R3)  If |- p, then |- AGp.
R4)  If |- p, then |- Hp.

Using the induction axioms, we can derive the following theorems, which are in a more conventional form (as in [3]):

EF(Hp) -> Hp
P(AGp) -> AGp

See [1] for a list of many other useful theorems for the future fragment.
III EXAMPLE

Consider as an example representing that portion of a university KB dealing with students passing and registering for courses. Let the propositional variables Q and R mean "student has passed course" and "student is registered for course", respectively. One might have the following non-logical axioms:

1) (AG)(Q -> (AX)Q) - once a student has passed a course it remains so
2) (AG)((-Q & -R) -> (EX)R) - if a student has not passed a course and is not registered then it is next possible that s/he is registered
3) (AG)(R -> (EX)Q) - if a student is registered for a course then it is next possible that s/he has passed
4) (AG)(R -> (EX)(-Q & -R)) - if a student is registered for a course then it is next possible that s/he has not passed and is not registered
5) (AG)(Q -> -R) - if a student has passed a course s/he is not registered for it
6) (AG)(R -> -Q) - if a student is registered for a course s/he has not passed it (equivalent to 5)

Given the following question,

    Is John registered for CSE110?,

there are three possibilities depending on the present state of the KB:

1) John is not registered (-R), but he has passed (Q). If we consider John registering for CSE110 as a possible monitor, it would be ruled out on the basis that it is provable that John cannot register for CSE110. Specifically, from Q and axioms 1 and 5, it is provable that -(EF)R. It would therefore be incompetent to offer to monitor for that condition.

2) John is not registered (-R), but he has not passed (-Q). In this case we could offer to monitor for John registering for CSE110, since (EF)R is derivable from axiom 2.

3) John is registered for CSE110 (R), hence he has not passed (-Q). One could competently offer to monitor for any of the following:
   a) John no longer registered for CSE110; (EF)-R
   b) John passed CSE110; (EF)Q
   c) John registered for CSE110 again; (EF)(-R & (EX)R)

This last case is interesting in that it can be viewed as a monitor for -R whose action is to set a monitor for R (whose action is to inform the user of R). Also, one may wish to include monitors that are responsible for determining whether or not some monitor that is set can still be satisfied. That is, just because something was possible once does not imply that it will always be possible. The user should probably be informed when a situation s/he may still be expecting (because a monitor was offered) can no longer occur. For example, if the system has offered to inform the user if John registers for CSE110, then s/he should be informed if John receives advance placement credit and can no longer register.
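For a finite KB of this kind, the competence test reduces to reachability in the transition graph the axioms describe. The following Python sketch is our own illustration of that reduction, not the paper's tableau-based prover: the states and transitions are hand-encoded from axioms 1-6 above, and (EF)p holds at a state iff a state satisfying p is reachable from it.

# Hypothetical sketch: deciding (EF)p by graph reachability over the
# three KB states of the example.

TRANSITIONS = {
    # (Q-status, R-status): set of next states permitted by the axioms
    ("Q", "-R"): {("Q", "-R")},                              # passed: absorbing
    ("-Q", "-R"): {("-Q", "-R"), ("-Q", "R")},               # may register (axiom 2)
    ("-Q", "R"): {("Q", "-R"), ("-Q", "-R"), ("-Q", "R")},   # axioms 3 and 4
}

def ef(state, prop):
    """(EF)prop: is some state satisfying prop reachable from state?"""
    seen, frontier = set(), [state]
    while frontier:
        s = frontier.pop()
        if prop(s):
            return True
        if s not in seen:
            seen.add(s)
            frontier.extend(TRANSITIONS[s])
    return False

registered = lambda s: s[1] == "R"
print(ef(("Q", "-R"), registered))    # False: incompetent to offer the monitor
print(ef(("-Q", "-R"), registered))   # True: the monitor offer is competent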
Thus, expressing the condition that in order for a student to register for a course s/he must not have registered for it twice before (say, because s/he dropped out or failed), requires a formula of the following form:

(AG)(-(P)(R & (L)(P)R) & -(P)Q & -R -> (EX)R)

IV CONCLUSION

A simple theorem prover based on the tableau method has been implemented for the propositional branching time temporal logic as described in [1]. Current investigations are aimed towards formulating a quantified temporal logic, as well as the complicated issues involved in increasing the efficiency of making deductions. This effort is part of a larger, more general attempt to provide extended responses in question-answering systems [4].

A final comment as to the general structure of this enterprise. One could conceivably develop a suitable first order theory to deal with the problems discussed here. It seems worthwhile, however, to study this problem within the framework of temporal logic for reasons of conceptual clarity. Restriction to the propositional setting enables us to deal strictly with those issues involved with reasoning about possible change. Also, we may be able to gain some insight into a reasonable decision procedure.

ACKNOWLEDGEMENT

Scott Weinstein, Aravind Joshi, Bonnie Webber, Sitaram Lanka, and Kathy McCoy have provided valuable discussion and/or comments.

REFERENCES

[1] M. Ben-Ari, Z. Manna, and A. Pnueli, "The Temporal Logic of Branching Time," Eighth Annual ACM Symposium on Principles of Programming Languages, Williamsburg, Va., January 1981.
[2] E. Mays, A. Joshi, and B. Webber, "Taking the Initiative in Natural Language Data Base Interactions: Monitoring as Response," Proceedings of ECAI 82, Orsay, France, July 1982.
[3] N. Rescher and A. Urquhart, Temporal Logic, Springer-Verlag, New York, 1971.
[4] B. Webber, A. Joshi, E. Mays, and K. McKeown, "Extended Natural Language Data Base Interactions," to appear.
 | 
	1982 
 | 
	101 
 | 
					
99 
							 | 
A SYSTEMATIC APPROACH TO CONTINUOUS GRAPH LABELING WITH APPLICATION TO COMPUTER VISION*

M. D. Diamond, N. Narasimhamurthi, and S. Ganapathy
Department of Electrical and Computer Engineering
University of Michigan
Ann Arbor, MI 48109

* This work was supported in part by the Robotics Research Laboratory, and in part by the Ultrasonics Imaging Laboratory, both in the Department of Electrical and Computer Engineering, University of Michigan.

ABSTRACT

The discrete and continuous graph labeling problems are discussed. A basis for the continuous graph labeling problem is presented, in which an explicit connection between the discrete and continuous problems is made. The need for this basis is argued by noting conditions which must be satisfied before solutions can be pursued in a formal manner. Several cooperative solution algorithms based on the proposed formulation, and results of the application of these algorithms to the problem of extracting line drawings, are presented.

I THE CONTINUOUS GRAPH LABELING PROBLEM

A graph labeling problem is one in which a unique label λ from a set Λ of possible labels must be assigned to each vertex of a graph G = (V,E). The assignment must be performed given information about the relationship between labels on adjacent vertices and incomplete local information about the correct label at each vertex. In a discrete graph labeling problem [1,2,3], the local information consists of a subset, Λ_i ⊆ Λ, of the label set associated with vertex v_i ∈ V, from which the correct label for each vertex must be chosen. The contextual information consists of binary relations R_ij ⊆ Λ×Λ, referred to as constraint relations, assigned to each edge v_i v_j ∈ E. The function of the constraint relations is to make explicit which labels can co-occur on adjacent vertices. The graph, label set, and constraint relations together form a constraint network [2,5]. An (unambiguous) labeling is a mapping which assigns a unique label λ ∈ Λ to each vertex of the graph. A labeling is consistent if none of the constraint relations is violated, that is, if label λ is assigned to vertex v_i and label λ' is assigned to vertex v_j then the pair (λ,λ') is in the constraint relation R_ij for the edge v_i v_j ∈ E.

Given initial labeling information, several search techniques have been developed which can be used to derive consistent labelings. The original backtracking search described by Waltz [1] was later implemented in parallel by Rosenfeld et al. [6], resulting in the discrete relaxation operator. At the same time a continuous analogue, the continuous graph labeling problem, was proposed, as well as a continuous relaxation algorithm for its solution, and since then several other relaxation algorithms have been proposed [7,8].

In a continuous graph labeling problem, the initial information consists of strength measures or figures of merit, p_i(λ_j), given for each label λ_j ∈ Λ on each vertex v_i ∈ V. The strength measures are assumed generated by feature detectors which are making observations in the presence of noise. They usually take on values in the range [0,1], a 0 indicating no response, and a 1 indicating a strong response.
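Returning to the discrete definitions for a moment, the consistency test they describe is simple enough to render directly. The following minimal Python sketch is an assumption made here for illustration (the data layout and names are not from the paper):

# Hypothetical encoding of a constraint network: R maps each edge (i, j)
# to the set of admissible (label_i, label_j) pairs.
def consistent(labeling, edges, R):
    """labeling: {vertex: label}; edges: iterable of (i, j) vertex pairs."""
    return all((labeling[i], labeling[j]) in R[(i, j)] for i, j in edges)

# Usage: a two-vertex network where only equal labels may co-occur.
R = {(0, 1): {("a", "a"), ("b", "b")}}
assert consistent({0: "a", 1: "a"}, [(0, 1)], R)
assert not consistent({0: "a", 1: "b"}, [(0, 1)], R)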
The contextual information, which is represented in terms of constraint relations for the discrete graph labeling problem, is replaced by measures of compatibility, usually taking values in the range [-1,1] or [0,1], which serve to indicate how likely the pairs of labels are to co-occur on adjacent vertices.

Several problems have resulted in the extension of the graph labeling problem from the discrete to the continuous case. In the discrete case the presence or absence of a pair in a constraint relation can be determined with certainty depending on what labelings are to be considered consistent. In the continuous case, however, there is apparently no formal means to assign specific numeric values to the compatibility coefficients, particularly for shades of compatibility between "impossible" and "very likely", although several heuristic techniques have been proposed [7,9,10]. Furthermore, with respect to a constraint network, the concept of consistency is well defined. The objective of the continuous relaxation labeling processes has often been stated to be that of improving consistency; however, the definition of consistency has not been given explicitly. This latter issue is circumvented in several of the optimization approaches which have been proposed [11,12,13], where an objective function, defined in terms of the compatibility coefficients and the initial strength measures, is given. However, because of the dependence of the objective functions on the compatibility coefficients, and because no real understanding of the role which these coefficients play yet exists, it is often difficult to describe the significance of these approaches in terms of what is being achieved in solving the problem.

In an alternate approach to the continuous graph labeling problem [14] an attempt has been made to maintain the characteristics of the original problem while allowing for more systematic approaches toward a solution. It is felt that solutions to the reformulated problem will be more useful because it will be easier to relate the results of the solution algorithm to what is being achieved in the problem domain. In order to develop this approach, we review the characteristics of the solutions to the graph labeling problem which have appeared so far (refer to Fig. 1).

The inputs to the process are the initial strength measures {p_i^0(λ_j), i=1,...,n; j=1,...,m}, which can be represented by an n×m dimensional vector:

p^0 = (p_1^0(λ_1), p_1^0(λ_2), ..., p_n^0(λ_m)) ∈ R^nm.

Since the selection of a particular label at a given vertex is related to the label selections made at other (not necessarily adjacent) vertices, information about the label selection at that vertex is contained in the initial labeling values distributed over the extent of the network.
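For concreteness, the kind of compatibility-weighted updating discussed above can be sketched as follows. This is the classical relaxation operator of Rosenfeld et al. [6] in an assumed dense-array encoding; it is cited context, not the algorithm proposed in this paper.

# p[i, k]: strength of label k at vertex i, with each row summing to 1;
# r[i, j]: m-by-m compatibilities in [-1, 1] between labels at i and j;
# neighbors[i]: list of vertices adjacent to i.
import numpy as np

def relax_step(p, r, neighbors):
    n, m = p.shape
    q = np.zeros_like(p)
    for i in range(n):
        for j in neighbors[i]:
            q[i] += r[i, j] @ p[j]            # support from neighbor j
        q[i] /= max(len(neighbors[i]), 1)     # average support, stays in [-1, 1]
    new_p = p * (1.0 + q)                     # reward well-supported labels
    return new_p / new_p.sum(axis=1, keepdims=True)   # renormalize per vertex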
The function of the global enhancement process, g, is to accumulate this evidence into the labeling values at the given vertex. The output vector

p = g(p^0)

is used by a process of local maxima selection [15], s, to choose a labeling

Λ = (λ_1, λ_2, ..., λ_n),

where λ_i is the label assigned to vertex v_i. Thus g is a function g: R^nm -> R^nm and s is a function s: R^nm -> C_n(Λ), where C_n(Λ) is the set of possible labelings. The hope is that the labeling resulting from the process s(g(p^0)) is an improvement over the labeling resulting from direct local maxima selection s(p^0).

If a numerical solution is to be sought for this problem, then a formal definition must be given to the concept of an improved labeling. In previous work, particularly with respect to computer vision, improvements were rated subjectively, or in the case of an experiment where the solution was known in advance, by the number of misclassified vertices.

Fig. 1: Function of the global enhancement process (p^0 -> g -> local maxima selection s): Λ' represents an improved labeling with respect to Λ.

In our formulation this issue is resolved by assuming that the problem domain specifies an underlying constraint network, or can be modeled to do so. The objective is then to use the initial information to choose a labeling that is (a) consistent with respect to this constraint network, and (b) which optimizes a prespecified objective function. In this extension from the discrete to the continuous graph labeling problem, the constraint relations remain intact.

We are currently investigating optimal solutions to this formulation of the graph labeling problem based on a maximum-sum decision rule,* that is, the rule is to choose a consistent labeling such that the sum of the initial labeling values is maximal. A solution to this problem could be extended in a straightforward manner to certain well established decision rules such as is found, for example, in nearest neighbor classification.

* In terms of decision theory, every consistent labeling constitutes a class and the input vector p is a point in an n×m dimensional feature space.

Though the decision rule serves to make explicit what is meant by an improved labeling, it is defined globally. The problem remains to implement it in terms of a cooperative process. The concept of a cooperative process, although not well defined, can be characterized in terms of certain general properties [11,16]. Our research is into algorithms which exhibit certain of these properties, such as locality and simplicity. In an optimal solution, the labeling algorithm must, furthermore, perform the label selection in accordance with the given decision rule. Other important issues, such as speed of convergence, are also being addressed. Two approaches which have some of these properties are demonstrated in the following section. The first is a heuristic approach based on dynamic programming [14] which converges very rapidly and with good results, but does not guarantee a consistent labeling. The second approach is based on linear programming. Details on the latter algorithm will be presented at a later date.
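To make the decision rule itself concrete, here is a deliberately brute-force rendering; it is an illustration assumed here, not the paper's cooperative algorithm, and it reuses the consistent() helper sketched earlier.

# Enumerate all labelings, keep the consistent ones, return the one
# maximizing the sum of the initial strength measures.
from itertools import product

def max_sum_labeling(vertices, labels, edges, R, p0):
    """p0[v][lab]: initial strength of label lab at vertex v."""
    best, best_val = None, float("-inf")
    for choice in product(labels, repeat=len(vertices)):
        labeling = dict(zip(vertices, choice))
        if consistent(labeling, edges, R):
            val = sum(p0[v][labeling[v]] for v in vertices)
            if val > best_val:
                best, best_val = labeling, val
    return best   # a consistent labeling maximizing the summed strengths

The enumeration is exponential in the number of vertices, which is exactly why the paper pursues local, cooperative implementations of the same rule.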
III EXPERIMENTAL RESULTS

In this section, we demonstrate the application of the two approaches discussed above to the problem of extracting polygon approximations of the outlines of objects in a scene. The experiments described here are based on the reconstruction of simple closed curves (Fig. 2) when noise has been added to the initial labeling values.

The graph used in this experiment is a 16 by 16 raster. Each vertex is represented by a pixel, and is adjacent to its eight immediate neighbors. The associated label set is shown in Fig. 3. A pair of labels on adjacent pixels are consistent if an outgoing line segment is not broken across a common border or corner, and inconsistent otherwise. Examples of consistent pairs of labels are given in Fig. 4, and examples of inconsistent pairs of labels are given in Fig. 5.

Fig. 2: Initial labeling.
Fig. 3: Label set for line drawing description.
Fig. 4: Examples of locally consistent label pairs.
Fig. 5: Examples of inconsistent label pairs.

Uniformly independently distributed noise was added to the labeling values at each pixel, resulting in the labeling, by local maxima selection, shown in Fig. 6. The two cooperative algorithms were applied to the initial labeling in an attempt to reconstruct the original curves. The first is the dynamic programming approach with data moving along the eight major directions of the raster (two horizontal, two vertical, and four diagonal). The second is the algorithm based on a linear programming approach. The performance of these algorithms is presented in Fig. 7 and Fig. 8, which show the resulting labeling (by choosing the label with greatest strength at each pixel) after 2 and 4 iterations. The dynamic programming approach reaches a fixed point after 2 iterations; however, the result is not a consistent labeling. The linear programming algorithm reconstructs the original labeling after six iterations.

Fig. 6: Initial labeling plus noise.
Fig. 7: Output of the dynamic programming algorithm after two iterations. Note: the algorithm has reached a fixed point.
Fig. 8a: Output of the linear programming algorithm after two iterations.
Fig. 8b: Output of the linear programming algorithm after four iterations.
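The local pair-consistency test used in this experiment can also be made concrete. The sketch below is a hypothetical 4-neighbor simplification invented here (the experiment itself uses 8-adjacency, including corners): a pixel label is taken to be the set of directions in which a line segment leaves the pixel.

OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}

def pair_consistent(label_a, label_b, direction):
    """direction: the side of pixel a that touches pixel b."""
    # A segment leaving a toward b must be met by one entering b, and
    # vice versa; otherwise the line is broken across the border.
    return (direction in label_a) == (OPPOSITE[direction] in label_b)

assert pair_consistent({"E", "W"}, {"E", "W"}, "E")    # line continues
assert not pair_consistent({"E"}, set(), "E")          # broken line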
IV DISCUSSION

Our interest here has been to restate the continuous graph labeling problem in a manner which allows for a systematic approach to a solution. The formulation which we have presented amounts to the classification of consistent labelings according to a prespecified decision rule. As with previous approaches, consistency is defined on a local basis to make sense with respect to a particular problem. For example, if the objective is to extract continuous curves as in the experiment described above, consistency is maintained between pairs of labels when the scene events they represent do not allow for broken lines. The global nature of the decision rule leads to a more intuitive description of what the technique accomplishes with respect to the original problem. However, as a consequence, the problem of implementing this rule on a local basis arises.

Two approaches to the reformulated problem have been demonstrated above. Our present feeling is that a linear programming approach should yield an optimal solution to the continuous graph labeling problem based on a maximum-sum decision rule. However, the restriction that the algorithm must be implemented in a local manner has led to some theoretical problems, such as resolving cycling under degeneracy, which remain to be solved. Our investigation into these problems is continuing. Obviously, the value of this approach and any techniques which may be derived from it will depend on whether or not real world applications can be modeled in such a manner that the absolute consistency between pairs of labels is meaningful. We hope to demonstrate this in at least one problem, deriving line drawings from real world scenes, in forthcoming results.

REFERENCES

[1] D. L. Waltz, "Generating semantic descriptions from drawings of scenes with shadows," Technical Report AI-271, M.I.T., 1972.
[2] U. Montanari, "Networks of constraints: fundamental properties and application to picture processing," Information Sciences, vol. 7, pp. 95-132, 1974.
[3] R. M. Haralick and L. G. Shapiro, "The consistent labeling problem: part I," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-1, pp. 173-184, 1979.
[4] R. M. Haralick and L. G. Shapiro, "The consistent labeling problem: part II," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 193-203, 1980.
[5] A. K. Mackworth, "Consistency in networks of relations," Artificial Intelligence, vol. 8, pp. 99-118, 1977.
[6] A. Rosenfeld, R. A. Hummel, and S. W. Zucker, "Scene labeling by relaxation operations," IEEE Trans. Syst., Man, Cybern., vol. SMC-6, pp. 420-433, 1976.
[7] S. Peleg, "A new probabilistic relaxation scheme," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 362-369, 1980.
[8] R. L. Kirby, "A product rule relaxation method," Comput. Graphics Image Processing, vol. 12, pp. 158-189, 1980.
[9] S. Peleg and A. Rosenfeld, "Determining compatibility coefficients for curve enhancement relaxation processes," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 548-555, 1978.
[10] H. Yamamoto, "A method of deriving compatibility coefficients for relaxation operators," Comput. Graphics Image Processing, vol. 10, pp. 256-271, 1978.
[11] S. Ullman, "Relaxation and constrained optimization by local processes," Comput. Graphics Image Processing, vol. 11, pp. 115-125, 1979.
[12] R. A. Hummel and S. W. Zucker, "On the foundations of relaxation labeling processes," Tech. Rep., Dept. of Elect. Eng., McGill University, Montreal, Quebec, Canada.
[13] O. D. Faugeras and M. Berthod, "Improving consistency and reducing ambiguity in stochastic labeling: an optimization approach," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 412-424, 1981.
[14] M. D. Diamond and S. Ganapathy, "Cooperative solutions to the graph labeling problems," Proc. PRIP 82 Conference on Pattern Recognition and Image Processing, June 1982, to appear.
[15] S. W. Zucker, Y. G. Leclerc, and J. L. Mohammed, "Continuous relaxation and local maxima selection: conditions for equivalence," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 117-127, 1981.
[16] L. S. Davis and A. Rosenfeld, "Cooperating processes for low-level vision: a survey," TR-123, Dept. of Computer Science, University of Texas, Austin, 1980.
[17] E. C. Freuder, "Synthesizing constraint expressions," Comm. ACM, vol. 21, pp. 958-966, 1978.
 | 
	1982 
 | 
	102 
 | 
					