The Soar User's Manual Version 9.6 - Soar Cognitive Architecture


The Soar User’s Manual
Version 9.6.0

John E. Laird, Clare Bates Congdon, Mazin Assanie, Nate Derbinsky and Joseph Xu

Additional contributions by:
Mitchell Bloch, Karen J. Coulter, Steven Jones, Aaron Mininger, Preeti Ramaraj and Bryan Stearns

Division of Computer Science and Engineering
University of Michigan

Draft of: November 2, 2017
Errors may be reported to John E. Laird (laird@umich.edu)

Copyright © 1998 - 2017, The Regents of the University of Michigan

Development of earlier versions of this manual was supported under contract N00014-92K-2015 from the Advanced Systems Technology Office of the Advanced Research Projects Agency and the Naval Research Laboratory, and contract N66001-95-C-6013 from the Advanced Systems Technology Office of the Advanced Research Projects Agency and the Naval Command and Ocean Surveillance Center, RDT&E division.


Contents

1 Introduction
  1.1 Using this Manual
  1.2 Contacting the Soar Group
  1.3 Different Platforms and Operating Systems

2 The Soar Architecture
  2.1 An Overview of Soar
    2.1.1 Types of Procedural Knowledge in Soar
    2.1.2 Problem-Solving Functions in Soar
    2.1.3 An Example Task: The Blocks-World
    2.1.4 Representation of States, Operators, and Goals
    2.1.5 Proposing candidate operators
    2.1.6 Comparing candidate operators: Preferences
    2.1.7 Selecting a single operator: Decision
    2.1.8 Applying the operator
    2.1.9 Making inferences about the state
    2.1.10 Problem Spaces
  2.2 Working memory: The Current Situation
  2.3 Production Memory: Long-term Procedural Knowledge
    2.3.1 The structure of a production
    2.3.2 Architectural roles of productions
    2.3.3 Production Actions and Persistence
  2.4 Preference Memory: Selection Knowledge
    2.4.1 Preference Semantics
    2.4.2 How preferences are evaluated to decide an operator
  2.5 Soar’s Execution Cycle: Without Substates
  2.6 Input and Output
  2.7 Impasses and Substates
    2.7.1 Impasse Types
    2.7.2 Creating New States
    2.7.3 Results
    2.7.4 Justifications: Support for results
    2.7.5 Chunking: Learning Procedural Knowledge
    2.7.6 The calculation of o-support
    2.7.7 Removal of Substates: Impasse Resolution
    2.7.8 Soar’s Cycle: With Substates
    2.7.9 Removal of Substates: The Goal Dependency Set

3 The Syntax of Soar Programs
  3.1 Working Memory
    3.1.1 Symbols
    3.1.2 Objects
    3.1.3 Timetags
    3.1.4 Acceptable preferences in working memory
    3.1.5 Working Memory as a Graph
    3.1.6 Working Memory Activation
  3.2 Preference Memory
  3.3 Production Memory
    3.3.1 Production Names
    3.3.2 Documentation string (optional)
    3.3.3 Production type (optional)
    3.3.4 Comments (optional)
    3.3.5 The condition side of productions (or LHS)
    3.3.6 The action side of productions (or RHS)
    3.3.7 Grammars for production syntax
  3.4 Impasses in Working Memory and in Productions
    3.4.1 Impasses in working memory
    3.4.2 Testing for impasses in productions
  3.5 Soar I/O: Input and Output in Soar
    3.5.1 Overview of Soar I/O
    3.5.2 Input and output in working memory
    3.5.3 Input and output in production memory

4 Procedural Knowledge Learning
  4.1 Chunking
  4.2 Explanation-based Chunking
  4.3 Overview of the EBC Algorithm
    4.3.1 Identity
    4.3.2 The Five Main Components of Explanation-Based Chunking
  4.4 What EBC Does Prior to the Learning Episode
    4.4.1 Identity Assignment and Propagation
    4.4.2 Relevant Operator Selection Knowledge Tracking
  4.5 What EBC Does During the Learning Episode
    4.5.1 Calculating the Complete Set of Results
    4.5.2 Backtracing and the Three Types of Analysis Performed
    4.5.3 Rule Formation
  4.6 Subtleties of EBC
    4.6.1 Relationship Between Chunks and Justifications
    4.6.2 Chunk Inhibition
    4.6.3 Chunks Based on Chunks
    4.6.4 Mixing Chunks and Justifications
    4.6.5 Generality and Correctness of Learned Rules
    4.6.6 Over-specialization and Over-generalization
    4.6.7 Previous Results and Rule Repair
    4.6.8 Missing Operator Selection Knowledge
    4.6.9 Generalizing Over Operators Selected Probabilistically
    4.6.10 Collapsed Negative Reasoning
    4.6.11 Problem-Solving That Doesn’t Test The Superstate
    4.6.12 Disjunctive Context Conflation
    4.6.13 Generalizing knowledge retrieved from semantic or episodic memory
    4.6.14 Learning from Instruction
    4.6.15 Determining Which OSK Preferences are Relevant
    4.6.16 Generalizing Knowledge From Math and Other Right-Hand Side Functions
    4.6.17 Situations in which a Chunk is Not Learned
  4.7 Usage
    4.7.1 Overview of the chunk command
    4.7.2 Enabling Procedural Learning
    4.7.3 Fine-tuning What Your Agent Learns
    4.7.4 Examining What Was Learned
  4.8 Explaining Learned Procedural Knowledge
  4.9 Visualizing the Explanation

5 Reinforcement Learning
  5.1 RL Rules
  5.2 Reward Representation
  5.3 Updating RL Rule Values
    5.3.1 Gaps in Rule Coverage
    5.3.2 RL and Substates
    5.3.3 Eligibility Traces
    5.3.4 GQ(λ)
  5.4 Automatic Generation of RL Rules
    5.4.1 The gp Command
    5.4.2 Rule Templates
    5.4.3 Chunking

6 Semantic Memory
  6.1 Working Memory Structure
  6.2 Knowledge Representation
    6.2.1 Integrating Long-Term Identifiers with Soar
  6.3 Storing Semantic Knowledge
    6.3.1 Store command
    6.3.2 Store-new command
    6.3.3 User-Initiated Storage
    6.3.4 Storage Location
  6.4 Retrieving Semantic Knowledge
    6.4.1 Non-Cue-Based Retrievals
    6.4.2 Cue-Based Retrievals
    6.4.3 Retrieval with Depth
  6.5 Performance
    6.5.1 Math queries
    6.5.2 Performance Tweaking

7 Episodic Memory
  7.1 Working Memory Structure
  7.2 Episodic Storage
    7.2.1 Episode Contents
    7.2.2 Storage Location
  7.3 Retrieving Episodes
    7.3.1 Cue-Based Retrievals
    7.3.2 Absolute Non-Cue-Based Retrieval
    7.3.3 Relative Non-Cue-Based Retrieval
    7.3.4 Retrieval Meta-Data
  7.4 Performance
    7.4.1 Performance Tweaking

8 Spatial Visual System
  8.1 The scene graph
    8.1.1 svs viewer
  8.2 Scene Graph Edit Language
    8.2.1 Examples
  8.3 Commands
    8.3.1 add node
    8.3.2 copy node
    8.3.3 delete node
    8.3.4 set transform
    8.3.5 set tag
    8.3.6 delete tag
    8.3.7 extract and extract once
  8.4 Filters
    8.4.1 Result lists
    8.4.2 Filter List
    8.4.3 Examples
  8.5 Writing new filters
    8.5.1 Filter subclasses
    8.5.2 Generic Node Filters
  8.6 Command line interface

9 The Soar User Interface
  9.1 Basic Commands for Running Soar
    9.1.1 soar
    9.1.2 run
    9.1.3 exit
    9.1.4 help
    9.1.5 decide
    9.1.6 alias
  9.2 Procedural Memory Commands
    9.2.1 sp
    9.2.2 gp
    9.2.3 production
  9.3 Short-term Memory Commands
    9.3.1 print
    9.3.2 wm
    9.3.3 preferences
    9.3.4 svs
  9.4 Learning
    9.4.1 chunk
    9.4.2 rl
  9.5 Long-term Declarative Memory
    9.5.1 smem
    9.5.2 epmem
  9.6 Other Debugging Commands
    9.6.1 trace
    9.6.2 output
    9.6.3 explain
    9.6.4 visualize
    9.6.5 stats
    9.6.6 debug
  9.7 File System I/O Commands
    9.7.1 File System
    9.7.2 load
    9.7.3 save
    9.7.4 echo

Index

Summary of Soar Aliases, Variables, and Functions

List of Figures

2.1 Soar is continually trying to select and apply operators.
2.2 The initial state and goal of the “blocks-world” task.
2.3 The initial state of the blocks world as working memory objects
2.4 The WM state in blocks world after the first operator is selected
2.5 Six proposed blocks world operators
2.6 The blocks-world problem space
2.7 An abstract view of production memory
2.8 The preference resolution process
2.9 A detailed illustration of Soar’s decision cycle.
2.10 A simplified version of the Soar algorithm.
2.11 A simplified illustration of a subgoal stack.
2.12 Simplified Representation of the context dependencies
2.13 The Dependency Set in Soar.
3.1 A semantic net illustration of four objects in working memory.
3.2 An example production from the example blocks-world task.
3.3 An example portion of the input link for the blocks-world task.
3.4 An example portion of the output link for the blocks-world task.
4.1 A Soar 9.4.0 chunk vs. an explanation-based chunk
4.2 A comparison of a working memory trace and an explanation trace
4.3 A visualization of an explanation trace
4.4 An explanation trace of two simple rules that matched in a substate
4.5 An explanation trace after identity analysis
4.6 The five main components of explanation-based chunking
4.7 The seven stages of rule formation
4.8 A colored visualization of an explanation trace
5.1 Example Soar substate operator trace.
6.1 Example long-term identifier with four augmentations.
7.1 Example episodic memory cache setting data.
8.1 SVS environment setup
8.2 SVS scene graph representation

Chapter 1

Introduction

Soar has been developed to be an architecture for constructing general intelligent systems. It has been in use since 1983, and has evolved through many different versions. This manual documents the most current of these: version 9.6.0.

Our goals for Soar include that it ultimately be an architecture that can:

- be used to build systems that work on the full range of tasks expected of an intelligent agent, from highly routine to extremely difficult, open-ended problems;
- represent and use appropriate forms of knowledge, such as procedural, declarative, episodic, and possibly iconic;
- employ the full range of possible problem solving methods;
- interact with the outside world; and
- learn about all aspects of the tasks and its performance on those tasks.

In other words, our intention is for Soar to support all the capabilities required of a general intelligent agent. Below are the major principles that are the cornerstones of Soar’s design:

1. The number of distinct architectural mechanisms should be minimized. Classically Soar had only a single representation of permanent knowledge (production rules), a single representation of temporary knowledge (objects with attributes and values), a single mechanism for generating goals (automatic subgoaling), and a single learning mechanism (chunking). It was only as Soar was applied to diverse tasks in complex environments that we found these mechanisms to be insufficient and added new long-term memories (semantic and episodic) and learning mechanisms (semantic, episodic, and reinforcement learning) to extend Soar agents with crucial new functionalities.

2. All decisions are made through the combination of relevant knowledge at run-time. In Soar, every decision is based on the current interpretation of sensory data and any relevant knowledge retrieved from permanent memory. Decisions are never precompiled into uninterruptible sequences.

1.1 Using this Manual

We expect that novice Soar users will read the manual in the order it is presented. Not all users will make use of the mechanisms described in Chapters 4-8, but it is important to know that these capabilities exist.

Chapter 2 and Chapter 3 describe Soar from different perspectives: Chapter 2 describes the Soar architecture, but avoids issues of syntax, while Chapter 3 describes the syntax of Soar, including the specific conditions and actions allowed in Soar productions.

Chapter 4 describes chunking, Soar’s mechanism to learn new procedural knowledge (productions).

Chapter 5 describes reinforcement learning (RL), a mechanism by which Soar’s procedural knowledge is tuned given task experience.

Chapter 6 and Chapter 7 describe Soar’s long-term declarative memory systems, semantic and episodic.

Chapter 8 describes the Spatial Visual System (SVS), a mechanism by which Soar can convert complex perceptual input into practical semantic knowledge.

Chapter 9 describes the Soar user interface: how the user interacts with Soar. The chapter is a catalog of user-interface commands, grouped by functionality. The most accurate and up-to-date information on the syntax of the Soar User Interface is found online, at the Soar web site.

Advanced users will refer most often to Chapter 9, flipping back to Chapters 2 and 3 to answer specific questions.

Chapters 2 and 3 make use of a Blocks World example agent. The Soar code for this agent can be downloaded at https://web.eecs.umich.edu/~soar/blocksworld.soar.

Additional Back Matter

After these chapters is an index; the last pages of this manual contain a summary and index of the user-interface functions for quick reference.

Not Described in This Manual

Some of the more advanced features of Soar are not described in this manual, such as how to interface with a simulator, or how to create Soar applications using multiple interacting agents. The Soar project website (see link below) has additional help documents and resources.

For novice Soar users, try The Soar 9 Tutorial, which guides the reader through several example tasks and exercises.

1.2 Contacting the Soar Group

Resources on the Internet

The primary website for Soar is:

http://soar.eecs.umich.edu/

Look here for the latest Soar-related downloads, documentation, FAQs, and announcements, as well as links to information about specific Soar research projects and researchers.

Soar kernel development is hosted on GitHub at

https://github.com/SoarGroup

This site contains the public GitHub repository, a wiki describing the command-line interface, and an issue tracker where users can report bugs or suggest features.

To contact the Soar group or get help, or to receive notifications of significant developments in Soar, we recommend that you register with one or both of our email lists: for questions about using Soar, you can use the soar-help list; for other discussion or to receive announcements, use the soar-group list.

Also, please do not hesitate to file bugs on our issue tracker:

https://github.com/SoarGroup/Soar/issues

To avoid redundant entries, please search for duplicate issues first.

For Those Without Internet Access

Mailing Address:

The Soar Group
Artificial Intelligence Laboratory
University of Michigan
2260 Hayward Street
Ann Arbor, MI 48109-2121
USA

1.3 Different Platforms and Operating Systems

Soar runs on a wide variety of platforms, including Linux, Unix (although not heavily tested), Mac OS X, and Windows (10, 7, possibly 8, and Vista, XP, 2000 and NT). We currently test Soar on both 32-bit and 64-bit versions of Ubuntu Linux, OS X 10, and Windows 10.

This manual documents Soar generally, although all references to files and directories use Unix format conventions rather than Windows-style folders.

Chapter 2

The Soar Architecture

This chapter describes the Soar architecture. It covers all aspects of Soar except for the specific syntax of Soar’s memories and descriptions of the Soar user-interface commands. This chapter gives an abstract description of Soar. It starts by giving an overview of Soar and then goes into more detail for each of Soar’s main memories (working memory, production memory, and preference memory) and processes (the decision procedure, learning, and input and output).

2.1 An Overview of Soar

The design of Soar is based on the hypothesis that all deliberate goal-oriented behavior can be cast as the selection and application of operators to a state. A state is a representation of the current problem-solving situation; an operator transforms a state (makes changes to the representation); and a goal is a desired outcome of the problem-solving activity.

As Soar runs, it is continually trying to apply the current operator and select the next operator (a state can have only one operator at a time), until the goal has been achieved. The selection and application of operators is illustrated in Figure 2.1.

Figure 2.1: Soar is continually trying to select and apply operators.

Soar has separate memories (and different representations) for descriptions of its current

situation and its long-term procedural knowledge. In Soar, the current situation, including data from sensors, results of intermediate inferences, active goals, and active operators, is held in working memory. Working memory is organized as objects. Objects are described in terms of their attributes; the values of the attributes may correspond to sub-objects, so the description of the state can have a hierarchical organization. (This need not be a strict hierarchy; for example, there’s nothing to prevent two objects from being “substructure” of each other.)

Long-term procedural knowledge is held in production memory. Procedural knowledge specifies how to respond to different situations in working memory, and can be thought of as the program for Soar. The Soar architecture cannot solve any problems without the addition of long-term procedural knowledge. (Note the distinction between the “Soar architecture” and the “Soar program”: the former refers to the system described in this manual, common to all users, and the latter refers to knowledge added to the architecture.)

A Soar program contains the knowledge to be used for solving a specific task (or set of tasks), including information about how to select and apply operators to transform the states of the problem, and a means of recognizing that the goal has been achieved.

2.1.1 Types of Procedural Knowledge in Soar

Soar’s procedural knowledge can be categorized into four distinct types of knowledge:

1. Inference Rules

   In Soar, we call these state elaborations. This knowledge provides monotonic inferences that can be made about the state in a given situation. The knowledge created by such rules is not persistent and exists only as long as the conditions of the rules are met.

2. Operator Proposal Knowledge

   Knowledge about when a particular operator is appropriate for a situation. Note that multiple operators may be appropriate in a given context.
So, Soar also needs knowledge to determine which of the candidates to choose:

3. Operator Selection Knowledge

   Knowledge about the desirability of an operator in a particular situation. Such knowledge can be either in terms of a single operator (e.g. never choose this operator in this situation) or relational (e.g. prefer this operator over another in this situation).

4. Operator Application Rules

   Knowledge of how a specific selected operator modifies the state. This knowledge creates persistent changes to the state that remain even after the rule no longer matches or the operator is no longer selected.

Note that state elaborations can indirectly affect operator selection and application by creating knowledge that the proposal and application rules match on.
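As a rough illustration of the first type, a state elaboration that marks a block as clear when nothing is on top of it might be sketched as follows; the rule name and the ^object, ^ontop, and ^clear attributes are invented for illustration and are not taken from the manual’s example agent:

```soar
# Hypothetical state elaboration: a block is "clear" if no object is on top of it.
sp {elaborate*block*clear
   (state <s> ^object <b>)
   (<b> ^type block)
  -{(state <s> ^object <other>)
    (<other> ^ontop <b>)}
-->
   (<b> ^clear yes)}
```

Because elaborations are monotonic inferences, the (<b> ^clear yes) augmentation is retracted automatically as soon as some object gains an ^ontop link to <b> and the negated condition no longer holds, matching the non-persistence described above.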

2.1.2 Problem-Solving Functions in Soar

These problem-solving functions are the primitives for generating behavior that is relevant to the current situation: elaborating the state, proposing candidate operators, comparing the candidates, and applying the operator by modifying the state. These functions are driven by the knowledge encoded in a Soar program.

Soar represents that knowledge as production rules. Production rules are similar to “if-then” statements in conventional programming languages. (For example, a production might say something like “if there are two blocks on the table, then suggest an operator to move one block on top of the other block”.) The “if” part of the production is called its conditions and the “then” part of the production is called its actions. When the conditions are met in the current situation as defined by working memory, the production is matched and it will fire, which means that its actions are executed, making changes to working memory.

Selecting the current operator involves making a decision once sufficient knowledge has been retrieved. This is performed by Soar’s decision procedure, which is a fixed procedure that interprets preferences that have been created by the knowledge retrieval functions. The knowledge-retrieval and decision-making functions combine to form Soar’s decision cycle.

When the knowledge to perform the problem-solving functions is not directly available in productions, Soar is unable to make progress and reaches an impasse. There are three types of possible impasses in Soar:

1. An operator cannot be selected because no new operators are proposed.

2. An operator cannot be selected because multiple operators are proposed and the comparisons are insufficient to determine which one should be selected.

3. An operator has been selected, but there is insufficient knowledge to apply it.

In response to an impasse, the Soar architecture creates a substate in which operators can be selected and applied to generate or deliberately retrieve the knowledge that was not directly available; the goal in the substate is to resolve the impasse. For example, in a substate, a Soar program may do a lookahead search to compare candidate operators if comparison knowledge is not directly available. Impasses and substates are described in more detail in Section 2.7.

2.1.3 An Example Task: The Blocks-World

We will use a task called the blocks-world as an example throughout this manual. In the blocks-world task, the initial state has three blocks named A, B, and C on a table; the operators move one block at a time to another location (on top of another block or onto the table); and the goal is to build a tower with A on top, B in the middle, and C on the bottom. The initial state and the goal are illustrated in Figure 2.2.

The Soar code for this task is available online at https://web.eecs.umich.edu/~soar/blocksworld.soar. You do not need to look at the code at this point.
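The English proposal rule quoted above might be sketched in Soar’s production syntax roughly as follows; the rule name and attribute names are invented for illustration, and Chapter 3 gives the exact syntax used by the example agent:

```soar
# Hypothetical operator proposal: if two distinct clear blocks exist,
# propose moving one on top of the other.
sp {blocks-world*propose*move-block
   (state <s> ^object <b1>
              ^object { <> <b1> <b2> })
   (<b1> ^type block ^clear yes)
   (<b2> ^type block ^clear yes)
-->
   (<s> ^operator <o> +)
   (<o> ^name move-block
        ^moving-block <b1>
        ^destination <b2>)}
```

The + on the ^operator action is an acceptable preference: it makes the operator a candidate for selection without yet saying that it should win. Preferences are covered in Section 2.4.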

Figure 2.2: The initial state and goal of the “blocks-world” task.

The operators in this task move a single block from its current location to a new location; each operator is represented with the following information:

- the name of the block being moved
- the current location of the block (the “thing” it is on top of)
- the destination of the block (the “thing” it will be on top of)

The goal in this task is to stack the blocks so that C is on the table, with block B on top of block C, and block A on top of block B.

2.1.4 Representation of States, Operators, and Goals

The initial state in our blocks-world task, before any operators have been proposed or selected, is illustrated in Figure 2.3.

A state can have only one selected operator at a time but it may also have a number of potential operators that are in consideration. These proposed operators should not be confused with the active, selected operator.

Figure 2.4 illustrates working memory after the first operator has been selected.
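In Soar’s printed working-memory format, an operator object carrying the three pieces of information above might look like the following; the identifiers and attribute names here are illustrative only, not copied from the example agent:

```soar
# Hypothetical working-memory printout of one proposed move-block operator:
(o1 ^name move-block
    ^moving-block b1      # the block being moved
    ^current t1           # what the block is currently on top of
    ^destination b2)      # what the block will be placed on top of
```

Chapter 3 describes the actual notation for working memory objects, identifiers, and attributes in detail.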
