Logic and Logic Programming

J.A. Robinson

Logic has been around for a very long time [23]. It was already an old subject 23 centuries ago, in Aristotle's day (384-322 BC). While Aristotle was not its originator, despite a widespread impression to the contrary, he was certainly its first important figure. He placed logic on sound systematic foundations, and it was a major course of study in his own university in Athens. His lecture notes on logic can still be read today. No doubt he taught logic to the future Alexander the Great when he served for a time as the young prince's personal tutor. In Alexandria a generation later (about 300 B.C.), Euclid played a similar role in systematizing and teaching the geometry and number theory of that era. Both Aristotle's logic and Euclid's geometry have endured and prospered. In some high schools and colleges, both are still taught in a form similar to their original one. The old logic, however, like the old geometry, has by now evolved into a much more general and powerful form.

Modern ('symbolic' or 'mathematical') logic dates back to 1879, when Frege published the first version of what today is known as the predicate calculus [14]. This system provides a rich and comprehensive notation, which Frege intended to be adequate for the expression of all mathematical concepts and for the formulation of exact deductive reasoning about them. It seems to be so. The principal feature of the predicate calculus is that it offers a precise characterization of the concept of proof. Its proofs, as well as its sentences and its other formal expressions, are mathematically defined objects which are intended not only to express ideas meaningfully--that is, to be used as one uses a language--but also to be the subject matter of mathematical analysis. They are also capable of being manipulated as the data objects of construction and recognition algorithms.

At the end of the nineteenth century, mathematics had reached a stage in which it was more than ready to exploit Frege's powerful new instrument. Mathematicians were opening up new areas of research that demanded much deeper logical understanding, and far more careful handling of proofs, than had previously been required. Some of these were David Hilbert's abstract axiomatic recasting of geometry and Giuseppe Peano's of arithmetic, as well as Georg Cantor's intuitive explorations of general set theory, especially his elaboration of the dazzling theory of transfinite ordinal and cardinal numbers. Others were Ernst Zermelo's axiomatic analysis of set theory following the discovery of the logical and set-theoretic paradoxes (such as Bertrand Russell's set of all sets which are not members of themselves, which therefore by definition both is, and also is not, a member of itself); and the huge reductionist work Principia Mathematica by Bertrand Russell and Alfred North Whitehead. All of these developments had either shown what could be done, or had revealed what needed to be done, with the help of this new logic. But it was necessary first for mathematicians to master its techniques and to explore its scope and its limits.

Significant early steps toward this end were taken by Leopold Löwenheim (1915) [29] and Thoralf Skolem [45], who studied the symbolic 'satisfiability' of formal expressions. They showed that sets of abstract logical conditions could be proved consistent by being given specific interpretations constructed from the very symbolic expressions in which they are formulated.
Their work opened the way for Kurt Gödel (1930, [17]) and Jacques Herbrand (1930, [19]) to prove, in their doctoral dissertations, the first versions of what is now called the completeness of the predicate calculus. Gödel and Herbrand both demonstrated that the proof machinery of the predicate calculus can provide a formal proof for every logically true proposition, and indeed they each gave a constructive method for finding the proof, given the proposition.

Gödel's more famous achievement, his discovery in 1931 of the amazing 'incompleteness theorems' about formalizations of arithmetic, has tended to overshadow this important earlier work of his, which is a result about pure logic, whereas his incompleteness results are about certain applied logics (formal axiomatic theories of elementary number theory, and similar systems) and do not directly concern us here.

The completeness of the predicate calculus links the syntactic property of formal provability with the conceptually quite different semantic property of logical truth. It assures us that each property belongs to exactly the same sentences. Formal syntax and formal semantics are both needed, but for a time the spotlight was on formal syntax, and formal semantics had to wait until Alfred Tarski (1934, [46]) introduced the first rigorous semantical theory for the predicate calculus, by precisely defining logical consequence and other related notions. Once it was filled out by the concepts of Tarski's semantics, the theory of the predicate calculus was no longer unbalanced. Shortly afterward Gerhard Gentzen (1936, [15]) further sharpened the syntactical results on provability by showing that if a sentence can be proved at all, then it can be proved in a 'direct' way, without the need to introduce any extraneous 'clever' concepts; those occurring already in the sentence itself are always sufficient.

All of these positive discoveries of the 1920s and 1930s laid the foundations on which today's predicate calculus theorem-proving programs, and thus logic programming, have been built.

Not all the great logical discoveries of this period were positive. In 1936 Alonzo Church and Alan Turing (see [6, 47]) independently discovered a fundamental negative property of the predicate calculus. There had until then been an intense search for a positive solution to what Hilbert called the decision problem--the problem to devise an algorithm for the predicate calculus which would correctly determine, for any formal sentence B and any set A of formal sentences, whether or not B is a logical consequence of A. Church and Turing found that despite the existence of the proof procedure, which correctly recognizes (by constructing a proof of B from A) all cases where B is in fact a logical consequence of A, there is not and cannot be an algorithm which can similarly correctly recognize all cases in which B is not a logical consequence of A. Their discovery bears directly on all attempts to write theorem-proving software. It means that it is pointless to try to program a computer to answer 'yes' or 'no' correctly to every question of the form 'is this a logically true sentence?' The most that can be done is to identify useful subclasses of sentences for which a decision procedure can be found. Many such subclasses are known. They are called 'solvable subcases of the decision problem', but as far as I know none of them have turned out to be of much practical interest.

When World War II began in 1939 all the basic theoretical foundations of today's computational logic were in place. What was still lacking was any practical way of actually carrying out the vast symbolic computations called for by the proof procedure. Only the very simplest of examples could be done by hand.
Already there were those, however--Turing himself for one--who were making plans which would eventually fill this gap. Turing's method in negatively solving the decision problem had been to design a highly theoretical, abstract version of the modern stored-program, general-purpose universal digital computer (the 'universal Turing machine'), and then to prove that no program for it could realize the decision procedure. His subsequent leading role in the war-time British code-breaking project included his participation in the actual design, construction and operation of several electronic machines of this kind, and thus he must surely be reckoned as one of the major pioneers in their early development.

Logic on the Computer

Apart from this enormously important cryptographic intelligence work and its crucial role in ballistic computations and nuclear physics simulations, the war-time development of electronic digital computing technology had relatively little impact on the outcome of the war itself. After the war, however, its rapid commercial and scientific exploitation quickly launched the current computer era. By 1950, much-improved versions of some of the war-time general-purpose electronic digital computers became available to industry, universities and research centers. By the mid-1950s it had become apparent to many logicians that, at last, sufficient computing power was now at hand to support computational experiments with the predicate calculus proof procedure. It was just a matter of programming it and trying it on some real examples. Several papers describing projects for doing this were given at a Summer School in Logic held at Cornell University in 1957. One of these [37, pp. 74-76] was by Abraham Robinson, the logician who later surprised the mathematical establishment by applying logical 'nonstandard' model theory to legitimize infinitesimals in the foundations of the integral and differential calculus. Other published accounts of results in the first wave of such experiments were [12, 16, 35, 49]. There had also been, in 1956, a strange experiment by [33] which attracted a lot of attention at the time. It has since been cited as a milestone of the early stages of artificial intelligence research. The authors designed their 'Logic Theory Machine' program to prove sentences of the propositional calculus (not the full predicate calculus), a very simple system of logic for which there had long existed well-known decision procedures.

They nevertheless explicitly rejected the idea of using any algorithmic proof procedure, aiming, instead, at making their program behave 'heuristically' as it cast about for a proof. This experiment was intended to model human problem-solving behavior, taking propositional calculus theorem proving in particular as the problem-solving task, rather than to program the computer to prove propositional calculus theorems efficiently.

No sooner were the first computational proof experiments carried out than the severe combinatorial complexity of the full predicate calculus proof procedure came vividly into view. The procedure is, after all, essentially no more than a systematic exhaustive search through an exponentially expanding space of possible proofs. The early researchers were brought face-to-face with the inexorable 'combinatorial explosion' caused by conducting the search on nontrivial examples. These first predicate calculus proof-seeking programs may have inspired, and perhaps even deserved, the disparaging label 'British Museum method' (see [33]), which was destined to be pinned on any merely-generate-and-test procedure which blindly and undiscriminatingly tries all possible combinations in the hope that a winning one, or even an acceptable one, may eventually turn up.

The intrinsic exponential complexity of the predicate calculus proof procedure is to be expected, because of the nature of the search space. There is evidently little one can do to avoid its consequences. The only reasonable course is to look for ways to strengthen the proof procedure as much as possible, by simplifying the forms of expressions in the predicate calculus and by packing more power into its inference rules. This might at least make the search process more efficient, and permit it to find proofs of more interesting examples before it runs into the exponential barrier. Some limited progress has been made in this direction by reorganizing the predicate calculus in various 'machine-oriented' versions.

Evolution of Machine-Oriented Logic

The earliest versions of the predicate calculus proof procedure were all based on human-oriented reasoning patterns--on types of inference which reflected formally the kind of 'small' reasoning steps which humans find comfortable. A well-known example of this is the modus ponens inference-scheme. In using modus ponens, one infers a conclusion B from two premisses of the form A and (if A then B). Such human-oriented inference-schemes are adapted to the limitations--and also to the strengths--of the human information-processing system. They therefore tend to involve simple, local, small and perceptually immediate features of the state of the reasoning. In particular, they do not demand the handling of more than one such bundle of features at a time--they are designed for serial processing on a single processor. The massive parallelism in human brain processes is well below the level of conscious awareness, and it is of the essence of deductive reasoning that the human reasoner be fully conscious of the 'epistemological flow' of the proof and of its stepwise assembling of his or her assent and understanding. In logics based on such fine-grained serial inference patterns, proofs of interesting propositions will tend to be large assemblies of small steps.
The search space for the corresponding proof procedures will accordingly tend to be dense and overcrowded with redundant alternatives at too low a level of detail.

By about 1960 it had become clear that it might be necessary to abandon this natural predilection for human-oriented inference patterns, and to look for new logics based on larger-scale, more complex, less local, and perhaps even highly parallel, machine-oriented types of reasoning. In contemplating these possible new logics it was hoped their proofs would be shorter and (at the top level) simpler than those in the human-oriented logics. Of course, in the interior of any individual inference, there would presumably be a large amount of hidden structural detail. The global search space would be sparser, since it would need to contain only the top-level structure of proofs. The proof procedure itself would not need to be concerned with the copious details of the conceptual microstructure packaged within the inference steps.

This was the motivation behind the introduction, in the early 1960s, of a new logic, based on two highly machine-oriented reasoning patterns: unification, and the various kinds of resolution which incorporate it.

Clausal Logic

The 1960 paper [12] had already drawn attention to the simplified clausal predicate calculus in which every sentence is a clause. (A clause is a sentence with a very simple form: it is just a--possibly empty--disjunction of literals. A literal, in turn, is just the application of an unnegated or negated predicate to a suitable list of terms as arguments.) In the same year, Dag Prawitz [34] had also forcefully advocated the use of the process which we now call unification. Along with Stig Kanger (see [34, footnote 11], p. 170) he apparently had independently rediscovered unification in the late 1950s. He apparently did not realize that it had already been introduced by Herbrand in his thesis of 1930 (albeit only in a brief and rather obscure passage). These were major steps in the right direction. Neither the Davis-Putnam nor the Prawitz improved proof procedures, however, went quite far enough in discarding human-oriented inference patterns, and their algorithms still became bogged down too early in their searches to be useful.
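The process is easy to convey in modern code: unification directly computes a most general substitution making two expressions identical, instead of enumerating candidate instantiations and testing them. The following Python routine is a minimal illustrative sketch only, not the formulation of Herbrand, Prawitz, or any published algorithm; it assumes variables are uppercase-initial strings, constants are lowercase strings, and compound terms are tuples headed by their function or predicate symbol, and it omits the occurs check that a fully rigorous unifier requires.

    # Terms: variables are uppercase-initial strings ("X"), constants are
    # lowercase strings ("a"), and compound terms are tuples headed by a
    # function or predicate symbol, so ("f", "a") stands for f(a).

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        """Chase variable bindings until an unbound variable or a
        non-variable term is reached."""
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(a, b, subst=None):
        """Return a most general substitution making a and b identical,
        or None if there is none. (Occurs check omitted for brevity.)"""
        subst = {} if subst is None else subst
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if is_var(a):
            return {**subst, a: b}
        if is_var(b):
            return {**subst, b: a}
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and len(a) == len(b) and a[0] == b[0]):
            for x, y in zip(a[1:], b[1:]):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None

    # Unifying p(X, f(a)) with p(b, f(Y)) computes {X: b, Y: a} directly,
    # with no enumeration of candidate instantiations at all.
    print(unify(("p", "X", ("f", "a")), ("p", "b", ("f", "Y"))))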

This was the situation when I first became interested in mechanical theorem-proving in late 1960. From 1961 to 1964 I worked each summer as a visiting researcher at the Argonne National Laboratory's Applied Mathematics Division, which was then directed by William F. Miller. It was Bill Miller who in early 1961 first introduced me to the engineering side of predicate calculus theorem-proving by pointing out to me the Davis and Putnam paper. He invited me to spend the summer of 1961 at Argonne as a visiting researcher in his division, with the suggested assignment of programming the Davis-Putnam proof procedure on the IBM 704 and more generally of pursuing mechanical theorem-proving research.

Reading the Davis-Putnam paper [12] in early 1961 really changed my life. Although Hilary Putnam had been one of my advisers when I was working on my doctoral thesis in philosophy at Princeton (1953-1956), my research had dealt with David Hume's theory of causation and had little or nothing to do with modern logic, to which I paid scant attention at that time. I did not find out about Putnam's interest in the predicate calculus proof procedure until I read this paper, four years after I had left Princeton. It is a very important paper. They showed how, by relatively simple but ingenious algorithmic reorganization, the original naive predicate calculus proof procedure of Herbrand could be vastly improved.

In a 1963 paper I wrote about my 'combinatorial explosion' experience with programming and running the Davis-Putnam procedure in Fortran for the IBM 704 at Argonne [38, pp. 372-383]. Meanwhile, during my second research summer there (1962), an Argonne physicist who was interested in and very knowledgeable about logic, William Davidon, had drawn my attention to the important 1960 paper by Dag Prawitz [34], in which I first encountered the idea of unification. After struggling with the woeful combinatorial inefficiency of the instantiation-based procedure used by Davis and Putnam (and by everybody else at that time; it goes back to Herbrand's so-called 'Property B Method' developed in [19]), I was immediately very impressed by the significance of this idea. It is essentially the idea underlying Herbrand's 'Property A Method' developed in the same thesis. Here again was still another paper showing that even vaster improvements than those flowing from the Davis and Putnam paper were possible over the 'naive' predicate calculus proof procedure. Instead of generating-and-testing successive instantiations (substitutions), hoping eventually to hit upon the right ones, Prawitz was describing a way of directly computing them. This was a breakthrough. It offered an elegant and powerful alternative to the blind, hopeless, enumerative 'British Museum' methodology, and pointed the way to a new methodology featuring deliberate, goal-directed constructions.

The entire academic year of 1962-1963 was consumed in trying to figure out the best way to exploit this Herbrand-Kanger-Prawitz process effectively, so as to eliminate the generation of irrelevant instances in the proof search.
Finally, in the early summer of 1963, I managed to devise a clausal logic with a single inference scheme, which was a combination of the Herbrand-Kanger-Prawitz process (for which I proposed the name unification) with Gentzen's 'cut' rule. This combination produced a rather inhuman but very effective new inference pattern, for which I proposed the name resolution. Resolution permits the taking of arbitrarily large inference steps which in general require very considerable computational effort to carry out (and in some cases even to understand and to verify). Most of the effort is concentrated on the unification involved. Preliminary investigations indicated that resolution-based theorem-provers would be significantly better than any which had been built previously.

I wrote about these ideas at Argonne at the end of the summer of 1963, and sent the paper to the Journal of the ACM (JACM). It then apparently remained unread on some referee's desk for more than a year. It required some urging by the then editor of the Journal, Richard Hamming of Bell Laboratories, before the referee finally responded. The outcome was that the paper, [39], was published only in January 1965. Meanwhile the manuscript had been circulating. In 1964 at Argonne, Larry Wos, George Robinson and Dan Carson programmed a resolution-based theorem prover for the clausal predicate calculus, adding to the basic process search strategies (called unit preference and set of support) of their own devising, which further speeded the resolution proof process. Because of the refereeing delay, their paper [52] reached print before mine and could only cite it as 'to be published'.

Throughout the winter of 1963-64, while waiting for news of the paper's acceptance or rejection by JACM, I concentrated on trying to push the ideas further, and looked for ways of extending the resolution principle to accommodate even larger inference steps than those sanctioned by the original binary resolution pattern. One of these turned out to be particularly attractive. I gave it the name hyper-resolution, meaning to suggest that it was an inference principle on a level above resolution. One hyperresolution was essentially a new integrated whole, a condensation of a deduction consisting of several resolutions. The paper describing hyperresolution was published at about the same time as the main resolution paper, and was later reprinted in [40, pp. 416-423].
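A single binary resolution step is itself compact enough to sketch. The Python fragment below is an illustration of the general pattern only, not the procedure of [39]: clauses are finite sets of signed literals, the term representation is the same as in the unification sketch above (the helpers are repeated so the fragment stands alone), and the renaming-apart of variables that any real prover must first perform is omitted.

    # A literal is (sign, atom) with sign "+" or "-"; a clause is a
    # frozenset of literals. Variables are uppercase-initial strings.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, s):
        while is_var(t) and t in s:
            t = s[t]
        return t

    def unify(a, b, s):
        a, b = walk(a, s), walk(b, s)
        if a == b:
            return s
        if is_var(a):
            return {**s, a: b}
        if is_var(b):
            return {**s, b: a}
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and len(a) == len(b) and a[0] == b[0]):
            for x, y in zip(a[1:], b[1:]):
                s = unify(x, y, s)
                if s is None:
                    return None
            return s
        return None

    def substitute(t, s):
        """Apply substitution s throughout term t."""
        t = walk(t, s)
        if isinstance(t, tuple):
            return (t[0],) + tuple(substitute(x, s) for x in t[1:])
        return t

    def resolvents(c1, c2):
        """Yield each clause obtainable by one resolution step: unify an
        atom occurring positively in one clause and negatively in the
        other, drop the resolved pair, and merge what remains."""
        for lit1 in c1:
            for lit2 in c2:
                if lit1[0] != lit2[0]:          # opposite signs
                    s = unify(lit1[1], lit2[1], {})
                    if s is not None:
                        rest = (c1 - {lit1}) | (c2 - {lit2})
                        yield frozenset(
                            (sign, substitute(atom, s)) for sign, atom in rest)

    # Resolving  p(X) v q(X)  against  ¬q(f(a))  computes {X: f(a)} by
    # unification and yields the single resolvent  p(f(a)).
    c1 = frozenset({("+", ("p", "X")), ("+", ("q", "X"))})
    c2 = frozenset({("-", ("q", ("f", "a")))})
    print(list(resolvents(c1, c2)))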

It had been my guiding idea in this research that bigger and (computationally) better inference patterns might be obtained by somehow packaging entire deductions at one level into single inferences at the next higher level. As I cast about for such patterns I came across a quite restricted form of resolution--I called it 'P1-resolution'--which I found I could prove was just as powerful as the original unrestricted binary resolution. The restriction in P1-resolution is that one of the two premises must be an unconditional clause, that is, a clause in which there are no negative literals (or what amounts to the same thing, a sentence of the form 'if antecedent then consequent' whose antecedent part is empty). From this restriction, it follows that every P1-deduction (that is, a deduction in which every inference is a P1-resolution) can always be decomposed into a combination of what I called 'P2-deductions'. A P2-deduction is a P1-deduction which satisfies the extra restriction that its conclusion, and all of its premises except one, are unconditional clauses. Thus, exactly one conditional clause is involved as an 'external' clause in a P2-deduction. By ignoring the internal inferences of a P2-deduction tree and deeming its conclusion to have been directly obtained from its premises, we obtain a single large inference--a hyperresolution--which is really a multi-inference deduction whose interior details are hidden from view inside a sort of logical black box.
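A small made-up example may make the packaging concrete. Take the conditional clause 'if P(x) and Q(x) then R(x)', which in clausal form is the disjunction ¬P(x) ∨ ¬Q(x) ∨ R(x), together with the unconditional clauses P(a) and Q(a). A P2-deduction reaches R(a) in two P1-resolutions: resolving the conditional clause against P(a) yields ¬Q(a) ∨ R(a), and resolving that against Q(a) yields R(a). The corresponding hyperresolution passes from the three premises directly to R(a) in one step, hiding the intermediate clause ¬Q(a) ∨ R(a) inside the black box.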
Computational Logic: The Resolution Boom

After the publication of the paper in 1965, there began a sustained drive to program resolution-based proof procedures as efficiently as possible and to see what they could do. In Edinburgh, Bernard Meltzer's Computational Logic group and Donald Michie's Machine Intelligence group had by 1967 attracted many young researchers who have since become well known and who at that time worked on various theoretical and practical resolution issues: Robert Kowalski, Patrick Hayes, the late Donald Kuehner, Gordon Plotkin, Robert Boyer and J Moore, David H.D. Warren, Maarten van Emden, Robert Hill. Bernard Meltzer had visited Rice University for two months in early 1965 in order to study resolution intensively, and on his return to Edinburgh he set up one of two seminal research groups which were to foster the birth of logic programming (the other being Alain Colmerauer's group in Marseille). Thus began my long and fruitful association with Edinburgh. By 1970 the resolution boom was in full swing. I recall that in that year Keith Clark and Jack Minker were among those attending a NATO Summer School organized by Bernard Meltzer and Nicolas Findler at Menaggio on Lake Como. There we preached the new 'resolution movement' for two weeks, and Clark and Minker decided to join it, soon becoming two notable contributors.

Meanwhile, however, in the U.S., the reaction was mostly muted, except for isolated pockets of enthusiasm at Argonne, Stanford, Rice and a few other places. Bill Miller had left Argonne to go to Stanford at the end of 1964, and I accepted his invitation to spend the summers of 1965 and 1966 as a visiting researcher in his computation group at the Stanford Linear Accelerator Center. It was at Stanford in the summer of 1965 that I met John McCarthy for the first time. I was astonished to learn that after he had recently read the resolution paper he had written and tested a complete resolution theorem-proving program in Lisp in a few hours. I was still programming in Fortran, and I was used to taking days and even weeks for such a task. In 1965, however, one could use Lisp easily in only a very few places, and neither Rice University nor Argonne National Laboratory were then among them.

Bertram Raphael, Nils Nilsson, and Cordell Green, at Stanford Research Institute, were building deductive databases for the 'STRIPS' planning software for their robot, and they were adopting resolution for this (see [36]). At New York University, Martin Davis and Donald Loveland were developing Davis's very closely related unification-based 'linked conjunct' method [10, pp. 315-330] in ways which eventually led Loveland independently to his Model Elimination system [28], a linear reasoning method entirely similar to the linear resolution systems developed by the Edinburgh group, and by David Luckham at Stanford [30]. Back at Argonne, Larry Wos and George Robinson had formed a very strong 'automated deduction' group. They broadened the applicability of unification by augmenting resolution with further inference rules specialized for equality reasoning (demodulation, paramodulation), which further improved the efficiency of proof searches [43]. Today, the Argonne group is still flourishing and remains a major center of excellence in automated deduction.

In 1969 there began a series of noisy but interesting (and, it later turned out, fruitful) academic skirmishes between the then somewhat meagerly funded resolution community and MIT's Artificial Intelligence Laboratory led by Marvin Minsky and Seymour Papert. The MIT AI Laboratory at that time was (it seemed to us) comfortably, if not lavishly, supported by the Pentagon's Advanced Research Projects Agency (then ARPA, now DARPA). The issue was whether it was better to represent knowledge computationally, for AI purposes, in a declarative or in a procedural form. If it was the former (as had been originally proposed in 1959 by John McCarthy [31]), then it would be the predicate calculus, and efficient proof procedures for it, that would play a central role in AI research. If it was the latter (e.g., see [51]), then a computational realization of knowledge would have to be a system of procedures 'heterarchically' organized so that each could be invoked by any of the others, and indeed by itself. These procedures would be 'agents' that would both cooperate and compete in collectively accomplishing the various tasks comprising intelligent behavior and thought.

Minsky's book, The Society of Mind [32], elegantly summed up the MIT side of this debate in eschewing polemics to outline a grand unified theory of the structure and function of the mind in the tradition of Freud and Piaget. The logic side of the debate has been definitively treated in [25], which eloquently sets forth the role of logic in the computational organization of knowledge and banishes the procedural-declarative dichotomy by the insight that Horn clauses (that is, clauses containing at most one unnegated literal) can be interpreted as procedures, and thus can be activated and executed by a suitably designed processor. It is this insight that underlies what we now call logic programming.

The never-to-be-implemented but influential 'Planner' system by Carl Hewitt--his first paper on Planner, in [20]--epitomized the MIT procedural approach, while the QA ('Question-Answering') series of programs by [31] carried out McCarthy's logical 'Advice Taker' approach to AI and convinced many skeptics that it would really work. The work by [18] should now be seen and appreciated as the earliest demonstration...

The procedural-logical fight was really ended, in a delightfully unexpected way, by Kowalski's inspired procedural interpretation of the behavior of a Horn-clause linear resolution proof finder [24]. He pointed out that in view of the behavior of Horn clause linear-resolution proof-seeking processes, a collection of Horn clauses could be regarded as knowledge organized both declaratively and procedurally. It suddenly was hard to see what all the fuss had been about. Kowalski was led to this reconciliatory principle by the superb Boyer-Moore implementation of a 'structure-sharing' resolution theorem prover at Edinburgh [5, pp. 101-116], which suddenly completed the transformation of theorem-proving from generate-and-test searching to goal-directed stack-based computation. When restricted to Horn clauses, the Boyer-Moore approach becomes the obvious precursor of the first implementations of Prolog. David H.D. Warren's enormously influential later software and hardware refinements and advances clearly descend directly from the Boyer-Moore methodology [50].

Only the interaction of the Edinburgh group's ideas with the work...
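Kowalski's dual reading of Horn clauses is compact enough to demonstrate. The Python sketch below is a deliberately simplified illustration (ground, variable-free clauses only, with invented rule names), not a reconstruction of any historical system: each Horn clause is a pair (head, body), read declaratively as 'head holds if everything in body holds' and procedurally as a procedure named head whose execution consists of the calls in body.

    # Each Horn clause is (head, body): declaratively, head holds if all
    # the goals in body hold; procedurally, head names a procedure whose
    # body is a sequence of calls. Facts are clauses with an empty body.
    # The rule names are invented for the example.

    RULES = [
        ("light_on", ["switch_up", "power_ok"]),
        ("power_ok", ["breaker_closed"]),
        ("switch_up", []),          # a fact
        ("breaker_closed", []),     # a fact
    ]

    def solve(goal):
        """'Execute' a goal: try each clause whose head matches it and
        recursively run the clause's body, depth-first -- in effect a
        ground version of a Horn-clause linear-resolution proof search."""
        return any(head == goal and all(solve(g) for g in body)
                   for head, body in RULES)

    print(solve("light_on"))        # True: the proof is the computation

Add unification for clauses with variables, plus backtracking over the alternative clauses for a goal, and this little search becomes, in essence, the execution model that the first Prolog implementations mechanized.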
