Introduction To Algorithms

Transcription

Instructor's Manual
by Thomas H. Cormen

to Accompany

Introduction to Algorithms
Third Edition
by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein

The MIT Press
Cambridge, Massachusetts   London, England

Instructor's Manual to Accompany Introduction to Algorithms, Third Edition
by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein

Published by the MIT Press. Copyright © 2009 by The Massachusetts Institute of Technology. All rights reserved.

No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written consent of The MIT Press, including, but not limited to, network or other electronic storage or transmission, or broadcast for distance learning.

Contents

Revision History  R-1
Preface  P-1
Chapter 2: Getting Started
    Lecture Notes  2-1
    Solutions  2-17
Chapter 3: Growth of Functions
    Lecture Notes  3-1
    Solutions  3-7
Chapter 4: Divide-and-Conquer
    Lecture Notes  4-1
    Solutions  4-17
Chapter 5: Probabilistic Analysis and Randomized Algorithms
    Lecture Notes  5-1
    Solutions  5-9
Chapter 6: Heapsort
    Lecture Notes  6-1
    Solutions  6-10
Chapter 7: Quicksort
    Lecture Notes  7-1
    Solutions  7-9
Chapter 8: Sorting in Linear Time
    Lecture Notes  8-1
    Solutions  8-10
Chapter 9: Medians and Order Statistics
    Lecture Notes  9-1
    Solutions  9-10
Chapter 11: Hash Tables
    Lecture Notes  11-1
    Solutions  11-16
Chapter 12: Binary Search Trees
    Lecture Notes  12-1
    Solutions  12-15
Chapter 13: Red-Black Trees
    Lecture Notes  13-1
    Solutions  13-13
Chapter 14: Augmenting Data Structures
    Lecture Notes  14-1
    Solutions  14-9

Chapter 15: Dynamic Programming
    Lecture Notes  15-1
    Solutions  15-21
Chapter 16: Greedy Algorithms
    Lecture Notes  16-1
    Solutions  16-9
Chapter 17: Amortized Analysis
    Lecture Notes  17-1
    Solutions  17-14
Chapter 21: Data Structures for Disjoint Sets
    Lecture Notes  21-1
    Solutions  21-6
Chapter 22: Elementary Graph Algorithms
    Lecture Notes  22-1
    Solutions  22-13
Chapter 23: Minimum Spanning Trees
    Lecture Notes  23-1
    Solutions  23-8
Chapter 24: Single-Source Shortest Paths
    Lecture Notes  24-1
    Solutions  24-13
Chapter 25: All-Pairs Shortest Paths
    Lecture Notes  25-1
    Solutions  25-9
Chapter 26: Maximum Flow
    Lecture Notes  26-1
    Solutions  26-12
Chapter 27: Multithreaded Algorithms
    Solutions  27-1
Index  I-1

Revision History

Revisions are listed by date rather than being numbered.

22 February 2014. Corrected an error in the solution to Exercise 4.3-7, courtesy of Dan Suthers. Corrected an error in the solution to Exercise 23.1-6, courtesy of Rachel Ginzberg. Updated the Preface.

3 January 2012. Added solutions to Chapter 27. Added an alternative solution to Exercise 2.3-7, courtesy of Viktor Korsun and Crystal Peng. Corrected a minor error in the Chapter 15 notes in the recurrence for T(n) for the recursive CUT-ROD procedure. Updated the solution to Problem 24-3. Corrected an error in the proof about the Edmonds-Karp algorithm performing O(VE) flow augmentations. The bodies of all pseudocode procedures are indented slightly.

28 January 2011. Corrected an error in the solution to Problem 2-4(c), and removed unnecessary code in the solution to Problem 2-4(d). Added a missing parameter to recursive calls of REC-MAT-MULT on page 4-7. Changed the pseudocode for HEAP-EXTRACT-MAX on page 6-8 and MAX-HEAP-INSERT on page 6-9 to assume that the parameter n is passed by reference.

7 May 2010. Changed the solutions to Exercises 22.2-3 and 22.3-4 because these exercises changed.

17 February 2010. Corrected a minor error in the solution to Exercise 4.3-7.

16 December 2009. Added an alternative solution to Exercise 6.3-3, courtesy of Eyal Mashiach.

7 December 2009. Added solutions to Exercises 16.3-1, 26.1-1, 26.1-3, 26.1-7, 26.2-1, 26.2-8, 26.2-9, 26.2-12, 26.2-13, and 26.4-1 and to Problem 26-3. Corrected spelling in the solution to Exercise 16.2-4. Several corrections to the solution to Exercise 16.4-3, courtesy of Zhixiang Zhu. Minor changes to the solutions to Exercises 24.3-3 and 24.4-7 and Problem 24-1.

7 August 2009. Initial release.

Preface

This document is an instructor's manual to accompany Introduction to Algorithms, Third Edition, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. It is intended for use in a course on algorithms. You might also find some of the material herein to be useful for a CS 2-style course in data structures.

Unlike the instructor's manual for the first edition of the text—which was organized around the undergraduate algorithms course taught by Charles Leiserson at MIT in Spring 1991—but like the instructor's manual for the second edition, we have chosen to organize the manual for the third edition according to chapters of the text. That is, for most chapters we have provided a set of lecture notes and a set of exercise and problem solutions pertaining to the chapter. This organization allows you to decide how to best use the material in the manual in your own course.

We have not included lecture notes and solutions for every chapter, nor have we included solutions for every exercise and problem within the chapters that we have selected. We felt that Chapter 1 is too nontechnical to include here, and Chapter 10 consists of background material that often falls outside algorithms and data-structures courses. We have also omitted the chapters that are not covered in the courses that we teach: Chapters 18–20 and 27–35 (though we do include some solutions for Chapter 27), as well as Appendices A–D; future editions of this manual may include some of these chapters. There are two reasons that we have not included solutions to all exercises and problems in the selected chapters. First, writing up all these solutions would take a long time, and we felt it more important to release this manual in as timely a fashion as possible. Second, if we were to include all solutions, this manual would be much longer than the text itself.

We have numbered the pages in this manual using the format CC-PP, where CC is a chapter number of the text and PP is the page number within that chapter's lecture notes and solutions. The PP numbers restart from 1 at the beginning of each chapter's lecture notes. We chose this form of page numbering so that if we add or change solutions to exercises and problems, the only pages whose numbering is affected are those for the solutions for that chapter. Moreover, if we add material for currently uncovered chapters, the numbers of the existing pages will remain unchanged.

The lecture notes

The lecture notes are based on three sources:

Some are from the first-edition manual; they correspond to Charles Leiserson's lectures in MIT's undergraduate algorithms course, 6.046.
Some are from Tom Cormen's lectures in Dartmouth College's undergraduate algorithms course, CS 25.
Some are written just for this manual.

You will find that the lecture notes are more informal than the text, as is appropriate for a lecture situation. In some places, we have simplified the material for lecture presentation or even omitted certain considerations. Some sections of the text—usually starred—are omitted from the lecture notes. (We have included lecture notes for one starred section: 12.4, on randomly built binary search trees, which we covered in an optional CS 25 lecture.)

In several places in the lecture notes, we have included "asides" to the instructor. The asides are typeset in a slanted font and are enclosed in square brackets. [Here is an aside.] Some of the asides suggest leaving certain material on the board, since you will be coming back to it later. If you are projecting a presentation rather than writing on a blackboard or whiteboard, you might want to replicate slides containing this material so that you can easily reprise them later in the lecture.

We have chosen not to indicate how long it takes to cover material, as the time necessary to cover a topic depends on the instructor, the students, the class schedule, and other variables.

There are two differences in how we write pseudocode in the lecture notes and the text:

Lines are not numbered in the lecture notes. We find them inconvenient to number when writing pseudocode on the board.
We avoid using the length attribute of an array. Instead, we pass the array length as a parameter to the procedure. This change makes the pseudocode more concise, as well as matching better with the description of what it does.

We have also minimized the use of shading in figures within lecture notes, since drawing a figure with shading on a blackboard or whiteboard is difficult.

The solutions

The solutions are based on the same sources as the lecture notes. They are written a bit more formally than the lecture notes, though a bit less formally than the text. We do not number lines of pseudocode, but we do use the length attribute (on the assumption that you will want your students to write pseudocode as it appears in the text).

As of the third edition, we have publicly posted a few solutions on the book's website. These solutions also appear in this manual, with the notation "This solution is also posted publicly" after the exercise or problem number. The set of publicly posted solutions might increase over time, and so we encourage you to check whether a particular solution is posted on the website before you assign an exercise or problem to your students.

The index lists all the exercises and problems for which this manual provides solutions, along with the number of the page on which each solution starts.

Asides appear in a handful of places throughout the solutions. Also, we are less reluctant to use shading in figures within solutions, since these figures are more likely to be reproduced than to be drawn on a board.

Source files

For several reasons, we are unable to publish or transmit source files for this manual. We apologize for this inconvenience.

You can use the clrscode3e package for LaTeX 2ε to typeset pseudocode in the same way that we do. You can find this package at http://www.cs.dartmouth.edu/~thc/clrscode/. That site also includes documentation. Make sure to use the clrscode3e package, not the clrscode package; clrscode is for the second edition of the book.

Reporting errors and suggestions

Undoubtedly, instructors will find errors in this manual. Please report errors by sending email to clrs-manual-bugs@mitpress.mit.edu.

If you have a suggestion for an improvement to this manual, please feel free to submit it via email to clrs-manual-suggestions@mitpress.mit.edu.

As usual, if you find an error in the text itself, please verify that it has not already been posted on the errata web page before you submit it. You can use the MIT Press web site for the text, http://mitpress.mit.edu/algorithms/, to locate the errata web page and to submit an error report.

We thank you in advance for your assistance in correcting errors in both this manual and the text.

How we produced this manual

Like the third edition of Introduction to Algorithms, this manual was produced in LaTeX 2ε. We used the Times font with mathematics typeset using the MathTime Pro 2 fonts. As in all three editions of the textbook, we compiled the index using Windex, a C program that we wrote. We drew the illustrations using MacDraw Pro,* with some of the mathematical expressions in illustrations laid in with the psfrag package for LaTeX 2ε. We created the PDF files for this manual on a MacBook Pro running OS 10.5.

*See our plea in the preface for the third edition to Apple, asking that they update MacDraw Pro for OS X.

Acknowledgments

This manual borrows heavily from the manuals for the first two editions. Julie Sussman, P.P.A., wrote the first-edition manual. Julie did such a superb job on the first-edition manual, finding numerous errors in the first-edition text in the process, that we were thrilled to have her serve as technical copyeditor for both the second and third editions of the book. Charles Leiserson also put in large amounts of time working with Julie on the first-edition manual.

The manual for the second edition was written by Tom Cormen, Clara Lee, and Erica Lin. Clara and Erica were undergraduate computer science majors at Dartmouth at the time, and they did a superb job.

The other three Introduction to Algorithms authors—Charles Leiserson, Ron Rivest, and Cliff Stein—provided helpful comments and suggestions for solutions to exercises and problems. Some of the solutions are modifications of those written over the years by teaching assistants for algorithms courses at MIT and Dartmouth. At this point, we do not know which TAs wrote which solutions, and so we simply thank them collectively. Several of the solutions to new exercises and problems in the third edition were written by Sharath Gururaj of Columbia University; we thank Sharath for his fine work. The solutions for Chapter 27 were written by Priya Natarajan.

We also thank the MIT Press and our editors—Ada Brunstein, Jim DeWolf, and Marie Lee—for moral and financial support. Tim Tregubov and Wayne Cripps provided computer support at Dartmouth.

THOMAS H. CORMEN
Hanover, New Hampshire
August 2009

Lecture Notes for Chapter 2: Getting Started

Chapter 2 overview

Goals
Start using frameworks for describing and analyzing algorithms.
Examine two algorithms for sorting: insertion sort and merge sort.
See how to describe algorithms in pseudocode.
Begin using asymptotic notation to express running-time analysis.
Learn the technique of "divide and conquer" in the context of merge sort.

Insertion sort

The sorting problem

Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.

Output: A permutation (reordering) ⟨a'1, a'2, ..., a'n⟩ of the input sequence such that a'1 ≤ a'2 ≤ ... ≤ a'n.

The sequences are typically stored in arrays.

We also refer to the numbers as keys. Along with each key may be additional information, known as satellite data. [You might want to clarify that "satellite data" does not necessarily come from a satellite.]

We will see several ways to solve the sorting problem. Each way will be expressed as an algorithm: a well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

Expressing algorithms

We express algorithms in whatever way is the clearest and most concise. English is sometimes the best way. When issues of control need to be made perfectly clear, we often use pseudocode.

Pseudocode is similar to C, C++, Pascal, and Java. If you know any of these languages, you should be able to understand pseudocode.
Pseudocode is designed for expressing algorithms to humans. Software-engineering issues of data abstraction, modularity, and error handling are often ignored.
We sometimes embed English statements into pseudocode. Therefore, unlike for "real" programming languages, we cannot create a compiler that translates pseudocode to machine code.

Insertion sort

A good algorithm for sorting a small number of elements. It works the way you might sort a hand of playing cards:

Start with an empty left hand and the cards face down on the table.
Then remove one card at a time from the table, and insert it into the correct position in the left hand.
To find the correct position for a card, compare it with each of the cards already in the hand, from right to left.

At all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.

Pseudocode

We use a procedure INSERTION-SORT.

Takes as parameters an array A[1..n] and the length n of the array.
As in Pascal, we use ".." to denote a range within an array.

[We usually use 1-origin indexing, as we do here. There are a few places in later chapters where we use 0-origin indexing instead. If you are translating pseudocode to C, C++, or Java, which use 0-origin indexing, you need to be careful to get the indices right. One option is to adjust all index calculations in the C, C++, or Java code to compensate. An easier option is, when using an array A[1..n], to allocate the array to be one entry longer—A[0..n]—and just don't use the entry at index 0.]

[In the lecture notes, we indicate array lengths by parameters rather than by using the length attribute that is used in the book. That saves us a line of pseudocode each time. The solutions continue to use the length attribute.]

The array A is sorted in place: the numbers are rearranged within the array, with at most a constant number outside the array at any time.

INSERTION-SORT(A, n)                                        cost    times
  for j = 2 to n                                            c1      n
      key = A[j]                                            c2      n - 1
      // Insert A[j] into the sorted sequence A[1..j-1].    0       n - 1
      i = j - 1                                             c4      n - 1
      while i > 0 and A[i] > key                            c5      Σ_{j=2}^{n} t_j
          A[i+1] = A[i]                                     c6      Σ_{j=2}^{n} (t_j - 1)
          i = i - 1                                         c7      Σ_{j=2}^{n} (t_j - 1)
      A[i+1] = key                                          c8      n - 1

[Leave this on the board, but show only the pseudocode for now. We'll put in the "cost" and "times" columns later.]

[Figure: the operation of INSERTION-SORT on a six-element sample array, one row per iteration of the outer loop; the illustration itself does not survive this transcription.]

[Read this figure row by row. Each part shows what happens for a particular iteration with the value of j indicated. j indexes the "current card" being inserted into the hand. Elements to the left of A[j] that are greater than A[j] move one position to the right, and A[j] moves into the evacuated position. The heavy vertical lines separate the part of the array in which an iteration works—A[1..j]—from the part of the array that is unaffected by this iteration—A[j+1..n]. The last part of the figure shows the final sorted array.]

Correctness

We often use a loop invariant to help us understand why an algorithm gives the correct answer. Here's the loop invariant for INSERTION-SORT:

Loop invariant: At the start of each iteration of the "outer" for loop—the loop indexed by j—the subarray A[1..j-1] consists of the elements originally in A[1..j-1] but in sorted order.

To use a loop invariant to prove correctness, we must show three things about it:

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant—usually along with the reason that the loop terminated—gives us a useful property that helps show that the algorithm is correct.
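[The following Python translation is an editorial addition, not part of the manual's pseudocode. It uses 0-origin indexing, so the pseudocode's A[1..j-1] becomes the Python slice A[0:j], and it asserts the sortedness part of the loop invariant at the top of each outer iteration.]

```python
def insertion_sort(A, n):
    """Sort A[0..n-1] in place, mirroring the INSERTION-SORT pseudocode.

    The pseudocode is 1-origin; Python lists are 0-origin, so the outer
    loop runs j = 1..n-1 instead of j = 2..n.
    """
    for j in range(1, n):
        # Loop invariant (sortedness part): A[0..j-1] is sorted.
        assert all(A[k] <= A[k + 1] for k in range(j - 1))
        key = A[j]
        # Insert A[j] into the sorted sequence A[0..j-1].
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3], 6))   # [1, 2, 3, 4, 5, 6]
```

Note that the assertion checks only that the prefix is sorted; the invariant's other claim, that the prefix holds the original elements, is guaranteed because the code only moves elements within A.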

Using loop invariants is like mathematical induction:

To prove that a property holds, you prove a base case and an inductive step.
Showing that the invariant holds before the first iteration is like the base case.
Showing that the invariant holds from iteration to iteration is like the inductive step.

The termination part differs from the usual use of mathematical induction, in which the inductive step is used infinitely. We stop the "induction" when the loop terminates. We can show the three parts in any order.

For insertion sort

Initialization: Just before the first iteration, j = 2. The subarray A[1..j-1] is the single element A[1], which is the element originally in A[1], and it is trivially sorted.

Maintenance: To be precise, we would need to state and prove a loop invariant for the "inner" while loop. Rather than getting bogged down in another loop invariant, we instead note that the body of the inner while loop works by moving A[j-1], A[j-2], A[j-3], and so on, by one position to the right until the proper position for key (which has the value that started out in A[j]) is found. At that point, the value of key is placed into this position.

Termination: The outer for loop ends when j > n, which occurs when j = n + 1. Therefore, j - 1 = n. Plugging n in for j - 1 in the loop invariant, the subarray A[1..n] consists of the elements originally in A[1..n] but in sorted order. In other words, the entire array is sorted.

Pseudocode conventions

[Covering most, but not all, here. See book pages 20–22 for all conventions.]

Indentation indicates block structure. Saves space and writing time.
Looping constructs are like in C, C++, Pascal, and Java. We assume that the loop variable in a for loop is still defined when the loop exits (unlike in Pascal).
// indicates that the remainder of the line is a comment.
Variables are local, unless otherwise specified.
We often use objects, which have attributes. For an attribute attr of object x, we write x.attr. (This notation matches x.attr in Java and is equivalent to x->attr in C++.) Attributes can cascade, so that if x.y is an object and this object has attribute attr, then x.y.attr indicates this object's attribute. That is, x.y.attr is implicitly parenthesized as (x.y).attr.
Objects are treated as references, like in Java. If x and y denote objects, then the assignment y = x makes x and y reference the same object. It does not cause attributes of one object to be copied to another.
Parameters are passed by value, as in Java and C (and the default mechanism in Pascal and C++). When an object is passed by value, it is actually a reference (or pointer) that is passed; changes to the reference itself are not seen by the caller, but changes to the object's attributes are.

The boolean operators "and" and "or" are short-circuiting: if after evaluating the left-hand operand, we know the result of the expression, then we don't evaluate the right-hand operand. (If x is FALSE in "x and y" then we don't evaluate y. If x is TRUE in "x or y" then we don't evaluate y.)

Analyzing algorithms

We want to predict the resources that the algorithm requires. Usually, running time. In order to predict resource requirements, we need a computational model.

Random-access machine (RAM) model

Instructions are executed one after another. No concurrent operations.
It's too tedious to define each of the instructions and their associated time costs. Instead, we recognize that we'll use instructions commonly found in real computers:
    Arithmetic: add, subtract, multiply, divide, remainder, floor, ceiling. Also, shift left/shift right (good for multiplying/dividing by 2^k).
    Data movement: load, store, copy.
    Control: conditional/unconditional branch, subroutine call and return.
Each of these instructions takes a constant amount of time.

The RAM model uses integer and floating-point types. We don't worry about precision, although it is crucial in certain numerical applications.

There is a limit on the word size: when working with inputs of size n, assume that integers are represented by c lg n bits for some constant c ≥ 1. (lg n is a very frequently used shorthand for log2 n.)
    c ≥ 1 ⇒ we can hold the value of n ⇒ we can index the individual elements.
    c is a constant ⇒ the word size cannot grow arbitrarily.

How do we analyze an algorithm's running time?

The time taken by an algorithm depends on the input.

Sorting 1000 numbers takes longer than sorting 3 numbers.
A given sorting algorithm may even take differing amounts of time on two inputs of the same size.
For example, we'll see that insertion sort takes less time to sort n elements when they are already sorted than when they are in reverse sorted order.
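[An editorial aside: Python's and/or operators short-circuit the same way, so the behavior described above can be demonstrated directly. The boom helper below is a made-up illustration, not from the manual.]

```python
def boom():
    # If this ever runs, short-circuiting failed.
    raise RuntimeError("right-hand operand was evaluated")

x = False
result = x and boom()    # x is FALSE, so boom() is never evaluated
assert result is False

y = True
result2 = y or boom()    # y is TRUE, so boom() is never evaluated
assert result2 is True
```

This is exactly why a guard such as "i > 0 and A[i] > key" in the INSERTION-SORT while test is safe: once i reaches 0, the array access on the right is never attempted.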

Input size

Depends on the problem being studied.

Usually, the number of items in the input. Like the size n of the array being sorted.
But could be something else. If multiplying two integers, could be the total number of bits in the two integers.
Could be described by more than one number. For example, graph-algorithm running times are usually expressed in terms of the number of vertices and the number of edges in the input graph.

Running time

On a particular input, it is the number of primitive operations (steps) executed.

Want to define steps to be machine-independent.
Figure that each line of pseudocode requires a constant amount of time.
One line may take a different amount of time than another, but each execution of line i takes the same amount of time ci.
This is assuming that the line consists only of primitive operations.
    If the line is a subroutine call, then the actual call takes constant time, but the execution of the subroutine being called might not.
    If the line specifies operations other than primitive ones, then it might take more than constant time. Example: "sort the points by x-coordinate."

Analysis of insertion sort

[Now add statement costs and number of times executed to the INSERTION-SORT pseudocode.]

Assume that the ith line takes time ci, which is a constant. (Since the third line is a comment, it takes no time.)
For j = 2, 3, ..., n, let tj be the number of times that the while-loop test is executed for that value of j.
Note that when a for or while loop exits in the usual way—due to the test in the loop header—the test is executed one time more than the loop body.

The running time of the algorithm is

    Σ over all statements (cost of statement · number of times statement is executed).

Let T(n) = running time of INSERTION-SORT. Then

    T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·Σ_{j=2}^{n} tj
           + c6·Σ_{j=2}^{n} (tj - 1) + c7·Σ_{j=2}^{n} (tj - 1) + c8·(n-1).

The running time depends on the values of tj. These vary according to the input.
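[The tj values can be observed empirically. The Python instrumentation below is an editorial addition, not part of the manual; it counts executions of the while-loop test for each outer-loop iteration.]

```python
def insertion_sort_tj(A):
    """Insertion sort that also records t_j: the number of times the
    while-loop test executes for each outer-loop index."""
    t = []
    for j in range(1, len(A)):       # 0-origin; pseudocode j runs 2..n
        key = A[j]
        i = j - 1
        tests = 1                    # the test executes at least once
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            tests += 1               # one more test after each body execution
        A[i + 1] = key
        t.append(tests)
    return A, t

# Already sorted: every t_j is 1.  Reverse sorted: t_j = j (pseudocode j).
print(insertion_sort_tj([1, 2, 3, 4, 5])[1])   # [1, 1, 1, 1]
print(insertion_sort_tj([5, 4, 3, 2, 1])[1])   # [2, 3, 4, 5]
```

The two printed lists preview the best and worst cases analyzed next: on sorted input every tj is 1, and on reverse-sorted input tj = j.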

Best case

The array is already sorted.

Always find that A[i] ≤ key upon the first time the while-loop test is run (when i = j - 1).
All tj are 1.
Running time is

    T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·(n-1) + c8·(n-1)
         = (c1 + c2 + c4 + c5 + c8)·n - (c2 + c4 + c5 + c8).

Can express T(n) as an + b for constants a and b (that depend on the statement costs ci) ⇒ T(n) is a linear function of n.

Worst case

The array is in reverse sorted order.

Always find that A[i] > key in the while-loop test.
Have to compare key with all elements to the left of the jth position ⇒ compare with j - 1 elements.
Since the while loop exits because i reaches 0, there's one additional test after the j - 1 tests ⇒ tj = j.
Then Σ_{j=2}^{n} tj = Σ_{j=2}^{n} j and Σ_{j=2}^{n} (tj - 1) = Σ_{j=2}^{n} (j - 1).
Σ_{j=1}^{n} j is known as an arithmetic series, and equation (A.1) shows that it equals n(n+1)/2.
Since Σ_{j=2}^{n} j = (Σ_{j=1}^{n} j) - 1, it equals n(n+1)/2 - 1.

[The parentheses around the summation are not strictly necessary. They are there for clarity, but it might be a good idea to remind the students that the meaning of the expression would be the same even without the parentheses.]

Letting k = j - 1, we see that Σ_{j=2}^{n} (j - 1) = Σ_{k=1}^{n-1} k = n(n-1)/2.

Running time is

    T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·(n(n+1)/2 - 1)
           + c6·(n(n-1)/2) + c7·(n(n-1)/2) + c8·(n-1)
         = (c5/2 + c6/2 + c7/2)·n² + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)·n
           - (c2 + c4 + c5 + c8).

Can express T(n) as an² + bn + c for constants a, b, c (that again depend on statement costs) ⇒ T(n) is a quadratic function of n.
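[As a sanity check on the algebra, the two summation identities and the collected quadratic can be verified numerically. This check is an editorial addition; the cost values below are arbitrary made-up numbers chosen only to exercise the formula.]

```python
n = 50

# Sum_{j=2}^{n} j = n(n+1)/2 - 1, and Sum_{j=2}^{n} (j-1) = n(n-1)/2.
assert sum(range(2, n + 1)) == n * (n + 1) // 2 - 1
assert sum(j - 1 for j in range(2, n + 1)) == n * (n - 1) // 2

# Worst case (t_j = j): term-by-term T(n) equals the collected quadratic.
c1, c2, c4, c5, c6, c7, c8 = 3, 1, 4, 1, 5, 9, 2   # arbitrary costs
T = (c1 * n + c2 * (n - 1) + c4 * (n - 1)
     + c5 * (n * (n + 1) // 2 - 1)
     + (c6 + c7) * (n * (n - 1) // 2)
     + c8 * (n - 1))
a = (c5 + c6 + c7) / 2
b = c1 + c2 + c4 + c5 / 2 - c6 / 2 - c7 / 2 + c8
c = -(c2 + c4 + c5 + c8)
assert T == a * n * n + b * n + c
```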

Worst-case and average-case analysis

We usually concentrate on finding the worst-case running time: the longest running time for any input of size n.

Reasons

The worst-case running time gives a guaranteed upper bound on the running time for any input.
For some algorithms, the worst case occurs often. For example, when searching, the worst case often occurs when the item being searched for is not present, and searches for absent items may be frequent.
Why not analyze the average case? Because it's often about as bad as the worst case.

Example: Suppose that we randomly choose n numbers as the input to insertion sort.

On average, the key in A[j] is less than half the elements in A[1..j-1] and it's greater than the other half.
⇒ On average, the while loop has to look halfway through the sorted subarray A[1..j-1] to decide where to drop key.
⇒ tj ≈ j/2.

Although the average-case running time is approximately half of the worst-case running time, it's still a quadratic function of n.

Order of growth

Another abstraction to ease analysis and focus on the important features. Look only at the leading term of the formula for running time.

Drop lower-order terms.
Ignore the constant coefficient in the leading term.

Example: For insertion sort, we already abstracted away the actual statement costs to conclude that the worst-case running time is an² + bn + c.

Drop lower-order terms ⇒ an².
Ignore constant coefficient ⇒ n².

But we cannot say that the worst-case running time T(n) equals n². It grows like n². But it doesn't equal n². We say that the running time is Θ(n²) to capture the notion that the order of growth is n².

We usually consider one algorithm to be more efficient than another if its worst-case running time has a smaller order of growth.
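[An editorial aside: the Θ(n²) order of growth can also be seen empirically. Doubling n on reverse-sorted input roughly quadruples the number of while-loop tests, since the lower-order terms become negligible. The experiment below is a self-contained sketch, not from the manual.]

```python
def count_tests(A):
    """Total number of while-loop test executions in insertion sort."""
    total = 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        total += 1                      # the test that ends the loop
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            total += 1
        A[i + 1] = key
    return total

# Worst case: reverse-sorted input gives n(n+1)/2 - 1 total tests.
n = 200
small = count_tests(list(range(n, 0, -1)))
big = count_tests(list(range(2 * n, 0, -1)))
print(big / small)   # close to 4, as expected for quadratic growth
```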

Designing algorithms

There are many ways to design algorithms. For example, insertion sort is incremental: having sorted A[1..j-1], place A[j] correctly, so that A[1..j] is sorted.

Divide and conquer

Another common approach.

Divide the problem into a number of subproblems that are smaller instances of the same problem.
Conquer the subproblems by solving them recursively. Base case: If the subproblems are small enough, just solve them by brute force.
Combine the subproblem solutions to give a solution to the original problem.

[It would be a good idea to make sure that your students are comfortable with recursion. If they are not, then they will have a hard time understanding divide and conquer.]

Merge sort

A sorting algorithm based on divide and conquer. Its worst-case running time has a lower order of growth than insertion sort.

Because we are dealing with subproblems, we state each subproblem as sorting a subarray A[p..r]. Initially, p = 1 and r = n, but these values change as we recurse through subproblems.

To sort A[p..r]:

Divide by splitting into two subarrays A[p..q] and A[q+1..r], where q is the halfway point of A[p..r].
Conquer by recursively sorting the two subarrays A[p..q] and A[q+1..r].
Combine by merging the two sorted subarrays A[p..q] and A[q+1..r] to produce a single sorted subarray A[p..r]. To accomplish this step, we'll define a procedure MERGE(A, p, q, r).

The recursion bottoms out when the subarray has just 1 element, so that it's trivially sorted.

MERGE-SORT(A, p, r)
  if p < r                      // check for base case
      q = ⌊(p + r)/2⌋           // divide
      MERGE-SORT(A, p, q)
      MERGE-SORT(A, q + 1, r)
      MERGE(A, p, q, r)
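[A runnable Python version of the above is an editorial addition. The MERGE procedure is defined later in the notes; the merge given here is one straightforward 0-origin implementation, not necessarily the manual's.]

```python
def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] in place."""
    left = A[p:q + 1]
    right = A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take the smaller front element (from left on ties), or from
        # whichever side remains once the other is exhausted.
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]
            i += 1
        else:
            A[k] = right[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:                      # base case: 1 element is already sorted
        q = (p + r) // 2           # divide at the halfway point
        merge_sort(A, p, q)        # conquer left half
        merge_sort(A, q + 1, r)    # conquer right half
        merge(A, p, q, r)          # combine the sorted halves

A = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(A, 0, len(A) - 1)
print(A)   # [1, 2, 2, 3, 4, 5, 6, 7]
```

Note that integer floor division (p + r) // 2 plays the role of ⌊(p + r)/2⌋ for the nonnegative indices used here.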
