
Linear Programming and Network Flows


Linear Programming and Network Flows
Fourth Edition

Mokhtar S. Bazaraa
Agility Logistics
Atlanta, Georgia

John J. Jarvis
Georgia Institute of Technology
School of Industrial and Systems Engineering
Atlanta, Georgia

Hanif D. Sherali
Virginia Polytechnic Institute and State University
Grado Department of Industrial and Systems Engineering
Blacksburg, Virginia

WILEY
A John Wiley & Sons, Inc., Publication

Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate.
Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic format. For information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Bazaraa, M. S.
Linear programming and network flows / Mokhtar S. Bazaraa, John J. Jarvis, Hanif D. Sherali. — 4th ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-470-46272-0 (cloth)
1. Linear programming. 2. Network analysis (Planning) I. Jarvis, John J. II. Sherali, Hanif D., 1952- III. Title.
T57.74.B39 2010
519.7'2—dc22
2009028769

Printed in the United States of America.

10 9 8 7 6 5 4 3

Dedicated to Our Parents


CONTENTS

Preface

ONE: INTRODUCTION
1.1 The Linear Programming Problem
1.2 Linear Programming Modeling and Examples
1.3 Geometric Solution
1.4 The Requirement Space
1.5 Notation
Exercises
Notes and References

TWO: LINEAR ALGEBRA, CONVEX ANALYSIS, AND POLYHEDRAL SETS
2.1 Vectors
2.2 Matrices
2.3 Simultaneous Linear Equations
2.4 Convex Sets and Convex Functions
2.5 Polyhedral Sets and Polyhedral Cones
2.6 Extreme Points, Faces, Directions, and Extreme Directions of Polyhedral Sets: Geometric Insights
2.7 Representation of Polyhedral Sets
Exercises
Notes and References

THREE: THE SIMPLEX METHOD
3.1 Extreme Points and Optimality
3.2 Basic Feasible Solutions
3.3 Key to the Simplex Method
3.4 Geometric Motivation of the Simplex Method
3.5 Algebra of the Simplex Method
3.6 Termination: Optimality and Unboundedness
3.7 The Simplex Method
3.8 The Simplex Method in Tableau Format
3.9 Block Pivoting
Exercises
Notes and References

FOUR: STARTING SOLUTION AND CONVERGENCE
4.1 The Initial Basic Feasible Solution
4.2 The Two-Phase Method
4.3 The Big-M Method
4.4 How Big Should Big-M Be?
4.5 The Single Artificial Variable Technique
4.6 Degeneracy, Cycling, and Stalling
4.7 Validation of Cycling Prevention Rules
Exercises
Notes and References

FIVE: SPECIAL SIMPLEX IMPLEMENTATIONS AND OPTIMALITY CONDITIONS
5.1 The Revised Simplex Method
5.2 The Simplex Method for Bounded Variables

5.3 Farkas' Lemma via the Simplex Method
5.4 The Karush-Kuhn-Tucker Optimality Conditions
Exercises
Notes and References

SIX: DUALITY AND SENSITIVITY ANALYSIS
6.1 Formulation of the Dual Problem
6.2 Primal-Dual Relationships
6.3 Economic Interpretation of the Dual
6.4 The Dual Simplex Method
6.5 The Primal-Dual Method
6.6 Finding an Initial Dual Feasible Solution: The Artificial Constraint Technique
6.7 Sensitivity Analysis
6.8 Parametric Analysis
Exercises
Notes and References

SEVEN: THE DECOMPOSITION PRINCIPLE
7.1 The Decomposition Algorithm
7.2 Numerical Example
7.3 Getting Started
7.4 The Case of an Unbounded Region X
7.5 Block Diagonal or Angular Structure
7.6 Duality and Relationships with Other Decomposition Procedures
Exercises
Notes and References

EIGHT: COMPLEXITY OF THE SIMPLEX ALGORITHM AND POLYNOMIAL-TIME ALGORITHMS
8.1 Polynomial Complexity Issues
8.2 Computational Complexity of the Simplex Algorithm
8.3 Khachian's Ellipsoid Algorithm
8.4 Karmarkar's Projective Algorithm
8.5 Analysis of Karmarkar's Algorithm: Convergence, Complexity, Sliding Objective Method, and Basic Optimal Solutions
8.6 Affine Scaling, Primal-Dual Path Following, and Predictor-Corrector Variants of Interior Point Methods
Exercises
Notes and References

NINE: MINIMAL-COST NETWORK FLOWS
9.1 The Minimal Cost Network Flow Problem
9.2 Some Basic Definitions and Terminology from Graph Theory
9.3 Properties of the A Matrix
9.4 Representation of a Nonbasic Vector in Terms of the Basic Vectors
9.5 The Simplex Method for Network Flow Problems
9.6 An Example of the Network Simplex Method
9.7 Finding an Initial Basic Feasible Solution
9.8 Network Flows with Lower and Upper Bounds

9.9 The Simplex Tableau Associated with a Network Flow Problem
9.10 List Structures for Implementing the Network Simplex Algorithm
9.11 Degeneracy, Cycling, and Stalling
9.12 Generalized Network Problems
Exercises
Notes and References

TEN: THE TRANSPORTATION AND ASSIGNMENT PROBLEMS
10.1 Definition of the Transportation Problem
10.2 Properties of the A Matrix
10.3 Representation of a Nonbasic Vector in Terms of the Basic Vectors
10.4 The Simplex Method for Transportation Problems
10.5 Illustrative Examples and a Note on Degeneracy
10.6 The Simplex Tableau Associated with a Transportation Tableau
10.7 The Assignment Problem: (Kuhn's) Hungarian Algorithm
10.8 Alternating Path Basis Algorithm for Assignment Problems
10.9 A Polynomial-Time Successive Shortest Path Approach for Assignment Problems
10.10 The Transshipment Problem
Exercises
Notes and References

ELEVEN: THE OUT-OF-KILTER ALGORITHM
11.1 The Out-of-Kilter Formulation of a Minimal Cost Network Flow Problem
11.2 Strategy of the Out-of-Kilter Algorithm
11.3 Summary of the Out-of-Kilter Algorithm
11.4 An Example of the Out-of-Kilter Algorithm
11.5 A Labeling Procedure for the Out-of-Kilter Algorithm
11.6 Insight into Changes in Primal and Dual Function Values
11.7 Relaxation Algorithms
Exercises
Notes and References

TWELVE: MAXIMAL FLOW, SHORTEST PATH, MULTICOMMODITY FLOW, AND NETWORK SYNTHESIS PROBLEMS
12.1 The Maximal Flow Problem
12.2 The Shortest Path Problem
12.3 Polynomial-Time Shortest Path Algorithms for Networks Having Arbitrary Costs
12.4 Multicommodity Flows
12.5 Characterization of a Basis for the Multicommodity Minimal-Cost Flow Problem
12.6 Synthesis of Multiterminal Flow Networks
Exercises
Notes and References

Bibliography
Index


PREFACE

Linear Programming deals with the problem of minimizing or maximizing a linear function in the presence of linear equality and/or inequality constraints. Since the development of the simplex method by George B. Dantzig in 1947, linear programming has been extensively used in the military, industrial, governmental, and urban planning fields, among others. The popularity of linear programming can be attributed to many factors including its ability to model large and complex problems, and the ability of the users to solve such problems in a reasonable amount of time by the use of effective algorithms and modern computers.

During and after World War II, it became evident that planning and coordination among various projects and the efficient utilization of scarce resources were essential. Intensive work by the United States Air Force team SCOOP (Scientific Computation of Optimum Programs) began in June 1947. As a result, the simplex method was developed by George B. Dantzig by the end of the summer of 1947. Interest in linear programming spread quickly among economists, mathematicians, statisticians, and government institutions. In the summer of 1949, a conference on linear programming was held under the sponsorship of the Cowles Commission for Research in Economics. The papers presented at that conference were later collected in 1951 by T. C. Koopmans into the book Activity Analysis of Production and Allocation.

Since the development of the simplex method, many researchers and practitioners have contributed to the growth of linear programming by developing its mathematical theory, devising efficient computational methods and codes, exploring new algorithms and new applications, and by their use of linear programming as an aiding tool for solving more complex problems, for instance, discrete programs, nonlinear programs, combinatorial problems, stochastic programming problems, and problems of optimal control.

This book addresses linear programming and network flows.
Both the general theory and characteristics of these optimization problems, as well as effective solution algorithms, are presented. The simplex algorithm provides considerable insight into the theory of linear programming and yields an efficient algorithm in practice. Hence, we study this method in detail in this text. Whenever possible, the simplex algorithm is specialized to take advantage of the problem structure, such as in network flow problems. We also present Khachian's ellipsoid algorithm and Karmarkar's projective interior point algorithm, both of which are polynomial-time procedures for solving linear programming problems. The latter algorithm has inspired a class of interior point methods that compare favorably with the simplex method, particularly for general large-scale, sparse problems, and is therefore described in greater detail. Computationally effective interior point algorithms in this class, including affine scaling methods, primal-dual path-following procedures, and predictor-corrector techniques, are also discussed. Throughout, we first present the fundamental concepts and the algorithmic techniques, then illustrate these by numerical examples, and finally provide further insights along with detailed mathematical analyses and justification. Rigorous proofs of the results are given

without the theorem-proof format. Although some readers might find this unconventional, we believe that the format and mathematical level adopted in this book will provide an insightful and engaging discussion for readers who wish to learn the techniques and know how to use them, as well as for those who wish to study the theory and the algorithms at a more rigorous level.

The book can be used both as a reference and as a textbook for advanced undergraduate students and first-year graduate students in the fields of industrial engineering, management, operations research, computer science, mathematics, and other engineering disciplines that deal with the subjects of linear programming and network flows. Even though the book's material requires some mathematical maturity, the only prerequisite is linear algebra and elementary calculus. For the convenience of the reader, pertinent results from linear algebra and convex analysis are summarized in Chapter 2.

This book can be used in several ways. It can be used in a two-course sequence on linear programming and network flows, in which case all of its material could be easily covered. The book can also be utilized in a one-semester course on linear programming and network flows. The instructor may have to omit some topics at his or her discretion. The book can also be used as a text for a course on either linear programming or network flows.

Following the introductory first chapter, the second chapter presents basic results on linear algebra and convex analysis, along with an insightful, geometrically motivated study of the structure of polyhedral sets. The remainder of the book is organized into two parts: linear programming and network flows. The linear programming part consists of Chapters 3 to 8.
In Chapter 3 the simplex method is developed in detail, and in Chapter 4 the initiation of the simplex method by the use of artificial variables and the problem of degeneracy and cycling along with geometric concepts are discussed. Chapter 5 deals with some specializations of the simplex method and the development of optimality criteria in linear programming. In Chapter 6 we consider the dual problem, develop several computational procedures based on duality, and discuss sensitivity analysis (including the tolerance approach) and parametric analysis (including the determination of shadow prices). Chapter 7 introduces the reader to the decomposition principle and large-scale optimization. The equivalence of several decomposition techniques for linear programming problems is exhibited in this chapter. Chapter 8 discusses some basic computational complexity issues, exhibits the worst-case exponential behavior of the simplex algorithm, and presents Karmarkar's polynomial-time algorithm along with a brief introduction to various interior point variants of this algorithm such as affine scaling methods, primal-dual path-following procedures, and predictor-corrector techniques. These variants constitute an arsenal of computationally effective approaches that compare favorably with the simplex method for large, sparse, generally structured problems. Khachian's polynomial-time ellipsoid algorithm is presented in the Exercises.

The part on network flows consists of Chapters 9 to 12. In Chapter 9 we study the principal characteristics of network structured linear programming problems and discuss the specialization of the simplex algorithm to solve these problems. A detailed discussion of list structures, useful from a terminology as well as from an implementation viewpoint, is also presented. Chapter 10 deals

with the popular transportation and assignment network flow problems. Although the algorithmic justifications and some special techniques rely on the material in Chapter 9, it is possible to study Chapter 10 separately if one is simply interested in the fundamental properties and algorithms for transportation and assignment problems. Chapter 11 presents the out-of-kilter algorithm along with some basic ingredients of primal-dual and relaxation types of algorithms for network flow problems. Finally, Chapter 12 covers the special topics of the maximal flow problem (including polynomial-time variants), the shortest path problem (including several efficient polynomial-time algorithms for this ubiquitous problem), the multicommodity minimal-cost flow problem, and a network synthesis or design problem. The last of these topics complements, as well as relies on, the techniques developed for the problems of analysis, which are the types of problems considered in the remainder of this book.

In preparing revised editions of this book, we have followed two principal objectives. Our first objective was to offer further concepts and insights into linear programming theory and algorithmic techniques. Toward this end, we have included detailed geometrically motivated discussions dealing with the structure of polyhedral sets, optimality conditions, and the nature of solution algorithms and special phenomena such as cycling. We have also added examples and remarks throughout the book that provide insights, and improve the understanding of, and correlate, the various topics discussed in the book. Our second objective was to update the book to the state-of-the-art while keeping the exposition transparent and easy to follow.
In keeping with this spirit, several topics have now been included, such as cycling and stalling phenomena and their prevention (including special approaches for network flow problems), numerically stable implementation techniques and empirical studies dealing with the simplex algorithm, the tolerance approach to sensitivity analysis, the equivalence of the Dantzig-Wolfe decomposition, Benders' partitioning method, and Lagrangian relaxation techniques for linear programming problems, computational complexity issues, the worst-case behavior of the simplex method, Khachian's and Karmarkar's polynomial-time algorithms for linear programming problems, various other interior point algorithms such as the affine scaling, primal-dual path-following, and predictor-corrector methods, list structures for network simplex implementations, a successive shortest path algorithm for linear assignment problems, polynomial-time scaling strategies (illustrated for the maximal flow problem), polynomial-time partitioned shortest path algorithms, and the network synthesis or design problem, among others. The writing style enables the instructor to skip several of these advanced topics in an undergraduate or introductory-level graduate course without any loss of continuity. Also, several new exercises have been added, including special exercises that simultaneously educate the reader on some related advanced material. The notes and references sections and the bibliography have also been updated.

We express our gratitude once again to Dr. Jeff Kennington, Dr. Gene Ramsay, Dr. Ron Rardin, and Dr. Michael Todd for their many fine suggestions during the preparation of the first edition; to Dr. Robert N. Lehrer, former director of the School of Industrial and Systems Engineering at the Georgia Institute of Technology, for his support during the preparation of this first edition; to Mr. Carl H. Wohlers for preparing its bibliography; and to Mrs. Alice

Jarvis, Mrs. Carolyn Piersma, Miss Kaye Watkins, and Mrs. Amelia Williams for their typing assistance. We also thank Dr. Faiz Al-Khayyal, Dr. Richard Cottle, Dr. Joanna Leleno, Dr. Craig Tovey, and Dr. Hossam Zaki, among many others, for their helpful comments in preparing this manuscript. We are grateful to Dr. Suleyman Tufekci, and to Drs. Joanna Leleno and Zhuangyi Liu for, respectively, preparing the first and second versions of the solutions manual. A special thanks to Dr. Barbara Fraticelli for her painstaking reading and feedback on the third edition of this book, and for her preparation of the solutions manual for the present (fourth) edition, and to Ki-Hwan Bae for his assistance with editing the bibliography. Finally, our deep appreciation and gratitude to Ms. Sandy Dalton for her magnificent single-handed feat at preparing the entire electronic document (including figures) of the third and fourth editions of this book.

Mokhtar S. Bazaraa
John J. Jarvis
Hanif D. Sherali

ONE: INTRODUCTION

Linear programming is concerned with the optimization (minimization or maximization) of a linear function while satisfying a set of linear equality and/or inequality constraints or restrictions. The linear programming problem was first conceived by George B. Dantzig around 1947 while he was working as a mathematical advisor to the United States Air Force Comptroller on developing a mechanized planning tool for a time-staged deployment, training, and logistical supply program. Although the Soviet mathematician and economist L. V. Kantorovich formulated and solved a problem of this type dealing with organization and planning in 1939, his work remained unknown until 1959. Hence, the conception of the general class of linear programming problems is usually credited to Dantzig. Because the Air Force refers to its various plans and schedules to be implemented as "programs," Dantzig's first published paper addressed this problem as "Programming in a Linear Structure." The term "linear programming" was actually coined by the economist and mathematician T. C. Koopmans in the summer of 1948 while he and Dantzig strolled near the Santa Monica beach in California.

In 1949 George B. Dantzig published the "simplex method" for solving linear programs. Since that time a number of individuals have contributed to the field of linear programming in many different ways, including theoretical developments, computational aspects, and exploration of new applications of the subject. The simplex method of linear programming enjoys wide acceptance because of (1) its ability to model important and complex management decision problems, and (2) its capability for producing solutions in a reasonable amount of time. In subsequent chapters of this text we shall consider the simplex method and its variants, with emphasis on the understanding of the methods.

In this chapter, we introduce the linear programming problem.
The following topics are discussed: basic definitions in linear programming, assumptions leading to linear models, manipulation of the problem, examples of linear problems, and geometric solution in the feasible region space and the requirement space. This chapter is elementary and may be skipped if the reader has previous knowledge of linear programming.

1.1 THE LINEAR PROGRAMMING PROBLEM

We begin our discussion by formulating a particular type of linear programming problem. As will be seen subsequently, any general linear programming problem may be manipulated into this form.

Basic Definitions

Consider the following linear programming problem. Here, c_1 x_1 + c_2 x_2 + ... + c_n x_n is the objective function (or criterion function) to be minimized and will be denoted by z. The coefficients c_1, c_2, ..., c_n are the (known) cost coefficients, and

x_1, x_2, ..., x_n are the decision variables (variables, structural variables, or activity levels) to be determined:

    Minimize    c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to  a_11 x_1 + a_12 x_2 + ... + a_1n x_n >= b_1
                a_21 x_1 + a_22 x_2 + ... + a_2n x_n >= b_2
                  .          .                .          .
                a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n >= b_m
                x_1, x_2, ..., x_n >= 0.

The inequality Σ_{j=1}^n a_ij x_j >= b_i denotes the ith constraint (or restriction or functional, structural, or technological constraint). The coefficients a_ij for i = 1, ..., m, j = 1, ..., n are called the technological coefficients. These technological coefficients form the constraint matrix A:

        | a_11  a_12  ...  a_1n |
    A = | a_21  a_22  ...  a_2n |
        |   .     .          .  |
        | a_m1  a_m2  ...  a_mn |

The column vector whose ith component is b_i, which is referred to as the right-hand-side vector, represents the minimal requirements to be satisfied. The constraints x_1, x_2, ..., x_n >= 0 are the nonnegativity constraints. A set of values of the variables x_1, ..., x_n satisfying all the constraints is called a feasible point or a feasible solution. The set of all such points constitutes the feasible region or the feasible space.

Using the foregoing terminology, the linear programming problem can be stated as follows: Among all feasible solutions, find one that minimizes (or maximizes) the objective function.

Example 1.1

Consider the following linear problem:

    Minimize    2x_1 + 5x_2
    subject to   x_1 +  x_2 >=  6
                -x_1 - 2x_2 >= -18
                 x_1,  x_2  >=  0.

In this case, we have two decision variables x_1 and x_2. The objective function to be minimized is 2x_1 + 5x_2. The constraints and the feasible region are illustrated in Figure 1.1. The optimization problem is thus to find a point in the feasible region having the smallest possible objective value.
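For a problem with only two variables, the search "among all feasible solutions" can be illustrated by brute force: intersect the constraint boundaries pairwise, keep the intersection points that satisfy every constraint, and compare objective values. The sketch below does this for a small hypothetical instance (minimize 2x_1 + 3x_2 subject to x_1 + x_2 >= 4 and nonnegativity; the data are made up for illustration, and later chapters develop why restricting attention to such corner points is justified).

```python
from itertools import combinations

# Hypothetical instance: minimize 2x1 + 3x2
# subject to x1 + x2 >= 4, x1 >= 0, x2 >= 0.
# Every constraint, including nonnegativity, is written as a . x >= b.
c = (2.0, 3.0)
A = [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
b = [4.0, 0.0, 0.0]

def solve_2x2(r1, r2, b1, b2):
    """Intersection of the two boundary lines r1 . x = b1, r2 . x = b2."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:          # parallel boundaries: no corner here
        return None
    x1 = (b1 * r2[1] - b2 * r1[1]) / det
    x2 = (r1[0] * b2 - r2[0] * b1) / det
    return (x1, x2)

# Collect corner points that satisfy ALL constraints (feasible points).
feasible_corners = []
for i, j in combinations(range(len(A)), 2):
    pt = solve_2x2(A[i], A[j], b[i], b[j])
    if pt is None:
        continue
    if all(A[k][0] * pt[0] + A[k][1] * pt[1] >= b[k] - 1e-9
           for k in range(len(A))):
        feasible_corners.append(pt)

# Among the feasible corners, pick one with the smallest objective value.
best = min(feasible_corners, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # → (4.0, 0.0)
```

Here the feasible corners are (0, 4) and (4, 0), with objective values 12 and 8; the enumeration therefore reports (4, 0). This pairwise-intersection scheme is purely illustrative and does not scale; the simplex method of Chapter 3 organizes the same idea efficiently.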

Figure 1.1. Illustration of the feasible region.

Assumptions of Linear Programming

To represent an optimization problem as a linear program, several assumptions that are implicit in the linear programming formulation discussed previously are needed. A brief discussion of these assumptions is given next.

1. Proportionality. Given a variable x_j, its contribution to cost is c_j x_j and its contribution to the ith constraint is a_ij x_j. This means that if x_j is doubled, say, so is its contribution to cost and to each of the constraints. To illustrate, suppose that x_j is the amount of activity j used. For instance, if x_j = 10, then the cost of this activity is 10c_j. If x_j = 20, then the cost is 20c_j, and so on. This means that no savings (or extra costs) are realized by using more of activity j; that is, there are no economies or returns to scale or discounts. Also, no setup cost for starting the activity is realized.

2. Additivity. This assumption guarantees that the total cost is the sum of the individual costs, and that the total contribution to the ith restriction is the sum of the individual contributions of the individual activities. In other words, there are no substitution or interaction effects among the activities.

3. Divisibility. This assumption ensures that the decision variables can be divided into any fractional levels so that non-integral values for the decision variables are permitted.

4. Deterministic. The coefficients c_j, a_ij, and b_i are all known deterministically. Any probabilistic or stochastic elements inherent in

demands, costs, prices, resource availabilities, usages, and so on are all assumed to be approximated by these coefficients through some deterministic equivalent.

It is important to recognize that if a linear programming problem is being used to model a given situation, then the aforementioned assumptions are implied to hold, at least over some anticipated operating range for the activities. When Dantzig first presented his linear programming model to a meeting of the Econometric Society in Wisconsin, the famous economist H. Hotelling critically remarked that in reality, the world is indeed nonlinear. As Dantzig recounts, the well-known mathematician John von Neumann came to his rescue by countering that the talk was about "Linear" Programming and was based on a set of postulated axioms. Quite simply, a user may apply this technique if and only if the application fits the stated axioms.

Despite the seemingly restrictive assumptions, linear programs are among the most widely used models today. They represent several systems quite satisfactorily, and they are capable of providing a large amount of information besides simply a solution, as we shall see later, particularly in Chapter 6. Moreover, they are also often used to solve certain types of nonlinear optimization problems via (successive) linear approximations and constitute an important tool in solution methods for linear discrete optimization problems having integer-restricted variables.

Problem Manipulation

Recall that a linear program is a problem of minimizing or maximizing a linear function in the presence of linear inequality and/or equality constraints. By simple manipulations the problem can be transformed from one form to another equivalent form. These manipulations are most useful in linear programming, as will be seen throughout the text.

INEQUALITIES AND EQUATIONS

An inequality can be easily transformed into an equation.
To illustrate, consider the constraint given by Σ_{j=1}^n a_ij x_j >= b_i. This constraint can be put in equation form by subtracting the nonnegative surplus or slack variable x_{n+i} (sometimes denoted by s_i), leading to Σ_{j=1}^n a_ij x_j - x_{n+i} = b_i and x_{n+i} >= 0. Similarly, the constraint Σ_{j=1}^n a_ij x_j <= b_i is equivalent to Σ_{j=1}^n a_ij x_j + x_{n+i} = b_i and x_{n+i} >= 0. Also, an equation of the form Σ_{j=1}^n a_ij x_j = b_i can be transformed into the two inequalities Σ_{j=1}^n a_ij x_j >= b_i and Σ_{j=1}^n a_ij x_j <= b_i, although this is not the practice.

NONNEGATIVITY OF THE VARIABLES

For most practical problems the variables represent physical quantities, and hence must be nonnegative. The simplex method is designed to solve linear programs where the variables are nonnegative. If a variable x_j is unrestricted in sign, then it can be replaced by x'_j - x''_j, where x'_j >= 0 and x''_j >= 0. If x_1, ..., x_k are some k

variables that are all unrestricted in sign, then only one additional variable x'' is needed in the equivalent transformation: x_j = x'_j - x'' for j = 1, ..., k, where x'_j >= 0 for j = 1, ..., k, and x'' >= 0. (Here, -x'' plays the role of representing the most negative variable, while all the other variables x_j are x'_j above this value.) Alternatively, one could solve for each unrestricted variable in terms of the other variables using any equation in which it appears, eliminate this variable from the problem by substitution using this equation, and then discard this equation from the problem. However, this strategy is seldom used from a data management and numerical implementation viewpoint. Continuing, if x_j >= l_j, then the new variable x'_j = x_j - l_j is automatically nonnegative. Also, if a variable x_j is restricted such that x_j <= u_j, where we might possibly have u_j <= 0, then the substitution x'_j = u_j - x_j produces a nonnegative variable x'_j.

MINIMIZATION AND MAXIMIZATION PROBLEMS

Another problem manipulation is to convert a maximization problem into a minimization problem and conversely. Note that over any region,

    maximum Σ_{j=1}^n c_j x_j  =  -minimum Σ_{j=1}^n (-c_j) x_j.

Hence, a maximization (minimization) problem can be converted into a minimization (maximization) problem by multiplying the coefficients of the objective function by -1. After the optimization of the new problem is completed, the objective value of the old problem is -1 times the optimal objective value of the new problem.

Standard and Canonical Formats

From the foregoing discussion, we have seen that any given linear program can be put in different equivalent forms by suitable manipulations.
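The manipulations above are mechanical enough to express in a few lines of code. The sketch below, using made-up data, converts a hypothetical problem (maximize 3x_1 - x_2 subject to two <= constraints, with x_2 unrestricted in sign) into an equivalent minimization problem over equations with nonnegative variables:

```python
import numpy as np

# Hypothetical starting problem:
#   maximize 3*x1 - x2  subject to  A x <= b,  x1 >= 0,  x2 unrestricted.
c = np.array([3.0, -1.0])
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

# 1. Maximization -> minimization: negate the objective coefficients.
c_min = -c

# 2. Unrestricted x2 -> x2' - x2'' with x2', x2'' >= 0:
#    append a column that is the negative of x2's column,
#    and the corresponding negated objective coefficient.
A2 = np.hstack([A, -A[:, [1]]])
c2 = np.append(c_min, -c_min[1])

# 3. "<=" rows -> equations: add one nonnegative slack variable per row
#    (an identity block), with zero cost coefficients.
m = A2.shape[0]
A_std = np.hstack([A2, np.eye(m)])
c_std = np.append(c2, np.zeros(m))

print(A_std.shape)  # (2, 5): columns correspond to x1, x2', x2'', s1, s2
```

The data (c_std, A_std, b) now define a linear program over five nonnegative variables in pure equation form, and by the identity above its optimal value is the negative of the original maximum.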
