Elementary Linear Algebra, Fourth Edition


Elementary Linear Algebra
Fourth Edition

Stephen Andrilli
Department of Mathematics and Computer Science
La Salle University
Philadelphia, PA

David Hecker
Department of Mathematics
Saint Joseph's University
Philadelphia, PA

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Academic Press is an imprint of Elsevier

Academic Press is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
84 Theobald's Road, London WC1X 8RR, UK

Copyright © 2010 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: permissions@elsevier.com. You may also complete your request online via the Elsevier homepage (http://elsevier.com), by selecting "Support & Contact" then "Copyright and Permission" and then "Obtaining Permissions."

Library of Congress Cataloging-in-Publication Data
Application submitted.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: 978-0-12-374751-8

For information on all Academic Press publications visit our Web site at www.elsevierdirect.com

Printed in Canada
09 10 11  9 8 7 6 5 4 3 2 1

To our wives, Ene and Lyn, for all their help and encouragement


Contents

Preface for the Instructor ............................................. ix
Preface for the Student ................................................ xix
Symbol Table ........................................................... xxiii
Computational and Numerical Methods, Applications ...................... xxvii

CHAPTER 1  Vectors and Matrices ........................................ 1
  1.1  Fundamental Operations with Vectors ............................. 2
  1.2  The Dot Product ................................................. 18
  1.3  An Introduction to Proof Techniques ............................. 31
  1.4  Fundamental Operations with Matrices ............................ 48
  1.5  Matrix Multiplication ........................................... 59

CHAPTER 2  Systems of Linear Equations ................................. 79
  2.1  Solving Linear Systems Using Gaussian Elimination ............... 79
  2.2  Gauss-Jordan Row Reduction and Reduced Row Echelon Form ......... 98
  2.3  Equivalent Systems, Rank, and Row Space ......................... 110
  2.4  Inverses of Matrices ............................................ 125

CHAPTER 3  Determinants and Eigenvalues ................................ 143
  3.1  Introduction to Determinants .................................... 143
  3.2  Determinants and Row Reduction .................................. 155
  3.3  Further Properties of the Determinant ........................... 165
  3.4  Eigenvalues and Diagonalization ................................. 178

CHAPTER 4  Finite Dimensional Vector Spaces ............................ 203
  4.1  Introduction to Vector Spaces ................................... 204
  4.2  Subspaces ....................................................... 215
  4.3  Span ............................................................ 227
  4.4  Linear Independence ............................................. 239
  4.5  Basis and Dimension ............................................. 255
  4.6  Constructing Special Bases ...................................... 269
  4.7  Coordinatization ................................................ 281

CHAPTER 5  Linear Transformations ...................................... 305
  5.1  Introduction to Linear Transformations .......................... 306
  5.2  The Matrix of a Linear Transformation ........................... 321

  5.3  The Dimension Theorem ........................................... 338
  5.4  One-to-One and Onto Linear Transformations ...................... 350
  5.5  Isomorphism ..................................................... 356
  5.6  Diagonalization of Linear Operators ............................. 371

CHAPTER 6  Orthogonality ............................................... 397
  6.1  Orthogonal Bases and the Gram-Schmidt Process ................... 397
  6.2  Orthogonal Complements .......................................... 412
  6.3  Orthogonal Diagonalization ...................................... 428

CHAPTER 7  Complex Vector Spaces and General Inner Products
  7.1  Complex n-Vectors and Matrices
  7.2  Complex Eigenvalues and Complex Eigenvectors
  7.3  Complex Vector Spaces
  7.4  Orthogonality in Cn
  7.5  Inner Product Spaces

CHAPTER 8  Additional Applications
  8.1  Graph Theory
  8.2  Ohm's Law
  8.3  Least-Squares Polynomials
  8.4  Markov Chains
  8.5  Hill Substitution: An Introduction to Coding Theory
  8.6  Elementary Matrices
  8.7  Rotation of Axes for Conic Sections
  8.8  Computer Graphics
  8.9  Differential Equations
  8.10 Least-Squares Solutions for Inconsistent Systems
  8.11 Quadratic Forms

CHAPTER 9  Numerical Methods ........................................... 587
  9.1  Numerical Methods for Solving Systems ........................... 588
  9.2  LDU Decomposition ............................................... 600
  9.3  The Power Method for Finding Eigenvalues ........................ 608
  9.4  QR Factorization ................................................ 615
  9.5  Singular Value Decomposition .................................... 623

Appendix A  Miscellaneous Proofs ....................................... 645
  Proof of Theorem 1.14, Part (1) ...................................... 645
  Proof of Theorem 2.4 ................................................. 646
  Proof of Theorem 2.9 ................................................. 647

  Proof of Theorem 3.3, Part (3), Case 2 ............................... 648
  Proof of Theorem 5.29 ................................................ 649
  Proof of Theorem 6.18 ................................................ 650

Appendix B  Functions .................................................. 653
  Functions: Domain, Codomain, and Range ............................... 653
  One-to-One and Onto Functions ........................................ 654
  Composition and Inverses of Functions ................................ 655

Appendix C  Complex Numbers ............................................ 661

Appendix D  Answers to Selected Exercises .............................. 665

Index .................................................................. 725


Preface for the Instructor

This textbook is intended for a sophomore- or junior-level introductory course in linear algebra. We assume the students have had at least one course in calculus.

PHILOSOPHY AND FEATURES OF THE TEXT

Clarity of Presentation: We have striven for clarity and used straightforward language throughout the book, occasionally sacrificing brevity for clear and convincing explanation. We hope you will encourage students to read the text deeply and thoroughly.

Helpful Transition from Computation to Theory: In writing this text, our main intention was to address the fact that students invariably ran into trouble as the largely computational first half of most linear algebra courses gave way to a more theoretical second half. In particular, many students encountered difficulties when abstract vector space topics were introduced. Accordingly, we have taken great care to help students master these important concepts. We consider the material in Sections 4.1 through 5.6 (vector spaces and subspaces, span, linear independence, basis and dimension, coordinatization, linear transformations, kernel and range, one-to-one and onto linear transformations, isomorphism, diagonalization of linear operators) to be the "heart" of this linear algebra text.

Emphasis on the Reading and Writing of Proofs: One reason that students have trouble with the more abstract material in linear algebra is that most textbooks contain few, if any, guidelines about reading and writing simple mathematical proofs. This book is intended to remedy that situation. Consequently, we have students working on proofs as quickly as possible. After a discussion of the basic properties of vectors, there is a special section (Section 1.3) on general proof techniques, with concrete examples using the material on vectors from Sections 1.1 and 1.2.
The early placement of Section 1.3 helps to build the students' confidence and gives them a strong foundation in the reading and writing of proofs.

We have written the proofs of theorems in the text in a careful manner to give students models for writing their own proofs. We avoided "clever" or "sneaky" proofs, in which the last line suddenly produces "a rabbit out of a hat," because such proofs invariably frustrate students, who are given no insight into the strategy of the proof or how the deductive process was used. In fact, such proofs tend to reinforce the students' mistaken belief that they will never become competent in the art of writing proofs. In this text, proofs longer than one paragraph are often written in a "top-down" manner, a concept borrowed from structured programming. A complex theorem is broken down into a series of secondary results, which together are sufficient to prove the original theorem. In this way, the student has a clear outline of the logical argument and can more easily reproduce the proof if called on to do so.

We have left the proofs of some elementary theorems to the student. However, for every nontrivial theorem in Chapters 1 through 6, we have either included a proof or given detailed hints which should be sufficient to enable students to provide a proof on their own. Most of the proofs of theorems that are left as exercises can be found in the Student Solutions Manual. The exercises corresponding to these proofs are marked with a special symbol.

Computational and Numerical Methods, Applications: A summary of the most important computational and numerical methods covered in this text is found in the chart located in the front pages. This chart also contains the most important applications of linear algebra that are found in this text. Linear algebra is a branch of mathematics having a multitude of practical applications, and we have included many standard ones so that instructors can choose their favorites. Chapter 8 is devoted entirely to applications of linear algebra, but there are also several shorter applications in Chapters 1 to 6. Instructors may choose to have their students explore these applications in computer labs, or to assign some of these applications as extra credit reading assignments outside of class.

Revisiting Topics: We frequently introduce difficult concepts with concrete examples and then revisit them in increasingly abstract forms as students progress through the text. Here are several examples:

- Students are first introduced to the concept of linear combinations beginning in Section 1.1, long before linear combinations are defined for real vector spaces in Chapter 4.

- The row space of a matrix is first encountered in Section 2.3, thereby preparing students for the more general concepts of subspace and span in Sections 4.2 and 4.3.
- Students traditionally find eigenvalues and eigenvectors to be a difficult topic, so these are introduced early in the text (Section 3.4) in the context of matrices. Further properties of eigenvectors are included throughout Chapters 4 and 5 as underlying vector space concepts are covered. Then a more thorough, detailed treatment of eigenvalues is given in Section 5.6 in the context of linear transformations. The more advanced topics of orthogonal and unitary diagonalization are covered in Chapters 6 and 7.

- The techniques behind the first two methods in Section 4.6 for computing bases are introduced earlier in Sections 4.3 and 4.4 in the Simplified Span Method and the Independence Test Method, respectively. In this way, students will become comfortable with these methods in the context of span and linear independence before employing them to find appropriate bases for vector spaces.

- Students are first introduced to least-squares polynomials in Section 8.3 in a concrete fashion, and then (assuming a knowledge of orthogonal complements), the theory behind least-squares solutions for inconsistent systems is explored later on in Section 8.10.

Numerous Examples and Exercises: There are 321 numbered examples in the text, and many other unnumbered examples as well, at least one for each new concept or application, to ensure that students fully understand new material before proceeding onward. Almost every theorem has a corresponding example to illustrate its meaning and/or usefulness.

The text also contains an unusually large number of exercises. There are more than 980 numbered exercises, and many of these have multiple parts, for a total of more than 2660 questions. Some are purely computational. Many others ask the students to write short proofs. The exercises within each section are generally ordered by increasing difficulty, beginning with basic computational problems and moving on to more theoretical problems and proofs. Answers are provided at the end of the book for approximately half the computational exercises; these problems are marked with a star (★). Full solutions to the starred exercises appear in the Student Solutions Manual.

True/False Exercises: Included among the exercises are 500 True/False questions, which appear at the end of each section in Chapters 1 through 9, as well as in the Review Exercises at the end of Chapters 1 through 7, and in Appendices B and C. These True/False questions help students test their understanding of the fundamental concepts presented in each section. In particular, these exercises highlight the importance of crucial words in definitions or theorems.
Pondering True/False questions also helps the students learn the logical differences between "true," "occasionally true," and "never true." Understanding such distinctions is a crucial step toward the type of reasoning they are expected to possess as mathematicians.

Summary Tables: There are helpful summaries of important material at various points in the text:

- Table 2.1 (in Section 2.3): The three types of row operations and their inverses

- Table 3.1 (in Section 3.2): Equivalent conditions for a matrix to be singular (and similarly for nonsingular)

- Chart following Chapter 3: Techniques for solving a system of linear equations, and for finding the inverse, determinant, eigenvalues, and eigenvectors of a matrix

- Table 4.1 (in Section 4.4): Equivalent conditions for a subset to be linearly independent (and similarly for linearly dependent)

- Table 4.2 (in Section 4.6): Contrasts between the Simplified Span Method and the Independence Test Method

- Table 5.1 (in Section 5.2): Matrices for several geometric linear operators in R3

- Table 5.2 (in Section 5.5): Equivalent conditions for a linear transformation to be an isomorphism (and similarly for one-to-one, onto)

Symbol Table: Following the Prefaces, for convenience, there is a comprehensive Symbol Table listing all of the major symbols related to linear algebra that are employed in this text, together with their meanings.

Instructor's Manual: An Instructor's Manual is available for this text that contains the answers to all computational exercises, and complete solutions to the theoretical and proof exercises. In addition, this manual includes three versions of a sample test for each of Chapters 1 through 7. Answer keys for the sample tests are also included.

Student Solutions Manual: A Student Solutions Manual is available that contains full solutions for each exercise in the text bearing a star (those whose answers appear in the back of the textbook). The Student Solutions Manual also contains the proofs of most of the theorems whose proofs were left to the exercises. These exercises are marked in the text with a special symbol. Because we have compiled this manual ourselves, it utilizes the same styles of proof-writing and solution techniques that appear in the actual text.

Web Site: Our web site contains appropriate updates on the textbook as well as a way to communicate with the authors.

MAJOR CHANGES FOR THE FOURTH EDITION

Chapter Review Exercises: We have added additional exercises for review following each of Chapters 1 through 7, including many additional True/False exercises.

Section-by-Section Vocabulary and Highlights Summary: After each section in the textbook, for the students' convenience, there is now a summary of important vocabulary and a summary of the main results of that section.

QR Factorization and Singular Value Decomposition: New sections have been added on QR Factorization (Section 9.4) and Singular Value Decomposition (Section 9.5). The latter includes a new application on digital imaging.

Major Revisions: Many sections of the text have been augmented and/or rewritten for further clarity.
The sections that received the most substantial changes are as follows:

- Section 1.5 (Matrix Multiplication): A new subsection ("Linear Combinations from Matrix Multiplication") with some related exercises has been added to show how a linear combination of the rows or columns of a matrix can be accomplished easily using matrix multiplication.

- Section 3.2 (Determinants and Row Reduction): For greater convenience, the approach to finding the determinant of a matrix by row reduction has been rewritten so that the row reduction now proceeds in a forward manner.

- Section 3.4 (Eigenvalues and Diagonalization): The concept of similarity is introduced in a more formal manner. Also, the vectors obtained from the row reduction process are labeled as "fundamental eigenvectors" from this point

onward in the text, and examples in the section have been reordered for greater clarity.

- Section 4.4 (Linear Independence): The definition of linear independence is now taken from Theorem 4.7 in the Third Edition: that is, {v1, v2, . . . , vn} is linearly independent if and only if a1v1 + a2v2 + · · · + anvn = 0 implies a1 = a2 = · · · = an = 0.

- Section 4.5 (Basis and Dimension): The main theorem of this section (now Theorem 4.12), that any two bases for the same finite dimensional vector space have the same size, was preceded in the previous edition by two lemmas. These lemmas have now been consolidated into one "technical lemma" (Lemma 4.11) and proven using linear systems rather than the exchange method.

- Section 4.7 (Coordinatization): The examples in this section have been rewritten to streamline the overall presentation and introduce the row reduction method for coordinatization sooner.

- Section 5.3 (The Dimension Theorem): The Dimension Theorem is now proven (in a more straightforward manner) for the special case of a linear transformation from Rn to Rm, and the proof for more general linear transformations is now given in Section 5.5, once the appropriate properties of isomorphisms have been introduced. (An alternate proof for the Dimension Theorem in the general case is outlined in Exercise 18 of Section 5.3.)

- Section 5.4 (One-to-One and Onto Linear Transformations) and Section 5.5 (Isomorphism): Much of the material of these two sections was previously in a single section, but has now been extensively revised. This new approach gives the students more familiarity with one-to-one and onto transformations before proceeding to isomorphisms. Also, there is a more thorough explanation of how isomorphisms preserve important properties of vector spaces. This, in turn, validates more carefully the methods used in Chapter 4 for finding particular bases for general vector spaces other than Rn.
[The material formerly in Section 5.5 in the Third Edition has been moved to Section 5.6 (Diagonalization of Linear Operators) in the Fourth Edition.]

- Chapter 8 (Additional Applications): Several of the sections in this chapter have been rewritten for improved clarity, including Section 8.2 (Ohm's Law) in order to stress the use of both of Kirchhoff's Laws, Section 8.3 (Least-Squares Polynomials) in order to present concrete examples first before stating the general result (Theorem 8.2), Section 8.7 (Rotation of Axes) in which the emphasis is now on a clockwise rotation of axes for simplicity, and Section 8.8 (Computer Graphics) in which there are many minor improvements in the presentation, including a more careful approach to the display of pixel coordinates and to the concept of geometric similarity.

- Appendix A (Miscellaneous Proofs): A proof of Theorem 2.4 (uniqueness of reduced row echelon form for a matrix) has been added.

Also, Chapter 10 in the Third Edition has been eliminated, and two of its three sections (Elementary Matrices, Quadratic Forms) have been incorporated into Chapter 8 in the Fourth Edition (as Sections 8.6 and 8.11, respectively). The sections from the Third Edition entitled "Change of Variables and the Jacobian," "Max-Min Problems in Rn and the Hessian Matrix," and "Function Spaces" have been eliminated, but are available for downloading and use from the text's web site. Also, the appendix "Computers and Calculators" from previous editions has been removed because the most common computer packages (e.g., Maple, MATLAB, Mathematica) that are used in conjunction with linear algebra courses now contain introductory tutorials that are much more thorough than what can be provided here.

PREREQUISITE CHART FOR SECTIONS IN CHAPTERS 7, 8, 9

Prerequisites for the material in Chapters 7 through 9 are listed in the following chart. The sections of Chapters 8 and 9 are generally independent of each other, and any of these sections can be covered after its prerequisite has been met.

Section                                                           | Prerequisite
Section 7.1 (Complex n-Vectors and Matrices)                      | Section 1.5 (Matrix Multiplication)
Section 7.2 (Complex Eigenvalues and Complex Eigenvectors)*       | Section 3.4 (Eigenvalues and Diagonalization)
Section 7.3 (Complex Vector Spaces)*                              | Section 5.2 (The Matrix of a Linear Transformation)
Section 7.4 (Orthogonality in Cn)*                                | Section 6.3 (Orthogonal Diagonalization)
Section 7.5 (Inner Product Spaces)*                               | Section 6.3 (Orthogonal Diagonalization)
Section 8.1 (Graph Theory)                                        | Section 1.5 (Matrix Multiplication)
Section 8.2 (Ohm's Law)                                           | Section 2.2 (Gauss-Jordan Row Reduction and Reduced Row Echelon Form)
Section 8.3 (Least-Squares Polynomials)                           | Section 2.2 (Gauss-Jordan Row Reduction and Reduced Row Echelon Form)
Section 8.4 (Markov Chains)                                       | Section 2.2 (Gauss-Jordan Row Reduction and Reduced Row Echelon Form)
Section 8.5 (Hill Substitution: An Introduction to Coding Theory) | Section 2.4 (Inverses of Matrices)
Section 8.6 (Elementary Matrices)                                 | Section 2.4 (Inverses of Matrices)
Section 8.7 (Rotation of Axes for Conic Sections)                 | Section 4.7 (Coordinatization)

Section 8.8 (Computer Graphics)                                   | Section 5.2 (The Matrix of a Linear Transformation)
Section 8.9 (Differential Equations)**                            | Section 5.6 (Diagonalization of Linear Operators)
Section 8.10 (Least-Squares Solutions for Inconsistent Systems)   | Section 6.2 (Orthogonal Complements)
Section 8.11 (Quadratic Forms)                                    | Section 6.3 (Orthogonal Diagonalization)
Section 9.1 (Numerical Methods for Solving Systems)               | Section 2.3 (Equivalent Systems, Rank, and Row Space)
Section 9.2 (LDU Decomposition)                                   | Section 2.4 (Inverses of Matrices)
Section 9.3 (The Power Method for Finding Eigenvalues)            | Section 3.4 (Eigenvalues and Diagonalization)
Section 9.4 (QR Factorization)                                    | Section 6.1 (Orthogonal Bases and the Gram-Schmidt Process)
Section 9.5 (Singular Value Decomposition)                        | Section 6.3 (Orthogonal Diagonalization)

*In addition to the prerequisites listed, each section in Chapter 7 requires the sections of Chapter 7 that precede it, although most of Section 7.5 can be covered without having covered Sections 7.1 through 7.4 by concentrating only on real inner products.

**The techniques presented for solving differential equations in Section 8.9 require only Section 3.4 as a prerequisite. However, terminology from Chapters 4 and 5 is used throughout Section 8.9.

PLANS FOR COVERAGE

Chapters 1 through 6 have been written in a sequential fashion. Each section is generally needed as a prerequisite for what follows. Therefore, we recommend that these sections be covered in order. However, there are three exceptions:

- Section 1.3 (An Introduction to Proofs) can be covered, in whole or in part, at any time after Section 1.2.

- Section 3.3 (Further Properties of the Determinant) contains some material that can be omitted without affecting most of the remaining development. The topics of general cofactor expansion, (classical) adjoint matrix, and Cramer's Rule are used very sparingly in the rest of the text.

- Section 6.1 (Orthogonal Bases and the Gram-Schmidt Process) can be covered any time after Chapter 4, as can much of the material in Section 6.2 (Orthogonal Complements).

Any section in Chapters 7 through 9 can be covered at any time as long as the prerequisites for that section have previously been covered. (Consult the Prerequisite Chart for Sections in Chapters 7, 8, 9.)

The textbook contains much more material than can be covered in a typical 3- or 4-credit course. We expect that the students will read much on their own, while the instructor emphasizes the highlights. Two suggested timetables for covering the material in this text are presented below, one for a 3-credit course and the other for a 4-credit course. A 3-credit course could skip portions of Sections 1.3, 2.3, 3.3, 4.1 (more abstract vector spaces), 5.5, 5.6, 6.2, and 6.3, and all of Chapter 7. A 4-credit course could cover most of the material of Chapters 1 through 6 (perhaps de-emphasizing portions of Sections 1.3, 2.3, and 3.3), and could cover some of Chapter 7. In either course, some of the material in Chapter 1 could be skimmed if students are already familiar with vector and matrix operations.

                              | 3-Credit Course | 4-Credit Course
Chapter 1                     | 5 classes       | 5 classes
Chapter 2                     | 5 classes       | 6 classes
Chapter 3                     | 5 classes       | 5 classes
Chapter 4                     | 11 classes      | 13 classes
Chapter 5                     | 8 classes       | 13 classes
Chapter 6                     | 2 classes       | 5 classes
Chapter 7                     |                 | 2 classes
Chapters 8 and 9 (selections) | 3 classes       | 4 classes
Tests                         | 3 classes       | 3 classes
Total                         | 42 classes      | 56 classes

ACKNOWLEDGMENTS

We gratefully thank all those who have helped in the publication of this book. At Elsevier/Academic Press, we especially thank Lauren Yuhasz, our Senior Acquisitions Editor, Patricia Osborn, our Acquisitions Editor, Gavin Becker, our Assistant Editor, Philip Bugeau, our Project Manager, and Deborah Prato, our Copyeditor.

We also want to thank those who have supported our textbook at various stages. In particular, we thank Agnes Rash, former Chair of the Mathematics and Computer Science Department at Saint Joseph's University, for her support of our project. We also thank Paul Klingsberg and Richard Cavaliere of Saint Joseph's University, both of whom gave us many suggestions for improvements to this edition and earlier editions.

We especially thank those students who have classroom-tested versions of the earlier editions of the manuscript. Their comments and suggestions have been extremely helpful, and have guided us in shaping the text in many ways.

We acknowledge those reviewers who have supplied many worthwhile suggestions. For reviewing the first edition, we thank the following:

C. S. Ballantine, Oregon State University
Yuh-ching Chen, Fordham University
Susan Jane Colley, Oberlin College
Roland di Franco, University of the Pacific
Colin Graham, Northwestern University
K. G. Jinadasa, Illinois State University
Ralph Kelsey, Denison University
Masood Otarod, University of Scranton
J. Bryan Sperry, Pittsburg State University
Robert Tyler, Susquehanna University

For reviewing the second edition, we thank the following:

Ruth Favro, Lawrence Technological University
Howard Hamilton, California State University
Ray Heitmann, University of Texas, Austin
Richard Hodel, Duke University
James Hurley, University of Connecticut
