Neural Network Toolbox 5 User's Guide


Howard Demuth
Mark Beale
Martin Hagan

How to Contact The MathWorks

Web: www.mathworks.com/contact_TS.html (Technical Support)
E-mail: service@mathworks.com (order status, license renewals, passcodes)
E-mail: info@mathworks.com (sales, pricing, and general information)
E-mail contacts are also available for product enhancement suggestions, bug reports, and documentation error reports.
Phone: 508-647-7000
Fax: 508-647-7001

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.

Neural Network Toolbox User's Guide
COPYRIGHT 2005-2007 by The MathWorks, Inc.

The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement, and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.

Trademarks

MATLAB, Simulink, Stateflow, Handle Graphics, Real-Time Workshop, SimBiology, SimHydraulics, SimEvents, and xPC TargetBox are registered trademarks, and The MathWorks, the L-shaped membrane logo, Embedded MATLAB, and PolySpace are trademarks of The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective holders.

Patents

The MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for more information.

Revision History

June 1992       First printing
April 1993      Second printing
January 1997    Third printing
July 1997       Fourth printing
January 1998    Fifth printing     Revised for Version 3 (Release 11)
September 2000  Sixth printing     Revised for Version 4 (Release 12)
June 2001       Seventh printing   Minor revisions (Release 12.1)
July 2002       Online only        Minor revisions (Release 13)
January 2003    Online only        Minor revisions (Release 13SP1)
June 2004       Online only        Revised for Version 4.0.3 (Release 14)
October 2004    Online only        Revised for Version 4.0.4 (Release 14SP1)
October 2004    Eighth printing    Revised for Version 4.0.4
March 2005      Online only        Revised for Version 4.0.5 (Release 14SP2)
March 2006      Online only        Revised for Version 5.0 (Release 2006a)
September 2006  Ninth printing     Minor revisions (Release 2006b)
March 2007      Online only        Minor revisions (Release 2007a)
September 2007  Online only        Revised for Version 5.1 (Release 2007b)

Acknowledgments

The authors would like to thank:

Joe Hicklin of The MathWorks for getting Howard into neural network research years ago at the University of Idaho, for encouraging Howard to write the toolbox, for providing crucial help in getting the first toolbox Version 1.0 out the door, for continuing to help with the toolbox in many ways, and for being such a good friend.

Roy Lurie of The MathWorks for his continued enthusiasm for the possibilities for Neural Network Toolbox.

Jim Tung of The MathWorks for his long-term support for this project.

Liz Callanan of The MathWorks for getting us off to such a good start with Neural Network Toolbox Version 1.0.

Pascal Gahinet of The MathWorks for helping us craft a good schedule for Neural Network Toolbox Releases SP3 and SP4.

Madan Bharadwaj of The MathWorks for his help with planning, demos, and gecks, for getting user feedback, and for helping with many other toolbox matters.

Ronelle Landy of The MathWorks for help with gecks and other programming issues.

Mark Haseltine of The MathWorks for his help with the BaT system and for keeping us on track with conference calls.

Rajiv Singh of The MathWorks for his help with gecks and BaT problems.

Bill Balint of The MathWorks for his help with gecks.

Matthew Simoneau of The MathWorks for his help with demos and test suite routines, for getting user feedback, and for helping with other toolbox matters.

Jane Carmody of The MathWorks for editing help and for always being at her phone to help with documentation problems.

Lisl Urban, Peg Theriault, Christi-Anne Plough, and Donna Sullivan of The MathWorks for their editing and other help with the Mac document.

Elana Person and Jane Price of The MathWorks for getting constructive user feedback on the toolbox document and its graphical user interface.

Susan Murdock of The MathWorks for keeping us honest with schedules.

Sean McCarthy of The MathWorks for his many questions from users about the toolbox operation.

Orlando De Jesús of Oklahoma State University for his excellent work in developing and programming the dynamic training algorithms described in Chapter 6, "Dynamic Networks," and in programming the neural network controllers described in Chapter 7, "Control Systems."

Bernice Hewitt for her wise New Zealand counsel, encouragement, and tea, and for the company of her cats Tiny and Mr. Britches.

Joan Pilgram for her business help, general support, and good cheer.

Leah Knerr for encouraging and supporting Mark.

Teri Beale for her encouragement and for taking care of Mark's three greatest inspirations, Valerie, Asia, and Drake, while he worked on this toolbox.

Martin Hagan, Howard Demuth, and Mark Beale for permission to include various problems, demonstrations, and other material from Neural Network Design, January, 1996.

Neural Network Design Book

Neural Network Toolbox authors have written a textbook, Neural Network Design (Hagan, Demuth, and Beale, ISBN 0-9717321-0-8). The book presents the theory of neural networks, discusses their design and application, and makes considerable use of MATLAB and Neural Network Toolbox. Demonstration programs from the book are used in various chapters of this user's guide. (You can find all the book demonstration programs in Neural Network Toolbox by typing nnd, as shown below.)

This book can be obtained from John Stovall at (303) 492-3648, or by e-mail at John.Stovall@colorado.edu.

The book has:

- An Instructor's Manual for those who adopt the book for a class
- Transparency Masters for class use

If you are teaching a class and want an Instructor's Manual (with solutions to the book exercises), contact John Stovall at (303) 492-3648, or by e-mail at John.Stovall@colorado.edu.

To look at sample chapters of the book and to obtain Transparency Masters, go directly to the Neural Network Design page at

http://hagan.okstate.edu/nnd.html

Once there, you can obtain sample book chapters in PDF format, and you can download the Transparency Masters by clicking "Transparency Masters (3.6MB)." You can get the Transparency Masters in PowerPoint or PDF format.
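For example, the book demonstrations can be launched straight from the MATLAB prompt. This is a minimal sketch; it assumes only that Neural Network Toolbox is installed, so that the nnd command mentioned above is on the path:

    % Open the menu of Neural Network Design book demonstrations
    % that ships with Neural Network Toolbox.
    nnd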

Contents

1  Getting Started
    What Are Neural Networks?
    Fitting a Function
        Using Command-Line Functions
        Using the Neural Network Fitting Tool GUI
    Using the Documentation
    Neural Network Applications
        Applications in this Toolbox
        Business Applications

2  Neuron Model and Network Architectures
    Neuron Model
        Simple Neuron
        Transfer Functions
        Neuron with Vector Input
    Network Architectures
        A Layer of Neurons
        Multiple Layers of Neurons
        Input and Output Processing Functions
    Data Structures
        Simulation with Concurrent Inputs in a Static Network
        Simulation with Sequential Inputs in a Dynamic Network
        Simulation with Concurrent Inputs in a Dynamic Network
    Training Styles
        Incremental Training (of Adaptive and Other Networks)
        Batch Training
        Training Tip

3  Perceptrons
    Introduction
        Important Perceptron Functions
    Neuron Model
    Perceptron Architecture
    Creating a Perceptron (newp)
    Simulation (sim)
    Initialization (init)
    Learning Rules
    Perceptron Learning Rule (learnp)
    Training (train)
    Limitations and Cautions
        Outliers and the Normalized Perceptron Rule
    Graphical User Interface
        Introduction to the GUI
        Create a Perceptron Network (nntool)
        Train the Perceptron
        Export Perceptron Results to Workspace
        Clear Network/Data Window
        Importing from the Command Line
        Save a Variable to a File and Load It Later

4  Linear Filters
    Introduction
    Neuron Model
    Network Architecture
        Creating a Linear Neuron (newlin)
    Least Mean Square Error
    Linear System Design (newlind)
    Linear Networks with Delays
        Tapped Delay Line
        Linear Filter
    LMS Algorithm (learnwh)
    Linear Classification (train)
    Limitations and Cautions
        Overdetermined Systems
        Underdetermined Systems
        Linearly Dependent Vectors
        Too Large a Learning Rate

5  Backpropagation
    Introduction
    Solving a Problem
        Improving Results
    Under the Hood
    Architecture
        Feedforward Network
    Simulation (sim)
    Training
        Backpropagation Algorithm
    Faster Training
        Variable Learning Rate (traingda, traingdx)
        Resilient Backpropagation (trainrp)
        Conjugate Gradient Algorithms
        Line Search Routines
        Quasi-Newton Algorithms
        Levenberg-Marquardt (trainlm)
        Reduced Memory Levenberg-Marquardt (trainlm)
    Speed and Memory Comparison
        Summary
    Improving Generalization
        Early Stopping
        Index Data Division (divideind)
        Random Data Division (dividerand)
        Block Data Division (divideblock)
        Interleaved Data Division (dividerand)
        Regularization
        Summary and Discussion of Early Stopping and Regularization
    Preprocessing and Postprocessing
        Min and Max (mapminmax)
        Mean and Stand. Dev. (mapstd)
        Principal Component Analysis (processpca)
        Processing Unknown Inputs (fixunknowns)
        Representing Unknown or Don't Care Targets
        Posttraining Analysis (postreg)
    Sample Training Session
    Limitations and Cautions

6  Dynamic Networks
    Introduction
        Examples of Dynamic Networks
        Applications of Dynamic Networks
        Dynamic Network Structures
        Dynamic Network Training
    Focused Time-Delay Neural Network (newfftd)
    Distributed Time-Delay Neural Network (newdtdnn)
    NARX Network (newnarx, newnarxsp, sp2narx)
    Layer-Recurrent Network (newlrn)

7  Control Systems
    Introduction
    NN Predictive Control
        System Identification
        Predictive Control
        Using the NN Predictive Controller Block
    NARMA-L2 (Feedback Linearization) Control
        Identification of the NARMA-L2 Model
        NARMA-L2 Controller
        Using the NARMA-L2 Controller Block
    Model Reference Control
        Using the Model Reference Controller Block
    Importing and Exporting
        Importing and Exporting Networks
        Importing and Exporting Training Data

8  Radial Basis Networks
    Introduction
        Important Radial Basis Functions
    Radial Basis Functions
        Neuron Model
        Network Architecture
        Exact Design (newrbe)
        More Efficient Design (newrb)
        Demonstrations
    Probabilistic Neural Networks
        Network Architecture
        Design (newpnn)
    Generalized Regression Networks
        Network Architecture
        Design (newgrnn)

9  Self-Organizing and Learning Vector Quantization Nets
    Introduction
        Important Self-Organizing and LVQ Functions
    Competitive Learning
        Architecture
        Creating a Competitive Neural Network (newc)
        Kohonen Learning Rule (learnk)
        Bias Learning Rule (learncon)
        Training
        Graphical Example
    Self-Organizing Feature Maps
        Topologies (gridtop, hextop, randtop)
        Distance Functions (dist, linkdist, mandist, boxdist)
        Architecture
        Creating a Self-Organizing MAP Neural Network (newsom)
        Training (learnsom)
        Examples
    Learning Vector Quantization Networks
        Architecture
        Creating an LVQ Network (newlvq)
        LVQ1 Learning Rule (learnlv1)
        Training
        Supplemental LVQ2.1 Learning Rule (learnlv2)

10  Adaptive Filters and Adaptive Training
    Introduction
        Important Adaptive Functions
    Linear Neuron Model
    Adaptive Linear Network Architecture
        Single ADALINE (newlin)
    Least Mean Square Error
    LMS Algorithm (learnwh)
    Adaptive Filtering (adapt)
        Tapped Delay Line
        Adaptive Filter
        Adaptive Filter Example
        Prediction Example
        Noise Cancellation Example
        Multiple Neuron Adaptive Filters

11  Applications
    Introduction
        Application Scripts
    Applin1: Linear Design
        Problem Definition
        Network Design
        Network Testing
        Thoughts and Conclusions
    Applin2: Adaptive Prediction
        Problem Definition
        Network Initialization
        Network Training
        Network Testing
        Thoughts and Conclusions
    Appelm1: Amplitude Detection
        Problem Definition
        Network Initialization
        Network Training
        Network Testing
        Network Generalization
        Improving Performance
    Appcr1: Character Recognition
        Problem Statement
        Neural Network
        System Performance
        Summary

12  Advanced Topics
    Custom Networks
        Custom Network
        Network Definition
        Network Behavior
    Additional Toolbox Functions
    Custom Functions

13  Historical Networks
    Introduction
        Important Recurrent Network Functions
    Elman Networks
        Architecture
        Creating an Elman Network (newelm)
        Training an Elman Network
    Hopfield Network
        Fundamentals
        Architecture
        Design (newhop)

14  Network Object Reference
    Network Properties
        Architecture
        Subobject Structures
        Functions
        Parameters
        Weight and Bias Values
        Other
    Subobject Properties
        Inputs
        Layers
        Outputs
        Biases
        Input Weights
        Layer Weights

15  Functions — By Category
    Analysis Functions
    Distance Functions
    Graphical Interface Functions
    Layer Initialization Functions
    Learning Functions
    Line Search Functions
    Net Input Functions
    Network Initialization Function
    Network Use Functions
    New Networks Functions
    Performance Functions
    Plotting Functions
    Processing Functions
    Simulink Support Function
    Topology Functions
    Training Functions
    Transfer Functions
    Utility Functions
    Vector Functions
    Weight and Bias Initialization Functions
    Weight Functions
    Transfer Function Graphs

16  Functions — Alphabetical List

A  Mathematical Notation
    Mathematical Notation for Equations and Figures
        Basic Concepts
        Language
        Weight Matrices
        Bias Elements and Vectors
        Time and Iteration
        Layer Notation
