
COMPUTATIONAL PHYSICS
Morten Hjorth-Jensen
University of Oslo, Fall 2009

Preface

So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them. Richard Feynman, The Laws of Thermodynamics.

Why a preface you may ask? Isn't that just a mere exposition of a raison d'être of an author's choice of material, preferences, biases, teaching philosophy etc.? To a large extent I can answer in the affirmative. A preface ought to be personal. Indeed, what you will see in the various chapters of these notes represents how I perceive computational physics should be taught.

This set of lecture notes serves the scope of presenting to you, and training you in, an algorithmic approach to problems in the sciences, represented here by the unity of three disciplines: physics, mathematics and informatics. This trinity outlines the emerging field of computational physics.

Our insight in a physical system, combined with numerical mathematics, gives us the rules for setting up an algorithm, viz. a set of rules for solving a particular problem. Our understanding of the physical system under study is obviously gauged by the natural laws at play, the initial conditions, boundary conditions and other external constraints which influence the given system. Having spelled out the physics, for example in the form of a set of coupled partial differential equations, we need efficient numerical methods in order to set up the final algorithm. This algorithm is in turn coded into a computer program and executed on available computing facilities.
To develop such an algorithmic approach, you will be exposed to several physics cases, spanning from the classical pendulum to quantum mechanical systems. We will also present some of the most popular algorithms from numerical mathematics used to solve a plethora of problems in the sciences. Finally we will codify these algorithms using some of the most widely used programming languages, presently C++ and Fortran and its most recent standard Fortran 2003[1]. However, a high-level and fully object-oriented language like Python is now emerging as a good alternative, although C++ and Fortran still outperform Python when it comes to computational speed. In this text we offer an approach where one can write all programs in Python, C/C++ or Fortran. We will also show you how to develop large programs in Python interfacing C++ and/or Fortran functions for those parts of the program which are CPU intensive. Such an approach allows you to structure the flow of data in a high-level language like Python while tasks of a mere repetitive and CPU intensive nature are left to low-level languages like C++ or Fortran. Python also allows you to smoothly interface your program with other software, such as plotting programs or operating system instructions. A typical Python program you may end up writing contains everything from compiling and running your codes to preparing the body of a file for writing up your report.

Computer simulations are nowadays an integral part of contemporary basic and applied research in the sciences. Computation is becoming as important as theory and experiment. In physics, computational physics, theoretical physics and experimental physics are all equally important in our daily research and studies of physical systems. Physics is the unity of theory, experiment and computation[2]. Moreover, the ability "to compute" forms part of the essential repertoire of research scientists. Several new fields within computational science have emerged and strengthened their positions in the last years, such as computational materials science, bioinformatics, computational mathematics and mechanics, computational chemistry and physics and so forth, just to mention a few. These fields underscore the importance of simulations as a means to gain novel insights into physical systems, especially for those cases where no analytical solutions can be found or an experiment is too complicated or expensive to carry out. To be able to simulate large quantal systems with many degrees of freedom, such as strongly interacting electrons in a quantum dot, will be of great importance for future directions in novel fields like nanotechnology. This ability often combines knowledge from many different subjects, in our case essentially from the physical sciences, numerical mathematics, computing languages, topics from high-performance computing and some knowledge of computers.

In 1999, when I started this course at the department of physics in Oslo, computational physics and computational science in general were still perceived by the majority of physicists and scientists as topics dealing with just mere tools and number crunching, and not as subjects of their own. The computational background of most students enlisting for the course on computational physics could span from dedicated hackers and computer freaks to people who basically had never used a PC. The majority of undergraduate and graduate students had a very rudimentary knowledge of computational techniques and methods. Questions like 'do you know of better methods for numerical integration than the trapezoidal rule' were not uncommon. I do happen to know of colleagues who applied for time at a supercomputing centre because they needed to invert matrices of the size of 10^4 × 10^4 since they were using the trapezoidal rule to compute integrals.

[1] Throughout this text we refer to Fortran 2003 as Fortran, implying the latest standard. Fortran 2008 will only add minor changes to Fortran 2003.
[2] We mentioned previously the trinity of physics, mathematics and informatics. Viewing physics as the trinity of theory, experiment and simulations is yet another example. It is obviously tempting to go beyond the sciences. History shows that triunes, trinities and for example triple deities permeate the Indo-European cultures (and probably all human cultures), from the ancient Celts and Hindus to modern days. The ancient Celts revered many such triunes; their world was divided into earth, sea and air, nature was divided in animal, vegetable and mineral, and the cardinal colours were red, yellow and blue, just to mention a few.
With Gaussian quadrature this dimensionality was easily reduced to matrix problems of the size of 10^2 × 10^2, with much better precision.

Less than ten years later most students have now been exposed to a fairly uniform introduction to computers, basic programming skills and use of numerical exercises. Practically every undergraduate student in physics has now made a Matlab or Maple simulation of, for example, the pendulum, with or without chaotic motion. Nowadays most of you are familiar, through various undergraduate courses in physics and mathematics, with interpreted languages such as Maple, Matlab and/or Mathematica. In addition, the interest in scripting languages such as Python or Perl has increased considerably in recent years. The modern programmer would typically combine several tools, computing environments and programming languages. A typical example is the following. Suppose you are working on a project which demands extensive visualizations of the results. To obtain these results, that is to solve a physics problem like obtaining the density profile of a Bose-Einstein condensate, you need however a program which is fairly fast when computational speed matters. In this case you would most likely write a high-performance computing program using Monte Carlo methods in languages which are tailored for that. These are represented by programming languages like Fortran and C++. However, to visualize the results you would find interpreted languages like Matlab or scripting languages like Python extremely suitable for your tasks. You will therefore end up writing, for example, a script in Matlab which calls a Fortran or C++ program where the number crunching is done and then visualize the results of, say, a wave equation solver via Matlab's large library of visualization tools. Alternatively, you could organize everything into a Python or Perl script which does everything for you, calls the Fortran and/or C++ programs and performs the visualization in Matlab or Python.
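The precision gain behind the anecdote above is easy to demonstrate yourself. As an illustration (a sketch written for this preface, not a program from the text), the following Python snippet compares the composite trapezoidal rule with a composite two-point Gauss-Legendre rule for the integral of exp(x) over [0, 1]:

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n panels
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def gauss2(f, a, b, n):
    # composite two-point Gauss-Legendre rule with n panels;
    # on the reference interval [-1, 1] the nodes are +-1/sqrt(3)
    # and both weights equal 1
    t = 1.0 / math.sqrt(3.0)
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * h
        s += f(mid - 0.5 * h * t) + f(mid + 0.5 * h * t)
    return 0.5 * h * s

exact = math.e - 1.0  # integral of exp(x) on [0, 1]
print(abs(trapezoid(math.exp, 0.0, 1.0, 100) - exact))  # error roughly 1e-5
print(abs(gauss2(math.exp, 0.0, 1.0, 100) - exact))     # error roughly 1e-12
```

With the same number of function evaluations the Gauss rule is many orders of magnitude more accurate; even a handful of Gauss panels beats a hundred trapezoidal ones. This is precisely the effect behind the reduction from 10^4 to 10^2 mesh points mentioned above.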
Used correctly, these tools, spanning from scripting languages to high-performance computing languages and visualization programs, speed up your capability to solve complicated problems. Being multilingual is thus an advantage which not only applies to our globalized modern society but to computing environments as well. This text shows you how to use C++, Fortran and Python as programming languages.

[2] (continued) As a curious digression, it was a Gaulish Celt, Hilary, philosopher and bishop of Poitiers (AD 315-367), who in his work De Trinitate formulated the Holy Trinity concept of Christianity, perhaps in order to accommodate millennia of human divination practice.

There is however more to the picture than meets the eye. Although interpreted languages like Matlab, Mathematica and Maple allow you nowadays to solve very complicated problems, and high-level languages like Python can be used to solve computational problems, computational speed and the capability to write an efficient code are topics which still do matter. To this end, the majority of scientists still use languages like C++ and Fortran to solve scientific problems. When you embark on a master or PhD thesis, you will most likely meet these high-performance computing languages. This course thus emphasizes the use of programming languages like Fortran, Python and C++ instead of interpreted ones like Matlab or Maple. You should however note that there are still large differences in computer time between, for example, numerical Python and a corresponding C++ program for many numerical applications in the physical sciences, with a code in C++ being the fastest.

Computational speed is not the only reason for this choice of programming languages. Another important reason is that we feel that at a certain stage one needs to have some insights into the algorithm used, its stability conditions, possible pitfalls like loss of precision, ranges of applicability, the possibility to improve the algorithm and tailor it to special purposes etc. One of our major aims here is to present to you what we would dub 'the algorithmic approach', a set of rules for doing mathematics or a precise description of how to solve a problem. To devise an algorithm and thereafter write a code for solving physics problems is a marvelous way of gaining insight into complicated physical systems. The algorithm you end up writing reflects in essentially all cases your own understanding of the physics and the mathematics (the way you express yourself) of the problem. We do therefore devote quite some space to the algorithms behind various functions presented in the text.
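One classic pitfall of the kind we have in mind, treated in detail in chapter 2 under 'Algorithms for e^{-x}', is the evaluation of e^{-x} by its alternating Taylor series: for large x the huge intermediate terms are rounded, and those rounding errors swamp the tiny final answer. A minimal Python sketch (the function names are ours, chosen for this illustration):

```python
import math

def exp_minus_naive(x, terms=200):
    # sum the alternating series e^{-x} = sum_n (-x)^n / n! directly;
    # for x = 20 the terms grow to about 4e7 before decaying, so
    # rounding errors at that scale destroy the ~2e-9 answer
    s, term = 1.0, 1.0
    for n in range(1, terms):
        term *= -x / n
        s += term
    return s

def exp_minus_stable(x, terms=200):
    # sum the positive series for e^{x} and invert: no cancellation
    s, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        s += term
    return 1.0 / s

x = 20.0
print(exp_minus_naive(x))   # wrong at the ~1e-9 scale of the answer
print(exp_minus_stable(x))  # close to math.exp(-20), about 2.061e-09
```

Both routines do the same amount of work; only the second organizes it so that no large numbers of opposite sign are subtracted. Spotting such rearrangements requires exactly the algorithmic insight discussed above.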
Especially, insight into how errors propagate and how to avoid them is a topic we would like you to pay special attention to. Only then can you avoid problems like underflow, overflow and loss of precision. Such a control is not always achievable with interpreted languages and canned functions where the underlying algorithm and/or code is not easily accessible. Although we will at various stages recommend the use of library routines for, say, linear algebra[3], our belief is that one should understand what the given function does, at least to have a mere idea. With such a starting point, we strongly believe that it can be easier to develop more complicated programs on your own using Fortran, C++ or Python.

We have several other aims as well, namely:

– We would like to give you an opportunity to gain a deeper understanding of the physics you have learned in other courses. In most courses one is normally confronted with simple systems which provide exact solutions and mimic to a certain extent the realistic cases. Many are however the comments like 'why can't we do something else than the particle in a box potential?'. In several of the projects we hope to present some more 'realistic' cases to solve by various numerical methods. This also means that we wish to give examples of how physics can be applied in a much broader context than it is discussed in the traditional physics undergraduate curriculum.

– To encourage you to "discover" physics in a way similar to how researchers learn in the context of research.

– Hopefully also to introduce numerical methods and new areas of physics that can be studied with the methods discussed.

– To teach structured programming in the context of doing science.

– The projects we propose are meant to mimic to a certain extent the situation encountered during a thesis or project work. You will typically have at your disposal 2-3 weeks to solve numerically a given project. In so doing you may need to do a literature study as well. Finally, we would like you to write a report for every project.

[3] Such library functions are often tailored to a given machine's architecture and should accordingly run faster than user-provided ones.

Our overall goal is to encourage you to learn about science through experience and by asking questions. Our objective is always understanding, and the purpose of computing is further insight, not mere numbers! Simulations can often be considered as experiments. Rerunning a simulation need not be as costly as rerunning an experiment.

Needless to say, these lecture notes are upgraded continuously, from typos to new input. And we do always benefit from your comments, suggestions and ideas for making these notes better. It is through scientific discourse and criticism that we advance. Moreover, I have benefitted immensely from many discussions with fellow colleagues and students. In particular I must mention my colleague Torgeir Engeland, whose input through the last years has considerably improved these lecture notes.

Finally, I would like to add a petit note on referencing. These notes have evolved over many years and the idea is that they should end up in the format of a web-based learning environment for doing computational science. It will be fully free and hopefully represent a much more efficient way of conveying teaching material than traditional textbooks. I have not yet settled on a specific format, so any input is welcome. At present however, it is very easy for me to upgrade and improve the material on, say, a yearly basis, from simple typos to adding new material. When accessing the web page of the course, you will have noticed that you can obtain all source files for the programs discussed in the text. Many people have thus written to me about how they should properly reference this material and whether they can freely use it.
My answer is rather simple. You are encouraged to use these codes, modify them, include them in publications, thesis work, your lectures etc. As long as your use is part of the dialectics of science you can use this material freely. However, since many weekends have elapsed in writing several of these programs, testing them, sweating over bugs, swearing in front of a f*@?%g code which didn't compile properly ten minutes before Monday morning's eight o'clock lecture etc., I would dearly appreciate, in case you find these codes of any use, that you reference them properly. That can be done in a simple way: refer to M. Hjorth-Jensen, Lecture Notes on Computational Physics, University of Oslo (2009). The web link to the course should also be included. Hope it is not too much to ask for. Enjoy!

Contents

I Introduction to Numerical Methods in Physics

1 Introduction
  1.1 Choice of programming language
  1.2 Designing programs

2 Introduction to C++ and Fortran
  2.1 Introduction
  2.2 Getting started
    2.2.1 Scientific hello world
  2.3 Representation of integer numbers
    2.3.1 Fortran codes
    2.3.2 Python codes
  2.4 Real numbers and numerical precision
    2.4.1 Representation of real numbers
    2.4.2 Machine numbers
  2.5 Programming examples on loss of precision and round-off errors
    2.5.1 Algorithms for e^{-x}
    2.5.2 Fortran codes
    2.5.3 Further examples
  2.6 Additional features of C++ and Fortran
    2.6.1 Operators in C++
    2.6.2 Pointers and arrays in C++
    2.6.3 Macros in C++
    2.6.4 Structures in C++ and TYPE in Fortran
  2.7 Exercises and projects

3 Numerical differentiation
  3.1 Introduction
  3.2 Numerical differentiation
    3.2.1 The second derivative of e^x
    3.2.2 Error analysis
  3.3 Exercises and projects

4 Linear algebra
  4.1 Introduction
  4.2 Mathematical intermezzo
  4.3 Programming details
    4.3.1 Declaration of fixed-sized vectors and matrices
    4.3.2 Runtime declarations of vectors and matrices in C++
    4.3.3 Matrix operations and C++ and Fortran features of matrix handling
  4.4 Linear Systems
    4.4.1 Gaussian elimination
    4.4.2 LU decomposition of a matrix
    4.4.3 Solution of linear systems of equations
    4.4.4 Inverse of a matrix and the determinant
    4.4.5 Tridiagonal systems of linear equations
  4.5 Exercises and projects

5 Non-linear equations and roots of polynomials
  5.1 Introduction
  5.2 Iteration methods
  5.3 Bisection method
  5.4 Newton-Raphson's method
  5.5 The secant method and other methods
  5.6 Exercises and projects

6 Numerical interpolation, extrapolation and fitting of data
  6.1 Introduction
  6.2 Interpolation and extrapolation
    6.2.1 Polynomial interpolation and extrapolation
  6.3 Richardson's deferred extrapolation method
  6.4 Cubic spline interpolation

7 Numerical integration
  7.1 Introduction
  7.2 Newton-Cotes quadrature: equal step methods
  7.3 Adaptive integration
  7.4 Gaussian quadrature
    7.4.1 Orthogonal polynomials, Legendre
    7.4.2 Integration points and weights with orthogonal polynomials
    7.4.3 Application to the case N = 2
    7.4.4 General integration intervals for Gauss-Legendre
    7.4.5 Other orthogonal polynomials
    7.4.6 Applications to selected integrals
  7.5 Treatment of singular Integrals
  7.6 Scattering equation and principal value integrals
  7.7 Parallel computing
    7.7.1 Brief survey of supercomputing concepts and terminologies
    7.7.2 Parallelism
    7.7.3 MPI with simple examples
    7.7.4 Numerical integration with MPI

8 Outline of the Monte Carlo strategy
  8.1 Introduction
    8.1.1 Definitions
    8.1.2 First illustration of the use of Monte-Carlo methods, crude integration
    8.1.3 Second illustration, particles in a box
    8.1.4 Radioactive decay
    8.1.5 Program example for radioactive decay of one type of nucleus
    8.1.6 Brief summary
  8.2 Probability distribution functions
    8.2.1 Multivariable Expectation Values
    8.2.2 The central limit theorem
  8.3 Random numbers
    8.3.1 Properties of selected random number generators
  8.4 Improved Monte Carlo integration
    8.4.1 Change of variables
    8.4.2 Importance sampling
    8.4.3 Acceptance-Rejection method
  8.5 Monte Carlo integration of multidimensional integrals
    8.5.1 Brute force integration
    8.5.2 Importance sampling
  8.6 Exercises and projects

9 Random walks and the Metropolis algorithm
  9.1 Motivation
  9.2 Diffusion equation and random walks
    9.2.1 Diffusion equation
    9.2.2 Random walks
  9.3 Microscopic derivation of the diffusion equation
    9.3.1 Discretized diffusion equation and Markov chains
    9.3.2 Continuous equations
    9.3.3 ESKC equation and the Fokker-Planck equation
    9.3.4 Numerical simulation
  9.4 Entropy and Equilibrium Features
  9.5 The Metropolis algorithm and detailed balance
    9.5.1 Brief summary
  9.6 Exercises and projects

10 Monte Carlo methods in statistical physics
  10.1 Introduction and motivation
  10.2 Review of Statistical Physics
    10.2.1 Microcanonical Ensemble
    10.2.2 Canonical Ensemble
    10.2.3 Grand Canonical and Pressure Canonical
  10.3 Ising model and phase transitions in magnetic systems
    10.3.1 Theoretical background
  10.4 Phase Transitions and critical phenomena
    10.4.1 The Ising model and phase transitions
    10.4.2 Critical exponents and phase transitions from mean-field models
  10.5 The Metropolis algorithm and the two-dimensional Ising Model
    10.5.1 Parallelization of the Ising model
  10.6 Selected results for the Ising model
  10.7 Correlation functions and further analysis of the Ising model
    10.7.1 Thermalization
    10.7.2 Time-correlation functions
  10.8 The Potts' model
  10.9 Exercises and projects

11 Quantum Monte Carlo methods
  11.1 Introduction
  11.2 Postulates of Quantum Mechanics
    11.2.1 Mathematical Properties of the Wave Functions
    11.2.2 Important Postulates
  11.3 First Encounter with the Variational Monte Carlo Method
  11.4 Variational Monte Carlo for quantum mechanical systems
    11.4.1 First illustration of variational Monte Carlo methods
  11.5 Variational Monte Carlo for atoms
    11.5.1 The Born-Oppenheimer Approximation
    11.5.2 The hydrogen Atom
    11.5.3 Metropolis sampling for the hydrogen atom and the harmonic oscillator
    11.5.4 The helium atom
    11.5.5 Program example for atomic systems
    11.5.6 Helium and beyond
  11.6 The H_2 molecule
  11.7 Improved variational calculations
    11.7.1 Importance sampling
    11.7.2 Guiding Functions
    11.7.3 Energy and Variance Minimization
    11.7.4 Correlated Sampling
  11.8 Exercises and projects

12 Eigensystems
  12.1 Introduction
  12.2 Eigenvalue problems
  12.3 Similarity transformations
  12.4 Jacobi's method
  12.5 Diagonalization through the Householder's method for tridiagonalization
    12.5.1 The Householder's method for tridiagonalization
    12.5.2 Diagonalization of a tridiagonal matrix via Francis' algorithm
  12.6 The QR algorithm for finding eigenvalues
  12.7 Schrödinger's equation through diagonalization
    12.7.1 Numerical solution of the Schrödinger equation by diagonalization
    12.7.2 Program example and results for the one-dimensional harmonic oscillator
  12.8 Discussion of BLAS and LAPACK functionalities
  12.9 Exercises and projects

13 Differential equations
  13.1 Introduction
  13.2 Ordinary differential equations
  13.3 Finite difference methods
    13.3.1 Improvements of Euler's algorithm, higher-order methods
    13.3.2 Predictor-Corrector methods
  13.4 More on finite difference methods, Runge-Kutta methods
  13.5 Adaptive Runge-Kutta and multistep methods
  13.6 Physics examples
    13.6.1 Ideal harmonic oscillations
    13.6.2 Damping of harmonic oscillations and external forces
    13.6.3 The pendulum, a nonlinear differential equation
    13.6.4 Spinning magnet
  13.7 Physics Project: the pendulum
    13.7.1 Analytic results for the pendulum
