Distributed Receding Horizon Control of Multiagent Systems


Distributed Receding Horizon Control of Multiagent Systems

Thesis by
William B. Dunbar

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

California Institute of Technology
Pasadena, California
2004
(Defended April 6, 2004)

© 2004 William B. Dunbar. All Rights Reserved.

Acknowledgements

I would like to thank my advisor and committee chair, Prof. Richard Murray, for his guidance, funding and enthusiasm for research. Thanks to Prof. Jerry Marsden for inviting me to come to Caltech, for being on my committee and for being the patriarch of the CDS department. Prof. Jeff Shamma was a great help in completing the work in this dissertation, and I thank him for his friendship and being on my committee. Also, thanks to Prof. Jason Hickey for reading group discussions and being on my committee.

Thanks very much to the other graduate students, postdocs and visiting professors in the CDS department who have helped me to mature as a researcher. In particular: Dr. Mark Milam, for demanding quality and providing CDS with NTG; Prof. Nicolas Petit, for also demanding quality and inviting me to Ecole des Mines; Dr. Reza Olfati-Saber, for teaching me aspects of his research and for our collaborations. I especially thank the crew of my past office mates, fellow system administrators, and the department staff for making the administrative life of every CDS student easy.

I would not be here if it wasn't for my family, and I would not be able to stay if it weren't for my beautiful wife, Rebekah. This work is dedicated to my wife and the glorious memories I have of Mike Finch.

Abstract

Multiagent systems arise in several domains of engineering. Examples include arrays of mobile sensor networks for aggregate imagery, autonomous highways, and formations of unmanned aerial vehicles. In these contexts, agents are governed by vehicle dynamics and often constraints, and the control objective is achieved by cooperation. Cooperation refers to the agreement of the agents to 1) have a common objective with neighboring agents, with the objective typically decided offline, and 2) share information online to realize the objective. To be viable, the control approach for multiagent systems should be distributed, for autonomy of the individual agents and for scalability and improved tractability over centralized approaches.

Optimization-based techniques are suited to multiagent problems, in that such techniques can admit very general objectives. Receding horizon control is an optimization-based approach that is applicable when dynamics and constraints on the system are present. Several researchers have recently explored the use of receding horizon control to achieve multi-vehicle objectives. In most cases, the common objective is formulated, and the resulting control law implemented, in a centralized way.

This dissertation provides a distributed implementation of receding horizon control with guaranteed convergence and performance comparable to a centralized implementation. To begin with, agents are presumed to be individually governed by heterogeneous dynamics, modelled by a nonlinear ordinary differential equation. Coupling between agents occurs in a generic quadratic cost function of a single optimal control problem. The distributed implementation is generated by decomposition of the single optimal control problem into local problems, and the inclusion of local compatibility constraints in each local problem. The coordination requirements are globally synchronous timing and local information exchanges between neighboring agents. For sufficiently fast update times, the distributed implementation is proven to be asymptotically stabilizing. Extensions for handling inter-agent coupling constraints and partially synchronous timing are also explored. The venue of multi-vehicle formation stabilization demonstrates the efficacy of the implementation in numerical experiments. Given the generality of the receding horizon control mechanism, there is great potential for the implementation presented here in dynamic and constrained distributed systems.

Contents

Acknowledgements
Abstract
1 Introduction
  1.1 Literature Review
    1.1.1 Receding Horizon Control
    1.1.2 Multi-Vehicle Coordinated Control
    1.1.3 Decentralized Optimal Control and Distributed Optimization
  1.2 Dissertation Outline
2 Receding Horizon Control
  2.1 Introduction
  2.2 Receding Horizon Control
  2.3 Implementation Issues
    2.3.1 Computational Delay
    2.3.2 Robustness to Uncertainty
    2.3.3 Relaxing Optimality
  2.4 Summary
3 Receding Horizon Control of a Flight Control Experiment
  3.1 Introduction
  3.2 Flight Control Experiment
    3.2.1 Hardware
    3.2.2 Software
    3.2.3 Model of the Ducted Fan
  3.3 Application of Receding Horizon Control
    3.3.1 Receding Horizon Control Formulation
    3.3.2 Timing Setup
  3.4 LQR Controller Design
  3.5 Results
    3.5.1 Comparison of Response for Different Horizon Times
    3.5.2 LQR vs. Receding Horizon Control
  3.6 Summary
4 Distributed Receding Horizon Control
  4.1 Introduction
  4.2 Outline of the Chapter
  4.3 Structure of Centralized Problem
  4.4 Distributed Optimal Control Problems
  4.5 Distributed Implementation Algorithm
  4.6 Stability Analysis
  4.7 Summary
5 Analysis of Distributed Receding Horizon Control
  5.1 Introduction
  5.2 Interpretation of the Distributed Receding Horizon Control Law
    5.2.1 Comparison with Centralized Implementations
    5.2.2 Effect of Compatibility Constraint on Closed-Loop Performance
    5.2.3 Distributed Implementation Solves a Modified Centralized Problem
  5.3 Alternative Formulations
    5.3.1 Dual-Mode Distributed Receding Horizon Control
    5.3.2 Alternative Exchanged Inter-Agent Information
    5.3.3 Alternative Compatibility Constraint
  5.4 Extensions of the Theory
    5.4.1 A General Coupling Cost Function
    5.4.2 Inter-Agent Coupling Constraints
    5.4.3 Locally Synchronous Timing
  5.5 Summary
6 Receding Horizon Control of Multi-Vehicle Formations
  6.1 Introduction
  6.2 Formation Stabilization Objective
  6.3 Optimal Control Problems
    6.3.1 Centralized Receding Horizon Control
    6.3.2 Distributed Receding Horizon Control
  6.4 Numerical Experiments
    6.4.1 Centralized Implementation
    6.4.2 Distributed Implementation
  6.5 Alternative Description of Formations
  6.6 Summary
7 Extensions
  7.1 Introduction
  7.2 Relevant Areas of Research
    7.2.1 Parallel and Distributed Optimization
    7.2.2 Optimal Control and Neighboring Extremals
    7.2.3 Multiagent Systems in Computer Science
    7.2.4 Other Areas
  7.3 Potential Future Applications
    7.3.1 Mobile Sensor Networks
    7.3.2 Control of Networks
  7.4 Summary
8 Conclusions
  8.1 Summary of Main Results
  8.2 Summary of Future Research
A Basic Lemmas
Bibliography

List of Figures

3.1 Caltech ducted fan experiment: (a) full view, (b) close-up view.
3.2 Ducted fan experimental setup.
3.3 Planar model of the ducted fan.
3.4 Ducted fan experimental setup with receding horizon and inner-loop controller.
3.5 Receding horizon input trajectories, showing implementation after delay due to computation.
3.6 Moving one-second average of computation time for receding horizon control implementation with different horizon times.
3.7 Response of receding horizon control laws to 6-meter offset in horizontal position x for different horizon times.
3.8 Position tracking for receding horizon control law with a 6-second horizon.
3.9 Response of LQR and receding horizon control to 6-meter offset in horizontal position x.
6.1 Seven-vehicle formation: vector structure on the left, and resulting formation on the right.
6.2 Fingertip formation response in position space using centralized receding horizon control.
6.3 Centralized receding horizon control law time history for vehicle 3.
6.4 Fingertip formation response using distributed receding horizon control without compatibility constraints (κ ).
6.5 Distributed receding horizon control law time history for vehicle 3, without compatibility constraints (κ ).
6.6 Fingertip formation response using distributed receding horizon control and position compatibility constraints.
6.7 Distributed receding horizon control law time history for vehicle 3, using position compatibility constraints.
6.8 Fingertip formation response using distributed receding horizon control and control compatibility constraints.
6.9 Distributed receding horizon control law time history for vehicle 3, using control compatibility constraints.
6.10 Fingertip formation response using distributed receding horizon control, assuming neighbors continue along straight-line paths at each update and without enforcing any compatibility constraints (κ ).
6.11 Distributed receding horizon control law time history of vehicle 3 for update periods: (a) δ = 0.5, (b) δ = 0.1.
6.12 Comparison of tracking performance of centralized (CRHC) and two distributed (DRHC) implementations of receding horizon control. DRHC 1 denotes the distributed implementation corresponding to the theory, with control compatibility constraints, and DRHC 2 denotes the implementation with no compatibility constraints and neighbors assumed to apply zero control.
6.13 Trajectories of a six-vehicle formation: (a) the evolution and the path of the formation, (b) snapshots of the evolution of the formation (note: the two cones at the sides of each vehicle show the magnitudes of the control inputs).
6.14 Control inputs applied by each vehicle for the purpose of tracking information: (a) controls of vehicles 1 through 3, (b) controls of vehicles 4 through 6.

Chapter 1

Introduction

Multiagent system is a phrase used here to describe a general class of systems comprised of autonomous agents that cooperate to meet a system-level objective. Cooperation refers to the agreement of the agents to 1) have a common objective with other agents, with the objective typically decided offline, and 2) share information online to realize the objective. Agents are autonomous in that they are individually capable of sensing their environment and possibly other agents, communicating with other agents, and computing and implementing control actions to meet their portion of the objective. An inherent property of multiagent systems is that they are distributed, by which we mean that each agent must act autonomously based on local information exchanges with neighboring agents.

Examples of multiagent systems arise in several domains of engineering. Examples of immediate relevance to this dissertation include arrays of mobile sensor networks for aggregate imagery, autonomous highways, and formations of unmanned aerial vehicles. In these contexts, agents are governed by vehicle dynamics and often constraints. Constraints can arise in the individual agents, e.g., bounded control inputs for each vehicle. Constraints that couple agents can also be inherent to the system or specified as part of the objective, e.g., collision avoidance constraints between neighboring cars on an automated freeway. By design, such engineered multiagent systems should require little central coordination, as the communication issues in distributed environments often preclude such coordination. Moreover, a distributed control solution enables autonomy of the individual agents and offers scalability and improved

tractability over centralized approaches.

Optimization-based techniques for control are well suited to multiagent problems in that such techniques can admit very general cooperative objectives. To be practically implementable in these systems, however, agents must be able to accommodate the computational requirements associated with optimization-based techniques. Receding horizon control in particular is an optimization-based approach that is applicable when dynamics and constraints on the system are present, making it relevant for the multi-vehicle examples discussed above. Moreover, recent experiments reviewed in this dissertation have shown successful real-time receding horizon control of a thrust-vectored flight vehicle.

Several researchers have recently explored the use of receding horizon control to achieve multi-vehicle objectives. In most cases, the common objective is formulated, and the resulting control law implemented, in a centralized way. This dissertation provides a distributed implementation of receding horizon control with guaranteed convergence and performance comparable to a centralized implementation.

To begin with, agents are presumed to have decoupled dynamics, modelled by nonlinear ordinary differential equations. Coupling between agents occurs in a generic quadratic cost function of a single optimal control problem. The distributed implementation is first generated by decomposition of the single optimal control problem into local optimal control problems. A local compatibility constraint is then incorporated in each local optimal control problem. The coordination requirements are globally synchronous timing and local information exchanges between neighboring agents. For sufficiently fast update times, the distributed implementation is proven to be asymptotically stabilizing. Extensions for handling coupling constraints and partially synchronous timing are also explored.
The venue of multi-vehicle formation stabilization is used for conducting numerical experiments, which demonstrate comparable performance between centralized and distributed implementations. Other potential multiagent applications, in which agents are not necessarily vehicles, are discussed in the final portion of this dissertation.
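As a rough illustration of the coordination pattern just described — globally synchronous updates, local information exchange, and a compatibility constraint on each local problem — the sketch below uses single-integrator agents with a one-step horizon, so each local problem has a closed-form minimizer and the exchanged "plan" reduces to a scalar. All names, the dynamics, and the clipping form of the compatibility constraint are simplifying assumptions made here for illustration, not the formulation developed in the dissertation.

```python
def local_update(x_i, u_prev, nbr_states, nbr_offsets, r=1.0, kappa=0.5):
    """One agent's local problem: minimize
        sum_j (x_i + u - (x_j + d_ij))**2 + r*u**2
    over the scalar input u (closed form), then enforce the compatibility
    constraint |u - u_prev| <= kappa, i.e., the new plan may not deviate
    too far from the plan previously announced to the neighbors."""
    n = len(nbr_states)
    targets = [xj + d for xj, d in zip(nbr_states, nbr_offsets)]
    u_star = (sum(targets) - n * x_i) / (n + r)   # unconstrained minimizer
    return max(u_prev - kappa, min(u_star, u_prev + kappa))

def distributed_rhc(x, neighbors, offsets, steps=100):
    """Globally synchronous implementation: at every update, each agent
    exchanges information with its neighbors, solves its local problem in
    parallel, and all agents apply the first portion of their plans."""
    plans = [0.0] * len(x)
    for _ in range(steps):
        new_plans = [local_update(x[i], plans[i],
                                  [x[j] for j in neighbors[i]],
                                  offsets[i])
                     for i in range(len(x))]
        x = [xi + ui for xi, ui in zip(x, new_plans)]  # simultaneous moves
        plans = new_plans
    return x

# Three agents seeking a line formation with unit spacing (d_ij = desired x_i - x_j).
neighbors = [[1], [0, 2], [1]]
offsets = [[-1.0], [1.0, -1.0], [1.0]]
final = distributed_rhc([0.0, 3.0, 1.0], neighbors, offsets)
print(final[1] - final[0], final[2] - final[1])  # both spacings approach 1.0
```

The clipping step stands in for the compatibility constraint: it bounds how far an agent's newly computed plan can deviate from what its neighbors assumed of it, which is the mechanism that makes sufficiently fast updates stabilizing in the analysis that follows.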

1.1 Literature Review

This section provides a review of relevant literature on receding horizon control, multi-vehicle coordination problems, decentralized optimal control and distributed optimization.

1.1.1 Receding Horizon Control

In receding horizon control, the current control action is determined by solving online, at each sampling instant, a finite horizon optimal control problem. Each optimization yields an optimal control trajectory, also called an optimal control plan, and the first portion of the plan is applied to the system until the next sampling instant. The sampling period is typically much smaller than the horizon time, i.e., the planning period. The resample-and-replan action provides feedback to mitigate uncertainty in the system. A typical source of uncertainty is the mismatch between the model of the system, used for planning, and the actual system dynamics. "Receding horizon" gets its name from the fact that the planning horizon, which is typically fixed, recedes ahead in time with each update. Receding horizon control is also known as model predictive control, particularly in the chemical process control community. "Model predictive" gets its name from the use of the model to predict the system behavior over the planning horizon at each update.

As receding horizon control does not mandate that the control law be pre-computed, it is particularly useful when offline computation of such a law is difficult or impossible. Receding horizon control is not a new approach, and has traditionally been applied to systems where the dynamics are slow enough to permit a sampling rate amenable to optimal control computations between samples, e.g., chemical process plants. These systems are usually governed by strict constraints on states, inputs and/or combinations of both.
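The solve–apply–resample cycle described above can be sketched in a few lines. The following is a minimal illustration only, not anything from the dissertation: it uses a scalar linear system with quadratic cost so that the finite horizon optimal control problem can be solved exactly by a backward Riccati recursion; the function names and all numerical values are assumptions made here.

```python
def solve_finite_horizon_ocp(x0, a, b, q, r, qf, N):
    """Plan N steps ahead for x+ = a*x + b*u with stage cost q*x^2 + r*u^2
    and terminal cost qf*x^2, via an exact backward Riccati recursion."""
    p, gains = qf, []
    for _ in range(N):
        k = (b * p * a) / (r + b * p * b)  # feedback gain at this stage
        p = q + a * p * (a - b * k)        # cost-to-go weight one stage earlier
        gains.append(k)
    gains.reverse()                        # order gains from stage 0 to N-1
    plan, x = [], x0
    for k in gains:                        # roll forward to get the open-loop plan
        u = -k * x
        plan.append(u)
        x = a * x + b * u
    return plan

def receding_horizon(x0, a, b, q, r, qf, N, steps):
    """At each sampling instant: replan over the (receding) horizon,
    apply only the first portion of the plan, then resample and repeat."""
    x, history = x0, [x0]
    for _ in range(steps):
        plan = solve_finite_horizon_ocp(x, a, b, q, r, qf, N)
        x = a * x + b * plan[0]            # first input applied; rest discarded
        history.append(x)
    return history

# Unstable plant (a > 1) regulated to the origin by replanning at every step.
traj = receding_horizon(x0=5.0, a=1.2, b=1.0, q=1.0, r=0.1, qf=10.0, N=8, steps=30)
print(abs(traj[-1]) < 1e-6)  # True: the closed loop converges
```

For nonlinear or constrained models the closed-form recursion gives way to a numerical optimization that must terminate within each sampling period, which is precisely where the computational demand of the approach arises.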
With the advent of cheap and ubiquitous computational power, it has become possible to apply receding horizon control to systems governed by faster dynamics that warrant this type of solution.

There are two main advantages of receding horizon control:

Generality – the ability to include generic models, linear and nonlinear, and constraints in the optimal control problem. In fact, receding horizon control is the only method in control that can handle generic state and control constraints;

Reconfigurability – the ability to redefine cost functions and constraints as needed to reflect changes in the system and/or the environment.

Receding horizon control is also easy to describe and understand, relative to other control approaches. However, a list of disadvantages can also be identified:

Computational Demand – the requirement that an optimization algorithm must run and terminate at every update of the controller is, for obvious reasons, prohibitive;

Theoretical Conservatism – proofs of stability of receding horizon control are complicated by the implicit nature of the closed-loop system. As a consequence, conditions that provide theoretical results are usually sufficient and not necessary.

Based on the considerations above, one should be judicious in determining if receding horizon control is appropriate for a given system. For example, if constraints are not a dominating factor, and other control approaches are available, the computational demand of receding horizon control is often a reasonable deterrent from its application. Regarding multiagent systems, it is the generality of the receding horizon approach that motivated this dissertation. The cooperative objective of multiagent systems can be hard to mold into the framework of many other control approaches, even when constraints are not a dominating factor.

Since the results in this dissertation are largely theoretical, a review of theoretical results in receding horizon control is now given. The thorough survey paper by Mayne et al. [46] on receding horizon control of nonlinear and constrained systems is an excellent starting point for any research in this area. Therein, attention is restricted to literature in which dynamics are modelled by differential equations, leaving out a

large part of the process control literature where impulse and step response models are used.

The history of receding horizon control machinery is quite the opposite of other control design tools. Prior to any theoretical foundation, applications (specifically in process control) made this machinery a multi-million dollar industry. A review of such applications is given by Qin and Badgwell in [61]. As it was designed and developed by practitioners, early versions did not automatically ensure stability, thus requiring tuning. Not until the 1990s did researchers begin to give considerable attention to proving stability.

Receding horizon control of constrained systems is nonlinear, warranting the tools of Lyapunov stability theory. The control objective is typically to steer the state to the origin, i.e., stabilization, where the origin is assumed to be an equilibrium point of the system. The optimal cost function, also known as the value function, is almost universally used as a Lyapunov function for stability analysis in current literature. There are several variants of the open-loop optimal control problem employed. These include enforcing a terminal equality constraint, a terminal inequality constraint, using a terminal cost function alone, and more recently the combination of a terminal inequality constraint and terminal cost function. Enforcing a terminal inequality constraint is equivalent to requiring that the state arrive in a set, i.e., the terminal constraint set, by the end of the planning horizon. The terminal constraint set is a neighborhood of the origin, usually in the interior of any other sets that define additional constraints on the state. Closed-loop stability is generally guaranteed by enforcing properties on the terminal cost and terminal constraint set.

The first result proving stability of receding horizon control with a terminal equality constraint was by Chen and Shaw [9], a paper written in 1982 for nonlinear continuous time-invariant systems.
Another original work, by Keerthi and Gilbert [38] in 1988, employed a terminal equality constraint on the state for time-varying, constrained, nonlinear, discrete-time systems. A continuous-time version is detailed in Mayne and Michalska [45]. As this type of constraint is too computationally taxing and increases the chance of numerical infeasibility, researchers looked for relaxations

that would still guarantee stability.

The version of receding horizon control utilizing a terminal inequality constraint provides some relaxation. In this case, the terminal cost is zero and the terminal set is a subset of the state constraint set containing the origin. A local stabilizing controller is assumed to exist and is employed inside the terminal set. The idea is to steer the state to the terminal set in finite time via receding horizon control and then switch to the stabilizing controller. This is sometimes referred to as dual-mode receding horizon control. Michalska and Mayne [49] proposed this method for constrained, continuous, nonlinear systems using a variable horizon time. It was found later in multiple studies that there is good reason to incorporate a terminal cost. Specifically, it is generally possible to set the terminal cost to be exactly or approximately equal to the infinite horizon value function in a suitable neighborhood of the origin. In the exact case, the advantages of an infinite horizon (stability and robustness) are achieved in the neighborhood. However, except for unconstrained cases, a terminal cost alone has not proven to be sufficient to guarantee stability. This motivated work to incorporate the combination of a terminal cost (for performance) and a terminal constraint set (for stability).

Most recent receding horizon controllers use a terminal cost and enforce a terminal constraint set. These designs generally fall within one of two categories: either the constraint is enforced directly in the optimization or it is implicitly enforced by appropriate choice of terminal cost and horizon time [32].
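The variants discussed above are all instances of one generic open-loop problem solved at each update time t. As a sketch, with generic symbols chosen here for illustration rather than drawn from a particular chapter:

```latex
% Finite horizon optimal control problem solved at each update time t:
\begin{align*}
  \min_{u(\cdot)}\quad
    & \int_{t}^{t+T} L\bigl(x(s), u(s)\bigr)\, ds \;+\; V\bigl(x(t+T)\bigr) \\
  \text{subject to}\quad
    & \dot{x}(s) = f\bigl(x(s), u(s)\bigr), \qquad x(t)\ \text{given}, \\
    & u(s) \in \mathcal{U}, \quad x(s) \in \mathcal{X}, \quad s \in [t, t+T], \\
    & x(t+T) \in \Omega .
\end{align*}
```

In this notation, the terminal equality constraint corresponds to $\Omega = \{0\}$ with $V \equiv 0$; the dual-mode scheme uses $V \equiv 0$ with $\Omega$ the set inside which the local stabilizing controller takes over; and the designs in the two categories above combine a nonzero terminal cost $V$ with the terminal constraint set $\Omega$, enforced either directly or implicitly.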
The former category is advocated in this dissertation, drawing largely on the results by Chen and Allgöwer [11], who address receding horizon control of nonlinear and constrained continuous-time systems.

This dissertation also examines issues that arise when applying receding horizon control in real time to systems with dynamics of considerable speed, namely the Caltech ducted fan flight control experiment. As stated, development and application of receding horizon control originated in process control industries where plants being controlled are sufficiently slow to permit its implementation. This was motivated by the fact that the economic operating point of a typical process lies at the intersection

of constraints. Applications of receding horizon control to systems other than process control problems have begun to emerge over recent years, e.g., [73] and [67].

Recent work on distributed receding horizon control includes Jia and Krogh [33], Motee and Sayyar-Rodsari [54] and Acar [1]. In all of these papers, the cost is quadratic and separable, while the dynamics are discrete-time, linear, time-invariant and coupled. Further, state and input constraints are not included, aside from a stability constraint in [33] that permits state information exchanged between the agents to be delayed by one update period. In another work, Jia and Krogh [34] solve a min-max problem for each agent, where again coupling comes in the dynamics and the neighboring agent states are treated as bounded disturbances. Stability is obtained by contracting each agent's state constraint set at each sample period, until the objective set is reached. As such, stability does not depend on information updates with neighboring agents, although such updates may improve performance. More recently, Keviczky et al. [39] have formulated a distributed model predictive scheme where each agent optimizes locally for itself and every neighbor at each update. By this formulation, feasibility becomes difficult to ensure, and no proof of stability is provided. The authors also consider a hierarchical scheme, similar to that in [47], where the scheme depends on a particular interconnection graph structure (e.g., no cycles are permitted).

The results of this dissertation will be of use to the receding horizon control community, particularly those interested in distributed applications. The distributed implementation with guaranteed convergence presented here is the first of its kind, particularly in the ability to handle generic nonlinear dynamics and constraints. The implementation and stability analysis are leveraged by the cooperation between the subsystems.
A critical assumption on the structure of the overall system is that the subsystems are dynamically decoupled. As part of our future research, the extension to handle coupled subsystem dynamics will be explored.

1.1.2 Multi-Vehicle Coordinated Control

Multi-vehicle coordination problems are new and challenging, with isolated problems having been addressed in various fields of engineering. Probably the field that contains the most recent research related to the multi-vehicle coordination problem is robotics. An example is the application of hybrid control to formations of robots [20]. In this paper, nonholonomic kinematic robots are regulated to precise relative locations in a leader(s)/follower setting, possibly in the presence of obstacles. Other recent studies that involve coordinating multiple robots include [70], [36], [65], [69], [22]. All of these studies have in common the fact that the robots exhibit kinematic rather than kinetic behavior. Consequently, control and decision algorithms need not consider the real-time update constraint necessary to stabilize vehicles that have inertia.

Space systems design and engineering is another area that also addresses this type of problem. Specifically, clusters of microsatellites, when coordinated into an appropriate formation, may perform high-resolution, synthetic aperture imaging. The goal is to exceed the image resolution that is possible with current single (larger) satellites. Strict constraints on fuel efficiency and maneuverability, i.e., low-thrust capability, of each microsatellite must be accounted for in the control objectives for the problem to be practically meaningful. There are various approaches to solving variants of the microsatellite formation and reconfiguration problem (see [37, 47] and references therein). In [53], the nonlinear trajectory generation (NTG) software package used in this dissertation is applied to the microsatellite formation flying problem. This paper utilizes diff
