Evaluation And Application Of MIDAS V2 - NASA

Transcription

2001-01-2648
Evaluation and Application of MIDAS v2.0

Sandra G. Hart, NASA Ames Research Center, Moffett Field, CA
David Dahn, Micro Analysis & Design Inc., Boulder, CO
Adolph Atencio, US Army Aeroflightdynamics Directorate (Ret), NASA Ames Research Center, Moffett Field, CA
K. Michael Dalal, Raytheon STX Corporation, Moffett Field, CA

Copyright 2001 Society of Automotive Engineers, Inc.

ABSTRACT

Version 2.0 of the Man-Machine Integration Design and Analysis System (MIDAS) was released in 2001. It provides tools to describe an operating environment, mission, and equipment. User-defined goals, procedures, and knowledge interact with and are modified by models of perception, memory, situation awareness, and attention, and are constrained by the environment. Output of simulations that demonstrate or evaluate new capabilities or answer questions posed by customers is presented graphically and visually. MIDAS has been used to model different professions (soldiers in protective gear, air traffic controllers, astronauts, nuclear power plant operators, pilots), missions (e.g., flying, target designation, underwater exploration, police dispatch), and environments (e.g., battlefields, civil airspace, ocean floor, control rooms, low earth orbit). A recent independent evaluation of MIDAS v2.0 is reviewed.

INTRODUCTION

Development of the Man-Machine Integration Design and Analysis System (MIDAS) began in 1983 as a joint Army/NASA exploratory development effort. The program was primarily funded by the US Army Aeroflightdynamics Directorate and led by US Army scientists and engineers. E. James Hartzell led the program until 1995, followed by Barry Lakinsmith and R. Jay Shively. In 2001, responsibility was transferred to NASA's Human Factors Research and Technology Division.

It has been estimated that as much as 80% of the life-cycle cost of developing an aircraft is determined in the conceptual design phase; after hardware is built, it is difficult to correct errors or modify the design. MIDAS was conceived as a method of offering designers an opportunity to "see" their design being operated by a human model in a virtual environment and to ask "what if" questions about the impact of design alternatives and candidate procedures early in design (1), to prevent costly retrofits and inefficient training "band-aids" for clumsy designs. All too often in the past, human operators have had to compensate for poorly integrated subsystems and periods of high workload or stress.

The primary goal was to develop an engineering environment that contained the tools and models needed by crew station developers during the conceptual design phase. It would enable rapid prototyping and offer early integration and visualization of developing designs to foster communication among members of a design team. A secondary goal was to advance the capabilities and use of computational representations of human performance in design. As an early and ambitious pioneer, MIDAS has clearly achieved this objective. In addition, it was hoped that MIDAS would serve as a framework for integrating useful research results and models. A final goal was to transfer technologies to the larger community of industry, government, and university researchers and designers. This process continues through "commercialization" agreements.

Initial interest in computational human performance modeling tools such as MIDAS was prompted by a desire to make objective decisions about which tasks to automate and about minimum crew size for Army helicopters, commercial airliners, and Army tanks. These interests prompted early development of "resource" models and "workload" prediction capabilities, as operator workload was thought to be a limiting factor in system performance.

MIDAS has progressed through eight development phases, each culminating in a demonstration of new capabilities. The initial architecture was built in skeletal form with the expectation that it would be populated with new models, information, and tools. Initially, MIDAS consisted of a number of integrated workstations that contributed simulations of the mission, operator, and environment to the designer's workstation. Models of human performance and behavior were used to evaluate different aspects of the task or equipment design statically or dynamically. The component models and tools were coordinated by a discrete, scaled-time simulation. Unlike man-in-the-loop simulations, there was no requirement to operate in real time. The results of an analysis were presented graphically and/or visually in the form of a virtual simulation of a "manned" mission.

One form of MIDAS output supports scenario-independent analysis of crew station layout (e.g., visibility, legibility, reach) and is compatible with military standards. The other provides a dynamic simulation that can be visualized from different perspectives, with graphic activity traces, task load timelines, information requirements, and mission performance measures (2). Although the number of computers supporting MIDAS has decreased as computational speed and capability have increased, MIDAS is still hosted on a Silicon Graphics workstation, although C has replaced Common Lisp and FORTRAN.

As MIDAS matured, it offered an increasing number of tools for modelers at Ames and Beta test sites.
It has been used to model different types of operators (e.g., US Army personnel wearing protective gear, air traffic controllers, astronauts, nuclear power plant operators, helicopter and airline pilots, and 911 dispatchers), missions (e.g., flying, target designation, underwater exploration, police dispatch), and environments (e.g., nap-of-the-earth helicopter flight, civilian airspace, ocean floor, control rooms, low-earth orbit). Such simulations, some of which will be described below, were developed to demonstrate or evaluate new capabilities or to address questions posed by customers from NASA, the US Army, the US Navy, the Department of Energy, and the Richmond, CA Police Department.

The primary challenge faced by MIDAS developers then (and now) is the gap between what is known about human behavior from psychological models and laboratory research and the types of information and models needed to design and evaluate new procedures or a new system. The precise numerical predictions and neat algorithms that would allow designers to incorporate human factors considerations into developing systems did not (and still do not) exist. To some extent this is because humans are enormously complex, adaptive, and diverse. To a greater extent, however, it reflects many years of reliance on the time-honored practice of basing design decisions on judgment calls made by experienced engineers or test pilots. The great flaw of this approach is that relying on "lessons learned" from past successes or failures dooms designers to solving problems that have already happened while failing to anticipate problems that have not yet occurred.
A more objective and comprehensive assessment of design questions through computational design tools offers the possibility of discovering potential problems or interactions that have not yet been encountered.

Given the apparent dearth of models of human performance and behavior needed to populate MIDAS, the National Research Council Committee on Human Factors was approached in 1987 to offer guidance to the program. Following two years of deliberation and interaction with members of the NASA/Army program, the 12-member panel published their report (3). It offered a number of recommendations and suggested that progress was feasible in the areas of mission analysis, workload, visual scanning, detectability, and legibility, topics that did, in fact, form an early focus for MIDAS. Other challenges involved integration among component models and tools, especially those associated with variations in granularity, precision, and temporal resolution. Although no longer operating at the cutting edge of computer science, MIDAS taxed the capabilities of a room full of computers in its early years.

MIDAS was developed "in-house" by Army and NASA researchers and support service contractors. For example, Ms. Ranuka Shankar developed a scheduling model (4). Differences in the salience of stimuli are represented by a model of early attention, patterned after the work of Remington and Johnston (5). Mr. R. Jay Shively and his colleagues developed a situation awareness (SA) model (6). Extramural relationships with universities and private and government research organizations offered models, research results, and independent evaluation of component models.

Development of a 3D, dynamic anthropometric model to represent the human operator was an early and critical requirement. A grant to Dr. Norman Badler at the University of Pennsylvania (7) initiated the development of a realistic mannequin, Jack, that can be scaled from the 5th percentile woman to the 95th percentile man and placed within a 3D virtual workstation that can be viewed from the perspective of Jack's eyes. A model of binocular vision developed by Dr. Aries Arditi from the Lighthouse of New York was fully integrated with Jack's head position and point of regard and with a visibility model developed by Drs. James Bergen and Jeff Lubin from the SRI/David Sarnoff Research Center (8). A model of display layout principles was developed by Dr. Christopher Wickens at the University of Illinois (9). The task loading model, based on Wickens's Multiple Resource Model principles, was patterned after the approach developed by Mr. Theodore Aldrich (10).
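The style of task-loading computation described above can be illustrated with a minimal sketch. This is not MIDAS code: the channel names follow the Aldrich-style visual/auditory/cognitive/psychomotor scales, and the load values and overload threshold are hypothetical values chosen only for the example.

```python
# Illustrative sketch of an Aldrich/Multiple-Resource style task-loading
# computation. Channel names, load values, and the overload threshold are
# hypothetical examples, not values from MIDAS.

CHANNELS = ("visual", "auditory", "cognitive", "psychomotor")
OVERLOAD_THRESHOLD = 7.0  # nominal per-channel capacity on the rating scale

def channel_loads(active_tasks):
    """Sum the per-channel load of all concurrently active tasks."""
    totals = {ch: 0.0 for ch in CHANNELS}
    for task in active_tasks:
        for ch, load in task["loads"].items():
            totals[ch] += load
    return totals

def overloaded_channels(active_tasks):
    """Return the channels whose combined load exceeds the threshold."""
    totals = channel_loads(active_tasks)
    return [ch for ch, total in totals.items() if total > OVERLOAD_THRESHOLD]

# Two tasks performed at once: tracking the flight path while reading a display.
tracking = {"name": "track flight path",
            "loads": {"visual": 4.0, "cognitive": 2.0, "psychomotor": 4.6}}
reading = {"name": "read checklist display",
           "loads": {"visual": 5.0, "cognitive": 3.0}}

print(channel_loads([tracking, reading]))
print(overloaded_channels([tracking, reading]))  # -> ['visual']
```

The combined visual demand (9.0) exceeds the nominal capacity while the cognitive and psychomotor channels do not, which is the kind of per-channel timeline result the text attributes to MIDAS task load analyses.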

Decision-making behavior is represented at several levels in MIDAS, one of which follows Dr. Jens Rasmussen's distinction between skill-, rule-, and knowledge-based behaviors (11). Other tools were developed to facilitate modeling of 3D environments, controls, displays, and so on, using libraries of primitive objects and animation.

MIDAS APPLICATIONS

ARMY MISSION - The first model developed with the initial suite of MIDAS tools in 1985 was of a military mission performed by a Cobra AH-1 helicopter. The simulated attack on a convoy occurred over a DMA database of the Fulda Gap. Visualization of the helicopter's nap-of-the-earth flight was from a "God's-eye" perspective.

UNMANNED UNDERWATER VEHICLE - During the same time period, the US Navy requested the development of a MIDAS model of a notional unmanned vehicle that might seek out and destroy mines on the ocean floor. The goal was conceptual exploration of the ideas using graphic representations of the vehicle, environment, and mission. The MIDAS model represented the ocean floor (viewed from the perspectives of the vehicle's camera and the surface ship) and the slow-moving search mission over the ocean floor.

911 DISPATCH - In 1993, a Cooperative Research and Development Agreement (CRDA) was signed with Communications Research Company (CRC) to develop communication and navigation systems for emergency response vehicles and 911 dispatch stations. The Richmond police department partnered with the MIDAS team to study the procedures used in their 911 facility using a MIDAS representation of the 911 dispatch console geometry and to evaluate a prototype graphical dispatch decision aid under consideration. Analysis software on a laptop computer was used to record over 75 hours of data on the frequency, duration, and types of 911 dispatch operator activity at Richmond PD.
These data were used by MIDAS researchers to support several key design recommendations to CRC.

NUCLEAR POWER PLANT DESIGN - One of MIDAS' first applications examined advanced automation options for next-generation nuclear power plant consoles in collaboration with Westinghouse (12). The simulated mission involved diagnosis of a steam generator fault to compare performance and workload using the existing layout to one that offered an electronic checklist merged with one of the displays. The MIDAS simulation concentrated on the Senior Reactor Operator and his communications with other operators. Simulated interruptions not only delayed his "actions" but also interfered with his ability to "remember" where he had left off before the interruption. This was one of the first demonstrations of MIDAS' utility in a domain quite different from that for which it was developed, and it included a new class of events: interruptions. The results demonstrated the potential value of the electronic checklist but also suggested potential problems with the requirement to page between or scroll within displays to accomplish procedures.

AIR WARRIOR - The US Army initiated the Air Warrior program to develop and integrate an improved Mission Oriented Protective Posture (MOPP) ensemble for helicopter aircrews (Figure 1). At the time of the study, aircrew MOPP clothing and equipment was bulky, uncomfortable, restrictive, and not tailored to the needs of aircrew. There was a concern that flying with this equipment might degrade performance or flight safety. The MIDAS Air Warrior study was the most ambitious application that had been undertaken to date. It established baseline performance measurements with and without MOPP equipment (13). The AH-64D Longbow Apache cockpit was modeled in MIDAS using computer-aided design files supplied by the manufacturer. The activities performed by the co-pilot/gunner in the front seat of the helicopter were the focus of the simulation.
The co-pilot/gunner was modeled using the Jack human anthropometric modeling tool (7) and placed in the CAD representation of the cockpit. Each simulation contained over 400 discrete activities such as display fixations, control manipulations, and crew decisions. The investigation concentrated on the co-pilot's ability to access controls and displays during a simulated battlefield scenario that involved designating targets on the battlefield and preparing for weapons launch. Performance measures included task timelines and workload imposed by an air-to-ground attack scenario. Two human anthropometric sizes (by stature) were simulated, representing a 95th percentile male and a 5th percentile female. With the shoulder harness locked, the smallest-stature body failed to reach eight of nine critical cockpit switches, even with full seat adjustment. And the vision model predicted a visual field restriction due to the MOPP protective mask, a finding supported by pilot comments. Data supporting aggregate measures of performance were collected and reported, quantifying workload, exposure time, and total mission time effects from each simulation experiment permutation.

Figure 1: MOPP gear

SHORT HAUL CIVIL TILTROTOR (SHCT) - Tiltrotor aircraft combine the speed and range of a turboprop aircraft with the ability to take off and land in a vertical mode like a helicopter. These aircraft will transport passengers from city center to city center and from satellite airports to major hub airports. NASA has been evaluating a 40-passenger civil version with large bladed propellers (rotor blades) on nacelles that may be rotated from airplane mode to helicopter mode during landing. MIDAS was used to evaluate human performance issues related to crew procedures and pilot workload for a tiltrotor flying a steep, low-noise approach to a vertiport (14). The simulated scenario was of a normal approach

interrupted by a commanded go-around at the landing decision point. The simulation contrasted the use of an automated discrete nacelle control mode with a fully manual nacelle control mode for the go-around. The MIDAS simulation (Figure 2) showed that task loading was high during the approach and emergency go-around; the pilot's workload was near capacity throughout. The emergency go-around in manual nacelle mode was more demanding, resulting in additional time requirements to complete tasks.

Figure 2: Civil Tiltrotor cockpit

COMMERCIAL TRANSPORT OPERATIONS - Four simulations were conducted using the "Air-MIDAS" version in support of NASA's Advanced Air Transportation Technology (AATT) Program (15). Some of the issues studied included aircrews' processing of ATC clearances delivered via voice or data link, different flight path automation options, the relationship between clearance timing and flight-path fuel efficiency, and the influences of weather, traffic, and task interruptions. Sequences and timing of crew actions were derived from pilot interviews and data from a related simulator study. Distributions for task times were created, probabilities for various forms of interruption were added, and the entire study was exercised 100 times for each factor and level combination using a Monte Carlo approach. Since the demands of processing clearances were shared between the pilot and copilot, they were treated as a composite "operator". The results demonstrated progressive decrements in performance as clearances were issued closer to the desired descent point. Crews shifted from the flight management system to a simpler form of automation earlier in the descent when receiving voice clearances. Results from this simulation were precisely those needed by terminal area automation developers to define desired clearance issuance windows.
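The Monte Carlo procedure described above can be sketched as follows. This is an illustrative reconstruction, not Air-MIDAS code: the task names, time distributions, interruption probability, and recovery penalty are all assumed values for the example; only the overall design (sampled task times, probabilistic interruptions, 100 replications per condition) comes from the text.

```python
import random

# Minimal Monte Carlo sketch of the study design described in the text.
# Task-time distributions, interruption probability, and the recovery
# penalty are hypothetical illustration values, not Air-MIDAS parameters.

TASKS = {  # task name -> (mean, standard deviation) of task time, seconds
    "read clearance": (8.0, 2.0),
    "program FMS": (25.0, 6.0),
    "confirm with ATC": (6.0, 1.5),
}
P_INTERRUPT = 0.2        # chance that a given task is interrupted
INTERRUPT_PENALTY = 10.0  # extra seconds to recover from an interruption

def run_once(rng):
    """Simulate one composite 'operator' completing the clearance tasks."""
    total = 0.0
    for mean, sd in TASKS.values():
        t = max(0.0, rng.gauss(mean, sd))
        if rng.random() < P_INTERRUPT:
            t += INTERRUPT_PENALTY
        total += t
    return total

def run_condition(n_runs=100, seed=0):
    """Replicate one factor/level combination n_runs times; return the mean."""
    rng = random.Random(seed)
    times = [run_once(rng) for _ in range(n_runs)]
    return sum(times) / len(times)

print(f"mean completion time: {run_condition():.1f} s")
```

Varying the condition parameters (e.g., the interruption probability, or an offset representing how late the clearance arrives) and comparing the resulting completion-time distributions is the kind of analysis the study used to define clearance issuance windows.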
An extension of this application compared the predictions of the model with those from a piloted simulation conducted in the 747-400 simulator in the Crew-Vehicle Flight Simulation Facility at Ames (16). Four human flight crews and simulated pilots in MIDAS "flew" descent profiles with the same conditions of speed, crossing restriction, and distance to top of descent. The results demonstrated MIDAS' effectiveness in predicting flight-crew performance. Additional Air-MIDAS simulations have been performed to examine various implications of the "free flight" concept: constraints and requirements for controlled airspace, traffic alerting, and decision aids. Recommendations for the design of alerting systems required to maintain safe buffers around aircraft with self-separation were based on human performance constraints identified in MIDAS simulations. To meet the unique demands of modeling this environment, Air-MIDAS is operated without the visualization capabilities of MIDAS and has incorporated new functionality, such as support for multiple operators, auditory communications, and expectations.

SITUATION AWARENESS MODEL VALIDATION - Three studies were undertaken to evaluate the validity of improvements made to MIDAS during 1998 and 1999 (MIDAS v1.0), with particular focus on the vision and SA models. The work was performed in collaboration with the Israeli Air Force under the auspices of a Memorandum of Agreement (17). Following a successful comparison of MIDAS predictions to the results of a laboratory study, a part-task simulation was completed using the Rotorcraft Part Task Laboratory Simulator. Pilots flew simulated missions in an AH-64 Longbow Apache designed to create situations predicted to generate high and low situation awareness.
The mission of the co-pilot/gunner was to hover and use a helmet-mounted display (HMD) to view the battlefield and designate/identify objects with a computer keyboard. Visibility conditions could be good or bad; the local contrast of objects was either high or low; the number of non-target objects (clutter) was either high or low; and subjects were either prepared (briefed) or not prepared (not briefed) before each trial. Parallel MIDAS simulations produced results that were in the same direction as the human-in-the-loop results for visibility and briefing, but failed to simulate the effects of target contrast and clutter. Situation awareness results could not be compared due to differences in the measures used; MIDAS uses an analytical computation process, whereas a SAGAT-type subjective rating was used for the piloted simulation.

SHUTTLE UPGRADE - At the request of the NASA Johnson Space Center, a model of the current cockpit of the Shuttle was developed with the intention of comparing the current configuration to that proposed for a planned upgrade. Most of the instrumentation in the Shuttle is 1970s vintage. The caution and warning system is a particular problem, as it does not provide integrated failure information in a central location or succinct manner. The Shuttle Upgrade Program was initiated to develop an advanced orbiter cockpit with a human-computer interface designed to reduce workload through better display and control designs, improve crew SA, increase flight crew autonomy, and assist with or automate complex procedures. MIDAS was used to create a virtual rendition of the cockpit and conduct a baseline nominal ascent simulation, providing a

quantitative analysis of modifications in workload, SA, and timing. Unfortunately, the project was not carried further due to funding cuts and concern that MIDAS v2.0 had not yet been subjected to a formal verification and validation process.

MIDAS REDESIGN

Redesign and re-implementation of much of the existing MIDAS system began in 1996. Over the many years of development and ad-hoc extensions, MIDAS v1.0 had become very large and somewhat unwieldy from a software maintenance perspective, and contained much legacy code. The redesign had many goals, but the most significant included creating a cleanly designed system that combined the best of MIDAS v1.0 with fundamental enhancements to the human performance model, and a fully graphical user interface to enable "non-programmers", such as crewstation designers and cognitive modelers, to use MIDAS.

The Beta release of the new system was in 1999. The new system was built with an object-oriented design and development methodology and supported multiple human operators, modeled audition, computed situation awareness, and allowed high-level task abstraction. Most important, it had a graphical interface that eliminated the need for programming, with the notable exception of the process required to create operator procedures. For the first time since its inception, MIDAS models could be developed by people other than its programming staff. MIDAS v2.0, which runs on Silicon Graphics computers, has been steadily enhanced and applied to several modeling projects since its first release (it is still in Beta release mode). This section describes the current state of the system.

HUMAN OPERATOR MODEL - The human operator model is the central feature of MIDAS and simulates human behavior at both physical and cognitive levels. Its cognitive components include sensory input (visual and auditory), decision making, memory, attention, situation awareness, and output behavior (motor control). The relationships among these components are illustrated in Figure 3. The physical aspect of human behavior is simulated with anthropometric models that provide animated views of the body. Because all of these models are described in detail in the MIDAS User's Manual (18) and the human performance model overview (19), they will be summarized only briefly, with the exception of the decision-making model.

Figure 3: Human Operator Model structure

SENSORY INPUT - The operator obtains visual information about objects in the world from their symbolic representation (i.e., attributes that are attached to the objects), along with information about the objects' surroundings such as ambient lighting. The vision model differentiates between peripheral and foveal vision. While foveal vision involves fixation and attention on a specific object, peripheral vision allows the modeling of significant visual distractions that may divert the operator's attention temporarily or indefinitely. MIDAS v2.0 conceptually differentiates interior (within the crew station) and exterior (outside the crew station) vision. For interior vision, the operator is assumed to have a mental representation of the equipment he interacts with, including its location and function. In contrast, an operator must recognize or identify an exterior object before he can reason about it. Perception of exterior vision takes place in three stages: detection, recognition, and identification, and the perception level attained is dependent on various factors defined in the MIDAS perceivability model, a component of the vision model.

Auditory perception occurs only within the crew station (hearing of exterior sounds is not yet supported, unless the sound is channeled through equipment contained within the crew station representation, such as a speaker). Auditory signals and speech messages are perceived in two stages, detection and comprehension, and cannot be partially comprehended; if a listening task is interrupted, the entire content will be lost.

DECISION MAKING - In sharp contrast to MIDAS v1.0, which required specification of all of an operator's activities at the primitive task level, human behavior is now specified in a much more abstract way using a high-level scripting language called the Operator Procedure Language, or OPL, which serves as the front end to a reactive planning system (20, 21). As its name implies, the central construct of this language is the procedure, which represents an atomic unit of the operator's knowledge. Procedures can be thought of as instructions for accomplishing a task. Much as procedures in a conventional programming language (technically, OPL is a programming language), OPL procedures can take input (arguments) and invoke (call) other procedures. Such idioms fit naturally into human procedure modeling, as illustrated by the following example of OPL.

Table 1: Procedural representation of turning an ignition key

(procedure (turn-ignition)
  (task (move-right-hand ignition))
  (task (turn-clockwise-with-right-hand ignition)))

(procedure (move-right-hand obj)
  (task (move-effector-primitive right-hand obj)))

(procedure (turn-clockwise-with-right-hand obj)
  (task (turn-object-primitive right-hand obj clockwise)))

The procedures listed in Table 1 model the turning of an ignition key. The first procedure might be scenario-specific and hence written by the MIDAS user. The latter two exemplify library procedures known by simulated MIDAS operators. They also call primitive procedures (e.g., move-effector-primitive, turn-object-primitive). A primitive procedure specifies a basic task that is not decomposed into other OPL procedures but rather executed directly by an action (e.g., a physical task such as reaching) or in memory (e.g., remembering a fact or performing a computation). This example illustrates only simple, sequential behavior and primitive action, but OPL includes constructs for modeling more complex activities, such as selecting between alternatives, repetition, waiting (e.g., passively monitoring for a perceived condition), and concurrent tasks.

MEMORY - The current memory model in MIDAS v2.0 is simple. It is essentially a database of assertions, or beliefs. A belief is represented as a symbolic expression that usually denotes the property of an object, as illustrated by the examples in Table 2. Memory can be examined in powerful ways by means of a querying language built into OPL. For example, one can easily write a procedure for reporting to headquarters about all instances of blue vehicles seen in a given area.

Table 2: Examples of memory representations

(color car1 blue)
(raining? false)
(wipers-on? false)
(aircraft (style rotorcraft) (location (waypoint 3)))

ATTENTION - The MIDAS attention model is based on Wickens's Multiple Resource Theory (9). It acts as a mediator that maintains an account of attentional resources in six different "channels".
Two channels pertain to encoding (visual and auditory input), two to cognitive processing (spatial and verbal), and the remaining two to responding (motor and voice output). Before a primitive task is initiated, the necessary attentional resources must be secured from the model. If sufficient resources are not available, task performance may be degraded. The load on each channel is computed using a matrix of resource coefficients, which were estimated using Multiple Resource Theory.

SITUATION AWARENESS - The SA model is based on a computational representation developed by Jay Shively (6). It computes two quantities: actual SA and perceived SA. The actual SA of an operator is defined as the proportion of situational elements that he knows relative to the situational elements that he would know in the ideal state. Perceived SA differs from actual SA only in that it does not include elements of which the operator has no knowledge. Situational elements can be either specific objects in the crew station or environment that define a "situation" or operationally pertinent information in the operator's memory.

OUTPUT BEHAVIOR - Output behavior is regulated by a motor control process. If required resources are available, a motor activity that corresponds to a primitive procedure is created. Both the operator's physical actions and their effects on equipment and/or environment objects are modeled. Activities such as manipulating equipment, fixating on an object, or making a speech utterance are all supported as primitive motor outputs. There are about 30 such primitive tasks available in the Procedure Library. Each task has load values defined for each of the six resource channels (10).

ANTHROPOMETRIC MODEL - Used primarily for visualization, the anthropometric model provides a 3D animated graphical representation of the modeled human operator. There are two anthropometric models available in MIDAS.
Jack, a product of Unigraphics Solutions, Inc., is a full-body figure with advanced capabilities and very realistic movement (7). Because the use of Jack requires a runtime license, a simpler anthropometric model, consisting of just a head and hands, is included with MIDAS v2.0. From a modeling perspective, both are equivalent in terms of the simulation results that will be produced.

CREWSTATION/EQUIPMENT MODEL - A crew station, such as an aircraft cockpit or nuclear power plant control room, is the essential component of a MIDAS simulation. A "crew station" defines a collection of equipment components with which operators can interact. There are several kinds of equipment component models, including discrete-state components and continuous components. Discrete-state components are used to model equipment whose behavior is characterized by a finite number of distinct states, such as toggle switches. A discrete-state component is, in essence, a finite state machine with the added ability to send messages to other components upon state changes. Continuous components are used to represent equipment that has a continuous range of values.
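The discrete-state component described above (a finite state machine that notifies other components when its state changes) can be sketched as follows. This is an illustrative Python reconstruction, not MIDAS code; the component names, states, and message format are assumptions made for the example.

```python
# Illustrative sketch of a discrete-state equipment component: a finite
# state machine that sends messages to connected components on each
# state change. Not MIDAS code; states and message format are invented.

class DiscreteStateComponent:
    def __init__(self, name, states, initial, transitions):
        # transitions maps (current_state, event) -> next_state
        self.name = name
        self.states = states
        self.state = initial
        self.transitions = transitions
        self.listeners = []  # components notified on state changes

    def connect(self, listener):
        self.listeners.append(listener)

    def handle(self, event):
        """Apply an event; on a valid transition, notify all listeners."""
        key = (self.state, event)
        if key in self.transitions:
            old, self.state = self.state, self.transitions[key]
            for listener in self.listeners:
                listener.receive(self.name, old, self.state)
        return self.state

class Annunciator:
    """A toy downstream component that records the messages it receives."""
    def __init__(self):
        self.messages = []

    def receive(self, source, old_state, new_state):
        self.messages.append(f"{source}: {old_state} -> {new_state}")

# A two-position toggle switch wired to an annunciator panel.
switch = DiscreteStateComponent(
    "master-arm", states=("off", "on"), initial="off",
    transitions={("off", "toggle"): "on", ("on", "toggle"): "off"})
panel = Annunciator()
switch.connect(panel)
switch.handle("toggle")
print(panel.messages)  # ['master-arm: off -> on']
```

The message-passing hook is what distinguishes the component from a plain finite state machine: flipping the switch propagates its new state to every connected piece of equipment, which is the inter-component behavior the text describes.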
