Eye Tracking in 360: Methods, Challenges, and Opportunities

Transcription

Eye Tracking in 360: Methods, Challenges, and Opportunities
Instructors: Eakta Jain, University of Florida; Olivier Le Meur, IRISA Rennes and Univ. of Rennes 1

Introduction: Eakta Jain. PhD, Carnegie Mellon University. Currently Assistant Professor, University of Florida. Research interests: compelling virtual avatars; recording and understanding attention and perception.

Introduction: Olivier Le Meur. PhD, University of Nantes (France); HDR (French post-doctoral degree), University of Rennes 1. Associate Professor, University of Rennes 1. Team leader: PERCEPT / IRISA. More than 10 years at Technicolor R&D. Research interests: computational modelling of visual attention; image processing (quality, inpainting, HDR).

Special thanks to Brendan John. PhD student, University of Florida; NSF Graduate Research Fellow. Research interests: VR, eye tracking.

Why is this topic relevant to VR/AR? Foveated rendering. Gaze as input: objects react to being looked at, interactive narratives.

Datasets for Saliency Modeling / Head Orientation Prediction

[Slide shows excerpts from Nguyen, Yan, and Nahrstedt, "Your Attention is Unique," ACM Multimedia 2018: traditional 2D saliency models carry a central bias and mishandle equirectangular frames, where objects anywhere along the equator, not just at the frame center, can attract attention, and where multiple objects can confuse models trained on single-viewport video. Lacking an eye tracker in the HMD, the authors follow prior work in using head orientation as a proxy for eye gaze (the head tends to follow the eyes to preserve the eye-resting position): timestamped head orientations, stored as quaternions, are converted to 3D unit vectors to extract fixations and build a panoramic saliency dataset from two public 360-degree head movement datasets.]

Applications: video streaming. Saeik Firdose, Pietro Lungaro, and Konrad Tollmar, "Demonstration of Gaze-Aware Video Streaming Solutions for Mobile VR," IEEE VR 2018.

More on this in Part 3.
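
The quaternion-to-vector step described above can be sketched as follows. This is an illustrative implementation, not the cited authors' code; it assumes a w-first quaternion and a -Z "forward" reference direction, both of which vary across datasets.

```python
# Illustrative only: convert a head-orientation quaternion sample into
# a 3D unit "gaze proxy" vector, as in the dataset construction above.
import numpy as np

def quaternion_to_direction(w, x, y, z, forward=(0.0, 0.0, -1.0)):
    """Rotate the reference 'forward' vector by the unit quaternion."""
    q = np.array([w, x, y, z], dtype=float)
    q /= np.linalg.norm(q)  # re-normalize to guard against drift
    w, x, y, z = q
    # Standard rotation matrix derived from a unit quaternion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    v = R @ np.asarray(forward, dtype=float)
    return v / np.linalg.norm(v)
```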

Redirected Walking. Towards Virtual Reality Infinite Walking: Dynamic Saccadic Redirection. Patney et al., ACM SIGGRAPH 2018.

Social VR: Eye Movements for Avatars. Perceptual Adjustment of Eyeball Rotation and Pupil Size Jitter for Virtual Characters. Sophie Jörg, Andrew Duchowski, Krzysztof Krejtz, and Anna Niedzielska. ACM Trans. Appl. Percept. 15, 4, Article 24 (October 2018). Guiding Gaze: Expressive Models of Reading and Face Scanning. Andrew Duchowski, Sophie Jörg, Jaret Screws, Nina Gehrer, Michael Schoenenberg, Krzysztof Krejtz. ETRA 2019, Denver, CO, to appear.

User Engagement. Raiturkar et al., Decoupling Light Reflex from Pupillary Dilation to Measure Emotional Arousal in Videos, ACM SAP 2016. John et al., An Evaluation of Pupillary Light Response Models for 2D Screens and VR HMDs, ACM VRST 2018.

IEEE VR Recent Activity!
- Chen, Shu-Yu, et al. "Real-time 3D Face Reconstruction and Gaze Tracking for Virtual Reality." 2018.
- S. Grogorick, G. Albuquerque, and M. Magnor. "Gaze Guidance in Immersive Environments." 2018.
- Mei, Chao, et al. "Towards Joint Attention Training for Children with ASD: a VR Game Approach and Eye Gaze Exploration." 2018.
- Volonte, Matias, et al. "Empirical Evaluation of Virtual Human Conversational and Affective Animations on Visual Attention in Inter-Personal Simulations." 2018.
- Alghofaili, Rawan, et al. "Optimizing Visual Element Placement in Virtual Environments via Visual Attention Analysis." 2019.
- Hu et al. "SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction." 2019.
- Mardanbegi et al. "EyeSeeThrough: Unifying Tool Selection and Application in Virtual Environments." 2019.

Expected Value to Audience
- Intended for a VR audience unfamiliar with eye tracking
- Who want to quickly gain a working understanding of eye tracking
- Towards goals such as:
  - Should they invest in an eye tracker?
  - Should they propose to collect eye tracking data in their next proposal?
  - Collecting eye tracking data for the very first time because their adviser got funded for it (or asked them to collect some pilot data so they could get funding for it)

Organization and Learning Objectives
Topic: Part 1: Basic understanding of the eye (focus on parameters relevant to eye tracking in VR) [Jain]
Learning Objectives:
1. Define the basic eye movements
2. Define vergence-accommodation conflict
3. Explain the difference between foveation and perception
4. Explain the difference between gaze in head, head in world, and gaze in world data

Organization and Learning Objectives
Topic: Part 2: Methods for collecting eye tracking data, including sample protocols and pitfalls to avoid [Jain]
Learning Objectives:
1. Compare and contrast classes of eye trackers
2. Design a data collection protocol
3. Report the relevant parameters for the eye tracker, calibration, and validation in the Methods section of a paper

Organization and Learning Objectives
Topic: Part 3: Methods to generate saliency maps from eye tracking data [Le Meur]
Learning Objectives:
1. Explain why 2D saliency map methods need to be generalized for omnidirectional viewing
2. Discuss the pros and cons of the selected methods
3. Compare the performance of different methods using standard metrics
4. Computational saliency models for 360 images

Let’s begin!
Topic: Part 1: Basic understanding of the eye (focus on parameters relevant to eye tracking in VR) [Jain]
Learning Objectives:
1. Define the basic eye movements
2. Define vergence-accommodation conflict
3. Explain the difference between foveation and perception
4. Explain the difference between gaze in head, head in world, and gaze in world data

Anatomy of the Eye. A Series of Anatomical Plates: The Structure of the Different Parts of the Human Body, by Jones Quain, M.D., 1854.

Anatomy of the Eye. Image credit: Wikimedia.

Anatomy of the Eye. [Diagram: line of sight; foveal region, roughly 1°-5°.]

Eye Movements
- Saccades: rapid, ballistic eye movements that shift the fovea (30-50 ms). Perception is attenuated during a saccade.
- Fixations (between saccades) are when the eye is "stationary" (~200 ms). Patterns of saccades and fixations are typical of tasks, e.g., reading, search.
- Vergence: the eyes converge so that the object is on the fovea of each eye. May be initiated by disparity cues (object not on the fovea for one of the eyes) or accommodation cues (presence of blur in one of the eyes).
Source: Vision Science by Palmer (1999).
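
One common way to operationalize the saccade/fixation distinction in recorded data is a velocity-threshold (I-VT) classifier. The sketch below is a minimal illustration, not part of the tutorial material; the 30 deg/s threshold is a conventional default that would need tuning per tracker and task.

```python
# Minimal I-VT sketch: label each inter-sample interval as saccade or
# fixation from unit gaze direction vectors and timestamps.
import numpy as np

def classify_ivt(directions, timestamps_s, threshold_dps=30.0):
    d = np.asarray(directions, dtype=float)
    t = np.asarray(timestamps_s, dtype=float)
    # Angle between consecutive unit vectors, in degrees.
    dots = np.clip(np.sum(d[:-1] * d[1:], axis=1), -1.0, 1.0)
    velocity = np.degrees(np.arccos(dots)) / np.diff(t)
    return np.where(velocity > threshold_dps, "saccade", "fixation")
```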

Eye Movements
- Smooth Pursuit: track a moving object. If a moving object were not tracked, its image would be "smeared" across the retina: a poor evolutionary choice! [Hold head still and move finger.]
- Physiological Nystagmus: tiny tremors that cause the retinal image to never be still. If removed, the retinal image "fades away."
- Vestibulo-Ocular Reflex (VOR): the eye moves to stay fixated on an object when the head or body is rotated. Initiated by the vestibular system. [Hold finger still and move head.] VOR is much quicker and more accurate than pursuit movements.
Source: Vision Science by Palmer (1999).

Eye Movements
- Other parts of the eye move too: the pupil and the eyelids.
- Pupil diameter changes are recorded by eye trackers.
- Eyelid movement (we can think of this as blinks) is identified as points where the pupil is not fully visible, rather than by tracking the eyelid itself.
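
Since blinks are typically inferred from intervals where the pupil is not detected, a minimal detector might scan the pupil-diameter stream for runs of missing samples. This sketch assumes missing samples are reported as NaN; vendors differ (some report zeros or separate validity flags).

```python
# Sketch of blink detection as runs of missing pupil samples.
import numpy as np

def blink_spans(pupil_diameter_mm, min_samples=3):
    """Return (start, end) index spans where the pupil was not detected."""
    missing = np.isnan(np.asarray(pupil_diameter_mm, dtype=float))
    spans, start = [], None
    for i, is_missing in enumerate(missing):
        if is_missing and start is None:
            start = i
        elif not is_missing and start is not None:
            if i - start >= min_samples:  # ignore single dropped samples
                spans.append((start, i))
            start = None
    if start is not None and len(missing) - start >= min_samples:
        spans.append((start, len(missing)))
    return spans
```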

Pop Quiz!
Topic: Part 1: Basic understanding of the eye (focus on parameters relevant to eye tracking in VR)
Learning objective: Define the basic eye movements
Humans are effectively blind during this type of eye movement:
(a) Fixation
(b) Saccade
Answer: (b) Saccade

Vergence. Adapted from Shibata et al. (2011), The Zone of Discomfort: Predicting Visual Discomfort with Stereo Displays, Journal of Vision.

Vergence-Accommodation Conflict. Koulieris et al. (2017), SIGGRAPH: Accommodation and Comfort in Head Mounted Displays.
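
As a worked example of the geometry behind the conflict: the vergence demand for a target at distance d is 2·arctan(IPD / 2d), while accommodation in most HMDs stays locked to a fixed focal plane. The numbers below assume a typical 63 mm interpupillary distance and are illustrative only.

```python
# Vergence demand vs. distance, assuming a 63 mm IPD.
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight for a target at distance_m."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

print(vergence_angle_deg(0.5))  # ~7.2 deg: virtual object at 0.5 m
print(vergence_angle_deg(2.0))  # ~1.8 deg: object at 2 m (a common HMD focal distance)
```

When the virtual object sits at 0.5 m but the display's focal plane is at 2 m, the eyes converge for 0.5 m while accommodating to 2 m; that mismatch is the conflict.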

Pop Quiz!
Topic: Part 1: Basic understanding of the eye (focus on parameters relevant to eye tracking in VR)
Learning objective: Define vergence-accommodation conflict
Vergence-accommodation conflict occurs when:
(a) The stereo depth of the object being looked at is further than the screen
(b) The stereo depth of the object being looked at is the same as the screen
Answer: (a)

Looking versus Seeing. Rubin's Vase.

Looking ≠ Understanding. I can be looking at a math equation for a long time without understanding it! [Slide shows a complicated equation.]

"It's an eye tracker, not a mind reader." -- Andrew Duchowski. (I said that in the context of marketing studies... but I've been wrong before; we now have the notion of user intent.)

User being eye tracked while recalling an image; retrieve the image from a dataset of matching images. Wang et al., The Mental Image Revealed by Gaze Tracking, CHI 2019.

VR Relevant Parameters
- Rotation within socket (Gaze in Head): eyes rotated within the head's coordinate frame.
- Rotating head (Head in World): head rotated within the global coordinate frame.
- 3D Point of Regard (Gaze in World).
[Diagram: coordinate axes for each frame.]
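
The relationship between these frames is a rotation composition: applying the head-in-world rotation to a gaze-in-head direction yields gaze in world. A minimal sketch, assuming a scalar-last (x, y, z, w) quaternion for the HMD pose:

```python
# Compose head pose with eye-in-head gaze to get gaze in world.
import numpy as np
from scipy.spatial.transform import Rotation

def gaze_in_world(head_in_world_quat_xyzw, gaze_in_head_dir):
    """HMD pose quaternion + eye-in-head unit vector -> world-frame gaze direction."""
    head_rotation = Rotation.from_quat(head_in_world_quat_xyzw)
    return head_rotation.apply(np.asarray(gaze_in_head_dir, dtype=float))
```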

Pop Quiz!
Topic: Part 1: Basic understanding of the eye (focus on parameters relevant to eye tracking in VR)
Learning objective: Explain the difference between gaze in head, head in world, and gaze in world data
What is the difference between gaze in head and gaze in world orientations?
(a) The coordinate frame with respect to which it is measured
(b) Gaze in head is always larger
Answer: (a) Coordinate frame

Break

Organization and Learning Objectives
Topic: Part 2: Methods for collecting eye tracking data, including sample protocols and pitfalls to avoid [Jain]
Learning Objectives:
1. Compare and contrast classes of eye trackers
2. Design a data collection protocol
3. Report the relevant parameters for the eye tracker, calibration, and validation in the Methods section of a paper

What is an eye tracker? "A device that measures eye position and eye movements."

What is an eye tracker? Then: caps fixed directly to the eye.
[Slide shows pages from Yarbus (1967), Chapter I: Methods. Suction caps (the P4, P5, P6, and P8 devices) were attached to the eye by reduced pressure; a small mirror on the cap reflected a beam of light onto photosensitive paper or film, and the recording apparatus comprised a stand, chin rest, light sources, and a control panel. Figure captions: "Fig. 17. The P5 cap."; "Fig. 21. The apparatus used in recording eye movements."; "Fig. 24. Position of lid held by strips of adhesive plaster in work with all caps except type P1."]

[Slide shows Yarbus (1967), Chapter VII: Eye Movements During Perception of Complex Objects. Fig. 109: seven records of eye movements by the same subject viewing a reproduction of the "unexpected visitor" painting, each lasting 3 minutes, under different instructions: 1) free examination; 2) estimate the material circumstances of the family; 3) give the ages of the people; 4) surmise what the family had been doing before the visitor's arrival; 5) remember the clothes worn by the people; 6) remember the positions of the people and objects in the room; 7) estimate how long the visitor had been away. Fig. 110: record of eye movements for 3 minutes during free examination, in seven consecutive parts of about 25 seconds each. The eye fixates elements useful and essential for perception; elements it does not fixate do not contain such information.]

What is an eye tracker? Now: optical tracking using IR cameras. [Diagram: participant seated in front of a screen, with a remote eye tracker below the screen.]

What is an eye tracker? Now: optical tracking using IR cameras.
[Slide shows a page from an anonymized manuscript on 360 saliency map methods in free viewing, with "Figure 1: Experimental setup." The study of attention and eye movements in traditional 2D content is well established (Duchowski 2007); experimental practices and standards exist for 2D, but remain an open problem for 360 eye tracking.]

Exploratory Alternatives
- Whitmire et al., EyeContact: Scleral Coil Eye Tracking for Virtual Reality. Scleral search coils (SSC) remain the gold standard for high-resolution eye tracking (calibrated accuracy ~0.1°, ~1 kHz temporal resolution) but traditionally require room-sized generator coils; EyeContact clips the coil-tracking hardware to an HMD.
- Li, Liu, and Zhou, Ultra-Low Power Gaze Tracking for Virtual Reality (LiGaze), ACM SenSys 2017. LiGaze replaces cameras and active IR emitters with a few low-cost photodiodes around the VR lens that measure reflected screen light; it infers a 3D gaze vector with lightweight regression, reports 6.3° and 10.1° mean accuracy against a commercial tracker, consumes 791 µW, and can be powered by a credit-card-sized solar cell harvesting indoor light.

Exploratory Alternatives. [Pipeline: input image (IR camera) to convolutional network to gaze vector.] NVGaze: Anatomy-aware Augmentation for Low-Latency, Near-Eye Gaze Estimation. Stengel, Kim, Majercik, De Mello, McGuire, Laine, Luebke (2019).

Compare and Contrast

Device                        | Eye Image Resolution | Sample Rate (Hz) | Cost (USD)
7invensun                     | -                    | 120              | 200
FOVE VR HMD                   | 320 x 240            | 120              | 599
aGlass and aSee               | -                    | 120-380          | -
Pupil Labs VR (VIVE USB)      | 320 x 240            | 30               | 1,572*
Pupil Labs VR (Dedicated USB) | 640 x 480            | 120              | 1,572*
Pupil Labs AR (HoloLens)      | 640 x 480            | 120              | 1,965*
Pupil Pro Glasses             | 800 x 600            | 200              | 2,066?*
Looxid Labs                   | -                    | -                | 2,999
HoloLens v2                   | -                    | -                | 3,500
Tobii Pro Glasses 2           | 240 x 960            | 100              | 10,000

*Without academic discount

Pop Quiz!
Topic: Part 2: Methods for collecting eye tracking data, including sample protocols and pitfalls to avoid
Learning objective: Compare and contrast classes of eye trackers
You want to use an eye tracker to study where people look during a public speaking study. In particular, you are studying pre-service and experienced teachers in a classroom. What type of eye tracker should you use?
(a) Eye tracking glasses
(b) Table-mounted eye tracker
Answer: (a) Eye tracking glasses

Pop Quiz!
Topic: Part 2: Methods for collecting eye tracking data, including sample protocols and pitfalls to avoid
Learning objective: Compare and contrast classes of eye trackers
You want to get an HMD fitted with an eye tracker to study where people look during a VR public speaking study. What spec should you consider?
(a) Sample rate, because bigger is better
(b) Calibration accuracy, because 30-60 Hz is sufficient for attentional research
Answer: (b) Calibration accuracy

Pop Quiz!
Topic: Part 2: Methods for collecting eye tracking data, including sample protocols and pitfalls to avoid
Learning objective: Compare and contrast classes of eye trackers
You want to get an HMD fitted with an eye tracker to study foveated rendering. What spec should you consider?
(a) Sample rate, because bigger is better
(b) Calibration accuracy, because 30-60 Hz is sufficient for attentional research
Answer: Both!

What does an eye tracker measure? Gaze location (left and right eye). Pupil diameter.
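
As a concrete picture of these measurements, one sample from a binocular tracker might look like the hypothetical record below; actual field names, coordinate conventions, and units vary by vendor and SDK.

```python
# A hypothetical per-sample record for a binocular eye tracker.
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp_s: float
    left_gaze_xy: tuple[float, float]   # normalized screen coordinates
    right_gaze_xy: tuple[float, float]
    left_pupil_mm: float                # NaN when the pupil is not visible
    right_pupil_mm: float
```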

2D Eye Tracking Data. Gaze-Driven Video Re-Editing. Eakta Jain, Yaser Sheikh, Ariel Shamir, Jessica Hodgins. ACM Transactions on Graphics, 2015.

Overlaid Gaze Data. Gaze-Driven Video Re-Editing. Eakta Jain, Yaser Sheikh, Ariel Shamir, Jessica Hodgins. ACM Transactions on Graphics, 2015.

What does it tell you?
- Whether an AOI (area of interest) was attended
- How long it was looked at (dwell times)
- How many times it was revisited
- In what order AOIs were looked at
- Patterns across individuals (e.g., center bias, spatio-temporal consistency)
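
Given a fixation sequence labeled with AOIs, these quantities reduce to simple bookkeeping. A minimal sketch, with a hypothetical input format of (AOI label, fixation duration) pairs in viewing order:

```python
# Compute dwell time, visit counts, and first-visit order per AOI.
from collections import defaultdict

def aoi_metrics(fixations):
    dwell = defaultdict(float)   # total dwell time per AOI
    visits = defaultdict(int)    # number of (re)visits per AOI
    first_visit_order = []       # AOIs in the order first attended
    previous = None
    for aoi, duration_s in fixations:
        dwell[aoi] += duration_s
        if aoi != previous:      # a new visit begins
            visits[aoi] += 1
            if aoi not in first_visit_order:
                first_visit_order.append(aoi)
        previous = aoi
    return dict(dwell), dict(visits), first_visit_order
```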

Mobile Eye Tracking Glasses. Glasses-based eye tracking: gaze position on the scene camera feed.

VR-HMD Eye Tracking. VR eye tracking: gaze direction, gaze in world. John, Raiturkar, Le Meur, Jain. A Benchmark of Four Methods for Generating 360 Degree Saliency Maps from Eye Tracking Data. AIVR 2018.
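
To turn gaze-in-world directions into a 360-degree saliency map, each direction must first be mapped to an equirectangular pixel. The sketch below is one such mapping under an assumed +X-forward, +Z-up axis convention; it is illustrative, not the method of the cited benchmark paper.

```python
# Map a unit gaze-in-world vector to an equirectangular pixel.
import numpy as np

def direction_to_equirect(direction, width, height):
    """Return (column, row) for a unit gaze vector in an equirect image."""
    x, y, z = direction
    lon = np.arctan2(y, x)                   # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(z, -1.0, 1.0))   # latitude in [-pi/2, pi/2]
    col = (lon / (2 * np.pi) + 0.5) * (width - 1)
    row = (0.5 - lat / np.pi) * (height - 1)
    return int(round(col)), int(round(row))
```

Accumulating these pixel hits and blurring with a Gaussian kernel is the usual route from raw gaze samples to a saliency map, with the caveat that equirectangular distortion near the poles needs special handling (more on this in Part 3).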

VR Relevant Parameters. Is the head a good enough approximation of the eye? People assume this for ease of data collection. It depends on the application.

Break

Design Choices in an Eye Tracking Study: Apparatus. Task.

Apparatus: How to select an eye tracker. Remote or head-mounted? Glasses or VR-HMD? Built-in or retrofitted?

Design Choices in an Eye Tracking Study: Apparatus. Task.

Task Components: General
Component        | Explanation
Informed Consent | Purpose of study; risks/benefits; compensation; data; opt out; agree?

Task Components: General
Component | Explanation

Part 2: Methods for collecting eye tracking data, including sample protocols and pitfalls to avoid [Jain]
1. Compare and contrast classes of eye trackers
2. Design a data collection protocol
3. Report the relevant parameters for the eye tracker, calibration, and validation in the Methods section of a paper