Full record view

This is a static export from the catalogue dated 23 March 2019.

Bibliographic citation

BK
1st pub.
Oxford : Oxford University Press, 2011
xix, 537 p. : ill. ; 26 cm

ISBN 978-0-19-969708-3 (hardback)
ISBN 978-0-19-873859-6 (paperback)
Paperback reprint 2015
Includes bibliography (pages [469]-520) and index
000251705
Contents

About the authors

1 Introduction
1.1 The structure of this book
1.1.1 Technical and methodological skills
1.1.2 Events and representations
1.1.3 Measures and their operational definitions
1.2 How eye-movement measures are described in this book
1.2.1 The target question and the summary box
1.2.2 The name(s)
1.2.3 The operational definitions
1.2.4 Typical values and histograms
1.2.5 Usage
1.3 Terminology and style
1.4 Material used in the book

I TECHNICAL AND METHODOLOGICAL SKILLS

2 Eye-tracker Hardware and its Properties
2.1 A brief history of the competences around eye-trackers
2.2 Manufacturers and customers
2.3 Hands-on advice on how to choose infrastructure and hardware
2.4 How to set up an eye-tracking laboratory
2.4.1 Eye-tracking labs as physical spaces
2.4.2 Types of laboratories and their infrastructure
2.5 Measuring the movements of the eye
2.5.1 The eye and its movements
2.5.2 Binocular properties of eye movements
2.5.3 Pupil and corneal reflection eye tracking
2.6 Data quality
2.6.1 Sampling frequency: what speed do you need?
2.6.2 Accuracy and precision
2.6.3 Eye-tracker latencies, temporal precision, and stimulus-synchronization latencies
2.6.4 Filtering and denoising
2.6.5 Active and passive gaze contingency
2.7 Types of eye-trackers and the properties of their set-up
2.7.1 The three types of video-based eye-trackers
2.7.2 Robustness
2.7.3 Tracking range and headboxes
2.7.4 Mono- versus binocular eye tracking
2.7.5 The parallax error
2.7.6 Data samples and the frames of reference
2.8 Summary

3 From Vague Idea to Experimental Design
3.1 The initial stage—explorative pilots, fishing trips, operationalizations, and highway research
3.1.1 The explorative pilot
3.1.2 The fishing trip
3.1.3 Theory-driven operationalizations
3.1.4 Operationalization through traditions and paradigms
3.2 What caused the effect? The need to understand what you are studying
3.2.1 Correlation and causality: a matter of control
3.2.2 What measures to select as dependent variables
3.2.3 The task
3.2.4 Stimulus scene, and the areas of interest
3.2.5 Trials and their durations
3.2.6 How to deal with participant variation
3.2.7 Participant sample size
3.3 Planning for statistical success
3.3.1 Data exploration
3.3.2 Data description
3.3.3 Data analysis
3.3.4 Data modelling
3.3.5 Further statistical considerations
3.4 Auxiliary data: planning
3.4.1 Methodological triangulation of eye movement and auxiliary data
3.4.2 Questionnaires and Likert scales
3.4.3 Reaction time measures
3.4.4 Galvanic skin response (GSR)
3.4.5 Motion tracking
3.4.6 Electroencephalography (EEG)
3.4.7 Functional magnetic resonance imaging (fMRI)
3.4.8 Verbal data
3.5 Summary

4 Data Recording
4.1 Hands-on advice for data recording
4.2 Building the experiment
4.2.1 Stimulus preparation
4.2.2 Physically building the recording environment
4.2.3 Pilot testing the experiment
4.3 Participant recruitment and ethics
4.3.1 Ethics toward participants
4.4 Eye camera set-up
4.4.1 Mascara
4.4.2 Droopy eyelids and downward eyelashes
4.4.3 Shadows and infrared reflections in glasses
4.4.4 Bi-focal glasses
4.4.5 Contact lenses
4.4.6 Direct sunlight and other infrared sources
4.4.7 The fourth Purkinje reflection
4.4.8 Wet eyes due to tears or allergic reactions
4.4.9 The retinal reflection (bright-pupil condition)
4.4.10 Mirror orientation and dirty mirrors
4.5 Calibration
4.5.1 Points
4.5.2 Geometry
4.5.3 The calibration procedure
4.5.4 Corner point difficulties and solutions
4.5.5 Calibration validation
4.5.6 Binocular and head-tracking calibration
4.5.7 Calibration tricks with head-mounted systems
4.6 Instructions and start of recording
4.7 Auxiliary data: recording
4.7.1 Non-interfering set-ups
4.7.2 Interfering set-ups
4.7.3 Verbal data
4.8 Debriefing
4.9 Preparations for data analysis
4.9.1 Data quality
4.9.2 Analysis software for eye-tracking data
4.10 Summary

II DETECTING EVENTS AND BUILDING REPRESENTATIONS

5 Estimating Oculomotor Events from Raw Data Samples
5.1 The setting dialogues and the output
5.2 Principles and algorithms for event detection
5.3 Hands-on advice for event detection
5.4 Challenging issues in event detection
5.4.1 Choosing parameter settings
5.4.2 Noise, artefacts, and data quality
5.4.3 Glissades
5.4.4 Sampling frequency
5.4.5 Smooth pursuit
5.4.6 Binocularity
5.5 Algorithmic definitions
5.5.1 Dispersion-based algorithms
5.5.2 Velocity and acceleration algorithms
5.6 Manual coding of events
5.7 Blink detection
5.8 Smooth pursuit detection
5.9 Detection of noise and artefacts
5.10 Detection of other events
5.11 Summary: oculomotor events in eye-movement data

6 Areas of Interest
6.1 The AOI editor and your hypothesis
6.2 Hands-on advice for using AOIs
6.3 The basic AOI events
6.3.1 The AOI hit
6.3.2 The dwell
6.3.3 The transition
6.3.4 The return
6.3.5 The AOI first skip
6.3.6 The AOI total skip
6.4 AOI-based representations of data
6.4.1 Dwell maps
6.4.2 The AOI strings
6.4.3 Transition matrices
6.4.4 Markov models
6.4.5 AOIs over time
6.4.6 Time and order
6.5 Types of AOIs
6.5.1 Whitespace
6.5.2 Planes
6.5.3 Dynamic AOIs
6.5.4 Distributed AOIs
6.5.5 Gridded AOIs
6.5.6 Fuzzy AOIs
6.5.7 Stimulus-inherent AOI orders
6.5.8 Participant-specific AOI identities
6.5.9 AOI identities across stimuli
6.5.10 AOIs in the feature domain
6.6 Challenging issues with AOIs
6.6.1 Choosing and positioning AOIs
6.6.2 Overlapping AOIs
6.6.3 Deciding the size of an AOI
6.6.4 Data samples or fixations and saccades?
6.6.5 Dealing with inaccurate data
6.6.6 Normalizing AOI measures to size, position, and content
6.6.7 AOIs in gaze-overlaid videos
6.7 Summary: events and representations from AOIs

7 Attention Maps—Scientific Tools or Fancy Visualizations?
7.1 Heat map settings dialogues
7.2 Principles and terminology
7.3 Hands-on advice for using attention maps
7.4 Challenging issues: interpreting and building attention maps
7.4.1 Interpreting attention map visualizations
7.4.2 How many fixations/participants?
7.4.3 How attention maps are built
7.5 Usage of attention maps other than for visualization
7.5.1 Using attention maps to define AOIs
7.5.2 Attention maps as image and data processing tools
7.5.3 Using attention maps in measures
7.6 Summary: attention map representations

8 Scanpaths—Theoretical Principles and Practical Application
8.1 What is a scanpath?
8.2 Hands-on advice for using scanpaths
8.3 Usages of scanpath visualization
8.3.1 Data quality checks
8.3.2 Data analysis by visual inspection
8.3.3 Exhibiting scanpaths in publications
8.4 Scanpath events
8.4.1 The backtrack
8.4.2 The regression family of events
8.4.3 The look-back and inhibition of return
8.4.4 The look-ahead
8.4.5 The local and global subscans
8.4.6 Ambient versus focal fixations
8.4.7 The sweep
8.4.8 The reading and scanning events
8.5 Scanpath representations
8.5.1 Symbol sequences
8.5.2 Vector sequences
8.5.3 Attention map sequences
8.6 Principles for scanpath comparison
8.6.1 Representation
8.6.2 Simplification
8.6.3 Sequence alignment
8.6.4 Calculation
8.6.5 Pairwise versus groupwise comparison
8.7 Unresolved issues concerning scanpaths
8.7.1 Relationships between scanpaths and cognitive processes
8.7.2 Scanpath Theory
8.7.3 Scanpath planning
8.7.4 The average scanpath
8.7.5 Comparing scanpaths
8.8 Summary: scanpath events and representations

9 Auxiliary Data: Events and Representations
9.1 Event-based coalignment
9.1.1 Alignment of eye-tracking events with auxiliary data
9.1.2 Latencies between events in eye-tracking and auxiliary data
9.2 Triangulating eye-movement data with verbal data
9.2.1 Detecting events in verbal data: transcribing verbalizations and segmenting them into idea units
9.2.2 Coding of verbal data units
9.2.3 Representations, measures, and statistical considerations for verbal data
9.2.4 Open issues: how to co-analyse eye-movement and verbal data
9.3 Summary: events and representations with auxiliary data

III MEASURES

10 Movement Measures
10.1 Movement direction measures
10.1.1 Saccadic direction
10.1.2 Glissadic direction
10.1.3 Microsaccadic direction
10.1.4 Smooth pursuit direction
10.1.5 Scanpath direction
10.2 Movement amplitude measures
10.2.1 Saccadic amplitude
10.2.2 Glissadic amplitude
10.2.3 Microsaccadic amplitude
10.2.4 Smooth pursuit length
10.2.5 Scanpath length
10.2.6 Blink amplitude
10.3 Movement duration measures
10.3.1 Saccadic duration
10.3.2 Scanpath duration
10.3.3 Blink duration
10.4 Movement velocity measures
10.4.1 Saccadic velocity
10.4.2 Smooth pursuit velocity
10.4.3 Scanpath velocity and reading speed
10.4.4 Pupil constriction and dilation velocity
10.5 Movement acceleration measures
10.5.1 Saccadic acceleration/deceleration
10.5.2 Skewness of the saccadic velocity profile
10.5.3 Smooth pursuit acceleration
10.5.4 Saccadic jerk
10.6 Movement shape measures
10.6.1 Saccadic curvature
10.6.2 Glissadic curvature
10.6.3 Smooth pursuit: degree of smoothness
10.6.4 Global to local scanpath ratio
10.7 AOI order and transition measures
10.7.1 Order of first AOI entries
10.7.2 Transition matrix density
10.7.3 Transition matrix entropy
10.7.4 Number and proportion of specific subscans
10.7.5 Unique AOIs
10.7.6 Statistical analysis of a transition matrix
10.8 Scanpath comparison measures
10.8.1 Correlation between sequences
10.8.2 Attention map sequence similarity
10.8.3 The string edit distance
10.8.4 Refined AOI sequence alignment measures
10.8.5 Vector sequence alignment

11 Position Measures
11.1 Basic position measures
11.1.1 Position
11.1.2 Landing position in AOI
11.2 Position dispersion measures
11.2.1 Comparison of dispersion measures
11.2.2 Standard deviation, variance, and RMS
11.2.3 Range
11.2.4 Nearest neighbour index
11.2.5 The convex hull area
11.2.6 Bivariate contour ellipse area (BCEA)
11.2.7 Skewness of the Voronoi cell distribution
11.2.8 Coverage, and volume under an attention map
11.2.9 Relative entropy and the Kullback-Leibler Distance (KLD)
11.2.10 Average landing altitude
11.3 Position similarity measures
11.3.1 Euclidean distance
11.3.2 Mannan similarity index
11.3.3 The earth mover distance
11.3.4 The attention map difference
11.3.5 Average landing altitude
11.3.6 The angle between dwell map vectors
11.3.7 The correlation coefficient between two attention maps
11.3.8 The Kullback-Leibler distance
11.4 Position duration measures
11.4.1 The inter-microsaccadic interval (IMSI)
11.4.2 Fixation duration
11.4.3 The skewness of the frequency distribution of fixation durations
11.4.4 First fixation duration after onset of stimulus
11.4.5 First fixation duration in an AOI, and also the second
11.4.6 Dwell time
11.4.7 Total dwell time
11.4.8 First and second pass (dwell) times in an AOI
11.4.9 Reading depth
11.5 Pupil diameter
11.6 Position data and confounding factors
11.6.1 Participant brainware and substances
11.6.2 Participant cultural background
11.6.3 Participant experience and anticipation
11.6.4 Communication, imagination, and problem solving
11.6.5 Central bias
11.6.6 The stimulus

12 Numerosity Measures
12.1 Saccades: number, proportion, and rate
12.1.1 Number of saccades
12.1.2 Proportion of saccades
12.1.3 Saccadic rate
12.2 Glissadic proportion
12.3 Microsaccadic rate
12.4 Square-wave jerk rate
12.5 Smooth pursuit rate
12.6 Blink rate
12.7 Fixations: number, proportion, and rate
12.7.1 Number of fixations
12.7.2 Proportion of fixations
12.7.3 Fixation rate
12.8 Dwells: number, proportion, and rate
12.8.1 Number of dwells (entries) in an area of interest
12.8.2 Proportion of dwells to an area of interest
12.8.3 Dwell rate
12.9 Participant, area of interest, and trial proportion
12.9.1 Participant looking and skipping proportions
12.9.2 Proportion of areas of interest looked at
12.9.3 Proportion of trials
12.10 Transition number, proportion, and rate
12.10.1 Number of transitions
12.10.2 Number of returns to an area of interest
12.10.3 Transition rate
12.11 Number and rate of regressions, backtracks, look-backs, and look-aheads
12.11.1 Number of regressions in and between areas of interest
12.11.2 Number of regressions out of and into an area of interest
12.11.3 Regression rate
12.11.4 Number of backtracks
12.11.5 Number of look-aheads

13 Latency and Distance Measures
13.1 Latency measures
13.1.1 Saccadic latency
13.1.2 Smooth pursuit latency
13.1.3 Latency of the reflex blink
13.1.4 Pupil dilation latency
13.1.5 EFRPs—eye fixation related potentials
13.1.6 Entry time in AOI
13.1.7 Thresholded entry time
13.1.8 Latency of the proportion of participants over time
13.1.9 Return time
13.1.10 Eye-voice latencies
13.1.11 Eye-hand span
13.1.12 The eye-eye span (cross-recurrence analysis)
13.2 Distances
13.2.1 Eye-mouse distance
13.2.2 Disparities
13.2.3 Smooth pursuit gain
13.2.4 Smooth pursuit phase
13.2.5 Saccadic gain

14 What are Eye-Movement Measures and How can they be Harnessed?
14.1 Eye-movement measures: plentiful but poorly accessible
14.2 Measure concepts and operationalizing them
14.3 Proposed model of eye-tracking measures
14.4 Classification of eye-movement measures
14.5 How to construct even more measures
14.6 Summary

References

Index
