Each page below replays a complete search session from the AdSERP dataset: numbered eye fixations, mouse cursor path, page scroll positions, and Scrutinizer-simulated peripheral vision — showing what the searcher could actually resolve at each moment. The background image is rendered through Scrutinizer's LGN/V1/DoG foveated pipeline with infinite visual memory accumulation.
AdSERP: 47 participants, 2,776 transactional Google queries, Gazepoint GP3 HD eye tracker at 150Hz. Trials below are prototypical examples of distinct search behaviors.
Positional accuracy: the fixation overlay has a median offset under 13px, rising to ~45px at the page bottom. The SERP HTML is re-rendered locally, so element heights differ from the original Chrome 110/Windows session due to external resource loading (Maps tiles, product images). Fixation coordinates (FPOGX/FPOGY) from AdSERP were verified to pixel accuracy against synthetic test pages.
Pupil LF/HF — ratio of low- to high-frequency power in pupil-diameter oscillations (Duchowski 2026; Butterworth IIR filters: LF 0–1.6 Hz, HF 1.6–4 Hz, 1 s sliding window). Higher values indicate higher cognitive load. Shown as the amber timeline track and as a color mode.
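A minimal sketch of this measure, assuming an order-3 Butterworth design and a boxcar sliding window (only the band edges, window length, and 150 Hz rate come from the text; the remaining filter details are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 150  # Gazepoint GP3 HD sampling rate (Hz)

def lf_hf_ratio(pupil, fs=FS, lf_cut=1.6, hf_band=(1.6, 4.0), win_s=1.0):
    """LF/HF power ratio of pupil-diameter oscillations over a sliding window.

    Band edges follow the text (LF 0-1.6 Hz, HF 1.6-4 Hz); filter order and
    windowing are illustrative assumptions, not the exact pipeline.
    """
    pupil = np.asarray(pupil, dtype=float)
    pupil = pupil - pupil.mean()  # remove DC so LF power reflects oscillation
    b_lf, a_lf = butter(3, lf_cut, btype="low", fs=fs)       # LF: 0-1.6 Hz
    b_hf, a_hf = butter(3, hf_band, btype="band", fs=fs)     # HF: 1.6-4 Hz
    lf = filtfilt(b_lf, a_lf, pupil)
    hf = filtfilt(b_hf, a_hf, pupil)
    win = int(win_s * fs)
    kernel = np.ones(win) / win
    lf_pow = np.convolve(lf**2, kernel, mode="same")  # sliding mean power
    hf_pow = np.convolve(hf**2, kernel, mode="same")
    return lf_pow / (hf_pow + 1e-12)  # higher => LF-dominant => higher load
```

A 0.5 Hz oscillation yields a large ratio, a 3 Hz oscillation a small one, matching the "higher values = higher load" reading.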
←→ step through fixations · Space play/pause · drag timeline to scrub · Window selector limits visible fixation history
Each background image is a full-page foveated render produced by Scrutinizer, a neuroscience-based peripheral vision simulator. Scrutinizer models the human visual system's resolution falloff from fovea to periphery using a pipeline inspired by the lateral geniculate nucleus (LGN), primary visual cortex (V1), and difference-of-Gaussians (DoG) spatial frequency filtering.
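The DoG stage of such a pipeline can be illustrated in a few lines; the sigmas below are placeholder values, not Scrutinizer's actual LGN/V1 parameters, which vary with eccentricity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(img, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians band-pass: center blur minus surround blur.

    Illustrative sigmas only; a foveated renderer would widen them with
    distance from the fixation point to model peripheral resolution falloff.
    """
    img = np.asarray(img, dtype=float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)
```

Uniform regions cancel to ~0 while edges and texture survive, which is why DoG acts as a spatial-frequency filter.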
Infinite visual memory mode accumulates every fixation across the entire scanpath. As each fixation is replayed, the foveal region at that position is "remembered" — remaining sharp while everything else degrades through the peripheral pipeline. The final image shows exactly what the participant could have resolved across their full search session: sharp where they looked, degraded where they didn't.
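The accumulation logic can be sketched as a compositing mask; the hard-edged circular fovea below is a simplification of Scrutinizer's smooth eccentricity-dependent falloff:

```python
import numpy as np

def accumulate_memory(sharp, blurred, fixations, radius=40):
    """Composite 'infinite visual memory': pixels within `radius` px of any
    fixation stay sharp; everything else shows the peripheral (blurred) render.

    Hard circular fovea and a fixed radius are simplifying assumptions.
    """
    h, w = sharp.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    remembered = np.zeros((h, w), dtype=bool)
    for fx, fy in fixations:  # accumulate every fixation in the scanpath
        remembered |= (xs - fx) ** 2 + (ys - fy) ** 2 <= radius ** 2
    out = blurred.copy()
    out[remembered] = sharp[remembered]  # sharp where looked, degraded elsewhere
    return out
```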
Pipeline: Scrutinizer runs with TEST_VISUAL_MEMORY=-1 (infinite accumulation), then tile-captures the full page at each scroll position and stitches the tiles into a single full-page render.
Multi-track timeline: the timeline strip shows five parallel data streams for each fixation: Saliency (visual pop-out at the gaze location), Clutter (Rosenholtz feature congestion), Pupil (dilation from 150Hz pupillometry), Scroll (forward/regression direction + depth), and Dwell (fixation duration). Bar height encodes intensity within each track; color encodes the signal value. Scrub across fixations to see how visual complexity, cognitive load, motor behavior, and attention duration co-vary in real time.
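The tile-capture-and-stitch step might look like this in outline; overwriting overlapping rows with the later tile is an assumption (the real build may crop or blend instead):

```python
import numpy as np

def stitch_tiles(tiles, scroll_positions, page_height, tile_width):
    """Stitch viewport tiles captured at successive scroll offsets into one
    full-page image. Later tiles simply overwrite overlapping rows
    (assumption; cropping or blending would also work).
    """
    page = np.zeros((page_height, tile_width) + tiles[0].shape[2:],
                    dtype=tiles[0].dtype)
    for tile, y in zip(tiles, scroll_positions):
        h = min(tile.shape[0], page_height - y)  # clip last tile to page end
        page[y:y + h] = tile[:h]
    return page
```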
DOM-anchored fixation positioning: fixation coordinates from the eye tracker are in screen-pixel space (1280×1024), but the SERP was displayed at 1422 CSS pixels (90% display scaling). Rendering at a different width causes content reflow, shifting elements vertically. Instead of applying coordinate transforms, each fixation is anchored to the DOM element it landed on: during the build step, generate-anchors.js loads each SERP in Playwright at the original window width, uses elementFromPoint to map each fixation to a CSS selector + offset, and the build script then re-resolves those anchors at the render width. The fixation dot lands on the correct element regardless of layout width. This approach was inspired by DOM-based event logging (Edmonds, 2003), the same principle that anchors interaction events to page structure rather than screen coordinates.
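The re-resolution step reduces to simple geometry once bounding boxes at the new width are known (e.g. via Playwright). The fractional-offset anchor format below is hypothetical; generate-anchors.js may store raw pixel offsets instead:

```python
def resolve_anchor(anchor, boxes):
    """Re-resolve a DOM anchor to absolute page coordinates at a new layout.

    `anchor`: {'selector': str, 'dx': float, 'dy': float}, where dx/dy are the
    fixation's offset inside the element as fractions of its width/height
    (a hypothetical scheme for illustration).
    `boxes`: selector -> {'x', 'y', 'width', 'height'} bounding boxes measured
    in the target layout, e.g. from Playwright's bounding_box().
    """
    box = boxes[anchor["selector"]]
    return (box["x"] + anchor["dx"] * box["width"],
            box["y"] + anchor["dy"] * box["height"])
```

Because the offset is stored relative to the element, the resolved dot tracks the element wherever reflow moves it.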
Data: AdSERP — Latifzadeh et al., 2,776 transactional Google SERP queries, 47 participants, Gazepoint GP3 HD eye tracker, simultaneous mouse + scroll + gaze recording. Dataset on Zenodo.