Color Persists in the Periphery: Even Where You're Not Looking

v1.9.0 release notes →

For twenty years, the standard shorthand for peripheral color has been “desaturation” — color drains away uniformly with eccentricity. This framing comes from a specific historical moment: threshold studies measured the minimum detectable chromatic contrast at increasing eccentricities and found it rising steeply, especially for red-green. The conclusion hardened into conventional wisdom. Peripheral vision is achromatic. Color is foveal.

The problem is that those studies measured detection, not appearance. Web colors aren’t hovering at threshold — they’re saturated reds, blues, and greens well above any detection limit. And the more recent literature paints a different picture. Jiang, Shooner & Mullen (2022) showed that perceived contrast follows a compressive power law — peripheral color appearance holds up much better than threshold sensitivity would predict. Tyler (2015) demonstrated that eccentricity-scaled stimuli appear vivid out to large eccentricities. Rosenholtz’s Texture Tiling Model frames peripheral color not as lost but as pooled — averaged over progressively larger regions, preserving mean chromaticity while losing spatial chromatic detail.

This release replaces Scrutinizer’s uniform chrominance reduction with a pipeline grounded in the current understanding. The result is visible immediately.


Before and after

The color spectrum reference page, unmodified: a continuous hue gradient from red to magenta with rod sensitivity (V′) curve overlay, labeled color swatches, and shade rows. Access in Scrutinizer via Go → Reference Pages → Color Spectrum. This is what Scrutinizer processes — the question is what survives in the periphery, and how differently each channel decays.

Color spectrum reference page, before and after. Left: the old uniform approach — all color fades at the same rate, washing everything to grey. Right: per-channel decay — blue-yellow persists into the periphery while red-green fades faster. Large color swatches retain their hue further than small ones, matching what the research predicts (Abramov et al. 1991).

Dashboard page, Mode 0 (High-Key), fixation at center (the “$45,231” hero metric). Left: everything outside the fovea is achromatic grey. Right: the blue sidebar retains its identity, green status badges remain discriminable, and red accents are attenuated but present. A designer using the left rendering would conclude “nobody can see the color coding in the table” and add redundant labels; the right rendering shows the color coding is peripherally visible at a coarse level — a different and more accurate design signal.

What changed in the pipeline

Two biological asymmetries that uniform desaturation misses entirely:

Channel-dependent decay rates

Red-green color sensitivity is a foveal specialization. The wiring that lets you distinguish red from green depends on one-to-one connections between photoreceptors and brain cells — connections that only exist near the center of gaze. As you move into the periphery, those connections pool together and the red-green signal collapses. Half the sensitivity is gone by about 5° from center.

Blue-yellow is different. The wiring for blue-yellow covers the entire retina — it’s ancient neural circuitry, far older than the red-green specialization. Half the sensitivity isn’t lost until about 26°. That’s roughly a 5:1 difference in how fast these two color channels fall off. Treating both channels identically — the old approach — over-attenuates blue-yellow by an order of magnitude.
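The two half-sensitivity figures above can be turned into a toy decay model — a sketch only, assuming simple exponential falloff, which is not necessarily the curve Scrutinizer's shader uses:

```python
# Toy per-channel decay model. Assumes exponential falloff with the
# half-sensitivity eccentricities quoted above; the real pipeline may
# use a different curve (e.g. castleCSF-derived).
E_HALF_DEG = {"red_green": 5.0, "blue_yellow": 26.0}

def channel_sensitivity(eccentricity_deg: float, channel: str) -> float:
    """Sensitivity relative to the fovea: halves every E_HALF_DEG degrees."""
    return 2.0 ** (-eccentricity_deg / E_HALF_DEG[channel])

# At 20 degrees, red-green has collapsed while blue-yellow remains strong:
print(channel_sensitivity(20.0, "red_green"))    # 0.0625
print(channel_sensitivity(20.0, "blue_yellow"))  # ~0.59
```

Under this model the gap between the channels widens multiplicatively with eccentricity, which is why a single shared decay rate misrepresents both.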

Size-dependent preservation

Bigger color areas survive further into the periphery. A full-width colored banner retains its color identity much further than 14px colored text (Abramov, Gordon & Chan 1991). The visual system pools color over progressively larger regions — large patches average to a consistent hue, while small colored details get swallowed.

Scrutinizer’s pipeline already separates content by spatial scale (fine detail vs. large regions). Applying different color decay rates at each scale gives size-dependent preservation automatically — no need to measure stimulus size explicitly. The design takeaway: if you rely on color coding for peripheral discoverability, make the colored regions large.
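A sketch of how scale separation yields size-dependent decay for free — pure Python on a 1-D signal with illustrative decay constants; the actual pipeline operates on 2-D chroma planes in the shader:

```python
def box_blur(xs, radius=2):
    """Crude 1-D box blur standing in for the pipeline's coarse-scale band."""
    return [
        sum(xs[max(0, i - radius): i + radius + 1])
        / len(xs[max(0, i - radius): i + radius + 1])
        for i in range(len(xs))
    ]

def scale_split_decay(chroma, eccentricity_deg):
    """Split chroma into coarse (blurred) and fine (residual) bands, then
    decay each at its own rate. Half-decay constants are illustrative."""
    coarse = box_blur(chroma)
    fine = [c - lo for c, lo in zip(chroma, coarse)]
    k_fine = 2.0 ** (-eccentricity_deg / 4.0)     # small details fade fast
    k_coarse = 2.0 ** (-eccentricity_deg / 20.0)  # large patches persist
    return [k_coarse * lo + k_fine * hi for lo, hi in zip(coarse, fine)]
```

A large uniform patch lives entirely in the coarse band, so at 20° it keeps half its chroma; a one-pixel accent is mostly fine-band and is nearly gone — no explicit size measurement required.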

| Channel | 50% sensitivity | What this means for design |
| --- | --- | --- |
| Red-green | ~5° from fixation | Red/green distinctions only work near where the user is looking |
| Blue-yellow | ~26° from fixation | Blue UI elements remain visible across most of the screen |
| Brightness | ~7° from fixation | Light/dark contrast is the most reliable peripheral signal |

Colors that are well above the detection threshold — like the saturated reds and blues on most websites — hold up better in the periphery than threshold studies predicted (Jiang, Shooner & Mullen 2022). The shader applies a correction for this so web-scale colors don’t vanish unrealistically fast.

Open calibration question: how much peripheral saturation to preserve is still being tuned. The correction is conservative (exponent 0.5); the literature suggests 0.6–0.65 may be more accurate. The parameter is exposed as a tunable setting for researchers who want to experiment.
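A minimal sketch of what such a compressive correction can look like — hypothetical form and parameter names, not the shader's actual formulation:

```python
def perceived_saturation(contrast: float, threshold: float,
                         exponent: float = 0.5) -> float:
    """Compressive power law: perceived contrast grows as (C/threshold)^p.
    exponent=0.5 mirrors the conservative default; the literature suggests
    0.6-0.65 may be more accurate (Jiang, Shooner & Mullen 2022)."""
    if contrast <= threshold:
        return 0.0  # below detection threshold, nothing survives
    return threshold * (contrast / threshold) ** exponent
```

With a threshold of 0.1, a web-saturated contrast of 0.4 maps to 0.2 under p = 0.5 and to about 0.23 under p = 0.6 — suprathreshold colors are compressed, not erased.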

Congestion-gated pooling (Mode 9)

Rosenholtz et al. (2012) argue that peripheral vision computes summary statistics over local pooling regions, and that clutter is what happens when those statistics are ambiguous — when too many features are packed into a pooling region, the summary becomes unreliable. Mode 9 tests this prediction directly: the shader reads the congestion map and multiplies peripheral pooling strength by local clutter.

// peripheral2.frag
// Linear boost: 1.0x at zero clutter up to 2.0x at full clutter;
// the trailing 1.0 is the experimental congestion gain.
float congestionBoost = 1.0 + lgn.congestion * 1.0;
coupledEccentricity *= congestionBoost;

Dense text columns and image-heavy sidebars pool more aggressively in the periphery than clean whitespace. The congestion worker auto-starts on mode selection and recomputes on scroll and navigation.

Tagged experimental. The 1.0× multiplier and linear boost curve are initial guesses. The prediction is testable: show observers gated vs. ungated peripheral renderings alongside gaze-contingent photographs and measure which simulation looks more realistic.
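In scalar form, the gating reduces to the following — a sketch that assumes `lgn.congestion` is normalized to [0, 1]:

```python
def congestion_boost(congestion: float, gain: float = 1.0) -> float:
    """Linear boost curve from the shader: 1.0x at zero clutter,
    (1 + gain)x at full clutter. gain=1.0 is the initial guess."""
    c = min(max(congestion, 0.0), 1.0)  # clamp to the expected range
    return 1.0 + c * gain

def gated_eccentricity(ecc_deg: float, congestion: float) -> float:
    # Cluttered regions pool as if they sat further into the periphery.
    return ecc_deg * congestion_boost(congestion)

print(gated_eccentricity(10.0, 0.5))  # 15.0
```

Swapping the linear curve for a saturating one only requires changing `congestion_boost`, which is what makes the guess cheap to revise.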


Crowding: the gap we can see

Two new reference pages expose a fundamental limitation. crowding.html places a target letter flanked by random letters (crowded) next to an identical letter in isolation, both at the same eccentricity. In real peripheral vision, the isolated letter is identifiable while the crowded letter is not — even when both are above acuity threshold.

In Scrutinizer, both letters are equally degraded. The V1 displacement field is computed from pixel position only — it has no knowledge of what is adjacent. An isolated letter and a densely flanked letter at the same eccentricity receive identical distortion.

Bouma’s law predicts critical spacing of ~0.4–0.5× eccentricity (Bouma spacing reference). Flankers within this radius contaminate the target’s pooling region; flankers outside it leave the target intact. Implementing this requires feeding the structure map’s density channel into V1 strength — a sigmoid gate where dense content gets full displacement and isolated elements are spared. The spec is written (density_gated_crowding.md), the reference pages are published, and the gap is documented.
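A sketch of what the planned sigmoid gate might look like — all names and constants here are hypothetical; the actual parameters live in density_gated_crowding.md:

```python
import math

def crowding_gate(density: float, eccentricity_deg: float,
                  flanker_distance_deg: float,
                  bouma: float = 0.45, steepness: float = 8.0) -> float:
    """Scale V1 displacement by local density, gated by Bouma's law.
    Flankers inside ~0.45 x eccentricity crowd the target toward full
    displacement; flankers outside it leave the target intact."""
    critical = bouma * eccentricity_deg
    # Positive when flankers intrude inside the critical spacing.
    s = (critical - flanker_distance_deg) / max(critical, 1e-6)
    gate = 1.0 / (1.0 + math.exp(-steepness * s))
    return density * gate

# At 10 deg eccentricity (critical spacing ~4.5 deg):
print(crowding_gate(1.0, 10.0, 1.0) > 0.9)    # True: flanker inside, crowded
print(crowding_gate(1.0, 10.0, 10.0) < 0.01)  # True: flanker outside, spared
```

The sigmoid's steepness controls how abruptly the crowded/uncrowded transition happens at the Bouma radius; the reference pages give exactly the stimuli needed to fit it.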


Also in this release


What’s next


Links: GitHub · Changelog · Full release notes · Chromatic pooling spec

References: Ashraf et al. 2024 (castleCSF) · Jiang, Shooner & Mullen 2022 · Abramov, Gordon & Chan 1991 · Bowers, Gegenfurtner & Goettker 2025 · Rosenholtz et al. 2012 (TTM) · Mullen & Kingdom 2002 · Tyler 2015 — Annotated Bibliography · Primer References