2026-03 Tech Brief
How GPU MIP Chains Simulate Peripheral Vision
Peripheral loss is better modeled as spatial pooling than as uniform optical blur. Ganglion cells near the fovea have tiny receptive fields; peripheral ones have large fields, so spatial resolution falls off steeply with eccentricity. GPUs already encode this hierarchy: the MIP chain. Each level halves resolution, and textureLod() samples any level directly. Scrutinizer maps eccentricity to MIP level — sharp at center, pooled at edges — one texture lookup per pixel.
The MIP chain
A MIP chain is a stack of progressively downsampled copies of the same image. Level 0 is full resolution; each successive level halves width and height (so level 4 is 1/16 linear scale, with 1/256 of the pixels).
Each level is a stronger low-pass version of the image: fine detail disappears first, coarse structure remains.
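A minimal NumPy sketch of chain construction: the 2×2 box average mirrors what hardware MIP generation typically does, and `build_mip_chain` is an illustrative helper name, not a GPU API.

```python
import numpy as np

def build_mip_chain(img):
    """Build a MIP chain by repeated 2x2 box-filter downsampling
    (illustrative helper; hardware typically averages the same way)."""
    levels = [img.astype(np.float64)]
    while min(levels[-1].shape[:2]) > 1:
        prev = levels[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        # Average each 2x2 block: the next level halves width and height.
        down = prev[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
        levels.append(down)
    return levels

chain = build_mip_chain(np.random.rand(16, 16))
# 16x16 input -> levels of 16, 8, 4, 2, 1 pixels per side; level 4 holds
# a single pixel: 1/16 linear scale, 1/256 of the pixels.
```

Each averaging step discards detail finer than its block size, which is exactly the "stronger low-pass at each level" behavior described above.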
From MIPs to DoG bands
A Difference-of-Gaussians (DoG) band isolates a range of spatial frequencies by subtracting two blur scales. The MIP chain already gives us discrete blur scales, so we approximate DoG directly from adjacent levels:
band_k ≈ level_k − level_{k+1}
- level_k = content up to scale k (low-pass)
- level_{k+1} = an even lower-pass version
- subtracting them leaves the octave of detail between those scales (band-pass)
This is why the demo moves naturally from showing MIP levels to showing DoG bands: the bands are computed from those same levels.
Stacking all bands reconstructs an approximate Laplacian pyramid. (Approximate because hardware MIPs use box/bilinear filtering, not true Gaussian kernels, so some spectral leakage is expected.)
Scrutinizer then drops high-frequency bands with eccentricity: near the fovea most bands are retained; farther out (by 15° in default settings), only coarse bands remain.
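The band construction and its pyramid reconstruction can be sketched as follows. This is an illustration, not Scrutinizer's code: the helper names are made up, nearest-neighbor upsampling stands in for the GPU's bilinear fetch, and power-of-two image sizes are assumed so shapes line up.

```python
import numpy as np

def halve(img):
    """One MIP step: average each 2x2 block (box filter)."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbor 2x upsample, standing in for the GPU's bilinear fetch."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def dog_bands(img, n_levels=3):
    """band_k = level_k - upsample(level_{k+1}): one octave of detail each.
    Returns the band stack plus the coarsest (residual low-pass) level."""
    levels = [img]
    for _ in range(n_levels):
        levels.append(halve(levels[-1]))
    bands = [levels[k] - upsample(levels[k + 1]) for k in range(n_levels)]
    return bands, levels[-1]

def reconstruct(bands, coarsest):
    """Telescoping sum: adding bands back, coarse to fine, recovers level 0."""
    img = coarsest
    for band in reversed(bands):
        img = band + upsample(img)
    return img
```

Note that the reconstruction is exact by construction (a telescoping sum); the "approximate" part is spectral — box/bilinear levels are not true Gaussian scales, so each band leaks outside its octave.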
Eccentricity-based selection
With Scrutinizer, your cursor is the fovea. As eccentricity increases, the selected MIP level rises and high-frequency bands are suppressed, producing peripheral pooling.
In the shader, one textureLod() call per pixel:
float mipLevel = log2(1.0 + r_deg / a);      // eccentricity (deg) -> fractional MIP level
vec4 color = textureLod(page, uv, mipLevel); // with a trilinear sampler, blends the two nearest levels
The logarithmic mapping comes from cortical magnification — Blauch, Alvarez & Konkle (2026) formalized it as M(r) = 1/(r + a). The resolution ratio at eccentricity r versus the fovea is (r + a) / a; taking log2 gives the MIP level directly, since each level halves resolution. The constant a (2.78°) is a scale parameter controlling foveation strength: at r = a, resolution is halved (MIP 1); at 10°, it's reduced ~4.6× (MIP 2.2).
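Plugging numbers into the mapping reproduces those figures; `mip_level` below is just the shader expression transliterated, with the brief's a = 2.78°.

```python
import math

A = 2.78  # scale parameter a from the brief, in degrees

def mip_level(r_deg):
    """Eccentricity (degrees from gaze) -> MIP level;
    the shader's log2(1.0 + r_deg / a) transliterated."""
    return math.log2(1.0 + r_deg / A)

print(round(mip_level(2.78), 2))  # 1.0  (resolution halved at r = a)
print(round(mip_level(10.0), 2))  # 2.2  (resolution reduced ~4.6x)
```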
Biology → GPU
- Ganglion cell RF size → MIP level (each level doubles the pooling width, so 4× the integration area)
- Approximate DoG / Laplacian pyramid → subtraction of adjacent MIP levels (box-filtered, not Gaussian)
- Cortical magnification → log2(1 + r/a), the eccentricity-to-MIP-level mapping (each level = 2× resolution loss)
- Foveal resolution → MIP level 0 (full-res, no pooling)
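Putting the mappings together, band survival at a given eccentricity can be sketched with a simple threshold rule: bands finer than the local MIP level are pooled away. This is illustrative only (the cutoff rule and band count are assumptions, not Scrutinizer's exact falloff).

```python
import math

A = 2.78  # scale parameter a, in degrees

def retained_bands(r_deg, n_bands=6):
    """Keep DoG bands coarser than the local MIP level; finer bands
    are pooled away. A hypothetical threshold rule for illustration
    (band 0 = finest octave)."""
    level = math.log2(1.0 + r_deg / A)
    return [k for k in range(n_bands) if k >= level]

print(retained_bands(0.0))   # [0, 1, 2, 3, 4, 5] -- every band at the fovea
print(retained_bands(15.0))  # [3, 4, 5] -- only coarse bands remain
```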
The MIP chain was built for texture filtering in 1983. The retina evolved under selection pressure. They converged on the same structure because progressive spatial pooling is a good solution to the same problem: representing a scene at multiple resolutions efficiently. Scrutinizer exploits that convergence.
Links: GitHub · FOVI & v1.7 blog post · v1.4: MIP Pooling origin story · Kitten image: Pixabay CC0