Scrutinizer v1.8: Scientific Accuracy Audit & Feature Congestion


Two themes in this release. First, a scientific accuracy audit — replacing geometric cutoffs with linear M-scaling, recalibrating E2 (the eccentricity half-resolution constant — the distance from fixation, in degrees of visual angle, at which spatial resolution drops to half its foveal value), and qualifying every reference to “Laplacian pyramid” in the codebase. Second, a new analytical tool: Feature Congestion (Rosenholtz et al. 2007) ships as a real-time visual clutter metric with an interactive HUD overlay.

Linear M-scaling

The v1.6 DoG band cutoffs used a geometric 2× series (0.3, 0.6, 1.2, 2.4 × E2), which over-predicts resolution loss near fixation. Biological M-scaling predicts linear growth of minimum resolvable detail with eccentricity:

$$s_{\min}(e) = s_0 \times \left(1 + \frac{e}{E_2}\right)$$

— Rovamo & Virsu (1979). Band $k$ spans spatial scales $2^k$–$2^{k+1}$ px and drops out when $s_{\min}(e)$ exceeds its upper scale, i.e. when $s_{\min}(e) > 2^{k+1}$; taking $s_0 = 1$ px, this gives cutoff eccentricity $E_2 \times (2^{k+1} - 1)$:

| Band | Spatial scale | Cutoff (× E2) | Content |
|---|---|---|---|
| band 0 | 1–2 px | 1 | Serifs, hairlines |
| band 1 | 2–4 px | 3 | Letter bodies, small icons |
| band 2 | 4–8 px | 7 | Words, UI elements |
| band 3 | 8–16 px | 15 | Buttons, layout blocks |
| residual | 16 px+ | always | Overall color/luminance |
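The cutoff rule can be sketched as follows. This is illustrative only, not the shipped API: `cutoffEccentricity` is an invented name, and it assumes $s_0 = 1$ px at fixation.

```typescript
// Hedged sketch of the linear M-scaling cutoff. Assumes s0 = 1 px and that
// band b spans [2^b, 2^(b+1)] px, so the band vanishes once the minimum
// resolvable size s_min(e) = s0 * (1 + e/E2) exceeds its upper scale 2^(b+1).
function cutoffEccentricity(band: number, E2: number): number {
  const upperScale = 2 ** (band + 1); // upper spatial scale of the band, px
  return E2 * (upperScale - 1);       // eccentricity where s_min(e) = upperScale
}

// Reproduces the table's cutoffs, in multiples of E2:
const cutoffs = [0, 1, 2, 3].map((b) => cutoffEccentricity(b, 1));
// → [1, 3, 7, 15]
```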

The perceptual effect: coarse structure (bands 2–3) now persists further into the periphery — you see where a button is but can’t read its label. Fine detail (band 0) drops at the same rate as before. E2 values were recalibrated to preserve the band-0 onset point under the new formula: 0.15 (High-Key), 0.12 (Biological).

Two additional correctness fixes. Hardware MIP levels use box/bilinear downsampling, not Gaussian convolution — every doc, shader comment, and paper reference that said “Laplacian pyramid” now says “approximate Laplacian pyramid” and notes the distinction from Burt & Adelson (1983). And final color output is clamped to [0,1] to prevent negative-going band artifacts from the DoG subtraction reaching the framebuffer.


Feature Congestion

Feature Congestion (Rosenholtz, Li & Nakano 2007) measures visual clutter as local feature variance across color channels. Where saliency answers “what pops out?” (center-surround contrast, relative), congestion answers “how much is going on?” (local variance, absolute). A dense product grid with nothing that particularly pops out still scores high on congestion.

The key finding during implementation: fixed σ=2.5. Auto-scaling σ with resolution — standard for natural images — fails on web content because pages at different resolutions are the same layout at different pixel densities, not the same scene at different zoom levels. Scaling σ up smears text, borders, and UI into indistinguishable blobs. Fixed σ keeps the neighborhood matched to the feature scale that matters.

| Resolution | Auto σ | ρ (auto) | Fixed σ | ρ (fixed) |
|---|---|---|---|---|
| 512 px | 5.0 | 0.53 FAIL | 2.5 | 0.89 |
| 768 px | 7.5 | 0.60 FAIL | 2.5 | 0.93 |
| 1024 px | 10.0 | 0.65 FAIL | 2.5 | 0.92 |
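The core of the congestion computation is local feature variance under Gaussian smoothing, Var ≈ G<sub>σ</sub>∗x² − (G<sub>σ</sub>∗x)², with σ held at 2.5 regardless of analysis resolution. The 1-D sketch below is an illustration of that idea, not the shipped worker code; all names are assumptions.

```typescript
// Separable Gaussian kernel, truncated at 3σ and normalized to sum to 1.
function gaussianKernel(sigma: number): number[] {
  const radius = Math.ceil(3 * sigma);
  const k: number[] = [];
  let sum = 0;
  for (let i = -radius; i <= radius; i++) {
    const w = Math.exp(-(i * i) / (2 * sigma * sigma));
    k.push(w);
    sum += w;
  }
  return k.map((w) => w / sum);
}

// 1-D convolution with clamp-to-edge boundary handling.
function smooth(x: number[], kernel: number[]): number[] {
  const r = (kernel.length - 1) / 2;
  return x.map((_, i) =>
    kernel.reduce((acc, w, j) => {
      const idx = Math.min(x.length - 1, Math.max(0, i + j - r));
      return acc + w * x[idx];
    }, 0)
  );
}

// Local variance of a feature channel (e.g. luminance along a scanline):
// E[x^2] - E[x]^2 under the Gaussian neighborhood, with σ fixed at 2.5.
function localVariance(x: number[], sigma = 2.5): number[] {
  const k = gaussianKernel(sigma);
  const mean = smooth(x, k);
  const meanSq = smooth(x.map((v) => v * v), k);
  return meanSq.map((m2, i) => Math.max(0, m2 - mean[i] * mean[i]));
}
```

A flat region yields near-zero variance; a text edge or border yields a variance bump, which is exactly the signal that a resolution-scaled σ would smear away.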

Two Web Workers handle analysis: saliency (256 px, every 15th frame, continuous) and congestion (1024 px, on-demand when the HUD is toggled). The saliency map texture is now RGB-packed: R=saliency, G=congestion, B=edge density. Scoring formula:

$$\text{score} = \sqrt{\,\text{congestion}_{p90} \times 0.7 + \text{edgeDensity}_{p90} \times 0.3\,} \times 100$$

P90 captures the busy regions and ignores whitespace. Square-root scaling spreads the raw [0,1] range into a discriminative [0,100] score. Validated at Spearman ρ = 0.93 against Rosenholtz’s reference implementation. The full algorithm story — the fixed-σ discovery, validation pipeline, and scoring design — is in the congestion tech brief.
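A minimal sketch of the scoring step. The percentile here uses nearest-rank on a sorted copy, which is an assumption (the notes don’t specify the interpolation method); the function names are illustrative.

```typescript
// Nearest-rank percentile over a copy of the values (assumption: the shipped
// code may interpolate differently).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

// score = sqrt(congestion_p90 * 0.7 + edgeDensity_p90 * 0.3) * 100
function clutterScore(congestion: number[], edgeDensity: number[]): number {
  const c90 = percentile(congestion, 90);
  const e90 = percentile(edgeDensity, 90);
  return Math.sqrt(c90 * 0.7 + e90 * 0.3) * 100;
}
```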


ComplexityHUD

An interactive, draggable overlay replacing the toolbar URL-bar approach. Three tabs: Score (live congestion score with color-coded badge), Stats (p50/p75/p90 breakdowns for congestion and edge density), Spatial (congestion heatmap overlay, blue → yellow → red). Scroll- and navigation-aware: the heatmap hides immediately on scroll to prevent a stale overlay, then restores when fresh worker results arrive.


Mode 9: Congestion-Gated Pooling (Hypothesis)

A new mode wired in modes.json with category: “hypothesis”. The shader reads u_congestionMap on TEXTURE4, but pooling modulation is not yet implemented. The hypothesis: cluttered regions should get stronger peripheral pooling — harder to resolve in the periphery when there’s more local feature variance competing for the same summary-statistic representation.

This tests Rosenholtz’s (2012) prediction that visual clutter and crowding are manifestations of the same computation. High-congestion areas would receive increased MIP pooling, simulating the degraded feature access that occurs when peripheral receptive fields pool over diverse, competing features. Tagged hypothesis to distinguish from the validated modes (0–8).
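Purely as an illustration of the hypothesis (Mode 9 ships with the texture bound but no modulation), the gating might amount to biasing the pooling level upward where congestion is high. The mapping, gain, and function name below are invented, not the planned implementation.

```typescript
// Hypothetical congestion-gated pooling: congestion in [0,1] biases the
// MIP/pooling level upward, so cluttered regions are summarized more coarsely.
// `gain` is an invented free parameter.
function pooledMipLevel(baseLevel: number, congestion: number, gain = 1.5): number {
  const c = Math.min(1, Math.max(0, congestion)); // clamp sampled texel
  return baseLevel + gain * c;                    // higher congestion → coarser pooling
}
```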


What’s next


Links: GitHub · Changelog · Feature Congestion tech brief · Congestion journey (implementation log)