Peripheral vision

My physical workspace is full of subtle cues. The books I read or bought most recently are lying out. Papers I’ve accumulated are lying in stacks on my desk, very roughly arranged by their relationships to each other. I notice a broken door every time I walk by it. These cues together give me a kind of “peripheral vision”: when I’m doing one thing, it’s easy for me to fluidly notice other nearby things. So long as the peripheral vision is reasonably dynamic, anyway—Unchanging peripheral vision desensitizes.

Software systems, by contrast, often lack this kind of peripheral vision. (Though there have been some attempts; see e.g. Just-in-time information retrieval agents.)

Peripheral vision can spontaneously prompt actions

Digital task lists live in a dedicated app, and I have no natural reason to look at that app’s contents. If I need to fix a broken door, the door itself will remind me of that task intermittently as I walk around the house. But if my tasks live primarily in a digital list, I have to establish a habit of explicitly reviewing it.

Peripheral vision emphasizes the concrete

Unread digital books and papers live in some folder or app, invisible until I decide that “it’s reading time.” But that inverts cause and effect: I have to decide to read in the abstract before any particular book can catch my eye. When I leave books lying on my coffee table, I’ll naturally notice them at receptive moments, and I’ll decide to start reading based on my reaction to a specific book. In these cases, the motivation to read physical books comes from my actual interest in a concrete work; the motivation to read digital books comes from my abstract interest in the habit of reading.

This is a big issue with the system described in A reading inbox to capture possibly-useful references.

Peripheral vision offers context

If I mark up a physical book, then later flip through to see my margin notes, I’ll always see them in the context of the surrounding text. By contrast, digital annotation listings usually display only the text I highlighted, removed from its context. The primary “unit” in such systems is a single highlight or note, but that’s not how I think. Margin marks have fuzzy boundaries, and I often think of a page’s worth of markings as a single unit.

LiquidText is a lovely counterexample: it works hard to display annotations in context. See also PeekQuotes.


All this is part of why I like a Studio environment: constantly being physically surrounded by the work is very different from needing to choose to “pull up” some element of the work.


References

My Twitter thread on this note: Andy Matuschak on Twitter: “Software interfaces undervalue peripheral vision! (a thread) My physical space is full of subtle cues. Books I read or bought most recently are lying out. Papers are lying in stacks on my desk, roughly arranged by their relationships.… https://t.co/jaLLpxXh3y”

Mark Weiser and John Seely Brown. “Designing Calm Technology”.
https://calmtech.com/papers/designing-calm-technology.html

Technologies encalm as they empower our periphery. This happens in two ways. First, as already mentioned, a calming technology may be one that easily moves from center to periphery and back. Second, a technology may enhance our peripheral reach by bringing more details into the periphery. An example is a video conference that, by comparison to a telephone conference, enables us to attune to nuances of body posture and facial expression that would otherwise be inaccessible. This is encalming when the enhanced peripheral reach increases our knowledge and so our ability to act without increasing information overload.


Q. Taylor’s experiment to solve the emotional problems of digital books? (from 2022-09 Fey Computer Festival)
A. Hire an UpWorker to associate GoodReads quotes with all the books in his to-read list, so he’s reacting to quotes in his inbox, rather than to abstract titles.

Last updated 2023-07-13.