Dynamicland

Design/research project led by Bret Victor; current iteration created ~2017.

Current (2021) lab members (as far as I know):

Past full-time members:

More loosely affiliated:

Log - reading the 2024 new site

  • Front shelf - “hypershelf” can be rearranged in physical space, and a web page generated from the photo rearranges its image map accordingly.
    • Implemented using a “chain of recognizers”—reminds me of Bret’s work with the Nile project. The first fixes the JPEG orientation; the second masks out certain colored regions; the third finds boxes of the right area; the fourth reads the text (==implied: via Tesseract?==); the fifth performs an edit distance with the known set of labels; these outputs are used to update the hyperphoto links.
    • A paper (no dots; apparently just a relative display below each recognizer—the paper is to give more contrast) displays visual representations of each step’s outputs, very like the Nile demo.
    • In the video, at 4:30, Bret has a “magnifying glass”-like page which he can point at part of a recognizer step’s visual output to see more detail.
    • How does the text recognizer work?
    • Each shelf label is a Realtalk object with a label on the back declaring its contents (title, subtitle, link, etc)
    • These are made in an interesting way. When a shelf label program appears, another program lays out and prints the actual shelf label contents corresponding to the label program. Then another (?) program cuts the label and program for the back. Another program can cut and score the card stock backing.
    • “The object can be recognized by its front i.e. via the recognizer chain or by the dot frame on the back i.e. via traditional Realtalk recognizers”. How interesting!
    • Bret says something about making a “==snapshot==” out of the recognizer chain but that it’s not ready. I don’t know what a snapshot is. It looks like… maybe a capture of a bunch of related pages onto another page, so that it can be used more compactly?
    • Bret says something about the recognized links being “==remembered as memories== on the shelf”—some kind of memoization? This is new to me.
    • Bret describes a wonderful anecdote at the end (8:53): he wanted to make a mini-page for The Humane Representation of Thought - Bret Victor, so that people could view the presentation or read the document or poster. But he kept putting it off, because making a web page is so unpleasant. “I kept…opening up a text editor to make this web page, and I was like—I don’t want to make a web page. I don’t want to mess with CSS. … A few days later, I realized I could make a hyperphoto … This enormous wave of relief and joy just flooded over me … that instead of using CSS, I could just use paper and tape” How lovely.
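The recognizer chain's final step (snapping OCR output to the known label set by edit distance) is concrete enough to sketch. This is my own minimal reconstruction, not Realtalk code; the stage names, the `max_dist` threshold, and the label set are all invented for illustration:

```python
# Sketch of the last recognizer stage: match noisy OCR text against the
# known set of shelf labels by Levenshtein edit distance.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def match_label(ocr_text: str, known_labels: list[str], max_dist: int = 3):
    """Snap OCR output to the nearest known label, or None if too far off."""
    best = min(known_labels,
               key=lambda lbl: edit_distance(ocr_text.lower(), lbl.lower()))
    return best if edit_distance(ocr_text.lower(), best.lower()) <= max_dist else None

KNOWN = ["Humane Representation", "Seeing Spaces", "Media for Thinking"]
print(match_label("Hunane Represertation", KNOWN))  # -> Humane Representation
```

The earlier stages (orientation fix, color masking, box finding, OCR) would each be a similar small function, with their intermediate outputs displayed on the paper below each recognizer.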
  • Memories
    • Every page's memories are “stuck” to it, can be edited, copy/pasted like text, etc
    • They travel between “==areas==”
    • Remember is like Claim but begins at the next tick. It’s a key-value store of sorts.
    • “Similarly, every page's memories are also visible. Like patches, you can hide them, but you should feel ashamed to do so.”
    • “You can even cut and paste to move a statement from a page's memories into the page's text, changing "Remembering" to "Claim", in order to "crystalize" it.”
    • “For example, after I take a video on my phone using the mobile camera (forward reference!), there's now a card on the table which is remembering that it represents "video" (video). That card -- the physical object -- should be our handle to the video, not a pathname in some directory. If we want to play that video, we should reference that physical object”
    • “Everything with state, including text boxes, editors, and keyboards, now proudly display their memories.  You can read the pasteboard on the keyboard.”
    • “A primary theme of Realtalk is making computational entities visible and tangible. One would hope that would serve as a guiding principle, and it must at some level, but the experience of design is usually one of struggling with a design that doesn't feel right, laboriously coming to a better solution, and recognizing only in retrospect that the essence of the solution was making some computational entity visible and tangible.”
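My reading of “Remember is like Claim but begins at the next tick” suggests something like a double-buffered key-value store. A speculative toy model, with all names invented (this is not Realtalk's actual semantics, just how I picture it):

```python
# Toy model of Remember semantics: facts remembered this tick only become
# readable at the next tick, and then persist until overwritten.

class Memories:
    def __init__(self):
        self.visible = {}   # facts readable this tick
        self.pending = {}   # facts remembered this tick, visible next tick

    def remember(self, key, value):
        self.pending[key] = value

    def read(self, key):
        return self.visible.get(key)

    def tick(self):
        # At the tick boundary, pending memories join the visible store.
        self.visible.update(self.pending)
        self.pending = {}

m = Memories()
m.remember("represents", "video")
print(m.read("represents"))  # -> None (not visible until next tick)
m.tick()
print(m.read("represents"))  # -> video
```

The persistence (unlike a Claim, which must be re-asserted) is what makes the card on the table a durable handle to the video.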
  • MIDI
    • Very nice procedure for mapping a grid of MIDI knobs to parameters: turn a knob to assign it to the closest parameter to the control surface. Then the knob and a debug display of the parameter are colored with the same color.
    • This design seems to depend on “parameters” as a generic concept. Luke shows pages that have parameters floating on the supporter outside the page.
    • Very lovely demo with a MIDI keyboard. The system knows where the keys are, physically, so it can render a visual trace of the keys you’re playing. And the keys actually play sound, using a realtime audio engine.
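The knob-assignment procedure is essentially a nearest-neighbor query over table positions. A sketch under my own assumptions (the coordinates, parameter names, and color cycle are invented; I don't know how Realtalk actually stores bindings):

```python
# Sketch: turning a knob binds it to the parameter physically closest to
# the control surface, and paints knob + parameter debug display one color.

import math
from itertools import cycle

COLORS = cycle(["red", "green", "blue", "yellow"])

def closest_parameter(knob_pos, parameters):
    """parameters: {name: (x, y)} table positions."""
    return min(parameters, key=lambda name: math.dist(knob_pos, parameters[name]))

def assign(knob_pos, parameters, bindings):
    name = closest_parameter(knob_pos, parameters)
    bindings[name] = {"knob": knob_pos, "color": next(COLORS)}
    return name

params = {"cutoff": (10, 2), "resonance": (40, 5), "gain": (80, 3)}
bindings = {}
print(assign((12, 0), params, bindings))  # -> cutoff
```

The shared color is what closes the feedback loop: you can see at a glance which knob drives which parameter.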
  • Ring pen
    • Glorious example of the recognizer system in action, alongside simple papercraft. Bret makes a hyper minimal “light pen”-like tool which can e.g. select text, scrub numbers in source code, etc. It’s just paper, and he demonstrates a version of its implementation which includes a printable copy of the tool.
  • Color maps
    • A very interesting demonstration of how the system’s capabilities are moving away from “dots and databases”. Bret hand-draws a map with sharpie, then puts printed labels with color names (“green”, “pink”) on regions of the map. Those regions are then filled with that color. No dots on the map or the labels.
    • This is implemented with a recognizer chain akin to the one for the Dynashelf.
    • “This hand-drawn map with text labels is readable both by humans and Realtalk. No encodings or UIs, no dots or databases. It just is what it is.”
  • Realtalk binders
    • The “change scripts” are an interesting design, leaning on the malleability of text for conviviality. The idea is that if someone has modified Realtalk, and you want to consider adopting their changes, you can use a program called “make this like that” which generates a “change script” program which would transform your Realtalk binder into the other one. The change script is a file with one “instruction” per line (e.g. add a page, change a page, remove a page). So you can discard part of the change by just deleting the instructions corresponding to the changes you don’t want. And you can use the standard code editor to do it.
    • “Make this like that” works over “any two objects that collects things”. Not sure what that means, in terms of practical implementation. There’s a “collection tool”, part of “collection kit”, which is shown at 22:43. I think collection kit is specific to Realtalk rulebooks, rather than being just a generic set API, but parts of it do seem more general.
    • There’s a “phantom Realtalk” program which can be made to represent a remote Realtalk rulebook, and then used with the other viewing / editing programs Bret shows.
    • Infrastructurally, this set of changes makes it possible to have a “non-running Realtalk” which you can work on using some other “running Realtalk”—essential for work on the system itself, I imagine.
    • A very nice debugging shot, showing live match annotations in action:
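The change-script design (one instruction per line, delete lines to discard changes) maps naturally onto a set-difference over two collections of pages. A hedged sketch; the instruction vocabulary here matches the three operations described above, but the exact format is my invention:

```python
# Sketch of "make this like that": diff two collections of pages into a
# line-per-instruction change script that can be hand-edited before applying.

def make_this_like_that(mine: dict, theirs: dict) -> list[str]:
    script = []
    for name in sorted(theirs.keys() - mine.keys()):
        script.append(f"add {name}")
    for name in sorted(mine.keys() & theirs.keys()):
        if mine[name] != theirs[name]:
            script.append(f"change {name}")
    for name in sorted(mine.keys() - theirs.keys()):
        script.append(f"remove {name}")
    return script

def apply_script(mine: dict, theirs: dict, script: list[str]) -> dict:
    result = dict(mine)
    for line in script:              # delete lines to discard those changes
        op, name = line.split(maxsplit=1)
        if op in ("add", "change"):
            result[name] = theirs[name]
        elif op == "remove":
            del result[name]
    return result

mine = {"lamp": "v1", "clock": "v1"}
theirs = {"lamp": "v2", "timer": "v1"}
print(make_this_like_that(mine, theirs))  # -> ['add timer', 'change lamp', 'remove clock']
```

Because the script is just text, the standard code editor is the merge tool: dropping an instruction line is rejecting that part of the change.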
  • Recognition Kit (2024)
    • Hm, finding myself with not nearly enough context/info to understand Luke’s demo video here.
    • A “stable appearance” concept in the kit memoizes elements in the pipeline and only updates them when there’s a significant change, avoiding constant recomputation driven by trivial camera noise.
    • “Marks” detect “color and measurements of contours”
    • Can ask for spatial relations of marks: query marks inside of marks, or “to the right of”, etc.
    • Very cool to see these pipelines running in 10-100µs.
    • “Can ask about all the green rectangles / red rectangles”
    • Luke’s final email, 2024-08-15, shows a system which interprets (in real time!) the symbols of an L-system hand-drawn on a whiteboard. But I can’t tell at all how it works. I don’t know how the “cards” at the far right work.
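The marks-and-spatial-relations queries are easy to picture as predicates over bounding boxes. A speculative data model, guessed from the demo (the `Mark` fields and query names are my own, not Recognition Kit's API):

```python
# Guessed data model for "marks": each has a color and a bounding box, and
# you can query containment ("marks inside marks") or relative position.

from dataclasses import dataclass

@dataclass
class Mark:
    color: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, other: "Mark") -> bool:
        return (self.x <= other.x and self.y <= other.y
                and other.x + other.w <= self.x + self.w
                and other.y + other.h <= self.y + self.h)

def inside(outer: Mark, marks: list[Mark]) -> list[Mark]:
    return [m for m in marks if m is not outer and outer.contains(m)]

def right_of(anchor: Mark, marks: list[Mark]) -> list[Mark]:
    return [m for m in marks if m.x >= anchor.x + anchor.w]

frame = Mark("green", 0, 0, 100, 100)
dot   = Mark("red", 10, 10, 5, 5)
tab   = Mark("red", 120, 40, 10, 10)
print([m.color for m in inside(frame, [frame, dot, tab])])  # -> ['red']
```

With the “stable appearance” memoization in front, queries like these plausibly run over a small, mostly-unchanging set of marks, which would explain the 10-100µs pipeline times.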
  • Archive cards (2022)
    • “Realtalk Javascript” in the archive page! (last video, 3:19)
  • Archive editing (2024)
    • A really lovely moment at 6:40 where Bret can insert some media from their archive into the web page by shining a laser at it.
    • “Lots of little ten line tools” <- that really is the beauty of Realtalk for this kind of task.
  • Algorithmic Alphabets
    • Talk with Alex McLean
    • More information about some of what Luke shows in Recognition Kit, particularly of the L-system demo
  • Improvising cellular playgrounds in Realtalk
    • First scientific conference talk in Realtalk, realizing some of the “dynamic presentation” ideas first presented in The Humane Representation of Thought - Bret Victor
    • Presentation used:
    • a large table as manipulation surface (and secondary table to store materials)
    • projector screen displaying handheld video camera of contents of large table
    • persistent timeline projected onto wall for larger context
      • they created Realtalk cards for key people; when the cards are out, the person’s lifespan and institutions are shown on the persistent timeline
      • likewise, when papers are placed on the table, an icon is added on the timeline “so that as the talk proceeds, you can always see the entire sequence of papers so far in context”
      • lasering a paper’s icon on the timeline shows page thumbnails
    • Bret notes that after Shawn made his presentation materials (including slide-like pages, papers with “live” figures, etc), he’d often pull them out in meetings with collaborators and funders, and they’d stay on the table to support conversation. Previously he’d show something on his phone, which is awkward, so the phone would get put away quickly, and the materials couldn’t stay out to support conversation.
    • Bret explains key papers by recreating their structures live on the table, using a set of cards for modeling DNA manipulation: ligate, duplicate, mutate, crossover, etc

2024 questions

  • Bret argues that physicality and spatiality can give some of the order-of-magnitude reduction in code complexity which STEPS aspired to. What kinds of complexity are removed?
    • representations and interactions which replicate many spatial notions: e.g. manipulable objects on a canvas
    • many database applications; “the file system”
    • “collaboration” libraries and interactions
    • much interface rendering code
  • “Unlike the print medium, which is inherently oriented toward private study, the characteristics of this medium make people prefer to study together”
    • Why are these people in the library? Curiosity? Learning for learning’s sake?

Last updated 2024-09-15.