It’s hard to maintain design artifacts across different levels of fidelity

You write a context scenario (à la Alan Cooper) or user story; you use that to write a conceptual model (à la Johnson and Henderson); you use that to create a wireframe; you use that to make increasingly high-fidelity comps. But design isn’t a waterfall model. With each step up in fidelity, you often revise your conception of the more foundational model. The trouble is that we usually don’t bother to update the conceptual model when we figure something out in the wireframing stage. This can lead to overwhelm (you have to keep everything in your head); to trouble seeing clearly (since the more foundational documents are no longer reliable guides); and to obvious communication/collaboration challenges.

One approach to this situation might be to create a tool which lets designers fluidly draw connections between these different representations. Ideally, you could use that tool to create a workflow which would help you propagate downstream changes back upstream. One could imagine a Figma plugin which walks you through quickly tying your conceptual model to your wireframe. It might walk through each conceptual model element, one at a time, saying—OK, now click on the part of your design which represents “invitations can be deleted”. Persistent visual cues might make it clear when elements of the conceptual design aren’t represented in the wireframe, or vice versa. Of course, it would be even better if the system could automatically identify such situations.
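To make that concrete, here’s a minimal sketch of what the walkthrough loop might look like as a Figma plugin, written against Figma’s plugin API. The conceptual model elements and the plugin-data key are invented for illustration; a real plugin would load the elements from somewhere.

```typescript
// Minimal sketch of the walkthrough loop for a Figma plugin.
// Assumes Figma's plugin typings (@figma/plugin-typings); the elements
// and the plugin-data key below are invented for illustration.
const conceptualModelElements = [
  "invitations can be deleted",
  "events have one or more invitations",
];

let current = 0;

function promptForNext(): void {
  if (current >= conceptualModelElements.length) {
    figma.closePlugin("All conceptual model elements are linked.");
    return;
  }
  figma.notify(
    `Click the part of your design which represents: “${conceptualModelElements[current]}”`
  );
}

// Each time the designer selects a node, record the link as plugin data
// on that node, then move on to the next conceptual model element.
figma.on("selectionchange", () => {
  const selection = figma.currentPage.selection;
  if (selection.length !== 1) return;
  selection[0].setPluginData(
    "conceptualModelElement", // arbitrary key chosen for this sketch
    conceptualModelElements[current]
  );
  current += 1;
  promptForNext();
});

promptForNext();
```

The same plugin data could drive the persistent visual cues: scan the document for nodes carrying the key to flag conceptual elements with no wireframe counterpart, and vice versa.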

Another approach might be to lighten the burden involved in creating each of these stages from the last. It’s enough of a hassle to make a conceptual model or wireframe that designers often don’t bother—even though it’s “good for them.” For instance, Taylor Rogalski prototyped a large language model prompt which generates a conceptual model (in Mermaid) from a high-level textual description of a program. The machine doesn’t need to get it totally right, so long as the provisional model is fluidly editable by the designer afterwards (see Heer, J. (2019). Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences, 116(6), 1844–1850).
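As a rough sketch of that generation step (not Rogalski’s actual prompt—the model name, prompt wording, and use of OpenAI’s chat completions endpoint are all assumptions here):

```typescript
// Sketch of the generation step: high-level description in, provisional
// Mermaid conceptual model out. Runs under Node 18+ (global fetch;
// needs @types/node) with an OPENAI_API_KEY environment variable.
async function draftConceptualModel(description: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // placeholder model name
      messages: [
        {
          role: "system",
          content:
            "You draft conceptual models for interface designers. Given a program description, output only a Mermaid class diagram of its objects, attributes, operations, and relationships.",
        },
        { role: "user", content: description },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // a draft for the designer to edit
}

draftConceptualModel(
  "An app where hosts create events and send invitations; invitees can accept, decline, or be removed."
).then(console.log);
```

The comment on the return value is the important part: per Heer’s framing, the output is a starting point the designer reshapes, not an artifact to accept.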

One trouble for both of these approaches is that we have good representations of high-fidelity comps (e.g. Figma documents); but we don’t really have good digital representations of conceptual models, or, to a lesser extent, context scenarios. What might a powerful, purpose-built editor for conceptual models look like? What is the ideal dynamic representation, one which lends itself to the sorts of approaches described above?
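One way to sharpen that question is to sketch a candidate representation and see what it’s missing. Here’s one possibility, loosely following Johnson and Henderson’s decomposition into objects, attributes, operations, and relationships; every name in this schema is invented for illustration, not established.

```typescript
// One candidate schema for a digital conceptual model, loosely following
// Johnson and Henderson. All names here are invented for illustration.
interface ConceptualObject {
  name: string; // e.g. "Invitation"
  attributes: string[]; // e.g. ["recipient", "status"]
  operations: string[]; // e.g. ["send", "delete"]
}

interface Relationship {
  from: string; // name of a ConceptualObject
  to: string;
  kind: "contains" | "references" | "specializes";
}

interface ConceptualModel {
  objects: ConceptualObject[];
  relationships: Relationship[];
  // Links downstream: tie each model element to a node in a wireframe
  // document, so tools can propagate changes in both directions.
  wireframeLinks: { element: string; wireframeNodeId: string }[];
}
```

Even this toy schema hints at requirements for the editor: the representation would need to stay linkable to external documents like wireframes, and loose enough to remain useful while the model is still half-formed.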

References

Taylor Rogalski first pointed this out to me in conversation on 2022-08-21; the possible solutions described above came from our jam on 2022-10-17.

Last updated 2023-07-13.