The Augmental folks kindly invited me to visit their SF lab on 2022-08-23. I was connected to them through my comments on TouchBoard, a silent speech interface in the form factor of a dental retainer. They’re working on an input device in that form factor, currently targeting customers with motor disabilities.
Their device fits over the upper molars. The form factor has a very low profile, so it doesn’t impede speech. It has a small touchpad surface, meant to be controlled by the tongue. The touchpad can sense pressure (for “clicks,” like a MacBook trackpad). The device also contains a 6-axis gyro/accelerometer package and a temperature sensor, as well as some mechanism for detecting exhalation/inhalation. They intend to add force sensors on the molars, though I believe those haven’t been included yet. One early prototype featured a microphone, but it evidently posed considerable miniaturization and power challenges.
Perhaps because I was anchored on silent speech, I was initially underwhelmed. Sure, maybe you can cram all that stuff into a retainer, but can one do anything useful with it? A startling demo made me suddenly pay much more attention: co-founder Tomás put one of the devices on, connected it via Bluetooth to his Mac, and used it to move his cursor to the tiny “close” button at the upper-left corner of the frontmost window. He “clicked” by pressing his tongue against the trackpad, and the window closed. I was very impressed: that’s a tiny hit target! He hit it quickly, on the first try, and with no visible motion of his body.
OK, so within certain parameters, this satisfies some of the characteristics of a poor man’s brain-computer interface: it offers omnipresent, hands-free, screen-free, invisible, multi-axis input. That’s worth taking seriously.
I wanted to be helpful, so I steered our conversation towards interface design. I suggested we spend a while riffing on mass-market consumer applications of this device. A framework I found generative: as a persistent, unobtrusive device, AirPods are the mainstream modality to beat. For discrete, command-and-control input, they’re great. Their significant deficit is that they’re not fully unobtrusive/silent, and they can’t really do continuous, multi-axis, analog input (except dictation, which counts in limited contexts). So, focus on those elements. What if you had something “worn” like an AirPod but which could produce input like a trackpad?
A few directions I particularly liked: