Building the next computing user interface
Craig Grannell, writing for Wired UK, delves into Apple’s announcements at the recently concluded Worldwide Developers Conference and the role they may play in making audio the next big user interface:
This infatuation with audio is also a refreshing change in a world routinely obsessed with what you see rather than what you hear. We’re so often informed about innovations in AR and VR, dazzling environments and visual immersion. But voice UI and audio are just as important – and, in some ways, more so when you consider Apple’s reasoning regarding focus and clarity.
Much of what you can glean from AR you can get from audio, and with fewer distractions. An always-on AR overlay can be disruptive and in your face. It changes how you experience and interact with the world, in a not necessarily positive fashion. Audio input, by contrast, is fleeting and focused. It enhances the concept of mindful and meaningful tech use, in context, unlike equipping all humans with a heads-up display. Bose tried something similar with its Frames audio glasses, but couldn’t stay the course.
A counterpoint argument is that Apple Glasses are long-rumoured and may appear in the near future. But, even then, it’s the union of the senses combined with the foundations Apple is laying that will make them all the more powerful. Apple is being considered about the audio layer, to the point that if visuals are added, the company will have a big advantage over rivals trying to do everything at once – or, worse, multiple organisations attempting to combine fragmented resources to achieve the same goal. Plus, importantly, Apple has that sense of focus.
While Grannell focused on audio, the work Apple is doing on accessibility across all of its devices and operating systems is what resonates with me. In a special 10-minute session titled “Accessibility by design: An Apple Watch for everyone”, Apple engineers and designers highlight their approach to accessible design, iteration, and community engagement.
I suggest you watch the whole session, but if you prefer you can jump ahead to around the six-minute mark, where the Apple Watch team addresses the question “could we create a completely gesture-based human-to-computer interaction?” What they achieved, using four very simple hand gestures to fully control the watch, feels like an evolved version of hand tracking for head-mounted displays.
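To make the point concrete, here is a toy sketch (my own illustration, not Apple’s implementation) of just how small that gesture vocabulary is. The session demonstrates four gestures: pinch, double-pinch, clench, and double-clench; the mapping to focus and selection actions below is my assumption for illustration, but it shows how four inputs can be enough to traverse and operate an entire interface.

```swift
// Toy sketch, not Apple's implementation: a minimal mapping from the
// four hand gestures shown in the session to cursor actions. The point
// is that a four-gesture vocabulary can drive a complete UI.

enum HandGesture {
    case pinch          // single pinch of thumb and index finger
    case doublePinch    // two quick pinches
    case clench         // fist clench
    case doubleClench   // two quick clenches
}

enum WatchAction {
    case focusNext       // move focus to the next on-screen element
    case focusPrevious   // move focus to the previous element
    case activate        // tap the focused element
    case showActionMenu  // open a menu of further actions
}

// Hypothetical mapping for illustration only.
func action(for gesture: HandGesture) -> WatchAction {
    switch gesture {
    case .pinch:        return .focusNext
    case .doublePinch:  return .focusPrevious
    case .clench:       return .activate
    case .doubleClench: return .showActionMenu
    }
}

// Example: move focus forward twice, then activate the selected element.
let sequence: [HandGesture] = [.pinch, .pinch, .clench]
for gesture in sequence {
    print(action(for: gesture))
}
```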
That pursuit of “gesture-based human-to-computer interactions” sounds very much like the development of a new user interface. Between the spatial audio accessibility features on AirPods Pro, the accessibility features on Apple Watch, and many years of accessibility design work, Apple appears to be introducing features that are helpful to many people right now, while simultaneously getting everyone used to those features and how they work. In the process, it is creating the human interface for spatial computing.