The future of voice coding
Bas van Essen
Thursday, January 31, 2019
The majority of developers rely on visual aids such as auto-indentation, block-based editors, bracket matching and syntax highlighting. But these are of no real use to visually impaired people. Thankfully, there are some existing and upcoming alternatives that ease their struggle.
Being able to read code in an IDE is essential for functioning as a software developer. It’s the environment where junior and senior developers alike lean heavily on auto-indentation, bracket matching and syntax highlighting. But these cues are of no help to visually impaired people. Instead, blind yet ambitious programmers have found alternative tools that make code accessible through speech.
Voice coding needs two kinds of software: a speech-recognition engine and a platform for voice-coding commands. Today, Dragon from Nuance, a speech-recognition software developer, is an advanced engine widely used for voice coding, with Windows and Mac versions available. Windows also has its own built-in speech-recognition system. On the platform side, VoiceCode by Ben Meyer and Talon by Ryan Hileman (both Mac OS only) are the more popular options.
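To make the division of labor concrete, here is a minimal sketch of what the command-platform half does once the engine has recognized a phrase. This is illustrative only — the command names and the `execute` function are invented for this example and are not VoiceCode’s or Talon’s actual API:

```python
# Illustrative sketch (hypothetical, not a real voice-coding platform's API):
# the speech engine produces a transcript; the platform maps known spoken
# phrases to editor actions and passes everything else through as dictation.

# Grammar: spoken command -> template for the text the editor should insert.
COMMANDS = {
    "def function": "def {}():",
    "for loop": "for {} in {}:",
    "print that": "print({})",
}

def execute(transcript: str, *args: str) -> str:
    """Expand a recognized phrase with its dictated arguments."""
    template = COMMANDS.get(transcript)
    if template is None:
        # Unrecognized phrases fall through as literal dictation.
        return transcript
    return template.format(*args)

print(execute("def function", "parrot"))      # def parrot():
print(execute("for loop", "i", "range(10)"))  # for i in range(10):
```

Real platforms are far richer — continuous command chaining, context-sensitive grammars, cursor movement — but the core loop of matching spoken phrases against a grammar looks much like this.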
A great source of inspiration for voice coding has been a video of Tavis Rudd, now director of technology at development company Unbounce. In front of the crowd at the PyCon 2013 conference, he used his method to dictate code, instructing his laptop to read aloud a snippet of Monty Python’s Dead Parrot sketch. See the video below.
After that, new initiatives popped up to iterate and improve on Rudd’s idea, especially in the academic scene. One example is CodeMirror-Blocks, a recently released toolkit for creating fully accessible, browser-based coding environments for several languages. For any language whose parser meets certain requirements, the toolkit creates a block editor that is functional even for sighted users. It communicates the structure of a program through spoken descriptions, and allows navigation using standard (accessible) keyboard shortcuts.
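The core idea — turning a program’s syntax tree into spoken descriptions — can be sketched in a few lines. CodeMirror-Blocks itself is a JavaScript toolkit; the Python version below is only a hypothetical illustration of the principle, using Python’s own `ast` module:

```python
# Illustrative sketch of generating spoken descriptions from a syntax tree
# (the principle behind tools like CodeMirror-Blocks, not its actual code).
import ast

def describe(source: str) -> list:
    """Produce one spoken phrase per top-level construct in the source."""
    phrases = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            phrases.append(f"a function named {node.name} "
                           f"with arguments {args or 'none'}")
        elif isinstance(node, ast.Assign):
            targets = ", ".join(ast.unparse(t) for t in node.targets)
            phrases.append(f"an assignment to {targets}")
        else:
            # Fall back to the node type for constructs we don't special-case.
            phrases.append(f"a {type(node).__name__} statement")
    return phrases

print(describe("def add(a, b):\n    return a + b"))
# ['a function named add with arguments a, b']
```

A screen reader can then speak these phrases as the user steps through the blocks with the keyboard, which is exactly the navigation model the toolkit provides.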
Conversational Developer Assistant
Another invention, useful for both sighted and visually impaired programmers, is the Conversational Developer Assistant. A CDA lets developers focus far more on high-level development tasks by reducing the number of low-level commands they have to execute manually. It does so by deducing the high-level intent from a developer’s voice commands and combining it with an automatically generated context model.
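A minimal sketch of that idea — one spoken intent expanding into several low-level commands, with gaps filled from a context model — might look like this. The intent name, commands and context fields are hypothetical examples, not Devy’s implementation:

```python
# Hypothetical sketch of a Conversational Developer Assistant: a high-level
# spoken intent expands into low-level commands, parameterized by an
# automatically generated context model (here, hard-coded for illustration).

CONTEXT = {"branch": "feature/login", "remote": "origin"}

INTENTS = {
    # high-level intent -> the low-level command sequence it replaces
    "submit my changes": [
        "git add -A",
        "git commit -m '{message}'",
        "git push {remote} {branch}",
    ],
}

def plan(utterance: str, **details: str) -> list:
    """Expand a spoken intent into concrete commands using the context."""
    steps = INTENTS.get(utterance, [])
    params = {**CONTEXT, **details}
    return [step.format(**params) for step in steps]

print(plan("submit my changes", message="fix login bug"))
# ['git add -A', "git commit -m 'fix login bug'", 'git push origin feature/login']
```

The payoff is that the developer says one sentence instead of typing three commands, and the context model — not the developer — remembers details like the current branch.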
An example is the recently released CDA called Devy. In a mixed-methods evaluation with 21 industry developers, Devy’s creators received feedback that their conversational developer assistant offers an intuitive interface that can support numerous development tasks while helping developers stay focused on their programming environment.
Brain Computer Interface
A potentially more liberating solution for visually impaired programmers, at least for those able to visualize code in their minds, is a brain-computer interface (BCI). Scientists are not quite there yet, but it is encouraging, for example, that in 2017 both Mark Zuckerberg and Elon Musk announced efforts to build one. Since then, we have at least seen a BCI that enables virtual-reality gaming.
The sighted side of the software developer community, at least, seems to believe in the foreseen future of BCIs. In a recently published research paper, developers from the San Francisco Bay Area expressed a strong shared belief that the contents of our minds will someday be “read” or “decoded” by machines. One of the interviewed developers voiced a desire for a BCI as our future programming assistant. Now that would be truly awesome! Watch the presentation below to hear more about the research and its findings.
Categorised in: Tech