When people develop carpal tunnel syndrome or other medical conditions, mainstream input mechanisms like keyboards, mice, and phone touchscreens can become difficult or impossible to use. Such users may have to rely on alternatives like speech recognition, eye tracking, or head gestures. Technology that interprets these mechanisms as user input gives such users agency and a communication channel to the rest of the world.
Every person's case is a bit different. Sometimes a fully custom system has to be designed, as it was for Stephen Hawking. A team from Intel worked through many prototypes with him, saying: "The design [hinged] on Stephen. We had to point a laser to study one individual." Even with custom hardware and custom software to detect muscle movements in his cheek, he could still communicate only a handful of words per minute.
In less severe situations, the goal is often to add accessible input mechanisms to mainstream computers and phones rather than build custom hardware. A blind person, for example, adapts to using a normal smartphone despite not being able to see the screen. It is not economical to design myriad hardware variants for users whose needs all differ slightly, and adapting mainstream devices allows a wide range of existing software to be used, from email clients to Google Maps, without reinventing everything.
In my own case, I have a medical condition that limits how much I can use my hands each day. In order to keep reading books, I designed a speech recognition system and an eyelid blink system to control an ebook reader app. As a computer programmer, I used to make heavy use of a keyboard to write code. With modern speech recognition, it is possible to design a spoken language that allows precise control over a computer. I will demo an open-source voice coding system, and describe how it can be adapted to do any task on a desktop computer.
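To make the idea of a spoken command language concrete, here is a minimal sketch of how a voice coding system might map recognized phrases to keystrokes. The command names and the `dispatch` function are hypothetical illustrations, not the API of any real system; real systems parse a full grammar with variable captures rather than doing exact-match lookup.

```python
# Hypothetical sketch: mapping recognized phrases to editor actions.
# Command names and structure are illustrative, not from a real system.

COMMANDS = {
    "save file": ["ctrl-s"],   # shortcut chords to send to the OS
    "new line": ["enter"],
    "undo that": ["ctrl-z"],
}

def dispatch(phrase):
    """Return the keystrokes a recognized phrase should produce."""
    if phrase in COMMANDS:
        return COMMANDS[phrase]
    # Unrecognized phrases fall back to literal dictation,
    # one character per keystroke.
    return list(phrase)

print(dispatch("save file"))  # ['ctrl-s']
print(dispatch("ok"))         # ['o', 'k']
```

In practice the interesting design work is in the grammar itself: choosing short, unambiguous phrases that the recognizer rarely confuses, so that coding by voice stays fast and precise.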