A team of researchers (Joshua Harrison, Ehsan Toreini, and Maryam Mehrnezhad) has published a paper via Cornell University's arXiv detailing their work training an AI model to interpret keyboard input from audio alone. Trained on recordings of keystrokes, the model predicted what was typed with up to 95% accuracy. Accuracy dropped only slightly, to 93%, when the training audio was recorded over Zoom.
The system doesn’t work on just any keyboard: it must be trained on a specific keyboard, with reference samples mapping each keystroke’s sound to its character. That training audio can be captured locally with a microphone or remotely, using an application like Zoom to record the keystrokes.
In their demonstration, the team tested the concept on a MacBook Pro, pressing 36 individual keys 25 times apiece. These recordings formed the basis for the AI model to learn which character corresponds to which keystroke sound. The waveforms differed subtly enough between keys for the model to recognize each one with a startling degree of accuracy.
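To make the idea concrete, here is a toy sketch of the training-and-recognition loop described above. The researchers’ actual pipeline uses a deep-learning model on spectrograms of real recordings; this illustration substitutes synthetic clicks (each “key” gets its own made-up resonant frequency) and a simple nearest-centroid classifier on log-spectrum features, so every signal and parameter here is an assumption for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 16_000  # sample rate in Hz (assumed)

def synth_keystroke(key_id: int, sr: int = SR) -> np.ndarray:
    """Toy stand-in for a recorded keystroke: a 50 ms decaying burst
    whose dominant frequency depends on the key. Real keystrokes differ
    in subtler ways, but the classification idea is the same."""
    t = np.arange(int(0.05 * sr)) / sr
    f = 800 + 40 * key_id  # per-key "resonance" (invented for the demo)
    clean = np.sin(2 * np.pi * f * t) * np.exp(-t * 80)
    return clean + 0.05 * rng.standard_normal(t.size)  # mic noise

def features(clip: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum as a crude spectrogram-style feature."""
    return np.log1p(np.abs(np.fft.rfft(clip)))

# "Training": 25 presses of each of 36 keys, mirroring the paper's setup.
keys = range(36)
centroids = {k: np.mean([features(synth_keystroke(k)) for _ in range(25)], axis=0)
             for k in keys}

def classify(clip: np.ndarray) -> int:
    """Predict which key produced a clip by nearest feature centroid."""
    feats = features(clip)
    return min(centroids, key=lambda k: np.linalg.norm(feats - centroids[k]))

# Evaluate on fresh, unseen presses.
hits = sum(classify(synth_keystroke(k)) == k for k in keys for _ in range(5))
accuracy = hits / (len(keys) * 5)
print(f"toy accuracy: {accuracy:.2%}")
```

The high accuracy here comes from the artificially clean per-key differences; the striking part of the real result is that genuine keystroke waveforms carry enough of the same kind of signal.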
This type of attack isn’t without weaknesses, though. The team notes several ways to degrade the system’s accuracy, starting with simply changing the way you type: switching to touch typing reduced keystroke recognition accuracy from 64% to 40%. Software could also muddy the input by playing white noise or injecting fake keystroke sounds.
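The noise-masking idea can be sketched in a few lines. This is a hypothetical illustration, not the researchers’ method: it mixes white noise into a stand-in keystroke signal at a chosen signal-to-noise ratio, where driving the SNR low enough buries the per-key spectral cues a model would rely on.

```python
import numpy as np

rng = np.random.default_rng(1)
SR = 16_000  # sample rate in Hz (assumed)

# Stand-in "keystroke": a 50 ms decaying click (invented for the demo).
t = np.arange(int(0.05 * SR)) / SR
keystroke = np.sin(2 * np.pi * 1200 * t) * np.exp(-t * 80)

def mask_with_noise(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix white noise into `signal` at a target signal-to-noise ratio.
    Negative snr_db means the noise is louder than the keystroke."""
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.standard_normal(signal.size) * np.sqrt(noise_power)
    return signal + noise

masked = mask_with_noise(keystroke, snr_db=-6)  # noise ~4x the signal power
residual_snr = 10 * np.log10(
    np.mean(keystroke ** 2) / np.mean((masked - keystroke) ** 2))
print(f"achieved SNR: {residual_snr:.1f} dB")
```

Injecting decoy keystroke sounds works on the same principle, except the “noise” is drawn from recordings of other keys, so the model cannot tell real presses from fakes.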
This kind of attack works best against mechanical keyboards with a loud, audible click, but it isn’t limited to mechanical switches: even a membrane keyboard produces enough sound to train the model. So your best bet for avoiding it is a software-side defense rather than swapping your clicky mechanical keyboard for a quieter one.
If you want to read the team’s findings in depth, check out the official research paper (PDF), which details the full study protocol and the results gathered along the way.