Feel the noise: turning sound into touch

Adam Price, January 2017

One of the areas in which the IOA is doing a great job is in explaining to the world that acoustics is more than just musical or architectural in nature.  If you’re involved in acoustics then you’ll know that it’s a discipline that covers everything from bioacoustics to ultrasound imaging.

But even the well-informed may be surprised to find out how we’re using acoustics – and specifically ultrasound – at Ultrahaptics. We’re creating what are known as touchless haptic feedback systems: solutions that bring the sense of touch to user interface design. For example, today’s gesture recognition systems can sense the mid-air motion of your hand well enough to mimic controls such as traditional rotary dials (generally using a camera and some image-processing ‘smarts’). Ultrahaptics technology passes information in the opposite direction: by focusing ultrasound onto the skin, it provides touch feedback that gives you the sensation of actually operating the dial. You can watch a video illustrating the concept here: https://youtu.be/tBMkBS_Rlgc. Put simply, what we do is provide a better connection between people and technology.

The use of acoustic radiation force to create haptic feedback was first demonstrated in 1995 by a group of researchers under Diane Dalecki at the University of Rochester (although there had been related investigations as far back as the 1970s). Focusing ultrasound onto the surface of the skin induces a shear wave in the skin tissue. Although the majority of the ultrasound energy is reflected – research suggests that less than 0.1% is absorbed – this small amount of energy is enough to trigger mechanoreceptors within the skin, generating a perceptible sensation.
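To get a feel for the scale of the forces involved, here is a back-of-the-envelope sketch of the radiation pressure a focused beam exerts on a reflecting surface such as skin. The intensity figure and the 1 cm² patch are illustrative assumptions, not measured Ultrahaptics values; the formula is the standard plane-wave radiation pressure for a reflector, p = (1 + R)·I/c.

```python
# Back-of-the-envelope estimate of acoustic radiation pressure on skin.
# Because nearly all of the ultrasound is reflected, momentum transfer
# is close to the perfect-reflector case, p ≈ 2I/c. The intensity value
# below is illustrative only.

C_AIR = 343.0  # speed of sound in air at ~20 °C, m/s

def radiation_pressure(intensity_w_m2: float, reflection_coeff: float = 1.0) -> float:
    """Radiation pressure (Pa) exerted by a plane wave of the given
    intensity on a surface; reflection_coeff=1 models total reflection."""
    return (1.0 + reflection_coeff) * intensity_w_m2 / C_AIR

# Illustrative focal spot: 100 W/m^2 over a 1 cm^2 patch of skin.
pressure = radiation_pressure(100.0)   # Pa
force = pressure * 1e-4                # N, over 1 cm^2
print(f"{pressure:.3f} Pa -> {force * 1000:.4f} mN on the patch")
```

The forces come out in the millinewton range, which is tiny in everyday terms but enough to displace the skin and trigger its mechanoreceptors.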

That’s the fundamental principle, but there’s still quite a way to go from there to a practical haptic feedback system.

We use an array of up to 256 ultrasonic transducers to generate an acoustic interference pattern and create one or more focal points of ultrasonic energy targeted on the user’s palm or finger(s). The transducers themselves are, of course, important elements, but just as vital is the availability of powerful, low-cost computing hardware, and signal processing algorithms smart enough to drive the transducers and produce the desired interference pattern.
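The basic idea behind creating a focal point with such an array can be sketched in a few lines: drive each transducer with a phase offset that compensates for its path length to the target, so that all the emitted waves arrive there in phase. The array geometry (a 16×16 grid at roughly half-wavelength-scale pitch) and the 40 kHz carrier below are illustrative assumptions, not product specifications, and real driver algorithms are considerably more sophisticated.

```python
# Sketch: focusing a phased array of ultrasonic transducers at a point
# in mid-air. Each element gets a phase offset phi_i = k*|r_i - focus|,
# so all contributions arrive at the focus in phase. Geometry and
# carrier frequency are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0           # m/s in air
FREQ = 40_000.0                  # Hz, a common airborne ultrasound carrier
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * np.pi / WAVELENGTH       # wavenumber

# A 16x16 grid of 256 transducers at 10.3 mm pitch in the z=0 plane.
pitch = 0.0103
xs = (np.arange(16) - 7.5) * pitch
gx, gy = np.meshgrid(xs, xs)
positions = np.stack([gx.ravel(), gy.ravel(), np.zeros(256)], axis=1)

def focusing_phases(focus: np.ndarray) -> np.ndarray:
    """Per-transducer drive phase (radians) so emissions reach `focus`
    in phase: phi_i = k * |r_i - focus| (mod 2*pi)."""
    dists = np.linalg.norm(positions - focus, axis=1)
    return (K * dists) % (2 * np.pi)

def field_amplitude(point: np.ndarray, phases: np.ndarray) -> float:
    """Superposed field magnitude at `point` for unit-amplitude
    monopole sources with 1/r spreading."""
    dists = np.linalg.norm(positions - point, axis=1)
    contributions = np.exp(1j * (K * dists - phases)) / dists
    return abs(contributions.sum())

# Focal point 20 cm above the centre of the array.
focus = np.array([0.0, 0.0, 0.2])
phases = focusing_phases(focus)

at_focus = field_amplitude(focus, phases)
off_focus = field_amplitude(np.array([0.05, 0.0, 0.2]), phases)
print(f"amplitude at focus: {at_focus:.1f}, 5 cm away: {off_focus:.1f}")
```

At the focus every term adds coherently, while a few centimetres away the contributions largely cancel, which is what concentrates the acoustic energy onto the palm or fingertip.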

And of course, things get even more complex when we are trying to render a three-dimensional shape; the important point, though, is that this can still be achieved with a 2D transducer array and sufficiently advanced signal processing.

By modulating the ultrasound output, we produce stimulation at the frequencies at which the skin is most sensitive – the precise modulation frequency is one of the most important parameters in determining the nature of the sensation that the user feels. Our system works best on the palm of the hand and fingers, because these are most sensitive to the modulated acoustic field.

A broad range of products could benefit from this technology. In the automotive industry in particular, there is considerable interest. Makers of home appliances are equally keen to build touchless interfaces.  And for developers of games and virtual and augmented reality (VR and AR) applications, we can (literally) bring a whole new dimension to their products, making them much more immersive and compelling.

As I finish writing this blog post, I think it’s worth stepping back and taking a moment to appreciate the surprising richness of the field of acoustics, and how it’s allowed us to turn sound into touch.