As kiosk manufacturers, we won’t hear a bad word about touchscreens. Touchscreen technology has transformed the world we live in.
By doing away with the need for a keypad interface, touchscreens have allowed digital tech to shrink down into the pocket-sized forms we now all carry around with us. And more importantly (in our humble opinion), they have enabled computerised units to become robust enough to withstand the wear and tear of being deployed in public areas for the purposes of self-service.
But at the same time, we are also very committed to maintaining the momentum of the kiosk revolution, and to playing our role in pushing the ongoing evolution of the technology. So while touchscreens are great, it also has to be acknowledged that they are no longer the only game in town when it comes to self-service interfaces. Indeed, new ways to engage with, control, interact with and experience what kiosks have to offer will be right at the forefront of driving kiosks forwards.
With that in mind, we’ve pulled together the following list of five interface alternatives that are paving the way for the future of self-service kiosks.
Voice
We’ve written previously about the benefits of voice-controlled kiosks. Voice AI lets users interact with kiosks via speech, which can lead to a faster and more intuitive experience. For example, speech is much better for allowing queries and information searches to develop in a conversational way than typing out questions or tapping choices on a screen. The latter can severely limit the options available, while a voice interface powered by natural language processing (NLP) AI can allow users to ask anything, and develop queries organically. This is just as useful for customising orders in a fast-food restaurant as it is for running an information kiosk.
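To make the idea concrete, here is a minimal, hypothetical sketch of the kind of intent parsing a voice kiosk performs after speech has been transcribed to text. The menu items, number words and function names are all illustrative, and a real system would use a trained NLP model rather than keyword matching:

```python
# Hypothetical sketch: pull a fast-food order out of free-form speech.
# MENU and the number words are illustrative examples only.

MENU = {"burger", "fries", "cola"}
NUMBERS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3}

def parse_order(utterance: str) -> dict:
    """Pick out known menu items and rough quantities from a transcript."""
    words = utterance.lower().replace(",", " ").split()
    items = []
    qty = 1
    for word in words:
        if word in NUMBERS:
            qty = NUMBERS[word]
        elif word.rstrip("s") in MENU:  # crude singular/plural handling
            items.append({"item": word.rstrip("s"), "quantity": qty})
            qty = 1
    return {"intent": "order" if items else "unknown", "items": items}

print(parse_order("I'd like two burgers and a cola"))
```

The point of the sketch is the shape of the output: a structured order the kiosk can act on, built from speech the user phrased however they liked.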
In our view, voice interfaces are best deployed alongside touchscreens rather than instead of them. For some actions, a tap and swipe to select pre-made options is easier than formulating commands yourself. Offering both gives users a choice, and boosts accessibility for people who may not be able to use a touchscreen so easily.
Gesture control
Gesture control is a fascinating development in UI technology that in effect offers a touch-free alternative to a touchscreen. Instead of detecting changes in a screen’s electrostatic field when a finger touches the surface, gesture control uses visual capture (a camera) and computer vision AI to interpret hand movements in the air. So you can still swipe and select, just without having to physically touch a screen, which has benefits for hygiene and for the longevity of kiosks. Gesture control also opens the door to a much more sophisticated and nuanced range of 3D command gestures, which will come into its own as technologies like mixed reality (MR) continue to shift visual UI from ‘flat’ screens to 3D virtual environments.
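As a rough illustration of how gesture control works downstream of the camera, here is a hypothetical sketch that classifies a swipe from a series of hand positions, as a computer-vision pipeline might report them frame by frame. The coordinates, threshold and function name are all assumptions for the example; real systems track full hand landmarks with trained models:

```python
# Hypothetical sketch: classify a swipe gesture from normalised (x, y)
# hand positions sampled over successive camera frames.

def classify_swipe(positions, threshold=0.2):
    """Return 'left', 'right', 'up', 'down', or None if movement is too small."""
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]  # net horizontal movement
    dy = positions[-1][1] - positions[0][1]  # net vertical movement
    if max(abs(dx), abs(dy)) < threshold:
        return None  # too small to count as a deliberate gesture
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_swipe([(0.2, 0.5), (0.4, 0.52), (0.7, 0.5)]))  # right
```

The threshold is what separates a deliberate swipe from incidental hand movement, which is one reason tuning matters so much for touch-free interfaces.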
Biometrics
Biometric interfaces like fingerprint scanners and facial recognition technology are already common security and identification tools in kiosks. Biometrics is recognised as a highly secure, highly reliable way of identifying individuals, so much so that it is already being used for high-stakes purposes like passport control, and is mooted as the next big step in payment authorisation. But despite its hi-tech image, the interfaces between human and machine you need to run biometric identification are pretty straightforward. For users, touching a fingerprint scanner or looking into a camera fitted with facial recognition AI couldn’t be easier; it’s simpler than swiping a card or tapping in personal details.
There’s another side to biometrics that concerns how kiosks interact with you. What we mean by this is that biometric interfaces allow kiosks to gather user information, which can then be used to tailor the experience. This ranges from gathering demographic information which can be used to profile and segment users, to individualised personalisation where identification brings up account details (much like a loyalty scheme works) and allows the kiosk to tailor responses, suggestions and content to personal habits and preferences.
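The personalisation step itself is conceptually simple: once the biometric match returns a user ID, the kiosk looks up a profile and tailors what it shows, much as a loyalty scheme does. The sketch below is purely illustrative; the profile store, IDs and greeting logic are all assumptions:

```python
# Hypothetical sketch: after biometric identification returns a user ID,
# look up a stored profile and personalise the kiosk's greeting.

PROFILES = {
    "user-001": {"name": "Alex", "favourites": ["flat white", "almond croissant"]},
}

def greet(user_id: str) -> str:
    """Tailor the opening screen to a recognised user, or fall back to a default."""
    profile = PROFILES.get(user_id)
    if profile is None:
        return "Welcome! Here's today's menu."
    return f"Welcome back, {profile['name']}! Your usual {profile['favourites'][0]}?"

print(greet("user-001"))
```

The same lookup can drive suggestions, content and pricing tiers; the biometric scan simply replaces a loyalty card as the key into the profile.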
Sentiment analysis
Another way kiosks can adapt their responses to the user via data gathered from interactions is so-called sentiment analysis. Sentiment analysis is a field of AI that ‘reads’ emotions in order to hone responses. For example, added to a voice assistant, sentiment analysis can spot when a user is getting frustrated from their tone of voice, and perhaps simplify suggestions or offer an alternative pathway. Similarly, cameras used for biometric identification or gesture control could feature AI tools that ‘read’ the body language of users. Not only does this promise to improve the experience by shaping interactions in real time based on the user, it also has great potential as a security tool, for example by identifying signs of suspicious behaviour.
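The adaptation logic can be sketched in a few lines. The example below is hypothetical and deliberately crude: it scores frustration from negative words and verbatim repeats in a transcript, then switches the kiosk into a simplified mode past a threshold. A real system would use acoustic features and trained sentiment models; only the adapt-when-frustrated pattern is the point here:

```python
# Hypothetical sketch: estimate user frustration from a transcript and
# switch the kiosk's interaction mode when it rises past a threshold.

NEGATIVE = {"no", "wrong", "not", "again", "cancel"}  # illustrative word list

def frustration_score(utterances: list[str]) -> int:
    score = 0
    for i, text in enumerate(utterances):
        words = set(text.lower().replace(",", " ").split())
        score += len(words & NEGATIVE)           # negative wording
        if i > 0 and text.lower() == utterances[i - 1].lower():
            score += 2                           # repeating the same request
    return score

def choose_mode(utterances: list[str], threshold: int = 3) -> str:
    return "simplified" if frustration_score(utterances) >= threshold else "normal"

print(choose_mode(["Show me flights", "No, not that", "No, not that"]))
```

In a deployed kiosk the "simplified" mode might mean fewer options per screen, slower pacing, or an offer to call a member of staff.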
Scent
Finally, in terms of how ‘multi-sensory’ kiosk interfaces are, you might have noticed that all of the examples listed above are built around sight, sound and touch. But we know of at least one example where scent has been thrown into the mix. One bright spark in Dubai has struck upon the idea of a perfume kiosk, where users dial in the scents they would like to wear – and the kiosk dispenses their custom cologne there and then! That’s what we call a nose for a great idea!
In summary, the convergence of these different interface technologies in self-service kiosks is transforming the user experience, making interactions more personalised, efficient, and secure. Given the pace of technological innovation, particularly with AI flexing its muscles, we can only anticipate more of the same.