Voice vs. Touch: Why Mobile App Developers Need to Focus on Both

When it comes to UI for apps, developers realistically have two main avenues to take: touch and voice. With the meteoric rise of smartphone technology over the last 15 years, gestural commands via touch have been the dominant choice for mobile app development. However, with the relatively recent growth in voice-activated software such as Siri and Google Assistant, touch-based applications are beginning to face new competition.

Yet, while both have their pros and cons, it's important as an app developer to take advantage of each and utilise them where appropriate, rather than simply discarding one in favour of the other. With this in mind, we've decided to take a closer look at the strengths and weaknesses of both approaches and how you can best utilise them in your application.

Touch gestures

As we've already mentioned, touch control is the more established of the two approaches, with gestures being utilised in some form or another in software applications since the 1980s. With the advent of smartphones, touch UIs really came into their own, providing snappy and intuitive controls for users looking to break free from restrictive physical button layouts.
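To make that concrete, here is a minimal sketch of how a couple of touch gestures might be handled on Android with the platform's GestureDetector; the class name and callback are placeholders invented for illustration rather than a recommended structure.

```kotlin
import android.content.Context
import android.view.GestureDetector
import android.view.MotionEvent
import android.view.View

// Illustrative only: binds a couple of common gestures on any view to a callback.
class GestureBinder(context: Context, private val onGesture: (String) -> Unit) {

    private val detector = GestureDetector(context, object : GestureDetector.SimpleOnGestureListener() {
        // Returning true from onDown tells the detector to track the rest of the gesture.
        override fun onDown(e: MotionEvent): Boolean = true

        override fun onSingleTapUp(e: MotionEvent): Boolean {
            onGesture("tap")
            return true
        }

        override fun onLongPress(e: MotionEvent) {
            onGesture("long press")
        }
    })

    // Forward a view's raw touch events into the detector.
    fun attachTo(view: View) {
        view.setOnTouchListener { _, event -> detector.onTouchEvent(event) }
    }
}
```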

However, while even the least optimised touch controls offer quick and easy-to-learn mechanics, users who struggle with the physical aspect of touch technology will be unable to benefit. Another issue is that touch controls can sometimes rely too heavily on pictures rather than text for navigation, which can confuse the user.

Voice control

With the advent of voice control, the issues that face touch interfaces are easily circumvented. As mobile users are usually on the go, voice control frees up their hands for other actions and allows those with disabilities to access their phones with ease.

VUIs (Voice User Interfaces) can be split into two parts. The first is a speech synthesiser, known as Text to Speech, that communicates with the user. This can be especially useful if the user requires their messages to be read aloud or if they need to be kept informed during a specific process, e.g. cooking times. The other is speech recognition technology, which allows the user to send commands to their device via the spoken word. Both are necessary for a quality VUI.
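As a rough illustration of those two halves working together, the sketch below uses Android's TextToSpeech engine and the system speech recogniser via RecognizerIntent in Kotlin; the activity, the timer wording and the request code are assumptions made purely for this example.

```kotlin
import android.app.Activity
import android.content.Intent
import android.os.Bundle
import android.speech.RecognizerIntent
import android.speech.tts.TextToSpeech
import java.util.Locale

// Illustrative only: the two halves of a simple VUI in one Activity.
class VoiceDemoActivity : Activity() {

    private lateinit var tts: TextToSpeech
    private val speechRequestCode = 100 // arbitrary request code for this sketch

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Text to Speech: read information aloud, e.g. a cooking timer update.
        tts = TextToSpeech(this) { status ->
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.UK)
                tts.speak("Ten minutes left on your timer", TextToSpeech.QUEUE_FLUSH, null, "timer-update")
            }
        }

        // Speech recognition: hand off to the system recogniser, which returns text.
        val listen = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
            putExtra(RecognizerIntent.EXTRA_PROMPT, "Say a command")
        }
        startActivityForResult(listen, speechRequestCode)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == speechRequestCode && resultCode == RESULT_OK) {
            val command = data?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)?.firstOrNull()
            // Route the recognised phrase to whatever action the app supports.
        }
    }

    override fun onDestroy() {
        tts.shutdown()
        super.onDestroy()
    }
}
```

The speak() call covers the Text to Speech side, while the RecognizerIntent round trip covers speech recognition; in a real app the recognised phrase would be mapped onto the app's own commands.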

Arguably one of the most pressing issues with voice controls for mobile applications is the lack of privacy for the user. Unlike the discreet flick of a finger that touch controls allow, there are certain scenarios where having a message read aloud or needing to announce a command to your device in public can be annoying or even distressing for the user.

Utilising both in app development

As you can see, both approaches have their strengths and weaknesses. However, by utilising both, you can create an end product that is better than the sum of its parts, giving the user a choice based on their individual needs and preferences.
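One way to picture this in code is a single command layer that both input channels feed into, so a tap and a spoken phrase end up triggering the same action; the controller and command names below are invented solely for illustration.

```kotlin
import android.view.View

// Illustrative only: one shared command layer fed by both input channels.
enum class Command { PLAY, PAUSE }

class PlaybackController(private val onCommand: (Command) -> Unit) {

    // Touch path: a button tap maps directly onto a command.
    fun bindPlayButton(button: View) {
        button.setOnClickListener { onCommand(Command.PLAY) }
    }

    // Voice path: a recognised utterance is parsed into the same command set.
    // "pause" is checked first so a phrase like "pause playback" is not misread as play.
    fun onSpeechResult(utterance: String) {
        when {
            utterance.contains("pause", ignoreCase = true) -> onCommand(Command.PAUSE)
            utterance.contains("play", ignoreCase = true) -> onCommand(Command.PLAY)
        }
    }
}
```

Because both paths converge on the same handler, neither interface needs special-case logic, and the user is free to pick whichever input suits the moment.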

Intuitive design is an essential part of app development, and while touch and voice should be used in tandem, a robust GUI can tie the whole user experience together.

Rarely will either voice or touch operate in a vacuum without some form of graphical interface to support and interact with, making the GUI the perfect bridge between the two.