Safety and privacy in voice design
Thursday, 9th August 2018

Guest blog by Marcus Duffy, Head of Design and UX, Apadmi.
Despite the reassurance of GDPR, privacy concerns around the technology in our lives are increasingly being trumped by its convenience.
Users are more willing than ever to sacrifice their privacy for a useful product or service that makes their life easier. But these small concessions add up, until there’s effectively no real privacy left.
Take Google Maps on iOS and Android. Most people are aware that by using the app they’re sharing their current location, home/work addresses, the places they visit and the routes they take. Google is quite open about this, sending regular emails revealing insights about your most frequently visited places.
But in return, users get enough benefits to offset any concerns – intelligence on the best routes to take, traffic delays, and even the busy times to avoid. People are okay with it.
“A piece-by-piece erosion of will”
With voice assistants, the initial concern people tend to have is that large tech firms are listening when they shouldn’t be. Although technically possible, this probably isn’t the most valid worry. These companies would stand to lose a lot if it was ever discovered they were doing this en masse – they get enough intelligence as it is from the things we agree to.
The more concerning scenario would be if the devices themselves were targeted and compromised individually by hackers, allowing them to listen in on private conversations.
People are still willing to give voice technology a try, though, and any concerns are often outweighed by the convenience voice assistants provide. These worries around privacy and data security could fall away as more people recognise the benefits.
Some technology creators will always push the boundaries of what’s acceptable, and in turn, some users will complain and reject the services offered.
But some won’t, quite happy to give away more and more of their personal information because they’re already doing it elsewhere. It’s a piece-by-piece erosion of will, and many people don’t even notice how things are progressing.
However, as creators of such technology, we have a responsibility to ensure users are protected as much as possible.
Protecting future generations
Children are especially vulnerable to technological trends. Often possessing a natural affinity for the latest gadgets, they’re the part of the population who find it easiest to accept tech into their still-fluid world view.
Society as a whole feels a responsibility to safeguard children, but parents are instinctively and fiercely protective of their own offspring. When creating child-friendly tech, creators must first convince parents that the product or service is appropriate and safe for their child to use.
Voice designers should consider how their application could be abused by inquisitive or clever children, as well as by others with nefarious intentions. A comprehensive risk analysis should be undertaken to identify and mitigate possible misuse scenarios before they happen.
Often, technology aimed at children is ‘sandboxed’, meaning the system cannot access inappropriate content or information – the YouTube Kids app takes this approach. Alternatively, interactions and data are kept deliberately constrained: Moshi Monsters, for example, uses a whitelist dictionary of permitted words in its chat service.
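To make that idea concrete, here’s a minimal sketch of how a whitelist chat filter might work. It’s written in Python purely for illustration; the permitted words, function name and blocking behaviour are assumptions, not Moshi Monsters’ actual implementation.

```python
# Illustrative whitelist chat filter. The word list and blocking behaviour
# are assumptions for demonstration only, not any real product's code.

PERMITTED_WORDS = {"hello", "hi", "thanks", "play", "fun", "friend", "cool"}

def filter_message(message: str) -> str | None:
    """Return the message only if every word is whitelisted, otherwise None."""
    words = [word.strip(".,!?") for word in message.lower().split()]
    if words and all(word in PERMITTED_WORDS for word in words):
        return message
    return None  # Block the whole message rather than trying to sanitise it

print(filter_message("Hello friend!"))         # Hello friend!
print(filter_message("Tell me your address"))  # None
```

Blocking a flagged message outright, as the sketch does, is usually safer than trying to strip out the offending words – partial sanitisation can change a message’s meaning in unpredictable ways.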
Another key consideration is the moderation of user-submitted content, and it’s important to make sure a clear, simple reporting path for inappropriate content is in place.
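As a rough illustration, a ‘clear, simple reporting path’ can be as little as a single action that records the report and immediately confirms it to the user. The sketch below is hypothetical – the handler name, storage and response wording are my assumptions, not any platform’s real API.

```python
# Hypothetical sketch of a one-step reporting path for a voice app.
# The function name, queue file and response copy are illustrative only.

import json
from datetime import datetime, timezone

def handle_report_intent(user_id: str, content_id: str, reason: str) -> str:
    """Record a report for moderators and confirm it to the user."""
    report = {
        "user_id": user_id,
        "content_id": content_id,
        "reason": reason,
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }
    # In production this would feed a moderation queue or ticketing
    # system rather than a local file.
    with open("moderation_queue.jsonl", "a") as queue:
        queue.write(json.dumps(report) + "\n")
    return "Thanks, your report has been passed to our moderation team."
```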
Voice apps tend to be less dependent on user-generated content at this time, but who knows how they’ll evolve. Twitter already has an Alexa skill that can read tweets aloud – no doubt we’ll see social media extend further into this medium before long.
We need a plan
As voice technology improves, it will become ever more ubiquitous. Voice app developers therefore need to ensure that the safety and privacy of all users are built into everything they create.
It’s important to think about potential problems during the design phase rather than waiting for things to go wrong; addressing issues only after the fact could have a catastrophic impact on both your users and your brand.
This blog was originally published at: https://www.apadmi.com/blog/safety-privacy-voice-design/.