Alexa, was that a goal? The Age of Ubiquitous Computing
By Mark Gibson, Managing Director of Capito Ltd
Our MD, Mark Gibson, discusses the Age of Ubiquitous Computing (also known as ubicomp) and how the lines are blurring between our digital and physical lives. We would love to hear your feedback.
If the 2010s could be described as the decade of Mobile Computing, then 2020 is sure to mark the dawn of the decade of Ubiquitous Computing.
The Fourth Industrial Revolution is in full flow, with technologies such as artificial intelligence (AI), self-driving vehicles, and the Internet of Things (IoT) blurring the line between our digital and physical lives.
Thanks to facial recognition and voice-activated assistants such as Siri, Alexa and Cortana, computing is moving away from distinct objects with traditional screens and keyboards and becoming increasingly integrated into our environment. Our trajectory is accelerating towards the Ubiquity era of millions of computers per person.
How we use the mobile computer in our pocket will transform with the widespread adoption of 5G, and augmented reality will become commonplace in our homes and workplaces. Expect retail experiences where you can virtually try on an outfit before purchase to become the norm, and live sports events where you become the producer, choosing the camera angles and replays. For football fans, this will bring a further dimension to the Video Assistant Referee (VAR) debate.
The big Virtual Digital Assistant players such as Amazon, Apple and Google are already well down this path, jostling to take stakes in the ‘Ubiquitous Ambient Computing’ space. Alexa, Siri and their kind are becoming smarter and will follow us - you may already have them with you on the drive to work. Soon we will see the delivery of skills that showcase AI's ever-improving contextual reasoning - such as walking into a supermarket, asking where the milk is, and being directed to the specific aisle. Whilst these digital assistants still have a long way to go before they can truly ‘understand’, they are getting closer to the goal of understanding us as naturally as other people do.
And there’s much more to come. There are already hints that the next step from voice recognition will be Emotional Recognition, where the camera on your smartphone will also analyse and interpret your facial reactions. Could this lead to automatic lie detection the next time you submit an insurance claim?
Maybe our cars will soon decide if we are fit to drive? That is, if they're not already doing the driving themselves...