AI has been around for a while now… since 1955 in fact! Today, it's more than a figment of Spielberg's imagination brought to the big screen in AI: Artificial Intelligence (17 years ago, can you believe it?). Most businesses are using AI every day to improve products and enhance customer experiences (we're looking at you, Kura). However, one of the biggest obstacles to AI has been its lack of EQ. Is it just us, or do Siri and Alexa test your patience at times with their tone-deaf responses?
Not only is the humanoid robot (played by Haley Joel Osment, pictured) in the AI movie all grown up, AI in the real world is maturing too and gaining some much-needed EQ. The grown-up version of AI is called ‘Artificial Emotional Intelligence’, or AEI, broadly defined as the ability of artificial systems to receive, interpret, and act upon emotional responses from a human. A couple of examples:
- Siri detecting the tone in your voice to determine whether you’re angry, sad, happy or excited.
- Cortana detecting facial cues in your expression to determine if you’re under pressure, tired, or relaxed.
According to Mary Czerwinski (Research Manager at the Visualisation and Interaction Research Group at Microsoft), AI developers are “getting pretty good” at assessing your emotional state and turning this into an input for the computer.
With bots and virtual personal assistants (VPAs – think Siri and Cortana, as well as Alexa and Google Home) quickly becoming a larger part of our daily lives, the ability of these machines to ‘better serve’ us, and the competitive advantage this offers their manufacturers, is a hotbed of discussion.
Why would we want a VPA to know our emotional state?
Czerwinski states that the potential benefits include a VPA that can better assist us – helping to improve our personal and working lives.
Studies show that humans engage more positively with those who mimic their own emotions. If you’re sad, you empathise more with someone who is also sad (“misery loves company”); if you’re happy, it’s only natural to share the celebrations with someone else.
The same applies to VPAs – if a VPA responds to a person who is feeling down with a softer, more subdued tone, it comes across as more empathetic and more useful to the person’s current situation.
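To make the idea concrete, here’s a minimal sketch (in Python) of what that emotional mirroring might look like in code. Everything here is illustrative: the sentiment score is assumed to come from some upstream voice or text analysis step, and the function names and thresholds are our own invention, not any real VPA’s API.

```python
# Hypothetical sketch: choosing a response tone that mirrors the
# user's detected emotional state. The sentiment score (-1.0 to 1.0)
# is assumed to come from an upstream voice/text analysis step.

def mirrored_tone(sentiment: float) -> str:
    """Map a detected sentiment score to a response style."""
    if sentiment < -0.3:
        return "subdued"   # mirror a negative mood with a gentler tone
    if sentiment > 0.3:
        return "upbeat"    # share the celebration
    return "neutral"

def respond(message: str, sentiment: float) -> str:
    """Prefix the reply with a tone-appropriate opener."""
    prefixes = {
        "subdued": "I'm sorry to hear that. ",
        "upbeat": "Great news! ",
        "neutral": "",
    }
    return prefixes[mirrored_tone(sentiment)] + message

print(respond("Here's the weather for today.", -0.8))
# → "I'm sorry to hear that. Here's the weather for today."
```

The point of the sketch is simply that once emotion is available as an input, matching it in the output is a small, ordinary piece of logic.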
Currently, most interactions with VPAs are short and simple (“Okay Google, play Bruce Springsteen”, “Alexa, what’s the weather going to be like today?”). The expectation, however, is that the quantity and complexity of these interactions will only increase, allowing us to complete more significant tasks.
Imagine a system that could make a programmer more productive…
Based on the programmer’s pupil dilation or typing speed, the system could detect when the programmer is tired, more prone to making errors, or in need of a break. A little logic around programming could even go so far as to identify and correct errors in code as they’re being committed.
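A toy sketch of that first idea, assuming typing speed is the only signal available. The window size, baseline, and threshold below are made-up illustrations for this post, not values from any real system.

```python
# Hypothetical sketch: flagging likely fatigue from typing speed.
# The rolling-average window, baseline, and 60% threshold are
# illustrative assumptions only.

from collections import deque

class FatigueMonitor:
    def __init__(self, window: int = 5, baseline_cpm: float = 200.0):
        self.samples = deque(maxlen=window)  # recent chars-per-minute readings
        self.baseline = baseline_cpm         # the programmer's normal pace

    def record(self, chars_per_minute: float) -> None:
        self.samples.append(chars_per_minute)

    def likely_tired(self) -> bool:
        """Suggest a break when the rolling average drops well below baseline."""
        if len(self.samples) < self.samples.maxlen:
            return False                     # not enough data yet
        avg = sum(self.samples) / len(self.samples)
        return avg < 0.6 * self.baseline

monitor = FatigueMonitor()
for cpm in [150, 120, 100, 90, 80]:          # a steadily slowing typist
    monitor.record(cpm)
print(monitor.likely_tired())                # → True
```

A real system would fuse several signals (pupil dilation, error rate, time of day), but the shape of the logic – baseline, recent readings, threshold – would look much the same.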
Imagine a system that could save your life…
A system programmed to recognise changes in speech or facial cues could detect the onset of a stroke, or the early stages of a schizophrenic episode. The implications of such technology are quite literally life-saving.
Imagine a system that could help you keep your project promises…
From a project management perspective, the act of setting up a schedule, conducting resource management activities, or tracking project profitability, can all become streamlined and automated (“Cortana – please set up a project schedule for a Jumpstart Deployment, plus 10 days of enhancements.” “Okay Google, who from our delivery team has availability during July?”). The VPA allows its user to remain more productive, and can suggest ways to avoid complications in the project schedule, or throughout the project delivery.
Of course, there are questions to be asked around the ethical implications of AEI; just how pervasive do we wish for these systems to become? Are we comfortable with machines tracking our bodily activities (hello Fitbit)? How much do we want our children to interact with a device that’s specifically programmed to understand their emotions?
The message from Czerwinski is clear – stop being scared of machine learning. Understand what’s happening, of course, but don’t shy away from it, and be sure to look for the potential benefits of AEI as well. Personally, we like where this well-adjusted AEI is heading.
“Okay Google – blog post complete. Please email to our Marketing Manager.”
By Ashley Braybon, Excellence Enabler at Sensei
The main source for this post is ‘The Future of Work’ podcast with Jacob Morgan. The episode was entitled ‘Artificial Emotional Intelligence: How it can enhance our lives, advances over the past few decades and some valid concerns’ (released on April 10, 2018). The guest for the podcast was Mary Czerwinski.