Artificial Intelligence as we’ve come to know it is an incredibly complex tool. All manner of things can be analysed with AI, from shopping habits to logistical solutions. But what we’re interested in here is using AI in speech and language recognition.
AI and nuance
Regular AI is really good at recognising speech, but it struggles to detect the emotional intention behind it. When learning another language at school, we learn specifics – cat = chat, dog = chien. We don’t learn the nuances behind the language until we experience them. Ask anyone who moved to the UK after only learning a second language at school and they’ll tell you that they got good grades, but still understood nothing. Think of AI as that language-learner. The nuance behind language is what Emotion AI is trying to work through.
Regular AI focuses on a cognitive understanding of speech and language. Emotion AI focuses on the emotion behind that understanding, which helps machine learning act more human. It does this by analysing features alongside speech and language: facial expressions, speech tonality, and textual sentiment. This makes it an incredibly powerful tool for market research (MRX) and customer experience (CX) platforms.
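To make the textual-sentiment part of this concrete, here is a minimal, illustrative sketch of lexicon-based sentiment scoring. It is not Affectiva’s method – real Emotion AI systems combine far richer signals (facial expression analysis, vocal tonality, trained language models) – and the word lists below are hypothetical examples.

```python
# A toy lexicon-based sentiment scorer: count positive and negative
# words and average them into a polarity score. Word lists are
# illustrative only; production systems use trained models instead.

POSITIVE = {"love", "great", "helpful", "happy", "excellent"}
NEGATIVE = {"hate", "terrible", "confusing", "angry", "broken"}

def sentiment_score(text: str) -> float:
    """Return a polarity in [-1, 1]: negative, neutral, or positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [1 for w in words if w in POSITIVE]
    hits += [-1 for w in words if w in NEGATIVE]
    if not hits:
        return 0.0  # no opinion words found: treat as neutral
    return sum(hits) / len(hits)

print(sentiment_score("I love this brand, the support was excellent"))  # 1.0
print(sentiment_score("The checkout page is confusing and broken"))     # -1.0
```

Even this crude approach hints at why tonality and facial cues matter: a sarcastic “great, just great” scores as positive text, and only the extra signals Emotion AI analyses could catch the real intent.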
The human element
Emotion AI senses intention. It senses how a person feels about a specific brand, product, or situation. This is an incredibly useful tool – it adds a human element without a human being present, making it easier to communicate with customers. But this is where we get a bit more in-depth.
Affectiva – the leading voice in the Emotion AI game – has developed something even more impressive than Emotion AI. Human Perception AI is the next step in the evolution of artificial intelligence. Not only does their AI system detect cognitive understanding and emotion, it also exhibits social skills and perceptual awareness.
“… we’re pioneering Human Perception AI: software that can detect not only human emotions, but complex cognitive states, such as drowsiness and distraction.”
But ethically, can you put a computer in place of a human? For years, people have argued that typically human jobs shouldn’t go to machines – “you’re putting people out of jobs”.
Not to get too sci-fi, but will it be ethical to create a “human” AI and use it? We’ve seen the story plenty of times: human creates human-like robot, robot feels as a human would, humans use robots as slaves, robots rise up and rebel. We may be getting a tad too far ahead here…
Instead, consider this for a moment: you’re the type of person who does not have a lot of experience with technology.
The human experience
You’re having trouble inputting some sensitive information on a website. You tap the “chat with us now” icon in the bottom-right of the screen and start up a conversation. It seems as though you’re talking to a real person – a proper human with proper memories and proper sentiments, with the full range of emotions, experiences, and empathy you would expect. You reveal information that may not be sensitive to most, but it’s sensitive to you. You assume that this “human” you’re talking to will understand your issues and help you accordingly. However, the more complex your responses, the less the “human” understands. You begin to get confused, and it dawns on you that you’re not talking to a living, breathing human at all. You may feel disillusioned as a consumer: you put your trust in something you didn’t fully understand. This can bring about the Uncanny Valley effect – where conversing with an entity that appears human (but is not) risks eliciting negative feelings and emotions in the user.
Granted, the further into the future we get, the less issue we will have with using technology – it’s just something the younger generations are used to by now. However, it’s still worth remembering that Human Perception AI is a powerful tool, so we mustn’t take it for granted. The scenario above is hypothetical today, but there are already CX chatbots which attempt to mimic human responses in conversation.
But what does this mean for the future? With Emotion AI becoming more prevalent (according to Affectiva), what can we use it for? It can trawl social media to gauge how people view certain brands, power a chatbot on a website, or aid researchers in their data analysis. One thing is for certain: Emotion AI can be incredibly useful for companies and researchers alike. Will consumers have to worry about who – or what – they’re talking to? If handled correctly, it’s unlikely that Emotion AI or Human Perception AI will cause an issue. I suppose the question to take away is this: how exactly do we handle it correctly?
Read Affectiva’s article on Emotion AI here! https://blog.affectiva.com/our-evolution-from-emotion-ai-to-human-perception-ai