Emotional Avatars – What and Why

Overview

Tonight I had the opportunity to attend my second Scottish Usability Professionals’ Association event in Edinburgh.  It was a good night which saw two speakers from the University of Abertay deliver two very different talks.

I am not going to talk about both.  Instead I would like to share my notes and thoughts on Dr Jackie Archibald’s talk titled: Emotional Avatars – What and Why, which I found extremely interesting and thought provoking.

Emotions

The talk saw Jackie begin by explaining the role emotion plays in modern-day human–computer interaction.  She discussed aspects such as perception, cognition, creativity and our ability to cope as reactions triggered by our emotions.  She also discussed how our current emotions can affect our memory, our ability to plan or complete tasks, and our ability to make decisions.  She went on to explain how displaying your emotions can often be seen as a negative thing: people may question your ability to work logically, or even to think rationally.  However, she explained that emotion is important, and can aid interaction in an intelligent way.

Avatars

With the reasoning out of the way, next came the interesting part – emotional avatars.  She explained that an emotional avatar is simply a virtual character which can display and/or recognise emotions.  She suggested they could be used in education, in training and as help systems.  She provided an example of a virtual avatar used in New Zealand to help teach kids working from home, guiding them through lessons.  This was called Easy with Eve, and supposedly it could detect facial patterns in students and react to them.

She provided a few more examples of avatars currently in use:

Jackie showing the virtual agent Sasha, the Kaspersky bot.

All very interesting, and sharing one common trait – they all provide support-desk-style feedback.  Perhaps this is how future FAQs (frequently asked questions) will turn out?  None, from what I could gather, had any real concept of emotion, but simply knew how to display it.

Now the talk became more interesting as Jackie went on to explain the research three of her students had done around this area.

Dealing with frustration

The first student, John Hart, had completed a research project titled “Dealing with frustration”.   He began by surveying opinions on virtual agents such as the IKEA Asking Anna bot.  The general feedback, not surprisingly, was that people were not overly impressed by them.

For the next step of his project, John developed a travel website which intentionally frustrated users with introduced delays and broken links.  As people used the site, it was noted that their frustration levels began to rise.

His final step was to develop an emotional avatar, to see whether it could assist in reducing user frustration.  The agent could then be introduced to help the user, using affective language via a text-based conversation panel.

What he found was:

  • Users were frustrated with both the avatar and the website.
  • Avatar frustration seemed to stem from its limited knowledge base, meaning it struggled to answer certain questions asked of it during the studies.
  • The avatar provided no help to overly frustrated users.
  • Moderately frustrated users to whom it responded successfully reported somewhat reduced frustration levels.

Interesting, but without seeing the research myself it is hard to make much of it.  The approach appears open to a lot of bias and assumptions.  The fact that surveys were employed as a research tool doesn’t hold much weight in my opinion.  I’m also unsure how, or when, he decided to introduce the avatar.  Still, it is very interesting, and it opens up ideas for how these could be used as a means of online engagement with users, based upon some sort of emotional rules engine.

Improving self service systems via emotional virtual agents

Next up was PhD student Chris Martin, who was doing a research project titled: Towards improvement of self service systems via emotional virtual agents.  This project looked into the user experience of self-service systems, such as supermarket self-service checkouts.  The idea is that an empathetic system could be developed which could understand, reason about and react to people’s emotions.

The first stage of his project involved filming (with permission) people using supermarket self-service checkouts, and noting their facial reactions while using them.  These were logged as facial action units, and when reviewed, patterns could be identified.  It was found that the two most common action units related to anger.
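As I understand it, the analysis amounts to tallying which facial action units appear most often across the footage.  A minimal sketch of that idea is below; the AU codes and observations are illustrative examples of my own, not Chris Martin’s actual data:

```python
from collections import Counter

# Hypothetical coded footage: each entry lists the facial action units
# (FACS AU codes) observed at one moment of a checkout session.
# These codes and sessions are made up for illustration only.
observations = [
    ["AU4", "AU7"],           # brow lowerer + lid tightener (anger-related)
    ["AU4", "AU23"],          # brow lowerer + lip tightener
    ["AU12"],                 # lip corner puller (smile)
    ["AU4", "AU7", "AU23"],
]

# Tally every action unit across all observations.
counts = Counter(au for obs in observations for au in obs)

# The most common units point at the dominant emotional reaction.
for au, n in counts.most_common(3):
    print(au, n)
```

With this toy data, AU4 comes out on top, which is the sort of pattern that would have pointed the study towards anger.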

For his next stage, knowing that anger was the most common response to these systems, he wanted to determine the most appropriate response an agent should provide to the user.  To do this he created a system which would mimic a self-service checkout, but intentionally block users from completing their task.  At this point, an affective agent (emotional avatar) would be introduced, and the user’s response to it would be recorded.  Seven different types of avatar were used to determine which was most suitable.  Each of these had a different facial expression:

  • Anger
  • Disgust
  • Fear
  • Neutral
  • Happy
  • Sadness
  • Surprise

Of these, the one found to be the most inappropriate was disgust, while the neutral avatar received the most positive reaction.  I sought clarification after the talk as to whether text was displayed to the user to match these differing emotions, but it appears that only an avatar image was presented.  I assume some controls were still available to the user at this point to progress or cancel the checkout process, though.

Usable security via affective feedback

The final research project, titled: Usable security via affective feedback, was being done by a student named Lynsay Shepherd.  This one was only just starting, so there wasn’t much to cover research-wise, as none had really been done yet.

This project will look at how systems could use affective feedback to improve risk awareness in end users, making them more aware of the risks their behaviour could introduce.

Thoughts

All in all, it was a really interesting talk, and for that reason alone I’m glad I took the time out to attend.  I struggle, though, to see how avatars using current technology can react to end users.  I did ask the question, and Jackie provided a decent enough response, although perhaps not quite what I was looking for:

  • Biometrics could be used to determine user emotions.
  • Mouse devices with hand grip detection, to determine frustration.
  • Cameras which monitor facial reactions.
  • Language used during text interaction with the avatar.

The last point certainly seems valid.  What would this be useful for, though?  FAQ-type systems?  Could you possibly drive site navigation via a keyword-driven avatar, returning differing results depending on gauged emotions?
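A keyword-driven approach like that could be as crude as a hand-rolled lexicon.  Here is a toy sketch of gauging emotion from the language a user types to an avatar; every word, weight and reply below is my own illustrative choice, not anything from the research discussed:

```python
# Hypothetical frustration lexicon: word -> weight.
FRUSTRATION_WORDS = {"broken": 2, "useless": 2, "slow": 1, "why": 1, "again": 1}

def frustration_score(message):
    """Sum the weights of any frustration keywords in the message."""
    words = message.lower().split()
    return sum(FRUSTRATION_WORDS.get(w.strip("?!.,"), 0) for w in words)

def avatar_reply(message):
    """Pick a response tone based on the gauged frustration level."""
    if frustration_score(message) >= 3:  # threshold chosen arbitrarily
        return "Sorry this is proving frustrating. Would you like help from a person?"
    return "Sure, let me look that up for you."
```

For example, “Why is this broken again?” scores high enough to trigger the apologetic response, while a neutral query gets the standard one.  Real systems would obviously need something far more robust than keyword matching, but it shows how little is needed to start routing on gauged emotion.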

My thoughts are that with current technology, outwith direct user inputs such as textual conversation, it is hard to gauge emotion.  I do think, though, that some form of rules engine could be developed to infer emotions and react to the user in different ways.  For example, if a user exhibits a navigation pattern which suggests they might be lost, an avatar might help negate negative emotions; and if it successfully helps them find where they need to go, their mood may even end up positive.  Or, if a user is found to have been on a checkout process for some time, an avatar could be injected onto the page to provide assistance.
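The rules engine I have in mind could be sketched roughly as follows.  The event names, heuristics and thresholds here are entirely my own invention, just to show the shape of the idea:

```python
import time

def looks_lost(events, window=6):
    """Heuristic: revisiting the same few pages repeatedly suggests the user is lost."""
    recent = [e["page"] for e in events[-window:] if e["type"] == "pageview"]
    return len(recent) >= window and len(set(recent)) <= window // 2

def stuck_on_checkout(events, max_seconds=120):
    """Heuristic: lingering on the checkout page for too long."""
    checkout = [e for e in events
                if e["type"] == "pageview" and e["page"] == "/checkout"]
    return bool(checkout) and time.time() - checkout[-1]["at"] > max_seconds

def should_offer_avatar(events):
    """Inject the avatar when any rule fires."""
    return looks_lost(events) or stuck_on_checkout(events)
```

So a user bouncing between the same two pages, or parked on the checkout for minutes, would trigger the avatar, while normal browsing would not.  The interesting (and hard) part would be tuning rules like these so the avatar helps rather than annoys.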

Regardless, I think relying upon hardware is, for now, a non-starter.  Perhaps as devices become more camera-ready (mobile is certainly miles ahead of desktop currently), detecting facial emotions might not be so far away.  There is a big assumption, of course, that people will want their devices monitoring them, which I think is very unlikely to be a given, for many obvious reasons.

There is serious potential, though, for emotional avatars.  I’m still unsure where.  The world certainly could do without another Microsoft Clippy, that’s for sure!  With some thought, though, and by making use of the behavioural interaction modules that new content management systems are now beginning to make available, I think there could be room for these emotional avatars to build empathic relationships with users.

Thanks for reading.
