
Human-Centered Machine Learning

I recently read an article about human-centered machine learning (HCML), and here are my thoughts on the recent VA MST project I’ve been working on from the HCML perspective. Read the article here. Read the case study here.

The Challenge

The VA MST project has a unique audience: people who have experienced military sexual trauma. One thing we learned from VA clinical psychologists is that people are not very good at identifying their emotions. Many of us have been brought up to ignore and override feelings. The result of this kind of suppression is often anxiety, phobias, depression, and restlessness.

Feelings are often complex and hard to identify. Even when one can identify them, it can still be hard to express them. Being able to identify feelings and then express them appropriately is important to avoid becoming prone to anxiety and phobias.

Since identifying feelings and expressing them are two crucial parts of mental health, we decided to focus on both when designing the experience. In the project’s original requirements, one of the onboarding questions was "What are your major concerns?" The rationale was that by selecting one or more major concerns, users would tell the app which content to push to them. This was meant to be a one-time setup.

Later in our discovery, we found that an at-the-moment emotion or feeling could be far more powerful in shaping the experience of using the app, because it makes the content immediately relevant. We decided to include an emotion/feeling identification element in the experience: every time users open the app, we capture how they are feeling in some way and then recommend content based on that feeling.
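To make the idea concrete, here is a minimal sketch in Swift of what a feeling-to-content lookup could look like. The feeling categories and content titles are hypothetical placeholders, not the app’s actual taxonomy or recommendation logic.

// Hypothetical sketch: map an at-the-moment feeling to content suggestions.
// The categories and titles are placeholders, not the real VA MST content.
enum Feeling: String {
    case anxious, angry, numb, hopeful
}

let contentByFeeling: [Feeling: [String]] = [
    .anxious: ["Grounding exercise", "Paced breathing"],
    .angry:   ["Releasing tension", "Writing prompt"],
    .numb:    ["Noticing sensations", "Gentle movement"],
    .hopeful: ["Building on strengths", "Setting a small goal"]
]

func recommend(for feeling: Feeling) -> [String] {
    // Fall back to a general list when nothing matches the selected feeling.
    return contentByFeeling[feeling] ?? ["Explore all topics"]
}

print(recommend(for: .anxious))  // ["Grounding exercise", "Paced breathing"]

The lookup itself can be this simple; the hard part is capturing the feeling in a way that feels safe and personal.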

Since identifying feelings can be hard for many people, we brainstormed several ways to support it. The most conventional way would be to give users a multiple-choice list, like this:

Clearly this is not very engaging, it feels impersonal, and it cannot cover the full spectrum of emotions.

A more encompassing option is to use the full emotion wheel, like this:

This option provides a more complete picture of emotion and introduces color as another dimension. But it still feels somewhat “clinical.” Being clinical is not necessarily a bad thing, but we wanted to push further to make this experience even more personal and less threatening, especially for people suffering from depression or other symptoms stemming from their traumatic experience.

One option is to use facial emotion recognition technology to identify emotion through a face scan, like this:

Since many MST survivors have self-image issues, a face scan may make them uncomfortable, and they may also have privacy concerns. Another option is to let users select a photo from their camera roll, or take a new photo with their camera, and have machine learning infer how they are feeling by analyzing the image, like this:

The last option we came up with is a freeform drawing/writing pad, where users can write how they feel in words, draw something, or choose from a wide range of emojis, like this:

A challenge at this point in our concept exploration is determining which experiences require machine learning, which are meaningfully enhanced by machine learning, and which do not benefit from ML or are even degraded by it.

After meeting with the project’s development lead, we learned that the app will not have a backend, a constraint specified in the contract, so we won’t be able to do any machine learning. If that is the case, how do we still meet the need of capturing and expressing feelings and, further still, make the app feel “smart” or “personal”?

One possible solution is to keep the notepad feature, but instead of relying on it for content recommendation, use it purely as an emotional journal that could also serve as a creative outlet.
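As a rough sketch of how that could work within the no-backend constraint, journal entries could live entirely on the device. The structure below is an assumption for illustration, not the project’s actual data model.

import Foundation

// Hypothetical sketch: journal entries stored only on the device, so nothing
// is sent to a server and no machine learning is involved.
struct JournalEntry: Codable {
    let date: Date
    let text: String    // freeform writing
    let emoji: String?  // an optional emoji the user picked
}

// Keep the journal as JSON in the app's Documents directory.
let journalURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("journal.json")

func save(_ entries: [JournalEntry]) throws {
    let data = try JSONEncoder().encode(entries)
    try data.write(to: journalURL, options: .atomic)
}

func load() -> [JournalEntry] {
    guard let data = try? Data(contentsOf: journalURL) else { return [] }
    return (try? JSONDecoder().decode([JournalEntry].self, from: data)) ?? []
}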

Lessons Learned

We are currently waiting on client feedback and are still in the process of concept exploration. But this process of trying to include machine learning in the experience we design made me think more about the following two things:

One, there are many barriers when we try to incorporate something that may seem novel to some people. These barriers can be technical, psychological, and even political. In this case, since many MST survivors feel that they were betrayed by the military/government, they may have a greater fear of the technology, feeling that there is someone watching them behind the artificial intelligence, no matter how we address this concern in product copy. (This has not been backed up by empirical research.)

Two, when we think about which experiences require machine learning, even the ones that could be meaningfully enhanced by it involve more variables than we thought. Although we do believe the machine learning behind the face scan and image analysis can help people who are not good at articulating their feelings, it may at the same time degrade the experience by making some users feel threatened, not to mention the anger or disappointment when the system fails to identify their emotion accurately. So even when we are able to incorporate machine learning, calibrating trust and managing user expectations, perhaps by stating the confidence level of the machine learning, could alleviate the negative impact on users. Having users give feedback on how good a recommendation is also helps them understand, in a positive way, how good the automation is. This ties into the co-learning and adaptation discussed in the article: we want to guide users with clear mental models that encourage them to give feedback that is mutually beneficial to them and the model.
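To illustrate what stating a confidence level and inviting feedback might look like, here is a small sketch. The threshold, wording, and types are assumptions made for illustration, not anything specified for the project.

// Hypothetical sketch: only surface a machine-inferred feeling when confidence
// is reasonably high, show that confidence to the user, and let them correct it.
struct InferredFeeling {
    let label: String       // e.g. "anxious"
    let confidence: Double  // 0.0 ... 1.0
}

enum UserFeedback {
    case confirmed          // "Yes, that's how I feel"
    case corrected(String)  // "Actually, I feel ..."
}

func prompt(for inference: InferredFeeling) -> String {
    if inference.confidence < 0.6 {
        // Low confidence: ask instead of guessing, so a wrong guess
        // doesn't produce anger or disappointment.
        return "How are you feeling right now?"
    }
    let percent = Int(inference.confidence * 100)
    return "You seem to be feeling \(inference.label) (about \(percent)% sure). Is that right?"
}

func record(_ feedback: UserFeedback) {
    // Feedback like this is what would let a future model co-learn with the user.
    switch feedback {
    case .confirmed:            print("Keep recommending for this feeling.")
    case .corrected(let label): print("Switch recommendations to \(label).")
    }
}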

I’ll write more on how we would test the app’s recommendations before the script is fully built out. There is obviously a lot I need to learn about machine learning. But based on my experience designing and evaluating machine-learning-related experiences, it is a far more creative and expressive engineering process than other technologies. There are so many things we need to consider to train a model, and sometimes the perfect tools or data are not available yet. So how we use our imagination and empathy to tune an algorithm becomes ever more interesting.

