Predicting customer satisfaction

Measuring users’ satisfaction with your product is essential ― it guides your product decisions, shapes which features you build next, and teaches you a lot about your users.

At Soluto we recently started working on a new platform ― connecting people who need on-demand tech support of any kind with a “supporting user” (we call them “experts”) who can help them resolve any tech issue ― think Uber for tech support. This platform fulfills two reciprocal user needs: one user uses our app to ask a question; the other user, the “expert” (someone with the right knowledge and the desire to help), uses another app to receive and respond to that question through a direct chat. Since a central feature of our product is the chat between the user and the expert, we decided that one of our first features would be a measurement of the user’s satisfaction level within a given chat session.

We use this measurement to:

  • Monitor unsatisfied users with an open issue (while a chat support session is in progress)
  • Prioritize unsatisfied users in the question pool
  • Evaluate experts
  • Measure how happy users are with our platform

We started with the first use case: monitoring unsatisfied users.

Predicting customer satisfaction

Flow and entities:

Message – a single message within a chat session

Label – what we actually try to predict

At the end of every chat session we ask our users: “Will you use chat again?”

We decided to use this as our label, meaning our model predicts how likely the user is to answer the question above with a “thumbs up”.

  • Upside – explicit feedback from the user
  • Downside – we disturb the user with a question, and the data is biased towards users who tend to answer surveys; we will address this in future iterations.
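As a small illustration of how the label is derived, here is a minimal sketch in Python (the sessions DataFrame and its column names are hypothetical): a session becomes a positive example when the user answered “thumbs up”, and sessions where the user skipped the question are left out of the training set.

```python
import pandas as pd

# One row per chat session; survey_answer is "up", "down",
# or None when the user skipped the question (hypothetical schema).
sessions = pd.DataFrame({
    "session_id": [1, 2, 3],
    "survey_answer": ["up", "down", None],
})

# Keep only answered sessions and encode the label as 0/1.
labeled = sessions.dropna(subset=["survey_answer"]).assign(
    label=lambda d: (d["survey_answer"] == "up").astype(int)
)
```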

Features – data used for prediction

For this first iteration of the feature we mainly used sentiment and emotion (along with a few other features) to predict the user’s satisfaction.

Using Watson for text analysis

We extracted sentiment and emotion from every message** using IBM Watson. The output for every message was the following set of values: sentiment [-1 = bad, 1 = good] and anger, sadness, fear, disgust and joy [0 = no such emotion, 1 = very high emotion].
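As a rough sketch, here is how these values can be extracted with the IBM Watson Natural Language Understanding Python SDK. The SDK version, API key, and service URL below are placeholders, and our original pipeline may have used an earlier Watson API ― treat this as an illustration rather than our exact setup.

```python
# pip install ibm-watson
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    EmotionOptions,
    Features,
    SentimentOptions,
)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials ― replace with your own IBM Cloud values.
nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
nlu.set_service_url("YOUR_SERVICE_URL")

def analyze_message(text: str) -> dict:
    """Return sentiment [-1, 1] and emotion scores [0, 1] for one message."""
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions(), emotion=EmotionOptions()),
        language="en",
    ).get_result()
    scores = {"sentiment": result["sentiment"]["document"]["score"]}
    # anger, disgust, fear, joy, sadness ― each in [0, 1]
    scores.update(result["emotion"]["document"]["emotion"])
    return scores
```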

Looking at the correlations between the above features and our label, we immediately saw some connections:

Right off the bat, there were already some interesting correlations to work with. Since the user’s messages and the expert’s messages behave differently, the next step was to look at these correlations for users and experts separately:

From the above correlations we concluded that these values can serve as features: they correlate with the label, and the relationships also make intuitive sense. 🙂
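For illustration, a minimal pandas sketch of this analysis; the per-message DataFrame df and its columns (session_id, sender, the Watson scores, and the session label broadcast to every row) are assumptions about our internal schema.

```python
import pandas as pd

# df: one row per message. Assumed columns: session_id, sender
# ("user"/"expert"), sentiment, anger, sadness, fear, disgust, joy,
# and label (the 0/1 session label repeated on every row).
feature_cols = ["sentiment", "anger", "sadness", "fear", "disgust", "joy"]

def label_correlations(frame: pd.DataFrame) -> pd.Series:
    """Pearson correlation of each per-message score with the label."""
    return frame[feature_cols].corrwith(frame["label"])

print(label_correlations(df))                # all messages together
for sender, group in df.groupby("sender"):   # users vs. experts
    print(sender, label_correlations(group), sep="\n")
```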

Another aspect we looked at is how the correlation with the label changes over the span of a chat session. The X axis in the following graph is the normalized message position within a chat session: 0 is the first message in the session, 0.5 is a message in the middle of the session, and so on.

This mainly showed us that a user’s sentiment impacts their satisfaction much more towards the end of the chat session.
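Continuing the sketch above, such a curve can be computed by binning messages on their normalized position within the session:

```python
import numpy as np

# Normalized position of each message in its session: 0 = first, 1 = last.
sizes = df.groupby("session_id")["sentiment"].transform("size")
df["position"] = df.groupby("session_id").cumcount() / (sizes - 1).clip(lower=1)

# Correlation of user sentiment with the label, per position bin.
user_msgs = df[df["sender"] == "user"].copy()
user_msgs["bin"] = pd.cut(
    user_msgs["position"], bins=np.linspace(0, 1, 6), include_lowest=True
)
by_bin = user_msgs.groupby("bin", observed=True).apply(
    lambda g: g["sentiment"].corr(g["label"])
)
print(by_bin)  # the correlation grows towards the end of the session
```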

Simple features from messages

Some other features we used:

  • Total number of messages
  • Average word count

An interesting finding here was a positive correlation with the total number of messages and a negative correlation with the average word count. In other words, users like to get attention and receive more messages, but they want those messages to be short and to the point.
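These session-level aggregates are straightforward to compute; a sketch using the same assumed per-message DataFrame (the text column is hypothetical):

```python
# Aggregate per-message rows into one feature row per session.
session_features = df.groupby("session_id").agg(
    total_messages=("text", "size"),
    avg_word_count=("text", lambda s: s.str.split().str.len().mean()),
)
```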

We also considered some other features around the customer’s wait time for a response, but those didn’t show promise for now.

Model selection

Since our problem is a classification problem, we tried a few suitable classification models. We found that a simple logistic regression fit our needs while giving us a good idea of the final result.
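The production model lives in Azure MLStudio (see below), but as a sketch of the approach, an equivalent local experiment with scikit-learn might look like this, assuming session_features also carries the aggregated Watson scores and the label column:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X = session_features.drop(columns=["label"])
y = session_features["label"]  # 1 = user answered "thumbs up"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predicted probability of satisfaction for each held-out session.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```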

Deploying to production

For every new message, we use IBM Watson to analyze sentiment and emotion. Aggregating these with all the previous messages in the chat session gives us our feature vector. Our prediction model is deployed using Azure MLStudio, which gives us an API for real-time predictions. More about this in our next post.
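For a sense of what calling such an endpoint looks like, here is a sketch that assumes the classic Azure MLStudio request/response format; the URL, API key, and column names are placeholders, not our actual service definition.

```python
import requests

# Placeholders ― not our real endpoint.
SCORING_URL = (
    "https://ussouthcentral.services.azureml.net/workspaces/"
    "<workspace>/services/<service>/execute?api-version=2.0&details=true"
)
API_KEY = "YOUR_API_KEY"

def predict_satisfaction(feature_vector: list) -> float:
    """Send one session's feature vector and return the scored probability."""
    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["sentiment_mean", "joy_mean", "anger_mean",
                                "total_messages", "avg_word_count"],
                "Values": [feature_vector],
            }
        },
        "GlobalParameters": {},
    }
    resp = requests.post(
        SCORING_URL, json=body,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()
    # The scored probability is the last column of the returned row.
    return float(resp.json()["Results"]["output1"]["value"]["Values"][0][-1])
```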

Finally

The final user satisfaction prediction is then saved and updated in real time on the mobile app. Here’s what it looks like from the expert’s point of view:

Here we can see the final use of the prediction: in the pool of questions, an indication (a smiley face) is added to every open chat session, reflecting how satisfied the customer is.
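The mapping from the predicted probability to an indicator can be as simple as a couple of thresholds; the cutoff values below are illustrative, not our production values.

```python
def satisfaction_emoji(probability: float) -> str:
    """Map the predicted satisfaction probability to a chat-list indicator."""
    if probability >= 0.7:
        return "🙂"
    if probability >= 0.4:
        return "😐"
    return "🙁"
```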

Future work

This was just our first iteration using this prediction in production. A few things are still missing:

  • Monitoring the predictions
  • Gaining a better understanding of the prediction distribution
  • Enabling an easy deployment of the new prediction models (and starting A/B testing)
  • Finding more features for the prediction, such as the level of resolution the user experienced after they asked their question
  • Considering other labels
  • Using this prediction and more data to find a pricing model for the platform


We hope this was useful to you. Please share your comments or questions below. 🙂


** For privacy reasons some information was redacted from some messages; this did not seem to impact the results.
