Userless User Authentication for Mobile Application

Like all the other cool kids, we at Soluto have a mobile app and a lot of micro-services that this app utilizes. Recently, we added a feature to our app that required sensitive user data, and for this feature we had to add some sort of authentication between our app and the services it utilizes. Usually, this problem is pretty simple to solve: just add social login to the app, and use those credentials to authenticate the requests. This solution has a drawback, however – it means adding a new login screen to the onboarding flow.

We have found that the fewer the steps in the onboarding flow, the more likely users are to complete it (others have found this as well). For this reason, we had one requirement for our authentication solution that we were not willing to compromise on: it must be implemented without any user interaction – what we call “Seamless Authentication”. With seamless authentication, the objective is to authenticate the device instead of the user. A specific device is identified by its deviceId, which the app generates on first launch and which is unique per device (we use a GUID for that).
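As an illustration only (not our actual app code), generating and persisting such a deviceId on first launch could look roughly like the following sketch, assuming the app has some app-private storage to keep it in:

import json
import uuid
from pathlib import Path

# Illustrative sketch: a real mobile app would keep the id in the
# platform's app-private storage rather than in a plain file.
DEVICE_ID_FILE = Path("device_id.json")

def get_or_create_device_id() -> str:
    """Return the existing deviceId, or generate a GUID on first launch."""
    if DEVICE_ID_FILE.exists():
        return json.loads(DEVICE_ID_FILE.read_text())["deviceId"]
    device_id = str(uuid.uuid4())  # unique per device, generated once
    DEVICE_ID_FILE.write_text(json.dumps({"deviceId": device_id}))
    return device_id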

Brief overview of OpenID Connect

As you might already know, OpenID Connect is (at a very high level) a protocol for authenticating clients in various ways, based on OAuth 2.0. The user (through a client, which can be a browser, a mobile app or anything else) authenticates to a server called the “Authorization Server”. The server validates the request and, upon success, issues a token that contains the user’s identity. The user can then use this token to consume other authenticated services (using “Bearer Authentication”).
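For example, calling an authenticated service with such a token might look like the following minimal sketch (the endpoint URL is made up for illustration, and the token would come from the Authorization Server):

import requests  # third-party HTTP client, assumed to be available

access_token = "eyJ..."  # placeholder for a token issued by the Authorization Server

# Hypothetical endpoint; the token is sent using Bearer Authentication.
response = requests.get(
    "https://api.example.com/sensitive-data",
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()
print(response.json())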

The protocol also defines multiple methods (called “flows” or “grants”) to authenticate the user; each flow is intended for a specific client or scenario.
This makes the protocol highly extensible – to replace the authentication method, we only need to change the client and the Authorization Server (usually by just enabling the flow on the Authorization Server). Because of this extensibility, and the popularity of the protocol, we decided to use it to authenticate our app to the backend.

OpenID Connect and seamless authentication

In order to use OpenID Connect, the first thing to decide is which flow to use. The “Resource Owner Password Credentials Grant” seems to fit our needs – it allows us to authenticate a user based on a username and password. We can use the deviceId as the username, and generate a unique password (a One-Time Password, OTP) on each request. We chose to generate a new password on each request because mobile traffic is relatively easy to intercept (see this tool that disables certificate pinning on a rooted device, for example). Also, by design, an OTP is not vulnerable to replay attacks.
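A token request under this grant is a plain form POST to the token endpoint. Here is a minimal sketch (the endpoint and client_id are made up; the parameter names are the standard ones from RFC 6749, section 4.3):

import requests  # third-party HTTP client, assumed to be available

TOKEN_ENDPOINT = "https://auth.example.com/connect/token"  # hypothetical endpoint

def request_token(device_id: str, otp: str) -> str:
    """Resource Owner Password Credentials Grant, with the deviceId as the username."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "password",
            "username": device_id,      # the deviceId stands in for the username
            "password": otp,            # a freshly generated one-time password
            "client_id": "mobile-app",  # illustrative client id
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]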

Password generation

In order to generate a good unique password we need:

  • A method to verify that the password was generated only by the client, and not by a malicious attacker.
  • A method to invalidate the password after each use.

This concept of OTP is widely used – for example, Google Authenticator uses it to generate two-factor codes on mobile devices. One way to accomplish both of the above requirements is a Time-based One-Time Password (TOTP). The TOTP algorithm uses the current time and a shared secret to generate an OTP that is valid only for a very short time range (a minimal sketch of the idea appears after the list below). This is probably good enough as a second factor, but it might not be good enough as a primary password, for the following reasons:

  • The security of the solution depends heavily on the allowed time range – the wider the time range, the less secure the solution. Since the clock on some devices might not be synced, supporting such devices would require a very wide time range.
  • If the storage used by the Authorization Server is compromised, the attacker has access to all the shared secrets and can easily generate their own passwords.
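For reference, here is a minimal sketch of the TOTP idea (in the spirit of RFC 6238, not production code): both sides derive the same short-lived code from a shared secret and the current time step.

import hashlib
import hmac
import struct
import time

def totp(shared_secret: bytes, time_step: int = 30, digits: int = 6) -> str:
    """Derive a short-lived code from a shared secret and the current time step."""
    counter = int(time.time()) // time_step
    mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)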

There are other OTP implementations, but none of them quite fit our needs.

Strong one-time password generation

First, we decided to use a Digital Signature to validate the password. On the app’s first launch, the device generates a public-private key pair (we used RSA, but other algorithms could work). During registration, the device sends the public key and the deviceId to the Authorization Server. After that, each time the client requests a token from the Authorization Server, it creates a JSON payload and signs it with the device’s private key, producing a JWT. The device passes the deviceId and the JWT to the Authorization Server, which uses the registered public key to validate the JWT’s signature.
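A rough sketch of the client side, using the Python cryptography and PyJWT packages purely for illustration (a real app would use the platform’s native key-generation and key-storage APIs, and would generate the pair only once, on first launch):

import jwt  # PyJWT, assumed to be installed with its cryptography extras
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the device key pair (illustrative; done once on first launch).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The public key, together with the deviceId, is sent during registration.
public_key_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# The private key never leaves the device.
private_key_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

def sign_payload(payload: dict) -> str:
    """Sign the per-request payload with the device's private key, producing a JWT."""
    return jwt.encode(payload, private_key_pem, algorithm="RS256")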

Only one question remains: What data (payload) should we pass in the JWT to make it unique per request? Our requirements for this payload are as follows:

  • The payload should be unique (obvious).
  • The payload should allow us to identify a compromised private key.
  • The payload should allow the client to recover from errors.

How can we fulfill these requirements? We came up with a design that we believe covers all of them. For the payload, we use two numbers, and rotate them after each request. The server also stores the numbers, and after validating the signature of the JWT, it compares the numbers in the payload to the numbers it stored on the last request. If the numbers match, the authentication succeeds.

This is the high-level design. Now let’s dive into the details.

Payload rotation

As I said, after each request the client should rotate its payload. To understand the rotation process, let’s talk a bit more about the payload. The two numbers are called OldSyncKey and NewSyncKey. The rotation is pretty simple: OldSyncKey receives the value of NewSyncKey, and NewSyncKey receives a new (cryptographically secure) random value. Let’s walk through a short example to make it clearer. Assume we have the following payload:

{
    "OldSyncKey": 4,
    "NewSyncKey": -9
}

The client just uses it to request a token, and the request succeeds. Now the client needs to rotate the payload. As I said before, OldSyncKey receives the value of NewSyncKey (-9), and NewSyncKey receives a new random value:

{
    "OldSyncKey": -9,
    "NewSyncKey": 76
}

OldSyncKey keeps the last used number, and NewSyncKey receives a new one. They are called Sync Keys because they keep the client and server synchronized: if these numbers do not match the numbers the server stored, it means that something bad happened.
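In code, the client-side rotation could be as simple as the following sketch (the helper names are illustrative, not taken from our implementation):

import secrets

def new_sync_key() -> int:
    """Draw a cryptographically secure random number."""
    return secrets.randbelow(2**31)

def rotate(payload: dict) -> dict:
    """OldSyncKey takes the previous NewSyncKey; NewSyncKey gets a fresh random value."""
    return {
        "OldSyncKey": payload["NewSyncKey"],
        "NewSyncKey": new_sync_key(),
    }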

Payload validation

Now that we understand how the payload rotation works, we can talk about the validation. In the description below, the client’s payload is marked with “c”, and the payload stored in the server’s storage is marked with “s”. So “c.OldSyncKey” is the OldSyncKey received from the client, and “s.NewSyncKey” is the NewSyncKey from the server’s storage. The validation rules are simple (a minimal sketch follows the list):

  • c.OldSyncKey equals s.NewSyncKey: Validation success
  • c.OldSyncKey equals s.OldSyncKey and c.NewSyncKey equals s.NewSyncKey: Validation failure
  • Any other case: Validation failure and the app is marked as revoked – meaning this client will not be able to authenticate any longer.
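Expressed as code, the three rules might look like the following minimal server-side sketch (storage, token issuing and revocation details are omitted):

from enum import Enum

class ValidationResult(Enum):
    SUCCESS = "success"  # issue a token and store the client's payload as the new "s"
    FAILURE = "failure"  # respond with invalid_grant; the client should rotate and retry
    REVOKED = "revoked"  # mark the client as revoked; it can no longer authenticate

def validate(c: dict, s: dict) -> ValidationResult:
    if c["OldSyncKey"] == s["NewSyncKey"]:
        return ValidationResult.SUCCESS
    if c["OldSyncKey"] == s["OldSyncKey"] and c["NewSyncKey"] == s["NewSyncKey"]:
        return ValidationResult.FAILURE
    return ValidationResult.REVOKED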

Let’s break it down. The first rule is the easiest to understand: After a successful request, the server updates its payload with the payload received from the client. If we continue with the example we used before:

{
    "c.OldSyncKey": 4,
    "c.NewSyncKey": -9
}

Then after this request, s.NewSyncKey will be -9, and s.OldSyncKey will be 4. When the client receives the token from the server it rotates the payload, so the next request for a token will be:

{
    "c.OldSyncKey": -9,
    "c.NewSyncKey": 76
}

And now you can see that c.OldSyncKey equals s.NewSyncKey. This is the happy flow, in which the first requirement for our payload is fulfilled: the payload is unique for each request, and we have a mechanism to detect payload reuse.

The other two validation rules support flows other than the happy flow. Let’s discuss those now.

Not so happy flow – recover from errors

The third payload requirement said that the client should be able to recover from errors. Let’s again take the same payload we have already used:

{
    "OldSyncKey": 4,
    "NewSyncKey": -9
}

What will happen if the client encounters an error? (I’m referring to the case where the client does not receive any response from the server, not the case where it receives a status code that represents an error.) For example, what if a network error or a timeout occurs before the client receives the server’s response? The client has no way of knowing whether the server received the request and updated its storage with the new payload. And since we are talking about mobile devices, this is not a rare situation – mobile networks (especially when the device is not on Wi-Fi) are not always stable.

The second validation rule is designed specifically to allow the client to recover from this situation. The client should keep sending exactly the same payload as in its previous request until it receives a response, which will be one of the following (see the sketch after this list):

  • If the original request (that failed) was not received by the Authorization Server, then this payload will be valid and the client will receive a token.
  • If the original request was received by the Authorization Server, and it already updated the server’s internal storage, then the second validation rule will apply (because the client sent the same payload as in the original request).
    The server will respond with a 400 status code (Bad Request, as required by the RFC, with “invalid_grant” as the error).
    The client will then rotate the payload, retry the request, and receive a token.
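Sketched in code, the client’s recovery logic could look like this (reusing the illustrative request_token, sign_payload and rotate helpers from the earlier sketches; a real client would persist the payload and cap the retries):

import requests  # third-party HTTP client, assumed to be available

def get_token_with_recovery(device_id: str, payload: dict) -> tuple[str, dict]:
    """Return a token and the payload to use on the next request."""
    while True:
        try:
            token = request_token(device_id, sign_payload(payload))
            return token, rotate(payload)  # success: rotate for the next request
        except (requests.ConnectionError, requests.Timeout):
            continue  # no response: resend exactly the same payload
        except requests.HTTPError as error:
            if error.response.status_code == 400:  # invalid_grant: server already moved on
                payload = rotate(payload)  # second rule hit: rotate and retry
                continue
            raise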

With this we have now fulfilled both the first (uniqueness) and the third (error recovery) payload requirements. Let’s discuss the final and, in my opinion, most interesting requirement.

Not so happy flow 2 – a malicious attacker captures the device’s private key

Meet Eve and Alice. Alice has our app and loves using it. Eve wants to see Alice’s sensitive data, and she is determined to somehow steal it from our app. Let’s say Eve elevated her privileges via QuadRooter and extracted Alice’s private key from her Nexus 5X. Now Eve is able to impersonate Alice forever. Luckily, our protocol is able to detect such scenarios – let’s see how!

Again, let’s say the last request from Alice’s device was with the following payload:

{
    "OldSyncKey": 4,
    "NewSyncKey": -9
}

Eve is smart and understands the protocol. So she also rotates the payload and requests a token:

{
    "OldSyncKey": -9,
    "NewSyncKey": 76
}

And this request is valid, so Eve receives a token from the Authorization Server. What happens the next time our app tries to request a token? The app rotates the payload it used in its own last request:

{
    "OldSyncKey": -9,
    "NewSyncKey": 45
}

But the NewSyncKey is different from the one Eve used. And here the third validation rule comes to the rescue: when the app on Alice’s device tries to request a token, the request will fail according to the third validation rule, and the Authorization Server will mark the app as revoked/compromised – locking out both Alice and Eve. Once again, our app saves the day!

Just to play the killjoy for a moment: one can argue that if Eve was able to take over Alice’s phone and compromise the private key, then she is also able to alter the payload stored on Alice’s device after each request she makes. In such an extreme scenario we will not be able to detect the compromise. So, to be more accurate, we can only protect the private key if the attacker had one-time access to the device (for example, a malicious technician who compromises it while fixing the device, or a temporary privilege-escalation scenario). If the attacker completely takes over the device, we will not be able to identify the compromise until the attacker’s access has been removed. But in such a situation, there is not much we could do to protect the user in any case.

Conclusion

In this short post, I have demonstrated how we used OpenID Connect to implement seamless authentication between our mobile app and the other services it utilizes.
In the next post, I will start covering the most interesting part – how we implemented this solution, starting with the Authorization Server.

Edit – 18/03/2018

I’ve started working on a proposed standard based on this flow. The new standard is an OAuth 2.0 extension that adds a new client assertion type. You can view it [here](https://soluto.github.io/oauth-jwt-otp-client-assertion/), and you are more than welcome to participate. Feedback is highly appreciated!

10 Comments

  1. Tam Huynh

    Thanks Omer Levi Hevroni!
    Your article is very nice. I have googled this problem many times; I was using OAuth 2.0 (Resource Owner Password Credentials Grant) for my mobile app to work with a RESTful API, but I don’t think it’s perfect, and I’m not confident about its security at the moment. Your blog showed exactly the problem I’ll need to solve. I’m waiting for part 2 of this topic.

    • Thank you! Is there something specific you are interested in?

      • Tam Huynh

        My OAuth 2 integration is getting bigger and messier. My boss has been asking me for many things: he doesn’t like entering his email/password many times, so the app should remember them. OAuth saved me with access tokens and refresh tokens, but it isn’t safe (anybody with the device can view sensitive data in the app), so I mixed OAuth 2 with “login with PIN” (a 4-digit code). After that, I mixed it with 2-step verification login (with a One Time Password). I’m not sure my OAuth 2 implementation still follows the standard, or how secure it is. I still need to support single-device login, tracking login activities, remote logout, etc.

        People keep saying OAuth 2 is stateless and that there are no validations on the server, but my OAuth 2 implementation has many validations. Should I keep using OAuth 2? I would really appreciate hearing your thoughts about the correct way to mix these things. Thanks.

  2. It seems that you are actually using “Client Credentials Grant” (RFC6749 #4.4) with your own authentication scheme (which is allowed) and not “Resource Owner Password Credentials Grant” because no credential is provided to the device by a resource owner (the device generates its own credentials) – at least no such process is mentioned in your description of the protocol, though it could be how the original registration request is secured, which you didn’t describe.

    Regarding the actual scheme, I have a feeling that the client can easily go out of sync without an attacker present, at least in a web-scale environment where you can’t lock the device to a single server and you (sanely) use a lockless data storage with eventual consistency, as a result locking out supposedly happy customers. I would be very cautious in deploying protocols that require such a tight synchronization between the server and client with no way to automatically recover an out-of-sync client.

  3. Hey,
    Thank you for your detailed response.
    Technically, the grant we are using is resource owner, although it is a bit of an abuse of the grant. I could use client credentials, but that would require me to create a new client in our authorization server for each new device. When using the resource owner grant, I can use the same client for all our devices.
    Regarding your second comment – you are correct about the synchronization issues, and this is a pretty common problem. I spent some time comparing various storage options and was able to find a solution that was good enough for our needs.

    • Omer – noted the “found a solution that was good enough for our needs”, though in your place – as correct sync key storage is the linchpin of this whole protocol – I would have noted the possible breakage of the protocol when used with common scalable/eventual-consistency storage services, and/or listed the requirements for the sync key storage and the solution you deployed that fulfils those requirements.

      Most common scalable storage systems support some form of CAS, which is not – by itself – enough for what you need, but you can probably implement a rudimentary pseudo-locking scheme on top of that, or just encode both sync keys into a single value and CAS on that. I would really like to hear about what you’ve used for this deployment.

  4. The storage I am using is Redis. Using Redis as primary storage is possible; you just need to configure it correctly. The main advantage of Redis is performance, but also (as it is in-memory) synchronization issues are somewhat simpler. There is also a locking mechanism in Redis that I’ve considered adding, but so far I have not encountered such issues on the server side.
    You are correct about your point – but I did not add it to this post because this is a more high-level overview of the solution. I do plan to write a more technical post about how I built this solution, but have not gotten to it yet. If that is something of interest to you, you can check out this demo repository: https://github.com/Soluto/authentication-without-authentication-demo

  5. Sven

    How do you sign the new payload JWT on each request without asking the user for a pincode or touchid? The private key is not usable without mobile authentication.

    • Hey Sven,
      The private key is persisted on the device, following the OS best practices. It is persisted without any credentials so that it can be used without any user interaction. Protecting the key with a pincode or touchid would improve the security of the solution, but it would require user interaction – which we tried to avoid. Does that make sense?
