Reputation: 43
Yohoho! I am building an application that leverages OAuth to pull user data from provider APIs, and am wondering about the RFC compliance of the flow I have chosen.
Currently, the user signs in at the authorization server, which returns an auth code to my frontend client. The frontend then passes the auth code to my backend, which exchanges it for an access token and then calls the provider API to pull data.
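To make that concrete, the backend half of the flow looks roughly like the sketch below (a minimal illustration; the provider URLs, parameter names and the data endpoint are placeholders for whatever the real provider uses):

```typescript
// Minimal sketch of the backend half of the flow described above.
// All URLs, credentials and the data endpoint are placeholders.
async function handleAuthCode(authCode: string): Promise<unknown> {
  // Exchange the code received from the frontend for an access token.
  const tokenResponse = await fetch("https://provider.example/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: authCode,
      redirect_uri: "https://app.example/callback",
      client_id: process.env.CLIENT_ID!,
      client_secret: process.env.CLIENT_SECRET!,
    }),
  });
  const { access_token } = await tokenResponse.json();

  // The token never leaves the backend; it is only used to call the provider API.
  const apiResponse = await fetch("https://provider.example/api/userdata", {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  return apiResponse.json();
}
```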
This best practices document states:
Note: although PKCE so far was recommended as a mechanism to protect native apps, this advice applies to all kinds of OAuth clients, including web applications.
To my understanding, PKCE is designed to ensure the token is granted to the same entity that requested the auth code, in order to prevent attackers from using stolen auth codes to execute unwarranted requests.
Now, it makes sense to me why this matters even if the backend keeps the client secret unexposed, since an attacker could still make requests to the backend with an intercepted auth code and receive the token. However, in my flow I am not building an authentication scheme but rather authorizing predetermined requests, so the token never leaves the backend.
So why is PKCE recommended here? It seems to me the most a stolen auth code can do is trigger an API request from the backend, with no token or data ever being returned to the attacker. And assuming a PKCE implementation is the way to go, how exactly would it work? The frontend requesting the auth code and the backend trading it for a token aren't the same entity, so would it be as simple as passing the code_verifier from the frontend to the backend so it can make the call?
Some clarification would be greatly appreciated.
Upvotes: 4
Views: 4391
Reputation: 29291
PKCE ensures that the party who started the login is also the one completing it. There are two main variations, which I'll summarise below in terms of Single Page Apps (SPAs).
PUBLIC CLIENTS
Consider a Single Page App that runs a code flow implemented only in Javascript. It would store a code verifier in session storage during the OpenID Connect redirect. Upon return to the app after login, the verifier would be sent, along with the authorization code and state, to the Authorization Server.
This would return tokens to the browser. If there were a Cross Site Scripting vulnerability, the flow could be abused: malicious code could spin up a hidden iframe and use prompt=none to get tokens silently.
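As a rough illustration of that browser-only variant (the authorization server URL, client_id and redirect URI below are just placeholders), the SPA would do something like:

```typescript
// Browser-only code flow with PKCE: the SPA generates the verifier, keeps it in
// session storage across the redirect, and later exchanges the code itself.
function base64UrlEncode(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...Array.from(bytes)))
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

async function startLogin(): Promise<void> {
  const codeVerifier = base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(codeVerifier));
  const codeChallenge = base64UrlEncode(new Uint8Array(digest));
  const state = base64UrlEncode(crypto.getRandomValues(new Uint8Array(16)));

  // The verifier and state survive the redirect only in session storage.
  sessionStorage.setItem("pkce_verifier", codeVerifier);
  sessionStorage.setItem("oauth_state", state);

  const authorizeUrl = new URL("https://idp.example/oauth/authorize");
  authorizeUrl.search = new URLSearchParams({
    response_type: "code",
    client_id: "spa-client",
    redirect_uri: "https://app.example/callback",
    code_challenge: codeChallenge,
    code_challenge_method: "S256",
    state,
  }).toString();
  location.assign(authorizeUrl.toString());
}

async function completeLogin(code: string, returnedState: string): Promise<unknown> {
  if (returnedState !== sessionStorage.getItem("oauth_state")) {
    throw new Error("state mismatch");
  }
  const response = await fetch("https://idp.example/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: "https://app.example/callback",
      client_id: "spa-client",
      code_verifier: sessionStorage.getItem("pkce_verifier")!,
    }),
  });
  // The tokens land in the browser, where any injected script can reach them.
  return response.json();
}
```

Because completeLogin runs in the page, the tokens are only as safe as the page itself, which is the weakness described above.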
CONFIDENTIAL CLIENTS
Therefore the current best practice for Single Page Apps is to use a Backend for Frontend (BFF) and never return tokens to the browser. In this model it is more natural for the BFF to operate like a traditional OpenID Connect website, where both the state and the code_verifier are stored in a login cookie that lasts for the duration of the sign-in process.
If there were a Cross Site Scripting vulnerability, session riding would be possible: the malicious code could send the authorization code to the BFF and complete a login. However, this would just result in writing secure cookies that the browser cannot access. Similarly, the hidden iframe trick would only rewrite cookies.
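To make the cookie-based variant concrete, here is a rough BFF sketch. Express, the cookie names and the in-memory session helper are my own illustrative choices, not something the standards prescribe:

```typescript
// Illustrative BFF: state and code_verifier live in a short-lived, HttpOnly login
// cookie, and tokens are exchanged and kept server side. URLs are placeholders.
import express from "express";
import cookieParser from "cookie-parser";
import crypto from "crypto";

// Hypothetical in-memory session store, for illustration only.
const sessions = new Map<string, unknown>();
function createServerSession(tokens: unknown): string {
  const id = crypto.randomBytes(16).toString("base64url");
  sessions.set(id, tokens);
  return id;
}

const app = express();
app.use(cookieParser());

app.get("/login", (req, res) => {
  const codeVerifier = crypto.randomBytes(32).toString("base64url");
  const codeChallenge = crypto.createHash("sha256").update(codeVerifier).digest("base64url");
  const state = crypto.randomBytes(16).toString("base64url");

  // The login cookie only needs to last for the duration of the sign-in process.
  res.cookie("login", JSON.stringify({ state, codeVerifier }), {
    httpOnly: true, secure: true, sameSite: "lax", maxAge: 5 * 60 * 1000,
  });

  const authorizeUrl = new URL("https://idp.example/oauth/authorize");
  authorizeUrl.search = new URLSearchParams({
    response_type: "code",
    client_id: "web-client",
    redirect_uri: "https://app.example/callback",
    code_challenge: codeChallenge,
    code_challenge_method: "S256",
    state,
  }).toString();
  res.redirect(authorizeUrl.toString());
});

app.get("/callback", async (req, res) => {
  const login = JSON.parse(req.cookies["login"] ?? "{}");
  if (req.query.state !== login.state) {
    res.status(400).send("state mismatch");
    return;
  }

  // Confidential client: the exchange uses both the client secret and the code_verifier.
  const tokenResponse = await fetch("https://idp.example/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: String(req.query.code),
      redirect_uri: "https://app.example/callback",
      client_id: "web-client",
      client_secret: process.env.CLIENT_SECRET!,
      code_verifier: login.codeVerifier,
    }),
  });
  const tokens = await tokenResponse.json();

  // No tokens go back to the browser, only secure cookies it cannot read.
  res.clearCookie("login");
  res.cookie("session", createServerSession(tokens), {
    httpOnly: true, secure: true, sameSite: "strict",
  });
  res.redirect("/");
});

app.listen(3000);
```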
The code_verifier could alternatively be stored in session storage and sent from the browser to the BFF, but it would be easy enough for malicious code to grab it and send it to the server as well. This does not really change the risks; the key point is that no tokens should be returned to the browser. It is OK to use secondary security values in the browser, as long as you can justify them, e.g. to security reviewers. Generally, though, it is easier to explain if secure values live in secure cookies and are less visible to Javascript.
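If you did choose that alternative, the browser side would be as simple as you suggest: post the returned code and the stored code_verifier to the BFF and let it do the exchange (paths and names here are again illustrative):

```typescript
// The alternative above: the SPA keeps the code_verifier and hands it to the BFF,
// which still performs the token exchange; only cookies come back to the browser.
async function sendCodeToBff(code: string, state: string): Promise<void> {
  await fetch("/bff/callback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include", // the BFF responds with secure, HttpOnly cookies only
    body: JSON.stringify({
      code,
      state,
      code_verifier: sessionStorage.getItem("pkce_verifier"),
    }),
  });
}
```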
FURTHER INFO
Best practices often vary with the use case. At Curity we provide resources for customers (and general readers) that explain secure design patterns, based on the security standards and translated to real-world use cases. You may find our recent SPA Security Whitepaper useful.
Upvotes: 5