I have a project that uses node-oidc-provider and angular-oauth2-oidc.
However, one thing is weird: the refresh token request uses prompt=consent (I know it is by spec), which returns a 303 with a location that includes the code in the hash. The token does get refreshed, but it is terrible UX if the SPA application reloads in the middle of a user interaction. Is this expected behaviour, or is something in my configuration wrong?
Is there any way to get refresh token through backchannel like AJAX request (I would like to avoid iframes if possible)? I can't find any specs on how it should work.
OPTION 1
The traditional SPA solution is to use an Authorization Code Flow (PKCE) redirect on a hidden iframe with prompt=none. Because the redirect runs inside the iframe, a mini app, the SPA itself is not reloaded, as in this code. This is no longer reliable though, due to recent browser restrictions that drop the SSO cookie, e.g. in Safari.
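For reference, a minimal sketch of that hidden-iframe renewal, assuming a generic OIDC provider; the endpoint, client_id and the PKCE challenge are placeholders (angular-oauth2-oidc has built-in support for this style of silent refresh):

```typescript
// Hidden-iframe silent renewal sketch (not the exact library implementation):
// load the authorize URL with prompt=none in an invisible iframe, then receive
// the authorization code back via postMessage from the redirect page.
function silentRenew(authorizeEndpoint: string, clientId: string, redirectUri: string): void {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,          // small static page that posts the code back
    response_type: 'code',
    scope: 'openid profile',
    prompt: 'none',                     // fail silently rather than showing a login UI
    code_challenge: '<pkce-challenge>', // placeholder - compute a real S256 challenge
    code_challenge_method: 'S256',
  });

  const iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  iframe.src = `${authorizeEndpoint}?${params.toString()}`;
  document.body.appendChild(iframe);

  window.addEventListener('message', (event) => {
    // In a real app, verify event.origin before trusting the message.
    console.log('silent renew result', event.data);
  }, { once: true });
}
```

When the browser drops the SSO cookie, the prompt=none request comes back with an error such as login_required instead of a code, which is why this approach has become unreliable.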
OPTION 2
Another option is to send a refresh token grant message in an Ajax request. But this relies on storing a refresh token in browser local storage to survive page reloads, which is not considered secure and is unlikely to fare well in pen tests etc.
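A rough sketch of what that Ajax refresh looks like, assuming a standard OAuth 2.0 token endpoint and a public client (no client secret in the browser); the weak point is that the refresh token has to live somewhere JavaScript can read it:

```typescript
// Refresh token grant over fetch, for a public client.
async function refreshTokens(tokenEndpoint: string, clientId: string, refreshToken: string) {
  const response = await fetch(tokenEndpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      client_id: clientId,
      refresh_token: refreshToken, // has to be readable by JS, e.g. from local storage
    }),
  });
  if (!response.ok) {
    throw new Error(`Token refresh failed: ${response.status}`);
  }
  return response.json() as Promise<{ access_token: string; refresh_token?: string }>;
}
```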
OPTION 3
The preferred option these days is a variation on option 2, where the refresh token is stored in a secure, HTTP-only, encrypted cookie. It is possible to issue cookies via an API, if you want to avoid impacting the web architecture, though it is a little tricky. See this Curity blog post for more on this approach - and this code sample.
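As a rough illustration of option 3 (not the Curity sample itself), a small backend endpoint can hold the refresh token in an encrypted HttpOnly cookie and hand the SPA only short-lived access tokens; the encrypt/decrypt and exchangeRefreshToken helpers below are hypothetical:

```typescript
import express from 'express';
import cookieParser from 'cookie-parser';

// Placeholders for this sketch: real code would use proper crypto and the
// provider's actual token endpoint.
declare function encrypt(value: string): string;
declare function decrypt(value: string): string;
declare function exchangeRefreshToken(rt: string): Promise<{
  access_token: string;
  refresh_token?: string;
  expires_in: number;
}>;

const app = express();
app.use(cookieParser());

app.post('/token/refresh', async (req, res) => {
  const encrypted = req.cookies['rt']; // HttpOnly, Secure, SameSite=strict cookie
  if (!encrypted) return res.status(401).end();

  const refreshToken = decrypt(encrypted);
  const tokens = await exchangeRefreshToken(refreshToken);

  // Rotate the cookie if the provider issued a new refresh token.
  if (tokens.refresh_token) {
    res.cookie('rt', encrypt(tokens.refresh_token), {
      httpOnly: true, secure: true, sameSite: 'strict', path: '/token',
    });
  }
  // Only the short-lived access token is returned to the SPA.
  res.json({ access_token: tokens.access_token, expires_in: tokens.expires_in });
});
```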
Related
I've seen a few posts on this but I want to highlight some specific questions I have yet to see properly answered.
My current chrome extension contains the following:
background service worker
html pages to handle login / logout (although doing this in a popup would be great)
content scripts that run a SPA on certain domains
What I would like is for a user to be able to authenticate with auth0, and then any content script running on any domain can then use an access token to hit my API.
The current challenges I've been seeing that I'm not sure how to tackle:
Ideally each running content script would have its own token. This involves using the auth0 session to silently get an access token. However, since auth0 checks the origin when hitting /authorize, it would mean registering every domain as an "allowed origin", which is not possible for me due to volume. Currently, if I just set the redirectURI to my chrome extension URL, the request ends up timing out. I have seen some users report this approach working, so I'm not sure whether I'm doing something wrong, but this approach feels unsafe in retrospect anyway.
If I instead funnel my requests through the background script, so all running content scripts effectively use a single access token, how do I refresh that access token? The documentation recommends making a call to /oauth/token which involves the client secret. My guess is this is not something I should be putting into my javascript as all of that is visible to anyone who inspects the bundle. Is that a valid concern? If so, what other approach do I have?
If I do use a manually stored refresh_token, what is the best way to keep that available? The chrome storage API says not to use it for sensitive information. Is it preferred then to keep it in local storage?
If the best option is to have the background script make all the requests on behalf of the content scripts, what is the safest way for the content scripts to make a request through the background script? I would rely on chrome.runtime.sendMessage but it seems like the API supports arbitrarily sending messages to any extension, which means other code that isn't part of the extension could also funnel requests through the background script.
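For context on that last point, here is a hedged sketch of the kind of guard I've been considering in the background service worker; callMyApiWithStoredToken is a made-up helper, and this assumes the Chrome extension typings:

```typescript
// Background service worker: only act on messages that come from this
// extension's own content scripts. Web pages and other extensions arrive via
// onMessageExternal (and only if externally_connectable allows it), but the
// sender check is a cheap extra guard.
declare function callMyApiWithStoredToken(payload: unknown): Promise<unknown>; // made-up helper

chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (sender.id !== chrome.runtime.id) {
    return; // ignore anything not sent by this extension
  }
  if (message?.type === 'apiRequest') {
    callMyApiWithStoredToken(message.payload).then(sendResponse);
    return true; // keep the message channel open for the async response
  }
});
```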
More generally, I would love to hear some guidance on a safe architecture to authenticate users for a multi-domain extension.
I am also not averse to using another service, but from what I've seen so far, auth0 offers relatively good UX/DX.
My understanding is that, if you include your login page in your SPA, then the user is receiving all of your code before they're even authenticated. And yet, it seems to be a very common practice. Isn't this incredibly insecure?? Why or why not?
An SPA would have all the page structures (html and javascript code for the design of pages), but obviously not data. Data would be downloaded in subsequent ajax requests, and that is the point. To download actual data, a user would have to be authenticated to the server, and all security would then be implemented server-side. An unauthorized user should not be able to access data from the server. But the idea is that how pages look is not a secret, anybody can have a look at pages of the SPA without data, and that's fine.
Well, and here comes the catch that people often overlook. HTML is one thing, but there is also all the JavaScript in an SPA that can access all the data. Basically, the code of the SPA is API documentation if you like: a list of possible queries that the backend can handle. Sure, it should all be secure server-side, but that's not always the case; people make mistakes. With such "documentation", which an SPA effectively is, it can be much easier for an attacker to evaluate server-side security and find authorization / access-control flaws in server-side code, which may enable access to data that should not be accessible to the attacker.
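A made-up example of what that "documentation" looks like in practice - the endpoint and parameters are invented, but anyone reading the bundle learns exactly which calls exist and what to try:

```typescript
// Even minified, code like this tells an attacker the endpoint, the query
// parameters it accepts, and that authorization happens via a bearer token.
async function loadOrgReport(orgId: string, token: string) {
  const res = await fetch(`/api/orgs/${orgId}/reports?includeDeleted=true`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
// If the server forgets to check that the caller may access `orgId`, this one
// function shows exactly which request to replay with someone else's orgId.
```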
So in short, having access to how pages look (without data) should be ok. However, giving away how exactly the API works can in certain scenarios help an attacker, and therefore adds some risk, which is inherent to SPAs.
It must be noted though that it should not matter. Since security by obscurity should not be relied on (i.e. it should not be a secret how things work; only things like credentials should be secrets), it should be fine to let anyone see all the JavaScript, or the full API documentation. However, the real world is not always so idealistic. Often attackers don't know how things work, and being able to, for example, analyze an SPA can be of real help, because the people who write the backend code do make mistakes. In other cases the API is public and documented anyway, in which case having an SPA presents no further risk.
If you put the SPA behind authentication (only authenticated users can download the SPA code), that complicates CDN access a lot, though some content delivery networks do support some level of authentication I think.
Yet there is a real benefit to having a separate (plain old HTML) login page outside the SPA. If you have the login page in the SPA, you can only receive an access token (session id, whatever) in JavaScript, which means it will be accessible to JavaScript, and you can only store it in localStorage or a plain non-httpOnly cookie. This may easily result in the authentication token being stolen via cross-site scripting (XSS). A more secure option is to have a separate login page which sets the authentication token as an httpOnly cookie, inaccessible to any JavaScript and as such safe from XSS. Note though that this brings the risk of CSRF, which you will then have to deal with, as opposed to the token/session id being sent as something like a request header.
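A minimal sketch of that separate login page's backend in Express (TypeScript), assuming hypothetical findUser/verifyPassword/createSession helpers:

```typescript
import express from 'express';

// Hypothetical helpers standing in for your user store and session layer.
declare function findUser(username: string): Promise<{ id: string } | null>;
declare function verifyPassword(user: { id: string }, password: string): Promise<boolean>;
declare function createSession(userId: string): Promise<string>;

const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/login', async (req, res) => {
  const user = await findUser(req.body.username);
  if (!user || !(await verifyPassword(user, req.body.password))) {
    return res.status(401).send('Invalid credentials');
  }
  const sessionId = await createSession(user.id);
  res.cookie('session', sessionId, {
    httpOnly: true,  // not readable by any JavaScript, so XSS cannot steal it
    secure: true,    // only sent over HTTPS
    sameSite: 'lax', // partial CSRF mitigation; still pair with a CSRF token
  });
  res.redirect('/app'); // hand over to the SPA, which never touches the token
});
```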
In many cases, having the login in the SPA and storing the authentication token in localStorage is acceptable, but this should be an informed decision, and you should be aware of the risk (XSS, vs CSRF in the other case).
It's clear that data loaded into an SPA must be secured behind an API through authentication. But I think you can also secure the layout, so that having access to how pages look becomes "less ok". With metamodel-driven development, you can serve layout configuration from a secured API. I am not talking about serving HTML (that's SSR); I am talking about serving JSON. That layout configuration is nothing but a JSON document on the server defining the content of your screen (fully or partially). Your SPA code then turns into a generic interpreter/renderer of that metamodel, which parses the payload, renders components and binds data. If your API is Level 3 (hypermedia-driven), voilà, you get a fully working API-driven app.
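A toy sketch of the idea, with a made-up endpoint and component names: the screen definition itself comes from a protected API, so an unauthenticated user gets neither data nor layout:

```typescript
// Layout-as-data: a screen is described as a JSON tree of components.
type LayoutNode = {
  component: string;
  props?: Record<string, unknown>;
  children?: LayoutNode[];
};

async function loadScreen(screenId: string, accessToken: string): Promise<LayoutNode> {
  const res = await fetch(`/api/screens/${screenId}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error('Not allowed to load this screen');
  // e.g. { component: "Form", children: [{ component: "TextField", props: { bind: "email" } }] }
  return res.json();
}

// A generic interpreter then walks the tree, maps `component` names to locally
// registered React/Angular components, and binds the data as it renders.
```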
No matter how I reason about it, it seems as if there is no secure way of implementing a client-side rendered single-page application that uses session information for authentication (e.g. via cookies) without a severe compromise in security. I was mainly looking at building a React app, but it seems as if I will need to build it with SSR to get a relatively secure version of authentication.
The use case that I'm especially thinking of is where the user logs in or registers and then gets a cookie with the session id. From there, in a server side implementation, I can simply set up conditional rendering depending on whether the server stored session has an associated user id or not and then pull the user information from there and display it.
However, I can't think of a client-side rendered solution where the user can use the session id alone on the cookie that isn't easily spoofable. Some of the insecure implementations would include using browser storage (local/session). Thanks.
I think the major issue here is that you are mixing the two parts of a web page (at least according to what HTML set out to achieve) and treating them both as sensitive information.
You have two major parts in a web page - the first being the display format and the second being the data. The presumption in client side rendering / single page applications is that the format itself is not sensitive, and only the data needs to be protected.
If that's the case, you should treat your client-side redirect-to-login behavior as a quality of life feature. The data endpoints on your server would still be protected - meaning that in theory an unauthenticated user could muck about in the static HTML they are served and extract page layouts and templates - but those would be meaningless without the data to fill them, which is the protected part.
In practice, your end product would be a single page application that makes requests to various API endpoints to fetch data and fill in the requested page templates. You wouldn't even need to go as far as storing complex session state client-side - a simple flag telling the client whether it is authenticated or not would suffice (in addition to whatever you normally use for server-side authentication, such as cookies or tokens).
Now let's say I'm a malicious user who is up to no good. I could "spoof" the flag - or really just open the browser dev tools and set isAuthenticated to true - letting me skip past the login screen. Now what would I do? I could theoretically navigate to my-service/super-secret without being redirected back to the login page on the client side, but as soon as the relevant page tries to load its data from the server with nonexistent credentials, it would fail - best case displaying an error message, worst case with some internal exception and a view showing a broken template.
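A small Express (TypeScript) sketch of that split - the server-side gate is the only real protection, and the helper names are illustrative:

```typescript
import express from 'express';

// Hypothetical helpers standing in for whatever session/data layer is used.
declare function lookupSession(cookieHeader: string | undefined): { userId: string } | null;
declare function loadSecretFor(userId: string): unknown;

const app = express();

// The real gate: no valid session, no data - regardless of any client-side flag.
app.get('/api/super-secret', (req, res) => {
  const session = lookupSession(req.headers.cookie);
  if (!session) {
    return res.status(401).json({ error: 'not authenticated' });
  }
  res.json({ secret: loadSecretFor(session.userId) });
});
```

The client-side isAuthenticated flag then only decides whether to render the page or redirect to the login screen; flipping it in dev tools just gets you an empty template and a 401 from this endpoint.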
So just to emphasize in short:
A. If what you want to protect is your TEMPLATE then there is no way to achieve this clientside.
B. If what you want to protect is your DATA then you should treat gating/preventing users from navigating to protected pages as a quality of life feature and not a security feature, since that will be implemented on the server when serving the data for that specific page.
I am implementing a REST API that has both mobile application and browser-based clients and users. Based on questions I've asked and previous questions here and at security.stackexchange, I have come to the conclusion that, to stay as "RESTful" as I can for as long as I can, HTTP Basic Auth over SSL is sufficient for authentication. The problem is I'd also like to implement two-factor authentication along with it. Is it acceptable to extend the credentials the client sends in response to the 401 challenge, like username:password:token, or to send the token in a totally separate request header but in the same request as the client's Basic Auth response? Since I'm using node.js + express/connect, I have access to the entire HTTP protocol stack, but I want to remain as RESTful as possible for scalability reasons. On the browser side, I guess I could do the Basic Auth, and if it passes, ask for the TFA token, and only if that passes consider the user authenticated.
You can technically make up new authentication schemes to extend from HTTP Basic Auth, but they generally won't be supported by browsers. In your example, I don't believe any browser would be able to natively ask for and send username:password:token in the same way they can easily ask for username and password.
Generally, two-factor authentication schemes work by putting the user into an intermediary state using some form of session, as you mentioned in your second example. A user who has passed the first factor, say username/password via Basic Auth, has a session opened but not marked as really logged in until they also pass the second factor - inputting a dongle code or something like that. Once both factors are passed, their session is marked as fully logged in and they can access their account/data/whatever.
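A sketch of that intermediary state with express-session; the route names and the checkBasicAuth/verifyTotp helpers are illustrative, not something from the question:

```typescript
import express from 'express';
import session from 'express-session';

// Hypothetical helpers for the two factors.
declare function checkBasicAuth(header: string | undefined): { id: string } | null;
declare function verifyTotp(userId: string, token: string): boolean;

// Let TypeScript know about the extra session fields used below.
declare module 'express-session' {
  interface SessionData {
    userId?: string;
    secondFactorPassed?: boolean;
  }
}

const app = express();
app.use(express.json());
app.use(session({ secret: 'replace-me', resave: false, saveUninitialized: false }));

// Factor 1: Basic Auth over TLS opens a half-authenticated session.
app.post('/login', (req, res) => {
  const user = checkBasicAuth(req.headers.authorization);
  if (!user) return res.status(401).end();
  req.session.userId = user.id;
  req.session.secondFactorPassed = false;
  res.json({ next: 'second-factor' });
});

// Factor 2: only now is the session marked as fully logged in.
app.post('/login/verify-token', (req, res) => {
  if (!req.session.userId) return res.status(401).end();
  if (!verifyTotp(req.session.userId, req.body.token)) return res.status(401).end();
  req.session.secondFactorPassed = true;
  res.json({ loggedIn: true });
});

// Everything under /api requires both factors.
app.use('/api', (req, res, next) => {
  if (!req.session.userId || !req.session.secondFactorPassed) return res.status(401).end();
  next();
});
```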
Most frameworks I've looked at will insert into forms a hidden input element with the value being a CSRF token. This is designed to prevent user Bob from logging in on my site and then going to http://badsite.com which embeds img tags or JS that tell my site to execute requests using Bob's still logged in session.
What I'm not getting is what stops JS on badsite.com from AJAX requesting a URL with a form on my site, regex-ing the CSRF token from the hidden input element, and then AJAX posting to my site with that valid CSRF token?
It seems to me that you'd want to use JS to insert the CSRF token into the form at runtime, pulling the value from a cookie (which is inaccessible to badsite.com). However, I've not heard this approach mentioned, and since so many frameworks do the simple hidden input with the CSRF token, I'm wondering if my solution is over-engineered and I'm missing some part of what makes the hidden-input method secure.
Can anyone provide some clarity? Thanks!
what stops JS on badsite.com from AJAX requesting a URL with a form on my site
The Same Origin Policy (unless you subvert it with overly liberal CORS headers). JavaScript running on a site can't read data from a site hosted on a different host without permission from that host.
There are workarounds to the SOP, but they all either require the co-operation of the host the data is being read from (JSON-P, CORS), or don't pass any data that identifies a specific user (so can't access anything that requires authorisation).
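As a hedged illustration (victim.example is a made-up domain): even if badsite.com runs the script below, the browser either blocks the cross-origin request or gives its JavaScript an unreadable response, unless the victim site explicitly opts in via CORS.

```typescript
// Script running on badsite.com trying to scrape a CSRF token from another site.
fetch('https://victim.example/account/settings', { credentials: 'include' })
  .then((res) => res.text()) // cross-origin read is blocked without CORS approval
  .then((html) => {
    const token = html.match(/name="csrf_token" value="([^"]+)"/)?.[1];
    console.log('never reached without a permissive CORS policy:', token);
  })
  .catch((err) => console.log('blocked by the Same Origin Policy / CORS:', err));
```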