I'm working on a login flow and can't decide what's best for verifying authentication in a microservice architecture.
On login, a JWT is sent to the client in an httpOnly cookie, where it remains.
On every request the cookie is sent to the protected REST APIs (microservices) to verify the authenticity of the request / JWT.
The two options available:
I create common middleware for cookie / JWT verification and add it to each microservice (a rough sketch of this follows below)
I embed this logic in the auth microservice and have each service call it for verification in a centralised way over HTTP(S).
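For illustration, the first option could look roughly like the sketch below: a small Express middleware, published as a shared package, that each microservice mounts to read and verify the JWT from the httpOnly cookie. The package layout, cookie name, and secret handling here are assumptions, not a prescribed implementation.

```typescript
// shared/requireAuth.ts -- hypothetical shared middleware package
import { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

// Assumes all services share the signing secret (or a public key for RS256 tokens)
const JWT_SECRET = process.env.JWT_SECRET as string;

export function requireAuth(req: Request, res: Response, next: NextFunction) {
  // Requires cookie-parser to be registered in the service: app.use(cookieParser())
  const token = req.cookies?.["access_token"]; // assumed cookie name
  if (!token) {
    return res.status(401).json({ error: "Missing auth cookie" });
  }
  try {
    // Attach the verified claims so downstream handlers can authorize
    (req as any).user = jwt.verify(token, JWT_SECRET);
    next();
  } catch {
    return res.status(401).json({ error: "Invalid or expired token" });
  }
}
```

Each microservice would then mount it with app.use(requireAuth), or only on its protected routes.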
Both options would work; I wonder what the pros and cons of each approach are.
Do you have experience with either one, and would you therefore suggest one over the other?
I decided to isolate the authentication service entirely.
The code that calls the auth API will be common to all the microservices that need to handle authentication, but the authentication API itself only needs to be changed in one place, so an update automatically applies to all microservices.
I prefer this over pulling changes into each of the microservices individually on every update.
The API can also be versioned if need be.
HTTP calls add a small delay, but then again they are lightweight.
The same applies to microservices that communicate with each other for purposes other than authentication.
They will be deployed to a Kubernetes cluster in the cloud, so I assume these calls will be fast.
The code can still be modified if need be.
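As a rough sketch of what that shared calling code might look like in Express: a middleware that forwards the incoming cookie to the centralised auth service and lets the request through only if verification succeeds. The /verify endpoint, environment variable, and response shape are assumptions for illustration only.

```typescript
// authClient.ts -- hypothetical shared helper used by every microservice
import { Request, Response, NextFunction } from "express";

const AUTH_VERIFY_URL = process.env.AUTH_VERIFY_URL ?? "http://auth-service/verify";

export async function requireAuth(req: Request, res: Response, next: NextFunction) {
  try {
    // Forward the original cookie header so the auth service can verify the JWT
    const result = await fetch(AUTH_VERIFY_URL, {
      method: "POST",
      headers: { cookie: req.headers.cookie ?? "" },
    });
    if (!result.ok) {
      return res.status(401).json({ error: "Unauthorized" });
    }
    (req as any).user = await result.json(); // assumed to return the verified claims
    next();
  } catch {
    res.status(502).json({ error: "Auth service unavailable" });
  }
}
```

Versioning the endpoint (e.g. /v1/verify) keeps the option open to evolve the contract without touching every service at once.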
I am currently implementing an API project using express-js.
There are multiple clients for the API. This includes a front-end web app and some backend services.
I am looking at using normal session-based management for authentication, using express-session.
I prefer this over JWT since session-based auth + secure cookies is easier for many of the use cases I need:
Ability to revoke user access from server side
Allow only a single active web session per user
I am sure I can maintain a separate persistence table with userid + refresh_token + access_token to achieve the above.
It's just that the session-based approach gives me these in a straightforward way.
Now when it comes to my daemon services, I would still like them to go via the API route. This will be more like the Client Credentials Flow.
Since these are non-browser clients, cookies will not be supported.
I am not sure how my app's APIs can continue to support both of them.
The only option I have found so far, based on reading various blog sources, is to use a JWT in a cookie for the web front end and a JWT as a bearer token in the header for the services.
This means that:
I will need to implement all the security mechanisms like token blacklisting, regenerating refresh_tokens, etc.
I will potentially lose out on the main benefit of JWT, its statelessness.
What options do I have to ensure that my API layer can support both front-end web apps like React/Angular and other microservices?
The standard solution here is to use an API gateway:
APIs receive JWT access tokens regardless of the client, and validate them on every request
Browser clients have their own routes to APIs, and send cookies that contain or reference tokens
Mobile clients call the APIs directly, but with opaque access tokens
APIs call each other inside the cluster using JWTs, typically by forwarding the original token from the web or mobile client
The API gateway can perform translation where required. Here are a couple of related articles:
Phantom Token Pattern
Token Handler Pattern
Done well, all of this should provide a good separation of concerns and keep application code simple.
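As a concrete illustration of the "JWT access tokens regardless of the client" point, an API behind the gateway could accept the token either from the Authorization header (daemon / service clients) or from a cookie set for browser clients. The cookie name, key handling, and algorithm below are assumptions, not part of any of the linked patterns.

```typescript
// Sketch: one API-side authentication path for both browser and daemon clients
import { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken";

const PUBLIC_KEY = process.env.JWT_PUBLIC_KEY as string; // assumes RS256-signed tokens

function extractToken(req: Request): string | undefined {
  const header = req.headers.authorization;
  if (header?.startsWith("Bearer ")) return header.slice("Bearer ".length);
  return req.cookies?.["access_token"]; // assumed cookie name; requires cookie-parser
}

export function authenticate(req: Request, res: Response, next: NextFunction) {
  const token = extractToken(req);
  if (!token) return res.status(401).end();
  try {
    (req as any).claims = jwt.verify(token, PUBLIC_KEY, { algorithms: ["RS256"] });
    next();
  } catch {
    res.status(401).end();
  }
}
```

With this shape, the gateway (or a token handler) owns the cookie concerns for browsers, while daemons simply send a bearer token, and the API code stays the same for both.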
Is it typical to use machine-to-machine authentication alongside user-based authentication? Meaning: if I have a gateway or proxy that accepts user requests and verifies the JWT that comes in with each user request before processing or forwarding it to application servers, is it normal, or a misuse, to also use a machine-to-machine JWT to ensure that requests arriving at the application servers originated from the gateway? And furthermore, is it normal, or a misuse, to wrap or nest the user's JWT within the machine-to-machine JWT when making the request to the application server?
Is it simply more typical to have the gateway validate the JWT signature and claims and just forward it to the various application servers as needed?
Is the desire to nest JWTs in this fashion overkill, or some misuse / a case of "you're holding it wrong"?
If you have a bunch of back end microservices being called via a gateway then it is usual for the original access token to be forwarded - this provides user context to your APIs so that they can authorize correctly.
A better use of the reverse proxy is to swap confidential tokens for those with rich claims - see the Phantom Token Approach for how this works.
Note also that it is recommended for each individual API to validate the JWT - this is often described as a Zero Trust Architecture, which protects against man-in-the-middle exploits.
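In practice, each API validating the JWT itself usually means checking the signature against the authorization server's published keys (JWKS) plus the expected issuer and audience. A minimal sketch with the jose library, where the JWKS URL, issuer, and audience are placeholders:

```typescript
// Per-API token validation against the authorization server's JWKS (zero-trust style)
import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder values -- substitute your authorization server's metadata
const jwks = createRemoteJWKSet(new URL("https://auth.example.com/.well-known/jwks.json"));

export async function validateAccessToken(token: string) {
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://auth.example.com",
    audience: "orders-api",
  });
  return payload; // verified claims, usable for authorization decisions
}
```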
I have an API that uses OAuth authentication (Hydra) to authenticate requests coming from the user's browser.
I would also like to send requests to the same APIs from another backend (Node.js).
I'm a bit confused about the best way to do this.
The current Authentication mechanism uses a refresh token (1h).
I was thinking about creating another client for the backend in Hydra, but it seems strange that the backend would use the same refresh-token method as the browser (I've never seen this before).
Any help with how to address this issue will be appreciated.
So... there are several concepts you might need to take into consideration here...
Since its conception, the OAuth 2.0 family of standards has distinguished between private (trusted) and public (potentially vulnerable to attacks) clients. The client you've got running in the browser falls into the latter category, and thus most experienced OAuth devs out there would argue that it's not OK to use refresh tokens for this client. For your backend service (even if it is a simple backend-for-frontend) written in Node, that's a completely different story - there it's OK to use and store refresh tokens.
If however your node.js backend is working "outside" an active customer session, i.e. tries to access customer data even when no customer is actively interacting with the frontend, you might also want to consider the machine-to-machine flow provided by OAuth 2.0 - the Client Credentials Flow.
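For the Client Credentials case, the Node.js backend would be registered as its own confidential client in Hydra and request tokens directly from the token endpoint - roughly along the lines of the sketch below (the endpoint URL, scope, and environment variable names are placeholders):

```typescript
// Sketch: obtaining an access token via the Client Credentials grant
const TOKEN_ENDPOINT = "https://hydra.example.com/oauth2/token"; // placeholder URL

export async function getServiceToken(): Promise<string> {
  const response = await fetch(TOKEN_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      // The backend authenticates as itself with its own client credentials
      Authorization:
        "Basic " +
        Buffer.from(`${process.env.CLIENT_ID}:${process.env.CLIENT_SECRET}`).toString("base64"),
    },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      scope: "api.read", // whatever scopes the backend actually needs
    }),
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  const { access_token } = await response.json();
  return access_token;
}
```

No refresh token is involved here: when the access token expires, the backend simply requests a new one.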
I am planning to make a simple admin CP. I'm an old-school PHP developer, used to everything living in one huge monolithic server, where the concept of microservices does not apply.
In my next app I would like to have:
Express UI (Frontend) <----> REST/GraphQL API <-----> DB server
The idea is to limit access to the DB as much as possible. All requests from users would go to the frontend only, and the API would be used only internally by the other servers in my solution.
I will set up IP filters between the API and the DB, and likely between the frontend and the API. But my concern is this: say I want an admin to create a product. While this user will be authenticated on the frontend using sessions, I need the requests going to the API to be authenticated too. Ignoring IP filters for now, I do not want just anybody to be able to send REST requests to the API.
I have several ideas, please give me your opinion:
sharing express-session between the API and the frontend using MongoDB (likely on yet another server) - I see latency issues
putting the API service on the same server as the frontend and using Redis to share sessions - kinda defeats the purpose of microservice separation
on login, generating a jsonwebtoken that is always forwarded between the frontend and the API for any user action - cookie stealing will be an issue, since I can only verify that the user is logged in, not that they authorized a certain action to be performed
on login, sending a private key to the admin and having them sign all requests that are forwarded to the API - this looks like CPU overkill
Is there any generally used solution I am missing? Is separating the frontend and the API into separate tiers overkill, or good practice? I could easily merge the two and talk to the DB directly from the frontend; then I could manage everything with sessions just like in PHP.
Thanks for any inputs! Cheers
A more elaborate implementation of (1) is to use a dedicated session server. The idea is to remove the database lookup latency, though not the bottleneck of session lookup in general; it acts as a caching layer. A zero-coding implementation is to use something like Redis or Memcached as the session storage.
In general though, a cryptographic signing mechanism like JWT would be much more scalable because it involves zero I/O lookup. All you do is verify that the token is properly signed. And as long as you keep your application secret safe you're secure. You can even encode things like user roles and permissions directly in the token to completely avoid querying the database for it.
The key idea of JWT is that all the security is handled in the backend. The front-end only echoes the token back to the server as proof of authentication.
But since the front-end stores the token, it can be hijacked by JavaScript. One solution is to use HttpOnly cookies as the mechanism to transmit the tokens. I've even seen implementations where the main part of the token is sent in the Authorization header but the signature is sent in an HttpOnly cookie. This prevents scripts from being able to read the entire token.
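That split-token variant could be sketched roughly as follows; the cookie name, cookie options, and header usage are just illustrative assumptions:

```typescript
// Sketch of the "split token" idea: the header and payload travel normally,
// while the signature travels only in an HttpOnly cookie and is rejoined server-side.
import { Request, Response } from "express";

export function sendSplitToken(res: Response, token: string) {
  const lastDot = token.lastIndexOf(".");
  const headerAndPayload = token.slice(0, lastDot); // readable by client-side JS
  const signature = token.slice(lastDot + 1);       // hidden from client-side JS
  res.cookie("jwt_sig", signature, { httpOnly: true, secure: true, sameSite: "strict" });
  res.json({ token: headerAndPayload });
}

export function readSplitToken(req: Request): string | null {
  const headerAndPayload = req.headers.authorization?.replace("Bearer ", "");
  const signature = req.cookies?.["jwt_sig"]; // requires cookie-parser
  if (!headerAndPayload || !signature) return null;
  return `${headerAndPayload}.${signature}`; // full JWT, ready for normal verification
}
```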
We have a simple setup: An authorization server based on OAuth 2.0, which currently only supports the client_credentials grant type. Then we have an API, the resource server, which is protected by requiring an access token from our OAuth server.
All use-cases for our API so far have been pure machine-to-machine communication, where it's simply our customers' servers running batch jobs.
Today I had a meeting with a new customer. They have an SPA that seemingly does not have its own backend server. It uses AWS for authentication and seems to return a JWT, but from what I can tell, they make a lot of API calls directly to publicly available services, and the logic is all performed in the SPA.
We ideally would've liked them to simply register a single OAuth client with us, so that when users make a request that needs one of our APIs, the request is first routed to their server, which performs the lookup, and then uses their client's credentials to contact our server. But they would prefer to not have to set up a backend. In this case I'm kind of at a loss for how we sensibly let them integrate with our system. They would prefer to send their user's JWT to our system, but I don't think they understand that we would need their key to verify the user's signature in that case, and we don't want to have to create new APIs simply for this purpose.
Would very much appreciate any advice on this issue - thanks very much in advance for any help.
"We ideally would've liked them to simply register a single OAuth client with us, so that when users make a request that needs one of our APIs, the request is first routed to their server, which performs the lookup, and then uses their client's credentials to contact our server."
Your recommended approach is the right way of doing it with the Client Credentials Grant.
"They would prefer to send their user's JWT to our system, but I don't think they understand that we would need their key to verify the user's signature in that case, and we don't want to have to create new APIs simply for this purpose."
If the previous approach doesn't work, I'm afraid this is the only way of doing it. You will need to implement a proxy to validate the JWT (you should be able to get their public key to verify the signature). You can do this in AWS itself, using AWS API Gateway + Lambda to verify the JWT and forward the request to your existing backend with the Client Credentials Grant, so you don't need to pay anything upfront.
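As a very rough sketch of that proxy idea - every URL, issuer, and environment variable name below is a placeholder, and error handling is kept minimal:

```typescript
// Sketch of a Lambda-style proxy: verify the customer's user JWT, then call the
// existing API with an access token obtained via the Client Credentials grant.
import { createRemoteJWKSet, jwtVerify } from "jose";

const customerJwks = createRemoteJWKSet(new URL(process.env.CUSTOMER_JWKS_URL!));

export async function handler(event: { headers: Record<string, string>; body: string }) {
  const userToken = event.headers["authorization"]?.replace("Bearer ", "");
  if (!userToken) return { statusCode: 401, body: "Missing token" };

  // 1. Verify the user's JWT against the customer's public keys (throws if invalid)
  await jwtVerify(userToken, customerJwks, { issuer: process.env.CUSTOMER_ISSUER });

  // 2. Obtain our own access token via the Client Credentials grant
  const tokenResponse = await fetch(process.env.TOKEN_ENDPOINT!, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: process.env.CLIENT_ID!,
      client_secret: process.env.CLIENT_SECRET!,
    }),
  });
  const { access_token } = await tokenResponse.json();

  // 3. Forward the original request to the protected API with our token
  const upstream = await fetch(process.env.BACKEND_URL!, {
    method: "POST",
    headers: { Authorization: `Bearer ${access_token}`, "Content-Type": "application/json" },
    body: event.body,
  });
  return { statusCode: upstream.status, body: await upstream.text() };
}
```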