I am planning on building a K8s cluster with many microservices (each running in pods with services ensuring communication). I'm trying to understand how to ensure communication between these microservices is secure. By communication, I mean HTTP calls between microservice A and microservice B's API.
Usually, I would implement an OAuth flow, where an auth server would receive some credentials as input and return a JWT. And then the client could use this JWT in any subsequent call.
I expected K8s to have some built-in authentication server that could generate tokens (like a JWT) but I can't seem to find one.
K8s does have authentication for its API server, but that only seems to authenticate calls that perform Kubernetes specific actions such as scaling a pod or getting secrets etc.
However, there is no mention of simply authenticating plain HTTP calls (GET, POST, etc.).
Should I just create my own authentication server and make it accessible via a service or is there a simple and clean way of authenticating API calls automatically in Kubernetes?
I'm not sure how to answer such a broad question, but I will try my best.
There are multiple solutions you could apply, but again, there is nothing built into K8s that you can use for application-level auth.
Either you set up a third-party OAuth or IAM server, etc., or you write your own authentication microservice.
There are two different concerns here that you should not merge:
For service-to-service interconnection (service A to service B), it is best to use a service mesh such as Istio or Linkerd, which provides mutual TLS for security and is also easy to set up.
The connection between services will then be over TLS and secured, but it's on you to set it up and manage it.
If you just run plain traffic inside your backend, you can follow the same method you described:
passing a JWT with plain HTTP between backend services.
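To make that concrete, here is a minimal sketch of what passing and validating a JWT between two services could look like, assuming the PyJWT and requests libraries and a shared secret loaded from a Kubernetes Secret; all names are illustrative:

```python
# Minimal sketch of service A calling service B with a JWT over plain HTTP.
# Assumes the PyJWT and requests libraries and a shared secret that both
# services load from a Kubernetes Secret; all names here are illustrative.
import time
import jwt        # PyJWT
import requests

SHARED_SECRET = "loaded-from-a-k8s-secret"   # hypothetical shared signing key

def call_service_b():
    # Service A signs a short-lived token identifying itself as the caller.
    token = jwt.encode(
        {"iss": "service-a", "exp": int(time.time()) + 60},
        SHARED_SECRET,
        algorithm="HS256",
    )
    return requests.get(
        "http://service-b.default.svc.cluster.local/api/items",  # in-cluster Service DNS
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )

def verify_caller(auth_header: str) -> dict:
    # Service B validates the token on each incoming request.
    token = auth_header.removeprefix("Bearer ").strip()
    return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
```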
Keycloak is also a good option for the OAuth server; I would also recommend checking out oauth2-proxy.
Here are a few articles that might also be helpful:
https://medium.com/codex/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-a980c996c259
My Own article on Keycloak with Kong API gateway on Kubernetes
https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56
GitHub files for POC : https://github.com/harsh4870/POC-Securing-the--application-with-Kong-Keycloak
Keycloak deployment on K8s : https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
I'm currently doing some research on setting up an (Azure) API gateway with OAuth (JWT token) security.
An external partner/app sends a request to an API endpoint published on the gateway, including a valid JWT in the header, which the gateway validates against Azure AD, for example. Once validated, the request is routed to the backend service. No problems here.
My question is, what is the best practice for the external app to obtain that JWT (to use for the API call)?
Obviously, it could send a request to Azure AD with a client ID + secret to obtain a valid JWT. But to do so it has to call my internal Azure AD directly? Is this the way to do it?
Or should I expose a 'get-jwt-token' API on my API gateway and route that request to AD? How should I secure that API? With basic auth?
Or am I missing something, and is there a better, proven practice?
HOSTING BEST PRACTICE
A reverse proxy or API gateway is placed in front of both APIs and the Authorization Server (AS). This ensures that an attacker who somehow gains access to the back end entry point cannot access data sources.
OAUTH REQUESTS TO GET TOKENS
OAuth requests are typically proxied straight through the reverse proxy / API gateway to the AS with no extra logic. All credentials, auditing of login attempts etc remain in the AS.
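As a rough illustration, the external app would typically run a standard OAuth 2.0 client credentials request against the token endpoint and then attach the token to the API call; the tenant, client, scope, and API URL values below are placeholders:

```python
# Minimal sketch of an external app obtaining a token with the OAuth 2.0
# client credentials grant and calling an API through the gateway.
# Tenant, client ID, secret, scope and API URL values are placeholders.
import requests

TOKEN_URL = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token"

def get_access_token() -> str:
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "<client-id>",
            "client_secret": "<client-secret>",
            "scope": "api://<api-app-id>/.default",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api():
    token = get_access_token()
    return requests.get(
        "https://api.example.com/orders",          # endpoint published on the gateway
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
```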
MANAGED SERVICES
If using Azure AD as a cloud managed AS, this is a special case: the system is already hardened for internet clients, so most companies don't add their own proxying - though it is possible to do so.
FURTHER INFO
The first of these covers the infra setup and the second gives you an idea of extensibility options once a reverse proxy / gateway is in place.
IAM Primer
API Gateway Guides
I have two Django backends hosted separately: one is an authentication service (ServiceA) that handles everything auth-related, including generating tokens using the djangorestframework-simplejwt library; the other is a service for my business logic (ServiceB).
There is a route on ServiceA that verifies tokens, token/verify. I currently verify tokens on ServiceB before processing a request by calling token/verify on ServiceA, which I think is not the optimal way to communicate between the two services. What better approach do you suggest?
Have you looked at JWTTokenUserAuthentication?
This can facilitate developing single sign-on functionality between separately hosted Django apps which all share the same token secret key.
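For example, a minimal sketch of what ServiceB's settings could look like with djangorestframework-simplejwt, assuming both services share the same signing key (the key value here is a placeholder):

```python
# Minimal sketch of ServiceB's Django settings, assuming djangorestframework-simplejwt.
# With JWTTokenUserAuthentication, ServiceB validates tokens locally using the shared
# signing key, so it never has to call ServiceA's token/verify endpoint.
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTTokenUserAuthentication",
    ),
}

SIMPLE_JWT = {
    "ALGORITHM": "HS256",
    # Must match the key ServiceA uses to sign tokens (keep it out of source control).
    "SIGNING_KEY": "same-secret-as-service-a",
}
```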
I'm trying to build a microservice architecture. I've learned about some benefits of an API gateway, such as load balancing, invoking multiple microservices and aggregating the results, cache management, etc. So I decided to include one in my system.
My question is whether I should implement authorization at the gateway layer or separately at each microservice's endpoints. For example, authenticating the user at the gateway and passing the user's claims in decrypted form to each service call, to be used in its authorization logic?
It seems like it makes sense and saves processing time to authorize some aggregates before even calling each service. However, authorization logic is really a concern of the individual service.
What is your advice?
At each microservice's endpoints. Implementing authorization in the API gateway will make your system rigid: if at any later stage you need separate authorization logic (say, for internal users, external users, or an open API), this will be very difficult to incorporate.
Authorization should happen at each API level.
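As a rough sketch of what per-service authorization can look like, assuming the gateway has already authenticated the caller and forwards the decoded claims with each request (the claim names and helper below are illustrative):

```python
# Sketch of authorization enforced inside each service, assuming the gateway has
# already authenticated the caller and forwards the user's claims with the request.
# The claims-extraction helper and role names are illustrative.
from functools import wraps

class Forbidden(Exception):
    pass

def require_role(role: str):
    def decorator(handler):
        @wraps(handler)
        def wrapper(request, *args, **kwargs):
            claims = request.get("claims", {})          # claims passed on by the gateway
            if role not in claims.get("roles", []):
                raise Forbidden(f"missing role: {role}")
            return handler(request, *args, **kwargs)
        return wrapper
    return decorator

@require_role("orders:write")
def create_order(request):
    # Business logic stays in the service; only this service knows which
    # role is required to create an order.
    return {"status": "created"}

# Usage: create_order({"claims": {"roles": ["orders:write"]}})
```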
You can use the API Gateway pattern / an API gateway. You can then offload the authentication/authorization responsibility from the microservices, which makes things easier for the user or developer calling the services. An API gateway may even support separate external/internal gateways and role-based permissions, e.g. WSO2 APIM.
You get the advantages below when you have an API/microservice gateway:
An API Gateway is the single point of entry for any microservice call.
It can work as a proxy service to route a request to the concerned microservice.
It can aggregate the results to send back to the consumer.
This solution can create a fine-grained API for each specific type of client.
It can also translate protocols between the request and the response.
I'm reading a tutorial provided by AWS explaining how to break up a monolithic NodeJS application into a microservice architectured one.
Here is a link to it.
One important piece is missing from the simple application example they've provided and that is user authentication.
My question is, where does authentication fit into all this?
How do you allow users to authenticate to all these services separately?
I am specifically looking for an answer that does not involve AWS Cognito. I would like to have my own service perform user authentication/management.
First, there is more than one approach for this common problem.
Here is one popular take:
Divide your world to authentication (you are who you say you are) and authorization (you are permitted to do this action).
As a policy, every service decides on authorization by itself. Leave the authentication to a single point in the system - the authentication gateway - usually combined inside the API gateway.
This gateway forwards requests from clients to the services, after authenticating, with a trusted payload stating that the requester is indeed who they say they are. It's then up to the service to decide whether the request is allowed.
An implementation can be done using several methods. A JWT is one such method.
The authenticator creates a JWT after receiving correct credentials, and the client uses this JWT in every request to each service.
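As a rough sketch (not a complete implementation), issuing such a JWT after a successful credential check could look like the following, using PyJWT with a placeholder secret and credential check:

```python
# Minimal sketch of the authentication gateway issuing a JWT once credentials
# check out; the credential check and signing secret are placeholders.
import time
import jwt   # PyJWT

SECRET = "auth-service-signing-key"   # hypothetical

def credentials_are_valid(username: str, password: str) -> bool:
    # Replace with a real user store lookup.
    return bool(username and password)

def login(username: str, password: str) -> str:
    if not credentials_are_valid(username, password):
        raise PermissionError("invalid credentials")
    claims = {
        "sub": username,
        "iat": int(time.time()),
        "exp": int(time.time()) + 3600,   # 1 hour lifetime
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")
```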
If you want to write your own auth, it can be a service like the others. Part of it would be a small client middleware that you run at all other service endpoints which require protection (golang example for middleware).
An alternative to a middleware is to run a dedicated API Gateway that queries the auth service before relaying the requests to the actual services. AWS also has a solution for those and you can write custom authentication handlers that will call your own auth service.
It is important to centralize authentication, even in a microservices approach for a single product. So I'm assuming you will be looking at having an Identity Service (Authentication Service) that handles authentication and issues a token. The other microservices will act as service providers that validate the issued token.
Note: In standards like OpenID Connect, the id_token issued is a JWT, which is stateless and self-contained with signed information about the user. So individual microservices don't have to communicate with the authentication service for each token validation. However, you can look at implementing or using refresh tokens to renew tokens without requiring users to log in again.
Depending on the technology you choose, how you issue and validate tokens will change.
e.g:
If you use the ExpressJS framework for the backend, you can verify tokens and protect routes in Node middleware using Passport.
If you use AWS API Gateway in front of your microservice endpoints, you can use a custom authorizer Lambda to verify the tokens (see the sketch below).
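For illustration, here is a hedged sketch of a TOKEN-type Lambda authorizer that validates a JWT before API Gateway forwards the request; the secret and claims here are placeholders, and in practice you would validate against your identity provider's keys:

```python
# Rough sketch of an AWS API Gateway Lambda (TOKEN) authorizer that verifies a JWT
# before requests reach the microservice. Assumes the PyJWT library is packaged with
# the function; the secret and claims are placeholders.
import jwt  # PyJWT

SECRET = "auth-service-signing-key"   # hypothetical

def handler(event, context):
    token = event["authorizationToken"].removeprefix("Bearer ").strip()
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        effect = "Allow"
        principal = claims.get("sub", "user")
    except jwt.InvalidTokenError:
        effect = "Deny"
        principal = "anonymous"

    # API Gateway expects an IAM policy describing whether the call may proceed.
    return {
        "principalId": principal,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```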
However, it is recommended to use a standard protocol like OpenID Connect so that you stay compatible with identity federation and SSO behaviours in the future.
Since you have mentioned that you are hoping to build your own solution, it will also come with some challenges to address:
Password Policies
Supporting standards (OpenID Connect)
Security (Encryption at rest and transit especially for PIDs)
SSO, MFA & Federation support etc.
IDS/IPS
In addition, there are non-functional requirements like scalability, reliability, and performance. Although these requirements might not arise in the beginning, I have seen many come up down the line as products mature, especially for compliance.
That's why most people encourage using an identity server or a service like Cognito, Auth0, etc. to get a better ROI.
I'm trying to design a green-field project that will have several services (serving data) and web-applications (serving HTML). I've read about microservices and they look like good fit.
The problem I still have is how to implement SSO. I want the user to authenticate once and have access to all the different services and applications.
I can think of several approaches:
Add Identity service and application. Any service that has protected resources will talk to the Identity service to make sure the credentials it has are valid. If they are not it will redirect the user for authentication.
Use a web standard such as OpenID and have each service handle its own identities. This means the user will have to authorize each service/application individually, but after that it will be SSO.
I'll be happy to hear other ideas. If a specific PaaS (such as Heroku) has a proprietary solution that would also be acceptable.
While implementing a microservice architecture at my previous job we decided the best approach was in alignment with #1, Add identity service and authorize service access through it. In our case this was done with tokens. If a request came with an authorization token then we could verify that token with the identity service if it was the first call in the user's session with the service. Once the token had been validated then it was saved in the session so subsequent calls in the user's session did not have to make the additional call. You can also create a scheduled job if tokens need to be refreshed in that session.
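As a rough illustration of that caching idea (the session store and identity-service endpoint below are placeholders for whatever session mechanism and identity service you actually use):

```python
# Rough sketch of the "validate once per session" idea described above, assuming a
# per-user session dict and an identity-service verification endpoint; both are
# placeholders for your actual session store and identity service.
import requests

IDENTITY_SERVICE_URL = "https://identity.example.com/oauth/verify"   # hypothetical

def is_authorized(session: dict, token: str) -> bool:
    # Only call the identity service on the first request of the session
    # (or when the client presents a different token).
    if session.get("validated_token") == token:
        return True
    resp = requests.get(
        IDENTITY_SERVICE_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    if resp.status_code == 200:
        session["validated_token"] = token   # cache for subsequent calls
        return True
    return False
```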
In this situation we were authenticating with an OAuth 2.0 endpoint and the token was added to the HTTP header for calls to our domain. All of the services were routed from that domain so we could get the token from the HTTP header. Since we were all part of the same application ecosystem, the initial OAuth 2.0 authorization would list the application services that the user would be giving permission to for their account.
An addition to this approach was that the identity service would provide the proxy client library which would be added to the HTTP request filter chain and handle the authorization process to the service. The service would be configured to consume the proxy client library from the identity service. Since we were using Dropwizard this proxy would become a Dropwizard Module bootstrapping the filter into the running service process. This allowed for updates to the identity service that also had a complimentary client side update to be easily consumed by dependent services as long as the interface did not change significantly.
Our deployment architecture was spread across AWS Virtual Private Cloud (VPC) and our own company's data centers. The OAuth 2.0 authentication service was located in the company's data center while all of our application services were deployed to AWS VPC.
I hope the approach we took is helpful to your decision. Let me know if you have any other questions.
Chris Sterling explained standard authentication practice above and it makes absolute sense. I just want to put another thought here for some practical reasons.
We implemented an authentication service and multiple other microservices that relied on the auth server to authorize resources. At some point we ran into performance issues due to too many round trips to the authentication server, and we also had scalability issues for the auth server as the number of microservices increased. We changed the architecture a little to avoid the extra round trips.
The auth server is contacted only once with credentials, and it generates the token using a private key. The corresponding public key is installed in each client (microservice server), which can then validate the token without contacting the auth server. The token contains the time it was generated, and a small client utility installed in each microservice checks its validity as well. Even though it was not a standard implementation, we had pretty good success with this model, especially when all the microservices are internally hosted.
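For anyone wanting to try a similar setup, here is a minimal sketch using PyJWT with RS256: only the auth server holds the private key, and each microservice validates tokens locally with the public key (key loading is left as a placeholder):

```python
# Sketch of the private/public key scheme described above, using PyJWT with RS256.
# Only the auth server holds the private key; every microservice ships the public
# key and can validate tokens locally. Key material here is a placeholder.
import time
import jwt  # PyJWT (RS256 requires the 'cryptography' package)

# --- on the auth server ---
def issue_token(private_key_pem: str, user_id: str) -> str:
    claims = {"sub": user_id, "iat": int(time.time()), "exp": int(time.time()) + 900}
    return jwt.encode(claims, private_key_pem, algorithm="RS256")

# --- inside each microservice ---
def validate_token(public_key_pem: str, token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidSignatureError on bad tokens,
    # so no round trip to the auth server is needed.
    return jwt.decode(token, public_key_pem, algorithms=["RS256"])
```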