Can I use mTLS for authentication instead of OpenID Connect?

I would like to use mTLS to protect microservices instead of OpenID Connect.
As I understand how mTLS works, it encrypts the communication between two services via the TLS handshake.
However, can I use mTLS as the authentication mechanism instead of protecting services through OpenID Connect?

You can use mTLS as an infrastructure security solution between microservices, but it has limitations and cannot manage user-level security. So it depends on who the clients of your microservices are and how you need to authorize API requests.
USER LEVEL SECURITY
This is where OAuth and OIDC are used, and often there is a web or mobile app as a client:
The client runs a code flow and gets tokens
The client sends an access token to microservices
Microservices receive a JWT access token that identifies the user
Microservices can forward this to other microservices when required
Because of the user identity in the access token, the API can perform user-level authorization, e.g. to ensure that a user can only view their own purchase history.
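For illustration, a minimal Go sketch of that last check, assuming the github.com/golang-jwt/jwt/v5 library and an HMAC-signed access token (a real deployment would more likely verify an RS256 signature against the authorization server's published keys); the key, route, and userId parameter are placeholders:

```go
package main

import (
	"net/http"
	"strings"

	"github.com/golang-jwt/jwt/v5"
)

// Placeholder verification key; in practice it comes from the authorization server.
var verificationKey = []byte("replace-with-real-key")

func purchaseHistoryHandler(w http.ResponseWriter, r *http.Request) {
	// Extract the bearer token from the Authorization header.
	raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")

	// Verify the signature and standard claims (exp, nbf, ...).
	token, err := jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
		return verificationKey, nil
	}, jwt.WithValidMethods([]string{"HS256"}))
	if err != nil || !token.Valid {
		http.Error(w, "invalid access token", http.StatusUnauthorized)
		return
	}

	// User-level authorization: a caller may only read their own history.
	claims := token.Claims.(jwt.MapClaims)
	requestedUser := r.URL.Query().Get("userId")
	if sub, _ := claims["sub"].(string); sub == "" || sub != requestedUser {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}

	w.Write([]byte("purchase history for " + requestedUser))
}

func main() {
	http.HandleFunc("/purchases", purchaseHistoryHandler)
	http.ListenAndServe(":8080", nil)
}
```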
COMBINING mTLS AND OAUTH
It is common to combine these two building blocks, and security standards / regulations sometimes require this. See this financial-grade security overview for some example scenarios.

Related

Authentication in service to service requests in k8s

Say I have several services in kubernetes. And I have one entry point to the cluster, it's a public facing service that is meant to validate the JWT token (from AWS cognito).
The entry point routes the request to an internal service, and that in turn usually makes more requests to other internal services.
My question is: is it enough to validate the JWT only once and make other communications without any form of authentication, just passing the user id (or any other data needed)? Or do I need some form of authentication when making HTTP requests between services? If so, which? Should I validate the JWT again? Should I have server certificates or something like that?
Posted a community wiki answer for better visibility. Feel free to expand it.
As David Szalai’s comment mentioned, it depends on your security and project requirements:
If you go with a zero-trust model inside k8s, you can use mTLS with a service mesh between services. Passing JWTs is also good if you need to propagate user-auth info to different services.
In the current project we'll use mTLS with a service mesh, and send JWTs along with requests where the receiver needs information about the user, and parse/validate them there again.
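As a small illustration of the "send JWTs along with requests" part, here is a Go sketch that simply propagates the caller's bearer token to a downstream service so it can re-validate it; the in-cluster URL is hypothetical, and transport encryption is assumed to be handled by the mesh sidecars:

```go
package client

import "net/http"

// callDownstream forwards the caller's bearer token to another internal
// service, which re-validates the JWT and reads the user claims itself.
func callDownstream(incoming *http.Request) (*http.Response, error) {
	// Hypothetical in-cluster service address.
	req, err := http.NewRequest(http.MethodGet, "http://orders.internal.svc/api/orders", nil)
	if err != nil {
		return nil, err
	}

	// Propagate the user's identity; mTLS between pods is handled by the
	// service mesh, not by this application code.
	req.Header.Set("Authorization", incoming.Header.Get("Authorization"))

	return http.DefaultClient.Do(req)
}
```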
If your apps do not have built-in authentication/authorization mechanisms, you may try Istio. Check these articles:
Istio documentation - Security
Istio & JWT: Step by Step Guide for Micro-Services Authentication.
Also check these articles about authentication in Kubernetes:
Kubernetes Docs - Authenticating
Authentication between microservices using Kubernetes identities
EDIT:
Why?
In security, we say:
It’s a common principle in security that the strength of a given system is only as strong as the strength of its weakest link.
This article - The service mesh era: Securing your environment with Istio - mentions some possible attacks on an insecure system, like man-in-the-middle attacks and replay attacks:
An approach to mitigate this risk is to ensure that peers are only authenticated using non-portable identities. Mutual TLS authentication (mTLS) ensures that peer identities are bound to the TLS channel and cannot be replayed. It also ensures that all communication is encrypted in transit, and mitigates the risk of man-in-the middle attacks and replay attacks by the destination service. While mutual TLS helps strongly identify the network peer, end user identities (or identity of origin) can still be propagated using bearer tokens like JWT.
Given the proliferation of threats within the production network and the increased points of privileged access, it is increasingly necessary to adopt a zero-trust network security approach for microservices architectures. This approach requires that all accesses are strongly authenticated, authorized based on context, logged, and monitored … and the controls must be optimized for dynamic production environments.
Without additional security layers (like mTLS and a service mesh in the cluster), we are assuming that communication between microservices happens on a fully trusted network, which gives an attacker the possibility to exploit business assets via the network:
Many microservices deployments today, mostly worry about the edge security by exposing the microservices via APIs and protecting those with an API gateway at the edge. Once a request passes the API gateway, the communications among microservices assume a trusted network, and expose endless possibilities to an attacker gaining access to the network to exploit all valuable business assets exposed by the microservices.

Is it possible to use/forward certificate information from key credentials to a bearer token (Azure AD)

I have a scenario where I have to let external systems have access to one of our internal API's.
The security team want the externals to use client certificates as the preferred authentication method, so that basically leaves us two options:
Use direct client certificate authentication. It will give us the most control, but that will leave all the certificate handling and validation in our hands, and I'd rather not do that if I have a choice. Besides - direct client certification auth does not play well with our existing authentication methods on that API. If you turn on client certificates on the App Service, you will require a certificate on every request (and most requests on that API use cookies)
Add key credentials to the Azure AD app. We'd rather not give access directly to the app the API is registered on, so we register an OUR-APP-EXTERNAL app and set up a trust relationship between the two. So the client authenticates with a certificate to the "external app", gets a bearer token and uses that on our API. I'd prefer to use this solution, and it seems to play nicely with everything else.
So far so good - but I'm worrying about scaling this. We have to separate the external clients somehow (each client will in effect be different systems in different companies). One strategy is to create one AD-app per external system (OUR-APP-EXTERNAL-SYSTEM-A), but it seems cumbersome and somewhat spammy. One quick and easy solution would be to add some metadata from the client's authentication certificate (where we could just set what system this cert is issued to during creation), and add that to the bearer token.
Is this possible? Or are there other ways to handle "multi tenant" external clients?
Thanks
Consider an option of using Azure API Management for your scenario. API Management provides the capability to secure access to APIs (i.e., client to API Management) using client certificates. Currently, you can check the thumbprint of a client certificate against a desired value. You can also check the thumbprint against existing certificates uploaded to API Management.
Follow this guide - How to secure APIs using client certificate authentication in API Management
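For illustration only, this is roughly how a backend that terminates TLS itself could perform the same thumbprint check in Go (API Management does this declaratively in policy); the thumbprint values and header name are made up:

```go
package gateway

import (
	"crypto/sha1"
	"encoding/hex"
	"net/http"
	"strings"
)

// allowedThumbprints maps client-certificate thumbprints to the external
// system each certificate was issued to (placeholder values).
var allowedThumbprints = map[string]string{
	"0123456789abcdef0123456789abcdef01234567": "SYSTEM-A",
}

func requireKnownClientCert(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
			http.Error(w, "client certificate required", http.StatusUnauthorized)
			return
		}
		// The thumbprint is the SHA-1 hash of the DER-encoded certificate.
		sum := sha1.Sum(r.TLS.PeerCertificates[0].Raw)
		thumbprint := strings.ToLower(hex.EncodeToString(sum[:]))

		system, ok := allowedThumbprints[thumbprint]
		if !ok {
			http.Error(w, "unknown client certificate", http.StatusForbidden)
			return
		}
		// The caller's system name can now be attached to the request, or
		// embedded in a token, as discussed in the question.
		r.Header.Set("X-Client-System", system)
		next.ServeHTTP(w, r)
	})
}
```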
Also, you can create multiple Azure AD applications for different clients and grant each of them the required roles on the Azure AD application registered to secure the internal API.
Follow this guide for this approach - Protect an API by using OAuth 2.0 with Azure Active Directory and API Management

How to handle user authentication with microservices in AWS?

I'm reading a tutorial provided by AWS explaining how to break up a monolithic NodeJS application into a microservice architectured one.
Here is a link to it.
One important piece is missing from the simple application example they've provided and that is user authentication.
My question is, where does authentication fit into all this?
How do you allow users to authenticate to all these services separately?
I am specifically looking for an answer that does not involve AWS Cognito. I would like to have my own service perform user authentication/management.
First, there is more than one approach for this common problem.
Here is one popular take:
Divide your world to authentication (you are who you say you are) and authorization (you are permitted to do this action).
As a policy, every service decides on authorization by itself. Leave the authentication to a single point in the system - the authentication gateway - usually combined inside the API gateway.
This gateway forwards requests from clients to the services, after authenticating, with a trusted payload stating that the requester is indeed who they say they are. It's then up to the service to decide whether the request is allowed.
An implementation can be done using several methods. A JWT is one such method.
The authenticator creates a JWT after receiving correct credentials, and the client uses this JWT in every request to each service.
If you want to write your own auth, it can be a service like the others. Part of it would be a small client middleware that you run at all other service endpoints which require protection (golang example for middleware).
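As a rough sketch of such a middleware in Go (the auth service and its /introspect endpoint are hypothetical, not a specific product API):

```go
package main

import "net/http"

// authServiceURL is a hypothetical token-introspection endpoint exposed by
// your own auth service.
const authServiceURL = "http://auth-service.internal/introspect"

// requireAuth wraps a protected endpoint and rejects requests whose token
// the auth service does not accept.
func requireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		req, err := http.NewRequest(http.MethodPost, authServiceURL, nil)
		if err != nil {
			http.Error(w, "internal error", http.StatusInternalServerError)
			return
		}
		// Pass the caller's token through to the auth service.
		req.Header.Set("Authorization", r.Header.Get("Authorization"))

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, "auth service unreachable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/orders", requireAuth(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("protected resource"))
	})))
	http.ListenAndServe(":8080", mux)
}
```

With a JWT the middleware can instead validate the signature locally and skip the network call entirely.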
An alternative to a middleware is to run a dedicated API Gateway that queries the auth service before relaying the requests to the actual services. AWS also has a solution for those and you can write custom authentication handlers that will call your own auth service.
It is important to centralize the authentication, even for a microservices approach for a single product. So I'm assuming you will be looking at having an Identity Service (Authentication Service) which will handle the authentication and issue a token. The other microservices will act as the service providers, which validate the token issued.
Note: In standards like OpenID Connect, the id_token issued is in the format of a JWT, which is stateless and self-contained with signed information about the user. So individual microservices don't have to communicate with the authentication service for each token validation. However, you can look at implementing or using refresh tokens to renew tokens without requiring users to log in again.
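For reference, renewing a token with a refresh token is just a form POST to the token endpoint (the refresh_token grant from RFC 6749, section 6); a minimal Go sketch with placeholder endpoint and client credentials:

```go
package auth

import (
	"net/http"
	"net/url"
	"strings"
)

// refreshAccessToken exchanges a refresh token for a new access token using
// the standard OAuth2 refresh_token grant. The endpoint and credentials are
// placeholders for whatever identity service you run.
func refreshAccessToken(refreshToken string) (*http.Response, error) {
	form := url.Values{}
	form.Set("grant_type", "refresh_token")
	form.Set("refresh_token", refreshToken)
	form.Set("client_id", "my-client-id")
	form.Set("client_secret", "my-client-secret")

	return http.Post(
		"https://auth.example.com/oauth2/token",
		"application/x-www-form-urlencoded",
		strings.NewReader(form.Encode()),
	)
}
```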
Depending on the technology you choose, the way you issue and validate tokens will differ.
e.g:
ExpressJS framework for backend - You can verify the tokens and routes in a Node Middleware Handler using Passport.
If you use API Gateway in front of your Microservice endpoints you can use a Custom Authorizer Lambda to verify the tokens.
However, it is recommended to use a standard protocol like OpenID Connect so that you can be compatible with identity federation and SSO behaviors in the future.
Since you have mentioned that you are hoping to have your own solution, it will also come with some challenges to address:
Password Policies
Supporting standards (OpenID Connect)
Security (encryption at rest and in transit, especially for PIDs)
SSO, MFA & Federation support etc.
IDS/IPS
These come in addition to non-functional requirements like scalability, reliability, and performance. Although such requirements might not arise in the beginning, I have seen many come down the line when products mature, especially for compliance.
That's why most people encourage using an identity server or a service like Cognito, Auth0, etc. to get a better ROI.

Single Sign-On in Microservice Architecture

I'm trying to design a green-field project that will have several services (serving data) and web-applications (serving HTML). I've read about microservices and they look like good fit.
The problem I still have is how to implement SSO. I want the user to authenticate once and have access to all the different services and applications.
I can think of several approaches:
Add Identity service and application. Any service that has protected resources will talk to the Identity service to make sure the credentials it has are valid. If they are not it will redirect the user for authentication.
Use a web standard such as OpenID and have each service handle its own identities. This means the user will have to authorize each service/application individually, but after that it will be SSO.
I'll be happy to hear other ideas. If a specific PaaS (such as Heroku) has a proprietary solution that would also be acceptable.
While implementing a microservice architecture at my previous job we decided the best approach was in alignment with #1, Add identity service and authorize service access through it. In our case this was done with tokens. If a request came with an authorization token then we could verify that token with the identity service if it was the first call in the user's session with the service. Once the token had been validated then it was saved in the session so subsequent calls in the user's session did not have to make the additional call. You can also create a scheduled job if tokens need to be refreshed in that session.
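A rough sketch of that caching idea in Go, kept generic since the original setup stored the result in the user's session; the structure and TTL are assumptions:

```go
package session

import (
	"sync"
	"time"
)

// tokenCache remembers which tokens the identity service has already
// validated, so subsequent calls in the same session skip the round trip.
type tokenCache struct {
	mu      sync.Mutex
	entries map[string]time.Time // token -> expiry of the cached result
	ttl     time.Duration
}

func newTokenCache(ttl time.Duration) *tokenCache {
	return &tokenCache{entries: make(map[string]time.Time), ttl: ttl}
}

// isValid reports whether the token was validated recently; otherwise the
// caller verifies it with the identity service and then calls markValid.
func (c *tokenCache) isValid(token string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	expiry, ok := c.entries[token]
	return ok && time.Now().Before(expiry)
}

func (c *tokenCache) markValid(token string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[token] = time.Now().Add(c.ttl)
}
```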
In this situation we were authenticating with an OAuth 2.0 endpoint and the token was added to the HTTP header for calls to our domain. All of the services were routed from that domain so we could get the token from the HTTP header. Since we were all part of the same application ecosystem, the initial OAuth 2.0 authorization would list the application services that the user would be giving permission to for their account.
An addition to this approach was that the identity service would provide a proxy client library which would be added to the HTTP request filter chain and handle the authorization process for the service. The service would be configured to consume the proxy client library from the identity service. Since we were using Dropwizard, this proxy would become a Dropwizard Module bootstrapping the filter into the running service process. This allowed updates to the identity service that had a complementary client-side update to be easily consumed by dependent services, as long as the interface did not change significantly.
Our deployment architecture was spread across AWS Virtual Private Cloud (VPC) and our own company's data centers. The OAuth 2.0 authentication service was located in the company's data center while all of our application services were deployed to AWS VPC.
I hope the approach we took is helpful to your decision. Let me know if you have any other questions.
Chris Sterling explained standard authentication practice above and it makes absolute sense. I just want to put another thought here for some practical reasons.
We implemented authentication services and multiple other microservices relying on an auth server in order to authorize resources. At some point we ran into performance issues due to too many round trips to the authentication server, and we also had scalability issues for the auth server as the number of microservices increased. We changed the architecture a little bit to avoid too many round trips.
The auth server is contacted only once with credentials, and it generates the token based on a private key. The corresponding public key is installed in each client (microservice server), which can then validate the token without contacting the auth server. The token contains the time it was generated, and a client utility installed in the microservice validates that as well. Even though this is not a standard implementation, we have had pretty good success with this model, especially when all the microservices are internally hosted.
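A minimal sketch of this model, assuming the github.com/golang-jwt/jwt/v5 library; the key pair is generated inline only for the example, whereas in practice the private key stays on the auth server and only the public key is distributed to the microservices:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	// Auth server's key pair (error handling elided for brevity).
	privateKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	// Auth server: issue a token once, after checking the credentials.
	claims := jwt.MapClaims{
		"sub": "user-123",
		"iat": time.Now().Unix(),
		"exp": time.Now().Add(15 * time.Minute).Unix(),
	}
	signed, _ := jwt.NewWithClaims(jwt.SigningMethodRS256, claims).SignedString(privateKey)

	// Any microservice: validate locally with the public key, without
	// calling the auth server again.
	token, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
		return &privateKey.PublicKey, nil
	}, jwt.WithValidMethods([]string{"RS256"}))

	fmt.Println("token valid:", err == nil && token.Valid)
}
```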

server-to-server REST API security

We're building a REST API that will only be accessed from a known set of servers. My question is: if there is no access directly from any browser-based clients, what security mechanisms are required?
Currently Have:
Obviously over HTTPS
Have HTTP auth enabled, API consumers have a Key & password
Is it necessary to:
Create some changing key, e.g. md5(timestamp + token) that is formed for the request and validated at the endpoint?
Use OAuth (2-legged authentication)?
Doesn't matter - from browser or not.
Is it necessary to:
Create some changing key, e.g. md5(timestamp + token) that is formed for the request and validated at the endpoint?
Use OAuth (2-legged authorization)?
Use OAuth, it solves both these questions. And OAuth usage is good because:
You aren't reinventing wheel
There are already a lot of libraries and approaches depending on technology stack
You can also use a JWT token to pass some security context with custom claims from service to service.
Also, as a reference, you can look at how different providers solve the problem. For example, Azure Active Directory has an on-behalf-of flow for this purpose:
https://learn.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-on-behalf-of-flow
Use of OAuth2/OpenID Connect is not mandatory between your services; there are other protocols, alternatives, and even custom solutions. It all depends on the relationship between the services and whether they are both in a full-trust environment.
You can use anything you like, but the main idea is not to share sensitive information between services, such as service account credentials or user credentials.
If REST API security is the main requirement, OAuth2/OpenID Connect is probably the best choice. If you just need secure (in the sense of authenticated) calls in a full-trust environment in the simplest way, use Kerberos; if you need an encrypted custom tunnel between the services for data-in-transit encryption, there are other options like a VPN. It does not make sense to implement something custom, because if you have Service A and Service B and would like to make sure calls between them are authenticated, then to avoid coupling and sharing sensitive information you will always need some central service C as an identity provider. So if you think of it from this point of view, OAuth2/OIDC is not overkill.
Whether the consumers of your API are web browsers or servers you don't control doesn't change the security picture.
If you are using HTTPs and clients already have a key/password then it isn't clear what kind of attack any other mechanism would protect against.
Any compromise on the client side will expose everything anyway.
Firstly, it matters whether a user agent (such as a browser) is involved in the call.
If there are only S2S calls, then one-way SSL over HTTPS (for network encryption) and some kind of signature mechanism (SHA-256) should be enough for your security.
However, if you return sensitive information in your API response, it's always better to accept two-way SSL (mutual TLS) HTTPS connections, in order to validate the client.
OAuth2 doesn't add any value in a server to server call (that takes place without user consent and without any user agent present).
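For completeness, the "signature mechanism" mentioned above is typically an HMAC over the request plus a timestamp, using a per-consumer shared secret; a minimal Go sketch (the header names and canonical string are assumptions):

```go
package client

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// signRequest adds a timestamp and an HMAC-SHA256 signature over the
// method, path, and timestamp, computed with a secret shared with the server.
func signRequest(req *http.Request, secret []byte) {
	ts := strconv.FormatInt(time.Now().Unix(), 10)

	mac := hmac.New(sha256.New, secret)
	fmt.Fprintf(mac, "%s\n%s\n%s", req.Method, req.URL.Path, ts)

	req.Header.Set("X-Timestamp", ts)
	req.Header.Set("X-Signature", hex.EncodeToString(mac.Sum(nil)))
}
```

The server recomputes the HMAC with the same secret, compares it with a constant-time comparison (hmac.Equal), and rejects requests whose timestamp is too old, which limits replay.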
For authentication between servers:
Authentication
Known servers:
use TLS with X.509 client certificates (TLS with mutual authentication).
issue the client certificates from a common CA (certificate authority). That way, the servers need only have the CA certificate or public key in their truststore, and new client certificates for additional clients/servers can be issued without having to update the truststores (a minimal sketch follows at the end of this answer).
Open set of servers:
use API keys, issued by a central authority. The servers need to validate these keys on each request (and may cache the hashes of the keys along with the validation result for some short time).
Identity propagation
if the requests are executed in the context of a non-technical user, use JWT (or SAML) for identity propagation of the user principal and claims (authorize at security proxy/WAF/IAM, and issue JWT signed by authentication server).
otherwise the user principal refers to the technical user and can be extracted from the client certificate (X.509 DName) or be returned with a successful authentication response (API key case).
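To make the "known servers" case concrete, here is a minimal Go sketch of a server that only accepts clients presenting certificates issued by a common internal CA; all file paths are placeholders:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust only client certificates issued by the internal CA.
	caPEM, err := os.ReadFile("internal-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs:  caPool,
			ClientAuth: tls.RequireAndVerifyClientCert, // mutual TLS
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// The technical user can be taken from the client certificate's DName.
			name := r.TLS.PeerCertificates[0].Subject.CommonName
			w.Write([]byte("hello, " + name))
		}),
	}

	// Server certificate and key for the endpoint itself.
	log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```

Adding a new client or server then only requires issuing it a certificate from the same CA; the truststore above does not change.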
