Kubernetes and OIDC, how can it be secure without claim validation?

I'm trying to configure OIDC login for my Kubernetes cluster, and I'm a bit troubled by some of its security aspects.
From what I've gathered, Kubernetes doesn't check the scopes in the ID token, which would mean any ID token issued by my identity provider for my user could grant access to my cluster.
That would mean the backend of another service (not even managed by me) that uses the same identity provider could potentially access my cluster on my behalf. Yet this doesn't seem to be something people worry about.
Why? What is wrong with my reasoning?
Please enlighten me.
Thanks in advance.

Your application will only accept/trust tokens (ID/access) from the specific token provider you trust. The tokens are signed with the token provider's private key, and your application will only accept tokens signed by that key; applications are typically configured to trust tokens from only one issuer. In Kubernetes' case the API server is additionally configured with a client ID (--oidc-client-id) and only accepts ID tokens whose aud claim matches it, so tokens issued to other clients of the same provider are rejected.
Tokens from other sources should be rejected by the applications if they are properly configured (there have been examples in the past where this went wrong).
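To make that concrete, here is a minimal sketch in Node.js/TypeScript using the jose library of the checks a relying party performs before accepting an ID token. The issuer URL, JWKS path and client ID below are placeholder assumptions, not values from the question:

import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder values -- substitute your own provider and client ID.
const ISSUER = "https://accounts.example.com";
const CLIENT_ID = "my-cluster-client";

// Keys are fetched from the issuer's published JWKS endpoint and cached.
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

export async function validateIdToken(idToken: string) {
  // jwtVerify checks the signature against the issuer's keys and rejects
  // tokens whose iss/aud claims don't match what we expect.
  const { payload } = await jwtVerify(idToken, jwks, {
    issuer: ISSUER,
    audience: CLIENT_ID,
  });
  return payload; // e.g. payload.sub, payload.email, payload.groups
}

Any token signed by a different key, or carrying a different iss or aud, fails jwtVerify and is rejected.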

Related

Secure API with OpenID Connect - RP trust of OP

Getting to grips with OpenID Connect with a third-party IdP (OP) and securing APIs. I'm comfortable with the client and user-agent components and the OAuth 2.0 flows and scopes used to get an access token and an ID token issued to my client from an IdP.
What I'm struggling with is the Resource Provider end and how the secured API trusts the access token that the client passes. I keep equating the trust element to SAML and the initial exchange of static configuration data between the IdP and the SP. That appears to be missing in OpenID Connect, so I'm missing the trust element. I'm reading about dynamic discovery, but again I'm missing a trick as to the trust between the RP and the IdP. What's to stop me setting up a rogue IdP? Why should the API provider trust tokens coming from my IdP?
My final question is on a local representation of the unique identifier in the RP. Does the account ID need to exist ahead of presentation of the access token? I expect it does (again using the SAML analogy, which requires a local account representation ahead of authentication success), so account management on the relying party is also a requirement for end-to-end authentication to succeed.
It boils down to two questions. How does the API trust the access token presented? Does the API provider need to have accounts provisioned ahead of successful authentication of those requests for resources?
Thanks in advance.
Dynamic Client Registration is used exactly in cases where there is no pre-established trust between parties and one wants to allow any OP that has a valid certificate to play.
When you want to restrict the set of OPs, you'd statically register with those in advance, thus doing the same type of bootstrapping to create pre-established trust as in SAML.
A unique identifier can be created from the (sub, iss) tuple as found in the id_token that the RP receives.
For dynamic registration of the RP with the OP there is no trust metric described. There are two HTTPS trust levels that can be enforced:
HTTPS is required with a certificate that chains up to a trusted root certificate.
HTTPS with an Extended Validation (EV) certificate. This requires identity proofing as defined by the CA/Browser Forum (CAB).
I have proposed the use of a trust ecosystem in the IDESG, but that is some ways away.
It would be interesting to see if Kantara has any kind of solution in mind.
From an OpenID Connect perspective you are correct. That does not imply that any relying party code that you construct needs to be insecure. I mentioned two ways that you can verify the OP.
A similar set of security rules could be enforced by the OP, but as others have said, most OPs do not enable dynamic registration for this reason.
The id_token is a JWS-signed JWT. It is signed by an OpenID Connect OP (IdP), generally using its private key. You MUST verify the signature so that you're sure you know the emitter of the JWT/id_token. After that it's up to you to decide whether you can trust this emitter for the action the client wants to perform.
The id_token also contains a claim (at_hash) which is a hash of the access_token, so that you can be sure the access token was issued by the same OP alongside this id_token.
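As a rough illustration of those two checks plus the (sub, iss) identifier mentioned above, here is a sketch in Node.js/TypeScript with the jose library. The issuer URL, JWKS path and client ID are placeholders, and the at_hash comparison assumes an RS256/ES256-signed id_token, where at_hash is the base64url-encoded left half of SHA-256(access_token):

import { createHash } from "node:crypto";
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://op.example.com";   // placeholder OP
const CLIENT_ID = "my-rp";                 // placeholder RP client ID
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

export async function verifyIdToken(idToken: string, accessToken: string) {
  // 1. Verify the JWS signature and the iss/aud claims.
  const { payload } = await jwtVerify(idToken, jwks, {
    issuer: ISSUER,
    audience: CLIENT_ID,
  });

  // 2. Check at_hash: left half of SHA-256(access_token), base64url-encoded.
  const digest = createHash("sha256").update(accessToken).digest();
  const expected = digest.subarray(0, digest.length / 2).toString("base64url");
  if (payload.at_hash !== expected) {
    throw new Error("access_token was not issued alongside this id_token");
  }

  // 3. A locally unique account key can be formed from the (iss, sub) pair.
  return { accountKey: `${payload.iss}#${payload.sub}`, claims: payload };
}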

LDAP JWT OAuth scheme explanation

I'm trying to systematize my knowledge about OAuth + JWT + LDAP authorization. I've read multiple excellent articles (i.e. this) but still have questions about it:
My understanding:
JWT is a token format which allows Single Sign-On (SSO). It's more secure than simple token auth since it encrypts all user-specific info (e.g. userName, password, clientAppId, IP address, etc.). This info is signed with the internal authority server's key and can't be changed by an attacker.
From here, look at the phrase below. As I understand it, that means each of the HTTP frontend servers doesn't require a lookup for session data. But it requires a lookup to the authority server. What's the benefit? Isn't that the same single point of failure? Why is JWT considered stateless? JWT still needs to keep user data on the authority server, right?
The server side storage issues are gone.
If you need to log out a user with JWT before the expiration period is over, you need to keep blacklists. So what is the benefit over simple token auth without SSO?
Is JWT a realization of OpenID (authentication only)?
It's impossible to do auto-sign-in for server-to-server with JWT (tokens) without OAuth. OAuth is used when you want to authorize a request from some service on behalf of a user without user participation. Why is it impossible with plain tokens and possible with OAuth?
OAuth is also used to configure flexible access policies, like roles, groups, etc. But why can't you implement them yourself based on tokens/JWT?
An LDAP server is extremely fast for read operations on small, non-interconnected pieces of data, such as user credentials. Where is LDAP in the JWT-OAuth scheme (or in OpenID Connect)? Is LDAP used for authentication (JWT) or authorization (OAuth)?
I'll try to clarify some concepts here:
OAuth and OpenID Connect (OIDC) are just authentication mechanisms by themselves. JWT is just a way to convey authenticated information between two parties. So you have to make an effort to separate concerns. When in doubt about how to identify a user, be sure it's really him/her, check credentials and so on, go check the OIDC or OAuth standards. When in doubt about how to convey user-related information between parties securely, look at the JWT RFC and related specs (JWS, JWK, JWE and so on).
Having said this:
Totally correct. You understood correctly.
It depends on the implementation: some approaches are stateful, and some are stateless. The JWT-consuming server (Resource Server in OAuth jargon) can have the signing keys of the IdMS (the Authorization Server, badly named, in OAuth jargon) cached or pre-provisioned. This way it can validate JWT Access Tokens coming from the IdMS without making any request; the IdMS could be down without impacting the system. This is precisely the case for some architectures that have the IdMS behind a VPN and the Resource Servers outside it. Alternatively, there is a more stateful way of checking Access Tokens against the introspection endpoint of the IdMS: for each validation required on the Resource Server, a request is made to the IdMS to check that the Access Token is still valid and to extract the related claims. This latter mechanism is also used when the Access Tokens are not JWTs and thus opaque. (See the sketch below.)
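A sketch of both paths in Node.js/TypeScript; the issuer URL, introspection path and client credentials are placeholder assumptions. The stateless path validates JWT access tokens against cached JWKS keys, while the stateful path calls the IdMS's RFC 7662 introspection endpoint per check:

import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://idms.example.com"; // placeholder Authorization Server
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

// Stateless path: keys are cached locally, no call to the IdMS per request.
export async function checkJwtLocally(token: string) {
  const { payload } = await jwtVerify(token, jwks, { issuer: ISSUER });
  return payload;
}

// Stateful path: ask the IdMS's introspection endpoint (RFC 7662) whether an
// opaque or JWT access token is still active. One network call per check.
export async function introspect(token: string) {
  const res = await fetch(`${ISSUER}/introspect`, {
    method: "POST",
    headers: {
      // placeholder client credentials for the resource server
      Authorization: "Basic " + Buffer.from("client:secret").toString("base64"),
    },
    body: new URLSearchParams({ token }),
  });
  return res.json(); // { active: true, sub: "...", scope: "...", ... }
}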
Blacklists could be a way to do it, but it is more usually done through the Refresh Token mechanism. You give the Access Token a very short lifetime, like 1 minute, and then rely on the refresh step failing in case the session has been revoked.
Technically speaking, OpenID and OpenID Connect (OIDC) are different beasts. For shortness' sake, we could say that OpenID is an old implementation of identity federation that did not see great adoption. OpenID Connect is an evolution of OAuth 2.0 that adds JWT, the ID Token and some other niceties. But no, JWT and OIDC are by no means tied exclusively to each other. It is true that OIDC mandates the use of JWT, but JWT exists outside OIDC.
If you want to authorize requests between two servers at a basic level, it can be done with simple tokens (just keep them secret and use TLS). What JWT enables is that the servers can trust a central IdMS without having to trust each other entirely. OAuth is used in the case you describe because Google, for example, trusts itself and the user, but not your server. So authentication occurs between Google (acting as the IdMS) and the user, and this generates a JWT (not always, which illustrates my previous point) that your server, trusting an external IdMS, can use to communicate with Google.
As already said, group/role management is independent of the JWT/tokens used. JWT/plain tokens are only the way of conveying authentication information.
LDAP in OAuth/OIDC lies in the authentication phase. When the user sends their credentials to the IdMS, instead of checking against a local database, the credentials are checked against LDAP. LDAP could also be used by some advanced IdMSs to retrieve policies, groups or other permissions. But once that step is done, the rest of the process is the same as always.
References:
https://openid.net/connect/
https://es.wikipedia.org/wiki/OpenID
https://www.rfc-editor.org/rfc/rfc7519#section-1

OpenID Connect ID Token: What's the purpose of audience [aud] field validation

I'm trying to implement the OpenID Connect Implicit Flow. The frontend Single Page App passes the ID Token down to the backend server (using the Authorization header), where I need to validate it.
The documentation requires me to check that I trust the audience of the token (the aud & azp fields). I'm struggling to understand the significance of this validation step and what the security implications of not doing so are. Why should I distrust the token if I'm not the intended recipient?
My reasoning is that if I trust the issuer, it doesn't matter whom the token was issued for. I would expect the claims to be the same for any clientId (is this wrong?). Ideally, when I pass the ID Token around my microservices, all they should know is which issuers to trust (and use the discovery protocol to figure out the keys).
What is the attack vector if I skip this validation step?
The issuer could be issuing tokens to different applications, and those applications could have different permissions. Not checking the audience would allow an attacker to use a token issued for application A at application B, which may lead to privilege escalation.
To your suggestion: the claims may indeed differ per Client.
Here's another reason: if you check that the aud claim only contains your client, it prevents other apps' stolen tokens from being used on your app. If a user's token gets stolen from another app, nobody will be able to impersonate the user on your app because the aud claim will not be correct.
I'm answering this for posterity.
You should check the issuer, and check that your client_id is the only one in the audience, if you are receiving tokens from an external OpenID Provider, i.e. one that could have more clients than just yours.
Claims are not global to the OpenID Provider; they can be per-client. A user can have the "Admin" role on app-A, get a token there, then try to send app-B (your application) the same token, hoping that you are not checking which client it was issued to (its audience).
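A minimal sketch of that check in Node.js/TypeScript with the jose library; the issuer URL and the client IDs "app-A"/"app-B" are placeholders. A token stolen from app-A carries aud = "app-A" and fails verification at app-B even though its issuer and signature are valid:

import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://op.example.com";   // placeholder issuer
const jwks = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

// app-B only accepts tokens issued *to* app-B. A token stolen from app-A
// carries aud = "app-A" and fails this check, even though the issuer and
// signature are perfectly valid.
export async function verifyForAppB(idToken: string) {
  return jwtVerify(idToken, jwks, {
    issuer: ISSUER,
    audience: "app-B", // placeholder client_id of this application
  });
}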

Using JWT with Active Directory authentication in NodeJS backend

I am building an intranet web application consisting of an Angular frontend and a Node.JS backend. The application needs to use the corporate Active Directory for authentication and authorization.
I'm considering how to best implement this in a secure way. I am planning to use the Active Directory node module for actually communicating with the AD to authenticate when the user logs in, and to check security group membership for certain restricted actions, etc.
However, I am not quite sure what the best way to authorize my backend endpoints is. The AD module does not offer any token/ticket, even though I suppose Kerberos is used for the actual authentication process. In other authenticated apps I've developed, I've generated a jsonwebtoken when the user logs in, and then passed and verified that token in each backend route. Is that a good idea also when authenticating against AD?
EDIT: Second part of question spawned to separate thread: Best practices for server-side handling of JWT tokens
Also, I have a more general concern, regarding what the best practice is for actually verifying tokens. Suppose that the "secret" used for JWT generation is compromised (in my scenario many people may have access to the source code of the system, but not to the system itself). Am I right in believing that a malicious user could then, with only this information, generate a token on behalf of any given user, and without ever authenticating with AD use that token in my API requests? A token is typically generated using jwt.sign(payload, secretOrPrivateKey, options).
Alternatively, suppose a malicious user could get hold of an actual token (before it has expired). To me it seems like instead of having to know a user's username and password, the security is now reduced to having to know the username and the JWT secret. Is this a valid concern and what should I do to prevent this?
My best hope so far is using a server side session to store information about the current user after logging in, so that even if a token is maliciously generated and used when accessing backend endpoints, it would fail unless the user has actually gone through the login route, authenticated with AD and stored some information in the session as a result of this.
I also considered actually authenticating with AD in each API endpoint, but that would require the AD username/password to be sent in every request, which in turn would require that sensitive information be stored in the client's sessionStorage or localStorage, which is most likely a bad idea.
So, questions:
1) Is it reasonable to combine AD authorization with JWT as bearer token or what is the preferred way to build a secure backend + frontend utilizing AD for authentication?
2) If JWT is a good idea, what is the best practice for securing endpoints using JWT? Is using a server side session reasonable?
Interestingly enough I have found tons of examples on how to best implement token based authentication (in general, or with NodeJS specifically), but many of them seem flawed in one way or another.
1) Is it reasonable to combine AD authorization with JWT as bearer token or what is the preferred way to build a secure backend + frontend utilizing AD for authentication?
It is reasonable, but if you are already using Kerberos and AD to initially authenticate the user, you might consider using s4u2proxy constrained delegation, which allows the service to present the user's service ticket to the KDC and acquire (subject to authorisation checks) a ticket for a backend service (and repeat for as many services as necessary).
If you have a lot of backend services that need to be contacted, a single JWT bearing all the authorization claims needed for all the services to enforce authorization policy may be a better option.
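A hedged sketch of that second option in Node.js/TypeScript with the jsonwebtoken package; the key file, claim names and service names are placeholder assumptions. After AD authentication succeeds, the login service mints one short-lived RS256 token whose claims each backend can evaluate locally:

import { sign } from "jsonwebtoken";
import { readFileSync } from "node:fs";

// Placeholder private key; backends verify with the matching public key only.
const privateKey = readFileSync("jwt-signing-key.pem");

export function issueToken(user: { sAMAccountName: string; groups: string[] }) {
  return sign(
    {
      sub: user.sAMAccountName,
      groups: user.groups,                 // e.g. AD security groups for authz
      aud: ["orders-api", "reports-api"],  // placeholder backend services
    },
    privateKey,
    { algorithm: "RS256", issuer: "intranet-login", expiresIn: "15m" }
  );
}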
2) If JWT is a good idea, what is the best practice for securing endpoints using JWT? Is using a server side session reasonable?
General key security practices apply:
Never store keys in the clear in non-volatile storage, anywhere.
Ideally do not store encrypted keys in attached storage on the server where, if the server is compromised, they would be subject to offline attack. Make them available to the host only at server startup.
Ensure key material resides in secure memory so that it cannot be swapped to disk (and/or use encrypted swap).
Use public key algorithms so that no secret key need exist on multiple hosts (see the sketch after this list).
Consider using a hardware security module (HSM).
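For the public-key point in particular, a minimal sketch (Node.js/TypeScript with jsonwebtoken; the file names are placeholders): only the login server holds the private key, every other endpoint verifies with the public key, so a compromised resource server or leaked verification key cannot be used to forge tokens.

import { sign, verify } from "jsonwebtoken";
import { readFileSync } from "node:fs";

// Login server only: holds the private key and signs tokens.
const privateKey = readFileSync("private.pem");
export const issue = (username: string) =>
  sign({ sub: username }, privateKey, { algorithm: "RS256", expiresIn: "15m" });

// Any backend endpoint: holds only the public key, which cannot sign anything.
const publicKey = readFileSync("public.pem");
export const check = (token: string) =>
  verify(token, publicKey, { algorithms: ["RS256"] });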

Do I need Federation Authentication if I have a custom STS? If so, why?

If I have a custom Secure Token Service that specifically lists out allowed audiences and checks whether the token is coming from one of those audiences, and also checks the thumbprint and issuer of the X.509 certificate, do I need WS-Federation?
Since my STS is checking that the token already came from a specific application and was routed through my ACS, aren't I verifying all of the things I need to? I know that application A sent a request to the ACS, which sent a request to application B, all via the custom STS, so where does Federated Identity fit in this picture?
Edit for clarity:
Sorry, I was a bit unclear in the original post. I think the confusion came about because I used STS instead of security token handler (way different things, just a typo).
Application A is a custom login service, which displays the login options for the user: Google/Facebook/Yahoo/etc. Logging in through these services gets the token from the ACS and returns it to application B, the Relying Party. This RP has a custom security token handler which accepts the token and validates that it has an audience URI matching application A. It also validates that the issuer was the ACS and that the thumbprint matches that of the cert used to sign the token via the ACS.
This means that, theoretically, application B knows that application A was used to log in (as it came from that audience URI) and that the ACS sent the token (as it was the issuer and the thumbprint matches). What I am asking is whether federated identity is necessary for application B. What exactly do you gain by using it, if you've already proved where the token came from?
Your question might need some clarification.
First, you might want to explain specifically what you mean by application A and application B, and how your STS fits in this scenario. Applications don't typically issue tokens, only STSes do. In this sense, ACS doesn't connect applications to each other, it connects relying party applications to third party identity providers.
Second, if you're talking about authentication over the web, and you have a custom identity provider STS that's issuing tokens for ACS, then you're probably already using WS-Federation. If however your token acquisition is not browser based, and you're making back-end HTTP calls to ACS, then WS-Federation is not relevant to the scenario.
Third, from the point of view of the STS, the set of allowed audiences is not about token issuers, it refers to entities that will consume tokens issued by that STS. That is, it's the set of subjects that the STS will issue tokens to. This could be applications themselves, or other intermediary STSes along the federation chain. (ACS for example acts as such an intermediary)
Fourth, when you're validating the issuer's certificate on an incoming token, you must do more than just compare the thumbprint. The thumbprint is not part of the token's cryptographic proof. You must validate the token's digital signature in order to verify that the token issuer owns the private key of the certificate.
I hope this clears things up, but if it doesn't answer your question please let me know.
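To illustrate the fourth point: the original scenario involves WS-Federation/SAML tokens, but the principle is the same for any signed token. A hypothetical Node.js/TypeScript sketch for a JWT-shaped token (the certificate file and the thumbprint handling are assumptions): comparing the thumbprint only picks which certificate you expect; the actual proof is verifying the signature with that certificate's public key.

import { X509Certificate } from "node:crypto";
import { readFileSync } from "node:fs";
import { verify } from "jsonwebtoken";

// Placeholder: the issuer certificate you have pinned (e.g. from ACS metadata).
const issuerCert = new X509Certificate(readFileSync("acs-signing-cert.pem"));

export function validateToken(token: string, expectedThumbprint: string) {
  // Checking the thumbprint only tells you which certificate you expect...
  if (issuerCert.fingerprint256 !== expectedThumbprint) {
    throw new Error("unexpected signing certificate");
  }
  // ...the cryptographic proof is verifying the signature with its public key.
  const publicKeyPem = issuerCert.publicKey.export({ type: "spki", format: "pem" });
  return verify(token, publicKeyPem, { algorithms: ["RS256"] });
}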

Resources