cert manager - Duration and renewal process of my certificates - azure

I am working with cert-manager and kong-ingress-controller to enable HTTPS in Kubernetes.
I want to understand how the renewal process works when I only use a ClusterIssuer and the certificate that is generated by default from the ingress resource.
I am not using a kind: Certificate resource, meaning I am not defining a custom X.509 certificate to be signed and validated through a reference to my ClusterIssuer.
So far I have created a ClusterIssuer and one ingress resource, which automatically creates a certificate named letsencrypt-prod that is used to perform the HTTP-01 validation between cert-manager and the Let's Encrypt CA.
Finally, I have this output:
I0321 10:49:48.505664 1 controller.go:162] certificates controller: syncing item 'default/letsencrypt-prod'
I0321 10:49:48.506008 1 conditions.go:143] Found status change for Certificate "letsencrypt-prod" condition "Ready": "False" -> "True"; setting lastTransitionTime to 2019-03-21 10:49:48.506003434 +0000 UTC m=+168443.026129945
I0321 10:49:48.506571 1 sync.go:263] Certificate default/letsencrypt-prod scheduled for renewal in 1438h59m58.49343646s
I0321 13:57:46.226424 1 controller.go:168] certificates controller: Finished processing work item "default/letsencrypt-prod"
I0321 15:12:53.199067 1 controller.go:178] ingress-shim controller: syncing item 'default/kong-ingress-service'
I0321 15:12:53.199171 1 sync.go:183] Certificate "letsencrypt-prod" for ingress "kong-ingress-service" is up to date
This means that my certificate is scheduled for renewal in 1438h59m58s, i.e. roughly 60 days (not 3 months).
Does that mean it will really be renewed automatically, as indicated here:
The default duration for all certificates is 90 days and the default renewal window is 30 days. This means that certificates are considered valid for 3 months and renewal will be attempted within 1 month of expiration.
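Those defaults line up with the renewal time in the log above; a quick sanity check of the arithmetic:

```python
from datetime import timedelta

# Default cert-manager values: 90-day certificate lifetime,
# renewal attempted 30 days before expiry.
duration = timedelta(days=90)
renew_before = timedelta(days=30)

# Time from issuance until the renewal attempt is scheduled.
renew_after = duration - renew_before
print(renew_after)  # 60 days, i.e. 1440 hours
```

The `1438h59m58s` in the log is just under 1440 hours because a little time had already elapsed since issuance.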
The cert-manager documentation says:
Although the duration and renewal periods are specified on the Certificate resources, the corresponding Issuer or ClusterIssuer must support this.
My Cluster Issuer is:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
How can I manage the duration and renewBefore parameters if I am not creating a Certificate resource?
According to this, can I add the duration and renewBefore parameters to my ClusterIssuer? Maybe like this?
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
  # ...
  duration: 24h
  renewBefore: 12h

This is not supported on Issuers/ClusterIssuers, only on Certificates. You can create an admission controller to mutate Certificates, or run a cron job to update Certificate resources after they are created.
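For reference, if you do create a Certificate resource explicitly, duration and renewBefore go on its spec. A sketch against the same v1alpha1 API (names are illustrative, ACME solver config omitted for brevity):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-cert            # illustrative name
spec:
  secretName: example-cert-tls  # where the signed cert is stored
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - example.com
  duration: 2160h       # 90 days
  renewBefore: 720h     # renew 30 days before expiry
  # (ACME solver config omitted for brevity)
```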


where are tokens created by kubectl create token stored (from v1.24 on)?

This question concerns kubernetes v1.24 and up
So I can create tokens for service accounts with
kubectl create token myserviceaccount
The created token works and serves its purpose, but what I find confusing is that when I run kubectl get sa, the SECRETS field of myserviceaccount is still 0. The token doesn't appear in kubectl get secrets either.
I've also seen that I can pass --bound-object-kind and --bound-object-name to kubectl create token but this doesn't seem to do anything (visible) either...
Is there a way to see the created token? And what is the purpose of the --bound... flags?
Thanks to the docs link I stumbled upon today (I don't know how I missed it when asking the question, because I spent quite some time browsing the docs) I found the information I was looking for. I am providing this answer because I find v1d3rm3's answer incomplete and not fully accurate.
The Kubernetes docs confirm v1d3rm3's claim (which is, by the way, the key to answering my question):
The created token is a signed JSON Web Token (JWT).
Since the token is a JWT, the server can verify that it signed it, hence there is no need to store it. A JWT's expiry time is set not because the token is not associated with an object (it actually is, as we'll see below), but because the server has no way of invalidating a token (it would otherwise need to keep track of invalidated tokens, since tokens aren't stored anywhere and any token with a valid signature is accepted). The expiry time limits the damage if a token gets stolen.
A signed JWT contains all the necessary information inside of it.
The decoded token (created with kubectl create token test-sa, where test-sa is the service account name) looks like this:
{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1666712616,
  "iat": 1666709016,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "kubernetes.io": {
    "namespace": "default",
    "serviceaccount": {
      "name": "test-sa",
      "uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
    }
  },
  "nbf": 1666709016,
  "sub": "system:serviceaccount:default:test-sa"
}
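You can inspect these claims yourself without any extra tooling. An illustrative Python sketch (the helper name is mine) that base64url-decodes the payload segment of a JWT, without verifying the signature:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (claims) segment of a JWT.

    This does NOT verify the signature -- it is for inspection only;
    never use it to decide whether a token should be trusted.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are unpadded base64url; restore the padding first.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example usage:
#   claims = decode_jwt_payload(token)
#   print(claims["sub"], claims["exp"])
```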
Contrary to v1d3rm3's answer, this token IS associated with a service account automatically, as the Kubernetes docs link confirms and as we can also see from the token content above.
Suppose I have a secret I want to bind my token to (for example kubectl create token test-sa --bound-object-kind Secret --bound-object-name my-secret, where test-sa is the service account name and my-secret is the secret I'm binding the token to); the decoded token will look like this:
{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1666712848,
  "iat": 1666709248,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "kubernetes.io": {
    "namespace": "default",
    "secret": {
      "name": "my-secret",
      "uid": "2a44872f-1c1c-4f18-8214-884db5f351f2"
    },
    "serviceaccount": {
      "name": "test-sa",
      "uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
    }
  },
  "nbf": 1666709248,
  "sub": "system:serviceaccount:default:test-sa"
}
Notice that the binding happens inside the token, under the kubernetes.io key, and if you describe my-secret you will still not see the token. So the --bound-... flags weren't visibly (from the secret object's perspective) doing anything, because the binding happens inside the token itself.
Instead of decoding JWT tokens, we can also see details in TokenRequest object with
kubectl create token test-sa -o yaml
Regarding your question, it seems that the command kubectl create token myserviceaccount does not behave as you expect: according to the official documentation, when you configure another SA, the token is automatically created and referenced by the service account.
Consider that any tokens for non-existent service accounts will be cleaned up by the token controller.
Regarding the configuration of the SA, check the link.
If you want to know more about how authentication with service account tokens works, you can consult the link.
In case you already have an SA set up, you can check how to manage it in the link.
For establishing bidirectional trust between a node joining the cluster and a control-plane node, consult the link.
If you want to check the assigned token, you can follow:
Get information about your Kubernetes secret object. Secrets are used to store access credentials:
kubectl get secret --namespace={namespace}
output:
NAME                  TYPE                                  DATA   AGE
admin.registrykey     kubernetes.io/dockercfg               1      1h
default-token-2mfqv   kubernetes.io/service-account-token   3      1h
Get details of the service account token.
kubectl get secret default-token-2mfqv --namespace={namespace} -o yaml
Following is a sample output:
apiVersion: v1
data:
  ca.crt: S0tLS1CR...=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY...=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: default
    kubernetes.io/service-account.uid: df441c69-f4ba-11e6-8157-525400225b53
  creationTimestamp: 2017-02-17T02:43:33Z
  name: default-token-2mfqv
  namespace: default
  resourceVersion: "37"
  selfLink: /api/v1/namespaces/default/secrets/default-token-2mfqv
  uid: df5f1109-f4ba-11e6-8157-525400225b53
type: kubernetes.io/service-account-token
Note: The token in the sample output is encoded in base64. You must decode the token and then set this token by using kubectl.
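As a quick illustration of that note, the values under data are plain base64. A small Python sketch (the helper name is mine) that decodes a Secret's data map:

```python
import base64

def decode_secret_data(data: dict) -> dict:
    """Decode the base64-encoded values of a Secret's `data` map."""
    return {key: base64.b64decode(value).decode() for key, value in data.items()}

# The `namespace` field from the sample Secret above decodes to "default":
print(decode_secret_data({"namespace": "ZGVmYXVsdA=="}))  # {'namespace': 'default'}
```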
Decode and set the base64-encoded token.
kubectl config set-credentials sa-user --token=$(kubectl get secret <secret_name> -o jsonpath={.data.token} | base64 -d)
kubectl config set-context sa-context --user=sa-user
In the command, replace <secret_name> with the name of your service account secret.
Connect to the API server.
curl -k -H "Authorization:Bearer {token}"
You can now use kubectl to access your cluster without a time limit for token expiry.
And finally, here you can read about the --bound flag.
Note: as of version 1.24 this has changed.
Once the Pod is running using a SA you could check:
Generate ServiceAccount token manually
Simply generate tokens manually to use in pipelines or whenever we need to contact the K8s Apiserver:
kubectl create token cicd
kubectl create token cicd --duration=999999h
Tip: You can inspect the tokens using for example https://jwt.io, just don’t do this with production tokens!
Create a Secret for the ServiceAccount
We can create Secrets manually and assign these to a ServiceAccount:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: cicd
  annotations:
    kubernetes.io/service-account.name: "cicd"
If you describe the Secret, we'll also see that a token was generated for it:
kubectl describe secret cicd
One big difference, though, is that the ServiceAccount no longer has a Secrets section like before:
kubectl get sa cicd -oyaml
To find a Secret belonging to a ServiceAccount we need to search for all Secrets having the proper annotation.
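That search can be scripted. An illustrative Python sketch (the function name is mine) that filters the parsed output of kubectl get secrets -o json by the service-account annotation:

```python
def secrets_for_service_account(secret_list: dict, sa_name: str) -> list:
    """Return names of Secrets annotated as belonging to `sa_name`.

    `secret_list` is the parsed JSON from `kubectl get secrets -o json`.
    """
    names = []
    for item in secret_list.get("items", []):
        annotations = item.get("metadata", {}).get("annotations") or {}
        if annotations.get("kubernetes.io/service-account.name") == sa_name:
            names.append(item["metadata"]["name"])
    return names
```

Feed it `json.load(...)` of the kubectl output and it returns the matching Secret names.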
Delete a ServiceAccount
If we delete the ServiceAccount, then the Secret will also be deleted automatically, just as in previous versions:
kubectl delete sa cicd
kubectl get sa,secret # all gone
You could check the following video for reference
Look at golder3's answer.
Is there a way to see created token?
No, there isn't. Tokens created with the TokenRequest API are a one-time creation. Kubernetes doesn't manage these tokens; the only way to manage them is to associate them with a Secret or Pod object. Tokens are JWTs, so, for security reasons, the expiration time of a created token not associated with an object is one hour by default. It can be configured with the --duration option.
And what is the purpose of --bound.. flags?
The purpose of --bound flags is to associate a token with a specific object.
From v1.24 on, you have to create tokens manually.
Using the TokenRequest API
It depends on the use case, but it's not so easy to manage these tokens. They're created with the kubectl create command; some examples are:
kubectl create token SERVICE_ACCOUNT_NAME
If the Service Account is in a specific namespace, you need to specify it on the command:
kubectl create token SERVICE_ACCOUNT_NAME -n NAMESPACE
You can define an expiration time too:
kubectl create token SERVICE_ACCOUNT_NAME --duration 5h
Reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-
Token associated with a Secret
To create a token associated with a secret object, you can use kubectl apply with a file:
apiVersion: v1
kind: Secret
metadata:
  name: demo-token # the name of the secret
  annotations:
    kubernetes.io/service-account.name: "name_of_sa" # the name of the ServiceAccount
type: kubernetes.io/service-account-token
Then, just execute:
kubectl apply -f file.yml
Or, if the ServiceAccount is in a specific namespace
kubectl apply -f file.yml -n NAMESPACE

cert-manager DNS01 Challenge fails - found no zone for wildcard domain

I'm getting this error in wildcard certificate challenge:
Error presenting challenge: Found no Zones for domain _acme-challenge.my-domain.com. (neither in the sub-domain nor in the SLD) please make sure your domain-entries in the config are correct and the API is correctly setup with Zone.read rights.
I'm using Cloudflare as the DNS01 Challenge Provider and have set up the API token with the permissions described in the cert-manager documentation.
My cluster issuer looks like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-issuer
spec:
  acme:
    email: <email>
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: test-issuer-private-key
    solvers:
      - dns01:
          cloudflare:
            email: <email>
            apiTokenSecretRef:
              name: issuer-access-token
              key: api-token
And my certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-wildcard
spec:
  secretName: test-wildcard-tls
  issuerRef:
    name: test-issuer
    kind: ClusterIssuer
  dnsNames:
    - "*.my-domain.com"
I have a CNAME record named '*' that points to my domain, and an A record that points to my Kubernetes cluster IP.
Am I missing something? How do you correctly set up cert-manager to automatically manage wildcard domain with Cloudflare as DNS01 Challenge Provider?
I've run into this issue as well, and I realized that I made two different errors in my configuration.
#1: I had overlooked that the API token you generate must have all of the following permissions and zone resources associated with it:
Permissions
Zone.Zone.Read
Zone.Zone.Edit
Zone Resources
Include.All zones
This is in the docs but clearly I wasn't reading correctly.
#2: I wasn't able to make it work with the dnsNames attribute in the Certificate resource, but rather needed to use dnsZones instead. In your example, try changing from:
dnsNames:
  - "*.my-domain.com"
to:
dnsZones:
  - "my-domain.com"
According to the docs (emphasis mine):
Note: dnsNames take an exact match and do not resolve wildcards, meaning the following Issuer will not solve for DNS names such as foo.example.com. Use the dnsZones selector type to match all subdomains within a zone.
This should generate a certificate with a CN of *.my-domain.com and with both *.my-domain.com and my-domain.com in the subjectAltName field.

MS Azure OAuth2 proxy - Token based authentication not oauth_proxy cookie

I am using a Kubernetes deployment model inside Azure, having an OAuth2 proxy(https://github.com/oauth2-proxy/oauth2-proxy) which is protecting the cluster resources by enabling SSO login through various clients. That is ok from the end user perspective who can easily login with his SSO client. The problem appears when the APIs exposed by the services behind the OAuth2 proxy, need to be consumed by external applications via REST calls.
The configuration is the default one, having a dedicated Kubernetes service for OAuth2 and the following rules inside the Ingress file.
nginx.ingress.kubernetes.io/auth-signin: 'https://$host/oauth2/start?rd=$request_uri'
nginx.ingress.kubernetes.io/auth-url: 'https://$host/oauth2/auth'
From what I checked, those services can currently only be consumed via REST calls from an external application (Postman, for example) if I add a cookie parameter (_oauth2_proxy) that is generated after a successful login through the UI and the SSO client provider. If I do not add this cookie parameter, I get an error such as "cookie _oauth_proxy is not present".
Is there any option which I can add to the proxy configuration in order to permit token based authentication and authorization/programmatic generation of some identifier in order to access the resources behind the OAuth2 proxy for a technical user(external application) through REST calls? I can generate an access token based on the existing configuration(client id, secret, application scope, etc).
The OAuth2 proxy deployment YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - env:
            - name: OAUTH2_PROXY_PROVIDER
              value: azure
            - name: OAUTH2_PROXY_AZURE_TENANT
              value: <REPLACE_WITH_DIRECTORY_ID>
            - name: OAUTH2_PROXY_CLIENT_ID
              value: <REPLACE_WITH_APPLICATION_ID>
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: <REPLACE_WITH_SECRET_KEY>
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: <REPLACE_WITH_VALUE_OF python -c 'import os,base64; print base64.b64encode(os.urandom(16))'>
            - name: OAUTH2_PROXY_HTTP_ADDRESS
              value: "0.0.0.0:4180"
            - name: OAUTH2_PROXY_UPSTREAM
              value: "<AZURE KUBERNETES CLUSTER HOST e.g. >"
          image: bitnami/oauth2-proxy:latest
          imagePullPolicy: IfNotPresent
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    app: oauth2-proxy
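The OAUTH2_PROXY_COOKIE_SECRET placeholder above embeds a Python 2 one-liner (print as a statement). A Python 3 sketch that does the same (oauth2-proxy expects the secret to decode to 16, 24, or 32 random bytes):

```python
import base64
import os

def generate_cookie_secret(num_bytes: int = 16) -> str:
    """Return a random, base64-encoded cookie secret for oauth2-proxy."""
    return base64.urlsafe_b64encode(os.urandom(num_bytes)).decode()

print(generate_cookie_secret())  # different on every run
```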
EDIT:
I was finally able to use the generated token through the AD OAuth2 token endpoint in order to call my APIs behind the proxy. In order to achieve that, I changed the docker image from machinedata/oauth2_proxy to bitnami/oauth2-proxy.
Beside that I added the following arguments to the container:
args:
  - '--provider=azure'
  - '--azure-tenant=TENANT_ID'
  - '--skip-jwt-bearer-tokens=true'
  - '--oidc-issuer-url=https://sts.windows.net/TENANT_ID/'
  - '--extra-jwt-issuers=https://login.microsoftonline.com/TENANT_ID/v2.0=APP_ID'
  - '--request-logging=true'
  - '--auth-logging=true'
  - '--standard-logging=true'
Also, I had to do some changes at the app registration manifest from Azure AD in order for the token to be validated by the OAuth2 proxy against the correct version.
"accessTokenAcceptedVersion": 2
I found some useful explanations in here as well: https://github.com/oauth2-proxy/oauth2-proxy/issues/502 .
Now I can use the token endpoint provided by Azure to generate a Bearer token for my API calls. The only issue which is still remaining is an error which I have when I try to access the application UI.
The error/warning received in the pod logs is:
WARNING: Multiple cookies are required for this session as it exceeds the 4kb cookie limit. Please use server side session storage (eg. Redis) instead.
The error received in the browser is 502 Bad Gateway
EDIT #2:
I was able to bypass this new error by increasing buffer size at my ingress level for OAuth2 proxy. More details can be found in the below URLs.
https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider/#azure-auth-provider
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
I finally made it work using the following configuration:
deployment.yaml for OAuth2 proxy:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - name: oauth2-proxy
          image: 'bitnami/oauth2-proxy:latest'
          args:
            - '--provider=azure'
            - '--azure-tenant=TENANT_ID'
            - '--skip-jwt-bearer-tokens=true'
            - '--oidc-issuer-url=https://sts.windows.net/TENANT_ID/'
            - '--extra-jwt-issuers=https://login.microsoftonline.com/TENANT_ID/v2.0=CLIENT_ID'
            - '--request-logging=true'
            - '--auth-logging=true'
            - '--standard-logging=true'
          ports:
            - containerPort: 4180
              protocol: TCP
          env:
            - name: OAUTH2_PROXY_AZURE_TENANT
              value: TENANT_ID
            - name: OAUTH2_PROXY_CLIENT_ID
              value: CLIENT_ID
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: CLIENT_SECRET
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: COOKIE_SECRET
            - name: OAUTH2_PROXY_HTTP_ADDRESS
              value: '0.0.0.0:4180'
            - name: OAUTH2_PROXY_UPSTREAM
              value: 'http://your-host'
            - name: OAUTH2_PROXY_EMAIL_DOMAINS
              value: '*'
ingress.yaml for OAuth2 proxy:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: oauth2-proxy
  namespace: default
  labels:
    app: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    # in my case the generated cookie was too big so I had to add the below parameters
    nginx.ingress.kubernetes.io/proxy-buffer-size: 8k
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
spec:
  tls:
    - hosts:
        - YOUR_HOST
  rules:
    - host: YOUR_HOST
      http:
        paths:
          - path: /oauth2
            backend:
              serviceName: oauth2-proxy
              servicePort: 4180
In addition to those configuration files, I also had to change the value of accessTokenAcceptedVersion in the Azure application registration manifest. By default this value is set to null, which means the app issues V1 tokens instead of the V2 tokens specified in the extra-jwt-issuers argument.
"accessTokenAcceptedVersion": 2
After those changes were in place, I was able to use the generated token through Azure token endpoint in order to go through the OAuth2 proxy and reach my application exposed APIs:
HTTP POST to https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
Body:
- client_id: YOUR_CLIENT_ID
- grant_type: client_credentials
- client_secret: YOUR_CLIENT_SECRET
- scope: api://YOUR_CLIENT_ID/.default - this was generated by me, but it should work with MS Graph as well
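The same token request can be scripted. An illustrative Python sketch using only the standard library (the function names are mine):

```python
import json
from urllib import parse, request

def client_credentials_body(client_id: str, client_secret: str, scope: str) -> dict:
    """Form body for the client_credentials grant, matching the fields above."""
    return {
        "client_id": client_id,
        "grant_type": "client_credentials",
        "client_secret": client_secret,
        "scope": scope,
    }

def fetch_token(tenant_id: str, client_id: str, client_secret: str, scope: str) -> str:
    """POST to the Azure AD v2.0 token endpoint and return the access token."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = parse.urlencode(
        client_credentials_body(client_id, client_secret, scope)
    ).encode()
    with request.urlopen(request.Request(url, data=data)) as resp:
        return json.load(resp)["access_token"]
```

The returned token is then sent as an Authorization: Bearer header to the APIs behind the proxy.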
I am using a Kubernetes deployment model inside Azure, with an OAuth2 proxy protecting the cluster resources by enabling SSO login.
I have the OAuth service running successfully, and I have both the application ingress and the OAuth ingress deployed. But when I access the application URL, I get a 500 Internal error. If I access the OAuth ingress URL, I get the login window. I can provide details on the OAuth deployment YAML and the application and OAuth ingress YAMLs.

How can I debug oauth2_proxy when connecting to Azure B2C?

I'm new to Kubernetes, and I've been learning about Ingress. I'm quite impressed by the idea of handling TLS certificates and authentication at the point of Ingress. I've added a simple static file server, and added cert-manager, so I basically have a HTTPS static website.
I read that the NGINX Ingress Controller can be used with oauth2-proxy to handle authentication at the ingress. The problem is that I can't get this working at all. I can confirm that my oauth2-proxy Service and Deployment are present and correct - in the Pod's log I can see the requests coming through from NGINX, but I can't see what URI it is actually calling at Azure B2C. Whenever I try to access my service I get a 500 Internal error - if I put my /oauth2/auth address in the browser, I get "The scope 'openid' specified in the request is not supported.". However, if I test-run the user flow in Azure, the test URL also specifies "openid" and it functions as expected.
I think that I could work through this if I could find out how to monitor what oauth2-proxy requests from Azure (i.e. work out where my config is wrong by observing its URI) - otherwise, maybe somebody who has done this can tell me where I went wrong in the config.
My config is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
        - args:
            - -provider=oidc
            - -email-domain=*
            - -upstream=file:///dev/null
            - -http-address=0.0.0.0:4180
            - -redirect-url=https://jwt.ms/
            - -oidc-issuer-url=https://<tenant>.b2clogin.com/tfp/<app-guid>/b2c_1_manager_signup/
            - -cookie-secure=true
            - -scope="openid"
          # Register a new application
          # https://github.com/settings/applications/new
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              value: <app-guid>
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: <key-base64>
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: <random+base64>
          image: quay.io/pusher/oauth2_proxy:latest
          imagePullPolicy: Always
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static1-oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: cloud.<mydomain>
      http:
        paths:
          - backend:
              serviceName: oauth2-proxy
              servicePort: 4180
            path: /oauth2
  tls:
    - hosts:
        - cloud.<mydomain>
      secretName: cloud-demo-crt
In my static site ingress I have the following added to metadata.annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
I'm not 100% sure whether these annotations should always be set as such, or whether I should have varied them for B2C/OIDC, but the requests do reach the proxy; it's just what the proxy does next that fails.
Note that the log does indicate that oauth2-proxy connected to B2C, indeed if the issuer uri changes, then it goes into a crash fallback loop.
There seem to be a number of articles about how to set this up, so I'm sure it's possible, but I got a little lost. If somebody could help with the setup or ideas for debugging, that would be wonderful.
Thanks.
Now I'm able to reliably get a ?state= and code= to display in the browser window on the /oauth2/callback page, but the page reports Internal Error. oauth2_proxy is logging when it should now, and the log says:
[2020/06/03 21:18:07] [oauthproxy.go:803] Error redeeming code during OAuth2 callback: token exchange: oauth2: server response missing access_token
My Azure B2C audit log, however, says that it is issuing id_tokens.
When I look at the source code of oauth2_proxy, it looks as though the problem occurs during oauth2.config.Exchange() - which is in the golang library - I don't know what that does, but I don't think that it works properly with Azure B2C. Does anybody have an idea how I can progress from here?
Thanks.
Mark
I resorted to compiling and debugging the proxy app in VSCode. I ran a simple NGINX proxy to supply TLS termination to the proxy to allow the Azure B2C side to function. It turns out that I had got a lot of things wrong. Here are a list of problems that I resolved in the hope that somebody else might be able to use this to run their own oauth_proxy with Azure B2C.
When attached to a debugger, it is clear that oauth2_proxy reads the token response and expects to find, in turn, access_token, then id_token; it then requires (by default) the "email" claim.
To get an "access_token" to return, you have to request access to some resource. Initially I didn't have this. In my yaml file I had:
- --scope=openid
Note: do not put quotation marks around your scope value in YAML, because they are treated as part of the requested scope value!
I had to set up a "read" scope in Azure B2C via "App Registrations" and "Expose an API". My final scope that worked was of the form:
- --scope=https://<myspacename>.onmicrosoft.com/<myapiname>/read openid
You have to make sure that both scopes (read and openid) go through together, otherwise you don't get an id_token. If you get an error saying that you don't have an id_token in the server response, make sure that both values are going through in a single use of the --scope flag.
Once you have access_token and id_token, oauth2_proxy fails because there is no "email" claim. Azure B2C has an "emails" claim, but I don't think that can be used. To get around this, I used the object id instead, I set:
- --user-id-claim=oid
The last problem I had was that no cookies were being set in the browser. I did see an error in the oauth2-proxy output that the cookie value itself was too long; I removed the "offline_access" scope and that message went away. There were still no cookies in the browser, however.
My NGINX ingress log did, however, have a message that the headers were more than 8K, and NGINX was reporting a 503 error because of this.
In the oauth2-proxy documents, there is a description that a Redis store should be used if your cookie is long - it specifically identifies Azure AD cookies as being long enough to warrant a Redis solution.
I installed a single node Redis to test (unhardened) using a YAML config from this answer https://stackoverflow.com/a/53052122/2048821 - The --session-store-type=redis and --redis-connection-url options must be used.
The final Service/Deployment for my oauth2_proxy look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
        - args:
            - --provider=oidc
            - --email-domain=*
            - --upstream=file:///dev/null
            - --http-address=0.0.0.0:4180
            - --redirect-url=https://<myhost>/oauth2/callback
            - --oidc-issuer-url=https://<mynamespane>.b2clogin.com/tfp/<my-tenant>/b2c_1_signin/v2.0/
            - --cookie-secure=true
            - --cookie-domain=<myhost>
            - --cookie-secret=<mycookiesecret>
            - --user-id-claim=oid
            - --scope=https://<mynamespace>.onmicrosoft.com/<myappname>/read openid
            - --reverse-proxy=true
            - --skip-provider-button=true
            - --client-id=<myappid>
            - --client-secret=<myclientsecret>
            - --session-store-type=redis
            - --redis-connection-url=redis://redis:6379
          # Register a new application
          image: quay.io/pusher/oauth2_proxy:latest
          imagePullPolicy: Always
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
Hope that this saves somebody a lot of time.
Mark
I tried to follow Mark Rabjohn's answer, but was getting errors like:
oidc: issuer did not match the issuer returned by provider, expected
"https://your-tenant-name.b2clogin.com/tfp/c5b28ff6-f360-405b-85d0-8a87b5783d3b/B2C_1A_signin/v2.0/"
got
"https://your-tenant-name.b2clogin.com/c5b28ff6-f360-405b-85d0-8a87b5783d3b/v2.0/"
(no policy name in the URL)
It is a known issue (https://security.stackexchange.com/questions/212724/oidc-should-the-provider-have-the-same-address-as-the-issuer)
I'm aware that a few of the mainstream providers such as Microsoft
doesn't strictly follow this pattern but you'll have to take it up
with them, or consider the workarounds given by the OIDC library.
Fortunately, oauth2-proxy supports the --skip-oidc-discovery parameter:
bypass OIDC endpoint discovery. --login-url, --redeem-url and --oidc-jwks-url must be configured in this case.
An example of the parameters is the following:
- --skip-oidc-discovery=true
- --login-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/oauth2/v2.0/authorize
- --redeem-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/oauth2/v2.0/token
- --oidc-jwks-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/discovery/v2.0/keys
To create a scope I had to set up an Application ID URI in Azure B2C via "App Registrations" and "Expose an API"
(e.g. see https://learn.microsoft.com/en-us/azure/active-directory-b2c/tutorial-web-api-dotnet?tabs=app-reg-ga#configure-scopes).
I also had to grant admin permissions as described in https://learn.microsoft.com/en-us/azure/active-directory-b2c/add-web-api-application?tabs=app-reg-ga#grant-permissions (see also "Application does not have sufficient permissions against this web resource to perform the operation in Azure AD B2C").
- --scope=https://<mynamespace>.onmicrosoft.com/<myappname>/<scopeName> openid
You should also specify:
- --oidc-issuer-url=https://<mynamespace>.b2clogin.com/<TenantID>/v2.0/
Using the Directory (tenant) ID in --oidc-issuer-url satisfies the validation in the callback/redeem stage, and you don't need to set --insecure-oidc-skip-issuer-verification=true.
Also note that the redirect URL https://<myhost>/oauth2/callback should be registered in AAD B2C as an application Redirect URI (navigate in the Application Overview pane to the Redirect URIs link).
I am not clear about the debugging, but according to your issue it looks like you are not passing the header param.
On static site ingress please add this one as well and try
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-Access-Token, Authorization
Or this one
nginx.ingress.kubernetes.io/auth-response-headers: Authorization
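For context, the annotation belongs on the Ingress resource of the protected site, alongside the usual oauth2-proxy auth annotations. A minimal sketch (all names, hosts, and the oauth2-proxy service URL are placeholders) might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: static-site                       # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://oauth2.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.example.com/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "Authorization"
spec:
  rules:
    - host: static.example.com            # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: static-site         # placeholder backend service
                port:
                  number: 80
```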
Based on the information by Mark Rabjohn and Michael Freidgeim I also got (after hours of trying) a working integration with Azure AD B2C. Here is a configuration to reproduce a working setup, using docker-compose for testing it out locally:
Local setup
version: "3.7"
services:
  oauth2proxy:
    image: quay.io/oauth2-proxy/oauth2-proxy:latest
    command: >
      --provider=oidc
      --email-domain=*
      --upstream=http://web
      --http-address=0.0.0.0:9000
      --redirect-url=http://localhost:9000
      --reverse-proxy=true
      --skip-provider-button=true
      --session-store-type=redis
      --redis-connection-url=redis://redis:6379
      --oidc-email-claim=oid
      --scope="https://<mynamespace>.onmicrosoft.com/<app registration uuid>/read openid"
      --insecure-oidc-skip-issuer-verification=true
      --oidc-issuer-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/<policy>/v2.0/
    environment:
      OAUTH2_PROXY_CLIENT_ID: "<app registration client id>"
      OAUTH2_PROXY_CLIENT_SECRET: "<app registration client secret>"
      OAUTH2_PROXY_COOKIE_SECRET: "<secret, follow oauth2-proxy docs to create one>"
    ports:
      - "9000:9000"
    links:
      - web
  web:
    image: kennethreitz/httpbin
    ports:
      - "8000:80"
  redis:
    image: redis:latest
The important bits here are these options:
--oidc-email-claim=oid
--scope="https://<mynamespace>.onmicrosoft.com/<app registration uuid>/read openid"
--insecure-oidc-skip-issuer-verification=true
--oidc-issuer-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/<policy>/v2.0/
Using --insecure-oidc-skip-issuer-verification=true allows you to skip the explicit mentioning of the endpoints using --login-url --redeem-url --oidc-jwks-url.
--oidc-email-claim=oid replaces the deprecated option --user-id-claim=oid mentioned by Mark Rabjohn.
The scope is also needed, just as Mark explains.
Azure AD B2C settings
These are the steps summarized that are necessary to perform in Azure AD B2C portal:
In the user flow, go to "application claims" and enable "User's Object ID". This is required to make the --oidc-email-claim=oid setting work
In the app registration, under "API permissions", create a new permission with the name read. The URL for this permission is the value you need to fill in for --scope="...".

Error while starting peers and orderer with the certificates generated using Root and Intermediate CA

I am trying to implement a root CA and intermediate CAs for my network setup. I have already created the root CA and intermediate CA, and have registered and enrolled all the members of the organisation, i.e. orderer, peer, admin, and users.
The CA logs show everything working properly, but when I try to start the Docker containers for the peers and orderers they do not come up, and the orderer and peer logs show:
certificate has expired or is not yet valid
Can anyone help me with this? I have tried a few times, but every time I get this same error.
Okay, I have found the solution here:
https://jira.hyperledger.org/browse/FABC-832
The starting validity period (Not Before) of the peer certificates is approximately 5 minutes earlier than that of the intermediate CA. This is because Fabric CA, by default, backdates the signing of certificates by 5 minutes. So I set the backdate to 1 second in the fabric-ca-config.yaml file:
signing:
  default:
    usage:
      - digital signature
    expiry: 8760h
    backdate: 1s
  profiles:
    ca:
      backdate: 1s
      usage:
        - cert sign
        - crl sign
      expiry: 43800h
      caconstraint:
        isca: true
        maxpathlen: 0
    tls:
      backdate: 1s
      usage:
        - signing
        - key encipherment
        - server auth
        - client auth
        - key agreement
      expiry: 8760h
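A quick sketch of the timing problem described above (timestamps are illustrative, not taken from Fabric): if the intermediate CA's own certificate becomes valid at time T and a peer enrolls 30 seconds later, the default 5-minute backdate puts the peer certificate's Not Before *before* the CA's, so chain validation rejects it; a 1-second backdate does not.

```python
from datetime import datetime, timedelta

def not_before(issued_at: datetime, backdate: timedelta) -> datetime:
    """Fabric CA sets Not Before to the issue time minus the backdate."""
    return issued_at - backdate

t0 = datetime(2024, 1, 1, 12, 0, 0)            # intermediate CA valid from T
ca_not_before = t0
peer_issued_at = t0 + timedelta(seconds=30)     # peer enrolled 30s after T

peer_nb_default = not_before(peer_issued_at, timedelta(minutes=5))   # T - 4m30s
peer_nb_fixed   = not_before(peer_issued_at, timedelta(seconds=1))   # T + 29s

print(peer_nb_default < ca_not_before)  # True  -> peer cert predates its CA, chain rejected
print(peer_nb_fixed   < ca_not_before)  # False -> chain validates
```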
It seems more likely to be a problem with generating the certificates than with expiry: either they haven't been generated yet, or they were put in the wrong place.
First, try completely restarting Docker.
Then, add some buffer time after key generation. You can add sleep 300 after cryptogen and configtxgen have been called, but before the actual Docker containers get spun up.
If that doesn't help, trace where the certificates are and where you are trying to read them from.

Resources