I'm new to Kubernetes, and I've been learning about Ingress. I'm quite impressed by the idea of handling TLS certificates and authentication at the point of ingress. I've added a simple static file server and cert-manager, so I basically have an HTTPS static website.
I read that the NGINX Ingress Controller can be used with oauth2-proxy to handle authentication at the ingress. The problem is that I can't get this working at all. I can confirm that my oauth2-proxy Deployment and Service are present and correct: in the Pod's log I can see the requests coming through from NGINX, but I can't see what URI it is actually calling at Azure B2C. Whenever I try to access my service I get a 500 Internal Server Error. If I put my /oauth2/auth address in the browser, I get "The scope 'openid' specified in the request is not supported.". However, if I test-run the user flow in Azure, the test URL also specifies "openid" and it functions as expected.
I think that I could work through this if I could find out how to monitor what oauth2-proxy requests from Azure (i.e. work out where my config is wrong by observing its URIs). Otherwise, maybe somebody who has done this can tell me where I went wrong in the config.
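(One way to get that visibility: oauth2-proxy has logging flags, the same ones used in the answers further down, that surface each request and auth decision in the Pod log. Added to the container args, they look like this:)
- --standard-logging=true
- --request-logging=true
- --auth-logging=true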
My config is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - -provider=oidc
        - -email-domain=*
        - -upstream=file:///dev/null
        - -http-address=0.0.0.0:4180
        - -redirect-url=https://jwt.ms/
        - -oidc-issuer-url=https://<tenant>.b2clogin.com/tfp/<app-guid>/b2c_1_manager_signup/
        - -cookie-secure=true
        - -scope="openid"
        # Register a new application
        # https://github.com/settings/applications/new
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: <app-guid>
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: <key-base64>
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: <random+base64>
        image: quay.io/pusher/oauth2_proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static1-oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: cloud.<mydomain>
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 4180
        path: /oauth2
  tls:
  - hosts:
    - cloud.<mydomain>
    secretName: cloud-demo-crt
In my static site ingress I have the following added to metadata.annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
I'm not 100% sure whether these annotations should always be set like this, or whether I should have varied them for B2C/OIDC, but requests do seem to go off to the proxy; it's what the proxy does next that fails.
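For reference, here is roughly how those annotations sit on the static site's own Ingress; the backend service name and port are my assumptions, everything else mirrors the config above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static1
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
spec:
  rules:
  - host: cloud.<mydomain>
    http:
      paths:
      - backend:
          serviceName: static1  # assumption: the static file server's Service
          servicePort: 80       # assumption
        path: /
  tls:
  - hosts:
    - cloud.<mydomain>
    secretName: cloud-demo-crt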
Note that the log does indicate that oauth2-proxy connected to B2C; indeed, if the issuer URI changes, it goes into a crash-loop backoff.
There seem to be a number of articles about how to set this up, so I'm sure it's possible, but I got a little lost. If somebody could help with the setup or offer ideas for debugging, that would be wonderful.
Thanks.
Now I'm able to reliably get ?state= and code= parameters to display in the browser window on the /oauth2/callback page, but the page reports an Internal Error. oauth2_proxy is now logging when it should, and the log says:
[2020/06/03 21:18:07] [oauthproxy.go:803] Error redeeming code during OAuth2 callback: token exchange: oauth2: server response missing access_token
My Azure B2C audit log, however, says that it is issuing id_tokens.
When I look at the source code of oauth2_proxy, it looks as though the problem occurs during oauth2.config.Exchange(), which is in the golang library. I don't know exactly what that does, but I don't think it works properly with Azure B2C. Does anybody have an idea how I can progress from here?
Thanks.
Mark
I resorted to compiling and debugging the proxy app in VSCode. I ran a simple NGINX proxy to provide TLS termination so that the Azure B2C side would function. It turns out that I had got a lot of things wrong. Here is a list of problems that I resolved, in the hope that somebody else might be able to use this to run their own oauth2_proxy with Azure B2C.
When attached to a debugger, it is clear that oauth2_proxy reads the token response and expects to find, in turn, an access_token, then an id_token; it then requires (by default) the "email" claim.
To get an "access_token" to return, you have to request access to some resource. Initially I didn't have this. In my yaml file I had:
- --scope=openid
Note: do not put quotation marks around your scope value in YAML, because they are treated as part of the requested scope value!
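To illustrate the quoting pitfall:
# wrong: the quotes are inside the YAML scalar, so Azure receives the scope "openid" including the literal quotes
- --scope="openid"
# right: Azure receives the scope openid
- --scope=openid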
I had to set up a "read" scope in Azure B2C via "App Registrations" and "Expose an API". My final scope that worked was of the form:
- --scope=https://<myspacename>.onmicrosoft.com/<myapiname>/read openid
You have to make sure that both scopes (read and openid) go through together, otherwise you don't get an id_token. If you get an error saying that there is no id_token in the server response, make sure that both values are going through in a single use of the --scope flag.
Once you have an access_token and id_token, oauth2_proxy fails because there is no "email" claim. Azure B2C has an "emails" claim, but I don't think that can be used. To get around this, I used the object ID instead; I set:
- --user-id-claim=oid
The last problem I had was that no cookies were being set in the browser. I did see an error in the oauth2-proxy output that the cookie value itself was too long; I removed the "offline_access" scope and that message went away. There were still no cookies in the browser, however.
My NGINX ingress log did have a message that the headers were larger than 8K, and NGINX was reporting a 503 error because of this.
In the oauth2-proxy documentation there is a note that a Redis store should be used if your cookie is long; it specifically identifies Azure AD cookies as being long enough to warrant a Redis solution.
I installed a single-node Redis to test (unhardened) using a YAML config from this answer: https://stackoverflow.com/a/53052122/2048821. The --session-store-type=redis and --redis-connection-url options must be used.
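For reference, a minimal single-node Redis along those lines might look like this (an unhardened sketch for testing only; the Service name redis matches the redis://redis:6379 connection URL below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5-alpine  # assumption: any recent Redis image should do for session storage
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis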
The final Deployment/Service for my oauth2_proxy looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --email-domain=*
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        - --redirect-url=https://<myhost>/oauth2/callback
        - --oidc-issuer-url=https://<mynamespace>.b2clogin.com/tfp/<my-tenant>/b2c_1_signin/v2.0/
        - --cookie-secure=true
        - --cookie-domain=<myhost>
        - --cookie-secret=<mycookiesecret>
        - --user-id-claim=oid
        - --scope=https://<mynamespace>.onmicrosoft.com/<myappname>/read openid
        - --reverse-proxy=true
        - --skip-provider-button=true
        - --client-id=<myappid>
        - --client-secret=<myclientsecret>
        - --session-store-type=redis
        - --redis-connection-url=redis://redis:6379
        # Register a new application
        image: quay.io/pusher/oauth2_proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
Hope that this saves somebody a lot of time.
Mark
I tried to follow Mark Rabjohn's answer, but was getting errors like:
oidc: issuer did not match the issuer returned by provider, expected
"https://your-tenant-name.b2clogin.com/tfp/c5b28ff6-f360-405b-85d0-8a87b5783d3b/B2C_1A_signin/v2.0/"
got
"https://your-tenant-name.b2clogin.com/c5b28ff6-f360-405b-85d0-8a87b5783d3b/v2.0/"
(no policy name in the URL)
It is a known issue (https://security.stackexchange.com/questions/212724/oidc-should-the-provider-have-the-same-address-as-the-issuer):
I'm aware that a few of the mainstream providers such as Microsoft don't strictly follow this pattern, but you'll have to take it up with them, or consider the workarounds given by the OIDC library.
Fortunately oauth2-proxy supports the --skip-oidc-discovery parameter:
bypass OIDC endpoint discovery. --login-url, --redeem-url and --oidc-jwks-url must be configured in this case.
An example of the parameters is the following:
- --skip-oidc-discovery=true
- --login-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/oauth2/v2.0/authorize
- --redeem-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/oauth2/v2.0/token
- --oidc-jwks-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/discovery/v2.0/keys
To create the scope I had to set up an Application ID URI in Azure B2C via "App Registrations" and "Expose an API" (e.g. see https://learn.microsoft.com/en-us/azure/active-directory-b2c/tutorial-web-api-dotnet?tabs=app-reg-ga#configure-scopes).
I also had to grant admin permissions as described in https://learn.microsoft.com/en-us/azure/active-directory-b2c/add-web-api-application?tabs=app-reg-ga#grant-permissions (see also "Application does not have sufficient permissions against this web resource to perform the operation in Azure AD B2C"):
- --scope=https://<mynamespace>.onmicrosoft.com/<myappname>/<scopeName> openid
You should also specify:
- --oidc-issuer-url=https://<mynamespace>.b2clogin.com/<TenantID>/v2.0/
Using the Directory/Tenant ID in oidc-issuer-url satisfies validation in the callback/redeem stage, and you don't need to set insecure_oidc_skip_issuer_verification=true.
Also note that redirect-url=https://<myhost>/oauth2/callback should be registered in AAD B2C as an application Redirect URI (in the application's Overview pane, navigate to the Redirect URIs link).
I am not clear about debugging, but from your issue it looks like you are not passing a header parameter.
On the static site ingress, please add this annotation as well and try:
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-Access-Token, Authorization
Or this one:
nginx.ingress.kubernetes.io/auth-response-headers: Authorization
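For context, a sketch of how those header annotations combine with the auth annotations from the question on the protected app's ingress (values taken from elsewhere in this thread):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: Authorization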
Based on the information from Mark Rabjohn and Michael Freidgeim, I also got (after hours of trying) a working integration with Azure AD B2C. Here is a configuration that reproduces a working setup, using docker-compose for testing it out locally:
Local setup
version: "3.7"
services:
oauth2proxy:
image: quay.io/oauth2-proxy/oauth2-proxy:latest
command: >
--provider=oidc
--email-domain=*
--upstream=http://web
--http-address=0.0.0.0:9000
--redirect-url=http://localhost:9000
--reverse-proxy=true
--skip-provider-button=true
--session-store-type=redis
--redis-connection-url=redis://redis:6379
--oidc-email-claim=oid
--scope="https://<mynamepsace>.onmicrosoft.com/<app registration uuid>/read openid"
--insecure-oidc-skip-issuer-verification=true
--oidc-issuer-url=https://<mynamespace>.b2clogin.com/<mynamepsace>.onmicrosoft.com/<policy>/v2.0/
environment:
OAUTH2_PROXY_CLIENT_ID: "<app registration client id>"
OAUTH2_PROXY_CLIENT_SECRET: "<app registration client secret>"
OAUTH2_PROXY_COOKIE_SECRET: "<secret follow oauth2-proxy docs to create one>"
ports:
- "9000:9000"
links:
- web
web:
image: kennethreitz/httpbin
ports:
- "8000:80"
redis:
image: redis:latest
The important bits here are these options:
--oidc-email-claim=oid
--scope="https://<mynamespace>.onmicrosoft.com/<app registration uuid>/read openid"
--insecure-oidc-skip-issuer-verification=true
--oidc-issuer-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/<policy>/v2.0/
Using --insecure-oidc-skip-issuer-verification=true allows you to skip explicitly specifying the endpoints with --login-url, --redeem-url and --oidc-jwks-url.
--oidc-email-claim=oid replaces the deprecated option --user-id-claim=oid mentioned by Mark Rabjohn.
The scope is also needed, just as Mark explains.
Azure AD B2C settings
Summarized, these are the steps necessary in the Azure AD B2C portal:
In the user flow, go to "application claims" and enable "User's Object ID". This is required to make the --oidc-email-claim=oid setting work.
In the app registration, under "API permissions", create a new permission with the name read. The URL of this permission is the value that you need to fill in for --scope="...".
Is there a way to disable impersonation in Kubernetes for all admin/non-admin users?
kubectl get pod --as user1
The above command should not return an answer, due to security concerns.
Thank you in advance.
Unless all your users are already admins, they should not be able to impersonate users. As cluster-admin you can do "anything", and the pre-installed roles/rolebindings should not be edited under normal circumstances.
The necessary Role to enable impersonation is:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonator
rules:
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]
As long as normal users don't have those permissions, they will not be allowed to use --as.
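To illustrate what to look for: impersonation only becomes possible for a user once a binding grants them such a role. A hypothetical example of what should not exist for regular users:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user1-impersonator  # hypothetical name
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: impersonator
  apiGroup: rbac.authorization.k8s.io
A user can check their own rights with kubectl auth can-i impersonate users.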
I'm getting this error in a wildcard certificate challenge:
Error presenting challenge: Found no Zones for domain _acme-challenge.my-domain.com. (neither in the sub-domain nor in the SLD) please make sure your domain-entries in the config are correct and the API is correctly setup with Zone.read rights.
I'm using Cloudflare as the DNS01 Challenge Provider and have set up the API token with the permissions described in the cert-manager documentation.
My cluster issuer looks like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-issuer
spec:
  acme:
    email: <email>
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: test-issuer-private-key
    solvers:
    - dns01:
        cloudflare:
          email: <email>
          apiTokenSecretRef:
            name: issuer-access-token
            key: api-token
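The API token itself lives in the Secret referenced above; roughly like this (name and key match the apiTokenSecretRef; note that for a ClusterIssuer, cert-manager resolves the Secret in its own "cluster resource" namespace, typically cert-manager):
apiVersion: v1
kind: Secret
metadata:
  name: issuer-access-token
  namespace: cert-manager  # assumption: cert-manager's cluster resource namespace
type: Opaque
stringData:
  api-token: <cloudflare-api-token>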
And my certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-wildcard
spec:
  secretName: test-wildcard-tls
  issuerRef:
    name: test-issuer
    kind: ClusterIssuer
  dnsNames:
  - "*.my-domain.com"
I have a CNAME record with a '*' name that points to my domain, and an A record that points to my Kubernetes cluster IP.
Am I missing something? How do you correctly set up cert-manager to automatically manage a wildcard domain with Cloudflare as the DNS01 challenge provider?
I've run into this issue as well, and I realized that I had made two different errors in my configuration.
#1: I had overlooked that the API token that you generate must have all of the following permissions and zone resources associated with it:
Permissions
Zone.Zone.Read
Zone.Zone.Edit
Zone Resources
Include.All zones
This is in the docs but clearly I wasn't reading correctly.
#2: I wasn't able to make it work with the dnsNames attribute in the Certificate resource, but rather needed to use dnsZones instead. In your example, try changing from:
dnsNames:
- "*.my-domain.com"
to:
dnsZones:
- "my-domain.com"
According to the docs (emphasis mine):
Note: dnsNames take an exact match and do not resolve wildcards, meaning the following Issuer will not solve for DNS names such as foo.example.com. Use the dnsZones selector type to match all subdomains within a zone.
This should generate a certificate with a CN of *.my-domain.com and with both *.my-domain.com and my-domain.com in the subjectAltName field.
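Assembled, the Certificate from the question would then read as follows (a sketch following this answer's dnsZones suggestion; field support can vary across cert-manager versions, so double-check against your release's docs):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-wildcard
spec:
  secretName: test-wildcard-tls
  issuerRef:
    name: test-issuer
    kind: ClusterIssuer
  dnsZones:
  - "my-domain.com"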
I am using a Kubernetes deployment model inside Azure, with an OAuth2 proxy (https://github.com/oauth2-proxy/oauth2-proxy) protecting the cluster resources by enabling SSO login through various clients. That is fine from the end user's perspective, as they can easily log in with their SSO client. The problem appears when the APIs exposed by the services behind the OAuth2 proxy need to be consumed by external applications via REST calls.
The configuration is the default one, having a dedicated Kubernetes service for OAuth2 and the following rules inside the Ingress file.
nginx.ingress.kubernetes.io/auth-signin: 'https://$host/oauth2/start?rd=$request_uri'
nginx.ingress.kubernetes.io/auth-url: 'https://$host/oauth2/auth'
From what I checked, those services can currently only be consumed via REST calls from an external application (Postman, for example) if I add a cookie parameter (_oauth2_proxy) which is generated after a successful login using the UI and the SSO client provider. If I do not add this cookie parameter, I get an error such as "cookie _oauth2_proxy is not present".
Is there any option I can add to the proxy configuration to permit token-based authentication and authorization, or programmatic generation of some identifier, so that a technical user (external application) can access the resources behind the OAuth2 proxy through REST calls? I can generate an access token based on the existing configuration (client ID, secret, application scope, etc.).
The OAuth2 proxy deployment YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - env:
        - name: OAUTH2_PROXY_PROVIDER
          value: azure
        - name: OAUTH2_PROXY_AZURE_TENANT
          value: <REPLACE_WITH_DIRECTORY_ID>
        - name: OAUTH2_PROXY_CLIENT_ID
          value: <REPLACE_WITH_APPLICATION_ID>
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: <REPLACE_WITH_SECRET_KEY>
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: <REPLACE_WITH_VALUE_OF python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'>
        - name: OAUTH2_PROXY_HTTP_ADDRESS
          value: "0.0.0.0:4180"
        - name: OAUTH2_PROXY_UPSTREAM
          value: "<AZURE KUBERNETES CLUSTER HOST e.g. >"
        image: bitnami/oauth2-proxy:latest
        imagePullPolicy: IfNotPresent
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
EDIT:
I was finally able to use the token generated through the AD OAuth2 token endpoint to call my APIs behind the proxy. To achieve that, I changed the Docker image from machinedata/oauth2_proxy to bitnami/oauth2-proxy.
Besides that, I added the following arguments to the container:
args:
- '--provider=azure'
- '--azure-tenant=TENANT_ID'
- '--skip-jwt-bearer-tokens=true'
- '--oidc-issuer-url=https://sts.windows.net/TENANT_ID/'
- '--extra-jwt-issuers=https://login.microsoftonline.com/TENANT_ID/v2.0=APP_ID'
- '--request-logging=true'
- '--auth-logging=true'
- '--standard-logging=true'
Also, I had to make some changes to the app registration manifest in Azure AD so that the token would be validated by the OAuth2 proxy against the correct version:
"accessTokenAcceptedVersion": 2
I found some useful explanations here as well: https://github.com/oauth2-proxy/oauth2-proxy/issues/502.
Now I can use the token endpoint provided by Azure to generate a Bearer token for my API calls. The only remaining issue is an error I get when I try to access the application UI.
The warning in the pod logs is:
WARNING: Multiple cookies are required for this session as it exceeds the 4kb cookie limit. Please use server side session storage (eg. Redis) instead.
The error received in the browser is 502 Bad Gateway.
EDIT #2:
I was able to get past this new error by increasing the buffer size at the ingress level for the OAuth2 proxy. More details can be found at the URLs below:
https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider/#azure-auth-provider
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
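Concretely, the buffer-size annotations (they also appear in the full ingress.yaml below) were:
nginx.ingress.kubernetes.io/proxy-buffer-size: 8k
nginx.ingress.kubernetes.io/proxy-buffers-number: '4'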
I finally made it work using the following configuration:
deployment.yaml for OAuth2 proxy:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - name: oauth2-proxy
          image: 'bitnami/oauth2-proxy:latest'
          args:
            - '--provider=azure'
            - '--azure-tenant=TENANT_ID'
            - '--skip-jwt-bearer-tokens=true'
            - '--oidc-issuer-url=https://sts.windows.net/TENANT_ID/'
            - '--extra-jwt-issuers=https://login.microsoftonline.com/TENANT_ID/v2.0=CLIENT_ID'
            - '--request-logging=true'
            - '--auth-logging=true'
            - '--standard-logging=true'
          ports:
            - containerPort: 4180
              protocol: TCP
          env:
            - name: OAUTH2_PROXY_AZURE_TENANT
              value: TENANT_ID
            - name: OAUTH2_PROXY_CLIENT_ID
              value: CLIENT_ID
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: CLIENT_SECRET
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: COOKIE_SECRET
            - name: OAUTH2_PROXY_HTTP_ADDRESS
              value: '0.0.0.0:4180'
            - name: OAUTH2_PROXY_UPSTREAM
              value: 'http://your-host'
            - name: OAUTH2_PROXY_EMAIL_DOMAINS
              value: '*'
ingress.yaml for OAuth2 proxy:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: oauth2-proxy
  namespace: default
  labels:
    app: oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    # in my case the generated cookie was too big, so I had to add the two parameters below
    nginx.ingress.kubernetes.io/proxy-buffer-size: 8k
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
spec:
  tls:
    - hosts:
        - YOUR_HOST
  rules:
    - host: YOUR_HOST
      http:
        paths:
          - path: /oauth2
            backend:
              serviceName: oauth2-proxy
              servicePort: 4180
In addition to those configuration files, I also had to change the value of accessTokenAcceptedVersion in the Azure application registration manifest. By default this value is set to null, which means it will go for V1 tokens instead of the V2 specified in the extra-jwt-issuers argument.
"accessTokenAcceptedVersion": 2
After those changes were in place, I was able to use the token generated through the Azure token endpoint to go through the OAuth2 proxy and reach my application's exposed APIs:
HTTP POST to https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded
Body:
- client_id: YOUR_CLIENT_ID
- grant_type: client_credentials
- client_secret: YOUR_CLIENT_SECRET
- scope: api://YOUR_CLIENT_ID/.default (this was generated by me, but it should work with MS Graph as well)
I am using a Kubernetes deployment model inside Azure, with an OAuth2 proxy protecting the cluster resources by enabling SSO login.
I have the OAuth service running successfully, and I also have the application ingress and the OAuth ingress deployed. But when I access the application URL, I get a 500 Internal Error; if I access the OAuth ingress URL, I get the login window. I can provide details of the OAuth deployment YAML and the application/OAuth ingress YAML.
I'm trying to create an endpoint at the gateway that will make multiple service calls and combine them into one response. Is that possible with express-gateway?
This is my gateway.config.yml.
http:
  port: 8080
admin:
  port: 9876
  host: localhost
apiEndpoints:
  api:
    host: localhost
    paths: '/ip'
  uuid:
    host: localhost
    paths: '/uuid'
  agent:
    host: localhost
    paths: '/user-agent'
serviceEndpoints:
  httpbin:
    url: 'https://httpbin.org'
  uuid:
    url: 'https://httpbin.org'
  agent:
    url: 'https://httpbin.org'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
  default-1:
    apiEndpoints:
      - uuid
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: uuid
              changeOrigin: true
  default-2:
    apiEndpoints:
      - agent
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: agent
              changeOrigin: true
Basically, I want to combine all of the declared serviceEndpoints under one path. Let's say I trigger /ip: it would call the 'api, uuid, agent' serviceEndpoints and combine them all into one response. Is that possible?
Express Gateway does not really support such a scenario, unfortunately. You're going to have to write your own plugin, and it is not going to be so easy.
There are multiple approaches suggested in this regard, which you can weigh based on your use case, tech stack, team and tech skills, and other variables:
Write an aggregation service that calls the other underlying services.
Let the gateway perform the service aggregation, data formatting, etc. The important aspect here is to ensure that you don't break the domain boundaries. It's highly likely/tempting to inject some domain logic at the gateway along with the aggregation, so be very careful about it.
Have a look at the GraphQL libraries out there; GraphQL is good to expose on top of your standard REST APIs, and lets you define the schema and the resolvers.
I am working with cert-manager and kong-ingress-controller to enable HTTPS in Kubernetes.
I am interested in figuring out how the renewal process works when I am just using a ClusterIssuer and the certificate that is generated by default when we use the ingress resource.
I am not using the kind: Certificate resource; this means that I am not defining a custom X.509 certificate to be signed and validated through a reference to my ClusterIssuer.
At the moment I've created a ClusterIssuer and one ingress resource, which automatically creates a certificate named letsencrypt-prod that will be used to perform the http01 validation between cert-manager and the Let's Encrypt CA.
Finally, I have this output:
I0321 10:49:48.505664 1 controller.go:162] certificates controller: syncing item 'default/letsencrypt-prod'
I0321 10:49:48.506008 1 conditions.go:143] Found status change for Certificate "letsencrypt-prod" condition "Ready": "False" -> "True"; setting lastTransitionTime to 2019-03-21 10:49:48.506003434 +0000 UTC m=+168443.026129945
I0321 10:49:48.506571 1 sync.go:263] Certificate default/letsencrypt-prod scheduled for renewal in 1438h59m58.49343646s
I0321 13:57:46.226424 1 controller.go:168] certificates controller: Finished processing work item "default/letsencrypt-prod"
I0321 15:12:53.199067 1 controller.go:178] ingress-shim controller: syncing item 'default/kong-ingress-service'
I0321 15:12:53.199171 1 sync.go:183] Certificate "letsencrypt-prod" for ingress "kong-ingress-service" is up to date
This means that my certificate is scheduled for renewal in 1438h59m58s, i.e. roughly 60 days, which matches a 90-day certificate being renewed 30 days before it expires.
Does this mean it will really be renewed automatically, as indicated here:
The default duration for all certificates is 90 days and the default renewal windows is 30 days. This means that certificates are considered valid for 3 months and renewal will be attempted within 1 month of expiration.
The cert-manager documentation says:
Although the duration and renewal periods are specified on the Certificate resources, the corresponding Issuer or ClusterIssuer must support this.
My Cluster Issuer is:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
How can I manage the duration and renewBefore parameters if I am not creating a Certificate resource?
According to this, can I add the duration and renewBefore parameters to my ClusterIssuer? Maybe this way?
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
  # ...
  duration: 24h
  renewBefore: 12h
This is not supported on issuers/cluster issuers, only on certificates. You can create an admission controller to mutate certificates, or you can have a cronjob that updates Certificate resources after they are created.
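For illustration, on a Certificate resource those fields sit at the top level of spec; a sketch using the same v1alpha1 API as the question (names are placeholders):
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-cert
spec:
  secretName: example-cert-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  duration: 2160h    # 90 days
  renewBefore: 720h  # 30 days
  dnsNames:
  - example.com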