NGINX Ingress external OAuth with Azure Active Directory

I want to use Azure Active Directory as an external OAuth2 provider to protect my services at the ingress level. In the past, I used basic auth and everything worked as expected. But NGINX provides the external OAuth method, which sounds much more comfortable!
For that, I created a service principal (SP):
$ az ad sp create-for-rbac --skip-assignment --name test -o table
AppId DisplayName Name Password Tenant
<AZURE_CLIENT_ID> test http://test <AZURE_CLIENT_SECRET> <TENANT_ID>
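One AAD detail that's easy to miss at this step: the app registration must have https://<your-host>/oauth2/callback listed as a reply/redirect URL, because that's the callback path oauth2_proxy uses. A sketch with the az CLI, where <your-host> is a placeholder for your ingress host (older az versions take --reply-urls; newer ones use --web-redirect-uris):
$ az ad app update --id <AZURE_CLIENT_ID> --reply-urls "https://<your-host>/oauth2/callback"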
My ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    # nginx.ingress.kubernetes.io/auth-type: basic
    # nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
And the external OAuth deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=azure
        - --email-domain=microsoft.com
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        - --azure-tenant=$AZURE_TENANT_ID
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: $API_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: $API_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: $API_COOKIE_SECRET
          # created by: docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))))'
        image: docker.io/colemickens/oauth2_proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
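As an aside, instead of templating the client and cookie secrets straight into the manifest as plain env values, you can keep them in a Kubernetes Secret and reference them with secretKeyRef, as the working answer further down this page does. A sketch of creating such a secret, assuming the kube-system namespace used above and the same placeholder variables:
$ kubectl create secret generic oauth2-proxy-secret \
    --namespace kube-system \
    --from-literal=OAUTH2_PROXY_CLIENT_ID=$API_CLIENT_ID \
    --from-literal=OAUTH2_PROXY_CLIENT_SECRET=$API_CLIENT_SECRET \
    --from-literal=OAUTH2_PROXY_COOKIE_SECRET=$(python3 -c 'import secrets,base64; print(base64.urlsafe_b64encode(secrets.token_bytes(16)).decode())')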
It looks like something went wrong, but I have no idea what I missed.
When I try to open the page, it loads for up to a minute and then ends in a '500 Internal Server Error'.
The logs of the ingress controller show an infinite loop of the following:
10.244.2.1 - - [16/Jan/2020:15:32:30 +0000] "GET /oauth2/auth HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/xxxxx Safari/xxxx" 727 0.003 [upstream-default-backend] [] - - - - <AZURE_CLIENT_ID>

So you need another ingress for the OAuth deployment as well. Here's what my setup looks like:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress-oauth
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: xxx
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 80
This way, the second ingress redirects to the first; the first does the auth and redirects back.
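To confirm the /oauth2 path is actually reachable through the first ingress, you can hit the auth endpoint directly; a quick check, with xxx standing in for your host (oauth2-proxy answers 401 when there is no valid session cookie, 202 when there is one):
$ curl -s -o /dev/null -w '%{http_code}\n' https://xxx/oauth2/auth
401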

Related

Shiny proxy on AKS behind an Azure Application Gateway

I've been using ShinyProxy on AKS for the past couple of months and it's been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to use it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments happen with no problems whatsoever, but upon trying to access the application I always get a "404 Not Found", and the health probes always return "no results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
      - name: {{lvappname}}-proxy
        image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - name: kube-proxy-sidecar
        image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
  - host: {{lvappname}}-{{lvstage}}.{{domain}}
    http:
      paths:
      - path: /
        backend:
          service:
            name: {{lvappname}}-proxy
            port:
              number: 8080
        pathType: Prefix
  tls:
  - hosts:
    - {{lvappname}}-{{lvstage}}.{{domain}}
    secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
and here is my shinyproxy configuration file
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
  - id: {{lvappname}}
    display-name: {{lvappname}} application
    description: Application for {{lvappname}}
    container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
    container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any help would be deeply appreciated.
Thank you all in advance.
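Two mismatches in the manifests above are worth ruling out before blaming the gateway: the Service selects app: {{lvappname}}-proxy / tier: frontend, while the pod template is labeled run: {{lvappname}}-proxy, so the Service may match no pods at all (which would explain the empty health-probe results); and the Ingress backend points at port 8080, while the Service only exposes port 80. A corrected Service sketch, keeping everything else as-is:
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
spec:
  selector:
    run: {{lvappname}}-proxy  # must match the pod template's labels
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Correspondingly, the Ingress backend would reference the Service's port (number: 80), not the container port.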

Regex based routing setup for ingress in AKS

I have a K8s cluster in Azure in which I want to host multiple web applications with a single host. Each application has its own service and deployment. How can I achieve something like the following routes?
MyApp.com
Partner1.MyApp.com
Partner2.MyApp.com
Here is what my YAML file currently looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6666 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner1-myapp
spec:
  selector:
    matchLabels:
      app: partner1-myapp
  template:
    metadata:
      labels:
        app: partner1-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner1-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6669 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner1-myapp
spec:
  selector:
    app: partner1-myapp
  ports:
  - protocol: TCP
    port: 6669
    targetPort: 6669
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner2-myapp
spec:
  selector:
    matchLabels:
      app: partner2-myapp
  template:
    metadata:
      labels:
        app: partner2-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner2-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6672 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner2-myapp
spec:
  selector:
    app: partner2-myapp
  ports:
  - protocol: TCP
    port: 6672
    targetPort: 6672
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
---
What can I do to get the above routing?
Please run your two applications using kubectl:
kubectl apply -f Partner1-MyApp.yaml --namespace ingress-basic
kubectl apply -f Partner2-MyApp.yaml --namespace ingress-basic
To set up the routing in your YAML file: if both applications are running in the Kubernetes cluster, traffic for each application is routed via EXTERNAL_IP/static. Create a file and add the YAML code below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
For more detail, please refer to this link:
Create an ingress controller in Azure Kubernetes Service (AKS)
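Since the question actually asks for subdomain-style routes (MyApp.com, Partner1.MyApp.com, Partner2.MyApp.com), host-based rules are likely a better fit than three identical regex paths; a minimal sketch, assuming a DNS record for each host points at the NGINX ingress controller's external IP:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing-hosts
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
  - host: partner1.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
  - host: partner2.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672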

How to authenticate against AAD (Azure Active Directory) with oauth2_proxy and obtain Access Token

I'm trying to authenticate against AAD (Azure Active Directory) with oauth2_proxy in Kubernetes and obtain an access token.
First of all, I'm struggling to get the correct authentication flow to work.
Second, after being redirected to my application, the access token is not in the request headers specified in the oauth2_proxy documentation.
Here is some input on authenticating against Azure Active Directory (AAD) using oauth2_proxy in Kubernetes.
First, you need to create an application in AAD and grant it the email, profile, and User.Read permissions on Microsoft Graph.
The default behavior of the authentication flow is that, after logging in against the Microsoft authentication server, you are redirected to the root of the website with an authentication code (e.g. https://exampler.com/). You would expect the access token to be visible there; this is a faulty assumption. The URL the access token is injected into is https://exampler.com/oauth2.
A successful oauth2_proxy configuration that worked is below.
oauth2-proxy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --azure-tenant=88888888-aaaa-bbbb-cccc-121212121212
        - --email-domain=example.com
        - --http-address=0.0.0.0:4180
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --pass-access-token=true
        - --pass-authorization-header=true
        - --pass-user-headers=true
        - --pass-host-header=true
        - --skip-jwt-bearer-tokens=true
        - --oidc-issuer-url=https://login.microsoftonline.com/88888888-aaaa-bbbb-cccc-121212121212/v2.0
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_COOKIE_SECRET
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "1"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-Email,X-Auth-Request-Preferred-Username"
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: oa2p
            port:
              number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p-proxy
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: "1"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
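To see which identity headers actually reach the upstream with --set-xauthrequest and the auth-response-headers annotation above, it can help to temporarily back the oa2p Service with a simple echo server that prints the request it receives; a sketch, assuming the jmalloc/echo-server image (used elsewhere on this page), which listens on port 8080 by default:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oa2p
  namespace: oa2p
  labels:
    k8s-app: oa2p
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oa2p
  template:
    metadata:
      labels:
        k8s-app: oa2p
    spec:
      containers:
      - name: echo
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080 # echoes the request back, headers included
---
apiVersion: v1
kind: Service
metadata:
  name: oa2p
  namespace: oa2p
spec:
  selector:
    k8s-app: oa2p
  ports:
  - port: 8080
    targetPort: 8080
After logging in, the echoed request should show the X-Auth-Request-Email and X-Auth-Request-Preferred-Username headers injected by the ingress.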

Kubernetes ingress for Teamcity blank page

I have a problem with an Ingress. I'm using HAProxy, but after applying the YAML file(s) I'm not able to open the TeamCity site in a web browser; I get a blank page. If I use curl, it shows nothing.
A test echo server (image: jmalloc/echo-server) works just fine.
Of course, kubernetes.local is added to the hosts file so the DNS name resolves.
My config yaml files:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: teamcity
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: teamcity
    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        effect: NoExecute
        tolerationSeconds: 10
      - key: node.kubernetes.io/unreachable
        effect: NoExecute
        tolerationSeconds: 10
      containers:
      - image: jetbrains/teamcity-server
        imagePullPolicy: Always
        name: teamcity
        ports:
        - containerPort: 8111
        volumeMounts:
        - name: teamcity-pvc-data
          mountPath: "/data/teamcity_server/datadir"
        - name: teamcity-pvc-logs
          mountPath: "/opt/teamcity/logs"
      volumes:
      - name: teamcity-pvc-data
        persistentVolumeClaim:
          claimName: teamcity-pvc-data
      - name: teamcity-pvc-logs
        persistentVolumeClaim:
          claimName: teamcity-pvc-logs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
  annotations:
    haproxy.org/check: "true"
    haproxy.org/forwarded-for: "true"
    haproxy.org/load-balance: "roundrobin"
spec:
  selector:
    run: teamcity
  ports:
  - name: port-tc
    port: 8111
    protocol: TCP
    targetPort: 8111
  externalIPs:
  - 192.168.22.152
  - 192.168.22.153
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity
  namespace: default
spec:
  rules:
  - host: kubernetes.local
    http:
      paths:
      - path: /teamcity
        pathType: Prefix
        backend:
          service:
            name: teamcity
            port:
              number: 8111
I would be grateful for any hint; I've been struggling with this for hours. The connection to http://192.168.22.152:8111 works fine too; just the Ingress is having trouble.
Using a subdomain fixes the problem: teamcity.kubernetes.local works, while kubernetes.local/teamcity doesn't.
Solution:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity-ingress
  namespace: default
spec:
  rules:
  - host: teamcity.kubernetes.local
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: teamcity
            port:
              number: 8111
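A plausible explanation for why the subdomain works while the /teamcity path doesn't: the ingress forwards the request with the /teamcity prefix intact, and the TeamCity container serves from the root context, so it has no /teamcity route and its absolute links break. You can check from inside the cluster that the container only answers at the root path; a quick sketch, using the service name and port from the manifests above:
$ kubectl run curltest --rm -it --image=curlimages/curl --restart=Never --command -- \
    curl -s -o /dev/null -w '%{http_code}\n' http://teamcity.default.svc:8111/teamcity/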

Azure Kubernetes Service - Http Routing Giving 502

We are trying to host our API in AKS, but we are hitting the same issue no matter what ingress option we use. We are running the latest version of Kubernetes (1.11.2) on AKS with HTTP application routing configured. All the services and pods are healthy according to the dashboard, and the DNS zone /healthz is returning 200, so that's working.
All of the API services are built with the latest version of .NET Core, with the / route configured to return status code 200.
Here's the services & deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: accounts-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: accounts-api
    spec:
      containers:
      - name: accounts-api
        # image: mycompany.azurecr.io/accounts.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/accounts.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: programs-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: programs-api
    spec:
      containers:
      - name: programs-api
        # image: mycompany.azurecr.io/programs.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/programs.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: teams-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: teams-api
    spec:
      containers:
      - name: teams-api
        # image: mycompany.azurecr.io/teams.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/teams.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        # image: mycompany.azurecr.io/payments.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/payments.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: accounts-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: accounts-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: programs-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: programs-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: teams-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: teams-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: payments-api
  type: ClusterIP
---
First, we tried to use path-based fanout, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api
        backend:
          serviceName: programs-api-service
          servicePort: 80
      - path: /teams-api
        backend:
          serviceName: teams-api-service
          servicePort: 80
      - path: /workouts-api
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
But we were hitting a 502 Bad Gateway for each path. We then tried aggregating the ingresses and assigning a host per service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: accounts-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: accounts-api-service
          servicePort: 80
  - host: programs-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: programs-api-service
          servicePort: 80
  - host: teams-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: teams-api-service
          servicePort: 80
  - host: payments-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
The Azure DNS zone is adding the correct TXT and A records for each of the services, but we are still hitting the 502.
From what we can tell from googling, it seems as though the wiring of the ingress to the services is screwy, but as far as we can see our deploy script looks OK. Ideally we would like to use the path-based fanout option, so what could the issue be? Base path configuration?
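One common cause with path-based fanout: the backends serve their routes at /, but the ingress forwards the full path (e.g. /accounts-api/...), which the APIs don't know. The HTTP application routing add-on uses the NGINX ingress controller, so a rewrite annotation may help; a sketch, assuming the APIs expect requests at the root (on ingress-nginx 0.22+ you would instead use a capture group such as /accounts-api(/|$)(.*) with rewrite-target: /$2):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/rewrite-target: / # strip the path prefix before proxying
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80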
