Regex-based routing setup for ingress in AKS - Azure

I have a K8s cluster in Azure, in which I want to host multiple web applications with a single host. Each application has its own service and deployment. How can I achieve something like the following routes?
MyApp.com
Partner1.MyApp.com
Partner2.MyApp.com
Here is what my YAML file currently looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6666 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner1-myapp
spec:
  selector:
    matchLabels:
      app: partner1-myapp
  template:
    metadata:
      labels:
        app: partner1-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner1-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6669 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner1-myapp
spec:
  selector:
    app: partner1-myapp
  ports:
  - protocol: TCP
    port: 6669
    targetPort: 6669
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner2-myapp
spec:
  selector:
    matchLabels:
      app: partner2-myapp
  template:
    metadata:
      labels:
        app: partner2-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner2-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6672 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner2-myapp
spec:
  selector:
    app: partner2-myapp
  ports:
  - protocol: TCP
    port: 6672
    targetPort: 6672
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
---
What can I do to get the above routing?

Please run your two applications using kubectl
kubectl apply -f Partner1-MyApp.yaml --namespace ingress-basic
kubectl apply -f Partner2-MyApp.yaml --namespace ingress-basic
To set up the routing in your YAML file: if both applications are running in the Kubernetes cluster, traffic can be routed to each application via EXTERNAL_IP/static. Create a file and add the YAML below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
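If the goal is the host layout from the question (MyApp.com, Partner1.MyApp.com, Partner2.MyApp.com) rather than a path prefix, each rule can carry a host instead of a regex path. A minimal sketch, assuming the service names and ports from the question and that DNS for all three hosts points at the ingress controller's public IP:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-hosts
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com            # MyApp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
  - host: partner1.myapp.com   # Partner1.MyApp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
  - host: partner2.myapp.com   # Partner2.MyApp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672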
For more detail, please refer to this link:
Create an ingress controller in Azure Kubernetes Service (AKS)

Related

Shiny proxy on AKS behind an Azure Application Gateway

I've been using ShinyProxy on AKS for the past couple of months and it's been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to use it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments happen with no problems whatsoever, but upon trying to access the application I always get a "404 Not Found", and the health probes always return "no Results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
      - name: {{lvappname}}-proxy
        image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - name: kube-proxy-sidecar
        image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
  - host: {{lvappname}}-{{lvstage}}.{{domain}}
    http:
      paths:
      - path: /
        backend:
          service:
            name: {{lvappname}}-proxy
            port:
              number: 8080
        pathType: Prefix
  tls:
  - hosts:
    - {{lvappname}}-{{lvstage}}.{{domain}}
    secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
and here is my shinyproxy configuration file
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
  - id: {{lvappname}}
    display-name: {{lvappname}} application
    description: Application for {{lvappname}}
    container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
    container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any Help would be deeply appreciated
Thank you all in advance
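One detail worth double-checking in the manifests above: the Service selects app: {{lvappname}}-proxy and tier: frontend, while the Deployment's pod template only carries the label run: {{lvappname}}-proxy, so the Service may end up with no endpoints, which would make Application Gateway report failing probes and return 404. A minimal sketch of a Service whose selector matches the pod template labels as posted (an assumption; adjust if the template carries more labels):
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    run: {{lvappname}}-proxy   # matches the pod template label from the Deployment above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080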

Nginx Ingress Controller getting error in Node.js application with routes prefixed

I'm using Ingress Nginx Controller in my Kubernetes Cluster hosted on Google Kubernetes Engine. The application is a Node.js app.
When I integrated my app with Rollbar (logger service) I started to notice repetitive errors every ~15 seconds (154K times in one week).
Error: Cannot GET /
I think the reason is that my Node.js application uses the /v1 prefix in its routes, i.e. the / route doesn't exist.
PS: Rollbar is linked in Develop (local), Testing (Heroku) and Production (GKE) environments and the error only occurs in production.
My ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
The Ingress documentation says something about the / endpoint, but I don't understand it very well.
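For reference, the "/ endpoint" the documentation talks about is the default backend: the service that receives any request matching none of the rules, such as a plain GET / from a load-balancer health check. A minimal sketch in the same extensions/v1beta1 schema as above, assuming the existing service should also answer those requests:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  backend:                # default backend: catches requests that match no rule (e.g. GET /)
    serviceName: company-prod-v1-service
    servicePort: 3000
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000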
I need to remove this error. Can any jedi masters help me fix this error?
Thanks in advance
[UPDATE]
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy-v1
  labels:
    app: app-v1
    version: 1.10.0-2
spec:
  selector:
    matchLabels:
      app: app-v1
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  progressDeadlineSeconds: 60
  template:
    metadata:
      labels:
        app: app-v1
        version: 1.10.0-2
    spec:
      serviceAccountName: gke-account
      containers:
      - name: app-container
        image: registry.gitlab.com/company/app-back:1.10.0
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - >
                if [ -f "$SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY" ]; then
                  cp $SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY $APP_FOLDER;
                fi;
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - "/cloud_sql_proxy"
        - "-instances=thermal-petal-283313:us-east1:app-instance=tcp:5432"
        securityContext:
          runAsNonRoot: true
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep", "300"]
      imagePullSecrets:
      - name: registry-credentials
Ingress Controller Pod file (auto-generated by the installation command)
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-16T03:08:53Z"
  generateName: ingress-nginx-controller
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-nginx-controller
  resourceVersion: "112544950"
  selfLink: /api/v1/namespaces/ingress-nginx/pods/ingress-nginx-controller
spec:
  containers:
  - image: k8s.gcr.io/ingress-nginx/controller:v0.46.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1

Kubernetes ingress for Teamcity blank page

I have a problem with an Ingress. I'm using HAProxy, but after applying the YAML file(s) I'm not able to open the TeamCity site in a web browser; I get a blank page. If I use curl, it returns nothing.
A test echo server (image: jmalloc/echo-server) is working just fine.
Of course kubernetes.local is added to the hosts file so the DNS name can be resolved.
My config yaml files:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: teamcity
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: teamcity
    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        effect: NoExecute
        tolerationSeconds: 10
      - key: node.kubernetes.io/unreachable
        effect: NoExecute
        tolerationSeconds: 10
      containers:
      - image: jetbrains/teamcity-server
        imagePullPolicy: Always
        name: teamcity
        ports:
        - containerPort: 8111
        volumeMounts:
        - name: teamcity-pvc-data
          mountPath: "/data/teamcity_server/datadir"
        - name: teamcity-pvc-logs
          mountPath: "/opt/teamcity/logs"
      volumes:
      - name: teamcity-pvc-data
        persistentVolumeClaim:
          claimName: teamcity-pvc-data
      - name: teamcity-pvc-logs
        persistentVolumeClaim:
          claimName: teamcity-pvc-logs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: teamcity
  name: teamcity
  namespace: default
  annotations:
    haproxy.org/check: "true"
    haproxy.org/forwarded-for: "true"
    haproxy.org/load-balance: "roundrobin"
spec:
  selector:
    run: teamcity
  ports:
  - name: port-tc
    port: 8111
    protocol: TCP
    targetPort: 8111
  externalIPs:
  - 192.168.22.152
  - 192.168.22.153
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity
  namespace: default
spec:
  rules:
  - host: kubernetes.local
    http:
      paths:
      - path: /teamcity
        pathType: Prefix
        backend:
          service:
            name: teamcity
            port:
              number: 8111
I would be grateful for every hint; I've been struggling with this for hours. Connecting directly to http://192.168.22.152:8111 works fine too, so only the Ingress is having trouble.
Using a subdomain fixes the problem: teamcity.kubernetes.local works, but kubernetes.local/teamcity doesn't.
Solution:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamcity-ingress
  namespace: default
spec:
  rules:
  - host: teamcity.kubernetes.local
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: teamcity
            port:
              number: 8111

How can I run a docker container from a docker image from docker hub in skaffold?

Let's say I want to run a container from an image on Docker Hub, for example mosquitto: I'd execute docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto.
I tried to pull the image from gcr.io (deployment.yaml) like done here:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
  - targetPort: 1883
    port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: gcr.io/vu-james-celli/eclipse-mosquitto # https://hub.docker.com/_/eclipse-mosquitto
        ports:
        - containerPort: 1883
skaffold.yaml:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - <other image builds>
deploy:
  kubectl:
    manifests:
    - mqtt-broker/*
portForward:
- resourceType: deployment
  resourceName: mqtt-broker
  port: 1883
  localPort: 1883
- <other port forwardings>
...
However when I run skaffold --dev --port-forward I get an error in the output:
- deployment/mqtt-broker: container mqtt-broker is waiting to start: gcr.io/vu-james-celli/eclipse-mosquitto can't be pulled
How do I have to configure skaffold.yaml (schema version v2beta10) when using kubectl to run the mosquitto container as part of a deployment?
You could create a pod with a single container referencing eclipse-mosquitto, and then ensure that pod is referenced from your skaffold.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: mqtt
spec:
  containers:
  - name: mqtt
    image: eclipse-mosquitto
    ports:
    - containerPort: 1883
      name: mqtt
    - containerPort: 9001
      name: websockets
You could turn this into a deployment or replicaset with services, etc.
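For example, a minimal Service sketch in front of that Pod, assuming the Pod is additionally labelled app: mqtt (the manifest above only sets a name):
apiVersion: v1
kind: Service
metadata:
  name: mqtt
spec:
  selector:
    app: mqtt          # assumes the Pod above carries the label app: mqtt
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
  - name: websockets
    port: 9001
    targetPort: 9001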
First, pull the image from Docker Hub onto the local machine: docker pull eclipse-mosquitto
Second, reference the image in mqtt-broker/deployment.yaml, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
  - targetPort: 1883
    port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: eclipse-mosquitto
        ports:
        - containerPort: 1883
Third, reference the deployment.yaml in skaffold.yaml, e.g.:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - <services-under-development>
deploy:
  kubectl:
    manifests:
    - mqtt-broker/deployment.yaml
portForward:
- resourceType: deployment
  resourceName: mqtt-broker
  port: 1883
  localPort: 1883
- <port-forwarding-for-services-under-development>

Azure Kubernetes Service - Http Routing Giving 502

We are trying to host our API in AKS but we are hitting the same issue no matter what ingress option we use. We are running the latest version of Kubernetes (1.11.2) on AKS with HTTP application routing configured. All the services and pods are healthy according to the dashboard, and the DNS zone /healthz is returning 200, so that's working.
All of the API services are built using the latest version of .NET Core, with the / route configured to return a status code 200.
Here's the services & deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: accounts-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: accounts-api
    spec:
      containers:
      - name: accounts-api
        # image: mycompany.azurecr.io/accounts.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/accounts.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: programs-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: programs-api
    spec:
      containers:
      - name: programs-api
        # image: mycompany.azurecr.io/programs.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/programs.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: teams-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: teams-api
    spec:
      containers:
      - name: teams-api
        # image: mycompany.azurecr.io/teams.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/teams.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        # image: mycompany.azurecr.io/payments.api:#{Build.BuildId}#
        image: mycompany.azurecr.io/payments.api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: accounts-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: accounts-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: programs-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: programs-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: teams-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: teams-api
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: payments-api-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: payments-api
  type: ClusterIP
---
Firstly, we tried to use Path Based Fanout, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api
        backend:
          serviceName: programs-api-service
          servicePort: 80
      - path: /teams-api
        backend:
          serviceName: teams-api-service
          servicePort: 80
      - path: /workouts-api
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
But we were hitting a 502 bad gateway for each path. We then tried aggregating the ingresses and assigning a host per service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: accounts-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: accounts-api-service
          servicePort: 80
  - host: programs-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: programs-api-service
          servicePort: 80
  - host: teams-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: teams-api-service
          servicePort: 80
  - host: payments-api.d6b1cf1ede494842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: payments-api-service
          servicePort: 80
---
The Azure DNS zone is adding the correct TXT and A records for each of the services, but we are still hitting the 502.
From what we can tell from googling, it seems as though the wiring of the ingress to the services is screwy, but as far as we can see our deploy script looks OK. Ideally we would like to use the path-based fanout option, so what could the issue be? Base path configuration?
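On the base-path question raised above: with the path-based fanout as written, each request reaches the API with the /accounts-api-style prefix still attached, since the add-on's NGINX-based controller does not rewrite paths by default. A hedged sketch of stripping the prefix with rewrite-target, assuming the deployed controller version supports capture groups in that annotation (this addresses the path handling, not necessarily the 502 itself); the remaining paths would follow the same pattern:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: mycompany-api.d6b1cf1ede294842b0ed.westeurope.aksapp.io
    http:
      paths:
      - path: /accounts-api(/|$)(.*)      # forwarded to the service as /<remainder>
        backend:
          serviceName: accounts-api-service
          servicePort: 80
      - path: /programs-api(/|$)(.*)
        backend:
          serviceName: programs-api-service
          servicePort: 80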
