I have a situation where I am planning to use two separate Traefik ingress controllers inside the Kubernetes cluster.
I have a few URLs that should be accessible through VPN only, and a few that should be publicly accessible.
In the current architecture, I have one Traefik ingress controller and two separate ALBs, one internal and one internet-facing, both pointing to Traefik.
Let's say I have the URLs public.example.com and private.example.com: public.example.com points to the internet-facing ALB, and private.example.com points to the internal ALB. But if someone discovers the address that public.example.com resolves to and points private.example.com at the same address in their /etc/hosts, they will be able to access my private website.
To avoid this, I am planning to run two separate Traefik ingress controllers, one serving only private URLs and one serving only public URLs. Can this be done? Or is there another way to avoid this?
To deploy two separate Traefik ingress controllers that serve private and public traffic separately, I used the kubernetes.ingressclass argument.
This is what the documentation says about kubernetes.ingressclass:
--kubernetes.ingressclass Value of kubernetes.io/ingress.class annotation to watch for
I created two Deployments, each with a separate value for kubernetes.ingressclass:
one with kubernetes.ingressclass=traefik, which was behind a public ALB, and one with kubernetes.ingressclass=traefik-internal, which was behind a private/internal ALB.
For services that I want to serve privately, I use the following annotation in the Ingress objects:
annotations:
  kubernetes.io/ingress.class: traefik-internal

and for public:

annotations:
  kubernetes.io/ingress.class: traefik
My deployment.yaml:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-internal-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-internal-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-internal-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-internal-ingress-lb
    spec:
      serviceAccountName: traefik-internal-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:v1.7
          name: traefik-internal-ingress-lb
          ports:
            - name: http
              containerPort: 80
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
            - --accesslog=true
            - --kubernetes.ingressclass=traefik-internal  # watch only Ingress objects with the annotation "kubernetes.io/ingress.class: traefik-internal"
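The public controller's Deployment is identical apart from the names and the ingress class; a sketch of the relevant args (everything else mirrors the manifest above):

args:
  - --api
  - --kubernetes
  - --logLevel=INFO
  - --accesslog=true
  - --kubernetes.ingressclass=traefik  # watch only Ingress objects with "kubernetes.io/ingress.class: traefik"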
Hope this helps someone.
You can achieve this with a single ingress controller inside the cluster by creating separate Ingress objects.
For the private site:
Consider the whitelist-source-range annotation in the Ingress resource:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              serviceName: test
              servicePort: 80
For the public site:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              serviceName: test
              servicePort: 80
Multiple Træfik Deployments can run concurrently in the same cluster. For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.
For such cases, it is advisable to classify Ingress objects through a label and configure the labelSelector option per each Træfik Deployment accordingly. To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a traffic-type: internal label while objects designated for external traffic receive a traffic-type: external label. The label selectors on the Træfik Deployments would then be traffic-type=internal and traffic-type=external, respectively.
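A minimal sketch of this label-based setup for Traefik 1.7, assuming the traffic-type labels from the quote (only the relevant args are shown, and the Ingress name internal-app is a placeholder):

# Internal Traefik Deployment: only watch Ingress objects labeled traffic-type=internal
args:
  - --kubernetes
  - --kubernetes.labelselector=traffic-type=internal

# An Ingress meant for internal traffic carries the matching label
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: internal-app
  labels:
    traffic-type: internal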
I created an Ingress service as below, and I am able to get a response using the IP (retrieved using the kubectl get ingress command).
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: msanajyv/k8s_api
          ports:
            - containerPort: 80
Service File
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-clusterip-service
  labels:
    app: app1-nginx
spec:
  type: ClusterIP
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
Ingress File
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxapp1-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app1-nginx-clusterip-service
              servicePort: 80
With the above YAML files I am able to access the service using the IP address, i.e., http://xx.xx.xx.xx/weatherforecast.
But I want to access the service using a domain name instead of the IP. So I created a DNS zone in the Azure portal and added a record set as below.
Also I changed my Ingress file as below.
...
rules:
  - host: app1.msvcloud.io
    http:
      paths:
        - path: /
          backend:
            serviceName: app1-nginx-clusterip-service
            servicePort: 80
When I access the service using the host name (http://app1.msvcloud.io/weatherforecast), the host is not getting resolved. Kindly let me know what I am missing.
By creating a record in your private DNS zone, you can only resolve the name (app1.msvcloud.io) within your virtual network. This means it will work if you remote into a VM within the VNet and access it from there, but it will not work if you try the same outside the VNet. If you want the name to be resolvable on the public internet, you need to buy the domain name and register it in Azure DNS.
The records contained in a private DNS zone aren't resolvable from the
Internet. DNS resolution against a private DNS zone works only from
virtual networks that are linked to it.
~What is a private Azure DNS zone.
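As a sketch, once you own the domain and host a public zone for it in Azure DNS, an A record for the ingress IP could be created like this (the resource group name is an assumption, and xx.xx.xx.xx stands in for the ingress IP as above):

az network dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name msvcloud.io \
  --record-set-name app1 \
  --ipv4-address xx.xx.xx.xx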
Setup:
Azure Kubernetes Service
Azure Application Gateway
We have a Kubernetes cluster in Azure that uses Application Gateway for managing network traffic. We are using App Gateway rather than a Load Balancer because we need to handle traffic at layer 7, hence the path-based HTTP rules. We use the Kubernetes ingress controller for configuring the App Gateway; see the config below.
Now I want a service that accepts requests on both HTTP (layer 7) and TCP (layer 4).
How do I do that? The exposed port should not be public on the internet, but it should be reachable within the Azure network. Do I need to add another ingress controller that is not configured to use App Gateway?
This is what I want to accomplish:
This is the config for the ingress controller using appgw:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1
  labels:
    app: service1
  annotations:
    appgw.ingress.kubernetes.io/backend-path-prefix: /
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
    - hosts:
      secretName: <somesecret>
  rules:
    - host: <somehost>
      http:
        paths:
          - path: /service1/*
            backend:
              serviceName: service1
              servicePort: http
Current setup:
Generally you have two straightforward options.
The first is to use direct pod IPs or a headless ClusterIP Service.
I assume your AKS cluster employs either Azure CNI or Calico as its networking fabric.
In both cases your Pods get routable IPs in the AKS subnet:
https://learn.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking
Thus, you can make them accessible directly across your VNet.
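For illustration, a headless Service sketch (the name, selector, and port are assumptions; DNS for such a Service returns the pod IPs directly instead of a virtual IP):

apiVersion: v1
kind: Service
metadata:
  name: service1-headless   # hypothetical name
spec:
  clusterIP: None           # headless: no virtual IP, DNS resolves to the pod IPs
  selector:
    app: service1           # assumed pod label
  ports:
    - port: 8200            # example TCP port
      targetPort: 8200
      protocol: TCP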
Alternatively, you could use a Service of type LoadBalancer configured as an internal load balancer.
You can build the internal LB through the appropriate annotation:
apiVersion: v1
kind: Service
metadata:
  name: internal-app        # example name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80              # example port and selector
  selector:
    app: internal-app
When you view the service details, the IP address of the internal load balancer is shown in the EXTERNAL-IP column. In this context, External is in relation to the external interface of the load balancer, not that it receives a public, external IP address.
https://learn.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer
If needed, you can assign a predefined IP address to your LB and/or put it into a different subnet in your VNet, or even into a private subnet; you can also use VNet peering, etc.
Eventually you can make it routable from wherever you need.
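For example, a sketch pinning the internal LB to a specific subnet and address (the subnet name and IP are assumptions; the address must be free within that subnet):

apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet"  # assumed subnet name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # assumed unused address in that subnet
  ports:
    - port: 80
  selector:
    app: internal-app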
The exposed port should not be public, but public in the kubernetes cluster.
I assume you mean that your application should expose a port for clients within the Kubernetes cluster. You don't have to do anything special in Kubernetes for Pods to do this; they can accept TCP connections on any port. But you may want to create a Service of type: ClusterIP for this, so it will be easier for clients.
Nothing more than that should be needed.
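A minimal sketch of such a Service (the name, selector, and port are assumptions); in-cluster clients would then connect to it by service name:

apiVersion: v1
kind: Service
metadata:
  name: service1-tcp        # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: service1           # assumed pod label
  ports:
    - port: 8200            # example TCP port the pod listens on
      targetPort: 8200
      protocol: TCP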
Leveraging a Kubernetes Service should address your concern; all you need to do is modify your YAML file.
Here is an example snippet where I have created a Service:

apiVersion: v1
kind: Service
metadata:
  name: cafe-app-service
  labels:
    application: cafe-app-service
spec:
  ports:
    - port: 80
      protocol: TCP    # Service protocols must be TCP, UDP, or SCTP; "HTTP" is not a valid value
      targetPort: 8080
      name: coffee-port
    - port: 8081
      protocol: TCP
      targetPort: 8081
      name: tea-port
  selector:
    application: cafe-app
You can then reference this Service in the Ingress that you have created.
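For instance, a sketch of an Ingress rule pointing at coffee-port above (the host and ingress name are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /coffee
            backend:
              serviceName: cafe-app-service
              servicePort: 80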
I went with two Services for the same pod: one for HTTP, handled by an Ingress (App Gateway), and one for TCP, using an internal Azure load balancer.
This is my config that I ended up using:
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-serviceA-cm
  namespace: default
data:
  "8200": "default/serviceA:8200"  # ConfigMap keys must be strings, so the port number is quoted
TCP service
apiVersion: v1
kind: Service
metadata:
  name: tcp-serviceA
  namespace: default
  labels:
    app.kubernetes.io/name: serviceA
    app.kubernetes.io/part-of: serviceA
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: subnetA
spec:
  type: LoadBalancer
  ports:
    - name: tcp
      port: 8200
      targetPort: 8200
      protocol: TCP
  selector:
    app: serviceA
    release: serviceA
HTTP Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: serviceA
  labels:
    app: serviceA
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
    - hosts:
      secretName: <somesecret>
  rules:
    - host: <somehost>
      http:
        paths:
          - path: /serviceA/*
            backend:
              serviceName: serviceA
              servicePort: http
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceA
      release: serviceA
  template:
    metadata:
      labels:
        app: serviceA
        release: serviceA
    spec:
      containers:
        - name: serviceA
          image: "serviceA:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
HTTP Service
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: serviceA
    release: serviceA
Here I have explained the scenario: I can reach a ClusterIP service using the NGINX ingress, but I can't reach the same service using the Azure Application Gateway ingress. The annotation below is not helping me:
appgw.ingress.kubernetes.io/rewrite-target: /
Any idea?
Make sure you add the annotations below to example-ingress:
appgw.ingress.kubernetes.io/use-private-ip: "false"
kubernetes.io/ingress.class: azure/application-gateway
You can see the full list and examples here.
You were using the wrong annotation. I have updated your Ingress with the correct one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
    - http:
        paths:
          - path: /apple/*
            pathType: Prefix
            backend:
              service:
                name: apple-service
                port:
                  number: 5678
Check out all the AGIC annotations here.
I have created a sample Spring Boot app and did the following:
1. Created a Docker image.
2. Created an Azure Container Registry and pushed the image to it.
3. Created a cluster in Azure Kubernetes Service and deployed the app successfully, choosing the external endpoint option.
(screenshot: Kubernetes external endpoint)
Say, for a service-to-service call, I don't want to use an IP like http://20.37.134.68:80 but a custom name instead. How can I do that?
Also, if I choose an internal endpoint, is there any way to replace the name?
I tried editing the YAML with an endpoint name property but failed. Any ideas?
I think you are mixing some concepts, so I'll try to explain and help you reach what you want.
When you deploy a container image in a Kubernetes cluster, in most cases you will use a Pod or Deployment spec, which is basically a YAML file with all your deployment/pod configuration: name, image name, etc. Here is an example of a simple echo-server app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: mendhak/http-https-echo
          ports:
            - name: http
              containerPort: 80
Observe the name fields in the file. Here you can configure the names for your Deployment and for your containers.
In order to expose your application, you will need to use a Service. Services can be internal or external; here you can find all the service types.
For an internal service, you use the service type ClusterIP (the default), which means only your cluster will reach the pods. To reach a service from other pods, you can use the service name, composed as my-svc.my-namespace.svc.cluster-domain.example.
Here is an example of a service for the deployment above:
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
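For instance, assuming the Service above lives in the default namespace and the cluster uses the default cluster.local domain, another pod could reach it with:

curl http://echo-svc.default.svc.cluster.local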
To expose your service externally, you have the option of using a Service of type NodePort or LoadBalancer, or using an Ingress.
With an Ingress you can configure your DNS name in the ingress rules, add path rules if you want, or even configure HTTPS for your application. There are a few ingress options in Kubernetes, and one of the most popular is nginx-ingress.
Here is an example of how to configure a simple ingress for our example service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
spec:
  rules:
    - host: myapp.mydomain.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: echo-svc
              servicePort: 80
In the example, I'm using the DNS name myapp.mydomain.com, which means you will only be able to reach your application by this name.
After creating the ingress, you can see the external IP with the command kubectl get ing, and you can create an A record in your DNS server.
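For example, assuming the echo-ingress from above:

kubectl get ing echo-ingress   # the ADDRESS column shows the external IP to use in the A record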
Currently I have an ingress controller that responds to a set of ingress rules. I use this setup as a de facto API gateway that exposes my different services to the internet.
I have an Azure domain, dev-APIGateway.northeurope.cloudapp.azure.com, pointing to the ingress controller; I set this up on the public IP address via the Azure portal.
Now to my problem: I want an additional domain name, dev-frontend.northeurope.cloudapp.azure.com, that resolves to the same ingress controller and satisfies the frontend ingress definition seen below.
I know I can achieve this by creating an additional ingress controller with its own IP address and domain, but this seems redundant. See the Ingress definitions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress-someapi
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: dev-APIGateway.northeurope.cloudapp.azure.com
      http:
        paths:
          - backend:
              serviceName: someServiceA-service
              servicePort: 80
            path: /someServiceA/(.*)
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress-frontend
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: dev-frontend.northeurope.cloudapp.azure.com
      http:
        paths:
          - backend:
              serviceName: frontend-service
              servicePort: 80
            path: /
If I had my own domain this would be fine: I would simply add A records for the domains I own, along with their CNAMEs, point them at the ingress controller IP address, and that would be it. But this is not the case, as I'm stuck using Azure domain names.