I have 6 Pods. Each Pod's application listens on a different port, and each Pod has its own IP address.
I have a Kubernetes LoadBalancer Service (Azure Load Balancer) with a defined static IP address.
I can access the app1 application through the LoadBalancer IP 10.1.1.100 on port 9111 (app1 listens on port 9111).
Now I have app2, which listens on port 9112. Is it possible to access it through the same load balancer IP, 10.1.1.100, on port 9112? If yes, how do I implement the Service?
My current Service manifest file:
apiVersion: v1
kind: Service
metadata:
  name: "app1-service"
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: app1
  type: LoadBalancer
  loadBalancerIP: 10.1.1.100
  ports:
  - port: 9111
app1 deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "app1-deployment"
  labels:
    app: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      imagePullSecrets:
      - name: image-secrets
      containers:
      - name: inaudiotools
        securityContext: {}
        image: myregitry.io/app1:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 9111
app2 deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "app2-deployment"
  labels:
    app: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      imagePullSecrets:
      - name: image-secrets
      containers:
      - name: inaudiotools
        securityContext: {}
        image: myregitry.io/app2:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 9112
There are different ways to expose a Service to external traffic:
ClusterIP: The default Service type that Kubernetes creates for accessing pods. It can be exposed using kubectl proxy. Good for getting started, but not suitable for production.
NodePort: Exposes a Service on a specific port of every node. Good for demo purposes, but it leads to scalability and maintainability issues.
LoadBalancer: The standard way to expose a Service; it creates a network load balancer and exposes your Service externally. This is what you have used too.
Ingress: Not actually a Service, but a reverse proxy (e.g. nginx) that sits between the load balancer and multiple Services in Kubernetes. The reverse proxy can forward requests to any Service based on the URL pattern or host. Say in your case app1 uses the host name https://host1.abc.com and app2 uses https://host2.abc.com: nginx will route requests to app1 when the incoming host name is host1.abc.com and to app2 when it is host2.abc.com. This is the preferred approach for production workloads.
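As a sketch, the host-based routing described above could be expressed with an Ingress along these lines (the hostnames are the hypothetical ones from above, and app2-service is an assumed Service analogous to app1-service):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress          # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: host1.abc.com       # routed to app1
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 9111
  - host: host2.abc.com       # routed to app2
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service   # assumed to exist alongside app1-service
            port:
              number: 9112
```

With this in place, only the ingress controller needs a load balancer IP; both apps share it and are distinguished by host name.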
To answer your specific query, in case you want to proceed with a Load balancer type only , you need to create a new Load balancer type service that routes the traffic to app2.
apiVersion: v1
kind: Service
metadata:
  name: "app2-service"
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: app2
  type: LoadBalancer
  loadBalancerIP: 10.1.2.100
  ports:
  - port: 9112
This will create a new load balancer frontend with a new static IP (internal, given the annotation) and route requests to app2.
The downsides are:
Cost: Azure allocates a new load balancer frontend and IP for each such Service.
Maintenance: a new DNS record for every Service instead of one per domain.
I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment .yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet
  template:
    metadata:
      labels:
        app: aspnet
    spec:
      containers:
      - name: aspnetapp
        image: <my_image>
        resources:
          limits:
            cpu: "0.5"
            memory: 64Mi
        ports:
        - containerPort: 8080
and this is the service .yml
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    name: aspnetapp
Everything seems deployed correctly.
Another check I did was to enter the pod and run
curl http://localhost:80
and the application responds correctly, but if I try to access the application from the browser using http://20.103.147.69, a timeout is returned.
What else could be wrong?
It seems that you do not have an ingress controller deployed on your AKS cluster, as you have exposed your application directly. You will need one in order to get Ingress to work.
To verify that your application is working, you can use port-forward and then access http://localhost:8080:
kubectl port-forward deployment/aspnetapp 8080:8080
But you should definitely install an ingress controller. Here is a walkthrough from Microsoft for installing ingress-nginx as the ingress controller on your cluster.
You will then expose only the ingress controller to the internet, and you can also specify loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the IP is in another resource group
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The ingress controller will then route incoming traffic to your application according to an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
PS: never expose your application directly to the internet; always use the ingress controller.
In your Deployment, you configured your container to listen on port 8080, so the Service's targetPort must be set to 8080. Note also that the Service's selector (name: aspnetapp) does not match the pod template's label (app: aspnet); a Service only selects pods whose labels match its selector.
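For reference, a sketch of what a matching Service could look like, assuming the pod labels shown in the Deployment from the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080   # the port the container actually listens on
  selector:
    app: aspnet        # must match the pod template's labels, not the Deployment name
```

A quick way to check whether the selector matches anything is to look at the Service's endpoints; an empty endpoint list means no pods are selected.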
Documentation
Setup:
Azure Kubernetes Service
Azure Application Gateway
We have a Kubernetes cluster in Azure which uses Application Gateway for managing network traffic. We are using App Gateway over a plain load balancer because we need to handle traffic at layer 7, i.e. path-based HTTP rules. We use the Kubernetes ingress controller for configuring App Gateway; see the config below.
Now I want a service that accepts requests both over HTTP (layer 7) and TCP (layer 4).
How do I do that? The exposed port should not be public on the internet at large, but public on the Azure network. Do I need to add another ingress controller that is not configured to use App Gateway?
This is what I want to accomplish:
This is the config for the ingress controller using appgw:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1
  labels:
    app: service1
  annotations:
    appgw.ingress.kubernetes.io/backend-path-prefix: /
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    secretName: <somesecret>
  rules:
  - host: <somehost>
    http:
      paths:
      - path: /service1/*
        backend:
          serviceName: service1
          servicePort: http
Current setup:
Generally you have two straightforward options.
The first is to use direct pod IPs or a headless ClusterIP Service.
I assume your AKS cluster employs either Azure CNI or Calico as its networking fabric.
In both cases your Pods get routable IPs in the AKS subnet:
https://learn.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking
Thus, you can make them accessible directly across your VNet.
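As a sketch, a headless Service for that first option might look like this (name, label, and port are placeholders, not values from this setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # placeholder name
spec:
  clusterIP: None             # headless: DNS resolves straight to the pod IPs
  selector:
    app: my-app               # placeholder pod label
  ports:
  - port: 8200                # placeholder port
    targetPort: 8200
    protocol: TCP
```

Clients that can reach the AKS subnet can then resolve the Service's DNS name to the individual pod IPs and connect directly.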
Alternatively, you could use a Service configured as an internal load balancer. You can build an internal LB through the appropriate annotations:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
When you view the service details, the IP address of the internal load balancer is shown in the EXTERNAL-IP column. In this context, External is in relation to the external interface of the load balancer, not that it receives a public, external IP address.
https://learn.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer
If needed, you can assign a predefined IP address to your LB and/or put it into a different subnet in your VNet, or even into a private subnet, use VNet peering, etc.
Eventually you can make it routable from wherever you need.
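Combining those knobs, here is a sketch of an internal LB Service pinned to a specific IP and subnet; the name, label, IP, and subnet name are assumptions for illustration, not values from this setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app   # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: apps-subnet   # hypothetical subnet name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # assumed to be a free address inside that subnet
  selector:
    app: my-app                 # placeholder pod label
  ports:
  - port: 8200
```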
The exposed port should not be public, but public in the Kubernetes cluster.
I assume you mean that your application should expose a port for clients within the Kubernetes cluster. You don't have to do anything special in Kubernetes for Pods to do this; they can accept TCP connections on any port. But you may want to create a Service of type: ClusterIP for it, so things will be easier for clients.
Nothing more than that should be needed.
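A minimal sketch of such a ClusterIP Service (all names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service   # placeholder name
spec:
  type: ClusterIP
  selector:
    app: my-app          # placeholder pod label
  ports:
  - port: 8200           # the port in-cluster clients connect to
    targetPort: 8200     # the port the container listens on
    protocol: TCP
```

In-cluster clients can then use the stable DNS name my-tcp-service instead of tracking pod IPs.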
Leveraging a Kubernetes Service should address your concern; all you need to do is modify your YAML file.
Here is an example snippet where I have created a service
apiVersion: v1
kind: Service
metadata:
  name: cafe-app-service
  labels:
    application: cafe-app-service
spec:
  ports:
  - port: 80
    protocol: TCP # a Service's protocol must be TCP, UDP, or SCTP; HTTP is not a valid value here
    targetPort: 8080
    name: coffee-port
  - port: 8081
    protocol: TCP
    targetPort: 8081
    name: tea-port
  selector:
    application: cafe-app
You can then reference this Service in the Ingress that you have created.
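For example, an Ingress referencing one of the named ports might be sketched as follows; the Ingress name and path are hypothetical, and networking.k8s.io/v1beta1 is used only to match the API version elsewhere in this setup:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /coffee/*               # illustrative path
        backend:
          serviceName: cafe-app-service
          servicePort: coffee-port    # named port defined in the Service above
```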
I went with having two Services for the same pod: one for HTTP, handled by an Ingress (App Gateway), and one for TCP, using an internal Azure load balancer.
This is the config that I ended up using:
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-serviceA-cm
  namespace: default
data:
  "8200": "default/serviceA:8200" # ConfigMap keys must be strings, so the port is quoted
TCP service
apiVersion: v1
kind: Service
metadata:
  name: tcp-serviceA
  namespace: default
  labels:
    app.kubernetes.io/name: serviceA
    app.kubernetes.io/part-of: serviceA
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: subnetA
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 8200
    targetPort: 8200
    protocol: TCP
  selector:
    app: serviceA
    release: serviceA
HTTP Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: serviceA
  labels:
    app: serviceA
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    secretName: <somesecret>
  rules:
  - host: <somehost>
    http:
      paths:
      - path: /serviceA/*
        backend:
          serviceName: serviceA
          servicePort: http
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceA
      release: serviceA
  template:
    metadata:
      labels:
        app: serviceA
        release: serviceA
    spec:
      containers:
      - name: serviceA
        image: "serviceA:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
HTTP Service
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  labels:
    app: serviceA
    release: serviceA
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: serviceA
    release: serviceA
Does a NodePort Service work in Azure Kubernetes Service?
My use case: I'm trying to deploy 3 nodes (3 VMs), such that every node has a pod running an nginx container, using a DaemonSet.
So 3 nodes -> 3 pods -> 3 nginx containers running the basic "Welcome to nginx" page.
To expose the service, I use a load balancer and get a public IP which reaches any of the three pods, and when I do
http://<load-balancer-ip> it displays the "Welcome to nginx" page using one of the pods.
Now I want to deploy a NodePort Service, so that I can view my nginx page using
http://<node-public-ip>:<node-port>
I'm not able to access this using a node's public IP, and I'm using Azure Kubernetes Service.
What should I do next?
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
  selector:
    app: nginx
I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type. I have also checked the pods' status in the kube-system namespace; all of them show as Running. But whenever I set the service type to LoadBalancer, no load balancer IP is created and its status always shows as pending. I have also created an ingress controller for the nginx service; still, the Ingress gets no address. While initializing the Kubernetes master, I am using the following command:
kubeadm init
Below are the deployment, svc, and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector: app=nginx
Type: ClusterIP
IP: 10.96.107.97
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity: None
Events: <none>
$ kubectl describe ingress nginx
Name: test-ingress
Namespace: default
Address:
Default backend: nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
Host Path Backends
---- ---- --------
*     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events: <none>
Do we need to mention any IP ranges (private or public) of the VMs while running kubeadm init? Or do we need to change any network settings in the Azure Ubuntu VMs?
As you created your own Kubernetes cluster rather than using one provided by AWS, Azure, or GCP, there is no integrated load balancer. For this reason, the service's IP status stays pending.
But with the use of an ingress controller, or directly through NodePort, you can circumvent this problem.
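For example, a NodePort Service for the nginx Deployment from the question could be sketched like this; the Service name and nodePort value are arbitrary choices for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx           # matches the Deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080      # arbitrary pick from the default 30000-32767 range
```

The application is then reachable on every node's IP at that port, provided the VMs' firewall/NSG rules allow the traffic.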
However, I also observed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb, while you said you are using Azure. Those service annotations are platform-specific, and that one is AWS-specific.
That said, if you would like to experiment directly with public IPs, you can define your Service with externalIPs, provided you have a public IP allocated to your node and ingress traffic is allowed to it.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
But a good approach to get this done is to use an ingress controller if you are planning to build your own Kubernetes cluster.
Hope this helps.
I am trying to set up a Kubernetes deployment where an Ingress can define a service as a subdomain, i.e. app1 can configure itself to get traffic from app1.sub.domain.io in its Ingress config.
I have a wildcard DNS A record, *.sub.domain.io, that points to a load balancer. That load balancer points to the cluster's instance group.
So, if I am right, all traffic that goes to anything at sub.domain.io will land inside the cluster, which just needs to route said traffic.
Below are the Kubernetes configs, which have a pod, a service, and an Ingress. The pods are healthy and working. I believe the Service isn't strictly required, but I want other pods to talk to this one via internal DNS, so it's added.
The Ingress rules have the host app1.sub.domain.io, so in theory, curl'ing app1.sub.domain.io should follow:
DNS -> Load Balancer -> Cluster -> Ingress Controller -> Pod
At the moment, when I try to hit app1.sub.domain.io, it just hangs. I have tried not having the Service and making an ExternalName Service, and neither works.
I don't want to go down the route of using a LoadBalancer Service per app, as that creates a new external IP that needs to be applied to DNS records manually, or with a nasty bash script that waits for the service's external IP and runs a GCP command, and we don't want to do this for each service.
Ref link: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
Deployment
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: app1
  namespace: default
  labels:
    app: app1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - image: xxxx:latest
        name: app1
        ports:
        - containerPort: 80
        env:
        - name: NODE_ENV
          value: production
Service
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app1
  type: ClusterIP
Ingress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
  - host: app1.sub.domain.io
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80
Once everything is deployed, if you query
kubectl get pods,services,ingresses -l app=app1
NAME READY STATUS RESTARTS AGE
po/app1-6d4b9d8c5-4gcz5 1/1 Running 0 20m
po/app1-6d4b9d8c5-m4kwq 1/1 Running 0 20m
po/app1-6d4b9d8c5-rpm9l 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/app1 ClusterIP x.x.x.x <none> 80/TCP 20m
NAME HOSTS ADDRESS PORTS AGE
ing/app1-ingress app1.sub.domain.io 80 20m
----------------------------------- Update -----------------------------------
Currently doing this; it's not ideal. I have a global static IP that's assigned to a DNS record.
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: NodePort
  selector:
    app: app1
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app1-static-ip
  labels:
    app: app1-static-ip
spec:
  backend:
    serviceName: app1
    servicePort: 80
*.sub.domain.io should point to the IP of the Ingress.
You can use a static IP for the Ingress by following the instructions in the tutorial here: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address
Try adding path to your Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
  - host: app1.sub.domain.io
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80
        path: /
If that doesn't work, please post the output of describe service and describe ingress.
Do you have an Ingress Controller?
Traffic should go LB -> Ingress Controller -> Ingress -> Service (ClusterIP) -> Pods.