I have a service of type LoadBalancer on a five-node cluster running Kubernetes 1.9.11. The LoadBalancer sits in front of three pods running an ASP.NET Core web application (which in turn talks to a NATS message queue, from which a listener retrieves messages and saves them to an Azure SQL database). All pods have resource requests and limits set, and everything is in a dedicated namespace.
I'm using a PowerShell script to make the web application generate a message for the NATS queue every 50 milliseconds. I can see in a couple of ways that the LoadBalancer is only sending traffic to one pod: firstly, the CPU graphs in the Kubernetes dashboard show no activity for two of the pods, and secondly, I'm tracing Environment.MachineName from the web app all the way through to a field in the database and only one MachineName ever appears. If I delete the pod that is receiving traffic, a new pod immediately starts receiving traffic, but it's still only that one pod out of three.
My understanding is that this isn't how the LoadBalancer is intended to work, i.e. the LoadBalancer should distribute traffic across all the pods. Is that right, and if so, any clues as to what I'm doing wrong? My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: megastore-web-service
spec:
  selector:
    app: megastore-web
  ports:
  - port: 80
  type: LoadBalancer
It sounds to me like your load balancer is working correctly. When traffic comes into the LB, the LB will automatically direct it to the first available node. The fact that you can shut down your pod and traffic is rerouted is what would be expected.
This is a good article which helps explain how the LB works
https://blogs.msdn.microsoft.com/cie/2017/04/19/how-to-fix-load-balancer-not-working-in-round-robin-fashion-for-your-cloud-service/
To test this further, I would suggest you try opening a port on one of the pods but not the others, such as port 88 on pod 2. Then connect using loadbalancer:88 and see if the connection gets routed to the correct pod.
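If you want to rule out a Service-side issue first, a couple of quick checks can help (a sketch; <your-namespace> stands in for the dedicated namespace mentioned in the question):

# All three pod IPs should be listed as endpoints behind the Service
kubectl get endpoints megastore-web-service -n <your-namespace>

# Check the Session Affinity field; if it is ClientIP, repeated requests from
# the same client will deliberately stick to one pod
kubectl describe service megastore-web-service -n <your-namespace>

Also worth keeping in mind: kube-proxy balances per connection, not per request, so a test client that reuses a single keep-alive connection will keep hitting the same pod even when everything is configured correctly.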
First of all, I am pretty new to Kubernetes and the containerized world.
My scenario is as follows:
I have an application deployed to AKS, and we are using AGIC as the ingress. The application consumes endpoints hosted outside AKS. The consumed application is publicly accessible, but it uses IP whitelisting, and I have whitelisted the Application Gateway IP. I also created an ExternalName Service, as follows:
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
  - protocol: TCP
    port: 443
But it does not work.
Additionally, I tried to call the endpoint URL (https://endpointname.something.com) directly from the pod, and I received a 403.
Can someone advise what the correct steps would be to achieve this connectivity?
Please note that we fixed this issue by whitelisting the public IP of the AKS load balancer on the target system.
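For anyone hitting the same 403, a quick way to confirm which public IP the cluster egresses from (i.e. the IP the target system needs to whitelist) is to call an IP echo service from inside the cluster. A rough sketch; curlimages/curl and ifconfig.me are just example choices:

# Run a throwaway pod and print the source IP that external systems see
kubectl run egress-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -s https://ifconfig.me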
I have created a sample API application with Node and Express to be containerized and deployed to Azure Kubernetes Service (AKS). However, I am unable to access the API endpoint through the external IP generated from the service.yml that was deployed.
I made use of the deployment center within AKS to deploy my application and generate the relevant deployment.yml and service.yml. The following shows the services running, including the external IP.
The following is the response from Postman. I have tried with and without the port number, and with the IP addresses from kubectl get endpoints, but to no avail; the request eventually times out and I am unable to access the API.
The following is the Dockerfile config.
I have searched around for solutions but have not been able to resolve this. I would greatly appreciate it if anyone who has encountered similar issues could share their experience, thank you.
From the client machine where kubectl is installed, run:
kubectl get pods -o wide -n restapicluster5ca2
This will list all the pods along with their IPs. Then run:
kubectl describe svc restapicluster-bb91 -n restapicluster5ca2
This will give you details about the service. Check:
- LoadBalancer Ingress: the external IP address to use
- Port: the port to access it on
- TargetPort: the port on the containers, i.e. 5000 in your case
- Endpoints: verify that every pod IP is listed with the correct port (5000)
Log into any of the machines in the AKS cluster and do the following:
curl [CLUSTER-IP]:[PORT]/api/posts, e.g. curl 10.-.-.67:5000/api/posts
check if you get the response.
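If the TargetPort or Endpoints do not line up, the Service manifest is usually the culprit. A minimal sketch of what it could look like in this case; the selector label is an assumption, only the service name and the 5000 container port come from the question:

apiVersion: v1
kind: Service
metadata:
  name: restapicluster-bb91
spec:
  type: LoadBalancer
  selector:
    app: restapi            # assumption: must match the labels on your pods
  ports:
  - port: 80                # port exposed on the external IP
    targetPort: 5000        # port the Express app listens on inside the container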
For reference on using kubectl locally with an AKS cluster, check the links below:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
As I see it, you need to put an ingress in front of your service. Folks use NGINX etc. for that.
If you want to stay "pure" Azure, you could use AGIC (Application Gateway Ingress Controller) and annotate your service's ingress so it is exposed over Application Gateway. You could also spin up your own custom Application Gateway and hook it up with the AKS Service/LoadBalancer IP.
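For the AGIC route, the exposure is driven by an Ingress resource annotated for Application Gateway rather than by the Service itself. A rough sketch, assuming AGIC is already installed in the cluster and the Service exposes port 80 (the Ingress name is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restapi-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway   # hand this Ingress to AGIC
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: restapicluster-bb91   # the Service in front of the pods
            port:
              number: 80                # assumption: the Service port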
I have been going through Docker and Kubernetes. I want to create two Python web servers and need to access them using a public URL, and these requests should be balanced between the two servers.
I created one Python web server and initially deployed it with Docker containers. All of this I'm doing on an AWS EC2 instance, so when I tried to send a request I used ec2publicip:port. This works, which means I have created one web server, and I will do the same for the second server.
My question is: if I deploy this with Kubernetes, is there any way to load-balance between the Python web server pods? If so, can someone tell me how to do this?
If you create two replicas of the pod via a Kubernetes Deployment and create a Service of type LoadBalancer, an ELB on AWS is provisioned automatically. Then, whenever a request comes in to the ELB, it distributes the traffic across the replicas of the pod. A LoadBalancer-type Service gives you round-robin load balancing at layer 4, handled by kube-proxy; for layer 7 capabilities such as path- or host-based routing, you need an ingress.
The problem with LoadBalancer-type Services is that a new ELB is created for each Service, which is costly. So I recommend using an ingress controller such as NGINX and exposing the NGINX ingress controller via a single load balancer on AWS. Then create Ingress resources and use path- or host-based routing to send traffic to the pods behind ClusterIP-type Services.
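As a rough sketch of the Deployment plus LoadBalancer Service approach, assuming a container image my-python-server:latest that listens on port 8000 (both are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-web
spec:
  replicas: 2                          # two copies of the Python web server
  selector:
    matchLabels:
      app: python-web
  template:
    metadata:
      labels:
        app: python-web
    spec:
      containers:
      - name: python-web
        image: my-python-server:latest # placeholder image
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: python-web
spec:
  type: LoadBalancer                   # provisions an ELB on AWS
  selector:
    app: python-web
  ports:
  - port: 80                           # port exposed on the ELB
    targetPort: 8000                   # port the Python server listens on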
I would like to secure my web application running on Kubernetes (EKS). All the nodes attached to the cluster are running on private subnets.
I have one front-end service and a dozen back-end services.
The front-end service is a pod running a container on port 80. It is configured to be attached to an ELB which only accepts traffic on 443, with an HTTPS certificate.
apiVersion: v1
kind: Service
metadata:
  name: service_name
  labels:
    app: service_name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
  - port: 443        # Exposed port
    targetPort: 80   # Container port
  selector:
    app: service_name
  type: LoadBalancer
The back-end services are pods running containers also running on port 80. None of them have been configured to be accessible from outside the cluster. Back-end services talk to each other by pointing to http://service_name (NOT https) as I configured them with this template:
apiVersion: v1
kind: Service
metadata:
  name: service_name
spec:
  ports:
  - port: 80         # Exposed port
    targetPort: 80   # Container port
  selector:
    app: service_name
It all works but is it sufficient?
Should the front-end/back-end containers also use 443 with a wildcard HTTPS certificate? And should that configuration be done inside the containers or in the Services' configurations?
I have done quite a bit of investigation now, and here is what it comes down to.
All my EKS EC2 instances run on private subnets, which means they are not accessible from outside. Yes, by default Kubernetes does not encrypt traffic between pods, which means that an attacker who gained access to my VPC (a rogue AWS engineer, someone who manages to physically access an AWS data center, someone who has compromised my AWS account...) would be able to sniff the network traffic. At the same time, in that scenario the attacker would have access to much more: with access to my AWS account, they could simply download the HTTPS certificate themselves, for instance. It comes down to weighing the effort such an attack requires against the value of your data. If your data includes credit card/payment details or sensitive personal data (date of birth, health details...), SSL encryption is a must.
Anyway, to secure pod-to-pod traffic, there are two options:
1. Update the source code of every pod and add the certificate there. This requires a lot of maintenance if you are running many pods, and certificates expire every other year.
2. Add an extra 'network layer' like https://istio.io/. This adds complexity to your cluster, and in the case of EKS, support from AWS will be 'best effort'. Ideally, you would pay for Istio support.
For the load balancer, I decided to add an ingress to the cluster (NGINX, Traefik...) and set it up with HTTPS. That's critical, as the ELB sits on public subnets.
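To illustrate that last point, a TLS-terminating Ingress could look roughly like this. It is only a sketch, assuming an NGINX ingress controller is installed, the cluster supports networking.k8s.io/v1, a TLS secret named frontend-tls already exists, and the front-end Service exposes plain HTTP on port 80 inside the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # hand this Ingress to the NGINX controller
spec:
  tls:
  - hosts:
    - app.example.com                    # placeholder host
    secretName: frontend-tls             # certificate and key stored as a TLS secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service_name           # the front-end Service from above
            port:
              number: 80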
I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a javascript (node.js) deployment, both exposed via a default ClusterIP Service. I need websocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be with a similar form of my-svc.my-namespace.svc.cluster.local?
About your questions:
1- Kubernetes recommends that you don't fight the generated names: the random suffix ensures that each pod name is unique, and the pod-template-hash part of the name groups all the pods belonging to the same ReplicaSet.
So, just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2- kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus that port, like CLUSTER_IP:PORT, to reach your service.
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
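Those environment variables are injected from the Service, but the same Service is also reachable through cluster DNS, which is usually the more flexible option. For example, from another pod (assuming everything runs in the default namespace, and using curl only as an example client):

# Short name works from within the same namespace
curl http://my-big-deployment:8000

# Fully qualified name works from any namespace
curl http://my-big-deployment.default.svc.cluster.local:8000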