Kubernetes + Socket.io: Pod client -> LoadBalancer service SSL issues - node.js

I have a socket.io-based node.js deployment on my Kubernetes cluster, exposed via a LoadBalancer-type service through Digital Ocean. The service performs SSL termination using a certificate uploaded to DO.
I've written a pod that acts as a health check to ensure that clients are still able to connect. This pod runs node.js with the socket.io-client package and connects via the public domain name of the service. When I run the container locally, it connects just fine, but when I run the container as a pod in the same cluster as the service, the health check can't connect. When I shell into that pod (or any pod, really) and try wget my-socket.domain.com, I get an SSL handshake error: "wrong version number".
Any idea why a client connection from outside the cluster works, and a connection from inside the cluster out to a normal server works, but a client connection from a pod in the cluster to the public domain name of the service doesn't?

You have to set up an Ingress Controller to route traffic from a load balancer to a Service.
The flow of traffic looks like this:
INTERNET -> LoadBalancer -> [ Ingress Controller -> Service]
If you want to use SSL:
You can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate.
You can deploy an ingress controller such as nginx using the following instructions: ingress-controller.
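Putting those two points together, a minimal sketch of such an Ingress (the Secret and Service names below are placeholders; the host is the public domain from the question) might look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-socket-ingress
spec:
  tls:
  - hosts:
    - my-socket.domain.com
    secretName: my-tls-secret              # placeholder; a kubernetes.io/tls Secret holding the cert and key
  rules:
  - host: my-socket.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-socket-service        # placeholder backend Service for the socket.io pods
            port:
              number: 80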

It turns out the issue is with how kube-proxy handles LoadBalancer-type services and requests to them from inside the cluster. When the service is created, kube-proxy adds iptables entries that cause requests originating inside the cluster to skip the load balancer completely, which becomes a problem when the load balancer also handles SSL termination. There is a workaround: add a loadbalancer-hostname annotation, which forces all connections to go through the load balancer. AWS tends not to have this problem because it automatically applies the workaround to its service configurations, but Digital Ocean does not.
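On DigitalOcean that workaround is the service.beta.kubernetes.io/do-loadbalancer-hostname annotation. A rough sketch (the service name, selector, and ports below are placeholders; the hostname is the public domain from the question):

apiVersion: v1
kind: Service
metadata:
  name: my-socket-service                    # placeholder name for the socket.io Service
  annotations:
    # Forces in-cluster clients to go through the load balancer hostname
    # instead of being short-circuited by kube-proxy's iptables rules.
    service.beta.kubernetes.io/do-loadbalancer-hostname: "my-socket.domain.com"
spec:
  type: LoadBalancer
  selector:
    app: my-socket                           # placeholder; must match the pod labels
  ports:
  - port: 443
    targetPort: 3000                         # placeholder container port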
Here are some more details:
https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md

Related

Pod to pod communication in kubernetes

The application I use is deployed on Kubernetes with a frontend (React) and multiple back-end services (Express.js). I need my frontend to make fetch API calls to each service. The frontend and each service are deployed in their own pods. A Service exposes each pod, so I have a ClusterIP for each of them. The frontend is exposed using a load balancer, so I have its external IP.
The question:
What would my fetch call need to be to access one of these services? (ex. fetch();)
Am I missing anything to make this possible?
I've looked through K8s docs and I couldn't understand what to do.
Can someone please point me in the right direction?
The Pods in the frontend Deployment run an image that is configured to find the specific backend Service.
The key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses selectors to find the Pods that it routes traffic to.
The frontend connects to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is, for example, "hello", which is the value of the name field in the backend Service configuration.
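For illustration (the name "hello", the selector labels, and the ports below are example values, not taken from your cluster), such a backend Service could look like:

apiVersion: v1
kind: Service
metadata:
  name: hello                  # this name becomes the in-cluster DNS name
spec:
  selector:
    app: hello                 # must match the labels on the backend pods
    tier: backend
  ports:
  - protocol: TCP
    port: 80                   # port the Service exposes
    targetPort: 8080           # port the backend container listens on

The frontend can then call it with a cluster-internal URL, e.g. fetch("http://hello/api/..."), since the Service name resolves via cluster DNS.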
External IP can be used to interact with the frontend service from outside the cluster.
Once the frontend and backends are connected, you can hit the endpoint by using the curl command against the external IP of your frontend Service.
curl http://${EXTERNAL_IP}
Follow the instructions from here: frontend-backend-connection.
Please take a look: multiple-backend-kubernetes, frontend-backend-connection, kubernetes-services.

How to access API from Postman with containerized app deployed in Azure Kubernetes Service

I have created a sample API application with Node and Express to be containerized and deployed into Azure Kubernetes Services (AKS). However, I was unable to access the API endpoint through the external API generated from the service.yml that was deployed.
I have made use of the deployment center within AKS to deploy my application to AKS and generate the relevant deployment.yml and service.yml. The following shows the services running, including the external IP.
The following is the response from Postman. I have tried with and without the port number, and with the IP address from kubectl get endpoints, but to no avail. The request eventually times out and I am unable to access the API.
The following is the Dockerfile config.
I have tried searching around for solutions but was not able to resolve this. I would greatly appreciate it if anyone who has encountered similar issues could share their experience. Thank you.
From the client machine where kubectl is installed, run:
kubectl get pods -o wide -n restapicluster5ca2
This will list all the pods along with their IPs.
kubectl describe svc restapicluster-bb91 -n restapicluster5ca2
This will give details about the service. Check LoadBalancer Ingress: for the external IP address, Port: for the port to access, TargetPort: for the port the containers listen on (i.e. 5000 in your case), and Endpoints: to verify that every pod IP shows up with the correct port (5000).
Log into any of the machines in the AKS cluster and do the following:
curl [CLUSTER-IP]:[PORT]/api/posts, e.g. curl 10.-.-.67:5000
Check whether you get a response.
For reference to use kubectl locally with AKS cluster check the links below
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
As I see it, you need to put an ingress in front of your service. Folks use NGINX, etc. for that.
If you want to stay "pure" Azure, you could use AGIC (Application Gateway Ingress Controller) and annotate your Ingress to have it exposed over Application Gateway. You could also spin up your own custom Application Gateway and hook it up with the AKS Service/LoadBalancer IP.
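As a sketch (the service name comes from the kubectl commands above; the path and port are placeholders), an Ingress picked up by AGIC is typically just annotated with the azure/application-gateway ingress class:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restapi-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway   # tells AGIC to expose this via App Gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: restapicluster-bb91    # the Service from the kubectl describe step
            port:
              number: 80                 # placeholder; use the Port shown by kubectl describe svc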

Loadbalancing python webserver in kubernetes pods

I was going through Docker and Kubernetes. I want to create two Python web servers, access them using a public URL, and have requests balanced between the two servers.
I created one Python server and initially deployed it with Docker containers on an AWS EC2 instance, so when I send a request I use ec2publicip:port. This is working, which means I have one web server up, and I will do the same for the second server.
My question is: if I deploy this with Kubernetes, is there any way to load balance the Python web servers across the pods? If so, can someone tell me how to do this?
If you create two replicas of the pod via a Kubernetes Deployment and create a Service of type LoadBalancer, an ELB on AWS is automatically provisioned. Then, whenever a request comes to the ELB, it distributes the traffic across the replicas of the pod. With a LoadBalancer-type service you get advanced load-balancing capabilities at layer 7. Without a LoadBalancer-type service or an ingress, you get round-robin load balancing at layer 4, offered by kube-proxy.
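A minimal sketch of that setup (the image name, labels, and ports below are placeholders for your Python server):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-web
spec:
  replicas: 2                              # two copies of the Python web server
  selector:
    matchLabels:
      app: python-web
  template:
    metadata:
      labels:
        app: python-web
    spec:
      containers:
      - name: python-web
        image: myrepo/python-web:latest    # placeholder image
        ports:
        - containerPort: 8080              # placeholder port the server listens on
---
apiVersion: v1
kind: Service
metadata:
  name: python-web
spec:
  type: LoadBalancer                       # provisions an ELB on AWS
  selector:
    app: python-web
  ports:
  - port: 80
    targetPort: 8080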
The problem with LoadBalancer-type services is that a new ELB is created for each service, which is costly. So I recommend using an ingress controller such as Nginx and exposing the Nginx ingress controller via a single LoadBalancer on AWS. Then create Ingress resources and use path- or host-based routing to send traffic to pods behind a ClusterIP-type service.

Configuring an AKS load balancer for HTTPS access

I'm porting an application that was originally developed for the AWS Fargate container service to AKS under Azure. In the AWS implementation an application load balancer is created and placed in front of the UI microservice. This load balancer is configured to use a signed certificate, allowing https access to our back-end.
I've done some searches on this subject and how something similar could be configured in AKS. I've found a lot of different answers to this for a variety of similar questions but none that are exactly what I'm looking for. From what I gather, there is no exact equivalent to the AWS approach in Azure. One thing that's different in the AWS solution is that you create an application load balancer upfront and configure it to use a certificate and then configure an https listener for the back-end UI microservice.
In the Azure case, when you issue the "az aks create" command the load balancer is created automatically. There doesn't seem to be a way to do much configuration, especially as it relates to certificates. My impression is that the default load balancer created by AKS is ultimately not the mechanism to use for this. Another option might be an application gateway, as described here. I'm not sure how to adapt this discussion to AKS. The UI pod needs to be the ultimate target of any traffic coming through the application gateway, but the gateway uses a different subnet than what is used for the pods in the AKS cluster.
So I'm not sure how to proceed. My question is: Is the application gateway the correct solution to providing https access to a UI running in an AKS cluster or is there another approach I need to use?
You are right, the default Load Balancer created by AKS is a layer 4 LB and doesn't support SSL offloading. The equivalent of the AWS Application Load Balancer in Azure is the Application Gateway. As of now there is no option in AKS that allows you to choose the Application Gateway instead of a classic load balancer, but as alev said, there is an ongoing project, still in preview, that will allow you to deploy a special ingress controller that drives the routing rules on an external Application Gateway based on your ingress rules. If you really need something that is production ready, here are your options:
Deploy an Ingress controller like NGINX, Traefik, etc. and use cert-manager to generate your certificate.
Create an Application Gateway and manage your own routing rule that will point to the default layer 4 LB (k8s LoadBalancer service or via the ingress controller)
We implemented something similar lately, and we decided to manage our own Application Gateway because we wanted to do the SSL offloading outside the cluster and because we needed the WAF feature of the Application Gateway. We were able to automatically manage the routing rules inside our deployment pipeline. We will probably switch to the Application Gateway ingress project once it is production ready.
Certificate issuing and renewal are not handled by the ingress, but using cert-manager you can easily add your own CA or use Let's Encrypt to automatically issue certificates when you annotate the ingress or service objects. The http_application_routing addon for AKS is perfectly capable of working with cert-manager; it can even be further configured using ConfigMaps (addon-http-application-routing-nginx-configuration in the kube-system namespace). You can also look at the initial support for Application Gateway as ingress here.
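For example (the issuer name, host, and Service name below are placeholders), an Ingress that cert-manager picks up for automatic certificate issuance might look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder ClusterIssuer created separately
spec:
  tls:
  - hosts:
    - ui.example.com                 # placeholder hostname
    secretName: ui-tls               # cert-manager stores the issued certificate in this Secret
  rules:
  - host: ui.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui-service         # placeholder name of the UI Service
            port:
              number: 80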

Accessing Mongo replicas in kubernetes cluster from AWS lambdas

Some of my data is in Mongo replicas that are hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda that is running in the same VPC and subnet as the Kubernetes minions hosting the Mongo containers. The Lambda, as well as the Kubernetes minions, run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that enable access to the appropriate replicas. When I try to connect using this URL, it fails to resolve the Mongo replica URL (e.g. mongo-rs-2-svc). The same URL works fine for my web service that is running in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client that I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}
I tried replacing mongo-rs-x-svc with their internal IP addresses in the URL. In that case the name resolution error disappeared, but I got another error:
{"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I could use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways, which would increase the cost. Is there a way to access the web service using an internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
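A rough sketch of one of those replica Services switched to NodePort (the selector labels and the nodePort value below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-1-svc
spec:
  type: NodePort
  selector:
    app: mongo-rs-1              # placeholder; must match the replica's pod labels
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 32017              # placeholder; must fall in the cluster's NodePort range (default 30000-32767)

The same change would be made to mongo-rs-2-svc and mongo-rs-3-svc, and the Lambda would then use the node IPs with those nodePort values in its connection string.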
Coreyphobrien's answer is correct. You subsequently asked how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this, use the --vpc-config parameter when creating or updating the Lambdas. This will create a virtual network interface in the VPC that allows the Lambda access. For details, see this.
After that, you should be able to set the AWS security group for your instances so that the NodePort will only be accessible from another security group, the one used for your Lambdas' network interface.
This blog discusses an example in more detail.
