Pod-to-pod communication in Kubernetes - Node.js

The application I work on is deployed on Kubernetes with a frontend (React) and multiple backend services (Express.js). I need my frontend to make fetch API calls to each service. The frontend and each service are deployed in their own Pods. Each Pod is exposed by a Service, so I have a ClusterIP for each of them. The frontend is exposed through a LoadBalancer, so I have an external IP.
The question:
What would my fetch call need to be to access one of these services? (e.g. fetch(...);)
Am I missing anything to make this possible?
I've looked through K8s docs and I couldn't understand what to do.
Can someone please point me in the right direction?

The Pods in the frontend Deployment run an image that is configured to find the specific backend Service.
The key to connecting a frontend to a backend is the backend Service. A Service gives the backend a stable IP address and DNS name so the backend microservice can always be reached. A Service uses selectors to find the Pods it routes traffic to.
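For illustration, a minimal sketch of what such a backend Service might look like (the name hello, the app: hello selector, and the port numbers are assumptions, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: hello            # this name becomes the Service's in-cluster DNS name
spec:
  selector:
    app: hello           # must match the labels on the backend Pods
  ports:
    - protocol: TCP
      port: 80           # port the Service exposes inside the cluster
      targetPort: 3000   # port the Express.js container listens on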
The frontend connects to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is the Service name, for example "hello", which is the value of the name field in the preceding Service configuration.
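So, for code running inside the cluster, the fetch call from the question can simply target the Service by that DNS name. A minimal sketch, assuming the Service above and a hypothetical /api/users route:

// Hypothetical example: "hello" is the backend Service name; use
// http://hello.<namespace>.svc.cluster.local when calling from another namespace.
fetch('http://hello/api/users')
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error('request failed:', err));

Keep in mind that this DNS name only resolves inside the cluster; if the fetch runs in the user's browser (as is typical for a React app), the request has to go through something exposed externally instead, e.g. the frontend's LoadBalancer or an Ingress path that proxies to the backend Service.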
The external IP can be used to reach the frontend Service from outside the cluster.
Once the frontend and backends are connected, you can hit the endpoint by running curl against the external IP of your frontend Service.
curl http://${EXTERNAL_IP}
Follow the instructions here: frontend-backend-connection.
Please take a look: multiple-backend-kubernetes, frontend-backend-connection, kubernetes-services.

Related

Security for applications hosted behind Kubernetes Ingress

I need to host the frontend and backend parts of my application behind Kubernetes Ingress. I would like only the frontend to be able to send requests to the backend, even though both are exposed through the Ingress under one host (but different paths). Is it possible to set something like this up in a Kubernetes cluster, so that no other applications can send requests to the backend? Can you do something like this with Kubernetes security headers?
Within the cluster, you can restrict traffic between services by using Network Policies. E.g. you can declare that service A can send traffic to service B, but that service C cannot send traffic to service B. However, you need to make sure that your cluster uses a CNI with support for Network Policies; Calico is an example of such an add-on.
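A minimal sketch of such a policy, assuming the backend and frontend Pods carry hypothetical app: backend and app: frontend labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only    # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend                     # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend            # only Pods labelled app=frontend may connect

Once a policy selects the backend Pods, any ingress traffic not explicitly allowed by some policy is denied.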
Ingress is useful for declaring what services can receive traffic from outside of the cluster.
Also, a service mesh like Istio can be useful to further enhance this security, e.g. by using an egress proxy, mTLS, and requiring JWT-based authentication between services.

How to access API from Postman with containerized app deployed in Azure Kubernetes Service

I have created a sample API application with Node and Express to be containerized and deployed into Azure Kubernetes Service (AKS). However, I was unable to access the API endpoint through the external IP generated from the service.yml that was deployed.
I used the deployment center within AKS to deploy my application and generate the relevant deployment.yml and service.yml. The following shows the running services, including the external IP.
The following is the response from Postman. I have tried with and without the port number, and with the IP address from kubectl get endpoints, but to no avail. The request eventually times out and I am unable to access the API.
The following is the Dockerfile config.
I have tried searching around for solutions but have not been able to resolve this. I would greatly appreciate it if anyone who has encountered similar issues could share their experience, thank you.
From the client machine where kubectl is installed, run:
kubectl get pods -o wide -n restapicluster5ca2
This will give you all the Pods along with their IPs.
kubectl describe svc restapicluster-bb91 -n restapicluster5ca2
This will give details about the Service. Check LoadBalancer Ingress: for the external IP address, Port: for the port to access, TargetPort: for the port the containers listen on (i.e. 5000 in your case), and Endpoints: to verify that all Pod IPs with the correct port (5000) are listed.
Log into any of the machines in the AKS cluster and do the following:
curl [CLUSTER-IP]:[PORT]/api/posts, e.g. curl 10.-.-.67:5000
Check if you get a response.
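For comparison, a service.yml of type LoadBalancer that forwards external traffic to containers listening on 5000 might look roughly like this (the selector label is hypothetical; your generated file may differ):

apiVersion: v1
kind: Service
metadata:
  name: restapicluster-bb91        # the Service name from the commands above
  namespace: restapicluster5ca2
spec:
  type: LoadBalancer
  selector:
    app: restapi                   # hypothetical label on the API Pods
  ports:
    - port: 80                     # port exposed on the external IP
      targetPort: 5000             # port the Node/Express container listens on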
For reference on using kubectl locally with an AKS cluster, check the links below:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
As I see it, you need to put an Ingress in front of your service. Folks use NGINX etc. for that.
If you want to stay "pure" Azure, you could use AGIC (Application Gateway Ingress Controller) and annotate your service to have it exposed over App Gateway. You could also spin up your own custom App Gateway and hook it up with the AKS Service/LoadBalancer IP.
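As a hedged sketch of the AGIC route (the Ingress name, Service name, and path are placeholders; the annotation shown is the commonly used AGIC ingress class):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restapi-ingress                              # hypothetical name
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: restapi-svc                    # hypothetical Service name
                port:
                  number: 80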

Kubernetes + Socket.io: Pod client -> LoadBalancer service SSL issues

I have a socket.io-based node.js deployment on my Kubernetes cluster with a LoadBalancer-type service through Digital Ocean. The service uses SSL termination using a certificate uploaded to DO.
I've written a pod which acts as a health check to ensure that clients are still able to connect. This pod is node.js using the socket.io-client package, and it connects via the public domain name for the service. When I run the container locally, it connects just fine, but when I run the container as a pod in the same cluster as the service, the health check can't connect. When I shell into the pod, or any pod really, and try wget my-socket.domain.com, I get an SSL handshake error "wrong version number".
Any idea why a client connection from outside the cluster works, a client connection from inside the cluster to a normal external server works, but a client connection from a pod in the cluster to the public domain name of the service doesn't work?
You have to set up an Ingress controller to route traffic from the load balancer to a Service.
The flow of traffic looks like this:
INTERNET -> LoadBalancer -> [ Ingress Controller -> Service]
If you want to use SSL:
You can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate.
You can deploy an ingress controller like NGINX by following these instructions: ingress-controller.
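A minimal sketch of the Secret reference in an Ingress, assuming a TLS Secret named my-socket-tls and the my-socket.domain.com host from the question (the backend Service name is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: socket-ingress                   # hypothetical name
spec:
  tls:
    - hosts:
        - my-socket.domain.com
      secretName: my-socket-tls          # Secret of type kubernetes.io/tls holding the cert and key
  rules:
    - host: my-socket.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: socket-service     # hypothetical backend Service name
                port:
                  number: 80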
Turns out, the issue is with how kube-proxy handles LoadBalancer-type services and requests to them from inside the cluster. When the service is created, kube-proxy adds iptables entries that cause requests originating inside the cluster to skip the load balancer completely, which becomes an issue when the load balancer also handles SSL termination. There is a workaround: add a loadbalancer-hostname annotation, which forces all connections to go through the load balancer. AWS tends not to have this problem because it automatically applies the workaround to its service configurations, but Digital Ocean does not.
Here are some more details:
https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
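As a sketch of that workaround, assuming the hostname annotation documented at the link above (service.beta.kubernetes.io/do-loadbalancer-hostname); the Service name, label, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: socket-service                   # hypothetical name
  annotations:
    # Forces in-cluster clients to resolve this hostname and go through the
    # load balancer instead of the iptables shortcut added by kube-proxy.
    service.beta.kubernetes.io/do-loadbalancer-hostname: "my-socket.domain.com"
spec:
  type: LoadBalancer
  selector:
    app: socket-server                   # hypothetical label on the socket.io Pods
  ports:
    - port: 443
      targetPort: 3000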

Set Kubernetes VMs with NodePorts as backend for an application gateway

I have two VMs that are part of a Kubernetes cluster, and a single service that is exposed as a NodePort (30001). I am able to reach this service on port 30001 through curl on each of these VMs. When I create an Azure application gateway, the gateway does not direct traffic to these VMs.
I've followed the steps for setting up the application gateway as listed in the Azure documentation.
I constantly get a 502 from the gateway.
In order for the Azure Application Gateway to redirect or route traffic to the NodePort, you need to add the backend servers to the backend pool inside the Application Gateway.
There are options to choose Virtual Machines as well.
A good tutorial explaining how to configure an application gateway in Azure and direct web traffic to the backend pool is:
https://learn.microsoft.com/en-us/azure/application-gateway/quick-create-portal
I hope this solves your problem.
So I finally ended up getting on a call with the support folks. It turned out that the UI on Azure's portal is slightly temperamental.
For the gateway to be able to determine which of your backends are healthy, it needs a health probe associated with the HTTP setting (the HTTP setting is the one that determines how traffic from the gateway flows to your backends).
Now, when you are configuring the HTTP setting, you need to select "Use Custom Probe", but when you do that it doesn't show the probe that you have already created. Hence, I figured that wasn't required.
The trick is to first check the box below "Use Custom Probe" which reads "Pick hostname from backend settings", and then click on custom probe; your custom probe will show up and things will work.

How do I expose Kubernetes service to the internet?

I am running a Kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with the Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a bit of a hack: I checked all the listening ports on the master and tried each port with the public IP. I was able to hit the service and get a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load balanced across any of the pods).
My question is: how do I map the external IP to the service IP? I know Kubernetes has a setting that works only on GCE (which I can't use right now), but is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the Service from a JSON or YAML file, use the spec.externalIPs array.
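A minimal sketch of the YAML variant, assuming a hypothetical guestbook-frontend Service; the address in externalIPs is a placeholder for the public IP attached to the master's NIC:

apiVersion: v1
kind: Service
metadata:
  name: guestbook-frontend           # hypothetical name
spec:
  selector:
    app: guestbook-frontend          # hypothetical label on the frontend Pods
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 203.0.113.10                   # placeholder: the node's public-facing IP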
