How do I make my services registered to Consul communicate with each other? - service-discovery

I'm working on a POC for Consul.
Let's say I have a Consul agent running as a server and I have registered two services
(Service1, Service2), which are APIs. How will Service1 and Service2 communicate?

In this example I assume Service1 depends on Service2, and Service2 has registered itself with Consul. You can simply use the HTTP Catalog API to ask Consul which IP + port combinations provide Service2. Without load balancing it is Service1's responsibility to implement the balancing, handle failure cases, and so forth.
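For instance, a minimal Node.js sketch of that lookup, assuming a local Consul agent on the default port 8500 and "service2" as the registered name (the /v1/health/... variant used here filters to passing instances; the plain catalog endpoint is /v1/catalog/service/service2):

const http = require('http');

function discoverService2(callback) {
  http.get('http://127.0.0.1:8500/v1/health/service/service2?passing=true', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      // Each entry describes one healthy instance of service2.
      const instances = JSON.parse(body).map((entry) => ({
        address: entry.Service.Address || entry.Node.Address,
        port: entry.Service.Port,
      }));
      callback(instances); // Service1 picks one (e.g. at random) and calls it.
    });
  });
}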
A more "advanced" option is to use Consul to generate a config for a load balancer, this way service1 can use a hard-coded ip or domain name to contact the load balancer and will will forward the traffic to a healthy instance of service2.

Related

How to get hostname/service name from HTTP request?

I have two Node.js/Express services running on Azure/Kubernetes.
I send an HTTP request to Service1, which forwards the request to Service2.
How does Service2 know that the request came from Service1?
HTTP/POST/GET => Service1 => Service2
console.log(request.headers.host) prints "Service2"
I do not want to modify the request in Service1 by adding extra info/data/fields.
So how does Service2 know it came from Service1?
Update: I thought this way I could reject some requests if they come from other services. Should I do this through a K8s NetworkPolicy?
The Host header is the address that the client is trying to reach, which is not what you want. The source IP of the connection (request.socket.remoteAddress in Express) will give you the caller's pod IP, but not a friendly service name; to get that you would have to somehow map the IP back to a service, which is not simple to do since pods typically get a new, unpredictable IP when they are created.
You could use the Kubernetes API to reverse-look-up IP addresses for a service name, but that's a pretty bad design imo.
Ideally, you would have the client service add headers or data in the body with info about the calling service.
You might also be able to use a service mesh for Kubernetes (e.g. Istio, Linkerd), which can inject information without needing to modify the original service.
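A minimal sketch of the header approach, assuming Node 18+ (for the global fetch); "x-calling-service" is a made-up header name and "service2" is assumed to be the callee's in-cluster DNS name:

const express = require('express');

// Service1 (caller): forwards the request and labels it.
const service1 = express();
service1.use(express.json());
service1.post('/forward', async (req, res) => {
  const response = await fetch('http://service2/endpoint', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-calling-service': 'service1', // identify ourselves to the callee
    },
    body: JSON.stringify(req.body),
  });
  res.status(response.status).send(await response.text());
});
service1.listen(3000); // each service listens in its own pod

// Service2 (callee): reads the label back.
const service2 = express();
service2.use((req, res, next) => {
  console.log('called by:', req.headers['x-calling-service'] || 'unknown');
  next();
});
service2.listen(3000);

Since headers can be spoofed by any client, treat this as labeling rather than authentication; for actual enforcement, a NetworkPolicy or a mesh with mTLS identities is the stronger option.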

Pod-to-pod communication in Kubernetes

The application I use is deployed on Kubernetes with a frontend (React) and multiple backend services (Express.js). I need my frontend to make fetch API calls to each service. The frontend and each service are deployed in their own pods. A Service exposes each pod, so I have a ClusterIP for each of these. The frontend was exposed using a load balancer, so I have an external IP.
The question:
What would my fetch call need to be to access one of these services? (ex. fetch();)
Am I missing anything to make this possible?
I've looked through the K8s docs and I couldn't understand what to do.
Can someone please point me in the right direction?
The Pods in the frontend Deployment run an image that is configured to find the specific backend Service.
The key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses selectors to find the Pods that it routes traffic to.
The frontend connects to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is, for example, "hello", which is the value of the name field in the Service configuration file.
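For reference, a minimal sketch of a backend Service of that shape (the name "hello" follows the Kubernetes frontend/backend tutorial; the selector labels and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: hello              # this becomes the DNS name the frontend uses
spec:
  selector:
    app: hello             # must match the labels on the backend Pods
    tier: backend
  ports:
    - protocol: TCP
      port: 80             # port the Service listens on
      targetPort: http     # named container port on the backend Pods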
The external IP can be used to interact with the frontend service from outside the cluster.
Once the frontend and backend are connected, you can hit the endpoint by running curl against the external IP of your frontend Service.
curl http://${EXTERNAL_IP}
Follow the instructions from here: frontend-backend-connection.
Please take a look: multiple-backend-kubernetes, frontend-backend-connection, kubernetes-services.

Kubernetes + Socket.io: Pod client -> LoadBalancer service SSL issues

I have a socket.io-based Node.js deployment on my Kubernetes cluster with a LoadBalancer-type service through DigitalOcean. The service does SSL termination using a certificate uploaded to DO.
I've written a pod which acts as a health check to ensure that clients are still able to connect. This pod runs Node.js with the socket.io-client package, and it connects via the public domain name of the service. When I run the container locally, it connects just fine, but when I run the container as a pod in the same cluster as the service, the health check can't connect. When I shell into that pod, or any pod really, and try wget my-socket.domain.com, I get an SSL handshake error: "wrong version number".
Any idea why a client connection from outside the cluster works, a client connection from inside the cluster to a normal external server works, but a client connection from a pod in the cluster to the public domain name of the service doesn't work?
You have to set up an Ingress controller to route traffic from a load balancer to a Service.
The flow of traffic looks like this:
INTERNET -> LoadBalancer -> [ Ingress Controller -> Service]
If you want to use SSL:
You can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate.
You can deploy an ingress controller like nginx using the following instructions: ingress-controller.
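To illustrate the Secret reference, a hedged sketch of an Ingress with TLS (the Secret, backend service name, and port are illustrative; the host reuses the question's placeholder domain):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-socket
spec:
  tls:
    - hosts:
        - my-socket.domain.com
      secretName: my-socket-tls   # a Secret of type kubernetes.io/tls holding cert + key
  rules:
    - host: my-socket.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-socket   # illustrative backend Service
                port:
                  number: 3000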
Turns out the issue is with how kube-proxy handles LoadBalancer-type services and requests to them from inside the cluster. When the service is created, kube-proxy adds iptables entries that cause requests from inside the cluster to skip the load balancer completely, which becomes an issue when the load balancer also handles SSL termination. There is a workaround: add a loadbalancer-hostname annotation, which forces all connections to go through the load balancer. AWS tends not to have this problem because they apply the workaround to their service configurations automatically, but DigitalOcean does not.
Here are some more details:
https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
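Based on the annotations document above, a hedged sketch of the workaround on the Service (selector and ports are illustrative; the hostname reuses the question's placeholder domain):

apiVersion: v1
kind: Service
metadata:
  name: my-socket
  annotations:
    # Makes the Service status report a hostname instead of the LB IP, so
    # in-cluster clients stop short-circuiting past the load balancer.
    service.beta.kubernetes.io/do-loadbalancer-hostname: "my-socket.domain.com"
spec:
  type: LoadBalancer
  selector:
    app: my-socket
  ports:
    - port: 443
      targetPort: 3000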

How to expose multiple Kubernetes services through a single Azure load balancer?

I want to expose multiple services through a single load balancer. Each service points to exactly one pod.
So far I have tried:
kubectl expose pod <podName> --port=7000
and, in the Azure portal, manually set either load balancing rules or inbound NAT rules pointing to the exposed pod.
So far I can connect to the pod using the external IP and the specified port.
It depends on how you want to separate services on the same IP. The two ways that come to my mind are:
use NodePort services and then map some ports from your LB to that port on your cluster nodes. This gives separation by port.
way more interesting in my opinion is to use an Ingress/Ingress controller. You would expose only the IC on standard ports like 80 & 443, and it will map to your services by hostname and URI (see the sketch below).
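A hedged sketch of that fan-out (hostname, service names, and ports are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /service1
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 7000
          - path: /service2
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 7000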
In Azure Container Service, Azure will use a load balancer to expose k8s services, like this:
root@k8s-master-E27AE453-0:~# kubectl get svc
NAME         CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
jasonnginx   10.0.41.194   52.226.33.200   8080:32011/TCP   4m
kubernetes   10.0.0.1      <none>          443/TCP          11m
mynginx      10.0.144.49   40.71.230.60    80:32366/TCP     5m
yournginx    10.0.147.28   40.71.226.23    80:32289/TCP     4m
root@k8s-master-E27AE453-0:~#
Via the Azure portal, check the Azure load balancer frontend IP configuration (different IP addresses):
ACS will create load balancer rules and add frontend IP addresses automatically.
How to expose multiple Kubernetes services through a single Azure load balancer?
ACS exposes k8s services through that Azure load balancer. Do you mean you want to expose k8s services with a single public IP address?
If you want to expose k8s services with a single public IP address then, as Radek said, maybe you should use the Nginx Ingress Controller.
Thanks guys. I think I have found a viable solution to my problem. I should have been more specific about what I'm going to do.
I want to host a game server over UDP, so a Kubernetes ingress controller is not really an option, since they rarely support UDP routing.
I also don't need to host a multitude of services on a single machine; 1-4 pods per node is probably the maximum.
I found that using:
hostNetwork: true
in the YAML config actually works pretty well for this scenario.
I get an IP directly from the host node. I can then select the matching node within the load balancer and create a NAT or load balancing rule.
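A hedged sketch of that setup (image name and UDP port are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: game-server
spec:
  hostNetwork: true          # the pod shares the node's network namespace and IP
  containers:
    - name: server
      image: example/game-server:latest
      ports:
        - containerPort: 7777
          protocol: UDP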
Create multiple NodePort-type services and fix the nodePort.
On the cloud load balancer, set up multiple listener groups. The listen port is the same as the service's nodePort, and the targets are all the worker nodes.
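A hedged sketch of one such service with a pinned nodePort (name, ports, and protocol are illustrative, following the UDP game-server scenario above):

apiVersion: v1
kind: Service
metadata:
  name: game-server
spec:
  type: NodePort
  selector:
    app: game-server
  ports:
    - port: 7777
      targetPort: 7777
      nodePort: 30777      # fixed; must fall in the cluster's NodePort range (default 30000-32767)
      protocol: UDP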

Access Azure Service Fabric application from the internet

I think I'm missing something that is on the surface.
I have created an SF cluster in Azure. It has a load balancer and a network with 3 VMs (nodes), which have IP addresses in 10.0.0.0/16.
When I ask the load balancer for the application endpoint, it responds with a node IP address (I captured packets with Wireshark), but I can't access it because the network is private.
A bit more info about my case: 3 A0 instances, net.tcp:20001 endpoints, firewall allows connections, ports opened and listening; I have a public IP address assigned to the balancer and a probe for the service port.
On your load balancer you will need to assign a public IP address. You can find some really good detailed guides in the documentation for this.
OK, here it is:
When you want to communicate with the service from outside the cluster, just use the load balancer IP; you don't need the naming service communication. The load balancer has probes that check ports on each node in the cluster and forward your request to a random instance that hosts the service you are asking for.
When you want one microservice to communicate with another within the cluster, you have 2 options:
ask the naming service through the load balancer and then communicate with the service directly;
if you know for sure that the service runs on every node in your cluster, you can just communicate with localhost directly.
When you want to communicate from a separate VM to a microservice in the cluster from within the cluster's virtual network (you can connect a WebApp to the cluster using VPN), then you can ask the naming service through the load balancer, but using the Service Fabric HTTP API, because you will not be able to use the Service Fabric classes on a VM which doesn't have the Service Fabric SDK installed. Here is an example of service resolving: https://github.com/lAnubisl/ServiceFabricHttpApiClient
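For the first case, a minimal Node.js sketch of dialing the service through the load balancer's public IP on the question's net.tcp port (the IP is a placeholder; an LB rule and probe for port 20001 are assumed to exist, as described above):

const net = require('net');

// 203.0.113.10 is a placeholder for the load balancer's public IP.
const socket = net.connect({ host: '203.0.113.10', port: 20001 }, () => {
  console.log('connected to the service through the load balancer');
  socket.end();
});
socket.on('error', (err) => console.error('connect failed:', err.message));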
