Can Istio use an existing Zipkin?

Can Istio send tracing information to an external Zipkin instance? It seems like Istio has hard-coded the Zipkin address in several places, expecting it in the istio-system namespace (by referencing it without a namespace).

I believe you should be able to as long as you use the fully qualified domain name. For instance, zipkin.mynamespace.svc.cluster.local.
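For example, here is a sketch of how that might look with an IstioOperator overlay, assuming a reasonably recent Istio release; the exact location of the Zipkin address setting has moved between versions, so treat this as illustrative rather than definitive:

```yaml
# Hypothetical IstioOperator overlay: point the mesh at an external Zipkin
# using its fully qualified in-cluster name. Check the docs for your Istio
# version, as this key has lived in different places across releases.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: external-zipkin
spec:
  meshConfig:
    defaultConfig:
      tracing:
        zipkin:
          address: zipkin.mynamespace.svc.cluster.local:9411  # 9411 is Zipkin's default port
```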

Related

Create Azure Kubernetes ingress controller to limit 1 connection per pod

I'm using Azure Kubernetes Service and have a unique scenario where I want to allow only one connection per pod. I used the "advanced" networking option to set up my cluster such that each pod has its own internal IP address. The problem is, all of these pods are behind a public load balancer IP address, and the load balancer decides where to route the traffic.
I need to either A) set up a rule such that the load balancer only allows one connection per pod and routes new traffic to new pods, 1 per request, or B) set up an ingress controller to do the same. I think B) is the solution but I have no clear path on how to do this. I see that you can route by URL, but you'd have to set up a rule for each pod, which is definitely not a good idea. Is there any way to set up a rule that just limits 1 session per pod? Or some other method that works similarly.
Thanks.
This is a very good question. I'd like to add my input based on the two solutions you suggested in the second part of your question, though you are not limited to those; people use more advanced ways of establishing connections to their pods as well.
A.) It depends on how you are routing traffic from the load balancer to your pods. In general, each pod inside a Kubernetes cluster gets its own IP by default, so the question is how you manage the traffic flow from the external world to each individual pod. I would not advise this approach, though: it is quite likely a pod dies and a new pod with a new IP gets created, and you would then have to manually route traffic to the newly created pod, which is exactly why people opt for Kubernetes rather than manually managing Docker containers on a VM. But I might be wrong; you might have a different, more complex system, so it is debatable.
B.) Like you said and researched, Ingress and Services are also a solution. Unfortunately there are no ingress controller annotations available as of now that limit connections to one per pod, but URL-based routing, as you already identified, would be one part of the solution. The overhead is that it effectively becomes a single Service per pod and a subdomain for each Service: a single Deployment with a unique Service associated with it, and a unique subdomain per Service (see the sketch just below). It's a complex setup, but doable.
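As a rough sketch of that "one Service per pod" idea, assuming the pods are managed by a StatefulSet (whose controller labels each pod with statefulset.kubernetes.io/pod-name), a Service can select exactly one pod; all names here are placeholders:

```yaml
# Hypothetical per-pod Service: selects a single StatefulSet pod by the
# pod-name label the StatefulSet controller adds automatically.
# You would create one such Service (and one subdomain) per pod.
apiVersion: v1
kind: Service
metadata:
  name: myapp-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: myapp-0
  ports:
  - port: 80
    targetPort: 8080
```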
Edit Based on Comments (Removed HPA)
Based on the information you added, I can suggest a different approach, although it is arguably the wrong way to use Kubernetes; again, it is debatable depending on the kind of system you are planning to build. Run a proxy server (HAProxy, NGINX, or your favourite) standalone on one of the nodes and route traffic from the outside world directly to your pods using their internal IPs in your proxy configuration. You can then route based on the number of connections and so on from the proxy config. Remember this is not a Kubernetes pod; it is a standalone service running on the node's OS. Be careful, though: when the node dies, the pod dies, and so does the pod's IP address.
But this is something we shouldn't really do. I am sure in a couple of weeks or so you will get the bigger picture of K8s and its moving parts, and you will see why this is wrong: there is a lot of manual setup overhead.
Hope this is helpful.
I'm fairly new to the k8s world, but as I understand it you should be able to do this with the nginx.org/max-conns annotation in an NGINX Ingress Controller:
https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
That way you should be able to limit the number of connections to 1 per 'upstream' or pod.
I.e. the load balancer directs traffic to NGINX, and NGINX proxies the traffic to the pods with at most one concurrent request per pod.
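A sketch of how that could look (the hostname and backend service are placeholders; this assumes NGINX Inc.'s Ingress Controller, which the linked docs describe, rather than the community kubernetes/ingress-nginx project):

```yaml
# Hypothetical Ingress using the nginx.org/max-conns annotation to cap
# connections at 1 per upstream pod; new traffic goes to other pods.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: one-conn-per-pod
  annotations:
    nginx.org/max-conns: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
```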

How to pick Kubernetes Ingress controller

What do I need to take into consideration when picking a Kubernetes ingress controller / load balancer?
For example, I found this fancy implementation of Azure Application Gateway as an ingress controller. But what are the benefits of using this setup compared to the simpler nginx or maybe Traefik?
Ingress Controllers are based on a contract: they allow external traffic to reach internal workloads via a predefined YAML representation (the Ingress resource).
Each Ingress Controller provides further features on top of that contract, like the NGINX Ingress Controller with WebSocket support, Traefik with easy HTTP/2 and gRPC support, or the newer HAProxy controller that also allows TCP/L4 external routing. Some controllers, like Traefik (and HAProxy as well), provide extras such as a simple circuit-breaker implementation configured through Kubernetes annotations.
First of all, you have to understand which features you really want and need, and finally also consider your own ability to debug the whole stack, even though the community is always available to support other people.
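Whichever controller you choose, the contract itself is the standard Ingress resource. A minimal sketch (the hostname, class name and backend service are placeholders):

```yaml
# The "contract" every ingress controller implements: route traffic for a
# hostname/path to a backing Service. Only the class name changes per controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx   # swap for the controller you choose
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```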

Kubernetes AKS bind domain

A question regarding AKS: on each CD release, Kubernetes gives a random IP address to my services.
I would like to know how to bind a domain to that IP.
Can someone give me some link or article to read?
You have two options.
You can either deploy a Service with type=LoadBalancer which will provision a cloud load balancer. You can then point your DNS entry to that provisioned LoadBalancer with (for example) a CNAME.
More information on this can be found here
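A minimal sketch of that first option (the name, selector and ports are placeholders); once the cloud provider assigns the external IP, point your DNS record at it:

```yaml
# Sketch of a LoadBalancer Service: the cloud provider provisions a load
# balancer and assigns an external IP, which your DNS record can target.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  # loadBalancerIP: 1.2.3.4   # optionally pin a pre-provisioned static IP, if your cluster supports it
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```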
Your second option is to use an Ingress Controller with an Ingress Resource. This offers much finer-grained routing via hostnames and URL paths. You'll probably need to deploy your ingress controller's pod/service with type=LoadBalancer, though, to make it externally accessible.
Here's an article which explains how to do ingress on Azure with the nginx-ingress-controller

Microservices - how to find DNS IP?

In the world of microservices, endpoints should not (must not) be hardcoded. One of the best ways to achieve this is to have a DNS service and let each microservice register itself while starting. That way, whenever microservice A wants to communicate with microservice B, it just asks DNS for the endpoints where B currently listens.
What I do not understand is: How microservices know where the DNS lives?
Basically, DNS is just a 'special' service, and I can have one or multiple instances of it, right? So I should not hardcode its endpoint either, or should I? And let's say I do: what if the DNS instance is moved to a different location? Do I have to manually change its location in the configuration?
Does anyone happen to know how to design this? (Or can anyone point me to a document where this is explained? Although there is a lot of information about microservices and DNS, I cannot find this particular piece anywhere; maybe it's just too trivial and I am the only one who does not get it.)
Manual setup of DNS is possible, as stated by the other answers, but I would recommend using an infrastructure that supports service discovery in all respects. For example, Kubernetes has built-in DNS support and makes it very easy to expose a service that can consist of any number of Pods.
An infrastructure technology like Kubernetes will also make many other aspects of the microservices architectural style easier to implement, including high availability and scalability.
Please see the official docs for some more information.
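As a sketch of how this looks in Kubernetes (the service name and namespace are placeholders), a plain Service gets a stable DNS name such as orders.default.svc.cluster.local, so microservice A never needs to know pod IPs:

```yaml
# Hypothetical Service for an "orders" microservice. Cluster DNS resolves
# orders.default.svc.cluster.local to this Service's ClusterIP, so callers
# never hardcode pod addresses.
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: default
spec:
  selector:
    app: orders
  ports:
  - name: http          # named ports also get SRV records,
    port: 80            # e.g. _http._tcp.orders.default.svc.cluster.local
    targetPort: 8080
```

The named port also gives you an SRV record, which ties in with the SRV-record approach described further below.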
DHCP solves this problem. When a host boots it sends a broadcast DHCP message. The DHCP server responds with many values, one of which is the location of DNS servers.
In the case of microservices, the host OS (or container host) will be configured for DNS via DHCP. The microservice code uses the OS DNS functions to resolve addresses.
https://en.m.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
You can use your local network to discover services, via DHCP and the like, but that requires that all services are already "registered" within that DNS server.
Microservices can find each other via service discovery, server-side or client-side. If you choose client-side service discovery, you can use tools like Consul, which provides a bunch of great features, one of which is a DNS endpoint that allows queries via SRV records with <serviceName>.service.consul domain names.
Consul has its own DNS endpoint; you can configure your services to use it (usually on port 8600 locally, as Consul agents run locally).
But you can also configure an actual DNS server to forward queries to Consul, so that you can easily mix service discovery driven by Consul with manually set-up services within a BIND instance or similar.
Known hostname solution. The fixed part would be the service domain name, for instance xservice.com. You can query this host using standard DNS tools (e.g., dig in your shell, etc).
Finally, in the DNS zone bound to xservice.com you then add an SRV record with further details.
An SRV record lists all the service details, including:
the symbolic service name;
the canonical hostname of the machine providing the service;
the TCP (or UDP) port on which the service is available.
There is other information as well; please see Wikipedia for the complete list.
Please keep in mind this is a somewhat static solution. If you are looking for a more dynamic one, then Oswin's answer might be a better fit :-)

Kubernetes Subdomain for each service

How do I add multiple Ingresses or load balancers in Kubernetes for separate services? Here is the post I followed to create an Ingress for my sub-domain. Is there any way we can specify the same IP address created by GCE when launching multiple Ingress resources?
I am using GCE for hosting my cluster. Is there a better way to handle this scenario: exposing services under the sub-domains www.app1.domain.com and www.app2.domain.com, which are entirely different apps, with two Ingress resources that point to these two specific services using the same external IP address?
From the post I was able to create the Ingress, but I was unable to specify the external IP address for it.
Any help is much appreciated, thank you.
You can just define multiple Ingress resources and apply them to Kubernetes; they don't have to be in the same YAML file. All Ingress resources handled by the same ingress controller share the same proxy, and traffic is routed to the desired service via the defined hostname and path.
I am not sure what you mean by the external IP address.
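If the goal is for both hostnames to resolve to one external IP, here is a sketch; hostnames and service names are placeholders, and it assumes both Ingresses are served by the same in-cluster controller (e.g. an NGINX ingress controller exposed through a single LoadBalancer Service), so both DNS names can point at that controller's one external IP:

```yaml
# Two separate Ingress resources, one per app, served by the same controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.app1.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.app2.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
```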