How do I know which pod processes the request? - azure

I used this Deployment.yml to create pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: price-calculation-deployment
  labels:
    app: price-calculation
spec:
  replicas: 2
  selector:
    matchLabels:
      app: price-calculation
  template:
    metadata:
      labels:
        app: price-calculation
    spec:
      containers:
      - name: price-calculation
        image: ${somewhere}/price-calculation:latest
        ports:
        - containerPort: 80
          protocol: TCP
      - name: pc-mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
          protocol: TCP
      imagePullSecrets:
      - name: acr-auth
And, later on, I used this Service.yml to expose the port externally.
apiVersion: v1
kind: Service
metadata:
  name: price-calculation-service
spec:
  type: LoadBalancer
  ports:
  - port: 5004
    targetPort: 80
    protocol: TCP
  selector:
    app: price-calculation
Finally, both are working now. Good.
As I configured the LoadBalancer in the Service.yml, there should be a LoadBalancer that dispatches requests to the 2 replicas/pods.
Now, I want to know which pod handles a given request. How do I find that out?
Thanks!!!

Well, the easiest way is to make the pods write their identity into the response; that way you know which pod responded. Another way is to implement distributed tracing with Zipkin/Jaeger, which will give you deep insight into the network flows.
I believe Kubernetes doesn't offer any sort of built-in request tracing.
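A minimal sketch of the first approach: the Downward API can inject the pod name into the container as an environment variable, which the application can then echo back in its responses. This is only a fragment to merge into the Deployment above, and the env var name POD_NAME is an arbitrary choice:
spec:
  template:
    spec:
      containers:
      - name: price-calculation
        image: ${somewhere}/price-calculation:latest
        env:
        - name: POD_NAME        # arbitrary name; read it in the app and add it to the response
          valueFrom:
            fieldRef:
              fieldPath: metadata.name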

Append the pod name to the response that gets rendered in the user's browser. That is how you know which pod processed the request.

You may view pod logs to see the requests made to the pod:
kubectl logs my-pod # dump pod logs (stdout)
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
Or add it to the response in the application itself; for instance, in a Node.js app it might look like this:
const http = require('http');
const os = require('os');
var handler = function(request, response) {
  response.writeHead(200);
  response.end("You have hit " + os.hostname() + "\n");
};
var app = http.createServer(handler);
app.listen(8080);
Then you can use curl to test out your service and get a response:
Request:
curl http://serviceIp:servicePort
Response:
You have hit podName
Depending on the app's programming language, just find a module/library that provides a utility method to get the hostname, and return it in the response for debugging purposes.

Related

Not able to connect to an AKS Service Endpoint

Following on from my question here, I now have the issue that I am unable to connect to the external endpoint. My YAML file is here:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dockertest20190205080020
    image: dockertest20190205080020.azurecr.io/dockertest
    ports:
    - containerPort: 443
metadata:
  name: my-test
  labels:
    app: app-label
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: app-label
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 443
I can now see an external IP when I issue the command:
kubectl get service test-service --watch
However, if I try to connect to that IP I get a timeout exception. I've tried running the dashboard, and it says everything is running fine. What can I do next to diagnose this issue?
In this case, the problem was solved by exposing the container on port 80 and routing external port 6666 to it.
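Expressed as a manifest, that fix looks roughly like this (a sketch based on the comment above; the container itself must now listen on port 80 and declare containerPort: 80):
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: app-label
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 6666        # external port on the load balancer
    targetPort: 80    # port the container listens on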

kubernetes service discovery with specific endpoint (DNS)

I am using an AKS cluster on Azure. I am trying to discover the service using DNS (http://my-api.default.svc.cluster.local:3000/) but it's not working (This site can’t be reached). With the service IP endpoint everything works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: test.azurecr.io/my-api:latest
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
kubectl describe svc my-api
Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
From the second pod:
kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
So curl from the second pod works; however, the website running there, which uses the same endpoint, is not connecting to the service.
After fixing the indentation of your YAML file, I was able to launch the deployment and service successfully, and DNS resolution worked fine.
Differences:
Fixed indentation
Used the test1 namespace instead of default
Used containerPort 80 instead of 3000
Used my image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - image: leodotcloud/swiss-army-knife
        name: my-api
        ports:
        - containerPort: 80
          protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps:
Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (filtering on the second pod's IP).
From inside the second pod, run curl or dig command using the FQDN.
Check if the DNS query packets are reaching the kube-dns containers.
If not, check for networking issues.
If the DNS resolution is working, then start tcpdump inside your application container and check if the curl packet is reaching the container.
Check the source and destination IP address of the packets.
Check the iptables rules on the hosts.
Check sysctl settings.
If you use a Deployment to deploy your application onto the cluster, where it will be consumed via a Service, you should have no need at all to manually set Endpoints. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when it makes sense (an external service consumed from within the cluster), you need to make sure your Endpoints ports definition fully matches the one on the Service (including protocol and, potentially, name). Incomplete matching is the most common reason for endpoints not showing up as part of a service.
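For illustration, a sketch of that external-service case with manually managed Endpoints; the Service has no selector, and the port name and protocol on both objects must line up (the names and IP here are made up):
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - name: db
    protocol: TCP
    port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10         # the external endpoint
  ports:
  - name: db               # must match the Service port name
    protocol: TCP
    port: 5432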
From the above discussion, what I understood is that you want to expose a service, but not by using the IP address.
A Service can be exposed in many ways; you should look at the Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This will create a load balancer and map your service to it.
Later, you can add this load balancer to the DNS mapping service provided by Azure to give it the domain name you like, e.g. http://my-api.example.com:3000.
Also, I would like to add: if you define your ports as follows:
ports:
- name: http
  port: 80
  targetPort: 3000
This will redirect traffic coming in on port 80 to port 3000, and your service call will look much cleaner, e.g. http://my-api.example.com.
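Putting both suggestions together, the full Service would look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - name: http
    protocol: TCP
    port: 80           # port exposed by the load balancer
    targetPort: 3000   # port the container listens on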

Kubernetes SSL for Service of Type LoadBalancer

I am deploying a Socket.IO and NodeJs based application on Kubernetes. I found that with the following configuration of Service I can maintain client stickiness very easily,
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - image: gcr.io/app_name/my-service:latest
        imagePullPolicy: Always
        name: my-service
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    traefik.backend.loadbalancer.stickiness: "true"
  labels:
    app: my-service
spec:
  type: LoadBalancer
  sessionAffinity: "ClientIP"
  ports:
  - name: my-service
    port: 80
    protocol: TCP
    targetPort: 4000
  selector:
    app: my-service
Now, I am stuck adding SSL certs. I cannot find any documentation or resource on adding SSL certs to a Service of type LoadBalancer. Is it possible? If it is, how can I do it?
If it is not possible at all, is there any other way? I am using GKE on GCP.
Can anyone help me with this? thanks.
It looks a bit odd that you mention Traefik in your annotations. Still, managing a certificate for this case is next to impossible: it would have to be supported at the cloud-provider level based on annotations, because there is no explicit way to bind a TLS certificate to a particular service/port, not to mention automated certificates like with cert-manager/ingress.
To achieve that, you should use some kind of API gateway / ingress controller that can handle this for you instead of exposing your service directly, or implement TLS support in your application itself.
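For example, with an Ingress controller installed, TLS termination could be sketched like this, assuming a TLS secret named my-service-tls already exists in the namespace and the host name is illustrative:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service
spec:
  tls:
  - hosts:
    - my-service.example.com
    secretName: my-service-tls    # kubernetes.io/tls secret holding cert and key
  rules:
  - host: my-service.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
In that setup the Service itself would normally be ClusterIP or NodePort rather than LoadBalancer, since the Ingress controller is the component exposed to the outside.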

Side-car Traefik container route to ports in Kubernetes

I am running a NodeJS image in my Kubernetes Pod, exposing a specific port (9080), and running Traefik as a side-car container acting as a reverse proxy. How do I specify the Traefik route from the Deployment template?
Deployment
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: "nodeJS-image"
        name: web
        ports:
        - containerPort: 9080
          name: http-server
      - image: "traefik-image"
        name: traefik-proxy
        ports:
        - containerPort: 80
          name: traefik-proxy
        - containerPort: 8080
          name: traefik-ui
        args:
        - --web
        - --kubernetes
If I understand correctly, you want to forward requests hitting the Traefik container to the Node.js application living in the same pod. Given that the application is configured statically from Traefik's perspective, you can simply mount a proper file provider configuration into the Traefik pod (presumably via a ConfigMap) pointing at the side car container.
The simplest way to achieve this (as documented) is to append the following file provider configuration directly at the bottom of Traefik's TOML configuration file:
[file]
[backends.backend.servers.server]
url = "http://127.0.0.1:9080"
[frontends.frontend]
backend = "backend"
[frontends.frontend.routes.route]
host = "machine-echo.example.com"
If you mount the TOML configuration file into the Traefik pod under a path other than the default one (/etc/traefik.toml), you will also need to pass the --configFile option in the manifest referencing the correct location of the file.
After that, any request hitting the Traefik container on port 80 with a host header of machine-echo.example.com should get forwarded to the Node.js side car container on port 9080.
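For completeness, a sketch of how the sidecar could be pointed at such a file mounted from a ConfigMap; the ConfigMap name and mount path here are assumptions, not something from the question:
    spec:
      containers:
      - image: "traefik-image"
        name: traefik-proxy
        args:
        - --web
        - --kubernetes
        - --configFile=/config/traefik.toml   # non-default location, so pass it explicitly
        volumeMounts:
        - name: traefik-config
          mountPath: /config
      volumes:
      - name: traefik-config
        configMap:
          name: traefik-config                # assumed ConfigMap holding traefik.toml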

Kubernetes nginx ingress-controller on Azure not reachable

I am very new to Azure, Kubernetes, even Docker itself, and playing with the system to learn and evaluate for a possible deployment later. I have so far dockerized my services and successfully deployed them and made the web frontend publicly visible using a service with type: LoadBalancer.
Now I would like to add TLS termination and have learned that for that I am supposed to configure an ingress controller with the most commonly mentioned one being nginx-ingress-controller.
Strictly monkeying examples and then afterwards trying to read up on the docs I have arrived at a setup that looks interesting but does not work. Maybe some kind soul can point out my mistakes and/or give me pointers on how to debug this and where to read more about it.
I have kubectl apply'd the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend-deployment
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-service
  namespace: kube-system
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: default-http-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  namespace: kube-system
data:
  # enable-vts-status: 'true'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller-deployment
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-service
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ingress-controller
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: default-http-backend-service
          servicePort: 80
This gave me two pods:
c:\Projects\Release-Management\Azure>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
<some lines removed>
kube-system default-http-backend-deployment-3108185104-68xnk 1/1 Running 0 39m
<some lines removed>
kube-system nginx-ingress-controller-deployment-4106313651-v7p03 1/1 Running 0 24s
Also two new services. Note that I have also configured the default-http-backend-service with type: LoadBalancer, this is for debugging only. I have included my web-frontend which is called webcms:
c:\Projects\Release-Management\Azure>kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
<some lines removed>
default webcms 10.0.105.59 13.94.250.173 80:31400/TCP 23h
<some lines removed>
kube-system default-http-backend-service 10.0.106.233 13.80.68.38 80:31639/TCP 41m
kube-system nginx-ingress-controller-service 10.0.33.80 13.95.30.39 443:31444/TCP,80:31452/TCP 37m
And finally an ingress:
c:\Projects\Release-Management\Azure>kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
kube-system nginx-ingress * 10.240.0.5 80 39m
No errors that I can immediately detect. I then went to the Azure Dashboard and looked at the loadbalancer and its rules and that looks good to my (seriously untrained) eye. I did not touch these, the loadbalancer and the rules were created by the system. There is a screenshot here:
https://qvwx.de/tmp/azure-loadbalancer.png
But unfortunately it does not work. I can curl my webcms-service:
c:\Projects\Release-Management\Azure>curl -v http://13.94.250.173
* Rebuilt URL to: http://13.94.250.173/
* Trying 13.94.250.173...
* TCP_NODELAY set
* Connected to 13.94.250.173 (13.94.250.173) port 80 (#0)
<more lines removed, success>
But neither default-http-backend nor the ingress work:
c:\Projects\Release-Management\Azure>curl -v http://13.80.68.38
* Rebuilt URL to: http://13.80.68.38/
* Trying 13.80.68.38...
* TCP_NODELAY set
* connect to 13.80.68.38 port 80 failed: Timed out
* Failed to connect to 13.80.68.38 port 80: Timed out
* Closing connection 0
curl: (7) Failed to connect to 13.80.68.38 port 80: Timed out
(ingress gives the same with a different IP)
If you read this far: Thank you for your time and I would appreciate any hints.
Marian
Kind of a trivial thing, but it'll save you some $$$: the default-http-backend is not designed to be outside facing, and thus should not have type: LoadBalancer -- it is merely designed to 404 so the Ingress controller can universally /dev/null traffic for Pod-less Services.
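In other words, the backend Service only needs to be reachable from inside the cluster, so something like this sketch is enough:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-service
  namespace: kube-system
spec:
  type: ClusterIP          # no external IP, and no cloud load balancer billed
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: default-http-backend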
Moving slightly up the triviality ladder, and for extreme clarity: I do not think what you have is wrong, but I did want to offer you something to decide if you want to change. Typically the contract for a Pod's container is to give an ideally natural-language name to the port ("http", "https", "prometheus", whatever) that maps onto the port of the underlying image, and then to set targetPort: in the Service to that name rather than the number, which gives the container the ability to move the port number without breaking the Service-to-Pod contract. The nginx-ingress Deployment's containers:ports: agrees with me on this one.
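Applied to the default-http-backend pair, that contract would look roughly like this (fragments of the Deployment and Service specs only):
# Deployment container spec: give the port a name
ports:
- name: http
  containerPort: 80

# Service spec: target the name instead of the number
ports:
- port: 80
  targetPort: http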
Now, let's get into the parts that may be contributing to your system not behaving as you wish.
I can't prove it right now, but the presence of containers:hostPort: is suspicious without hostNetwork: true. I'm genuinely surprised kubectl didn't whine, because those config combinations are a little weird.
I guess the troubleshooting step would be to get on the Node (that is, something within the cluster which is not a Pod -- you could do it with a separate VM within the same subnet as your Node, too, if you wish) and then curl to port 31452 of the Node upon which the nginx-ingress-controller Pod is running.
kubectl get nodes will list all available Nodes, and
kubectl get -o json pod nginx-ingress-controller-deployment-4106313651-v7p03 | jq -r '.status.hostIP' should produce the specific VM's IP address, if you don't already know it. Err, I just realized from your prompt you likely don't have jq -- but I don't know PowerShell well enough to know its JSON-querying syntax.
Then, from any Node: curl -v http://${that_host_IP_value}:31452 and see what materializes. It may be something, or it may be the same "wha?!" that the LoadBalancer is giving you.
As for the Ingress resource specifically, again default-http-backend is not supposed to have an Ingress resource -- I don't know if it hurts anything because I've never tried it, but I'd also bet $1 it is not helping your situation, either.
Since you already have a known working Service with default:webcms, I would recommend creating an Ingress resource in the default namespace with pretty much exactly what your current Ingress resource is, but pointed at webcms instead of default-http-backend. That way your Ingress controller will actually have something to target which isn't the default backend.
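A sketch of such an Ingress (the host name is illustrative; webcms serves plain HTTP on port 80 per the service listing above):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webcms-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: webcms.example.com     # illustrative host; use one you control
    http:
      paths:
      - path: /
        backend:
          serviceName: webcms
          servicePort: 80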
If you haven't already seen it, adding --v=2 will cause the Pod to emit the actual diff of its nginx config change, which can be unbelievably helpful in tracking down config misunderstandings.
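That flag goes into the controller container's args in your manifest, e.g.:
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
        - --v=2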
I'm so sorry that you're having to do battle with Azure, Ingress controllers, and rough-around-the-edges documentation for your first contact with Kubernetes. It really is amazing when you get all the things set up correctly, but it is a pretty complex piece of machinery, for sure.
