I'm having issues with internal DNS/service resolution within Kubernetes and I can't seem to track the problem down. I have an api-gateway pod running Kong, which calls other services by their internal service name, e.g. srv-name.staging.svc.cluster.local. This was working fine up until recently, when I attempted to deploy three more services into two namespaces, staging and production.
The first service works as expected when called at booking-service.staging.svc.cluster.local, but the same code doesn't work for the production copy of that service, and the other two services don't work in either namespace.
The behavior I'm getting is a timeout. If I curl these services from my gateway pod, they all time out, apart from the first service deployed (booking-service.staging.svc.cluster.local). When I call these services from another container within the same pod, they work as expected.
I have Node services set up for each service I wish to expose to the client side.
Here's an example Kubernetes deployment:
---
# API
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{SRV_NAME}}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{SRV_NAME}}
    spec:
      containers:
      - name: booking-api
        image: microhq/micro:kubernetes
        args:
        - "api"
        - "--handler=rpc"
        env:
        - name: PORT
          value: "8080"
        - name: ENV
          value: {{ENV}}
        - name: MICRO_REGISTRY
          value: "kubernetes"
        ports:
        - containerPort: 8080
      - name: {{SRV_NAME}}
        image: eu.gcr.io/{{PROJECT_NAME}}/{{SRV_NAME}}:latest
        imagePullPolicy: Always
        command: [
          "./service",
          "--selector=static"
        ]
        env:
        - name: MICRO_REGISTRY
          value: "kubernetes"
        - name: ENV
          value: {{ENV}}
        - name: DB_HOST
          value: {{DB_HOST}}
        - name: VERSION
          value: "{{VERSION}}"
        - name: MICRO_SERVER_ADDRESS
          value: ":50051"
        ports:
        - containerPort: 50051
          name: srv-port
---
apiVersion: v1
kind: Service
metadata:
  name: booking-service
spec:
  ports:
  - name: api-http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: booking-api
I'm using go-micro (https://github.com/micro/go-micro) with the Kubernetes pre-configuration. Again, this works absolutely fine in one case but not in the others, which leads me to believe it's not code related. It also works fine locally.
When I do an nslookup from another pod, it resolves the name and finds the cluster IP of the internal Node service as expected. When I attempt to curl that IP address, I get the same timeout behavior.
I'm using Kubernetes 1.8 on Google Cloud.
I don't understand why you think this is an issue with internal DNS/service resolution within Kubernetes, since the DNS lookup works but querying the resolved IP gives you a connection timeout.
If you curl these services from outside the pod, they all time out (apart from the first service deployed), regardless of whether you use the IP or the domain name.
When you call these services from another container within the same pod, they do work as expected.
It seems to be an issue with the connection between pods rather than a DNS issue, so I would focus the troubleshooting in that direction, but correct me if I am wrong.
Can you perform the classic networking troubleshooting (ping, telnet, traceroute) from a pod toward the IP returned by the DNS lookup, and from one of the containers that is timing out toward one of the other pods, and update the question with the results?
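A minimal sketch of what that could look like, using the gateway pod and booking-service from the question (the pod name and cluster IP are placeholders):
kubectl exec -it <api-gateway-pod> -- sh
# inside the container:
nslookup booking-service.staging.svc.cluster.local
# probe the cluster IP returned above directly
curl -v --max-time 5 http://<cluster-ip>:80
# if telnet is available in the image, check raw TCP connectivity too
telnet <cluster-ip> 80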
I have created a sample Spring Boot app and did the following:
1. Created a Docker image.
2. Created an Azure Container Registry and pushed the image to it.
3. Created a cluster in Azure Kubernetes Service and deployed the app successfully, choosing the external endpoint option.
Kubernetes external end point
For service-to-service calls I don't want to use an IP like http://20.37.134.68:80 but a custom name instead. How can I do that?
Also, if I choose the internal option, is there any way to replace the name?
I tried editing the YAML with an endpoint name property, but that failed. Any ideas?
I think you're mixing up some concepts, so I'll try to explain and help you get what you want.
When you deploy a container image in a Kubernetes cluster, in most cases you will use a Pod or Deployment spec, which is basically a YAML file with all your deployment/pod configuration: name, image name, and so on. Here is an example of a simple echo-server app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - name: http
          containerPort: 80
Note the name fields in the file. This is where you configure the names of your deployment and of your containers.
In order to expose your application, you need to use a Service. Services can be internal or external. Here you can find all the service types.
For an internal service, use the service type ClusterIP (the default); it means your pods are reachable only from within the cluster. To reach your service from other pods, use the service name in the form my-svc.my-namespace.svc.cluster-domain.example.
Here is an example of a service for the deployment above:
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
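With this Service in place, another pod in the cluster should be able to reach it by its DNS name, for example (a sketch; the default namespace is an assumption):
kubectl exec -it <some-other-pod> -- curl http://echo-svc.default.svc.cluster.local/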
To expose your service externally, you can use a service of type NodePort or LoadBalancer, or use an ingress.
In the ingress rules you can configure your DNS name, add path rules if you want, or even configure HTTPS for your application. There are a few ingress options in Kubernetes, and one of the most popular is nginx-ingress.
Here is an example of how to configure a simple ingress for our example service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
  name: echo-ingress
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: echo-svc
          servicePort: 80
In the example, I'm using the DNS name myapp.mydomain.com, which means you will only be able to reach your application by that name.
After creating the ingress, you can see the external IP with the command kubectl get ing, and you can then create an A record for it in your DNS server.
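The output will look roughly like this (the address here is a made-up example):
NAME           HOSTS                ADDRESS        PORTS   AGE
echo-ingress   myapp.mydomain.com   35.190.10.20   80      2m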
I'm trying to expose a pod using a LoadBalancer service. The service was created successfully and an external IP was assigned. When I tried accessing the external IP in the browser, the site did not load and I got ERR_CONNECTION_TIMED_OUT. Please see the YAML below:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: service-api
  name: service-api
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30868
    port: 80
    protocol: TCP
    targetPort: 9080
    name: http
  selector:
    name: service-api
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I also tried creating the service using the Kubernetes CLI, still with no luck.
It looks like I had faulty DNS on my k8s cluster, and in order to resolve the issue I had to restart the cluster. Before restarting the cluster, though, you can try deleting the pods in kube-system to refresh the DNS pods; if it's still not working, I suggest restarting the cluster.
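For example, the DNS pods can usually be recycled with something like this (the label assumes the standard kube-dns/CoreDNS deployment and may differ on your cluster):
kubectl delete pods -n kube-system -l k8s-app=kube-dns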
I am attempting to create a service for creating training datasets using the Prodigy UI tool. I would like to do this using a Kubernetes cluster which is running in Azure cloud. My Prodigy UI should be reachable on 0.0.0.0:8880 (on the container).
As such, I created a deployment as follows:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: prodigy-dply
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prodigy_pod
  template:
    metadata:
      labels:
        app: prodigy_pod
    spec:
      containers:
      - name: prodigy-sentiment
        image: bdsdev.azurecr.io/prodigy
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "prodigy spacy textapi -F training_recipe.py"]
        ports:
        - name: prodigyport
          containerPort: 8880
This should (should being the operative word here) expose that 8880 port at the pod level, aliased as prodigyport.
Following that, I have created a Service as below:
kind: Service
apiVersion: v1
metadata:
  name: prodigy-service
spec:
  type: LoadBalancer
  selector:
    app: prodigy_pod
  ports:
  - protocol: TCP
    port: 8000
    targetPort: prodigyport
At this point, when I run the associated kubectl create -f <deployment>.yaml and kubectl create -f <service>.yaml, I get an ExternalIP and associated Port: 10.*.*.*:34672.
This is not reachable by browser, and I'm assuming I have a misunderstanding of how my browser would interact with this Service, Pod, and the underlying Container. What am I missing here?
Note: I am willing to accept that Kubernetes may not be the tool for the job here; it seems enticing because of the ease of scaling and of updating images to reflect more recent configurations.
You can find the public IP address (LoadBalancer Ingress) with this command:
kubectl get service azure-vote-front
The result looks something like this:
root#k8s-master-79E9CFFD-0:~# kubectl get service azure
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure 10.0.136.182 52.224.219.190 8080:31419/TCP 10m
Then you can browse to it using the external IP and port, like this:
curl 52.224.219.190:8080
You can also find the load balancer rules via the Azure portal:
Hope this helps.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services prodigy-service
The IP address is listed next to LoadBalancer Ingress.
Also, you can use port forwarding to access your pod:
kubectl port-forward <pod_name> 8880:8880
After that, you can access the Prodigy UI at localhost:8880 in your browser.
I am running a Node.js image in my Kubernetes Pod, exposing a specific port (9080), and running Traefik as a sidecar container acting as a reverse proxy. How do I specify the Traefik route from the Deployment template?
Deployment
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: "nodeJS-image"
        name: web
        ports:
        - containerPort: 9080
          name: http-server
      - image: "traefik-image"
        name: traefik-proxy
        ports:
        - containerPort: 80
          name: traefik-proxy
        - containerPort: 8080
          name: traefik-ui
        args:
        - --web
        - --kubernetes
If I understand correctly, you want to forward requests hitting the Traefik container to the Node.js application living in the same pod. Given that the application is configured statically from Traefik's perspective, you can simply mount a proper file provider configuration into the Traefik pod (presumably via a ConfigMap) pointing at the sidecar container.
The simplest way to achieve this (as documented) is to append the following file provider configuration at the bottom of Traefik's TOML configuration file:
[file]
[backends.backend.servers.server]
url = "http://127.0.0.1:9080"
[frontends.frontend]
backend = "backend"
[frontends.frontend.routes.route]
host = "machine-echo.example.com"
If you mount the TOML configuration file into the Traefik pod under a path other than the default one (/etc/traefik.toml), you will also need to pass the --configFile option in the manifest referencing the correct location of the file.
After that, any request hitting the Traefik container on port 80 with a host header of machine-echo.example.com should get forwarded to the Node.js side car container on port 9080.
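A minimal sketch of how that could look when mounted via a ConfigMap (the ConfigMap name, key, and mount path are assumptions, not part of the original setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
data:
  traefik.toml: |
    [file]
    [backends.backend.servers.server]
    url = "http://127.0.0.1:9080"
    [frontends.frontend]
    backend = "backend"
    [frontends.frontend.routes.route]
    host = "machine-echo.example.com"
The traefik-proxy container in the web-controller Deployment would then mount it and point Traefik at it, roughly:
# added to the traefik-proxy container:
volumeMounts:
- name: traefik-conf
  mountPath: /config
args:
- --configFile=/config/traefik.toml
- --web
- --kubernetes
# added to the pod spec:
volumes:
- name: traefik-conf
  configMap:
    name: traefik-conf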
I am very new to Azure, Kubernetes, even Docker itself, and playing with the system to learn and evaluate for a possible deployment later. I have so far dockerized my services and successfully deployed them and made the web frontend publicly visible using a service with type: LoadBalancer.
Now I would like to add TLS termination and have learned that for that I am supposed to configure an ingress controller with the most commonly mentioned one being nginx-ingress-controller.
Strictly monkeying examples and then afterwards trying to read up on the docs I have arrived at a setup that looks interesting but does not work. Maybe some kind soul can point out my mistakes and/or give me pointers on how to debug this and where to read more about it.
I have kubectl apply'd the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend-deployment
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-service
  namespace: kube-system
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: default-http-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  namespace: kube-system
data:
  # enable-vts-status: 'true'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller-deployment
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-service
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ingress-controller
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: default-http-backend-service
          servicePort: 80
This gave me two pods:
c:\Projects\Release-Management\Azure>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
<some lines removed>
kube-system default-http-backend-deployment-3108185104-68xnk 1/1 Running 0 39m
<some lines removed>
kube-system nginx-ingress-controller-deployment-4106313651-v7p03 1/1 Running 0 24s
It also gave me two new services. Note that I have also configured the default-http-backend-service with type: LoadBalancer; this is for debugging only. I have included my web frontend, which is called webcms:
c:\Projects\Release-Management\Azure>kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
<some lines removed>
default webcms 10.0.105.59 13.94.250.173 80:31400/TCP 23h
<some lines removed>
kube-system default-http-backend-service 10.0.106.233 13.80.68.38 80:31639/TCP 41m
kube-system nginx-ingress-controller-service 10.0.33.80 13.95.30.39 443:31444/TCP,80:31452/TCP 37m
And finally an ingress:
c:\Projects\Release-Management\Azure>kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
kube-system nginx-ingress * 10.240.0.5 80 39m
No errors that I can immediately detect. I then went to the Azure dashboard and looked at the load balancer and its rules, and that looks good to my (seriously untrained) eye. I did not touch these; the load balancer and the rules were created by the system. There is a screenshot here:
https://qvwx.de/tmp/azure-loadbalancer.png
But unfortunately it does not work. I can curl my webcms-service:
c:\Projects\Release-Management\Azure>curl -v http://13.94.250.173
* Rebuilt URL to: http://13.94.250.173/
* Trying 13.94.250.173...
* TCP_NODELAY set
* Connected to 13.94.250.173 (13.94.250.173) port 80 (#0)
<more lines removed, success>
But neither default-http-backend nor the ingress work:
c:\Projects\Release-Management\Azure>curl -v http://13.80.68.38
* Rebuilt URL to: http://13.80.68.38/
* Trying 13.80.68.38...
* TCP_NODELAY set
* connect to 13.80.68.38 port 80 failed: Timed out
* Failed to connect to 13.80.68.38 port 80: Timed out
* Closing connection 0
curl: (7) Failed to connect to 13.80.68.38 port 80: Timed out
(ingress gives the same with a different IP)
If you read this far: Thank you for your time and I would appreciate any hints.
Marian
Kind of a trivial thing, but it'll save you some $$$: the default-http-backend is not designed to be outside facing, and thus should not have type: LoadBalancer -- it is merely designed to 404 so the Ingress controller can universally /dev/null traffic for Pod-less Services.
Moving slightly up the triviality ladder, and for extreme clarity: I do not think what you have is wrong, but I did want to offer you something to decide if you want to change. Typically the contract for a Pod's container is to give an ideally natural-language name to the port ("http", "https", "prometheus", whatever) that maps onto the port of the underlying image. Then set targetPort: in the Service to that name, not the number, which lets the container move its port number without breaking the Service-to-Pod contract. The nginx-ingress Deployment's container:ports: agrees with me on this one.
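As a quick sketch using the webcms Service from the question (the port name and container port here are assumptions, not taken from your manifests):
# in the webcms Deployment's container spec:
ports:
- name: http
  containerPort: 8080   # whatever port the image actually listens on
# in the webcms Service:
ports:
- name: http
  port: 80
  targetPort: http      # refers to the named container port, not a number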
Now, let's get into the parts that may be contributing to your system not behaving as you wish.
I can't prove it right now, but the presence of containers:hostPort: is suspicious without hostNetwork: true. I'm genuinely surprised kubectl didn't whine, because those config combinations are a little weird.
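For reference, if host-level ports really were the intent, hostNetwork sits at the pod spec level, roughly like this (a sketch, not a recommendation):
spec:
  hostNetwork: true
  containers:
  - name: nginx-ingress-controller
    ports:
    - containerPort: 80
      hostPort: 80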
I guess the troubleshooting step would be to get on the Node (that is, something within the cluster which is not a Pod -- you could do it with a separate VM within the same subnet as your Node, too, if you wish) and then curl to port 31452 of the Node upon which the nginx-ingress-controller Pod is running.
kubectl get nodes will list all available Nodes, and
kubectl get -o json pod nginx-ingress-controller-deployment-4106313651-v7p03 | jq -r '.status.hostIP' should produce the specific VM's IP address, if you don't already know it. Err, I just realized from your prompt you likely don't have jq -- but I don't know PowerShell well enough to know its JSON-querying syntax.
Then, from any Node: curl -v http://${that_host_IP_value}:31452 and see what materializes. It may be something, or it may be the same "wha?!" that the LoadBalancer is giving you.
As for the Ingress resource specifically, again default-http-backend is not supposed to have an Ingress resource -- I don't know if it hurts anything because I've never tried it, but I'd also bet $1 it is not helping your situation, either.
Since you already have a known working Service with default:webcms, I would recommend creating an Ingress resource in the default namespace with pretty much exactly what your current Ingress resource is, but pointed at webcms instead of default-http-backend. That way your Ingress controller will actually have something to target which isn't the default backend.
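Such an Ingress could look roughly like this (a sketch based on your existing resource; webcms-ingress is a made-up name, and servicePort 80 matches the webcms Service shown in your listing):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webcms-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: webcms
          servicePort: 80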
If you haven't already seen it, adding --v=2 to the controller's arguments will cause the Pod to emit the actual diff of its nginx config changes, which can be unbelievably helpful in tracking down config misunderstandings.
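For example, appended to the controller args from your manifest:
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
- --v=2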
I'm so sorry that you're having to do battle with Azure, Ingress controllers, and rough-around-the-edges documentation for your first contact with Kubernetes. It really is amazing when you get all the things set up correctly, but it is a pretty complex piece of machinery, for sure.