Kubernetes: How to confirm DNS is working

I have successfully created a Kubernetes pod and service using Minikube on Windows, but I would now like to ensure that DNS is working correctly.
The DNS service is shown as running:
.\kubectl get pod -n kube-system
This shows me the kube-dns pod is running. I also have the DNS add-on shown as running.
So I then want to verify that DNS is working. Ideally, I want to test that pods that have a service on top of them can look up the service by its DNS name.
But I started simple, like this, where I get my running pod.
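That step is just a pod listing, something like:
.\kubectl get pods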
So now that I have my pod name, I want to try a simple DNS lookup in it, using the following command:
.\kubectl exec simple-sswebapi-pod-v1-f7f8764b9-xs822 -- nslookup google.com
Here I am using kubectl exec to try to run nslookup in the pod that was found (which, I should point out, is running, as shown above).
But I get this error
Why would it not be able to find nslookup inside the pod? All the key things seem to be OK:
The kube-dns pod is running (as shown above)
The DNS add-on is installed and running (as shown above)
What am I missing? Is there something else I need to enable for DNS lookups to work inside my pods?

To do it like this, your container needs to include the command you want to use inside the built image.
Sidenote: kubectl debug is coming to Kubernetes in the near future (https://github.com/kubernetes/kubernetes/issues/45922), which will help with cases like this by letting you attach a custom container to an existing pod and debug in it.
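In the meantime, a common way to test DNS without baking tools into your own image is to run a throwaway pod from an image that ships nslookup, such as busybox, and query a name that always exists (busybox:1.28 is often suggested because nslookup in newer busybox images is known to misbehave):
.\kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default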

So, more on this: I installed busybox into a pod to allow me to use nslookup, and this enabled me to do the lookup.
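Roughly like this, where busybox is the pod I installed (the service name is just my example app, so treat it as illustrative):
.\kubectl exec busybox -- nslookup simple-sswebapi-service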
So this looks cool, but should I not also be able to ping that service, either by its IP address or by its DNS name, which seems to be resolving just fine as shown above?
If I ping google.com inside the busybox command prompt all is OK, but when I ping either the IP address of this service or its DNS name, it never gets anywhere.
DNS lookup is clearly working. What am I missing?
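For what it's worth, a service's ClusterIP is a virtual IP implemented by kube-proxy, and it generally does not answer ICMP, so a failing ping against a service does not by itself mean the service is broken. Probing the actual service port says more, for example (service name and port are illustrative):
.\kubectl exec busybox -- wget -qO- http://simple-sswebapi-service:8080/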

Related

Google Cloud firewall: exposing a Docker port

I managed to successfully deploy a Docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / Express app that simply res.json()s "Hi there!" on the root path. It is listening on port 3000.
I think the deploy process was this:
Built the Docker image from the Node.js / Express source.
Ran the container on the local command line, publishing the ports correctly (see the sketch after this list). It works locally.
Tagged the image with the correct project ID / zone.
Pushed it to the VM. I think I pushed the image rather than the container. Is this a problem?
SSHed into the VM, ran docker ps, and saw the running container with the correct image tag.
Used curl on the command line (I am using a zsh terminal) as well as a browser to check network requests. I am getting a connection refused error.
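As a sketch of step 2's port publishing (the image name here is illustrative, not my actual tag):
docker run -d -p 3000:3000 my-node-app   # publish the app's port 3000 on host port 3000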
As far as I can tell as a beginner, the Google firewall settings are open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I get a connection refused error whenever I try to contact the IP address, either with my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my Docker container is running in the VM (and the source code works on my machine).
EDIT:
Updated Firewall Rules with Port 3000 Ingress:
You need a firewall rule that permits traffic to tcp:3000.
Preferably allow it from just your host's IP (Google "what's my IP?" and use that), but for now you can temporarily allow any IP with 0.0.0.0/0.
Firewall rules can also be scoped to just the VM running your container, but I'd not worry about that initially.
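A sketch with the gcloud CLI (the rule name is illustrative; tighten --source-ranges to your own IP once things work):
gcloud compute firewall-rules create allow-node-3000 --allow=tcp:3000 --source-ranges=0.0.0.0/0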

How to configure Kubernetes so that I can issue commands against the master machine from my laptop?

I'm trying to set up a cluster of one machine for now. I know that I can get the API server running and listening on some ports.
I am looking to issue commands against the master machine from my laptop.
KUBECONFIG=/home/slackware/kubeconfig_of_master kubectl get nodes should send a request to the master machine, hit the API server, and get back a list of the running nodes.
However, I am hitting issues with permissions. One is similar to x509: certificate is valid for 10.61.164.153, not 10.0.0.1. Another is a 403 if I hit the kubectl proxy --port=8080 that is running on the master machine.
I think two solutions are possible, with (B) being preferable:
A. Add my laptop's IP address to the list of accepted IP addresses that the API server or its certificates hold. How would I do that? Is that something I can set in kubeadm init?
B. Add 127.0.0.1 to the list of accepted IP addresses that the API server or its certificates hold. How would I do that? Is that something I can set in kubeadm init?
I think B would be better, because I could create an SSH tunnel from my laptop to the remote machine and allow my teammates (if I ever have any) to do the same.
Thank you,
Slackware
You should add --apiserver-cert-extra-sans 10.0.0.1 to your kubeadm init command.
Refer to https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options
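For example, covering both the address from the error and 127.0.0.1 for the SSH-tunnel approach (option B):
sudo kubeadm init --apiserver-cert-extra-sans=10.0.0.1,127.0.0.1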
You should also use a config file:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
apiServer:
  certSANs:
  - 10.0.0.1
You can find all relevant info here: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
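To apply it, pass the file to kubeadm init (the filename is illustrative); note that kubeadm generally refuses to mix --config with other configuration flags, so when using a file the SANs belong in the certSANs list:
sudo kubeadm init --config kubeadm-config.yaml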

How to resolve a service's name to IP from a kubernetes node's perspective?

I have a kubernetes cluster running on GCE.
I created a setup in which I have 2 pods glusterfs-server-1 and glusterfs-server-2 that are my gluster server.
The 2 glusterfsd daemons communicate correctly, and I am able to create replicated volumes, write files to them, and see the files correctly replicated on both pods.
I also have 1 service called glusterfs-server that automatically balances the traffic between my 2 glusterfs pods.
From inside another pod, I can issue mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume and everything works perfectly.
Now, what I really want is to be able to use the glusterfs volume type inside my .yaml files when creating a container:
...truncated...
spec:
  volumes:
  - name: myvolume
    glusterfs:
      endpoints: glusterfs-server
      path: myvolume
...truncated...
Unfortunately, this doesn't work. I was able to find out why it doesn't work:
When connecting directly to a Kubernetes node, issuing mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume does not work. This is because, from my node's perspective, glusterfs-server does not resolve to any IP address (that is, getent hosts glusterfs-server returns nothing).
Also, due to how glusterfs works, even using the service's IP directly will fail, as glusterfs will still eventually try to resolve the name glusterfs-server (and fail).
Now, just for fun, and to validate that this is the issue, I edited my node's resolv.conf (putting in my kube-dns IP address and search domains) so that it would correctly resolve my pod and service IP addresses. I was then finally able to successfully issue mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume on the node, and I was also able to create a pod using a glusterfs volume (using the PodSpec above).
Now, I'm fairly certain that modifying my node's resolv.conf is a terrible idea: since Kubernetes has the notion of namespaces, if 2 services in 2 different namespaces share the same name (say, glusterfs-service), a getent hosts glusterfs-service could resolve to 2 different IPs living in 2 different namespaces.
So my question is:
What can I do for my node to be able to resolve my pod/service IP addresses?
You can modify resolv.conf and use fully qualified service names to avoid collisions. They usually look like this: service_name.default.svc.cluster.local and service_name.kube-system.svc.cluster.local, or whatever the namespace is named.
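As a sketch of what that node-side change could look like (the kube-dns ClusterIP differs per cluster; 10.0.0.10 below is an assumption, check kubectl get svc -n kube-system for yours):
# /etc/resolv.conf on the node
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
With that in place, the fully qualified name sidesteps the namespace ambiguity:
mount -t glusterfs glusterfs-server.default.svc.cluster.local:/myvolume /mnt/myvolume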

How to assign an external IP to a Linux server on gcloud?

For the last several days I've been struggling with a problem.
I have two instances (Ubuntu Server) on gcloud and I want to assign each of them its external IP.
I can ping and ssh to my instances, but when I try telnet, it does not connect.
On gcloud, all instances have one internal IP and one external IP.
But the instances themselves do not know their external IP; I get it from the gcloud console.
How could I assign it to them?
I've also tried sudo ifconfig eth0:0 130.211.95.1 up
You can do something like this to add the external IP to a local interface:
ip addr add 130.211.95.1/32 dev eth0 label eth0:shortlabel
Replace 'add' with 'del' to remove it once you are done with it.
shortlabel can be any string up to a certain (short) length.
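You can verify the address was added with:
ip addr show dev eth0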
Update: also see this GCE support issue for related information.
A feature request for this is already filed on the GCE public issue tracker; however, it is not yet implemented. You can star it to get a notification if any update is posted on the thread.
Could you also mention what your use case is? Then I can probably provide you with a workaround.

Routing a TLD to 127.0.0.1 in a Docker image

Foreword
My Rails app cares about the hostname. For example, when the request comes from domain-a.dev it behaves differently than when the request comes from domain-b.dev. I want to test this behaviour, and have therefore routed the complete *.dev TLD to 127.0.0.1 on my local machine, so I can set the domain in my tests to whatever I want and it always hits my local test server.
This is necessary because my tests use Selenium, which launches an external browser and browses to domain-a.dev or domain-b.dev. So I cannot simply overwrite request.hostname (or similar) in my tests, because that has no effect on the external browser.
Now I want to use a Docker image for my tests, so I do not have to configure the test environment on multiple servers but can simply start the Docker image. Everything works so far except the *.dev resolving.
AFAIK Docker uses the host's nameserver or Google's nameservers by default (https://docs.docker.com/articles/networking/#dns), but that would mean changing the host's DNS to accomplish my goal, which I don't want.
I want to build a docker image, where a special TLD, for example dev, always routes to 127.0.0.1, without touching the docker host.
This means that for everybody running this docker image, for example domain.dev will be resolved to 127.0.0.1 inside the container. (Not only domain.dev, but every *.dev domain.). Other TLDs should work as usual.
One idea I have is to start dnsmasq inside the container, configured to resolve *.dev to 127.0.0.1 and forward everything else to the usual nameserver. But I am new to Docker and have no idea whether this is too complicated or how it could be accomplished.
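For what it's worth, the dnsmasq side of that idea is only a couple of lines (assuming dnsmasq is installed in the image and set as the container's resolver; 8.8.8.8 stands in for whatever upstream you prefer):
# /etc/dnsmasq.conf
address=/dev/127.0.0.1   # answer 127.0.0.1 for every *.dev name
server=8.8.8.8           # forward everything else upstream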
Another idea might be to overwrite /etc/hosts in the container with fixed entries for the special domains. But this would mean I have to update the Docker container whenever I want to resolve new domains to 127.0.0.1, which is a drawback if the domains change often.
What do the docker experts say?
If you are using docker run to start your container, there is the --add-host argument, which takes a hostname and an IP that get written to the container's /etc/hosts.
Your startup command would look like this:
docker run -d --add-host="domain-a.dev:192.168.0.10" [...]
Replace 192.168.0.10 with your computer's local IP address.
Don't use 127.0.0.1, as Docker will resolve that to the container, not your computer.
I basically worked around this now by using http://xip.io/
By using URLs like sub.127.0.0.1.xip.io I can connect to my local machine. My app only has to know that 127.0.0.1.xip.io is treated as the "top level domain", and sub is the domain name without the TLD. (In a Ruby on Rails app this can be done by adjusting config.action_dispatch.tld_length = 6, for example.)
