K8s issue connecting to Cassandra on macOS (via Node.js)

While trying to set up a Cassandra database in a local Kubernetes cluster on macOS (via Minikube), I am running into connection issues.
It seems like Node.js is not able to resolve the DNS name correctly, but resolving via the command line DOES work.
The setup is as follows (simplified):
Cassandra Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
    - port: 9042
      targetPort: 9042
      protocol: TCP
      name: http
  selector:
    app: cassandra
In addition, there's a PersistentVolume and a StatefulSet.
The application itself is very basic:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app1
  labels:
    app: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: xxxx.dkr.ecr.us-west-2.amazonaws.com/acme/app1
          imagePullPolicy: "Always"
          ports:
            - containerPort: 3003
And a service:
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: default
spec:
  selector:
    app: app1
  type: NodePort
  ports:
    - port: 3003
      targetPort: 3003
      protocol: TCP
      name: http
There is also a simple Ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: dev.acme.com
      http:
        paths:
          - path: /app1
            backend:
              serviceName: app1
              servicePort: 3003
And I added the Minikube IP address to /etc/hosts:
192.xxx.xx.xxx dev.acme.com
So far so good.
When calling dev.acme.com/app1 via Postman, the Node.js app itself is called correctly (I can see it in the logs); HOWEVER, the app cannot connect to Cassandra and times out with the following error:
"All host(s) tried for query failed. First host tried,
92.242.140.2:9042: DriverError: Connection timeout. See innerErrors."
The IP 92.242.140.2 seems to be a public IP related to my ISP; I believe this is because the app is not able to resolve the service name.
I created a simple Node.js script to test DNS:
var dns = require('dns')
dns.resolve6('cassandra', (err, res) => console.log('ERR:', err, 'RES:', res))
and the response is
ERR: { Error: queryAaaa ENOTFOUND cassandra
at QueryReqWrap.onresolve [as oncomplete] (dns.js:197:19) errno: 'ENOTFOUND', code: 'ENOTFOUND', syscall: 'queryAaaa', hostname:
'cassandra' } RES: undefined
However, and this is where it gets confusing - when I ssh into the pod (app1), I am able to connect to cassandra service using:
cqlsh cassandra 9042 --cqlversion=3.4.4
So it seems the pod is "aware" of the service name, but the Node.js runtime is not.
Any idea what could cause Node.js to be unable to resolve the service name/DNS settings?
UPDATE
After re-installing the whole cluster, including re-installing Docker, kubectl, and Minikube, I am getting the same issue.
When running ping cassandra from the app1 container via ssh, I get the following:
PING cassandra.default.svc.cluster.local (10.96.239.137) 56(84) bytes
of data. 64 bytes from cassandra.default.svc.cluster.local
(10.96.239.137): icmp_seq=1 ttl=61 time=27.0 ms
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
Which seems to be fine.
However, when running from the Node.js runtime I still get the same error:
"All host(s) tried for query failed. First host tried,
92.242.140.2:9042: DriverError: Connection timeout. See innerErrors."
These are the services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app1 ClusterIP None <none> 3003/TCP 11m
cassandra NodePort 10.96.239.137 <none> 9042:32564/TCP 38h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38h
And these are the pods (all namespaces)
NAMESPACE NAME READY STATUS RESTARTS AGE
default app1-85d889db5-m977z 1/1 Running 0 2m1s
default cassandra-0 1/1 Running 0 38h
kube-system calico-etcd-ccvs8 1/1 Running 0 38h
kube-system calico-node-thzwx 2/2 Running 0 38h
kube-system calico-policy-controller-5bb4fc6cdc-cnhrt 1/1 Running 0 38h
kube-system coredns-86c58d9df4-z8pr4 1/1 Running 0 38h
kube-system coredns-86c58d9df4-zcn6p 1/1 Running 0 38h
kube-system default-http-backend-5ff9d456ff-84zb5 1/1 Running 0 38h
kube-system etcd-minikube 1/1 Running 0 38h
kube-system kube-addon-manager-minikube 1/1 Running 0 38h
kube-system kube-apiserver-minikube 1/1 Running 0 38h
kube-system kube-controller-manager-minikube 1/1 Running 0 38h
kube-system kube-proxy-jj7c4 1/1 Running 0 38h
kube-system kube-scheduler-minikube 1/1 Running 0 38h
kube-system kubernetes-dashboard-ccc79bfc9-6jtgq 1/1 Running 4 38h
kube-system nginx-ingress-controller-7c66d668b-rvxpc 1/1 Running 0 38h
kube-system registry-creds-x5bhl 1/1 Running 0 38h
kube-system storage-provisioner 1/1 Running 0 38h
UPDATE 2
The code to connect to Cassandra from Node.js:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['cassandra:9042'], localDataCenter: 'datacenter1', keyspace: 'auth_server' });
const query = 'SELECT * FROM user';
client.execute(query, [])
.then(result => console.log('User with email %s', result.rows[0].email));
It DOES work when replacing cassandra:9042 with 10.96.239.137:9042 (10.96.239.137 being the IP address returned when pinging cassandra from the CLI).

The Cassandra driver for Node.js uses resolve4/resolve6 to do its DNS lookups, which bypasses your resolv.conf file. A program like ping uses resolv.conf to resolve 'cassandra' to 'cassandra.default.svc.cluster.local', the actual DNS name assigned to your Cassandra service. For a more detailed explanation of name resolution in Node.js, see here.
The fix is simple: pass the full service name to your client:
const client = new cassandra.Client({ contactPoints: ['cassandra.default.svc.cluster.local:9042'], localDataCenter: 'datacenter1', keyspace: 'auth_server' });

Related

Every spark pods scheduled on a single minikube node

I am following this tutorial, but with three Minikube nodes:
minikube start --nodes 3 --memory 8192 --cpus 4 # enough resources for spark
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 68m v1.22.3
minikube-m02 Ready <none> 68m v1.22.3
minikube-m03 Ready <none> 67m v1.22.3
When I apply the following deployments, everything gets scheduled on a single node, even though other nodes have enough resources.
$ kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
spark-master-9d67dd4b7-tps82 1/1 Running 0 48m 10.244.2.2 minikube-m03 <none> <none>
spark-worker-766ccb5887-64bzk 1/1 Running 0 13s 10.244.2.17 minikube-m03 <none> <none>
spark-worker-766ccb5887-6gvfv 1/1 Running 0 13s 10.244.2.18 minikube-m03 <none> <none>
This is my deployment for workers:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: spark-worker
spec:
  replicas: 15
  selector:
    matchLabels:
      component: spark-worker
  template:
    metadata:
      labels:
        component: spark-worker
    spec:
      containers:
        - name: spark-worker
          image: mjhea0/spark-hadoop:3.2.0
          command: ["/spark-worker"]
          ports:
            - containerPort: 8081
          resources:
            requests:
              cpu: 100m
and master:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: spark-master
spec:
  replicas: 1
  selector:
    matchLabels:
      component: spark-master
  template:
    metadata:
      labels:
        component: spark-master
    spec:
      containers:
        - name: spark-master
          image: mjhea0/spark-hadoop:3.2.0
          command: ["/spark-master"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
Any reason why everything sits on a single node?
Solved: I had to increase the replicas to 150 for the scheduler to finally consider the other nodes.
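If spreading at low replica counts is also needed: the default scheduler only spreads as a soft preference, but on v1.22 a topologySpreadConstraints stanza in the pod template makes it a hard rule. A sketch for the worker deployment (this stanza is not part of the original setup; values are illustrative):

```yaml
# Added to the worker pod template's spec:
spec:
  topologySpreadConstraints:
    - maxSkew: 1                          # pod counts per node may differ by at most 1
      topologyKey: kubernetes.io/hostname # spread across individual nodes
      whenUnsatisfiable: DoNotSchedule    # hard requirement rather than a preference
      labelSelector:
        matchLabels:
          component: spark-worker
  containers:
    - name: spark-worker
      image: mjhea0/spark-hadoop:3.2.0
      command: ["/spark-worker"]
```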

MetalLB works only on the master node; can't reach IPs assigned from workers

I've successfully installed MetalLB on my bare-metal Kubernetes cluster, but only pods assigned to the master node seem to work.
MetalLB is configured in layer 2 mode with the range 192.168.0.100-192.168.0.200, and pods do get an IP when assigned to worker nodes, but those IPs do not respond to any request.
If the assigned IP is curled from inside the node, it works; if it is curled from another node or machine, it does not respond.
Example:
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx2-658ffbbcb6-w5w28 1/1 Running 0 4m51s 10.244.1.2 worker2.homelab.com <none> <none>
nginx21-65b87bcbcb-fv856 1/1 Running 0 4h32m 10.244.0.10 master1.homelab.com <none> <none>
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h49m
nginx2 LoadBalancer 10.111.192.206 192.168.0.111 80:32404/TCP 5h21m
nginx21 LoadBalancer 10.108.222.125 192.168.0.113 80:31387/TCP 4h43m
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master1.homelab.com Ready control-plane,master 5h50m v1.20.2 192.168.0.20 <none> CentOS Linux 7 (Core) 3.10.0-1160.15.2.el7.x86_64 docker://20.10.3
worker2.homelab.com Ready <none> 10m v1.20.2 192.168.0.22 <none> CentOS Linux 7 (Core) 3.10.0-1160.15.2.el7.x86_64 docker://20.10.3
Deployment nginx2 (on worker2, the one that doesn't work):
kubectl describe svc nginx2
Name: nginx2
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx2
Type: LoadBalancer
IP: 10.111.192.206
LoadBalancer Ingress: 192.168.0.111
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32404/TCP
Endpoints: 10.244.1.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 10m (x6 over 5h23m) metallb-speaker announcing from node "master1.homelab.com"
Normal nodeAssigned 5m18s metallb-speaker announcing from node "worker2.homelab.com"
[root@worker2 ~]# curl 192.168.0.111
<!DOCTYPE html> ..... (Works)
[root@master1 ~]# curl 192.168.0.111
curl: (7) Failed connect to 192.168.0.111:80; No route to host
Deployment nginx21 (on master1, the one that works):
kubectl describe svc nginx21
Name: nginx21
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx21
Type: LoadBalancer
IP: 10.108.222.125
LoadBalancer Ingress: 192.168.0.113
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31387/TCP
Endpoints: 10.244.0.10:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 12m (x3 over 4h35m) metallb-speaker announcing from node "master1.homelab.com"
[root@worker2 ~]# curl 192.168.0.113
<!DOCTYPE html> ..... (Works)
[root@master1 ~]# curl 192.168.0.113
<!DOCTYPE html> ..... (Works)
--------- PING WORKS FROM OTHER MACHINES ----------
I've just found this out, so it might be a problem with iptables? I don't really know how this works with MetalLB, but I can ping the IP (192.168.0.111) from other machines and it responds.
I figured out, after Matt's response, that it was the firewall blocking access, so I simply allowed the whole network on port 80 and it worked.
[root@worker2 ~]# firewall-cmd --new-zone=kubernetes --permanent
success
[root@worker2 ~]# firewall-cmd --zone=kubernetes --add-source=192.168.0.1/16 --permanent
success
[root@worker2 ~]# firewall-cmd --zone=kubernetes --add-port=80/tcp --permanent
success
[root@worker2 ~]# firewall-cmd --reload

NatsError: Could not connect to server: Error: connect ECONNREFUSED Error

I created a NATS Streaming Server on my Kubernetes cluster.
And kubectl get services outputs the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-gateway-srv NodePort 10.106.100.181 <none> 8080:30440/TCP 16m
auth-mongo-srv ClusterIP 10.101.9.123 <none> 27017/TCP 16m
auth-srv ClusterIP 10.102.227.91 <none> 3000/TCP 16m
radio-srv ClusterIP 10.111.20.153 <none> 3003/TCP 16m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d13h
tv-srv ClusterIP 10.111.88.212 <none> 3001/TCP 16m
nats-srv ClusterIP 10.105.230.126 <none> 4222/TCP,8222/TCP 16m
And my nats-publisher.js file looks like this:
const nats = require('node-nats-streaming');

const stan = nats.connect('natsserver', 'nats-cli1', {
  url: 'nats://nats-srv:4222'
});

stan.on('connect', () => {
  console.log('Links publisher connected to NATS');
}, (err, guid) => {
  if (err) console.log(err);
  else console.log(guid);
});
And I get :
NatsError: Could not connect to server: Error: connect ECONNREFUSED 10.105.230.126:4222
But another service uses the same connection code, and that service can connect to the NATS server successfully.
Why am I getting this error? The same code runs correctly in another service, so how can it fail in this one?
When you use the command kubectl port-forward [pod_name] 4222:4222, you see the following lines in the terminal:
Forwarding from 127.0.0.1:4222 -> 4222
Forwarding from [::1]:4222 -> 4222
Use 127.0.0.1:4222 in your config.

Cannot access Web API deployed in Azure ACS Kubernetes Cluster

Please help. I am trying to deploy a web API to an Azure ACS Kubernetes cluster. It is a simple web API created in VSTS, and the result should look like this: { "value1", "value2" }.
I plan to make the type ClusterIP, but I want to test and access it first, which is why it is LoadBalancer for now. The pods are running with no restarts (I think that's good).
The guide I'm following is: Running Web API using Docker and Kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d
sampleapi-service LoadBalancer 10.0.238.155 102.51.223.6 80:31676/TCP 1h
When I try to browse to 102.51.223.6/api/values, it says:
"This site can’t be reached"
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: sampleapi-service
  labels:
    name: sampleapi
    app: sampleapi
spec:
  selector:
    name: sampleapi
  ports:
    - protocol: "TCP"
      # Port accessible inside the cluster
      port: 80
      # Port forwarded to inside the pod
      targetPort: 80
      # Port accessible outside the cluster
      #nodePort: 80
  type: LoadBalancer
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sampleapi-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: sampleapi
    spec:
      containers:
        - name: sampleapi
          image: mycontainerregistry.azurecr.io/sampleapi:latest
          ports:
            - containerPort: 80
POD
Name: sampleapi-deployment-498305766-zzs2z
Namespace: default
Node: c103facs9001/10.240.0.4
Start Time: Fri, 27 Jul 2018 00:20:06 +0000
Labels: app=sampleapi
pod-template-hash=498305766
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"sampleapi-deployme
-498305766","uid":"d064a8e0-9132-11e8-b58d-0...
Status: Running
IP: 10.244.2.223
Controlled By: ReplicaSet/sampleapi-deployment-498305766
Containers:
sampleapi:
Container ID: docker://19d414c87ebafe1cc99d101ac60f1113533e44c24552c75af4ec197d3d3c9c53
Image: mycontainerregistry.azurecr.io/sampleapi:latest
Image ID: docker-pullable://mycontainerregistry.azurecr.io/sampleapi@sha256:9635a9df168ef76a6a27cd46cb15620d762657e9b57a5ac2514ba0b9a8f47a8d
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 27 Jul 2018 00:20:48 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mj5m1 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-mj5m1:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mj5m1
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50m default-scheduler Successfully assigned sampleapi-deployment-498305766-zzs2z to c103facs9001
Normal SuccessfulMountVolume 50m kubelet, c103facs9001 MountVolume.SetUp succeeded for volume "default-token-mj5m1"
Normal Pulling 49m kubelet, c103facs9001 pulling image "mycontainerregistry.azurecr.io/sampleapi:latest"
Normal Pulled 49m kubelet, c103facs9001 Successfully pulled image "mycontainerregistry.azurecr.io/sampleapi:latest"
Normal Created 49m kubelet, c103facs9001 Created container
Normal Started 49m kubelet, c103facs9001 Started container
It seems to me that your service isn't wired up to a port on the container. You have your targetPort commented out, so the service is reachable on port 80 but doesn't know which pod port to target.
You will need the service to expose the internal port on some external IP:port that can be used in your browser. Try this after deploying your deployment and service YAML files:
kubectl get service sampleapi-service
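Separately, one thing worth double-checking in the manifests above: the service selects pods with name: sampleapi, while the deployment's pod template is labeled app: sampleapi, so the service may end up with no endpoints. A selector that matches the pod labels would look like this (sketch):

```yaml
spec:
  selector:
    app: sampleapi   # must match the pod template's labels in deployment.yml
```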

Unable to setup service DNS in Kubernetes cluster

Kubernetes version --> 1.5.2
I am setting up DNS for Kubernetes services for the first time and I came across SkyDNS.
So, following the documentation, my skydns-svc.yaml file is:
apiVersion: v1
kind: Service
spec:
  clusterIP: 10.100.0.100
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
And my skydns-rc.yaml file is:
apiVersion: v1
kind: ReplicationController
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        version: v18
    spec:
      containers:
        - args:
            - --domain=kube.local
            - --dns-port=10053
          image: gcr.io/google_containers/kubedns-amd64:1.6
          imagePullPolicy: IfNotPresent
          name: kubedns
          ports:
            - containerPort: 10053
              name: dns-local
              protocol: UDP
            - containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          terminationMessagePath: /dev/termination-log
        - args:
            - --cache-size=1000
            - --no-resolv
            - --server=127.0.0.1#10053
          image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
          imagePullPolicy: IfNotPresent
          name: dnsmasq
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
        - args:
            - -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null &&
              nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
            - -port=8080
            - -quiet
          image: gcr.io/google_containers/exechealthz-amd64:1.0
          imagePullPolicy: IfNotPresent
          name: healthz
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
Also, on my minions, I updated the /etc/systemd/system/multi-user.target.wants/kubelet.service file and added the following under the ExecStart section:
ExecStart=/usr/bin/kubelet \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBELET_API_SERVER \
    $KUBELET_ADDRESS \
    $KUBELET_PORT \
    $KUBELET_HOSTNAME \
    $KUBE_ALLOW_PRIV \
    $KUBELET_POD_INFRA_CONTAINER \
    $KUBELET_ARGS \
    --cluster-dns=10.100.0.100 \
    --cluster-domain=kubernetes \
Having done all of this and having successfully brought up the rc & svc:
[root@kubernetes-master DNS]# kubectl get po | grep dns
kube-dns-v18-hl8z6 3/3 Running 0 6s
[root@kubernetes-master DNS]# kubectl get svc | grep dns
kube-dns 10.100.0.100 <none> 53/UDP,53/TCP 20m
This is all that I have from a config standpoint. Now, in order to test my setup, I downloaded busybox and tested an nslookup:
[root@kubernetes-master DNS]# kubectl get svc | grep kubernetes
kubernetes 10.100.0.1 <none> 443/TCP
[root@kubernetes-master DNS]# kubectl exec busybox -- nslookup kubernetes
nslookup: can't resolve 'kubernetes'
Server: 10.100.0.100
Address 1: 10.100.0.100
Is there something that I have missed?
EDIT:
Going through the logs, I see something that might explain why this is not working:
kubectl logs $(kubectl get pods -l k8s-app=kube-dns -o name) -c kubedns
.
.
.
E1220 17:44:48.403976 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
E1220 17:44:48.487169 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:48.487716 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.410311 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:49.492338 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.493429 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
.
.
.
Looks like kubedns is unable to authenticate against the K8s master node. I even tried a manual call:
curl -k https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0
Unauthorized
Looks like the kube-dns pod is not able to authenticate with the Kubernetes API server. I don't see any secret or serviceaccount in the YAML file for the kube-dns pod.
I suggest doing the following:
Create a k8s secret using kubectl create secret for the kube-dns pod with the right certificate file ca.crt and token:
$ kubectl get secrets -n=kube-system | grep dns
kube-dns-token-66tfx kubernetes.io/service-account-token 3 1d
Create a k8s serviceaccount using kubectl create serviceaccount for the kube-dns pod:
$ kubectl get serviceaccounts -n=kube-system | grep dns
kube-dns 1 1d
Mount the secret at /var/run/secrets/kubernetes.io/serviceaccount inside the kube-dns container in the YAML file:
...
kind: Pod
...
spec:
  ...
  containers:
    ...
    volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-dns-token-66tfx
        readOnly: true
  ...
  volumes:
    - name: kube-dns-token-66tfx
      secret:
        defaultMode: 420
        secretName: kube-dns-token-66tfx
Here are the links about creating serviceaccounts for pods:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/admin/service-accounts-admin/
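As an alternative to mounting the token volume by hand, a cluster with service accounts enabled injects the token automatically when the pod spec names the account. A sketch, assuming the kube-dns service account created in step 2:

```yaml
spec:
  serviceAccountName: kube-dns  # kubelet mounts this account's token automatically
  containers:
    - name: kubedns
      # ...
```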
