Deploy an empty pod that sits idle in a k8s namespace - Linux

Can anyone help me understand whether it's possible to deploy an empty pod inside a node in k8s for basic network debugging? PS: I should be able to exec into this pod after it's deployed.

You can just use kubectl run with a generator.
# Create an idle pod
$ kubectl run --generator=run-pod/v1 idle-pod -i --tty --image ubuntu -- bash
root@idle-pod:/# # Debug whatever you want inside the idle container
root@idle-pod:/# exit
$
# Exec into the idle pod
$ kubectl exec -i --tty idle-pod -- bash
root@idle-pod:/# # Debug whatever you want inside the idle container
root@idle-pod:/# exit
$
# Delete the idle pod
$ kubectl delete pod idle-pod
pod "idle-pod" deleted
$
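Note that on recent kubectl releases the --generator flag has been removed; `kubectl run` now creates a plain Pod by default. A hedged sketch of the modern equivalent (the actual run is guarded behind a hypothetical RUN_KUBECTL switch, since it needs a live cluster):

```shell
# Modern equivalent of the generator-based command: on current kubectl
# versions, `kubectl run` creates a bare Pod by default.
cmd="kubectl run idle-pod -i --tty --image=ubuntu -- bash"
echo "$cmd"

# Execute only when explicitly enabled and kubectl is available
# (RUN_KUBECTL is an assumed opt-in switch, not a kubectl feature).
if [ "${RUN_KUBECTL:-0}" = "1" ] && command -v kubectl >/dev/null 2>&1; then
  $cmd
fi
```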

Just deploy a pod with the container you need and a command that does nothing.
Save this spec to a yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: empty
spec:
  containers:
  - name: empty
    image: alpine
    command: ["cat"]
    stdin: true # keep stdin open so `cat` blocks instead of exiting immediately
And then apply that yaml with kubectl apply -f $filename
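Putting the steps together as a self-contained sketch (the file path /tmp/empty-pod.yaml is arbitrary, and the apply/exec steps are guarded behind a hypothetical RUN_KUBECTL switch because they need a live cluster; `stdin: true` keeps `cat` from exiting immediately):

```shell
# Write the pod spec to a file with a heredoc.
cat > /tmp/empty-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: empty
spec:
  containers:
  - name: empty
    image: alpine
    command: ["cat"]
    stdin: true   # keep stdin open so `cat` blocks instead of exiting
EOF

# Apply and exec only when explicitly enabled (requires a running cluster).
if [ "${RUN_KUBECTL:-0}" = "1" ] && command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f /tmp/empty-pod.yaml
  kubectl exec -it empty -- sh
fi
```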

Related

Copying local directory via Terraform into Kubernetes Cluster

I am trying to copy some files from my local terraform directory into a preexisting configuration path in my Datadog resources.
When I try the below in my datadog-values.yaml, I do not see any of my configuration files copied to that location. I also cannot see any logs, even in debug mode, that tell me whether it failed or whether the path was incorrect.
See datadog helm-charts
# agents.volumes -- Specify additional volumes to mount in the dd-agent container
volumes:
- hostPath:
    path: ./configs
  name: openmetrics_config
# agents.volumeMounts -- Specify additional volumes to mount in all containers of the agent pod
volumeMounts:
- name: openmetrics_config
  mountPath: /etc/datadog-agent/conf.d/openmetrics.d
  readOnly: true
What I've tried
I can manually copy the configuration files into the directory with a shell script like the one below. But of course, if the Datadog pod names change on restart, I have to update it manually.
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl rollout restart deployment datadog-cluster-agent -n datadog
The volumes you use here don't work the way you expect. The ./configs directory is not your local directory; Kubernetes has no idea about your local machine.
But fear not.
There are a few ways of doing this, depending on your needs:
Terraformed config
Terraformed mount
Terraformed copy config action
Terraformed config
Having a config file terraformed means:
the config is updated in k8s whenever the file changes - we want Terraform to track those changes
the config is uploaded before the service using it starts (it's a configuration file after all; I assume it configures something)
DISCLAIMER: the service won't restart after a config change (that's achievable, but it's another topic)
To achieve this, create a config map for every config:
resource "kubernetes_config_map" "config" {
  metadata {
    name      = "some_name"
    namespace = "some_namespace"
  }
  data = {
    "config.conf" = file(var.path_to_config)
  }
}
and then use it in your volumeMounts. I assume you're working with the helm provider, so this should probably be:
set {
  name  = "agents.volumeMounts"
  value = [{
    "mountPath": "/where/to/mount"
    "name": kubernetes_config_map.config.metadata.0.name
  }]
}
In the example above I used a single config and a single volume for simplicity; for multiple configs, for_each should be enough.
Terraformed mount
Another variant is that you don't want Terraform to track the configurations. In that case, what you want to do is:
Create a single storage (it can be mounted storage from your kube provider, or a dynamic volume created in Terraform - choose your poison)
Mount this storage as a Kubernetes volume (kubernetes_persistent_volume_v1 in Terraform)
Set set {...} like in the previous section.
Terraformed copy config action
The last one, and my least favorite option, is to call a copy action from Terraform. It's a last resort... Provisioners.
Even the Terraform docs say it's bad, yet it has one advantage: it's super easy to use. You can simply call your shell command here - it could be scp, rsync, or even (but please don't do it) kubectl cp.
To not encourage this solution further, I'll just leave the doc of null_resource, which uses provisioner "remote-exec" (you can also use "local-exec"), here.

How to write loops in gitlab-ci.yaml file?

I am writing a GitLab CI yaml file for my application, and in it I run a kubectl command to get all pods. That gives me the names of the pods I need, but now I have to run kubectl cp to copy a file into all three pods, and I don't know how to do this.
If anyone knows the way to do this, please reply.
Thanks
job:
  script:
    - |
      for pod in $(kubectl get po -n your-namespace -o jsonpath='{.items[*].metadata.name}')
      do
        kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/
      done
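The loop logic can be dry-run without a cluster by stubbing the pod list; the pod names below are hypothetical, and the kubectl cp commands are only echoed rather than executed:

```shell
# Stub the pod list instead of calling kubectl, to dry-run the loop.
pods="pod-a pod-b pod-c"   # hypothetical names; normally from `kubectl get po`
for pod in $pods; do
  # Print the command that would run, instead of executing it.
  echo "kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/"
done
```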
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods

Get node that a specific pod is running on

I have 2 nodes that I'm running development pods on. I'd like to be able to echo only the node that a pod is running on, based on its name.
I can use kubectl get pod -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName -n my-namespace to pull back the names and nodes for all pods in that namespace, but I'd like to filter to just the node name for specific pods. Using grep on the pod name works, but I'm not sure if it's possible to show only the node when filtering by a single pod name.
Option-1: Using custom-columns
kubectl get pod mypod -o custom-columns=":.spec.nodeName" --no-headers
Option-2: Using jsonpath
kubectl get pod mypod -o jsonpath='{.spec.nodeName}'

Defining command & argument - Pod yaml

As per the syntax mentioned, below is the Pod yaml using args:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    resources:
      limits:
        memory: "64Mi" # 64 MB
        cpu: "50m" # 50 millicpu (.05 CPU, or 5% of one CPU)
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Documentation says: If you define args, but do not define a command, the default command is used with your new arguments.
As per the documentation, what is the default command for the arguments in the above yaml?
"Default command" references the command set in your container image. In case of your image - k8s.gcr.io/busybox - this appears to be /bin/sh:
$ docker pull k8s.gcr.io/busybox
Using default tag: latest
latest: Pulling from busybox
a3ed95caeb02: Pull complete
138cfc514ce4: Pull complete
Digest: sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67
Status: Downloaded newer image for k8s.gcr.io/busybox:latest
k8s.gcr.io/busybox:latest
$ docker image inspect k8s.gcr.io/busybox | jq '.[0] | .Config.Cmd'
[
"/bin/sh"
]
So, by explicitly setting a pod.spec.containers.command, you are effectively overriding that value.
See also:
$ kubectl explain pod.spec.containers.command
KIND: Pod
VERSION: v1
FIELD: command <[]string>
DESCRIPTION:
Entrypoint array. Not executed within a shell. The docker image's
ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
are expanded using the container's environment. If a variable cannot be
resolved, the reference in the input string will be unchanged. The
$(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
Escaped references will never be expanded, regardless of whether the
variable exists or not. Cannot be updated. More info:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
To visualize this, you can run the following commands:
kubectl run busybox-default --image busybox
pod/busybox-default created
kubectl run busybox-command --image busybox --command sleep 10000
pod/busybox-command created
Check the output of docker ps and look for the COMMAND column. You may use the --no-trunc flag for complete output.
Output for the container running without command option:
docker ps |grep -E 'busybox-default'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e01428c98071 k8s.gcr.io/pause:3.2 "/pause" 3 minutes ago Up 3 minutes k8s_POD_busybox-default_default_3449d2bc-a731-4441-9d78-648a7fa730dd_0
Output for the container running with command option:
docker ps |grep -E 'busybox-command'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
557578fc60ea busybox "sleep 10000" 5 minutes ago Up 5 minutes k8s_busybox-comand_busybox-command_default_57f6d09c-2ed1-4b73-b3f9-c2b612c19a16_0
7c6f1240ab07 k8s.gcr.io/pause:3.2 "/pause" 5 minutes ago Up 5 minutes k8s_POD_busybox-comand_default_57f6d09c-2ed1-4b73-b3f9-c2b612c19a16_0

How to delete every Pod in a Kubernetes namespace

Take this scenario:
I want to delete every running pod automatically using the Commandline without having to type kubectl delete pod <pod_name> -n <namespace> for each pod.
You can use awk to filter pod names based on their STATUS=="Running". The following lists all Running pods in the $NAMESPACE namespace; the example below it deletes them.
kubectl get pod -n $NAMESPACE | awk '$3=="Running"{print $1}'
Example:
for pod in $(kubectl get pod -n $NAMESPACE |awk '$3=="Running"{print $1}'); do
kubectl delete pod -n $NAMESPACE $pod
done
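Since the awk program only keys on the third column, it can be verified offline against canned `kubectl get pod` output (the rows below are made up):

```shell
# Simulated `kubectl get pod` output; column 3 is STATUS.
sample='NAME        READY   STATUS      RESTARTS   AGE
nginx       1/1     Running     2          7d10h
foo-mh6j7   0/1     Completed   0          5d3h
mongo       2/2     Running     12         57d'

# Same awk program as above: print column 1 where column 3 == "Running".
echo "$sample" | awk '$3=="Running"{print $1}'
# Prints: nginx and mongo (the header and the Completed pod are skipped)
```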
OR
You may use jsonpath,
NAMESPACE=mynamespace
for pod in $(kubectl get pod -n $NAMESPACE -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'); do
kubectl delete pod -n $NAMESPACE "$pod"
done
NOTE: The above code will delete all the Running pods in the namespace held in the $NAMESPACE variable.
Example:
kubectl get pod -n mynamespace
NAME READY STATUS RESTARTS AGE
foo-mh6j7 0/1 Completed 0 5d3h
nginx 1/1 Running 2 7d10h
mongo 2/2 Running 12 57d
busybox 1/1 Running 187 61d
jsonpath query to print all pods in Running state:
kubectl get pod -n mynamespace -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'
nginx mongo busybox
Although you have not asked for ready state, the following query can be used to list pods whose first container is ready:
kubectl get pod -n mynamespace -o jsonpath='{range .items[?(@.status.containerStatuses[0].ready==true)]}{.metadata.name}{"\n"}{end}'
nginx
mongo
busybox
Similarly, this can be done via grep:
kubectl get pod -n $NAMESPACE |grep -P '\s+([1-9]+)\/\1\s+'
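This relies on GNU grep's -P (PCRE) flag; the backreference matches READY values like 1/1 or 2/2 but not 0/1. It can be checked against sample rows (made up here):

```shell
# Sample `kubectl get pod` rows; the READY column has the form N/N.
rows='foo-mh6j7   0/1   Completed   0     5d3h
nginx       1/1   Running     2     7d10h
mongo       2/2   Running     12    57d'

# \s+([1-9]+)\/\1\s+ matches equal nonzero ready counts (1/1, 2/2, ...),
# so the 0/1 Completed row is filtered out.
echo "$rows" | grep -P '\s+([1-9]+)\/\1\s+'
# Prints only the nginx and mongo rows
```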
NOTE: Neither solution will prevent pods from being respawned if they were created via a ReplicaSet, Deployment, StatefulSet, etc. In that case they will be deleted and immediately respawned.
You could filter and delete running pods by:
kubectl delete pods -n <NAMESPACE> --field-selector=status.phase=Running
Here is a shell script I made to achieve the task:
i=0 && for pod in $(kubectl get pods | grep 'Running')
do
if [ `expr $i % 5` == 0 ]
then kubectl delete pod $pod
fi
i=`expr $i + 1`
done
I found that when we loop over the words of kubectl get pods | grep 'Running', every 5th word is a pod name (the default output has five columns).
So I basically wrote the script to take every 5th word from the loop and execute whatever command I want on it.
Still, this feels like a naive approach. Feel free to share a better one.
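The "every 5th word" observation can likewise be checked offline: with the default five-column output, word-splitting Running rows puts a pod name at positions 0, 5, 10, and so on (the names below are made up):

```shell
# Word-split sample Running rows (5 columns each).
words='nginx 1/1 Running 2 7d10h mongo 2/2 Running 12 57d'

i=0
for w in $words; do
  if [ $((i % 5)) -eq 0 ]; then
    echo "$w"   # every 5th word is a pod name
  fi
  i=$((i + 1))
done
# Prints: nginx and mongo
```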
To directly answer the question in the title, for people who found this question but want to delete every pod, not only the running ones, simply do:
kubectl delete pods -n <NAMESPACE> --all
