Putting a pod to sleep in kubernetes - linux

I know how to put a pod to sleep with this command:
kubectl -n logging patch sts <sts name> --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["sleep", "infinity"] }]'
What's the command to wake up the pod?

What you are actually doing is updating the statefulset, changing the command parameter for its pods. The command parameter sets the entrypoint of the container, in other words, the command that is executed when the container starts.
You are setting that command to sleep infinity. Thus, to wake up the pod, just update the statefulset and set the command back to the original one.
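If the statefulset did not originally define a command of its own (so the image's default entrypoint applies), you can undo the earlier patch with a remove operation instead of re-adding the old value. A sketch mirroring the patch above:
kubectl -n logging patch sts <sts name> --type='json' -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
Changing the pod template triggers a rolling restart of the statefulset's pods, which is what actually "wakes them up" with the original entrypoint.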
The best solution would be to just scale the statefulset down to 0 replicas with:
kubectl -n logging scale sts <sts name> --replicas 0
And scale up to the original replicas number with:
kubectl -n logging scale sts <sts name> --replicas <original number>
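If you do not remember the original replica count, you can read it before scaling down (a quick sketch using jsonpath):
kubectl -n logging get sts <sts name> -o jsonpath='{.spec.replicas}'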
This way you don't have any pod running sleep infinity in your cluster, and you will save costs by not having these idle pods wasting resources.

Related

Different environment variables with ssh or kubectl exec

We have a service in our cluster that we access via ssh (test environment, etc.). In this container we see different environment variables depending on whether we connect with ssh or with kubectl.
Can someone explain to me what else is set here with the kubectl exec command?
As an example a small excerpt from both environments.
kubectl exec: (printenv | grep KU)
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.4.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.4.0.1
KUBERNETES_SERVICE_HOST=10.4.0.1
KUBERNETES_PORT=tcp://10.4.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
ssh into the same container: (printenv | grep KU)
dev-xxxxx:~$ printenv | grep KU
dev-xxxxx:~$
The kubectl exec command allows you to remotely run arbitrary commands inside an existing container of a pod. kubectl exec isn’t much different from using SSH to execute commands on a remote system. SSH and kubectl should both work well with 99% of CLI applications. The only difference I could find when it comes to environment variables is that:
kubectl will always set the environment variables provided to the container at startup
SSH relies mostly on the system login shell configuration (but can also accept user’s environment via PermitUserEnvironment or SendEnv/AcceptEnv)
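If you want an SSH session to see the variables Kubernetes injected at container startup, one workaround (a rough sketch, assuming the SSH user is allowed to read /proc/1/environ and that the values contain no spaces) is to import the environment of the container's PID 1:
# inside the SSH session: PID 1 still carries the variables Kubernetes set at startup
export $(xargs -0 < /proc/1/environ)
printenv | grep KU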
Answering your question:
Can someone explain to me what else is set here with the kubectl exec command?
They should result in the same output (assuming that you have typed both commands correctly and executed them in the same container).
Below you will find some useful resources regarding the kubectl exec command:
Get a Shell to a Running Container
kubectl-commands#exec docs
How does 'kubectl exec' work?
EDIT:
If you wish to learn more about the differences between kubectl exec and SSH, I recommend this article. It covers the topics of:
Authn/z
Shell UX
Non-shell features, and
Performance

No environment variable displayed through kubectl

To display some environment variables in a pod on Kubernetes, I tried two approaches.
(1) Connecting inside a pod
I connected to a shell in the pod and executed the 'echo' command like below:
kubectl exec -it <pod-name> /bin/bash
then...
echo $KUBERNETES_SERVICE_HOST
I saw the correct result, as I expected.
(2) Sending a command to a pod
kubectl exec <pod-name> -- echo $KUBERNETES_SERVICE_HOST
In this case, there is no output.
What is the problem here?
What is the difference between the two situations?
Thank you :)
In the second case, the '$' dollar sign in the command is expanded by your local shell, so it references your local host's environment variables. Since there is no KUBERNETES_SERVICE_HOST variable on the local host, the command that actually gets sent looks like this:
kubectl exec <pod-name> -- echo
Use the command below instead:
kubectl exec c-hub-admin-app-systest-6dc46bb776-tvb99 -- printenv | grep KUBERNETES_SERVICE_HOST
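Alternatively, you can wrap the command in sh -c with single quotes so that the variable is expanded inside the container rather than by your local shell (assuming the image ships a shell):
kubectl exec <pod-name> -- sh -c 'echo $KUBERNETES_SERVICE_HOST'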

Kubernetes cronjob email alerts

I have a few cronjobs configured and running in Kubernetes. How do I set up cronjob email alerts for success or failure in Kubernetes?
This could be as easy as setting up a bash script with kubectl that sends an email if it sees a job in a Failed state.
while true; do if kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' | grep -q True; then echo "job failed" | mail -s jobfailed email@address; else sleep 1; fi; done
or on newer K8s:
while true; do kubectl wait --for=condition=failed job/myjob; echo "job failed" | mail -s jobfailed email@address; done
How to tell whether a Job is complete: Kubernetes - Tell when Job is Complete
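Since the question asks about both success and failure, here is a rough sketch of a slightly more complete watcher (the job name, namespace and recipient address are placeholders, not taken from the question):
#!/bin/bash
JOB=myjob
NAMESPACE=default
TO=email@address

while true; do
  # read the Job's terminal conditions
  failed=$(kubectl -n "$NAMESPACE" get job "$JOB" -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}')
  complete=$(kubectl -n "$NAMESPACE" get job "$JOB" -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}')
  if [ "$failed" = "True" ]; then
    echo "Job $JOB failed" | mail -s "job failed" "$TO"
    break
  elif [ "$complete" = "True" ]; then
    echo "Job $JOB completed" | mail -s "job succeeded" "$TO"
    break
  fi
  sleep 30
done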
You can also setup something like Prometheus with Alertmanager in your Kubernetes cluster to monitor your Jobs.

Why is there `sleep infinity` in the kubernetes yaml file for spark

I am reading this blog and tried to run the code. If sleep infinity is removed, the pod will be stuck in CrashLoopBackOff:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
spark-master-715509916-zggtc 0/1 CrashLoopBackOff 5 3m
spark-worker-3468022965-xb5mw 0/1 Completed 5 3m
Can anyone explain this?
The reason the pod goes into the CrashLoopBackOff state is that Kubernetes expects to manage the lifetime of the command executed by the container; that command is the container's main process. Presumably the start-master.sh script runs, then exits, which Kubernetes interprets as the process dying. You need to execute a command which will not exit in order to keep the pod alive. In this case the sleep infinity is included to simulate a long-running process. You could also achieve this with something like:
'./start-master.sh ; /bin/bash'
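You can reproduce this behaviour outside of the Spark setup with a throwaway pod (a small sketch; the pod names and image are arbitrary):
# a command that exits immediately is restarted (restartPolicy: Always) and the pod ends up in CrashLoopBackOff
kubectl run exits-immediately --image=ubuntu --restart=Always -- /bin/bash -c 'echo done'
kubectl get pod exits-immediately --watch
# the same image with a command that never returns stays Running
kubectl run stays-up --image=ubuntu --restart=Always -- /bin/bash -c 'sleep infinity'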
Yes, since you removed the sleep infinity, the container is starting and then terminating. You need to keep the sleep statement. Is there a reason you want to remove it?
Thanks
SR

Google cloud SDK code to execute via cron

I am trying to implement an automated script to shut down and start VM instances in my Google Cloud account via crontab. The OS is Ubuntu 12 LTS and it is set up with a Google service account so it can handle read/write operations on my Google Cloud account.
My actual code is in this file /home/ubu12lts/cronfiles/resetvm.sh
#!/bin/bash
echo Y | gcloud compute instances stop my-vm-name --zone us-central1-a
sleep 120s
gcloud compute instances start my-vm-name --zone us-central1-a
echo "completed"
When I call the above file like this,
$ bash /home/ubu12lts/cronfiles/resetvm.sh
It works perfectly and does the job.
Now I wanted to set this up in cron so it would do automatically every hour. So I did
$ sudo crontab -e
And added this code in cron
0 * * * * /bin/sh /home/ubu12lts/cronfiles/resetvm.sh >>/home/ubu12lts/cron.log
And made script executable
chmod +x /home/ubu12lts/cronfiles/resetvm.sh
I also tested the crontab by adding a sample command that creates a .txt file with a sample message, and it worked perfectly.
But the above gcloud SDK script doesn't work through cron. The VM neither stops nor starts in Google Compute Engine.
Anyone can help please?
Thank you so much.
You have added the entry to root's crontab, while your Cloud SDK installation is set up for a different user (I am guessing ubu12lts).
You should add the entry to ubu12lts's crontab using:
crontab -u ubu12lts -e
Additionally your entry is currently scheduled to run on the 0th minute every hour. Is that what you intended?
I have run into a similar issue before. I fixed it by sourcing the profile in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
echo Y | gcloud compute instances stop my-vm-name --zone us-central1-a
sleep 120s
gcloud compute instances start my-vm-name --zone us-central1-a
echo "completed"
This also helped me resize node count in GKE.
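Another option, since cron runs jobs with a very minimal environment, is to put the Cloud SDK's bin directory on PATH directly in the crontab (a sketch; the install path below is an assumption, adjust it to wherever gcloud actually lives on your machine):
PATH=/usr/bin:/bin:/home/ubu12lts/google-cloud-sdk/bin
0 * * * * /bin/bash /home/ubu12lts/cronfiles/resetvm.sh >> /home/ubu12lts/cron.log 2>&1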

Resources