I have a CronJob that runs every 10 minutes, so every 10 minutes a new pod is created. After a day I have a lot of completed pods (not jobs; there is just the one CronJob). Is there a way to get rid of them automatically?
That's a job for labels.
Use them on your CronJob and delete completed pods with a label selector (the -l flag).
For example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
            is-cron: "true"
        spec:
          containers:
          - name: cron
            image: your_image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Delete all cron-labeled pods with:
kubectl delete pod -l is-cron
Specifically in my situation, my pods were not fully terminating: I was running one container with the actual job and another with the Cloud SQL proxy, and the proxy was preventing the pod from completing successfully.
The fix was to kill the proxy process after 30 seconds (my jobs typically take a couple of seconds). Once the job completes, successfulJobsHistoryLimit on the CronJob kicks in and keeps (by default) only the last 3 successful jobs and their pods; a sketch showing where those history limits sit on the CronJob spec follows the proxy snippet below.
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["sh", "-c"]
  args:
  - /cloud_sql_proxy -instances=myinstance=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json & pid=$! && (sleep 30 && kill -9 $pid 2>/dev/null)
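For completeness, here is a minimal sketch of where those history limits sit on the CronJob spec; the values shown are the Kubernetes defaults, and your_image is the placeholder from the example above:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "*/10 * * * *"
  # defaults: keep only the last 3 successful and 1 failed Jobs (and their pods)
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cron
            image: your_image
          restartPolicy: OnFailure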
I have a use case where (at least for now) I need a k8s pod to stay up without an HTTP or TCP endpoint. I tried the following deployment...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: node-app
spec:
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-server
        image: node:17-alpine
        command: ["node","-v"]
        livenessProbe:
          exec:
            command: ["node","-v"]
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command: ["node","-v"]
          initialDelaySeconds: 5
          periodSeconds: 5
But after a while it stops the pod with the following error...
Warning BackOff 5s (x10 over 73s) kubelet, minikube Back-off restarting failed container
And the pod status shows...
node-deployment-58445f5649-z6lkz 0/1 CrashLoopBackOff 5 (103s ago) 4m47s
I know it is running because I see the version in kubectl logs <node-name>. How do I get the image to stay up with no long running process? Is this even possible?
The container's process simply exited; the same thing happens if you run node -v in your terminal.
I'm not sure whether the use case you describe is your real use case (I can't see a reason to run the Node version as an application), so...
If you really do want to print the version, you can change the command to
["watch","-n1","node","-v"]
so it prints the Node version every second, which gives the container a foreground process that keeps running.
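Applied to the deployment above, only the container command changes; a rough sketch (watch is a BusyBox applet, so it should be available in the Alpine-based node image):
containers:
- name: node-server
  image: node:17-alpine
  # watch re-runs "node -v" every second, so the container keeps a foreground process
  command: ["watch", "-n1", "node", "-v"]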
I would like to have a Docker container active only during certain times of the day so that test automation can run. Is this possible?
Yes, it is possible. A CronJob is designed to run a Job periodically on a given schedule, written in cron format. A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them terminate successfully.
To run your automation tests:
- create a CronJob definition,
- set the cron schedule,
- call your test command (CMD).
Here is a sample Hello World example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
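To try it out, you could apply the manifest and watch the Jobs it creates; hello-cron.yaml is just an assumed filename:
# create the CronJob, then observe the Jobs it spawns every minute
kubectl apply -f hello-cron.yaml
kubectl get cronjob hello
kubectl get jobs --watch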
You haven't given much information besides running and stopping a container. One of the simplest ways is to use the Docker CLI to run an instance of your container in Azure Container Instances (ACI): you create and use a Docker context for Azure, and docker run then creates an ACI instance and runs your container in Azure.
# log in to Azure and create a Docker context backed by ACI
docker login azure
docker context create aci myacicontext
docker context use myacicontext
# run the container as an Azure Container Instance, and remove it when done
docker run -p 80:80 [yourImage]
docker rm [instanceName]
Ref: https://www.docker.com/blog/running-a-container-in-aci-with-docker-desktop-edge/ https://learn.microsoft.com/en-us/azure/container-instances/quickstart-docker-cli
I'm trying to create a Kubernetes deployment that creates a pod.
I want this pod to run the command "cron start" on creation so that cron is automatically initialized.
This is how I am currently trying to run the command, though it clearly isn't working (kubernetes_deployment.yaml):
- containerPort: 8080
command: [ "/bin/sh" ]
args: ["cron start"]
Thank you in advance :)
Maybe you could use a Kubernetes CronJob instead.
It lets you set a cron expression for when the job runs:
https://kubernetes.io/es/docs/concepts/workloads/controllers/cron-jobs/
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
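If you do want to keep launching cron from your own container instead of switching to a CronJob, note that /bin/sh needs -c to treat its argument as a command string; a rough sketch, assuming your image actually ships a cron binary and you keep something running in the foreground:
command: ["/bin/sh", "-c"]
# "cron start" is the command from the question; tail keeps the container in the foreground
args: ["cron start && tail -f /dev/null"]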
I have a project that contains two parts: the first is a Flask API and the second is a script that should be scheduled.
The Flask app is served through a Docker image that runs in OpenShift.
My problem is where I should schedule the second script. I have access to GitLab CI/CD, but that's not really its purpose.
Building a Docker image and running it on OpenShift is also not an option, because if there is more than one pod the script will run more times than needed.
The only option I'm thinking of is just using a regular server with cron.
Do you maybe have a better solution?
Thanks
There are several aspects to your question and several ways to do it; I'll give you some brief info on where to start.
Pythonic way
You can deploy a Celery worker that will handle the scheduled jobs. The Celery documentation explains how to set up workers in Python: https://docs.celeryproject.org/en/latest/userguide/workers.html
You can get a grasp of how to extend your deployment to support Celery from this article on dev.to, which shows a full Celery deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-worker
  labels:
    deployment: celery-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: celery-worker
  template:
    metadata:
      labels:
        pod: celery-worker
    spec:
      containers:
        - name: celery-worker
          image: backend:11
          command: ["celery", "worker", "--app=backend.celery_app:app", "--loglevel=info"]
          env:
            - name: DJANGO_SETTINGS_MODULE
              value: 'backend.settings.minikube'
            - name: SECRET_KEY
              value: "my-secret-key"
            - name: POSTGRES_NAME
              value: postgres
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
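Note that the Celery worker only executes tasks; the time-based scheduling itself is done by Celery beat, usually run as its own process. A minimal sketch of such a container, reusing the (assumed) image and app path from the worker above:
containers:
  - name: celery-beat
    image: backend:11
    # beat dispatches the scheduled tasks to the worker via the broker
    command: ["celery", "beat", "--app=backend.celery_app:app", "--loglevel=info"]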
Kubernetes way
In Kubernetes (OpenShift is a distribution of Kubernetes) you can create a CronJob, which will execute a specific task on a schedule, similar to this:
kubectl run --generator=run-pod/v1 hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
which I pulled from Kubernetes docs.
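Note that newer kubectl versions have removed generators and the --schedule flag from kubectl run; the closest current equivalent is kubectl create cronjob, roughly:
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"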
Cloud way
You can also use a serverless platform, e.g. AWS Lambda, to execute a scheduled job. The cool thing about AWS Lambda is that its free tier will be more than enough for your use case.
See AWS example code here
I want to shut down Node.js gracefully, but it doesn't receive the preStop signal from Kubernetes.
process.on('preStop', handleShutdown);
function handleShutdown() {
  console.log("Pod will shut down in 30 seconds");
}
I currently do not have a preStop lifecycle command in the .yaml because I couldn't find any way to get it to notify the Node.js worker.
Thank you!
A preStop hook runs a command inside the container just before Kubernetes kills it, i.e. just before your app process is terminated. Setting up only preStop does not notify your application by itself, unfortunately; Node never sees a 'preStop' event. Here is a sample Pod I played with on minikube to test:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/bin/sleep 120"]
  terminationGracePeriodSeconds: 6000
Here I am telling it to hold the kill for 120 seconds before terminating the container. If termination still hasn't completed, the pod will be killed forcefully after 6000 seconds.
For your case, terminationGracePeriodSeconds itself should be enough.
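If the goal is for the Node.js process itself to run cleanup code, keep in mind that Kubernetes notifies the application by sending SIGTERM once any preStop hook has finished, so the listener belongs on that signal rather than on a 'preStop' event; a minimal sketch:
// Kubernetes sends SIGTERM when the pod is terminated (after the preStop hook, if any, completes)
process.on('SIGTERM', () => {
  console.log("Pod will shut down; cleaning up");
  // close servers / DB connections here, then exit
  process.exit(0);
});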