Passing in Arguments to Kubernetes Deployment Pod - cron

I'm trying to create a Kubernetes deployment that creates a pod.
I want this pod to run the command "cron start" on creation so that cron is automatically initialized.
This is how I am currently trying to run the command, though it clearly isn't working (kubernetes_deployment.yaml):
      ports:
        - containerPort: 8080
      command: ["/bin/sh"]
      args: ["cron start"]
Thank you in advance :)

Maybe you could use Kubernetes CronJobs; they let you set a cron schedule expression.
https://kubernetes.io/es/docs/concepts/workloads/controllers/cron-jobs/
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
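To try this out, save the manifest and create it, then watch the jobs it spawns (the file name here is just an assumption):
$ kubectl apply -f cronjob.yaml
$ kubectl get cronjob hello
$ kubectl get jobs --watch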

Related

Running Testcafe in Docker Containers in Kubernetes - 1337 Port is Already in Use - Error

I have multiple Testcafe scripts (script1.js, script2.js) that are working fine. I have Dockerized this code, and it works fine when I run the Docker image. Next, I want to invoke this Docker image as a CronJob in Kubernetes. Given below is my manifest.yaml file.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: application-automation-framework
  namespace: development
  labels:
    team: development
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      labels:
        team: development
    spec:
      ttlSecondsAfterFinished: 120
      backoffLimit: 3
      template:
        metadata:
          labels:
            team: development
        spec:
          containers:
            - name: script1-job
              image: testcafe-minikube
              imagePullPolicy: Never
              args: ["chromium:headless", "script1.js"]
            - name: script2-job
              image: testcafe-minikube
              imagePullPolicy: Never
              args: ["chromium:headless", "script2.js"]
          restartPolicy: OnFailure
As seen above, this manifest has two containers running. When I apply this manifest to Kubernetes, the first container (script1-job) runs well. But the second container (script2-job) gives me the following error.
ERROR The specified 1337 port is already in use by another program.
If I run this with one container, it works perfectly. I also tried changing the args of the containers to the following.
args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
args: ["chromium:headless", "script2.js", "--ports 1234,1235"]
Still, I get the same error saying port 1337 is already in use. (I wonder whether the --ports argument works at all in Docker.)
This is my Dockerfile for reference.
FROM testcafe/testcafe
COPY . ./
USER root
RUN npm install
Could someone please help me with this? I want to run multiple containers as a CronJob in Kubernetes, running multiple Testcafe scripts in each job invocation.
Adding the containerPort configuration (together with matching --ports arguments) to your Kubernetes resource should do the trick.
For example:
spec:
  containers:
    - name: script1-job
      image: testcafe-minikube
      imagePullPolicy: Never
      args: ["chromium:headless", "script1.js", "--ports", "12345,12346"]
      ports:
        - containerPort: 12346
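For the same reason, the second container would need its own, non-overlapping port pair. A sketch of how that could look, continuing the example above (the exact port numbers are arbitrary):
    - name: script2-job
      image: testcafe-minikube
      imagePullPolicy: Never
      args: ["chromium:headless", "script2.js", "--ports", "12347,12348"]
      ports:
        - containerPort: 12348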

How to execute a function in Node.js using cronjob in Kubernetes

I want to execute a function written in Node.js (let's assume on an image called helloworld) every minute on Kubernetes using a CronJob.
function helloWorld() {
  console.log('hello world!');
}
I don't understand how I can call it in the YAML file.
Config.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: helloworld
          restartPolicy: OnFailure
I think you should use Fn. One of the most powerful features of Fn is the ability to use custom-defined Docker container images as functions. This makes it possible to customize your function's runtime environment, including installing any Linux libraries or utilities that your function might need. And thanks to the Fn CLI's support for Dockerfiles, it's the same user experience as when developing any function.
Deploying your function is how you publish it and make it accessible to other users and systems. To see the details of what happens during a function deploy, use the --verbose switch. The first time you build a function of a particular language it takes longer, as Fn downloads the necessary Docker images; the --verbose option lets you see this process.
A new image will be created, e.g. node-app-hello.
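For example, a verbose deploy might look like this (the app name node-app is an assumption):
$ fn --verbose deploy --app node-app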
Then you can configure the CronJob.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-fn-example
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: node-app-hello  # the example image built above
              args:
                - ...
          restartPolicy: OnFailure
You can also add an extra command to run in the hello container.
Then simply execute:
$ kubectl create -f you-cronjob-file.yaml
Take a look: cron-jobs.
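Alternatively, if you'd rather not introduce Fn: as long as the helloworld image contains a script that actually invokes the function (e.g. a hello.js ending in a helloWorld(); call), the CronJob only needs to run node against it. A minimal sketch, with the script name assumed:
spec:
  containers:
    - name: hello
      image: helloworld
      # run the script that calls helloWorld() once, then exit
      command: ["node", "hello.js"]
  restartPolicy: OnFailure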

Best practice for running a cron job inside an app on Openshift?

I want to run a simple backup of my Postgres db deployed in OpenShift. What are the best practices for running a cron job? Since systemd is not available in the containers and can only be enabled through a hack, I'd rather use a 'cleaner' approach. Besides cronie or systemd timer units, what options are there? It seems one could enable cron in earlier OpenShift versions; however, OpenShift v4.x no longer supports this feature, and the docs only mention the Kubernetes CronJob object.
Here is what I use:
A dedicated pod with the same image (to ensure the db dump client is available) and a PVC mounted for backups
A ConfigMap with the backup script
A CronJob running that pod on schedule
Here are some example manifests:
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-bkp
  namespace: database
  annotations:
    volume.beta.kubernetes.io/storage-class: "storage-class-name"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
CM:
apiVersion: v1
kind: ConfigMap
metadata:
  name: psqldump
  namespace: database
  labels:
    job-name: db-backup
data:
  psqldump.sh: |
    #!/bin/bash
    DBS=$(psql -xl | awk /^Name/'{print $NF}')
    for DB in ${DBS}; do
      SCHEMAS=$(psql -d ${DB} -xc "\dn" | awk /^Name/'{print $NF}')
      for SCHEMA in ${SCHEMAS}; do
        echo "Dumping database '${DB}' from Schema '${SCHEMA}' into ${BACKUPDIR}/${PGHOST}_${SCHEMA}_${DB}_${ENVMNT}_$(date -I).sql"
        pg_dump -n "${SCHEMA}" ${DB} > ${BACKUPDIR}/${PGHOST}_${SCHEMA}_${DB}_${ENVMNT}_$(date -I).sql
      done
    done
    echo "Deleting dumps older than ${RETENTION} days"
    find ${BACKUPDIR} -name "*.sql" -mtime +${RETENTION} -exec rm -rf {} \;
CronJob:
apiVersion: v1
kind: Template
metadata:
  name: postgres-backup
  namespace: database
objects:
  - kind: CronJob
    apiVersion: batch/v1beta1
    metadata:
      name: postgres-backup
      namespace: database
    spec:
      schedule: "0 3 * * *"
      successfulJobsHistoryLimit: 1
      jobTemplate:
        spec:
          template:
            metadata:
              namespace: database
            spec:
              containers:
                - name: postgres-dbbackup
                  image: "postgres:11"
                  env:
                    - name: PGHOST
                      value: "${_PGHOST}"
                    - name: PGUSER
                      value: "${_PGUSER}"
                    - name: RETENTION
                      value: "${_RETENTION}"
                    - name: BACKUPDIR
                      value: "${_BACKUPDIR}"
                  command: ["/bin/bash", "-c", "/usr/local/bin/psqldump.sh"]
                  volumeMounts:
                    - mountPath: /usr/local/bin
                      name: psqldump-volume
                    - mountPath: /backup
                      name: backup-volume
              volumes:
                - name: psqldump-volume
                  configMap:
                    name: psqldump
                    defaultMode: 0755
                - name: backup-volume
                  persistentVolumeClaim:
                    claimName: database-bkp
              restartPolicy: Never
parameters:
  - name: _PGHOST
    value: postgres
  - name: _PGUSER
    value: postgres
  - name: _RETENTION
    value: "30"
  - name: _BACKUPDIR
    value: "/backup"
PGHOST is the pod name of your database. If you have a dedicated user and password for your backup, set the env vars PGUSER and PGPASSWORD accordingly.
Running the cron job inside the same pod as your db is not a good idea (the pod where the db runs can be killed/respawned etc.).
IMHO the best solution is to define a CronJob in the same project as the db. The Job would use an official OpenShift base image with the oc CLI and, from there, execute a script that connects to the pod where the db runs (oc rsh ...) and performs the backup.
Or execute a script from outside OCP that connects to the cluster (with a system account) and then executes oc rsh <db pod name> <backup command>.
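A minimal sketch of that external variant, assuming a service-account token and a hypothetical pod name, and using oc exec rather than oc rsh so no TTY mangles the dump output:
# log in with a service account token (hypothetical values)
$ oc login https://api.cluster.example.com:6443 --token="${SA_TOKEN}"
# run the dump inside the database pod and capture it locally
$ oc exec postgres-1-abcde -- pg_dump -U postgres mydb > mydb_$(date -I).sql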

What is the best way to run a scheduled job

I have a project that contains two parts: the first one is a Flask Api and the second one is a script that should be scheduled.
The Flask app is served through a docker image that runs in Openshift.
My problem is where I should schedule the second script. I have access to GitLab CI/CD, but that's not really its purpose.
Building a Docker image and running it on OpenShift is also not possible, because it would run more times than needed if there is more than one pod.
The only option I'm thinking of is just using a regular server with cron.
Do you have maybe a better solution?
Thanks
There are several aspects to your question and several ways to do it; I'll give you some brief info on where to start.
Pythonic-way
You can deploy a Celery worker that will handle the scheduled jobs. You can look at the Celery documentation on how to work it out in Python: https://docs.celeryproject.org/en/latest/userguide/workers.html
You can probably get a grasp of how to extend your deployment to support Celery from this article on dev.to, which shows a full Celery deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-worker
  labels:
    deployment: celery-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: celery-worker
  template:
    metadata:
      labels:
        pod: celery-worker
    spec:
      containers:
        - name: celery-worker
          image: backend:11
          command: ["celery", "worker", "--app=backend.celery_app:app", "--loglevel=info"]
          env:
            - name: DJANGO_SETTINGS_MODULE
              value: 'backend.settings.minikube'
            - name: SECRET_KEY
              value: "my-secret-key"
            - name: POSTGRES_NAME
              value: postgres
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
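Note that a worker alone does not trigger anything on a schedule; periodic tasks in Celery are dispatched by a separate beat process. A minimal sketch of a companion Deployment, reusing the backend:11 image and app module from the article's example (whether they suit your app is an assumption):
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-beat
  labels:
    deployment: celery-beat
spec:
  replicas: 1  # beat must run as a single instance, or tasks get dispatched twice
  selector:
    matchLabels:
      pod: celery-beat
  template:
    metadata:
      labels:
        pod: celery-beat
    spec:
      containers:
        - name: celery-beat
          image: backend:11
          command: ["celery", "beat", "--app=backend.celery_app:app", "--loglevel=info"]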
Kubernetes-way
In Kubernetes (OpenShift is a distribution of Kubernetes) you can create a CronJob, which will execute a specific task on a schedule, similar to this:
kubectl run --generator=run-pod/v1 hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
which I pulled from Kubernetes docs.
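Note that the --generator flag has since been removed from kubectl; on current versions the equivalent is:
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"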
Cloud way
You can also use a serverless platform, e.g. AWS Lambda to execute a scheduled job. The cool thing about AWS Lambda is that their free tier will be more than enough for your use case.
See AWS example code here

Openshift cron job curl

I have a Java REST application on OpenShift. I need to create a cron job that calls a URL in my application. Is that possible?
I tried this example
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: "cronjobpi"
        spec:
          containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: OnFailure
This job works well, but I don't know how to change it from perl to curl correctly.
Thank you for advice.
I think you can run the curl command if you change the image to rhel7, as follows.
...
spec:
  containers:
    - name: pi
      image: rhel7
      command: ["curl", "-kvs", "https://www.google.com/"]
...
I hope it helps you. :^)
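To try it, apply the manifest and check the logs of one of the spawned jobs (the file and job names here are assumptions):
$ oc apply -f cronjob-curl.yaml
$ oc get jobs
$ oc logs job/pi-1594800000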
