I want to execute a function written in Node.js (let's assume it is packaged in an image called helloworld) every minute on Kubernetes using a CronJob.
function helloWorld() {
  console.log('hello world!');
}
I don't understand how I can call it in the YAML file.
Config.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: helloworld
          restartPolicy: OnFailure
I think you should use Fn. One of the most powerful features of Fn is the ability to use custom-defined Docker container images as functions. This feature makes it possible to customize your function’s runtime environment, including letting you install any Linux libraries or utilities that your function might need. And thanks to the Fn CLI’s support for Dockerfiles, it’s the same user experience as when developing any function.
Deploying your function is how you publish your function and make it accessible to other users and systems. To see the details of what is happening during a function deploy, use the --verbose switch. The first time you build a function of a particular language it takes longer as Fn downloads the necessary Docker images. The --verbose option allows you to see this process.
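For reference, Fn describes each function with a func.yaml file that sits next to the code and drives the build and deploy. A rough sketch for a Node function is shown below; the name, version, and entrypoint values are illustrative placeholders, so adjust them to your project:
schema_version: 20180708
name: node-app-hello        # placeholder function name
version: 0.0.1
runtime: node               # use "docker" instead when building from a custom Dockerfile
entrypoint: node func.js    # placeholder entrypoint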
A new image will be created, for example node-app-hello.
Then you can configure the CronJob.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-fn-example
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: node-app-hello
            args:
            - ...
          restartPolicy: OnFailure
You can also add an extra command to run in the hello container, as sketched below.
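For instance, assuming the Node entrypoint file inside the image is called hello.js (that file name is an assumption), the container section could invoke it explicitly:
containers:
- name: hello
  image: node-app-hello
  # command overrides the image's default entrypoint; hello.js is a placeholder file name
  command: ["node", "hello.js"]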
Then simply execute the command:
$ kubectl create -f your-cronjob-file.yaml
Take a look: cron-jobs.
I have multiple TestCafe scripts (script1.js, script2.js) that are working fine. I have Dockerized this code with a Dockerfile, and it works fine when I run the Docker image. Next, I want to invoke this Docker image as a CronJob in Kubernetes. Given below is my manifest.yaml file.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: application-automation-framework
  namespace: development
  labels:
    team: development
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      labels:
        team: development
    spec:
      ttlSecondsAfterFinished: 120
      backoffLimit: 3
      template:
        metadata:
          labels:
            team: development
        spec:
          containers:
          - name: script1-job
            image: testcafe-minikube
            imagePullPolicy: Never
            args: ["chromium:headless", "script1.js"]
          - name: script2-job
            image: testcafe-minikube
            imagePullPolicy: Never
            args: ["chromium:headless", "script2.js"]
          restartPolicy: OnFailure
As seen above, this manifest has two containers running. When I apply this manifest to Kubernetes, the first container (script1-job), runs well. But the second container (script2-job) gives me the following error.
ERROR The specified 1337 port is already in use by another program.
If I run this with one container, it works perfectly. I also tried changing the args of the containers to the following.
args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
args: ["chromium:headless", "script2.js", "--ports 1234,1235"]
Still, I get the same error saying port 1337 is already in use. (I wonder whether the --ports argument works at all in Docker.)
This is my Dockerfile for reference.
FROM testcafe/testcafe
COPY . ./
USER root
RUN npm install
Could someone please help me with this? I want to run multiple containers as a CronJob in Kubernetes, so that I can run multiple TestCafe scripts in each job invocation.
Adding the containerPort configuration to your Kubernetes resource should do the trick.
For example:
spec:
  containers:
  - name: script1-job
    image: testcafe-minikube
    imagePullPolicy: Never
    args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
    ports:
    - containerPort: 12346
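Note that both containers run in the same pod and therefore share the pod's network namespace, which is why two TestCafe instances both trying to bind the default port 1337 collide. A sketch with non-overlapping port pairs is below; passing --ports and its value as separate array elements matches the TestCafe CLI syntax, but whether the image's entrypoint forwards these flags is an assumption you should verify:
spec:
  containers:
  - name: script1-job
    image: testcafe-minikube
    imagePullPolicy: Never
    # first instance binds 12345/12346 instead of the default 1337
    args: ["chromium:headless", "script1.js", "--ports", "12345,12346"]
    ports:
    - containerPort: 12345
    - containerPort: 12346
  - name: script2-job
    image: testcafe-minikube
    imagePullPolicy: Never
    # second instance uses a different, non-overlapping pair
    args: ["chromium:headless", "script2.js", "--ports", "12347,12348"]
    ports:
    - containerPort: 12347
    - containerPort: 12348
  restartPolicy: OnFailure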
I'm trying to create a Kubernetes Deployment that creates a pod.
I want this pod to run the command "cron start" on creation so that cron is automatically initialized.
This is currently how I am trying to run the command, though it clearly isn't working (kubernetes_deployment.yaml):
- containerPort: 8080
command: [ "/bin/sh" ]
args: ["cron start"]
Thank you in advance :)
Maybe you could use Kubernetes CronJobs.
You can set a cron expression.
https://kubernetes.io/es/docs/concepts/workloads/controllers/cron-jobs/
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
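Adapted to your case, the idea is to let the CronJob run your task directly on the schedule instead of starting cron inside a long-lived pod. A minimal sketch follows; the image name and command are placeholders for your own:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-scheduled-task
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: my-app-image                      # placeholder: your own image
            command: ["/bin/sh", "-c"]
            args: ["echo running scheduled task"]    # placeholder: the command you want on a schedule
          restartPolicy: OnFailure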
I have a project that contains two parts: the first is a Flask API and the second is a script that should be scheduled.
The Flask app is served through a Docker image that runs in OpenShift.
My problem is where I should schedule the second script. I have access to GitLab CI/CD, but that's not really its purpose.
Building a Docker image and running it on OpenShift is also not possible, because it would run more times than needed if there is more than one pod.
The only option I'm thinking of is just using a regular server with cron.
Do you have maybe a better solution?
Thanks
There are several aspects to your question and several ways to do it; I'll give you some brief info on where to start.
Pythonic way
You can deploy a Celery worker that will handle the scheduled jobs. You can look into the Celery documentation on how to work it out in Python: https://docs.celeryproject.org/en/latest/userguide/workers.html
You can probably get a grasp of how to extend your deployment to support Celery from this article on dev.to, which shows a full deployment of Celery:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-worker
  labels:
    deployment: celery-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: celery-worker
  template:
    metadata:
      labels:
        pod: celery-worker
    spec:
      containers:
      - name: celery-worker
        image: backend:11
        command: ["celery", "worker", "--app=backend.celery_app:app", "--loglevel=info"]
        env:
        - name: DJANGO_SETTINGS_MODULE
          value: 'backend.settings.minikube'
        - name: SECRET_KEY
          value: "my-secret-key"
        - name: POSTGRES_NAME
          value: postgres
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password
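One caveat: the worker alone does not trigger anything on a schedule; Celery's periodic tasks are driven by a separate beat process. A minimal sketch of a beat Deployment, reusing the image and settings names from the example above (all values are taken from that example and may need adjusting), could look like this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: celery-beat
  labels:
    deployment: celery-beat
spec:
  replicas: 1            # run a single beat instance to avoid duplicate schedules
  selector:
    matchLabels:
      pod: celery-beat
  template:
    metadata:
      labels:
        pod: celery-beat
    spec:
      containers:
      - name: celery-beat
        image: backend:11   # same image as the worker example above
        command: ["celery", "beat", "--app=backend.celery_app:app", "--loglevel=info"]
        env:
        - name: DJANGO_SETTINGS_MODULE
          value: 'backend.settings.minikube'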
Kubernetes way
In Kubernetes (OpenShift is a distribution of Kubernetes) you can create a CronJob, which will execute a specific task on a schedule, similar to this:
kubectl run --generator=run-pod/v1 hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
which I pulled from the Kubernetes docs.
Cloud way
You can also use a serverless platform, e.g. AWS Lambda, to execute a scheduled job. The cool thing about AWS Lambda is that its free tier will be more than enough for your use case.
See AWS example code here
I'm learning OpenShift Online. I tried to create a cron job from either the UI or the CLI, and both attempts resulted in the error below:
Error from server: admission webhook "validate.cron.create" denied the request: Prohibited resource for this cluster.
I checked the permissions for the account, and it had admin rights with CRUD on cronjobs.
I used the same example as the documentation:
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: "cronjobpi"
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: OnFailure
So, as @Daein Park mentioned:
Unfortunately, CronJob is only available on OpenShift Online Pro. So if you use the cluster on the Starter plan, you cannot create a CronJob.
https://docs.openshift.com/online/dev_guide/cron_jobs.html#overview
I have a Java REST application on OpenShift. I need to create a cron job that calls a link (URL) in my application. Is it possible?
I tried this example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: "cronjobpi"
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: OnFailure
This job works well, but I don't know how to change it from perl to curl correctly.
Thank you for the advice.
I think you can run the curl command if you change the image to rhel7, as follows.
...
spec:
  containers:
  - name: pi
    image: rhel7
    command: ["curl", "-kvs", "https://www.google.com/"]
...
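Putting it together, a complete CronJob that calls a URL in your application might look roughly like the sketch below; the schedule and the service URL are placeholders you need to replace with your own:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: call-my-app
spec:
  schedule: "*/5 * * * *"                # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: call-my-app
            image: rhel7
            # placeholder URL: point this at your application's route or service
            command: ["curl", "-kvs", "http://my-java-app.my-project.svc:8080/my-endpoint"]
          restartPolicy: OnFailure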
I hope it helps you. :^)