Google Cloud SDK code to execute via cron

I am trying to implement an automated script to shut down and start VM instances in my Google Cloud account via crontab. The OS is Ubuntu 12.04 LTS, and the Cloud SDK is set up with a Google service account so it can read and write to my Google Cloud account.
My actual code is in this file: /home/ubu12lts/cronfiles/resetvm.sh
#!/bin/bash
# stop the instance, piping "Y" to auto-confirm any prompt
echo Y | gcloud compute instances stop my-vm-name --zone us-central1-a
# wait two minutes before bringing it back up
sleep 120s
gcloud compute instances start my-vm-name --zone us-central1-a
echo "completed"
When I call the above file like this,
$ bash /home/ubu12lts/cronfiles/resetvm.sh
It works perfectly and does the job.
Now I wanted to set this up in cron so it would run automatically every hour. So I did
$ sudo crontab -e
And added this entry in cron:
0 * * * * /bin/sh /home/ubu12lts/cronfiles/resetvm.sh >>/home/ubu12lts/cron.log
And made the script executable:
chmod +x /home/ubu12lts/cronfiles/resetvm.sh
I also tested the crontab by adding a sample command that creates a .txt file with a sample message, and it worked perfectly.
But the above gcloud SDK script doesn't work through cron. The VM neither stops nor starts in Compute Engine.
Can anyone help, please?
Thank you so much.

You have added the entry to root's crontab, while your Cloud SDK installation is set up for a different user (I am guessing ubu12lts).
You should add the entry to ubu12lts's crontab using:
crontab -u ubu12lts -e
Additionally, your entry is currently scheduled to run at the 0th minute of every hour. Is that what you intended?
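To confirm which crontab the entry actually lives in, you can list both; a quick check, assuming the username taken from the question's paths:
sudo crontab -l                 # root's crontab, where the entry currently lives
sudo crontab -u ubu12lts -l     # ubu12lts's crontab, where it should go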

I have run into a similar issue before. I fixed it by forcing the profile in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
echo Y | gcloud compute instances stop my-vm-name --zone us-central1-a
sleep 120s
gcloud compute instances start my-vm-name --zone us-central1-a
echo "completed"
This also helped me resize node count in GKE.
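An equivalent fix is to set PATH at the top of the crontab itself, so cron can find the gcloud binary without sourcing a profile. A sketch, where the SDK directory is an assumption that depends on how the Cloud SDK was installed (check with which gcloud):
# adjust the last path component to wherever gcloud lives on your machine
PATH=/usr/local/bin:/usr/bin:/bin:/home/ubu12lts/google-cloud-sdk/bin
0 * * * * /bin/sh /home/ubu12lts/cronfiles/resetvm.sh >> /home/ubu12lts/cron.log 2>&1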

Related

Running multiple commands in the same cron job using Ansible

I currently have a cron job, defined in Ansible, that creates a backup of a Postgres database:
- name: Create a cron job to export database.
  become_user: postgres
  cron:
    name: "Export database"
    minute: "*/2"
    job: "pg_dump -U postgres -W -F t db_name > db_backup-$(date +%Y-%m-%d-%H.%M.%S).tar"
I want to run a gsutil cp command in the same job that then uploads this backup to a storage location in GCP.
I understand that in a plain crontab you would simply separate the two commands with &&; however, I'm not sure how this would work in Ansible.
Any pointers would be great, thank you!
Whatever you put in the job attribute ends up in the crontab (you can check this with crontab -l on the machine).
So in Ansible you can do everything you could do in the crontab directly, including chaining multiple commands separated by && or ;.
If that makes for a very long line, I'd suggest writing a script and having cron execute that, for readability.
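For the script route, a minimal sketch of a backup-and-upload script under stated assumptions: the bucket name and temp directory are placeholders, and pg_dump's -W flag is dropped because it forces a password prompt, which would hang under cron.
#!/bin/bash
# dump the database to a timestamped archive, then copy it to GCS
set -euo pipefail
backup="db_backup-$(date +%Y-%m-%d-%H.%M.%S).tar"
pg_dump -U postgres -F t db_name > "/tmp/${backup}"
gsutil cp "/tmp/${backup}" "gs://my-backup-bucket/${backup}"
The cron task's job attribute then shrinks to a single path, e.g. job: "/usr/local/bin/backup_db.sh".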

Setting up a cronjob on Google Compute Engine

I am new to setting up cronjobs and I'm trying to do it on a virtual machine in google compute engine. After a bit of research, I found this StackOverflow question: Running Python script at Regular intervals using Cron in Virtual Machine (Google Cloud Platform)
As per the answer, I managed to enter the crontab -e edit mode and set up a test cron job like 10 8 * * * /usr/bin/python /scripts/kite-data-pull/dataPull.py. I also checked the system time, which was in UTC, and entered the time accordingly.
The step I'm supposed to take next, as per the answer, is to run sudo systemctl restart cron, which throws an error for me:
sudo systemctl restart cron
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Any suggestions on what I can do to set up this cronjob correctly?
Edit the cron jobs with crontab -e and insert a line:
* * * * * echo test123 >> /your_homedir_path/file.log
That will append test123 to file.log every minute.
Then do tail -f file.log and wait a couple of minutes. You should see test123 lines appearing in the file (and on screen).
If that runs, try your Python file, but first make the .py file executable with chmod +x script.py.
You can find my reply to a similar question here.
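As for the systemctl error itself: it means the machine is not running systemd as PID 1 (common inside containers). In that case the older SysV-style commands usually work on Ubuntu; a hedged alternative:
# restart cron without systemd
sudo service cron restart
# or, if the service wrapper is unavailable:
sudo /etc/init.d/cron restart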

How to make a shell script that connects to a VM and executes commands?

I have a Python script on a gcloud VM instance. I want to run it via this shell script:
gcloud compute instances start instance-1 #start instance
gcloud compute ssh my_username@instance-1 #ssh into it
cd project_folder #execute command once inside VM
python my_script.py #run python script
sudo shutdown now #exit instance
gcloud compute instances stop instance-1 #stop instance
The first two commands work as intended; however, the rest of the commands don't execute on the VM. How can I make a script that executes commands after connecting to the VM?
gcloud compute instances start instance-1 #start instance
gcloud compute ssh my_username@instance-1 #ssh into it
At this point you have a SSH connection to your VM that is waiting for input. That is not what you want.
Note the --command option to gcloud compute ssh, which...
Runs the command on the target instance and then exits.
gcloud compute ssh my_username@instance-1 \
    --command="cd project_folder && python my_script.py && sudo shutdown now"
You can also use the sshpass utility to automate execution of commands on a remote server: https://linux.die.net/man/1/sshpass
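A sketch of that route, assuming password authentication is enabled on the VM (not the Compute Engine default, which uses SSH keys); the host and credentials are placeholders:
# run a remote command non-interactively, supplying the password via sshpass
sshpass -p 'your_password' ssh my_username@instance-1-external-ip \
    'cd project_folder && python my_script.py && sudo shutdown now'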

Cron job is not triggered on Ubuntu VM

In my Ubuntu VM, I have configured a cronjob
cat /var/spool/cron/crontabs/*
MAILTO="myemail@gmail.com"
* * * * * python /home/forge/web-app/database/backup_mysql.py
I checked with pgrep cron and a PID prints out fine.
It's been 5 minutes now and I don't see any email sent to me.
I don't see any backup file being generated.
I have a feeling this cron job never ran.
How do I debug this?
Do I need to restart some kind of service?
Could you please check the cron service:
service cron status
And check the cron logs to see whether the job is running or not (on Ubuntu, cron logs to syslog):
grep CRON /var/log/syslog
Check the crontab:
crontab -e -u username
And also check that the script has execute permission:
chmod +x <file>
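Another useful step is to capture the job's own output, since cron silently discards it unless mail is configured; a sketch using the question's entry, with the log path as an assumption:
* * * * * python /home/forge/web-app/database/backup_mysql.py >> /home/forge/cron.log 2>&1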

How to distribute cron jobs to the cluster machines?

I can install a new cron job using the crontab command.
For example,
crontab -e
0 0 * * * rdate -s time.bora.net && clock -w > /dev/null 2>&1
Now I have 100 machines in my cluster, and I want to install the above cron task on all of them.
How can I distribute cron jobs to the cluster machines?
Thanks,
Method 1: Ansible
Ansible can distribute machine configuration to remote machines. Please refer to:
https://www.ansible.com/
Method 2: Distributed cron
You can use a distributed cron such as cronsun to schedule jobs across machines. It has a master node, and you can configure your jobs easily and monitor their results:
https://github.com/shunfei/cronsun
Each user's crontab is stored under /var/spool/cron/ (on Ubuntu, /var/spool/cron/crontabs/<username>), so you can write your own cron file and distribute it to that location after acquiring root permission.
But if another user edits the crontab at the same time, you can never be sure when it will get changed.
The links below may help:
https://ubuntuforums.org/showthread.php?t=2173811
https://forums.linuxmint.com/viewtopic.php?t=144841
https://serverfault.com/questions/347318/is-it-bad-to-edit-cron-file-manually
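A minimal sketch of that manual approach over SSH, assuming a hosts.txt listing the machines and key-based root access (the race-condition caveat above still applies):
#!/bin/bash
# append the job to each machine's root crontab, preserving existing entries
entry='0 0 * * * rdate -s time.bora.net && clock -w > /dev/null 2>&1'
while read -r host; do
  ssh "root@${host}" "crontab -l 2>/dev/null | { cat; echo '${entry}'; } | crontab -"
done < hosts.txt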
Ansible already has a cron module:
https://docs.ansible.com/ansible/2.7/modules/cron_module.html
