Ansible cronjob - adding a timestamp to filename - cron

I am trying to write an Ansible task that does a daily backup of our database and timestamps the name of the file.
- name: Run cronjob to backup database.
  become_user: postgres
  cron:
    name: "Export database"
    special_time: daily
    job: pg_dump -U db_user -W -F t db_name > filename.tar
I tried simply appending $(date +%Y-%m-%d-%H.%M.%S) to filename.tar, e.g.
filename-$(date +%Y-%m-%d-%H.%M.%S).tar
However, this won't work, as the value of the date is set only once when the task is run, so every filename ends up with the timestamp of when the Ansible task was run.
I'm looking for a way around this, so that the job generates a new timestamp for each filename.

In general, we have a practice of creating shell scripts to handle database backups. This allows us to capture exit code(s), and log/send alerts accordingly.
Though a simple one with just a single pg_dump command will do, something similar to the script below should be possible:
Example:
db-backup.sh
#!/bin/bash
DB_USER="dbuser"
DB_PASSWORD="dbpassword"
DB_NAME="db_name"
BACKUP_FILE="db-backup-$(date +%Y-%m-%d-%H.%M.%S).tar"
# pg_dump picks the password up from PGPASSWORD; the -W flag from the
# question would prompt interactively and hang under cron.
export PGPASSWORD="${DB_PASSWORD}"
# Run pg_dump directly in the if; storing the command (and its redirection)
# in a string would not redirect the output.
if pg_dump -U "${DB_USER}" -F t "${DB_NAME}" > "${BACKUP_FILE}"; then
    echo "Backup succeeded: ${BACKUP_FILE}"        # do something, e.g. log success
else
    echo "Backup failed for ${DB_NAME}" >&2        # do something else, e.g. send an alert
fi
Then in Ansible, copy this script to the target and create a cron job to run it.
# Copying to temp directory for example
- name: copy database backup script
  copy:
    src: 'db-backup.sh'
    dest: '/tmp/db-backup.sh'
    mode: '0755'
    owner: 'postgres'
    group: 'postgres'

- name: run cronjob to backup database
  become_user: postgres
  cron:
    name: 'Export database'
    special_time: 'daily'
    job: '/tmp/db-backup.sh'
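A side note, not part of the original answer: if you do want to keep the inline approach from the question, remember that % is a special character in crontab entries (cron treats it as a newline), so every % in the date format has to be escaped. A rough sketch of the job parameter with escaping:
job: pg_dump -U db_user -W -F t db_name > filename-$(date +\%Y-\%m-\%d-\%H.\%M.\%S).tar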

Related

How to run ansible playbook from github actions - without using external action

I have written a workflow file that prepares the runner to connect to the desired server with SSH, so that I can run an Ansible playbook.
ssh -t -v theUser@theHost shows me that the SSH connection works.
The Ansible script, however, tells me that the sudo password is missing.
If I leave out the line ssh -t -v theUser@theHost, Ansible throws a connection timeout and can't connect to the server.
=> fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out
First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost.
The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works very well from my local machine without using the sudo password. I configured the server so that the user has enough rights in the desired folder, recursively.
It simply doesn't work from my GitHub Action.
Can you please tell me what I am doing wrong?
My workflow file looks like this:
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Run Ansible Playbook
        run: |
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/config
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config
          ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts
          cd myproject-infrastructure/ansible
          eval `ssh-agent -s`
          chmod 700 /home/runner/.ssh/id_rsa
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Finally found it
First, the basic setup of the action itself.
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
Next, add a job and check out the repository in the first step.
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
Next set up ssh correctly.
- name: Setup ssh
  shell: bash
  run: |
    service ssh status
    eval `ssh-agent -s`
First of all, you want to be sure that the ssh service is running; it was already running in my case.
However, when I experimented with Docker I had to start the service manually first, e.g. with service ssh start. Next, make sure the .ssh folder exists for your user and copy your private key into it. I added a GitHub secret to my repository where I saved my private key. In my case the user is runner.
mkdir -p /home/runner/.ssh/
touch /home/runner/.ssh/id_rsa
echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
Make sure that your private key file is protected; if it isn't, ssh will refuse to use it. To do so:
chmod 700 /home/runner/.ssh/id_rsa
Normally, when you start an SSH connection you are asked whether you want to save the host permanently as a known host. Since we are running automatically, we can't type in yes, and if the prompt goes unanswered the process fails.
You have to prevent the process from being interrupted by that prompt. To do so, add the host to the known_hosts file yourself, using ssh-keyscan. Unfortunately, ssh-keyscan can produce output for different key types.
Simply using ssh-keyscan was not enough in my case; I had to add the key type options to the command. The generated output has to be written to the known_hosts file in the .ssh folder of your user, in my case /home/runner/.ssh/known_hosts.
So the next command is:
ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
Now you are almost there. Just call the ansible-playbook command to run the Ansible script. I created a new step in which I change the directory to the folder in my repository where my Ansible files are saved.
- name: Run ansible script
  shell: bash
  run: |
    cd infrastructure/ansible
    ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
The complete file:
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Setup SSH
        shell: bash
        run: |
          eval `ssh-agent -s`
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          chmod 700 /home/runner/.ssh/id_rsa
          ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
      - name: Run ansible script
        shell: bash
        run: |
          service ssh status
          cd infrastructure/ansible
          cat setup-prod.yml
          ansible-playbook -vvv --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Next enjoy...
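One more note that is not part of the original answer: if the playbook still reports a missing sudo password where become: yes is used, the runner has no terminal to prompt on, so one possible workaround is to feed the become password in from a repository secret. The secret name SUDO_PASSWORD below is only an assumption for illustration:
ansible-playbook --private-key /home/runner/.ssh/id_rsa \
  -u ${{secrets.ANSIBLE_DEPLOY_USER}} \
  -e "ansible_become_password=${{secrets.SUDO_PASSWORD}}" \
  -i hosts.yml setup-prod.yml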
An alternative, without explaining why you have those errors, is to test and use actions/run-ansible-playbook to run your playbook.
That way, you can test whether the "sudo Password is missing" error still occurs in that configuration.
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    # Required, playbook filepath
    playbook: deploy.yml
    # Optional, directory where playbooks live
    directory: ./
    # Optional, SSH private key
    key: ${{secrets.SSH_PRIVATE_KEY}}
    # Optional, literal inventory file contents
    inventory: |
      [all]
      example.com
      [group1]
      example.com
    # Optional, SSH known hosts file content
    known_hosts: .known_hosts
    # Optional, encrypted vault password
    vault_password: ${{secrets.VAULT_PASSWORD}}
    # Optional, galaxy requirements filepath
    requirements: galaxy-requirements.yml
    # Optional, additional flags to pass to ansible-playbook
    options: |
      --inventory .hosts
      --limit group1
      --extra-vars hello=there
      --verbose

Cron doesn't run sh script

I have a simple script that works when I manually put it into the ~/ directory.
#!/bin/bash
source /opt/python/current/env
source /opt/python/run/venv/bin/activate
cd /opt/python/current/app
scrapy crawl myspider
deactivate
exit 0
Since Elastic Beanstalk will delete the script and the crontab eventually, I need to create a config file in .ebextensions.
So I've created cron-linux.config.
But I can't make it work. It doesn't do anything (I should see changes in the database).
files:
  "/etc/cron.d/crawl":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * /usr/local/bin/crawl.sh

  "/usr/local/bin/crawl.sh":
    mode: "000755"
    owner: ec2-user
    group: ec2-user
    content: |
      #!/bin/bash
      source /opt/python/current/env
      source /opt/python/run/venv/bin/activate
      cd /opt/python/current/app
      scrapy crawl myspider
      deactivate
      exit 0

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/crawl.bak"
Do you know where the problem is?
Even when I do eb logs > log.txt and check the file, there is no mention of 'crawl', 'cron', etc.
EDIT
I can even run the script manually: [ec2-user@ip-xxx-xx-x-xx ~]$ /usr/local/bin/crawl.sh
And I see that it works. Just the cron doesn't work.
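A note that is not from the original thread, but worth checking: files under /etc/cron.d use the system crontab format, which requires a user field between the schedule and the command; without it the entry is ignored and the job simply never runs. A minimal sketch of the corrected section, assuming the script should run as root:
files:
  "/etc/cron.d/crawl":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root /usr/local/bin/crawl.sh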

run one ansible task only once during starting and run one ansible task at end [duplicate]

This question already has answers here:
Ansible delegate and run_once
(1 answer)
Running a task on a single host always with Ansible?
(1 answer)
Closed 4 years ago.
I have the Ansible playbook below, in which I want to do a few things:
I want to run the first task only once during the entire time this playbook is running. I have 100 machines in the servers group, so "task one" should run only once, at the start.
I want to run task three only once as well, but at the very end of this playbook, when it is working on the last machine. I am not sure whether this is possible.
What I need to do is:
Copy clients.tar.gz file from some remote servers to local box (inside /tmp folder) where my ansible is running.
And then unarchive this "clients.tar.gz" file on all the servers specified in "servers" group.
And at the end delete this tar.gz file from /tmp folder.
Below is my Ansible playbook. Is this possible to do by any chance?
---
- name: copy files
  hosts: servers
  serial: 10
  tasks:
    - name: copy clients.tar.gz file. Run this task only once during starting
      shell: "(scp -o StrictHostKeyChecking=no goldy@machineA:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineB:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineC:/process/snap/20180418/clients.tar.gz /tmp/)"

    - name: copy and untar latest clients.tar.gz file
      unarchive: src=/tmp/clients.tar.gz dest=/data/files/

    - name: Remove previous tarFile. Run this task only once at the end of this playbook
      file: path=/tmp/clients.tar.gz state=absent

    - name: sleep for few seconds
      pause: seconds=20
Update
I went through the linked question and it looks like it can be done like this?
---
- name: copy files
  hosts: servers
  serial: 10
  tasks:
    - name: copy clients.tar.gz file. Run this task only once during starting
      shell: "(scp -o StrictHostKeyChecking=no goldy@machineA:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineB:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineC:/process/snap/20180418/clients.tar.gz /tmp/)"
      delegate_to: "{{ groups['servers'] | first }}"
      run_once: true

    - name: copy and untar latest clients.tar.gz file
      unarchive: src=/tmp/clients.tar.gz dest=/data/files/

    - name: Remove previous tarFile. Run this task only once at the end of this playbook
      file: path=/tmp/clients.tar.gz state=absent
      delegate_to: "{{ groups['servers'] | last }}"
      run_once: true

    - name: sleep for few seconds
      pause: seconds=20
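One caveat the linked answers do not spell out: when combined with serial: 10, run_once applies per batch, so those tasks would still run once for every batch of 10 hosts rather than once for the whole playbook. A way around that, sketched below with the fallback scp chain omitted for brevity and the same hosts and paths assumed as above, is to split the work into three plays so the download and the cleanup each get their own single-host play:
---
# Play 1: fetch the tarball once, onto the control machine's /tmp
- name: fetch clients.tar.gz once
  hosts: localhost
  tasks:
    - name: copy clients.tar.gz file from a remote source
      shell: "scp -o StrictHostKeyChecking=no goldy@machineA:/process/snap/20180418/clients.tar.gz /tmp/"

# Play 2: roll out to all servers, 10 at a time
- name: unarchive on all servers
  hosts: servers
  serial: 10
  tasks:
    - name: copy and untar latest clients.tar.gz file
      unarchive:
        src: /tmp/clients.tar.gz
        dest: /data/files/

# Play 3: clean up the local copy once, at the very end
- name: remove the tarball from the control machine
  hosts: localhost
  tasks:
    - name: remove previous tar file
      file:
        path: /tmp/clients.tar.gz
        state: absent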

Edit current user's shell with ansible

I'm trying to push my dotfiles and some personal configuration files to a server (I'm not root or a sudoer). Ansible connects as my user in order to edit files in my home folder.
I'd like to set my default shell to /usr/bin/fish.
I am not allowed to edit /etc/passwd, so
user:
  name: shaka
  shell: /usr/bin/fish
won't run.
I also checked the chsh command, but the executable prompts for my password.
How can I change my shell on such machines? (Debian 8, Ubuntu 16, openSUSE)
I know this is old, but I wanted to post this in case anyone else comes back here looking for advice like I did:
If you're running local playbooks, you might not be specifying the user, while expecting to change the shell of the user you're running the playbook as.
The problem is that you can't change the shell without elevating privileges (become: yes), but when you do, you're running things as root, which just changes the shell of the root user. You can double-check that this is the case by looking at /etc/passwd and seeing what the root shell is.
Here's my recipe for changing the shell of the user running the playbook:
- name: set up zsh for user
  hosts: localhost
  become: no
  vars:
    the_user: "{{ ansible_user_id }}"
  tasks:
    - name: change user shell to zsh
      become: yes
      user:
        name: "{{ the_user }}"
        shell: /bin/zsh
This will set the variable the_user to the current running user, but will change the shell of that user using root.
I ended up using two Ansible modules:
ansible expect
ansible prompt
First, I record my password with a prompt:
vars_prompt:
  - name: "my_password"
    prompt: "Enter password"
    private: yes
And then I use the expect module to send the password to the chsh command:
tasks:
  - name: Case insensitive password string match
    expect:
      command: "chsh -s /usr/bin/fish"
      responses:
        (?i)password: "{{ my_password }}"
      creates: ".shell_is_fish"
The creates option points at a lock file that prevents this task from being triggered again. This may be dangerous, because the shell could be changed later and Ansible will not update it (since the lock file is still present). You may want to avoid this behaviour.
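A possible refinement, my own sketch rather than part of the original answer: instead of a lock file, read the user's current shell first and only run the expect task when it is not fish yet, so repeated runs stay idempotent without the lock going stale. The getent lookup and the registered variable name below are illustrative assumptions:
- name: read the current login shell of the connecting user
  command: getent passwd {{ ansible_user_id }}
  register: passwd_entry
  changed_when: false

- name: change the shell with chsh only when it is not fish yet
  expect:
    command: "chsh -s /usr/bin/fish"
    responses:
      (?i)password: "{{ my_password }}"
  when: passwd_entry.stdout.strip().split(':')[6] != '/usr/bin/fish'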
Here is how I do it:
- name: Set login shell of user {{ ansible_env.USER }} to `/bin/zsh` with `usermod`
  ansible.builtin.command: usermod --shell /bin/zsh {{ ansible_env.USER }}
  become: true
  changed_when: false
On Ubuntu 16, add this as the first line of ~/.bashrc:
/usr/bin/fish && exit
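A hedged aside that is not part of the original answer: on some setups ~/.bashrc is also sourced for non-interactive bash invocations (which is why the stock Debian/Ubuntu .bashrc starts with an interactivity guard), and an unconditional line like the one above can then break scp or remote commands. A slightly safer variant of the same trick guards on interactivity and uses exec so there is no leftover bash process to exit from:
# Only hand over to fish for interactive sessions; exec replaces bash entirely.
if [[ $- == *i* ]] && command -v fish >/dev/null 2>&1; then
    exec fish
fi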

cron job not working properly giving error as "syntax error near unexpected token `)'"

I'm creating a cron job that takes a backup of my entire DB. For that I used the following code:
*/5 * * * * mysqldump -u mydbuser -p mypassword mydatabase | gzip > /home/myzone/public_html/test.com/newfolder/dbBackup/backup.sql.gz
But instead of getting a backup I'm getting the error "syntax error near unexpected token `)'". My password includes a round bracket; is this happening because of that? Please help me.
Thanks in advance.
) is a special character for the shell (and crontab uses the shell to execute commands).
Add single quotes around your password:
*/5 * * * * mysqldump -u mydbuser -p 'mypassword' mydatabase | ...
Try removing the spaces between -u and mydbuser, and between -p and mypassword:
-umydbuser -pmypassword
As I suggested in my comment, move this into an external script and include the script in cron.daily. I've given the basic skeleton for such a script below. This way you gain a couple of advantages: you can test the script, you can easily reuse it, and it's configurable. I don't know whether you do this for administration or personal use; my suggestion leans towards "I do it for administration" :)...
#!/bin/bash
# Backup destination directory
DIR_BACKUP=/your/backup/directory
# Timestamp format for filenames
TIMESTAMP=`date +%Y%m%d-%H%M%S`
# Database name
DB_NAME=your_database_name
# Database user
DB_USER=your_database_user
# Database password
DB_PASSWD=your_database_password
# Database export file name
DB_EXPORT=your_database_export_filename.sql
# Backup file path
BKFILE=$DIR_BACKUP/your-backup-archive-name-$TIMESTAMP.tar
# Format for time recordings
TIME="%E"
###########################################
# Create the parent backup directory if it does not exist
if [ ! -e $DIR_BACKUP ]
then
    echo "=== Backup directory not found, creating it ==="
    mkdir $DIR_BACKUP
fi
# Create the backup tar file
echo "=== Creating the backup archive ==="
touch $BKFILE
# Export the database
echo "=== Exporting YOUR DATABASE NAME database ==="
time bash -c "mysqldump --user $DB_USER --password=$DB_PASSWD $DB_NAME > $DIR_BACKUP/$DB_EXPORT"
# Add the database export to the tar file, remove after adding
echo "=== Adding the database export to the archive ==="
time tar rvf $BKFILE $DIR_BACKUP/$DB_EXPORT --remove-files
# Compress the tar file
echo "=== Compressing the archive ==="
time gzip $BKFILE
# All done
DATE=`date`
echo "=== $DATE: Backup complete ==="
