I have the following script that works fine when called from the command line:
#!/bin/sh
/usr/bin/mysqldump -u root -ppassword redmine > /home/administrateur/backup/backup_$(date +%Y-%m-%d-%H.%M.%S).sql
find /home/administrateur/backup/* -mtime +15 -exec rm {} \;
rsync -e 'ssh -p 22' -avzp /home/administrateur/backup is-uber-1:/home/administrateur/backup
But this script omits the rsync line when called from cron.
Does anyone know why?
Basically you need to run your script as administrateur. You can use sudo for that:
/usr/bin/sudo -H -u administrateur -- /bin/sh /path/to/your/script.sh
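For example, a root crontab entry could look like this (the schedule here is a placeholder; adjust it to your backup window):

```
0 2 * * * /usr/bin/sudo -H -u administrateur -- /bin/sh /path/to/your/script.sh
```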
Related
I'm trying to run the following command from crontab, but for some reason a portion of the command gets cut off when I check /var/log/cron. However, it runs fine when I run it in the terminal.
Command in crontab:
*/30 * * * * user find /home/user/recordings -name '*.pcap,SDPerr' -exec sh -c 'mv "$0" "${0%.pcap,SDPerr}.pcap"' {} \;
From /var/log/cron:
Jan 10 11:00:01 server CROND[116349]: (user) CMD ( find /home/user/recordings -name '*.pcap,SDPerr' -exec sh -c 'mv "$0" "${0)
What am I missing here, any help would be appreciated.
Your command has a % (percent sign) in it, which has a special meaning in crontab: an unescaped % ends the command, and everything after it is passed to the command as standard input. Put a "\" before each "%" to escape it.
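With the percent sign escaped, the crontab line from the question becomes:

```
*/30 * * * * user find /home/user/recordings -name '*.pcap,SDPerr' -exec sh -c 'mv "$0" "${0\%.pcap,SDPerr}.pcap"' {} \;
```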
I have the bash script which makes pg_dumpall and uploads to my FTP server.
#!/bin/bash
timeslot=`date +\%Y-\%m-\%d-\%H:\%M:\%S`
backup_dir="/var/lib/backup"
name=recycle_backup_$timeslot.sql.gz
set -e
find /var/lib/backup/*.sql.gz -mtime +7 -type f -exec rm -rvf {} \; -exec curl -p --insecure ftp://aaa.bbb/backup/{} --user user:password -Q 'DELE backup/'{} \;
echo $timeslot "Running docker container to backup Postgresql"
sudo docker run --rm \
-t \
--name postgres-backup \
-v "/var/lib/backup":"/mnt" \
-w "/mnt" \
-e PGPASSWORD="password" \
--entrypoint=/usr/bin/pg_dumpall \
postgres:9 \
-h 1.2.3.4 -U postgres | gzip -9 > $backup_dir/$name
echo "Your backup is successfully stored in $backup_dir"
curl -T /var/lib/backup/$name ftp://aaa.bbb/backup/ --user user:password
It cleans up all old .sql.gz files (older than 7 days).
After these steps, the script starts a Docker container with Postgres, runs pg_dumpall to make the backup, and saves it locally to /var/lib/backup/filename.sql.gz.
The problem is I can't clean up files on my FTP server with arguments that "find" returns.
find /var/lib/backup/*.sql.gz -mtime +7 -type f -exec rm -rvf {} \; -exec curl -p --insecure ftp://aaa.bbb/backup/{} --user user:password -Q 'DELE backup/'{} \;
How to add the argument {} from find command to this curl request? -Q 'DELE BACKUP/'{} \;
Thanks for the support.
Solved the problem with a second script, which connects to the FTP server and deletes outdated files in a loop, based on this topic: Linux shell script for delete old files from ftp
That's not going to work: find only guarantees that {} is substituted when it appears as a standalone argument, and even where it is substituted it expands to the full path (e.g. /var/lib/backup/foo.sql.gz), not the bare filename your DELE command needs.
Maybe try this approach? Remove the echo if this looks sane ...
find /var/lib/backup/*.sql.gz -mtime +7 -type f | while read -r path
do
    file=$(basename "$path")
    rm -vf "$path"
    echo curl -p --insecure "ftp://aaa.bbb/backup/$file" --user user:password -Q "DELE backup/$file"
done
So, I have a very simple cron set up to run daily. It does a find and rsync with certain parameters. When it runs on the bash command line, it runs just fine, but when in the root crontab, it doesn't want to know. Any ideas what is wrong here?
/usr/bin/find /var/www/*/logs/ -iname '*.lzma' -mtime +21 -exec rsync -a --ignore-existing --relative -e 'ssh -q -p 2230 -o "StrictHostKeyChecking no"' {} root@nas0:/space/Logs/reporting0/ \;
Syslog shows it ran:
Apr 28 09:40:01 reporting1 CRON[26347]: (root) CMD (/usr/bin/find /var/www/*/logs/ -iname '*.lzma' -mtime +21 -exec rsync -a --ignore-existing --relative -e 'ssh -q -p 2230 -o "StrictHostKeyChecking no"' {} root@nas0:/space/Logs/reporting1/ \;)
But nothing actually gets copied.
Cron always runs with a mostly empty environment: HOME, LOGNAME, and SHELL are set, plus a very limited PATH.
So you need to call each program by its full path, or set the environment variables yourself.
For example, on Ubuntu you would
replace rsync with /usr/bin/rsync
replace ssh with /usr/bin/ssh
You can check cron's environment by adding this entry to your crontab and inspecting /tmp/env.output:
* * * * * env > /tmp/env.output
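Alternatively, crontab lets you set variables at the top of the file, so a PATH close to your login shell's spares you from hard-coding every binary (the PATH value and script name below are placeholders; adjust them to your system):

```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 4 * * * /home/user/your-script.sh
```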
I'm new to linux/unix shell scripting, and I have a few dozen projects that I want to set up Subversion folders for (eventually I'll get to Git lol). How do I write a script to do the following:
Get a list of all sub-folders in a folder
For each sub-folder, use it execute the following commands:
svnadmin create /var/www/svn/<sub-folder>
svn import /var/www/<sub-folder> file:///var/www/svn/<sub-folder>
chmod -R 777 var/www/svn/<sub-folder>
chown -R apache.apache var/www/svn/<sub-folder>
From what I've seen on the internet so far, I suppose I put it all into a .sh file and type something like :
.sh thing.sh
... to execute it.
Any help appreciated.
for i in $(find . -mindepth 1 -maxdepth 1 -type d); do
svnadmin create "/var/www/svn/$i"
svn import "/var/www/$i" "file:///var/www/svn/$i"
chmod -R 777 "var/www/svn/$i"
chown -R apache.apache "var/www/svn/$i"
done
Of course your svn import command is incorrect, and the paths in your chmod and chown are missing a leading /. But it's a copy-paste of your commands, anyway.
Does this work for you?
#!/bin/bash
for FILE in *
do
if test -d "$FILE"
then
svnadmin create "/var/www/svn/$FILE"
svn import "/var/www/$FILE" "file:///var/www/svn/$FILE"
chmod -R 777 "/var/www/svn/$FILE"
chown -R apache.apache "/var/www/svn/$FILE"
fi
done
After saving execute chmod +x {name of file} on the script to make it executable with ./{name of file} or sh {name of file}.
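As an aside, looping over the glob `*/` is a parse-free way to pick up just the directories, since a trailing slash in a glob matches directories only. A minimal sketch, assuming a hypothetical throwaway tree under /tmp/svndemo:

```shell
# create a throwaway tree to demonstrate (hypothetical paths)
mkdir -p /tmp/svndemo/proj1 /tmp/svndemo/proj2
cd /tmp/svndemo
# a trailing slash in the glob matches directories only
for d in */; do
    echo "${d%/}"   # strip the trailing slash to get the bare name
done
```

This avoids the word-splitting pitfalls of iterating over `ls` output.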
In case you need all subfolders recursively from current folder:
#!/bin/bash
for FILE in $(find . -mindepth 1 -type d)
do
if test -d "$FILE"
then
svnadmin create "/var/www/svn/$FILE"
svn import "/var/www/$FILE" "file:///var/www/svn/$FILE"
chmod -R 777 "/var/www/svn/$FILE"
chown -R apache.apache "/var/www/svn/$FILE"
fi
done
If you have any questions please comment.
You could create a script doIt.sh with the following:
#!/bin/bash
svnadmin create "/var/www/svn/$1"
svn import "/var/www/$1" "file:///var/www/svn/$1"
chmod -R 777 "/var/www/svn/$1"
chown -R apache.apache "/var/www/svn/$1"
Then you can go into the folder in which you want to find all subfolders and execute the following:
find . -mindepth 1 -type d | xargs -I {} ./doIt.sh {}
Also, are you sure of this line:
svn import /var/www/<sub-folder> file:///var/www/svn/<sub-folder>
Did you not mean:
svn import /var/www/svn/<sub-folder> file:///var/www/svn/<sub-folder>
Note: Missing svn subfolder in path
If I run this command it works fine in the terminal:
for dirname in $(ls -d dir/checkpoint/features.txt/20*);do;echo "hello";done
But when run through /bin/sh -c it gives an error
/bin/sh -c "for dirname in $(ls -d dir/checkpoint/features.txt/20*);do;echo "hello";done"
ERROR:
/bin/sh: -c: line 1: syntax error near unexpected token `dir/checkpoint/features.txt/201108000'
/bin/sh: -c: line 1: `dir/checkpoint/features.txt/201108000'
My default shell is /bin/bash. I can't seem to understand what is causing this. My default approach for running all shell commands in my program is to prepend /bin/sh -c to them. This is the first time I am seeing this issue. Any suggestions?
Don't try to parse the output of ls, especially with a for construct. There are many, many ways that this can go wrong.
This is a good place to use find instead. Try this:
/bin/sh -c "find dir/checkpoint/features.txt -mindepth 1 -maxdepth 1 -type d -iname '20*' -exec echo \"hello\" \;"
Besides eliminating the error-prone use of ls, you avoid the sub-shell and all of the issues that it brings with it.
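For context, the underlying cause of the error is the double quotes: the outer shell expands $(ls ...) before /bin/sh ever starts, and the newlines and nested quotes in that expansion mangle the -c string. If you do want a for loop, single quotes hand it to the inner shell untouched. A sketch using a hypothetical /tmp tree in place of the original paths (note the stray ; after do is also dropped):

```shell
# hypothetical stand-in for dir/checkpoint/features.txt/20*
mkdir -p /tmp/ckptdemo/201108000 /tmp/ckptdemo/201109000
# single quotes: the glob and the inner double quotes survive to /bin/sh
/bin/sh -c 'for dirname in /tmp/ckptdemo/20*; do echo "hello"; done'
```

This prints one "hello" per matching directory, with no subshell expansion happening in the outer shell.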
Follow-up in response to your comment:
I'm assuming that you're using awk -F/ '{print $NF}' to grab the name of the folder in which the file lives (that is, the last directory name before the filename). The commands basename and dirname can be used to do this for you. This should make your script a bit easier. Place the following into a script file:
#!/bin/sh
folder=$(basename "$(dirname "$1")")
mkdir -p #{nfs_checkpoint}/${folder}
cat #{result_location}/${folder}/20* > #{nfs_checkpoint}/${folder}/features.txt
And execute it like this:
/bin/sh -c "find dir/checkpoint/features.txt -mindepth 1 -maxdepth 1 -type d -iname '20*' -exec yourscript.sh {} \;"
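For reference, the basename/dirname combination extracts the enclosing folder name like so (the path below is a made-up example):

```shell
path=/var/www/results/myfolder/20110801.dat   # hypothetical path
folder=$(basename "$(dirname "$path")")       # dirname -> .../myfolder, basename -> myfolder
echo "$folder"
```

Unlike counting fields with awk -F/, this works regardless of how deep the path is.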