Check previous step in shell script run through cron - Linux

--- file makebackup.sh
#!/bin/bash
DATE=$(date)
mysqldump --all-databases | gzip -9 > /backup/temp_db.gz
tar -Pcf /backup/temp_ftp.tar /public_html/
tar -Pcf /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
sleep 60 && /backup/upload.sh $DATE
--- file upload.sh
#!/usr/bin/expect -f
# connect via scp
spawn scp /backup/temp_backup.tar root@mybackup.com:/home/backup_$argv.tar
#######################
expect {
-re ".*es.*o.*" {
exp_send "yes\r"
exp_continue
}
-re ".*sword.*" {
exp_send "mypassword\r"
}
}
interact
Why does this not work? I don't want to use sleep; I need to know when the last tar is finished and only then execute upload.sh. Instead, it always executes as soon as the last tar starts.
&& does not do anything, even if I remove sleep 60.

As you say 'Instead it always executes as soon as last tar file starts', normally that means there is an '&' at the end of the line. Or are you sure the tar is really working? Are you looking at an old tar.gz that was created early on? Make sure it is a new tar file of the correct size.
Edit I'm not saying you have to delete files, just double-check that what is being put into the final tar makes sense.
Are you checking the sizes of the input files to your final tar command (/home/temp_db.gz and /backup/temp_ftp.tar)? Edit By this I mean that an uncompressed tar file (temp_ftp.tar) should be just slightly larger than the sum of the sizes of all the files it contains. If you know that you have 1 MB of files that compose temp_ftp.tar and the file is 1.1 MB, that is good; if it is 0.5 MB, that is bad. (Also consider gzipping the whole thing to reduce transmission time to your remote host.) Your compressed db file is harder to judge; presumably it is working, but if the file size is something like 25 bytes, that indicates an error in creating the file.
Otherwise what you are saying really seems impossible. It is one of these things, or something else is bollixing things up.
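One quick way to double-check that, using the paths from the question: list the final archive with sizes and compare against the inputs on disk (tar -t only lists, it does not extract).
tar -tvf /backup/temp_backup.tar
ls -l /home/temp_db.gz /backup/temp_ftp.tar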
One way to debug how long the last tar is taking is to wrap the command in two date commands, i.e.
date
tar -Pcf /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
rc=$?
date
printf "return code from quick tar was ${rc}\n"
Also, per your title, 'check previous step', I've added capturing the return code from tar and printing the value.
Again, to reinforce what you know already, in a linux shell script, there is no way (excepting a background job with the '&' char) for one command to start executing before the previous one has completed.
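To bring this back to the original goal of running upload.sh only after the final tar has finished successfully, here is a minimal sketch of makebackup.sh with explicit failure handling (the date format passed to upload.sh is an assumption, and set -e / pipefail is my own addition, not something from the question):
#!/bin/bash
set -e -o pipefail                       # stop the script if any step (including mysqldump) fails
DATE=$(date +%Y%m%d)                     # assumed format for the remote file name
mysqldump --all-databases | gzip -9 > /backup/temp_db.gz
tar -Pcf /backup/temp_ftp.tar /public_html/
# note: the question's final tar pulls in /home/temp_db.gz, but the dump above writes /backup/temp_db.gz
tar -Pcf /backup/temp_backup.tar /backup/temp_db.gz /backup/temp_ftp.tar
# we only reach this line if every command above exited 0, so no sleep is needed
/backup/upload.sh "$DATE"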
EDIT Another thing that might be screwing things up is ownership and permissions on your files. Use
ls -l /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
to confirm that your userID owns the files and that you can write to them. If you want to, edit your posting to include that information.
Also, your headline says 'cron': are you capturing all of the possible output of this script to help debug the situation? Turn on shell debugging with set -vx near the top of makebackup.sh, and add debugging output to your tar command with '-v'.
Capture the cron output of your whole process like
59 23 31 12 * { /path/to/makebackup.sh 2>&1 ; } > /tmp/makebackup.`/bin/date +\%Y\%m\%d.\%H\%M\%S`.trace_log
And be sure you don't find any error messages.
(Crontab sample: min hr day month day-of-week (0-6 or *); change the date/time to meet your testing needs.)
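For reference, the debugging additions suggested above would sit roughly like this inside makebackup.sh (just a sketch of where the flags go, not the full script):
#!/bin/bash
set -vx                                         # print each line as read (-v) and as executed with expansions (-x)
...
tar -Pcvf /backup/temp_ftp.tar /public_html/    # -v makes tar list every file it archives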
Your expect script uses '\r'; don't you want '\n' in the Unix/Linux environment? If you're on a Windows-based server, then you want '\r\n'.
Edit Does the expect script work? Have you proved to your satisfaction that the files are being copied, that they are the same size on the backup site, and that the date changes?
If you expect backups to save your systems someday, you have to develop a better understanding of how the whole process should work and whether it is working as expected. Depending on your situation and the availability of alternate computers, you should schedule a test of restoring your backups to see if they will really work. As you're using -P to preserve full-path info, you'll really need to be careful not to overwrite your working system with old files.
To summarize my advice: double-check everything.
I hope this helps.

Related

Shell script does not run completely when run by cron

The script in the file modBackup.sh does not run completely when started by cron; the result is a corrupted tar.gz file that is half the size of the one I get when I run it manually. In any case, its size is many times smaller than the manually created one, and although it contains some content, the archive cannot be opened normally; it is damaged.
file modBackup.sh:
#!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
The automatic run seems to get interrupted and does not finish.
When I run it manually, the script creates a proper archive named [current date]-modified.tar.gz.
Here is the crontab -e:
00 18 * * 1-5 /home/myScripts/modBackup.sh
Edit:
There is no information in the logs except that crond has started:
nothing in the mail log, nor in the cron log, nor in messages
(I use a very old CentOS :( but I don't think this is the reason for the error).
For testing only, I added %H%M to the file name in the script and did the following:
I ran it manually: sh /home/myScripts/modBackup.sh
and set crontab -e to run the same command two minutes later.
After a few minutes, two files appeared and grew at the same time, but then the one created by the cron job stopped growing (two files).
I use the same GUI tool (Archive Manager) to try to open them in both cases.
The file created by starting the script manually opens, but the one from the cron job cannot be opened, even after I changed the extension; the error is 'unexpected EOF in archive'.
One suggestion is to include the user's environment context ($PATH and other critical environment variables the application needs) by sourcing the profile in the script:
modBackup.sh:
#!/bin/sh
source ~/.profile
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
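To see exactly which environment cron actually provides (it is usually much smaller than an interactive shell's), one common trick is to dump it from a temporary cron entry and compare it with the output of env in your login shell; the output path below is only an example:
* * * * * env > /tmp/cron-env.txt 2>&1
Differences in PATH, LANG or LC_* between the two outputs are the usual culprits when a script works manually but misbehaves under cron.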
I found that in the cron environment the find command misinterprets filenames containing certain characters, even after explicitly changing the encoding by adding "export LANG=en_US.UTF-8; LC_CTYPE=..." at the beginning of the script. With many other combinations and attempts I had no success.
That's why I dropped the find command and instead use tar with an option to archive only recently modified files. This works perfectly now:
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/`date +%F-%H%M`-share.modified.tar.gz /home/share/
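Put together, the final modBackup.sh presumably looks something like the sketch below (paths, the 15-hour window and the crontab line are the ones already shown in this thread):
#!/bin/sh
# archive everything under /home/share/ modified in roughly the last 15 hours
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/`date +%F-%H%M`-share.modified.tar.gz /home/share/
with the unchanged crontab entry:
00 18 * * 1-5 /home/myScripts/modBackup.sh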

mysqldump problem in Crontab and bash file

I have created a crontab entry to back up my DB every 30 minutes...
*/30 * * * * bash /opt/mysqlbackup.sh > /dev/null 2>&1
The crontab works well: every 30 minutes I get my backup with the script below.
#!/bin/sh
find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
mysqldump --single-transaction --skip-lock-tables --user=myuser --password=mypass mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
But my problem is that the rm part that should delete old data isn't working; nothing is ever deleted. Do you know why?
And also, the name of my backup is 2020-02-02-12.12_mydb.sql.gz?
I always have a ? at the end of my file name. Do you know why?
Thank you for your help.
The question mark typically indicates a character that can't be displayed; the fact that it's at the end of a line makes me think that your script has Windows line endings rather than Unix. You can fix that with the dos2unix command:
dos2unix /path/to/script.sh
It's also good practice not to throw around MySQL passwords on the CLI or store them in executable scripts. You can accomplish this by using MySQL Option files, specifically the file that defines user-level options (~/.my.cnf).
This would require us to figure out which user is executing that cronjob, however. My assumption is that you did not make that definition inside the system-level crontab; if you had, you'd actually be trying to execute /opt/mysqlbackup.sh > /dev/null 2>&1 as the user bash. This user most likely doesn't (and shouldn't) exist, so cron would fail to execute the script entirely.
As this is not the case (you say it's executing the mysqldump just fine), this makes me believe you have the definition in a user-level crontab instead. Once we figure out which user that actually is as I asked for in my comment, we can identify the file permissions issue as well as create the aforementioned MySQL options file.
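For reference, a minimal ~/.my.cnf for whichever user runs the cron job might look like the sketch below (the user and password are the placeholders from the question; the file should be readable only by its owner, e.g. chmod 600 ~/.my.cnf):
[client]
user=myuser
password=mypass
With that in place, the mysqldump call in the script can drop the --user and --password options, since MySQL client programs read the [client] section automatically.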
Using find with mtime is not the best choice. If for some reason mysqldump stops creating backups, then in two days all backups will be deleted.
You can use my Python script "rotate-archives" for smart deletion of backups (https://gitlab.com/k11a/rotate-archives). The script adds the current date at the beginning of the file or directory name, like 2020-12-31_filename.ext, and subsequently uses this date to decide on deletion.
Running the script for your case:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=0-0-48 archives_dir=/mnt/archives
In this case, 48 new archives will always be saved. Old archives in excess of this number will be deleted.
An example of a more flexible deletion scheme:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=7-5,31-14,365-180-5 archives_dir=/mnt/archives
As a result, archives from 7 to 30 days old are kept with an interval of 5 days between them, archives from 31 to 364 days old with an interval of 14 days, and archives 365 days and older with an interval of 180 days and a maximum count of 5.
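If you would rather stay with plain shell instead of an external tool, one way to address the same concern is to create the new dump first and prune old files only when it succeeded; this sketch reuses the commands from the question and assumes the credentials come from ~/.my.cnf as suggested above:
#!/bin/bash
set -o pipefail            # make the pipeline fail when mysqldump fails, not only gzip
if mysqldump --single-transaction --skip-lock-tables mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
then
    # only prune old backups once a fresh one has been written successfully
    find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
fi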

Bash script for moving and renaming application log files on Linux

I'm relatively new to coding on linux.
I have the below script for moving my ERP log file.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
_now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa_$now.log
The code runs but does not rename the log file with the date when it is moved.
I would also like to check when the file exceeds 90M in size so that it is moved automatically at the end of every day, via a cron job of some kind.
Help please.
After editing, this is my new code:
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa$now.log
I wish to add code to check whether the hansa.log file is over 90M and move it only then; if it is not, leave it as it is.
cd /u
find . -name '*hansa.log*' -size +90000k -exec mv '{}' /u/HansaLogs \;
In addition to the other comments, there are a few other things to consider. tgo's logrotate suggestion is a good one. In Linux, if you are ever stuck on the use of a utility, the man pages (while a bit cryptic at first) provide concise usage information. To see the man pages available for a given utility, use man -k name (some distributions provide this search via a default alias), e.g.:
$ man -k logrotate
logrotate (8) - rotates, compresses, and mails system logs
logrotate.conf (5) - rotates, compresses, and mails system logs
Then if you want the logrotate page:
$ man 8 logrotate
or the conf page
$ man 5 logrotate.conf
There are several things you may want to change or consider regarding your script. First, while there is nothing wrong with a variable named now, you may run into confusion with the date command's builtin use of now. There is no conflict, but it would look strange to write now=$(date -d "now + 24 hours" "+%F %T"). (I recommend a name like tstamp, short for timestamp, instead.)
For maintainability, readability, etc., you may consider assigning your path components to variables; that will help with readability later on (example below).
Finally, before moving, copying, deleting, etc... it is always a good idea to validate that the target file exists and to provide an error message if something is out of whack. A rewrite could be:
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
if [ -f "$logname" ]; then
mv "$logname" "$logdir/hansa_${tstamp}.log"
else
printf "error: file not found '%s'.\n" "$logname" >&2
exit 1
fi
Note: the >&2 simply redirects the output of printf to stderr rather than stdout.
As for the find command, there is no need to cd and find .; the find command takes the path as its first argument. Additionally, the -size option has built-in support for megabytes (M). A rewrite here could look like:
find /u -name "*hansa.log*" -size +90M -exec mv '{}' /u/HansaLogs \;
All in all, it looks like you will pick up shell programming without any problem. Just develop good habits early, they will save you a lot of grief later.
Hi guys, thanks for the help. So far I have come up with this code. I am stuck at creating a cron job to run this periodically, say every 22 hours.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to Check if log file exists before moving:
tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
minimumsize=90000
actualsize=$(wc -c <"$logname")
if [ $actualsize -ge $minimumsize ]; then
mv "$logname" "$logdir/hansa_${tstamp}.log"
else
echo size is under $minimumsize bytes
exit 1
fi
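On the cron part the OP is stuck on: classic cron cannot express "every 22 hours" directly, so the usual compromise is to run the size check once a day (or several times a day, since the script only moves the file when it is over the threshold). A sketch, assuming the script above is saved as /u/HansaLogs/rotate_hansa_log.sh and made executable (the path is a placeholder):
# minute hour day-of-month month day-of-week  command
0 23 * * * /u/HansaLogs/rotate_hansa_log.sh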

Prevent files to be moved by another process in linux

I have a problem with a bash script.
I have two cron tasks, which gets some number of files from same folder for further processing.
ls -1h "targdir/*.json" | head -n ${LIMIT} > ${TMP_LIST_FILE}
while read REMOTE_FILE
do
mv $REMOTE_FILE $SCRDRL
done < "${TMP_LIST_FILE}"
rm -f "${TMP_LIST_FILE}"
But when two instances of the script run simultaneously, the same file ends up being moved to $SCRDRL, which is different for each instance.
The question is: how do I prevent files from being moved by the other instance of the script?
UPD:
Maybe I was a little unclear...
I have a folder "targdir" where I store json files, and I have two cron tasks which take some files from that directory to process. For example, if 25 files exist in targdir, the first cron task should get the first 10 files and move them to /tmp/task1, the second cron task should get the next 10 files and move them to /tmp/task2, etc.
But right now the first 10 files are moved to both /tmp/task1 and /tmp/task2.
First and foremost: rename is atomic. It is not possible for a file to be moved twice. One of the moves will fail, because the file is no longer there. If the scripts run in parallel, they both list the same 10 files, and instead of the first 10 files being moved to /tmp/task1 and the next 10 to /tmp/task2, you may get 4 moved to /tmp/task1 and 6 to /tmp/task2. Or maybe 5 and 5, or 9 and 1, or any other combination. But each file will only end up in one task.
So nothing is incorrect; each file is still processed only once. But it will be inefficient, because you could process 10 files at a time, yet you are only processing 5. If you want to make sure you always process 10 when there are enough files available, you will have to do some synchronization. There are basically two options:
Place a lock around the list+copy. This is most easily done using flock(1) and a lock file. There are two ways to call that, too:
Call the whole copying operation via flock:
flock targdir -c copy-script
This requires that you make the part that should be excluded a separate script.
Lock via file descriptor. Before the copying, do
exec 3>targdir/.lock
flock 3
and after it do
flock -u 3
This lets you lock over part of the script only. This does not work in Cygwin (but you probably don't need that).
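Put together, the file-descriptor variant wrapped around the list+move from the question might look like this sketch (the variable names are the ones from the question; flock(1) must be installed):
exec 3>targdir/.lock                              # open (or create) the lock file on descriptor 3
flock 3                                           # block until this instance holds the exclusive lock
ls -1h targdir/*.json | head -n ${LIMIT} > ${TMP_LIST_FILE}
while read REMOTE_FILE
do
    mv "$REMOTE_FILE" "$SCRDRL"
done < "${TMP_LIST_FILE}"
rm -f "${TMP_LIST_FILE}"
flock -u 3                                        # release the lock so the other instance can proceed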
Move the files one by one until you have enough.
ls -1h targdir/*.json > ${TMP_LIST_FILE}
# ^^^ do NOT limit here
COUNT=0
while read REMOTE_FILE
do
if mv $REMOTE_FILE $SCRDRL 2>/dev/null; then
COUNT=$(($COUNT + 1))
fi
if [ "$COUNT" -ge "$LIMIT" ]; then
break
fi
done < "${TMP_LIST_FILE}"
rm -f "${TMP_LIST_FILE}"
The mv will sometimes fail, in which case you don't count the file and try to move the next one, assuming the mv failed because the file was meanwhile moved by the other script. Each script copies at most $LIMIT files, but it may be a rather random selection.
On a side-note if you don't absolutely need to set environment variables in the while loop, you can do without a temporary file. Simply:
ls -1h targdir/*.json | while read REMOTE_FILE
do
...
done
You can't propagate variables out of such a loop, because as part of a pipeline it runs in a subshell.
If you do need to set environment variables and can live with using bash specifically (I usually try to stick to /bin/sh), you can also write
while read REMOTE_FILE
do
...
done < <(ls -1h targdir/*.json)
In this case the loop runs in the current shell, but this kind of redirection (process substitution) is a bash extension.
The fact that two cron jobs move the same file to the same path should not matter for you unless you are disturbed by the error you get from one of them (one will succeed and the other will fail).
You can ignore the error by using:
...
mv $REMOTE_FILE $SCRDRL 2>/dev/null
...
Since your script is supposed to move a specific number of files from the list, two instances will at best move twice as many files; if they interfere with each other, the number of moved files might be less.
In any case, this is probably a bad situation to begin with. If you have any way of preventing two scripts running at the same time, you should do that.
If, however, you have no way of preventing two script instances from running at the same time, you should at least harden the scripts against errors:
mv "$REMOTE_FILE" "$SCRDRL" 2>/dev/null
Otherwise your scripts will produce error output (not a good idea in a cron script).
Further, I hope that your ${TMP_LIST_FILE} is not the same in both instances (you could use $$ in it to avoid that); otherwise they'd even overwrite this temp file, in the worst case resulting in a corrupted file containing paths you do not want to move.
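A small sketch of the two usual ways to make that temp file unique per instance ($$ expands to the shell's PID; mktemp is the more robust option where available):
TMP_LIST_FILE=/tmp/task-list.$$                 # unique per process via the PID
# or, more robustly:
TMP_LIST_FILE=$(mktemp /tmp/task-list.XXXXXX)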

Modifying files nested in tar archive

I am trying to do a grep and then a sed to search for specific strings inside files, which are inside multiple tars, all inside one master tar archive. Right now, I modify the files by
First extracting the master tar archive.
Then extracting all the tars inside it.
Then doing a recursive grep and then sed to replace a specific string in files.
Finally packaging everything again into tar archives, and all the archives inside the master archive.
Pretty tedious. How do I do this automatically using shell scripting?
There isn't going to be much option except automating the steps you outline, for the reasons demonstrated by the caveats in the answer by Kimvais.
tar modify operations
The tar command has some options to modify existing tar files. They are, however, not appropriate for your scenario for multiple reasons, one of them being that it is the nested tarballs that need editing rather than the master tarball. So, you will have to do the work longhand.
Assumptions
Are all the archives in the master archive extracted into the current directory or into a named/created sub-directory? That is, when you run tar -tf master.tar.gz, do you see:
subdir-1.23/tarball1.tar
subdir-1.23/tarball2.tar
...
or do you see:
tarball1.tar
tarball2.tar
(Note that nested tars should not themselves be gzipped if they are to be embedded in a bigger compressed tarball.)
master_repackager
Assuming you have the subdirectory notation, then you can do:
for master in "$@"
do
tmp=$(pwd)/xyz.$$
trap "rm -fr $tmp; exit 1" 0 1 2 3 13 15
cat $master |
(
mkdir $tmp
cd $tmp
tar -xf -
cd * # There is only one directory in the newly created one!
process_tarballs *
cd ..
tar -czf - * # There is only one directory down here
) > new.$master
rm -fr $tmp
trap 0
done
If you're working in a malicious environment, use something other than tmp.$$ for the directory name. However, this sort of repackaging is usually not done in a malicious environment, and the chosen name based on process ID is sufficient to give everything a unique name. The use of tar -f - for input and output allows you to switch directories but still handle relative pathnames on the command line. There are likely other ways to handle that if you want. I also used cat to feed the input to the sub-shell so that the top-to-bottom flow is clear; technically, I could improve things by using ) > new.$master < $master at the end, but that hides some crucial information multiple lines later.
The trap commands make sure that (a) if the script is interrupted (signals HUP, INT, QUIT, PIPE or TERM), the temporary directory is removed and the exit status is 1 (not success) and (b) once the subdirectory is removed, the process can exit with a zero status.
You might need to check whether new.$master exists before overwriting it. You might need to check that the extract operation actually extracted stuff. You might need to check whether the sub-tarball processing actually worked. If the master tarball extracts into multiple sub-directories, you need to convert the 'cd *' line into some loop that iterates over the sub-directories it creates.
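That loop might look roughly like this (a sketch; it assumes process_tarballs is reachable via PATH and each created sub-directory is self-contained):
for dir in */                                   # iterate over every directory the master tar created
do
    (cd "$dir" && process_tarballs *)
done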
All these issues can be skipped if you know enough about the contents and nothing goes wrong.
process_tarballs
The second script is process_tarballs; it processes each of the tarballs on its command line in turn, extracting the file, making the substitutions, repackaging the result, etc. One advantage of using two scripts is that you can test the tarball processing separately from the bigger task of dealing with a tarball containing multiple tarballs. Again, life will be much easier if each of the sub-tarballs extracts into its own sub-directory; if any of them extracts into the current directory, make sure you create a new sub-directory for it.
for tarball in "$@"
do
# Extract $tarball into sub-directory
tar -xf $tarball
# Locate appropriate sub-directory.
(
cd $subdirectory
find . -type f -print0 | xargs -0 sed -i 's/name/alternative-name/g'
)
mv $tarball old.$tarball
tar -cf $tarball $subdirectory
rm -f old.$tarball
done
You should add traps to clean up here, too, so the script can be run in isolation from the master script above and still not leave any intermediate directories around. In the context of the outer script, you might not need to be so careful to preserve the old tarball before the new one is created (so rm -f $tarball instead of the move and remove commands), but treated in its own right, the script should be careful not to damage anything.
Summary
What you're attempting is not trivial.
For debuggability, the job is split into two scripts that can be tested independently.
Handling the corner cases is much easier when you know what is really in the files.
You can probably sed the actual tar stream, since tar itself does not do any compression.
e.g.
zcat archive.tar.gz|sed -e 's/foo/bar/g'|gzip > archive2.tar.gz
However, beware that this will also replace foo with bar in filenames, usernames and group names, and it ONLY works if foo and bar are of equal length.
