I create starttime_file and endtime_file every hour, so the interval between them is one hour.
The purpose of this is to check which files were created during that hour.
The problem is that I sometimes get a "No such file or directory" error, even though the file exists.
Could you let me know what might cause this?
starttime_file
endtime_file
find ~/ -type f -newer starttime_file -a ! -newer endtime_file > listfile.txt
When I ran this hourly from crontab, I received a failure message saying "no such file or directory" for a file that was created at 16:15 on Oct 30th.
test200.txt was created at 16:15, so as far as I can tell I should have received the error message only once.
What could be the reason for this?
I really don't know what to try.
I suspected a permissions issue, but when I searched for this file as a normal user, I was able to find it as well.
Both of the following commands work fine as a normal user:
ll -rtl | grep -i "test200"
find ~/ -name "test200.txt"
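For context, the hourly check can be expressed as one self-contained script using absolute paths throughout, since cron jobs start with a minimal environment and may resolve relative paths differently than an interactive shell. This is a minimal sketch with assumed /home/user paths and an assumed touch-based handling of the marker files, not the poster's actual code:
#!/bin/bash
# Hypothetical hourly job: list files that appeared between the previous
# marker and now, then advance the marker for the next run.
# Assumes /home/user/starttime_file was created once by hand beforehand.
watch_dir=/home/user/data          # directory being monitored (assumed)
start=/home/user/starttime_file
end=/home/user/endtime_file
touch "$end"
find "$watch_dir" -type f -newer "$start" ! -newer "$end" > /home/user/listfile.txt
mv "$end" "$start"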
The script in the file modBackup.sh does not run to completion when started by cron: the resulting tar.gz file is corrupted and many times smaller than the one produced when I run the script manually. It still contains some content, but it cannot be opened normally; the archive is damaged.
file modBackup.sh:
#!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
The cron-started run seems to get interrupted and never finishes.
When I run the script manually, it creates a valid archive named [current date]-modified.tar.gz.
Here is the crontab -e:
00 18 * * 1-5 /home/myScripts/modBackup.sh
Edit:
There is no information in the logs except that crond has started:
nothing in the mail log, nor in the cron log, nor in messages.
(I use a very old CentOS :( but I don't think that is the reason for the error.)
For testing only: I added %H%M to the file name in the script and did the following:
I ran it manually: sh /home/myScripts/modBackup.sh
and set the same command with crontab -e to run two minutes later.
After a few minutes, two files appeared and grew at the same time, but then the one created by the cron job stopped growing.
I use the same GUI tool (Archive Manager) to open the archive in both cases.
The file created by starting the script manually opens fine, but the one from the cron job cannot be opened, even after I changed the extension; the error is "unexpected EOF in archive".
One suggestion is to include the user's environment context ($PATH and other environment variables critical for the application) in the script:
modBackup.sh:
#!/bin/sh
. ~/.profile  # '.' is the portable /bin/sh form of 'source'
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
I found that in the cron environment the find command misinterprets filenames containing certain characters, even after explicitly changing the encoding by adding export LANG=en_US.UTF-8; LC_CTYPE=... at the beginning of the script. Many other combinations and attempts brought no success.
That's why I gave up on the find command and instead use tar's own option for archiving recently modified files. This works perfectly now:
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/`date +%F-%H%M`-share.modified.tar.gz /home/share/
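For completeness, the cron side can also set an explicit PATH in the crontab itself, which is sometimes enough on its own; the PATH value below is an assumption, not from the original post:
# at the top of crontab -e (vixie-cron supports variable assignments)
PATH=/usr/bin:/bin
00 18 * * 1-5 /home/myScripts/modBackup.sh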
I am having an issue storing my server backup on a storage VPS. My server is not deleting old backup folders, so the storage fills up and the backup fails midway. My backup runs once every week.
Can anyone help me create a cron job script that deletes folders older than 7 days and runs one day before the backup?
Any help appreciated.
For example, a crontab entry that deletes files older than 7 days under /path/to/backup/ every day at 4:02 AM looks like this:
02 4 * * * find /path/to/backup/* -mtime +7 -exec rm {} \;
Before executing rm, please make sure the targets are the files you intend to delete. You can check the targets by specifying -ls as the argument of find instead:
find /path/to/backup/* -mtime +7 -ls
-mtime matches on the last-modification timestamp, so depending on how the backup is written, the results of find may not be the files you expect.
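Since the question is about backup folders rather than individual files, a variant that removes whole directories may be closer to the goal. This is a sketch under the assumption that each backup sits in its own top-level directory beneath /path/to/backup/ (verify with the -ls form above before switching to rm -rf):
02 4 * * * find /path/to/backup/ -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +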
I need to create a bash script that runs another command if none of the files in a directory have been created within the last 30 minutes.
I am not sure of the exact code I need, but it should find and execute if nothing matches, something like:
find /folder/to/watch/* if-not 30 mins -exec fixscript.sh or something?
When the script is run, I want it to check the files in the folder; if no file has been created within the last 30 minutes, it should run fixscript.sh.
Thanks in advance
I don't think this can be combined into a single find statement. The following would work, with the caveat that a file that was merely modified would also be detected as "newer than 30 minutes".
if [ "$(find /folder/to/watch -mmin -30 | wc -l)" -eq 0 ]; then
    /path/to/fixscript.sh
fi
Linux/Unix does not have an independent file-creation attribute. Some filesystems might store one, but it can't be accessed from the shell without C code calling stat(). This approach uses the "file modified" timestamp, which changes not only on file creation but also on every edit.
Hannu
In Linux you don't have metadata for the creation time, only for the last modification time, the last file status change and the last access time.
You can do:
if [ -n "$(find "your_file" -mmin +30)" ]
then
    path/to/your/script.sh
fi
The -mmin option tests against the last modification time.
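If the check needs to run unattended, either version can be saved as a script and scheduled from cron. A minimal sketch, assuming a hypothetical script path /usr/local/bin/check_and_fix.sh containing one of the if-blocks above:
# crontab entry: run the watcher every 30 minutes (path is an assumption)
*/30 * * * * /usr/local/bin/check_and_fix.sh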
I'm relatively new to coding on Linux.
I have the below script for moving my ERP log file.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
_now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa_$now.log
The code runs, but it does not rename the log file with the date it was moved.
I would also like it to check when the file exceeds 90M in size and move it automatically at the end of every day, via a cron job of some kind.
Help please.
After editing, this is my new code.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa$now.log
I wish to add code that checks whether the hansa.log file is over 90M and moves it only then; otherwise it should be left as it is.
cd /u
find . -name '*hansa.log*' -size +90000k -exec mv '{}' /u/HansaLogs \;
In addition to the other comments, there are a few other things to consider. tgo's logrotate suggestion is a good one. In Linux, if you are ever stuck on the use of a utility, the man pages (while a bit cryptic at first) provide concise usage information. To see the man pages available for a given utility, use man -k name (some distributions provide this lookup capability by a default alias), e.g.:
$ man -k logrotate
logrotate (8) - rotates, compresses, and mails system logs
logrotate.conf (5) - rotates, compresses, and mails system logs
Then if you want the logrotate page:
$ man 8 logrotate
or the conf page
$ man 5 logrotate.conf
There are several things you may want to change or consider regarding your script. First, while there is nothing wrong with a variable named now, you may run into confusion with the date command's builtin use of now. There is no conflict, but it would look strange to write now=$(date -d "now + 24 hours" "+%F %T"). (I recommend a name like tstamp, short for timestamp, instead.)
For maintainability, readability, etc., you may consider assigning your path components to variables that will help with readability later on (example below).
Finally, before moving, copying, deleting, etc... it is always a good idea to validate that the target file exists and to provide an error message if something is out of whack. A rewrite could be:
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
if [ -f "$logname" ]; then
mv "$logname" "$logdir/hansa_${tstamp}.log"
else
printf "error: file not found '%s'.\n" "$logname" >&2
exit 1
fi
Note: the >&2 simply redirects the output of printf to stderr rather than stdout.
As for the find command, there is no need to cd and find ., since find takes the path as its first argument. Additionally, the -size option has builtin support for megabytes (M). A rewrite here could look like:
find /u -name "*hansa.log*" -size +90M -exec mv '{}' /u/HansaLogs \;
All in all, it looks like you will pick up shell programming without any problem. Just develop good habits early, they will save you a lot of grief later.
Hi guys, thanks for the help. So far I have come up with this code. I am stuck at creating a cron job to run it periodically, say every 22 hours.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to Check if log file exists before moving:
tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
minimumsize=90000
actualsize=$(wc -c <"$logname")
if [ $actualsize -ge $minimumsize ]; then
mv "$logname" "$logdir/hansa_${tstamp}.log"
else
echo size is under $minimumsize bytes
exit 1
fi
I'm looking for a way to batch rename almost 1,000 log files created by an Eggdrop bot. A few years ago I had to set up my bot from scratch and neglected to set the log format properly, so all of these files now have the format:
channelname.log.%d%b%Y (channelname.log.14Jan2014)
I want to rename all those files to match all my old log files, which are in the format of:
channelname.log.%Y%m%d (channelname.log.20140101)
I've already made the change in my eggdrop.conf file, but I would like to rename all the newer log files to match the format of the old ones.
This is on a Linux shell, so some sort of bash command would be ideal. Thanks!
find . -type f -name '*.log.*[^0-9-]*' -print0 | while IFS= read -r -d '' logfile; do
    mv "$logfile" "${logfile%.log.*}.log.$(date -d "${logfile#*.log.}" +%Y%m%d)"
done
Assuming it's in a locale date knows how to handle.
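With nearly 1,000 files involved, it may be worth previewing the renames before running them for real. A common precaution (not part of the original answer) is a dry run with mv replaced by echo:
find . -type f -name '*.log.*[^0-9-]*' -print0 | while IFS= read -r -d '' logfile; do
    echo mv "$logfile" "${logfile%.log.*}.log.$(date -d "${logfile#*.log.}" +%Y%m%d)"
done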