Compressing logs every day - Linux

I'm struggling with compressing my logs. I have a simple script which runs every night:
find /directory/logs -type f -mmin +1440 -print -exec gzip {} \;
But sometimes it skips logs whose names don't end in *.log. For example, it doesn't compress logs named *.log.1, *.log.0.lck, etc.
Any ideas?

I suppose you're just not using the find command correctly.
-mmin +1440 - finds files modified more than 1440 minutes (24 hours) ago
-mmin -1440 - finds files modified within the last 1440 minutes (24 hours); a bare -mmin 1440 matches only files modified exactly 1440 minutes ago
You can use "-mtime n", from man:
File's data was last modified n*24 hours ago. See the comments for -atime to understand how rounding affects the interpretation of file modification times.
So for you:
find /directory/logs -type f -mtime 1 -print -exec gzip {} \;
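For reference, a minimal sketch of the time tests side by side (assuming GNU find and the same /directory/logs path as above):
find /directory/logs -type f -mmin +1440 -print   # modified more than 24 hours ago
find /directory/logs -type f -mmin -1440 -print   # modified within the last 24 hours
find /directory/logs -type f -mtime +0 -print     # the same idea expressed in days: older than 0 whole days, i.e. at least 24 hours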

Related

Fetch files from IFS directory file which are less than 120 minutes

I am using shell in an AS400. I need to find all files older than 120 minutes:
find . -type f -mmin 120
It fails with an error that -mmin is not valid. Then I tried -mtime, but since it works in days, I can't use a decimal to find files which are older than 120 minutes.
I've been unable to think of how to use the -newer option to get this done.
The -mmin option (and all the other time options) accepts three forms. It can be a plain number (e.g. 120) for exactly 120 minutes ago, +120 for more than 120 minutes ago, or -120 for less than 120 minutes ago.
find . -mmin +120 -print
Note that this command will also return . if it meets the time criteria. The current directory (.) is likely not what you need. You have options:
find . -type f -mmin +120 -print for only regular files.
find . -name \*.jpg -mmin +120 -print for only files ending in .jpg.
One more thing: it is considered unsafe to use the output of find to run other commands. Use -print0 to null-terminate the file names, then use xargs -0 to run the commands safely on names containing special characters.
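As a hedged sketch of that pairing (assuming a find that supports -print0, e.g. GNU find, plus xargs -0; ls -ld stands in for whatever command you actually want to run):
find . -type f -mmin +120 -print0 | xargs -0 ls -ld   # names are null-terminated, so spaces and newlines in file names are handled safely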

How to capture both success and error messages for linux "find" command

I'm trying to run an auto-delete script to free up space on a remote server.
The command I'm thinking to use is:
find . -atime +30 -mtime +30 -type f -delete
What I want is to also capture which files were successfully deleted and which failed because of access issues. How should I do this? I think the command below might take care of the failures only, but I'm not sure.
find . -atime +30 -mtime +30 -type f -delete 2>failed_deletions.txt
find does not print the files it processes when an action such as -delete is given. If you want to list the files, add a -print or -ls before the -delete.
This obviously prints all the files it processes, including the ones it fails to delete for whatever reason.
Redirecting standard output and standard error to separate files is then straightforward: command >stdout 2>stderr
The final command would become
find . -atime +30 -mtime +30 -type f \
-print -delete >success.txt 2>errors.txt
Less performant, but should do what you wanted:
find . -atime +30 -mtime +30 -type f -exec rm -v {} \; >successful.txt 2>failed.txt
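Either way, a quick sanity check afterwards is to inspect the two output files (file names as used in the last command above; failures caused by access problems typically contain "Permission denied"):
wc -l successful.txt failed.txt          # how many lines were logged on each side
grep -c 'Permission denied' failed.txt   # count the failures due to missing permissions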

Script that compresses files older than X and then removes the uncompressed originals

I'm trying to write a bash script that will compress files older than X and, after compressing, remove the uncompressed version. I tried something like this, but it doesn't work.
find /home/randomcat -mtime +11 -exec gzip {}\ | -exec rm;
By default, gzip will remove the uncompressed file (since it replaces it with the compressed variant). And you don't want it to run on anything other than a plain file (not on directories or devices, not on symbolic links).
So you want at least
find /home/randomcat -mtime +11 -type f -exec gzip {} \;
You could even want find(1) to avoid files with several hard links. And you might also want it to ask you before running the command. Then you could try:
find /home/randomcat -mtime +11 -type f -links 1 -ok gzip {} \;
The find command with -exec or -ok wants a semicolon (or a + sign), and you need to escape that semicolon ; from your shell. You could use ';' instead of \; to quote it...
If you use a +, the find command will group several arguments (passing them to a single gzip process), so it will run fewer processes (but they will last longer). So you could try
find /home/randomcat -mtime +11 -type f -links 1 -exec gzip -v {} +
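For completeness, a sketch of the quoted-semicolon variant mentioned above; it behaves exactly like the \; form:
find /home/randomcat -mtime +11 -type f -links 1 -exec gzip {} ';'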
You may want to read more about globbing and how a shell works.
BTW, you don't need any command pipeline (as suggested by the wrong use of | in your question).
You could even consider using GNU parallel to run things in parallel, or feed some shell (with background jobs) with e.g.
find /home/randomcat -mtime +11 -type f -links 1 \
-exec printf "gzip %s &\n" {} \; | bash -x
but in practice you won't speed up your processing by much.
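A hedged sketch of the GNU parallel alternative mentioned above (assuming GNU parallel is installed; its -0 option matches find's -print0, so unusual file names are handled safely):
find /home/randomcat -mtime +11 -type f -links 1 -print0 | parallel -0 gzip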
find /home/randomcat -mtime +11 -exec gzip {} +
This compresses the files matched by the find command. Instead of creating new compressed files alongside the originals, gzip converts the files to gzip format in place. Say you have three files older than X, named a, b and c.
After running the find /home/randomcat -mtime +11 -exec gzip {} + command,
you will see a.gz, b.gz and c.gz instead of a, b and c in the /home/randomcat directory.
find /location/location -type f -ctime +15 -exec mv {} /location/backup_location \;
This will find all the files (changed more than 15 days ago) and move them to the backup folder.

Copy files in Unix generated in the last 24 hours

I am trying to copy files which were generated in the past day (24 hours). I was told to use the awk command, but I couldn't find the exact command for doing this. My task is to copy files from /source/path --> /destination/path.
find /source/path -type f -mmin -60 -exec ls -al {} \;
I have used the above command to find the list of files generated in the past 60 minutes, but my requirement is to copy the files, not just to know their names.
Just go ahead and exec cp instead of ls:
find /source/path -type f -mmin -60 -exec cp {} /destination/path \;
You are really close! Take the names of the files and use them for the copy.
find /source/path -type f -mmin -60 -print |
while read -r file
do
    cp -a "${file}" "/destination/path"
done
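If the file names may contain spaces or other unusual characters, a null-separated variant is safer (a sketch assuming GNU find and GNU cp, whose -t option names the target directory):
find /source/path -type f -mmin -60 -print0 | xargs -0 cp -a -t /destination/path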

A crontab to move completed uploads from one dir to another?

I'm using the following crontab, once an hour, to move any files with the .mp3 extension from the dir "webupload" to the dir "complete" :
60 * * * * find usr/webupload -type f -maxdepth 1 -name "*.mp3" -exec mv {} usr/webupload/complete \;
The problem is that "webupload" contains lots of partial files being transferred.
I've read about a lot of different ways to achieve this, but I think I'm more confused now than I was when I started!
What is the best practice or easiest way to only move the completed uploads?
Many thanks :)
It's going to be hard to tell when a file is completely written unless it is renamed when the download is completed, but you could change your find command and add -mmin +1 so that it only looks for files which were modified more than 1 minute ago (meaning the download has likely completed). Also, you should use / at the beginning of your paths rather than the relative paths you're using, and note that the crontab minute field must be 0-59, so an hourly job should start with 0 rather than 60:
0 * * * * find /usr/webupload -maxdepth 1 -type f -mmin +1 -name "*.mp3" -exec mv {} /usr/webupload/complete \;
You could obviously make the modification time longer (e.g. 10 minutes: -mmin +10) if you want to be more certain that the file has been downloaded.
