Batch rename log files from one time format to another - linux

I'm looking for a way to batch rename almost 1,000 log files created by an Eggdrop bot. A few years ago, I had to set up my bot from scratch and neglected to set the log format properly, so all of these files now have the format:
channelname.log.%d%b%Y (channelname.log.14Jan2014)
I want to rename all those files to match all my old log files, which are in the format of:
channelname.log.%Y%m%d (channelname.log.20140101)
I've already made the change in my eggdrop.conf file, but I would like to rename all the newer log files to match the format of the old ones.
This is on a Linux shell, so some sort of bash command would be ideal. Thanks!

find . -type f -name '*.log.*[^0-9-]*' -print0 | while IFS= read -r -d '' logfile; do
    # Take the old-style date suffix (e.g. 14Jan2014) and convert it to %Y%m%d
    olddate=${logfile#*.log.}
    mv "$logfile" "${logfile%.log.*}.log.$(date -d "$olddate" +%Y%m%d)"
done
This assumes the date suffix is in a locale that date knows how to parse.
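If you want to preview the result before touching anything, the same loop with mv swapped for echo is a harmless dry run:
find . -type f -name '*.log.*[^0-9-]*' -print0 | while IFS= read -r -d '' logfile; do
    olddate=${logfile#*.log.}
    echo "would rename: $logfile -> ${logfile%.log.*}.log.$(date -d "$olddate" +%Y%m%d)"
done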

Related

Bash Scripting with xargs to BACK UP files

I need to copy a file from multiple locations to a backup directory while retaining its directory structure. For example, I have a file "a.txt" at the following locations /a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt, and I now need to copy this file from those locations to the backup directory /tmp/backup. The end result should be:
When I list /tmp/backup/a, it should contain /b/a.txt /c/a.txt /d/a.txt & /e/a.txt.
For this, I used the command: echo /a/*/a.txt | xargs -I {} -n 1 sudo cp --parent -vp {} /tmp/backup. This throws the error "cp: cannot stat '/a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt': No such file or directory".
The -I option is taking the complete output from echo as a single value instead of individual values (like -n 1 would). If someone can help debug this issue, that would be very helpful, rather than providing an alternative command.
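As for the actual bug: when -I is given, xargs treats each input line (not each blank-separated word) as one replacement value, and echo prints all the expanded paths on a single line, so that whole line becomes one {} and cp tries to stat a single file with that long name. Feeding one path per line keeps the rest of the command unchanged:
printf '%s\n' /a/*/a.txt | xargs -I {} sudo cp --parents -vp {} /tmp/backup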
Use rsync with the --relative (-R) option to keep (parts of) the source paths.
I've used a wildcard for the source to match your example command rather than the explicit list of directories mentioned in your question.
rsync -avR /a/*/a.txt /tmp/backup/
Do the backups need to be exactly the same as the originals? In most cases, I'd prefer a little compression. [tar](https://man7.org/linux/man-pages/man1/tar.1.html) does a great job of bundling things including the directory structure.
tar cvzf /path/to/backup/tarball.tgz /source/path/
tar can't update compressed archives, so if you want to add to the archive later, skip the compression:
tar uf /path/to/backup/tarball.tar /source/path/
This gives you versioning of a sort, as it only updates changed files but keeps both the before and after versions.
If you have time and cycles and still want the compression, you can decompress before the update and recompress after.
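For example (a sketch, assuming the archive was created as tarball.tgz as above; note that gzip will leave it named tarball.tar.gz afterwards, so rename it if you want the .tgz extension back):
gunzip /path/to/backup/tarball.tgz
tar uf /path/to/backup/tarball.tar /source/path/
gzip /path/to/backup/tarball.tar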

Check directory daily for new files - linux bash script

I'd like to monitor a directory for new files daily using a linux bash script.
New files are added to the directory every 4 hours or so, so at the end of the day I'd like to process all the files.
By process I mean convert them to an alternative file type then pipe them to another folder once converted.
I've looked at inotify to monitor the directory but can't tell if you can make this a daily thing.
Using inotify I have got this code working in a sample script:
#!/bin/bash
while read -r line
do
    echo "close_write: $line"
done < <(inotifywait -mr -e close_write "/home/tmp/")
This does notify when new files are added and it is immediate.
I was considering using this to keep track of the new files, then processing them all at once at the end of the day.
I haven't done this before so I was hoping for some help.
Maybe something other than inotify will work better.
Thanks!
You can use a daily cron job: http://linux.die.net/man/1/crontab
You should definitely look into using a cron job. Edit your crontab (crontab -e) and put this in:
0 0 * * * /path/to/script.sh
That means: run your script at midnight every day. Then in your script.sh, all you would do is, for all the files, "convert them to an alternative file type then pipe them to another folder once converted".
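A rough sketch of what script.sh could look like; the directories and the convert_tool command are placeholders, since the question doesn't say what the conversion actually is:
#!/bin/bash
src=/home/tmp            # directory being watched (placeholder)
dest=/home/processed     # where converted files go (placeholder)
for f in "$src"/*; do
    [ -f "$f" ] || continue
    # convert_tool is a stand-in for whatever does your file-type conversion
    if convert_tool "$f" "$dest/$(basename "$f").converted"; then
        rm -- "$f"
    fi
done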
Your cron job (see other answers on this page) should keep a list of the files you have already processed, then use comm -3 processed-list all-list to get the new files (both lists need to be sorted for comm to work).
man comm
It's a better alternative to
awk 'FNR==NR{a[$0];next}!($0 in a)' processed-list all-list
and probably more robust than using find, since you record the ones that you have actually processed.
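Putting that together, a minimal sketch (the list file names are made up; create processed-list once with touch before the first run):
find /path/to/dir -type f | sort > all-list
comm -13 processed-list all-list > new-list   # lines only in all-list, i.e. not yet processed
# process everything in new-list, then fold it into processed-list:
sort -u -o processed-list processed-list new-list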
To collect the files by the end of day, just use find:
find $DIR -daystart -mtime -1 -type f
Then as others pointed out, set up a cron job to run your script.

How can I delete files that are not used in code files in linux?

I am running Fedora 18 Linux and I have a PHP project that I have been working on for some time. I am trying to clean things up for a production deploy of a web application. I have a folder of icon images that has collected files over time that are not used in my code any more, either because I switched to a different icon in the code or because the image file was only used to create other icons. What I am looking to do is make a backup copy of the entire code project and then, hopefully with a combination of find, rm and grep on the command line, scan the entire folder of images and delete any image that is not used anywhere in my code files. I did some searching on the web and found things that locate a line of text in a file and delete it, but nothing quite like what I am trying to do.
Any help is appreciated...
So here is what I came up with. I put together a shell script that does what I need. For the benefit of those who stumble upon this, and for those who want to critique my solution, here it is. I chose to skip .xcf files because these are only used to create many of the icon files, and some of the .png images would grep to these .xcf files.
#!/bin/bash
FILES=/var/www/html/support_desk/templates/default/images/icons/*
codedir=/var/www/html/support_desk_branch/
for f in $FILES
do
    bn=$(basename "$f")
    ext="${bn##*.}"
    echo "Processing $bn file..."
    if ! fgrep --quiet -R "$bn" "$codedir"; then
        if [ "$ext" != 'xcf' ]; then
            rm "$f"
        fi
    fi
done
Now I have ONLY the image files that are used in the PHP script files. Just so as not to miss any of the icon files used in the menu, which is defined in a table in a MySQL database, I created an SQL dump file of the data for that table and put it in the path of the application files before running the shell script.
The simplest way to find unused icon files would be to do a build of your complete project and then look at the access times of the icon files. Those that were not read recently (including with grep, of course) would show up readily.
For instance, suppose you did a backup an hour ago and a build ten minutes ago; the access times would be distinct. Then
find . -amin +15 -type f
should give a nice list of "unused" files. If you're sure of the list (you did do a backup, right?) then you could purge the unused files:
find . -amin +15 -type f -exec rm -i {} \;
If you are really certain, you can remove the -i option.
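One thing worth checking before relying on this (an assumption about your system, not part of the original suggestion): if the filesystem is mounted with noatime, access times never get updated and every file will look "unused". A quick check:
mount | grep noatime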

Scp bulk files from current directory to another directory

I need to transfer a bunch of files from a production host to my local machine. I'm already in the directory that I need to transfer the files from, and I know the names of the log files I need. They follow the pattern log.timestamp.hostname and these names tend to be long. How can I transfer them in bulk using scp? Is there an easier way than typing out the long file names? Can I select them by a filename pattern?
Use wildcards:
scp log.* user@host:/target/directory
If you don't want to copy over all of the files in the current directory (which would just be ./*), you could list the files in your current directory and run a regular expression over them to match log.timestamp.hostname, then pass that to scp. For the regex, I found an example of using regex with find; for sending big files, there's an example of scp syntax. Something along the lines of:
scp $(find . -regextype sed -regex ".*/log\.[a-z0-9-]*\.[a-z0-9.-]*") user@remote:~/
You will probably want to modify the regex to make it work.
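If the timestamp part is purely numeric, a simpler variant that skips -regex entirely (a sketch; the -name pattern is an assumption about your filenames):
scp $(find . -maxdepth 1 -type f -name 'log.[0-9]*') user@remote:~/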
This command-line approach helped solve my issue of transferring a subset of files. As find on AIX does not provide the -regextype option, I used grep instead to pick out the files tab1.msg to tab9.msg:
scp $(find . -name "*" | grep 'tab[0-9]\.msg') user@host:/tmp

How do I rename multiple files beginning with a Unix timestamp - imapsync issue

I couldn't get the script from imapsync that renames maildir filenames to work. :-/
So what I need is:
I have a mail folder with thousands of mails. After importing those emails to my new server, the filenames of the emails got the creation date as a Unix timestamp prefix, but the creation date flag of the file itself is the correct receive date of the email.
ls -l for one file looks like this:
-rw-r--r-- 1 popuser popuser 1350432 2013-03-16 07:22 1363563215.M562903P29332V0000000000000802I0000000000AEA46B_527.my-domain.org,S=1350432:2,S
So what the script has to do is:
1) read the creation date/time of the file (I found the command
stat -c %y filename
does this)
2) convert the date/time from 1) to a Unix timestamp
date -d "2013-03-17 11:19:01.000000000 +0100" "+%s"
3) delete the first 10 digits (wrong timestamp) of the filename and use the timestamp from 2) instead
4) do this for all files in a specific directory
I'm a newbie at Linux scripting; can anyone help me with this script?
Thank you!
Try doing this with rename:
$ rename -n 's/^\d+/(stat($_))[9]/e' [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*
from the shell prompt. It's very useful: with the e modifier you can run Perl code in the substitution, as I do here with stat.
You can remove the -n (dry-run mode switch) once your tests look valid.
There are other tools with the same name which may or may not be able to do this, so be careful.
If you run the following command (on Linux)
$ file $(readlink -f $(type -p rename))
and you have a result like
.../rename: Perl script, ASCII text executable
and not containing:
ELF
then this seems to be the right tool =)
If not, to make it the default (usually already the case) on Debian and derivatives like Ubuntu:
$ sudo update-alternatives --set rename /path/to/rename
(replace /path/to/rename with the path to your Perl rename command).
If you don't have this command, install it through your package manager or install it manually.
Last but not least, this tool was originally written by Larry Wall, the father of Perl.
Edit
As stated here, if you get the following error:
Argument list too long
Then use find like this:
find -type f -name '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*' -print0|
xargs -0 -n1 rename -n 's/^\d+/(stat($_))[9]/e'
(try it without -n1, that should work too)
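If you'd rather stay in plain bash and follow your own steps 1) to 4) literally, here is a minimal sketch (the directory path is a placeholder, and mv -n makes sure nothing gets overwritten):
cd /path/to/maildir   # step 4): the specific directory (placeholder path)
for f in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*; do
    ts=$(date -d "$(stat -c %y "$f")" +%s) || continue   # steps 1) and 2)
    mv -n -- "$f" "$ts${f:10}"                           # step 3): replace the first 10 digits
done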

Resources