In this code I parse a file (containing the output from ls -lrt) for a log file's modification date. Then I move all log files into a new folder with their modification dates added to the filenames, and then make a tar of all those files.
The problem I am getting is in the while loop. Because it reads the data for all the files, the while loop keeps running 15 times. I understand that there is some issue in the code but I can't figure it out.
Inside the while loop I am splitting the ls -lrt records to find the log file's modification date. $file is the output of the ls command, which I store in the text file /scripts/yagya.txt in order to get the modification date. But the while loop executes 15 times, since there are 15 log files in the folder which match the pattern.
#!/usr/bin/perl
use File::Find;
use strict;
my @field;
my $filenew;
my $date;
my $file = `ls -lrt /scripts/*log*`;
my $directory="/scripts/*.log";
my $current = localtime;
my $current_time = $current;
$current_time = s/\s+//g;
my $freetime = $current_time;
my $daytime = substr($current_time,0,8);
my $seconddir = "/$freetime/";
system ("mkdir $seconddir");
open (MYFILE,">/scripts/yagya.txt");
print MYFILE "$file";
close (MYFILE);
my $data = "/scripts/yagya.txt";
my $datas = "/scripts/";
my %options = (
    wanted  => \&wanted,
    untaint => 1
);
find(\%options, $datas);
sub wanted {
    if (/[._]log\d*$/) {
        my $files;
        my @fields;
        my $fields;
        chomp;
        $files = $_;
        open (MYFILE, $data);
        while (<MYFILE>) {
            chop;
            s/#.*//;
            next unless /\S/;
            @fields = (split)[5,6,7];
            $fields = join('', @fields), "\n";
        }
        close (MYFILE);
        system ("mv $files $seconddir$fields$files");
    }
}
system ("tar cvf /$daytime/$daytime.tar.gz /$daytime/*log*");
system ("rm $seconddir*log*");
system ("rm $data");
Your code is very difficult to read. It looks like you have written the program as a single big chunk before you started to test it. That way of working is common but very wrong. You should start by implementing a small part of the program and testing that before you add a little more functionality, test again, and so on. That way you won't be overwhelmed with fixing many problems at once in a large untested program.
It would also help you a lot if you added use warnings to your use strict at the top of the program. It helps to catch simple errors that you may overlook.
Also, are you aware that File::Find will call your wanted callback subroutine every time it encounters a file? It doesn't pass all the files at once.
The problem seems to be that you are reading all the way through the yagya.txt file when you should be stopping when you find the record that matches the current file that File::Find has found. What you need to do is to check whether the current record in the ls output ends with the name of the current file. If you write the loop like this
while (<MYFILE>) {
    if (/\Q$files\E$/) {
        my @fields = (split)[5,6,7];
        $fields = join('', @fields);
        last;
    }
}
then $fields will end up with the modification date of the current file, which is what you want.
But this would be a thousand times easier if you used Perl to read the file modification date for you.
Instead of writing an ls listing to a file and reading it back, you should do something like this
use File::stat;
my $mtime = localtime(stat($files)->mtime);
which will give you a string like Wed Jun 13 11:25:23 2012. The date from my ls output includes only the month name, day of month, and time of day, like Jun 8 12:37. That isn't very specific and you perhaps should at least include a year, but to generate the same string from this $mtime you can write
my $fields = join '', (split ' ', $mtime)[1,2,3];
There is a lot more I could say about your program, but I hope this gets it going for you for now.
Another couple of things I have noticed:
The line $current_time = s/\s+//g should be $current_time =~ s/\s+//g to remove all spaces from the current time string
A value like Sun Jun 3 11:50:54 2012 will be reduced to SunJun311:50:542012, and $daytime will then take the value SunJun31, which is incorrect
I don't usually recommend using bash instead of Perl, but sometimes it is much shorter.
This problem has two parts:
renaming the files into another directory, adding a timestamp to the filenames
archiving them by minute, hour, day, etc.
For 1.):
find ./scripts -name \*[_.]log\* -type f -printf "%p\0./logs/%TY%Tm%Td-%TH%TM%TS-%f\0" | xargs -0 -L 2 mv
The above will find all plain files with [_.]log in their names and rename them into the ./logs directory with a timestamp prefix, e.g.
./scripts/aaa.log12 gets renamed to ./logs/20120403-102233-aaa.log12
For 2.), archiving:
ls logs | sed 's/\(........-....\).*/\1/' | sort -u | while read groupby
do
    ( cd logs && echo tar cvzf ../$groupby.tgz $groupby* )
done
This will create tar archives grouped by timestamp prefix. (It is assumed that ./logs contains only files with valid, timestamped filenames.)
Of course, the above sed pattern is not nice, but it clearly shows how the seconds are deleted from the timestamp, so it creates archives by minute. If you want another grouping, you can use:
sed 's/\(........-..\).*/\1/' - by hours
sed 's/\(........\).*/\1/' - by days
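You can check what each pattern keeps by piping a sample filename through it:
echo "20120403-102233-aaa.log12" | sed 's/\(........-....\).*/\1/'   # 20120403-1022 (minute)
echo "20120403-102233-aaa.log12" | sed 's/\(........-..\).*/\1/'     # 20120403-10   (hour)
echo "20120403-102233-aaa.log12" | sed 's/\(........\).*/\1/'        # 20120403      (day)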
Other notes:
the -printf action of find is supported only in the GNU version of find, which is common on Linux
it is usually not good practice to work directly under '/', like /scripts, therefore my example uses ./
if the same filename with the same timestamp exists more than once in your ./scripts subtree, the mv will overwrite the earlier one, e.g. both ./scripts/a/a.log and ./scripts/x/a.log with the same timestamp will be renamed to ./logs/TIMESTAMP-a.log
I need to implement a script (duplq.sh) that would rename all the text files existing in the current directory using the command line arguments. So if the command duplq.sh pic 0 3 was executed, it would do the following transformation:
pic0.txt will have to be renamed pic3.txt
pic1.txt to pic4.txt
pic2.txt to pic5.txt
pic3.txt to pic6.txt
etc…
So the first argument is always the base name of the files, and the second and the third are always positive numbers.
I also need to make sure that when I execute my script, the first renaming (pic0.txt to pic3.txt), does not erase the existing pic3.txt file in the current directory.
Here's what I did so far:
#!/bin/bash
name="$1"
i="$2"
j="$3"
for file in $name*
do
    echo $file
    find /var/log -name 'name[$i]' | sed -e 's/$i/$j/g'
    i=$(($i+1))
    j=$(($j+1))
done
But the find command does not seem to work. Do you have other solutions ?
The problem you're trying to solve is actually somewhat tricky, and I don't think you've fully thought it through. For instance, what's the difference between duplq.sh pic 0 3 and duplq.sh pic 2 5 -- it looks like both should just add 3 to the number, or would the second skip "pic0.txt" and "pic1.txt"? What effect would either one have on files named "pic", "pic.txt", "picture.txt", "picture2.txt", "pic2-2.txt", or "pic999.txt"?
There are also a bunch of basic mistakes in the script you have so far:
You should (almost) always put variable references in double-quotes, to avoid unexpected word-splitting and wildcard expansion. So, for example, use echo "$file" instead of echo $file. In for file in $name*, you should put double-quotes around the variable but not the *, because you want that to be treated as a wildcard. Hence, the correct version is for file in "$name"*
Don't put variable references in single-quotes, they aren't expanded there. So in the find and sed commands, you aren't passing the variables' values, you're passing literal dollar signs followed by letters. Again, use double-quotes. Also, you don't have a "$" before "name", so it won't be treated as a variable even in double-quotes.
But the find and sed commands don't do what you want anyway. Consider find /var/log -name "name[1]" -- that looks for files named "name1", not "name1" + some extension. And it looks in /var/log and all its subdirectories, which I'm pretty sure you don't want. And the "1" ("$i") may not be the number in the current filename. Suppose there are files named "pic0.jpg", "pic0.png", and "pic0.txt" -- on the first iteration, the loop might find all three with a pattern like "pic0*", then on the second and third iterations try to find "pic1*" and "pic2*", which don't exist. On the other hand, suppose there are files named "pic0.txt", "pic5.txt", and "pic8.txt" -- again, it might look for "pic0*" (ok), then "pic1*" (not found), and then "pic2*" (ditto).
Also, if you get to multi-digit numbers, the pattern "name[10]" will match "name0" and "name1", but not "name10". I don't know why you added the brackets there, but they don't do anything you'd want.
You already have the files being listed one at a time in the $file variable, searching again with different criteria just adds confusion.
Also, at no point in the script do you actually rename anything. The find | sed line will (if it works) print the new name for the file, but not actually rename it.
BTW, when you do use the mv command, use either mv -n or mv -i to keep it from silently and irretrievably overwriting files if/when a name conflict occurs.
To prevent overwriting when incrementing file numbers, you need to do the renames in reverse numeric order (i.e. rename "pic3.txt" to "pic6.txt" before renaming "pic0.txt" to "pic3.txt"). This is especially tricky because if you just sort filenames in reverse alphabetic order, you'll get "pic7.txt" before "pic10.txt". But you can't do a numeric sort without removing the "pic" and ".txt" parts first.
IMO this is actually the trickiest problem to be solved in order to get this script to work right. It might be simplest to specify the largest index number as one of the arguments, and have it start there and count down to 0 (looping over numbers rather than files), and then for each number iterate over matching files (e.g. "pic0.jpg", "pic0.png", and "pic0.txt").
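A minimal sketch of that count-down approach, assuming a positive shift and a hypothetical fourth argument giving the largest existing index (the names and the extra argument are illustrative, not part of the original script):
#!/bin/bash
# Hypothetical usage: duplq.sh pic 0 3 42   (42 = largest existing index)
name="$1"; from="$2"; to="$3"; max="$4"
delta=$(( to - from ))
# Count down from the largest index so a rename never clobbers a file
# that is itself still waiting to be renamed.
for (( i = max; i >= from; i-- )); do
    for f in "$name$i".*; do
        [ -e "$f" ] || continue            # no file with this index
        ext="${f##*.}"                     # keep whatever extension it had
        mv -n "$f" "$name$(( i + delta )).$ext"
    done
done
Because the loop runs over numbers in decreasing order, pic3.txt moves to pic6.txt before pic0.txt moves to pic3.txt, and mv -n is there as a last line of defence.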
So I assume that 0 3 just specifies the difference between the old and the new number, and is equivalent to 1 4 or 100 103.
To avoid overwriting existing files, create a new temp dir, move all affected files there, and move all of them back at the end.
#!/bin/bash
#
# duplq.sh pic 0 3
base="$1"
delta=$(( $3 - $2 ))
# echo delta $delta
target=$(mktemp -d)
echo $target
# /tmp/tmp.7uXD2GzqAb
add () {
    f="$1"
    b="$2"
    d=$3
    num=${f#./${b}}
    # echo -e "file: $f \tnum: $num \tnum + d: $((num + d))" ;
    echo -e "$((num + d))" ;
}
for f in $(find -maxdepth 1 -type f -regex ".*/${base}[0-9]+")
do
    newnum=$(add "$f" "${base}" $delta)
    echo mv "$f" "$target/${base}$newnum"
done
# exit
echo mv $target/${base}* .
First I tried to just use bash syntax, to check whether removing the prefix (pic) leaves only digits. I also didn't use the extension .txt - this is left as an exercise for the reader. The question never explicitly says that all files share the same extension, but all files in the example do.
With the -regex ".*/${base}[0-9]+" in find, the values are guaranteed to be just digits.
num=${f#./${b}}
removes the base ("pic") from file f. Delta d is then added.
Instead of really moving, I just echoed the mv-command.
#TODO: Implement preservation of the file name extension.
Two other pitfalls come to mind: if you have three files pic0, pic00 and pic000, they will all be renamed to pic3. And pic08 will be split into pic and 08; 08 will then be read as an octal number (as will 09 or 012129 and so on) and lead to an error.
One way to solve this issue is to prepend a "1" to the extracted number (001 or 018), then add 3, and remove the leading 1:
001 → 1001 → 1004 → 004
018 → 1018 → 1021 → 021
but this clever solution leads to new problems:
999 → 1999 → 2002 → 002?
So a leading 1 has to be cut off, and a leading 2 has to be reduced by 1. But now, if the delta is bigger, let's say 300:
018 → 1018 → 1318 → 318
918 → 1918 → 2218 → 1218
Well - that seems to be working.
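As an aside, bash can also sidestep the octal problem directly: arithmetic expansion accepts a 10# prefix to force base 10, so the prepend-a-1 dance is only needed if you want to carry the leading zeros through the arithmetic. A quick sketch:
num=018
echo $(( 10#$num + 3 ))              # prints 21; no octal error
printf '%03d\n' $(( 10#$num + 3 ))   # prints 021; width restored with printf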
I'm writing a bash script that will copy a file into a directory, where the new copy has the same name but with the timestamp appended to the filename (prior to the extension).
How can I achieve this?
To insert the timestamp of the file itself into the original file name, as well as preserving that timestamp on the target file, the following works in GNU environments:
file="/some/dir/path-to-file.xxx";
cp -p "$file" "${file%.*}-$(date -r"$file" '+%Y%m%d-%H%M%S').${file##*.}"
Adding proper use of the basename(1) command into the mix would allow you to copy the file into a different directory.
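For example, a sketch of that variant (the /some/backups target directory is made up for illustration):
file="/some/dir/path-to-file.xxx"
dest="/some/backups"                 # hypothetical target directory
name=$(basename "$file")             # path-to-file.xxx
cp -p "$file" "$dest/${name%.*}-$(date -r "$file" '+%Y%m%d-%H%M%S').${name##*.}"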
It's more challenging to do this outside of GNU/Linux environments, and you have to start visiting languages like awk, perl, python, even php, to replace the date -r command.
file="file_to_copy"
cp $file "/path/to/dest/$file"`stat --printf "%X" $file`
You can look at the manual page of stat (man 1 stat) to choose the appropriate timestamp for your needs (creation, last access etc.)
In this example, I chose %X which means time of last access, seconds since Epoch
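For instance, if you want the modification time rather than the last access time, %Y is the corresponding format sequence (both shown here for comparison):
stat --printf "%X" "$file"   # time of last access, seconds since Epoch
stat --printf "%Y" "$file"   # time of last modification, seconds since Epoch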
Suppose
var="/path/to/filename.ext" #path is optional
Do
var1="${var##*/}
cp "$var" "/path/to/new/directory/${var1%.*}$(date +%s).${var1##*.}"
For more on ${var%.*} and ${var##*.}, see shell parameter expansion.
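A quick demonstration of what those expansions produce, using the example path from above:
var="/path/to/filename.ext"
var1="${var##*/}"
echo "$var1"          # filename.ext  (longest match of */ removed from the front)
echo "${var1%.*}"     # filename      (shortest match of .* removed from the end)
echo "${var1##*.}"    # ext           (longest match of *. removed from the front)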
The date manpage says:
%s seconds since 1970-01-01 00:00:00 UTC
I am trying to figure out a way to determine the most recent file created in a directory. I cannot use a module and I am on a Linux OS.
A simple Google search gave me a good answer:
@list = `ls -t`;
$newest = $list[0];
Or completely in Perl:
opendir(my $DH, $DIR) or die "Error opening $DIR: $!";
my %files = map { $_ => (stat("$DIR/$_"))[9] } grep(! /^\.\.?$/, readdir($DH));
closedir($DH);
my @sorted_files = sort { $files{$b} <=> $files{$a} } (keys %files);
$sorted_files[0] is the most-recently modified. If it isn't the actual file-of-interest, you can iterate through @sorted_files until you find the interesting file(s).
No, you cannot get files on the basis of their birth date, as there is no Linux command to get the birth date of a file, but of course you can get the access, modification, and change information about a file. To get the access, modification, and change times of any file, use this:
stat file-name
Also, to get the most recently changed/modified file, use this:
ls -ltr | tail -1
Try:
cd DIR
ls -l -rt | tail -1
The naughty IO::All method: ;-)
use IO::All;
use v5.20;
# sort files by their modification time and store in an array:
my @files = sort { $b->mtime <=> $a->mtime } io(".")->all_files;
# get the first/newest file from the sort:
say "$files[0] ". ~~localtime($files[0]->mtime);
I have a set of files named img1.png, img2.png, ... img10.png, and so on. What I want to achieve is renaming these files so that the starting index is increased by 30, such that the files become img31.png, img32.png, ... img40.png, and so on. Is this possible using the "rename" command, or is a script required? In either case, how do I do this?
Related: for this to work, do I have to first rename the files to img001.png, img002.png, ... img010.png, and so on? How is this to be done, if required?
To add 30 to the numbers in each filename:
rename 's/(\d+)/$1+30/e' *png
To rename the numbers to be 3 digits long:
rename 's/(\d+)/sprintf("%03d",$1)/e' *png
See perldoc perlre (http://perldoc.perl.org/perlre.html) for details of how this works; rename is a Perl program.
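Since this is the Perl-based rename, you can preview the result with -n before touching anything; it prints the planned renames without performing them:
rename -n 's/(\d+)/$1+30/e' *png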
LOCATION=/my/image/directory #change this to your location
for file in $(ls -1 ${LOCATION})
do
    ind=$(echo ${file}|cut -c 4-|cut -d"." -f1)
    (( newind=${ind}+30 ))
    mv ${LOCATION}/${file} ${LOCATION}/img${newind}.png
done
I am sure there is a much more elegant way of doing this on one line using the likes of awk/sed/perl etc., but this shows you the logic behind it.
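For instance, the two cut calls could be replaced with plain parameter expansion (same logic, no extra processes; a sketch assuming the img prefix and a single extension):
ind=${file#img}          # strip the leading "img"
ind=${ind%.*}            # strip the extension
(( newind = ind + 30 ))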
Hope it helps
I'm finding it difficult to word my question in a way I can search for the answer; my problem is as follows.
I have a webcam that takes a photo every 2mins and saves it as a numbered file. The first photo is taken at 0000hrs and is named image001.jpg, at 0002hrs image002.jpg, and so on. At 2359hrs all the photos are turned into a 24hr time lapse video and saved as daily_video.mov. At 0000hrs (of the next day) the old image001.jpg is overwritten and the whole process repeated, including generation of a new daily_video.mov.
This is all working fine, with the webcam doing the file naming and overwriting, and a cron job running ffmpeg once a day to make the video.
What I want to do now is make a time lapse video over, say, a month by copying every 30th file from the day's images to a new folder and naming them in sequential order, i.e.:
Day 1: image030.jpg, image060.jpg, etc. are renamed to Archive001.jpg, Archive002.jpg, etc.
But on day 2, image030.jpg, image060.jpg, etc. will need to be renamed to Archive025.jpg, Archive026.jpg, etc., and so on until the end of the month, copying each day's files to a sequentially increasing list of filenames to use at the end of the month, when the process can be repeated.
Does that make sense?!!
You could use a bash script like the following. Just call it at 2359hrs.
Remember to make it executable using chmod +x myScript
I did not rename the files to Archive00X.jpg, but by adding the current date they will be in proper alphabetical order.
example output:
cp files/image000.jpg >> archive/image_2012-08-29_000.jpg
cp files/image030.jpg >> archive/image_2012-08-29_030.jpg
....
Adapt pSource and pDest to your paths (preferably absolute paths).
Adapt offset and maxnum to your needs. If maxnum is too big, it will tell you some files are missing but otherwise work properly.
Remove the echo lines if they disturb you ;)
Code:
#!/bin/bash
pSource="files"
pDest="archive"
offset=30
maxnum=721
curdate=`date "+%F"`
function rename_stuff()
{
    myvar=0
    while [ $myvar -lt $maxnum ]
    do
        forg=`printf image%03d.jpg ${myvar}`
        fnew=`printf image_%s_%03d.jpg ${curdate} ${myvar}`
        forg="$pSource/$forg"
        fnew="$pDest/$fnew"
        if [ -f "$forg" ]; then
            echo "cp $forg >> $fnew"
            cp "$forg" "$fnew"
        else
            echo "missing file $forg"
        fi
        myvar=$(( $myvar + $offset ))
    done
}
rename_stuff
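To have it called at 2359hrs automatically, a crontab entry along these lines would do (the script path here is assumed; edit with crontab -e):
59 23 * * * /path/to/myScript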