Move files to another folder, including creation date in filename - linux

I am able to move files from one folder to another, but I want the file in the new folder to be named with its creation date followed by the original filename.
For instance
/scripts/a.log
moved to
/log/8june2012a.log

cp filename "`date +%Y%m%d`filename"
This copies filename as 20120608filename. For your example this is what you want:
cp filename "`date +%d%b%Y`filename"
This copies filename as 08Jun2012filename (%b produces a capitalised month abbreviation; pipe through tr A-Z a-z if you need lowercase). If you want to move your file instead of copying it, use mv instead of cp:
mv filename "`date +%d%b%Y`filename"
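The same can be done from Perl in a few lines; a minimal sketch (my own, not part of the answer above), using today's date like the shell commands and the /scripts and /log paths from the question:
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy 'move';
use POSIX 'strftime';

# Prefix with today's date, lowercased to match the question's
# 8june2012 style (e.g. 08jun2012), and move into /log.
my $date = lc strftime('%d%b%Y', localtime);
move '/scripts/a.log', "/log/${date}a.log"
    or die "move failed: $!";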

Here is a solution in Perl.
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy 'move';
use Time::Piece 'localtime';
my $indir = '/scripts';
my $outdir = '/log';
# get all of the files in the scripts dir
chdir $indir;
my @files = grep -f, glob '*';

foreach my $infile (@files) {
    # get the file's last-modification time (stat element 9; Linux
    # has no true creation time, so mtime is the usual stand-in)
    my $file_created_date = localtime( (stat $infile)[9] );
    my $outfile = $file_created_date->strftime('%d%B%Y') . $infile;
    move $infile, "$outdir/$outfile";
}
As an aside, I would format the date as %Y%m%d (yyyymmdd) as it gives you a consistent format and allows you to sort by date more easily.
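With that format, the strftime line in the loop above would become:
my $outfile = $file_created_date->strftime('%Y%m%d') . $infile;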

Another solution.
use strict;
use warnings;
use File::stat;
use POSIX qw(strftime);

my $File = 'mv.pl';
# note: ctime is the inode-change time, not a true creation time
my $NewFile = strftime("%d%B%Y", localtime(stat($File)->ctime)) . $File;
rename $File, $NewFile;

Using a couple of CPAN modules this can be made straightforward. File::Copy has been a core module since Perl v5.0, but Date::Format and Path::Class will need installing unless you already have them.
I have taken your requirement literally, and this solution prefixes the original file with the creation date using %e%B%Y as the format, with upper case translated to lower case and spaces stripped. However this isn't very readable and the directory listing will not automatically sort in date order, so I recommend using %Y-%m-%d- instead by replacing the line containing the call to strftime with
my $date = lc strftime('%Y-%m-%d-', @date)
At present the code just prints a list of the files it is going to move and their destination. To actually do the move you should uncomment the call to move.
use strict;
use warnings;
use Path::Class 'dir';
use Date::Format 'strftime';
use File::Copy 'move';
my $source = dir '/scripts/';
my $dest = dir '/log/';
for my $file (grep { not $_->is_dir } $source->children) {
    my @date = localtime $file->stat->ctime;
    (my $date = lc strftime('%e%B%Y', @date)) =~ tr/\x20//d;
    my $newfile = $dest->file($date . $file->basename);
    print "move $file -> $newfile\n";
    # move $file, $newfile;
}

use File::Copy;
move("a.log",$DIRECTORY.get_timestamp().".log");
Your get_timestamp function should generate the date.
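A minimal sketch of such a function, assuming a sortable yyyymmdd stamp is acceptable (the format is my choice, not something the answer specifies):
use POSIX 'strftime';

# Hypothetical helper: returns today's date, e.g. 20120608.
sub get_timestamp {
    return strftime('%Y%m%d', localtime);
}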

I wrote a demo for you,
#!/bin/bash
DATE=`date +"%e%B%Y" | tr -d ' ' | tr A-Z a-z`
for FILENAME in *.log
do
cp "${FILENAME}" "/log/${DATE}${FILENAME}"
done
You can run this in your "scripts" directory.

Related

finding a file in directory using perl script

I'm trying to develop a perl script that looks through all of the user's directories for a particular file name without the user having to specify the entire pathname to the file.
For example, let's say the file of interest was data.list. It's located in /home/path/directory/project/userabc/data.list. At the command line, the user would normally have to specify the full pathname to access it, like so:
cd /home/path/directory/project/userabc/data.list
Instead, I want the user to just enter script.pl ABC at the command line; the Perl script should then run automatically and retrieve the information in data.list, which in my case means counting the number of lines and uploading the result using curl. The rest is done; all that remains is the part where it automatically locates the file.
Even though very feasible in Perl, this looks more appropriate in Bash:
#!/bin/bash
filename=$(find ~ -name "$1" )
wc -l "$filename"
curl .......
The main issue would of course be if you have multiple files with the same name, for example /home/user/dir1/data1 and /home/user/dir2/data1. You will need a way to handle that, and how you handle it depends on your specific situation.
In Perl that would be much more complicated:
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if 0; #$running_under_some_shell
use strict;
# Import the module File::Find, which will do all the real work
use File::Find ();
# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.
# for the convenience of &wanted calls, including -eval statements:
# Here, we "import" specific variables from the File::Find module
# The purpose is to be able to just type '$name' instead of the
# complete '$File::Find::name'.
use vars qw/*name *dir *prune/;
*name = *File::Find::name;
*dir = *File::Find::dir;
*prune = *File::Find::prune;
# We declare the sub here; the content of the sub will be created later.
sub wanted;
# This is a simple way to get the first argument. There is no
# checking on validity.
our $filename=$ARGV[0];
# Traverse desired filesystem. /home is the top-directory where we
# start our seach. The sub wanted will be executed for every file
# we find
File::Find::find({wanted => \&wanted}, '/home');
exit;
sub wanted {
    # Check if the file is our desired filename
    if ( /^$filename\z/ ) {
        # Open the file, read it and count its lines
        my $lines = 0;
        open(my $F, '<', $name) or die "Cannot open $name";
        while (<$F>) { $lines++; }
        print("$name: $lines\n");
        # Your curl command here
    }
}
You will need to look at the argument parsing, for which I simply used $ARGV[0], and I don't know what your curl looks like.
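A minimal sketch of that argument check, which could replace the simple assignment above (the usage message is my own wording):
# Bail out early unless exactly one filename was given.
die "Usage: $0 <filename>\n" unless @ARGV == 1;
our $filename = $ARGV[0];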
A simpler (though not recommended) way would be to abuse Perl as a sort of shell:
#!/usr/bin/perl
#
my $fn=`find /home -name '$ARGV[0]'`;
chomp $fn;
my $wc=`wc -l '$fn'`;
print "$wc\n";
system ("your curl command");
The following code snippet demonstrates one of many ways to achieve the desired result.
The code takes one parameter: a word to look for in files named data.list in all subdirectories. It prints a list of the matching files to the terminal.
The code uses the subroutine lookup($dir,$filename,$search), which calls itself recursively whenever it comes across a subdirectory.
The search starts from the current working directory (the question did not specify a starting directory).
use strict;
use warnings;
use feature 'say';
my $search = shift || die "Specify what to look for";
my $fname  = 'data.list';
my $found  = lookup('.', $fname, $search);

if( $found && @$found ) {
    say for @$found;
} else {
    say 'Not found';
}

exit 0;

sub lookup {
    my $dir    = shift;
    my $fname  = shift;
    my $search = shift;

    my $files;
    my @items = glob("$dir/*");

    for my $item (@items) {
        if( -f $item && $item =~ /\b$fname\b/ ) {
            open my $fh, '<', $item or die $!;
            while( my $line = <$fh> ) {
                if( $line =~ /\b$search\b/ ) {
                    push @{$files}, $item;
                    last;
                }
            }
            close $fh;
        }
        if( -d $item ) {
            my $ret = lookup($item, $fname, $search);
            push @{$files}, @$ret if $ret;
        }
    }

    return $files;
}
Run as script.pl search_word
Output sample
./capacitor/data.list
./examples/data.list
./examples/test/data.list
Reference: glob, Perl file test operators.

How to rename multiple files in terminal (LINUX)?

I have a bunch of files in a directory with no pattern at all in their names; all I know is that they are all JPG files. How do I rename them so that they have some sort of sequence in their names?
I know that in Windows all you do is select all the files and rename them to the same name, and Windows automatically adds sequence numbers to distinguish the files.
I want to be able to do that in Fedora Linux, but apparently you can only do it from the terminal. Please help, I am lost.
What is the command for doing this?
The best way to do this is to run a loop in the terminal, going from picture to picture and renaming each with a number that increases by one on every iteration.
You can do this with:
n=1
for i in *.jpg; do
    p=$(printf "%04d.jpg" ${n})
    mv ${i} ${p}
    let n=n+1
done
Just enter it into the terminal line by line.
If you want to put a custom name in front of the numbers, you can put it before the percent sign in the third line.
If you want to change the number of digits in the names' number, just replace the '4' in the third line (don't change the '0', though).
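For comparison, a hedged Perl equivalent of the same loop (a sketch that assumes the pictures are in the current directory):
#!/usr/bin/perl
use strict;
use warnings;

# Rename every .jpg in the current directory to 0001.jpg, 0002.jpg, ...
my $n = 1;
for my $file (sort glob '*.jpg') {
    my $new = sprintf '%04d.jpg', $n++;
    rename $file, $new or warn "Cannot rename $file: $!\n";
}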
I will assume that:
There are no spaces or other weird control characters in the file names
All of the files in a given directory are jpeg files
That in mind, to rename all of the files to 1.jpg, 2.jpg, and so on:
N=1
for a in ./* ; do
    mv $a ${N}.jpg
    N=$(( $N + 1 ))
done
If there are spaces in the file names:
find . -type f | awk 'BEGIN{N=1}
{print "mv \"" $0 "\" " N ".jpg"
N++}' | sh
Should be able to rename them.
The point being, Linux/UNIX does have a lot of tools that can automate a task like this, but they have a bit of a learning curve to them.
Create a script containing:
#!/bin/sh
filePrefix="$1"
sequence=1
for file in $(ls -tr *.jpg) ; do
    renamedFile="$filePrefix$sequence.jpg"
    echo $renamedFile
    currentFile="$(echo $file)"
    echo "renaming \"$currentFile\" to $renamedFile"
    mv "$currentFile" "$renamedFile"
    sequence=$(($sequence+1))
done
exit 0
If you named the script, say, RenameSequentially then you could issue the command:
./RenameSequentially Images-
This would rename all *.jpg files in the directory to Images-1.jpg, Images-2.jpg, etc., in order of oldest to newest... tested in the OS X command shell.
I wrote a perl script a long time ago to do pretty much what you want:
#
# reseq.pl renames files to a new named sequence of filenames
#
# Usage: reseq.pl newname [-n seq] [-p pad] fileglob
#
use strict;

my $newname = $ARGV[0];
my $seqstr  = "01";
my $seq     = 1;
my $pad     = 2;

shift @ARGV;
if ($ARGV[0] eq "-n") {
    $seqstr = $ARGV[1];
    $seq = int $seqstr;
    shift @ARGV;
    shift @ARGV;
}
if ($ARGV[0] eq "-p") {
    $pad = $ARGV[1];
    shift @ARGV;
    shift @ARGV;
}

my $filename;
my $suffix;

for (@ARGV) {
    $filename = sprintf("${newname}_%0${pad}d", $seq);
    if (($suffix) = m/.*\.(.*)/) {
        $filename = "$filename.$suffix";
    }
    print "$_ -> $filename\n";
    rename ($_, $filename);
    $seq++;
}
You specify a common prefix for the files, a beginning sequence number and a padding factor.
For example:
# reseq.pl abc -n 1 -p 2 *.jpg
Will rename all matching files to abc_01.jpg, abc_02.jpg, abc_03.jpg...

Perl to check the subdirectories and change the ownership

I am trying to write a Perl script which checks all the directories in the current directory and then descends into each subdirectory in turn, all the way down to the last directory. This is what I have written:
#!/usr/bin/perl -w
use strict;
my @files = <*>;
foreach my $file (@files) {
    if (-d $file) {
        my $cmd = qx |chown deep:deep $file|;
        my $chdir = qx |cd $file|;
        my @subfiles = <*>;
        foreach my $subfile (@subfiles) {
            if (-d $subfile) {
                my $cmd = qx |chown deep:deep $subfile|;
                my $chdir = qx |cd $subfile|;
                . # So on, into subdirectories
                .
                .
            }
        }
    }
}
Now, some of the directories I have contain around 50 subdirectories. How can I get through them all without writing 50 if conditions? Please suggest. Thank you.
Well, a CS101 way (if this is just an exercise) is to use a recursive function
sub dir_perms {
    my $path = shift;
    opendir(my $dh, $path) or die "Cannot open $path: $!";
    my @files = grep { !/^\.{1,2}$/ } readdir($dh);   # ignore . and ..
    closedir($dh);
    for (@files) {
        my $entry = "$path/$_";   # readdir returns bare names
        if ( -d $entry ) {
            dir_perms($entry);
        }
        else {
            system('chown', 'deep:deep', $entry);
        }
    }
}
dir_perms(".");
But I'd also look at File::Find for something more elegant and robust (this can get caught in a circular link trap, and errors out if you don't call it on a directory, etc.), and for that matter I'd look at plain old UNIX find(1), which can do exactly what you're trying to do with the -exec option, eg
/bin/bash$ find /path/to/wherever -type f -exec chown deep:deep {} \;
perldoc File::Find has examples for what you are doing. Eg,
use File::Find;
finddepth(\&wanted, #directories_to_search);
sub wanted { ... }
Further down the doc, it says you can use find2perl to create the wanted() subroutine.
find2perl / -name .nfs\* -mtime +7 \
-exec rm -f {} \; -o -fstype nfs -prune
NOTE: The OS usually won't let you change ownership of a file or directory unless you are the superuser (i.e. root).
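In Perl, one hedged way to fail fast on that is to check the effective user ID at the top of the script:
# $> is the effective user ID; 0 means root.
die "This script must run as root to chown files\n" if $> != 0;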
Now, we got that out of the way...
The File::Find module does what you want. Use use warnings; instead of -w:
use strict;
use warnings;
use feature qw(say);
use autodie;
use File::Find;
finddepth sub {
    return unless -d;   # You want only directories...
    # chown needs numeric IDs, so look them up first
    my $uid = getpwnam 'deep';
    my $gid = getgrnam 'deep';
    chown $uid, $gid, $File::Find::name
        or warn qq(Couldn't change ownership of "$File::Find::name"\n);
}, ".";
The File::Find package imports a find and a finddepth subroutine into your Perl program.
Both work pretty much the same. They both recurse deeply into your directory, and both take as their first argument a subroutine that's used to operate on the found files, followed by a list of directories to operate on.
The name of the file is placed in $_ and you are placed in the directory of that file. That makes it easy to run the standard tests on the file. Here, I'm rejecting anything that's not a directory. It's one of the few places where I'll use $_ as the default.
The full name of the file (relative to the directory you're searching) is placed in $File::Find::name, and the name of that file's directory is $File::Find::dir.
I prefer to put my subroutine embedded in my find, but you can also put a reference to another subroutine in there too. Both of these are more or less equivalent:
my @directories;
find sub {
    return unless -d;
    push @directories, $File::Find::name;
}, ".";

my @directories;
find \&wanted, ".";

sub wanted {
    return unless -d;
    push @directories, $File::Find::name;
}
In both of these, I'm gathering the names of all of the directories in my path and putting them in @directories. I like the first one because it keeps my wanted subroutine and my find together. Plus, the mysteriously undeclared @directories in my subroutine doesn't look so mysterious and undeclared: I declared my @directories; right above the find.
By the way, this is how I usually use find. I find what I want, and place them into an array. Otherwise, you're stuck putting all of your code into your wanted subroutine.

How to get Perl to loop over all files in a directory?

I have a Perl script with contains
open (FILE, '<', "$ARGV[0]") || die "Unable to open $ARGV[0]\n";
while (defined (my $line = <FILE>)) {
    # do stuff
}
close FILE;
and I would like to run this script on all .pp files in a directory, so I have written a wrapper script in Bash
#!/bin/bash
for f in /etc/puppet/nodes/*.pp; do
    /etc/puppet/nodes/brackets.pl $f
done
Question
Is it possible to avoid the wrapper script and have the Perl script do it instead?
Yes.
The for f in ...; translates to the Perl
for my $f (...) { ... } (in the case of lists) or
while (my $f = ...) { ... } (in the case of iterators).
The glob expression that you use (/etc/puppet/nodes/*.pp) can be evaluated inside Perl via the glob function: glob '/etc/puppet/nodes/*.pp'.
Together with some style improvements:
use strict; use warnings;
use autodie; # automatic error handling
while (defined(my $file = glob '/etc/puppet/nodes/*.pp')) {
    open my $fh, "<", $file; # lexical file handles, automatic error handling
    while (defined( my $line = <$fh> )) {
        # do stuff
    }
    close $fh;
}
Then:
$ /etc/puppet/nodes/brackets.pl
This isn’t quite what you asked, but another possibility is to use <>:
while (<>) {
    my $line = $_;
    # do stuff
}
Then you would put the filenames on the command line, like this:
/etc/puppet/nodes/brackets.pl /etc/puppet/nodes/*.pp
Perl opens and closes each file for you. (Inside the loop, the current filename and line number are $ARGV and $. respectively.)
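A small sketch showing those variables in use, assuming the same invocation with the .pp files on the command line (this is my illustration, not part of the answer):
#!/usr/bin/perl
use strict;
use warnings;

# Print each line prefixed with its source file and line number.
while (my $line = <>) {
    print "$ARGV:$.: $line";
} continue {
    close ARGV if eof;   # resets $. at each file boundary
}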
Jason Orendorff has the right answer:
From Perlop (I/O Operators)
The null filehandle <> is special: it can be used to emulate the behavior of sed and awk, and any other Unix filter program that takes a list of filenames, doing the same to each line of input from all of them. Input from <> comes either from standard input, or from each file listed on the command line.
This doesn't require opendir. It doesn't require using globs or hard coding stuff in your program. This is the natural way to read in all files that are found on the command line, or piped from STDIN into the program.
With this, you could do:
$ myprog.pl /etc/puppet/nodes/*.pp
or
$ myprog.pl /etc/puppet/nodes/*.pp.backup
or even:
$ cat /etc/puppet/nodes/*.pp | myprog.pl
Take a look at the documentation for opendir and readdir; it explains all you need to know:
#!/usr/bin/perl
use strict;
use warnings;
my $dir = '/tmp';

opendir(DIR, $dir) or die $!;

while (my $file = readdir(DIR)) {
    # We only want files
    next unless (-f "$dir/$file");

    # Use a regular expression to find files ending in .pp
    next unless ($file =~ m/\.pp$/);

    # readdir returns bare names, so include the directory in the path
    open (FILE, '<', "$dir/$file") || die "Unable to open $dir/$file\n";
    while (defined (my $line = <FILE>)) {
        # do stuff
    }
    close FILE;
}

closedir(DIR);
exit 0;
I would suggest putting all the filenames into an array and then using this array as the parameter list for your Perl method or script. Please see the following code:
use Data::Dumper;

my $dirname = "/etc/puppet/nodes";
opendir ( DIR, $dirname ) || die "Error in opening dir $dirname\n";
my @files = grep {/.*\.pp/} readdir(DIR);
print Dumper(@files);
closedir(DIR);
Now you can pass \@files as a parameter to any Perl method.
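A short sketch of such a consumer (process_files is a hypothetical name, not from the answer):
# Hypothetical consumer: receives the array reference and visits each name.
sub process_files {
    my ($files) = @_;
    for my $name (@$files) {
        print "would process $name\n";
    }
}
process_files(\@files);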
my @x = <*>;
foreach ( @x ) {
    chomp;
    if ( -f "$_" ) {
        print "process $_\n";
        # do stuff
        next;
    };
};
Perl can shell out to execute system commands in various ways; the most straightforward is using backticks (``):
use strict;
use warnings FATAL => 'all';
my @ls = `ls /etc/puppet/nodes/*.pp`;
chomp @ls;   # strip the trailing newline from each name
for my $f ( @ls ) {
    open (my $FILE, '<', $f) || die "Unable to open $f\n";
    while (defined (my $line = <$FILE>)) {
        # do stuff
    }
    close $FILE;
}
(Note: you should always use strict; and use warnings;)

How to find/cut for only the filename from an output of ls -lrt in Perl

I want the file name from the output of ls -lrt, but I am unable to find a file name. I used the command below, but it doesn't work.
$cmd=' -rw-r--r-- 1 admin u19530 3506 Aug 7 03:34 sla.20120807033424.log';
my $result=`cut -d, -f9 $cmd`;
print "The file name is $result\n";
The result is blank. I need the file name as sla.20120807033424.log
So far, I have tried the below code, and it works for the filename.
Code
#!/usr/bin/perl
my $dir = <dir path>;
opendir (my $DH, $dir) or die "Error opening $dir: $!";
my %files = map { $_ => (stat("$dir/$_"))[9] } grep(! /^\.\.?$/, readdir($DH));
closedir($DH);
my @sorted_files = sort { $files{$b} <=> $files{$a} } (keys %files);
print "the file is $sorted_files[0] \n";
use File::Find::Rule qw( );
use File::stat qw( stat );
use List::Util qw( reduce );
my ($oldest) =
    map $_ ? $_->[0] : undef,                              # 4. Get rid of stat data.
    reduce { $a->[1]->mtime < $b->[1]->mtime ? $a : $b }   # 3. Find the one with the oldest mtime.
    map [ $_, scalar(stat($_)) ],                          # 2. stat each file.
    File::Find::Rule                                       # 1. Find relevant files.
        ->maxdepth(1)                                      #    Don't recurse.
        ->file                                             #    Just plain files.
        ->in('.');                                         #    In whatever dir.
See File::Find::Rule, File::stat, and List::Util for the modules used here.
You're making it harder for yourself by using -l. This will do what you want
print((`ls -brt`)[0]);
But it is generally better to avoid shelling out unless Perl can't provide what you need, and this can be done easily
print "$_\n" for (sort { -M $a <=> -M $b } glob "*")[0];
If the name of the log file is under your control, i.e., free of spaces or other special characters, perhaps a quick & dirty job will do:
my $cmd = ' -rw-r--r-- 1 admin u19530 3506 Aug 7 03:34 sla.20120807033424.log more more';
my @items = split ' ', $cmd;
print "log filename is : @items[8..$#items]";
print "\n";
It's not possible to do this reliably with -lrt; if you were willing to choose other options, you could do it.
BTW you can still sort by reverse time with -rt even without the -l.
Also if you must use ls, you should probably use -b.
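Putting those options together from Perl, a hedged sketch (using backticks, as elsewhere in this thread):
# -t sorts newest first, -r reverses that to oldest first, -b escapes
# non-printable characters; without -l each line is just a filename.
my @names = `ls -brt`;
chomp @names;
print "newest file is $names[-1]\n";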
my $cmd = ' -rw-r--r-- 1 admin u19530 3506 Aug 7 03:34 sla.20120807033424.log';
$cmd =~ / (\S+) $/x or die "can't find filename in string";
my $filename = $1;
print $filename;
Disclaimer - this won't work if filename has spaces and probably under other circumstances. The OP will know the naming conventions of the files concerned. I agree there are more robust ways not using ls -lrt.
Maybe like this:
ls -lrt *.log | perl -lane 'print $F[-1]'
