How can I create a daily task to run a command which outputs the results into a file that gets emailed?
I would like to run the following command:
find ./ -type f -size 510c -name "*.php" -mtime -3
in the following location:
/var/www/vhosts/
I would like to add this a cron job so I get the contents emailed ONLY if the file is not empty.
What's the best way to accomplish this?
It may be possible to write a one-liner to do this, but if not, you can write a script. Below is a Perl script that should do the job:
use warnings;
use strict;
my($cmd, $r);
my($to, $from, $subject, $message);
$cmd="find /var/www/vhosts/ -type f -size 510c -name \"*.php\" -mtime -3";
$r=`$cmd`;
if(length($r)>0) {
$to = 'to@to.com';
$from = 'from@from.com';
$subject = 'This is the subject';
$message = $r;
open(MAIL, "|/usr/sbin/sendmail -t") or die "Cannot run sendmail: $!";
# Email Header
print MAIL "To: $to\n";
print MAIL "From: $from\n";
print MAIL "Subject: $subject\n\n";
# Email Body
print MAIL $message;
close(MAIL);
}
Just save this to a file (e.g. script.pl), then add a line to your crontab with the schedule you want, for example daily at 6 a.m.:
0 6 * * * perl /path/to/script.pl
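If you prefer the one-liner route, the "mail only when non-empty" guard can also live in a small shell script. This is a sketch: the addresses and the sendmail path are placeholders, and the demo scans a throwaway directory (with the mail step stubbed out as cat) so it is runnable as-is; point it at /var/www/vhosts/ and swap in sendmail for the real cron job.

```shell
# Scan a throwaway directory so the sketch is self-contained;
# in the real job, set SEARCH_DIR=/var/www/vhosts/ instead.
SEARCH_DIR=$(mktemp -d)
head -c 510 /dev/zero > "$SEARCH_DIR/suspect.php"   # 510-byte file, matches -size 510c

out=$(find "$SEARCH_DIR" -type f -size 510c -name "*.php" -mtime -3)
if [ -n "$out" ]; then
  # In the cron job, replace 'cat' with: /usr/sbin/sendmail -t
  printf 'To: you@example.com\nFrom: cron@example.com\nSubject: matching files\n\n%s\n' "$out" | cat
fi
rm -rf "$SEARCH_DIR"
```

The `[ -n "$out" ]` test is what keeps cron from sending an email when find matched nothing.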
Related
I'm trying to read data from a file into a Perl program and execute a command for each entry in a Linux sh terminal using a "foreach" loop, but every time Perl returns a value to the sh terminal it prints a newline before printing the next string, which causes my script to fail. How do I prevent that?
open(FILE, "input") or die("Unable to open file");
# read file into an array
@data = <FILE>;
# close file
close(FILE);
foreach my $x (@data) {
system "/granite_api.pl -type update_backdoor -project tgplp -test_id $x -turninid 4206";
}
Expected output:
/granite_api.pl -type update_backdoor -project tgplp -test_id example -turninid 4206
Actual output:
/granite_api.pl -type update_backdoor -project tgplp -test_id example
-turninid 4206
With
@data = <FILE>;
@data contains all the lines from the input file, and each line ends with a LF. You need to remove it from each $x, using chomp for instance (which removes the trailing character(s) set in $/):
foreach my $x (@data) {
chomp $x;
system "/granite_api.pl -type update_backdoor -project tgplp -test_id $x -turninid 4206";
}
See chomp in perldoc
For example:
I have a project directory.
In it are 3 subdirectories.
In each of those 3 subdirectories there is 1 text file.
Now I am using scandir() to find how many files and directories are present in that project, but scandir() only scans one level; it does not scan subdirectories. How do I scan them as well?
If you are using the command line, you can use find and wc.
To count all files recursively:
find . -type f | wc -l
To find directory count:
find . -type d | wc -l
where:
-type f matches files
-type d matches directories
wc prints newline, word, or byte counts; the -l option gives you the line count
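A quick self-contained check of the two commands on a scratch tree (note that find . -type d also counts the starting directory itself):

```shell
# Build a scratch tree: 3 subdirectories with one text file each.
tmp=$(mktemp -d)
mkdir "$tmp/a" "$tmp/b" "$tmp/c"
touch "$tmp/a/1.txt" "$tmp/b/2.txt" "$tmp/c/3.txt"

files=$(find "$tmp" -type f | wc -l | tr -d ' ')   # 3 files, found recursively
dirs=$(find "$tmp" -type d | wc -l | tr -d ' ')    # 4: the top directory plus a, b, c
echo "$files $dirs"
rm -rf "$tmp"
```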
If you are referring to using scandir in PHP, you can try something like this:
<?php
function dirToArray($dir) {
$result = array();
$cdir = scandir($dir);
foreach ($cdir as $key => $value)
{
if (!in_array($value,array(".","..")))
{
if (is_dir($dir . DIRECTORY_SEPARATOR . $value))
{
$result[$value] = dirToArray($dir . DIRECTORY_SEPARATOR . $value);
}
else
{
$result[] = $value;
}
}
}
return $result;
}
?>
Source: comment #88 http://php.net/manual/en/function.scandir.php
I have a bunch of files in a directory with no pattern in their names at all. All I know is that they are all JPG files. How do I rename them so that they have some sort of sequence in their names?
I know that in Windows all you do is select all the files and rename them all to the same name, and Windows automatically adds sequence numbers to compensate for the duplicate file names.
I want to be able to do that in Linux Fedora, but it seems you can only do that in the terminal. Please help, I am lost.
What is the command for doing this?
The best way to do this is to run a loop in the terminal that goes from picture to picture and renames each one with a number that increases by one on every iteration.
You can do this with:
n=1
for i in *.jpg; do
p=$(printf "%04d.jpg" ${n})
mv "${i}" "${p}"
let n=n+1
done
Just enter it into the terminal line by line.
If you want to put a custom name in front of the numbers, you can put it before the percent sign in the third line.
If you want to change the number of digits in the names' number, just replace the '4' in the third line (don't change the '0', though).
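Here is a dry run of the same loop on scratch files, with a hypothetical vacation_ prefix placed before the format string:

```shell
tmp=$(mktemp -d)
touch "$tmp/photo_a.jpg" "$tmp/photo_b.jpg" "$tmp/photo_c.jpg"

n=1
for i in "$tmp"/*.jpg; do
  p=$(printf '%s/vacation_%04d.jpg' "$tmp" "$n")   # "vacation_" is the custom prefix
  mv "$i" "$p"
  n=$((n + 1))
done

result=$(ls "$tmp")   # vacation_0001.jpg vacation_0002.jpg vacation_0003.jpg
echo "$result"
rm -rf "$tmp"
```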
I will assume that:
There are no spaces or other weird control characters in the file names
All of the files in a given directory are jpeg files
That in mind, to rename all of the files to 1.jpg, 2.jpg, and so on:
N=1
for a in ./* ; do
mv $a ${N}.jpg
N=$(( $N + 1 ))
done
If there are spaces in the file names:
find . -type f | awk 'BEGIN{N=1}
{print "mv \"" $0 "\" " N ".jpg"
N++}' | sh
Should be able to rename them.
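A self-contained run of that pipeline, showing it copes with spaces in file names (a sort is added here only to make the numbering deterministic):

```shell
tmp=$(mktemp -d)
touch "$tmp/img one.jpg" "$tmp/img two.jpg"

# awk emits one quoted mv command per file; sh executes them.
( cd "$tmp" &&
  find . -type f | sort | awk 'BEGIN{N=1}
    {print "mv \"" $0 "\" " N ".jpg"; N++}' | sh )

renamed=$(ls "$tmp")   # 1.jpg 2.jpg
echo "$renamed"
rm -rf "$tmp"
```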
The point being, Linux/UNIX does have a lot of tools which can automate a task like this, but they have a bit of a learning curve to them
Create a script containing:
#!/bin/sh
filePrefix="$1"
sequence=1
for file in $(ls -tr *.jpg) ; do
renamedFile="$filePrefix$sequence.jpg"
echo $renamedFile
currentFile="$(echo $file)"
echo "renaming \"$currentFile\" to $renamedFile"
mv "$currentFile" "$renamedFile"
sequence=$(($sequence+1))
done
exit 0
If you named the script, say, RenameSequentially then you could issue the command:
./RenameSequentially Images-
This would rename all *.jpg files in the directory to Images-1.jpg, Images-2.jpg, etc... in order of oldest to newest... tested in the OS X command shell.
I wrote a perl script a long time ago to do pretty much what you want:
#
# reseq.pl renames files to a new named sequence of filenames
#
# Usage: reseq.pl newname [-n seq] [-p pad] fileglob
#
use strict;
my $newname = $ARGV[0];
my $seqstr = "01";
my $seq = 1;
my $pad = 2;
shift @ARGV;
if ($ARGV[0] eq "-n") {
$seqstr = $ARGV[1];
$seq = int $seqstr;
shift @ARGV;
shift @ARGV;
}
if ($ARGV[0] eq "-p") {
$pad = $ARGV[1];
shift @ARGV;
shift @ARGV;
}
my $filename;
my $suffix;
for (@ARGV) {
$filename = sprintf("${newname}_%0${pad}d", $seq);
if (($suffix) = m/.*\.(.*)/) {
$filename = "$filename.$suffix";
}
print "$_ -> $filename\n";
rename ($_, $filename);
$seq++;
}
You specify a common prefix for the files, a beginning sequence number and a padding factor.
For example:
# reseq.pl abc -n 1 -p 2 *.jpg
Will rename all matching files to abc_01.jpg, abc_02.jpg, abc_03.jpg...
I have a file called search.txt containing multiple search patterns
example "search.txt" (Over 300 entries in total):
A28
A32
A3C
A46
A50
A5A
898
8A2
8AC
8B6
8C0
Example files from folder I want to search (Over 5000 in total):
1_0_1_4AB_3_56_300000_0_0_0.png
1_0_1_5A0_20_56_300000_0_0_0.png
1_0_1_A28_22_56_300000_0_0_0.png
1_0_1_A32_22_56_300000_0_0_0.png
1_0_1_A96_23_56_300000_0_0_0.png
1_0_1_898_21_56_300000_0_0_0.png
I need to check the fourth string of all the .png files against all entries in search.txt (the strings are divided by "_").
I used a perl script similar to this before:
match4th.pl
#!/usr/bin/perl -w
use strict;
my $pat = qr/$ARGV[0]/;
while (<STDIN>) {
my (undef, undef, undef, $fourth) = split /_/;
print if defined($fourth) && $fourth =~ $pat;
}
Then I would use something like this to execute the script and move matching files to a new location:
cd /png_folder
find . -name '*.png' | perl match4th.pl '/tmp/search.txt' | xargs mv -t /tmp/results
The part I am unsure about is how to tell the find command to use all entries in /tmp/search.txt rather than writing each pattern into the find command.
I would also prefer to copy the files rather than move them.
You could just use the search.txt file as a list of patterns with grep directly:
find . -name '*.png' | grep -f search.txt | xargs ...
Or if you want to make the patterns more strict, you could do something like this:
find . -name '*.png' | grep -f <(sed s/^/[0-9]_[0-9]_[0-9]_/ search.txt)
Or even more strict with:
find . -name '*.png' | grep -f <(sed s?^?/[0-9]_[0-9]_[0-9]_? search.txt)
And even more strict with:
find . -name '*.png' | grep -f <(sed 's?.*?/[0-9]_[0-9]_[0-9]_&_?' search.txt)
In this last one, an entire line in search.txt is matched (.*), and in the replacement we prefix with the pattern /[0-9]_[0-9]_[0-9]_, followed by the matched string (&), followed by a _. So for example if you have the letter A as a pattern in search.txt, this will generate a pattern for that line as /[0-9]_[0-9]_[0-9]_A_, which will correctly match your file with _A_ there.
If the output looks good, you can pipe it to xargs to copy the matched files like this:
... | xargs -I{} cp {} /path/to/dir
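Putting the strict variant and the copy step together on scratch data (a temporary patterns file replaces the <(...) process substitution so this also runs under plain sh):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/results"
printf 'A28\n898\n' > "$tmp/search.txt"
touch "$tmp/1_0_1_4AB_3_56_300000_0_0_0.png" \
      "$tmp/1_0_1_A28_22_56_300000_0_0_0.png" \
      "$tmp/1_0_1_898_21_56_300000_0_0_0.png"

# Anchor each search entry as the 4th "_"-separated field, then copy the hits.
( cd "$tmp" &&
  sed 's?.*?/[0-9]_[0-9]_[0-9]_&_?' search.txt > patterns.txt &&
  find . -name '*.png' | grep -f patterns.txt | xargs -I{} cp {} results/ )

copied=$(ls "$tmp/results")   # only the A28 and 898 files
echo "$copied"
rm -rf "$tmp"
```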
The most efficient solution should be:
use strict;
use warnings;
use File::Basename;
use File::Find;    # with no_chdir set we get full path names
use File::Copy;    # copy and move work like the shell's cp and mv
my ( $fn, $dir, $target ) = @ARGV;    # script arguments
# check parameters
( stat($dir) && -d _ ) or die "Not a dir $dir";
( stat($target) && -d _ ) or die "Not a dir $target";
# construct regexp for matching files
# use quotemeta to sanitize data read from $fn file
my $re = join '|', map quotemeta, do {
# open file
open( my $fh, '<', $fn ) or die "$fn: $!";
my #p = <$fh>; # read all patterns
close($fh);
chomp #p; # remove end of line from patterns
#p; # return of do statement
};
$re = qr/$re/; # precompile regexp
# Perl builds a trie for up to ten thousand alternatives, so matching should be O(1)
sub wanted {
my $fourth;
lstat($_) # initialize special _ term
&& (
-d _ # is directory? Return true so step in depth
|| -f _ # otherwise if is file
&& /\.png$/ # does the filename in $_ end in .png?
# split by '_' to five pieces max and get fourth part (index 3)
&& defined( $fourth = ( split '_', basename($_), 5 )[3] ) # check if defined
&& $fourth =~ /^$re$/ # match regexp
&& do { move( $_, $target ) or die "$_: $!" } # then move using File::Copy::move
); # change move to copy if you want copy file instead
}
# do not change directory so $target can be relative and move will still work well
find( { wanted => \&wanted, no_chdir => 1 }, $dir );
Usage
perl find_and_move.pl /tmp/search.txt . /tmp/results
You use my $pat = qr/$ARGV[0]/;, but $ARGV[0] is /tmp/search.txt. You need to actually read the file.
#!/usr/bin/perl -w
use strict;
my $re = do {
my $qfn = shift(#ARGV);
open(my $fh, '<', $qfn) or die $!;
chomp( my #pats = <$fh> );
my $pat = join '|', map quotemeta, #pats;
qr/^$pat\z/
};
while (<>) {
my $tag = (split /_/)[3];
next if !defined($tag);
print if /$re/;
}
I have the following command that I run on cygwin:
find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d |
xargs du --max-depth=0 > foldersizesreport.csv
I intended to do the following with this command:
for each folder under /d/tmp/ that was modified in last 150 days, check its total size including files within it and report it to file foldersizesreport.csv
However, that is no longer good enough for me, as it turns out there is a properties file inside each subfolder:
/d/tmp/subfolder1/somefile.properties
/d/tmp/subfolder2/somefile.properties
/d/tmp/subfolder3/somefile.properties
/d/tmp/subfolder4/somefile.properties
So, as you can see, inside each subfolderX there is a file named somefile.properties, and inside it there is a property SOMEPROPKEY=3808612800100 (among other properties). This is a time in milliseconds. I need to change the command so that, instead of -mtime -150, the calculation only includes a subfolderX whose somefile.properties contains a SOMEPROPKEY whose time in milliseconds is in the future. If the value (e.g. SOMEPROPKEY=23948948) is in the past, don't include the folder in foldersizesreport.csv at all, because it is not relevant to me.
so the result report should be looking like:
/d/tmp/,subfolder1,<its size in KB>
/d/tmp/,subfolder2,<its size in KB>
and if subfolder3 had SOMEPROPKEY=34243234 (a time in ms in the past), then it would not be in that csv file.
so basically I'm looking for:
find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d |
<only subfolders whose somefile.properties has a SOMEPROPKEY
time in ms in the future, not the past> |
xargs du --max-depth=0 > foldersizesreport.csv
Here's a perl version for the whole thing:
filter.pl
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;
# -------------------------- configuration ----------------------------
my %CFG = (
'propertiesFile' => 'somefile.properties',
'propertyKey' => 'SOMEPROPKEY',
'duCommand' => 'du -Bk -s'
);
# ---------------------------------------------------------------------
while (my $dir = <>) {
chomp $dir;
open(my $F, File::Spec->catfile($dir, $CFG{"propertiesFile"})) || next;
my ($match) = grep /$CFG{"propertyKey"}=\d+/, <$F>;
close $F;
next unless defined $match;    # no such property in this folder
if ($match =~ m/$CFG{"propertyKey"}=(\d+)/) {
my ($volume, $directories, $file) = File::Spec->splitpath($dir);
my $command = "$CFG{'duCommand'} $dir";
# on Windows you might need $volume; this assumes a Unix-like filesystem
# the property is in milliseconds, while time() is in seconds
print $directories . "," . $file . "," .
`$command | cut -f1` if $1 > time() * 1000;
}
}
exit;
usage
find /home/regis/stackoverflow/2937940/* -maxdepth 0 -mtime -150 -type d | ./filter.pl
output (with your sample)
/home/regis/stackoverflow/2937940/,subfolder1,16K
/home/regis/stackoverflow/2937940/,subfolder2,16K
/home/regis/stackoverflow/2937940/,subfolder4,16K
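The same filter can also be sketched in plain shell, under the assumption that SOMEPROPKEY holds milliseconds since the epoch; this demo fabricates two subfolders (one future timestamp, one past) so the keep/drop behavior is visible:

```shell
now_ms=$(( $(date +%s) * 1000 ))

# Fabricated sample data: one folder with a future timestamp, one in the past.
tmp=$(mktemp -d)
mkdir "$tmp/subfolder1" "$tmp/subfolder2"
echo "SOMEPROPKEY=$(( now_ms + 86400000 ))" > "$tmp/subfolder1/somefile.properties"  # one day ahead
echo "SOMEPROPKEY=23948948" > "$tmp/subfolder2/somefile.properties"                  # long past

report=$(
  for d in "$tmp"/*/; do
    val=$(sed -n 's/^SOMEPROPKEY=//p' "$d"somefile.properties 2>/dev/null)
    if [ -n "$val" ] && [ "$val" -gt "$now_ms" ]; then
      du -sk "$d"    # keep only folders whose timestamp is in the future
    fi
  done
)
echo "$report"
rm -rf "$tmp"
```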