Perl script for finding unowned files not finding anything - linux

I've written a script which is designed to find all files not owned by either an existing user or group. However, despite having created a test user and then removing it leaving behind its /home directory, the script is not finding it. Clearly I have an error in the script's logic. I just can't find it.
#!/usr/bin/perl
# Directives which establish our execution environment
use warnings;
use strict;
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';
# Variables used throughout the script
my $OUTDIR = "/var/log/tivoli/";
my $MTAB = "/etc/mtab";
my $PERMFILE = "orphan_files.txt";
my $TMPFILE = "orphan_files.tmp";
my $ROOT = "/";
my(@devNum, @uidNums, @gidNums);
# Create an array of the file stats for "/"
my @rootStats = stat("${ROOT}");
# Compile a list of mountpoints that need to be scanned
my @mounts;
open MT, "<${MTAB}" or die "Cannot open ${MTAB}, $!";
# We only want the local HDD mountpoints
while (<MT>) {
if ($_ =~ /ext[34]/) {
my @line = split;
push(@mounts, $line[1]);
}
}
close MT;
# Build an array of each mountpoint's device number for future comparison
foreach (@mounts) {
my @stats = stat($_);
push(@devNum, $stats[0]);
print $_ . ": " . $stats[0] . "\n";
}
# Build an array of the existing UIDs on the system
while((my($name, $passwd, $uid, $gid, $quota, $comment, $gcos, $dir, $shell)) = getpwent()) {
push(@uidNums, $uid);
}
# Build an array of existing GIDs on the system
while((my($name, $passwd, $gid, $members)) = getgrent()){
push(@gidNums, $gid);
}
# Create a regex to compare file device numbers to.
my $devRegex = do {
chomp @devNum;
local $" = '|';
qr/@devNum/;
};
# Create a regex to compare file UIDs to.
my $uidRegex = do {
chomp @uidNums;
local $" = '|';
qr/@uidNums/;
};
# Create a regex to compare file GIDs to.
my $gidRegex = do {
chomp @gidNums;
local $" = '|';
qr/@gidNums/;
};
print $gidRegex . "\n";
# Create the output file path if it doesn't already exist.
mkdir "${OUTDIR}" or die "Cannot execute mkdir on ${OUTDIR}, $!" unless (-d "${OUTDIR}");
# Create our filehandle for writing our findings
open ORPHFILE, ">${OUTDIR}${TMPFILE}" or die "Cannot open ${OUTDIR}${TMPFILE}, $!";
foreach (@mounts) {
# The anonymous subroutine which is executed by File::Find
find sub {
my @fileStats = stat($File::Find::name);
# Is it in a basic directory, ...
return if $File::Find::dir =~ /sys|proc|dev/;
# ...an actual file vs. a link, directory, pipe, etc, ...
return unless -f;
# ...local, ...
return unless $fileStats[0] =~ $devRegex;
# ...and unowned? If so write it to the output file
if (($fileStats[4] !~ $uidRegex) || ($fileStats[5] !~ $gidRegex)) {
print $File::Find::name . " UID: " . $fileStats[4] . "\n";
print $File::Find::name . " GID: " . $fileStats[5] . "\n";
print ORPHFILE "$File::Find::name\n";
}
}, $_;
}
close ORPHFILE;
# If no unowned files have been found ${TMPFILE} should be zero-size;
# Delete it so Tivoli won't alert
if (-z "${OUTDIR}${TMPFILE}") {
unlink "${OUTDIR}${TMPFILE}";
} else {
rename("${OUTDIR}${TMPFILE}","${OUTDIR}${PERMFILE}") or die "Cannot rename file ${OUTDIR}${TMPFILE}, $!";
}
The test user's home directory showing ownership (or lack thereof):
drwx------ 2 20000 20000 4096 Apr 9 19:59 test
The regex for comparing a file's GID to those existing on the system:
(?-xism:0|1|2|3|4|5|6|7|8|9|10|12|14|15|20|30|39|40|50|54|63|99|100|81|22|35|19|69|32|173|11|33|18|48|68|38|499|76|90|89|156|157|158|67|77|74|177|72|21|501|502|10000|10001|10002|10004|10005|10006|5001|5002|5005|5003|10007|10008|10009|10012|10514|47|51|6000|88|5998)
What am I missing with my logic?

I really recommend using find2perl for anything that involves locating files by attribute. Although not as pretty as File::Find or File::Find::Rule, it does the work for you.
mori@liberty ~ $ find2perl -nouser
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if 0; #$running_under_some_shell
use strict;
use File::Find ();
# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.
# for the convenience of &wanted calls, including -eval statements:
use vars qw/*name *dir *prune/;
*name = *File::Find::name;
*dir = *File::Find::dir;
*prune = *File::Find::prune;
sub wanted;
my (%uid, %user);
while (my ($name, $pw, $uid) = getpwent) {
$uid{$name} = $uid{$uid} = $uid;
}
# Traverse desired filesystems
File::Find::find({wanted => \&wanted}, '.');
exit;
sub wanted {
my ($dev,$ino,$mode,$nlink,$uid,$gid);
(($dev,$ino,$mode,$nlink,$uid,$gid) = lstat($_)) &&
!exists $uid{$uid}
&& print("$name\n");
}
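The important difference from the question's approach is the %uid hash: existence is checked with an exact hash lookup rather than a regex. A standalone sketch of why that matters (the UID list here is made up for illustration):

```perl
#!/usr/bin/perl
# Sketch: a set-style hash gives an exact membership test, while an
# unanchored regex alternation matches mere substrings.
use strict;
use warnings;

my @uids = (0, 1, 100, 1000);         # pretend these are the system's UIDs
my %uid  = map { $_ => 1 } @uids;     # hash for O(1) exact lookups

my $file_uid = 20000;                 # owner of the orphaned file
print "unowned\n" unless exists $uid{$file_uid};

# The regex built the way the question does it matches anyway, because
# '20000' contains the substring '0':
my $regex = do { local $" = '|'; qr/@uids/ };
print "regex wrongly matches\n" if $file_uid =~ $regex;
```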

'20000' =~ /(0|1|2|...)/
matches. You probably want to anchor the expression:
/^(0|1|2|...)$/
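A quick standalone demonstration of the difference:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Unanchored: '20000' contains the substring '0', so this matches.
print "unanchored matches\n" if '20000' =~ /(0|1|2)/;

# Anchored: the whole string must equal one of the alternatives,
# so this does not match.
print "anchored matches\n" if '20000' =~ /^(0|1|2)$/;
```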
(The other answer is better, just adding this for completeness.)

Related

Perl script to search a word inside the directory

I'm looking for a Perl script to grep for a string in all files inside a directory.
The equivalent Bash command:
Code:
grep -r 'word' /path/to/dir
This is a fairly canonical task, yet I couldn't find a straight answer using what is possibly the easiest and simplest tool for the job, the handy Path::Tiny:
use warnings;
use strict;
use feature 'say';
use Data::Dump; # dd
use Path::Tiny; # path
my $dir = shift // '.';
my $pattern = qr/word/;
my $ret = path($dir)->visit(
sub {
my ($entry, $state) = @_;
return if not -f;
for ($entry->lines) {
if (/$pattern/) {
print "$entry: $_";
push @{$state->{$entry}}, $_;
}
}
},
{ recurse => 1 }
);
dd $ret; # print the returned complex data structure
The way a file is read here, using lines, is just one way to do that. It may not be suitable for extremely large files since it reads all lines at once; in that case it is better to read line by line.
The visit method is built on the iterator method, which accomplishes this task cleanly as well:
my $iter = path($dir)->iterator({ recurse => 1 });
my $info;
while (my $e = $iter->()) {
next if not -f $e;
# process the file $e as needed
#/$pattern/ and push @{$info->{$e}}, $_ and print "$e: $_"
# for $e->lines
}
Here we have to provide a data structure to accumulate information but we get more flexibility.
The -f filetest used above, for a "plain" file, is still somewhat permissive; it allows swap files, for example, which some editors keep during a session (vim, for instance). Those will result in all kinds of matches. To stay with purely ASCII or UTF-8 files, use the -T test.
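A small sketch of the difference, using throwaway temp files for illustration (null-heavy content is what makes -T report "binary"):

```perl
#!/usr/bin/perl
# Sketch: -T is a heuristic "looks like text" test; -f only checks for a
# plain file. A binary file passes -f but fails -T.
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($tfh, $text) = tempfile(UNLINK => 1);
print $tfh "plain words\n";
close $tfh;

my ($bfh, $bin) = tempfile(UNLINK => 1);
binmode $bfh;
print $bfh "\x00\xff" x 64;    # null bytes make the heuristic say "binary"
close $bfh;

printf "%s: -f=%d -T=%d\n", $_, (-f $_) ? 1 : 0, (-T $_) ? 1 : 0
    for ($text, $bin);
```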
Otherwise, there are libraries for recursive traversal and searching, for example File::Find (or File::Find::Rule) or Path::Iterator::Rule.
For completeness, here is a take with the core File::Find
use warnings;
use strict;
use feature 'say';
use File::Find;
my @dirs = @ARGV ? @ARGV : '.';
my $pattern = qr/word/;
my %res;
find( sub {
return if not -T; # ASCII or UTF-8 only
open my $fh, '<', $_ or do {
warn "Error opening $File::Find::name: $!";
return;
};
while (<$fh>) {
if (/$pattern/) {
chomp;
push @{$res{$File::Find::name}}, $_
}
}
}, @dirs
);
for my $k (keys %res) {
say "In file $k:";
say "\t$_" for @{$res{$k}};
}

finding a file in directory using perl script

I'm trying to develop a perl script that looks through all of the user's directories for a particular file name without the user having to specify the entire pathname to the file.
For example, let's say the file of interest was data.list. It's located in /home/path/directory/project/userabc/data.list. At the command line, normally the user would have to specify the pathname to the file like in order to access it, like so:
cd /home/path/directory/project/userabc/data.list
Instead, I want the user to just enter script.pl ABC at the command line; the Perl script will then run automatically and retrieve the information in data.list, which in my case means counting the number of lines and uploading the result using curl. The rest is done; I just need the part where it can automatically locate the file.
Even though very feasible in Perl, this looks more appropriate in Bash:
#!/bin/bash
filename=$(find ~ -name "$1" )
wc -l "$filename"
curl .......
The main issue would of course be if you have multiple files named data1, say for example /home/user/dir1/data1 and /home/user/dir2/data1. You will need a way to handle that, and how you handle it will depend on your specific situation.
In Perl that would be much more complicated:
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if 0; #$running_under_some_shell
use strict;
# Import the module File::Find, which will do all the real work
use File::Find ();
# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.
# for the convenience of &wanted calls, including -eval statements:
# Here, we "import" specific variables from the File::Find module
# The purpose is to be able to just type '$name' instead of the
# complete '$File::Find::name'.
use vars qw/*name *dir *prune/;
*name = *File::Find::name;
*dir = *File::Find::dir;
*prune = *File::Find::prune;
# We declare the sub here; the content of the sub will be created later.
sub wanted;
# This is a simple way to get the first argument. There is no
# checking on validity.
our $filename=$ARGV[0];
# Traverse desired filesystem. /home is the top-directory where we
# start our search. The sub wanted will be executed for every file
# we find
File::Find::find({wanted => \&wanted}, '/home');
exit;
sub wanted {
# Check if the file is our desired filename
if ( /^$filename\z/) {
# Open the file, read it and count its lines
my $lines=0;
open(my $F,'<',$name) or die "Cannot open $name";
while (<$F>){ $lines++; }
print("$name: $lines\n");
# Your curl command here
}
}
You will need to look at the argument parsing, for which I simply used $ARGV[0], and I don't know what your curl command looks like.
A simpler (though not recommended) way would be to abuse Perl as a sort of shell:
#!/usr/bin/perl
#
my $fn=`find /home -name '$ARGV[0]'`;
chomp $fn;
my $wc=`wc -l '$fn'`;
print "$wc\n";
system ("your curl command");
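As for the multiple-matches caveat mentioned above, one way to handle it is to collect every candidate first and only proceed when the answer is unambiguous. A standalone sketch (it builds a throwaway directory tree purely for illustration):

```perl
#!/usr/bin/perl
# Sketch: gather all files named data1 under a tree, then decide
# explicitly what to do when more than one is found.
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path);
use File::Temp qw(tempdir);

my $top = tempdir(CLEANUP => 1);
make_path("$top/dir1", "$top/dir2");
for my $d ('dir1', 'dir2') {
    open my $fh, '>', "$top/$d/data1" or die $!;
    print $fh "line\n";
    close $fh;
}

my @matches;
find(sub { push @matches, $File::Find::name if $_ eq 'data1' }, $top);

if (@matches > 1) {
    warn scalar(@matches), " candidates found; refusing to guess:\n";
    warn "  $_\n" for @matches;
} elsif (@matches) {
    print "using $matches[0]\n";
}
```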
The following code snippet demonstrates one of many ways to achieve the desired result.
The code takes one parameter, a word to look for inside file(s) named data.list in all subdirectories, and prints a list of the found files in the terminal.
The code utilizes the subroutine lookup($dir,$filename,$search), which calls itself recursively once it comes across a subdirectory.
The search starts from the current working directory (the question did not specify a start directory).
use strict;
use warnings;
use feature 'say';
my $search = shift || die "Specify what look for";
my $fname = 'data.list';
my $found = lookup('.',$fname,$search);
if( @$found ) {
say for @$found;
} else {
say 'Not found';
}
exit 0;
sub lookup {
my $dir = shift;
my $fname = shift;
my $search = shift;
my $files;
my @items = glob("$dir/*");
for my $item (@items) {
if( -f $item && $item =~ /\b$fname\b/ ) {
my $found;
open my $fh, '<', $item or die $!;
while( my $line = <$fh> ) {
$found = 1 if $line =~ /\b$search\b/;
if( $found ) {
push @{$files}, $item;
last;
}
}
close $fh;
}
if( -d $item ) {
my $ret = lookup($item,$fname,$search);
push @{$files}, $_ for @$ret;
}
}
return $files;
}
Run as script.pl search_word
Output sample
./capacitor/data.list
./examples/data.list
./examples/test/data.list
Reference:
glob,
Perl file test operators

Comparing files from tape and disk using MD5 with perl archive::tar fails

We want to create a report with MD5 checks between a tar archive on tape and the files on disk. I created a script that should do this; it works correctly when using a tar file, but it fails when using a tar on tape. The tar was written to tape with GNU tar.
use strict;
use warnings;
use Archive::Tar;
use Digest::MD5 qw(md5 md5_hex md5_base64);
my $tarfile = '/dev/rmt/1';
my $iter = Archive::Tar->iter( $tarfile, 1, {md5 => 1} );
print "------------ TAR MD5 ----------- ----------- FILE MD5 ----------- ----- File -----\n";
while( my $f = $iter->() ) {
if ($f->is_file != 0) {
my $tarMd5 = md5_hex( $f->get_content);
my $filename = $f->full_path;
my $fileMd5 = '';
if (-e $filename) {
open(HANDLE, "<", $filename);
$fileMd5 = md5_hex(<HANDLE>);
} else {
$fileMd5 = "!!!!!!! FILE IS MISSING !!!!!!!!";
}
if ($tarMd5 eq $fileMd5) {
print "$tarMd5 <--> $fileMd5 --> $filename\n";
} else {
print "$tarMd5 ><>< $fileMd5 --> $filename\n";
}
}
}
As said, it works correctly when using a file-based tar, but when using a tar on tape we get the error:
Use of uninitialized value in subroutine entry at check_archive.pl line 12.
Can't use string ("") as a subroutine ref while "strict refs" in use at check_archive.pl line 12.
my $f is not defined.
if ($f && $f->is_file != 0) {## NOT AN IMPORTANT WARNING...
...
if (-e $filename) { ## CHECK IF FILE EXISTS
local $/; ## <= SLURP THE WHOLE FILE SO THE DIGEST COVERS EVERY BYTE
open(HANDLE, "<", $filename); ## CREATE A FILE HANDLE TO READ A FILE
$fileMd5 = md5_hex(<HANDLE>); ## SEND HANDLE TO THE MD5 FUNCTION
}
...
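Independent of the tape issue, hashing the on-disk file is more robust with binmode and Digest::MD5's addfile, which streams the file instead of depending on $/ and line endings. A minimal sketch (the temp file stands in for $filename):

```perl
#!/usr/bin/perl
# Sketch: stream a file's raw bytes through Digest::MD5 rather than
# reading it with <HANDLE> and hashing the lines.
use strict;
use warnings;
use Digest::MD5;
use File::Temp qw(tempfile);

# Demo file (illustration only)
my ($fh, $filename) = tempfile(UNLINK => 1);
print $fh "abc";
close $fh;

open my $in, '<', $filename or die "Cannot open $filename: $!";
binmode $in;                      # hash raw bytes, not translated text
my $md5 = Digest::MD5->new->addfile($in)->hexdigest;
close $in;
print "$md5\n";                   # md5 of "abc"
```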

Search filesystem via perl script while ignoring remote mounts

I've written a perl script that is designed to search a server for world writable files. After some testing, though, I've found that I made a mistake in the logic. Specifically, I've told it to not search /. My initial thought behind this was that I was looking for locally mounted volumes while avoiding those of a remote variety (CIFS, NFS, what-have-you).
What I failed to take into consideration is that not every directory has a unique volume. As a result, by excluding / in my scan, I've missed several directories that should be included. Now I need to rework the script to include those while still excluding remote volumes.
#!/usr/bin/perl
# Directives which establish our execution environment
use warnings;
use strict;
use Fcntl ':mode';
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';
# Variables used throughout the script
my $DIR = "/var/log/tivoli/";
my $MTAB = "/etc/mtab";
my $PERMFILE = "world_writable_w_files.txt";
my $TMPFILE = "world_writable_files.tmp";
my $EXCLUDE = "/usr/local/etc/world_writable_excludes.txt";
# Compile a list of mountpoints that need to be scanned
my @mounts;
# Create the filehandle for the /etc/mtab file
open MT, "<${MTAB}" or die "Cannot open ${MTAB}, $!";
# We only want the local mountpoints that are not "/"
while (<MT>) {
if ($_ =~ /ext[34]/) {
my @line = split;
push(@mounts, $line[1]) unless ($_ =~ /root/);
}
}
close MT;
# Read in the list of excluded files
my $regex = do {
open EXCLD, "<${EXCLUDE}" or die "Cannot open ${EXCLUDE}, $!\n";
my @ignore = <EXCLD>;
chomp @ignore;
local $" = '|';
qr/@ignore/;
};
# Create the output file path if it doesn't already exist.
mkdir "${DIR}" or die "Cannot execute mkdir on ${DIR}, $!" unless (-d "${DIR}");
# Create the filehandle for writing the findings
open WWFILE, ">${DIR}${TMPFILE}" or die "Cannot open ${DIR}${TMPFILE}, $!";
foreach (@mounts) {
# The anonymous subroutine which is executed by File::Find
find sub {
return unless -f; # Is it a regular file...
# ...and world writable.
return unless (((stat)[2] & S_IWUSR) && ((stat)[2] & S_IWGRP) && ((stat)[2] & S_IWOTH));
# Add the file to the list of found world writable files unless it is
# in the list of exclusions
print WWFILE "$File::Find::name\n" unless ($File::Find::name =~ $regex);
}, $_;
}
close WWFILE;
# If no world-writable files have been found ${TMPFILE} should be zero-size;
# Delete it so Tivoli won't alert
if (-z "${DIR}${TMPFILE}") {
unlink "${DIR}${TMPFILE}";
} else {
rename("${DIR}${TMPFILE}","${DIR}${PERMFILE}") or die "Cannot rename file ${DIR}${TMPFILE}, $!";
}
I'm at a bit of a loss as to how to approach this now. I know I can obtain the necessary information using stat -f -c %T but I don't see a similar option for perl's built-in stat (unless I'm misinterpreting the descriptions for output fields; perhaps it is found in one of the S_ variables?).
I'm just looking for a push in the right direction. I'd really rather not drop to a shell command to obtain this information.
EDIT: I've found this answer to a similar question, but it seems to be not entirely helpful. When I test the built-in stat against a CIFS mount I get 18. Perhaps what I need is a comprehensive list of values that could be returned for remote files to compare against?
EDIT2: This is the script in its new form which meets the requirements:
#!/usr/bin/perl
# Directives which establish our execution environment
use warnings;
use strict;
use Fcntl ':mode';
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';
# Variables used throughout the script
my $DIR = "/var/log/tivoli/";
my $MTAB = "/etc/mtab";
my $PERMFILE = "world_writable_w_files.txt";
my $TMPFILE = "world_writable_files.tmp";
my $EXCLUDE = "/usr/local/etc/world_writable_excludes.txt";
my $ROOT = "/";
my @devNum;
# Create an array of the file stats for "/"
my @rootStats = stat("${ROOT}");
# Compile a list of mountpoints that need to be scanned
my @mounts;
open MT, "<${MTAB}" or die "Cannot open ${MTAB}, $!";
# We only want the local mountpoints
while (<MT>) {
if ($_ =~ /ext[34]/) {
my @line = split;
push(@mounts, $line[1]);
}
}
close MT;
# Build an array of each mountpoint's device number for future comparison
foreach (@mounts) {
my @stats = stat($_);
push(@devNum, $stats[0]);
}
# Read in the list of excluded files and create a regex from them
my $regExcld = do {
open XCLD, "<${EXCLUDE}" or die "Cannot open ${EXCLUDE}, $!\n";
my @ignore = <XCLD>;
chomp @ignore;
local $" = '|';
qr/@ignore/;
};
# Create a regex to compare file device numbers to.
my $devRegex = do {
chomp @devNum;
local $" = '|';
qr/@devNum/;
};
# Create the output file path if it doesn't already exist.
mkdir "${DIR}" or die "Cannot execute mkdir on ${DIR}, $!" unless (-d "${DIR}");
# Create our filehandle for writing our findings
open WWFILE, ">${DIR}${TMPFILE}" or die "Cannot open ${DIR}${TMPFILE}, $!";
foreach (@mounts) {
# The anonymous subroutine which is executed by File::Find
find sub {
# Is it in a basic directory, ...
return if $File::Find::dir =~ /sys|proc|dev/;
# ...a regular file, ...
return unless -f;
# ...local, ...
my @dirStats = stat($File::Find::name);
return unless $dirStats[0] =~ $devRegex;
# ...and world writable?
return unless (((stat)[2] & S_IWUSR) && ((stat)[2] & S_IWGRP) && ((stat)[2] & S_IWOTH));
# If so, add the file to the list of world writable files unless it is
# in the list of exclusions
print(WWFILE "$File::Find::name\n") unless ($File::Find::name =~ $regExcld);
}, $_;
}
close WWFILE;
# If no world-writable files have been found ${TMPFILE} should be zero-size;
# Delete it so Tivoli won't alert
if (-z "${DIR}${TMPFILE}") {
unlink "${DIR}${TMPFILE}";
} else {
rename("${DIR}${TMPFILE}","${DIR}${PERMFILE}") or die "Cannot rename file ${DIR}${TMPFILE}, $!";
}
The dev field result from stat() tells you the device number the inode lives on. That can be used to distinguish different mount points, as they'll have a different device number from the one you started at.
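A minimal sketch of that check: record the starting point's device number, then prune any directory whose device number differs (setting $File::Find::prune stops descent into it). The starting directory here is '.' purely for illustration.

```perl
#!/usr/bin/perl
# Sketch: stay on a single filesystem by comparing device numbers.
use strict;
use warnings;
use File::Find;

my $start    = '.';
my $root_dev = (stat $start)[0];   # device number of the starting point

find(sub {
    if (-d $_ && (stat _)[0] != $root_dev) {
        $File::Find::prune = 1;    # different device => another mount; skip it
        return;
    }
    # ... process files on the same device here ...
}, $start);

print "device number of '$start' is $root_dev\n";
```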

How to get Perl to loop over all files in a directory?

I have a Perl script with contains
open (FILE, '<', "$ARGV[0]") || die "Unable to open $ARGV[0]\n";
while (defined (my $line = <FILE>)) {
# do stuff
}
close FILE;
and I would like to run this script on all .pp files in a directory, so I have written a wrapper script in Bash
#!/bin/bash
for f in /etc/puppet/nodes/*.pp; do
/etc/puppet/nodes/brackets.pl $f
done
Question
Is it possible to avoid the wrapper script and have the Perl script do it instead?
Yes.
The for f in ...; translates to the Perl
for my $f (...) { ... } (in the case of lists) or
while (my $f = ...) { ... } (in the case of iterators).
The glob expression that you use (/etc/puppet/nodes/*.pp) can be evaluated inside Perl via the glob function: glob '/etc/puppet/nodes/*.pp'.
Together with some style improvements:
use strict; use warnings;
use autodie; # automatic error handling
while (defined(my $file = glob '/etc/puppet/nodes/*.pp')) {
open my $fh, "<", $file; # lexical file handles, automatic error handling
while (defined( my $line = <$fh> )) {
# do stuff
}
close $fh;
}
Then:
$ /etc/puppet/nodes/brackets.pl
This isn’t quite what you asked, but another possibility is to use <>:
while (<>) {
my $line = $_;
# do stuff
}
Then you would put the filenames on the command line, like this:
/etc/puppet/nodes/brackets.pl /etc/puppet/nodes/*.pp
Perl opens and closes each file for you. (Inside the loop, the current filename and line number are $ARGV and $. respectively.)
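A self-contained sketch of that behavior (the script feeds itself a temporary file; normally the names in @ARGV come from the command line):

```perl
#!/usr/bin/perl
# Sketch: <> reads every file named in @ARGV; $ARGV holds the current
# filename and $. the current line number.
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($fh, $tmpname) = tempfile(UNLINK => 1);
print $fh "first\nsecond\n";
close $fh;

local @ARGV = ($tmpname);     # stands in for command-line arguments
my @seen;
while (<>) {
    chomp;
    push @seen, "$ARGV:$.: $_";
}
print "$_\n" for @seen;
```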
Jason Orendorff has the right answer:
From Perlop (I/O Operators)
The null filehandle <> is special: it can be used to emulate the behavior of sed and awk, and any other Unix filter program that takes a list of filenames, doing the same to each line of input from all of them. Input from <> comes either from standard input, or from each file listed on the command line.
This doesn't require opendir. It doesn't require using globs or hard coding stuff in your program. This is the natural way to read in all files that are found on the command line, or piped from STDIN into the program.
With this, you could do:
$ myprog.pl /etc/puppet/nodes/*.pp
or
$ myprog.pl /etc/puppet/nodes/*.pp.backup
or even:
$ cat /etc/puppet/nodes/*.pp | myprog.pl
Take a look at this documentation; it explains all you need to know.
#!/usr/bin/perl
use strict;
use warnings;
my $dir = '/tmp';
opendir(DIR, $dir) or die $!;
while (my $file = readdir(DIR)) {
# We only want files
next unless (-f "$dir/$file");
# Use a regular expression to find files ending in .pp
next unless ($file =~ m/\.pp$/);
open (FILE, '<', "$dir/$file") || die "Unable to open $dir/$file\n";
while (defined (my $line = <FILE>)) {
# do stuff
}
}
closedir(DIR);
exit 0;
I would suggest putting all filenames into an array and then using that array as the parameter list for your Perl method or script. Please see the following code:
use Data::Dumper;
$dirname = "/etc/puppet/nodes";
opendir ( DIR, $dirname ) || die "Error in opening dir $dirname\n";
my @files = grep {/.*\.pp/} readdir(DIR);
print Dumper(@files);
closedir(DIR);
Now you can pass \@files as parameter to any perl method.
my @x = <*>;
foreach ( @x ) {
chomp;
if ( -f "$_" ) {
print "process $_\n";
# do stuff
next;
};
};
Perl can shell out to execute system commands in various ways, the most straightforward is using backticks ``
use strict;
use warnings FATAL => 'all';
my @ls = `ls /etc/puppet/nodes/*.pp`;
chomp @ls; # strip trailing newlines so the paths can be opened
for my $f ( @ls ) {
open (my $FILE, '<', $f) || die "Unable to open $f\n";
while (defined (my $line = <$FILE>)) {
# do stuff
}
close $FILE;
}
(Note: you should always use strict; and use warnings;)
