Linux - Perl: Printing the content of the script I am executing

Is it possible to print the whole content of the script I am executing?
There is a lot going on inside the script, like Perl modules loaded at runtime (require "/dir/file";) and print statements executed from an array (foreach (@array) { print "$_\n"; }).
Why do I need this? To study the script generation I am doing, especially when errors occur: an error is reported on line 2000 even though the script is only a thousand lines long.

There are probably better ways to debug a script (the Perl debugger, or using Carp::Always to get stack traces with any errors and warnings), but nonetheless there are at least three mechanisms for obtaining the source code of the running script.
Since $0 contains the name of the file that perl is executing, you can read from it.
open my $fh, '<', $0 or die "Cannot open $0: $!";
my @this_script = <$fh>;
close $fh;
If a script has the __DATA__ or __END__ token in its source, then Perl also sets up the DATA file handle. Initially, the DATA file handle points to the text after
the __DATA__ or __END__ token, but it is actually opened to the whole source file, so you can seek to the beginning of that file handle and access the entire script.
seek DATA, 0, 0;
my @this_script = <DATA>;
HT Grinnz: the token __FILE__ in any Perl source file refers to the name of the file that contains it.
open my $fh, '<', __FILE__ or die "Cannot open source file: $!";
my @this_file = <$fh>;
close $fh;

Related

How terrible is my Perl? Script that takes IP addresses and returns Fully Qualified Domain Names [closed]

Closed 8 years ago.
I invite you, tear me a new one.
This code gets the job done. It takes a .txt file containing a list of IPs and writes a file containing their respective fully qualified domain names.
I want to know in what ways is this code poorly written. What bad habits are here?
I am a Perl and programming newbie. I managed to put this together using Google and trial and error. Getting it to work was satisfying, but please tell me how I can improve.
use strict;
use warnings;
use Socket;
use autodie;
my $filename = 'IPsForFQDN.txt';
#File with list of IPs to lookup.
#One IP address per line like so:
#10.10.10.10
#10.10.10.11
#10.10.10.12
#etc...
open(my $fh, '<:encoding(UTF-8)', $filename)
or die "Could not open file '$filename' $!";
my $fqdn = '';
while (my $row = <$fh>) {
chomp $row;
print "$row\n";
$fqdn = gethostbyaddr(inet_aton($row), AF_INET);
print $fqdn;
print "\n";
open FILE, ">>fqdn.txt" or die $!;
print FILE $fqdn;
print FILE "\n";
close FILE;
}
print "done\n";
For instance, is the chomp $row; line needed? I have NO IDEA what it does.
I am equally mystified by the whole or die $!; thing.
$! reports why something failed. Here if you were unable to open the file the reason for failure would be pointed out. perlvar has a section on error variables.
You're using chomp to remove the newline character from the end of each line.
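A tiny illustration of what chomp does (the IP address is just sample data):

```perl
use strict;
use warnings;

my $row = "10.10.10.10\n";   # a line as read from the file, newline included
chomp $row;                  # remove the trailing newline in place
print "[$row]\n";            # the brackets show the newline is gone
```

Without the chomp, the newline would still be part of $row and would end up inside the brackets.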
When writing the file you call open slightly differently. Consider using the same 3-argument version as you use when opening for reading earlier in your code (also see the link I gave you for open), and in the same coding style. It's good to be consistent, and this method is also safer.
You're repeatedly opening fqdn.txt for every line you write. I'd just open it before the loop and close it at the end.
Oh - and you're using autodie so the or die shouldn't be necessary.
Oh - and you've used old-style open for it too, compared to new-style open for the reading file.
Not much going on at work so I had a go at a little rewrite with comments in to explain a few things. Not right/not wrong just my spin and a few of the standards we use at my place have been added.
Hope this helps...
use strict;
use warnings;
use Socket;
# initialize variables here.
my $filename = "IPsForFQDN.txt";
# open both file handles - once only
# Note the safer 3-argument form of open
open(FH, "<", $filename)
or die "Could not open file '$filename' $!";
# open FILE for appending
open FILE, ">>", "fqdn.txt" or die $!;
# use foreach instead of while - easier syntax (may provoke discussion ;-) )
# replaced $fh with FH - use bareword file handles throughout for consistency
foreach my $row ( <FH> )
{
chomp $row;
# put a regex check in for comments
if( $row !~ m/^#/ )
{
printf ("Row in file %s \n", $row );
# initialize $fqdn here to keep it fresh
my $fqdn = gethostbyaddr(inet_aton($row), AF_INET);
# formatted print to screen (STDOUT)
printf ("FQDN %s \n", $fqdn);
# formatted print to output file
printf FILE ("%s \n", $fqdn);
}
}
# close both file handles - once only
close FILE;
close FH;
print "done\n";

Convert an Excel file to txt and open in Perl

I have an Excel file with my data. I saved it as a tab-delimited txt file.
But if I do a simple perl script:
open(IN, '<', 'myfile.txt') or die;
while (defined(my $line = <IN>)){
print "$line\n";
}
close IN;
it only prints out one line, but that line contains all the data.
If I use another data file there are no problems, so I think there is a problem converting the Excel file to a txt file.
Can anybody help me?
Try while (<IN>) instead. Your condition defeats the while magic.
I'd change the loop to:
while(my $line = <IN>) { ... }
There's no need to use defined().
I am not sure if you have this answered yet. But first, make sure you have the following in your code:
use strict;
use warnings;
This will give you debugging help that you would not receive otherwise; the extra messages can point you to problems.
When I put your open command in a current program I am working on I received this debugging message:
Name "main::IN" used only once: possible typo at ./test.pl line 37
You also may want to use a lexical file handle so Perl can remember where to go. This is the "new" way to open files in Perl and is explained in the online perldoc; just search for "perl file handle open." I learned to do my opens this way:
open my $in, '<', 'myfile.txt' or die;
Then, you can just run the following:
while ( my $line = <$in> ) { ... }
There is a better way to do this if you have ever been introduced to Perl's default variable, but I don't think you have, so the above solution may be the best.
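For the curious, a minimal sketch of the default-variable version mentioned above; the sample file is created and removed by the sketch itself so it is self-contained:

```perl
use strict;
use warnings;

# Create a small sample file so the sketch can run anywhere.
open my $out, '>', 'myfile.txt' or die "Cannot write myfile.txt: $!";
print {$out} "line one\nline two\n";
close $out;

open my $in, '<', 'myfile.txt' or die "Cannot open myfile.txt: $!";
while (<$in>) {    # each line lands in Perl's default variable $_
    chomp;         # chomp likewise defaults to $_
    print "$_\n";
}
close $in;
unlink 'myfile.txt';
```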

Perl processing log file

I want to create a Perl script that processes log files in Linux. The idea is to sort the "interesting" lines from the others. My plan is this:
- make a temp copy of the log file (because it is constantly written to)
- search for the "interesting" lines (keywords)
- copy them to another file, "log.processed"
- send that file to me by e-mail (this part I think will be done by cron)
Until now I have this:
#!/usr/bin/perl
#use strict;
use warnings;
use File::Copy;
copy("/home/hq-asa.log","/home/hq-asa.temp") or die "Copy failed $!";
$NewLog = "/home/hq-asa.processed";
our $search = "keyword1|keyword2|";
my $TempLog = "/home/hq-asa.temp";
open (my $LogFile, "+<", $TempLog) or die "Could not open log temp file $!";
qx(touch $NewLog);
open ($newlog, "+<", $NewLog) or die "could not open new log file $!";
foreach $line (<$LogFile>) {
if (($line =~ m/$search/) or ($line eq $search)) {
print $newlog $line;
}
}
close($LogFile);
close($newlog);
unlink "/home/hq-asa.temp";
Don't judge, I am a newbie.
The problem is that if this script is run every hour, for example, it will process the whole original log file again and again. Can I insert a "bookmark" in the original log file and tell the script to search for the last one and continue from there? Or how should this be done?
Write out a status file containing the line number where you left off. When you want to resume processing, first read the status file and skip the number of lines.
Use tell() to get what you call a "bookmark" (the offset in the file) and seek() to go back to that place.
Also saving the inode number (the result of (stat $file)[1]) with the bookmark might be helpful to ensure that the file has not been replaced by another one (think about rotating logs with logrotate).
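A sketch combining both suggestions; demo.log and demo.bookmark are made-up names standing in for /home/hq-asa.log and a hypothetical status file holding "offset inode":

```perl
use strict;
use warnings;

my $log    = 'demo.log';
my $status = 'demo.bookmark';

# Create a small demo log so the sketch is self-contained.
open my $out, '>', $log or die "Cannot write $log: $!";
print {$out} "boring line\nkeyword1 hit\n";
close $out;

sub process_new_lines {
    my ($offset, $inode) = (0, -1);
    if (open my $sfh, '<', $status) {       # read the saved bookmark, if any
        ($offset, $inode) = split ' ', scalar <$sfh>;
        close $sfh;
    }
    open my $lfh, '<', $log or die "Cannot open $log: $!";
    my $current = (stat $lfh)[1];
    $offset = 0 if $current != $inode;      # log was rotated: start over
    seek $lfh, $offset, 0;
    my @hits;
    while (my $line = <$lfh>) {
        push @hits, $line if $line =~ /keyword1|keyword2/;
    }
    my $new_offset = tell $lfh;             # the "bookmark" for next time
    close $lfh;
    open my $sfh, '>', $status or die "Cannot write $status: $!";
    print {$sfh} "$new_offset $current\n";
    close $sfh;
    return @hits;
}

my @first = process_new_lines();            # picks up "keyword1 hit"
open $out, '>>', $log or die $!;
print {$out} "keyword2 later\n";
close $out;
my @second = process_new_lines();           # picks up only the new line

print @first, @second;
unlink $log, $status;
```

On the second call only the freshly appended line is reported, because the loop resumes from the saved offset; if logrotate had swapped the file out, the inode check would force a restart from the top.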

Perl string replacements of file paths in text file

I'm trying to match file paths in a text file and replace them with their share file path. E.g. I want to replace the string "X:\Group_14\Project_Security" with "\\Project_Security$".
I'm having a problem getting my head around the syntax, as I have used a backslash (\) to escape another backslash (\\), but this does not seem to match a path in the text file.
open INPUT, '< C:\searchfile.txt';
open OUTPUT, '> C:\logsearchfiletest.txt';
@lines = <INPUT>;
%replacements = (
"X:\\Group_14\\Project_Security" => "\\\\Project_Security\$",
...
(More Paths as above)
...
);
$pattern = join '|', keys %replacements;
for (@lines) {
s/($pattern)/@{[$replacements{$1}]}/g;
print OUTPUT;
}
Not totally sure what's happening, as "\\\\Project_Security\$" correctly appears as \\Project_Security$.
So I think the issue lies with "X:\\Group_14\\Project_Security" not evaluating to
"X:\Group_14\Project_Security" correctly, and therefore not matching within the text file?
Any advice on this would be appreciated. Cheers.
If all the file paths and replacements are in a similar format to your example, you should just be able to do the following rather than using a hash for looking up replacements:
for my $line (@lines) {
$line =~ s/.+\\(.+)$/\\\\$1\$/;
print OUTPUT $line;
}
Some notes:
Always use the 3-argument open
Always check for errors on open, print, or close
Sometimes it is easier to use a loop than clever coding
Try:
#!/usr/bin/env perl
use strict;
use warnings;
# --------------------------------------
use charnames qw( :full :short );
use English qw( -no_match_vars ); # Avoids regex performance penalty
use Data::Dumper;
# Make Data::Dumper pretty
$Data::Dumper::Sortkeys = 1;
$Data::Dumper::Indent = 1;
# Set maximum depth for Data::Dumper, zero means unlimited
local $Data::Dumper::Maxdepth = 0;
# conditional compile DEBUGging statements
# See http://lookatperl.blogspot.ca/2013/07/a-look-at-conditional-compiling-of.html
use constant DEBUG => $ENV{DEBUG};
# --------------------------------------
# place file names in variables so they are easily changed
my $search_file = 'C:\\searchfile.txt';
my $log_search_file = 'C:\\logsearchfiletest.txt';
my %replacements = (
"X:\\Group_14\\Project_Security" => "\\\\Project_Security\$",
# etc
);
# use the 3-argument open as a security precaution
open my $search_fh, '<', $search_file or die "could not open $search_file: $OS_ERROR\n";
open my $log_search_fh, '>', $log_search_file or die "could not open $log_search_file: $OS_ERROR\n";
while( my $line = <$search_fh> ){
# scan for replacements
while( my ( $pattern, $replacement ) = each %replacements ){
$line =~ s/\Q$pattern\E/$replacement/g;
}
print {$log_search_fh} $line or die "could not print to $log_search_file: $OS_ERROR\n";
}
# always close the file handles and always check for errors
close $search_fh or die "could not close $search_file: $OS_ERROR\n";
close $log_search_fh or die "could not close $log_search_file: $OS_ERROR\n";
I see you've posted my rusty Perl code here, how embarrassing. ;) I made an update earlier today to my answer in the original PowerShell thread that gives a more general solution that also handles regex metacharacters and doesn't require you to manually escape each of 600 hash elements: PowerShell multiple string replacement efficiency. I added the perl and regex tags to your original question, but my edit hasn't been approved yet.
As I mentioned, since I've been using PowerShell for everything in recent times (heck, these days I prepare breakfast with PowerShell...), my Perl has gotten a tad dusty, which I see hasn't gone unnoticed here. :P I fixed several things that I noticed could be coded better when I looked at it a second time, which are noted at the bottom. I don't bother with error messages and declarations and other verbosity for limited-use quick-and-dirty scripts like this, and I don't particularly recommend it. As the Perl motto goes, "making easy things easy and hard things possible". Well, this is a case of making easy things easy, and one of Perl's main advantages is that it doesn't force you to be "proper" when you're trying to do something quick and simple. But I did close the filehandles. ;)

End open file with Perl for frequently updated reports

I have a daemon which needs to report a small hash of statistics to a file on a /dev/loop0 filesystem. I am using FileHandle to store the reference to the filehandle in Perl. A minimal version of the problem looks like this:
#!/usr/bin/perl
use strict;
use warnings;
use FileHandle;
my $report = FileHandle->new("> /devfs/test");
print $report "Hello";
seek($report,0,0);
print $report "Hi";
$report->close();
The result of this will be Hillo, which is what I'd expect. What I'd like is a way to indicate, after writing Hi (and originally Hello), that the file now ends at that point.
Question: When reading from a file, you can just search for the end of file (EOF), but how can I indicate the end of a file on write without closing it? If it makes a difference, the solution needs to apply to Linux specifically.
You want the truncate function.
truncate($report, tell($report));
...will truncate the file to wherever the file pointer currently is (as reported by tell).
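Putting it together in a runnable sketch; report.txt stands in for the /devfs/test path from the question:

```perl
use strict;
use warnings;
use FileHandle;

my $path   = 'report.txt';   # stand-in for /devfs/test
my $report = FileHandle->new("> $path") or die "Cannot open $path: $!";

print $report "Hello";            # file holds "Hello"
seek $report, 0, 0;               # seek flushes the buffer and rewinds
print $report "Hi";               # file holds "Hillo"
truncate $report, tell $report;   # chop at the current position: "Hi"
$report->close;

open my $in, '<', $path or die "Cannot read $path: $!";
my $contents = <$in>;
close $in;
print "$contents\n";
unlink $path;
```

Without the truncate, the shorter second report would leave "llo" from the longer first one dangling at the end of the file.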
