I have a daemon which needs to report a small hash of statistics to a file on a /dev/loop0 filesystem. I am using FileHandle to store a reference to the filehandle in Perl. A really small version of the problem looks like this:
#!/usr/bin/perl
use strict;
use warnings;
use FileHandle;
my $report = FileHandle->new("> /devfs/test");
print $report "Hello";
seek($report,0,0);
print $report "Hi";
$report->close();
The result of this will be "Hillo", which is what I'd expect. What I'd like to do is be able to indicate, after "Hi" (and really after "Hello"), that the file is now finished.
Question: When reading from a file, you can just test for the end of file (EOF), but how can I indicate the end of a file on write without closing it? If it makes a difference, the solution needs to apply to Linux specifically.
You want the truncate function.
truncate($report, tell($report));
...will truncate the file to wherever the file pointer currently is (as reported by tell).
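Applied to the example above, a minimal sketch of the fix might look like this (same /devfs/test path as in the question):
#!/usr/bin/perl
use strict;
use warnings;
use FileHandle;
my $report = FileHandle->new("> /devfs/test");
print $report "Hello";
seek($report, 0, 0);
print $report "Hi";
truncate($report, tell($report));   # drop everything past the current offset
$report->close();                   # file now contains just "Hi"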
I have an Excel file with my data. I saved it as a tab-delimited txt file.
But if I run a simple Perl script:
open(IN, '<', 'myfile.txt') or die;
while (defined(my $line = <IN>)) {
    print "$line\n";
}
close IN;
it only prints out one line, but that line contains all the data.
If I use another data file, there are no problems, so I think there is a problem converting the Excel file to a txt file.
Can anybody help me?
Try while (<IN>) instead. Your explicit condition bypasses the while magic.
I'd change the loop to:
while(my $line = <IN>) { ... }
There's no need to use defined().
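The reason: when the read is the only thing inside the while condition, Perl adds the defined() test for you (see the "I/O Operators" section of perldoc perlop), so the shorter loop is equivalent to:
while ( defined( my $line = <IN> ) ) { ... }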
I am not sure if you have this answered yet. But first, make sure you have the following in your code:
use strict;
use warnings;
This will give you debugging help that you would not otherwise receive. Using the above will give you more messages that can help.
When I put your open command in a current program I am working on I received this debugging message:
Name "main::IN" used only once: possible typo at ./test.pl line 37
You also may want to use a lexical filehandle so Perl can remember where to go. This is the "new" way to open files in Perl and is explained in the online perldoc; just search for "perl file handle open." I learned to do my opens this way:
open my $in, '<', 'myfile.txt' or die;
Then, you can just run the following:
while ( my $line = <$in> ) { ... }
There is a better way to do this if you have ever been introduced to Perl's default variable, but I don't think that you have, so the above solution may be the best.
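For reference, a minimal sketch of the default-variable version (assuming the same myfile.txt):
#!/usr/bin/perl
use strict;
use warnings;
open my $in, '<', 'myfile.txt' or die "Cannot open myfile.txt: $!";
while (<$in>) {    # each line lands in $_
    print;         # print defaults to $_ as well
}
close $in;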
I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, plus some command-line args. Each of these other scripts is available on the command line (using another command-line script) and needs to stay that way, so I can't just put all of that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also be usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash, and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;
MainLoop;
sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are: IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
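A minimal sketch of that approach using IPC::Open2, which that section of perlipc also covers (the child script name is the one from the question; the input line is purely illustrative):
#!/usr/bin/env perl
use strict;
use warnings;
use IPC::Open2;
# $^X is the perl interpreter currently running this script (see below).
my $pid = open2(my $from_child, my $to_child, $^X, 'this_other_thing.pl');
print $to_child "some input for the child\n";
my $reply = <$from_child>;   # beware of deadlock if the child buffers its output (see perlipc)
print "child said: $reply";
waitpid($pid, 0);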
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without using a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing the data and parsing it back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, unable to see each other's variables.
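One way to do the print-and-parse step, sketched with the core Storable module (the variable names are illustrative; the two halves live in the parent and the child respectively):
# Parent side: serialize the hash and send it down the child's STDIN.
use Storable qw(nfreeze);
print {$to_child} nfreeze(\%info);
close $to_child;

# Child side: slurp STDIN and rebuild the hash.
use Storable qw(thaw);
my $frozen = do { local $/; <STDIN> };
my %info = %{ thaw($frozen) };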
You might avoid using a subprocess altogether by using fork() + eval() + require(). In that case no separate perl interpreter is involved: the forked interpreter inherits the whole memory of your program, with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second perl script could get its hash from when started from the CLI.
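A minimal sketch of that idea (the hash contents are illustrative, and having the child script read %main::info is an assumed convention, not something the question specifies):
#!/usr/bin/env perl
use strict;
use warnings;
our %info = ( user => 'someone', level => 3 );   # the gathered data
defined(my $pid = fork()) or die "fork failed: $!";
if ($pid == 0) {
    # Child: inherits %info and every open handle from the parent.
    eval { require './this_other_thing.pl'; 1 }
        or warn "child script failed: $@";
    exit 0;
}
waitpid($pid, 0);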
I want to create a perl script that processes log files in linux. The idea is to sort the "interesting" lines from the others. My plan is this:
- make a temp copy of the log file (because it is constantly written to)
- search for the "interesting" lines (keywords)
- copy them to another file, "log.processed"
- send that file to me by e-mail (this part, I think, will be done by cron)
Until now I have this:
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;

copy("/home/hq-asa.log", "/home/hq-asa.temp") or die "Copy failed: $!";
my $NewLog  = "/home/hq-asa.processed";
my $search  = "keyword1|keyword2";
my $TempLog = "/home/hq-asa.temp";

open my $LogFile, '+<', $TempLog or die "Could not open temp log file: $!";
qx(touch $NewLog);
open my $newlog, '+<', $NewLog or die "Could not open new log file: $!";
foreach my $line (<$LogFile>) {
    if (($line =~ m/$search/) or ($line eq $search)) {
        print $newlog $line;
    }
}
close($LogFile);
close($newlog);
unlink "/home/hq-asa.temp";
Don't judge, I am a newbie.
The problem is that if I want this script to be run every hour, for example, it will process the whole original log file again and again. Can I insert a "bookmark" in the original log file and tell this script to search for the last one and continue from there? Or how should this be done?
Write out a status file containing the line number where you left off. When you want to resume processing, first read the status file and skip that many lines.
Use tell() to get what you call a "bookmark" (the offset in the file) and seek() to go back to that place.
Also saving the inode number (the result of (stat $file)[1]) with the bookmark might be helpful to ensure that the file has not been replaced by another one (think about rotating logs with logrotate).
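A minimal sketch of the tell()/seek() bookmark with the inode check (the state-file name is an assumption, not from the question):
#!/usr/bin/perl
use strict;
use warnings;

my $log   = '/home/hq-asa.log';
my $state = '/home/hq-asa.offset';   # hypothetical bookmark file

# Read the saved offset and inode, if any.
my ($offset, $inode) = (0, -1);
if (open my $st, '<', $state) {
    ($offset, $inode) = split ' ', <$st>;
    close $st;
}

open my $fh, '<', $log or die "Cannot open $log: $!";
my $cur_inode = (stat $fh)[1];
$offset = 0 if $cur_inode != $inode;   # log was rotated: start over
seek $fh, $offset, 0 or die "Cannot seek: $!";

while (my $line = <$fh>) {
    # process $line here
}

# Remember where we stopped, plus the inode, for the next run.
open my $st, '>', $state or die "Cannot write $state: $!";
print $st tell($fh), ' ', $cur_inode, "\n";
close $st;
close $fh;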
I'm trying to match file paths in a text file and replace them with their share file path. E.g., the string "X:\Group_14\Project_Security" I want to replace with "\\Project_Security$".
I'm having a problem getting my head around the syntax, as I have used a backslash (\) to escape another backslash (\\), but this does not seem to work for matching a path in a text file.
open INPUT, '< C:\searchfile.txt';
open OUTPUT, '> C:\logsearchfiletest.txt';
@lines = <INPUT>;
%replacements = (
"X:\\Group_14\\Project_Security" => "\\\\Project_Security\$",
...
(More Paths as above)
...
);
$pattern = join '|', keys %replacements;
for (@lines) {
    s/($pattern)/@{[$replacements{$1}]}/g;
    print OUTPUT;
}
Not totally sure what's happening, as "\\\\Project_Security\$" appears as "\\Project_Security$" correctly.
So I think the issue lies with "X:\\Group_14\\Project_Security" not evaluating to
"X:\Group_14\Project_Security" correctly, and therefore not matching within the text file?
Any advice on this would be appreciated. Cheers.
If all the file paths and replacements are in a similar format to your example, you should just be able to do the following rather than using a hash for looking up replacements:
for my $line (@lines) {
    $line =~ s/.+\\(.+)$/\\\\$1\$/;
    print OUTPUT $line;
}
Some notes:
Always use the 3-argument open
Always check for errors on open, print, or close
Sometimes it is easier to use a loop than clever coding
Try:
#!/usr/bin/env perl
use strict;
use warnings;
# --------------------------------------
use charnames qw( :full :short );
use English qw( -no_match_vars ); # Avoids regex performance penalty
use Data::Dumper;
# Make Data::Dumper pretty
$Data::Dumper::Sortkeys = 1;
$Data::Dumper::Indent = 1;
# Set maximum depth for Data::Dumper, zero means unlimited
local $Data::Dumper::Maxdepth = 0;
# conditional compile DEBUGging statements
# See http://lookatperl.blogspot.ca/2013/07/a-look-at-conditional-compiling-of.html
use constant DEBUG => $ENV{DEBUG};
# --------------------------------------
# place file names in variables so they are easily changed
my $search_file = 'C:\\searchfile.txt';
my $log_search_file = 'C:\\logsearchfiletest.txt';
my %replacements = (
    "X:\\Group_14\\Project_Security" => "\\\\Project_Security\$",
    # etc
);
# use the 3-argument open as a security precaution
open my $search_fh, '<', $search_file or die "could not open $search_file: $OS_ERROR\n";
open my $log_search_fh, '>', $log_search_file or die "could not open $log_search_file: $OS_ERROR\n";
while ( my $line = <$search_fh> ) {
    # scan for replacements
    while ( my ( $pattern, $replacement ) = each %replacements ) {
        $line =~ s/\Q$pattern\E/$replacement/g;
    }
    print {$log_search_fh} $line or die "could not print to $log_search_file: $OS_ERROR\n";
}
# always close the file handles and always check for errors
close $search_fh or die "could not close $search_file: $OS_ERROR\n";
close $log_search_fh or die "could not close $log_search_file: $OS_ERROR\n";
I see you've posted my rusty Perl code here, how embarrassing. ;) I made an update earlier today to my answer in the original PowerShell thread that gives a more general solution that also handles regex metacharacters and doesn't require you to manually escape each of 600 hash elements: PowerShell multiple string replacement efficiency. I added the perl and regex tags to your original question, but my edit hasn't been approved yet.
[As I mentioned, since I've been using PowerShell for everything in recent times (heck, these days I prepare breakfast with PowerShell...), my Perl has gotten a tad dusty, which I see hasn't gone unnoticed here. :P I fixed several things that I noticed could be coded better when I looked at it a second time, which are noted at the bottom. I don't bother with error messages and declarations and other verbosity for limited-use quick-and-dirty scripts like this, and I don't particularly recommend it. As the Perl motto goes, "making easy things easy and hard things possible". Well, this is a case of making easy things easy, and one of Perl's main advantages is that it doesn't force you to be "proper" when you're trying to do something quick and simple. But I did close the filehandles. ;)]
I have a perl script that creates a report based on an xml definition. Currently these definitions all exist as .xml files.
So I have the script run-report.pl, which can take a path to a definition file and create the report.
Now I want to create run-reports-from-db.pl, which will generate the report definition based on some database entries. I don't want to create temp files to pass to run-report.pl; I would just like to pass in the definition somehow.
So instead of saying:
run-report.pl -def=./path/to/def.xml
I want to be able to say:
run-report.pl --stream
And have the report definition available in <STDIN>
I am sure there is a pretty trivial way to do this?
If I understand your question correctly, all you need is one | (pipe).
./generate-xml-from-db.pl | ./run-report.pl --stream
Anything the first process in the pipeline prints to stdout will appear in the second process's stdin.
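Inside run-report.pl, the --stream side could then be a few lines. This is just a sketch (the flag name comes from the question; the slurp is illustrative):
# Read the whole definition from STDIN when --stream is given.
my $definition;
if (grep { $_ eq '--stream' } @ARGV) {
    local $/;              # slurp mode: read everything at once
    $definition = <STDIN>;
} else {
    # ... existing -def=path handling ...
}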
As long as you read from STDIN, you have it available. Notice what happens when you take the code below, name it something like echo.pl, run it at the command line, and paste reams of text.
#!/usr/bin/perl -w
use 5.010;
use strict;
use warnings;
while ( <> ) {
    say;
}
<> is the Perl shorthand for "read from the files named on the command line, or from STDIN if none are given."
As long as the method you're using to launch the process has a way to get hold of its standard input and output, you can just write to that handle. You have to use the ways that are available to you: in Java, for example, you'd have to get the input stream of the process; in a batch command you have to pipe it; at a GUI terminal you can cut and paste.