What is the correct way to change the locale for a subprocess (in Linux)?
For example, when running
perl -e 'use POSIX qw(setlocale); setlocale(POSIX::LC_ALL, "C"); open F, "locale|"; while (<F>) { print if /LC_MESS/ }; close F'
I get the answer LC_MESSAGES="ca_ES.UTF-8", but I would like to obtain LC_MESSAGES="C". Whatever I've tried, I can't seem to change it.
Note: I know about doing LC_ALL=C perl ....., but this is not what I want to do; I need to change the locale inside the Perl script.
I'm picking up on Ted Lyngmo's comment, so credit goes to him.
You can set the environment for your own code as well as for subsequent sub-processes with %ENV: setlocale() only changes the locale of the current process, while a child process builds its locale from the environment it inherits. As with all global variables, it makes sense to change %ENV only locally, temporarily, for your scope and smaller scopes. That's what local does.
I've also changed your open to use the three-arg form as that's more secure (even though you're not using a variable for the filename/command), and used a lexical filehandle. The lexical handle will go out of scope at the end of the block and close implicitly.
use strict;
use warnings;
use POSIX qw(setlocale);
{
    setlocale(POSIX::LC_ALL, "C");
    local $ENV{LC_ALL} = 'C';

    open my $fh, '-|', 'locale' or die $!;
    while (<$fh>) {
        print if /LC_MESS/;
    }
}
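With both settings in place, the spawned locale process inherits the adjusted environment, and the script prints LC_MESSAGES="C", as desired.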
When I SSH to another server, there are some blurbs of text that always output when you log in (whether it's SSH or just logging in to its own session).
"Authentication banner" is what it prints out every time I either scp a file over or SSH into it.
My code iterates through a list of servers and sends a file; each time it does that, it outputs a lot of text I'd like to suppress.
This code loops through each server, printing out what it's doing.
for (my $j = 0; $j < $#servName + 1; $j++) {
    print "\n\nSending file: $fileToTransfer to \n$servName[$j]:$targetLocation\n\n";
    my $sendCommand = `scp $fileToTransfer $servName[$j]:$targetLocation`;
    print $sendCommand;
}
But then it comes out like this:
Sending file: /JacobsScripts/AddAlias.pl to
denamap2:/release/jscripts
====================================================
Welcome authorized users. This system is company
property and unauthorized access or use is prohibited
and may subject you to discipline, civil suit or
criminal prosecution. To the extent permitted by law,
system use and information may be monitored, recorded
or disclosed. Using this system constitutes your
consent to do so. You also agree to comply with applicable
company procedures for system use and the protection of
sensitive (including export controlled) data.
====================================================
Sending file: /JacobsScripts/AddAlias.pl to
denfpev1:/release/jscripts
====================================================
Welcome authorized users. This system is company
property and unauthorized access or use is prohibited
and may subject you to discipline, civil suit or
criminal prosecution. To the extent permitted by law,
system use and information may be monitored, recorded
or disclosed. Using this system constitutes your
consent to do so. You also agree to comply with applicable
company procedures for system use and the protection of
sensitive (including export controlled) data.
====================================================
I haven't tried much. I saw a few forums that mention capturing the output in a file and then deleting it, but I don't know if that'll work for my situation.
NOTE This answer assumes that on the system in question the ssh/scp messages go to the STDERR stream (or perhaps even directly to /dev/tty)†, like they do on some systems I test with -- thus the question.
If not, then ikegami's answer of course takes care of it: just don't print the captured STDOUT. But even in that case, I also think that all ways shown here are better for capturing output (except for the one involving the shell), especially when both streams are needed.
These prints can be suppressed by configuring the server, or perhaps via a .hushlogin file, but that clearly depends on the server management.
Otherwise, yes, you can redirect the standard streams to files or, better yet, to variables, which makes the overall management easier.
Using IPC::Run
use IPC::Run qw(run);

my ($file, $servName, $targetLocation) = ...;
my @cmd = ("scp", $file, "$servName:$targetLocation");

run \@cmd, '1>', \my $out, '2>', \my $err;

# Or redirect both to one variable
# run \@cmd, '>&', \my $out_err;
This mighty and well-rounded library allows great control over the external processes it runs; it provides almost a mini-shell.
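A useful detail: run returns true only when all of the commands exit with status 0, so the call can be written as run \@cmd, '1>', \my $out, '2>', \my $err or die "scp failed: $?"; to catch failures.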
Or using the far simpler, and very handy Capture::Tiny
use Capture::Tiny qw(capture);

...

my ($out, $err, $exit) = capture { system @cmd };
Here output can be merged using capture_merged. Working with this library is also clearly superior to builtins (qx, system, pipe-open).
In both cases you then inspect the $out and $err variables, which is far less cut-and-dried, as error messages depend on your system. For some errors the library routines die/croak, but for others they don't and merely print to STDERR. It is probably more reliable to use the other tools these libraries provide for detecting errors.
The ssh/scp "normal" (non-error) messages may print to either the STDERR or STDOUT stream, or may even go directly to /dev/tty,† so they can be mixed with error messages.
Given that the intent seems to be to intersperse these scp commands with other prints, I'd recommend either of these two ways over the others shown below. For example, the loop from the question might be rewritten along the lines of the sketch that follows (variable names and values taken from the question; printing the captured streams only on failure keeps the banner hidden on a normal run):
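use strict;
use warnings;
use Capture::Tiny qw(capture);

my $fileToTransfer = '/JacobsScripts/AddAlias.pl';   # values from the question
my @servName       = qw(denamap2 denfpev1);
my $targetLocation = '/release/jscripts';

for my $server (@servName) {
    print "\n\nSending file: $fileToTransfer to \n$server:$targetLocation\n\n";

    my @cmd = ('scp', $fileToTransfer, "$server:$targetLocation");
    my ($out, $err, $exit) = capture { system @cmd };

    # $exit holds the raw $? from system; non-zero means scp failed.
    # Nothing is printed on success, so the login banner stays hidden.
    warn "scp failed:\n$out$err" if $exit != 0;
}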
Another option, which I consider least satisfactory overall, is to use the shell to redirect output in the command itself, either to separate files
my ($out_file, $err_file) = ...;

system("@cmd 2> $err_file 1> $out_file") == 0
    or die "system(\@cmd...) error: $?";  # see "system" in perldoc
or, perhaps for convenience, both streams can go to one file
system("#cmd > $out_err_file 2>&1" ) == 0 or die $?;
Then inspect the files for errors and remove them if there is nothing remarkable. Or, shell redirections can be used as in the question, but capturing all output
my $out_and_err = qx(@cmd 2>&1);
Then examine the (possibly multiline) variable for errors.
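The command's exit status is also in $? afterwards, which gives at least a first-pass check, for example:

my $out_and_err = qx(@cmd 2>&1);
if ($? != 0) {
    my $status = $? >> 8;    # the actual exit code of scp
    warn "scp exited with status $status:\n$out_and_err";
}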
Or, instead of dealing with individual commands, we can redirect the streams themselves to files for the duration of a larger part of the program
use warnings;
use strict;
use feature 'say';

# Save the filehandles ('dup' them) so we can restore them later
open my $saveout, ">&STDOUT" or die "Can't dup STDOUT: $!";
open my $saveerr, ">&STDERR" or die "Can't dup STDERR: $!";

my ($outf, $errf) = qw(stdout.txt stderr.txt);
open *STDOUT, ">", $outf or die "Can't redirect STDOUT to $outf: $!";
open *STDERR, ">", $errf or die "Can't redirect STDERR to $errf: $!";

my ($file, $servName, $targetLocation) = ...;
my @cmd = ("scp", $file, "$servName:$targetLocation");

system(@cmd) == 0
    or die "system(\@cmd) error: $?";  # see "system" in perldoc

# Restore the standard streams when needed for normal output
open STDOUT, '>&', $saveout or die "Can't reopen STDOUT: $!";
open STDERR, '>&', $saveerr or die "Can't reopen STDERR: $!";

# Examine what's in the files (errors?)
I use system instead of qx (the operator form of backticks) since there is no need for the output from scp. Most of this is covered in the documentation for open; search SO for specifics.
It'd be nice to be able to reopen the streams to in-memory variables instead, but that doesn't work here: such filehandles exist only inside the Perl process and have no real file descriptor, so an external program like scp cannot write to them.
† This is even prescribed ("allowed") by POSIX
/dev/tty
In each process, a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal no matter how output has been redirected. It can also be used for applications that demand the name of a file for output, when typed output is desired and it is tiresome to find out what terminal is currently in use.
Courtesy of this superuser post, which has a substantial discussion.
You are capturing the text, then printing it out using print $sendCommand;. You could simply remove that statement.
I have an Excel file with my data. I saved it as a tab-delimited txt file.
But if I do a simple perl script:
open(IN, '<', 'myfile.txt') or die;
while (defined(my $line = <IN>)) {
    print "$line\n";
}
close IN;
it only prints out one line, but that line contains all the data.
If I use another data file there are no problems, so I think there is a problem converting the Excel file to a txt file.
Can anybody help me?
Try while (<IN>) instead. Your condition beats the while magic.
I'd change the loop to:
while(my $line = <IN>) { ... }
There's no need to use defined().
I am not sure if you have this answered yet, but first make sure you have the following in your code:
use strict;
use warnings;
This will give you debugging help that you would not receive otherwise; using the above produces more messages that can help.
When I put your open command in a current program I am working on I received this debugging message:
Name "main::IN" used only once: possible typo at ./test.pl line 37
You also may want to use a lexical filehandle so Perl can keep track of it for you. This is the "new" way to open files in Perl and is explained in the online perldoc. Just search for "perl file handle open." I learned to do my opens this way:
open my $in, '<', 'myfile.txt' or die;
Then, you can just run the following:
while ( my $line = <$in> ) { ... }
There is a better way to do this using Perl's default variable ($_), but I don't think you have been introduced to it yet, so the above solution may be the best.
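One more thing worth checking, since the file came out of Excel: if it was saved with old Mac-style CR ("\r") line endings, Perl's default line separator ("\n") never matches, so the whole file is read as a single "line", which is exactly the symptom you describe. A sketch of a workaround, assuming that's the cause:

use strict;
use warnings;

open my $in, '<', 'myfile.txt' or die "Can't open myfile.txt: $!";
{
    local $/ = "\r";    # treat carriage returns as the line ending
    while (my $line = <$in>) {
        chomp $line;    # chomp removes the \r because of $/ above
        print "$line\n";
    }
}
close $in;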
I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, plus some command-line args. Each of these other scripts is available on the command line (via another command-line script) and needs to stay that way, so I can't just put all that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also be usable on the command line.
The current way of calling the other scripts is with 'do', which means I can pass in the hash and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;

my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;

MainLoop;

sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
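That section demonstrates IPC::Open2. A minimal sketch of the idea, assuming the child reads from STDIN and answers on STDOUT (beware of deadlock if the child writes a lot before reading, as perlipc warns):

use strict;
use warnings;
use IPC::Open2;

# Both of the child's standard streams become pipes to this process;
# running it via $^X uses the same perl as the parent.
my $pid = open2(my $from_child, my $to_child, $^X, 'this_other_thing.pl');

print {$to_child} "some input\n";
close $to_child;    # send EOF so the child doesn't block on read

while (my $line = <$from_child>) {
    print "child said: $line";    # or insert it into the Tk Text widget
}
waitpid $pid, 0;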
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without using a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing and parsing the data back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, unable to see each other's variables.
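For example, a sketch of the pipe variant, serializing the hash as JSON with the core JSON::PP module (the child script name is taken from the question; the one-line-of-JSON protocol is just an assumption for illustration):

use strict;
use warnings;
use JSON::PP qw(encode_json);

my %info = (name => 'value');    # the hash gathered by the Tk program

# Run the child with the same perl ($^X) and feed its STDIN through a pipe.
open my $child, '|-', $^X, 'this_other_thing.pl' or die "Can't fork: $!";
print {$child} encode_json(\%info), "\n";
close $child or warn "child exited with status $?";

# Inside this_other_thing.pl, the hash would be recovered with:
#   use JSON::PP qw(decode_json);
#   chomp(my $json = <STDIN>);
#   my $info = decode_json($json);    # hashref holding the parent's data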
You might avoid using a subprocess by using fork() + eval() + require(). In that case no separate perl interpreter is involved; the forked interpreter inherits the whole memory of your program with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second perl script could get its hash from when started from the CLI.
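A rough sketch of that route, using do (which reads and evals the file, much like eval + require). Note that the hash must be a package variable, since the do'd file cannot see the caller's lexicals:

use strict;
use warnings;

our %info = (name => 'value');    # package variable, visible to the child script as %main::info

defined(my $pid = fork()) or die "fork failed: $!";
if ($pid == 0) {
    # Child: same interpreter, a full copy of the parent's memory,
    # so %info is directly available to the script run below.
    do './this_other_thing.pl' or die "couldn't run script: $@ $!";
    exit 0;
}
waitpid $pid, 0;    # parent waits for the child to finish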
I'm trying to match file paths in a text file and replace them with their share file path. E.g., I want to replace the string "X:\Group_14\Project_Security" with "\\Project_Security$".
I'm having a problem getting my head around the syntax, as I have to use a backslash (\) to escape another backslash (\\), but this does not seem to work for matching a path in a text file.
open INPUT, '< C:\searchfile.txt';
open OUTPUT, '> C:\logsearchfiletest.txt';

@lines = <INPUT>;

%replacements = (
    "X:\\Group_14\\Project_Security" => "\\\\Project_Security\$",
    ...
    (More Paths as above)
    ...
);

$pattern = join '|', keys %replacements;

for (@lines) {
    s/($pattern)/@{[$replacements{$1}]}/g;
    print OUTPUT;
}
Not totally sure what's happening, as "\\\\Project_Security\$" appears as "\\Project_Security$" correctly.
So I think the issue lies with "X:\\Group_14\\Project_Security" not evaluating to "X:\Group_14\Project_Security" correctly, and therefore not matching within the text file?
Any advice on this would be appreciated, Cheers.
If all the file paths and replacements are in a similar format to your example, you should just be able to do the following rather than using a hash for looking up replacements:
for my $line (@lines) {
    $line =~ s/.+\\(.+)$/\\\\$1\$/;
    print OUTPUT $line;
}
Some notes:
Always use the 3-argument open
Always check for errors on open, print, or close
Sometimes it's easier to use a loop than clever coding
Try:
#!/usr/bin/env perl
use strict;
use warnings;
# --------------------------------------
use charnames qw( :full :short );
use English qw( -no_match_vars ); # Avoids regex performance penalty
use Data::Dumper;
# Make Data::Dumper pretty
$Data::Dumper::Sortkeys = 1;
$Data::Dumper::Indent = 1;
# Set maximum depth for Data::Dumper, zero means unlimited
local $Data::Dumper::Maxdepth = 0;
# conditional compile DEBUGging statements
# See http://lookatperl.blogspot.ca/2013/07/a-look-at-conditional-compiling-of.html
use constant DEBUG => $ENV{DEBUG};
# --------------------------------------
# place file names in variables so they are easily changed
my $search_file = 'C:\\searchfile.txt';
my $log_search_file = 'C:\\logsearchfiletest.txt';
my %replacements = (
"X:\\Group_14\\Project_Security" => "\\\\Project_Security\$",
# etc
);
# use the 3-argument open as a security precaution
open my $search_fh, '<', $search_file or die "could not open $search_file: $OS_ERROR\n";
open my $log_search_fh, '>', $log_search_file or die "could not open $log_search_file: $OS_ERROR\n";
while( my $line = <$search_fh> ){

    # scan for replacements
    while( my ( $pattern, $replacement ) = each %replacements ){
        $line =~ s/\Q$pattern\E/$replacement/g;
    }

    print {$log_search_fh} $line or die "could not print to $log_search_file: $OS_ERROR\n";
}
# always close the file handles and always check for errors
close $search_fh or die "could not close $search_file: $OS_ERROR\n";
close $log_search_fh or die "could not close $log_search_file: $OS_ERROR\n";
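A note on why the original code failed: once the hash keys are joined into $pattern, the backslashes in them become regex escape sequences ("X:\Group_14\..." contains \G, which is a regex anchor), so the paths never match literally. The \Q...\E in the substitution above disables that. If you prefer the question's approach of building one joined pattern, applying quotemeta to the keys achieves the same thing (a sketch, using the question's variables):

my $pattern = join '|', map { quotemeta } keys %replacements;

for my $line (@lines) {
    $line =~ s/($pattern)/$replacements{$1}/g;
    print OUTPUT $line;
}

The lookup $replacements{$1} still works because quotemeta only escapes the pattern; the text actually matched and captured in $1 is the original, unescaped path.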
I see you've posted my rusty Perl code here, how embarrassing. ;) I made an update earlier today to my answer in the original PowerShell thread that gives a more general solution that also handles regex metacharacters and doesn't require you to manually escape each of 600 hash elements: PowerShell multiple string replacement efficiency. I added the perl and regex tags to your original question, but my edit hasn't been approved yet.
[As I mentioned, since I've been using PowerShell for everything in recent times (heck, these days I prepare breakfast with PowerShell...), my Perl has gotten a tad dusty, which I see hasn't gone unnoticed here. :P I fixed several things that I noticed could be coded better when I looked at it a second time, which are noted at the bottom. I don't bother with error messages and declarations and other verbosity for limited-use quick-and-dirty scripts like this, and I don't particularly recommend it. As the Perl motto goes, "making easy things easy and hard things possible". Well, this is a case of making easy things easy, and one of Perl's main advantages is that it doesn't force you to be "proper" when you're trying to do something quick and simple. But I did close the filehandles. ;)]
My pseudo code looks like this:
#!/usr/local/bin/perl5.8.8
use warnings;
use strict;
use threads;
use threads::shared;

sub tasker;

my @allThreads = ();
my @array = ('alpha', 'beta', 'gamma');

push @allThreads, threads->new(\&tasker, @array);
$_->join foreach @allThreads;

sub tasker {
    my @localArray = @_;
    ...call some other modules/functions...
}
While the threads are running, I get these messages after a few seconds on my STDOUT:
Still here!
Still here!
Still here!
After which the threads join (complete) successfully. I am not sure where these come from and why they show up only for some @array values. A point to mention is that the number of these messages equals the number of elements in @array.
I will appreciate any help from experts.
Your code (or one of the modules you are using) appears to have some leftover debugging code. To locate it, add
INIT { print "$0\n"; print "$_\n" for values %INC; exit }
to your script. Pipe the output to
xargs grep 'Still here!'
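(That is, run something like perl yourscript.pl | xargs grep 'Still here!': the INIT block prints the path of your script and of every module it loads, and grep then searches those files for the stray message.)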
Then remove the debugging code.
PS - If you use warn without a trailing newline, your debugging messages will have a file name and line number attached. This can be useful :)