There are a couple of answers for Perl 6 from back in its Parrot days, and they don't seem to work currently:
This is Rakudo version 2017.04.3 built on MoarVM version 2017.04-53-g66c6dda
implementing Perl 6.c.
The answer to Does perl6 enable “autoflush” by default? says it's enabled by default (but that was 2011).
Here's a program I was playing with:
$*ERR.say: "1. This is an error";
$*OUT.say: "2. This is standard out";
And its output, which is an unfortunate order:
2. This is standard out
1. This is an error
So maybe I need to turn it on. There's How could I disable autoflush? which mentions an autoflush method:
$*ERR.autoflush = True;
$*ERR.say: "1. This is an error";
$*OUT.say: "2. This is standard out";
But that doesn't work:
No such method 'autoflush' for invocant of type 'IO::Handle'
I guess I could fake this myself by making my own IO class that flushes after every output. For what it's worth, it's the lack of this feature that prevented me from using Perl 6 for a particular task today.
As a secondary question, why doesn't Perl 6 have this now, especially when it looks like it used to have it? How would you persuade a Perl 5 person this isn't an issue?
There was an output refactor very recently. With my local version of Rakudo I can't get it to give the wrong order any more (2017.06-173-ga209082 built on MoarVM version 2017.06-48-g4bc916e).
There's now a :buffer argument for IO handles that you can set to a number (or pass as :!buffer) to control this.
I assume the default is not to buffer if the output is a TTY.
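As a rough sketch of what that might look like (hedged: the exact adverb has shifted between Rakudo versions, and later releases document the equivalent argument as :out-buffer; the file name and variable here are just for illustration):
# Sketch: open a handle with buffering disabled, using the :!buffer
# adverb mentioned above (later Rakudo releases document this as :out-buffer).
my $log = open 'out.log', :w, :!buffer;
$log.say: 'this should reach the file immediately';
$log.close;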
This might not have worked yet when you asked the question, but:
$*ERR.out-buffer = False;
$*ERR.say: "1. This is an error";
$*OUT.say: "2. This is standard out";
It's a bit hard to find, but it's documented here.
Works for me in Rakudo Star 2017.10.
Rakudo doesn't support autoflush (yet). There's a note in 5to6-perlvar under the $OUTPUT_AUTOFLUSH entry.
raiph posted a comment elsewhere with a #perl6 IRC log search showing that people keep recommending autoflush and other people keep saying it's not implemented. Since it's not a documented method (although flush is), I suppose we'll have to live without it for a bit.
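In the meantime, since flush is documented, one workaround is simply to flush by hand after each write. A minimal sketch:
# Workaround sketch: call .flush explicitly after each say,
# since IO::Handle.flush is documented even though autoflush isn't.
$*ERR.say: "1. This is an error";
$*ERR.flush;
$*OUT.say: "2. This is standard out";
$*OUT.flush;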
If you're primarily interested in STDOUT and STDERR, the following seems to reopen them without buffering (auto-flushed):
$*OUT = $*OUT.open(:!buffer);
$*ERR = $*ERR.open(:!buffer);
This isn't thoroughly tested yet, and I'm surprised this works. It's a funny API that lets you re-open an open stream.
I'm trying to print some RFID tags and retrieve their TIDs to store them in my system and know which tags have been printed. Right now I'm reading the TID and sending it back to my computer (connected via USB to my ZT421 printer) with the following code:
^RFR,H,0,12,2^FN0^FS^FH_^HV0,24,,_0D_0A,L^FS
^RFW,H,2,12,1^FD17171999ABABABAAAAAAAAAB^FS
This is repeated for each tag that I'm printing. However, when printing 10 tags, I only get 9 TIDs. If after that I try to print 7 tags, I still get 9 TIDs. To be honest I'm a bit lost now, because even the code examples from the ZPL manual (I've also tried the ^RI instruction) don't seem to work.
The communication with the printer is being done through Zebra Setup Utilities' direct communication tool.
I tried to retrieve each printed tag TID with:
^RFR,H,0,12,2^FN0^FS^FH_^HV0,24,,_0D_0A,L^FS
^RFW,H,2,12,1^FD17171999ABABABAAAAAAAAAB^FS
but I always get 9 TIDs.
I also tried getting the TID with the ZPL manual example for the ^RI command:
^XA
^FO20,120^A0N,60^FN0^FS
^RI0,,5^FS
^HV0,,Tag ID:^FS
^XZ
And I got absolutely nothing returned to the computer, just a message saying "Tag ID:" with no value shown.
I would really appreciate some help with this...
Thanks in advance!
I've fixed the issue, but I'm going to leave the solution here just in case someone else is facing the same problem.
I thought that maybe it wasn't a code issue but something related to the computer-printer communication, and it turned out to be the case. The Zebra Setup Utilities program has a button that says "options". If you click it, a new screen opens where you can configure how many seconds the program waits for the printer's response (in this case over USB). By default it's set to 5; I changed it to 100, which is the maximum. This means that instead of printing and retrieving the TIDs of only 6-9 tags, I can now do it for about 100.
This is not ideal, because in my case it meant creating 25 files for the 2,500 tags whose TIDs I had to print and store, but it's far better than before.
The code is hosted at https://github.com/yeahnoob/perl6-perf, as follows:
use v6;
my $file = open "wordpairs.txt", :r;
my %dict;
my $line;
repeat {
    $line = $file.get;
    my ($p1, $p2) = $line.split(' ');
    if ?%dict{$p1} {
        %dict{$p1} = "{%dict{$p1}} {$p2}".words;
    } else {
        %dict{$p1} = $p2;
    }
} while !$file.eof;
It runs well when "wordpairs.txt" is small.
But when the "wordpairs.txt" file is about 140,000 lines (two words per line), it runs very, very slowly and doesn't finish even after 20 seconds of running.
What's the problem with it? Is there any fault in the code?
Thanks for any help!
The following content was added 2014-09-04. Thanks for the many suggestions from the SE answers and from #perl6 on Freenode IRC.
The code (as of 2014-09-04):
my %dict;
grammar WordPairs {
    token word-pair { (\S*) ' ' (\S*) "\n" }
    token TOP { <word-pair>* }
}
class WordPairsActions {
    method word-pair($/) { %dict{$0}.push($1) }
}
my $match = WordPairs.parse(slurp, :actions(WordPairsActions));
say ?$match;
Running time (for now):
$ time perl6 countpairs.pl wordpairs.txt
True
The pairs count of the key word "her" in wordpairs.txt is 1036
real 0m24.043s
user 0m23.854s
sys 0m0.181s
$ perl6 --version
This is perl6 version 2014.08 built on MoarVM version 2014.08
This test's time performance is not reasonable for now (the equivalent Perl 5 code takes only about 160 ms), but it is much better than my original Perl 6 code. :)
PS. The whole thing, including the original test code, patch and sample text, is on GitHub.
I've tested this with code very similar to Christoph's using a file containing 10,000 lines. It takes around 15 seconds, which as you say, is significantly slower than Perl 5. I suspect that the code is slow because something this code uses hasn't seen as much optimisation effort as other parts of Rakudo and MoarVM have received recently. I'm sure that the performance of the code will improve dramatically over the next few months as whatever is slow sees more attention.
When trying to determine why some Perl 6 code is slow I suggest running perl6 on MoarVM with --profile to see whether it helps you find the bottleneck. Unfortunately, with this code it'll point to rakudo internals rather than anything you can improve.
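For example, assuming a MoarVM-backed perl6, an invocation along these lines writes a profile report (typically an HTML file) that you can open in a browser:
$ perl6 --profile countpairs.pl wordpairs.txt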
It's certainly worth talking to #perl6 on irc.freenode.net as they'll have the knowledge to offer an alternative solution and will be able to improve its performance in the future.
Rakudo isn't exactly known for its stellar performance.
Using more idiomatic code might or might not help:
my %dict;
for open('wordpairs.txt', :r).lines {
    my ($key, @words) = .words;
    push %dict{$key}, @words;
}
You could also check the other backends (Rakudo runs on MoarVM, Parrot and JVM) to see if it is equally slow everywhere.
It would be interesting to know whether it's IO or processing that's slow, e.g. via
my %dict;
say 'start IO';
my @lines = eager open('wordpairs.txt', :r).lines;
say 'done IO';
say 'start processing';
for @lines { ... }
say 'done processing';
I believe there's also a profiler available, if you want to dig into the issue yourself.
I have an embedded Linux box with Perl 5.10 and a GSM modem attached.
I have written a simple Perl script to read/write AT commands through the modem's device file (/dev/ttyACM0).
If I write a simple command like "ATZ\r" to the modem and wait for a response, I receive very odd data like "\n\n\nATZ\n\n0\n\nOK\n\n\n\n\nATZ\n\n\n\n..." and the data keeps coming in. It almost seems like the response is garbled up with other data.
I would expect something like "ATZ\nOK\n" (if echo is enabled).
If I send the "ATZ" command manually with e.g. minicom, everything works as expected.
This leads me to think it might be some kind of Perl buffering issue, but that's only a guess.
I open the device in Perl like this (I do not have Device::SerialPort on my embedded Linux Perl installation):
open(FH, "+<", "/dev/ttyACM0") or die "Failed to open com port $comport";
and read the response one byte at a time with:
while (1) {
    my $response;
    read(FH, $response, 1);
    printf("hex response '0x%02X'\n", ord $response);
}
Am I missing some initialization or something else to get this right?
Regards
Klaus
I don't think you need the while loop. This code should send the ATZ command, wait for the response, then print the response:
open(FH, "+>", "/dev/ttyACM0") or die "Failed to open com port $comport";
print FH ("ATZ\n");
$R = <FH>;
print $R;
close(FH);
It may be something to do with truncation. Try changing "+>" into "+<".
Or it may be something to do with buffering, try unbuffering output after your open():
select((select(FH), $| = 1)[0]);
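If the select idiom looks too cryptic, the same effect can be had with a method call; this is just a stylistic alternative:
use IO::Handle;    # needed on older perls such as 5.10 for method calls on plain filehandles
FH->autoflush(1);  # same effect as the select/$| idiom above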
Thanks for your answer. Although it's not the explicit answer to my question, it certainly put me on the right track.
As noted by mti2935 this was indeed not a perl problem, but a mere tty configuration problem.
Using the "stty" command with the following parameters set my serial port in the "expected" mode:
-opost: Disable all output postprocessing
-crnl: Do not translate CR to NL
-onlcr: Do not translate NL to CR NL
-echo: Do not echo input (having echo enabled here as well as on the modem itself gave me double echoes, resulting in odd behaviour in my script)
It is also possible to use the combination setting "raw" to set all these parameters the correct way.
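If you'd rather do this from the Perl script itself instead of by hand, something like the following should work. It's only a sketch: it assumes an stty that accepts -F (GNU coreutils and busybox both do) and that /dev/ttyACM0 is the right device.
# Sketch: put the serial port into raw mode (no echo, no CR/NL translation)
# before opening it from Perl.
system('stty', '-F', '/dev/ttyACM0', 'raw', '-echo') == 0
    or die "stty failed: $?";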
I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a Perl/Tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash and some command line args. Each of these other scripts is available on the command line (via another command-line script) and needs to stay that way, so I can't just put all that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also remain usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash, and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;
MainLoop;
sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are: IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
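A minimal sketch of that approach using IPC::Open2, which is what the perlipc section uses (the script name and the read/write protocol here are only placeholders):
use strict;
use warnings;
use IPC::Open2;

# Sketch: connect both the child's STDIN and STDOUT to handles in the parent.
# $^X runs the child under the same perl interpreter as the parent.
my $pid = open2(my $child_out, my $child_in, $^X, 'this_other_thing.pl');
print {$child_in} "some input for the child\n";
my $reply = <$child_out>;   # beware of deadlock if the child buffers its output
close $child_in;
waitpid $pid, 0;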
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without using a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing and parsing the data back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space and no access to the other's variables.
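As a sketch of the print-and-parse idea using the core Storable module (the %info hash and the $child_in handle are placeholders carried over from the sketch above):
# Parent side: freeze the hash and send it down the child's STDIN.
use Storable qw(freeze);
print {$child_in} freeze(\%info);
close $child_in;

# Child side: slurp STDIN and thaw it back into a hash.
use Storable qw(thaw);
my $frozen = do { local $/; <STDIN> };
my %info   = %{ thaw($frozen) };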
You might avoid using a subprocess altogether by using fork() + eval() + require(). In that case, no separate perl interpreter is involved: the forked interpreter inherits the whole memory of your program, with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second Perl script would get its hash from when started from the CLI.
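A minimal sketch of the fork() idea (using do FILE for brevity rather than eval() + require(); the child inherits the parent's data, so the hash needs no serialization):
# Sketch: fork and run the other script inside the child, which already
# has its own copy of every variable, including the info hash.
my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {                  # child
    do './this_other_thing.pl' or warn "couldn't run script: ", $@ || $!;
    exit 0;
}
waitpid $pid, 0;                  # parent waits for the child to finish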
Is it in any way possible to increase the number of suggestions that your extension may show in the omnibox?
By default it looks like 5 rows is the limit. I've read about a command line switch to change the number of rows (--omnibox-popup-count) but I am really interested in dynamically being able to set this in my extension.
5 rows isn't really enough for the information my extension wants to show.
[Update 2018 - thanks version365 & Aaron!]
» Not hardcoded anymore! chrome://flags/#omnibox-ui-max-autocomplete-matches . Credit goes to @version365 answering below: http://stackoverflow.com/a/47806290/234309 « – Aaron Thoma
[Historical] detail: since the removal of the --omnibox-popup-count flag (http://codereview.chromium.org/2013008) in May 2010, the number [was] hardcoded:
const size_t AutocompleteResult::kMaxMatches = 6;
a discussion two years later on the chromium-discuss mailing list about the 'Omnibox default configuration' "concluded"
"On lower performing machines generating more result would slow down the result display. The current number look like a good balance."
[which is not a very valid argument, considering the probably infinitesimal overhead of retrieving more items than the hard-wired number]
why the heck is there no Chromium fork that removes stupid UX limitations like this (or the missing tab bar in fullscreen mode)^^
Since the --omnibox-popup-count flag no longer exists, you can use another flag, which is new.
chrome://flags/#omnibox-ui-max-autocomplete-matches lets you select a maximum of 12 rows.
In fact, there is no --omnibox-popup-count flag any more:
http://code.google.com/p/chromium/issues/detail?id=40083
So I think there is no way to enlarge the omnibox.