Capturing a module's output

Say we have this module:
unit module outputs;
say "Loaded";
And we load it like this:
use v6;
use lib ".";
require "outputs.pm6";
That will print "Loaded" when it's required. Say we want to capture the standard output of that loaded module. We could do that if it were an external process, but there does not seem to be a way to redirect $*OUT to a string or, if that's not possible, to a file. Is that so?

You could try the ecosystem module IO::String:
use v6;
use lib ".";
use IO::String;
my $buffer = IO::String.new;
with $buffer -> $*OUT {
    require "outputs.pm6";
}
say "Finished";
print ~$buffer;
Output:
Finished
Loaded
See also: If I reassigned $*OUT in Perl 6, how can I change it back to stdout?
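That link boils down to scoping: $*OUT is a dynamic variable, so if you shadow it with my inside a block, the original stdout handle comes back by itself when the block exits. A minimal sketch of that, reusing the IO::String module from above:

use IO::String;
my $buffer = IO::String.new;
{
    my $*OUT = $buffer;   # shadows stdout for this block only
    say "captured";       # lands in $buffer, not on the terminal
}
say "back on stdout";     # the outer $*OUT was never touched
print ~$buffer;           # captured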

Temporarily reassigning $*OUT so that .print calls append to a string:
my $capture-stdout;
{
    my $*OUT = $*OUT but
        role { method print (*@args) { $capture-stdout ~= @args } }
    require "outputs.pm6"  # `say` goes via above `print` method
}
say $capture-stdout; # Loaded

Related

Node.js: import a file and use its functions natively

How can I import/require a file, and then use the functions in the file natively?
Say I have file 1:
const file2 = require("./file2.js")
const text = "hello"
file2.print()
And in file 2 I have:
module.exports = {
    print: () => {
        console.log(text)
    }
}
I want to be able to use functions from another file as if they were in the original file, retaining the variables and objects created in the first file. Is this possible?
No, the modules are separate, unless you resort to assigning your variables into the global object and hoping that you can keep track of them without going insane. Don't do that.
Either
pass the data you need around (the best option most of the time), or
maybe add a third module containing the shared state you need and require() it from both file 1 and file 2 (see the sketch after this list)
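A minimal sketch of that third-module option (context.js is a hypothetical name; Node caches modules, so both files get the same object back from require()):

// context.js -- the shared state both files require()
module.exports = { text: "hello" };

// file2.js
const ctx = require("./context.js");
module.exports = {
    print: () => {
        console.log(ctx.text);
    }
};

// file1.js
const ctx = require("./context.js");
const file2 = require("./file2.js");
ctx.text = "hello";   // mutate the shared object rather than reassigning it
file2.print();        // "hello"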
No! But there is a regular pattern for shared context: you create a context object and share it. The simplest form of it is something like this:
// In file 1 -->
let myContext = {
    text: 'hello'
}
file2.print(myContext);
// In file 2 -->
module.exports = {
    print: (ctx) => {
        console.log(ctx.text)
    }
}
However, JS has some built-in support for context. Something like this:
// In file 1 -->
let myContext = {
    text: 'hello'
}
let print = file2.print.bind(myContext);
print();
// In file 2 -->
module.exports = {
    print: function() {
        console.log(this.text)
    }
}
Notice the removal of the argument and the change from an arrow function to a function expression.
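That change is load-bearing: an arrow function captures this from its lexical scope, so bind() cannot redirect it. A tiny demonstration:

const arrow = () => console.log(this && this.text);
const fn = function () { console.log(this.text); };

arrow.bind({ text: "hi" })();  // undefined -- bind has no effect on an arrow
fn.bind({ text: "hi" })();     // "hi"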

Multiple MAIN signatures

I have a package with multiple MAIN subs, and I want to define several options. My code is something like this:
package Perl6::Documentable::CLI {
    proto MAIN(|) is export {*}
    my %*SUB-MAIN-OPTS = :named-everywhere;

    multi MAIN(
        "setup"
    ) { ... }

    multi MAIN(
        "start",
        Str  :$topdir         = "doc",
        Bool :v(:verbose($v)) = False
    ) { ... }
But when I try to actually execute it with:
perl6 -Ilib bin/documentable start -v --topdir=ss
It outputs this usage message:
Usage:
bin/documentable [--topdir=<Str>] [-v|--verbose] start
I am setting %*SUB-MAIN-OPTS, but it does not seem to work either.
The simplest solution would be to export the dynamic variable %*SUB-MAIN-OPTS, but that is still Not Yet Implemented completely: the export sorta works, but the hash winds up empty on the importing side. So that's not very useful.
Rakudo calls a subroutine named RUN-MAIN when it decides there is a MAIN sub to be run. You can actually export a RUN-MAIN from your module that sets up the dynamic variable and then calls the original RUN-MAIN:
sub RUN-MAIN(|c) is export {
    my %*SUB-MAIN-OPTS = :named-anywhere;
    CORE::<&RUN-MAIN>(|c)
}
For more information about RUN-MAIN, see: https://docs.raku.org/language/create-cli#index-entry-RUN-MAIN
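From the script side nothing extra is needed; a sketch of how this plays out, assuming the module layout from the question:

# bin/documentable
use Perl6::Documentable::CLI;
# Nothing else to do here: Rakudo picks up the RUN-MAIN imported from
# the module, which sets %*SUB-MAIN-OPTS before dispatching to the MAIN
# multis, so "documentable start -v --topdir=doc" parses as intended.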

How should I handle Perl 6 $*ARGFILES that can't be read by lines()?

I'm playing around with lines(), which reads lines from the files you specify on the command line:
for lines() { put $_ }
If it can't read one of the filenames it throws X::AdHoc (one day maybe it will have better exception types so we can grab the filename with a .path method). Fine, so catch that:
try {
    CATCH { default { put .^name } }
    for lines() { put $_ }
}
So this catches the X::AdHoc error but that's it. The try block is done at that point. It can't .resume and try the next file:
try {
    CATCH { default { put .^name; .resume } } # Nope
    for lines() { put $_ }
}
Back in Perl 5 land you get a warning about the bad filename and the program moves on to the next thing.
I could filter @*ARGS first, then reconstruct $*ARGFILES if there are some arguments:
$*ARGFILES = IO::CatHandle.new:
    @*ARGS.grep( { $^a.IO.e and $^a.IO.r } ) if +@*ARGS;
for lines() { put $_ }
That works, although it silently ignores bad files. I could handle that, but it's a bit tedious to deal with the argument list myself, including - for standard input as a filename and the default when there are no arguments:
my $code := { put $_ };
@*ARGS = '-' unless +@*ARGS;
for @*ARGS -> $arg {
    given $arg {
        when '-'     { $code.($_) for $*IN.lines(); next }
        when ! .IO.e { note "$_ does not exist";    next }
        when ! .IO.r { note "$_ is not readable";   next }
        default      { $code.($_) for $arg.IO.lines()    }
    }
}
But that's a lot of work. Is there a simpler way to handle this?
To warn on bad open and move on, you could use something like this:
$*ARGFILES does role { method next-handle { loop {
    try return self.IO::CatHandle::next-handle;
    warn "WARNING: $!.message"
}}}
.say for lines
This simply mixes in a role that makes the IO::CatHandle.next-handle method re-try getting the next handle. (You can also use the but operator to mix into a copy instead; see the sketch below.)
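A minimal sketch of that but variant; the role is the same, but the mixin lands on a copy, leaving the original $*ARGFILES untouched:

my $files = $*ARGFILES but role {
    method next-handle {
        loop {
            try return self.IO::CatHandle::next-handle;
            warn "WARNING: $!.message";
        }
    }
}
.say for $files.lines;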
If it can't read one of the filenames it throws X::AdHoc
The X::AdHoc is from .open call; there's a somewhat moldy PR to make those exceptions typed, so once that's fixed, IO::CatHandle would throw typed exceptions as well.
It can't .resume
Yeah, you can only resume from a CATCH block that caught it, but in this case it's caught inside the .open call and made into a Failure, which is then received by IO::CatHandle.next-handle, and its .exception is re-.thrown.
However, even if it were resumable here, it'd simply resume at the point where the exception was thrown, not re-try with another handle, so it wouldn't help. (I looked into making it resumable, but that adds vagueness to the on-switch, and I'm not comfortable speccing that resuming Exceptions from certain places must be able to meaningfully continue; we currently don't offer such a guarantee for any place in core.)
including - for standard input as a filename
Note that that special meaning is going away in 6.d language as far as IO::Handle.open (and by extension IO::CatHandle.new) goes. It might get special treatment in IO::ArgFiles, but I've not seen that proposed.
Back in Perl 5 land you get a warning about the bad filename and the program moves on to the next thing.
In Perl 6, it's implemented as a generalized IO::CatHandle type users can use for anything, not just file arguments, so warning and moving on by default feels too lax to me.
IO::ArgFiles could be special-cased to offer such behaviour. Personally, I'm against special casing stuff all over the place and I think that is the biggest flaw in Perl 5, but you could open an Issue proposing that and see if anyone backs it.

How to script stap (SystemTap) to see if a specific process has called a specific kernel function?

Using stap, I can write a *.stp file to either track a process's actions, like:
probe process("mytest").begin
{
    printf("Caught mytest process")
}
Or to track if a kernel function is called by any process:
probe kernel.function("do_exit").call    # all processes
{
    printf("called kernel/exit.c: do_exit\n")
}
But my requirement is to track kernel function calls made by a specific process name, e.g. tracking "sys_open" called by "mytest" processes.
How do I write this .stp statement/function?
Thanks!
I found a way to do that: use a global variable holding the program name:
global prog_name = "mytest";

probe kernel.function("do_exit").call
{
    if (execname() == prog_name) {
        printf("called kernel/exit.c: do_exit\n");
    }
}
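If you'd rather not hard-code the name, SystemTap also accepts arguments on the command line; @1 expands to the first argument as a string literal. A sketch, run as stap track_exit.stp mytest (the script name is hypothetical):

probe kernel.function("do_exit").call
{
    if (execname() == @1) {
        printf("called kernel/exit.c: do_exit by %s (pid %d)\n",
               execname(), pid());
    }
}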

Multiple Greasemonkey Metablocks

I'm trying to write a Greasemonkey script for a hierarchy of websites such that I have a bunch of code modifications for http://www.foo.com/*, then more specific ones for http://www.foo.com/bar/*, and still others for http://www.foo.com/foobar/*.
Is there any way for me to write all these in the same script, or do I have to make multiple?
Is there any way for me to write all these in the same script, or do I have to make multiple?
Yes, just use those three @include lines, then in your user script do something like this (depends on the specifics of the script):
var currentURL = (document.location + '');
if (currentURL.match(/http:\/\/www\.foo\.com\/foobar\/.*/)) {
    // do stuff for page set A
} else if (currentURL.match(/http:\/\/www\.foo\.com\/bar\/.*/)) {
    // do stuff for page set B
} else if (currentURL.match(/http:\/\/www\.foo\.com\/.*/)) {
    // do stuff for page set C
}
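For completeness, a sketch of the metadata block those three @include lines would sit in (the name and namespace values are placeholders). Note that the broadest pattern already matches the other two, which is why the script still branches on the URL at run time:

// ==UserScript==
// @name       foo.com tweaks
// @namespace  http://www.example.com/
// @include    http://www.foo.com/*
// @include    http://www.foo.com/bar/*
// @include    http://www.foo.com/foobar/*
// ==/UserScript==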
One nifty trick I was shown for dealing with different functions at different sub-locations is to use the global directory of function names as a sort of virtual switchboard...
// do anything that is supposed to apply to the entire website above here.
var place = location.pathname.replace(/\/|\.(php|html)$/gi, "").toLowerCase();
// the regex converts from "foo/" or "foo.php" or "foo.html" to just "foo".
var handler;
if ((handler = window["at_" + place])) {
    handler();
}
// end of top-level code. Following is all function definitions:

function at_foo() {
    // do foo-based stuff here
}

function at_foobar() {
    // do foobar stuff here.
}
