Let's say an environment redirects stdout, ignores the output, and then uses its own IO object to write to the screen:
const supersecretstdoutname = Ref{IO}()
# ...
function environment_init()
    # ...
    supersecretstdoutname[] = stdout
    stdout = DevNull
    # ...
end
Would I have any way of finding supersecretstdoutname (or if they did something similar with stdin) without digging through source code?
Said another way, can I get a list of open IOs, or at least the ones that have access to the screen/user input?
I am creating something along the lines of a text adventure game. I have a .yaml file as my input. The file looks something like this:
node_type:
  action
title:
  Do some stuff
info:
  This does some stuff and things
script:
  'print("hello world")
  print(ret_val)
  foo.bar(True)
  ret_val = (foo.bar() == True)
  if (thing):
      print(thing)
  print(ret_val)
  '
My end goal is to have my python program run the script portion of the yaml file exactly as if it had been copy pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes, and the only one using this program so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my yaml file as a dict using pyyaml
node = yaml.safe_load(file.yaml)
Then I'm trying to use exec to run my code and hitting a lot of problems: I can't run if statements (I simply get a syntax error), and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
For some reason, it doesn't actually modify any of my main variables, even though I fed them to exec.
Is there any way for me to work around these issues, or is there an altogether better way to be doing this?
One way of doing this would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example by importlib.
You might want to encapsulate the parsed code in a function, which you can then easily call to invoke your action. Also, it would make sense to add some default imports there.
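A minimal sketch of that importlib route, under some assumptions: the file name action_module.py and the run() wrapper are invented for illustration, and the script's own names (foo, thing, ret_val) still have to be importable or passed in before run() will succeed.

import importlib.util
import textwrap

import yaml

with open("file.yaml") as f:
    node = yaml.safe_load(f)

# Wrap the script in a function so its variables live in one namespace
# and values like ret_val/thing can be handed in explicitly.
source = "def run(ret_val=None, thing=None):\n" + textwrap.indent(node["script"], "    ")

with open("action_module.py", "w") as f:
    f.write(source)

spec = importlib.util.spec_from_file_location("action_module", "action_module.py")
action_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(action_module)

# action_module.run(ret_val="test", thing=True)   # invoke the node's action

Because the script runs inside a real function, assignments such as ret_val = ... behave normally, which sidesteps the exec/locals() problem from the question.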
I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, plus some command line args. Each of these other scripts is available on the command line (via another command-line script) and needs to stay that way, so I can't just put all that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also remain usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash, and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;
MainLoop;
sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are: IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file, and have the subprocess read & parse it. You might get away without using a file, by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Anyway, printing & parsing the data back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each having its own memory space, and not being able to see each other's variables.
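Tying those pieces together, here is a minimal sketch of the parent side, assuming the child script is adapted to read a Storable-frozen hash from its STDIN (the %info contents below are invented):

use strict;
use warnings;
use IPC::Open2;
use Storable qw(nfreeze);

my %info = ( user => 'alice', mode => 'batch' );   # stand-in for the real info hash

# $^X is the perl running this script, so the child uses the same interpreter
my $pid = open2( my $child_out, my $child_in, $^X, 'this_other_thing.pl' );

binmode $child_in;
print {$child_in} nfreeze( \%info );   # serialize the hash into the child's STDIN
close $child_in;

while ( my $line = <$child_out> ) {    # collect whatever the child prints
    print "child said: $line";
}
waitpid $pid, 0;

The child side would slurp STDIN and recover the hash with Storable::thaw; when run directly from the command line it could fall back to building the hash itself.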
You might avoid using a subprocess, by using fork() + eval() + require(). In that case, no separate perl interpreter will be involved, the forked interpreter will inherit the whole memory of your program with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see from where your second perl script could get its hash when started from CLI.
In my Nodejs script I have a line, called on demand:
eval(fs.readFileSync('eval.js')+'');
I do it this way because sometimes I want to know what is going on in my "nodejs script", i.e. the contents of its variables.
So "eval.js" is usually something like:
console.dir(myVar);
The problem is that it outputs to the console. The "parent" script also prints info to the console, so the console scrolls very fast and I can't find what I want.
I am looking for a way to put all the output of the file "eval.js" into another file, "x.log".
Something like (in "parent" script):
evalFileToLog("eval.js", "x.log");
Or "eval.js":
// something what will forward stdout to "x.log"
console.dir(abc);
// blah blah blah
// something that will restore stdout to its normal behaviour, like it was before.
Thank you for your help!
You can write to another stream (eg a file or standard error) to separate your outputs, but I highly advise against doing synchronous reads of any file in your program.
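For the first option, a minimal sketch using Node's built-in Console class bound to a file stream; fileLogger and the global assignment are just one possible way to hand the logger to "eval.js":

var fs = require('fs');
var Console = require('console').Console;

// a logger whose .log()/.dir() calls go to x.log instead of the terminal
global.fileLogger = new Console(fs.createWriteStream('x.log', { flags: 'a' }));

"eval.js" can then call fileLogger.dir(myVar) instead of console.dir(myVar), leaving the parent's own console output untouched.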
Another way to get a view into your app would be to leverage the repl; with it you get a shell inside your program instead of flooding the output.
var repl = require('repl');

var bar = new MyApp();

// start a REPL and expose bar on its context
var r = repl.start({ prompt: 'app> ' });
r.context.foo = bar;
And now bar is exposed in your shell as foo.
I'm having the following problem. I want to write a Fortran 90 program that I can call like this:
./program.x < main.in > main.out
In addition to "main.out" (whose name I can set when calling the program), secondary outputs have to be written, and I want them to have names similar to either "main.in" or "main.out" (they are not actually called "main"); however, when I use:
INQUIRE(UNIT=5,NAME=sInputName)
The content of sInputName becomes "Stdin" instead of the name of the file. Is there some way to obtain the names of the files that are linked to stdin/stdout when the program is called?
Unfortunately, the point of I/O redirection is that your program doesn't have to know what the input/output files are. On unix based systems you cannot see them in the command line arguments, as the < main.in > main.out parts are processed by the shell, which uses these files to set up standard input and output before your program is invoked.
You have to remember that sometimes the standard input and output will not even be files, as they could be a terminal or a pipe. e.g.
./generate_input | ./program.x | less
So one solution is to redesign your program so that the output file is an explicit argument.
./program.x --out=main.out
That way your program knows the filename. The cost is that your program is now responsible for opening (and maybe creating) the file.
That said, on linux systems you can actually find out where your standard file handles are pointing via the special /proc filesystem. There will be a symbolic link in place for each file descriptor:
/proc/<process_id>/fd/0 -> standard_input
/proc/<process_id>/fd/1 -> standard_output
/proc/<process_id>/fd/2 -> standard_error
Sorry, I don't know fortran, but in pseudo code, checking the output file could look like:
out_name = realLink( "/proc/"+getpid()+"/fd/1" )
if( isNormalFile( out_name ) )
...
Keep in mind what I said earlier: there is no guarantee this will actually be a normal file. It could be a terminal device, a pipe, a network socket, whatever... Also, I do not know which operating systems this works on other than redhat/centos linux, so it may not be that portable; it's more of a diagnostic tool.
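For what it's worth, here is a small C sketch of that idea (Linux-only; a Fortran program would have to reach it through C interoperability or an external helper, and /proc/self/ is shorthand for the current process id):

#include <sys/stat.h>
#include <unistd.h>

/* fills buf with the file behind stdout and returns 0, or -1 if it is not a regular file */
int stdout_file_name(char *buf, size_t len)
{
    struct stat st;
    ssize_t n = readlink("/proc/self/fd/1", buf, len - 1);   /* where fd 1 points */
    if (n < 0)
        return -1;
    buf[n] = '\0';
    /* reject terminals, pipes, sockets... keep only normal files */
    if (stat(buf, &st) != 0 || !S_ISREG(st.st_mode))
        return -1;
    return 0;
}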
Maybe the intrinsic subroutines get_command and/or get_command_argument can be of help. They were introduced in fortran 2003, and either return the full command line which was used to invoke the program, or the specified argument.
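A minimal Fortran sketch of that route, assuming (as suggested above) the name is passed as a real argument rather than via redirection, since get_command_argument cannot see what the shell did with < and >:

program name_from_args
    implicit none
    character(len=256) :: out_name
    integer :: stat

    ! Redirections (< main.in > main.out) are handled by the shell and never
    ! appear here; this only works when the name is a genuine argument,
    ! e.g.  ./program.x main.out
    call get_command_argument(1, out_name, status=stat)
    if (stat /= 0) out_name = 'default.out'

    print *, 'secondary outputs will be named after: ', trim(out_name)
end program name_from_args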
How can I capture command line output in a window using ncurses?
Suppose I am executing a command like "ls"; I want to print its output in a specific window designed in ncurses. I am new to ncurses, so please help me. Thanks in advance.
One thing I can think of is using system() to execute the command, redirecting its output to a temp file:
system("ls > temp");
Then opening the file temp, reading its content and displaying it on the window.
Not an elegant solution, but it works.
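A minimal sketch of that approach, assuming curses has already been initialized and win is an existing WINDOW (both of those are assumptions, not shown here):

#include <ncurses.h>
#include <stdio.h>
#include <stdlib.h>

void show_ls(WINDOW *win)
{
    FILE *fp;
    char line[256];
    int row = 0;

    system("ls > temp");                  /* run the command, capture it to a file */

    fp = fopen("temp", "r");
    if (fp == NULL)
        return;

    while (fgets(line, sizeof line, fp) != NULL && row < getmaxy(win)) {
        mvwaddnstr(win, row++, 0, line, getmaxx(win));   /* one output line per window row */
    }

    fclose(fp);
    wrefresh(win);                        /* push the new contents to the screen */
}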
The more elegant solution might be to implement the redirect within your program. Look into the dup() and dup2() system calls (see the dup(2) manpage). So, what you would want to do is (this is essentially what the shell called by system() ends up doing):
Code Snippet:
/* headers this snippet needs */
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

char *tmpname;
int tmpfile;
pid_t pid;
int r;

tmpname = strdup("/tmp/ls_out_XXXXXX");
assert(tmpname);
tmpfile = mkstemp(tmpname);              /* create and open a unique temp file */
assert(tmpfile >= 0);

pid = fork();
if (pid == 0) {                          /* child process */
    r = dup2(tmpfile, STDOUT_FILENO);    /* point the child's stdout at the temp file */
    assert(r == STDOUT_FILENO);
    execl("/bin/ls", "ls", (char *)NULL);
    assert(0);                           /* only reached if execl() failed */
} else if (pid > 0) {                    /* parent */
    waitpid(pid, &r, 0);
    /* you can mmap(2) tmpfile here, and read from it like it was a memory buffer, or
     * read and write as normal; mmap() is nicer--the kernel handles the buffering
     * and other memory management hassles.
     */
} else {
    /* fork() failed; bail violently for this example, you can handle it differently as
     * appropriate.
     */
    assert(0);
}
/* tmpfile is the file descriptor for the ls output. */
unlink(tmpname);                         /* file stays around until close(2), for this process only */
For more picky programs (ones that care that they have a terminal for input and output), you'll want to look into pseudo ttys; see the pty(7) manpage, or google 'pty'. This would be needed if you want ls to do its multicolumn pretty-printing (ls detects when it is outputting to a file and writes one filename per line); if you want ls to do the hard work for you, you'll need a pty. Also, you should be able to set the $LINES and $COLUMNS environment variables after the fork() to get ls to pretty-print to your window size--again, assuming you are using a pty. The essential change is that you would delete the tmpfile = mkstemp(...); line, replace that and the surrounding logic with the pty opening logic, and expand the dup2() call to handle stdin and stderr as well, dup2()ing them from the pty file handles.
If the user can execute arbitrary programs in the window, you'll want to be careful of ncurses programs--ncurses translates the move() and printw()/addch()/addstr() commands into the appropriate console codes, so blindly printing the output of ncurses programs will stomp your program's output and ignore your window location. GNU screen is a good example to look into for how to handle this--it implements a VT100 terminal emulator to catch the ncurses codes, and implements its own 'screen' terminal with its own termcap/terminfo entries. Screen's subprograms are run in pseudo-terminals. (xterm and other terminal emulators perform a similar trick.)
Final note: I haven't compiled the above code. It may have small typos, but should be generally correct. If you mmap(), make sure to munmap(). Also, after you are done with the ls output, you'll want to close(tmpfile). unlink() might be able to go much earlier in the code, or right before the close() call, depending on whether you want people to see the output you're playing with; I usually put the unlink() directly after the mkstemp() call, which prevents the kernel from writing the file back to disk if the tmp directory is disk backed (this is less and less common thanks to tmpfs). Also, you'll want to free(tmpname) after you unlink() to keep from leaking memory. The strdup() is necessary, as tmpname is modified by mkstemp().
Norman Matloff shows in his Introduction to the Unix Curses Library on page five a way:
// runs "ps ax" and stores the output in cmdoutlines
runpsax()
{ FILE* p; char ln[MAXCOL]; int row,tmp;
p = popen("ps ax","r"); // open Unix pipe (enables one program to read
// output of another as if it were a file)
for (row = 0; row < MAXROW; row++) {
tmp = fgets(ln,MAXCOL,p); // read one line from the pipe
if (tmp == NULL) break; // if end of pipe, break
// don’t want stored line to exceed width of screen, which the
// curses library provides to us in the variable COLS, so truncate
// to at most COLS characters
strncpy(cmdoutlines[row],ln,COLS);
// remove EOL character
cmdoutlines[row][MAXCOL-1] = 0;
}
ncmdlines = row;
close(p); // close pipe
}
...
He then calls mvaddstr(...) to put out the lines from the array through ncurses.