In Node.js, how does one make the C-level exec call, as other languages and runtimes allow (e.g. Python's os.exec*() family or even Bash's exec), please?
Unlike the child_process module (and all the wrappers I've found for it so far), this would be meant to completely replace the current Node.js process with the exec'd process.
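For reference, the semantics I mean are those of the POSIX exec family, as in this minimal C sketch (psql is only an example target): on success the current process image is replaced outright and nothing after the call ever runs.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Replace this process image with psql; on success nothing below runs. */
    execlp("psql", "psql", (char *)NULL);
    perror("execlp");   /* reached only if the exec failed */
    return 1;
}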
What I am aiming for is to create a Node.js CLI adaptor of sorts: you provide it with one set of configuration options, and you can use it to dispatch different tools that use the same configuration, but each with different expectations about the format (some expect environment variables, some command-line arguments, some need a file, ...).
The reason why I am looking for such an exec call is that, once the tool gets dispatched, I have no expectation of the Node.js process sticking around; there's no reason for it to continue existing. At the same time, I need the tool to become the foreground process (accept all signals, control characters, ...). It seems to me that it would be possible to forward all such events from the parent Node.js process to the child tool (if I used child_process), but that seems a bit too much to implement from scratch...
Cheers!
OK, so I think I have finally got it. I am posting this as an answer as it actually shows a real code example:
const child_process = require("child_process");

// Run an interactive bash (so it grabs the controlling tty) and have it run psql.
child_process.spawnSync(
    "bash",
    ["-ic", "psql"],
    {
        stdio: "inherit",
    }
);
process.exit();
This will spin up bash (with a controlling tty) and, inside it, psql. Pressing Ctrl-C does not kill the node process; it is, as expected, sent to psql. I have no idea whether it would be possible to rewrite this code in a way where the node process terminates before psql and bash, but for my use case, this does it. Maybe it would even be possible to get it done without bash, but I think that bash -i is actually the key here.
I have a program that takes a cluster name, does some lookups, figures out what's missing, and produces an SSH invocation that will get you a prompt on a given box.
I'd like to actually run the program (meaning you're dropped at the relevant bash prompt) instead of just telling the user to run the blurb themselves, but I can't seem to find a way to do so with node, mostly out of concern for Windows or worker threads. (Non-interactive SSH sessions are basically not a thing in our environment.)
It appears that this is close enough for my needs, although I'm sure it will be somehow subtly wrong around signals or whatnot:
import {spawn} from 'child_process';

spawn('ssh',
    ['-o', 'UserKnownHostsFile=/dev/null', '-o', 'PasswordAuthentication=no', ...],
    {
        detached: false,
        stdio: 'inherit',
    }
);
The spawn() call itself returns immediately, but because stdio is inherited and the child keeps the event loop alive, your Node process will continue running until ssh exits, and any monitoring of the runtime of your commands will have a bad time... but oh well.
Is there a way to make a bash script process messages that have been sent to it using the "write" command? So for example, if a user wants to activate a feature in my script, could I make it so that they can send the script a command using the write command?
One possible method I thought of was to configure logging for a screen session and then have the bash script parse text through there, but I'm not sure if there would be a simpler or more efficient way to tackle this
EDIT: I was thinking as an alternative solution I could use a named pipe. I'm worried that it would break though if the tmp partition gets filled up completely (not sure if this would impact write as well?). I'm going to be running this script on a shared box, and every once in a while someone will completely fill up the /tmp partition and then just leave it like that until people start complaining
Hmm, you are really trying to bend a poor Unix command into doing something it was never specified for. From the man page (emphasis mine):
The write utility allows you to communicate with other users, by copying
lines from your terminal to theirs
That means that write is intended to copy lines directly onto terminals. As soon as you say, "I will dump terminal output with screen and then parse the dump file", you lose the simplicity of write (and also need disk space, with the problem of removing old lines from a sequential file).
Worse, as your script lives on its own, it could (should?) be a daemon script attached to no terminal.
So if I have correctly understood your question, your requirements are:
a script that does some tasks and should be able to respond to asynchronous requests. Common mechanisms are named pipes or network/Unix-domain sockets (a minimal FIFO sketch follows this list); less common are files dropped in a dedicated folder, with an optional signal for immediate processing. Adding lines to a sequential file, while possible, is uncommon because of access-synchronization problems.
a simple and user-friendly way for users to pass requests. OK, write is nice for that part, but much too hard to interface with, IMHO.
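For the first requirement, a named pipe (FIFO) is probably the simplest mechanism: the script creates the FIFO and blocks reading requests from it (in shell, a while-read loop over the FIFO does the same job). A rough C sketch, where the FIFO path and the request text are only placeholders:

#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void) {
    const char *fifo = "/var/tmp/myscript.fifo";   /* placeholder path */
    mkfifo(fifo, 0622);                            /* let other users write requests */

    for (;;) {
        FILE *fp = fopen(fifo, "r");               /* blocks until a writer shows up */
        if (fp == NULL)
            return 1;

        char line[1024];
        while (fgets(line, sizeof line, fp) != NULL) {
            line[strcspn(line, "\n")] = '\0';
            if (strcmp(line, "activate-feature") == 0) {
                /* handle the request here */
            }
        }
        fclose(fp);                                /* writer closed; wait for the next one */
    }
}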
If you do not want to waste time on that user-facing part and would rather use standard tools, I would recommend the mail system. It is trivial to alias a mail address to a program that will be called with the mail message as input. But I am not sure it is worth it, because the user could directly call the program with the request as input or as a command-line parameter.
So the client part could simply be a program that (a minimal C sketch follows the list):
creates a temporary file in a dedicated folder (mkstemp is your friend in C or C++, or mktemp in shell, but beware of race conditions)
writes the request to that file
optionally sends a signal to a PID, provided the script writes its own PID to a dedicated file on startup
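Something like this rough C sketch, where the spool folder, PID-file path, request text and choice of SIGUSR1 are all placeholders:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    /* 1. create a temporary file in a dedicated folder */
    char path[] = "/var/spool/myscript/request.XXXXXX";   /* placeholder folder */
    int fd = mkstemp(path);
    if (fd < 0) { perror("mkstemp"); return 1; }

    /* 2. write the request to that file */
    const char *request = "activate-feature\n";
    write(fd, request, strlen(request));
    close(fd);

    /* 3. optionally signal the script, which wrote its own PID at startup */
    FILE *pidfile = fopen("/var/run/myscript.pid", "r");  /* placeholder path */
    if (pidfile != NULL) {
        int pid;
        if (fscanf(pidfile, "%d", &pid) == 1)
            kill((pid_t)pid, SIGUSR1);
        fclose(pidfile);
    }
    return 0;
}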
I have been working on a project that uses PIDs, /proc and command-line analysis to validate processes on a system. My code had to be checked by the security guys, who managed to break it with a single line... embarrassing!
#!/usr/bin/env perl
$0="I am running wild"; # I had no clue you can do this!
system("cat /proc/$$/cmdline");
print("\n");
system("ps -ef | grep $$");
# do bad stuff here...
My questions:
I see some use cases for the above, like hiding passwords given on the command line (also a bad practice), but I see a lot more problems/issues when one can hide processes and spoof cmdline. Is there a reason it is allowed? Isn't it a system vulnerability?
How can I prevent or detect this? I have looked into /proc mount options. I also know that one can use lsof to identify spoofed processes based on unexpected behavior, but this won't work in my case. At the moment I am using a simple method that checks whether cmdline contains at least one null (\0) character, which assumes that at least one argument is present. In the above code, the spaces would need to be replaced with nulls to bypass that check, which is something I couldn't figure out how to do in Perl (assigning to $0 writes only up to the first \0).
To answer 1:
It's because of how starting a new process actually works.
You fork() to spawn a duplicate instance of your current process, and then you exec() to start the new thing. It replaces your current process with the 'new' process, and as a result it has to rewrite $0.
This is actually quite useful though when running parallel code - I do it fairly frequently when fork()ing code, because it makes it easy to spot which 'thing' is getting stuck/running hot.
E.g.:
use Parallel::ForkManager;
my $manager = Parallel::ForkManager -> new ( 10 );
foreach my $server ( @list_of_servers ) {
$manager -> start and next;
$0 = "$0 child: ($server)";
#do stuff;
$manager -> finish;
}
You can instantly see in your ps list what's going on. You'll see this sort of behaviour with many multiprocessing services like httpd.
But it isn't a vulnerability, unless you assume it's something that it's not (as you do). No more than being able to 'mv' a binary anyway.
Anyway, to answer 2... prevent or detect what? I mean, you can't tell what a process is doing from the command line, but you can't tell what it's doing otherwise anyway (there's plenty of ways to make a piece of code do something 'odd' behind the scenes).
The answer is - trust your security model. Don't run untrusted code in a privileged context, and it's largely irrelevant what they call it. Sure, you could write rude messages in the process list, but it's pretty obvious who's doing it.
I am spawning multiple sh (sipp) scripts from a Tcl script. I want to know whether those scripts will run in parallel or as child processes, because I want them to run in parallel.
If I use the threads extension, do I need to use any other packages along with it?
Thanks in advance.
Tcl can quite easily run multiple subprocesses in parallel. The way you do so depends on how you want to handle what those subprocesses do. (Bourne shell — sh — scripts work just fine as subprocesses.) None of these require threads. You can use threads too, but they're not necessary for just running subprocesses as, from Tcl's perspective at least, subprocess handling is a purely I/O-bound matter.
For more details, please narrow down (in another question) which type of subprocess handling you want to do.
Fire and Forget
If you don't care about tracking the subprocesses at all, just set them going in the background by putting a & as the last word to exec:
exec /bin/sh myscript.sh &
Keeping in Touch
To keep in touch with the subprocess, you need to open a pipeline (and use this weird stanza to do so; put the arguments inside a list with a | concatenated on the front):
set thePipe [open |[list /bin/sh myscript.sh]]
You can then read/gets from the pipe to get the output (yes, it supports fileevent and asynchronous I/O on all platforms). If you want to write to the pipe (i.e., to the subprocess's stdin) open with mode w, and to both read and write, use mode r+ or w+ (doesn't matter which, as it is a pipe and not a file). Be aware that you have to be a bit careful with pipes; you can get deadlocked or highly confused. I recommend using the asynchronous I/O style, with fconfigure $thePipe -blocking 0, but that's quite a bit different to a synchronous style of I/O handling.
Great Expectations
You can also use the Expect extension to work with multiple spawned subprocesses at once. To do that, you must save the id from each spawn in its own variable, then pass that id to expect and send with the -i option. You probably want to use expect_background.
set theId [spawn /bin/sh myscript.sh]
expect_background {
    -i $theId
    "password:" {
        send -i $theId "$mypass\r"
        # Etc.
    }
}
# Note that [expect_background] doesn't support 'timeout'
This is a bit of a support question; apologies for that.
I have an application linked with GNU readline. The application can invoke shell commands (similar to invoking tclsh through a readline wrapper). When I try to invoke the Linux less command, I get the following error:
Suspend (tty output)
I'm not an expert on terminal issues. I've tried to Google it but found no answer. Does anyone know how to solve this?
Thanks.
You probably need to investigate the functions rl_prep_terminal() and rl_deprep_terminal() documented in the readline manual:
Function: void rl_prep_terminal(int meta_flag)
Modify the terminal settings for Readline's use, so readline() can read a single character at a time from the keyboard. The meta_flag argument should be non-zero if Readline should read eight-bit input.
Function: void rl_deprep_terminal(void)
Undo the effects of rl_prep_terminal(), leaving the terminal in the state in which it was before the most recent call to rl_prep_terminal().
The less program is likely to get confused if the terminal is already in the special mode used by the Readline library and it tries to tweak the terminal into an equivalent mode. This is a common problem for programs that work with the curses library, or other similar libraries that adjust the terminal status and run other programs that also do that.
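As a minimal sketch, assuming the application keeps the terminal in Readline's mode while it dispatches shell commands (link with -lreadline; the file name and the use of system() stand in for however you actually invoke less):

#include <stdio.h>
#include <stdlib.h>
#include <readline/readline.h>

/* Called from wherever the application dispatches shell commands. */
static void run_pager(const char *file) {
    char cmd[256];
    snprintf(cmd, sizeof cmd, "less %s", file);

    rl_deprep_terminal();      /* put the terminal back into its normal mode */
    system(cmd);               /* less now starts from a sane terminal */
    rl_prep_terminal(1);       /* re-enter Readline's mode; 1 = eight-bit input */
}

int main(void) {
    run_pager("/etc/hosts");   /* placeholder file */
    return 0;
}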
Whilst counterintuitive, it may be stopped waiting for input (some OSes and shells report Stopped/Suspended (tty output) when you might expect it to refer to (tty input)). This would fit the usual behaviour of less when it stops at the end of (what it thinks is) the screen length.
Can you use cat or head instead? Or feed less some input? Or look at the less man/info pages to see which options to less might suit your requirement (e.g. w, z, F)?
Your readline application is making itself the controlling app for your tty.
When you invoke less from inside the application, it wants to be in control of the tty as well.
If you are trying to invoke less from your application to display a file for the user, you want to put the newly forked process into its own process group before calling exec. You can do this with setsid(). Then, when less calls tcsetpgrp(), it will not get thrown into the background with SIGTTOU.
When less finishes, you'll want to restore the foreground process group with tcsetpgrp() as well.
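A hedged sketch of that approach, with error handling omitted; it uses setpgid() to give the child its own process group (setsid(), mentioned above, also works and additionally starts a new session), and less plus the file name are only placeholders:

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t my_pgid = getpgrp();      /* our (currently foreground) process group */
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: move into a new process group and make it the foreground
         * group on the tty before exec'ing the pager. SIGTTOU is ignored so
         * the tcsetpgrp() call doesn't stop us while we're still background. */
        signal(SIGTTOU, SIG_IGN);
        setpgid(0, 0);
        tcsetpgrp(STDIN_FILENO, getpid());
        signal(SIGTTOU, SIG_DFL);
        execlp("less", "less", "somefile.txt", (char *)NULL);
        _exit(127);
    }

    /* Parent: mirror the setpgid()/tcsetpgrp() calls to avoid a race with
     * the child, then wait for it to finish. */
    setpgid(pid, pid);
    signal(SIGTTOU, SIG_IGN);
    tcsetpgrp(STDIN_FILENO, pid);
    waitpid(pid, NULL, 0);

    /* Restore the foreground process group. We're in a background group now,
     * so keep SIGTTOU ignored around the call. */
    tcsetpgrp(STDIN_FILENO, my_pgid);
    signal(SIGTTOU, SIG_DFL);
    return 0;
}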