Issue with fork() from a FireBreath NPAPI plugin - Linux

I am trying to fork() a new process so that I can call a separate console application.
The fork does happen and I get a new process ID, but the process is in a sleeping state and never becomes active, even after the browser exits.
I just took the sample plugin project and modified the echo method to do the fork.
A regular console application works fine with the same fork code.
Is there something different that has to be taken into account for a FireBreath plugin?
Can someone suggest what the issue might be?
The platform is Arch Linux, 64-bit.
FB::variant PluginTestVZAPI::echo(const FB::variant& msg)
{
    static int n(0);
    fire_echo("So far, you clicked this many times: ", n++);

    // fork
    pid_t pid = fork();
    if (pid == 0)        // Child
    {
        m_host->htmlLog("child process");
    }
    else if (pid < 0)    // Failed to fork
    {
        m_host->htmlLog("Failed to fork");
        m_host->htmlLog(boost::lexical_cast<std::string>(pid));
    }
    else                 // Parent
    {
        m_host->htmlLog("Parent process");
    }
    m_host->htmlLog("Child Process PID = " + boost::lexical_cast<std::string>(pid));
    // end fork

    // return "foobar";
    return msg;
}

I can't be certain but if I were you I'd try removing the htmlLog calls -- there is no way for you to access the DOM from the child process, so htmlLog won't work at all and it is quite possible that trying to use it in a forked process is causing it to go into an inactive state while it tries (unsuccessfully) to communicate with a browser process that doesn't know about it.
I don't know for certain if this can work or not, but I'd be more than a bit nervous about forking a process that is already a child process of something else; the browser owns the plugin process and communicates with it via IPC, so if you fork that process there could be a lot of code that you don't know about still running and trying to talk to the browser through a now-defunct IPC connection.
My recommendation would be to launch a separate process, but that's just me. At the very least, you absolutely cannot use anything FireBreath provides for communicating with the browser from the child process.

Related

Node JS - Cannot Kill Process Executed with Child Process Exec

We are trying to kill the process of a Chrome browser launched with Node's child_process exec command:
var cp = require('child_process');
var process = cp.exec(`"chrome.exe" --app="..."`, () => {}); // working great
But when we try
process.kill(); //nothing happens...
Does the process variable refer to the Chrome window or something else? If not, how can we get hold of the newly opened Chrome window's process, PID, etc.?
Any help would be great...
Note - we have tried using the chrome_launcher npm package, but it didn't help because we couldn't open Chrome in kiosk mode without fullscreen - but that is an issue for a different question...
Try the PID hack
We can start child processes with the {detached: true} option so that they are not attached to the main process; instead they are placed in a new process group.
Then, by calling process.kill(-pid) from the main process, we can kill all processes that are in the same process group as the child. In my case, there is only one process in that group.
var spawn = require('child_process').spawn;
var child = spawn('your-command', {detached: true});
process.kill(-child.pid);
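Since the original question targets chrome.exe on Windows, where negative-PID group kills are not supported, here is a hedged cross-platform sketch of the same idea. Note that some-command and killChildTree are placeholders of mine, not from the question:

const { spawn, execSync } = require('child_process');

const child = spawn('some-command', {
  detached: true,   // on POSIX the child becomes leader of a new process group
  stdio: 'ignore',
});

function killChildTree() {
  if (process.platform === 'win32') {
    // taskkill /T kills the whole process tree, /F forces termination.
    execSync(`taskkill /pid ${child.pid} /T /F`);
  } else {
    // A negative PID targets the child's whole process group.
    process.kill(-child.pid, 'SIGTERM');
  }
}

On POSIX, detached: true is what makes the child a group leader, which in turn is what lets the negative-PID kill reach the whole tree.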
I built a cross-platform npm package that wraps up spawning and killing child processes from Node; give it a try:
https://www.npmjs.com/package/subspawn
I am not able to add a comment, so I am saying it directly in an answer:
How to kill process with node js
If you check the link above, you will see that you need the following library:
https://www.npmjs.com/package/fkill
Usage example taken from the Stack Overflow question:
const fkill = require('fkill');

fkill(1337).then(() => {
  console.log('Killed process');
});

fkill('Safari');
fkill([1337, 'Safari']);
I also found this library to check running processes
https://github.com/neekey/ps

How to send "CTRL+C" to child process in Node.js?

I am trying to spawn a child process - vvp (https://linux.die.net/man/1/vvp). At a certain time, I need to send CTRL+C to that process.
I expect the simulation to be interrupted and to get the interactive prompt. After that, I can continue the simulation by sending commands to the child process.
So, I tried something like this:
var child = require('child_process');
var fs = require('fs');

var vcdGen = child.spawn('vvp', ['qqq'], {});

vcdGen.stdout.on('data', function(data) {
  console.log(data.toString());
});

setTimeout(function() {
  vcdGen.kill('SIGINT');
}, 400);
In that case, the child process was stopped.
I also tried vcdGen.stdin.write('\x03') instead of vcdGen.kill('SIGINT'), but it doesn't work.
Maybe it's because of Windows?
Is there any way to achieve the same behaviour as I got in cmd?
kill only really supports a rude process kill on Windows - the application signal model on Windows and *nix isn't compatible. You can't pass Ctrl+C through standard input, because it never comes through standard input - it's a function of the console subsystem (and thus you can only use it if the process has an attached console), which delivers it by creating a new thread in the child process to do its work.
There's no supported way to do this programmatically. It's a feature for the user, not the applications. The only way to do this would be to do the same thing the console subsystem does - create a new thread in the target application and let it do the signalling. But the best way would be to simply use coöperative signalling instead - though that of course requires you to change the target application to understand the signal.
If you want to go the entirely unsupported route, have a look at https://stackoverflow.com/a/1179124/3032289.
If you want to find a middle ground, there's a way to send a signal to yourself, of course. Which also means that you can send Ctrl+C to a process if your consoles are attached. Needless to say, this is very tricky - you'd probably want to create a native host process that does nothing but create a console and run the actual program you want to run. Your host process would then listen for an event, and when the event is signalled, call GenerateConsoleCtrlEvent.
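To make the coöperative-signalling suggestion concrete in Node terms: it only applies when the child is code you control, so not vvp itself. A rough sketch, where worker.js is a hypothetical script of your own:

// Parent side: fork() gives us an IPC channel to the child.
const { fork } = require('child_process');

const worker = fork(__dirname + '/worker.js'); // worker.js is hypothetical

// Instead of trying to deliver Ctrl+C, send an application-level request.
setTimeout(() => worker.send({ cmd: 'interrupt' }), 400);

// Child side (inside worker.js): handle the request cooperatively.
// process.on('message', (msg) => {
//   if (msg.cmd === 'interrupt') {
//     // pause the simulation and show an interactive prompt here
//   }
// });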

node.js multithreading with max child count

I need to write a script that takes an array of values and, in a multithreaded way (forks?), runs another script with a value from the array as a parameter, but with a maximum number of running forks, so it waits for a script to finish if more than n are already running. How do I do that?
There is a module named child_process, but I'm not sure how to get it done, as it always waits for the child to terminate.
Basically, in PHP it would be something like this (written from memory, it may contain some syntax errors):
<?php
declare(ticks = 1);

$data = file('data.txt');
$max = 20;
$child = 0;

function sig_handler($signo) {
    global $child;
    switch ($signo) {
        case SIGCHLD:
            $child -= 1;
    }
}

pcntl_signal(SIGCHLD, "sig_handler");

foreach ($data as $dataline) {
    $dataline = trim($dataline);
    while ($child >= $max) {
        sleep(1);
    }
    $child++;
    $pid = pcntl_fork();
    if ($pid) {
        // SOMETHING WENT WRONG? NEVER HAPPENS!
    } else {
        exec("php processdata.php \"$dataline\"");
        exit;
    } // fork
}

while ($child != 0) {
    sleep(1);
}
?>
After the conversation in the comments, here's how to have Node execute your PHP script.
Since you're calling an external command, there's no need to create a new thread. The Node.js event loop understands that calls to external commands are async operations, and it can execute all of them at the same time.
You can see different ways of executing an external process in this SO question (the linked answer may be the best in your case).
However, since you're already moving everything to Node, you may even consider rewriting your processdata.php script in Node.js. Since, as you explained, that script connects to remote servers and databases and uses nslookup (which you may not really need with Node.js), you won't need any separate thread: they're all async operations that Node.js excels at performing.
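If you do keep the PHP worker, a minimal sketch of the counter-based approach in Node might look like this, assuming the same data.txt and processdata.php as in the question (runNext is just an illustrative helper):

const { exec } = require('child_process');
const fs = require('fs');

const lines = fs.readFileSync('data.txt', 'utf8').split('\n')
  .map((l) => l.trim()).filter(Boolean);
const max = 20;   // maximum number of concurrent children
let running = 0;

function runNext() {
  // Start children until we hit the limit or run out of work.
  while (running < max && lines.length > 0) {
    const line = lines.shift();
    running += 1;
    // JSON.stringify is a quick way to quote/escape the argument.
    exec(`php processdata.php ${JSON.stringify(line)}`, (err) => {
      if (err) console.error(err);
      running -= 1;
      runNext();   // a slot freed up, start the next one
    });
  }
}

runNext();

exec() returns immediately and invokes the callback when the child exits, so the counter plays the role that the SIGCHLD handler plays in the PHP version.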

How to create an execve() child process with the right tty settings to run 'vi' yet still redirect IO back to the parent process?

How do I get a forked, execve() child process that can run 'vi', etc. and redirect all IO to the parent process?
I'm trying to pass shells through from an embedded Linux process to the PC software interface connected over the network. The IO for the shell process is packaged into app-specific messages for network transport over our existing protocol.
At first I was just redirecting IO using pipe2(), fork(), dup2(), and execve(). This didn't give me a tty on the remote side, so screen, etc. didn't work.
Now I'm using forkpty, and screen mostly works, but many others don't (vi, stty, etc.). It appears the current problem is that the child process doesn't control the tty.
I've been experimenting with TIOCSCTTY, but haven't had much luck.
Here's more or less what I've got:
bool ExternalProcess::launch(...)
{
    ...
    winsize winSize;
    winSize.ws_col = 80;
    winSize.ws_row = 25;
    winSize.ws_xpixel = 10;
    winSize.ws_ypixel = 10;

    _pid = forkpty(&_stdin, NULL, NULL, &winSize);
    //ioctl(_stdin, TIOCNOTTY, NULL);
    if (!_pid && (_pid != -1))
    {
        // this is the child process
        char tty[4096];
        strncpy(tty, ttyname(STDIN_FILENO), sizeof(tty));
        tty[sizeof(tty)-1] = 0;

        FILE* fp = fopen("debug.txt", "wt"); // no error checking - temporary test code
        fprintf(fp, "slave TTY %s", tty);

        //if (ioctl(_stdin, TIOCSCTTY, NULL) < 0)
        if (ioctl(STDIN_FILENO, TIOCSCTTY, NULL) < 0)
        {
            fprintf(fp, "ioctl() TIOCSCTTY %s\n", strerror(errno));
            fflush(fp);
        }
        else
        {
            fprintf(fp, "SET CONTROLLING TTY!");
            fflush(fp);
        }
        fclose(fp);

        // command, args, env populated elsewhere
        execve(command, args, env);
        ...
        // fail path
        _exit(-1);
        return false;
    }

    _stdout = _stdin;
    ...
    // enter select() loop reading/writing _stdin, _stdout
}
I am getting results in the debug file like:
slave TTY /dev/pts/5
SET CONTROLLING TTY!
but still many apps are failing with tcsetattr() errors. Am I right in thinking this is a controlling tty problem? How do I fix it?
EDIT
Minor correction: when I do the TIOCSCTTY ioctl on STDIN_FILENO, it works as shown in the debug file above, but the IO redirection back to the parent process is disrupted.
EDIT 2
Okay, I'm starting to understand this better. Looking at the kernel source for the ioctl behind tcsetattr(), the processes I am calling are being sent SIGTTIN and SIGTTOU when trying to change the tty.
Only a foreground process can do that, and mine are running as if they were background processes. I tried setting those signals to SIG_IGN after forking and before the execve(), but that didn't work. I understand the semantics of this, but in my redirection scenario it is safe for the execve()'d processes to act as foreground processes. The question is... how to make it so? I will continue to search the kernel code for clues.
Ugh! It's bash, the shell I was calling with execve().
If it detects that stderr is not attached to a tty, it enters a special mode in which child processes cause SIGTTOU.
I found a mention of this problem here.
When I stopped redirecting stderr away from the tty, it started working as planned.

Detect when parent process exits

I will have a parent process that is used to handle webserver restarts. It will signal the child to stop listening for new requests, the child will signal the parent that it has stopped listening, and then the parent will signal the new child that it can start listening. In this way, we can achieve less than 100ms of downtime for a restart at that level (I have a zero-downtime grandchild restart as well, but that is not always enough of a restart).
The service manager will kill the parent when it is time for shutdown. How can the child detect that the parent has ended?
The signals are sent using the stdin and stdout of the child process. Perhaps I can detect the end of the stdin stream? I am hoping to avoid a polling interval. Also, I would like the detection to be really quick if possible.
A simpler solution could be to register for the 'disconnect' event in the child process (this works when the child was spawned with an IPC channel, e.g. via child_process.fork):
process.on('disconnect', function() {
  console.log('parent exited');
  process.exit();
});
This answer just provides an example of the node-ffi solution that entropo proposed (as mentioned, it will work on Linux).
This is the parent process; it spawns the child and then exits after 5 seconds:
var spawn = require('child_process').spawn;
var node = spawn('node', [__dirname + '/child.js']);
setTimeout(function(){process.exit(0)}, 5000);
This is the child process (located in child.js):
var FFI = require('node-ffi');
var current = new FFI.Library(null, {"prctl": ["int32", ["int32", "uint32"]]});

// 1: PR_SET_PDEATHSIG, 15: SIGTERM
var returned = current.prctl(1, 15);

process.on('SIGTERM', function() {
  // do something interesting
  process.exit(1);
});

doNotExit = function() {
  return true;
};

setInterval(doNotExit, 500);
Without the current.prctl(1, 15) call, the child would run forever even if the parent dies. With it, the child is signaled with SIGTERM, which it handles gracefully.
Could you just put an exit listener in the parent process that signals the children?
Edit:
You can also use node-ffi (Node Foreign Function Interface) to call ...
prctl(PR_SET_PDEATHSIG, SIGHUP);
... in Linux. ( man 2 prctl )
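For completeness, here is a minimal sketch of the exit-listener idea (reusing child.js from the example above; the shutdown helper is mine). Note that none of these handlers run if the parent is killed with SIGKILL, which is why the prctl(PR_SET_PDEATHSIG, ...) route is more robust on Linux:

// Parent-side sketch: keep track of children and signal them on shutdown.
const { spawn } = require('child_process');

const children = [];
children.push(spawn('node', [__dirname + '/child.js'], { stdio: 'inherit' }));

function shutdown() {
  children.forEach((child) => child.kill('SIGTERM')); // ask children to stop
  process.exit(0);
}

process.on('SIGTERM', shutdown);  // e.g. the service manager stopping us
process.on('SIGINT', shutdown);
process.on('exit', () => children.forEach((child) => child.kill('SIGTERM')));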
I start Node.js from within a native OS X application as a background worker. To make Node.js exit when the parent process (which consumes Node.js's stdout) dies or exits, I do the following:
// Watch parent exit when it dies
process.stdout.resume();
process.stdout.on('end', function() {
  process.exit();
});
As easy as that, but I'm not exactly sure if it's what you've been asking for ;-)
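Since the question specifically asks about detecting the end of stdin, the same trick works on the child's stdin (a sketch; stdin must actually be a pipe from the parent for 'end' to fire):

// Child-side sketch: exit when the pipe from the parent closes.
process.stdin.resume();               // keep the stream flowing so 'end' can be emitted
process.stdin.on('end', function() {
  process.exit();
});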
