after calling exec - linux

After calling exec, is it possible to print a message? I tried and nothing happened. I read some articles about exec but couldn't find my answer. It replaces the process image with a new one but doesn't create a new process. Is that the reason? Does it wait for something? I mean, if I use it in a child process, does it wait for the child process to end?
I can give this example:
char *args[6] = { "cat", "-b", "-t", "-v", argv[1], 0 };

else if (pid == 0) {
    printf("Child Process ID:%d, Parent ID:%d, Process Group:%d\n",
           getpid(), getppid(), getgid());
    execv("/bin/cat", args);
    printf("AHMET TANAKOL\n");
}

The exec family, as you already read, replaces the process image. That is, it loads the new program, removes your program, and starts running the new program in place of your program.
None of the exec functions ever returns unless there is an error, so the printf after execv is never reached: once execv succeeds, that code no longer exists in the process.
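To make that concrete, here is a minimal, self-contained C sketch of the same situation (the usage check and error message are my additions): the printf placed after execv runs only if execv itself fails, because on success the process is already running /bin/cat and that code is gone.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {
        char *args[] = { "cat", "-b", "-t", "-v", argv[1], NULL };
        execv("/bin/cat", args);
        /* Reached only if execv failed; on success the process image
           is now /bin/cat and this code no longer exists. */
        fprintf(stderr, "execv failed: %s\n", strerror(errno));
        _exit(127);
    }
    waitpid(pid, NULL, 0); /* parent waits for cat to finish */
    return 0;
}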

Related

How does prctl, set_pdeathsig work in perl?

perl-5.24.0 on RH7
I'd like a forked process to kill itself when it determines that its parent has died. I've read that I can use Linux::Prctl's set_pdeathsig() to do that, but my test of this doesn't seem to work.
#!/usr/bin/env perl
use strict;

my $pid = fork();
die if not defined $pid;

if ($pid == 0) {
    do_forked_steps();
}

print "====PARENT===\n";
print "Hit <CR> to kill parent.\n";
my $nocare = <>;
exit;

sub do_forked_steps {
    system("/home/dgauthie/PERL/sub_fork.pl");
}
And sub_fork.pl is simply...
#!/usr/bin/env perl
use strict;
use Linux::Prctl;
Linux::Prctl::set_pdeathsig(1);
sleep(300);
exit;
(I believe passing "1" to set_pdeathsig means SIGHUP, but I also tried "9" with the same results.)
When I run the first script, I can see both procs using ps in another window. When I press Enter in the script to kill it, I can see that proc go away, but the second one, the forked process, remains.
What am I doing wrong?
You have three processes, not two, because system forks. They are:
1. The parent process in the parent script ($pid != 0), which waits on <> and calls exit.
2. The child process, created by fork in the parent script, which calls system. system forks and then waits for its own child to exit before returning.
3. The child process created by system, which execs your child script, calls prctl, and sleeps.
When you press enter, process #1 dies, but process #2 does not, and since process #2 is the parent of process #3, the PDEATHSIG is never invoked.
Changing system to exec in your first script, so that a third process isn't created, causes the PDEATHSIG to fire in your toy problem, but without more information it isn't clear if that's suitable in the "real world" version of what you're trying to do.
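To make the PDEATHSIG mechanics concrete, here is a small C sketch (C rather than Perl; the choice of SIGTERM and the timings are mine): the kernel delivers the requested signal to the child when its immediate parent exits, which is exactly why the extra process that system inserts in between breaks the chain.

#include <signal.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Ask the kernel to send us SIGTERM when our parent dies. */
        prctl(PR_SET_PDEATHSIG, SIGTERM);
        /* Guard against the race where the parent already exited
           between fork() and prctl(). */
        if (getppid() == 1)
            _exit(0);
        sleep(300); /* cut short by SIGTERM as soon as the parent exits */
        _exit(0);
    }
    sleep(2); /* parent exits here; the child then receives SIGTERM */
    return 0;
}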

Spawn Expect from a perl thread

I am working on a script which needs to spawn an Expect process periodically (every 5 minutes) to do some work. Below is the code that spawns an Expect process and does some work. The main process of the script is doing other work at all times (for example, it may wait for user input), so I am calling this function spawn_expect from a thread that keeps calling it every 5 minutes, but the issue is that Expect is not working as expected.
If, however, I replace the thread with another process (that is, if I fork and let one process take care of spawning Expect while the other does the main work of the script, for example waiting at a prompt), then Expect works fine.
My question is: is it possible to have a thread spawn an Expect process, or do I have to resort to using a process for this work? Thanks!
sub spawn_expect {
    my $expect = Expect->spawn($release_config{kinit_exec});
    my $position = $expect->expect(10,
        [qr/Password.*: /, sub { my $fh = shift; print $fh "password\n"; }],
        [timeout => sub { print "Timed out"; }]);
    # if this function is run via a process, $position is defined;
    # if it is run via a thread, it is not defined
    ...
}
Create the Expect object beforehand (not inside a thread) and pass it to a thread
my $exp = Expect->spawn( ... );
$exp->raw_pty(1);
$exp->log_stdout(0);

my ($thr) = threads->create(\&login, $exp);
my @res = $thr->join();
# ...

sub login {
    my $exp = shift;
    my $position = $exp->expect( ... );
    # ...
}
I tested with multiple threads, where one uses Expect with a custom test script and returns the script's output to the main thread. Let me know if I should post these (short) programs.
When the Expect object is created inside a thread it fails for me, too. My guess is that in that case it can't set up its pty the way it normally does.
Given the clarification in a comment, I'd use fork for the job, though.

Callback when a child_process has successfully processed a signal

I need to verify that a child_process has successfully been killed because I cannot execute the next action if that process is still alive.
var proc = require('child_process');
var prog = proc.spawn('myprog', ['--option', 'value']);
prog.stdout.on('data', function(data) {
// Do something
});
Somewhere else in the code, when a certain event occurs and a certain condition holds, I need to kill prog:
prog.kill('SIGHUP');
// Only when the process has successfully been killed execute next
// Code...
Since kill is probably async, I am using q. I would like to use q on kill, but kill does not have a callback that is executed once the signal has been successfully processed.
How can I do this?
Possible idea
What if I send a message to prog, and prog kills itself when it receives that message? How can I tell a process to kill itself?
Wouldn't child_process.exec() with the killSignal option and a callback fit your needs?
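The underlying idea, sketched in C rather than Node (this is not the child_process API, just the POSIX analogue): you know the kill has taken effect when the child actually exits, which the parent observes by waiting on it. In Node the equivalent observation point is the child's exit/close event, or the callback of child_process.exec().

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: stands in for "myprog"; sleep until a signal arrives. */
        pause();
        _exit(0);
    }
    sleep(1);
    kill(pid, SIGHUP);        /* ask the child to terminate */
    int status;
    waitpid(pid, &status, 0); /* returns only once the child is really gone */
    printf("child has exited; safe to run the next action\n");
    return 0;
}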

How to create an execve() child process with the right tty settings to run 'vi' yet still redirect IO back to the parent process?

How do I get a forked, execve() child process that can run 'vi', etc. and redirect all IO to the parent process?
I'm trying to pass shells through from an embedded Linux process to the PC software interface connected over the network. The IO for the shell process is packaged into app-specific messages for network transport over our existing protocol.
First I was simply redirecting IO using pipe2(), fork(), dup2(), and execve(). This didn't give me a tty on the remote side, so screen, etc. didn't work.
Now I'm using forkpty, and screen mostly works, but many others don't (vi, stty, etc.). It appears the current problem is that the child process doesn't control the tty.
I've been experimenting with TIOCSCTTY, but haven't had much luck.
Here's more or less what I've got:
bool ExternalProcess::launch(...)
{
    ...
    winsize winSize;
    winSize.ws_col = 80;
    winSize.ws_row = 25;
    winSize.ws_xpixel = 10;
    winSize.ws_ypixel = 10;
    _pid = forkpty(&_stdin, NULL, NULL, &winSize);
    //ioctl(_stdin, TIOCNOTTY, NULL);
    if (!_pid && (_pid != -1))
    {
        // this is the child process
        char tty[4096];
        strncpy(tty, ttyname(STDIN_FILENO), sizeof(tty));
        tty[sizeof(tty)-1] = 0;
        FILE* fp = fopen("debug.txt", "wt"); // no error checking - temporary test code
        fprintf(fp, "slave TTY %s", tty);
        //if (ioctl(_stdin, TIOCSCTTY, NULL) < 0)
        if (ioctl(STDIN_FILENO, TIOCSCTTY, NULL) < 0)
        {
            fprintf(fp, "ioctl() TIOCSCTTY %s\n", strerror(errno));
            fflush(fp);
        }
        else
        {
            fprintf(fp, "SET CONTROLLING TTY!");
            fflush(fp);
        }
        fclose(fp);
        // command, args, env populated elsewhere
        execve(command, args, env);
        ...
        // fail path
        _exit(-1);
        return false;
    }
    _stdout = _stdin;
    ...
    // enter select() loop reading/writing _stdin, _stdout
}
I am getting results in the debug file like:
slave TTY /dev/pts/5
SET CONTROLLING TTY!
but still many apps are failing with tcsetattr() errors. Am I right in thinking this is a controlling tty problem? How do I fix it?
EDIT
Minor correction. When I do the ioctl TIOCSCTTY on STDIN_FILENO, then it works as in the debug file above, but the IO redirection back to the parent process is disrupted.
EDIT 2
Okay, I'm starting to understand this better. Looking at the kernel source for the ioctl behind tcsetattr(), the processes I am calling are being sent SIGTTIN and SIGTTOU when they try to change the tty.
Only a foreground process can do that, and these are running as if they were background processes. I tried setting those signals to SIG_IGN after forking and before the execve(), but that didn't work. I understand the semantics of this, but in my redirection scenario it is safe for the execve()'d processes to act as foreground processes. The question is... how to make it so? I will keep searching the kernel code for clues.
Ugh! It's bash, the shell I was calling with execve().
If bash detects that stderr is not attached to a tty, it enters a special mode in which child processes cause SIGTTOU.
I found a mention of this problem here.
When I stopped redirecting stderr away from the tty, it started working as planned.
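For reference, here is a stripped-down C sketch of the arrangement that works (hypothetical and heavily simplified from the code above; link with -lutil for forkpty): the child keeps the slave pty as its controlling terminal and as stdin, stdout, and stderr, with nothing redirected away from it, and the parent relays bytes through the master fd.

#include <pty.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int master;
    struct winsize ws = { .ws_row = 25, .ws_col = 80 };

    pid_t pid = forkpty(&master, NULL, NULL, &ws);
    if (pid < 0) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {
        /* Child: forkpty already made the slave pty the controlling
           terminal of a new session and put it on stdin/stdout/stderr. */
        execlp("vi", "vi", (char *)NULL);
        _exit(127);
    }
    /* Parent: relay terminal I/O through 'master' (in the real code,
       inside the select() loop that feeds the network protocol). */
    char buf[256];
    ssize_t n = read(master, buf, sizeof buf);
    if (n > 0)
        (void)write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}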

issue with fork() from a firebreath npapi plugin

I am trying to fork() a new process so that I can call a separate console application.
The fork does happen fine and I get a new process ID, but the process is in a sleeping state and never becomes active, even after the browser exits.
I just took the sample plugin project and modified the echo method to do the fork.
A regular console application works fine with the same fork code.
Is there something different that has to be taken into account for a FireBreath plugin?
Can someone suggest what might be the issue?
The platform is Arch Linux, 64-bit.
FB::variant PluginTestVZAPI::echo(const FB::variant& msg)
{
    static int n(0);
    fire_echo("So far, you clicked this many times: ", n++);

    // fork
    pid_t pid = fork();
    if (pid == 0) // Child
    {
        m_host->htmlLog("child process");
    }
    else if (pid < 0) // Failed to fork
    {
        m_host->htmlLog("Failed to fork");
        m_host->htmlLog(boost::lexical_cast<std::string>(pid));
    }
    else // Parent
    {
        m_host->htmlLog("Parent process");
    }
    m_host->htmlLog("Child Process PID = " + boost::lexical_cast<std::string>(pid));
    // end fork

    // return "foobar";
    return msg;
}
I can't be certain but if I were you I'd try removing the htmlLog calls -- there is no way for you to access the DOM from the child process, so htmlLog won't work at all and it is quite possible that trying to use it in a forked process is causing it to go into an inactive state while it tries (unsuccessfully) to communicate with a browser process that doesn't know about it.
I don't know for certain if this can work or not, but I'd be more than a bit nervous about forking a process that is already a child process of something else; the browser owns the plugin process and communicates with it via IPC, so if you fork that process there could be a lot of code that you don't know about still running and trying to talk to the browser through a now-defunct IPC connection.
My recommendation would be to launch a separate process, but that's just me. At the very least, you absolutely cannot use anything FireBreath provides for communicating with the browser from the child process.
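As a hedged sketch of that recommendation (the double-fork detail and the program name myconsoleapp are my own choices, not anything FireBreath provides): in the forked child do nothing except exec the console application, never touching plugin or browser state, and double-fork so the plugin process is not left with an unreaped zombie.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int launch_detached(const char *path, char *const argv[])
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* First child: fork again so the grandchild is reparented to
           init and the plugin never has to wait on it. */
        pid_t grandchild = fork();
        if (grandchild == 0) {
            execv(path, argv); /* exec immediately; no plugin code runs here */
            _exit(127);        /* exec failed */
        }
        _exit(grandchild < 0 ? 1 : 0);
    }
    /* Back in the plugin process: reap the short-lived first child. */
    int status;
    waitpid(pid, &status, 0);
    return 0;
}

int main(void)
{
    char *const argv[] = { (char *)"myconsoleapp", (char *)"--option",
                           (char *)"value", NULL };
    return launch_detached("/usr/local/bin/myconsoleapp", argv);
}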
