Let's say I spawn a process PO through popen (READ ONLY) from a process PA. I then pclose() the pipe on PA's side.
On PO's side, how do I determine if stdout is still available without executing a write() ?
Note that I have tried catching SIGPIPE on PO's side to no avail.
UPDATED: I tried using fstat(1, &buf) without success.
UPDATED: The reason I need to detect this condition from PO's side is that I do not have access to PO's PID from PA (and hence can't kill it). Furthermore, I'd like PO to be more robust in the face of failures of PA, i.e. to exit by itself.
RESOLUTION: I went ahead and used socketpair() and fork(). Trying to control a process through popen() turned out to be a nightmare (for me at least). A big thanks to everyone who contributed!
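For anyone hitting the same wall, the shape of the solution (a minimal sketch, not my actual code; error handling stripped) was roughly this. The key point is that a socket, unlike a pipe's write end, becomes readable with EOF when the peer closes, so PO can detect PA's exit without writing:

#include <stdio.h>
#include <unistd.h>
#include <poll.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                    /* child plays the role of PO */
        close(sv[0]);
        dup2(sv[1], STDOUT_FILENO);       /* stdout now goes to the socket */
        for (;;) {
            /* Poll our own stdout: when PA closes its end, the socket
               becomes readable and recv() returns 0 (EOF). */
            struct pollfd pfd = { .fd = STDOUT_FILENO, .events = POLLIN };
            char c;
            if (poll(&pfd, 1, 1000) > 0 &&
                recv(STDOUT_FILENO, &c, 1, MSG_DONTWAIT) == 0)
                _exit(0);                 /* PA is gone: exit by ourselves */
            printf("still alive\n");
            fflush(stdout);
        }
    }

    close(sv[1]);                         /* parent plays the role of PA */
    char buf[128];
    read(sv[0], buf, sizeof buf);         /* consume some output ... */
    close(sv[0]);                         /* ... then "pclose": PO notices */
    sleep(2);
    return 0;
}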
Mmm... pclose() is supposed to wait for PO to finish before closing the pipe. In the meantime, PO can keep writing to its end of the pipe, at least up to 4096 bytes (ulimit -p times 512), and then it should simply block.
Perhaps you will have to switch to pipe()/fork()/dup2()/close() if you want more control. If this is what you want, let me know and I'll post some code.
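To give you an idea in the meantime, the skeleton would look something like this (error handling omitted; ls -l is just a stand-in for whatever PO actually runs):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    pipe(fds);                            /* fds[0]: read end, fds[1]: write end */

    pid_t pid = fork();
    if (pid == 0) {                       /* child becomes PO */
        close(fds[0]);                    /* child only writes */
        dup2(fds[1], STDOUT_FILENO);      /* its stdout feeds the pipe */
        close(fds[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                       /* only reached if exec fails */
    }

    close(fds[1]);                        /* parent (PA) only reads */
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stderr);
    close(fds[0]);                        /* from now on PO's writes raise SIGPIPE */
    waitpid(pid, NULL, 0);                /* this is what pclose() does for you */
    return 0;
}

Unlike popen(), here you keep the child's PID, so PA can also kill() it if needed.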
PA is the consumer of information (hence it does popen() and pclose()).
PO is the provider, and hence the server - in this case it only knows that it is writing to stdout, but cannot tell what stdout is bound to. So in this case PO is not supposed to know too much about stdout.
The EOF detection should happen at the PA program.
Could you post a few more details about why you need to do it in PO?
I'm working on a program that simulates scheduling of processes from creation to completion. I need to know whether a process can move back from the ready queue to the job queue (in any case, perhaps as an exception).
I'm not sure what you mean by "job queue". A process is either:
running (in that case no need to do anything)
sleeping, that means that the process is waiting for an input or an output. You can't force it to “wake up”. It'll wake up when the input or output operation it wants to make is possible.
stopped, which means the process is currently suspended. There are four different signals that cause it:
SIGTSTP, most of the time triggered by Ctrl+Z. The process can be resumed with the fg command.
SIGSTOP, meaning it has been forcibly stopped. You can't do much about that one (it cannot be caught or ignored).
SIGTTIN and SIGTTOU, but I don't have the knowledge for those two.
So you can look into the fg command; that might help you.
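To make the running/stopped distinction concrete, here is a small sketch (my own illustration, not part of any scheduler) that stops and resumes a child with the same signals Ctrl+Z and fg rely on:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)                       /* child: pretend to be a busy process */
        for (;;) { puts("working"); sleep(1); }

    sleep(2);
    kill(pid, SIGSTOP);                 /* child is now "stopped": no CPU at all */
    puts("-- child stopped for 3 seconds --");
    sleep(3);
    kill(pid, SIGCONT);                 /* what fg does: child runs again */
    sleep(2);
    kill(pid, SIGKILL);                 /* clean up the demo */
    waitpid(pid, NULL, 0);
    return 0;
}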
NB: sorry for my bad English.
I am using an embedded system which runs Linux. When I run a compiled C program in the foreground, it works correctly. However, when I add the '&' after the program call, to make it run as a job in the background, certain features do not work correctly. The main feature which stops working is the use of the 'read' function (unistd.h), used to read from a socket.
Does running a process in the background reduce its permissions?
What else could cause this behaviour?
Edit:
The function uses the 'select' and 'read' functions to read from a socket used for the reception of CANbus message frames. When data is received, we analyse it and 'echo' a string into a .txt file, to act as a datalogger. When run in the foreground, the file is created and appended to successfully, but when in the background, the file is not created/appended to.
The only difference between running a process in the foreground or background is the interaction with your terminal.
Typically when you background a process its stdin gets disconnected (it no longer reads input from your keyboard) and you can no longer use keyboard shortcuts like Ctrl-C (which sends SIGINT) or Ctrl-D (which sends EOF) on the process.
Other than that nothing changes; no permissions or priorities are altered.
No, a process doesn't have its permissions changed when going into the background.
Internally, what happens (in shells without job control, such as a plain sh script) is that before the process's code starts executing, file descriptor 0 (stdin) is redirected from /dev/null instead of the usual terminal; descriptors 1 and 2 (stdout, stderr) still point at the terminal. In an interactive shell with job control, all three stay attached to the terminal.
Similarly, if you use > /file/path, the stdout descriptor will point to that particular file.
You can verify this with
ls -l /proc/process_number/fd
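You can also check from inside the program itself; a quick sketch (Linux-specific, since it relies on /proc):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char link[64], target[4096];

    for (int fd = 0; fd <= 2; fd++) {        /* stdin, stdout, stderr */
        snprintf(link, sizeof link, "/proc/self/fd/%d", fd);
        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n < 0) { perror("readlink"); continue; }
        target[n] = '\0';
        printf("fd %d -> %s\n", fd, target); /* e.g. /dev/pts/0 or /dev/null */
    }
    return 0;
}

Run it in the foreground and in the background and compare the output.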
I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that pretty much a second after running such a command I get a note that my process received a stop signal. If I do
$ jobs
I'll get the list with my example process with a little note "Stopped". Is it really stopped and not working at all in the background? How exactly does it work? I'm getting mixed information from the Internet.
In Linux and other Unix systems, a job that is running in the background, but still has its stdin (or std::cin) associated with its controlling terminal (a.k.a. the window it was run in) will be sent a SIGTTIN signal, which by default causes the program to be completely stopped, pending the user bringing it to the foreground (fg %job or similar) to allow input to actually be given to the program. To avoid the program being paused in this way, you can either:
Make sure the program's stdin channel is no longer associated with the terminal, by either redirecting it to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup") - this normally causes the program to be terminated, but can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others), and saves the output for you to look at to determine if there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
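If you want to see the SIGTTIN mechanism in isolation, here is a tiny repro (compile it, run it with &, then run jobs and it will show as "Stopped"):

#include <stdio.h>

int main(void)
{
    char buf[64];
    /* In the background of an interactive shell, this read from the
       terminal triggers SIGTTIN, which stops the job by default. */
    if (fgets(buf, sizeof buf, stdin))
        printf("got: %s", buf);
    return 0;
}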
Yes, it really is stopped and no longer working in the background. To bring it back to life, type fg job_number.
From what I can gather:
Background jobs are blocked from reading the user's terminal. When one tries to do so it will be suspended until the user brings it to the foreground and provides some input. "reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
Just enter fg; that will also resolve the "stopped jobs" error you would otherwise get when you try to exit the shell.
I have a question, and I couldn't find help anywhere on Stack Overflow or the web.
I have a program (celery distributed task queue) and I have multiple instances (workers) each having a logfile (celery_worker1.log, celery_worker2.log).
The important errors are stored to a database, but I like to tail these logs from time to time when running new operations to make sure everything is ok (the loglevel is lower).
My problem: these logs are taking a lot of disk space.
What I would like to do: be able to "watch" the logs (tail -f) only when I need it, without them taking a lot of space.
My ideas until now:
outputting logs to stdout, not to a file: not possible here since I have many workers outputting to different files, but I want to tail them all at once (tail -f celery_worker*.log)
using logrotate: it is an "OK" solution for me. I don't want this to be a daily task, but I'd rather not set up a per-minute crontab for it either; what's more, the server is not mine, so that would mean some work on the sysadmin side
using named pipes: it looked good at first sight, but I didn't know that named pipes (Linux FIFOs) were blocking. Hence, when I don't tail -f ALL of the pipes at the same time, or when I just quit my tail, the write operations from the logger block.
Is there a way to have a non-blocking named pipe, which would just throw to stdout when tailed, and throw to /dev/null when not?
Or are there technical difficulties to such a type of pipe? If there are, what are they?
Thank you for your answers!
Have each worker log to stdout, but connect each stdout to a utility that automatically spools and rotates logs based on size or time. multilog and svlogd are examples of such. For those programs, you'd merely tail the "current" log file.
You're right that logrotate is not quite the right solution for the problem you have.
Named pipes won't work as you want. At best, your writers could fill up their pipes and then discard subsequent logs, which is the inverse of the behavior you want.
You could try a shared-memory object (see man shm_overview), or perhaps a number of them. You'd need to organise them as circular buffers, so they'd store the last N kB of your log, and whenever you read one with a reader it outputs everything to your console. This approach is adopted by busybox's syslog/logread suite (see logread.c).
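A bare-bones illustration of the circular-buffer idea (the name /celery_ring is hypothetical; single writer, no locking, so a real version would need a semaphore; link with -lrt on older glibc):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define SHM_NAME "/celery_ring"          /* hypothetical shared-memory name */
#define RING_SIZE (64 * 1024)            /* keep only the last 64 kB of log */

struct ring {
    unsigned long head;                  /* total bytes ever written */
    char data[RING_SIZE];
};

static struct ring *ring_open(void)      /* map (creating if needed) the ring */
{
    int fd = shm_open(SHM_NAME, O_RDWR | O_CREAT, 0600);
    ftruncate(fd, sizeof(struct ring));
    return mmap(NULL, sizeof(struct ring),
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}

int main(int argc, char **argv)
{
    struct ring *r = ring_open();

    if (argc > 1) {                      /* writer: ./ring "some log line" */
        for (const char *p = argv[1]; *p; p++)
            r->data[r->head++ % RING_SIZE] = *p;
        r->data[r->head++ % RING_SIZE] = '\n';
    } else {                             /* reader: ./ring  (dump recent log) */
        unsigned long i = r->head > RING_SIZE ? r->head - RING_SIZE : 0;
        for (; i < r->head; i++)
            putchar(r->data[i % RING_SIZE]);
    }
    return 0;
}

Old entries are silently overwritten, so disk space (here, memory) is bounded and writers never block, which is exactly the behaviour the FIFO approach couldn't give you.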
I have a process that is running in the background (sh script) and I wonder if it is possible to view the output of this process without having to interrupt it.
The process is run by some application, otherwise I would have attached it to a screen session for later viewing. It might take an hour to finish, and I want to make sure it's running normally with no errors.
There is already a program that uses ptrace(2) on Linux to do this, retty:
http://pasky.or.cz/dev/retty/
It works if your running program is already attached to a tty; I do not know if it will work if you run your program in the background.
At least it may give some good hints. :)
You can probably retrieve the exit code from the program using ptrace(2); otherwise just attach to the process using gdb -p <pid>, and it will be printed when the program dies.
You can also manipulate file descriptors using gdb:
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/stdout", 0600)
$2 = 1
http://etbe.coker.com.au/2008/02/27/redirecting-output-from-a-running-process/
You could try to hook into the /proc/[pid]/fd/[012] triple, but likely that won't work.
Next idea that pops to my mind is strace -p [pid], but you'll get "prettified" output. A possible solution is to do the tracing yourself by writing a tiny program that uses ptrace(2) to hook into write(2) and writes the data somewhere. It will work, but it is not done in just a few seconds, especially if you're not used to C programming.
Unfortunately I can't think of a program that does precisely what you want, which is why I give you a hint of how to write it yourself. Good luck!
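A starting point might look like this (a sketch only: x86-64 Linux, minimal error handling, naive entry/exit pairing, and you may need to relax /proc/sys/kernel/yama/ptrace_scope to attach). It stops the target at every syscall and dumps the buffer of each write(2) to fd 1:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <sys/syscall.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 1; }
    pid_t pid = (pid_t)atoi(argv[1]);

    ptrace(PTRACE_ATTACH, pid, 0, 0);
    waitpid(pid, 0, 0);

    int entering = 1;
    for (;;) {
        ptrace(PTRACE_SYSCALL, pid, 0, 0);   /* run to next syscall stop */
        if (waitpid(pid, 0, 0) < 0)
            break;

        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, 0, &regs);

        /* x86-64 ABI: syscall number in orig_rax, args in rdi/rsi/rdx */
        if (regs.orig_rax == SYS_write && regs.rdi == 1) {
            if (entering) {                  /* entry stop: buffer is valid */
                for (unsigned long i = 0; i < regs.rdx; i += sizeof(long)) {
                    long word = ptrace(PTRACE_PEEKDATA, pid,
                                       regs.rsi + i, 0);
                    unsigned long left = regs.rdx - i;
                    fwrite(&word, 1,
                           left < sizeof(long) ? left : sizeof(long), stdout);
                }
                fflush(stdout);
            }
            entering = !entering;            /* stops alternate entry/exit */
        }
    }
    return 0;
}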