I am using an embedded system which runs Linux. When I run a compiled C program in the foreground, it works correctly. However, when I add '&' after the program call to make it run as a job in the background, certain features stop working correctly. The main feature that stops working is the use of the 'read' function (unistd.h), used to read from a socket.
Does running a process in the background reduce its permissions?
What else could cause this behaviour?
Edit:
The program uses the 'select' and 'read' functions to read from a socket used for the reception of CAN bus message frames. When data is received, we analyse it and 'echo' a string into a .txt file to act as a data logger. When run in the foreground, the file is created and appended to successfully, but when run in the background, the file is not created/appended.
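For reference, a minimal sketch (not the asker's actual code) of the kind of select/read loop described, assuming an already-opened CAN/socket descriptor can_fd and a hypothetical log path /tmp/canlog.txt:

/* Minimal sketch: wait for frames with select(), read them, and append a line to a log file. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

void log_frames(int can_fd)
{
    FILE *log = fopen("/tmp/canlog.txt", "a");   /* hypothetical log path */
    if (log == NULL)
        return;

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(can_fd, &readfds);

        /* Block until the socket has data to read */
        if (select(can_fd + 1, &readfds, NULL, NULL, NULL) <= 0)
            break;

        char frame[64];
        ssize_t n = read(can_fd, frame, sizeof frame);
        if (n <= 0)
            break;

        fprintf(log, "received %zd bytes\n", n);  /* 'echo' one line per frame */
        fflush(log);                              /* push it out of the stdio buffer immediately */
    }
    fclose(log);
}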
The only difference between running a process in the foreground or the background is the interaction with your terminal.
Typically, when you background a process, its stdin gets disconnected (it no longer reads input from your keyboard) and you can no longer reach it with keyboard shortcuts like Ctrl-C (which sends SIGINT) or Ctrl-D (end-of-file).
Other than that nothing changes: no permissions or priorities are altered.
No, a process doesn't have its permissions changed when it goes into the background.
Internally, what happens is that before the process's code starts executing, file descriptors 0, 1 and 2 (stdin, stdout and stderr) are pointed at /dev/null instead of the usual files.
Similarly, if you use >/file/path, the stdout descriptor will point to that particular file.
You can verify this with
ls -l /proc/process_number/fd
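If you would rather check from inside the program itself, here is a small Linux-specific sketch that uses readlink() on /proc/self/fd to print where the three standard descriptors currently point:

/* Print the target of fd 0, 1 and 2 (e.g. /dev/pts/0, /dev/null, or a regular file). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int fd = 0; fd <= 2; fd++) {
        char path[64], target[256];
        snprintf(path, sizeof path, "/proc/self/fd/%d", fd);

        ssize_t len = readlink(path, target, sizeof target - 1);
        if (len < 0) {
            perror(path);
            continue;
        }
        target[len] = '\0';
        printf("fd %d -> %s\n", fd, target);
    }
    return 0;
}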
Related
This is under Ubuntu 20.04.
There's a script that appends to a file via shell redirection.
I want to read that file after the script's process has ended and all data has been written.
I'm using pgrep to check when the script ends (I have carefully checked that this check works).
I have noted that the file may not be fully written even after the process has ended.
From what I have read, this can happen because of buffering. A side question would be: can this actually happen, or am I misunderstanding something?
I'm thinking of using lsof/inotifywait/a loop with fuser to wait for the file to be closed. Is this the right way to manage these situations?
What I don't really understand is: if the process that opened the file has exited, who will show up as the file's "opener" in the lsof/inotifywait/fuser output?
If you're worried about the file not having been written to disk due to buffering, and it's in a process where you don't have the file descriptor, you can force the system to write it to disk with the sync <file> command or the sync function from unistd.h.
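As a minimal illustration of the unistd.h route (a sketch, assuming you simply want to flush everything because you don't hold the file descriptor yourself):

#include <unistd.h>

int main(void)
{
    /* sync() asks the kernel to write all modified filesystem buffers to disk.
       Note it flushes kernel buffers; it cannot flush another process's
       unwritten in-memory stdio buffers. */
    sync();
    return 0;
}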
I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that pretty much a second after such a command I get a note that my process received a stop signal. If I do
$ jobs
I'll get the list with my example process marked with a little note, "Stopped". Is it really stopped and not working at all in the background? How exactly does it work? I'm getting mixed info from the Internet.
In Linux and other Unix systems, a job that is running in the background but still has its stdin (or std::cin) associated with its controlling terminal (i.e. the window it was run in) will be sent a SIGTTIN signal when it tries to read from that terminal. By default this causes the program to be completely stopped, pending the user bringing it to the foreground (fg %job or similar) to allow input to actually be given to the program. To avoid the program being paused in this way, you can either:
Make sure the program's stdin channel is no longer associated with the terminal, by either redirecting it to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup") - this normally causes a program to be terminated, but this can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others), and saves the output for you to look at to determine if there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
Yes, it really is stopped and no longer running in the background. To bring it back to life, type fg job_number to resume it in the foreground (or bg job_number to let it continue in the background).
From what I can gather:
Background jobs are blocked from reading the user's terminal. When one tries to do so, it will be suspended until the user brings it to the foreground and provides some input. "Reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
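To see the stop happen, here is a tiny demonstration program (hypothetical name readdemo.c): run it as ./readdemo & and the shell will report it as stopped (SIGTTIN) as soon as the read on the terminal is attempted:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    /* Reading from the controlling terminal while in the background
       delivers SIGTTIN, which stops the process by default. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    printf("read returned %zd\n", n);
    return 0;
}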
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
Just enter fg, which will also resolve the error you get when you then try to exit.
I have an embedded application that I want a simple-minded logger for.
The system starts from a script file, which in turn runs the application. There could be various reasons that the script fails to run the application, or the application itself could fail to start. To diagnose this remotely, I need to view the stdout from the script and the application.
I tried writing a tee-like logger that would repeat its stdin to stdout, and save the text in a FIFO for later retrieval via the network. Then I naively tried
./script | ./logger
I ended up with only the script's stdout going to the logger, and the application's stdout disappearing. I had similar results trying tee.
The system is running kernel 2.4.26, and busybox.
What is going on, and how can I accomplish my desired ends?
It turns out it was working exactly as I thought it should, with one minor gotcha: stdout was being buffered, and without any fflush(stdout) calls, I never saw it. Had I been really patient, I would have suddenly seen a big gush of output when the stdout buffer filled up. A call to setlinebuf(3) fixed my problem.
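For anyone hitting the same problem, a minimal sketch of the fix, assuming you can modify the application's own C source:

#include <stdio.h>

int main(void)
{
    /* Line-buffer stdout so each line reaches the pipe/logger immediately
       instead of sitting in a full stdio buffer. setlinebuf() is a
       glibc/BSD convenience; setvbuf(stdout, NULL, _IOLBF, 0) is the
       portable equivalent. */
    setlinebuf(stdout);

    printf("application started\n");   /* flushed at the newline */
    /* ... rest of the application ... */
    return 0;
}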
Apparently, the application output doesn't end up on stdout...
The output is actually on stderr (which is usually also connected to the terminal)
./script.sh 2>&1 | ./logger
should then work
Another possibility is that the application actively disconnects from stdin/stdout (e.g. by closing and reopening file descriptors 0, 1 (and 2), or by using nohup, exec or similar utilities).
Or the script daemonizes (which also detaches it from all standard streams).
Currently I am working with an embedded system that runs Linux. I need to run multiple applications at the same time, and I would like them to be able to run through one script. A colleague had already implemented this by using a wrapper script and return codes.
wrapperScript.sh $command & > output_log.txt
wrapperScript.sh $command2 & >output_log2.txt
But the problem arises when exiting the applications. Normally, all the applications on the embedded system require the user to press q to exit. But the wrapper script, rather than doing that when it gets the kill signal or a user signal, just kills the process. This is dangerous because the wrapper script assumes that the application has the proper facilities to deal with the kill signal (that is not always the case, and it leads to memory leaks and unwanted socket connections). I have looked into automation programs such as Expect, but since I am using an embedded board, I am unable to get Expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to the programs?
I would also like the ability to maintain logs of the programs' output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go about it in this situation. There is a serious limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. There are certain trade-offs.
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
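From the shell you can then send the keystroke with echo q > command1.ctrl. If you would rather do it from embedded C, as the question mentions, a minimal sketch using the same command1.ctrl FIFO could be:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Blocks until the application end of the FIFO is open for reading */
    int fd = open("command1.ctrl", O_WRONLY);
    if (fd < 0) {
        perror("open command1.ctrl");
        return 1;
    }

    if (write(fd, "q\n", 2) != 2)   /* some applications may want "q" with no newline */
        perror("write");

    close(fd);
    return 0;
}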
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
Is it possible to do I/O with a running process?
I have multiple game servers running like this:
cd /path/to/game/server/binary
./binary arg1 arg2 ... argn &
Is it possible to write a message to a server if I know the process ID?
Something like this would be handy:
echo "quit" > process1234
Where process1234 is the process (with PID 1234).
The game server is not a binary written by me; it is a Call of Duty binary, so I can't change anything in the code.
Yes, you can start up the process with a pipe as its stdin and then write to the pipe. You can use a named or anonymous pipe.
Normally a parent process would be needed to do this: it would create an anonymous pipe and supply it to the child process as its stdin. popen() does this, and many libraries also implement it (see Perl's IPC::Open2 for example).
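A minimal sketch of the popen() approach, assuming the server is started by this helper and reads its commands from stdin (the command line ./binary arg1 arg2 is only a placeholder):

#include <stdio.h>

int main(void)
{
    /* "w" gives us a pipe connected to the child's stdin */
    FILE *server = popen("./binary arg1 arg2", "w");
    if (server == NULL) {
        perror("popen");
        return 1;
    }

    /* ... later, when we want the server to shut down ... */
    fputs("quit\n", server);
    fflush(server);

    /* pclose() waits for the child to exit and returns its status */
    return pclose(server) == -1 ? 1 : 0;
}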
Another way would be to run it under a pseudo tty, which is what "screen" does. Screen itself may also have a mechanism for doing this.
Only if the process is listening for some message somewhere. For instance, your game server can be waiting for input on a file, over a network connection, or from standard input.
If your process is not actively listening for something, the only things you can really do are halt or kill it.
Now if your process is waiting on standard input, and you ran it like so:
$ myprocess &
Then (in linux) you should be able to try the following:
$ jobs
[1]+ Running myprocess &
$ fg 1
And at this point you are typing standard input into your process.
You can only do that if the process is explicitly designed for that.
But since your example is requesting that the process quit, I'd recommend trying signals. First try to send the TERM (i.e. terminate) signal, which is the default:
kill <pid>
If that doesn't work, you can try other signals such as QUIT:
kill -QUIT <pid>
If all else fails, you can use the KILL signal. This is guaranteed (*) to stop the process, but the process will have no chance to clean up:
kill -KILL <pid>
* - in the past, kill -KILL would not work if the process was hung on a flaky network file server. Don't know if they ever fixed this.
I'm pretty sure this would work, since the server has a console on stdin:
echo "quit" > /proc/<server pid>/fd/0
You mention in a comment below that your process does not appear to read from the console on fd 0. But it must on some fd. Run ls -l /proc/<server pid>/fd/ and look for one that's pointing at /dev/pts/ if the process is running in a gnome-terminal or xterm or something.
If you want to do a few simple operations on your server, use signals, as mentioned elsewhere. Set up signal handlers in the server and have each signal perform a different action (see the sketch after this list), e.g.:
SIGINT: Reread config file
SIGHUP: quit
...
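A minimal sketch of that idea, assuming you can modify the server's source (the config-rereading and shutdown actions here are only placeholders):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t reread_config = 0;
static volatile sig_atomic_t quit_requested = 0;

static void on_sigint(int sig) { (void)sig; reread_config = 1; }
static void on_sighup(int sig) { (void)sig; quit_requested = 1; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sigemptyset(&sa.sa_mask);

    sa.sa_handler = on_sigint;          /* SIGINT: reread config file */
    sigaction(SIGINT, &sa, NULL);

    sa.sa_handler = on_sighup;          /* SIGHUP: quit */
    sigaction(SIGHUP, &sa, NULL);

    for (;;) {                          /* main server loop (placeholder) */
        if (reread_config)  { reread_config = 0; puts("rereading config"); }
        if (quit_requested) { puts("shutting down"); break; }
        sleep(1);                       /* real code would do server work here */
    }
    return 0;
}

You would then drive it from the shell with kill -INT <pid> or kill -HUP <pid>.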
Highly hackish, don't do this if you have a saner alternative, but you can redirect a process's file descriptors on the fly if you have ptrace permissions.
$ echo quit > /tmp/quitfile
$ gdb binary 1234
(gdb) call dup2(open("/tmp/quitfile", 0), 0)
(gdb) continue
open("/tmp/quitfile", O_RDONLY) returns a file descriptor to /tmp/quitfile. dup2(..., STDIN_FILENO) replaces the existing standard input by the new file descriptor.
We inject this code into the application using gdb (but with numeric constants, as #define constants may not be available), and taadaah.
Simply run it under screen and don't background it. Then you can either connect to it with screen interactively and tell it to quit, or (with a bit of expect hackery) write a script that will connect to screen, send the quit message, and disconnect.