I have come across a strange scenario: when I redirect the stdout logs of a Perl script into a log file, all the logs get written at the end of execution, when the script completes, instead of during the execution of the script.
While the script is running, if I do tail -f "filename", I can only see the log once the script has completed its execution, not while it is still running.
My script details are given below:
/root/Application/download_mornings.pl >> "/var/log/file_manage/file_manage-$(date +\%Y-\%m-\%d).txt"
But when I run it without redirecting to the log file, I can see the logs on the terminal as the script progresses.
Let me know if you need any other details.
Thanks in advance for any light you might be able to shed on what's going on.
Santosh
Perl buffers the output by default. You can say:
$| = 1;
(at the beginning of the script) to disable buffering. Quoting perldoc perlvar:
$|
If set to nonzero, forces a flush right away and after every write or
print on the currently selected output channel. Default is 0
(regardless of whether the channel is really buffered by the system or
not; $| tells you only whether you've asked Perl explicitly to flush
after each write). STDOUT will typically be line buffered if output is
to the terminal and block buffered otherwise. Setting this variable is
useful primarily when you are outputting to a pipe or socket, such as
when you are running a Perl program under rsh and want to see the
output as it's happening. This has no effect on input buffering. See
getc for that. See select on how to select the output channel. See
also IO::Handle.
You might also want to refer to Suffering from Buffering?.
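As a minimal sketch (the work loop is just a placeholder), either of the following near the top of download_mornings.pl makes each print reach the redirected log file immediately:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;              # provides the autoflush() method on file handles

$| = 1;                      # classic way: autoflush the currently selected handle (STDOUT)
# STDOUT->autoflush(1);      # equivalent, more explicit alternative

for my $i (1 .. 5) {         # placeholder work loop
    print "processing item $i\n";   # now visible in tail -f as it happens
    sleep 1;
}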
I am trying to use the suckless ii IRC client. I can listen to a channel by running tail -f on the out file. However, is it also possible for me to provide input in the same console by starting an echo or cat command?
If I background the process, it actually displays the output in this console, but that doesn't seem to be the right way. Logically, I think I need to get the fd of the console (but how do I do that?), then force the tail output to that fd and probably background it. Then use the present bash to start a cat > in.
Is it actually fine to do this, or am I creating a lot of process overhead for a simple task? In other words, piping a lot of stuff is nice, but doesn't it create a lot of overhead that ideally should live in a single process if you are going to repeat the task a lot?
However, is it also possible for me to provide input in the same console by starting an echo or cat command?
Simply no! cat writes the current content; it has no idea that the content will grow later. echo writes variables and arguments given on the command line; echo itself is not made for writing the contents of files.
If I background the process, it actually displays the output in this console, but that doesn't seem to be the right way?
If you do not redirect the output, the output goes to the console. That is the way it is designed :-)
Logically, I think I need to get the fd of the console (but how do I do that?), then force the tail output to that fd and probably background it.
As I understand it, that is the opposite direction. If you want to write to the stdin of the process, you can simply use a pipe for that. The (useless) examples below show that cat writes to the pipe and the next command reads from the pipe. You can extend this to any other pipe read/write scenario. See the link given below.
Examples:
cat main.cpp | cat /dev/stdin
cat main.cpp | tail -f
The last one will not exit, because it waits for the pipe to get more content, which never happens.
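For the ii case specifically, here is a minimal sketch; it assumes the usual ii layout where each channel directory contains an in FIFO for input and an out file for output, and '#channel' is a placeholder path:
tail -f '#channel/out' &                                # follow the channel output in the background
printf '%s\n' 'hello from the shell' > '#channel/in'    # send a single line to the channel
cat > '#channel/in'                                     # or keep typing lines; Ctrl-D stops sending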
Is it actually fine to do this, or am I creating a lot of process overhead for a simple task? In other words, piping a lot of stuff is nice, but doesn't it create a lot of overhead that ideally should live in a single process if you are going to repeat the task a lot?
I have no idea how time-critical your job is, but I believe the overhead is quite low. Doing the same things in a self-written program will not necessarily be faster. If everything is done in a single process and no file-system access is required, it will be much faster; but if you also make system calls, e.g. for file-system access, it will not be much faster, I believe. You always have to pay for the work you get.
For IO redirection please read:
http://www.tldp.org/LDP/abs/html/io-redirection.html
If your scenario is more complex, you can think of named pipes instead of IO redirection. For that you can have a look at:
http://www.linuxjournal.com/content/using-named-pipes-fifos-bash
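A minimal named-pipe round trip looks like this (paths are placeholders):
mkfifo /tmp/myfifo               # create the FIFO
cat /tmp/myfifo &                # reader: blocks until a writer opens the FIFO
echo "hello" > /tmp/myfifo       # writer: the backgrounded cat prints "hello" and exits
wait                             # wait for the reader to finish
rm /tmp/myfifo                   # clean up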
I'm using expect to execute a bunch of commands on a remote machine. Then I'm calling the expect script from a shell script.
I don't want the expect script to log the sent commands to stdout, but I do want it to log the output of the commands, so my shell script can do other things depending on those results.
log_user 0
Hides both the commands and the results, so it doesn't fit my needs. How can I tell expect to log only the results?
Hmm... I'm not sure you can do that, since the reason you see the commands you send is that the remote device echoes them back to you. This is standard procedure, done so that a user sees what he or she types when interacting with the device.
What I'm trying to say is that both the device's output to the issued commands and the echoed-back commands are part of the spawned process's output, so I don't believe you can separate one from the other.
Now that I think of it, you can probably configure a terminal not to display echoed commands... but I'm not sure how you would go about doing that with a spawned process that is not using an interactive terminal.
Let us know if you find a way; I'd be interested to know if there is one.
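One thing that might be worth trying (I have not verified it on your setup): expect's log_file command accepts an -a flag that records output even when log_user 0 has silenced the screen, so you can keep stdout quiet and still capture the session to a file. As noted above, the echoed commands will still be interleaved with the results in that capture. A rough sketch, with host, prompt and command as placeholders:
#!/usr/bin/expect -f
log_user 0                   ;# nothing from the session is echoed to stdout
log_file -a results.log      ;# ...but it is still recorded here; -a also logs output suppressed by log_user
spawn ssh user@host          ;# placeholder connection
expect "\$ "                 ;# assumed shell prompt
send "uname -a\r"
expect "\$ "
send "exit\r"
expect eof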
I am using an embedded system which runs Linux. When I run a compiled C program in the foreground, it works correctly. However, when I add '&' after the program call, to make it run as a job in the background, certain features stop working correctly. The main feature that stops working is the use of the 'read' function (unistd.h), used to read from a socket.
Does running a process in the background reduce its permissions?
What else could cause this behaviour?
Edit:
The function uses the 'select' and 'read' functions to read from a socket used for the reception of CANbus message frames. When data is received, we analyse it and 'echo' a string into a .txt file, to act as a data logger. When run in the foreground, the file is created and appended to successfully, but when run in the background, the file is not created/appended.
The only difference between running a process in the foreground or the background is the interaction with your terminal.
Typically when you background a process, its stdin gets disconnected (it no longer reads input from your keyboard) and you can no longer send keyboard-shortcut signals like Ctrl-C/Ctrl-D to the process.
Other than that nothing changes; no permissions or priorities are changed.
No, a process doesn't have its permissions changed when going into the background.
Internally, what can happen (depending on the shell and how the job is launched; for example, a shell without job control redirects a background command's stdin from /dev/null) is that before the process's code starts executing, some of the file descriptors 0, 1, 2 (stdin, stdout, stderr) are pointed at /dev/null instead of the usual files.
Similarly, if you use > /file/path, the stdout descriptor will point to that particular file.
You can verify this with
ls -l /proc/process_number/fd
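For example (myprog is a placeholder name; $! expands to the PID of the last background job):
./myprog > /var/log/myprog.log 2>&1 &   # hypothetical program, stdout/stderr redirected to a file
ls -l /proc/$!/fd                       # fds 0, 1 and 2 show where stdin/stdout/stderr point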
I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that pretty much a second after running that command I get a note that my process received a stopped signal. If I do
$ jobs
I get the list with my example process and a little note saying "Stopped". Is it really stopped and not working at all in the background? How exactly does it work? I'm getting mixed information from the Internet.
In Linux and other Unix systems, a job that is running in the background, but still has its stdin (or std::cin) associated with its controlling terminal (a.k.a. the window it was run in), will be sent a SIGTTIN signal when it tries to read from that terminal. By default this causes the program to be completely stopped, pending the user bringing it to the foreground (fg %job or similar) to allow input to actually be given to the program. To avoid the program being paused in this way, you can either:
Make sure the program's stdin channel is no longer associated with the terminal, by either redirecting it to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup") - this normally causes the program to be terminated, but that can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, the following is probably the best option, as it prevents both of the above signals (as well as a couple of others), and saves the output for you to look at to determine whether there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
Yes, it really is stopped and no longer working in the background. To bring it back to life, type fg job_number.
From what I can gather:
Background jobs are blocked from reading the user's terminal. When one tries to do so, it will be suspended until the user brings it to the foreground and provides some input. "Reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue, but because they want to check whether the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
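You can reproduce the behaviour with a trivial example in an interactive shell:
cat &        # cat is backgrounded and immediately tries to read the terminal
jobs         # the job is now listed as Stopped (it received SIGTTIN)
fg %1        # bring it to the foreground so it can actually read your input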
Just enter fg, which will resolve the error when you then try to exit.
I have an embedded application that I want a simple-minded logger for.
The system starts from a script file, which in turn runs the application. There could be various reasons that the script fails to run the application, or the application itself could fail to start. To diagnose this remotely, I need to view the stdout from the script and the application.
I tried writing a tee-like logger that would repeat its stdin to stdout, and save the text in a FIFO for later retrieval via the network. Then I naively tried
./script | ./logger
I ended up with only the script's stdout going to the logger, and the application's stdout disappearing. I had similar results trying tee.
The system is running kernel 2.4.26, and busybox.
What is going on, and how can I accomplish my desired ends?
It turns out it was working exactly as I thought it should, with one minor gotcha: stdout was being buffered, and without any fflush(stdout) calls I never saw it. Had I been really patient, I would have suddenly seen a big gush of output when the stdout buffer filled up. A call to setlinebuf(3) fixed my problem.
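For reference, a minimal sketch of that fix (the logging loop is just a placeholder):
/* Make stdout line-buffered even when it is connected to a pipe, so each line
   reaches the logger immediately instead of waiting for the block buffer to fill. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    setlinebuf(stdout);                  /* or: setvbuf(stdout, NULL, _IOLBF, 0); */
    for (int i = 0; i < 5; i++) {
        printf("log line %d\n", i);      /* flushed at each newline */
        sleep(1);
    }
    return 0;
}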
Apparently, the application's output doesn't end up on stdout. Possible reasons:
The output is actually on stderr (which is usually also connected to the terminal)
./script.sh 2>&1 | ./logger
should then work
The application actively disconnects from stdin/stdout (e.g. by closing/reopening file descriptors 0, 1(, 2), or by using nohup, exec or similar utilities).
The script daemonizes (which also detaches it from all standard streams).