Bash output stream write to a file - linux

So I am running this in bash:
# somedevice -getevent
What this command does is it just keeps running, and every time my device sends certain data, say it detects a change in temperature, it outputs something like this:
/dev/xyz: 123 4567 8910112238 20
/dev/xyz: 123 4567 8915712347 19
/dev/xyz: 123 4567 8916412345 22
/dev/xyz: 123 4567 8910312342 25
/dev/xyz: 123 4567 8910112361 18
/dev/xyz: 123 4567 8910112343 20
And this just keeps running; whenever it has anything to report, it outputs another line. So there is no end to execution.
Now, the echo is working perfectly; however, when I try to use the '>' operator, this doesn't seem to write to the file.
So for instance:
# somedevice -getevent > my_record_file
this doesn't work properly; my_record_file only gets data written to it at intervals, however I want it to be written immediately.
Any ideas?

The output is being buffered because the C standard library changes the output buffering mode depending on whether or not stdout is a terminal device. If it's a terminal device (according to isatty(3)), then stdout is line-buffered: it gets flushed every time a newline character gets written. If it's not a terminal device, then it's fully buffered: it only gets flushed whenever a certain amount of data (usually something on the order of 4 KB to 64 KB) gets written.
So, when you redirect the command's output to a file using the shell's > redirection operator, it's no longer outputting to a terminal and it buffers its output. A program can change its buffering mode with setvbuf(3) and friends, but the program has to cooperate to do this. Many programs have command line options to make them line-buffered, e.g. grep(1)'s --line-buffered option. See if your command has a similar option.
If you don't have such an option, you can try using a tool such as unbuffer(1) to unbuffer the output stream, but it doesn't always work and isn't a standard utility, so it's not always available.
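For reference, the invocation would look something like this (a sketch, assuming the expect package, which provides unbuffer, is installed):
unbuffer somedevice -getevent > my_record_file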

The command somedevice probably uses the "Standard Input/Output Library", and in that library, buffering is on by default. It is switched off when the output goes to a terminal/console.
Can you modify the somedevice program? If not, you can still hack around it. See http://www.pixelbeat.org/programming/stdio_buffering/ for details.

You can try 'tee':
somedevice -getevent | tee -a my_record_file
The '-a' option appends to the file instead of replacing its contents.
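Note that with a pipe the command's own stdout may still be block-buffered, so lines can reach tee in bursts; if GNU coreutils' stdbuf is available, combining the two is a reasonable sketch:
stdbuf -oL somedevice -getevent | tee -a my_record_file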

This is probably because your "somedevice -getevent" command's stdout is being block-buffered. According to this, stdout is by default line-buffered (i.e. what you want) if stdout is a terminal, and block-buffered otherwise.
I'd have a look at the manual for your somedevice command to see if you can force the output to be unbuffered or line-buffered. If not, stdbuf -oL somedevice -getevent > my_record_file should do what you want.

Related

Run AT commands from adb shell on Redmi 7

I tried this:
echo -e "ATD123456789;\r" > /dev/smd0
and then when I ran:
cat /dev/smd0
I got this output:
ATD123456789;
Is that what I'm supposed to see? The phone didn't respond to the command.
Update: The phone made a call when I used smd7 or smd11. The problem is I'm trying to send SMS messages using AT+CMGS and it's not working.
Update2: I ran this command: cat /dev/smd7 & echo -e "AT+CMGS=24;\r" > /dev/smd7.
Then I enter the PDU message and I get this: /system/bin/sh: 079...771B: not found
As you probably know, the command
ATD<number>;\r
performs a voice call to the destination number <number> (without the semicolon ; the call type would depend on the current setting of the AT+FCLASS command).
By default the OK result code is received as soon as the remote end starts ringing, so after some seconds. But it can take even longer if there are network problems or the remote number is unavailable/doesn't exist.
The default timeout of the ATD command during a voice call is 30s, and it can be changed by issuing the ATS7 command. For example, to set a 1 minute timeout:
ATS7=60
The answer you get is the command echo: in fact the modem, by default, echoes every character sent to its AT port (the echo can be disabled with the ATE0 command and enabled again with ATE1). Receiving it is proof that the modem is correctly powered on and that it is communicating correctly.
So, even though I'm aware that's not the only thing you expect to see (you would like to see an answer!), you are actually supposed to see it.
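If the echoed characters get in the way while testing, you can turn the echo off first (a small sketch, using the same device node as in the question):
echo -e "ATE0\r" > /dev/smd0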
Some pieces of advice in order to receive your answer:
Start by sending simpler commands with shorter timeouts, for example the very basic AT.
Make sure to wait at least the maximum command timeout.
Put the cat command in the background before you start sending commands:
cat /dev/smd0 &
echo -e "AT\r" > /dev/smd0
OK
Note: I'm not aware of any timeout in the cat command.
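Regarding Update2: the "not found" error appears because the PDU was typed at the shell prompt, so the shell tried to run it as a command. In PDU mode the PDU text has to be written to the modem device itself and terminated with Ctrl-Z (character 0x1A). A hedged sketch along the lines of the commands above (the PDU below is only a placeholder, and it assumes your shell's echo understands the \x1a escape for Ctrl-Z):
cat /dev/smd7 &
echo -e "AT+CMGS=24\r" > /dev/smd7
echo -ne "<your PDU here>\x1a" > /dev/smd7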
To have an interactive session you can use:
strace 2>/dev/null -e inject=ioctl:retval=0 microcom /dev/smdXX
Without the strace command, microcom returns an ioctl error.
strace makes microcom think the ioctl succeeded, which allows it to continue and run.

How can I avoid terminal messages screwing up vim?

Sometimes I use vim in TTY1/2/etc. I am experiencing a problem with this. Messages such as the following keep flooding my terminal:
[ 1050.29303] wlp3s0: failed to set TX queue parameters for AC 2
[ 1059.29340] wlp3s0: failed to set TX queue parameters for AC 2
[ 1020.12309] wlp3s0: failed to set TX queue parameters for AC 2
[ 1029.12899] something_else: some other logging message here
[ 1292.21300] yet_another_thing: hey look a distraction
This can be quite disruptive, especially when I'm using vim to work, and sometimes it even results in me screwing up my text without realizing it. Is there any way to eliminate messages like this, at least when using vim? Using :redraw, editing the messed up lines, etc. don't seem to make the messages disappear.
Your sample of lines looks like kernel messages.
You can turn off output of dmesg messages by typing in terminal
sudo dmesg -D
This is a temporary solution and will work until the system is rebooted. For permanent disabling, edit the /etc/sysctl.conf file to set the kernel.printk parameter.
kernel.printk = 1 4 1 3
I've set the first digit to 1 since the third one is 1. Read more about kernel.printk and klogctl(3) (see the description of the SYSLOG_ACTION_CONSOLE_OFF command).
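To apply the same value to the running system right away (like dmesg -D, it only lasts until the next boot), something like this should work:
sudo sysctl -w kernel.printk="1 4 1 3"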
You can redirect output to a file in a shell script.
In bash this is done with the redirection operator >.
If what you are trying to get rid of is standard output, > redirects that by default. If the output is error output, that is file descriptor 2, so the operator would be 2>.
For example, if I were going to run a Python script in the background while using vim, I could run the script like this:
$ python3 script.py >stdoutput.txt 2>errors.txt
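If you would rather collect everything in a single file, redirect stderr into stdout; note that the 2>&1 has to come after the stdout redirection (the filename is just an example):
$ python3 script.py >all_output.txt 2>&1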

What is the order of redirection in terminal?

I want to take input from the file input.txt and write the output of the execution to output.txt. What is the right order? The command below does not work.
./a.out < input.txt > output.txt
EDIT
Do I have to wait for execution to complete for it to be written? I usually break in the middle to see if the output is getting written, as the run time is very long.
CLARIFICATION:
This C program (P1) iterates through a loop and feeds the loop value x to a system() call which calls another C program (P2) using ./P2 < x. Program P2 executes for each value of x and outputs to the screen. I want the complete output of both programs in output.txt.
If you're killing the command before it finishes, this is probably a buffering issue. Line-buffered terminal output and block-buffered file output are default behaviors in the C stdio library, so redirection can cause output to be buffered until a few kilobytes have been written.
Some programs have a command line option to force line-buffered or unbuffered output. They do this by calling setvbuf. If that a.out is a program you wrote, you could add setvbuf(stdout, NULL, _IOLBF, 0); before the program writes any output.
If the program is not yours and you can't recompile it, there is a utility called stdbuf that might help, as in stdbuf -oL ./a.out < in > out
stdbuf is kind of a kludge though. I wouldn't use it unless there is no other option.
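As for the CLARIFICATION: the P2 process started through system() inherits P1's standard output, so a single redirection on P1 (the a.out above) should capture the output of both programs, and since stdbuf works by preloading a small library through the environment, its effect is normally inherited by the child as well. A sketch reusing the filenames from the question, with tail to watch the progress:
stdbuf -oL ./a.out < input.txt > output.txt &
tail -f output.txt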

Redirect stdout to fifo immediately

I have, for example, a c program that prints three lines, two seconds apart, that is:
printf("Wait 2 seconds...\n");
sleep(2);
printf("Two more\n");
sleep(2);
printf("Quitting in 2 seconds...\n");
sleep(2);
I execute the program and redirect it to a pipe:
./printer > myPipe
On another terminal
cat < myPipe
The second terminal prints everything at once, 6 seconds later! I would like it to print the available lines immediately. How can I do it?
Note: I can't change the source code. It's actually the output of a board game algorithm; I have to get it immediately so that I can plug it into another algorithm, get the answer back and plug it into the first one...
Change the program to this approach:
printf("Wait 2 seconds...\n");
fflush (stdout);
sleep(2);
printf("Two more\n");
fflush (stdout);
sleep(2);
printf("Quitting in 2 seconds...\n");
fflush (stdout);
sleep(2);
Additional:
If you can't change the program, there really is no way to affect the program's built-in buffering without hacking it.
If you can relink the program, you could substitute a printf() function which flushes after each call, or change the startup initialization of stdout to be unbuffered, or at least line-buffered.
If you can't change the source, you might want to try some of the solutions to this related question:
bash: force exec'd process to have unbuffered stdout
Basically, you have to make the OS execute this program interactively.
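For example, if stdbuf (coreutils) or unbuffer (expect) is installed, one of these hedged one-liners may already be enough, since the program is almost certainly relying on stdio's default buffering:
stdbuf -oL ./printer > myPipe    # line-buffer stdout via coreutils stdbuf
unbuffer ./printer > myPipe      # or: give the program a fake terminal via expect's unbuffer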
I'm assuming that the actual source file is complete. If so, then you have to compile the source and run it to get it to do anything. Using cat will just print the contents of the file, not run it.
If it were written in bash, it would need the executable mode bit set (+x), allowing you to run it from a terminal with ./script.
No need to worry about the syntax since you've stated it's not an option to change it and... It's correctly written in C.

on-the-fly output redirection, seeing the file redirection output while the program is still running

If I use a command like this one:
./program >> a.txt &
and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing if the computation is going well until it actually stops computing. I want to be able to read the redirected output in the file while the program is running.
This is similar to opening a file, appending to it, then closing it again after every write. If the file is only closed at the end of the program, then no data can be read from it until the program ends. The only redirection I know of is similar to closing the file at the end of the program.
You can test it with this little python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0,100000)
for i in l:
    if i%1000==0:
        print i
    for j in l:
        s = i + j
One can run this with:
./python program.py >> a.txt &
Then cat a.txt .. you will only get results once the script is done computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output. For Python there is sys.stdout.flush() or even some of the suggestions here.
Use a utility that can record from a PTY, rather than an outright stdout redirection. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
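For instance, a couple of lines like these in ~/.screenrc should rename the log and shorten the flush delay (hedged; the exact option names may vary with your GNU screen version, and the filename is only an example):
logfile mycommand.log
logfile flush 1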
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override select functions of the C library and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        /* first call: look up the real getenv() and switch stdout to line buffering */
        getenv_real = dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program try stdbuf (part of coreutils starting with version 7.5 apparently).
This buffers stdout up to a line:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
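Applied to the command in the question, that might look like this (sketch):
stdbuf -oL ./program >> a.txt &
tail -f a.txt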
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check to see if the output is a terminal or not. If the output is a terminal, then output is buffered one line at a time (so each line is output as it is generated), but if the output is not a terminal, then the output is buffered in larger chunks (4096 bytes at a time is typical). This behaviour is normal in the C library (when using printf for example) and also in the C++ library (when using cout for example), so any program written in C or C++ will do this.
Most other scripting languages (like perl, python, etc.) are written in C or C++ and so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>
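Since the example here is a Python script, the interpreter's own -u switch (unbuffered stdio) is another option that needs no extra packages; a sketch reusing the filenames from the question:
$ python -u program.py >> a.txt &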
