I've been dealing with a weird issue that I can't find a way to solve.
My situation is as follows.
I have a Python application called "app1" that requires a file for outputting the results of its execution.
I have a secondary application called "app2", a binary that takes its input from stdin.
I want to pipe what "app1" generates directly into "app2" for processing, which in an ideal situation would look like this:
app1 | app2
But, as I said, there are some restrictions, like the fact that app1 requires a file to be the output.
The first solution I found for "fooling" app1 into outputting to stdout is to use mkfifo and create a named pipe, so I can feed it into app2's stdin. Like this:
pipe='/tmp/output_pipe'
mkfifo "$pipe"
python app1 -o "$pipe" &
app2 < "$pipe"
The problem is that eventually, during the execution, app1 will generate more output than app2 can handle as input, and because of the limited capacity of the pipe buffer, the pipe fills up and everything stops working.
Then I used this other approach:
python app1 -o /dev/stdout | app2
But the situation is the same: here /dev/stdout is just the write end of the same pipe, so the same size restriction applies.
Does anyone have any idea how I can solve this specific scenario?
TL;DR: I need a "dummy" file that will act as stdout but without the standard size restrictions of the pipes.
There are several utils designed to handle similar situations:
buffer: python app1 -o /dev/stdout | buffer | app2
stdbuf: python app1 -o /dev/stdout | stdbuf -i0 app2
unbuffer: python app1 -o /dev/stdout | unbuffer -p app2
mbuffer (buffer with more options): python app1 -o /dev/stdout | mbuffer | app2
bash process substitution: python app1 -o >(app2)
The utils have various options, some of which may be required here (that depends on what app1 and app2 are doing). Some options set the size of the buffer, add delays, or show diagnostic info.
Pixelbeat.org has some diagrams to help visualize how buffering works (or fails to).
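For example, a rough sketch of how the size and mode options might be used (the flags and sizes shown are illustrative, not a tested recipe; check the man pages of the versions you have installed):
# give the pipeline a large in-memory cushion between the two programs
python app1 -o /dev/stdout | mbuffer -m 1G | app2
# or turn off stdio buffering on app2's standard input with stdbuf
python app1 -o /dev/stdout | stdbuf -i0 app2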
You have a few options:
Use a file. Instead of reading from stdin, have the consumer read from a file and implement file-following behaviour like that of "tail -f" (a rough sketch follows this list).
Write a pipe buffer program. This option is kind of silly, but it works if you cannot change either of the other two. I wrote one in Perl a while ago (sorry, I can't share it), but basically it uses non-blocking IO to read from one pipe and write to another, holding all the data in memory. It would probably be good to log a complaint if memory use goes too high.
Modify the reader or writer to use non-blocking IO and buffer the output or input.
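Since app2 is a binary that only reads stdin, a rough approximation of the first option, without modifying app2, is to let an ordinary file absorb the backlog and have tail do the following (untested sketch; the path is illustrative, and note that tail -f will not exit on its own when app1 finishes):
touch /tmp/app1.out                  # make sure the file exists before tail opens it
python app1 -o /tmp/app1.out &       # app1 writes to a plain file, which can grow freely
tail -n +1 -f /tmp/app1.out | app2   # follow from the first line; the file absorbs any backlog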
Well. My bad.
It was not a buffer problem, as some people suggested here.
It was a CPU cap problem. Both applications were consuming 100% of the CPU and RAM while running, and that's why the application crashed.
I have a large file, 130 GB in size.
# ls -lrth
-rw-------. 1 root root 129G Apr 20 04:25 syslog.log
I need to reduce the file size by deleting the lines which start with "Nov 2", so I ran the following command:
sed -i '/Nov 2/d' syslog.log
So I can't edit the file using the VIM editor either.
When I run the sed command, it also creates a backup file, but I don't have much free space in root. Please suggest an alternative way to delete these lines from the file without using additional space on the server.
It does not create a real backup file. sed is a stream editor. When applied to a file with option -i, it will stream that file through the sed process, write the output to a new (temporary) file, and when everything is done, rename the new file to the original name.
(There are options to create backup files also, but you didn't give them, so I won't mention that further.)
In your case you have a very large file and don't want to create any copy, however temporary. For this you need to open the file for reading and writing at the same time; then your sed process can overwrite the original. After the writing is done, you will have to truncate the file at the point where the new contents end.
To demonstrate how this can be done, we first perform a test case.
Create a test file, containing lots of lines:
seq 0 999999 > x
Now, let's say we want to remove all lines containing the digit 4:
grep -v 4 1<>x <x
This opens the file for reading and writing as STDOUT (file descriptor 1), and for reading only as STDIN. The grep command will read all lines and output only the lines not containing a 4 (option -v).
This will effectively overwrite the beginning of the original file.
You will not know how long the output is, so after the output the original contents of the file will appear:
…
999991
999992
999993
999995
999996
999997
999998
999999
537824
537825
537826
537827
537828
537829
…
You can use the Unix tool truncate to shorten your file manually afterwards. In a real scenario you will have trouble finding the right spot for this, so it makes sense to count the number of bytes written (using wc):
(Don't forget to recreate the original x for this test.)
(grep -v 4 <x | tee /dev/stderr 1<>x) |& wc -c
This will perform the step above and additionally print out the number of bytes written to the terminal; in this example case the output will be 3653658. Now use truncate:
truncate -s 3653658 x
Now you have the result you want.
If you want to do this in a script, i.e. without interaction, you can use this:
length=$( (grep -v 4 <x | tee /dev/stderr 1<>x) |& wc -c )
truncate -s "$length" x
I cannot guarantee that this will work for files >2GB or >4GB on your machine; depending on your operating system (32-bit?) and the versions of the installed tools, you might run into large-file issues. I'd perform tests with large files first (>4GB, as this is typically a limit for many things), then cross your fingers and give it a try :)
Some caveats you have to keep in mind:
Of course, nobody is supposed to append log entries to that log file while the procedure is running.
Also, any abort during the run of the process (power failure, signal caught, etc.) will leave the file in an undefined state. But re-running the command after such a mishap will in most cases produce the correct output; some lines might be doubled, but no more than a single line should be corrupted.
The output must be smaller than the input, of course, otherwise the writing will overtake the reading, corrupting the whole result so that lines which should be there will be missing (or truncated at the start).
From what I understood about buffers: a buffer is temporarily stored data.
For example: let's assume that you wanted to implement an algorithm for determining whether something is speech or just noise. How would you do this using a continuous stream of sound data? It would be very difficult. Therefore, by storing this data in an array you can perform analysis on it.
This array of data is called a buffer.
Now, I have a Linux command where the output is continuous:
stty -F /dev/ttyUSB0 ispeed 4800 && awk -F"," '/SUF/ {print $3,$4,$5,$6,$10,$11,substr($2,1,2),".",substr($2,3,2),".",substr($2,5,2)}' < /dev/ttyUSB0
If I try to write the output of this command to a file, nothing gets written, because the output is probably block-buffered and only an empty text file is generated when I terminate the command (Ctrl+C).
Here is what I mean by block buffered.
The three types of buffering available are unbuffered, block
buffered, and line buffered. When an output stream is unbuffered,
information appears on the destination file or terminal as soon as
written; when it is block buffered many characters are saved up and
written as a block; when it is line buffered characters are saved
up until a newline is output or input is read from any stream
attached to a terminal device (typically stdin). The function
fflush(3) may be used to force the block out early. (See
fclose(3).) Normally all files are block buffered. When the first
I/O operation occurs on a file, malloc(3) is called, and a buffer
is obtained. If a stream refers to a terminal (as stdout normally
does) it is line buffered. The standard error stream stderr is
always unbuffered by default.
Now, executing this command,
stty -F /dev/ttyUSB0 ispeed 4800 && awk -F"," '/SUF/ {print $3,$4,$5,$6,$10,$11,substr($2,1,2),".",substr($2,3,2),".",substr($2,5,2)}' < /dev/ttyUSB0 > outputfile.txt
An empty file is generated because the buffer block might not have been completed when I terminated the process, and since I don't know the block buffer size, there is no way to wait for the block to complete.
In order to write the output of this command to a file I have to use fflush() inside awk, which successfully writes the output into the text file. I have already done this.
Here it goes:
stty -F /dev/ttyUSB0 ispeed 4800 && awk -F"," '/GGA/ {print "Latitude:",$3,$4,"Longitude:",$5,$6,"Altitude:",$10,$11,"Time:",substr($2+50000,1,2),".",substr($2,3,2),".",substr($2,5,2); fflush(stdout) }' < /dev/ttyUSB0 | head -n 2 > GPS_data.txt
But my question is:
Is there any way to set the buffer block size so that I would know when a buffer block is generated, thus eliminating the need for fflush()?
OR
Is there any way to change the buffering type from block buffered to unbuffered or line buffered?
You can use stdbuf to run a command with a modified buffer size.
For example, stdbuf -o 100 awk ... will run awk with a 100 byte standard output buffer.
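Applied to the command from the question, that could look roughly like this (untested sketch; the awk program is abbreviated, and whether stdbuf takes effect depends on how your awk buffers its output, so the fflush() approach remains the more reliable one):
# -oL asks for line-buffered stdout; -o0 would ask for completely unbuffered output
stty -F /dev/ttyUSB0 ispeed 4800 && stdbuf -oL awk -F"," '/SUF/ {print $3,$4,$5,$6,$10,$11}' < /dev/ttyUSB0 > outputfile.txt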
I would like to run a program "A", have its output go to the input of another program "B", and also have stdin go to the input of "B". If program "A" closes, I'd like "B" to continue running.
I can redirect A output to B input easily:
./a | ./b
And I can combine stderr into the output if I'd like:
./a 2>&1 | ./b
But I can't figure out how to combine stdin into the output. My guess would be:
./a 0>&1 | ./b
but it doesn't work.
Here's a test that doesn't require us to write up any test programs:
$ echo ls 0>&1 | /bin/sh -i
$ a b info.txt
$
/bin/sh: Cannot set tty process group (No such process)
If possible, I'd like to do this using only bash redirection on the command line (I don't want to write a C program to fork off child processes and do anything complicated every time I want to do some redirection of stdin to a pipe).
This cannot be done without writing an auxiliary program.
In general, stdin could be a read-only file descriptor (heck, it might refer to a read-only file). So you cannot "insert" anything into it.
You will need to write a "helper" program that monitors two file descriptors (say, 0 and 3) in order to read from both and "merge" them. A simple select or poll loop would be sufficient, and you could write it in most scripting languages, but not, I think, in the shell.
Then you can use shell redirection to feed your program's output to descriptor 3 of the "helper".
Since what you want is basically the opposite of "tee", I might call it "eet"...
[edit]
If only you could launch "cat" in the background...
But that will fail because background processes with a controlling terminal cannot read from stdin. So if you could just detach "cat" from its controlling terminal and run it in the background...
On Linux, "setsid cat" should do it, roughly. But (a) I could not get it to work very well and (b) I really do not have time for this today and (c) it is non-standard anyway.
I would just write the helper program.
[edit 2]
OK, this seems to work:
{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO'
The set -m thing forces bash to enable job control, which apparently is needed to prevent the shell from redirecting stdin from /dev/null.
Here, the echo HELLO represents your "program A". The seq commands (with the sleep in the middle) are just to provide some input. And yes, you can pipe this whole thing to process B.
About as ugly and non-portable a solution as you could ask for...
A pipe has two ends. One is for writing, and that which gets written appears in the other end, which is for reading.
It's a pipe, not a T or Y junction.
I don't think your scenario is possible. Having "stdin going to input of" anything doesn't make sense.
If I understand your requirements correctly, you want this set up (ASCII art to the fore):
o----+----->| A |----+---->| B |---->o
     |               ^
     |               |
     +---------------+
with the additional constraint that if process A closes up shop, process B should be able to continue with the input stream going to B.
This is a non-standard setup, as you realize, and can only be achieved by using an auxiliary program to drive the input to A and B. You end up with some interesting synchronization issues but it will all work remarkably well as long as your messages are short enough.
The plumbing necessary to achieve this is notable: you'll need two pipes, one for the input to A and the other for the input to B, and the output of A will be connected to the input of B as well.
o---->| C |---------->| A |----+---->| B |---->o
          |                    ^
          |                    |
          +--------------------+
Note that C will be writing the data twice, once to A and once to B. Note, too, that the pipe from A to B is the same pipe as the pipe from C to B.
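If you do want to try that plumbing from the shell, a rough sketch with named pipes and tee standing in for C might look like this (untested; the fifo names are illustrative, and the interleaving of A's and C's writes into B's pipe is exactly where the synchronization issues mentioned above show up):
mkfifo a_in b_in       # two pipes: one feeding A, one feeding B
./a < a_in > b_in &    # A reads its own pipe and writes into B's pipe
./b < b_in &           # B reads the shared pipe
tee a_in > b_in        # "C": copy our stdin both to A's pipe and to B's pipe
rm a_in b_in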
To make the given test case work, you have to run a while ... read loop that reads from the controlling terminal device /dev/tty inside a sh -c '...' construct.
Note the use of eval (could it be avoided here?) and that multi-line commands at the input> prompt will fail.
echo 'ls; export var=myval' | (
stdin="$(</dev/stdin)"
/bin/sh -i -c '
eval "$1";
while IFS="" read -e -r -p "input> " line; do
history -s "${line}"
eval "${line}";
done </dev/tty
' argv0 "${stdin}"
)
input> echo $var
For a similar problem and the use of named pipes see here:
BASH: Best architecture for reading from two input streams
This can't be done exactly as shown, but to perform your example you can make use of cat's ability to join files together:
cat <(echo ls) - | /bin/sh
(You can do -i, but then you'll have to have another process kill the /bin/sh, as your attempts to Ctrl-C and Ctrl-D out will fail.)
This assumes that you want to pass in your piped input and then accept from stdin. You can also make it so that it does something after stdin is done, or on both sides -- but it won't merge input character-by-character or line-by-line.
This seems to do what you want:
$ ( ./a <&-; cat ) | ./b
(It's not clear to me if you want a to get input...this solution sends all input to b)
Of course, in this case the inputs to b are strictly ordered: all of the output of a is sent to b first, then a terminates, then input goes to b.
If you want things interleaved, try:
$ ( ./a <&- & cat ) | ./b
A modified version of a shell script converts an audio file from FLAC to MP3 format. The computer has a quad-core CPU. The script is run using:
./flac2mp3.sh $(find flac -type f)
This converts the FLAC files in the flac directory (no spaces in file names) to MP3 files in the mp3 directory (at the same level as flac). If the destination MP3 file already exists, the script skips the file.
The problem is that sometimes two instances of the script check for the existence of the same MP3 file at nearly the same time, resulting in mangled MP3 files.
How would you run the script multiple times (i.e., once per core), without having to specify a different file set on each command-line, and without overwriting work?
Update - Minimal Race Condition
The script uses the following locking mechanism:
# Convert FLAC to MP3 using tags from flac file.
#
if [ ! -e $FLAC.lock ]; then
touch $FLAC.lock
flac -dc "$FLAC" | lame${lame_opts} \
--tt "$TITLE" \
--tn "$TRACKNUMBER" \
--tg "$GENRE" \
--ty "$DATE" \
--ta "$ARTIST" \
--tl "$ALBUM" \
--add-id3v2 \
- "$MP3"
rm $FLAC.lock
fi;
However, this still leaves a race condition.
The "lockfile" command provides what you're trying to do for shell scripts without the race condition. The command was written by the procmail folks specifically for this sort of purpose and is available on most BSD/Linux systems (as procmail is available for most environments).
Your test becomes something like this:
lockfile -r 3 $FLAC.lock
if test $? -eq 0 ; then
flac -dc "$FLAC" | lame${lame_opts} \
--tt "$TITLE" \
--tn "$TRACKNUMBER" \
--tg "$GENRE" \
--ty "$DATE" \
--ta "$ARTIST" \
--tl "$ALBUM" \
--add-id3v2 \
- "$MP3"
fi
rm -f $FLAC.lock
Alternatively, you could make lockfile keep retrying indefinitely so you don't need to test the return code, and instead test for the output file to determine whether to run flac.
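A minimal sketch of that variant (untested; it assumes lockfile retries until it succeeds when no -r limit is given, so check your man page, and the lame tag options are omitted for brevity):
lockfile "$FLAC.lock"            # block until this instance owns the lock for this file
if [ ! -e "$MP3" ]; then         # another instance may already have produced the MP3
    flac -dc "$FLAC" | lame${lame_opts} - "$MP3"
fi
rm -f "$FLAC.lock"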
If you don't have lockfile and cannot install it (in any of its versions; there are several implementations), a robust and portable atomic mutex is mkdir.
If the directory you attempt to create already exists, mkdir will fail, so you can check for that; when creation succeeds, you have a guarantee that no other cooperating process is in the critical section at the same time as your code.
if mkdir "$FLAC.lockdir"; then
# you now have the exclusive lock
: critical section
: code goes here
rmdir "$FLAC.lockdir"
else
: do nothing, to skip this file
# or maybe sleep 1 and loop back and try again
fi
For completeness, maybe also look for flock if you are on a set of platforms where that is reliably made available and need a performant alternative to lockfile.
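A rough sketch of what the flock(1) variant could look like (untested; the descriptor number and lock-file name are illustrative, the lame tag options are omitted, and $FLAC, $MP3 and ${lame_opts} come from the surrounding script):
exec 9> "$FLAC.lock"         # open a per-file lock file on a spare descriptor
if flock -n 9; then          # -n: give up immediately if another instance holds the lock
    flac -dc "$FLAC" | lame${lame_opts} - "$MP3"
fi
exec 9>&-                    # closing the descriptor releases the lock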
You could implement locking of the FLAC file each instance is working on. Something like:
if (not flac locked)
    lock flac
    do work
else
    continue to next flac
Send output to a temporary file with a unique name, then rename the file to the desired name.
flac -dc "$FLAC" | lame${lame_opts} \
--tt "$TITLE" \
--tn "$TRACKNUMBER" \
--tg "$GENRE" \
--ty "$DATE" \
--ta "$ARTIST" \
--tl "$ALBUM" \
--add-id3v2 \
- "$MP3.$$"
mv "$MP3.$$" "$MP3"
If a race condition leaks through your file locking system every once in a while, the final output will still be the result of one process.
To lock the processing of a file you can create a file with the same name plus a .lock extension.
Before starting the encoding, check for the existence of the .lock file, and optionally make sure the date of the lock file isn't too old (in case the process dies). If it does not exist, create it before the encoding starts, and remove it after the encoding is complete.
You can also flock the file, but this only really works in C, where you call flock(), write to the file, then close and unlock it. For a shell script, you are probably calling another utility to do the writing of the file.
How about writing a Makefile?
ALL_FLAC=$(wildcard *.flac)
ALL_MP3=$(patsubst %.flac,%.mp3,$(ALL_FLAC))
all: $(ALL_MP3)
# the recipe line below must be indented with a tab
%.mp3: %.flac
	$(FLAC) ...
Then do
$ make -j4 all
In bash it's possible to set the noclobber option to avoid overwriting files:
help set | egrep 'noclobber|-C'
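A minimal sketch of how noclobber could serve as the lock for this script (untested; $FLAC, $MP3 and ${lame_opts} come from the surrounding script, and the lame tag options are omitted):
# with noclobber set, the redirection fails if the lock file already exists
if ( set -o noclobber; echo "$$" > "$FLAC.lock" ) 2>/dev/null; then
    flac -dc "$FLAC" | lame${lame_opts} - "$MP3"
    rm -f "$FLAC.lock"
fi
# if the redirection failed, another instance owns $FLAC.lock, so this file is skipped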
Use a tool like FLOM (Free LOck Manager) and simply serialize your command as below:
flom -- flac ....