Nonblocking/asynchronous fifo/named pipe in shell/filesystem? - linux

Is there a way to create a non-blocking/asynchronous named pipe, or something similar, in the shell? Programs would place lines in it, those lines would stay in RAM, and a program could read some lines from the pipe while leaving whatever it did not read in the FIFO. It is also very likely that programs will be writing to and reading from this FIFO at the same time. At first I thought maybe this could be done with files, but after searching the web for a bit it seems nothing good can come from a file being read and written at the same time. Named pipes would almost work, but there are two problems: first, they block reads/writes if there is no one at the other end; second, even if I accept that writes block and set two processes to write to the pipe while no one is reading, each writing one line, and then try head -n 1 <fifo>, I get just the one line I need, but both writing processes terminate and the second line is lost. Any suggestions?
Edit: maybe some intermediate program could be used to help with this, acting as a mediator between the writers and the readers?

You can use a special program for this purpose: buffer. buffer is designed to keep the writer side continuously busy so that it can stream when writing to tape drives, but you can use it for other purposes as well. Internally, buffer is a pair of processes communicating via a large circular queue held in shared memory, so your processes will work asynchronously. Your writer process blocks when the queue is full, and your reader process blocks when the queue is empty. Example:
bzcat archive.bz2 | buffer -m 16000000 -b 100000 | processing_script | bzip2 > archive_processed.bz2
http://linux.die.net/man/1/buffer

Related

Treadmill infinite file between producer and consumer processes

I have a "producer" executable (that I can run, but don't control the source for) that continually writes to a growing output file. (Like a log file -- it happens to be binary, but I think of it like a log, just without the nice line breaks).
And I have another "consumer" process that is continually reading from that file in decently big (10mb) chunks.
The good news:
Once the consumer reads a chunk from the file, it is ok to discard that part of the file, we are done with it forever.
I can keep the consumer from going too fast and catching up.
I'm confident the consumer is fast enough to not fall too far behind the producer.
The bad news: if I let it run, eventually the file will get huge enough to fill up the disk (many GBs) and I have to kill off both processes, erase the file, and start everything over. :(
I'd like to have to do that restarting less often! Is there a way to put this on a "treadmill," or ring-buffer, or something similar, where I don't have to have a huge amount of disk space for the full file? The only part I actually need to keep around is the maybe 100mb buffer between the producer and consumer. I'd even be ok with some bridge process in between them, or some pipe magic, or virtual filesystems, or ???
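One possible shape for that "bridge process", assembled from pieces discussed elsewhere on this page (a named pipe plus the buffer program). This is only a sketch: it assumes the producer can be pointed at the FIFO path, only ever appends (a FIFO is not seekable), and that the consumer can read from stdin; the program names and the --output flag are placeholders.
mkfifo /tmp/bridge.fifo
buffer -m 16000000 < /tmp/bridge.fifo | ./consumer &   # in-RAM ring buffer between producer and consumer; size to taste
./producer --output /tmp/bridge.fifo                   # producer now appends into the FIFO, so nothing accumulates on disk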

How to attach the same console as output for one process and input for another process?

I am trying to use the suckless ii IRC client. I can listen to a channel by running tail -f on the out file. However, is it also possible for me to type input in the same console by starting an echo or cat command?
If I background the process, it actually displays the output in this console, but that doesn't seem to be the right way? Logically, I think I need to get the fd of the console (but how do I do that?), force the tail output to that fd, and probably background it. And then use the present bash to start a cat > in.
Is it actually fine to do this, or am I creating a lot of process overhead for a simple task? In other words, piping a lot of stuff is nice, but it creates a lot of overhead, which ideally should be in a single process if you are going to repeat that task a lot?
However, is it also possible for me to type input in the same console by starting an echo or cat command?
Simply, no! cat writes the current content; it has no idea that the content will grow later. echo writes variables and other arguments given on the command line; echo itself is not made for writing the contents of files.
If I background the process, it actually displays the output in this console, but that doesn't seem to be the right way?
If you do not redirect the output, the output goes to the console. That is the way it is designed :-)
Logically, I think I need to get the fd of the console (but how do I do that?), force the tail output to that fd, and probably background it.
As I understand it, that is the opposite direction. If you want to write to the stdin of a process, you can simply use a pipe for that. The (useless) example below shows that cat writes to the pipe and the next command reads from the pipe. You can extend this to any other pipe read/write scenario. See the links given below.
Example:
cat main.cpp | cat /dev/stdin
cat main.cpp | tail -f
The last one will not exit, because it waits for the pipe to receive more content, which never happens.
Is it actually fine to do this, or am I creating a lot of process overhead for a simple task? In other words, piping a lot of stuff is nice, but it creates a lot of overhead, which ideally should be in a single process if you are going to repeat that task a lot?
I have no idea how time-critical your job is, but I believe the overhead is quite low. Doing the same thing in a self-written program will not necessarily be faster. If everything is done in a single process and no access to the file system is required, it will be much faster; but if you also use system calls, e.g. for file system access, I believe it will not be much faster. You always have to pay for the work you get.
For IO redirection please read:
http://www.tldp.org/LDP/abs/html/io-redirection.html
If your scenario is more complex, you can think about named pipes instead of plain IO redirection. For that you can have a look at:
http://www.linuxjournal.com/content/using-named-pipes-fifos-bash
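For the concrete ii case, here is a minimal sketch, assuming ii's usual per-channel layout with an out file and an in FIFO (the paths are illustrative):
chan=~/irc/irc.freenode.net/'#channel'   # illustrative ii channel directory
tail -f "$chan/out" &                    # follow the channel log in the background
cat > "$chan/in"                         # lines typed here are sent to the channel; Ctrl-D to stop
kill %1                                  # stop the background tail when you are done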

How to create a virtual command-backed file in Linux?

What is the most straightforward way to create a "virtual" file in Linux that would allow the read operation on it, always returning the output of some particular command (run every time the file is read)? So, every read operation would cause an execution of a command, catching its output and passing it on as the "content" of the file.
There is no way to create such a so-called "virtual file". On the other hand, you can achieve this behaviour by implementing a simple synthetic filesystem in userspace via FUSE. Moreover, you don't have to use C; there are bindings even for scripting languages such as Python.
Edit: And chances are that something like this already exists: see for example scriptfs.
This is a great answer, copied below.
Basically, named pipes let you do this in scripting, and FUSE lets you do it easily in Python.
You may be looking for a named pipe.
mkfifo f
{
  echo 'V cebqhpr bhgchg.'
  sleep 2
  echo 'Urer vf zber bhgchg.'
} >f
rot13 < f
Writing to the pipe doesn't start the listening program. If you want to process input in a loop, you need to keep a listening program running.
while true; do rot13 <f >decoded-output-$(date +%s.%N); done
Note that all data written to the pipe is merged, even if there are multiple processes writing. If multiple processes are reading, only one gets the data. So a pipe may not be suitable for concurrent situations.
A named socket can handle concurrent connections, but this is beyond the capabilities of basic shell scripts.
At the most complex end of the scale are custom filesystems, which let you design and mount a filesystem where each open, write, etc., triggers a function in a program. The minimum investment is tens of lines of nontrivial coding, for example in Python. If you only want to execute commands when reading files, you can use scriptfs or fuseflt.
No one has mentioned this, but if you can choose the path to the file, you can use the standard input, /dev/stdin.
Every time the cat program runs, it ends up reading the output of the program writing to the pipe, which here is simply echo my input:
for i in 1 2 3; do
  echo my input | cat /dev/stdin
done
outputs:
my input
my input
my input
I'm afraid this is not easily possible. When a process reads from a file, it uses system calls like open, fstat, and read. You would need to intercept these calls and return something different from what they would normally return. This would require writing some sort of kernel module, and even then it may turn out to be impossible.
However, if you simply need to trigger something whenever a certain file is accessed, you could play with inotifywait:
#!/bin/bash
while inotifywait -qq -e access /path/to/file; do
  echo "$(date +%s)" >> /tmp/access.txt
done
Run this as a background process, and you will get an entry in /tmp/access.txt each time your file is being read.

Logging to a non-blocking named pipe?

I have a question, and I couldn't find help anywhere on Stack Overflow or the web.
I have a program (celery distributed task queue) and I have multiple instances (workers) each having a logfile (celery_worker1.log, celery_worker2.log).
The important errors are stored to a database, but I like to tail these logs from time to time when running new operations to make sure everything is ok (the loglevel is lower).
My problem: these logs are taking a lot of disk space.
What I would like to do: be able to "watch" the logs (tail -f) only when I need it, without them taking a lot of space.
My ideas until now:
outputting logs to stdout, not to a file: not possible here, since I have many workers outputting to different files, but I want to tail them all at once (tail -f celery_worker*.log)
using logrotate: it is an "OK" solution for me. I don't want this to be a daily task, but I'd rather not set up a per-minute crontab for it either; what's more, the server is not mine, so that would mean some work on the sysadmin side
using named pipes: it looked good at first sight, but I didn't know that named pipes (Linux FIFOs) were blocking. Hence, when I don't tail -f ALL of the pipes at the same time, or when I just quit my tail, the write operations from the logger are blocked.
Is there a way to have a non-blocking named pipe, which would just throw to stdout when tailed, and throw to /dev/null when not?
Or are there technical difficulties to such a type of pipe? If there are, what are they?
Thank you for your answers!
Have each worker log to stdout, but connect each stdout to a utility that automatically spools and rotates logs based on size or time. multilog and svlogd are examples of such tools. For those programs, you'd merely tail the "current" log file.
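A hedged sketch with svlogd (from the runit package); multilog from daemontools works much the same way. The worker command is a placeholder; the point is that the worker writes its log to stdout/stderr and svlogd keeps a size-capped, automatically rotated directory that you tail only when you want to:
mkdir -p /var/log/celery/worker1
your_celery_worker_command 2>&1 | svlogd -tt /var/log/celery/worker1   # spools and rotates by size
tail -f /var/log/celery/worker1/current                                # live view only when needed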
You're right that logrotate is not quite the right solution for the problem you have.
Named pipes won't work as you want. At best, your writers could fill up their pipes and then discard subsequent logs, which is the inverse of the behavior you want.
You could try a shared memory device (see man shm_overview), or perhaps a number of them. You need to organise them as circular buffers so they store the last N KB of your log, and whenever you read one with a reader, it outputs everything to your console. This approach is used by busybox's syslogd/logread suite (see logread.c).
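For reference, a sketch of that busybox approach, assuming the workers can be pointed at syslog (for example through a syslog log handler or the logger utility):
busybox syslogd -C256                      # keep a 256 KB circular log buffer in shared memory, nothing on disk
logger -t worker1 'something happened'     # workers write via syslog
busybox logread -f                         # follow the in-memory buffer, like tail -f, only when you want to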

How does a shell pipe handle an infinite loop?

Whenever I need to limit shell command output, I use less to paginate the results:
cat file_with_long_content | less
which works fine and dandy, but what I'm curious about is that less still works even if the output is never-ending. Consider having the following script in a file inf.sh:
while true; do date; done
then I run
sh inf.sh | less
And it's again able to paginate the results, so is it correct to say that the pipe streams the result rather than waiting for the command to finish before outputting it?
Yes, when you run sh inf.sh | less the two commands are run in parallel. Data written into the pipe by the first process is buffered (by the kernel) until it is read by the second. If the buffer is full (i.e., if the first command writes to the pipe faster than the second can read) then the next write operation will block until further space is available. A similar condition occurs when reading from an empty pipe: if the pipe buffer is empty but the input end is still open, a read will block for more data.
See the pipe(7) manual for details.
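If you want to see that blocking behaviour directly, here is a small demonstration (the 64 KiB figure is the usual default pipe capacity on Linux; adjust if yours differs). The writer produces 100000 bytes, fills the pipe, and blocks until the reader starts draining it five seconds later:
{ yes | head -c 100000; echo 'writer finished' >&2; } | { sleep 5; wc -c; }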
It is correct. Pipes are streams.
You can code your own version of the less tool in very few lines of C code. Take the time to do it, including a bit of research on files and pipes, and you'll emerge with the understanding to answer your own question and more :).
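In the same spirit, here is a toy pager sketched in shell rather than C; it is not a replacement for less, but it shows that a pager only ever consumes the pipe incrementally, one screenful at a time:
#!/bin/bash
# Usage: sh inf.sh | ./toy-pager.sh  (the script name is illustrative)
n=0
while IFS= read -r line; do
  printf '%s\n' "$line"
  n=$((n + 1))
  if [ $((n % 24)) -eq 0 ]; then
    read -r -p '--More--' _ < /dev/tty   # wait for a keypress from the terminal, not from the pipe
  fi
done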
