Streaming video using a non-blocking FIFO in linux/bash

I am trying to accomplish the following objectives:
Write video from my Raspberry Pi Camera to disk without any interference from streaming
Stream the same video over the network with as little latency as possible
It is important that the streaming does not interfere with the video being written to disk, since the network connection may be unstable (e.g. the WiFi router may go out of range).
To accomplish that, the first thing I have tried is the following:
#Receiver side
FPS="30"
netcat -l -p 5000 | mplayer -vf scale -zoom -xy 1280 -fps $FPS -cache-min 50 -cache 1024 - &
#RPi side
FPS="30"
mkfifo netcat_fifo
raspivid -t 0 -md 5 -fps $FPS -o - | tee --output-error=warn netcat_fifo > $video_out &
cat netcat_fifo | netcat -v 192.168.0.101 5000 &> $netcat_log &
And the streaming works very well. However, when I switch off the router, simulating a problem with the network, my $video_out is cut. I think this is due to the back-pressure from the netcat_fifo.
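(For reference, this blocking behaviour is easy to reproduce in isolation. The sketch below is purely illustrative, using yes and cat as stand-ins for raspivid and netcat:)
mkfifo /tmp/demo_fifo
cat /tmp/demo_fifo > /dev/null &          # stand-in for the netcat side
READER=$!
yes | tee /tmp/demo_fifo > /dev/null &    # stand-in for raspivid | tee
kill -STOP "$READER"                      # "network stalls": the fifo is no longer drained
# within moments the fifo's pipe buffer (64 KiB by default on Linux) fills up,
# tee's write() blocks, and everything upstream of tee stalls as well
kill -CONT "$READER"                      # the reader resumes and tee unblocks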
I found a solution here on Stack Exchange concerning a non-blocking FIFO, which replaces tee with ftee:
Linux non-blocking fifo (on demand logging)
It now prevents my $video_out from being affected by the streaming, but the streaming itself is very unstable. The best results were using the following script:
#RPi side
FPS="30"
MULTIPIPE="ftee"
mkfifo netcat_fifo
raspivid -t 0 -md 5 -fps $FPS -o - | ./${MULTIPIPE} netcat_fifo > $video_out &
cat netcat_fifo | mbuffer --direct -t -s 2k 2> $mbuffer_log | netcat -v 192.168.0.101 5000 &> $netcat_log &
When I check the mbuffer log, I see a FIFO that remains empty most of the time but has peaks of 99-100% utilization. During these peaks, mplayer on the receiver side has many errors decoding the video and takes about 5 seconds to recover. After this interval, the mbuffer log shows an empty FIFO again.
This empty->full->empty cycle goes on and on.
I have two questions:
Am I using the correct approach to solve my problem?
If so, how could I render my streaming more robust while keeping the $video_out file intact?

I had a little attempt at this and it seems to work pretty solidly on my Raspberry Pi 3. It is pretty well commented so it should be quite easy to understand, but you can always ask if there are any questions.
Basically there are 3 threads:
main program - it constantly reads its stdin from raspivid and cyclically puts the data into a bunch of buffers
disk writer thread - it constantly cycles through the list of buffers waiting for the next one to become full. When the buffer is full, it writes the contents to disk, marks the buffer as written and moves onto the next one
fifo writer thread - it constantly cycles through the list of buffers waiting for the next one to become full. When the buffer is full, it writes the contents to the fifo, flushes the fifo to reduce lag and marks the buffer as written and moves onto the next one. Errors are ignored.
So, here is the code:
////////////////////////////////////////////////////////////////////////////////
// main.cpp
// Mark Setchell
//
// Read video stream from "raspivid" and write (independently) to both disk file
// and stdout - for onward netcatting to another host.
//
// Compiles with:
// g++ main.cpp -o main -lpthread
//
// Run on Raspberry Pi with:
// raspivid -t 0 -md 5 -fps 30 -o - | ./main video.h264 | netcat -v 192.168.0.8 5000
//
// Receive on other host with:
// netcat -l -p 5000 | mplayer -vf scale -zoom -xy 1280 -fps 30 -cache-min 50 -cache 1024 -
////////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include <chrono>
#include <thread>
#include <vector>
#include <unistd.h>
#include <atomic>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <cstdio> // for fflush()
#define BUFSZ 65536
#define NBUFS 64
class Buffer{
public:
int bytes=0;
std::atomic<int> NeedsWriteToDisk{0};
std::atomic<int> NeedsWriteToFifo{0};
unsigned char data[BUFSZ];
};
std::vector<Buffer> buffers(NBUFS);
////////////////////////////////////////////////////////////////////////////////
// This is the DiskWriter thread.
// It loops through all the buffers waiting in turn for each one to become ready
// then writes it to disk and marks the buffer as written before moving to next
// buffer.
////////////////////////////////////////////////////////////////////////////////
void DiskWriter(char* filename){
int bufIndex=0;
// Open output file
int fd=open(filename,O_CREAT|O_WRONLY|O_TRUNC,S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP);
if(fd==-1)
{
std::cerr << "ERROR: Unable to open output file" << std::endl;
exit(EXIT_FAILURE);
}
bool Error=false;
while(!Error){
// Wait for buffer to be filled by main thread
while(buffers[bufIndex].NeedsWriteToDisk!=1){
// std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
// Write to disk
int bytesToWrite=buffers[bufIndex].bytes;
int bytesWritten=write(fd,reinterpret_cast<unsigned char*>(&buffers[bufIndex].data),bytesToWrite);
if(bytesWritten!=bytesToWrite){
std::cerr << "ERROR: Unable to write to disk" << std::endl;
exit(EXIT_FAILURE);
}
// Mark buffer as written
buffers[bufIndex].NeedsWriteToDisk=0;
// Move to next buffer
bufIndex=(bufIndex+1)%NBUFS;
}
}
////////////////////////////////////////////////////////////////////////////////
// This is the FifoWriter thread.
// It loops through all the buffers waiting in turn for each one to become ready
// then writes it to the Fifo, flushes it for reduced lag, and marks the buffer
// as written before moving to next one. Errors are ignored.
////////////////////////////////////////////////////////////////////////////////
void FifoWriter(){
int bufIndex=0;
bool Error=false;
while(!Error){
// Wait for buffer to be filled by main thread
while(buffers[bufIndex].NeedsWriteToFifo!=1){
// std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
// Write to fifo
int bytesToWrite=buffers[bufIndex].bytes;
int bytesWritten=write(STDOUT_FILENO,reinterpret_cast<unsigned char*>(&buffers[bufIndex].data),bytesToWrite);
if(bytesWritten!=bytesToWrite){
std::cerr << "ERROR: Unable to write to fifo" << std::endl;
}
// Try to reduce lag
fflush(stdout);
// Mark buffer as written
buffers[bufIndex].NeedsWriteToFifo=0;
// Move to next buffer
bufIndex=(bufIndex+1)%NBUFS;
}
}
int main(int argc, char *argv[])
{
int bufIndex=0;
if(argc!=2){
std::cerr << "ERROR: Usage " << argv[0] << " filename" << std::endl;
exit(EXIT_FAILURE);
}
char * filename = argv[1];
// Start disk and fifo writing threads in parallel
std::thread tDiskWriter(DiskWriter,filename);
std::thread tFifoWriter(FifoWriter);
bool Error=false;
// Continuously fill buffers from "raspivid" on stdin. Mark as full and
// needing output to disk and fifo before moving to next buffer.
while(!Error)
{
// Check disk writer is not behind before re-using buffer
if(buffers[bufIndex].NeedsWriteToDisk==1){
std::cerr << "ERROR: Disk writer is behind by " << NBUFS << " buffers" << std::endl;
}
// Check fifo writer is not behind before re-using buffer
if(buffers[bufIndex].NeedsWriteToFifo==1){
std::cerr << "ERROR: Fifo writer is behind by " << NBUFS << " buffers" << std::endl;
}
// Read from STDIN till buffer is pretty full
int bytes;
int totalBytes=0;
int bytesToRead=BUFSZ;
unsigned char* ptr=reinterpret_cast<unsigned char*>(&buffers[bufIndex].data);
while(totalBytes<(BUFSZ*.75)){
bytes = read(STDIN_FILENO,ptr,bytesToRead);
if(bytes<=0){
Error=true;
break;
}
ptr+=bytes;
totalBytes+=bytes;
bytesToRead-=bytes;
}
// Signal buffer ready for writing
buffers[bufIndex].bytes=totalBytes;
buffers[bufIndex].NeedsWriteToDisk=1;
buffers[bufIndex].NeedsWriteToFifo=1;
// Move to next buffer
bufIndex=(bufIndex+1)%NBUFS;
}
}

Related

Where are POSIX message queues located (Linux)?

man 7 mq_overview says that the POSIX "...message queues on the system can be viewed and manipulated using the commands usually used for files (e.g., ls(1) and rm(1))."
For example I was able to read using a mqd_t as a file descriptor as follows:
#include <iostream>
#include <fcntl.h>
#include <mqueue.h>
#include <unistd.h>
#include <stdlib.h>
int main(int argc, char **argv) {
if (argc != 2) {
std::cout << "Usage: mqgetinfo </mq_name>\n";
exit(1);
}
mqd_t mqd = mq_open(argv[1], O_RDONLY);
struct mq_attr attr;
mq_getattr(mqd, &attr);
std::cout << argv[1] << " attributes:"
<< "\nflag: " << attr.mq_flags
<< "\nMax # of msgs: " << attr.mq_maxmsg
<< "\nMax msg size: " << attr.mq_msgsize
<< "\nmsgs now in queue: " << attr.mq_curmsgs << '\n';
// Get the queue size in bytes, and any notification info:
char buf[1024];
int n = read(mqd, buf, 1023);
buf[n] = '\0';
std::cout << "\nFile /dev/mqueue" << argv[1] << ":\n"
<< buf << '\n';
mq_close(mqd);
}
Running this on message queue /myq when it contains 5 messages totalling 549 bytes gives:
$ g++ mqgetinfo.cc -o mqgetinfo -lrt
$ ./mqgetinfo /myq
/myq attributes:
flag: 0
Max # of msgs: 10
Max msg size: 8192
msgs now in queue: 5
File /dev/mqueue/myq:
QSIZE:549 NOTIFY:0 SIGNO:0 NOTIFY_PID:0
$
Also:
$ !cat
cat /dev/mqueue/myq
QSIZE:549 NOTIFY:0 SIGNO:0 NOTIFY_PID:0
So the file /dev/mqueue/myq has some info associated with the msg queue.
My question is: Where is the queue itself, i.e., where are the 549 bytes? I'm guessing they are in some list-type data structure internal to the kernel, but I don't see this mentioned in the man pages etc., and I am wondering how to find out about that.
Since the internal handling of message queues is implementation specific (it is not part of the standard, which only specifies the programming interface and behaviour), I recommend you have a look at the Linux kernel source file ipc/mqueue.c and pay special attention to the mqueue_create() and msg_insert() functions; that is a good place to get started if you want to understand how message queues are implemented in the Linux kernel.
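For completeness, mq_overview(7) also describes how that kernel-internal state is exposed through the mqueue filesystem. A quick way to poke at it (a sketch; mounting requires root, and most distributions already mount /dev/mqueue for you):
# the message bytes themselves live in kernel memory; the mqueue filesystem
# only exposes a small metadata view of each queue
mkdir -p /dev/mqueue
mount -t mqueue none /dev/mqueue   # per mq_overview(7); skip if already mounted
ls -l /dev/mqueue                  # one entry per queue, e.g. /dev/mqueue/myq
cat /dev/mqueue/myq                # QSIZE:549 NOTIFY:0 SIGNO:0 NOTIFY_PID:0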

Real parallelism in Linux shell

I am trying to get real parallelism in the Linux shell, but I can't achieve it.
I have two programs: allones, which only prints '1' characters, and allzeros, which only prints '0' characters.
When I execute "./allones & ./allzeros &", I get long runs of '0's and long runs of '1's that mix in big chunks (e.g. "1111....111000...0000111...111000...000"). My processor has 8 cores.
However, when I executed my own programs on a multi-core FPGA (with no OS), distributing the programs across different cores, I get something like "011000101000011010...".
How can I run it on Linux to get a result similar to what I get on the multi-core FPGA?
Sounds like you're experiencing stdio's default output buffering:
Here's a test program spam.c:
#include <stdio.h>
int main(int argc, char** argv) {
while(1) {
printf("%s", argv[1]);
}
}
We can run it with:
$ ./spam 0 & ./spam 1 & sleep 1; killall spam
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111(...)000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000(...)
On my systems, each block is exactly 1024 bytes long, strongly hinting at a buffering issue.
Here's the same code with a fflush to prevent buffering:
#include <stdio.h>
int main(int argc, char** argv) {
while(1) {
printf("%s", argv[1]);
fflush(stdout);
}
}
This is the new output:
100111001100110011001100110011001100110011100111001110011011001100110011001100110011001100110011001100110011001100110011001100011000110001100110001100100110011001100111001101100110011001100110011001100110000000000110010011000110011
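If you cannot modify the programs, a similar effect can usually be obtained from the outside with GNU coreutils' stdbuf, which turns off stdio's output buffering before main() runs (a sketch, assuming stdbuf is installed and the programs use stdio):
$ stdbuf -o0 ./spam 0 & stdbuf -o0 ./spam 1 & sleep 1; killall spam
# -o0 makes stdout unbuffered, so each printf reaches the terminal immediately
# and the 0s and 1s interleave much more finely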

Why does Linux always output "^C" upon pressing Ctrl+C?

I have been studying signals in Linux, and I've written a test program to capture SIGINT.
#include <unistd.h>
#include <signal.h>
#include <iostream>
void signal_handler(int signal_no);
int main() {
signal(SIGINT, signal_handler);
for (int i = 0; i < 10; ++i) {
std::cout << "I'm sleeping..." << std::endl;
unsigned int one_ms = 1000;
usleep(200* one_ms);
}
return 0;
}
void signal_handler(int signal_no) {
if (signal_no == SIGINT)
std::cout << "Oops, you pressed Ctrl+C!\n";
return;
}
The output looks like this:
I'm sleeping...
I'm sleeping...
^COops, you pressed Ctrl+C!
I'm sleeping...
I'm sleeping...
^COops, you pressed Ctrl+C!
I'm sleeping...
^COops, you pressed Ctrl+C!
I'm sleeping...
^COops, you pressed Ctrl+C!
I'm sleeping...
^COops, you pressed Ctrl+C!
I'm sleeping...
I'm sleeping...
I'm sleeping...
I understand that when pressing Ctrl+C, all the processes in the foreground process group receive a SIGINT (if no process chooses to ignore it).
So is it that the shell (bash) AND the instance of the above program both received the signal? Where does the "^C" before each "Oops" come from?
The OS is CentOS, and the shell is bash.
It is the terminal (driver) that intercepts the ^C and translates it to a signal sent to the attached foreground process group. stty intr ^B would instruct the terminal driver to intercept a ^B instead. It is also the terminal driver that echoes the ^C back to the terminal.
The shell is just a process that sits at the other end of the line; it receives its stdin from your terminal via the terminal driver (such as /dev/ttyX), and its stdout (and stderr) are also attached to the same tty.
Note that (if echoing is enabled) the terminal driver sends the keystrokes both to the process (group) and back to the terminal. The stty command is just a wrapper around the ioctl()s of the tty driver for the process's "controlling" tty.
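For instance, both behaviours can be changed from the shell with stty (a quick sketch; stty sane restores sensible defaults):
stty intr ^B     # the terminal driver now turns ^B, not ^C, into SIGINT
stty -echoctl    # stop the driver from echoing control characters such as ^C
stty sane        # put the settings back when you are done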
UPDATE: to demonstrate that the shell is not involved, I created the following small program. It should be executed by its parent shell via exec ./a.out (it appears an interactive shell will fork a child shell anyway). The program sets the key that generates SIGINT to ^B, switches echo off, and then waits for input from stdin.
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>
int thesignum = 0;
void handler(int signum);
void handler(int signum)
{ thesignum = signum;}
#define THE_KEY 2 /* ^B */
int main(void)
{
int rc;
struct termios mytermios;
rc = tcgetattr(0 , &mytermios);
printf("tcgetattr=%d\n", rc );
mytermios.c_cc[VINTR] = THE_KEY; /* set intr to ^B */
mytermios.c_lflag &= ~ECHO ; /* Dont echo */
rc = tcsetattr(0 , TCSANOW, &mytermios);
printf("tcsetattr(intr,%d) =%d\n", THE_KEY, rc );
printf("Setting handler()\n" );
signal(SIGINT, handler);
printf("entering pause()\n... type something followed by ^%c\n", '#'+THE_KEY );
rc = pause();
printf("Rc=%d: %d(%s), signum=%d\n", rc, errno , strerror(errno), thesignum );
// mytermios.c_cc[VINTR] = 3; /* reset intr to ^C */
mytermios.c_lflag |= ECHO ; /* Do echo */
rc = tcsetattr(0 , TCSANOW, &mytermios);
printf("tcsetattr(intr,%d) =%d\n", THE_KEY, rc );
return 0;
}
intr.sh:
#!/bin/sh
echo $$
exec ./a.out
echo I am back.
The shell echoes everything you type, so when you type ^C, that too gets echoed (and in your case intercepted by your signal handler). The command stty -echo may or may not be useful to you depending on your needs/constraints, see the man page for stty for more information.
Of course much more goes on at a lower level; any time you communicate with a system via peripherals, device drivers (such as the keyboard driver that you use to generate the ^C, and the terminal driver that displays everything) are involved. You can dig even deeper at the level of assembly/machine language, registers, lookup tables etc. If you want a more detailed, in-depth level of understanding, the books below are a good place to start:
The Design of the Unix OS is a good reference for these sorts of things. Two more classic references: Unix Programming Environment
and Advanced Programming in the UNIX Environment
Nice summary here in this SO question How does Ctrl-C terminate a child process?
"when youre run a program, for example find, the shell:
the shell fork itself
and for the child set the default signal handling
replace the child with the given command (e.g. with find)
when you press CTRL-C, parent shell handle this signal but the child will receive it - with the default action - terminate. (the child can implement signal handling too)"

Linux non-blocking fifo (on demand logging)

I'd like to log a program's output 'on demand'. E.g. the output is logged to the terminal, but another process can hook into the current output at any time.
The classic way would be:
myprogram 2>&1 | tee /tmp/mylog
and on demand
tail /tmp/mylog
However, this would create an ever-growing log file, even if it is never read, until the drive runs out of space. So my attempt was:
mkfifo /tmp/mylog
myprogram 2>&1 | tee /tmp/mylog
and on demand
cat /tmp/mylog
Now I can read /tmp/mylog at any time. However, any output blocks the program until /tmp/mylog is read. I'd like the fifo to simply discard any incoming data that is not read. How can I do that?
Inspired by your question I've written a simple program that will let you do this:
$ myprogram 2>&1 | ftee /tmp/mylog
It behaves similarly to tee but clones its stdin to stdout and to a named pipe (a requirement for now) without blocking. This means that if you log this way you may lose some of your log data, but I guess that's acceptable in your scenario.
The trick is to ignore the SIGPIPE signal and to ignore errors when writing to a broken fifo. This sample may of course be optimized in various ways, but so far it does the job I guess.
/* ftee - clone stdin to stdout and to a named pipe
(c) racic#stackoverflow
WTFPL Licence */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
int readfd, writefd;
struct stat status;
char *fifonam;
char buffer[BUFSIZ];
ssize_t bytes;
signal(SIGPIPE, SIG_IGN);
if(2!=argc)
{
printf("Usage:\n someprog 2>&1 | %s FIFO\n FIFO - path to a"
" named pipe, required argument\n", argv[0]);
exit(EXIT_FAILURE);
}
fifonam = argv[1];
readfd = open(fifonam, O_RDONLY | O_NONBLOCK);
if(-1==readfd)
{
perror("ftee: readfd: open()");
exit(EXIT_FAILURE);
}
if(-1==fstat(readfd, &status))
{
perror("ftee: fstat");
close(readfd);
exit(EXIT_FAILURE);
}
if(!S_ISFIFO(status.st_mode))
{
printf("ftee: %s in not a fifo!\n", fifonam);
close(readfd);
exit(EXIT_FAILURE);
}
writefd = open(fifonam, O_WRONLY | O_NONBLOCK);
if(-1==writefd)
{
perror("ftee: writefd: open()");
close(readfd);
exit(EXIT_FAILURE);
}
close(readfd);
while(1)
{
bytes = read(STDIN_FILENO, buffer, sizeof(buffer));
if (bytes < 0 && errno == EINTR)
continue;
if (bytes <= 0)
break;
bytes = write(STDOUT_FILENO, buffer, bytes);
if(-1==bytes)
perror("ftee: writing to stdout");
bytes = write(writefd, buffer, bytes);
if(-1==bytes);//Ignoring the errors
}
close(writefd);
return(0);
}
You can compile it with this standard command:
$ gcc ftee.c -o ftee
You can quickly verify it by running e.g.:
$ ping www.google.com | ftee /tmp/mylog
$ cat /tmp/mylog
Also note - this is no multiplexer. You can only have one process doing $ cat /tmp/mylog at a time.
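A quick way to see why (purely illustrative): with two readers on the same fifo, each chunk of data is consumed by whichever reader happens to read it first, so the output is split between them rather than duplicated:
$ ping www.google.com | ftee /tmp/mylog &
$ cat /tmp/mylog &    # reader 1
$ cat /tmp/mylog      # reader 2 - lines are divided between the two cats, not copied to both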
This is a (very) old thread, but I've run into a similar problem of late. In fact, what I needed was a clone of stdin to stdout with a copy to a pipe that is non-blocking. The ftee proposed in the first answer really helped there, but was (for my use case) too volatile, meaning I lost data I could have processed if I had gotten to it in time.
The scenario I was faced with is that I have a process (some_process) that aggregates some data and writes its results every three seconds to stdout. The (simplified) setup looked like this (in the real setup I am using a named pipe):
some_process | ftee >(onlineAnalysis.pl > results) | gzip > raw_data.gz
Now, raw_data.gz has to be compressed and has to be complete. ftee does this job very well. But the pipe I am using in the middle was too slow to grab the data as it was flushed out, although it was fast enough to process everything if it could get to it (this was tested with a normal tee). However, a normal tee blocks if anything happens to the unnamed pipe, and as I want to be able to hook in on demand, tee is not an option. Back to the topic: it got better when I put a buffer in between, resulting in:
some_process | ftee >(mbuffer -m 32M| onlineAnalysis.pl > results) | gzip > raw_data.gz
But that was still losing data I could have processed. So I went ahead and extended the ftee proposed before to a buffered version (bftee). It still has all the same properties, but uses an (inefficient ?) internal buffer in case a write fails. It still loses data if the buffer runs full, but it works beautifully for my case. As always there is a lot of room for improvement, but as I copied the code off of here I'd like to share it back to people that might have a use for it.
/* bftee - clone stdin to stdout and to a buffered, non-blocking pipe
(c) racic#stackoverflow
(c) fabraxias#stackoverflow
WTFPL Licence */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
// the number of sBuffers that are being held at a maximum
#define BUFFER_SIZE 4096
#define BLOCK_SIZE 2048
typedef struct {
char data[BLOCK_SIZE];
int bytes;
} sBuffer;
typedef struct {
sBuffer *data; //array of buffers
int bufferSize; // number of buffer in data
int start; // index of the current start buffer
int end; // index of the current end buffer
int active; // number of active buffer (currently in use)
int maxUse; // maximum number of buffers ever used
int drops; // number of discarded buffer due to overflow
int sWrites; // number of buffer written to stdout
int pWrites; // number of buffers written to pipe
} sQueue;
void InitQueue(sQueue*, int); // initialized the Queue
void PushToQueue(sQueue*, sBuffer*, int); // pushes a buffer into Queue at the end
sBuffer *RetrieveFromQueue(sQueue*); // returns the first entry of the buffer and removes it, or NULL if the buffer is empty
sBuffer *PeakAtQueue(sQueue*); // returns the first entry of the buffer but does not remove it. Returns NULL on an empty buffer
void ShrinkInQueue(sQueue *queue, int); // shrinks the first entry of the buffer by n bytes. Buffer is removed if it is empty
void DelFromQueue(sQueue *queue); // removes the first entry of the queue
static void sigUSR1(int); // signal handler for SIGUSR1 - used for stats output to stderr
static void sigINT(int); // signal handler for SIGINT/SIGTERM - allows for a graceful stop
sQueue queue; // Buffer storing the overflow
volatile int quit; // for quitting the main loop
int main(int argc, char *argv[])
{
int readfd, writefd;
struct stat status;
char *fifonam;
sBuffer buffer;
ssize_t bytes;
int bufferSize = BUFFER_SIZE;
signal(SIGPIPE, SIG_IGN);
signal(SIGUSR1, sigUSR1);
signal(SIGTERM, sigINT);
signal(SIGINT, sigINT);
/** Handle commandline args and open the pipe for non blocking writing **/
if(argc < 2 || argc > 3)
{
printf("Usage:\n someprog 2>&1 | %s FIFO [BufferSize]\n"
"FIFO - path to a named pipe, required argument\n"
"BufferSize - temporary Internal buffer size in case write to FIFO fails\n", argv[0]);
exit(EXIT_FAILURE);
}
fifonam = argv[1];
if (argc == 3) {
bufferSize = atoi(argv[2]);
if (bufferSize == 0) bufferSize = BUFFER_SIZE;
}
readfd = open(fifonam, O_RDONLY | O_NONBLOCK);
if(-1==readfd)
{
perror("bftee: readfd: open()");
exit(EXIT_FAILURE);
}
if(-1==fstat(readfd, &status))
{
perror("bftee: fstat");
close(readfd);
exit(EXIT_FAILURE);
}
if(!S_ISFIFO(status.st_mode))
{
printf("bftee: %s in not a fifo!\n", fifonam);
close(readfd);
exit(EXIT_FAILURE);
}
writefd = open(fifonam, O_WRONLY | O_NONBLOCK);
if(-1==writefd)
{
perror("bftee: writefd: open()");
close(readfd);
exit(EXIT_FAILURE);
}
close(readfd);
InitQueue(&queue, bufferSize);
quit = 0;
while(!quit)
{
// read from STDIN
bytes = read(STDIN_FILENO, buffer.data, sizeof(buffer.data));
// if read failed due to interrupt, then retry, otherwise STDIN has closed and we should stop reading
if (bytes < 0 && errno == EINTR) continue;
if (bytes <= 0) break;
// save the number of read bytes in the current buffer to be processed
buffer.bytes = bytes;
// this is a blocking write. As long as the buffer is smaller than 4096 bytes, the write to a pipe is atomic in Linux
// and thus cannot be interrupted. However, to be safe this should handle the error cases of a partial or interrupted write nonetheless.
bytes = write(STDOUT_FILENO, buffer.data, buffer.bytes);
queue.sWrites++;
if(-1==bytes) {
perror("ftee: writing to stdout");
break;
}
sBuffer *tmpBuffer = NULL;
// if the queue is empty (tmpBuffer gets set to NULL) then this does nothing - otherwise it tries to write
// the buffered data to the pipe. This continues until the Buffer is empty or the write fails.
// NOTE: bytes cannot be -1 (that would have failed just before) when the loop is entered.
while ((bytes != -1) && (tmpBuffer = PeakAtQueue(&queue)) != NULL) {
// write the oldest buffer to the pipe
bytes = write(writefd, tmpBuffer->data, tmpBuffer->bytes);
// the written bytes are equal to the buffer size, the write is successful - remove the buffer and continue
if (bytes == tmpBuffer->bytes) {
DelFromQueue(&queue);
queue.pWrites++;
} else if (bytes > 0) {
// on a positive bytes value there was a partial write. we shrink the current buffer
// and handle this as a write failure
ShrinkInQueue(&queue, bytes);
bytes = -1;
}
}
// There are several cases here:
// 1.) The Queue is empty -> bytes is still set from the write to STDOUT. in this case, we try to write the read data directly to the pipe
// 2.) The Queue was not empty but is now -> bytes is set from the last write (which was successful) and is bigger than 0. Also try to write the data
// 3.) The Queue was not empty and still is not -> there was a write error before (even partial), and bytes is -1. Thus this line is skipped.
if (bytes != -1) bytes = write(writefd, buffer.data, buffer.bytes);
// again, there are several cases what can happen here
// 1.) the write before was successful -> in this case bytes is equal to buffer.bytes and nothing happens
// 2.) the write just before is partial or failed all together - bytes is either -1 or smaller than buffer.bytes -> add the remaining data to the queue
// 3.) the write before did not happen as the buffer flush already had an error. In this case bytes is -1 -> add the remaining data to the queue
if (bytes != buffer.bytes)
PushToQueue(&queue, &buffer, bytes);
else
queue.pWrites++;
}
// once we are done with STDIN, try to flush the buffer to the named pipe
if (queue.active > 0) {
//set output buffer to block - here we wait until we can write everything to the named pipe
// --> this does not seem to work - just in case, there is a busy loop below that waits for the buffer flush as well.
int saved_flags = fcntl(writefd, F_GETFL);
int new_flags = saved_flags & ~O_NONBLOCK;
int res = fcntl(writefd, F_SETFL, new_flags);
sBuffer *tmpBuffer = NULL;
//TODO: this does not handle partial writes yet
while ((tmpBuffer = PeakAtQueue(&queue)) != NULL) {
int bytes = write(writefd, tmpBuffer->data, tmpBuffer->bytes);
if (bytes != -1) DelFromQueue(&queue);
}
}
close(writefd);
}
/** init a given Queue **/
void InitQueue (sQueue *queue, int bufferSize) {
queue->data = calloc(bufferSize, sizeof(sBuffer));
queue->bufferSize = bufferSize;
queue->start = 0;
queue->end = 0;
queue->active = 0;
queue->maxUse = 0;
queue->drops = 0;
queue->sWrites = 0;
queue->pWrites = 0;
}
/** push a buffer into the Queue**/
void PushToQueue(sQueue *queue, sBuffer *p, int offset)
{
if (offset < 0) offset = 0; // offset cannot be smaller than 0 - if that is the case, we were given an error code. Set it to 0 instead
if (offset == p->bytes) return; // in this case there are 0 bytes to add to the queue. Nothing to write
// this should never happen - offset cannot be bigger than the buffer itself. Panic action
if (offset > p->bytes) {perror("got more bytes to buffer than we read\n"); exit(EXIT_FAILURE);}
// debug output on a partial write. TODO: remove this line
// if (offset > 0 ) fprintf(stderr, "partial write to buffer\n");
// copy the data from the buffer into the queue and remember its size
memcpy(queue->data[queue->end].data, p->data + offset , p->bytes-offset);
queue->data[queue->end].bytes = p->bytes - offset;
// move the buffer forward
queue->end = (queue->end + 1) % queue->bufferSize;
// there is still space in the buffer
if (queue->active < queue->bufferSize)
{
queue->active++;
if (queue->active > queue->maxUse) queue->maxUse = queue->active;
} else {
// Overwriting the oldest. Move start to next-oldest
queue->start = (queue->start + 1) % queue->bufferSize;
queue->drops++;
}
}
/** return the oldest entry in the Queue and remove it or return NULL in case the Queue is empty **/
sBuffer *RetrieveFromQueue(sQueue *queue)
{
if (!queue->active) { return NULL; }
queue->start = (queue->start + 1) % queue->bufferSize;
queue->active--;
return &(queue->data[queue->start]);
}
/** return the oldest entry in the Queue or NULL if the Queue is empty. Does not remove the entry **/
sBuffer *PeakAtQueue(sQueue *queue)
{
if (!queue->active) { return NULL; }
return &(queue->data[queue->start]);
}
/*** Shrinks the oldest entry in the Queue by bytes. Removes the entry if the buffer of the oldest entry runs empty */
void ShrinkInQueue(sQueue *queue, int bytes) {
// cannot remove negative amount of bytes - this is an error case. Ignore it
if (bytes <= 0) return;
// remove the entry if the offset is equal to the buffer size
if (queue->data[queue->start].bytes == bytes) {
DelFromQueue(queue);
return;
};
// this is a partial delete
if (queue->data[queue->start].bytes > bytes) {
//shift the memory by the offset
memmove(queue->data[queue->start].data, queue->data[queue->start].data + bytes, queue->data[queue->start].bytes - bytes);
queue->data[queue->start].bytes = queue->data[queue->start].bytes - bytes;
return;
}
// panic if we are asked to remove more bytes than the buffer holds
if (queue->data[queue->start].bytes < bytes) {
perror("we wrote more than we had - this should never happen\n");
exit(EXIT_FAILURE);
return;
}
}
/** delete the oldest entry from the queue. Do nothing if the Queue is empty **/
void DelFromQueue(sQueue *queue)
{
if (queue->active > 0) {
queue->start = (queue->start + 1) % queue->bufferSize;
queue->active--;
}
}
/** Stats output on SIGUSR1 **/
static void sigUSR1(int signo) {
fprintf(stderr, "Buffer use: %i (%i/%i), STDOUT: %i PIPE: %i:%i\n", queue.active, queue.maxUse, queue.bufferSize, queue.sWrites, queue.pWrites, queue.drops);
}
/** handle signal for terminating **/
static void sigINT(int signo) {
quit++;
if (quit > 1) exit(EXIT_FAILURE);
}
This version takes one more (optional) argument which specifies the number of blocks that are to be buffered for the pipe. My sample call now looks like this:
some_process | bftee >(onlineAnalysis.pl > results) 16384 | gzip > raw_data.gz
resulting in 16384 blocks being buffered before discards happen. This uses about 32 MByte more memory, but... who cares?
Of course, in the real environment I am using a named pipe so that I can attach and detach as needed. That looks like this:
mkfifo named_pipe
some_process | bftee named_pipe 16384 | gzip > raw_data.gz &
cat named_pipe | onlineAnalysis.pl > results
Also, the process reacts to signals as follows:
SIGUSR1 -> print counters to STDERR
SIGTERM, SIGINT -> the first exits the main loop and flushes the buffer to the pipe, the second terminates the program immediately.
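For example, to get the counters from a running instance (assuming the compiled binary is called bftee and only one instance is running):
kill -USR1 "$(pgrep -x bftee)"   # bftee prints its buffer usage, write and drop counters to stderr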
Maybe this helps someone in the future...
Enjoy
However, this would create an ever-growing log file even if not used, until the drive runs out of space.
Why not periodically rotate the logs? There's even a program to do it for you: logrotate.
There's also a system for generating log messages and doing different things with them according to type. It's called syslog.
You could even combine the two. Have your program generate syslog messages, configure syslog to place them in a file and use logrotate to ensure they don't fill the disk.
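As a rough sketch of that route (assuming the standard logger utility is available and syslog/logrotate are configured in the usual way):
# send the program's output to syslog instead of a plain file; syslogd decides where
# it ends up and logrotate keeps that file from filling the disk
myprogram 2>&1 | logger -t myprogram -p user.info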
If it turned out that you were writing for a small embedded system and the program's output is heavy there are a variety of techniques you might consider.
Remote syslog: send the syslog messages to a syslog server on the network.
Use the severity levels available in syslog to do different things with the messages. E.g. discard "INFO" but log and forward "ERR" or greater, e.g. to the console.
Use a signal handler in your program to reread configuration on HUP and vary log generation "on demand" this way.
Have your program listen on a unix socket and write messages down it when open. You could even implement an interactive console into your program this way.
Using a configuration file, provide granular control of logging output.
It seems like the bash <> redirection operator (see section 3.6.10, Opening File Descriptors for Reading and Writing, in the Bash manual) makes writing to a file/fifo opened with it non-blocking.
This should work:
$ mkfifo /tmp/mylog
$ exec 4<>/tmp/mylog
$ myprogram 2>&1 | tee >&4
$ cat /tmp/mylog # on demand
Solution given by gniourf_gniourf on #bash IRC channel.
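A quick check of the trick (a sketch; note the usual pipe-buffer caveat at the end):
$ mkfifo /tmp/mylog
$ exec 4<>/tmp/mylog   # fd 4 holds the fifo open for both reading and writing
$ echo hello >&4       # does not block, even though no reader is attached yet
$ cat /tmp/mylog       # prints "hello" on demand, then waits for more data
# writes will still block once the kernel pipe buffer (64 KiB by default) is full of unread data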
BusyBox, often used on embedded devices, can create a RAM-buffered log with
syslogd -C
which can be filled by
logger
and read by
logread
Works quite well, but only provides one global log.
The logging could be directed to a UDP socket. Since UDP is connection-less, it won't block the sending program. Of course logs will be lost if the receiver or network can't keep up.
myprogram 2>&1 | socat - udp-datagram:localhost:3333
Then when you want to observe the logging:
socat udp-recv:3333 -
There are some other cool benefits like being able to attach multiple listeners at the same time or broadcast to multiple devices.
If you can install screen on the embedded device then you can run 'myprogram' in it and detach it, and reattach it anytime you want to see the log. Something like:
$ screen -t sometitle myprogram
Hit Ctrl+A, then d to detach it.
Whenever you want to see the output, reattach it:
$ screen -DR sometitle
Hit Ctrl-A, then d to detach it again.
This way you won't have to worry about the program output using disk space at all.
The problem with the given fifo approach is that the whole thing will hang when the pipe buffer fills up and no reading process is attached.
For the fifo approach to work I think you would have to implement a named pipe client-server model similar to the one mentioned in BASH: Best architecture for reading from two input streams (see slightly modified code below, sample code 2).
As a workaround you could also use a while ... read construct instead of teeing stdout to a named pipe, by implementing a counting mechanism inside the while ... read loop that overwrites the log file periodically after a specified number of lines. This would prevent an ever-growing log file (sample code 1).
# sample code 1
# terminal window 1
rm -f /tmp/mylog
touch /tmp/mylog
while sleep 2; do date '+%Y-%m-%d_%H.%M.%S'; done 2>&1 | while IFS="" read -r line; do
lno=$((lno+1))
#echo $lno
array[${lno}]="${line}"
if [[ $lno -eq 10 ]]; then
lno=$((lno+1))
array[${lno}]="-------------"
printf '%s\n' "${array[#]}" > /tmp/mylog
unset lno array
fi
printf '%s\n' "${line}"
done
# terminal window 2
tail -f /tmp/mylog
#------------------------
# sample code 2
# code taken from:
# https://stackoverflow.com/questions/6702474/bash-best-architecture-for-reading-from-two-input-streams
# terminal window 1
# server
(
rm -f /tmp/to /tmp/from
mkfifo /tmp/to /tmp/from
while true; do
while IFS="" read -r -d $'\n' line; do
printf '%s\n' "${line}"
done </tmp/to >/tmp/from &
bgpid=$!
exec 3>/tmp/to
exec 4</tmp/from
trap "kill -TERM $bgpid; exit" 0 1 2 3 13 15
wait "$bgpid"
echo "restarting..."
done
) &
serverpid=$!
#kill -TERM $serverpid
# client
(
exec 3>/tmp/to;
exec 4</tmp/from;
while IFS="" read -r -d $'\n' <&4 line; do
if [[ "${line:0:1}" == $'\177' ]]; then
printf 'line from stdin: %s\n' "${line:1}" > /dev/null
else
printf 'line from fifo: %s\n' "$line" > /dev/null
fi
done &
trap "kill -TERM $"'!; exit' 1 2 3 13 15
while IFS="" read -r -d $'\n' line; do
# can we make it atomic?
# sleep 0.5
# dd if=/tmp/to iflag=nonblock of=/dev/null # flush fifo
printf '\177%s\n' "${line}"
done >&3
) &
# kill -TERM $!
# terminal window 2
# tests
echo hello > /tmp/to
yes 1 | nl > /tmp/to
yes 1 | nl | tee /tmp/to
while sleep 2; do date '+%Y-%m-%d_%H.%M.%S'; done 2>&1 | tee -a /tmp/to
# terminal window 3
cat /tmp/to | head -n 10
If your process writes to a log file and then wipes the file and starts again every now and then, so it doesn't get too big, or uses logrotate, then
tail --follow=name --retry my.log
is all you need. You will get as much scroll-back as your terminal holds.
Nothing non-standard is needed. I've not tried it with small log files, but all our logs rotate like this and I have never noticed losing lines.
To follow in Fabraxias' footsteps I'm going to share my small modification of racic's code. In one of my use cases I needed to suppress the writes to STDOUT, so I've added another parameter: swallow_stdout. If that is not 0, then output to STDOUT will be turned off.
Since I'm no C coder I've added comments while reading the code; maybe they are useful for others.
/* ftee - clone stdin to stdout and to a named pipe
(c) racic#stackoverflow
WTFPL Licence */
// gcc /tmp/ftee.c -o /usr/local/bin/ftee
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
int readfd, writefd; // read & write file descriptors
struct stat status; // read file descriptor status
char *fifonam; // name of the pipe
int swallow_stdout; // 0 = write to STDOUT
char buffer[BUFSIZ]; // read/write buffer
ssize_t bytes; // bytes read/written
signal(SIGPIPE, SIG_IGN);
if(3!=argc)
{
printf("Usage:\n someprog 2>&1 | %s [FIFO] [swallow_stdout] \n"
"FIFO - path to a named pipe (created beforehand with mkfifo), required argument\n"
"swallow_stdout - 0 = output to PIPE and STDOUT, 1 = output to PIPE only, required argument\n", argv[0]);
exit(EXIT_FAILURE);
}
fifonam = argv[1];
swallow_stdout = atoi(argv[2]);
readfd = open(fifonam, O_RDONLY | O_NONBLOCK); // open read file descriptor in non-blocking mode
if(-1==readfd) // read descriptor error!
{
perror("ftee: readfd: open()");
exit(EXIT_FAILURE);
}
if(-1==fstat(readfd, &status)) // read descriptor status error! (?)
{
perror("ftee: fstat");
close(readfd);
exit(EXIT_FAILURE);
}
if(!S_ISFIFO(status.st_mode)) // read descriptor is not a FIFO error!
{
printf("ftee: %s in not a fifo!\n", fifonam);
close(readfd);
exit(EXIT_FAILURE);
}
writefd = open(fifonam, O_WRONLY | O_NONBLOCK); // open write file descriptor non-blocking
if(-1==writefd) // write file descriptor error!
{
perror("ftee: writefd: open()");
close(readfd);
exit(EXIT_FAILURE);
}
close(readfd); // reading complete, close read file descriptor
while(1) // infinite loop
{
bytes = read(STDIN_FILENO, buffer, sizeof(buffer)); // read STDIN into buffer
if (bytes < 0 && errno == EINTR)
continue; // skip over errors
if (bytes <= 0)
break; // no more data coming in or uncaught error, let's quit since we can't write anything
if (swallow_stdout == 0)
bytes = write(STDOUT_FILENO, buffer, bytes); // write buffer to STDOUT
if(-1==bytes) // write error!
perror("ftee: writing to stdout");
bytes = write(writefd, buffer, bytes); // write a copy of the buffer to the write file descriptor
if(-1==bytes);// ignore errors
}
close(writefd); // close write file descriptor
return(0); // return exit code 0
}

Externally disabling signals for a Linux program

On Linux, is it possible to somehow disable signaling for programs externally... that is, without modifying their source code?
Context:
I'm calling a C (and also a Java) program from within a bash script on Linux. I don't want any interruptions for my bash script, and for the other programs that the script launches (as foreground processes).
While I can use a...
trap '' INT
... in my bash script to disable the Ctrl C signal, this works only when program control happens to be in the bash code. That is, if I press Ctrl C while the C program is running, the C program gets interrupted and it exits! This C program is doing some critical operation, because of which I don't want it to be interrupted. I don't have access to the source code of this C program, so signal handling inside the C program is out of the question.
#!/bin/bash
trap 'echo You pressed Ctrl C' INT
# A C program to emulate a real-world, long-running program,
# which I don't want to be interrupted, and for which I
# don't have the source code!
#
# File: y.c
# To build: gcc -o y y.c
#
# #include <stdio.h>
# int main(int argc, char *argv[]) {
# printf("Performing a critical operation...\n");
# for(;;); // Do nothing forever.
# printf("Performing a critical operation... done.\n");
# }
./y
Regards,
/HS
The process signal mask is inherited across exec, so you can simply write a small wrapper program that blocks SIGINT and executes the target:
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
sigset_t sigs;
sigemptyset(&sigs);
sigaddset(&sigs, SIGINT);
sigprocmask(SIG_BLOCK, &sigs, 0);
if (argc > 1) {
execvp(argv[1], argv + 1);
perror("execv");
} else {
fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
}
return 1;
}
If you compile this program to noint, you would just execute ./noint ./y.
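In other words (assuming the wrapper above is saved as noint.c):
gcc -o noint noint.c
./noint ./y    # Ctrl+C no longer interrupts ./y - SIGINT stays blocked and merely pending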
As ephemient notes in comments, the signal disposition is also inherited, so you can have the wrapper ignore the signal instead of blocking it:
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
struct sigaction sa = { 0 };
sa.sa_handler = SIG_IGN;
sigaction(SIGINT, &sa, 0);
if (argc > 1) {
execvp(argv[1], argv + 1);
perror("execv");
} else {
fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
}
return 1;
}
(and of course for a belt-and-braces approach, you could do both).
The "trap" command is local to this process, never applies to children.
To really trap the signal, you have to hack it using a LD_PRELOAD hook. This is non-trival task (you have to compile a loadable with _init(), sigaction() inside), so I won't include the full code here. You can find an example for SIGSEGV on Phack Volume 0x0b, Issue 0x3a, Phile #0x03.
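A bare-bones sketch of that idea (illustrative only; the file names are made up, a GCC constructor function stands in for _init(), and the Phrack article goes much further):
cat > ignore_int.c <<'EOF'
#include <signal.h>
/* runs when the shared object is loaded, before main() of the real program */
__attribute__((constructor))
static void ignore_sigint(void) { signal(SIGINT, SIG_IGN); }
EOF
gcc -shared -fPIC -o ignore_int.so ignore_int.c
LD_PRELOAD=./ignore_int.so ./y   # ./y now starts with SIGINT ignored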
Alternatively, try the nohup and tail trick.
nohup your_command &
tail -F nohup.out
I would suggest that your C (and Java) application needs rewriting so that it can handle an exception - what happens if it really does need to be interrupted, the power fails, etc.?
If that fails, J-16 is right on the money. Does the user need to interact with the process, or just see the output (do they even need to see the output)?
The solutions explained above did not work for me, even when chaining both commands proposed by Caf.
However, I finally succeeded in getting the expected behaviour this way:
#!/bin/zsh
setopt MONITOR
TRAPINT() { print AAA }
print 1
( ./child & ; wait)
print 2
If I press Ctrl-C while child is running, the script will wait for child to exit, then print AAA and 2. child will not receive any signals.
The subshell is used to prevent the PID from being shown.
And sorry... this is for zsh though the question is for bash, but I do not know bash enough to provide an equivalent script.
This is example code for enabling signals like Ctrl+C in programs which block them.
fixControlC.c
#include <stdio.h>
#include <signal.h>
int sigaddset(sigset_t *set, int signo) {
printf("int sigaddset(sigset_t *set=%p, int signo=%d)\n", set, signo);
return 0;
}
Compile it:
gcc -fPIC -shared -o fixControlC.so fixControlC.c
Run it:
LD_LIBRARY_PATH=. LD_PRELOAD=fixControlC.so mysqld
