Call a function in a running Linux process using gdb?

I have a running Linux process that is stuck on poll(). It has some data in a buffer, but this buffer has not yet been written to disk. Ordinarily I'd kill the process, which would cause it to flush the buffer and exit.
However, in this case the file it's writing to has been deleted from the file system, so I need the process to write the buffer before it exits, while the inode is still reachable via /proc/<pid>/fd/.
Is it possible to "kick" the process out of the poll() call and single step it until it has flushed the buffer to disk using GDB?
(For the curious, the source code is here: http://sourcecodebrowser.com/alsa-utils/1.0.15/arecordmidi_8c_source.html)
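One possible rescue pattern (untested against arecordmidi specifically; the fd number in the comment is illustrative) is to attach gdb, stop the process just before it exits, deliver the signal it already handles for a clean shutdown, and copy the deleted file out of /proc while it is still open:

```
$ gdb -p <pid>
(gdb) bt               # confirm the process is blocked in poll()
(gdb) break exit       # stop before the process actually terminates
(gdb) signal SIGINT    # trigger its normal "flush and quit" path
# gdb stops at exit() once the buffer has been written to the (deleted) file.
# In another shell: cp /proc/<pid>/fd/3 /tmp/rescued.mid
(gdb) continue
```

If the unwritten data were sitting in a stdio buffer rather than the program's own buffer, `call (int) fflush(0)` from the gdb prompt would flush every open stdio stream instead.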

Related

Most efficient way to save and later send output from many child processes

I want to do the following on Linux:
1. Spawn a child process, run it to completion, and save its stdout.
2. Later, write that saved stdout to a file.
The issue is that I want to do step 1 a few thousand times, with different processes in a thread pool, before doing step 2.
What's the most efficient way of doing this?
The normal way of doing this would be to have a pipe that the child process writes to, and then call sendfile() to send it to the output file (avoiding the copy to/from userspace). But this won't work for a few reasons. First of all, it would require me to have thousands of fds open at a time, which isn't supported in all Linux configurations. Secondly, it would cause the child processes to block when their pipes fill up, and I want them to run to completion.
I considered using memfd_create to create the stdout fd for the child process. That solves the pipe-filling issue, but not the fd-limit one. vmsplice looked promising: I could splice from a pipe to user memory, but according to the man page:
vmsplice() really supports true splicing only from user memory to a pipe. In the opposite direction, it actually just copies the data to user space.
Is there a way of doing this without copying to/from userspace in the parent process, and without having a high number of fds open at once?

close()->open() in Linux, how safe is it?

A process running on Linux writes some data to a file on the file system, and then invokes close(). Immediately afterwards, another process invokes open() and reads from the file.
Is the second process always 100% guaranteed to see an updated file?
What happens when using a network filesystem, and the two processes run on the same host?
What happens when the two processes are on different hosts?
close() does not guarantee that the data has been written to disk; you have to use fsync() for that. See Does Linux guarantee the contents of a file is flushed to disc after close? for a similar question.

Linux Kernel Procfs multiple read/writes

How does the Linux kernel handle multiple reads/writes to procfs? For instance, if two processes write to procfs at once, is one process queued (i.e. a kernel trap actually blocks one of the processes), or is there a kernel thread running for each core?
The concern is: if you have a buffer used within a function (static, or in global scope), do you have to protect it, or will the code run sequentially?
It depends on each and every procfs file's implementation. No one can give you a definite answer, because each driver can implement its own procfs folder and files (you didn't specify any specific files; a quick browse through http://lxr.free-electrons.com/source/fs/proc/ shows that some files do use locks).
Either way, you can't safely use a global buffer, because a context switch can always occur; if not in the kernel, it can catch your reader thread right after it finishes the read syscall and before it starts to process the data it read.

Linux - Disabling buffered I/O to file in the child processes

In my application I am creating a bunch of child processes. After fork() I open a per process file, set the stdout/stderr of the created process to point to that file and then exec the intended program.
Is there a way for the parent process to set things up so that when the child process does a printf, the output is flushed immediately to the file, without having to call flush()? Or is there an API the child process itself can call (before exec) to disable buffered I/O?
The problem here is that printf is buffered. The underlying file descriptors are not buffered in that way (they are buffered in the kernel, but the other end can read from the same kernel buffer). You can change the buffering using setvbuf as mentioned in a comment which should have been an answer.
setvbuf(stdout, NULL, _IONBF, 0);
You do not need to do this for stdin or stderr.
You can't do this from the parent process. This is because the buffers are created by the child process. The parent process can only manipulate the underlying file descriptors (which are in the kernel), not stdout (which is part of the C library).
P.S. You mean fflush, not flush.

Open file in kthread on behalf of a user process

I am writing a Linux kernel module that starts a kthread when a user process calls into it (using ioctl).
How can I open a file from this kthread on behalf of the user process, so that when the call returns, the user process can access the file itself?
It's not really sensible to do this. To open a file that the userspace process can read, you need to return a file descriptor to that process.
Potentially you could return a UNIX-domain socketpair connecting the kernel thread to the userspace thread, and have the kernel thread pass open file descriptors across that socket using a SCM_RIGHTS message.
It is likely to be more appropriate, however, to simply open the file in the context of the original process in the ioctl() call and return the file descriptor there.
