I have some tasks that need to create and work with a large number of temporary named pipes.
Originally, I simply thought I would generate random numbers and use <number>.fifo as the name of each named pipe.
However, I found this post: Create a temporary FIFO (named pipe) in Python?
It seems there is something I'm not aware of that may cause a security issue there.
So my question is: what's the best way to generate a named pipe?
Note that even though I am referencing a Python-related post, I don't really mean to ask only about Python.
UPDATE:
Since I want to use a named pipe to connect unrelated processes, my plan is to have process A invoke process B via the shell and capture B's stdout to acquire the name of the pipe; then both processes know what to open.
Here I am just worried about whether leaking the name of the pipe could become an issue. I had never thought about it until I read that Python post.
If you have to use named FIFOs and need to ensure that overlap/overwriting cannot occur, your best bet is probably to use some combination of mktemp and mkfifo.
Although mktemp itself cannot create FIFOs, it can be used to create unique temporary directories, which you can then put your FIFOs into.
The GNU mktemp documentation has an example of this.
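For instance, here is a minimal C sketch of the same approach (the myprog prefix and the /tmp location are only illustrative, and error handling is abbreviated). Printing the resulting path on stdout also matches the plan in the update above:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(void) {
    char dir[] = "/tmp/myprog.XXXXXX";    /* template for mkdtemp() */
    if (mkdtemp(dir) == NULL) { perror("mkdtemp"); return 1; }

    /* The directory was created atomically with mode 0700, so nobody
       else can pre-create or even see the FIFO path inside it. */
    char fifo[sizeof dir + 8];
    snprintf(fifo, sizeof fifo, "%s/fifo", dir);
    if (mkfifo(fifo, 0600) == -1) { perror("mkfifo"); return 1; }

    printf("%s\n", fifo);   /* hand the path to the peer, e.g. via stdout */
    return 0;
}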
Alternatively, you could create some name containing truly random characters. You could read some random bytes from /dev/random (or /dev/urandom; see random(4)), e.g. to seed a PRNG (such as random(3) seeded by srandom), and/or mix in the PID, the time, etc.
And since named pipes (fifo(7)) are files, you should use the permission system (and/or ACLs) on them. In particular, you might create a dedicated Linux user to run all your processes and restrict the FIFOs to be owner-readable only, etc.
Of course, and in all cases, you need to "store" or "transmit" these FIFO names securely.
If you start your programs from some bash script, you might consider making your FIFO names with mktemp(1), as in:
fifoname="$(mktemp -u -t yourprog_XXXXXX).fifo-$RANDOM-$$"
mkfifo -m 0600 "$fifoname"
(perhaps in some loop, retrying until mkfifo succeeds). I guess it would be secure enough if the script runs as a dedicated user (and if you then pass $fifoname through some pipe or file, not as a program argument).
The recent renameat2(2) syscall might be helpful (atomicity of RENAME_EXCHANGE).
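A hedged sketch of how that could be used (assuming Linux >= 3.15 and glibc >= 2.28 for the renameat2 wrapper; the paths are illustrative, and both must already exist for RENAME_EXCHANGE):

#define _GNU_SOURCE
#include <fcntl.h>   /* AT_FDCWD */
#include <stdio.h>   /* renameat2, RENAME_EXCHANGE (glibc >= 2.28) */

/* Atomically exchange a freshly prepared FIFO with the live path:
   readers of `live` see either the old object or the new one,
   never a missing file. */
int swap_into_place(const char *staged, const char *live) {
    return renameat2(AT_FDCWD, staged, AT_FDCWD, live, RENAME_EXCHANGE);
}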
BTW, you might want some SELinux. Remember that open file descriptors - and that includes your FIFOs - are available as symlinks in proc(5)!
PS. It all depends upon how paranoid you are. A well-administered Linux system can be quite secure...
I want to write a wrapper for memory-mapped file I/O that either fails to map a file or returns a mapping that is valid until it is unmapped. With plain mmap, problems arise if the underlying file is truncated or deleted while being mapped, for example. According to the Linux man page of mmap, SIGBUS is raised if memory beyond the new end of the file is accessed after a truncate. Catching this signal and handling the error that way is not an option for me.
My idea was to create a copy of the file and map the copy. On a CoW-capable file system, this would impose little overhead.
But the problem is: how do I protect the copy from being manipulated by another process? A tempfile is no real option, because in theory a malicious process could still mutate it. I know that there are file locks on Linux, but as far as I understand they're either advisory (i.e. optional) or don't prevent others from deleting the file.
I'm asking for two kinds of answers: either a way to mmap a file in a rock-solid way, or a mechanism to fully protect a tempfile from other processes. But maybe my whole way of approaching the problem is wrong, so feel free to suggest radical solutions ;)
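To make the CoW idea concrete, here is a sketch of what I have in mind (FICLONE is the reflink ioctl on btrfs/XFS, Linux >= 4.5; the clone must live on the same filesystem as the source, so the /tmp path is only illustrative, as is the error handling):

#define _GNU_SOURCE            /* for O_TMPFILE */
#include <fcntl.h>
#include <linux/fs.h>          /* FICLONE */
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_private_copy(const char *path, size_t *len_out) {
    int src = open(path, O_RDONLY);
    if (src < 0) return NULL;

    /* O_TMPFILE: the clone has no name, so no other process can reach
       it through the filesystem. Must be on the same fs as the source. */
    int dst = open("/tmp", O_TMPFILE | O_RDWR, 0600);
    if (dst < 0 || ioctl(dst, FICLONE, src) < 0) {   /* cheap CoW copy */
        close(src);
        if (dst >= 0) close(dst);
        return NULL;
    }
    close(src);

    struct stat st;
    if (fstat(dst, &st) < 0) { close(dst); return NULL; }
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, dst, 0);
    close(dst);                 /* the mapping keeps the file alive */
    if (p == MAP_FAILED) return NULL;
    *len_out = st.st_size;
    return p;
}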
You can't prevent a skilled and determined user from intentionally shooting themselves in the foot. Just take reasonable precautions so it doesn't happen accidentally.
Most programs assume the input file won't change, and that's usually fine.
Programs that want to process files shared with cooperative programs use file locking.
Programs that want a private file will create a temp file, snapshot or otherwise -- and if they unlink it for auto-cleanup, it's also inaccessible via the filesystem (see the sketch below).
Programs that want to protect their data from all regular user actions will run as a dedicated system account, in which case chmod is protection enough.
Anyone with access to the same account (or root) can interfere with the program with a simple kill -BUS, chmod/truncate, or any of the fancier foot-guns like copying and patching the binary, cloning its FDs, or attaching a debugger. If that's what they want to do, it's not your place to stop them.
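As a sketch of the unlink-for-auto-cleanup pattern from the list above (the template path is illustrative): once the file is unlinked, no other process can open it by name; only processes already holding an fd, or with root/ptrace powers, can touch it.

#include <stdlib.h>
#include <unistd.h>

int private_tmpfd(void) {
    char path[] = "/tmp/mydata.XXXXXX";
    int fd = mkstemp(path);   /* creates the file with mode 0600 */
    if (fd < 0) return -1;
    unlink(path);             /* drop the name; the open fd stays valid */
    return fd;
}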
I know it's frowned upon to use passwords in command line interfaces like in this example:
./commandforsomething -u username -p plaintextpassword
My understanding is that the reason for that (on Unix systems, at least) is that the password can be read in the scrollback as well as in the .bash_history file (or whatever your flavor of shell uses).
HOWEVER, I was wondering whether it is safe to use that sort of interface with sensitive data programmatically. For example, in Perl, you can execute a command using backticks, exec, or system (I'm not 100% sure of the differences between these, apart from backticks returning the executed command's output rather than its exit status... but that's a question for another post, I guess).
So, my question is this: Is it safe to do things LIKE
system("command", "userarg", "passwordarg");
as it essentially does the same thing, just without being recorded in scrollback or history? (Note that I only use Perl as an example; I don't care about the answer specific to Perl, but rather about the generally accepted principle.)
It's not only about shell history.
ps shows all arguments passed to a program. The reason passing arguments like this is bad is that any user could potentially see other users' passwords just by running ps in a loop. The cited code won't change much, as it does essentially the same thing.
You can try to pass secrets via the environment instead, since if a user doesn't have access to the given process, its environment won't be shown. This is better, but still a pretty bad solution (e.g., if the program fails and dumps core, all the passwords get written to disk).
If you do use environment variables, check them with ps e (or ps -E on some systems), which shows a process's environment variables. Run it as a different user than the one executing the program; basically, simulate the "attacker" and see whether you can snoop the password. On a properly configured system you shouldn't be able to.
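As an illustration of a safer alternative, here is a hedged C sketch that writes the secret to the child's stdin through a pipe, so it never appears in argv; the command name and its --password-stdin flag are hypothetical, and reading the secret from the environment here is itself a compromise, as noted above:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Where the secret comes from (tty, file, agent) is a separate
       design decision; getenv is used here only to keep the sketch short. */
    const char *password = getenv("APP_PASSWORD");
    if (!password) return 1;

    /* "w": the stream is connected to the child's stdin. The command
       line itself contains no secret, so ps reveals nothing. */
    FILE *child = popen("somecommand --password-stdin", "w");
    if (!child) return 1;
    fprintf(child, "%s\n", password);
    return pclose(child) == -1 ? 1 : 0;
}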
The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know, at run time, if the file in question is open.
I want to create a script which monitors if the file is open, and if it is, I want it to send an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename to check whether the file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
You could (in addition of other answers) use the Linux-specific inotify(7) facilities.
I understand that you want to track one particular given file (or a few), with a fixed file path (actually a given i-node). E.g. you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local file system, e.g. Ext4, BTRFS, ... but not NFS) is accessed or modified, use inotify; incrond is made exactly for that purpose.
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying a file).
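For illustration, here is a minimal inotify sketch in C that watches one file for open/modify events (the path is illustrative; sending the alert email would go where the printf calls are):

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void) {
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    if (inotify_add_watch(fd, "/var/run/foobar",
                          IN_OPEN | IN_MODIFY | IN_CLOSE_WRITE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    char buf[4096];
    for (;;) {                        /* each read returns >= 1 events */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n <= 0) break;
        for (char *p = buf; p < buf + n; ) {
            struct inotify_event *ev = (struct inotify_event *) p;
            if (ev->mask & IN_OPEN)   printf("file opened\n");
            if (ev->mask & IN_MODIFY) printf("file modified\n");
            p += sizeof *ev + ev->len;   /* events are variable-length */
        }
    }
    return 0;
}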
If the files you care about are somehow source files, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way, these tools are related to file modification.
You could also put the particular file in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
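For instance, a minimal flock(2) sketch (it is advisory: it only restrains programs that also take the lock; the path is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/shared.data", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX) < 0) {   /* blocks until we hold the lock */
        perror("flock");
        return 1;
    }
    /* ... read and modify the file safely here ... */
    flock(fd, LOCK_UN);             /* release (also released on close) */
    close(fd);
    return 0;
}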
Perhaps the data sitting in the file should instead live in some database (e.g. sqlite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation. You should avoid the XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using some advisory lock, and accessing & modifying the information only through your own programs (perhaps setuid) using flock (this excludes ordinary editors like gedit or commands like cat...). However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data; it might be tiny), or to some locked indexed file such as the GDBM library handles.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt: it lists files sorted by modification time, most recent last, so you can see when the file was last written. Note that this tells you about recent writes, not whether the file is currently open. Make sure that you are in the right directory.
I would like to get a list of all of the file descriptors (here taken to mean actual files) that a process ever opened during its runtime. The problem with polling /proc/(PID)/fd/ is that you only get a snapshot in time of what is currently open. Is there a way to make Linux keep this information around long enough to log it for the entire run of the process?
First, notice that a file descriptor which is opened then closed by the application is recycled by the kernel (a future open could return the same file descriptor number). See open(2) and close(2) and read Advanced Linux Programming.
Then, consider using strace(1); you'll be able to log all the syscalls (or perhaps just open, socket, close, accept, ... that is the syscalls changing the file descriptor table). Of course strace is using the ptrace(2) syscall (which you probably don't want to bother using directly).
The simplest way would be to run strace -o /tmp/mytrace.tr yourprog arguments... and to look, e.g. with some pager like less, into the quite big /tmp/mytrace.tr file.
As Gearoid Murphy commented you could restrict the output of strace using e.g. -e trace=file.
BTW, to debug Makefiles this is the wrong approach; learn more about remake.
Both "netstat -p" and "lsof -n -i -P" seems to readlinking all processes fd's, like stat /proc/*/fd/*.
How to do it more efficiently?
My program wants to know which process is connecting to it. Traversing all processes again and again seems too inefficient.
Answers suggesting iptables mechanisms or kernel patches are welcome too.
Take a look at this answer, where various methods and programs that perform socket to process mappings are mentioned. You might also try several additional techniques to improve performance:
Caching the file descriptors in /proc, and the information in /proc/net. This is done by the programs mentioned in the linked answer, but is only viable if your process lasts more than a few seconds.
You might try getpeername(), but this relies on you knowing the possible endpoints and which processes they map to. Your question suggests that you are connecting sockets locally; you might try using Unix sockets, which allow you to receive the credentials of a peer when exchanging messages by passing SO_PASSCRED to setsockopt(). Take a look at these examples (they're pretty nasty, but the best I could find):
http://www.lst.de/~okir/blackhats/node121.html
http://www.zanshu.com/ebook/44_secure-programming-cookbook-for-c-and-cpp/0596003943_secureprgckbk-chp-9-sect-8.html
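As a hedged alternative sketch: on Linux, a connected Unix socket can also report its peer's credentials directly through getsockopt() with SO_PEERCRED, with no message exchange needed (connfd is assumed to be an accepted AF_UNIX connection):

#define _GNU_SOURCE          /* for struct ucred */
#include <stdio.h>
#include <sys/socket.h>

void print_peer(int connfd) {
    struct ucred cred;
    socklen_t len = sizeof cred;
    /* Fills in the pid/uid/gid of the process that connected to us. */
    if (getsockopt(connfd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
        printf("peer pid=%d uid=%d gid=%d\n",
               (int) cred.pid, (int) cred.uid, (int) cred.gid);
}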
Take a look at fs/proc/base.c in the Linux kernel. This is the heart of the information returned by a readlink on a file descriptor in /proc/PID/fd/FD. A significant part of the overhead is the passing of requests up and down the VFS layer, the numerous locks taken on all the kernel data structures that provide the information, and the stringifying and destringifying at the kernel's end and yours, respectively. You might adapt some of the code in this file to generate this information without many of the intermediate layers, in particular minimizing the locking to once per process, or simply to once per scan of the entire data set you're after.
My personal recommendation is to just brute-force it for now, ideally traversing the processes in /proc in reverse numerical order, as the more recent and interesting processes will have higher PIDs, and returning as soon as you've located the result you're after. Doing this once per incoming connection is relatively cheap; it really depends on how performance-critical your application is. You'll definitely find it worthwhile to bypass calling netstat and directly parse the new connection from /proc/net/PROTO, then locate the socket in /proc/PID/fd. If all your traffic is on localhost, just switch to Unix sockets and get the credentials directly. Writing a new syscall or proc module that dumps huge amounts of data about file descriptors is something I'd save for last.
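To make the brute-force scan concrete, a hedged C sketch: given the socket's inode number (as parsed from /proc/net/tcp or friends), walk /proc/<pid>/fd/* looking for the matching socket:[inode] link. Traversal order is left simple here, and error handling is minimal:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

pid_t pid_owning_socket(unsigned long inode) {
    char needle[64], path[1024], link[128];
    snprintf(needle, sizeof needle, "socket:[%lu]", inode);

    DIR *proc = opendir("/proc");
    if (!proc) return -1;
    struct dirent *p;
    while ((p = readdir(proc)) != NULL) {
        if (p->d_name[0] < '1' || p->d_name[0] > '9') continue; /* PID dirs only */
        snprintf(path, sizeof path, "/proc/%s/fd", p->d_name);
        DIR *fds = opendir(path);    /* may fail for other users' processes */
        if (!fds) continue;
        struct dirent *f;
        while ((f = readdir(fds)) != NULL) {
            snprintf(path, sizeof path, "/proc/%s/fd/%s", p->d_name, f->d_name);
            ssize_t n = readlink(path, link, sizeof link - 1);
            if (n > 0) {
                link[n] = '\0';
                if (strcmp(link, needle) == 0) {   /* found the owner */
                    pid_t pid = (pid_t) atoi(p->d_name);
                    closedir(fds);
                    closedir(proc);
                    return pid;
                }
            }
        }
        closedir(fds);
    }
    closedir(proc);
    return -1;
}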