I would like to monitor the STDERR channel of all the processes running on my Linux system. Monitoring should preferably happen in real time (i.e. while the process is running), but post-processing will also do. It should be done without requiring root permissions and without breaking any security features.
I have done a good bit of searching and found some utilities such as reptyr and screenify, as well as a few explanations of how to do this with gdb (for example here). However, all of these seem to do both too much and too little. Too much, in the sense that they take full control of the process's stream handles (i.e. closing the original one and opening a new one). Too little, in the sense that they have serious limitations, such as requiring security features like ptrace_scope to be disabled.
Any advice would be highly appreciated!
Maybe this question would get more answers on SU. The only thing I could think of would be to monitor the files and devices already opened as STDERR. Of course, this would not work if STDERR is redirected to /dev/null.
You can get all the file descriptors for STDERR with:
ls -l /proc/[0-9]*/fd/2
If you own the process, accessing its STDERR file descriptor or output file should be possible in the language of your choice without being root.
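For example, here is a minimal C sketch (assuming you pass the PID of a process you own as the only argument) that resolves where that process's stderr currently points:

/* stderr_target.c - resolve where a process's stderr points. */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char link[64], target[4096];
    ssize_t n;

    if (argc != 2) {
        fprintf(stderr, "usage: %s PID\n", argv[0]);
        return 1;
    }
    snprintf(link, sizeof link, "/proc/%s/fd/2", argv[1]);
    n = readlink(link, target, sizeof target - 1);  /* fails without permission */
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    target[n] = '\0';
    printf("stderr of %s -> %s\n", argv[1], target);
    /* If the target is a regular file, it can now be opened and tailed. */
    return 0;
}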
I understand that a daemon should not write to stdout (or stderr) because those streams are no longer available once it detaches from the controlling terminal. But can I reopen stdout on a regular file, so that all my original logging will still work? This would be very nice and useful for me.
I tried something like this after forking,
freopen("/dev/null/", "r", stdin);
freopen("log", "w", stdout);
freopen("log", "w", stderr);
BOOST_LOG_TRIVIAL(info) << "daemonized!";
The daemon can be launched (to be precise, it doesn't fail and exit) and the log file is created. But the log stays empty (no "daemonized!"). Is this not the right way to daemonize? Could someone shed some light?
There is a library function, daemon(int nochdir, int noclose), that is available to help a program daemonize properly; when noclose is 0 it additionally reopens the standard I/O streams on /dev/null. Using it together with a system log facility (like syslog) would be the way I'd go insofar as a "right" way to daemonize.
Having the standard I/O streams open and associated with /dev/null avoids hiccups from leftover I/O on those streams (which might, for example, block the process or raise an unexpected signal). It also prevents newly opened files from unexpectedly being assigned descriptors 0 through 2 and unwittingly receiving output from, say, leftover printf statements.
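A minimal sketch of that approach, assuming glibc's daemon(3) (the program name "mydaemon" is just a placeholder): daemonize, then log through syslog instead of stdout:

#define _DEFAULT_SOURCE         /* for daemon(3) on glibc */
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>

int main(void)
{
    /* nochdir = 0: chdir to "/"; noclose = 0: reopen stdin/stdout/stderr
       on /dev/null, exactly the behavior described above. */
    if (daemon(0, 0) < 0)
        return EXIT_FAILURE;

    openlog("mydaemon", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "daemonized!");
    /* ... the daemon's real work goes here ... */
    closelog();
    return EXIT_SUCCESS;
}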
As far as associating the standard I/O streams with a regular file goes, the following warning in the online man page for the daemonize program seems worth keeping in mind:
Be careful where you redirect the output! The file system containing the open file cannot be unmounted as long as the file is open. For best results, make sure that this output file is on the same file system as the daemon's working directory.
I would like to be able to get a list of all of the file descriptors (now considering this question to pertain to actual files) that a process ever opened during its runtime. The problem with polling /proc/(PID)/fd/ is that you only get a snapshot in time of what is currently open. Is there a way to force Linux to keep this information around long enough to log it for the entire run of the process?
First, notice that a file descriptor which is opened and then closed by the application is recycled by the kernel (a future open could return the same file descriptor number). See open(2) and close(2) and read Advanced Linux Programming.
Then, consider using strace(1); you'll be able to log all the syscalls (or perhaps just open, socket, close, accept, ... — that is, the syscalls that change the file descriptor table). Of course, strace uses the ptrace(2) syscall (which you probably don't want to bother using directly).
The simplest way would be to run strace -o /tmp/mytrace.tr yourprog arguments... and then inspect the (quite large) /tmp/mytrace.tr file, e.g. with a pager like less.
As Gearoid Murphy commented, you could restrict the output of strace using e.g. -e trace=file, so that only file-related syscalls are logged.
BTW, if your goal is to debug Makefiles, this is the wrong approach; learn more about remake.
I am very new to Linux and am sorry for the newbie questions.
I had a homework extra-credit question that I tried to do but failed to get.
Q. Write a security shell script that logs the following information for every process: User ID, time started, time ended (0 if the process is still running), and whether the process has tried to access a secure file (stored as either yes or no). The log created is called process_security_log, where each of the above pieces of information is stored on a separate line and each entry follows immediately (that is, there are no blank lines). Write a shell script that will examine this log and output the User ID of any process that is still running and has tried to access a secure file.
I started by trying to just capture the user and echo it, but failed.
output=`ps -ef | grep [*]`
set -- $output
User=$1
echo $User
The output of ps is insufficient: it is incapable of producing the data this question requires.
You need something like auditd, SELinux, or straight-up kernel hacks (i.e. modifying fork.c) to do anything remotely in the realm of security logging.
Update
Others have made suggestions to use shell command logging, ps and friends (/proc or sysfs). These can be useful, and do have their place (obviously). I would argue that they shouldn't be relied on for this purpose, especially in an educational context.
... whether the process has tried to access a secure file (stored as either yes or no)
This seems to be the requirement that the other answers are ignoring. I stand by my original answer, but as Daniel points out there are other interesting ways to gather this data:
systemtap
perf
LTTng
For an educational exercise these tools will help provide a more complete answer.
Since this is homework, I'm assuming the scenario isn't a real-world one and is merely a learning exercise. The shell is not really the right place to do security auditing or process accounting. However, here are some pointers that may help you discover what you can do at the shell prompt.
You might set the bash PROMPT_COMMAND to do your process logging.
You can tail or grep your command history for use in logging.
You can use /usr/bin/script (usually found in the bsdutils package) to create a typescript of your session.
You can run ps in a loop, using subshells or the watch utility, to see what processes are currently running.
You can use pidof or pgrep to find processes more easily.
You can modify your .bashrc or other shell startup file to set up your environment or start your logging tools.
As a starting point, you might begin with something trivial like this:
$ export PROMPT_COMMAND='history | tail -n1'
56 export PROMPT_COMMAND='history | tail -n1'
$ ls /etc/passwd
/etc/passwd
57 ls /etc/passwd
and build in any additional logging data or process information that you think necessary. Hope that gets you pointed in the right direction!
Take a look at the /proc pseudo-filesystem.
Inside of this, there is a subdirectory for every process that is currently running: process [pid] has its information available in /proc/[pid]/. Inside of that directory, you can use /proc/[pid]/stat or /proc/[pid]/status to get information about which user started the process and when.
I'm not sure what the assignment means by a "secure file," but if you have some way of determining which files are secure, you can get information about open files (including their names) through /proc/[pid]/fd/ and /proc/[pid]/fdinfo/.
Is /proc enough for true security logging? No, but /proc is enough to get information about which processes are currently running on the system, which is probably what you need for a homework assignment about shell scripting. Also, outside of this class you'll probably find /proc useful later for other purposes, such as seeing the mapped pages for a process. This can come in handy if you're writing a stack trace utility or want to know how they work, or if you're debugging code that uses memory-mapped files.
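To make that concrete, here's a rough C sketch (a starting point only, not the shell script the assignment asks for) that walks /proc and prints the Uid: line from each process's /proc/[pid]/status:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *ent;

    if (!proc) {
        perror("opendir");
        return 1;
    }
    while ((ent = readdir(proc)) != NULL) {
        char path[300], line[256];
        FILE *f;

        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;                   /* not a PID directory */
        snprintf(path, sizeof path, "/proc/%s/status", ent->d_name);
        f = fopen(path, "r");
        if (!f)
            continue;                   /* the process may have exited */
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "Uid:", 4) == 0) {
                printf("pid %s\t%s", ent->d_name, line);
                break;
            }
        }
        fclose(f);
    }
    closedir(proc);
    return 0;
}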
Both "netstat -p" and "lsof -n -i -P" seems to readlinking all processes fd's, like stat /proc/*/fd/*.
How to do it more efficiently?
My program wants to know what process is connecting to it. Traversing all processes again and again seems too ineffective.
Ways suggesting iptables things or kernel patches are welcome too.
Take a look at this answer, where various methods and programs that perform socket-to-process mappings are mentioned. You might also try several additional techniques to improve performance:
Caching the file descriptors in /proc, and the information in /proc/net. This is done by the programs mentioned in the linked answer, but is only viable if your process lasts more than a few seconds.
You might try getpeername(), but this relies on you knowing the possible endpoints and which processes they map to. Your question suggests that you are connecting sockets locally, so you might try using Unix sockets, which allow you to receive the credentials of a peer when exchanging messages by passing SO_PASSCRED to setsockopt(). Take a look at these examples (they're pretty nasty, but the best I could find); a minimal sketch follows the links.
http://www.lst.de/~okir/blackhats/node121.html
http://www.zanshu.com/ebook/44_secure-programming-cookbook-for-c-and-cpp/0596003943_secureprgckbk-chp-9-sect-8.html
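Here is a minimal receiver sketch of the SO_PASSCRED technique (the socket path /tmp/creds.sock is a placeholder): enable the option, then read the sender's pid/uid/gid from the SCM_CREDENTIALS control message.

#define _GNU_SOURCE             /* for struct ucred */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int on = 1;

    strcpy(addr.sun_path, "/tmp/creds.sock");   /* placeholder path */
    unlink(addr.sun_path);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("socket/bind");
        return 1;
    }
    /* Ask the kernel to attach the sender's credentials to each message. */
    setsockopt(fd, SOL_SOCKET, SO_PASSCRED, &on, sizeof on);

    char data[256], cbuf[CMSG_SPACE(sizeof(struct ucred))];
    struct iovec iov = { .iov_base = data, .iov_len = sizeof data };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = cbuf, .msg_controllen = sizeof cbuf };
    if (recvmsg(fd, &msg, 0) < 0) {
        perror("recvmsg");
        return 1;
    }
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_CREDENTIALS) {
            struct ucred *uc = (struct ucred *)CMSG_DATA(c);
            printf("peer pid=%d uid=%d gid=%d\n", uc->pid, uc->uid, uc->gid);
        }
    }
    return 0;
}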
Take a look at fs/proc/base.c in the Linux kernel. This is the heart of the information returned by a readlink on a file descriptor in /proc/PID/fd/FD. A significant part of the overhead is the passing of requests up and down the VFS layer, the numerous locks taken on all the kernel data structures that provide the information, and the stringifying at the kernel end and the parsing at yours. You might adapt some of the code in this file to generate this information without many of the intermediate layers, in particular minimizing the locking to once per process, or even to once per scan of the entire data set you're after.
My personal recommendation is to just brute-force it for now. Ideally, traverse the processes in /proc in reverse numerical order, since the more recent and interesting processes will have higher PIDs, and return as soon as you've located the result you're after. Doing this once per incoming connection is relatively cheap; it really depends on how performance-critical your application is. You'll definitely find it worthwhile to bypass calling netstat and directly parse the new connection from /proc/net/PROTO, then locate the socket in /proc/PID/fd. If all your traffic is localhost, just switch to Unix sockets and get the credentials directly. Writing a new syscall or proc module that dumps huge amounts of data about file descriptors is something I'd save for last.
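For illustration, a brute-force sketch along those lines (simplified: it scans PIDs in plain directory order rather than the reverse numerical order suggested above). Given a socket inode as reported in /proc/net/tcp, it finds the owning process by readlink-ing each /proc/PID/fd entry and matching socket:[inode]:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return the PID owning the socket with this inode, or -1 if not found. */
long pid_for_socket_inode(unsigned long inode)
{
    char want[64];
    DIR *proc = opendir("/proc");
    struct dirent *p;

    snprintf(want, sizeof want, "socket:[%lu]", inode);
    while (proc && (p = readdir(proc)) != NULL) {
        char fddir[300];
        DIR *fds;
        struct dirent *f;

        if (!isdigit((unsigned char)p->d_name[0]))
            continue;                   /* not a PID directory */
        snprintf(fddir, sizeof fddir, "/proc/%s/fd", p->d_name);
        fds = opendir(fddir);           /* fails for processes we don't own */
        while (fds && (f = readdir(fds)) != NULL) {
            char link[600], target[64];
            ssize_t n;

            snprintf(link, sizeof link, "%s/%s", fddir, f->d_name);
            n = readlink(link, target, sizeof target - 1);
            if (n < 0)
                continue;
            target[n] = '\0';
            if (strcmp(target, want) == 0) {
                long pid = atol(p->d_name);
                closedir(fds);
                closedir(proc);
                return pid;
            }
        }
        if (fds)
            closedir(fds);
    }
    if (proc)
        closedir(proc);
    return -1;
}

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;
    printf("inode %s -> pid %ld\n", argv[1],
           pid_for_socket_inode(strtoul(argv[1], NULL, 10)));
    return 0;
}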
From time to time, a file that I'm interested in is modified by some process. I need to find out which process is modifying this file. Using lsof will not work, nor will kqueue. Is this possible under FreeBSD and Linux?
On Linux, there's a kernel patch floating around for inotify. However, some have said this is rarely useful and that it can be a security risk. In any case, here's the patch.
Apart from that, I'm not sure there's any way to get the PID, with either inotify or dnotify. You could investigate further (e.g. search for "pid dnotify" or "pid inotify"), but I believe it's unlikely.
On FreeBSD, it would probably be best to check its auditing features.
Linux has an audit daemon: http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html
See also the auditd homepage.
You can see which processes currently have a file open just by installing and using the lsof (LiSt Open Files) command.