I have 3 Samba shares mounted on my system, but suddenly one of them gets unmounted without my permission. It may be one of the hundreds of scripts that run from my crontab, but I don't know which one.
I've searched the whole /var/log directory for the word "umount" without success, so now I want to log when the umount command is executed and which process runs it.
Maybe with syslog, maybe with another log, maybe an email to my mailbox...
Thanks a lot.
I have this software:
mount: mount-2.12q
mount.cifs version: 1.14-3.5.4
Unmounting does not happen only by calling the umount binary; many programs might do it. See the manual page (man syscalls) and search for umount. That said, you would have to hook the corresponding syscall and see who invokes it. I'm not sure, but most probably it's also possible to unmount from inside the kernel by calling the corresponding function directly, so some functionality might bypass the syscall interface, which mainly exists for userspace interaction. In that case you would have to use some debugging technique on the kernel itself, which may be a little much for finding your problem!
You may have success using strace on an already running process (man strace), for example smbd, to see whether that process invokes umount, which is quite possible.
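For example, a rough sketch of attaching strace to a running smbd (the log path is illustrative; on x86_64 the unmount syscall is named umount2):
# follow child processes too (-f) and log only unmount-related syscalls
strace -f -e trace=umount2 -o /tmp/smbd-umount.log -p "$(pidof smbd)"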
Anyway, if you can recompile your kernel from source, you might add a printk message inside the function that is used to unmount a device, to see which process did it (this would be my approach for cases where nothing else, including strace, helps).
Since a mount is a change in the filesystem, maybe the inode observer incron is a solution for you. Another option might be auditd.
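For example, a hedged sketch using auditd (assuming auditd is installed and running; the key name is made up):
# log every umount2 syscall, tagged with a key for easy searching
auditctl -a always,exit -F arch=b64 -S umount2 -k umount-watch
# later, show the matching events, including the PID and executable
ausearch -k umount-watch -i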
Related
If I run some command-line application in Linux, how can I tell which files were accessed (read and/or written) by that process? I imagine I would need to place some hooks in the file-system driver and recompile the kernel, or something like that? Is there an easier way?
strace is a command that will display each system call the application makes.
From the man page:
In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option.
For instance, each open(), read() and write() operation will show the arguments and the return code.
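For example, a minimal sketch (the program name and log file are placeholders):
# follow children (-f) and log only these syscalls to trace.log
strace -f -e trace=open,openat,read,write -o trace.log ./yourprogram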
You can get a list of the files accessed by your application with the lsof command on Linux. Here is an example:
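A minimal sketch, assuming the process ID is 1234:
# list every file currently open by process 1234
lsof -p 1234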
In addition to the other answers mentioning lsof, strace (maybe ltrace could be useful too!), and fs_usage, you could, for process 1234, use the directory /proc/1234/; in particular, the opened file descriptors are available under /proc/1234/fd/. From inside your program you could use /proc/self/fd/. See proc(5).
Perhaps inotify(7) or ptrace(2) is relevant too.
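For example, for an assumed PID of 1234:
# each entry is a symlink from the descriptor number to the file it refers to
ls -l /proc/1234/fd/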
I would like to be able to get a list of all of the file descriptors (now considering this question to pertain to actual files) that a process ever opened during the runtime of the process. The problem with polling /proc/(PID)/fd/ is that you only get a snapshot in time of what is currently open. Is there a way to force linux to keep this information around long enough to log it for the entire run of the process?
First, notice that a file descriptor which is open-ed then close-d by the application is recycled by the kernel (a future open could give the same file descriptor). See open(2) and close(2) and read Advanced Linux Programming.
Then, consider using strace(1); you'll be able to log all the syscalls (or perhaps just open, socket, close, accept, ... that is the syscalls changing the file descriptor table). Of course strace is using the ptrace(2) syscall (which you probably don't want to bother using directly).
The simplest way would be to run strace -o /tmp/mytrace.tr yourprog arguments... and to look, e.g. with some pager like less, into the quite big /tmp/mytrace.tr file.
As Gearoid Murphy commented you could restrict the output of strace using e.g. -e trace=file.
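For example, a sketch combining both suggestions (the program name and arguments are placeholders):
# log only file-related syscalls to keep the trace small
strace -f -e trace=file -o /tmp/mytrace.tr yourprog arguments...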
BTW, to debug Makefile-s this is the wrong approach. Learn more about remake.
I was wondering if there exists a way to run an untrusted C program under a sandbox in Linux. Something that would prevent the program from opening files, or network connections, or forking, exec, etc?
It would be a small program, a homework assignment, that gets uploaded to a server and has unit tests executed on it. So the program would be short lived.
I have used Systrace to sandbox untrusted programs both interactively and in automatic mode. It has a ptrace()-based backend which allows its use on a Linux system without special privileges, as well as a far faster and more powerful backend which requires patching the kernel.
It is also possible to create a sandbox on Unix-like systems using chroot(1), although that is not quite as easy or secure. Linux Containers and FreeBSD jails are a better alternative to chroot. Another alternative on Linux is to use a security framework like SELinux or AppArmor, which is what I would propose for production systems.
We would be able to help you more if you told us what exactly it is that you want to do.
EDIT:
Systrace would work for your case, but I think that something based on the Linux Security Model like AppArmor or SELinux is a more standard, and thus preferred, alternative, depending on your distribution.
EDIT 2:
While chroot(1) is available on most (all?) Unix-like systems, it has quite a few issues:
It can be broken out of. If you are going to actually compile or run untrusted C programs on your system, you are especially vulnerable to this issue. And if your students are anything like mine, someone WILL try to break out of the jail.
You have to create a full independent filesystem hierarchy with everything that is necessary for your task. You do not have to have a compiler in the chroot, but anything that is required to run the compiled programs should be included. While there are utilities that help with this, it's still not trivial.
You have to maintain the chroot. Since it is independent, the chroot files will not be updated along with your distribution. You will have to either recreate the chroot regularly, or include the necessary update tools in it, which would essentially require that it be a full-blown Linux distribution. You will also have to keep system and user data (passwords, input files e.t.c.) synchronized with the host system.
chroot() only protects the filesystem. It does not prevent a malicious program from opening network sockets or a badly-written one from sucking up every available resource.
The resource usage problem is common among all alternatives. Filesystem quotas will prevent programs from filling the disk. Proper ulimit (setrlimit() in C) settings can protect against memory overuse and any fork bombs, as well as put a stop to CPU hogs. nice(1) can lower the priority of those programs so that the computer can be used for any tasks that are deemed more important with no problem.
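For example, a rough shell-level sketch (the limits are arbitrary values for illustration):
ulimit -t 10        # at most 10 seconds of CPU time
ulimit -v 262144    # at most ~256 MB of virtual memory (value in KB)
ulimit -u 64        # at most 64 processes, which blunts fork bombs
nice -n 19 ./a.out  # run the untrusted program at the lowest priority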
I wrote an overview of sandboxing techniques in Linux recently. I think your easiest approach would be to use Linux containers (lxc) if you don't mind forking and so on, which doesn't really matter in this environment. You can give the process a read-only root file system and an isolated loopback network connection, and you can still kill it easily, set memory limits, etc.
Seccomp is going to be a bit difficult, as the code cannot even allocate memory.
SELinux is the other option, but I think it might be more work than a container.
Firejail is one of the most comprehensive tools for doing that - it supports seccomp, filesystem containers, capabilities, and more:
https://firejail.wordpress.com/features-3/
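For example, a minimal sketch (the binary name is a placeholder):
# no network, a throwaway private home directory, and a default seccomp filter
firejail --net=none --private --seccomp ./a.out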
You can use QEMU to test assignments quickly. The procedure below takes less than 5 seconds on my 5-year-old laptop.
Let's assume the student has to develop a program that reads unsigned ints, each on its own line, until a line with "-1" arrives. The program should then average all the ints and output "Average: %f". Here's how you could test the program completely isolated:
First, get root.bin from Jslinux, we'll use that as the userland (it has the tcc C-compiler):
wget https://github.com/levskaya/jslinux-deobfuscated/raw/master/root.bin
We want to put the student's submission in root.bin, so set up the loop device:
sudo losetup /dev/loop0 root.bin
(you could use fuseext2 for this too, but it's not very stable. If it stabilizes, you won't need root for any of this)
Make an empty directory:
mkdir mountpoint
Mount root.bin:
sudo mount /dev/loop0 mountpoint
Enter the mounted filesystem:
cd mountpoint
Fix rights:
sudo chown -R `whoami` .
mkdir -p etc/init.d
vi etc/init.d/rcS:
#!/bin/sh
cd /root
echo READY > /dev/ttyS0 2>&1
tcc assignment.c > /dev/ttyS0 2>&1
./a.out > /dev/ttyS0 2>&1
chmod +x etc/init.d/rcS
Copy the submission to the VM:
cp ~/student_assignment.c root/assignment.c
Exit the VM's root FS:
cd ..
sudo umount mountpoint
Now the image is ready; we just need to run it. It will compile and run the submission after booting.
mkfifo /tmp/guest_output
Open a separate terminal and start listening for guest output:
dd if=/tmp/guest_output bs=1
In another terminal:
qemu-system-i386 -kernel vmlinuz-3.5.0-27-generic -initrd root.bin -monitor stdio -nographic -serial pipe:/tmp/guest_output
(I just used the Ubuntu kernel here, but many kernels will work)
When the guest output shows "READY", you can send keys to the VM from the qemu prompt.
For example, to test this assignment, you could do
(qemu) sendkey 1
(qemu) sendkey 4
(qemu) sendkey ret
(qemu) sendkey 1
(qemu) sendkey 0
(qemu) sendkey ret
(qemu) sendkey minus
(qemu) sendkey 1
(qemu) sendkey ret
Now Average = 12.000000 should appear on the guest output pipe. If it doesn't, the student failed.
Quit qemu: quit
A program passing the test is here: https://stackoverflow.com/a/14424295/309483. Just use tcclib.h instead of stdio.h.
Try User-mode Linux. It has about 1% performance overhead for CPU-intensive jobs, but it may be 6 times slower for I/O-intensive jobs.
Running it inside a virtual machine should offer you all the security and restrictions you want.
QEMU would be a good fit for that and all the work (downloading the application, updating the disk image, starting QEMU, running the application inside it, and saving the output for later retrieval) could be scripted for automated tests runs.
When it comes to sandboxing based on ptrace (strace), check out:
the "sydbox" sandbox and the "pinktrace" programming library (it's C99, but there are bindings to Python and Ruby as far as I know).
Collected links related to topic:
http://www.diigo.com/user/wierzowiecki/sydbox
(sorry these are not direct links, but I don't have enough reputation points yet)
seccomp and seccomp-bpf accomplish this with the least effort: https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt
OK, thanks for all the answers; they helped me a lot. But I would suggest none of them as a solution for the person who asked the original question. All the mentioned tools require too much work for the purpose of testing students' code as a teacher, tutor, or professor. The best way in this case would, in my opinion, be VirtualBox. OK, it emulates a complete x86 system and has little to do with sandboxing in the strict sense, but if I imagine my programming teacher, it would be the best option for him. So "apt-get install virtualbox" on Debian-based systems (everyone else, head over to http://virtualbox.org/), create a VM, add an ISO, click install, wait some time, and be happy. It will be much easier than setting up User-mode Linux or doing some heavy strace work...
And if you are afraid of your students hacking you, I guess you have an authority problem, and a solution for that would be to threaten to sue the living daylights out of them if you can prove even one byte of malware in the work they hand in...
Also, if there is a class and even 1% of it is good enough to do such things, don't bore them with such simple tasks; give them some big ones where they have to code some more. Integrative learning is best for everyone, so don't rely on old, deadlocked structures...
And of course, never use the same computer for important things (like writing attestations and exams) that you use for things like browsing the web and testing software.
Use an offline computer for important things and an online computer for everything else.
However, to everyone else who isn't a paranoid teacher (I don't want to offend anybody; I am just of the opinion that you should learn the basics of security and our society before you start being a programming teacher...)
... where was I ... for everyone else:
happy hacking!!
From time to time, a file that I'm interested in is modified by some process. I need to find out which process is modifying this file. Using lsof will not work, nor will kqueue. Is this possible under FreeBSD and Linux?
On Linux, there's a kernel patch floating around for inotify. However, some have said this is rarely useful and that it can be a security risk. In any case, here's the patch.
Apart from that, I'm not sure there's any way to get the PID, either with inotify or dnotify. You could investigate further (e.g. search for pid dnotify or pid inotify), but I believe it isn't likely.
On FreeBSD, it is probably best to check its auditing features.
Linux has an audit daemon http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html
See also auditd homepage
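For example, a sketch of an audit watch rule (the path and key name are placeholders):
# record every write or attribute change to the file
auditctl -w /path/to/file -p wa -k file-watch
# afterwards, list the matching events with PIDs and executables decoded
ausearch -k file-watch -i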
You can see which processes have a file open just by installing and using the lsof (LiSt Open Files) command.
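For example (the path is a placeholder):
# list the processes that currently have this file open
lsof /path/to/file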
On Unix, is there any way that one process can change another's environment variables (assuming they're all being run by the same user)? A general solution would be best, but if not, what about the specific case where one is a child of the other?
Edit: How about via gdb?
Via gdb:
(gdb) attach process_id
(gdb) call putenv ("env_var_name=env_var_value")
(gdb) detach
This is quite a nasty hack and should only be done in the context of a debugging scenario, of course.
You probably can do it technically (see other answers), but it might not help you.
Most programs will expect that env vars cannot be changed from the outside after startup, hence most will probably just read the vars they are interested in at startup and initialize based on that. So changing them afterwards will not make a difference, since the program will never re-read them.
If you posted this as a concrete problem, you should probably take a different approach. If it was just out of curiosity: Nice question :-).
Substantially, no. If you had sufficient privileges (root, or thereabouts) and poked around /dev/kmem (kernel memory), and you made changes to the process's environment, and if the process actually re-referenced the environment variable afterwards (that is, the process had not already taken a copy of the env var and was not using just that copy), then maybe, if you were lucky and clever, and the wind was blowing in the right direction, and the phase of the moon was correct, perhaps, you might achieve something.
Quoting Jerry Peek:
You can't teach an old dog new tricks.
The only thing you can do is change the environment variables of the child process before starting it: it gets a copy of the parent's environment, sorry.
See http://www.unix.com.ua/orelly/unix/upt/ch06_02.htm for details.
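For example, from a shell (the variable name and value are placeholders):
# set the variable only for this one child process
MYVAR=value ./child_program
# or export it so that every subsequent child of this shell inherits it
export MYVAR=value
./child_program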
Just a comment on the answer about using /proc. Under Linux /proc is supported, but it does not work for this: you cannot change the /proc/${pid}/environ file, even if you are root; it is absolutely read-only.
I can think of a rather contrived way to do that, and it will not work for arbitrary processes.
Suppose that you write your own shared library which implements 'char *getenv'. Then, you set up 'LD_PRELOAD' or 'LD_LIBRARY_PATH' env. vars so that both your processes are run with your shared library preloaded.
This way, you will essentially have a control over the code of the 'getenv' function. Then, you could do all sorts of nasty tricks. Your 'getenv' could consult external config file or SHM segment for alternate values of env vars. Or you could do regexp search/replace on the requested values. Or ...
I can't think of an easy way to do that for arbitrary running processes (even if you are root), short of rewriting dynamic linker (ld-linux.so).
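A sketch of the preload side, assuming you have written such a getenv replacement in a file called getenv_override.c (a hypothetical name):
# build the interposing library and preload it into the target program
gcc -shared -fPIC -o libgetenv_override.so getenv_override.c -ldl
LD_PRELOAD=$PWD/libgetenv_override.so ./your_program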
Or get your process to update a config file for the new process and then either:
perform a kill -HUP on the new process to reread the updated config file, or
have the process check the config file for updates every now and then. If changes are found, then reread the config file.
It seems that putenv doesn't work now, but setenv does.
I was testing the accepted answer while trying to set the variable in the current shell, with no success:
$] sudo gdb -p $$
(gdb) call putenv("TEST=1234")
$1 = 0
(gdb) call (char*) getenv("TEST")
$2 = 0x0
(gdb) detach
(gdb) quit
$] echo "TEST=$TEST"
TEST=
and the variant that works:
$] sudo gdb -p $$
(gdb) call (int) setenv("TEST", "1234", 1)
$1 = 0
(gdb) call (char*) getenv("TEST")
$2 = 0x55f19ff5edc0 "1234"
(gdb) detach
(gdb) quit
$] echo "TEST=$TEST"
TEST=1234
Not as far as I know. Really you're trying to communicate from one process to another which calls for one of the IPC methods (shared memory, semaphores, sockets, etc.). Having received data by one of these methods you could then set environment variables or perform other actions more directly.
If your unix supports the /proc filesystem, then it's trivial to READ the env - you can read the environment, commandline, and many other attributes of any process you own that way. Changing it... Well, I can think of a way, but it's a BAD idea.
The more general case... I don't know, but I doubt there's a portable answer.
(Edited: my original answer assumed the OP wanted to READ the env, not change it)
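For example, to read (not change) the environment of a process with an assumed PID of 1234:
# entries in /proc/<pid>/environ are NUL-separated; turn them into lines
tr '\0' '\n' < /proc/1234/environ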
UNIX is full of inter-process communication. Check whether your target instance supports some. D-Bus is becoming a standard in "desktop" IPC.
I change environment variables inside the Awesome window manager using awesome-client, which is a D-Bus "sender" of Lua code.
Not a direct answer, but... Raymond Chen had a [Windows-based] rationale around this only the other day:
... Although there are certainly unsupported ways of doing it or ways that work with the assistance of a debugger, there’s nothing that is supported for programmatic access to another process’s command line, at least nothing provided by the kernel. ...
That there isn’t is a consequence of the principle of not keeping track of information which you don’t need. The kernel has no need to obtain the command line of another process. It takes the command line passed to the CreateProcess function and copies it into the address space of the process being launched, in a location where the GetCommandLine function can retrieve it. Once the process can access its own command line, the kernel’s responsibilities are done.
Since the command line is copied into the process’s address space, the process might even write to the memory that holds the command line and modify it. If that happens, then the original command line is lost forever; the only known copy got overwritten.
In other words, any such kernel facilities would be
difficult to implement
potentially a security concern
However, the most likely reason is simply that there are limited use cases for such a facility.
Yes, you can, but it rarely makes sense. To do that, you would have to intrude into a running process and change its memory.
Environment variables are part of the running process: they are copied into the memory of the process when it starts, and only the process itself owns them.
If you change env vars while the process is trying to read them, they could be corrupted, but you can still change them if you attach a debugger to the process, because the debugger has access to the memory of the attached process.