For an application I'm writing, I want to know which processes are accessing a particular file and dump that information into a log file. Eventually one of the processes will delete the file, and I want to know the process name for that as well.
I can use the inotify API to monitor access to the file, but it does not tell me the name of the process doing the accessing. This might also be possible with the auditctl package on Linux, but I can't use that option either :-(
It is a controlled environment: for various reasons the end customer is willing to run a program, but not to install new packages or make changes to the existing utilities.
It is not possible to reliably audit access to files on directly attached storage in Linux from userspace alone.
You could poll with lsof, but you would risk missing accesses that start and finish between polls. The purpose of the original dnotify module (obsoleted by inotify...) was to avoid the overhead of polling and to avoid losing events. The audit system gives user identification at the time of file open.
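If you do end up polling despite that risk, a minimal sketch in C (assuming lsof is on the PATH; the watched path and interval are invented for illustration) could look like this:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* -F pc prints one 'p<PID>' and one 'c<COMMAND>' line per process holding the file open */
        const char *cmd = "lsof -F pc /path/to/watched/file 2>/dev/null"; /* hypothetical path */
        for (;;) {
            FILE *p = popen(cmd, "r");
            if (p) {
                char line[256];
                while (fgets(line, sizeof line, p))
                    fprintf(stderr, "accessor: %s", line); /* write to your log file instead */
                pclose(p);
            }
            sleep(1); /* anything opened and closed within this window is missed */
        }
    }

This only reports who has the file open at the instant of each poll, which is exactly the gap described above.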
If you can move the file to an NFS server, then you can use the NFS logging to record access to the file.
The customer could be correct about not installing new packages if this is a production server or if it is a development server that is about to go live. You should consider asking for authorization to set up auditing on the next development or testing server.
Related
I would like to create something like a "file honeypot" on Windows.
The problem I would like to solve is this:
I need to detect when the file is accessed (malware wants to read the file and send it over the internet) so I can react to it. But I do not know exactly how to tackle this.
I could periodically test the file - I do not like this solution. I would prefer something event-driven that does not have to bother the processor every few ms. But it could work if the file is huge enough that it cannot be read completely between checks.
I could open the file exclusively myself and somehow detect when it is accessed. But I have no idea how to do this.
Any ideas on how to solve this effectively? Maybe writing a specialized driver could help, but I have little experience with that.
Thanks
Tracking (and possibly preventing) filesystem access on Windows is accomplished using filesystem filter drivers. But you must be aware that kernel-mode code (rootkits etc) can bypass the filter driver stack and send the request directly to the filesystem. In this case only the filesystem driver itself can log or intercept access.
I'm going to assume that what you're writing is a relatively simple honeypot. The integrity of the system on which you're running has not been compromised, there is no rootkit or filter driver installation by malware and there is no process running that can implement avoidance or anti-avoidance measures.
The most likely scenario I can think of is that a server process running on the computer is subject to some kind of external control which would allow files containing sensitive data to be read remotely. It could be a web server, a mail server, an FTP server or something else but I assume nothing else on the computer has been compromised. And the task at hand is to watch particular files and see if anything is reading them.
With these assumptions a file system watcher will not help. It can monitor parts of the system for the creation of new files or modification or deletion of existing ones, but as far as I know it cannot monitor for read only access.
The only event-driven mechanism I am aware of is a filter driver. This is a specialised piece of driver software that can be inserted into the driver chain and monitor access to files. With the constraints above, it is a reliable solution to the problem at the cost of being quite hard to write.
If a polling mechanism is sufficient, then I can see two avenues. One is to try to open the file exclusively (deny all sharing), which will fail if it is already open elsewhere. This is easy, but slow.
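A minimal sketch of that first avenue (Win32 C; the honeypot path and polling interval are placeholders): try to open the file with no sharing allowed and treat a sharing violation as "someone has it open".

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        const char *path = "C:\\honeypot\\secret.docx"; /* hypothetical honeypot file */
        for (;;) {
            HANDLE h = CreateFileA(path, GENERIC_READ, 0 /* no sharing */, NULL,
                                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                if (GetLastError() == ERROR_SHARING_VIOLATION)
                    printf("file is currently open in another process\n");
            } else {
                CloseHandle(h); /* no conflicting open at this instant */
            }
            Sleep(500); /* reads that start and finish between polls go unnoticed */
        }
        return 0;
    }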
The other is to monitor the open file handles. I know it can be done because programs such as Sysinternals' Process Explorer and Handle do it, but I can't tell you how without some research.
If my assumptions are wrong, please edit your question and provide additional information.
I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so you can use the dd command to make a clone of the disk from the old server to the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
For the directories and files, use an FTP client on your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, telnet, MysqlAdmin or any other RDBMS client and export a dump file, then log in to the new server's SQL system and import that dump file.
In any case, you should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines? What kind of access do you have to them? Also, knowing their operating system would help people answer you accurately.
In principle, yes.
If the hardware is similar (i.e. just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then reinstall the boot loader (the boot loader config usually changes when the hard disk size changes).
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet lets you create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.
I have a Linux service (C++, with lots of loadable modules, basically .so files picked up at runtime) which crashes from time to time... I would like to get to the bottom of this crash and investigate it, but at the moment I have no clue how to proceed. So I'd like to ask you the following:
If a Linux service crashes, where is the "core" file created? I have set ulimit -c 102400, which should be enough, but I cannot find the core files anywhere :(.
Are there any Linux logs that track services? The service's own log obviously does not tell me that it is about to crash...
It might be that one of the modules is crashing... but I cannot tell which one. I cannot even tell which modules are loaded. Do you know how to show, on Linux, which modules a service is using?
Any other hints you might have in debugging a linux service?
Thanks
f-
Under Linux, processes which switch user ID have core dumps disabled for security reasons. This is because they often do things like read privileged files (think /etc/shadow), and a core file could contain that sensitive information.
To enable core dumping on processes which have switched user ID, you can use prctl with PR_SET_DUMPABLE.
Core files are normally dumped in the current working directory - if that is not writable by the current user, then it will fail. Ensure that the process's current working directory is writable.
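A minimal sketch of both points (the core directory is hypothetical; call this after the service has switched user ID, since that switch is what clears the dumpable flag):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/resource.h>
    #include <unistd.h>

    static void enable_core_dumps(void) {
        /* lift the in-process core size limit */
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");

        /* cleared automatically on setuid/setgid; turn it back on */
        if (prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) != 0)
            perror("prctl(PR_SET_DUMPABLE)");

        /* cores land in the current working directory (unless
           /proc/sys/kernel/core_pattern says otherwise), so move somewhere
           the service's user can write */
        if (chdir("/var/tmp/mysvc-cores") != 0) /* hypothetical directory */
            perror("chdir");
    }

    int main(void) {
        /* ... drop privileges ... */
        enable_core_dumps();
        /* ... run the service ... */
        return 0;
    }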
0) Get a staging environment which mimics production as closely as possible. Reproduce the problem there.
1) You can attach to a running process with gdb -p <pid> (ideally with debug symbols available, of course).
2) Make sure the ulimit is what you think it is (output ulimit -c to a file from the shell script which runs your service, right before starting it). For a service you usually need to set it in the script that starts it rather than in /etc/profile (which only affects login shells); use ulimit -c unlimited to remove the size limit.
3) Find the core file using find / -name \*core\* -print or similar, and check /proc/sys/kernel/core_pattern to see where and under what name cores are written.
4) gdb will give you the list of loaded shared objects (.so) when you attach to the process: run info sharedlibrary. See also the sketch after this list.
5) Add more logging to your service
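Regarding 4) and 5): from outside the process, cat /proc/<pid>/maps also lists every mapped .so. If you can modify the service, a small glibc-specific sketch that logs the currently loaded modules (call it after your plugins have been loaded, so the log shows what was mapped before any crash):

    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    /* called once per loaded object by dl_iterate_phdr */
    static int log_module(struct dl_phdr_info *info, size_t size, void *data) {
        (void)size; (void)data;
        /* dlpi_name is empty for the main executable */
        fprintf(stderr, "loaded: %s\n",
                info->dlpi_name[0] ? info->dlpi_name : "(main executable)");
        return 0; /* keep iterating */
    }

    int main(void) {
        /* ... after loading the .so modules ... */
        dl_iterate_phdr(log_module, NULL);
        return 0;
    }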
Good luck!
Your first order of business should be getting a core file. See if this answer applies.
Second, you should run your server under Valgrind, and fix any errors it finds.
Reproducing the crash while running under GDB (as MK suggested) is possible, but somewhat unlikely: bugs tend to hide when you are looking for them, and the debugger may affect timing (especially if your server is multi-threaded).
I wonder whether Linux provides low-level support for the following use case:
When a monitored file is about to be deleted, the monitoring application is notified so it can do something before the file is removed from the file system.
Before the monitored file is deleted from the file system, I would like a chance to free its related resources. I can only locate those resources while the file is still on disk, no matter where it has been moved.
Many thanks!
Amanda
There is no direct API for this, but there are a number of techniques you can use.
First, the inotify API can be used to be notified /after/ deletion. To get at the contents of the file, you could create a hard link to it in another directory - this way, when the file is 'deleted', it remains on disk (at a different path) until you're ready to finally delete it.
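A minimal sketch of that approach (paths are hypothetical, and the hard link must live on the same filesystem as the original): watch the containing directory and react to the IN_DELETE event for the original name.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void) {
        const char *orig = "/data/watched.db";         /* hypothetical file */
        const char *keep = "/data/.shadow-watched.db"; /* hypothetical hard link */

        /* 1. Hard-link the file so its contents outlive the original name. */
        if (link(orig, keep) == -1 && errno != EEXIST) { perror("link"); return 1; }

        /* 2. Watch the directory so we see the deletion of the name. */
        int fd = inotify_init();
        if (fd == -1 || inotify_add_watch(fd, "/data", IN_DELETE) == -1) {
            perror("inotify");
            return 1;
        }

        char buf[4096];
        ssize_t len;
        while ((len = read(fd, buf, sizeof buf)) > 0) {
            for (char *p = buf; p < buf + len; ) {
                struct inotify_event *ev = (struct inotify_event *)p;
                if ((ev->mask & IN_DELETE) && ev->len && strcmp(ev->name, "watched.db") == 0) {
                    /* The original name is gone, but the data is still reachable
                       through 'keep'. Free the related resources here, then: */
                    unlink(keep);
                }
                p += sizeof(struct inotify_event) + ev->len;
            }
        }
        return 0;
    }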
Alternately, you could interpose a filter using FUSE. This would let you intercept any filesystem operations you like. However, this comes with a performance hit, as all filesystem operations would be intercepted, not just deletions.
My Apache module launches a helper subprocess which does, among other things, the following:
It sets up a socket so that it can communicate with Apache.
It reads and writes files in a temporary location that is deleted when Apache exits. These files are used, e.g., for storing large amounts of data received over the network, in case that data does not comfortably fit in RAM.
It spawns user-specified executables, similar to CGI. Each of these spawned processes runs as its own dedicated user.
The helper subprocess is launched as root so that it can manage file ownerships and permissions and can spawn more processes as specific users.
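To make that concrete, the spawning works roughly like the sketch below (user name and program path are invented for illustration):

    #include <grp.h>
    #include <pwd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* fork, drop to the dedicated user, then exec the user-specified program */
    static pid_t spawn_as_user(const char *user, const char *prog, char *const argv[]) {
        struct passwd *pw = getpwnam(user);
        if (!pw) return -1;

        pid_t pid = fork();
        if (pid == 0) {
            /* drop supplementary groups, then GID, then UID - in that order */
            if (initgroups(user, pw->pw_gid) != 0 ||
                setgid(pw->pw_gid) != 0 ||
                setuid(pw->pw_uid) != 0)
                _exit(127);
            execv(prog, argv);
            _exit(127);
        }
        return pid;
    }

    int main(void) {
        char *args[] = { "start", NULL };
        pid_t pid = spawn_as_user("app_user", "/opt/app/start", args); /* hypothetical */
        if (pid > 0) waitpid(pid, NULL, 0);
        return 0;
    }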
Some users of my module run on systems with SELinux installed, e.g. RedHat-based distros. SELinux usually interferes with my module. Until now I've been telling people to disable SELinux system-wide because I can't figure out how to write a proper policy for my software. Documentation is very scattered, complex and usually only targets system administrators, not software developers.
As a step into the right direction, I want to implement minimal support for SELinux. I'm looking for a way to launch my helper subprocess without any SELinux constraints without disabling SELinux system-wide. Is there a way to do that, and if so, how?
Well... you could write a rule that transitions your domain to unconfined_t, but then you'd piss off quite a few sysadmins. Best to write yourself a new domain that your helper transitions into from httpd_t, and add the appropriate contexts for the access it needs.
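Very roughly, and untested, such a policy module might look like this in refpolicy style (all myhelper_* names are made up; the permissive statement is only a starting point so you can collect AVC denials and tighten the rules with audit2allow):

    policy_module(myhelper, 1.0.0)

    require {
        type httpd_t;
        role system_r;
    }

    # hypothetical types for the helper executable and its runtime domain
    type myhelper_t;
    type myhelper_exec_t;
    domain_type(myhelper_t)
    domain_entry_file(myhelper_t, myhelper_exec_t)
    role system_r types myhelper_t;

    # let Apache (httpd_t) transition into myhelper_t when it execs the helper binary
    domtrans_pattern(httpd_t, myhelper_exec_t, myhelper_t)

    # log violations for this one domain without enforcing them (and without
    # touching enforcement for the rest of the system)
    permissive myhelper_t;

You would then label the helper binary as myhelper_exec_t (semanage fcontext plus restorecon), build the module with the Makefile shipped in the SELinux policy devel package, and load it with semodule -i.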