On Linux, file created by which program? - linux

I have the same question as this post,
only mine is on the Linux platform.
I have a directory in my folder
and I don't know which program created it.
Is it possible to find out?
Thanks

The same answer applies: unless the file itself carries metadata (as some .doc files do), you cannot know what created it. You could write a kernel module that intercepts block requests to create new files and checks which application submitted them, but that is probably not what you want to do.

The answer is the same as in the previous question -- generally, no.
However, you can look at the owner and group of this directory; if the program that created it is a daemon (service) process, it might be running under its own user/group, and the files/directories it creates might carry those ownerships.
What does this say?
ls -l /path/to/the/directory
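For example (hypothetical output; the names here are made up):
drwxr-xr-x 2 postgres postgres 4096 Jan 10 12:00 /path/to/the/directory
An owner/group like postgres would suggest the PostgreSQL daemon created the directory.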

The answer is the same as for the question you linked: Linux doesn't record which program created a file. But, as the other answer said, you could set up a monitor and record that information yourself.
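For example, on systems with the Linux audit daemon installed, you could watch the directory and later ask which executable wrote to it (the key name here is arbitrary):
auditctl -w /path/to/the/directory -p wa -k who-created
# ...wait for new files to appear, then:
ausearch -k who-created
Each matching record includes an exe= field naming the program that made the change; note this only catches files created after the watch was added.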

Related

How to make /proc-like files

I want to make a service, that runs like a daemon. It runs in the background and downloads things here and there.
There isn't much I want to present to the user, just some statistics: documents downloaded in the day/week, how many errors occurred, etc.
So I wanted to make just a "virtual file" where the information is displayed. Like the files in GNU/Linux in the /proc directory.
My question is: what is the convention for doing this? Where can I put this file (/proc is just for Linux kernel stats)? Can I somehow create a "virtual file" (like a socket or so), or is it better to write a real file? Or is this idea stupid?
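For what it's worth, a minimal sketch of the "real file" option, with a made-up daemon name: write the stats under /run and rewrite the file atomically so readers never see a half-written version:
mkdir -p /run/mydaemon
printf 'downloaded_today=42\nerrors=3\n' > /run/mydaemon/stats.tmp
mv /run/mydaemon/stats.tmp /run/mydaemon/stats   # rename is atomic on the same filesystem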

How to "safely" allow others to work on my server?

I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (Linux guest) on my main physical server (also Linux) using VirtualBox (or similar), have them do the work, figure out what they did, and manually implement the changes myself.
Does that seem secure? Are there better options? Thank you
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail," a user cannot see "outside" of the jail. You've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point is that YOU decide what the developer can and can't see).
You can use a directory for the chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system into a small file (say, 5 GB); there doesn't seem to be an official RPM equivalent. However, the process is fairly simple: create a file (I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120), make a filesystem on it with mkfs.ext4 jailFile, and finally mount it and include any files you wish the jailed user to use, either manually or with a tool (this is what debootstrap does: it downloads all the default goodies in /bin and such).
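Putting those steps together (paths and sizes are only examples):
dd if=/dev/zero of=jailFile bs=1M count=5120   # 5 GB empty file
mkfs.ext4 jailFile                             # put a filesystem on it
mkdir -p /mnt/jail
mount -o loop jailFile /mnt/jail               # mount the file
# on Debian: debootstrap stable /mnt/jail
# on CentOS: populate /mnt/jail by hand or with a third-party tool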
After these steps you can copy this file around, make backups, or even move servers, all with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot into a plain directory, which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with the OpenSSH server.
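As a rough sketch, the OpenSSH server can do the jailing itself via sshd_config (the group name is made up; note that sshd requires the chroot target and every directory above it to be root-owned and not group- or world-writable):
Match Group contractors
    ChrootDirectory /rundir/jail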
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.
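A minimal sketch of that workflow, assuming the configuration lives in the usual CentOS location:
cd /etc/httpd
git init && git add . && git commit -m "baseline Apache config"
# push to a private repository, have the contractor work on a branch,
# then review and merge at your leisure:
git diff master contractor-branch
git merge contractor-branch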

Preferred location for PID file of system daemon run as non-root user

My question is related to this question, but the processes in question are run from cron, and by non-root users. As such, many of the users don't really have home dirs (or their home dirs point to /usr/share/package_name which is not an ideal location for a PID file).
Storing in /var/run is problematic, because this directory is not writable except by root.
I could use /tmp, but I wonder whether that is wise for security reasons.
I could arrange for a startup script to create a directory in /var/run owned by the appropriate user (I can't do this at package install time, as /var/run is often mounted as tmpfs and so is not persistent).
What's the best practice here?
Nice question :). I'm facing exactly the same one at the moment. I'm not sure this is the correct answer, but I hope it helps, and I would appreciate feedback as well.
I've googled around and found that registering the per-user daemon as a D-Bus service is an elegant solution: D-Bus can make sure that the service runs just once, so there is no need for a PID file.
Another solution (my current) would be to create the PID file in a directory like:
$HOME/.yourdaemon/pid
After your comment I realized that you cannot write to home. I would suggest looking into D-Bus.
Update
I have an idea. What if you use /tmp, but look for a PID file called yourdaemon.pid.UNIQUE_KEY that is owned by the daemon's user? This should work fine.
UNIQUE_KEY should be randomly generated (tempnam is preferred, as it is race-condition proof).
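In shell, mktemp gives you exactly that kind of race-free unique name (the daemon name is a placeholder):
pidfile=$(mktemp /tmp/yourdaemon.pid.XXXXXX)
echo $$ > "$pidfile"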

NodeJS: How would one watch a large amount of files/folders on the server side for updates?

I am working on a small NodeJS application that essentially serves as a browser-based desktop search for a LAN server that multiple users can query. The users on the LAN all have access to a shared folder on that server and are traditionally used to just placing files within that folder to share with everyone, and I want to keep that process the same.
The first solution I came across was fs.watchFile, which has been touched on in other Stack Overflow questions. In the first question, user Ivo Wetzel noted that on a Linux system fs.watchFile uses inotify but was of the opinion that fs.watchFile should not be used for large numbers of files/folders.
In another question about fs.watchFile, user tjameson first reiterated that on Linux inotify would be used by fs.watchFile and recommended just using a combination of node-inotify-plusplus and node-walk, but again stated this method should not be used for a large number of files. In a comment and response he suggested watching only the modified times of directories and then rescanning the relevant directory for file changes.
My biggest hurdle seems to be that even with tjameson's suggestion there is still a hard limit on the number of folders monitored (of which there are many, and growing). It would also have to be done recursively, because the directory tree is somewhat deep and can change at the lower branches, so I would have to monitor the following at every folder level (or, alternatively, monitor the modified times of the folders and then scan to find out what happened):
creation of file or subfolder
deletion of file or subfolder
move of file or subfolder
deletion of self
move of self
Assuming inotify has limits in line with what was said above, this alone seems like it may be too many monitors when I have a significant number of nested subfolders. The really awesome way looks like it would involve kqueue, which I subsequently found as a topic of discussion on a better fs.watchFile in a Google group.
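For reference, the per-user inotify watch limit is visible and tunable via sysctl, so it is worth checking before ruling the approach out:
cat /proc/sys/fs/inotify/max_user_watches   # often defaults to 8192
sudo sysctl fs.inotify.max_user_watches=524288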
It seems clear to me that keeping a database of the relevant file and folder information is the appropriate course of action on the query side of things, but keeping that database synchronized with the actual state of the file system under the directories of concern will be the challenge.
So what does the community think? Is there a better or well-known solution for attacking this problem that I am just unaware of? Is it best just to watch all directories of interest for a single change, e.g. modified time, and then scan to find out what happened? Is it better to watch all the relevant inotify alerts and update the database appropriately? Is this not a problem which is solvable by a peasant like me?
Have a look at monit. I use it to monitor files for changes in my dev environment and restart my node processes when relevant project files change.
I recommend taking a look at the Dropbox API.
I implemented something similar with Ruby on the client side and Node.js on the server side.
The best approach is to keep hashes to check whether the files or folders changed.
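As a sketch of the hashing idea (the shared folder path is hypothetical), take periodic snapshots and diff them; created, deleted, and modified files all show up:
find /srv/shared -type f -exec md5sum {} + | sort -k2 > snapshot.new
diff snapshot.old snapshot.new   # shows additions, deletions, and changed hashes
mv snapshot.new snapshot.old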

Giving a unix process exclusive RW access to a directory [closed]

Is there a way to sandbox a Linux process into a certain directory and give this process exclusive rw access to this dir? For example, create a temporary working directory and start e.g. Python or another scripting tool in such a way that it can only write in this directory, without limiting too much of its functionality. And also so that only this process can read from this directory (except for superusers, of course).
I need this to sandbox a web service that basically allows users to run arbitrary code. We currently do authorization in the software itself, but in the end all processes run as one and the same linux user. We would need a way in which a user cannot do any harm on the system, but does have a temporary private working directory to write and read files that is protected from the other users of the webservice.
File permissions are based on owner/group, not process, so multiple programs run by the same user will be able to access that user's directories. However, if you create a temporary directory for each process before it runs and then chroot() into it, no process should be able to get out of its chroot jail to access other directories.
The basic notion is that the temp directory becomes the top of the directory tree as far as the process is concerned. The process doesn't know about, nor can it change to, anything above it. Otherwise it can read/write and create/delete whatever to its heart's content in its sandbox.
For instance:
/rundir
/rundir/temp1 <-- process 1 chroot jailed here, can't go above
/rundir/temp2 <-- process 2 chroot jailed here, can't go above
See also "man 8 chroot".
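For example (using the layout above; the jail must already contain the binary and any libraries it needs):
mkdir -p /rundir/temp1
chroot /rundir/temp1 /bin/sh   # fails unless /bin/sh and its libs exist inside the jail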
in such a way that it can only write in this directory, without limiting too much of its functionality.
Wow, this sounds almost magical. Hardly a programming question.
Sounds like you want something like the Linux equivalent of the FreeBSD jail, or at least something quite similar. This blog posting contains a description of a tool with the same name, at least.
You could use a kernel patch like Grsecurity (there are others that could do the job, I think, look for SELinux and AppArmor) to enforce RBAC (role-based access control) for a certain process.
I think using a security enhanced kernel is a must, given your usage scenario.

Resources