I have a question about the Zen of Unix / GNU/Linux, which feels like an aha moment for me...
Apparently almost any standard Unix/Linux program has a config file, and most of these are located in the /etc directory.
Can we derive the concept as follows:
1- As an application developer, you should design your software to have a customization file (possibly located in /etc).
2- Then, admins or users can SET these configs based on their needs and run your program.
3- Changing the behavior of your program should ONLY depend on its config file.
If you're asking whether this is true, that tends to be the convention, yes. Keep in mind that developers are free to design their programs to run however they want, and they tend to follow this pattern only for the sake of consistency.
Other patterns you may see:
Programs with no global settings and only per-user settings may store their settings in ~/.[something], or maybe somewhere else entirely. Many programs do this AND use /etc. Bash is a good example, using /etc/profile for default settings and ~/.bashrc for user settings.
Very large standalone installations of some programs may package all of their files into their own .../etc, .../bin, etc. directories, and will not use the typical system directories at all. An example of this is charybdis, an ircd, which stores everything in a folder specified at compile time (mine lives in /var/ircd, so I have /var/ircd/etc, /var/ircd/bin, /var/ircd/lib, ...).
OS X is a certified Unix and tries not to use /etc: in effect, only Apple-supplied programs should change /etc, and Apple supplies alternatives.
However, for all OSes, including Windows, you do have a separate configuration/customisation file (or, in Windows, the registry), and there probably need to be two of these: one that is set and controlled by admins and one for changes the user makes. On Linux, the former can use /etc; see the Filesystem Hierarchy Standard.
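To illustrate that two-level pattern, here is a minimal sketch in shell; the name myapp and the files /etc/myapp.conf and ~/.myapprc are hypothetical:
#!/bin/sh
# Hypothetical startup for "myapp": admin-controlled defaults are read first,
# then per-user overrides, so the user file wins wherever both set a value.
[ -r /etc/myapp.conf ] && . /etc/myapp.conf    # system-wide defaults (admins)
[ -r "$HOME/.myapprc" ] && . "$HOME/.myapprc"  # per-user customisation
# ... the rest of the program uses the variables set above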
I used this configuration to execute an external script, via mod_exec for ProFTPD:
ExecEngine on
ExecLog /opt/proftpd_mod_exec.log
ExecOptions logStderr logStdout
<IfUser yogi>
ExecBeforeCommand STOR,RETR /home/yogi/Desktop/kab.sh EVENT=BeforeCommand FILE='%f'
ExecOnCommand STOR,RETR /home/yogi/Desktop/kab.sh EVENT=OnCommand FILE='%f'
</IfUser>
But I get an error like this in the proftpd_mod_exec.log file: STOR ExecBeforeCommand '/home/yogi/Desktop/kab.sh' failed: Exec format error
How can I fix it?
From http://www.proftpd.org/docs/contrib/mod_exec.html:
This module will not work properly for anonymous logins, or for logins that are affected by DefaultRoot. These directives use the chroot(2) system call, which wreaks havoc when it comes to scripts. The path to script/shell interpreters often assumes a certain location that is no longer valid within a chroot. In addition, most modern operating systems use dynamically loaded libraries (.so libraries) for many binaries, including script/shell interpreters. The locations of these libraries, when they come to be loaded, are also assumed; those assumptions break within a chroot. Perl, in particular, is so fraught with filesystem location assumptions that it's almost impossible to get a Perl script to work within a chroot, short of installing Perl itself into the chroot environment.
From the error message it sounds like exactly that: you have enabled chroot, and the script cannot be executed because the files it needs are not available at the expected places within the chroot.
The author suggests not using the module in this situation because of this.
To get it to work, you need to figure out the dependencies you need in the chroot target and set them up there at the appropriate places. Or disable chroot for the users and try again. A third possibility: build a statically linked binary with almost no dependencies.
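For example, assuming kab.sh is a shell script, you could inspect what it would need inside the chroot roughly like this (the paths are illustrative):
# Which interpreter does the script ask for?
head -1 /home/yogi/Desktop/kab.sh        # e.g. "#!/bin/bash"
# Which shared libraries does that interpreter need?
ldd /bin/bash
# All of these must exist at the same relative paths below the chroot root
# (the user's home directory when DefaultRoot ~ is in effect), e.g.
# /home/yogi/bin/bash, /home/yogi/lib/x86_64-linux-gnu/libc.so.6, ...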
Or try, as the author of the module suggests, using a FIFO and ProFTPD's logging functionality to trigger the scripts outside of the chroot environment.
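A rough sketch of the FIFO idea (the FIFO path and the use of TransferLog are assumptions; check the ProFTPD documentation for the exact logging setup):
# One-time setup, outside the chroot:
mkfifo /var/ftpd/xfer.fifo
# In proftpd.conf, point the transfer log at the FIFO, e.g.:
#   TransferLog /var/ftpd/xfer.fifo
# A reader running outside the chroot reacts to each logged transfer:
while read line; do
    /home/yogi/Desktop/kab.sh "$line"
done < /var/ftpd/xfer.fifo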
The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know, at run time, whether the file in question has been opened.
I want to create a script which monitors if the file is open, and if it is, I want it to send an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename for checking if a file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
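For the e-mail alert, a minimal polling sketch (the file path, the interval, the address and the availability of the mail command are assumptions):
#!/bin/sh
# Mail an alert whenever some process currently has the file open.
FILE=/path/to/watched/file
while true; do
    if [ -n "$(lsof -t "$FILE")" ]; then
        echo "$FILE is currently open" | mail -s "file open alert" admin@example.com
    fi
    sleep 10
done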
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
You could (in addition to the other answers) use the Linux-specific inotify(7) facilities.
My understanding is that you want to track one (or a few) particular files, with a fixed file path (actually a given i-node). E.g., you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local file system, e.g. Ext4, BTRFS, ..., but not NFS) is accessed or modified, use inotify; incrond is made exactly for that purpose.
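For example, an incrontab(5) entry could look roughly like this (the watched path and the script are hypothetical; $@ expands to the watched path and $% to the event name):
/var/run/foobar  IN_MODIFY,IN_CLOSE_WRITE  /usr/local/bin/react-to-change.sh $@ $%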
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying a file).
If the files you care about are somehow source files, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are related to file modification.
You could also have the particular file sit in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
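For instance, with the flock(1) command-line wrapper (the lock-file and data-file paths are placeholders):
# Writer: hold an exclusive advisory lock while appending
flock /tmp/mydata.lock -c 'echo "new record" >> /shared/mydata'
# Reader: hold a shared lock while reading
flock -s /tmp/mydata.lock -c 'cat /shared/mydata'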
Perhaps the data sitting in the file should be in some database (e.g. SQLite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation. You should avoid an XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project, I would at least recommend using some advisory lock, and accessing & modifying the information only through your own programs (perhaps setuid) using flock (this excludes ordinary editors like gedit or commands like cat ...). However, your implicit use case seems well suited for a DBMS approach (a database does not have to contain a lot of data; it might be tiny), or some locked indexed file as handled by the GDBM library.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt; it lists files sorted by modification time, most recently modified last. From that you can get an idea of whether the file has recently been written. Make sure that you are in the right directory.
I want to write a shell script to monitor changes (rename and delete) in files in a directory.
You probably should learn how to use a revision control system, a.k.a. version control system. I strongly recommend using git; I'm guessing that you need a VCS....
You may want to use the Linux-specific inotify(7) facilities, e.g. with incrontab(5) files for incrond(8)
(it might not work well for network file systems like NFS; it works better on local Linux file systems like Ext4, BTRFS, or XFS).
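With the inotify-tools package installed, a minimal sketch of such a monitoring script could be (the directory is a placeholder):
#!/bin/sh
# Print every rename (move) and delete in the watched directory as it happens.
inotifywait -m -e move -e delete /path/to/dir |
while read dir event name; do
    echo "$(date): $event  ${dir}${name}"
done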
Let's assume that we have several non-identical versions of the same folder in different locations, as follows:
/in/some/location/version1
/different/path/version2
/third/place/version3
Each version contains callerFile, a pre-compiled executable whose behavior we cannot control. This callerFile will create and edit a folder called cache:
/some/fourth/destination/cache
So we have a conflict between the settings of the different versions, and what I want to do is convert /some/fourth/destination/cache into a link with 3 different destinations:
/some/fourth/destination/cache --> /in/some/location/version1/cache
/some/fourth/destination/cache --> /different/path/version2/cache
/some/fourth/destination/cache --> /third/place/version3/cache
So, for example:
if /in/some/location/version1/callerFile calls /some/fourth/destination/cache, it should be redirected to /in/some/location/version1/cache
and if /different/path/version2/callerFile calls /some/fourth/destination/cache, it should be redirected to /different/path/version2/cache
and if /third/place/version3/callerFile calls /some/fourth/destination/cache, it should be redirected to /third/place/version3/cache
So, How can I do so on Ubuntu 12.04 64 bit Operating System?
Assuming you have no control over what callerFile actually does (I mean it does what it wants, always the same way), the conclusion is that you need to modify its environment. This is quite an advanced trick, requiring deep experience with the Linux kernel and Unix programming in general, and you should consider whether it's worth it. It will also require root privileges on the machine where your callerFile binary exists.
The solution I'd propose is creating an executable (or some script calling one of the exec() family of functions) that prepares a special environment (or makes sure it's ready to use), based on "mount -o bind" or the unshare() system call.
As said, playing with the so-called "execution context" is quite an advanced trick. Theoretically you could also try some autofs-like solution; however, you'll probably end up with the same thing, and bind mount/unshare will probably be more effective than some FS-detection daemon. I wouldn't recommend diving into FUSE, for the same reason. And playing some over-complicated game with symlinks is probably not the way either.
http://www.kernel.org/doc/Documentation/unshare.txt
Note: whatever "callerFile" binary does, I'm pretty sure it won't check its own filename, which makes possible replacing it with something else in-between, which will do exec() on "callerFileRenamed".
As I understand it, basically what you want is to get different result with the same activity, distinguished by some condition external to activity itself, like, for example, returning different list for "ls" in same directory, based upon e.g. UID of user who issued "ls" command, without modifying some ./ls program binary.
I am currently working on a PXE bootable environment that I would like to put into revision control.
The filesystem and server will both be Linux (SLES if you must know).
I've considered using some kind of hack that stores file ownership/permissions via getfacl -R -P, but this doesn't cover symlinks or devices. And it's kind of ugly.
Tricky things that I need to be covered:
file ownership
file permissions (ACLs are not necessary)
devices
symlinks
Are there any revision control systems that will cover my needs for this?
Note: Rather than a "block volume", I need to put a "set of files" into revision control and keep individual changes.
I'm pretty sure that Perforce will handle all of those things. I believe another group at work uses it for nearly the exact same purpose that you list (storing a variant of linux for an embedded device).
etckeeper is a tool that puts the contents of /etc under revision control (with a choice of revision control systems). It has a layer on top that takes care of permissions, symlinks, etc. I'm not sure if it handles device nodes (probably not), but it could probably easily be extended to do so.
You may find that this tool can be adapted to handle an entire filesystem.
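For reference, basic etckeeper usage looks roughly like this (with git as the chosen backend):
etckeeper init                       # put /etc under version control
etckeeper commit "initial import"    # record the current state
# ... edit configuration files, then record the change
etckeeper commit "tuned sshd_config"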
You can use versioning filesystems, that is, filesystems with version control built into the filesystem code itself.
Some examples:
NILFS
Ext3cow
In principle, any filesystem that supports snapshots could be used.
More information: Versioning file system on Wikipedia.
If your file system will not be extremely huge, you might just store it all in a single file, version control that binary file, and then mount that to work with it. It won't give you pretty file-level control, but it might work acceptably for some tasks.
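A minimal sketch of that single-file approach (the size, filesystem type and paths are placeholders):
# Create an image file, put a filesystem on it, and mount it loopback
dd if=/dev/zero of=rootfs.img bs=1M count=2048
mkfs.ext4 -F rootfs.img
mkdir -p /mnt/rootfs
mount -o loop rootfs.img /mnt/rootfs
# ... populate /mnt/rootfs (ownership, devices and symlinks are preserved inside
# the image), then unmount and check rootfs.img into the revision control system
umount /mnt/rootfs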