I have two servers, computer A and computer B, both running Linux. I need to write a program or a shell script which will continuously monitor the contents of my home directory on computer A and, if anything changes, copy the changes to my home directory on computer B so that both home directories are always the same. (Any changes made to the home directory on computer B can safely be ignored.)
Have you considered exporting /home from computer A to computer B via a network file system, e.g. NFS?
You could also mount the exported filesystem on B in read-only mode so you won't be able to modify the contents of /home from B if that's desired.
Assuming a reasonably recent Linux kernel (one including inotify - it's been present since 2.6.13), you could use inotify-tools as described here to monitor for changes and call rsync on the files to update computer B. That should do what you're asking for, and allow changes on B that don't propagate to A, as well.
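As a minimal sketch of that approach, assuming inotify-tools and rsync are installed and passwordless SSH to computer B is set up (the paths and the host name computerB are placeholders):

# Watch the home directory recursively and push changes to computer B on every event.
inotifywait -m -r -e modify,create,delete,move /home/userA |
while read -r path events file; do
    # A full rsync on every event is coarse but simple; fine for a modest home directory.
    rsync -az --delete /home/userA/ computerB:/home/userA/
done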
You could probably do the same job with incron, which works like cron but based on filesystem events instead of times, but it seems more intended for use with single files.
Use rsync, which will solve your problem. Most distributions already have it pre-installed.
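For example, a one-shot mirror of the home directory could look like this (assuming SSH access to computer B; the user and host names are placeholders), and you could run it from cron at whatever interval suits you:

rsync -az --delete /home/userA/ userA@computerB:/home/userA/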
Related
I am looking for a program that would allow me to mirror one partition to another disk (something like RAID1) for Linux. It doesn't have to be a windowed application, it can be a console application, I just want what is in one place to be mirrored to another.
It would be nice if it were possible to mirror a specific folder that I would care for instead of copying everything from the given partition.
I searched the internet, but it is hard to find something that offers these capabilities, hence this question.
I do not want to use fakeRAID on Linux or hardware RAID, because I have read that if the motherboard (or controller) fails, it is best to have an identical second one in order to recover the data.
I will be grateful for every suggestion :)
You can check my script "CopyDirFile", written in Bash, which is available on GitHub.
It lets you create a replication (mirroring) task from any source folder to a destination folder (deleting a file in the source folder also deletes it in the destination folder).
The script also allows you to create copy tasks (deleted files in the source folder will not be deleted in the target folder).
The tasks are executed in the background at a specified interval rather than all the time; the frequency is set by the user when creating the task.
You can also set the task to start automatically when the user logs on.
All the necessary information can be found in the README file in the repository.
If I understood you correctly, I think it meets your requirements.
Linux has standard support for software RAID: mdraid.
It allows you to bundle two disk devices into a RAID 1 device (among other things); you then create a filesystem on top of that device.
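For illustration, creating a RAID 1 array with mdadm might look like this (the device names /dev/sdb1 and /dev/sdc1 and the mount point are assumptions; adapt them to your disks):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0          # create a filesystem on top of the array
mount /dev/md0 /mnt/mirror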
LVM offers another way to do software RAID; it doesn't seem to be very popular, but it's certainly supported.
(If your system supports hardware RAID, on the motherboard or with a separate RAID controller, Linux can use that, too, but that doesn't seem to be what you're asking here.)
I am working on a Linux computer which is locked down and used in kiosk mode to run only one application. This computer cannot be updated or modified by the user. When the computer crashes or freezes, the OS rebuilds or modifies the ld-2.5.so file. This file needs to be locked down without allowing even the slightest change to it (a resident application requires ld-2.5.so to remain unchanged, and that is out of my control). Below are the methods I can think of to protect ld-2.5.so, but I wanted to run them by the experts to see if I am missing anything.
I modified fstab to mount the ext3 filesystem as ext2 to disable journaling. I also set the dump and fsck fields to "0" to disable those processes.
Performed a "chattr +i ld-2.5.so" on the file but there are still system processes that can overwrite this protection.
I could attempt to trap the name of the processes which are hitting ld-2.5.so and prevent this.
Any ideas or hints would be greatly appreciated.
-Matt (CentOS 5.0.6)
chattr +i should be fine in most circumstances.
The ld-*.so files are under /usr/lib/ and /usr/lib64/. If /usr/ is a separate partition, you also might want to mount that partition read only on a kiosk system.
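For example (the path below follows the layout described above and may differ on your system):

chattr +i /usr/lib/ld-2.5.so     # make the file immutable
lsattr /usr/lib/ld-2.5.so        # verify the i attribute is set
mount -o remount,ro /usr         # if /usr is a separate partition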
Do you have, by any chance, some automated updating/patching of said PC configured? ld-*.so is part of glibc and basically should only change if the glibc package is updated.
The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know if the file in question is open, at run time.
I want to create a script which monitors if the file is open, and if it is, I want it to send an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename for checking if a file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
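Based on lsof -t, a rough sketch of the alert the question asks for could look like this (the file path and e-mail address are placeholders, and it assumes a working mail command):

FILE=/path/to/shared/file
if [ -n "$(lsof -t "$FILE")" ]; then
    echo "$FILE is currently open" | mail -s "File open alert" you@example.com
fi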
You could (in addition to the other answers) use the Linux-specific inotify(7) facilities.
My understanding is that you want to track one (or a few) particular given files, with a fixed file path (actually a given i-node). E.g. you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it thru incrontab(5).
If you want to run a script when some given file (on a native local filesystem, e.g. ext4, Btrfs, ... but not an NFS filesystem) is accessed or modified, use inotify; incrond is designed exactly for that purpose.
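For example, an incrontab(5) entry might look like this (the watched path and the script /usr/local/bin/on-change.sh are assumptions; $@ expands to the watched path and $# to the file name involved in the event):

/var/run/foobar IN_ACCESS,IN_MODIFY /usr/local/bin/on-change.sh $@ $#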
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying a file).
If the files you care about are source files of some kind, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are related to file modification.
You could also have the particular files sit in some FUSE filesystem and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
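For instance, with the flock(1) wrapper around flock(2), cooperating programs could serialize access like this (the lock file path and the command are placeholders):

flock /tmp/shared-file.lock -c 'command-that-modifies-the-file'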
Perhaps the data sitting in the file should be in some database (e.g. SQLite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important.
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation. You should avoid the XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using some advisory lock, and accessing & modifying the information only thru your own programs (perhaps setuid) using flock; this excludes ordinary editors like gedit or commands like cat. However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data; it might be tiny), or to some indexed, locked file such as the GDBM library handles.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt; it lists the files in a directory sorted by modification time, most recently written last, so you can see when the file was last changed and draw conclusions from that. Make sure that you are in the exact directory.
Please give me a hint about the simplest and lightest way to isolate a Linux shell script (usually on Ubuntu, in case that has something special).
What I mean by isolation:
1. Filesystem - the most important - the script must not be able to read any folders outside its workspace, except those I explicitly configure in some way
2. Actually, other types of isolation do not matter
"Soft" isolation is OK, meaning the script may simply fail/abort when it tries to access (read) a denied path, but "hard" isolation that returns "Not found" for such attempts looks like a cleaner solution.
I do not need any process isolation; the script may use sudo/fakeroot/etc. inside it, but this should not affect the isolation.
Also, I plan to use different isolation setups inside one workspace:
for example, I have folders:
a/
b/
include/
target/
I want to build "a", giving it access only to "a" (rw), "include" (r) and "target" (rw + sudo),
and build "b", giving it access only to "b" (rw), "include" (r) and "target" (rw + sudo),
and "target" will receive the results from both A and B, with B allowed to overwrite any of A's results - the same as if there were no isolation.
The goal of the isolation I'm talking about is to prevent B from reading from A, or even knowing that A exists, and vice versa.
Thanks!
Using two different users and SSH is a simple way to solve your problem. One of the key benefits is that this will start a "clean" environment in a new shell.
ssh <user_a>@localhost '<path_to_build_script_a>'
ssh <user_b>@localhost '<path_to_build_script_b>'
Users a and b must both be members of the group that owns the common directories.
Note that it's the directory write permission that decides whether a user can create new files inside that directory.
Edit: 2013-07-29
For lots of sequential isolated builds like in your case, one solution is to do as you have already suggested: automate file permission changes so that each build only has access to the files and folders it should.
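As a rough sketch of such a setup, assuming two dedicated users build_a and build_b and a shared group builders (all names are placeholders, run from the workspace directory):

groupadd builders
useradd -m -G builders build_a
useradd -m -G builders build_b
chown -R build_a: a/
chmod -R u+rwX,go-rwx a/        # only build_a can read/write a/
chown -R build_b: b/
chmod -R u+rwX,go-rwx b/        # only build_b can read/write b/
chgrp -R builders include/
chmod -R g+rX,o-rwx include/    # group members may read include/
chgrp -R builders target/
chmod -R g+rwX,o-rwx target/    # group members may write target/

Then run each build over SSH as its own user, as in the commands above.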
I am curious whether it is possible to artificially modify the server load in Ubuntu or, more generally, Linux. I am working on an application that reacts to the server load, and in order to test it, it would be nice if I could change the server load easily.
I am currently running an over-active program that will literally generate load, but I'd prefer to not continue overheating my laptop (it's getting hot!).
One of the most important things to know about Linux (or Unix) systems is that everything is just a file. Since you are just reading from /proc/loadavg, the easiest way to accomplish what you are after is simply to make a text file that contains a line of text like the one you see when running cat /proc/loadavg. Then have your program read from that file you created instead of /proc/loadavg, and it will be none the wiser. If you want to test under different "artificial" situations, just change the text in this file and save. When your testing is done, simply change your program back to reading from /proc/loadavg and you can be sure it will work as expected.
Note, you can make this text file anywhere you want...in your home directory, in the program directory, wherever. However, you shouldn't make it in /proc. That directory is reserved for system objects.
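For example, to create such a faux loadavg file (the numbers are arbitrary; the first three fields are the 1-, 5- and 15-minute load averages, followed by running/total processes and the last PID, mirroring the format of /proc/loadavg):

echo "2.50 1.75 0.90 3/321 12345" > /tmp/loadavg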
You can use the stress command, see http://weather.ou.edu/~apw/projects/stress/
A tool to impose load on and stress test a computer system
sudo apt-get install stress
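Then, for example, you can spin up a few CPU, I/O and memory workers for a limited time (the worker counts and duration here are just an illustration):

stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 60s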
To avoid heating up the CPU, you can install a virtual machine with a small CPU allocation and generate the load there. VirtualBox and qemu-kvm are free.
Use chroot to run the various pieces of software you're testing with a specified directory as the root directory. Set up a manufactured/modified /proc/loadavg relative to that new root directory, too.
chroot will let you create a dummy file that appears to have /proc/loadavg as its path, so the software will observe your manufactured values even if you can't change your code to look for load data in a different location.
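A very rough sketch of that idea (the paths are placeholders; a usable chroot also needs your program's binary and the libraries it links against copied or bind-mounted inside, which is omitted here):

mkdir -p /srv/fakeroot/proc
echo "2.50 1.75 0.90 3/321 12345" > /srv/fakeroot/proc/loadavg
chroot /srv/fakeroot /path/to/your/program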
Since you don't want to actually/literally stress the machine, something like stress is not what you are after.
As stated, /proc/loadavg would be the place to set system load averages (faux loads).
But if that's also not the meat of what you're after, I would absolutely suggest
getloadavg
watchdog
and even possibly Munin plugins
There are two methods.

Method 1: Hack /proc/loadavg
The machine is not overstressed.
Your program still reads the load values from a file.
To do: hack Linux to report a fake load value.

Method 2: Modify your program
The machine is not overstressed.
Your program still reads the load values from a file.
To do: change 4 characters in your program: replace /proc/loadavg with /tmp/loadavg.

You can decide now. Calculate the costs ;)