Protect file from system modifications - linux

I am working on a Linux computer that is locked down and used in kiosk mode to run only one application. This computer cannot be updated or modified by the user. When the computer crashes or freezes, the OS rebuilds or modifies the ld-2.5.so file. This file needs to be locked down so that not even the slightest change to it is allowed (there is a resident application which requires ld-2.5.so to remain unchanged, and that is out of my control). Below are the methods I can think of to protect ld-2.5.so, but I wanted to run them by the experts to see if I am missing anything.
I modified fstab to mount the ext3 filesystem as ext2 to disable journaling. I also set the dump and fsck fields to "0" to disable those passes.
Performed a "chattr +i ld-2.5.so" on the file but there are still system processes that can overwrite this protection.
I could attempt to trace which processes are touching ld-2.5.so and block them.
Any ideas or hints would be greatly appreciated.
-Matt (CentOS 5.0.6)

chattr +i should be fine in most circumstances.
The ld-*.so files live under /lib/ and /lib64/ (on newer, usr-merged distributions they end up under /usr/lib/ and /usr/lib64/). If the partition holding them is separate from the rest of the system, you might also want to mount that partition read-only on a kiosk system.
Do you have, by any chance, some automated updating/patching of said PC configured? ld-*.so is part of glibc and basically should only change if the glibc package is updated.
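As a minimal sketch of those two protections (the immutable flag plus a read-only remount), assuming a 64-bit CentOS 5 layout with the loader in /lib64 and a hypothetical separate partition:
# Make the loader immutable; only root can clear the flag again
chattr +i /lib64/ld-2.5.so
lsattr /lib64/ld-2.5.so              # should now show the 'i' attribute
# If the directory holding it is on its own partition, remount that partition read-only
mount -o remount,ro /usr             # hypothetical separate partition
# Revert only when you genuinely need to update glibc
chattr -i /lib64/ld-2.5.so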

Related

Deployment over GPRS to embedded devices

I've got quite a head scratcher here. We have multiple Raspberry Pis in the field, hundreds of kilometers apart. We need to be able to upgrade them remotely and safe(ish)ly, as local access can cost up to a few hundred euros.
The Pis run Raspbian; / is on an SD card mounted read-only to prevent corruption when power is cut (usually once a day). The SD cards are cloned from the same base image, but contain manually installed packages and modified files that may differ between devices. Each Pi also has a USB flash drive as a more corruption-resistant read-write drive, plus a script that reformats it on boot in case the drive is corrupted. They call home over a GPRS connection of varying reliability.
The requirements for the system are as follows:
Easy versioning of config files, scripts and binaries, covering at least /etc, /root and /home, preferably with Git
Efficient up-/downgrade from any version to any other over GPRS -> transfer file deltas only
Possibility to automatically roll back a recently applied patch if the connection no longer works
The root file system cannot be in RW mode while downloading changes; the changes need to be stored locally before being applied to /
The simple approach might be to keep a complete copy of the file system in a remote Git repository, generate a diff file between commits, upload the patch to the field and apply it. However, at the moment the files on different Pis are not identical. This means that, at least when installing the system, the files would have to be synchronized with something similar to rsync -a.
The procedure should be along the lines of "save the diff between / and the remote (SSH) folder to a file on the USB stick, mount / RW, apply the diff from the file, mount / RO". Rsync computes and applies the deltas in one pass, so my first question becomes:
1. Does something like rsync exist that can save the file deltas between local and remote, and apply them later?
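For what it's worth, one possible shape for that "save now, apply later" split is rsync's batch mode (--only-write-batch / --read-batch); the host and paths below are made up:
# On the Pi: compute deltas against the staging copy on the server, but only
# record them into a batch file on the USB stick; / is not touched yet.
rsync -a --only-write-batch=/mnt/usb/update.batch deploy@server:/images/pi-v2/ /
# Later, offline: remount root read-write, replay the recorded deltas, remount read-only.
mount -o remount,rw /
rsync -a --read-batch=/mnt/usb/update.batch /
mount -o remount,ro /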
Also, I have never built a system like this, and the draft above is the "closest to legit" I can come up with. There are a lot of moving parts here, and I'm terrified that something I didn't think of beforehand will cause things to go horribly wrong. The rest of my questions are:
Am I way off base here and is there actually a smarter/safe(r) way to do this?
If not, what kind of best practices should I follow, and what kinds of things should I be extremely careful with (so as not to brick the devices)?
How do I handle things like installing new programs? Bypass the package manager and install in /opt?
How to manage permissions/owners (root+1 user for application logic)? Just run everything as root and hope for the best?
Yes, this is a very broad question. This will not be a direct answer to your questions, but rather some guidelines for your research.
One way to prevent file system corruption is to use an overlay file system (e.g., AUFS, UnionFS) where the root file system is mounted read-only and a tmpfs (RAM based) or flash based read-write layer is mounted "over" the read-only root. This requires your own init scripts, including use of the pivot_root command. Since nothing critical is mounted RW, the system handles power outages robustly. The gist is that before the pivot_root, the FS looks like
/ read-only root (typically flash)
/rw tmpfs overlay
/aufs AUFS union overlay of /rw over /
after the pivot_root
/ union overlay (was /aufs)
/flash read only root (was /)
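A rough sketch of the early-boot sequence that produces that layout, assuming AUFS; the mount points are assumed to already exist in the read-only image, and the exact branch option syntax (br= vs. dirs=) depends on the AUFS version:
# Early in the init script, while the flash root is still mounted read-only at /.
mount -t tmpfs tmpfs /rw                       # RAM-backed write layer
mount -t aufs -o br=/rw=rw:/=ro none /aufs     # union: writable /rw branch over read-only /
mkdir -p /aufs/flash                           # mount point for the old root (lands in tmpfs)
cd /aufs
pivot_root . flash                             # / becomes the union; the old root moves to /flash
exec chroot . /sbin/init                       # continue booting inside the new root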
Updates to the /flash file system are done by remounting it read-write, doing the update, and remounting read-only. For example,
mount -oremount,rw <flash-device> /flash
cp -p new-some-script /flash/etc/some-script
mount -oremount,ro <flash-device> /flash
You may or may not immediately see the change reflected in /etc depending upon what is in the tmpfs overlay.
You may find yourself making heavy use of the chroot command, especially if you decide to use a package manager. A quick sample:
mount -t proc none /flash/proc            # kernel process info, needed by many tools in the chroot
mount -t sysfs none /flash/sys            # kernel device/driver info
mount -o bind /dev /flash/dev             # device nodes
mount -o bind /dev/pts /flash/dev/pts     # pseudo-terminals (some package scripts want these)
mount -o bind /rw /flash/rw               # expose the overlay's write branch inside the chroot
mount -oremount,rw <flash-device> /flash
chroot /flash
# do commands here to install packages, etc.
exit   # leave the chroot environment
mount -oremount,ro <flash-device> /flash
Learn to use the patch command. There are also binary patch tools; see "How do I create binary patches?".
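For example, a hypothetical binary delta with bsdiff/bspatch (file names invented):
# On the server: create a binary delta between the old and new versions of a file
bsdiff app-v1.bin app-v2.bin app-v1-to-v2.patch
# On the device: reconstruct the new version from the old one plus the delta
bspatch app-v1.bin app-v2.bin app-v1-to-v2.patch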
For recovery when everything goes wrong, you need hardware support: watchdog timers and the ability to do a fail-safe boot from an alternate (secondary) root file system.
Expect to spend a significant amount of time and money if you want a bullet-proof product. There are no shortcuts.

Does Linux need a writeable file system

Does Linux need a writeable file system to function correctly? I'm just running a very simple init programme. Presently I'm not mounting any partitions. The kernel has mounted the root partition as read-only. Is Linux designed to be able to run with just a read-only file system as long as I stick to mallocs, readlines and text to standard out (puts), or does Linux require a writeable file system in order even to perform standard text input and output?
I ask because I seem to be getting kernel panics and complaints about the stack. I'm not trying to run a useful system at the moment; I already have a useful system on another partition. I'm trying to keep it as simple as possible so that I can fully understand things before adding an extra layer of complexity.
I'm running a fairly standard x86-64 desktop.
No, a writable file system is not required. It is entirely possible to run GNU/Linux with only read-only file systems.
In practice you will probably want to mount /proc, /sys, /dev, and possibly /dev/pts for everything to work properly. Note that even some bash commands require a writable /tmp, and some other programs a writable /var.
You can always mount /tmp and /var as a ramdisk.
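For example, a couple of fstab lines along those lines (the sizes are arbitrary, and a real /var normally needs its directory tree recreated at boot):
# /etc/fstab
proc    /proc   proc    defaults                 0 0
sysfs   /sys    sysfs   defaults                 0 0
tmpfs   /tmp    tmpfs   defaults,nosuid,size=64m 0 0
tmpfs   /var    tmpfs   defaults,size=64m        0 0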
Yes and no. No, it doesn't need to be writable if it does almost nothing useful.
Yes, you're running a desktop, so it needs to be writable.
Many processes actually need a writable filesystem, since many system calls can create files; e.g., Unix domain sockets can create filesystem entries.
Also, many applications write into /var and /tmp.
The way to get around this is to mount the filesystem read-only and overlay it with an in-memory filesystem. That way, the paths are writable, but writes go to RAM and any changes are thrown away on reboot.
See: overlayroot
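A minimal sketch of that overlay idea using the in-kernel overlayfs directly (the overlayroot package automates roughly this); the mount points and tmpfs size are assumptions:
# RAM-backed upper layer plus the work directory overlayfs requires
mount -t tmpfs -o size=128m tmpfs /run/overlay
mkdir -p /run/overlay/upper /run/overlay/work /mnt/newroot
# Read-only / as the lower layer, tmpfs as the writable upper layer
mount -t overlay overlay \
    -o lowerdir=/,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
    /mnt/newroot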
No, it's not required. For example, most distributions have a live version of Linux that boots from a CD or USB disk without using a backing hard disk at all.
Also, on normal installations, the root partition is remounted read-only when corruption is detected on the disk. That way the system still comes up, with a read-only root partition.
To analyse the panics further, you need to capture the vmcore and the stack trace of the panic from the dmesg output.

How to make the linux filesystem power failure safe?

I need a robust filesystem on a Debian or Ubuntu Linux system. The problem is that the system can be "shut down" by simply cutting the power (without a real shutdown).
After such a scenario I don't want a filesystem check or corrupted data on the root filesystem. I need to make sure that the system starts again without problems the next time. What could the solution be?
Along with journaling, soft updates can also be a method. Regardless of the method you use, keep in mind that these approaches only guarantee the consistency of the file system; neither of them can guarantee the safety of your data. The disk can also be mounted more quickly in the case of soft updates.
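As one sketch of the journaling side on ext4 (device names are made up): the root stays read-only, and the small writable partition journals file data as well as metadata, at a performance cost:
# /etc/fstab
/dev/sda1  /      ext4  ro,noatime                     0 1
/dev/sda2  /data  ext4  noatime,data=journal,commit=5  0 2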

how to disable writing to /var/log/lastlog

How do I disable writing to the /var/log/lastlog file? On our system we had to use a vfat file system for /var/log, and since vfat doesn't support sparse files, our lastlog file gets really huge. I've tried 'session required pam_lastlog.so noupdate' as the last rule in /etc/pam.d/login, but it doesn't seem to be working.
Having 'LASTLOG_ENAB no' in /etc/login.defs isn't working either.
I'd appreciate any help. I don't get any hits for this on the web.
First, the answer: There are a number of ways to do it.
Make the file immutable with chattr. Zero its size first. Now nobody is allowed to write to it (see the sketch after this list).
ln -s it to /dev/null. Writes will still be allowed, but they go off into never-never land.
Delete it and create a directory with the same name.
Delete it and see if your version of the logging system will recreate it; several of them won't.
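A minimal sketch of the first two options (run as root; note that chattr and symlinks need ext*-style filesystem support, so on a vfat /var/log the delete/replace options may be the only ones that apply):
# Option 1: zero the file, then make it immutable
: > /var/log/lastlog
chattr +i /var/log/lastlog
# Option 2: replace the file with a symlink to /dev/null so writes are discarded
rm /var/log/lastlog
ln -s /dev/null /var/log/lastlog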
Obligatory warnings:
1) Removing lastlog disrupts the audit-trail on your system and may cause various security tools to fail to function properly, or at all.
2) vfat does not support sparse files... It also does not support journaling or individual file ownership. By putting /var/log on vfat you are making it much, much easier for crackers (or even power failures) to potentially alter, damage, or destroy your logfiles. There are lots of better filesystems out there, you should probably find one. But that would be a separate question.

Mandatory file lock on linux

On Linux I can dd a file on my hard drive and delete it in Nautilus while the dd is still going on.
Can Linux enforce a mandatory file lock to protect R/W?
[EDIT] The original question wasn't about Linux file locking capabilities but about a supposed bug in Linux; I'm reproducing it here since it is answered below and others may have the same question.
People keep telling me Linux/Unix is better OS. I am coding Java on Linux now and come across a problem, that I can easily reproduce: I can dd a file on my hard drive and delete it in Nautilus while the dd is still going on. How come linux cannot enforce a mandatory file lock to protect R/W??
To do mandatory locking on Linux, the filesystem must be mounted with the -o mand option, and you must set g-x,g+s permissions on the file. That is, you must disable group execute, and enable setgid. Once this is performed, all access will either block or error with EAGAIN based on the value of O_NONBLOCK on the file descriptor. But beware: "The implementation of mandatory locking in all known versions of Linux is subject to race conditions which render it unreliable... It is therefore inadvisable to rely on mandatory locking." See fcntl(2).
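A quick sketch of that setup (device, mount point and file name are hypothetical):
mount -o remount,mand /mnt/data      # enable mandatory locking on the filesystem
chmod g+s,g-x /mnt/data/shared.db    # setgid without group-execute marks the file for mandatory locks
ls -l /mnt/data/shared.db            # shows a capital 'S' in the group-execute position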
You don't need locking. This is not a bug but a design choice; your assumptions are wrong.
The file system uses reference counting: it marks a file's space as free only when all hard links to the file have been removed and all file descriptors are closed.
This approach allows safe file system operations that Windows, for example, doesn't: operations like delete, move and rename on files that are in use, without needing locking or breaking anything.
Your dd operation is going to succeed despite the file removal, which will actually be deferred till the dd finishes.
http://en.wikipedia.org/wiki/Reference_counting#Disk_operating_systems
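This is easy to see from the shell (the file name is arbitrary):
dd if=/dev/zero of=bigfile bs=1M count=1024 &   # start a long-running write
rm bigfile                                      # unlink the name while dd still has it open
ls -l /proc/$!/fd                               # dd's descriptor still points at "bigfile (deleted)"
wait                                            # dd finishes normally; the space is freed when it closes the file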
[EDIT] My response doesn't make much sense now, as the question was edited by someone else. The original question was about a supposed bug in Linux and not about whether Linux can lock a file:
People keep telling me Linux/Unix is better OS. I am coding Java on Linux now and come across a problem, that I can easily reproduce: I can dd a file on my hard drive and delete it in Nautilus while the dd is still going on. How come linux cannot enforce a mandatory file lock to protect R/W??
Linux and Unix OSes can enforce file locks, but they don't do so by default because of their multiuser design. Try reading the manual pages for flock and fcntl. That might get you started.
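For a taste of the (advisory) locking those man pages describe, the flock(1) utility wraps the same mechanism; the lock file path is made up:
flock -x /var/lock/myjob.lock -c "echo 'holding the lock'; sleep 10" &   # take an exclusive lock
flock -n /var/lock/myjob.lock -c "echo 'got it'" || echo "lock is busy"  # -n fails fast instead of blocking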
