I am in a rather unique predicament.
Let's say that I am on a Linux-based computer. It could be anything, really. The important part is that I have two partitions on my device: one that is around 1 GB and another that is around 15 GB.
The 1 GB partition (mounted on /) is reserved for system use, and the rest (mounted on /home) is for the user (me) to use.
Suppose I am running low on free space in my system partition. However, I want to install some command line utilities (which, of course, install to the system).
In the meantime, I create a folder in /home called stash. More on this later.
So, I download a tool, for example, bash. Bash is a .deb which I end up extracting to /home/stash. Let's assume bash is too big for me to install to the system. That's okay, I can just create a symlink at /bin/bash that points to /home/stash/bin/bash.
However, I'd like to symlink not only /bin/bash but also all of the other directories in the /home/stash folder. Is there a way that I could automate this symlinking process?
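One way to automate it is a short find loop; GNU Stow, which manages exactly this kind of symlink farm, may also be worth a look. A minimal sketch, untested, assuming the layout described above and skipping anything already present under /:
# For every file under /home/stash, create a matching symlink under /
cd /home/stash || exit 1
find . -type f | while read -r f; do
    target="/${f#./}"                      # e.g. ./bin/bash -> /bin/bash
    mkdir -p "$(dirname "$target")"        # make sure the parent directory exists
    [ -e "$target" ] || ln -s "/home/stash/${f#./}" "$target"
done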
I've written a bash script that takes media from my mobile phone (via a WebDAV mount) and from my DSLR's SD card (via a USB connection) and puts it in my ~/Pictures and ~/Video directories.
I'm using rsync with --remove-source-files to move the files to my home directory, then I use find to locate the specific files I need to process, and then I run exiftool on each one to put it where I want (dated directories, sub-directories from tags, etc.). I copy them to one directory and then move them to a similarly structured backup drive which is Samba-mounted.
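To illustrate, a simplified version of such a pipeline might look like this (hypothetical; the real script isn't shown here, and mount points, file types and date formats are placeholders):
# Pull media off the mounted devices, removing the originals as they transfer
rsync -a --remove-source-files /mnt/phone-webdav/DCIM/ ~/Pictures/incoming/
rsync -a --remove-source-files /mnt/dslr-sd/DCIM/ ~/Video/incoming/
# Sort each picture into dated directories based on its EXIF timestamp
find ~/Pictures/incoming -type f -name '*.jpg' -print0 |
    xargs -0 exiftool '-Directory<DateTimeOriginal' -d ~/Pictures/%Y/%Y-%m-%d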
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       6.6Gi       324Mi       253Mi        24Gi        24Gi
Swap:           15Gi       1.9Gi        14Gi
This process starts off fast but slows down quickly and dramatically.
What is the proper way to accomplish this task so that it doesn't use up so much buff/cache, or so that the cache is cleared more often during the process?
I've found many references suggesting nocache, but I have not been able to make that solution work.
I have found that doing sync; echo 3 > /proc/sys/vm/drop_caches before and after the script helps, but depending on how much I'm moving, the cache still fills up.
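For reference, the two approaches just mentioned are normally used along these lines (illustrative only; whether either actually helps depends on the workload, and the rsync paths are placeholders):
# nocache wraps a single command and advises the kernel not to keep its data cached
nocache rsync -a --remove-source-files /mnt/phone-webdav/DCIM/ ~/Pictures/incoming/
# Dropping the page cache by hand (needs root)
sync
echo 3 > /proc/sys/vm/drop_caches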
Here is something that looks promising, but I haven't tried it yet: https://access.redhat.com/solutions/5652631
I'm running Linux Mint 17 and I notice that every so often my computer slows to a crawl. When I look at top I see "/usr/bin/find / -ignore_readdir_race (..." etc. sucking up most of my memory. It runs for a really long time (several hours), and my guess is that it's an automated indexing process for my hard drive.
I'm working on a project that requires me to have over 6 million audio files on a mounted SSD so another guess is that the filesystem manager is trying to index all these files for quick search. Is that the case? Is there any way to turn it off for the SSD?
The locate command reports data collected into its database by a regular cron task (updatedb). You can exclude directories from the database, which makes that task run more quickly. According to updatedb.conf(5):
PRUNEPATHS
A whitespace-separated list of path names of directories which should not be scanned by updatedb(8). Each path name must be exactly in the form in which the directory would be reported by locate(1).
By default, no paths are skipped.
On my Debian machine for instance, /etc/updatedb.conf contains this line:
PRUNEPATHS="/tmp /var/spool /media"
You could modify your /etc/updatedb.conf to add the directories which you want to ignore. Only the top-level directory of a directory tree need be listed; subdirectories are ignored when the parent is ignored.
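For example, assuming the audio SSD from the question is mounted at /mnt/audio-ssd (adjust to the real mount point), that line could become:
PRUNEPATHS="/tmp /var/spool /media /mnt/audio-ssd"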
Further reading:
Tip of the day: Speed up `locate`
How do I get mlocate to only index certain directories?
It's a daily cron job that updates databases used by the locate command. See updatedb(8) if you want to learn more. Having six million audio files will likely cause this process to eat up a lot of CPU as it's trying to index your local filesystems.
If you don't use locate, I'd recommend simply disabling updatedb, something like this:
sudo kill -9 <PID>
sudo chmod -x /etc/cron.daily/mlocate
sudo mv /var/lib/mlocate/mlocate.db /var/lib/mlocate/mlocate.db.bak
If all else fails, just remove the package.
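On Mint (Debian/Ubuntu-based), the package providing updatedb/locate is typically mlocate, so removing it would look something like:
sudo apt-get remove --purge mlocate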
I've got quite a head scratcher here. We have multiple Raspberry Pis in the field, hundreds of kilometers apart. We need to be able to safe(ish)ly upgrade them remotely, as local access can cost up to a few hundred euros.
The raspis run Raspbian; / is on an SD card mounted read-only to prevent corruption when power is cut (usually once a day). The SD cards are cloned from the same base image, but contain manually installed packages and modified files that may differ between devices. The raspis all have a USB flash drive as a more corruption-resistant RW drive, plus a script to format it on boot in case it gets corrupted. They call home over a GPRS connection of varying reliability.
The requirements for the system are as follows:
Easy versioning of config files, scripts and binaries, at least /etc, /root and home, preferably with Git
Efficient up-/downgrade from any version to any other over GPRS -> transfer file deltas only
Possibility to automatically roll back recently applied patch, if connection is no longer working
The root file system cannot be in RW mode while downloading changes; the changes need to be stored locally before being applied to /
The simple approach might be to keep a complete copy of the file system in a remote Git repository, generate a diff file between commits, upload the patch to the field and apply it. However, at the moment the files on the different raspis are not identical. This means that, at least when installing the system, the files would have to be synchronized through something similar to rsync -a.
The procedure should be along the lines of "save the diff between / and the ssh folder to a file on the USB stick, mount / RW, apply the diff from the file, mount / RO". Rsync computes and applies the deltas in one step, so my first question becomes:
1. Does something like rsync exist that can save the file deltas between local and remote and apply them later?
Also, I have never built a system like this, and the draft above is the closest to legit that I can come up with. There are a lot of moving parts here and I'm terrified that something I didn't think of beforehand will cause things to go horribly wrong. The rest of my questions are:
Am I way off base here and is there actually a smarter/safe(r) way to do this?
If not, what kind of best practices should I follow, and what should I be extremely careful about (so as not to brick the devices)?
How do I handle things like installing new programs? Bypass the package manager and install into /opt?
How do I manage permissions/owners (root plus one user for the application logic)? Just run everything as root and hope for the best?
Yes, this is a very broad question. This will not be a direct answer to your questions, but will rather provide guidelines for your research.
One means to prevent file system corruption is to use an overlay file system (e.g., AUFS, UnionFS) where the root file system is mounted read-only and a tmpfs (RAM-based) or flash-based read-write layer is mounted "over" the read-only root. This requires your own init scripts, including use of the pivot_root command. Since nothing critical is mounted RW, the system handles power outages robustly. The gist is that before the pivot_root, the FS looks like:
/      read-only root (typically flash)
/rw    tmpfs overlay
/aufs  AUFS union overlay of /rw over /
and after the pivot_root:
/      union overlay (was /aufs)
/flash read-only root (was /)
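As a minimal sketch of how such an overlay can be assembled (shown with the in-kernel overlayfs rather than AUFS/UnionFS; in practice this runs from an initramfs/init script before the pivot_root, and the mount points are assumed to already exist in the read-only image):
mount -t tmpfs tmpfs /rw
mkdir -p /rw/upper /rw/work          # upper and work dirs live on the tmpfs
mount -t overlay overlay -o lowerdir=/,upperdir=/rw/upper,workdir=/rw/work /aufs
# /aufs now shows the read-only root with all writes diverted to the tmpfs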
Updates to the /flash file system are done by remounting it read-write, doing the update, and remounting read-only. For example,
mount -oremount,rw <flash-device> /flash
cp -p new-some-script /flash/etc/some-script
mount -oremount,ro <flash-device> /flash
You may or may not immediately see the change reflected in /etc depending upon what is in the tmpfs overlay.
You may find yourself making heavy use of the chroot command, especially if you decide to use a package manager. A quick sample:
mount -t proc none /flash/proc
mount -t sysfs none /flash/sys
mount -o bind /dev /flash/dev
mount -o bind /dev/pts /flash/dev/pts
mount -o bind /rw /flash/rw
mount -oremount,rw <flash-device> /flash
chroot /flash
# do commands here to install packages, etc
exit # chroot environment
mount -oremount,ro <flash-device> /flash
Learn to use the patch command. There are also binary patch tools (see "How do I create binary patches?").
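As a starting point for that research (an untested sketch with placeholder paths, not a finished procedure): text patches can be produced with diff and applied with patch, and rsync's batch mode can record file deltas in a single file to be replayed later, which is close to what question 1 asks for.
# On the server: record the differences between the old and new config trees
diff -ruN old-etc/ new-etc/ > etc-update.patch
# On the Pi, with the flash remounted rw: apply them
patch -p1 -d /flash/etc < etc-update.patch
# rsync batch mode: compute deltas against a copy of the Pi's current tree...
rsync -a --only-write-batch=/usb/update.batch new-rootfs/ copy-of-pi-rootfs/
# ...then replay them later on the Pi itself
rsync -a --read-batch=/usb/update.batch /flash/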
For super recovery when all goes wrong, you need hardware support with watchdog timers and the ability to do fail-safe boot from alternate (secondary) root file system.
Expect to spend a significant amount of time and money if you want a bullet-proof product. There are no shortcuts.
I have a Python script running under Linux that generates huge numbers of tiny files into a given directory. However, many Linux filesystems like ext4 have a fixed number of inodes set at creation time, so I want to make sure it's possible to save that many files into that directory before starting. From the command line, you can see this number using df -i /some/directory.
How do you find the number of free inodes on the filesystem that directory lives on, in Python?
This can be done using the statvfs system call. In Python (both 2 and 3), this can be accessed using os.statvfs. The call describes the filesystem containing the file/directory the path specifies.
So to get the number of free inodes, use
import os

os.statvfs('/some/directory').f_favail
Also, it's possible that some percentage of the inodes are reserved for the root user. If the script is running as root and you want to allow it to use the reserved inodes, use f_ffree instead of f_favail.
I have two servers, computer A and computer B, both running Linux. I need to write a program or a shell script which will continuously monitor the contents of my home directory on computer A and, if anything changes, copy the changes to my home directory on computer B, such that both home directories are always the same. (Any changes made to the home directory on computer B can safely be ignored.)
Have you considered exporting /home from computer A to computer B via a network file system, e.g. NFS ?
You could also mount the exported filesystem on B in read-only mode so you won't be able to modify the contents of /home from B if that's desired.
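A minimal example of such an export, assuming a standard NFS setup and placeholder host names and mount points:
# /etc/exports on computer A; run 'exportfs -ra' after editing
/home    computerB(ro,sync,no_subtree_check)
# On computer B
sudo mount -t nfs -o ro computerA:/home /mnt/homeA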
Assuming a reasonably recent Linux kernel (one including inotify - it's been present since 2.6.13), you could use inotify-tools as described here to monitor for changes and call rsync on the files to update computer B. That should do what you're asking for, and allow changes on B that don't propagate to A, as well.
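A minimal sketch of that idea, assuming the inotify-tools package is installed and using placeholder user and host names:
# Re-sync the whole home directory whenever anything inside it changes
while inotifywait -r -e modify,create,delete,move /home/alice; do
    rsync -az --delete /home/alice/ alice@computerB:/home/alice/
done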
You could probably do the same job with incron, which works like cron but based on filesystem events instead of times, but it seems more intended for use with single files.
Use rsync; it will solve your problem. Most distributions have it pre-installed.
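If event-driven syncing is more than you need, even a periodic rsync from cron gets close (a hypothetical crontab entry with placeholder names):
*/5 * * * * rsync -az --delete /home/alice/ alice@computerB:/home/alice/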