Mount /var on ramdisk at boot - Bash Script Issue - linux

I have an embedded device where I need to put /var and /tmp in RAM in order to reduce the number of writes to the drive (a Compact Flash card). I know how to do it for /tmp, since I don't have to recover anything on reboot or shutdown.
But the /var directory holds important data. I have been researching and found the script below, but it doesn't seem to be working.
Here is the script:
# insert this on file 'rc.sys.init'
# after the mount of the root file system
# to create the /var on ramdisk
echo "Create ramdisk........."
#dd if=/dev/zero of=/dev/ram0 bs=1k count=16384
mkfs.ext2 -j -m 0 -q -L ramdisk /dev/ram0
if [ ! -d /mnt/ramdisk ]; then
    mkdir -p /mnt/ramdisk
fi
mount /dev/ram0 /mnt/ramdisk
if [ -L /var ]; then
    tar -xf /vartmp.tar -C /mnt/ramdisk
else
    tar -C / -cf /vartmp.tar var
    cp -a /var /mnt/ramdisk
    rm -rf /var
    ln -s /mnt/ramdisk/var /var
fi
# insert this into file 'halt'
# to stop the ram disk properly on shutdown.
#
if [ -e /vartmp.tar ]; then
    rm -f /vartmp.tar
fi
tar -C /mnt/ramdisk -cf /vartmp.tar var
Is there any problem with these scripts? If not, in which initialization and termination scripts should I include them?

For all who have the same problem I did: I have solved it (kind of).
The two scripts I posted are correct and accomplish the job. What you have to be careful about is where you put them.
In Slackware the first script to run is rc.S. At first I copy-pasted my first script into the middle of it. It definitely should go in there, just not where I put it. You have to see where rc.S first references a particular directory or file under /var; the creation of the ramdisk has to happen before those lines.
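A quick way to spot those lines, assuming Slackware's usual path for the init script:
grep -n '/var' /etc/rc.d/rc.S | head
# the ramdisk setup has to go before the first of these hits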
The shutdown script should be added at the bottom of the rc.6 script (the shutdown script).
I should also point out that although this improves the life expectancy of the drive, it is a little volatile and sometimes reboots randomly, so be careful.

Nice script, but it seems to me that it is volatile for several reasons. First, did you tell the system the maximum ramdisk size? First as a kernel argument (linux /vmlinuz ramdisk_size=204800), then in the rc script with mke2fs -t ext2 /dev/ram1 204800, and maybe use ram1 rather than ram0. Also consider a script for manually saving the ramdisk contents back to /var (cp -a /mnt/ramdisk/var/. /var), and back up the real /var to another directory using tar. Introducing tar compression to reduce data size, though, probably adds lag, latency and instability. That is just how it seems to me.
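For reference, the sizing this comment describes might look roughly like the following; a sketch only, with an illustrative boot entry and a 200 MiB ramdisk:
# on the kernel command line (boot loader entry), cap the ramdisk size in KiB:
#   linux /vmlinuz root=/dev/sda1 ramdisk_size=204800
# then, in the init script, build a filesystem of the matching size (block count in 1 KiB blocks):
mke2fs -t ext2 -q -m 0 /dev/ram1 204800
mkdir -p /mnt/ramdisk
mount /dev/ram1 /mnt/ramdisk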

Related

Load root filesystem from USB device

I'm trying to do a fast reboot into another Linux system. The first step is loading the kernel, which I do with
sudo kexec --append='$(cat /proc/cmdline)' -l new_kernel.img --reuse-cmdline
sudo kexec -e
It works fine, but it loads only the kernel, not the entire system.
How can I mount an *.img file with the OS resources, located on a USB device, as /? Preferably during kernel loading, but mounting it afterwards is also suitable. The *.img format is not mandatory; it can be unpacked beforehand.
As stark said, pivot_root() was the call I was searching for. Commands to make a USB drive located at /dev/sdb1 the root directory:
sudo -s
mkdir /newroot
mount /dev/sdb1 /newroot
cd /newroot
mkdir oldroot
pivot_root . oldroot/
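For completeness, here is what typically follows; a sketch of the usual pivot_root pattern rather than part of the original answer, assuming the new root already contains /proc, /sys and /dev directories and a working /sbin/init:
# the old root is now visible under ./oldroot; set up the virtual
# filesystems on the new root and hand control to its init
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev
exec chroot . /sbin/init <dev/console >dev/console 2>&1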
switch_root() deletes all files in the previous root directory; there are a few other differences as well, and this answer might be useful.

A variety of rsync commands in a cron script to sync Ubuntu & Mac home directories

I have a ThinkPad running Linux (Ubuntu 14.04) on a wired network and a Mac running Yosemite on wireless, in a different subnet. They're both work machines. I also have a 1 TB encrypted external USB Lenovo disk.

I have created the following script, run from cron on the ThinkPad, to sync the hidden folders in /home/greg with the external drive (connected to the ThinkPad), provided it's mounted in the right place. Then it should sync the remaining (non-hidden) content of /home/greg and perhaps selected customised parts of /etc. Once that's done, it should do something similar for the Mac, keeping the hidden files separate but doing a union of the content.

My first rsync is meant to include only the hidden files (.*/) in /home/greg, and the second rsync is meant to grab everything that's not hidden in that directory. The following is a work in progress.
#!/bin/bash
#source
LOCALHOME="/home/greg/"
#target disk
DRIVEBYIDPATH="/dev/disk/by-id"
DRIVEID="disk ID here"
DRIVE=$DRIVEBYIDPATH/$DRIVEID
#mounted target directories
DRIVEMOUNTPOINT="/media/Secure-BU-drive"
THINKPADHIDDENFILES="/TPdot"
MACHIDDENFILES="/MACdot"
BACKUPDIR="/homeBU"
#if test -a $DRIVE ;then echo "-a $?";fi
# Check to see if the drive is showing up in /dev/disk/by-id
function drivePresent {
    if test -e $DRIVE; then
        echo "$DRIVE IS PRESENT!"
        driveMounted
    else
        echo "$DRIVE is NOT PRESENT!"
    fi
}
# Check to see if drive is mounted where expected by rsync and if not mount it
function driveMounted {
    mountpoint -q $DRIVEMOUNTPOINT
    if [[ $? == 0 ]]; then
        syncLocal #make sure local has completed before remote starts
        #temp disabled syncRemote
    else
        echo "drive $DRIVEID is PRESENT but NOT MOUNTED. Mounting $DRIVE on $DRIVEMOUNTPOINT"
        mount $DRIVE $DRIVEMOUNTPOINT
        if [ $? == 0 ]; then
            driveMounted
            #could add a counter + while/if to limit the retries to say 5?
        fi # check mount worked, then re-test until true, at which point the test will follow the other path
    fi
}
# Archive THINKPAD to USB encrypted drive
function syncLocal {
    echo "drive $DRIVEID is PRESENT and MOUNTED on $DRIVEMOUNTPOINT - now do rsync"
    #rsync for all the Linux application specific files (hidden directories)
    rsync -ar --delete --update $LOCALHOME/.* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
    #rsync for all the content (non-hidden directories)
    rsync -ar --delete --exclude-from ./excludeFromRsync.txt $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR
    #rsync for Linux /etc dir (includes some custom scripts and cron jobs)
    #rsync
}
# Sync MAC to USB encrypted drive
function syncRemote { # Sync Mac to USB encrypted drive
    echo "drive $DRIVEID is PRESENT and MOUNTED on $DRIVEMOUNTPOINT - now do rsync"
    #rsync for all the Mac application specific files (hidden directories)
    rsync -h --progress --stats -r -tgo -p -l -D --update /home/greg/ /media/Secure-BU-drive/
    #rsync for all the content (non-hidden directories)
    rsync -av --delete --exclude-from ./excludeFromRsync.txt $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR
    #rsync for Mac /etc dir (includes some custom scripts and cron jobs)
    #rsync (placeholder, not written yet)
}
#This is the program starting
drivePresent
The content of the exclude file mentioned in the second rsync in syncLocal is as follows (NB: syncRemote is disabled for the moment):
.cache/
/Downloads/
.macromedia/
.kde/cache-North/
.kde/socket-North/
.kde/tmp-North/
.recently-used
.local/share/trash
**/*tmp*/
**/*cache*/
**/*Cache*/
**~
/mnt/*/**
/media/*/**
**/lost+found*/
/var/**
/proc/**
/dev/**
/sys/**
**/*Trash*/
**/*trash*/
**/.gvfs/
/Documents/RTC*
.*
My problem is that the first local rsync, which is meant to capture ONLY the /home/greg/.* files, seems to have captured everything, or has possibly failed silently and allowed the second local rsync to run without excluding the /home/greg/.* files.
I know I've added a load of possibly irrelevant context, but I thought it might help set expectations for the related rsyncs. Sorry if I've gone overboard.
Thanks in advance,
Greg
You have to be very careful with .*, as it will pull in . and ... So in your first rsync line:
rsync -ar --delete --update $LOCALHOME/.* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
the shell expands .*, so rsync sees . and .. and it will go fully recursive on those two!
I wonder if this might help: --exclude . --exclude .. And I'm sure you already know about using -vn to help you debug rsync issues.
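Two possible ways around the expansion issue; a sketch only, untested against your layout, so try it with -n first:
# let the shell skip . and .. (misses names starting with two dots, which are rare):
rsync -ar --delete --update $LOCALHOME/.[!.]* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
# or let rsync do the selection itself: copy only top-level dot entries and their contents
rsync -ar --delete --update --include='.*' --include='.*/**' --exclude='*' $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES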

Using Optware packages and startup scripts on dd-wrt router

I'm trying to run a Mumble server (umurmur) on my dd-wrt router (Buffalo WZR-HP-AG300H). I flashed one of the recent community versions of dd-wrt onto the device (SVN rev. 23320); it has an Atheros CPU inside.
After that I mounted a USB pendrive into the filesystem using these guides (Guide 1, Guide 2) and created writable directories. Here is my startup script, saved to NVRAM (via the web GUI).
EDIT: the USB pendrive should be partitioned before using it with dd-wrt.
#!/bin/sh
sleep 5
insmod mbcache
insmod jbd
insmod ext3
mkdir '/mnt/part1'
mkdir '/mnt/part2'
mount -t ext3 -o noatime /dev/sda5 /mnt/part1 # /dev/sda5 -> partition on USB pendrive
mount -t ext3 -o noatime /dev/sda7 /mnt/part2 # /dev/sda7 -> partition on USB pendrive
swapon /dev/sda6 # /dev/sda6 -> partition on USB pendrive
sleep 2
if [ -f /mnt/part1/optware.enable ]; then
    #mount -o bind /mnt/part2 /mnt/part1/root
    mount -o bind /mnt/part1 /jffs
    mount -o bind /mnt/part1/etc /etc
    mount -o bind /mnt/part1/opt /opt
    mount -o bind /mnt/part1/root /tmp/root
else
    exit
fi
if [ -d /opt/usr ]; then
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib:/usr/lib:/opt/lib:/opt/usr/lib:/jffs/usr/lib:/jffs/usr/local/lib
    export PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/jffs/bin:/opt/bin:/opt/sbin:/opt/usr/bin:/opt/usr/sbin
    export IPKG_INSTROOT=/opt
else
    exit
fi
The script works well and I can use opkg to install packages. I can also run umurmur manually, but I'm struggling to make umurmur start automatically. I noticed that the umurmur startup script placed in /opt/etc/init.d/ requires arguments like start and stop, but it seems such scripts are called without any arguments.
Another way, described here, did not work either.
Does anyone have a working solution for problems like these? Please help!
Optware runs on Broadcom routers only. Yours has an Atheros chipset.
Taken from this page: Link
It's unclear if the page you referred to has changed, and indeed my setup is fairly different from yours, but to get scripts working on startup I did the following:
mkdir -p /jffs/etc/config
copy the script into the /jffs/etc/config directory, renaming it to end with .startup
chmod 755 /jffs/etc/config/scriptname.startup
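If the only issue is that dd-wrt runs these scripts without arguments, a thin wrapper saved that way may be enough; a sketch, where the init script name (S70umurmur) is a guess you would need to adjust to whatever Optware installed:
#!/bin/sh
# /jffs/etc/config/umurmur.startup - hypothetical wrapper; dd-wrt calls it with no arguments
[ -x /opt/etc/init.d/S70umurmur ] && /opt/etc/init.d/S70umurmur start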

What happens if you mount to a non-empty mount point with fuse?

I am new to fuse. When I try to run a FUSE client program I get this error:
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
I understand that a mountpoint is the directory where you logically attach the FUSE filesystem. What will happen if I mount at this location anyway? What are the dangers? Is it just that the directory will be overwritten? Basically: what happens if you mount onto a non-empty directory?
You need to make sure that the files on the device mounted by FUSE will not have the same paths and file names as files that already exist in the non-empty mountpoint. Otherwise this would lead to confusion. If you are sure, pass -o nonempty to the mount command.
You can try out what happens with the following commands, without destroying anything (Linux rocks!):
# create a 10 MB file
dd if=/dev/zero of=partition bs=1024 count=10240
# create a loop device from that file
sudo losetup /dev/loop0 ./partition
# create a filesystem on it
sudo mkfs.ext3 /dev/loop0
# mount the partition to a temporary folder and create a file
mkdir test
sudo mount /dev/loop0 test
echo "bar" | sudo tee test/foo
# unmount the device
sudo umount /dev/loop0
# create the file again
echo "bar2" > test/foo
# now mount the device (having a file with the same name on it)
# and see what happens
sudo mount /dev/loop0 test
Just add -o nonempty on the command line, like this:
s3fs -o nonempty <bucket-name> </mount/point/>
Apparently nothing happens, it fails in a non-destructive way and gives you a warning.
I've had this happen as well very recently. One way you can solve this is by moving all the files in the non-empty mount point to somewhere else, e.g.:
mv /nonEmptyMountPoint/* ~/Desktop/mountPointDump/
This way your mount point is now empty, and your mount command will work.
For me the error message goes away if I unmount the old mount before mounting it again:
fusermount -u /mnt/point
If it's not already mounted you get a non-critical error:
$ fusermount -u /mnt/point
fusermount: entry for /mnt/point not found in /etc/mtab
So in my script I just unmount it before mounting it.
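In script form that pattern might look like this; a sketch only, where sshfs and the paths merely stand in for whatever FUSE filesystem you actually mount:
MNT=/mnt/point
fusermount -u "$MNT" 2>/dev/null   # harmless if it was not mounted; the mtab warning is suppressed
sshfs user@host:/data "$MNT"       # hypothetical FUSE mount; replace with your own command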
Just set "nonempty" as an optional value in your /etc/fstab
For example:
## mount a bucket
/usr/local/bin/s3fs#{your_bucket_name} {local_mounted_dir} fuse _netdev,url={your_bucket_endpoint_url},allow_other,nonempty 0 0
## to mount a sub-directory of a bucket, do it like this:
/usr/local/bin/s3fs#{your_bucket_name}:{sub_dir} {local_mounted_dir} fuse _netdev,url={your_bucket_endpoint_url},allow_other,nonempty 0 0
Or force the unmount with a lazy detach (-l):
sudo umount -l ${HOME}/mount_dir

How to Free Inode Usage?

I have a disk drive where the inode usage is 100% (using df -i command).
However, even after deleting a substantial number of files, the usage remains 100%.
What's the correct way to free the inodes, then?
How is it possible that a disk drive with less disk space usage can have higher inode usage than a disk drive with higher disk space usage?
If I zip up a lot of files, would that reduce the used inode count?
If you are very unlucky, you have used about 100% of all inodes and can't even create the script.
You can check this with df -ih.
Then this bash command may help you:
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
And yes, this will take time, but you can locate the directory with the most files.
It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.
An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
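A quick way to see this for yourself; a tiny throwaway demonstration:
touch a; ln a b      # two directory entries, one inode
ls -li a b           # same inode number, link count 2 on both lines
rm a                 # the inode stays allocated as long as b (or an open file descriptor) refers to it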
Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.
If you do that and you still have a problem, let us know.
By the way, if you're looking for the directories that contain lots of files, this script may help:
#!/bin/bash
# count_em - count files in all subdirectories under current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
My situation was that I was out of inodes and I had already deleted just about everything I could.
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 11 100% /
I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the new package because I was out of inodes, so I was stuck.
I ended up deleting a few old Linux kernels by hand to free up about 10,000 inodes:
$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*
This was enough to let me install the missing package and fix apt:
$ sudo apt-get install linux-headers-3.2.0-76-generic-pae
and then remove the rest of the old Linux kernels with apt:
$ sudo apt-get autoremove
Things are much better now:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 434719 54% /
My solution:
Try to find out whether this is an inode problem with:
df -ih
Try to find the root folders with a large inode count:
for i in /*; do echo $i; find $i |wc -l; done
Try to find specific folders:
for i in /src/*; do echo $i; find $i |wc -l; done
If the culprits are Linux headers, try to remove the oldest ones with:
sudo apt-get autoremove linux-headers-3.13.0-24
Personally, I moved them to a mounted folder (because for me the last command failed) and installed the latest with:
sudo apt-get autoremove -f
This solved my problem.
I had the same problem; I fixed it by removing the PHP sessions directory:
rm -rf /var/lib/php/sessions/
It may be under /var/lib/php5 if you are using an older PHP version.
Recreate it with the following permissions:
mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/
The default permission for the directory on Debian is drwx-wx-wt (1733).
We experienced this on a HostGator account (they place inode limits on all their hosting) following a spam attack. It left vast numbers of queue records in /root/.cpanel/comet. If this happens and you find you have no free inodes, you can run this cPanel utility through the shell:
/usr/local/cpanel/bin/purge_dead_comet_files
You can use rsync to delete a large number of files:
rsync -a --delete blanktest/ test/
Create a blanktest folder with 0 files in it, and the command will sync it over your test folder containing the large number of files (I have deleted nearly 5M files using this method).
Thanks to http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
Late answer:
In my case, it was my session files under /var/lib/php/sessions that were using the inodes.
I was even unable to open my crontab or make a new directory, let alone trigger the deletion operation.
Since I use PHP, we have this guide, from which I copied the code in example 1 and set up a cron job to execute it:
<?php
// Note: This script should be executed by the same user as the web server process.
// Need active session to initialize session data storage access.
session_start();
// Executes GC immediately
session_gc();
// Clean up session ID created by session_gc()
session_destroy();
?>
If you're wondering how I managed to open my crontab: well, I deleted some sessions manually through the CLI.
Hope this helps!
First, get the inode usage:
df -i
The next step is to find those files. For that, we can use a small script that will list the directories and the number of files in them:
for i in /*; do echo $i; find $i |wc -l; done
From the output, you can see which directory uses a large number of files; then repeat the script for that directory, like below. Repeat until you find the suspect directory.
for i in /home/*; do echo $i; find $i |wc -l; done
When you find the suspect directory with a large number of unwanted files, just delete the unwanted files in that directory to free up some inodes:
rm -rf /home/bad_user/directory_with_lots_of_empty_files
You have successfully solved the problem. Check the inode usage again with df -i and you should see the difference.
df -i
eAccelerator could be causing the problem, since it compiles PHP into blocks... I've had this problem with an Amazon AWS server on a site with heavy load. Free up inodes by deleting the eAccelerator cache in /var/cache/eaccelerator if you continue to have issues:
rm -rf /var/cache/eaccelerator/*
(or whatever your cache dir is)
We faced a similar issue recently. If a process refers to a deleted file, the inode is not released, so you need to check lsof /; killing or restarting the process will release the inodes.
Correct me if I am wrong here.
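A quick way to check for that; a sketch using standard lsof options:
# list open files whose on-disk link count is zero (deleted but still held open):
sudo lsof +L1
# or, more loosely, grep for the "(deleted)" marker on a given filesystem:
sudo lsof / | grep -i deleted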
As said before, a filesystem may run out of inodes if there are a lot of small files. I have provided some means to find the directories that contain the most files here.
In one of the above answers it was suggested that sessions were the cause of running out of inodes, and in our case that is exactly what it was. To add to that answer, though, I would suggest checking the php.ini file and ensuring session.gc_probability = 1, session.gc_divisor = 1000 and session.gc_maxlifetime = 1440. In our case session.gc_probability was equal to 0, which caused this issue.
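A quick way to check the effective values; a sketch, assuming the PHP CLI reads the same configuration as your web server (it may not):
php -i | grep -E 'session\.gc_(probability|divisor|maxlifetime)'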
This article saved my day:
https://bewilderedoctothorpe.net/2018/12/21/out-of-inodes/
find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n
On a Raspberry Pi I had a problem with the /var/cache/fontconfig directory containing a large number of files. Removing it took more than an hour. And of course rm -rf *.cache* raised an "Argument list too long" error. I used the one below instead:
find . -name '*.cache*' | xargs rm -f
You can see this info with:
for i in /var/run/*;do echo -n "$i "; find $i| wc -l;done | column -t
For those who use Docker and end up here:
When df -i says 100% inode use, just run
docker rmi $(docker images -q)
It will leave your created containers (running or exited) but will remove all images that aren't referenced any more, freeing a whole bunch of inodes; I went from 100% back to 18%!
It might also be worth mentioning that I make heavy use of CI/CD, with a Docker runner set up on this machine.
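On reasonably recent Docker versions there is a broader cleanup command you can use instead of the rmi call above; note this is an alternative, not what the answer used:
docker system prune --all   # prompts, then removes stopped containers, unused networks, build cache and all unreferenced images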
It could be the /tmp folder (where temporary files are stored, e.g. from yarn and npm script execution, especially if you are starting a lot of Node scripts). Normally you just have to reboot your device or server, and it will delete all the temporary files you don't need. For me, usage went from 100% down to 23%!
Many answers to this one so far, and all of the above seem concrete. I think you'll be safe by using stat as you go along, but depending on the OS, you may get some inode errors creeping up on you. So implementing your own stat call functionality using 64-bit values to avoid any overflow issues seems fairly compatible.
Run the sudo apt-get autoremove command.
In some cases it works. If previously installed but unused header packages exist, they will be cleaned up.
If you use Docker, remove all images. They use a lot of space.
Stop all containers:
docker stop $(docker ps -a -q)
Delete all containers:
docker rm $(docker ps -a -q)
Delete all images:
docker rmi $(docker images -q)
Works for me.
