I am going to bring up a new embedded Linux system soon, kernel version 3.2. The main root filesystem needs to be writable as we do software image updates, and we do want to keep the logs under /var/log persisted for analysis after reboots.
One technique I've seen used is to mount /tmp as tmpfs, which makes sense, as we don't need anything in /tmp to be maintained across reboots. What other directories in a Linux system see a lot of writes but do not need to be preserved across reboots? So far I've seen:
/tmp
/var/run
Can anyone suggest any other candidates for tmpfs?
Yes,
/tmp
/var/run
And
/var/tmp
too. Yes, /var/tmp is supposed to preserve temporary files between system reboots, but in practice my /var/tmp/ is always empty. It won't hurt to put that on tmpfs -- I've been doing that for more than 10 years and so far so good.
Also, I always put /run/lock on tmpfs, and so far so good as well. If you have udev, it will put /dev on devtmpfs. My system also automatically puts /run and /run/shm on tmpfs. Depending on your system, you may consider doing that as well.
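For reference, here is a minimal sketch of /etc/fstab entries for such a setup; the size= limits are illustrative assumptions and should be tuned to your board's RAM and workload:

# volatile directories on tmpfs (sizes are examples, adjust to taste)
tmpfs  /tmp       tmpfs  defaults,noatime,size=64m  0  0
tmpfs  /var/run   tmpfs  defaults,noatime,size=8m   0  0
tmpfs  /var/tmp   tmpfs  defaults,noatime,size=16m  0  0
tmpfs  /run/lock  tmpfs  defaults,noatime,size=4m   0  0

Capping the size matters on an embedded system, since an unbounded tmpfs can otherwise eat into RAM under a runaway writer.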
HTH
When I run df -h on my instance I get this data:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.7G 0 7.7G 0% /dev
tmpfs 7.7G 0 7.7G 0% /dev/shm
tmpfs 7.7G 408K 7.7G 1% /run
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
/dev/nvme0n1p1 32G 24G 8.5G 74% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
but when I run sudo du -sh / I get:
11G /
So df -h reports 24G used on /, but du -sh on the same directory reports only 11G.
I'm trying to free up some space on my instance and can't find the files that account for the difference.
What am I missing?
Is df -h really giving fake data?
This question comes up quite often. The filesystem allocates some of its disk blocks to record its own bookkeeping data. This data is referred to as metadata and is not visible to most user-level programs (such as du). Examples of metadata are inodes, disk maps, indirect blocks, and superblocks.
The du command is a user-level program that isn't aware of filesystem metadata, while df looks at the filesystem disk allocation maps and is aware of file system metadata. df obtains true filesystem statistics, whereas du sees only a partial picture.
There are several reasons why the disk space reported by du and df can differ.
Perhaps the most common is deleted files. A file that has been deleted may still be held open by at least one process. The directory entry for such a file is removed, which makes the file inaccessible, so du, which only counts files it can reach, does not take it into account and comes up with a smaller value. As long as some process still holds the deleted file open, however, its blocks are not released, so df, which works at the filesystem level, correctly displays them as occupied. You can find out whether this is the case by running the following:
lsof | grep '(deleted)'
The fix for this issue would be to restart the services that still have those deleted files open.
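If you want to see the effect for yourself, here is a minimal sketch that reproduces it; the file name is arbitrary, and you should pick a filesystem with a few hundred megabytes to spare:

dd if=/dev/zero of=/tmp/bigfile bs=1M count=500    # create a 500M file
tail -f /tmp/bigfile > /dev/null & HOLDER=$!       # keep it open in a background process
rm /tmp/bigfile                                    # delete it while it is still open
df -h /tmp                                         # still counts the 500M as used
du -sh /tmp                                        # no longer sees the file
lsof | grep '(deleted)'                            # the held-open file shows up here
kill $HOLDER                                       # once the holder exits, df drops again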
The second most common cause is a partition or drive mounted on top of a directory that already contains data. For example, if you have a directory under / called /backup which contains data, and you then mount a new, empty drive on top of that directory, the space used by the hidden files still shows up in df even though du finds no files.
To determine whether any files or directories are hidden under an active mount point, you can bind-mount your / filesystem somewhere else, which lets you inspect what lies underneath the other mount points. Note, this is recommended only for experienced system administrators.
mkdir /tmp/tmpmnt
mount -o bind / /tmp/tmpmnt
du /tmp/tmpmnt
After you have confirmed that this is the issue, the bind mount can be removed by running:
umount /tmp/tmpmnt/
rmdir /tmp/tmpmnt
Another possible cause might be filesystem corruption. If this is suspected, make sure you have good backups, then, at your convenience, unmount the filesystem and run fsck.
Again, this should be done by experienced system administrators.
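As a rough outline (adjust the device name to your system; for the root filesystem you would do this from a rescue or live environment):

umount /dev/sda3    # the filesystem must not be mounted while it is checked
fsck -f /dev/sda3   # -f forces a full check even if the fs is marked clean
                    # (for an XFS filesystem, use xfs_repair instead)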
You can also check the calculation by running:
strace -e statfs df /
This will give you output similar to:
statfs("/", {f_type=XFS_SB_MAGIC, f_bsize=4096, f_blocks=20968699, f_bfree=17420469,
f_bavail=17420469, f_files=41942464, f_ffree=41509188, f_fsid={val=[64769, 0]},
f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 83874796 14192920 69681876 17% /
+++ exited with 0 +++
Notice the two fields f_bfree and f_bavail? The first is the number of free blocks in the filesystem, the second the number of free blocks available to an unprivileged user (on many filesystems a small percentage is reserved for root). The Used column is merely a calculation based on these fields.
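You can reproduce df's arithmetic yourself with stat -f, which exposes the same statfs fields (this sketch assumes GNU coreutils and bash):

read blocks bfree bavail bsize < <(stat -f --format='%b %f %a %S' /)
echo "Used:      $(( (blocks - bfree) * bsize / 1024 )) 1K-blocks"
echo "Available: $(( bavail * bsize / 1024 )) 1K-blocks"

The Used value comes from f_blocks - f_bfree, while Available comes from f_bavail, which is why the two columns do not always add up to the filesystem size.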
Hope this makes the picture clearer. Let me know if you still have any doubts.
I have a server with the following disk structure:
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 219G 192G 17G 93% /
tmpfs 16G 0 16G 0% /lib/init/rw
udev 16G 124K 16G 1% /dev
tmpfs 16G 0 16G 0% /dev/shm
/dev/sda2 508M 38M 446M 8% /boot
/dev/sdb1 2.7T 130G 2.3T 5% /media/3TB1
I am interested in making a backup of the whole server on my local machine. When the time comes, I want to be able to restore a new server from the backup on my local machine. What procedure do you recommend?
I tried rsync, but the indexing took extremely long, so I aborted it. Then I used scp, and that is currently running. However, lots of symbolic links weren't transferred to the local machine, and I worry I won't be able to restore from it later on.
Since your sda isn't very large and most of it is used anyway, I'd create a complete backup of the block device. Your sdb, however, is very large and only a small part of it is used. Of that I'd create a file-system-level backup.
Boot your server with a Ubuntu live CD and become root (sudo su -).
Attach your backup medium (I assume it's mounted as /mnt/backup/ in the following).
Create a block device backup of sda: cat /dev/sda > /mnt/backup/sda
Mount your sdb (I assume it's mounted as /media/3TB1/ in the following).
Create a file system backup of sdb: rsync -av /media/3TB1/ /mnt/backup/sdb/
For restoring the backup later:
Boot your server with a Ubuntu live CD and become root (sudo su -).
Attach your backup medium (I assume it's mounted as /mnt/backup/ in the following).
Restore the block device backup of sda: cat /mnt/backup/sda > /dev/sda
Mount your sdb (I assume it's mounted as /media/3TB1/ in the following).
Restore the file system backup of sdb: rsync -av /mnt/backup/sdb/ /media/3TB1/
There are more fancy ways of doing it for sure. But this routine worked for me lots of times.
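Put together, a minimal sketch of the backup side might look like this, using the device names and mount points assumed above:

# from the live CD, as root, with the backup medium mounted on /mnt/backup
cat /dev/sda > /mnt/backup/sda             # raw image of the whole small disk
mount /dev/sdb1 /media/3TB1                # mount the large data disk
rsync -av /media/3TB1/ /mnt/backup/sdb/    # file-level copy of its contents

If space on the backup medium is tight, you could also pipe the image through gzip (cat /dev/sda | gzip > /mnt/backup/sda.gz), at the cost of a slower restore.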
A backup of that size will take a long time to copy over the internet in any case, whether you use rsync, cp, dd, etc.; the time taken depends on your connection speed.
In my opinion, rsync is the way to go, but if you're not willing to wait that long for the download to complete (I wouldn't be either), I highly suggest backing your disk up to another remote server, unless you don't plan on restoring it later, since uploading would be a pain too (especially over ADSL).
You have a few options:
Ask your data center for disk redundancy.
A cheap and highly unrecommended solution is to back up your most important data to a file-sharing web service, e.g. Dropbox (as far as I remember they had a shell API for many tasks, including uploading files, which can be used for automatic backups).
Wait for the download to finish.
Go with @Alfe's solution, which is pretty neat in my opinion.
Data deleted from /mnt directory after stopping and starting an EC2 instance
[root@localhost opt]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.7G 1.4G 7.8G 15% /
none 1.9G 0 1.9G 0% /dev/shm
/dev/xvdb 394G 199M 374G 1% /mnt
I placed my data in /mnt. I stopped the instance yesterday.
After starting the instance today, I didn't find any of the data in /mnt.
The data I have in /opt is still there.
How can I recover the data from /mnt?
If /mnt is a temporary mount point, how can I make use of all that space?
On EC2, the /mnt directory is mounted on ephemeral (instance store) storage.
After a reboot or instance stop/start, all data on it is lost.
Please refer to this post.
It is a common misconception that a reboot/restart will wipe the ephemeral storage -- this isn't true.
You can try it yourself and see.
What will wipe it is a stop/start: that actually deprovisions your VM and then moves it to another host machine, which will have wiped ephemeral drive(s), and then starts it up with (at least) your root EBS volume attached. Stop/start and reboot are often conflated, but they are very different things here.
/mnt should really just be used for ephemeral data storage that is not critical if the instance needs to be restarted. It is actually well suited for things such as a local on-disk cache, temporary data storage, etc., as this ephemeral storage will oftentimes perform better from an I/O standpoint than, say, an EBS volume. Just understand that you should only place non-critical data there.
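If you want to check which devices on a given instance are instance-store (ephemeral), you can query the instance metadata service from within the instance:

curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
# entries named ephemeral0, ephemeral1, ... are instance-store volumes;
# their contents do not survive a stop/start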
I read an article about the /run directory on Linux systems.
http://article.gmane.org/gmane.linux.redhat.fedora.devel/146976
This article states that many Linux distributions have agreed that the /run directory is the only clean solution for early-runtime-dir problem. Previously, they put early runtime data in /dev/.XXX or /var/run. But they are now adopting the /run directory for storing early runtime data.
My question: How do they make this change? To be specific, do they change the code in kernel or boot or initscripts?
Taking this article (http://article.gmane.org/gmane.linux.redhat.fedora.devel/146976) as an example, what changes are needed to implement this?
The run directory has no special meaning for the kernel itself, be it /run or /var/run. From the kernel's point of view it is just a regular directory. For performance reasons it has, for some time now, usually been mounted as a tmpfs filesystem. The Fedora distribution creates a symbolic link /var/run pointing to /run for backward compatibility:
mount:
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
ls -l /var:
lrwxrwxrwx. 1 root root 6 Jun 8 15:33 run -> ../run
So all 'old' programs and scripts still work. But as the convention has changed, packages are being updated to reflect it, so over time the need for the /var/run link will disappear.
To implement this move to /run, the init scripts are changed.
/run is created and mounted (usually as a tmpfs filesystem) by the init system of your Linux distribution, for example systemd or OpenRC. The init system runs before any other program.
The kernel doesn't have anything to do with it.
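Conceptually, the change amounts to a few extra lines early in boot; a minimal sketch (not taken from any particular distribution's scripts) would be:

# mount a fresh tmpfs on /run before any daemon starts
mount -t tmpfs -o mode=755,nosuid,nodev tmpfs /run
mkdir -p /run/lock
# backward-compatibility symlinks for software still using the old paths
ln -sfn ../run /var/run
ln -sfn ../run/lock /var/lock

After that, daemons started later in the boot sequence can write their PID files and sockets to /run, and legacy software that still uses /var/run ends up in the same place via the symlink.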
I am relatively new to Linux. In one of our projects, we use Amazon EC2 instances for processing some files. We upload the files to S3 after processing. The EC2 instance is booted from an existing AMI.
Recently I got a 'no space left on device' error, so the processing of files halted. I cleaned up some older files and the processing continued.
Now when I look at available space using df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 5.7G 3.7G 61% /
none 3.7G 0 3.7G 0% /dev/shm
/dev/xvdb 414G 199M 393G 1% /mnt
/dev/xvdc 414G 199M 393G 1% /data
I can see my files are affecting only /dev/xvda1.
I have following queries
What is the use of the other partitions when I can see my files only affect /dev/xvda1?
It looks like we are only using 10 GB of space effectively and the rest is being wasted. How can I use the other space? Can I move some disk space to /dev/xvda1, or store files directly in the other areas?
As you can see from the output of df -h, there are two large partitions mounted on /mnt and /data respectively. I suggest that you use those partitions by processing the files in one of those directories. If you cannot move where the processing happens for some reason, you can remount the partitions in the appropriate place.
If for example your files are processed in the directory /var/mydir and you cannot change that, do the following (as root):
umount /mnt
mount /dev/xvdb /var/mydir
You can of course use the other partition instead if you prefer.
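To make such a mount permanent, you would also add a line to /etc/fstab; a minimal sketch, with auto as a placeholder for whatever filesystem /dev/xvdb actually carries:

/dev/xvdb  /var/mydir  auto  defaults,nofail  0  2

One caveat worth repeating from the earlier discussion: on EC2, devices like /dev/xvdb are often instance-store (ephemeral) volumes, so anything written there will not survive an instance stop/start.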