Let's assume that we have two devices: sda1 (which holds the root filesystem /) and sda2 (an empty, formatted partition). I have a directory /data on sda1 which is used in real time by hundreds of processes (including some write operations).
Is it possible to mount sda2 as the /data folder, preserving access to all the files, and at the same time "cut out" the /data folder from the sda1 partition (making it part of sda2)? I know that mount has a bind option, but that only lets you make one directory appear at a second location.
Is the only solution to stop all the processes, mount sda2 as e.g. /data2, move all the files to sda2, and then remount sda2 as /data?
Yes - the only way is to mount sda2 at /data2, move the data from sda1, and remount sda2 as /data. Mounting two partitions on the same directory simultaneously is not an option.
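The move itself can be sketched as follows (device names are from the question; the rsync flags are one reasonable choice, run as root after stopping everything that writes to /data):

```shell
# stop everything that writes to /data first
mkdir -p /data2
mount /dev/sda2 /data2

# copy preserving permissions, ownership, timestamps, hard links and ACLs
rsync -aHAX /data/ /data2/

umount /data2
rm -rf /data/*            # reclaim the space on sda1
mount /dev/sda2 /data     # add an /etc/fstab entry to make it permanent
```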
You can do these things, in case you find them helpful:
Mount sda2 on /data while existing processes still work on the files they had open on sda1. When they close the files they will no longer see the files from sda1.
Mount sda2 on /data in a new mount namespace, so that when you list files on /data you see the content from sda2, but the rest of the system still sees sda1. You can use unshare to create a new namespace.
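The namespace variant can be sketched like this (device name is an assumption; requires root):

```shell
# start a shell in a private mount namespace; mounts made inside it
# are invisible to the rest of the system
unshare --mount /bin/bash

# inside the new namespace only:
mount /dev/sda2 /data
ls /data   # shows sda2's contents; other processes still see sda1's /data
```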
What you can't do is cut a directory from one filesystem and paste it on another. The data has to be moved from one place on the disk to another, and that will take time.
Related
I have 4 hard drives mounted in a directory:
/dev/sda1 11T 62M 11T 1% /all-hdds/hdd1
/dev/sdb1 11T 62M 11T 1% /all-hdds/hdd2
/dev/sdc1 11T 62M 11T 1% /all-hdds/hdd3
/dev/sdd1 11T 62M 11T 1% /all-hdds/hdd4
Is it possible to export all-hdds as a single NFS point and mount it on other clients? I tried it, and I can see all the hdd1, hdd2, etc. directories on the client side, but when I create files inside them they don't show up on the host, so I think maybe I'm hitting some sort of limitation?
Let's assume that /all-hdds itself is mounted from /dev/sde1.
When /all-hdds/hdd1 is not mounted, /all-hdds (on sde1) still has a directory /hdd1, which is empty. When you mount sda1, the root of the filesystem on sda1 is mounted onto /all-hdds/hdd1.
But when you export /all-hdds over NFS, the export is confined to the filesystem on /dev/sde1. So if on the client you mount /all-hdds onto e.g. /client-mountpoint and then create a file /client-mountpoint/hdd1/test, what actually happens is that the file /hdd1/test is created on the filesystem mounted at /all-hdds, i.e. stored on /dev/sde1.
Of course, you don't see that file, because it is hidden by the filesystem on sda1, mounted onto /all-hdds/hdd1.
What this means is that you need to export all the filesystems, and explicitly tell the server that you want to export them as a tree.
That entails setting the fsid=0 option on the root of the exported tree, and setting the nohide option on the sub-exports.
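A sketch of what /etc/exports could look like for this layout (paths are from the question; the client specification and exact options depend on your setup):

```
/all-hdds        *(rw,fsid=0,no_subtree_check)
/all-hdds/hdd1   *(rw,nohide,no_subtree_check)
/all-hdds/hdd2   *(rw,nohide,no_subtree_check)
/all-hdds/hdd3   *(rw,nohide,no_subtree_check)
/all-hdds/hdd4   *(rw,nohide,no_subtree_check)
```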
The full guide is here (the installation part is Ubuntu-specific, the export part isn't).
Do note that this means the client mounts yourserver:/ rather than yourserver:/all-hdds - NFSv4 only has one root.
I have a server with the following disk structure:
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 219G 192G 17G 93% /
tmpfs 16G 0 16G 0% /lib/init/rw
udev 16G 124K 16G 1% /dev
tmpfs 16G 0 16G 0% /dev/shm
/dev/sda2 508M 38M 446M 8% /boot
/dev/sdb1 2.7T 130G 2.3T 5% /media/3TB1
I am interested in making backup of the whole server on my local machine. When the time comes I want to be able to restore a new server from my local machine backup. What procedure do you recommend?
I tried rsync, but the indexing took extremely long so I aborted it. Then I used scp, and it is currently working. However, there are lots of symbolic links that weren't transferred to the local machine, and I worry I won't be able to restore the backup later on.
Since your sda isn't very large and a lot of it is used anyway, I'd create a complete backup of the block device. Your sdb, however, is very large and only a small part of it is used, so for that I'd create a file system backup.
Boot your server with an Ubuntu live CD and become root (sudo su -).
Attach your backup medium (I assume it's mounted as /mnt/backup/ in the following).
Create a block device backup of sda: cat /dev/sda > /mnt/backup/sda
Mount your sdb (I assume it's mounted as /media/3TB1/ in the following).
Create a file system backup of sdb: rsync -av /media/3TB1/ /mnt/backup/sdb/
For restoring the backup later:
Boot your server with an Ubuntu live CD and become root (sudo su -).
Attach your backup medium (I assume it's mounted as /mnt/backup/ in the following).
Restore the block device backup of sda: cat /mnt/backup/sda > /dev/sda
Mount your sdb (I assume it's mounted as /media/3TB1/ in the following).
Restore the file system backup of sdb: rsync -av /mnt/backup/sdb/ /media/3TB1/
There are more fancy ways of doing it for sure. But this routine worked for me lots of times.
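One of those fancier ways, if backup space or transfer time is a concern, is to compress the block-device image on the fly; a sketch using the same device and paths as above (block size is an arbitrary choice):

```shell
# back up sda as a compressed image
dd if=/dev/sda bs=4M | gzip -c > /mnt/backup/sda.img.gz

# restore it later
gunzip -c /mnt/backup/sda.img.gz | dd of=/dev/sda bs=4M
```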
A backup of that size will take a long time to copy over the internet in any case, whether you use rsync, cp, dd, etc.; the time taken depends on your internet speed.
In my opinion, rsync is the way to go, but if you're not willing to wait that long for the download to complete (I wouldn't either), I highly suggest backing your disk up on another remote server - unless you don't plan on restoring it later, since uploading would be a pain too (especially on ADSL).
You have a few options:
Ask your data center for disk redundancy.
A cheap and highly unrecommended solution is to back up your most important data on a file sharing web service, e.g. Dropbox (as far as I remember, they had a shell API for many tasks, including uploading files, which can be used for automatic backups).
Wait for the download to finish.
Go with @Alfe's solution, which is pretty neat in my opinion.
I know that if I put a file in /dev/shm, it is stored in the server's RAM.
And if I put it in my home directory, it is stored on NFS.
And I know there is a command that tells whether a given location is on NFS or perhaps in RAM - what is that command?
For example, how can I be sure my home directory is on NFS? I remember that the command prints something like "NFS" in its output.
You can use the df command to show you the directory's mount point:
[mrsam@octopus ~]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/md0 178G 32G 137G 19% /home
So, my current directory is on a filesystem that's mounted on /dev/md0.
Based on the device that the filesystem is mounted on, you can then figure out if it's a local filesystem, an NFS mount, or something else.
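To see the filesystem type directly rather than inferring it from the device, a sketch using df's -T flag (or GNU stat's -f mode):

```shell
# print the filesystem type of the current directory's mount;
# the Type column shows e.g. nfs, nfs4, ext4 or tmpfs
df -T .

# or query just the type name
stat -f -c %T .
```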
I have a backup system that creates directories named after Unix Timestamps, and then creates incremental backups using a hardlink system (--link-dest in rsync), so typically the first backup is very large, and then later backups are fractions as big.
This is my output of my current backups:
root@athos:/media/awesomeness_drive# du -sh lantea_home/*
31G lantea_home/1384197192
17M lantea_home/1384205953
17M lantea_home/1384205979
17M lantea_home/1384206056
17M lantea_home/1384206195
17M lantea_home/1384207349
3.1G lantea_home/1384207678
14M lantea_home/1384208111
14M lantea_home/1384208128
16M lantea_home/1384232401
15G lantea_home/1384275601
43M lantea_home/1384318801
Everything seems correct, however, take for example the last directory, lantea_home/1384318801:
root@athos:/media/awesomeness_drive# du -sh lantea_home/1384318801/
28G lantea_home/1384318801/
I consistently get this behavior, why is the directory considered 28G by the second du command?
Note - the output remains the same with the -P and -L flags.
Hardlinks are real references to the same file (represented by its inode). There is no difference between the "original" file and a hard link pointing to it as well. Both files have the same status, both are then references to this file. Removing one of them lets the other stay intact. Only removing the last hardlink will remove the file at last and free the disk space.
So if you ask du what it sees in one directory only, it does not care that there are hardlinks elsewhere pointing to the same contents. It simply counts all the files' sizes and sums them up. Only hardlinks within the considered directory are not counted more than once. du is that clever (not all programs necessarily need to be).
So in effect, directory A might have a du size of 28G, directory B might have a size of 29G, but together they still only occupy 30G and if you ask du of the size of A and B, you will get that number.
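This is easy to demonstrate with a small sketch (paths and sizes are illustrative):

```shell
mkdir -p /tmp/du-demo/A /tmp/du-demo/B
dd if=/dev/zero of=/tmp/du-demo/A/big bs=1M count=10 2>/dev/null

# a hard link: same inode, no additional disk space
ln /tmp/du-demo/A/big /tmp/du-demo/B/big

du -sh /tmp/du-demo/A    # ~10M
du -sh /tmp/du-demo/B    # ~10M as well, when asked in isolation
du -sh /tmp/du-demo      # still ~10M: du counts the shared inode once

rm -r /tmp/du-demo
```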
And with the -l switch, du counts hardlinked files multiple times, so I can see how big the whole backup is, not just the incremental delta.
I have a VPS slice running CentOS 5.5. I am supposed to have 15 GB of disk space, but df seems to report roughly double my actual disk usage.
When I run du -skh * in / as root I get:
[root@yardvps1 /]# du -skh *
0 aquota.group
0 aquota.user
5.2M bin
4.0K boot
4.0K dev
4.9M etc
2.5G home
12M lib
14M lib64
4.0K media
4.0K mnt
299M opt
0 proc
692K root
23M sbin
4.0K selinux
4.0K srv
0 sys
48K tmp
2.0G usr
121M var
This is consistent with what I have uploaded to the machine, and adds up to about 5 GB.
But when I run df I get:
[root@yardvps1 /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/simfs 15728640 11659048 4069592 75% /
none 262144 4 262140 1% /dev
It shows almost 12 GB already in use.
What is causing this discrepancy, and is there anything I can do about it? I planned the server out based on 15 GB, but now it is basically only letting me have about 7 GB of stuff on it.
Thanks.
The most common cause of this effect is open files that have been deleted.
The kernel will only free the disk blocks of a deleted file if it is not in use at the time of its deletion. Otherwise that is deferred until the file is closed, or the system is rebooted.
A common Unix-world trick to ensure that no temporary files are left around is the following:
A process creates and opens a temporary file
While still holding the open file descriptor, the process unlinks (i.e. deletes) the file
The process reads and writes to the file normally using the file descriptor
The process closes the file descriptor when it's done, and the kernel frees the space
If the process (or the system) terminates unexpectedly, the temporary file is already deleted and no clean-up is necessary.
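The trick above can be sketched in shell (the descriptor number is arbitrary):

```shell
tmp=$(mktemp)        # create a temporary file
exec 3<> "$tmp"      # open it read/write on file descriptor 3
rm "$tmp"            # unlink it; the inode survives while fd 3 is open

echo "still usable" >&3   # reads and writes keep working

exec 3>&-            # closing the descriptor finally frees the blocks
```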
As a bonus, deleting the file reduces the chances of naming collisions when creating temporary files and it also provides an additional layer of obscurity over the running processes - for anyone but the root user, that is.
This behaviour ensures that processes don't have to deal with files that are suddenly pulled from under their feet, and also that processes don't have to consult each other in order to delete a file. It is unexpected behaviour for those coming from Windows systems, though, since there you are not normally allowed to delete a file that is in use.
The lsof command, when run as root, will show all open files and will specifically flag those that have been deleted:
# lsof 2>/dev/null | grep deleted
bootlogd 2024 root 1w REG 9,3 58 917506 /tmp/init.0W2ARi (deleted)
bootlogd 2024 root 2w REG 9,3 58 917506 /tmp/init.0W2ARi (deleted)
Stopping and restarting the guilty processes, or just rebooting the server should solve this issue.
Deleted files could also be held open by the kernel if, for example, it's a mounted filesystem image. In this case unmounting the filesystem or rebooting the server should do the trick.
In your case, judging by the size of the "missing" space I'd look for any references to the file that you used to set up the VPS e.g. the Centos DVD image that you deleted after installing.
Another case which I've come across although it doesn't appear to be your issue is if you mount a partition "on top" of existing files.
If you do so you effectively hide existing files that exist in the directory on the mounted-to partition (the mount point) from the mounted partition.
To fix: stop any processes with open files on the mounted partition, unmount the partition, then find and move or remove any files that now appear in the mount point directory.
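Instead of unmounting, the files hidden under a mount point can also be inspected by bind-mounting the parent filesystem elsewhere; a sketch, assuming the mount point from the question (requires root):

```shell
# make the root filesystem visible a second time, without its sub-mounts
mkdir -p /mnt/rootfs
mount --bind / /mnt/rootfs

# files "hidden" under the real mount point show up here
ls -la /mnt/rootfs/media/3TB1

umount /mnt/rootfs
```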
I had the same trouble with a FreeBSD server. A reboot helped.