Avoid delay after Linux mount

I'm executing these commands:
mkfs.msdos -C path/to/file/some_file.img 1440
sudo mount -o loop path/to/file/some_file.img /path/to/folder/mounted_folder
After those commands, I copy files to mounted_folder and upload some_file.img to an NFS server.
I do all of this in Ansible, and it happens so fast that some_file.img is uploaded empty.
When I added a 5-minute delay, some_file.img was fine (with content in it).
I don't want to rely on a delay. Is there a command I can use to ensure that some_file.img is uploaded with its content (and not empty)?

Unmount the directory when you're finished to ensure everything is flushed to disk.
sudo umount /path/to/folder/mounted_folder
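For example, the whole sequence could be ordered like this (a sketch using the paths from the question; the copy step is a placeholder):
mkfs.msdos -C path/to/file/some_file.img 1440
sudo mount -o loop path/to/file/some_file.img /path/to/folder/mounted_folder
cp files/* /path/to/folder/mounted_folder/   # whatever needs to go into the image
sudo umount /path/to/folder/mounted_folder   # returns only after all writes reach the image
# only now upload some_file.img to the NFS server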

Related

SCP command altering filesize of transferred data

Context:
I am transferring a backup directory from Server A to Server B (both RHEL).
Directory size (to be transferred) on Server A: 48GB
Available space on Server B: 154GB
Command I'm using on Server A (user: root):
scp -r -C <name-of-backup-dir> user@serverB:/path
Unexpected Behaviour:
The backup directory appears on the target Server B under /path, occupying all available 154GB of space.
Meanwhile the scp run on the source Server A terminates with an "Insufficient space" message for the remaining files.
Question/Help needed:
What am I doing wrong here?
What changes do I need to make to the SCP command to achieve the result?
One thing I can think of is that the block sizes are different.
If the block size on the destination machine is bigger, small files will occupy more space. For example, on a filesystem with 4 KiB blocks, a 100-byte file still occupies a full 4 KiB on disk.
To find out the block size:
sudo tune2fs -l /dev/sda1 | grep -i 'block size'
# Replace /dev/sda1 with your device (find it with df)
If that is indeed the case, you can recreate the destination file system with the same block size as the source file system.
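To verify this before reformatting, you can compare the apparent size of the data with its on-disk usage on both servers (a hedged sketch; the path is a placeholder):
du -sh --apparent-size /path/backup-dir   # logical size of the files
du -sh /path/backup-dir                   # space actually consumed on this filesystem
stat -f -c 'block size: %S' /path         # filesystem block size, no root needed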

Rsync script does not continue after sync

Problem: I have a simple custom backup script that is set to run (via udev) whenever my backup drive is detected. All is well until about halfway through the script, where it seems to hang after the rsync command. My code is below:
#!/bin/bash
#Mount the Backup Drive
wall "backup is starting"
mount -U f91b8373-6349-4de3-86e1-6a2557f2c3f7 /media/backupdrive
#Get updated package-list
mv /media/backupdrive/package-selections /media/backupdrive/package-selections.old
dpkg --get-selections >/media/backupdrive/package-selections
wall "pacakge list updated"
#Run Backup
mv /home/user/backup/rsync.log /home/user/backup/rsync.log.old
rsync --log-file=/media/backupdrive/backup/rsync.log -ravzX --delete --exclude /var/tmp --exclude /var/lock --exclude /var/run /home /etc /var /usr /media/backupdrive/backup
wall "rsync complete"
#Sync changes to disk and unmount
sync
cp /media/backupdrive/backup/rsync.log /home/user/backup/rsync.log
umount /media/backupdrive
wall "Backup is complete, the logfile can be viewed at /home/user/backup/rsync.log"
Question: What am I doing wrong here? Why does the script not continue after the rsync?
PS - The wall commands are not important to the script; I placed them at various points to troubleshoot. Yes, I'm new to this :)
Edit - I have tried removing the "z" option, as was mentioned on a similar question, but it made no difference.
It looks like a timeout of the RUN command in udev.
Instead of running the backup script (which normally takes a long time to complete) directly from udev, you can run it in a separate process activated by udev.
For example, you can use the at command:
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", RUN+="/home/steve/backup/backup_at.sh"
backup_at.sh:
#!/bin/sh
echo /home/steve/backup/backup.sh | at now
Or you can try to run it in the background:
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", RUN+="/home/steve/backup/backup.sh &"
but I haven't tested this method.
From http://lists.freedesktop.org/archives/systemd-devel/2012-November/007390.html:
It's completely wrong to launch any long running task from a udev rule
and you should expect that it will be killed. If you need to launch a
process from a udev rule, use ENV{SYSTEMD_WANTS} to activate a
service.
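For reference, a sketch of the ENV{SYSTEMD_WANTS} approach on a systemd machine (the unit name is an assumption; the script path is from the rules above):
# /etc/udev/rules.d/99-backup.rules
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", TAG+="systemd", ENV{SYSTEMD_WANTS}="backup.service"

# /etc/systemd/system/backup.service
[Unit]
Description=Back up when the backup drive appears

[Service]
Type=oneshot
ExecStart=/home/steve/backup/backup.sh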

Linux umount a device from a script running in the device itself

I have a mounted iso image at the path:
/mnt/iso
Inside this iso I have an install script, install.sh.
I run the installation script from the iso, and at the end the script asks the user whether they want to unmount the iso itself.
If the user presses "y", the script executes the following code:
cd /
umount /mnt/iso
echo "Installation completed!"
Unfortunately, when the script tries to execute the umount, there's an error:
umount: /mnt/iso: device is busy
I suppose it's due to the fact that the virtual device is kept busy by the script itself.
How can I make it work?
Thanks
Use the -l or --lazy switch to umount, which will do a lazy unmount: the filesystem is only fully unmounted once it is no longer in use. The full description in the manual page (this is a Linux-specific option) is:
Lazy unmount. Detach the filesystem from the filesystem hierarchy
now, and cleanup all references to the filesystem as soon as it is not
busy anymore. (Requires kernel 2.4.11 or later.)
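A minimal sketch of the end of install.sh using the lazy unmount (the rest of the script is unchanged):
cd /
umount -l /mnt/iso   # detach now; fully unmounts once nothing uses it (e.g. when the script exits)
echo "Installation completed!"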
TomH's solution will resolve the issue if your kernel is recent enough to support lazy unmounts (2.4.11 or later). Otherwise the comment by Simone Palazzo is your best bet: you are unmounting something through a script located in the area you are unmounting. If you run the script from the root directory, it will succeed.

How to unmount a busy device [closed]

I've got some samba drives that are being accessed by multiple users daily. I already have code to recognize shared drives (from a SQL table) and mount them in a special directory where all users can access them.
I want to know: if I remove a drive from my SQL table (effectively taking it offline), is there a way to unmount a busy device, and if so, how? So far I've found that no form of umount works.
Ignoring the possibility of destroying data - is it possible to unmount a device that is currently being read?
Yes! There is a way to detach a busy device immediately, even if it is busy and cannot be unmounted forcefully. You can clean everything up later:
umount -l /PATH/OF/BUSY-DEVICE
umount -f /PATH/OF/BUSY-NFS (NETWORK-FILE-SYSTEM)
NOTE/CAUTION
These commands can disrupt a running process, cause data loss, or corrupt open files. Programs accessing the target DEVICE/NFS files may throw errors or may stop working properly after a forced unmount.
Do not execute the above umount commands while inside the mounted path (folder/drive/device) itself. First use pwd to check your current directory (it should not be the mounted path), then use cd to move out of the mounted path before unmounting with the commands above.
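For example:
pwd    # should print something outside the mounted path
cd ~   # otherwise, move out of the mountpoint first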
If possible, locate and identify the busy process, kill that process, and then unmount the samba share/drive to minimize damage:
lsof | grep '<mountpoint of /dev/sda1>' (or whatever the mounted device is)
pkill target_process (kills busy proc. by name | kill PID | killall target_process)
umount /dev/sda1 (or whatever the mounted device is)
Make sure that you aren't still in the mounted device when you are trying to umount.
Avoid umount -l
At the time of writing, the top-voted answer recommends using umount -l.
umount -l is dangerous, or at best unsafe. In summary:
It doesn't actually unmount the device; it just removes the filesystem from the namespace. Writes to open files can continue.
It can cause btrfs filesystem corruption.
Workaround / alternative
The useful behaviour of umount -l is hiding the filesystem from access by absolute pathnames, thereby minimising further mountpoint usage.
This same behaviour can be achieved by mounting an empty directory with permissions 000 over the directory to be unmounted.
Then any new accesses to filenames below the mountpoint will hit the newly overlaid directory with zero permissions; new blockers to the unmount are thereby prevented.
First try to remount,ro
The major unmount achievement to be unlocked is the read-only remount. When you gain the remount,ro badge, you know that:
All pending data has been written to disk
All future write attempts will fail
The data is in a consistent state, should you need to physically disconnect the device.
mount -o remount,ro /dev/device is guaranteed to fail if there are files open for writing, so try that straight up. You may be feeling lucky, punk!
If you are unlucky, focus only on processes with files open for writing:
lsof +f -- /dev/<devicename> | awk 'NR==1 || $4~/[0-9]+[uw -]/'
You should then be able to remount the device read-only and ensure a consistent state.
If you can't remount read-only at this point, investigate some of the other possible causes listed here.
Read-only re-mount achievement unlocked 🔓☑
Congratulations, your data on the mountpoint is now consistent and protected from future writing.
Why fuser is inferior to lsof
Why not use fuser earlier? Well, you could have, but fuser operates upon a directory, not a device, so if you wanted to remove the mountpoint from the file namespace and still use fuser, you'd need to:
Temporarily duplicate the mountpoint with mount -o bind /media/hdd /mnt to another location
Hide the original mount point and block the namespace:
Here's how:
# Values from the bind-mount example above; adjust to your setup:
original=/media/hdd
original_duplicate=/mnt
null_dir=$(sudo mktemp --directory --tmpdir empty.XXXXX)
sudo chmod 000 "$null_dir"
# A request to remount,ro will fail on a `-o bind,ro` duplicate if there are
# still files open for writing on the original, as each mounted instance is
# checked. https://unix.stackexchange.com/a/386570/143394
# So, avoid remount, and bind mount instead:
sudo mount -o bind,ro "$original" "$original_duplicate"
# Don't propagate/mirror mounts under the duplicate; we are about to hide the original
sudo mount --make-private "$original_duplicate"
# Hide the original mountpoint
sudo mount -o bind,ro "$null_dir" "$original"
You'd then have:
The original namespace hidden (no more files could be opened, the problem can't get worse)
A duplicate bind-mounted directory (as opposed to a device) on which to run fuser.
This is more convoluted[1], but allows you to use:
fuser -vmMkiw <mountpoint>
which will interactively ask to kill the processes with files open for writing. Of course, you could do this without hiding the mount point at all, but the above mimics umount -l, without any of the dangers.
The -w switch restricts this to writing processes, and -i is interactive, so after a read-only remount, if you're in a hurry you could then use:
fuser -vmMk <mountpoint>
to kill all remaining processes with files open under the mountpoint.
Hopefully at this point, you can unmount the device. (You'll need to run umount on the mountpoint twice if you've bind mounted a mode 000 directory on top.)
Or use:
fuser -vmMki <mountpoint>
to interactively kill the remaining read-only processes blocking the unmount.
Dammit, I still get target is busy!
Open files aren't the only unmount blocker. See here and here for other causes and their remedies.
Even if you've got some lurking gremlin which is preventing you from fully unmounting the device, you have at least got your filesystem in a consistent state.
You can then use lsof +f -- /dev/device to list all processes with open files on the device containing the filesystem, and then kill them.
[1] It is less convoluted to use mount --move, but that requires mount --make-private /parent-mount-point which has implications. Basically, if the mountpoint is mounted under the / filesystem, you'd want to avoid this.
Try the following, but before running it note that the -k flag will kill any running processes keeping the device busy.
The -i flag makes fuser ask before killing.
fuser -kim /address # kill any processes accessing file
umount /address
Before unmounting the filesystem, we need to check whether any process is holding or using it; that is why it reports "device is busy" or "filesystem is in use".
Run the command below to find the processes using the filesystem:
fuser -cu /local/mnt/
It will show the processes holding/using the filesystem, e.g.:
/local/mnt: 1725e(root) 5645c(shasankarora)
ps -ef | grep 1725    # i.e. ps -ef | grep <pid>
kill -9 <pid>
Kill all the processes and then you will be able to unmount the partition/busy device.
Check for exported NFS file systems with exportfs -v. If any are found, unexport them with exportfs -u share:/directory. These don't show up in the fuser/lsof listings and can prevent umount from succeeding.
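A sketch (the client name and share path are placeholders):
exportfs -v                          # list active NFS exports
sudo exportfs -u client:/mnt/share   # unexport the share, then retry the umount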
Just in case someone has the same problem:
I couldn't unmount the mount point (here /mnt) of a chroot jail.
Here are the commands I typed to investigate:
$ umount /mnt
umount: /mnt: target is busy.
$ df -h | grep /mnt
/dev/mapper/VGTout-rootFS 4.8G 976M 3.6G 22% /mnt
$ fuser -vm /mnt/
USER PID ACCESS COMMAND
/mnt: root kernel mount /mnt
$ lsof +f -- /dev/mapper/VGTout-rootFS
$
As you can see, even lsof returns nothing.
Then I had the idea to type this:
$ df -ah | grep /mnt
/dev/mapper/VGTout-rootFS 4.8G 976M 3.6G 22% /mnt
dev 2.9G 0 2.9G 0% /mnt/dev
$ umount /mnt/dev
$ umount /mnt
$ df -ah | grep /mnt
$
It was a bind mount of /dev at /mnt/dev, which I had created to be able to repair my system from inside the chroot jail.
After unmounting it, my problem was solved.
Check out umount2:
Linux 2.1.116 added the umount2() system call, which, like umount(),
unmounts a target, but allows additional flags controlling the
behaviour of the operation:
MNT_FORCE (since Linux 2.1.116): Force unmount even if busy. (Only for NFS mounts.)
MNT_DETACH (since Linux 2.4.11): Perform a lazy unmount: make the mount point unavailable for new accesses, and actually perform the unmount when the mount point ceases to be busy.
MNT_EXPIRE (since Linux 2.6.8): Mark the mount point as expired. If a mount point is not currently in use, then an initial call to umount2() with this flag fails with the error EAGAIN, but marks the mount point as expired. The mount point remains expired as long as it isn't accessed by any process. A second umount2() call specifying MNT_EXPIRE unmounts an expired mount point. This flag cannot be specified with either MNT_FORCE or MNT_DETACH.
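From the shell, the first two flags correspond to umount(8) options:
umount -f /mnt/point   # MNT_FORCE: force unmount (mainly for NFS)
umount -l /mnt/point   # MNT_DETACH: lazy unmount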
I recently had a similar need to unmount a drive in order to change its label with gparted.
/dev/sda1 was being mounted via /etc/fstab as /media/myusername. When attempts to unmount failed, I researched the error: I had forgotten to first unmount a dual-partitioned thumb drive with a mountpoint on /dev/hda1.
I gave 'lsof' a go as recommended.
$ sudo lsof | grep /dev/sda1
The output of which was:
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
lsof: WARNING: can't stat() fuse file system /run/user/1000/doc
Output information may be incomplete.
Since lsof burped up two fuse warnings, I poked around in /run/user/1000/*, and took a guess that it could be open files or mount points (or both) interfering with things.
Since the mount points live in /media/, I tried again with:
$ sudo lsof | grep /media
The same two warnings, but this time it returned additional info:
bash 4350 myusername cwd DIR 8,21 4096 1048577 /media
sudo 36302 root cwd DIR 8,21 4096 1048577 /media
grep 36303 myusername cwd DIR 8,21 4096 1048577 /media
lsof 36304 root cwd DIR 8,21 4096 1048577 /media
lsof 36305 root cwd DIR 8,21 4096 1048577 /media
Still scratching my head, it was at this point I remembered the thumb drive sticking out of the USB port. Maybe the scratching helped.
So I unmounted the thumb drive partitions (unmounting one automatically unmounted the other) and safely unplugged the thumb drive. After doing so, I was able to unmount /dev/sda1 (with nothing mounted on it anymore), relabel it with gparted, and remount both the drive and the thumb drive with no issues whatsoever.
Bacon saved.
Someone mentioned that if you are using a terminal and your current directory is inside the path you want to unmount, you will get this error.
As a complement: in that case, lsof | grep path-to-be-unmounted will show output like:
bash ... path-to-be-unmounted
sudo fusermount -u -z <mounted path>
NB: do not use tab completion for the path, as this will also freeze the terminal.
Another alternative, when nothing else works, is editing /etc/fstab: add the noauto flag and reboot the machine. The device won't be mounted, and when you're finished doing whatever you need, remove the flag and reboot again.
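A sketch of such an fstab entry (device, mountpoint, and filesystem type are placeholders):
# /etc/fstab
/dev/sdb1  /media/share  ext4  defaults,noauto  0  2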
Niche Answer:
If you have a zfs pool on that device, at least when it's a file-based pool, lsof will not show the usage. But you can simply run
sudo zpool export mypool
and then unmount.
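For example (mypool is a placeholder):
zpool list                 # identify the pool that lives on the device
sudo zpool export mypool   # releases the device so umount can succeed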
Multiple mounts inside a folder
An additional reason could be a secondary mount inside your primary mount folder, e.g. after you worked on an SD card for an embedded device:
# mount /dev/sdb2 /mnt # root partition which contains /boot
# mount /dev/sdb1 /mnt/boot # boot partition
Unmounting /mnt will fail:
# umount /mnt
umount: /mnt: target is busy.
First we have to unmount the boot folder and then the root:
# umount /mnt/boot
# umount /mnt
In my case, I couldn't unmount a partition that was mounted to a directory that was an AFP share (shared into an Apple Bonjour/Avahi mDNS world).
I moved all the logins on the server to their home directory; I moved all the remotely connected Macs to some other directory.
I still couldn't unmount the partition even with umount -f
So I restarted the netatalk daemon on the server.
(/etc/netatalk/afp.conf contains the share assignment)
After the netatalk restart, umount succeeded without the -f.
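On a systemd-based server, the restart could look like this (the service name may vary by distribution; the share path is a placeholder):
sudo systemctl restart netatalk
sudo umount /path/to/share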

Inode of directory on mounted share changes despite no change in modification time

I am running Ubuntu 10.04 and am mounting a drive using cifs. The command I'm using is:
sudo mount -t cifs -o workgroup="workgroup",username="username",noserverino,ro //"drive" "mount_dir"
(Obviously with the quoted values substituted for actual values.)
When I then run the command ls -i I get: 394070
Running it a second time I get: 12103522782806018
Is there any reason to expect the inode value to change?
Running ls -i --full-time shows no change in modification time.
noserverino tells your mount not to use server-generated inode numbers, and instead to make up for them with client-generated temporary inode numbers. Try with serverino: if your server and the exported filesystem support inode numbers, they should be persistent.
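A sketch of the adjusted mount command (same placeholder values as in the question):
sudo mount -t cifs -o workgroup="workgroup",username="username",serverino,ro //"drive" "mount_dir"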
I found that using the option "nounix" before "noserverino" kept the inodes small and persistent. I'm not really sure why this happened. The server is AIX and I'm mounting from Ubuntu. Thank you for your response.
