What variables can produce different md5sum of the same dd image of a partition?
If I execute this code on two storage devices (same size, brand and geometry), why do I obtain different "partition.image" files?
sfdisk /dev/sda < /partition.table
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt/
tar -xf somefiles.tar -C /mnt/
umount /mnt
dd if=/dev/sda1 of=/partition.image
P.S. tar preserves all file timestamps!
When you make a new ext4 filesystem with the mkfs utility, it generates a unique UUID on each invocation (unless you pass the -U option with an explicit UUID). Since the UUID is stored in the superblock of the filesystem, the images you generate between different runs of the above code will not be bit-for-bit identical.
Sources: http://wiki.debian.org/fstab#UUIDs
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#The_Super_Block
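To make the superblock UUID identical between runs you can pin it explicitly. A minimal sketch (the UUID here is just a placeholder; other superblock fields, such as the filesystem creation time, may still differ between runs):
# pin the filesystem UUID so it does not vary between invocations
mkfs.ext4 -U 01234567-89ab-cdef-0123-456789abcdef /dev/sda1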
Creation and access times, etc. And that's good: no two images created on different storage devices should be the same; otherwise you would have something called a "collision".
I'm creating an ISO of a Debian system with:
mkisofs -V "Debian ISO" -cache-inodes -J -l -o file.iso debian-system/
The problem is: when I mount the ISO (mount -o loop) ping and sudo don't work because their suid bits have not been set.
I know that special bits are cleared by the -r flag. This flag generates the "rationalized Rock Ridge directory information", which retains the original file permissions but also clears any set-id bits.
But if I don't use -r, file permissions will be the same for all files, as specified at runtime when the ISO is mounted.
Question: how to add set-id files like ping and sudo to a linux "live" ISO?
You need to use an alternate filesystem that supports those permissions.
The way a LiveCD/DVD works is that there is a squashfs file that is mounted, with changes kept in RAM.
You could "fake" the same by creating a file full of zeros using dd, make a file system on it wtih mkfs.ext4, mount it, and copy the files onto it. Then on your custom disk, mount it as loop (mount -o loop /path/to/file /mnt/point) and symlink/etc the binaries over.
I use rsync for automatic periodic syncing of the home folder (root user) in a linux server that is used by several people. A service that users need is the possibility of mounting remote directories through sshfs. However, when there is an sshfs mount, rsync fails giving the following messages
rsync: readlink_stat("/home/???/???") failed: Permission denied (13)
IO error encountered -- skipping file deletion
...
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
Because of this error, the automated sync does not work as expected, in particular due to skipping the file deletion and a non-zero exit code. The sync is only necessary for the file system where home is mounted, so the wanted behavior is that the sshfs mounts be ignored. The -x / --one-file-system rsync option does not resolve it.
This problem is clearly explained in https://www.agwa.name/blog/post/how_fuse_can_break_rsync_backups . The follow-up article (https://www.agwa.name/blog/post/easily_running_fuse_in_an_isolated_mount_namespace) proposes a solution, though not an acceptable one, because FUSE mounts are only visible to the process that created the mount.
I am looking for a solution that does not affect sshfs usability and is transparent for the users.
The problem is that FUSE denies stat access to other users, including root. Rsync requires stat access on all source files and directories specified. When an rsync process owned by another user stats a FUSE mount-point, FUSE denies that process access to the mount-point's attributes, causing rsync to emit the "permission denied" error shown above. Mauricio Villega's solution works by telling rsync to skip the FUSE mount-points listed by the mount command. Here is another version of Villega's solution that specifies a whitelist of filesystem types using the findmnt command. I chose ext3 and ext4, but you may add other types as needed.
#!/bin/bash
# Which paths to rsync (note the lack of trailing slash tells rsync to preserve source path name at destination).
SOURCES=(
/home
)
# Which filesystem types are supported.
FSTYPES=(
ext3
ext4
)
# Rsync each source.
for SOURCE in "${SOURCES[@]}"; do
# Build exclusion list (array of "--exclude=PATH").
excludedPaths=$(findmnt --invert --list --noheadings --output TARGET --types $(IFS=',';echo "${FSTYPES[*]}"))
printf -v exclusionList -- "--exclude=%s " ${excludedPaths[@]}
# Rsync.
rsync --archive ${exclusionList[@]} --hard-links --delete --inplace --one-file-system ${SOURCE} /backup
done
Note that it builds the exclusion list inside the loop to address a fundamental problem with this solution. That problem is due to rsync'ing from a live system where a user could create new FUSE mount-points while rsync is running. The exclusion list needs to be updated frequently enough to include new FUSE mount-points. You may divide the home directory further by each username by modifying the SOURCES array as shown.
SOURCES=(
/home/user1
/home/user2
)
If you are using LVM, an alternative solution is rsync from an LVM snapshot. An LVM snapshot provides a simple (e.g., no FUSE mount-points) and frozen view of the logical volume it is linked to. The downside is that you must reserve space for the LVM snapshot's copy-on-write (COW) activity. It is crucial that you discard the LVM snapshot after you are done with it; otherwise the LVM snapshot will continue to grow in size as modifications are made. Here is a sample script that uses LVM snapshots. Note that it does not need to build an exclusion list for rsync.
# Create and mount LVM snapshot.
lvcreate --extents 100%FREE --snapshot --name snapRoot /dev/vgSystem/lvRoot
mount -o ro /dev/vgSystem/snapRoot /root/mnt # Note that only root has access to this mount-point.
# Rsync each source.
for SOURCE in "${SOURCES[@]}"; do
rsync --archive --hard-links --delete --inplace --one-file-system /root/mnt/${SOURCE} /backup
done
# Discard LVM snapshot.
umount /root/mnt
lvremove vgSystem/snapRoot
References:
"How FUSE Can Break Rsync Backups"
This error does not appear if the FUSE mount points are excluded in the rsync command. Since it is an automated sync, the mount command can be used to obtain all FUSE mount points. The output of the mount command may differ depending on the system, but on Debian Jessie sshfs mounts appear as USER@HOST:MOUNTED_DIR on /path/to/mount/point type fuse.sshfs (rw,...). A simple way to automate the exclusion of FUSE mounts with bash+sed is the following:
SOURCE="/home/"
FUSEEXCLUDE=( $( mount |
sed -rn "
/ type fuse/ {
s|^[^ ]+ on ([^ ]+) type fuse.+|\1|;
/^${SOURCE//\//\\\/}.+/ {
s|^${SOURCE//\//\\\/}| --exclude |;
p;
}
}" ) )
rsync $OPTIONS "${FUSEEXCLUDE[@]}" "$SOURCE" "$TARGET"
I am new to fuse. When I try to run a FUSE client program I get this error:
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
I understand that a mountpoint is the directory where you will logically attach the FUSE filesystem. What will happen if I mount to this location? What are the dangers? Is it just that the directory will be overwritten? Basically: what will happen if you mount to a non-empty directory?
You need to make sure that the files on the device mounted by FUSE will not have the same paths and file names as files which already exist in the nonempty mountpoint. Otherwise this would lead to confusion. If you are sure, pass -o nonempty to the mount command.
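For example, with sshfs (the host and paths here are just placeholders):
sshfs -o nonempty user@example.com:/remote/dir /mnt/point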
You can try out what happens using the following commands (Linux rocks!) without destroying anything:
# create a 10 MB file
dd if=/dev/zero of=partition bs=1024 count=10240
# create a loop device from that file
sudo losetup /dev/loop0 ./partition
# create a filesystem on it
sudo mkfs.ext3 /dev/loop0
# mount the partition to a temporary folder and create a file
mkdir test
sudo mount /dev/loop0 test
echo "bar" | sudo tee test/foo
# unmount the device
sudo umount /dev/loop0
# create the file again
echo "bar2" > test/foo
# now mount the device (which has a file with the same name on it)
# and see what happens
sudo mount /dev/loop0 test
Just add -o nonempty on the command line, like this:
s3fs -o nonempty <bucket-name> </mount/point/>
Apparently nothing happens: it fails in a non-destructive way and gives you a warning.
I've had this happen as well very recently. One way you can solve this is by moving all the files in the non-empty mount point to somewhere else, e.g.:
mv /nonEmptyMountPoint/* ~/Desktop/mountPointDump/
This way your mount point is now empty, and your mount command will work.
For me the error message goes away if I unmount the old mount before mounting it again:
fusermount -u /mnt/point
If it's not already mounted you get a non-critical error:
$ fusermount -u /mnt/point
fusermount: entry for /mnt/point not found in /etc/mtab
So in my script I just unmount it before mounting it.
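A minimal sketch of that pattern (the sshfs command, host and paths are placeholders; any FUSE mount would do):
#!/bin/sh
# unmount first; ignore the error if nothing was mounted
fusermount -u /mnt/point 2>/dev/null || true
# now (re)mount
sshfs user@example.com:/remote/dir /mnt/point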
Just set "nonempty" as an optional value in your /etc/fstab
For example:
## mount a bucket
/usr/local/bin/s3fs#{your_bucket_name} {local_mounted_dir} fuse _netdev,url={your_bucket_endpoint_url},allow_other,nonempty 0 0
## to mount a sub-directory of a bucket, do it like this:
/usr/local/bin/s3fs#{your_bucket_name}:{sub_dir} {local_mounted_dir} fuse _netdev,url={your_bucket_endpoint_url},allow_other,nonempty 0 0
Force it with -l (lazy unmount):
sudo umount -l ${HOME}/mount_dir
i.e. the partition of interest is already mounted read-only. The partition needs to be mounted read-write only while particular lines of the script execute. After that, the partition should return to its previous read-only state.
The question is for the QNX operating system, and the correct way to remount a partition as read/write there is with the command below.
mount -uw /
To remount a partition read-write:
mount /mnt/mountpoint -oremount,rw
and to remount it read-only:
mount /mnt/mountpoint -oremount,ro
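Applied to a script that only needs write access temporarily, a sketch might look like this (the touch line is just a placeholder for whatever the script actually writes):
mount /mnt/mountpoint -oremount,rw
touch /mnt/mountpoint/some-file   # commands that need a writable partition
mount /mnt/mountpoint -oremount,ro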
You may be interested in the remount option.
For example, these commands are widely used on rooted Android devices.
mount -o remount,rw /system
mount -o remount,ro /system
mount(8) - Linux man page
Filesystem Independent Mount Options
remount
Attempt to remount an already-mounted filesystem. This is commonly used to change the mount flags for a filesystem, especially to make a readonly filesystem writeable. It does not change device or mount point.
The remount functionality follows the standard way the mount command works with options from fstab. This means that the mount command doesn't read fstab (or mtab) only when both a device and a directory are fully specified.
mount -o remount,rw /dev/foo /dir
After this call all old mount options are replaced and arbitrary stuff from fstab is ignored, except the loop= option which is internally generated and maintained by the mount command.
mount -o remount,rw /dir
After this call mount reads fstab (or mtab) and merges these options with options from command line ( -o ).
I've written a small FUSE-based filesystem and now the only part's missing is that I want to register it with fstab(5) to auto-mount it on system startup and/or manually mount it with just mount /srv/virtual-db. How can I achieve this?
I know, I can just run /usr/bin/vdbfs.py /srv/virtual-db from some init script, but that's not exactly pretty.
I'm sorry because this may not be exactly a programming question, but it's highly related, as packaging and deployment are still the programmer's job.
In general, one "registers" a new mount filesystem type by creating an executable mount.fstype.
$ ln -s /usr/bin/vdbfs.py /usr/sbin/mount.vdbfs
If vdbfs.py takes mount-ish arguments (i.e. dev path [-o opts]), then mount -t vdbfs and using vdbfs as the 3rd field in fstab will work. If it doesn't, you can create a wrapper which does take arguments of that form and maps them to whatever your vdbfs.py takes.
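For example, a hypothetical wrapper installed as /usr/sbin/mount.vdbfs, assuming vdbfs.py happens to accept the device and mount point in that order:
#!/bin/sh
# mount invokes its helpers as: mount.<type> <device> <dir> [-o options]
dev="$1"; dir="$2"; shift 2
exec /usr/bin/vdbfs.py "$dev" "$dir" "$@"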
FUSE should also install a mount.fuse executable; mount.fuse 'vdbfs.py#dev' path -o opts will go on and call vdbfs.py dev path -o opts. In that case, you can use fuse as your filesystem type and prefix your device with vdbfs.py#.
So to clarify ephemient's answer, there are two options:
Edit /etc/fstab like this:
# <file system> <mount point> <type> <options> <dump> <pass>
# ...
vdbfs.py#<dev> /srv/virtual-db fuse user,<other-opts> 0 0
Or,
Create an executable prefixed with "mount." (ensuring it can be used
with mount-like options):
$ ln -s /usr/bin/vdbfs.py /usr/sbin/mount.vdbfs
And edit /etc/fstab like this:
# <file system> <mount point> <type> <options> <dump> <pass>
# ...
<dev> /srv/virtual-db vdbfs.py user,<other-opts> 0 0
With regards to auto-mounting at startup and manually mounting with mount, the user and noauto options are relevant and fully supported by FUSE itself, so you don't have to implement them yourself. The user option lets a non-privileged user who is a member of the "fuse" group mount your filesystem with the mount command, and noauto directs your filesystem not to be mounted automatically at startup. If you don't specify noauto, it will be mounted automatically.
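For example, an fstab entry that lets users mount the filesystem manually but skips it at boot could look like this:
vdbfs.py#<dev> /srv/virtual-db fuse user,noauto 0 0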
To clarify @patryk.beza's comment on the accepted answer, the correct way to mount a FUSE file system is by setting the file system type to fuse.<subtype>.
For example, to mount an s3fs-fuse implementation, which does not provide a specific /sbin/mount.* wrapper and uses normally the s3fs user command to mount S3 buckets, one can use this command as root:
mount -t fuse.s3fs bucket-name /path/to/dir -o <some,options>
or this line in /etc/fstab:
bucket-name /path/to/dir fuse.s3fs <some,options> 0 0
or this SystemD mount unit (for example, /etc/systemd/system/path-to-dir.mount):
[Unit]
Description=S3 Storage
After=network.target
[Mount]
What=bucket-name
Where=/path/to/dir
Type=fuse.s3fs
Options=<some,options>
[Install]
WantedBy=multi-user.target
How this works: mount recognizes the concept of "filesystem subtypes" when the type is formatted with a period (i.e. <type>.<subtype>), so that a type with the format fuse.someimpl is recognized to be the responsibility of the FUSE mount helper /sbin/mount.fuse. The FUSE mount helper then resolves the someimpl part to the FUSE implementation, in the same way as the # format is used in the original answer (I think this is just a path search for a program named <subtype>, but I'm not 100% sure about it).
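So, assuming sshfs is installed, the following two commands should behave roughly the same; the first goes through the /sbin/mount.fuse helper, the second calls the implementation directly:
mount -t fuse.sshfs user@example.com:/ /mnt
sshfs user@example.com:/ /mnt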
You could just use the fuse filesystem type. The following works on my system:
smbnetfs /media/netbios fuse defaults,allow_other 0 0
Another example:
sshfs#user@example.com:/ /mnt fuse user,noauto 0 0
After researching a lot, I found this solution to mount a FUSE filesystem using an fstab entry. I was using FUSE (s3fs) to mount an S3 bucket on a local Linux machine.
.passwd-s3fs : contains the credentials to access your AWS account: 1] secret key and 2] access key.
uid : user ID. You can run the Linux command id to get your uid.
Syntax:
s3fs#<Bucket_Name> <Mounted_Direcotry_Path> fuse _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs,use_cache=/tmp,umask=002,uid=<User_Id> 0 0
Example:
s3fs#myawsbucket /home/ubuntu/s3bucket/mys3bucket fuse _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs,use_cache=/tmp,umask=002,uid=1000 0 0
To mount, you need to run the following command.
mount -a
To check whether your bucket is mounted properly, use the following command, which shows all mount points.
df -h