How can I make a copy of device/socket file - linux

I can find the inode of a device/socket file with stat, so it seems like I could somehow "copy" such a file for backup. The obvious tool is "dd", but I have no idea what to do if the device is infinite (like the random one). And can I just copy the inode somehow?

These are referred to as "special files" or "special nodes". Copying their contents doesn't make sense, as the contents are generated in one way or another programmatically by the kernel as needed.
Programs like "tar" know how to copy the inode information, which refers to the part of the kernel that supports each of these different nodes. See the documentation of the "mknod" command for some more details.

And if you need a one-liner to copy device nodes with tar, here it is:
cd /dev && tar -cpf- sda* | tar -xf- -C /some/destination/path/

Find out the major and minor numbers of the device file you need to copy, then use mknod to create a device file with the same major and minor numbers. The major number is used to index the kernel's device switch table and call the proper kernel function (usually a device driver). The minor number is used as a parameter for those functions (to select, for example, a different density, disk, etc.).
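A minimal sketch of that recipe (the device name and destination path are placeholders; use whatever major/minor numbers stat reports on your system):
stat -c 'type=%F major=%t minor=%T' /dev/sda       # major/minor are printed in hex
mknod /some/destination/sda b 8 0                  # recreate a block device with major 8, minor 0 (example values)
chmod --reference=/dev/sda /some/destination/sda   # copy permissions and ownership from the original node
chown --reference=/dev/sda /some/destination/sda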

There is one legitimate use case for copying (archiving) a socket.
I have a program that gathers and summarizes attribute data in a file system tree. In order to regression test, I created a directory that contains one example of every type of file the program might encounter. I run my program on this directory to test it whenever I alter the code.
It is necessary to backup this directory along with other more valuable data, and it is necessary to restore it, should the storage device fail.
tar is the program of choice, and of course tar cannot archive a socket. Doing so in most situations is senseless - any program that uses the socket will have to delete it and recreate it before use.
In the case of the test directory, there is one named socket, for it is possible that my program will encounter such things and it needs to correctly gather attributes for a complete summary.
As noted by others, that socket is not useful for anything directly. It does, however, occupy a little storage space, much as an empty file occupies storage space. That is why you can see it in the directory listing.
You can copy it successfully with the command:
cp -ar --parents <path> <backup_device_directory>
and restore it with:
cp -ar --parents <backup_device_directory>/<path> <directory>
The socket is not useful for anything except probing its attributes with a program during a regression test.
Archiving it saves the trouble of having to remember to recreate it after a restoration. The extra nuisance of archiving the sockets is easily codified in a script and forgotten. That is what we all want - easy-to-use solutions whose implementation you can ignore after you have solved the problem.
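A concrete pairing of the two commands, with hypothetical paths, might look like this (the assumption here is that you cd so the path handed to --parents is relative and gets recreated under the destination):
cd / && cp -ar --parents home/test/regression/example.sock /mnt/backup
cd /mnt/backup && cp -ar --parents home/test/regression/example.sock /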

You can copy the device node from a working system, as below, to some location shared between the machines, and then copy it from the shared location to the other system.
Machine A
cp -rf /dev/SRC shared_directory
Machine B
cp -rf shared_directory /dev/

Force rsync to compare local files byte by byte instead of checksum

I have written a Bash script to backup a folder. At the core of the script is an rsync instruction
rsync -abh --checksum /path/to/source /path/to/target
I am using --checksum because I neither want to rely on file size nor modification time to determine if the file in the source path needs to be backed up. However, most -- if not all -- of the time I run this script locally, i.e., with an external USB drive attached which contains the backup destination folder; no backup over network. Thus, there is no need for a delta transfer since both files will be read and processed entirely by the same machine. Calculating the checksums even introduces a slowdown in this case. It would be better if rsync would just diff the files if they are both stored locally.
After reading the manpage I stumbled upon the --whole-file option which seems to avoid the costly checksum calculation. The manpage also states that this is the default if source and destination are local paths.
So I am thinking to change my rsync statement to
rsync -abh /path/to/source /path/to/target
Will rsync now check local source and target files byte by byte or will it use modification time and/or size to determine if the source file needs to be backed up? I definitely do not want to rely on file size or modification times to decide if a backup should take place.
UPDATE
Notice the -b option in the rsync instruction. It means that destination files will be backed up before they are replaced. So blindly rsync'ing all files in the source folder, e.g., by supplying --ignore-times as suggested in the comments, is not an option. It would create too many duplicate files and waste storage space. Keep also in mind that I am trying to reduce backup time and workload on a local machine. Just backing up everything would defeat that purpose.
So my question could be rephrased as, is rsync capable of doing a file comparison on a byte by byte basis?
Question: is rsync capable of doing a file comparison on a byte by byte basis?
Strictly speaking, Yes:
It's a block by block comparison, but you can change the block size.
You could use --block-size=1 (but it would be unreasonably inefficient and inappropriate for basically every use case).
The block based rolling checksum is the default behavior over a network.
Use the --no-whole-file option to force this behavior locally. (see below)
Statement 1. Calculating the checksums even introduces a slowdown in this case.
This is why it's off by default for local transfers.
Using the --checksum option forces an entire file read, as opposed to the default block-by-block delta-transfer checksum checking.
Statement 2. Will rsync now check local source and target files byte by byte or will it use modification time and/or size to determine if the source file needs to be backed up?
By default it will use size & modification time.
You can use a combination of --size-only, --(no-)ignore-times, --ignore-existing and --checksum to modify this behavior.
Statement 3. I definitely do not want to rely on file size or modification times to decide if a backup should take place.
Then you need to use --ignore-times and/or --checksum
Statement 4. supplying --ignore-times as suggested in the comments, is not an option
Perhaps using --no-whole-file and --ignore-times is what you want then? This forces the use of the delta-transfer algorithm, but for every file regardless of timestamp or size.
You would (in my opinion) only ever use this combination of options if it were critical to avoid meaningless writes (and it is specifically the meaningless writes you want to avoid, not system overhead, since a delta-transfer is not actually more efficient for local files), and you had reason to believe that files with identical modification stamps and byte sizes could nonetheless differ.
I fail to see how modification stamp and size in bytes are anything but a logical first step in identifying changed files.
If you compared the following two files:
File 1 (local) : File.bin - 79776451 bytes, modified on 15 May at 07:51
File 2 (remote): File.bin - 79776451 bytes, modified on 15 May at 07:51
The default behaviour is to skip these files. If you're not satisfied that the files should be skipped, and want them compared, you can force a block-by-block comparison and differential update of these files using --no-whole-file and --ignore-times
So the summary on this point is:
Use the default method for the most efficient backup and archive
Use --ignore-times and --no-whole-file to force a delta transfer (block-by-block checksums, transferring only differential data) if for some reason this is necessary
Use --checksum and --ignore-times to be completely paranoid and wasteful.
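Putting that summary into concrete commands (a sketch using the question's placeholder paths):
rsync -abh /path/to/source /path/to/target                                   # default: compare by size and mtime
rsync -abh --no-whole-file --ignore-times /path/to/source /path/to/target    # force the block-by-block delta on every file
rsync -abh --checksum --ignore-times /path/to/source /path/to/target         # full-file checksums on every file (slowest)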
Statement 5. Notice the -b option in the rsync instruction. It means that destination files will be backed up before they are replaced
Yes, but this can work however you want it to; it doesn't necessarily mean a full backup every time a file is updated, and it certainly doesn't mean that a full transfer will take place at all.
You can configure rsync to:
Keep one or more versions of a file
Use --backup-dir to turn it into a full incremental backup system.
Doing it this way doesn't waste space other than what is required to retain differential data. I can verify that in practice, as there would not be nearly enough space on my backup drives for all of my previous versions to be full copies.
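A minimal sketch of that kind of setup (the versions directory is hypothetical and kept outside the target tree):
rsync -abh --backup-dir=/path/to/versions/$(date +%Y-%m-%d) /path/to/source /path/to/target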
Some Supplementary Information
Why is Delta-transfer not more efficient than copying the whole file locally?
Because you're not tracking the changes to each of your files. If you actually had a delta file, you could merge just the changed bytes, but you need to know what those changed bytes are first. The only way you can know this is by reading the entire file.
For example:
I modify the first byte of a 10MB file.
I use rsync with delta-transfer to sync this file
rsync immediately sees that the first byte (or the block containing it) has changed, and proceeds to update just that block (into a temporary copy of the file by default, or directly in place with --inplace).
However, rsync doesn't know that only the first byte has changed. It will keep checksumming until the whole file has been read.
For all intents and purposes:
Consider rsync a tool that conditionally performs a --checksum based on whether or not the file timestamp or size has changed. Overriding this to --checksum is essentially equivalent to --no-whole-file and --ignore-times, since both will:
Operate on every file, regardless of time and size
Read every block of the file to determine which blocks to sync.
What's the benefit then?
The whole thing is a tradeoff between transfer bandwidth and speed/overhead.
--checksum is a good way to only ever send differences over a network
--checksum while ignoring files with the same timestamp and size is a good way to both only send differences over a network, and also maximize the speed of the entire backup operation
Interestingly, it's probably much more efficient to use --checksum as a blanket option than it would be to force a delta-transfer for every file.
There is no way to do byte-by-byte comparison of files instead of checksum, the way you are expecting it.
The way rsync works is to create two processes, a sender and a receiver, which exchange a list of files and their metadata to decide between them which files need to be updated. This is done even for local files, except that in that case the processes communicate over a pipe rather than a network socket. After the list of changed files has been decided, changes are sent as a delta or as whole files.
Theoretically, one side could send whole files along with the file list so the other could diff them, but in practice this would be rather inefficient in many cases: the receiver would need to keep those files in memory in case it detects that an update is needed, or else the changed files would have to be re-sent. Neither option sounds very efficient.
There is a good overview of the (theoretical) mechanics of rsync: https://rsync.samba.org/how-rsync-works.html

Linux file deleted recovery

Is there a way to create a file in Linux that links to a specific inode?
Take this scenario: there is a file in the course of being written (a log, maybe) and that file is deleted, but a link in the dir /proc is still pointing at it. In this case we don't need a bare copy of it but a hard link to it, so that we keep getting future modifications, including the very last one before the process closes and the system deletes it.
If we have the inode number, is there a way to achieve this goal?
Since there is no syscall that takes an inode (the inode is a concept of the extX filesystems, and it is better practice to build a chain of responsibility than a stovepipe, as M.E.L. suggests), the answer to this question is simply NO: at the VFS level we handle file paths and names, not other internal representations.
BUT to achieve the goal of tracking the very last modification, we can continuously monitor and duplicate the file with tail:
tail -c+1 -f --pid=PID /proc/PID/fd/FD > /path/to/the/copy
where PID is the pid of the process that still has the deleted file open and FD is its file descriptor number. With -f, tail opens and holds the file to capture further modifications; with -c+1 it starts "tailing" from the first byte; and with --pid=PID, tail is told to exit when that pid exits.
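A concrete (hypothetical) invocation, reusing the pid and fd numbers from the lsof example below:
tail -c+1 -f --pid=4607 /proc/4607/fd/4 > /tmp/testing.txt.copy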
You can use lsof to recover deleted files (sometimes)...
> lsof | grep testing.txt
less   4607   juliet   4r   REG   254,4   21   8880214   /home/juliet/testing.txt (deleted)
Be sure to read the original article for full details before attempting this, unless you're a maverick like me.
> ls -l /proc/4607/fd/4
lr-x------ 1 juliet juliet 64 Apr 7 03:19 /proc/4607/fd/4 -> /home/juliet/testing.txt (deleted)
> cp /proc/4607/fd/4 testing.txt.bk
http://www.linuxplanet.com/linuxplanet/tips/6767/1
Enjoy
It's always difficult to answer a question like "can I do ..." confidently in the negative. But as far as I can see, neither /sys nor /proc provides a mapping of open file descriptors that are not symlinks. I assume by "BUT a link in the dir /proc is still pointing at it" you mean that the /proc/<pid>/fd/ entries look like symlinks? I'm almost sure you cannot recover the original file.
I take that back: As user user2676075 pointed out, copying does work. Just hardlinking doesn't ...
UPDATE: If you think about it, it's quite logical.
/proc and /sys are file systems different from your hard disk, so they can't provide file-like directory entries that one could hardlink to a destination on the hard disk.
The /proc/*/fd/ entries pretend to be symlinks, but actually they are something different, else the copying would not work. I think they pretend to be symlinks to provide meaningful information with "ls -l".
Regarding the (missing) capability to hardlink to some inode (let's say with some system call): This cannot be part of the kernel or the VFS-Interface, for the following reasons:
It would violate the integrity of the file system. The filesystem is not supposed to keep the disk blocks of files that are completely deleted around in the same manner as files that persist.
The inode might be a completely virtual concept that identifies a "slot where a data stream is stored". I assume there can be implementations that would have a problem converting a slot with no reference back into a slot that is referred to by a name in the file system.
I admit the case against the possibility of such a system call is not watertight. But given the current state of the VFS interface (which AFAIR doesn't provide for such a call), it would be a heavy burden for any file system implementation (including e.g. distributed file systems) to provide a call to link a file into a directory by inode.
ATM I wonder whether fstat, called before and after deleting the last reference, is actually required to return the same inode information ...

Query with mv linux command

I used the following Linux command:
mv RegisteredOutputs.msg registered_outputs.tcl
My intention was to achieve the following:
mv RegisteredOutputs.msg registered_outputs.msg
The directory in which I issued the command already had a file named registered_outputs.tcl.
So by now you might have figured out what my issue is: registered_outputs.tcl got overwritten. Is there any way of recovering it?
First thing you always do: boot a live CD/USB so that your partition is mounted read-only, to avoid that space on the drive being reused. Once another file uses that platter space, the data is gone.
Because of how the Linux ext3 file system works, it actually zeroes out inode data on delete, making recovery impossible. That is for deletion, however, and I don't know whether the same applies to overwriting existing files. Hope you're feeling lucky.
See this guide on how to recover deleted files on ext3
source:
recovery of overwritten file

Checksum of a loop device file exactly reproducible?

How can I mount and unmount a file as a loop device and have exactly the same MD5 checksum afterwards? (Linux)
Here's the workflow:
I take a fresh copy of a fixed template file which contains a prepared ext2 root file system.
The file is mounted with mount -t ext2 <file> <mountpoint> -o loop,sync,noatime,nodiratime
(Here, some files will be added in the future, but ignore this for a moment and focus on mount/umount.)
umount
Take the MD5 sum of the file.
I expect the same, reproducible checksum every time I perform exactly the same steps.
However, when I repeat the process (remember: taking a fresh copy of the template file), I always get a different checksum.
I assume that either some timestamps are still being set internally (I tried to avoid this with the noatime option), or that Linux manages the file system in its own way, over which I have no influence. That means the files and timestamps inside might be the same, but the way the file system is arranged inside the file might be different and therefore somewhat random.
In comparison, when I create a zip file of a file tree, and I touched all files with a defined timestamp, the checksum of the zip file is reproducible.
Is there a way to keep the mount or file access under the kind of control I need at all?
It depends on the file system's on-disk format. I believe ext2 keeps, at the very least, a mount count - how many times the file system has been mounted. I don't remember any mount option to tell it not to write that counter (and perhaps other data items), but you can:
a. Mount the file system read-only. Then the checksum will not change, of course.
b. Change the ext2 file system kernel driver to add an option not to update the counter and possibly other data bits.
The more interesting question is why you are interested in such an option. I think there is probably a better way to achieve what you are trying to do - whatever it is.
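For option (a), a minimal sketch (file and mount point names are hypothetical):
cp template.img work.img
md5sum work.img
mount -t ext2 -o loop,ro work.img /mnt/check
umount /mnt/check
md5sum work.img                                            # unchanged, since nothing was written
tune2fs -l work.img | grep -iE 'mount count|last mount'    # the fields a read-write mount would update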

What happens if there are too many files under a single directory in Linux?

If there are something like 1,000,000 individual files (mostly 100k in size) in a single directory, flatly (no other directories or files inside it), are there going to be any compromises in efficiency or disadvantages in any other ways?
ARG_MAX is going to take issue with that... for instance, rm -rf * (while in the directory) is going to fail with "Argument list too long". Utilities that want to do some kind of globbing (or a shell) will have some functionality break.
If that directory is available to the public (let's say via FTP or a web server) you may encounter additional problems.
The effect on any given file system depends entirely on that file system. How frequently are these files accessed, what is the file system? Remember, Linux (by default) prefers keeping recently accessed files in memory while putting processes into swap, depending on your settings. Is this directory served via http? Is Google going to see and crawl it? If so, you might need to adjust VFS cache pressure and swappiness.
Edit:
ARG_MAX is a system-wide limit on the combined size of the arguments that can be passed to a program's entry point. So, let's take rm and the example "rm -rf *": the shell is going to expand '*' into a space-delimited list of files which in turn becomes the arguments to rm.
The same thing is going to happen with ls, and several other tools. For instance, ls foo* might break if too many files start with 'foo'.
I'd advise (no matter what fs is in use) to break it up into smaller directory chunks, just for that reason alone.
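To see the limit, and a common workaround when a glob would blow past it (the directory and pattern here are hypothetical):
getconf ARG_MAX                                                           # maximum combined size of argv (and environment), in bytes
find /path/to/bigdir -maxdepth 1 -name 'foo*' -print0 | xargs -0 rm --    # find feeds rm in batches that fit under the limit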
My experience with large directories on ext3 and dir_index enabled:
If you know the name of the file you want to access, there is almost no penalty
If you want to do operations that need to read the whole directory entry (like a simple ls on that directory), it will take several minutes the first time. Then the directory will stay in the kernel cache and there will be no penalty anymore
If the number of files gets too high, you run into ARG_MAX et al. problems. That basically means that wildcarding (*) no longer always works as expected. This only matters if you really want to perform an operation on all the files at once
Without dir_index however, you are really screwed :-D
Most distros use ext3 by default, which can use b-tree indexing for large directories.
Some distros have this dir_index feature enabled by default; in others you'd have to enable it yourself. If you enable it, there's no slowdown even for millions of files.
To see whether the dir_index feature is activated, run (as root):
tune2fs -l /dev/sdaX | grep features
To activate the dir_index feature (as root):
tune2fs -O dir_index /dev/sdaX
e2fsck -D /dev/sdaX
Replace /dev/sdaX with partition for which you want to activate it.
When you accidentally execute "ls" in that directory, or use tab completion, or want to execute "rm *", you'll be in big trouble. In addition, there may be performance issues depending on your file system.
It's considered good practice to group your files into directories which are named by the first 2 or 3 characters of the filenames, e.g.
aaa/
aaavnj78t93ufjw4390
aaavoj78trewrwrwrwenjk983
aaaz84390842092njk423
...
abc/
abckhr89032423
abcnjjkth29085242nw
...
...
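A hedged sketch of moving an existing flat directory into such prefix directories (assumes bash and plain filenames at least three characters long; run it inside the directory):
for f in *; do
    [ -f "$f" ] || continue        # skip anything that is not a regular file
    prefix=${f:0:3}                # first three characters of the name
    mkdir -p "$prefix"
    mv -- "$f" "$prefix/"
done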
The obvious answer is that the folder will be extremely difficult for humans to use long before any technical limit is hit (the time taken to read the output of ls, for one; there are dozens of other reasons). Is there a good reason why you can't split it into subfolders?
Not every filesystem supports that many files.
On some of them (ext2, ext3, ext4) it's very easy to hit the inode limit.
I've got a host with 10M files in a directory. (don't ask)
The filesystem is ext4.
It takes about 5 minutes to run ls.
One limitation I've found is that my shell script to read the files (because AWS snapshot restore is a lie and files aren't present until first read) wasn't able to handle the argument list, so I needed to do two passes. First, construct a file list with find (-wholename, in case you want to do partial matches):
find /path/to_dir/ -wholename '*.ldb' | tee filenames.txt
Then, second, read from the file containing the filenames and process each file (with limited parallelism):
while read -r line; do
    # throttle: keep at most 10 background jobs running at once
    if test "$(jobs | wc -l)" -ge 10; then
        wait -n
    fi
    {
        # do something with "$line" here, with 10x fanout
        :
    } &
done < filenames.txt
wait    # let the last batch of jobs finish
Posting here in case anyone finds this specific workaround useful when working with too many files.
