Recover files deleted with rsync -avz --delete

Is it possible to recover files deleted with rsync -avz --delete?
If it is, what are some suggested tools to do so?

I am assuming you ran rsync on some Unix system.
If you don't have a backup of your file system,
then it's a long, tedious process to recover deleted files from a Unix file system.
High-level steps:
find the partition where your file resided
create an image of the entire partition: % dd if=/partition of=partition.img
(this assumes you have enough space to store the image locally on a different partition, or you can copy it over to a different system: % dd if=/partition | ssh otherhost "dd of=partition.img")
open the image file in a hex editor
(this assumes you know the contents of the files you've lost and can identify them when you see them)
note the byte offset and length of your file
use grep -b to locate the byte offset of your missing file's contents, then carve them out of the image (e.g. with dd, as sketched below)
enjoy!
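A concrete sketch of those last two steps (the search phrase, offset, and length below are all hypothetical):
# print the byte offset of a phrase you remember from the lost file
grep -a -b "some phrase from the file" partition.img
# carve out ~4 KB starting at the reported offset (here 1048576)
dd if=partition.img bs=1 skip=1048576 count=4096 of=recovered.txt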

I wasn't able to get extundelete to work, so I ended up using photorec + find/grep in order to recover my important files.

Related

Restore large sized directories using linux command

I want to restore the files inside a directory that was deleted, and I want to use Linux, so I am using "cp -ra".
Large directories are getting restored with "cp -ra" (those under 10 MB), but the files inside the directory are not being restored with "cp -ra". Please help: what is the Unix command, if anyone has any idea?
cp can't restore deleted files. Whether restoration of deleted files is possible at all depends on the file system these files were stored on.
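If the file system is ext3/ext4, a dedicated undelete tool is a better bet than cp. As a hedged sketch (extundelete is mentioned elsewhere on this page; run it as root against the unmounted partition, and the device name and file path here are hypothetical):
# restore a single file by its former path, relative to the filesystem root
extundelete /dev/sdXN --restore-file johndoe/documents/notes.txt
# or attempt to restore everything it can find
extundelete /dev/sdXN --restore-all
Recovered files land in a RECOVERED_FILES/ directory under the current working directory.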

Add files to VFAT image without mounting

I want to create a new VFAT image and add a few files to it.
# Create file of 1MB size:
dd if=/dev/zero of=my-image.fat count=1 bs=1M
# Format file as VFAT:
mkfs.vfat ./my-image.fat
Now I want to add the files ./abc, ./def and ./ghi to the image.
How do I do that without mount -o loop or fusermount?
I only want to write to a new, empty, pristine VFAT image.
I don't need deleting, appending, or any "complicated" operations.
I tried 7z a because 7zip can read VFAT images, but it does not know how to write to them.
I want to do the exact same thing as part of an image build for an embedded system. It's really annoying that the entire build, which takes ~3 hrs, could be completely unattended except for the final steps, which require a password in order to mount a VFAT image. Fortunately, I found a set of tools which solve the problem.
You want mcopy provided by GNU mtools.
Mtools is a collection of utilities to access MS-DOS disks from GNU and Unix without mounting them.
It also supports disk images such as VFAT image files.
As an example, the following command will copy the file hello.txt from your current directory into the subdirectory subdir of the VFAT file system in ~/images/fat_file.img:
mcopy -i ~/images/fat_file.img hello.txt ::subdir/hello.txt
There are more useful inclusions in mtools, such as mdir and mtype which are great for inspecting your image file without having to mount it.
mdir -i ~/images/fat_file.img ::
mdir -i ~/images/fat_file.img ::subdir
mtype -i ~/images/fat_file.img ::subdir/hello.txt
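Applied to the question above, the whole job is two or three commands (a minimal sketch; the subdirectory is optional):
# copy ./abc, ./def and ./ghi into the root of the image
mcopy -i my-image.fat ./abc ./def ./ghi ::
# or create a subdirectory first and copy a file into it
mmd -i my-image.fat ::subdir
mcopy -i my-image.fat ./abc ::subdir/abc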
What you want is basically impossible. You can't just "stuff" some file data onto the end of a disk image and have those files magically "appear" within the image. Feel free to stuff in the data, but there's more to a filesystem than just the data. You have to EXACTLY replicate the metadata operations that the file system handles for you, e.g. updating the FAT tables.
In other words, you'd have to reimplement the ENTIRE FAT filesystem handling code yourself, which is utterly ludicrous. Just mount the image, use normal file operations on that mounted file system, then unmount it again. Boom, done.
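For completeness, the mount-based route this answer recommends looks like this (a sketch; it needs root and a mount point, which is exactly what the asker wanted to avoid):
sudo mkdir -p /mnt/img
sudo mount -o loop my-image.fat /mnt/img
sudo cp ./abc ./def ./ghi /mnt/img/
sudo umount /mnt/img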

Compare two folders containing source files & hardlinks, remove orphaned files

I am looking for a way to compare two folders containing source files and hard links (let's use /media/store/download and /media/store/complete as an example) and then remove orphaned files that don't exist in both folders. These files may have been renamed and may be stored in subdirectories.
I'd like to set this up as a cron script to run regularly. I just can't figure out the logic of the script myself - could anyone be so kind as to help?
Many thanks
rsync can do what you want, using the --existing, --ignore-existing, and --delete options. You'll have to run it twice, once in each "direction" to clean orphans from both source and target directories.
rsync -avn --existing --ignore-existing --delete /media/store/download/ /media/store/complete
rsync -avn --existing --ignore-existing --delete /media/store/complete/ /media/store/download
--existing says don't copy orphan files
--ignore-existing says don't update existing files
--delete says delete orphans on target dir
The trailing slash on the source dir, and no trailing slash on the target dir, are mandatory for your task.
The n in -avn means don't actually do anything, and I always do a "dry run" with the -n option to make sure the command is going to do what I want, ESPECIALLY when using --delete. Once you're confident your command is correct, run it with just -av to actually do the work.
Perhaps rsync is of use?
Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
Note it has a --delete option
--delete delete extraneous files from dest dirs
which could help with your specific use case above.
You can also use the diff command to list all the files that differ between the two folders.
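For example (a minimal sketch; -r recurses into subdirectories and -q reports only which files differ):
diff -rq /media/store/download /media/store/complete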

tar a folder into multiple files over SSH

Here is the thing:
I have a server with 85 GB total disk space, and right now I have a folder of 50 GB containing over 60,000 files.
Now I want to download these files to my localhost, and in order to do that I need to tar the folder, but I can't tar the whole folder because of the disk space limitation.
So I'm looking for a way to archive the folder into two 25 GB tar files, like part1.tar and part2.tar, but when the first part is done it should pause and ask for something like the next part's name or permission, so I can transfer the first part to another server and then continue archiving part2. Or a way to tar half of the folder, like the first 30,000 files, and then tar the rest.
Any ideas? Thanks in advance
One of the earliest applications of rsync was to implement mirroring or backup for multiple Unix clients to a central Unix server using rsync/ssh and standard Unix accounts.
I use rsync to move compressed (and uncompressed) files between servers.
I think the command should be something like this
rsync -av host::src /dest
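The host::src syntax only works against a running rsync daemon; over plain SSH, a hedged sketch would be (user, host, and paths are hypothetical):
# pull the folder to localhost over ssh; --partial keeps partially
# transferred files so an interrupted run can resume
rsync -avz --partial user@server:/path/to/folder/ /local/dest/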
The rsync solution was good enough, but I found the solution to the main question:
tar -c -M --tape-length=30000000 --file=filename.tar foldername
After reaching ~29 GB you will need to change the "tape" (in my case, transferring the first part and removing it) and hit Enter to continue. Additionally, it is possible to give the next part a name:
Prepare volume #2 for `filename.tar' and hit return:
n filename2.tar
Because it is going to take time, I suggest using a screen session over SSH:
http://thelinuxnoob.com/linux/screen-in-ssh/
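If you'd rather avoid the interactive prompt entirely, one alternative (a sketch, assuming you can ssh straight to the other server; host and paths are hypothetical) is to stream the tar over SSH and split it on the remote side, so no large archive is ever stored locally:
# stream the archive over ssh and cut it into 25 GB pieces on the remote host
tar -cf - foldername | ssh user@otherhost 'split -b 25G - /dest/part.tar.'
# later, on the receiving side, reassemble and unpack
cat /dest/part.tar.* | tar -xf -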

How to recover deleted files in linux filesystem (a bit faster)?

If I launch the following command to recover lost file on linux:
grep -a -B 150 -A 600 "class SuperCoolClass" /dev/sda10 > /tmp/SuperCoolClass.repair
Do I really need the -a? We need to recover some erased files (sabotage) from sda10, and we have a bunch of them to recover; I believe removing the -a would make it faster.
I believe the files are on the disk, and they are text, not binary.
Thanks
The file you are working on is /dev/sda10, which grep will assume contains binary data. In order to treat it as text (which is what you are looking for) you need the -a; otherwise grep will just print "Binary file /dev/sda10 matches".
In addition, since the task is I/O-bound rather than CPU-bound, dropping -a would not be a big performance gain in any case.
In the future, it's quite easy to test something like this yourself (a consolidated sketch follows the list):
create a dummy 10 MB disk: dd if=/dev/zero of=testfs bs=1024 count=10000
create a filesystem: mkfs.ext4 testfs
mount it via loopback: mount -o loop ./testfs /mnt/test/
copy some stuff onto the dummy filesystem
unmount it: umount /mnt/test
run grep on the test file with different options
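Consolidated into one runnable sequence (a sketch; -F makes mkfs.ext4 skip the "not a block device" prompt, and the timings compare grep with and without -a):
# build and populate a 10 MB test filesystem
dd if=/dev/zero of=testfs bs=1024 count=10000
mkfs.ext4 -F testfs
sudo mkdir -p /mnt/test
sudo mount -o loop ./testfs /mnt/test
echo "class SuperCoolClass" | sudo tee /mnt/test/sample.txt
sudo umount /mnt/test
# compare timings with and without -a
time grep -c -a "class SuperCoolClass" testfs
time grep -c "class SuperCoolClass" testfs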
EDIT
It just occurred to me that maybe you are looking for the command /usr/bin/strings instead.
Something like:
extract all printable strings from the ruined disk: /usr/bin/strings -a /dev/sda10 > /tmp/recovery
grep the text-only dump many times for different strings: grep "whatever" /tmp/recovery > /tmp/recovery.whatever
To recover a text file (only a text file) you accidentally deleted / overwrote (provided you remember a phrase in that text file):
Ensure the safety of the files by unmounting the directory with
umount /home/johndoe.
Find which hard disk partition the folder is on, say sda3 (df /home/johndoe will show the device).
Switch to terminal as root.
Run
grep -a -A800 -B800 'search this phrase' /dev/sda3 | strings>recovery_log.txt
This will take a while. You can go through the file recovery_log.txt using any text editor, even while the command is running.
