How to recover deleted files in a Linux filesystem (a bit faster)? - linux

If I launch the following command to recover a lost file on Linux:
grep -a -B 150 -A 600 "class SuperCoolClass" /dev/sda10 > /tmp/SuperCoolClass.repair
Do I really need the "-a"? We need to recover some erased files (sabotage) from "sda10", and since we have a bunch of them to recover, I believe removing the -a would make it faster.
I believe the files are still on the disk, and they are text, not binary.
thx

The file you are working on is /dev/sda10, which grep assumes to contain binary data. To treat it as text (which is what you are looking for) you need the -a; otherwise grep will just print "Binary file /dev/sda10 matches".
In addition, since the task is I/O-bound rather than CPU-bound, dropping it would not be a big performance gain in any case.
In the future it's quite easy to test something like this yourself:
create a dummy 10 MB disk image: dd if=/dev/zero of=testfs bs=1024 count=10000
create a filesystem on it: mkfs.ext4 testfs
mount it via loopback: mount -o loop ./testfs /mnt/test/
copy some stuff onto the dummy filesystem
unmount it: umount /mnt/test
run grep on the image file with different options (a consolidated sketch follows below)
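Put together as a single script, it would look roughly like this (a minimal sketch: the image name testfs, the mount point /mnt/test and the search string "ssh" are only placeholders, and the mount step needs root):
#!/bin/sh
# build a small throwaway ext4 image, put some text on it, then compare grep with and without -a
dd if=/dev/zero of=testfs bs=1024 count=10000      # ~10 MB image file
mkfs.ext4 -F testfs                                # -F: don't ask before formatting a regular file
mkdir -p /mnt/test
mount -o loop ./testfs /mnt/test
cp /etc/services /mnt/test/                        # any text file will do
umount /mnt/test
time grep -a -c "ssh" testfs                       # image treated as text
time grep -c "ssh" testfs                          # grep's default binary handling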
EDIT
It just occurred to me that maybe you are looking for the command /usr/bin/strings instead.
Something like:
extract all printable strings from ruined disk: /usr/bin/strings -a /dev/sda10 > /tmp/recovery
grep on the text only many times for different strings: grep "whatever" /tmp/recovery > /tmp/recovery.whatever

To recover a text file (only a text file) that you accidentally deleted or overwrote (provided you remember a phrase from that text file):
Keep the deleted data from being overwritten by unmounting the filesystem it lived on, e.g.
umount /home/johndoe.
Find which hard disk partition the folder was on, say sda3.
Switch to a terminal as root.
Run
grep -a -A800 -B800 'search this phrase' /dev/sda3 | strings > recovery_log.txt
This will take a while. You can go through recovery_log.txt using any text editor, even while the command is running. A consolidated version of these steps is sketched below.
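As one sequence (a sketch only: /home/johndoe, /dev/sda3 and the search phrase are the examples from the steps above, and everything from umount onwards must run as root):
df /home/johndoe                 # note the device the directory lives on, e.g. /dev/sda3
umount /home/johndoe             # stop further writes so the deleted data is not overwritten
grep -a -A800 -B800 'search this phrase' /dev/sda3 | strings > recovery_log.txt
less recovery_log.txt            # can be inspected even while grep is still running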

Related

While loop cp gives partial copies of files (linux)

I am trying to copy a list of files in varying directories, based on their sample names, using the following script. Although the files are copied, they are only partially copied: I have 64k lines in each file, but only exactly 40k lines end up in the copies.
while read sample
do
    echo copying ${sample}
    cp ${sample_dir}/*${sample}*/file.tsv ${output_dir}/${sample}.file.tsv
done < ${input_list}/sample_list.txt
Am I missing something obvious here? Does the cp command have limits on how many lines it can copy?
Cheers,
I don't think the cp command itself limits how much of a file it can copy (unless ulimit imposes a restriction), so check whether the problem is related to creating a new file of that size.
Check the limits across the system using the ulimit command:
ulimit -a
and verify that the file size is not capped; if it shows something like "file size (blocks, -f) unlimited" then there is no issue there.
Try the rsync command and see if it works for you:
rsync -avh source destination
Could you also verify a few things?
1) Verify that it's not a file read error: cat the file and redirect the output to another file.
cat *.tsv > /tmp/verify-size
Then verify the size of that file:
du -h /tmp/verify-size ---> this should show the full size (all 64k lines' worth)
2) Create a large dummy file bigger than the 40k that gets copied (or the exact size of the .tsv, 64k):
dd if=/dev/zero of=/tmp/verify-newfile-size bs=64000 count=1
du -h /tmp/verify-newfile-size ---> this should be 64k
If creating this new file succeeds (i.e. you can create a file of that size at all), then try the cp command again and verify the size.
-OR- try with the dd command:
dd if=/tmp/verify-newfile-size of=/tmp/verify-newfile-size2 bs=64000 count=1
3) Try an atomic operation such as mv:
mv /tmp/verify-newfile-size /tmp/verify-newfile-size3
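If the copies really are truncated, it may also help to check each one as it is made. A minimal sketch reusing the variable names from the question (sample_dir, output_dir and input_list are assumed to already be set), which copies each file and immediately compares line counts:
while read -r sample
do
    dest="${output_dir}/${sample}.file.tsv"
    cp ${sample_dir}/*${sample}*/file.tsv "$dest"
    # compare line counts of source and copy; a mismatch means the copy is incomplete
    src_lines=$(cat ${sample_dir}/*${sample}*/file.tsv | wc -l)
    dest_lines=$(wc -l < "$dest")
    if [ "$src_lines" -ne "$dest_lines" ]; then
        echo "WARNING: ${sample}: source has ${src_lines} lines, copy has ${dest_lines}" >&2
    fi
done < ${input_list}/sample_list.txt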

Create a filesystem like mkfs.ext3 in perl

I have an empty 1 GB image file in which I want to create a filesystem.
I've tried the command /sbin/mkfs.ext3 -F MyImage.img, and it worked, but does anyone have an idea how to do it in Perl code, not by calling a system function?

How to list recently deleted files from a directory?

I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility:
debugfs is an interactive file system debugger for ext2/ext3/ext4 file systems; it lets you examine (and change) the state of a file system directly on the block device.
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of your disk by running df / in the terminal).
Once in debug mode, you can use the command lsdel to list inodes corresponding with deleted files.
When files are removed on Linux they are only unlinked; their inodes (which record where on disk the data actually lives) are not immediately cleared.
To get the path of one of these deleted files you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the inode number with your particular inode (and the device with your own).
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
So a few things:
1) You may have zero success if your partition is ext2; it works best with ext4.
2) Find your device: df /
3) Run debugfs on the device from step #2, in my case: sudo debugfs /dev/mapper/q4os--desktop--vg-root
4) lsdel
5) q (to exit out of debugfs)
6) sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the inode number with one from step #4)
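If you need the file's contents back, debugfs can also dump an inode to a regular file. A sketch only, reusing the inode number and device from the steps above (the data may be unrecoverable if its blocks have already been reused):
sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null          # confirm the path the inode belonged to
sudo debugfs -R 'dump <528754> /tmp/recovered_file' /dev/sda2  # write the inode's contents out
less /tmp/recovered_file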
Thanks for your comments & answers guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for; it needs root access to the raw block device and is specific to ext file systems. Unfortunately, that won't really work for my use-case; I must be able to provide a solution for existing, "basic" kernels and directories.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
A simple find command piped into wc to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT));
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
We can then print a simple message if the number of files went down.
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi;
Return to step 2.
Unfortunately, this solution won't report anything if the same number of files have been created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of just the count, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely, as it would meet the initial requirements!
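For what it's worth, the list itself can be kept in a shell variable and compared with comm. A sketch only (it assumes bash, for process substitution, and filenames without embedded newlines):
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)
# ... later ...
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
# lines present in the old list but missing from the new one are the deleted files
comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") <(printf '%s\n' "$DEL_SCAN_NEW_LIST")
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST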
I'd also like to know if anyone has comments on either of the two approaches.
Try:
lsof -nP | grep -i deleted
history >> history.txt
Look for all rm statements.

Add files to VFAT image without mounting

I want to create a new VFAT image and add a few files to it.
# Create file of 1MB size:
dd if=/dev/zero of=my-image.fat count=1 bs=1M
# Format file as VFAT:
mkfs.vfat ./my-image.fat
Now I want to add the files ./abc, ./def and ./ghi to the image.
How do I do that without mount -o loop or fusermount?
I only want to write to a new, empty, pristine VFAT image.
I don't need deleting, appending, or any "complicated" operations.
I tried 7z -a because 7-Zip can read VFAT images, but it does not know how to write to them.
I want to do the exact same thing as part of an image build for an embedded system. It's really annoying that the entire build, which takes ~3hrs, could be completely unattended except for the final steps which required a password in order to mount a VFAT image. Fortunately, I found a set of tools which solve the problem.
You want mcopy provided by GNU mtools.
Mtools is a collection of utilities to access MS-DOS disks from GNU and Unix without mounting them.
It also supports disk images such as VFAT image files.
As an example, the following command will copy the file hello.txt from your current directory into the subdirectory subdir of the VFAT file system in ~/images/fat_file.img:
mcopy -i ~/images/fat_file.img hello.txt ::subdir/hello.txt
There are more useful inclusions in mtools, such as mdir and mtype which are great for inspecting your image file without having to mount it.
mdir -i ~/images/fat_file.img ::
mdir -i ~/images/fat_file.img ::subdir
mtype -i ~/images/fat_file.img ::subdir/hello.txt
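For the files from the original question, the whole thing could look roughly like this (a sketch; mcopy accepts several source files when the target is a directory on the image, and :: denotes the image's root directory):
dd if=/dev/zero of=my-image.fat count=1 bs=1M
mkfs.vfat ./my-image.fat
mcopy -i my-image.fat ./abc ./def ./ghi ::     # copy the three files into the image's root
mdir -i my-image.fat ::                        # verify they are there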
What you want is basically impossible. You can't just "stuff" some file data onto the end of a disk image and have those files magically "appear" within the image. Feel free to stuff in the data, but there's more to a filesystem than just the data. You have to EXACTLY replicate the metadata operations that the file system handles for you, e.g. updating the FAT tables.
In other words, you'd have to build the ENTIRE FAT filesystem handling code in your own code. Which is utterly ludicrous. Just mount the image, use normal file operations on that mounted file system, then dismount it again. Boom, done.

Recover files deleted with rsync -avz --delete

Is it possible to recover files deleted with rsync -avz --delete?
If it is, what are some suggested tools to do so?
I am assuming you ran rsync on some unix system.
If you don't have a backup of your file system,
then it's a long, tedious process to recover deleted files from a Unix file system.
High-level steps:
find the partition where your file resided
create an image of the entire partition: dd if=/dev/<partition> of=partition.img
(this assumes you have enough space to store it somewhere locally on a different partition, or you can copy it over to a different system: dd if=/dev/<partition> | ssh otherhost "dd of=partition.img")
open the .img file in a hex editor
(this assumes you know the contents of the files you've lost and can identify them when you see them)
note the byte offset and length of your file
use grep -b to find the byte offset of a phrase you remember, then carve the surrounding bytes out of the image (sketched below)
enjoy!
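A rough sketch of those last two steps (the phrase, the image name, and the 4 KB/64 KB window are only examples; grep -b reports the byte offset of each match, and dd then carves out a region around it; bash syntax):
# byte offset of the first occurrence of a phrase you remember from the lost file
offset=$(grep -aboF 'a phrase from the lost file' partition.img | head -n1 | cut -d: -f1)
# carve out ~64 KB starting a little before the match (adjust the window as needed)
start=$(( offset > 4096 ? offset - 4096 : 0 ))
dd if=partition.img of=/tmp/carved.bin bs=1 skip="$start" count=65536
strings /tmp/carved.bin | less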
I wasn't able to get extundelete to work, so I ended up using photorec + find/grep in order to recover my important files.
