Restore large-sized directories using a Linux command - linux

I want to restore the files inside a directory that was deleted, and I want to use Linux, so I am using "cp -ra".
Large-sized directories (less than 10 MB) are getting restored with "cp -ra", but the files inside the directory are not. Please help: what is the Unix command for this, if anyone has any idea?

cp can't restore deleted files. Whether restoration of deleted files is possible at all depends on the file system the files were stored on; for ext3/ext4, for example, there are dedicated undelete tools such as extundelete.

Related

Copying multiple files knowing their directories

I have the paths of multiple files, and I need to copy them into a specific folder using the terminal; how do I do that? I also have access to the GUI of the system, as all of this is being done in a virtual machine over ssh.
According to its manpage, cp is capable of copying multiple source files to one output directory.
The syntax is as follows:
cp /dir1/file1 /dir1/file2 /dir2/file1_2 /outputdir/
Using this command, you can copy files from multiple directories (/dir1/ and /dir2/ in this example) to one output directory (/outputdir/).
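GNU cp also accepts the target directory up front via -t (--target-directory), which can be handy when the file list is generated by another command; a minimal equivalent of the example above:
cp -t /outputdir/ /dir1/file1 /dir1/file2 /dir2/file1_2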

How to delete all files in one particular directory

In Linux, how can I delete all files in a particular directory? For example, /home/xd/karthik is my path; I want to delete all files in that directory if the disk usage exceeds 90%. How can I write a script for that?
rm /path/to/directory/*
Add -r (rm -r /path/to/directory/*) to also remove the file hierarchy rooted in each file argument.
You don't need a script for the deletion itself, just a basic shell command.
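That said, a minimal sketch of the usage-triggered script from the question could look like this (assuming GNU df for the --output=pcent option; the path is the one given in the question):
#!/bin/sh
# Delete everything under DIR once the filesystem holding it is over 90% full.
DIR=/home/xd/karthik
usage=$(df --output=pcent "$DIR" | tail -1 | tr -dc '0-9')   # e.g. "92"
if [ "$usage" -gt 90 ]; then
    rm -r "$DIR"/*
fi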

How does the 'mv' command work?

I used the command mv to move files from directory /a/b to directory /v/c. I wanted the whole 'b' directory to be moved to the path /v/c.
While running the command mv /a/b /v/c, I interrupted it in the middle; the source had a large amount of data. Later I deleted directory 'c', since I thought it had partial files.
Now my question is: will directory 'b' contain all the original files, along with the files that were moved to /v/c? Or did I lose files by deleting directory 'c'?
mv across filesystems will:
create the destination directory
for each file: copy it, then remove the original
remove the origin directory
Thus, if you interrupt it, some of the files will have been moved but not all. A mv of a directory within the same filesystem is atomic, as it just re-links the directory's inode to a new location.
At one time, mv could only do the latter.
I believe it depends on whether the source and destination directories were on the same file system. If they were, a "move" just changes the path information for each file. If they were on different file systems, the move copies one file at a time and subsequently deletes it from the source.
So, in your scenario, if the source and destination were on separate file systems, then yes, you lost files by interrupting mv and then deleting 'c'.
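As a rough sketch (not mv's actual implementation), the cross-filesystem case behaves much like the two commands below, except that mv interleaves the copy-and-remove per file rather than doing two bulk passes:
cp -a /a/b /v/c    # copy the whole tree, preserving attributes
rm -rf /a/b        # then remove the source
which is why an interrupted run can leave some files only at the destination.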

Changing file modification time for tar archive members

I have a tar archive on an NTFS drive on a Windows machine; it contains a folder whose files reside on a drive on my Linux machine. I try to update the archive from a bash shell script on my Linux machine with the -u (--update) tar option, so that only new versions of archive members are appended to the archive. However, due to the "time skew" between file times on the two filesystems, tar appends ALL the files in the folder to the archive, even if the folder does not contain any new versions of files at all.
So the problem is: how do I add to an archive on machine B only new versions of files from a folder on machine A, when there is time skew between the machines?
Is there a way to solve this so that the mtimes of individual files in the archive are preserved or changed only insignificantly (e.g., adjusted 10 minutes ahead to negate the time skew)? This could probably be accomplished by calling tar once per file to append it, but is there a more optimal solution?
Maybe there is a way to change the mtime individually for each file as it is added to the archive? The --after-date option, which appends only files modified after a certain date, is apparently not quite a suitable filter for this task.
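A hedged sketch of one workaround, assuming GNU coreutils (stat -c %Y and touch -d '@...' are GNU-specific): rewrite the source files' mtimes by the skew amount before running tar -u, so that unchanged files no longer compare as newer than their archived copies. The sign of the offset depends on which machine's clock is ahead; 600 seconds matches the 10-minute figure above:
# Shift every regular file's mtime in the folder by 10 minutes (GNU stat/touch).
find /path/to/folder -type f | while IFS= read -r f; do
    m=$(stat -c %Y "$f")            # current mtime, in epoch seconds
    touch -d "@$((m - 600))" "$f"   # move it 600 s back; use + to go forward
done
This mutates the source timestamps rather than the archive members' mtimes, so it is only a sketch of the idea; GNU tar's --mtime option can stamp added files with a fixed date, but not with a per-file offset.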

rsync: copy files if the local file doesn't exist; don't check file size, time, checksum, etc.

I am using rsync to back up a million images from my Linux server to my computer (Windows 7, using Cygwin).
The command I am using now is:
rsync -rt --quiet --rsh='ssh -p2200' root@X.X.X.X:/home/XXX/public_html/XXX /cygdrive/images
Whenever the process is interrupted and I start it again, it takes a long time to start the copying process.
I think it is checking each file for updates.
The images on my server won't change once they are created.
So, is there a faster way to run the command, so that it copies files only if the local file doesn't exist, without checking file size, time, checksum, etc.?
Please suggest.
Thank you.
Did you try this flag? It might help, though it might still take some time to resume the transfer:
--ignore-existing
This tells rsync to skip updating files that already exist on the destination (this does not ignore existing directories, or nothing would get done). See also --existing.
This option is a transfer rule, not an exclude, so it doesn't affect the data that goes into the file-lists, and thus it doesn't affect deletions. It just limits the files that the receiver requests to be transferred.
This option can be useful for those doing backups using the --link-dest option when they need to continue a backup run that got interrupted. Since a --link-dest run is copied into a new directory hierarchy (when it is used properly), using --ignore-existing will ensure that the already-handled files don't get tweaked (which avoids a change in permissions on the hard-linked files). This does mean that this option is only looking at the existing files in the destination hierarchy itself.
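Applied to the command from the question (host and paths kept as the placeholders used there), that would be:
rsync -rt --quiet --ignore-existing --rsh='ssh -p2200' root@X.X.X.X:/home/XXX/public_html/XXX /cygdrive/images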
