Linux backup files command [closed]

I had a problem with my Ubuntu install. I was able to boot from a liveCD and connect an external hard drive, and I want to back up my files now.
I tried cp -r /home destination, but I ran into problems with spaces in filenames and symlinks, plus errors such as "Cannot create fifo: Operation not permitted", "Permission denied", "Invalid argument", and plenty more. What is the best way to do this? Will cp -a fix these issues, or should I do something more clever?
I found out that rsync doesn't have problems with the filenames, but it doesn't copy .so and .a files. It also runs extremely slowly compared to cp.
EDIT:
I followed the advice of John Bollinger and created an archive, because my external drive wasn't ext4-formatted and so could not preserve all file attributes.
From a liveCD, /home refers to the liveCD's own home, so one has to use:
tar -c -z -f /my/backup/disk/home.tar.gz -C / media/ubuntu/longDeviceName/home
Despite using sudo, I still received some "Cannot open: Permission denied" and "socket ignored" errors while creating the tar, for several .png files in .cache/software-center/icons/blabla. I wonder whether that is normal.

If you do not want to reformat your backup disk with a filesystem that has enough capabilities to represent all of the attributes of your files (e.g. ext4), then preserving them across the backup requires putting them into some sort of container. The traditional container for this sort of thing is a [compressed] tarball. You might therefore try
tar -c -z -f /my/backup/disk/home.tar.gz -C / home
You would recover the contents of that tarball via
tar -x -z -f /my/backup/disk/home.tar.gz -C /
Either or both might need to be run with privilege, obtained by being root or by using sudo.
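To sanity-check the archive before restoring, you can also list its contents without extracting anything (a quick check, using the same backup path as above):
tar -t -z -f /my/backup/disk/home.tar.gz | head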
That will handle symlinks, executable files, and any filename just fine, but it may still have trouble if the data you are trying to back up include any special files, such as device nodes or FIFOs. In that event, you may simply need to remove such files first, and recreate them after restoring the other files. You can identify such files via find:
find /home -not -type f -not -type d -not -type l
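If you want a record of those special files so you can recreate them after restoring, a minimal sketch (this relies on GNU find's -printf; the FIFO path in the last step is hypothetical):
# %y prints the file type letter (e.g. p for FIFO, b/c for device nodes), %p the path
find /home -not -type f -not -type d -not -type l -printf '%y %p\n' > special-files.txt
# After restoring, recreate a FIFO by hand, e.g. for a saved line "p /home/user/some.fifo":
mkfifo /home/user/some.fifo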

The accepted answer does not back up / recover file permissions.
You should use the parameter p both while backing up and while recovering.
Also, you might want to recover to a specific folder and then move things around, so as not to overwrite files you want to keep.
The "/" at the end of the command stands for backing up the entire system:
sudo tar -cvpzf /backupfolder/backup.tar.gz --exclude=/mnt /
sudo mkdir /recover_v1.1
sudo tar -xvpzf backup.tar.gz -C /recover_v1.1
# ... manually replace whatever you need
Manually replace files you need to recover and keep those you want to keep.
-x extract
-p preserve permissions
-v verbose: shows file names while working
-z compress with gzip
-f specify the archive file name
You might want to set up cron jobs to run backups automatically.
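For example, a crontab entry (hypothetical schedule and paths) that runs the backup nightly at 02:00. The backup directory itself is excluded so the archive does not try to include itself, and % must be escaped as \% inside crontab:
0 2 * * * tar -cpzf /backupfolder/backup-$(date +\%F).tar.gz --exclude=/mnt --exclude=/backupfolder /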

Related

mv confusion: I deleted /usr/local/bin by following the socketxp guide, but only the first time around [closed]

I was following the SocketXP Agent Download & Setup guide, and in the very first step I am asked to run the following command:
curl -O https://portal.socketxp.com/download/linux/socketxp && chmod +wx socketxp && sudo mv socketxp /usr/local/bin
This was followed by some confusion, because the bin folder now appeared to be an executable. It turns out that rather than inserting socketxp into the /usr/local/bin folder, I had actually deleted the whole folder and replaced it with the socketxp file, now renamed to a file called bin.
However, after recreating the folder, I see that I can transfer test files into it without issues:
touch test
sudo mv test /usr/local/bin
So after seeing this, I re-ran the same socketxp installation command, and this time around it worked fine.
I'm at a loss as to what the original problem was, but I am very interested in not having this happen again. I suspect I am missing some basic mv knowledge, and I'd be grateful for any tips or pointers that explain what caused the issue.
I was doing this on a 64-bit Linux SIMATIC controller from Siemens, which runs an OS based on Debian.
I am asked to run the following command:
curl -O https://portal.socketxp.com/download/linux/socketxp && chmod +wx socketxp && sudo mv socketxp /usr/local/bin
This was followed by some confusion, because the bin folder now appeared to be an executable. It turns out that rather than inserting socketxp into the /usr/local/bin folder, I actually deleted the whole folder and replaced it with the socketxp file, now renamed to a file called bin.
Not plausible. When the destination path in a mv command is a directory, the source file(s) are moved into that directory (unless that is overridden by a command-line option). This is consistent with what you observed in subsequent experiments.
However, if only one source is given and the destination path either does not exist (but its parent directory does) or designates a regular file, then mv renames the source to the destination. We can only guess about what actually happened, but my first guess would be that /usr/local/bin did not initially exist. That might have arisen because of an earlier error, such as executing rm -rf /usr/local/bin when you really meant rm -rf /usr/local/bin/*, or perhaps the machine just came that way.
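You can reproduce both behaviors safely with throwaway paths under /tmp (a sketch, not tied to the actual installation):
mkdir -p /tmp/demo/bin && touch /tmp/demo/socketxp
mv /tmp/demo/socketxp /tmp/demo/bin   # destination is a directory: socketxp moves into it
rm -r /tmp/demo/bin && touch /tmp/demo/socketxp
mv /tmp/demo/socketxp /tmp/demo/bin   # destination does not exist: socketxp is renamed to "bin"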
On Linux you can also use mv to rename files. To prevent a mistake like yours, always add a / at the end of a path that you want to move something into:
sudo mv test /usr/local/bin/
This would have behaved as you expected and put the file test into the folder /usr/local/bin.
As @JohnBollinger stated, the behaviour to expect is that mv moves files into an existing folder unless you explicitly tell it that the last parameter is NOT a directory.
In your case it could be that /usr/local/bin simply didn't exist when you executed your command. In that case, the variant with the appended / would have emitted an error message. Or you accidentally specified the option -T (maybe intended for some other command, or a copy-paste mistake?).
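Both failure modes are easy to demonstrate with throwaway paths (a sketch; the file names are made up):
touch /tmp/test
mv /tmp/test /tmp/nonexistent/   # trailing slash, no such directory: mv reports "Not a directory"
mv -T /tmp/test /tmp/newname     # -T treats the destination as a file: test is renamed to newname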

mv command creates directories [closed]

I am facing a very weird issue.
Consider an example: I have the directories /ten and /one/two/three/four, with a few files in each.
When I execute the following command
mv /ten/ /one/two/three/four/five/six
it gives the output
mv: cannot move '/ten/' to '/one/two/three/four/five/six': No such file or directory, which looks fine, as it doesn't create directories.
But if I execute the following command
mv /one/two/three/four/ /one/two/five/six
the directories five/six get created inside /one/two, i.e. the mv command succeeds.
Can anyone please explain what is happening here? Why doesn't it give the error No such file or directory?
EDIT: Further observation:
The directories /one/two/three/four exist, and the directory /one/two/five also exists.
Executing mv /one/two/three/four/ /one/two/five/six will succeed; the directory six appears even though it was not present before.
This doesn't happen when I execute mv /one/two/three/four /one/two/five/six and the directory five doesn't exist. In that case it gives an error.
I thought mv would never create any directories.
Please let me know if I have missed something obvious.
Either you're executing another mv binary, another version of mv, or something is wrapping it, such as a function, a script, or perhaps an alias.
To find out whether you're really running the real mv, run
type mv
You should get
mv is /bin/mv
As suggested by Etan Reisner, you can also add -a to have more information:
type -a mv
UPDATE
The directories /one/two/three/four exist, and the directory /one/two/five also exists. Executing mv /one/two/three/four/ /one/two/five/six will succeed; the directory six appears even though it was not present before. This doesn't happen when I execute mv /one/two/three/four /one/two/five/six and the directory five doesn't exist. In that case it gives an error.
Since /one/two/five existed, mv simply moved your directory /one/two/three/four to /one/two/five/six. That means /one/two/five/six is now the new pathname of the directory that was previously /one/two/three/four.
Your difficulty in understanding this can be helped by a reference to the man page for mv and a few examples. From man 1 mv: "Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY." What is not apparent is what is SOURCE and what is DEST, and this is where your confusion arises. For example:
mv /ten/ /one/two/three/four/five/six
it gives the output mv: cannot move '/ten/' to '/one/two/three/four/five/six': No such file or directory, which looks fine as it doesn't create directories.
It doesn't. In your example, the SOURCE is /ten, and your DEST depends on whether /one/two/three/four/five exists and also on whether /one/two/three/four/five/six exists.
If /one/two/three/four/five exists, then mv /ten /one/two/three/four/five will cause /ten to be moved and become a new subdirectory of /one/two/three/four/five, e.g. /one/two/three/four/five/ten.
If /one/two/three/four/five exists (but not .../six), then mv /ten /one/two/three/four/five/six will cause /ten to be moved and renamed, becoming the new subdirectory /one/two/three/four/five/six.
If, however, /one/two/three/four/five does not exist, then mv will fail because you have not provided a valid DEST.
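A quick way to convince yourself, using disposable directories under /tmp instead of the root-level paths from the question:
mkdir -p /tmp/t/one/two/three/four /tmp/t/one/two/five
mv /tmp/t/one/two/three/four /tmp/t/one/two/five/six
ls /tmp/t/one/two/five                                # shows "six": four was renamed, nothing was created
mv /tmp/t/one/two/five/six /tmp/t/one/two/nope/seven  # fails: No such file or directory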

Unzip and move a downloaded file - Linux [closed]

Surprisingly, I could not find a straightforward answer to this question on here yet. I am still learning Linux. Say I have downloaded a zip file to my Downloads folder. Now I want to move it into a protected folder, like /opts or /var. Is there a good command to both sudo-move AND unzip the file to where I need it to go?
If you wish to perform two separate operations (move and extract) then you have no option but to use two commands.
However, if your end goal is to extract the zip file to a specific directory, you can leave the zip file where it is and specify an extraction directory using the -d option:
sudo unzip thefile.zip -d /opt/target_dir
From the manpage:
[-d exdir]
An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission to write to the directory). This option need not appear at the end of the command line; it is also accepted before the zipfile specification (with the normal options), immediately after the zipfile specification, or between the file(s) and the -x option. The option and directory may be concatenated without any white space between them, but note that this may cause normal shell behavior to be suppressed. In particular, ''-d ~'' (tilde) is expanded by Unix C shells into the name of the user's home directory, but ''-d~'' is treated as a literal subdirectory ''~'' of the current directory.
sudo mv <file_name> /opts && sudo unzip /opts/<file_name> -d /opts
Also, you may specify the extraction destination to unzip, so you can do this in a single command. This, however, will be a bit different from the command above, as the zip will be kept in its current location; only the unzipped files will be extracted to the given destination.
unzip -d [target directory] [filename].zip
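Putting the two together, assuming the archive sits in ~/Downloads and that you no longer need it after extraction (thefile.zip is a placeholder name):
sudo unzip ~/Downloads/thefile.zip -d /opt/target_dir && rm ~/Downloads/thefile.zip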

Prevent overwriting of files when using Scp [closed]

I was copying some files using scp and I don't want to overwrite the already present files.
If I were using the cp command, I think this could be done using cp -n.
Is there a similar option for scp? I went through the documentation of scp and there seems to be no such option.
Are rsync or sftp the way to go to solve this problem?
Additional info:
OS: Ubuntu 12.04
rsync seems to be the solution to your problem. Here's an example I got from here:
rsync -avz foo:src/bar /data/tmp
The -a option will preserve permissions, directory structure, ownership, and symlinks. You can also specify any of those options individually.
-v and -z mean verbose and compress, respectively. You don't really need them, although -z is nice if you are copying large files.
I just found a simple hack. Mark the existing files as read-only.
rsync -avz --ignore-existing /source_folder/* user@remoteserver:/dstfolder/
--ignore-existing will not overwrite the files on the remote or destination server.
I've used rsync in the past for this, but found myself trying to grab files from a Windows box with CopSSH and no rsync :-( The following worked just fine for me, using file tests to eliminate the files that would be overwritten, and generating multiple 'get' requests to an sftp instance.
# List the remote files and emit a 'get' only for those that do not already exist locally,
# then feed the resulting batch of commands to sftp:
( echo 'cd work/ftp/' ;
ssh <user>@<machine> 'cd work/ftp/ && ls -1 ITEM_SALE_SUMMARY_V.*.dat.xz' |
while read line; do [[ -f "$line" ]] || echo get "$line"; done
) | sftp <user>@<machine>
Just in case others need a non-rsync solution!
Just to supplement the other solutions:
For a single ASCII/binary file, you can do it with:
cat source_file | ssh host "test ! -f target_file && cat > target_file"
I did not test it, but maybe first mounting via sshfs and then using cp -n will do the trick.
rsync over ssh it will have to be.

Incremental backup Linux command [closed]

What is the command to do an incremental backup? Any sources or links would be much appreciated.
rsync is what you are looking for. Here is a nice tutorial.
Depending on what you need from your backups, rdiff-backup may be what you want. It's based on the same idea as rsync, but also keeps historical backups (in a space-efficient manner, by storing the differences).
Dirvish makes incremental snapshots (which look like full directory trees, thanks to the magic of hardlinks), using rsync under the hood. It works well for me.
Here's the command I use for incremental backups of my virtual machine using rsync.
rsync -avh --delete --progress --link-dest="/Volumes/canteloup/vm_backups/`ls -1tr /Volumes/canteloup/vm_backups/ | tail -1`" "/Users/julian/Documents/Parallels" "/Volumes/canteloup/vm_backups/`date +%Y-%m-%d-%H-%M-%S`"
-avh means make an archive, with verbose output in a human-readable form.
--delete will make sure each incremental backup does not contain files that have been deleted since the last backup. It means the backup taken on a particular date will be a snapshot of the directory as it was on that date.
--progress will display in the terminal the amount transferred, the percentage, and the time remaining for each file. Handy for virtual machine backups with 40 GB+ file sizes.
--link-dest specifies the directory to use for making hard links to the files that haven't changed. The command uses ls -1tr | tail -1 to pick the most recent previous backup. This seems to be fine if no previous backup exists, as on the first run.
The next arg is the directory to back up.
The last arg is the target directory. The name is a timestamp.
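The same pattern rewritten with placeholder variables, to make the moving parts clearer (SRC and DEST are assumptions; adjust them to your layout):
SRC="/Users/julian/Documents/Parallels"
DEST="/Volumes/canteloup/vm_backups"
LAST=$(ls -1tr "$DEST" | tail -1)   # most recent previous snapshot, empty on the first run
rsync -avh --delete --progress --link-dest="$DEST/$LAST" "$SRC" "$DEST/$(date +%Y-%m-%d-%H-%M-%S)"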
Try the following bash script. Please replace src and dest with the source and destination you want to use. If you are not backing up to local storage, remove --inplace (this option is useful for transfers of large files with block-based changes or appended data, and on systems that are disk-bound rather than network-bound; you should also not use it to update files that are being accessed by others).
#!/bin/bash
# First pass: --dry-run shows what would change; sed trims the noisier parts of the stats output.
rsync -ab --dry-run --stats --human-readable --inplace --debug=NONE --log-file=rsync.log --backup-dir=rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --delete-after src dest | sed -e '1d;5,12d;14,17d'
echo -e "\nDo you want to continue?"
# Loop until the user answers yes (run the real sync) or no (exit).
while true; do
    case $yn in
        [Yy]* ) rsync -ab --human-readable --inplace --info=PROGRESS2,BACKUP,DEL --debug=NONE --log-file=rsync.log --backup-dir=rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --delete-after src dest; break;;
        [Nn]* ) exit;;
        * ) read -p "Please answer yes or no: " yn;;
    esac
done
