Prevent overwriting of files when using Scp [closed] - linux

I was copying some files using scp and I don't want to overwrite files that are already present.
If I were using the cp command, I think this could be done with cp -n.
Is there a similar option for scp? I went through the scp documentation and there seems to be no such option.
Is rsync or sftp the way to go to solve this problem?
Additional info:
OS: Ubuntu 12.04

rsync seems to be the solution to your problem. Here's an example:
rsync -avz foo:src/bar /data/tmp
The -a option will preserve permissions, directory structure, ownership, and symlinks. You can also specify any of those options individually.
-v and -z mean verbose and compress, respectively. You don't really need them, although -z is nice if you are copying large files.
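Since the original question is about not overwriting existing files, a hedged variant of the same command that skips files already present at the destination would be:
rsync -avz --ignore-existing foo:src/bar /data/tmp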

I just found a simple hack. Mark the existing files as read-only.
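One way to apply that hack, assuming the destination is a remote directory reachable over ssh (user, host, and paths here are placeholders):
ssh user@remoteserver 'chmod a-w /dstfolder/*'
scp /sourcefolder/* user@remoteserver:/dstfolder/
scp should then fail with "Permission denied" for the write-protected files while still creating any files that are new.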

rsync -avz --ignore-existing /sourcefolder/* user@remoteserver:/dstfolder/
--ignore-existing will not overwrite files that already exist on the remote/destination server.

I've used rsync in the past for this, but found myself trying to grab files from a Windows box running CopSSH with no rsync :-( The following worked just fine for me, using file tests to eliminate the files that would be overwritten and generating multiple 'get' requests to an sftp instance.
( echo 'cd work/ftp/' ;
ssh <user>@<machine> 'cd work/ftp/ && ls -1 ITEM_SALE_SUMMARY_V.*.dat.xz' |
while read line; do [[ -f "$line" ]] || echo get "$line"; done
) | sftp <user>@<machine>
Just in case others need a non-rsync solution!

Just to supplement the other solutions:
For a single ASCII/binary file, you can do it with:
cat source_file | ssh host "test ! -f target_file && cat > target_file"

I did not test it, but maybe mounting via sshfs first and then using cp will do the trick.
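A minimal sketch of that idea, assuming sshfs is installed (host and paths are placeholders):
mkdir -p /tmp/remote                        # local mount point
sshfs user@remoteserver:/dstfolder /tmp/remote
cp -rn /sourcefolder/. /tmp/remote/         # -n skips files that already exist
fusermount -u /tmp/remote                   # unmount when done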

It will have to be rsync over ssh.
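A concrete sketch of that (host and paths are placeholders); rsync runs over ssh by default on modern versions, and --ignore-existing keeps it from touching files already present:
rsync -avz --ignore-existing -e ssh /sourcefolder/ user@remoteserver:/dstfolder/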

Related

How to Rename Files in Linux [closed]

I want to rename all files in a selected directory, using the rename or mv command, from:
_02_mp3_cbr_320.m4a?anghakamitoken=sc245ae5a454547.5
_02_mp3_fsgsfsdfsfdfdsfcbr_320.m4a?anghakamitoken=sc245.ae5a
to
1.m4a
2.m4a
If those files always have a scheme like this:
_02_mp3_ * _320.m4a?anghakamitoken= *
You can do it like that:
#!/bin/bash
COUNT=0
for f in ./"_02_mp3_"*"_320.m4a?anghakamitoken="*; do
mv "$f" "$((++COUNT)).m4a"
done
This will result in
1.m4a
2.m4a
Assuming the initial files are in the same directory as the bash script.
Try this with GNU Parallel. It basically uses GNU Parallel's job number ({#}) as the number for renaming:
parallel --dry-run -k mv {} {#}.m4a ::: *m4a*
Sample Output
mv _02_mp3_cbr_320.m4a\?anghakamitoken\=sc245ae5a454547.5 1.m4a
mv _02_mp3_fsgsfsdfsfdfdsfcbr_320.m4a\?anghakamitoken\=sc245.ae5a 2.m4a
If the commands look correct, remove the --dry-run part and run it again. The -k keeps the output in order. The {} refers to the current file.
Make a backup before using any commands you are unfamiliar with...
To rename any file in Linux using mv (move) command:
mv (cf. "man mv")
In this case, you need to enter the following lines on the command line:
$ mv '_02_mp3_cbr_320.m4a?anghakamitoken=sc245ae5a454547.5' 1.m4a
$ mv '_02_mp3_fsgsfsdfsfdfdsfcbr_320.m4a?anghakamitoken=sc245.ae5a' 2.m4a
It is important to consult the manual once you know which command to use, so that you understand how to use it.

Linux backup files command [closed]

I had a problem with my Ubuntu install. I was able to boot from liveCD and connect an external hard drive. I want to backup my files now.
I tried cp -r /home destination, but I get problem with spaces in filenames, symlinks, errors "Cannot create fifo: Operation not permitted" "Permission denied" "Invalid argument" and plenty more. What is the best way to do it? Will cp -a fix these issues or should I do something more clever?
I found out that rsync doesn't have problems with filenames, but it doesn't copy .so and .a files. It also runs extremely slowly compared to cp.
EDIT:
I followed the advice of John Bollinger and created an archive, because my external drive wasn't ext4-formatted and so could not preserve all file attributes.
From a liveCD, home refers to the liveCD's home, so one has to use:
tar -c -z -f /my/backup/disk/home.tar.gz -C / media/ubuntu/longDeviceName/home
Despite sudo, I still received some "Cannot open: Permission denied" and "socket ignored" errors while creating the tar, for several .png files in .cache/software-center/icons/blabla. I wonder whether that is normal.
If you do not want to reformat your backup disk with a filesystem that has enough capabilities to represent all of the attributes of your files (e.g. ext4) then preserving them across the backup requires putting them into some sort of container. The traditional container for this sort of thing is a [compressed] tarball. You might therefore try
tar -c -z -f /my/backup/disk/home.tar.gz -C / home
You would recover the contents of that tarball via
tar -x -z -f /my/backup/disk/home.tar.gz -C /
Either or both might need to be run with privilege, obtained by being root or by using sudo.
That will handle symlinks, executable files, and any filename just fine, but it may still have trouble if the data you are trying to back up include any special files, such as device nodes or FIFOs. In that event, you may simply need to remove such files first, and recreate them after restoring the other files. You can identify such files via find:
find /home -not -type f -not -type d -not -type l
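If you do need those special files back, one hedged approach (paths are placeholders) is to record the FIFOs before backing up and recreate them after restoring:
find /home -type p > /my/backup/disk/fifo-list.txt            # record named pipes
# ... back up and restore with tar as above, then:
while read -r fifo; do mkfifo "$fifo"; done < /my/backup/disk/fifo-list.txt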
The accepted answer does not back up / recover file permissions.
You should use the "p" flag both while backing up and while recovering.
You might also want to recover into a specific folder and then move things around manually, so you do not overwrite files you want to keep.
The "/" at the end of the command means back up the entire system:
sudo tar -cvpzf /backupfolder/backup.tar.gz --exclude=/mnt /
sudo mkdir /recover_v1.1
sudo tar -xvpzf backup.tar.gz -C /recover_v1.1
... // replacing whatever you need manually
Manually replace files you need to recover and keep those you want to keep.
-x extract
-p include permissions
-v verbose will show you the file names while working
-z compression
-f name the file
You might want to setup cron jobs to run backup automatically.
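For example, a hedged sketch of a nightly cron job (paths are placeholders; note that % must be escaped in crontab entries), added to root's crontab with sudo crontab -e:
0 2 * * * tar -cpzf /backupfolder/backup-$(date +\%F).tar.gz --exclude=/mnt /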

How to download a file into a directory using curl or wget? [closed]

I know I can use the following 2 commands to download a file:
curl -O example.com/file.zip
wget example.com/file.zip
But I want them to go into a specific directory. So I can do the following:
curl -o mydir/file.zip example.com/file.zip
wget -O mydir/file.zip example.com/file.zip
Is there a way to not have to specify the filename? Something like this:
curl -dir mydir example.com/file.zip
The following line will download the files to the directory you specify:
wget -P /home/test www.xyz.com
Here the files will be downloaded to the /home/test directory.
I know this is an old question, but it is possible to do what you ask with curl:
rm directory/somefile.zip
rmdir directory
mkdir directory
curl --http1.1 http://example.com/somefile.zip --output directory/somefile.zip
First off, if this were scripted, you would need to make sure the file, if it already exists, is deleted, then delete the directory, then curl the download; otherwise curl will fail with a "file and directory already exist" error.
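A scripted sketch of that idea (URL and directory names are just the placeholders from above); mkdir -p and rm -f avoid the "already exists" errors without the separate rmdir step:
rm -f directory/somefile.zip       # remove a previous download, if any
mkdir -p directory                 # create the directory only if it is missing
curl --http1.1 http://example.com/somefile.zip --output directory/somefile.zip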
The simplest way is to cd inside a subshell
(cd somedir; wget example.com/file.zip)
and you could make that a shell function (e.g. in your ~/.bashrc)
wgetinside() {
( cd "$1" || return; shift; wget "$@" )
}
then type wgetinside somedir http://example.com/file.zip
Short answer is no, as curl and wget automatically write to STDOUT. They do not have a built-in option to place the downloaded file into a directory.
-o/--output <file> Write output to <file> instead of stdout (Curl)
-O, --output-document=FILE write documents to FILE. (WGet)
But as it outputs to STDOUT natively, it does give you programmatic solutions such as the following:
i="YOURURL"; f=$(awk -F'/' '{print $NF}' <<< "$i"); curl "$i" > ~/"$f"
The first part defines your URL (example.com/file.zip) as the variable i. The f= part strips everything up to the last slash and leaves file.zip, and then you curl that URL ($i) into your home directory (~) under that file name.
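The same idea works for any target directory, not just your home directory; a hedged variant (URL and directory are placeholders) using basename instead of awk:
i="http://example.com/file.zip"; d="mydir"; curl -o "$d/$(basename "$i")" "$i"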

Linux how to copy but not overwrite? [closed]

I want to cp a directory, but I do not want to overwrite any existing files, even if they are older than the copied files. And I want to do it completely non-interactively, as this will be part of a cron bash script. Any ideas?
Taken from the man page:
-n, --no-clobber
do not overwrite an existing file (overrides a previous -i option)
Example:
cp -n myoldfile.txt mycopiedfile.txt
Consider using rsync.
rsync -a -v --ignore-existing src dst
As per the comments, rsync -a -v src dst is not correct because it will update existing files.
cp -n
is what you want. See the man page.
This will work on RedHat:
false | cp -i source destination 2>/dev/null
Updating and not overwriting is something different.
For people who find they don't have an -n option (like me on RedHat), you can use cp -u to only write the file if the source is newer than the existing one (or if there isn't an existing one).
[edit] As mentioned in the comments, this will overwrite older files, so isn't exactly what the OP wanted. Use ceving's answer for that.
Alpine Linux: the answer below only covers the single-file case. On Alpine, cp -n does not work (and neither does false | cp -i ...), so the solution that worked in my case is:
if [ ! -f env.js ]; then cp env.example.js env.js; fi
In the example above, if the env.js file does not exist, we copy env.example.js to env.js.
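If you need the same behaviour for a whole directory on a system without cp -n, a hedged sketch (paths are placeholders, top-level files only) is to test each file before copying:
for f in /sourcefolder/*; do
  [ -f "/dstfolder/$(basename "$f")" ] || cp "$f" /dstfolder/
done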
Some versions of cp do not have the --no-clobber option. In that case:
echo n | cp -vipr src/* dst
This works for me
yes n | cp -i src dest

Incremental backup Linux command [closed]

What is the command to do an incremental backup? Any sources or links would be much appreciated.
rsync is what you are looking for. Here is a nice tutorial.
Depending on what you need from your backups, rdiff-backup may be what you want. It's based on the same idea as rsync, but also keeps historical backups (in a space-efficient manner, by storing the differences).
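A minimal hedged example of rdiff-backup usage (paths are placeholders):
rdiff-backup /home /mnt/backup/home             # back up, keeping history
rdiff-backup -r 3D /mnt/backup/home /tmp/home   # restore the state from 3 days ago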
Dirvish makes incremental snapshots (that look like full directory trees, thanks to the magic of hardlinks), using rsync under the hood. It works well for me.
Here's the command I use for incremental backups of my virtual machine using rsync.
rsync -avh --delete --progress --link-dest="/Volumes/canteloup/vm_backups/`ls -1tr /Volumes/canteloup/vm_backups/ | tail -1`" "/Users/julian/Documents/Parallels" "/Volumes/canteloup/vm_backups/`date +%Y-%m-%d-%H-%M-%S`"
-avh means make an archive, with verbose output in a human readable form.
--delete will make sure each incremental backup does not contain files that have been deleted since the last backup. It means the backup taken on a particular date will be a snapshot of the directory as it was on that date.
--progress will display in the terminal the amount transferred, the percentage, and the time remaining for each file. Handy for virtual machine backups with 40Gb+ file sizes.
--link-dest specifies the directory to use for making hard links for the files that haven't changed. It uses ls -1tr | tail -1 to get the most recent backup directory. This seems to be fine if no previous backup exists, as on the first run.
The next arg is the directory to backup.
The last arg is the target directory. The name is a timestamp.
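A more readable sketch of the same command, with the paths from the example factored into variables:
BACKUP_ROOT="/Volumes/canteloup/vm_backups"
SRC="/Users/julian/Documents/Parallels"
LAST=$(ls -1tr "$BACKUP_ROOT" | tail -1)          # most recent backup, if any
rsync -avh --delete --progress \
    --link-dest="$BACKUP_ROOT/$LAST" \
    "$SRC" "$BACKUP_ROOT/$(date +%Y-%m-%d-%H-%M-%S)"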
Try the following bash script. Replace src and dest with the source and destination you want to use. If you are not backing up to local storage, remove --inplace (this option is useful for transferring large files with block-based changes or appended data, and on systems that are disk bound rather than network bound; you should also not use this option to update files that are being accessed by others).
#!/bin/bash
rsync -ab --dry-run --stats --human-readable --inplace --debug=NONE --log-file=rsync.log --backup-dir=rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --delete-after src dest | sed -e '1d;5,12d;14,17d'
echo -e "\nDo you want to continue?"
while true; do
case "$yn" in
[Yy]* ) rsync -ab --human-readable --inplace --info=PROGRESS2,BACKUP,DEL --debug=NONE --log-file=rsync.log --backup-dir=rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --delete-after src dest; break;;
[Nn]* ) exit;;
* ) read -p "Please answer yes or no: " yn;;
esac
done
