rsync not copying my files - linux

I was trying to use rsync to only copy the .c files from a given directory. Therefore I tried this command:
rsync -nrv --include="*.c" --exclude="*" test/ temp/
The output:
sending incremental file list
test1.c
test2.c
sent 63 bytes received 18 bytes 162.00 bytes/sec
The files I wanted copied were listed, but when I check 'temp', it is empty.
I also tried to let rsync create the directory and the output is the following:
sending incremental file list
created directory temp
./
test1.c
test2.c
sent 66 bytes received 21 bytes 174.00 bytes/sec
But when I check for the directory 'temp', it doesn't exist.
What am I doing wrong?

You passed -n, which means "dry run"! Remove the -n.
You should at least check the man page for the options you use, to understand their meanings:
-n, --dry-run    perform a trial run with no changes made
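With the dry-run flag removed, the same command (same paths and filter rules as in the question) actually performs the copy:

```shell
# Same filters as the question, minus -n, so changes are applied:
# copy only *.c files from test/ into temp/, exclude everything else.
rsync -rv --include="*.c" --exclude="*" test/ temp/
```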


Restore bash script

I have a backup directory with a tar file in that directory (/home/username/userhome/backup/userbackup-${currentdate}.tar.gz).
I would like to create a script that:
creates the restore directory if it does not exist
displays the contents of the backup directory containing tar files of previous backups
asks the user to enter the name of the tar file to restore
uses the tar command to restore the file to the new restore directory and log file
So far my script has
#!/bin/bash
mkdir restore -p
ls /home/username/userhome/backup
echo "Enter the file name to be restored"
read $filename
tar xvf "/home/username/userhome/restore/$filename" &>/home/username/userhome/restore/restore.log
I am a complete newbie so any help will be greatly appreciated.
Continuing from the comment, one thing you always want to do when writing a script is to validate each step along the way. For instance, if you cannot create the new restore directory, you don't want to blindly continue and then attempt to extract a tar.gz file into a directory that doesn't exist.
Virtually all commands return 0 on success or a nonzero error code on failure. You can use this to your advantage to check whether a command succeeded and, if it didn't, simply exit. A quick way to do that is:
command || exit 1
In your case creating the restore directory that could be:
mkdir -p restore || exit 1
You can add additional error messages if you like, but most of the time the error generated by the failure will be sufficient to tell you what went wrong.
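If you do want a message of your own, a common pattern is to group the message and the exit together (the path here is a made-up example):

```shell
# Hypothetical directory, for illustration only.
restore=/tmp/demo/restore

# On failure, print a message to stderr and exit with a nonzero status.
mkdir -p "$restore" || { echo "error: cannot create $restore" >&2; exit 1; }
```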
Whenever you are operating on a fixed base directory using subdirectories of that base, it is best to create a variable for that base directory you can use in your script. For example:
#!/bin/bash
userdir="${1:-/home/username/userhome}"
budir="$userdir"/backup
restore="$userdir"/restore
Here userdir is the base directory and you have additional variables for the backup directory and restore directory. This makes it convenient to reference or operate on files in any of the directories. Note also how userdir can be set from the first command line argument or uses /home/username/userhome by default if no argument is given.
You can create restore and change to that directory, validating each step as follows:
mkdir -p "$restore" || exit 1
cd "$restore" || exit 1
For the menu, let select create it for you (if you have hundreds of .tar.gz files you may need to write a custom pager, but for a few dozen files select will be fine). You can generate the menu and restore the selected file with:
select choice in "$budir"/*.tar.gz; do
    [ -n "$choice" ] || continue    # invalid selection: re-prompt
    tar -xvf "$choice" &>"$restore/restore.log"
    break
done
Putting it all together, you would have:
#!/bin/bash
userdir="${1:-/home/username/userhome}"
budir="$userdir"/backup
restore="$userdir"/restore
mkdir -p "$restore" || exit 1
cd "$restore" || exit 1
select choice in "$budir"/*.tar.gz; do
    [ -n "$choice" ] || continue    # invalid selection: re-prompt
    tar -xvf "$choice" &>"$restore/restore.log"
    break
done
Example Use/Output
Say I have a couple of .tar.gz files in a directory, e.g.
$ tree /home/david/tmpd/backup
backup
├── v.tar.gz
└── x.tar.gz
Then to create a restore directory under the tmpd directory I can run the script as:
$ bash ~/scr/tmp/restore.sh /home/david/tmpd
1) /home/david/tmpd/backup/v.tar.gz
2) /home/david/tmpd/backup/x.tar.gz
#? 2
By choosing 2 above, the x.tar.gz file is restored under the restore directory, e.g.
$ ls -al restore/
total 4
drwxr-xr-x 3 david david 80 Oct 29 01:00 .
drwxr-xr-x 4 david david 80 Oct 29 01:00 ..
drwxr-xr-x 3 david david 60 Oct 29 01:00 home
-rw-r--r-- 1 david david 57 Oct 29 01:00 restore.log
So the restore directory was successfully created, the restore.log was created and the .tar.gz file was restored under the restore directory.
The contents of restore.log are
$ cat restore/restore.log
home/david/scr/utl/xscrd.sh
home/david/scr/utl/xzdate.sh
(which were the two sample files I added to the x.tar.gz file)
Look things over and let me know if you have further questions.

How to retrieve the size of a specific stream before doing sync

I'm working on a script and need to query the size of a specific stream before I do a sync to a local harddrive. The local unsynced folder is empty.
I know there is a p4 sizes and I have tried the following command
p4 -u TheUserName -p ExternalServerUrl -c MyWorkspace sizes -s //Path/To/Stream -H
But it seems to report my local storage (which is empty).
So the output I get is this:
//Path/To/Stream 0 files 0 bytes
-H 0 files 0 bytes
So how can I query the server for the size before actually doing the sync?
Thanks in advance for any feedback!
You need to use wildcards to refer to a set of multiple files (including a depot directory path):
p4 sizes -s -H //Path/To/Stream/...
Note that if the -H goes after the file path, it's treated as another file path, which is why your output included the line -H 0 files 0 bytes.
Another important thing to note is that the depot path corresponding to the stream name is not necessarily the same thing as the stream contents -- if the stream is a virtual stream, or if it has import paths, some or all of its files live under other depot paths. For your purposes you probably want to instead use a client-path syntax, which will correspond to everything in the workspace (i.e. everything mapped by the workspace's stream):
p4 sizes -s -H //MyClient/...
The other solution, which will also work if the local directory isn't empty, is to take advantage of the totalFileSize field in the tagged output of p4 sync:
p4 -Ztag -F "%totalFileSize% bytes (%totalFileCount% files)" sync -n -q

No files have been transferred after rsync

When I ran 'rsync' in the following way, no files were transferred?!
rsync -rv -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /home/user/.ssh/myrsd.pem" /cygdrive/c/user/local/temp/somefolder root@xx.xx.xx.xx:/
terminal output:
sending incremental file list
sent 118 bytes received 26 bytes 96.00 bytes/sec
total size is 1,560 speedup is 10.83
rsync only transfers deltas: if a file already exists in the destination folder and is identical to the file in the source, it won't be copied. Only new or updated files are transferred.
So if all the files are already there, rsync has nothing to do.
The culprit is the missing slash after the local folder - 'somefolder' in this case. It should be '/cygdrive/c/user/local/temp/somefolder/' instead of '/cygdrive/c/user/local/temp/somefolder'.
In the former case the output shows no files after "sending incremental file list", while the latter shows the files being transferred:
sending incremental file list
xx/xx/myfile
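The trailing-slash rule is easy to check locally (names made up for the demo): without the slash, rsync copies the directory itself into the destination; with the slash, it copies the directory's contents.

```shell
mkdir -p src out1 out2
touch src/myfile

rsync -r src out1     # no trailing slash: creates out1/src/myfile
rsync -r src/ out2    # trailing slash:    creates out2/myfile
```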

What is the -c option for when bunzipping?

I'm a bit confused with the -c flag using bunzip2.
The following line of code works well:
ls -l
-> -rw-r--r-- 1 root root 163 Oct 25 13:06 access_logs.tar.bz2
bunzip2 -c access_logs.tar.bz2 | tar -t
When I would attempt to use this code without the -c flag:
bunzip2 access_logs.tar.bz2 | tar -t
I get the message:
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
But when showing the list ls -l:
-rw-r--r-- 1 root root 10240 Oct 25 13:06 access_logs.tar
Documentation says:
The left side of the pipeline is bunzip2 -c access_logs.tbz, which
decompresses the file, but (because of the -c option) sends the output
to the screen. The output is redirected to tar -t.
According to the manual:
-c --stdout
Compress or decompress to standard output.
It seems that the decompression also works without the -c flag?
I'm confused about what you're confused about: you have already observed the answer to your question, and read it in the documentation.
Without -c, bunzip2 decompresses the file xx.bz2 and saves the result as the file xx (the original .bz2 file is removed). With -c, it does not create a file, but sends the result to stdout instead. If there is a pipe, |, then rather than being printed to the terminal (which would be a mess), the output becomes the input of the program on the right side of the pipe.
You can check the file type with:
file access_logs.tar.bz2
See the bunzip2 manual page for details.
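A minimal demonstration of the difference (file names made up): without -c the .bz2 file is replaced on disk by the decompressed file; with -c the archive stays put and the data goes to stdout.

```shell
echo "some data" > demo.txt
bzip2 demo.txt                       # creates demo.txt.bz2, removes demo.txt

bunzip2 -c demo.txt.bz2 > copy.txt   # -c: demo.txt.bz2 is kept, output goes to stdout
bunzip2 demo.txt.bz2                 # no -c: recreates demo.txt, removes demo.txt.bz2
```

Note that modern tar can also do the decompression itself, e.g. tar -tjf access_logs.tar.bz2, without a pipeline at all.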

Why does this file not get downloaded into the specified location?

I am trying to download the file at this link. I am using Ubuntu 12.04 and I used the command below to download it.
wget -p /home/ubuadmin/CUDA http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_32.run
Below is my command line input and output.
root@ubuserver3:/home/ubuadmin# wget -p /home/ubuadmin/CUDA http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_32.run
/home/ubuadmin/CUDA: Scheme missing.
--2014-03-11 08:06:28-- http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_32.run
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 23.62.239.35, 23.62.239.27
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|23.62.239.35|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 686412076 (655M) [application/octet-stream]
Saving to: `developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_32.run'
100%[======================================>] 686,412,076 663K/s in 16m 56s
2014-03-11 08:23:24 (660 KB/s) - `developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_32.run' saved [686412076/686412076]
FINISHED --2014-03-11 08:23:24--
Total wall clock time: 16m 56s
Downloaded: 1 files, 655M in 16m 56s (660 KB/s)
It says the download is completed but I can't find the file in that folder. I am accessing this server remotely using PuTTY, and using WinSCP to see the file structure. What has gone wrong? Why is the file missing even though it was downloaded?
To set the target folder, use -P (upper case) instead of -p. Lower-case -p means --page-requisites, which is also why the file ended up under a directory tree named after the host (developer.download.nvidia.com/...) inside the current directory, as your output shows.
From man wget:
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the directory
where all other files and subdirectories will be saved to, i.e. the
top of the retrieval tree. The default is . (the current directory).
cd /home/ubuadmin
mkdir test
cd test
wget http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_32.run
Afterwards, verify the download with:
stat cuda_5.5.22_linux_32.run
and check the output.
The "p" needs to be capitalized. Keep in mind Linux is case sensitive in most aspects.
