I'm having a hard time copying files over to my Google Compute Engine. I am using an Ubuntu server on Google Compute Engine.
I'm doing this from my OS X terminal and I am already authorized using gcloud.
local:$ gcloud compute copy-files /Users/Bryan/Documents/Websites/gce/index.php example-instance:/var/www/html --zone us-central1-a
Warning: Permanently added '<IP>' (RSA) to the list of known hosts.
scp: /var/www/html/index.php: Permission denied
ERROR: (gcloud.compute.copy-files) [/usr/bin/scp] exited with return code [1].
A suggested fix was to insert root@ before the instance name:
local:$ gcloud compute copy-files /Users/Bryan/Documents/Websites/gce/index.php root@example-instance:/var/www/html --zone us-central1-a
The reason this doesn't work is that your username does not have permissions on the GCE VM instance and so cannot write to /var/www/html/.
Note that since this question is about Google Compute Engine VMs, you cannot SSH directly to a VM as root, nor can you copy files directly as root, for the same reason: gcloud compute scp uses scp which relies on ssh for authentication.
Possible solutions:
(also suggested by Faizan in the comments) This solution requires two steps every time:
use gcloud compute scp --recurse to transfer the files/directories to a location your user can write to, e.g., /tmp or /home/$USER
log in to the GCE VM via gcloud compute ssh or via the SSH button in the console, and copy with sudo to get the proper permissions (both steps are sketched together after this solution):
# note: sample command; adjust paths appropriately
sudo cp -r $HOME/html/* /var/www/html
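Putting the two steps together, a minimal sketch (instance name, zone, and paths are the ones from the question; adjust them to your setup):
# 1) copy to a user-writable staging directory on the VM
gcloud compute scp /Users/Bryan/Documents/Websites/gce/index.php example-instance:/tmp --zone us-central1-a
# 2) connect and move the file into place with root privileges
gcloud compute ssh example-instance --zone us-central1-a --command "sudo cp /tmp/index.php /var/www/html/"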
this solution is one step with some prior prep work:
one-time setup: give your username write access to /var/www/html directly; this can be done in several ways; here's one approach:
# make the HTML directory owned by the current user, recursively
sudo chown -R $USER /var/www/html
now you can run the copy in one step:
gcloud compute scp --recurse \
  --zone us-central1-a \
  /Users/Bryan/Documents/Websites/gce/index.php \
  example-instance:/var/www/html
I use a bash script to copy from my local machine to a writable directory on the remote GCE machine, then move the files into place over ssh using sudo.
SRC="/cygdrive/d/mysourcedir"
TEMP="~/incoming"
DEST="/var/my-disk1/my/target/dir"
You also need to set GCE_USER and GCE_INSTANCE
echo "=== Pushing data from $SRC to $DEST in two simple steps"
echo "=== 1) Copy to a writable temp directory in user home"
gcloud compute copy-files "$SRC"/*.* "${GCE_USER}@${GCE_INSTANCE}:$TEMP"
echo "=== 2) Move with 'sudo' to destination"
gcloud compute ssh ${GCE_USER}@${GCE_INSTANCE} --command "sudo mv $TEMP/*.* $DEST"
In my case I don't want to chown the target dir as this causes other problems with other scripts ...
I had the same problem and didn't get it to work using the methods suggested in the other answers. What finally worked was to explicitly pass my user when copying the file, as indicated in the official documentation. The important part being the "USER@" in
gcloud compute scp [[USER@]INSTANCE:]SRC [[[USER@]INSTANCE:]SRC …] [[USER@]INSTANCE:]DEST
In my case I could initially transfer files by typing:
gcloud compute scp instance_name:~/file_to_copy /local_dir
but after I got the permission denied I got it working by instead typing:
gcloud compute scp my_user_name@instance_name:~/file_to_copy /local_dir
where the username in my case was the one I was logged in to Google Cloud with.
UPDATE
gcloud compute copy-files is deprecated.
Use instead:
$ gcloud compute scp example-instance:~/REMOTE-DIR ~/LOCAL-DIR --zone us-central1-a
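For the upload direction the arguments are simply reversed; a sketch using the same placeholders (note that the remote directory still has to be writable by your user):
$ gcloud compute scp --recurse ~/LOCAL-DIR example-instance:~/REMOTE-DIR --zone us-central1-a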
More info:
https://cloud.google.com/sdk/gcloud/reference/compute/scp
The updated solution for this exact issue (2020)
For the sake of exposition, we have to break the issue into two parts. The "copy-files" command is officially deprecated and we are to use "scp" instead; however, both the old and the new command can only write to directories that the connecting user has permission to write to.
Since we do have access to the /tmp folder, this means we can easily move our distribution files with the preferred "scp" command, as a staging step.
More importantly we also have access to execute scripts, or commands remotely via SSH on the instance which means the limited access is no longer an issue.
Example Time
The first part is to copy the dist folder, and all its contents, recursively to the /tmp folder, to which gcloud does give access:
gcloud compute scp --recurse dist user_name@instance:/tmp
The second part leverages the fact that we can run commands remotely via ssh:
gcloud compute ssh user_name@instance --command "sudo bash golive"
(or any other command you may need to execute)
More importantly, this also means that we can copy our distribution files to their final destination using sudo and the "cp" command:
gcloud compute ssh user_name@instance --command "sudo cp -rlf /tmp/dist/* /var/www/html/"
This completely eliminates the need to set the permissions first through the ssh terminal.
This is for copying files from the remote machine to your machine. Make sure you have ssh set up, because this uses your default ssh keys.
This worked for me:
gcloud compute scp 'username'@'instance_name':~/source_dir /destination_dir --recurse
This is the generic syntax, so if you want to copy files from your machine to the remote machine, you can use the same command with the source and destination swapped (see the sketch below).
--recurse : required to copy directories with other files inside
Syntax: gcloud compute scp 'SOURCE' 'DESTINATION'
NOTE: run it without root
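A sketch of the upload direction under the same syntax (the directory names are placeholders; the remote path must be writable by the connecting user):
gcloud compute scp --recurse /local/source_dir username@instance_name:~/destination_dir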
Related
I run a mixed Windows and Linux network with various desktops, notebooks and Raspberry Pis. I am trying to establish an off-site backup between a local Raspberry Pi and a remote Raspberry Pi. Both run DietPi/Raspbian and have an external NTFS HDD to store the backup data. As the data to be backed up is around 800 GB, I already mirrored it onto the remote external HDD initially, so that only new files have to be sent to the remote drive via rsync.
I have now tried various combinations of options, including --ignore-existing, --size-only, -u and -c, and of course combinations of other options like -avz etc.
The problem is: none of the above really changes anything; the system tries to upload all the files (although they already exist remotely), or at least a good number of them.
Could you give me a hint how to solve this?
I do this exact thing. Here is my solution to this task.
rsync -re "ssh -p 1234" -K -L --copy-links --append --size-only --delete pi@remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/
The options I am using are:
-r - recursive
-e - specifies the remote shell used to transmit the data (ssh here); note that my ssh command uses a non-standard port 1234, specified by -p inside the -e string
-K - keep directory links
-L - copy links
--copy-links - a duplicate flag it would seem...
--append - this will append data onto smaller files in case of a partial copy
--size-only - this skips files that match in size
--delete - CAREFUL - this will delete local files that are not present on the remote device..
This solution will run on a schedule and will "sync" the files in the target directories with the files from the source directory. To test it out, you can always run the command with --dry-run, which will not make any changes at all and only show you what would be transferred and/or deleted...
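For example, a sketch of the same command as above with the dry run enabled:
rsync --dry-run -re "ssh -p 1234" -K -L --copy-links --append --size-only --delete pi@remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/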
All of this info and additional details can be found in the rsync man page (man rsync).
NOTE: I use ssh keys to allow connection/transfer without having to respond to a password prompt between these devices.
I am trying to transfer files to my Google Cloud hosted Linux (Debian) instance via secure copy (scp). I did exactly what the documentation says about connecting from a local machine to the instance: https://cloud.google.com/compute/docs/instances/connecting-to-instance.
Created an SSH key with ssh-keygen
Added the public key to my instance
I can log in successfully with:
ssh -i ~/.ssh/my-keygen [USERNAME]@[IP]
But when I want to copy files to the instance I get a message "permission denied".
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/var/www/html/
It looks like the user I log in with has no permission to write files, so I already tried to change the file permissions of /var/www/, but this still gives the permission denied message.
I also tried to add the user to the root group, but this still gives the same problem.
usermod -G root myuser
The command line should be
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/var/www/html/
Assuming your files are in the local /path/to/directory/ and the /var/www/html/ is on the remote server.
The permissions do not allow writing to /var/www/html/. Writing to /tmp/ should work; then you can copy the files with sudo to the desired destination with root privileges.
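A minimal sketch of that two-step approach, reusing the key and placeholders from the question ("directory" stands for whatever the last path component actually is):
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/tmp/
ssh -i ~/.ssh/my-keygen [USERNAME]@[IP] "sudo cp -r /tmp/directory /var/www/html/"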
If SSH isn't working, install the gcloud CLI and run the following locally: gcloud compute scp --recurse /path/to/directory [INSTANCE_NAME]:~ --tunnel-through-iap. This will dump the directory into your /home/[USERNAME]/ folder. Then log in to the console and use sudo to move the directory to /var/www/html/.
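Put together, that flow might look roughly like this (a sketch; [INSTANCE_NAME] and the directory name are placeholders):
# copy over the IAP tunnel into the user's home directory
gcloud compute scp --recurse /path/to/directory [INSTANCE_NAME]:~ --tunnel-through-iap
# then move it into place with root privileges
gcloud compute ssh [INSTANCE_NAME] --tunnel-through-iap --command "sudo mv ~/directory /var/www/html/"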
For documentation, see https://cloud.google.com/sdk/gcloud/reference/compute/scp.
I want to upload the content of one directory to my Amazon EC2 with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync defaults to ssh when one of the locations contains a host, so you can get rid of the -e and -s options (keep -t if you want to preserve modification times).
Your command could be written as rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/ - notice the trailing /'s - provided that you have ssh configured to read the private key from your home directory.
On Ubuntu you can add the key to the SSH agent by running
ssh-add [key-file]
and this will save you having to specify the key file every time you ssh into the AWS machine.
The errors seem to say that on the local machine you don't have a source directory and the destination doesn't exist.
I completed this task with FileZilla instead; it's easier to use.
You are in your home directory (~); if you cd ../ to root, you will be able to run the command.
My script is coded in a way that doesn't allow connecting to the server directly as root. It basically copies files from a server to my computer, and it works, but I don't have access to many files because only root can access them. How can I connect to the server as a user and then copy its files by switching to root?
Code I want to change:
sshpass -p "password" scp -q -r username@74.11.11.11:some_directory copy_it/here/
In other words, I want to be able to remotely copy files which are only accessible to root on a remote server, but don't wish to access the remote server via ssh/scp directly as root.
Is it possible through only ssh and not sshpass?
If I understand your question correctly, you want to be able to remotely copy files which are only accessible to root on the remote machine, but you don't wish to (or can't) access the remote machine via ssh/scp directly as root. And a separate question is whether it could be done without sshpass.
(Please understand that the solutions I suggest below have various security implications and you should weigh up the benefits versus potential consequences before deploying them. I can't know your specific usage scenario to tell you if these are a good idea or not.)
When you ssh/scp as a user, you don't have access to the files which are only accessible to root, so you can't copy all of them. So you need to instead "switch to root" once connected in order to copy the files.
"Switching to root" for a command is accomplished by prefixing it with sudo, so the approach would be to remotely execute commands which copy the files via sudo to /tmp on the remote machine, change their owner to the connected user, and then remotely copy them from /tmp:
ssh username@74.11.11.11 "sudo cp -R some_directory /tmp"
ssh username@74.11.11.11 "sudo chown -R username:username /tmp/some_directory"
scp -q -r username@74.11.11.11:/tmp/some_directory copy_it/here/
ssh username@74.11.11.11 "rm -r /tmp/some_directory"
However, sudo prompts for the user's password, so you'll get a "sudo: no tty present and no askpass program specified" error if you try this. So you need to edit /etc/sudoers on the remote machine to authorize the user to use sudo for the needed commands without a password. Add these lines:
username ALL=NOPASSWD: /bin/cp
username ALL=NOPASSWD: /bin/chown
(Or, if you're cool with the user being able to execute any command via sudo without being prompted for password, you could instead use:)
username ALL=NOPASSWD: ALL
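On most systems these lines should be added with visudo rather than by editing /etc/sudoers directly, since visudo validates the syntax before saving (a broken sudoers file can lock you out of sudo):
# opens /etc/sudoers in an editor and checks it on save
sudo visudo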
Now the above commands will work and you'll be able to copy your files.
As for avoiding sshpass, you could instead use a public/private key pair, in which the private key on the local machine is checked against the public key stored on the remote machine to authenticate the user, rather than a password.
To set this up, on your local machine, type ssh-keygen. Accept the default file (/home/username/.ssh/id_rsa). Use an empty passphrase. Then append the file /home/username/.ssh/id_rsa.pub on the local machine to /home/username/.ssh/authorized_keys on the remote machine:
cat /home/username/.ssh/id_rsa.pub | ssh username@74.11.11.11 \
"mkdir -m 0700 -p .ssh && cat - >> .ssh/authorized_keys && \
chmod 0600 .ssh/authorized_keys"
Once you've done this, you'll be able to use ssh or scp from the local machine without password authorization.
I connected to an Amazon Linux instance over ssh using a private key. I am trying to copy an entire folder from that instance to my local Linux machine.
Can anyone tell me the correct scp command to do this?
Or do I need something more than scp?
Both machines are Ubuntu 10.04 LTS
another way to do it is
scp -i "insert key file here" -r "insert ec2 instance here" "your local directory"
One mistake I made was scp -ir. The key has to be after the -i, and the -r after that.
so
scp -i amazon.pem -r ec2-user@ec2-##-##-##:/source/dir /destination/dir
Call scp from client machine with recursive option:
scp -r user@remote:src_directory dst_directory
scp -i {key path} -r ec2-user@54.159.147.19:{remote path} {local path}
For EC2 ubuntu
go to your .pem file directory
scp -i "yourkey.pem" -r ec2user@DNS_name:/home/ubuntu/foldername ~/Desktop/localfolder
You could even use rsync.
rsync -aPSHiv remote:directory .
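For an EC2 instance that only accepts key-based logins, you can point rsync at the key via its -e option; a sketch with a placeholder key path and host:
rsync -aPSHiv -e "ssh -i /path/to/key.pem" ubuntu@ec2-host:directory .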
This is how I copied a file from Amazon EC2 to a local Windows PC:
pscp -i "your-key-pair.pem" username@ec2-ip-compute.amazonaws.com:/home/username/file.txt C:\Documents\
For Linux to copy a directory:
scp -i "your-key-pair.pem" -r username@ec2-ip-compute.amazonaws.com:/home/username/dirtocopy /var/www/
To connect to Amazon, key pair authentication is required.
Note:
The username is most probably ubuntu.
I use sshfs to mount the remote directory on the local machine and then do whatever I want with the files. Here is a small guide; the commands may differ on your system.
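A rough sketch of that approach (package name, key path, host and mount point are assumptions; adjust them to your system):
# install sshfs and mount the remote home directory locally
sudo apt-get install sshfs
mkdir -p ~/ec2-mount
sshfs -o IdentityFile=/path/to/key.pem ubuntu@ec2-host:/home/ubuntu ~/ec2-mount
# copy files with normal tools, then unmount when done
fusermount -u ~/ec2-mount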
This is also important and related to the above answer.
Copying all files in a local directory to EC2. This is a Unix answer.
Copy the entire local folder to a folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles ubuntu@ec2.amazonaws.com:/home/dir
Copy only the entire contents of local folder to folder in EC2:
scp -i "key-pair.pem" -r /home/Projects/myfiles/* ubuntu@ec2.amazonaws.com:/home/dir
I do not like to use scp for a large number of files, as it does a 'transaction' for each file. The following is much better:
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -c remote_dir' | tar -x
You can add a z flag to tar to compress on server and uncompress on client.
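With compression enabled on both ends, the pipeline above becomes (same placeholder names):
cd local_dir; ssh user@server 'cd remote_dir_parent; tar -cz remote_dir' | tar -xz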
One way I found on YouTube is to connect a local folder with a shared folder on the EC2 instance. Please view the video for the full instructions. The sharing is instantaneous.