scp not working on EC2 (AWS) - Linux

I can SSH into the EC2 instance:
ssh -i "my_key.pem" ec2-user@my-public-ip
However, scp doesn't work:
scp -r -i "my_key.pem" ./my_file ec2-user@my-public-ip:/home/ec2-user/my_file
Permission denied (publickey).
lost connection
I've also tried using the public instance DNS, but nothing changes.
Any idea why this is happening and how to solve it?

The only way for this to happen is if the private key my_key.pem is not found in the current directory. It is possible you ran ssh from a different directory than the one you ran scp from.
Try the following with the full path to your key:
scp -r -i /path/to/my_key.pem ./my_file ec2-user@my-public-ip:/home/ec2-user/my_file
If it fails, post the output with the -v option. It will tell you exactly where the problem is:
scp -v -r -i /path/to/my_key.pem ./my_file ec2-user@my-public-ip:/home/ec2-user/my_file
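Another common cause of the same "Permission denied (publickey)" error is key file permissions: OpenSSH refuses to use a private key that is readable by other users. A quick check:
ls -l /path/to/my_key.pem        # confirm the path actually exists
chmod 400 /path/to/my_key.pem    # ssh/scp reject keys with looser permissions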

I am a bit late, but this might be helpful to someone.
Do not use /home/ec2-user. Instead, use the file or folder name directly.
E.g. the following command will put my_file in the home folder (i.e. /home/ec2-user):
scp -r -i "my_key.pem" ./my_file ec2-user@my-public-ip:my_file
Or say you have a folder at /home/ec2-user/my_data.
Then use the following command to copy your file into that folder:
scp -r -i "my_key.pem" ./my_file ec2-user@my-public-ip:my_data

Stupidly late addendum:
To avoid specifying the private key every time, just add the following to your ~/.ssh/config file (create it if it isn't already there):
# testserver is a memorable alias; Hostname is your server IP;
# User is the user to connect as; IdentityFile is the path to the private key
Host testserver
    Hostname 12.34.56.67
    User ec2-user
    IdentityFile /path/to/key.pem
    PasswordAuthentication no
Then a simple ssh testserver should work from anywhere (and consequently your scp too).
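For instance, a minimal scp example reusing the alias above (ssh config supplies the user, IP, and key):
scp -r ./my_file testserver:my_file    # no -i or user@ip needed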
I use it to connect with Vim via scp using:
vim scp://testserver/relative/file/path
or
vim scp://testserver//absolute/file/path
and
vim scp://testserver/relative/dir/path/ (note the trailing slash)
to edit files and browse folders, respectively, directly from the local machine (thus using my precious .vimrc <3 configuration).
Hope this helps! :)

I was facing this issue today and found a solution (not elegant, but one which worked). This solution is good if you want to download something once and roll back all the settings afterwards.
Solution:
When I ran scp with the -v option, I noticed the key was being denied for some reason, so on the server I went to /etc/ssh/sshd_config and set PasswordAuthentication yes. Then I ran systemctl restart sshd.
After this procedure I went to my local machine and used:
scp -v -r myname@VPC:/home/{user}/filename.txt path/on/local/machine
entered the password when prompted, and the file transfer succeeded.
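Since the point of this approach is to roll the settings back afterwards, a minimal sketch of the round trip on the server (it assumes systemd and an existing uncommented PasswordAuthentication line in sshd_config; adjust to your file):
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd
# ...run the scp from the local machine, then revert:
sudo sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart sshd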
Hope this helps to someone :)

Related

copy directory from another computer on Linux

On a computer with an IP address like 10.11.12.123, I have a folder named document. I want to copy that folder to my local folder /home/my-pc/doc/ using the shell.
I tried like this:
scp -r smb:10.11.12.123/other-pc/document /home/my-pc/doc/
but it's not working.
You can use the command below to copy your files:
scp -r <source> <destination>
(-r: recursively copy entire directories)
e.g.:
scp -r user@10.11.12.123:/other-pc/document /home/my-pc/doc
To identify your current location, you can use the pwd command, e.g.:
kasun@kasunr:~/Downloads$ pwd
/home/kasun/Downloads
If you want to copy from B to A while logged into B:
scp /source username@a:/destination
If you want to copy from B to A while logged into A:
scp username@b:/source /destination
In addition to the comment, when you look at your host-to-host copy options on Linux today, rsync is by far the most robust solution around. It is brought to you by the SAMBA team[1] and continues to enjoy active development. Most distributions include the rsync package by default (if not, you should find an easily installable package for your distro, or you can download it from rsync.samba.org).
The basic use of rsync for host-to-host directory copy is:
$ rsync -uav srchost:/path/to/dir /path/to/dest
-uav recursively copies only new or changed files: -a preserves file and directory times and permissions (and implies recursion), -u skips files that are already newer at the destination, and -v provides verbose output. You will be prompted for the username/password on 10.11.12.123 unless you have set up ssh keys to allow public/private key authentication (see ssh-keygen for key generation).
If you notice, the syntax is basically the same as that for scp, with a slight difference in the options (e.g. scp -rv srchost:/path/to/dir /path/to/dest). rsync will use ssh for secure transport by default, so you will want to ensure sshd is running on your srchost (10.11.12.123 in your case). If you have name resolution working (or a simple entry in /etc/hosts for 10.11.12.123), you can use the hostname for the remote host instead of the remote IP. Regardless, you can always transfer the files you are interested in with:
$ rsync -uav 10.11.12.123:/other-pc/document /home/my-pc/doc/
Note: do NOT include a trailing / after document if you want to copy the directory itself. If you do include a trailing / after document (i.e. 10.11.12.123:/other-pc/document/), you are telling rsync to copy only the contents (i.e. the files and directories under document) into /home/my-pc/doc/, without creating a document directory there.
The reason rsync is far superior to other copy apps is that it provides options to truly synchronize filesystems and directory trees, both locally and between your local machine and a remote host. Meaning, in your case, if you have used rsync to transfer files to /home/my-pc/doc/ and then change or delete files on 10.11.12.123, you can simply call rsync again and have the changes/deletions reflected in /home/my-pc/doc/. (Look at the several flavors of the --delete option, detailed in rsync --help or in man 1 rsync.)
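For example, a sketch of the --delete behavior (the -n/--dry-run flag previews what would change without touching anything):
rsync -uavn --delete 10.11.12.123:/other-pc/document /home/my-pc/doc/   # dry run: preview changes and deletions
rsync -uav --delete 10.11.12.123:/other-pc/document /home/my-pc/doc/    # real run: mirror remote deletions locally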
For these, and many more reasons, it is well worth the time to familiarize yourself with rsync. It is an invaluable tool in any Linux user's hip pocket. Hopefully this will solve your problem and get you started.
Footnotes
[1] the same folks that "Opened Windows to a Wider World", allowing seamless connection between Windows/Linux hosts via the native Windows Server Message Block (SMB) protocol. samba.org
If the two directories (document and /home/my-pc/doc/) you mentioned are on the same 10.11.12.123 machine,
then:
cp -ai document /home/my-pc/doc/
else:
scp -r document/ root@10.11.12.123:/home/my-pc/doc/

scp + avoid copying if the same file name exists on the remote machine?

Is there any option to tell the scp command not to copy a file from the current machine when the file already exists on the remote machine?
For example
On my machine I have the file -
/etc/secret-pw.txt
On the remote machine I also have the file -
/etc/secret-pw.txt
So
scp /etc/secret-pw.txt $remote_machine:/etc
will overwrite secret-pw.txt, and scp will not ask any questions about overwriting the target file.
Is there any scp option to avoid the copy if the file exists on the target machine?
Update: I can't install rsync or any other program.
You should be using rsync instead of scp. It will give you what you need.
If you can't install rsync (as you mentioned in the comments), you need to run a check beforehand over ssh to see whether the file exists.
scp does not offer any such option, unfortunately.
But you can resort to standard tools, like this:
ssh $remote_machine -- cp --no-clobber /dev/stdin /etc/secret-pw.txt < /etc/secret-pw.txt
Note that with this trick you gain all the functionality of cp.
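If you would rather make the check explicit, an equivalent sketch with a plain existence test (same $remote_machine placeholder as above):
ssh $remote_machine '[ -e /etc/secret-pw.txt ]' || scp /etc/secret-pw.txt $remote_machine:/etc
The || runs scp only when the remote test fails, i.e. when the file is absent.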

ssh not working correctly with sudo

Good morning everyone! I have a bash script starting automatically when the system boots, via the .profile file in the user's home directory:
sudo menu.sh
The script starts just as expected; however, when calling things like ssh UN@ADDRESS inside the script, the known_hosts file gets placed in the /root/.ssh directory instead of the home directory of the user calling the script! I have tried modifying .profile to call 'sudo -E menu.sh' and 'sudo -H menu.sh', but both fail to have the known_hosts file created in the calling user's home directory. My /etc/sudoers is as follows:
# Declarations
Defaults env_keep += "HOME USER"
# User privilege specification
root ALL=(ALL) ALL
user ALL=NOPASSWD: ALL
Any help would be appreciated!
Thanks
Dave
UPDATE: What I did as a workaround is go through the script and add 'sudo -u $USER' before specific calls (since sudo is supposed to keep the $USER env var). This seems like a very bad way of resolving the problem. If sudo is supposed to keep the USER and HOME variables upon launching menu.sh, why would I need to explicitly call sudo once again as a specific user in order to retain that information (even though sudo is being told to keep it via the /etc/sudoers file)? No clue, but I wanted to update this post for anyone that comes across it until a better solution can be found.
Regarding OpenSSH, the default location for known_hosts is ~/.ssh/known_hosts. Ssh doesn't honor $HOME when expanding a "~" in a filename. It looks up the user's actual home directory and uses that. When you run ssh as root, it's going to interpret that pathname relative to root's home directory no matter what you've set HOME to.
You could try setting the ssh parameter UserKnownHostsFile to the name of the file you'd like to use:
ssh -o UserKnownHostsFile=$HOME/.ssh/known_hosts user@host...
However, you should test this. Ssh might complain about using a file that belongs to another user, and if it has to update the file then the file might end up being owned by root.
Really, you're best off running ssh as the user whose .ssh folder you want ssh to use. Running processes through sudo creates a risk that the user can find a way to do things you didn't intend for them to do. You should limit that risk by using the elevated privileges as little as possible.
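A sketch of that last suggestion inside the sudo-launched script (sudo sets the SUDO_USER environment variable to the invoking user's name):
sudo -u "$SUDO_USER" ssh UN@ADDRESS    # ssh runs as the original user, so their ~/.ssh/known_hosts is used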

scp copy directory to another server with private key auth

Is there something wrong with this scp command?
scp -C -i ./remoteServerKey.ppk -r /var/www/* root@192.168.0.15:/var/www
I use the same .ppk as in PuTTY and enter the same passphrase, but it asks me 3 times and then says connection denied. I thought I had used it before and it worked, but it isn't working at the moment.
If it is wrong, how should I do it?
Or you can also do (for a .pem file):
scp -r -i file.pem user@192.10.10.10:/home/backup /home/user/Desktop/
Convert the .ppk to an id_rsa using the PuTTYgen tool (http://mydailyfindingsit.blogspot.in/2015/08/create-keys-for-your-linux-machine.html), then:
scp -C -i ./id_rsa -r /var/www/* root@192.168.0.15:/var/www
It should work!
PuTTY doesn't use OpenSSH key files; there is a utility in the PuTTY suite to convert them.
Edit: it is called puttygen.
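For reference, a sketch of the conversion on Linux (puttygen ships in the putty-tools package on most distros):
puttygen remoteServerKey.ppk -O private-openssh -o id_rsa    # export an OpenSSH-format private key
chmod 600 id_rsa                                             # OpenSSH refuses keys readable by others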
The command looks quite fine. Could you run it with -v (verbose mode)? Then we can figure out what is going wrong during authentication.
Also, as mentioned in the other answer, the issue may be that you need to convert the keys (answered already here): How to convert SSH keypairs generated using PuttyGen(Windows) into key-pairs used by ssh-agent and KeyChain(Linux) OR http://winscp.net/eng/docs/ui_puttygen (depending on what you need).

How can I upload an entire folder, that contains other folders, using sftp on linux?

I have tried put -r directory/*, which only uploaded the files and not the folders, and gave me the error: Couldn't canonicalise.
Any help would be greatly appreciated.
For people actually wanting a direct answer to this question (instead of being told to use something other than sftp)...
put -r local/path/to/directoryName
The uploaded directory must already exist in the working directory on the server, so you might need to create it first:
mkdir directoryName
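Putting it together, a minimal sketch of the whole exchange (user, host, and directoryName are placeholders):
sftp user@host
sftp> mkdir directoryName
sftp> put -r local/path/to/directoryName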
Here you can find a detailed explanation of how to copy a directory using scp. In your case, it would be something like:
$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar
This will copy the directory "foo" from the local host to the remote host's directory "bar".
Here -r means recursively copy entire directories.
You can also use rcp with similar syntax. The only difference between them is that scp uses the secure shell and rcp uses the remote shell.
BTW, the "Couldn't canonicalise" error you mentioned appears when the sftp server is unable to access the file/directory mentioned in the command.
UPDATE: For users who want to use put specifically, please refer to Ben Thielker's answer here.
sftp> mkdir source
sftp> put -r source
Uploading source/ to /home/myself/source
Entering source/
source/file1
source/file2
If you have issues using sftp, you can use ncftp.
For CentOS:
yum install ncftp
To copy a whole directory recursively:
ncftpput -R -v -u username -P 21 ftp.server.dev /remote-path/ /localdirectory
Use scp instead. It uses SSH too and can easily handle recursion.
