How to properly upload a local file to a server using MobaXterm? - linux

I'm trying to upload a file from my local desktop to a server and I'm using this command:
scp myFile.txt cooluser@192.168.10.102:/opt/nicescada/web
following the structure: scp filename user@ip:/remotePath.
But I get "Permission denied". I tried using sudo, but I get the same message. I am able to download from the server to my local machine, so I assumed I have all the permissions I need.
What can be wrong in that line of code?

In case /desired/path on your destination machine is writable only by root, and you have an account on the destination machine with sudo privileges (super-user privileges obtained by prefixing a command with sudo), you could also do it the following way:
Option 1 based on scp:
1. Copy the file to a location on your destination machine where you have write access, like /tmp:
scp file user@destinationMachine:/tmp
2. Log in to your destination machine with:
ssh user@destinationMachine
3. Move the file to your /desired/path with:
sudo mv /tmp/file /desired/path
In case you have a passwordless sudo setup, you could also combine steps 2 and 3 into:
ssh user@destination sudo mv /tmp/file /desired/path
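Putting the three steps together with the host and path from the original question gives roughly the following (a sketch; it assumes cooluser has sudo rights on 192.168.10.102):
scp myFile.txt cooluser@192.168.10.102:/tmp
# -t allocates a terminal so sudo can prompt for a password
ssh -t cooluser@192.168.10.102 "sudo mv /tmp/myFile.txt /opt/nicescada/web/"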
Option 2 based on rsync:
Another, maybe even simpler, option would be to use rsync:
rsync -e "ssh -tt" --rsync-path="sudo rsync" file user@destinationMachine:/desired/path
with -e "ssh -tt" added to force a pseudo-terminal, since sudo cannot ask for a password when rsync's ssh session has no tty.

Try specifying the full destination path:
scp myFile.txt cooluser@192.168.10.102:/opt/nicescada/web/myFile.txt
Of course, double-check that cooluser has the right to write (not just read) in that folder: 755, not 644, for the web parent folder.
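A quick way to check this on the server (a sketch; adjust the owner or mode to whatever your setup actually requires):
# show owner, group and mode of the target directory
ls -ld /opt/nicescada/web
# if cooluser should own it, one option (run as root on the server):
sudo chown cooluser /opt/nicescada/web
# or make it writable for a group that cooluser belongs to:
sudo chmod 775 /opt/nicescada/web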

Related

Copy folder from local machine to a server in Linux

I am trying to copy a non-empty folder from my local machine to an AWS server.
So I used the following command, but it was not working:
scp -r IPADTEST.pem oafolder ec2-user@__________.compute.amazonaws.com:testfolder
The error was:
ec2-user@_________.compute.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection
I am sure the IPADTEST.pem is working okay, because I can SSH from the same location:
$ ssh -i IPADTEST.pem ec2-user@_____________.compute.amazonaws.com
Also, I can copy a single file (not a folder); for example, I can copy index.html:
sudo scp -i IPADTEST.pem index.html ec2-user@______________.compute.amazonaws.com:testfolder/
You absolutely need "-i .pem": without the -i option, scp doesn't use your .pem key for authentication and treats IPADTEST.pem as just another source file to copy.
Q: Have you tried scp -r -i IPADTEST.pem oafolder ec2-user@__________.compute.amazonaws.com:testfolder?
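If it still fails, scp's verbose flag shows which keys are offered during authentication (a sketch; the hostname is elided as in the question):
scp -v -r -i IPADTEST.pem oafolder ec2-user@__________.compute.amazonaws.com:testfolder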

Google cloud scp permission denied

I am trying to transfer files to my Google Cloud hosted Linux (Debian) instance via secure copy (scp). I did exactly what the documentation says to do to connect from a local machine to the instance: https://cloud.google.com/compute/docs/instances/connecting-to-instance.
Created an SSH key pair
Added the public key to my instance
I can login successfully by:
ssh -i ~/.ssh/my-keygen [USERNAME]@[IP]
But when I want to copy files to the instance I get a message "permission denied".
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/var/www/html/
It looks like the user with which I login has no permissions to write files, so I already tried to change the file permissions of /var/www/, but this still gives the permission denied message.
I also tried to add the user to the root group, but this still gives the same problem.
usermod -G root myuser
The command line should be
scp -r -i ~/.ssh/my-keygen /path/to/directory/ [USERNAME]@[IP]:/var/www/html/
assuming your files are in the local /path/to/directory/ and /var/www/html/ is on the remote server.
The permissions do not allow writing to /var/www/html/. Writing to /tmp/ should work. Then you can move the files with sudo to the desired destination with root privileges.
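In other words, something along these lines (a sketch reusing the placeholders from the question; /tmp/directory stands for the basename of /path/to/directory, and your user is assumed to have sudo rights on the instance):
# copy into /tmp first, where your user can write
scp -r -i ~/.ssh/my-keygen /path/to/directory [USERNAME]@[IP]:/tmp/
# then log in and move it into place with root privileges
ssh -i ~/.ssh/my-keygen [USERNAME]@[IP]
sudo mv /tmp/directory /var/www/html/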
If SSH isn't working, install the gcloud CLI and run the following locally: gcloud compute scp --recurse /path/to/directory [INSTANCE_NAME]:~ --tunnel-through-iap. This will put the directory in your /home/[USERNAME]/ folder. Then log in to the console and use sudo to move the directory to /var/www/html/.
For documentation, see https://cloud.google.com/sdk/gcloud/reference/compute/scp.
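For reference, a fuller invocation might look like this (a sketch; the instance name and zone are placeholders to substitute for your project):
gcloud compute scp --recurse /path/to/directory my-instance:~ --zone=us-central1-a --tunnel-through-iap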

Rsync to Amazon Linux EC2 instance - failed: No such file or directory

I want to upload the content of one directory to my Amazon EC2 with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync defaults to ssh when one of the destinations contains a host, so you can get rid of the -e and -s options (keep -t if you want to preserve modification times).
Your command could then be written as rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/ - notice the trailing /'s - provided that you have ssh configured to read the private key from your home directory.
On Ubuntu you can add the key to your ssh agent with:
ssh-add [key-file]
This will save you having to specify the key file every time you ssh into the AWS machine.
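Alternatively, an entry in ~/.ssh/config makes the key permanent (a sketch; the Host name and key path are the ones from the question and purely illustrative):
Host ec2-64-274-161-87.compute-1.amazonaws.com
    User ubuntu
    IdentityFile /home/mostafa/keyamazon.pem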
The errors seem to say that the source directory doesn't exist on your local machine and that the destination doesn't exist either.
I completed this task with FileZilla instead; it is easier to use.
You are in your home directory (~); if you cd .. up towards the root you will be able to run the command.

How can I copy a file from a local server to a remote one via SSH, creating any directories that are absent?

I can copy a file via SSH using scp like this:
cd /root/dir1/dir2/
scp filename root@192.168.0.19:$PWD/
But if some directories are absent on the remote server, for example the remote server has only /root/ and doesn't have dir1 and dir2, then I can't do it and I get an error.
How can I do this - copy a file via SSH while creating the directories that are absent - and what is the easiest way to do it?
By the easiest way I mean that I can get the current path only from $PWD, i.e. the script must be easily portable without any changes.
This command will do it:
rsync -ahHv --rsync-path="mkdir -p $PWD && rsync" filename -e "ssh -v" root@192.168.0.19:"$PWD/"
I can make the same directories on the remote server and copy the file to them via SSH using scp like this:
cd /root/dir1/dir2/
ssh -n root@192.168.0.19 "mkdir -p '$PWD'"
scp -p filename root@192.168.0.19:$PWD/
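Wrapped up as a small helper, this second approach stays portable because it only relies on $PWD (a sketch; the function name pushfile and its arguments are illustrative):
# usage: pushfile root@192.168.0.19 filename
pushfile() {
    local host="$1" file="$2"
    # recreate the current directory's path on the remote side, then copy the file into it
    ssh -n "$host" "mkdir -p '$PWD'" && scp -p "$file" "$host:$PWD/"
}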

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. See man sudoers or run sudo visudo for instructions and samples.
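For example, a drop-in sudoers rule could look like the following (a sketch; the username user and the rsync path are assumptions to adapt, and the file should be created via visudo):
# /etc/sudoers.d/rsync-nopasswd
user ALL=(ALL) NOPASSWD: /usr/bin/rsync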
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files/ user@targethost:/path/
cd /path/to/files && find . -user www-data -print | \
  rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path/
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work; note that --files-from reads paths relative to the source directory, which is why the command changes into it and feeds find relative paths. You'll undoubtedly need to experiment.
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would either have to rsync using the www-data account, as all files will be created with the specified user as owner, or you need to chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
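If ssh-copy-id is available, it takes care of the authorized_keys step for you (a sketch; it assumes you can still authenticate to the source machine with a password once):
# run as root on the target machine
ssh-copy-id -i /root/.ssh/id_dsa.pub user@10.1.1.2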
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder only when the rsync completes successfully (a single '&' would background the rsync and run the chown immediately, regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
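Applied to the paths used elsewhere in this thread, that could look like the sketch below (the -C option keeps the archive paths relative, so the files land directly under the destination instead of under an extra /var/www prefix; host and paths are the ones from the question):
sudo tar -C /var/www -zcf - . | \
ssh user@10.1.1.1 "cd /var/www && sudo tar zxf -"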
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
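Note that --chown only takes effect if the receiving rsync has the privilege to change ownership; if you don't connect as root, one hedged variant is to combine it with the --rsync-path trick shown earlier in this thread (assuming passwordless sudo rsync is allowed on the server):
rsync -rptgoDvhP --chown=www-data:www-data --rsync-path="sudo rsync" ./website/ username@hostname:/var/www/html/website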
