How to download from a double-hop SFTP server? - linux

I am new to linux and am having trouble doing this.
I need to download files and this is currently what I do to access the file.
SSH into server A.
From server A, SSH to server B.
After logging into server B, run the following command:
sudo -i -u testuser
I enter a password and then I have the privileges I need.
How would I replicate this with WinSCP? I can log in to the server following the guide here:
https://superuser.com/questions/303486/sftp-over-double-server-hop
But I cannot download the files because I don't have permissions. How do I execute that sudo command and enter its password as part of the login process in WinSCP, or in an alternative program that runs on OS X? My ultimate goal is to download a file from the (doubly remote) computer to my local computer.

You need to combine two "advanced" features of WinSCP.
Tunneling: That's what the Super User question you referred to deals with:
SFTP over double server hop
Sudo: There's another Super User question that deals with this:
How to change user in WinSCP?
It is basically covered in the WinSCP FAQ How do I change user after login (e.g. su root)?
This is the tricky part.
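As a rough sketch of what that FAQ describes (the sftp-server path is an assumption; it varies by distribution): in WinSCP's Advanced Site Settings, under Environment > SFTP, set the "SFTP server" field to something like:
sudo -u testuser /usr/lib/openssh/sftp-server
WinSCP cannot answer a sudo password prompt over the SFTP channel, so this only works if sudo is allowed without a password for that binary, e.g. with a sudoers line on server B such as (yourlogin is a placeholder for the account you SSH in as):
yourlogin ALL=(testuser) NOPASSWD: /usr/lib/openssh/sftp-server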

You can use the solution you've already found, just use:
ssh -o ProxyCommand='ssh myfirsthop nc -w 10 %h %p' testuser@mydestination
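Once that works for an interactive shell, the same option lets scp pull the file in one step; a minimal sketch, with the remote path as a placeholder:
scp -o ProxyCommand='ssh myfirsthop nc -w 10 %h %p' testuser@mydestination:/path/to/file .
With a reasonably recent OpenSSH (7.3+) you can also skip nc and use the jump-host option instead:
scp -o ProxyJump=myfirsthop testuser@mydestination:/path/to/file .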

Related

Shell command to copy file from one server to remote server with different user

I have a server serverA with a user "akotha" on it, and there is another user "mqm". I can switch to "mqm" by typing sudo su - mqm, but I don't know the mqm user's password. All I want is to copy a file from my local server to serverA and place it in a folder that only mqm has write access to.
Can you please let me know the command to fulfill my requirement?
You can use SSH and the secure copy command:
$ scp path/to/local/file mqm@ip_address_of_server_A:~/directory
But if you don't have the password for 'mqm', you can send the file to user 'akotha' instead and then fix up the ownership or permissions.
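A minimal sketch of that two-step approach, relying on the sudo su - mqm access you already have (the destination path is a placeholder):
scp path/to/local/file akotha@ip_address_of_server_A:/tmp/
ssh -t akotha@ip_address_of_server_A "sudo su - mqm -c 'mv /tmp/file /folder/only/mqm/can/write/'"
The file is staged somewhere akotha can write (here /tmp) and then moved into place as mqm; the -t allocates a terminal so sudo can prompt for akotha's password.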

Transfer files from a local to a remote server using ssh without password authentication

I want to transfer some files from my local machine to a remote one, the way GitHub does it. I want it to happen smoothly, as in a shell script. I tried creating a shell script that automates SSH authentication without a password, but on the first run it exposes my remote server's password. I don't want to do it that way; with Git, for example, you never see the server's password. Is there any way to do this?
I used the script from this article to automate the SSH login: http://www.techpaste.com/2013/04/shell-script-automate-ssh-key-transfer-hosts-linux/
As I mentioned, you can use the scp command, like this:
scp /local_dir/some*.xml remote_user@remote_machine:/var/www/html
This requires that you can connect to the remote machine without a password, using SSH key authentication only.
Here is a link: http://linuxproblem.org/art_9.html to help you.
The important steps (automatic login from host A / user a to host B / user b):
a@A:~> ssh-keygen -t rsa
a@A:~> ssh b@B mkdir -p .ssh
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
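On most systems, ssh-copy-id wraps the last two steps into one command; a sketch assuming it is installed on host A:
a@A:~> ssh-keygen -t rsa
a@A:~> ssh-copy-id b@B
It appends the public key to b@B's ~/.ssh/authorized_keys and fixes the directory permissions for you.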

Copy files from Linux server using ssh client with different user name

I have a Linux machine with an SSH server installed; I can access the server using the username "ubuntu". The SSH server blocks clients that try to connect with the "root" username.
So connection can be made by:
ssh -i mykey ubuntu@myserver
I can get files that belong to "ubuntu" using :
scp -i mykey ubuntu@myserver:<file location> ./
However, what I really want is to get files that belong to the "root" user. (Note: I can't access the server with the username "root", for obvious security reasons.)
So is there a way to download files that are owned by "root"?
I was thinking of doing some magic on the server side that enables me to do that (I don't know how :) ).
If it helps: I have root access and can also create files on the server side, but I'm not allowed to change the permissions of root-owned files (if someone got hold of these files, I'd be fired).
You can try a monster like this:
ssh ubuntu@myserver 'sudo cat /path/to/file | uuencode remotefile' | uudecode -o path/to/local
You need uuencode and uudecode on the corresponding hosts. Note that uuencode requires a name argument for the encoded header, and uudecode's -o flag overrides that name with a local output path.
Or, if the file is text, you can skip the uuencode part.
PS: see the related topic.
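If sharutils isn't available, base64 (from coreutils) does the same job; a minimal sketch with placeholder paths:
ssh ubuntu@myserver 'sudo cat /path/to/file | base64' | base64 -d > path/to/local
(On older macOS the decode flag is -D rather than -d.)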
You could do it the other way around.
Log into the PC with the file you want:
ssh ubuntu@myserver
Then gain superuser privileges:
sudo su
and then copy the files you want back (this assumes the machine you're copying to, myhost here, also runs an SSH server):
scp /the_file_you_want ubuntu@myhost:/the_location_and_filename_you_want
Some other ways you can find here
https://unix.stackexchange.com/questions/106480/how-to-copy-files-from-one-machine-to-another-using-ssh
Enable SSH on your machine.
On Fedora (for Ubuntu you can easily find the command on Google):
service sshd start
From your local machine:
ssh -i mykey ubuntu@myserver
Change to root:
su
and enter the root password.
Then copy the files using scp:
scp somefile.extension randomuser@localmachine:/some/path/
I hope it helps.

Changing user to root when connected to a linux server and copying files

My script is coded in a way that doesn't allow connecting to a server directly as root. It copies files from a server to my computer, and it works, but I don't have access to many files because only root can access them. How can I connect to a server as a user and then copy its files after switching to root?
Code I want to change:
sshpass -p "password" scp -q -r username@74.11.11.11:some_directory copy_it/here/
In other words, I want to be able to remotely copy files which are only accessible to root on a remote server, but I don't wish to access the remote server via ssh/scp directly as root.
Is it possible using only ssh, without sshpass?
If I understand your question correctly, you want to be able to remotely copy files which are only accessible to root on the remote machine, but you don't wish to (or can't) access the remote machine via ssh/scp directly as root. And a separate question is whether it could be done without sshpass.
(Please understand that the solutions I suggest below have various security implications and you should weigh up the benefits versus potential consequences before deploying them. I can't know your specific usage scenario to tell you if these are a good idea or not.)
When you ssh/scp as a user, you don't have access to the files which are only accessible to root, so you can't copy all of them. So you need to instead "switch to root" once connected in order to copy the files.
"Switching to root" for a command is accomplished by prefixing it with sudo, so the approach would be to remotely execute commands which copy the files via sudo to /tmp on the remote machine, changes their owner to the connected user, and then remotely copy them from /tmp:
ssh username@74.11.11.11 "sudo cp -R some_directory /tmp"
ssh username@74.11.11.11 "sudo chown -R username:username /tmp/some_directory"
scp -q -r username@74.11.11.11:/tmp/some_directory copy_it/here/
ssh username@74.11.11.11 "rm -r /tmp/some_directory"
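If you'd rather avoid staging files in /tmp, a tar pipe over ssh does the copy in one pass; a sketch that assumes the same passwordless-sudo setup described below (with /bin/tar authorized instead of cp and chown):
ssh username@74.11.11.11 "sudo tar cf - some_directory" | tar xf - -C copy_it/here/
The remote tar writes an archive to stdout as root, and the local tar unpacks it under your own ownership.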
However, sudo prompts for the user's password, so you'll get a "sudo: no tty present and no askpass program specified" error if you try this. You therefore need to edit /etc/sudoers on the remote machine (use visudo, so a syntax error can't lock you out) to authorize the user to run the needed commands via sudo without a password. Add these lines:
username ALL=NOPASSWD: /bin/cp
username ALL=NOPASSWD: /bin/chown
(Or, if you're cool with the user being able to execute any command via sudo without being prompted for a password, you could instead use:)
username ALL=NOPASSWD: ALL
Now the above commands will work and you'll be able to copy your files.
As for avoiding sshpass, you could instead use a public/private key pair, in which the private key on the local machine is matched against a public key stored on the remote machine to authenticate the user, rather than a password.
To set this up, on your local machine, type ssh-keygen. Accept the default file (/home/username/.ssh/id_rsa). Use an empty passphrase. Then append the file /home/username/.ssh/id_rsa.pub on the local machine to /home/username/.ssh/authorized_keys on the remote machine:
cat /home/username/.ssh/id_rsa.pub | ssh username@74.11.11.11 \
"mkdir -m 0700 -p .ssh && cat - >> .ssh/authorized_keys && \
chmod 0600 .ssh/authorized_keys"
Once you've done this, you'll be able to use ssh or scp from the local machine without password authorization.
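Most systems ship ssh-copy-id, which performs the same append (and the permission fixes) in one step:
ssh-copy-id username@74.11.11.11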

On Linux, how can I share scripts across an SSH connection for the session only?

For work, I have to connect to dozens of Linux machines via SSH (to perform maintenance, monitor the system, install software, etc).
I have a small arsenal of scripts that help me do some of these tasks, and these are located in a folder on my Mac in /Users/me/bin. I want to be able to run these scripts on the remote Linux machine, but for several reasons I do not want these scripts permanently located on these machines (e.g., other people also connect to these remote machines, and it would be unwise to let them execute these files).
So, is it possible to share scripts across an SSH connection for the lifetime of the session only?
I have a couple of ideas on how to do this, but I don't know if any of them will work. Firstly, if SSH allows file mounting, I could automatically mount me@mymac:/Users/me/bin to me@linux:/remote_bin when I connect to the remote Linux box, and set my PATH variable to "$PATH:/remote_bin". Secondly, I could set up port forwarding in the connection string (e.g., ssh me@linux -R 9999:127.0.0.1:<SMBPORT|ETC>) and every time I connect mount the share and set the $PATH variable.
EDIT: I've come up with a semi-solution. On the Linux machine, edit /etc/ssh/sshd_config to add the following subsystem: Subsystem shareduserbinary sudo su -l -c "/bin/mount -t cifs -o port=9999,username=me,nounix,sec=ntlmssp //127.0.0.1/exported_bin /mnt/remote_bin" && bash -l -i -s. When connecting to the remote machine, set up a reverse port forward and invoke the subsystem, e.g.: ssh -R 9999:127.0.0.1:445 -s shareduserbinary me@linux.
EDIT 2: You can make the solution above cleaner, by removing the -l from the sudo command and changing the path from /mnt/remote_bin to $HOME/rbin.
Interesting question. Perhaps you can add a command to ~/.bash_login (assuming you are using bash) to copy the scripts from a remote host (such as your mac) when you login, then add a command to ~/.bash_logout to delete the scripts when you logout. But, as bmargulies points out, it would be a good idea to go a step further and make sure that nobody else has permissions to read or execute the scripts.
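A rough sketch of that idea (the host name and paths are placeholders, and it assumes the Linux machine can reach your Mac over SSH):
# in ~/.bash_login on the Linux machine:
mkdir -m 0700 -p "$HOME/temp_bin" && scp -q me@mymac:/Users/me/bin/* "$HOME/temp_bin/"
# in ~/.bash_logout:
rm -rf "$HOME/temp_bin"
The mkdir -m 0700 keeps other local users from reading or executing the scripts while they exist.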
You can use OpenSSH's LocalCommand to upload the files (using e.g. scp or rsync) when initiating an SSH session (see man ssh_config and this):
Host server1 server2 [...]
PermitLocalCommand yes
LocalCommand scp -q /Users/me/bin/* %h:temp_bin/
and use .bash_logout or an EXIT-trap that you specify in your .bashrc to delete the contents of the directory on logout.
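A minimal sketch of the EXIT-trap variant, added to .bashrc on the remote machines (temp_bin matching the LocalCommand destination above):
trap 'rm -rf "$HOME/temp_bin"' EXIT
The trap fires when the login shell exits, so the uploaded scripts disappear with the session.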
