Connect ssh after reboot with port forwarding - linux

I am trying to automatically connect server -> server on startup using ssh with port forwarding. I need this so that the 1st server can connect to the 2nd server's Postgres DB.
For the connection I am using
ssh -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress
This works fine when I try it manually and I can connect to my DB with
psql -U postgres -h localhost -p 5434
with the .pgpass file in the home dir.
But the problem is that the ssh connection is NOT made by itself on startup. I thought of using @reboot in the sudo crontab, but that did not work. Then I tried to move the script to /etc/rc.local based on this, but also with no luck.
Please can someone help me establish the ssh connection on startup?
Thanks in advance

I think I have solved it by adding "-N" to the ssh parameters. This tells ssh not to execute a remote command, so the process just holds the tunnel open, and it seems to be working.
So now I have
ssh -N -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress
in root's crontab and it connects after reboot. This does not solve the "cold start" case, but since it is a server it will mostly be restarted rather than powered down and started again.
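For reference, the crontab entry would look something like this (in root's crontab, edited with crontab -e); the autossh variant is an optional alternative, assuming autossh is installed, that also restarts the tunnel if it ever drops:
@reboot ssh -N -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress
# or, if autossh is installed, a variant that reconnects after drops:
@reboot autossh -M 0 -N -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress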

Related

How could I mount remote directory to local machine through two ssh hops

I can access my server like this:
(from local) ssh -p5222 name@server1.com
(from server1) ssh name@server2.com
Then I can work on server2.
Now I need to mount a folder on server2 to my local machine so that I can use my IDE.
I tried this:
ssh -Nf name@server1.com -p5222 -L 2233:name@server2.com:2233
sshfs -p 2233 localname@localhost:~/ ./target-dir
But I got this error message:
channel 2: open failed: administratively prohibited: open failed read: Connection reset by peer
Why did I get this error, and how can I mount the remote folder on my local machine?
From the commands you run, it looks like the ssh server on server2.com is listening on the default port 22:
(from server1) ssh name@server2.com
If that's the case, then you need to forward the connection towards this port 22.
Instead of:
ssh -Nf name@server1.com -p5222 -L 2233:name@server2.com:2233
Do:
ssh -Nf name@server1.com -p5222 -L 2233:server2.com:22
(Note that the -L forwarding specification takes only host:port for the destination; a user name does not belong there.)
Also, in your sshfs command, you need to provide the ssh user on server2.com, not your local user.
Instead of:
sshfs -p 2233 localname@localhost:~/ ./target-dir
Do:
sshfs -p 2233 name@localhost:~/ ./target-dir
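Putting both fixes together, a minimal sketch (same hosts and ports as above, assuming ./target-dir already exists; the middle line is just an optional sanity check of the tunnel):
ssh -Nf -p5222 -L 2233:server2.com:22 name@server1.com
ssh -p 2233 name@localhost true
sshfs -p 2233 name@localhost:~/ ./target-dir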

Shell script remotely

I have a script running on one server that does some work on another server.
It contains many scp and ssh commands, which is why I have to enter the remote server's password at each remote command.
Is there any way to establish an ssh connection between the servers so that I only have to type the remote password once?
Thanks
I would suggest setting up an ssh config together with ssh keys. In a nutshell, the config holds an alias for one or more remote servers, so you can connect with:
ssh remote_server1
ssh remote_server2
While your config file will look something like this:
Host remote_server1
    Hostname 192.168.1.12
    User elmo
    IdentityFile ~/.ssh/keys/remote.key
    ...
If an ssh config file is not for you (although I can highly recommend it), you can use sshpass as well.
sshpass -p 't#uyM59bQ' ssh username@server.example.com
Do note that the above does expose your password. If someone else has access to your account, the history command will show the code snippet above.
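For completeness, creating and installing a key to match the config above might look like this (the key type and paths are just examples; ssh-copy-id appends the public key to the remote authorized_keys):
ssh-keygen -t ed25519 -f ~/.ssh/keys/remote.key
ssh-copy-id -i ~/.ssh/keys/remote.key elmo@192.168.1.12
ssh remote_server1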

Run Windows Script with several tunnels for Linux

I'm trying to create a script with several tunnels to several Linux servers and run a script on those servers.
Basically, I have a DailyCheck.sh on 8 RedHat machines, and in Windows I have a tunnel for each one with:
"putty.exe user#xxx.xxx.xxx.xxx -pw <password> -L port:127.0.0.1:port"
Then I open each one and run DailyCheck.sh.
What I want is one file in Windows with:
"putty.exe user#xxx.xxx.xxx.xxx -pw <password> -L port:127.0.0.1:port"
sudo su -
./dailyCheck.sh
<delay if necessary>
"putty.exe user#xxx.xxx.xxx.xxx -pw <password> -L port:127.0.0.1:port"
sudo su -
./dailyCheck.sh
(....)
Is there any way to do this?
Thanks & Best Regards,
André.
Instead of tunnels, I would recommend setting up a ProxyCommand for each host, then using remote command execution to run the script on each remote machine.
Also, to get better answers, I recommend the tags ssh and putty.
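As a sketch of that idea, the whole thing could become one Windows batch file that uses plink.exe (PuTTY's command-line client) for remote execution instead of interactive tunnels; the script path, the first-connection host-key handling of -batch, and passwordless sudo for dailyCheck.sh are all assumptions here:
plink.exe -batch -pw <password> user@xxx.xxx.xxx.xxx "sudo ./dailyCheck.sh"
plink.exe -batch -pw <password> user@yyy.yyy.yyy.yyy "sudo ./dailyCheck.sh"
rem ...and so on for the remaining machines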

How do I become root on a remote server until I am disconnected from that server?

So far I have this:
sshpass -p "password" ssh -q username#192.168.167.654 " [ "$(whoami)" != "root" ] && exec sudo -- "$0" "$#" ; whoami ; [run some commands as root]"
It keeps giving me username as the answer from whoami. I want to be root as soon as I am connected to the server (but I can only connect to it as username). How can I be root throughout the connection to the server?
Clarification:
I want to access a remote server. It is mandatory that I connect as "username" and then switch to root, to run commands and copy files that only root can. So while I am connected to that server via ssh, I want to be root until my commands on the remote server are done. My problem is that I am not able to do this because I don't have the knowledge, hence I am posting it here.
Restrictions:
- can't use rsync
- have to connect to the server as "username" and then switch to root
sshpass -p "password" ssh -q username#192.168.167.654 exec sudo -s << "END"
whoami
commands to run
END
You can try something like this (untested), but I've used the same concept to accomplish a similar task:
scp FileWithCommands.sh UserName@Hostname:/tmp
ssh -t Username@HostName "su -c /tmp/FileWithCommands.sh"
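A caveat that applies to both snippets above: sudo and su normally want a terminal to ask for a password, so unless the remote account may run them without a password, you will likely need ssh's -t flag to force tty allocation, along the lines of:
sshpass -p "password" ssh -t -q username@192.168.167.654 "sudo -s"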

Restore mysql database from a remote dump file

Is it possible to restore a dump file from a remote server?
mysql -u root -p < dump.sql
Can dump.sql be located on a remote server? If so, how do I refer to it in the command above? Copying it to the server isn't an option, as there is not enough space on the server. I'm on Red Hat 5.
Implement SSH access from the remote server (where dump.sql lives) to the local MySQL box,
then run on the remote server:
cat dump.sql | ssh user@mysqlbox 'mysql -u root -pPASSWORD'
OR
Implement MySQL access from the remote server:
set up privileges for root@REMOTESERVER (a sketch of the grant follows after this answer),
then run on the remote server:
mysql -h mysql.yourcompany.com -u root -p < dump.sql
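The grant itself, run once on the MySQL box, could be sketched like this (REMOTESERVER and the password are placeholders; IDENTIFIED BY inside GRANT is the MySQL 5.x syntax):
mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'REMOTESERVER' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"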
Maybe try this? Pick an available server port (call it 12345)
On your mysql box do nc -l 12345 | mysql -u root -p
On the box with the dumpfile nc mysqlbox 12345 < local_dump_file.sql
Look at the -h option for mysql connections:
MySQL Connection Docs
Edit:
Oh, I misread: you are not trying to load onto a remote server, you want the file to come from a remote server. If you can log into the remote server where the file is, you can then do:
remotebox:user>>mysql -h <your db box> < local_dump_file.sql
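If the link between the two machines is slow, ssh's built-in compression (-C) helps a lot with a plain-text dump; a variant run on the MySQL box itself, pulling instead of pushing (user and host are placeholders):
ssh -C user@remotebox "cat dump.sql" | mysql -u root -p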
