Set up SSH to connect two PCs and use MPI - Linux

I am here because I've run into several problems setting up SSH using the guide proposed in this other question.
First of all, I have a computer (which I want to use as the master) called timmy@timmy-Lenovo-G50-80. My other computer is a virtual machine, also running Linux Mint, called test@test-VirtualBox, and I'd like to use it as a slave.
What I've done so far is:
install the needed packages (both PCs):
sudo apt-get install openssh-server openssh-client
Change the following inside /etc/ssh/sshd_config (master only):
the server port from 22 to 2222
set PubkeyAuthentication yes (so no change)
uncomment the line: Banner /etc/issue.net
STOP
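So the relevant lines of my /etc/ssh/sshd_config on the master should now look roughly like this:
Port 2222
PubkeyAuthentication yes
Banner /etc/issue.net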
I am stuck when I have to execute this command:
ssh-copy-id username@remotehost
I imagine, reading what's written, that I have to execute something like:
ssh-copy-id timmy@timmy-Lenovo-G50-80
but:
from timmy@timmy-Lenovo-G50-80 everything goes OK, I can connect to myself (not what I actually want)
from test@test-VirtualBox it tells me: ERROR: ssh: Could not resolve hostname timmy@timmy-Lenovo-G50-80: Name or service not known
Finally, what do I have to do in order to connect these two PCs?

You need to enable port forwarding into your VirtualBox'ed machine. Simply right-click on the virtual machine, then go into Network. Then click on Advanced, which expands the Network window, and then on the button that appears, labeled Port Forwarding.
A table will appear with several columns (Name, Protocol, Host IP, Host Port, ...). Simply add a new entry with protocol TCP, host port = X and guest port = 22 (see the list of well-known ports here: https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers#Well-known_ports). In my Cloudera QuickStart VM, for example, the port-forwarding rules include exactly such an entry mapping a host port to the SSH port of the guest OS.
Once you reboot the virtual machine, you can simply connect to it through
# ssh -p X localhost
The -p parameter tells ssh to connect through port X. Note that if you want to use scp then you have to use the uppercase -P option rather than the lowercase -p.
# scp -P X localfile localhost:remote-dir/
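If you prefer the command line, roughly the same forwarding rule can be added from the host with VBoxManage while the VM is powered off (a sketch; the VM name "test-VirtualBox" and host port 2222 are placeholders, use your own):
VBoxManage modifyvm "test-VirtualBox" --natpf1 "guestssh,tcp,,2222,,22"
ssh -p 2222 test@localhost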

Related

SSH Tunnel to Ngrok and Initiate RDP

I am trying to access my Linux machine from anywhere in the world. I originally tried port forwarding and then SSHing in; however, I believe my school's WiFi won't allow port forwarding (every time I ran it, it would tell me connection refused). I have set up an account with ngrok and I can remotely SSH in, but now I am wondering if it is possible to RDP. I tried connecting via the Microsoft Remote Desktop app on Mac, but it instantly crashes. I have also looked at trying to connect with localhost, but it's not working. So far, I have tried (with xxxx being the port):
ssh -L xxxx:localhost:xxxx 0.tcp.ngrok.io
and
ssh -L xxxx:localhost:xxxx <user>@0.tcp.ngrok.io
but my computer won't allow it, and after about 2 or 3 tries it warns me of possible DNS spoofing. Is there any way that I can run a remote desktop of my Linux machine that I have SSH tunneled to (from my Mac) on ngrok? Thank you!
First you'll need to sign up with ngrok if you haven't already and you'll be given an authtoken. You'll need to install this by running
./ngrok authtoken <insert your token here>
This will save your token to a file under your home directory (.ngrok/ngrok.yml).
Then you'll need to ask ngrok to create a TCP tunnel from their servers to your local machine's Remote Desktop port, which should be 3389 by default:
ngrok tcp 3389
Give it 30 seconds or so, then go to http://localhost:4040/status to see what TCP address ngrok has allocated you. It should look something like tcp://1.tcp.ngrok.io:158764.
Now you should be able to remote into your machine using the address 1.tcp.ngrok.io:158764.
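The allocated address can also be read from ngrok's local API on the command line (this assumes curl and jq are installed):
curl -s http://localhost:4040/api/tunnels | jq -r '.tunnels[].public_url'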

Cannot access Kaa Sandbox SSH

I wanted to SSH into Kaa's Sandbox using ssh kaa@127.0.0.1 -p 2222, as given in the virtual machine and also in one of the Data Collection demos, which says that once we SSH into Kaa's Sandbox we can inspect MongoDB with the application token of our demo to see the data saved into it.
We do know the password is kaa123, but I tried 4 times and it shows "permission denied, please try again" until it finally shows "permission denied (publickey,password)".
Thus I would like to seek help. I haven't set up anything apart from installing cmake and gcc. I changed the port on the Raspberry Pi to 2222. The Raspberry Pi is connected to my computer with an Ethernet cable.
Raspberry Pi static IP address: 169.254.220.68
Computer static IP address: 169.254.220.135
Kaa Sandbox SSH address: ssh kaa@127.0.0.1 -p 2222
Your answers are really very important to us, as we have been stuck for days on our mini Final Year Project.
As I understand it, the situation is as follows:
Kaa Sandbox is running in a VirtualBox image on the host 169.254.220.135
The Raspberry Pi has IP address 169.254.220.68
You are trying to reach the Kaa Sandbox over SSH from the Raspberry Pi
The Kaa Sandbox shows in its terminal that you can access its SSH via localhost (127.0.0.1) port 2222
If that is correct, the technical details are as follows:
You should be able (if you didn't change the Kaa Sandbox configuration) to access the Kaa Sandbox from your VirtualBox host just as it is shown in the Kaa Sandbox terminal:
ssh kaa@localhost -p 2222
Please try this first. If this fails, you will not be able to pass the further checks below.
The Kaa Sandbox is shipped with a NAT networking mode configuration. This means (among other things) that its internal IP address(es) (including 10.0.2.15) cannot be reached from outside. That is, you cannot connect to this address from the Raspberry Pi, or even from your VirtualBox host. NAT hides them behind the VirtualBox host IP address.
To enable access to the Kaa Sandbox from outside, we pre-configured the Kaa Sandbox VirtualBox image to forward several ports from your host IP address to the internal IP address (10.0.2.15) behind the NAT. The port forwarding configuration is as follows:
${HostIP}:2222 -> 10.0.2.15:22
This means that all the connections to ${HostIP}:2222 will be forwarded to the Kaa Sandbox's 10.0.2.15:22.
Thus:
You should be able to reach the Kaa Sandbox SSH locally with kaa@localhost -p 2222 and by host IP: kaa@169.254.220.135 -p 2222
From a remote machine you need to use your host IP: kaa@169.254.220.135 -p 2222
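You can also double-check the forwarding rules from the VirtualBox host (a sketch; replace "Kaa-Sandbox" with the actual VM name shown by VBoxManage list vms):
VBoxManage showvminfo "Kaa-Sandbox" | grep -i "Rule"
You should see an entry mapping host port 2222 to guest port 22.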
Please let me know if something is unclear here or does not work for you.
127.0.0.1 always points to your own computer. If Kaa's Sandbox is on your Raspberry Pi, try ssh kaa@169.254.220.68 -p 2222

SSH Agent forward specific keys rather than all registered ssh keys

I am using agent forwarding and it works fine, but the ssh client is sharing all registered (ssh-add) keys with the remote server. I have personal keys that I don't want to share with the remote server. Is there a way to restrict which keys are forwarded?
I have multiple GitHub accounts and AWS accounts and I don't want to share all of the SSH keys.
It looks like this is possible with OpenSSH 6.7, which supports Unix socket forwarding: we could start a secondary ssh-agent with only specific keys and forward its socket to the remote host. Unfortunately this version is not available for my server/client systems at the time of writing.
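For completeness, if both sides do have OpenSSH 6.7+, the whole construction below collapses to forwarding the secondary agent's socket directly with ssh itself (a sketch; the remote socket path is a placeholder, must not already exist, and SSH_AUTH_SECONDARY_SOCK points at the secondary agent's socket as set up in the instructions below):
home-host$ ssh -R /home/userA/agent-fwd.sock:$SSH_AUTH_SECONDARY_SOCK userA@hostA
remote-host$ export SSH_AUTH_SOCK=/home/userA/agent-fwd.sock
Everything that follows assumes this is not an option.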
I have found another possible solution, using socat and standard SSH TCP forwarding.
Idea
On the local host we run a secondary ssh-agent with only the keys we want to see on the remote host.
On the local host we set up forwarding of TCP connections on some port (portXXX) to the secondary ssh-agent's socket.
On the remote host we set up forwarding from some socket to some TCP port (portYYY).
Then we establish an ssh connection with port forwarding from the remote's portYYY to the local portXXX.
Requests to ssh agent go like this:
local ssh-agent (secondary)
^
|
v
/tmp/ssh-.../agent.ZZZZZ - agent's socket
^
| (socat local)
v
localhost:portXXX
^
| (ssh port forwarding)
v
remote's localhost:portYYY
^
| (socat remote)
v
$HOME/tmp/agent.socket
^
| (requests for auth via agent)
v
SSH_AUTH_SOCK=$HOME/tmp/agent.socket
^
| (uses SSH_AUTH_SOCK variable to find agent socket)
v
ssh
Drawbacks
It is not completely secure, because the ssh-agent becomes partially reachable over TCP: users of the remote host can connect to your local agent on 127.0.0.1:portYYY, and other users of your local host can connect on 127.0.0.1:portXXX. But they will only see the limited set of keys you manually added to this agent. And, as AllenLuce mentioned, they can't grab the keys; they can only use the agent for authentication while it is running.
socat must be installed on the remote host. But it looks like it is possible to simply upload a precompiled binary (I tested this on FreeBSD and it works).
No automation: keys must be added manually via ssh-add, the forwarding requires 2 extra processes (socat) to be running, and multiple ssh connections must be managed manually.
So, this answer is probably just a proof of concept and not a production solution.
Let's see how it can be done.
Instruction
Client side (where ssh-agent is running)
Run a new ssh-agent. It will be used only for the keys you want to see on the remote host.
$ ssh-agent # below is ssh-agent output, DO NOT ACTUALLY RUN THESE COMMANDS BELOW
SSH_AUTH_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982; export SSH_AUTH_SOCK;
SSH_AGENT_PID=22983; export SSH_AGENT_PID;
It prints some variables. Do not set them: you would lose your main ssh agent. Instead, set another variable with the suggested value of SSH_AUTH_SOCK:
SSH_AUTH_SECONDARY_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982
Then establish forwarding from some TCP port to our ssh-agent socket locally:
PORT=9898
socat TCP4-LISTEN:$PORT,bind=127.0.0.1,fork UNIX-CONNECT:$SSH_AUTH_SECONDARY_SOCK &
socat will run in the background. Do not forget to kill it when you're done.
Add some keys using ssh-add, but run it with the modified environment variable SSH_AUTH_SOCK:
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add
Server side (remote host)
Connect to the remote host with port forwarding. Your main (not secondary) ssh agent will be used for authentication on hostA (but will not be available from it, as we do not forward it).
home-host$ PORT=9898 # same port as above
home-host$ ssh -R $PORT:localhost:$PORT userA@hostA
On the remote host, establish forwarding from an ssh-agent socket to the same TCP port as on your home host:
remote-host$ PORT=9898 # same port as on home host
remote-host$ mkdir -p $HOME/tmp
remote-host$ SOCKET=$HOME/tmp/ssh-agent.socket
remote-host$ socat UNIX-LISTEN:$SOCKET,fork TCP4:localhost:$PORT &
socat will run in the background. Do not forget to kill it when you're done; it does not exit automatically when you close the ssh connection.
Connection
On the remote host, set the environment variable so ssh knows where the agent socket (from the previous step) is. This can be done in the same ssh session or in a parallel one.
remote-host$ export SSH_AUTH_SOCK=$HOME/tmp/ssh-agent.socket
Now it is possible to use secondary agent's keys on remote host:
remote-host$ ssh userB@hostB # uses secondary ssh agent
Welcome to hostB!
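Cleanup
When you're done, don't forget to stop the helper processes started above (a sketch; the PID and port are the example values used in this answer):
home-host$ kill 22983 # the SSH_AGENT_PID printed by the secondary ssh-agent
home-host$ pkill -f "TCP4-LISTEN:9898" # the local socat listener
remote-host$ pkill -f "UNIX-LISTEN:$HOME/tmp/ssh-agent.socket" # the remote socat listener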
The keys themselves are not shared by forwarding your agent. What's forwarded is the ability to contact the ssh-agent on your local host. Remote systems send challenge requests through the forwarding tunnel. They do not request the keys themselves.
See http://www.unixwiz.net/techtips/ssh-agent-forwarding.html#fwd for a graphical explanation.

ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known [closed]

I am trying to set up a VPN with a Raspberry Pi, and the first step is gaining the ability to ssh into the device from outside my local network. For whatever reason, this is proving to be impossible and I haven't the slightest clue why. When I try to ssh into my server with user@hostname, I get the error:
ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known
However, I can log into the server with:
ssh user#[local IP]
The server is a Raspberry Pi Model B running the latest distribution of Raspbian and the machine I am trying to connect to it with is a Macbook Pro running Mavericks. ssh was enabled on the Raspberry Pi when I set up Raspbian.
I have perused Stack Overflow for hours trying to see if anyone else had this problem and I have not found anything. Every ssh tutorial I find says that I should just be able to set it up on the remote machine and log in from anywhere using a hostname, and I have never had success with that.
If you're on a Mac, restarting the DNS responder fixed the issue for me:
sudo killall -HUP mDNSResponder
I had the same issue connecting to a remote machine, but I managed to log in as below:
ssh -p 22 myName#hostname
or:
ssh -l myName -p 22 hostname
Recently I came across the same issue: I was able to ssh to my Pi on my network, but not from outside my home network.
I had already:
installed and tested ssh on my home network.
set a static IP for my Pi.
set up a Dynamic DNS service and installed the software on my Pi.
I referenced these instructions for setting up the static ip, and there are many more instructional resources out there.
Also, I had set up port forwarding on my router for hosting a web site, and I had even forwarded port 22 to my Pi's static IP for ssh, but I had left blank the field where you specify the application the port forwarding is for. Anyway, I added 'ssh' into that field and, VOILA! A working ssh connection from anywhere to my Pi.
I'll write out my router's port forwarding settings.
Application: ssh
External port: 22
Internal port: 22
Protocol: Both
To IP address: 192.168.1.###
Enabled: checked
Port forwarding settings can be different for different routers though, so look up directions for your router.
Now, when I am outside of my home network, I connect to my Pi by typing:
ssh pi@[hostname]
Then I am able to input my password and connect.
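A quick way to check the port forward from outside your network (a sketch; run it from a connection outside your home network, using your DDNS hostname):
nc -vz your-ddns-hostname.example.com 22
If the forward is working, nc reports the port as open/succeeded.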
In my case I was trying to ssh like this:
ssh pedro@192.168.2.179:22
when the correct format is:
ssh pedro@192.168.2.179 -p 22
If you need access to your VPN from anywhere in the world, you need to register a domain name and have it point to the public IP address of your VPN/network gateway. You could also use a Dynamic DNS service to map a hostname to your public IP.
If you only need to ssh from your Mac to your Raspberry Pi inside your local network, do this: on your Mac, edit /etc/hosts. Assuming the Raspberry Pi has hostname "berry" and IP 172.16.0.100, add one line:
# ip hostname
172.16.0.100 berry
Now ssh user@berry should work.
I had the same issue, which I was able to resolve by adding .local to the hostname, e.g. ssh user@hostname.local
For me, the problem was a typo on my ~/.ssh/config file. I had:
Host host1:
HostName 10.10.1.1
User jlyonsmith
The problem was the : after host1 - it should not be there. ssh gives no warnings for typos in the ~/.ssh/config file. When it can't find host1, it looks for the machine locally, can't find it, and prints the cryptic error message.
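For reference, the corrected entry looks like this (no colon after the alias):
Host host1
HostName 10.10.1.1
User jlyonsmith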
I had the same problem: the address shown in Preferences -> Sharing -> Remote Login didn't work and I got a '... nodename nor servname provided, or not known' error. However, when I manually edited the settings (in Preferences -> Sharing -> Remote Login -> Edit) and enabled "Use dynamic global hostname", it suddenly worked.
If your command is:
$ ssh -p 1122 path/to/pemfile user@[hostip/hostname]
you will also face the same error
ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known
when you omit the -i option before the path to the pem file.
So the command should be:
$ ssh -p 1122 -i path/to/pemfile user@[hostip/hostname]
I needed to connect to a remote Amazon server:
ssh -i ~/.ssh/test.pem -fN -L 5555:localhost:5678 ubuntu@hostname.com
I was getting the following error:
ssh: Could not resolve hostname <hostname.com>: nodename nor servname provided, or not known
Solution for Mac OS X
Pinging the host resolved the issue. I am using macOS Sierra.
ping hostname.com
Now the problem is resolved and I am able to connect to the server.
Note: I tried this solution also, but it didn't work out. Then pinging resolved the issue.
It seems that some apps won't read a symlinked /etc/hosts (on macOS at least); you need to hard-link it instead.
ln /path/to/hosts_file /etc/hosts
This was happening to me when trying to access GitHub. The problem is that I was in the habit of doing:
git remote add <xyz> ssh:\\git@github.com......
But if you are getting the error from the question, removing the ssh:\\ may resolve the issue. It solved it for me!
Note that you will have to do a git remote remove <xyz> and re-add the remote URL without ssh:\\.
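For example (the remote name and repository path are only placeholders):
git remote remove origin
git remote add origin git@github.com:username/repository.git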
I have the exact same configuration. This answer pertains specifically to connecting to a Raspberry Pi from inside the local network (not outside). I have a Raspberry Pi ssh server and a MacBook Pro, both connected to a router. On a test router, my Mac connects perfectly when I use ssh danran@mypiserver; however, when I use ssh danran@mypiserver on my main router, I get the error
ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known
just as you have. The solution, for me at least, was to add a .local extension to the hostname when connecting from my Mac via ssh.
So, to solve this, I used the command ssh danran@mypiserver.local (replace "danran" with your username and "mypiserver" with your hostname) instead of ssh danran@mypiserver.
To anyone reading this: try adding .local as a suffix to the hostname you are trying to connect to. That should solve the issue on a local network.
Try this, considering your allowed ports. Store your .pem file in your Documents folder, for instance.
To gain access to it, cd into that directory. You can first type ls to list the contents of the directory you are currently in:
ls
cd ~/Documents
chmod 400 mycertificate.pem
ssh -i "mycertificate.pem" ec2-user@ec2-1-2-3-4.us-compass-0.compute.amazonaws.com -p 80
I got this error by using a .yml inventory file in Ansible that was not properly formatted. For multiple hosts in a group, each hostname needs to end with a colon ":"; otherwise Ansible runs the host names together and produces this ssh error.
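For reference, a minimal correctly formatted YAML inventory looks like this (the host names are only examples):
all:
  hosts:
    web01.example.com:
    web02.example.com: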
I had the same problem after testing Visual Studio Code with the Remote-SSH plugin. During the setup of the remote host, the software asked me where to store the config file. I thought the .ssh folder (on a Linux system) would be a good place, as it was an SSH remote configuration.
It turned out to be a bad idea. The next day, after restarting the computer, I couldn't log on to the remote server via ssh. The error message was 'Could not resolve hostname: ....... Name or service not known'.
What had happened was that uninstalling VS Code did not delete this config file, and of course it then disturbed the usual process. An 'rm' later (deleting this config file), the problem was solved.

Connect to PostgreSQL database in Linux VirtualBox from Win7

As said in the headline, from a Win7 host I'm trying to access Postgres 9.3 installed on Linux CentOS 5.8, which runs in VirtualBox on the same machine. I'm trying to access it from pgAdmin, and everything is OK when I start Postgres from the Win7 services, so pgAdmin is configured properly.
What have I tried? I've read many articles about this subject, and even some questions on this forum, but nothing worked. I have:
switched to NAT and forwarded port 5432 in the VirtualBox GUI
set listen_addresses = '*' in the postgresql.conf file
put the line host all all 10.0.2.1/24 md5 in the pg_hba.conf file
added port 5432 inbound and outbound rules in the Win7 firewall settings
disabled the Linux firewall with # service iptables stop
Just to mention: when the service is started in the virtual Linux machine, I can access it from Linux, so the service is properly started. The problem is that Windows doesn't see that service. Also, when the service is started in Linux, I can still start the same service in Windows and vice versa, although port 5432 should already be occupied.
The most suspicious part to me is point 3), because I'm not sure whether I have put the right address in the rule. That address varies from article to article, and I would appreciate it if someone could explain how to be sure which address (or range) to put there, according to my network. Or any other advice if possible. Thanks.
Solved.
Replacing "host all all 10.0.2.1/24 md5" with "host all all 0.0.0.0/0 trust" solved it.
In my case adding the below line to pg_hba.conf was enough:
host all all 10.0.0.0/16 md5
and then restart:
sudo /etc/init.d/postgresql restart
The solution by Filip works, but you can tailor it further.
First, enable Adapter 2 in the VM and set it to Host-only Adapter.
Second, go to your host machine and find its IP address.
This can be found by running ipconfig on your Windows host machine.
Now you need to edit two files in your VM.
The first is postgresql.conf:
sudo nano /etc/postgresql/<version>/main/postgresql.conf
and add the following line:
listen_addresses = '*'
save it and then edit pg_hba.conf
sudo nano /etc/postgresql/<version>/main/pg_hba.conf
Here you need to add your host machine IP (in my case it was 192.168.56.1):
host all all 192.168.56.1/32 trust
Save it and restart postgresql
sudo /etc/init.d/postgresql restart
Now you can use pgAdmin to connect to the VM's PostgreSQL.
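You can also verify from the Windows host with psql before switching to pgAdmin (a sketch; the guest's host-only address, e.g. 192.168.56.101, can be found by running ip addr inside the VM):
psql -h 192.168.56.101 -p 5432 -U postgres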
Convenience!
