Trying to connect to a Raspberry Pi server from Windows using SSH causes: Connection timed out - linux

I am using a Raspberry Pi for the first time and wanted to open an SSH connection to it from a Windows PC. Both are on the same network, but the attempt fails with ssh: connect to host 192.168.0.114 port 22: Connection timed out.
These are the steps I followed (a minimal check sequence is sketched below the list):
Allow SSH connections on the Raspberry Pi.
Run ssh pi@192.168.0.114 (this is the IP of the Raspberry Pi) from Windows.
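A minimal check sequence, assuming Raspberry Pi OS with systemd and the addresses from the question; adjust the user name and IP to your setup:
sudo raspi-config                          # on the Pi: Interface Options -> SSH -> Enable
sudo systemctl enable --now ssh            # or enable and start the ssh service directly
hostname -I                                # confirm the Pi's actual address on the LAN
ping 192.168.0.114                         # from Windows: basic reachability
Test-NetConnection 192.168.0.114 -Port 22  # Windows PowerShell: should report TcpTestSucceeded : True
ssh pi@192.168.0.114                       # then retry the connection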

Related

How to install MongoDB Enterprise 4.4 on a remote Red Hat server?

I followed the instructions listed here, https://docs.mongodb.com/manual/tutorial/install-mongodb-enterprise-on-red-hat/, and tried to install on a remote server from my local machine. I SSH'd from my local machine into the server and performed the installation steps.
I'm not sure whether there are additional steps to complete, or whether you have to set directory paths other than the defaults when using a server instead of a local machine. My current error appears when I run mongo from my terminal:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
[h699972@csc2cxp00020938 ~]$ mongo --host
Running sudo vim /etc/mongod.conf and setting bindIp: 0.0.0.0 did not work. Any help would be appreciated.
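For completeness, a minimal check sequence on the server, assuming the default systemd service and paths from the RHEL install guide; connection refused on 127.0.0.1 usually means mongod is not running or not listening there:
sudo systemctl status mongod                 # is the service running at all?
sudo systemctl start mongod                  # start it if it is stopped
sudo tail -n 50 /var/log/mongodb/mongod.log  # look for startup errors (permissions, dbPath, SELinux)
sudo ss -tlnp | grep 27017                   # confirm mongod is listening on 27017
mongo                                        # then retry the shell on the server itself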

Connecting to cqlsh from a remote machine fails, whereas from localhost it connects?

I installed DSE on CentOS 7 and saw no errors. I'm able to connect with cqlsh from the local machine, and I'm also able to telnet to the port.
However, when the same is done from a remote machine, I get the error below:
cqlsh XX.XXX.XX.XX 3389
Connection error: ('Unable to connect to any servers', {'10.223.61.49': ConnectionShutdown('Connection to 10.223.61.49 was closed',)})
To check connectivity I ran telnet against it and see the error below:
telnet XX.XXX.XX.XX 3389
Invalid or unsupported protocol version (13); supported versions are (3/v3, 4/v4, 5/v5-beta, 65/dse-v1)
I tried updating the Python version; a few tweaks to the .py script also didn't help.
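A rough check list, assuming a DSE package install on the CentOS 7 node (paths and the service layout may differ for a tarball install); XX.XXX.XX.XX and the port are the placeholders from the question:
grep -E '^(rpc_address|broadcast_rpc_address|native_transport_port)' /etc/dse/cassandra/cassandra.yaml
# rpc_address must be a routable address (or 0.0.0.0 with broadcast_rpc_address set), not 127.0.0.1
sudo ss -tlnp | grep 3389                  # confirm the native transport is bound to an external interface
sudo firewall-cmd --list-ports             # confirm the port is open in firewalld on CentOS 7
cqlsh XX.XXX.XX.XX 3389                    # then retry from the remote machine with a cqlsh version matching the server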

DBeaver (3.8.5) unable to connect to Cassandra 2.1.2 (NoHostAvailableException)

My DBeaver Enterprise Edition is unable to connect to a remote Cassandra store (2.1.2) on a Unix machine in the same network. It throws a NoHostAvailableException: All host(s) tried for query failed. The firewall is down on the Unix machine, the TCP port in cassandra.yaml is 9042, and I am using the same port in the DBeaver connection configuration. I am also using an SSH tunnel in the configuration to connect to the Unix machine.
I could use some help fixing this problem.
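One way to isolate the problem is to build the same tunnel by hand and test it outside DBeaver; user and unix-host below are placeholders, and 9042 is the port from the question:
ssh -v -L 9042:127.0.0.1:9042 user@unix-host   # forward local port 9042 to the Cassandra node through SSH
nc -zv 127.0.0.1 9042                          # in another local terminal: should connect while the tunnel is up
# If this works, point DBeaver at localhost:9042 with the tunnel open instead of the remote address.
# If it fails, check which address the native transport is bound to on the Unix machine (rpc_address in
# cassandra.yaml) and forward to that address rather than 127.0.0.1.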

ssh command is not working while connecting from Linux to Solaris

I am trying to connect to a remote Solaris machine from a Linux server using SSH, but I am not able to connect. I am using the ssh command below to connect to the Solaris machine:
ssh <host_name>
After running this command, I am not getting any prompt for a username and password. Is this a limitation of Linux-to-Solaris connections?
The output is:
root@host> ssh -v user@solaris_host
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to solaris_host [solaris_host] port 22.
debug1: connect to address solaris_host port 22: Connection timed out
ssh: connect to host solaris_host port 22: Connection timed out
Go over the following steps (concrete commands are sketched after the list):
Check network connectivity with your target, e.g. with ping.
Check whether port 22 is open on the remote host, e.g. nmap -A 192.168.0.5/32 -p 22.
Check whether the ssh daemon is running on your target: svcs ssh.
Come back if the problem still exists.
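Roughly, assuming Solaris 10 or later with SMF on the target:
ping solaris_host                          # step 1: basic reachability
nmap -p 22 solaris_host                    # step 2: is port 22 open or filtered?
svcs ssh                                   # step 3, on the Solaris box: the service should show "online"
svcadm enable svc:/network/ssh:default     # enable and start it if it is disabled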

Unsuccessful SSH connection to the subordinate nodes of a cluster

I start the services on the subordinate nodes of the cluster by means of the following command:
hadoop@one:/export/hadoop-1.0.1/bin$ . ./start-all.sh
Not for the first time, I receive this result:
starting namenode, logging to /export/hadoop-1.0.1/libexec/../logs/hadoop--namenode-one.out
192.168.1.10: starting datanode, logging to /export/hadoop-1.0.1/libexec/../logs/hadoop-hadoop-datanode-myhost2.out
192.168.1.11: ssh: connect to host 192.168.1.11 port 22: Connection timed out
192.168.1.5: starting secondarynamenode, logging to /export/hadoop-1.0.1/libexec/../logs/hadoop-hadoop-secondarynamenode-one.out
starting jobtracker, logging to /export/hadoop-1.0.1/libexec/../logs/hadoop--jobtracker-one.out
192.168.1.10: starting tasktracker, logging to /export/hadoop-1.0.1/libexec/../logs/hadoop-hadoop-myhost2.out
192.168.1.11: ssh: connect to host 192.168.1.11 port 22: Connection timed out
How do I fix the error below:
ssh: connect to host port 22: Connection timed out
hadoop@one: /export/hadoop-1.0.1/bin$ ssh -vvv 192.168.1.10
Sun_SSH_1.5, SSH protocols 1.5/2.0, OpenSSL 0x1000004f
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Rhosts Authentication disabled, originating port will not be trusted.
debug1: ssh_connect: needpriv 0
debug1: Connecting to 192.168.1.10 [192.168.1.10] port 22
debug1: connect to address 192.168.1.10 port 22: Connection timed out
ssh: connect to host 192.168.1.10 port 22: Connection timed out
What should I do to correct this error?
Do you have the sshd daemon running on your machine? Your OS might come with the ssh client, but in order to run the sshd daemon you need to install ssh completely. By complete I mean:
ssh: the command we use to connect to remote machines - the client.
sshd: the daemon that runs on the server and allows clients to connect to this server.
Also, make sure there is no issue with port 22. If you still face an issue, try ssh with the -v switch to get the complete trace.
ssh -v myhost2
You can go here for a detailed explanation of ssh.
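As a sketch, installing and starting the server side on a Debian/Ubuntu node (package and service names differ on other distributions):
sudo apt-get install openssh-server        # provides sshd; the openssh-client package only provides ssh
sudo systemctl enable --now ssh            # older releases: sudo service ssh start
sudo ss -tln | grep ':22'                  # confirm sshd is actually listening on port 22
ssh -v myhost2                             # then retry from the client with verbose output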