Cannot SSH to Azure VM

I have an Ubuntu VM on Azure (Resource Manager deployment, not the Classic VM) and it all worked out of the box. I recently tried to SSH into the VM using PuTTY and I could not.
I get the error: Network Error: Connection timed out.
I have made sure that port 22 is open for SSH in the VM's inbound rules.
I set this VM up about 2 months ago for a side project and at that time I was able to SSH in without any trouble. Now I can't. Am I missing something?
PS: HTTP works fine. The website running on the VM shows up in the browser. Also, I tried using a browser-based SSH client and it was able to SSH into the VM.

This looks like an issue with the firewall on the VM itself. Try resetting the SSH configuration from the portal:
Go to the Azure Portal
Select the VM in question
Select Reset Password
Select Reset Configuration Only
Select Update
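If you prefer the command line, the same reset can be done with the Azure CLI (a sketch; substitute your own resource group and VM names):
# reset only the SSH configuration on the VM
az vm user reset-ssh --resource-group myResourceGroup --name myVM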

I am adding this because it might help someone; the chosen answer did not work for me. In my case, for some reason the firewall on the Ubuntu server itself was blocking port 22.
Go to the Serial Console, type in your SSH username and password, and you will be logged into the server.
Check the firewall status to see if port 22 is allowed:
sudo ufw status verbose
If the rule is not there, add it:
sudo ufw allow ssh
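While you are in the Serial Console it is also worth confirming that sshd itself is running and listening; a quick check, assuming a standard systemd-based Ubuntu image:
# is the OpenSSH server running?
sudo systemctl status ssh
# is anything listening on port 22?
sudo ss -tlnp | grep ':22'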

I encountered the same issue. This is how I solved it:
Don't add any ports while creating your VM; do it only after the VM is created.
Add port 22 in the Networking tab once the VM status is Running.
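The same port can also be opened from the Azure CLI once the VM exists (a sketch; the resource group and VM names are placeholders):
# open port 22 on the network security group attached to the VM
az vm open-port --resource-group myResourceGroup --name myVM --port 22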

When a new VM is created on Azure, inbound TCP traffic on port 22 is blocked by default and needs to be allowed in the network security group.
Follow:
https://medium.com/techinpieces/practical-azure-how-to-enable-ssh-on-azure-vm-84d8fba8103e
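If you prefer the CLI to clicking through the portal, an inbound NSG rule that allows port 22 can be created like this (a sketch; the resource group, NSG name, and priority are placeholders):
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVMNSG \
  --name Allow-SSH \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22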

Create the directory below (sshd needs it as its privilege separation directory):
sudo mkdir -p /run/sshd
Then restart the service:
sudo systemctl restart ssh
This solved the issue for me.
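Note that /run is a tmpfs, so a directory created there disappears on the next reboot. The openssh-server package normally recreates it, but if it doesn't on your image, one way to make the fix persistent is a systemd-tmpfiles entry; a minimal sketch (the file name sshd.conf is an arbitrary choice):
# recreate /run/sshd (mode 0755, owned by root) on every boot
echo 'd /run/sshd 0755 root root' | sudo tee /etc/tmpfiles.d/sshd.conf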

Related

Connect to remote Azure Linux machine through ssh Visual Studio Code

I want to connect to an Azure machine on which an Ubuntu distribution is installed.
I can connect either through SSH or, by installing some additional software, through X2Go.
However, I don't need the UI, and if possible I would like to use Visual Studio Code.
In VS Code I've installed the Remote - SSH extension and I've already used it to connect to other machines.
Unfortunately I'm not able to connect to the Azure machine using VS Code.
The SSH connection itself works; I tested it by connecting through the terminal.
The connection string is in the following form:
(user_name)@(machine_name).westeurope.cloudapp.azure.com
I'm not the system administrator and I don't know the public IP.
I think the problem is the SSH port: I read that the standard port for SSH is 22, while mine is 53044.
In VS Code I tried the following solutions:
connection string: (user_name)@(machine_name).westeurope.cloudapp.azure.com:53044
I added the connection info into the config file with this format:
Host Linux_Azure
HostName (machine_name).westeurope.cloudapp.azure.com
User (user_name)
Port 53044
Neither of them works.
With the first approach, VS Code tries to connect forever and fails with no error message.
With the second, VS Code gives back this error message: Could not establish connection to "Linux_Azure": The connection timed out.
I don't understand why it doesn't work, and I don't know how to solve it.
Do you have any ideas?
I deployed an Ubuntu VM and tried connecting to it via SSH in VS Code, and it connected successfully with port 22, like below:
ssh siliconuser@siliconlinuxvm123.centralindia.cloudapp.azure.com -p 22
When I tried connecting with port 53044, I got the same error as yours.
By default, an Azure Linux VM uses port 22 for SSH. You cannot change the default destination port, as it is fixed for a particular protocol; for example, RDP uses port 3389, HTTP uses port 80, and SSH uses port 22.
When I try to allow SSH in the inbound rules on the VM's Networking page, port 22 is selected by default and greyed out, so another port cannot be added as the destination range.
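If the SSH port is in doubt, it can also help to check from a plain terminal whether the non-standard port is reachable at all (hostname and port taken from the question):
# verbose SSH attempt on the non-standard port
ssh -vvv -p 53044 (user_name)@(machine_name).westeurope.cloudapp.azure.com
# or just test whether the TCP port is open
nc -vz (machine_name).westeurope.cloudapp.azure.com 53044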

Azure VM Connection Refused

I created a VM in Microsoft Azure with Ubuntu 20 on which I run a Tomcat server exposed on ports 443 and 80 (redirecting to 443), Neo4j on port 7474, and Jenkins on port 8081.
I can't access any of those ports, although I set up inbound port rules for all of them.
When I try to reach IP:PORT, I always get a connection refused error.
I am kind of new to Azure. It is possible to log in to the server via SSH in the terminal. Can anyone help me? How can I access my server?
Have you tried accessing the VM over SSH and looking at the logs to see what's going on?
Yes, you can connect to a terminal via SSH:
ssh -i <private key path> username@ipaddress
If you haven't configured an SSH key, you can create a password in the Azure portal.
In your VM blade, on the left, there are many options, one of which is Reset password.
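If the NSG rules look right but connections are still refused, it is worth checking from inside the VM (over the SSH session that already works) which services are actually listening and on which address; a short sketch:
# show all listening TCP sockets and the processes behind them
sudo ss -tlnp
# a service bound to 127.0.0.1:<port> is reachable only from the VM itself;
# it needs to bind 0.0.0.0 (or the VM's address) to be reachable from outside.
# Neo4j, for example, listens on localhost by default; depending on the version,
# set dbms.default_listen_address=0.0.0.0 (4.x) or server.default_listen_address=0.0.0.0 (5.x) in neo4j.conf.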

Cannot list directory on IIS FTP server on Azure, even after configuring Azure inbound rules and Windows firewall

I'm running Windows Server 2012 in Azure, and I've configured the FTP server in IIS. When I try to connect to the server, it accepts the username and password and logs me in, but it does not show the directory listing.
I've tried connecting with the FileZilla FTP client and it shows the same error.
Status: Resolving address of jothiprakashanandan.southindia.cloudapp.azure.com
Status: Connecting to 104.211.244.241:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is current directory.
Command: TYPE I
Response: 200 Type set to I.
Command: PASV
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
Status: Disconnected from server
Both the Azure inbound rules and the VM's firewall inbound rules are in place (screenshots not included here).
However, when I log in from a browser on the VM itself, it works fine and shows the directory listing.
In Azure, you should deploy FTP in passive mode: add a data channel port range under FTP Firewall Support in IIS, then add those ports to the NSG and to the Windows Firewall inbound rules (see the sketch below for the commands).
By the way, although the windows firewall seems to allow all traffic that’s required, we also need to enable stateful FTP filtering on the firewall:
netsh advfirewall set global StatefulFtp enable
Then restart the FTP windows service and we should be up and running:
net stop ftpsvc
net start ftpsvc
Here is a similar case with the same error as yours; please refer to it.
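A rough sketch of the IIS side of that, run in an elevated command prompt on the VM; the 50000-50050 port range is an arbitrary example and the appcmd path assumes a default IIS install:
rem set the passive data channel port range for the FTP service
%windir%\system32\inetsrv\appcmd set config -section:system.ftpServer/firewallSupport /lowDataChannelPort:50000 /highDataChannelPort:50050 /commit:apphost
rem allow that range through Windows Firewall
netsh advfirewall firewall add rule name="FTP passive data channel" dir=in action=allow protocol=TCP localport=50000-50050
rem restart the FTP service so the change takes effect
net stop ftpsvc
net start ftpsvc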
Check which port the FTP site listens on.
It is usually necessary to restart the Microsoft FTP service after enabling the FTP server rules in Windows Firewall for the change to take effect.
Or restart the whole machine.
See my guide to Installing an FTP Server on Windows using IIS.
The issue was with the Azure network NSG: you need to open the port range over which the data is transferred.
I added a new rule in the NSG to open this port range and it worked.
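For completeness, a sketch of such a rule with the Azure CLI, reusing the illustrative 50000-50050 passive range from above (resource group and NSG names are placeholders):
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVMNSG \
  --name Allow-FTP-Passive \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 21 50000-50050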

CHECK_NRPE: Error - Could not complete SSL handshake

I have the NRPE daemon running under xinetd on an Amazon EC2 instance and a Nagios server on my local machine.
Running check_nrpe -H [amazon public IP] gives this error:
CHECK_NRPE: Error - Could not complete SSL handshake.
Both NRPE installations are the same version. Both are compiled with this option:
./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/i386-linux-gnu/
The allowed_hosts entry contains my local IP address.
What could be the possible reason for this error?
If you are running nrpe as a service, make sure you have this line in your nrpe.cfg on the client side:
# example 192. IP, yours will probably differ
allowed_hosts=127.0.0.1,192.168.1.100
You say that is done, however, if you are running nrpe under xinetd, make sure to edit the only_from directive in the file /etc/xinetd.d/nrpe.
Don't forget to restart the xinetd service:
service xinetd restart
To check whether you have access at all, try a simple telnet to the address and port, or a ping or traceroute to see where traffic is being blocked.
telnet IP port
ping IP
traceroute -p $port IP
Also check on the target server that the nrpe daemon is working properly.
netstat -at | grep nrpe
You also need to check the versions of OpenSSL installed on both servers, as I have seen this break checks on occasion with the SSL handshake!
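If your check_nrpe build supports it, running the check once with SSL disabled can also help tell connectivity problems apart from SSL negotiation problems (a sketch; note the daemon side must permit non-SSL connections for this to succeed):
# -n disables SSL for this single check
./check_nrpe -H [amazon public IP] -n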
Check your /var/sys/system.log. In my case, it turned out the monitored IP was set to something other than the one I set in the nrpe.cfg file. I don't know the cause of this change, though.
@jgritty was right.
You should edit nrpe.cfg and the xinetd nrpe config file to allow your master Nagios server access:
vim /usr/local/nagios/etc/nrpe.cfg
allowed_hosts=127.0.0.1,172.16.16.150
and
vim /etc/xinetd.d/nrpe
only_from= 127.0.0.1 172.16.16.150
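After changing either file, restart the corresponding service so the new allowed hosts take effect; a quick sketch (the standalone daemon's service name varies by distribution, e.g. nrpe or nagios-nrpe-server):
# if nrpe runs under xinetd
sudo service xinetd restart
# if nrpe runs as a standalone daemon
sudo service nrpe restart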
That's somewhat of a catch-all error message for NRPE. Check your firewall rules and make sure that port is open. Also try disabling SELinux and seeing if that lets the connection through. It's likely not an SSL issue, but just an issue with the connection being refused.
It looks like you are running your Nagios server in a virtual machine on a host-only network. If this is so, this would stop any external access. Ensure that you have a NAT or Bridged Network available.
So many answers, none of them hit the reason why I ran into this issue.
It turns out that nagios has terrible cross-version support and this was caused by me having a version 2 "client" (machine being monitored) and a version 3 "server" (monitoring machine).
Once I upgraded the client to version 3, the problem went away and I could do a check_nrpe -H [client IP] without issues.
Note that I am not sure if client/server are the right terms with nagios, as in the case of an NRPE call, the server is really the machine being called, but I digress.
Make sure that you have restarted the Nagios Client Plugin as well.
I'm running nrpe using the xinetd service.
Make sure also (in addition to the above basic steps) that your nagios user is authenticating properly. In my case:
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown user: nagios [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute user - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown group: nagios [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute group - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Service nrpe missing attribute user - DISABLING
was showing in /var/log/messages.
It escaped me at first, but then I checked the ypbind service and found it was not started.
After starting ypbind, the nagios user and group authenticated properly and the error went away.
In some edge cases, restarting nagios-nrpe-server doesn't help because the process was not killed or was not properly restarted.
In that case, just kill it manually and start it again.
Besides the allowed_hosts setting, consider the network path behind the SSL handshake error message.
If your Nagios server is on a local LAN with a private class C address such as 192.168.x.x, then when the monitored target server sends the SSL reply back, it first arrives at your public IP and cannot cross it to reach the Nagios server's internal address.
You need NAT to guide the SSL traffic from the target server to the inner Nagios server.
Alternatively, use a "get" style method that simply pulls monitoring data from the client side, such as SNMP, to monitor the local resources of Linux servers remotely.
The SSL handshake needs traffic to flow in both directions.
Best Regards
For me, setting the following in /etc/nagios/nrpe.cfg on the client worked:
dont_blame_nrpe=1
It's an Ubuntu 16.04 machine.
For other possible problems, I recommend looking at the NRPE logs. Here is a good article on configuring logging.
If you are running Debian 9 then there is a known issue regarding this problem, caused by OpenSSL dropping support for the method NRPE uses to initiate anonymous SSL connections.
The issue seems to be fixed but the fix hasn't made it into the official packages, yet.
Currently there seems to be no secure work-around.
Check the configuration in /etc/xinetd.d/nrpe and verify the server IP. If it shows only_from = 127.0.0.1, change it to the Nagios server's IP.

Connect to PostgreSql database in Linux VirtualBox from Win7

As said in the headline, from a Win7 host I'm trying to access PostgreSQL 9.3 installed on Linux CentOS 5.8, which runs in VirtualBox on the same machine. I'm trying to access it from pgAdmin, and everything is OK when I start Postgres from the Win7 services, so pgAdmin is configured correctly.
What have I tried? I've read many articles about this subject, and even some questions on this forum, but nothing worked. I have:
switched to NAT and forwarded port 5432 in the VirtualBox GUI
set listen_addresses = '*' in the postgresql.conf file
put the line host all all 10.0.2.1/24 md5 in the pg_hba.conf file
added a port 5432 inbound and outbound rule in the Win7 firewall settings
disabled the Linux firewall with service iptables stop (as root)
Just to mention: when the service is started in the virtual Linux machine, I can access it from Linux, so the service starts properly. The problem is that Windows doesn't see that service. Also, when the service is started in Linux, I can still start the same service in Windows and vice versa, although port 5432 should already be occupied.
The most suspicious part to me is point 3), because I'm not sure whether I've put the right address in the rule. That address varies from article to article, and I would appreciate it if someone could explain how to determine which address (or range) to put there for my network. Or any other advice, if possible. Thanks.
Solved.
Replacing "host all all 10.0.2.1/24 md5" with "host all all 0.0.0.0/0 trust" solved it.
In my case adding the below line to pg_hba.conf was enough:
host all all 10.0.0.0/16 md5
and then restart:
sudo /etc/init.d/postgresql restart
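After the restart, it is worth confirming from inside the guest that PostgreSQL is now listening on all interfaces rather than only on localhost; a quick check:
# expect 0.0.0.0:5432 (or *:5432) here, not 127.0.0.1:5432
sudo netstat -tlnp | grep 5432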
The solution by Filip works, but you can tailor it further.
First, enable Adapter 2 in the VM and set it to Host-only Adapter.
Second, go to your host machine and find its IP address.
This can be found by running ipconfig on your Windows host machine.
Now you need to edit two files in your VM.
First is postgresql.conf
sudo nano /etc/postgresql/<version>/main/postgresql.conf
and add the following line:
listen_addresses = '*'
save it and then edit pg_hba.conf
sudo nano /etc/postgresql/<version>/main/pg_hba.conf
Here you need to add your host machine IP (in my case it was 192.168.56.1):
host all all 192.168.56.1/32 trust
Save it and restart postgresql
sudo /etc/init.d/postgresql restart
Now you can use pgadmin to connect to vm postgresql.
Convenience!
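To verify the connection from the Windows host before setting it up in pgAdmin, a quick test with psql also works (assuming psql is installed on the host and the guest's host-only address is, say, 192.168.56.101; user and database names are placeholders):
psql -h 192.168.56.101 -p 5432 -U your_user -d your_db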
