Network connection to Azure VM is slow

I have an Azure VM and I'm experiencing a slow connection to it via SSH.
I can connect using ssh or PuTTY, but after some time the VM drops the connection. Sometimes I get a connection timeout when trying to connect to the VM.
To try to identify the problem I created a new VM and tried to connect through the private interface, but I'm still having the same problem.
To illustrate the problem, see this scp command:
escola@othervm:~$ scp escola@10.0.0.4:backup/backup.dmp.gz .
backup.dmp.gz 1% 1520KB 144.7KB/s 12:45 ETA
The command runs, but after a while the download stalls.
Any help?

Based on my experience, you could increase the VM size to give the VM more memory. If that is not an option, you could try the following and refer to the link below.
Edit /etc/ssh/sshd_config to set a larger value for ClientAliveCountMax (the number of keepalive probes the server sends, one every ClientAliveInterval seconds, before it drops an idle session), then reload the SSH service with service sshd reload.
On the client, add ServerAliveInterval 60 to the config file in ~/.ssh/config.
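For reference, a minimal sketch of both keepalive settings; the interval and count values below are illustrative, not taken from the original answer:

# Server side, in /etc/ssh/sshd_config: probe the client every 60 seconds
# and only drop the session after 120 unanswered probes.
ClientAliveInterval 60
ClientAliveCountMax 120
# Apply the change:
# sudo service sshd reload

# Client side, in ~/.ssh/config (using the VM from the question as the host):
Host 10.0.0.4
    ServerAliveInterval 60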
For more information, you could read detailed SSH troubleshooting steps for issues connecting to a Linux VM in Azure.

Related

Can't connect remotely to Jenkins being run on a Debian 8 VM

I've recently set up a Debian 8 Jessie VM on Google Cloud. I've installed Jenkins and have the service up and running (verified by "sudo service jenkins status"), yet I can't connect to the VM's external IP from another machine. I used to run Jenkins from my personal computer until I decided I needed a dedicated server to run it continuously. When I was running it on my personal machine I would just access localhost:8080 and the Jenkins dashboard would load fairly quickly. However, upon trying to access the external IP address of the VM running Jenkins, I'm usually greeted with "Connection refused" in my web browser.
At the suggestion of most posts I've seen regarding such issues, I've lifted all firewalls on the VM and have tried to ensure that the VM is listening at the correct IP address, but nothing seems to be able to change the outcome presented by my browser. Where does the issue most likely reside: the VM, Google Cloud, or Jenkins? I'm at a loss.
My first guess is a connection/firewall issue. To test this, you could try a port forward using SSH. SSH into your server with a local port forward: ssh -L 8080:localhost:8080 yourserver. You should then be able to point your web browser at http://localhost:8080/ and your packets will flow through the SSH connection. If that makes it work, have a good look at How to open a specific port such as 9090 in Google Compute Engine. Or better yet, if you are the only one using that Jenkins server, just keep using the SSH tunnel. It's much more secure than opening Jenkins to the public world.
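If you do decide to open the port instead, a sketch of a Google Compute Engine firewall rule; the rule name and target tag are illustrative placeholders:

# Allow inbound TCP 8080 to instances tagged "jenkins"
gcloud compute firewall-rules create allow-jenkins-8080 \
    --allow tcp:8080 \
    --target-tags jenkins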
Have you tried installing tcpdump on the VM and doing a packet capture? That way you can determine where the traffic is being dropped. If you don't see any traffic, it is being dropped somewhere in the cloud before it gets to your VM. If you do see traffic, then you need to determine whether it is Jenkins or some agent on the host (perhaps a firewall, though you mentioned you cleared all the rules). I would suggest stopping the Jenkins service and then trying to access it again. Do you get the same "Connection refused" message? If so, it is something on the VM. If not, it is something at the application layer, i.e. Jenkins.
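A minimal capture sketch, assuming Jenkins listens on its default port 8080 and the VM's interface is eth0 (adjust both for your setup):

# Watch inbound traffic to the Jenkins port; -nn skips name lookups
sudo tcpdump -i eth0 -nn tcp port 8080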
Happy hunting!!!

I failed to create a VM from an image and got this error

I created a VM in Azure in order to make an image from it.
After I created the Linux VM (Red Hat), I stopped the VM and made the image.
But creating a VM from that image fails.
Both cases have the same problem:
- 1st case: I didn't install anything.
- 2nd case: I installed some software and made an SSH key (RSA).
If I execute the command 'sudo waagent -deprovision+user', there is no error.
But my SSH key disappears, so the VMs created from the image cannot connect to each other, which means I cannot build a cluster using Ambari.
Is there any way to solve this problem?
This is the error I got when creating a VM from the image failed:
Provisioning failed. OS Provisioning for VM 'master0' did not finish in the allotted time. However, the VM guest agent was detected running. This suggests the guest OS has not been properly prepared to be used as a VM image (with CreateOption=FromImage). To resolve this issue, either use the VHD as is with CreateOption=Attach or prepare it properly for use as an image:
* Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/
* Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/
OSProvisioningTimedOut
Before you create an image, you should execute sudo waagent -deprovision+user. If you don't, you will get this error.
For your scenario, you could set Provisioning.RegenerateSshHostKeyPair=n in /etc/waagent.conf. According to the official documentation:
deprovision: Attempt to clean the system and make it suitable for re-provisioning. This operation deletes the following:
All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
If that does not work for you, you could add a public key to your VMs using the Azure portal.
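A sketch of the whole flow under those assumptions; the resource group and image names are placeholders, and the commands assume the current Azure CLI (az):

# On the VM: keep the SSH host keys across deprovisioning, then deprovision
sudo sed -i 's/^Provisioning.RegenerateSshHostKeyPair=y/Provisioning.RegenerateSshHostKeyPair=n/' /etc/waagent.conf
sudo waagent -deprovision+user

# From your workstation: deallocate, generalize, and capture the image
az vm deallocate --resource-group myGroup --name master0
az vm generalize --resource-group myGroup --name master0
az image create --resource-group myGroup --name masterImage --source master0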

VM Keeps restarting

I have a VM hosted in Azure, created using Resource Manager. I came to use it today and can't RDP to the machine. When I view the Boot Diagnostics it shows "Please wait"; after a period of time it goes to the logon screen. When I view the CPU usage I can see it drop, which I assume is the VM restarting.
I've tried the following:
Reset Password
Reset Configuration
Redeploy
I've also looked at the network interfaces and tried adding one to a network security group with an RDP rule, but still nothing.
Is there anything else I can check?
EDIT
When I first start the VM up and look at the Boot Diagnostics I can see the login screen. When I try to RDP to the machine it says it can't connect.
The CPU drops where I assume it's restarting. I've also tried to RDP to the machine from another machine on the same VPN.
I raised a ticket about this. Support noticed the following in the logs: "Rebooting VM to apply DSC configuration." The DSC extension was causing the machine to reboot.
They advised me to go to the VM in the Azure portal, then Extensions, and uninstall the PowerShell DSC extension. I'm not sure what caused this, i.e. I did not knowingly install it, but once I uninstalled it I was able to RDP. Support have asked me to try installing it again to see if the same thing happens, but I haven't had a chance to do that yet.
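If you prefer the CLI to the portal, a sketch of the same steps with the Azure CLI; the resource group, VM, and extension names are placeholders, so list first to see what the extension is actually called on your VM:

# List the extensions installed on the VM
az vm extension list --resource-group myGroup --vm-name myVm --output table

# Remove the offending extension by the name reported above
az vm extension delete --resource-group myGroup --vm-name myVm --name DSC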

ssh: connect to host on port 22: Connection refused

I have looked around for answers but wasn't able to find a solution related to my query.
I was able to SSH to the server earlier, but after a reboot I could not. I checked the Azure portal and the instance's status was shown as Running. I tried rebooting it a couple of times, but I still could not SSH in. I looked further and found that the public IP shown was different this time. I tried with that IP, but I'm still not able to log in.
Any suggestion on what I should do next? Also, how can I make the IP static in Azure?
Please start with the official SSH connection troubleshooting guide - most of the SSH issues I've had (and your situation is one I've hit a few times) were solved by the reset-ssh route. It would also help to look at the serial console output (VM dashboard => Settings).
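A sketch of that reset with the Azure CLI (it drives the VMAccess extension under the hood); the resource group and VM names are placeholders:

# Reset the VM's SSH server configuration to a working default
az vm user reset-ssh --resource-group myGroup --name myVm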
The fact that your IP changed is normal: if you did not reserve it as a static address, it will change whenever the VM is deallocated and started again.
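To make the public IP static, a sketch with the Azure CLI; the resource group and public IP resource names are placeholders:

# Switch the public IP resource from dynamic to static allocation
az network public-ip update --resource-group myGroup --name myVmPublicIp --allocation-method Static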

SSH fingerprint change after ubuntu update (Azure only)

After upgrading my Ubuntu 14.04 LTS machine hosted on Azure (the previous update was two weeks ago, on Feb. 22nd), it now warns me about a changed server SSH key when I try to connect to it.
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
I am ruling out the Ubuntu update triggering this change because this happened with my (only) Azure machine, but not the rest of about a dozen Linux servers that run either locally or on AWS with nearly identical configuration that were updated at the same time. I have also checked the host key algorithm as reported by ssh -v and it is unchanged (ECDSA-SHA2-NISTP256).
Is there anything specific about the way Azure handles SSH connections, or something particular about the Ubuntu image provided by Azure that could have led to the change in the server key?
P.S. I am downloading the VHD to check the machine locally, but this will take at least 24 hours with my connection. I was just wondering, maybe somebody has run into the same issue before.
It turns out that the keys were regenerated by cloud-init. As far as I can tell it was due to this bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1551419
I would like to be able to offer a less painful solution than downloading the VHD and checking the server fingerprint, but unfortunately the Azure portal still displays the fingerprint of the original key that was created when the instance was first provisioned.
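Once you have confirmed through a trusted channel (for example the serial console) that the new key is legitimate, a short cleanup sketch; the hostname below is a placeholder:

# On the server: print the current host key fingerprint to compare against the warning
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub

# On the client: remove the stale entry so the next connection records the new key
ssh-keygen -R myvm.cloudapp.net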
