Azure network output increases significantly for no apparent reason

There is a VM in Azure running IIS.
I have parsed all the IIS log files, and the total outgoing traffic over the last 7 days is about 2 GB. The Azure graph, however, shows 5-10 GB of traffic per day over the same seven days.
What is going on here?
Are there any Azure services known to increase network traffic by default?
I guess I should sniff my machine to see if there are other services increasing my outgoing traffic.
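For reference, a tally like mine can be produced with something along these lines (a minimal sketch assuming the default W3C log format with the sc-bytes field enabled and the default log path; adjust for your site):

```python
import glob

# Assumption: default W3C log location and the sc-bytes field enabled.
LOG_GLOB = r"C:\inetpub\logs\LogFiles\W3SVC1\*.log"

total_bytes = 0
fields = []
for path in glob.glob(LOG_GLOB):
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # column names for this log section
            elif not line.startswith("#") and "sc-bytes" in fields:
                cols = line.split()
                if len(cols) == len(fields):
                    total_bytes += int(cols[fields.index("sc-bytes")])

print(f"Total bytes sent by IIS: {total_bytes / 1024**3:.2f} GiB")
```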

There were "Diagnostics settings" enabled within my virtual machine. After disabling this, my traffic is going back to the regular amount.

Related

Azure availability set or zone: VM auto turn on

I have a VM that runs IIS and SQL Server for an enterprise application used by around 100 users.
Right now I have just this one VM, but I would like to add some availability. Zero downtime is not critical, but if the server fails for some reason I want to be able to wake up a secondary instance and reroute traffic to it.
I guess this is done with Availability Sets, but as I understand it I need at least two VMs in the availability set plus a load balancer so that traffic is redirected round robin to each VM. With that approach I would have to pay for two instances with the same specs.
What I would like, if it is possible, is the same scenario but with one of the VMs stopped (deallocated) so it incurs no charge, and in case of a VM failure I can start it, perhaps manually, so the application works again. If this is possible, how is the hard drive made available so that the second VM always has the latest data?
If that is not possible, can I instead put a second VM with the lowest specs my app can support into the availability set, so that if the main VM fails at least critical users can still access the app (performance won't be great, but it will work), and when the main VM is functional again traffic is redirected back to it?
You can achieve this by having two VMs that use only premium disks and keeping one as a cold backup. A single VM qualifies for an SLA if it uses only premium disks; the SLA is 99.9%, as far as I recall.
With availability sets, you need at least two running VMs.
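If you go the cold-backup route, remember the standby VM must be deallocated (not just shut down from inside the OS) to avoid compute charges, and started again when the primary fails. A rough sketch with the Python management SDK, assuming the track 2 azure-mgmt-compute package and placeholder resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders - replace with your own subscription, resource group and VM names.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-rg"
STANDBY_VM = "my-standby-vm"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate the standby so you only pay for its disks, not compute.
client.virtual_machines.begin_deallocate(RESOURCE_GROUP, STANDBY_VM).result()

# ...later, when the primary fails, bring the standby up:
client.virtual_machines.begin_start(RESOURCE_GROUP, STANDBY_VM).result()
```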

Outbound TCP connections spike sharply in Azure

TCP connections are getting exhausted, and I am unable to figure out the root cause. How do I diagnose common spike issues?
This was observed after migrating the project from .NET Framework to .NET Core 2.0.
The application downloads blobs using WebClient, which is the same in both the Framework and Core projects.
The outbound TCP connections on the VM instance can be exhausted.
In App Service, limits are enforced for the maximum number of outbound connections that can be made for each VM instance. For more information, reference: Cross-VM numerical limits (https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#cross-vm-numerical-limits). You may scale up the App Service Plan as per your requirement.
These limits apply only to customers on Basic or higher plans; in other words, customers running on their own dedicated VMs. The limits are there to protect the entire VM, even if one particular site is within the limits described above.
The limits are different depending on the size of VM configured.
Limit name    Description                                   Small (A1)   Medium (A2)   Large (A3)
Connections   Number of connections across the entire VM   1920         3968          8064
Ensure that your application is not trying to access a local address: connection attempts to local addresses (e.g. localhost, 127.0.0.1) and to the machine's own IP will fail, unless another process in the same sandbox has created a listening socket on the destination port.
Reference: http://www.freekpaans.nl/2015/08/starving-outgoing-connections-on-windows-azure-web-sites/ - it’s a 3rd party blog, be cautious with the steps.
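To see how close an instance actually is to these limits, you can count the established outbound TCP connections on the machine itself. A quick sketch using the psutil package (psutil is an assumption here; netstat gives the same information):

```python
from collections import Counter
import psutil  # pip install psutil

# Count established outbound TCP connections, grouped by remote address.
conns = [c for c in psutil.net_connections(kind="tcp")
         if c.status == psutil.CONN_ESTABLISHED and c.raddr]

by_remote = Counter(f"{c.raddr.ip}:{c.raddr.port}" for c in conns)

print(f"Established TCP connections: {len(conns)}")
for remote, count in by_remote.most_common(10):
    print(f"{count:6d}  {remote}")
```

If the count keeps climbing, a common culprit after a migration is creating a new client per request instead of reusing one.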

Identify low-usage Azure IaaS VMs

I have been working on the Azure monitoring side for a while. I need your input on one of my requirements.
We have a lot of IaaS VMs, both SQL and non-SQL, provisioned in our subscriptions, and we are paying a non-trivial amount for them. I am trying to come up with a solution to identify low-usage machines and the times (night, early morning, etc.) when usage is very low. With this information I can take action by either shutting down VMs during low-usage periods or reducing the VM size.
For this, I am trying a couple of options such as Azure Advisor and Azure metrics for CPU usage, network I/O, and disk read/write. But these alone might not be enough, because network I/O can include load balancer requests, which should not be counted.
So I need to find out the actual IIS requests that came in during a given period.
Can you recommend how to identify low-usage VMs? It would be a great help.
Can you recommend how to identify low-usage VMs?
Generally, we identify low-usage VMs based on CPU usage.
Network traffic in or out of a single application might include load balancer requests, but the network traffic in or out of the VM itself will not include the load balancer, so we can also use that to identify low-usage VMs.
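For example, here is a sketch of pulling the hourly average "Percentage CPU" metric for one VM with the Python monitor SDK (azure-mgmt-monitor is assumed; the resource ID and the 5% threshold are placeholders):

```python
from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
VM_ID = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
         "/providers/Microsoft.Compute/virtualMachines/<vm-name>")

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

end = datetime.utcnow()
start = end - timedelta(days=7)

result = client.metrics.list(
    VM_ID,
    timespan=f"{start.isoformat()}/{end.isoformat()}",
    interval="PT1H",
    metricnames="Percentage CPU",
    aggregation="Average",
)

# Print the hours where average CPU stayed under a 5% threshold.
for metric in result.value:
    for series in metric.timeseries:
        for point in series.data:
            if point.average is not None and point.average < 5:
                print(point.time_stamp, round(point.average, 2))
```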
So I need to find out the actual IIS requests that came in during a given period.
We can use OMS to monitor the IIS requests of each VM in Azure; please follow this article to configure OMS.
We can also configure Zabbix on one Azure VM and use it to monitor all the VMs.

Is Azure Traffic Manager reliable for failover? What other problems should I be worried about?

I am planning to use Azure Traffic Manager to fail over my app from one Azure region to another.
I need some suggestions on whether that is the correct approach for failover. We have seen issues with Azure where most of the services in one region go down for a few hours. I understand that Azure Traffic Manager is not tied to a single region, but is it possible that Traffic Manager itself goes down, or that the Traffic Manager endpoint is unreachable even though my backend web app is reachable?
If I use Azure Traffic Manager, what other problems should I be worried about?
I've been working with TM for some time now, so here are a few issues I haven't seen mentioned before:
Keep-Alive
If your service allows Keep-Alive, then your DNS entry will be ignored as long as the connection remains open. I've seen some exceptionally odd behavior result from this, including users being stuck on a fallback page since they kept using the connection, causing it to remain open indefinitely. If you have access to IIS Manager, you can force Keep-Alive to be false.
Browser DNS Caching
Most browsers have their own DNS cache, and very few honor DNS Time To Live. In my experience Chrome is pretty responsive, with IE and Edge having significant delays if you need them to rollover quickly. I've heard that Opera is particularly bad.
Other DNS Caching
Even if you're not accessing your service through a browser, other components can have DNS caches, and some of them will allow you to manage the cache yourself. In theory this can even extend to ISPs' DNS caching, though reports on the magnitude of this vary significantly.
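One practical check is to see what your resolver currently returns for the Traffic Manager name and what TTL it reports. A small sketch with dnspython (version 2.x assumed; the profile name is a placeholder):

```python
import dns.resolver  # pip install dnspython

NAME = "myapp.trafficmanager.net"  # placeholder Traffic Manager profile name

answer = dns.resolver.resolve(NAME, "A")
addresses = [record.address for record in answer]
print(f"{NAME} -> {addresses} (TTL reported: {answer.rrset.ttl}s)")
```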
Traffic Manager works at the DNS level, which is itself replicated. Even so, you should still build redundancy into your solution.
Take a look at the Azure Architecture Center under "Make all things redundant" and you will see a recommendation for Traffic Manager:
consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service.
The Traffic Manager internal architecture is resilient to the failure of any single Azure region. So, even if a region fails, Traffic Manager should stay up. That applies to all Traffic Manager components: control plane, endpoint monitoring, and DNS name servers.
Since Traffic Manager works at the DNS level, it doesn't have an 'endpoint' that proxies your traffic--it uses DNS to direct clients to the appropriate endpoint, and clients then connect to those endpoints directly. Thus, an unreachable endpoint is an application problem, not a Traffic Manager problem.
That said, if the Traffic Manager DNS name servers are down, you have a serious problem. Your DNS resolution path will fail and your customers will be impacted. The only solution is to either accept the risk (small, but it can never be zero) or have a plan in place to use another DNS system, either in parallel or as a failover. This is not a limitation of Traffic Manager; you could say the same about any DNS-based traffic management system.
The earlier answer from DornaDigital is very good (other than the first point which suggests DNS caching will protect you through a name server outage--it won't). It covers some important points. In short, DNS-based failover works well for new sessions. Existing clients may have to refresh or even close their browser and reconnect.
I also agree with the details provided by dornadigital.
There are considerations for front end applications as well. The browsers all have different thresholds for how long they maintain persistent connections. Chromium, for example, currently maintains a connection unless there is inactivity for 300 seconds.
In our web applications, we detect the failover by the presence of a certain number of failed requests to the endpoint. After requests begin failing, we pause requests for 301 seconds to allow the connection to reset. This allows the DNS change from Traffic Manager to be applied to subsequent requests. We pop up a snackbar to tell the user that we are having an issue and display a countdown until requests resume, similar to Gmail when it has trouble connecting to its servers.
I hope that gives you one idea on how to build some redundancy into your web apps.
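As a rough sketch of that detect-and-pause logic (the real implementation is front-end code; the health URL, threshold, and intervals below are illustrative):

```python
import time
import requests

HEALTH_URL = "https://myapp.trafficmanager.net/health"  # hypothetical endpoint
FAILURE_THRESHOLD = 3   # consecutive failures before assuming a failover
COOLDOWN_SECONDS = 301  # just past the 300 s keep-alive window mentioned above

failures = 0
while True:
    try:
        requests.get(HEALTH_URL, timeout=5).raise_for_status()
        failures = 0
    except requests.RequestException:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            # Pause so the stale connection closes and the next request
            # re-resolves DNS against the updated Traffic Manager answer.
            time.sleep(COOLDOWN_SECONDS)
            failures = 0
    time.sleep(10)  # normal polling interval
```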
I disagree with Jonathan, as his understanding of the resiliency of the Traffic Manager service conflicts with Microsoft's own documentation on the subject.
When you provision Azure Traffic Manager, you select a region in which to deploy the service. I (correctly) inferred from this that if that region were to fail, the Traffic Manager service could also be impacted and, in turn, your application would not properly fail over to the secondary region.
According to Microsoft's Azure Application Architecture Guide, under "Make all things redundant", a customer should deploy Traffic Manager into more than one region:
Include redundancy for Traffic Manager. Traffic Manager is a possible failure point. Review the Traffic Manager SLA, and determine whether using Traffic Manager alone meets your business requirements for high availability. If not, consider adding another traffic management solution as a failback. If the Azure Traffic Manager service fails, change your CNAME records in DNS to point to the other traffic management service.
Azure Application Architecture Guide - Make all things redundant
My thought and intention is not to deploy Traffic Manager in the primary service region, but instead to deploy it into the secondary (failover) region and a tertiary (third) region as a backup.

Is there any limit on receiving API responses from a Windows Azure VM?

Is there any limit on sending/receiving API responses from a Windows Azure Virtual Machine?
As far as I know, there isn't any limit on sending/receiving API responses from an Azure VM. A VM created in Azure works as a server, the same as an on-premises server.
For Azure VM limits, refer to the link.
The Azure VM's performance varies according to the VM size.
No, I don't think there is any limit as such. VMs are billed on capacity and the number of hours run.
No limit, but you do need to pay for bandwidth after a certain point. Check here: https://azure.microsoft.com/en-us/pricing/details/bandwidth/.
The first 5 Gigabytes per month of outbound traffic are free.
After that you have to start paying something. Your VM will usually cost way more than the bandwidth (license, VM size, storage).
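As a back-of-the-envelope check, only the traffic beyond the free allowance is billed. A tiny sketch (the per-GB rate is a placeholder; take the real number from the pricing page):

```python
def monthly_egress_cost(total_outbound_gb: float, rate_per_gb: float,
                        free_gb: float = 5.0) -> float:
    """Only outbound traffic beyond the monthly free allowance is billed."""
    billable_gb = max(0.0, total_outbound_gb - free_gb)
    return billable_gb * rate_per_gb

# Example: 300 GB of egress at a hypothetical $0.087/GB works out to ~$25.67.
print(monthly_egress_cost(300, 0.087))
```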
