I have an Azure VM (Windows) where the Task Scheduler regularly calls a VBS script to load a small data set, retrieved from a web site API, into a SQL database table. Now, when I look at the Network In and Network Out chart on my Azure Portal dashboard, there seems to be ridiculously high traffic going on, like GBs of data flowing in and out for no obvious reason. My VBS script only loads small KB amounts per day - where is all that traffic coming from? (Azure dashboard screenshot attached.)
From your screenshot, you have the time range set to Last 30 days in your metrics chart. If you move the mouse off the graph, the total bytes show at the bottom; this reflects the total network traffic received or sent across all network interfaces on the machine over that whole period. You could set the time range to one day instead.
Generally, we use the Network In and Network Out metrics to monitor network performance on the VM. Refer to the network metrics in the linked article.
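If you want the raw numbers rather than the chart, you can pull the same metrics programmatically. Below is a minimal sketch using the azure-mgmt-monitor Python SDK; the subscription ID, resource group, VM name, and timespan are placeholders you would replace with your own.

```python
# Minimal sketch: total "Network In Total" / "Network Out Total" for a VM
# over one day via the Azure Monitor SDK. All IDs and the timespan are
# placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"                      # placeholder
VM_RESOURCE_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

metrics = client.metrics.list(
    VM_RESOURCE_ID,
    timespan="2024-01-01T00:00:00Z/2024-01-02T00:00:00Z",  # one day
    interval="PT1H",                                       # hourly buckets
    metricnames="Network In Total,Network Out Total",
    aggregation="Total",
)

for metric in metrics.value:
    total_bytes = sum(
        point.total or 0
        for series in metric.timeseries
        for point in series.data
    )
    print(f"{metric.name.value}: {total_bytes / 1024 / 1024:.1f} MiB")
```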
My conclusion/answer is now that LRS (locally redundant storage) replication is responsible for the traffic. Thanks also for your help!
Related
I'm running three MEAN stack applications. Each application receives over 10,000 monthly users. Could you please assist me in finding an EC2 instance for my apps?
I've been using a "t3.large" instance with two vCPUs and eight gigabytes of RAM, but it costs $62 to $64 per month.
I need help deciding which EC2 instance to use for three Node.js applications.
First, check the CloudWatch metrics for the current instance. Are CPU and memory usage consistent over time? Analysing the metrics will help you decide whether to move to a smaller or bigger instance. (Note that memory usage is not a default EC2 metric; it requires the CloudWatch agent.)
One way to avoid unnecessary costs is to use Auto Scaling groups and load balancers. By finding and applying proper settings for them, you can always have the right amount of computing power for your applications. A sketch of the metrics check follows the links below.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html
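Here is a minimal sketch of that metrics check using boto3; the region and instance ID are placeholders, and only CPU is shown.

```python
# Minimal sketch: two weeks of hourly CPU utilisation for one instance,
# to see whether the t3.large is over- or under-provisioned.
# Region and instance ID are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=end - timedelta(days=14),
    EndTime=end,
    Period=3600,                         # one-hour buckets
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
    print(f'{point["Timestamp"]:%Y-%m-%d %H:%M}  '
          f'avg={point["Average"]:5.1f}%  max={point["Maximum"]:5.1f}%')
```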
It depends on your applications: do they need more compute power, more memory, or more storage? Choosing a server is similar to installing an app on a system - check its basic requirements first, then proceed to choose the server.
If you have 10k+ monthly customers, think about using an ALB so that traffic gets distributed evenly. Try caching to serve some content if possible. Use the unlimited burst mode of t3 instances if CPU keeps hitting 100% (see the sketch after this answer). Also, try to optimize your code so that fewer resources are consumed. Once you are comfortable with your EC2 choice, purchase Savings Plans or Reserved Instances for lower cost.
Also, monitor the servers and traffic using the CloudWatch agent, Internet Monitor, and similar features.
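As a concrete example of the burst-mode suggestion above, here is a minimal boto3 sketch that switches a t3 instance to unlimited CPU credits; the region and instance ID are placeholders.

```python
# Minimal sketch: switch a t3 instance to "unlimited" CPU credits so it
# can burst past its baseline. Region and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
    ]
)
```

Keep in mind that unlimited mode can add surplus-credit charges if the CPU runs hot constantly, so it's a complement to right-sizing, not a substitute.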
I have an alert configured on an Azure storage account which should fire whenever availability goes below 100%. This alert has never fired. However, in the availability metric chart shown in the Azure portal for the past hour (attached below), availability is shown going below 100% multiple times. It seems to oscillate between 100 and 0.
However, if I increase the time range to 24 hours, availability is shown at 100% throughout (which it should be, because the alert never went off). An image of the same is attached below.
Can anyone please explain the first availability chart?
Thanks for your question. Metrics in the Azure portal have been improved. With the improved behavior, you will see a dotted line at 100% if there is no incoming traffic. We suggest looking at the Transactions metric to understand total incoming traffic.
Based on the post date, I believe you were looking at the old behavior before the improvement: if there is no incoming request in a given minute, you see 0% in the per-minute chart instead of 100%. The same applies to the hourly chart.
Please check out the new metrics in the Azure portal and let me know if you still see an unexpected 0%.
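To verify this for your own account, you can pull both metrics side by side. Here is a minimal sketch using the azure-mgmt-monitor Python SDK; the subscription ID, resource group, account name, and timespan are placeholders.

```python
# Minimal sketch: per-minute Availability next to Transactions for one
# hour, to confirm the 0% dips line up with minutes that had no
# incoming requests. IDs and the timespan are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

STORAGE_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
)

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

metrics = client.metrics.list(
    STORAGE_RESOURCE_ID,
    timespan="2024-01-01T00:00:00Z/2024-01-01T01:00:00Z",  # one hour
    interval="PT1M",                  # per-minute, like the first chart
    metricnames="Availability,Transactions",
    aggregation="Average,Total",
)

for metric in metrics.value:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name.value, point.time_stamp,
                  point.average, point.total)
```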
I maintain an Azure cloud service. It is set to auto-scale based on load. To monitor the health of this service I have another service which pings it every 2 minutes. The usual response time is around 100ms.
Once or twice a week I see that the service does not respond. It is not really a worry for me - it happens quite infrequently - but I am still trying to figure out what could be causing it. I do not think the problem is with the pinging service - I don't see issues with any of the other services it pings (not on Azure, but on other servers).
What could be causing these occasional delays? Are any other Azure service owners seeing such delays?
I'm having quite similar problems, but I use Application Insights, so I have some statistics - for example, that response time increases together with SQL Azure access time and CPU usage. My average response time according to Application Insights is about 600ms and average RPS is about 0.6. During these problems RPS is usually higher than average - up to 1.5 - but average response time grows to up to 1 minute! (During the day my RPS can grow to 3 or even higher without any response time growth.) As I have a 1-minute SQL connection timeout and I see a dramatic growth in total SQL Azure access time during these periods, I can assume the problem happens because of SQL Azure. This also happens once every day or two, for about 10-15 minutes max, and my ping service also always reports that the service doesn't respond.
So my advice here - install Application Insights to analyze what happens during these response delays. It would be great if you shared your results here.
P.S. I also use autoscale based on load, though it doesn't really help in these particular situations.
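To make the SQL suspicion concrete: the idea is to record the duration of every SQL call as a dependency in Application Insights, so the slow periods become visible in the portal. Here is a minimal sketch of that pattern, assuming the (older) applicationinsights Python package and pyodbc; the instrumentation key is a placeholder, and the same pattern exists in the .NET SDK.

```python
# Minimal sketch: time each SQL call and report it to Application
# Insights as a dependency. Assumes the `applicationinsights` package;
# the instrumentation key and connection details are placeholders.
import time

import pyodbc
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation-key>")

def timed_query(conn, sql):
    start = time.time()
    success = True
    try:
        return conn.execute(sql).fetchall()
    except pyodbc.Error:
        success = False
        raise
    finally:
        elapsed_ms = int((time.time() - start) * 1000)
        # Appears under "Dependencies" in the Application Insights portal.
        tc.track_dependency("SQL", sql, type="SQL",
                            duration=elapsed_ms, success=success)
        tc.flush()
```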
I'm looking at performance improvements for an Azure Web Role and wondering if Diagnostics should be left on when publishing/deploying to the production site. This article says to disable it, but one of the comments says you lose critical data.
You should absolutely leave it enabled. How else will you do monitoring or auto-scaling of your application, once it is running in production?
Whether you use on-demand monitoring software like RedGate/Cerebrata's Diagnostic Manager or an active monitoring/auto-scaling service like AzureWatch, you need Diagnostics enabled so that your instances provide the external software with a way to monitor them and visualize performance data.
Just don't go crazy and enable every possible kind of diagnostic data at the most frequent rate possible; capture on an as-needed basis.
Consider the reality that these "thousands of daily transactions" cost approximately 1 penny per 100k transactions. So, if you transfer data once per minute to table storage, this is 1,440 transactions per server per day, or 43,200 transactions per server per month - a whopping 0.43 cents per server per month. If the ability to quickly debug or be notified of a production issue is not worth 0.43 cents per server per month, then you should reconsider your cost models :)
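The arithmetic spelled out, for anyone who wants to plug in a different write frequency or per-transaction price:

```python
# The arithmetic from the paragraph above: one write per minute to
# table storage at roughly one cent per 100,000 transactions.
writes_per_day = 24 * 60                  # 1,440 transactions/server/day
writes_per_month = writes_per_day * 30    # 43,200 transactions/server/month
cost_cents = writes_per_month / 100_000   # price: 1 cent per 100k
print(f"{writes_per_month} transactions ~= {cost_cents:.2f} cents/server/month")
```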
HTH
I have an app that I'm thinking about moving to Azure as a Worker Role with an external-facing endpoint. It's a small process that runs in about 200-400ms, but our users would like to start running the job 50K-100K times a day, per user. Before I go building the Azure prototype, I need to figure out what kind of latency I can expect communicating with an Azure external endpoint. Obviously, the latency depends on the amount of data I'm sending and receiving, and on the speed of my internet connection, but I can't find any metrics anywhere. Are there any baseline numbers out there?
For the sake of argument, let's say I'm on a T1 and I'm sending 10K up and 10K down with each job run.
I don't think latency is exactly the term you're looking for - that's the delay in sending each packet over the network, which is affected more by your distance from the server and the nature of your network.
Having said that, everyone's results with respect to latency will be different; the only way to be sure is to set up a prototype and run some performance tests against it. Also remember that with Azure you can specify your data center, so select one near you.