Ways to reduce the latency achieved on my Virtual Machine in Azure

Does anyone have any ways in which I could reduce the latency for my Azure VM (managed disk)? I currently have one running in the UK South region at an average of 2.85 ms, which is fantastic compared to the 16 ms I was getting from my on-premises system.
However, if possible, I'd like to get this even lower, preferably down to 0.5 ms. Does anyone have any ways in which I could achieve this in the most cost-effective manner?
Thanks

Azure VM size can affect latency. Different VM sizes come with different network bandwidth limits.
You could check this blog.
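Before changing anything, it's worth measuring what you actually get. Here is a minimal sketch (the host name and port are placeholders for your own VM) that times TCP connection setup, which costs roughly one round trip:

```python
import socket
import time

HOST = "myvm.uksouth.cloudapp.azure.com"  # placeholder: your VM's DNS name or IP
PORT = 443                                # any port the VM is listening on

samples = []
for _ in range(10):
    start = time.perf_counter()
    # A TCP handshake takes roughly one round trip, so this approximates RTT.
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    samples.append((time.perf_counter() - start) * 1000)

print(f"min {min(samples):.2f} ms / avg {sum(samples) / len(samples):.2f} ms")
```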

Well, to get 0.5 ms you need to be within roughly 50-60 km (maximum) of the VM. Because physics, you know?
Having said that, about the only thing that reduces latency further is ExpressRoute, but it probably won't help much given you already have <3 ms latency.
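To make the physics concrete: light in optical fibre covers roughly 200 km per millisecond (about two-thirds of c), and a round trip travels the distance twice, so the RTT itself puts a hard ceiling on distance. A back-of-the-envelope check:

```python
FIBRE_KM_PER_MS = 200  # light in optical fibre: ~200 km per millisecond

def max_one_way_km(rtt_ms: float) -> float:
    """Upper bound on one-way distance for a given round-trip time,
    ignoring all switching, queuing and OS overhead."""
    return (rtt_ms / 2) * FIBRE_KM_PER_MS

print(max_one_way_km(0.5))   # 50.0  -> ~50 km ceiling for a 0.5 ms RTT
print(max_one_way_km(2.85))  # 285.0 -> plausible for a UK South round trip
```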

Related

Prevent bottleneck on bandwidth for mobile internet

I am sure this question has already been answered, but unfortunately I don't know the right keywords, so my searches have been unsuccessful so far.
Scenario: I want to transmit a livestream over mobile internet from a Raspberry Pi and, depending on the available bandwidth, downscale the stream and upscale it again when capacity returns.
My two questions for the network specialists among you:
I know I can actively measure the bandwidth, but how would you do this without interfering with the transmission already in progress? Should I reserve a fixed bandwidth for the stream and then carefully probe the remaining headroom with a test tool? Or are there existing practical solutions?
Can I tell, from the mobile link or the network interface, when a bottleneck has been reached?
Passive methods would be my preference, i.e. ones that don't load the link themselves. For example, I know how much bandwidth the stream uses and how much of it arrives. But how do I make sure there is enough capacity before I raise the bitrate?
Thanks for your wisdom ;)
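One passive starting point: the kernel already keeps per-interface counters, so you can watch the achieved uplink rate without generating any test traffic. A rough sketch (assuming the third-party psutil package; the interface name is a placeholder):

```python
import time
import psutil  # third-party: pip install psutil

IFACE = "wwan0"  # placeholder: your mobile interface's name

def uplink_kbps(interval: float = 1.0) -> float:
    """Achieved send rate on IFACE over `interval` seconds, read from
    kernel counters. Purely passive: it generates no traffic itself."""
    before = psutil.net_io_counters(pernic=True)[IFACE].bytes_sent
    time.sleep(interval)
    after = psutil.net_io_counters(pernic=True)[IFACE].bytes_sent
    return (after - before) * 8 / 1000 / interval

while True:
    rate = uplink_kbps()
    print(f"uplink: {rate:.0f} kbit/s")
    # If this plateaus below the encoder's configured bitrate, the link is
    # saturated and the stream should scale down. Probing upward still needs
    # a cautious active step, e.g. briefly raising the bitrate one notch.
```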

Struggling to decide on instance size for Win10 VM

Very basic question here. We are a charity looking to use a Win10 VM to run our old Sage 50 application as an interim measure before we move to cloud-based accounting software.
I'm a bit stuck on the Azure pricing model and which tier I should select. We'll likely be powering the VM up and down as we need it, and we don't really need something performant, just usable enough to enter data into the software.
Can anyone recommend the cheapest instance size that would fit the bill for this use case?
Thanks in advance for all suggestions!
Microsoft do have a sizing guide here: Virtual Machine Sizing
Start with the smallest size and scale up until you reach a level of performance you're comfortable with.
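If the first pick turns out to be undersized, resizing is a single operation. A hedged sketch driving the Azure CLI from Python (the resource group, VM name and target size are placeholders; note that a resize reboots the VM):

```python
import subprocess

# Placeholders: substitute your own resource group, VM name and target size.
RESOURCE_GROUP = "charity-rg"
VM_NAME = "sage-vm"
NEW_SIZE = "Standard_B2s"  # burstable B-series suits light, intermittent use

# `az vm resize` restarts the VM, so run it outside working hours.
subprocess.run(
    ["az", "vm", "resize",
     "--resource-group", RESOURCE_GROUP,
     "--name", VM_NAME,
     "--size", NEW_SIZE],
    check=True,
)
```

Since you'll be powering the VM up and down anyway, remember to deallocate it (not just shut it down from inside Windows) when idle, so you stop paying for the compute.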

Creating a file server in Azure

Our company has an on-prem file server that I'd like to move to the cloud. I followed these directions and was successfully able to map a drive on my local work computer to connect to an Azure File Share. Our company has about 20 locations, ~5 TB of data (mostly "office" type of files) in total, and about 500 users accessing them.
There are two issues I would like to improve but I'm not sure how:
There's somewhat of a lag when opening files. Other than increasing our office's internet speed, is there anything to be done to make it faster? Would some kind of site-to-site VPN help? Would adding some type of server or VM in the "middle" (maybe one per location?) to cache the files reduce the lag?
Also, we have and use an Office 365 subscription. What's the easiest way to use our existing AD structure to transfer over the NTFS permissions that are currently in place?
I Googled around and found a bunch of companies advertising their services, notable among them was Talon Storage. But it seems like something that could be done without hiring a company. What I'm hoping for is a DIY direction to optimally solve these issues. Perhaps there's a standard or commonly recommended solution for such issues. Any guidance would be greatly appreciated.
L-A-T-E-N-C-Y. The number one enemy of any cloud-based file server attempt. It ranges from annoying to downright unusable, depending on how far you are from your Azure datacenter of choice.
Imagine a poor soul trying to "stream" a large 20 MB Excel file with 20 references to external files. What used to take maybe 8 seconds on-prem will now take 40 in the cloud (on a good day). It's game over for productivity. Your marketing department that sometimes used to cut video in iMovie over the network? Those days are over.
I understand this is not the answer you were after, but it's the crude reality.
Don't panic, though, there are solutions. Here's a good one: https://azure.microsoft.com/en-us/services/storsimple/
I'm sure you wanted to get rid of boxes, not buy more, but it is what it is.
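If you want to put a number on the lag before buying anything, a small timing sketch (the paths are placeholders) comparing the mapped share with local disk makes the gap obvious; every SMB round trip pays the full WAN latency:

```python
import time
from pathlib import Path

# Placeholders: the same file on the mapped Azure File Share and on local disk.
REMOTE = Path(r"Z:\shared\report.xlsx")
LOCAL = Path(r"C:\temp\report.xlsx")

def timed_read(path: Path) -> float:
    """Seconds to read the whole file once."""
    start = time.perf_counter()
    path.read_bytes()
    return time.perf_counter() - start

for path in (REMOTE, LOCAL):
    print(f"{path}: {timed_read(path):.2f} s")
```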

Is network bandwidth on Azure Web Sites different from Virtual Machines?

From answers to other questions (such as this question), it sounds like different instance sizes offer different network throughput. My processing is I/O bound, and I'm trying to use web jobs to do it on a web site instance. Do web sites offer the same bandwidth as VMs with the same size/price point? Or if I need bandwidth higher than 100 Mb/sec, would I need to choose a solution other than web sites to do this processing?
Thanks,
David
Unfortunately, the bandwidth limits are not currently exposed.
At the end of the day, Azure App Service runs on Cloud Services machines under the hood, so the bandwidth should be quite similar to Web/Worker roles.
However, requests do go through additional mechanisms (IIS ARR, for example), though that shouldn't add much overhead.
That being said, the best approach would be to test it, and scale out (using multiple instances) if you need more.
I hope this helps!
Adding a small detail to dmatson's answer: right now the SLA covers high availability only, which means the throughput numbers you see can vary. You will need to wait for an official bandwidth SLA, or scale out by instance count or size. A very good FAQ I have found on that topic, covering many networking-related questions, is here:
https://blogs.msdn.microsoft.com/igorpag/2014/09/28/my-personal-azure-faq-on-azure-networking-slas-bandwidth-latency-performance-slb-dns-dmz-vnet-ipv6-and-much-more/
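Since the limits aren't published, the pragmatic move is to measure from inside the WebJob itself. A rough sketch (the URL is a placeholder; use a large blob in the same region, so you measure the instance's bandwidth rather than the internet path):

```python
import time
import urllib.request

# Placeholder URL: a large test blob, ideally in the same region as the site.
TEST_URL = "https://example.blob.core.windows.net/test/100mb.bin"

start = time.perf_counter()
with urllib.request.urlopen(TEST_URL) as resp:
    nbytes = len(resp.read())
elapsed = time.perf_counter() - start

print(f"{nbytes * 8 / 1_000_000 / elapsed:.1f} Mbit/s effective throughput")
```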

hardware infrastructure for public web application

I'd like to start a free budget/personal-finance site and will need plenty of horsepower and storage. I'm definitely a newbie, so how does one get started in terms of hardware infrastructure? Do I need to get a dedicated IP from my ISP and run my own servers? Do I go with Amazon, or SQL Server Data Services/Azure, or something like that? Are the latter services free, or is there a discount available for non-profit/free services such as the budget/personal-finance site I'm looking to start?
If you don't mind writing your web application in Python, then I'd suggest using Google App Engine. See: What Is Google App Engine?
What I like to do when I have new ideas for a site is to find an inexpensive hosting solution ($10 per month). This allows me to test the idea and see if the site is going to be successful. If it is a flop, I haven't wasted much money and if it is successful I can upgrade to better hosting (dedicated server).
There are many hosting options available and several of them have great tools such as an online SQL Server management studio. Your other option would be to host it yourself if you are prepared to deal with firewall issues, backups, storage, etc.
Whether it is feasible to DIY varies a lot by country...if you have a decent broadband connection with a fixed IP this can be the cheapest route to play around with first, especially if you need an awful lot of storage.
Note however that many fast broadband connections are only fast for downloads - when you're running a server, the speed your users will see is the upload speed, which is usually a lot less. Also, you'll need to do your own admin and backup etc.
Apart from this most hosting options have a price tag on top, varying from virtual hosts (sharing a real machine), to colocation (your machine in somebody's data center), to cloud services like amazon et al (which have a good scaling ability)- and you will need to shop around for the software stack and hardware features you really need.
There are really two ways to answer this question; what differentiates them is budget.
One is to properly design the solution, prototype it, benchmark the prototype, extrapolate the anticipated user load, add overhead and scale accordingly. This takes time and costs money, but gives you a supportable solution that serves your customers well.
The other is to just give something, anything, a go and fix the problems as they come along. This is quicker and cheaper, but might be a headache for a while and might p*** off your customers.
Basically it comes down to budget.
Best of luck.
