Struggling to decide on instance size for Win10 VM

Very basic question here. We are a charity looking to leverage a Win10 VM to run our old Sage 50 application as an interim measure before we move to cloud-based accounting software.
I'm a bit stuck on the Azure pricing model and which tier I should select. We'll likely be powering the VM up and down as we need to use it, and we don't really need something performant, just usable enough to enter data into the software.
Can anyone recommend the cheapest instance size that would fit the bill for this use case?
Thanks in advance for all suggestions!

Microsoft does have a sizing guide here: Virtual Machine Sizing
Start with the smallest and scale up to a level of performance you're comfortable with.
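To make the pricing model concrete: with pay-as-you-go you are billed for compute only while the VM is running, and a deallocated VM still pays for its managed disk. Here is a rough back-of-the-envelope sketch in Python; the hourly and disk rates are placeholders I made up for illustration, so swap in current figures from the Azure pricing calculator for your region.

```python
# Rough pay-as-you-go cost sketch for a VM that is only powered on when needed.
# The rates below are illustrative placeholders, NOT current Azure prices;
# check the Azure pricing calculator for your region and the actual rates.

HOURS_PER_MONTH_IN_USE = 8 * 21        # e.g. 8 hours a day, 21 working days
ILLUSTRATIVE_RATES = {                 # assumed compute rate per hour (USD)
    "B2s  (2 vCPU, 4 GiB)": 0.05,
    "B2ms (2 vCPU, 8 GiB)": 0.10,
    "D2s_v5 (2 vCPU, 8 GiB)": 0.12,
}
MANAGED_DISK_PER_MONTH = 10.0          # assumed flat monthly cost for the OS disk

for size, rate in ILLUSTRATIVE_RATES.items():
    compute = rate * HOURS_PER_MONTH_IN_USE   # billed only while the VM runs
    total = compute + MANAGED_DISK_PER_MONTH  # disk is billed even when deallocated
    print(f"{size}: ~${compute:.2f} compute + ${MANAGED_DISK_PER_MONTH:.2f} disk "
          f"= ~${total:.2f}/month")
```

The point of the sketch is that, for a part-time data-entry workload, the hours you actually run the VM dominate the bill, so a small burstable size that you deallocate after use is usually the cheapest starting point.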

Related

cloud terminology, elasticity vs scalability and compute vs networking

First of all, I would like to say a big thank you to everyone reading my question; I really appreciate your time and hope to be able to contribute back. Second, I did see part of my question asked in another thread, but it does not answer it, and my question has a slightly different angle, so here it goes:
Doesn't elasticity already include scalability? I see scalability and elasticity promoted as two separate features of the cloud in service promotions. Is there a technical difference, or is it just a marketing play on terminology?
I have similar confusion about compute and networking. Doesn't compute power already include networking? I have seen them briefly presented as two separate advantages of cloud services.
I will give it a try :) But this will largely be my own understanding rather than citations from provider documentation.
Elasticity vs. Scalability
I interpret elasticity as the capability to react to more or less daily variation in resource needs. Unlike reserved instances or your own server hardware "in the basement", the cloud provider offers both the resources and the management tools to let you use varying amounts of compute, network, and other resources from hour to hour or day to day.
So elasticity (in my mind) solves the business need to react and adapt to changing demand that might follow a pattern like day/night or season/off-season, but might be relatively stable from year to year or even week to week.
Scalability, in my mind, is above all the ability of these "hyperscalers" to let you grow your system continuously and with almost no upper limit. So the average (say, weekly) usage can go up week after week for months on end, and you still wouldn't run out of upgrade options with the cloud providers to help you serve more and more requests.
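To make the distinction concrete, here is a small illustrative Python sketch. It is not tied to any provider's API, and the capacities and limits are made-up numbers: elasticity is the instance count breathing in and out with the day's load, scalability is being able to keep raising the ceiling month after month.

```python
# Illustrative (not provider-specific) sketch of the two ideas:
# elasticity  = instance count follows short-term demand up and down,
# scalability = the ceiling you can grow to keeps rising over time.

def elastic_instance_count(requests_per_min: int, per_instance_capacity: int = 500,
                           min_instances: int = 1, max_instances: int = 20) -> int:
    """Elasticity: scale out/in with the current load, within today's limits."""
    needed = -(-requests_per_min // per_instance_capacity)  # ceiling division
    return max(min_instances, min(needed, max_instances))

# A day with a lunchtime peak: the fleet breathes in and out (elasticity).
for load in (200, 1500, 4000, 900):
    print(load, "req/min ->", elastic_instance_count(load), "instances")

# Month after month the business grows, so you raise max_instances (scalability):
# the provider lets that ceiling keep climbing with essentially no hard stop.
for month, ceiling in enumerate((20, 40, 80, 160), start=1):
    print(f"month {month}: max_instances = {ceiling}")
```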
Compute and Networking
The cloud uses "software-defined networking", which abstracts all that hardware stuff like switches and routers away from you as a user and offers connectivity options that would be hard to realize on your own with traditional networking. So the networking capabilities of a major cloud provider are a feature set of their own, with lots of room for system improvements and capabilities. Therefore it is designed, serviced, and billed separately from other service classes like compute or storage.
A simple illustration of that might be a virtual machine (or several) that, on their own as standalone compute resources, have a network interface and a public IP attached. You can reach that machine, the machine can reach the internet (if you configure it that way), and you can install stuff on it. That's it: you have compute power.
But when you group virtual machines into, for example, application security groups and use those groups as objects in resources that allow, deny, or redirect traffic internally or externally, and maybe tunnel traffic from these compute resources to your on-premises resources (in many cases Active Directory Domain Services), you start to use advanced networking capabilities. There's obviously much more, and networking can be one of the hardest parts of certification exams on cloud topics.
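To illustrate the separation, here is a purely conceptual Python sketch (plain objects, not any cloud SDK): the VM is the compute object, while the groups and rules that decide who may talk to it live in a separate networking layer that you configure independently.

```python
# Conceptual sketch only - plain Python objects, not any provider's SDK.
# The point: the VM is "compute"; the groups and rules that govern its traffic
# are a separate, software-defined "networking" layer you configure on their own.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:            # compute: CPU, memory, a NIC, maybe a public IP
    name: str
    app_security_groups: list = field(default_factory=list)

@dataclass
class SecurityRule:              # networking: allow/deny traffic between groups
    name: str
    source_group: str
    destination_group: str
    port: int
    action: str                  # "Allow" or "Deny"

vms = [
    VirtualMachine("web1", app_security_groups=["asg-web"]),
    VirtualMachine("db1",  app_security_groups=["asg-db"]),
]
rules = [
    SecurityRule("allow-web-to-db", "asg-web", "asg-db", 1433, "Allow"),
    SecurityRule("deny-internet-to-db", "Internet", "asg-db", 0, "Deny"),
]

for vm in vms:
    print(f"compute: {vm.name} belongs to {vm.app_security_groups}")
for r in rules:
    print(f"network: {r.action} {r.source_group} -> {r.destination_group} port {r.port}")
```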

Ways to reduce the latency achieved on my Virtual Machine in Azure

Does anyone have any ways in which I could reduce the latency for my Azure VM (managed disk)? I currently have one running in the UK South region at an average of 2.85 ms, which is fantastic compared to the 16 ms I was getting with my on-premises system.
However, if possible, I'd like to have this even lower, preferably down to 0.5 ms. Does anyone have any ways in which I could achieve this in the most cost-effective manner?
Thanks
Azure VM size could affect latency; different VM sizes have different network bandwidth.
You could check this blog.
Well, to get 0.5 ms you need to be within roughly 50-60 km (maximum) of the VM. Because physics, you know?
Having said that, the only other way to reduce latency is to use ExpressRoute, but it probably won't help given you already have < 3 ms latency.
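A quick back-of-the-envelope check of that physics argument, assuming signals in fibre travel at roughly two thirds of the speed of light (about 200 km per millisecond):

```python
# Back-of-the-envelope check of the physics argument above.
# Assumption: signals in fibre travel at roughly two thirds of the speed of light.

SPEED_IN_FIBRE_KM_PER_MS = 300_000 / 1000 * (2 / 3)   # ~200 km per millisecond

def max_one_way_distance_km(round_trip_ms: float) -> float:
    """Upper bound on how far away the VM can be for a given round-trip time,
    ignoring switching, queuing and processing delays (which only make it worse)."""
    return round_trip_ms * SPEED_IN_FIBRE_KM_PER_MS / 2

for rtt in (0.5, 2.85, 16.0):
    print(f"{rtt:>5} ms round trip -> at most ~{max_one_way_distance_km(rtt):.0f} km away")
```

So a 0.5 ms round trip already puts you within about 50 km of the data centre before any real-world overhead is counted, which is why 2.85 ms is about as good as most people will see.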

Explaining windows azure to layman or students

I am looking for simple analogies to explain Windows Azure, AppFabric, etc. to students or a layperson. Please let me know if you have any suggestions.
Thanks
Well, first I would try and talk about how we used to build and maintain things: buying our own hardware, building it, programming it, and connecting it to the internet. That's the old way. Then I would pivot into what cloud service providers are. In a nutshell, they are just somebody else's servers, usually Amazon's, Microsoft's, or Google's: AWS/Azure/GCP.
Here is a quick youtube video explaining it in layman's terms.
https://www.youtube.com/watch?v=1ERdeg8Sfv4
Cloud service providers offer a web portal, a website where folks can click and build services like storage, backup, DNS, databases, more websites, load balancing, and, maybe the most popular, virtual machine hosting.
What makes CSPs so successful is economies of scale. CSPs build huge data centers and engineer them to provide the kind of services that most businesses need; contrast that with every business building its own from scratch. There are, however, lots of challenges for these CSPs, like needing a lot more spare capacity and having to build something that fits everyone as opposed to something that fits a particular user. So, for a small business, whether they save money depends on their use case. You might save more building from scratch, but then you'd have to train and pay folks to maintain your own servers.
One of the most revolutionary benefits that cloud service providers brought to the market is that purchasing additional capacity is much easier and faster. You might have taken weeks to buy hardware and install it at your location, or, if you were renting through traditional suppliers, it might take a few hours for them to manually reconfigure things. Cloud providers have made everything automatic, so you can get a new server within seconds. This has allowed businesses to build their applications to scale on demand, which means they pay a different amount for the services depending on how much they use. This can reduce costs, but it again requires more time to develop and maintain the more complex applications.

Linux server performance analytics and load monitoring software

What I am looking for specifically is software that runs on Linux (CentOS) and can do the following:
Show human readable CPU, Memory, Disk, Apache, MySQL utilization/performance.
Provide historic reports on the above metrics (today, this week, this month, this year, etc.)
Provide this data in an easy to view web based report or at least exportable to excel/csv.
I have looked at Cacti and I don't think it's really an enterprise solution. I don't care whether this is free or paid-for software (though open source would be nice); I am really just looking for the best solution.
Does anything like this exist for Linux? The problem this company is faced with is that we have no way of measuring how the changes we make in our code and server configurations impact overall performance. So when I say "let's do this" and then do it, I can't show the benefits or revert back because it turned out to be a negative in terms of performance. I am not a Linux guru, just a developer with some Linux skills, but I am open to all suggestions. Thanks for reading.
There are a lot of open source projects, but the main drawback they suffer from is that they are harder to configure. I have come across a free tool called SeaLion which is way easier to install and configure, and it has an awesome timeline-based representation of outputs. There are also various paid tools like New Relic, Server Density, and SolarWinds which you can also give a look.
Check out the eginnovations monitoring tool
http://www.eginnovations.com
Monitors Linux, Apache, MySQL and other applications and is web-based, so you don't have to be a Linux expert.
Cacti is a simple one. OpenNMS is more complete.
You are not limited to Linux; using SNMP you can fetch this data from a remote host and use any NMS you like.
IMHO one of the best "freemium" tools is Zenoss (http://community.zenoss.org/).
The community edition is free. It will do everything you need, and comes with a simple RPM-based installation process. It's a lot easier than Cacti or Nagios to set up and use. I would give it a try.
I use Munin. It's much, much simpler to set up than Cacti. It's better to compile it yourself than to pull it with apt-get (or another package manager), because that way it comes with more built-in data-gathering scripts.
Basically, there is no single dashboard where you can get all report metrics; there is a range of open source software that can serve your need.
For server performance many people recommend Munin; you will have to learn how to read the report data. You can also write custom scripts to gather certain report parameters from MySQL or the OS (see the sketch after the links below). Additionally, if your server host provides an API, you can do a lot more with reports in your admin panel.
Have a look at the following URLs, which can give you more ideas about choosing the best fit for your needs.
https://serverfault.com/questions/44/what-tool-do-you-use-to-monitor-your-servers
http://sixrevisions.com/tools/10-free-server-network-monitoring-tools-that-kick-ass/
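If you do go the custom-script route mentioned above, here is a minimal sketch of the idea in Python using the psutil library (my assumption, not something the tools above require): sample CPU, memory, and disk utilisation and append it to a CSV you can open in Excel or graph later. The file path is hypothetical, and you would extend it with your own Apache/MySQL queries and run it from cron.

```python
# Minimal sketch: sample basic utilisation with psutil (pip install psutil)
# and append it to a CSV for historic, Excel-friendly reporting.
import csv
import datetime
import os

import psutil

CSV_PATH = "/var/log/perf_samples.csv"   # hypothetical path - adjust to taste

def sample() -> dict:
    """Take one snapshot of CPU, memory, and root-disk utilisation."""
    return {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

def append_sample(path: str = CSV_PATH) -> None:
    """Append one row to the CSV, writing a header if the file is new."""
    row = sample()
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(row)

if __name__ == "__main__":
    append_sample()   # run from cron every few minutes to build up history
```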

hardware infrastructure for public web application

I'd like to start a free budget/personal finance site and will need plenty of horsepower and storage. I'm definitely a newbie, so how does one get started in terms of hardware infrastructure? Do I need to get a dedicated IP from my ISP and obtain my own servers? Do I go with Amazon or SQL Server Data Services/Azure or something like that? Are the latter services free, or is a discounted offering available for non-profit/free services such as the budget/personal finance site I'm looking to start?
If you don't mind writing your web application in Python, then I'd suggest using Google App Engine. See: What Is Google App Engine?
What I like to do when I have new ideas for a site is to find an inexpensive hosting solution ($10 per month). This allows me to test the idea and see if the site is going to be successful. If it is a flop, I haven't wasted much money and if it is successful I can upgrade to better hosting (dedicated server).
There are many hosting options available and several of them have great tools such as an online SQL Server management studio. Your other option would be to host it yourself if you are prepared to deal with firewall issues, backups, storage, etc.
Whether it is feasible to DIY varies a lot by country...if you have a decent broadband connection with a fixed IP this can be the cheapest route to play around with first, especially if you need an awful lot of storage.
Note however that many fast broadband connections are only fast for downloads - when you're running a server, the speed your users will see is the upload speed, which is usually a lot less. Also, you'll need to do your own admin and backup etc.
Apart from this, most hosting options have a price tag on top, ranging from virtual hosts (sharing a real machine), to colocation (your machine in somebody's data center), to cloud services like Amazon et al. (which have good scaling ability), and you will need to shop around for the software stack and hardware features you really need.
There are really two ways to answer this question; what differentiates them is budget.
One is to properly design the solution, prototype it, benchmark the prototype, extrapolate anticipated user load, add overhead, and scale accordingly. This takes time and costs money, but gives you a supportable solution that serves your customers well.
The other is to just give something, anything a go and fix the problems as they come along. This is quicker and cheaper but might be a headache for a while and might p*** off your customers.
Basically it comes down to budget.
Best of luck.
