I have a VPS (Windows Server, IIS) hosting around 20 sites, and I'd like to monitor each site's monthly bandwidth usage independently. Performance Monitor is useful, but ideally I'd like to be able to show someone their own bandwidth usage (e.g. on their own computer) if they asked for it.
Can anyone recommend any tools for monitoring this?
Have a look at Anturis, which deals with all kinds of monitoring: networks, web apps, sites with transactions, etc. It may work for monitoring what you want.
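If you'd rather roll your own, IIS's W3C extended logs already record per-request traffic in the sc-bytes (bytes sent) and cs-bytes (bytes received) fields, which have to be enabled in the site's logging configuration. Below is a minimal sketch in Python of summing them per site and month; the log path and field handling are assumptions to adapt to your server:

    from pathlib import Path

    # Log root varies by IIS version (C:\inetpub\logs\LogFiles on IIS 7+,
    # %windir%\system32\LogFiles on IIS 6); each site logs under W3SVC<site-id>.
    LOG_ROOT = Path(r"C:\inetpub\logs\LogFiles")

    def monthly_bytes(site_id: int, month: str) -> int:
        """Sum sc-bytes + cs-bytes for one site over a month ('YYYY-MM')."""
        total = 0
        for log in (LOG_ROOT / f"W3SVC{site_id}").glob("*.log"):
            fields = []
            with open(log, encoding="ascii", errors="replace") as fh:
                for line in fh:
                    if line.startswith("#Fields:"):
                        fields = line.split()[1:]  # column names for data rows
                    elif not line.startswith("#") and fields:
                        row = dict(zip(fields, line.split()))
                        if row.get("date", "").startswith(month):
                            for field in ("sc-bytes", "cs-bytes"):
                                value = row.get(field, "0")
                                if value.isdigit():  # W3C logs use "-" for missing
                                    total += int(value)
        return total

    # e.g. GiB transferred by site ID 1 in May 2013
    print(monthly_bytes(1, "2013-05") / 1024 ** 3)

A nightly task could dump each site's running total to a page the customer can check themselves.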
My team has bought multiple Linux servers for developing and testing, and we want a single tool to manage and monitor them.
Ideally the tool would:
Monitor the servers, e.g. CPU and RAM utilization.
Have a web GUI so team developers can see directly which server is free.
Provide a booking function, or at least show which server is in exclusive use.
Be free or open source.
I have found two tools, Cockpit and Linux Dash, which are open source, have a web GUI, and can monitor server utilization, but neither has the booking function.
Does anyone know a tool that satisfies all of these requirements?
If I had to choose between Cockpit and Linux Dash, which would you suggest?
From answers to other questions (such as this question), it sounds like different instance sizes offer different network throughput. My processing is I/O bound, and I'm trying to use WebJobs to do it on a web site instance. Do web sites offer the same bandwidth as VMs at the same size/price point? Or, if I need more than 100 Mb/s of bandwidth, would I need to choose a solution other than web sites for this processing?
Unfortunately, the bandwidth limits are not currently exposed.
At the end of the day, Azure App Service runs on Cloud Services machines under the hood, so the bandwidth should be quite similar to that of Web/Worker roles.
However, requests do pass through additional mechanisms (IIS ARR, for example), though these shouldn't add much overhead.
That being said, the best approach would be to try it, and to scale out (using multiple instances) if you need more.
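Since the limits aren't published, one pragmatic option is to measure the effective throughput from inside an instance yourself. A minimal sketch, assuming a large test file at a URL you control (the URL below is a placeholder):

    import time
    import urllib.request

    # Placeholder URL; point it at a large file hosted outside the instance
    # (e.g. in blob storage) so the download actually exercises the network.
    TEST_URL = "https://example.com/testfile-100mb.bin"

    def measure_download_mbps(url: str) -> float:
        """Download the file, discard the bytes, and report megabits/s."""
        start = time.monotonic()
        total = 0
        with urllib.request.urlopen(url) as resp:
            while True:
                chunk = resp.read(1 << 20)  # 1 MiB at a time
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.monotonic() - start
        return (total * 8) / (elapsed * 1_000_000)

    print(f"{measure_download_mbps(TEST_URL):.1f} Mb/s")

Run it a few times at different times of day; on shared tenancy the numbers can vary.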
I hope this helps!
Adding a small detail to @dmatson's answer: right now the SLA covers high availability, not bandwidth, which means you may sometimes see varying numbers. You will need to wait for an official SLA on networking, or scale out/up by instance count or size. A very good FAQ I have found on that topic is here; it covers many networking-related questions.
https://blogs.msdn.microsoft.com/igorpag/2014/09/28/my-personal-azure-faq-on-azure-networking-slas-bandwidth-latency-performance-slb-dns-dmz-vnet-ipv6-and-much-more/
Say I have a bunch of web servers, each serving hundreds of requests per second, and I want to see real-time stats like:
Request rate over the last 5 s, 60 s, 5 min, etc.
Number of unique users seen, again per time window
Or, in general, for a bunch of timestamped events, what's the best way to compute real-time derived statistics?
I've considered having each GET request update a global counter somewhere, then sampling that at various intervals, but at the event rates I'm seeing it's hard to get a distributed counter that's fast enough.
Any ideas welcome!
Added: Servers are Linux running Apache/mod_wsgi, with a Python (Django) stack.
Added: To give a sense of the event rates I want to track stats for, they're coming in at over 10K events/s. Even incrementing a distributed counter at that rate is a challenge.
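For illustration, here is a minimal sketch of the per-server side of the usual alternative to a shared global counter: each server buckets its own events by second, and an aggregator then sums the per-server figures into windowed rates. The class and window sizes are illustrative, not any particular product's API, and the plain set used for uniques would be swapped for a probabilistic structure (e.g. HyperLogLog) at 10K events/s:

    import time
    from collections import defaultdict
    from threading import Lock

    class WindowedStats:
        """Per-second buckets; rates and uniques computed over trailing windows."""

        def __init__(self, max_window: int = 300):
            self.max_window = max_window    # keep 5 minutes of buckets
            self.counts = defaultdict(int)  # second -> event count
            self.users = defaultdict(set)   # second -> unique user ids
            self.lock = Lock()

        def record(self, user_id: str) -> None:
            now = int(time.time())
            with self.lock:
                self.counts[now] += 1
                self.users[now].add(user_id)
                for sec in [s for s in self.counts if s < now - self.max_window]:
                    del self.counts[sec]    # evict buckets older than the window
                    del self.users[sec]

        def rate(self, window: int) -> float:
            """Average events/s over the trailing `window` seconds."""
            now = int(time.time())
            with self.lock:
                return sum(c for s, c in self.counts.items() if s > now - window) / window

        def uniques(self, window: int) -> int:
            """Distinct user ids over the trailing `window` seconds."""
            now = int(time.time())
            with self.lock:
                seen = set()
                for s, ids in self.users.items():
                    if s > now - window:
                        seen |= ids
                return len(seen)

    stats = WindowedStats()
    stats.record("user-42")
    print(stats.rate(60), stats.uniques(60))

Because each server only touches its own local buckets, nothing contends on a shared counter, and the aggregator can poll at whatever interval the dashboard needs.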
You might like to help us try out the beta of our agent for application performance monitoring in Python web applications.
http://newrelic.com
It delves more into application performance than the web server itself, but since any bottlenecks generally aren't going to be in the web server but in your application, that is going to be more useful anyway.
Disclaimer: I work for New Relic, and this is the project I am working on. It is a paid product, but during the beta it is free with all features. Later, when that changes, if you don't want to pay for it there is still a free Lite subscription level that gives you basic web metrics reporting, which still covers some of what you are after. Anyway, right now would be a great opportunity to use it to debug your performance while you can.
Virtually all good servers provide this kind of functionality out of the box. For example, Apache has the mod_status module and GlassFish supports JMX. Furthermore, there are many commercial packages for monitoring clusters, such as Hyperic and Zenoss.
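As an illustration of the mod_status route: once the module is enabled, Apache serves a machine-readable status page that is easy to poll and graph. A minimal sketch, assuming /server-status is enabled and reachable from the polling machine (some fields, like ReqPerSec, additionally require ExtendedStatus On):

    import urllib.request

    STATUS_URL = "http://localhost/server-status?auto"  # ?auto = machine-readable

    def apache_status() -> dict:
        """Fetch mod_status output and parse its 'Key: value' lines."""
        with urllib.request.urlopen(STATUS_URL) as resp:
            text = resp.read().decode("ascii", errors="replace")
        stats = {}
        for line in text.splitlines():
            if ": " in line:
                key, value = line.split(": ", 1)
                stats[key] = value
        return stats

    s = apache_status()
    print("req/s:", s.get("ReqPerSec"), "busy workers:", s.get("BusyWorkers"))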
What web or application server are you using? It is difficult to provide a solution without that information.
Look at using WebSockets; their overhead is much smaller than that of an HTTP request, and they are very well suited to real-time web applications. See http://nodeknockout.com/ for Node-based WebSocket examples.
http://en.wikipedia.org/wiki/WebSocket
You will need to run a separate daemon if you want to use them alongside your Apache server.
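For example, here is a minimal sketch of such a daemon in Python, pushing a live counter to every connected browser once a second. It assumes the third-party websockets package (version 14+, where handlers take a single argument), and the port and payload are illustrative; you would feed in real stats instead of the demo counter:

    # pip install websockets  (sketch assumes websockets >= 14)
    import asyncio
    import json
    import websockets

    clients = set()
    counter = 0  # stand-in for a real request/event counter

    async def handler(ws):
        """Track connected browsers so the broadcaster can reach them."""
        clients.add(ws)
        try:
            await ws.wait_closed()
        finally:
            clients.discard(ws)

    async def broadcast_stats():
        global counter
        while True:
            counter += 1
            message = json.dumps({"events_total": counter})
            for ws in set(clients):
                try:
                    await ws.send(message)
                except websockets.ConnectionClosed:
                    clients.discard(ws)  # drop browsers that went away
            await asyncio.sleep(1)

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await broadcast_stats()

    asyncio.run(main())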
Also take a look at http://kaazing.com/ if you want less hassle but are willing to fork out some cash.
On the Windows side, Performance Monitor is the tool you should investigate.
As Jared O'Connor said, you should specify what kind of web server you want to monitor.
What is the limit of IIS 6.0? For example, if I needed to host 100,000 or 200,000 websites on IIS 6.0, how many machines would I need? Or would IIS 7 be a better choice in this case for some reason?
As mentioned in the comments above, the limiting factor isn't so much the number of websites you create in IIS as how complex and how busy those sites are.
In IIS 6 one website does not necessarily equate to one executing process on the server. Application pools can group multiple websites into a single executing process to group and/or isolate applications. Alternatively, a single app pool can spawn multiple executing processes to make better use of the server hardware.
It might help if you provided more detail in your question about what exactly you're trying to accomplish. If you're going to serve hundreds of thousands of sites, it would probably be a good idea to partner with a hosting company, or to get assistance from someone who knows the ins and outs of IIS (or another platform) and has operational experience with large-scale hosting scenarios.
IIS 7 is not radically different from IIS 6 in any performance-related way, with one exception: you can run ASP.NET in a "native" (integrated) pipeline mode that bypasses some processing steps. I prefer IIS 7 (when I can choose) because of its manageability advantages. But as everyone else has said here, the question is impossible to answer without more information.
Hosting that many websites with IIS will be cost-prohibitive in licensing fees. Most large scale web hosting is done on Linux using Apache.
I'd like to start a free budget/personal-finance site and will need plenty of horsepower and storage. I'm definitely a newbie, so how does one get started in terms of hardware infrastructure? Do I need to get a dedicated IP from my ISP and obtain my own servers? Do I go with Amazon, or SQL Server Data Services/Azure, or something like that? Are the latter services free, or is a discounted offering available for non-profit/free services such as the budget/personal-finance site I'm looking to start?
If you don't mind writing your web application in Python, then I'd suggest using Google App Engine. See: What Is Google App Engine?
What I like to do when I have new ideas for a site is to find an inexpensive hosting solution ($10 per month). This allows me to test the idea and see if the site is going to be successful. If it is a flop, I haven't wasted much money and if it is successful I can upgrade to better hosting (dedicated server).
There are many hosting options available and several of them have great tools such as an online SQL Server management studio. Your other option would be to host it yourself if you are prepared to deal with firewall issues, backups, storage, etc.
Whether DIY is feasible varies a lot by country... if you have a decent broadband connection with a fixed IP, this can be the cheapest route to play around with at first, especially if you need an awful lot of storage.
Note, however, that many fast broadband connections are only fast for downloads; when you're running a server, the speed your users see is your upload speed, which is usually much lower. Also, you'll need to do your own administration, backups, etc.
Apart from this, most hosting options carry a price tag on top, ranging from virtual hosts (sharing a real machine), to colocation (your machine in somebody else's data center), to cloud services like Amazon et al. (which scale well), and you will need to shop around for the software stack and hardware features you really need.
There are really two ways to answer this question, and what differentiates them is budget.
One is to properly design the solution: prototype it, benchmark the prototype, extrapolate the anticipated user load, add overhead, and scale accordingly. This takes time and costs money, but it gives you a supportable solution that serves your customers well.
The other is to just give something, anything, a go and fix the problems as they come along. This is quicker and cheaper, but it might be a headache for a while and might p*** off your customers.
Basically it comes down to budget.
Best of luck.