I will be performing a distributed load test using JMeter. I am using the JMeter extras plugin to output some nice graphs but all of these graphs have to do with response times, response latency, throughput, etc. I want to also measure CPU, memory used/free, disk usage/latency, and network utilization, maybe some others.
I will be testing a web application that is running on Ubuntu 14.04.
What tools or commands can I use to gather these stats at various points during the load test and either output the raw data or averages?
Thank you for any information you can provide.
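For reference, one lightweight way to capture these numbers yourself during a run is a small sampler script that you start before the test and stop afterwards. Below is a minimal sketch, assuming the third-party psutil package is installed on the Ubuntu box; the sample interval and output file name are arbitrary choices.

```python
#!/usr/bin/env python
# Rough sketch of a resource sampler to run alongside a JMeter test.
# Assumes the third-party psutil package is installed (pip install psutil);
# the interval and output file name are arbitrary placeholders.
import csv
import time

import psutil

INTERVAL = 5              # seconds between samples
OUTFILE = "sysstats.csv"  # raw data; average or plot it afterwards

with open(OUTFILE, "w") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_used_mb", "mem_free_mb",
                     "disk_read_mb", "disk_write_mb", "net_sent_mb", "net_recv_mb"])
    while True:
        cpu = psutil.cpu_percent(interval=None)
        mem = psutil.virtual_memory()
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        writer.writerow([
            int(time.time()),
            cpu,
            (mem.total - mem.available) // (1024 * 1024),
            mem.available // (1024 * 1024),
            disk.read_bytes // (1024 * 1024),
            disk.write_bytes // (1024 * 1024),
            net.bytes_sent // (1024 * 1024),
            net.bytes_recv // (1024 * 1024),
        ])
        f.flush()
        time.sleep(INTERVAL)
```

The resulting CSV can then be averaged or plotted next to the JMeter results after the run.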
Free and great for high level KPIs. Works within JMeter:
http://jmeter-plugins.org/wiki/PerfMon/
Free/paid and great for detailed low-level analysis (standalone tool):
http://newrelic.com
We use New Relic ourselves and are very satisfied!
I am using Cacti for that; it is relatively easy to install and configure (on CentOS it can be installed with yum from the EPEL repository). It uses SNMP to collect network, CPU, memory, load and similar metrics from the various target servers. To monitor disk I/O there is a great template (https://github.com/markround/Cacti-iostat-templates); if you follow its instructions step by step it will work (at least on CentOS/Red Hat).
What I like about Cacti is that you can also define your own data sources. For example, you can have Cacti execute a shell script on your server that parses your access.log (or any other application log file) and returns metrics like throughput (number of requests, number of bytes) or processing time, and then have these plotted side by side with the device utilization metrics.
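As an illustration of such a custom data source, here is a rough sketch that parses an Apache combined-format access.log and prints space-separated field:value pairs on stdout, which is the form Cacti data input methods read. The log path, the five-minute window and the log layout are assumptions, not part of the templates mentioned above.

```python
#!/usr/bin/env python
# Sketch of a custom Cacti data source: parse an Apache access log and print
# "field:value" pairs on stdout for Cacti to graph.
# Log path, the 5-minute window and the combined-log layout are assumptions.
import re
import time

LOG = "/var/log/apache2/access.log"
WINDOW = 300  # only count the last 5 minutes, matching a 5-minute poll cycle

# e.g. ... [10/Oct/2014:13:55:36 +0000] "GET / HTTP/1.1" 200 2326 ...
LINE = re.compile(r'\[(\d+/\w+/\d+:\d+:\d+:\d+) [^\]]+\] "[^"]*" (\d{3}) (\d+|-)')

now = time.time()
requests = 0
bytes_out = 0
for line in open(LOG):
    m = LINE.search(line)
    if not m:
        continue
    ts = time.mktime(time.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S"))
    if now - ts > WINDOW:
        continue
    requests += 1
    if m.group(3) != "-":
        bytes_out += int(m.group(3))

print("requests:%d bytes:%d" % (requests, bytes_out))
```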
Setting the whole thing up will probably take you a day; it is not very intuitive how to define your own data sources, for example. You also have to enable SNMP on the box, which is easy if you throw out the whole /etc/snmp.conf and use the bare minimum. It is a great tool for capacity management.
I need to measure how many concurrent users my current Azure subscription will accept, to determine my cost per user. How can I do this?
This is quite a big area within capacity planning for a product/solution, but effectively you need to script up a user scenario, say using a tool like JMeter (VS2012 Ultimate has a similar feature), then fire off lots of requests at your site and monitor the results.
Visual Studio can deploy your Azure project in a profiling mode, which is great for detecting the bottlenecks in your code for optimisation. But if you just want to see how many requests per role you can handle before it breaks, something like JMeter should work.
There are also lots of products on offer, like http://loader.io/, which is great because you don't have to worry about bandwidth issues, scripting, etc., and it should just work.
If you do roll your own manual load testing scripts, be careful to avoid false negatives or false positives. By this I mean that if your internet connection is slow and you send out millions of requests, your own bandwidth may cause the site to appear VERY slow, when in fact it's not your site at all...
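As a very rough illustration of the manual approach, the sketch below fires a fixed number of concurrent requests at a single URL and reports throughput and latency percentiles. The URL, worker count and request total are placeholders, and a real test should script a full user scenario as described above.

```python
#!/usr/bin/env python3
# Very rough manual load test: hammer one URL with a thread pool and report
# throughput, error count and latency percentiles. All constants are placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # system under test (placeholder)
WORKERS = 50
REQUESTS = 1000

def hit(_):
    start = time.time()
    try:
        with urlopen(URL, timeout=30) as resp:
            resp.read()
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

t0 = time.time()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))
elapsed = time.time() - t0

latencies = sorted(t for ok, t in results if ok)
errors = sum(1 for ok, _ in results if not ok)
print("throughput: %.1f req/s, errors: %d" % (len(latencies) / elapsed, errors))
print("median: %.0f ms, p95: %.0f ms" % (
    statistics.median(latencies) * 1000,
    latencies[int(len(latencies) * 0.95)] * 1000))
```

Remember the caveat above: run this from a machine with enough bandwidth (ideally in the same data center), or the numbers will reflect your connection rather than the site.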
This has been answered numerous times. I suggest searching [Azure] 'load testing' and starting to read. You'll need to decide between installing a tool on a virtual machine or Cloud Service (Visual Studio Test, JMeter, etc.) and subscribing to a service (LoadStorm)... For the latter, if you're focused on maximum app load, you'll probably want to use a service that runs within Azure, and make sure they have load generators in the same data center as your system under test.
Announced at TechEd 2013, the Team Foundation Test Service will be available in Preview on June 26 (coincident with the //build conference). This will certainly give you load testing from Azure-based load generators. Read this post for more details.
I'm looking for a simple self hosted website monitoring tool.
It should be something similar to watchmouse.com or pingdom.com, with a nice UI, colorful charts and so on (customers like that :)).
At the moment we also use Zabbix for HTTP monitoring, but since our hosting provider now takes care of the hardware and software monitoring directly on the machine, we don't need Zabbix anymore.
For pure HTTP monitoring, Zabbix or another full monitoring suite is really overkill.
So what I'm not looking for is:
Zabbix
Nagios
Hyperic
...
Sad but true: after some hours of research I wasn't able to find a fitting application. My hope now rests on you.
I realize this is an old question, but I was looking for something like this today and came across Cabot, which is self-hosted and free and, according to the project's description, "provides some of the best features of PagerDuty, Server Density, Pingdom and Nagios".
Hope this helps someone in the future.
I found this a while ago for my purposes. Nice and simple and self hosted.
You do need shell access to set up cron jobs for it, so it probably won't work in a shared environment.
php Server Monitor
Hope this helps.
Peter
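If, while evaluating the tools suggested here, all you need is a bare-bones check you can hang off cron yourself, something along these lines works as a stopgap. The URLs, alert address and the mail(1) call are placeholders and are not part of php Server Monitor or any of the other tools mentioned.

```python
#!/usr/bin/env python3
# Bare-bones uptime check meant to be run from cron every few minutes.
# URLs, recipient and the mail(1) command are placeholders; swap in whatever
# alerting mechanism you already have.
import subprocess
import urllib.request

SITES = ["https://example.com/", "https://example.org/status"]
ALERT_TO = "ops@example.com"

for url in SITES:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            if resp.status == 200:
                continue
            problem = "HTTP %d" % resp.status
    except Exception as exc:
        problem = str(exc)
    # Relies on a local MTA providing the classic mail(1) command.
    subprocess.run(["mail", "-s", "%s is down: %s" % (url, problem), ALERT_TO],
                   input=("Check failed for %s\n" % url).encode(), check=False)
```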
I had a lot of success with Groundwork in the past. It's a BEAST: it does just about everything imaginable and can be configured in so many ways. It might be overkill if you are just looking for something to schedule some HTTP checks and then graph the logs.
Groundwork is more for enterprise-level deployments and has both paid and community editions, with a pretty active community behind it too.
Not sure if you have already found a solution to this or not, but give Apica Systems' Synthetic Monitoring a shot. You can use the full SaaS, full on-premise, or hybrid model of this system. Take a look at the free trial and, if you like what you see, the full portal as well as the monitoring agents (with many more features than the trial) can be hosted behind your firewall in your own network. As for monitoring, you can monitor websites/mobile apps, API endpoints, DNS, etc. You can also run complex use cases and see how the web app responds using Selenium or ZebraTester scripts.
If all you want to monitor is website uptime/downtime and response time, I'd have a look at TurboMonitor - it doesn't have all the bells and whistles provided by some other monitoring websites but it's quick and accurate for those two things.
Price-wise, I wouldn't take what they have on their website too seriously. I only actually found out about them when I met them in person, and they were very happy to give me a "professional" account for free, which is supposedly around 5€/month on their website.
I'm looking for a web host that will let me run a Haskell web application. VPSes seem attractive to me because you can run essentially anything you want, but some of the cloud hosts offer really nice scalability in terms of hard disk space and bandwidth.
Does anyone know of a host that will let me run exotic languages like Haskell but can also seamlessly scale up the hard disk space/RAM/bandwidth/CPU available to my host?
If you just want very simple hosting with CGI, NearlyFreeSpeech.net supports Haskell and some other less common languages. I personally also like their overall nonsense-free approach and sensible pricing model (pre-pay metered charges, instead of the usual model of a fixed monthly charge, oversold server capacity, and absurd overage fees).
There are a few caveats however, mainly that they don't permit standalone servers or persistent daemons, only things invoked via CGI from Apache. This might be a problem for some Haskell web app frameworks.
Maybe this is obvious, but you can always use Amazon EC2. You'll have full control, and it definitely meets your requirement for seamless scaling.
This may be a very late answer but I found that hosting on Heroku with its Cedar stack is the easiest. Yesod has a very clear explanation.
Apparently, it's possible to get ghc running on Webfaction. There are also threads about it in the Webfaction support forums, and the admins/techs are quite willing to make an effort to make it work, though it's clearly not something that is supposed to be available out of the box.
EDIT, 2011-08-23: Fixed link.
In theory all you need is CGI/FastCGI support. I've had some luck playing around with Happstack on a very basic Dreamhost account by following these instructions:
While non-trivial to get running, this web experiment proves that it is at the very least possible to run Happstack applications on cheap hosting providers such as Dreamhost with little more than a shell account and CGI support.
I've only tried this with toy applications, and don't know how it would scale.
Looks like you can also run Haskell in Azure Functions.
If you are using IHP (Integrated Haskell Platform), you can use their free cloud hosting service at https://ihpcloud.com/.
I'm going to build a high-performance web service. It should use a database (or any other storage system), some processing language (either scripting or not), and a web-server daemon. The system should be distributed to a large amount of servers so the service runs fast and reliable.
It should replicate data to achieve reliability, and at the same time it must provide distributed computing features in order to process large amounts of data (primarily, queries on large databases that won't survive being executed on a single server with a suitable level of responsiveness). Caching techniques are outside the scope of this question.
Which cluster/cloud solutions should I take into consideration?
There are plenty of Single-System-Image (SSI) solutions, clustering file systems (which can be part of the design), projects like Hadoop, BigTable clones, and many others. Each has its pros and cons, and the "about" page always says the solution is great :) If you've tried to deploy something that addresses this subject, please share your experience!
UPD: It's not file hosting and not a game, but something fairly interactive. You can take Stack Overflow as an example of such a web service: small pieces of data, semi-static content, intensive database operations.
Cross-Post on ServerFault
You really need a better definition of "big". Is "big" an aspiration, or do you have hard numbers that your marketing department reckons they'll have on board?
If you can do it using simple components, do so. The likes of Cassandra and Hadoop are neither easy to set up (especially the latter) nor easy to develop for; developers who will be able to build such an application effectively will be very expensive and difficult to hire.
So I'd say: start off using your favourite "traditional" database with an appropriate high-availability solution, then wait until you get close to the limit (you can always measure where the limit is on your real application, once it's built and you have a performance test system).
Remember that Stack Overflow uses pretty conventional components, simply well tuned, with a small amount of commodity hardware. This is fine for its scale but would never work for, e.g., Facebook; the developers knew that the audience of SO was never going to reach Facebook levels.
EDIT:
When "traditional" techniques start failing, e.g. you reach the limit of what can be done on a single database instance, then you can consider sharding or doing functional partitioning into more instances (again with your choice of HA system).
The only time you're going to need one of these "NoSQL" systems (e.g. Cassandra) is if you have a homogeneous data store with very high write and availability requirements; even then you could probably still solve it by sharding conventional systems, as others (even Facebook) have done at times.
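To make the sharding suggestion concrete, here is a minimal sketch of hash-based sharding over a few conventional database instances; the DSNs and the example key are made up for illustration.

```python
# Minimal illustration of hash-based sharding over several conventional
# database instances. The DSNs and the example key are hypothetical; the point
# is only that a stable hash of the shard key picks which instance to talk to.
import hashlib

SHARDS = [
    "postgresql://db0.internal/app",
    "postgresql://db1.internal/app",
    "postgresql://db2.internal/app",
]

def shard_for(key: str) -> str:
    """Map a shard key (e.g. a user id) to one database DSN, stably."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# e.g. all queries for user 42 always go to the same instance:
# conn = connect(shard_for("user:42")); conn.execute("SELECT ... WHERE user_id = 42")
```

Note that with plain modulo hashing, adding a shard moves most keys to new instances; consistent hashing or an explicit lookup table is the usual fix once resharding matters.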
It's hard to make specific recommendations since you've been a bit vague, but I would recommend Google App Engine for basically any web service. It's easy to use and is built on Google's architecture, so it's fast and reliable.
I'd like to recommend Stratoscale Symphony. It's a private cloud service that does it all; everything you just mentioned, this service provides. Their Symphony products deliver the public cloud experience in your enterprise data center. If that's what you're looking for, I suggest you give it a shot.
I used to be on a shared host and I could use their standard tools to look at a bandwidth graph.
I now have my sites running on a dedicated server and I have no idea what's going on :P sigh
I have installed Webmin on my Fedora Core 10 machine and I would like to monitor bandwidth. I was about to set up the bandwidth module when it gave me this warning:
Warning - this module will log ALL network traffic sent or received on the selected interface. This will consume a large amount of disk space and CPU time on a fast network connection.
Isn't there anything I can use that is more lightweight and suitable for a noob? 'cough' free tool 'cough'
Thanks for any help.
vnStat is about as lightweight as they come. (There's plenty of front ends around if the graphs the command line tool gives aren't pretty enough.)
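If even installing a package feels like too much, the same idea (sampling the kernel's per-interface byte counters and diffing) fits in a few lines. A rough sketch, assuming a Linux /proc/net/dev layout and an interface named eth0:

```python
#!/usr/bin/env python
# Crude bandwidth sampler: read the interface byte counters from /proc/net/dev
# twice and print the transfer rates. Interface name and interval are assumptions;
# vnStat/munin/mrtg do the same thing properly, with history and graphs.
import time

IFACE = "eth0"
INTERVAL = 5  # seconds

def counters(iface):
    for line in open("/proc/net/dev"):
        if line.strip().startswith(iface + ":"):
            fields = line.split(":", 1)[1].split()
            return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise SystemExit("interface %s not found" % iface)

rx1, tx1 = counters(IFACE)
time.sleep(INTERVAL)
rx2, tx2 = counters(IFACE)
print("rx: %.1f KB/s  tx: %.1f KB/s" % ((rx2 - rx1) / 1024.0 / INTERVAL,
                                        (tx2 - tx1) / 1024.0 / INTERVAL))
```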
I use munin. It makes pretty graphs and can set up alerts if you're so inclined.
Unfortunately this is not for *nix, but I have an automated process for analysing my IIS logs that moves them off the web server and analyses them with Web Log Expert. Provided the appropriate counter is turned on, it gives me the bandwidth consumed for every element of the site.
The free version of their tool won't allow scripting but it does the same analysis. It supports W3C Extended and Apache (Common and Combined) log formats.
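A similar per-URL bandwidth figure can also be approximated directly from an Apache common/combined log with a short script. A rough sketch; the log path and field positions assume the default combined format, so adjust them for W3C/IIS logs:

```python
#!/usr/bin/env python
# Same idea as the GUI analysers, reduced to a sketch: sum the response bytes
# per URL from an Apache common/combined log and print the top consumers.
import collections
import sys

totals = collections.Counter()
for line in open(sys.argv[1] if len(sys.argv) > 1 else "access.log"):
    parts = line.split('"')
    if len(parts) < 3:
        continue
    request = parts[1].split()            # e.g. GET /index.html HTTP/1.1
    status_bytes = parts[2].split()       # e.g. 200 2326
    if len(request) < 2 or len(status_bytes) < 2 or status_bytes[1] == "-":
        continue
    totals[request[1]] += int(status_bytes[1])

for url, total in totals.most_common(20):
    print("%10.1f MB  %s" % (total / 1024.0 / 1024.0, url))
```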
Take a look at mrtg. It's fairly easy to set up, runs a simple cron job to collect snmp stats from your router, and shows some reasonable and simple graphs. Data is stored in an RRD database (see the mrtg page for details) and can be mined for other uses as well.