I tested my website with ab as ab -n 10000 -c 1000 http://example.com/path and got about 160 requests/second. But when I test it as ab -n 10000 -c 1000 http://localhost/path the result is totally different: about 1500 requests/second.
Why?
Normally you should not run the load generator (ab or any other tool) on the same host as the application under test: load testing tools are themselves very resource-intensive, and you can end up in a situation where the application under test and the load generator are competing for the same CPU, RAM, network, disk, swap, etc.
So I would recommend running ab from another host on your intranet; that way you will get cleaner results without the aforementioned mutual interference. Remember to monitor baseline OS health metrics using vmstat, iostat, top, sar, etc. on both the application-under-test side and the load generator side - that should give you a clearer picture of what's going on and what the impact of the generated load actually is.
You may also want to try a more advanced tool, as ab has quite limited load testing capabilities; check out the Open Source Load Testing Tools: Which One Should You Use? article for more information on the most prominent free and open source load testing solutions (all the listed tools are cross-platform, so you will be able to run them on Linux).
From what I understand, you are testing the same website in 2 different configurations:
http://example.com/path, which tests the remote website from your local computer,
http://localhost/path, which tests a local copy of the website on your local machine, or tests the site directly on the machine where it is hosted.
Testing your remote website involves the network connection between your computer and the remote server. When testing locally, everything goes through the loopback network interface, which is probably several orders of magnitude faster than your DSL internet connection.
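The size of the gap is roughly what Little's law predicts: in a closed-loop test like ab's, throughput is bounded by concurrency divided by average response time. A small sketch (the latency figures are illustrative assumptions chosen to match the reported numbers, not measurements):

```python
# Rough throughput bound from Little's law: throughput <= concurrency / latency.
# The latencies below are illustrative assumptions, not measured values.

def max_throughput(concurrency: int, avg_latency_s: float) -> float:
    """Upper bound on requests/second for a closed-loop load test."""
    return concurrency / avg_latency_s

# Loopback: near-zero network time, so only server time matters
# (~0.66 s per request at this load level)
print(max_throughput(1000, 0.66))   # -> ~1515 req/s, close to the localhost result

# Over the internet: WAN round trips and a limited uplink stretch each
# request (~6.25 s per request at this load level)
print(max_throughput(1000, 6.25))   # -> 160 req/s, close to the remote result
```

In other words, the same concurrency of 1000 simply cycles through requests far more slowly when each one has to cross the internet.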
So I am trying to display the memory and CPU usage of my Minecraft server on my website, but I don't know how I could do that. I have searched on YouTube but haven't found anything.
There are several approaches, depending on how the server is hosted.
Linux/ Hosting
If you are using a hosting company that gives you a nice-looking website/panel to look at, you could scrape the statistics from that panel using some sort of bot. If they don't provide one, you could look at installing or writing a plugin such as Lag Monitor.
They may be using Multicraft; if so, the statistics will be at the top of the panel, provided it has some measurement for them.
If you host the Minecraft server in a Docker container, then you should have a look at docker stats.
If you host the Minecraft server directly on the system as a service (systemctl), then you should refer to Retrieve CPU usage and memory usage of a single process on Linux?
You would need to create a script to get these values and return them in a formatted way. You could publish the statistics in near real-time using some sort of socket connection like socket.io.
If that is not available, you could instead create a small API server wherever you run the Minecraft server (if on your own machine) that runs these commands and lets your website fetch the results periodically or on page load.
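Assuming a Linux host, a minimal sketch of what such a script would gather, reading straight from /proc as described in proc(5). A library like psutil would be the more robust choice; this just shows the raw mechanics:

```python
# Minimal sketch: memory and CPU usage of one process from /proc (Linux only).
# Field positions follow the proc(5) man page; psutil wraps all of this.
import os

def process_stats(pid: int) -> dict:
    """Return resident memory (kB) and cumulative CPU time (seconds) for pid."""
    rss_kb = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                rss_kb = int(line.split()[1])  # VmRSS is reported in kB
                break
    with open(f"/proc/{pid}/stat") as f:
        # Split off "pid (comm)" first, since comm may contain spaces
        fields = f.read().rsplit(")", 1)[1].split()
    # fields[11] and fields[12] are utime and stime, in clock ticks
    ticks = int(fields[11]) + int(fields[12])
    cpu_seconds = ticks / os.sysconf("SC_CLK_TCK")
    return {"rss_kb": rss_kb, "cpu_seconds": cpu_seconds}

print(process_stats(os.getpid()))
```

An API endpoint for the website would just call this with the Minecraft server's PID and return the dict as JSON.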
Windows
If you are hosting your Minecraft server on Windows then you are doing something wrong. Getting memory and CPU usage would be the least of your problems in this case, and you should look into getting some proper hosting for your server.
The same goes if you are running the server on your own computer on your own network: unless you have the experience and knowledge to do that safely (which it sounds like you don't yet), you should definitely migrate to a Linux-based hosting solution such as a VPS.
TL;DR:
Get a VPS, set up an API server, and get the statistics from that. There is probably no tutorial for you to follow.
I am making a system that needs to be able to determine if a host is reachable or not by pinging it.
As part of a small end-to-end smoke test suite, I want to be able to bring up hosts and tear them down during the test suite, to test that my system responds correctly. Unfortunately, actually spinning up remote hosts and tearing them down is costly and extremely slow.
Is there any way I can mock this in Linux?
Bonus points if this doesn't require running the test suite as root.
My hope is that I can create a few virtual interfaces, assign arbitrary IP addresses to them, and bring them up/down during the test to simulate hosts going down and coming back up. I should even be able to simulate open ports on the hosts using netcat, which would also be tremendously useful. I haven't had any luck figuring this out yet, though (if it's even possible); I suspect my combined Google-fu and network engineering skill points are too low.
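If your system's reachability check can be pointed at a TCP port rather than raw ICMP (ICMP sockets generally need root), one root-free sketch is to stand in for each "host" with a localhost listener and open or close it during the test. This only simulates port-level reachability, not an interface disappearing:

```python
# Sketch: simulate a service going up and down with a localhost listener,
# no root required. True IP-level "host down" simulation (dummy interfaces,
# ip netns) needs elevated privileges; this covers TCP reachability only.
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "Bring up" a fake host: listen on an ephemeral localhost port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
_, port = server.getsockname()

assert port_open("127.0.0.1", port)      # host is "up"
server.close()                           # tear it down
assert not port_open("127.0.0.1", port)  # host is "down"
```

The ephemeral port and the check function here are illustrative; a real suite would hand the allocated host:port pairs to the system under test as its target list.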
Depending on your network requirements, I think Docker could fit your needs.
I am looking for a cloud-based / desktop-based free load testing tool which can test direct URLs with up to 1000 VUs and can provide at least 10 minutes of VU load time. I have a website which can be load tested by directly providing the URLs. I don't have enough time and knowledge of load testing to write a script to test my website. LoadImpact.com is one such tool, however they charge for more than 100 users.
Here you go:
Siege
Gatling
Apache JMeter
Grinder
Tsung
Taurus
Siege and Taurus are probably the easiest, while the others offer record-and-replay functionality and JMeter has a GUI; see Open Source Load Testing Tools: Which One Should You Use? for descriptions, a comparison and sample load tests with results.
This is my first time building out something with multiple servers. I wanted to know if anyone could point me towards a guide for setting up a dev environment (Windows) for a backend that will be set up on multiple servers, i.e. one server for the API, one for another set of processes (e.g. file compression) and one for everything else.
Again, just trying to figure out if it's possible to set up a dev environment to test out the system on my local machine.
Thanks
You almost certainly want to run virtual machines (on something like VMware or VirtualBox) to really test multi-machine stuff. However, I also develop for multiple machines every day (we have an array of app servers, an array of background worker servers, e-commerce servers, cache stores and front proxies), and I still just develop on one virtual machine that has all that stuff running on it. Provided you make hostnames and ports configurable for everything, there's not much difference between localhost port 9000 and some.server.tld port 8080. Actually running all the VMs on a single computer would likely be painful, both in terms of system resources and complexity.
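The "make hostnames and ports configurable" point is the key trick, and environment variables are one common way to do it. A sketch (the variable names and defaults are illustrative, not from any particular framework):

```python
# Sketch: resolve every service endpoint from the environment so the same
# code talks to localhost in dev and real hosts in production.
# Variable names (API_HOST/API_PORT) and defaults are illustrative.
import os

def endpoint(name: str,
             default_host: str = "localhost",
             default_port: int = 9000) -> tuple:
    """Resolve a service endpoint from e.g. API_HOST / API_PORT."""
    host = os.environ.get(f"{name}_HOST", default_host)
    port = int(os.environ.get(f"{name}_PORT", default_port))
    return host, port

# Dev machine: nothing set, everything points at localhost
print(endpoint("API"))  # ('localhost', 9000) when nothing is set

# Production: export API_HOST=some.server.tld API_PORT=8080 and the
# exact same code connects to the real server with no source changes.
```

With every inter-service connection resolved this way, "one VM running everything" and "one process per server" become deployment details rather than code changes.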
There are tools to help with setting up VMs with similar or the same configurations too. Take a look at http://vagrantup.com/ and also http://babushka.me/.
Just my $0.02.
I have a WatchGuard X1250e firewall and a fast network setup at pryme.net in Ashburn, VA. I have Verizon FiOS here at the office (50 Mbit/s) and did a test download from a URL they provided me, and I get 5.8 MB/sec from their test file (probably served off Linux). But from my servers running Windows 2003 behind the firewall (X1250e), using just a normal packet filter for HTTP, not a proxy, with a very basic config, I am only getting 2 MB/sec from my rack.
What do I need to do to serve downloads FAST? If I have a network with no bottlenecks, 50 Mbit/s service to my computer, and GigE connectivity in the rack, where is this slowdown? We tried bypassing the firewall and the problem remains, so it's something in Windows 2003, I presume. Anything I can tweak to push downloads at 6 MB/sec instead of 2 MB/sec?
Any tips or tricks? Things to configure to get better HTTP download performance?
Pryme.net test URL:
http://209.9.238.122/test.tgz
My download URL:
http://download.logbookpro.com/lbpro.exe
Thank you.
It could be the Windows server itself. You could test for bottlenecks in memory, disk access, network access, etc. using PerfMon or MSSPA.