How do I increase concurrent requests on a DigitalOcean Droplet? - linux

I have optimized a DigitalOcean Droplet (Ubuntu 12.04) using Varnish in front of Apache to serve thousands of web requests per second. When I SSH into my droplet and run
ab -n 100 -c 100 FULL_URL
I get 2000-3000 requests per second, even when I raise the concurrency. Varnish is working great.
However, when I run the same ApacheBench command from my local computer, I get all kinds of timeouts whenever concurrency goes above 20 or 30.
Why can my site handle hundreds of concurrent requests locally but only 20 or 30 through the Internet?
I have followed the guidelines in this blog post: http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/, thinking that the problem is with the OS TCP/IP settings, but even after a droplet reboot there is no change.
Can someone help me?
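For reference, the kind of kernel tuning that blog post walks through is applied with sysctl. A minimal sketch follows; the values are illustrative examples, not the post's exact numbers:

# Illustrative TCP tuning via sysctl; tune the values for your own workload
sudo sysctl -w net.core.somaxconn=1024             # deeper listen backlog
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096   # queue more half-open connections
sudo sysctl -w net.ipv4.tcp_fin_timeout=15         # recycle FIN_WAIT sockets sooner
sudo sysctl -w net.ipv4.ip_local_port_range="10240 65535"  # widen ephemeral port range
# Persist across reboots by putting the same keys in /etc/sysctl.conf, then:
sudo sysctl -p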

Related

How could I display memory and CPU usage of my Minecraft server on my website

So I am trying to display the memory and CPU usage of my Minecraft server on my website, but I don't know how to do that. I have searched on YouTube but haven't found anything.
There are several approaches you can take.
Linux / Hosting
If your hosting company gives you a nice-looking website/panel with statistics, you could scrape those statistics from that area using some sort of bot. If they don't, you could look at installing a plugin, or creating one, such as Lag Monitor.
They may be using Multicraft; if so, the statistics will be at the top of the panel, provided they have some measurement for them.
If you host the Minecraft server in a Docker container, then you should have a look at docker stats (illustrated in the sketch below).
If you run the Minecraft server directly on the system as a service (systemctl), then refer to Retrieve CPU usage and memory usage of a single process on Linux?
You would need to create a script that collects these values, formats them, and returns them. You could publish the statistics in near real time over some sort of socket connection, such as socket.io.
If that is not available, you could run a small API server wherever the Minecraft server runs (if on your own machine) that executes these commands, and let your website fetch the results periodically or on page load.
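As a rough illustration of such a script (a minimal sketch; the process lookup, output path, and JSON shape are assumptions, not a tested setup):

# stats.sh: capture CPU/memory of the Minecraft java process as JSON
PID=$(pidof java)                                  # assumes a single java process
read CPU MEM <<< "$(ps -p "$PID" -o %cpu=,%mem=)"  # '=' suppresses the headers
echo "{\"cpu\": $CPU, \"mem\": $MEM}" > /var/www/html/stats.json

# For a dockerized server, docker stats reports the same numbers per container:
docker stats --no-stream --format "{{.Name}}: {{.CPUPerc}} CPU, {{.MemUsage}}"

Your website could then fetch stats.json on page load, or the script could push the values over a socket instead of writing a file.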
Windows
If you are hosting your Minecraft server on Windows, you are doing something wrong: getting memory and CPU usage would be the least of your problems in this case, and you should look into getting some proper hosting for your server.
The same goes if you are running the server on your own computer on your own network: unless you have the experience and knowledge to do that safely, you should definitely migrate to a Linux-based hosting solution such as a VPS.
TL;DR:
Get a VPS, set up an API server, and get the statistics from that. There is probably no single tutorial for you to follow.

Check if my Node.js server is running remotely

I have a Node.js server running a website remotely on a Windows 10 machine, but the machine is sometimes turned off and I don't find out that the website is down.
I was thinking of creating a Node.js app running on Heroku that sends a request to the website on the Windows machine every 5 minutes and notifies me via email if it gets no response. However, I wanted to know if there are better options for situations like this.
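The approach described can be sketched with nothing more than cron and curl on any always-on machine; the URL and email address below are placeholders, and a working mail command is assumed:

# Drop into cron, e.g.:  */5 * * * * /usr/local/bin/check_site.sh
if ! curl -sf --max-time 10 http://your-site.example/ >/dev/null; then
    echo "Site unreachable at $(date)" | mail -s "Site down" you@example.com
fi

The -f flag makes curl treat HTTP error codes as failures, so a 500 response triggers the alert as well as a dead host.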

Is there a difference between ab testing on localhost and a hostname?

I test my website using ab as ab -n 10000 -c 1000 http://example.com/path and I get about 160 requests/second. But when I test it as ab -n 10000 -c 1000 http://localhost/path the result is totally different: about 1500 requests/second.
Why?
Normally you should not run the load generator (ab or any other tool) on the same host as the application under test: load-testing tools are themselves very resource-intensive, and you can end up with the application under test and the load generator fighting over the same CPU, RAM, network, disk, swap, etc.
So I would recommend running ab from another host on your intranet; that way you will get cleaner results, free of the aforementioned mutual interference. Remember to monitor baseline OS health metrics using vmstat, iostat, top, sar, etc. on both the application side and the load-generator side; that should give you a clearer picture of what is going on and what impact the generated load actually has.
You may also want to try a more advanced tool, as ab has quite limited load-testing capabilities; check out the Open Source Load Testing Tools: Which One Should You Use? article for more information on the most prominent free and open-source load-testing solutions (all of the listed tools are cross-platform, so you will be able to run them on Linux).
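For example, one way to capture those baseline metrics on both hosts for the duration of a test run (the intervals and sample counts here are arbitrary):

# Sample system health every 5 seconds, 60 times (~5 minutes), in parallel
vmstat 5 60 > vmstat.log &     # CPU, memory, swap, run queue
iostat -x 5 60 > iostat.log &  # extended per-device disk utilisation
sar -n DEV 5 60 > net.log &    # per-interface network throughput
wait                           # block until all three samplers finish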
From what I understand, you are testing the same website in two different configurations:
http://example.com/path, which tests the remote website from your local computer, and
http://localhost/path, which tests either a local copy of the website on your machine, or the website directly on the machine where it is hosted.
Testing the remote website involves the network connection between your computer and the remote server. When you test locally, everything goes through the loopback interface, which is probably several orders of magnitude faster than your DSL Internet connection.
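You can see that gap directly with curl's timing variables; a quick sketch comparing the two targets from the question:

# Show where the time goes for each target (times in seconds)
for url in http://localhost/path http://example.com/path; do
    curl -o /dev/null -s -w "$url  connect: %{time_connect}  total: %{time_total}\n" "$url"
done

On the loopback interface, time_connect is typically well under a millisecond, while over the Internet it includes the full round-trip latency to the server.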

Nginx country based routing with socket.io

I am trying to run a Node.js + MongoDB application with nginx as a reverse proxy, hosted on DigitalOcean and mLab.
My website will potentially be used from the USA, India, the UK, and some Asian countries.
I created my droplet in DigitalOcean's Bangalore, India region. Config: Ubuntu 14.x, 2 GB RAM, 40 GB disk.
I was very surprised to notice that the performance of the site when accessed from the USA is terrible: it takes around 25 seconds to load, whereas the same URL loads within 6 seconds from Mumbai, India.
A lot of my files are already minified, images are compressed, etc.
So what are my options at this point? I could set up subdomains and have nginx do country-based routing to different servers, but what impact would that have on socket.io?
Would I need nginx on each individual server as well, or just on the routing server? What about nginx caching? In which region should I create the routing server?
Any examples would be greatly appreciated! Thanks in advance.
I ended up using Cloudflare as a CDN and saw a significant improvement in speed.
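If you also go the CDN route, one way to confirm Cloudflare is actually serving assets from its edge cache is to check its response header (the domain and path here are placeholders):

# CF-Cache-Status: HIT means the asset came from Cloudflare's edge cache
curl -sI https://your-site.example/style.css | grep -i cf-cache-status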

ProFTPD incredibly slow SFTP and FTP connections on EC2

I was using DigitalOcean for a long time, but I wanted to give Amazon EC2 machines a shot. I created my environment, but when I set up ProFTPD and configured it as an SFTP server, it transferred files incredibly slowly: bytes per second, not even kbps. It was the same for FTP. I have only had this issue on the Amazon EC2 server; it never happened on DigitalOcean. I have tried everything I found on Google, but nothing helped.
Is there any solution?
