Does user network speed have an impact on web server performance? [closed] - performance-testing

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
The Slowloris attack overwhelms an unpatched web server simply by stretching out the time it takes to finish making a request, then repeating that across many connections, thus tying up all the resources available to respond.
It follows, then, that many users connecting from geographic locations with poor internet connectivity should be similarly detrimental to performance.
Is this really the case? What is the phenomenon called? What is a good way to simulate this with a load testing tool?

Anyone who opens a connection to the server takes a connection out of the web server's connection pool. A normal user makes a request, quickly gets the response and closes the connection (as long as the browser doesn't send a Connection: keep-alive header).
The point of a Slowloris DoS attack is to use up all the connections and transfer the data at a minimal speed (e.g. 1 byte per second), so a request which normally finishes in a couple of seconds will "hang" for several hours.
It shouldn't have an impact on the server's performance as such, i.e. it will continue serving other users normally; however, the server can run out of available connections, and may also run out of memory, given that it keeps the response in memory until the connection is released.
You can use any of the tools listed under the Similar software section of the Slowloris Wikipedia article.
If you're looking for a load testing tool which can simulate slow connections in addition to the "normal" load testing features, you can take a look at Apache JMeter in general, and at the How to Simulate Different Network Speeds in Your JMeter Load Test article in particular.
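If you want to see the effect outside of JMeter, a throwaway slow client is easy to script. Below is a minimal sketch (TypeScript on Node.js; the host, port, and connection counts are made-up placeholders) that holds a number of connections open by dripping header bytes very slowly. Only point it at a server you own and are allowed to test.

```typescript
// Minimal slow-client sketch for load-testing YOUR OWN server: it opens a
// handful of connections and drips header bytes slowly, so each connection
// stays occupied the way a client on a very slow link would.
import * as net from "net";

const HOST = "localhost";        // placeholder: your test server
const PORT = 8080;               // placeholder
const CONNECTIONS = 50;          // how many slow clients to simulate
const DRIP_INTERVAL_MS = 10_000; // send a few bytes every 10 seconds

function openSlowConnection(id: number): void {
  const socket = net.connect(PORT, HOST, () => {
    // Start a request but never finish the headers.
    socket.write(`GET /?slow=${id} HTTP/1.1\r\nHost: ${HOST}\r\n`);
    const timer = setInterval(() => {
      // Drip one harmless header line per interval to keep the connection busy.
      socket.write(`X-Padding-${Date.now()}: 1\r\n`);
    }, DRIP_INTERVAL_MS);
    socket.on("close", () => clearInterval(timer));
  });
  socket.on("error", () => socket.destroy());
}

for (let i = 0; i < CONNECTIONS; i++) {
  openSlowConnection(i);
}
```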

Related

nginx reverse proxy or nodejs fetch? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
In order to deal with the website's CORS problem, there are two options for me:
- An nginx reverse proxy.
- The website fetches from my Node.js server; Node.js fetches the target and sends back only the part of the data I need, which would be roughly a 50% reduction in size.
Considering the user experience and the efficiency of the server, which approach is better?
Thank you so much.
Reasons to use the generic reverse proxy:
You have a lot of different requests you need this for so it's simpler to have just one proxy that handles everything.
Using nginx moves all the work of processing the proxied requests to a separate process, so it doesn't add any load to your nodejs server.
nginx might be more scalable than your own code in nodejs, simply because nginx is highly optimized for exactly this kind of thing. But if that were a meaningful driver for your decision, you would have to measure to see whether it is really the case (it's impossible to predict for sure without measuring).
Reasons to have your nodejs server do the work:
You can do application-specific authentication before allowing a proxy request to be used.
You can design application-specific requests where your client specifies only what it needs to, and your nodejs server then translates that into the 3rd-party website request, filling in some parts with defaults, etc.
You can trim the proxy response to just the data your client needs, speeding up client responses and trimming server bandwidth costs.
You can combine multiple requests to the target into a single request/response between client and your nodejs server, allowing you to create more efficient requests, particularly on mobile or slow links.
Which is better depends entirely upon the specific requests and what exactly you're optimizing for. We can't name one as better than the other: each has advantages and disadvantages, so it ultimately comes down to which of those matter more to you in your system.
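To make option 2 concrete, here is a minimal sketch (TypeScript on Node 18+, with a made-up upstream URL and field names) of a proxy endpoint that fetches the target, trims the payload down to the fields the client needs, and adds the CORS header itself:

```typescript
// Sketch of a Node.js endpoint that proxies a third-party API and returns
// only the fields the client actually needs. The upstream URL, route, and
// field names below are placeholders, not a real API.
import * as http from "http";

const UPSTREAM = "https://api.example.com/items"; // placeholder upstream URL

const server = http.createServer(async (req, res) => {
  if (req.url?.startsWith("/api/items")) {
    try {
      const upstreamRes = await fetch(UPSTREAM); // global fetch, Node 18+
      const full = (await upstreamRes.json()) as Array<Record<string, unknown>>;
      // Trim the payload: keep only the fields the client needs.
      const trimmed = full.map(({ id, name }) => ({ id, name }));
      res.writeHead(200, {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*", // this is what solves the CORS problem
      });
      res.end(JSON.stringify(trimmed));
    } catch {
      res.writeHead(502).end("upstream error");
    }
  } else {
    res.writeHead(404).end();
  }
});

server.listen(3000);
```

An nginx reverse proxy would replace all of this with a few proxy_pass and add_header lines, at the cost of losing the application-specific trimming.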

Is an HTTP Server a Good Idea For IPC? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I need to implement a tool which gets its data from running processes, using a POST/GET mechanism. The number of connections between the client and server is quite low: one request per minute on average.
The default way is to implement a simple server based on sockets and so on. However, I find it too much work for such a simple thing. There are plenty of tools out there that can create an HTTP server with only 5 lines of code (Perl Dancer, for example). Interfacing with them is fast and easy. Adding new functionality is as easy as it gets. Resource-wise, they are pretty lightweight.
Is an HTTP server a bad idea for such a task (overhead-wise)? Is there a simple RESTful framework for IPC similar to Dancer/node.js?
Thanks!
HTTP is, basically, a stateless protocol. You request something, the server replies, and that's the end of it. HTTP 1.1 has changed parts of the implementation, mainly for performance reasons, but it hasn't really changed the "client sends - server answers - transaction finished" pattern. This means that if you want to implement locking mechanisms, synchronization, or transactions started by the server, you'll have to do a lot of coding to make HTTP do what it wasn't designed for.
That doesn't mean there's no way to do those things, just that you might have to do a lot of coding to make the HTTP server do what you need. In the long run, it'll be easier to build a server for your specific needs than to keep bending an HTTP server into a shape it wasn't designed for.
Of course, if what HTTP can do is sufficient for your current needs, and if you're in an environment where quick coding matters more than performance and long-term maintainability, use an HTTP server. If you can do the job in one day and know in advance that the requirements won't change much, it doesn't make much sense to spend ten days on a solution that has 3% better performance and 2% better maintainability.
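As a point of comparison with the "5 lines of Perl Dancer" idea, a sketch of the same kind of HTTP-based IPC in Node.js/TypeScript (the route, port, and payload shape are assumptions) could look like this; at one request per minute the overhead is negligible:

```typescript
// Minimal sketch of HTTP used as simple IPC: processes POST their data once a
// minute, and a client can GET the latest snapshot. Route and payload shape
// are illustrative assumptions.
import * as http from "http";

const latest: Record<string, unknown> = {}; // last report per process id

const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/report") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { pid, data } = JSON.parse(body); // assumes a { pid, data } JSON payload
      latest[pid] = data;
      res.writeHead(204).end();
    });
  } else if (req.method === "GET" && req.url === "/report") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(latest));
  } else {
    res.writeHead(404).end();
  }
});

server.listen(8125, "127.0.0.1"); // bind locally: this is IPC, not a public API
```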

What are security risks when running an Erlang cluster? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
It's more of a general question about what one has to look out for when running an Erlang system. For example, I know of atom exhaustion attacks. What other attacks are possible, and how can you make your system more secure?
Running a cluster means the nodes are sharing a cookie, and anyone who knows the cookie can attach to any of your nodes (assuming they can reach your network) and execute arbitrary Erlang commands or programs.
So my thought is that "clustered" means there are at least two files (and some number of people) that know what the cookie is (or where to find it).
I would be afraid of bugs in the applications deployed on your system. A good example from OTP is the SSL app, which was completely rewritten 3 years ago. The next would be the HTTP client and its memory leaks. Xmerl was never a strong part of the system.
Also, be careful with 3rd-party Erlang apps: new web servers (probably better than inets, but if you don't need all the performance, consider the stable Yaws), ejabberd (a number of techniques that hit the OS directly), Riak (interaction with the filesystem, ulimit, iostat, etc.).
First of all, you want to have your cluster inside a closed VPN (if the nodes are far apart and perhaps communicate over a WAN). Then, you want to run them on top of a hardened UNIX or Linux. Another strong idea is to close off epmd connections to your cluster, even from someone who has got the cookie, by using net_kernel:allow(Nodes). One of the main weaknesses of Erlang VMs (I have come to realise) is memory consumption: if an Erlang platform is providing a service to many users and is NOT protected against DoS attacks, it is left really vulnerable. You need to limit the number of allowed concurrent connections for the web servers so that you can easily block out some script kiddies in the neighbourhood. Another situation is having a distributed/replicated Mnesia database across your cluster: Mnesia replicates data, but I'm not sure whether that data is encrypted. Lastly, ensure that you are the sole administrator of all the machines in your cluster.

File Server Design Approaches [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I have been working on designing a file server that could take load off the primary website and serve images/files over the web to the client.
Primary goals of the file server:
- Take load off the primary server hosting the site
- Reuse the existing web server code base and avoid duplication of code/logic, for better maintainability
- Be scalable as downloads increase
- Hide the real download URL path from the user
Keeping the above in mind, I could come up with two approaches; a sequence diagram representation of the two approaches is included for ease of understanding (apologies for the skewed use of sequence diagrams). Neither approach satisfies all of my goals.
Which of these approaches would you recommend considering my goals?
Is there a better third approach?
Some of the differences I could think of:
- Approach #1 would result in duplicated BL code, causing maintainability issues
- Approach #2 would reuse code and centralize the BL, reducing maintainability issues
- Approach #1 would reduce network calls, while #2 increases them
The concepts of file servers, download scalability and bandwidth distribution have all been around for a while now. Please share your thoughts!
UPDATED:
Approach #1 looks very attractive as it takes the load off the primary server completely. The only issue to address in #1 is the code duplication and the maintainability problems that come with it. This could be overcome by having just one project for the BL/DAC, comprising the functionality required by both the web service and the file server, and referencing that assembly/library in both the web service and file server projects. Now there is only one BL/DAC codebase to maintain, and it also avoids the network calls of approach #2.
By serving images/files to the client, I assume you mean static files: css, js, etc.
Most of the time, a simple solution is the best solution. Just host them on a different server under a different subdomain, e.g. http://content.mydomain.com/img/xyz.jpg. You could host them at a data centre on a dedicated server, giving you performance (close to the backbone), and you could load-balance the URL across 2+ servers at 2+ different data centres, giving you resilience and scalability.
Your maintenance task is then having to do a find-and-replace when promoting your site to live, to swap the dev/uat content paths for the live content path (though you'd only need to do this in css files, as you could store the paths for content used within aspx files as config data).
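To make the "hide the real download URL path" goal concrete, here is a minimal sketch (TypeScript on Node.js; the ids, paths, and port are made-up placeholders) of a small standalone file server, suitable for a content host like the one suggested above, that serves files by an opaque id instead of their real path:

```typescript
// Sketch of a tiny standalone file server that could run on a separate content
// host. Files are served by an opaque id looked up in a catalog, so the real
// path on disk is never exposed in the URL. Ids and paths are placeholders.
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

const FILE_ROOT = "/srv/files"; // placeholder storage location
const catalog: Record<string, string> = {
  a1b2c3: "images/2024/product-shot.jpg", // opaque id -> real relative path
  d4e5f6: "docs/manual.pdf",
};

const server = http.createServer((req, res) => {
  const id = (req.url ?? "").replace(/^\/download\//, "");
  const relativePath = catalog[id];
  if (!relativePath) {
    res.writeHead(404).end("not found");
    return;
  }
  res.writeHead(200, { "Content-Type": "application/octet-stream" });
  // Stream from disk instead of buffering, so large files don't eat memory.
  fs.createReadStream(path.join(FILE_ROOT, relativePath)).pipe(res);
});

server.listen(8080);
```

In a real setup the catalog would come from the shared BL/DAC library (approach #1) or from a call back to the web service (approach #2), rather than being hard-coded.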

Monitoring Bandwidth on your server [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I used to be on a shared host and I could use their standard tools to look at a bandwidth graph.
I now have my sites running on a dedicated server and I have no idea what's going on :P sigh
I have installed Webmin on my Fedora Core 10 machine and I would like to monitor bandwidth. I was about to set up the bandwidth module and it gave me this warning:
Warning - this module will log ALL network traffic sent or received on the
selected interface. This will consume a large amount of disk space and CPU
time on a fast network connection.
Isn't there anything I can use that is more lightweight and suitable for a noob? 'cough' Free tool 'cough'
Thanks for any help.
vnStat is about as lightweight as they come. (There are plenty of front ends around if the graphs the command-line tool gives aren't pretty enough.)
I use munin. It makes pretty graphs and can set up alerts if you're so inclined.
Unfortunately this is not for *nix, but I have an automated process to analyse my IIS logs: it moves them off the web server and analyses them with Web Log Expert. Provided the appropriate counter is turned on, it gives me the bandwidth consumed for every element of the site.
The free version of their tool won't allow scripting but it does the same analysis. It supports W3C Extended and Apache (Common and Combined) log formats.
Take a look at mrtg. It's fairly easy to set up, runs a simple cron job to collect snmp stats from your router, and shows some reasonable and simple graphs. Data is stored in an RRD database (see the mrtg page for details) and can be mined for other uses as well.
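Tools like vnStat essentially sample the kernel's per-interface byte counters. If you just want a quick sanity check before installing anything, a few lines of script will do; here is a sketch (TypeScript on Node.js, with "eth0" as a placeholder interface name) that samples /proc/net/dev over one second and prints the throughput:

```typescript
// Quick-and-dirty bandwidth check: read the interface counters from
// /proc/net/dev twice, one second apart, and report bytes per second.
// This is only a sanity-check sketch; vnStat/munin/mrtg keep history for you.
import { readFileSync } from "fs";

const IFACE = "eth0"; // placeholder interface name

function readBytes(): { rx: number; tx: number } {
  const line = readFileSync("/proc/net/dev", "utf8")
    .split("\n")
    .find((l) => l.trim().startsWith(`${IFACE}:`));
  if (!line) throw new Error(`interface ${IFACE} not found`);
  const fields = line.split(":")[1].trim().split(/\s+/).map(Number);
  return { rx: fields[0], tx: fields[8] }; // field 1 = rx bytes, field 9 = tx bytes
}

const before = readBytes();
setTimeout(() => {
  const after = readBytes();
  console.log(`rx: ${after.rx - before.rx} B/s, tx: ${after.tx - before.tx} B/s`);
}, 1000);
```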
