Improve file download performance in Windows 2003 - IIS

I have a WatchGuard X1250e firewall and a fast network setup at pryme.net in Ashburn, VA. I have Verizon FiOS at the office (50 Mbit), and a test download from a URL they provided runs at 5.8 MB/sec (probably served off Linux). But from my servers running Windows 2003 behind the firewall (X1250e), using a normal packet filter for HTTP (not a proxy, a very basic config), I only get 2 MB/sec from my rack.
What do I need to do to serve downloads FAST? If I have a network with no bottlenecks, 50 Mbit service to my computer, and GigE connectivity in the rack, where is this slowdown? We tried bypassing the firewall and the problem remains, so I presume it's something in Windows 2003. Is there anything I can tweak to push downloads at 6 MB/sec instead of 2 MB/sec?
Any tips or tricks? Things to configure to get better HTTP download performance?
Pryme.net test URL:
http://209.9.238.122/test.tgz
My download URL:
http://download.logbookpro.com/lbpro.exe
Thank you.

It could be the Windows server itself. You could test for bottlenecks in memory, disk access, network access, etc., using PerfMon or MSSPA.
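One classic Windows 2003 suspect when a single download is capped well below the line rate is the default TCP window: one TCP stream can never exceed window size ÷ round-trip time, so a 64 KB window over a ~30 ms path tops out near 2 MB/sec. A minimal sketch, assuming the window really is the bottleneck (the 262144 value is illustrative; back up the registry and reboot afterwards):

    REM Hypothetical tuning for Windows Server 2003 - not a confirmed fix for this setup.
    REM Tcp1323Opts=3 enables RFC 1323 window scaling and timestamps;
    REM TcpWindowSize sets the default receive window in bytes.
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v Tcp1323Opts /t REG_DWORD /d 3 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpWindowSize /t REG_DWORD /d 262144 /f

If the numbers don't change after a reboot, the window wasn't the limit, and the PerfMon counters (disk queue length, network utilization) are the next place to look.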

Related

Windows 10 IIS Simultaneous Download Problem

I am testing the IIS FTP server on the Windows 10 Home edition.
Twenty Android devices attempt to download a 1 GB file at the same time, but the maximum number of simultaneous downloads is only two.
The maximum number of connections in the FTP settings is 4294967295.
All twenty devices log in to FTP under one account and download the file; I don't think that is the problem.
My ultimate goal is to support more than 10 simultaneous downloads.
As far as I know, IIS on Windows 10 has a connection limit: it supports only three clients connected to IIS at the same time.
If you want to increase this number, please use Windows Server instead.
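On Windows Server, where no client-SKU cap applies, the per-site FTP connection limit can be inspected and changed with appcmd. A minimal sketch, assuming a site named "FTP Site" (the site name and the limit of 50 are hypothetical; verify the attribute path with the list command first):

    REM Show the site's current configuration, including its FTP connection settings.
    %windir%\system32\inetsrv\appcmd.exe list site "FTP Site" /config
    REM Cap simultaneous FTP connections at 50 instead of the 4294967295 default.
    %windir%\system32\inetsrv\appcmd.exe set site "FTP Site" /ftpServer.connections.maxConnections:50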

Azure B1ls nonfunctional upon creation

I have an almost static site that I was happy hosting on a Storage blob. However, I need a PHP script to run to support email communication through the contact HTML form.
So I decided to buy the smallest VM, B1ls, which has 1 CPU and 0.5 GB of memory. I RDP to the server and, to my astonishment, I cannot even open one file or folder or Task Manager without waiting endlessly before the "Out of memory ... please try to close programs or restart" message!
The Azure team should not sell such a VM if it will be nonfunctional from the get-go. Note that I installed ZERO programs on it.
All I want is PHP and to set up the site on IIS, and to add a certificate to it. NO database or any other programs will run.
What should I do?
Apparently it is because "B1ls is supported only on Linux", per the notes on their page:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general

Is there a difference between ab testing on localhost and a hostname?

I test my website using ab as ab -n 10000 -c 1000 http://example.com/path and I get about 160 requests/second. But when I test it as ab -n 10000 -c 1000 http://localhost/path the result is totally different: about 1500 requests/second.
Why?
Normally you should not run the load generator (ab or any other tool) on the same host as the application under test: load testing tools are themselves very resource-intensive, and you can end up with the application under test and the load generator fighting over the same CPU, RAM, network, disk, swap, etc.
So I would recommend running ab from another host on your intranet; that way you will get cleaner results without the aforementioned mutual interference. Remember to monitor baseline OS health metrics using vmstat, iostat, top, sar, etc., on both the application-under-test side and the load-generator side (see the sketch at the end of this answer) - it should give you a clearer picture of what is going on and what the impact of the generated load is.
You may also want to try a more advanced tool, as ab has quite limited load testing capabilities; check out the Open Source Load Testing Tools: Which One Should You Use? article for more information on the most prominent free and open source load testing solutions (all the listed tools are cross-platform, so you will be able to run them on Linux).
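A minimal sketch of that split setup (host names, intervals, and log file names are hypothetical):

    # On the server under test: record baseline health metrics while the run is in progress.
    vmstat 5 > vmstat.log &           # memory, swap, and CPU every 5 seconds
    sar -u -n DEV 5 120 > sar.log &   # CPU and per-interface network statistics

    # On a separate load-generator host on the same intranet:
    ab -n 10000 -c 1000 http://example.com/path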
From what I understand, you are testing the same website in 2 different configurations:
http://example.com/path, which tests the remote website from your local computer,
http://localhost/path, which tests a local copy of the website on your local machine, or is run directly on the machine where the website is hosted.
Testing your remote website involves the network connection between your computer and the remote server. When testing locally, everything goes through the loopback interface, which is probably several orders of magnitude faster than your DSL internet connection.
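A quick way to see the gap is to compare round-trip times on both paths (example.com stands in for your real host):

    # Loopback round trips are typically measured in microseconds...
    ping -c 4 localhost
    # ...while the path to a remote server over DSL is measured in tens of milliseconds.
    ping -c 4 example.com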

How to handle lots of file downloads from my Linux server

I have a 50 MB file hosted on my dedicated Linux server; each day almost 50K users download this file (about 2.5 TB a day).
There are lots of crashes, and users report that sometimes the file can't even be downloaded because the server is overloaded.
I wonder if someone can help me work out which server/bandwidth/anything I need to handle that?
Is there any solution where I can host the file and pay per download?
Is there any setting or anything I can improve or do on my server that would help fix this issue?
My current server specification is:
2 x Intel Xeon E5-2620 v2
2 x (6 x 2.10 GHz)
128 GB REG ECC
256 GB SSD
1 IP address
1 Gbit/s port, shared bandwidth
I'll appreciate any help from you guys.
Thank you very much.
Your hardware configuration should be fine, at least if the downloads are more or less evenly distributed over the day: 2.5 TB/day averages out to roughly 230 Mbit/s, well within your 1 Gbit/s port.
One of the most effective HTTP servers for serving static content is nginx. Take a look at this guide: Serving static content.
If that doesn't help, you should consider Amazon S3, which is probably the most popular file hosting solution with a reasonable price tag.
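A minimal sketch along the lines of that guide (the paths and server block are hypothetical; adjust for your layout):

    # nginx.conf fragment for serving one big static file efficiently.
    events {}

    http {
        server {
            listen 80;
            root /var/www/downloads;   # hypothetical directory holding the 50 MB file

            location / {
                sendfile   on;    # copy file to socket in kernel space
                tcp_nopush on;    # fill packets before sending (used with sendfile)
            }
        }
    }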
This is how not to make the file available for download (shown here as PHP, buffering the whole file through userspace):

    <?php
    // Anti-pattern: reads all 50 MB into memory, then copies it to the socket from userspace.
    echo file_get_contents($filename);

You want to be using sendfile(2) to have the kernel stream the file directly into the socket instead of doing it in userspace.
Each server has its own mechanism for invoking sendfile(2); with Apache httpd this is mod_xsendfile and its associated response header (X-Sendfile). You'll find that moving to this will allow you not only to handle your current userbase but also to add many more users without worry.
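A minimal sketch of the mod_xsendfile side, assuming the module is installed and the file lives under a hypothetical /var/www/downloads:

    # httpd.conf fragment
    LoadModule xsendfile_module modules/mod_xsendfile.so
    XSendFile On
    XSendFilePath /var/www/downloads

The application then emits only a header such as X-Sendfile: /var/www/downloads/file.bin with an empty body, and Apache streams the file itself with sendfile(2).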

File Server vs NAS for hosting media files

I have a web portal where my users can log in and submit artwork (images, documents, etc.). This web portal is hosted on 2 load-balanced web servers.
Because of this load balancing, I'm thinking of using a NAS as centralized media file storage for my web portal. I'm considering a NAS because it's cheaper than a file server and easier to maintain.
Now the questions are:
File hosting - Is there any NAS device that can act as a file hosting server? Or do I need to create a virtual path on my web server pointing to the NAS? This is easy with a file server: I can just bind a separate domain to it, something like media.mydomain.com, so all media files are served through that domain. I don't mind serving the media files through a virtual path on my web servers, something like mydomain.com/media. I would like to know whether a NAS can support either of these approaches, and whether it's secure, easy to set up, etc.
Performance - This is more important because reads and writes are quite intensive. I have never used a NAS before. I'm thinking of getting 2 hard drives (2 TB, 15,000 RPM) configured for RAID-1. Would this match the performance of a common file server? I know the answer is relative, but I just want to see how a NAS can be used for file hosting, not just as a file sharing device.
My web servers are running Windows Server 2008 R2 with IIS 7.5. I would appreciate it if anyone could also share best practices for integrating a NAS with Windows Server/IIS.
Thanks.
A NAS provides a shared location for information on a private network (at the very least, you shouldn't expose NAS protocols such as NFS and CIFS over the internet) and is not really designed to be a web file host. That is not to say you can't configure a NAS as a web file host by running IIS/Apache/nginx on it, but then you don't need your web servers. NAS setup is well documented for both Windows Server and most Unix/Linux distros, and both are relatively easy. A NAS is as secure as it is designed to be; you can use a variety of access control methods to secure it, depending on your implementation.
This really depends on your concurrent users and the load you expect them to put on the system. For the most part, a 1 Gb LAN connection and 15,000 RPM drives should provide ample performance for a decent number of concurrent users, but I can't say for certain: if a user sits there downloading hundreds of files at a time, you can have issues. As with any web technology, wrap limits around per-user usage to prevent one user from bringing down your entire system. I'm not sure how you are defining a file server (a NAS is a file server), but if you think of a file server as a website that hosts files, a NAS will provide the same if not better performance depending on where the device sits in relation to your web servers (and, again, on utilization). If you are worried about performance, you can always build a larger RAID array (RAID 5, RAID 6, RAID 10) or use SSDs to increase storage performance. In most NAS setups the hardware constraints are, in order: storage speed, network speed, RAM, CPU. Again, this really depends on utilization, so test well, benchmark, and monitor performance.
Microsoft provides a tuning document for Windows Server 2008 R2 that is useful: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463392.aspx
In my opinion, your architecture would be your 2 web servers referencing the NAS as a shared location, either through a virtual directory pointed at the NAS share or by handling the NAS location in code (code gives you a whole plethora of options around security and usage); a sketch of the virtual-directory route follows.
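A minimal sketch of that virtual-directory route using appcmd, run on each web server (the share name, directory name, and service account are hypothetical):

    REM Create a /media virtual directory that points at the NAS share.
    %windir%\system32\inetsrv\appcmd.exe add vdir /app.name:"Default Web Site/" /path:/media /physicalPath:\\nas01\media
    REM Use a fixed service account so both load-balanced servers can authenticate to the UNC path.
    %windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/media" /userName:MYDOMAIN\svc_media /password:REPLACE_ME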
