File Server vs NAS for hosting media files (IIS 7.5)

I have a web portal where my users can log in and submit artwork (images, documents, etc.). This web portal is hosted on 2 load-balanced web servers.
Because of this load balancing, I'm thinking of using a NAS as centralized media file storage for my web portal. I'm considering a NAS because it's cheaper than a file server and easier to maintain.
Now the questions are:
File hosting - Is there any NAS device that can act as a file hosting server? Or do I need to create a virtual path on my web servers that points to the NAS? This is easy with a file server: I could just bind a separate domain to it, something like media.mydomain.com, so all media files would be served through that domain. I also don't mind serving the media files through a virtual path on my web servers, something like mydomain.com/media. I would like to know whether a NAS can support either approach, and whether it is secure, easy to set up, etc.
Performance - This is more important because reads and writes are quite intensive. I've never used a NAS before. I'm thinking of getting two 2 TB, 15,000 RPM hard drives configured as RAID 1. Would this match the performance of a typical file server? I know the answer to this question is relative, but I just want to see how a NAS can be used for file hosting, not just as a file-sharing device.
My web servers are running Windows Server 2008 R2 with IIS 7.5. I would appreciate it if anyone could also share best practices for integrating a NAS with Windows Server/IIS.
Thanks.

A NAS provides a shared location for information on a private network (at the very least, you shouldn't expose NAS protocols such as NFS and CIFS over the internet) and is not really designed as a web file host. That is not to say you can't configure a NAS as a web file host using IIS/Apache/nginx, but at that point the NAS is itself a web server and you don't need your existing web servers to serve those files. NAS setup is well documented for both Windows Server and most Unix/Linux distributions, and both are relatively easy. A NAS is as secure as it is designed to be; you can use a variety of access control methods to secure it, depending on your implementation.
This really depends on your concurrent users and the kind of load you expect them to put on the system. For the most part, a NAS with a 1 Gb LAN connection and a 15,000 RPM hard drive should provide ample performance for a decent number of concurrent users, but I can't say for certain: if a user sits there downloading hundreds of files at a time, you can run into issues. As with any web technology, put limits around per-user usage to prevent one user from bringing down your entire system.
I'm not sure how you are defining a file server (a NAS is a file server), but if you think of a file server as a website that hosts files, a NAS will provide the same if not better performance, depending on where the device sits relative to your web servers (and, again, on utilization). If you are worried about performance, you can always build a larger RAID array (RAID 5, RAID 6, RAID 10) or use SSDs to increase storage performance. In most NAS devices the hardware constraints are, roughly in order: storage speed, network speed, RAM, CPU. Again, this really depends on utilization, so test well, benchmark, and monitor performance.
Microsoft provides a tuning document for server 2008 r2 that is useful: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463392.aspx
In my opinion, your architecture would be your two web servers referencing the NAS as a shared location, either through a virtual directory pointed at the NAS for your files or by handling the NAS location in code (using code gives you a whole plethora of options around security and usage).
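As a rough illustration of the virtual-directory approach, here is a sketch using the WebAdministration PowerShell module that ships with IIS 7.5 on Server 2008 R2. The site name, share path, and service account are placeholders, not anything from your actual setup:

    Import-Module WebAdministration

    # Create a /media virtual directory on the existing site, pointing at the NAS share
    # (\\nas01\webmedia and DOMAIN\svc_media are placeholder names).
    New-WebVirtualDirectory -Site "Default Web Site" -Name "media" -PhysicalPath "\\nas01\webmedia"

    # Give the virtual directory a dedicated account for pass-through authentication,
    # so IIS does not hit the UNC path as the anonymous or application-pool identity.
    Set-ItemProperty "IIS:\Sites\Default Web Site\media" -Name userName -Value "DOMAIN\svc_media"
    Set-ItemProperty "IIS:\Sites\Default Web Site\media" -Name password -Value "placeholder-password"

The same thing can be done interactively in IIS Manager via "Add Virtual Directory" and its "Connect as..." dialog.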

Related

NGINX and .NetCore application performance

I have a .NET Core 5 website hosted with NGINX on Ubuntu Linux.
I need to select a new server and wonder where I can save money.
My app is very small.
My app does NOT use the HDDs; it stores nothing in the file system.
I notice that even if I overwrite it with a new version (new files), the running application does not change until I restart the process.
My focus is to save money on storage and buy servers with older HDDs.
Question
How does NGINX work?
Does it load the app into memory and serve it from there?
Or does it know the location of the files but build some cache and serve from that, which would mean a hard drive is important?
In short: does NGINX use the hard drive?
My app uses a CDN for all files; they live on third-party Azure Blob storage and Amazon S3.
I want to save money by buying HDDs instead of SSDs or NVMe drives.
Which HDDs should I select to get the best performance?
Again, my app only serves content that lives with those third parties.
You pretty much already have the answer. NGINX won't do disk writes or reads unless it needs to.
For your case, static files are hosted elsewhere, so obviously your NGINX has no need to interact with the disk for serving those.
The app is served by using NGINX as a reverse proxy. The communication between NGINX and the app is usually done over a network socket (e.g. your app binds to a TCP port) or a UNIX socket. For neither of the two does disk speed matter.
You would do better to ask yourself whether your app logic does any reads or writes to disk. If the answer is no, or "not much", then yes, a plain HDD would be sufficient.
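For reference, a minimal sketch of the reverse-proxy setup described above. The hostname and port are assumptions (Kestrel commonly listens on 5000); point proxy_pass at whatever your app actually binds to:

    server {
        listen 80;
        server_name example.com;   # placeholder hostname

        location / {
            # Forward everything to the .NET Core app over a local TCP socket;
            # nothing on this path is read from disk by NGINX.
            proxy_pass         http://127.0.0.1:5000;
            proxy_http_version 1.1;
            proxy_set_header   Host $host;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
        }
    }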

Host my own node.js webapp or deploy using a hosting service

I have a web app that's like a bulletin board where users upload their images and such. Is it best if I host this web app on my own hardware or use a hosting site?
The reason I'm considering my own hardware is that it's mine, which keeps things simple and is what I know best. I'll also have file system access and can see what users are uploading; most hosting sites don't offer that.
You might want to base your choice on your preferred server operating system. If it's Windows, I would recommend self-hosting, as most Windows VPSes cost far more than a Linux server. Overall, depending on the resources you need, the average cost of a Linux server in the cloud is around $2.50 to $10.00 per month, and that comes with a guaranteed fixed IP address.
So the question is: is all the trouble of setting up your own server, maintaining it, and managing a fixed IP with your ISP (plus ISP charges*) worth at worst 10 bucks per month? Your choice!
Here are a few services you might want to consider for a VPS:
https://aws.amazon.com/
https://www.digitalocean.com/
https://www.heroku.com/ (this one has free hosting if you don't mind giving up 24/7 uptime)
and there are many others.
* As an example, my ISP charges around 100 CAD more per month just to make my IP static and put my internet connection on a commercial profile.
You can host on a VPS. It can run any operating system you want, and you control it. It is much cheaper than dedicated hosting on your own hardware.
http://www.google.com/search?q=vps
If you want to host Node, you have to manually install Node.js and the required npm modules.
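A rough sketch of what that looks like on a fresh Ubuntu VPS; the paths are placeholders and pm2 is just one common choice of process manager, not a requirement:

    # Install Node.js and npm (the distro packages may be old; the NodeSource
    # setup script is a common alternative if you need a newer release).
    sudo apt-get update
    sudo apt-get install -y nodejs npm

    # Deploy the app and pull its dependencies from package.json.
    cd /var/www/myapp
    npm install

    # Run it under a process manager so it restarts on crashes and reboots.
    sudo npm install -g pm2
    pm2 start app.js
    pm2 startup    # prints a command to enable start-on-boot; run it, then: pm2 save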

Best practices for shared image folder on a Linux cluster?

I'm building a web app that will scale into a Linux cluster with Tomcat and nginx. There will be one nginx web server load-balancing multiple Tomcat app servers, with a database server behind them. Everything runs on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front nginx box and have pointers to them stored in the database. This way nginx can serve them full speed without involving the app servers.
The app resizes images in the browser before uploading, so file sizes will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the nginx front-end server? I can think of several ways to do it, but I suspect some kind of network file system would be best.
What are current best practices?
Assuming you do not use a CMS (content management system), you could use the following options:
If you have only one front-end web server, the suggestion would be to store the images locally on that web server in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device. This way you do not need to synchronize the files across the servers. Make sure the shared resource is redundant; otherwise, if it goes down, your site will be down.
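If the shared device ends up being a plain NFS export (for instance, the front-end nginx box exporting the image directory to the Tomcat servers), a minimal sketch looks like this; hostnames and paths are placeholders, and a SAN/NAS appliance would have its own way of publishing the share:

    # On the box that owns the images, e.g. the nginx front end (/etc/exports),
    # then reload the export table with: exportfs -ra
    /var/www/images  10.0.0.0/24(rw,sync,no_subtree_check)

    # On each Tomcat app server (/etc/fstab), then mount it with: mount -a
    nginx-front:/var/www/images  /mnt/images  nfs  rw,hard,intr,noatime  0 0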

Disk configuration for server running IIS

I've inherited a website from an external company (which has gone bust) and I need to get it deployed to our servers (it's 3 web sites running together).
However, in testing, although the app runs correctly, performance is poor, and I am pretty certain this is because the app writes files to the local disk. We currently have only a single disk in the server, but as it's virtual we can increase this to two fairly quickly.
The server is Windows 2008 running IIS 7 and already has two processors.
Some of the files are 100 MB+, but there are also lots of small file writes and log file writes.
My question is where to put which parts of the application?
Is it best to have the OS on one disk, and the web sites, files, and logs on another?
Or the sites and OS on one, and the files on another?
Is there a "standard" starting point?
It would help if anyone could reply with something like the layout below, but with an explanation so I understand WHY!
e.g.
C: OS
C: WebSites
D: Files
D: Logs
Your background sounds like it's from Linux, because that's where people tend to configure new servers with the items you listed in mind. We have a handful of IIS sites, but we mostly run Apache on Linux, so I'm taking a stab at this.
Where we have IIS, we also tend to have MS SQL Server. I would keep the Windows OS on a nice large partition and put everything IIS-related, including the root directory, on a second drive. IIS installs default to C:\, but I believe you can move things to another drive; the names of the utilities and how to do this are best left to those who do it regularly.
In other words, I'd make a gross disk split between OS and IIS, and then tune from there. However, make sure you have lots of disk space and can defragment.
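For what it's worth, once the second drive exists, moving a site's content root and its IIS logs off the OS drive can be scripted with the WebAdministration module; the site name and the D:\ paths below are placeholders:

    Import-Module WebAdministration

    # Point the site's content root at the data drive (copy the files there first).
    Set-ItemProperty "IIS:\Sites\MySite" -Name physicalPath -Value "D:\WebSites\MySite"

    # Send this site's IIS logs to the data drive as well.
    Set-ItemProperty "IIS:\Sites\MySite" -Name logFile.directory -Value "D:\Logs"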

Virtualization: Do you store web content inside the virtual machines?

In creating virtual web servers (VirtualBox for me), does web content usually go "inside" the virtual machine or is there usually a file server running on bare metal serving files (content) to all the virtual machines?
Is there a typical scenario for storing content for virtual servers?
It depends: is this development or production? If it's production with multiple servers, store everything on a file share with fast network access so you don't have to deal with the hassle of updating every server. If it's development, just put everything on the virtual server, since performance isn't critical and it's simpler.
What are you creating the virtual servers for?
I usually use VMs as testing machines, in which case the content goes "inside" (on the machine where Apache is running).
If you are trying to simulate a networked system (like a load balancer or something), you will set up the content on the simulated machine that represents the file server in your scenario.
It depends on your architecture and how the information is going to be updated. If the information is going to be updated frequently and replicated across multiple machines, it may make sense to have a file server on bare metal that the VMs connect to over the network. However, be aware that with VirtualBox in particular you're going to take a pretty significant hit on network performance even with the guest tools installed; I've noticed I get about 20% of the network speed of VMware from VirtualBox.
