NGINX and .NET Core application performance - Linux

I have a .NET Core 5 website hosted with NGINX on Linux (Ubuntu).
I need to select a new server and wonder where I can save money.
My app is very small.
My app does NOT use the HDD; nothing is stored in the file system.
I notice that even when I overwrite it with a new version (new files), the running application does not change until I restart the process.
My focus is to save money on storage
and buy servers with older HDDs.
Question:
How does NGINX work?
Does it load the app into memory and serve it from there?
Or does it take the location of the files, build some cache, and serve from there, which would mean the hard drive is important?
In short: does NGINX use the hard drive?
My app uses a CDN for all files; they are on third-party Azure Blob Storage and Amazon S3.
I want to save money by using HDDs instead of SSDs or NVMe drives.
Which HDDs should I select to get the best performance?
Again, my app only serves web content that lives with third parties.

You pretty much already have the answer: NGINX won't do disk writes or reads unless it needs to.
In your case the static files are hosted elsewhere, so NGINX obviously has no need to touch the disk to serve those.
The app is served by using NGINX as a reverse proxy. The communication between NGINX and the app is usually done over a network socket (e.g. your app binds to a TCP port) or a UNIX socket; for neither of the two does the disk speed matter.
You would be better off asking whether your app logic does any reads or writes to disk. If the answer is no, or "not much", then yes, a plain HDD would be sufficient.
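As a concrete illustration, here is a minimal sketch of such a reverse-proxy setup, assuming the .NET Core app (Kestrel) listens on localhost port 5000; the port and domain are assumptions, not values from the question. NGINX only forwards HTTP traffic to the app here, so serving app requests involves no disk reads.

```nginx
# Minimal reverse-proxy sketch; port 5000 and the domain are assumed values.
server {
    listen 80;
    server_name example.com;              # placeholder domain

    location / {
        # Forward every request to the .NET Core app over a TCP socket.
        proxy_pass         http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```

A UNIX socket works the same way: the app binds to something like `unix:/tmp/app.sock` and `proxy_pass` points at that instead; disk speed still plays no role in the exchange.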

Related

Does downloading a file from a URL in Node.js use the user's internet, or does it work in the background?

I'm currently building a Node.js streaming app which has to get a file from a third party and then cache it to the local storage of my virtual machine running Node.js (Heroku).
I want to ask: if I request the download of a file in the Node.js app, does the user's internet speed matter even though the file is not being downloaded in the browser?
Can I download the file in the background, without user interaction, when I deploy to Heroku?
Thanks. If you can, please also explain how internet bandwidth gets consumed and billed by internet providers. I'm concerned about this because internet access is expensive in my country, so I want to reduce my users' data usage.
In short: the machine/computer running the download code is the one consuming the internet bandwidth.
So, if your Node.js app is running on Heroku, the download happens between the Heroku machine and the third-party server(s), and it does not consume the user's bandwidth (that data doesn't flow through the user's device).
However, when the user streams that file from your Node.js app to their device, that will definitely consume their bandwidth.
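To make those two bandwidth paths concrete, here is a minimal Node.js (TypeScript) sketch; the third-party URL and the cache path are placeholders, not values from the question. The first part downloads the file server-side, so only the server's connection is used; the second part streams the cached copy to the user, and that transfer is what consumes their bandwidth.

```typescript
import * as https from "https";
import * as http from "http";
import * as fs from "fs";

const THIRD_PARTY_URL = "https://example.com/media/video.mp4"; // placeholder URL
const CACHE_PATH = "/tmp/video.mp4";                            // placeholder cache path

// 1) Server-side download: traffic flows third party -> this machine only,
//    so the user's connection is not involved at all.
function cacheFile(): Promise<void> {
  return new Promise((resolve, reject) => {
    https.get(THIRD_PARTY_URL, (res) => {
      res.pipe(fs.createWriteStream(CACHE_PATH))
        .on("finish", () => resolve())
        .on("error", reject);
    }).on("error", reject);
  });
}

// 2) Streaming the cached file to the user: this is the transfer that
//    consumes the user's bandwidth.
cacheFile().then(() => {
  http.createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "video/mp4" });
    fs.createReadStream(CACHE_PATH).pipe(res);
  }).listen(3000);
});
```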

How to sync web servers

I want to set up 2 LAMP servers for load-balancing web pages. I will only upload files to server A and want the files synced from server A to server B automatically. What is the best way to do it?
Don't try to keep two servers in sync. Store the files somewhere central, on redundant storage (i.e. use RAID).
You can use high availability with shared-storage software like Openfiler, or you can write scripts to replicate the data between the two servers (for example, rsync on a schedule).

Host and deliver big files on Node.js (Nodejitsu)

I have a website hosted on Nodejitsu using Node.js.
I want people to be able to download files. Overall, there are about 1,000 files of 1 MB each, for a total of about 1 GB. Those files are in the same directory as the regular code.
When I try to deploy, I get the message: "Snapshot is larger than 70M!"
How are you supposed to deliver files with Node.js? Do I need to host them on a separate website (e.g. MediaFire) and redirect people there? Or is there a special place to put them?
Services like Nodejitsu are meant for hosting your application. It sounds like these are static files, not something generated by your application. I recommend putting them on a CDN if they will get a lot of traffic; CloudFront can easily sit in front of S3. Otherwise, buy a cheap VPS and put your files there.
I also recommend not using your application server (your Node.js server) to host static content. While it can certainly do this, software like NGINX is often faster (unless, of course, you have a specific reason to serve these files from your application).
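As a hedged sketch of the CDN approach, the app can simply redirect download requests to the files' CDN/S3 location instead of bundling roughly 1 GB into the deploy snapshot; the CloudFront domain below is a placeholder, not an address from the question.

```typescript
import * as http from "http";

// Placeholder CDN base URL (e.g. a CloudFront distribution in front of S3).
const CDN_BASE = "https://d1234abcd.cloudfront.net";

// Redirect download requests to the CDN instead of serving ~1 GB of static
// files from the application snapshot.
http.createServer((req, res) => {
  if (req.url && req.url.startsWith("/downloads/")) {
    const fileName = req.url.slice("/downloads/".length);
    res.writeHead(302, { Location: `${CDN_BASE}/${encodeURIComponent(fileName)}` });
    res.end();
    return;
  }
  res.writeHead(404, { "Content-Type": "text/plain" });
  res.end("Not found");
}).listen(3000);
```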

Best practices for shared image folder on a Linux cluster?

I'm building a web app that will scale into a Linux cluster with Tomcat and NGINX. There will be one NGINX web server load-balancing multiple Tomcat app servers, with a database server behind them. All of it runs on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front NGINX box and store pointers to them in the database. This way NGINX can serve them at full speed without involving the app servers.
The app resizes the images in the browser before uploading, so file sizes will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the NGINX front-end server? I can think of several ways to do it, but I suspect some kind of network file system would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you could use the following options:
If you have only one front-end web server, the suggestion would be to store the files locally on that web server, in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device; this way you would not need to synchronize the files across the servers. Make sure the shared resource is redundant, otherwise if it goes down your site will be down. A sketch of how NGINX on the front end could serve images from such a mount follows below.
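For the multi-server case, here is a hedged sketch of how the front-end NGINX box could serve the uploaded images straight from that shared (or local) storage while the rest of the traffic goes to the Tomcat pool; the mount point, domain, and backend addresses are assumptions, not values from the question.

```nginx
# Sketch only: mount point, domain, and backend addresses are assumed values.
upstream tomcat_backends {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name example.com;              # placeholder domain

    # Application traffic is load-balanced across the Tomcat app servers.
    location / {
        proxy_pass http://tomcat_backends;
        proxy_set_header Host $host;
    }

    # Image requests are served directly from the storage mounted on this box,
    # without involving the app servers.
    location /images/ {
        alias /mnt/shared/images/;        # assumed NAS/SAN mount point
        expires 30d;
    }
}
```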

File Server vs NAS for hosting media files

I have a web portal where my users can log in and submit artwork (images, documents, etc.). This web portal is hosted on 2 load-balanced web servers.
Because of this load balancing, I'm thinking of using a NAS as centralized media file storage for my web portal. I'm considering a NAS because it's cheaper than a file server and easier to maintain.
Now the questions are:
File hosting - Is there any NAS device that can act as a file-hosting server? Or do I need to create a virtual path on my web server pointing to the NAS? This can be achieved easily with a file server: I can just bind a separate domain to it, something like media.mydomain.com, so all media files are served through that domain. I also don't mind serving the media files through a virtual path on my web servers, something like mydomain.com/media. I would like to know whether a NAS can handle either of the approaches above, and whether it's secure, easy to set up, etc.
Performance - This is more important, because reads and writes are quite intensive. I have never used a NAS before. I'm thinking of getting 2 hard drives (2 TB, 15,000 RPM) configured as RAID-1. Would this match the performance of a common file server? I know the answer is relative, but I just want to see how a NAS can be used for file hosting, not just as a file-sharing device.
My web servers are running Windows Server 2008 R2 with IIS 7.5. I would appreciate it if anyone could also share best practices for integrating a NAS with Windows Server/IIS.
Thanks.
A NAS provides a shared location for information on a private network (at the very least, you shouldn't expose NAS protocols such as NFS and CIFS over the internet), and it is not really designed to be a web file host. That is not to say you can't configure a NAS as a web file host using IIS/Apache/NGINX, but then you wouldn't need your web servers. NAS setup is well documented for both Windows Server and most Unix/Linux distros, and both are relatively easy. A NAS is as secure as it is designed to be: you can use a variety of access-control methods to secure it, depending on your implementation.
This really depends on your concurrent users and the load you expect them to put on the system. For the most part, a 1 Gb LAN connection and 15,000 RPM hard drives should give a NAS ample performance for a decent number of concurrent users, but I can't say for certain: if a user sits there downloading hundreds of files at a time, you can have issues. As with any web technology, put limits around per-user usage to prevent one user from bringing down your entire system. I'm not sure how you are defining a file server (a NAS is a file server); if you think of a file server as a website that hosts files, a NAS will provide the same if not better performance, depending on where the device sits in relation to your web servers (and, again, on utilization). If you are worried about performance, you can always build a larger RAID array using RAID 5, RAID 6, or RAID 10, or use SSDs to increase storage performance. For the most part, the hardware constraints in any NAS are storage speed, network speed, RAM, and CPU. Again, this really depends on utilization, so test well, benchmark, and monitor performance.
Microsoft provides a tuning document for Server 2008 R2 that is useful: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463392.aspx
In my opinion, your architecture would be your 2 web servers referencing the NAS as a shared location, using either a virtual directory pointed at the NAS for your files or handling the NAS location in code (using code gives you a whole plethora of options around security and usage).
