Best practices for shared image folder on a Linux cluster? - linux

I'm building a web app that will scale out into a Linux cluster with Tomcat and nginx. There will be one nginx web server load-balancing multiple Tomcat app servers, with a database server behind them, all running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front nginx box and store pointers to them in the database. This way nginx can serve them at full speed without involving the app servers.
The app resizes images in the browser before uploading, so file sizes will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the nginx front-end server? I can think of several ways I could do it, but I suspect some kind of network file system would be best.
What are current best practices?

Assuming you do not use a CMS (Content Management System), you could use the following options:
If you have only one front-end web server, the suggestion would be to store the images locally on the web server in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or a NAS shared network device. This way you would not need to synchronize the files across the servers. Make sure the shared resource is redundant; otherwise, if it goes down, your site goes down with it.
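If you go the network file system route, NFS is the simplest fit for an all-CentOS setup. A minimal sketch, assuming CentOS 6 and placeholder names (the /srv/images path, the 10.0.0.0/24 subnet and the nginx-frontend host name are not from the question):

# On the nginx front-end box, which doubles as the NFS server
# (path, subnet and host name below are placeholders):
sudo yum install -y nfs-utils
echo '/srv/images 10.0.0.0/24(rw,sync)' | sudo tee -a /etc/exports
sudo service rpcbind start
sudo service nfs start
sudo exportfs -ra

# On each Tomcat app server: mount the export where the app writes uploads.
sudo yum install -y nfs-utils
sudo mkdir -p /srv/images
echo 'nginx-frontend:/srv/images /srv/images nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount -a

nginx then serves /srv/images from its local disk, and the app servers only touch it over NFS when writing new uploads. The same redundancy caveat applies: the front-end box becomes both your load balancer and your storage, so it is a single point of failure unless you replicate it.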

Related

NGINX and .NetCore application performance

I have a .NET Core 5 website hosted with NGINX on Ubuntu Linux.
I need to select a new server and wonder where I can save money.
My app is very small.
My app does not store anything in the file system.
I notice that even if I overwrite it with a new version (new files), the application does not change until I restart the thread.
My focus is to save money on storage and buy servers with older HDDs.
Question
How does NGINX work?
Does it load the app into memory and serve it from there?
Or does it know the location of the files but build some cache and serve from that, which would mean the hard drive is important?
In short: does NGINX use the hard drive?
My app uses a CDN for all files; they are third-party, on Azure Blob Storage and Amazon S3.
I want to save money by using HDDs instead of SSDs and NVMe drives.
Which HDDs should I select to get the best performance?
Again, my app serves only web content from third parties.
You pretty much already have the answer. NGINX won't do disk writes or reads unless it needs to.
For your case, static files are hosted elsewhere, so obviously your NGINX has no need to interact with the disk for serving those.
The app is served by using NGINX as a reverse proxy. The communication between NGINX and the app is usually done over a network socket (e.g. your app binds to a TCP port) or a UNIX socket. For neither of the two does disk speed matter.
You would be better off asking whether your app logic does any reads or writes to disk. If the answer is no, or "not much", then yes, a plain HDD will be sufficient.
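For illustration, a minimal reverse-proxy sketch of that setup (the example.com name, port 5000 and the socket path are assumptions, not taken from the question):

# Hypothetical reverse-proxy config written via a heredoc; server name,
# port and socket path are placeholders.
sudo tee /etc/nginx/sites-available/myapp >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward to the Kestrel process over a TCP socket...
        proxy_pass http://127.0.0.1:5000;
        # ...or, alternatively, over a UNIX socket:
        # proxy_pass http://unix:/run/myapp.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

Either way, nothing in that path touches the disk on the NGINX side; the disk only matters for NGINX's own logs and for any proxy or static caching you explicitly configure.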

Hosting multiple .NET Core applications on Ubuntu using Nginx

I created a sample web API in .NET Core, registered it in the default file in Nginx, and was able to access it from outside.
The API looked like https://<>/api/values.
Now I want to add more configuration to host more web APIs on different port numbers. The problem is how the default file will differentiate between multiple APIs, since the base URL, i.e. localhost\<>, is the same for all of them.
You need to create server blocks. Each server block will listen for and respond to a different app. You can host as many apps as you want on a single Ubuntu machine using nginx this way.
This will be very helpful and describes the entire process of creating server blocks for your nginx server.
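As a rough sketch of what those server blocks can look like, assuming two Kestrel apps listening on ports 5000 and 5001 and placeholder host names (api1/api2.example.com):

# Two hypothetical server blocks, differentiated by host name; ports and
# domains are placeholders.
sudo tee /etc/nginx/sites-available/api1 >/dev/null <<'EOF'
server {
    listen 80;
    server_name api1.example.com;
    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
    }
}
EOF
sudo tee /etc/nginx/sites-available/api2 >/dev/null <<'EOF'
server {
    listen 80;
    server_name api2.example.com;
    location / {
        proxy_pass http://localhost:5001;
        proxy_set_header Host $host;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/api1 /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/api2 /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

If everything has to stay on a single host name instead, path-based location blocks (e.g. location /api1/ and location /api2/ proxying to different ports) are the usual alternative.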

Deploy a MEAN stack application to an existing server

I have an Ubuntu server on DigitalOcean which hosts a website, and a Windows server on AWS which hosts another website.
I just built a mean.js stack app on my Mac, and I plan to deploy it to production.
It seems that most of the existing threads discuss using a new dedicated server. For example, this thread is about deploying on a new AWS EC2 instance; this video is about deploying on a new Windows Azure server; this one is about creating a new droplet on DigitalOcean.
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server? If yes, will there be any difference in terms of performance?
My question is, is it possible to use an existing server (which hosts other websites), rather than creating a new server?
Yes. Both Windows and Ubuntu allow you to deploy multiple applications on the same instance.
For Ubuntu you can read this post, which will help you serve multiple apps.
That example uses Nginx, but you can follow it and run the app without any server like Apache or Nginx in front. If you need subdomains, I would suggest using Apache virtual hosts with the reverse proxy module and pm2.
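For the Node side of that, a rough pm2 sketch (the app path and name below are placeholders):

# Hypothetical pm2 setup for the MEAN app; path and app name are placeholders.
sudo npm install -g pm2
cd /var/www/my-mean-app
pm2 start server.js --name my-mean-app   # keep the Node process running
pm2 startup                              # prints the command to enable start on boot
pm2 save                                 # persist the current process list

Your existing Apache (or Nginx) site then only needs a virtual host for the new subdomain that reverse-proxies to whatever port the Node app listens on, so the sites already on the server are untouched.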
For Windows and IIS, I would suggest using iisnode; you can find plenty of articles on Google about how to configure it.
will there be any difference in terms of performance?
It depends on your applications. If you are already serving applications that handle heavy traffic and need the CPU and memory, I would not suggest running multiple apps on the same instance; but if they are simple web apps, you can easily share one instance.
Hope this answer will help you!

File Server vs NAS for hosting media files

I have a web portal where my users can login and submit artworks (image, documents, etc.). This web portal is hosted in 2 load-balanced web servers.
Because of this load balancing, I'm thinking of using a NAS as centralized media file storage for my web portal. I'm considering a NAS because it's cheaper than a file server and easier to maintain.
Now the questions are:
File hosting - Is there any NAS device that can act as a file hosting server? Or do I need to create a virtual path in my web server to the NAS? This could be achieved easily with a file server: I could just bind a separate domain to it, something like media.mydomain.com, so all media files would be served through that domain. I don't mind serving the media files through a virtual path from my web servers, something like mydomain.com/media. I would like to know if a NAS can do either of the approaches above, and whether it's secure, easy to set up, etc.
Performance - This is more important because reads and writes are quite intensive. I have never used a NAS before. I'm thinking of getting 2 hard drives (2TB, 15,000RPM) configured as RAID-1. Would this be able to match the performance of a common file server? I know the answer to this question is relative, but I just want to see how a NAS can be used for file hosting, not just as a file-sharing device.
My web servers are running Windows Server 2008 R2 with IIS 7.5. I would appreciate it if anyone could also share best practices for integrating a NAS with Windows Server/IIS.
Thanks.
A NAS provides a shared location for information on a private network (at the very least, you shouldn't expose NAS technologies such as NFS and CIFS over the internet) and is not really designed to be a web file host. That is not to say you can't configure a NAS as a web file host using IIS/Apache/nginx, but then you don't need your web servers. NAS setup is well documented for both Windows Server and most Unix/Linux distros, and both are relatively easy. A NAS is as secure as it is designed to be; you can use a variety of access control methods to secure it, depending on your implementation.
This really depends on your concurrent users and what kind of load you expect them to put on the system. For the most part, a 1Gb LAN connection and a 15,000 RPM hard drive in a NAS should provide ample performance for a decent number of concurrent users, but I can't say for certain, because if a user sits there downloading hundreds of files at a time you can have issues. As with any web technology, wrap limits around user usage to prevent one user from bringing down your entire system. I'm not sure how you are defining a file server (a NAS is a file server); if you think of a file server as a website that hosts files, a NAS will provide the same if not better performance depending on where the device sits in relation to your web servers (again, depending on utilization). If you are worried about performance, you can always build a larger array using RAID 5, RAID 6, or RAID 10, or use SSDs to increase storage performance. For the most part, the hardware constraints in any NAS are storage speed, network speed, RAM, and CPU. Again, this really depends on utilization, so test well, benchmark, and monitor performance.
Microsoft provides a tuning document for server 2008 r2 that is useful: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463392.aspx
In my opinion, your architecture would be your two web servers referencing the NAS as a shared location, either through a virtual directory pointed at the NAS for your files or by handling the NAS location in code (using code gives you a whole plethora of options around security and usage).

Linux patterns and practices for hosting web application

I work mostly on desktop applications on the Windows platform. Now I am focusing on Linux as a platform to host web applications.
When hosting an application on Linux, I don't follow any procedure. I simply check out the files from SVN and run the application from my home directory. I don't know where to store the application data (for example: MySQL/Postgres, MongoDB, Redis, Tokyo Tyrant) or where to keep the log files. And what tips do you have for doing backend maintenance work on the server while displaying a "maintenance in progress" message to users?
How do you host your applications on a VPS, dedicated server, or cloud service running Linux?
Do you have a checklist? Do you have any tips and tricks?
Very broad question
Where do you store application data? Most people would install MySQL, which stores its data in /var/lib/mysql, and Apache, for which /var/www is typically used. These applications are usually configured in /etc/apache2 and /etc/mysql.
Where to keep log files? These almost always go into /var/log. For configuration, check /etc/syslog.conf.
How do you configure a server maintenance message? Create an HTML file with your message and serve it by configuring Apache in /etc/apache2/httpd.conf; a rough sketch follows below.
How do you do virtual Linux servers? The easiest way is to launch an instance on Amazon EC2, or you could use Oracle's VirtualBox (similar to VMware, but free). You could also try Xen/KVM, but these are far from trivial, so unless you have a Linux maven around I would steer clear of them.
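For the maintenance message specifically, a minimal sketch assuming Apache with mod_rewrite enabled (the paths and the flag-file name are placeholders):

# Hypothetical maintenance-mode toggle; paths and flag-file name are placeholders.
sudo tee /var/www/maintenance.html >/dev/null <<'EOF'
<html><body><h1>Maintenance in progress - please check back shortly.</h1></body></html>
EOF

# In the virtual host configuration, return 503 with that page whenever a
# flag file exists:
#
#   RewriteEngine On
#   ErrorDocument 503 /maintenance.html
#   RewriteCond /var/www/maintenance.flag -f
#   RewriteCond %{REQUEST_URI} !^/maintenance\.html$
#   RewriteRule ^ - [R=503,L]

sudo touch /var/www/maintenance.flag   # turn maintenance mode on
sudo rm /var/www/maintenance.flag      # turn it back off

Returning 503 rather than 200 also tells search engines the outage is temporary.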
