Host and Deliver Big Files on Node.js (Nodejitsu)

I have a website hosted on Nodejitsu using Node.js.
I want people to be able to download files. Overall, there are about 1,000 files of 1 MB each, for a total of roughly 1 GB. Those files are in the same directory as the regular code.
When I try to deploy, I get the message: "Snapshot is larger than 70M!"
How are you supposed to deliver files with Node.js? Do I need to host them on a separate site (e.g. MediaFire) and redirect people there? Or is there a special place to put them?

Services like Nodejitsu are meant for hosting your application. It sounds like these are static files, not something generated by your application. I recommend putting them on a CDN if they will get a lot of traffic; CloudFront can easily sit in front of S3. Otherwise, buy a cheap VPS and dump your files there.
I also recommend not using your application server (your Node.js server) to host static content. While it can certainly do this, software like Nginx is often faster. (Unless of course you have a specific reason to serve these from your application...)

Related

In SSR, What is the meaning of Server?

I just learned about CSR and SSR. So CSR is rendered in the browser, and
SSR is rendered on the server, which then gives the result to the browser.
I can understand what CSR means, but in SSR, what is the server?
Is it a big database like AWS, or just some small code in npm?
I can't understand exactly what "server" means.
Could anybody help me?
Fundamentally, a server is just a computer with a program on it configured to respond to network requests. Often such a server's network configuration allows it to be reached from the public internet (with an IPv4 address and hostname). It can be as small or large as you want. You could use a hosted server (or hosted service provider) with a huge infrastructure like AWS; you could also use a spare computer lying around, install a backend on it (Node.js? Next.js? PHP? Etc...) and hook it up to your network, and it'll also function as a server.
There are also local development servers. These are pieces of software running on your local machine that allow you, the developer (and usually only you) to run server code while testing. These are usually accessible by plugging a localhost URL into your web browser, such as localhost:3000.
In SSR, 'the server' is usually the web server which contains the HTML files, but it can also be a separate server, such as with bigger websites that may have dedicated file storage servers, and rendering servers. Either way, the server is always just a computer owned by the website owners, which sends stuff over when someone asks it to.

NGINX and .NET Core application performance

I have a .NET Core 5 website hosted with NGINX on Ubuntu Linux.
I need to select a new server and wonder where I can save money.
My app is very small.
My app does NOT use the hard drive; nothing is stored in the file system.
I notice that even if I overwrite the app with a new version (new files), the application does not change until I restart the process.
My plan is to save money on storage by buying servers with older HDDs.
Question:
How does NGINX work?
Does it load the app into memory and serve it from there?
Or does it keep the location of the files, build some cache, and serve from that (which would mean the hard drive is important)?
In short: does NGINX use the hard drive?
My app uses a CDN for all files; they are hosted on third-party Azure Blobs and Amazon S3.
I want to save by using HDDs instead of SSDs and NVMe.
Which HDDs should I select to get the best performance?
Again, my app only serves web requests; all files come from third parties.
You pretty much already have the answer. NGINX won't do disk writes or reads unless it needs to.
For your case, static files are hosted elsewhere, so obviously your NGINX has no need to interact with the disk for serving those.
The app is served by using NGINX as a reverse proxy. The communication between NGINX and the app is usually done over a network socket (e.g. your app binds to a TCP port) or a UNIX socket. In neither case does disk speed matter.
You would be better off asking yourself whether your app logic does any reads from or writes to disk. If the answer is no, or "not much", then yes, a plain HDD would be sufficient.
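For reference, a typical reverse-proxy block looks like the sketch below; the server name and the port the app binds to are placeholders. Nothing in this hop touches the disk:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder

    location / {
        # Forward every request to the .NET Core app over a TCP socket.
        proxy_pass http://127.0.0.1:5000;  # hypothetical app port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```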

Deploying Next.js to Apache server

I've been developing a Next.js website locally and now want to set it up on my Apache server (with cPanel). However, I'm very new to Next.js and Node apps and not too sure how to go about it.
Has anyone done this successfully? Can you list the required steps and what files should be on the server?
Also, can this be done on a subdomain?
Thank you!
To start with some clear terms just so we're on the same page, there are two or three very different things people mean when they say "server":
A Server Machine is a computer that is connected to the internet that you intend to use to serve something to people on the internet.
A Server Program is some software you run on your Server Machine. The job of the Server Program is to actually calculate the responses to various requests.
A Server as a Service is a webapp provided by a company that stores your code and then puts it onto Server Machines with the right Server Program as needed.
While we're here, let's also define:
A Programming Language is the language your website is written in. Some sites have no language (and are just raw HTML/CSS files that are meant to be returned directly to the user). Many sites, though, have some code that should be run on the server and then the result of that code should be returned to the user.
In your case, you have a Machine whose condition we don't know other than that it is running the Program Apache (or probably "Apache HTTP Server"). Apache HTTP server is very old and proven and pretty good at serving raw files back to users. It can also run some Programming Languages like PHP and return the result.
However, Next.js is built on top of the Programming Language JavaScript, which Apache does not have the ability to run. Next.js instead wants its Server Program to be Node.
So the problem here is basically that you have a hammer, but only screws. You can't use the tool you have, Apache, to solve the problem you need solved, running Node code and returning the result. To get around this you have two options:
First, you can find a way to access the Server Machine that is currently running Apache and tell it, instead, to run Node pointed at your Next.js code whenever it starts up. This might not be possible, depending on who owns this machine and how they've set it up.
Second, and probably easier, is to abandon this Machine and instead use a Server as a Service. Heroku, AWS, and Netlify all support Next.js and have a free tier. The easiest solution, though, is probably to just deploy it on Vercel, which is a Server as a Service run by the same team that makes Next.js and which has a very generous free tier for you to get started with.
The good news, though, is that yes, Next.js does totally support being hosted from a subdomain.
Next.js allows you to build fully functional Node applications, as well as simple statically generated sites like Jekyll or DocPad do. If your use case is a simple statically generated site, look here: https://nextjs.org/docs/advanced-features/static-html-export
In particular, the next build && next export command will create all the HTML and assets necessary to host a site directly via an HTTP server like Apache or Nginx. Contents will be output to an out directory that can serve as the server root.
Pay very close attention to what features are not supported via this approach.
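If you go the static-export route, the exported out directory can be served by Apache like any other static site, including on a subdomain. A hypothetical virtual-host sketch (the domain and paths are placeholders; cPanel typically generates an equivalent for you):

```apache
<VirtualHost *:80>
    # Subdomain pointing at the exported Next.js site.
    ServerName next.example.com
    DocumentRoot /var/www/next-site/out

    <Directory /var/www/next-site/out>
        Require all granted
    </Directory>
</VirtualHost>
```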

Are web page data files stored locally for a time?

When we hit any URL (like facebook.com) in any browser (like Chrome), the browser uses many resources for that particular page, such as JS files, images, properties files, etc. So, are they stored locally temporarily?
Yes, it's called the browser cache :-) Websites can also use local storage to store some data on your machine. Additionally along the way various servers might cache the resources. ISPs do this a lot.

Best practices for shared image folder on a Linux cluster?

I'm building a web app that will scale into a Linux cluster with Tomcat and nginx. There will be one nginx web server load-balancing multiple Tomcat app servers, and a database server behind them. All running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front nginx box and have pointers to them stored in the database. This way nginx can serve them full speed without involving the app servers.
The app resizes the image in the browser before uploading. So file size will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the nginx front end server? I can think of several ways I could do it. But I suspect some kind of network files system would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you could use the following options:
If you have only one front-end web server, the suggestion would be to store the files locally on the web server in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device. This way you would not need to synchronize the files across the servers. Make sure that the shared resource is redundant; otherwise, if it goes down, your site will be down.
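As a sketch of the shared-mount approach: each app server (and the nginx box) mounts the same NFS export, so every upload lands in one place. The host name and paths below are placeholders:

```
# /etc/fstab entry on each server (one-time: `sudo mount /var/www/images`)
# storage host : exported path     mount point       type  options
storage.internal:/exports/images   /var/www/images   nfs   defaults,_netdev  0 0
```

With this layout, the app servers write uploads to /var/www/images and nginx serves the same directory directly, without involving Tomcat.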
