Accessing images in node microservice - node.js

I have a backend composed of multiple microservices; one of them is the media microservice, whose job is to receive a payload through RabbitMQ and save the received files to the filesystem. The main backend then receives the response from the broker and saves the path and filename in a database. The frontend clients then receive the path and display the image found on the media microservice.
This all works fine when developing locally, but I'm missing a crucial part in production: I haven't quite figured out how to configure nginx to allow access to the files/images. The main backend lives on a certain port and the microservices each on a different port (the plan is to later dockerize the microservices and deploy each on a separate VPS). The media microservice doesn't have any functionality to serve images; it just handles saving the files to the filesystem, so all I need is a way to access the files on that filesystem. Any hints on how I can configure something of the sort in nginx?

So it turns out I still needed Express, or at least Express makes serving the actual files much easier:
app.use("/images", express.static(path.join(__dirname, "media", "images")));
With the following nginx configuration:
location /images/ {
    proxy_pass http://localhost:****;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
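For context, the serving side of the media microservice can stay tiny. Here is a minimal sketch of the whole HTTP part of the service; the port (4000) is an assumption you'd match to the proxy_pass target above:
// media-service.js - minimal sketch of the serving side; the port is an
// assumption, match it to the proxy_pass target in the nginx block above.
const express = require("express");
const path = require("path");

const app = express();

// The rabbitmq consumer elsewhere in this service writes files into
// ./media/images; this simply exposes that directory read-only over HTTP.
app.use("/images", express.static(path.join(__dirname, "media", "images")));

app.listen(4000, () => console.log("media service listening on 4000"));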

Related

Nodejs Child Process on another server using server to server communication

I want to run a child process with Node.js from one server on another. I have a process that is too heavy and is causing my main server to run slowly, so I want to run my heavy processes on another server that will perform the heavy tasks, like data modifications, and return a buffer of that data, but I could not find anything similar to this.
For example, I have server A that runs my website, and users share their content through it. When user traffic spikes, my server slows down because of data like image and video uploads, PDF reports generated from those images and videos, and serving the site content. I want to perform these tasks on server B, so that server A only handles data serving and traffic management.
At this point you probably need to split your webserver's frontend routes across different worker servers.
Let's suppose you're using Nginx as the website frontend. If you're not, then your first step would be to set up an nginx webfront.
1 - If you haven't done so, serve all public static content (like PDF files, videos, images, etc.) directly from nginx, using different rules for static content and node server routes:
Something as basic as this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }

    location /static {
        root /dir/to/my/static/files; # here you have your videos, images, etc.
    }
}
2 - Now, if you need to split your node server into 2 services, you can just create 2 (or more) nginx proxy rules:
server {
    listen 80;
    server_name example.com;

    location /api {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.2:5000; # server 2
    }

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000; # server 1
    }

    location /static {
        root /dir/to/my/static/files;
    }
}
That way, example.com/api/* routes will go to your secondary Node server (on IP 127.0.0.2), example.com/static will be served directly by Nginx at blazing speed, and the non-mapped routes will be served by the default main node server on 127.0.0.1.
There are many ways to set up proxies and optimize Nginx so that it can, for instance, go through a pool of node servers in round-robin fashion, and you can also compress data and use protocols like HTTP/2 to take load off the slower node-based webserver (i.e. Express).
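If you also want server A to hand specific heavy jobs off to server B over HTTP (rather than just letting nginx route clients there), a rough sketch could look like the following; the ports, addresses, and the /process route are assumptions for illustration, not part of the answer above:
// server-a.js - sketch of forwarding a heavy task to server B.
// Assumes Node 18+ (global fetch); ports and routes are hypothetical.
const express = require("express");
const app = express();

app.post("/upload", express.raw({ type: "*/*", limit: "50mb" }), async (req, res) => {
    // Forward the raw payload to server B, which does the heavy processing
    // and returns a buffer of the modified data.
    const response = await fetch("http://127.0.0.2:5000/process", {
        method: "POST",
        headers: { "content-type": req.headers["content-type"] || "application/octet-stream" },
        body: req.body,
    });
    const processed = Buffer.from(await response.arrayBuffer());
    res.type(response.headers.get("content-type") || "application/octet-stream").send(processed);
});

app.listen(5000, "127.0.0.1");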

node.js server and client: one or two node instances?

I am writing a website with node.js and, until now, I've always separated the client and server parts into two different node.js instances (and processes):
one for the server part (APIs, interaction with databases, etc.)
one for the client part (js code is executed in the browser)
Is this the correct way of doing it? Or is there a way to collapse client and server into one node.js instance?
Thanks.
You do not need node.js to provide clients with static files.
Nginx (or any other reverse proxy) can do it in a more efficient way, thus conserving your server's resources and allowing higher loads.
I suggest you use nginx to serve static files and forward API requests to the node.js service.
Here is an example of how you could do it:
server {
    listen 80 default_server;
    root /client-code;

    location / {
        try_files $uri $uri/ @node;
    }

    location @node {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8000;
    }
}
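Behind that config, the node side only needs to expose the API on the port nginx proxies to. A minimal sketch (the /api/hello route is just an illustration):
// api.js - minimal API server behind the nginx config above.
const express = require("express");
const app = express();

// Only API routes live here; nginx's try_files serves the static client
// files from /client-code and falls through to @node for everything else.
app.get("/api/hello", (req, res) => res.json({ ok: true }));

app.listen(8000, "127.0.0.1");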

"TypeError:Networkerror when attempting to fetch resource" when my remote client tries to communicate with my remote server

I have uploaded a React client to DigitalOcean with an SSL certificate to enable HTTPS. I have also uploaded my Express server to Amazon's AWS. The reason for the different host providers is that I wasn't able to upload my client to AWS, so I made the switch to DigitalOcean.
The server works great and I get normal responses from it when I use the client from my machine. However, the exact same code doesn't work on DigitalOcean's Nginx server. I get:
TypeError: NetworkError when attempting to fetch resource
but no response error code. The GraphQL/fetch requests aren't visible on the server, so either they aren't being sent correctly or they cannot be accepted correctly by the server.
I played around with "proxy" in the client's package.json and the HOST/PORT/HTTPS attributes as seen here, but I realized these have no effect in production.
I have no idea how to fix this. My only guess is that the client uses HTTPS while the server doesn't, but I haven't found info on whether that's a problem.
This is my client's Nginx server configuration:
server {
    listen 80 default_server;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
If your client lives on a domain different from your API server's, then you need to make sure CORS headers are enabled on the API server; otherwise the browser will refuse to load the contents.
See here for more information regarding CORS headers.
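As a sketch, enabling CORS on an Express API server with the cors middleware could look like this; the origin value is a placeholder for your client's actual domain:
// Assumes the cors package (npm install cors); origin is a placeholder.
const express = require("express");
const cors = require("cors");
const app = express();

app.use(cors({
    origin: "https://example.com", // the client's domain on DigitalOcean
    credentials: true,             // only if you send cookies/auth headers
}));

app.listen(4000);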
Also, try turning off credentials when running the client on DigitalOcean. Storing a cookie on your client may not be (or have been) possible.

Access Node.js app by address on Laravel Forge server

I have my Node.js app working fine on Laravel Homestead, but now need to move it to the server. I'm using Laravel Forge for the server provisioning. The server already has Node and the required packages installed.
When running locally, I can just use Homestead's http://192.168.10.10:3000 to connect. What will the address be on my server, and how can I keep the node app running?
I can run it when I am connected via SSH, and I can see that it is outputting events received. But how do I connect the client to it? I have tried my server's IP and domain but no luck.
Once connected, how can I keep the node app running so I don't need to have an open SSH session?
Many thanks,
Sam
Install NodeJS if you don't have it already on the server:
sudo apt install nodejs
Push your app to whatever folder you want.
Create an app (a site) for your NodeJS app on Laravel Forge.
At the bottom, there will be a button to edit Nginx's configuration. If a location / { block exists, replace it with a reverse proxy (if there isn't one, just add it):
location / {
    proxy_pass http://SERVER_IP:APP_PORT;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
Restart nginx and run the app.
It should all be running as you want on the address you've given Laravel Forge.
To keep the app running without an open SSH session (and restart it on crashes), you'll want to look into PM2. It's fairly straightforward.
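For example, a minimal PM2 ecosystem file (the app name, script path, and port below are assumptions) keeps the app alive across crashes and lets you close your SSH session:
// ecosystem.config.js - start it with: pm2 start ecosystem.config.js
module.exports = {
    apps: [{
        name: "node-app",        // hypothetical name
        script: "./server.js",   // path to your app's entry point
        env: { NODE_ENV: "production", PORT: 3000 },
    }],
};
After that, pm2 save and pm2 startup will make the app survive server reboots as well.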

node.js angular jade client and node.js rest api

Are there any good examples or guidance anyone can provide for structuring an app like this?
Client (client.company.com)
Node.js
Angular
Jade
ExpressJS
Server (private) (server.company.com)
node.js
"rest" api (express)
The api is private right now, only accessible from the hosted servers.
If there is a page which creates recipes, for example, is this right?
client
- angular form with router that posts to client.company.com/recipe
- express would need a route to handle /recipe
- that route would then post to the api server at server.company.com/recipe
- then the response would be propagated back through the layers to the ui.
Is it right to have the client duplicate the api routes? Is there anything that can be done to simplify things and reduce duplication?
Angular forms should just post directly to the api server. Express is used just to serve the angular html/javascript/static files. The fewer layers between the html and the api, the better. I don't see any good reason why you need the client to duplicate the api routes.
Since your api is behind the hosted server, you can set up an nginx server to route all your api calls from the hosted server to the api server. Below is a sample nginx configuration that does the routing:
upstream clientServer {
    server client.company.com:80;
}

upstream apiServer {
    server server.company.com:80;
}

server {
    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://clientServer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api {
        proxy_pass http://apiServer;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Note that the above is a snippet of the nginx.conf.
Nginx will look at your URL path:
- requests to the / path will go to the client server (where you can host the express js and angular files)
- requests to the /api/* path will be forwarded to the api server
Your angular form can then call the api directly at /api/*.
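From the browser, the form submission then becomes a single call to the relative path; a sketch (the /api/recipe route and payload are hypothetical):
// Browser-side sketch: post straight to the nginx-routed api path.
// /api/recipe and the payload are hypothetical.
fetch("/api/recipe", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ name: "Pancakes", servings: 4 }),
})
    .then((res) => res.json())
    .then((recipe) => console.log("created", recipe));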
Hope that helps.
