Configure nginx for two node apps, with one on a subdomain - node.js

Issue
I'm trying to set up nginx so that my domain, domain.com, is served by a Node web app on port 3000, and the subdomain dev.domain.com is served by a second Node web app on port 3001. With this configuration, domain.com connects to the right port, but dev.domain.com just shows a page saying the server can't be reached.
Edit:
If I go to IP_ADDRESS:3000 I get the same content as domain.com, but if I go to IP_ADDRESS:3001 I get what should be at dev.domain.com. Based on this it seems like the apps are running fine on the right ports, and I'm just not routing the subdomain correctly.
Code
I edited /etc/nginx/sites-available/default directly so it has:
server {
    listen 80 default_server;
    server_name domain domain.com www.domain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

server {
    listen 80;
    server_name dev.domain dev.domain.com www.dev.domain.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
    }
}
Other than that file, everything else is a fresh install.
My logic
I'm very new to nginx, but it seems like this should send any requests for domain.com to port 3000 and requests for dev.domain.com to port 3001.
Any help or critique of what I've done so far would be greatly appreciated!

The above setup works fine. My issue was with DNS records: I added an A record pointing dev.domain.com to the IP address of the server the Node apps are running on.
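If you hit the same symptom, you can confirm the subdomain actually resolves to the server before digging into nginx. A quick check (the IP shown is just a placeholder):
# both names should return the server's public IP
dig +short domain.com
dig +short dev.domain.com
# e.g. 203.0.113.10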

Faced the same issue and solved it by creating the file as the root user:
drwxr-xr-x 6 gitlab-runner gitlab-runner 4096 Sep 12 06:56 .
drwxr-xr-x 4 root root 4096 Sep 12 06:57 ..
-rw-r--r-- 1 root root 11 Sep 12 06:54 .env
-rw-rw-r-- 1 gitlab-runner gitlab-runner 599 Sep 12 06:56 app.js
If you delete all files and directories in this folder as the gitlab-runner user with the rm -Rf command, it will delete everything except .env.
This is just a quick workaround; maybe it will be useful.

Related

Can't redirect traffic to localhost with nginx and docker

I'm new to Docker and nginx so this may be a simple question but I've been searching through question/answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through docker to reroute all requests to my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the API on localhost:3000 directly this works fine, but when I run against my.example.com I get a network error saying the host isn't found. To be clear, the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its API for development.
This isn't working because nginx is proxying the request to localhost, which is the container itself, but your app is running on the host's port 3000, outside of the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000;
and add 127.0.0.1 example.com my.example.com to /etc/hosts on the host machine.
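For reference, a minimal sketch of the resulting nginx.conf under that assumption (the app stays on the host's port 3000). Note that on Linux, host.docker.internal is not defined by default; on recent Docker versions it can be mapped with --add-host host.docker.internal:host-gateway when starting the container (the image name below is a placeholder):
server {
    server_name my.example.com;

    location / {
        # forward to the app running on the Docker host, not inside this container
        proxy_pass http://host.docker.internal:3000/;
    }
}
docker run --rm -p 80:80 --add-host host.docker.internal:host-gateway my-nginx-image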

How to configure docker-compose.yml and nginx conf file to read an external drive?

I have nginx in a docker container. My docker-compose.yml is like this (simplified):
nginx:
  volumes:
    - /var/www/html:/www/:rw
    - /media/storage:/storage/:rw
Where /var/www/html is my website root and /media/storage is an external drive on my host machine (Azure).
Now I'm trying to point the website URL example.com/downloads to /storage, but without success. My nginx/conf.d/example.com.conf is as follows (simplified):
server {
    listen 80 default;
    server_name example.com;

    # this works
    root /www;
    index index.php;

    # this gets a 404 error
    location /downloads {
        root /storage;
    }
}
But I get a 404 error for example.com/downloads. What am I forgetting here? The file permissions and owner of both paths are the same. I don't know if the bad configuration is in example.com.conf or in docker-compose.yml. How should I configure these?
I solved this myself using alias /storage; instead of root /storage.
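For reference, a minimal sketch of the working location block with alias (using the /storage container path from the compose file above). With root, nginx appends the full request URI to the path, so /downloads/file.txt was looked up at /storage/downloads/file.txt; with alias, the /downloads/ prefix is replaced, so the same request maps to /storage/file.txt:
location /downloads/ {
    alias /storage/;
}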

NodeJs Auto start in business server with systemd as different user

I have my business app running in a development environment, and inside that are two folders named Client and Backend.
Client (ReactJS) running on port 5000
Backend (Node.js) running on port 6000
Server: nginx.
So in the nginx default.conf file, I listen on port 80 and proxy_pass to http://localhost:5000.
It's working fine in development.
Please note that some redirections are configured like ${host}:3000/xxx in the backend and client scripts.
But while doing the production deployment, I ran into difficulty.
I have the static client build and placed it in the nginx root folder.
Below is the .conf file
server {
    listen 80;
    listen 5000;
    server_name xx.xxx.xxx.xxx;

    location / {
        root /usr/share/nginx/html/client/build;
        index index.html index.htm;
        try_files $uri $uri/ #backend;
    }

    location ~ ^/([A-Za-z0-9]+) {
        proxy_pass http://localhost:6000;
    }
}
I also have SSO enabled; when I navigate to the address, it sends the index.html file, which is the login page.
When I press login, it first navigates to "/login/abc/", which is routed in the backend script.
But it responds with a 404 error.
What am I doing wrong?
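For comparison, nginx named locations are written with an @ prefix, so a try_files fallback to a proxied backend usually looks like the sketch below (a guess at the intent here; the @backend name is illustrative, and note that in the config above #backend starts a comment rather than referencing a location):
location / {
    root /usr/share/nginx/html/client/build;
    index index.html index.htm;
    # fall back to the named location when no static file matches
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://localhost:6000;
}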

How to connect a GraphQL API to front-end apps on nginx hosted on EC2

I have an App composed of three projects with the following structure :
mainapp
->/packages
->/admindashboard
->/shopapp
->/api
I want to deploy the project on an EC2 instance (of which I'm not the administrator), so I built the admindashboard and the shopapp with:
yarn build
Then I added nginx and configured the /nginx/sites-available/default file like so:
server {
    listen 80 default_server;
    server_name localhost;

    location / {
        root /var/www/mainapp/packages/shopapp/out;
        index index.html index.htm;
    }
}

# running admin-Dashboard
server {
    listen 3000 default_server;
    server_name localhost;

    location / {
        root /var/www/mainapp/packages/admindashboard/build;
        index index.html index.htm;
    }
}
This got the two front-end apps to work, but I couldn't link the API.
When I run yarn dev:api-shop or yarn dev:api-admin, it shows that it's running on port 4000, but the front-end apps fail to fetch data; they can't GET or POST to the API.
What is the correct way to deploy such a project?
The project technologies are:
Admin Dashboard :
-CRA
-Apollo
-BaseUI
-Typescript
-React Hook Form
Shop :
-NextJs
-Apollo
-Typescript
-Styled Components
-Stripe Integration
-Formik
API :
-Type GraphQL
-Type ORM
Thank you, and sorry if my explanation is not clear.
I resolved the problem. In the admin project's .env file, the API URL was:
http://localhost:4000/admin/graphql
I had to change localhost to the IP address of the instance, like so (example):
http://15.xxx.xx.xxx:4000/admin/graphql
and it worked. Since there are two APIs (one for the shop and one for the adminDashboard), I had to run one on port 4000 and the other on port 4001. Now it works, but I still wonder if it's the proper way to deploy such an app. Thank you all.
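Exposing the APIs on ports 4000 and 4001 directly does work, but a common alternative is to have nginx proxy the GraphQL endpoints on port 80 so the front ends can call same-origin paths instead of hard-coded IPs. A rough sketch under that assumption (the endpoint paths are guesses; adjust them to the real API routes):
server {
    listen 80 default_server;
    server_name localhost;

    location / {
        root /var/www/mainapp/packages/shopapp/out;
        index index.html index.htm;
    }

    # assumed shop API endpoint, proxied to the API on port 4000
    location /graphql {
        proxy_pass http://localhost:4000;
    }

    # assumed admin API endpoint, proxied to the API on port 4001
    location /admin/graphql {
        proxy_pass http://localhost:4001;
    }
}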

How to reverse proxy to a Node.js app in a domain subfolder with Nginx on CentOS?

I have a domain https://ytdownvideo.com running a WordPress website.
I want to run a Node.js app in a subfolder on the same domain, as follows: https://ytdownvideo.com/youtube/
I am using Nginx on CentOS 7 x64.
How can I configure Nginx to reverse proxy to the Node.js app when navigating to this subfolder?
First, you will need to run your Node.js app on a different and unused port. For example, 3000.
Then, add the following to your nginx configuration:
location ~ ^\/youtube.*$ {
    proxy_pass http://localhost:3000;
    port_in_redirect off;
}
Restart nginx with:
sudo systemctl restart nginx
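If you want to confirm the edited configuration is valid before restarting, nginx can test it first:
sudo nginx -t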
