I have an nginx/Node.js server I'm trying to configure. Basically it comes down to running two web servers behind port 80 at the same time: www.mysite.com needs to be served by nginx on port 80, while api.mysite.com needs to be routed to a Node.js server listening on port 8888.
I've been messing around with proxy_pass in my config (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass) but with no luck. I also tried this https://stackoverflow.com/a/20716524/605841, again with no luck.
If anyone has any tips that would be great. Thanks in advance.
Nginx public dir: /var/www/html. Express app location: /var/www/html/myNodeAppRoot
Here's my /etc/nginx/sites-available/api.mysite.com file (symlinked into sites-enabled):
server {
    listen 80;
    # server_name ~^(?<login>[a-z]+)\.api\.mysite\.com$;
    server_name api.mysite.com$;
    location / {
        # root /var/www/html/myNodeAppRoot;
        # proxy_pass http://unix:/tmp/\$login.api.mysite.com.sock:$uri$is_args$args;
        proxy_pass http://unix:/tmp/api.mysite.com.sock:$uri$is_args$args;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
And here's my default.conf file:
#
# The default server
#
server {
    listen 80 default_server;
    server_name www.mysite.com;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location / {
        root /var/www/html;
        index index.php index.html index.htm;
    }
    error_page 404 /404.html;
    location = /404.html {
        root /var/www/html;
    }
    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    # proxy_pass http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root /var/www/html;
        try_files $uri =404;
        # fastcgi_pass 127.0.0.1:9000;
        fastcgi_pass unix:/tmp/php5-fpm.sock;
        fastcgi_index index.php;
        # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }
}
Thanks for any help!
I run a bunch of Node.js applications on the same server, while nginx serves some static content. Here's my setup:
# the meteor server
server {
    server_name example.com;
    access_log /etc/nginx/logs/example.access;
    error_log /etc/nginx/logs/example.error error;
    location / {
        proxy_pass http://localhost:3030;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I just repeat this block and change the port for each new Node.js app, then start each app with a different --port parameter (e.g. --port 3030).
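For example, for the api.mysite.com case in your question, the extra block would just be a copy with a different server_name and upstream port (assuming the Node app is started so that it listens on 8888):

server {
    listen 80;
    server_name api.mysite.com;
    location / {
        proxy_pass http://localhost:8888;
        proxy_set_header X-Real-IP $remote_addr;
    }
}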
nginx can also be configured to proxy to unix sockets, which appear as entries in the file system, and Node supports listening on them out of the box. This lets you avoid juggling ports behind nginx.
A good tutorial for setting up a Node app with nginx and sockets can be found here.
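A minimal sketch of the socket variant for the api.mysite.com case (the upstream name and socket path here are just examples; on the Node side you pass that same path to server.listen() instead of a port number, and both processes need access to the socket file):

upstream node_api {
    # example socket path - the Node app must listen on this exact path
    server unix:/tmp/api.mysite.com.sock;
}

server {
    listen 80;
    server_name api.mysite.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://node_api;
    }
}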
Related
I have a React frontend app and a Node Express REST API app to deploy on a Linux server using nginx and pm2.
The React frontend is deployed by placing the build folder under /var/www/example.com/html.
For the Node Express REST API, I have used pm2 to run it as a process on port 3000.
In /etc/nginx/sites-available/default, this is how my configuration looks:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;
    root /var/www/mirage.video/html;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name example.com www.example.com;
    location / {
        # Without this line routing in your Single Page APP will not work
        try_files $uri $uri/ /index.html =404;
    }
    location /api {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    # include snippets/fastcgi-php.conf;
    #
    # # With php-fpm (or other unix sockets):
    # fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    # # With php-cgi (or other tcp sockets):
    # fastcgi_pass 127.0.0.1:9000;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    # deny all;
    #}
}
# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
# listen 80;
# listen [::]:80;
#
# server_name example.com;
#
# root /var/www/example.com;
# index index.html;
#
# location / {
# try_files $uri $uri/ =404;
# }
#}
However, when I visit <server IP>/api, it shows "Cannot GET /api".
My server.ts includes the following code:
app.use(express.json()); // new way to do it
app.use(express.urlencoded({ extended: false })); // new way to do it
app.get("/", (_req, res) => {
  res.send("API Running");
});
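For reference, this is my (possibly wrong) understanding of the two forms of proxy_pass I could use for the /api block, based on the nginx docs:

# Option 1: no URI part - the request URI is passed through unchanged,
# so the backend receives /api, /api/users, etc.
location /api {
    proxy_pass http://localhost:3000;
}

# Option 2: with a URI part - the matched /api/ prefix is replaced,
# so the backend receives /, /users, etc.
location /api/ {
    proxy_pass http://localhost:3000/;
}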
Is there anything I am missing? Thanks in advance.
Background/Objectives:
Hi everyone, first time poster here!
I am currently trying to configure NGINX so that it:
Serves a React App when specific routes are hit, e.g. /auth/
Proxies all traffic on the /api route to a Node back-end
Serves a WordPress site on all other routes
Issue:
I have an /auth location block defined in my NGINX config file that points to a React App contained in the folder /home/ubuntu/app/client/build.
If I visit the /auth route, NGINX correctly serves the React App.
However, when I make an HTTP request to a nested route (e.g. /auth/login) I get a 404 error.
I have tried stripping the NGINX config file back to just the /auth location block, but the issue persists, so I don't think it has to do with other settings inside the file.
Specifying exact routes is not a viable solution, as many of the routes I have yet to add contain dynamic nested segments, e.g. /users/someuniqueid.
Configuration File
Hopefully the following sheds some light on what I am doing wrong. Note: I have removed the server name for privacy reasons, so please don't assume I actually used that name :)
server {
    listen 80;
    server_name myipaddress;
    root /home/ubuntu/wordpress;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location /css {
        alias /home/ubuntu/app/client/build/css;
    }
    location /media {
        alias /home/ubuntu/app/client/build/media;
    }
    location /static {
        alias /home/ubuntu/app/client/build/static;
    }
    location /assets {
        alias /home/ubuntu/app/client/build/assets;
    }
    location /auth {
        alias /home/ubuntu/app/client/build;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; #Ubuntu 17.10
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myurl.io/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myurl.io/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhusername.pem; # managed by Certbot
}
This problem is already mentioned here.
Try replacing the location /auth configuration with this:
location /auth/ {
    # trailing slash on the alias so /auth/login maps to .../build/login
    alias /home/ubuntu/app/client/build/;
    index index.html;
    # the fallback is a URI, so it needs the /auth/ prefix to stay in this block
    try_files $uri $uri/ /auth/index.html?$args;
}
location / {
    alias WORD_PRESS_PATH_HERE;
    index index.php;
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; #Ubuntu 17.10
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Thanks for reading.
I have an Ubuntu 15.04 VM and I've configured nginx to listen for requests on port 80 and forward them to the respective applications on different ports. I have a simple Node.js service running on port 3000 which exposes one GET and one POST endpoint. I started it with PM2 and added a proxy_pass to localhost:3000/ in my nginx default conf. The problem is that GET requests work fine, but POST requests return a 404 error. I've tried the POST endpoint through the Postman client.
This is my nginx default conf file:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
upstream my_nodejs_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;
    root /var/www/face_rec/ServerTest;
    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;
    server_name localhost;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        proxy_pass http://localhost:3000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    # include snippets/fastcgi-php.conf;
    #
    # # With php5-cgi alone:
    # fastcgi_pass 127.0.0.1:9000;
    # # With php5-fpm:
    # fastcgi_pass unix:/var/run/php5-fpm.sock;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    # deny all;
    #}}
# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
# listen 80;
# listen [::]:80;
# server_name example.com;
# root /var/www/example.com;
# index index.html;
# location / {
# try_files $uri $uri/ =404;
# }
#}
Any references, tutorials, suggestions or solutions would be appreciated; please let me know how to do it.
Using try_files and proxy_pass in the same location block is probably not going to work.
If you want nginx to test for the presence of a static file and proxy everything else, use a named location:
root /path/to/root;
location / {
    try_files $uri $uri/ @proxy;
}
location @proxy {
    proxy_pass ...;
    ...
}
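Adapted to the config in your question (reusing the my_nodejs_upstream block that is currently defined but never referenced), that could look roughly like this untested sketch:

root /var/www/face_rec/ServerTest;
location / {
    # serve static files if they exist, otherwise hand off to Node
    try_files $uri $uri/ @proxy;
}
location @proxy {
    proxy_pass http://my_nodejs_upstream;
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}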
Test the configuration using nginx -t, as the config in your question appears to be missing a closing }.
See this document for details.
It's impossible to say what's wrong with your code when you haven't included even a single line of the Node program your question is about.
Also, "404 error" is not enough to know what's wrong, because nginx shows a different error message than Express, and knowing the exact message would tell us where the error originated.
What you should do first is make sure that both your GET and POST handlers work correctly by running:
curl -v http://localhost:3000/your/get/path
and:
curl -v -X POST -d 'somedata' http://localhost:3000/your/post/path
from the same host where your app is running.
Then add the nginx proxy, restart nginx to make sure that the config is reloaded, and do the same with port 80. If anything is different then work from there and diagnose the difference.
But if the POST handler doesn't work on localhost:3000 then you first need to fix that.
I have an Ubuntu server on DigitalOcean which hosts several websites. I just built a mean.js stack app on my Mac, and I plan to deploy it to production on this existing server (though I don't know if I need to create another droplet, like here).
I followed this link to install Node.js, MongoDB, etc. Then I cloned my own app from GitHub:
sudo git clone https://github.com/softtimur/myapp.git /opt/myapp
cd /opt/myapp
sudo npm install
npm start
As a result, entering https://xxx.xx.xx.xx:3000/#/home in a browser works: it communicates with the server as expected.
Now, I would like to use the domain name I bought from GoDaddy (i.e., myapp.io) rather than the IP address to reach the server.
I have modified the DNS records of myapp.io so that it points to the IP address. As a result, https://www.myapp.io does reach the server, but it only shows the default nginx page.
Then, I set /etc/nginx/sites-available/myapp.io and /etc/nginx/sites-enabled/myapp.io as follows:
server {
    listen 3000;
    listen [::]:3000;
    root /opt/myopp/;
    index index.php index.html index.htm;
    # Make site accessible from http://localhost/
    server_name myopp.io;
    location / {
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        client_max_body_size 15M;
    }
    location /phpmyadmin {
        root /usr/share/;
        index index.php index.html index.htm;
        location ~ ^/phpmyadmin/(.+\.php)$ {
            try_files $uri =404;
            root /usr/share/;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }
        location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
            root /usr/share/;
        }
    }
}
After restarting nginx, npm start returns an error: Port 3000 is already in use.
Could anyone tell me if this approach is correct? If so, how could I fix the error (e.g. in the nginx config file)?
Edit 1: In /etc/nginx/sites-available/default, I have
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /usr/share/nginx/html;
    index index.php index.html index.htm;
    server_name xxx.xx.xx.x;
What you are trying to do is reverse proxy from //www.myapp.io to //xxx.xx.xx.xx:3000. This is achieved by listening on port 80 (or 443) and using proxy_pass to connect with your service running on port 3000. See this document for details.
For an http server, you could use:
server {
    listen 80;
    server_name myapp.io www.myapp.io;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Proxy "";
        proxy_pass http://127.0.0.1:3000;
    }
}
Since you are actually using https, you could implement that by having your service speak plain http on port 3000, installing your certificates in nginx, and terminating SSL there. See this document for more.
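For example, a rough sketch of the https variant (the certificate paths are placeholders for wherever your real certificate and key live, e.g. ones issued by Let's Encrypt):

server {
    listen 443 ssl;
    server_name myapp.io www.myapp.io;
    # placeholder paths - point these at your actual certificate and key
    ssl_certificate     /etc/ssl/certs/myapp.io.crt;
    ssl_certificate_key /etc/ssl/private/myapp.io.key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:3000;
    }
}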
I have a DigitalOcean VPS running nginx which hosts two websites: www.ingledow.co.uk and blog.ingledow.co.uk.
My main (www.) domain is predominantly a static site, but my blog (blog.) subdomain runs on Ghost.
Everything works perfectly, apart from the fact that I can access my blog from both www. and blog.. For example, here is a blog post at http://blog.ingledow.co.uk/puma-social-club/, but the same blog post can be seen at http://www.ingledow.co.uk/puma-social-club/.
Another point to note is that if you go to http://ingledow.co.uk/puma-social-club/ without the www. or blog. prefix, it 404s.
The problem comes from having two sites on the same VPS, but I'm not sure whether the issue is in my nginx configs, my DNS, or both.
The nginx config files are in /sites-available/ and symlinked to /sites-enabled/
I need to get this fixed because it is causing issues with Google search results and SEO.
Here's my DNS setup (both subdomains point at this droplet's IP), and here are my two nginx config files:
blog.ingledow.co.uk.conf
# blog.ingledow.co.uk running on Ghost
server {
    listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6
    root /var/www/ghost;
    index index.php index.html index.htm;
    # Make site accessible from http://localhost/
    server_name blog.ingledow.co.uk;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:2369;
        client_max_body_size 10m;
        break;
    }
    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
    location /phpmyadmin { index index.php; }
}
ingledow.co.uk.conf
# ingledow.co.uk.conf
server {
    listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6
    root /var/www/ingledow.co.uk/public_html;
    index index.php index.html index.htm;
    # Make site accessible from http://localhost/
    server_name ingledow.co.uk;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ /index.html /index.php;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }
    # Only for nginx-naxsi : process denied requests
    #location /RequestDenied {
    # For example, return an error code
    #return 418;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
    location /phpmyadmin { index index.php; }
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    # deny all;
    #}
}
Try adding www. to server_name ingledow.co.uk; in the ingledow.co.uk server block, e.g.:
server_name www.ingledow.co.uk ingledow.co.uk;
If you don't want the site to be accessible without the www. prefix, leave the bare ingledow.co.uk out of server_name instead.
Another way to do it is to keep the blog's server block as it is and just use a catch-all server block for the main static site.
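For example, a minimal sketch of that catch-all approach, reusing the paths from ingledow.co.uk.conf above (the blog block stays as it is; anything that doesn't match another server_name falls through to this default):

server {
    listen 80 default_server;
    server_name _;
    root /var/www/ingledow.co.uk/public_html;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ /index.html /index.php;
    }
    # keep the existing PHP and /phpmyadmin locations from ingledow.co.uk.conf here
}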