I understand that multiple Node.js applications can be run on one server using Nginx. I've got Nginx set up and running on an Ubuntu server, and I have two Node.js applications listening on
127.0.0.1:3001 and
127.0.0.1:3002. I want to reach each application through a different URL. For example, to access
127.0.0.1:3001, I would use the URL http://121.42.20.100/, and to access the application on
127.0.0.1:3002, I would use the URL http://121.42.20.100/admin.
The default file in the sites-available folder follows:
server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /usr/share/nginx/www;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name 0.0.0.0;

    location / {
        proxy_pass http://127.0.0.1:3001;
    }

    location /admin/ {
        proxy_pass http://127.0.0.1:3002;
    }
}
When I access a URL like http://121.42.29.100/, it works: I get a response from
127.0.0.1:3001. However, when I access a URL like http://121.42.29.100/admin, it does not work and shows the error "Cannot GET /admin/". How can I configure Nginx to make this work?
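For what it's worth, a common fix for this kind of setup (a sketch only, since I can't test against this exact configuration) is to give proxy_pass a URI part, i.e. a trailing slash. Nginx then replaces the matched /admin/ prefix with that URI before forwarding, so the app on port 3002, which presumably has no /admin route of its own, receives the request at /:

location /admin/ {
    # The trailing slash makes Nginx strip the matched /admin/ prefix,
    # so the backend on :3002 sees /foo instead of /admin/foo.
    proxy_pass http://127.0.0.1:3002/;
}

Note that a request for /admin without the trailing slash will not match location /admin/ and will fall through to location /. The alternative is to keep proxy_pass exactly as it is and make the application on port 3002 serve its routes under /admin instead.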
I'm trying to enforce the SSL protocol in a Jelastic environment.
My setup is:
one node, with an Nginx load balancer (+ public IP + custom SSL certificate) and a Node.js application server.
The SSL setup is working, but I want to enforce the use of HTTPS instead of HTTP (a redirect).
I've tried modifying nginx.conf, but with no success.
Any ideas on how I should do that?
Create the config file /etc/nginx/conf.d/nginx_force_https.conf and add the lines below:
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
This will redirect all configured sites to HTTPS.
If you only want to redirect the exact site example.com:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
Make sure that these includes are enabled. In /etc/nginx/nginx.conf:
include /etc/nginx/nginx-jelastic.conf;
and in /etc/nginx/nginx-jelastic.conf:
include /etc/nginx/conf.d/*.conf;
Check the configuration for errors:
sudo service nginx configtest
Reload the configuration (this should be enough for the changes to take effect):
sudo service nginx reload
Check that everything works as expected, and restart the whole web server if needed:
sudo service nginx restart
A more detailed answer can be found in the post Force www. and https in nginx.conf (SSL).
I've got my build folder on my server and I assume I've correctly configured nginx to be able to see it (I copied the config from Deploy Create-React-App on Nginx), but I still get a 404 page when I try to navigate to it. Is my nginx file just not configured right? I can't figure out what could be causing this problem.
I've tried following Deploy Create-React-App on Nginx exactly, changing things to match my own server name etc. when appropriate.
This is the section I've added to my config file:
server {
    listen 80;
    server_name xxx.net www.xxx.net;

    root /var/www/xxx.net/html/build;
    index index.html;

    location /news {
        try_files $uri /index.html;
    }
}
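One thing that stands out, though this is a guess based only on the fragment shown: there is no location / block, so any path other than /news is handled by nginx's default static lookup and will 404 unless a matching file exists under the build directory. A typical Create React App server block looks roughly like this (same names and paths assumed as above):

server {
    listen 80;
    server_name xxx.net www.xxx.net;

    root /var/www/xxx.net/html/build;
    index index.html;

    # Serve real files and directories when they exist,
    # otherwise fall back to index.html so the client-side router takes over.
    location / {
        try_files $uri $uri/ /index.html;
    }
}

It's also worth confirming that nginx actually selects this server block for your requests (for example, that a default site in sites-enabled isn't catching them first) and reloading nginx after every change.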
I have a server (Ubuntu 16.04) and a user called coxier.
I configured Nginx to proxy requests and created a file /etc/nginx/sites-available/myproject:
server {
    listen 80;
    server_name 101.200.36.xx;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/coxier/iemoji/server/iemoji.sock;
    }
}
In this Flask project, the server receives a request and then generates a .gif file for it.
At first I used Flask's send_file directly to send the gif file (about 1 MB), but it was very slow.
So I decided to optimize the request flow:
Receive the HTTP request and generate the gif file.
Return the URL of the generated gif file to the user.
The user accesses the gif file via that URL.
My question is: how can I generate the URL of the generated gif?
I have tried the configuration below.
server {
    listen 80;
    server_name 101.200.36.xx;

    root /home/coxier/iemoji/server/output;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/coxier/iemoji/server/iemoji.sock;
    }
}
For example, I want to make /home/coxier/iemoji/server/output/a3dfa3eb21daffc7085f71630cbd169e/output.gif accessible,
so I return http://101.200.36.xx/a3dfa3eb21daffc7085f71630cbd169e/output.gif to the user.
However, Nginx returns 404 Not Found.
From https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/ I found a solution:
server {
    listen 80;
    server_name 101.200.36.xx;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/coxier/iemoji/server/iemoji.sock;
    }

    location ~ \.(gif) {
        root /home/coxier/iemoji/server/output;
        sendfile on;
        sendfile_max_chunk 1m;
    }
}
The key is to redefine root inside the location block. Then you can generate URLs like this:
http://101.200.36.xx/a3dfa3eb21daffc7085f71630cbd169e/output.gif
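For what it's worth, a slightly tightened version of that location block (my sketch, assuming the same paths) anchors the match so only URIs that actually end in .gif are served from the output directory:

location ~* \.gif$ {
    # A request for /abc123/output.gif is looked up on disk as
    # /home/coxier/iemoji/server/output/abc123/output.gif
    root /home/coxier/iemoji/server/output;
    sendfile on;
    sendfile_max_chunk 1m;
}

Everything else still goes through the uwsgi_pass location, so the Flask app only has to return the URL string and never touches the gif bytes itself.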
I'm trying to get my Node server up and running on Ubuntu 14.04. I followed a tutorial from DigitalOcean to set up nginx and server blocks to serve my content.
I believe I have the server set up correctly, because I can whois my-site.com and also ping my-site.com. When I visit the web address in the browser, however, I just get this error displayed on the page: "Internal Error: Missing Template ERR_CONNECT_FAIL".
I thought that maybe I had pointed the nginx server block to the wrong path because of the "Missing Template" part, but it points to the right place. It is supposed to display a simple index.html file located in /var/www/my-site.com/html.
Here is my server block if this sheds some light on the error:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=off;

    root /var/www/my-site.com/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name my-site.com www.my-site.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;

        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
This file is located in /etc/nginx/sites-available/my-site.com and I've copied it to the sites-enabled directory as well.
What am I missing here?
This is a pretty standard error message, and in fact as of this moment nodejs.org is displaying that exact same message. I believe it is generated by a reverse proxy: for example, https://searchcode.com/?q=ERR_CONNECT_FAIL shows that ERR_CONNECT_FAIL appears in the Squid reverse proxy software. I couldn't find anything similar in a quick search through the nginx source code.
When I encountered this error message, I was deploying through the DigitalOcean one-click Dokku app and I did not have a domain in /home/dokku/VHOST, so it was being assigned a random internal IP address, and I accessed it using [domain]:[port]. Hope that gives you a clue.
I inherited a Node.js project and I am very new to the platform/language.
The application I inherited is in development, so it is a work in progress. In its current state it runs on port 7576, so you access it this way: server_ip:7576.
I've been tasked with putting this "prototype" on a live server so my boss can show it to investors and so on, but I have to password-protect it.
So I got it running on the live server and then put it behind an nginx vhost like this:
server {
    listen 80;

    auth_basic "Restricted";
    auth_basic_user_file /usr/ssl/htpasswd;

    access_log /etc/nginx/logs/access/wip.mydomain.com.access.log;
    error_log /etc/nginx/logs/error/wip.mydomain.com.error.log;

    server_name wip.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:7576;
        root /var/app;
        expires 30d;
        # uncomment this if you want to name an index file:
        #index index.php index.html;
        access_log off;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
        root /var/app/public;
    }
}
This got the job done: I can now access my app by going to wip.mydomain.com, and I can easily password-protect it via nginx.
My problem is that the app is still accessible via ip:port, and I don't know how to prevent that.
Any help is appreciated.
Thanks
In your Node.js code, you need to explicitly bind the server to the loopback IP:
server.listen(7576, '127.0.0.1');
(You are looking for the call to .listen(<port>) to fix; the variable may be called app or something else, though.)
Any IP address starting with 127. is a loopback address that can only be reached from within the same machine (it never actually goes over the network).