Production Meteor app: nginx proxy - failed (111: Connection refused) - node.js

I am attempting to deploy a Meteor app to my production server, running Ubuntu 14.04, and I am getting the following error in my log file (/var/log/nginx/error.log):
2014/12/02 16:03:38 [error] 19231#0: *4267 connect() failed (111: Connection refused) while connecting to upstream, client: 162.13.2.250, server: theserver.com, request: "GET /content/staticContent.json HTTP/1.1", upstream: "http://127.0.0.1:3000/content/staticContent.json", host: "theserver.com"
My app fetches JSON content from five files, which are included as part of the project.
I have the following nginx configuration file. I am running nginx 1.6.2.
I should also note that visiting http://theserver.com/content/staticContent.json loads the content in my browser.
I have also run 'wget http://127.0.0.1:3000/content/staticContent.json' from the command line when logged into my production server and I can fetch the content.
server_tokens off;

upstream anna {
    server 127.0.0.1:3000;
}

# HTTP
server {
    listen 80;
    #server_name *.theserver.com;
    server_name theserver.com;

    location / {
        proxy_pass http://anna;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_set_header X-NginX-Proxy true;
        proxy_redirect off;
    }
}
I also have the following upstart configuration file to run my Meteor app as a Node process:
description "Meteor.js (NodeJS) application for theserver.com"
author "Me <me#theserver.com>"

start on started mongodb and runlevel [2345]
stop on shutdown

respawn
respawn limit 10 5

setuid anna
setgid anna

script
    export PATH=/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    export NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript
    export PWD=/home/anna
    export HOME=/home/anna
    export BIND_IP=127.0.0.1
    export PORT=3000
    export HTTP_FORWARDED_COUNT=1
    export MONGO_URL=mongodb://localhost:27017/anna
    export ROOT_URL=http://theserver.com/
    exec node /home/anna/bundle/main.js >> /home/anna/anna.log 2>&1
end script
My deployment process is as follows:
On my local machine I run 'meteor build .' and then scp the resulting tarball to the home directory above on my production server.
I then untar the .tar.gz file, cd into bundle/programs/server, and run 'npm install'.
Finally, logged in as 'anna' (who has sudo rights), I run 'sudo start anna' against the above upstart script.
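The steps above can be collected into one hypothetical script (host, user, and paths are my guesses from the post; nothing here is from official Meteor tooling):

```shell
# Hypothetical end-to-end deploy for the steps described above.
APP=anna
HOST=theserver.com

deploy() {
    meteor build /tmp/build                       # step 1: build the bundle locally
    scp /tmp/build/*.tar.gz "$APP@$HOST:/home/$APP/"
    ssh "$APP@$HOST" "cd /home/$APP \
        && tar xzf *.tar.gz \
        && (cd bundle/programs/server && npm install) \
        && (sudo restart $APP || sudo start $APP)"  # steps 2-3: unpack, install, (re)start
}
```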
On checking the generated log file, I see HTML telling me there is a 502 Bad Gateway error.
When I go to the URL for the live website, I can see the loading icon I have put in place. This is the Meteor loading template that I have configured IronRouter to show while it waits for the subscription to my collection to be filled, but that never happens, because my server-side MongoDB instance never gets populated, for the reasons above.
Can anyone see anything unusual in the above that might be causing this 502 bad gateway error?
Running netstat -ln indicates that node.js is listening, see below:
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN
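For reference, the two fetch checks mentioned above can be scripted together so the status lines are easy to compare (a sketch; URLs are taken from the post):

```shell
# Compare the response when hitting node directly vs. going through nginx.
check_upstream() {
    # direct to the node process (bypasses nginx)
    curl -sI http://127.0.0.1:3000/content/staticContent.json | head -n1
    # through nginx on port 80
    curl -sI http://theserver.com/content/staticContent.json | head -n1
}
```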
Thanks

Have you tried using meteor-up? It handles the bundling and upstart config for you.
The suggested nginx config for meteor-up is below, it looks pretty similar to yours (taken from Using Meteor Up with NginX vhosts).
/etc/nginx/sites-enabled/mycustomappname.com.conf
server {
    listen *:80;

    server_name mycustomappname.com;

    access_log /var/log/nginx/app.dev.access.log;
    error_log /var/log/nginx/app.dev.error.log;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
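For completeness, the mup workflow is roughly the following (a sketch from memory of the meteor-up README of that era; check its docs for the exact mup.json fields):

```shell
# Rough Meteor Up workflow; run from the app's directory.
setup_with_mup() {
    npm install -g mup   # install the deploy tool
    mup init             # scaffolds mup.json and settings.json
    # ...edit mup.json: server host/user, appName, ROOT_URL, PORT, MONGO_URL...
    mup setup            # provisions node, mongo, and the upstart job on the server
    mup deploy           # bundles the app and ships it
}
```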

Related

daphne service listen failure: Couldn't listen on 0.0.0.0:8001, address already in use

As the title indicates, I am using django-channels with daphne on the production server, but when I check the status of daphne.service it says 8001 is already in use.
The interesting thing is that the socket connection works perfectly with the frontend, exchanging messages without trouble.
Here are my configurations:
# /etc/systemd/system/daphne_seb.service
[Unit]
Description=daphne daemon
After=network.target
[Service]
User=simple
Group=www-data
WorkingDirectory=/home/simple/my_proj
ExecStart=/home/simple/my_proj/venv/bin/daphne -b 0.0.0.0 -p 8001 project.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
Daphne ASGI under supervisor:
# /etc/supervisor/conf.d/daphne_proj.conf
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8001
# Directory where your site's project files are located
directory=/home/simple/my_proj
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=/home/simple/my_proj/venv/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --endpoint fd:fileno=0 --access-log - --proxy-headers project.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/simple/my_bg/daphne.log
redirect_stderr=true
Finally, my nginx config looks like this:
upstream websocket {
    server 0.0.0.0:8001;
}

server {
    listen 80;
    server_name MY_SERVER_DOMAIN;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/simple/my_proj/myproject.sock;
    }

    location /ws/ {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
That is all I have. As I mentioned above, everything works, but the daphne status throws the error. For clarification, I am showing the usage of the 8001 port:
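(A port-usage check along these lines; the exact tool available varies by distro:)

```shell
# Show which process holds a given port; one of these should be installed.
who_owns_port() {
    sudo ss -ltnp "sport = :$1" 2>/dev/null \
        || sudo lsof -i :"$1" \
        || sudo netstat -ltnp | grep ":$1 "
}
```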
Any help would be appreciated. Thanks in advance!

ASP.NET Core web app hosted on Linux (CentOS) issues

I am new to Linux and have read several articles about hosting ASP.NET Core web applications on a Linux server. I have installed nginx and modified /etc/nginx/sites-available/default. I ran the following command and it shows the application started. I tried to check my application in the browser (using the IP) and it shows a "502 Bad Gateway" error. I checked the error log (/var/log/nginx/reverse_error.log) for anything relevant, but it only shows HTTP header information. How can I troubleshoot this further?
$ sudo dotnet testapp.dll
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
Hosting environment: Production
Content root path: /var/www/testapp
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001
Application started. Press Ctrl+C to shut down.
/etc/nginx/sites-available/default
server {
    listen 80; # [::]:80;
    listen [::]:80 ipv6only=on;

    access_log /var/log/nginx/reverse_access.log;
    error_log /var/log/nginx/reverse_error.log debug;
    #rewrite_log on;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
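Two quick checks from the server itself may help here (a sketch; port 5000 and CentOS defaults are assumptions). Note that on CentOS, SELinux in enforcing mode blocks nginx from making outbound connections to the app, which produces exactly this 502; `sudo setsebool -P httpd_can_network_connect 1` is the usual fix.

```shell
# 1) Is Kestrel reachable locally, bypassing nginx? (prints the HTTP status code)
check_kestrel() {
    curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5000/
}

# 2) Is SELinux blocking nginx's upstream connection?
check_selinux() {
    getenforce                                  # "Enforcing" means the boolean matters
    sudo getsebool httpd_can_network_connect    # should be "on" for a reverse proxy
}
```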

Can't correctly deploy a Vue.js app from git in Jelastic

It's the first time I am using Jelastic, and I need to deploy a Vue.js app from git.
I've made a Node.js environment and deployed my Vue.js app. Then I run:
cd ROOT
npm install
npm run build
And I get a success message: Build complete. The dist directory is ready to be deployed.
So, what I usually do next on localhost is something like this:
cd dist
npx http-server
But in Jelastic I don't really know what comes next after the build. I've tried going to http://node-env.route/dist, but I get a 502 error page (The opened link forwards to the environment where the application server is down or is not picked up yet.)
Hope you can help me, thank you!
In order to run your application, I suggest you install pm2 on your server and run this command:
pm2 start npm --name "your-app-alias" -- start
After a re-build you need to restart:
pm2 restart your-app-alias
After that you may need a reverse proxy with nginx to link your Node.js env to localhost, something like this:
server {
    listen 80; # the port nginx is listening on
    server_name YOUR-JELASTIC-ENV-URL; # setup your domain here

    location / {
        expires $expires;   # assumes a map block elsewhere defines $expires
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 1m;
        proxy_connect_timeout 1m;
        proxy_pass http://127.0.0.1:3000; # set the address of the Node.js instance here
    }
}
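As an aside, since `npm run build` produced a purely static dist directory, an alternative to pm2 is serving it directly with http-server (a sketch; port 3000 is assumed to match the proxy target):

```shell
# Serve the built dist/ as static files on the port nginx proxies to.
serve_dist() {
    cd dist && npx http-server -p 3000
}
```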

nginx: "index.js" is forbidden (13: Permission denied)

I am doing something wrong configuring my Ubuntu server.
My browser displays: 403 Forbidden nginx/1.10.3 (Ubuntu)
And if I run:
tail -f /var/log/nginx/error.log
I get:
".../root/curlist/index.js" is forbidden (13: Permission denied)..."
This is what I have in sites-enabled:
What is the problem?
PS: after editing sites-enabled,
if I run:
systemctl status nginx.service
I get:
However, if I run:
tail -f /var/log/nginx/error.log
I get:
connect() failed (111: Connection refused) while connecting to upstream
You should not directly run index.js in nginx.
Instead, run index.js with node.js in the background, then setup nginx to forward to its listening port.
e.g.
location / {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
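Running it in the background can be as simple as this (a sketch; the port and log path are assumptions, and the /root/curlist path is taken from the error message above):

```shell
# Start the app detached, logging to a file; nginx then proxies to :8080.
start_app() {
    PORT=8080 nohup node /root/curlist/index.js \
        > /var/log/curlist.log 2>&1 &
}
```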
Read a tutorial on node.js + nginx configuration here.

NGINX Bad Gateway 502 on Any Port Except 3000 with NodeJS/Express

I have an NGINX instance (1.4 stable) in front of a few NodeJS instances. I'm trying to load balance with NGINX using the upstream module like so:
upstream my_web_upstream {
    server localhost:3000;
    server localhost:8124;
    keepalive 64;
}

location / {
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Connection "";
    proxy_http_version 1.1;
    proxy_cache one;
    proxy_cache_key sfs$request_uri$scheme;
    proxy_pass http://my_web_upstream;
}
The problem occurs when the instance at port 3000 is not available. I get a 502 Bad Gateway from NGINX.
If I change the upstream config to just point at one instance, 8124 for example, the 502 still occurs.
Running a netstat shows 0 other applications listening on any of the ports I've tried.
Why is NGINX reporting a bad gateway? How can I get NGINX to do a fallthrough if one of the instances is down?
If netstat shows that your Node.js applications aren't listening on those ports, then the problem is that you haven't started them.
This nginx config knows how to proxy to the Node.js application, but you are guaranteed to get a 502 if the application has not been started. If you want to run it on multiple ports, you have to start an instance on each port. So don't hardcode port 3000 into the Node.js code; make it take the port from an environment variable, or spawn multiple instances using a process manager like pm2 (https://github.com/Unitech/pm2). Once these are running, nginx can proxy to them.
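A sketch of that, assuming the entry point is app.js and it reads process.env.PORT (both names are my assumptions):

```shell
# Start one instance per upstream port with pm2.
start_instances() {
    PORT=3000 pm2 start app.js --name web-3000
    PORT=8124 pm2 start app.js --name web-8124
}
```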
