I am creating a test web app and have deployed it to an AWS Ubuntu server using nginx.
I am getting a 502 Bad Gateway error when the app tries to reach my API.
I am new to this. I have started Node.js and everything seems to be working fine, except when I perform an API call to MongoDB to read or write information. It works fine locally, so I am at a loss.
GET http://ec2-54-72-145-112.eu-west-1.compute.amazonaws.com/api/rest/golf 502 (Bad Gateway)
This is the nginx server config:
location /xxxxxxxxxxxxxxx {
    alias /home/ubuntu/xxxxxxxxxxxxxx/site/public;
}

location /api/ {
    proxy_pass http://127.0.x.1:8180/api/;
}
I know I may not be giving enough info, but hopefully someone has an idea.
Thanks!
An HTTP 502 from nginx indicates that nginx itself is working fine but cannot reach the proxy target you specified. So I'd suggest you check whether the port and the binding IP are correct.
You can check which ports are bound by which application using this command on your Ubuntu machine:
netstat -tulpen
You should see a line whose "Local Address" column shows, in your case, the value 127.0.x.1:8180. If it's not there, try to figure out which port is actually bound by your Node application and reconfigure nginx to use that port.
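For example, to check just the expected socket (the port is taken from the config above), you can filter the output:

sudo netstat -tulpen | grep 8180

If nothing comes back, the Node process is not listening where nginx expects it.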
Related
Trying to set up a staging environment on an Amazon Linux EC2 instance and migrate from Heroku.
My repository has two folders:
Web
API
Our frontend and backend are running on the same port in deployment
In dev, these run on separate ports and all requests from WEB are proxied to API.
(For example, WEB runs on port 3000 and API runs on port 3001, with a proxy set up in the package.json file in WEB/; see the sketch below.)
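That dev proxy is typically a single field in WEB/package.json (a sketch, assuming Create React App's proxy convention; the port is taken from the example above):

{
  "proxy": "http://localhost:3001"
}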
Currently the application deployment works like this:
Build Web/ for distribution
Copy build/ to API folder
Deploy to Heroku with web: npm start
In prod, we only deploy API folder with the WEB build/
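For context, serving the copied build/ from the API process usually looks something like this (a sketch, assuming Express; none of this is taken from the actual repo):

const express = require("express");
const path = require("path");

const app = express();

// API routes would be mounted here, e.g. app.use("/api", apiRouter);

// Serve the front-end build copied in from Web/
app.use(express.static(path.join(__dirname, "build")));

app.listen(process.env.PORT || 3000);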
Current nginx.conf looks like this (all other attempts are commented out).
Also using PM2 to run the process, like so:
$ sudo pm2 start bin/www
Current process running like so:
pm2 logs
This is running on port 3000 on the EC2 instance.
Going to the public IPv4 DNS for the instance brings me to the login page, which it is serving from the /build folder, but none of the login methods (or any API calls) work.
502 response example
I have tried a lot of different configurations, and set up proxy_pass to port 3000 since that's where the Node process is running.
The only response codes I get are 405 Not Allowed and 502 Bad Gateway.
Please let me know if there is any other information I can provide to find the solution.
It looks like you don't have an upstream block in your configuration: you're trying to use proxy_pass to send traffic to a named server and port instead of a defined upstream. There is an example on this page that shows how you define the upstream and then send traffic to it: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
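Adapted to this setup, a minimal sketch might look like the following (assuming the Node process listens on port 3000, as described above; the filesystem path is hypothetical):

upstream node_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;

    # Static front-end copied from Web/build (hypothetical path)
    location / {
        root /home/ec2-user/API/build;
        try_files $uri /index.html;
    }

    # API traffic goes to the Node process
    location /api/ {
        proxy_pass http://node_app;
    }
}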
Turns out there was an issue with express-sessions being stored in Postgres.
This led me to retest the connection strings and I found out that I kept receiving the following error:
connect ECONNREFUSED 127.0.0.1:5432
I did have a .env file holding the env variables, but they were not being read by pm2.
So I added these lines to app.js:
const path = require("path");
// Resolve .env relative to app.js, not the pm2 working directory
require('dotenv').config({ path: path.join(__dirname, '.env') });
Then I restarted the app with pm2 using the following command:
$ pm2 restart /bin/www --update-env
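An alternative is to let pm2 manage the environment itself through an ecosystem file (a sketch; the app name, script path, and variable values are placeholders, not taken from the question):

// ecosystem.config.js
module.exports = {
  apps: [{
    name: "api",            // hypothetical app name
    script: "./bin/www",
    env: {
      // hypothetical connection string
      DATABASE_URL: "postgres://user:pass@127.0.0.1:5432/db"
    }
  }]
};

Then start the app with pm2 start ecosystem.config.js, and pm2 picks up the env block without needing dotenv.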
I have deployed my website to a Digital Ocean droplet (Ubuntu 20.04 server).
Everything was working fine. Today, I made some changes to the website on my local machine, so I pushed the changes to GitHub and then cloned the GitHub repo again to the server. Then I installed the dependencies and restarted PM2.
Now, when I visit my site https://sundaray.io, I get the following error.
The following is the error log.
How can I fix the error?
In simple terms it means: no HTTP server response; your Node HTTP server is not answering requests.
A 502 Bad Gateway means the server and nginx are receiving your request, but there is an issue with the upstream.
You can use this command to show the pm2 logs:
pm2 logs
The application might be crashing, or returning an internal server error (500).
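A quick way to confirm this (assuming the app is supposed to listen on port 3000; substitute your real port) is to curl the upstream directly on the droplet:

curl -I http://127.0.0.1:3000

If the connection is refused or hangs, the Node process is down and nginx has nothing to forward the request to, which is what produces the 502.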
My Node API starts on port 1337 and works fine. If I browse localhost:1337, the API returns nothing, and if I browse localhost:1337/products, it actually works fine and returns my product list as JSON.
Now nginx is installed and configured to reverse-proxy localhost:1337 to localhost.
After opening localhost it somehow works and says nothing, as before, but if I try browsing localhost/products it again says nothing and ignores the products route.
I just found the issue.
In the nginx default configuration, I removed everything in the location /api/ section and kept only proxy_pass http://localhost:1337/; and now it works correctly.
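For reference, the resulting block would look roughly like this (a sketch based on the description above). The trailing slash in proxy_pass makes nginx replace the matched location prefix, so a request to /api/products is forwarded to the API as /products:

location /api/ {
    # Only the proxy_pass line; the extra directives from the default
    # config were what broke the proxying here
    proxy_pass http://localhost:1337/;
}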
So I have a Node server within a Docker container. Right now I would like it to communicate with the parent system's CUPS server. However, when I make an Ajax call to that server, with port 631 exposed, I get a 400 Bad Request error.
Looking at the CUPS logs, this is the reason given for the rejection:
Request from "localhost" using invalid Host: field "host.docker.internal:631"
Now, to even access the parent machine I have to use host.docker.internal, but I have not figured out a way to get CUPS to ignore the Host field or treat it as localhost.
CUPS is set to accept any ServerAlias and anything on port 631, so it "should" accept the call. Any ideas?
I had the same problem with CUPS (2.3.4) on macOS. I spent several hours trying to fix the invalid Host: field error.
It seems that there's a bug: the error occurs even when using ServerAlias * in the CUPS conf.
For those who are looking for a workaround:
We have to change the Host header sent from the Docker container to localhost. To do so, I set up an nginx container listening on port 8888 that rewrites the Host field while proxying to the host's CUPS server.
This is the nginx conf.d:
server {
    listen 8888;

    location / {
        proxy_pass http://host.docker.internal:631;
        proxy_set_header Host localhost;
    }
}
Now, instead of connecting to host.docker.internal:631, we connect the CUPS client to localhost:8888. (I set up the nginx server in the same Docker container; you might want to set up a separate container depending on your needs.)
I have a Linux EC2 instance. Apache is installed and up, so when I'm ssh'ed into my instance and do
curl localhost
I see a webpage served by my Apache. But when I try to access this page by URL (like http://ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com), I get back only a 503 error page on one Internet connection and a 404 error page on another. access_log and error_log show no activity when I try to access the server by URL. I'm stuck. Please give me some tips on how to solve this issue.
I'd guess the missing local log entries hint that the HTTP error messages are returned by something on the amazonaws.com side, not by your Apache server. Did you open security for TCP port 80? The SSH port is open by default, but I am not sure about port 80.
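If the security group turns out to be the culprit, opening port 80 from the AWS CLI looks like this (a sketch; the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0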
I fixed this by turning iptables off, so the firewall was the problem. Thank you guys for the help.
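For the record, a less drastic fix than disabling iptables entirely would be to allow inbound HTTP (a sketch, assuming the default filter table):

# Inspect the current rules
sudo iptables -L INPUT -n --line-numbers

# Insert a rule accepting TCP port 80
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT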