My site always shows a blank page when I run Nginx on port 80. However, if I run Nginx on another port, e.g. 8080, and go to mypage.com:8080, it shows my Meteor app. I have no idea why Nginx works on every port but 80.
Here are my configs.
Nginx
server {
    listen *:80 default_server;
    server_name mypage.de;

    access_log /var/log/nginx/app.dev.access.log;
    error_log /var/log/nginx/app.dev.error.log;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Meteor app, started with:
sudo PORT=5000 MONGO_URL=mongodb://user:pwd@127.0.0.1:27017/mypage ROOT_URL=http://mypage.de forever start -a -o out.log -e err.log main.js
netstat -tulpn shows
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 11214/nginx -g daem
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 10853/node
but, as I said, mypage.de shows a blank page. The same config with Nginx on port 8080 works. I'm on Ubuntu. How can I fix this?
Your iptables rules seem to be blocking port 80. That, or a firewall between you and your server.
These are the iptables rules for web traffic. Just run these commands on the command line:
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
I would also drop any rules that aren't needed, such as the one allowing port 5000. You only want people reaching the web ports, and that's it.
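If you want these rules to survive a reboot, the same policy can be written as an iptables-restore file (a sketch; the surrounding chain policies are assumptions, adjust to your setup). On Ubuntu this is typically /etc/iptables/rules.v4 with the iptables-persistent package:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
COMMIT
```

Load it manually with `sudo iptables-restore < /etc/iptables/rules.v4`, or let iptables-persistent apply it at boot.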
I am trying to make a request to my Node Express server from the browser, and I get ERR_CONNECTION_REFUSED, i.e.:
POST http://localhost:9000/api/search net::ERR_CONNECTION_REFUSED
Requests from the Chrome console are also refused.
However, when I make curl requests from the EC2 terminal, the request is successful and I'm returned JSON.
My nginx.conf file is detailed below:
server {
    listen 80 default_server;
    server_name _;

    location / {
        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;
        try_files $uri $uri/ /index.html;
        add_header 'Access-Control-Allow-Headers' *;
        add_header 'Access-Control-Allow-Origin' *;
        add_header 'Access-Control-Allow-Methods' *;
    }

    location /sockjs-node {
        proxy_pass http://localhost:80;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        proxy_pass http://localhost:9000;
    }
}
From within the EC2 instance, the firewall status is:
sudo ufw status
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
9000 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
9000 (v6) ALLOW Anywhere (v6)
netstat -tunlp returns
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::9000 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
udp 0 0 172.31.2.45:68 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* -
udp6 0 0 ::1:323 :::* -
My EC2 security group rules look like this
I've no idea what the issue could be. Any help would be appreciated.
SOLUTION: I've managed to resolve the issue by changing all fetch requests on the front-end to use the EC2 IP address instead of localhost. This doesn't seem very optimal, though. Is there some sort of wildcard I could use instead, since the EC2 IP address changes on restart? Any advice would be appreciated!
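One common pattern (a sketch, not from the thread): have the front-end fetch relative paths such as /api/search instead of absolute localhost URLs, so requests go to whichever host served the page, and let the existing /api location do the routing:

```nginx
location /api {
    # The browser resolves fetch('/api/search') against the page's own
    # origin, so no IP or hostname needs to be hardcoded client-side.
    proxy_pass http://localhost:9000;
}
```

With this, the EC2 IP can change freely; nothing in the client code references it.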
I've spent a lot of time looking for a solution, but this is quite a weird and tricky issue.
I have an AWS EC2 instance (Ubuntu)
I have a domain configured in AWS Route 53
Everything works properly via the EC2 instance's IP address in a web browser, but when I change nginx.conf and add a server_name with my domain, it instantly throws a timeout.
To be clear:
Route 53:
added the proper IP as an A record
added the proper NS records
checked everything via dig in the terminal - it's okay.
EC2 instance:
Ubuntu instance
Node.js app on port 8000
security group configured with Outbound: All; Inbound: HTTP port 80 and Custom TCP Rule, port range 8000
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.dev www.mydomain.dev;
    return 301 http://$server_name$request_uri;
    root /home/ubuntu/mydomainfolder_dev/;

    location / {
        proxy_pass http://localhost:8000;
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection 'upgrade';
        #proxy_set_header Host $host;
    }
}
After this nginx.conf change and a service restart (sudo service nginx restart), the EC2 address redirects properly to my domain, but then there is a timeout. Any ideas how to fix it?
also: sudo netstat -tulpn output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4581/nginx: master
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 1608/systemd-resolv
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 935/sshd
tcp6 0 0 :::80 :::* LISTEN 4581/nginx: master
tcp6 0 0 :::22 :::* LISTEN 935/sshd
tcp6 0 0 :::8000 :::* LISTEN 2486/node /home/ubu
SOLUTION
I guess I found something: checking /var/log/syslog (sudo nano /var/log/syslog) shows a weird DNS error:
Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Alter the security group to allow port 443 as well as port 80.
You will also need an SSL certificate on the nginx server.
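A minimal sketch of a matching nginx server block for port 443 (the certificate paths are placeholders, e.g. from Let's Encrypt; substitute your real files):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mydomain.dev www.mydomain.dev;

    # Placeholder paths; point these at your actual certificate files
    ssl_certificate     /etc/letsencrypt/live/mydomain.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.dev/privkey.pem;

    location / {
        proxy_pass http://localhost:8000;
    }
}
```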
I'm trying to deploy a NodeJS/React app on an ec2 instance on AWS.
My app runs fine on port 3000, but is not being forwarded to port 80.
Neither modifying proxy_pass nor modifying iptables seems to work in this scenario.
I've tried the following:
Modifying Nginx's server configuration to proxy port 80 to the app on port 3000. My Nginx configuration:
server {
    listen 80;

    location / {
        proxy_pass http://[My Private ec2 IP]:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        server_name example.com www.example.com;
    }
}
Modifying iptables to redirect port 80 to port 3000:
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
My directory is structured as follows:
- appname/
- /api (node.js server-side code)
- /client (React client-side code)
I have tried running npm start from within appname/client/ as well as npm build. sudo netstat -lntp | grep 80 shows no processes listening on port 80, so the port is available.
The app renders on [public IP]:3000. When I try to access [public IP], the browser displays 'This site can't be reached'.
This seems like a fairly straightforward thing to do, yet nginx and iptables configurations both are ignored. Am I missing something?
Ports also need to be opened from the Amazon EC2 console. To do this, perform the following steps:
Login to Amazon EC2 Dashboard
Select your EC2 instance machine
After selecting your EC2 machine, find the Security groups section in the bottom panel
Click on the assigned security group's name; it should be something like launch-wizard-{number}
Then open the Inbound tab in the bottom panel
Click the Edit button and add the ports (80, 3000) that need to be open on the instance
You can check the URL below for more info about opening ports on Amazon EC2:
https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/
I've never used FreeBSD in my life, but I need to deploy an HTTP API on FreeBSD. The API is deployed on port 3002.
What do I need to do to forward requests from port 80 to port 3002?
I tried adding this to my /etc/natd.conf file:
interface le0
use_sockets yes
dynamic yes
redirect_port tcp 192.168.1.8:80 192.168.1.8:3002
I also have this in my /etc/ipfw.rules file:
ipfw add 1000 fwd 127.0.0.1,80 tcp from any to any 3002
When I run ipfw -q -f flush I get:
ipfw: setsockopt(IP_FW_XDEL): Protocol not available
I don't know what any of this means, but it's not working.
Can somebody please tell me (in simple newbie terms) how to forward requests from 80 to 3002 in FreeBSD?
(I'm assuming port 80 is both open and the default port for HTTP requests on a brand-new FreeBSD installation.)
The easiest way would be to use Nginx or HAProxy to listen on port 80 and then forward/proxy requests to your API. By doing this you could also terminate SSL on port 443 and just forward traffic to your API.
For example to install nginx:
# pkg install nginx-lite
Then edit the /usr/local/etc/nginx/nginx.conf and use this in the server section:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:3002;
        proxy_http_version 1.1; # for keep-alive
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
This will forward requests to your API on port 3002 without the need for NAT or a firewall like ipfw or pf; it also works if your app is running within a jail.
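If you also want nginx to reuse connections to the API (what the proxy_http_version 1.1 comment hints at), a sketch with an upstream block and the keepalive directive (the upstream name api_backend is arbitrary):

```nginx
upstream api_backend {
    server 127.0.0.1:3002;
    keepalive 16;  # idle connections kept open to the backend
}

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # clear it so upstream keep-alive is used
    }
}
```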
Remember you need to put gateway_enable="YES" in /etc/rc.conf. You may also need to create a pipe (check the ipfw man page) and load the dummynet module.
In my opinion, an easier option would be to use PF. Let me quote an example from the Handbook:
https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/firewalls-pf.html
... redirection and NAT rules need to be defined before the filtering rules. Insert this rdr rule immediately after the nat rule:
rdr pass on $int_if proto tcp from any to any port ftp -> 127.0.0.1 port 8021
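Adapted to this question (port 80 to the API on 3002), the pf.conf rule might look like this, assuming $ext_if names your external interface (le0 here, as in the natd example above):

```
ext_if = "le0"
rdr pass on $ext_if proto tcp from any to any port 80 -> 127.0.0.1 port 3002
```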
FWIW, I've published Ansible role to configure PF
https://galaxy.ansible.com/vbotka/freebsd-pf/
Almost done! It should be:
[was] ipfw add 1000 fwd 127.0.0.1,80 tcp from any to any 3002
ipfw add 1000 allow ipv4 from any to 127.0.0.1 via eth2
ipfw add 1010 fwd 127.0.0.1,3002 ipv4 from any to any 80,443 via eth2
I am trying to get data from my server: requesting 0.0.0.0/my/api gives a 400, but requesting localhost/my/api gives the content I need. I am using nginx; here is the config file for the server:
server {
    listen 80;
    server_name localhost;

    access_log /var/log/nginx/myapp.log;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/myapp;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Also if I do a netstat I get this for the server
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 8080/nginx
Why would it only give me the correct content from localhost, and 400s from the 0.0.0.0 address? This also means I cannot access the API from an outside machine. Finally, I am using gunicorn as the backend server, with nginx reverse-proxying to it.
It looks like your output is the output from netstat -a, not just netstat. Here's a fairly generic result of netstat -a on an AWS Ubuntu image:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:hostmon 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:http 0.0.0.0:* LISTEN
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
As one of the commenters noted 0.0.0.0 means "all ip addresses".
Localhost will be represented specifically by the word localhost as shown above.
P.S. The server_name directive doesn't need to be specified in such a simple nginx.conf where you have only one server block.
P.P.S. Nice touch turning off logging for favicon in your nginx config!
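On the 400s themselves: since the backend sees whatever Host header nginx passes along ($host in the config above), one hedged thing to try is pinning the Host the backend receives, assuming the backend only accepts requests for localhost:

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;
    # Send a fixed Host so the backend sees the same value as a
    # localhost request, regardless of what the client typed.
    proxy_set_header Host localhost;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```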