Unable to reach node.js externally - Linux

I've looked everywhere for an answer on this, but haven't had any luck.
I've installed node.js on my server. I've created the standard "Hello World" example like:
var http = require('http');
http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
}).listen(8080, "0.0.0.0");
console.log('Server running at http://0.0.0.0:8080/');
After running the script on the server:
node app.js
I can connect internally to port 8080 and see the Hello World message, but when I try to connect to port 8080 on my server externally I get a "Can't connect to server" error. I've also tried this in my listen function:
etc..
}).listen(8080, "204.xxx.xxx.xxx");
(with my real external IP address) and haven't had any luck.
I've tried to accept connections on 8080 by adding this to iptables:
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
but I've still hit a wall. When I run netstat I get:
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
which I think tells me that port 8080 is listening for connections.
So — what am I doing wrong here?

You are most likely behind network address translation (NAT).
If you're on a normal home internet connection with a gateway router, you can have multiple devices (connected via Ethernet or Wi-Fi) using your home's internet connection.
But you only have one public IP address.
To accomplish this, the router lets you connect out, but doesn't let connections initiated from the outside back in (a simplification; read up on NAT if you want more detail).
You're going to have to look at configuring port forwarding - you want external port 8080 to forward to your computer's internal IP address.
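Before touching the router, it can help to confirm what the server actually bound to; a minimal sketch (the same hello-world server as above, logging server.address() once it is listening):
var http = require('http');

var server = http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
});

// Bind explicitly to all interfaces and log what was actually bound.
server.listen(8080, '0.0.0.0', function () {
  console.log('Listening on', server.address()); // e.g. { address: '0.0.0.0', family: 'IPv4', port: 8080 }
});
If this prints 0.0.0.0:8080 and another machine on the same LAN can reach http://<LAN IP>:8080/, the only missing piece is the router's port-forwarding rule (external 8080 to the machine's LAN IP, port 8080).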

Related

Why does my website hosted on AWS refuse to connect?

I have a node app which runs perfectly on localhost. I hosted it on an AWS EC2 instance on port 80 and it worked fine too, but after 7 days of inactivity, when I enter the public IP address of my EC2 instance in any browser, it says <public_ipv4> refused to connect.
Here are a few things I did for troubleshooting, which I read on the AWS forums, but I'm not having any luck:
Deleted the node_modules/ directory and reinstalled using the npm install command
Correctly allowed HTTP traffic on port 80 in the inbound rules of the security group for that instance (I have only one instance running)
Ran netstat -nplt | grep 80, which gave me this output:
tcp6 0 0 :::80 :::* LISTEN
Added a script in the package.json file through which the app.js file runs
In my app.js file I am listening on port 80:
app.listen(80, async function(){
console.log("server has started");
})
What else am I missing?
Screenshot of inbound rules:
It seems you have allowed only IPv6 addresses in the inbound traffic rules of your security group,
and have not added an allow rule for IPv4.
Add the rule below:
HTTP tcp 0.0.0.0/0 80
If you have added both rules (IPv4 and IPv6), then
sudo netstat -tnlp | grep :80
should show both of the lines below:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp6 0 0 :::80 :::* LISTEN
but in your case it is showing only tcp6.
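For what it's worth, a tcp6 :::80 listener on Linux normally accepts IPv4 connections as well, but if you want netstat to show an explicit IPv4 socket you can bind to 0.0.0.0 yourself; a minimal sketch, assuming the Express app from the question:
// Bind explicitly to the IPv4 wildcard address.
app.listen(80, '0.0.0.0', function () {
  console.log("server has started on 0.0.0.0:80"); // netstat should now show: tcp 0.0.0.0:80 LISTEN
});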
See if the steps below help.
Did you stop your EC2 instance and start it again? If you did, that would have changed your public IP; if that is the case, use the new public IP.
Check whether the security group attached to the EC2 instance allows inbound traffic on port 80.
If the first steps don't help, connect to your EC2 instance and run a curl command to see if your app is running.
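If curl isn't handy, an equivalent self-check can be run with Node on the instance itself; a minimal sketch (a hypothetical check.js), assuming the app listens on port 80:
var http = require('http');

// Request the app over the loopback interface; a response means it is running locally,
// so any remaining problem is in the security group or network path.
http.get('http://127.0.0.1:80/', function (res) {
  console.log('local status:', res.statusCode);
  res.resume(); // discard the body so the socket is released
}).on('error', function (err) {
  console.log('local request failed:', err.message);
});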
In my app.js file I have used port 80:
app.listen(80, async function(){
console.log("server has started");
})
but when I changed the port number to 3000 in app.listen, it worked. I don't know how this happened, though.
There was no issue with the security groups.

Failure on local socket bind when wifi drops

We are getting this strange issue on a Raspberry Pi.
We run a service on a socket that should work for both local and remote clients via wifi.
The trouble is that stopping the remote network also stops connections from local clients.
Our python server sets up a socket like this:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_DONTROUTE, 1)
s.settimeout(2)
s.bind(("", 8888))
s.listen(1)  # accept() requires a listening socket

while True:
    try:
        conn, addr = s.accept()
    except socket.timeout:
        print("Socket timeout on s.accept(), continuing")
        continue
    # do stuff
We have a local node client running a loop like this every second or so (and actually sending data):
// every second
socket.connect("localhost", "8888" );
socket.on('connect', function() { /* do stuff */ });
socket.on('error', function(ex) { });
Everything runs fine until we cut wifi.
The server side times out on s.accept and we see the timeout message in our logs.
I think that the socket is bound to listen on 0.0.0.0 but somehow does not fail over to 127.0.0.1 or some sort of strange routing situation occurs.
netstat -an | grep 8888 gives
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:8888 127.0.0.1:52794 TIME_WAIT
tcp 0 0 127.0.0.1:8888 127.0.0.1:52724 TIME_WAIT
tcp 0 0 127.0.0.1:8888 127.0.0.1:52740 TIME_WAIT
tcp 0 0 127.0.0.1:8888 127.0.0.1:52778 TIME_WAIT
netstat -rn gives
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.1.1 0.0.0.0 UG 304 0 0 wlan0
192.168.1.0 0.0.0.0 255.255.255.0 U 304 0 0 wlan0
I'm guessing that we just need a localhost route?
The local connections establish again when the wifi comes back up, so I don't think there is some permanent dropping of the bind in the Python socket.
The hosts line in /etc/nsswitch.conf gives
hosts: files mdns4_minimal [NOTFOUND=return] dns
We monitored ping to localhost during the test and it continues to function fine.
We also monitored netstat to see that the port stays LISTENING on 0.0.0.0. Perhaps this is the issue?
Easiest Solution
It looks like you should avoid any name resolution by connecting to "127.0.0.1" directly, as discussed in the comments.
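For example, a minimal sketch of the client side, assuming the client uses Node's net module (the snippet in the question is abbreviated, so the exact API may differ):
var net = require('net');

// Connecting to the literal address avoids any name lookup for "localhost".
var socket = net.connect(8888, '127.0.0.1');
socket.on('connect', function () { /* do stuff */ });
socket.on('error', function (err) { console.log('connect error:', err.message); });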
In more detail:
According to the source and the docs, after Node first checks whether the host is already an IP address, it looks for a lookup function provided as an option to connect; if none is given, it makes its own "dns.lookup" call by default. Despite the name, this function uses the system's name resolution, but the result can be subtly different, for example it may prefer IPv6.
To debug further, you could build a more direct test case with dns.lookup and compare things like the output of getent ahosts|ahostsv4|ahostsv6 localhost across your different systems and with the wifi down, as well as other configuration such as gai.conf, to determine whether system name resolution behaves a bit differently on this system or is being given slightly different requests.
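A minimal sketch of such a test case (a hypothetical lookup-test.js), which just prints what dns.lookup returns for "localhost" so it can be compared with the wifi up and down:
var dns = require('dns');

dns.lookup('localhost', { all: true }, function (err, addresses) {
  if (err) {
    console.log('lookup failed:', err.message);
    return;
  }
  // Compare this against: getent ahosts localhost
  console.log(addresses); // e.g. [ { address: '127.0.0.1', family: 4 }, { address: '::1', family: 6 } ]
});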

Connection Refused on Amazon Web Services

Can't access node.js API on port 3000 on AWS EC2 instance, but netstat shows port :3000 listening and my AWS security group has TCP rules for this port. What else could the problem be?
I've tried changing the port, setting security group rules, and adding the port to iptables, and it didn't work. I'm using node 10.6.0.
When I use netstat -tulpn | grep LISTEN it contains the following line:
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN 8270/node
When I try to access myip:3000/socket (my node endpoint) it shows: ERR_CONNECTION_REFUSED
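Note that the netstat line above shows the listener bound to 127.0.0.1, which only accepts connections from the instance itself; a minimal sketch of binding to all interfaces instead (assuming an Express-style app, which the question doesn't show):
const express = require('express');
const app = express();

// Bind to 0.0.0.0 so the server accepts connections on all interfaces,
// not just the loopback address shown in the netstat output above.
app.listen(3000, '0.0.0.0', () => console.log('listening on 0.0.0.0:3000'));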

Port forwarding works only locally

I have a Tomcat server (on Ubuntu) running on port 8080 and I want to connect to it externally, but port forwarding works only in the LAN. Port forwarding rule:
HTTP Server 0.0.0.0 80 192.168.1.246 8080 TCP
When I put a request to my router's WAN IP it won't respond (timeout).
I'm not even sure where the problem could be.
netstat:
tcp6 0 0 :::8080 :::* LISTEN 7767/java
You have to point *:80 to 192.168.1.246:8080 and it will be fine. Right now you are trying to authorize the IP 0.0.0.0 on port 80 to access your IP 192.168.1.246 on port 8080, which is not correct.

Unable to access express/node server on port 3001 despite enabling via firewall-cmd

I've been searching around this morning trying to figure out how to resolve my issue, but nothing appears to suit my situation or solve my problem, so here I am.
I have a server running CentOS Linux release 7.5.1804 (Core) and I have installed node v10.11.0 in order to host a website. I have a domain foo.ca, for which I have two separate web servers running (one for the client, one for the server). The client runs on port 3000, and I used iptables to forward port 80 to port 3000 so I can view my website without explicitly listing the port (i.e. by entering foo.ca in the address bar):
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
This works fine, and I can see foo.ca.
My problem arises when I try to access the server, which is running on port 3001. I have enabled the port for TCP using firewall-cmd:
sudo firewall-cmd --zone=public --add-port=3000/tcp --permanent
sudo firewall-cmd --zone=public --add-port=3001/tcp --permanent
sudo firewall-cmd --reload
If I type foo.ca:3001, Chrome tells me the site can't be reached: foo.ca took too long to respond.
I tested port 3001 via an online tool and it says it is open. I also checked netstat:
netstat -tuplen
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 995 12161 -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 12066 -
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 1000 56647615 4926/node
tcp 0 0 0.0.0.0:3001 0.0.0.0:* LISTEN 1000 56671635 6195/node
Some online suggestions included using 0.0.0.0 rather than localhost, but as you can see I already have that implemented. I don't really know what my options are at this point. I've also tried enabling the port via iptables, but I am not sure that did anything:
iptables -A INPUT -p tcp --dport 3001 -j ACCEPT
One last thing, my express server code is like so:
const express = require('express')
const app = express()
const port = 3001
app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, '0.0.0.0', () => console.log(`Example app listening on port ${port}!`))
And I run it with node test.
Anyone have any ideas? I'm not much of a network guru.
The solution turned out to be that my network was blocking it for some reason.
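A quick way to tell a network-level block (timeout) apart from a missing listener (connection refused) is a small probe run from outside the network; a minimal sketch, using foo.ca and port 3001 from the question:
const net = require('net');

// Probe the public hostname and port from a machine outside the server's network.
const socket = net.connect({ host: 'foo.ca', port: 3001 });
socket.setTimeout(5000);
socket.on('connect', () => { console.log('connected: the port is reachable'); socket.end(); });
socket.on('timeout', () => { console.log('timed out: traffic is probably dropped by a firewall or the network'); socket.destroy(); });
socket.on('error', (err) => console.log('error:', err.code)); // ECONNREFUSED means nothing is listening there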
