Connection Refused: curl <ec2 public dns>:8080 - node.js

NOTE: there are a lot of questions with similar titles, but they either (1) do not have any good answers or (2) are not related to the following issue.
I am trying to run a simple chat/messaging demo via node.js on my Ubuntu EC2 instance running apache2. The application works great on my local machine, but I am having some trouble with the setup on my server.
I am trying to listen to port 8080 but upon executing:
$ curl <ec2 public dns>:8080
I get the error:
curl: (7) Failed to connect to <ec2 public dns> port 8080: Connection refused
I went through the default troubleshooting steps:
(1) Quadruple-checked that I am allowing incoming connections on port 8080 in my EC2 security group.
(2) Checked my ufw firewall to make sure port 8080 is enabled:
ubuntu@ip-000-00-00-000:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere
80                         ALLOW IN    Anywhere
443                        ALLOW IN    Anywhere
21/tcp                     ALLOW IN    Anywhere
8080                       ALLOW IN    Anywhere
22 (v6)                    ALLOW IN    Anywhere (v6)
80 (v6)                    ALLOW IN    Anywhere (v6)
443 (v6)                   ALLOW IN    Anywhere (v6)
21/tcp (v6)                ALLOW IN    Anywhere (v6)
8080 (v6)                  ALLOW IN    Anywhere (v6)
I am new to all of the concepts mentioned in this post, so I may be missing something simple.

After consulting this post I realized that nothing was listening on port 8080.
So I ran
node chat.js
which, somewhere in the code, calls .listen(8080). After that, curl worked.
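The refused-vs-working distinction is easy to reproduce locally; a sketch, with `python3 -m http.server` standing in for `node chat.js` and its `.listen(8080)` (assumes curl and python3 are installed):

```shell
# With nothing listening, curl fails fast with a non-zero exit code
# (7 means "Failed to connect ... Connection refused").
curl -s http://127.0.0.1:8080/ >/dev/null 2>&1; code_before=$?

# Start a throwaway listener on 8080, as .listen(8080) would.
python3 -m http.server 8080 >/dev/null 2>&1 &
pid=$!; sleep 1

# Now the same curl succeeds (exit code 0).
curl -s http://127.0.0.1:8080/ >/dev/null 2>&1; code_after=$?
kill $pid

echo "exit code before: $code_before, after: $code_after"
```

The security group and ufw rules only matter once something is actually listening; "Connection refused" with correct firewall rules almost always means the process isn't running or is bound to a different port.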


Unable to communicate from one Docker container to an exposed container on the same machine

Currently I'm trying to make one container communicate with another one that is exposed and running on the same machine.
Let's say the external IP address is 123.123.123.123. I exposed a basic NGINX Docker container on port 8080 via the ports property in my docker-compose.yaml, and I execute curl http://123.123.123.123:8080. From an external machine I successfully get a response back, and the same goes for executing the command from the host machine. However, when I execute this curl from another container on the same machine, it exits with a timeout.
I'm unsure of the cause. I have tried temporarily exposing all ports via https://serverfault.com/a/129087, and this did actually allow communication from one container to the exposed container (of course, I restored the previous configuration afterwards).
It is important for me to be able to use the external routing, especially since in production jwilder/nginx-proxy is used with HTTPS certificates.
The machine is running Ubuntu 20.04, I haven't altered any firewall settings provided by iptables.
ufw status output:
Status: active
To                         Action      From
--                         ------      ----
22/tcp                     LIMIT       Anywhere
2375/tcp                   ALLOW       Anywhere
2376/tcp                   ALLOW       Anywhere
22/tcp (v6)                LIMIT       Anywhere (v6)
2375/tcp (v6)              ALLOW       Anywhere (v6)
2376/tcp (v6)              ALLOW       Anywhere (v6)
Probably the most relevant part of iptables -L:
Chain DOCKER (6 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    172.18.0.2    tcp dpt:9000
ACCEPT     tcp  --  anywhere    172.20.0.2    tcp dpt:http
ACCEPT     tcp  --  anywhere    172.18.0.5    tcp dpt:https
ACCEPT     tcp  --  anywhere    172.18.0.5    tcp dpt:http
ACCEPT     tcp  --  anywhere    172.19.0.6    tcp dpt:mysql
I'm curious how this issue could be fixed. Of course, adding both containers to the same internal network fixes this, but since port 8080 is already exposed to the world I would like this to cover internal traffic as well. I'm using Docker Compose; these two containers are not part of the same docker-compose.yaml.
The solution was staring me in the face all this time. With this image provided by DigitalOcean, both iptables and UFW are enabled at the same time, but UFW does not automatically open the ports you add to your Docker configuration.
Apparently that mostly isn't a problem for external traffic, since in this scenario UFW does not handle external traffic at all; that is all up to the iptables rules Docker installs. Traffic from another container to the host's external IP does pass through UFW, though. Adding ufw allow http and ufw allow https fixes it!
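A minimal sketch of that fix, assuming a stock UFW setup (run on the Docker host):

```shell
# Open the ports UFW is filtering for container -> host-IP traffic:
sudo ufw allow http    # port 80
sudo ufw allow https   # port 443
sudo ufw reload
```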

Could not open ssl 443 port server

I have an external server hosted on DreamCompute, which can be considered a virtual private server (VPS).
I tested the ports using: https://ping.eu/port-chk/
The results:
xxx.xxx.xxx.xxx:443 port is closed
xxx.xxx.xxx.xxx:80 port is open
xxx.xxx.xxx.xxx:22 port is open
I checked that the ports are open on my Linux server:
username@server:~$ sudo ufw status
Status: active
To                         Action      From
--                         ------      ----
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
22/tcp                     ALLOW       Anywhere
3306/tcp                   ALLOW       Anywhere
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
22/tcp (v6)                ALLOW       Anywhere (v6)
3306/tcp (v6)              ALLOW       Anywhere (v6)
I'm able to access the web server internally via
http://localhost/
https://example.com/
http://example.com/
all of which serve an index.php containing:
<?=phpinfo()?>
In my experience this is usually a router firewall problem, but I don't have control over that here.
Maybe it is my DNS or something else?
Steps to check your port:
Verify that you can connect from your local machine: just telnet to port 443, both on localhost (127.0.0.1) and on your machine's IP address (which you have already done, from what I saw). You should at least get an answer, i.e. verification that there is a listener on that port.
For example, if nothing is listening you will get:
$ telnet 127.0.0.1 443
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Whereas this confirms the port is listening:
$ telnet 127.0.0.1 443
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
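If telnet isn't installed, bash's built-in /dev/tcp pseudo-device gives an equivalent check (a sketch; this requires bash, not plain sh):

```shell
# Prints "open" if a TCP connection to host:port succeeds, "closed" otherwise.
port_open() { (echo > /dev/tcp/"$1"/"$2") 2>/dev/null && echo open || echo closed; }

port_open 127.0.0.1 443   # "closed" here means nothing is listening on 443
```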
We can also check with netstat:
$ netstat -antl | grep 443
tcp    0    0 127.0.0.1:52421    127.0.0.1:443    TIME_WAIT
tcp6   0    0 :::443             :::*             LISTEN
For SSL we can use openssl:
$ openssl s_client -connect 127.0.0.1:443
CONNECTED(00000003)
...
You may also have to open and/or forward the port on your router or network firewall.
Hope it works fine!
If not, tell us and we will work on more solutions!

UFW rule changes seem to be ignored

I'm running a container on EC2 (docker-compose up) and wanted to route the traffic through nginx.
Initially, to test that it worked, I allowed traffic on port 9000. After testing I deleted the rule, but I'm still able to access it from outside. Another weird issue I've noticed is that I'm able to access traffic on a few other ports too if I just change the listening port.
ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
9000/tcp                   DENY IN     Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
80/tcp (v6)                ALLOW IN    Anywhere (v6)
9000/tcp (v6)              DENY IN     Anywhere (v6)
Even after I had explicitly denied traffic to port 9000, I was able to access it.
EDIT: I tried reloading the firewall after the rule changes; it has no effect.
Log from before I allowed the traffic on port 9000:
Mar 9 06:26:48 10 kernel: [UFW BLOCK] IN=br- OUT= PHYSIN= MAC= SRC=172.19.0.2 DST=172.19.0.1 LEN=1500 TOS=0x00 PREC=0x00 TTL=64 ID=18491 DF PROTO=TCP SPT=9000 DPT=34558 WINDOW=506 RES=0x00 ACK URGP=0
After deleting that rule, the firewall isn't blocking the traffic.
This solution works on Debian 10:
https://github.com/chaifeng/ufw-docker/blob/master/README.md#solving-ufw-and-docker-issues
root@malloc:~# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere
22 (v6)                    ALLOW IN    Anywhere (v6)
9000/tcp                   ALLOW FWD   Anywhere
9000/tcp (v6)              ALLOW FWD   Anywhere (v6)
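The ALLOW FWD entries above correspond to ufw's route rules; with the after.rules change from the linked ufw-docker README in place, a published port is opened with something like:

```shell
# Allow forwarded (routed) traffic to the container's published port:
sudo ufw route allow proto tcp from any to any port 9000
sudo ufw reload
```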

Connection refused from local machine to bitcoind

I have a bitcoin node installed and configured on Ubuntu 18.04, with the purpose of using RPC/JSON API calls to build a bitcoin service.
My application is built in Laravel 5.8, and I use the laravel-bitcoinrpc package to connect to the node. If the application is on the same server as the node, I can connect and make RPC calls, but when I try to connect to the node from my local machine (laptop) I get connection refused.
I have allowed my local IP address:
22/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
8332                       ALLOW       Anywhere
Anywhere                   ALLOW       89.165.xxx.xx    <- my IP address
8332                       ALLOW       89.165.xxx.xx    <- my IP address
8332/tcp                   ALLOW       Anywhere
18332                      ALLOW       89.165.xxx.xx    <- my IP address
22/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)
8332 (v6)                  ALLOW       Anywhere (v6)
8332/tcp (v6)              ALLOW       Anywhere (v6)
This is my bitcoin.conf
prune=600
maxconnections=20
maxuploadtarget=20
daemon=1
server=1
rpcuser=username
rpcpassword=password
rpcport=18332
rpcallowip=127.0.0.1
rpcallowip=<my-local-ip->
rpcbind=<my-local-ip->
keypool=10000
rpctimeout=30
rpcallowip should allow me to connect to the node from the remote machine, but I still get connection refused.
I have also opened the ports on my router.
You can try rpcallowip=0.0.0.0/0 (a CIDR subnet matching all IPv4 addresses; convenient for testing, but restrict it again afterwards).
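For reference, a minimal RPC section might look like the sketch below. Note that rpcbind must be an address of the machine running bitcoind (or 0.0.0.0 for all interfaces), not the remote client's IP; binding to an address the server doesn't own is a common cause of connection refused:

```
server=1
rpcport=18332
rpcuser=username
rpcpassword=password
# Clients allowed to connect (single IP or CIDR subnet):
rpcallowip=127.0.0.1
rpcallowip=89.165.xxx.xx
# Interface bitcoind itself listens on -- an address of THIS server,
# or 0.0.0.0 for all interfaces (never the client's IP):
rpcbind=0.0.0.0
```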

Cannot connect to remote mongodb server

I changed the bindIp setting to
bindIp: 127.0.0.1, 0.0.0.0
in mongod.conf on my Ubuntu server hosted on Linode, restarted mongod, and the status looks OK.
I opened the mongodb port in ufw:
sudo ufw status
Status: active
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
10000                      ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
3333                       ALLOW       Anywhere
27017                      ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
10000 (v6)                 ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
3333 (v6)                  ALLOW       Anywhere (v6)
27017 (v6)                 ALLOW       Anywhere (v6)
Connecting to it from my Mac throws an error:
mongo mongodb://admin:secret@ubuntuipaddress/fielddb?authSource=admin
MongoDB shell version v3.6.2
connecting to: mongodb://ubuntuipaddress/fielddb?authSource=admin
2018-04-08T13:47:32.212 W NETWORK  [thread1] Failed to connect to ubuntuipaddress:27017, in(checking socket for error after poll), reason: Connection refused
2018-04-08T13:47:32.214 E QUERY    [thread1] Error: couldn't connect to server ubuntuipaddress:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
How to fix this issue?
The basic steps for enabling remote access for Mongodb running on Ubuntu are:
Setup a minimum of one user in Mongodb (admin with root rights)
Edit the config file (i.e. sudo nano /etc/mongodb.conf)
Make sure that
bind_ip = 0.0.0.0
port = 27017
auth = true
are set (and uncommented)
Add a firewall rule in UFW to allow 27017 from your remote IP address (or anywhere)
You should be good to go.
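The steps above can be sketched as shell commands (the config path, service name, and the 0.0.0.0 bind are assumptions; adjust for your install):

```shell
# 2. Edit the config: set bind_ip = 0.0.0.0, port = 27017, auth = true
sudo nano /etc/mongodb.conf

# Restart mongod so the new bind address takes effect
# (the service may be named "mongod" on newer installs):
sudo systemctl restart mongodb

# 3. Open the port in UFW, ideally only for your remote IP:
sudo ufw allow from YOUR.REMOTE.IP.HERE to any port 27017 proto tcp

# Confirm mongod now listens on all interfaces, not just 127.0.0.1:
sudo ss -tln | grep 27017
```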
Disable your firewall and see if you can connect; if you can, then it's your firewall rules. Try this first to see if it helps.
The problem was a wrong bindIp setting in mongod.conf. Changing it to:
bindIp: 127.0.0.1,ip_address_of_host_running_mongod
fixed the problem. Replace ip_address_of_host_running_mongod with the IP address of the host running mongod, e.g. 137.142.177.4.
