Runs on localhost:4000 but not on ec2-ip:4000 - node.js

I have a nodejs app running on port 4000. I have developed it in my Vagrant box running Ubuntu 16.04 where I am able to curl both http://localhost:4000 and http://vagrant-IP:4000.
However, when I replicate the same set-up on an EC2 instance, I am able to curl only on http://localhost:4000 and not on http://ec2-public-IP:4000.
In both cases the server is listening on 0.0.0.0 and CORS is enabled. (Here, vagrant-IP and ec2-public-IP are actual IPv4 addresses.) How can I fix this?

I would expect this is a firewall problem: you need to open the port in the instance's security group (the Amazon-level firewall) to allow the incoming request.
This link should help: Authorizing Inbound Traffic for Your Linux Instances
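If you manage the security group from the AWS CLI rather than the console, the inbound rule would look roughly like this (a sketch only; the group ID below is a placeholder for your instance's security group):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 4000 \
    --cidr 0.0.0.0/0    # open port 4000 to any source; narrow the CIDR if you can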

Related

Cannot access WSL2 port opened via IPv6 from Windows host

I have a Node server running in WSL2 Ubuntu 20.04.
netstat -tulpn in WSL shows the following ports:
The ports bound as 0.0.0.0:8080 can be accessed from both WSL and Windows via the URL 127.0.0.1:8080.
My issue is that the ports bound as :::3006 can be accessed via 127.0.0.1:3006 only inside WSL; from Windows they work only via the network URL, like http://172.28.100.200:3006.
When I send a request to 127.0.0.1:3006 from Windows, there is no connection error, but the server inside WSL does not receive it; when I use the network address, it does.
How can I investigate this and make the Windows port at 127.0.0.1:3006 forward requests to the same port in WSL?
UPDATE:
So I solved this by adding a port proxy, but again, the WSL network IP is needed for it to work.
Is there any chance to avoid using the network IP?
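For reference, the port proxy mentioned in the update would look something like the following (a sketch run from an elevated Windows prompt, reusing port 3006 and the example WSL address 172.28.100.200 from the question; your WSL IP will differ):

netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=3006 connectaddress=172.28.100.200 connectport=3006

Because the WSL2 address changes across restarts, the proxy has to be refreshed with the current address, which is why avoiding the network IP entirely is awkward with this approach.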

Connecting to host from inside a docker container on linux requires opening firewall port

Background: I'm trying to have Xdebug connect to my IDE from within a Docker container (my PHP app is running inside a container on my development machine). On my MacBook it has no issue doing this. However, on Linux I discovered that, from within the container, the port I was using (9000) was not visible on the host gateway (checked with sudo nmap -sT -p- 172.20.0.1, where 172.20.0.1 is my host gateway in Docker).
I was able to fix this issue by opening port 9000 on my development machine (sudo ufw allow 9000/tcp). Once I did this, the container could see port 9000 on the host gateway.
My Question: Is this completely necessary? I don't love the idea of opening up a firewall port just so a docker container, running on my machine, can connect to it. Is there a more secure alternative to this?
From what you've told us, opening the port does sound necessary. If the host firewall blocks a port, all traffic over that port is blocked, and nothing inside the container will be able to reach the service listening on that port on the host.
What you can do to make this more secure is open the port only on the Docker interface rather than on every interface:
sudo ufw allow in on docker0 to any port 9000 proto tcp
Obviously, replace docker0 with the Docker bridge interface on your machine. You can find it by looking at the output of ip address show if the interface name is not obvious.
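If that output is long, a quick way to list the likely candidates (assuming iproute2 and grep are available) is:

ip -brief address show | grep -E '^(docker|br-)'    # docker0 is the default bridge; br-xxxx are user-defined networks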

How do I make a NodeJs project publicly accessible on port 3000?

I have a NodeJs/Express project on an Alibaba Cloud Ubuntu server.
When I run the project and access it with curl localhost:3000 or curl 127.0.0.1:3000, it works!
When I access it with the public IP, e.g. curl 192.x.x.x:3000, it doesn't work, even though I have changed the Express code to server.listen(3000, "0.0.0.0") or server.listen("3000", "192.x.x.x").
FYI, I have Apache on this server, and accessing it over the Internet with the public IP works fine.
What can I do to solve this problem? Thanks beforehand.
PS: 192.x.x.x is my public IP, and it works when accessing the Apache project.
Issue the following command to open port 3000 for TCP traffic.
sudo ufw allow 3000/tcp
You also have to configure your security group and create an inbound rule to allow port 3000. Follow this guideline:
https://www.alibabacloud.com/help/doc-detail/25471.htm
Make sure you allow TCP traffic (or all traffic) from all sources to port 3000 in the inbound rule.
The fact that you can access your service locally but not publicly points to two possible configuration problems:
The server running your application is blocking port 3000, or
You have not configured your server to map port 80 of a specific route to port 3000.
It is quite possible that an essential part of your server configuration has not been done.
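For completeness, a minimal sketch of an Express server bound to all interfaces, assuming the standard express package (the route is a placeholder, not the asker's code). Once this is running and the port-3000 inbound rule above is in place, curl 192.x.x.x:3000 should get a response:

const express = require('express');
const app = express();
// Placeholder route just to test reachability.
app.get('/', (req, res) => res.send('ok'));
// Bind to 0.0.0.0 so the app answers on the public IP, not just on localhost.
app.listen(3000, '0.0.0.0', () => console.log('Listening on 0.0.0.0:3000'));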

Running NodeJs application on port 80 of amazon linux

I am trying to get a NodeJs application to run on an Amazon Linux server using port 80. Currently, when I run the app, it defaults to port 1024. I understand that this is because I have to be root to bind to port 80, but given that I am on an AWS Linux box, I am not able to run it as root. I have been digging for a while, but I am coming up short on what I need to adjust to get this to run properly.
sudo bash will allow you to connect as root on your EC2 Amazon Linux instance.
I would question why you want to run NodeJS on port 80. The best practice would be to have a load balancer in front of your instance to accept HTTPS calls and relay them to whatever port NodeJS runs on, with the instance in a private subnet.
I would suggest reading this doc to learn how to do that: https://aws.amazon.com/getting-started/projects/deploy-nodejs-web-app/
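If the app really does need to answer on port 80 directly (rather than sitting behind a load balancer as suggested above), one common workaround, not from the answer itself, is to keep Node on an unprivileged port and redirect 80 to it with a NAT rule, assuming for example that the app listens on 3000:

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000    # redirect incoming port 80 to the Node port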

Not able to ssh to port 443 on an Amazon EC2 server

I am running ssh on an Amazon EC2 (Linux) machine on port 443.
Yet I am unable to ssh to it, as I am behind a firewall.
When I open
http://host:443
the following message is displayed:
SSH-2.0-OpenSSH_5.3
That means ssh is clearly listening on port 443, and the port is even reachable (via a browser).
Yet when I ssh from my desktop command line (or PuTTY), it just doesn't work.
Is it that the firewall is examining packets and blocking them?
Any ideas?
Are you doing ssh -p 443 host? Sorry to state the obvious... but sometimes the obvious is what eludes us.
Worked!
PuTTY also required proxy entries :)
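For anyone in the same situation, the two fixes can be combined in an OpenSSH client config (a sketch with placeholder host, user, and proxy values; it assumes the OpenBSD variant of nc for the HTTP CONNECT proxy):

# ~/.ssh/config
Host ec2-via-443
    HostName your-ec2-hostname.example.com
    Port 443
    User ec2-user
    ProxyCommand nc -X connect -x proxy.example.com:3128 %h %p

Then ssh ec2-via-443 connects on port 443 through the proxy, which is what the PuTTY proxy entries do on Windows.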
