I'm trying to set up my Node.js Express app with MongoDB on Amazon EC2, and I have already configured the database.
Now I want to clone the backend server (a private repo), which is the Node.js Express app. But whenever I try, the connection times out unless I open inbound traffic to "All Traffic". I'm also trying to set up GitHub Actions, and I can't install the runner for the same reason.
The runner command I'm trying to run:
curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
I tried all of these security rules, but it still times out.
As mentioned, it works when I set the inbound rule to "All Traffic", but I don't want that because of the security implications.
Can anyone help me with the correct rule, following security best practices?
I tried the first solution mentioned here: ssh: connect to host github.com port 22: Connection timed out
But that doesn't work either.
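For reference, security groups are stateful: replies to connections the instance initiates are allowed back in automatically, so a git clone or the runner download needs outbound rules, not a wide-open inbound rule. If the group's egress has been tightened, the kind of rule that typically needs to exist looks like this (a sketch using the AWS SDK for JavaScript v3; the group ID and region are placeholders, not real values):

// Sketch only: opens outbound HTTPS (443) and DNS (53), which is what a
// git clone over HTTPS and the runner download need. Security groups are
// stateful, so the responses come back without any inbound rule.
// "sg-0123456789abcdef0" is a placeholder, not a real group ID.
const { EC2Client, AuthorizeSecurityGroupEgressCommand } = require("@aws-sdk/client-ec2");

const client = new EC2Client({ region: "us-east-1" }); // assumed region

async function openEgress() {
  await client.send(new AuthorizeSecurityGroupEgressCommand({
    GroupId: "sg-0123456789abcdef0",
    IpPermissions: [
      { IpProtocol: "tcp", FromPort: 443, ToPort: 443,
        IpRanges: [{ CidrIp: "0.0.0.0/0" }] }, // HTTPS to github.com
      { IpProtocol: "udp", FromPort: 53, ToPort: 53,
        IpRanges: [{ CidrIp: "0.0.0.0/0" }] }, // DNS lookups
    ],
  }));
}

openEgress().catch(console.error);

If the clone goes over SSH instead of HTTPS, outbound TCP 22 (or the SSH-over-443 workaround from the linked question) would also be needed.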
I managed to successfully deploy a Docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js/Express app that just responds with res.json("Hi there!") on the root path. It listens on port 3000.
I think the deploy process was this:
Built the Docker image from the Node.js/Express source.
Ran the container from the local command line, correctly exposing the ports. It works locally.
Tagged the image with the correct project ID / zone.
Pushed it to the VM. I think I pushed the image rather than the container. Is this a problem?
SSHed into the VM, ran docker ps, and saw the running container with the correct image tag.
Used curl from the command line (I'm using a zsh terminal) as well as the browser to check network requests. Getting a connection refused error.
I'm a beginner, but the Google firewall settings appear to be open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I am getting a connection refused error whenever I try to contact the IP address, either with my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my docker container is running in the VM (and the source code works on my machine).
EDIT:
Updated Firewall Rules with Port 3000 Ingress:
You need a firewall rule that permits traffic to tcp:3000.
Preferably from just your host's IP (Google "what's my IP?" and use that), but for now you can (temporarily) allow any IP with 0.0.0.0/0.
Firewall rules can also be scoped to just the VM running your container, but I wouldn't worry about that initially.
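One more thing worth checking, since "connection refused" (unlike a timeout) usually means the packet arrived but nothing was listening: the container has to publish the port, and the app has to bind beyond the container's loopback. A minimal sketch of the app described above (not the asker's actual code):

// Minimal sketch, not the asker's actual code. Inside a container,
// binding to 0.0.0.0 matters: if the app listens on 127.0.0.1, the
// published port will refuse connections from outside the container.
// Run with: docker run -p 3000:3000 <image>
const express = require("express");
const app = express();

app.get("/", (req, res) => res.json("Hi there!"));

app.listen(3000, "0.0.0.0", () => {
  console.log("Listening on 0.0.0.0:3000");
});

Note that app.listen(3000) with no host argument already binds all interfaces; the explicit '0.0.0.0' just makes the intent obvious.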
I'm using Node.js and Express to run my projects.
In local app preview mode, everything works fine. But the ugly, long, temporary preview link provided on each preview just doesn't cut it for me, and I want people to be able to access my server via:
HTTP://<MY-ELASTIC-IP>:8080
I followed the guide here:
AWS Cloud9 App preview guide
and have my Elastic IP allocated and associated with my EC2 instance running our Cloud9 IDE. I set the inbound security rules as follows:
Inbound rules of my security group for the EC2 instance running our Cloud9 IDE
Then in my Node.js app I set the listening port to 8080 (as instructed by the guide), tried all kinds of listening IP addresses (127.0.0.1, 0.0.0.0, my Elastic IP, my private IP), and ran the app, hoping I could finally access my server through that URL, but none of them worked.
I'm a seasoned home-server developer, and most of my deployments were done through Supervisor, Nginx, Certbot, DNS configuration through my domain registration site, and some router port forwarding, and boom, my servers would be online in less than 10 minutes.
But really... what is up with AWS? There's just so much stuff they've shoved into this new Cloud9 (I miss the old c9...) that I can't get even the basic stuff done.
What am I missing here? Is there some sort of port forwarding I have to configure between my public Elastic IP and my private IPs? I visited most of the similar questions posted about this and still couldn't manage to get my public URL to point to the running Node.js instance inside C9.
I fought with this forever and finally got access by using the listening address '0.0.0.0' and constructing the URL from the IPv4 Public IP listed under "Manage EC2 Instance": HTTP://<IPV4-PUBLIC-IP>:8080
Not ideal, because this public IP changes each time I launch the environment, but it solved my immediate problem.
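A likely reason binding to the Elastic IP itself failed: EC2 public addresses are NATed and never appear on the instance's own network interface, so only 0.0.0.0 (or the private IP) is bindable. A minimal sketch, assuming a plain Express app:

// Sketch: bind to all interfaces so traffic arriving via the (NATed)
// Elastic IP reaches the app. Binding to the Elastic IP itself throws
// EADDRNOTAVAIL, because that address is not on the instance's interface.
const express = require("express");
const app = express();

app.get("/", (req, res) => res.send("Hello from Cloud9"));

app.listen(8080, "0.0.0.0", () => {
  console.log("Listening on 0.0.0.0:8080");
});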
I am trying to get a streaming service running from a modified version of an open-source repo: https://github.com/nabendu82/streams.
I have a frontend client in React, an RTMP server for the stream, and a backend API. I have a docker-compose file to host them all together. If I run docker-compose up on my local computer, everything works perfectly. I can visit http://localhost:3000/matches/view and see two stream windows that aren't loaded until I open the streaming software OBS (Settings -> Stream -> Server: rtmp://localhost/live, Stream Key: 7). Then the right stream window starts.
To host this repo on the internet, I've created a basic EC2 instance on AWS (http://13.54.200.18:3000/matches/view). I installed docker-compose and copied all the repo files up to it.
However, when running on the AWS box the stream does not load, and the console error is always the spectacularly unhelpful:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://server:3002/streams/6. (Reason: CORS request did not succeed).
So for some reason CORS is preventing the React frontend from reading the server backend while it is hosted on AWS.
Here is the catch. I can actually get the streaming on the AWS-hosted site to work, but only by running docker-compose up on my LOCAL computer at the same time. For some unknown reason, the AWS-hosted version is able to pick up on the backend server running on my local machine (rather than the one running alongside it in docker-compose on AWS) and connect that way. I can even stream to the website via OBS at rtmp://13.54.200.18/live and everything works. But it only works from exactly my local computer running the docker-compose infrastructure (and only if I use calls to 'localhost' instead of the docker-compose service 'server'); if anyone else tries to see the stream on the live site, they will just get Loading... perpetually, plus the CORS error.
Why is the AWS hosted code not looking at its own docker-compose file and its own server:3002 service? For the rest of the world, and for me if I'm not running a local server, it throws a CORS error. For just my local computer, and only if I'm running a local server and making requests to 'localhost:3002', it works perfectly.
If I SSH onto the AWS instance, docker-compose run client curl localhost:3002/streams will fail, but docker-compose run client curl server:3002/streams will give me back the correct JSON data. From everything I understand about docker-compose, my services should be able to access each other, and it appears they can: everything works great locally, and the services can talk to each other on the AWS box too. Yet somehow this CORS error appears out of nowhere, only on the AWS-hosted version.
I've tried everything under the sun I can think of. I was originally using json-server, but I thought that might be the issue (as it has to be specifically bound to -H 0.0.0.0), so I wrote my own Express server using the cors package to replace it, and there has been no change. I've tried every configuration of docker-compose variables I can imagine. As far as I can tell I've done everything right, but somehow the AWS box wants to talk to my own computer's localhost (aka the "server" service, aka 0.0.0.0) instead of its own. What is going on?
Repository here: https://github.com/JeremyEllingham/streams
Any help much appreciated.
I figured out how to get it working: just post directly to the Linux box's IP address in production instead of trying to get it working with "localhost" or the Docker service names. Kind of disappointed that docker-compose doesn't seem to work quite like I thought it did, but it's totally functional to just conditionally alter the base URL.
See also this answer: React app (in a docker container) cannot access API (in a docker container) on AWS EC2
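For anyone hitting the same wall: the compose service name server only resolves inside the Docker network, but these requests are issued by the visitor's browser, which is outside it, and Firefox reports any request that fails that way as a CORS error. A hypothetical sketch of the conditional base URL described above (the variable name and structure are illustrative, not from the repo):

// Hypothetical sketch (names illustrative, not from the repo): the
// browser can't resolve the compose service name "server", so point it
// at an address reachable from outside the Docker network in production.
const API_BASE =
  process.env.NODE_ENV === "production"
    ? "http://13.54.200.18:3002" // the EC2 box's public address
    : "http://localhost:3002";   // local docker-compose setup

export default API_BASE;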
I currently have an EC2 instance up and running with Amazon Linux, and I transferred my project (which contains React, Node.js, and Express) onto the instance via SFTP using FileZilla.
In the EC2 instance's security group, I opened port 3000 (protocol: TCP, source: 0.0.0.0/0), which is the port my Express app is configured to use as well.
So I SSHed into the EC2 instance, started the project's Express server, and saw it listening on port 3000 in the terminal. But when I hit the public DNS at ec2...us-west-1.compute.amazonaws.com:3000, it says "This site can't be reached - ec2...us-west-1.compute.amazonaws.com took too long to respond."
What could be the issue, and how can I go about connecting to it from here?
Thank you in advance; I will upvote/accept the answer.
Just check if your Node.js server is running on the EC2 instance.
Debugging:
First check that it's working properly locally.
Check for the Node.js server on EC2:
sudo netstat -tulpn | grep :3000
Try running the server with the --verbose flag, i.e. npm run server --verbose.
It will show the server's logs while starting.
Check the security group settings for the EC2 instance.
Try connecting with ip:port, e.g. 35.2..:3000.
If it's still not working and the response is taking a long time, it may mean some other service is running on the same port.
Try this on EC2:
sudo killall -9 node
npm run server
Then connect using the IP (54.4.5.*:3000) or the public DNS (http://ec2...us-west-1.compute.amazonaws.com:3000).
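To make the "is the server actually running" check concrete, here is a small sketch (not from the asker's project) that logs the bound address on startup, mirroring what the netstat command above would report:

// Sketch: print the actual bind address and port on startup, mirroring
// what `sudo netstat -tulpn | grep :3000` would show.
const express = require("express");
const app = express();

app.get("/", (req, res) => res.send("ok"));

const server = app.listen(3000, () => {
  const { address, port } = server.address();
  console.log(`Server listening on ${address}:${port}`);
});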
Hope it helps :)
You may be encountering an issue with outbound traffic. You may be inside a company's network, either physically connected or VPN'd in. In some cases, your VPN isn't set up for split tunneling, so you must abide by your company's outbound restrictions.
In a situation like this, you would want to use a proxy to access your site. When locking down your security group, make sure you use your proxy's public IP (not your company's).
Usually, when we have connectivity issues, it is something basic or a firewall. I assume you have checked whether a firewall is running on either end, e.g. iptables -L -n. Also, any protocol analyzer like Wireshark or tcpdump would tell you where packets to port 3000 are visible.
So I've set up a Windows instance, but I can't seem to FTP into it. After much research, I've discovered SFTP is the way forward.
I've set up my security group, adding the following rule:
SSH tcp 22 22 0.0.0.0/0
Using the public DNS name supplied in the console, I try to SFTP in using FileZilla and Cyberduck, but they just time out.
I know the next step is sorting out the key pairs, but I doubt that'll do me any good if my server isn't even accepting connections.
Any idea what I've missed?
EDIT:
Looking at the FileZilla logs, it looks like the server isn't responding to the connection requests...
12:51:29 Status: Connecting to ec2-122-248-248-178.ap-southeast-1.compute.amazonaws.com...
12:51:29 Response: fzSftp started
12:51:29 Command: keyfile "D:\Users\berling\Lacie Fuj Sync\Freelancing\AWS_Public_Key.ppk"
12:51:29 Command: open "greg@ec2-122-248-248-178.ap-southeast-1.compute.amazonaws.com" 22
12:51:49 Error: Connection timed out
12:51:49 Error: Could not connect to server
Do I need to install an SFTP/SSH server on the instance? I was under the impression it was already set up on Amazon servers for some reason... am I wrong about that?
The rule you have set up (SSH tcp 22 22 0.0.0.0/0): where is that? In your firewall, or at the EC2 end? And why 0.0.0.0/0? I would recommend restricting it to real IP addresses.
Check why the connection is timing out: is SFTP getting past your firewall? Is it getting blocked at the EC2 end? Firewall or network logs will be your friend here.
Have you confirmed it is timing out before starting the handshake? Check SSH logs.
Do you have an SFTP server running and configured correctly? Some require all configs to be set before they are happy; your comment that you haven't yet sorted out key pairs makes me wonder if this one only accepts certificate auth.
Check those and see how you do.
Install WinSSHD on your EC2 instance. It provides RDP, SFTP, and console access, all over port 22.
Install Tunnelier on your client.
I haven't tried this particular package, but Cygwin and Services for UNIX provide OpenSSH versions.
Copssh claims to install OpenSSH and allow migration/configuration of users:
http://sourceforge.net/projects/sereds/files/Copssh/4.0.4/