I have a Node.js server up and running on AWS EC2. It is a t2.micro with no load balancer and no auto-scaling. On the security groups, I have:
Next, I have my domains set up on Route 53, with the nameservers configured at name.com, and a wildcard SSL certificate from GoDaddy.com which I have imported into ACM. I have created a record in Route 53 that points to the EC2 Elastic IP. Everything is set up as it should be.
Now here comes the problem. Whenever I try to hit my server using the Route 53 endpoint, it fails after two or three calls, but the Elastic IP works fine. The same thing happens if I point the Route 53 record at another resource, such as CloudFront.
Normally, this wouldn't bother me if I had built this server for mobile apps only. But the issue is that the web front end and the server won't communicate unless they are both on HTTPS.
So any ideas???
I'll provide more information if needed.
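One quick way to narrow this down is to check what the Route 53 record actually resolves to while the failures are happening. A minimal Node.js diagnostic sketch, where api.example.com and 203.0.113.10 are placeholders for the real record and Elastic IP:

```js
// Compare what DNS returns for the Route 53 record against the Elastic IP
// you expect, a few times in a row.
const { Resolver } = require('dns').promises;

const DOMAIN = 'api.example.com';   // placeholder: your Route 53 record
const EXPECTED_IP = '203.0.113.10'; // placeholder: your Elastic IP

async function check(attempt) {
  const resolver = new Resolver();
  try {
    const addresses = await resolver.resolve4(DOMAIN);
    const ok = addresses.includes(EXPECTED_IP);
    console.log(`attempt ${attempt}: ${DOMAIN} -> ${addresses.join(', ')} ${ok ? '(matches Elastic IP)' : '(UNEXPECTED)'}`);
  } catch (err) {
    console.log(`attempt ${attempt}: resolution failed: ${err.code}`);
  }
}

(async () => {
  for (let i = 1; i <= 5; i++) {
    await check(i);
  }
})();
```

If the record sometimes fails to resolve, or resolves to something other than the Elastic IP, the problem is on the DNS side; if it always resolves correctly, the issue is more likely in how the requests reach the instance.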
Related
I am currently learning about AWS. I have a single EC2 instance running a Node.js server on port 3000, and an Application Load Balancer with SSL set up that listens on ports 80 and 443 (HTTP and HTTPS). When I make requests over HTTP I get the successful health check message back, but when I try to access my API over HTTPS I get a 502 error.

I googled around and read some articles, and they pointed out that the Node.js server's keepAliveTimeout and headersTimeout should be higher than the idle timeout of the ALB. I tried that and it didn't work. I also tried setting the maximum HTTP header size to 16384, and I checked the load balancer's access logs in my S3 bucket, but they only showed that I am getting a 502 and nothing more. What could be the issue? I have tried all the solutions presented, but none of them seem to work.
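For reference, the keepAliveTimeout/headersTimeout advice amounts to something like this in Node.js; a minimal sketch, assuming the ALB idle timeout is the default 60 seconds (adjust the numbers to your own timeout):

```js
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(3000, () => {
  console.log('listening on 3000');
});

// Keep the Node connection open longer than the ALB idle timeout (60s by default),
// so the ALB never reuses a connection the server has already closed (a common 502 cause).
server.keepAliveTimeout = 65 * 1000;  // 65s, assumption: ALB idle timeout is 60s
server.headersTimeout = 66 * 1000;    // must be larger than keepAliveTimeout
```

The asker says this alone did not help, which points back at the listener and target configuration rather than the Node.js side.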
The 443 listener needs to be pointed at port 80 on the EC2 instance.
The first thing to check is that your server is responding to requests. Try connecting to port 3000 on the server, either from the server itself (e.g. curl localhost:3000) or from outside the server (which will require the Security Group to permit access to port 3000).
Once you have confirmed that the server is responding, configure the Security Groups as follows:
A Security Group on the Application Load Balancer (ALB-SG) that permits Inbound access on ports 80 and 443
A Security Group on the Amazon EC2 instance (App-SG) that permits inbound access on port 3000 from ALB-SG
That is, App-SG should specifically refer to ALB-SG in its Inbound rules.
Then, configure the Load Balancer to have a Target Group that points at port 3000 on the app server and provide it a URL for the Health Check (that could simply be /).
Then, connect to the Application Load Balancer and see whether you can access your app.
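For completeness, a minimal sketch of a Node.js server that would satisfy a / health check on port 3000 (the port and path are the ones mentioned above; everything else is illustrative):

```js
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/') {
    // Target group health check path: respond quickly with a 2xx.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    return res.end('OK');
  }
  res.writeHead(404);
  res.end();
});

// Bind to all interfaces so the ALB can reach the instance over the VPC network.
server.listen(3000, '0.0.0.0', () => {
  console.log('app listening on port 3000');
});
```

If curl localhost:3000 works but the target still shows as unhealthy, the App-SG inbound rule referencing ALB-SG is the usual suspect.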
I would like to ask how to configure Node.js (backend/server) to accept HTTPS requests from the client side (front end).
What we did:
Registered domain name in AWS.
Requested SSL in ACM.
Created a bucket in S3 and stored our front-end code (Angular 5) inside it.
Created a distribution in CloudFront, attached our custom SSL certificate there, and connected it to the S3 bucket.
Also set up an EC2 instance and stored our back-end code (Node.js) there.
In our front-end code we connect to the IP of our EC2 instance so that we can reach the backend.
The problem:
The front end can't access the backend in the EC2 instance because the front end is served over HTTPS and the backend is plain HTTP (we don't know how to go from HTTP to HTTPS on an AWS EC2 instance).
Do you know how to set up a web app in AWS where the front-end and back-end code are separated?
What did we miss?
If I understand you correctly, you have a CloudFront distribution serving Angular which is then attempting to connect to an EC2 instance - I presume the IP address or public DNS entry for the EC2 is hard-coded into the Angular code.
This is not a good arrangement - if your EC2 goes down or the IP address changes, you will need to push a new site to S3, and that change will then take time to propagate through CloudFront.
What you should be doing instead is this:
Create an Application Load Balancer.
Create a target group and add your EC2 instance to that target group.
Add a listener on the ALB, listening on the port your web app connects on, with a rule that forwards to the HTTP port of the back-end EC2.
Add a Route 53 DNS Alias record for the ALB (because ALBs do sometimes go away or change their IP addresses).
Change your front-end code to point at the Route 53 Alias record (see the sketch below).
(This is an incredibly simplistic way of doing things that leaves your EC2 open to the internet etc etc).
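A minimal sketch of that last step, assuming a hypothetical Route 53 alias such as api.example.com in front of the ALB; plain fetch is used to keep it framework-neutral, but the same change applies to an Angular HttpClient call:

```js
// Before: the EC2 IP was hard-coded into the front end, so any instance change
// forced a rebuild and a new push to S3/CloudFront.
// const API_BASE = 'http://203.0.113.10:3000';   // hypothetical old value

// After: point at the Route 53 alias for the ALB instead (hypothetical name).
const API_BASE = 'https://api.example.com';

async function getItems() {
  // The browser talks HTTPS to the ALB; the ALB forwards to the EC2 over HTTP.
  const response = await fetch(`${API_BASE}/items`);   // /items is illustrative
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  return response.json();
}
```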
You should also give serious thought to putting your EC2 into an auto-scaling group that spans at least two availability zones, and to setting its minimum size so that enough servers are running at any one time.
AWS EC2 instances can go away at any time, and when they do your app goes down with them.
This is a confusing situation but let me try my level best to present my problem.
I am trying my hand at setting up an AWS app architecture by following this blueprint: https://s3.amazonaws.com/awsmedia/architecturecenter/AWS_ac_ra_web_01.pdf
I don't require the web server part of it, so the components that I am trying to set up are Route 53 -> Elastic Load Balancer -> (a subnet containing two EC2 instances that run my Node.js app).
I have created a hosted zone on Route 53, and I created a record set with an alias pointing to the ELB.
At first, I did not set up NGINX on my EC2 instances, and in my ELB configuration I registered my EC2 instances on the port the application runs on, i.e. 9000. At this point, if I tried to access my app via the domain name the page was unreachable, and if I tried to access it via the ELB DNS name it returned a 504.
Then I set up NGINX on my instances and registered the instances with the ELB on port 80. This time the ELB returned a 503, but the page is still unreachable via Route 53.
I am using an application load balancer with HTTPS protocol.
So, at this point, I can't access my app via Route 53, and I am getting a 503 when I access it via the ELB DNS name. However, if I point my browser at the public DNS of either of my EC2 instances, I get responses from my APIs.
Can anyone help me with this?
I am fairly new to all this (being an app/mobile web developer).
I have set up an instance on EC2 which runs perfectly under HTTP.
I want to add HTTPS support as I want to write a service worker.
I have used Amazon's Certificate Manager to obtain a certificate.
I have created an ELB and added a listener at 443 for HTTPS.
I am not entirely sure whether my ELB and EC2 instance are connected. Following some instructions I attempted to create a CNAME rule in my Route53 setup but it would not accept it (pointing to the ELB DNS).
My understanding is that if they are, then my HTTP Node.js instance should now automatically support HTTPS.
This is currently not the case. My Node.js code is unchanged (it still only creates an HTTP server listening on port 3002).
When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a "Site can not be reached" error.
This leads me to believe that the ELB and the EC2 are not associated. Can anyone suggest where I may have gone wrong? I have hunted the internet for 3 days and not found any step-by-step instructions for this.
You need to focus on this part of your question:
I am not entirely sure whether my ELB and EC2 instance are connected.
Following some instructions I attempted to create a CNAME rule in my
Route53 setup but it would not accept it (pointing to the ELB DNS).
Why are you not sure they are connected? You should be able to look at the health check section in the load balancer UI and see that the server instance is "connected" and healthy. If it isn't, then that is the first thing you need to fix.
Regarding the CNAME in Route53, what do you mean it wouldn't accept it? What are the details of that issue? Until you have your DNS pointing to the load balancer you won't actually be using the load balancer, so that's another issue you need to fix.
When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a "Site can not be reached" error.
If you had an error setting up the DNS then of course this isn't going to work. You shouldn't even be attempting to test this yet until you get the DNS configured.
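Once the DNS and the health check are sorted out, the setup the asker describes does work, but the HTTPS call has to go to the load balancer's port 443, not to :3002 on the domain. A sketch, assuming TLS is terminated at the ELB with the ACM certificate and the instance keeps serving plain HTTP:

```js
// The Node.js code stays plain HTTP; the ELB terminates TLS with the ACM certificate.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello over plain HTTP behind the ELB\n');
}).listen(3002, () => {
  console.log('listening on 3002 (ELB target port)');
});

// Clients then call:
//   https://example.com        -> ELB 443 listener -> instance port 3002
// not:
//   https://example.com:3002   -> nothing terminates TLS there, so it fails
```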
I am looking for an easy way to fail over to a different DC quickly. Does CloudFlare offer anything special in this regard, such as health checks, or is it just like a standard DNS service?
Update: CloudFlare has started a closed beta for the Traffic Manager feature, which allows you to do exactly this kind of failover:
https://www.cloudflare.com/traffic-manager/
AWS Failover:
The following solution seems to work well when you are hosting your backend system on AWS:
I set up an AWS Route 53 zone with a separate domain (e.g. failover-example.com). Route 53 allows you to set up health checks on the backend server (e.g. the load balancer) with DNS failover. AWS will remove the unhealthy backend system from the DNS record list.
In CloudFlare I set up a CNAME record for example.com pointing to failover-example.com and activate the CloudFlare proxy on example.com.
The result is that the browser resolves the IP address of example.com to a CloudFlare IP address. CloudFlare queries the AWS DNS server to look up failover-example.com, fetches the content from the resolved IP address, and returns it to the browser.
In my tests the switch to the other backend system occurs after about 20 seconds.
The separate domain is required because CloudFlare does not route the traffic through the proxy when the CNAME is a subdomain of example.com.
I have tried to visualize the failover. In theory, the failover works with any DNS-failover-capable service, not only Route 53.
The browser always connects to CloudFlare, so a DNS failover of the backend system never affects the user's browser.
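For anyone who wants to script the Route 53 side of this, here is a rough sketch with the AWS SDK for JavaScript (v2); the hosted zone ID, domain, and IP address are placeholders, and the same thing can of course be done by hand in the console:

```js
// Sketch: create a health check and a PRIMARY failover A record in Route 53.
// All identifiers below (zone ID, domain, IPs) are placeholders.
const AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' }); // Route 53 is global; the SDK still wants a region
const route53 = new AWS.Route53();

async function setupPrimaryFailover() {
  // Health check against the primary backend (e.g. the load balancer).
  const { HealthCheck } = await route53.createHealthCheck({
    CallerReference: `primary-${Date.now()}`,
    HealthCheckConfig: {
      Type: 'HTTP',
      FullyQualifiedDomainName: 'primary.failover-example.com',
      Port: 80,
      ResourcePath: '/health',
      RequestInterval: 10,   // fastest supported interval
      FailureThreshold: 2,
    },
  }).promise();

  // PRIMARY failover record; a matching SECONDARY record (not shown) points
  // at the standby backend and is served when this health check fails.
  await route53.changeResourceRecordSets({
    HostedZoneId: 'Z0000000000000',        // placeholder hosted zone ID
    ChangeBatch: {
      Changes: [{
        Action: 'UPSERT',
        ResourceRecordSet: {
          Name: 'failover-example.com',
          Type: 'A',
          SetIdentifier: 'primary',
          Failover: 'PRIMARY',
          TTL: 60,
          ResourceRecords: [{ Value: '203.0.113.10' }],  // placeholder IP
          HealthCheckId: HealthCheck.Id,
        },
      }],
    },
  }).promise();
}

setupPrimaryFailover().catch(console.error);
```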
We don't have automatic failover at this time (something we're looking at). We can support the additional DNS entries in your zone file, of course, but you would currently have to manually make the change in that circumstance.
To add -- in the meantime, I'd recommend looking at https://runbook.io
Several other DIY options:
http://blog.booru.org/?p=12
https://vpsboard.com/topic/3341-running-your-own-failover-dns-setup/
https://github.com/marccerrato/python-dns-failover
You'd want to decide if these are the right options for you, of course.