I have a Rails 4 application running on Heroku with exception_notification. I was notified that an AWS server was fishing for a login page by trying to access /wp-login.php. Since that is not my app's login page, someone had to manually enter that URL. Tracking the IP shows an Amazon AWS server in Oregon.
There shouldn't be any reason why someone would ever access my app via an AWS server, so my initial thought is someone is trying to get into the application.
In order to avoid any potential attack, I'm thinking about blocking all Amazon AWS requests.
Is there any way to blacklist Amazon AWS servers specifically? The only thing I can think of is checking the IP address of every request and rejecting any that come from a list of Amazon ranges I maintain, but I'm not sure if there is an official listing of Amazon IP addresses.
But checking the IP of every request against a blacklist seems inefficient. I'm aware of the rack-attack gem, but that is still running Ruby code to do the check, which doesn't seem very fast...
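To be concrete, the check I have in mind is plain CIDR membership, which is only a couple of bitwise operations per range (sketched in TypeScript purely for illustration; AWS_CIDRS is a hypothetical list I'd keep updated, and rack-attack's blocklist would run essentially the same kind of test in Ruby):

```typescript
// Per-request IP check against a maintained list of AWS CIDR blocks.
// AWS_CIDRS is a hypothetical, preloaded list (placeholder ranges).
const AWS_CIDRS = ["54.230.0.0/15", "52.0.0.0/11"];

// Convert a dotted-quad IPv4 address to a 32-bit unsigned integer.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// True if `ip` falls inside the `cidr` block (IPv4 only).
function inCidr(ip: string, cidr: string): boolean {
  const [base, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

function isAwsIp(ip: string): boolean {
  return AWS_CIDRS.some((cidr) => inCidr(ip, cidr));
}

console.log(isAwsIp("54.231.1.5")); // true with the placeholder list above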
Blocking all AWS IPs is not a good solution. Malicious traffic can potentially come from any part of the world, so how far are you going to take the blocking? Instead, you should make your application robust.
There is an official listing of AWS IP addresses: AWS IP Address Ranges.
If you are 100% sure that the unwanted traffic originates from AWS (remember there are many AWS regions), then you can block it using iptables. One such solution is AWS Blocker.
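The published ranges are machine-readable JSON, so keeping a local list current is easy to automate. A rough sketch (Node 18+ for the built-in fetch; filtering by region is optional):

```typescript
// Download AWS's published IP ranges and extract the IPv4 CIDR blocks.
// https://ip-ranges.amazonaws.com/ip-ranges.json is the official feed;
// IPv6 ranges live in a separate ipv6_prefixes array.
interface AwsIpRanges {
  syncToken: string;
  createDate: string;
  prefixes: { ip_prefix: string; region: string; service: string }[];
}

async function fetchAwsCidrs(region?: string): Promise<string[]> {
  const res = await fetch("https://ip-ranges.amazonaws.com/ip-ranges.json");
  const data = (await res.json()) as AwsIpRanges;
  return data.prefixes
    .filter((p) => !region || p.region === region)
    .map((p) => p.ip_prefix);
}

// e.g. dump one region's ranges to feed into iptables rules
fetchAwsCidrs("us-west-2").then((cidrs) => console.log(cidrs.join("\n")));
```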
I have an Express.js application running on an AWS EC2 instance that acts as the REST API for my application. If I want to add an SSL certificate for my Express.js API, what should I do?
1) Do I need to get a domain with an SSL certificate and map it to my EC2 IP address?
2) Or is it enough to put AWS API Gateway in front of my EC2 instance, use the free SSL certificate from ACM, and get a domain without SSL?
3) Or get a domain with SSL and also SSL in ACM?
(I'm kind of confused about domains versus SSL; any help would be appreciated.)
Thanks in advance.
The default pattern for this kind of use case, assuming that you don't want to manage a domain + certificate, is to put your EC2 instance behind a service that integrates with ACM, such as Elastic Load Balancing (ELB) or an Amazon CloudFront distribution.
API Gateway, while it would also give you an SSL certificate, brings many other features that you'd still have to pay for.
EDIT:
The original question was not formatted properly and I missed option 3).
If you are going to get a domain, then you have other options, such as managing it with Route 53 and directing your traffic to the EC2 instance, or doing the same through the domain registrar. This assumes that the EC2 has a static IP address that allows you to address it. At this point, you can get an SSL certificate either via AWS ACM or by other means directly on the EC2 (e.g. Let's Encrypt). The difference between the two, aside from price, is that one requires you to manage your own certificate while the other is an AWS managed service.
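Whichever option terminates TLS in front of the instance (ELB, CloudFront, or API Gateway), note that the Express app itself keeps speaking plain HTTP; it only needs to trust the proxy's forwarded headers. A minimal sketch, with the port as an assumption:

```typescript
import express from "express";

const app = express();

// TLS terminates at the load balancer / CloudFront; the instance only
// ever sees plain HTTP. Trusting the proxy makes req.secure reflect the
// original scheme via the X-Forwarded-Proto header.
app.set("trust proxy", true);

// Optional: bounce any request that arrived at the edge over plain HTTP.
app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  next();
});

app.get("/api/health", (_req, res) => res.json({ ok: true }));

app.listen(3000, () => console.log("HTTP API listening on 3000"));
```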
I would like to ask how to configure Node.js (backend/server) to accept HTTPS requests from the client side (frontend).
What we did.
Registered a domain name in AWS.
Requested an SSL certificate in ACM.
Created a bucket in S3 and stored our front-end code (Angular 5) in it.
Created a distribution in CloudFront, attached our custom SSL certificate there, and connected it to the S3 bucket.
We also set up an EC2 instance and deployed our back-end code (Node.js) there.
In our front-end code we connect to the IP of our EC2 instance so that we can reach the backend.
The problem:
The frontend can't access the backend on the EC2 instance because the frontend is served over HTTPS while the backend only speaks HTTP (we don't know how to move it from HTTP to HTTPS on an AWS EC2 instance).
Do you know how to set up a web app in AWS where the front-end and back-end code are separated?
What did we miss?
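Concretely, the call that fails looks something like this (the IP and endpoint are placeholders):

```typescript
// What the frontend is effectively doing today. The page itself is
// served over HTTPS via CloudFront, so the browser refuses this
// plain-HTTP request as mixed content.
const API_BASE = "http://203.0.113.10:3000"; // EC2 public IP (placeholder)

async function getUsers(): Promise<unknown> {
  // Browsers report something like "Mixed Content: the page was loaded
  // over HTTPS, but requested an insecure resource" and block the call.
  const res = await fetch(`${API_BASE}/users`);
  return res.json();
}
```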
What did we miss?
If I understand you correctly, you have a CloudFront distribution serving Angular which is then attempting to connect to an EC2 instance - I presume the IP address or public DNS entry for the EC2 is hard-coded into the Angular code.
This is not a good arrangement - if your EC2 goes down or its IP address changes, you will need to push a new site to S3, and then that change will take time to propagate through CloudFront.
What you should be doing instead is this:
Create an Application Load Balancer (ALB).
Create a target group and add your EC2 instance to that target group.
Add a listener on the ALB, listening on the port your web app connects on, with a rule that forwards to the HTTP port of the back-end EC2.
Add a Route 53 DNS alias record for the ALB (because ALBs do sometimes go away or change their IP addresses).
Change your front-end code to point at the Route 53 alias record.
(This is an incredibly simplistic way of doing things that leaves your EC2 open to the internet etc etc).
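If you'd rather script steps 2 to 4 than click through the console, a rough sketch with the AWS SDK for JavaScript v3 looks like this (the ALB itself is assumed to already exist, and all ARNs, IDs, ports and the health-check path are placeholders):

```typescript
import {
  ElasticLoadBalancingV2Client,
  CreateTargetGroupCommand,
  RegisterTargetsCommand,
  CreateListenerCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

const elbv2 = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

async function wireUpAlb(albArn: string, instanceId: string, vpcId: string) {
  // Target group pointing at the instance's HTTP port (placeholder 3000).
  const tg = await elbv2.send(
    new CreateTargetGroupCommand({
      Name: "backend-tg",
      Protocol: "HTTP",
      Port: 3000,
      VpcId: vpcId,
      TargetType: "instance",
      HealthCheckPath: "/health",
    })
  );
  const tgArn = tg.TargetGroups![0].TargetGroupArn!;

  // Attach the EC2 instance to the target group.
  await elbv2.send(
    new RegisterTargetsCommand({
      TargetGroupArn: tgArn,
      Targets: [{ Id: instanceId }],
    })
  );

  // HTTPS listener on 443 forwarding to the target group; the
  // certificate ARN would come from ACM (placeholder below).
  await elbv2.send(
    new CreateListenerCommand({
      LoadBalancerArn: albArn,
      Protocol: "HTTPS",
      Port: 443,
      Certificates: [{ CertificateArn: "arn:aws:acm:us-east-1:123456789012:certificate/placeholder" }],
      DefaultActions: [{ Type: "forward", TargetGroupArn: tgArn }],
    })
  );
}
```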
You should also give serious thought to putting your EC2 into an autoscaling group that spans at least two availability zones, and to setting its minimum size to ensure enough servers are running at any one time.
AWS EC2 instances can go away at any time, and when they do your app goes down with them.
I was browsing the web using Firefox through my EC2 instance located in Ashburn, Virginia (IP address: 54.159.107.46). I visited www.supremenewyork.com and it did not load (other websites like Google did load). I did some research and found the IP of Supreme's site: 52.6.25.180. The location of that IP is ALSO in Ashburn, Virginia, which could only mean that Supreme is using AWS to host their site. This is an issue for my instance because I want to connect to Supreme using it, but because the IPs are in the same server building or in Amazon's IP range, I can't. Is there a workaround to this issue? Please help.
By the way: I tried pinging Supreme's IP from my EC2 instance – 100% packet loss.
NOTE THAT I CAN ACCESS SUPREME FROM MY HOME COMPUTER: IT IS NOT DOWN
Is there a security problem because I am trying to connect to their site?
I ran some tests locally and on AWS machines. My conclusion: www.supremenewyork.com blocks traffic that originates from AWS. It is easy to block traffic from AWS using iptables: AWS publishes its IP Address Ranges, and it is easy to write a simple script like AWS Blocker to block all traffic from AWS IPs.
Why do some vendors block traffic from AWS? Increasing DDoS traffic and bot attacks from AWS-hosted machines. Many attackers exploit compromised machines running in AWS to launch their attacks; I have seen too many such incidents. AWS does its best to thwart such attempts, but if you see most of the attacks coming from a set of IP ranges, naturally you will try to block traffic from those IPs. I suspect the same in this case.
The website is not pingable because ICMP traffic is blocked from all IPs. There is nothing you can do (unless you go through a VPN) to access the vendor website from your EC2 machine.
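If you want to verify the blocking yourself, compare a plain TCP connect from the EC2 instance and from your home machine. A small sketch (port and timeout are assumptions):

```typescript
import net from "node:net";

// Try a raw TCP connection to port 443. Run this once from the EC2
// instance and once from home: a timeout from EC2 but a quick connect
// from home is consistent with AWS source IPs being filtered.
function probe(host: string, port = 443, timeoutMs = 5000): void {
  const socket = net.createConnection({ host, port });
  socket.setTimeout(timeoutMs);
  socket.on("connect", () => {
    console.log(`${host}:${port} reachable`);
    socket.end();
  });
  socket.on("timeout", () => {
    console.log(`${host}:${port} timed out (likely filtered)`);
    socket.destroy();
  });
  socket.on("error", (err) => console.log(`${host}:${port} error: ${err.message}`));
}

probe("www.supremenewyork.com");
```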
I am fairly new to all this (being an app/mobile web developer).
I have set up an instance on EC2 which runs perfectly over HTTP.
I want to add HTTPS support, as I want to write a service worker.
I have used AWS Certificate Manager to obtain a certificate.
I have created an ELB and added a listener at 443 for HTTPS.
I am not entirely sure whether my ELB and EC2 instance are connected. Following some instructions, I attempted to create a CNAME record in my Route 53 setup pointing to the ELB's DNS name, but it would not accept it.
My understanding is that if they are connected, my HTTP Node.js instance should now automatically support HTTPS.
This is currently not the case. My Node.js code is unchanged (it still only creates an HTTP server listening on port 3002).
When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a "site cannot be reached" error.
This leads me to believe that the ELB and the EC2 are not associated. Can anyone suggest where I may have gone wrong? I have hunted the internet for three days and not found any step-by-step instructions for this.
You need to focus on this part of your question:
I am not entirely sure whether my ELB and EC2 instance are connected. Following some instructions, I attempted to create a CNAME record in my Route 53 setup pointing to the ELB's DNS name, but it would not accept it.
Why are you not sure they are connected? You should be able to look at the health check section in the load balancer UI and see that the server instance is "connected" and healthy. If it isn't, then that is the first thing you need to fix.
Regarding the CNAME in Route53, what do you mean it wouldn't accept it? What are the details of that issue? Until you have your DNS pointing to the load balancer you won't actually be using the load balancer, so that's another issue you need to fix.
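One common reason Route 53 refuses such a record: if you were creating it at the zone apex (example.com rather than a subdomain like www.example.com), DNS does not allow a CNAME there, and an alias A record is the usual fix. A rough sketch with the AWS SDK for JavaScript v3, in case that is what you hit (zone IDs and names are placeholders):

```typescript
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const route53 = new Route53Client({ region: "us-east-1" });

// Upsert an alias A record pointing the apex domain at the ELB.
async function upsertAlias() {
  await route53.send(
    new ChangeResourceRecordSetsCommand({
      HostedZoneId: "Z111111QQQQQQQ", // your hosted zone (placeholder)
      ChangeBatch: {
        Changes: [
          {
            Action: "UPSERT",
            ResourceRecordSet: {
              Name: "example.com",
              Type: "A",
              AliasTarget: {
                // The load balancer's own hosted zone ID, taken from the
                // ELB console or API (placeholder value).
                HostedZoneId: "Z22222EEEEEEE",
                DNSName: "my-elb-1234567890.us-east-1.elb.amazonaws.com",
                EvaluateTargetHealth: false,
              },
            },
          },
        ],
      },
    })
  );
}

upsertAlias().then(() => console.log("alias record upserted"));
```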
When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a "site cannot be reached" error.
If you had an error setting up the DNS, then of course this isn't going to work; you shouldn't even be attempting to test this until you get the DNS configured. Also note that once DNS does point at the load balancer, HTTPS is served by the ELB's 443 listener, which then forwards to port 3002 on the instance, so the URL to test is https://example.com, not https://example.com:3002.
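Separately, it's worth confirming the instance actually passes the ELB health check on the target port. A minimal sketch of what the Node side can look like (the /health path is an assumption; the app never touches certificates because TLS terminates at the ELB):

```typescript
import http from "node:http";

// Plain-HTTP backend on 3002. The ELB's 443 listener terminates TLS and
// forwards traffic to this port, so no certificate handling is needed here.
const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    // Health-check target for the load balancer.
    res.writeHead(200, { "Content-Type": "text/plain" });
    return res.end("ok");
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ hello: "world" }));
});

server.listen(3002, () => console.log("listening on 3002"));
```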
I am trying to use Google Cloud Platform for the first time and seem a bit hung up on something that I would think should be easy.
I created an instance group and am trying to create a load balancer that points both HTTP and HTTPS traffic to that instance group. When I configured the frontend for the load balancer I added both HTTP and HTTPS; however, doing so created two IP addresses, and I can only point the DNS at one of those records. I am assuming I am just missing a simple step, as I am used to working with AWS.
Any help would be much appreciated.
I think I answered my own question. I just pointed the DNS at the HTTPS IP address, and both HTTP and HTTPS requests ended up working.