According to the CloudFront documentation
(https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html), the client IP can appear at the front, middle, or end of the X-Forwarded-For header.
Is that right? If so, how can I get the real client IP?
Is it right?
Not exactly.
CloudFront follows correct semantics for X-Forwarded-For. Specifically, each system handling the request appends its client's address to the right. This means the rightmost address in X-Forwarded-For in the request from CloudFront is always the address of the machine that connected to CloudFront.
If the client (the machine establishing the connection to CloudFront) includes an X-Forwarded-For header in its request, that header may be forged, or it may be legitimate if the client is a proxy server, but you rarely have a way of knowing that... so either way, you should treat this as potentially valuable, but strictly non-authoritative.
The rightmost value (which may also be the only value) is the only value you can trust in the request you receive from CloudFront.
Generally speaking, parsing from the right, any address that is known to you and trusted to have correctly identified its upstream client can be removed from the list... but once you encounter the first untrusted address, right to left, that's the one you're looking for, since nothing to the left of that address can be trusted.
This means that if components in your stack -- such as an Application Load Balancer or your web server -- are also adding X-Forwarded-For, then you will need to account for what those components append onto the right side of the value, after what CloudFront has supplied.
For example, the client sends:
X-Forwarded-For: a, b, c
CloudFront adds the client's IP d:
X-Forwarded-For: a, b, c, d
ALB receives the request from CloudFront, so it adds the CloudFront egress address e:
X-Forwarded-For: a, b, c, d, e
Then your web server adds the internal address of the balancer f:
X-Forwarded-For: a, b, c, d, e, f
You can trust and remove f as long as it is within the CIDR range of your balancer subnets.
You can trust and remove e as long as it is in the CloudFront address ranges.
That leaves you with d as the client address.
The values a, b, and c are almost worthless in this example, because you can't trust their authenticity: they are to the left of the first (from the right) untrusted address. Occasionally they may be forensically useful later, but you can't make any real-time decisions based on them.
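The right-to-left parsing described above can be sketched as follows. The trusted-range checks are placeholders; in practice you would substitute your real balancer subnets and the published CloudFront address ranges:

```javascript
// Sketch of right-to-left X-Forwarded-For parsing. The prefixes below are
// hypothetical examples, not real CloudFront or balancer ranges.
function isTrustedProxy(ip) {
  // Example: treat 10.0.x.x (balancer subnet) and 203.0.113.x
  // (stand-in for the CloudFront ranges) as trusted.
  return ip.startsWith('10.0.') || ip.startsWith('203.0.113.');
}

function realClientIp(xForwardedFor) {
  const hops = xForwardedFor.split(',').map((s) => s.trim());
  // Walk from the right, discarding addresses we trust to have
  // appended their upstream client correctly.
  while (hops.length > 1 && isTrustedProxy(hops[hops.length - 1])) {
    hops.pop();
  }
  // The first untrusted address from the right is the client;
  // everything to its left is unverifiable.
  return hops[hops.length - 1];
}
```

With the worked example above, `realClientIp('a, b, c, 198.51.100.7, 203.0.113.9, 10.0.1.5')` strips f and e and returns the d position, `198.51.100.7`.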
This is how X-Forwarded-For always works. Many developers seem to make naïve assumptions due to lack of understanding. Be certain you understand it before you use it for anything important.
In Lambda@Edge triggers, CloudFront gives you the client IP address in event.Records[0].cf.request.clientIp. This is always a single address, and it is the same as the rightmost value of X-Forwarded-For as the request leaves CloudFront headed to your origin (which may, as noted above, add additional values onto the right side).
My suggestion would be to use the CloudFront-provided headers: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-cloudfront-headers.html
First, go to CloudFront -> select your distribution -> Behaviors, and do the following under 'Cache key and origin requests':
Select 'CachingDisabled' in the 'Cache policy' dropdown if you don't want anything to be cached. I personally ran into problems in my app when I didn't select this option.
For 'Origin request policy', create a new policy, e.g. 'Origin-Policy-For-Cloudfront', and select the 'CloudFront-Viewer-Address' header (check out the other options as well).
Save the policy and attach it to the CloudFront behavior.
Now, open conf.d/node.conf or nginx.conf -- wherever you have written your 'server -> location' block -- and simply add the following:
server {
    listen 80;
    server_name my-server CLOUDFRONT_URL;

    location / {
        # nginx exposes the CloudFront-Viewer-Address request header as
        # $http_cloudfront_viewer_address (lowercased, dashes to underscores)
        proxy_set_header X-Client-IP $http_cloudfront_viewer_address;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;  # example port; use your app's port
        proxy_set_header Upgrade $http_upgrade;
        # $connection_upgrade requires a 'map $http_upgrade $connection_upgrade'
        # block in the http context
        proxy_set_header Connection $connection_upgrade;
    }
}
On the Node.js backend, you can fetch the client IP from the request as follows. Note that CloudFront-Viewer-Address is in 'IP:port' form, so strip the port:
exports.get = (req, res, next) => {
    // CloudFront-Viewer-Address looks like '198.51.100.7:46532'
    const viewerAddress = req.headers['x-client-ip'] || '';
    console.log('Client IP:', viewerAddress.substring(0, viewerAddress.lastIndexOf(':')));
};
This is an easier way to get the client IP than messing around with the CloudFront CIDR ranges and all.
With CloudFront Functions, you can now do this without using Lambda@Edge:
function handler(event) {
    var request = event.request;
    var clientIP = event.viewer.ip;
    // Add the true-client-ip header to the incoming request
    request.headers['true-client-ip'] = { value: clientIP };
    return request;
}
Check out the guide in the AWS docs.
The approach we could follow is to use Lambda@Edge and copy the rightmost IP into a different proprietary header (let's say My-X-Forwarded-For), then copy this header over X-Forwarded-For in the layer just before the app server.
Let's say the traffic flows this way:
Client => CloudFront => ALB => WebServer => AppServer
The Lambda@Edge trigger injects a new header, My-X-Forwarded-For, with the rightmost IP from X-Forwarded-For.
The WebServer has a header rule that overrides the X-Forwarded-For header with the value from My-X-Forwarded-For.
This works transparently to the application layer: infrastructure issues are sorted in the infrastructure, with no application changes needed when the CloudFront layer is introduced.
I have an AppEngine Node.js application running in a standard environment, and I'm having some trouble with cron verification. The docs say that you can verify that the IP address comes from 0.1.0.2. In the request logs I can see that the request IP is 0.1.0.2; however, in my fastify request object, request.ip is 127.0.0.1. Anyone know what could be happening here?
I was thinking that maybe there's some sidecar like nginx accepting the requests, but in that case I would expect x-forwarded-for to be defined, and it's not.
As per the documentation, the X-Forwarded-For value is the list of IP addresses through which the client request has been routed. You can see that the first IP (0.1.0.1), as expected, is the IP of the client that created the request. The subsequent IPs are the proxy servers that handled the request before it reached the application server (e.g. X-Forwarded-For: clientIp, proxy1Ip, proxy2Ip). Therefore, in this case the VM is seeing the remote IP of a Google Cloud internal load balancer address, but we do use X-Forwarded-For for things like this.
One quick solution is to only check for the X-Appengine-Cron header rather than also checking the IP address. The X-Appengine-Cron header is set internally by Google App Engine. If your request handler finds this header it can trust that the request is a cron request.
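A quick, framework-agnostic sketch of that check (the handler shape is an assumption; adapt it to your fastify route):

```javascript
// Trust a request as a cron request only if App Engine's internally-set
// X-Appengine-Cron header is present. GAE strips this header from external
// requests, so it cannot be forged from outside.
function isCronRequest(headers) {
  return headers['x-appengine-cron'] === 'true';
}
```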
For more information, you can refer to these Stack Overflow answers (Link1 and Link2), which may help you.
I have a site that sometimes receives many unusual requests, and I want to block the requester. The session ID is different for all requests, but the x-forwarded-for header is always the same.
Now, I traced the x-forwarded-for IP with a geo-locator, and it shows it is from a known ISP (Cox), so I can't just block all incoming requests from that ISP. The remoteAddress shows the address of MY load balancer.
Any idea what I can do to block this person without blocking all the ISP customers?
I need to do a basic flooding control, nothing very sophisticated. I want to get source IP and delay the answer if they are requesting too many times in a short period.
I saw that there is a req.ip field but also a package: https://www.npmjs.com/package/request-ip
What's the difference?
I suggest you use the request-ip module, because it looks for specific headers in the request and falls back to sensible defaults if they don't exist.
The following is the order it uses to determine the user ip from the request.
X-Client-IP
X-Forwarded-For header may return multiple IP addresses in the format "client IP, proxy 1 IP, proxy 2 IP", so we take the first one.
CF-Connecting-IP (Cloudflare)
Fastly-Client-IP (Fastly CDN, and the Firebase hosting header when forwarded to a cloud function)
True-Client-IP (Akamai and Cloudflare)
X-Real-IP (nginx proxy/FastCGI)
X-Cluster-Client-IP (Rackspace LB, Riverbed Stingray)
X-Forwarded, Forwarded-For and Forwarded (Variations of #2)
appengine-user-ip (Google App Engine)
req.connection.remoteAddress
req.socket.remoteAddress
req.connection.socket.remoteAddress
req.info.remoteAddress
Cf-Pseudo-IPv4 (Cloudflare fallback)
request.raw (Fastify)
It lets you get the real client IP regardless of your web server configuration or proxy settings, or even the connection technology (HTTP, WebSocket...).
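The fallback order above can be sketched as follows. This is a simplified illustration of the idea, not the module's actual code, and the priority list is abbreviated:

```javascript
// Simplified illustration of a header-priority lookup like request-ip's.
const HEADER_PRIORITY = [
  'x-client-ip',
  'x-forwarded-for',
  'cf-connecting-ip',
  'true-client-ip',
  'x-real-ip',
];

function lookupClientIp(req) {
  for (const name of HEADER_PRIORITY) {
    const value = req.headers[name];
    if (value) {
      // X-Forwarded-For may hold "client, proxy1, proxy2"; take the first
      return name === 'x-forwarded-for' ? value.split(',')[0].trim() : value;
    }
  }
  // Fall back to the socket's remote address
  return req.socket && req.socket.remoteAddress;
}
```

Note that, as discussed in the first answer, taking the first (leftmost) X-Forwarded-For value trusts a client-forgeable header, so only rely on it behind a sanitizing proxy.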
You can also take a look at the Express req.ips property (yes, ips, not req.ip) to get more information about the request:
req.ips (http://expressjs.com/en/api.html)
When the trust proxy setting does not evaluate to false, this property contains an array of IP addresses specified in the X-Forwarded-For request header. Otherwise, it contains an empty array. This header can be set by the client or by the proxy.
For example, if X-Forwarded-For is client, proxy1, proxy2, req.ips would be ["client", "proxy1", "proxy2"], where proxy2 is the furthest downstream.
I want to use nginx as a load balancer in front of several node.js application nodes.
round-robin and ip_hash methods are unbelievably easy to implement but in my use case, they're not the best fit.
I need nginx to route clients to backend nodes according to their session IDs, which are assigned by the first node they land on.
During my googling, I came across the "hash" method, but I couldn't find many resources about it.
Here is what I tried:
my_site.conf:
http {
    upstream my_servers {
        hash $remote_addr$http_session_id consistent;
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

    server {
        listen 1234;
        server_name example.com;

        location / {
            proxy_pass http://my_servers;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
And at the application, I return Session-ID header with the session id.
res.setHeader('Session-ID', req.sessionID);
I'm missing something, but what?
$http_session_id refers to the Session-ID header sent by the client (browser), not your application's response header. What you need is http://nginx.org/r/sticky, but it's available in the commercial subscription only.
There is a third-party module that does the same as the commercial one, but you'll have to recompile nginx.
It doesn't work out of the box because nginx is a (good) web server, but not a real load balancer.
Prefer HAProxy for load balancing.
Furthermore, what you need is not hashing. You need persistence on a session-ID header, and you need to be able to persist on the source IP until you get that header.
This is pretty straightforward with HAProxy. HAProxy can also be used to check whether the session ID was generated by the server or forged by the client.
backend myapp
    # create a stick table in memory
    # (note: use peers to synchronize the content of the table)
    stick-table type string len 32 expire 1h size 1m
    # match the client's http-session-id in the table to do the persistence
    stick match hdr(http-session-id)
    # if not found, then use the source IP address
    stick on src,lower   # dirty trick to turn the IP address into a string
    # learn the http-session-id that has been generated by the server
    stick store-response hdr(http-session-id)
    # add a header if the http-session-id seems to be forged
    # (present but not found in the table)
    # (note: only available in 1.6-dev)
    acl has-session-id req.hdr(http-session-id) -m found
    acl known-session-id req.hdr(http-session-id),in_table(myapp)
    http-request set-header X-warning unknown\ session-id if has-session-id !known-session-id
Then you are fully secured :)
Baptiste
I have a couple of small production sites and a bunch of fun hobbyist/experimental apps and such. I'd like to run all of them on one EC2 instance.
Can I install node.js, npm, express and couchdb once, and then run each app on a different port, and adjust the dns settings at my domain registry to point to the appropriate place?
Update: Thanks Mike! For anyone else who's looking for multiple IP addresses on EC2: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
There are multiple ways to go about it.
Different Ports
You could run each Node.js process on a different port and simply open the ports to the world. However, your URLs would need a port on the end of the hostname for each project. yoururl.com:8080/ would technically work, but it's probably not what you're looking for.
Multiple IP Addresses
You could use multiple IP addresses on one EC2 instance; however, they come at an additional cost of about $3.65 a month each. So if you have 10 different domains you want to host on one instance, that's over $30 a month extra in hosting fees.
On the flip side, any domain using SSL needs its own IP address.
Also, there are limits to the number of IP addresses you can assign to an instance, and the smaller the instance, the fewer IP addresses you get.
The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces) whereas High-Memory Quadruple Extra Large and Cluster Computer Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.
Express Vhosts
Express comes with virtual-host functionality. You can run multiple domains under one Node.js/Express server and set up routes based on domain name. The vhost middleware under Express enables this.
Reverse Proxy
You can set up Nginx in front of multiple application servers. This has the most flexibility. You can have one Node.js process per domain, which allows you to do updates and restarts on one domain at a time. It also allows you to host application servers such as Apache/PHP on the same EC2 instance alongside your Node.js processes.
With Nginx acting as a reverse proxy you could also host different application servers under the same domain, but serving different paths.
For instance, Node.js could serve the main root path of a domain, but you could set up the /blog/ path to go to an Apache/PHP/WordPress setup on the same EC2 instance.
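A sketch of that path split in nginx (the domain, ports, and paths are example values):

```nginx
server {
    listen 80;
    server_name example.com;

    # Main site served by the Node.js process
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:3000;
    }

    # /blog/ served by Apache/PHP/WordPress listening on another local port
    location /blog/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;
    }
}
```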
Already answered at https://stackoverflow.com/a/50528580/3477353, but mentioning it again.
Adding to @Daniel's answer, I would like to share my nginx configuration for those who are looking for the exact syntax to implement it.
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;
    }
}

server {
    listen 80;
    server_name api.mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:4500;
    }
}
Just create two server blocks, each with a unique server name and port.
Mind the proxy_pass in each block.
Thank you.