I have an App Engine Node.js application running in the standard environment, and I'm having some trouble with cron verification. The docs say that you can verify a cron request by checking that its IP address is 0.1.0.2. In the request logs I can see that the request IP is 0.1.0.2; however, in my Fastify request object, request.ip is 127.0.0.1. Does anyone know what could be happening here?
I was thinking that maybe there's some sidecar like nginx accepting the requests, but in that case I would expect x-forwarded-for to be defined, and it's not.
As per the documentation, the X-Forwarded-For value is the list of IP addresses through which the client request has been routed. The first IP (0.1.0.2 in your case), as expected, is the IP of the client that created the request, and the subsequent IPs are the proxy servers that handled the request before it reached the application server (e.g. X-Forwarded-For: clientIp, proxy1Ip, proxy2Ip). Therefore, in this case the VM sees as the remote IP the address of a Google Cloud internal load balancer, which is exactly why X-Forwarded-For is used for things like this.
One quick solution is to check only for the X-Appengine-Cron header rather than also checking the IP address. The X-Appengine-Cron header is set internally by Google App Engine; if your request handler finds this header, it can trust that the request is a cron request.
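As a minimal sketch of that check in Fastify (the route path, response bodies, and port handling here are placeholders, not from the original post; App Engine strips X-Appengine-Cron from external requests, so only its own Cron Service can set it):

```javascript
// Hypothetical Fastify cron handler (Fastify v4 listen signature).
const fastify = require('fastify')();

fastify.get('/tasks/cleanup', async (request, reply) => {
  // Node lowercases incoming header names; App Engine sends "X-Appengine-Cron: true".
  if (request.headers['x-appengine-cron'] !== 'true') {
    return reply.code(403).send({ error: 'not a cron request' });
  }
  // ...do the scheduled work here...
  return { ok: true };
});

fastify.listen({ port: Number(process.env.PORT) || 8080, host: '0.0.0.0' });
```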
For more information, you can refer to these Stack Overflow posts (Link1 and Link2), which may help you.
Related
I have an express server that uses nginx and monitors the X-Forwarded-For header.
The node server has the following lines of code:
app.set('trust proxy', '127.0.0.1');
app.use(morgan(':remote-addr')); // and other info too
Normally, when users make requests, regardless of the client (mobile app, scripts, etc.), the IP displayed is the remote one.
Recently, I observed that someone tried to hack into my server using python-requests/2.22.0, and the remote IP logged was not their IP address; it was 192.X.X.X. I tried to reproduce this myself by accessing the server from itself, but the remote address displayed was the global server IP address.
Can you better explain to me how this works and if this is something I should be worried about?
They never accessed your server through nginx; check the logs. They sent a forged header containing a local address directly to the IP:port hosting your server. This can be damaging if your security policies are not set correctly: it can leak site IPs and potentially give an attacker a free path into your server, with no response back and no limits.
To get scarier, an attacker could initiate a BGP hijack and take over the relay points sending users to your server endpoints; that is one to look up on YouTube or Google.
To finish off, know that most hosting companies offer private networking and provide something of a firewall, but most users assume this is secure when it actually is not! These private networks connect you to the hundreds or thousands of servers in a rack or zone. So if an attacker bought a server next to yours (likely running a bot), they could scan the private network for some fun-time, which is against the TOS, but the hosts don't check for or secure this well enough.
In your case, it sounds like the server is responding to the entire internet and bots are having a go at it. Try setting your Node.js server up to listen on localhost only, on port 443 or whatever, and host it through nginx. That way, any time someone enters your IP or domain name, nginx forwards the request to the local resource, and someone couldn't just use the IP plus the Node.js port and play games. If you do this, a user may still send the header with a fake IP, but it won't result in an IP leak or anything bad, unless that IP has superpowers on your site, and no filter on your site should give 192.168.x.x admin mode. You can feel confident.
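A rough sketch of that setup on the Node side (the port, trust-proxy value, and log format are illustrative, not taken from the question):

```javascript
const express = require('express');
const morgan = require('morgan');

const app = express();

// Only trust X-Forwarded-For entries added by the local nginx instance.
app.set('trust proxy', 'loopback');
app.use(morgan(':remote-addr'));

// Bind to 127.0.0.1 so the app cannot be reached directly from the internet;
// nginx listens on 80/443 and proxies to this address, setting X-Forwarded-For itself.
app.listen(3000, '127.0.0.1', () => {
  console.log('listening on 127.0.0.1:3000, reachable only through nginx');
});
```

With 'trust proxy' limited to loopback, express honours only the X-Forwarded-For entry appended by nginx itself, so a value forged by the client can no longer show up as the remote address in morgan's logs.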
So, I have an express API that checks a list of blocked IPs; if the request IP matches one of the IPs on the blocked list, I want to send a message to the end user.
Now I'm implementing tests, and I have been using supertest to make calls to my API endpoints.
Let's say I want to make two calls to the endpoint "/user":
one call with IP 127.0.0.1 (default)
one call with IP 127.0.1.1 (API should block this IP)
I found some old Stack Overflow questions and did everything as shown in the answers, but none of the answers changed the supertest IP, so my API didn't return the message saying the IP was blocked.
So, is there a way to automatically test IP bans?
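One approach that is often suggested (a sketch only, assuming the API derives the client IP from X-Forwarded-For with express's trust proxy enabled; the app path, the /user route, the 403 status, and the Jest-style assertions are placeholders) is to spoof the IP through the X-Forwarded-For header, which supertest lets you set per request:

```javascript
const request = require('supertest');
const app = require('../app'); // hypothetical: your express app, with app.set('trust proxy', true)

describe('GET /user', () => {
  it('serves a normal client', async () => {
    const res = await request(app)
      .get('/user')
      .set('X-Forwarded-For', '127.0.0.1');
    expect(res.status).toBe(200);
  });

  it('rejects a blocked client', async () => {
    // With trust proxy enabled, express derives req.ip from this header,
    // so the blocklist middleware sees 127.0.1.1 instead of the loopback address.
    const res = await request(app)
      .get('/user')
      .set('X-Forwarded-For', '127.0.1.1');
    expect(res.status).toBe(403);
  });
});
```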
My pdns_recursor setup includes this:
forward-zones=net=127.0.0.1:5353;8.8.8.8
where my own DNS server, which acts as a filter on all DNS requests under the .net zone, listens at 127.0.0.1:5353. When my DNS server thinks a request should be blocked, it returns the IP of a blocking page to pdns_recursor. If not, it returns NXDOMAIN to pdns_recursor.
My understanding of pdns_recursor is that it will continue to forward the DNS request to 8.8.8.8 in case it receives NXDOMAIN from my own DNS server. This way, unblocked requests would reach their destinations via Google DNS. However, the client always sees either the blocking page or the NXDOMAIN answer from pdns_recursor!
What am I missing here?
Thanks a lot!
NXDOMAIN is a perfectly good answer to a DNS query, and there is no reason for PowerDNS to try another server when it has already received an answer. In fact, RFC 1034 says that a recursor should keep asking servers until it receives "a response". Assuming that PowerDNS follows the RFCs, any response from your filter thingy will be passed on to the user. So if you want the query passed on to the next server in the list, your filter thingy must not answer at all, in which case all your users will instead have to wait for a timeout on every non-blocked query before it gets passed on to Google, which will likely annoy them a lot.
I need to do basic flood control, nothing very sophisticated. I want to get the source IP and delay the answer if it is making too many requests in a short period.
I saw that there is a req.ip field but also a package: https://www.npmjs.com/package/request-ip
What's the difference?
I suggest you use the request-ip module, because it looks for specific headers in the request and falls back to some defaults if they do not exist.
The following is the order it uses to determine the user IP from the request:
X-Client-IP
X-Forwarded-For header may return multiple IP addresses in the format: "client IP, proxy 1 IP, proxy 2 IP", so we take the first one.
CF-Connecting-IP (Cloudflare)
Fastly-Client-IP (Fastly CDN and Firebase hosting header when forwarded to a cloud function)
True-Client-IP (Akamai and Cloudflare)
X-Real-IP (nginx proxy/FastCGI)
X-Cluster-Client-IP (Rackspace LB, Riverbed Stingray)
X-Forwarded, Forwarded-For and Forwarded (Variations of #2)
appengine-user-ip (Google App Engine)
req.connection.remoteAddress
req.socket.remoteAddress
req.connection.socket.remoteAddress
req.info.remoteAddress
Cf-Pseudo-IPv4 (Cloudflare fallback)
request.raw (Fastify)
It lets you get the real client IP regardless of your web server configuration or proxy settings, or even of the connection technology (HTTP, WebSocket, ...).
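For the flood-control use case in the question, a minimal sketch (the threshold, window, and delay are arbitrary; requestIp.getClientIp is the module's documented helper):

```javascript
const express = require('express');
const requestIp = require('request-ip');

const app = express();
const hits = new Map(); // ip -> number of requests in the current window

app.use((req, res, next) => {
  // Resolves the client IP from the headers listed above, falling back to the socket address.
  const ip = requestIp.getClientIp(req);
  const count = (hits.get(ip) || 0) + 1;
  hits.set(ip, count);
  if (count > 100) {
    // Crude throttle: delay the response instead of rejecting outright.
    return setTimeout(next, 1000);
  }
  next();
});

setInterval(() => hits.clear(), 60 * 1000); // reset counters every minute

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);
```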
You can also take a look at the express req.ips property (yes, ips, not req.ip) to get more information about the request:
req.ips (http://expressjs.com/en/api.html)
When the trust proxy setting does not evaluate to false, this property contains an array of IP addresses specified in the X-Forwarded-For request header. Otherwise, it contains an empty array. This header can be set by the client or by the proxy.
For example, if X-Forwarded-For is client, proxy1, proxy2, req.ips would be ["client", "proxy1", "proxy2"], where proxy2 is the furthest downstream.
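A small illustration of that behaviour (the blanket `trust proxy` value and the route are just for the demo; in production you would list the proxies you actually trust):

```javascript
const express = require('express');
const app = express();

// Tell express which proxies to trust so it parses X-Forwarded-For.
app.set('trust proxy', true);

app.get('/whoami', (req, res) => {
  // With X-Forwarded-For: client, proxy1, proxy2
  // req.ips === ['client', 'proxy1', 'proxy2'] and req.ip === 'client'.
  res.json({ ip: req.ip, ips: req.ips });
});

app.listen(3000);
```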
I'm running a Node.js web server on Azure using the express library for HTTP handling. We've been attempting to enable Cloudflare protection on the domains pointing to this box, but when we turn Cloudflare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to respond to the connection with an HTTP response in time, but I'm having a hard time figuring out why it is:
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn cloudflare proxying off.
I've been attempting to confirm the TCP connection using
tcpdump -i eth0 port 443 | grep cloudflare (the requests come over HTTPS), and I have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For further reference, these requests should be, and are, quite quick when they succeed, so I'm having a hard time believing the issue is due to a long-running process stalling the response.
I do not believe we have any sort of IP based throttling or firewall (at least not intentionally?)
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use Cloudflare you need to switch DNS resolution to Cloudflare's DNS servers. Please see more information on configuring a domain name: https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/.
You can also refer to the Cloudflare FAQ article "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
Try clearing your cookies.
I had a similar issue when I changed Cloudflare settings over to a new host, but the Cloudflare cookies for the domain were doing something funky to the request (I am guessing it might have been trying to contact the old host?).