Is it possible, and if so how, to enable advanced request filtering on a Cloudflare site? - firewall

Original question:
Can we filter and block requests in Cloudflare that are incorrect or non-standard, or that appear unusually often from the same IP address, even if they are correct?
I am particularly interested in filtering requests with regular expressions and in the ability to block the IPs behind bad requests, or at least to automatically respond to them with the 400 Bad Request response code, as is possible with mod_security in Apache.
The second important thing is the ability to filter out extremely frequent requests coming from the same IP address (even if they are correct). For example, I would like to be able to block, at the Cloudflare level, IP addresses that have made more than 1000 requests per minute.
Does Cloudflare offer this capability?
If so, what conditions do I have to meet and where can I do it in the Cloudflare panel? If possible, please give me precise guidelines.
All this matters in the context of defense against DDoS attacks.
An update to explain the context in which I ask this question:
I am a programmer who needs to implement a solution to protect against DDoS attacks. So far, my program, written in Python, has used the Apache access_log in combination with Apache mod_security and other services such as the firewall.
Now I have the opportunity to use Cloudflare and that's why I'm asking.
Maybe, thanks to Cloudflare, I will be able to retire my program because it becomes unnecessary: Cloudflare could do exactly what my solution does now, but at an earlier stage (the request would never reach the web server). Or maybe, depending on the answer, I will have to stay with the old approach; or perhaps I can slightly improve the program and reduce web server resource consumption by eliminating mod_security or in some other way.
I am asking for help and advice.
Thank you in advance!

filtering requests with regular expressions
Use Firewall Rules.
ability to filter out extremely frequent requests that appear from the same IP address
Use Rate-Limiting.
please give me precise guidelines.
Contact Cloudflare Support for specific configuration to meet your requirements.
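To make the mapping concrete, here is a hedged sketch of how both pieces could be set up programmatically with Python and the requests library. The zone ID, API token, expression and thresholds are placeholders, and the endpoint paths and payload fields are assumptions based on Cloudflare's older Firewall Rules and Rate Limiting APIs (these features have since been folded into the WAF/Rulesets products), so verify everything against the current API documentation before relying on it.

    import requests

    API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your_zone_id"                       # placeholder
    HEADERS = {
        "Authorization": "Bearer YOUR_API_TOKEN",  # placeholder
        "Content-Type": "application/json",
    }

    # 1. Block requests whose path matches a regular expression.
    #    (Regex matching via the "matches" operator has historically required
    #    a Business or Enterprise plan.)
    firewall_rule = [{
        "filter": {"expression": r'http.request.uri.path matches "\.(bak|env|git)$"'},
        "action": "block",
        "description": "Block obviously bogus / probing paths",
    }]
    requests.post(f"{API}/zones/{ZONE_ID}/firewall/rules",
                  headers=HEADERS, json=firewall_rule, timeout=10)

    # 2. Rate limit: ban for 10 minutes any IP exceeding 1000 requests per minute.
    rate_limit = {
        "match": {"request": {"url": "*example.com/*"}},
        "threshold": 1000,
        "period": 60,
        "action": {"mode": "ban", "timeout": 600},
    }
    requests.post(f"{API}/zones/{ZONE_ID}/rate_limits",
                  headers=HEADERS, json=rate_limit, timeout=10)

Note that, as far as I know, the block action returns Cloudflare's own error page (a 403) rather than a bare 400, so treat it as the closest equivalent of mod_security's deny-and-respond behaviour, and check your plan's limits in the dashboard before committing to this approach.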

Related

How to block users accessing site outside of UK?

Searched the web and unable to find a solution. I have an Umbraco site hosted with IIS on a Windows server. Any ideas on an approach to block users accessing the site from outside the UK? An .htaccess approach would be too slow.... thank you in advance!
That's quite hard to do accurately, as you could have someone based in the UK using a European network provider, which means that they might appear to come from say Holland instead of the UK. It's also possible for people to spoof their location fairly easily if they really want to get at your site.
As Lex Li mentions there are plenty of commercial databases and tools for looking up a user's location, but the accuracy of these varies considerably, not to mention the fact that some of them only support IPv4. Any of these options are going to be slow though, as you'll have to check on every request. You also have to make sure you keep the databases up to date.
Another option would be to proxy your site through something like CloudFront or CloudFlare which both support blocking traffic by country.
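If you do go down the lookup-database route mentioned above, here is a minimal sketch using MaxMind's free GeoLite2 country database with the geoip2 Python package. The database path, the test IP and the block-unknown-addresses policy are all placeholders, and the accuracy caveats from the answer above still apply.

    import geoip2.database
    import geoip2.errors

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # path is a placeholder

    def is_uk(ip):
        try:
            return reader.country(ip).country.iso_code == "GB"
        except geoip2.errors.AddressNotFoundError:
            return False  # unknown addresses: pick your own policy

    # e.g. early in a request handler:
    if not is_uk("203.0.113.7"):
        print("block this request with a 403")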

How to protect a website from DoS attacks

What are the best methods for protecting a site from DoS attacks? Any idea how popular sites/services handle this issue?
What are the tools/services at the application, operating system, networking, and hosting levels?
It would be nice if someone could share their real experience dealing with this.
Thanks
Are you sure you mean DoS and not injection attacks? There's not much you can do on the web programming end to prevent a DoS, as it's more about tying up connections and blocking them at the network level than at the application layer (web programming).
As for how most companies prevent them: a lot of companies use load balancing and server farms to absorb the incoming bandwidth. Also, a lot of smart routers monitor activity from IPs and IP ranges to make sure there aren't too many requests coming in (and if so, block them before they hit the server).
The biggest intentional DoS I can think of is woot.com during a woot-off, though. I suggest trying Wikipedia ( http://en.wikipedia.org/wiki/Denial-of-service_attack#Prevention_and_response ) and seeing what they have to say about prevention methods.
I've never had to deal with this yet, but a common method involves writing a small piece of code to track IP addresses that are making a large number of requests in a short amount of time and denying them before any real processing happens.
Many hosting services provide this along with hosting; check with yours to see if they do.
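To illustrate the "track IPs and deny before processing" idea, here is a minimal in-process sketch in Python using a sliding window per IP. The window size and threshold are arbitrary examples, and a real deployment behind multiple workers would need shared state (e.g. Redis) rather than an in-memory dict.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 1000

    _hits = defaultdict(deque)

    def allow(ip):
        now = time.monotonic()
        q = _hits[ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            return False          # deny before doing any real work
        q.append(now)
        return True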
I implemented this once in the application layer. We recorded all requests served to our server farms through a service which each machine in the farm could send request information to. We then processed these requests, aggregated them by IP address, and automatically flagged any IP address exceeding a threshold of a certain number of requests per time interval. Any request coming from a flagged IP got a standard Captcha response; if they failed too many times, they were banned forever (dangerous if you get a DoS from behind a proxy). If they proved they were human, the statistics related to their IP were "zeroed".
Well, this is an old one, but people looking to do this might want to look at fail2ban.
http://go2linux.garron.me/linux/2011/05/fail2ban-protect-web-server-http-dos-attack-1084.html
That's more of a serverfault sort of answer, as opposed to building this into your application, but I think it's the sort of problem which is most likely better tackled that way. If the logic for what you want to block is complex, consider having your application just log enough info to base the banning policy action on, rather than trying to put the policy into effect.
Consider also that depending on the web server you use, you might be vulnerable to things like a slow loris attack, and there's nothing you can do about that at a web application level.
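For reference, the fail2ban approach from the link above boils down to a log filter plus a jail, roughly like the following. The jail name, log path, thresholds and ban action are illustrative values in the spirit of the linked article, not a copy of it; tune them for your own traffic.

    # /etc/fail2ban/filter.d/http-get-dos.conf -- filter name is illustrative
    [Definition]
    failregex = ^<HOST> -.*"(GET|POST).*
    ignoreregex =

    # /etc/fail2ban/jail.local -- thresholds and log path are illustrative
    [http-get-dos]
    enabled  = true
    port     = http,https
    filter   = http-get-dos
    logpath  = /var/log/apache2/access.log
    maxretry = 300
    findtime = 300
    bantime  = 600
    action   = iptables[name=HTTP, port=http, protocol=tcp]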

Firewalls preventing product activation

I'm looking to implement a basic product activation scheme such that when the program is launched it will contact our server via HTTP to complete the activation. I'm wondering if it is a big problem (especially with bigger companies or educational organizations) that firewalls will block the outgoing HTTP request and prevent activation. Any idea how big an issue this may be?
In my experience, when HTTP traffic is blocked by a hardware firewall, there is more often than not a proxy server that is used to browse the internet. Therefore it is good practice to allow the user to enter proxy and authentication details.
The number of times I have seen applications fail because they did not use the corporate proxy server, and were therefore blocked by the firewall, astonishes me.
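As a rough illustration of honouring a user-supplied proxy, here is a hedged Python sketch using the requests library; the activation URL and payload fields are made-up placeholders, not a real activation API.

    import requests

    def activate(license_key, proxy_url=None):
        # proxy_url e.g. "http://user:password@proxy.example.corp:8080"
        proxies = {"http": proxy_url, "https": proxy_url} if proxy_url else None
        resp = requests.post(
            "https://activation.example.com/api/activate",  # placeholder endpoint
            json={"key": license_key},
            proxies=proxies,
            timeout=15,
        )
        return resp.ok

    # activate("ABCD-1234", proxy_url="http://user:pass@proxy.example.corp:8080")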
There are personal software solutions that purposely block outgoing connections. Check out Little Snitch. This program can set up rules that explicitly block your computer from making connections to certain domains, IPs and/or ports. A common use for this program is to stop one's computer from "phoning home" to an activation server.
I can't tell you how prevalent this will be, sorry. But I can give you one data point.
In this company, Internet access is granted on an as-needed basis. There is one product I have had to support which is wonderful for its purpose and reasonably priced, but I will never approve its purchase again - the licensing is too much of a hassle to be worth it.
I'd say that it may not be common, but if any one of your customers is a business, it's likely that you will encounter someone who tries to run your software behind a restricted internet connection or a proxy. Your software will need to handle this situation, otherwise you will have a pissed-off customer who cannot use your product, and you will lose the sale for sure.
If you are looking for a third-party tool, I've used InstallKey (www.lomacons.com) for product activations. It has functionality that allows for validating with and without an internet connection.

Tux, Varnish or Squid? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
We need a web content accelerator for static images to sit in front of our Apache web front-end servers.
Our previous hosting partner used Tux with great success, and I like the fact that it's part of Red Hat Linux, which we're using, but its last update was in 2006 and there seems little chance of future development. Our ISP recommends we use Squid in a reverse caching proxy role.
Any thoughts between Tux and Squid? Compatibility, reliability and future support are as important to us as performance.
Also, I read in other threads here about Varnish; anyone have any real-world experience of Varnish compared with Squid, and/or Tux, gained in high-traffic environments?
Cheers
Ian
UPDATE: We're testing Squid now. Using ab to pull the same image 10,000 times with a concurrency of 100, both Apache on its own and Squid/Apache burned through the requests very quickly. But Squid made only a single request to Apache for the image then served them all from RAM, whereas Apache alone had to fork a large number of workers in order to serve the images. It looks like Squid will work well in freeing up the Apache workers to handle dynamic pages.
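(For reference, that test amounts to something like the following ab invocation; the URL is a placeholder for the image being benchmarked.)

    ab -n 10000 -c 100 http://cache-host.example.com/images/test.jpg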
In my experience Varnish is much faster than Squid, but equally importantly it's much less of a black box than Squid is. Varnish gives you access to very detailed logs that are useful when debugging problems. Its configuration language is also much simpler and much more powerful than Squid's.
#Daniel, #MKUltra, to elaborate on Varnish's supposed problems with cookies: there aren't really any. It is completely normal not to cache a request if it returns a cookie with it. Cookies are mostly meant to be used to distinguish different user preferences, so I don't think one would want to cache these (especially if they include some secret information like a session id or a password!).
If your server sends cookies with your .js and images, that's a problem on your backend side, not on Varnish's side. As referenced by #Daniel (link provided), you can force the caching of these files anyway, thanks to the really cool language/DSL integrated in Varnish...
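As a hedged illustration of that DSL, stripping cookies from static assets so Varnish will cache them might look roughly like this. The syntax is in the style of Varnish 2.1/3.x (where vcl_fetch still exists; in 4.x and later it became vcl_backend_response), and the file extensions are just examples.

    sub vcl_recv {
        if (req.url ~ "\.(js|css|png|jpg|gif)$") {
            unset req.http.Cookie;
        }
    }

    sub vcl_fetch {
        if (req.url ~ "\.(js|css|png|jpg|gif)$") {
            unset beresp.http.Set-Cookie;
        }
    }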
If you're looking to push static images and a lot of them, you may want to look at some basics first.
Your application should ensure that all the correct headers are being passed, Cache-Control and Expires for example. That should result in clients' browsers caching those images locally and cutting down on your request count.
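For example, on the Apache front ends something along these lines would do it; this assumes mod_expires and mod_headers are enabled, and the extensions and lifetimes are examples only.

    <FilesMatch "\.(png|jpe?g|gif|css|js)$">
        ExpiresActive On
        ExpiresDefault "access plus 30 days"
        Header set Cache-Control "public, max-age=2592000"
    </FilesMatch>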
Use a CDN (if it's in your budget); this brings the images closer to your clients (generally) and will result in a better user experience for them. For the CDN to be a productive investment, you'll again need to make sure all your necessary caching headers are properly set, as per the point about headers above.
After all that, if you are still going to use a reverse proxy, I recommend using nginx in proxy mode over Varnish and Squid. Yes, Varnish is fast, and as fast as nginx, but what you're wanting to do is really quite simple; Varnish comes into its own when you want to do complex caching and ESI. So Keep It Simple, Stupid. nginx will do your job very nicely indeed.
I have no experience with Tux, so I can't comment on it, sorry.
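For completeness, a hedged sketch of the nginx-in-proxy-mode setup recommended above might look like this; ports, paths, extensions and cache sizes are illustrative only.

    proxy_cache_path /var/cache/nginx/static keys_zone=static:10m max_size=1g;

    server {
        listen 80;

        location ~* \.(png|jpe?g|gif|css|js)$ {
            proxy_pass        http://127.0.0.1:8080;   # Apache backend
            proxy_cache       static;
            proxy_cache_valid 200 301 1h;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;          # dynamic pages, uncached
        }
    }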
For what it's worth, I recently set up nginx as a reverse proxy in front of Apache on a 6-year-old low-power webserver (running Fedora Core 2) which was under a mild DDoS attack (10K req/sec). Page loads were snappy (<100ms), system load stayed low at around 20% CPU utilization, and memory consumption was minimal. The attack lasted 1 week, and visitors saw no ill effects.
Not bad for over half a million hits per minute sustained. Just be sure to log to /dev/null.
It's interesting that no one mentioned the Apache Traffic Server (formerly, Yahoo! Traffic Server) http://trafficserver.apache.org/
Please have a look at it, it works beautifully.
We use Varnish on http://www.mangahigh.com and have been able to scale from around 100 concurrent pre-varnish to over 560 concurrent post-varnish (server load remained at 0 at this point, so there's plenty of space to grow!). Documentation for varnish could be better, but it is quite flexible once you get used to it.
Varnish is meant to be a lot faster than Squid (having never used Squid, I can't say for certain) - and http://users.linpro.no/ingvar/varnish/stats-2009-05-19 shows Twitter, Wikia, Hulu, perezhilton.com and quite a number of other big names also using it.
Both Squid and nginx are specifically designed for this. nginx is particularly easy to configure for a server farm, and can also be a frontend to FastCGI.
I've only used squid and can't compare. We use squid to cache an entire site on a server in the USA (all data gets pulled from a machine in Germany). It was pretty easy to set up and works nicely. I've found the documentation to be kind of lacking unless you already know what to look for.
Since you already have Apache serving the static and dynamic content, I would recommend going with Varnish.
That way you can use Apache to deliver the static content and Varnish to cache it for you. Varnish is very flexible, giving you both caching and load-balancing features for growing your website in the best way.
We are about to roll out a Varnish 2.01 server in front of an IIS 6 installation. The only caveat we've had was with SSL (as Varnish can't handle SSL), so we've also installed nginx to handle those requests.
In all our testing we've seen a 66% increase in the amount of traffic the site can handle.
My only gripe is that varnish doesn't handle cookies well, and the documentation is still a bit scattered.
Nobody mentions that Squid follows the HTTP specification to the letter (or at least they try to) whereas Varnish does not. In my opinion, this means Varnish is better suited for caching content for individual sites (by extensively tuning Varnish) and Squid is better for caching content for many sites (each of which will have to make their content "cachable" according to spec).

Do you require deep packet inspection on a server-only firewall? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I have a server behind a firewall. It runs a web application (Java servlets under Apache Tomcat) and responds only on port 443 (HTTPS). There is no scripting code in the pages served - the forms use HTTP POST to submit data, the server processes it (with appropriate input filtering), and then outputs an HTTP result page.
I am currently using an appliance firewall, but it is 'hardware-flakey'. I have been looking at upgrading to a more 'industrial strength' solution, but the vendor is quite insistent that I purchase a subscription to their "deep packet inspection" software. He claims that even web servers need this kind of protection.
I am not convinced, but do not have the security background to be certain. The firewall would sit between the "world" and my server, and use "port forwarding" to allow only ports 443 and 22 (for maintenance) to reach the server.
So - do I really need this deep packet inspection, or not?
Given that the only protocols that you're interested in (ssh and https) are "negotiate encryption on connect" there's little that a standard firewall will be able to inspect after that point. Once the SSL/SSH session is established the firewall will only see encrypted packets. Ask your vendor what their product examines in this context.
Alternatively, it is possible that the device acts more like a proxy -- that it acts as the server-side end-point for the connection before relaying on to your real server -- in which case it is possible that the product does something deeper, although this isn't the case if the firewall really is "port forwarding" as you say. Again, your vendor should be able to explain how their device operates.
Also, you may want to ask what vulnerabilities/risks the inspection system is intended to protect against. For example: does it look out for SQL injection? Is it targeted at a particular platform? (If your web server runs on a SPARC CPU, for example, then there's little point inspecting URLs for x86 shellcode.)
As a network security professional, this sounds like overkill to me.
Martin Carpenter's answer is 100% on target. Anytime you're considering security, you need to understand
What you're securing,
What you're securing it against,
The likelihood of an attack, and
Your risk if an attack succeeds.
For your application, which allows only encrypted, authenticated communication on only 2 ports, I can see only a few vulnerabilities:
Denial-of-service (DoS) is always a threat unless your firewall blocks those attacks.
You might have other applications listening on other ports, but you can detect them with any simple port-scanning program (see the sketch after this list).
You may want to restrict outbound communication to prevent a user or rogue application from initiating communication to an unauthorized server.
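Here is the sketch referred to above: a minimal TCP connect scan in Python, enough to spot unexpected listeners on the server. The host address is a placeholder, and dedicated tools like nmap do this job far better.

    import socket

    HOST = "203.0.113.10"   # your server's address (placeholder)

    def open_ports(host, ports):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                    found.append(port)
        return found

    print(open_ports(HOST, range(1, 1025)))   # ideally only 22 and 443 show up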
I also agree that it's a good idea to ask the vendor what "deep packet inspection" means to him and why your particular situation requires it. Unless you get a specific, knowledgeable answer, in layman's terms, that makes sense to you, I'd go elsewhere. There's nothing about network security that can't be explained simply, without buzzwords.
Update on several fronts...
First - I now have reason to believe that part of the flakiness of the OTS hardware product is a combination of a low-powered CPU and insufficient buffer memory. In weeks of logging and a few crashes, there are no entries in the logs before the crashes, even though I'm logging everything the log controls allow. Another firewall vendor I talked to indicated that this may suggest the buffer is filling faster than it can empty during heavy use. This corresponds with my findings - the most used IP is the one crashing most often.
So I checked, and the firewall did have some deep packet inspection stuff turned on. I've turned it off to see if things improve.
The firewall's main purpose in my network scenario is "gate keeper". That is, I want the firewall to prevent all traffic EXCEPT http, https and some ssh from ever getting beyond the WAN port. Since there are no users inside the firewall, any traffic generated from the inside comes from my application and can be allowed out.
Further talks with the one vendor indicated that they no longer think deep packet inspection is necessary - the other fellow was just trying to "upsell" me on the unit in question. I also found out their hardware won't really do all that I want without spending a ton of money.
I'm now seriously exploring the use of OpenBSD and a PF firewall to do what I require in a cost-effective manner.
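To make that concrete, a pf.conf for the gate-keeper role described above could look roughly like this; the interface name and admin network are placeholders, and the rules should be adapted to the actual topology.

    ext_if = "em0"
    admin_net = "198.51.100.0/24"

    set skip on lo
    block in log all
    pass in on $ext_if proto tcp to ($ext_if) port { 80 443 } keep state
    pass in on $ext_if proto tcp from $admin_net to ($ext_if) port 22 keep state
    pass out on $ext_if keep state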

Resources