I am using the CloudFlare WAF in High security mode. When I make web service calls that include non-English characters such as Ö and Ç, the application firewall blocks them and returns:
You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the CloudFlare Ray ID found at the bottom of this page
CloudFlare Ray ID: 2366df772ee32bbe
When I turn the security level down to Low, the web service call is accepted.
How can I find which specific rule causes this block when I am running in High security mode?
I see the same problem. Here is what I did to see which rules were triggered:
1. Did a test call to the web service.
2. After a few hours, looked in the traffic report and found the block entry for my IP.
3. Clicked on Details to see which rules were triggered.
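To make the dashboard lookup easier, you can capture the Ray ID programmatically at the moment the call is blocked and then match it against the entry in the traffic report. A minimal Python sketch, assuming the blocked response carries Cloudflare's CF-RAY response header (the URL and payload below are placeholders for your real web service call):

    import requests

    # Placeholder endpoint and a payload containing the non-English characters
    # that trigger the block; replace with your real web service call.
    url = "https://example.com/api/endpoint"
    payload = {"name": "Örnek Çağrı"}

    resp = requests.post(url, json=payload)
    print("Status:", resp.status_code)            # 403 when the WAF blocks the call
    print("Ray ID:", resp.headers.get("CF-RAY"))  # correlate with the traffic report entry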
I tested the rule in Firefox inside an Ubuntu VM and it worked: it gave me a forbidden error page. When I tested it in my phone's browser it does not work; it just shows the index page with no errors. Is there a different configuration I need for this, or does it simply not work in a mobile phone browser?
The URL I used is "HTTP://vhost1.group21.com/?ID=103.50.84.114".
SecHttpBlKey (key here)
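# 900500 is the OWASP CRS setting that enables the @rbl lookups of each client IP
# against Project Honey Pot's http:BL using the key above; the setvar flags below
# choose which categories (search engine, suspicious, harvester, spammer) get blocked.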
SecAction "id:900500,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:tx.block_search_ip=0,\
setvar:tx.block_suspicious_ip=1,\
setvar:tx.block_harvester_ip=1,\
setvar:tx.block_spammer_ip=1"
This should not depend on the device: these rules act on the connecting IP address, not on what kind of client is making the request.
The only difference I can see is the IP address. The VM and the mobile device have different IPs, and the RBL lookup returns a different result for each (you can check this directly; see the sketch after this answer).
To debug this, I suggest you switch all the rules in the REQUEST-910 rule file from "nolog" to "log", then check the error log and compare the entries from a request that is blocked with one that is not. That should let you isolate the misbehaviour.
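One way to confirm that theory is to run the same http:BL lookup that ModSecurity's @rbl operator performs for both client IPs and compare the answers. A rough Python sketch using only the standard library; the access key and the two IP addresses are placeholders, and the answer's octets are interpreted per Project Honey Pot's documented format (days since last activity, threat score, visitor-type bitmask):

    import socket

    HTTPBL_KEY = "youraccesskey"  # placeholder: your Project Honey Pot access key

    def httpbl_lookup(ip):
        """Query Project Honey Pot's http:BL the way ModSecurity's @rbl does."""
        reversed_ip = ".".join(reversed(ip.split(".")))
        query = "{}.{}.dnsbl.httpbl.org".format(HTTPBL_KEY, reversed_ip)
        try:
            answer = socket.gethostbyname(query)
        except socket.gaierror:
            return None  # NXDOMAIN: the IP is not listed at all
        _, days, threat, visitor_type = (int(octet) for octet in answer.split("."))
        return {
            "days_since_last_activity": days,
            "threat_score": threat,
            "suspicious": bool(visitor_type & 1),
            "harvester": bool(visitor_type & 2),
            "comment_spammer": bool(visitor_type & 4),
        }

    # Placeholder IPs: the Ubuntu VM's public address and the phone's carrier address.
    for ip in ("203.0.113.10", "198.51.100.23"):
        print(ip, httpbl_lookup(ip))

If the listing differs between the two addresses, that explains why only one of them gets the 403 page.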
In the past week I had to move my server to the cloud (a DigitalOcean Droplet). I was on shared hosting, but concurrent users hit the PHP execution limit (30). I moved the entire site over and it is up and running successfully; moreover, Yandex and Bing are able to crawl my website, but it is Google that I need.
I have around 100K errors in the Search Console dashboard and the number is rising, and the Google Ads bot isn't able to crawl my pages either. I have checked the following and found no errors in any of them:
.htaccess and redirections.
SSL
DNS records (I moved the name servers to DigitalOcean and then back to the registrar to see if DNS was the problem), but it doesn't seem to be.
robots.txt; I double-checked it and it passes Google's robots.txt validator and other search engines' checks.
Similar setups are running on other servers with no changes at all and they are fine.
UFW; I am new to it, but I don't think it is the reason. I disabled it to check and it made no difference.
Apache; I haven't blocked anything there, so it should be fine too.
The error that appears is attached as a screenshot.
Please help me out; instead of scaling up, I am going downhill fast.
I repointed the DNS through another service. It took its time, but the problem is resolved. I wasn't sure about the cause of the error before, but now I am: it was an improper or partial DNS resolution issue.
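If you suspect partial DNS resolution again, one quick check is to ask several public resolvers for the record and compare the answers; empty or inconsistent results from some of them point to incomplete propagation. A small sketch using the third-party dnspython package (the domain and the resolver list are just examples, and dnspython 2.x is assumed for resolve()):

    import dns.resolver  # pip install dnspython

    DOMAIN = "example.com"  # placeholder: your site's hostname
    RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "Quad9": "9.9.9.9"}

    for name, server in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answers = resolver.resolve(DOMAIN, "A")
            print(name, [a.address for a in answers])
        except Exception as exc:  # NXDOMAIN, timeout, no answer, ...
            print(name, "lookup failed:", exc)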
We are testing out Traffic Manager to see if it is a viable solution for failover. If our primary Azure region becomes unavailable for any reason, we want end users to be directed to a secondary location where they can continue using the site.
I have followed the documentation for setting this up and have three simple API pages as endpoints in three different regions that simply report which one you are hitting. I have them prioritized 1, 2 and 3.
When hitting the .trafficmanager.net URL, the primary is displayed as it should. All 3 show "online" in the traffic manager profile. If I stop the primary site, then refresh my browser, I get a 403 error stating that the site has stopped.
I set the TTL in the traffic manager profile configuration to 60 seconds. However, after 15+ minutes, the browser still displays the 403. The only way I seem to be able to get the secondary site to pull up is by starting a new browser session. It's like there is some sort of caching and/or TTL issue with the browser session that prevents it from trying the secondary site.
This obviously wouldn't be acceptable in a live, production environment. There has to be a way around this, right? Has anyone else dealt with this issue?
The browser might be holding the connection open with HTTP Keep-Alive.
Keep in mind that Azure Traffic Manager works at the DNS level so, rather than using a browser to get a repro, try to get a repro with some DNS tools like dig, nslookup, etc.
This isn't just a browser setting. Your IIS server can also be configured to use keep-alive to reduce strain on itself, leaving open connections that completely bypass Traffic Manager's DNS rules. I had these exact same symptoms and was able to alleviate them by following the steps I posted here. Whether it will prove useful in a real-world scenario remains to be seen, but I hope this helps you get further.
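To take both browser caching and keep-alive out of the picture, you can poll the profile from a script that does a fresh DNS lookup and closes the connection after every request. A hedged Python sketch (the trafficmanager.net hostname is a placeholder): if the resolved address switches to the secondary endpoint within roughly the profile's TTL while the browser still shows the 403, the culprit is client-side connection reuse rather than Traffic Manager itself.

    import socket
    import time

    import requests

    HOST = "myprofile.trafficmanager.net"  # placeholder: your Traffic Manager profile

    while True:
        ip = socket.gethostbyname(HOST)  # what DNS currently returns for the profile
        resp = requests.get(
            "https://{}/".format(HOST),
            headers={"Connection": "close"},  # do not reuse the connection
            timeout=10,
        )
        print(time.strftime("%H:%M:%S"), ip, resp.status_code)
        time.sleep(30)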
We're looking to do some scraping of a specific URL that sits behind CloudFlare. Has anyone experienced issues using Zombie.js/user-agents while trying to crawl CloudFlare-hosted sites?
Would love some help!
I am trying to interface with an API on a client's site and I am indeed getting a 403 error; the request doesn't even reach my server.
Turning security down to "essentially off" did not help. The final solution was to whitelist the developer machine's IP.
The error is triggered on a single URL (a JSON-serving API) called from a Java client using standards-compliant libraries.
Solution:
1. try to set a rule that allows direct access to that URL
2. try setting security to weaker and weaker levels ("essentially off")
3. if both fail: try whitelisting your IP (a sketch of doing this via the CloudFlare API follows this answer)
4. set up an alternate non-CloudFlare URL (direct.domain.com)
These will of course only work if you can negotiate with the site owners.
Backup solution: use an embedded browser that you can "frame" and remote-control, or a testing framework that does the same through a plugin, and extract the content from there (if you can).
Hope this helps.
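For reference, if the site owner agrees to the whitelisting in step 3, they can add the developer's IP either from the Firewall section of the dashboard or via the API. A hedged Python sketch, assuming the zone-level IP Access Rules endpoint and an API token with firewall-edit permission (the zone ID, token and IP are placeholders; verify the exact path and field names against the current CloudFlare API docs):

    import requests

    ZONE_ID = "your-zone-id"      # placeholder
    API_TOKEN = "your-api-token"  # placeholder: token with firewall edit permission
    DEV_IP = "203.0.113.10"       # placeholder: the developer machine's IP

    resp = requests.post(
        "https://api.cloudflare.com/client/v4/zones/{}/firewall/access_rules/rules".format(ZONE_ID),
        headers={"Authorization": "Bearer {}".format(API_TOKEN)},
        json={
            "mode": "whitelist",  # skip the security checks for this IP
            "configuration": {"target": "ip", "value": DEV_IP},
            "notes": "Allow partner API client",
        },
    )
    print(resp.status_code, resp.json())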
You're probably triggering one of our security features by trying to scrape a site on us. The only option, really, would be to ask the site owner to whitelist your IP(s) to override the behavior.
This question is an extension of a previously answered question:
How to give cname forward support to saas software
Sample sites -
client1.mysite.com
client2.mysite.com
...
clientN.mysite.com
Create affinity by, say, forwarding client[1-10].mysite.com to europe.mysite.com => IP address.
Another criterion is that the solution should require few changes to proxies, firewalls and the network. In essence, what I am attempting is data-dependent routing (based on URL, login information, etc.).
However, the approaches I have found all require a token-based authentication system that authenticates the user and then redirects them to a new URL. I am afraid that can become a single point of failure and would need a separate site, apart from my core app, to do the routing. It also means quite a bit of refactoring of existing code. Another concern is that the solution may not be entirely transparent to the end user, since it relies on an HTTP 301 redirect.
Keeping in mind that the application can be served from load-balanced web servers (IIS) behind an LB switch and other network appliances, I would greatly appreciate it if someone could simplify this and explain how it should be designed.
Another resource I have been looking up is -
http://en.wikipedia.org/wiki/DNAME#DNAME_record
You could stick routing information into a cookie, so that the various intermediary systems can then detect that cookie and redirect the user accordingly without there being a single point of failure.
If the user forges a cookie of his own, he might get redirected to a server where he does not belong, but that server would then check whether the cookie is indeed valid, and prevent unauthorized access.
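As a rough illustration of that idea, the routing value can be signed so that a forged cookie is detected at the destination. A minimal Python sketch, not tied to any particular proxy or framework; the cookie layout, shared secret and shard names are invented for the example:

    import hashlib
    import hmac

    SECRET = b"shared-secret-between-app-and-routers"  # placeholder

    def make_routing_cookie(shard):
        # Value an intermediary can read to pick a target, e.g. "europe".
        sig = hmac.new(SECRET, shard.encode(), hashlib.sha256).hexdigest()
        return "{}.{}".format(shard, sig)

    def validate_routing_cookie(value):
        # Run on the destination server: reject forged or tampered cookies.
        shard, _, sig = value.rpartition(".")
        expected = hmac.new(SECRET, shard.encode(), hashlib.sha256).hexdigest()
        return shard if hmac.compare_digest(sig, expected) else None

    cookie = make_routing_cookie("europe")  # e.g. stored as ROUTE=europe.<signature>
    print(cookie, "->", validate_routing_cookie(cookie))
    print(validate_routing_cookie("us-east.deadbeef"))  # forged cookie -> None

The intermediaries only need to read the shard name before the dot; the full signature check happens on the server that ultimately handles the request, so no single routing service sits in the critical path.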