Blocking googleapis.com at browser level in enterprise policy - browser

I'm trying to set some Chromium browser policies as outlined in Set up policies for an enterprise Linux environment. Some policies are straightforward; however, I have found out from DNS logs that when the Chromium browser is opened, it performs a number of DNS lookups, such as the following:
accounts.google.com
googleapis.com
google.com
optimizationguide-pa.googleapis.com
update.googleapis.com
I need to block these at the browser level using policies. A list of all the supported policies is available here. I have tried using the URLBlocklist policy and providing the above-mentioned hostnames (with and without www, and with http or https prefixes). Nothing seems to work.
Any idea how this can be achieved via policies?
Note that I have already set some relevant policies, listed below, yet the lookups still occur.
DefaultSearchProviderEnabled: false
AutoFillEnabled: false
AutofillAddressEnabled: false
SearchSuggestEnabled: false
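For reference, here is a minimal sketch of how these policies plus a URLBlocklist entry could be combined in a single managed policy file on Linux; the file path and the blocklist patterns below are assumptions (the managed-policy directory differs between distributions and between Chrome and Chromium builds), so treat it as a starting point rather than a known-working configuration:

    {
      "DefaultSearchProviderEnabled": false,
      "AutoFillEnabled": false,
      "AutofillAddressEnabled": false,
      "SearchSuggestEnabled": false,
      "URLBlocklist": [
        "accounts.google.com",
        "google.com",
        "*.googleapis.com"
      ]
    }

Dropped into the browser's managed policies directory (for example /etc/chromium/policies/managed/ on many Chromium builds), the values should show up on the chrome://policy page after a restart.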

Related

Protect Static Html Files Website in IIS with Basic Authentication

I have a simple Intranet Website that is just a few HTML pages with a little JavaScript and CSS.
If Allow Anonymous is ON, everyone can see it. It works.
In IIS, I turn on Basic Authentication and it only partially works as expected.
The company only allows IE and Edge installed on Windows 10 PCs for now.
Specific users have been added to that server running IIS.
In IE when users go to the website now, they are prompted for their username and password. Then the website loads.
However, in Edge, the users are never prompted for their username and password. A 401 error loads instead.
I have already tried putting the username and password in the URL like so: https://username:password@URL but that did not work.
I want the same or similar behavior that works in IE for Edge.
I assume you're using the Chromium-based Edge browser; correct me if I'm wrong. The issue might be related to this policy: AuthSchemes.
You can visit edge://policy in Edge and check if it has an AuthSchemes policy set. The policy can be used to disable Basic Authentication. If your browser has this policy set, you need to include the 'basic' value in it.
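For example, if the policy is being managed through the registry on those Windows 10 PCs (an assumption; it may instead come from an ADMX template via Group Policy), including 'basic' in the allowed list would look like:

    reg add "HKLM\SOFTWARE\Policies\Microsoft\Edge" /v AuthSchemes /t REG_SZ /d "basic,digest,ntlm,negotiate" /f

After restarting Edge, edge://policy should list AuthSchemes with that value.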
I don't have this policy set, and when I visit the test page https://jigsaw.w3.org/HTTP/Basic/, Basic Authentication works well in Edge.
You can also refer to this thread and this thread which have similar issues.

Azure Traffic Manager, Priority Mode: Browser refresh won't go to secondary node when primary goes down

We are testing out Traffic Manager to see if it is a viable solution for failover. If our primary Azure region becomes unavailable for any reason, we want end users to be directed to a secondary location where they can continue using the site.
I have followed the documentation for setting this up and have three simple API pages as endpoints in three different regions, each of which simply reports which one you are hitting. I have them prioritized 1, 2 and 3.
When hitting the .trafficmanager.net URL, the primary is displayed as it should. All 3 show "online" in the traffic manager profile. If I stop the primary site, then refresh my browser, I get a 403 error stating that the site has stopped.
I set the TTL in the traffic manager profile configuration to 60 seconds. However, after 15+ minutes, the browser still displays the 403. The only way I seem to be able to get the secondary site to pull up is by starting a new browser session. It's like there is some sort of caching and/or TTL issue with the browser session that prevents it from trying the secondary site.
This obviously wouldn't be acceptable in a live, production environment. There has to be a way around this, right? Has anyone else dealt with this issue?
The browser might be using Keep-Alive
Keep in mind that Azure Traffic Manager works at the DNS level, so rather than using a browser to get a repro, try to get one with DNS tools like dig, nslookup, etc.
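For example (the profile name below is a placeholder), run the lookup before stopping the primary endpoint and again after the 60-second TTL has expired, and compare which endpoint the DNS answer points to:

    dig +short myprofile.trafficmanager.net
    nslookup myprofile.trafficmanager.net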
This isn't just a browser setting. Your IIS server could be configured to use keep-alive to reduce strain on itself, thus leaving open connections that completely bypass Traffic Manager's DNS rules. I had these exact same symptoms and was able to alleviate them by following the steps I posted here. Whether it'll prove useful in a real-world scenario has yet to be seen, but I'm hoping this will help you get further.
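I can't promise this matches the linked steps exactly, but a minimal sketch of switching keep-alive off for a site in web.config looks like this (keep-alive exists for a reason, so weigh the performance trade-off before relying on it):

    <configuration>
      <system.webServer>
        <httpProtocol allowKeepAlive="false" />
      </system.webServer>
    </configuration>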

CloudFlare WAF blocks webservice calls with non-English chars

I am using CloudFlare WAF in High security mode. When I make web service calls which include some non-English characters such as Ö or Ç, the application firewall blocks them and returns:
You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the CloudFlare Ray ID found at the bottom of this page
CloudFlare Ray ID: 2366df772ee32bbe
When I turn the security level to Low, the webservice is accepted.
How can I find which specific rule causes that error when running in High security mode?
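For what it's worth, a minimal repro I can replay at both security levels looks roughly like this (the hostname and parameter name are placeholders for my actual endpoint):

    # -G turns the --data-urlencode value into a percent-encoded query string
    curl -v -G "https://example.com/api/search" --data-urlencode "q=ÖÇ"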
I see the same problem. Here is what I did to see the rules triggered:
Did a test call to the web service.
After a few hours, looked into the traffic report and found my IP's block entry.
Clicked on Details and could see which rules were triggered.

Has anyone experienced Cloudflare 403 errors with zombie.js web scraping?

We're looking to do some scraping on a specific URL that uses Cloudflare. Has anyone experienced issues using Zombie.js/user-agents while trying to crawl Cloudflare-hosted sites?
Would love some help!
I am trying to interface to an API on a client's site and I am getting a 403 error indeed. The request doesn't even reach my server.
Turning security to "essentially off" did not help. The final solution was to white-list the developer machine's IP.
The error is triggered on a single URL (a JSON-serving API) with a Java client using standards-compliant libraries.
Solution:
1. try to set a rule to allow direct access for that URL
2. try setting security to weaker and weaker ("essentially off")
3. if both fail: try whitelisting
4. set up an alternate non-Cloudflare URL (direct.domain.com; see the curl sketch below)
These will of course only work if you can negotiate with the site owners.
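If you do get that alternate non-Cloudflare hostname in place (option 4), a quick sanity check that requests reach the origin directly could be as simple as the following; the path and the User-Agent string are placeholders:

    # -A sets a browser-like User-Agent; direct.domain.com stands in for the non-proxied hostname
    curl -v -A "Mozilla/5.0" "https://direct.domain.com/api/data.json"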
Backup solution: use an embedded browser that you can "frame" and "remote control" or a testing framework that does the same through a plugin, and extract the content from there (if you can)
Hope this helps.
You're probably triggering one of our security features by trying to scrape a site on us. The only option, really, would be to ask the site owner to whitelist your IP(s) to override the behavior.

IIS 7.5, URL Rewrite 2.0, Kerberos - rewritten URL returning 401.1

I would appreciate any hints regarding the following issue:
The problem summary:
While using Negotiate:Kerberos in IIS 7.5, authorization works correctly right up until we set up URL rewriting (using the MS module "URL Rewrite 2.0"): any rewritten URL then returns "401.1 Unauthorized" (requests not matching any rewrite rule keep working, though).
The setup:
Windows Server 2008 R2 x64
IIS 7.5
URL Rewrite 2.0
Server is in a domain
SPN exists for HOST/hostname and HOST/hostname.domain (created by default)
Pool is using default ApplicationPoolIdentity (no custom account, not network service)
Kernel mode set to OFF
Authentication providers set to "Negotiate:Kerberos" only (no NTLM or anonymous)
URL Rewrite rule such as "^(.*)/$" => "index?x={R1}" (see the web.config sketch below)
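Expressed as a web.config fragment inside system.webServer (the rule name here is made up, and the back-reference is written {R:1}, which is the syntax URL Rewrite expects), the rule looks roughly like:

    <rewrite>
      <rules>
        <rule name="TrailingSlashToIndex">
          <match url="^(.*)/$" />
          <action type="Rewrite" url="index?x={R:1}" />
        </rule>
      </rules>
    </rewrite>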
The result:
1) When accessing any URL not matching any URL rewrite pattern, Kerberos is working correctly, i.e. Kerberos ticket is issued (verified using klist), sent (verified using netmon and HTTP headers) and accepted (verified by URL being accessible and appropriate AUTH_USER property set to my domain account name) => no problem here.
2) When accessing any URL matching URL rewrite pattern, e.g. "hostname/foo" the result is:
HTTP Error 401.1 - Unauthorized
You do not have permission to view this directory or page using the credentials that you supplied.
Module WindowsAuthenticationModule
Notification AuthenticateRequest
Error Code 0x80070055
Requested URL http://hostname/index?x=foo
Physical Path D:\wwwroot\
Logon Method Not yet determined
Logon User Not yet determined
(if we try to access the rewritten URL directly, e.g. hostname/index?x=foo, Kerberos works again normally)
The attempts to solve it so far:
After googling, we have tried several options:
turning kernel mode ON: Kerberos stopped working completely, using either the default pool identity or network service (I suppose we would need to set up an additional HTTP SPN and/or use a custom domain account with an additional SPN for that account explicitly)
turning "useAppPoolCredentials" ON: no difference (the combination is sketched after this list)
enabling "Failed Request Tracing": surprisingly, these failing 401.1 requests ARE NOT generating any output into the failure logs no matter what rule we set up (e.g. 400-999) - the folder is just empty (while other errors, like 404, or even the handshake 401.x when accessing non-rewritten URLs, do generate logs - very strange)
The conclusion:
So far we have reached a dead end - it may be some weird kind of "double hop" issue requiring a custom domain account rather than the default app pool identity, but as we're in fact accessing the same resources, it seems more like a URL Rewrite issue.
Any tips, hints, pointers? Anything would be highly appreciated.
Best regards,
Marek
We face the same issue as you do. By enabling extended error logging, we were able to pinpoint the actual problem, which seems to be a bug in the rewrite module (or at least in some part of IIS that is related to the module):
When the URL gets rewritten, access to the new, rewritten URL is checked (seemingly hardcoded) using Basic Authentication and NTLM, neither of which has been configured on the website at hand. The only configured authentication provider is Kerberos. Since the client doesn't send NTLM or Basic credentials, there is no way this can work.
We (another person on the current project) are sending the issue to Microsoft. I will let you know when I get any result.
It seems as though you have multiple issues here.
Failed-Request Tracing Logs
To fix your missing logs issue, you must make sure that the user running your site's application pool has read/modify rights to the folder where those logs are generated, otherwise you won't see anything. See the section labeled "Enable Failed-Request Tracing" on this page: Troubleshoot Failed Requests Using Tracing in IIS 7.
What isn't clear there is that the site's Application Pool Identity (found in Advanced Settings for the Application Pool) is the account that needs read/modify rights to that folder.
Once that is fixed you can load the XML logs in IE and see a much clearer picture of what is going on.
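For example, assuming the default log folder and a hypothetical application pool name, the grant can be made with:

    icacls "C:\inetpub\logs\FailedReqLogFiles" /grant "IIS AppPool\MyAppPool:(OI)(CI)M"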
401.1 - Unauthorized Issue
A possible fix to your 401 error is to make sure unlisted file name extensions are allowed in Request Filtering. Go to IIS --> Sites --> [your site] --> Request Filtering
You have two options here:
Allow File Name Extension... and add the value "." (minus the quotes); see this answer.
Edit Feature Settings... and enable the option "Allow unlisted file name extensions".
The first option should work well; the second obviously opens up a gaping hole, but it allows everything, so you should be able to get things working.
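As a sketch, the equivalent web.config fragment (inside system.webServer) covering both options would be:

    <security>
      <requestFiltering>
        <!-- option 2: allow any unlisted extension -->
        <fileExtensions allowUnlisted="true">
          <!-- option 1: explicitly allow extensionless URLs -->
          <add fileExtension="." allowed="true" />
        </fileExtensions>
      </requestFiltering>
    </security>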
I hope that helps.
