Google Sites link reporting "We're Sorry" - security

We have a curriculum site hosted in the new Google Sites and shared publicly. Anyone who visits the site gets the Google "We're Sorry" page and can't access the website without refreshing the page multiple times. It seems that after you finally get each page to show, future visits are fine. But as they begin to roll this site out to teachers, they need the link to work on the first try. This happens both via direct link access and by clicking the link in an email, and so far in both Chrome and Firefox from testing.
I've never seen this happen with Google Sites. There is nothing specific on the page that is unsafe, and no insecure embeds (just images and links to Google Drive docs).
I used https://transparencyreport.google.com/safe-browsing/search to test and it comes back safe.
Per request, I am going to include screenshots from the Network tab. However, I can no longer replicate this issue on my network or machines; many teachers are still reporting it, so I am trying to get screenshots from them. In the first one, the logimpressions request is blocked for them but the site loaded; this is most likely caused by having uBlock enabled.

Related

Multilingual Umbraco Website cannot be scraped?

I have created a multilingual Umbraco website which has three domain names pointing to it, one for each language. The site has gone live and people are starting to share links to it on LinkedIn and other social media. I have metadata in the website which should be picked up when these links are shared. On LinkedIn, when the link is shared it has "coming soon" as the strap-line, which is what was on the holding page months ago, suggesting the site isn't being re-scraped.
I used the Facebook link debugging tool and that was returning a run-time error with a 500 response code.
My co-worker insists that there is nothing wrong with the DNS and there aren't any errors in the code of the website so I am wondering if anyone has any ideas why the website cannot be scraped?
It also has another issue where one of the domains sometimes doesn't redirect to its www. version despite having a redirect set up in DNS, which may be related.
Is there some specific Umbraco configuration that I may have missed? Or a bug within Umbraco that may cause this?
Aside from this issue the website is working fine; it is just that these scrapers seem unable to hit it successfully.
Do you have metadata set for the encoding and language? See https://www.w3.org/International/questions/qa-html-language-declarations (probably a long shot).
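Since the Facebook debugger reports a 500, it may help to reproduce the exact request the scraper makes. A minimal sketch, assuming Java and a placeholder URL (swap in one of your three live domains), that fetches the page with the scraper's User-Agent so any server-side user-agent or culture-detection logic follows the same code path:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ScraperCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: substitute one of the live language domains.
        URL url = new URL("https://www.example.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Pretend to be the Facebook scraper so the server takes the same
        // code path it does when the debugger hits the site.
        conn.setRequestProperty("User-Agent", "facebookexternalhit/1.1");
        conn.setInstanceFollowRedirects(false);
        System.out.println("Status: " + conn.getResponseCode());
        System.out.println("Location: " + conn.getHeaderField("Location"));
    }
}
```

If this returns 500 while a normal browser User-Agent returns 200, the problem is likely in whatever hostname- or language-detection code runs before the page renders, rather than in DNS.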

Google Chrome could not load the webpage because myPortalapps-12812b1f934c6c.myPortal.apps.com took too long to respond

Trying a hello world hosted app, but getting this error on deployment:
Google Chrome could not load the webpage because myPortalapps-12812b1f934c6c.myPortal.apps.com took too long to respond
I can ping myPortalapps.myPortal.apps.com but not myPortalapps-12812b1f934c6c.myPortal.apps.com
I also had some similar problems with SharePoint web apps; this forum post helped me out a lot:
When troubleshooting performance issues where more than one person/computer is impacted, the first place I like to start is by running a sniffer like Fiddler: http://www.fiddler2.com/fiddler2/version.asp
Fiddler will let you know exactly how long it takes to load the page, and break down each and every resource that is also loaded in order to render the page. This is also a great tool for determining what is and what is not being cached.
I take the output of this and see if there is anything being loaded that I'm not expecting. Every once in a while I'll see where a user might reference an image housed on an external site or server. This can have serious consequences for load times.
I also look at the actual SharePoint page to see if there are any hidden web parts loading list data. Most users accidentally click "Close" and not "Delete", so those web parts or list views are still there. In some cases there could be significant data being loaded and just not displayed.
Likewise I'll also take a look to see if any audiences are being used, since audiences can be used to show/hide content.
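Since myPortalapps.myPortal.apps.com pings but the app-specific host does not, it may also be worth ruling out DNS before profiling page weight; per-app hostnames like this usually depend on a wildcard DNS entry for the app domain. A quick hedged check, with the hostnames copied from the question:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    public static void main(String[] args) {
        // The two hosts from the question: the base portal domain resolves,
        // but the per-app subdomain may not if wildcard DNS is missing.
        String[] hosts = {
            "myPortalapps.myPortal.apps.com",
            "myPortalapps-12812b1f934c6c.myPortal.apps.com"
        };
        for (String host : hosts) {
            try {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " does not resolve (check the wildcard DNS entry for the app domain)");
            }
        }
    }
}
```

If the second host never resolves, no amount of page tuning will help; the "took too long to respond" error is just the browser timing out on a name it cannot reach.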

Internet Explorer Cross Domain Iframe Login

I have a Java web application in domain A (that we control). This application displays another website located in domain B (which we do not control) in an iframe. This external website was recently updated to require users to log on before they can see content. They provided us with a URL that will automatically log our users into their site. This URL works when we navigate directly to it in Internet Explorer (we get automatically logged in etc).
However, apparently there was an update to Internet Explorer so that cross-domain communication is not allowed. So now when the login URL is displayed in the iframe, it does not successfully log on (I am guessing it's being blocked from creating security cookies).
Also, if we browse to the URL directly and get the security in place, then any iframe elements of the site will not work (I am guessing it is being blocked from accessing security cookies).
Does anyone know of a work around for this? Changing the security level on Internet Explorer is not an option (it is controlled by our company's system administrator). Internet Explorer is also our company standard, so we cannot change that (even though it works fine in Firefox).
When you say "elements of the site will not work" what precisely does that mean?
"Cross-domain" interactions have always been restricted in all browsers. This is called "same-origin-policy" and it's the foundation of web security. The "update" to Internet Explorer you're referring to restricts IE such that a webpage on Domain A can no longer navigate a subframe that is inside a page from Domain B. That restriction has been present in IE for 7+ years and is in all browsers. This restriction is not causing your problem.
This most likely problem here is that the subframe fails to set a P3P header that would permit its cookies to be stored. There are perhaps 30 duplicates on that issue on StackOverflow.
To determine if this is what you're encountering, try this:
1. In IE, click Tools > Internet Options > Privacy tab.
2. Set the slider to Accept All.
3. Clear your cookies.
4. Restart the browser and retry the scenario.
If this change solves the problem, then the fix is easy: configure the page which is being framed to specify its cookie policy using a P3P response header.
If this doesn't solve the problem, please update the question with more information that would allow others to reproduce it (e.g. traffic logs, live site URL, etc).
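For reference, the P3P fix is just an extra response header; it has to be sent by the framed site on domain B alongside the responses that set its login cookies, so this would be for that team to implement. A minimal sketch of how it might look in a Java servlet filter, purely to illustrate the header (the filter name and the compact policy value are placeholders, not a vetted privacy statement):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: adds a P3P compact policy so IE will accept the
// framed site's cookies as third-party cookies.
public class P3pHeaderFilter implements Filter {

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // "CAO PSA OUR" is a placeholder compact policy; the real value
        // should reflect the site's actual privacy practices.
        ((HttpServletResponse) res).setHeader("P3P", "CP=\"CAO PSA OUR\"");
        chain.doFilter(req, res);
    }

    public void destroy() { }
}
```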
It turns out that this was caused by the login site not being on the trusted sites list. Having security add it as a trusted site and push that to all company computers solved the issue.

Google links open wrong pages

Our website was recently hacked (Joomla 1.5, hosted on a VPS). The attacker added a few PHP scripts that were redirecting to some ad sites. We have cleaned everything (or at least we think we have), and now everything works as it should.
However, links on Google (or Yahoo) that point to our website are still trying to include these PHP scripts (and return 404, as these are deleted now). Direct links from the browser work as they should.
We cleaned the site 10 days ago, so I do not think anything is still cached on Google's servers; re-indexing should be done by now.
To reproduce this behavior:
1. Go to www.google.com
2. Type in "anitex socks"
3. Click any PHP link that starts with "anitexsocks.com"
4. You will get "The requested URL /wp-includes/client.php was not found on this server" and a 404 error
5. Refresh the page and everything works without issues
Why are only Google links causing trouble?
Any help is welcome. Thanks!
As for the reason why this is happening: I installed a Firefox add-on which blocks my browser's Referer header and then followed a Google link to your site, and it worked fine. Then I disabled the add-on and the problem started occurring again.
This shows that there is still some malicious code running on your website which checks all HTTP requests to see if they come from Google (based on the HTTP Referer header) and redirects them to /wp-includes/client.php if they do.
To determine where this code may lie, try performing a recursive grep through all the www files on your server as well as your web server configuration files. Somewhere in there there must still be a reference to that client.php script; hopefully you can find and eliminate it.
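If shell access for a recursive grep is awkward, the same sweep can be scripted. A rough sketch (Java 11+); the web-root path is an assumption, and the search strings are just guesses at what the injected redirect is likely to reference, such as the dropped client.php script or the referrer check PHP would do via HTTP_REFERER:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindInjectedCode {
    public static void main(String[] args) throws IOException {
        // Placeholder path: point this at the Joomla web root, and repeat
        // for the web server configuration directories.
        Path webRoot = Paths.get("/var/www/html");
        try (Stream<Path> paths = Files.walk(webRoot)) {
            paths.filter(Files::isRegularFile).forEach(file -> {
                try {
                    String text = Files.readString(file);
                    // Flag files that mention the dropped script or read the referrer.
                    if (text.contains("client.php") || text.contains("HTTP_REFERER")) {
                        System.out.println("Suspicious reference in: " + file);
                    }
                } catch (IOException skipped) {
                    // Binary or unreadable files fail UTF-8 decoding; skip them.
                }
            });
        }
    }
}
```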
That said, if it were my site and I knew a hacker had had free rein over my server to do whatever they wanted to it, I would not mess around with trying to undo the damage and would instead restore the most recent backup from before the site was hacked. You only have to miss one back door the hacker left in place and they can re-enter your site. After restoring backups, you should also upgrade or reconfigure the software they used to gain access in the first place so they can't simply rehack it in the same manner again.

How to Bypass Output Cache in SharePoint 2007 Publishing Internet site

We're building a mobile-friendly site to work in tandem with our client's MOSS 2007 internet site. We need to be able to redirect users who hit the home page and are using a mobile device.
Our original intention was to add a custom control to the home page page layout that would detect the current user's device and redirect to the mobile site accordingly. We quickly realised that this would not work as we are using the Output Caching functionality provided by SharePoint/Asp.Net. This means that the detection code will only run for the first visitor to the home page until the cache expires.
Our next idea was to build a custom HTTP Module and process the detection there. However, we are finding that the Output Caching is not allowing that either. If the cache is set while a mobile device is visiting all browsers are subsequently redirected to the mobile site (until the cache expires).
If we turn off output caching it works just fine, but we cannot turn output caching off, especially for the home page. We did investigate Substitution (Donut) Caching, but this is not working due to the fact that we are filtering the ASP.NET response within another HTTP Module that tidies up the rendered HTML for XHTML compatibility reasons. I've also experimented with the output cache profile by setting its vary-by-header property to "User-Agent", but I am getting mixed results and am also concerned about the memory implications of caching multiple versions of pages (we already have memory issues now and then).
It's possible we could run the redirection code in JavaScript, but then we risk missing a lot of devices that don't have JavaScript enabled. This is a government website, so the usage of JavaScript has to abide by accessibility guidelines.
Does anyone have any other ideas as to how we can solve this issue? Has anyone done this before? Perhaps in a different way?
Hope you can help, thanks.
p.s. I have also asked this question on SharePoint.SE but wanted to get as many eyes on this as possible.
I would suggest trying ISAPI filters.
I've actually solved this one I think. I've pretty much followed this article here - http://msdn.microsoft.com/en-us/library/ms550239.aspx. We have updated the code in that article to build a cache key based on whether the current page is the home page, whether the current user is using a mobile device and whether or not a cookie exists forcing the user to the full site. I will probably write this up as a blog post. When I do I will update this answer providing a link.
