I recently published a Chrome extension (Source Code) and have now discovered some broken incoming links on the extension's website which must be related to that extension:
/track_install/search/ext/free/mebkekakcnabgndiakbbefcgpedlaidp/mixcloud_downloader
/webstore/detail/ext/free/mebkekakcnabgndiakbbefcgpedlaidp/mixcloud_downloader
On the Chrome Web Store I don't find such links. Do you have any idea where those links come from and what their purpose is? Would users expect anything other than a 404 at those URLs?
The website is referenced in the extension's manifest homepage_url field and on the webstore item in the "Websites" field.
Update: I just noticed another such request whose referer comes from this question.
Normally, those URLs are relative to the Webstore and are used for Analytics tracking and this stat page (only available to you). See this mention, for example.
/track_install/... is, quite obviously, used as a beacon to track installs.
/webstore/detail/ext/free/... tracks opening your extension's listing in the Web Store.
Here's documentation on homepage_url, which I believe influenced this, including this quote:
If you distribute your extension using the Chrome Web Store, the homepage URL defaults to the extension's own page.
I believe that it's either a bug that those are sent out to your server instead, or a feature I haven't seen documented anywhere to let you track those instead. Note that those are just beacons sent from analytics code; you don't need to serve content on them.
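If the resulting 404s bother you, you don't have to serve real content on those paths; a minimal sketch (assuming a Node/Express server, with names and port purely illustrative) could simply acknowledge the beacons with an empty 204 response:

var express = require('express');
var app = express();

// Acknowledge the Web Store analytics beacons with an empty response,
// so they no longer show up as 404s in the access log.
app.get(['/track_install/*', '/webstore/detail/*'], function (req, res) {
  res.status(204).end();
});

app.listen(3000);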
In any case, it's worth reporting, either on the bugtracker or via the exceptionally well-hidden developer support form.
I built my website with GitHub Pages (Jekyll).
It opens fine in web browsers like Chrome (both PC and mobile), Internet Explorer and others.
But the problem is that it cannot be opened in the WeChat app (both Android and iOS). My access region is South Korea (not China, for your information regarding Chinese regulation).
(I am a newbie on WeChat and don't know anything about Chinese online regulation, but I am sure my blog is not blocked, because I can access it in QQ Browser.)
Detailed information
(Imgur image: sending the URL to someone)
I sent the URL to someone, as shown in the image above.
(Imgur image: access blocked)
At first, it opened fine. But on clicking the URL a second time, it did not open and instead showed the odd message in the image above.
Tips. https://aceshipping.github.io
For account security, do not enter any info related to WeChat password in the Internet.
Continue (button)
But the Continue button doesn't work, and there is no private information requested on my GitHub page anyway. (There is no login feature available on GitHub Pages.)
Please help me. I need this to open in WeChat itself, not through another browser.
TL;DR: use a .com/net/org/cn domain.
As of 2020/04, WeChat blocks links to public hosts like GitHub and Netlify by default, redirecting the page to https://weixin110.qq.com/cgi-bin/mmspamsupport-bin/newredirectconfirmcgi?main_type=1&evil_type=100&source=2&url=<your link here>&scene=1&devicetype=android-28&exportkey=...&pass_ticket=...&wechat_real_lang=(en or zh_CN or something).
When the language of WeChat is Chinese (whether simplified or traditional), this is a normal confirmation page saying "非微信官方网页,请确认是否继续访问。" ("Not an official WeChat web page; please confirm whether to continue browsing."). In other languages, however, it redirects to a link like https://qbview.url.cn/getResourceInfo?appid=62&url=<your link here>&openid=<your WeChat account identifier>&version=10000&doview=1&platformtype=700. This appears to be a discontinued Tencent service that used to serve the page with no JavaScript and reduced CSS ('safe browsing' / 'reader mode', as on redirection pages); it was shut down sometime in 2019 and now serves a bad SSL certificate and returns only 400 (Bad Request). It seems the Chinese version of the weixin110 page was updated while in other languages the link was left unchanged, which leads to a dead link.
The HTML markup of those pages is listed, if anyone is interested: en, zh_CN (the words in brackets are translations added by me).
I'm using SSL on my website and it is giving me the lock with yellow triangle icon ("The site uses SSL, but Google Chrome has detected insecure content on the page.")
On clicking the lock icon it says:
Your connection to domainname is encrypted with 256-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.0. The connection is encrypted using AES_256_CBC, with SHA1 for message authentication and DHE_RSA as the key exchange mechanism. The connection is not compressed.
How do I ensure I get the green lock?
You must have resources (images, stylesheets, scripts, etc...) which are embedded on the page but are not served over https. Make sure all your resources are served over https, and that warning should go away.
I had the same problem, and it occurred because I included a script from Google Analytics over HTTP.
With a provider like Google, one can simply change HTTP to HTTPS and it will work. This will not work with all providers.
If you are trying to load something from a website that you own, you will have to secure that website with HTTPS.
Google Chrome will detect this and automatically refuse to load the insecure content (from the HTTP domain), which may take away some functionality from the website.
Certain AV/malware software will also detect this and give a security warning, which may frighten your visitors away.
If you are using Google Chrome yourself, you might not notice such a warning, because the AV/malware software never sees the HTTP link - it is blocked by Google Chrome.
And if you do not have the kind of AV/malware software that detects this, you may never notice such a warning even though your visitors do.
What you must do is:
Install Google Chrome and go to the website.
Click on "Tools >> JavaScript Console" and see if any warning appears. (this has been commented by Brad Koch on the question as well)
Go throught the different pages on your website and see if any errors appear - if so, then go change the URLs to HTTPS (if this is possible) or find another provider for this javascript.
Make sure all references to resources such as images, JS files, CSS files, ads, etc. are served over HTTPS. If the URI of a resource is relative, e.g. /images/logo.png, the resource is fetched from the same host, port and protocol as the page itself, in your case HTTPS. I would use Fiddler to find out which files get fetched over http:// when the page is loaded.
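As a quick alternative to Fiddler, a small snippet you could paste into the browser's JavaScript console (purely illustrative; it only inspects src/href attributes) lists resource elements that are still referenced over plain http:

// List resource elements on the page that still point at plain http URLs.
Array.from(document.querySelectorAll('img[src], script[src], iframe[src], link[href]'))
  .map(function (el) { return el.src || el.href; })
  .filter(function (url) { return url && url.indexOf('http://') === 0; })
  .forEach(function (url) { console.log('Insecure reference:', url); });

Note that this misses CSS background images and dynamically injected resources, so keep an eye on the console warnings as well.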
This is what I get when I go to Tools and then click on JavaScript Console:
Failed to load resource chrome://thumb/https://accounts.google.com/ServiceLogin?service=chromiumsyn...s%3A%2F%2Fwww.google.com%2Fintl%2Fen-US%2Fchrome%2Fblank.html%3Fsource%3D1
Failed to load resource chrome://thumb/http://www.xe.com/
What do I do after this?
This thread was created back in 2008: Restricting IFRAME access in PHP.
I am looking to do almost exactly the same thing, i.e. I want sites that are publicly accessible only as long as they are being viewed from a specific iframe, from a specific app. The iframe app will have user authentication giving users access to URLs outside the core application. The URLs are all likely to be built using open-source PHP tools, e.g. WordPress.
Both the viewing iFrame and the viewed sites/pages will be owned by us.
Have there been any developments in last few years on ways to do this?
For various reasons not related to this particular issue, I am considering using the server-side RIA framework Vaadin (Java) for building the app that will contain the iframe viewer.
The demo of the embed widget is here: http://demo.vaadin.com/sampler#WebEmbed . Looking at the page source, I don't see the address of the embedded web page displayed anywhere. So to some extent I wonder whether I can hide my URLs from search engines, give them very long, randomly generated URIs, and perhaps they will be impossible to find anyway?
You should be able to modify a framekiller to do the opposite. A framekiller is a piece of JavaScript that prevents clickjacking by detecting whether the page has been loaded within an iframe.
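A reversed framekiller along those lines might look like this minimal sketch: it simply blanks the page and navigates away when it is not inside a frame (the viewer URL is hypothetical, and it can of course be defeated by disabling JavaScript):

// Reverse framekiller: only allow the page to render inside a frame.
if (window.top === window.self) {
  // Not framed: hide the content and send the visitor to the embedding app.
  document.documentElement.style.display = 'none';
  window.location.replace('https://viewer.example.com/');  // hypothetical viewer URL
}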
Limiting the iframe to load only within a specific page is more difficult. Looking at the referer is easy, but also easy to bypass. If you load the iframe from an HTTPS page, the referer will be blank. A better way would be to require the server to obtain a nonce and include it in the iframe URL, such as http://iframe_url?key=difhj8j84528423j423894hfdj897 or whatever. Having the embedding server make that request to your server would be ideal. Doing it with client-side code and JSONP to fetch the nonce is problematic, because an attacker could deliver modified JavaScript that fetches the nonce.
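A rough sketch of the nonce idea, assuming a Node/Express server for the embedding app; all names (domains, paths, expiry) are hypothetical:

var crypto = require('crypto');
var express = require('express');

var app = express();
var validNonces = {};  // in production, use a shared store such as Redis

// Embedding app: generate a nonce and put it in the iframe URL.
app.get('/viewer', function (req, res) {
  var nonce = crypto.randomBytes(16).toString('hex');
  validNonces[nonce] = Date.now() + 60 * 1000;  // valid for one minute
  res.send('<iframe src="https://framed.example.com/content?key=' + nonce + '"></iframe>');
});

// Endpoint the framed site calls (server to server) to validate a nonce.
app.get('/validate-nonce', function (req, res) {
  var expiry = validNonces[req.query.key];
  var ok = expiry !== undefined && expiry > Date.now();
  delete validNonces[req.query.key];  // single use
  res.json({ valid: ok });
});

app.listen(3000);

The framed sites (e.g. the WordPress installs) would then refuse to render unless /validate-nonce confirms the key they received in the query string.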
In Firefox or Chrome I'd like to prevent a private web page from making outgoing connections. That is, if the URL starts with http://myprivatewebpage/ or https://myprivatewebpage/ in a browser tab, then that tab must be restricted so that it is allowed to load images, CSS, fonts, JavaScript, XmlHttpRequest, Java applets, Flash animations and all other resources only from http://myprivatewebpage/ or https://myprivatewebpage/. For example, an <img src="http://www.google.com/images/logos/ps_logo.png"> (or the corresponding new Image(...) in a <script>) must not be able to load that image, because it is not on myprivatewebpage. I need a 100% foolproof solution: not even a single resource outside myprivatewebpage may be accessible, not even with low probability. There must be no resource-loading restrictions on web pages other than myprivatewebpage, e.g. http://otherwebpage/ must still be able to load images from google.com.
Please note that I assume that the users of myprivatewebpage are willing to cooperate to keep the web page private unless it's too much work for them. For example, they would be happy to install a Chrome or Firefox extension once, and they wouldn't be offended if they see an error message stating that access is denied to myprivatewebpage until they install the extension in a supported browser.
The reason why I need this restriction is to keep myprivatewebpage really private, without exposing any information about its use to webmasters of other web pages. If http://www.google.com/images/logos/ps_logo.png were allowed, then the use of myprivatewebpage would be logged in the access.log for Google's ps_logo.png, so Google's webmasters would have some information about how myprivatewebpage is used, and I don't want that. (In this question I'm not interested in whether the restriction is reasonable; I'm only interested in the technical solutions and their strengths and weaknesses.)
My ideas how to implement the restriction:
Don't impose any restrictions, just rely on the same origin policy. (This doesn't provide the necessary protection, the same origin policy lets all images pass through.)
Change the web application on the server so it generates HTML, JavaScript, Java applets, Flash animations etc. which never attempt to load anything outside myprivatewebpage. (This is almost impossible to make foolproof everywhere on a complicated web application, especially with user-generated content.)
Over-sanitize the web page using an HTML output filter on the server, i.e. remove all <script>, <embed> and <object> tags, restrict the target of <img src=, <link rel=, <form action= etc. and also restrict the links in the CSS files. (This can prevent all unwanted resources if I manage to remember all HTML tags properly, e.g. I mustn't forget about <video>. But it is too restrictive: it removes all dynamic web page functionality like JavaScript, Java applets and Flash animations; without these, most web applications are useless.)
Sanitize the web page, i.e. add an HTML output filter to the web server which removes all offending URLs from the generated HTML. (This is not foolproof, because there can be tricky JavaScript which generates a disallowed URL. It also doesn't protect against URLs loaded by Java applets and Flash animations.)
Install an HTTP proxy which blocks requests based on the URL and the HTTP Referer, and force all browser traffic (including myprivatewebpage, otherwebpage, google.com) through that proxy. (This would slow down traffic to sites other than myprivatewebpage, and it may not protect properly if XmlHttpRequest()s, Java applets or Flash animations can forge the HTTP Referer.)
Find or write a Firefox or Chrome extension which intercepts all outgoing connections, and blocks them based on the URL of the tab and the target URL of the connection. I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
Some links I've found for the Chrome extension:
http://www.chromium.org/developers/design-documents/extensions/notifications-of-web-request-and-navigation
https://groups.google.com/a/chromium.org/group/chromium-extensions/browse_thread/thread/90645ce11e1b3d86?pli=1
http://code.google.com/chrome/extensions/trunk/experimental.webRequest.html
As far as I can see, only the Firefox or Chrome extension is feasible from the list above. Do you have any other suggestions? Do you have some pointers how to write or where to find such an extension?
I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
I am the author of the latter extension, though I have yet to update it to support newer versions of Firefox. My initial guess is that, yes, it will do what you want:
The user visits your web page without the plugin. The page contains a ThinkAhead block that would send a simple version header to the server, but this is ignored as the plugin is not installed.
Since the server does not see that header, it redirects the client to a page to install the plugin.
The user installs the plugin.
The user visits the web page with the plugin. The page sends the version header to the server, so the server allows access.
The ThinkAhead block matches all pages that are not myprivatewebpage, and does something like set the HTTP status to 403 Forbidden. Thus:
When the user visits any webpage that is in myprivatewebpage, there is normal behaviour.
When the user visits any webpage outside of myprivatewebpage, access is denied.
If you want to catch bad requests earlier, instead of modifying incoming headers, you could modify outgoing headers, perhaps screwing up "If-Match" or "Accept" so that the request is never honoured.
This solution is extremely lightweight, but might not be strong enough for your concerns. This depends on what you want to protect: given the above, the client would not be able to see blocked content, but external "blocked" hosts might still notice that a request has been sent, and might be able to gather information from the request URL.
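For the Chrome extension route mentioned in the question, a minimal sketch of the webRequest idea (Manifest V2 era; it assumes the manifest declares the "tabs", "webRequest", "webRequestBlocking" and host permissions, and the details would need checking against current Chrome documentation) could look roughly like this:

// background.js: block cross-origin loads from tabs showing myprivatewebpage.
var PRIVATE = /^https?:\/\/myprivatewebpage\//;
var tabUrls = {};  // tabId -> current URL

chrome.tabs.onUpdated.addListener(function (tabId, changeInfo, tab) {
  tabUrls[tabId] = tab.url;
});

chrome.tabs.onRemoved.addListener(function (tabId) {
  delete tabUrls[tabId];
});

chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    if (details.type === 'main_frame') { return {}; }  // allow navigating away
    var tabUrl = tabUrls[details.tabId];
    // Cancel anything a myprivatewebpage tab tries to load from elsewhere.
    if (tabUrl && PRIVATE.test(tabUrl) && !PRIVATE.test(details.url)) {
      return { cancel: true };
    }
    return {};
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);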
When I go to our web site through HTTPS, Chrome reports an error saying that the page contains both secure and non-secure items. However, I used Firebug, Fiddler, and HttpDebuggerPro, all of which tell me that everything is going through HTTPS. Is this a bug in Chrome?
Sorry but I'm unable to give out the actual URL.
A bit late to the party here, but I've been having issues recently: once I had found an http resource and changed it, I was still getting the red padlock symbol. When I closed the tab and opened a new one it changed to a green padlock, so I guess Chrome caches this information for the lifetime of the tab.
Current versions of Chrome will show the mixed content's URL in the error console. Hit CTRL+Shift+J and you'll see text like:
"The page at https://www.fiddler2.com/test/securepageinsecureimage.htm contains insecure content from http://www.fiddler2.com/Eric/images/me.jpg."
I was having the same issue: Chromium was flagging non-secure static files even though everything was served over https://.
Just closing the current tab and re-opening the page in a new tab worked, so I think this is a Chromium/Chrome bug.
Using Chrome, if you open up the Developer Tools (View > Developer > Developer Tools) and bring up the Console and choose to filter to warnings, you'll see a list of offending URLs.
You'll see something like the following if you do have insecure content
The page at https://mysite/ displayed insecure content from http://insecureurl.
For the best experience in finding the culprit, you'll want to start your investigation in a new tab.
It is possible that a non-secure URL is referenced but not accessed (e.g. the codebase for a Flash <object>).
I ran into this problem when jQuery executed a few seconds after page load and added a class containing a non-secure image background. Chrome must continually check for non-secure resources being loaded.
See the code example below. If you had code like this, the green padlock is shown in Chrome for about 5 seconds until the deferred class is applied to the div.
setTimeout(function() {
  $("#some-div").addClass("deferred");
}, 5000);

.deferred
{
  background: url("http://not-secure.com/not-secure.jpg");
}
Check the source of the page for any external objects (scripts, stylesheets, images, objects) linked using http://... rather than https://... or a relative path. Change the links to use relative paths, e.g. href="/path/to/file", or protocol-relative URLs that omit the http: prefix.
If all that is fine, it could be something included from JavaScript. For example, the Google Analytics code uses document.write to add a new script to the page, but it has code to check for HTTPS in case the calling page is secure:
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
With the release of Chrome version 53 on Windows, Google changed the trust indicator to the circle-i icon. Google has also announced that a new warning will be shown when a website is not using HTTPS:
Starting in January 2017, Chrome will begin labeling HTTP sites that transmit passwords or ask for credit card details as "Not Secure".
If all your resources are indeed secure, then it is a bug: http://code.google.com/p/chromium/issues/detail?id=72015 . Luckily, it has been fixed.