cpan problems with 02packages.details.txt.gz - linux

I'm getting complaints about 02packages.details.txt.gz when I try to install a package with cpan. The error is "Warning: Your /root/.cpan/sources/modules/02packages.details.txt.gz does not contain a Line-Count header."
Apparently, according to cpan, this is not a valid gz file, which it isn't; it's just a web page, which I'll paste below. I do not use a proxy, and I had five mirrors in my configuration. I've deleted them off the list one after the next and I'm still getting the same data back. I have deleted the file and let cpan fetch it again. I have also fetched the page with curl, and I'm seeing a web page, not anything that looks like a gz file.
I have tried "install cpan" from the cpan command line in case I missed an update, but that runs into the exact same problem.
Example fetch:
curl http://noodle.portalus.net/CPAN/modules/02packages.details.txt.gz
Result (parts obfuscated):
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta http-equiv="Content-Script-Type" content="text/javascript">
<script type="text/javascript">
function getCookie(c_name) { // Local function for getting a cookie value
if (document.cookie.length > 0) {
c_start = document.cookie.indexOf(c_name + "=");
if (c_start!=-1) {
c_start=c_start + c_name.length + 1;
c_end=document.cookie.indexOf(";", c_start);
if (c_end==-1)
c_end = document.cookie.length;
return unescape(document.cookie.substring(c_start,c_end));
}
}
return "";
}
function setCookie(c_name, value, expiredays) { // Local function for setting a value of a cookie
var exdate = new Date();
exdate.setDate(exdate.getDate()+expiredays);
document.cookie = c_name + "=" + escape(value) + ((expiredays==null) ? "" : ";expires=" + exdate.toGMTString()) + ";path=/";
}
function getHostUri() {
var loc = document.location;
return loc.toString();
}
setCookie('XXXXXXXXXXXXXXXXXXXXiw_9289NNNNNNNNJAX666', 'MY.IP.ADDRESS.HERE', 10);
try {
location.reload(true);
} catch (err1) {
try {
location.reload();
} catch (err2) {
location.href = getHostUri();
}
}
</script>
</head>
<body>
<noscript>This site requires JavaScript and Cookies to be enabled. Please change your browser settings or upgrade your browser.</noscript>
</body>
</html>
Note that this is the same content that cpan fetches.

CPAN stands for the Comprehensive Perl Archive Network. The network once consisted of a slew of mirrors, but these have been replaced with a transparent Content Delivery Network (CDN).
The once-valuable service offered by the mirrors is therefore no longer needed, so many sites have ceased mirroring CPAN. That appears to be the case for the mirrors your cpan is configured to use.
A simple configuration change is needed to fix the issue. From within the cpan tool, use either
o conf init urllist
or
o conf urllist http://www.cpan.org/
The first picks a mirror from the "official" list. There is only one URL in the list: http://www.cpan.org/. The second picks that URL directly.
To save the changes, follow up with
o conf commit
(This may not be necessary for everyone, but there's no harm in always using it.)
Example:
C:\Users\ikegami>cpan
Loading internal logger. Log::Log4perl recommended for better logging
Unable to get Terminal Size. The Win32 GetConsoleScreenBufferInfo call didn't work. The COLUMNS and LINES environment variables didn't work. at C:\progs\sp5032001001\perl\vendor\lib/Term/ReadLine/readline.pm line 410.
cpan shell -- CPAN exploration and modules installation (v2.28)
Enter 'h' for help.
cpan> o conf init urllist
Now you need to choose your CPAN mirror sites. You can let me
pick mirrors for you, you can select them from a list or you
can enter them by hand.
Would you like me to automatically choose some CPAN mirror
sites for you? (This means connecting to the Internet) [yes] y
Trying to fetch a mirror list from the Internet
Fetching with LWP:
init/MIRRORED.BY
Fetching with LWP:
init/MIRRORED.BY.gz
Fetching with LWP:
http://www.perl.org/CPAN/MIRRORED.BY
Looking for CPAN mirrors near you (please be patient)
.. done!
New urllist
http://www.cpan.org/
commit: wrote 'C:\progs\sp5032001001\perl\lib/CPAN/Config.pm'
cpan> quit
Lockfile removed.
C:\Users\ikegami>
The "commit: wrote" line indicates the changes were saved, so I didn't need to use o conf commit.

It turned out that, indeed, one after another of my mirrors was stale. I'm currently using mirrors.namecheap.com and everything looks good.

Related

How do I cache bust imported modules in es6?

ES6 modules allow us to create a single point of entry like so:
// main.js
import foo from 'foo';
foo()
<script src="scripts/main.js" type="module"></script>
foo.js will be stored in the browser cache. This is desirable until I push a new version of foo.js to production.
It is common practice to add a query string param with a unique id to force the browser to fetch a new version of a js file (foo.js?cb=1234)
How can this be achieved using the es6 module pattern?
There is one solution for all of this that doesn't involve a query string. Let's say your module files are in /modules/. Use relative module resolution (./ or ../) when importing modules, and then rewrite the paths on the server side to include a version number: publish them under something like /modules/x.x.x/ and rewrite that back to /modules/. You then have a single global version number for all modules, set by including your first module with
<script type="module" src="/modules/1.1.2/foo.mjs"></script>
Or, if you can't rewrite paths, just keep the files in a folder named /modules/version/ during development, then rename that folder to the actual version number and update the path in the script tag when you publish.
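If the server side happens to be Node, that rewrite is only a few lines. A minimal sketch assuming Express (the framework, port, and directory layout are my assumptions, not part of the answer above):

// Serve /modules/<version>/foo.mjs from ./modules/foo.mjs on disk, so bumping
// the version segment in the script tag busts the cache without moving files.
// (Express and the paths here are assumptions for illustration.)
const express = require('express');
const path = require('path');

const app = express();

// ':version' matches the 1.1.2 segment and is ignored when resolving files
app.use('/modules/:version', express.static(path.join(__dirname, 'modules')));

app.listen(8080);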
HTTP headers to the rescue. Serve your files with an ETag that is the checksum of the file; S3 does that by default, for example.
When you try to import the file again, the browser will request the file, this time attaching the ETag to a "if-none-match" header: the server will verify if the ETag matches the current file and send back either a 304 Not Modified, saving bandwith and time, or the new content of the file (with its new ETag).
This way if you change a single file in your project the user will not have to download the full content of every other module. It would be wise to add a short max-age header too, so that if the same module is requested twice in a short time there won't be additional requests.
If you add cache busting (e.g. appending ?x={randomNumber} through a bundler, or adding the checksum to every file name) you will force the user to download the full content of every necessary file at every new project version.
In both scenarios you are going to make a request for each file anyway (the imported files will cascade into further requests, which at least may end in small 304s if you use ETags). To avoid that you can use dynamic imports, e.g. if (userClickedOnSomethingAndINeedToLoadSomeMoreStuff) { import('./someModule').then(module => { /* ... */ }) }
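If your server doesn't emit ETags on its own, the conditional-request handshake described above takes only a few lines. A minimal Node sketch (hypothetical; naive static serving with no path sanitization or error handling, so only for illustration):

const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  // Naive: maps the URL straight to a file; don't do this in production.
  const body = fs.readFileSync('.' + req.url);
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  // Conditional request: the browser sends back the ETag it has cached.
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304); // unchanged: empty body, saving bandwidth
    return res.end();
  }

  res.writeHead(200, {
    'ETag': etag,
    'Cache-Control': 'max-age=60', // the short max-age suggested above
    'Content-Type': 'application/javascript'
  });
  res.end(body);
}).listen(8080);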
From my point of view dynamic imports could be a solution here.
Step 1)
Create a manifest file with gulp or webpack. There you have a mapping like this:
export default {
"/vendor/lib-a.mjs": "/vendor/lib-a-1234.mjs",
"/vendor/lib-b.mjs": "/vendor/lib-b-1234.mjs"
};
Step 2)
Create a file with a function to resolve your paths:
import manifest from './manifest.js';
const busted = (file) => {
return manifest[file];
};
export default busted;
Step 3)
Use dynamic import
import busted from '../busted.js';
import(busted('/vendor/lib-b.mjs'))
.then((module) => {
module.default();
});
I gave it a short try in Chrome and it works. Handling relative paths is the tricky part here.
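One way to tame the relative-path issue is to normalize every specifier to an absolute path before the manifest lookup. A sketch of a revised busted.js (the use of import.meta.url as a base is my assumption, not part of the answer above):

// busted.js
import manifest from './manifest.js';

// Resolve './lib-b.mjs' (relative to the caller) or '/vendor/lib-b.mjs'
// to one canonical absolute path, then look that up in the manifest.
const busted = (file, base = import.meta.url) => {
  const abs = new URL(file, base).pathname;
  return manifest[abs] || file; // fall back to the original specifier
};

export default busted;

Callers with relative specifiers would pass their own import.meta.url as base, e.g. busted('./lib-b.mjs', import.meta.url).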
I've created a Babel plugin which adds a content hash to each module name (static and dynamic imports).
import foo from './js/foo.js';
import('./bar.js').then(bar => bar());
becomes
import foo from './js/foo.abcd1234.js';
import('./bar.1234abcd.js').then(bar => bar());
You can then use Cache-Control: immutable to let UAs (browsers, proxies, etc.) cache these versioned URLs indefinitely. Some max-age is probably more reasonable, depending on your setup.
You can use the raw source files during development (and testing), and then transform and minify the files for production.
What I did was handle the cache busting in the web server (nginx in my instance).
Instead of serving
<script src="scripts/main.js" type="module"></script>
serve it like this, where 123456 is your cache-busting key:
<script src="scripts/123456/main.js" type="module"></script>
and include a location in nginx like
location ~ (.+)\/(?:\d+)\/(.+)\.(js|css)$ {
try_files $1/$2.min.$3 $uri;
}
Requesting scripts/123456/main.js will then serve scripts/main.min.js, and an update to the key will result in a new file being served. This solution works well for CDNs too.
Just a thought at the moment but you should be able to get Webpack to put a content hash in all the split bundles and write that hash into your import statements for you. I believe it does the second by default.
You can use an importmap for this purpose; I've tested it at least in Edge. It's just a twist on the old trick of appending a version number or hash to the query string: a bare import doesn't send a query string to the server, but with an importmap it will.
<script type="importmap">
{
"imports": {
"/js/mylib.js": "/js/mylib.js?v=1",
"/js/myOtherLib.js": "/js/myOtherLib.js?v=1"
}
}
</script>
Then in your calling code:
import myThing from '/js/mylib.js';
import * as lib from '/js/myOtherLib.js';
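Bumping ?v= by hand gets tedious, so the importmap can be generated at build time from content hashes. A hypothetical Node sketch (the file list and hash length are my choices):

const crypto = require('crypto');
const fs = require('fs');

// Map each module path to a hash-busted URL; the hash changes only
// when the file contents change, so unchanged modules stay cached.
const files = ['/js/mylib.js', '/js/myOtherLib.js'];
const imports = {};
for (const f of files) {
  const hash = crypto.createHash('md5')
    .update(fs.readFileSync('.' + f))
    .digest('hex')
    .slice(0, 8);
  imports[f] = `${f}?v=${hash}`;
}

// Inline this JSON into the <script type="importmap"> tag at deploy time.
fs.writeFileSync('importmap.json', JSON.stringify({ imports }, null, 2));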
You can use ETags, as pointed out by a previous answer, or alternatively use Last-Modified in relation with If-Modified-Since.
Here is a possible scenario:
The browser first loads the resource. The server responds with Last-Modified: Sat, 28 Mar 2020 18:12:45 GMT and Cache-Control: max-age=60.
If the second time the request is initiated earlier than 60 seconds after the first one, the browser serves the file from cache and doesn't make an actual request to the server.
If a request is initiated after 60 seconds, the browser will consider the cached file stale and send the request with an If-Modified-Since: Sat, 28 Mar 2020 18:12:45 GMT header. The server will check this value and:
If the file was modified after said date, it will issue a 200 response with the new file in the body.
If the file was not modified after that date, the server will issue a 304 Not Modified status with an empty body.
I ended up with this set up for Apache server:
<IfModule headers_module>
<FilesMatch "\.(js|mjs)$">
Header set Cache-Control "public, must-revalidate, max-age=3600"
Header unset ETag
</FilesMatch>
</IfModule>
You can set max-age to your liking.
We have to unset ETag. Otherwise Apache keeps responding with 200 OK every time (it's a bug). Besides, you won't need it if you use caching based on modification date.
A solution that crossed my mind, but that I won't use because I don't like it LOL, is
window.version = `1.0.0`;
let { default: fu } = await import( `./bar.js?v=${ window.version }` );
Using the import "method" allows you to pass in a template literal string. I also added the version to window so that it can be easily accessed no matter how deep the importing file is. The reason I don't like it, though, is that I have to use "await", which means it has to be wrapped in an async method.
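For what it's worth, the await can be avoided or contained. A sketch under the same assumptions as the snippet above (inside a module, top-level await is allowed):

// Inside a <script type="module">, top-level await needs no wrapper:
window.version = '1.0.0';
const { default: fu } = await import(`./bar.js?v=${window.version}`);
fu();

// In a classic script (no top-level await), contain it in an async IIFE:
(async () => {
  const { default: bar } = await import(`./bar.js?v=${window.version}`);
  bar();
})();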
If you are using Visual Studio 2022 and TypeScript to write your code, you can follow a convention of adding a version number to your script file names, e.g. MyScript.v1.ts. When you make changes and rename the file to MyScript.v2.ts, Visual Studio shows a dialog asking whether you want to update all references to the renamed file.
If you click Yes it will go ahead and update all the files that were importing this module to refer to MyScript.v2.ts instead of MyScript.v1.ts. The browser will notice the name change too and download the new modules as expected.
It's not a perfect solution (e.g. if you rename a heavily used module, a lot of files can end up being updated) but it is a simple one!
This works for me:
let url = '/module/foo.js'
url = URL.createObjectURL(await (await fetch(url)).blob())
let foo = await import(url)
I came to the conclusion that cache-busting should not be used with ES Module.
Actually, if you have the versioning in the URL, the version acts as a cache buster. For instance https://unpkg.com/react@18.2.0/umd/react.production.min.js
If you don't have versioning in the URL, use the following HTTP header Cache-Control: max-age=0, no-cache to force the browser to always check if a new version of the file is available.
no-cache tells the browser to cache the file but to always perform a check
no-store tells the browser not to cache the file at all. Don't use it!
Another approach: redirection
unpkg.com solved this problem with HTTP redirection. It is not an ideal solution, because it involves 2 HTTP requests instead of 1:
The first request gets redirected to the latest version of the file (not cached, or cached for a short period of time).
The second request gets the JS file itself (cached).
=> All JS files include the versioning in the URL (and have an aggressive caching strategy)
For instance https://unpkg.com/react@18.2.0/umd/react.production.min.js
=> Removing the version from the URL leads to an HTTP 302 redirect pointing to the latest version of the file
For instance https://unpkg.com/react/umd/react.production.min.js
Make sure the redirection is not cached by the browser, or cached for a short period of time. (unpkg allows 600 seconds of caching, but it's up to you)
About multiple HTTP requests: yes, if you import 100 modules, your browser will make 100 requests. But with HTTP/2 and HTTP/3 this is not a problem anymore, because all requests are multiplexed over a single connection (it is transparent to you).
About recursion:
If the module you are importing also imports other modules, you will want to check out <link rel="modulepreload"> (source: Chrome dev blog).
The modulepreload spec actually allows for optionally loading not just the requested module, but all of its dependency tree as well. Browsers don't have to do this, but they can.
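For example, preloading a module together with a dependency it imports might look like this (the paths are assumptions):

<link rel="modulepreload" href="/js/mylib.js">
<link rel="modulepreload" href="/js/helper.js"> <!-- imported by mylib.js -->
<script type="module" src="/js/mylib.js"></script>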
If you are using this technique in production, I am deeply interested in your feedback!
Append version to all ES6 imports with PHP
I didn't want to use a bundler only because of this, so I created a small function that modifies the import statements of all the JS files in the given directory so that the version is at the end of each file import path in the form of a query parameter. It will break the cache on version change.
This is far from an ideal solution, as all JS file contents are verified by the server on each request and on each version change the client reloads every JS file that has imports instead of just the changed ones.
But it is good enough for my project right now. I thought I'd share.
$assetsPath = '/public/assets';
$version = '0.7';
$rii = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($assetsPath, FilesystemIterator::SKIP_DOTS) );
foreach ($rii as $file) {
if (pathinfo($file->getPathname())['extension'] === 'js') {
$content = file_get_contents($file->getPathname());
$originalContent = $content;
// Matches lines that have 'import ' then any string then ' from ' and single or double quote opening then
// any string (path) then '.js' and optionally numeric v GET param '?v=234' and '";' at the end with single or double quotes
preg_match_all('/import (.*?) from ("|\')(.*?)\.js(\?v=\d*)?("|\');/', $content, $matches);
// $matches array contains the following:
// Key [0] entire matching string including the search pattern
// Key [1] string after the 'import ' word
// Key [2] single or double quotes of path opening after "from" word
// Key [3] string after the opening quotes -> path without extension
// Key [4] optional '?v=1' GET param and [5] closing quotes
// Loop over import paths
foreach ($matches[3] as $key => $importPath) {
$oldFullImport = $matches[0][$key];
// Remove query params if version is null
if ($version === null) {
$newImportPath = $importPath . '.js';
} else {
$newImportPath = $importPath . '.js?v=' . $version;
}
// Old import path potentially with GET param
$existingImportPath = $importPath . '.js' . $matches[4][$key];
// Search for old import path and replace with new one
$newFullImport = str_replace($existingImportPath, $newImportPath, $oldFullImport);
// Replace in file content
$content = str_replace($oldFullImport, $newFullImport, $content);
}
// Replace file contents with modified one
if ($originalContent !== $content) {
file_put_contents($file->getPathname(), $content);
}
}
}
$version === null removes all query parameters of the imports in the given directory.
This adds between 10 and 20 ms per request on my application (approx. 100 JS files) when content is unchanged, and 30-50 ms when content changes.
Using a relative path works for me:
import foo from './foo';
or
import foo from './../modules/foo';
instead of
import foo from '/js/modules/foo';
EDIT
Since this answer was downvoted, I'll update it. The module is not always reloaded: the first time you have to reload the module manually, and after that the browser (at least Chrome) will "understand" that the file was modified and reload it every time it is updated.

How to detect extension on a browser?

I'm trying to detect if an extension is installed on a user's browser.
I tried this:
var detect = function(base, if_installed, if_not_installed) {
var s = document.createElement('script');
s.onerror = if_not_installed;
s.onload = if_installed;
document.body.appendChild(s);
s.src = base + '/manifest.json';
}
detect('chrome-extension://' + addon_id_youre_after, function() {alert('boom!');});
If the browser has the extension installed I will get an error like:
Resources must be listed in the web_accessible_resources manifest key
in order to be loaded by pages outside the extension
GET chrome-extension://invalid net::ERR_FAILED
If not, I will get a different error.
GET chrome-extension://addon_id_youre_after/manifest.json net::ERR_FAILED
I tried to catch the errors (fiddle)
try {
var s = document.createElement('script');
//s.onerror = window.setTimeout(function() {throw new Error()}, 0);
s.onload = function(){alert("installed")};
document.body.appendChild(s);
s.src = 'chrome-extension://gcbommkclmclpchllfjekcdonpmejbdp/manifest.json';
} catch (e) {
debugger;
alert(e);
}
window.onerror = function (errorMsg, url, lineNumber, column, errorObj) {
alert('Error: ' + errorMsg + ' Script: ' + url + ' Line: ' + lineNumber
+ ' Column: ' + column + ' StackTrace: ' + errorObj);
}
So far I am not able to catch the errors.
Any help will be appreciated.
The first error is informative from Chrome, injected directly into the console and not catchable by you (as you noticed).
The GET errors are from the network stack. Chrome denies the load in either case and simulates a network error, which you can catch with an onerror handler on the element itself, but not in the window.onerror handler. Quote, emphasis mine:
When a resource (such as an <img> or <script>) fails to load, an error event using interface Event is fired at the element, that initiated the load, and the onerror() handler on the element is invoked. These error events do not bubble up to window, but (at least in Firefox) can be handled with a single capturing window.addEventListener.
Here's an example that will, at least, detect the network error. Note that, again, you can't catch it, as in prevent it from showing in the console. This was the source of an embarrassing problem when the Google Cast extension (which exposed a resource) was being detected this way.
s.onload = function(){alert("installed")};
s.onerror = function(){alert("I still don't know")};
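Put together, a self-contained version of that probe might look like this (the extension id is the one from the question; the function name is mine):

function probeExtension(id, resource, onFound, onMissing) {
  var s = document.createElement('script');
  s.onload = onFound;    // loaded: installed AND the resource is web-accessible
  s.onerror = onMissing; // network-style error: not installed, or not exposed
  s.src = 'chrome-extension://' + id + '/' + resource;
  document.body.appendChild(s);
}

probeExtension('gcbommkclmclpchllfjekcdonpmejbdp', 'manifest.json',
  function () { alert('installed'); },
  function () { alert("I still don't know"); });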
Notice that you can't distinguish between the two. Internally, Chrome redirects one of the requests to chrome-extension://invalid, but such redirects are transparent to your code, be it when loading a resource (like you do) or when using XHR. Even the new Fetch API, which is supposed to give more control over redirects, can't help, since this is not an HTTP redirect. All it gets is an uninformative network error.
As such, you can't detect whether the extension is not installed or installed, but does not expose the resource.
Please understand that this is intentional. The method you refer to used to work: you could fetch any resource known by name. But it was a way of fingerprinting browsers, something that Google explicitly calls "malicious" and wants to prevent.
As a result, the web_accessible_resources model was introduced in Chrome 18 (all the way back in Aug 2012) to shield extensions from sniffing, requiring them to explicitly declare resources that are exposed. Quote, emphasis mine:
Prior to manifest version 2 all resources within an extension could be accessed from any page on the web. This allowed a malicious website to fingerprint the extensions that a user has installed or exploit vulnerabilities (for example XSS bugs) within installed extensions. Limiting availability to only resources which are explicitly intended to be web accessible serves to both minimize the available attack surface and protect the privacy of users.
With Google actively fighting fingerprinting, only cooperating extensions can be reliably detected. There may be extension-specific hacks, such as specific DOM changes, request interceptions or exposed resources you can fetch, but there is no general method, and an extension may change its "visible signature" at any time. I explained this in the question Javascript check if user has a third party chrome extension installed, but I hope you can now see the reason for it better.
To sum this up, if you indeed were to find a general method that exposed arbitrary extensions to fingerprinting, this would be considered malicious and a privacy bug in Chrome.
Have your Chrome extension look for a specific DIV or other element on your page, with a very specific ID.
For example:-
<div id="ExtensionCheck_JamesEggersAwesomeExtension"></div>

Renaming files with MD5

I was going through gulp packages on the npm website and came across a package called gulp-rename-md5. Is there a scenario where renaming a file using MD5 is useful, and why?
I've used a similar tool for cache busting (called gulp-freeze which adds an MD5 hash of the file contents to the filename).
When you update a CSS or JS file you want users to get the latest version of that file when they visit your site. If your file is named "app.min.js" and you update it, their browsers might not pull down the latest file. If you're using a CDN even clearing the browser cache probably won't request the new file.
I've used gulp-freeze with gulp-filenames (to store the name of the cache busted file) and gulp-html-replace (to actually update the <link /> or <script /> tags with the name of this cache busted file in the html). It's really handy.
CODE SAMPLE
This will get your files, use gulp-freeze to build the hash, use gulp-filenames to store the name of that file, then write that to the html with gulp-html-replace. This is tested and working
var gulp = require("gulp"),
runSequence = require("run-sequence"),
$ = require("gulp-load-plugins")();
gulp.task("build", () => {
runSequence("js", "write-to-view");
});
gulp.task("js", () => {
return gulp
.src("./js/**/*.js")
.pipe($.freeze())
.pipe($.filenames("my-javascript"))
.pipe(gulp.dest("./"));
});
gulp.task("write-to-view", () => {
return gulp
.src("index.html")
.pipe(
$.htmlReplace(
{
js: $.filenames.get("my-javascript")
},
{ keepBlockTags: true }
)
)
.pipe(gulp.dest("./"));
});
EDIT
I should add that index.html just needs these comments so gulp-html-replace knows where to write the <script /> element
<!--build:js-->
<!-- endbuild -->
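After the build, the block between those comments should contain the hashed script tag, something like this (the hash is illustrative):

<!--build:js-->
<script src="app.min.a1b2c3d4.js"></script>
<!-- endbuild -->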
One of the advantages is that you can set up your app to cache files that have an MD5 sum in their name (e.g. mystyle.a345fe.css) for a long time (several months), because you know such a file will never be modified. This saves you some traffic and makes your site faster for returning visitors.
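Under the hood, a content-hash rename is only a few lines; a hypothetical Node sketch of what tools like gulp-freeze do (hash length and naming scheme are my choices):

const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

// app.min.js -> app.min.d41d8cd9.js, derived from the file's contents
function hashRename(file) {
  const buf = fs.readFileSync(file);
  const md5 = crypto.createHash('md5').update(buf).digest('hex').slice(0, 8);
  const { dir, name, ext } = path.parse(file);
  const renamed = path.join(dir, name + '.' + md5 + ext);
  fs.renameSync(file, renamed);
  return renamed; // write this name into the <script>/<link> tags
}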

Re-inject content scripts after update

I have a chrome extension which injects an iframe into every open tab. I have a chrome.runtime.onInstalled listener in my background.js which manually injects the required scripts as follows (Details of the API here : http://developer.chrome.com/extensions/runtime.html#event-onInstalled ) :
background.js
var injectIframeInAllTabs = function(){
console.log("reinject content scripts into all tabs");
var manifest = chrome.app.getDetails();
chrome.windows.getAll({}, function(windows) {
    // Iterate with forEach, not for..in: for..in yields string indices,
    // so win.id was undefined and only the current window was reached.
    windows.forEach(function(win) {
        chrome.tabs.getAllInWindow(win.id, function reloadTabs(tabs) {
            tabs.forEach(function(tab) {
                var scripts = manifest.content_scripts[0].js;
                console.log("content scripts ", scripts);
                for (var k = 0, s = scripts.length; k < s; k++) {
                    chrome.tabs.executeScript(tab.id, {
                        file: scripts[k]
                    });
                }
            });
        });
    });
});
};
This works fine when I first install the extension. I want to do the same when my extension is updated. If I run the same script on update as well, I do not see a new iframe injected. Not only that, if I try to send a message to my content script AFTER the update, none of the messages go through to the content script. I have seen other people also running into the same issue on SO (Chrome: message content-script on runtime.onInstalled). What is the correct way of removing old content scripts and injecting new ones after chrome extension update?
When the extension is updated, Chrome automatically cuts off all the "old" content scripts from talking to the background page, and they also throw an exception if an old content script does try to communicate with the runtime. This was the missing piece for me. All I did was, in chrome.runtime.onInstalled in bg.js, call the same method as posted in the question. That injects another iframe that talks to the correct runtime. At some point the old content script tries to talk to the runtime, which fails; I catch that exception and just wipe out the old content script. Also note that each iframe gets injected into its own "isolated world" (isolated worlds are explained here: http://www.youtube.com/watch?v=laLudeUmXHM), hence the newly injected iframe cannot clear out the old lingering iframe.
Hope this helps someone in future!
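The exception-catching side in the old content script can be sketched like this (the iframe id, message shape, and cleanup are placeholders; the answer above only describes the approach):

// After an update, the old content script is "orphaned": any chrome.runtime
// call throws ("Extension context invalidated"). Use that to remove stale UI.
function removeStaleIframe() {
  var frame = document.getElementById('my-extension-iframe'); // placeholder id
  if (frame && frame.parentNode) frame.parentNode.removeChild(frame);
}

try {
  chrome.runtime.sendMessage({ ping: true }, function () {
    // In some Chrome versions the failure surfaces here instead of throwing
    if (chrome.runtime.lastError) removeStaleIframe();
  });
} catch (e) {
  removeStaleIframe();
}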
There is no way to "remove" old content scripts (apart from reloading the page in question using window.location.reload, which would be bad).
If you want to be more flexible about what code you execute in your content script, use the "code" parameter of the executeScript function, which lets you pass in a raw string of JavaScript code. This works well if your content script is just one big function (i.e. content_script_function) that lives in background.js.
in background.js:
function content_script_function(relevant_background_script_info) {
// this function will be serialized as a string using .toString()
// and will be called in the context of the content script page
// do your content script stuff here...
}
function execute_script_in_content_page(info, tabid) {
    chrome.tabs.executeScript(tabid,
        {code: "(" + content_script_function.toString() + ")(" +
               JSON.stringify(info) + ");"});
}
// bind(null, info) pre-fills the info argument; onUpdated then supplies the
// tab id as the next argument (onInstalled doesn't, so in that case the
// script runs in the active tab).
chrome.tabs.onUpdated.addListener(
    execute_script_in_content_page.bind(null, { reason: 'onUpdated',
                                                otherinfo: chrome.app.getDetails() }));
chrome.runtime.onInstalled.addListener(
    execute_script_in_content_page.bind(null, { reason: 'onInstalled',
                                                otherinfo: chrome.app.getDetails() }));
Where relevant_background_script_info contains information about the background page, i.e. which version it is, whether there was an upgrade event, and why the function is being called. The content script page still maintains all its relevant state. This way you have full control over how to handle an "upgrade" event.

Chrome inline installation works only half of the time

We've set up a simple inline webstore installer for our app.
The app site has been verified. The inline installation does work correctly for half of us inside our company, but it doesn't work for the other half. They get:
Uncaught TypeError: Cannot call method 'install' of undefined testsupport.html:15
Uncaught SyntaxError: Unexpected token )
It's as though the chrome or chrome.webstore variable isn't initialized.
Why does the inline installation work only on some machines but not on others? All these machines have the same chrome browser version.
TIA
I've not seen this issue before but I will try to provide a breakdown of the setup I use to manage inline installations for the multiple Chrome extensions on my website.
Within the head node of every page (optionally, only pages that may include one or more Install links) I add the required links to each extension/app page on the Chrome Web Store. This allows me to easily add install links anywhere on the page for various extensions/apps. The JavaScript simply binds an event handler to each of the install links once the DOM has finished loading. This event handler's sole purpose is to install the extension/app that it links to when clicked and then to change its state to prevent further install attempts.
<!DOCTYPE html>
<html>
<head>
...
<!-- Link for each extension/app page -->
<link rel="chrome-webstore-item" href="https://chrome.google.com/webstore/detail/dcjnfaoifoefmnbhhlbppaebgnccfddf">
<script>
// Ensure that the DOM has fully loaded
document.addEventListener('DOMContentLoaded', function() {
// Support other browsers
var chrome = window.chrome || {};
if (chrome.app && chrome.webstore) {
// Fetch all install links
var links = document.querySelectorAll('.js-chrome-install');
// Create "click" event listener
var onClick = function(e) {
var that = this;
// Attempt to install the extension/app
chrome.webstore.install(that.href, function() {
// Change the state of the button
that.innerHTML = 'Installed';
that.classList.remove('js-chrome-install');
// Prevent any further clicks from attempting an install
that.removeEventListener('click', onClick);
});
// Prevent the opening of the Web Store page
e.preventDefault();
};
// Bind "click" event listener to links
for (var i = 0; i < links.length; i++) {
links[i].addEventListener('click', onClick);
}
}
});
</script>
...
</head>
<body>
...
<!-- Allow inline installation links to be easily identified -->
<a href="https://chrome.google.com/webstore/detail/dcjnfaoifoefmnbhhlbppaebgnccfddf" class="js-chrome-install">Install</a>
...
</body>
</html>
In order for this system to work fully, you also need to support scenarios where the user has returned to your website after installing your extension/app. Although the official documentation suggests using chrome.app.isInstalled, this doesn't work when multiple extensions/apps can be installed from a single page. To get around this issue you can simply add a content script to your extension/app, like the following install.js file:
// Fetch all install links for this extension/app running
var links = document.querySelectorAll('.js-chrome-install[href$=dcjnfaoifoefmnbhhlbppaebgnccfddf]');
// Change the state of all links
for (var i = 0; i < links.length; i++) {
links[i].innerHTML = 'Installed';
// Website script will no longer bind "click" event listener as this will be executed first
links[i].classList.remove('js-chrome-install');
}
Then you just need to modify your manifest.json file to ensure this content script is executed on your domain.
{
...
"content_scripts": [
{
"js": ["install.js"],
"matches": ["*://*.yourdomain.com/*"],
"run_at": "document_end"
}
]
...
}
This will result in the content script being run before the JavaScript on your website so there will be no install links with the js-chrome-install class by the time it is executed, thus no event handlers will be bound etc.
Below is an example of how I use this system:
Homepage: http://neocotic.com
Project Homepage: http://neocotic.com/template
Project Source Code: https://github.com/neocotic/template
Your inline installation markup is:
<a href="#" onclick="chrome.webstore.install()">
CaptureToCloud Chrome Extension Installation
</a>
(per one of the comments, it used javascript:void(0) before, which is equivalent to # in this case).
Your <a> tag both navigates the page and has an onclick handler. In some cases, the navigation takes place before the onclick handler finishes running, which disturbs the code that supports inline installation.
If you switch to using a plain <span> (styled to look like a link, if you'd like), then you should no longer have this problem:
<span onclick="chrome.webstore.install()" style="text-decoration: underline; color:blue">
CaptureToCloud Chrome Extension Installation
</span>
Alternatively, you can return false from your onclick handler to prevent the navigation:
<a href="#" onclick="chrome.webstore.install(); return false;">
CaptureToCloud Chrome Extension Installation
</a>
(though since you're not actually linking anywhere, there isn't much point in using the <a> tag)
I get the error you mentioned AND a popup window that allows me to install the extension. So probably everybody gets the error, but for some it prevents installation.
I got rid of the error by replacing javascript:void() with # in the href.
CaptureToCloud Chrome Extension Installation
