I'm writing a Chrome extension to fall back to the Wayback Machine when a link fails.
webNavigation seems sufficient for the DNS-lookup case, but in general I don't see a way to detect link failure with webNavigation alone.
For example, http://www.google.com/adasdasdasdasdasdasd is a 404 link, but I still get the webNavigation onDOMContentLoaded and onCompleted events with no indication of the HTTP error (no onErrorOccurred is fired).
I was really hoping to avoid needing the webRequest permission with wide-open host patterns. Is there a way to detect HTTP failure that I'm missing?
Send an XMLHttpRequest HEAD request in onBeforeNavigate and check the response status code in the onreadystatechange callback. If it's 404, use chrome.tabs.update to change the tab's URL.
The drawback of sending an additional request for every page is insignificant, since web pages usually generate many more requests while loading anyway.
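A rough sketch of that approach in a background script (assuming the webNavigation and tabs permissions plus host access to the pages being probed; the Wayback Machine URL prefix is illustrative):

    chrome.webNavigation.onBeforeNavigate.addListener(function (details) {
      if (details.frameId !== 0) return; // only probe top-frame navigations

      var xhr = new XMLHttpRequest();
      xhr.open('HEAD', details.url, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        if (xhr.status === 404) {
          // Redirect the failing tab to the archived copy instead.
          chrome.tabs.update(details.tabId, {
            url: 'https://web.archive.org/web/' + details.url
          });
        }
      };
      xhr.send();
    });

Note the HEAD probe itself is subject to the usual cross-origin rules, which is why the host access matters here.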
I have created a Chrome extension that can handle the current page's ajax requests, but it can't handle ajax requests created by other extensions (requests I can see in the Chrome network tool). How can I handle ajax requests whose initiators are other extensions' scripts?
Please take a look at this:
The webRequest API only exposes requests that the extension has permission to see, given its host permissions. Moreover, only the following schemes are accessible: http://, https://, ftp://, file://, or chrome-extension://. In addition, even certain requests with URLs using one of the above schemes are hidden, e.g., chrome-extension://other_extension_id where other_extension_id is not the ID of the extension to handle the request, https://www.google.com/chrome, and others (this list is not complete). Also synchronous XMLHttpRequests from your extension are hidden from blocking event handlers in order to prevent deadlocks.
Note that for some of the supported schemes the set of available events might be limited due to the nature of the corresponding protocol. For example, for the file: scheme, only onBeforeRequest, onResponseStarted, onCompleted, and onErrorOccurred may be dispatched.
According to that, could you please check your host permissions settings and add chrome-extension://*, or just <all_urls>, there?
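For example, the manifest entry could look something like this (a sketch only, using the broad <all_urls> option mentioned above; whether you also need the webRequest permission depends on your extension):

    "permissions": ["webRequest", "<all_urls>"]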
I currently use res.error('message'); to show a message on the page that I load using res.redirect(url); in my Express/Node.js app.
Is there another way of showing a message, if I don't want it to be error message, but something else?
Thanks!
res.redirect(url) will actually issue an HTTP 302 redirect which will cause your user's browser to make a brand new request for whatever page is specified by the value in the url variable.
If you want to show the user a message after the redirect, you need to store that message somewhere that persists between requests, and then read it on the subsequent page load. This is often accomplished by storing the message in a cookie or in server-side session state, and the pattern is commonly called a flash message.
The express-flash middleware can help you with this, although I'm not 100% certain of its compatibility with Express 4.
You also might find this SO question helpful.
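For what it's worth, here's a minimal sketch of that flash pattern, assuming express-session and express-flash are installed; the routes and message text are made up:

    var express = require('express');
    var session = require('express-session');
    var flash = require('express-flash');

    var app = express();

    // Flash messages need session state to survive the redirect.
    app.use(session({ secret: 'keyboard cat', resave: false, saveUninitialized: true }));
    app.use(flash());

    app.post('/save', function (req, res) {
      req.flash('info', 'Your changes were saved.'); // stored in the session
      res.redirect('/');                             // the message survives the 302
    });

    app.get('/', function (req, res) {
      // Reading the messages also clears them from the session.
      res.send(req.flash('info').join('<br>'));
    });

    app.listen(3000);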
You could use one of the other res methods to send the message: res.send('message') or res.end('message') seem the most appropriate.
You could otherwise display a page with res.render(view, templateParams).
More details here, in the Express response documentation.
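For instance, a hypothetical route (the view name and template parameter are made up) could look like:

    app.get('/done', function (req, res) {
      // Renders the 'message' view with a template parameter
      // holding the text to display.
      res.render('message', { text: 'Your changes were saved.' });
    });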
Use flash. It provides templates for error, success, and informational messages. Very useful if your users are coming in with web browsers.
I have developed a Google Chrome extension that uses the YouTube Data API v2. My permissions field in the manifest looks like this, because the script is injected into pages under youtube.com and I also need access to tabs:
"permissions": ["tabs", "*://*.youtube.com/*"]
This also works when I make a request to the YouTube Data API v2, because the request goes to http://gdata.youtube.com/, so it is the same domain. But now I am migrating to YouTube Data API v3, and the requests must be made to https://www.googleapis.com/youtube/v3/ (note HTTPS instead of HTTP, too). However, surprisingly, my requests work perfectly without adding any new permission.
I know I am asking about something that doesn't seem to be a problem, but personally I consider any behavior in my software that I don't understand a problem. Why does this happen? Am I not supposed to add a permission such as "*://*.googleapis.com/*" in order for my XMLHttpRequest calls to the API to work?
I also have some kind of guess about this: HTTP access control (CORS) headers. My requests do send an Origin header with the value chrome-extension://myExtensionId, and the response from the API also contains the following header:
Access-Control-Allow-Origin: chrome-extension://myExtensionId
But could this be the reason Chrome is allowing me to make a cross-origin XMLHttpRequest without any extra permission defined in the manifest? I'm not sure, and apparently this is not documented anywhere in the Google APIs, YouTube Data API v3, or Chrome extensions developer documentation.
If Chrome does not find the permission in the manifest, it treats the request as a normal request. This means the request will still succeed when the right CORS headers are set; otherwise it will fail because of the same-origin policy.
The Google API JavaScript library explicitly mentions support for CORS:
Making a request: Option 3
Google APIs support CORS. Please visit the CORS page for more information on using CORS to make requests.
If possible, I still recommend adding the permission to the manifest file. For simple requests, this does not bring any advantages. For non-simple requests, it will halve the number of requests: non-simple requests are always preceded by a preflight (OPTIONS) request, which checks whether the client is permitted to access the resource.
By adding the permission to the manifest file, Chrome will not fall back to CORS, and will always use a single network request. Great!
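In this case that would mean extending the permissions line shown earlier, for example (the googleapis pattern mirrors the one discussed in the question):

    "permissions": ["tabs", "*://*.youtube.com/*", "*://*.googleapis.com/*"]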
However... you might think again if you're the author of an already-deployed extension. When new origin permissions are added to the manifest file, the extension is disabled until the user approves it again. The dialog box shows "Remove extension" and "Enable" next to each other, so there's a chance of losing the user.
If you wish, you can overcome this problem by using an optional permission, activated from the options page, as sketched below. Clearly explain in layman's terms that the option will improve the speed of the extension, and don't forget to mention that additional permissions will be requested.
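A small sketch of that optional-permission flow on an options page (the optional_permissions manifest entry, the origin pattern, and the button id are assumptions for illustration):

    // Requires e.g. "optional_permissions": ["*://*.googleapis.com/*"] in the manifest.
    // chrome.permissions.request must be called from within a user gesture.
    document.getElementById('speed-up').addEventListener('click', function () {
      chrome.permissions.request({
        origins: ['*://*.googleapis.com/*']
      }, function (granted) {
        if (granted) {
          console.log('Permission granted; no CORS preflight needed any more.');
        } else {
          console.log('Permission declined; requests keep using plain CORS.');
        }
      });
    });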
I want to verify that the images, CSS, and JavaScript files that are part of my page are being cached by my browser. I've used Fiddler and Google Page Speed, and it's unclear whether either is giving me the information I need. Fiddler shows the HTTP 304 response for images, CSS, and JavaScript, which should tell the browser to use the cached copy. Google Page Speed shows the 304 response but doesn't show a transfer size of zero; instead it shows the full file size of the resource. Note also that I have seen Google Page Speed report a 200 response but then put the word (cache) next to the 200 (so the status is "200 (cache)"), which doesn't make a lot of sense.
Any other suggestions as to how I can verify whether the server is sending back images, css, javascript after they've been retrieved and cached by a previous page hit?
In-browser HTTP debuggers are probably the easiest to use in your situation. Try HTTPFox for Firefox, or Opera, which has Dragonfly built in. Both of these indicate when the local browser cache has been used.
If you appear to be getting conflicting information, then Wireshark/tcpdump will show you whether the objects are being downloaded or not, since it monitors the actual network packets being transmitted and received. If you haven't looked at network traces before, this might be a little confusing at first.
In Fiddler, check that the response body (for images, CSS) is empty. Also make sure your max-age is long enough in the Cache-Control header. Most browsers (Safari, Firefox) have good traffic analyzer tools.
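For instance, a response header along these lines (the one-week value is just an example) lets the browser reuse the file without even re-requesting it until it expires:

    Cache-Control: max-age=604800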
Your server's access logs can give you a lot of information on how effective your caching strategy is.
Let's say you have an HTML page, /home.html, which references /some.js and /lookandfeel.css. For a given time period, aggregate the number of requests to all three files.
If your caching is effective, you should see a huge number of requests for home.html but very few for the CSS or JS. Somewhere in between is when you see an identical number of requests for all three, but the CSS and JS get 304s. The worst case is when you are only seeing 200s.
Obviously, you have to know your application to do such a study. The JS and CSS files may be shared across multiple pages, which may complicate your analysis. But the general idea still holds.
The advantage of such a study is that you find out how effective your caching strategy is for your users, as opposed to "Is caching working on my machine?". However, this is no substitute for using an HTTP proxy / Fiddler.
An HTTP 304 response is forbidden to have a body. Hence the full response isn't sent; you just get back the headers of the 304 response. But the round trip itself isn't free, so sending proper expiration information is good practice: it avoids making the conditional request that returns the 304 in the first place.
http://www.fiddler2.com/redir/?id=httpperf explains this topic in some detail.
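If you want to watch the conditional request happen outside the browser, here's a rough sketch using Node's built-in http module (the host, path, and date are placeholders):

    var http = require('http');

    http.get({
      host: 'example.com',
      path: '/logo.png',
      // Ask the server to send the file only if it changed since this date.
      headers: { 'If-Modified-Since': 'Tue, 01 Jan 2013 00:00:00 GMT' }
    }, function (res) {
      // A 304 here means "use your cached copy" and carries no body.
      console.log('Status:', res.statusCode);
    });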
This is a deceptively simple thing I feel I should know more about, but I don't, and I can't find much about it.
The question is: How exactly does a browser know a web page has changed?
Intuitively I would say that F5 refreshes the cache for a given page, and that the cache is used for history navigation only and has an expiration date. That leads me to think the browser never knows if a web page has changed, and just reloads the page once the cache is gone. But I am sure this is not always the case.
Any pointers appreciated!
Browsers will usually get this information through HTTP headers sent with the page.
For example, the Last-Modified header tells the browser how old the page is. A browser can send a simple HEAD request to the page to get the Last-Modified value; if it's newer than what the browser has in its cache, the browser can reload the page.
There are a bunch of other headers related to caching as well (like Cache-Control). Check out: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
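As a rough illustration of that HEAD check, here's a sketch with Node's http module (the host and path are placeholders):

    var http = require('http');

    var req = http.request({
      method: 'HEAD',
      host: 'example.com',
      path: '/'
    }, function (res) {
      // Compare this against the Last-Modified of the cached copy.
      console.log('Last-Modified:', res.headers['last-modified']);
    });
    req.end();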
Don't guess; read the docs. Here's a friendly, but authoritative introduction to the subject.
Web browsers send HTTP requests and receive HTTP responses; they then display the contents of those responses. Typically the HTTP responses contain HTML, and many HTML elements may need further requests to fetch the various parts of the page. For example, each image is typically another HTTP request.
There are HTTP headers that indicate whether a page is new or not, for example the last-modified date. Web browsers typically use a conditional GET (with a conditional header field) or a HEAD request to detect changes. A HEAD request receives only the headers, not the actual resource that was requested.
A conditional GET HTTP request will return a status of 304 Not Modified if there are no changes.
The page can then later change based on:
- User input: after user input, changes can happen via JavaScript code running without a postback, or via a new request to the server that fetches a whole new (possibly identical) page.
- JavaScript code that runs once the page is already loaded and changes things at any time; for example, a timer that updates something on the page.
- HTML tags with built-in behavior, such as scrolling or blinking.
You're on the right track, and as Jonathan mentioned, nothing is better than reading the docs. However, if you only want a bit more information:
There are HTTP response headers that let the server set the cacheability of a page, which matches your expiration-date idea. Another important construct is the HTTP HEAD request, which essentially retrieves the MIME type and Content-Length (if available) for a given page. Browsers can use HEAD requests to validate what is in their caches...
There is definitely more info on the subject though, so I would suggest reading the docs...