I want to cache all JSON resources in a directory on my website's CDN with a Workbox service worker.
The resource URLs are:
https://cdn.example.com/mydir/*.json
and my service worker is at:
https://example.com/service-worker.js
I wrote my caching code as below:
workbox.routing.registerRoute(
  /mydir\/.+\.json$/,
  new workbox.strategies.CacheFirst({
    plugins: [
      new workbox.cacheableResponse.Plugin({
        statuses: [0, 200]
      })
    ]
  })
);
But I get this error from Workbox:
The regular expression '/mydir\/.+\.json$/' only partially matched against the cross-origin URL 'https://cdn.example.com/mydir/test.json'. RegExpRoute's will only handle cross-origin requests if they match the entire URL.
How can I cache these resources in my service worker? Any ideas?
You have to tune your regex so that it matches the whole URL, domain included. It's a bit more verbose, but it makes sure you actually wanted to cache this specific content from this exact domain.
So the path alone (/mydir/bla/blu/bla.json) is not enough; instead, match against the whole address (https://sub.domain.com/mydir/bla/blu/bla.json).
I found my answer in the Workbox routing docs:
However, for cross-origin requests, regular expressions must match the beginning of the URL. The reason for this is that it's unlikely that with a regular expression new RegExp('/styles/.*\\.css') you intended to match third-party CSS files such as:
https://cdn.third-party-site.com/styles/main.css
https://cdn.third-party-site.com/styles/nested/file.css
https://cdn.third-party-site.com/nested/styles/directory.css
If you did want this behaviour, you just need to ensure that the regular expression matches the beginning of the URL. If we wanted to match the requests for https://cdn.third-party-site.com we could use the regular expression
new RegExp('https://cdn\\.third-party-site\\.com.*/styles/.*\\.css')
which would match:
https://cdn.third-party-site.com/styles/main.css
https://cdn.third-party-site.com/styles/nested/file.css
https://cdn.third-party-site.com/nested/styles/directory.css
So I changed my routing regex to:
/^https:\/\/cdn\.example\.com\/mydir\/.+\.json$/
and now Workbox caching is working.
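Putting it together, the working registration looks like this (a minimal sketch; the cdn-json cache name is my own addition, not part of the original snippet):

workbox.routing.registerRoute(
  // Match the full cross-origin URL, not just the path.
  /^https:\/\/cdn\.example\.com\/mydir\/.+\.json$/,
  new workbox.strategies.CacheFirst({
    // Optional named cache (an assumption, for clarity only).
    cacheName: 'cdn-json',
    plugins: [
      new workbox.cacheableResponse.Plugin({
        // Status 0 allows opaque responses from no-CORS cross-origin requests.
        statuses: [0, 200]
      })
    ]
  })
);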
I found that chrome.declarativeNetRequest only supports static rules. What I want is to call some custom methods before actions like redirect/request. I haven't found a solution so far, and I'm not sure whether I can still do this under Manifest V3.
There are two use cases for my extension.
First, before the redirect, I need to execute a custom method:
chrome.webRequest.onBeforeRequest.addListener(
  function(requestDetails) {
    // I can get the id from requestDetails.url,
    // then do some custom business logic.
    custom_function(requestDetails.url);
    return {redirectUrl: "javascript:"};
  },
  {urls: ["url_pattern?id=*"]},
  ["blocking"]
);
Second, before some requests, I want to add/modify requestHeaders according to the user's browser:
chrome.webRequest.onBeforeSendHeaders.addListener(
  function(details) {
    details.requestHeaders.push({
      "name": "User-Agent",
      "value": navigator.userAgent + "version_1.0.0"
    });
    return {requestHeaders: details.requestHeaders};
  },
  {
    urls: ["*://url_pattern"],
    types: ["xmlhttprequest"]
  },
  ["blocking", "requestHeaders"]
);
@wOxxOm Thank you very much for your patient answer!
I prefer the spinner.html approach, but I have another problem: I can't set regexSubstitution to the extension's address, and while I can use extensionPath, the corresponding capture groups don't work there. My filter is:
"regexFilter": "google.com*"
The following both fail:
"extensionPath": "/spinner.html?url=\\0" (the capture groups don't work here)
"regexSubstitution": "spinner.html?url=\\0" (the extension's address can't be used)
Is my configuration incorrect?
Adding/deleting headers can only use static values, as shown in the official example.
Conditionally adding/deleting/modifying headers based on response headers is tracked in https://crbug.com/1141166.
Nontrivial transformations that exceed the functionality of the actions listed in the documentation naturally cannot be re-implemented.
When https://crbug.com/1262147 is fixed, we will be able to define a declarativeNetRequest rule that redirects to a page inside your extension via regexSubstitution or extensionPath and appends the original URL as a parameter. This page will act as an interstitial: it will display some kind of UI or a simple progress spinner, process the URL parameters, and redirect the current tab to another URL.
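For illustration, here is a sketch of what such a rule could look like once that bug is fixed; the spinner.html page, the rule id, and the regexFilter are hypothetical, and regexSubstitution targeting an extension page does not work until https://crbug.com/1262147 is resolved:

chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [{
    id: 1,
    action: {
      type: 'redirect',
      redirect: {
        // \0 is the whole matched URL, appended so spinner.html can
        // read it from its query string and redirect the tab itself.
        regexSubstitution: chrome.runtime.getURL('spinner.html') + '?url=\\0'
      }
    },
    condition: {
      regexFilter: '^https://www\\.example\\.com/.*',
      resourceTypes: ['main_frame']
    }
  }]
});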
In many cases this approach would introduce flicker and unnecessary visual fuss while the interstitial is displayed shortly, thus frustrating users who will likely abandon using such extensions altogether. Chromium team members who work on extensions seem to think this obscene workaround is acceptable so it's likely they'll roll with it, see also https://crbug.com/1013582.
Use the observational webRequest (without 'blocking' parameter) and chrome.tabs.update to redirect the tab. The downside is that the original request will be sent to the remote server. And this approach obviously won't work for iframes, to redirect those you'll have to inject/declare a content script, to which your webRequest listener would send a message with a frameId parameter.
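A minimal sketch of that observational approach; the URL pattern and the interstitial target are placeholders:

chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    // Ignore requests not tied to a tab (e.g. from the background page).
    if (details.tabId < 0) return;
    // The original request has already gone out; we only steer the tab.
    chrome.tabs.update(details.tabId, {
      url: 'https://example.com/interstitial?from=' + encodeURIComponent(details.url)
    });
  },
  // main_frame only: as noted above, iframes need the content-script route.
  {urls: ['*://www.example.com/*'], types: ['main_frame']}
);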
Keep a visible tab with an HTML page from your extension, and use the blocking chrome.webRequest inside its scripts. It's terrible UX, of course, even though it's endorsed by the Chromium extensions team; if many extensions use this kludge, users' browsers will have to keep a lot of such tabs open.
P.S. The blocking webRequest will be still available for force-installed extensions via policies, but it's not something most users would be willing to use.
I have a Chrome extension built for scraping a few particular pages and then generating docs with that data on screens built into the extension. It requires regular updates. I keep getting the "Publishing will be delayed" warning below when I go to publish in the Chrome Web Store. The message suggests that I use activeTab and narrower host permissions, even though my manifest contains the following:
"permissions": ["storage",
"declarativeContent",
"activeTab",
"downloads"],
"background": {
"scripts": ["background.js"],
"persistent": false
},
In the background.js, I have a chrome.declarativeContent.onPageChanged.addRules statement with the following chrome.declarativeContent.PageStateMatcher conditions:
pageUrl: {hostContains: ''}
pageUrl: {hostContains: 'secure.vermont.gov'}
pageUrl: {urlContains: 'chrome-extension://'}
I replaced the first one (intended for local files) with codeforbtv.org so there was no wildcard. Nonetheless, I got the same warning from the store.
The only tabs function I use is in the following code:
chrome.tabs.executeScript(null, { file: 'payload.js' });
payload.js is two lines of code that grab a large HTML block and send it via chrome.runtime.sendMessage.
The relevant codebase can be found here in the extensionDirectory folder: https://github.com/codeforbtv/expunge-vt.
The extension can work on the sample HTML files in the sampleDocketHTML folder.
hostContains: '' matches every URL, because the empty string is present in every string, so it's a broad host permission.
To match local files you can probably use schemes: ['file'], but that is still a broad host permission, so I guess you'll have to forget about local files.
urlContains: 'chrome-extension://' is also a broad host permission from the point of view of the web store's automatic detector: evidently the script doesn't analyze the pattern, so it's treated as just a substring match.
An extension can't normally work on another extension's pages anyway, so you probably don't need this condition.
hostContains: 'secure.vermont.gov' is also a broad host permission, because the pattern is not anchored to the TLD (top-level domain), so it may occur anywhere in a hostname and thus match totally irrelevant hosts.
Either use hostSuffix: '.secure.vermont.gov', which will also match the dot-less version as well as any subdomain, or use hostEquals for an exact match.
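A sketch of the tightened rule, assuming only secure.vermont.gov pages need to match (the ShowPageAction action is an assumption; substitute whatever actions the original rules used):

chrome.declarativeContent.onPageChanged.removeRules(undefined, function () {
  chrome.declarativeContent.onPageChanged.addRules([{
    conditions: [
      new chrome.declarativeContent.PageStateMatcher({
        // The leading dot also matches the bare host secure.vermont.gov
        // as well as any subdomain, per the note above.
        pageUrl: {hostSuffix: '.secure.vermont.gov'}
      })
    ],
    actions: [new chrome.declarativeContent.ShowPageAction()]
  }]);
});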
The warning is permissions-based and is a general notice that review will take longer if you use more sensitive permissions in your extension.
Anecdotally, I recently used wildcard host permissions (*://*/*) for a published extension and received the same warning. The review process ended up taking 3 days until it was approved.
In general you should expect longer review times when using sensitive permissions, as Google's bandwidth to manually review extensions is currently reduced.
I want to host a Netlify website where you can search for specific users. Currently it's like this: https://example.com/users?q=exampleuser (it's really https://example.com/users.html, but pretty-URL'ed).
But what I want is to make the URL query pretty. So the end result should be https://example.com/users/exampleuser, but it should still behave like a URL query so the JavaScript can make calls based on it.
e.g.:
https://example.com/users?q=test123 to https://example.com/users/test123
https://example.com/users?q=example456 to https://example.com/users/example456
The following rewrite rule (in your _redirects file) will work:
/users q=:q /users/:q 200
When you navigate to https://example.com/users?q=exampleuser, you must have an existing endpoint at https://example.com/users/exampleuser/, or you will get a 404 status code, though the original path will still be rewritten.
Note: if you have an existing endpoint at /users/, this method will not fall back to it when the rewritten path is an invalid endpoint. That is, you can't fall back to the /users/ endpoint if the query path is invalid.
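On the JavaScript side, the pretty path can then be read back out of location.pathname instead of the query string; a minimal sketch, assuming the page is served for /users/<name> paths:

// For https://example.com/users/exampleuser, recover the username
// from the path rather than from a ?q= parameter.
var segments = window.location.pathname.split('/').filter(Boolean);
var username = segments[0] === 'users' ? segments[1] || null : null;
if (username) {
  // ...make the same API calls that previously used ?q=<username>.
  console.log('Looking up user:', username);
}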
The app I'm working on has a controller that issues templates to the front end (single-page app). It's very basic, and simply consists of
@path = params[:path]
render template: "templates/#{@path}", layout: nil
My concern here, however, is the direct use of the user's input. Everything about this feels like it can be attacked with something as simple as path traversal. The route for this is
get "/templates/:path.html" => "templates#file", constraints: { path: /.+/ }, defaults: { format: 'html' }
I've tried multiple things to attempt a path traversal attack, such as:
request /templates/path/to/../somewhere/else.html
request /templates?path=/path/to/../../something.rb
request /templates/index.html?path=/path/to/../../config/something.html
request /templates/path/../../../file.html
Fortunately, I haven't had any success with these. Requests that just start with /templates and don't specify anything after it don't match the route, thanks to the constraint, so that is good.
It seems that when the route is matched, Rails doesn't allow you to override the path parameter through a URL parameter, so I don't seem to be able to inject it there.
The ones that interest me are the first and last examples above, where Rails seems to internally normalize the requested URL before invoking the routes file. When I request /templates/path/to/../somewhere/else.html, my console output shows a request for /templates/path/somewhere/else.html. When I make a request for /templates/path/../../../file.html, the log shows a request for /file.html.
Am I missing something somewhere that will leave the app open to security issues, or is this just Rails being sensible and protecting itself for me?
UPDATE
I've done some more digging, and if I try some URL encoding then I can cause the server to simply not respond at all. If I request /templates/%2e%2e%2f%2e%2e%2f%2e%2e%2ffresult.html, I just get an empty response with a connection: close header.
I assume that a parameter parser higher up in the Rack stack is checking all URLs for this type of attack? Regardless, my original question still stands: am I missing something here?
I am trying to append a few extra parameters to the URL that the user typed, before the page gets loaded. Is it possible to do?
For example, if the user types www.google.com, I would like to append ?q=query to the URL (final: www.google.com?q=query).
Thanks
The webRequest API might be what you need. This code goes in your background page:
chrome.webRequest.onBeforeRequest.addListener(
  function(details) {
    if (details.url == "http://www.google.com/")
      return {redirectUrl: "http://www.google.com/?q=defaultquery"};
  },
  {urls: ["http://www.google.com/*"]},
  ["blocking"]
);
This is an extremely specific rule that redirects visits to http://www.google.com/ to http://www.google.com/?q=defaultquery, but I think you can see how to expand it to cover more functionality.
Note that this will reroute all attempts to reach http://www.google.com/, including Ajax requests and iframes.
Per the documentation, you will need to add the webRequest and webRequestBlocking permissions, along with host permissions for every host you plan to intercept:
"permissions": [
"webRequest",
"webRequestBlocking",
"*://*.google.com/",
...
],
This is an old question, but I am answering it for future readers.
Modifying query parameters is a little tricky because you can end up in an infinite loop, and Chrome/Firefox may detect it and process whatever the current state of the request URL is.
I have faced this situation in my Chrome extension Requestly, where users used a Replace Rule to replace www.google.com with www.google.com?q=query, or something similar.
The problem with this approach is that the browser intercepts the request URL again after the query parameter is added, so the parameter would be appended multiple times and corrupt the URL. So you have to ensure one of the following (a sketch follows this list):
Do not intercept a request once it has been redirected.
Check whether the parameter already exists and, if so, do not redirect.
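A sketch of the second check, reusing the google.com example from the earlier answer (the parameter name q and the default value are placeholders):

chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    var url = new URL(details.url);
    // Guard: if the parameter is already present, do nothing; otherwise
    // this listener would fire again on its own redirect and loop forever.
    if (url.searchParams.has('q')) return;
    url.searchParams.set('q', 'defaultquery');
    return {redirectUrl: url.toString()};
  },
  {urls: ['*://www.google.com/*']},
  ['blocking']
);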
As correctly pointed out by @apsillers in his answer, you have to use the webRequest API to perform any modifications to the URL. Please have a look at his answer and write your code accordingly.
Just in case you are looking for an already available solution, consider trying Requestly's Query Parameter Rule.
For Firefox, you can download Requestly from its home page.