I have this code to create a devtools panel in a Chrome extension. For learning purposes, I want to inspect the network requests of a certain website and get their bodies.
chrome.devtools.panels.create('Spotify test', '', '/index.html');
chrome.devtools.network.onRequestFinished.addListener((request) => {
  // console.log(request);
  if (request.method === 'GET' && request.response.status === '206') {
    console.log(request);
    request.getContent((content, encoding) => {
      console.log(content, encoding);
    });
  }
});
The problem is that the if (request.method === 'GET' && request.response.status === '206') branch is never entered; I'm able to log the request object of the callback only if I put the console.log() outside the statement. I want to log only requests that use the GET method and have the 206 status code. As I read in the documentation, I can use the getContent() method on the request object to get the body of the response; is this right? Is there any error in the code that prevents the console.log() from firing?
NB: I'm using the devtools console that opens when I click the Inspect menu item inside the panel I've added.
Use request.request.method instead of request.method: the listener receives a HAR entry, so the HTTP method lives on the nested request object.
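A minimal sketch of the corrected check. The sample entry below is a hypothetical stand-in for what onRequestFinished actually delivers; note also that in HAR entries the status is a number, so compare against 206, not the string '206':

```javascript
// The listener receives a HAR entry: the HTTP method sits on entry.request,
// and entry.response.status is a number, not a string.
function isPartialGet(entry) {
  return entry.request.method === 'GET' && entry.response.status === 206;
}

// Hypothetical HAR-shaped sample, mimicking what onRequestFinished passes in:
const sample = { request: { method: 'GET' }, response: { status: 206 } };
console.log(isPartialGet(sample)); // true
```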
Simplified question
Why, when using express.js & request.js, do the following two examples:
request.get(url)
  .on('response', (requestjsResponse) => {
    requestjsResponse.pipe(res);
  })
and
request.get(url, (err, requestjsResponse, requestjsBody) => {
  res.send(requestjsBody)
})
tend not to produce the same results, even when requestjsBody contains the expected content?
Detailed question
I have two express.js versions of a route handler that handle file proxying for multiple file types. The code uses the standard express.js req/res/next notation. The important non-code background for this issue is that the two most commonly returned types are handled as follows:
PDF: shall be opened within the browser; their size is usually no less than 18K (according to the content-length header)
EML: shall be downloaded; their size is usually smaller than 16K (according to the content-length header)
Both handler versions use request.js. One uses the
get(url: string, callback: (Error, Response, Body) => void)
form, which I'll refer to as the callback form, where the entire body is expected inside the callback. In this case, the response is sent to the user with a plain express.js res.send(Body). The other one uses the form
get(url: string).on(event: 'response', listener: (request.Response) => void)
which I'll refer to as the event/pipe form; it transfers the response to the end user by piping it via request.Response.pipe(res) inside the 'response' handler. Details are provided in the code listing.
I'm unable to find the difference between those two forms, but:
In the case of .eml files (MIME message/rfc822; you can treat them as fancy HTML), both versions work exactly the same way: the file is downloaded nicely.
In the case of .pdf, the event/pipe form get(url).on('response', callback) lets me successfully transfer the PDF document to the client. When I use the callback form (i.e. get(url: string, callback: (Error, Response, Body) => void)), even though the body I peek at in the debugger seems to be a complete PDF (it contains the PDF header, EOF marker, etc.), the client receives only some strange preamble declaring HTML:
<!doctype html><html><body style='height: 100%; width: 100%; overflow: hidden; margin:0px; background-color: rgb(82, 86, 89);'><embed style='position:absolute; left: 0; top: 0;'width='100%' height='100%' src='about:blank' type='application/pdf' internalid='FD93AFE96F19F67BE0799686C52D978F'></embed></body></html>
but no PDF document is received afterwards. Chrome claims that it was unable to load the document.
Please see code:
Non-working callback version:
request.get(url, (err, documentResponse, documentBody) => {
  if (err) {
    logger.error('Document Fetch error:');
    logger.error(err);
  } else {
    const documentResponseContentLength = Number.parseInt(documentResponse.headers['content-length'], 10);
    if (documentResponseContentLength === 0 || Number.isNaN(documentResponseContentLength)) {
      logger.warn('No content provided for requested document or length header malformed');
      res.redirect(get404Navigation());
    }
    if (mimetype === 'application/pdf') {
      logger.info(' overwriting Headers (PDF)');
      res.set('content-type', 'application/pdf');
      // eslint-disable-next-line max-len, prefer-template
      res.set('content-disposition', 'inline; filename="someName.pdf"');
      logger.info('Document Download Headers (overridden):', res.headers);
    }
    if (mimetype === 'message/rfc822') {
      logger.info(' overwriting Headers (message/rfc822)');
      res.set('content-type', 'message/rfc822');
      // eslint-disable-next-line max-len, prefer-template
      res.set('content-disposition', 'attachment; filename="someName.eml"');
      logger.info('Document Download Headers (overridden):', res.headers);
    }
    res.send(documentBody); /* Sending message to client */
  }
})
.on('data', (d) => {
  console.log('We are debugging here');
});
Working event based/piped version:
const r = request
  .get(url)
  .on('response', (documentsResponse) => {
    if (Number.parseInt(documentsResponse.headers['content-length'], 10) !== 0) {
      // Override headers for PDF and TIFF; these occasionally arrive incomplete
      if (mimetype === 'application/pdf') {
        logger.info(' overwriting Headers (PDF)');
        res.set('content-type', 'application/pdf');
        res.set('content-disposition', 'inline; filename="someName.pdf"');
        logger.info('Document Download Headers (overridden):', documentsResponse.headers);
      }
      if (mimetype === 'message/rfc822') {
        logger.info(' overwriting Headers (message/rfc822)');
        res.set('content-type', 'message/rfc822');
        res.set('content-disposition', 'attachment; filename="someName.eml"');
        logger.info('Document Download Headers (overridden):', res.headers);
      }
      r.pipe(res); /* Response is piped to client */
    } else {
      res.redirect(get404Navigation());
    }
  })
  .on('data', (d) => {
    console.log('We are debugging here');
  });
Even though the part with r.pipe(res) seems extra suspicious (see where r is declared and where it is used), this is the version that works correctly for both cases.
I assume the issue might be caused by the nature of sending multipart content, so I added additional on('data', (d) => {}) callbacks and set breakpoints to see when the response is ended/piped vs. when the data handler is called. The results match my expectations:
In the request(url, (err, response, body)) case, the data handler is called twice before the callback executes; the entire body is accessible inside the handler, so it's even more obscure to me that I'm unable to simply res.send it.
In the request.get(url).on('response') case, the piping to res happens first, then the data handler is called twice. I believe the internal guts of Node.js's HTTP engine are doing their asynchronous trick and pushing response chunks one after another as each is received.
I'd be glad for any explanation of what I'm doing wrong and what I can change to make the callback version work as expected for the PDF case.
Epilogue:
Why is such code used? Our backend retrieves PDF data from an external server that is not exposed to the public internet, but for legacy reasons some headers are set incorrectly (mainly Content-Disposition), so we intercept them and act as a kind of alignment proxy between the data source and the client.
I have written the piece of code below:
static async postSearchResult(httpContext: HttpContext, injector: Injector) {
  const log = injector.get(Log);
  const service = injector.get(Service);
  try {
    let result = await service.redirectToUI(JSON.parse(httpContext.getRequestBody()));
    httpContext.ok(result, 200, {'Content-Type': 'application/json'});
  } catch (e) {
    httpContext.fail(e, 500);
  }
}

protected redirectToUI(response: any) {
  // If any POST API call happened, it should open a web browser and pass some field as a query parameter
  window.open("https://www.google.com?abc=response.abc");
  return response ? response : "failed";
}
Here I am getting the following error:
Execution failed ReferenceError: Window is not defined
What am I doing wrong?
What you are trying to accomplish doesn't make much sense. Lambda is a back-end service. To open a new browser window, you need to use front-end JavaScript, not back-end Node (on the back-end, you have no access to the front-end window object).
If you want to open a new browser window as a reaction to some back-end response, you can send an indicator in the HTTP response (e.g. shouldOpenNewWindow: true as part of the response object), parse that response on the front-end, and if the indicator is present, issue the window.open command. But it has to be done on the front-end.
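A minimal sketch of that indicator pattern; the names shouldOpenNewWindow and windowUrl are my own here, not from any framework:

```javascript
// Back end (Node/Lambda): return the indicator alongside the normal payload
// instead of calling window.open, which does not exist server-side.
function buildResponse(data) {
  return {
    result: data,
    shouldOpenNewWindow: true,
    windowUrl: 'https://www.google.com?abc=' + encodeURIComponent(data.abc),
  };
}

// Front end: parse the response and act on the indicator there, where
// window.open exists. The opener is injectable so the logic stays testable.
function handleResponse(body, open) {
  if (body.shouldOpenNewWindow) {
    open(body.windowUrl);
  }
  return body.result;
}

// Example wiring (browser side): handleResponse(json, (u) => window.open(u));
```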
I'm experimenting with Node.js and web scraping. In this case, I'm trying to scrape the most recent songs from a local radio station for display. With this particular website, body returns nothing. When I try using google or any other website, body has a value.
Is this a feature of the website I'm trying to scrape?
Here's my code:
var request = require('request');
var url = "http://www.radiomilwaukee.org";

request(url, function(err, resp, body) {
  if (!err && resp.statusCode == 200) {
    console.log(body);
  } else {
    console.log(err);
  }
});
That's weird, the website you're requesting doesn't seem to return anything unless the accept-encoding header is set to gzip. With that in mind, using this gist will work: https://gist.github.com/nickfishman/5515364
I ran the code within that gist, replacing the URL with "http://www.radiomilwaukee.org" and see the content within the sample.html file once the code has completed.
If you'd rather have access to the web page's content within the code, you could do something like this:
// ...
req.on('response', function(res) {
  var body = '', encoding, unzipped, chunk;
  if (res.statusCode !== 200) throw new Error('Status not 200');
  encoding = res.headers['content-encoding'];
  if (encoding == 'gzip') {
    unzipped = res.pipe(zlib.createGunzip());
    unzipped.on("readable", function() {
      // collect the content in the body variable
      // (read() returns null once the internal buffer is drained)
      while ((chunk = unzipped.read()) !== null) {
        body += chunk.toString();
      }
    });
  }
  // ...
Is it possible to create a Chrome extension that modifies HTTP response bodies?
I have looked in the Chrome Extension APIs, but I haven't found anything to do this.
In general, you cannot change the response body of an HTTP request using the standard Chrome extension APIs.
This feature is being requested at 104058: WebRequest API: allow extension to edit response body. Star the issue to get notified of updates.
If you want to edit the response body for a known XMLHttpRequest, inject code via a content script to override the default XMLHttpRequest constructor with a custom (full-featured) one that rewrites the response before triggering the real event. Make sure that your XMLHttpRequest object is fully compliant with Chrome's built-in XMLHttpRequest object, or AJAX-heavy sites will break.
In other cases, you can use the chrome.webRequest or chrome.declarativeWebRequest APIs to redirect the request to a data:-URI. Unlike the XHR-approach, you won't get the original contents of the request. Actually, the request will never hit the server because redirection can only be done before the actual request is sent. And if you redirect a main_frame request, the user will see the data:-URI instead of the requested URL.
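A rough sketch of that data:-URI redirect; the URL filter below is hypothetical, and registering a blocking listener assumes the webRequest and webRequestBlocking permissions in the manifest. The helper that builds the URI is plain JavaScript:

```javascript
// Build a data: URI carrying the replacement body.
// (Buffer is used here so the sketch runs under Node; inside an actual
// extension you'd base64-encode with btoa(text) instead.)
function toDataUri(mime, text) {
  return 'data:' + mime + ';base64,' + Buffer.from(text, 'utf8').toString('base64');
}

// In the extension's background page (hypothetical filter):
// chrome.webRequest.onBeforeRequest.addListener(
//   () => ({ redirectUrl: toDataUri('text/html', '<h1>replaced</h1>') }),
//   { urls: ['https://example.com/*'] },
//   ['blocking']
// );

console.log(toDataUri('text/plain', 'hi')); // data:text/plain;base64,aGk=
```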
I just released a Devtools extension that does just that :)
It's called tamper, it's based on mitmproxy and it allows you to see all requests made by the current tab, modify them and serve the modified version next time you refresh.
It's a pretty early version but it should be compatible with OS X and Windows. Let me know if it doesn't work for you.
You can get it here http://dutzi.github.io/tamper/
How this works
As @Xan commented below, the extension communicates through Native Messaging with a python script that extends mitmproxy.
The extension lists all requests using chrome.devtools.network.onRequestFinished.
When you click one of the requests, it downloads its response using the request object's getContent() method, and then sends that response to the python script, which saves it locally.
It then opens the file in an editor (using call for OS X or subprocess.Popen for Windows).
The python script uses mitmproxy to listen to all communication made through that proxy; if it detects a request for a file that was saved, it serves the saved file instead.
I used Chrome's proxy API (specifically chrome.proxy.settings.set()) to set a PAC file as the proxy setting. That PAC file redirects all communication to the python script's proxy.
One of the greatest things about mitmproxy is that it can also modify HTTPS communication. So you have that as well :)
Like @Rob W said, I've overridden XMLHttpRequest, and this is the result for modifying any XHR request on any site (it works like a transparent modification proxy):
var _open = XMLHttpRequest.prototype.open;
window.XMLHttpRequest.prototype.open = function (method, URL) {
  var _onreadystatechange = this.onreadystatechange,
      _this = this;
  _this.onreadystatechange = function () {
    // catch only completed 'api/search/universal' requests
    if (_this.readyState === 4 && _this.status === 200 && ~URL.indexOf('api/search/universal')) {
      try {
        //////////////////////////////////////
        // THIS IS ACTIONS FOR YOUR REQUEST //
        // EXAMPLE:                         //
        //////////////////////////////////////
        var data = JSON.parse(_this.responseText); // {"fields": ["a","b"]}
        if (data.fields) {
          data.fields.push('c', 'd');
        }
        // rewrite responseText
        Object.defineProperty(_this, 'responseText', {value: JSON.stringify(data)});
        /////////////// END //////////////////
      } catch (e) {}
      console.log('Caught! :)', method, URL/*, _this.responseText*/);
    }
    // call original callback
    if (_onreadystatechange) _onreadystatechange.apply(this, arguments);
  };
  // detect any onreadystatechange changing
  Object.defineProperty(this, "onreadystatechange", {
    get: function () {
      return _onreadystatechange;
    },
    set: function (value) {
      _onreadystatechange = value;
    }
  });
  return _open.apply(_this, arguments);
};
For example, this code can be used successfully with Tampermonkey for making modifications on any site :)
Yes. It is possible with the chrome.debugger API, which grants extensions access to the Chrome DevTools Protocol, which supports HTTP interception and modification through its Network API.
This solution was suggested by a comment on Chrome Issue 487422:
For anyone wanting an alternative which is doable at the moment, you can use chrome.debugger in a background/event page to attach to the specific tab you want to listen to (or attach to all tabs if that's possible, haven't tested all tabs personally), then use the network API of the debugging protocol.
The only problem with this is that there will be the usual yellow bar at the top of the tab's viewport, unless the user turns it off in chrome://flags.
First, attach a debugger to the target:
chrome.debugger.getTargets((targets) => {
  let target = /* Find the target. */;
  let debuggee = { targetId: target.id };
  chrome.debugger.attach(debuggee, "1.2", () => {
    // TODO
  });
});
Next, send the Network.setRequestInterceptionEnabled command, which will enable interception of network requests:
chrome.debugger.getTargets((targets) => {
  let target = /* Find the target. */;
  let debuggee = { targetId: target.id };
  chrome.debugger.attach(debuggee, "1.2", () => {
    chrome.debugger.sendCommand(debuggee, "Network.setRequestInterceptionEnabled", { enabled: true });
  });
});
Chrome will now begin sending Network.requestIntercepted events. Add a listener for them:
chrome.debugger.getTargets((targets) => {
  let target = /* Find the target. */;
  let debuggee = { targetId: target.id };
  chrome.debugger.attach(debuggee, "1.2", () => {
    chrome.debugger.sendCommand(debuggee, "Network.setRequestInterceptionEnabled", { enabled: true });
  });
  chrome.debugger.onEvent.addListener((source, method, params) => {
    if (source.targetId === target.id && method === "Network.requestIntercepted") {
      // TODO
    }
  });
});
In the listener, params.request will be the corresponding Request object.
Send the response with Network.continueInterceptedRequest:
Pass a base64 encoding of your desired HTTP raw response (including HTTP status line, headers, etc!) as rawResponse.
Pass params.interceptionId as interceptionId.
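The raw response has to be assembled by hand before base64-encoding it. A sketch of that step (untested, like the rest of this answer; buildRawResponse is my own helper name, and in an extension you'd base64-encode with btoa rather than Node's Buffer):

```javascript
// Assemble a raw HTTP response (status line + headers + blank line + body)
// and base64-encode it, as Network.continueInterceptedRequest expects.
function buildRawResponse(status, statusText, headers, body) {
  const statusLine = 'HTTP/1.1 ' + status + ' ' + statusText;
  const headerLines = Object.entries(headers).map(([k, v]) => k + ': ' + v);
  const raw = [statusLine].concat(headerLines).join('\r\n') + '\r\n\r\n' + body;
  return Buffer.from(raw, 'utf8').toString('base64'); // btoa(raw) in the browser
}

// Inside the Network.requestIntercepted listener:
// chrome.debugger.sendCommand(debuggee, 'Network.continueInterceptedRequest', {
//   interceptionId: params.interceptionId,
//   rawResponse: buildRawResponse(200, 'OK', { 'Content-Type': 'text/plain' }, 'hello'),
// });
```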
Note that I have not tested any of this, at all.
While Safari has this feature built-in, the best workaround I've found for Chrome so far is to use Cypress's intercept functionality. It cleanly allows me to stub HTTP responses in Chrome. I call cy.intercept then cy.visit(<URL>) and it intercepts and provides a stubbed response for a specific request the visited page makes. Here's an example:
cy.intercept('GET', '/myapiendpoint', {
  statusCode: 200,
  body: {
    myexamplefield: 'Example value',
  },
})
cy.visit('http://localhost:8080/mytestpage')
Note: You may also need to configure Cypress to disable some Chrome-specific security settings.
The original question was about Chrome extensions, but I notice that it has branched out into different methods, going by the upvotes on answers that have non-Chrome-extension methods.
Here's a way to kind of achieve this with Puppeteer. Note the caveat mentioned on the originalContent line - the fetched response may be different to the original response in some circumstances.
With Node.js:
npm install puppeteer node-fetch@2.6.7
Create this main.js:
const puppeteer = require("puppeteer");
const fetch = require("node-fetch");

(async function() {
  const browser = await puppeteer.launch({headless: false});
  const page = await browser.newPage();
  await page.setRequestInterception(true);
  page.on('request', async (request) => {
    let url = request.url().replace(/\/$/g, ""); // remove trailing slash from urls
    console.log("REQUEST:", url);
    let originalContent = await fetch(url).then(r => r.text()); // TODO: Pass request headers here for more accurate response (still not perfect, but more likely to be the same as the "actual" response)
    if (url === "https://example.com") {
      request.respond({
        status: 200,
        contentType: 'text/html; charset=utf-8', // For JS files: 'application/javascript; charset=utf-8'
        body: originalContent.replace(/example/gi, "TESTING123"),
      });
    } else {
      request.continue();
    }
  });
  await page.goto("https://example.com");
})();
Run it:
node main.js
With Deno:
Install Deno:
curl -fsSL https://deno.land/install.sh | sh # linux, mac
irm https://deno.land/install.ps1 | iex # windows powershell
Download Chrome for Puppeteer:
PUPPETEER_PRODUCT=chrome deno run -A --unstable https://deno.land/x/puppeteer@16.2.0/install.ts
Create this main.js:
import puppeteer from "https://deno.land/x/puppeteer@16.2.0/mod.ts";

const browser = await puppeteer.launch({headless: false});
const page = await browser.newPage();
await page.setRequestInterception(true);
page.on('request', async (request) => {
  let url = request.url().replace(/\/$/g, ""); // remove trailing slash from urls
  console.log("REQUEST:", url);
  let originalContent = await fetch(url).then(r => r.text()); // TODO: Pass request headers here for more accurate response (still not perfect, but more likely to be the same as the "actual" response)
  if (url === "https://example.com") {
    request.respond({
      status: 200,
      contentType: 'text/html; charset=utf-8', // For JS files: 'application/javascript; charset=utf-8'
      body: originalContent.replace(/example/gi, "TESTING123"),
    });
  } else {
    request.continue();
  }
});
await page.goto("https://example.com");
Run it:
deno run -A --unstable main.js
(I'm currently running into a TimeoutError with this that will hopefully be resolved soon: https://github.com/lucacasonato/deno-puppeteer/issues/65)
Yes, you can modify HTTP responses in a Chrome extension. I built ModResponse (https://modheader.com/modresponse) that does that. It can record and replay your HTTP responses, modify them, add delays, and even use the HTTP response from a different server (like from your localhost).
The way it works is to use the chrome.debugger API (https://developer.chrome.com/docs/extensions/reference/debugger/), which gives you access to Chrome DevTools Protocol (https://chromedevtools.github.io/devtools-protocol/). You can then intercept the request and response using the Fetch Domain API (https://chromedevtools.github.io/devtools-protocol/tot/Fetch/), then override the response you want. (You can also use the Network Domain, though it is deprecated in favor of the Fetch Domain)
The nice thing about this approach is that it will just work out of the box. No desktop app installation required. No extra proxy setup. However, it will show a debugging banner in Chrome (which you can hide by adding an argument when launching Chrome), and it is significantly more complicated to set up than other APIs.
For examples on how to use the debugger API, take a look at the chrome-extensions-samples: https://github.com/GoogleChrome/chrome-extensions-samples/tree/main/mv2-archive/api/debugger/live-headers
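For a sense of the shape of the Fetch-domain calls involved, here is an untested sketch; parameter names come from the DevTools Protocol's Fetch.enable/Fetch.fulfillRequest docs, while the pattern and the mocked payload are made up:

```javascript
// Fetch.fulfillRequest takes the replacement body base64-encoded.
// (Buffer keeps this sketch runnable under Node; an extension would use btoa.)
const payload = JSON.stringify({ mocked: true });
const body64 = Buffer.from(payload, 'utf8').toString('base64');

// Enable interception for every URL, then answer each paused request
// with the overridden body:
// chrome.debugger.sendCommand(debuggee, 'Fetch.enable', {
//   patterns: [{ urlPattern: '*' }],
// });
// chrome.debugger.onEvent.addListener((source, method, params) => {
//   if (method === 'Fetch.requestPaused') {
//     chrome.debugger.sendCommand(source, 'Fetch.fulfillRequest', {
//       requestId: params.requestId,
//       responseCode: 200,
//       responseHeaders: [{ name: 'Content-Type', value: 'application/json' }],
//       body: body64,
//     });
//   }
// });
```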
I've just found this extension; it does a lot of other things, but modifying API responses in the browser works really well: https://requestly.io/
Follow these steps to get it working:
Install the extension
Go to HttpRules
Add a new rule and add a url and a response
Enable the rule with the radio button
Go to Chrome and you should see the response is modified
You can have multiple rules with different responses and enable/disable them as required. Unfortunately, though, I've not found a way to have a different response per request if the URL is the same.
I started using zombie.js, but I have some beginner questions:
1.) How do I test AJAX calls?
For example, I have a PHP AJAX action (Zend):
public function ajaxSomeAction()
{
    $oRequest = $this->getRequest();
    if ($oRequest->isXmlHttpRequest() === false || $oRequest->isPost() === false) {
        throw new Zend_Controller_Action_Exception('Only AJAX & POST request accepted', 400);
    }
    // process check params...
}
My zombie.js testing code gets an HTTP 400.
2.) How do I call a jQuery plugin's public methods? For example, I have this code:
(function($) {
  $.manager.addInvitation = function()
  {
    // some code ....
  }

  $.manager = function(options)
  {
    // some code
  }
})(jQuery);
I try:
Browser.visit(url, function(err, browser, status)
{
  // not work
  browser.window.jQuery.manager.addInviation();
  // also not work
  browser.document.jQuery.manager.addInvitation();
  browser.window.$.manager.addInvitation();
  browser.evaluate('$.manager.addInvitation();');
})
3.) How do I modify headers with zombie.js? For example, I want to add the header x-performace-bot: zombie1 to requests sent using the visit method.
Browser = require('zombie');
Browser.visit(url, {debug: true}, function(err, browser, status)
{
  // send request with header x-performace-bot
});
After quick testing (on zombie 0.4.21):
ad 1.
As you're checking ($oRequest->isXmlHttpRequest()) whether the request is an XML HTTP request, you have to specify (in zombie) the X-Requested-With header with a value of XMLHttpRequest.
ad 2.
// works for me (logs the jQuery function - meaning it's there)
console.log(browser.window.jQuery);
// that works too...
browser.window.$
Your function must be undefined, or there are some other JavaScript errors on your page.
ad 3.
There's a header option, which you can pass just as you do with debug.
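Putting 1) and 3) together, a hedged sketch of the visit options; the headers key name is an assumption on my part (the zombie docs for your version are the authority), and only the options object itself is shown:

```javascript
// Hypothetical options object for Browser.visit(url, options, callback);
// the option key name (headers) is assumed, so check it against your zombie version.
const visitOptions = {
  debug: true,
  headers: {
    'X-Requested-With': 'XMLHttpRequest', // satisfies Zend's isXmlHttpRequest() check (question 1)
    'x-performace-bot': 'zombie1',        // custom marker header (question 3; spelling kept from the question)
  },
};

// Browser.visit(url, visitOptions, function (err, browser, status) { ... });
```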