Does anyone know how to mock the network requests that Puppeteer makes when it is used as part of the system under test (but NOT to run the test)? For example, the system under test uses Puppeteer to fetch a URL and return some information about the page. The test is run using Jest. I normally use nock for this, but its interceptors don't seem to catch Puppeteer's network traffic.
You can use request interception:
await page.setRequestInterception(true);
page.on('request', req => {
  // Answer the request with canned data instead of letting it hit the network
  req.respond(data); // data = { status, contentType, body }
});
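A slightly fuller sketch of the same idea, assuming the test can get hold of the page instance the system under test uses (the target URL and payload below are made up):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on('request', req => {
    if (req.url() === 'https://example.com/target') {
      // Serve a canned response for the URL under test
      req.respond({
        status: 200,
        contentType: 'text/html',
        body: '<html><body><h1>mocked</h1></body></html>',
      });
    } else {
      req.continue(); // let every other request through untouched
    }
  });

  // ...exercise the system under test with this page here...

  await browser.close();
})();

Note that every intercepted request must be responded to, aborted, or continued, otherwise the page will hang.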
I would put a proxy between your SUT and your test runner. I tried to use BrowserMob Proxy in the past but eventually found an alternative solution (definitely not applicable in your case) so I can't really help with this.
It may still be worth a shot, though:
BrowserMob Proxy allows you to manipulate HTTP requests and responses, capture HTTP content, and export performance data as a HAR file. BMP works well as a standalone proxy server, but it is especially useful when embedded in Selenium tests.
Related
I want to test the Web API of a system which has also a CLI accessible via a telnet connection. The tests must verify that the response given to the API requests matches the output of commands given to the CLI.
The problem is that as far as I know there's no way to initiate a telnet connection from inside a postman script.
What will be the best way to achieve this?
Maybe postman is not the best tool for this job?
What will be the best way to achieve this?
A dedicated script written in Node.js or any other programming language, with or without the help of a test framework like Jest or Karma. Definitely not Postman.
You could have Postman run the above-mentioned script from a pre-request script, but that just makes the whole process more complicated. Almost any modern scripting language can send HTTP requests on its own, so there is no need for Postman; a sketch of such a script follows below.
Maybe postman is not the best tool for this job?
You are right, it's not.
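A minimal sketch of such a script in Node.js with Jest, assuming a hypothetical CLI command "show status" on port 23 and a hypothetical /api/status endpoint (host names, commands, and the comparison are all made up, and the raw-socket "telnet" handling is deliberately naive):

const net = require('net');

// Send one command over a raw TCP (telnet) connection and collect the reply.
function runCliCommand(host, port, command) {
  return new Promise((resolve, reject) => {
    const socket = net.createConnection({ host, port });
    let output = '';
    socket.on('data', chunk => { output += chunk.toString(); });
    socket.on('error', reject);
    socket.on('connect', () => socket.write(command + '\r\n'));
    // Naive: give the CLI a moment to answer, then close the connection.
    setTimeout(() => { socket.end(); resolve(output); }, 1000);
  });
}

test('API response matches CLI output', async () => {
  const cliOutput = await runCliCommand('device.local', 23, 'show status');
  const res = await fetch('http://device.local/api/status'); // Node 18+; otherwise use axios or node-fetch
  const apiStatus = await res.json();

  // The actual comparison depends entirely on the device's output format.
  expect(cliOutput).toContain(apiStatus.status);
});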
I do integration testing in my Vue SPA with webdriverio and cucumberjs.
When loaded, my application makes requests to fetch data from the API server.
In my tests I'd like to modify/stub the data returned by the API server without touching the endpoint (i.e. block the request and return my own JSON).
Nock, moxios and others won't work, as my application is loaded through Selenium.
I am aware of json-server and WireMock, but I don't want to modify my source code (URLs) just for testing purposes.
Ideally Selenium/webdriverio would intercept the request, or add custom code to the web page, and return my JSON.
What options do I have?
Selenium was designed for end-to-end testing and doesn't provide any means to mock or stub requests.
There are, however, a few ways to do it:
Launch the browser behind a proxy server which intercepts the requests and mocks or redirects them (see browsermob-proxy).
Launch the browser with a web extension to intercept and mock the requests.
You can either code your own web extension or you can use one like Wiremock extension if you are using Chrome/Chromium.
Inject some JavaScript into the page to hook XMLHttpRequest.
Since Selenium doesn't provide a way to inject code before the page is loaded, this only works for requests triggered afterwards by mouse/keyboard input (a sketch follows this list).
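A rough sketch of the third option with WebdriverIO's browser.execute, assuming a hypothetical /api/users endpoint to stub (the URL match and the fake payload are made up, and this only catches code that goes through XMLHttpRequest, not window.fetch):

// Installed after page load, so it only affects XHRs fired afterwards.
await browser.execute((stubUrl, stubBody) => {
  const realOpen = XMLHttpRequest.prototype.open;
  const realSend = XMLHttpRequest.prototype.send;

  XMLHttpRequest.prototype.open = function (method, url, ...rest) {
    this.__stubbed = String(url).includes(stubUrl); // remember whether to stub this call
    return realOpen.call(this, method, url, ...rest);
  };

  XMLHttpRequest.prototype.send = function (body) {
    if (!this.__stubbed) return realSend.call(this, body);
    // Short-circuit: never hit the network, fake a successful response instead.
    Object.defineProperty(this, 'readyState', { value: 4 });
    Object.defineProperty(this, 'status', { value: 200 });
    Object.defineProperty(this, 'responseText', { value: stubBody });
    Object.defineProperty(this, 'response', { value: stubBody });
    setTimeout(() => {
      this.dispatchEvent(new Event('readystatechange'));
      this.dispatchEvent(new Event('load'));
      this.dispatchEvent(new Event('loadend'));
    }, 0);
  };
}, '/api/users', JSON.stringify([{ id: 1, name: 'stubbed user' }]));

This is fragile by nature; the proxy or extension approaches are more robust if you can afford the extra setup.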
Due to some limitations of the web services I am proxying, I have to inject some JS code that allows the iframe to access the parent window and perform some actions.
I have built a proxy with node-http-proxy, which works pretty nicely. However, I have spent countless hours trying to modify the content that is sent to the user (on my own, with harmon, etc.) without any success. I have found some articles and even some questions here, but all of them are outdated and no longer useful.
I was wondering if someone could give me an actual example of how to do this, because I have not been able to, and maybe it is simply impossible at this point?
I haven't tried harmon, but I did try cheerio and it works.
However, I used http-mitm-proxy and not node-http-proxy.
If you are using http-mitm-proxy, you need to return a promise in the response handler. Otherwise, the proxy continues to send the original response without picking up your changes.
I have recently written another proxy at:
https://github.com/noeltimothy/noelsproxy
I'm going to add response handling to this soon. It uses a callback mechanism, which means it won't return the response until the caller signals it to.
You should be able to use cheerio and alter the content in jQuery style.
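For reference, a sketch of the http-mitm-proxy plus cheerio combination described above (the port and the injected script are placeholders, and the exact API surface may differ between http-mitm-proxy versions):

const Proxy = require('http-mitm-proxy');
const cheerio = require('cheerio');

const proxy = Proxy();
proxy.use(Proxy.gunzip); // decompress responses so the HTML can be edited

proxy.onResponse((ctx, callback) => {
  const chunks = [];
  // The body length will change, so drop the original Content-Length header.
  delete ctx.serverToProxyResponse.headers['content-length'];

  ctx.onResponseData((ctx, chunk, cb) => {
    chunks.push(chunk);    // buffer the body instead of streaming it through
    return cb(null, null); // hold the data back until it has been rewritten
  });

  ctx.onResponseEnd((ctx, cb) => {
    const $ = cheerio.load(Buffer.concat(chunks).toString('utf8'));
    // Placeholder injection: whatever script lets the iframe reach its parent.
    $('head').append('<script>/* injected JS goes here */</script>');
    ctx.proxyToClientResponse.write($.html());
    return cb();
  });

  return callback();
});

proxy.listen({ port: 8081 });

In practice you would also check the Content-Type and only rewrite text/html responses, passing everything else through untouched.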
I'm attempting to use grunt-contrib-qunit to run a pre-existing suite of qunit tests (testing parsing of ajax request results) in headless mode with Phantom on Windows 8.
The tests complete fine in these scenarios:
When the remote page is accessed directly from any browser without Fiddler or another proxy running
When Phantom runs the tests from a command prompt with Fiddler open and running
Oddly, if I don't have Fiddler open and monitoring the requests, the AJAX requests I'm testing never seem to initialize. I've checked my default IE LAN settings and there is no proxy enabled; I've also tried flipping the Auto Detect Settings checkbox there, with no change.
Any thoughts?
Details on my setup:
Node v0.10.4
Latest grunt-contrib-qunit
Windows 8
QUnit is divided into 4 or 5 modules with setup and teardown tasks in some modules, asynchronous and synchronous tests, and autorun is set to false.
Update:
If I turn off the Fiddler options "Reuse client connections" and "Reuse connections to servers", I get the same failure behavior as when Fiddler is off. This led me to believe it's a problem with connections being closed prematurely, so I tried setting a custom keep-alive header, but it still errors out.
Update 2:
I still have doubts here, because the page itself loads fine and only the requests fail, but it looks like this could be related to NTLM authentication. Fiddler might somehow be facilitating the handshake. There is an open issue for NTLM on the PhantomJS GitHub page.
Update 3:
After continued troubleshooting this evening, it looks like the issue is only with authentication on POST requests; GET requests work fine. I'm working around this for now by routing all requests through an ASHX handler and thus dropping the auth component. The only thing I had to change was to disable web security in PhantomJS to let the cross-domain requests through.
I was going to say you need to turn off security, which is done by passing --web-security=no to phantomjs. This will sort out the CORS issues. However, I see in your Update 3 that you've already discovered this.
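If it helps, this is roughly how that flag can be passed from the Gruntfile while grunt-contrib-qunit is still driving PhantomJS (the task name and test URL are placeholders; options prefixed with -- are forwarded to the phantomjs binary by grunt-lib-phantomjs, so check your version's docs):

// Gruntfile.js (excerpt)
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: {
      all: {
        options: {
          urls: ['http://localhost:8000/test/index.html'], // placeholder test page
          // Forwarded to the phantomjs binary as command-line switches:
          '--web-security': 'no',
          '--local-to-remote-url-access': 'yes'
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-qunit');
};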
For the POST authentication problem, I blogged about the workaround here:
http://darrendev.blogspot.jp/2013/04/phantomjs-post-auth-and-timeouts.html
I've heard the most recent version has fixed this, so upgrading might be the actual answer?
By the way, be careful with auth in PhantomJS, as the auth details are sent on all requests. For example, if your test page fetches jQuery from a CDN, the CDN will be sent your authentication headers. (SlimerJS has some new features in place for getting around this; AFAIK PhantomJS does not yet.)
I am looking for a way to control a web browser such as Firefox or Chrome. I need something like Selenium WebDriver, but one that lets me open many instances, load URLs, and get the HTTP headers, response code, response content, load time, etc.
Is there any library, framework, or API that I could use to do this? I couldn't find one that does it all: Selenium opens the browser and goes to the URL, but I can't get the HTTP headers.
Selenium and Jellyfish are strong general-purpose options. Jellyfish is built on Node.js; although I have no experience with it, I've heard good things from my colleagues.
If you just want to get headers and the like, you could use the cURL library or wget; a minimal Node.js sketch of that approach follows the links below. I've used cURL with NuSOAP to query XML web services in PHP, for example. The downside is that these are not functional browsers; they merely perform the HTTP requests and consume the responses.
http://seleniumhq.org/
https://github.com/admc/jellyfish
http://curl.haxx.se/
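If all you need is the headers, status code, and a rough load time, a minimal Node.js sketch without a real browser looks like this (the URL is a placeholder; this does not execute JavaScript or render the page):

const https = require('https');

// Fetch a URL and report status code, headers, body size, and rough load time.
function probe(url) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    https.get(url, res => {
      let bytes = 0;
      res.on('data', chunk => { bytes += chunk.length; });
      res.on('end', () => resolve({
        statusCode: res.statusCode,
        headers: res.headers,
        bytes,
        loadTimeMs: Date.now() - start,
      }));
    }).on('error', reject);
  });
}

probe('https://example.com/').then(info => console.log(info));

Driving a real browser on top of that still needs Selenium or a similar tool, as described above.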