I am new to NeoLoad. I am using the Citrix protocol in NeoLoad and I want to start my test only after the page has completely loaded.
I am not able to find any built-in function in NeoLoad to check whether the page has completely loaded.
Can someone provide pointers on how to achieve that?
Related
I am trying to make a custom web browser inside an Electron application. Using a webview (because an iframe does not load some of the necessary web pages) I can load a web page.
Then I try to write something into the web page's input by clicking on the react-simple-keyboard, which causes a blur event, so the input loses focus.
I figured out that this approach would not work directly, so via IPC communication I am trying to resend the pressed key's value and then type it into the window with const {keyboard} = require("@nut-tree/nut-js"); keyboard.type(args.value);
In my own input, above the webview tag, it works like a charm, but I am not able to type inside the webview.
Can anyone help me solve this problem, or does anyone know a good solution for using another OSK in an Electron app or for opening the native Windows OSK on input focus? Thank you in advance.
I'm not sure how you'd accomplish that with this library, but you can just use Windows' default on-screen keyboard instead. Here is a link on how to enable it: windows support
You should also use a BrowserView instead of a webview, as the webview is not guaranteed to be present in future versions and its API is unstable.
The BrowserView doesn't work like an HTML element, though, and you should read the docs here.
But anyway, just use the system's default and you should be fine.
Also, if you're interested, I'm developing a web browser with Electron (in fact, I'm currently writing this using that browser) and as far as I can say, it's written pretty simply and anyone should understand most of it, so take a look if you're in trouble. But I am no expert and you shouldn't rely on my code as a standard of any kind, really.
Well, I might have just found an answer for you.
Firstly, as I mentioned, you should use a BrowserView instead of a webview for your external content, and this time it is a requirement for this method to work. I would create a BrowserWindow with the controls at the top, then place a BrowserView to act as the "browser", create another BrowserView at the bottom, and load the keyboard HTML file into it. Then, when a key is pressed on the virtual keyboard, you should send an IPC message to the main script with the information about which key was pressed (this should be done via a preload script for the OSK BrowserView). In the main script, once you receive the IPC message (via ipcMain.on()), you should send an input event to the BrowserView containing your external content. That's done by calling contents.sendInputEvent(Event), so it has to happen in the main script. Here is a link to contents.sendInputEvent(Event), BrowserView (link) and preload script as well as ipc communication (link). A rough sketch of that wiring follows.
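Just as a minimal sketch of the main-script side, assuming the OSK page is keyboard.html, its preload script is preload-osk.js, and the IPC channel is called 'osk-key' (all of those names are made up here, not part of any API):

```
// main.js (Electron main process) - minimal sketch, not a drop-in solution
const { app, BrowserWindow, BrowserView, ipcMain } = require('electron');
const path = require('path');

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1280, height: 800 });

  // BrowserView acting as the "browser" for the external content.
  const contentView = new BrowserView();
  win.addBrowserView(contentView);
  contentView.setBounds({ x: 0, y: 40, width: 1280, height: 500 });
  contentView.webContents.loadURL('https://example.com');

  // BrowserView at the bottom hosting the on-screen keyboard page.
  const oskView = new BrowserView({
    webPreferences: { preload: path.join(__dirname, 'preload-osk.js') }
  });
  win.addBrowserView(oskView);
  oskView.setBounds({ x: 0, y: 540, width: 1280, height: 260 });
  oskView.webContents.loadFile('keyboard.html');

  // The OSK preload sends the pressed key over IPC; forward it to the
  // content view as synthetic keyboard input events.
  ipcMain.on('osk-key', (_event, key) => {
    contentView.webContents.sendInputEvent({ type: 'keyDown', keyCode: key });
    contentView.webContents.sendInputEvent({ type: 'char', keyCode: key });
    contentView.webContents.sendInputEvent({ type: 'keyUp', keyCode: key });
  });
});
```

In preload-osk.js you would then just call ipcRenderer.send('osk-key', key) whenever react-simple-keyboard reports a key press.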
As for invoking the keyboard once you click on the input element, you could probably do it with a preload script for your "browser's" BrowserView, if you can find a way to check whether the focused element is an input element or something like that, and send an IPC message to then hide or show the keyboard. (Hiding and showing the keyboard could be done by calling BrowserWindow.addBrowserView(BrowserView) or BrowserWindow.removeBrowserView(BrowserView).) But you would have to search the documentation yourself for those methods, as I can't write any more right now. The documentation can answer any of your questions if you search for them there.
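A sketch of that hide/show part, again with a made-up channel name and selectors (focusin/focusout bubble, so one listener in the "browser" view's preload is enough):

```
// preload-browser.js (preload for the "browser" BrowserView)
const { ipcRenderer } = require('electron');

window.addEventListener('focusin', (e) => {
  const editable = e.target instanceof HTMLElement &&
    e.target.matches('input, textarea, [contenteditable="true"]');
  ipcRenderer.send('osk-visibility', editable);
});
window.addEventListener('focusout', () => ipcRenderer.send('osk-visibility', false));
```

```
// main.js - add or remove the OSK view when told to
ipcMain.on('osk-visibility', (_event, visible) => {
  if (visible) win.addBrowserView(oskView);
  else win.removeBrowserView(oskView);
});
```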
I have searched and found so many answers but nothing that fits my requirement. I will try to explain here and see if any of you guys have some tips.
I wish to click on a link manually, and from that point on I want some kind of tool or service to start recording time from my click and stop when the desired page is loaded. This way, I can find out the exact user-interface response time.
All the online web-testing services ask for the main URL. In my case the main URL has a gazillion links, and I wish to use only one link as a standard sample, which is a dynamic link.
For example:
- I click on my friend's name on Facebook
- From my click to the time the page is loaded, is there a tool that does the stopwatch thing?
End goal is:
I will be stress-testing a server with an extensive load, and the client wishes to see the response time of simple random pages when the load is at 500, 1000, 2000 and so on.
Please help!
Thank you.
Using a simple load-testing tool together with the developer tools in the Chrome browser, you can get a clear picture of page load times under load. You can also see how long each request took to complete and the time from start to finish.
Just start the load test and try the page from Chrome.
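If a rough number straight from the browser is good enough, you can also read the Navigation Timing entry in the DevTools console once the page has finished loading (just a minimal sketch):

```
// Paste into the Chrome DevTools console after the page has loaded.
const [nav] = performance.getEntriesByType('navigation');
console.log('Time to first byte:', nav.responseStart - nav.requestStart, 'ms');
console.log('DOMContentLoaded  :', nav.domContentLoadedEventEnd - nav.startTime, 'ms');
console.log('Full load         :', nav.loadEventEnd - nav.startTime, 'ms');
```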
You can also use an automated latency monitor like SmokePing.
You may use HttpWatch or YSlow to find the client-side page load times.
HttpWatch and Fiddler helped. Didn't really go as I had thought, but pretty close and satisfactory. Thanks, guys.
You could try WPT (WebPageTest). This is a tool with both a public and a private instance that serves exactly what you want to do, and it also supports scripted steps executed via the browser's JS. The nicest thing I find in WPT is that you can use the public instance to measure the actual user experience from world locations other than yours, or you can set up a private one.
I have recently been messing around with Node.js, loading websites and saving screenshots. To be more specific, I have used PhantomJS to load the website and save a screenshot. I have also used CasperJS and ZombieJS, but none of these tools really lets you mess around with the resources of the website before loading. Is it even possible?
To be clear, I would like to load any website, let's say stackoverflow.com, calculate the load time, and save a screenshot. That's easy, but on a second run I want to load the same website with, for example, the jQuery resource removed, and then calculate the load time of that.
It looks like PhantomJS and CasperJS have callbacks like onResourceRequested or onResourceReceived, but there is no method to abort a request. Is it possible? I would not want to proxy the requests through some PHP script that does this, but that is the alternative.
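For reference, the "easy" part I mean looks roughly like this in PhantomJS (URL and file names are just examples); onResourceRequested lets me observe each request, but I see no way to abort it from there:

```
// capture.js - run with: phantomjs capture.js
var page = require('webpage').create();
var start = Date.now();

// The callback sees every request, but there is no abort here.
page.onResourceRequested = function (request) {
  console.log('Requested: ' + request.url);
};

page.open('http://stackoverflow.com', function (status) {
  console.log('Status: ' + status);
  console.log('Load time: ' + (Date.now() - start) + ' ms');
  page.render('stackoverflow.png');
  phantom.exit();
});
```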
So, it looks like this is not possible, but it is a feature on the PhantomJS roadmap: http://code.google.com/p/phantomjs/issues/detail?id=230
Say someone else has a website generated by JavaScript, so I can't go look at the source and read what should be on the screen. How can I grab the text on the screen so I can feed it into another program? Also, how can I write a program that automatically clicks on radio buttons, links, etc. that satisfy certain criteria?
You can write a web scraping tool in Perl or Python. Or, you can use existing tools and frameworks to achieve that.
Check out Scrapy, an open-source tool written in Python.
Take a look at Selenium too.
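For example, with Selenium's Node bindings (selenium-webdriver), clicking elements and reading back script-generated text could look roughly like this; the selectors and URL below are made up and would have to match the real page:

```
// npm install selenium-webdriver (plus a chromedriver available on the PATH)
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/form');

    // Click a radio button that matches a criterion, then follow a link.
    await driver.findElement(By.css('input[type="radio"][value="yes"]')).click();
    await driver.findElement(By.linkText('Continue')).click();

    // Wait for the script-generated content, then read the rendered text.
    const result = await driver.wait(until.elementLocated(By.css('.result')), 10000);
    console.log(await result.getText());
  } finally {
    await driver.quit();
  }
})();
```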
To parse dynamic content, you could look at the JavaScript source and fetch that same content the same way the web page is getting it (i.e. replicating AJAX calls and such).
If you want to submit data (without actually clicking on the elements) as if it had been clicked/edited/selected, you could also send a request containing the same data the server is expecting by using some HTTP library, like cURL. See an example here.
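The same idea, sketched with Node's built-in fetch instead of curl (the endpoint and field names are invented; you would copy the real ones from the browser's network tab):

```
// Replicate the form POST the page would send, instead of driving a real click.
const params = new URLSearchParams({ choice: 'option-2', submit: 'Save' });

fetch('https://example.com/form-handler', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: params.toString(),
})
  .then((res) => res.text())
  .then((body) => console.log(body))
  .catch((err) => console.error(err));
```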
If you need to handle content generated by script, then your first problem is to cause the script to execute. Further, the script will want to generate the content into a DOM. That means you need to have a DOM, and a script engine, and probably HTTP access to the Internet, and XML handling, etc.
If that sounds a lot like a web browser, then you're listening.
What you basically need is a web browser that you can control from a program. You'll need to be able to tell it to browse to a page, click buttons and links, etc., then you'll need to read back the resulting DOM.
Only then will you need to parse the page.
If you're in the Microsoft world, then you can use the WebBrowser control. There are several forms of this, and they all amount to the same thing: you can have Internet Explorer run inside of your program, and your program can control it.
I understand there are other browsers that can be controlled from a program, but since I don't know their details, I'll wait for someone else to tell us both.
I have been given the task of toggling nearly 200 users' permissions in an admin interface. I have access to the database, and I'm sure I could do this in SQL, but I'm curious to find out how to do it this way as well. I also suspect it will be less work, because I wouldn't have to study the SQL involved, and I know exactly what to do once I have access to the browser instance and can execute JavaScript programmatically in the context of the open web page.
I basically want to provide a list of URLs that will be opened (195), then execute JavaScript to toggle checkboxes and submit the form.
As I stated, I want to use Firefox or Chrome, and I'm on Linux.
This is basically what Greasemonkey does.
Or, if you can do it all while staying on the same page, you can also just type arbitrary JS code by hand into the Firebug console or its Chrome equivalent. This could work if combined with some iframe trickery.
If you use Chrome, it has built-in support for user automation scripts: http://userscripts.wikidot.com/, http://www.chromium.org/developers/design-documents/user-scripts
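A user script for that kind of job could be as small as this sketch (the @match pattern and the selectors are placeholders and must be adapted to the admin page's actual markup):

```
// ==UserScript==
// @name   Toggle permission checkboxes
// @match  https://example.com/admin/users/*
// ==/UserScript==

// Flip every permission checkbox on the page, then submit the form.
document.querySelectorAll('input[type="checkbox"].permission').forEach(function (box) {
  box.checked = !box.checked;
});
document.querySelector('form#permissions').submit();
```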
I think a cleaner solution would be for you to figure out what URL and parameters you need to pass to do what you need. Then you can just use curl to make those requests.
I use the CJS Chrome extension. I add a short script that loads a script from my localhost server and executes it. The executed script can also send results back to the server.