Puppeteer in-browser time acceleration - node.js

Is there some option, the opposite of slowMo, that accelerates in-browser time? What I want is to play all the page scripts/intervals/animations as quickly as possible.

I don't believe that this is achievable. You can slow down actions in the browser, as you pointed out, by using the slowMo setting, but otherwise the browser acts just as it would if a user were interacting with it - that's kind of the point really :-)
The suggestions I can make to you are based on things I've tried and implemented from my own experiences with UI automation. Hopefully something in here can help you.
What I want is to play all the page scripts/intervals/animations as quickly as possible.
As I've said already, I don't believe you will be able to do anything other than wait for the page to load normally, as if a user were logging in to and/or interacting with your application. However, you do have a very powerful option which can take some of the pain away from waiting for your page to load all of the time.
For example, you can use jest-puppeteer:
https://github.com/smooth-code/jest-puppeteer/tree/master/packages/jest-puppeteer
What jest-puppeteer allows you to do is structure your test suite(s) in a behaviour-driven testing format, using describe and it statements to define your suite and test scripts respectively. With this method, you can specify a beforeAll hook to be executed before all test scripts in the suite - so in there you could, say, log into your application and have it wait for everything to load once and once only. All test scripts will then be executed sequentially on the page that is displayed in the remote browser, without having to reload the browser and start all over again from scratch between tests.
This can significantly reduce the pain of having to wait for page loads every single time you want to run a test.
The idea is that you can bunch related test scripts together in each suite - e.g. in the first suite, load the login page once and then execute all login-based test scripts before tearing down. In the next suite, load the home page once and then execute all home-page-based test scripts before tearing down. You get the idea.
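A minimal sketch of that structure, assuming the jest-puppeteer preset (which exposes a global page object); the URL, selectors and credentials below are made up for illustration:

```js
describe('Home page', () => {
  beforeAll(async () => {
    // Pay the page-load / login cost once for the whole suite.
    await page.goto('https://example.com/login', { waitUntil: 'networkidle0' }); // hypothetical URL
    await page.type('#username', 'test-user');   // hypothetical selector
    await page.type('#password', 'secret');      // hypothetical selector
    await Promise.all([
      page.waitForNavigation({ waitUntil: 'networkidle0' }),
      page.click('#login-button'),               // hypothetical selector
    ]);
  });

  it('shows the dashboard heading', async () => {
    const heading = await page.$eval('h1', el => el.textContent);
    expect(heading).toContain('Dashboard');
  });

  it('shows the logged-in user name', async () => {
    const user = await page.$eval('#current-user', el => el.textContent);
    expect(user).toContain('test-user');
  });
});
```

The login and page load happen once in beforeAll, and every it block then runs against the already-loaded page.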
This is the best recommendation I can personally give you. Hopefully it helps!

Related

Does anyone have any recommendations for speeding up UFT 14.53 on a Windows 10 platform?

I've upgraded a laptop (Windows 10 Enterprise, Version 1803) and 2 VMs (Windows 10 Enterprise, version 1809) with MicroFocus' UFT version 14.53. The previous version of UFT was 14.02.
The performance of script execution is annoyingly slow. Here are some details about the environment:
The two AUTs were developed using J2EE and AngularJS, respectively
A script that took 18 minutes to run on the laptop is now taking 20 minutes
The same script is now taking 30 minutes on the VMs
The scripts are being run in fast mode from the GUI
The Windows 10 machines have been set to Best Performance
Every time the script starts, a "Windows is running low on resources" popup appears
The browser on which the app is being run is IE11
RAM on the laptop is 16GB and 8GB on the VMs
Has anybody else experienced these pains, and can anyone offer solutions or suggestions? Unfortunately, our support vendor has been no help.
Thank you!
1) Depending on what kind of object recognition you perform, there might be noticeable differences based on how many windows are open on the Windows desktop.
It might be that in your Windows 10 sessions there are more windows open, maybe invisible, that UFT needs to take into account when locating top-level test objects.
For example, opening four unneeded (and non-interfering) browser instances and four explorer instances greatly impacts the runtime performance of my scripts. Thus, I make sure that I always start with the same baseline state before running a test.
To verify, you could close everything you don't need, and see if runtime improves.
2) Do you use RegisterUserFunc to call your functions as methods? That API has a big performance hole: depending on how much library code you have (no matter where, and no matter what kind of code), such method calls can take more time than you expect.
I've seen scenarios where we had enough code that one call took almost a second (850 milliseconds) on a powerful machine.
The fix was to avoid calling the function as a method, which sucks because you have to rearrange all such calls. As of today, we are still waiting for a fix, after it took us months to prove to MicroFocus that this symptom is indeed real, and really fatal, because as you add library code, performance degrades further and further in very tiny steps. (No Windows 10 dependency here, though.)
3) Disable smart identification. It might play back fine, but it might need quite some time to find out which "smart" identification variant works. If your scripts fail without smart identification, you should fix them anyway, because your scripts should never rely on smart identification.
4) Disable the new XPath feature where UFT builds an XPath automatically, and re-uses this XPath silently to speed up detection. It completely messes up object identification in certain cases, with the script detecting the wrong control, or taking a lot of time to detect controls.
5) Try hiding the UFT instance. This has been a performance booster for years, and I think it still is; see "QTP datatable operations *extremely* slow (much better under MMDRV batch executor)?" for info on this, and more.
6) Certain operations take a lot of time, unexpectedly. For example, "Why does setting a USER environment variable take 12 seconds?" completely surprised me.
Here are some things to consider that have been tweaked to speed up my scripts in the past. I hadn't had any problems with UFT 12.x on VMs or VDIs using Windows 11; I'm just starting with UFT 14.53 on Windows 10. Check Windows 10 for background applications or services that are running before you even open UFT or execute a script. In UFT, check the Test Settings and the UFT Test Options for the following:
Object synchronization timeout - Sets the maximum time (in seconds) that UFT waits for an object to load before running a step in the test.
Note: When working with Web objects, UFT waits up to the amount of time set for the Browser navigation timeout option, plus the time set for the object synchronization timeout. For details on the Browser navigation timeout option, see the HP Unified Functional Testing Add-ins Guide.
Browser navigation timeout - Sets the maximum time (in seconds) that UFT waits for a Web page to load before running a step in the test.
When pointing at a window, activate it after __ tenths of a second - Specifies the time (in tenths of a second) that UFT waits before it sets the focus on an application window when using the pointing hand to point to an object in the application (for example, when using the Object Spy, checkpoints, Step Generator, Recovery Scenario Wizard, and so forth).
Default = 5
Add ____ seconds to page load time - Page load time is the time it takes to download and display the entire content of a web page in the browser window (measured in seconds).
Page load time is a web performance metric that directly impacts user engagement and a business’s bottom line. It indicates how long it takes for a page to fully load in the browser after a user clicks a link or makes a request.
There are many different factors that affect page load time. The speed at which a page loads depends on the hosting server, amount of bandwidth in transit, and web page design – as well as the number, type, and weight of elements on the page. Other factors include user location, device, and browser type.
Run mode - Normal or Fast
Hope some of this helps, good luck...Aimee
You can try running a Repair UFT Installation on Windows 10, to see if something went wrong with the installation of UFT 14.53.
This worries me a lot, since we are going to switch to Windows 10 laptops in a couple of days.
Take a look here and see if something can help you.
Regards

Capybara accessing the same session from two threads

I have a Ruby script that is running Capybara using the Selenium Chrome driver.
The test navigates a website; at an unknown time, a notification will appear that needs to be closed.
Is it possible to have a second thread that polls the driver to check for the presence of the notification while the script continues to perform the test?
I have tried a few different approaches, but I get errors such as Bad file descriptor (Errno::EBADF) which appears to be because the session/driver is not thread safe.
If this cannot be done, any ideas for dealing with this issue would be much appreciated. I would rather not have a piece of code I keep calling between actions, as I fear this would cause performance issues over time.
This seems like a starting point, but not 100% of what you're looking for: http://blog.jthoenes.net/2013/08/16/waiting-for-a-javascript-event-with-seleniumcapybara/

Consequences of not calling WSACleanup

I'm in the process of designing an application that will run on a headless Windows CE 6.0 device. The idea is to make an application that will be started at startup and run until the device is powered off. (Basically it will look like a service, but an application is easier to debug, without the complete hassle of the stop/deploy/start/attach-to-process procedure.)
My concern is what will happen during development. If I debug/deploy the application, I see no way of closing it in a friendly and easy way. (Feel free to suggest if this can be done in a better/more user-friendly way.) I will just stop the debugger, and the result will be that WSACleanup is not called.
Now, the question. What is the consequence of not calling WSACleanup? Will I be able to start and run the winsock application again using the debugger? Or will there be a resource leak preventing me to do so?
Thanks in advance,
Jef
I think that Harry Johnston's comment is correct.
Even if your application has no UI, you can find a way to close it gracefully. I suppose that you have one or more threads running in loops; you can add a named manual-reset event that is checked inside the loop condition (or used for waits instead of Sleep()), and build a small application that opens the event using the same name, sets it and quits. This would also force your service app to close.
It may not be needed for debugging, but it may also be useful if you ever need to update your software, which requires that your main service is not running.

Mobile Website - How to keep process alive on client side in mobile browser in Android?

I am new to mobile website development, and I am facing an issue: I want to refresh data on the website every 30 seconds, invoked from the client side, with the server providing the data in response. The problem is that when I close the browser, or when the browser goes into the background, it stops working. Is there anything we can do to make this possible?
Have a look at the Android Developers - Processes and Threads guide. You'll get a deeper introduction to how process lifecycles work and what the difference is between the states of background and foreground processes.
You could embed your web app in a WebView. This way you could deal with the closing-browser case: you could provide a means to "exit" the app that involves closing only your container activity. That way the timers you have registered in JavaScript will still be running in the 'WebViewCoreThread'. This is an undesirable behavior and a source of problems, but you can take advantage of it if you want (just make sure you don't run UI-related code there). I've never tested this in KitKat (which uses a different WebView based on Chrome), but it works for previous versions, as I described here.
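To make the JavaScript side concrete, the 30-second refresh the question asks about can be a plain timer registered by the page; the endpoint, element id and rendering below are hypothetical placeholders. As long as the WebView's core thread stays alive, a timer like this keeps firing:

```js
// Minimal polling loop for the page loaded in the WebView.
// '/api/data' and the '#content' element are made up for this sketch.
function renderData(data) {
  document.getElementById('content').textContent = JSON.stringify(data);
}

function refresh() {
  fetch('/api/data')                 // on older WebViews without fetch, XMLHttpRequest works the same way
    .then(response => response.json())
    .then(renderData)
    .catch(err => console.log('refresh failed', err));
}

refresh();                           // initial load
setInterval(refresh, 30 * 1000);     // repeat every 30 seconds
```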
Now, the user can always close any app. Even without user interaction, the OS can kill your app on low memory. So just give up on long-running apps that never end; the OS is designed in such a way that this is simply not possible.
You could go native and schedule Alarms using the AlarmManager.
Just checked this out on the Android KitKat WebView, and as per Mister Smith's comments, the JavaScript will continue executing in the background until the Activity is killed off:
Just tested with this running in a WebView:
http://jsbin.com/EwEjIyaY/3/edit
My gut instinct is that if the user has moved your application into the background, there is little value in performing updates every 30 seconds; it makes more sense to cache what information you currently have and start updating again once the user opens the app back up.
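A sketch of that idea using the Page Visibility API (available in Chrome for Android and the KitKat WebView): stop polling while the page is hidden and refresh immediately when it becomes visible again. The refresh() body here is a placeholder for whatever fetch-and-render logic the page already uses:

```js
// Poll only while the page is visible; refresh once on return to the foreground.
function refresh() {
  // placeholder: fetch and render the latest data, as in the earlier sketch
}

let timerId = null;

function startPolling() {
  if (timerId === null) {
    refresh();                                 // catch up immediately
    timerId = setInterval(refresh, 30 * 1000); // then every 30 seconds
  }
}

function stopPolling() {
  if (timerId !== null) {
    clearInterval(timerId);
    timerId = null;
  }
}

document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    stopPolling();
  } else {
    startPolling();
  }
});

startPolling();
```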
As far as Chrome for Android goes, the same thing is happening: as Chrome falls into the background, the JavaScript is still running.
If you are experiencing different behaviour then what exactly are you seeing and can you give us an example?

Execution of Test scripts with Coded UI Test consumes more time

We are facing few issues while executing Coded UI Test scripts.
Regularly we have to execute automated scripts with Coded UI Test; earlier we used Test Partner for execution. Recently we migrated a few of our Test Partner scripts to Coded UI Test. However, we observed that Coded UI Test script execution time is longer compared to Test Partner execution time. Our automated scripts were completely hand written; nowhere did we use the record-and-playback feature.
A few of our observations:
The IE browser hangs when executing Coded UI Test scripts on Windows XP. Every time, we have to kill the process and recreate the scenario to continue the execution. This defeats the purpose of automation, as each and every time someone has to monitor whether script execution goes fine without the browser hanging. It is a very frequent problem on XP.
If we execute Coded UI Test scripts on Windows 7, execution is quite slow; it consumes more time than on XP. So our execution time drags, even though the scripts run without the browser hanging. We tried executing the scripts in release mode as well, but whenever a script halts, one has to execute it again in debug mode.
Could you please advise on this? What exactly is the point we are missing? Can we improve execution time by changing tool settings? Thanks for the support.
First of all, you should enable logging and see why the search takes up so much time.
You can also find useful information in the debug output, which gives warnings when operations take more time than expected.
Here are two useful links for enabling those logs
For VS/MTM 2010 and 2012 beta: http://blogs.msdn.com/b/gautamg/archive/2009/11/29/how-to-enable-tracing-for-ui-test-components.aspx
For VS/MTM 2012 : http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/05/enabling-coded-ui-test-playback-logs-in-visual-studio-2012-release-candidate.aspx
A friendly .html file with logs should be created in the %temp%\UITestLogs*\LastRun\ directory.
As for a possible explanation of your issue: it doesn't matter whether you recorded your tests or hand-coded them - either way they produce calls to WpfControl.Find() or one of its deriving classes, and if the search fails at first, it will move on to performing heuristics to find the targeted control anyway.
You can set the MatchExactHierarchy setting of your Playback to true, and stop using the SmartMatch feature
(more on it here, together with a few other useful performance tips:
http://blogs.msdn.com/b/mathew_aniyan/archive/2009/08/10/configuring-playback-in-vstt-2010.aspx)

Resources