Let's say we have 100 functional Coded UI scripts. What kind of infrastructure is needed to run these tests in parallel? I'm afraid it would take down the Visual Studio server.
Because a Coded UI test interacts with the UI, only one instance can run on a machine at a time. If you set up a lab or VM instances, each running part of your test suite, your limits are the number of instances and how much load your application under test can handle.
I'm trying to move some computations to Azure cloud services. One of the steps of the workflow I'm trying to implement includes running a Win32 desktop application that generates a file. Obviously, we cannot have user interaction in cloud calculations, so the application is launched with command-line arguments. The process starts, generates a file, and then exits. At the moment I cannot refactor the code to move this functionality into a windowless command-line utility.
First, I chose Azure Functions because they are intended for short, event-driven calculations, which is exactly what I need. They are also cheap. But I ran into a problem: processes in Azure Functions execute inside a sandbox that blocks User32/GDI32 system calls, which prevents me from launching desktop applications.
Another solution I came up with is preparing a virtual machine disk with all the needed Visual C++ redistributables installed and then using Azure Batch with nodes based on that pre-configured disk. But this solution has other drawbacks, since it takes minutes to bring up a new node. Of course, I could keep some nodes always active, but further scaling is still slow, and keeping nodes active is not so cheap. I also have a feeling that Azure Batch is a bit of an overkill, because there is no need for HPC in my case; Azure Functions' computation capabilities are enough for me.
Is there some kind of compromise: a solution with fast scaling and quick responses, but without the need to set up Azure Batch on top of Azure Virtual Machines?
Many GDI32 calls are now available, but only in a containerized form.
So you can deploy the function together with the desktop application inside a Docker container.
Refer to the following article for more explanation.
Refer to the following documentation on how to deploy a containerized function.
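As a rough illustration only, the function wrapping the process launch could look like the sketch below. This is a minimal sketch with invented names: it assumes a Python HTTP-triggered function, a hypothetical generator.exe baked into the image, and an invented --output argument; for the User32/GDI32 calls to work, the container image would have to be Windows-based.

```python
# Minimal sketch: an HTTP-triggered function that shells out to the
# bundled desktop app and returns the file it generates.
import os
import subprocess
import tempfile

import azure.functions as func

# Hypothetical path to the desktop app inside the container image.
EXE_PATH = os.environ.get("GENERATOR_EXE", r"C:\tools\generator.exe")

def main(req: func.HttpRequest) -> func.HttpResponse:
    out_file = os.path.join(tempfile.gettempdir(), "result.dat")
    # Launch the app headlessly with command-line arguments and wait for
    # it to exit ("--output" is an invented argument name).
    result = subprocess.run(
        [EXE_PATH, "--output", out_file], capture_output=True, timeout=300
    )
    if result.returncode != 0:
        return func.HttpResponse(result.stderr, status_code=500)
    with open(out_file, "rb") as fh:
        return func.HttpResponse(fh.read(), mimetype="application/octet-stream")
```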
I'm using pytest to run a few thousand tests against an API.
The need now is to not only use multiprocessing (pytest-xdist) and multithreading (pytest-parallel) but also to run the tests on multiple machines (while keeping the multiprocessing and multithreading capabilities).
This is the current state; the need is basically to duplicate this chart:
https://i.imgur.com/AKj2nmL.jpg
Our last resort will be to develop a test runner service that will be deployed on as many machines as needed, and to use an SQS queue so these machines can pull work from it.
Is there any better way of achieving this, using pytest alone or perhaps combined with Jenkins?
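For reference, the runner service we have in mind as the last resort would be small. Here is a minimal sketch (the queue URL and the message format are placeholders invented for illustration): each machine polls SQS for a batch of test node IDs and runs them locally with pytest-xdist, so the per-machine parallelism is kept.

```python
# Minimal sketch of a pull-based runner: each machine polls an SQS queue
# for a batch of pytest node IDs and executes them with pytest-xdist.
import subprocess

import boto3

# Placeholder queue URL, invented for illustration.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/test-batches"

sqs = boto3.client("sqs")

def run_worker() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            # Assumed message format: newline-separated pytest node IDs,
            # e.g. "tests/test_api.py::test_create".
            node_ids = msg["Body"].splitlines()
            # "-n auto" keeps pytest-xdist multiprocessing on this machine.
            subprocess.run(["pytest", "-n", "auto", *node_ids], check=False)
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )

if __name__ == "__main__":
    run_worker()
```

In that setup, Jenkins would only need to enqueue the batches and collect results, rather than schedule the tests itself.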
I'm trying to run some load tests for a web application hosted on IIS using Visual Studio 2012.
Testing for several hundred users works fine (but isn't too helpful).
When I try raising the number to 1000+, the tests fail;
but not because the website can't handle the load: it's because my computer can't handle it!
Is there any way to test for large amounts of users, without crashing my own computer?
Distributed load tests are a better bet in several ways. In addition to not overwhelming your computer, they better simulate traffic from different locations, whether around the country or around the world. You get a better picture of how actual traffic can affect your site response.
I haven't done load testing from Visual Studio, but it should be possible to hit your site (assuming it's on one or more servers, and not also running locally) from multiple computers running VS. Alternatively, you might want to look into load-testing services, such as those from SOASTA, that run load tests from the cloud.
Some commercial load testing tools also allow for distributed load testing. Disclaimer: I work for Telerik in the Test Studio group, and we have distributed load testing.
Jacob,
It sounds like you're overloading your system. With any significant load run, regardless of the toolset, you need to get the various load components distributed out to run on different systems.
If you haven't already, you should have a look at the Working With Load Tests documentation on MSDN. See also the Considerations for Large Load Tests (especially the part about overloading Agents) as well as the Run a Load Test Using Agents pages.
We are facing a few issues while executing Coded UI Test scripts.
Regularly we have to execute automated scripts with Coded UI Test; earlier we used Test Partner for execution. Recently we migrated a few of our Test Partner scripts to Coded UI Test. However, we observed that Coded UI Test script execution time is longer compared to Test Partner execution time. Our automated scripts were completely hand-written; nowhere did we use the record-and-playback feature.
A few of our observations:
The IE browser hangs when executing Coded UI Test scripts on Windows XP. Every time, we have to kill the process and recreate the scenario to continue the execution. This defeats the purpose of automation, as someone has to monitor every run to check whether script execution completes without the browser hanging. It's a very frequent problem on XP.
If we execute Coded UI Test scripts on Windows 7, execution is quite slow, consuming more time than on XP. So our execution time drags, even though the scripts run fine without the browser hanging. We tried executing the scripts in release mode as well, but whenever a script halts, one has to execute it again in debug mode.
Could you please advise on this? What exactly are we missing? Can we improve execution time by changing tool settings? Thanks for the support.
First of all, you should enable logging and see why the search takes so much time.
You can also find useful information in the debug output, which gives warnings when operations take more time than expected.
Here are two useful links for enabling those logs:
For VS/MTM 2010 and 2012 beta: http://blogs.msdn.com/b/gautamg/archive/2009/11/29/how-to-enable-tracing-for-ui-test-components.aspx
For VS/MTM 2012 : http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/05/enabling-coded-ui-test-playback-logs-in-visual-studio-2012-release-candidate.aspx
A friendly .html log file should be created in the %temp%\UITestLogs*\LastRun\ directory.
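If memory serves, the linked posts come down to raising the EqtTraceLevel trace switch in the .config file of the process that runs the tests (QTAgent32.exe.config, or devenv.exe.config when running inside Visual Studio). Treat the snippet below as a sketch from recollection rather than an exact recipe; the linked posts are the authoritative steps.

```xml
<!-- Sketch from recollection: add to QTAgent32.exe.config (or the config
     of whichever process runs the tests) and restart that process. -->
<configuration>
  <system.diagnostics>
    <switches>
      <!-- 4 = verbose; lower values log less detail. -->
      <add name="EqtTraceLevel" value="4" />
    </switches>
  </system.diagnostics>
</configuration>
```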
As for the possible explanation of your issue: it doesn't matter whether you recorded your tests or hand-coded the calls to WpfControl.Find() (or one of its deriving classes); if the search fails at first, playback will fall back to heuristics to find the targeted control anyway.
You can set the MatchExactHierarchy setting of your Playback to true and stop using the SmartMatch feature (more on it, together with a few other useful performance tips, here: http://blogs.msdn.com/b/mathew_aniyan/archive/2009/08/10/configuring-playback-in-vstt-2010.aspx).
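For example, a minimal sketch of those settings in a test class (assuming the standard Coded UI playback API; the class and method names are invented):

```csharp
using Microsoft.VisualStudio.TestTools.UITest.Extension;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class ApiSearchTests  // invented name
{
    [TestInitialize]
    public void TightenPlayback()
    {
        // Fail fast on an exact-hierarchy miss instead of falling back
        // to the slower SmartMatch heuristics.
        Playback.PlaybackSettings.MatchExactHierarchy = true;
        Playback.PlaybackSettings.SmartMatchOptions = SmartMatchOptions.None;
    }
}
```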
I understand that startup tasks are used to set up your system. For example, if your code is written in Python, you can add a startup task to install Python. But can't this also be done in the ProgramEntryPoint batch script? What's the difference?
It's true that if you use ProgramEntryPoint, there doesn't seem to be a reason to use startup tasks. You can indeed include all the logic in that same batch file.
Startup tasks get more useful when working with the .NET WebRoles/WorkerRoles. There your only options are to write code (where you could again call a single batch file that calls other batch files) and/or to use startup tasks.
But if you look at it from a maintenance point of view, it's much cleaner to use startup tasks for everything having to do with configuration and installation of your instance. You draw a clear line between configuration/installation and your actual application; you could actually see this as separation of concerns (which will be easier to understand for other/new developers on the project).
Besides that, you should know that startup tasks can execute in different contexts (limited/elevated), which might be important from a security perspective. Tasks also come in different types (simple, background, foreground), which can be used in many different scenarios (a background app that constantly pings your site, for example). If you don't use tasks, you need to handle all of this yourself.
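To make the contexts and types concrete, here is roughly what the task declarations look like in ServiceDefinition.csdef (a sketch; the role and script names are invented):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole">
    <Startup>
      <!-- simple: runs elevated and must exit before the role starts. -->
      <Task commandLine="install-python.cmd"
            executionContext="elevated" taskType="simple" />
      <!-- background: runs alongside the role, e.g. a watchdog that
           keeps pinging your site. -->
      <Task commandLine="watchdog.cmd"
            executionContext="limited" taskType="background" />
    </Startup>
  </WorkerRole>
</ServiceDefinition>
```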
Here is a good blog post covering the details of startup tasks: "Using Startup Task in Windows Azure detailed summary".
Great answer from Sandrino. In a nutshell: you would use startup tasks if you want some code to execute (or start executing) before your role starts. If that is not a constraint, you can always execute any process (including batch scripts) from the OnStart method of the role. One case where I have used startup tasks in the past was to install the New Relic monitoring agent; I wanted it running to profile my app before the actual app started.
You will probably not be able to install Python from ProgramEntryPoint, since the installation will likely require elevated ("admin") privileges.
A role (web/worker) usually does not have elevated privileges (it might be possible, but it is a bad practice for obvious security reasons). So code in ProgramEntryPoint does not have elevated privileges.
On the other hand, a startup task can have elevated privileges. IMO, this is probably the biggest (single?) benefit of using startup tasks.