Testing asynchronous commands [Prism/MEF/Dispatcher/TFS] - multithreading

I'm having some issues setting up unit tests that will work using TFS 2012 and .NET 4. Due to IT restrictions on the target users, I can't build against 4.5 until they permit it.
The application in question is a Prism application composed with MEF. It's a plug-in application, in that we load the modules that compose the application via a configuration file. Certain modules export themselves to the MEF catalog as an IContent interface that exposes some base behavior all modules in the application are expected to exhibit.
Each IContent implementation exposes an Import delegate command and an observable collection of errors within the module. The errors collection is bound to the UI, so it must be updated on the UI thread.
I've created a test class that creates and runs the MEF bootstrapper and loads all of the modules in the application in the ClassInitialize of the test class. The test class can then use the MEF catalog to get all instances of IContent and run the same base unit tests on each module to ensure they pass. As new modules are added to our solution/application, they would automatically be picked up and vetted through the same unit tests.
To further the example, the IContent interface includes an Import DelegateCommand. The command executes on a background thread that loads some data and adds some errors to the errors collection. Since the errors collection is bound to the UI, the updates are done using CheckAccess and BeginInvoke as needed, to ensure that the errors collection is modified on the correct thread. At the end of it all, the module publishes an aggregated event to indicate to the shell that the module has done its work.
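For illustration, a minimal sketch of that dispatch pattern; the class and member names here are assumptions, not the actual code:

using System;
using System.Collections.ObjectModel;
using System.Windows.Threading;

public class ContentModule
{
    // Created on the UI thread and bound to the view.
    public ObservableCollection<string> Errors { get; private set; }

    private readonly Dispatcher _uiDispatcher;

    public ContentModule(Dispatcher uiDispatcher)
    {
        _uiDispatcher = uiDispatcher;
        Errors = new ObservableCollection<string>();
    }

    // Marshals additions onto the dispatcher that owns the collection.
    public void AddError(string message)
    {
        if (_uiDispatcher.CheckAccess())
        {
            Errors.Add(message); // already on the UI thread
        }
        else
        {
            _uiDispatcher.BeginInvoke(new Action(() => Errors.Add(message)));
        }
    }
}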
Everything works fine when running in the context of the WPF application. When running the unit tests, the errors collections are never updated. I believe the cause is that the main thread, which the bootstrapper runs on, creates the observable collections, and since this thread is always busy running a unit test, its dispatcher never gets to process the additions to the errors collection, so the test fails. I've verified this: if I change the CheckAccess/BeginInvoke to a synchronous Invoke so the update runs inline, it hangs at the Invoke that updates the errors collection. It appears that the execution of a unit test always blocks the main thread.
So to sum up:
Is there a way to set up the bootstrapper so it runs and processes on a thread different from the main TFS test manager thread?
Can I have the main thread run, process a test method on a background thread, respond to the events its dispatcher needs to handle, and then complete my unit tests?
Thanks in advance for any help.

So, after trying something new, the key turned out to be in how the bootstrapper creates the shell. I was declaring the shell in my test classes as deriving from a DependencyObject, not from a WPF Window. Changing it to a WPF Window made everything work. There was no actual need to define which piece runs on what thread; it just works after that.
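For reference, a minimal sketch of the change; the shell class name is illustrative:

// Before (collections were never updated under test):
// public class TestShell : System.Windows.DependencyObject { }

// After (dispatcher processing works as expected):
public class TestShell : System.Windows.Window { }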

onCreate method in Android Studio

I have started learning Android development recently and have some doubts. What invokes the onCreate method so that it gets called automatically when we create a new project or a new activity? And how is it getting called?
I tried searching for this but couldn't find a proper answer.
I don't know if there's a specific spot in the documentation that explains this clearly, so I can't really cite sources. This is just what I've picked up over time working in Android and looking at source code.
The Android OS is responsible for launching Activities. An Activity is found and launched through an Intent. This could be done internally by your own app; or, if the Activity is the entry point to your app, it could be launched by the device's home screen launcher app; or, in debugging mode, the OS can be commanded to launch a specific Activity (this is what happens when you Run your app from Android Studio).
An Activity must have an empty constructor (a constructor with no arguments) so the OS can create an instance of your Activity using reflection. Since the OS cannot know of all possible Activity classes ahead of time, it must use reflection to find the constructor and call it.
After it has an instance of your Activity, the OS manages its lifecycle. It calls lifecycle functions like onCreate() at the appropriate times in the Activity's life. onCreate() is the first point in your Activity's life where you can safely do Activity- or Context-specific work. That is why there are restrictions on what you can do in property initializers and init blocks: they run before onCreate().
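As a small illustration, a hypothetical Kotlin Activity (the layout name is assumed):

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    // Property initializers and init blocks run before onCreate(),
    // so they must not touch the Activity's Context.
    private val tag = "MainActivity"

    // Called by the OS after it constructs the instance; never call it yourself.
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main) // safe: the Context is ready here
    }
}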
You must never call your Activity's constructor or its lifecycle functions directly yourself. These are reserved for use by the OS.

How to avoid easyhook injecting dll multiple times?

The target app can be injected multiple times with the same DLL, which leads to the same function being hooked multiple times.
Do you know if there is a way to detect whether the target app has already been injected, or a way to avoid multiple injections?
You could check if the DLL is already loaded in the target application before proceeding with the injection:
In .NET you can use the Process.Modules property to iterate over the loaded DLLs in a process.
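A minimal sketch of that check (the process and DLL names would be your own):

using System;
using System.Diagnostics;
using System.Linq;

static class InjectionCheck
{
    // True if any running process with the given name already has the DLL loaded.
    public static bool IsDllLoaded(string processName, string dllName)
    {
        return Process.GetProcessesByName(processName)
            .Any(p => p.Modules.Cast<ProcessModule>()
                .Any(m => string.Equals(m.ModuleName, dllName,
                                        StringComparison.OrdinalIgnoreCase)));
    }
}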
For native C/C++ you can use the Windows Process Status API (PSAPI) function EnumProcessModules to achieve the same result. An example of iterating the modules of a process can be found here.
Alternatively, you could use a named system mutex within the DLL that is being injected to ensure that your hook creation logic is not applied multiple times. See the Mutex class for .NET, or CreateMutex for native code. This would be a more complicated approach, and would require you to clean up the mutex when the hooks are removed.
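A minimal sketch of the mutex guard for the .NET case (the mutex name is illustrative):

using System.Threading;

static class HookGuard
{
    private static Mutex _hookMutex;

    // True only for the first copy of this DLL inside the target process.
    public static bool TryClaimHooks()
    {
        bool createdNew;
        _hookMutex = new Mutex(false, @"Global\MyHookLib_Hooks", out createdNew);
        return createdNew; // false: another copy already installed the hooks
    }

    // Call when the hooks are removed so the name can be reclaimed.
    public static void ReleaseHooks()
    {
        if (_hookMutex != null)
        {
            _hookMutex.Close();
            _hookMutex = null;
        }
    }
}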

Application State / Test Fixtures with Xcode UI Tests

A pretty common problem with any kind of integration test is getting the unit under test into a known state: the state that sets up well for the test you want to perform. With a unit test there's usually not much state, and the only issue is potentially mocking out interactions with other classes.
On the other hand, when testing a whole app there's all sorts of potentially persistent state, and getting the app into a clean state (or, trickier still, into a known state that isn't "clean") without any access to the app itself is difficult.
The only suggestion I've found is to embed any necessary setup in the app, and use something like an environment variable to trigger setup. That is, of course, viable, but it's not ideal. I don't really want to embed test code and test data in my final application if I can avoid it.
And then there's mocking out interactions with remote services. Again you can embed code (or even a framework) to do that, and trigger it with an environment variable, but again I don't love the idea of embedding stubbing code into the final app.
Suggestions? I haven't been able to find much, which makes me wonder whether no one is using Xcode UI testing, or whether it's only being used for incredibly simple apps that don't have these kinds of issues.
Unfortunately, the two suggestions you mentioned are the only ones that are possible with Xcode UI Testing in its current state.
There is, however, one thing you can do to mitigate the risk of embedding test code in your production app. With the help of a few compiler flags, you can ensure the code in question is only compiled when building for the simulator.
#if (arch(i386) || arch(x86_64)) && os(iOS)
class SeededHTTPClient: HTTPClientProtocol {
    // ...
}
#endif
I'm in the middle of building something to make this a little easier. I'll report back when it's ready for use.
Regarding setting up state in the target app, there is a solution. Both the test runner app and your app can read and write to the simulator's /Library/Caches folder. Knowing that, you can bundle fixture data in your test bundle, copy it to /Library/Caches in setUp(), and pass a launch argument telling your application to use that fixture data.
This requires only minimal changes to your app: you just prepare it to handle the argument at startup and copy everything over to your app's container.
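A rough sketch of such a setUp(), assuming a bundled fixtures.json and a hypothetical -UseFixtureData argument the app checks at startup:

import XCTest

class FixtureUITests: XCTestCase {
    override func setUp() {
        super.setUp()
        continueAfterFailure = false

        // Copy the bundled fixture into the simulator's shared caches folder.
        let fixtureURL = Bundle(for: type(of: self))
            .url(forResource: "fixtures", withExtension: "json")!
        let caches = FileManager.default
            .urls(for: .cachesDirectory, in: .userDomainMask)[0]
        try? FileManager.default.copyItem(
            at: fixtureURL,
            to: caches.appendingPathComponent("fixtures.json"))

        // Tell the app to load its state from the fixture.
        let app = XCUIApplication()
        app.launchArguments += ["-UseFixtureData"]
        app.launch()
    }
}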
If you want to read more about this, or how you can do the same when running on the device, I've actually written a post on it.
Regarding isolating your UI tests from the network, I think the best solution is to embed a web server in your test bundle and have your app connect to it (again, you can use a launch argument to parameterize your app). You can use Embassy for that.
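A rough sketch of embedding Embassy in the test bundle, based on its documented usage; the port and payload are illustrative, and -BaseURL is a hypothetical argument the app would have to honor:

import XCTest
import Embassy

class StubbedBackendTests: XCTestCase {
    func testAgainstStubbedBackend() throws {
        let loop = try SelectorEventLoop(selector: try KqueueSelector())
        let server = DefaultHTTPServer(eventLoop: loop, port: 8080) {
            environ, startResponse, sendBody in
            startResponse("200 OK", [("Content-Type", "application/json")])
            sendBody(Data("{\"items\": []}".utf8))
            sendBody(Data()) // an empty chunk ends the response
        }
        try server.start()

        // Run the event loop on a background thread while the test drives the UI.
        DispatchQueue.global().async { loop.runForever() }

        let app = XCUIApplication()
        app.launchArguments += ["-BaseURL", "http://localhost:8080"]
        app.launch()
        // ... UI assertions against the stubbed data ...

        server.stop()
        loop.stop()
    }
}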

Testing Web Site Project with NUnit

I'm new to web development and have the following questions.
I have a Web Site project. I have one DataContext class in the App_Code folder, which contains methods for working with the database (the .dbml schema is also there) and methods which do not directly touch the database. I want to test both kinds of methods using NUnit.
As NUnit works with classes in a .dll or .exe, I understand that I will need to either convert my entire project to a Web Application, or move all of the code that I would like to test (i.e. the entire contents of App_Code) to a class library project and reference the class library from the web site project.
If I choose to move the methods to a separate DLL, the question is how do I test the methods there which work with the database:
- Will I have to create a connection to the database in a "setup" method before running each of those tests? Is it correct that there is then no need to run the web application?
- Or do I need to run such tests while the web site is running and the connection is established? In that case, how do I set up the project and NUnit?
- Or some other way?
Second, if a method depends on some settings in my .config file, for instance network credentials or SMTP setup, what is the approach to testing such methods?
I will greatly appreciate any help! The more concrete, the better.
Thanks.
Generally, you should be mocking your database rather than really connecting to it for your unit tests. This means that you provide fake data access class instances that return canned results. Generally you would use a mocking framework such as Moq or Rhino to do this kind of thing for you, but lots of people also just write their own throwaway classes to serve the same purpose. Your tests shouldn't be dependent on the configuration settings of the production website.
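A minimal sketch of that idea with Moq; the interface and names are made up for illustration:

using Moq;
using NUnit.Framework;

public interface IProductRepository
{
    string GetProductName(int id);
}

public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public string DescribeProduct(int id)
    {
        return "Product: " + _repository.GetProductName(id);
    }
}

[TestFixture]
public class ProductServiceTests
{
    [Test]
    public void DescribeProduct_UsesCannedDataInsteadOfTheDatabase()
    {
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetProductName(42)).Returns("Widget");

        var service = new ProductService(repository.Object);

        Assert.AreEqual("Product: Widget", service.DescribeProduct(42));
    }
}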
There are many reasons for doing this, but mainly it's to separate your tests from your actual database implementation. What you're describing will produce very brittle tests that require a lot of upkeep.
Remember, unit testing is about making sure small pieces of your code work. If you need to test that a complex operation works from the top down (i.e. everything works between the steps of a user clicking something, getting data from a database, and returning it and updating a UI), then this is called integration testing. If you need to do full integration testing, it is usually recommended that you have a duplicate of your production environment - and I mean exact duplicate, same hardware, software, everything - that you run your integration tests against.

Java ME Application running fine in Emulator but crashing when deployed to N70. Any way to identify the reason for crashing?

I have developed a Java ME application for the CLDC platform. It works fine when executed in an emulator, but when I deploy it to my N70 phone the application doesn't start at all. In my application there are some 14 classes, and on application start I create an instance of each and put them in a Vector. The classes just have one variable and two methods. Can creating this many instances be the reason for the crash?
Is there any way I can find out why the application is not able to start on the phone?
Update:
It runs fine on the emulator. One more thing I would like to mention: the code stops executing exactly at the point where I create those 14 instances and add them to the Vector. Up to that point, the code executes fine.
It might depend on where in the code you are creating those instances. If you are creating them in your MIDlet constructor or the startApp method, try moving the initialization into the run method of your application.
One way of debugging Java ME applications that don't start on the phone is by adding "printf"-style debug messages in your code that are written to the record store (RMS), and adding another MIDlet to your application that reads from RMS and displays those messages.
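A minimal sketch of such an RMS logger (the store name is illustrative):

import javax.microedition.rms.RecordStore;

public final class RmsLog {
    // Append one message to the log record store, creating it if necessary.
    public static void log(String message) {
        try {
            RecordStore store = RecordStore.openRecordStore("debug-log", true);
            byte[] bytes = message.getBytes();
            store.addRecord(bytes, 0, bytes.length);
            store.closeRecordStore();
        } catch (Exception e) {
            // The logger must never crash the app being debugged.
        }
    }
}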
Or you could just comment out bits of code and see if it works.
You can debug on device. If the emulator you are using is part of the Nokia SDK then there should be facilities elsewhere to carry out on-device testing and debugging. (I'd post more detail on this but I've only done this with Sony Ericsson phones recently.)
Another option is to use the Nokia tools that allow you to view the standard output and error for your application when it is running on your device (via Bluetooth for example).
The probability that your application is actually crashing the Java Virtual Machine bytecode interpreter thread and terminating the whole native process is very small.
It has happened before but you need to eliminate several other potential issues before being convinced of an actual crash.
It is more likely that either:
Your MIDlet is not created or not started because the MIDP runtime decides it is not correct.
or
Your MIDlet simply throws an exception that you don't catch, which can make it look like it was brutally terminated.
Since the MIDlet installer is supposed to prevent you from installing a bad MIDlet, the uncaught exception issue is more likely.
How to find an uncaught exception (a sketch follows the steps below):
Start with the simplest HelloWorld MIDlet, using a Form so you can easily insert more StringItems at the top of the screen.
Create and start a new Thread in MIDlet.startApp()
In your override of Thread.run(), add a try{}catch(Throwable){} block.
Inside that block, do whatever your original MIDlet did.
Use the form as your standard output for debugging.
You can use Form logging to make sure you don't enter an infinite loop, to display exception classes and messages, to flag logical milestones, to display variable values...
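A minimal sketch of those steps; the class name and messages are illustrative:

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.StringItem;
import javax.microedition.midlet.MIDlet;

public class DebugMIDlet extends MIDlet implements Runnable {
    private final Form form = new Form("Debug");

    protected void startApp() {
        Display.getDisplay(this).setCurrent(form);
        new Thread(this).start(); // keep startApp() itself trivial
    }

    public void run() {
        try {
            form.insert(0, new StringItem(null, "started\n"));
            // ... do whatever the original MIDlet did, e.g. create the 14 instances ...
        } catch (Throwable t) {
            form.insert(0, new StringItem(null, "uncaught: " + t + "\n"));
        }
    }

    protected void pauseApp() {
    }

    protected void destroyApp(boolean unconditional) {
    }
}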
That's the first step to figuring out what is going on.
I also faced a similar problem, and when I recompiled my MIDlet as MIDP 1.0 it worked fine. It seems the N70 is not able to run the newer MIDP version. I think you should downgrade and re-test your MIDlet.
Regards,
Junaid
