What is ExampleUnitTest in Android Studio?

My boss asked me to delete unused files in my project, and specifically asked me to delete ExampleUnitTest and ExampleInstrumentedTest. What are these files used for, and will it be a problem if I delete them?

A simple response is: no, there is no problem if you delete these files.
You can evaluate your app's logic using local unit tests when you need to run tests more quickly and don't need the fidelity and confidence associated with running tests on a real device. With this approach, you normally fulfill your dependency relationships using either Robolectric or a mocking framework, such as Mockito. Usually, the types of dependencies associated with your tests determine which tool you use:
If you have dependencies on the Android framework, particularly those that create complex interactions with the framework, it's better to include framework dependencies using Robolectric.
If your tests have minimal dependencies on the Android framework, or if the tests depend only on your own objects, it's fine to include mock dependencies using a mocking framework like Mockito.
Instrumented unit tests, on the other hand, are tests that run on physical devices and emulators. Instrumented tests provide more fidelity than local unit tests, but they run much more slowly. Therefore, we recommend using instrumented unit tests only in cases where you must test against the behavior of a real device.
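For reference, the generated ExampleUnitTest is just a placeholder local unit test, roughly like the sketch below (the exact generated file varies by Android Studio version and project language):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Placeholder local unit test generated by Android Studio. It runs on the
// JVM (no device needed) and only asserts that 2 + 2 == 4, so deleting it
// loses nothing.
public class ExampleUnitTest {
    @Test
    public void addition_isCorrect() {
        assertEquals(4, 2 + 2);
    }
}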

Related

How to Unit Test a C++/WinRT Component? Preferably with Code Coverage

I'm in the process of writing some new C++/WinRT based components to replace some much older C++/CX code. The goal is to be able to use third-party C++ tools that don't understand CX (static code analyzers, etc.).
However, the first step in the journey is to ensure I can properly unit test my own code. Unit testing C++/CX code typically used the "C++ Unit Test App" project type, which is C++/CX based and has its own issues (lack of code coverage support, a required "Run All" before tests show up in the Test Explorer, stability, etc.).
Browsing through the available project types in Visual Studio 2017, I did not see a unit test project template for C++/WinRT based projects. Is my only option to use the "C++ Unit Test App" template with all its failings, or is there another way to build tests for a C++/WinRT library?
Perhaps there is a way to configure either the "Native Unit Test Project" or "Google Test" project templates to support what I'm looking for?
Ideally what I'm looking for is something that doesn't require launching a UI, is pure C++(/WinRT), and supports Visual Studio's Code Coverage Analysis.
There is no unit test project that is specific to C++/WinRT, much like there isn't one for other libraries like STL. I would recommend Catch2 as it supports C++17 (a requirement for C++/WinRT) and works well on Windows. It is also what we use for testing C++/WinRT itself. Catch2 is nice because it helps you create a simple console app that acts as the test driver that includes all of the tests.
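A minimal Catch2 test driver is just a console app along these lines (a sketch in Catch2 v2 single-header style; the Add function is a hypothetical stand-in for your own library code):

#define CATCH_CONFIG_MAIN  // ask Catch2 to generate main() for the console test driver
#include <catch2/catch.hpp>

// Hypothetical function under test; substitute your own code.
static int Add(int a, int b) { return a + b; }

TEST_CASE("Add sums two integers", "[math]")
{
    REQUIRE(Add(2, 2) == 4);
    REQUIRE(Add(-1, 1) == 0);
}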
For code coverage I don't have a strong recommendation, but if you are using Visual Studio then you might want to try VSInstr. It can be used for code coverage and produces a report that can be viewed with Visual Studio.
Make sure your code is built using the /profile linker option. This will ensure that profile hooks are included in a dedicated section of the PE file. Next, run vsinstr to instrument any of the binaries you're interested in (that were previously built with /profile):
vsinstr /coverage tests.exe
Now run vsperfcmd to begin collecting coverage data:
vsperfcmd /start:coverage /output:report
Run the code as normal. For Catch2, you can simply run the executable at the command line. Then you need to stop the collection as follows:
vsperfcmd /shutdown
And you're done. You can now view the report in Visual Studio:
devenv report.coverage
Hope that helps. Again, this is not specific to C++/WinRT, and since C++/WinRT is a header-only library, you are liable to get a lot of noise that is unrelated to your specific project. I haven't found a good way to deal with that yet.
Expanding on my comment to Kenny Kerr's answer for those that are interested...
If you are planning on using Catch2 as recommended, then the C++/WinRT Windows Console Application template is a great starting point. Pretty much all you have to do is tweak main() to set up Catch2 and start writing your test cases. My only complaint is that the C++/WinRT templates don't allow you to add Windows Runtime Component project references via the UI (it must be done by editing the vcxproj). There is probably a similar problem adding NuGet package references.
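The main() tweak amounts to something like this sketch (Catch2 v2; winrt::init_apartment is the call the template's generated main() already makes):

#include "pch.h"  // the template's precompiled header, which pulls in the winrt headers

#define CATCH_CONFIG_RUNNER  // we supply main() ourselves instead of letting Catch2 generate it
#include <catch2/catch.hpp>

int main(int argc, char* argv[])
{
    winrt::init_apartment();                  // initialize the Windows Runtime, as the template does
    return Catch::Session().run(argc, argv);  // hand control to Catch2's test runner
}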
As noted in my comment above, there is a Catch2 test adapter for Visual Studio 2017/2019 in the marketplace. Be aware that it requires a .runsettings file to enable the adapter and to tell it which projects are Catch2 test applications (via a regex). Without a properly configured runsettings file, it will not find your tests. I also had to increase the discovery timeout; otherwise it "forgot" my tests occasionally.
Regarding code coverage: when using Visual Studio, you can configure coverage to include/exclude functions in the .runsettings file. See Microsoft's site for details. For myself, I added the following in the CodeCoverage section and it works pretty well so far:
<Functions>
<Include>
<Function>.*YourNamespaceHere.*</Function>
</Include>
<Exclude>
<Function>winrt.*GetRuntimeClassName</Function>
<Function>winrt::impl.*</Function>
<Function>winrt::(?!YourNamespaceHere).*</Function>
</Exclude>
</Functions>
For those that are trying to test a C++/WinRT Windows Runtime Component like me, and have code that is not exposed as part of the WRC interface, here is what I did to make that testable...
Create a C++ Shared Items Project
Move all of the code for your Windows Runtime Component (WRC) project into the shared items project, and out of the WRC project. Going forward, only add/remove files from the shared project. That way you don't have to touch the WRC or Test projects when files are added/removed.
Add a reference to this shared items project in both your original WRC project and your test project
Make sure your test project and WRC project are configured similarly with respect to compile settings and project/NuGet references
Edit the test project and ensure the RootNamespace is configured the same as in the WRC project (this probably has to be done via your favorite text editor; see the snippet after this list). This is required because otherwise the generated headers will be prefixed with the namespace and thus won't be found by the shared code.
(Optional for Code Coverage) In the test project, enable profiling (Linker > Advanced > Profile > Yes)
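The RootNamespace edit mentioned above is a one-line property in the test project's .vcxproj (YourNamespaceHere is a placeholder for your WRC project's namespace):

<PropertyGroup>
  <RootNamespace>YourNamespaceHere</RootNamespace>
</PropertyGroup>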
You should now be able to write tests that exercise the private code. As to whether or not this is the best approach, I leave that to the reader. It works for me, and the code I'm testing is simple enough that I'm not overly concerned with the project definitions not aligning perfectly. Your mileage may vary.
I will note that the above can also be used to make the "Native Unit Test Project" work with C++/WinRT; you just have the extra step of integrating the C++/WinRT bits into the test project first.

Testing a library split into multiple packages

I'm developing a database DSL which has multiple backends. To avoid forcing unwanted dependencies upon users, the DSL is split up into a "core" package, containing the DSL itself, and one package for each backend. The backend packages all depend on the core package, as it defines the API each backend needs to provide.
Now, I want to add a test suite for my DSL. Since most of the functionality being tested lives in the core package, that's where I want to put the test suite. However, in order to actually run any tests, at least one backend is needed. This means that the test suite depends on both the core package and a backend package, but the backend package in turn depends on the core package, creating a circular dependency.
The obvious solution is to create yet another package for the tests only which depends on both the core and a backend, or to move the backend API into its own package which the core and backend packages can depend on (allowing the backends to not depend on the core package). However, if possible I'd like to keep the package structure as it is, and have the test suite as part of the core package.
Is this possible?
One possible solution, where you could keep the package layout, would be to have a test suite module provided by the core package, e.g. in YourDSLLib.Tests, implementing the common tests in a generic way for anything that satisfies the defined API.
You could then add a very simple test to each backend, which just calls the testing function in YourDSLLib.Tests.
An advantage would be the possibility to maintain common testing code, but also to customize tests per backend if necessary.
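If the DSL is in Haskell, the shape is roughly the following sketch (the Backend class and runQuery method are hypothetical stand-ins for whatever API your core package actually defines):

-- Shipped with the core package, e.g. as module YourDSLLib.Tests.
module YourDSLLib.Tests (backendSpec) where

import Test.Hspec

-- Hypothetical stand-in for the backend API defined by the core package.
class Backend b where
  runQuery :: b -> String -> IO [String]

-- Generic test suite: works for any backend that satisfies the API.
backendSpec :: Backend b => b -> Spec
backendSpec b = describe "common backend behaviour" $
  it "returns no rows for an empty table" $ do
    rows <- runQuery b "SELECT * FROM empty_table"
    rows `shouldBe` []

Each backend package then only needs a tiny test executable whose main calls hspec (backendSpec itsOwnBackend).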

Testing a WPF app with Coded UI tests: should the Coded UI test project share a solution or not?

First, some context: we are developing a large desktop WPF application in .NET 4.5, targeting 64-bit Windows 7 and 8. We are using Visual Studio 2012.2 (soon to be .3, then probably 2013!) and TFS 2012 (again, .2 soon to be .3, then 2013).
Currently this product is all in a single large solution (just over 50 projects) yielding a WPF exe, a load of dlls and a nice MSI to install it.
We use TFS (gated and scheduled) to build the solution, its installer (WiX) and run its tests (SpecFlow for BDD and MSTest unit tests) and this is working very well.
I have a separate scheduled TFS build that deploys the MSI to a physical test rig in an untrusted AD domain via a PowerShell script (see TFS2012 LabDefault.11 template deploy scripts fail with “Team Foundation Server could not complete the deployment task” for details of the challenges involved with that!)
OK, so that's where I am. Now I want to take things to the next step: Coded UI tests (CUIT) to drive full app integration testing; I want to "smoke test" my builds.
So, being a simple soul, I added a new project to my product's solution: a Coded UI test project.
This happily runs the locally installed product (rather than the just-built one; I ultimately want the CUIT to run on a deployed test rig as a smoke test, and that rig has just installed the MSI I just built) and performs some UI tests with assertions.
Now my problem: with the CUIT project as part of the product's solution, a local test run finds and runs my CUIT tests, and this is undesired. I only want to run the CUIT tests in a lab build's test phase.
So is putting the CUIT project into the product solution a bad idea? Or should it be a separate solution? Splitting them seems wrong somehow, as they are related: the CUIT project is the full-stack integration test for the solution's deployable application.
Can I include the CUIT in the product's solution and stop the test runner from seeing the tests? Or is it better just to have two solutions?
What are the pros and cons folks?
Update
In the end we created a new solution containing a Coded UI test project and ensured it was built by the same TFS build that built the UI solution. This allows us to load and run the Coded UI tests locally without issues, and the unit tests in the main UI project are left unmolested. It still seems a little disjointed, but on a multi-person team the per-user test settings were too awkward; splitting the Coded UI tests into a separate solution was simpler.
What I did was make one solution with a CUIT project within it, and I then made multiple Coded UI tests within that. This is good because, using an ordered test, you can run them together, and they also share a UIMap, which helps too.
I also have/had this problem, because we are at the beginning of using CUIT. For now the CUIT remains in the product solution. We do this because the tests should stay in the developers' minds; when the tests live in their own solution, I'm afraid they will fall into oblivion. But indeed there is sometimes a bad feeling that the CUIT pollutes the product's solution, so I guess they will get their own solution after some time passes and the tests become established.
Edit: if you use different versions of Visual Studio, you have to consider that, for example, VS Professional can't build a solution with Coded UI tests. This means that in “multi VS-version environments” you have to separate the Coded UI tests from the “real” code.

Unit testing vs Integration testing of an Express.js app

I'm writing tests for an Express.js app and I don't know how to choose between unit tests and integration tests.
Currently I have experimented with:
Unit tests - using Sinon for stubs/mocks/spies and Injects for dependency injection into modules. With this approach I have to stub MongoDB and other external methods.
I thought about unit testing the individual routes and then using an integration test to verify that the correct routes are actually invoked.
Integration tests - using Supertest and Superagent; much less code to write (no need to mock/stub anything), but a test environment has to exist (databases, etc.).
I'm using Mocha to run both styles of tests.
How should I choose between these two approaches?
You should probably do both. Unit test each non-helper method that does non-trivial work. Run the whole thing through a few integration tests. If you find yourself having to do tons and tons and tons of mocks and stubs, it's probably a sign to refactor.
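For instance, an integration test with Supertest can be as small as this sketch (assuming your Express app is exported from app.js without calling listen, and that a /health route exists; both are hypothetical here):

// test/app.test.js (run with Mocha)
const request = require('supertest');
const app = require('../app'); // assumes app.js exports the Express app

describe('GET /health', function () {
  it('responds with 200 and a JSON body', function () {
    // Supertest binds the app to an ephemeral port; no running server needed.
    return request(app)
      .get('/health')
      .expect('Content-Type', /json/)
      .expect(200);
  });
});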

Testing Web Site Project with NUnit

I'm new to web dev and have the following questions.
I have a Web Site project. I have one DataContext class in the App_Code folder, which contains methods for working with the database (the .dbml schema is also present there) and methods which do not directly touch the db. I want to test both kinds of methods using NUnit.
As NUnit works with classes in a .dll or .exe, I understand that I will need to either convert my entire project to a Web Application, or move all of the code that I would like to test (i.e. the entire contents of App_Code) to a class library project and reference that class library from the web site project.
If I choose to move the methods to a separate dll, the question is: how do I test the methods there which work with the database?
Will I have to create a connection to the db in a "setup" method before running each of those methods? Is it correct that there is no need to run the web application in this case?
Or do I need to run such tests while the web site is running and the connection is established? In that case, how do I set up the project and NUnit?
Or some other way?
Second, if a method depends on some setting in my .config file, for instance network credentials or SMTP setup, what is the approach to testing such methods?
I will greatly appreciate any help!
The more concrete it is, the better.
Thanks.
Generally, you should be mocking your database rather than really connecting to it for your unit tests. This means that you provide fake data access class instances that return canned results. Generally you would use a mocking framework such as Moq or Rhino to do this kind of thing for you, but lots of people also just write their own throwaway classes to serve the same purpose. Your tests shouldn't be dependent on the configuration settings of the production website.
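As a sketch, with NUnit and Moq that looks something like this (the repository interface and service class are hypothetical stand-ins for your data access code):

// NUnit + Moq sketch; the interface and service are hypothetical stand-ins.
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

public interface ICustomerRepository
{
    IList<string> GetCustomerNames();
}

public class CustomerService
{
    private readonly ICustomerRepository _repo;
    public CustomerService(ICustomerRepository repo) { _repo = repo; }
    public int CountCustomers() { return _repo.GetCustomerNames().Count; }
}

[TestFixture]
public class CustomerServiceTests
{
    [Test]
    public void CountCustomers_ReturnsNumberOfNames()
    {
        // Canned results instead of a real database connection.
        var mock = new Mock<ICustomerRepository>();
        mock.Setup(r => r.GetCustomerNames())
            .Returns(new List<string> { "Ada", "Grace" });

        var service = new CustomerService(mock.Object);

        Assert.AreEqual(2, service.CountCustomers());
    }
}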
There are many reasons for doing this, but mainly it's to separate your tests from your actual database implementation. What you're describing will produce very brittle tests that require a lot of upkeep.
Remember, unit testing is about making sure small pieces of your code work. If you need to test that a complex operation works from the top down (i.e. everything works between the steps of a user clicking something, getting data from a database, and returning it and updating a UI), then this is called integration testing. If you need to do full integration testing, it is usually recommended that you have a duplicate of your production environment - and I mean exact duplicate, same hardware, software, everything - that you run your integration tests against.
