My partial build definition is as follows (screenshot: https://i.stack.imgur.com/roBmM.png):
And the error shows:
Error: No tests were discovered from the specified test sources
(screenshot: https://i.stack.imgur.com/olrSF.png)
I have already copied test.bat to C:\Tests\. Are there any errors in my build definition? And are batch scripts supported in functional tests? Thank you.
Batch files are not supported in functional tests. You need to make sure the tests can be run in Visual Studio or via vstest.console.exe.
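For example, you can verify locally that the tests are discoverable by pointing vstest.console.exe at the test assembly (the DLL name here is just a placeholder):
vstest.console.exe C:\Tests\MyTests.dll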
We recently started using Karate for API testing in our project, and we are using the executable JAR file with the Visual Studio plugin for Karate. Currently we are not using any test-runner classes or JUnit in our framework, and we are still able to achieve almost everything through tags and the karate-config.js file. We use both the cucumber-html report and surefire-report plugins, and results are generated in the target folder on execution.
Now we are looking to customize the outputs to different folders. I assume we could use the reportDir() parameter to set the output folder path. Can someone please advise whether this is achievable in the executable JAR version and without the JUnit framework? If possible, where can I set this path in our tests? Do I need to create a test-runner class for this?
Can you please start evaluating the RC version? Details here: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
You should be able to set a different "output" folder using the command-line -o or --output flag.
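For example, when launching the standalone JAR (the paths here are placeholders):
java -jar karate.jar -o target/custom-reports src/test/features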
Based on your feedback, we can improve it.
I'm in the process of writing some new C++/WinRT based components in order to replace some much older C++/CX code. The goal is to be able to use third-party C++ tools that don't understand CX (static code analyzers, etc).
However, the first step in the journey is to ensure I can properly unit test my own code. Unit testing C++/CX code typically used the "C++ Unit Test App" project type, which is C++/CX based and has its own issues (lack of code coverage support, a "Run All" being required before tests show up in Test Explorer, stability, etc.).
Browsing through the available project types in Visual Studio 2017, I did not see a unit test project template for C++/WinRT based projects. Is my only option to use the "C++ Unit Test App" template with all its failings, or is there another way to build tests for a C++/WinRT library?
Perhaps there is a way to configure either the "Native Unit Test Project" or "Google Test" project templates to support what I'm looking for?
Ideally what I'm looking for is something that doesn't require launching a UI, is pure C++(/WinRT), and supports Visual Studio's Code Coverage Analysis.
There is no unit test project that is specific to C++/WinRT, much like there isn't one for other libraries like STL. I would recommend Catch2 as it supports C++17 (a requirement for C++/WinRT) and works well on Windows. It is also what we use for testing C++/WinRT itself. Catch2 is nice because it helps you create a simple console app that acts as the test driver that includes all of the tests.
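For illustration, such a driver can be as small as this (a sketch using the Catch2 v2 single header; the test case itself is just a placeholder):
// tests.cpp - a Catch2 console test driver
#define CATCH_CONFIG_MAIN // ask Catch2 to generate main() for us
#include "catch.hpp"

#include <string>

TEST_CASE("strings compare equal", "[sample]")
{
    std::string const greeting{ "hello" };
    REQUIRE(greeting == "hello");
}
Build it as a console application and run the resulting executable to execute the tests.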
For code coverage I don't have a strong recommendation, but if you are using Visual Studio then you might want to try VSInstr. It can be used for code coverage and produces a report that can be viewed with Visual Studio.
Make sure your code is built using the /profile linker option. This will ensure that profile hooks are included in a dedicated section of the PE file. Next, run vsinstr to instrument any of the binaries you're interested in (that were previously built with /profile):
vsinstr /coverage tests.exe
Now run vsperfcmd to begin collecting coverage data:
vsperfcmd /start:coverage /output:report
Run the code as normal. For Catch2, you can simply run the executable at the command line. Then you need to stop the collection as follows:
vsperfcmd /shutdown
And you're done. You can now view the report in Visual Studio:
devenv report.coverage
Hope that helps. Again, this is not specific to C++/WinRT and since C++/WinRT is a header-only library you are liable to get a lot of noise that is unrelated to your specific project. I haven't found a good way to deal with that yet.
Expanding on my comment to @Kenny Kerr's answer for those that are interested...
If you are planning on using Catch2 as recommended, then the C++/WinRT Windows Console Application template is a great starting point. Pretty much all you have to do is tweak the main() to setup Catch2 and start writing your test cases. My only complaint is that the C++/WinRT templates don't allow you to add Windows Runtime Component project references via the UI (must be done by editing the vcxproj). There is probably a similar problem adding NuGet package references.
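For reference, the tweaked main() can look roughly like this (a sketch assuming the Catch2 v2 single header; CATCH_CONFIG_RUNNER lets you initialize the Windows Runtime before any tests run):
// main.cpp - hand-rolled Catch2 entry point for a C++/WinRT test app
#define CATCH_CONFIG_RUNNER // we provide main() ourselves
#include "catch.hpp"

#include <winrt/base.h>

int main(int argc, char* argv[])
{
    winrt::init_apartment(); // initialize the Windows Runtime before tests run
    return Catch::Session().run(argc, argv);
}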
As noted in my comment above, there is a Catch2 test adapter for Visual Studio 2017/2019 in the marketplace. Be aware that it requires a .runsettings file to enable the adapter and to tell it which projects are Catch2 test applications (via a regex). Without a properly configured runsettings, it will not find your tests. I also had to increase the discovery timeout, otherwise it "forgot" my tests occasionally.
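For illustration, the skeleton of such a .runsettings looks roughly like the following; treat the element names as an assumption on my part and verify them against the adapter's documentation, since they may differ between versions:
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <Catch2Adapter>
    <!-- regex that tells the adapter which executables are Catch2 test apps -->
    <FilenameFilter>Tests</FilenameFilter>
    <!-- discovery timeout in milliseconds; raise it if tests get "forgotten" -->
    <DiscoverTimeout>20000</DiscoverTimeout>
  </Catch2Adapter>
</RunSettings>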
With regard to code coverage, when using Visual Studio you can configure coverage to include/exclude functions in the .runsettings file. See Microsoft's documentation for details. For myself, I added the following in the CodeCoverage section, and it works pretty well so far:
<Functions>
  <Include>
    <Function>.*YourNamespaceHere.*</Function>
  </Include>
  <Exclude>
    <Function>winrt.*GetRuntimeClassName</Function>
    <Function>winrt::impl.*</Function>
    <Function>winrt::(?!YourNamespaceHere).*</Function>
  </Exclude>
</Functions>
For those that, like me, are trying to test a C++/WinRT Windows Runtime Component and have code that is not exposed as part of the WRC interface, here is what I did to make that testable...
Create a C++ Shared Items Project
Move all of the code for your Windows Runtime Component (WRC) project into the shared items project, and out of the WRC project. Going forward, only add/remove files from the shared project. That way you don't have to touch the WRC or Test projects when files are added/removed.
Add a reference to this shared items project in both your original WRC project, and your test project
Make sure your test project and WRC project are configured similarly with respect to compile settings and project/NuGet references
Edit the test project and ensure the RootNamespace is configured the same as the WRC project's (probably has to be done via your favorite text editor; see the snippet after this list). This is required because otherwise the generated headers will be prefixed with the namespace, and thus won't be found by the shared code.
(Optional for Code Coverage) In the test project, enable profiling (Linker > Advanced > Profile > Yes)
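For the RootNamespace step, the edit is a single MSBuild property in the test project's .vcxproj, e.g.:
<PropertyGroup>
  <!-- must match the RootNamespace of the Windows Runtime Component project -->
  <RootNamespace>YourNamespaceHere</RootNamespace>
</PropertyGroup>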
You should now be able to write tests that exercise the private code. As to whether or not this is the best approach, I leave to the reader. It works for me, and the code I'm testing is simple enough that I'm not overly concerned with the project definitions not aligning perfectly. Your mileage may vary.
I will note that the above can also be used to make the "Native Unit Test Project" work with C++/WinRT, you just have the extra steps of integrating the C++/WinRT bits into the test project first.
I have inherited a Java / Maven / Cucumber project. I am fairly new to Cucumber.
Inside one of the folders I have a class like this...
import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;
@RunWith(Karate.class)
public class RoadsRunner {
}
Then in the same subdirectory / package I have a .feature file with a number of scenarios.
Feature: Check transaction

Background:
  * url apiHost + '/api/v1'
  * configure headers = {'X-TransactionID': '#(Math.random().toString())' }

Scenario: Get Classes
  # get classes
  Given path '/myUrl/classes'
  And param processName = 'myProcess'
  When method get
  Then status 200
Question One.
I am using Eclipse. Is there a way I can debug through the test in a similar way that I would debug a Java app?
I have downloaded the Cucumber Eclipse plugin myself but can't quite figure out how to use it.
Question Two.
Without using a custom plugin to debug, is there anything I can add to the scenarios to print extra debug information?
Thanks.
The Cucumber Eclipse plugin gives you 2 things:
IDE syntax coloring / formatting support
Being able to right-click and run a Feature directly without the JUnit "runner"
Karate is Java behind the scenes, so you can debug and set breakpoints, but it may not be as seamless as you expect. In 0.6.0 you have the option of placing a conditional breakpoint in the Karate code that runs before / after each test step (see screenshot).
So as you rightly called out, printing to the log might be the most effective way to work through complicated test scripts. Please refer to the print keyword - which is exactly what you are looking for.
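For example, adding print steps to the scenario above will log the last response (response and responseStatus are variables that Karate sets automatically after each HTTP call):
When method get
Then status 200
* print 'status was:', responseStatus
* print 'response was:', response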
2 more points:
the optional HTML report includes all HTTP request / response logs - which is great for troubleshooting a test.
I would love for the Karate UI (currently in alpha) to become stable sooner and be the best option for debugging, please do submit feedback and contribute if you can.
EDIT: we now have Visual Studio Code IDE support with first-class debugging: https://github.com/intuit/karate/wiki/IDE-Support#vs-code-karate-plugin
EDIT 2: If you want to debug Java code, that is possible with the new IntelliJ plugin: https://plugins.jetbrains.com/plugin/19232-karate
As per the documentation here, at this moment the best way to debug Karate steps is to use Visual Studio Code for developing tests and the VS Code Karate plugin for debugging.
Visual Studio Code is free, built on open source, and runs on all platforms, including macOS, Linux, and Windows.
Please note:
The Karate UI has been retired and is not available from 0.9.5 onwards!
Use the VS Code debug support instead.
As per the comment by Peter Thomas, Eclipse/IntelliJ may also support debugging, but I am unable to find any development there.
I'm working on a TypeScript project in Visual Studio (2015 Community edition), building server-side unit tests using Mocha.
However, I read that Node.js Tools for Visual Studio supports running tests within the VS Test Runner, even TypeScript unit tests: you have to set the TestFramework property of the file to 'Mocha'. The project I'm working on even already has existing tests for which this is set. However, I don't get a dropdown option in the GUI to set it; it's just empty.
I'm using NTVS v1.1 (and TypeScript 1.7). Am I missing something in my Visual Studio setup? The build type of the test .ts file is also already set to TypeScriptCompile. Perhaps this is more of a VS-specific question than a programming question, but the environment/tooling is so programmer-specific that I thought somebody here could help me.
PS: Running tests manually each time is driving me crazy, and I bumped into too many problems using an HTML spec runner, which I tried first, because these are server-side tests (e.g. CommonJS require and import statements that my browser doesn't understand) and also because it's TypeScript.
But alternative solutions are also welcome. I'm using Grunt and also have experience using Gulp; I'm just hoping for a full solution, not something that'll cost me half a day to script together, debug, and document.
Note: I DO get the dropdown to select the Mocha TestFramework for .js files (after including them in the VS project), but NOT for .ts files :S.
Hmm... pretty silly, but it is indeed just a GUI issue:
You simply have to type Mocha into the property field yourself manually.
In the case of TypeScript, no dropdown (i.e. caret) appears on hover in the 'TestFramework' field. My expectation was really fixed on the dropdown experience because it DOES appear for JavaScript files and in so many other fields in the 'Properties' window.
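If you prefer to edit the project file directly: typing the value simply stores per-file metadata, which should end up looking roughly like this (the exact shape is an assumption on my part; diff your project file after setting the property to confirm):
<TypeScriptCompile Include="ServerTests.ts">
  <TestFramework>Mocha</TestFramework>
</TypeScriptCompile>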
So a short overview of things to do:
Type it in yourself, and make sure you don't type 'Mohca' or something :).
Install Mocha locally
Make sure the BuildAction of the .ts file is set to TypeScriptCompile
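With those in place, a minimal server-side Mocha test in TypeScript looks like this (the file name is just an example, and the Mocha type definitions must be available for describe/it to compile):
// ServerTests.ts - minimal Mocha test, runnable via the VS test runner or mocha
import assert = require("assert");

describe("sample suite", () => {
    it("adds numbers", () => {
        assert.strictEqual(1 + 1, 2);
    });
});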
I'm off fixing other issues. This TypeScript is nice, but the tools and language are evolving too fast for the 'Google-based development' I have to rely on to really work well :S.
Are there any extensions to HUnit or QuickCheck that allow a continuous integration system like Bamboo to do detailed reporting of test results?
So far, my best idea is to simply trigger the tests as part of a build script, and rely on the tests to fail with a non-zero exit code. This is effective for getting attention when a test fails, but confuses build failures with test failures and requires wading through console output to determine the problem's source.
If this is the best option with current tools, my thought is to write a reporting module for HUnit that would produce output in the JUnit XML format, then point the CI tool at it as though it were reporting on a Java project. This seems somewhat hackish, though, so I'd appreciate your thoughts both on existing options and directions for new development.
The test-framework package provides tools for integrating tests using different testing paradigms, including HUnit and QuickCheck, and its console test runner can be passed a flag that makes it produce JUnit-compatible XML. We use it with Jenkins for continuous integration.
Invocation example:
$ ./test --jxml=test-results.xml
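A minimal sketch of such a driver, mixing HUnit and QuickCheck (the group and property names are made up; this needs the test-framework-hunit and test-framework-quickcheck2 provider packages):
-- Main.hs: console test driver whose executable accepts the --jxml flag
import Test.Framework (defaultMain, testGroup)
import Test.Framework.Providers.HUnit (testCase)
import Test.Framework.Providers.QuickCheck2 (testProperty)
import Test.HUnit (assertEqual)

main :: IO ()
main = defaultMain
  [ testGroup "arithmetic"
      [ testCase "two plus two" (assertEqual "2 + 2" 4 (2 + 2 :: Int))
      , testProperty "addition commutes" (\x y -> x + y == y + (x :: Int))
      ]
  ]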
I've just released a package which generates test suites based on modules containing QuickCheck properties: http://hackage.haskell.org/package/tasty-integrate
This is one step above test-framework/tasty at the moment, as it forcefully pulls/aggregates them off the filesystem, instead of relying upon per-file record keeping. I hope this helps your CI process.