goconvey not showing coverage of packages

In goconvey, there's a feature that shows package coverage: when you click on a package under analysis, the Go coverage tool opens, showing the source code colored by what has and hasn't been covered (example screenshot omitted).
However, many packages in my SUT show no test coverage when clicked, and some even return a 404. Clicking on the db package from that list, for example, fails this way (screenshots omitted).
What is causing this, and how do I remedy it?

Not sure if it applies to this case, but for me on Mac I had to run the goconvey tool with sudo (i.e. as Admin on Windows). Otherwise the creation of the intermediate files/dirs failed, with the tool/webserver reporting 'no coverage'.
Since you are receiving a 404, did you verify that the coverage report directory and files were actually created?

How to add an option to Cucumber report to remove scenarios that have a certain tag

I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios and produces nice Cucumber HTML reports at the end. On "overview-features.html", I would like an option added to the top right (alongside the existing "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would provide exactly the same information as overview-features.html, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a designated tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly check whether the number of passed features/scenarios is 100%. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to fail and will continue to fail until they're resolved, it is very tedious and time-consuming to go through all the individual feature file reports, find the failing scenarios, and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with two key changes:
After the Runner completes, you can massage the results and even re-try some tests.
You can inject a custom HTML report renderer.
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) An example of how to "post process" result data before rendering a report: RetryTest.java; also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can, in theory, implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047

How to troubleshoot "We couldn’t find a run python"?

I'm working on a pre-existing Python Code by Zapier Zap. The trigger is "Code By Zapier; Run Python". I've made some changes to the contained Python script, and now when I go to test that step I run into the following error message:
We couldn’t find a run python
Create a new run python in your Code by Zapier account and test your trigger again.
Is there any way of figuring out what went wrong?
I'm guessing a little bit, but I think this issue stems from repeatedly testing an existing trigger without returning a new ID.
When you run a test (or click the "load more" button), Zapier runs the trigger and looks through the returned array for any new items it hasn't seen before. It bases "newness" on whether it recognizes the id field of each returned object.
So if you're testing code that has changed but is still returning objects with previously seen ids, the editor will error, saying it can't find any new objects (the "can't find a run python" wording is a quirk of the way that text is generated; think of it as "can't find objects that we haven't seen before").
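Here is a minimal sketch of that deduplication idea, purely for illustration; the seen_ids set, the item shape, and find_new_items are assumptions for the example, not Zapier's actual implementation:

```python
# Hypothetical illustration of how a polling trigger dedupes by id.
# Zapier's real logic differs; this only shows the concept.
seen_ids = {"run-1", "run-2"}  # ids remembered from earlier test runs

def find_new_items(items, seen_ids):
    """Keep only items whose id has not been seen before."""
    return [item for item in items if item["id"] not in seen_ids]

# Changed code that still returns a previously seen id yields no "new"
# items, which surfaces as the "We couldn't find a run python" error.
print(find_new_items([{"id": "run-1", "status": "ok"}], seen_ids))  # []
```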
The best way to fix this depends on whether you're returning an id and whether you need it for something.
Your code can return a random id (see the sketch after these options). This means every returned item will trigger the Zap every time, which may or may not be the intended behavior.
You can probably copy your code, change the trigger app (to basically anything else), run a successful test (which will overwrite your old test data), and then change it back to Code by Zapier and paste your code back in. You should then get a "fresh" test. Due to changes in the way sample data is stored, I'm not positive this still works.
You can also duplicate the Zap from the "My Zaps" page. The new one won't have any existing sample data, so you should be able to test normally.
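For the random-id option, here is a minimal sketch of what the Code step could return; the message field is a placeholder, and only id matters for deduplication:

```python
# Code by Zapier (Python): return items with a random id so every run
# is treated as "new" by the deduper. This fires the Zap on every poll,
# which may or may not be what you want.
import uuid

output = [{
    "id": str(uuid.uuid4()),       # fresh id each run, always counts as new
    "message": "example payload",  # placeholder field for your real data
}]
```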

HPQC (or Micro Focus ALM) - Errors when using Doc Generator on VDI

I work at a company that started using VDIs for certain SQAs. We have just noticed that, in Micro Focus ALM on the VDI only, an error occurs when anyone tries to print a report through the Document Generator (see the first error below). If you close this out, it freezes the browser, which you then have to close. When you try again, you get the second error below. In researching these, it seems the first could be caused by a Word incompatibility, which we have checked and ruled out. The second can be caused by files in the path of the TD_80 folder, which we have tried to remove as suggested, but the error persists.
Does anyone know what else might cause this error on the VDIs only?
(Details from the first and second errors were attached as screenshots, omitted here.)
After we submitted a ticket to Micro Focus, they said the Report Generator is no longer supported. They pointed us to their documentation on creating reports in the Analysis View module under the Dashboard. This seems to work similarly, though the filtering is a little different.

Errors when running Language-Solution in MPS

I'm developing a DSL with JetBrains MPS. It's not obvious to use, but I've succeeded so far with the design part.
It's possible to right-click on a solution's node and "run" it, assuming the language is executable (it extends executing.util). I also use a separately developed jar as a library (used by the generator).
I built a new project to test with, as simple as possible, and added some extra nodes and loops in the generator; the error occurs and can't be undone.
As far as I can see, there are several possible sources of the error:
dependencies (they are tricky in MPS)
my jar
wrong or stale cached files
Executing "run" causes the following error:
error: could not find or load main class MySolution.package.map_concept
Does anyone out there have experience with this?
Let me know if there is any extra information that would help.
It seems that you have added the jar file as a model to the language, which makes it invisible to the solution. Following the instructions at https://confluence.jetbrains.com/display/MPSD32/Getting+the+dependencies+right#Gettingthedependenciesright-AddingexternalJavaclassesandjarstoaprojectruntimesolutions and creating a separate library solution worked for me.
To me this looks like a problem with the generator. Have you fully rebuilt the project (right-click on the "project" node in the structure tree)?
Is the root mapping template correct? If you can share your project, I can have a look.
A small tip that could have saved me some time and might solve this problem for someone else, even if you followed the instructions in the other answers:
When prompted to add your libraries to modules after including the libraries on the Java tab, dismiss the window if you already included them on the Common tab in the first place. Otherwise they are listed once despite having been added twice, leading to a compilation failure.
