I want to enforce that each GitLab MR contains unit test case(s).
How can I check whether a GitLab MR contains unit tests or not?
Thank you!
I think a better way is to check what you actually want: you want new code tested. Instead of checking whether the merge request contains a unit test, I think it's better to check code coverage.
For this you can use coverage tools like JaCoCo for Java, Istanbul for JavaScript, etc. GitLab has a way to include this code coverage information directly in the merge request, which I think is a better solution: you get line-by-line information on whether the new code is tested.
GitLab Merge Request Test Coverage
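As a minimal sketch, a .gitlab-ci.yml along these lines publishes the coverage value and the line-by-line report (the job name, the Gradle/JaCoCo build step and the report path are assumptions, and the artifact syntax varies across GitLab versions):

test:
  stage: test
  script:
    - ./gradlew test jacocoTestReport   # hypothetical build step that produces a coverage report
  coverage: '/Total.*?([0-9]{1,3})%/'   # regex GitLab uses to read the coverage value from the job log
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura      # enables line-by-line coverage in the MR diff view
        path: build/reports/cobertura.xml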
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a bamboo build that runs our karate repository of features and scenarios. At the end it produces nice cucumber HTML reports. On "overview-features.html" I would like an option added to the top right (alongside "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would provide exactly the same information as overview-features.html, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the cucumber report produced at the end of the bamboo build so I can quickly look at the number of passed features/scenarios and see whether it's 100%. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to keep failing until they're resolved, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios, and work out why each one failed. I don't want them removed completely: when they start to pass, I need to know so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
after the Runner completes you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports: in theory you can implement a new SuiteReports, and the Runner has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
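For completeness, a simpler (but semantically different) route is Karate's documented tag selectors, which skip the tagged scenarios entirely so they never reach the report at all; a second Runner selecting only the bug tag could then tell you when those scenarios start passing. A minimal JUnit 5 sketch, assuming the scenarios are tagged @bug=... (Karate tag syntax) and live under classpath:features (both assumptions):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class NoKnownBugsTest {

    @Test
    void testAllExceptKnownBugs() {
        // "~@bug" de-selects every scenario tagged @bug=..., so the known
        // failures never run and never show up in the report numbers
        Results results = Runner.path("classpath:features")
                .tags("~@bug")
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}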
I have a list of feature files and the list of the related step definitions. Every feature file refers to some specific functionality of the website.
Depending on some environment variables defined in package.json, which represent the theme of the website, I might need to skip some of the feature files entirely (and obviously their step definitions), because certain features are missing for some specific themes.
To give some code examples:
"test:cy:run:daylight": "PORT=9000 CYPRESS_THEME=daylight cypress run",
"test:cy:run:darkness": "PORT=9001 CYPRESS_THEME=darkness cypress run",
feature files list:
daylight.feature
afternoon.feature
evening.feature
night.feature
with the related step definitions:
daylight.spec.js
afternoon.spec.js
evening.spec.js
night.spec.js
So when CYPRESS_THEME=darkness, I would like to skip evening.feature and night.feature entirely from my testing process.
How to do that? Ideas?
This example uses fake data; my real scenario includes many more features and themes, so unluckily splitting tests into different folders or using Cypress tags is not an efficient option.
Another inefficient idea I am considering is to put conditionals in every step definition (Given, When and Then) based on Cypress.env('THEME'), but obviously I would prefer not to follow this approach.
Anything else? Thanks
The correct answer would be to tag the tests and run only specific tags... if such a feature existed. I believe @mosaad is wrong on his second point; the --tag command line parameter merely adds metadata to the run, from what I understand. It doesn't restrict which spec files get run.
If I were you I'd just try to get creative with your folder structure. Alternatively you can implement this person's workaround, which seems a bit heavy to me but probably gets the job done.
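One concrete way to "get creative" without moving files is to compute ignoreTestFiles from the theme in the plugins file. A sketch for pre-10 Cypress (untested against your setup; the theme-to-feature mapping below is hypothetical):

// cypress/plugins/index.js
// hypothetical map of theme -> feature files to exclude
const excludedByTheme = {
  darkness: ['**/evening.feature', '**/night.feature'],
  daylight: [],
};

module.exports = (on, config) => {
  // CYPRESS_THEME=darkness surfaces here as config.env.THEME
  const theme = config.env.THEME || 'daylight';
  config.ignoreTestFiles = excludedByTheme[theme] || [];
  return config; // returning the mutated config applies the override
};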
You can split tests into two folders and run only the files in one folder:
cypress run --spec "cypress/integration/daylight/**/*"
Or you can use tags and run tests with the correct tag:
cypress run --record --tag "daylight"
I have some tests I'd like to exclude from the Spock report. Is it possible to exclude specific classes or tests when generating it?
I don't know of such a feature out of the box, but you could write your own report template.
Just copy the default templates and add your filter code directly to the template.
Another way I could imagine: run your tests twice, and exclude the tests you want to hide with an @IgnoreIf annotation (http://mrhaki.blogspot.com/2014/06/spocklight-ignore-specifications-based.html?m=1). The condition could be based on an environment variable.
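A minimal sketch of that idea (the HIDE_IN_REPORT variable name is made up):

import spock.lang.IgnoreIf
import spock.lang.Specification

class HiddenFromReportSpec extends Specification {

    // skipped, and therefore kept out of this report run, when HIDE_IN_REPORT=true
    @IgnoreIf({ System.getenv('HIDE_IN_REPORT') == 'true' })
    def "a test we want out of this report run"() {
        expect:
        1 + 1 == 2
    }
}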
However, tests are important, and it is even more important to know what has NOT been tested. So you should report that certain tests were excluded in order to have a valid test report.
We are setting the CI on our GitLab and we are able to show build status and code coverage on master using the following:
README.md in root directory of myproject:
[![build status](http://mygitlab/mygroup/myproject/badges/master/build.svg)](http://mygitlab/mygroup/myproject/commits/master)
[![coverage report](http://mygitlab/mygroup/myproject/badges/master/coverage.svg)](http://mygitlab/mygroup/myproject/commits/master)
Something we would like is to show the build status/code coverage of the current branch when viewing the README.md in that branch. Right now, the links have master hardcoded, so the branches show the status of master.
Is there a way to use a relative URL (or something else) so the build status/code coverage automatically adapt to the branch you're viewing? Looking at the documentation, it looks like it's impossible, because you have to specify the branch.
Starting with GitLab 9.3 (available only in Starter/Bronze and higher), code quality is evaluated as part of the CI/CD pipeline and the results are displayed in the merge request.
You can see an example of how this looks in a merge request in the documentation at GitLab Code Quality.
This works 'automatically' via Auto DevOps, or you can configure Code Quality manually by using the Code Quality examples.
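On recent GitLab versions, the manual configuration can be as small as including the built-in template in your .gitlab-ci.yml:

include:
  - template: Code-Quality.gitlab-ci.yml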
This doesn't address showing code quality and pipeline status for a given branch in the README. As mentioned, it does show the pipeline status and code quality in the MR itself. However, when viewing a particular branch, the commit at the top of the page does show the overall pipeline status; click on that status to go to the pipeline for more detailed job information, including the Code Quality job itself.
It's not quite as handy as what you're looking for, but it is a workaround. Usually, concerns about quality and build status are most important on the proposed merge request itself, where fixes can be made prior to merging them into the master/target branch.
What I would really like to see is what the code coverage was before and after a merge request, while reviewing merge requests.
Actually, this is easier with GitLab 13.4 (September 2020):
Show job data for Code Coverage value in MR
As a developer, you should be able to easily see code coverage after a pipeline finishes running, even in complex scenarios that make this more difficult, like when your pipeline has multiple jobs that are parsed to calculate the coverage value.
Until now, the Merge Request widget only showed the average of those values, which meant you had to navigate to the jobs page and then back to the Merge Request itself to get more granular details for the coverage value.
To save you time and eliminate those extra steps, you’re now presented with the average coverage value, how it has changed from the target and source branch, and a tooltip that shows the coverage for each job used to calculate the average.
See Documentation and Issue.
Is there a way to show build status/code coverage per branch?
One new feature that does relate to this comes with GitLab 13.6 (November 2020):
Pipeline status in branch and tag lists
If you use CI/CD pipelines with tags or branches and want to know the latest pipeline status, you previously had to navigate away from the branch list or tag list to get to the pipeline page. Now, pipeline status icons are displayed for each branch or tag in their respective list views, so you can quickly get to this information for many tags or branches in fewer clicks.
Thanks to Lee Tickett for this contribution!
See Documentation and Issue.