When a GitLab template contains multiple Markdown task lists, GitLab counts every checkbox as an individual task and shows the total across all lists.
For example, with the template below:
## PR Checklist
Please check if your PR fulfills the following requirements:
- [X] The commit message follows our guidelines:
- [X] Tests for the changes have been added (for bug fixes / features)
- [ ] Docs have been added / updated (for bug fixes / features)
## PR Type
What kind of change does this PR introduce?
- [X] Bugfix
- [ ] Feature
- [ ] Code style update (formatting, local variables)
- [X] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Documentation content changes
- [ ] angular.io application / infrastructure changes
- [ ] Other... Please describe:
## Does this PR introduce a breaking change?
- [ ] Yes
- [X] No
GitLab shows 5 of 14 tasks completed. Ideally, I want only the first three items (the PR Checklist section) to be counted as tasks, so that it shows 2 of 3 tasks completed. Is there something in Markdown or GitLab that lets me achieve this?
Do not use task-list syntax for non-tasks.
See the vendor documentation for task lists.
You can use labels to categorize issues with regard to feature vs bug fix, or breaking versus non-breaking change.
As already pointed out, the docs on task lists specify that a "task" will always add to the task counter.
If you want to keep a similar formatting to encourage users to mark which items apply, then you can use parentheses like so:
- ( ) task
The formatting is not as pretty, but it'll work.
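For example, the PR Type section from the template above could keep a similar look without adding to the task counter, using (x) in place of [x] to mark applicable items:
## PR Type
What kind of change does this PR introduce?
- (x) Bugfix
- ( ) Feature
- ( ) Other... Please describe: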
Related: in the answer to this question (Trigger Gitlab-CI Pipeline only when there is a new tag), I understand that I should set only as a filter.
But how can I trigger the pipeline only when:
there is a new tag created? These tags are semantic versioning tags; how can I filter them? Something like
only:
  - /[0-9]+\.[0-9]+\.[0-9]+/
But the tags are not always like 1.0.2; sometimes they have a beta or rc suffix, such as 0.2.3-rc.1 or 2.3.5-beta. Would the filter /[0-9]+\.[0-9]+\.[0-9]+.*/ be fine?
The second condition is that the pipeline should trigger only on the master or main branch.
Regarding the semantic versioning regex, I found this gist:
https://gist.github.com/jhorsman/62eeea161a13b80e39f5249281e17c39
but its sample is too complex:
^([0-9]+)\.([0-9]+)\.([0-9]+)(?:-([0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*))?(?:\+[0-9A-Za-z-]+)?$
Using only is deprecated in favor of rules.
rules replaces only/except and they can’t be used together in the same job.
https://docs.gitlab.com/ee/ci/yaml/#rules
Create a rule and check the predefined variable CI_COMMIT_TAG for your pattern.
job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_COMMIT_TAG =~ /^\d+\.\d+\.\d+.*/
You can list multiple of these rules. So if you want to execute the job also on every push to the master branch, you'd add another rule:
rules:
  - if: ...
  - if: $CI_COMMIT_REF_NAME == "master"
The extended pattern with .* at the end would be fine, but you could also be a bit more specific, e.g. (-[0-9a-zA-Z.]+)?, if you want to enforce a particular suffix syntax.
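Putting both conditions together, a sketch of a complete job could look like this (release_job is a placeholder name; adjust the branch names to your setup):
release_job:
  script: echo "Running for a semver tag or the default branch"
  rules:
    # matches 1.0.2 as well as suffixed tags like 0.2.3-rc.1 or 2.3.5-beta
    - if: $CI_COMMIT_TAG =~ /^\d+\.\d+\.\d+(-[0-9a-zA-Z.]+)?$/
    - if: $CI_COMMIT_REF_NAME == "master" || $CI_COMMIT_REF_NAME == "main"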
I'm wondering if you can use wildcard characters with tags to get all tagged scenarios/features that match a certain pattern.
For example, I've used 17 unique tags on many scenarios throughout many of my feature files. The pattern is "#jira=CIS-" followed by 4 numbers, like #jira=CIS-1234 and #jira=CIS-5678.
I'm hoping I can use a wildcard character or something that will find all of the matches for me.
I want to be able to exclude them from being run, when I run all of my features/scenarios.
I've tried the following:
--tags ~#jira
--tags ~#jira*
--tags ~#jira=*
--tags ~#jira=
Unfortunately, none of these gave me the results I wanted. I was only able to exclude them when I used the exact tag, e.g. ~#jira=CIS-1234. It's not a good solution to have to add each one (of the 17 different tags) to the command line. These tags can change frequently, with new ones being added and old ones removed, and it would make for one really long command.
Yes. First read this: there is an undocumented expression language (based on JS) for advanced tag selection based on the #key=val1,val2 form: https://stackoverflow.com/a/67219165/143475
So you should be able to do this:
valuesFor('#jira').isPresent
And even this (here s will be a string, on which you can even use JS regexes if you know how):
valuesFor('#jira').isEach(s => s.startsWith('CIS-'))
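For exclusion, the expression could presumably be negated and passed to the Karate runner along these lines (a sketch assuming the JUnit 5 Runner API; the JS negation syntax is an assumption worth verifying against the docs):
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

// sketch: run all features except scenarios carrying any #jira=... tag
Results results = Runner.path("classpath:features")
        .tags("!valuesFor('#jira').isPresent") // assumed negation of the expression above
        .parallel(1);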
Would be great to get your confirmation and then this thread itself can help others and we can add it to the docs at some point.
We have a .gitlab-ci.yml file containing lines like
a_task:
  only:
    - /^production\/mybranch.*$/
which are clearly meant to match the target git ref.
But we also have:
another_task:
  only:
    - master
My question is: does this "master" match a part of the git ref as well (so that a tag my-master-123 would match, too) or is it a symbolic thing?
The reason why am asking is that there is also:
third_task:
  only:
    - tags
That would have to be symbolic, right?
Which would mean that the syntax does not, for example, support a branch named tags, right?
Update
Looks like there are special keywords, tags being one of them.
So indeed that would mean that refs with those special names (external, pipelines, tags, triggers, ...) would not be supported.
From the docs:
only and except are two keywords that set a job policy to limit when jobs are created:
only defines the names of branches and tags the job runs for.
except defines the names of branches and tags the job does not run for.
Matching via regular expressions is supported, as in your first case, but it is not the default: an entry is only treated as a pattern when it is wrapped in slashes. Plain only: master means the job runs for all refs named master.
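Conversely, if you ever did have a branch literally named tags, a regular expression should still be able to match it, since values wrapped in slashes are matched against ref names rather than interpreted as keywords (an untested sketch):
odd_task:
  only:
    - /^tags$/    # a ref named "tags", not the tags keyword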
I've recently installed arbtt, which seems to be an interesting, rule-based, automatic time tracker: http://arbtt.nomeata.de/#what
I've got it working for the most part, but after 30 minutes or so of gathering stats, I end up with the following error.
Processing data [=>......................................................................................................................................................................................] 1%
arbtt-stats: Prelude.(!!): index too large
Does anyone have any suggestions on ways I can troubleshoot this issue or, better yet, solve it? I have zero experience with the language used to write the rules (Haskell, I believe). All I've done to this point is follow the documentation as closely as possible.
This error ultimately renders the tool useless since it doesn't gather data any longer than 30 minutes. To fix it, I have to delete the log and start from scratch. I'm primarily interested in the notion of having a customizable, rule based time tracker but I'm by no means tied to using arbtt.
Based on the comments below, I'm including some more information below.
When I try to run arbtt-recover I get a long list of errors that look like this. All of them seem to be related to an Unsupported TimeLogEntry.
Trying at position 1726098.
Failed to read value at position 1726098:
Unsupported TimeLogEntry version tag 0
As for the configuration file, here is what I have so far.
$idle > 30 ==> tag inactive,
-- A rule that matches on a list of strings
current window $program == ["Chrome", "Firefox"] ==> tag Web,
current window $program == ["skype"] ==> tag Skype,
current window $program == ["jetbrains-phpstorm"] ==> tag PhpStorm,
( current window $title =~ m!Inbox! ||
current window $title =~ m!Outlook! ) ==> tag Emails,
( current window $title =~ m!AdWords! ||
current window $title =~ m!Analytics! ) ==> tag Adwords,
It goes on further, but I'm fairly confident I've followed this same syntax for all other lines. The rest of the lines are following the same format but are project/client specific for me. If required, I'm happy to include the rest of the file.
As discussed in the comments: This is a case of a corrupt ~/.arbtt/capture.log. You can usually fix this by
running arbtt-recover
and then moving ~/.arbtt/capture.log.recovered to ~/.arbtt/capture.log.
The second manual step is required to avoid accidentally deleting too much data. You can test that the recovered file is better by running arbtt-stats against it, passing --logfile=~/.arbtt/capture.log.recovered.
Data corruption happens, for example, when there is an unclean shutdown, or for other undetermined reasons. But the log file format is such that even after a corruption (e.g. a partial write of one sample), further samples will be written correctly and should be picked up by arbtt-recover, so you should not have lost more than a few samples.
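In shell terms, the recovery sequence described above looks like this (assuming the default ~/.arbtt location):
# recover the readable samples from the corrupt log
arbtt-recover
# sanity-check the recovered log before replacing the original
arbtt-stats --logfile="$HOME/.arbtt/capture.log.recovered"
# adopt the recovered file once the stats look right
mv "$HOME/.arbtt/capture.log.recovered" "$HOME/.arbtt/capture.log"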
I have an issue with executing cucumber-jvm scenarios in different environments. Data that is incorporated in the feature files for scenarios belongs to one environment. To execute the scenarios in different environments, I need to update the data in the feature files to match the environment they are executed in.
For example, in the following scenario I have the search criteria included in the feature file. The search criteria are valid for, let's say, the QA environment.
Scenario: search user with valid criteria
  Given user navigated to login page
  And clicked search link
  When searched by providing search criteria
    | fname1 | lname1 | address1 | address2 | city1 | state1 | 58884 |
  Then verify the results displayed
It works fine in the QA environment. But to execute the same scenario in other environments (UAT, stage, ...), I need to modify the search criteria in the feature files to match the data in those environments.
I'm thinking about maintaining the data for the scenarios in a properties file per environment and reading it based on the execution environment.
If the data is in a properties file, the scenario will look like the one below. Instead of the search criteria, I will give a property name:
Scenario: search user with valid criteria
  Given user navigated to login page
  And clicked search link
  When searched by providing search criteria
    | validSearchCriteria |
  Then verify the results displayed
Is there any other way I could maintain the data for the scenarios across all environments and use it according to the environment the scenario is executed in? Please let me know.
Thanks
I understand the problem, but I don't quite understand the example, so allow me to provide my own example to illustrate how this can be solved.
Let's assume we test a library management software and that in our development environment our test data have 3 books by Leo Tolstoy.
We can have a test case like this:
Scenario: Search by Author
  When I search for "Leo Tolstoy" books
  Then I should get result "3"
Now let's assume we create our QA test environment and in that environment we have 5 books by Leo Tolstoy. The question is how do we modify our test case so it works in both environments?
One way is to use tags. For example:
@dev_env
Scenario: Search by Author
  When I search for "Leo Tolstoy" books
  Then I should get result "3"

@qa_env
Scenario: Search by Author
  When I search for "Leo Tolstoy" books
  Then I should get result "5"
The problem here is that we have lots of code duplication. We can solve that by using a Scenario Outline, like this:
Scenario Outline: Search by Author
  When I search for "Leo Tolstoy" books
  Then I should get result "<number_of_books>"

@qa_env
Examples:
  | number_of_books |
  | 5               |

@dev_env
Examples:
  | number_of_books |
  | 3               |
Now when you execute the tests, you should use the @dev_env tag in the dev environment and @qa_env in the QA environment.
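With cucumber-jvm the tag filter can be passed on the command line when launching the tests, for example (assuming Cucumber 5+, where the relevant property is cucumber.filter.tags):
mvn test -Dcucumber.filter.tags="@dev_env"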
I'll be glad to hear some other ways to solve this problem.
You can do this in two ways:
Push the programming up, so that you pass in the search criteria via the way you run Cucumber.
Push the programming down, so that your step definition uses the environment to decide where to get the valid search criteria from.
Both of these involve writing a more abstract feature that does not specify the details of the search criteria. So you should end up with a feature like:
Scenario: Search with valid criteria
  When I search with valid criteria
  Then I get valid results
I would implement this using the second method and write the step definitions as follows:
When "I search with valid criteria" do
search_with_valid_criteria
end
module SearchStepHelper
def search_with_valid_criteria
criteria = retrieve_criteria
search_with criteria
end
def retrieve_criteria
# get the environment you are working in
# use the environment to retrieve the search criteria
# return the criteria
end
end
World SearchStepHelper
Notice that all I have done is change the place where the work is done: from the feature to a helper method in a module.
This means that, as you are now programming in a proper programming language (rather than in the features), you can do whatever you want to get the correct criteria.
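For example, retrieve_criteria could be filled in like this (a sketch; the TEST_ENV variable name and the per-environment values are placeholder assumptions, not part of the original answer):
def retrieve_criteria
  # assumed environment variable naming the current environment
  env = ENV.fetch('TEST_ENV', 'qa')
  # placeholder data keyed by environment; this could equally be read from a file
  {
    'qa'  => %w[fname1 lname1 address1 address2 city1 state1 58884],
    'uat' => %w[fname2 lname2 address3 address4 city2 state2 12345],
  }.fetch(env)
end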
This may have been answered elsewhere, but the team I currently work with tends to prefer pushing environment-specific preconditions down into the code behind the step definitions.
One way to do this is by setting the environment name as an environment variable in the process running the test runner class, for example ENV set to 'Dev'. Then, @Before each scenario is tested, it is possible to verify the environment in which the scenario is being executed and load any environment-specific data it needs:
@Before
public void before(Scenario scenario) throws Throwable {
    String scenarioName = scenario.getName();
    // fall back to a default environment when the variable is not set
    env = System.getenv("ENV");
    if (env == null) {
        env = "Dev";
    }
    envHelper.loadEnvironmentSpecificVariables();
}
Here we set a default value of 'Dev' in case the test runner is run without the environment variable being set. The envHelper points to a test utility class whose method loadEnvironmentSpecificVariables() could load data from a JSON, CSV, or XML file with data specific to the environment being tested against.
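A minimal sketch of such a helper, assuming one properties file per environment (the class name, file layout, and lookup method are placeholders):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class EnvHelper {
    private final Properties props = new Properties();

    // loads e.g. src/test/resources/testdata-dev.properties when ENV=Dev
    public void loadEnvironmentSpecificVariables() throws IOException {
        String env = System.getenv().getOrDefault("ENV", "Dev");
        String path = "src/test/resources/testdata-" + env.toLowerCase() + ".properties";
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
    }

    public String get(String key) {
        return props.getProperty(key);
    }
}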
An advantage of this approach is that it can help to de-clutter feature files from potentially distracting environmental meta-data, which can impact the readability of the feature outside of the development and testing domains.