String manipulation of `CI_COMMIT_TAG` used by `environment:url` in GitLab pipeline - gitlab

In short:
Find a way to do simple parsing of a variable that is already available to `environment:url` as part of a pipeline job.
The particulars: a job is triggered by a git tag following a git-flow release finish, such as v1.32.7. GitLab makes this available in `CI_COMMIT_TAG` (formerly `CI_BUILD_TAG`). What I would like is to be able to use only the major version part, e.g. v1, in `environment:name` and `environment:url`.
Does anyone have a clever way of solving this? I've considered maybe having hooks that insert this value into the code itself, but I'm curious as to what solutions others have arrived at.
I'm aware that GitLab has strict limitations as to where variables can be expanded, and which variables can be used when. Here is an overview: https://docs.gitlab.com/ee/ci/variables/where_variables_can_be_used.html#gitlab-internal-variable-expansion-mechanism
My question is about using data that is already available to GitLab, such as the variables in question, not about transferring data evaluated by runners back to GitLab, which is fairly problematic given the architecture and is otherwise discussed here:
https://gitlab.com/gitlab-org/gitlab-ce/issues/27921
https://gitlab.com/gitlab-org/gitlab-ce/issues/28314

Unfortunately, that can't be done strictly within a `.gitlab-ci.yml` file; it's a technical limitation. The environment name and URL are evaluated on the GitLab (server) side, not by the runners, so there's no access to variables declared in the `script` block, as you've already discovered. I've yet to find any clever workarounds.
One possibility, related to your comment about “hooks that insert this value into the code itself,” is to have something call the GitLab API to create a pipeline for the tag, parsing out the v1 part itself and passing it in as a pipeline variable. Pipeline variables can be used in environment names and URLs.
https://docs.gitlab.com/ee/api/pipelines.html#create-a-new-pipeline
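As a rough sketch of that idea (the project ID, token, and the `MAJOR_VERSION` variable name are placeholders), something outside the pipeline could parse the tag and create the pipeline via the API:

```sh
# v1.32.7 -> v1 (strip everything from the first dot onward)
TAG=v1.32.7
MAJOR_VERSION=${TAG%%.*}

# Create a pipeline for the tag, passing the parsed value as a pipeline variable
curl --request POST \
     --header "PRIVATE-TOKEN: <your-access-token>" \
     --header "Content-Type: application/json" \
     --data "{\"ref\": \"$TAG\", \"variables\": [{\"key\": \"MAJOR_VERSION\", \"value\": \"$MAJOR_VERSION\"}]}" \
     "https://gitlab.example.com/api/v4/projects/<project-id>/pipeline"
```

The job could then use that pipeline variable where `environment` is evaluated, for example:

```yaml
deploy:
  script:
    - ./deploy.sh
  environment:
    name: production/$MAJOR_VERSION
    url: https://$MAJOR_VERSION.example.com
```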

Related

Can a Triggerable Scheduler use a change filter, or can a Trigger build step be conditional on a property?

I would like to determine which schedulers to trigger depending on the branch name, from inside the build factory - if that's possible.
Essentially I have a builder that does all the common build steps (compile, package, etc.) and then has a bunch of trigger steps that kick off a bunch of tests (via triggerable schedulers).
However, I would like the type of tests that get started (i.e. which schedulers are triggered) to depend on the branch name. So far I've tried adding the change_filter arg to my Triggerable scheduler, but it seems it doesn't accept that argument. I guess that makes sense, because it is supposed to be triggered, so maybe it doesn't care about using a change filter. That seems a bit strange though, because Dependent schedulers do accept this kwarg.
So far the correct way to set this up is not clear to me.
I guess my questions are really:
Is there a way to use renderables / properties to decide which schedulers to trigger (based on the branch name for example)?
Is there a better way to do this? Perhaps create separate schedulers for the build that apply the change filter I need and have a build factory that triggers the correct tests, but that's not very DRY.
I came back to leave this here in case it might help someone with a tricky buildbot setup.
I solved this by making all of the dependent schedulers (for specific types of tests) into triggerable schedulers. Then I created a main build scheduler for each subset of tests, each with a change filter and a regex for the branches that should undergo that subset of tests. Finally, I created the build factory for each main scheduler by passing it only the triggerable schedulers for the tests that specific main scheduler should run.
For my current use case, this works great!
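A rough sketch of that layout in a master.cfg, with made-up scheduler, builder, and branch names:

```python
from buildbot.plugins import schedulers, steps, util

# Triggerable schedulers, one per subset of tests
smoke_tests = schedulers.Triggerable(name="smoke-tests", builderNames=["smoke"])
full_tests  = schedulers.Triggerable(name="full-tests",  builderNames=["full"])

# Main build schedulers, each with a change filter selecting the branches
# that should undergo that subset of tests
dev_sched = schedulers.SingleBranchScheduler(
    name="dev-builds",
    change_filter=util.ChangeFilter(branch_re=r"^feature/.*"),
    builderNames=["build-dev"])
release_sched = schedulers.SingleBranchScheduler(
    name="release-builds",
    change_filter=util.ChangeFilter(branch_re=r"^release/.*"),
    builderNames=["build-release"])

def make_factory(test_schedulers):
    """Common build steps plus triggers for the given test schedulers."""
    f = util.BuildFactory()
    f.addStep(steps.ShellCommand(command=["make", "package"]))
    for name in test_schedulers:
        f.addStep(steps.Trigger(schedulerNames=[name], waitForFinish=True))
    return f

# build-dev only runs the smoke tests; build-release runs everything
dev_factory     = make_factory(["smoke-tests"])
release_factory = make_factory(["smoke-tests", "full-tests"])
```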

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
ExUnit provides a setup hook, which runs before each individual test, and a setup_all hook, which runs once for the whole test module.
When I try to isolate my Gherkin scenarios by resetting the persistence in the setup hook, it turns out that the persistence is purged before each step definition is executed. But a Gherkin scenario almost always consists of multiple steps, which build up the test environment and execute the test in a fixed order.
The setup_all hook, on the other hand, resets the persistence once per feature file. But a Gherkin feature file almost always contains multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps may be something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So, for now, that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
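As a minimal sketch of the setup_all route (the feature file name and the MyApp.Persistence.reset/0 helper are hypothetical):

```elixir
defmodule MyApp.Features.CheckoutTest do
  use Cabbage.Feature, async: false, file: "checkout.feature"

  # Runs once per test module (i.e. once per feature file), not before
  # every step, so the steps within a scenario keep their state.
  setup_all do
    MyApp.Persistence.reset()   # hypothetical reset helper
    :ok
  end

  defgiven ~r/^an empty cart$/, _vars, _state do
    {:ok, %{cart: []}}
  end
end
```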
There is also such a thing as tags, which I think haven't been published in a release yet but are in master. They work like callback tags; you can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

How can we add a copy of an existing test case to another test suite using Groovy

Requirement:
I have two test cases, and the number will grow in the future. I need a way to run these two test cases in multiple environments in parallel at runtime.
So I can either make multiple copies of these test cases for each environment, add them to an empty test suite, and set them to run in parallel, all using a Groovy script.
Or find a way to run each test case in parallel from code.
I tried `tcase.run(properties, async)`
but it did not work.
Need help.
Thank you.
You are mixing together unrelated things.
If you have a non-Pro installation, then you can parameterize the endpoints. This is accomplished by replacing all your endpoints with a SoapUI property and passing it to your test run. This is explained in the official documentation.
If you have a Pro license, then you have access to the Environments feature, which essentially wraps the above for you in a convenient manner. Again: consult the official documentation.
Then a separate question is how to run these in parallel. That will very much depend on what you have available. In the simplest case, you can create a shell script that calls testrunner the appropriate number of times with appropriate arguments. Official documentation is available. There are also options to run from Maven - official documentation - in which case you can use any kind of CI to run these.
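For example, a minimal shell sketch using the bundled testrunner script (the suite name, project file, endpoint property, and URLs are made up):

```sh
#!/bin/sh
# One testrunner per environment, started in the background, then wait for all of them.
./testrunner.sh -s "SmokeSuite" -Pendpoint=https://dev.example.com  MyProject.xml &
./testrunner.sh -s "SmokeSuite" -Pendpoint=https://test.example.com MyProject.xml &
wait
```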
I do not understand how Groovy plays into all this, unless you would like to get really fancy and run all this from JUnit, which also has official documentation available.
If you need additional information, you could read through the official SoapUI documentation and perhaps clarify your question.

How to iterate over a cucumber feature

I'm writing a feature in cucumber that could be applied to a number of objects that can be programmaticaly determined. Specifically, I'm writing a smoke test for a cloud deployment (though the problem is with cucumber, not the cloud tools, thus stack overflow).
```gherkin
Given a node matching "role:foo"
When I connect to "automatic.eucalyptus.public_ipv4" on port "default.foo.port"
Then I should see "Hello"
```
The Given step searches for nodes with the role foo, and the automatic.eucalyptus... and port values come from the node that was found. This works just fine... for one node.
The search could return multiple nodes in different environments. Dev will probably return one, test and integration a couple, and prod can vary. The Given step already finds all of them.
Looping over the nodes in each step doesn't really work. If any one failed in the When, the whole thing would fail. I've looked at scenarios and cucumber-iterate, but both seem to assume that all scenarios are predefined rather than programmatically looked up.
I'm a cuke noob, so I'm probably missing something. Any thoughts?
Edit
I'm "resolving" the problem by flipping the scenario. I'm trying to integrate into a larger cluster definition language to define repeatedly call the feature by passing the info as an environment variable.
I apologize in advance that I can't tell you exactly "how" to do it, but a friend of mine solved a similar problem using a somewhat unorthodox technique. He runs scenarios that write out scenarios to be run later. The gem he wrote to do this is called cukewriter. He describes how to use it in pretty good detail on the github page for the gem. I hope this will work for you, too.

Securing a workspace variable

Maybe you have run into the following situation: you're working, you run one script after another, and then you suddenly realize you've changed the value of a variable you are interested in. Apart from making a backup of the workspace, is there no other way to protect the variables?
Is there a way to select individual variables in the workspace that you're going to protect?
Apart from seeing the command history register, is there a history register of the different values that have been given to one particular variable?
Running scripts in sequence is a recipe for disaster. If possible, try turning those scripts into functions. This will naturally do away with the problems of overwriting variables you are running into, since variables inside functions are local to those functions whereas variables in scripts are local to the workspace -- and thus easily accessed/overwritten by separate scripts (often unintentionally, especially if you use variable names like "result").
I also agree that writing functions can be helpful in this situation. If, however, you are manipulating very large data sets, you need to be careful to write your code in a form that doesn't make multiple copies of variables within your functions, or you may run into memory problems.
No, there is no workspace history. I would say, if you run into that problem that you described, you should consider changing your programming style.
I would suggest you:
put enough code or information in your script that you can start from an empty workspace to fulfill a task. For that reason I always put `clear all` at the start of my main file.
If it's getting too complex, consider calling functions. If you need values that are generated by another script or function, rewrite that script as a function and call it from your main file, or save the variables. Loading variables is absolutely okay, but running scripts in sequence leads to disaster, as mentioned by marciovm.
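As a small illustration of the script-to-function refactoring suggested above (file and variable names are made up):

```matlab
% Before: a script -- every assignment lands in the shared base workspace,
% so a later script can silently overwrite 'result'.
%   data = load('measurements.mat');
%   result = mean(data.values);

% After: a function -- 'data' and 'result' are local and cannot be
% clobbered by other scripts.
function result = average_measurement(filename)
    data = load(filename);
    result = mean(data.values);
end
```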

Resources