How to set a "title" or "name" for a bitbucket script execution element? - bitbucket-pipelines

I am wondering whether one can set a title or name for an execution element in a Bitbucket pipeline:
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - "Configure": ./configure
          - "Build": make
          - "Test": make test
          - "Long Script": |
              make whatever1
              make whatever2
              make whatever3
I'd expect the output to be:
Configure
Build
Test
Long Script
as the titles, and to see the script itself only when unfolding the execution elements in the UI, just like with GitHub:
Any ideas? :-)
The only workaround I found was to put everything in bash scripts, but then I do not see the executed commands, which I still want.
Thanks.

Any step of your pipeline can have its own name property; a good example can be found here.
In case you'd like to assign names to individual commands of your step's script, I reckon echo would be a good option:
echo "Test" && make test

Related

gitlab ci pass parameter to hidden job

I have a GitLab CI job which runs the same command twice, but the first and second runs differ in one parameter. How can I create a parametrized command?
I am looking for something like this:
.long_terminal_command: &long_terminal_command
  - echo $variable

main_job:
  script:
    - *long_terminal_command "One"
    - *long_terminal_command "Two"
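One common workaround (a sketch, not from the original thread) is to let the anchor hold a shell function definition and then call that function with different arguments; .define_command and long_terminal_command are names made up for illustration:

.define_command: &define_command |
  # The anchor holds a shell function; all script lines run in the same shell session
  long_terminal_command() {
    echo "$1"
  }

main_job:
  script:
    - *define_command
    - long_terminal_command "One"
    - long_terminal_command "Two"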

Read variable from file for usage in GitLab pipeline

Given the following very simple .gitlab-ci.yml pipeline:
---
variables:
  KEYCLOAK_VERSION: 20.0.1 # this should be populated from reading a file from the repo...

stages:
  - test

build:
  stage: test
  script:
    - echo "$KEYCLOAK_VERSION"
As you might see, this simply outputs the value of KEYCLOAK_VERSION defined in the variables section.
Now, the Git repository contains a env.properties file with KEYCLOAK_VERSION=20.0.1 as content. How would I read the variable from that file and use it in the GitLab pipeline?
The documentation mentions import but this seems to be using YAML files.
To read variables from a file you can use the source or . command.
script:
  - source env.properties
  - echo $KEYCLOAK_VERSION
Attention:
One reason why you might not want to do it this way is that whatever is in env.properties will be run in your shell, including something like rm -rf /, which could be very dangerous.
Maybe you can take a look here for some other solutions.
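If sourcing arbitrary file content is a concern, a safer sketch (assuming env.properties contains simple KEY=value lines) is to extract only the value you need instead of executing the file:

build:
  stage: test
  script:
    # Read the value without executing env.properties
    - KEYCLOAK_VERSION=$(grep '^KEYCLOAK_VERSION=' env.properties | cut -d '=' -f 2)
    - echo "$KEYCLOAK_VERSION"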

Gitlab pipeline error with source sh script

I have a simple pipeline with one job to test bash scripts. The pipeline as follow:
image: alpine/git

stages:
  - test_branching

test_branch:
  stage: test_branching
  before_script:
    - mkdir -p .common
    - wget https://x.x.x.x/branching.sh > .common/test.sh && chmod +x .common/test.sh
    - source .common/test.sh
  script:
    - test_pipe
    - echo "app version is ${app_version}"
The bash script as follow:
#!/bin/sh
function test_pipe () {
  app_version="1.0.0.0-SNAPSHOT"
}
The problem is that the pipeline for whatever reason does not recognize the function inside the script. The logs are:
...
$ test_pipe
/scripts-1050-417479/step_script: eval: line 180: test_pipe: not found
Does anybody know what happened here? I really miss Jenkins shared libraries; GitLab does not have them, and it also does not have a way to include scripts inside YAML files.
I don't want to use a multi-project pipeline, I need to do it this way. This is only an example of a more complicated pipeline logic.
Thanks in advance
As the documentation states, before_script is just concatenated together with script and run in a single shell. The script you are downloading does not define test_pipe (most likely because wget saves the download to a file named after the URL and prints progress to stderr, so redirecting stdout leaves .common/test.sh empty; wget -O .common/test.sh <url> would download into the intended file).
... gitlab does not have the function to include scripts inside yml
files.
It does, just use the YAML multiline literal syntax with |, e.g.:
script:
  - |
    echo "this"
    echo "is"
    echo "an \
    example"

Publishing test results to Azure (VS Database Project, tSQLt, Azure Pipelines, Docker)

I am trying to fully automate the build, test, and release of a database project using Azure Pipelines.
I already have a Visual Studio solution which consists of three database projects. The first project is the database, which contains the tables, stored procedures, functions, data, etc. The second project is the tSQLt framework (v 1.0.5873.27393 if anyone is interested). And finally, the third project is the tSQLt tests.
My goal here is to check the solution into source control, and the pipeline will automatically build the solution, deploy the dacpacs to a build server (Docker in this case), run the tSQLt tests, and publish the results back to the pipeline.
My pipeline works like this.
Build the Visual Studio solution
Publish the artifacts
Set up a Docker container running Ubuntu & SQL Server
Install SqlPackage
Deploy the dacpacs to the SQL instance
Run the tSQLt tests
Publish the test results
Everything up to publishing the results is working, but on this step I got the following error:
[warning]Failed to read /home/vsts/work/1/Results.xml. Error : Data at the root level is invalid. Line 1, position 1.
I added another step in the pipeline to display the content of the Results.xml file. It appears like this:
XML_F52E2B61-18A1-11d1-B105-00805F49916B
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
<testsuites><testsuite id="1" name="MyNewTestClassOne" tests="1" errors="0" failures="0" timestamp="2021-02-01T10:40:31" time="0.000" hostname="f6a05d4a3932" package="tSQLt"><properties/><testcase classname="MyNewTestClassOne" name="TestNumberOne" time="0.
I'm not sure whether the column name and dashes should be in the file, but I'm guessing not. I added another step to remove them, leaving me with just the XML. But this then gave me a different error to deal with:
##[warning]Failed to read /home/vsts/work/1/Results.xml. Error : There is an unclosed literal string. Line 2, position 1.
This one is a little easier to spot because, as you can see above, the XML is incomplete.
Here is the part of my pipeline which runs the tSQLt tests and outputs the results to Results.xml:
- script: |
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'EXEC tSQLt.RunAll;'
  displayName: 'tSQLt - Run All Tests'
- script: |
    cd $(Pipeline.Workspace)
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'SET NOCOUNT ON; EXEC tSQLt.XmlResultFormatter;' -o 'tSQLt_Results.xml'
  displayName: 'tSQLt - Output Results'
I've researched so many blogs and articles on this, and most people are doing the same thing. Some people use PowerShell instead of sqlcmd, but given I'm using an Ubuntu machine, that isn't an option here.
I am all out of options, so I am looking for a little help on this.
You are dealing with two problems here: there is noise in your result set that is not XML, and your XML result is truncated at 256 characters. I can help you with both.
What I am doing is basically this:
/opt/mssql-tools/bin/sqlcmd \
  -S "localhost, 31114" -U sa \
  -P "password" \
  -d dbname \
  -y0 \
  -Q "BEGIN TRY EXEC tSQLt.RunAll END TRY BEGIN CATCH END CATCH; EXEC tSQLt.XmlResultFormatter" \
  | grep -w "<testsuites>" \
  | tee "resultfile.xml"
A few things to note:
-y0 is important. It sets the length of the XML result column to unlimited, up from the 256-character default.
The grep makes sure you only get the XML and not the noise around it.
If you want to run only a subset of your tests, you need to amend the SQL query being passed in, but other than that, this is a catch-all one-liner to run all tests and get the results in XML format, readable by Azure DevOps.
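Wired into an Azure Pipelines job, this could look roughly like the sketch below; the sqlcmd path, the Results.xml location, and the PublishTestResults wiring are assumptions based on the standard PublishTestResults@2 task (tSQLt's XmlResultFormatter emits JUnit-style XML):

- script: |
    /opt/mssql-tools/bin/sqlcmd \
      -S "127.0.0.1,1433" -U SA -P 'Password.1!' -d StagingDB -y0 \
      -Q "BEGIN TRY EXEC tSQLt.RunAll END TRY BEGIN CATCH END CATCH; EXEC tSQLt.XmlResultFormatter" \
      | grep -w "<testsuites>" \
      | tee "$(Pipeline.Workspace)/Results.xml"
  displayName: 'tSQLt - Run tests and capture XML'
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(Pipeline.Workspace)/Results.xml'
  displayName: 'Publish tSQLt results'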

Gitlab CI pipeline - continue to next stage only on a certain condition

I am trying to build a Gitlab pipeline that is made up of 4 jobs. The stages I have are:
stages:
  - compare
  - build
  - test
  - deploy
The compare stage takes a dump from an API on another server, retrieves the same dump from the last successful pipeline run (it is made available as an artifact), and compares the two.
If there is any difference I would like the pipeline to move on to the next stage; if there is no difference I would like it to exit gracefully.
I have it working, but rather than exiting gracefully when there are no differences, it fails and the pipeline is marked as failed. Here is how it looks.
Here is the important code from my .gitlab-ci.yaml (with some identifying information removed):
Get_inventory_dump:
  stage: compare
  only:
    - schedules
  script:
    - 'curl -k --output "previous-inventory.json" --header "PRIVATE-TOKEN: $user_token" "https://url/to/get/artifact/from/last/successful/run"'
    - python3 auto_config_scripts/dump_device_inventory_api_to_json.py -p $pass -o /inventory.json -u https://url/for/inventory/dump -y
    - /usr/bin/cmp previous-inventory.json inventory.json && echo "No Change in inventory since last successful run" && exit 1 || echo "Inventory has changed since last run, continue" && exit 0
  artifacts:
    when: on_success
    expire_in: 4 weeks
    paths:
      - inventory.json

Generate_icinga_config:
  stage: build
  only:
    - schedules
  when: on_success
  script:
Everything is behaving as I would expect, but I feel like there is a better way to do this.
Is there a way, if the comparison shows no difference, to simply skip the next stages of the pipeline but still have the pipeline complete as 'passed' rather than 'failed'?
There are two solutions I can think of. Unfortunately, they either come with slightly confusing UI behavior or require you to adapt all jobs.
Job attributes like only or changes are only concerned with the state or the files of the Git repository (see https://docs.gitlab.com/ee/ci/yaml/) and are therefore of no use here, as the file is only created during CI and is not part of the repository.
Solution 1: You can add allow_failure: true to the first job. This will mark the pipeline as successful despite the job failing, and subsequent jobs will not be executed because the first job did not succeed. The drawback is that when you investigate the pipeline there will be an exclamation mark instead of a green check for this job.
Solution 2: Instead of failing the first job when there are no changes, remove the inventory.json file, and have all subsequent jobs terminate directly with exit code 0 when the file does not exist. Note that this only works because inventory.json is marked as an artifact.
Based on Fzgregors suggestion, this is how I solved my problem:
If there was a difference and I wanted my second stage to actually do some work, I created a file called "continue" and made it available as an artifact.
The second stage looks for that file and uses an if statement to decide whether it should do something or just exit nicely.
Get_inventory_dump:
  stage: compare
  only:
    - schedules
  script:
    - 'curl -k --output "previous-inventory.json" --header "PRIVATE-TOKEN: $user_token" "https://url/to/get/artifact/from/last/successful/run"'
    - python3 auto_config_scripts/dump_device_inventory_api_to_json.py -p $pass -o /inventory.json -u https://url/for/inventory/dump -y
    - /usr/bin/cmp previous-inventory.json inventory.json && echo "No Change in inventory since last successful run" || echo "Inventory has changed since last run, continue" && touch continue
  artifacts:
    when: on_success
    expire_in: 4 weeks
    paths:
      - inventory.json
      - continue

Generate_icinga_config:
  stage: build
  only:
    - schedules
  when: on_success
  script:
    - if [[ -f continue ]]; then
        do some stuff;
      else
        echo "No Change in inventory, nothing to do";
      fi
This allowed me to keep my inventory artifact but at the same time let the next stage know if it needed to do some work or just do nothing and exit with code 0
I have a substantially similar construction and I'm looking for essentially the same solution.
When I use allow_failure: true, my subsequent jobs DO execute.
If I use a stamp file, all of the subsequent jobs also execute, taking up queue slots, runners, etc., even though they aren't needed.
I was hoping for an easier solution, but I think I'm going to have to go with generated YAML files. That seems to be the only way to inject dynamic information like decisions into a pipeline.
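For reference, the generated-YAML approach maps onto GitLab's dynamic child pipelines; a rough sketch (write_child_pipeline.sh is a hypothetical script that emits either the real jobs or a single no-op job, depending on the comparison result):

generate_config:
  stage: compare
  script:
    - ./write_child_pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run_child:
  stage: build
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate_config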
