Publishing test results to Azure (VS Database Project, tSQLt, Azure Pipelines, Docker)

I am trying to fully automate the build, test, and release of a database project using Azure Pipeline.
I already have a Visual Studio solution which consists of three database projects. The first project is the database, which contains the tables, stored procedures, functions, data, etc. The second project is the tSQLt framework (v 1.0.5873.27393 if anyone is interested). The third project contains the tSQLt tests.
My goal here is to check the solution into source control and have the pipeline automatically build the solution, deploy the dacpacs to a build server (Docker in this case), run the tSQLt tests, and publish the results back to the pipeline.
My pipeline works like this:
Build the Visual Studio solution
Publish the artifacts
Set up a Docker container running Ubuntu & SQL Server
Install SqlPackage
Deploy the dacpacs to the SQL instance
Run the tSQLt tests
Publish the test results
Everything up to publishing the results is working, but on this step I got the following error:
[warning]Failed to read /home/vsts/work/1/Results.xml. Error : Data at the root level is invalid. Line 1, position 1.
I added another step in the pipeline to display the content of the Results.xml file. It appears like this:
XML_F52E2B61-18A1-11d1-B105-00805F49916B
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
<testsuites><testsuite id="1" name="MyNewTestClassOne" tests="1" errors="0" failures="0" timestamp="2021-02-01T10:40:31" time="0.000" hostname="f6a05d4a3932" package="tSQLt"><properties/><testcase classname="MyNewTestClassOne" name="TestNumberOne" time="0.
I'm not sure if the column name and dashes should be in the file, but I'm guessing not. I added another step to remove them, leaving me with just the XML. But this then gave me a different error to deal with:
##[warning]Failed to read /home/vsts/work/1/Results.xml. Error : There is an unclosed literal string. Line 2, position 1.
This one is a little obvious to spot, because as you'll see above, the XML is incomplete.
Here is the part of my pipeline which runs the tSQLt tests and outputs the results to Results.xml:
- script: |
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'EXEC tSQLt.RunAll;'
  displayName: 'tSQLt - Run All Tests'
- script: |
    cd $(Pipeline.Workspace)
    sqlcmd -S 127.0.0.1,1433 -U SA -P Password.1! -d StagingDB -Q 'SET NOCOUNT ON; EXEC tSQLt.XmlResultFormatter;' -o 'tSQLt_Results.xml'
  displayName: 'tSQLt - Output Results'
I've researched so many blogs and articles on this, and most people are doing the same. Some people use PowerShell instead of sqlcmd, but given I'm using an Ubuntu machine that isn't an option here.
I am all out of options, so I am looking for a little help on this.

You are dealing with two problems here: there is noise in your result set that is not XML, and your XML result is truncated after 256 characters. I can help you with both.
What I am doing is basically this:
/opt/mssql-tools/bin/sqlcmd \
-S "localhost, 31114" -U sa \
-P "password" \
-d dbname \
-y0 \
-Q "BEGIN TRY EXEC tSQLt.RunAll END TRY BEGIN CATCH END CATCH; EXEC tSQLt.XmlResultFormatter" \
| grep -w "<testsuites>" \
| tee "resultfile.xml"
A few things to note:
-y0 is important. It sets the maximum width of the XML result column to unlimited, up from the default of 256 characters.
grep filters the output so you only get the XML line and not the noise around it.
If you want to run only a subset of your tests, you need to amend the SQL query being passed in, but other than that, this is a catch-all one-liner to run all tests and get the results in XML format, readable by Azure DevOps.
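For completeness, here is a rough sketch of how that one-liner could slot into the asker's pipeline, followed by a publish step. The server address, credentials, database name, and file locations are placeholders based on the question, and the PublishTestResults@2 settings assume the JUnit-style XML that tSQLt.XmlResultFormatter emits:

# Sketch only: adjust server, credentials, and paths to your environment.
- script: |
    /opt/mssql-tools/bin/sqlcmd \
      -S 127.0.0.1,1433 -U SA -P 'Password.1!' -d StagingDB \
      -y0 \
      -Q "BEGIN TRY EXEC tSQLt.RunAll END TRY BEGIN CATCH END CATCH; EXEC tSQLt.XmlResultFormatter" \
      | grep -w "<testsuites>" \
      | tee "$(Pipeline.Workspace)/tSQLt_Results.xml"
  displayName: 'tSQLt - Run Tests and Capture Results'
- task: PublishTestResults@2
  displayName: 'Publish tSQLt Results'
  inputs:
    testResultsFormat: 'JUnit'            # XmlResultFormatter output is JUnit-compatible
    testResultsFiles: 'tSQLt_Results.xml'
    searchFolder: '$(Pipeline.Workspace)'
    failTaskOnFailedTests: true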

Related

Tabular Editor - Set "Shared Expression" value from Tabular Editor CLI (with Azure Devops)

I need to change the value of a parameter in a TOM. I am using Azure DevOps with steps that include the Tabular Editor CLI. I have written a one-line script that should be able to change the value of a Shared Expression. (Maybe a shared expression is read-only?)
The script that will be executed:
Model.Expressions["CustomerNameParameter"].Expression = "\"some value\" meta [IsParameterQuery=true, Type=\"Text\", IsParameterQueryRequired=true]";
It returns an error whenever Azure DevOps tries to run it:
It cannot find the CustomerNameParameter in the model.
My build looks like this:
Starting: Build Mode.bim from SourceDirectory
==============================================================================
Task : Command line
Description : Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
Version : 2.201.1
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
==============================================================================
Generating script.
Script contents:
TabularEditor.exe "D:\a\1\s" -B "D:\a\1\a\Model.bim"
========================== Starting Command Output ===========================
"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "D:\a\_temp\ba31b528-d9a3-42cc-9099-d80d46d1ffe6.cmd""
Tabular Editor 2.12.4 (build 2.12.7563.29301)
--------------------------------
Dependency tree built in 113 ms
Loading model...
Building Model.bim file...
Finishing: Build Mode.bim from SourceDirectory
Your script looks good. Have you tried executing it within the Tabular Editor UI?
Perhaps the parameter is named differently in your model. You can use the following script in the CLI to output the list of parameters:
foreach(var expr in Model.Expressions) Info(expr.Name);
Running this in the CLI will list the expression names in your model, so you can confirm whether CustomerNameParameter actually exists.
What I did to fix this was to create a separate script (SharedExpressions.csx) and manually add a line like this for each expression:
Model.Expressions["Start Date - Sales"].Expression = "#datetime(2019, 1, 1, 0, 0, 0) meta [IsParameterQuery=true, Type=\"DateTime\", IsParameterQueryRequired=true]";
In this case my expressions are datetime values, but you can change the type to anything you like. Just be very careful with the quote placement.
Then in my pipeline I use the following script to execute the changes:
start /wait TabularEditor.exe "$(System.DefaultWorkingDirectory)/Tabular Model/model/model.bim" -S "$(System.DefaultWorkingDirectory)/Tabular Model/scripts/SharedExpressions.csx" -D -V
My tabular model uses a shared expression (parameter) to set the connection string. The following solution worked for me. I updated the command line script in the release pipeline as follows:
echo var connectionString = Environment.GetEnvironmentVariable("SQLDWConnectionString"); >> SetConnectionStringFromEnv.cs
echo Model.Expressions["ServerName"].Expression = "\"" + connectionString +"\"" + " meta [IsParameterQuery=true, Type=\"Text\", IsParameterQueryRequired=true]"; >> SetConnectionStringFromEnv.cs
TabularEditor.exe "_$(Build.DefinitionName)\drop\Model.bim" -S SetConnectionStringFromEnv.cs -D "%ASConnectionString%" "$(ASDatabaseName)" -O -C -P -R -M -W -E -V

How to set a "title" or "name" for a bitbucket script execution element?

I am wondering whether one can set a title or name for an execution element in a Bitbucket pipeline:
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - "Configure": ./configure
          - "Build": make
          - "Test": make test
          - "Long Script": |
              make whatever1
              make whatever2
              make whatever3
I'd expect the output to be:
Configure
Build
Test
Long Script
within the titles, and see the script only when I unfold the execution elements in the UI, just like with GitHub.
Any ideas? :-)
The only workaround I found was to put everything in bash scripts, but then I do not see the executed commands, which I still want.
Thanks.
Any step of your pipeline can have its own name property; a good example can be found here.
In case you'd like to assign names to individual commands of your step's script, I reckon echo would be a good option:
echo "Test" && make test

How can I keep a file private on a public GitLab repository?

I'm working on a Java app that uses multiple APIs and would like to keep the API tokens out of the public GitLab repository. The app is packaged and deployed to a remote server and I don't know how to make the tokens available without including them in the GitLab repository otherwise.
Is there a way I can restrict the viewing of a file (or part of it) to sort of "redact" these tokens? Or should I go about it a different way?
Don't put API keys in your repo. Inject them at deployment time via environment variables that are maintained by your deployment system. If your deployment system doesn't have that ability, you probably need to change it. It doesn't need to be too complicated: for example, deploy your code from git, then copy a .env file into place separately. If your deployment mechanism only lets you use git repos, you could put your env vars into a separate repo that is kept private.
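As a rough sketch of that idea in GitLab CI terms (the variable name, stage, and deploy script below are hypothetical, not from the question), a deploy job could recreate the .env file from a protected CI/CD variable instead of ever committing it:

deploy:
  stage: deploy
  script:
    # API_TOKENS_ENV is a hypothetical protected CI/CD variable set in the project settings
    - echo "$API_TOKENS_ENV" > .env
    # hypothetical deploy script that reads the tokens from .env at runtime
    - ./deploy.sh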
I have a similar situation with injecting a google-services.json file into an Android application. Long story short, we have multiple environments our app targets, and the production environment file must somehow be available to the build pipelines (committed, or provided in some other way).
As pointed out in the previous response, having this information committed in the main repo is not ideal. Developers could accidentally use the production environment while testing, for example.
How we solved this
First, documentation. All those files (google-services.json and similar files) are ignored in git, and the developer documentation states you must add your own.
Second, the CI build pipelines. We are also using GitLab, and we store those files as base64-encoded strings in CI variables, then control access to those variables via the protected tags/branches mechanism GitLab offers.
Serializing the files
There are two steps involved here. First, serialize the actual file to base64. Second, deserialize the file from base64 into its appropriate location.
base64 --wrap=0 google-services.json (the --wrap=0 option prevents line wrapping when run directly in a console). Then store the output in a GitLab CI variable.
In the .gitlab-ci.yml file, do the inverse to inject the file:
echo $VAR_NAME | base64 -d > where/you/need/the/file
You then control the appropriate environment to use via the $VAR_NAME variable.
An example of this can be found at https://gitlab.com/snippets/1926611. That case is for an XML file with the Google Maps API key, but the process is identical.
You can create a variable in your GitLab project settings. The variable can then be used in your .gitlab-ci.yml file.
For example,
create a variable named GOOGLE_SERVICE_JSON and set the value to the base64 encoding of the file content. You can get it with the command base64 google-services.json
update your .gitlab-ci.yml file to decode the GOOGLE_SERVICE_JSON value back into a google-services.json file, like this:
assembleDebug:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/
You can also use this method to encode the keystore file to a variable and decode it back to a file during the pipeline build.
Here is a full example:
image: openjdk:8-jdk
variables:
  ANDROID_COMPILE_SDK: "28"
  ANDROID_BUILD_TOOLS: "28.0.3"
  ANDROID_SDK_TOOLS: "6609375_latest"
before_script:
  - echo ANDROID_COMPILE_SDK ${ANDROID_COMPILE_SDK}
  - echo ANDROID_BUILD_TOOLS ${ANDROID_BUILD_TOOLS}
  - echo ANDROID_SDK_TOOLS ${ANDROID_SDK_TOOLS}
  - apt-get --quiet update --yes
  - apt-get --quiet install --yes wget tar unzip lib32stdc++6 lib32z1
  - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/commandlinetools-linux-${ANDROID_SDK_TOOLS}.zip
  - unzip -d android-sdk-linux android-sdk.zip
  - export ANDROID_SDK_ROOT=$PWD/android-sdk-linux
  - export SDK_MANAGER="${ANDROID_SDK_ROOT}/tools/bin/sdkmanager --sdk_root=${ANDROID_SDK_ROOT}"
  - echo y | ${SDK_MANAGER} "platforms;android-${ANDROID_COMPILE_SDK}" >/dev/null
  - echo y | ${SDK_MANAGER} "platform-tools" >/dev/null
  - echo y | ${SDK_MANAGER} "build-tools;${ANDROID_BUILD_TOOLS}" >/dev/null
  - export PATH=$PATH:${ANDROID_SDK_ROOT}/platform-tools/
  - chmod +x ./gradlew
  # temporarily disable checking for EPIPE error and use yes to accept all licenses
  - set +o pipefail
  - echo y | ${SDK_MANAGER} --licenses
  - set -o pipefail
stages:
  - build
assembleDebug:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - echo ${KEY_STORE_PROP} | base64 -d > app/keystore.properties
    - echo ${STORE_FILE} | base64 -d > app/keystore.jks
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/
assembleRelease:
  stage: build
  script:
    - echo ${GOOGLE_SERVICE_JSON} | base64 -d > app/google-services.json
    - echo ${KEY_STORE_PROP} | base64 -d > app/keystore.properties
    - echo ${STORE_FILE} | base64 -d > app/keystore.jks
    - ./gradlew assembleRelease
  artifacts:
    paths:
      - app/build/outputs/

Extract and write gitlab pipeline ID in gatling log

I am new to both GitLab and Gatling, although I am very proficient with Jenkins and JMeter. I am trying to implement shift-left performance testing as part of a CI/CD pipeline using these tools, and I am trying to write the GitLab pipeline ID to the Gatling simulation log file.
I have looked at the GitLab predefined variables, and $CI_PIPELINE_IID exposes the pipeline execution ID. In Scala I can fetch or inject environment variables using JAVA_OPTS, but I am not able to fetch the GitLab pipeline ID.
I did some further research and I was able to print the pipeline ID on the GitLab console:
$ echo $CI_PIPELINE_ID
141683
Further, I read that GitLab CI variables are available as environment variables during the build, so I tried retrieving them as:
System.getenv("$CI_PIPELINE_IID")
But when I try this, I get a null value:
val varPipelineId = System.getenv("$CI_PIPELINE_ID")
println(varPipelineId)
Console log:
$ echo $CI_PIPELINE_ID
141694
$ mkdir /opt/gatling/user-files/simulations/cloudnative/
$ cp gatling-reports/cloudnativems.scala /opt/gatling/user-files/simulations/cloudnative/
$ /opt/gatling/bin/gatling.sh -s cloudnative.cloudnativems
GATLING_HOME is set to /opt/gatling
null
Second part of the requirement:
I want to write the value of the pipeline ID into the Gatling simulation.log file with each log entry.
Actual format of simulation.log:
REQUEST 1 account_movement_post 1553663401413 1553663404218 KO status.find.in(200,201,202,203,204,205,206,207,208,209,303), found 500
Desired format of simulation.log:
REQUEST 1 account_movement_post 1553663401413 1553663404218 KO status.find.in(200,201,202,203,204,205,206,207,208,209,303), found 500 [pipeline_id]
I made a logical change: I removed the $ from the environment variable name.
val varPipelineId = System.getenv("CI_PIPELINE_ID")
println(varPipelineId)
Output:
$ echo $CI_PIPELINE_ID
141714
$ mkdir /opt/gatling/user-files/simulations/cloudnative/
$ cp gatling-reports/cloudnativems.scala /opt/gatling/user-files/simulations/cloudnative/
$ /opt/gatling/bin/gatling.sh -s cloudnative.cloudnativems
GATLING_HOME is set to /opt/gatling
Some(141714)
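For reference, here is a minimal sketch of the corresponding .gitlab-ci.yml job, reconstructed from the console log above (the stage name is an assumption). CI_PIPELINE_ID is a predefined GitLab variable that is already exported into the job's environment, so any process the script starts, including the Gatling JVM, can read it with System.getenv:

gatling:
  stage: test          # assumed stage name
  script:
    - echo $CI_PIPELINE_ID          # predefined variable, already present in the environment
    - mkdir -p /opt/gatling/user-files/simulations/cloudnative/
    - cp gatling-reports/cloudnativems.scala /opt/gatling/user-files/simulations/cloudnative/
    - /opt/gatling/bin/gatling.sh -s cloudnative.cloudnativems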

D2RQ parameters for generate-mapping

We are currently working on a project involving an "ordinary" relational database, but we wish to enable SPARQL queries against this database.
D2RQ (d2rq.org) is a tool that enables SPARQL queries to be run against the database with the help of a .ttl file which defines the database-to-RDF mapping.
This .ttl file can be built automatically with a D2RQ tool named "generate-mapping".
http://d2rq.org/generate-mapping takes quite a few arguments, some preceded by a single dash "-" and some by a double dash "--". My challenge is that any argument preceded by a double dash generates this error:
Command:
./generate-mapping -u root -p password -o testmappingLocal.ttl --verbose jdbc:mysql:///iswc
Result:
Exception in thread "main" java.lang.IllegalArgumentException: Unknown argument: --verbose
at jena.cmdline.CommandLine.handleUnrecognizedArg(CommandLine.java:215)
at jena.cmdline.CommandLine.process(CommandLine.java:177)
at d2rq.generate_mapping.main(generate_mapping.java:41)
Any help with the double-dash arguments will be greatly appreciated.
OS: Ubuntu Linux, D2RQ version: 0.8
To use D2RQ with a MySQL database, generate the mapping file and then the RDF files.
1) Generate the mapping file:
./generate-mapping -u root -p root -o /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl jdbc:mysql://localhost:3306/d2rq
Notes:
1. -u root -p root -> the MySQL username and password.
2. /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl -> the output path for the mapping file.
3. jdbc:mysql://localhost:3306 -> the MySQL JDBC URL.
4. /d2rq -> the database name.
2) Create the RDF dump using the mapping file.
The -f option sets the RDF syntax to use for output. Supported syntaxes are "TURTLE", "RDF/XML", "RDF/XML-ABBREV", "N3", and "N-TRIPLE" (the default). "N-TRIPLE" works best for large databases.
Command:
./dump-rdf -f RDF/XML -b localhost:3306 -o /home/bigtapp/Documents/d2rqgenerate_mapping/dumpfile.rdf /home/bigtapp/Documents/d2rqgenerate_mapping/mapfile.ttl
Then, in Apache Jena Fuseki, create a dataset, upload the RDF file to the server, and run your SPARQL query to get the results.
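As a rough sketch of that last step, assuming a local Fuseki installation and an arbitrary dataset name of /d2rq, the generated dump can be loaded into an in-memory Fuseki dataset and queried over HTTP:

# Start Fuseki with the dump loaded into an in-memory dataset named /d2rq (assumed name)
./fuseki-server --file=/home/bigtapp/Documents/d2rqgenerate_mapping/dumpfile.rdf /d2rq
# SPARQL queries can then be sent to http://localhost:3030/d2rq/query (default port 3030)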
