Parameterize Tests to run Parallel or Single Thread in MSTEST - multithreading

My scenario is that we want to run tests multithreaded on DevOps. Normally everything should run and pass, so no PNGs or video recordings are needed.
BUT, for when we do have issues, I have a runsettings file that is set up to record videos. It creates a folder for every test that has run, but only the main window that is displayed is actually being recorded. So, my idea is to turn off the Parallelize option for the tests, to see whether the video recorder can then start and stop on each individual test. How would I change that setting based on the runsettings file, i.e. how can I turn it off at the application level? Or, alternatively, how do I successfully record video for each test when the tests run in parallel?
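For reference, MSTest parallelization can also be controlled from the .runsettings file itself, so one option is to keep two settings files (one parallel, one serial with video recording) and pick one per run. A minimal sketch; the Workers and Scope values here are just examples:

```xml
<RunSettings>
  <!-- MSTest v2 adapter settings: Workers = 1 forces serial execution,
       Workers = 0 uses as many workers as there are logical processors. -->
  <MSTest>
    <Parallelize>
      <Workers>1</Workers>
      <Scope>MethodLevel</Scope>
    </Parallelize>
  </MSTest>
</RunSettings>
```

If an assembly-level Parallelize attribute is also present, check your MSTest adapter version's precedence rules to see which one wins.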

I actually found another way to do it. During the DevOps process, after checking out the code, I run the bit of PowerShell below. After that, only a single test runs at a time.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      (Get-Content -Path '$(Build.Repository.LocalPath)\code\properties\AssemblyInfo.cs') |
      ForEach-Object {$_ -replace 'Workers = 0','Workers = 1'} |
      Out-File '$(Build.Repository.LocalPath)\code\properties\AssemblyInfo.cs'
    errorActionPreference: 'continue'
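The replacement above assumes AssemblyInfo.cs carries an assembly-level Parallelize attribute along these lines (a sketch, not the actual file contents):

```csharp
// Workers = 0 means "use as many workers as there are logical processors";
// the pipeline step rewrites it to Workers = 1 to serialize the tests.
[assembly: Parallelize(Workers = 0, Scope = ExecutionScope.MethodLevel)]
```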

Related

Azure Pipelines - Setting Custom Variable to Build Number

In my Azure Build Pipeline (classic, not YAML) I set my build number to be the branch name and then a revision number variable. This was my process for that:
Pipelines -> Pipelines -> {my pipeline} -> Edit -> Options -> Build Number Format
$(SourceBranchName)$(Rev:.r)
In my testing, that works great.
Now, in my Release Pipeline, the first script I run is a PowerShell script that takes the build number, and applies it to a local variable (MyBuild) I created. The script is as follows:
Write-Host "Pipeline = $($pipeline | ConvertTo-Json -Depth 100)"
$buildNumber = $Env:BUILD_BUILDNUMBER
$pipeline.variables.MyBuild.value = $buildNumber
This variable is used later in the pipeline to create a folder that houses my release files.
$(BuildDirectory)/$(MyBuild)/Debug
For some reason, my variable is always one build behind. For example, if my build number is master.5, the folder that is created by my Release Pipeline is master.4. I have tried changing the order my scripts are in the pipeline, but that doesn't solve anything. It is weird because my Build Pipeline is correct (always named properly, ex. master.1, master.2, master.3, etc.) but my Release Pipeline variable is always one revision behind.
PowerShell script to update the custom build number:
- powershell: |
    [string]$version = "$(Build.Repository.Name)_SomeCustomData_$(Build.BuildId)"
    Write-Output "##vso[build.updatebuildnumber]$version"
  displayName: Set Build Number
I tested it and it works well. Below is my reproduction, which you can refer to:
In the release pipeline:
Write-Host '##vso[task.setvariable variable=MyBuild]$(Build.BuildNumber)'
md $(Agent.ReleaseDirectory)/$env:MyBuild/Debug
Select the build as the release artifact source, set the default version to Latest, and enable the continuous deployment trigger; this creates a release every time a new build is available.
Test result:
In addition, one point I am confused about: how do you use $(BuildDirectory) in the release pipeline? Agent.BuildDirectory is the local path on the agent where all folders for a given build pipeline are created. This predefined variable is not available in the release pipeline; we should use Agent.ReleaseDirectory instead. You can refer to the predefined variables documentation.

Declaration and usage of Output Variable in Azure Devops

I'm creating a Continuous Integration pipeline that uses Bash script tasks in order to create the initial variables for runtime.
I have a variable that I call datebuild, which is formatted as: $(date +%Y%m%d_%H%M%S).
Currently I'm declaring it as a pipeline variable.
When using the datebuild variable in a Bash@3 task, it is formatted successfully.
Afterwards I want to take the formatted output and use it in different tasks inside one agent job.
In the second task I need to copy a file to the Artifact Staging Directory:
20200423_141808 is the file and the Artifact Staging Directory is the destination directory, for example.
I've been reading that this can be done with a feature called output variables.
I created the reference name ref1, and in the task where I want to consume the output variable I use ref1.datebuild to access it.
I followed the documentation for output variables, but it doesn't seem to work.
Here's the task inside the pipeline:
Trying to understand what I'm missing.
You can take the formatted date and set it as a variable for the next steps in the job.
For example, in YAML pipeline:
variables:
  datebuild: '$(date +%Y%m%d_%H%M%S)'
steps:
- bash: |
    formated="$(datebuild)"
    echo "##vso[task.setvariable variable=formatedDate]$formated"
- bash: |
    echo $(formatedDate)
In the editor:
The second bash task output is:
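Outside the agent, the handoff mechanism is easy to see: the first task just prints a specially formatted line to stdout, which the agent parses and turns into a variable for subsequent tasks. A minimal local simulation (the variable name matches the snippet above):

```shell
#!/bin/sh
# What the first bash task actually emits: a ##vso logging command on stdout.
# The agent intercepts this line; later tasks in the job then see $(formatedDate).
formated="$(date +%Y%m%d_%H%M%S)"
line="##vso[task.setvariable variable=formatedDate]$formated"
echo "$line"
```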

Azure Build.SourceVersionMessage returns Null on pipeline task level

In Azure pipeline I use Build.SourceVersionMessage variable to get last commit message. Based on that I want to decide if to build docker image or not (if commit message contains 'BUILD-DOCKER' then build docker image):
...
- task: Docker@0
  condition: and(succeeded(), contains(variables['Build.SourceVersionMessage'], 'BUILD-DOCKER'))
...
Problem is that during pipeline execution commit message is a null:
Evaluating: and(succeeded(), contains(variables['Build.SourceVersionMessage'], 'BUILD-DOCKER'))
Expanded: and(True, contains(Null, 'BUILD-DOCKER'))
Result: False
Any idea why it is null?
Additionally, other variables, e.g. Build.SourceBranch, are resolved properly.
You did not do anything wrong. Apologies, this is an issue caused on our side.
For security-related design reasons, this variable was removed from the system by us. Our team has prepared the fix (revoking this deletion) and the PR is in progress.
The deployment will be released as soon as possible. Once it is released, this variable will shortly be injected into the environment again.
Check this ticket to get timely updates from our engineers.
You can use the script below to check which variables the system still provides:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: 'env | sort'
Please choose one from its result to use in your condition as a temporary workaround; this will not affect your build process.
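If you need the commit message before the variable is restored, one hedged workaround (not part of the answer above) is to ask git directly in a script step, since the message is in the checked-out repo anyway, and publish the result as a variable:

```shell
#!/bin/sh
# Read the last commit message straight from git (assumes the task runs in
# the checked-out repository; falls back to empty outside a repo).
msg="$(git log -1 --pretty=%B 2>/dev/null || true)"
if printf '%s' "$msg" | grep -q 'BUILD-DOCKER'; then
  flag=true
else
  flag=false
fi
# Expose the result to later tasks via a logging command.
echo "##vso[task.setvariable variable=buildDocker]$flag"
```

A later task could then use condition: eq(variables['buildDocker'], 'true').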

Azure Devops logging commands in release pipeline

I am trying to customize the output of my release pipeline by setting some variables in a task.
I found the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=powershell
which, however, does not seem to work.
What I am doing is simply creating a pipeline with a single task (either Bash or PowerShell), and there declaring the commands specified in the link through the inline version of the task.
Has anyone already successfully managed to make these commands work?
Am I doing something wrong and/or incomplete?
Does anyone have a better way to customise the pipeline with relevant information from a task? E.g. through the release name, or the description and/or tag of the specific release?
Edit:
Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
This is the code, copied and pasted. I also tried it as a script, and it does not work either.
I use a hosted Windows agent.
When you set a new variable with the logging command, the variable is available only in subsequent tasks, not in the same task.
So, split your script into two tasks; put the last three lines in the second task and you will see that the first task works:
This also puzzled me for a while. In the end I found out that if you want to modify $env:Path you can call a special command, task.prependpath, using the logging-command syntax: "##vso[task.prependpath]local directory path". You can find more of these special commands in their source:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/authoring/commands.md
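The prependpath command follows the same pattern as the other logging commands: the task writes a line to stdout and the agent prepends that directory to PATH for all subsequent tasks in the job. A minimal sketch (the directory is a made-up example):

```shell
#!/bin/sh
# The agent parses this stdout line and prepends the directory to PATH
# for later tasks in the job (not for the currently running one).
tooldir="/opt/mytool/bin"
echo "##vso[task.prependpath]$tooldir"
```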

Automate cleanup of IIS sites via Octopus Deploy on TeamCity branch removal

I'm extremely new to this, so apologies if it's a dumb question but I couldn't find anything about it either here, at help.octopusdeploy.com, or google.
Additionally, I'm a DevOps engineer, not a developer and have been using TC and Octopus for about 3 weeks. I'm loving it so far, but it's probably best if you consider me a total rookie ;)
I currently have a build configuration in TeamCity that, on a successful build, creates a release in Octopus and deploys the project to a test server. It is kept separate but deployed alongside the master build. So, in IIS it looks like:
IIS Sites
site.domain.com (master build)
featurebuild1-site.domain.com (feature branch 1)
featurebuild2-site.domain.com (feature branch 2)
etc...
Obviously, this makes life really easy for the devs when testing their feature builds, but it leaves a hell of a mess on the test and integration servers. I can go in and clean them up manually, but I'd vastly prefer it to not leave crap lying around after they've removed the branch in TeamCity.
So, the Project in TeamCity looks like:
Project Name
Feature
/Featurebuild1
/Featurebuild2
/Featurebuild3
Master
Assuming all three feature builds run successfully, I will have 3 feature build IIS sites on the test server alongside the master. If they decide they're done with Featurebuild3 and remove it, I want to somehow automate the removal of featurebuild3-site.domain.com in IIS on my test server. Is this possible? If so, how?
My initial thoughts are to have another Octopus project that will go in and remove the site(s), but I can't figure out if I can/how to trigger it.
Relevant details:
TeamCity version: 9.1.1 (build 37059)
Octopus Deploy version: 3.0.10.2278
Ok, it took me a little while to figure it out, but here's what I ended up doing (just in the event that anyone else is attempting to do the same thing).
I ended up bypassing TeamCity entirely and using our Stash repositories as the source. Also, as I didn't need it to clean up IMMEDIATELY upon deletion, I was happy to have it run nightly. Once I'd decided that, it was then down to a bunch of nested REST API calls to loop through each project and team to enumerate all the different repositories (apologies if I'm butchering terminology here).
$stashroot = "http://<yourstashsite>/rest/api/1.0"
$stashsuffix = "/repos/"
$stashappendix = "/branches"
# $stash was not defined in the original; presumably the projects listing endpoint:
$stash = $stashroot + "/projects/"
$teamquery = curl $stash -erroraction silentlycontinue
At this point, I started using jq (https://stedolan.github.io/jq/) to do some better parsing of the text I was getting back:
$teams = $teamquery.content | jq -r ".values[].link.url"
Foreach ($team in $teams)
{
    # Get the list of branches in the repository
    # Feature branch URL format is like: http://<yourstashsite>/projects/<projectname>/repos/<repositoryname>/branches
    $project = $stashroot + $team + $stashsuffix
    $projectquery = curl $project -erroraction silentlycontinue
    $repos = $projectquery.content | jq -r ".values[].name"
    Foreach ($repo in $repos)
    {
        Try
        {
            $repository = $stashroot + $team + $stashsuffix + $repo + $stashappendix
            $repositoryquery = curl $repository -erroraction silentlycontinue
            $reponames = $repositoryquery.content | jq -r ".values[].displayId"
            Foreach ($reponame in $reponames)
            {
                #write-host $team "/" $repo "/" $reponame -erroraction silentlycontinue
                $NewObject = new-object PSObject
                $NewObject | add-member -membertype NoteProperty -name "Team" -value $team
                $NewObject | add-member -membertype NoteProperty -name "Repository" -value $repo
                $NewObject | add-member -membertype NoteProperty -name "Branch" -value $reponame
                $NewObject | export-csv <desiredfilepath> -notype -append
            }
        }
        Catch {} # Yes, I know this is terrible; it makes me sad too :(
    }
}
After that, it was simply a matter of running a Compare-Object against the CSV files from two different days (I have logic in place that looks for a pre-existing CSV and renames it with an "_yesterday" suffix), outputting to a file all the repositories/builds that have been removed since yesterday.
Then it strips out the feature branch names (which we use to prefix test site names in IIS), loops through looking for any sites in IIS that match that prefix, removes them and the associated application pool, and deletes the directory on the server that stored the site content.
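The day-over-day comparison (done above with Compare-Object) can be sketched in a few lines of plain shell; file names and branch names here are illustrative, not the actual ones:

```shell
#!/bin/sh
# Branches recorded yesterday vs. today; comm -23 prints lines unique to
# the first (sorted) file, i.e. branches deleted since the last run.
printf 'featurebuild1\nfeaturebuild2\nfeaturebuild3\n' | sort > yesterday.csv
printf 'featurebuild1\nfeaturebuild2\n' | sort > today.csv
deleted="$(comm -23 yesterday.csv today.csv)"
echo "$deleted"
```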
I'm sure there are far better ways to achieve this, especially if you know how to code. I'm just a poor little script monkey though, so I have to make do with what I have :)
