When using runtime parameters in an Azure DevOps build pipeline, is there a way to mask the values of the parameters in the pipeline logs?
Based on my research and going through the documentation, there does not seem to be a way to achieve this (at the time of writing this). There are alternatives like using variable groups/secret variables but since the parameter values are user provided and would change for each pipeline trigger, such options are not ideal.
If the parameters values cannot be masked, can we turn off pipeline logs altogether?
During our tests, we defined a secret variable with the same value as the parameter, and after that the parameter value could no longer be echoed in the pipeline logs.
However, the value is still visible in the UI when triggering the run. As an additional hardening measure, you could set more restrictive permissions on the pipeline or on the YAML repository to limit who can access it. You could also create a Feature Request to raise your concern with the product team.
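If it helps, here is a minimal, hedged sketch of that idea in YAML: the runtime parameter value is re-registered as a secret via the task.setvariable logging command at the start of the run, so the agent masks it in later log output. The parameter name userToken is a placeholder.

parameters:
  - name: userToken        # placeholder name for the user-provided value
    type: string

steps:
  # Register the parameter value as a secret; the agent then masks any later
  # occurrence of this value in the logs. Avoid echoing the raw value before this.
  - bash: |
      echo "##vso[task.setvariable variable=maskedUserToken;issecret=true]${{ parameters.userToken }}"
    displayName: Register parameter value as secret

  # Secret variables are not mapped into the environment automatically, so map it
  # explicitly; any output containing the value is then rendered as *** in the logs.
  - bash: |
      echo "Token is $MASKED_USER_TOKEN"
    env:
      MASKED_USER_TOKEN: $(maskedUserToken)
    displayName: Use the masked value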
==========================================================
11/7 4:20PM UTC+8 Update.
Hi Vineet,
If you want to hide the logs, you could try to limit access to the pipeline builds, even though this cannot be achieved directly through a UI setting.
To give you a bit of context, we have a service that has been split into two services: one for the read-side and one for the write-side operations. The read side is called ProductStore and the write side is called ProductCatalog. The issue we're facing is on the write side: the load tests create 100 products in the write-side web app, which are then transferred to the read side for the load test to read x number of times. If a build is launched in ProductCatalog because something new was merged to master, this causes issues in the ProductStore pipeline if the two run concurrently.
The question I want to ask is: is there a way, in the ProductStore YAML file, to query directly (via a built-in Azure task or an AzurePowerShell script) whether a build is currently running in the ProductCatalog pipeline?
The second part of this would be to loop/wait until that pipeline has successfully finished before resuming the ProductStore pipeline.
I hope this is clear; I'm not sure of the best way to ask this question as I'm very new to the DevOps pipelines flow, but it would massively help if there was a good way of checking this sort of thing.
As a workaround, you can set up a pipeline completion trigger in the ProductStore pipeline.
To trigger a pipeline upon the completion of another, specify the triggering pipeline as a pipeline resource.
Alternatively, to configure build completion triggers in the UI, choose Triggers from the settings menu and navigate to the YAML pane.
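For illustration, a minimal sketch of the pipeline-resource approach in the ProductStore YAML (the alias and branch are assumptions and may need adjusting for your setup):

# ProductStore pipeline: run automatically after a ProductCatalog run completes.
resources:
  pipelines:
    - pipeline: productCatalog      # alias used inside this YAML file
      source: ProductCatalog        # name of the triggering pipeline in Azure DevOps
      trigger:
        branches:
          include:
            - master

steps:
  - script: echo "ProductCatalog has finished; safe to run the ProductStore load test."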
I would like to move the existing Azure DevOps pipelines to YAML based for obvious advantages. The problem is there are many of these and each one has many jobs.
When I click around in Azure DevOps, the "View YAML" link only appears for one job at a time. So that's gonna be a lot of manual work to view YAMLs for each pipeline x jobs and move that to code.
But for each pipeline there seems to be a way to "export" the entire pipeline in json. I was wondering if there is a similar way to at least dump the entire pipeline as YAML if not an entire folder.
If there is an API which exports the same then even better.
Currently, what we support is what you see: use View YAML to copy and paste the definition of each agent job. There is another workaround to get the entire definition of a pipeline: use the API to get the JSON of the build definition, convert it to YAML, tweak the syntax, and then, if needed, update the referenced tasks.
1. Use the Get Build Definition API to retrieve the entire definition of the pipeline (see the sketch after these steps).
2. Run the result through a JSON-to-YAML converter: copy and paste the JSON of the definition into the converter.
3. Copy the resulting YAML into a YAML editor in Azure DevOps. The most important step is then to tweak the syntax.
4. Replace the refName key values with task names and versions. For this, you can consult our tasks' source code, which is open on GitHub; the built-in tasks can be found there (note: see the task.json file of the corresponding task).
Note: this method has the disadvantage that you need to be very familiar with YAML syntax in order to successfully tweak the content converted from JSON.
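As a rough sketch of step 1 (the organisation, project, definition ID and PAT below are placeholders; check the current REST API version before relying on it):

# Fetch the full JSON of a build definition so it can be converted to YAML.
$org        = "myOrg"            # placeholder
$project    = "myProject"        # placeholder
$definition = 42                 # placeholder definition id
$pat        = $env:AZDO_PAT      # personal access token supplied via environment variable

$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

$uri = "https://dev.azure.com/$org/$project/_apis/build/definitions/${definition}?api-version=6.0"
Invoke-RestMethod -Uri $uri -Headers $headers -Method Get |
    ConvertTo-Json -Depth 100 |
    Set-Content -Path "definition.json"   # feed this file to a JSON-to-YAML converter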
This is now done, and there is a blog post about exporting pipelines as YAML on devblogs.
It's worth mentioning that the new system knows how to handle every feature listed here:
Single and multiple jobs
Checkout options
Execution plan parallelism
Timeout and name metadata
Demands
Schedules and other triggers
Pool selection, including jobs which differ from the default
All tasks and all inputs, including optimizing for default inputs
Job and step conditions
Task group unrolling
In fact, there are only two areas which we know aren’t covered.
Variables
Variables defined in YAML “shadow” (hide) variables set in the UI. Therefore, we didn’t want to export them into the YAML file in case you need an ability to alter them at runtime. If you have UI variables in your Classic pipeline, we mention them by name in the comments to remind you that you need to configure them in your new YAML pipeline definition.
Timezone translation
cron schedules in YAML are in UTC, while Classic schedules are in the organization’s timezone. Timezone handling has a lot of sharp edges, so we opted not to try to be clever. We export the schedule without doing any translation, so your scheduled builds might be off by a certain number of hours unless you manually modify them. Here again, we make a note in the comments to remind you.
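To make the offset concrete (this example is not from the blog post): a Classic schedule of 3:00 AM in a UTC+8 organisation corresponds to 19:00 UTC on the previous day, so the exported YAML would need to be adjusted to something like:

schedules:
  - cron: "0 19 * * *"      # 19:00 UTC == 3:00 AM UTC+8 the next day
    displayName: Nightly build
    branches:
      include:
        - master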
But there won't be support for release pipelines:
No plans to do so. Classic RM pipelines are different enough in their execution that I can’t make the same strong guarantees about correctness as I can with classic Build. Also, a number of concepts were re-thought between RM and unified YAML pipelines. In some cases, there isn’t a direct translation for an RM feature. A human is required to think about what the pipeline is designed to accomplish and re-implement it using new constructs.
I tried yamlizr https://github.com/f2calv/yamlizr
It works pretty well for exporting Release Pipelines, except it doesn't export out Pre/Post deployment conditions. We use these for Approval gates. So hopefully in a future release it will be supported.
But per Microsoft, it sounds like they won't support exporting Release Pipelines to YAML.
https://devblogs.microsoft.com/devops/replacing-view-yaml/#comment-2043
Being a novice to ADF CI/CD, I am currently exploring how we can update pipeline-scoped parameters when we deploy a pipeline from one environment to another.
Here is the detailed scenario -
I have a simple ADF pipeline with a copy activity moving files from one blob container to another
Example - below there is a copy activity, and the pipeline has two parameters named:
1- SourceBlobContainer
2- SinkBlobContainer
with their default values.
Here is how the dataset is configured to consume these Pipeline scoped parameters.
Since this is the development environment, the default values are OK. But the Test environment will have containers with altogether different names (like "TestSourceBlob" & "TestSinkBlob").
That said, the CI/CD process should handle this by updating the default values of these parameters during deployment.
Reading through the documents, I found nowhere that such a use case is handled.
Here are some links which i referred -
http://datanrg.blogspot.com/2019/02/continuous-integration-and-delivery.html
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
Thoughts on how to handle this will be much appreciated. :-)
There is another approach, as an alternative to the ARM templates located in the 'ADF_Publish' branch.
Many companies leverage that workaround and it works great.
I have spent several days and built a brand new PowerShell module to publish the whole Azure Data Factory code from your master branch or directly from your local machine. The module resolves all the pain points that existed so far in any other solution, including:
replacing any property in a JSON file (ADF object),
deploying objects in the appropriate order,
deploying only a subset of objects,
deleting objects that no longer exist in the source,
stopping/starting triggers, etc.
The module is publicly available in PS Gallery: azure.datafactory.tools
Source code and full documentation are in GitHub here.
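For orientation, a hedged sketch of how the module is typically invoked (the resource names are placeholders, and the parameter names follow the module's README at the time of writing, so verify against the current documentation):

# Install and import the module, then publish the whole factory from a code folder.
Install-Module -Name azure.datafactory.tools -Scope CurrentUser
Import-Module  -Name azure.datafactory.tools

# All values below are placeholders for your own environment.
$publishParams = @{
    RootFolder        = "C:\src\adf-code"          # folder containing the ADF JSON files
    ResourceGroupName = "rg-datafactory-test"
    DataFactoryName   = "adf-productstore-test"
    Location          = "West Europe"
    Stage             = "test"                     # selects a per-environment config with property replacements
}
Publish-AdfV2FromJson @publishParams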
Let me know if you have any questions or concerns.
There is a "new" way to do ci/cd for ADF that should handle this exact use case. What I typically do is add global parameters and then reference those everywhere (in your case from the pipeline parameters). Then in your build you can override the global parameters with the values that you want. Here are some links to references that I used to get this working.
The "new" ci/cd method following something like what is outlined here Azure Data Factory CI-CD made simple: Building and deploying ARM templates with Azure DevOps YAML Pipelines. If you have followed this, something like this should work in your yaml:
overrideParameters: '-dataFactory_properties_globalParameters_environment_value "new value here"'
Here is an article that goes into more detail on the overrideParameters: ADF Release - Set global params during deployment
Here is a reference on global parameters and how to get them exposed to your ci/cd pipeline: Global parameters in Azure Data Factory
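Pulling those pieces together, a hedged sketch of what the deployment step might look like in the YAML pipeline (the service connection, file paths and global parameter name are placeholders; the exact override key depends on how the exported ARM template names your global parameter):

- task: AzureResourceManagerTemplateDeployment@3
  displayName: Deploy ADF ARM template with overridden global parameter
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'     # placeholder
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-datafactory-test'                    # placeholder
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/adf-artifact/ARMTemplateForFactory.json'
    csmParametersFile: '$(Pipeline.Workspace)/adf-artifact/ARMTemplateParametersForFactory.json'
    overrideParameters: '-dataFactory_properties_globalParameters_environment_value "test"'
    deploymentMode: 'Incremental'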
From what I can understand, one can build Logic Apps with Terraform. However, the docs are still not very good, and it looks like this feature is pretty new.
What are the limitations when it comes to TF and Azure Logic Apps? Are there any?
I want to build two apps: one that is triggered every month and another that is triggered by an HTTPS request. I want these to run two Python scripts, and I want the latter to return the result of its script to the client that made the HTTPS call.
Is this possible to automate in Terraform? At this moment, there are very little examples and documentation on this. Any comment or tip is helpful and greeted with open arms!
You can create a blank Logic App instance through Terraform (TF). But if you want to add triggers and actions, I wouldn't recommend using TF at all, as of provider version 1.20.0.
TF lacks documentation around parameters. As you know, there are two parameters properties – one right under the properties property and one right under the definitions property. The TF document mentions parameters, but it doesn't clearly say which one. I was guessing it refers to the one under the definitions property, but that actually doesn't work – it throws an Invalid Template error without enough explanation.
UPDATE: I just reverse-engineered this by importing a Logic App instance using terraform import. The parameters property actually points to the one under the properties property. However, it still doesn't work, as a Logic App parameter value can be anything – object, string, integer, etc. – while TF's parameters expect strings only. Also, there is no way to create parameters under the definitions property.
TF only supports two triggers – the HTTP trigger and the Timer trigger. All other triggers have to use the azurerm_logic_app_trigger_custom resource, but it requires the body to be written manually as a JSON object or imported from a file, which can't be parameterised through variables or locals.
TF only supports one action – the HTTP action. All other actions have to use the azurerm_logic_app_action_custom resource, but, due to the same issue as above, it's not that useful.
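For context, a minimal sketch of the part that does work: a blank workflow plus one of the natively supported triggers (the names and request schema are placeholders, and the attribute syntax may differ slightly depending on your Terraform and provider versions):

resource "azurerm_resource_group" "example" {
  name     = "rg-logicapp-example"
  location = "westeurope"
}

# A blank Logic App instance works fine through Terraform.
resource "azurerm_logic_app_workflow" "example" {
  name                = "la-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

# One of the two natively supported triggers: an HTTP request trigger.
resource "azurerm_logic_app_trigger_http_request" "example" {
  name         = "http-trigger"
  logic_app_id = azurerm_logic_app_workflow.example.id

  schema = <<SCHEMA
{
  "type": "object",
  "properties": {
    "message": { "type": "string" }
  }
}
SCHEMA
}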
In conclusion, TF lacks support for parameters, triggers and actions. So, unless you just need a blank Logic App instance, TF wouldn't be an option for Logic Apps. If you still want to create a blank Logic App instance with TF, then I would recommend this approach, using Azure PowerShell or Azure CLI for the rest.
For clarity, you don't use Terraform to create LogicApps. LogicApps are designed in either the Portal or Visual Studio.
Terraform is a deployment/management tool. You can almost surely deploy your LogicApps, and other resources, with Terraform, but they are already created.
Isn't the point of Terraform to stand up resources across various environments just by passing in -var environment=qa to create a QA instance of the logic app? Prod? UAT? marcplaypen? I was hoping to use terraform import to create the Terraform file, then create multiple versions of it. I can do it, but not with any parameters, which breaks one of my 'actions'.
I was using a combo of terraform import and the Logic App code view. Most of my actions are pretty much a combo of copying the JSON block for each action and modifying it based on the first entry of the 'body' of the action generated from terraform import.
Then setting up the dependencies manually based off of runAfter, which tells me what an action is dependent on.
But it fails on parameters, complaining that the only declared parameters for my definition are ''.
I have a data factory that I would like to publish, however I want to delay one of the pipelines from running as it uses a shared resource that isn't quite ready.
If possible I would like to allow the previous pipelines to run and then enable the downstream pipeline when the resource is ready for it.
How can I disable a pipeline so that I can re-enable it at a later time?
Edit your trigger and make sure Activated is set to No. And of course, don't forget to publish your changes!
It's not really possible in ADF directly. However, I think you have a couple of options for dealing with this.
Option 1.
Chain the datasets in the activities to enforce a fake dependency making the second activity wait. This is a bit clunky and requires the provisioning of fake datasets. But could work.
Option 2.
Manage it at a higher level with something like PowerShell.
For example:
Use the following cmdlet to check the status of the first activity and wait, perhaps in some sort of looping process.
Get-AzureRmDataFactoryActivityWindow
Next, use the following cmdlet to pause/unpause the downstream pipeline as required.
Suspend-AzureRmDataFactoryPipeline
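A hedged sketch of how those two cmdlets might be combined (classic AzureRM module, ADF v1; the pipeline, factory and resource group names are placeholders, and the property used to test the window state may need checking against Get-Help):

$rg = "my-resource-group"        # placeholder
$df = "my-data-factory"          # placeholder

# Pause the downstream pipeline while the upstream one is still working.
Suspend-AzureRmDataFactoryPipeline -ResourceGroupName $rg -DataFactoryName $df -Name "DownstreamPipeline"

do {
    Start-Sleep -Seconds 60
    $windows = Get-AzureRmDataFactoryActivityWindow -ResourceGroupName $rg -DataFactoryName $df -PipelineName "UpstreamPipeline"
    $stillRunning = $windows | Where-Object { $_.WindowState -ne "Ready" }   # "Ready" = window completed successfully
} while ($stillRunning)

# Resume the downstream pipeline once the upstream activity windows are done.
Resume-AzureRmDataFactoryPipeline -ResourceGroupName $rg -DataFactoryName $df -Name "DownstreamPipeline"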
Hope this helps.
You mentioned publishing, so if you are publishing through Visual Studio, it is possible to disable a pipeline by setting its "isPaused" property to true in the pipeline's .json configuration file.
(Screenshot: the property for making the pipeline paused)
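A rough sketch of where that property sits in an ADF (v1) pipeline JSON file (everything else here is a placeholder):

{
  "name": "MyDownstreamPipeline",
  "properties": {
    "activities": [ ],
    "start": "2017-01-01T00:00:00Z",
    "end": "2017-12-31T00:00:00Z",
    "isPaused": true
  }
}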
You can disable a pipeline by clicking Monitor & Manage in the Data Factory you are using. Then click on the pipeline, and in the upper left corner you have two options:
Pause: will not terminate the currently running job, but will not start the next one
Terminate: terminates all job instances (and does not start future ones)
(Screenshot: disabling the pipeline in the GUI)
(Tip: paused and terminated pipelines are shown in orange, resumed ones in green)
Use the powershell cmdlet to check the status of the activity
Get-AzureRmDataFactoryActivityWindow
Use the powershell cmdlet to pause/unpause a pipeline as required.
Suspend-AzureRmDataFactoryPipeline
Right click on the pipeline in the "Monitor and Manage" application and select "Pause Pipeline".
In case you're using ADF V2 and your pipeline is scheduled to run using a trigger, check which trigger your pipeline uses. Then go to the Manage tab and click on Author->Triggers. There you will get an option to stop the trigger. Publish the changes once you've stopped the trigger.
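If you prefer to do the same from PowerShell rather than the UI, here is a hedged sketch with the Az.DataFactory module (the names are placeholders):

# Stop the trigger so the pipeline no longer runs on its schedule.
Stop-AzDataFactoryV2Trigger -ResourceGroupName "rg-datafactory" -DataFactoryName "my-adf" -Name "DailyTrigger"

# Later, once the shared resource is ready, start it again.
Start-AzDataFactoryV2Trigger -ResourceGroupName "rg-datafactory" -DataFactoryName "my-adf" -Name "DailyTrigger"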