Openstack: Nested Stack creation via dashboard - openstack-horizon

I have a heat template containing nested stack resources, which I am able to deploy successfully via the heat-api. Can the same YAML (nested stack creation) be used via the dashboard?

No.
When launching a stack with nested Heat templates, the Heat service looks for the nested templates in the current directory. If we launch a stack with nested templates through the dashboard, it fails with the error "Unable to find the nested file". So stacks with nested templates have to be launched through the CLI or the API.
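A minimal sketch of the CLI route, assuming a main template that references a nested template by a relative path (file and stack names here are placeholders):
# main.yaml contains a resource such as "type: nested.yaml";
# the CLI resolves nested.yaml relative to the current directory and sends it along with the stack
openstack stack create -t main.yaml my-nested-stack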

Yes, it is possible to upload a YAML file, or a URL referring to a YAML file. You can also upload an environment file in the Horizon dashboard. In my case I create a cluster of similar computes with a single file.

Related

Testing ARM-Templates with ARM-TTK

I am trying to use ARM-TTK to unit test my ARM templates and to ensure that the templates follow a uniform style. I am only running a few of the tests.
We are using Azure Repos as our VCS
I have incorporated this in my AzDO pipeline as a pre-PR-merge task in the form of a branch policy, so that before a PR is merged, these tests run and validate all the templates that are pushed to the main branch.
But the problem is that the tests return false positives even though there are no issues with the JSON files.
According to this ARM-TTK link, it seems there has to be one azuredeploy.json or maintemplate.json, and all the other files are tested as linked templates.
I have JSON files with other names pertaining to the function of the template, like win_vm_deploy.json, function_app-deploy.json, etc.
It is not possible for me to have all the files as linked templates to the azuredeploy.json or maintemplate.json as mentioned in the URL.
I would also like to run the selected tests against the files loaded in the repo automatically and not specify a particular file to run the tests against.
So does that mean that in my situation I won't be able to use ARM-TTK and its unit tests?
What is the best way to check the templates in my folder with just the unit tests I choose from ARM-TTK, without having to keep a main template and treat the other templates as linked templates?
Appreciate any help
When several people are working on a complex deployment, it is recommended to use separate JSON files linked to an azureDeploy.json or a mainTemplate.json file, but it is not mandatory to do so in every case.
To test one file in that folder, add the -File parameter. However, the folder must still have a main template named azuredeploy.json or maintemplate.json. In your case, each file would have to be passed explicitly from a script; there is no shortcut that discovers them automatically. See the sketch below.
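For reference, a rough sketch with the arm-ttk PowerShell module (the folder layout, file name and test names are placeholders; the folder must still contain an azuredeploy.json or maintemplate.json):
# import the module from a local clone of the Azure/arm-ttk repository
Import-Module .\arm-ttk\arm-ttk.psd1
# run only the selected tests against every template in the folder
Test-AzTemplate -TemplatePath .\templates -Test "Parameters Must Be Referenced", "Outputs Must Not Contain Secrets"
# restrict the run to a single file inside that folder
Test-AzTemplate -TemplatePath .\templates -File win_vm_deploy.json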
You can customize the default tests or even create your own. You can implement your own set of rules by authoring custom tests. A custom test needs to be placed in the correct directory:
/arm-ttk/testcases/deploymentTemplate
You can check this documentation for more information.
Also try these tasks for integration with Azure Pipelines.

How to update ADF Pipeline level parameters during CICD

Being a novice to ADF CI/CD, I am currently exploring how we can update pipeline-scoped parameters when we deploy a pipeline from one environment to another.
Here is the detailed scenario -
I have a simple ADF pipeline with a copy activity moving files from one blob container to another
Example - below there is a copy activity, and the pipeline has two parameters named:
1- SourceBlobContainer
2- SinkBlobContainer
with their default values.
Here is how the dataset is configured to consume these Pipeline scoped parameters.
Since this is the development environment, the default values are fine. But the test environment will have containers with altogether different names (like "TestSourceBlob" & "TestSinkBlob").
Having said that, the CI/CD process should handle this by updating the default values of these parameters during deployment.
Reading the documents, I found nothing that covers such a use case.
Here are some links which i referred -
http://datanrg.blogspot.com/2019/02/continuous-integration-and-delivery.html
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
Thoughts on how to handle this will be much appreciated. :-)
There is another approach, as opposed to using the ARM templates located in the 'ADF_Publish' branch.
Many companies leverage that workaround and it works great.
I have spent several days building a brand new PowerShell module that publishes the whole Azure Data Factory code from your master branch or directly from your local machine. The module resolves the pains that have existed in every other solution so far, including:
replacing any property in a JSON file (ADF object),
deploying objects in an appropriate order,
deploying only a subset of objects,
deleting objects that no longer exist in the source,
stopping/starting triggers, etc.
The module is publicly available in PS Gallery: azure.datafactory.tools
Source code and full documentation are on GitHub here.
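If I recall the module's documented entry point correctly, usage looks roughly like this (resource names, folder and location are placeholders):
# publish all ADF objects from a local folder, e.g. a checkout of the master branch
Install-Module -Name azure.datafactory.tools -Scope CurrentUser
Import-Module azure.datafactory.tools
Publish-AdfV2FromJson -RootFolder "C:\src\my-adf" -ResourceGroupName "rg-adf-test" -DataFactoryName "adf-test" -Location "North Europe"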
Let me know if you have any questions or concerns.
There is a "new" way to do ci/cd for ADF that should handle this exact use case. What I typically do is add global parameters and then reference those everywhere (in your case from the pipeline parameters). Then in your build you can override the global parameters with the values that you want. Here are some links to references that I used to get this working.
The "new" ci/cd method following something like what is outlined here Azure Data Factory CI-CD made simple: Building and deploying ARM templates with Azure DevOps YAML Pipelines. If you have followed this, something like this should work in your yaml:
overrideParameters: '-dataFactory_properties_globalParameters_environment_value "new value here"'
Here is an article that goes into more detail on the overrideParameters: ADF Release - Set global params during deployment
Here is a reference on global parameters and how to get them exposed to your ci/cd pipeline: Global parameters in Azure Data Factory
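Outside of a pipeline task, the same override can be done when deploying the exported factory ARM template with PowerShell. A rough sketch, assuming the default file names produced by the ADF export and the parameter name from the snippet above:
# deploy the exported factory template and override the exposed global parameter
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-adf-test" `
    -TemplateFile ".\ARMTemplateForFactory.json" `
    -TemplateParameterFile ".\ARMTemplateParametersForFactory.json" `
    -dataFactory_properties_globalParameters_environment_value "test"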

How to create ARM linked template when the linked templates are stored locally not in the cloud

Having a main ARM template and a linked ARM template, is it possible to deploy the resources from VS?
When I try that, it says "The language expression property 'templateLink' doesn't exist, available properties are 'template, parameters, mode, provisioningState'".
Looking it up I found answers that indicate that you have to upload the linked templates somewhere in the cloud but to me it is stupid not being able to do all of your work, including deployment, from VS.
Is there a way to deploy from VS or from a command prompt with all the templates existing on the local drive?
No, linked templates have to be uploaded to some place the ARM engine can fetch them from. Or you can just "type" them inline, but that is pretty tricky due to how inline templates work compared to regular linked templates (hint: don't really use this approach).
What I usually do is upload all the templates with PowerShell and reference them from each other with the uri() function, as sketched below.
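A rough sketch of that approach, assuming the main template accepts the staging location via the common _artifactsLocation / _artifactsLocationSasToken parameters (all names and paths are placeholders):
# stage the linked templates in a storage container
$ctx = (Get-AzStorageAccount -ResourceGroupName "rg-templates" -Name "mytemplatestore").Context
Set-AzStorageBlobContent -Container "templates" -File ".\linked\nested.json" -Context $ctx -Force
$sas = New-AzStorageContainerSASToken -Name "templates" -Permission r -ExpiryTime (Get-Date).AddHours(2) -Context $ctx
# deploy the main template, pointing it at the uploaded linked templates
New-AzResourceGroupDeployment -ResourceGroupName "rg-app" -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{ _artifactsLocation = "https://mytemplatestore.blob.core.windows.net/templates"; _artifactsLocationSasToken = $sas }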
You can deploy the template completely from VS; it will upload all the linked templates to the cloud and then do the deployment.
See the answer to this question.

Azure Pipelines (DevOps): Custom Consumable Statistic/Metric

I have a build on Azure Pipelines, and one of the steps provides a code metric that I would like to be consumable after the build is done. Ideally, this would be in the form of a badge like this (with text on the left and the metric as a number on the right). I'd like to put such a badge on the README of the repository to make this metric visible on a per-build basis.
Azure DevOps does have a REST API that one can use to access built-in aspects of a given build. But as far as I can tell there's no way to expose a custom statistic or value that is generated or provided during a build.
(The equivalent in TeamCity would be outputting ##teamcity[buildStatisticValue key='My Custom Metric' value='123'] via Console.WriteLine() from a simple C# program, that TeamCity can then consume and use/make available.)
Anyone have experience with this?
One option is to use a combination of adding a build tag with a logging command:
##vso[build.addbuildtag]"My Custom Metric.123"
Then use the Tags - Get Build Tags API.
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/tags?api-version=5.0
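Put together, a rough sketch in PowerShell (organization, project, build id and PAT are placeholders, and the tag text is just a convention):
# in a pipeline step: tag the build with the metric
Write-Host "##vso[build.addbuildtag]My Custom Metric.123"
# later: read the tags back through the REST API
$pat  = "<personal-access-token>"
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$url  = "https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/tags?api-version=5.0"
(Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Basic $auth" }).value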

Make Azure Logic Apps with Terraform, what are the limitations?

From what I can understand, one can build Logic Apps with Terraform. However, the docs are still not very good, and it looks like this feature is pretty new.
What are the limitations when it comes to TF and Azure Logic Apps? Are there any?
I want to build two apps: one that is triggered every month and another that is triggered by an HTTPS request. I then want these to run two Python scripts, and I want the latter one to return the result of its script to the client that made the HTTPS call.
Is this possible to automate in Terraform? At this moment there are very few examples and little documentation on this. Any comment or tip is helpful and greeted with open arms!
You can create a blank Logic App instance through Terraform (TF). But if you want to add triggers and actions, I wouldn't recommend using TF at all, as of provider version 1.20.0.
TF lacks documentation around parameters. As you know, there are two parameters properties – one right under the properties property and one right under the definitions property. This document mentions parameters, but it doesn't clearly say which one. I'm guessing it refers to the one under the definitions property, but that actually doesn't work – it throws an Invalid Template error without enough explanation.
UPDATE: I just reverse-engineered it by importing a Logic App instance using terraform import. The parameters property actually points to the one under the properties property. However, it still doesn't work, as a Logic App's parameter value can be anything – object, string, integer, etc. – while TF's parameter expects a string only. Also, there is no way to create parameters under the definitions property.
TF only supports two triggers – the HTTP trigger and the Timer trigger. All other triggers have to use the azurerm_logic_app_trigger_custom resource, but it requires the body to be written manually as a JSON object or imported from a file, which can't be parameterised through variables or locals.
TF only supports one action – the HTTP action. All other actions have to use the azurerm_logic_app_action_custom resource, but, because of the same issue as above, it's not that useful.
In conclusion, TF lacks support for parameters, triggers and actions. So, unless you just create a blank Logic App instance, TF isn't really an option for Logic Apps. If you still want to create a blank Logic App instance with TF, then I would recommend this approach, using Azure PowerShell or Azure CLI to push the workflow definition afterwards; a rough sketch follows.
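For instance, after Terraform has created the blank workflow, something along these lines could push the full definition with the Az.LogicApp PowerShell module (resource and file names are placeholders, and this exact parameter set is my assumption):
# update the existing Logic App with a complete workflow definition exported from the designer
Set-AzLogicApp -ResourceGroupName "rg-logicapps" `
               -Name "my-logic-app" `
               -DefinitionFilePath ".\workflow-definition.json" `
               -ParameterFilePath ".\workflow-parameters.json" `
               -Force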
For clarity, you don't use Terraform to create Logic Apps. Logic Apps are designed in either the Portal or Visual Studio.
Terraform is a deployment/management tool. You can almost surely deploy your Logic Apps, and other resources, with Terraform, but they have already been designed elsewhere.
Isn't the point of Terraform to stand up resources across various environments just by passing in -var environment=qa to create a QA instance of the logic app? prod? uat? marcplaypen? I was hoping to use terraform import to create the Terraform file, then create multiple versions of it. I can do it, but not with any parameters, which breaks one of my 'actions'.
I was using a combo of terraform import and the Logic App code view. Most of my actions are pretty much a combo of copying the JSON block for each action and modifying it based on the first entry of the 'body' of the action generated from terraform import.
Then setting up the dependencies manually based off of runAfter, which tells me what an action is dependent on.
But it fails on parameters, complaining that the only declared parameters for my definition are ''.
