I have to bind the service URL dynamically for the Swagger import via ARM deployments.
I was trying to format the escaped JSON string in the ARM template as below:
"value": "[format('\"{\"swagger\":\"2.0\",\"host\":\"{0}\"}\"', parameters('ApimServiceUrl'))]",
But I am getting 'Input string was not in a correct format.'
How can I bind the ARM parameter value inside the escaped string?
Something like this should work:
"[concat('\"{\"swagger\":\"2.0\",\"host\":\"', parameters('ApimServiceUrl'), '\"}\"')]"
Also, I think the quotes inside the JSON should be escaped with \\\":
"[concat('\"{\\\"swagger\\\":\\\"2.0\\\",\\\"host\\\":\\\"', parameters('ApimServiceUrl'), '\\\"}\"')]"
locals {
  check_list = "test"
  trimoutput = trim(local.check_list, "")
}
Currently, the trim output value is still "test".
The expected output is just test, without the quotes. I need this value inside the Terraform code itself.
If you want to refer to a literal quote mark in the cutset string, you must escape it with a backslash:
locals {
  check_list = "test"
  trimoutput = trim(local.check_list, "\"")
}
I think you are confusing the value of the variable in memory with the syntax Terraform uses to show values in the CLI output.
If your goal is to use the raw value of a root module output value to pass on to some subsequent process then you can use the terraform output command with its -raw option to tell Terraform to output the value literally, rather than rendering it using normal Terraform language syntax (which for strings includes quotes).
terraform output -raw name_of_output_value
Note that there are no quotes in the string itself. The quotes are just the markers Terraform uses to understand that the characters within are intended to be a string.
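To make the distinction concrete, here is a minimal sketch: trim only removes characters that are actually present at the ends of the string, and a string written as "test" never contained quote characters in the first place:

```hcl
locals {
  plain   = "test"                     # four characters; the quotes are syntax, not data
  quoted  = "\"test\""                 # six characters; the value really contains quotes
  trimmed = trim(local.quoted, "\"")   # strips the leading/trailing quotes, yielding test
}
```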
I have a JSON file. I need to pass contents of this file to a terraform resource.
If I just pass it as it is, then the file has newlines and whitespace. I want to remove that and send compact JSON.
Is there a way to do that?
According to this comment, you can do it via a round-trip:
As of Terraform 0.12, you can produce minified JSON using a round-trip through jsondecode and jsonencode, because jsonencode always produces minimal JSON:
policy = jsonencode(jsondecode(templatefile("filename.tpl", {})))
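The same round-trip can be sketched outside Terraform. In Python, json.loads followed by json.dumps with compact separators minifies in the same way jsonencode does (the document contents here are illustrative):

```python
import json

# Pretty-printed input, as it might come from a template file
raw = """
{
  "Version": "2012-10-17",
  "Statement": []
}
"""

# Decode and re-encode; separators without spaces yield minimal JSON
compact = json.dumps(json.loads(raw), separators=(",", ":"))
print(compact)  # {"Version":"2012-10-17","Statement":[]}
```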
I am reading JSON data from SQL Database in Azure Data Factory.
I have an Azure Data Factory (ADF) pipeline containing a "Lookup" activity, which reads the JSON data from the SQL DB and brings it into the ADF pipeline. Somehow escape characters ("\") get inserted into the JSON data when I look at the output of the Lookup activity in ADF.
For example, the output of the Lookup activity becomes:
{\"resourceType\":\"Sales\",\"id\":\"9i5W6tp-JTd-24252\"
Any idea how to remove the escape characters from the JSON in the pipeline?
Update:
Thanks for the update, Joseph. When I try your steps, it doesn't work for me.
In the Lookup I am reading data from the SQL DB.
This is my Append variable:
After running it, I still see the escape characters:
{
"firstRow": {
"JSONData": "{\"resourceType\":\"counter\",\"id\":\"9i5W6tp-JTd- and more
As we know, '\' is an escape character. In your case, it appears because it is used to escape the double quotes nested inside a pair of double quotes.
For example, the sequence \" represents a single literal " inside a quoted string.
But that doesn't matter: we only need to convert the value from string type to JSON type, and the escape characters are removed automatically.
I've created a test to verify it.
First, I defined an Array type variable.
My Lookup activity's output is as follows:
Then I used an Append Variable activity with the expression @json(activity('Lookup1').output.firstRow.value) to convert it from string type to JSON type.
After I run debug, we can see the result as follows, there is no '\'.
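What @json() does to the Lookup output can be sketched in Python (the payload below is a shortened, illustrative stand-in): parsing the escaped string yields a real object, and the backslashes disappear because they were only ever part of the string notation, not the data:

```python
import json

# The Lookup activity returns the inner JSON as an escaped string value
lookup_output = '{"firstRow": {"JSONData": "{\\"resourceType\\":\\"counter\\",\\"id\\":\\"9i5W6tp\\"}"}}'

outer = json.loads(lookup_output)                   # parse the activity output
inner = json.loads(outer["firstRow"]["JSONData"])   # parse the escaped payload

print(inner["resourceType"])  # counter
```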
I'm trying to extract part of a file name using expressions in ADF expression builder. The part I'm trying to extract is dynamic in size but always appears between "_" and "-".
How can I go about doing this extraction?
Thanks!
Suppose there's a pipeline parameter named fileName. You could use the expression below to extract the value between '_' and '-'; e.g. for the input 'ab_cd-' you would get 'cd' as output:
@{substring(pipeline().parameters.fileName, add(indexOf(pipeline().parameters.fileName, '_'), 1), sub(indexOf(pipeline().parameters.fileName, '-'), add(indexOf(pipeline().parameters.fileName, '_'), 1)))}
The start index is one past the '_', and the length is the position of '-' minus that start, so the expression works for any filename, not just this example.
You may want to check the documentation of Expressions and functions in Azure Data Factory for more details: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions#string-functions
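The index arithmetic in that expression can be checked with a quick Python sketch; the helper name here is made up for illustration:

```python
def between(name: str, start: str = "_", end: str = "-") -> str:
    """Extract the substring between the first start and end markers."""
    begin = name.index(start) + 1           # one past the '_'
    return name[begin:name.index(end)]      # up to (not including) the '-'

print(between("ab_cd-"))         # cd
print(between("report_2024-01")) # 2024
```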
I'm using the terraform template_file data resource to create a file that should be written to the dynamically-created EC2 instance when I apply the stack. In other words, I want this file to be created in the home folder of the newly-created EC2 instance. However, this file contains curly bracket syntax ${}, which terraform is trying to interpolate. How can I escape these curly brackets?
As background, I'm using the cloud-config syntax to write these files.
Ex:
${username} should be written to the file, not interpolated in terraform.
Even when I use the double dollar sign $$, terraform still fails because it can't find the variable:
... failed to render : <template_file>:105,18-26: Unknown variable; There is no variable named "username".
Terraform uses a fairly unique escape for curly braces:
$${ produces a literal ${, without beginning an interpolation sequence.
%%{ produces a literal %{, without beginning a template directive sequence.
Documentation for reference: https://www.terraform.io/docs/language/expressions/strings.html
FYI I ended up working around this by writing the template in another file, then reading it into the Terraform stack with the file function:
data "template_file" "config" {
  template = file("./user_data.tpl")
}
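A minimal sketch of the escape in a cloud-config template, using templatefile() from Terraform 0.12+ (file name and contents here are illustrative):

```hcl
# user_data.tpl -- $${username} survives rendering as a literal ${username}
# for cloud-init to substitute later:
#   runcmd:
#     - echo "current user is $${username}"

locals {
  user_data = templatefile("${path.module}/user_data.tpl", {})
}
```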
For example, if you would like to include {{_payload.ref}} in Tags, as in my case, I did it this way:
tags = ["CRITICAL", "\u007B\u007B_payload.ref\u007D\u007D"]
Here \uNNNN is a Unicode character from the Basic Multilingual Plane (NNNN is four hex digits).
Reference page: https://www.terraform.io/language/expressions/strings
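As a sketch, the two escape sequences simply evaluate to the brace characters, so the tag ends up as the literal {{_payload.ref}}:

```hcl
locals {
  # \u007B is { and \u007D is }, so this evaluates to "{{_payload.ref}}"
  payload_ref = "\u007B\u007B_payload.ref\u007D\u007D"
}
```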