Is there a way to get the value returned by uniqueString in PowerShell? I am creating the bulk of my environment using ARM templates, but I still need to run PowerShell for certain things. PowerShell needs to know the resource name suffixes generated by uniqueString. Currently I have these values hard-coded.
Also, the value returned by uniqueString is excessive and severely limits resource names; e.g. a prefix like espstorage is too long to use with uniqueString. I am considering replacing uniqueString with a CRC32 or .NET string-hash value in my templates -- since I end up hard-coding the values in PowerShell anyway. But from all the examples, uniqueString appears to be the "correct" way.
I had a similar problem with subscriptionId. My resolution is at post: https://stackoverflow.com/questions/56195642/is-there-a-way-to-get-the-subscriptionid-used-in-task-azure-resource-group-dep
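As a sketch of the CRC32 idea mentioned above (a judgement call: CRC32 has far weaker collision resistance than the 13-character hash uniqueString produces, so collisions across seeds are more likely), a deterministic short suffix could be computed like this in Go. The seed string is a hypothetical stand-in for whatever you would pass to uniqueString, e.g. the resource group id:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"strconv"
)

// shortSuffix returns a deterministic lowercase base-36 suffix
// (at most 7 characters for a 32-bit checksum) derived from seed.
// The same seed always yields the same suffix, mirroring how
// uniqueString behaves, but with a much smaller hash space.
func shortSuffix(seed string) string {
	sum := crc32.ChecksumIEEE([]byte(seed))
	return strconv.FormatUint(uint64(sum), 36)
}

func main() {
	// Hypothetical seed; in practice you might use the resource group id
	// so the suffix matches across template and script deployments.
	seed := "/subscriptions/xxxx/resourceGroups/rg-esp"
	fmt.Println("espstorage" + shortSuffix(seed))
}
```

Because the same function can be evaluated anywhere, the suffix no longer needs to be hard-coded in the PowerShell side; both sides just hash the same seed.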
I am currently working on trying to manage a resource with Terraform that has no delete method, and Terraform insists there must be one.
1 error occurred:
* resource xray_db_sync_time: Delete must be implemented
The API I am trying to implement is here, and as you can see, there is no "Delete". You can't remove this sync timer. I am open to ideas. The code being worked on is here
This is a situation where you, as the provider developer, will need to make a judgement call about how best to handle this mismatch between Terraform's typical resource instance lifecycle and the actual lifecycle of the object type you're intending to represent.
Broadly speaking, there are two options:
You could make the Delete function immediately return an error, explaining that this object is not deleteable. This could be an appropriate approach if the user might be surprised or harmed by the object continuing to exist even though Terraform has no record of it. I would informally call this the "explicit approach", because it makes the user aware that something unusual is happening and requires them to explicitly confirm that they want Terraform to just "forget" the object rather than destroying it, using terraform state rm.
You could make the Delete function just call d.SetId("") (indicating to the SDK that the object no longer exists) and return successfully without taking any other action. I'll call this the "implicit approach", because a user not paying close attention may be fooled into thinking the object was actually deleted, due to the provider not giving any feedback that it just silently discarded the object.
Both of these options have advantages and disadvantages, and so ultimately the final decision is up to you. Terraform and its SDK will support either strategy, but you will need to implement some sort of Delete function, even if it doesn't do anything, to satisfy the API contract.
You are also missing a Create for this API endpoint. With only Update and Read supported, you will need to extend Create to be the same as Update except for additionally adding the resource to the state. You can easily invoke the Update function within the Create function for this behavior.
For the Delete function, this should actually be easier than you may expect. The Terraform provider SDKv2 and your resource code should automatically Read the resource prior to attempting the delete, to verify that it actually exists (this probably requires no extra effort on your part, though I have not seen the code). Then you would need to remove the resource from the state with d.SetId(""), where d is of type *schema.ResourceData. However, this is also invoked automatically, provided Delete returns no errors. Therefore, you could define a Delete that merely returns warnings or errors of an appropriate Go type; if you do not need that (and you probably do not, considering the minimal functionality), then you could probably just return nil. Part of this is speculation based on what your code probably looks like, but in general this all holds true.
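To illustrate the pattern only (this uses a minimal stand-in type rather than the real SDKv2 *schema.ResourceData, since the actual resource code is not shown here, and the id value is invented), Create can delegate to Update and then record the object in state, while the "implicit approach" Delete simply clears the id:

```go
package main

import "fmt"

// resourceData is a minimal stand-in for the SDK's *schema.ResourceData,
// just enough to show the lifecycle pattern discussed above.
type resourceData struct{ id string }

func (d *resourceData) SetId(id string) { d.id = id }

// update would push the sync time to the xray API (details elided).
func update(d *resourceData) error {
	// ... call the API's update endpoint here ...
	return nil
}

// create delegates to update, then records the object in state with a
// synthetic id, since the API has no real create endpoint.
func create(d *resourceData) error {
	if err := update(d); err != nil {
		return err
	}
	d.SetId("db_sync_time")
	return nil
}

// deleteNoop is the "implicit approach": clear the id so Terraform
// forgets the object, without calling the API (which has no delete).
func deleteNoop(d *resourceData) error {
	d.SetId("")
	return nil
}

func main() {
	d := &resourceData{}
	_ = create(d)
	fmt.Println("id after create:", d.id)
	_ = deleteNoop(d)
	fmt.Println("id after delete:", d.id)
}
```

In real provider code the functions would take the SDK's context, resource data, and provider meta arguments and return diag.Diagnostics, but the shape of the delegation is the same.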
I deploy 30 SQL databases via copyIndex() as sub-deployments of the main deployment, and I want to be able to reference the outputs of those dynamic deployments when kicking off another deployment. Once all the databases are deployed, I want to add Azure Monitor metric rules to the DBs, and need their resourceIds (the output of the DB deploy).
The answer here sounds exactly like what I'm trying to do, and I understand that each deployment is chained to have the output of the previous deploy. But then if I want to use the chained up "state" output, is it the very last element in the array that has the full chain? If so is the best way to reference that to just build up the name of the deployment and append on the length of the copyIndex array?
reference(concat('reference', length(variables('types')))).outputs.state.value
As so?
Yes, you basically need to construct the name of the deployment:
referenceX
where X is the number of the last deployment; you can use the length() function for that, exactly as you suggest.
The above will work only if you accumulate the output from all the intermediate steps, obviously.
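For example (the deployment-name prefix and the types variable follow the question; whether you need length(...) or sub(length(variables('types')), 1) depends on whether your chained deployment names start at 1 or 0), the accumulated state can be surfaced from the last deployment like this:

```json
"outputs": {
  "allDatabaseIds": {
    "type": "array",
    "value": "[reference(concat('reference', length(variables('types')))).outputs.state.value]"
  }
}
```

That output (the full accumulated array of resourceIds) can then be passed as a parameter into the metric-alert deployment.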
How do I store and reuse terraform interpolation result within resources that do not expose them as output?
example: In aws_ebs_volume , I am calculating my volume size using:
size = "${lookup(merge(var.default_ebs_vol_sizes, var.ebs_vol_sizes),
  var.tag_disk_location[var.extra_ebs_volumes[count.index % length(var.extra_ebs_volumes)]])}"
Now I need to reuse the same size for calculating the cost tags in the same resource as well as in corresponding ec2 resource (in same module). How do I do this without copy pasting the entire formula?
PS: I have come across this use case in multiple scenarios, so the above is just one case where I need to reuse interpolated results. Getting the interpolated result via the corresponding data source is one way out in this case, but I am looking for a more straightforward solution.
This is now possible using local values, available from Terraform 0.10.3 onwards.
https://www.terraform.io/docs/configuration/locals.html
Local values assign a name to an expression, that can then be used
multiple times within a module.
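Applied to the example above (variable names taken from the question), one caveat is that count.index is only valid inside a resource block, not in locals, so only the count-independent part of the expression can be hoisted; a sketch along those lines:

```hcl
locals {
  # Merge the default and override size maps once, instead of
  # repeating the merge() in every expression that needs it.
  ebs_vol_sizes = "${merge(var.default_ebs_vol_sizes, var.ebs_vol_sizes)}"
}

resource "aws_ebs_volume" "this" {
  count = "${length(var.extra_ebs_volumes)}"

  # The per-instance lookup still lives in the resource because it
  # depends on count.index, but it now references the shared local.
  size = "${lookup(local.ebs_vol_sizes,
    var.tag_disk_location[var.extra_ebs_volumes[count.index % length(var.extra_ebs_volumes)]])}"

  # availability_zone, tags, etc. elided
}
```

The same local.ebs_vol_sizes can be referenced from the corresponding aws_instance resource and from any cost-tag expressions in the module, removing the copy-pasted merge.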
I am trying to get my head around type providers in F# and what they can be used for. I have the following problem:
I have a series of JSON objects in Azure Blob Storage stored as follows:
container/YYYY/MM/DD/file.json
I can easily navigate to a specific file for a given date using a type provider. For example, I can access the JSON object as a string for the 5th of May as
type Azure = AzureTypeProvider<"ConnectionString">
let containers = Azure.Containers.``container``.``2017/``.``05/``.``05/``.``file.json``.Read()
How can I take a user input date string, say "2017-05-05" and get the corresponding JSON object in a type safe way? Should I even be using type providers?
You're coming up against a common "issue" with the nature of many TPs, particularly ones that offer a schema against actual data - because it blends the line between data and types, you need to be aware of when you're working in a mode that works well with static types (i.e. you know at compile time the schema of the blob containers you're working with), or working in a way that's inherently dynamic.
You have a few options here.
Fall back to the "native" .NET SDK. Every blob / container has associated AsCloudBlob() or AsCloudContainer() methods, so you can use the TP for the bits that you do know, e.g. the container name, maybe top-level folders etc., and then fall back to the native SDK for the weakly-typed bits.
Since the latest release of the TP, there's now support for programmatic access in a couple of ways:
You can use indexers to get an unsafe handle to a blob e.g. let blob = Azure.Containers.container.["2017/05/05/file.json"]. There's no guarantee that the blob exists, so you need to check that yourself etc.
You can use the TryGetBlockBlob() method, which returns a blob option async - behind the scenes, it will do a check if the blob exists or not, and then return either None, or Some blob.
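As a sketch of the second option (beyond the container name and Read() shown in the question, the member names here are assumptions about the TP's API and may differ between versions):

```fsharp
open System

type Azure = AzureTypeProvider<"ConnectionString">

// Turn a user-supplied date string into the container's path convention.
let blobPathFor (input : string) =
    let date = DateTime.Parse input              // e.g. "2017-05-05"
    date.ToString "yyyy/MM/dd" + "/file.json"

// TryGetBlockBlob performs the existence check and yields an option
// asynchronously, so the dynamic lookup stays explicit about absence.
let tryReadJson (input : string) = async {
    let! blob = Azure.Containers.``container``.TryGetBlockBlob(blobPathFor input)
    return blob |> Option.map (fun b -> b.Read())
}
```

The date parsing is inherently dynamic, so a None result (bad date, missing blob) has to be handled at runtime either way; the TP just keeps that handling explicit.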
You can see more examples of all of these alternatives here.
If you know up-front the full path you're working with (at compile time - perhaps some well-know paths etc.), you can also use the offline support in the TP to create an explicit blob schema at compile time without needing a real storage account.
Is it possible to count the number of items in an array based upon a certain condition in Resource Templates? Similar to how we can use 'Where-Object' within PowerShell. Seems that the 'length' function is only able to count the total number of items.
No, you cannot do that, unless you hack your way through using nested templates. And that is only possible if you want to compare against a specific object, and you would probably need at least two levels of indirection.
I would generally advise against that, unless there's no other option.
But if you want to do that, you would need this function, nested deployments, and the ARM template way of doing conditionals, and I would argue that you would also need a state parameter in the nested templates to share state between them.
The other answer is quite old and now outdated.
The ARM template function length(arg1) returns the number of elements in an array, characters in a string, or root-level properties in an object.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-array#length
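For the conditional count asked about, recent versions of the template language also include lambda-based functions such as filter(), documented alongside length(); combining the two gives a Where-Object-style count. A sketch (the parameter name and the 'premium' value are invented for illustration):

```json
"outputs": {
  "premiumCount": {
    "type": "int",
    "value": "[length(filter(parameters('skus'), lambda('s', equals(lambdaVariables('s'), 'premium'))))]"
  }
}
```

filter() keeps only the elements for which the lambda evaluates to true, and length() then counts the filtered array.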