I'm wondering what the best way is to have some logic inside a DSC configuration without resorting to writing a custom DSC resource. An example is below.
I need to provide the Contents parameter for the built-in DSC resource File. I can't put a function inside the Configuration to return that value, and I don't seem to be able to put logic inside the Contents property either. What would be a possible approach for this situation?
```
$filePath = Join-Path -Path $env:ProgramData -ChildPath "docker\config\daemon.json"

if (Test-Path $filePath)
{
    $jsonConfig = Get-Content $filePath | ConvertFrom-Json
    $jsonConfig.graph = $graphLocation
    $jsonConfig | ConvertTo-Json
}
else
{
    @{ graph = "$graphLocation" } | ConvertTo-Json
}
```
If you need the logic to run as part of the DSC job then you will need to resort to building a custom DSC resource. Remember all the DSC code will get compiled into a MOF file, and MOF files cannot run arbitrary PowerShell code. Thus inline functions will not be available during the job.
However, you can have logic that runs during the compilation phase. For instance, computing a property value that will be assigned to a DSC resource property.
Configuration is ultimately just a function that takes a name and a script block as parameters, and it is valid in PowerShell to define a nested function, although it must be defined in the function scope prior to it being used.
```
Configuration MyConfig {
    function ComplexLogic() {
        "It works!"
    }

    Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

    Node localhost {
        Log Example {
            Message = ComplexLogic
        }
    }
}
```
You could also run a plain PowerShell script that computes values and then passes the values as arguments to the DSC configuration.
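For example, a minimal sketch of that approach, reusing the JSON logic from the question. The script name, the DockerDaemonConfig/Content names, the default $graphLocation value, and the output path are assumptions for illustration, not part of the original:
```
# build-docker-config.ps1 (sketch): compute the daemon.json content first,
# then pass it into the configuration as a parameter.
param(
    [string]$graphLocation = 'D:\docker-graph'   # assumed example value
)

$filePath = Join-Path -Path $env:ProgramData -ChildPath 'docker\config\daemon.json'

$daemonJson = if (Test-Path $filePath) {
    $jsonConfig = Get-Content $filePath -Raw | ConvertFrom-Json
    $jsonConfig.graph = $graphLocation
    $jsonConfig | ConvertTo-Json
}
else {
    @{ graph = $graphLocation } | ConvertTo-Json
}

Configuration DockerDaemonConfig {
    param([string]$Content)

    Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

    Node localhost {
        File DaemonJson {
            DestinationPath = "$env:ProgramData\docker\config\daemon.json"
            Contents        = $Content
            Ensure          = 'Present'
        }
    }
}

# Compile the MOF; the computed JSON is baked in at compilation time.
DockerDaemonConfig -Content $daemonJson -OutputPath .\MOF
```
Because the value is computed before compilation, nothing in the MOF itself needs to run arbitrary PowerShell during the job.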
We have a requirement in one of our Terraform scripts to execute a Python script, generate an output file, and read that file. We are trying to achieve this with the method below:
resource "null_resource" "get_data_plane_ip" {
provisioner "local-exec" {
command = "python myscript.py > output.json"
}
triggers = {
always_run = "${timestamp()}"
}
}
locals {
var1 = jsondecode(file("output.json"))
}
The problem with this approach is that the locals block gets evaluated before the Python script runs via the local-exec provisioner, so terraform apply fails. We also can't use depends_on in a locals block to specify the ordering.
Any suggestion on how we can make sure the locals block is evaluated only after the local-exec resource has run?
You could potentially use another null_resource resource in this situation.
For example, take the following configuration:
resource "null_resource" "get_data_plane_ip" {
provisioner "local-exec" {
command = "python myscript.py > output.json"
}
triggers = {
always_run = timestamp()
}
}
resource "null_resource" "dependent" {
triggers = {
contents = file("output.json")
}
depends_on = [null_resource.get_data_plane_ip]
}
locals {
var1 = jsondecode(null_resource.b.triggers.contents)
}
output "var1" {
value = local.var1
}
The null_resource.dependent resource has an explicit dependency on the null_resource.get_data_plane_ip resource. Therefore, it will wait for the null_resource.get_data_plane_ip resource to be "created".
Since the triggers argument is of type map(string), you can use the file function to read the contents of the output.json file, which returns a string.
You can then create a local variable to invoke jsondecode on the triggers attribute of the null_resource.dependent resource.
The documentation for the file function says the following:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
There are a few different things to note in this paragraph. The first is that the documentation recommends against doing what you are doing except as a last resort; I don't know if there's another way to get the result you were hoping for, so I'll leave you to think about that part and focus on the other part...
The local_file data source is a data resource type that just reads a file from local disk. Because it appears as a resource rather than as a language operator/function, it'll be a node in Terraform's dependency graph just like your null_resource.get_data_plane_ip, and so it can depend on that resource.
The following shows one way to write that:
resource "null_resource" "get_data_plane_ip" {
triggers = {
temp_filename = "${path.module}/output.json"
}
provisioner "local-exec" {
command = "python myscript.py >'${self.triggers.temp_filename}'"
}
}
data "local_file" "example" {
filename = null_resource.get_data_plane_ip.triggers.temp_filename
}
locals {
var1 = jsondecode(data.local_file.example.content)
}
Note that this sort of design will make your Terraform configuration non-converging, which is to say that you can never reach a point where terraform apply will report that there are no changes to apply. That's often undesirable, because a key advantage of the declarative approach is that you can know you've reached a desired state and thus your remote system matches your configuration. If possible, I'd suggest trying to find an alternative design that can converge after selecting a particular IP address, though that'd typically mean representing that "data plane IP" as a resource itself, which may require writing a custom provider if you're interacting with a bespoke system.
I've not used it myself so I can't recommend it, but I notice that there's a community provider in the registry which offers a shell_script_resource resource type, which might be useful as a compromise between running commands in provisioners and writing a whole new provider. It seems like it allows you to write a script for create where the result would be retained as part of the resource state, and thus you could refer to it from other resources.
I have a module a in Terraform which creates a text file, and I need to use that text file in another module b. I am using locals to pull the content of that text file in module b, like below:
```
locals {
  ports = split("\n", file("ports.txt"))
}
```
But Terraform expects this file to be present at the very start and throws the error below:
Invalid value for "path" parameter: no file exists at
path/ports.txt; this function works only with files
that are distributed as part of the configuration source code, so if this file
will be created by a resource in this configuration you must instead obtain
this result from an attribute of that resource.
What am I missing here? Any help on this would be appreciated. Is there any depends_on for locals? How can I make this work?
Modules are called from within other modules using module blocks, and most arguments correspond to input variables defined by the module. To reference a value from one module, you need to declare it as an output in that module; then you can use that output value from other modules.
For example, suppose you have a text file in module a.
.tf file in module a
output "textfile" {
value = file("D:\\Terraform\\modules\\a\\ports.txt")
}
.tf file in module b
variable "externalFile" {
}
locals {
ports = split("\n", var.externalFile)
}
# output "b_test" {
# value = local.ports
# }
.tf file in the root module
module "a" {
source = "./modules/a"
}
module "b" {
source = "./modules/b"
externalFile = module.a.textfile
depends_on = [module.a]
}
# output "module_b_output" {
# value = module.b.b_test
# }
For more reference, you could read https://www.terraform.io/docs/language/modules/syntax.html#accessing-module-output-values
As the error message reports, the file function is only for files that are included on disk as part of your configuration, not for files generated dynamically during the apply phase.
I would typically suggest avoiding writing files to local disk as part of a Terraform configuration, because one of Terraform's main assumptions is that any objects you manage with Terraform will persist from one run to the next, but that could only be true for a local file if you always run Terraform in the same directory on the same computer, or if you use some other more complex approach such as a network filesystem. However, since you didn't mention why you are writing a file to disk I'll assume that this is a hard requirement and make a suggestion about how to do it, even though I would consider it a last resort.
The hashicorp/local provider includes a data source called local_file which will read a file from disk in a similar way to how a more typical data source might read from a remote API endpoint. In particular, it will respect any dependencies reflected in its configuration and defer reading the file until the apply step if needed.
You could coordinate this between modules then by making the output value which returns the filename also depend on whichever resource is responsible for creating the file. For example, if the file were created using a provisioner attached to an aws_instance resource then you could write something like this inside the module:
output "filename" {
value = "D:\\Terraform\\modules\\a\\ports.txt"
depends_on = [aws_instance.example]
}
Then you can pass that value from one module to the other, which will carry with it the implicit dependency on aws_instance.example to make sure the file is actually created first:
module "a" {
source = "./modules/a"
}
module "b" {
source = "./modules/b"
filename = module.a.filename
}
Then finally, inside the module, declare that input variable and use it as part of the configuration for a local_file data resource:
variable "filename" {
type = string
}
data "local_file" "example" {
filename = var.filename
}
Elsewhere in your second module you can then use data.local_file.example.content to get the contents of that file.
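For instance, tying that back to the ports.txt example from the question, module b could do something like this (a sketch reusing the names from the snippets above):
```
locals {
  # data.local_file.example is the data resource declared above;
  # split on newlines as the original locals block did.
  ports = split("\n", data.local_file.example.content)
}
```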
Notice that dependencies propagate automatically aside from the explicit depends_on in the output "filename" block. It's a good practice for a module to encapsulate its own behaviors so that everything needed for an output value to be useful has already happened by the time a caller uses it, because then the rest of your configuration will just get the correct behavior by default without needing any additional depends_on annotations.
But if there is any way you can return the data inside that ports.txt file directly from the first module instead, without writing it to disk at all, I would recommend doing that as a more robust and less complex approach.
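As a sketch of that more direct approach, module a could expose the values themselves rather than a file. The output/variable names and the port values below are placeholders, not part of the original modules:
```
# modules/a/outputs.tf
output "ports" {
  value = ["8080", "8443"]   # or however module a computes these values
}

# root module
module "a" {
  source = "./modules/a"
}

module "b" {
  source = "./modules/b"
  ports  = module.a.ports
}

# modules/b/variables.tf
variable "ports" {
  type = list(string)
}
```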
I want to initially create a resource using Terraform, but if the resource gets later deleted outside of TF - e.g. manually by a user - I do not want terraform to re-create it. Is this possible?
In my case the resource is a blob on an Azure Blob storage. I tried using ignore_changes = all but that didn't help. Every time I ran terraform apply, it would recreate the blob.
resource "azurerm_storage_blob" "test" {
name = "myfile.txt"
storage_account_name = azurerm_storage_account.deployment.name
storage_container_name = azurerm_storage_container.deployment.name
type = "Block"
source_content = "test"
lifecycle {
ignore_changes = all
}
}
The requirement you've stated is not supported by Terraform directly. To achieve it you will need to either implement something completely outside of Terraform or use Terraform as part of some custom scripting written by you to perform a few separate Terraform steps.
If you want to implement it by wrapping Terraform then I will describe one possible way to do it, although there are various other variants of this that would get a similar effect.
My idea for implementing it would be to implement a sort of "bootstrapping mode" which your custom script can enable only for initial creation, but then for subsequent work you would not use the bootstrapping mode. Bootstrapping mode would be a combination of an input variable to activate it and an extra step after using it.
variable "bootstrap" {
type = bool
default = false
description = "Do not use this directly. Only for use by the bootstrap script."
}
resource "azurerm_storage_blob" "test" {
count = var.bootstrap ? 1 : 0
name = "myfile.txt"
storage_account_name = azurerm_storage_account.deployment.name
storage_container_name = azurerm_storage_container.deployment.name
type = "Block"
source_content = "test"
}
This alone would not be sufficient because normally if you were to run Terraform once with -var="bootstrap=true" and then again without it Terraform would plan to destroy the blob, after noticing it's no longer present in the configuration.
So to make this work we need a special bootstrap script which wraps Terraform like this:
```
terraform apply -var="bootstrap=true"
terraform state rm azurerm_storage_blob.test
```
That second terraform state rm command above tells Terraform to forget about the object it currently has bound to azurerm_storage_blob.test. That means that the object will continue to exist but Terraform will have no record of it, and so will behave as if it doesn't exist.
If you run the bootstrap script then, you will have the blob existing but with Terraform unaware of it. You can therefore then run terraform apply as normal (without setting the bootstrap variable) and Terraform will both ignore the object previously created and not plan to create a new one, because it will now have count = 0.
This is not a typical use-case for Terraform, so I would recommend to consider other possible solutions to meet your use-case, but I hope the above is useful as part of that design work.
If you have a resource defined in your Terraform configuration, then Terraform will always try to create it. I can't imagine what your setup is, but maybe you want to move the blob creation into a CLI script and run Terraform and the script in the desired order.
I am having a hard time with the literature on this. I am hoping someone can explain the difference here so that I can better understand the flow of my scripts.
```
function select-bin {
    $objForm = New-Object System.Windows.Forms.Form
    $objForm.Text = "Select a Bin"
    $objForm.Size = New-Object System.Drawing.Size(300,200)
    $objForm.StartPosition = "CenterScreen"

    $x = @()

    # Create $OKButton and $objListBox ... removed code as not relevant.

    $OKButton.Add_Click({
        $x += $objListBox.SelectedItems
        $objForm.Close()
    })

    $objForm.ShowDialog()

    if ($x) {
        return $x
    }
    else {
        return $null
    }
}
```
The code sample above works great in PowerShell V2; however, in V4 the Add_Click section doesn't work. It successfully closes the form (created in the function's scope) but fails to update $x.
So I guess here are my questions.
In V2, was the Add_Click script block considered to be in the same scope as the function? (That's the only way I can see it having been able to update $x.)
What is the proper way to have an event like this alter data? I feel like declaring $x in the global scope is a bit much, seeing as I only need it in the function.
In V4, what scope does Add_Click run in? It is clearly different from what it was in V2, but is it running in the global scope? Is it relative to $OKButton or the function? I am assuming it's a child of either the global scope or the function, but I truly do not know.
Any clarity that anyone could offer would be greatly appreciated. I have a lot of updating to do before my company moves to V4, seeing as I have not been following best practices for scoping (my bad).
In V2, a ScriptBlock, when converted to a delegate, would run dot sourced in whatever scope happened to be the current scope.
Often, this was the scope that created the script block, so things worked naturally. In some cases though, the scope it ran in had nothing to do with the scope it was created in.
In V4, these script blocks run in their own scope - a new scope that is the child of the current scope, just as if they were a function and you called that function normally (not dot sourcing it).
I think your best bet is to use one of the following (in roughly best to worst):
```
$script:x
$x = Get-Variable -Scope 1 -Name x
$global:x
```
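Applied to the select-bin function from the question, the first option would look roughly like this (a sketch; only the relevant lines are shown):
```
function select-bin {
    $objForm = New-Object System.Windows.Forms.Form
    # ... form setup, $OKButton and $objListBox creation omitted, as in the question ...

    # Script-scoped so the Add_Click handler (which runs in its own child scope in V4)
    # writes to the same variable the function reads afterwards.
    $script:x = @()

    $OKButton.Add_Click({
        $script:x += $objListBox.SelectedItems
        $objForm.Close()
    })

    $objForm.ShowDialog()

    if ($script:x) { return $script:x } else { return $null }
}
```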
Is it considered bad practice (or are there specific reasons not to) to create a new Runspace in a custom C# cmdlet? For example, I have a custom cmdlet, as below, and need to call an existing cmdlet, and I am wondering if there will be any threading or other issues with doing this.
```
public class SPCmdletNewBusinessSite : SPNewCmdletBase<SPSite>
{
    ...

    private void ExecuteRunspaceCommand()
    {
        Runspace runspace = RunspaceFactory.CreateRunspace();
        PSSnapInException snapInError;
        runspace.RunspaceConfiguration.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out snapInError);
        runspace.ThreadOptions = PSThreadOptions.Default;
        runspace.Open();

        Pipeline pipeline = runspace.CreatePipeline();

        Command newSiteProc = new Command("New-SPSite");
        newSiteProc.Parameters.Add(new CommandParameter("Url", "http://goober-dc/9393"));
        newSiteProc.Parameters.Add(new CommandParameter("OwnerAlias", "GOOBER\\Administrator"));
        newSiteProc.Parameters.Add(new CommandParameter("Template", "STS#1"));
        newSiteProc.Parameters.Add(new CommandParameter("Language", "1033"));
        newSiteProc.Parameters.Add(new CommandParameter("ContentDatabase", "Site_Specific_ContentDB"));
        pipeline.Commands.Add(newSiteProc);

        Collection<PSObject> results = new Collection<PSObject>();
        results = pipeline.Invoke();

        foreach (PSObject obj in results)
        {
            base.WriteObject(((SPSite)obj.BaseObject).RootWeb.Title);
        }
    }
}
```
Specifically, I want to create a SharePoint 2010 SPSite and specify a specific content database for the SPSite. There is an overload for SPSitesCollection.Add() which accepts an SPContentDatabase as a parameter, but this is an internal method. I want to create the RunSpace to enable calling the New-SPSite cmdlet (which allows specifying a new content db) and therefore be able to create the site with a specific content database.
I have found http://msdn.microsoft.com/en-us/library/ms714873(v=VS.85).aspx indicates you can invoke cmdlets from within cmdlets, but New-SPSite (actual class SPCmdletNewSite) is also internal and cannot be invoked directly.
If you want to call another cmdlet inside a cmdlet, the usual practice is to use a nested pipeline, not a new runspace. This lets you use the calling cmdlet's scope, giving you access to the same variables and context. A new runspace is completely isolated and is more heavyweight as a result, but it may be desirable if you don't want to pollute the calling scope. I think you probably want a nested pipeline so you don't have to reload the SharePoint snap-in (I'm presuming it's already loaded when you call your new SharePoint cmdlet).
You can use this method from within your cmdlet. It's a nested pipeline because your command is running in a pipeline already.
```
var pipe = Runspace.DefaultRunspace.CreateNestedPipeline(...);
pipe.Invoke();
```
http://msdn.microsoft.com/en-us/library/system.management.automation.runspaces.runspace.createnestedpipeline(v=VS.85).aspx
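Adapting the code from the question, the nested-pipeline version might look roughly like this (a sketch, not tested; it assumes the SharePoint snap-in is already loaded in the session running your cmdlet, and that the method is called on the pipeline thread while the cmdlet is processing):
```
// Inside the SPCmdletNewBusinessSite class from the question.
// Requires the same usings as the original, e.g. System.Collections.ObjectModel,
// System.Management.Automation, System.Management.Automation.Runspaces, Microsoft.SharePoint.
private void ExecuteNestedPipelineCommand()
{
    // Reuse the runspace this cmdlet is already executing in instead of creating a new one.
    Pipeline pipeline = Runspace.DefaultRunspace.CreateNestedPipeline();

    Command newSiteProc = new Command("New-SPSite");
    newSiteProc.Parameters.Add(new CommandParameter("Url", "http://goober-dc/9393"));
    newSiteProc.Parameters.Add(new CommandParameter("OwnerAlias", "GOOBER\\Administrator"));
    newSiteProc.Parameters.Add(new CommandParameter("Template", "STS#1"));
    newSiteProc.Parameters.Add(new CommandParameter("Language", "1033"));
    newSiteProc.Parameters.Add(new CommandParameter("ContentDatabase", "Site_Specific_ContentDB"));
    pipeline.Commands.Add(newSiteProc);

    Collection<PSObject> results = pipeline.Invoke();

    foreach (PSObject obj in results)
    {
        WriteObject(((SPSite)obj.BaseObject).RootWeb.Title);
    }
}
```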