I am using Puppet 3.7.5. I have written a custom fact, php_v, which returns a version number.
Now I have a YAML file called defaults.yaml, and this is working fine:
---
p_name: "php_v"
p_version: "%{::php_v}" # returns the custom fact
But I want something like this:
---
p_name: "php_v"
p_version: "%{::${p_name}}"
# p_name should be replaced with "php_v"
# and custom fact should return the version
Is there a way to do so?
I wanted to ask if there is a way to ignore whitespace changes when creating a terraform plan.
This question is related to this one; I created a new one because I wanted to give a new example of the issue:
Terraform shows unnecessary changes due to whitespace
For example, when running
terraform plan
I get the following change for a helm provider resource:
  # helm_release.cert-manager will be updated in-place
  ~ resource "helm_release" "cert-manager" {
        id     = "cert-manager"
        name   = "cert-manager"
      ~ values = [
          - <<-EOT
                installCRDs: true
            EOT,
          + <<-EOT
                installCRDs: true
            EOT,
        ]
        # (27 unchanged attributes hidden)
    }
I found out that the change was due to line endings: the deployed value used CRLF, while my local source file used LF.
Is there an option to ignore whitespace and/or line-ending differences?
It's typically the responsibility of the provider itself to determine whether the prior value and the new value are equivalent despite not being exactly equal, so making this work automatically would require a change to the provider: it would need to notice that this argument is defined as YAML, and that YAML ascribes no meaning to the choice between CRLF and LF. The provider would ideally perform this check itself and thus spare you from worrying about it, so I would suggest opening a feature request with the provider developer to see if they would be interested in handling that.
However, if a provider isn't performing that job correctly itself then you can potentially work around it by doing your own normalization of the value using Terraform language features, so that the value passed to the provider is always the same when the meaning is the same.
One straightforward way to achieve that in this case would be to round-trip the value through both yamldecode and yamlencode, thereby normalizing the input to be in the style that yamlencode produces:
values = [yamlencode(yamldecode(var.something))]
If you want to be more surgical about it and only normalize the line endings, you could use replace to remove the CR character from any CRLF pair:
values = [replace(var.something, "\r\n", "\n")]
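Applied to the helm_release from the plan output above, either normalization might look roughly like this; the values.yaml file name and the repository/chart arguments are placeholders for whatever your configuration actually uses:

resource "helm_release" "cert-manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"

  # Round-trip through yamldecode/yamlencode so the provider always sees
  # a canonically formatted YAML string, regardless of the whitespace or
  # line endings in the source file.
  values = [yamlencode(yamldecode(file("${path.module}/values.yaml")))]

  # Or, to normalize only the line endings:
  # values = [replace(file("${path.module}/values.yaml"), "\r\n", "\n")]
}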
The above solution assumes that the difference in whitespace is being caused by something in your module, such as if you're storing your Terraform configuration in a misconfigured Git repository that's rewriting LF to CRLF when you clone it on a Windows system. This config-based normalization can undo that sort of transformation so that the provider will always see the value in the same way.
This solution cannot address problems that are caused by the provider itself misbehaving. Unfortunately some providers have bugs where they will silently rewrite the stored values for some arguments during the "refresh" step, regardless of how you wrote it in the configuration. In that case the only recourse is to fix the provider, because that incorrect value is originating inside the provider itself and isn't under the control of the module author.
I'm fairly new to Snakemake and inherited a rather large workflow that consists of a sequence of 17 rules that run serially.
Each rule takes outputs from the previous rules and uses them to run a Python script. Everything has worked great so far, except that now I'm trying to improve the workflow, since some of the rules can be run in parallel.
Here is a rough example of what I'm trying to achieve; my understanding is that wildcards should allow me to solve this.
grid = [10, 20]

rule all:
    input:
        expand("path/to/C/{grid}/file_C", grid=grid)

rule process_A:
    input:
        path_A = "path/to/A/file_A",
        path_B = "path/to/B/{grid}/file_B"  # A rule further in the workflow could need a file from a previous rule saved with this structure
    params:
        grid = lambda wc: wc.get(grid)
    output:
        path_C = "path/to/C/{grid}/file_C"
    script:
        "script_A.py"
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
In the end, the whole rule process_A should be run once with grid = 10 and once with grid = 20, saving each result to a folder whose path also depends on grid.
I know there are several things wrong with this, but I can't seem to find where to start to figure this out. The error I'm getting now is:
name 'params' is not defined
Any help as to where to start from?
It would be useful to post the error stack trace of name 'params' is not defined to know exactly what is causing it. For now...
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
I suspect you are mixing the script directive with the shell directive. Probably you want something like:
rule process_A:
    input: ...
    output: ...
    params: ...
    script:
        "script_A.py"
Inside script_A.py, Snakemake will replace snakemake.params.grid with the actual param value.
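For instance, inside script_A.py the injected snakemake object would be used roughly like this (names taken from the rule above):

# script_A.py - the `snakemake` object is injected automatically by the script directive
grid   = snakemake.params.grid
path_a = snakemake.input.path_A
path_c = snakemake.output.path_C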
Alternatively, write a standalone Python script that parses command-line arguments and execute it like any other program using the shell directive. (I tend to prefer this solution as it makes things more explicit and easier to debug, but it also means more boilerplate code to write a standalone script.)
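For example, that standalone-script variant could look roughly like this; the command-line flags are invented for illustration, and script_A.py would parse them with argparse:

# Sketch: run script_A.py as an ordinary command-line program; Snakemake substitutes
# {input}, {output} and {wildcards.grid} (10 or 20) before executing the shell command.
rule process_A:
    input:
        path_A = "path/to/A/file_A",
        path_B = "path/to/B/{grid}/file_B"
    output:
        path_C = "path/to/C/{grid}/file_C"
    shell:
        "python script_A.py --input-a {input.path_A} --input-b {input.path_B} "
        "--grid {wildcards.grid} --output {output.path_C}"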
Is there a way to render the values of Local Values and Variables? As an example, I have this file:
variable "foo" {
type = string
default = "bar"
}
locals {
my_var = "here is ${var.foo}"
}
# Rendered:
# my_var = "here is bar"
Is there a quick and easy way to do it? I've tried terraform console, but it's hard to use with complicated use cases such as templatefile, jsonencode, jsondecode, merge, and so on.
I need this capability to test functions that I'm not yet familiar with. Doing terraform apply just to check how my functions behave is something I'm trying to avoid.
I needed something like https://www.katacoda.com/courses/terraform/playground but for local usage with fast results.
I created this project - https://github.com/unfor19/tfcoding - which brings up a Docker container that watches for changes in the file tfcoding.tf and renders the Local Values automatically upon saving this file.
I am trying to create an image source from a Python script in OBS and want to know the proper steps to create a source in a script. I already checked; there is no proper documentation available for Python scripting.
obs.obs_source_create('banner-image','xyz')
Logs
TypeError: obs_source_create() takes exactly 4 arguments (2 given)
I want to create an image source from a script and add that source to my current scene.
This may be a simple FFI implementation, so try:
obs.obs_source_create('banner-image', 'xyz', None, None)
Source: First hit on Google when searching for "obs_source_create":
https://obsproject.com/docs/reference-sources.html
Excerpt:
obs_source_t *obs_source_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data)
Creates a source of the specified type with the specified settings.
The “source” context is used for anything related to presenting or modifying video/audio. Use obs_source_release to release it.
Parameters:
id – The source type string identifier
name – The desired name of the source. If this is not unique, it will be made to be unique
settings – The settings for the source, or NULL if none
hotkey_data – Saved hotkey data for the source, or NULL if none
Returns:
A reference to the newly created source, or NULL if failed
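Putting that together, a rough sketch of creating an image source and adding it to the current scene from an OBS Python script could look like the following. It assumes the built-in image source type id "image_source" and its "file" setting (the 'banner-image' string from the question is reused here as the source name), so treat it as a starting point rather than tested code:

import obspython as obs

def add_banner_image(image_path, source_name="banner-image"):
    # Settings for the image source; "file" points at the image on disk.
    settings = obs.obs_data_create()
    obs.obs_data_set_string(settings, "file", image_path)

    # First argument is the source type id, second is the source name.
    source = obs.obs_source_create("image_source", source_name, settings, None)

    # Add the new source to the scene currently active in the frontend.
    scene_source = obs.obs_frontend_get_current_scene()
    scene = obs.obs_scene_from_source(scene_source)
    obs.obs_scene_add(scene, source)

    # Release the references we created or obtained.
    obs.obs_source_release(scene_source)
    obs.obs_source_release(source)
    obs.obs_data_release(settings)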
I've seen someone check whether an agent's MAC address matches a specific regular expression before running the stuff specified below. The example is something like this:
if $is_virtual == "true" and $kernel == "Linux" and $macaddress =~ /^02:00:0A/ {
include nmonitor
include rootsh
include checkmk-agent
include backuppcacc
include onecontext
include sysstatpkg
include ensurekvmsudo
include cronntpdate
}
That's all there is in that particular manifest file. Similarly, here is another manifest example, this time matching nodes via a regular expression:
node /^mi-cloud-(dev|stg|prd)-host/ {
  if $is_virtual == 'false' {
    include etchosts
    include checkmk-agent
    include nmonitor
    include rootsh
    include sysstatpkg
    include cronntpdate
    include fstab-ds-dev
  }
}
I've been asked whether a similar concept can be applied to checking the agent's hostname against a master file of hostnames that are allowed to run the included classes.
I am not sure whether it can be done, but the rough idea goes something like this:
file { 'hostmasterfile.ini':
  ensure  => present,
  source  => 'puppet:///test/hostmaster.ini',
  content => $hostname,
}

$coname = content

# Usually the start / head of the manifest
if $hostname == $coname {
  include <a>
  include <b>
}
Note: $fqdn is out of the question.
To my knowledge, I have not seen any sample manifest that matches this request. What's more, it goes against the standard practice of keeping things easy to manage and not putting all your eggs in one basket.
An ex-colleague of mine claims that the idea above amounts to self-provisioning. However, that concept supposedly does not exist in Puppet (he posed that question at a workshop a few months back). I am not sure how true that is, though.
If the above can be done, any suggestions on how to do it? Or is it best to go back to the standard one-manifest-per-node approach for easy maintenance?
Thanks very much.
M
Well, you can replace your node blocks with if constructs.
if $hostname == 'host1' {
  # manifest for host1 here
}
You can combine this with some sort of ini file (e.g., using the generate function). If the <a> and <b> for the include statements are then fetched from your ini file as well, you have constructed a crude ENC.
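For instance, a minimal sketch of reading such an ini file on the master with generate (the file path is the one used further below and is only an example):

# Runs /bin/cat on the master and returns the file contents as a string
$ini_data = generate('/bin/cat', '/etc/puppet/files/test/hostmaster.ini')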
Note that this has security implications - any agent can claim to have any host name. It's even very simple to do:
FACTER_hostname=kerberos01 puppet agent --test
Any node can receive the catalog for kerberos01 this way. (node blocks rely on $certname instead, which cannot be forged.)
I could not decipher your precise intent from your question, but I suspect that you really want an ENC or a Hiera based approach.
Edit after feedback from your first comment:
To make the master read contents from local files, you should:
- get rid of the file { 'hostmasterfile.ini': } resource - it only allows you to set contents, not retrieve them
- initialize the variable content using the file function (this will make all nodes fail if the file is not readable)
The code could look like this (assuming that there can be multiple host names in the ini file).
$ini_data = file('/etc/puppet/files/test/hostmaster.ini')
Next step would be a regex lookup like this:
if $ini_data =~ /name=$hostname/ {
Unfortunately, this does not work! Puppet will not expand variable values in regular expressions, apparently.
You can use this (kind of silly) workaround:
$ini_lookup = regsubst($ini_data, "name=$hostname", '__FOUND__')
if $ini_lookup =~ /__FOUND__/ {
  ...
}
Final remark about security: If your team is adamant about not using $certname for this lookup (although it should be easy to map host names to cert names), you should consider adding the host name to your trusted facts.
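As a rough sketch of what that could look like, assuming the host name was embedded as a certificate extension (shown here under the pp_hostname shortname) and that trusted facts are enabled in your Puppet version, the lookup from above would simply swap in the trusted value:

# $trusted comes from the agent's certificate and cannot be forged like $hostname
$trusted_hostname = $trusted['extensions']['pp_hostname']
$ini_data         = file('/etc/puppet/files/test/hostmaster.ini')
$ini_lookup       = regsubst($ini_data, "name=${trusted_hostname}", '__FOUND__')
if $ini_lookup =~ /__FOUND__/ {
  # include the classes this host is allowed to run
}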