In Puppet, I'd like to set a variable on a node (say {'acts_as_balancer' => 0}), and then run a script to change that variable to some other value (say {'acts_as_balancer' => 1}). So far I've only seen variables used as constants in Puppet. What is the way to set non-constant variables on nodes?
Variables are (supposed to be) immutable, so you need to do something else, and it really depends on what you are actually trying to achieve.
If you want to run a script that will change the variables on the puppetmaster, then you can just use Hiera and have the script write the proper YAML files. You can also use the generate() function, but you will have to be really careful with it.
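For example, the script could rewrite a per-node Hiera data file that the manifest then looks up; the path, key, and lookup function below are illustrative and depend on your Hiera hierarchy and Puppet version:

# /etc/puppet/hieradata/nodes/web01.example.com.yaml (hypothetical path)
acts_as_balancer: 1

# in the manifest
$acts_as_balancer = hiera('acts_as_balancer')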
But you make it sound like you would like to do it during catalog compilation. This is a bad idea at best, as you will almost certainly have to rely on a solution that is parse order dependent.
I would like to automate some Terraform documentation and CI/CD checks related to input variables. Is there any way to do one or more of the following:
detect what input variables a specific module will take
detect what output variables a specific module can generate
detect the data type and description fields of the above (when applicable)
If not possible, I guess I will have to resort to regex parsing of all files in a module folder - but this seems like brute force, and far from ideal.
Any ideas?
I have had a good bit of success with the open-source terraform-docs tool. You essentially point it at your module and it generates fairly standard-looking docs in the format you specify.
This tool can also output JSON if you'd like a raw tree of data to process yourself.
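For example (subcommands vary a little between terraform-docs versions, and the module path here is illustrative):

$ terraform-docs markdown table ./modules/vpc > ./modules/vpc/README.md
$ terraform-docs json ./modules/vpc

The first command renders a Markdown table of the module's inputs and outputs; the second emits the same data as JSON for your own tooling.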
If you're looking for something a little more "low level" you could also look into the module that powers terraform-docs: terraform-config-inspect.
This question pertains to the bash shell
First off, I know how to look at the env vars that are currently set.
I want to know how to list the currently set environment variables in the order they were set. Kind of like "ls -lt" but for env vars.
Is this possible?
EDIT: many were asking why I need this.
I do a lot of debugging, code porting, and fixing. It requires me to experiment with third-party code that is not always well written. During the process of getting to a successful build, I might need to set or overwrite some env vars. I am pretty good at documenting what I am doing so I can retrace my steps, but sometimes I forget or fail to record a step.
For very good reasons, our env has a ton of env vars.
I can capture the entire set of env vars at that moment, but that doesn't help me much. If bash had a way to list env vars in the order they were set, I could clearly identify what I had set.
Also, I agree that there is no reason for bash to track this. But I was hoping it had an internal stack of env vars, automatically ordered last-in, first-out. I guess that was just too optimistic to expect.
Thanks to everyone.
As @pmos suggested in a comment, you might be able to hack together a shell function that manually tracks when you export something, but the shell itself cannot do this. Here's why. export makes a name available to the environment, and that is only meaningful to the exec*e family of functions. In other words, export really only matters to new processes following the standard fork/exec pattern. But this also means the data structure holding the exported names is defined not by the shell but by POSIX C. Here's a fragment of documentation about exec environments:
The argument envp is an array of character pointers to null-terminated strings. These strings shall constitute the environment for the new process image. The envp array is terminated by a null pointer.
and
extern char **environ; is initialized as a pointer to an array of character pointers to the environment strings.
It might seem reasonable to assume that processes add strings to the environment in order, but it doesn't really seem to work that way in fact, and POSIX systems being as complex as they are, it's not surprising they do a lot of setting, resetting and unsetting.
Despite your question focusing on environment variables, your phrasing makes me think you're also interested in tracking when variables get set, which is different from when they get exported. That actually is entirely the shell's problem, but alas, bash (at least) seems not to track this either.
set seems to display the names in alphabetical order. I can't even figure out what ordering the external env command displays them in.
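If you do want the manual tracking suggested above, here is a minimal sketch for bash; the log path is illustrative, and it only records exports that go through this wrapper in the current shell:

EXPORT_LOG=~/.export_order
export() {
  local arg
  for arg in "$@"; do
    case $arg in -*) continue ;; esac    # skip options such as -f or -n
    # log a timestamp and the variable name being exported
    printf '%s\t%s\n' "$(date '+%F %T')" "${arg%%=*}" >> "$EXPORT_LOG"
  done
  builtin export "$@"                    # hand off to the real builtin
}

Source it from ~/.bashrc and the log will show the order in which you exported names.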
I have this node.pp and I am wondering how puppet is going to execute it.
node 'agent.puppet.demo' {
  include ssh
  include postfix
  include mysql
  include apache
}
On the agent node, when I run this:
$ puppetd -t -d
Puppet does not execute it sequentially; that is, it does not apply ssh first, then postfix, and so on.
Does anyone know why this is? Is it because Puppet is a 'declarative language', where the order of execution does not really matter?
If that is the case, can I just declare what I want, and Puppet will figure out how to execute it?
Disclaimer: I am one of the developers of Puppet.
It will execute it in a consistent but unpredictable order, with the exception of any explicit or implicit dependencies in the code. Explicit dependencies are things that you specify with the subscribe or require metaparameters. Implicit dependencies come from the autorequire feature, which does things like automatically apply file resources in a sensible order.
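For example, a minimal sketch of an explicit dependency expressed with the require metaparameter (resource names are illustrative):

package { 'openssh-server':
  ensure => installed,
}

service { 'sshd':
  ensure  => running,
  require => Package['openssh-server'],  # apply the package before the service
}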
The reason for this isn't so much that the language is declarative, but rather the language is declarative because order doesn't matter for most things in the underlying problem space.
For example, there really isn't much connection between managing ssh and managing postfix for most people - you could do the work in either order, or even at the same time, and everything would work out the same.
That frees us up to improve things in a whole lot of ways that "everything is in linear order" doesn't. We are working, for example, to batch up package installs while still respecting the explicit dependencies outside packages.
So, the order of execution and dependencies follows the underlying problem, and we have preserved that property to be able to do more awesome things.
The goal is exactly what you say at the end: that you declare what you want, and we take care of all the details of getting it there. In time we hope to be much smarter about logical dependencies, so you have to say even less to get that, too.
Disclaimer: I am still pretty new to puppet :)
The key is to think of everything in terms of dependencies. For class dependencies, I like to use the Class['a'] -> Class['b'] syntax. Say you have a tomcat class that requires a jdk class, which downloads and installs the Sun JDK from Oracle. In your tomcat class, you can specify this with
Class['jdk'] -> Class['tomcat']
Alternatively, you can declare the class with a require metaparameter rather than using include.
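A minimal sketch of that alternative, reusing the class names from the example above:

class { 'tomcat':
  require => Class['jdk'],
}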
Hey, in my Python modules I want to be able to access variables from the top-level scope, that is, the file that is calling the modules. How would I go about doing this?
Thanks.
There's no way that will work in all environments, so a precise answer might depend on how you are running your top-level Python code. The best thing to do is to put the variables into an object and pass the object to the functions that need it.
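The minimal version of that looks something like this (class and variable names are illustrative):

# built by the top-level file and passed down explicitly
class Settings:
    def __init__(self, debug, data_dir):
        self.debug = debug
        self.data_dir = data_dir

# the module's function receives the object instead of peeking at its caller's globals
def process(settings):
    if settings.debug:
        print("processing files in", settings.data_dir)

# in the top-level file
settings = Settings(debug=True, data_dir="/tmp/data")
process(settings)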
In general, the way to do this is to provide some API to your plugins (e.g. a module that they import) to provide controlled access to the information they need.
If this information may vary by context, and passing in different arguments to a plugin initialisation function isn't appropriate, then an information API that uses threading.local under the covers may be of value.
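A minimal sketch of such an API module, with threading.local under the covers (module and function names are illustrative):

# plugin_api.py -- plugins do: from plugin_api import get_context
import threading

_state = threading.local()

def set_context(context):
    # called by the host application before it invokes a plugin in this thread
    _state.context = context

def get_context():
    # called by plugins; returns whatever the host exposed, or None
    return getattr(_state, "context", None)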
Maybe you have come across the following situation. You're working, you run one script after another, and then suddenly realize you've changed the value of a variable you are interested in. Apart from making a backup of the workspace, is there no other way to protect the variables?
Is there a way to select individual variables in the workspace that you're going to protect?
Apart from viewing the command history, is there a history of the different values that have been assigned to one particular variable?
Running scripts in sequence is a recipe for disaster. If possible, try turning those scripts into functions. This will naturally do away with the problems of overwriting variables you are running into, since variables inside functions are local to those functions whereas variables in scripts are local to the workspace -- and thus easily accessed/overwritten by separate scripts (often unintentionally, especially if you use variable names like "result").
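For example, a minimal sketch, saved as its own file (the name summarize.m and the computation are illustrative):

% summarize.m -- everything inside is local, so calling this
% cannot overwrite variables in the base workspace
function result = summarize(data)
result = mean(data) + std(data);
end

Calling result = summarize(x) leaves every other variable in your workspace untouched.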
I also agree that writing functions can be helpful in this situation. If however you are manipulating very large data sets then you need to be careful to write your code in a form which doesn't make multiple copies of variables within your functions or you may run into memory shortage problems.
No, there is no workspace history. I would say, if you run into that problem that you described, you should consider changing your programming style.
I would suggest you:
Put enough code and information in your script that you can complete a task starting from an empty workspace. For that reason I always put clear all at the start of my main file.
If it's getting too complex, consider calling functions. If you need values generated by another script or function, rewrite that script as a function and call it from your main file, or save the variables to disk. Loading variables is absolutely okay, but running scripts in sequence leads to disaster, as marciovm mentioned.