Have an ODM rule or XOM "know" which environment (prod or non-prod) is currently executing the rule

I would like to use a property or some trick in ODM rules at execution time to know whether the rule is being executed in a production or non-production environment.
I need to pass that information through to a XOM Java class so that it can load different properties depending on the environment.
Is there something built into the rule language that I can use to know this?

Neither the Java code nor the rules know where they are executed (DEV, QA, PROD); this is usually configuration information.
You could use that configuration information to populate a Java object and reuse it in the rules.
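For example, here is a minimal sketch of that idea, assuming a hypothetical EnvironmentInfo XOM class and an environment.properties file on the classpath (both names are assumptions, not anything built into ODM):
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical XOM helper: reads an "environment" entry from a classpath
// properties file that your deployment configuration provides per environment.
public class EnvironmentInfo {
    private final String environment;

    public EnvironmentInfo() {
        Properties props = new Properties();
        try (InputStream in = EnvironmentInfo.class.getResourceAsStream("/environment.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // ignore and fall back to the default below
        }
        // Default to non-production if nothing is configured.
        this.environment = props.getProperty("environment", "nonprod");
    }

    // Verbalize this in the BOM so rules can test it, e.g. "the deployment is production".
    public boolean isProduction() {
        return "prod".equalsIgnoreCase(environment);
    }

    public String getEnvironment() {
        return environment;
    }
}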
Best
Emmanuel

Related

Terraform: manage specificities across each environment

I have 3 environments to manage via Terraform: dev, staging, prod.
An example of use case is below:
create a "common" service account for each environment (sa-xxx#dev + sa-xxx#staging + sa-xxx#prod)
create a "dev-specific" role for this sa-xxx#dev SA
create a "staging-specific" role for this sa-xxx#staging SA
create a "prod-specific" role for this sa-xxx#prod SA
How can I easily manage common & specific resources for each environment?
Terraform is very simple if all environments are equal, but the specificities make it more complicated. The goal is to have a structured way to manage this, and to avoid:
code duplication in 3 distinct folders
"count" conditions in each tf resource definition
Ideally Terraform would look into the current root folder UNION the dev/staging/prod folder (depending on the environment).
The need is very simple, but the implementation seems difficult.
Thanks for the help! :)
This is a pretty broad question and so it's hard to answer specifically, but one general answer is to make use of shared modules as a means of sharing code between your separate configurations.
The Module Composition guide describes some different patterns that might help you in your goal. The idea would be to make each of your configurations share modules wherever it makes sense for them to do so but to also potentially use different modules -- or the same modules but with different relationships/cardinalities -- so that your configurations can represent both what is the same and what is different between each of them.
One way would be to put shared resources in a common configuration managed in a remote state. Then in other configurations, you can refer to the shared, remote state using the terraform_remote_state data source.
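A minimal sketch of that pattern, assuming the common configuration stores its state in a GCS bucket and exports a service_account_email output (the bucket name, prefix, and output name are assumptions):
# In an environment-specific configuration (e.g. dev): read the common state.
data "terraform_remote_state" "common" {
  backend = "gcs"
  config = {
    bucket = "example-tf-state"   # hypothetical state bucket
    prefix = "common"             # hypothetical state prefix
  }
}

# Attach a dev-specific role to the shared service account exported by "common".
resource "google_project_iam_member" "dev_role" {
  project = var.project_id
  role    = "roles/viewer"        # dev-specific role, as an example
  member  = "serviceAccount:${data.terraform_remote_state.common.outputs.service_account_email}"
}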

Is there any way to mock a Terraform data source

I am working on unit testing for Terraform. For some modules, I have to authenticate to AWS to be able to retrieve a Terraform data source. Is there any way to mock or override a data source for something like the example below?
data "aws_region" "current" {
}
Thank you in advance.
Terraform does not include any built-in means to mock the behavior of a provider. Module authors generally test their modules using integration testing rather than unit testing, e.g. by writing a testing-only Terraform configuration that calls the module in question with suitable arguments to exercise the behaviors the author wishes to test.
The testing process is then to run terraform apply within that test configuration and observe it making the intended changes. Once you are finished testing you can run terraform destroy to clean up the temporary infrastructure that the test configuration declared.
A typical Terraform module doesn't have much useful behavior in itself and instead is just a wrapper around provider behaviors, so integration testing is often a more practical approach than unit testing in order to achieve confidence that the module will behave as expected in real use.
If you particularly want to unit test a module, I think the best way to achieve your goal within the Terraform language itself is to think about working at the module level of abstraction rather than the resource level of abstraction. You can then use Module Composition techniques, like dependency inversion, so that you can pass your module fake input when you are testing it and real input when it's being used in a "real" configuration. The module itself would therefore no longer depend directly on the aws_region data source.
However, it's unlikely that you'd be able to achieve unit testing in the purest sense with the Terraform language alone unless the module you are testing consists only of local computation (locals and output blocks, and local-compute-only resources) and doesn't interact with any remote systems at all. While you could certainly make a Terraform module that takes an AWS region as an argument, there's little the module could do with that value unless it is also interacting with the AWS provider.
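A minimal sketch of that dependency-inversion idea, assuming a hypothetical module at ../modules/example whose only AWS dependency is the region lookup (the module path and variable name are assumptions):
# Inside the module: take the region as an input variable instead of reading
# the aws_region data source directly.
variable "aws_region" {
  type        = string
  description = "Region passed in by the caller"
}

# Real configuration: look the region up once and pass it in.
data "aws_region" "current" {
}

module "example" {
  source     = "../modules/example"   # hypothetical module path
  aws_region = data.aws_region.current.name
}

# Test configuration: pass a hard-coded fake value, so no AWS credentials are needed.
module "example_under_test" {
  source     = "../modules/example"
  aws_region = "us-east-1"
}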
A more extreme alternative would be to write your own aws provider that contains the subset of resource type names you want to test with but whose implementations of them are all fake. You could then use your own fake aws provider instead of the real one when you're running your tests, and thus avoid interacting with real AWS APIs at all.
This path is considerably more work, of course, so I would suggest embarking on it only if the value of unit testing your particular module(s) is high.
Another, rather labour-intensive, solution would be to emulate the AWS API on localhost and point the (normal) AWS provider at it. LocalStack (https://github.com/localstack/localstack, https://docs.localstack.cloud/integrations/terraform/) may be helpful for this.
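A minimal sketch of pointing the AWS provider at LocalStack's default edge port, roughly following the LocalStack Terraform integration docs (port 4566 and the dummy credentials are LocalStack defaults; add an endpoints entry per service you use):
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"   # LocalStack accepts any credentials
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true

  endpoints {
    ec2 = "http://localhost:4566"
    s3  = "http://localhost:4566"
  }
}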

Using Terraform as an API

I would like to use Terraform programmatically, like API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way/best practices for creating the configuration to support this? So far it seems that my options are:
Properly define input/output, heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common
Thanks!
Instead of using Terraform directly, I would recommend a 3rd-party build/deploy tool such as Jenkins, Bamboo, or Travis CI to manage the release of your infrastructure managed by Terraform. The reason is that you should treat your Terraform code in the exact same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come with a standard API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, one option is to use a tool such as RunDeck, which allows you to execute arbitrary commands on a server; it also has the added bonus of an excellent privilege control system that only allows specified users to execute commands. Another option is to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for using an API to automate creation/teardown of your infrastructure with Terraform, the best practices are the same regardless of what tools you are using. You mentioned some good ones already, such as clearly defining input/output and creating a separation of concerns. Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common, shared portions of your code (see the sketch after this list). This reduces the number of places you have to update code, and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
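A minimal sketch of that module reuse, assuming a hypothetical local module at ./modules/eip-pool with an address_count input and an allocation_ids output (all names here are assumptions):
# Shared module called from each configuration/step, with explicit inputs and outputs.
module "eip_pool" {
  source        = "./modules/eip-pool"   # hypothetical shared module
  address_count = 2
}

# Re-expose the module's output so a later step (or an external caller) can consume it.
output "eip_allocation_ids" {
  value = module.eip_pool.allocation_ids
}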

Using the Jenkins Java API in a Workflow Script

I'm trying to take advantage of the Jenkins Java API from within a Workflow Groovy script.
I'm finding it very difficult to get to grips with what I can and can't do. Are there any good resources on how to do this?
At the moment what I'm trying to do is get the workspace path. I've got as far as:
def jenkins = Jenkins.instance;
def build = jenkins.getItem(env.JOB_NAME).getBuild(env.BUILD_NUMBER)
But this seems to be a dead end; there doesn't seem to be anything useful you can actually do with these objects.
If anyone can point me at any resources giving examples of the kind of useful things that can be done like this, or help with my specific problem of getting the workspace path that would be great.
You can use the standard Workflow step pwd() to get the workspace path, without any use of Jenkins APIs.
As far as other cases are concerned, there is no particular document summarizing what you can do with Jenkins APIs from a Workflow script, because it is simply whatever the Jenkins APIs allow you to do generally (see Javadoc). Two caveats you need to be aware of:
Almost all such calls will be rejected by the Groovy sandbox (and many are not safe to whitelist). This means that you cannot write such scripts in a secured Jenkins installation unless you are an administrator.
Most API objects will not be Serializable, which means that you must encapsulate their use in a method marked with the @NonCPS annotation. The method may take as parameters, and return, any serializable (or primitive) types.
Currently there is no supported way to get access to Jenkins model objects which are defined in a local workspace context. JENKINS-28385 would help. In the meantime, there are often workarounds; for example, if you want the Node you are using inside a node {} block, you can use Jenkins.instance.getNode(env.NODE_NAME).
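For example, a minimal sketch of that pattern (the labelsOf helper name is an assumption; it only takes and returns serializable values):
// Jenkins model objects are not Serializable, so keep them inside a @NonCPS method
// and only pass in / return serializable values such as Strings.
@NonCPS
def labelsOf(String nodeName) {
    def n = Jenkins.instance.getNode(nodeName)
    return n != null ? n.labelString : ''
}

node {
    echo "Labels: ${labelsOf(env.NODE_NAME)}"
}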
This doesn't answer your overarching question, but env.WORKSPACE will give you the absolute path to the workspace :) Open the Snippet Generator in a Workflow job config, scroll down, and you will see a list of all the available environment variables.
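For instance, a minimal sketch combining the two suggestions above (no Jenkins APIs and no sandbox approvals needed):
node {
    // pwd() is a standard Workflow step that returns the current workspace directory
    def ws = pwd()
    echo "Workspace via pwd(): ${ws}"
    // env.WORKSPACE is only set while the build is running on a node
    echo "Workspace via env:   ${env.WORKSPACE}"
}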
If you are trying to get the workspace path for the purpose of reading some files, you should really be using a Job DSL and readFileFromWorkspace(filePath).
Just having a workspace path and trying to read a file with new File(filePath) may not work if you are using slaves.
More details are here
https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-DSL-Commands
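A minimal Job DSL sketch of that approach (the script path and job name are assumptions):
// Seed job script: read a file from the seed job's workspace and use it in a generated job.
def buildScript = readFileFromWorkspace('scripts/build.sh')   // hypothetical path

job('example-generated-job') {                                // hypothetical job name
    steps {
        shell(buildScript)
    }
}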

How to stop Nashorn from allowing the quit() function?

I'm trying to add a scripting feature to our system where untrusted users can write simple scripts and have them execute on the server side. I'm trying to use Nashorn as the scripting engine.
Unfortunately, they added a few non-standard features to Nashorn:
https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html#sthref29
Scroll down to "Additional Nashorn Built-in Functions" and see the "quit()" function. Yup, if an untrusted user runs this code, the whole JVM shuts down.
This is strange, because Nashorn specifically anticipates running untrusted scripts. See: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/api.html#classfilter_introduction
Applications that embed Nashorn, in particular, server-side JavaScript frameworks, often have to run scripts from untrusted sources and therefore must limit access to Java APIs. These applications can implement the ClassFilter interface to restrict Java class access to a subset of Java classes.
Is there any way to prevent this behavior? How do I prevent users from running any of the additional functions?
Unfortunately, there is currently no way to control creation of non-standard global functions. One workaround is to simply delete these functions from the global object after the ScriptEngine has been initialized:
final NashornScriptEngineFactory engineManager = new NashornScriptEngineFactory();
final ScriptEngine engine = engineManager.getScriptEngine();
final Bindings bindings = engine.getBindings(ScriptContext.ENGINE_SCOPE);
bindings.remove("print");
bindings.remove("load");
bindings.remove("loadWithNewGlobal");
bindings.remove("exit");
bindings.remove("quit");
System.err.println(engine.eval("'quit is ' + typeof quit"));
If you are using the Nashorn shell, a simple delete quit; will do.
If you are using the ScriptEngine interface and create multiple bindings, you'll have to do this with every global object/binding you create.
If you're going to run "untrusted" scripts, please run your program with the SecurityManager turned on. With that, "quit" would have resulted in a SecurityException. ClassFilter by itself is not a replacement for the SecurityManager; it is meant to be used along with the SecurityManager. Please check the JEP on ClassFilter here: http://openjdk.java.net/jeps/202. The JEP clearly states this:
Make security managers redundant for scripts. Embedding applications should still turn on security management before evaluating scripts from untrusted sources. Class filtering alone will not provide a complete script "sandbox." Even if only untrusted scripts (with no additional Java classes) are executed, a security manager should still be utilized. Class filtering provides finer control beyond what a security manager provides. For example, a Nashorn-embedding application may prevent the spawning of threads from scripts or other resource-intensive operations that may be allowed by security manager.
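A minimal sketch of that combination (the class name is an assumption; with the security manager installed, the script's call to quit() should fail with a SecurityException instead of terminating the JVM):
import javax.script.ScriptEngine;
import javax.script.ScriptException;
import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

public class NashornSandboxSketch {
    public static void main(String[] args) throws ScriptException {
        // Install a security manager before evaluating untrusted scripts.
        // A real application will want a tailored policy file rather than the default.
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }
        ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine();
        try {
            engine.eval("quit()");
        } catch (SecurityException e) {
            // quit()/exit() are blocked by the security manager instead of killing the JVM
            System.err.println("Blocked: " + e);
        }
    }
}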
