Using the Jenkins Java API in a Workflow Script - groovy

I'm trying to take advantage of the Jenkins Java API from within a Workflow Groovy script.
I'm finding it very difficult to get to grips with what I can and can't do; are there any good resources on how to do this?
At the moment what I'm trying to do is get the workspace path. I've got as far as:
def jenkins = Jenkins.instance;
def build = jenkins.getItem(env.JOB_NAME).getBuild(env.BUILD_NUMBER)
But this seems to be a dead end; there doesn't seem to be anything useful you can actually do with these objects.
If anyone can point me at any resources giving examples of the kind of useful things that can be done like this, or help with my specific problem of getting the workspace path, that would be great.

You can use the standard Workflow step pwd() to get the workspace path, without any use of Jenkins APIs.
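For example, a minimal sketch inside a node block:
node {
    def ws = pwd()
    echo "The workspace path is ${ws}"
}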
As far as other cases are concerned, there is no particular document summarizing what you can do with Jenkins APIs from a Workflow script, because it is simply whatever the Jenkins APIs allow you to do generally (see Javadoc). Two caveats you need to be aware of:
Almost all such calls will be rejected by the Groovy sandbox (and many are not safe to whitelist). This means that you cannot write such scripts in a secured Jenkins installation unless you are an administrator.
Most API objects will not be Serializable, which means that you must encapsulate their use in a method marked with the @NonCPS annotation. The method may take as parameters, and return, any serializable (or primitive) types.
Currently there is no supported way to get access to Jenkins model objects which are defined in a local workspace context. JENKINS-28385 would help. In the meantime, there are often workarounds; for example, if you want the Node you are using inside a node {} block, you can use Jenkins.instance.getNode(env.NODE_NAME).
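For example, a rough sketch of that workaround combined with the @NonCPS caveat above, keeping the non-serializable Node object inside the method and returning only a String (the description lookup is just illustrative):
import jenkins.model.Jenkins

node {
    echo "Running on: ${describeNode(env.NODE_NAME)}"
}

@NonCPS
def describeNode(String name) {
    // the hudson.model.Node object never leaves this method; only a String does
    def n = Jenkins.instance.getNode(name)
    return n != null ? n.nodeDescription : "node '${name}' not found"
}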

This doesn't answer your overarching question, but env.WORKSPACE will give you the absolute path to the workspace :) Open the Snippet Generator in a Workflow job config, scroll down, and you will see a list of all the available environment variables.

If you are trying to get the workspace path for the purpose of reading some files, you should really be using a Job DSL and readFileFromWorkspace(filePath).
Just having a workspace path and trying to read a file with new File(filePath) may not work if you are using slaves.
More details are here: https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-DSL-Commands
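For example, a minimal Job DSL seed script might look like this (the file path and job name are made up):
def deployScript = readFileFromWorkspace('scripts/deploy.sh')

job('generated-deploy-job') {
    steps {
        shell(deployScript)
    }
}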

Related

Using Terraform as an API

I would like to use Terraform programmatically, like API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way or best practice for creating the configuration to support this. So far it seems that my options are:
Properly define input/output, heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common.
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, Travis CI, etc. to manage the release of your Terraform-managed infrastructure. The reason is that you should treat your Terraform code in exactly the same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come with a standard API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is a tool such as RunDeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege-control system that lets only specified users execute commands. Your other option could be to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for using an API to automate creation/teardown of your infrastructure with Terraform, they are the same regardless of which tools you are using. You mentioned some excellent ones already, such as clearly defining input/output and creating a separation of concerns. Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common, shared portions of your code (see the sketch after this list). This reduces the number of places where you have to update code, and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
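As a rough sketch of the input/output definition and module separation mentioned above (the EIP module and all names are illustrative, not a recommended layout):
# modules/eip/main.tf -- a small reusable module with explicit inputs and outputs
variable "eip_count" {
  default = 1
}

resource "aws_eip" "reserved" {
  count = "${var.eip_count}"
  vpc   = true
}

output "public_ips" {
  value = ["${aws_eip.reserved.*.public_ip}"]
}

# main.tf in the root configuration -- compose each step out of modules
module "reserved_ips" {
  source    = "./modules/eip"
  eip_count = 2
}
Each step can then be applied separately (e.g. terraform apply -target=module.reserved_ips) while its outputs feed the later steps.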

What is the terminology for creating a UI for an existing CLI Tool

I'm having trouble finding a library to help me with a project where I hope to put a WebUI in place to drive an existing CLI Tool. If one doesn't exist I will just produce my own, but I suspect I'm using the wrong terminology. I've tried using 'shell' and 'wrapper' in my searches but those words are often used for other things so I'm coming up dry.
What is the terminology for creating a UI for an existing CLI Tool?
Original Wording
I'm looking to write a WebUI to 'wrap?' an existing CLI tool I use. It doesn't seem too difficult to use child processes in Node, but it got me wondering if there was already a library that handles a lot of the repeatable tasks, like mapping the params to a function and parsing the output.
My initial research hasn't brought up anything, so I suspect that I may not be aware of the correct terminology for this type of application or behavior.
In the older desktop app days I'd have referred to this as a GUI shell/wrapper for a CLI app, but I don't think that is really correct.
Can anyone educate me on the correct terminology and ideally point me towards any NodeJS library that might assist with this? I've no problem with the Web side of things, just really the handling of the CLI executions, keeping track of states and parsing the results.
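For what it's worth, the plumbing described here is small enough to sketch in Node; in this example the tool name, its flags, and the use of Express are all made-up assumptions:
const { execFile } = require('child_process');
const express = require('express'); // assumption: any HTTP layer would do

const app = express();

// map a request parameter onto CLI arguments (never into a shell string)
app.get('/status/:id', (req, res) => {
  execFile('mytool', ['status', '--id', req.params.id], (err, stdout, stderr) => {
    if (err) {
      return res.status(500).json({ error: stderr || err.message });
    }
    res.json({ output: stdout.trim() }); // parse/shape the CLI output here
  });
});

app.listen(3000);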

Configuring Distributed Objects Dynamically

I'm currently evaluating using Hazelcast for our software. I would be glad if you could help me elucidate the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose of) services on demand, and their configuration may change in between.
The version I'm evaluating is 3.6.2.
The documentation I have available (the Reference Manual, the Deployment Guide, as well as the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem to be possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would be reloaded from the Config. Furthermore, that config is writeable, so if it has been changed in between, it should pick up those changes.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration on the fly on an already-created distributed object is not possible with the current version, though there is a plan to add this feature in a future release. Once created, the map configs stay at the node level, not at the cluster level.
As long as you are creating the distributed map fresh from the config, using it and destroying it, your approach should work without any issues.
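A minimal sketch of that create/use/destroy cycle (the map name and settings are illustrative, and it assumes the 3.6 API as described above):
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class DynamicMapDemo {
    public static void main(String[] args) {
        Config config = new Config();
        config.addMapConfig(new MapConfig("sessions").setBackupCount(2));
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        IMap<String, String> sessions = hz.getMap("sessions"); // proxy created from the config
        sessions.put("user-1", "alice");
        sessions.destroy();                                    // proxy and data are gone

        // adjust the (writable) config before re-creating a map under the same name
        hz.getConfig().getMapConfig("sessions")
          .setBackupCount(1)
          .setEvictionPolicy(EvictionPolicy.LRU);
        IMap<String, String> recreated = hz.getMap("sessions"); // fresh proxy, new settings
        recreated.put("user-2", "bob");

        hz.shutdown();
    }
}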

How to stop Nashorn from allowing the quit() function?

I'm trying to add a scripting feature to our system where untrusted users can write simple scripts and have them execute on the server side. I'm trying to use Nashorn as the scripting engine.
Unfortunately, they added a few non-standard features to Nashorn:
https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html#sthref29
Scroll down to "Additional Nashorn Built-in Functions" and see the "quit()" function. Yup, if an untrusted user runs this code, the whole JVM shuts down.
This is strange, because Nashorn specifically anticipates running untrusted scripts. See: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/api.html#classfilter_introduction
Applications that embed Nashorn, in particular, server-side JavaScript frameworks, often have to run scripts from untrusted sources and therefore must limit access to Java APIs. These applications can implement the ClassFilter interface to restrict Java class access to a subset of Java classes.
Is there any way to prevent this behavior? How do I prevent users from running any of the additional functions?
Unfortunately, there is currently no way to control creation of non-standard global functions. One workaround is to simply delete these functions from the global object after the ScriptEngine has been initialized:
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

final NashornScriptEngineFactory factory = new NashornScriptEngineFactory();
final ScriptEngine engine = factory.getScriptEngine();
final Bindings bindings = engine.getBindings(ScriptContext.ENGINE_SCOPE);
// remove the non-standard global functions so scripts can no longer call them
bindings.remove("print");
bindings.remove("load");
bindings.remove("loadWithNewGlobal");
bindings.remove("exit");
bindings.remove("quit");
System.err.println(engine.eval("'quit is ' + typeof quit"));
If you are using the Nashorn shell, a simple delete quit; will do.
If you are using the ScriptEngine interface and create multiple bindings, you'll have to do this with every global object/binding you create.
If you're going to run "untrusted" scripts, please run your program with the SecurityManager turned on. With that, quit() would result in a SecurityException. ClassFilter by itself is not a replacement for the SecurityManager; it is meant to be used along with the SecurityManager. Please check the JEP on ClassFilter here: http://openjdk.java.net/jeps/202. The JEP clearly states this:
Make security managers redundant for scripts. Embedding applications should still turn on security management before evaluating scripts from untrusted sources. Class filtering alone will not provide a complete script "sandbox." Even if only untrusted scripts (with no additional Java classes) are executed, a security manager should still be utilized. Class filtering provides finer control beyond what a security manager provides. For example, a Nashorn-embedding application may prevent the spawning of threads from scripts or other resource-intensive operations that may be allowed by security manager.
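A minimal sketch of that advice (the class name is arbitrary, and policy tuning for your own application code is left out):
import javax.script.ScriptEngine;
import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

public class UntrustedScriptDemo {
    public static void main(String[] args) throws Exception {
        // install a security manager before evaluating untrusted scripts
        // (same effect as running the JVM with -Djava.security.manager)
        System.setSecurityManager(new SecurityManager());

        ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine();
        try {
            engine.eval("quit()"); // System.exit is now subject to checkExit()
        } catch (SecurityException expected) {
            System.err.println("quit() was blocked: " + expected);
        }
    }
}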

How far should I go with puppet?

I'd like to preface by saying I'm new to Puppet. I have been working with it via Vagrant and am starting to feel comfortable writing manifests, but I lack perhaps the experience, or intuition, that can answer my question.
I am trying to grasp how puppet is scoped, and where lines are drawn. I am specifically interested in how this applies to modules and their creation and use.
A more concrete example: the puppetlabs-nginx module. So hypothetically I'm going along my merry way, creating a manifest for a given server role; say it's a dead-simple static-file webserver, and I'd like to use nginx. The module will clearly help me with that; there's try_files support and such. I can even ramp up to reverse proxying via this module. But what if things get stickier? What if there's something I want to do programmatically that I cannot do with the module?
Well, perhaps the short answer is to fix it myself, do a pull request, and go along my merry way. But where does that stop? Is the goal of a community Puppet module to support every facet of a given software package? That seems unmanageable. On the other hand, doesn't that create a bunch of mostly-baked modules, built solely from use cases?
Then, there's an analog to Android UI: there are setter methods for what I think are most XML UI definitions. In Puppet it feels similar. You can build a config file programmatically, or create it by filling in an ERB template. In other words, I feel the line in Puppet between programmatic creation of configuration files and templated creation of configuration files is blurred; I found no best way with Android, and so I don't know which way to go with Puppet.
So, to the question: what constitutes the ideal Puppet module? Should it rely more on templates? On the manifest? Should it account for all configuration scenarios?
From a further-withdrawn perspective it almost seems I want something more opinionated. Puppet's power seems to be flexibility and abstraction, but the modules that are out there feel inconsistent and not as fleshed out.
Thanks for reading...
Thanks to Mark. In just a small amount of time I've switched over to playing with Chef, and the modules seem better with regard to many of the concerns I voiced.
In short, I can explain Puppet.
Puppet is an IT automation tool: we install software on other machines by writing manifests (recipes or scripts) on the master for the software that is to be installed on a target machine.
Here, the master is where the Puppet manifests for the software are implemented; the target machine is the agent where the software gets installed.
A Puppet module has the following structure, which we set up on the master.
On the master, modules live under /etc/puppet/modules. You mentioned the puppetlabs-nginx module, so we can take that module as an example.
Inside the module directory we create files and manifests directories; furthermore, in the manifests directory we create .pp files, for instance install.pp and uninstall.pp. That is how the module structure looks. We usually write these manifests using a few resource types such as package, service, file and exec.
Templates play a minor role in Puppet manifests, just to hard-code values; they are not a major part of Puppet. Manifests are of great importance in Puppet.
For automating any software with Puppet we can follow the structure above.
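For instance, a minimal install.pp along those lines might look like this (the module and package names are just examples):
# /etc/puppet/modules/nginx/manifests/install.pp
class nginx::install {
  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }
}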
Thank you.
The PuppetLabs solution here is to use different types of modules for each function -- Components, Profiles, and Roles. See the presentation Designing Puppet: Roles/Profiles Pattern for more information.
The modules available from PuppetForge are of the "Component" type. Their purpose is to be as flexible as possible, while focusing on a single logical piece of software -- e.g., either the apache httpd server OR apache tomcat, but not both.
The kinds of modules that you might write to wrap around one of those Component modules would be a perfect example of a "Profile" module. This could tie apache httpd together along with tomcat and jboss and maybe also some other components like mysql and php. But it's one logical stack, with multiple components. Or maybe you have a LAMP Profile module, and a separate tomcat+jboss Profile module.
The next level up is "Role" modules. They shouldn't have anything in them other than "include" statements that point at the appropriate Profile modules.
See the PuppetLabs presentation for more details, but this logic is pretty similar to what is seen in the Chef world with "wrapper cookbooks".
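A rough sketch of that layering (the class names and the baseline profile are hypothetical):
# profiles/manifests/static_site.pp -- a Profile wires Component modules together
class profiles::static_site {
  include nginx                  # the Component module from the Forge

  file { '/var/www/static':
    ensure => directory,
  }
}

# roles/manifests/static_webserver.pp -- a Role contains nothing but includes
class roles::static_webserver {
  include profiles::base         # hypothetical shared baseline profile
  include profiles::static_site
}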
