How to stop Nashorn from allowing the quit() function? - nashorn

I'm trying to add a scripting feature to our system where untrusted users can write simple scripts and have them execute on the server side. I'm trying to use Nashorn as the scripting engine.
Unfortunately, they added a few non-standard features to Nashorn:
https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html#sthref29
Scroll down to "Additional Nashorn Built-in Functions" and see the "quit()" function. Yup, if an untrusted user runs this code, the whole JVM shuts down.
This is strange, because Nashorn specifically anticipates running untrusted scripts. See: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/api.html#classfilter_introduction
Applications that embed Nashorn, in particular, server-side JavaScript
frameworks, often have to run scripts from untrusted sources and
therefore must limit access to Java APIs. These applications can
implement the ClassFilter interface to restrict Java class access to a
subset of Java classes.
Is there any way to prevent this behavior? How do I prevent users from running any of the additional functions?

Unfortunately, there is currently no way to control creation of non-standard global functions. One workaround is to simply delete these functions from the global object after the ScriptEngine has been initialized:
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

final NashornScriptEngineFactory factory = new NashornScriptEngineFactory();
final ScriptEngine engine = factory.getScriptEngine();
final Bindings bindings = engine.getBindings(ScriptContext.ENGINE_SCOPE);
bindings.remove("print");
bindings.remove("load");
bindings.remove("loadWithNewGlobal");
bindings.remove("exit");
bindings.remove("quit");
System.err.println(engine.eval("'quit is ' + typeof quit")); // prints: quit is undefined
If you are using the Nashorn shell, a simple delete quit; will do.
If you are using the ScriptEngine interface and create multiple bindings, you'll have to do this with every global object/binding you create.
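Since every fresh global/bindings instance needs the same cleanup, the removal can be factored into a small helper. A minimal sketch (class and method names are my own, and the engine lookup is guarded because Nashorn was removed from the JDK in Java 15):

```java
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import java.util.Arrays;
import java.util.List;

public class NashornGlobals {
    // Non-standard Nashorn globals to strip; extend as needed.
    static final List<String> BLOCKED =
            Arrays.asList("print", "load", "loadWithNewGlobal", "exit", "quit");

    // Remove the non-standard built-ins from one bindings instance.
    // Call this for every global object/bindings you create.
    public static void strip(Bindings bindings) {
        for (String name : BLOCKED) {
            bindings.remove(name);
        }
    }

    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        if (engine == null) {
            // Nashorn is not present on JDK 15 and later.
            System.out.println("Nashorn not available on this JVM");
            return;
        }
        strip(engine.getBindings(ScriptContext.ENGINE_SCOPE));
        System.out.println(engine.eval("'quit is ' + typeof quit"));
    }
}
```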

If you're going to run "untrusted" scripts, please run your program with the SecurityManager turned on. With it enabled, quit() would have resulted in a SecurityException. ClassFilter by itself is not a replacement for the SecurityManager; it is intended to be used along with the SecurityManager. Please check the JEP on ClassFilter here: http://openjdk.java.net/jeps/202. The JEP clearly states this (under its non-goals):
Make security managers redundant for scripts. Embedding applications should still turn on security management before evaluating scripts from untrusted sources. Class filtering alone will not provide a complete script "sandbox." Even if only untrusted scripts (with no additional Java classes) are executed, a security manager should still be utilized. Class filtering provides finer control beyond what a security manager provides. For example, a Nashorn-embedding application may prevent the spawning of threads from scripts or other resource-intensive operations that may be allowed by security manager.
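For illustration, turning the security manager on for an embedding application could look roughly like this. All paths, class names, and permissions here are hypothetical; grant only what your host code actually needs, and nothing to the scripts:

```
// Launch the host JVM with a security manager and a restrictive policy:
// java -Djava.security.manager -Djava.security.policy=nashorn.policy com.example.ScriptHost

// nashorn.policy -- minimal sketch of a policy file: the host code base
// gets only the permissions it needs; script code gets no permissions,
// so quit()/exit() fail with a SecurityException instead of killing the JVM.
grant codeBase "file:/opt/scripthost/app.jar" {
    permission java.util.PropertyPermission "*", "read";
};
```

Note that the security manager has since been deprecated for removal in recent JDK releases (JEP 411), so this advice is specific to the Java 8 era this question targets.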

Related

Using Terraform as an API

I would like to use Terraform programmatically, like an API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there are any recommended ways/best practices for creating the configuration to support this. So far it seems that my options are:
Properly define input/output, and heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common.
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, Travis CI, etc. to manage the release of your infrastructure managed by Terraform. The reason is that you should treat your Terraform code in exactly the same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come integrated with a standard API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is to use a tool such as RunDeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege control system that only allows specified users to execute commands. Alternatively, you could upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for using an API to automate the creation/teardown of your infrastructure with Terraform, these are the same regardless of which tools you are using. You mentioned some good ones already, such as clearly defining input/output and creating a separation of concerns. Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
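To illustrate the module point, here is a minimal sketch of a shared module being reused with explicit inputs and outputs. All resource, variable, and path names are made up, and the aws_eip resource assumes the AWS provider is configured:

```hcl
# modules/eip/main.tf -- hypothetical reusable module that reserves EIPs
variable "eip_count" {}

resource "aws_eip" "this" {
  count = var.eip_count
}

output "public_ips" {
  value = aws_eip.this[*].public_ip
}

# main.tf -- each infrastructure step calls the module with explicit inputs,
# so one tested module backs every environment that needs EIPs
module "eips" {
  source    = "./modules/eip"
  eip_count = 2
}
```

Because the module exposes its results only through outputs, a later step (or an orchestrating tool) can consume `module.eips.public_ips` without knowing how the addresses were created.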

Using the Jenkins Java API in a Workflow Script

I'm trying to take advantage of the Jenkins Java API from within a Workflow groovy script.
I'm finding it very difficult to get to grips with what I can and can't do. Are there any good resources on how to do this?
At the moment what I'm trying to do is get the workspace path, I've got as far as
def jenkins = Jenkins.instance;
def build = jenkins.getItem(env.JOB_NAME).getBuild(env.BUILD_NUMBER)
But this seems to be a dead end, there doesn't seem to be anything useful you can actually do with these objects.
If anyone can point me at any resources giving examples of the kind of useful things that can be done like this, or help with my specific problem of getting the workspace path that would be great.
You can use the standard Workflow step pwd() to get the workspace path, without any use of Jenkins APIs.
As far as other cases are concerned, there is no particular document summarizing what you can do with Jenkins APIs from a Workflow script, because it is simply whatever the Jenkins APIs allow you to do generally (see Javadoc). Two caveats you need to be aware of:
Almost all such calls will be rejected by the Groovy sandbox (and many are not safe to whitelist). This means that you cannot write such scripts in a secured Jenkins installation unless you are an administrator.
Most API objects will not be Serializable, which means that you must encapsulate their use in a method marked with the @NonCPS annotation. The method may take as parameters, and return, only serializable (or primitive) types.
Currently there is no supported way to get access to Jenkins model objects which are defined in a local workspace context. JENKINS-28385 would help. In the meantime, there are often workarounds; for example, if you want the Node you are using inside a node {} block, you can use Jenkins.instance.getNode(env.NODE_NAME).
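The Serializable caveat can be sketched as it might appear in a Workflow script. This only runs inside Jenkins, the method name is invented, and the sandbox/whitelisting caveat above still applies:

```groovy
// Encapsulate non-Serializable Jenkins API objects in a @NonCPS method;
// take and return only serializable values.
@NonCPS
def labelOfNode(String nodeName) {
    def n = Jenkins.instance.getNode(nodeName)   // Node is not Serializable
    return n?.labelString                        // a String is safe to return
}

node {
    echo "workspace: ${pwd()}"                   // standard step, no Jenkins API needed
    echo "labels: ${labelOfNode(env.NODE_NAME)}"
}
```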
This doesn't answer your overarching question, but env.WORKSPACE will give you the absolute path to the workspace :) Open the Snippet Generator in a Workflow job config, scroll down, and you will see a list of all the available environment variables.
If you are trying to get the workspace path for the purpose of reading some files, you should really be using a Job DSL and readFileFromWorkspace(filePath).
Just having a workspace path and trying to read a file with new File(filePath) may not work if you are using slaves.
More details are here
https://github.com/jenkinsci/job-dsl-plugin/wiki/Job-DSL-Commands

The Intern: Preferred method of accessing Capabilities of the current session?

I'm writing an Intern Functional test suite, and I'd like to scan my environment for features in order to skip tests that aren't relevant to the environment. For example, I never want to run tests that involve touch interactions in browsers that are not touch capable.
My plan was to hook into Leadfoot's session object and pick up the capabilities property, but after some exploration in Node Inspector I've only been able to get it through this.remote.session, which has it hidden behind an underscore.
Is there a better way to get access to the current session's capabilities?

Intercepting events and controlling behaviour via events for BPEL runtime engine

I would like to:
1. intercept events, and
2. control behaviour via events, for a BPEL runtime engine.
May I know which BPEL runtime engines support this?
For 1: for example, when a service named "hello" is invoked, I would like to receive the event "invoke_hello" from the server.
For 2: for example, when the server has a parallel invocation of 3 services, "invoke_hello1", "invoke_hello2" and "invoke_hello3", I could control the behaviour by saying that only "invoke_hello1" is allowed to run.
I am interested in any BPEL engine that supports 1, or 2, or both, with a documentation page that roughly covers this (so I can make use of the feature).
Disclaimer: I haven't personally used the eventing modules of these engines, so I cannot guarantee that they work as they promise.
Concerning question 1 (event notification):
Apache ODE has support for Execution Events. These events go into its database, and you have several ways of retrieving events. You can:
query the database to read them.
use the engine's Management API to do this via Web Services
add your own event listener implementation to the engine's classpath.
ODE's events concern the lifecycle of activities in BPEL. So your "invoke_hello" should map to one of the ActivityXXX events in ODE.
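The event-listener route could look roughly like the pseudocode below. The interface and package names are from my recollection of ODE's documentation and its exact shape varies by ODE version, so treat this purely as a sketch and verify against the Javadoc of your release:

```java
// Sketch only: verify org.apache.ode.bpel.iapi.BpelEventListener and its
// method set against your ODE version before relying on this.
import org.apache.ode.bpel.evt.BpelEvent;
import org.apache.ode.bpel.iapi.BpelEventListener;

public class InvokeLogger implements BpelEventListener {
    public void onEvent(BpelEvent event) {
        // Filter for the activity lifecycle events you care about,
        // e.g. the ActivityXXX event corresponding to "invoke_hello".
        System.out.println("ODE event: " + event);
    }
}
```

The listener class is then placed on the engine's classpath and registered via ODE's configuration (e.g. an event-listeners property in ode-axis2.properties, again per your version's docs).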
The Sun BPEL Service Engine included in OpenESB has some support for alerting, but the documentation is not that verbose concerning how to use it. Apparently, you can annotate activities with an alert level and events are generated when an activity is executed.
Concerning question 2 (controlling behaviour):
This is hard and I am not sure if any engine really supports this in normal execution mode. One straight-forward way of achieving this would be to execute the engine in debug mode and to manually control each step. So you could skip the continuation of "invoke_hello2" and "invoke_hello3" and just continue with "invoke_hello1".
As far as I know, ODE does not have a debugger. The Sun BPEL Service Engine on the other hand, has quite a nice one. It is integrated in the BPEL editor in Netbeans which is a visual editor (that uses BPMN constructs to visualize BPEL activities) and lets you jump from and to every activity.
Another option would be to manually code your own web service that intercepts messages and forwards these to the engine depending on your choice. However, as I understand your question, you would rather like an engine that provides this out of the box.
Apparently Oracle BPEL also has support for eventing and according to this tutorial also comes with a debugger, but I haven't used this engine personally so far, so I won't include it in this answer.

What is the difference between these technology-related terms?

What is the difference between the following terms? Knowing this can help a lot in interviews and for general understanding.
Framework
Library
IDE
API
Framework
A predefined architecture that a developer has chosen, and which dictates how the application will be written. It usually already includes many concepts that help the developer concentrate on the domain of the application instead of the plumbing; this plumbing is provided by the framework. For example, the .NET Framework provides out-of-the-box tools that allow you to talk to web servers without even knowing the internals of the TCP/IP protocol (actually it helps knowing the internals, but you get the point).
Library
A reusable compiled unit that can be redistributed and reused across various projects (not necessarily compiled in the case of dynamic languages).
IDE
It's the development environment where you create the other three (at minimum a text editor); it may also include a compiler and the ability to execute, debug and see the output of the program in order to speed up the development process.
API
Application Programming Interface. This could mean many things, but usually it is a set of functions put at the developer's disposal, which perform specific tasks and work only in a specific context.
An IDE is a tool for fast, easy and flexible development.
An API is provided for existing software; using it, third-party applications can interact with the main/primary application.
A framework and a library are broadly similar: both are a common set of functionality for other software to use.
Ref: wiki for Framework, API
Framework: a collection of libraries and programming practices to provide general functionality for a program, so that it doesn't have to be rewritten. Typically a framework for an application program will handle user display and input, among other things. The intent is usually to hide the more complex functionality of an application, and to encourage a certain style.
Library: A piece of software to provide certain functionality to other programs that call it. Typically designed to be reusable and modular, so that a library can be distributed and be useful without its source code.
Integrated Development Environment: An integrated set of tools to write programs and turn them into finished products, usually including at least an editor, compiler, linker, and debugger. IDEs sometimes provide support for frameworks.
Application Programming Interface: A set of function calls and sometimes variable accesses available to a program, typically being the public interface of one or more libraries.
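The library/API distinction can be made concrete with a toy example: the library is the reusable unit you ship, while its API is the public surface that callers program against (all names here are invented for illustration):

```java
// A tiny "library": one reusable, redistributable unit.
// Its API is the public method; the private helper is an
// internal detail that callers cannot see and cannot rely on.
public class GreetingLibrary {
    public static String greet(String name) {    // API: part of the public surface
        return "Hello, " + capitalize(name) + "!";
    }

    private static String capitalize(String s) { // internal, free to change
        if (s == null || s.isEmpty()) return s;
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(greet("world")); // prints: Hello, World!
    }
}
```

Swapping out the body of capitalize() would not break any caller, because only greet() is part of the API.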
