Groovy Postbuild does not execute scripts on Jenkins - groovy

I've written a simple Groovy script, but I don't know how to execute it on Jenkins.
Here is the script:
String jbN = System.getenv('JOB_NAME')
println jbN
println "Hello"
I would expect to receive at least "Hello", but the script produces no output. All I get is: Build step 'Groovy Postbuild' marked build as failure (or success)
It seems that the script is not executed.
EDIT:
I didn't mention it, but I already have a script that analyzes the logs, so I need to execute it post-build.
The problem is bigger than I thought. The "Scriptler" and "Groovy" plugins do not print anything either.
The script whose output I'm trying to print:
String jbN = System.getenv('JOB_NAME')
println jbN

I found the solution:
The script was executed, but its output wasn't printed to the console output.
To print to the console output, you need to write:
manager.listener.logger.println("Some string") instead of println.
To make this shorter, do:
logger = manager.listener.logger.&println
// and call like this:
logger("test log message")
EDIT: adding a logger example and describing how to get env vars (and how not to get them), to hopefully save people some debugging time; this is simple but awkward stuff.
To get the workspace you can go through the manager object, like this:
manager.build.workspace
To get env vars, this does not work:
String jbN = System.getenv('JOB_NAME')
It shows jbN is null.
That makes sense as JOB_NAME is not an actual system environment var.
This also does not work to get env vars; an exception is thrown:
${manager.envVars['WORKSPACE']}
This does work to get Jenkins job "env vars" like WORKSPACE, JOB_NAME, BUILD_NAME:
def build = Thread.currentThread().executable
workspace = build.getEnvVars()["WORKSPACE"]
Example of use: you can call a Groovy script in the workspace like this:
evaluate(new File(manager.build.workspace.toString() + "/dirinworkspace/scriptname.groovy"))
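Putting the pieces above together, a minimal Groovy Postbuild sketch (using only the manager binding and the build object already shown above) might look like this:
// Shortcut for printing to the build's console output.
def logger = manager.listener.logger.&println

// Jenkins job variables come from the build, not from System.getenv().
def build = Thread.currentThread().executable
def jobName   = build.getEnvVars()['JOB_NAME']
def workspace = build.getEnvVars()['WORKSPACE']

logger("Postbuild for ${jobName}, workspace: ${workspace}")

// Optionally run another Groovy script kept in the workspace:
// evaluate(new File(manager.build.workspace.toString() + "/dirinworkspace/scriptname.groovy"))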

In your case you want to use the Groovy plugin rather than the Groovy Postbuild plugin.
The Groovy Postbuild plugin is meant to change the build result (post-build).
The Groovy plugin is meant to run simple Groovy scripts inside your job.
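For example, the snippet from the question should work unchanged as the Groovy plugin's "Execute Groovy script" build step, since that step runs a separate Groovy process that inherits the job's environment variables (a hedged sketch; step names depend on the plugin version):
// In an "Execute Groovy script" build step the forked process sees the
// job's environment, so System.getenv works and println goes to the console.
String jbN = System.getenv('JOB_NAME')
println jbN        // prints the job name
println "Hello"    // appears in the build's console output
An "Execute system Groovy script" step, by contrast, runs inside the Jenkins JVM, which would explain why System.getenv('JOB_NAME') comes back null there, just as in the Postbuild case above.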

Click Manage Jenkins-->Script Console
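The Script Console is a system Groovy shell running inside the Jenkins JVM, and println output is shown directly on the page. A small hedged example of something you might run there:
import jenkins.model.Jenkins
// Output appears in the result area below the script box.
println "Hello from the Script Console"
println Jenkins.instance.rootDir   // the Jenkins home directory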

Related

How to invoke a java program with classpath from Python 3.x

I am trying to execute an external Java program from a Python 3.7 program using the java command with a classpath. I am using subprocess.Popen in Python, but somehow I can't get it working. I'd appreciate any assistance!
cmd = ['java',
'-classpath', 'C:/Users/Documents/MqTransfer.jar', 'C:/Users/Documents/com.ibm.mq.commonservices.jar',
'C:/Users/Documents/com.ibm.mq.headers.jar', 'C:/Users/Documents/com.ibm.mq.jar',
'C:/Users/Documents/com.ibm.mq.jmqi.jar', 'C:/Users/Documents/com.ibm.mq.pcf.jar',
'C:/Users/Documents/connector.jar', 'C:/Users/Documents/xerces.jar',
'MyMqTransfer', 'C:/Users/Documents/queueTransfer.properties']
jproc = subprocess.Popen(cmd, stdout=PIPE, stderr=PIPE)
output, errors = jproc.communicate()
print(output, errors)
I am getting the error below:
b'' b'Error: Could not find or load main class C:.Users.Documents.com.ibm.mq.commonservices.jar\r\n'
When I run the Java program from my batch script, it runs fine. This is the command I use in the batch script, so the issue is with my Python code:
java -classpath MqTransfer.jar;com.ibm.mq.commonservices.jar;com.ibm.mq.headers.jar;com.ibm.mq.jar;com.ibm.mq.jmqi.jar;com.ibm.mq.pcf.jar;connector.jar;xerces.jar com.ibm.my.mq.MyMqTransfer C:\Users\Documents\queueTransfer.properties
Based on the error, I believe the process being executed is something like 'java -classpath C:/Users/Documents/MqTransfer.jar C:/Users/Documents/com.ibm.mq.commonservices.jar [followed by the rest of the arguments you are passing]', so that java takes MqTransfer.jar as the entire classpath argument and treats 'C:.Users.Documents.com.ibm.mq.commonservices.jar' as the class to launch. Try combining your entire intended classpath into the third element of the command list and I think you will be good. It would look something like this:
cmd = ['java',
'-classpath', 'C:/Users/Documents/MqTransfer.jar;C:/Users/Documents/com.ibm.mq.commonservices.jar;C:/Users/Documents/com.ibm.mq.headers.jar;C:/Users/Documents/com.ibm.mq.jar;C:/Users/Documents/com.ibm.mq.jmqi.jar;C:/Users/Documents/com.ibm.mq.pcf.jar;C:/Users/Documents/connector.jar;C:/Users/Documents/xerces.jar',
'MyMqTransfer', 'C:/Users/Documents/queueTransfer.properties']

Scala.js - pass command line arguments from SBT run

When developing a normal JVM app, I can pass command-line arguments to sbt using run <args>. When I try the same with Scala.js, I get the error "No valid parser available". When trying the runMain variant, like runMain Main.main arg, the error is "Expected non-whitespace character", with an arrow pointing just behind Main.main.
Is there some way to pass arguments to the Scala.js / Node.js application when running it from sbt?
(I am using Scala.js 0.6.15).
No, there isn't, because JavaScript does not have a notion of command-line arguments. Node.js does, but only if started from the command-line, and that use case is not supported by the sbt plugin, I'm afraid.
Feel free to file a feature request. I'm not sure it can be accommodated, but we can look into it eventually.
You can define a custom task that calls Node.js and parses its arguments with sbt parsers. Add this to build.sbt:
import complete.DefaultParsers._
lazy val runa = inputKey[Unit]("Run app with arguments")
runa := {
  (fastOptJS in Compile).value // build it first
  val args: Seq[String] = spaceDelimited("<arg>").parsed
  val npmRun = "node index.js" + args.map("\"" + _ + "\"").mkString(" ", " ", "")
  npmRun.!
}
You also need to create a file index.js in your project root, containing something like this:
require("./target/scala-2.12/xxxx-jsdeps.js");
require("./target/scala-2.12/xxxx-fastopt.js");
In the intervening years, a library has emerged to address this:
https://ben.kirw.in/decline/

"invalid option" error when running cucumber with "--tags"

I've been playing around with Cucumber for about three weeks now, and everything works well, except this little thing here.
Whenever I run my tests with e.g. cucumber checkout.feature --tags @monthly, I get the following on my console after the tests have run successfully:
invalid option: --tags
Test::Unit automatic runner.
Usage: /Users/myusername/.rvm/gems/ruby-2.0.0-p0/bin/cucumber [options] [-- untouched arguments]
-r, --runner=RUNNER Use the given RUNNER.
(c[onsole], e[macs], x[ml])
--collector=COLLECTOR Use the given COLLECTOR.
(de[scendant], di[r], l[oad], o[bject]_space)
-n, --name=NAME Runs tests matching NAME.
(patterns may be used).
--ignore-name=NAME Ignores tests matching NAME.
(patterns may be used).
-t, --testcase=TESTCASE Runs tests in TestCases matching TESTCASE.
(patterns may be used).
--ignore-testcase=TESTCASE Ignores tests in TestCases matching TESTCASE.
(patterns may be used).
--location=LOCATION Runs tests that defined in LOCATION.
LOCATION is one of PATH:LINE, PATH or LINE
--attribute=EXPRESSION Runs tests that matches EXPRESSION.
EXPRESSION is evaluated as Ruby's expression.
Test attribute name can be used with no receiver in EXPRESSION.
EXPRESSION examples:
!slow
tag == 'important' and !slow
--[no-]priority-mode Runs some tests based on their priority.
--default-priority=PRIORITY Uses PRIORITY as default priority
(h[igh], i[mportant], l[ow], m[ust], ne[ver], no[rmal])
-I, --load-path=DIR[:DIR...] Appends directory list to $LOAD_PATH.
--color-scheme=SCHEME Use SCHEME as color scheme.
(d[efault])
--config=FILE Use YAML fomat FILE content as configuration file.
--order=ORDER Run tests in a test case in ORDER order.
(a[lphabetic], d[efined], r[andom])
--max-diff-target-string-size=SIZE
Shows diff if both expected result string size and actual result string size are less than or equal SIZE in bytes.
(1000)
-v, --verbose=[LEVEL] Set the output level (default is verbose).
(important-only, n[ormal], p[rogress], s[ilent], v[erbose])
--[no-]use-color=[auto] Uses color output
(default is auto)
--progress-row-max=MAX Uses MAX as max terminal width for progress mark
(default is auto)
--no-show-detail-immediately Shows not passed test details immediately.
(default is yes)
--output-file-descriptor=FD Outputs to file descriptor FD
-- Stop processing options so that the
remaining options will be passed to the
test.
-h, --help Display this help.
Deprecated options:
--console Console runner (use --runner).
I probably didn't need to put all of that here, but I wanted to give you an impression of how much text appears on my screen after each test, which can be a bit distracting.
Here is my setup:
Gemfile
source 'https://rubygems.org'
gem "rspec"
gem "cucumber"
gem "capybara"
gem "capybara-webkit"
gem "selenium"
gem "selenium-client"
gem "selenium-webdriver"
env.rb
require_relative '../../../config.rb'
require 'capybara/cucumber'
require 'capybara/rspec'
Capybara.app_host = AT_ROOT
Capybara.default_driver = :selenium
Capybara.javascript_driver = :webkit
Capybara.default_wait_time = DEFAULT_WAIT_TIME
Capybara.ignore_hidden_elements = IGNORE_HIDDEN_ELEMENTS
# Define window size of the browser here
Capybara.current_session.driver.browser.manage.window.resize_to(DEFAULT_WINDOW_HEIGHT, DEFAULT_WINDOW_WIDTH)
I couldn't find any connection to the Test::Unit automatic runner in my setup, but apparently it has something to do with it.
Do you have any idea what this could be? I found some threads related to this issue, but unfortunately they didn't help me.
Thank you
Try:
cucumber features -t @monthly

SCons manual build step

Is it possible to get SCons to remind me to perform a manual step, using its dependency tracking?
My build uses the .swc output from a .fla, which you can't produce from the command line.
I tried something like:
env.Command(target, sources + SHARED_SOURCES,
            Action(lambda target, source, env: 1, "Out of date: $TARGET"))
But with that method, I have to use Decider('make') or I get:
$ scons --debug=explain
scons: rebuilding `view_bin\RoleplaySkin.swc' because `view_src\RoleplaySkin.fla' changed
Out of date: view_bin\RoleplaySkin.swc
scons: *** [view_bin\RoleplaySkin.swc] Error 1
And, more importantly, SCons never realizes its cache is out of date, so any change in the Environment or sources since it wrote the signature in .sconsign.dblite means it will always try to rebuild (and therefore always fail).
What about using the Precious method to protect the *.swc output before converting it into a *.fla?
How about creating your own RemindMe builder which reminds you and fails to build the target?
It would look something like this:
import os

def remind_me(target, source, env):
    os.remove(target[0].abspath)  # we do not build, we destroy
    print("This is a friendly reminder: %s is out of date, run the manual build step" % source[0])
    return None

reminder = Builder(action = remind_me,
                   suffix = '.swc',
                   src_suffix = '.fla')
env = Environment(BUILDERS = {'RemindMe': reminder})

# Run the builder like this
swc_file = env.RemindMe('some_fla_file')
final_target = env.BuildWithSWC(some_other_target, swc_file)
This is however only a theory, I have never tried actually deleting the target instead of creating it. It might be worth a try at least.

How do you get the path of the running script in groovy?

I'm writing a Groovy script that I want to be controlled via a properties file stored in the same folder. However, I want to be able to call this script from anywhere. When I run the script, it always looks for the properties file based on where it is run from, not where the script is located.
How can I access the path of the script file from within the script?
You are correct that new File(".").getCanonicalPath() does not work. That returns the working directory.
To get the script directory
scriptDir = new File(getClass().protectionDomain.codeSource.location.path).parent
To get the script file path
scriptFile = getClass().protectionDomain.codeSource.location.path
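Applied to the original question (a properties file kept next to the script), a small sketch using that approach; the file name config.properties is just an example:
// Load a properties file that sits in the same folder as the running script.
def scriptDir = new File(getClass().protectionDomain.codeSource.location.path).parent
def props = new Properties()
new File(scriptDir, 'config.properties').withInputStream { props.load(it) }
println props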
As of Groovy 2.3.0 the @SourceURI annotation can be used to populate a variable with the URI of the script's location. This URI can then be used to get the path to the script:
import groovy.transform.SourceURI
import java.nio.file.Path
import java.nio.file.Paths
@SourceURI
URI sourceUri
Path scriptLocation = Paths.get(sourceUri)
Note that this will only work if the URI is a file: URI (or another URI scheme type with an installed FileSystemProvider), otherwise a FileSystemNotFoundException will be thrown by the Paths.get(URI) call. In particular, certain Groovy runtimes such as groovyshell and nextflow return a data: URI, which will not typically match an installed FileSystemProvider.
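When the URI is a plain file: URI, the script's own directory falls out directly from the snippet above (a small usage sketch):
println scriptLocation          // full path of the running script
println scriptLocation.parent   // directory containing the script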
This makes sense if you are running the Groovy code as a script, otherwise the whole idea gets a little confusing, IMO. The workaround is here: https://issues.apache.org/jira/browse/GROOVY-1642
Basically this involves changing startGroovy.sh to pass in the location of the Groovy script as an environment variable.
As long as this information is not provided directly by Groovy, it's possible to modify the groovy.(sh|bat) starter script to make this property available as a system property:
For unix boxes just change $GROOVY_HOME/bin/groovy (the sh script) to do
export JAVA_OPTS="$JAVA_OPTS -Dscript.name=$0"
before calling startGroovy
For Windows:
In startGroovy.bat, add the following two lines right after the line with the :init label (just before the parameter slurping starts):
@rem get name of script to launch with full path
set GROOVY_SCRIPT_NAME=%~f1
A bit further down in the batch file, after the line that says set JAVA_OPTS=%JAVA_OPTS% -Dgroovy.starter.conf="%STARTER_CONF%", add the line:
set JAVA_OPTS=%JAVA_OPTS% -Dscript.name="%GROOVY_SCRIPT_NAME%"
For Gradle users
I had the same issue when I started working with Gradle. I wanted to compile my Thrift files with a remote Thrift compiler (customized by my company).
Below is how I solved it:
task compileThrift {
    doLast {
        def projectLocation = projectDir.getAbsolutePath(); // HERE is what you've been looking for.
        ssh.run {
            session(remotes.compilerServer) {
                // Delete existing thrift file.
                cleanGeneratedFiles()
                new File("$projectLocation/thrift/").eachFile() { f ->
                    def fileName = f.getName()
                    if (f.absolutePath.endsWith(".thrift")) {
                        put from: f, into: "$compilerLocation/$fileName"
                    }
                }
                execute "mkdir -p $compilerLocation/gen-java"
                def compileResult = execute "bash $compilerLocation/genjar $serviceName", logging: 'stdout', pty: true
                assert compileResult.contains('SUCCESSFUL')
                get from: "$compilerLocation/$serviceName" + '.jar', into: "$projectLocation/libs/"
            }
        }
    }
}
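Stripped of the ssh plumbing, the part that answers the question is just Gradle's projectDir property; a minimal build.gradle sketch (the task name is illustrative):
task printProjectDir {
    doLast {
        // projectDir points at the directory containing build.gradle
        println projectDir.absolutePath
    }
}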
One more solution. It works perfectly even if you run the script using GroovyConsole:
File getScriptFile() {
    new File(this.class.classLoader.getResourceLoader().loadGroovySource(this.class.name).toURI())
}
println getScriptFile()
Workaround: in our case the script was running in an ANT environment, so we stored a known ancestor directory (we knew the subpath) in the Java system properties (System.setProperty("dirAncestor", "/foo")) and could then access that ancestor dir via Groovy's properties.get('dirAncestor').
Maybe this will help for some of the scenarios mentioned here.
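A minimal sketch of that workaround (the property name dirAncestor is taken from the comment above; the launcher sets it, the script reads it back):
// Set once by the launching environment (e.g. from the ANT/Java side):
System.setProperty('dirAncestor', '/foo')

// Later, inside the Groovy script:
def ancestor = System.getProperty('dirAncestor')
println "Script files live somewhere under ${ancestor}"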
