How do I zip files in Bazel?

I have a set of files as part of my repository. How do I produce a zip file from those files in Bazel? I found rules for tar.gz etc., but cannot find a way to achieve a zip archive.
I found references mentioning zipper but couldn't figure out how to load it and use it. Can someone more experienced with Bazel help?

A basic pkg_zip rule was added to rules_pkg recently. Here is a usage example from the unit tests:
load("#rules_pkg//:pkg.bzl", "pkg_zip")
pkg_zip(
name = "test_zip_basic",
srcs = [
"testdata/hello.txt",
"testdata/loremipsum.txt",
],
)
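Since the question also asks how to load the rules: here is a minimal WORKSPACE sketch, assuming a released rules_pkg tarball. The version and sha256 below are placeholders to be taken from an actual release:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_pkg",
    # Placeholders: substitute a real release version and its sha256.
    sha256 = "<sha256-of-release-tarball>",
    urls = ["https://github.com/bazelbuild/rules_pkg/releases/download/<version>/rules_pkg-<version>.tar.gz"],
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()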
You can specify the paths using the extra rules in mappings.bzl. Here is the example given by the Bazel team:
load("#rules_pkg//:mappings.bzl", "pkg_attributes", "pkg_filegroup", "pkg_files", "pkg_mkdirs", "strip_prefix")
load("#rules_pkg//:pkg.bzl", "pkg_tar", "pkg_zip")
# This is the top level BUILD for a hypothetical project Foo. It has a client,
# a server, docs, and runtime directories needed by the server.
# We want to ship it for Linux, macOS, and Windows.
#
# This example shows various techniques for specifying how your source tree
# transforms into the installation tree. As such, it favors using a lot of
# distict features, at the expense of uniformity.
pkg_files(
name = "share_doc",
srcs = [
"//docs",
],
# Required, but why?: see #354
strip_prefix = strip_prefix.from_pkg(),
# Where it should be in the final package
prefix = "usr/share/doc/foo",
)
pkg_filegroup(
name = "manpages",
srcs = [
"//src/client:manpages",
"//src/server:manpages",
],
prefix = "/usr/share",
)
pkg_tar(
name = "foo_tar",
srcs = [
"README.txt",
":manpages",
":share_doc",
"//resources/l10n:all",
"//src/client:arch",
"//src/server:arch",
],
)
pkg_zip(
name = "foo_zip",
srcs = [
"README.txt",
":manpages",
":share_doc",
"//resources/l10n:all",
"//src/client:arch",
"//src/server:arch",
],
)

The zipper utility is at @bazel_tools//tools/zip:zipper; this is its usage:
Usage: zipper [vxc[fC]] x.zip [-d exdir] [[zip_path1=]file1 ... [zip_pathn=]filen]
  v verbose - list all file in x.zip
  x extract - extract files in x.zip to current directory, or
      an optional directory relative to the current directory
      specified through -d option
  c create - add files to x.zip
  f flatten - flatten files to use with create or extract operation
  C compress - compress files when using the create operation
x and c cannot be used in the same command-line.
For every file, a path in the zip can be specified. Examples:
  zipper c x.zip a/b/__init__.py=             # Add an empty file at a/b/__init__.py
  zipper c x.zip a/b/main.py=foo/bar/bin.py   # Add file foo/bar/bin.py at a/b/main.py
If the zip path is not specified, it is assumed to be the file path.
So it can be used in a genrule like this:
$ tree
.
├── BUILD
├── dir
│   ├── a
│   ├── b
│   └── c
└── WORKSPACE
1 directory, 5 files
$ cat BUILD
genrule(
    name = "gen_zip",
    srcs = glob(["dir/*"]),
    tools = ["@bazel_tools//tools/zip:zipper"],
    outs = ["files.zip"],
    cmd = "$(location @bazel_tools//tools/zip:zipper) c $@ $(SRCS)",
)
$ bazel build :files.zip
INFO: Analyzed target //:files.zip (7 packages loaded, 41 targets configured).
INFO: Found 1 target...
Target //:files.zip up-to-date:
bazel-bin/files.zip
INFO: Elapsed time: 0.653s, Critical Path: 0.08s
INFO: 1 process: 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
$ unzip -l bazel-bin/files.zip
Archive:  bazel-bin/files.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2010-01-01 00:00   dir/a
        0  2010-01-01 00:00   dir/b
        0  2010-01-01 00:00   dir/c
---------                     -------
        0                     3 files
It can similarly be used in Starlark:
def _some_rule_impl(ctx):
    zipper_inputs = []
    zipper_args = ctx.actions.args()
    zipper_args.add("c", ctx.outputs.zip.path)
    # ... collect inputs and add zip_path=file mappings to zipper_args ...
    ctx.actions.run(
        inputs = zipper_inputs,
        outputs = [ctx.outputs.zip],
        executable = ctx.executable._zipper,
        arguments = [zipper_args],
        progress_message = "Creating zip...",
        mnemonic = "zipper",
    )

some_rule = rule(
    implementation = _some_rule_impl,
    attrs = {
        "deps": attr.label_list(),
        # The attribute name must start with "_" so that ctx.executable._zipper resolves.
        "_zipper": attr.label(
            default = Label("@bazel_tools//tools/zip:zipper"),
            cfg = "host",
            executable = True,
        ),
    },
    outputs = {"zip": "%{name}.zip"},
)
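For completeness, here is a hypothetical fully worked version of that sketch. It assumes the rule simply zips the default outputs of its deps, using each file's short path as its path inside the zip; the rule and helper names are made up for illustration:

def _zip_path(file):
    # zip_path=file_path pair, as understood by zipper.
    return file.short_path + "=" + file.path

def _zip_deps_impl(ctx):
    # Assumption: every dep contributes plain files via its default outputs.
    zipper_inputs = depset(transitive = [dep.files for dep in ctx.attr.deps]).to_list()
    zipper_args = ctx.actions.args()
    zipper_args.add("cC")  # create and compress
    zipper_args.add(ctx.outputs.zip.path)
    zipper_args.add_all(zipper_inputs, map_each = _zip_path)
    ctx.actions.run(
        inputs = zipper_inputs,
        outputs = [ctx.outputs.zip],
        executable = ctx.executable._zipper,
        arguments = [zipper_args],
        progress_message = "Creating zip...",
        mnemonic = "zipper",
    )

zip_deps = rule(
    implementation = _zip_deps_impl,
    attrs = {
        "deps": attr.label_list(),
        "_zipper": attr.label(
            default = Label("@bazel_tools//tools/zip:zipper"),
            cfg = "host",
            executable = True,
        ),
    },
    outputs = {"zip": "%{name}.zip"},
)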

Related

Failed build Yocto Gatesgarth "extensible SDK" (eSDK) - populate_sdk_ext fail

I'm working with Yocto "Gatesgarth" on a custom board based on the i.MX6ULL.
I'm facing some problems generating the extensible SDK (eSDK); generation of the normal SDK completes correctly.
Some details below.
Details of system:
Board based on NXP i.MX6ULL
Yocto version "Gatesgarth 3.2.4 (May 2021)"
BB_VERSION = "1.48.0",
NATIVELSBSTRING = "ubuntu-18.04"
DISTRO_VERSION = "5.10-gatesgarth"
meta-qt5 is present
Build environment based on Docker Container
Environment variables:
File: conf/local.conf
SDKMACHINE ?= 'x86_64'
File: test-image-mx6ull.bb
inherit core-image
inherit populate_sdk_qt5
inherit populate_sdk_ext
SDK_EXT_TYPE = "minimal"
SDK_INCLUDE_TOOLCHAIN = "1"
SDK_INCLUDE_PKGDATA = "0"
SDK_INCLUDE_NATIVESDK = "1"
The command executed is:
bitbake test-image-mx6ull -c populate_sdk_ext
Output:
ERROR: test-image-mx6ull-1.0-r0 do_populate_sdk_ext: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk_ext(d)
0003:
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 720, function: do_populate_sdk_ext
0716: bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting
SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
0717:
0718: d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
0719: if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
*** 0720: buildtools_fn = get_current_buildtools(d)
0721: else:
0722: buildtools_fn = None
0723: d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
0724: d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 556, function: get_current_buildtools
0552: import glob
0553: btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
0554: btfiles.sort(key=os.path.getctime)
0555: print("MY-DEBUG - btfiles = {} - SDK_DEPLOY = {}".format(btfiles, d.getVar('SDK_DEPLOY')))
*** 0556: return os.path.basename(btfiles[-1])
0557:
0558:def get_sdk_required_utilities(buildtools_fn, d):
0559: """Find required utilities that aren't provided by the buildtools"""
0560: sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
Exception: IndexError: list index out of range
DEBUG: Python function do_populate_sdk_ext finished
MY-DEBUG - btfiles = [] - SDK_DEPLOY = /yocto/build-mX6ull/tmp/deploy/sdk
Question:
At line 553 the btfiles array should be filled, but it comes back empty, so line 556 raises the exception.
I have no idea what is wrong, what I have forgotten, or which Yocto environment variables need to be set for this to work correctly.
I had a similar issue where I couldn't populate the eSDK; it's all in the GLIBC version. Update your GLIBC version.
In my case I had to update the GLIBC version to 2.33 in the yocto-uninative.inc file. It worked for me!
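For reference, a sketch of what that edit might look like in poky/meta/conf/distro/include/yocto-uninative.inc; the uninative version and checksums are placeholders that must match an actual uninative release:

UNINATIVE_MAXGLIBCVERSION = "2.33"
UNINATIVE_VERSION = "<matching uninative release>"

UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/${UNINATIVE_VERSION}/"
# Placeholder checksums: take the real values from the release you pin.
UNINATIVE_CHECKSUM[aarch64] ?= "<sha256>"
UNINATIVE_CHECKSUM[i686] ?= "<sha256>"
UNINATIVE_CHECKSUM[x86_64] ?= "<sha256>"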

Nextflow with Azure Batch - Cannot find a matching VM image

While trying to set up Nextflow with Azure Batch (nf-core), I am getting the following error. I tried this on multiple workflows (sarek, atacseq, etc.) and I get the same error -
N E X T F L O W ~ version 22.04.0
Pulling nf-core/atacseq ...
downloaded from https://github.com/nf-core/atacseq.git
Launching `https://github.com/nf-core/atacseq` [rhl6d5529] DSL1 - revision: 1b3a832db5 [1.2.1]
Downloading plugin nf-azure@0.13.1
----------------------------------------------------
,--./,-.
___ __ __ __ ___ /,-._.--~'
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/atacseq v1.2.1
----------------------------------------------------
Run Name : rhl6d5529
Data Type : Paired-End
Design File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/design.csv
Genome : Not supplied
Fasta File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genome.fa
GTF File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genes.gtf
Mitochondrial Contig : MT
MACS2 Genome Size : 1.2E+7
Min Consensus Reps : 1
MACS2 Narrow Peaks : No
MACS2 Broad Cutoff : 0.1
Trim R1 : 0 bp
Trim R2 : 0 bp
Trim 3' R1 : 0 bp
Trim 3' R2 : 0 bp
NextSeq Trim : 0 bp
Fingerprint Bins : 100
Save Genome Index : No
Max Resources : 6 GB memory, 2 cpus, 12h time per job
Container : docker - nfcore/atacseq:1.2.1
Output Dir : ./results
Launch Dir : /
Working Dir : /nextflow/atacseq/rhl6d5529
Script Dir : /.nextflow/assets/nf-core/atacseq
User : root
Config Profile : test,azurebatch
Config Description : Minimal test dataset to check pipeline function
Config Contact : Venkat Malladi (@vsmalladi)
Config URL : https://azure.microsoft.com/services/batch/
----------------------------------------------------
Uploading local `bin` scripts folder to az://nextflow/atacseq/rhl6d5529/tmp/66/bd55d79e42999df38ba04a81c3aa04/bin
[- ] process > CHECK_DESIGN -
[- ] process > CHECK_DESIGN [ 0%] 0 of 1
[- ] process > CHECK_DESIGN [ 0%] 0 of 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified
[58/55b7f7] process > CHECK_DESIGN (design.csv) [100%] 1 of 1, failed: 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified
I tried looking into the source code of nextflow. I found the error to be in AzBatchService.groovy (line number below).
https://github.com/nextflow-io/nextflow/blob/0e593e6ab82880810d8139a4fe6e3c47ff69a531/plugins/nf-azure/src/main/nextflow/cloud/azure/batch/AzBatchService.groovy#L442
I did some further digging in my Azure Batch account instance. Basically, I wanted to confirm that the list of supported images received from the Azure Batch account includes the one required for this pipeline. I could confirm that the server did indeed respond with the required image.
What could be the issue here? I remember running the exact same pipeline a few weeks back and it did work a few times. Am I missing something?
Just had another look through the Azure Cloud docs and think this might be relevant:

    By default, Nextflow creates CentOS 8-based pool nodes, but this behavior can be customised in the pool configuration. Below are the configurations for image reference/SKU combinations to select two popular systems.

    Ubuntu 20.04:

        sku = "batch.node.ubuntu 20.04"
        offer = "ubuntu-server-container"
        publisher = "microsoft-azure-batch"

    CentOS 8 (default):

        sku = "batch.node.centos 8"
        offer = "centos-container"
        publisher = "microsoft-azure-batch"
I think the issue here is a mismatched nodeAgentSkuId. Nextflow is expecting a CentOS 8 node agent SKU, but you have a CentOS 7 SKU. If it's not possible to change the nodeAgentSkuId somehow, the node agent SKU that Nextflow uses should be able to be overridden by adding this to your nextflow.config:
azure.batch.pools.<name>.sku = 'batch.node.centos 7'
Where <name> is the pool identifier:
azure.batch.pools.<name>.sku
Specify the ID of the Compute Node agent SKU which the pool identified with <name> supports (default: batch.node.centos 8, requires nf-azure@0.11.0).
https://www.nextflow.io/docs/edge/azure.html#advanced-settings
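A minimal sketch of how that override might look in nextflow.config, assuming a hypothetical pool named auto (substitute your own pool identifier):

azure {
    batch {
        pools {
            auto {
                sku = 'batch.node.centos 7'
            }
        }
    }
}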

Compiling Rocket in Bazel

I'm attempting to get a working prototype of the following scenario:
Language: Rust (rustc 1.45.0-nightly (ad4bc3323 2020-06-01))
Framework: Rocket v0.4.4
Build Tool: Bazel
Platform: Mac OS X / Darwin x64
Running bazel build //web-api yields the error below. Based on the Cargo.lock file, I believe it is because Rocket's dependency on the hyper library specifies a dependency on the log 0.3.9 library; for whatever reason it is not using the more recent log 0.4.x. That said, I don't know why it's pulling this library in, since if I build manually it works fine.
ERROR: /private/var/tmp/_bazel_nathanielford/2a39169ea9f6eb02fe788b12f9eae88f/external/raze__log__0_3_9/BUILD.bazel:27:1: error executing shell command: '/bin/bash -c CARGO_MANIFEST_DIR=$(pwd)/external/raze__log__0_3_9 external/rust_darwin_x86_64/bin/rustc "$@" --remap-path-prefix="$(pwd)"=__bazel_redacted_pwd external/raze__log__0_3_9/src/lib.rs -...' failed (Exit 1) bash failed: error executing command /bin/bash -c 'CARGO_MANIFEST_DIR=$(pwd)/external/raze__log__0_3_9 external/rust_darwin_x86_64/bin/rustc "$@" --remap-path-prefix="$(pwd)"=__bazel_redacted_pwd' '' external/raze__log__0_3_9/src/lib.rs ... (remaining 24 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
error[E0425]: cannot find function `set_logger` in crate `log`
--> external/raze__log__0_3_9/src/lib.rs:731:16
|
731 | match log::set_logger(&ADAPTOR) {
| ^^^^^^^^^^ not found in `log`
|
help: consider importing this function
|
204 | use set_logger;
|
The following is my directory structure:
/
|-WORKSPACE
|-BUILD # Empty
|-web-api/
| |-BUILD
| |-src/
| | |-main.rs
| |-cargo/
| |-Cargo.toml
| |-Cargo.lock
| |-BUILD.bazel
| |-remote/
| |-... (Cargo-raze files)
To set up cargo-raze I did the following, following the instructions from the GitHub page:
$ cd web-api/cargo
$ cargo generate-lockfile
$ cargo vendor --versioned-dirs --locked
$ cargo raze
(The generate-lockfile is what creates the Cargo.lock file, and cargo raze is what creates the BUILD.bazel file and all the contents of the remote subdirectory.)
And then to execute the bazel build I go back to the root and run bazel build //web-api, which produces the above error.
This is my WORKSPACE file:
workspace(name = "rocket-bazel")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "io_bazel_rules_rust",
    sha256 = "f21c67fc2fef9d57fa3c81fde1defd9e57d451883388c0a469ec1c470fd30dcb",
    strip_prefix = "rules_rust-master",
    urls = [
        "https://github.com/bazelbuild/rules_rust/archive/master.tar.gz",
    ],
)

http_archive(
    name = "bazel_skylib",
    sha256 = "9a737999532daca978a158f94e77e9af6a6a169709c0cee274f0a4c3359519bd",
    strip_prefix = "bazel-skylib-1.0.0",
    url = "https://github.com/bazelbuild/bazel-skylib/archive/1.0.0.tar.gz",
)

load("@io_bazel_rules_rust//rust:repositories.bzl", "rust_repositories")

rust_repositories(version = "nightly", iso_date = "2020-06-02")

load("@io_bazel_rules_rust//:workspace.bzl", "bazel_version")

bazel_version(name = "bazel_version")

load("//web-api/cargo:crates.bzl", "raze_fetch_remote_crates")

raze_fetch_remote_crates()
This is my web-api/BUILD file:
load("#io_bazel_rules_rust//rust:rust.bzl", "rust_binary")
rust_binary(
name = "web-api",
srcs = ["src/main.rs"],
deps = [
"//web-api/cargo:rocket",
],
)
I've run out of ideas as to what to try. I can get this to compile without Bazel, just using Cargo (though obviously the files are in slightly different places), and I can get it to compile inside a Docker container. I just can't get Bazel (necessarily with cargo-raze, in either vendor or remote mode) to run successfully. I assume there is some mismatch in the compile target or the nightly build that is not being set properly, but I'm not sure how to diagnose or get past that.
Here is a link to a repository with the files/structure I tried.
I had a similar issue when I made a minimal Bazel workspace with Rust and the log crate together with the env_logger crate. The same failure shows up when you try to compile log without features = ["std"]. I tried to enable that feature in Cargo.toml on the log dependency, without success.
My solution was that in Cargo.toml, under [raze], I added:
    default_gen_buildrs = true
I traced it down to this: when the default_gen_buildrs flag is not set, the generated BUILD.bazel file for the log crate has no cargo_build_script definition, nor this:
    crate_features = [
        "std",
    ],
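In context, the [raze] settings block might look like the sketch below; the workspace_path and genmode values are assumptions matching the layout described in the question:

# web-api/cargo/Cargo.toml
[raze]
# Assumed to match this project's layout and remote vendoring mode.
workspace_path = "//web-api/cargo"
genmode = "Remote"
# Generate cargo_build_script targets so build scripts (and the
# crate_features they enable, e.g. log's "std") are honored.
default_gen_buildrs = true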

AC_ARG_ENABLE in an m4_foreach_w loop: no help string

I wish to generate a lot of --enable-*/--disable-* options by something like:
COMPONENTS([a b c], [yes])
where the second argument is the default value of the automatic enable_* variable. My first attempt was to write an AC_ARG_ENABLE(...) within an m4_foreach_w, but so far, I'm only getting the first component to appear in the ./configure --help output.
If I add hand-written AC_ARG_ENABLEs, they work as usual.
Regardless, the --enable-*/--disable-* options work as they should, just the help text is missing.
Here's the full code to reproduce the problem:
AC_INIT([foo], 1.0)
AM_INIT_AUTOMAKE([foreign])

AC_DEFUN([COMPONENTS],
[
m4_foreach_w([component], [$1], [
  AS_ECHO(["Processing [component] component with default enable=$2"])
  AC_ARG_ENABLE([component],
    [AS_HELP_STRING([--enable-[]component], [component] component)],
    ,
    [enable_[]AS_TR_SH([component])=$2]
  )
])
AC_ARG_ENABLE([x],
  [AS_HELP_STRING([--enable-[]x], [component x])],
  ,
  [enable_[]AS_TR_SH([x])=$2]
)
AC_ARG_ENABLE([y],
  [AS_HELP_STRING([--enable-[]y], [component y])],
  ,
  [enable_[]AS_TR_SH([y])=$2]
)
])

COMPONENTS([a b c], [yes])

for var in a b c x y; do
  echo -n "\$enable_$var="
  eval echo "\$enable_$var"
done

AC_CONFIG_FILES(Makefile)
AC_OUTPUT
And an empty Makefile.am. To verify that the options work:
$ ./configure --disable-a --disable-b --disable-d --disable-x
configure: WARNING: unrecognized options: --disable-d
...
Processing component a with default enable=yes
Processing component b with default enable=yes
Processing component c with default enable=yes
$enable_a=no
$enable_b=no
$enable_c=yes
$enable_x=no
$enable_y=yes
After I poked around in autoconf sources, I figured out this has to do with the m4_divert_once call in the implementation of AC_ARG_ENABLE:
# AC_ARG_ENABLE(FEATURE, HELP-STRING, [ACTION-IF-TRUE], [ACTION-IF-FALSE])
# ------------------------------------------------------------------------
AC_DEFUN([AC_ARG_ENABLE],
[AC_PROVIDE_IFELSE([AC_PRESERVE_HELP_ORDER],
[],
[m4_divert_once([HELP_ENABLE], [[
Optional Features:
  --disable-option-checking  ignore unrecognized --enable/--with options
  --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
  --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]]])])dnl
m4_divert_once([HELP_ENABLE], [$2])dnl
_AC_ENABLE_IF([enable], [$1], [$3], [$4])dnl
])# AC_ARG_ENABLE

# m4_divert_once(DIVERSION-NAME, CONTENT)
# ---------------------------------------
# Output CONTENT into DIVERSION-NAME once, if not already there.
# An end of line is appended for free to CONTENT.
m4_define([m4_divert_once],
[m4_expand_once([m4_divert_text([$1], [$2])])])
I'm guessing that the HELP-STRING argument is remembered in its unexpanded form, so it is added just once for all components. Manually expanding the AS_HELP_STRING does what I want:
AC_DEFUN([COMPONENTS],
[
m4_foreach_w([comp], [$1], [
  AS_ECHO(["Processing component 'comp' with default enable=$2"])
  AC_ARG_ENABLE([comp],
    m4_expand([AS_HELP_STRING([--enable-comp], enable component comp)]),
    ,
    [enable_[]AS_TR_SH([comp])=$2]
  )
])
])
COMPONENTS([a b c x y], [yes])
I couldn't find a way to properly quote component so that it appears as a literal string after being used as the loop variable in m4_foreach_w, so I just renamed it to comp to spare me the trouble.

Groovy File() not reporting correct size / length

I have a Jenkins post-build Groovy script running out of the "Post build task" plugin. From the same plugin, immediately before running the Groovy script, I check for the existence of the file and its size. The log shows:
09:14:53 -rw-r--r-- 1 aaa users 978243 Nov 4 08:53 /jk/workspace/xxxx/output/delta.txt
09:14:53 cppcheck.groovy: Checking build result: SUCCESS
09:14:53 cppcheck.groovy: workspace = /jk/workspace/xxxx
09:14:53 cppcheck.groovy: delta = /jk/workspace/xxxx/output/delta.txt
09:14:53 cppcheck.groovy: delta.txt length = 0
The groovy script is as follows:
import hudson.model.*

def build = Thread.currentThread().executable
def result = build.getResult()
println("cppcheck.groovy: Checking build result: " + result.toString())
if (result.isBetterOrEqualTo(hudson.model.Result.SUCCESS)) {
    def workspace = build.getEnvVars()["WORKSPACE"]
    def delta = workspace + "/output/delta.txt"
    println("cppcheck.groovy: workspace = " + workspace)
    println("cppcheck.groovy: delta = " + delta)
    def f = new File(delta)
    println("cppcheck.groovy: delta.txt length = " + f.length())
    if (f.length() > 0) {
        build.setResult(hudson.model.Result.UNSTABLE)
    }
}
What am I doing wrong here?
Update: There seems to be some scepticism that the file exists and that there is some sort of race condition. To put your minds at rest, let's rule that out. I have modified the build to execute the same ls -l command after it runs the Groovy script, to prove the file does exist and that this problem is ultimately Groovy not being able to open the file. I also added a file exists() check to the above Groovy script, which, as I suspected it would, reports that the file doesn't exist. I don't dispute that Groovy thinks the file doesn't exist; what I am trying to work out is why.
10:31:39 [xxxx] $ /bin/sh -xe /tmp/hudson8964729240493636268.sh
10:31:39 + ls -l /jk/workspace/xxxx/output/delta.txt
10:31:39 -rw-r--r-- 1 aaa users 978243 Nov 4 08:53 /jk/workspace/xxxx/output/delta.txt
10:31:40 cppcheck.groovy: Checking build result: SUCCESS
10:31:40 cppcheck.groovy: workspace = /jk/workspace/xxxx
10:31:40 cppcheck.groovy: delta = /jk/workspace/xxxx/output/delta.txt
10:31:40 cppcheck.groovy: delta.txt length = 0
10:31:40 cppcheck.groovy: delta.txt exists = false
10:31:40 [xxxx] $ /bin/sh -xe /tmp/hudson8007562636561507409.sh
10:31:40 + ls -l /jk/workspace/xxxx/output/delta.txt
10:31:40 -rw-r--r-- 1 aaa users 978243 Nov 4 08:53 /jk/workspace/xxxx/output/delta.txt
Also, notice that the timestamp on said file is still 08:53, when it was created.
I suspected that the Groovy script was running on the build master as opposed to the build node that this particular build was running on. I added some debug output to print the hostname on which the Groovy script was running, and sure enough it wasn't the same host the shell variant of the script was running on.
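A sketch of the fix under that diagnosis: go through Jenkins' FilePath API, which routes I/O over the master-agent channel to the node that owns the workspace, instead of java.io.File, which only ever looks at the master's local filesystem. This assumes the plugin exposes the build object as in the original script:

import hudson.model.*

def build = Thread.currentThread().executable
// build.workspace is a FilePath bound to the node that ran the build.
def delta = build.workspace.child("output/delta.txt")
println("cppcheck.groovy: delta.txt exists = " + delta.exists())
println("cppcheck.groovy: delta.txt length = " + delta.length())
if (delta.exists() && delta.length() > 0) {
    build.setResult(hudson.model.Result.UNSTABLE)
}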
