Use Tcl interp alias to rename a Tcl built-in command

How can I rename an existing tcl command in a slave interpreter?
In other words
interp create test
test alias __proc proc
test eval {
__proc hello {} {
puts "hiya"
}
hello
}
Unfortunately, this does not work. However, if I hide the command and expose it under a different name, that works. But I would like to use both commands, proc and __proc, so I would prefer to use aliases, or any other way...

The alias command on a Tcl interpreter creates a command in one interpreter that causes code to run in a different interpreter, so that's not what you want here.
I think the following does what you want:
interp create test
test eval {
rename proc __proc
__proc hello {} {
puts "hiya"
}
hello
}
You can then combine the creation of the interpreter with the renaming of the proc command as follows:
proc myinterp {interpName newProcName} {
interp create $interpName
$interpName eval "rename proc $newProcName"
}
myinterp test func
test eval { func greet {} { puts "hello" } }
test eval { greet }

You can do either
interp alias test __proc test proc
in the main interp or
interp alias {} __proc {} proc
in the slave, so both of the following scripts work:
interp create test
interp alias test __proc test proc
test eval {
__proc hello {} {
puts "hiya"
}
hello
}
or
interp create test
test eval {
interp alias {} __proc {} proc
__proc hello {} {
puts "hiya"
}
hello
}
test alias __proc proc is the same as interp alias test __proc {} proc, which means that __proc in the slave will call proc in the master.


How can I get shell array using process.env in node in zsh

In zsh (v5.8), I can't read a shell array environment variable using process.env in node.js (v12.20.1).
./env.js
console.log(process.env);
# zsh
export TEST1=(xxx yyy)
export TEST2=zzz
node env.js
# results
{
TEST2: 'zzz'
}
# bash
export TEST1=(xxx yyy)
export TEST2=zzz
node env.js
# results
{
TEST1: '(xxx yyy)',
TEST2: 'zzz',
}
So, how can I get a shell array in node.js under zsh?
zsh supports the array type for variables, but does not support arrays in the environment.
http://zsh.sourceforge.net/Guide/zshguide02.html
# example
# '-x' options, automatic export to the environment
typeset TEST1=(xxx yyy)
typeset -x TEST2=(aaa bbb) # -x is ignored because TEST2 is an array
typeset TEST3=xxx
typeset -x TEST4=aaa
echo $TEST1 $TEST2 $TEST3 $TEST4
# print all
xxx yyy aaa bbb xxx aaa
env
# print TEST4
{
TEST4=aaa
}
Instead of trying to export the array directly from zsh, I solved it on the node side: I export the value as a plain string and then split() it in node.
env.js
console.log(process.env);
const CONST_TEST1 = process.env.TEST1.split(' ');
console.log(CONST_TEST1 instanceof Array);
console.log(CONST_TEST1);
# zsh
export TEST1='xxx yyy'
node env.js
# results
{
TEST1: 'xxx yyy',
_: '/Users/kmk/.nvm/versions/node/v12.20.1/bin/node'
}
true
[ 'xxx', 'yyy' ]
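The same join-then-split idea can be sketched in plain POSIX shell (the variable name TEST1 and the space separator are taken from the question; the zsh array syntax appears only in the comment, since the executed part sticks to portable constructs):

```shell
# Sketch: pass an array to a child process by joining it into a single
# exported string; the child then splits it back on whitespace.
# In zsh you would build the string with: arr=(xxx yyy); joined="${arr[*]}"
TEST1="xxx yyy"
export TEST1
# Any child process (node, another shell, ...) sees the flat string.
sh -c 'echo "$TEST1"'
```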

How to read function names from file and prefix file with said function names in Bash effectively?

I have a bunch of files which I concatenate into one large file. The single large file then looks like this:
function foo() {
// ... implementation
}
function bar() {
// ... implementation
}
function baz() {
// ... implementation
}
function foo_bar() {
// ... implementation
}
...
A bunch of functions. I want to create a new file with all this content, PLUS prefixing it with this:
module.exports = {
foo,
bar,
baz,
foo_bar,
...
}
Basically, exporting every function. What is the simplest, cleanest way to do this in bash?
This is as far as I got, haha; it is really confusing to try to come up with a solution:
A := out/a.js
B := out/b.js
all: $(A) $(B)
$(A):
	@find src -name '*.js' -exec cat {} + > $@
$(B):
	@cat out/a.js | grep -oP '(?<=function )[a-zA-Z0-9_]+(?=\(\) \{)'
.PHONY: all
Store the list of functions declared before and after sourcing the file. Compute the difference. You can get the list of currently declared functions with declare -F.
A() { :; }
pre=$(declare -F | sed 's/^declare -f //')
function foo() {
// ... implementation
}
function bar() {
// ... implementation
}
function baz() {
// ... implementation
}
function foo_bar() {
// ... implementation
}
post=$(declare -F | sed 's/^declare -f //')
diff=$(comm -13 <(sort <<<"$pre") <(sort <<<"$post"))
echo "module.exports = {
$(<<<"$diff" paste -sd, | sed 's/,/,\n\t/g')
}"
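The comm -13 step can be seen in isolation with hypothetical function names standing in for the pre/post lists:

```shell
# comm -13 suppresses lines unique to the first input and lines common
# to both, so it prints exactly the lines unique to the second input:
# the functions that exist only after sourcing the file.
printf 'A\nfoo\n' | sort > /tmp/pre.txt
printf 'A\nbar\nbaz\nfoo\nfoo_bar\n' | sort > /tmp/post.txt
comm -13 /tmp/pre.txt /tmp/post.txt
# prints: bar, baz, foo_bar (one per line)
```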
I think with bash --norc you should get a clean environment, so with bash --norc -c 'source yourfile.txt; declare -F' you could get away with computing the difference:
cat <<EOF >yourfile.txt
function foo() {
// ... implementation
}
function bar() {
// ... implementation
}
function baz() {
// ... implementation
}
function foo_bar() {
// ... implementation
}
EOF
diff=$(bash --norc -c 'source yourfile.txt; declare -F' | cut -d' ' -f3-)
echo "module.exports = {
$(<<<"$diff" paste -sd, | sed 's/,/,\n\t/g')
}"
Both code snippets should output:
module.exports = {
bar,
baz,
foo,
foo_bar
}
Note: the function name() {} form is a mix of the ksh and POSIX styles of function definition: ksh uses function name {} while POSIX uses name() {}. Bash supports both forms, and also the strange mix of the two. To be portable, just use the POSIX version name() {}. More info at the bash-hackers wiki page on obsolete and deprecated syntax.
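A minimal sketch of the portable form (the function name and message are made up):

```shell
# POSIX-portable function definition: name() { ... } works in sh, dash,
# bash, ksh and zsh alike; the ksh-style "function" keyword does not.
greet() {
    echo "hello from a POSIX-style function"
}
greet
```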
This simple awk script will do it:
awk -F '( |\\()' 'BEGIN {print "module.exports = {"} /function/ {print "\t" $2 ","} END {print "}"}' largefile.js
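For a quick check, the one-liner can be exercised against a small stand-in for largefile.js (the file path and contents here are made up):

```shell
# Recreate a tiny largefile.js, then run the awk one-liner on it.
# -F '( |\()' splits fields on a space or "(", so for a line like
# "function foo() {" the function name lands in $2.
printf 'function foo() {\n}\nfunction bar() {\n}\n' > /tmp/largefile.js
awk -F '( |\\()' 'BEGIN {print "module.exports = {"} /function/ {print "\t" $2 ","} END {print "}"}' /tmp/largefile.js
# prints a module.exports block listing foo and bar
```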
You could use echo and sed:
echo 'modules.exports = {'; sed -n 's/^function \([^(]*\)(.*/ \1,/p' input.txt; echo '}'
result:
modules.exports = {
foo,
bar,
baz,
foo_bar,
}

Why does my Tcl thread exit when ::thread::wait is present?

I'm trying to understand how thread::wait works from the code below:
set logger [thread::create {
proc OpenLog {file} {
global fid
set fid [open $file a]
} proc CloseLog {} {
global fid
close $fid
} proc AddLog {
msg} {
global fid
puts $fid $msg
} thread::wait
}]
% ::thread::exists $logger
0
Why does the above code not wait for events, and instead exit on the spot?
The problem is that your thread-creation script has some syntax errors in it, making it fail to start up correctly; it dies asynchronously and prints an error message. That error seems to be going missing in your case; no idea why, but it ought to read something like:
Error from thread tid0x100481000
wrong # args: should be "proc name args body"
while executing
"proc OpenLog {file} {
global fid
set fid [open $file a]
} proc CloseLog {} {
global fid
close $fid
} proc AddLog {
msg} {
global fid..."
If we correct the obvious syntax problems, converting spaces to newlines where it matters, then we can get this which appears to work for me:
set logger [thread::create {
proc OpenLog {file} {
global fid
set fid [open $file a]
}
proc CloseLog {} {
global fid
close $fid
}
proc AddLog {msg} {
global fid
puts $fid $msg
}
thread::wait
}]
The only differences are to whitespace. Tcl cares about whitespace. Get it right.

Concatenate files on linux with Gradle

Here's my current Gradle task :
task concat << {
println "cat $localWebapp/sc*.js > $buildDir/js/sc.concat.js"
exec {
commandLine "bash","-c",'cat',"$localWebapp/sc*.js", ">", "$buildDir/js/sc.concat.js"
}
}
Even while the command I print using println is correct (it's working if I paste it in a console in the project directory), the command doesn't build the sc.concat.js file.
What's happening, and how can I fix that?
sh -c takes one param for the script/commands to execute:
commandLine "/bin/sh","-c","cat $localWebapp/sc*.js > $buildDir/js/sc.concat.js"
Otherwise the params after cat are passed as further params to the shell, where they are "misinterpreted".
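The difference is easy to demonstrate outside Gradle (the file path below is made up):

```shell
# With sh -c, only the next argument is the script; any further
# arguments become $0, $1, ... of that script, not extra words of
# the command being run.
sh -c 'echo "script: $0 $1"' first second
# prints: script: first second

# Correct approach: hand the whole pipeline, redirection included,
# to the shell as one string.
sh -c 'echo xxx yyy > /tmp/sc.concat.js; cat /tmp/sc.concat.js'
# prints: xxx yyy
```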
Instead of commandLine, it seems executable works:
task concat << {
println "cat $localWebapp/sc*.js > $buildDir/js/sc.concat.js"
exec {
executable "sh"
args "-c", "cat $localWebapp/sc*.js > $buildDir/js/sc.concat.js"
}
}

Load script from groovy script

File1.groovy
def method() {
println "test"
}
File2.groovy
method()
I want to load/include the functions/methods from File1.groovy at runtime, similar to Ruby's/Rake's load. They are in two different directories.
If you don't mind the code in file2 being in a with block, you can do:
new GroovyShell().parse( new File( 'file1.groovy' ) ).with {
method()
}
Another possible method would be to change file1.groovy to:
class File1 {
def method() {
println "test"
}
}
And then in file2.groovy you can use mixin to add the methods from file1
def script = new GroovyScriptEngine( '.' ).with {
loadScriptByName( 'file1.groovy' )
}
this.metaClass.mixin script
method()
You can evaluate any expression or script in Groovy using the GroovyShell.
File2.groovy
GroovyShell shell = new GroovyShell()
def script = shell.parse(new File('/path/file1.groovy'))
script.method()
It will be easiest if file1.groovy is an actual class (class File1 {...}).
Given that, another way to do it is to load the file into the GroovyClassLoader:
this.class.classLoader.parseClass("src/File1.groovy")
File1.method()
File1.newInstance().anotherMethod()
I am late on this, but this is how we have been achieving what you were asking. I have a file1.gsh like so:
File1:
println("this is a test script")
def Sometask(param1, param2, param3)
{
retry(3){
try{
///some code that uses the param
}
catch (error){
println("Exception throw, will retry...")
sleep 30
errorHandler.call(error)
}
}
}
return this;
In the other file, these functions can be accessed by loading the script first. So in file2:
File2:
def somename
somename = load 'path/to/file1.groovy'
//the you can call the function in file1 as
somename.Sometask(param1, param2, param3)
Here is what I'm using.
1: Write any_path_to_the_script.groovy as a class
2: In the calling script, use:
def myClass = this.class.classLoader.parseClass(new File("any_path_to_the_script.groovy"))
myClass.staticMethod()
It's working in the Jenkins Groovy script console. I have not tried non-static methods.
The answer by @tim_yates that uses metaClass.mixin should have worked without needing any changes to file1.groovy (i.e., mixin with the script object), but unfortunately there is a bug in metaClass.mixin that causes a StackOverflowError in this scenario (see GROOVY-4214 on this specific issue). However, I worked around the bug using the selective mixin below:
def loadScript(def scriptFile) {
def script = new GroovyShell().parse(new File(scriptFile))
script.metaClass.methods.each {
if (it.declaringClass.getTheClass() == script.class && ! it.name.contains('$') && it.name != 'main' && it.name != 'run') {
this.metaClass."$it.name" = script.&"$it.name"
}
}
}
loadScript('File1.groovy')
method()
The above solution works with no changes being needed to File1.groovy or the callers in File2.groovy (except for the need to introduce a call to loadScript function).
