I have a module test.als
module test
sig Sig1 {}
fun f:Sig1->Sig1 {iden}
run {#Sig1 = 1} for 1
and a submodule test2.als
module test2
open test
run{#Sig1=1}
But if I execute this and look at the model in the evaluator, the function f is not shown, whereas it is when I run test directly. How can I change this?
Thanks
Edit: f is also not available when I go to Theme, so the issue is not that I have set it to be hidden
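(A guess at a workaround, not confirmed here: if the evaluator only lists functions declared in the module under analysis, re-declaring a forwarding alias in test2 should make it show up. The name f2 is invented for illustration.)

module test2
open test
fun f2 : Sig1 -> Sig1 { f }
run {#Sig1 = 1}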
I have a process that generates a value. I want to forward this value into a value output channel, but I cannot seem to get it working in one "go" - I always have to write the value to a file in the output and then define a new channel from the first:
process calculate {

    input:
    file div from json_ch.collect()
    path "metadata.csv" from meta_ch

    output:
    file "dir/file.txt" into inter_ch

    script:
    """
    echo ${div} > alljsons.txt
    mkdir dir
    python3 $baseDir/scripts/calculate.py alljsons.txt metadata.csv dir/
    """
}
ch = inter_ch.map{file(it).text}
ch.view()
How do I fix this?
Thanks!
Best, t.
If your script performs a non-trivial calculation, writing the result to a file like you've done is absolutely fine - there's nothing really wrong with this approach. However, since the 'inter_ch' channel already emits files (or paths), you could simply use:
ch = inter_ch.map { it.text }
It's not entirely clear what the objective is here. If the desire is to reduce the number of channels created, consider switching to the new DSL 2. This won't let you avoid writing your calculated result to a file, but it may let you avoid the intermediary channel.
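For illustration, here's a minimal (untested) DSL 2 sketch of the same process - it assumes the json_ch and meta_ch channels from your snippet and simply re-plumbs them through a workflow block:

nextflow.enable.dsl = 2

process calculate {

    input:
    path divs
    path 'metadata.csv'

    output:
    path 'dir/file.txt'

    script:
    """
    echo ${divs} > alljsons.txt
    mkdir dir
    python3 ${baseDir}/scripts/calculate.py alljsons.txt metadata.csv dir/
    """
}

workflow {
    calculate( json_ch.collect(), meta_ch )
    calculate.out.map { it.text }.view()
}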
On the other hand, if your Python script actually does something rather trivial and can be refactored away, it might be possible to assign a (global) variable (below the script: keyword) such that it can be referenced in your output declaration, like the line x = ... in the example below:
Valid output values are value literals, input value identifiers, variables accessible in the process scope, and value expressions. For example:
process foo {

    input:
    file fasta from 'dummy'

    output:
    val x into var_channel
    val 'BB11' into str_channel
    val "${fasta.baseName}.out" into exp_channel

    script:
    x = fasta.name
    """
    cat $x > file
    """
}
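Note that x is assigned under the script: keyword but above the triple-quoted command, which is what makes it visible both to the val x output declaration and to the shell command (as $x).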
Other than that, your options are limited. You might have considered using the env output qualifier, but this just adds some syntactic sugar to your shell script at runtime, such that an output file is still created:
Contents of test.nf:
process test {

    output:
    env myval into out_ch

    script:
    '''
    myval=$(calc.py)
    '''
}
out_ch.view()
Contents of bin/calc.py (chmod +x):
#!/usr/bin/env python
print('foobarbaz')
Run with:
$ nextflow run test.nf
N E X T F L O W ~ version 21.04.3
Launching `test.nf` [magical_bassi] - revision: ba61633d9d
executor > local (1)
[bf/48815a] process > test [100%] 1 of 1 ✔
foobarbaz
$ cat work/bf/48815aeefecdac110ef464928f0471/.command.sh
#!/bin/bash -ue
myval=$(calc.py)
# capture process environment
set +u
echo myval=$myval > .command.env
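The .command.env file written by that last line is how Nextflow captures the env output once the task completes - so, as mentioned above, an output file is still created behind the scenes.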
Here's my case: I have a project P and two libraries, L1 and L2. L2 provides an executable bin in its package, which is called in L1's code using the execSync function, like:
// code in library L1
import { execSync } from 'child_process';

export function foo() {
    const cmd = 'npx L2';
    return execSync(cmd).toString();
}
As usual, L1 has L2 as its dependency in package.json file.
In project P, I tried to use library L1 like this.
First, use npm i L1 to install library L1.
Then, call the function foo:
// code in project P
import { foo } from 'L1';
foo();
But I got an error like this:
Error: Cannot find module '~/project_P/node_modules/L1/node_modules/L2/index.js'
It seems that it looked in the wrong place for L2's executable bin file, because L2 is now in project_P/node_modules, not in L1/node_modules any more.
Also, I tried to change the cmd in foo as below, but none of them worked.
cmd = 'PATH=$(npm bin):$PATH L2';
cmd = 'npm run L2' (having script 'L2' in package.json at the same time);
cmd = 'node ../node_modules/.bin/L2';
Does anyone have a clue how to resolve this? Any help would be greatly appreciated!
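(Not from the original thread - a sketch of one possible fix: instead of shelling out to npx, resolve L2's bin script through Node's module resolution, which searches node_modules upward from L1 and therefore finds L2 whether or not npm hoisted it into project P's node_modules. It assumes L2's package.json is resolvable and declares a bin entry named L2.)

// code in library L1 (sketch)
import { execFileSync } from 'child_process';
import { createRequire } from 'module';
import path from 'path';

const require = createRequire(import.meta.url);

export function foo() {
    // Resolve L2's package.json relative to L1, wherever npm placed L2.
    const pkgJsonPath = require.resolve('L2/package.json');
    const pkg = require('L2/package.json');
    // "bin" may be a plain string or a { name: script } map.
    const binRel = typeof pkg.bin === 'string' ? pkg.bin : pkg.bin['L2'];
    const binPath = path.join(path.dirname(pkgJsonPath), binRel);
    // Run the script with the current node binary instead of npx.
    return execFileSync(process.execPath, [binPath]).toString();
}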
Given is a class EnumTest that declares an inner enum MyEnum.
Using MyEnum as a parameter type from within the class works as expected.
Using MyEnum as a parameter type outside of EnumTest fails to compile with unable to resolve class test.EnumTest.MyEnum.
I've browsed related questions, of which the best one was this, but they didn't address the specific issue of using the enum as a type.
Am I missing something very obvious here (as I'm very new to Groovy)? Or is this just another of the language's quirks ("enhancements") regarding enums?
Edit: This is just a test demonstrating the issue. The actual issue happens in Jenkins JobDSL, and classpaths and imports seem to be fine there otherwise.
Groovy Version: 2.4.8
JVM: 1.8.0_201
Vendor: Oracle Corporation
OS: Linux
$ tree test
test
├── EnumTest.groovy
├── File2.groovy
└── File3.groovy
EnumTest.groovy:
package test

public class EnumTest {
    public static enum MyEnum {
        FOO, BAR
    }

    def doStuff(MyEnum v) {
        println v
    }
}
File2.groovy:
package test
import test.EnumTest
// prints BAR
new EnumTest().doStuff(EnumTest.MyEnum.BAR)
// prints FOO
println EnumTest.MyEnum.FOO
File3.groovy:
package test
import test.EnumTest
// fails: unable to resolve class test.EnumTest.MyEnum
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
println v
}
When I'm running the test files using groovy -cp %, the output is as follows:
# groovy -cp . File2.groovy
BAR
FOO
# groovy -cp . File3.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
/home/lwille/-/test/GroovyTest2.groovy: 6: unable to resolve class EnumTest.MyEnum
@ line 6, column 24.
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
^
1 error
A few things are worth mentioning. First, you don't need to import classes from the same package. Secondly, when you use a package test, you need to execute Groovy from the root folder, e.g. groovy test/File3.groovy, to properly set up the classpath. (There is no need to use -cp . in that case.)
Here's what it should look like.
$ tree test
test
├── EnumTest.groovy
├── File2.groovy
└── File3.groovy
0 directories, 3 files
test/EnumTest.groovy
package test

public class EnumTest {
    public static enum MyEnum {
        FOO, BAR
    }

    def doStuff(MyEnum v) {
        println v
    }
}
test/File2.groovy
package test
// prints BAR
new EnumTest().doStuff(EnumTest.MyEnum.BAR)
// prints FOO
println EnumTest.MyEnum.FOO
test/File3.groovy
package test
// now resolves fine when executed from the root folder
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
println v
}
thisShouldWorkIMHO(EnumTest.MyEnum.BAR)
The console output:
$ groovy test/File2.groovy
BAR
FOO
$ groovy test/File3.groovy
BAR
However, if you want to execute the script from inside the test folder, you need to specify a classpath that points to the parent folder (otherwise the compiler looks for package test relative to the test folder itself), e.g.:
$ groovy -cp ../. File3.groovy
BAR
$ groovy -cp . File3.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
/home/wololock/workspace/groovy-sandbox/src/test/File3.groovy: 4: unable to resolve class EnumTest.MyEnum
@ line 4, column 24.
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
^
1 error
UPDATE: the difference between Groovy 2.4 and 2.5 versions
One thing worth mentioning: the above solution works for Groovy 2.5.x and above. It is important to understand that things like method parameter type checks happen in the compiler's Phase.SEMANTIC_ANALYSIS phase. In Groovy 2.4, semantic analysis resolves classes without loading them, and when an inner class is used, its outer class must be loaded for the inner class to be resolved. Groovy 2.5 fixed that problem (intentionally or not), and its semantic analysis now resolves inner classes without the issue mentioned in this question.
For a more detailed analysis, please check the following Stack Overflow question: GroovyScriptEngine throws MultipleCompilationErrorsException while loading class that uses other class' static inner class, where I investigated a similar issue in a Groovy 2.4 script and explained step by step how to dig down to the root of this problem.
I would like to keep interface (class or instance) and implementation files separate in Haskell, as follows:
file1: (for interface)

class X where
  funcX1 = doFuncX1
  funcX2 = doFuncX2
  ...

instance Y where
  funcY1 = doFuncY1
  funcY2 = doFuncY2
  ...

file2: (for implementation)

doFuncX1 = ...
doFuncX2 = ...
doFuncY1 = ...
...
How can I do that, when file1 must be imported in file2 and vice versa?
You don't need any such cumbersome separation in Haskell. Just mark only what you want to be public in the module export list (module Foo ( X(..) ... ) where ...), build your project with cabal, and if you want to export a library but not release the source code you can simply publish only the dist folder with the binary interface files and the Haddock documentation. That's much more convenient than, e.g., nasty .h and .cpp files that need to be kept in sync manually.
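For instance, a minimal sketch of such an export list (all names here are invented for illustration):

module Foo
  ( X(..)      -- export the class together with its methods
  , publicFun  -- export selected functions only
  ) where

class X a where
  funcX1 :: a -> String

-- publicFun is visible to importers of Foo...
publicFun :: Int -> Int
publicFun = helperFun . (+ 1)

-- ...but helperFun, absent from the export list, stays private.
helperFun :: Int -> Int
helperFun = (* 2)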
But of course, nothing prevents you from putting implementations in a separate, non-public file. You just don't need any "vice versa" imports for this, only perhaps a common file with the necessary data type declarations. E.g.
Public.hs:
module Public(module Public.Datatypes) where
import Public.Datatypes
import Private.Implementations
instance X Bar where { funcX1 = implFuncX1; ... }
Public/Datatypes.hs:
module Public.Datatypes where
data Bar = Bar { ... }
class X bar where { funcX1 :: ... }
Private/Implementations.hs:
module Private.Implementations(implFuncX1, ...) where
import Public.Datatypes
implFuncX1 :: ...
implFuncX1 = ...
But usually it would be better to simply put everything in Public.hs.
Pickle doesn't seem to be loading for me when I'm using spork...
If I run my cucumber normally, the step works as expected:
➜ bundle exec cucumber
And a product exists with name: "Windex", category: "Household Cleaners", description: "nasty bluish stuff" # features/step_definitions/pickle_steps.rb:4
But if I run it through spork, I get an undefined step:
You can implement step definitions for undefined steps with these snippets:
Given /^a product exists with name: "([^"]*)", category: "([^"]*)", description: "([^"]*)"$/ do |arg1, arg2, arg3|
  pending # express the regexp above with the code you wish you had
end
What gives?
So it turns out there is an extra config line necessary in features/support/env.rb when using Spork, in order for Pickle to pick up the AR models, à la this gist:
In features/support/env.rb
Spork.prefork do
  ENV["RAILS_ENV"] ||= "test"
  require File.expand_path(File.dirname(__FILE__) + '/../../config/environment')

  # So that Pickle's AR adapter properly picks up the application's models.
  Dir["#{Rails.root}/app/models/*.rb"].each { |f| load f }

  # ...
end
Adding this line fixes my problem. This is more of a Spork issue than a Guard issue, per se. I'll update my question...