I would like to keep interface (class or instance) files and implementation files separate in Haskell, as follows:
file1 (for the interface):

class X where
    funcX1 = doFuncX1
    funcX2 = doFuncX2
    ....

instance Y where
    funcY1 = doFuncY1
    funcY2 = doFuncY2
    ...

file2 (for the implementation):

doFuncX1 = ...
doFuncX2 = ...
doFuncY1 = ...
...
How can I do that when file1 must be imported by file2 and vice versa?
You don't need any such cumbersome separation in Haskell. Just mark what you want to be public in the module export list (module Foo ( X(..) ... ) where ...) and build your project with cabal. If you want to ship a library but not release the source code, you can simply publish only the dist folder with the binary interface files and the Haddock documentation. That's much more convenient than, say, .h and .cpp files that have to be kept in sync manually.
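For instance, a minimal sketch of such an export list (the names Foo, Bar, and internalHelper are made up for illustration):

module Foo (X(..), Bar(..)) where

data Bar = Bar Int

class X a where
    funcX1 :: a -> Int

instance X Bar where
    funcX1 = internalHelper  -- delegates to the private helper

-- omitted from the export list, so invisible outside this module
internalHelper :: Bar -> Int
internalHelper (Bar n) = n + 1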
But of course, nothing prevents you from putting implementations in a separate, non-public file. You just don't need "vice versa" imports for this; at most you need a common file with the necessary data type declarations. E.g.
Public.hs:
module Public(module Public.Datatypes) where
import Public.Datatypes
import Private.Implementations
instance X Bar where { funcX1 = implFuncX1; ... }
Public/Datatypes.hs:
module Public.Datatypes where
data Bar = Bar { ... }
class X bar where { funcX1 :: ... }
Private/Implementations.hs:
module Private.Implementations(implFuncX1, ...) where
import Public.Datatypes
implFuncX1 :: ...
implFuncX1 = ...
But usually it would be better to simply put everything in Public.hs.
There is a class EnumTest that declares an inner enum MyEnum.
Using MyEnum as a parameter type from within the class works as expected.
Using MyEnum as a parameter type outside of EnumTest fails to compile with "unable to resolve class test.EnumTest.MyEnum".
I've browsed related questions, of which the best one was this, but they didn't address the specific issue of using the enum as a type.
Am I missing something very obvious here (as I'm very new to Groovy)? Or is this just another of the language's quirks "enhancements" regarding enums?
Edit: This is just a test demonstrating the issue. The actual issue happens in Jenkins JobDSL, and classpaths and imports seem to be fine there otherwise.
Groovy Version: 2.4.8
JVM: 1.8.0_201
Vendor: Oracle Corporation
OS: Linux
$ tree test
test
├── EnumTest.groovy
├── File2.groovy
└── File3.groovy
EnumTest.groovy:
package test

public class EnumTest {
    public static enum MyEnum {
        FOO, BAR
    }

    def doStuff(MyEnum v) {
        println v
    }
}
File2.groovy:
package test
import test.EnumTest
// prints BAR
new EnumTest().doStuff(EnumTest.MyEnum.BAR)
// prints FOO
println EnumTest.MyEnum.FOO
File3.groovy:
package test
import test.EnumTest
// fails: unable to resolve class test.EnumTest.MyEnum
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
    println v
}
When I run the test files using groovy -cp ., the output is as follows:
# groovy -cp . File2.groovy
BAR
FOO
# groovy -cp . File3.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
/home/lwille/-/test/GroovyTest2.groovy: 6: unable to resolve class EnumTest.MyEnum
@ line 6, column 24.
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
^
1 error
A few things are worth mentioning. First, you don't need to import classes from the same package. Secondly, when you use a package like test, you need to execute Groovy from the root folder, e.g. groovy test/File3.groovy, to set up the classpath properly. (There is no need to use -cp . in that case.)
Here's what it should look like.
$ tree test
test
├── EnumTest.groovy
├── File2.groovy
└── File3.groovy
0 directories, 3 files
test/EnumTest.groovy
package test
public class EnumTest {
    public static enum MyEnum {
        FOO, BAR
    }

    def doStuff(MyEnum v) {
        println v
    }
}
test/File2.groovy
package test
// prints BAR
new EnumTest().doStuff(EnumTest.MyEnum.BAR)
// prints FOO
println EnumTest.MyEnum.FOO
test/File3.groovy
package test
// fails: unable to resolve class test.EnumTest.MyEnum
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
    println v
}
thisShouldWorkIMHO(EnumTest.MyEnum.BAR)
The console output:
$ groovy test/File2.groovy
BAR
FOO
$ groovy test/File3.groovy
BAR
However, if you want to execute script from inside the test folder then you need to specify the classpath to point to the parent folder, e.g.:
$ groovy -cp ../. File3.groovy
BAR
$ groovy -cp . File3.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
/home/wololock/workspace/groovy-sandbox/src/test/File3.groovy: 4: unable to resolve class EnumTest.MyEnum
@ line 4, column 24.
def thisShouldWorkIMHO(EnumTest.MyEnum v) {
^
1 error
UPDATE: the difference between Groovy 2.4 and 2.5
One thing worth mentioning: the above solution works for Groovy 2.5.x and above. It is important to understand that things like method parameter type checks happen during the compiler's Phase.SEMANTIC_ANALYSIS phase. In Groovy 2.4, class resolution during semantic analysis happens without loading classes, and in the case of an inner class it is critical to load its outer class so the inner class can get resolved. Groovy 2.5 fixed that problem (intentionally or not), and its semantic analysis resolves inner classes without the issue mentioned in this question.
For a more detailed analysis, please check the following Stack Overflow question, GroovyScriptEngine throws MultipleCompilationErrorsException while loading class that uses other class' static inner class, where I investigated a similar issue in a Groovy 2.4 script and explained step by step how to dig down to the roots of this problem.
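If you are stuck on Groovy 2.4 (as in the Jenkins JobDSL case above), one pragmatic workaround, sketched below and untested here, is to promote the enum to a top-level class so that no inner-class resolution is needed at the usage site:

// test/MyEnum.groovy (hypothetical layout): the enum as a top-level class
package test

enum MyEnum {
    FOO, BAR
}

File3.groovy can then declare def thisShouldWorkIMHO(MyEnum v) without referring to an inner class at all.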
As per the reference documentation:
A GroovyClassLoader keeps a reference of all the classes it created, so it is easy to create a memory leak. In particular, if you execute the same script twice, if it is a String, then you obtain two distinct classes!
I use a file as the source for parsing, but turned caching off:
GroovyCodeSource src = new GroovyCodeSource( file )
src.cachable = false
Class clazz = groovyClassLoader.parseClass src
Class clazz1 = groovyClassLoader.parseClass src
log.info "$clazz <=> $clazz1 equal: ${clazz == clazz1}"
the log output is always
class MyClass <=> class MyClass equal: false
If I comment out the line src.cachable = false, the class instances become equal, but they are NOT recompiled even though the underlying file has changed.
Hence the question: how can I re-compile classes properly without creating a memory leak?
I did the following test and don't see any memory leak; as far as I can tell, normal GC works.
The reference to your class stays alive only while an instance of the class is alive.
GroovyCodeSource src = new GroovyCodeSource( 'println "hello world"', 'Test', '/' )
src.cachable = false
def cl = this.getClass().getClassLoader()
for (int i = 0; i < 1000000; i++) {
    Class clazz = cl.parseClass src
    if (i % 10000 == 0) println "$i :: $clazz :: ${System.identityHashCode(clazz)}"
}
After doing some experiments, I figured out that when switching back to using a String:
String src = 'class A {}'
Class clazz = groovyClassLoader.parseClass src
log.info groovyClassLoader.loadedClasses.join( ', ' )
the list of loaded classes does not grow in length, even if a class has some Closures inside (which appear as classes as well).
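If the goal is to recompile whenever the file changes, another pattern worth considering (a sketch, assuming nothing else holds references to the old classes or their instances) is to parse each version with a throwaway GroovyClassLoader, so stale classes become unreachable and can be garbage collected:

// parse the current file contents with a fresh, short-lived class loader
Class loadFresh(File file) {
    def loader = new GroovyClassLoader(this.class.classLoader)
    def src = new GroovyCodeSource(file)
    src.cachable = false           // don't cache inside the loader either
    return loader.parseClass(src)  // the loader itself is discarded afterwards
}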
I have to build a target using a two-step compilation.
The first step: .c -> .asm
The second step: .asm -> .o
I am creating a library from some .o files.
My implementation is the following:
The first step:
c_to_asm_builder = SCons.Builder.Builder(action = SCons.Defaults.CAction,
                                         emitter = {},
                                         suffix = '.asm',
                                         src_suffix = ['.c', '.cpp'],
                                         src_builder = '',
                                         source_scanner = SCons.Tool.CScanner
                                         )
env['BUILDERS']['CTOASM'] = c_to_asm_builder
The second step:
suffixesASM = ['.asm', '.s']
static_obj, shared_obj = SCons.Tool.createObjBuilders(env)
for suffix in suffixesASM:
    static_obj.add_action(suffix, SCons.Defaults.ASAction)
I am then calling the builders as follows:
env.CTOASM(['file1.c', 'file2.c', 'file3.c'], CFLAGS = '-flag')
env.Object(['file1.asm', 'file2.asm', 'file3.asm'], ASFLAGS = '-flag')
I am creating a library like this:
env.Library('name', ['file1.o', 'file2.o'])
Everything works fine for the compilation.
The problem appears when I change file1.c's content. I expect file1.c to pass through these steps:
file1.c -> file1.asm -> file1.o
and then the name.a library to be recreated.
What happens: only c_to_asm_builder is retriggered by the change (file1.c -> file1.asm). The Object builder (file1.asm -> file1.o) is not retriggered, and the Library and Program builders are not retriggered either.
I don't know what I am missing. I know that in a single-step compilation that I configured in another project, the Object and Library builders are somehow aware of each other.
How can I make the Library and Program builders aware of the Object and CTOASM builders?
You are not specifying an emitter, so either write one, or explicitly list your expected targets...
asm = env.CTOASM(['file1.asm', 'file2.asm', 'file3.asm'],
['file1.c', 'file2.c', 'file3.c'], CFLAGS = '-flag')
obj = env.Object(['file1.o', 'file2.o', 'file3.o'], asm, ASFLAGS = '-flag')
lib = env.Library('name', obj)
Here is a good reference on how to add an emitter to your builder.
https://bitbucket.org/scons/scons/wiki/ToolsForFools
Just scroll down to "Using Emitters".
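For the "write one" option, a rough, untested sketch of what such an emitter could look like for the CTOASM builder from the question — it declares one .asm target per source, so SCons can chain the dependency graph from .c through .asm and .o up to the library:

import os.path

def c_to_asm_emitter(target, source, env):
    # declare file1.c -> file1.asm explicitly so downstream builders see it
    target = [os.path.splitext(str(s))[0] + '.asm' for s in source]
    return target, source

c_to_asm_builder = SCons.Builder.Builder(action = SCons.Defaults.CAction,
                                         emitter = c_to_asm_emitter,
                                         suffix = '.asm',
                                         src_suffix = ['.c', '.cpp'],
                                         source_scanner = SCons.Tool.CScanner)
env['BUILDERS']['CTOASM'] = c_to_asm_builder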
I'm looking to monkey-patch require() to replace its file loading with my own function. I imagine that internally require(module_id) does something like:
1. Convert module_id into a file path
2. Load the file path as a string
3. Compile the string into a module object and set up the various globals correctly
I'm looking to replace step 2 without reimplementing steps 1 + 3. Looking at the public API, there's require() which does 1 - 3, and require.resolve() which does 1. Is there a way to isolate step 2 from step 3?
I've looked at the source of require mocking tools such as mockery -- all they seem to be doing is replacing require() with a function that intercepts certain calls and returns a user-supplied object, and passes on other calls to the native require() function.
For context, I'm trying to write a function require_at_commit(module_id, git_commit_id), which loads a module and any of that module's requires as they were at the given commit.
I want this function because I want to be able to write certain functions that a) rely on various parts of my codebase, and b) are guaranteed to not change as I evolve my codebase. I want to "freeze" my code at various points in time, so thought this might be an easy way of avoiding having to package 20 copies of my codebase (an alternative would be to have "my_code_v1": "git://..." in my package.json, but I feel like that would be bloated and slow with 20 versions).
Update:
So the source code for module loading is here: https://github.com/joyent/node/blob/master/lib/module.js. Specifically, to do something like this you would need to reimplement Module._load, which is pretty straightforward. However, there's a bigger obstacle: step 1, converting module_id into a file path, is actually harder than I thought, because resolveFilename needs to be able to call fs.exists() to know where to terminate its search... so I can't just substitute out individual files, I have to substitute entire directories. That means it's probably easier to export the entire git revision to a directory and point require() at that directory than to override require().
Update 2:
Ended up using a different approach altogether... see the answer I added below.
You can use the require.extensions mechanism. This is how the coffee-script coffee command can load .coffee files without ever writing .js files to disk.
Here's how it works:
https://github.com/jashkenas/coffee-script/blob/1.6.2/lib/coffee-script/coffee-script.js#L20
loadFile = function(module, filename) {
  var raw, stripped;
  raw = fs.readFileSync(filename, 'utf8');
  stripped = raw.charCodeAt(0) === 0xFEFF ? raw.substring(1) : raw;
  return module._compile(compile(stripped, {
    filename: filename,
    literate: helpers.isLiterate(filename)
  }), filename);
};

if (require.extensions) {
  _ref = ['.coffee', '.litcoffee', '.md', '.coffee.md'];
  for (_i = 0, _len = _ref.length; _i < _len; _i++) {
    ext = _ref[_i];
    require.extensions[ext] = loadFile;
  }
}
Basically, assuming your modules have a set of well-known extensions, you should be able to use this pattern of a function that takes the module and filename, does whatever loading/transforming you need, and then returns an object that is the module.
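As a concrete illustration, here is the same pattern in plain JavaScript (a sketch; the '.frozen' extension is made up): once you have produced the source string yourself (step 2 from the question), module._compile takes over step 3, compiling the string and wiring up the module globals.

var fs = require('fs');

require.extensions['.frozen'] = function (module, filename) {
  // step 2: load the source however you like (from git, a cache, ...)
  var src = fs.readFileSync(filename, 'utf8');
  // step 3: Node compiles it and sets up exports/require/__dirname
  module._compile(src, filename);
};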
This may or may not be sufficient to do what you are asking, but honestly from your question it sounds like you are off in the weeds somewhere far from the rest of the programming world (don't take that harshly, it's just my initial reaction).
So rather than mess with the node require() module, what I ended up doing is archiving the given commit I need to a folder. My code looks something like this:
# commit_id is the commit we want
# (note that if we don't need the whole repository,
# we can pass "commit_id path_to_folder_we_need")
#
# path is the path to the file you want to require starting from the repository root
# (ie 'lib/module.coffee')
#
# cb is called with (err, loaded_module)
#
require_at_commit = (commit_id, path, cb) ->
  dir = 'old_versions' # make sure this is in .gitignore!
  dir += '/' + commit_id
  do_require = -> cb null, require dir + '/' + path
  if not fs.existsSync(dir)
    fs.mkdirSync(dir)
    cmd = 'git archive ' + commit_id + ' | tar -x -C ' + dir
    child_process.exec cmd, (error) ->
      if error
        cb error
      else
        do_require()
  else
    do_require()
I am using R, on linux.
I have a set a functions that I use often, and that I have saved in different .r script files. Those files are in ~/r_lib/.
I would like to include those files without having to use the fully qualified name, just "file.r". Basically, I am looking for the same option as -I in the C++ compiler.
Is there a way to set the include path from R, in the .Rprofile or .Renviron file?
Thanks
You can use the sourceDir function in the Examples section of ?source:
sourceDir <- function(path, trace = TRUE, ...) {
    for (nm in list.files(path, pattern = "\\.[RrSsQq]$")) {
        if (trace) cat(nm, ":")
        source(file.path(path, nm), ...)
        if (trace) cat("\n")
    }
}
And you may want to use sys.source to avoid cluttering your global environment.
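For example (a small sketch; the environment name r_lib_env is made up for illustration):

## load the helpers into their own environment instead of the global one
r_lib_env <- new.env()
sys.source("~/r_lib/file.R", envir = r_lib_env)
## optionally attach it so the functions are visible on the search path
attach(r_lib_env, name = "r_lib")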
If you set the chdir parameter of source to TRUE, then the source calls within the included file will be relative to its path. Hence, you can call:
source("~/r_lib/file.R",chdir=T)
It would probably be better not to have source calls within your "library" and make your code into a package, but sometimes this is convenient.
Get all the files in your directory; in your case:
d <- list.files("~/r_lib/")
then you can load them with a function from the plyr package:
library(plyr)
l_ply(d, function(x) source(paste("~/r_lib/", x, sep = "")))
If you like, you can do it in a loop as well, or use a different function instead of l_ply. A conventional loop:
for (i in 1:length(d)) source(paste("~/r_lib/", d[[i]], sep = ""))
Write your own source() wrapper?
mySource <- function(script, path = "~/r_lib/", ...) {
    ## paste path + filename
    fname <- paste(path, script, sep = "")
    ## source the file
    source(fname, ...)
}
You could stick that in your .Rprofile so it will be loaded each time you start R.
If you want to load all the R files, you can extend the above easily to source all files at once
mySource <- function(path = "~/r_lib/", ...) {
    ## list of files
    fnames <- list.files(path, pattern = "\\.[RrSsQq]$")
    ## add path
    fnames <- paste(path, fnames, sep = "")
    ## source the files
    lapply(fnames, source, ...)
    invisible()
}
Actually, though, you'd be better off starting your own private package and loading that.
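If you go that route, package.skeleton() from base R's utils package can bootstrap the package from the functions you already have loaded (the package name mylib is made up for illustration):

## generate a minimal package layout from the objects in the current session
package.skeleton(name = "mylib", path = "~")
## then edit the generated files and, from a shell:
##   R CMD build ~/mylib
##   R CMD INSTALL mylib_1.0.tar.gz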