Does a QuickCheck library for CoffeeScript exist? I can't find one, and it isn't listed on Wikipedia (which means it doesn't exist :) ).
http://en.wikipedia.org/wiki/Quick_check
I know node.js has one. I'm not sure whether writing my node app in CoffeeScript and applying QuickCheck would work.
Any clues?
I don't know of any QuickCheck library written particularly in or for CoffeeScript, but googling pulls up qc.js. Here's a snippet from demo.js in that repository:
declare("reverse", [arbWholeNumList, arbWholeNumList],
  function(c, x, y) {
    var z = x.concat(y);
    x.reverse();
    y.reverse();
    z.reverse();
    c.assert(z.toString() == y.concat(x).toString());
  });
Now I'm no CoffeeScript expert, but I ran this through http://js2coffee.org. If you can manage to import qc.js, then using it from CoffeeScript would look something like this:
declare "reverse", [ arbWholeNumList, arbWholeNumList ], (c, x, y) ->
  z = x.concat(y)
  x.reverse()
  y.reverse()
  z.reverse()
  c.assert z.toString() is y.concat(x).toString()
I have a module test.als
module test
sig Sig1 {}
fun f:Sig1->Sig1 {iden}
run {#Sig1 = 1} for 1
and a submodule test2.als
module test2
open test
run {#Sig1 = 1}
But if I execute this and look at the model in the evaluator, the function f is not shown, whereas it is when running test. How can I change this?
Thanks
Edit: f is also not available when I go to Theme, so the issue is not that I have set it to be hidden.
Here's my case: I have a project P and two libraries, L1 and L2. L2 provides an executable bin in its package, which is called in L1's code using the execSync function, like:
// code in library L1
import { execSync } from 'child_process';

export function foo() {
  const cmd = 'npx L2';
  return execSync(cmd).toString();
}
As usual, L1 has L2 as its dependency in package.json file.
In project P, I tried to use library L1 like this.
First, use npm i L1 to install library L1;
Then, call the function foo like:
// code in project P
import { foo } from 'L1';
foo();
But I got an error like this:
Error: Cannot find module '~/project_P/node_modules/L1/node_modules/L2/index.js'
It seems that it looked in the wrong place for L2's executable bin file, because L2 is now in project_P/node_modules, and no longer in L1/node_modules.
Also, I tried changing the cmd in foo as shown below, but none of them worked.
cmd = 'PATH=$(npm bin):$PATH L2';
cmd = 'npm run L2' (having script 'L2' in package.json at the same time);
cmd = 'node ../node_modules/.bin/L2';
Does anyone have a clue how to resolve this? Any help would be greatly appreciated!
I need to load, alter, and write back the code in a mix.exs file: load the file, add the dependencies, and write it out again.
I start with:
defmodule Elixir_2ndTest.Mixfile do
  use Mix.Project

  def project do
    [app: :elixir_2nd_test,
     version: "0.0.1",
     elixir: "~> 1.2",
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     description: description(),
     deps: deps]
  end

  def application do
    [applications: [:logger]]
  end

  defp deps do
    []
  end
end
And I need to end up with (the only difference is in the deps fun):
defmodule Elixir_2ndTest.Mixfile do
  use Mix.Project

  def project do
    [app: :elixir_2nd_test,
     version: "0.0.1",
     elixir: "~> 1.2",
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     description: description(),
     deps: deps]
  end

  def application do
    [applications: [:logger]]
  end

  defp deps do
    [{:httpoison, "~> 0.8.3"}]
  end
end
The dependencies come from a different build system (I cannot use hex directly from the public internet, so I use it in OFFLINE mode and drop the dependencies in .hex/).
I know what the dependencies and their versions are, and need to insert them into the deps function (in this case httpoison 0.8.3).
If my understanding is correct this should be possible by loading the file, quoting, altering, unquoting.
This is what I have up until this point:
{:ok, body} = File.read("mix.exs")
{:ok, ast} = Code.string_to_quoted(body)
Any pointer on how I can alter the ast and write it back would be appreciated.
It won't look exactly the same, but you can use Macro.to_string to convert the AST back to Elixir code.
I was playing around with using my library PhStTransform to modify the ast and convert it back to code. Here's a very simple example from the PhStTransform test library.
test "transform quote do output" do
  data = quote do: Enum.map(1..3, fn(x) -> x*x end)
  data_transform = quote do: Enum.map(1..3, fn(y) -> y*y end)

  replace_x = fn(a, _d) ->
    case a do
      :x -> :y
      atom -> atom
    end
  end

  potion = %{Atom => replace_x}
  assert PhStTransform.transform(data, potion) == data_transform
end
What that does is convert all references to :x in the ast into :y. You'd need to be a bit more clever with writing the potion for PhStTransform, but I think it should be possible. PhStTransform is in hex.pm.
https://hex.pm/packages/phst_transform
I'm not an Elixir expert, but I know about transforming source code; see my bio.
If you have access to the AST as a data structure, you can always write procedural code to climb over it and hack at where you want something different. I assume if Elixir will give you the AST, it will give you access/modification procedures for working with it. This is compiler 101.
That's usually NOT pretty code to write or maintain. And it may not be enough: you often need more than just the AST to do serious analysis and transformation. See my essay on Life After Parsing. Think of this as compiler 102.
One of the first stumbling blocks is regenerating text from the AST. Here is my SO discussion on how to prettyprint an AST, and why it is harder than it looks: https://stackoverflow.com/a/5834775/120163
(Sounds like Fred the Magic Wonder Dog didn't think what Elixir offered was enough and is inventing his own extensions to make this easier.)
For example:
/project/SConstruct
/project/main.cpp
/project/folder/bar.h
/project/folder/bar.cpp
/project/folder/foo.h
/project/folder/foo.cpp
What I want is for SCons to compile all source files in all subdirectories without having to add a SConscript file in every subdirectory. Basically, I want to pass a Glob('*.cpp') for /project and all subdirectories in /project.
Thanks in advance to anyone who replies!
A recursive globber that worked for me
def GlobRecursive(pattern, node='.'):
    results = []
    for f in Glob(str(node) + '/*', source=True):
        if type(f) is SCons.Node.FS.Dir:
            results += GlobRecursive(pattern, f)
    results += Glob(str(node) + '/' + pattern, source=True)
    return results
Robᵩ's answer didn't work for me because a) isdir() always returned False (scons 2.5.1) and b) list comprehensions are hard for me to comprehend :-)
As Brady points out, "Glob() is not recursive", but perhaps we could create a recursive globber:
def AllSources(node='.', pattern='*'):
    result = [AllSources(dir, pattern)
              for dir in Glob(str(node) + '/*')
              if dir.isdir()]
    result += [source
               for source in Glob(str(node) + '/' + pattern)
               if source.isfile()]
    return result
env = Environment()
env.Program('program', source=AllSources('.', '*.c*'))
Instead of globbing with Glob('*.cpp'), you should also look in the subdirectories using Glob('**/*.cpp') and iterate over the files it returns.
You can do this from the root SConstruct, as follows:
env = Environment()
env.Program(source=[Glob('*.cpp'), Glob('folder/*.cpp')], target='yourBinaryName')
You will probably also need to configure the include directory as follows:
env.Append(CPPPATH='folder')
Remember that Glob() is not recursive.
I'm looking to monkey-patch require() to replace its file loading with my own function. I imagine that internally require(module_id) does something like:
Convert module_id into a file path
Load the file path as a string
Compile the string into a module object and set up the various globals correctly
I'm looking to replace step 2 without reimplementing steps 1 + 3. Looking at the public API, there's require() which does 1 - 3, and require.resolve() which does 1. Is there a way to isolate step 2 from step 3?
I've looked at the source of require mocking tools such as mockery -- all they seem to be doing is replacing require() with a function that intercepts certain calls and returns a user-supplied object, and passes on other calls to the native require() function.
For context, I'm trying to write a function require_at_commit(module_id, git_commit_id), which loads a module and any of that module's requires as they were at the given commit.
I want this function because I want to be able to write certain functions that a) rely on various parts of my codebase, and b) are guaranteed to not change as I evolve my codebase. I want to "freeze" my code at various points in time, so thought this might be an easy way of avoiding having to package 20 copies of my codebase (an alternative would be to have "my_code_v1": "git://..." in my package.json, but I feel like that would be bloated and slow with 20 versions).
Update:
So the source code for module loading is here: https://github.com/joyent/node/blob/master/lib/module.js. Specifically, to do something like this you would need to reimplement Module._load, which is pretty straightforward. However, there's a bigger obstacle, which is that step 1, converting module_id into a file path, is actually harder than I thought, because resolveFilename needs to be able to call fs.exists() to know where to terminate its search... so I can't just substitute out individual files, I have to substitute entire directories, which means that it's probably easier just to export the entire git revision to a directory and point require() at that directory, as opposed to overriding require().
Update 2:
Ended up using a different approach altogether... see answer I added below
You can use the require.extensions mechanism. This is how the coffee-script coffee command can load .coffee files without ever writing .js files to disk.
Here's how it works:
https://github.com/jashkenas/coffee-script/blob/1.6.2/lib/coffee-script/coffee-script.js#L20
loadFile = function(module, filename) {
  var raw, stripped;
  raw = fs.readFileSync(filename, 'utf8');
  stripped = raw.charCodeAt(0) === 0xFEFF ? raw.substring(1) : raw;
  return module._compile(compile(stripped, {
    filename: filename,
    literate: helpers.isLiterate(filename)
  }), filename);
};

if (require.extensions) {
  _ref = ['.coffee', '.litcoffee', '.md', '.coffee.md'];
  for (_i = 0, _len = _ref.length; _i < _len; _i++) {
    ext = _ref[_i];
    require.extensions[ext] = loadFile;
  }
}
Basically, assuming your modules have a set of well-known extensions, you should be able to use this pattern of a function that takes the module and filename, does whatever loading/transforming you need, and then returns an object that is the module.
This may or may not be sufficient to do what you are asking, but honestly from your question it sounds like you are off in the weeds somewhere far from the rest of the programming world (don't take that harshly, it's just my initial reaction).
So rather than mess with the node require() module, what I ended up doing is archiving the given commit I need to a folder. My code looks something like this:
# commit_id is the commit we want
# (note that if we don't need the whole repository,
# we can pass "commit_id path_to_folder_we_need")
#
# path is the path to the file you want to require starting from the repository root
# (ie 'lib/module.coffee')
#
# cb is called with (err, loaded_module)
#
require_at_commit = (commit_id, path, cb) ->
  dir = 'old_versions' # make sure this is in .gitignore!
  dir += '/' + commit_id
  do_require = -> cb null, require dir + '/' + path
  if not fs.existsSync(dir)
    fs.mkdirSync(dir)
    cmd = 'git archive ' + commit_id + ' | tar -x -C ' + dir
    child_process.exec cmd, (error) ->
      if error
        cb error
      else
        do_require()
  else
    do_require()