Reconsolidate protocols in Elixir 1.2 or higher - metaprogramming

I have a macro that creates a module, a struct for the module, and implements a protocol for that struct.
In my suite I have a simple test module that calls the macro, and then makes assertions on the generated module/struct/protocol implementation. One test calls the protocol function with the struct to assert that it has been implemented. Prior to 1.2 this worked, but now it fails, and I get the following warning when running the suite.
test/dogma/rule_builder_test.exs:7: warning: the Dogma.Rule \
protocol has already been consolidated, an implementation for \
Dogma.RuleBuilderTest.MagicTestRule has no effect
I have removed this test for now, as I believe the rest of my suite tests this functionality sufficiently, but I'm curious if there is a way to make this work again, or at least silence the warning.
I played around with Protocol.consolidate/2, but was unsuccessful.

Starting in Elixir 1.2, Mix consolidates protocols by default, which can cause the problem described here:
https://github.com/elixir-lang/elixir/blob/v1.2/CHANGELOG.md#workflow-improvements
It sounds to me like you have a different flavor of this same problem, but the fix is most likely the same. Set consolidate_protocols: false in your project config (only when running in the test environment).
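For example, a minimal sketch of that setting in mix.exs (the app name and version are placeholders for your own project):

defmodule Dogma.Mixfile do
  use Mix.Project

  def project do
    [
      app: :dogma,
      version: "0.1.0",
      elixir: "~> 1.2",
      # Keep consolidation on for dev/prod, but disable it for :test so that
      # protocol implementations defined inside the test suite still take effect.
      consolidate_protocols: Mix.env != :test,
      deps: []
    ]
  end
end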

Related

Test for GHC compile time errors

I'm working on proto-lens#400, tweaking a Haskell code generator. In one of the tests I'd like to verify that a certain API has not been built. Specifically, I want to ensure that a certain type of program will not type check successfully. I'd also have a similar program with one identifier changed, which should compile, to guard against a typo breaking the test. Following Extending and using GHC as a Library, I have managed to have my test write a small file and compile it using GHC as a library.
But I need the code emitted by the test to load some other modules, specifically the output of that project's code generator and its runtime environment, with transitive dependencies. I have at best a very rough understanding of stack and hpack, which provide the build system. I know I can add dependencies to the package.yaml file to make them available to individual tests, but I have no clue how to access such dependencies from the GHC session set up as part of running the test. I imagine I might find some usable data in environment variables, but I also believe such an approach might be undocumented and prone to break without warning.
How can I have a test case use GHC as a library and have it access dependencies expressed in package.yaml? Or alternatively, can I use some construct other than a regular test case to express a file with dependencies but check that the file won't compile?
I don't know if this applies to you, because there are too many details going way over my head, but one way to test for type errors is to build your test suite with -fdefer-type-errors and catch the resulting exception (of type TypeError) at run time.
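A minimal sketch of that approach (module and names invented; Control.Exception exports TypeError from GHC 8.0 onwards):

{-# OPTIONS_GHC -fdefer-type-errors #-}
module Main where

import Control.Exception (TypeError (..), evaluate, try)

-- This binding is ill-typed; with -fdefer-type-errors it still compiles,
-- and forcing it at run time throws an exception of type TypeError.
illTyped :: Int
illTyped = "this is not an Int"

main :: IO ()
main = do
  result <- try (evaluate illTyped) :: IO (Either TypeError Int)
  case result of
    Left (TypeError _) -> putStrLn "got the expected type error"
    Right _            -> error "the ill-typed expression unexpectedly evaluated"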

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
ExUnit provides a setup hook, which runs before each individual test, and a setup_all hook, which runs once before all tests in a module.
When I try to isolate my Gherkin scenarios by resetting the persistence in the setup hook, the persistence seems to be purged before each step definition is executed. But a Gherkin scenario almost always needs multiple steps, which build up the test environment and execute the test in a fixed order.
The setup_all hook, on the other hand, resets the persistence only once per feature file. But a Gherkin feature file almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps may be something to look into. Sadly, that change was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update, so at the moment that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't been published in a release yet but are in master. They work with callback tags; you can take a closer look at the example in the tests (a rough plain-ExUnit illustration follows below).
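For illustration, this is how tags and setup callbacks interact in plain ExUnit; whether Cabbage maps Gherkin scenario tags onto ExUnit tags exactly like this is an assumption, so do check the library's own tests:

defmodule TaggedScenarioTest do
  use ExUnit.Case

  # The setup callback receives the test's tags in its context, so a
  # persistence reset can be limited to tagged tests (i.e. tagged scenarios).
  setup context do
    if context[:reset_db] do
      # MyApp.Persistence.reset!()  # hypothetical reset helper
      :ok
    end

    :ok
  end

  @tag :reset_db
  test "a scenario that needs a clean database" do
    assert true
  end
end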
Things are currently a bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

Can I externalize parts of a Rust documentation test to an external file?

When writing Rust documentation tests, is it possible to externalize parts of the code to an external file to keep the example short?
# include!("src/fragment.rs") appears to work and does not show up in the output. I have no idea how this interferes with Cargo's dependency processing, though.
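As a sketch of that hidden-line trick (the fragment path is taken from the question, and the helper name is a placeholder; lines starting with # are compiled into the doctest but hidden from the rendered documentation):

/// Uses shared setup pulled in from another file.
///
/// ```
/// # include!("src/fragment.rs");
/// let result = helper_from_fragment();   // hypothetical helper defined in the fragment
/// assert_eq!(result, 42);
/// ```
pub fn documented() {}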
I don't think it is officially supported at this moment; there is a related Cargo issue and a tool that attempts to allow it until it is introduced in Cargo (I haven't used it, though).

PowerMock issue when mocking a static method with Java7 construct

I am experiencing an issue when mocking a static method in code compiled with Java 7.
I am annotating my JUnit test with the annotations
@RunWith(PowerMockRunner.class)
@PrepareForTest(StaticClassToMock.class)
When I run my test and try to mock my static class with
PowerMockito.mockStatic(StaticClassToMock.class);
it throws
java.lang.VerifyError: JVMVRFY012 stack shape inconsistent [...]
If I remove the Java 7 constructs from StaticClassToMock by replacing the multi-catch clauses (exceptions joined with |) with separate cascaded catch blocks, it works fine.
I saw that the latest version of PowerMock (1.6.6) is compiled with Java 6.
Is my issue related to the Java 7 constructs, given that PowerMock is compiled with Java 6?
Thanks
That is the thing with PowerMock - welcome to its bizarre errors.
First question would be - are you using an IBM JDK? Because IBM JDK and PowerMock go even more "bizarre" than Oracle/OpenJDK and PowerMock.
If you do some search, there are plenty of potential hints around:
VerifyError on WAS
Code not working with Java7
Anyway, the first answer would be: simply try whether running your JVM with -noverify makes any difference.
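If your tests run through Maven, one place to try the flag is the Surefire argLine (a sketch; plugin version and the rest of the POM are omitted):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- disable bytecode verification in the forked test JVM -->
    <argLine>-noverify</argLine>
  </configuration>
</plugin>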
The longer answer: unless you are testing 3rd-party code which you can't change, consider ... not using static code in a way that makes you turn to PowerMock.
You see, static is first of all an anomaly in good OO design. It should be used with great care, as it puts a lot of direct coupling into your code. And simply put: using static is one of the simplest ways to create code that is hard or impossible to test! So, if changing your code is an option, you could watch those videos to learn how to create testable code in the first place. And then your need to turn to PowerMock ... will simply vanish.
My personal two cents: I have spent many hours hunting down such PowerMock problems. Then we decided to move to designs that only allow static content that does not break our ordinary unit testing. Since then we have been living fine with EasyMock and Mockito. No more need for PowerMock; no more hours spent debugging problems that had nothing to do with our production code, but only with the mocking framework.
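As a rough sketch of that kind of redesign (all class names invented for illustration): hide the static call behind a small instance seam, and plain Mockito can then stub it without PowerMock.

interface Clock {
    long now();
}

class SystemClock implements Clock {
    @Override
    public long now() {
        return System.currentTimeMillis();   // the only place the static call lives
    }
}

class ReportService {
    private final Clock clock;

    ReportService(Clock clock) {
        this.clock = clock;
    }

    String stamp() {
        return "generated-at-" + clock.now();
    }
}

class ReportServiceTest {
    @org.junit.Test
    public void stampsWithMockedClock() {
        Clock clock = org.mockito.Mockito.mock(Clock.class);
        org.mockito.Mockito.when(clock.now()).thenReturn(42L);
        org.junit.Assert.assertEquals("generated-at-42", new ReportService(clock).stamp());
    }
}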

Safe execution of untrusted Haskell code

I'm looking for a way to run arbitrary Haskell code safely (or refuse to run unsafe code).
Must have:
module/function whitelist
timeout on execution
memory usage restriction
Capabilities I would like to see:
ability to kill thread
compiling the modules to native code
caching of compiled code
running several interpreters concurrently
complex datatype for compiler errors (instead of a simple message in a String)
With that sort of functionality it would be possible to implement a browser plugin capable of running arbitrary Haskell code, which is the idea I have in mind.
EDIT: I've got two answers, both great. Thanks! The sad part is that there doesn't seem to be a ready-to-go library, just a similar program. It's a useful resource, though. Anyway, I think I'll wait for 7.2.1 to be released and try to use SafeHaskell in my own program.
We've been doing this for about 8 years now in lambdabot, which supports:
a controlled namespace
OS-enforced timeouts
native code modules
caching
concurrent interactive top-levels
custom error message returns.
This set of rules is documented; see:
Safely running untrusted Haskell code
mueval, an alternative implementation based on ghc-api
The approach to safety taken in lambdabot inspired the Safe Haskell language extension work.
For approaches to dynamic extension of compiled Haskell applications, in Haskell, see the two papers:
Dynamic Extension of Typed Functional Languages, and
Dynamic applications from the ground up.
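The feature list above mentions OS-enforced timeouts; as a much weaker, purely in-process sketch of the same idea, System.Timeout from base can already bound the evaluation of an expression (force comes from the deepseq package, and the expression here is just a stand-in for untrusted code):

import Control.DeepSeq (force)
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- Allow at most one second of wall-clock time to fully evaluate the value;
-- Nothing means evaluation was cut off, Just xs is the completed result.
runLimited :: IO (Maybe [Int])
runLimited = timeout 1000000 (evaluate (force untrusted))
  where
    untrusted :: [Int]
    untrusted = take 10 [x * x | x <- [1 ..]]

main :: IO ()
main = runLimited >>= print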
GHC 7.2.1 will likely have a new facility called SafeHaskell which covers some of what you want. SafeHaskell ensures type-safety (so things like unsafePerformIO are outlawed), and establishes a trust mechanism, so that a library with a safe API but implemented using unsafe features can be trusted. It is designed exactly for running untrusted code.
For the other practical aspects (timeouts and so on), lambdabot as Don says would be a great place to look.
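As a minimal sketch of the SafeHaskell side (module name invented): code compiled under the Safe pragma, or with -XSafe forced by the host, can only import safe or trustworthy modules, so escapes like unsafePerformIO are rejected at compile time.

{-# LANGUAGE Safe #-}
-- Adding   import System.IO.Unsafe (unsafePerformIO)   to this module would
-- be a compile-time error, because System.IO.Unsafe is marked as unsafe.
module UntrustedPlugin (answer) where

answer :: Int
answer = 6 * 7

In practice the host would pass -XSafe itself when compiling the submitted source, so the plugin author cannot simply leave the pragma out.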