Hello, I've ported RNCryptor to an Android library. I now have to run it against the test vectors, but I don't know how to do that.
Can I get an example or an explanation of this type of test? I'm not asking for code, but for the pattern: what to test, what to expect in the test results, which cases to cover, and so on.
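My rough understanding is that these are known-answer tests: each vector fixes the inputs (password, salts, IV, ciphertext) and the expected output, and the test compares the library's result byte-for-byte against the expected value. A minimal sketch of what I have in mind is below (MyRNCryptor.decrypt and all the values are placeholders for my port's API and for the real vector data); I'd like to know whether this is the right approach:

    import static org.junit.Assert.assertArrayEquals;

    import org.junit.Test;

    public class RNCryptorVectorTest {

        // Turns the hex strings used in the published vector files into bytes.
        private static byte[] hex(String s) {
            String clean = s.replace(" ", "");
            byte[] out = new byte[clean.length() / 2];
            for (int i = 0; i < out.length; i++) {
                out[i] = (byte) Integer.parseInt(clean.substring(2 * i, 2 * i + 2), 16);
            }
            return out;
        }

        @Test
        public void decryptsKnownVector() throws Exception {
            // Placeholder values; the real ones come straight from a vector file
            // (password, salts, IV, plaintext, expected ciphertext).
            String password = "password-from-vector-file";
            byte[] ciphertext = hex("0301aabbcc");        // placeholder
            byte[] expectedPlaintext = hex("48656c6c6f"); // placeholder ("Hello")

            // MyRNCryptor.decrypt stands in for the port's actual decryption call.
            byte[] actual = MyRNCryptor.decrypt(ciphertext, password);

            assertArrayEquals(expectedPlaintext, actual);
        }
    }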
I'd like to build a DSL and use it as follows:
The DSL compiles to Java.
Export the DSL compiler and package it (e.g. as a JAR), so I can invoke the DSL compiler from a Java application to compile "code written in my DSL" into "Java source code" (I'll use other libraries to programmatically compile the Java into bytecode).
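Concretely, the call I'd like to make from my backend looks roughly like the sketch below (MyDslCompiler is a made-up name for the artifact I want to export, not something that exists today):

    // MyDslCompiler is the hypothetical compiler artifact I want to export from
    // the language workbench and put on my backend service's classpath.
    public class BackendService {

        public String toJavaSource(String dslProgram) {
            MyDslCompiler compiler = new MyDslCompiler();
            // DSL text in, Java source out; compiling that source to bytecode is
            // then handled by other libraries.
            return compiler.compile(dslProgram);
        }
    }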
Can I use JetBrains MPS to build a DSL and export its compiler as described? If not, other suggestions are appreciated.
I raised the question on the MPS Support forum, and the answer I got was that it's not possible to export a compiler for my DSL (e.g. as a JAR) from the MPS IDE and then invoke that exported compiler from some Java application (think of a Java backend service), passing it text input representing a program written in my DSL.
You can use Ant to invoke the "MPS code generator" (which is responsible for generating the target-language code, e.g. Java, for the input DSL program), but the generator expects as input "the MPS model" of your DSL program (I guess this is some AST-like internal MPS representation of the program). The only way to produce "the MPS model" of your DSL program is to use JetBrains' MPS IDE (or a stripped-down version of it, or IntelliJ with a plugin for your DSL). In other words, the only way to write/edit programs in your DSL and be able to compile them is through the JetBrains MPS IDE (or one of its derivatives).
Link to the question I posted on the MPS Support forum and the answer.
It seems to me your question is not so far from this documentation entry: https://confluence.jetbrains.com/display/MPSD32/Building+standalone+IDEs+for+your+languages
Maybe you cannot do it directly as a JAR library, but it is possible, with some Ant or Gradle magic, to call a DSL compiler (or, as it's called in MPS, a generator) from an Ant task. Documentation about this can be found at https://www.jetbrains.com/help/mps/building-mps-language-plugins.html#
I know the page says it's about building plugins, but the same mechanism is used.
Why you would want to do this, though, eludes me, since the strong point of MPS is IDE support and very advanced multi-language integration, not necessarily code generation.
invoke the exported compiler [...] passing a text input representing a program written on my DSL
Your idea is sadly inherently flawed. There is no such thing as an MPS "DSL compiler" that takes text as input. In MPS there are generators, which transform your DSL into another MPS language; in your case the target language would be BaseLanguage (the MPS version of Java). After the transformation, the Java source code is generated as .java files and automatically compiled into .class files. So yes, this can be done with an Ant script built in BuildLanguage and called from the command line. But the generator does NOT take text as input; it takes an AST. The AST is your program "coded" (the proper term would be modeled) in MPS.
So what you actually want is a parser (if your language is textual and parseable, that is), which takes text as input and produces an AST as output. Once you have the AST in any form, you can somehow put it into an MPS model.
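For illustration only (everything below is a made-up toy, not MPS API), a parser in this sense is simply text in, AST out:

    import java.util.ArrayList;
    import java.util.List;

    // Toy grammar: a program is a sum like "a + b + c". The AST is just the
    // list of operands. Mapping such an AST onto MPS nodes/models is not shown.
    final class SumAst {
        final List<String> operands = new ArrayList<>();
    }

    final class SumParser {

        SumAst parse(String text) {
            SumAst ast = new SumAst();
            for (String token : text.split("\\+")) {
                String operand = token.trim();
                if (operand.isEmpty()) {
                    throw new IllegalArgumentException("Empty operand in: " + text);
                }
                ast.operands.add(operand);
            }
            return ast;
        }
    }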
Please refer to my other answer, where I commented on portability (basically import and export) in MPS. Among other things, I mentioned there a project I am working on; it allows you to import a language and its programs into MPS.
If you don't want to use the MPS IDE at all and want to work with text instead, you lose the advantage of MPS as a language workbench (LWB) with a projectional editor. Maybe you should use a textual LWB (e.g. Xtext) or a parser generator (e.g. ANTLR) instead. If the grammar definitions in parser generators scare you, you could use a model-based parser generator like YAJCo (to which I have contributed).
Does JetBrains MPS provide a JIT compiler that can be used inside other applications?
We have a legacy application with its own scripting language. Because this scripting language is very difficult for our customers to use, we would like to provide them with a new DSL.
So the question is: can we use JetBrains MPS to design our DSL and then use the MPS JIT compiler/translator to transform it to Java (or whatever) after the user has written a script in our software?
If by JIT compiler/translator you mean taking your DSL, generating Java from it, and then running that compiled Java code, then yes, that is possible. But it would be an extra transformation step: write code -> generate/compile -> run (the resulting JAR).
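A rough sketch of that pipeline is below, assuming an Ant build produced from BuildLanguage and Ant available on the PATH; the script name, targets, and paths are placeholders:

    import java.io.File;

    public class GenerateAndRun {

        public static void main(String[] args) throws Exception {
            // Step 1: invoke the Ant build produced from BuildLanguage; the script
            // name and targets are placeholders for whatever your build defines.
            run("ant", "-f", "build.xml", "generate", "assemble");

            // Step 2: run the JAR that the build assembled (path is a placeholder).
            run("java", "-jar", new File("build/artifacts/myDsl.jar").getPath());
        }

        private static void run(String... command) throws Exception {
            int exitCode = new ProcessBuilder(command).inheritIO().start().waitFor();
            if (exitCode != 0) {
                throw new IllegalStateException("Command failed: " + String.join(" ", command));
            }
        }
    }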
If you mean interpreting the model without a transformation step first, then the answer is: not out of the box. We have built an interpreter framework for MPS and have built two interpreters with it so far, one for Java and one for C, though the focus there is not on performance. We use it for small calculations in formulas and for REPL-like things. It is currently a work in progress but works quite nicely. You can look here for the Interpreter and find some more information and where to look. As a mid-term project we might want to integrate this interpreter definition with the Graal compiler, which would then be much more of a JIT compiler than just an interpreter.
I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve using a JUnit test suite, and it was very straightforward in Eclipse. But since all my tests are Spock tests written in Groovy, there is no easy way to create a test suite for them in Spring IDE.
Can someone please share some ideas on how to create a test suite, run specific tests, and define the order in which the tests run?
Any help would be much appreciated.
Spock specifications are valid JUnit tests (or suites) as well. That's why they are recognized by tools such as STS. You should be able to add them to a test suite just like any other JUnit test.
On the other hand, it doesn't sound like good practice if your tests depend on execution order.
If certain tasks need to be performed before test execution, they should be placed in the setup() method. If that logic is common to more than one test, consider extracting it to a parent class.
If all you need is sequential execution of methods within a spec, have a look at @spock.lang.Stepwise, which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit (4) test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least, it should allow you to run all tests in a package.
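For example, a plain JUnit 4 suite can list Spock specs directly, since they compile to JUnit-compatible classes (the spec class names below are made up):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Spock specs are JUnit-compatible classes, so a plain JUnit 4 suite can
    // list them; the suite runs them in the declared order.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
            PrepareTestDataSpec.class,   // runs first
            CheckoutWorkflowSpec.class   // runs afterwards
    })
    public class OrderedFunctionalTestSuite {
    }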
Although I think it won't allow you to specify the order of the tests, you could use Spock's Runner configuration or the @IgnoreIf/@Requires built-in extensions. Have a look at my response to a similar question. It's probably also worth having a look at the RunnerConfiguration javadoc, as it shows that you can include classes directly instead of using annotations.
If the tests you want to run in a specific order are part of the same Spock Specification, then you can use the @Stepwise annotation to direct that the tests (feature methods) are executed in the order they appear in the Specification class.
As others mentioned, it's best to avoid this dependency if you can because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? So it would be better to avoid intra-test dependencies by using setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.
I often hear people talk about unit testing, but I don't know what it actually looks like, especially for Linux C code. I know that Visual Studio provides some kind of template for this sort of test, but how do you do it for general C code on Linux? Could anyone show me an example, or maybe show me how to do TDD for Linux C code? It's mysterious to me. Thanks!
Unit Testing is a type of automated test which tests individual units of functionality. It's especially useful during initial development and for ensuring that changes don't break your program, which is invaluable when you refactor. You can read more here:
http://en.wikipedia.org/wiki/Unit_testing
Unit testing became a big deal with Extreme Programming and Test Driven Development. With that, a large number of unit testing frameworks sprang up.
That answers your second question. To do unit testing in C, you'll need a unit testing framework for the language. Fortunately, there are a variety available that you can just use, so you needn't worry about creating your own:
http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#C
If you choose to roll your own, you'll need to find a way to associate tests with units of functionality, ensure they are all run when you run your program, and have a way to remove them easily for deployment. Macros will likely be key.
I've been looking into Spock, and I've had experience with FitNesse. I'm wondering how people would choose one over the other, given that they appear to address the same or a similar problem space.
Also for the folks who have been using Spock or other groovy code for tests, do you see any noticeable performance degradation? Tests are supposed to give immediate feedback - as we know that if the tests take longer to run, the developer tends to run them less frequently - so I'm wondering if the reduction in speed of test execution has had any impact in the real world.
Thanks
I am no FitNesse guy, so please take what I say with a grain of salt. It seems to me that what FitNesse is trying to do is provide a programming-language-independent environment for specifying tests, used to give the programmer a more visual interface. In Spock, a Groovy AST transform is used to turn the data table into a Groovy program.
Since you basically stay within a programming language, it is easier in Spock to realize more complicated test setups. In FitNesse, by contrast, you often seem to end up writing fixture code.
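For example, behind a FitNesse decision table there is typically a small Java fixture class like the made-up one below, whereas in Spock the data table lives directly in the spec:

    // Made-up SLIM fixture backing a FitNesse decision table with the columns
    // "augend", "addend" and "sum?". The table cells are fed into the setters,
    // and the "?" column is checked against the return value of sum().
    public class AdditionFixture {

        private int augend;
        private int addend;

        public void setAugend(int augend) {
            this.augend = augend;
        }

        public void setAddend(int addend) {
            this.addend = addend;
        }

        public int sum() {
            return augend + addend;
        }
    }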
I personally don't need a test-execution button; I like the direct approach. I like not having to take care of even more classes just to enable testing, and I like looking at the code directly. For example, I want to execute my tests from the command line, not from a web interface. That is surely possible in FitNesse too, but then the whole visual layer FitNesse is trying to give the user is just ballast for me. That's why I would choose Spock over FitNesse.
The advantage of the language-agnostic approach is, of course, that a lot of test specifications can be reused for Java and for .NET, so if that is a requirement for you, you may judge differently. It usually isn't for me.
As for performance, I would not worry too much about that part.