Convert string version information to an integer for easy comparison

I am in the process of designing a program that runs end-user scripts written in Lua. The program defines an interface that specifies which parameters are passed to which function, in what order, and so on. At some point in the future the interface may change, so I would like to pass the host program's version information to a script. My usual practice is to use the MAJOR.MINOR.PATCH format.
The goal is to keep the representation as simple as possible, so that the information can be processed using arithmetic operations, like so:
if program.version < SCRIPT_SUPPORTS_VERSION then
  error("This version is not supported, please upgrade...")
end
Common sense suggests using integers written in base sixteen, where each byte represents one part of the version, like so:
-- Version 1.3.2
-- Compatible with versions >=1.2.0
--
local program = {}
program.version = 0x010302
local SCRIPT_SUPPORTS_VERSION = 0x010200
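For what it's worth, the packed value can be built from a version string and taken apart again with plain arithmetic; a rough sketch (the helper names are made up):

-- Rough sketch: pack MAJOR.MINOR.PATCH into one number and back again.
local function pack_version(major, minor, patch)
  return major * 0x10000 + minor * 0x100 + patch
end

local function parse_version(s)  -- "1.3.2" -> 0x010302
  local major, minor, patch = s:match("^(%d+)%.(%d+)%.(%d+)$")
  return pack_version(tonumber(major), tonumber(minor), tonumber(patch))
end

local function unpack_version(v)
  return math.floor(v / 0x10000), math.floor(v / 0x100) % 0x100, v % 0x100
end

assert(parse_version("1.3.2") == 0x010302)
print(unpack_version(0x010302))  --> 1  3  2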
As I mentioned, this is the first thing that came to mind when thinking about the problem, and it seems to work. On the other hand, hexadecimal numbers may seem scary to some.
I would like to hear your opinion; please propose alternative solutions.

Related

MongoDB types in Node.js

Is there a node.js module that allows my application to have the same types as MongoDB:
http://docs.mongodb.org/manual/reference/bson-types/
For example, in my node.js app, I want it to have full understanding of the Integer type, but node.js doesn't recognize anything but Number out of the box to my understanding.
node.js doesn't recognize anything but Number out of the box to my understanding.
Well, that's a design decision of JavaScript. JavaScript has only a Number type, which is, if the implementation follows the standard's recommendation, represented as an IEEE-754 double-precision floating-point number. That hides much of the complexity of short vs. long vs. float vs. double and all the other numeric types that are common in most other languages. It's also a good compromise, because Numbers have exact integer precision up to 2^53 - 1; you rarely need anything outside that range, and when you do, a long usually doesn't cut it either...
MongoDB, on the other hand, is written in C++ and is thus much closer to the gritty details of how data is actually stored in memory. Bridging that gap isn't trivial.
Is there a node.js module that allows my application to have the same types as MongoDB?
In a way, you're battling your underlying programming language. If such a module existed, it would also have to somehow implement a whole lot of operations, e.g. long addition (for results > 2^53 - 1), in a fundamentally different way. For instance, it could do this by converting the numbers to strings and adding the individual digits manually, which would be atrociously slow.
Something like this happens in the mongo shell, e.g. for very long integers it uses a syntax like NumberLong("1223"). But it doesn't re-implement the operations, it falls back to regular V8 JavaScript. Proof:
> var foo = NumberLong("36028797018963968") // 2^55
> var foo2 = foo + NumberLong("1")
> foo2
36028797018963970 // should be 36028797018963969
Alternatively, such a module could be a native module that used the existing implementations of a native programming language such as C++. There's a node module called 'edge' that apparently does it for C#. However, that comes with quite a bit of complexity because crossing language borders requires marshalling, which is both expensive and tricky to get right.
In other words: getting a truly different set of types requires another programming language. Making your code roughly stick to your schema is possible, e.g. via mongoose or via coding discipline, but it won't make JS strongly typed.

Algorithm to detect if a file (or string) has been patched

This question is about string algorithms, not version control or management tools.
I learnt the diff algorithm and tried to implement one. That is, given string A and string B, the diff calculates a sequence of actions that can convert A into B.
I wonder whether it is possible, given a string S and a sequence of actions that the diff algorithm can produce, for an algorithm to tell whether S is (a) the original string A, (b) the patched string B, or (c) an unrelated string. And what if S is known to be one of A and B?
Actually, what I'm really doing is researching a method that can tell whether a patch has been applied (at the source code or binary level). I tried googling for some time, but didn't find anything useful.
It's pretty complicated, but it can be done, on some level.
Essentially, you parse the source into tokens and then build the abstract syntax tree. Once that is done, you must build a diff tool that can do semantic differential analysis between abstract syntax trees. SemanticMerge, for example, does that.
With that in place, you have the semantic difference between the two pieces of source code, and then you need to define what exactly constitutes a patch.
Some of the rules can be:
1) A variable's content was changed
2) An if check was added
The bottom line is that differentiating between a patch and new functionality is not an easy task. The most reliable way is probably to check the binary file's version numbers and understand the versioning scheme.
E.g., only the minor version is updated when patches are applied.

Comparison of atom libraries for Haskell, e.g. simple-atom and stringtable-atom

I find myself needing a string table in a Haskell program I'm developing. In particular, I want a system which allows me to box any String into (say) an 'Atom'; given an Atom, you should be able to recover the original string it came from, and (critically) comparing two Atoms for equality should be as fast (or almost as fast) as a pointer comparison.
(One can easily devise a referentially-transparent interface for this functionality; the implementation will use unsafePerformIO internally but the user of the library need not know about such details.)
Two libraries available on Hackage seem to be in the right ballpark: stringtable-atom and simple-atom. Does anyone have any experience using these libraries? In particular, are there any suggestions as to what the benefits of one over the other might be?
Another nice choice would be ekmett's new intern package, which handles bytestrings as well as more complex recursive types: http://hackage.haskell.org/package/intern
He has assured me it is threadsafe.
I wrote monad-atom for my own use. It's not what you want if you need globally unique atoms, but if all you need is a string table it is simple and safe.

A language in which everything compiles

I'm trying to do some research for a new project, and I need to create objects dynamically from random data.
For this to work, I need a language / compiler that doesn't have problems with weird uncompilable code lying around.
Basically, I need the random code to compile (or be interpreted) as much as possible - meaning that the uncompilable parts will be ignored, and only the compilable parts will create the objects (which can then be run).
Object Oriented-ness is not a must, but is a very strong advantage.
I thought of ASM, but it's very messy, and I'd probably need more readable code.
Thanks!
It sounds like you're doing something very much like genetic programming; even if you aren't, GP has to solve some of the same problems—using randomness to generate valid programs. The approach to this that is typically used is to work with a syntax tree: rather than storing x + y * 3 - 2, you store something like the following:
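        -
       / \
      +   2
     / \
    x   *
       / \
      y   3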
Then, instead of randomly changing the raw syntax, one can randomly change nodes in the tree. And if x should randomly change to, say, +, you can statically know that this means you need to insert two children (or not, depending on how you define +).
A good choice for a language to work with for this would be any Lisp dialect. In a Lisp, the above program would be written (- (+ x (* y 3)) 2), which is just a linearization of the syntax tree using parentheses to show depth. And in fact, Lisps expose this feature: you can just as easily work with the object '(- (+ x (* y 3)) 2) (note the leading quote). This is a three-element list, whose first element is -, second element is another list, and third element is 2. And, though you might or might not want it for your particular application, there's an eval function, such that (eval '(- (+ x (* y 3)) 2)) will take in the given list, treat it as a Lisp syntax tree/program, and evaluate it. This is what makes Lisps so attractive for doing this sort of work; Lisp syntax is basically a reification of the syntax-tree, and if you operate at the syntax-tree level, you can work on code as though it was a value. Lisp won't help you read /dev/random as a program directly, but with a little interpretation layered on top, you should be able to get what you want.
I should also mention, though I don't know anything about it (not that I know much about ordinary genetic programming) the existence of linear genetic programming. This is sort of like the assembly model that you mentioned—a linear stream of very, very simple instructions. The advantage here would seem to be that if you are working with /dev/random or something like it, the amount of interpretation needed is very small; the disadvantage would be, as you mentioned, the low-level nature of the code.
I'm not sure if this is what you're looking for, but any programming language can be made to function this way. For any programming language P, define the language P_always as follows:
If p is a valid program in P, then p is a valid program in P_always whose meaning is the same as its meaning in P.
If p is not a valid program in P, then p is a valid program in P_always whose meaning is the same as a program that immediately terminates.
For example, I could make the language C++_always so that this program:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello, world!" << endl;
}
would compile and print "Hello, world!", while this program:
Hahaha! This isn't legal C++ code!
would be a legal program that does absolutely nothing.
To solve your original problem, just take any OOP language like Java, Smalltalk, etc. and construct the appropriate Java_always, Smalltalk_always, etc. language from it. Again, I'm not sure if this is at all what you're looking for, but it could be done very easily.
Alternatively, consider finding a grammar for any OOP language and then using that grammar to produce random syntactically valid programs. You could then filter those programs down by using the P_always version of that language to handle programs that are syntactically but not semantically valid.
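As a minimal sketch of the P_always idea, here it is in Lua (chosen purely for illustration; this assumes Lua 5.2 or later, where load returns nil for source that does not compile):

-- Sketch of "Lua_always": any string becomes a runnable program; strings that
-- do not compile are treated as a program that does nothing.
local function compile_always(source)
  local chunk = load(source)      -- nil (plus an error message) on a compile error
  return chunk or function() end
end

compile_always("print('Hello, world!')")()              -- behaves as in plain Lua
compile_always("Hahaha! This isn't legal Lua code!")()  -- legal in Lua_always, does nothing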
Divide the ASCII byte values into 9 classes (taking each byte modulo 9 would do). Then assign them to Brainfuck codewords (see http://en.wikipedia.org/wiki/Brainfuck). Then interpret the result as Brainfuck.
There you go: any sequence of ASCII characters is a program. Not that it's going to do anything sensible... This approach has a much better chance, compared to templatetypedef's answer, of getting a nontrivial program from a random byte sequence.
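A sketch of just the mapping step (written in Lua purely for illustration; actually running the result as Brainfuck is left out, and for truly arbitrary input unbalanced brackets would still have to be dropped or ignored):

-- Map arbitrary bytes onto the eight Brainfuck commands; the ninth class is a no-op.
local commands = { ">", "<", "+", "-", ".", ",", "[", "]" }

local function to_brainfuck(bytes)
  local out = {}
  for i = 1, #bytes do
    local class = bytes:byte(i) % 9            -- 0..8
    out[#out + 1] = commands[class + 1] or ""  -- class 8 has no command and is skipped
  end
  return table.concat(out)
end

print(to_brainfuck("any ASCII text at all"))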
Text Editors
You could try feeding random character strings to an editor like Emacs or VI. Many (most?) characters will perform an editing action but some will do nothing (other than beep, perhaps). You would have to ensure that the random code mutator never generates the character sequence that exits the editor. However, this experience would be much like programming a Turing machine -- the code is not too readable.
Mathematica
In Mathematica, undefined symbols and other expressions evaluate to themselves, without error. So, that language might be a viable choice if you can arrange for the random code mutator to always generate well-formed expressions. This would be readily achievable since the basic Mathematica syntax is trivial, making it easy to operate on syntactic units rather than at the character level. It would be even easier if the mutator were written in Mathematica itself since expression-munging is Mathematica's forte. You could define a mini-language of valid operations within a Mathematica package that does not import the system-defined symbols. This would allow you to generate well-formed expressions to your heart's content without fear of generating a dangerous expression, like DeleteFile[FileNames["*.*", "/", Infinity]].
I believe Common Lisp should suit your needs. I always have some code in my SLIME/Emacs session that wouldn't compile. You can always tweak things and redefine functions at run time. It is actually very good for prototyping.
A few years ago it took me quite a while to learn. But nowadays we have quicklisp and everything is so much easier.
Here I describe my development environment:
Install lisp on my linux machine
PS: I want to give an example where Common Lisp was useful for me:
Up to maybe 2004 I used to write small programs in C (the keep-it-simple Unix way).
For the last 3 years I have had to get lots of different hardware running: motorized stages, scientific cameras, IO cards.
The cameras turned out to be quite annoying. Usually you have to cool them down to -50 degrees Celsius or so, and (in some SDKs) they don't like it when you close them. But closing and reopening is exactly what my C development cycle involved: write (30s), compile (1s), run (0.1s), repeat.
Eventually I decided to just use Common Lisp. Often it is straightforward to define the foreign function interfaces to talk to the SDKs, and I can do this without ever leaving the running Lisp image. I start the editor in the morning, define the open-device function to talk to the device, and after 3 hours I have enough of the functions implemented to set the gain, temperature and region of interest, and to obtain video.
Then I can often put the SDK manual away and just use the camera.
I use the same interactive programming approach whenever I have to parse some webpage or some weird XML.

Strategies for parallel implementation of Lua numbers and a 64-bit integer

Lua by default uses a double-precision floating-point (double) type as its only numeric type. That's nice and useful. However, I'm working on software that expects to see 64-bit integers, so I can't get around using actual 64-bit integers one way or another.
The place where the integer type becomes relevant is for file sizes. Although I don't truly expect to see file sizes beyond what Lua can represent with full "integer" precision using a double, I want to be prepared.
What strategies can you recommend for using a 64-bit integer type in parallel with the default numeric type of Lua? I don't really want to throw the default implementation overboard (and I'm not worried about its performance compared to integer arithmetic), but I need some way of representing 64-bit integers to their full precision without too much of a performance penalty.
My problem is that I'm unsure where to modify the behavior. Should I modify the syntax and extend the parser (numbers with an appended LL or ULL come to mind, which to my knowledge don't exist in default Lua), or should I instead write my own C module and define a userdata type that represents the 64-bit integer, along with library functions able to manipulate the values? ...
Note: yes, I am embedding Lua, so I am free to extend it whichever way I please.
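For the C module route, I imagine the Lua-facing side could look roughly like this (all names are made up; comparisons and string conversion would go through metamethods on the userdata):

local int64 = require "int64"                -- hypothetical C module

local size  = int64.new("9007199254740993")  -- exact; beyond 2^53, so passed as a string
local limit = int64.new("9007199254740992")

print(size > limit)                          -- true, via an __lt metamethod
print(tostring(size))                        -- "9007199254740993", via __tostring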
As part of its port to ARM CPUs (which often have poor floating-point support), LuaJIT implemented a "dual-number VM", which allows it to switch between integers and floats dynamically as needed. You could use the same idea yourself, just switching between 64-bit integers and doubles instead of 32-bit integers and floats.
It's currently live in builds, so you may want to consider using LuaJIT as your Lua "interpreter." Or you could use it as a way to learn how to do this sort of thing.
However, I do agree with Marcelo; the 53-bit mantissa should be plenty. You shouldn't really need this for a good 10 years or so.
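A quick way to see where plain doubles stop being exact (standard Lua, where every number is a double):

print(string.format("%.0f", 2^53))  --> 9007199254740992
print(2^53 + 1 == 2^53)             --> true: 2^53 + 1 has no exact double representation
print(2^53 - 1 == 2^53)             --> false: integers up to 2^53 are still exact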
I'd suggest storing your data outside of Lua and using some kind of reference to retrieve it when calling your other libraries. You can then push various results onto the Lua stack for the user to see; you can even retrieve the value as a string to be precise. But I would avoid modifying the values in Lua and relying on the Lua values when calling your external library.
If you're not going to need floating-point precision at any point in the program, you can just redefine LUA_NUMBER to __int64 (or whatever 64-bit int may be in your environment) in luaconf.h.
Otherwise, you can just bring in another library to handle your integers- for infinite precision, you can use a bignum library such as lhf's lbn.
