Variable argument list with different types in MetaTrader5

I have some MQL5 code in which I want to print debug messages if the DEBUG macro is set. I would like to use a separate function (DebugPrint, for that matter) for those debug messages. My first attempt was to create a regular function, but variable arguments don't seem to work. I then tried to use the preprocessor to remove the DebugPrint calls, based on this answer; however, the compiler's preprocessor doesn't seem to understand the variable argument list either. This is the code I tried:
#ifdef DEBUG
#define DebugPrint(...) Print(__VA_ARGS__)
#else
#define DebugPrint(...)
#endif
Any ideas on how to achieve what I'm trying to do?

My few cents on MQL4/5:
Preprocessor directives:
While the revised New-MQL4.56789 compiler has opened up some new, more complex constructs for the #define preprocessor directive syntax, I have almost always burnt my fingers when trying to use them in production code.
Variadic arguments:
MQL4/5 is a strongly-typed, compiled language and as such does not provide any means for variadic functions. With some recent syntax aids, coming from (OOP) class-based function (method) call-interface overrides, and perhaps some advanced abstractions from so-called function templates, there is a chance to create some sort of syntax support for your #define-dependent behaviour.
Function overloading, templates and typename-dependent actions:
While these techniques have brought even more "new" compiler features into the MQL4/5 software domain, the additional levels of complexity do not justify the effort, given that the resulting constructs cannot be used in combination with export, virtual or #import constructs.
So how to make this work?
Well, for the sake of rapid and iterative development needs, one may resort to an "almost-variadic" PrintFormat( DEBUG_MASK, ..., ..., ... ); using a context-bound (known) matching set of arguments against a static, context-specific #define-d DEBUG_MASK. Nested constructions such as StringFormat( MASK_A, par1, par2, StringFormat( MASK_B, par3, par4, ... ), ... ) are left to one's own kind imagination.
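For illustration, here is a minimal MQL5-style sketch of that idea; the mask, the fixed three-argument shape and the sample call are assumptions for this example, not part of the original question:

#ifdef DEBUG
   #define DEBUG_MASK          "DEBUG %s: bid = %G, ask = %G"
   #define DebugPrint(s,b,a)   PrintFormat( DEBUG_MASK, s, b, a )
#else
   // when DEBUG is not defined, every DebugPrint() call compiles away to nothing
   #define DebugPrint(s,b,a)
#endif

// later, with a known, context-specific set of arguments:
// DebugPrint( __FUNCTION__,
//             SymbolInfoDouble( _Symbol, SYMBOL_BID ),
//             SymbolInfoDouble( _Symbol, SYMBOL_ASK ) );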

Related

squeak (smalltalk) how to 'inject' a string into a string

I'm writing a class named "MyObject".
One of the class methods is:
addTo: aCodeString assertType: aTypeCollection
When the method is called with aCodeString, I want to add (at runtime) a new method to the "MyObject" class whose source code is aCodeString, and to inject type-checking code into that source code.
For example, if I call addTo:assertType: like this:
a := MyObject new.
a addTo: 'foo: a boo:b baz: c
^(a*b+c)'
assertType: #(SmallInteger SmallInteger SmallInteger).
I expect to be able to write later:
answer := (a foo: 2 boo: 5 baz: 10).
and get 20 in answer.
And if I write:
a foo: 'someString' boo: 5 baz: 10.
I should get an appropriate error message, because 'someString' is not a SmallInteger.
I know how to write the type-checking code, and I know that to add the method to the class at runtime I can use the compile: method from the Behavior class.
The problem is that I want to add the type-checking code inside the source code.
I'm not really familiar with all of Squeak's classes, so I'm not sure whether I should edit aCodeString as a string inside addTo:assertType: and then use compile: (and I don't know how to do that), or whether there is a way to inject code into an existing method via the Behavior class or some other Squeak class.
So basically, what I'm asking is: how can I inject a string into an existing string, or inject code into an existing method?
There are many ways you could achieve such type checking...
The one you propose is to modify the source code (a String) so as to insert additional pre-condition type checks.
The key point with this approach is that you will have to insert the type checking at the right place. That means somehow parsing the original source (or at least the selector and arguments) so as to find its exact span (and the argument names).
See the method initPattern:return: in Parser and its senders. You will find quite low-level (not the most beautiful) code that feeds the block (passed through the return: keyword) with sap, an Array of 3 objects: the method selector, the method arguments and the method precedence (a code telling whether the method corresponds to a unary, binary or keyword message). From there, you'll have enough material for the source code manipulation (inserting a string into another with copyReplaceFrom:to:with:).
Do not hesitate to write small snippets of code and execute them in the Debugger (select the code to debug, then use the 'debug it' menu item or ALT+Shift+D). Also use the inspectors extensively to gain more insight into how things work!
Another solution is to parse the whole Abstract Syntax Tree (AST) of the source code, and manipulate that AST to insert the type checks. Normally, the Parser builds the AST, so observe how it works. From the modified AST, you can then generate a new CompiledMethod (the bytecode instructions) and install it in the methodDictionary - see the source code of compile: and follow the messages sent until you discover generateMethodFromNode:trailer:. This is a bit more involved, and has the bad side effect that the source code is no longer in phase with the generated code, which might become a problem once you want to debug the method (fortunately, Squeak can use decompiled code in place of source code!).
Last, you can also arrange to have an alternate compiler and parser for some of your classes (see compilerClass and/or parserClass). The alternate TypeHintParser would accept a modified syntax with the type hints in the source code (once upon a time, it was implemented with type hints following the arguments inside angle brackets, foo: x <Integer> bar: y <Number>). And the alternate TypeHintCompiler would arrange to compile the preconditions automatically, given those type hints. Since you will by then be very advanced in Squeak, you will also create a special mapping between source code indices and bytecodes so as to have a sane debugger, and even a special Decompiler class that could recognize the precondition type checks and transform them back into type hints, just in case.
My advice would be to start with the first approach that you are proposing.
EDIT
I forgot to say, there is yet another way, but it is currently available in Pharo rather than Squeak: the Pharo compiler (named OpalCompiler) reifies the bytecode instructions as objects (class names beginning with IR) in the generation phase. So it is also possible to manipulate the bytecode instructions directly by proper hacking at this stage... I'm pretty sure we can find examples of usage. Probably the most advanced technique.

Is it possible / easy to include some mruby in a nim application?

I'm currently trying to learn Nim (it's going slowly - I can't devote much time to it). On the other hand, in the interest of getting some working code, I'd like to prototype out sections of a Nim app I'm working on in Ruby.
Since mruby allows embedding a Ruby subset in a C app, and since Nim allows compiling arbitrary C code into functions, it feels like this should be relatively straightforward. Has anybody done this?
I'm particularly looking for ways of using Nim's funky macro features to break out into inline Ruby code. I'm going to try it myself, but I figure someone is bound to have tried it and/or come up with more elegant solutions than I can in my current state of learning :)
https://github.com/micklat/NimBorg
This is a project with a somewhat similar goal. It targets Python and Lua at the moment, but using the same techniques to interface with Ruby shouldn't be too hard.
There are several features in Nim that help in interfacing with a foreign language in a fluent way:
1) Calling Ruby from Nim using Nim's dot operators
These are a bit like method_missing in Ruby.
You can define a type like RubyValue in Nim, which will have dot operators that will translate any expression like foo.bar or foo.bar(baz) to the appropriate Ruby method call. The arguments can be passed to a generic function like toRubyValue that can be overloaded for various Nim and C types to automatically convert them to the right Ruby type.
2) Calling Nim from Ruby
In most scripting languages, there is a way to register a foreign type, often described in a particular data structure that has to be populated once per exported type. You can use a bit of generic programming and Nim's .global. vars to automatically create and cache the required data structure for each type that is passed to Ruby through the dot operators. There will be a generic proc like getRubyTypeDesc(T: typedesc) that may rely on typeinfo, typetraits or some overloaded procs supplied by the user, defining what has to be exported for the type.
Now, if you really want to rely on mruby (because you have experience with it, for example), you can look into using the .emit. pragma to directly output pieces of mruby code. You can then ask the Nim compiler to generate only source code, which you will compile in a second step, or you can just change the compiler executable that Nim will call when compiling the project (this is explained in the same section linked above).
Here's what I've discovered so far.
Fetching the return value from an mruby execution is not as easy as I thought. That said, after much trial and error, this is the simplest way I've found to get some mruby code to execute:
const mrb_cc_flags = "-v -I/mruby_1.2.0_path/include/ -L/mruby_1.2.0_path/build/host/lib/"
const mrb_linker_flags = "-v"
const mrb_obj = "/mruby_1.2.0_path/build/host/lib/libmruby.a"
{. passC: mrb_cc_flags, passL: mrb_linker_flags, link: mrb_obj .}
{.emit: """
#include <mruby.h>
#include <mruby/string.h>
""".}
proc ruby_raw(str:cstring):cstring =
{.emit: """
mrb_state *mrb = mrb_open();
if (!mrb) { printf("ERROR: couldn't init mruby\n"); exit(0); }
mrb_load_string(mrb, `str`);
`result` = mrb_str_to_cstr(mrb, mrb_funcall(mrb, mrb_top_self(mrb), "test_func", 0));
mrb_close(mrb);
""".}
proc ruby*(str:string):string =
echo ruby_raw("def test_func\n" & str & "\nend")
"done"
let resp = ruby """
puts 'this was a puts from within ruby'
"this is the response"
"""
echo(resp)
I'm pretty sure that you should be able to omit some of the compiler flags at the start of the file in a well-configured environment, e.g. by setting LD_LIBRARY_PATH correctly (not least because that would make the code more portable).
Some of the issues I've encountered so far:
I'm forced to use mrb_funcall because, for some reason, clang seems to think that the mrb_load_string function returns an int, despite all the C code I can find, the documentation, and several people online saying otherwise:
error: initializing 'mrb_value' (aka 'struct mrb_value') with an expression of incompatible type 'int'
mrb_value mrb_out = mrb_load_string(mrb, str);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~
The mruby/string.h header is needed for mrb_str_to_cstr, otherwise you get a segfault. RSTRING_PTR seems to work fine also (which at least gives a sensible error without string.h), but if you write it as a one-liner as above, it will execute the function twice.
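For what it's worth, that diagnostic is the usual symptom of calling a C function without a visible prototype: mrb_load_string is declared in mruby/compile.h, which the emit block above doesn't include, so clang falls back to an implicit declaration returning int. In plain C, the intended pattern looks roughly like the sketch below (mruby 1.2-era API assumed; the embedded Ruby one-liner is just an example):

#include <stdio.h>
#include <mruby.h>
#include <mruby/compile.h>   /* declares mrb_load_string */
#include <mruby/string.h>    /* declares mrb_str_to_cstr */

int main(void) {
    mrb_state *mrb = mrb_open();
    if (!mrb) { fprintf(stderr, "couldn't init mruby\n"); return 1; }
    /* with the prototype visible, the mrb_value return type is known to the compiler */
    mrb_value out = mrb_load_string(mrb, "'hello from mruby'.upcase");
    printf("%s\n", mrb_str_to_cstr(mrb, out));
    mrb_close(mrb);
    return 0;
}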
I'm going to keep going and write some slightly more idiomatic Nim, but this has done what I needed for now.

Why do I have to specify an ExtPgm parameter for the Main Procedure?

My program, PKGDAYMONR, has the control option:
ctl-opt Main( CheckDailyPackages )
The CheckDailyPackages procedure has the following PI:
dcl-pi *n ExtPgm( 'PGMNAME' );
As you can see the ExtPgm parameter is not the name of the program. In fact, it’s what came over in the template source and I forgot to change it. Despite the wrong name in ExtPgm, the program runs without a problem.
If I remove that parameter and leave the keyword as just ExtPgm, I get the following message:
RNF3573: A parameter is required for the EXTPGM keyword when the
procedure name is longer than 10.
If I drop ExtPgm from the Procedure Interface altogether, it also complains:
RNF3834: EXTPGM must be specified on the prototype for the MAIN()
procedure.
So why is it that I have to specify a parameter if it doesn't matter what value I enter?
O/S level: IBM i 7.2
Probably worth pursuing as a defect with the service provider; presumably for most, that would be IBM rather than a third-party, as they would have to contact IBM anyhow, given the perceived issue is clearly with their compiler. Beyond that, as my "Answer", I offer some thoughts:
IMO, and in apparent agreement with the OP, naming the ExtPgm seems pointless in the given scenario. I think the compiler is confused while trying to enforce some requirements when validating the implicitly generated Prototype for the linear-main, for which only a Procedure Interface is supplied; i.e. it is enforcing requirements that are appropriate for an explicit Prototype, but that could be overlooked [and thus are no longer requirements] in the given scenario. I am suggesting that while RNF3573 would seem appropriate for diagnosing the EXTPGM specification of an explicit Prototype, IMO that same effect is inappropriate [i.e. the validation should not be performed] for an implicit prototype that was generated by the compiler.
FWiW: Was the fixed-format equivalent of that free-form code tested, to see whether the same or a different error was the effect? The following source code currently includes the EXTPGM specification with 'PGMNAME' as the argument [i.e. supplying a bogus 10-byte name to placate the compiler, just as is being done in the scenario of the OP, solely to effect a successful compile], but it could be compiled with the other variations by changing the source, mimicking what was done with the free-form variations, to test whether the same/consistent validations and errors are the effect:
- just EXTPGM keyword coded (w/out argument); is RNF3573 the effect?
- the EXTPGM keyword could be omitted; is RNF3834 the effect?
- the D-spec removed entirely (if there are no parameters defined); that was not one of the variations noted in the OP as being tried, so... what is the effect?
H MAIN(CheckDailyPackages)
*--------------------------------------------------
* Program name: CheckDailyPackages (PGMNAME)
*--------------------------------------------------
P CheckDailyPackages...
P B
D PI EXTPGM('PGMNAME')
/free
// Work is done here
/end-free
P CheckDailyPackages...
P E
I got a response from IBM, and essentially Biswa was on to something; it simply wasn't made clear (in my opinion) in his answer.
Essentially, EXTPGM is required on long main procedure names in order to support recursive program calls.
This is the response I received from IBM explaining the reason for the scenario:
The incorrect EXTPGM would only matter if there was a call to the main
procedure (the program) within the module.
When the compiler processes the procedure interface, it doesn't know
whether there might be a call that appears later in the module.
The EXTPGM keyword is used to define the external name of the program you want to prototype. If you specify EXTPGM, the program will be called dynamically.
Let us take an example in order to explain your query.
PGMA
D cmdExc PR ExtPgm('QSYS/QCMDEXC')
D 200A const
D 15P05 const
c callp cmdExc('CLRPFM LIB1/PF1':200)
C Eval *INLR = *ON
In the above example, cmdExc is used for the dynamic call to QSYS/QCMDEXC.
When we use the program's own name as the EXTPGM parameter, it acts as an entry point to the program when it is called from other programs or procedures.
But in any case, if we supply some arbitrary name as the EXTPGM parameter, it will not give any error at compile time; it gives an error at run time instead, since the system tries to resolve the name at run time.

Use Alex macros from another file

Is there any way to have an Alex macro defined in one source file and used in other source files? In my case, I have definitions for $LowerCaseLetter and $UpperCaseLetter (these are all letters except e and O, since they have special roles in my code). How can I refer to these macros from other .x files?
Disproving that something exists is always harder than finding something that does exist, but I think the info below does show that Alex can only get macro definitions from the .x file it is reading (other than predefined stuff like $white), and not via includes from other files....
You can get the source code for Alex by doing the following:
> cabal unpack alex
> cd alex-3.1.3
In src/Main.hs, predefined macros are first set in variables called initSetEnv (charset macros $white, $printable, and "."), and initREEnv (regexp macros, there are none). This gets passed into runP, in src/ParseMonad.hs, which is used to hold the current parsing state, including all defined macros. The initial state is set using the values passed in, but macros can be added using a function called newSMac (or newRMac for regular expression macros).
Since this seems to be the only way that macros can be set, it is then only a matter of some grep bookkeeping to verify that the only way macros can be added is through an actual macro definition in the source .x file. Unsurprisingly, Alex recursively uses its own .x/.y files for .x source file parsing (src/parser.y, src/Scan.x). It is a couple of levels of indirection away, but you can verify that the only way newSMac can be called is through the src/Scan.x macro
#smac = \$ #id | \$ \{ #id \}
<0> #smac #ws? \= { smacdef }
Other than some obvious predefined stuff, I don't believe reuse in lexers is all that typical anyway, because at the token level things are usually pretty simple (often simple tokens like SPACE, WORD, NUMBER, and a few operators, symbols and parens are all that are needed). The complexity comes at the parsing stage, although for technical reasons, parser includes aren't that common either (see scannerless parsing for a newer technology that does allow reuse through nesting, like JavaScript embedded in HTML.... The tools for scannerless parsing are still pretty primitive, though).

Using the C preprocessor to determine current scope?

I am developing an application in C / Objective-C (No C++ please, I already have a solution there), and I came across an interesting use case.
Because clang does not support nested functions, my original approach will not work:
#define CREATE_STATIC_VAR(Type, Name, Dflt) static Type Name; __attribute__((constructor)) void static_ ## Type ## _ ## Name ## _init_var(void) { /* loading code here */ }
This code would compile fine with GCC, but because clang doesn't support nested functions, I get a compile error:
Expected ';' at end of declaration.
So, I found a solution that works for Clang on variables inside a function:
#define CREATE_STATIC_VAR_LOCAL(Type, Name, Dflt) static Type Name; ^{ /* loading code here */ }(); // anonymous block usage
However, I was wondering if there was a way to leverage macro concatenation to choose the appropriate one for the situation, something like:
#define CREATE_STATIC_VAR_GLOBAL(Type, Name, Dflt) static Type Name; __attribute__((constructor)) void static_ ## Type ## _ ## Name ## _init_var(void) { /* loading code here */ }
#define CREATE_STATIC_VAR_LOCAL(Type, Name, Dflt) static Type Name; ^{ /* loading code here */ }(); // anonymous block usage
#define SCOPE_CHOOSER LOCAL || GLOBAL
#define CREATE_STATIC_VAR(Type, Name, DFLT) CREATE_STATIC_VAR_ ## SCOPE_CHOOSER(Type, Name, Dflt)
Obviously, the ending implementation doesn't have to be exactly that, but something similar will suffice.
I have attempted to use __builtin_constant_p with __func__, but because __func__ is not a compile-time constant, that wasn't working.
I have also tried to use __builtin_choose_expr, but that doesn't appear to work at the global scope.
Is there something else I am missing in the docs? Seems like this should be something fairly easy to do, and yet, I cannot seem to figure it out.
Note: I am aware that I could simply type CREATE_STATIC_VAR_GLOBAL or CREATE_STATIC_VAR_LOCAL instead of messing with macro concatenation, but this is me attempting to push the limits of the compiler. I am also aware that I could use C++ and get this over with right away, but that's not my goal here.
#define SCOPE_CHOOSER LOCAL || GLOBAL
#define CREATE_STATIC_VAR(Type, Name, DFLT) CREATE_STATIC_VAR_ ## SCOPE_CHOOSER(Type, Name, Dflt)
The biggest difficulty here is that the C preprocessor works by textual substitution, so even if you figured out how to get SCOPE_CHOOSER to do what you want, you'd end up with a macro expansion that looked something like
CREATE_STATIC_VAR_LOCAL || GLOBAL(Type, Name, Dflt);
There's no way to get the preprocessor to "constant-fold" macro expansions during substitution; the only time things are "folded" is when they appear in #if expressions. So your only hope (modulo slight handwaving) is to find a single construction that will work both inside and outside of a function.
Can you explain more about the ultimate goal here? I don't think you can load the variable's initial value with __attribute__((constructor)), but maybe there's a way to load the initial value the first time the function body is entered... or register all the addresses of these variables into a global list at compile-time and have a single __attribute__((constructor)) function that traverses that list... or some mishmash of those approaches. I don't have any specific ideas in mind, but maybe if you give more information something will emerge.
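One way to realise that "register everything into a global list" idea on ELF targets with GCC or Clang is to drop one entry per variable into a dedicated linker section and walk the section from a single constructor. The sketch below is only an illustration of that approach, not the original macro; every name in it (static_var_registry, static_var_entry, and so on) is made up for this example:

#include <stdio.h>

typedef struct {
    void       *addr;
    const char *name;
} static_var_entry;

/* Each use drops an entry into a dedicated section instead of emitting
   a per-variable constructor function. (Dflt would be consumed by the
   real loading code.) */
#define CREATE_STATIC_VAR(Type, Name, Dflt)                    \
    static Type Name;                                          \
    static const static_var_entry Name##_entry                 \
        __attribute__((used, section("static_var_registry"))) = { &Name, #Name }

/* The GNU linkers provide these bounds for any section whose name
   is a valid C identifier. */
extern const static_var_entry __start_static_var_registry[];
extern const static_var_entry __stop_static_var_registry[];

__attribute__((constructor))
static void init_all_static_vars(void) {
    for (const static_var_entry *e = __start_static_var_registry;
         e < __stop_static_var_registry; ++e) {
        printf("initializing %s\n", e->name);   /* loading code would go here */
    }
}

CREATE_STATIC_VAR(int, counter, 0);   /* works at file scope...            */

int main(void) {
    CREATE_STATIC_VAR(int, calls, 0); /* ...and inside a function as well  */
    return counter + calls;
}

Note that this particular trick is ELF-specific: on Mach-O (the usual Objective-C target) the section attribute needs a "segment,section" specifier and the __start_/__stop_ symbols are not generated by the linker.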
EDIT: I don't think this helps you either, since it's not a preprocessor trick, but here is a constant-expression that will evaluate to 0 at function scope and 1 at global scope.
#define AT_GLOBAL_SCOPE __builtin_types_compatible_p(const char (*)[1], __typeof__(&__func__))
However, notice that I said "evaluate" and not "expand". These constructs are compile-time, not preprocessing-time.
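For reference, here is a tiny GCC-only demonstration of how that constant evaluates; note that it provokes the pedantic warning discussed in the next answer, since __func__ appears at file scope:

#include <stdio.h>

#define AT_GLOBAL_SCOPE __builtin_types_compatible_p(const char (*)[1], __typeof__(&__func__))

static int file_scope_check = AT_GLOBAL_SCOPE;    /* evaluates to 1 */

int main(void) {
    int function_scope_check = AT_GLOBAL_SCOPE;   /* evaluates to 0 */
    printf("%d %d\n", file_scope_check, function_scope_check);
    return 0;
}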
Inspired by @Qxuuplusone's answer.
The suggested macro for AT_GLOBAL_SCOPE does indeed work (in GCC), but causes a compiler warning (and I am pretty sure it cannot be silenced by Diagnostic Pragma because it's created by pedwarn with a test here).
Unless you turn on -w you will always see these warnings and have, in the back of your mind, a horrible feeling that you probably shouldn't be doing whatever it is that you are doing.
Fortunately, there is a solution that can silence these lingering doubts.
In the Other Builtins section, there is __builtin_FUNCTION with this very interesting description (emphasis mine):
This function is the equivalent of the __FUNCTION__ symbol and returns an address constant pointing to the name of the function from which the built-in was invoked, or the empty string if the invocation is not at function scope.
It turns out, at least in version 8.3 of GCC, you can do this:
#define AT_GLOBAL_SCOPE (__builtin_FUNCTION()[0] == '\0')
This still probably won't answer the original question, but until GCC decides this too will cause a warning (it kind of seems like it's intentionally designed not to though), it lets me continue doing questionable things using macros without anything to warn me that it's a bad idea.

Resources