I've been tasked with creating conformance tests of user input. The task is fairly tricky and we need very high levels of reliability. The server runs on PHP, the client runs on JS, and I thought Haxe might reduce the duplicated work.
However, I'm having trouble with dead code elimination. Since I am just creating helper functions (utilObject.isMeaningOfLife(42)), I don't have a main program that calls each one. I tried adding @:keep to a utility class, but it was cut out anyway.
I tried to specify that utility class through the -main switch, but I had to add a dummy main() method, and this doesn't scale beyond that single class.
You can force all the classes defined in a given package and its sub-packages to be included in the build using a compiler argument:
haxe --macro "include('my.package')" ...
This is a shortcut to the macro.Compiler.include function.
As you can see from its signature, this function works recursively by default and also lets you ignore specific classes:
static include (pack:String, rec:Bool = true, ?ignore:Array<String>, ?classPaths:Array<String>):Void
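For example, a complete build might look like this (the package name and output file are assumptions; the quotes keep your shell from mangling the macro call):

$ haxe --macro "include('my.package')" -cp src -js out/helpers.js -dce std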
I think you don't have to use @:keep on each library class in that case.
I'm not sure if this is what you are looking for, but I hope it helps.
Otherwise, these checks could be helpful:
Is it bad that the code is cut away if you don't use it?
Could it be that some code is inlined in the final output?
Compile your code with the compiler flag -dce std, as mentioned in the comments.
If you are using the static analyzer, try turning it off.
Add @:keep and reference the class and function somewhere (see the sketch after this list).
Otherwise, provide a minimal setup if you can reproduce the problem.
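As a sketch, a utility class pinned with @:keep could look like this (the class and method names are assumptions, picked to match the question):

// Utils.hx — kept even with no call sites in the compiled program
@:keep
class Utils {
    public static function isMeaningOfLife(n:Int):Bool {
        return n == 42;
    }
}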
The two functions mkYesodData and mkYesodDispatch in the Yesod framework are supposed to separate the handler definitions from the dispatch process. Yet, by some miracle (to me), templates use this interesting value resourcesApp:
mkYesodDispatch "App" resourcesApp
The only mention of this function I have found on Hoogle is in the Hledger package, and that is not a Yesod dependency.
The School of Haskell page linked below explains that resourcesApp is "generated" by mkYesodData, but it still does not work on my side.
https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%202
What is the reason for this?
There's some Template Haskell (TH) going on under the hood in Yesod, and I think this is what's confusing you. Template Haskell can be confusing when searching in documentation because it produces values at compile-time for use at runtime that aren't there before the code is compiled. resourcesApp is just one of these values.
In the code you reference, the author explains that you must have another module (which he calls Foundation) in which you have invoked mkYesodData. Indeed, without this other module, the code in the Dispatch module won't work. Strangely, it's not until Part 4 (https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%204) that he seems to define the Foundation module, but you can see that there is a line:
mkYesodData "App" $(parseRoutesFile "config/routes")
That may not look like it defines a value called resourcesApp, but sure enough, it does.
In short, you should be able to get your code working by just finishing the entire tutorial and then running the code as a whole.
In case you're wondering, a call to mkYesodData takes a String and then literally generates code that defines a value named resources**** where the **** is the string you passed. In this case, that would be a value resourcesApp, but in someone else's Yesod project, it could be resourcesFoo. Furthermore, since this resourcesFoo value isn't concretely in the code, projects that use Yesod typically wouldn't have it show up in their export lists or Haddock documentation. It's actually very strange that you found even one hit for resourcesApp on Hoogle at all, but upon closer examination, it kind of makes sense: Hledger seems to be some sort of extended interface around Yesod, so they pre-generated the TH values so that they would be easily accessible to users.
As another note, TH has some restrictions on its use. For one, you typically need to perform the TH invocations ("splices", as they're typically called) in a separate module from the one where you use the generated values. This is probably why the author has you create a separate Foundation module rather than just putting the mkYesodData ... line in the Dispatch module.
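To make the shape of this concrete, here is a minimal sketch of the two-module split (the App type, the route, and the handler are my assumptions; the tutorial uses parseRoutesFile where I use an inline quasiquote):

-- Foundation.hs
{-# LANGUAGE TemplateHaskell, QuasiQuotes, TypeFamilies #-}
module Foundation where

import Yesod

data App = App

-- This splice generates the route types and, among other things,
-- a plain value named resourcesApp ("resources" ++ the string passed).
mkYesodData "App" [parseRoutes|
/ HomeR GET
|]

instance Yesod App

-- Dispatch.hs
{-# LANGUAGE TemplateHaskell, QuasiQuotes #-}
module Dispatch where

import Yesod
import Foundation

getHomeR :: Handler Html
getHomeR = defaultLayout [whamlet|Hello|]

-- resourcesApp is in scope via the Foundation import, which is
-- exactly the separation the TH stage restriction forces on you.
mkYesodDispatch "App" resourcesApp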
Is there a way to disable react-jsx transformation in some files of a ReasonReact project?
I think the other way around is possible by not adding "reason": { "react-jsx": 3 } to bsconfig.json and by adding [@bs.config {jsx: 3}] at the top of the files where you want the react-jsx transformation, but that would force me to add this annotation in too many files.
I'd like to build a small DSL based on JSX in a few files while benefiting from React in the rest of my project.
Note: the solution suggested here is not very straightforward, and I think it's much simpler to add [@bs.config] annotations explicitly in all the required files, but if you really don't want to do that, the following might work.
If I'm reading the compiler code correctly, user-defined ppxs are applied before the ReasonReact ppx. In the linked compiler module, Cmd_ppx_apply.apply_rewriters applies all the rewriters passed with the -ppx flag, and Ppx_entry.rewrite_implementation is the ReasonReact ppx.
Assuming that's true, one could write a ppx that looks for a top-level marker like [@custom.jsx] at the top of the file. The ReasonReact ppx used to have a similar check, in case it serves as a reference.
Then, if this marker is found, the custom ppx would process the nodes that carry the [@JSX] attributes and remove the attributes from them, so that when the compiler passes the AST to the ReasonReact ppx, it won't see them.
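A rough sketch of such a rewriter, written against ppxlib (the transformation name, the custom.jsx marker, and the overall shape are assumptions, not a tested implementation):

(* strip_jsx.ml *)
open Ppxlib

(* Does the file carry a floating [@@@custom.jsx] marker? *)
let has_marker str =
  List.exists
    (fun item ->
      match item.pstr_desc with
      | Pstr_attribute { attr_name = { txt = "custom.jsx"; _ }; _ } -> true
      | _ -> false)
    str

(* Remove [@JSX] from every expression so the ReasonReact ppx,
   which runs later, never sees it. *)
let strip =
  object
    inherit Ast_traverse.map as super
    method! expression e =
      let e = super#expression e in
      { e with
        pexp_attributes =
          List.filter
            (fun a -> a.attr_name.txt <> "JSX")
            e.pexp_attributes }
  end

let () =
  Driver.register_transformation
    ~impl:(fun str -> if has_marker str then strip#structure str else str)
    "strip_jsx"

Built as a standalone ppxlib driver binary, this could then be handed to the compiler via the -ppx flag mentioned above.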
Note that this would break if the ReScript ppx pipeline is one day updated to a driver-based one (unlikely, I'd say, because that would mean ReScript would have to support native libraries as first-class citizens somehow), or if the ordering mentioned above changes (i.e. the ReasonReact ppx starts being applied before user-defined ones).
I'm trying to build a TwinCAT 3 library that does things using global constants defined in the main project, like creating arrays the size of those constants and cycling through them. However, I've been unsuccessful and I wonder if this can be done at all. I just get the error "Error 4: Border 'cPassedConstant' of array is no constant value" when I try to build the main project. The error comes from the array defined in the library.
I've tried making a GVL with a constant of the same name as in the library and then setting the "external implementation" property to true, but that does not help.
My goal here is to make an I/O management library with filtering and such. Then I could just add it to the main project and define some constants like "cDigitalInputsCount", "cAnalogInputCount" and so on.
Maybe you can get along with the new ARRAY[*] feature instead, although it is still very limited. Otherwise there is no way around defining the constant in the library.
The library concept is the same as in other environments: a library provides you with reusable components. Your main project depends on the library, not the other way around, so your library cannot know anything about the project in which it is used.
A confusing thing in TwinCAT 3 is that you can successfully build projects that contain programming errors. The TwinCAT 3 compiler allows broken code inside a project as long as it is not called. Therefore, when you ship libraries, you should always use "Check all objects".
You should check out Beckhoff's feature called parameter lists. By adding a parameter list to the library project, you can redefine library constants in the project that uses the library. The redefinition happens in the library manager.
I think that should do it. Of course, the other option is to use ARRAY[*], which is awesome too (for the PLC programming world). The problem with parameter lists is that the re-definition is project-wide; ARRAY[*] allows the size to vary.
I would suggest using a variable-length ARRAY[*], as explained in the link below (and also in Beckhoff's Infosys documentation, section Data Types/Array).
The point is that you declare the ARRAY[1..cAINs] OF FB_AnalogIO in your main program (it knows FB_AnalogIO from your analog library and can declare the array with a constant size).
PRG_IO should then be changed to either a function or a function block, so that it accepts the ARRAY[*] as a VAR_IN_OUT without knowing the exact size.
https://stefanhenneken.wordpress.com/2016/09/27/iec-61131-3-arrays-with-variable-length/
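A minimal sketch of the pattern (FB_AnalogIO and the constant name follow the thread; FB_IOFilter and its internals are assumptions):

// In the library: accepts an array of any size via VAR_IN_OUT
FUNCTION_BLOCK FB_IOFilter
VAR_IN_OUT
    aIO : ARRAY[*] OF FB_AnalogIO; // bounds resolved at the call site
END_VAR
VAR
    i : DINT;
END_VAR

FOR i := LOWER_BOUND(aIO, 1) TO UPPER_BOUND(aIO, 1) DO
    aIO[i](); // cycle through every instance
END_FOR

// In the main project: the constant and the fixed-size array live here
PROGRAM MAIN
VAR CONSTANT
    cAnalogInputCount : UINT := 8;
END_VAR
VAR
    aAnalog  : ARRAY[1..cAnalogInputCount] OF FB_AnalogIO;
    fbFilter : FB_IOFilter;
END_VAR

fbFilter(aIO := aAnalog);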
I am writing Haxe code which I want to compile to an arbitrary target as a module and then use the results from another module compiled for this same target. I don’t want to handle this the “Haxe way” (which is to fully inline all libraries at compile time). Instead I want to be able to write distinct Haxe modules and reference them with full type safety, without inlining between the modules. The natural way to do this would be to have both the source Haxe files and a separate directory of “headers” filled with extern declarations describing the public API of my module, with these externs somehow automatically generated so that they don’t need to be manually maintained.
I cannot figure out how to get Haxe to emit externs. It would make sense to me if haxe-externs were an actual “target platform” so that I could do something like:
$ haxe ClassName -hxe externsoutdir
It would make less sense but still be acceptable if one of the -D flags like -D dump (which seems to sort of get one part of the way there) or some imaginary, nonexistent -D dump-externs existed. Then you could generate externs while compiling to your favorite target:
$ haxe ClassName -js outfile.js -D shallow-expose -D dump-externs=externsoutdir
The idea is to take a class definition like this:
@:expose
class ClassName {
    function quack() {
        trace('quack');
    }
}
and emit something like this in a separate directory:
extern class ClassName {
    function quack():Void;
}
so that I can consume it from another module like this:
@:expose
class MyClassName extends ClassName {
    override function quack() {
        super.quack();
        trace('…and again I say “quack”');
    }
}
$ haxe -cp path\to\externsoutdir MyClassName -js outfile.js -D shallow-expose
It would only make sense to generate externs for things decorated with @:expose or some other metadata.
I will figure out how to wrap the emitted modules to load each other correctly. That’s easy. The hard part is generating the extern definitions—shouldn’t Haxe already have a way to do this?
Is there already some tool or built-in way I’m missing to do this? When Googling, all I see are projects that supposedly help with generating externs for existing JavaScript libraries. But that’s not my use case…
Update: --gen-hx-classes was removed sometime around Haxe 4.0.0-rc3. Apparently the functionality still exists secretly as -D gen-hx-classes, but beware: if you rely on this, it seems like it's going away.
I believe the --gen-hx-classes option might be what you're looking for. Oddly, I don't see it in the compiler flags list.
I use it in a modular JavaScript build system that is similar to what you're talking about.
I believe it creates a directory of .hx files that are externs for every class generated by the build (including those from the Haxe standard library). Actually, getting duplicates of the standard-library classes may be a problem you will face.
You may also need to use @:keep (or the related macro) to ensure dead code elimination doesn't remove things the other build will need.
You might also need to exclude a class from one or the other build, e.g. --macro 'exclude("haxe.io.Input")' (or excludeFile, which is actually more performant for a whole list of exclusions).
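Putting it together, the two builds might look roughly like this (the directory names are assumptions, and where your Haxe version writes the generated headers may differ):

$ haxe -cp src ClassName -js out/classname.js -D shallow-expose --gen-hx-classes
$ haxe -cp hxclasses -cp src2 MyClassName -js out/myclassname.js -D shallow-expose --macro 'exclude("haxe.io.Input")'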
I have an Erlang program, compiled with rebar. After the new Debian release it won't compile anymore, complaining about these lines:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know erlang, I am just trying to compile this thing.
Apparently something bad happened to -import recently: http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?
Well, -import() works, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, and let you use all its exported functions without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without the module name, and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
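For example (the module below is hypothetical, just to show the explicit form):

-module(shorter).
-export([double_all/1]).

%% Only the functions listed here may be called unqualified.
-import(lists, [map/2]).

double_all(L) ->
    %% The compiler rewrites this call to lists:map/2.
    map(fun(X) -> X * 2 end, L).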
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a purely naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run-time when you call a function in the other module.
No, there is no easy way to fix this. The source code has to be updated, and every reference to an imported function prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to provide the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions belong to which module.
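As a sketch of the mechanical fix (the module and function names here are made up for illustration):

%% Before (fails on R16B+):
%% -import(erl_scan).
%% -import(erl_parse).
%%
%% After: drop the directives and qualify every call.
-module(my_scanner).
-export([parse_exprs/1]).

%% Note: the input string must end with a dot, e.g. "1 + 2.".
parse_exprs(S) ->
    {ok, Tokens, _EndLoc} = erl_scan:string(S),
    erl_parse:parse_exprs(Tokens).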
Your problem is that you were using the experimental -import(Mod) directive, which was part of parameterized modules. These are gone in R16B and onwards.
I often advise against using import: it hurts quick searches and the unique naming of foreign calls. Get an editor that can quickly expand names instead.
Start by looking at what is stored in the location $ERL_LIBS; typically this points to /usr/lib/erlang/lib.