Where does resourcesApp come from in Yesod? - haskell

The two functions mkYesodData and mkYesodDispatch in the Yesod framework are supposed to separate the handler definitions from the dispatch process. Yet, by some miracle (to me), templates use this interesting function "resourcesApp":
mkYesodDispatch "App" resourcesApp
The only mention of this function I have found on Hoogle is in the Hledger package, and that is not a Yesod dependency.
The School of Haskell tutorial at the link below explains that resourcesApp is "generated" by mkYesodData, although it still does not work on my side.
https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%202
What is the reason for this?

There's some Template Haskell (TH) going on under the hood in Yesod, and I think this is what's confusing you. Template Haskell can be confusing to search for in documentation, because it generates values at compile time, for use at runtime, that simply aren't there before the code is compiled. resourcesApp is just one of these values.
In the code you reference, the author explains that you must have another module (which he calls Foundation) in which you have invoked mkYesodData. Indeed, without this other module, the code in the Dispatch module won't work. Strangely, it's not until Part 4 (https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%204) that he seems to define the Foundation module, but there you can see the line:
mkYesodData "App" $(parseRoutesFile "config/routes")
That may not look like it defines a value called resourcesApp, but sure enough, it does.
In short, you should be able to get your code working by finishing the entire tutorial and building the project as a whole.
In case you're wondering, a call to mkYesodData takes a String and then literally generates code that defines a value named resources**** where the **** is the string you passed. In this case, that would be a value resourcesApp, but in someone else's Yesod project, it could be resourcesFoo. Furthermore, since this resourcesFoo value isn't concretely in the source, projects that use Yesod typically won't have it show up in their export lists or Haddock documentation. It's actually very strange that you found even one hit for resourcesApp on Hoogle at all, but upon closer examination, it kind of makes sense: Hledger seems to be some sort of extended interface around Yesod, so they pre-generated the TH values so that they would be easily accessible to users.
As another note, TH has some restrictions on its use. For one, you typically need to perform the TH invocations ("splices", as they're called) in a separate module from the one in which you use the generated values. This is probably why the author has you create a separate Foundation module rather than just putting the mkYesodData ... line in the Dispatch module.
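To make that concrete, here is a minimal two-module sketch of the layout the tutorial intends (the inline route and handler are illustrative stand-ins for the real config/routes file and handlers):

{-# LANGUAGE TemplateHaskell, QuasiQuotes, TypeFamilies, OverloadedStrings #-}
-- Foundation.hs: invoking mkYesodData here is what brings resourcesApp into existence.
module Foundation where

import Yesod

data App = App

-- Generates, among other things, a value named resourcesApp,
-- because "App" is the string we passed in.
mkYesodData "App" [parseRoutes|
/ HomeR GET
|]

instance Yesod App

-- Dispatch.hs (same language pragmas): a separate module consuming the generated value.
module Dispatch where

import Foundation
import Yesod

getHomeR :: Handler Html
getHomeR = defaultLayout [whamlet|Hello!|]

mkYesodDispatch "App" resourcesApp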

Related

How to use helm-semantic-or-imenu for code navigation with type annotated python code

I would like to use the helm-semantic-or-imenu command to navigate components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in their signatures.
The main problem I have is that I do not properly understand the components that are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find that? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on solving this. If the code analyzer is found in python.el, can I try to modify it and make Emacs use a local version preferentially over the installed one?
EDIT:
After making the initial post I finally made a breakthrough in trying to figure out where the components come from. I searched for python*.el across the whole file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this python.el is indeed responsible for parsing Python files for semantic. Internally it also uses python-wy for recognizing a large portion of the language components. But unfortunately that is where I hit a brick wall. I was hoping to be able to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way: python-wy is auto-generated from a formal grammar definition file (in the Emacs git repository, admin/grammars/python.wy), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic Python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for Python buffers, unless you use its other features (personally I only use it for C/C++).
When the semantic mode-specific libraries are loaded, they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files you can customize semantic-new-buffer-setup-functions, adding entries only for the modes you do want semantic support for, e.g. in your semantic hook (or alternatively with the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))
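If you'd rather not spell out the whole list, here is a small sketch that drops only the Python entry and keeps the rest of the defaults (assq-delete-all removes an alist entry by key):

(with-eval-after-load 'semantic
  ;; semantic-new-buffer-setup-functions is an alist of (major-mode . setup-fn);
  ;; removing the python-mode entry keeps semantic away from Python buffers.
  (setq semantic-new-buffer-setup-functions
        (assq-delete-all 'python-mode semantic-new-buffer-setup-functions)))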

Referring to a constant from library, twincat 3

I'm trying to build a TwinCAT 3 library which does things using global constants defined in the main project, like creating arrays the size of those constants and cycling through them. However, I've been unsuccessful and I wonder if this can be done at all. I just get the error "Error 4 Border 'cPassedConstant' of array is no constant value" when I try to build the main project. The error comes from the array defined in the library.
I've tried making a GVL with a constant of the same name as in the library and then setting the "external implementation" property to true, but that does not help.
My goal here is to make an I/O management library with filtering and such. Then I could just add it to the main project and define some constants like "cDigitalInputsCount", "cAnalogInputCount" and so on.
Maybe you can get by with the new ARRAY[*] feature instead, although it is still very limited. Otherwise there is no way around defining the constant in the library itself.
The library concept is the same as in other environments: a library provides you with reusable components. Your main project depends on the library, not the other way around. Therefore your library cannot know anything about the project where it is used.
A confusing thing in TwinCAT 3 is that you can successfully build projects that contain programming errors. The TwinCAT 3 compiler allows broken code inside a project as long as it is not called. Therefore, when you ship libraries, you should always use "Check all objects".
You should check out Beckhoff's feature called parameter lists. By adding a parameter list to the library project, you can redefine library constants in the project that uses the library. The redefinition happens in the library manager.
I think that should do it. Of course, the other option is ARRAY[*], which is awesome too (for the PLC programming world). The drawback of parameter lists is that the redefinition is project-wide, whereas ARRAY[*] allows the array size to change from call to call.
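As a sketch, the parameter list itself is declared like a GVL of constants inside the library project; the names and defaults below are illustrative, and the consuming project can override them in the library manager:

// Parameter list object in the library project
VAR_GLOBAL CONSTANT
    cDigitalInputsCount : UDINT := 16;
    cAnalogInputCount   : UDINT := 8;
END_VAR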
I would suggest using a variable-length ARRAY[*], as explained in the link below (and also in Beckhoff's Infosys, section Data Types/Array).
The point is that you should declare the ARRAY[1..cAINs] OF FB_AnalogIO in your main program (it knows FB_AnalogIO from your analog library and can declare the array with a constant size).
PRG_IO should then be changed to either a function or a function block, so that it accepts the ARRAY[*] as a VAR_IN_OUT variable without knowing its exact size.
https://stefanhenneken.wordpress.com/2016/09/27/iec-61131-3-arrays-with-variable-length/
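Here is a rough Structured Text sketch of that pattern (FB_AnalogIO and cAINs follow the answer; FB_IOManager and the loop body are illustrative):

// In the library: a size-agnostic function block
FUNCTION_BLOCK FB_IOManager
VAR_IN_OUT
    aInputs : ARRAY[*] OF FB_AnalogIO; // variable length; size fixed by the caller
END_VAR
VAR
    i : DINT;
END_VAR

FOR i := LOWER_BOUND(aInputs, 1) TO UPPER_BOUND(aInputs, 1) DO
    aInputs[i](); // process each analog input block
END_FOR

// In the main project: the size comes from a project-level constant
PROGRAM MAIN
VAR CONSTANT
    cAINs : UINT := 8;
END_VAR
VAR
    aAIs      : ARRAY[1..cAINs] OF FB_AnalogIO;
    fbManager : FB_IOManager;
END_VAR

fbManager(aInputs := aAIs);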

How to prevent dead-code removal of utility libraries in Haxe?

I've been tasked with creating conformance tests of user input; the task is fairly tricky and we need very high levels of reliability. The server runs on PHP, the client runs on JS, and I thought Haxe might reduce the duplicated work.
However, I'm having trouble with dead-code elimination. Since I am just creating helper functions (utilObject.isMeaningOfLife(42)), I don't have a main program that calls each one. I tried adding @:keep to a utility class, but it was cut out anyway.
I tried specifying that utility class through the -main switch, but I had to add a dummy main() method, and this doesn't scale beyond that single class.
You can force all the files defined in a given package and its sub-packages to be included in the build using a compiler argument:
haxe --macro "include('my.package')" ...
This is a shortcut to the macro.Compiler.include function. As you can see from its signature, this function lets you include packages recursively and also exclude packages or class paths:
static include (pack:String, rec:Bool = true, ?ignore:Array<String>, ?classPaths:Array<String>):Void
With this approach, I think you don't have to use @:keep on each library class.
I'm not sure if this is what you are looking for, I hope it helps.
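For reference, the same call can live in an .hxml build file, where no shell quoting is needed (the class path, package and output names are illustrative):

-cp src
--macro include('my.package')
-dce std
-js bin/app.js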
Otherwise, these checks could be helpful:
Is it actually bad that the code is cut away when you don't use it?
Could some of the code have been inlined in the final output?
Compile your code using the compiler flag -dce std, as mentioned in the comments.
If you are using the static analyzer, try disabling it.
Add @:keep and reference the class and function somewhere (see the sketch below).
Otherwise, provide a minimal setup if you can reproduce the problem.
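As a minimal sketch of the @:keep route, echoing the helper from the question (class and method names are illustrative):

// @:keep tells the dead-code eliminator to retain this class
// even when nothing in the compiled code calls it.
@:keep
class UtilObject {
    public static function isMeaningOfLife(n:Int):Bool {
        return n == 42;
    }
}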

erlang -import not working

I have an Erlang program, compiled with rebar. After the new Debian release, it won't compile anymore, complaining about these lines:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know erlang, I am just trying to compile this thing.
Apparently something bad happened to -import recently: http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?
Well, -import(). does work, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, and get all the exported functions so you can use them without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without module name and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a purely naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run time when you call a function in the other module.
No, there is no easy way to fix this. The source code has to be updated, with every reference to an imported function prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to provide the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions are in which module.
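A minimal sketch of the fix, using io_lib as an example (module and function names are illustrative):

%% Before: -import(io_lib). at the top, then a bare format(...) call.
%% After: drop the -import and qualify the call with its module.
-module(fixed).
-export([render/1]).

render(Term) ->
    io_lib:format("~p~n", [Term]).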
Your problem is that you were using the experimental -import(Mod) directive, which was part of parameterized modules. These are gone in R16B and onwards.
I often advise against using -import. It hurts quick searches and the unique naming of foreign calls. Get an editor which can quickly expand names instead.
Start by looking at what is stored in the location $ERL_LIBS; typically this points to /usr/lib/erlang/lib.

Persistent model types in Fay code

I'm using the Yesod scaffolded site (yesod 1.1.9.2) and spent a few hours yesterday wrapping my head around basic usage of Fay with Yesod. I think I now understand the intended workflow for using Fay to add a chunk of AJAX functionality to a page (I'm going to be a little pedantic here just because someone else might find the step-by-step helpful):
Add a data constructor Example a to SharedTypes.Command.
In the expression case readFromFay Command of ... in Handler.Fay.onCommand, add a case that matches on my new data constructor.
Create a Fay file 'Example.hs' in /fay, patterned after fay/Home.hs. Somewhere in here, use the expression call (Example "foo") $ myFayCallback.
Define a route and handler for the page that will use the Javascript I'm generating. In the handler, use $(fayFile' (ConE 'ScriptR) "Example.hs").
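For instance, steps 1 and 2 might look roughly like this (the Text payloads and handler body are illustrative, and the exact onCommand plumbing varies with the yesod-fay version):

-- fay-shared/SharedTypes.hs
data Command = Example Text (Returns Text)
    deriving (Read, Typeable, Data)

-- Handler/Fay.hs, inside onCommand:
-- case readFromFay command of
--     Just (Example t r) -> render r (T.append "got: " t)
--     Nothing            -> invalidArgs ["Invalid command"]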
My question: In the current Yesod/Fay architecture, how should I go about sharing my Persistent model types with my Fay code?
Using import Model in a Fay file doesn't work -- when I try to load the page that's using this Fay file, I get an error in the browser (Fay's standard way of alerting me to errors, I guess) indicating that it couldn't find module 'Model' but that it only searched the following directories:
projectroot/cabal-dev//share/fay-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0
projectroot/fay
projectroot/fay-shared
I also tried importing and re-exporting Model in SharedTypes.hs, but that produced the same error.
Is there a way to do this? If not, why not? (I'm a relative noob in both Haskell and Yesod, so the answer to the "why not?" question would be really helpful.)
EDIT:
I just realized that mentioning Persistent in this question's title might be misleading. To be clearer about what I'm trying to do: I just want to be able to represent data in my Fay code using the same datatypes Yesod defines for my models. E.g. if I define a model thusly in config/models...
Foo
    bar BarId
    textThatCanBeNull Text Maybe
    deriving Show
... I want to be able to define an AJAX 'command' that receives and/or returns a value of type Foo and have my Fay code deal in Foos without me having to write any de/serialization code. I understand that I won't be able to use any of Persistent's query functionality directly from my Fay code; I only mentioned Persistent in the title because I mentally associate everything in Model.hs and config/models with Persistent.
This currently is not supported; there are many features leveraged by Persistent which Fay does not support (e.g., Template Haskell). For now, it probably makes sense to have an intermediate data type which is shared by both Fay and Yesod and convert your Persistent data to/from that type.
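A minimal sketch of that workaround (all names are illustrative; the BarId key is flattened to a plain Int because Fay cannot compile Persistent's key types):

-- fay-shared/SharedTypes.hs: compiled by both GHC and Fay.
data FooView = FooView
    { fvBarId :: Int
    , fvText  :: Maybe Text
    } deriving (Read, Typeable, Data, Show)

-- Server-side only, in ordinary GHC-compiled code:
-- toFooView   :: Foo -> FooView
-- fromFooView :: FooView -> Foo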
