Erlang -import not working - Linux

I have an Erlang program, compiled with rebar. After the new Debian release it won't compile anymore, complaining about these lines:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know erlang, I am just trying to compile this thing.
Apparently something bad happened to -import recently: http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?

Well, -import(). is working, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, and make all its exported functions usable without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without the module name, and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a purely naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run-time when you call a function in the other module.
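To make the transformation concrete, here is a minimal module (the module and function names are my own illustration):
-module(example).
-export([double_all/1]).
-import(lists, [map/2]).

%% The compiler rewrites this call to lists:map/2 at compile time.
double_all(List) ->
    map(fun(X) -> X * 2 end, List).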

No, there is no easy way to fix this. The source code has to be updated, and every reference to an imported function prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to provide the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions are in which module.
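As a sketch of the mechanical fix, assuming a call that used to rely on -import(io_lib):
%% Before (no longer compiles):
%% -import(io_lib).
%% Msg = format("~p~n", [Term]).

%% After: drop the -import directive and qualify the call.
Msg = io_lib:format("~p~n", [Term]).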

Your problem is that you were using the experimental -import(Mod) directive, which was part of parameterized modules. These are gone in R16B and onwards.
I generally advise against using import: it hurts quick searching and the unique naming of foreign calls. Get an editor that can quickly expand names instead.

Start by looking at what is stored in the location $ERL_LIBS points to; typically this is /usr/lib/erlang/lib.

Related

Will everything imported by a wildcard use directive ("use some_crate::*") be included in the binary?

If I import everything using * in Rust's use declaration, does the built binary include only what I actually use from the import?
e.g. use std::io::prelude::*
Through LLVM, Rust benefits from aggressive dead code elimination.
In fact, you can see that work by default: unless you opt into no_std, code implicitly has the standard prelude in scope.
For instance, compare trivial printing code versus trivial printing code with a (useless) Box invocation: https://godbolt.org/z/3KWbGeqsh
Only the latter has the relevant Box code generated and compiled in.
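To see the effect yourself, a minimal sketch (not the exact godbolt example): Box is in scope through the implicit prelude either way, but its allocation machinery only lands in the binary when the line that uses it is present.
fn main() {
    // Box is in scope via the prelude, but nothing Box-related is
    // compiled into the binary unless it is actually used:
    let boxed = Box::new(42);
    println!("{}", boxed);
}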

Where does the resourcesApp come from in Yesod?

The two functions mkYesodData and mkYesodDispatch in the Yesod framework are supposed to separate the handler definitions from the dispatch process. Yet, by some miracle (to me), templates use this interesting value resourcesApp:
mkYesodDispatch "App" resourcesApp
The only mention of this value I have found on Hoogle is in the Hledger package, and that is not a Yesod dependency.
In the School of Haskell tutorial linked below, they explain that resourcesApp is "generated" by mkYesodData, although it still does not work for me.
https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%202
What is the reason for this?
There's some Template Haskell (TH) going on under the hood in Yesod, and I think this is what's confusing you. Template Haskell can be confusing when searching documentation because it produces values at compile time, for use at runtime, that aren't there before the code is compiled. resourcesApp is just one of these values.
In the code you reference, the author explains that you must have another module (which he calls Foundation) in which you have invoked mkYesodData. Indeed, without this other module, the code in the Dispatch module won't work. Strangely, it's not until Part 4 (https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%204) that he seems to define the Foundation module, but you can see that there is a line:
mkYesodData "App" $(parseRoutesFile "config/routes")
That may not look like it defines a value called resourcesApp, but sure enough, it does.
In short, you should be able to get your code working by just finishing the entire tutorial and running the code altogether.
In case you're wondering, a call to mkYesodData takes a String and then literally generates code that defines a value named resources**** where the **** is the string you passed. In this case, that would be a value resourcesApp, but in someone else's Yesod project, it could be resourcesFoo. Furthermore, since this resourcesFoo value isn't concretely in the code, projects that use Yesod typically wouldn't have it show up in their export lists or Haddock documentation. It's actually very strange that you found even one hit for resourcesApp on Hoogle at all, but upon closer examination, it kind of makes sense: Hledger seems to be some sort of extended interface around Yesod, so they pre-generated the TH values so that they would be easily accessible to users.
As another note, TH has some restrictions on its use. For one, you typically need to perform the TH invocations ("splices", as they're typically called) in a separate module from the one in which you use the generated values. This is probably why the author has you create a separate Foundation module rather than just putting the mkYesodData ... line in the Dispatch module.
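For concreteness, here is a minimal two-module sketch of the pattern (the route, handler, and pragma set are my own illustration, using an inline parseRoutes quasi-quote instead of parseRoutesFile):
-- Foundation.hs
{-# LANGUAGE TemplateHaskell, QuasiQuotes, TypeFamilies #-}
module Foundation where

import Yesod

data App = App

-- This splice generates, among other things, the value
-- resourcesApp that describes the routes below.
mkYesodData "App" [parseRoutes|
/ HomeR GET
|]

instance Yesod App

-- Dispatch.hs
{-# LANGUAGE TemplateHaskell, QuasiQuotes, OverloadedStrings, ViewPatterns #-}
module Dispatch where

import Yesod
import Foundation -- brings the generated resourcesApp into scope

getHomeR :: Handler Html
getHomeR = defaultLayout [whamlet|Hello|]

-- Uses resourcesApp from Foundation; note the splice lives in a
-- different module than the one that generated it.
mkYesodDispatch "App" resourcesApp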

find_dependency(Threads) or include(FindThreads) in a package config file

In CMake, we can use find_dependency() in a package's -config.cmake file; it "forwards the correct parameters for QUIET and REQUIRED which were passed to the original find_package() call." So, naturally, we'll want to do that instead of calling find_package() in such files.
Also, for a dependency on a threads library, CMake offers us the FindThreads module, so that we write include(FindThreads), preceded by some preference-setting variables, and get a bunch of interesting variables set. So, that seems preferable to find_package(Threads).
And thus we have a dilemma: What to put in -config.cmake files, for a threads library dependency? The former, or the latter?
Following a discussion in comments with @Tsyarev, it seems that:
find_package(Threads) includes the FindThreads module internally.
... which means it "respects" the preference variables affecting FindThreads' behavior.
So it makes sense, functionally and aesthetically, to just use find_package(Threads) in your main CMakeLists.txt and find_dependency(Threads) in -config.cmake.
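Putting it together, a sketch of the -config.cmake side (the MyLib package and target names are made up):
# MyLibConfig.cmake
include(CMakeFindDependencyMacro)

# Preference variable respected by FindThreads; set it before the lookup.
set(THREADS_PREFER_PTHREAD_FLAG ON)

# Forwards QUIET/REQUIRED from the caller's original find_package(MyLib).
find_dependency(Threads)

include("${CMAKE_CURRENT_LIST_DIR}/MyLibTargets.cmake")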

What is supposed to be in __init__.py

I am trying to create a Python package. You can have a look at my awful attempt here.
I have a module called imguralbum.py. It lives in a directory called ImgurAlbumDownloader which I understand is the name of the package -- in terms of what you type in an import statement, e.g.
import ImgurAlbumDownloader
My module contains two classes, ImgurAlbumDownloader and ImgurAlbumException. I need to be able to use both of these classes in another module (script). However, I cannot for the life of me work out what I am supposed to put in my __init__.py file to make this so. I realize this duplicates a lot of previously answered questions, but the advice out there seems very conflicting.
I still have to figure out why (I have some ideas), but this is now working:
from ImgurAlbumDownloader.imguralbum import ImgurAlbumDownloader, ImgurAlbumException
The trick was adding the package name to the module name.
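For what it's worth, a minimal __init__.py (a sketch based on the names above) can also re-export the classes so callers can skip the inner module name:
# ImgurAlbumDownloader/__init__.py
# Re-export the public classes so that
# "from ImgurAlbumDownloader import ImgurAlbumDownloader" works too.
from .imguralbum import ImgurAlbumDownloader, ImgurAlbumException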
It sure sounds to me like you don't actually want a package. It's OK to just use a single module, if your code just does one main thing and all its parts are closely related. Packages are useful when you have distinct parts of your code that might not all be needed at the same time, or when you have so much code that a single module would be very large and hard to find things in.

How to prevent dead-code removal of utility libraries in Haxe?

I've been tasked with creating conformance tests of user input; the task is fairly tricky and we need very high levels of reliability. The server runs on PHP, the client runs on JS, and I thought Haxe might reduce duplicated work.
However, I'm having trouble with dead-code removal. Since I am just creating helper functions (utilObject.isMeaningOfLife(42)), I don't have a main program that calls each one. I tried adding @:keep to a utility class, but it was cut out anyway.
I tried to specify that utility class through the -main switch, but I had to add a dummy main() method and this doesn't scale beyond that single class.
You can force all the files defined in a given package and its sub-packages to be included in the build using a compiler argument:
haxe --macro "include('my.package')" ...
This is a shortcut to the macro.Compiler.include function.
As you can see from the signature, this function lets you include recursively and also exclude packages.
static include (pack:String, rec:Bool = true, ?ignore:Array<String>, ?classPaths:Array<String>):Void
I think you don't have to use @:keep for each library class in that case.
I'm not sure if this is what you are looking for, but I hope it helps.
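For example, a build file using that flag might look like this (the paths and package name are made up):
# build.hxml
-cp src
--macro include('my.package')
-js bin/app.js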
Otherwise, these checks could be helpful:
Is it bad that the code is cut away if you don't use it?
Could it be that some code is inlined in the final output?
Compile your code using the compiler flag -dce std as mentioned in comments.
If you use the static analyzer, try disabling it.
Add @:keep and reference the class + function somewhere (see the sketch after this list).
Otherwise, provide a minimal setup if you can reproduce the issue.
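A sketch of the @:keep route, with a made-up class, in case the macro approach doesn't suit you:
// Force the compiler to keep this class even if DCE finds no callers.
@:keep
class UtilObject {
    public static function isMeaningOfLife(n:Int):Bool {
        return n == 42;
    }
}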
