PyInstaller 3.6: module importlib has no attribute 'machinery' [duplicate] - python-3.x

Let's say I have a module foo and a submodule foo.bar. If I want to use a method in foo.bar, do I need to import foo.bar directly or is importing foo sufficient?
For example, the following throws an error:
import foo
foo.bar.my_method()
and the following works:
import foo.bar
foo.bar.my_method()
But I'm not sure if this is generally what's needed, or if there's something wrong with my code itself. (I would think importing the submodule directly is generally needed... But I could have sworn I've seen code where it's not imported directly and still works fine.)

If I want to use a method in foo.bar, do I need to import foo.bar directly or is importing foo sufficient?
You'll need to import the submodule explicitly. Executing import foo.bar will automatically import the parent module foo, and necessarily† bind the name foo, but the reverse is not true.
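A minimal sketch of the asymmetry, assuming a package foo/ whose bar.py defines my_method (names taken from the question):
import foo.bar               # imports foo first, then foo.bar, and binds the name foo

print(foo)                   # <module 'foo' ...>: the parent name was bound too
foo.bar.my_method()          # works

# In a fresh interpreter the reverse fails, unless foo/__init__.py
# happens to import bar itself:
#   import foo
#   foo.bar                  # AttributeError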
But I could have sworn I've seen code where it's not imported directly and still works fine
Yes. Sometimes accessing a submodule works without the explicit import. This happens when a parent module itself imports the submodules. Never rely on that unless it's documented, because it may be an implementation detail and could change without warning after a library version upgrade.
As an example of a popular library which demonstrates both behaviors, look at requests==2.18.4. This package has submodules called sessions and help (amongst others). Importing requests will make requests.sessions available implicitly, yet requests.help will not be available until explicitly imported. Looking at the package's __init__.py source, you'll find that the sessions submodule gets imported there, but the help submodule does not.
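A quick way to verify this, assuming requests==2.18.4 is installed:
import requests

print(requests.sessions)   # available: requests/__init__.py imports it itself

try:
    requests.help          # never imported by the package init
except AttributeError as exc:
    print(exc)             # module 'requests' has no attribute 'help'

import requests.help       # the explicit import is required
print(requests.help)       # now available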
† This makes sense, because subsequent use of foo.bar requires an attribute access on an existing foo object. Note that from foo.bar import something does not bind the name foo nor foo.bar, though both modules foo and foo.bar are imported and cached into sys.modules.
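To see the footnote in action, sticking with the question's foo.bar:
import sys
from foo.bar import my_method

print('foo' in sys.modules)      # True: the parent module was imported and cached
print('foo.bar' in sys.modules)  # True: so was the submodule
print('foo' in globals())        # False: only the name my_method was bound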

Related

Why is doctest skipping tests on imported methods?

I have a python module some_module with an __init__.py file that imports methods, like this:
from .some_python_file import some_method
some_method has a docstring that includes doctests:
def some_method():
    """
    >>> assert False
    """
But when I run doctest on the module, it passes even if the tests should fail.
import doctest
import some_module
# How do I get this to consistently fail, regardless of whether
# `some_module.some_method` was declared inline or imported?
assert doctest.testmod(some_module).failed == 0
If I instead define some_method within the __init__.py file, the doctest correctly fails.
Why are these two situations behaving differently? The method is present and has the same __doc__ attribute in both cases.
How do I get doctest to run the tests defined in the docstrings of methods that were imported into the module?
In Python a module is defined by a single file, so in your case some_python_file is one module and __init__.py is another. Doctest has a check that runs tests only for examples defined in objects that actually belong to the module under test; objects merely imported into it are skipped.
The best way to see this behaviour in practice is to use PDB: put pdb.set_trace() right before the call to doctest.testmod(some_module) and step inside to follow the logic.
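Alternatively, a small sketch that shows the finder skipping the imported function (same layout as in the question):
import doctest
import some_module

tests = doctest.DocTestFinder().find(some_module)
print([t.name for t in tests])
# 'some_module.some_method' is absent: the finder sees that the function
# was defined in some_module.some_python_file, not here, and skips it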
Later edit: doctest deliberately ignores imported objects (there is an explicit check for this in the doctest source). If you want to be able to run your tests, define a main function in your module and run them with python some_module.py.
To achieve your expected behaviour you need to manually create a __test__ dict in your __init__.py file:
from .some_python_file import some_method
__test__ = {"some_method": some_method}
Objects imported into the module are not searched.
See the doctest documentation on which docstrings are examined.
You can inject the imported function into the module's __test__ attribute to have imported objects tested:
__test__ = {'some_method': some_method}
I stumbled upon this question because I was hacking a __doc__ attribute of an imported object.
from foo import bar
bar.__doc__ = """
>>> assert True
"""
and I was also wondering why the doctest of bar did not get executed by the doctest runner.
The previously given answer to add a `__test__` mapping solved it for good.
__test__ = dict(bar=bar.__doc__)
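As a complete sketch of the hack (foo and bar being the stand-in names from the snippet above):
import doctest
from foo import bar

bar.__doc__ = """
>>> assert True
"""

# Without this mapping the runner skips bar, because it was imported.
__test__ = dict(bar=bar.__doc__)

if __name__ == '__main__':
    print(doctest.testmod(verbose=True))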
I think the explanation for this behaviour is the following: if you are using a library, let's say NumPy, you do not want all of its doctests to be collected and run in your own code, simply because that would be redundant.
You should trust the developers of the library to (continuously) test their code, so you do not have to do it.
If you have tests defined in your own code, you should have a test collector (e.g. pytest) descend into all files of your project structure and run these.
You would end up testing all doctests in used libraries, which takes a lot of time. So the decision to ignore imported doctests is very sane.

Python Modules Replacing Themselves During Load

I've come across some code recently that uses a trick that makes me rather nervous. The system I'm looking at has a Python extension Moo.so file stored outside the path and the developer wants to import it with just import Moo. For various reasons neither the file location nor sys.path can be changed, and the extension must be loaded with ExtensionFileLoader anyway.
So what has been done is to have a Moo.py in the path that loads the extension module and then replaces itself in sys.modules with the extension module, along the following lines:
# Moo.py
import sys
from importlib.machinery import ExtensionFileLoader

loader = ExtensionFileLoader('AnotherNameForMoo', '/path/to/Moo.so')
module = loader.load_module()
sys.modules['Moo'] = module
Now this does actually work. (I have some tests of it in rather gory detail in this repo if you want to have a look.) It appears that, at least in CPython 3.4 through 3.7, import Moo does not bind the name Moo to the module object it loaded and put into sys.modules['Moo']; instead it binds whatever sys.modules['Moo'] holds after the module's top-level script returns, regardless of whether that is what was originally put there.
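The binding behaviour is easy to reproduce without an extension module; a minimal sketch (selfswap is just a stand-in name):
# selfswap.py
import sys
import types

replacement = types.ModuleType('selfswap')
replacement.marker = 'I am the replacement'
sys.modules['selfswap'] = replacement

# main.py
import selfswap
print(selfswap.marker)   # prints 'I am the replacement': the import statement
                         # bound whatever sys.modules['selfswap'] held after the
                         # top-level script returned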
I can't find anything in any Python documentation that indicates that this is required behaviour rather than just an accident of implementation.
How safe is this? What are other ways that one might try to achieve a similar "bootstrap" effect?

Haxe: how to keep unused class from being eliminated

I have classes that are never directly mentioned in other code, but only accessed using Type.resolveClass. I want them to be compiled and included in the app, but I can't figure out how to do this. I thought that @:keep (or @:keepSub) is exactly for this, but it does not work as I expected. This is what I do:
Main.hx:
package;
//import Foo; //uncomment this line to make it work
class Main {
    static function main() trace(Type.resolveClass('Foo'));
}
Foo.hx:
package;
@:keep class Foo {}
But this traces null (I've tested JS and Flash)
Even if I compile with -dce no it still traces null.
Not sure if it's a compiler issue or I do not understand how it works.
This is not a compiler issue, it's correct behavior. If a module is never imported, it is never included in compilation to begin with. The compiler never gets to see the @:keep.
A common workaround for this is to use --macro include('package') (see Compiler.include()), which forces all modules in package to be compiled.
Note that a wildcard import (import package.*;) won't work, since wildcard imports are lazy:
When using wildcard imports on a package the compiler does not eagerly process all modules in that package. This means that these modules are never actually seen by the compiler unless used explicitly and are then not part of the generated output.
@:keep and @:keepSub only keep classes. In your case, Foo has never been included in the compilation, so the compiler has nothing to keep.
You can list more than one entry point for your application in your hxml (or in your haxe ... command) to include some classes: haxe -cp src -main MainFoo Foo --interp will find Foo. (more on --interp here: http://old.haxe.org/doc/compiler#macro-options)
You can also include a package and all its classes with --macro include('my.package'), but in your example your classes are in the root package and I don't know if you can include that one.
More info on include: http://api.haxe.org/haxe/macro/Compiler.html#include

Haskell how to hide (exclude) a function from module without explicitly listing all the non hidden functions?

I have a Haskell module, and I’d like it to export all objects declared in its file except for one specific function local_func.
Is there a cleaner way to achieve this than by writing an export list explicitly listing all the other declarations (and carefully keeping this list up-to-date for all eternity)?
In other words, I'd like an analogue of import MyModule hiding (local_func), but specified in the exporting module rather than at import time.
As far as I'm aware there is not currently a way to do this.
What I usually end up doing is having a central module that re-exports the important things, as a convenient way to import everything that is necessary while not hiding anything in the modules defining those things (which in some cases - ones you probably won't foresee! - makes it easier for your users to modify things in your module).
To do this use the following syntax:
-- |Convenient import module
module Foo.Import (module All) where
-- Import what you want to export
import Foo.Stuff as All hiding (local_func)
-- You can import several modules into the same namespace for this trick!
-- For example if using your module also requires 'decode' from "Data.Aeson" you can do
import Data.Aeson as All (decode)
You have now conveniently exported these things.
Unfortunately not.
One could imagine a small syntactic addition which would allow the kind of thing you're asking for. Right now it's possible to write:
module M (module M) where
foo = quux
quux = 1+2
You can explicitly export the whole module. But suppose we were to add syntax such that it was possible to hide from that module. Then we would be able to write like this:
module M (module M hiding (quux)) where
foo = quux
quux = 1+2

Is it possible to hide specific functions from appearing in the documentation using haddock?

I use haddock and don't want all of my exported functions to be displayed in the documentation. Is it possible to hide specific functions?
I found the prune attribute at http://www.haskell.org/haddock/doc/html/module-attributes.html, but this is not what I want since some functions which shall be exported don't have a documentation annotation.
Assuming your current module is Foo.Bar, one solution would be to break it up into Foo.Bar and Foo.Bar.Internal. You could move all the definitions related to the function you don't want to export--perhaps even all the definitions--into Foo.Bar.Internal. Then, in Foo.Bar, you would re-export only the definitions you want the world to see.
This approach has a couple of advantages. It lets you export everything you need, while still giving the user a clear sign that certain things shouldn't be used. It also lets you document your special functions inside the Internal module, which is going to be useful (if only for your future self :P).
You could simply not export Foo.Bar.Internal in your .cabal file, hiding it from the world. However, this is not necessarily the best approach; look at the answers to How, why and when to use the ".Internal" modules pattern?, in particular luqui's.
Another possibility is to make a module Foo.Bar.Hidden exporting all of the stuff you want to hide, and then re-export the whole Foo.Bar.Hidden module from Foo.Bar:
module Foo.Bar (
    blah1,
    blah2,
    blah3,
    module Foo.Bar.Hidden
  ) where

import Foo.Bar.Hidden

blah1 = ...
blah2 = ...
blah3 = ...
This way, the hidden stuff will be exported from Foo.Bar, but not included in the documentation. The documentation will only include one relatively unobtrusive link to the Foo.Bar.Hidden module as a whole.
