I have a few test files residing in a test/ directory, which has an __init__.py file. The test files are essentially Python files but have the extension .thpy (Python test files). Each of these tests internally uses unittest.
For code coverage purposes, I'm using the trace module. coverage.py would unfortunately have been ideal, but at this time it doesn't report how many times each line is hit.
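For reference, this is roughly how I collect per-line hit counts with trace (my_tests is a placeholder for a plain .py test module):

import trace
import unittest

# Count how many times each line runs, without echoing lines as they execute
tracer = trace.Trace(count=True, trace=False)
tracer.run('unittest.main(module="my_tests", exit=False)')  # my_tests is hypothetical
results = tracer.results()
results.write_results(show_missing=True, coverdir='coverage_results')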
unittest and trace are not really compatible, as covered in unittest.py doesn't play well with trace.py - why?
For example, I have a file named cluster_ha.thpy. Following the solution given for the above question, I need to pass module='cluster_ha' inside cluster_ha.thpy, but because of the extension, Python doesn't treat it as a Python module.
Is there any way to get around this? Is there a hack that can make other extensions be treated as Python modules? Or perhaps there's another module out there that can help me get code coverage?
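One possible workaround (a sketch, not from the original post): importlib's SourceFileLoader loads Python source from any path regardless of extension, so a .thpy file can be imported by hand on Python 3:

import importlib.util
from importlib.machinery import SourceFileLoader

# Load cluster_ha.thpy as a regular module despite its extension;
# the file path is assumed from the question.
loader = SourceFileLoader('cluster_ha', 'test/cluster_ha.thpy')
spec = importlib.util.spec_from_loader('cluster_ha', loader)
cluster_ha = importlib.util.module_from_spec(spec)
loader.exec_module(cluster_ha)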
This is my project structure:
git_dir/
    root/
        __init__.py
        tests/
            __init__.py
            test1.py
        foo_to_test/
            foo.py
I'm using pytest to test foo.py, and test1.py is as follows:
from foo_to_test.foo import func_to_test
def test1():
    assert ...  # some assertion about func_to_test
From the terminal, I want to run:
pytest tests
Now for the problem.
When using --import-mode=append or --import-mode=prepend, pytest adds git_dir to sys.path.
The problem is that 'git_dir' is not the project root.
I can add sys.path.append('git_dir/root'), but the name git_dir differs for other programmers working on other computers.
I've seen in the pytest documentation that using --import-mode=importlib might solve my problem, but it doesn't seem to have any effect on sys.path, and I can't understand what it is doing.
So my questions are:
What does --import-mode=importlib do?
How can I automatically add my root to the path when testing, so that it is the same for every programmer pulling this project?
This looks hard because it's not how it's supposed to work. It may be time to learn about packaging.
The Python module system is designed to look for modules in the sys.path directories, as you describe. This variable is to be populated by the user's environment, somehow, and should preferably not be meddled with.
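You can inspect that search path directly:

import sys
print(sys.path)  # the directories Python searches for imports, in order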
When a developer wants to use your package, they must make sure it is installed on the system they run the tests on.
If you want to make that easier for them, provide e.g. a pyproject.toml file; then they can build your package and install it with pip install /path/to/your/distribution-file.tgz.
More info about that here: https://setuptools.readthedocs.io/en/latest/userguide/quickstart.html#python-packaging-at-a-glance.
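A minimal pyproject.toml sketch (setuptools backend; the project name is a placeholder):

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "root"  # hypothetical distribution name
version = "0.1.0"

During development, pip install -e . installs the package in editable mode, so the tests import the same code you are editing.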
Your problem is that you have __init__.py in the root directory. When pytest finds a test module, it walks up the parent folders until it finds one that does not have __init__.py. That directory is the one that gets added to sys.path; see Choosing a test layout / import rules. So your root directory should NOT be a Python package.
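For illustration, a layout consistent with that rule might look like this (assuming foo_to_test is a regular package; note there is no __init__.py in root/):

git_dir/
    root/
        tests/
            __init__.py
            test1.py
        foo_to_test/
            __init__.py
            foo.py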
Now about importlib and why you probably don't need it
--import-mode=importlib circumvents the standard Python way of using modules and sys.path. The exact reasons are unclear from the docs:
With --import-mode=importlib things are less convoluted because pytest doesn’t need to change sys.path or sys.modules, making things much less surprising.
And they also mention that this allows test modules to have non-unique names. But why would we want this?! It seems like with proper file structure everything works fine even if we mess with sys.path.
Anyway, overall the usage of this option doesn't sound convincing, especially given this in the official docs:
Initially we intended to make importlib the default in future releases, however it is clear now that it has its own set of drawbacks so the default will remain prepend for the foreseeable future.
I'm trying out Freeling's API for Python. The installation and test were OK; they provide a sample.py file that works perfectly (I've played around a little with it and it works).
So I was trying to use it in some other Python code I have, in a different folder (I'm guessing this is a path issue), but whenever I import freeling (as shown in sample.py):
import freeling

FREELINGDIR = "/usr/local"
DATA = FREELINGDIR + "/share/freeling/"
LANG = "es"

freeling.util_init_locale("default")
I get this error:
ModuleNotFoundError: No module named 'freeling'.
sample.py is located in the ~/Freeling-4.0/APIs/Python/ folder, while my other file is located in ~/project/; I don't know if that can be an issue.
Thank you!
A simple solution is to have a copy of freeling.py in the same directory as your code, since Python will look there.
A better solution is to either place it in one of the locations Python usually checks (like the lib folder in its install directory), or to tell Python that the path where your file is should be scanned for modules.
You can check out this question to see how it can be done on Windows. You are basically just setting the PYTHONPATH environment variable, and there will only be minor differences in how to do so for other OSes. This page gives instructions that should work on Linux systems.
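On Linux that typically amounts to something like the following, with the path assumed from the question:

export PYTHONPATH="$HOME/Freeling-4.0/APIs/Python:$PYTHONPATH"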
I like this answer since it adds the path at runtime in the script itself, doesn't make persistent changes, and is largely independent of the underlying OS (apart from the fact that you need to use the appropriate module path of course).
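That runtime approach looks roughly like this (a sketch; the Freeling path is again assumed from the question):

import os
import sys

# Make the folder containing freeling.py importable for this process only
sys.path.append(os.path.expanduser("~/Freeling-4.0/APIs/Python"))

import freeling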
You need to set PYTHONPATH so Python can find the modules if they are not in the same folder.
I have written a module for teaching Python. I'd like to make it difficult for the smarter ones to view the source code as a shortcut. It does not need to be fully secure - disabling the inspect module might be enough, if that is possible.
In case this is useful to anyone else using Python 3 for class tests etc., here's what I've ended up doing (with thanks to wbwlkr).
python3 -OO -m py_compile testmod.py creates a file __pycache__/testmod.cpython-34.pyo
Creating a symbolic link to this file named testmod.pyc means the code can't easily be inspected.
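In shell terms, the procedure is roughly this (the cpython-34 tag depends on your Python version):

python3 -OO -m py_compile testmod.py
ln -s __pycache__/testmod.cpython-34.pyo testmod.pyc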
One other consideration is that sensitive local variables should be overwritten when no longer needed, or they can be queried via locals().
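For example (names are hypothetical):

def grade(submission):
    answer_key = load_answer_key()  # sensitive value
    result = submission == answer_key
    answer_key = None  # overwrite so locals() no longer exposes it
    return result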
What you are searching for is a solution to "obfuscate" the source code of your module.
You could compile your module to bytecode, as suggested here: https://stackoverflow.com/a/7418341/8714367
I'm confused by some specific behaviour and can't find information that helps me understand the error.
The situation is as follows: I made a small PyQt4 app that at some point dumps an OrderedDict to a YAML string using pyyaml or ruamel.yaml (I tried both) and writes it to a file, or reads it back from that file. This works fine when executing the code normally. Now I want to distribute my app by bundling it into a single-file Windows exe using PyInstaller.
Now, if I directly use yaml.dump() or ruamel.yaml.dump() in a method of my PyQt4 form class to generate the YAML string and write it to a file (the standard way using with open ...), I am able to bundle the app using PyInstaller and the exe runs fine.
However, if I write a small function in a sub-folder/module that uses the exact same call to pyyaml (yaml.dump(dict)) or ruamel.yaml (ruamel.yaml.dump(dict, Dumper=ruamel.yaml.RoundTripDumper)) to generate the YAML string and save it to a file using with open ..., and use this in my PyQt4 method (I just wanted to make things more readable), PyInstaller starts to load a bunch of extra modules and does a lot more work (according to the console output). The resulting exe file is almost 5 times larger, and it is unusable, throwing a fatal error pyi_rth_pkgres returned -1 at start.
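The helper looks roughly like this (names are placeholders):

import ruamel.yaml

def dump_settings(data, path):
    # the round-trip dumper preserves the key order of the OrderedDict
    with open(path, 'w') as f:
        ruamel.yaml.dump(data, f, Dumper=ruamel.yaml.RoundTripDumper)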
Unfortunately, I don't understand much of either the console output or the warnings log, viewable in this gist. Maybe I am searching for the wrong terms. I also tried renaming the module to prevent shadowing.
Now my question is: does anybody know what's going on and can explain this behaviour?
After doing a lot of trial and error, I finally got it working.
I created a new module and built the dumping functions inside it. PyInstaller and the bundled exe work flawlessly. However, if I do the exact same thing in the previous module, even after refactoring the name, it does not work. I even copied the complete code over to the old module and it doesn't work. I have no idea why, and at this point I am too afraid to ask :|
I am just glad it works now.
My team is creating a build system based on SCons. We have created a bunch of helper classes in our own site_scons/site_tools folder.
My task is to create and run tests on our code, using pyunit. The test code would probably live in a subfolder, with the directory layout looking something like:
SConstruct
our_source_code/
    SConscript
site_scons/
    site_tools/
        a.py
        b.py
        c.py
        tests/
            test_a.py
            test_b.py
            test_c.py
Now my question is: what is the best way to invoke our tests, given that they will probably require the correct SCons environment to be set up? (That is, a.py uses SCons.Environment.)
Do I add a Builder or a Command? Or something else?
I think the best approach would be to use the test setup code from SCons itself. This requires an SVN checkout of SCons, as the test files are not shipped with the regular SCons tarballs. This is probably workable, as not everyone on your team will be writing tools and running tests on them.
For example, this is the test for javac. Basically, you write out the files that you want, run an SConstruct, then check that the results are what you expected. You can mock tools with Python scripts, to ensure they are really called with the flags and files that you expect. For example:
import TestSCons

test = TestSCons.TestSCons()

# Write a minimal SConstruct that loads and invokes the tool under test
test.write('SConstruct', '''env = Environment(tools = ["yourtool"])
env.RunYourTool()''')
test.write('sourcefile.x', 'Content goes here')

# Build everything in the current directory and check the produced output
test.run(arguments='.', stderr=None)
test.must_match('outputfile', 'desired contents')
test.pass_test()
There are also more instructions on writing tests for SCons tools on the wiki of swtoolkit, a seemingly defunct SCons extension from Google. The info on the wiki is still useful, and there are some good examples of how to write tests for custom SCons tools.