I have a custom Python Flask App and I build it for my embedded target using Yocto and Bitbake.
I also have a set of test cases that I run on my Build Machine against the app using PyTest. I'd like the build to fail if the tests fail.
I'm looking for a Yocto way of running these tests as a custom task as part of my recipe. So far my Google search has (unusually) come up empty.
Here is what I have implemented so far. It works, but it requires the system Python 3 and various pip installs. Ideally the requirements would be built in Yocto, but only for the host machine and not the target. I haven't figured out how to do that yet.
# Run the pytest test cases against the app code after it is installed
python do_run_pytest_testsuite(){
    import os
    from subprocess import Popen, PIPE

    # Change to the testing directory
    testDir = d.getVar('WORKDIR', True)
    os.chdir(testDir)

    # Run pytest to execute the test suite, capturing the output
    with open("%s/TestOutput.txt" % testDir, "w") as testReportFile:
        with Popen(['/usr/bin/python3', '-m', 'pytest', '-v', 'tests/test_app.py'],
                   stdout=PIPE, bufsize=1, universal_newlines=True) as proc:
            for line in proc.stdout:
                testReportFile.write(line)

    # The return code is only set once the process has exited
    # (the with-block waits for it), so check it outside the block
    if proc.returncode != 0:
        # Force failure
        bb.fatal("Test Cases Failed! ( %s )" % testDir)
}
addtask run_pytest_testsuite before do_install after do_configure
How can I accomplish this self-contained within the Yocto environment, without installing any pytest dependencies on the target board?
First I would have a look at poky/meta/recipes-devtools/python to get an idea of which Python recipes are available (mind which release you are working on).
Then you can add a dependency on the native version of those recipes, e.g. DEPENDS = "python3-native python3-yaml-native" (or whatever packages you need to run your Python application).
Then add a task that runs your application:
do_run_testsuite(){
    python3 ${WORKDIR}/test/test_app.py
}
addtask do_run_testsuite before do_install after do_configure
Maybe also check the openembedded python layer (meta-python).
If you don't find all the dependencies you need, it is relatively easy to add a pip package in your own layer (just have a look at the recipes in the mentioned layers). A combined sketch follows below.
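Putting the pieces together, a minimal recipe fragment might look like the following. This is a sketch, not a verified recipe: it assumes your layer set provides a python3-pytest recipe with a native variant (meta-python's python3-pytest typically declares one), and it mirrors the task ordering from the question.

# Sketch: pull the test tooling into the native sysroot only,
# so nothing is added to the target image.
DEPENDS += "python3-native python3-pytest-native"

# Shell tasks fail the build on any non-zero exit code,
# so a failing pytest run fails the build automatically.
do_run_pytest_testsuite() {
    cd ${WORKDIR}
    # python3 resolves to the native sysroot interpreter here
    python3 -m pytest -v tests/test_app.py > ${WORKDIR}/TestOutput.txt
}
addtask run_pytest_testsuite before do_install after do_configure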
I need to install a Python plugin (a simple Python file) before starting the tests with pytest. I have used entry_points in setup.py. My problem is a bit complex, so let's work through it with an example and come back to the actual problem later.
There are two packages: one is core and the other is mypackage.
Core provides the functionality to add a plugin with the group name 'my.plugin'.
core package logic
from importlib_metadata import entry_points

def plugin_method(some_data):
    plugins = entry_points()['my.plugin']
    loaded_plugins = []
    for p in plugins:
        loaded_plugins.append(p.load())
    # Does some processing on the data and decides which method to call from the plugin
    # call plugin method
    return result
mypackage logic
setup.py
setup(
    ...
    entry_points={'my.plugin': 'plugin1 = plugin1.logic'},
    ...
)
logic.py
def method1(method_data):
    print('method1 called')
    return 1

def method2(method_data):
    print('method2 called')
    return 2
main.py
from core import plugin_method  # provided by the core package above

def method_uses_plugin():
    # create data
    plugin_method(data)
The plugin works fine. :)
Problem
I have written a test case for the method_uses_plugin method. It works fine if I have installed mypackage on my machine, but it fails if the installation is not done (in the Jenkins pipeline 🙁).
We don't usually install the package to run test cases because test cases should use source code directly.
We might need to do something with pytest to register the plugin in entry_points. I have tried many links but nothing worked.
My use case is a bit complex, but a similar question can be found here.
There are two use cases for running the tests against the actual source code.
On your local machine
If you want to test the source code while working on it, you can simply install your package in editable mode with the command:
pip install -e .
Documentation of -e from the man page:
-e, --editable <path/url>
    Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url.
This links the package to the location of the code (the .), meaning that any change made to the source code will be reflected in the package.
In Continuous Integration (CI)
As your CI is running in a Docker container, you can simply copy the source code into it, install it with pip install ., and finally run pytest.
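As a rough illustration, the CI steps could look like this (the paths and container setup are placeholders, not taken from the question):

# Inside the CI container: install the package from source, then run the tests.
cp -r /path/to/source /workdir
cd /workdir
pip install .        # or `pip install -e .` for an editable install
python -m pytest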
If all else fails, you can try converting your code into an executable and using batch commands to run pip install for as many packages as you need, and then run your program. I believe in Jenkins you can run batch files as an administrator.
Invoke pip install from batch file
Run Batch file as an administrator in Jenkins
How do I set up a Python virtual environment with the FreeCAD library embedded, so as to enable importing it as a module into scripts?
I would like to avoid using the FreeCAD GUI as well as being dependent on having FreeCAD installed on the system, when working with Python scripts that use FreeCAD libraries to create and modify 3D geometry. I hope a Python virtual environment can make that possible.
I am working in PyCharm 2021.1.1 with Python 3.8 in virtualenv on Debian 10.
I started out with FreeCAD documentation for embedding in scripts as a basis:
https://wiki.freecadweb.org/Embedding_FreeCAD
In one attempt, I downloaded and unpacked .deb packages from Debian, taking care to get the correct versions required by each dependency. In another attempt, I copied the contents of a FreeCAD flatpak install, as it should contain all the libraries that FreeCAD depends on.
After placing the libraries to be imported in the virtual environment's folder, I have pointed to them with sys.path.append() as well as PyCharm's Project Structure tool in various attempts. In both cases the virtual environment detects where FreeCAD.so is located, but fails to find any of its dependencies, even when they are located in the same folder. When importing these dependencies explicitly, each of them has the same issue. This leads to a dead end when an import fails because it does not define a module export function according to Python:
ImportError: dynamic module does not define module export function (PyInit_libnghttp2)
I seem to be looking at a very long chain of broken dependencies, even though I make the required libraries available and inform Python where they are located.
I would appreciate either straight up instructions for how to do this or pointers to documentation that describes importing FreeCAD libraries in Python virtual environments, as I have not come across anything that specific yet.
I came across a few prior questions which seemed to have similar intent, but no answers:
Embedding FreeCAD in python script
Is it possible to embed Blender/Freecad in a python program?
Similar questions for Conda focus on importing libraries from the host system rather than embedding them in the virtual environment:
Incude FreeCAD in system path for just one conda virtual environment
Other people's questions on FreeCAD forums went unanswered:
https://forum.freecadweb.org/viewtopic.php?t=27929
EDIT:
Figuring this out was a great learning experience. The problem with piecing dependencies together is that everything, from FreeCAD and its dependencies to the Python interpreter and its dependencies, seems to need to be built against the same versions of the libraries they depend on; otherwise you get segmentation faults that bring everything to a crashing halt. This means the idea of grabbing the FreeCAD modules and the libraries they depend on from a Flatpak installation is in theory not horrible, as all parts are built together using the same library versions. I just couldn't make it work, presumably due to how the included libraries are located and the difficulty of identifying an executable for the included Python interpreter. In the end, I looked into the contents of the FreeCAD AppImage, and that turned out to have everything needed in a folder structure that appears to be very friendly to what PyCharm and Python expect from modules and libraries.
This is what I did to get FreeCAD to work with PyCharm and virtualenv:
Download FreeCAD AppImage
https://www.freecadweb.org/downloads.php
Make AppImage executable
chmod -v +x ~/Downloads/FreeCAD_*.AppImage
Create folder for extracting AppImage
mkdir -v ~/Documents/freecad_appimage
Extract the AppImage into the folder (note: this expands to close to 30,000 files requiring in excess of 2 GB of disk space)
cd ~/Documents/freecad_appimage
~/Downloads/FreeCAD_*.AppImage --appimage-extract
Create folder for PyCharm project
mkdir -v ~/Documents/pycharm_freecad_project
Create a PyCharm project using the Python interpreter from the extracted AppImage:
Location: ~/Documents/pycharm_freecad_project
New environment using: Virtualenv
Location: ~/Documents/pycharm_freecad_project/venv
Base interpreter: ~/Documents/freecad_appimage/squashfs-root/usr/bin/python
Inherit global site-packages: False
Make available to all projects: False
Add the folder containing the FreeCAD.so library as a Content Root to the PyCharm Project Structure and mark it as Sources (by doing so, you shouldn't have to set PYTHONPATH or sys.path values, as PyCharm provides module location information to the interpreter)
File: Settings: Project: Project Structure: Add Content Root
~/Documents/freecad_appimage/squashfs-root/usr/lib
After this PyCharm is busy indexing files for a while.
Open the Python Console in PyCharm and run a command to check basic functioning
import FreeCAD
Create python script with example functionality
import FreeCAD
vec = FreeCAD.Base.Vector(0, 0, 0)
print(vec)
Run script
Debug script
All FreeCAD functionality I have used in my scripts so far has worked. However, one kink seems to be that the FreeCAD module needs to be imported before the Path module; otherwise the Python interpreter exits with code 139 (interrupted by signal 11: SIGSEGV).
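For illustration, the import order that avoided the crash for me looks like this:

# FreeCAD must be imported before Path; reversing the order crashed
# the interpreter with SIGSEGV (exit code 139) in my setup.
import FreeCAD
import Path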
There are a couple of issues with PyCharm: it shows a red squiggly line under the import name, claiming that an error has happened because "No module named 'FreeCAD'", even though the script runs perfectly. Also, PyCharm fails to provide code completion in the code view, even though it manages to do so in its Python Console. I am creating new questions to address those issues and will update the info here if I find a solution.
I need to run a script via Popen inside a packaged project (packaged with PyInstaller). The command NEEDS to run under Python 3, and I (obviously) need it to be portable, which means I can't rely on the configuration (python vs. python3 naming) of the machine I used to generate the package, nor of the one where I am using it (I also can't guarantee that there will be any Python on the deployment machine...).
I have computers where python is already the correct version, and others where the correct one is python3. I tried to insert #!/usr/bin python3 at the beginning of the file and then run it as python, but it didn't work.
subprocess.Popen(['python', 'internal.py', arg1, arg2], universal_newlines=True)
In the non-frozen environment I am able to run it using sys.executable instead of python. I then tried to "pyinstall" internal.py, then "pyinstall" the rest, copying the internal folder into the new folder, and call:
subprocess.Popen(['./internal/internal', arg1, arg2], universal_newlines=True)
But this doesn't work... From the console (moving into the bigger project's folder) it does, but when running the package, it doesn't...
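For reference, the dispatch I am trying to achieve looks roughly like this (a sketch; the helper name and arguments are illustrative):

import os
import subprocess
import sys

arg1, arg2 = "foo", "bar"  # placeholder arguments

if getattr(sys, 'frozen', False):
    # Frozen by PyInstaller: sys.executable is the bundled app itself,
    # so resolve the packaged helper relative to it instead of the cwd.
    base = os.path.dirname(sys.executable)
    cmd = [os.path.join(base, 'internal', 'internal'), arg1, arg2]
else:
    # Running from source: reuse the current interpreter.
    cmd = [sys.executable, 'internal.py', arg1, arg2]

subprocess.Popen(cmd, universal_newlines=True)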
Any idea?
P.S. I can't just import the script as a class or whatever. I can't modify system environment variables (except the ones inside the PyInstaller bundle...).
I saw something about PyInstaller.compat having an exec_python and a __wrap_python, but I couldn't use them: I get errors when trying to run the package due to pkg_resources.DistributionNotFound: The 'PyInstaller' distribution was not found and is required by the application. I tried to create a hook and to add it as a hidden import... Neither worked (but the hook part maybe I got wrong...).
Here is a link to a project and output that you can use to reproduce the problem I describe below.
I'm using coverage with tox against multiple versions of python. My tox.ini file looks something like this:
[tox]
envlist =
    py27
    py34

[testenv]
deps =
    coverage
commands =
    coverage run --source=modules/ -m pytest
    coverage report -m
My problem is that coverage will run using only one version of python (in my case, py27), not both py27 and py34. This is a problem whenever I have code execution dependent on the python version, e.g.:
def add(a, b):
    import sys
    if sys.version.startswith('2.7'):
        print('2.7')
    if sys.version.startswith('3'):
        print('3')
    return a + b
Running coverage against the above code will incorrectly report that line 6 ("print('3')") is "Missing" for both py27 and py34. It should only be Missing for py34.
I know why this is happening: coverage is installed on my base OS (which uses python2.7). Thus, when tox is run, it notices that coverage is already installed and inherits coverage from the base OS rather than installing it in the virtualenv it creates.
This is fine and dandy for py27, but causes incorrect results in the coverage report for py34. I have a hacky, temporary work-around: I require a slightly earlier version of coverage (relative to the one installed on my base OS) so that tox will be forced to install a separate copy of coverage in the virtualenv. E.g.
[testenv]
deps =
    coverage==4.0.2
    pytest==2.9.0
    py==1.4.30
I don't like this workaround, but it's the best I've found for now. Any suggestions on a way to force tox to install the current version of coverage in its virtualenv's, even when I already have it installed on my base OS?
I came upon this problem today, but couldn't find an easy answer. So, for future reference, here is the solution that I came up with.
Create an envlist that contains each version of Python that will be tested and a custom env for cov.
For all versions of Python, set the COVERAGE_FILE environment variable to store the .coverage file in {envdir}.
For the cov env I use two commands:
coverage combine, which combines the reports, and
coverage html, to generate the report and, if necessary, fail the run.
Create a .coveragerc file that contains a [paths] section listing the source= locations.
The first line is where the actual source code is found.
The subsequent lines are the subpaths that will be remapped onto the first entry by `coverage combine`.
tox.ini:
[tox]
envlist = py27,py36,py35,py34,py33,cov

[testenv]
deps =
    pytest
    pytest-cov
    pytest-xdist
setenv =
    py{27,36,35,34,33}: COVERAGE_FILE={envdir}/.coverage
commands =
    py{27,36,35,34,33}: python -m pytest --cov=my_project --cov-report=term-missing --no-cov-on-fail
    cov: /usr/bin/env bash -c '{envpython} -m coverage combine {toxworkdir}/py*/.coverage'
    cov: coverage html --fail-under=85
.coveragerc:
[paths]
source =
    src/
    .tox/py*/lib/python*/site-packages/
The most peculiar part of the configuration is the invocation of coverage combine. Here's a breakdown of the command:
tox does not handle shell expansions such as {toxworkdir}/py*/.coverage, so we need to invoke a shell (bash -c) to get the necessary expansion.
If one were so inclined, you could just type out all the paths individually and not jump through all of these hoops, but that would add maintenance and a .coverage file dependency for each pyNN env.
/usr/bin/env bash -c '...' ensures we get the correct version of bash. Using the full path to env avoids the need to set whitelist_externals.
'{envpython} -m coverage ...' ensures that we invoke the correct python and coverage for the cov env.
NOTE: The unfortunate problem of this solution is that the cov env is dependent on the invocation of py{27,36,35,34,33} which has some not so desirable side effects.
My suggestion would be to only invoke cov through tox.
Never invoke tox -e cov on its own, because either:
It will likely fail due to a missing .coverage file, or
It could give bizarre results (combining differing tests).
If you must invoke it as a subset (tox -epy27,py36,cov), then wipe out the .tox directory first (rm -rf .tox) to avoid the missing .coverage file problem.
I don't understand why tox wouldn't install coverage in each virtualenv properly; you should get two different coverage reports, one for py27 and one for py34. A nicer option might be to produce one combined report: use coverage run -p to record separate data for each run, and then coverage combine to combine them before reporting.
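A minimal tox.ini along those lines might look like this (a sketch based on the question's layout; the report env name is arbitrary and is run separately after the test envs):

[tox]
envlist =
    py27
    py34

[testenv]
deps =
    coverage
commands =
    coverage run -p --source=modules/ -m pytest

[testenv:report]
deps =
    coverage
commands =
    coverage combine
    coverage report -m

Running tox and then tox -e report combines the per-interpreter data files (.coverage.*) into a single report.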
I've been looking, and looking, and I can't find an answer to my question.
I've just started learning scons tonight, and it looks awesome! I'm running into a little confusion though.
For ease of development, I often like to have my make file build my target, and then run it so that I can test a change with one keystroke. This is very simple in a make file:
run: $(exe)
	chmod a+x $(exe)
	$(exe)
I have figured out that I can do it using subprocess like so:
import subprocess
import os.path

env = Environment()
result = env.Program(target="FOO", source="BAR")
location = os.path.abspath(result[0].name)
subprocess.call([location])
But there's a problem with this solution: as far as I have experimented, scons won't wait until your program has finished building before it starts the subprocess call, so you end up running the old executable, or getting an error if it's a build right after a clean.
What you do in your SCons file is a typical beginner error in SCons: you assume that you are writing a script that builds your project.
SCons doesn't work like that. The SCons file is a script that adds targets to the project. This is done through Python, and the various objects allow you to create and manipulate targets until the script is done. Only then does the project start building.
What you do in your code is describe the Environment to use and the Program to create, and after that you call a subprocess that runs some program. Only after all of this does the project start building; no wonder the old executable is run, the new one hasn't started to be built yet.
What you should do is use a custom builder for executing the program.
env = Environment()   # construct your environment
files = "test.cpp"    # env.Glob or list some files

# now we create some targets
program = env.Program("test", files)               # create the target *program*
execution = env.Command("run_test", [], "./test")  # create the execution target (no real input & output)
Depends(execution, program)  # tell SCons that execution depends on program
# there might be a way to get SCons to figure out this dependency itself

# now the script is complete, so the targets are built
Here the dependencies are clear: the program must be built before the execution target is run, and it will be.
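Regarding the comment about letting SCons figure out the dependency itself: one option (my addition, not part of the original answer) is to pass the program node as the source of the Command, since SCons treats sources as dependencies:

env = Environment()
program = env.Program("test", "test.cpp")

# The program node is the source of the command, so SCons knows it must be
# built first; $SOURCE expands to the path of the built program.
execution = env.Command("run_test", program, "./$SOURCE")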
I may be a little late for you, but here is a solution using Alias.
With the following command, it will build and run the program:
$ scons run
# Define the different target output
program = env.Program('build/output', Glob('build/test/*.cpp'))
env.Default(program)
env.Alias('run', program, program[0].abspath)
Note that we use the abspath so it is cross-platform (Windows/Linux); on Linux you would need to prefix the program name with "./" if your PATH is not set up accordingly.
Ok, I'm a little nervous to answer my own question, but I found a more or less acceptable solution.
I have just set up a simple chain.
I set up a Makefile with something like this in it:
run:
	scons -s
	./name_of_executable
This calls scons in silent mode, and runs your program automatically afterwards. It's not a scons-only solution, but it works. I'd still be interested to see if anyone has another answer.
Thanks!
Murphy