I want to create a venv environment (not virtualenv) using the following commands:
sudo apt-get install python3.8-venv
python3.8 -m venv venv_name
source venv_name/bin/activate
But it seems that the venv contains dependencies on the system where it was created, and that causes problems whenever I want to make it portable. That means: I want to copy this folder along with my project, run it on another machine, and have it work without making any changes.
But I am unable to activate the environment (it gets activated, but the interpreter still uses the system's python and pip).
Therefore, I tried making another venv on the second computer and copied the lib and lib64 folders from the older venv into this newer venv (without replacing existing files), but I am getting the following error this time:
File "/usr/local/lib/python3.8/ctypes/__init__.py" line 7, in <module>
from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
But the interesting thing is, if you notice, the newly created venv on the new machine is also searching for the missing module in the system's directory and not in the venv.
How do I make the venv portable along with all its dependencies, so that I can reliably deploy it on another device just by activating it?
Disclaimer: None of this is my work; I just found this blog post and will briefly summarize it: https://aarongorka.com/blog/portable-virtualenv/ (archived)
Caveat: This only works (semi-reliably) among Linux machines. Don't use in production!
The first step is to get copies of your Python executables into the venv/bin folder, so be sure to specify --copies when creating the virtual environment:
python3 -m venv --copies venv
All that's left is changing the hardcoded absolute paths into relative paths, using your tool of choice. In the blog post, they recompute the path with pwd after changing into the venv's parent directory whenever venv/bin/activate is run:
sed -i '43s/.*/VIRTUAL_ENV="$(cd "$(dirname "$(dirname "${BASH_SOURCE[0]}" )")" \&\& pwd)"/' venv/bin/activate
Then, similarly, all pip scripts need to be adapted to execute with the local python (using | as the sed delimiter, since the replacement text itself contains slashes):
sed -i '1s|.*|#!/usr/bin/env python|' venv/bin/pip
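A venv usually ships several pip entry points (pip3, pip3.8, and so on), so you may want to patch them all in a loop and then sanity-check the result. A sketch along the same lines (exact file names depend on your Python version):
# Patch the shebang of every pip* script so it uses the python found on PATH
# (inside an activated venv, that is the venv's own interpreter)
for f in venv/bin/pip*; do
    sed -i '1s|.*|#!/usr/bin/env python|' "$f"
done
# Quick check: python and pip should now resolve into the venv
source venv/bin/activate
which python pip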
BUT, the real problem starts when installing new modules. I would expect most modules to behave nicely, but there will be those that hardcode expected path structures or otherwise thwart any work towards removing path dependencies.
However, I find this trick very useful for sharing a single folder among developers when hunting elusive bugs.
I have a device with Python 3.7 pre-installed, without any pip packages.
I created the program on my local machine with some packages in my venv (I have a requirements.txt file) and it works perfectly.
My problem is that now I want to create a directory with my programs and upload it to my device. This doesn't work, because the additional packages aren't installed there.
My question: is there a way to export the installed packages to a directory inside my program files, and import them from there locally rather than from the venv?
Copy all the venv modules to some directory and modify the PYTHONPATH variable when running your program, appending your modules directory's path to it (an example follows the man page excerpt below).
man python3
PYTHONPATH
    Augments the default search path for module files. The format is the same as the shell's $PATH: one or more directory pathnames separated by colons. Non-existent directories are silently ignored. The default search path is installation dependent, but generally begins with ${prefix}/lib/python<version> (see PYTHONHOME above). The default search path is always appended to $PYTHONPATH. If a script argument is given, the directory containing the script is inserted in the path in front of $PYTHONPATH. The search path can be manipulated from within a Python program as the variable sys.path.
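For example, a minimal sketch of that approach (the modules directory, the python3.7 path, and my_program.py are placeholder names; adjust to your Python version):
# On your local machine: export the venv's installed packages as a plain folder
cp -r venv/lib/python3.7/site-packages modules
# On the device: point PYTHONPATH at that folder when running the program
PYTHONPATH="$PWD/modules:$PYTHONPATH" python3 my_program.py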
In general, you have the following options to run a python script on another device than the one you developed the script on:
Generate an executable (for example with the package pyinstaller). With that solution, it is not required to have python installed on your device, as everything is embedded in the executable
If you have python installed on the device (as in your case), you can just run your script on it. However, if you have dependencies (from PyPI or Conda), you must also install them on your device
If you have access to internet and have your requirements.txt file, you can just run pip install -r requirements.txt
If you don't have access to the internet, you can either:
download the wheel for each package and then ship it to your device (see the sketch after this list), or
just ship the contents of the lib and lib64 folders of your virtual environment folder .venv on your local machine (I hope you are using one: python -m venv .venv) into the virtual environment on your device
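For the wheel route, pip can fetch everything on a machine with internet access and then install from that folder offline (a sketch; wheels/ is just an assumed directory name):
# On a machine with internet access:
pip download -r requirements.txt -d wheels/
# Ship wheels/ and requirements.txt to the device, then install offline:
pip install --no-index --find-links wheels/ -r requirements.txt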
I don't know how it happened, but my sys.path now apparently contains the path to my local Python project directory, let's call that /home/me/my_project. (Ubuntu).
echo $PATH does not contain that path and echo $PYTHONPATH is empty.
I am currently preparing distribution of the package and playing with setup.py, trying to always work in a virtualenv. Perhaps I messed something up while not having a virtualenv active. However, trying to re-install using python3 setup.py --record (in case I did an accidental install) fails with insufficient privileges, so I probably didn't accidentally install it into the system Python.
Does anyone have an idea how to track down how my module path got to the sys.path and how to remove that?
I had the same problem. I don't fully understand my solution, but here it is nonetheless.
My solution
Remove my package from site-packages/easy-install.pth
(An attempt at) explanation
The first hurdle is to understand that PYTHONPATH only gets added to sys.path; it is not necessarily equal to it. We are thus after whatever adds the package into sys.path.
The variable sys.path is defined by site.py.
One of the things site.py does is automatically add packages from site-packages into sys.path.
In my case, I incorrectly installed my package as a site-package, causing it to get added to easy-install.pth in site-packages and thus its path into sys.path.
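If you want to track down which .pth file injects a path, something like this should work (a sketch; my_project stands in for your package's path, and site.getsitepackages() assumes a standard Python 3 installation):
# Confirm the stray entry is really in sys.path
python3 -c 'import sys; print("\n".join(sys.path))'
# Find the .pth file in site-packages that mentions it
grep -l my_project "$(python3 -c 'import site; print(site.getsitepackages()[0])')"/*.pth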
By mistake, I forgot to specify the WORKON_HOME variable before creating my virtual environments, and they were created in the /root/.virtualenvs directory. They worked fine, and I did some testing by activating a given environment and then running (env)$ pip freeze to see which modules were installed there.
So, when I discovered the workon-home path error, I needed to change the host directory to /usr/local/pythonenv. I created it and moved all the contents of /root/.virtualenvs to /usr/local/pythonenv, and changed the value of the WORKON_HOME variable. Now, activating an environment using the workon command seems to work fine (i.e., the prompt changes to (env)$); however, if I do (env)$ pip freeze, I get a much longer list of modules than before, and it does not include the ones installed in that particular env before the move.
I guess that just moving the files and pointing the WORKON_HOME variable at another directory was not enough. Is there some config where I should specify the new location of the host directory, or some config files for the particular environment?
Virtualenvs are not relocatable by default. You can use virtualenv --relocatable <virtualenv> to turn an existing virtualenv into a relocatable one and see if that works, but that option is experimental and not really recommended for use.
The most reliable way is to create new virtualenvs. Use pip freeze -l > requirements.txt in the old ones to get a list of installed packages, create the new virtualenv, and use pip install -r requirements.txt to install the packages in the new one.
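Put together, the migration might look like this (a sketch; myenv stands in for each of your environments):
source /root/.virtualenvs/myenv/bin/activate
pip freeze -l > requirements.txt   # -l lists only locally installed packages
deactivate
virtualenv /usr/local/pythonenv/myenv
source /usr/local/pythonenv/myenv/bin/activate
pip install -r requirements.txt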
I used the virtualenv --relocatable feature. It seemed to work, but then I ran into a different Python version installed on the destination host:
$ . VirtualEnvs/moslog/bin/activate
(moslog)$ ~/VirtualEnvs/moslog/bin/mosloganalisys.py
python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
Remember to recreate the same virtualenv tree on the destination host.
In exercise 46, it says
When you are done setting all of this up, your directory should look like mine here:
skeleton/
    NAME/
        __init__.py
    bin/
    docs/
    setup.py
    tests/
        NAME_tests.py
        __init__.py
...but he never said how to save these files inside the virtual environment directory.
How do I make a .py file and save it inside this venv directory?
I can't find the files and folder structure in Windows Explorer, so I have no clue where to save anything.
I am stuck.
Thanks a lot for your help.
It sounds like you're confusing the use of venv with the layout of your code in a project structure. You shouldn't be putting your Python code and modules in a venv-generated directory. You didn't mention what OS you're using, but here is the general workflow I use on OSX.
I put all of my venv environments in $HOME/.venv. So I'd generate a venv environment like python -m venv ~/.venv/skeleton, or you might have to use python3 -m venv ~/.venv/skeleton depending on your OS.
You would then activate the venv environment with source ~/.venv/skeleton/bin/activate
You'd then create your project like LPTHW says, in a directory like $HOME/projects/skeleton.
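End to end, that separation looks roughly like this (a sketch using the example paths above):
python3 -m venv ~/.venv/skeleton      # the venv lives outside the project
source ~/.venv/skeleton/bin/activate
mkdir -p ~/projects/skeleton          # setup.py, NAME/, tests/ etc. go here
cd ~/projects/skeleton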
I have a Debian package that I built, containing a tarball of the files, a control file, and a postinst file. It's built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files be determined at runtime, based on an environment variable that will be set when dpkg -i is run on the deb file. I echo out the environment variable in the postinst script and can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would let me accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries, you can't build a Debian package whose destination directory is dynamically determined at runtime. I believe this might be possible when installing a package built from source, where you can set the install directory using configure; but since these are embedded Ubuntu machines without make, I didn't pursue that option. I did work out a non-traditional method (a hack) for installing that did work: since Debian packages simply contain a tarball relative to /, build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from the archive into a permanent location.
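The postinst for that hack could look roughly like this (a sketch, not the exact script; INSTALL_DIR and the staging path are assumed names):
#!/bin/sh
set -e
# The payload was packaged relative to /tmp, so it lands in a staging area.
# INSTALL_DIR is assumed to be exported in the environment of `dpkg -i`.
DEST="${INSTALL_DIR:-/opt/myapp}"   # fall back to a fixed path if unset
mkdir -p "$DEST"
cp -a /tmp/myapp-staging/. "$DEST"/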
I expected that after rebooting, and the automatic deletion of the subdirectory under /tmp, dpkg might no longer know that the package existed. This wasn't a problem: when I ran 'dpkg -l myapp' it showed as still installed, and updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempted to remove the package using 'dpkg -r myapp', dpkg would try to remove /tmp, which wasn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages but instead simply upgrade them.
I eventually had to abandon the universal package, because code differences in the sources forced me to recompile per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package; it does relocate the files, but dpkg fails since its own files can't be found relative to the new instdir (using --instdir is sort of like a chroot). I also tried --admindir and --root in various combinations, to see if I could keep the dpkg system relative to / but relocate the installed files, but they didn't work either. I guess rpm has a relocate option that works, but dpkg on Ubuntu does not.
You can also write a script that runs dpkg-deb with a different environment each of the 6 times, generating 6 different packages. When you make a modification, you simply run your script, all 6 packages get generated, and you can install them on your machines, avoiding the postinst hacking!
Why not install to a standard location and simply use a postinst script to create symbolic links to the desired location? This is much cleaner, and shouldn't break anything in dpkg -i.
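Such a postinst could be as small as this (a sketch; /usr/lib/myapp and TARGET_DIR are assumed names):
#!/bin/sh
set -e
# Files live at the fixed packaging path; expose them at the runtime-chosen one.
ln -sfn /usr/lib/myapp "${TARGET_DIR:-/opt/myapp}"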