Using your own venv on another system - python-3.x

I have three small problems which are inter-related.
1. I recently created my own virtual environment. I want to export that environment to my friend's system so that he can run my environment's main program in one tap.
2. Where should I put the main driver Python code file in the venv so that it can easily be executed on another system?
3. I used open() to read a text file, but I am not sure what its directory must be so that it works on any other system. I am currently storing it within my venv.
What I tried:
1. This part is done: I exported it to the other system by copying my_venv directly and pasting it there (but I am not sure which folder to select so that it can be used on the other Windows machine).
2. I stored it as my_venv/main.py.
3. I tried open(r'.\my_env/text.txt', 'r').

You cannot actually move your virtual environment from one system to your friend's system. What you can do instead is:
Create a src folder inside the virtual environment folder and keep all the code files and other files the project needs inside this source folder.
Use the pip freeze command to obtain the details of all installed packages, and store them in a file such as requirements.txt, either by copy-pasting manually or by using output redirection (see the commands after these steps).
Now that the requirements and basic structure are in place, create a virtual environment on the other (your friend's) system. Ask him to place the src folder in exactly the same place as you did, and then to install all the dependencies using pip (you can follow this link).
He should then be good to go with the project execution. Another helpful link can be this, which shows how to use the steps mentioned above.
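For example (a sketch for Windows, matching the question; the requirements.txt location assumes the structure below). On your machine, with the venv activated:
pip freeze > src/requirements.txt
And on your friend's machine, from the project root:
python -m venv my_venv
my_venv\Scripts\activate
pip install -r src/requirements.txt
On Linux/macOS the activation command is source my_venv/bin/activate instead.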
A suggested folder structure can be something like this:
+
|
|---- src
|     |
|     |---- main.py
|     |
|     |---- data
|     |     |
|     |     |---- dataset1.csv
|     |     |---- dataset2.csv
|     |     +
|     |
|     |---- utils
|     |     |
|     |     |---- helper.py
|     |     |
|     |     +
|     |
|     |---- requirements.txt
|     |
|     +
+
Here main.py is the driver code, and the other directories contain helper/utility functions and classes.
Some good practices for managing a project's folder structure:
Keep all the code and data files inside the source folder (here src).
Use relative paths instead of absolute paths; the os module can help with this (see the sketch after this list). Relative paths eliminate the need to modify the code every time you run it on a different machine or operating system.
Never copy the venv folder. It's only the src folder we need.
Using a version control system is a big plus when it comes to effective project management and collaboration, so try looking into git.
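For instance, a minimal sketch of reading the text file from point 3 of the question with a path relative to the script itself (the data subfolder is an assumption matching the structure above):
import os

# Resolve the path relative to this script's location rather than the
# current working directory, so it works wherever the project is launched from.
here = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(here, 'data', 'text.txt'), 'r') as f:
    contents = f.read()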
If you could share your current folder structure, I could help you out more precisely.

Related

Relative imports within a git repo

I want to create a git repo that can be used like this:
git clone $PROJECT_URL my_project
cd my_project
python some_dir/some_script.py
And I want some_dir/some_script.py to import from another_dir/some_module.py.
How can I accomplish this?
Some desired requirements, in order of decreasing importance to me:
No sys.path modifications from within any of the .py files. This leads to fragility when doing IDE-powered automated refactoring.
No directory structure changes. The repo has been thoughtfully structured.
No changes to my environment. I don't want to add a hard-coded path to my $PYTHONPATH for instance, as that can result in unexpected behavior when I cd to other directories and launch unrelated python commands.
Minimal changes to the sequence of 3 commands above. I don't want a complicated workflow, I want to use tab-completion for some_dir/some_script.py, and I don't want to spend keystrokes on extra python cmdline flags.
I see four solutions to my general problem described here, but none of them meet all of the above requirements.
If no solution is possible, then why are things this way? This seems like such a natural want, and the requirements I list seem perfectly reasonable. I'm aware of a religious argument in a 2007 email from Guido:
I'm -1 on this and on any other proposed twiddlings of the __main__ machinery. The only use case seems to be running scripts that happen to be living inside a module's directory, which I've always seen as an antipattern. To make me change my mind you'd have to convince me that it isn't.
But not sure if things have changed since then.
Opinions haven't changed on this topic since Guido's 2007 comment. If anything, we're moving even further in the opposite direction, with the addition of the PYTHONSAFEPATH variable and the corresponding -P option in 3.11:
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONSAFEPATH
https://docs.python.org/3/using/cmdline.html#cmdoption-P
These options will nerf direct sibling module imports too, requiring sys.path to be explicitly configured even for scripts!
So, scripts still can't easily do relative imports, and executable scripts living within a package structure are still considered an anti-pattern. What to do instead?! The widely accepted alternative is to use the packaging feature of entry points. One type of entry-point group in packaging metadata is the "console_scripts" group, used to point to arbitrary callables defined within your package code. If you add entries to this group within your package metadata, then script wrappers for those callables will be auto-generated and put somewhere on $PATH at pip install time (a sketch follows). No hacking of sys.path necessary.
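As an illustration, an entry in that group could look like this in pyproject.toml (the main callable is hypothetical; point it at whatever function your package actually defines):
[project.scripts]
some-script = "another_dir.some_module:main"
After pip install, an executable named some-script lands on $PATH and invokes another_dir.some_module.main().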
That being said, it's still possible to run .py files directly as scripts, provided you've configured the underlying Python environment for them to resolve their dependencies (imports) correctly. To do that, you'll want to define a package structure and "install" the package so that your source code is visible on sys.path.
Here's a minimum example:
my_project
├── another_dir
│   ├── __init__.py      <-- an __init__ file is required for package dirs (it can be empty)
│   └── some_module.py
├── pyproject.toml       <-- packaging metadata lives here
└── some_dir             <-- no __init__ file necessary for non-packaged subdirs
    └── some_script.py
Minimal contents of the packaging definition in pyproject.toml:
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "my_proj"
version = "0.1"

[tool.setuptools.packages.find]
namespaces = false
An additional once-off step is required to create/configure an environment in between the git clone and the script execution:
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
This makes sure that another_dir is available to import from the environment's site-packages directory, which is already one of the locations on sys.path (check with python -m site). That's what's required for any/all of these import statements to work from within the script file(s):
from another_dir import some_module
import another_dir.some_module
from another_dir.some_module import something
Note that this does not necessarily put the parent of another_dir onto sys.path directly. For an editable install, it will set up some scaffolding which makes your package appear to be "installed" in the site, which is sufficient for those imports to succeed. For a non-editable install (pip install without the -e flag), it will just copy your package directly into the site, compile the .pyc files, and then the code will be found by the normal SourceFileLoader.
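With the editable install done, the script can use an ordinary absolute import. A minimal sketch (the greet function is hypothetical; substitute whatever some_module really defines):
# some_dir/some_script.py
from another_dir import some_module

if __name__ == "__main__":
    some_module.greet()
It can then be run exactly as in the question: python some_dir/some_script.py.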

Creating a project-specific Vosk dictionary

I am working on an application which uses Vosk for speech recognition. I would like to create a dictionary for the application which contains only the trigger words and spoken numbers the application needs. Using the command line instructions found here: www.alphacephei.com/vosk/adaptation, I was able to install Kaldi on my laptop. These are:
export KALDI_ROOT=`pwd`/kaldi
git clone https://github.com/kaldi-asr/kaldi
cd kaldi/tools
make
extras/install_opengrm.sh
However, I am having a problem building a dictionary using the provided commands. These are:
export PATH=$KALDI_ROOT/tools/openfst/bin:$PATH
export LD_LIBRARY_PATH=$KALDI_ROOT/tools/openfst/lib/fst
cd model
fstsymbols --save_osymbols=words.txt Gr.fst > /dev/null
farcompilestrings --fst_type=compact --symbols=words.txt --keep_symbols text.txt | \
ngramcount | ngrammake | \
fstconvert --fst_type=ngram > Gr.new.fst
mv Gr.new.fst Gr.fst
The problem occurs at "cd model" because there is no /model directory in the directory structure created during the Kaldi installation. Checking in my Vosk project, I find /models, but no /model directory either.
I have tried creating /model in /kaldi/tools and then running the above commands with no success. Please let me know what I am missing here. Thanks in advance.
The cd model command in the docs is actually incomplete. To run this, you have to cd into the directory where Gr.fst exists; this file usually lives in <any model with dynamic graph>/graph.
Head to https://alphacephei.com/vosk/models and download a model that supports dynamic vocabulary reconfiguration (usually the small models do, the big models don't).
Unzip the archive.
Prepare a .txt file with the words your project relates to.
Proceed with the steps from your second code snippet, with a slight modification to the cd model part, as sketched below.
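For example, assuming you downloaded and unzipped vosk-model-small-en-us-0.15 (the folder name depends on the model you pick; any small model with a dynamic graph should have a graph directory containing Gr.fst):
cd vosk-model-small-en-us-0.15/graph
fstsymbols --save_osymbols=words.txt Gr.fst > /dev/null
farcompilestrings --fst_type=compact --symbols=words.txt --keep_symbols text.txt | \
ngramcount | ngrammake | \
fstconvert --fst_type=ngram > Gr.new.fst
mv Gr.new.fst Gr.fst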

Packaging Multiple Python Files

I am currently using this guide to package up my project wasp. However, everything currently lives inside the single wasp file.
That's not ideal. I would rather have all the classes in separate files so the project can be managed more effectively. I have the necessary files in the debian directory, but I'm not sure how to configure the packaging to package multiple files.
Is there a way to change my packaging to package more than just the one script file?
I'm not a Debian packaging or Python expert, but one way would be to copy the various source files to another location (outside of /usr/bin), and then have /usr/bin/wasp call out to them.
Say you put all of your Python code in src/ in the root of your repo. In the debian/install file, you'd have:
wasp usr/bin
src/* usr/lib/wasp/
You'd then just need /usr/bin/wasp to call some entry point in src. For example,
#!/usr/bin/python3
import sys

# Make the code installed under /usr/lib/wasp/ importable
sys.path.append('/usr/lib/wasp/')

import wasp  # or whatever you expose in src
# ...
Again, I don't know the best practices here (either in directory layout or Python usage), but I think this would at least work!

mac directory structure to text converter

I'm looking for a tool that would convert a given directory into a text-based tree of the following form. I'm working with macOS; maybe there is a browser-based tool for this?
./directory
|
+-- subdirectory1/
|   |
|   + fileA.md
|
+-- subdirectory2/
    |
    + fileA.md
    + ...
There is a Unix tool called tree that will do this. It outputs the directory tree structure of a given folder. It is a command-line tool, which means that you will have to use the terminal to get your results. Typing tree -d ~ will, for example, output the tree structure of your home directory (the -d flag limits the output to directories).
Although it is not included by default on macOS, you can install it yourself: download and compile the source from its homepage (link), or use a package manager like Homebrew to install it (brew install tree).
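For example, installed via Homebrew and run on a structure like the one in the question, the output looks roughly like this (sketched by hand):
$ brew install tree
$ tree ./directory
./directory
├── subdirectory1
│   └── fileA.md
└── subdirectory2
    └── fileA.md

2 directories, 2 files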

git submodules: ln -s

How can I create a directory link that will survive git submodule update?
Given the following example directory structure:
proj
|-- lib3
|   `-- submod
|       `-- lib
`-- lib
I created a soft link from proj/lib/ to proj/lib3/submod/lib using something like the following command:
brad@bradpc:~/proj/lib$ ln -s ../lib3/submod/lib submodlib
creating the following directory structure:
proj
|-- lib3
|   `-- submod
|       `-- lib
`-- lib
    `-- submodlib
However, running git submodule update destroys my link. I was under the impression that a soft link is a logical link to a relative path, so removing the object and re-adding it would keep the link intact. How can I achieve the desired effect?
A soft link made with ln -s should behave as you intended. As I understand it, when you do a git submodule update, some part of your directory proj/lib3/submod/lib gets deleted and recreated. That is no different from manually doing, for example, rm -r proj/lib3/submod followed by mkdir -p proj/lib3/submod/lib.
I tested this manually (rm and mkdir) on my openSUSE Linux installation, and the soft link was still fine after recreating the directory structure.
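A quick way to reproduce that test, assuming the layout above (a sketch):
cd proj/lib
ln -s ../lib3/submod/lib submodlib
rm -r ../lib3/submod        # simulate the delete/recreate done by the update
mkdir -p ../lib3/submod/lib
ls -l submodlib             # still shows submodlib -> ../lib3/submod/lib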
In which OS environment do you work? Perhaps it's not a true soft link.
