Is there a way to find the original source directory path in setup.py while installing the package from its source directory?
For example my source code is in
cd /home/jumbo/project/
ls -ltr
Pipfile Pipfile.lock README.md bin src_code setup.py
From the above directory, I run 'pip3 install .'
In setup.py, I want to capture the git source directory path (/home/jumbo/project/) and write the commit hash of the git code to a file.
The git source path is not constant; it is different for every user who installs the package.
git -C /home/jumbo/project/ rev-parse HEAD > hash.txt
Thanks for checking.
This is my setup.py code
import os.path
import subprocess

from setuptools import setup
from setuptools.command.install import install


class IW(install):
    def run(self):
        repo_path = os.path.dirname(os.path.realpath(__file__))
        print("REPO_PATH:", repo_path)
        command = 'git -C ' + repo_path + ' rev-parse HEAD > hash.txt'
        execute_command = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
        execute_command.communicate()
        if execute_command.returncode != 0:
            raise OSError("Command %s failed" % command)
        install.run(self)


setup(
    name='jumbo_deploy',
    version='1.1.0',
    url='https://github.com/src/jumbo-deploy',
    license='Copyright Jumbo 2018',
    packages=['jumbo_deploy'],
    install_requires=[
        'argparse',
        'requests',
    ],
    zip_safe=False,
    package_data={'jumbo_deploy': ['hash.txt']},
    include_package_data=True,
    scripts=['bin/jumbo_deploy'],
    cmdclass={
        'install': IW,
    },
)
+++++ END of my setup.py ++++
Currently, with the above setup.py, my run(self) method is executed only after pip has created and changed into a random temporary build directory:
user1 $ cd /home/jumbo/project/
user1 $ pip3 install . --upgrade -v
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-ephem-wheel-cache-w28h4dpd
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-tracker-pc07b4yn
Created requirements tracker '/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-tracker-pc07b4yn'
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-install-wqohpdxt
Processing /home/jumbo/project
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f
Added file:////home/jumbo/project/ to build tracker '/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-tracker-pc07b4yn'
Running setup.py (path:/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f/setup.py) egg_info for package from file:///home/jumbo/project/
Running command python setup.py egg_info
REPO_PATH:/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f
========
I'm expecting REPO_PATH:/home/jumbo/project
but it seems that before my setup code runs, pip has already copied the sources and changed the directory to /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f
I am pretty sure you cannot do this reliably with a custom setuptools command, and it is even more unlikely to work with a custom install command. Indeed (as you correctly noticed) you have little control over where and when this command actually runs.
You probably should look more into customizing the sdist, build, and develop commands. These are usually run directly from within the original source directory. You will need to get at least these 3, probably more, to hit all the cases, and that might not even be enough.
Next you could try with a custom egg_info command (if I understood right, more or less all commands will run egg_info at some point), but I haven't looked much into it and it might be more tricky than it looks to get all the cases right.
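For example, a hook on egg_info might look roughly like this. This is only a minimal sketch that reuses the hash.txt name and git invocation from the question's setup.py; it does not by itself solve the problem of pip copying the sources to a temporary directory.

import os
import subprocess
from setuptools.command.egg_info import egg_info

class EggInfoWithHash(egg_info):
    def run(self):
        # Directory containing setup.py at the time this command runs.
        repo_path = os.path.dirname(os.path.abspath(__file__))
        try:
            commit = subprocess.check_output(
                ['git', '-C', repo_path, 'rev-parse', 'HEAD']).decode().strip()
            with open(os.path.join(repo_path, 'hash.txt'), 'w') as f:
                f.write(commit + '\n')
        except (OSError, subprocess.CalledProcessError):
            # Not a git checkout, e.g. when building from an sdist.
            pass
        egg_info.run(self)

# Registered via cmdclass={'egg_info': EggInfoWithHash} in the setup() call.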
Also look at the setuptools documentation on "Extending and Reusing Setuptools" for more ideas where to hook up your custom code.
Finally you might have better luck with setuptools-scm and in particular its write_to option, either using it directly or looking at its code for inspiration.
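For reference, the write_to option is used roughly like this (a sketch only; note that setuptools_scm writes a version string derived from the git metadata into the named file, not the raw commit hash):

from setuptools import setup

setup(
    name='jumbo_deploy',
    use_scm_version={'write_to': 'jumbo_deploy/_version.py'},
    setup_requires=['setuptools_scm'],
)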
Related
My setup.py
import setuptools
import os, stat

with open('README.md', 'r') as fh:
    long_description = fh.read()

setuptools.setup(
    python_requires='>3.7',
    name='MyApp',
    version='1.0.0',
    description='MyApp',
    long_description=long_description,
    long_description_content_type='text/markdown',
    packages=setuptools.find_packages(),
    data_files=[
        ('MyApp/', ['MyAppScripts/my_script'])],
    entry_points={'console_scripts': ['my_script = MyApp.myapp:main']},
)
My Package:
./README.md
./MyAppScripts
./MyAppScripts/my_script
./MyApp
./MyApp/__init__.py
./MyApp/myapp.py
./setup.py
Hello Everyone, I hope I find you well and happy.
I have created a python application and would like entry_point scripts to install into directory /usr/local/MyApp and NOT /usr/local/bin. So far I am unable to get this to work and wondered if there is a way to override the install location for the entry_point scripts only? Package files should live in the default location.
As a workaround I have generated the entry_point scripts myself and placed them in my setup directory below MyAppScripts. Using 'data_files' they are then copied relative to '/usr/local' into '/usr/local/MyApp' at install time, which achieves the overall aim, but this is a bit of a kludge and I'd really like the entry_point scripts to be generated and land in the correct spot at install time.
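For illustration, such a hand-written wrapper in MyAppScripts/my_script might look something like the following. This is purely a guess at what the question's generated script contains; the real stubs that console_scripts generates are slightly different.

#!/usr/bin/env python3
# Hypothetical wrapper, mimicking what console_scripts would generate for
# 'my_script = MyApp.myapp:main'.
import sys
from MyApp.myapp import main

if __name__ == '__main__':
    sys.exit(main())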
I tried unsuccessfully:
entry_points = { 'console_scripts':['MyApp/my_script = myapp.scripts.myapp:main'] }
I also tried numerous install options such as:
python3 -m pip install --install-option="--prefix=/usr/local/MyApp" dist/MyApp-1.0.0-py3-none-any.whl
WARNING: Disabling all use of wheels due to the use of --build-option / --global-option / --install-option.
which didn't work out (possibly because my build is a wheel?).
My build command:
python3 setup.py bdist_wheel
Please excuse my ignorance around packaging; it is something that I am only just getting to grips with.
To summarise I'd like to run pip install and end up with entry_point script:
/usr/local/MyApp/my_script
Is anyone able to provide any advice please?
Thank you.
Regards,
D
I couldn't run easy_install, even though setuptools is already installed. By the way, I can see the easy_install.py file under the ...\Lib\site-packages\setuptools\command directory, but there is no easy_install.exe file under the .../Scripts directory. So the problem is not a missing PATH entry, as there is no .exe to be found.
setuptools 58.0.4
python 3.8.8
Windows 10
I wonder, is there a way to directly invoke easy_install.py to install an .egg package?
The solution is to invoke easy_install from within a Python script (credit to GitHub user emailweixu).
from setuptools import setup

if __name__ == '__main__':
    setup(script_args=['-q', 'easy_install',
                       '-v', 'PATH_TO_YOUR_EGG_FILE'])
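Saved as, say, install_egg.py (the filename here is only an example), this is run like any ordinary script, with PATH_TO_YOUR_EGG_FILE replaced by the real path to your .egg file:

python install_egg.py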
Note that this hacky solution is not going to work in the future, when setuptools removes easy_install from its code package altogether.
I am trying to make a Nextflow script that utilizes a Python script. My Python script imports a number of modules, but within Nextflow python3 fails to find two of the seven (cv2 and matplotlib) and crashes. If I call the script directly from bash it works fine. I would like to avoid creating a Docker image to run this script.
Error executing process > 'grab_images (1)'
Caused by:
Process `grab_images (1)` terminated with an error exit status (1)
Command executed:
python3 --version
echo 'processing image-1.npy'
python3 /home/hq/cv_proj/k_means2.py image-1.npy
Command exit status:
1
Command output:
Python 3.7.3
processing image-1.npy
Command error:
Traceback (most recent call last):
File "/home/hq/cv_proj/k_means2.py", line 5, in <module>
import matplotlib.pyplot as plt
ModuleNotFoundError: No module named 'matplotlib'
Work dir:
/home/hq/cv_proj/work/7f/b787c62ec420b2b5eb490603ef913f
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
I think there is a path issue, as modules like numpy, sys, re and time load successfully. How can I fix this?
Thanks in advance
UPDATE
To assist others who may have problems using Python in Nextflow scripts: make sure your shebang is correct. I was using
#!/usr/bin/python
instead of
#!/usr/bin/python3
Since all of my packages were installed with pip3 and I exclusively use python3, the shebang needs to point at python3.
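A quick way to confirm which interpreter and search path a given shebang resolves to (an added diagnostic suggestion, not part of the original post) is to run a couple of lines with that interpreter:

# Run with the interpreter the shebang points at (e.g. /usr/bin/python3).
import sys
print(sys.executable)  # which Python binary is actually being used
print(sys.path)        # where it will look for matplotlib, cv2, etc.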
Best to avoid absolute paths to your script(s) in your process declarations. This section of the docs is worth taking some time to read: https://www.nextflow.io/docs/latest/sharing.html#manage-dependencies, particularly the subsection on how to manage third party scripts:
Any third party script that does not need to be compiled (Bash, Python, Perl, etc) can be included in the pipeline project repository, so that they are distributed with it.
Grant the execute permission to these files and copy them into a folder named bin/ in the root directory of your project repository. Nextflow will automatically add this folder to the PATH environment variable, and the scripts will automatically be accessible in your pipeline without the need to specify an absolute path to invoke them.
Then the problem is how to manage your Python dependencies. You mentioned Docker is not an option. Is Conda also not an option? The config for Conda might look something like:
name: myenv
channels:
  - conda-forge
  - bioconda
  - defaults
dependencies:
  - conda-forge::matplotlib-base=3.4.3
  - conda-forge::numpy=1.21.2
  - conda-forge::opencv=4.5.2
Then if the above is in a file called environment.yml, create the environment with:
conda env create
See also the best practices for using Conda.
I follow this doc: https://mg.pov.lt/objgraph/
objgraph_test.py:
import objgraph
import os
x = ['a', '1', [2, 3]]
filename = os.path.dirname(__file__) + '/objgraph_test.png'
objgraph.show_refs([x], filename=filename)
When I try to output a .png image file, it throws an error:
(venv) ☁ python-codelab [master] ⚡ python3 /Users/ldu020/workspace/github.com/mrdulin/python-codelab/src/performance-optimization/memory-profile-and-objgraph/objgraph_test.py
Graph written to /var/folders/38/s8g_rsm13yxd26nwyqzdp2shd351xb/T/objgraph-4hy982i9.dot (6 nodes)
Image renderer (dot) not found, not doing anything else
I have already installed the xdot package.
(venv) ☁ python-codelab [master] ⚡ pip3 list | grep -e 'xdot\|objgraph'
objgraph 3.4.1
xdot 1.1
How can I solve this?
I ran into the same problem using python3. These three steps worked for me:
Install the Graphviz package (which contains the dot.exe file that your script is not finding to generate a .png from a .dot), either via pip install or by downloading it directly from https://graphviz.gitlab.io/
Add dot.exe to the PATH. You need to be able to run dot.exe just by typing dot in the command line; to do this, add the directory containing dot.exe to your PATH environment variable.
Restart your command line or IDE and run the script again. This time you will be able to generate the .png image.
Hope it helps!
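A quick way to check the second step from Python (an added suggestion, not part of the original answer): shutil.which returns the full path of dot if it is on PATH, and None otherwise.

import shutil
print(shutil.which('dot'))  # prints the path to the dot executable, or None if it is not on PATH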
You need to install the system package, sudo apt install graphviz; just installing the Python package (pip install xdot) won't help (see the installation instructions for other OSes).
The issue is that objgraph.py:_program_in_path#L1253 can't find the binary file:
path = os.environ.get("PATH", os.defpath).split(os.pathsep)
path = [os.path.join(dir, program) for dir in path]
path = [file for file in path
        if os.path.isfile(file) or os.path.isfile(file + '.exe')]
print(path)
Example of working output:
['/usr/bin/dot', '/bin/dot']
I'm still a newbie at bash programming, but I'm trying to run a program with a little script. Reducing the problem to the error message, I have
cd /full/path/to/program
python3 -m krop
That command works when the current folder is /full/path/to/program, but if I run the equivalent from a different directory it doesn't work:
cd /another/path
python3 -m /full/path/to/program/krop
/usr/bin/python3: Error while finding module specification for '/full/path/to/program/krop'
(ModuleNotFoundError: No module named 'krop-0')
I tried lots of variants, but always get the same kind of error. I do not have a clue why python3 adds '-0' at the end of the module name.
What should I put to run the program from the root directory (or any other directory)?
The python -m command expects a module name, similar to the syntax you would use to import it in a Python program. So if your module lies in ./directory and directory is a valid Python package, you can do python -m directory.krop.
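As an illustration, the layout assumed by python -m directory.krop (run from the parent of directory) would be something like:

./directory/__init__.py
./directory/krop.py        (or ./directory/krop/__main__.py if krop is itself a package)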
You can't, however, address Python modules by file-system path from the root. You either have to make your bash script run in the right directory, so that you make a local import, or you have to package and install your module system-wide, so that a global import can be invoked with python -m krop from anywhere.
More information on packaging and installing modules: https://packaging.python.org/tutorials/packaging-projects/
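A minimal sketch of that second route, following the tutorial linked above (the names and layout here are assumptions about the krop project):

from setuptools import setup, find_packages

setup(
    name='krop',
    version='0.1.0',
    packages=find_packages(),
)

After pip install . in the project directory, python3 -m krop would work from any directory, provided krop is a single module or a package with a __main__.py.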
Problem solved!
It was a matter of managing the Python import paths, as hiroprotagonist replied. The list that contains all of the directories Python will search for modules is available in a variable named sys.path.
So, if somebody wants to run a program (a 'library module as a script', according to the Python help) with the python command from a directory different from the current one, they should write on the command line:
export PYTHONPATH='/full/path/to/program/'
python3 -c "import sys; print(sys.path)"
python3 -m krop
The second line just prints sys.path to the screen for inspection; only the first one (export PYTHONPATH) is actually necessary.
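The same effect can be had from inside Python instead of exporting PYTHONPATH (a small added illustration using the standard library's runpy module):

import runpy
import sys

# Prepend the program's directory to the module search path, then run krop
# exactly as "python3 -m krop" would.
sys.path.insert(0, '/full/path/to/program/')
runpy.run_module('krop', run_name='__main__')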
Thank you for the keywords and help!
PS: Maybe the question title should be edited to "problem with a python command to run a program from the command line on linux" or something like that.
Reference: python --help :)