I'm trying to "rbt post CHANGENUM"(CHANGENUM is the number of pending list) from Perforce P4V in Windows7.
I succeeded to execute this command from command prompt, but I failed to execute from Perforce P4V with the error below.
>>> RBTools 0.7.1 alpha 0 (dev)
>>> Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)]
>>> Running on Windows-7-6.1.7601-SP1
>>> Home = C:\Users\xxxx\AppData\Roaming
>>> Current directory = c:\Users\xxxx\p4vworkspace
>>> Checking for a Subversion repository...
>>> Unable to execute "svn help": skipping SVN
>>> Checking for a Git repository...
>>> Running: git rev-parse --git-dir
>>> Command exited with rc 128: ['git', 'rev-parse', '--git-dir']
fatal: Not a git repository (or any of the parent directories): .git
---
>>> Checking for a Mercurial repository...
>>> Unable to execute "hg --help": skipping Mercurial
>>> Checking for a CVS repository...
>>> Unable to execute "cvs": skipping CVS
>>> Checking for a Perforce repository...
>>> Running: p4 info
>>> Running: diff --version
>>> repository info: Path: perforce.tw.trendnet.org:1666, Base path: None, Supports changesets: True
>>> Making HTTP GET request to http://xxxxx/api/
Path 'CHANGENUM' does not match a valid Perforce path.
My Custom Tool settings are:
■ Application: C:\Program Files\RBTools\bin\rbt.cmd
■ Arguments: post -d %p
Do you have any ideas how to fix it?
I unset P4CHARSET before executing rbt post, and then the post succeeded.
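One way to make sure P4CHARSET is cleared only for the P4V invocation is to point the Custom Tool at a small wrapper batch file instead of rbt.cmd (a sketch; the wrapper name and the use of %* to forward the tool's arguments are assumptions, not part of the original setup):
@echo off
rem hypothetical wrapper, e.g. rbt-post.cmd, saved next to rbt.cmd
rem clear P4CHARSET for this process only, then forward whatever arguments P4V passes in
set P4CHARSET=
call "C:\Program Files\RBTools\bin\rbt.cmd" post %*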
Related
Only recently have I started getting this error with DVC.
Traceback:
(venv) me@ubuntu-pcs:~/PycharmProjects/project$ dvc push
ERROR: unexpected error - _register_s3_control_events() takes 2 positional arguments but 6 were given
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
(venv) me@ubuntu-pcs:~/PycharmProjects/project$ dvc doctor
DVC version: 2.28.0 (pip)
---------------------------------
Platform: Python 3.9.13 on Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 0.13.0
dvc_objects = 0.5.0
dvc_render = 0.0.11
dvc_task = 0.1.2
dvclive = 0.12.1
scmrepo = 0.1.1
Supports:
http (aiohttp = 3.8.3, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.3, aiohttp-retry = 2.8.3),
s3 (s3fs = 2022.10.0, boto3 = 1.24.59)
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme0n1p5
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme0n1p5
Repo: dvc, git
.dvc/config:
[core]
remote = storage
autostage = true
['remote "storage"']
url = s3://bucket/project/dvcstore/
Let me know if I should add anything else to the post.
Solution
pip check checks whether dependencies for installed packages are compatible.
(venv) me@ubuntu-pcs:~/PycharmProjects/project$ pip check
# ...
aiobotocore 2.4.0 has requirement botocore<1.27.60,>=1.27.59, but you have botocore 1.29.0.
I had installed packages with conflicting dependencies, i.e. packages that, by default, required different versions of botocore.
That is why recreating the venv didn't help.
In my case, I had to:
conda install botocore==1.27.59
dvc push
And dvc push was working again :)
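If you are on a plain pip/venv setup rather than conda, the equivalent fix should be to pin botocore to the range aiobotocore 2.4.0 expects (a sketch based on the versions pip check reported above):
# downgrade botocore into the range required by aiobotocore 2.4.0
pip install "botocore>=1.27.59,<1.27.60"
dvc push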
I have a Python program that should run in a Jenkins job, but I got the error below:
Started by user admin
Building in workspace /var/lib/jenkins/workspace/automatic_test
[automatic_test] $ /bin/sh -xe /tmp/jenkins257763233971180370.sh
+ cd /ext/data/nlu_test/ifly/job123
+ python3 bleu.py Zh ref_zh.txt translation_zh.txt
Traceback (most recent call last):
File "bleu.py", line 5, in <module>
import jieba
ImportError: No module named 'jieba'
Build step 'Execute shell' marked build as failure
Finished: FAILURE
When I run the same commands in a Linux shell, it runs normally, as shown below. Why?
[jenkins@localhost ~]$ cd /ext/data/nlu_test/ifly/job123
[jenkins@localhost job123]$ ls
bleu.py input.orig.txt input.trans.txt input.txt ref_zh.txt splitText.py translation_zh.txt
[jenkins@localhost job123]$ python3 bleu.py Zh ref_zh.txt translation_zh.txt
W0310 00:06:37.430938 295363 init.cc:157] AVX is available, Please re-compile on local machine
Paddle enabled successfully......
reference 1: 3615
candidate: 3493
score: 31.8288254543782
[jenkins@localhost job123]$
And I have already installed the Python package jieba, as shown below.
[root@localhost ~]# pip3 install jieba
Requirement already satisfied: jieba in /usr/local/lib/python3.7/site-packages (0.42.1)
[root@localhost ~]# pip install jieba
Requirement already satisfied: jieba in /usr/local/lib/python3.7/site-packages (0.42.1)
You can use the absolute path of the Python interpreter to execute the script in Jenkins.
Example: /usr/bin/python3 bleu.py Zh ref_zh.txt translation_zh.txt
You should check how your Jenkins agent is set up. Usually, Jenkins agents run under a different user ID than the one you are using, which means they see a different environment than the one where you are doing your testing. In your example, your Python script fails to import a package. You should check whether this package exists and is available in the environment used by the Jenkins agent.
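For example, adding a few diagnostic lines to the Jenkins "Execute shell" step shows which interpreter the agent actually runs and whether jieba is importable there (a sketch; the trailing pip install is just one possible remedy, assuming the agent user is allowed to install packages):
which python3
python3 --version
# check whether jieba is importable for this interpreter; if not, install it for the agent user
python3 -c "import jieba; print(jieba.__version__)" || python3 -m pip install --user jieba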
Is there a way to find the original source directory path in setup.py while installing the package from the source directory?
For example my source code is in
cd /home/jumbo/project/
ls -ltr
Pipfile Pipfile.lock README.md bin src_code setup.py
From the above directory, I run 'pip3 install .'.
In setup.py, I want to capture the git source directory path (/home/jumbo/project/) and write the commit hash of the git code to a file.
The git source path is not constant; it differs for each user installing the package.
git -C /home/jumbo/project/ rev-parse HEAD > hash.txt
Thanks for checking.
This is my setup.py code
import os.path
import subprocess
from setuptools import setup
from setuptools.command.install import install

class IW(install):
    def run(self):
        repo_path = os.path.dirname(os.path.realpath(__file__))
        print("REPO_PATH:", repo_path)
        command = 'git -C ' + repo_path + ' rev-parse HEAD > hash.txt'
        execute_command = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
        execute_command.communicate()
        if execute_command.returncode != 0:
            raise OSError("Command %s failed" % command)
        install.run(self)

setup(name='jumbo_deploy',
      version='1.1.0',
      url='https://github.com/src/jumbo-deploy',
      license='Copyright Jumbo 2018',
      packages=['jumbo_deploy'],
      install_requires=[
          'argparse',
          'requests',
      ],
      zip_safe=False,
      package_data={'jumbo_deploy': ['hash.txt']},
      include_package_data=True,
      scripts=['bin/jumbo_deploy'],
      cmdclass={
          'install': IW,
      }
)
+++++ END of my setup.py ++++
Currently, with the above setup.py, my run(self) function is executed after pip has already created and changed the directory to some random temporary location:
user1 $ cd /home/jumbo/project/
user1 $ pip3 install . --upgrade -v
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-ephem-wheel-cache-w28h4dpd
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-tracker-pc07b4yn
Created requirements tracker '/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-tracker-pc07b4yn'
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-install-wqohpdxt
Processing /home/jumbo/project
Created temporary directory: /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f
Added file:////home/jumbo/project/ to build tracker '/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-tracker-pc07b4yn'
Running setup.py (path:/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f/setup.py) egg_info for package from file:///home/jumbo/project/
Running command python setup.py egg_info
REPO_PATH:/private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f
========
I'm expecting REPO_PATH:/home/jumbo/project,
but it seems that before my setup code runs, pip has already changed the directory to /private/var/folders/_w/sv2ms8pd0zl38l3lyy6f787w005lxf/T/pip-req-build-1df74t7f.
I am pretty sure you cannot do this reliably with a custom setuptools command, and it is even less likely with a custom install command. Indeed (as you correctly noticed) you have little control over where and when this command actually runs.
You probably should look more into customizing the sdist, build, and develop commands. These are usually run directly from within the original source directory. You will need to get at least these 3, probably more, to hit all the cases, and that might not even be enough.
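A rough sketch of that idea, reusing the hash-writing logic from the question in a custom sdist command (untested; the output path inside the package directory is an assumption):
import os.path
import subprocess
from setuptools.command.sdist import sdist

class SDistWithHash(sdist):
    def run(self):
        # sdist normally runs from the original source tree, so __file__ points at the real repo
        repo_path = os.path.dirname(os.path.realpath(__file__))
        with open(os.path.join(repo_path, 'jumbo_deploy', 'hash.txt'), 'w') as fh:
            subprocess.check_call(['git', '-C', repo_path, 'rev-parse', 'HEAD'], stdout=fh)
        sdist.run(self)

You would register it via cmdclass={'sdist': SDistWithHash} and add similar subclasses for build and develop.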
Next you could try with a custom egg_info command (if I understood right, more or less all commands will run egg_info at some point), but I haven't looked much into it and it might be more tricky than it looks to get all the cases right.
Also look at the setuptools documentation on "Extending and Reusing Setuptools" for more ideas where to hook up your custom code.
Finally you might have better luck with setuptools-scm and in particular its write_to option, either using it directly or looking at its code for inspiration.
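For reference, a minimal sketch of the setuptools-scm route (the write_to target file is a placeholder; note that setuptools-scm writes a version derived from the git state rather than the raw commit hash):
from setuptools import setup

setup(
    name='jumbo_deploy',
    use_scm_version={'write_to': 'jumbo_deploy/_version.py'},
    setup_requires=['setuptools_scm'],
)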
I had a computer vision project on Ubuntu 14.04 with ROS Indigo and Python 3, then I had to move to Ubuntu 16.04 with ROS Kinetic. Here I ran into multiple issues:
1) I installed OpenCV, but couldn't import it in Python 3; the error message was:
Traceback (most recent call last):
File "test.py", line 2, in <module>
import cv2
ImportError: /opt/ros/kinetic/lib/python2.7/dist-packages/cv2.so:
undefined symbol: PyCObject_Type
What I figured out is to just rename cv2.so:
cd /opt/ros/kinetic/lib/python2.7/dist-packages/
sudo mv cv2.so cv2_ros.so
Then I was able to import cv2 and use it.
2) Then I couldn't import rospy, but installing python3-catkin-pkg-modules and python3-rospkg-modules fixed that problem.
3) Finally I ran into an issue with cv_bridge; it says:
[ERROR] [1520780674.845066]: bad callback: <bound method ViewsBuffer.update of <__main__.ViewsBuffer object at 0x7f5f45a07f28>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
cb(msg)
File "test.py", line 48, in update
im = self.bridge.imgmsg_to_cv2(im, "bgr8")
File "/opt/ros/kinetic/lib/python2.7/dist-packages/cv_bridge/core.py", line 163, in imgmsg_to_cv2
dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/cv_bridge/core.py", line 99, in encoding_to_dtype_with_channels
return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))
File "/opt/ros/kinetic/lib/python2.7/dist-packages/cv_bridge/core.py", line 91, in encoding_to_cvtype2
from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)
I believe the problem is in the cv_bridge_boost.so file.
I also tried to build cv_bridge from https://github.com/ros-perception/vision_opencv, but it automatically builds for Python 2.7. I tried to modify CMakeLists.txt a bit to specify Python 3 there, but I don't have much experience with CMakeLists, so it didn't work. I also tried to copy the cv_bridge module to my project folder, but that didn't change much; it still points at that cv_bridge_boost.so file.
Another thing to mention is that cv_bridge works fine with Python 2.7, but my project requires Python 3.5.
You are right, you should build cv_bridge with Python 3.
You can do it by passing
-DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.5m.so
arguments to cmake, for example as in the sketch below.
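For a plain out-of-source CMake build of the cv_bridge sources, that would look roughly like this (a sketch; the build directory layout is an assumption):
mkdir build && cd build
cmake -DPYTHON_EXECUTABLE=/usr/bin/python3 \
      -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m \
      -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.5m.so ..
make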
Or, if you are using catkin to build packages, you can follow these steps:
# `python-catkin-tools` is needed for catkin tool
# `python3-dev` and `python3-catkin-pkg-modules` are needed to build cv_bridge
# `python3-numpy` and `python3-yaml` are cv_bridge dependencies
# `ros-kinetic-cv-bridge` is needed to install a lot of cv_bridge deps. Probably you already have it installed.
sudo apt-get install python-catkin-tools python3-dev python3-catkin-pkg-modules python3-numpy python3-yaml ros-kinetic-cv-bridge
# Create catkin workspace
mkdir catkin_workspace
cd catkin_workspace
catkin init
# Instruct catkin to set cmake variables
catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.5m.so
# Instruct catkin to install built packages into the install space, i.e. the $CATKIN_WORKSPACE/install folder
catkin config --install
# Clone cv_bridge src
git clone https://github.com/ros-perception/vision_opencv.git src/vision_opencv
# Find version of cv_bridge in your repository
apt-cache show ros-kinetic-cv-bridge | grep Version
Version: 1.12.8-0xenial-20180416-143935-0800
# Checkout right version in git repo. In our case it is 1.12.8
cd src/vision_opencv/
git checkout 1.12.8
cd ../../
# Build
catkin build cv_bridge
# Extend environment with new package
source install/setup.bash --extend
And
$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from cv_bridge.boost.cv_bridge_boost import getCvType
>>>
If you encounter the following error:
CMake Error at /usr/share/cmake-3.5/Modules/FindBoost.cmake:1677 (message):
Unable to find the requested Boost libraries.
Boost version: 1.58.0
Boost include path: /usr/include
Could not find the following Boost libraries:
boost_python3
No Boost libraries were found. You may need to set BOOST_LIBRARYDIR to the
directory containing Boost libraries or BOOST_ROOT to the location of
Boost.
Call Stack (most recent call first):
CMakeLists.txt:11 (find_package)
This is because CMake tries to find the libboost_python3.so library, but on Ubuntu it is called libboost_python-py35.so (/usr/lib/x86_64-linux-gnu/libboost_python-py35.so), so you should change the line
find_package(Boost REQUIRED python3)
to
find_package(Boost REQUIRED python-py35)
in the file src/vision_opencv/cv_bridge/CMakeLists.txt and rebuild the package.
I have anaconda3 installed on my system, and I faced a similar issue while using cv_bridge. When I try to build using catkin build cv_bridge, it throws an error. I use Python 3.7 from conda. My catkin config command is:
catkin config -DPYTHON_EXECUTABLE=/home/akashbaskaran/anaconda3/bin/python3 -DPYTHON_INCLUDE_DIR=/home/akashbaskaran/anaconda3/include/python3.7m -DPYTHON_LIBRARY=/home/akashbaskaran/anaconda3/lib/libpython3.7m.so
Solution:
There are a couple of things I did, and the issue got resolved.
Since anaconda is being used, the executable and include directories should be the ones inside your current virtual environment: -DPYTHON_EXECUTABLE=/home/akashbaskaran/anaconda3/envs/tf/bin/python3.6 -DPYTHON_INCLUDE_DIR=/home/akashbaskaran/anaconda3/envs/tf/include/python3.6m -DPYTHON_LIBRARY=/home/akashbaskaran/anaconda3/envs/tf/lib/libpython3.6m.so
I was having build issues when I tried catkin build cv_bridge. If you face a similar issue, delete all folders except src, then run catkin_make from the terminal (make sure you are inside catkin_workspace).
Source the current workspace: source devel/setup.bash
Now
import cv2
from cv_bridge.boost.cv_bridge_boost import getCvType
should work without an error.
My basic environment: Ubuntu 18.04 and ROS Melodic.
I am compiling cv_bridge in conda's virtual environment (Python 3.7), and I have the same problem as @Jibril. When I run catkin build cv_bridge, I get 1 package successful and 3 packages skipped/blacklisted.
The command I use is
# 1
catkin init
# 2
catkin config \
-DPYTHON_EXECUTABLE=/home/zed/anaconda3/envs/yolov5/bin/python3.7 \
-DPYTHON_INCLUDE_DIR=/home/zed/anaconda3/envs/yolov5/include/python3.7m \
-DPYTHON_LIBRARY=/home/zed/anaconda3/envs/yolov5/lib/libpython3.7m.so
# 3
catkin config --install
# 4
catkin build cv_bridge
Then I found the error message about the --install-layout parameter by looking at the log file in the {ws}/log folder.
Referring to the solution in the issue, I added -DSETUPTOOLS_DEB_LAYOUT=OFF to the original command and finally finished compiling.
The command I modified is
# 1
catkin init
# 2
catkin config \
-DPYTHON_EXECUTABLE=/home/zed/anaconda3/envs/yolov5/bin/python3.7 \
-DPYTHON_INCLUDE_DIR=/home/zed/anaconda3/envs/yolov5/include/python3.7m \
-DPYTHON_LIBRARY=/home/zed/anaconda3/envs/yolov5/lib/libpython3.7m.so \
-DSETUPTOOLS_DEB_LAYOUT=OFF
# 3
catkin config --install
# 4
catkin build cv_bridge
When you need to use the compiled cv_bridge in another ROS workspace, you only need to execute source {PATH_cv_bridge_ws}/install/setup.bash --extend, then source {PATH_current_ws}/devel/setup.bash.
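In practice, the sourcing order in the shell where you launch your nodes looks like this (the workspace paths are placeholders, and the last line is just a quick sanity check):
source {PATH_cv_bridge_ws}/install/setup.bash --extend
source {PATH_current_ws}/devel/setup.bash
python3 -c "from cv_bridge.boost.cv_bridge_boost import getCvType"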
If the above solution doesn't work for you, you can also try some of the alternatives to cv_bridge proposed in this issue.
While trying to make my scons-based build system as platform-independent as possible, I was wondering about the following:
Why does python2 return the include path /usr/local/include/python2.7? This path does not contain Python.h, and building fails if I rely on that path.
python2
Use sysconfig inside python2:
$ /usr/bin/python2
Python 2.7.13 (default, Nov 23 2017, 15:37:09)
[GCC 6.3.0 20170406] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sysconfig
>>> sysconfig.get_path('include')
'/usr/local/include/python2.7'
This gives /usr/local/include/python2.7, which is an empty folder.
Call python2-config from shell:
$ /usr/bin/python2-config --includes
-I/usr/include/python2.7 -I/usr/include/x86_64-linux-gnu/python2.7
This gives a different path. I was able to find Python.h in /usr/include/python2.7.
python3
Use sysconfig inside python3:
$ /usr/bin/python3
Python 3.5.3 (default, Nov 23 2017, 11:34:05)
[GCC 6.3.0 20170406] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sysconfig
>>> sysconfig.get_path('include')
'/usr/include/python3.5m'
Call python3-config from shell:
/usr/bin/python3-config --includes
-I/usr/include/python3.5m -I/usr/include/python3.5m
The resulting path /usr/include/python3.5m is the same for both approaches. Python.h is located there.
anaconda
If I use anaconda python (2 or 3) the paths are also consistent (as with python3).
I already found some workarounds, such as placing a symlink to /usr/include inside /usr/local/include or just stripping the local/ from the path, but neither really looks like a nice solution.
edit:
Currently the incorrect include path in python2 makes my build system not as platform-independent as I would like. Adding an (optional) environment variable PYTHON_INCLUDE_PATH gives me the possibility to define the correct path if python2 is used. But a solution where the correct path is always returned would help me a lot (either on the Python side or using scons-specific functionality).
As my build system is based on scons, a scons-specific solution would also be fine.
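To illustrate the kind of lookup involved, here is a SConstruct sketch (the module name, source file, and environment-variable fallback are illustrative, not the actual build files):
# SConstruct (sketch)
import os
import sysconfig

# fall back to an explicit override when the interpreter reports an unusable path
include_path = os.environ.get('PYTHON_INCLUDE_PATH') or sysconfig.get_path('include')

env = Environment(CPPPATH=[include_path])
env.SharedLibrary('mymodule', ['mymodule.c'])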
Python uses an installation scheme that differs depending on the
platform and on the installation options.
https://docs.python.org/2/library/sysconfig.html#installation-paths
There are various schemes including posix_local and posix_prefix which determine the location of various installation directories. It seems that sysconfig does not actually record which scheme was used to install the particular build of Python -- it only has the build information.
So when you call sysconfig.get_path() it guesses the scheme based on the default for the current platform [1]. The Python2.7 sysconfig guesses posix_local while the Python3 sysconfig guesses posix_prefix [2].
It looks like both versions of Python are installed using the posix_prefix scheme, so you can specify that when you call sysconfig.get_path:
$ python -c "import sysconfig; print(sysconfig.get_path('include', 'posix_prefix'))"
/usr/include/python2.7
$ python3 -c "import sysconfig; print(sysconfig.get_path('include', 'posix_prefix'))"
/usr/include/python3.5m
[1] https://github.com/python/cpython/blob/2.7/Lib/sysconfig.py#L169
[2] Run sysconfig as a script:
$ python -m sysconfig | head
Platform: "linux-x86_64"
Python version: "2.7"
Current installation scheme: "posix_local"
$ python3 -m sysconfig | head
Platform: "linux-x86_64"
Python version: "3.5"
Current installation scheme: "posix_prefix"
I can't actually find posix_local in the sysconfig source, so I'm not sure where it comes from.
EDIT
I looked into this a bit more and learned that it is specific to the Debian/Ubuntu builds of Python; upstream Python doesn't use /usr/local/ or have a posix_local scheme. The Debian packages use a kind of hybrid approach, which is the same as posix_prefix with the addition of /usr/local/ for modules.
I haven't found a link to the source online, but my local system has this in its Python 2.7 sysconfig.py (notice the FIXME):
def _get_default_scheme():
    if os.name == 'posix':
        # the default scheme for posix on Debian/Ubuntu is posix_local
        # FIXME: return dist-packages/posix_prefix only for
        #   is_default_prefix and 'PYTHONUSERBASE' not in os.environ and 'real_prefix' not in sys.__dict__
        #   is_default_prefix = not prefix or os.path.normpath(prefix) in ('/usr', '/usr/local')
        return 'posix_local'
    return os.name
The Debian python3 sysconfig.py does away with posix_local and uses the same default as upstream python:
def _get_default_scheme():
    if os.name == 'posix':
        # the default scheme for posix is posix_prefix
        return 'posix_prefix'
    return os.name
You might want to replicate that to be compatible with Mac or Windows:
sysconfig.get_path('include', 'posix_prefix' if os.name == 'posix' else os.name)
https://wiki.debian.org/Python#Deviations_from_upstream
https://www.debian.org/doc/packaging-manuals/python-policy/python.html#paths