When I open an HDF5 file file.h5 with h5py and check for a certain key, the following does not work:
found = "data" in h5File.keys()  # warning on this line
if found:
    a = h5File["data"][...]
and spits out the following warning:
HDF5-DIAG: Error detected in HDF5 (1.8.15-patch1) thread 0:
  #000: /cluster/home/nuetzig/installHDF5/hdf5-1.8.15-patch1/src/H5Gdeprec.c line 893 in H5Gget_objinfo(): not a location
    major: Invalid arguments to routine
    minor: Inappropriate type
  #001: /cluster/home/nuetzig/installHDF5/hdf5-1.8.15-patch1/src/H5Gloc.c line 173 in H5G_loc(): invalid file ID
    major: Invalid arguments to routine
    minor: Bad value
However, making a set from h5File.keys() (which is of type KeysViewWithLock(<HDF5 file "file.h5" (mode r)>)) works:
found = "data" in set(h5File.keys())
if found:
    a = h5File["data"][...]
Does anybody have an idea where the problem is?
I should mention that the file resides on parallel network storage (Lustre) on a cluster...
Note: "data" in h5File does not work either.
Problem solved:
I reinstalled h5py manually by
python3 setup.py configure --hdf5-version=1.8.15
python3 setup.py install
Somehow the version did not get set and it used 1.8.4; I don't know why...
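To verify which HDF5 library the rebuilt h5py is actually linked against, you can check its version info (a quick sanity check, not part of the original fix):
import h5py
# version of the HDF5 library h5py was compiled against
print(h5py.version.hdf5_version)
# version of h5py itself
print(h5py.version.version)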
WARNING: Ignoring invalid distribution -ip (c:\users\sangay sherpa\appdata\local\programs\python\python37\lib\site-packages)
WARNING: Error parsing requirements for tensorflow-gpu: [Errno 2] No such file or directory: 'c:\users\sangay sherpa\appdata\local\programs\python\python37\lib\site-packages\tensorflow_gpu-2.1.0.dist-info\METADATA'
There are some broken packages in your folder c:\users\sangay sherpa\appdata\local\programs\python\python37\lib\site-packages. When a pip operation is interrupted, a package such as ip can be left behind renamed to ~ip, and pip then reports it as an invalid distribution.
You can delete these leftover folders or move them into a separate folder.
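A small sketch for locating these leftovers (the site-packages path is taken from the warning above; review the output before deleting anything):
import glob
import os

# path from the pip warning above
site = r"c:\users\sangay sherpa\appdata\local\programs\python\python37\lib\site-packages"

# folders whose names start with "~" are remnants of interrupted pip operations
for path in glob.glob(os.path.join(site, "~*")):
    print(path)  # inspect the list, then delete or move these folders manually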
I run an automated Python job on an EMR cluster that updates Amazon Athena tables.
It was running well until a few days ago (on Python 2.7 and 3.7). Here is the script:
from pyathenajdbc import connect
import yaml


config = yaml.load(open('athena-config.yaml', 'r'))
statements = config['statements']
staging_dir = config['staging_dir']

conn = connect(s3_staging_dir=staging_dir,
               region_name='eu-west-1')
try:
    with conn.cursor() as cursor:
        for statement in statements:
            cursor.execute(statement)
finally:
    conn.close()
The athena-config.yaml has a staging directory and a few Athena statements.
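For reference, a minimal athena-config.yaml with the two keys the script reads might look like this (the bucket and the statement are placeholders):
staging_dir: s3://example-bucket/athena-staging/
statements:
  - MSCK REPAIR TABLE example_db.example_table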
Here is the Error:
You are using pip version 9.0.3, however version 19.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Unrecognized option: -server
create_tables.py:5: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(open('athena-config.yaml', 'r'))
/mnt/conda/lib/python3.7/site-packages/jpype/_core.py:210: UserWarning:
-------------------------------------------------------------------------------
Deprecated: convertStrings was not specified when starting the JVM. The default
behavior in JPype will be False starting in JPype 0.8. The recommended setting
for new code is convertStrings=False. The legacy value of True was assumed for
this session. If you are a user of an application that reported this warning,
please file a ticket with the developer.
-------------------------------------------------------------------------------
""")
Traceback (most recent call last):
  File "create_tables.py", line 10, in <module>
    region_name='eu-west-1')
  File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/__init__.py", line 69, in connect
    driver_path, log4j_conf, **kwargs)
  File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/connection.py", line 68, in __init__
    self._start_jvm(jvm_path, jvm_options, driver_path, log4j_conf)
  File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/util.py", line 25, in _wrapper
    return wrapped(*args, **kwargs)
  File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/connection.py", line 97, in _start_jvm
    jpype.startJVM(jvm_path, *args)
  File "/mnt/conda/lib/python3.7/site-packages/jpype/_core.py", line 219, in startJVM
    _jpype.startup(jvmpath, tuple(args), ignoreUnrecognized, convertStrings)
RuntimeError: Unable to start JVM
  at loadJVM(native/common/jp_env.cpp:169)
  at loadJVM(native/common/jp_env.cpp:179)
  at startup(native/python/pyjp_module.cpp:159)
As far as I understand it, the issue is the deprecated convertStrings. Can anyone help me resolve that? I also cannot understand why this """) appears before the traceback, and what changed in the past few days to break the code. Thanks!
Got the same issue today. Try downgrading JPype1 to 0.6.3. JPype1 0.7.0 was released today, and it is not compatible with the old interfaces.
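Assuming pip manages the environment, pinning the version would look like:
pip install JPype1==0.6.3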
The issue appears to be that the package is calling the JVM with an unrecognized argument, -server. The previous version ignored those sorts of errors, allowing things to proceed. To get the same behavior with 0.7.0, the flag ignoreUnrecognized would need to be set to True. This likely needs to be reported to pyathenajdbc to correct the defect that placed the bogus argument into startJVM in the first place.
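For illustration, this is roughly what such a call looks like in JPype 0.7.0 (a sketch of the flag only, not a patch for pyathenajdbc itself):
import jpype

# ignoreUnrecognized=True restores the pre-0.7.0 behavior of silently
# dropping JVM flags the runtime does not recognize, such as -server
jpype.startJVM(jpype.getDefaultJVMPath(), '-server', ignoreUnrecognized=True)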
Looking at the source, the -server flag is hardcoded into the module:
if not jpype.isJVMStarted():
    _logger.debug('JVM path: %s', jvm_path)
    args = [
        '-server',
        '-Djava.class.path={0}'.format(driver_path),
        '-Dlog4j.configuration=file:{0}'.format(log4j_conf)
    ]
    if jvm_options:
        args.extend(jvm_options)
    _logger.debug('JVM args: %s', args)
    jpype.startJVM(jvm_path, *args)
    cls.class_loader = jpype.java.lang.Thread.currentThread().getContextClassLoader()
It assumes a particular JVM that accepts -server as an argument.
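You can check directly whether your JVM accepts that flag (a quick diagnostic, assuming java is on the PATH):
java -server -version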
I installed healthcareai following the instructions in the Read the Docs documentation:
pip install https://github.com/HealthCatalyst/healthcareai-py/zipball/master
After successfully running example_classification_1.py, I attempted to run example_classification_2.py. However, on example_classification_2.py, the code errored out at line 62:
factors = trained_model.make_factors(prediction_dataframe,
                                     number_top_features=3)
The error in the console was:
File "/home/jdoe/anaconda3/lib/python3.5/site-packages/healthcareai/common/top_factors.py", line 55, in top_k_features
results = list(step2.values[:, :k])
IndexError: too many indices for array
I went back and looked at the documentation for "Choosing a Prediction Output", which indicated the code should be run without the number_top_features argument:
factors = trained_model.make_factors(prediction_dataframe)
I did that, but it still threw an error.
Please advise. Thanks.
I'm trying to build Automotive Grade Linux, but I keep getting the same errors.
I tried building on different machines with different Linux distributions, but I get the errors nevertheless.
I also tried building for different target machines without success.
This is the full log of the last time I tried to build Automotive Grade Linux for a Raspberry Pi 2:
Pt 1: https://pastebin.com/0pqKDdv5
Pt 2: https://pastebin.com/dmRCtTLe
This is my build configuration:
BB_VERSION = "1.32.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "Debian-9.3"
TARGET_SYS = "arm-agl-linux-gnueabi"
MACHINE = "raspberrypi2"
DISTRO = "poky-agl"
DISTRO_VERSION = "4.0.2"
TUNE_FEATURES = "arm armv7ve vfp thumb neon vfpv4 callconvention-hard"
TARGET_FPU = "hard"
meta-raspberrypi = "HEAD:28d4404f89eb59d406b4976c0e3f5ca19137ba74"
meta-netboot = "HEAD:a5f69d3d31e7d75351f0e2abfc36d35ba39ce304"
meta-security-smack
meta-security-framework = "HEAD:20bbb97f6d5400b126ae96ef446c3e60c7e16285"
meta-app-framework = "HEAD:a5f69d3d31e7d75351f0e2abfc36d35ba39ce304"
meta-qt5 = "HEAD:5f837b47f5c3e462f24cd5abf58ff6ef1dd04932"
meta-agl-demo = "HEAD:65619655cb3f373a1b15da7a4d91191a2867464b"
meta-oe
meta-multimedia
meta-efl
meta-networking
meta-python
meta-filesystems = "HEAD:fe5c83312de11e80b85680ef237f8acb04b4b26e"
meta-ivi-common
meta-agl
meta-agl-distro
meta-agl-bsp = "HEAD:a5f69d3d31e7d75351f0e2abfc36d35ba39ce304"
meta
meta-poky = "HEAD:60402978fe3648bf560b3386c6e9dd661cdf2083"
This is the first error I get:
WARNING: qemu-native-2.7.0-r1 do_populate_sysroot: File '/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/openbios-ppc' from qemu-native was already stripped, this will prevent future debugging!
WARNING: qemu-native-2.7.0-r1 do_populate_sysroot: File '/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/openbios-sparc32' from qemu-native was already stripped, this will prevent future debugging!
WARNING: qemu-native-2.7.0-r1 do_populate_sysroot: File '/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/openbios-sparc64' from qemu-native was already stripped, this will prevent future debugging!
WARNING: qemu-native-2.7.0-r1 do_populate_sysroot: File '/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/s390-ccw.img' from qemu-native was already stripped, this will prevent future debugging!
WARNING: qemu-native-2.7.0-r1 do_populate_sysroot: File '/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/u-boot.e500' from qemu-native was already stripped, this will prevent future debugging!
ERROR: qemu-native-2.7.0-r1 do_populate_sysroot: runstrip: ''strip' --remove-section=.comment --remove-section=.note '/.../build/tmp/work/x86_64-linux/qemu-native/2.7.0-r1/sysroot-destdir/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/palcode-clipper'' strip command failed with 1 (b"strip: Unable to recognise the format of the input file `/.../build/tmp/work/x86_64-linux/qemu-native/2.7.0-r1/sysroot-destdir/.../build/tmp/sysroots/x86_64-linux/usr/share/qemu/palcode-clipper'\n")
This is the second error, which I think is caused by the first one.
ERROR: gcc-runtime-6.2.0-r0 do_compile: oe_runmake failed
ERROR: gcc-runtime-6.2.0-r0 do_compile: Function failed: do_compile (log file is located at /.../build/tmp/work/armv7vehf-neon-vfpv4-agl-linux-gnueabi/gcc-runtime/6.2.0-r0/temp/log.do_compile.30447)
ERROR: Logfile of failure stored in: /.../build/tmp/work/armv7vehf-neon-vfpv4-agl-linux-gnueabi/gcc-runtime/6.2.0-r0/temp/log.do_compile.30447
NOTE: recipe gcc-runtime-6.2.0-r0: task do_compile: Failed
ERROR: Task (/.../poky/meta/recipes-devtools/gcc/gcc-runtime_6.2.bb:do_compile) failed with exit code '1'
I found this thread of someone having the same issue, but there is no fix there, so I thought I'd ask again, because the issue still persists.
I can see that there is a problem with the stripping of files, but I can't find where to fix this in the recipes.
I'm not sure whether this error is caused by the host system or whether it depends on the recipes.
I hope someone knows a fix for this issue.
Thanks in advance,
Laurens Wuyts
EDIT:
Here is the log file of the gcc error: https://www.dropbox.com/s/4hd5kzgbadhdc9p/log.do_compile.30447?dl=0
Here is the specific error message on pastebin:
https://pastebin.com/H8M3sY6a
I'm trying to install OpenCV 2.4.9 on CentOS 7 (PC), but I get an error at 16% when running the make command. I left the default configuration for OpenCV.
make
...
[ 16%] Building CXX object modules/highgui/CMakeFiles/opencv_highgui.dir/src/cap_v4l.cpp.o
/opt/opencv-2.4.9/opencv/modules/highgui/src/cap_v4l.cpp:306:29: error: field ‘capability’ has incomplete type
    struct video_capability capability;
                            ^
/opt/opencv-2.4.9/opencv/modules/highgui/src/cap_v4l.cpp:307:29: error: field ‘captureWindow’ has incomplete type
    struct video_window captureWindow;
....
....
/opt/opencv-2.4.9/opencv/modules/highgui/src/cap_v4l.cpp: In function ‘void icvCloseCAM_V4L(CvCaptureCAM_V4L*)’:
/opt/opencv-2.4.9/opencv/modules/highgui/src/cap_v4l.cpp:2812:46: error: ‘CvCaptureCAM_V4L’ has no member named ‘memoryBuffer’
It seems that the define HAVE_CAMV4L has the value 1: look in the file modules/highgui/src/cap_v4l.cpp for the structure definition at line 306. If the compilation fails at that point, it means the video4linux development configuration is broken.
Using Google, I found that OpenCV Bug #1357 is described as follows:
CHECK_INCLUDE_FILE(linux/videodev.h HAVE_CAMV4L) succeeds even though linux/videodev.h doesn't exist on the system. (Bug #1357)
http://code.opencv.org/issues/1357
Anyway, the solution is described at the same URL under "HAVE_CAMV4L gets set incorrectly": "Setting it to FALSE in CMakeLists.txt fixes the problem".
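A one-line sketch of that workaround, assuming you override the variable in the top-level CMakeLists.txt after the check runs:
set(HAVE_CAMV4L FALSE)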