onnxruntime built with OpenVINO fails at runtime with "Failed to load library" - openvino

My environment is Windows, and I want to run inference from Python using onnxruntime with OpenVINO. After installing OpenVINO, I built onnxruntime with OpenVINO support. My build command is:
.\build.bat --update --build --build_shared_lib --build_wheel --config RelWithDebInfo --cmake_generator "Visual Studio 16 2019" --use_openvino CPU_FP32 --parallel --skip_tests
No error happened during the build.
But when I import onnxruntime and use it for inference, an error occurs:
[E:onnxruntime:Default, provider_bridge_ort.cc:634 onnxruntime::ProviderLibrary::Get] Failed to load library, error code: 126
Also, the inference speed is very slow.
Can anyone tell me why?

Did you cross-check your config, layers, and topology? Not all of them are supported.
Note that on Windows, error code 126 means a required DLL could not be found, so the OpenVINO execution provider never loads and onnxruntime silently falls back to the default CPU provider, which would also explain the slow inference.
Here is some info.
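As a quick sanity check (a minimal sketch; the model path is a placeholder), make sure the OpenVINO runtime DLLs are on PATH in the shell you run Python from, for example by running OpenVINO's setupvars.bat first, and then confirm which providers actually attach to the session:

import onnxruntime as ort

# Providers compiled into this build; OpenVINOExecutionProvider should appear here
print(ort.get_available_providers())

# Request OpenVINO explicitly, with plain CPU as the fallback
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

# Providers actually attached to the session; if only CPUExecutionProvider
# shows up, the OpenVINO library failed to load and inference runs on the
# slower default CPU path
print(sess.get_providers())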

Related

`Undefined symbol: ossl_sha3_512_functions` when trying to build the Kadena chainweb-node Haskell project

I've been trying to compile the Kadena chainweb-node project from source (via the docs found here) in a docker container running the arm64v8/ubuntu base image on an Apple M1 host machine and I keep getting errors. Thanks to the super helpful folks in the #haskell IRC channel, I've made it over a number of small hurdles, but the build still fails with:
Undefined symbol: ossl_sha3_512_functions
I cannot find any resources on how to fix this, but one of the #haskell community members informed me that these symbols are not provided by the OpenSSL 3.0 API. That person created an issue here: https://github.com/larskuhtz/hs-hashes/issues/14
I've found that I can use the docker base image haskell:8 which comes preinstalled with ghc, cabal, and openssl 1.1.1 to cabal build chainweb-node successfully using a slightly different set of apt-get dependencies. However, when trying to cabal install the binaries so that I can run chainweb-node, I get:
Failed to build chainweb-2.14.1. The failure occurred during the configure
step.
Build log (
/root/.cabal/logs/ghc-8.10.7/chainweb-2.14.1-36aedf5adc1967eb17358e6434b6fd51bc7e64082f6c12e0df40ddafece6ff69.log
):
[1 of 1] Compiling Main ( /tmp/cabal-install.-838/dist-newstyle/tmp/src-838/chainweb-2.14.1/dist/setup/setup.hs, /tmp/cabal-install.-838/dist-newstyle/tmp/src-838/chainweb-2.14.1/dist/setup/Main.o )
Linking /tmp/cabal-install.-838/dist-newstyle/tmp/src-838/chainweb-2.14.1/dist/setup/setup ...
Configuring chainweb-2.14.1...
setup: Encountered missing or private dependencies:
criterion -any,
data-ordlist >=0.4.7,
resource-pool >=0.2,
retry >=0.7,
statistics >=0.15,
tasty-json >=0.1
cabal: Failed to build chainweb-2.14.1. See the build log above for details.
I've tried cabal install criterion and the other missing deps, one by one, but the cabal install gives me the same errors.
Can someone tell me what I'm doing wrong and help me get a working chainweb-node binary built from source and installed using docker?

"gnu/stubs-o32_hard.h: No such file or directory" golang cgo mipsel embedded chip system fatal error:

1. Background: I have a project to develop an embedded camera system. The system uses a 32-bit MIPS chip, and I have the toolchain on Ubuntu. Go version: 1.17.7
2. My question:
2.1) A pure Go cross-compile for the mipsle platform builds and works fine. The command I use is:
GOOS=linux GOARCH=mipsle CGO_ENABLED=0 go build main.go
2.2) But I was given a third-party library as xxx.so and xxx.h files. After wiring it into my Go project with cgo, the cross-compile for the MIPS system fails.
3. What I have tried:
3.1) I know Go does not cross-compile cgo code by default, so I tried the usual way of pointing CC at the cross toolchain:
GOOS=linux GOARCH=mipsle CGO_ENABLED=1 CC=/opt/mips-gcc720-glibc226/bin/mips-linux-gnu-gcc go build main.go
result:
# runtime/cgo
In file included from /opt/mips-gcc720-glibc226/mips-linux-gnu/libc/usr/include/features.h:447:0,
from /opt/mips-gcc720-glibc226/mips-linux-gnu/libc/usr/include/bits/libc-header-start.h:33,
from /opt/mips-gcc720-glibc226/mips-linux-gnu/libc/usr/include/stdlib.h:25,
from _cgo_export.c:3:
/opt/mips-gcc720-glibc226/mips-linux-gnu/libc/usr/include/gnu/stubs.h:11:11: fatal error: gnu/stubs-o32_hard.h: No such file or directory
# include <gnu/stubs-o32_hard.h>
^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
3.2) I also tried gccgo; on Ubuntu I ran:
go build -compiler=mipsel-linux-gnu-gccgo main.go
result:
invalid value "mipsel-linux-gnu-gccgo" for flag -compiler: unknown compiler "mipsel-linux-gnu-gccgo"
usage: go build [-o output] [build flags] [packages]
Run 'go help build' for details.
3.3) I also tried the xgo project, which does produce an output binary, but when I move it to the mipsle system it still doesn't work.
So, can anybody help? If you need the toolchain to test, I can send it to you.
Contact me: justforjobonly#126.com
Thanks!
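One direction worth trying (an assumption based on the missing header's name, not verified against this particular toolchain): gnu/stubs-o32_hard.h is the hard-float o32 ABI stub, and Go defaults to GOMIPS=hardfloat, so if this toolchain's libc only ships soft-float stubs, forcing soft-float on both the Go side and the C side may get past the error, roughly:
GOOS=linux GOARCH=mipsle GOMIPS=softfloat CGO_ENABLED=1 CGO_CFLAGS="-msoft-float" CC=/opt/mips-gcc720-glibc226/bin/mips-linux-gnu-gcc go build main.go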

"error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory" when running TRTorch sample

I'm trying to compile my PyTorch model with the TRTorch engine.
I've installed TRTorch according to this link.
When the sample code is run (with the command below, from this link), the following error arises:
sudo bazel run //cpp/trtorchexec -- $(realpath /home/TRTorch/tests/modules/alexnet_scripted.jit.pt) "(1,3,227,227)"
error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory
Also, the LD_LIBRARY_PATH is set correctly.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT/TensorRT-7.0.0.11/lib
More info:
TRTorch: latest version (python package and binary)
TensorRT: 7.0.0.11
Pytorch: 1.5.1
CUDA: 10.2
Python: 3.6
I asked this question on the TRTorch GitHub and fixed it by passing LD_LIBRARY_PATH on the sudo command line itself (sudo resets the environment by default, so the export above does not reach the bazel run):
sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/TensorRT/TensorRT-7.0.0.11/lib bazel run //cpp/trtorchexec $(realpath tests/models/alexnet_traced.jit.pt) "(32 3 227 227)"
The issue is available here.
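An alternative that avoids threading the variable through sudo each time (a generic approach, not taken from the original issue; the path is the TensorRT directory quoted above) is to register the library directory with the dynamic linker cache:
echo "/home/TensorRT/TensorRT-7.0.0.11/lib" | sudo tee /etc/ld.so.conf.d/tensorrt.conf
sudo ldconfig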

Unable to build Tensorflow from source MacOS High Sierra

I've followed all the steps in the official guide. Except I built it using:
$ bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=- msse4.1 --copt=-msse4.2 --config=opt -k //tensorflow/tools/pip_package:build_pip_package
And during ./configure I've set the right paths and disabled Google Cloud Platform, Hadoop, XLA, VERBS, OpenCL, CUDA, and MPI support.
Hardware:
Macbook Pro 13 inch (mid 2014)
CPU: Intel Core i5 (4278U)
RAM: 8GB
Software:
High Sierra (10.13.2)
Clang Version: clang-900.0.39.2
Bazel Version: 0.9.0
Conda Version: 4.4.3
Python: 3.6.3
All the packages are up to date. This worked perfectly fine 2 months ago on this machine. For some strange reason it doesn't build anymore. I'm posting just part of the error list here:
WARNING: Config values are not defined in any .rc file: opt
ERROR: Skipping 'msse4.1': no such target '//:msse4.1': target 'msse4.1' not declared in package '' defined by /Users/rakshithgb/Documents/Tensorflow/tensorflow/BUILD
WARNING: Target pattern parsing failed.
ERROR: /private/var/tmp/_bazel_rakshithgb/fde7bc60972656b0c2db4fd0b79e24fb/external/com_googlesource_code_re2/BUILD:96:1: First argument of 'load' must be a label and start with either '//', ':', or '@'. Use --incompatible_load_argument_is_label=false to temporarily disable this check.
ERROR: /private/var/tmp/_bazel_rakshithgb/fde7bc60972656b0c2db4fd0b79e24fb/external/com_googlesource_code_re2/BUILD:98:1: name 're2_test' is not defined (did you mean 'ios_test'?)
ERROR: /private/var/tmp/_bazel_rakshithgb/fde7bc60972656b0c2db4fd0b79e24fb/external/com_googlesource_code_re2/BUILD:100:1: name 're2_test' is not defined (did you mean 'ios_test'?)
And it ends like this:
ERROR: /Users/rakshithgb/Documents/Tensorflow/tensorflow/tensorflow/core/kernels/BUILD:550:1: Target '@local_config_sycl//sycl:using_sycl' contains an error and its package is in error and referenced by '//tensorflow/core/kernels:debug_ops'
WARNING: errors encountered while analyzing target '//tensorflow/tools/pip_package:build_pip_package': it will not be built
INFO: Analysed target //tensorflow/tools/pip_package:build_pip_package (203 packages loaded).
INFO: Found 0 targets...
ERROR: command succeeded, but there were errors parsing the target pattern
INFO: Elapsed time: 12.763s, Critical Path: 0.02s
FAILED: Build did NOT complete successfully
Has anyone else had this issue? How do I fix it? I've uploaded the entire error log to the GitHub TensorFlow issue page: #15622
OK, it looks like the new Bazel version isn't compatible with the current TensorFlow release, and the fix will be issued in the next release, according to this thread on GitHub: #15492
The temporary fix that worked for me was to build it using --incompatible_load_argument_is_label=false in the bazel command. So my build command now looks like this:
$ bazel build --config=opt --incompatible_load_argument_is_label=false //tensorflow/tools/pip_package:build_pip_package

Cannot install accelerate-cuda in Haskell

I am on a linux box and trying to experiment with Haskell's Accelerate library but having problems installing it. I have successfully installed the accelerate package but there seems to be a dependency problem, which I have detailed below.
cabal: Error: some packages failed to install:
accelerate-cuda-0.14.0.0 depends on haskell-src-exts-1.14.0.1 which failed to
install.
cuda-0.5.1.1 failed during the configure step. The exception was:
ExitFailure 1
haskell-src-exts-1.14.0.1 failed during the configure step. The exception was:
ExitFailure 1
haskell-src-meta-0.6.0.5 depends on haskell-src-exts-1.14.0.1 which failed to
install.
language-c-quote-0.7.6 depends on haskell-src-exts-1.14.0.1 which failed to
install.
I searched SO and noticed someone else had a similar issue installing the cuda package, which was resolved by adding the cabal bin path to PATH; I tried this but it didn't solve my problem.
Could someone please help, as I am really keen to play with this fantastic library.
I wanted to check out accelerate-examples and play with them. I also didn't have a CUDA GPU (AMD only), and this is how I eventually installed accelerate-examples with stack:
git clone https://github.com/AccelerateHS/accelerate-examples
cd accelerate-examples
#choose version:
ln stack-8.6.yaml stack.yaml
#build without CUDA targeting:
stack build --flag accelerate-examples:-llvm-ptx --flag accelerate-fft:-llvm-ptx
The installation will build all the examples and print info about where they were put.
You might also need to specify the GHC libs path with something like: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.stack/programs/x86_64-linux/ghc-8.6.5/lib/ghc-8.6.5/rts/
I also checked that accelerate itself installs the same way, even without specifying any flags (I guess because it doesn't build any programs yet?), but the examples are what's fun :)
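Once the build finishes, the examples can be run through stack, for instance (the executable name here is only an illustration; the build output above lists the actual ones):
stack exec accelerate-nbody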
P.S. You can move llvm-ptx flags to stack.yaml config: change # flags: {} line to:
flags:
  accelerate-fft:
    llvm-ptx: false
  accelerate-examples:
    llvm-ptx: false

Resources