Anaconda env: cupti64_100.dll not found

# output metadata at the end of each epoch
print('\nAdding run metadata for epoch ' + str(batch_idx) + '\n')
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
summary, _ = sess.run([merged, train_step],
                      feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 1.0},  # test_xs, test_ys
                      options=run_options,
                      run_metadata=run_metadata)
train_writer.add_run_metadata(run_metadata, 'step%03d' % batch_idx)
train_writer.add_summary(summary, batch_idx)
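Running this produces the following log output: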
2019-11-10 16:07:00.007157: I tensorflow/core/profiler/lib/profiler_session.cc:174] Profiler session started.
2019-11-10 16:07:00.012641: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'cupti64_100.dll'; dlerror: cupti64_100.dll not found
2019-11-10 16:07:00.018396: W tensorflow/core/profiler/lib/profiler_session.cc:182] Encountered error while starting profiler: Unavailable: CUPTI error: CUPTI could not be loaded or symbol could not be found.
2019-11-10 16:07:00.108662: I tensorflow/core/platform/default/device_tracer.cc:641] Collecting 0 kernel records, 0 memcpy records.
2019-11-10 16:07:00.114275: E tensorflow/core/platform/default/device_tracer.cc:68] CUPTI error: CUPTI could not be loaded or symbol could not be found.
W1110 16:07:00.226252 11440 deprecation_wrapper.py:119] From training.py:284: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
My setup:
- Python 3.7.3
- cudatoolkit 10.0.130
- cudnn 7.6.0
- tensorflow 1.14.0
- tensorflow-gpu 1.14.0
I am using an Anaconda environment.
Please help me: what should I do?

Providing the solution here (Answer Section), even though it is present in the Comment Section, for the benefit of the community.
Installing NVIDIA CUDA 10.0.130 and copying cuDNN 7.6.0 into C:\Program Files resolved the issue.
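The missing cupti64_100.dll ships with the CUDA Toolkit but sits in a directory Windows does not search by default. As a sketch of a workaround (the path below is the default CUDA 10.0 install location and is an assumption, adjust it to your installation), the PATH can be extended before TensorFlow is imported so the profiler can resolve the DLL:
import os

# Default CUPTI location for CUDA 10.0 on Windows; an assumption, adjust to your install
cupti_dir = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\extras\CUPTI\libx64"
os.environ["PATH"] = cupti_dir + os.pathsep + os.environ["PATH"]

import tensorflow as tf  # import only after PATH is extended so the DLL resolves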


MantaFlow with deep-fluids, Target numpy array of source grid not matching

I have just installed MantaFlow with byungsook's deep-fluids. This is running on Python 3.7, as that is the last version TensorFlow 1.15 works on; I am also using Linux Mint.
When I try to run the file from the README of the GitHub repo, it says:
:~/Documents/409/deep-fluids$ ../mantaflow/build/manta ./scene/smoke_pos_size.py
Version: mantaflow 0.13 64bit fp1 omp commit 1f380bb004709a69918acdffa0864034be9954f0 from Nov 15 2022, 01:07:57
Loading script './scene/smoke_pos_size.py'
2022-11-15 01:46:03.544265: arguments
log_dir: data/smoke_pos21_size5_f200
num_param: 3
path_format: %d_%d_%d.npz
p0: src_x_pos
p1: src_radius
p2: frames
num_src_x_pos: 21
min_src_x_pos: 0.2
max_src_x_pos: 0.8
src_y_pos: 0.1
num_src_radius: 5
min_src_radius: 0.04
max_src_radius: 0.12
num_frames: 200
min_frames: 0
max_frames: 199
num_simulations: 21000
resolution_x: 96
resolution_y: 128
buoyancy: -0.004
bWidth: 1
open_bound: False
time_step: 0.5
adv_order: 2
clamp_mode: 2
./scene/smoke_pos_size.py:146: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
v_range = [np.finfo(np.float).max, np.finfo(np.float).min]
start generation
scenes: 0%| | 0/105 [00:00<?, ?it/s]
Warning: boundaryWidth and openBounds parameters in AdvectSemiLagrange plugin are deprecated (and have no more effect), please remove.
Warning: boundaryWidth and openBounds parameters in AdvectSemiLagrange plugin are deprecated (and have no more effect), please remove.
ICP/mICP pre-conditioning only supported in 3D for now, disabling it.
Error in copyGridToArrayMAC
sim: 0%| | 0/200 [00:00<?, ?it/s]
scenes: 0%| | 0/105 [00:00<?, ?it/s]
Traceback (most recent call last):
File "./scene/smoke_pos_size.py", line 258, in <module>
main()
File "./scene/smoke_pos_size.py", line 198, in main
copyGridToArrayMAC(vel, v_)
RuntimeError: The dimensions of source grid (96, 128, 1) and target numpy array (3, 96, 128) do not match!
Error raised in /home/luke/Documents/409/mantaflow/source/plugin/numpyconvert.cpp:122
Script finished.
I have checked the setup files and everything is in order, and mantaflow on its own works just fine. I have tried force-changing the resolution in the options, but this just outputs a wrongly sized resolution box (see the attached screenshot of the actual output; a second screenshot shows the output I was expecting).
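As an aside, the NumPy deprecation warning in the log spells out its own fix. A minimal sketch of the one-line change in scene/smoke_pos_size.py (this silences the warning but does not by itself explain the grid/array shape mismatch):
import numpy as np

# np.float was a deprecated alias for the builtin float; pin a concrete dtype instead
v_range = [np.finfo(np.float64).max, np.finfo(np.float64).min]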

Failed build Yocto Gatesgarth "extensible SDK" (eSDK) - populate_sdk_ext fail

I'm working with Yocto "Gatesgarth" on a custom board based on i.MX6ULL.
I'm facing some problems in generating the extensible SDK (eSDK).
The generation of the normal SDK completes correctly.
Below are some details.
Details of system:
Board based on NXP i.MX6ULL
Yocto version "Gatesgarth 3.2.4 (May 2021)"
BB_VERSION = "1.48.0",
NATIVELSBSTRING = "ubuntu-18.04"
DISTRO_VERSION = "5.10-gatesgarth"
meta-qt5 is present
Build environment based on Docker Container
Environment variables:
File: conf/local.conf
SDKMACHINE ?= 'x86_64'
File: test-image-mx6ull.bb
inherit core-image
inherit populate_sdk_qt5
inherit populate_sdk_ext
SDK_EXT_TYPE = "minimal"
SDK_INCLUDE_TOOLCHAIN = "1"
SDK_INCLUDE_PKGDATA = "0"
SDK_INCLUDE_NATIVESDK = "1"
The command executed is :
bitbake test-image-mx6ull -c populate_sdk_ext
Output:
ERROR: test-image-mx6ull-1.0-r0 do_populate_sdk_ext: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk_ext(d)
0003:
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 720, function: do_populate_sdk_ext
0716: bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting
SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
0717:
0718: d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
0719: if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
*** 0720: buildtools_fn = get_current_buildtools(d)
0721: else:
0722: buildtools_fn = None
0723: d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
0724: d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 556, function: get_current_buildtools
0552: import glob
0553: btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
0554: btfiles.sort(key=os.path.getctime)
0555: print("MY-DEBUG - btfiles = {} - SDK_DEPLOY = {}".format(btfiles, d.getVar('SDK_DEPLOY')))
*** 0556: return os.path.basename(btfiles[-1])
0557:
0558:def get_sdk_required_utilities(buildtools_fn, d):
0559: """Find required utilities that aren't provided by the buildtools"""
0560: sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
Exception: IndexError: list index out of range
DEBUG: Python function do_populate_sdk_ext finished
MY-DEBUG - btfiles = [] - SDK_DEPLOY = /yocto/build-mX6ull/tmp/deploy/sdk
Question:
In line 553 the list btfiles should be filled,
but it is empty, so line 556 raises the exception.
I have no idea what is wrong, what I have forgotten, or which Yocto environment variables need to be set for this to work correctly.
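To confirm what the class sees, the same lookup can be reproduced outside bitbake. A minimal sketch using the deploy directory from the MY-DEBUG output above:
import glob
import os

# Reproduce the glob from get_current_buildtools() in populate_sdk_ext.bbclass
sdk_deploy = "/yocto/build-mX6ull/tmp/deploy/sdk"  # taken from the debug line above
btfiles = glob.glob(os.path.join(sdk_deploy, "*-buildtools-nativesdk-standalone-*.sh"))
print(btfiles)  # empty here, which is why btfiles[-1] raises IndexError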
Hope you are doing well.
I had a similar issue where I couldn't populate the eSDK;
it's all in the GLIBC version.
Kindly update your GLIBC version.
In my case I had to update the GLIBC version to 2.33 in the "yocto-uninative.inc" file. It worked for me!
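For reference, a sketch of what the update in meta/conf/distro/include/yocto-uninative.inc looks like; the release directory and checksum are placeholders and must be copied from a newer poky release, not invented:
# Sketch only; take the real URL and checksums from a newer poky release
UNINATIVE_MAXGLIBCVERSION = "2.33"
UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/<matching-release>/"
UNINATIVE_CHECKSUM[x86_64] ?= "<sha256 from that uninative release>"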

TVM fails compiling a pytorch model in dense_strategy_cpu

I made and trained a pytorch v1.4 model that predicts a sin() value (based on an example found on the web). Inference works. I then tried to compile it with TVM v0.8dev0 and llvm 10 on Ubuntu with an x86 CPU. I followed the TVM setup guide and ran some tutorials for ONNX that do work.
I mainly used existing TVM tutorials to figure out the procedure below. Note that I'm neither an ML nor a data-science engineer. These were my steps:
import tvm, torch, os
from tvm import relay

state = torch.load("/home/dude/tvm/tst_state.pt")  # load the trained pytorch state
import tst
m = tst.Net()
m.load_state_dict(state)  # init the model with its trained state
m.eval()
sm = torch.jit.trace(m, torch.tensor([3.1415 / 4]))  # convert to a scripted model
# the model only takes 1 input for inference, hence [("input0", (1,))]
mod, params = tvm.relay.frontend.from_pytorch(sm, [("input0", (1,))])
mod.astext()  # returns the module as a small Relay script (text)
with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target="llvm", target_host="llvm", params=params)
The last line gives me the error below, and I don't know how to solve it or where I went wrong. I hope that someone can pinpoint my mistake ...
... removed some lines here ...
[bt] (3) /home/dude/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f5cd65660af]
[bt] (2) /home/dude/tvm/build/libtvm.so(+0xb4f8a7) [0x7f5cd5f318a7]
[bt] (1) /home/dude/tvm/build/libtvm.so(tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1ab) [0x7f5cd5f315cb]
[bt] (0) /home/tvm/build/libtvm.so(+0x1180cab) [0x7f5cd6562cab]
File "/home/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 81, in cfun
rv = local_pyfunc(*pyargs)
File "/home/tvm/python/tvm/relay/op/strategy/x86.py", line 311, in dense_strategy_cpu
m, _ = inputs[0].shape
ValueError: not enough values to unpack (expected 2, got 1)
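The unpack failure suggests Relay's x86 dense strategy expects a rank-2 (batch, features) input, while the model was traced with a rank-1 tensor of shape (1,). A sketch of a possible workaround, assuming tst.Net() is a plain stack of Linear layers that also accepts a batched input (an assumption, untested against the actual model):
import torch
import tvm
from tvm import relay

# Trace with a 2-D example so dense_strategy_cpu can unpack `m, _ = inputs[0].shape`
example = torch.tensor([[3.1415 / 4]])  # shape (1, 1) instead of (1,)
sm = torch.jit.trace(m, example)        # `m` is the trained tst.Net() from above
mod, params = relay.frontend.from_pytorch(sm, [("input0", (1, 1))])
with tvm.transform.PassContext(opt_level=1):
    lib = relay.build(mod, target="llvm", target_host="llvm", params=params)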

Bazel Error After Upgrading Nodejs Rules - ERROR: defs.bzl has been removed from build_bazel_rules_nodejs

After upgrading build_bazel_rules_nodejs from 0.42.2 to 1.0.1 I get this error:
ERROR: /home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl:19:5: Traceback (most recent call
last):
File "/home/flolu/Desktop/minimal-bazel-monorepo/services/server/src/BUILD", line 76
nodejs_image(name = "server", <2 more arguments>)
File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/io_bazel_rules_docker/nodejs/image.bzl", line 112, in nodejs_image
nodejs_binary(name = binary, <2 more arguments>)
File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl", line 19, in nodejs_binary
fail(<1 more arguments>)
ERROR: defs.bzl has been removed from build_bazel_rules_nodejs
Please update your load statements to use index.bzl instead.
See https://github.com/bazelbuild/rules_nodejs/wiki#migrating-off-build_bazel_rules_nodejsdefsbzl for help.
ERROR: error loading package 'services/server/src': Package 'services/server/src' contains errors
INFO: Elapsed time: 0.119s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded)
FAILED: Build did NOT complete successfully (1 packages loaded)
Line 76 in the error refers to this part of the BUILD file:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "server",
data = [":lib"],
entry_point = ":index.ts",
)
But there is no defs.bzl. So I am confused by the error.
So in detail I have upgraded from
http_archive(
name = "build_bazel_rules_nodejs",
sha256 = "16fc00ab0d1e538e88f084272316c0693a2e9007d64f45529b82f6230aedb073",
urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/0.42.2/rules_nodejs-0.42.2.tar.gz"],
)
to
http_archive(
name = "build_bazel_rules_nodejs",
sha256 = "e1a0d6eb40ec89f61a13a028e7113aa3630247253bcb1406281b627e44395145",
urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/1.0.1/rules_nodejs-1.0.1.tar.gz"],
)
You can recreate the error by cloning this repo: https://github.com/flolude/minimal-bazel-monorepo/tree/48add7ddcad4d25e361e1c7f7f257cf916a797b2 and running
bazel test //services/server/src:test
There are some breaking changes between those versions of build_bazel_rules_nodejs. Namely, this import path:
load("@build_bazel_rules_nodejs//:defs.bzl", <whatever>)
needs to become this:
load("@build_bazel_rules_nodejs//:index.bzl", <whatever>)
You also need to update io_bazel_rules_docker to at least v0.13.0; from the release notes, that is the version compatible with rules_nodejs 1.0.1. https://github.com/bazelbuild/rules_docker/releases/
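For reference, a sketch of the matching WORKSPACE change for rules_docker; the sha256 is left as a placeholder to be taken from the v0.13.0 release page, and the asset URL should be verified there as well:
http_archive(
    name = "io_bazel_rules_docker",
    sha256 = "<sha256 from the v0.13.0 release>",
    urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.13.0/rules_docker-v0.13.0.tar.gz"],
)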

Julia Interpolation Error with $-sign in Pkg.dir() Path in Pkg.build()

Using a Pkg.dir() path with the following string representation:
"\\\\SUBDNxyz.companynet.ch\\Service\$\\Programme\\"
I get an error whenever I run Pkg.build():
ERROR: syntax: invalid interpolation syntax: "$\"
ERROR: Build process failed.
Stacktrace:
[1] build!(::Array{String,1}, ::Set{Any}, ::String) at .\pkg\entry.jl:629
[2] build!(::Array{String,1}, ::Dict{Any,Any}, ::Set{Any}) at .\pkg\entry.jl:637
[3] build(::Array{String,1}) at .\pkg\entry.jl:652
[4] (::Base.Pkg.Dir.##3#6{Array{Any,1},Base.Pkg.Entry.#build,Tuple{Array{String,1}}})() at .\pkg\dir.jl:33
[5] cd(::Base.Pkg.Dir.##3#6{Array{Any,1},Base.Pkg.Entry.#build,Tuple{Array{String,1}}}, ::String) at .\file.jl:59
[6] withenv(::Base.Pkg.Dir.##2#5{Array{Any,1},Base.Pkg.Entry.#build,Tuple{Array{String,1}},String}, ::Pair{String,String}, ::Vararg{Pair{String,String},N} where N) at .\env.jl:157
[7] #cd#1(::Array{Any,1}, ::Function, ::Function, ::Array{String,1}, ::Vararg{Array{String,1},N} where N) at .\pkg\dir.jl:32
[8] build(::String, ::Vararg{String,N} where N) at .\pkg\pkg.jl:254
How can I get around this error? Is there a way to represent $ differently than with "\$"?
julia> versioninfo()
Julia Version 0.6.3
Commit d55cadc350* (2018-05-28 20:20 UTC)
DEBUG build
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.9.1 (ORCJIT, haswell)
EDIT:
The problem does not occur for all packages; e.g. Pkg.build("Missings") works fine, while packages such as "PyCall" or "DataFrames" cause the error. In addition, the representation of the string doesn't matter; I also tried raw"\\SUBDNxyz.companynet.ch\Service$\Programme" etc.
