size mismatch + ragged tensor - python-3.x

I'm having a problem that I'm unable to solve. Has anyone had the same issue?
I'm using Windows 10 and TensorFlow 2. This is the command I ran:
python test.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 --dct_coefficient_count 10 --window_size_ms 40 --window_stride_ms 20 --checkpoint ../Pretrained_models/DS_CNN/DS_CNN_S/ckpt/ds_cnn_0.94_ckpt
Untarring speech_commands_v0.02.tar.gz...
Running testing on validation set...Traceback (most recent call last):
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py",
line 206, in wrapper
return target(*args, **kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\math_ops.py",
line 1838, in tensor_not_equals
return gen_math_ops.not_equal(self, other, incompatible_shape_error=False)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\ops\gen_math_ops.py", line 6573, in not_equal
ctx=_ctx)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\ops\gen_math_ops.py", line 6601, in not_equal_eager_fallback
_attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], ctx, [])
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\eager\execute.py",
line 280, in args_to_matching_eager
ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\eager\execute.py",
line 280, in
ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
return func(*args, **kwargs)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\framework\ops.py",
line 1566, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\framework\constant_op.py", line 339, in
_constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\framework\constant_op.py", line 265, in constant
allow_broadcast=True)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\framework\constant_op.py", line 276, in _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\framework\constant_op.py", line 301, in _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, type)
File "C:\Users\x\anaconda3\envs\newenvt\lib\site-
packages\tensorflow\python\framework\constant_op.py", line 98, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, type)
ValueError: TypeError: object of type 'RaggedTensor' has no len()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "test.py", line 182, in <module>
    test()
  File "test.py", line 48, in test
    val_data = audio_processor.get_data(audio_processor.Modes.VALIDATION).batch(FLAGS.batch_size)
  File "C:\Users\x\Dropbox\Documents\x\Coding\KWS\tflu-kws-cortex-m\Training\data.py", line 190, in get_data
    use_background = (self.background_data != []) and (mode == AudioProcessor.Modes.TRAINING)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 210, in wrapper
    result = dispatch(wrapper, args, kwargs)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 122, in dispatch
    result = dispatcher.handle(args, kwargs)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\ragged\ragged_dispatch.py", line 219, in handle
    ragged_tensor_shape.RaggedTensorDynamicShape.from_tensor(y))
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\ragged\ragged_tensor_shape.py", line 470, in broadcast_dynamic_shape
    shape_x = shape_x.broadcast_dimension(axis, shape_y.dimension_size(axis))
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\ragged\ragged_tensor_shape.py", line 351, in broadcast_dimension
    condition, data=broadcast_err, summarize=10)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\util\tf_should_use.py", line 247, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs),
  File "C:\Users\x\anaconda3\envs\newenvt\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 164, in Assert
    (condition, "\n".join(data_str)))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'Unable to broadcast: dimension size mismatch in dimension'
1
b'lengths='
0
b'dim_size='
1522930, 988891, 980062, 960000, 978488, 960000
Thank you

The issue was TensorFlow compatibility. Downgrading from TensorFlow 2.5 to 2.3 fixes the problem.
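If you'd rather stay on TensorFlow 2.5, another option is to avoid the (self.background_data != []) comparison in data.py that the traceback points at; in TF 2.5 that expression is dispatched to a ragged broadcasting op, which is what fails. A minimal sketch of such a check (the helper name is my own, and I'm assuming background_data is None, a plain Python list, or a tf.RaggedTensor):
import tensorflow as tf

def has_background_data(background_data) -> bool:
    # Hypothetical replacement for the `(self.background_data != [])` check:
    # tests emptiness without triggering TF's element-wise `!=` dispatch,
    # which is what fails on ragged data in TF 2.5.
    if background_data is None:
        return False
    if isinstance(background_data, tf.RaggedTensor):
        return int(background_data.nrows()) > 0
    return len(background_data) > 0

# e.g. in AudioProcessor.get_data():
# use_background = has_background_data(self.background_data) and (mode == AudioProcessor.Modes.TRAINING)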

Related

BERTopic: pop from empty list IndexError while Inferencing

I have trained a BERTopic model on Colab, and when I now try to use it locally I get an IndexError.
IndexError: Failed in nopython mode pipeline (step: analyzing bytecode)
pop from empty list
The code I used is:
from bertopic import BERTopic  # import needed for BERTopic.load below
from sentence_transformers import SentenceTransformer

sentence_model = SentenceTransformer('KBLab/sentence-bert-swedish-cased')
model = BERTopic.load('bertopic_model')
text = "my text here for example"
text = [text]
embeddings = sentence_model.encode(text)
topic, _ = model.transform(text, embeddings)
The last line gives me the error.
Noticeably, the same code works just fine on Colab; I'm not sure what's going on locally.
My numba and other related libraries are up to date, the same as on Colab.
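As a sanity check, here is a minimal sketch for dumping the relevant package versions in both environments so they can be diffed (the package list is my assumption of what matters for this stack):
import importlib.metadata as md

# Run this both on Colab and locally, then compare the output line by line.
for pkg in ("bertopic", "umap-learn", "numba", "scikit-learn", "sentence-transformers"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")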
Full Traceback:
Traceback (most recent call last):
File "/home/vaibhav/.local/lib/python3.10/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/vaibhav/.local/lib/python3.10/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/vaibhav/.local/lib/python3.10/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/vaibhav/.local/lib/python3.10/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "app.py", line 20, in reference_prediction
preds = data_process(input_api)
File "data_process.py", line 63, in data_process
topic, _ = topic_model_mi.transform(text, embeddings)
File "/home/vaibhav/.local/lib/python3.10/site-packages/bertopic/_bertopic.py", line 423, in transform
umap_embeddings = self.umap_model.transform(embeddings)
File "/home/vaibhav/.local/lib/python3.10/site-packages/umap/umap_.py", line 2859, in transform
dmat = pairwise_distances(
File "/home/vaibhav/.local/lib/python3.10/site-packages/sklearn/metrics/pairwise.py", line 2022, in pairwise_distances
return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
File "/home/vaibhav/.local/lib/python3.10/site-packages/sklearn/metrics/pairwise.py", line 1563, in _parallel_pairwise
return func(X, Y, **kwds)
File "/home/vaibhav/.local/lib/python3.10/site-packages/sklearn/metrics/pairwise.py", line 1607, in _pairwise_callable
out[i, j] = metric(X[i], Y[j], **kwds)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/dispatcher.py", line 487, in _compile_for_args
raise e
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/dispatcher.py", line 420, in _compile_for_args
return_val = self.compile(tuple(argtypes))
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/dispatcher.py", line 965, in compile
cres = self._compiler.compile(args, return_type)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/dispatcher.py", line 125, in compile
status, retval = self._compile_cached(args, return_type)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/dispatcher.py", line 139, in _compile_cached
retval = self._compile_core(args, return_type)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/dispatcher.py", line 152, in _compile_core
cres = compiler.compile_extra(self.targetdescr.typing_context,
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler.py", line 716, in compile_extra
return pipeline.compile_extra(func)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler.py", line 452, in compile_extra
return self._compile_bytecode()
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler.py", line 520, in _compile_bytecode
return self._compile_core()
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler.py", line 499, in _compile_core
raise e
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler.py", line 486, in _compile_core
pm.run(self.state)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler_machinery.py", line 368, in run
raise patched_exception
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler_machinery.py", line 356, in run
self._runPass(idx, pass_inst, state)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
return func(*args, **kwargs)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler_machinery.py", line 311, in _runPass
mutated |= check(pss.run_pass, internal_state)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/compiler_machinery.py", line 273, in check
mangled = func(compiler_state)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/untyped_passes.py", line 86, in run_pass
func_ir = interp.interpret(bc)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/interpreter.py", line 1321, in interpret
flow.run()
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/byteflow.py", line 107, in run
runner.dispatch(state)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/byteflow.py", line 282, in dispatch
fn(state, inst)
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/byteflow.py", line 1061, in _binaryop
rhs = state.pop()
File "/home/vaibhav/.local/lib/python3.10/site-packages/numba/core/byteflow.py", line 1344, in pop
return self._stack.pop()
IndexError: Failed in nopython mode pipeline (step: analyzing bytecode)
pop from empty list

PyG: RuntimeError: Tensors must have same number of dimensions: got 2 and 3

I am using TransformerConv and encountered this error:
Traceback (most recent call last):
  File "pipeline_model_gat.py", line 1018, in <module>
    output = model(
  File "/mount/arbeitsdaten61/studenten3/advanced-ml/2022/gogirlspower/nicole/conda/envs/new_gvqa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "pipeline_model_gat.py", line 881, in forward
    questions_encoded = self.question_encoder(question_graphs)
  File "/mount/arbeitsdaten61/studenten3/advanced-ml/2022/gogirlspower/nicole/conda/envs/new_gvqa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "pipeline_model_gat.py", line 628, in forward
    = self.conv1(x, question_graphs.edge_index, edge_attr)
  File "/mount/arbeitsdaten61/studenten3/advanced-ml/2022/gogirlspower/nicole/conda/envs/new_gvqa/lib/python3.8/site-packages/torch_geometric/nn/conv/transformer_conv.py", line 190, in forward
    beta = self.lin_beta(torch.cat([out, x_r, out - x_r], dim=-1))
RuntimeError: Tensors must have same number of dimensions: got 2 and 3
Can someone please tell me what could have gone wrong?
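One way to narrow this down (my own debugging sketch, not part of the original post) is to print the rank of everything entering conv1 right before the failing call; PyG's TransformerConv works on 2-D node features [num_nodes, in_channels] and 2-D edge attributes [num_edges, edge_dim], so a 3-D (batched) tensor at this point would line up with the "got 2 and 3" message:
# Hypothetical check inserted just above the `self.conv1(...)` call in forward();
# the variable names follow the traceback above.
print("x:", x.dim(), tuple(x.shape))
print("edge_index:", question_graphs.edge_index.dim(), tuple(question_graphs.edge_index.shape))
if edge_attr is not None:
    print("edge_attr:", edge_attr.dim(), tuple(edge_attr.shape))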

Error on odoo 12 after installation and db creation

I have to start using Odoo 12 at my job, but I can't get it working; I have already spent 5 days searching Google for an answer.
I would be very grateful if you could help me with some of your wisdom :)
After I install Odoo on my computer and visit localhost:8069 for the first time, it asks me to create my database. After I do that it doesn't load the login page; instead it gives me a 500 Internal Server Error, with this console log every time I refresh the page:
2020-10-12 18:31:41,068 21425 ERROR prueba werkzeug: Error on request:
Traceback (most recent call last):
File "/opt/odoo/odoo12/entvirt/lib/python3.8/site-packages/werkzeug/serving.py", line 205, in run_wsgi
execute(self.server.app)
File "/opt/odoo/odoo12/entvirt/lib/python3.8/site-packages/werkzeug/serving.py", line 193, in execute
application_iter = app(environ, start_response)
File "/opt/odoo/odoo12/entvirt/src/odoo/service/server.py", line 434, in app
return self.app(e, s)
File "/opt/odoo/odoo12/entvirt/src/odoo/service/wsgi_server.py", line 142, in application
return application_unproxied(environ, start_response)
File "/opt/odoo/odoo12/entvirt/src/odoo/service/wsgi_server.py", line 117, in application_unproxied
result = odoo.http.root(environ, start_response)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 1320, in __call__
return self.dispatch(environ, start_response)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 1293, in __call__
return self.app(environ, start_wrapped)
File "/opt/odoo/odoo12/entvirt/lib/python3.8/site-packages/werkzeug/wsgi.py", line 599, in __call__
return self.app(environ, start_response)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 1488, in dispatch
result = ir_http._dispatch()
File "/opt/odoo/odoo12/entvirt/src/addons/web_editor/models/ir_http.py", line 22, in _dispatch
return super(IrHttp, cls)._dispatch()
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_http.py", line 212, in _dispatch
return cls._handle_exception(e)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_http.py", line 182, in _handle_exception
return request._handle_exception(exception)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 776, in _handle_exception
return super(HttpRequest, self)._handle_exception(exception)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 314, in _handle_exception
raise pycompat.reraise(type(exception), exception, sys.exc_info()[2])
File "/opt/odoo/odoo12/entvirt/src/odoo/tools/pycompat.py", line 87, in reraise
raise value
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_http.py", line 208, in _dispatch
result = request.dispatch()
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 835, in dispatch
r = self._call_function(**self.params)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 346, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/opt/odoo/odoo12/entvirt/src/odoo/service/model.py", line 98, in wrapper
return f(dbname, *args, **kwargs)
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 342, in checked_call
result.flatten()
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 1270, in flatten
self.response.append(self.render())
File "/opt/odoo/odoo12/entvirt/src/odoo/http.py", line 1263, in render
return env["ir.ui.view"].render_template(self.template, self.qcontext)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_ui_view.py", line 1324, in render_template
return self.browse(self.get_view_id(template)).render(values, engine)
File "/opt/odoo/odoo12/entvirt/src/addons/web_editor/models/ir_ui_view.py", line 29, in render
return super(IrUiView, self).render(values=values, engine=engine, minimal_qcontext=minimal_qcontext)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_ui_view.py", line 1333, in render
return self.env[engine].render(self.id, qcontext)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_qweb.py", line 59, in render
result = super(IrQWeb, self).render(id_or_xml_id, values=values, **context)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/qweb.py", line 275, in render
self.compile(template, options)(self, body.append, values or {})
File "<decorator-gen-54>", line 2, in compile
File "/opt/odoo/odoo12/entvirt/src/odoo/tools/cache.py", line 93, in lookup
value = d[key] = self.method(*args, **kwargs)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/ir_qweb.py", line 114, in compile
return super(IrQWeb, self).compile(id_or_xml_id, options=options)
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/qweb.py", line 338, in compile
raise QWebException("Error when compiling AST", e, path, node and etree.tostring(node[0], encoding='unicode'), name)
odoo.addons.base.models.qweb.QWebException: Name node can't be used with 'None' constant
Traceback (most recent call last):
File "/opt/odoo/odoo12/entvirt/src/odoo/tools/cache.py", line 88, in lookup
r = d[key]
File "/opt/odoo/odoo12/entvirt/src/odoo/tools/func.py", line 69, in wrapper
return func(self, *args, **kwargs)
File "/opt/odoo/odoo12/entvirt/src/odoo/tools/lru.py", line 44, in __getitem__
a = self.d[obj].me
KeyError: ('ir.qweb', <function IrQWeb.compile at 0x7f3b32b84280>, 173, ('en_US', None, None, None, None, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/odoo/odoo12/entvirt/src/odoo/addons/base/models/qweb.py", line 330, in compile
unsafe_eval(compile(astmod, '<template>', 'exec'), ns)
ValueError: Name node can't be used with 'None' constant
Error when compiling AST
ValueError: Name node can't be used with 'None' constant
Template: 173
Path: /templates/t/t/form/input[2]
Node: <input type="hidden" name="redirect" t-att-value="redirect"/> - - -
This is a problem with Python 3.8.5. Try applying this fix: https://github.com/odoo/odoo/pull/55305/commits/5baf0f2130b8d27d50aa60b54d68a5fc57b127a0

Odoo : Error on the second creation of project deadline

In the Odoo framework I have all my projects created already, and now I'm stuck on project deadlines. When I want to create a project deadline, the first commit and operation passes successfully, but the second gives me the error below:
Traceback (most recent call last):
File "D:\Odoo 11.0\server\odoo\fields.py", line 936, in __get__
value = record.env.cache.get(record, self)
File "D:\Odoo 11.0\server\odoo\api.py", line 960, in get
value = self._data[field][record.id][key]
KeyError: <odoo.api.Environment object at 0x000000000B34F550>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Odoo 11.0\server\odoo\http.py", line 647, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "D:\Odoo 11.0\server\odoo\http.py", line 307, in _handle_exception
raise pycompat.reraise(type(exception), exception, sys.exc_info()[2])
File "D:\Odoo 11.0\server\odoo\tools\pycompat.py", line 87, in reraise
raise value
File "D:\Odoo 11.0\server\odoo\http.py", line 689, in dispatch
result = self._call_function(**self.params)
File "D:\Odoo 11.0\server\odoo\http.py", line 339, in _call_function
return checked_call(self.db, *args, **kwargs)
File "D:\Odoo 11.0\server\odoo\service\model.py", line 97, in wrapper
return f(dbname, *args, **kwargs)
File "D:\Odoo 11.0\server\odoo\http.py", line 332, in checked_call
result = self.endpoint(*a, **kw)
File "D:\Odoo 11.0\server\odoo\http.py", line 933, in __call__
return self.method(*args, **kw)
File "D:\Odoo 11.0\server\odoo\http.py", line 512, in response_wrap
response = f(*args, **kw)
File "D:\Odoo 11.0\server\odoo\addons\web\controllers\main.py", line 872, in search_read
return self.do_search_read(model, fields, offset, limit, domain, sort)
File "D:\Odoo 11.0\server\odoo\addons\web\controllers\main.py", line 894, in do_search_read
offset=offset or 0, limit=limit or False, order=sort or False)
File "D:\Odoo 11.0\server\odoo\models.py", line 4169, in search_read
result = records.read(fields)
File "D:\Odoo 11.0\server\odoo\models.py", line 2535, in read
values[name] = field.convert_to_read(record[name], record, use_name_get)
File "D:\Odoo 11.0\server\odoo\models.py", line 4688, in __getitem__
return self._fields[key].__get__(self, type(self))
File "D:\Odoo 11.0\server\odoo\fields.py", line 940, in __get__
self.determine_value(record)
File "D:\Odoo 11.0\server\odoo\fields.py", line 1051, in determine_value
self.compute_value(recs)
File "D:\Odoo 11.0\server\odoo\fields.py", line 1007, in compute_value
self._compute_value(records)
File "D:\Odoo 11.0\server\odoo\fields.py", line 998, in _compute_value
getattr(records, self.compute)()
File "D:\Odoo 11.0\server\odoo\addons\dev\models\delais.py", line 24, in _delai_arret
domaine = ('project_ids', '=', self.project_ids).id
File "D:\Odoo 11.0\server\odoo\fields.py", line 934, in __get__
record.ensure_one()
File "D:\Odoo 11.0\server\odoo\models.py", line 4296, in ensure_one
raise ValueError("Expected singleton: %s" % self)
ValueError: Expected singleton: delais.delais(2, 18)
This is the function that calculates the sum of the mission counts for each project; it's where I get the error. I couldn't figure out exactly where the problem is. Please help me get rid of it, and thank you for your support.
def _delai_arret(self):
    domaine = ('project_ids', '=', self.project_ids).id
    dict = self.env['mission.mission'].search_read([domaine], ['nbr_mission'])
    # print(dict)
    if not dict:
        pass
    else:
        somme = 0
        for key in dict:
            # print(key['nbr_mission'])
            somme = somme + key['nbr_mission'] / 30
            # print('la somme est : {somme}')
        self.delai_arr_mois = somme
        # print(somme)
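For comparison, here is a sketch of how a compute method like this is usually written so it doesn't hit the singleton error: iterate over the recordset and build a proper domain list per record (field names are taken from the snippet above; the exact domain operator is my assumption):
def _delai_arret(self):
    for record in self:
        # One domain tuple inside a list, using the current record's own projects.
        domain = [('project_ids', 'in', record.project_ids.ids)]
        missions = record.env['mission.mission'].search_read(domain, ['nbr_mission'])
        # Sum of nbr_mission / 30 over the matching missions (0.0 if there are none).
        record.delai_arr_mois = sum(m['nbr_mission'] / 30 for m in missions)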

Do I have to restart colaboratory runtime every time?

I cannot run my code in Google Colaboratory twice without restarting the runtime. Is there a way to run it again without a restart?
My code uses the TensorD library to compute an approximation of a random 3x4x4 tensor with the CP-ALS algorithm (this is an example taken from https://github.com/Large-Scale-Tensor-Decomposition/tensorD).
!git clone https://github.com/Large-Scale-Tensor-Decomposition/tensorD.git
import sys
import time
sys.path.append("/content/tensorD")
from tensorD.factorization.env import Environment
from tensorD.dataproc.provider import Provider
from tensorD.demo.DataGenerator import *
from tensorD.factorization.cp import CP_ALS

# generate a random tensor with shape 3x4x4
t = time.time()
X = synthetic_data_cp([3, 4, 4], 7)
data_provider = Provider()
data_provider.full_tensor = lambda: X
env = Environment(data_provider, summary_path='/tmp/cp_' + '7')
cp = CP_ALS(env)
args = CP_ALS.CP_Args(rank=7, validation_internal=1)
# build CP model with arguments
cp.build_model(args)
# train CP model with a maximum of 50 iterations
cp.train(50)
# obtain factor matrices from trained model
factor_matrices = cp.factors
# obtain scaling vector from trained model
lambdas = cp.lambdas
for matrix in factor_matrices:
    print(matrix)
elapsed = time.time() - t
print(elapsed)
When I run it the first time I have no problem. When I run it again (without restarting the runtime) I obtain:
CP model initial finish
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1333 try:
-> 1334 return fn(*args)
1335 except errors.OpError as e:
7 frames
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [3,4,4]
[[{{node Placeholder}}]]
During handling of the above exception, another exception occurred:
InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1346 pass
1347 message = error_interpolation.interpolate(message, self._graph)
-> 1348 raise type(e)(node_def, op, message)
1349
1350 def _extend_graph(self):
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [3,4,4]
[[node Placeholder (defined at /content/tensorD/tensorD/factorization/cp.py:69) ]]
Caused by op 'Placeholder', defined at:
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-4d2250ec007b>", line 21, in <module>
cp.build_model(args)
File "/content/tensorD/tensorD/factorization/cp.py", line 69, in build_model
input_data = tf.placeholder(tf.float32, shape=self._env.full_shape())
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 2077, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 5791, in placeholder
"Placeholder", dtype=dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [3,4,4]
[[node Placeholder (defined at /content/tensorD/tensorD/factorization/cp.py:69) ]]
Any help will be appreciated!
This seems to mostly be a question about TensorFlow. Do you get what you want outside of Colab? I'm not totally clear about what you are expecting, but import tensorflow as tf; tf.reset_default_graph() at the top of your snippet seems sensible and squelches the error.
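Concretely, that suggestion amounts to resetting the default graph before the CP model is rebuilt (a sketch, assuming the TF 1.x graph-mode setup that tensorD relies on here):
import tensorflow as tf

# Clear placeholders/ops left over from the previous run of the cell,
# so cp.build_model() starts from an empty default graph.
tf.reset_default_graph()
# On TensorFlow 2.x the equivalent call is tf.compat.v1.reset_default_graph().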
