What does it mean to assert an object in Python? - pytorch

We are getting an assertion error on the line "assert dataset". When we print the dataset object, the value we get is '<datasets.TextDataset object at 0x000002531F10E408>'. We are using Python 3.7. Why are we getting an AssertionError on the dataset object?
We are basically trying to run the code of AttnGAN (https://github.com/taoxugit/AttnGAN).
The error happens on line 130 of 'code/main.py'.
Code
dataset = TextDataset(cfg.DATA_DIR, split_dir, base_size=cfg.TREE.BASE_SIZE, transform=image_transform)
print(dataset)
assert dataset
dataloader = torch.utils.data.DataLoader(dataset, batch_size=cfg.TRAIN.BATCH_SIZE, drop_last=True, shuffle=bshuffle, num_workers=int(cfg.WORKERS))
Output
Load from: C:\Users\admin\Desktop\TextToImage\AttnGAN-master (1)\AttnGAN-master\code/../data/birds/captions.pickle
<datasets.TextDataset object at 0x000002531F10E408>
Traceback (most recent call last):
File "./code/main.py", line 131, in
assert dataset
AssertionError
PS C:\Users\admin\Desktop\TextToImage\AttnGAN-master (1)\AttnGAN-master>

In this case, assert dataset is a not-very-clear way of checking whether the dataset is empty: assert raises an AssertionError if the expression (here, the dataset object) evaluates to false.
https://docs.python.org/3/library/stdtypes.html, under "Truth Value Testing", says:
By default, an object is considered true unless its class defines either a __bool__() method that returns False or a __len__() method that returns zero
Looking at the GitHub repo, TextDataset does define __len__(). The logical conclusion is that the length of the dataset in your case (after it is loaded) is zero.
Look at where it is loading data from, make sure the data is actually there, and print the length before the assertion. Bonus: figure out why the loading doesn't raise an exception but quietly succeeds and produces an empty dataset.
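To see why a perfectly real object can still fail an assert, here is a minimal, self-contained sketch (the stub class below is ours, not AttnGAN's):
class TextDatasetStub:
    """Stand-in for datasets.TextDataset; only __len__ matters here."""
    def __init__(self, examples):
        self.examples = examples
    def __len__(self):
        return len(self.examples)

empty = TextDatasetStub([])
print(empty)              # <__main__.TextDatasetStub object at 0x...> - a real object
assert empty is not None  # passes: the object exists
print(len(empty))         # 0
assert empty              # AssertionError: len() == 0 makes the object falsy
So adding print(len(dataset)) right before the failing assert will most likely show 0, which points back at the data loading.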

Related

How to get parameter names in Python's format function?

func = "Hello {name}. how are you doing {time}!".format
For example, let's assume func is defined as above. We don't have the definition of func at hand, but we have an instance of it. How can I get all the arguments to this function? Apparently inspect.getargspec(func) does not work here.
If I just run it with no parameters, it reports one missing parameter at a time, but I don't know how to get them all directly:
func()
-------
KeyError Traceback (most recent call last)
<ipython-input-228-8d7b4527e81d> in <module>
----> 1 func()
KeyError: 'name'
What exactly are you trying to do? What is your expected output? As far as I know, there is no way to pull out name and time after you define func, because "string".format(...) can't reference variables that are not defined; as far as I know, the call happens first, independent of the formatting style (f-string, %, .format(), whatever you use).
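For completeness, here is one way to recover the field names from such a bound format method, a sketch assuming func really is str.format bound to a template string:
import string

func = "Hello {name}. how are you doing {time}!".format

# A bound method keeps a reference to its instance, so the template
# string itself is reachable via __self__.
template = func.__self__

# string.Formatter().parse yields (literal_text, field_name,
# format_spec, conversion) tuples, one per replacement field.
names = [field for _, field, _, _ in string.Formatter().parse(template) if field]
print(names)  # ['name', 'time']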

TFX Pipeline Error While Executing TFMA: AttributeError: 'NoneType' object has no attribute 'ToBatchTensors'

Basically I only reused code from the iris utils and iris pipeline examples, with a minor change to the serving input:
def _get_serve_tf_examples_fn(model, tf_transform_output):
    model.tft_layer = tf_transform_output.transform_features_layer()
    feature_spec = tf_transform_output.raw_feature_spec()
    print(feature_spec)
    feature_spec.pop(_LABEL_KEY)

    @tf.function
    def serve_tf_examples_fn(*args):
        parsed_features = {}
        for arg in args:
            parsed_features[arg.name.split(":")[0]] = arg
        print(parsed_features)
        transformed_features = model.tft_layer(parsed_features)
        return model(transformed_features)

    return serve_tf_examples_fn
def run_fn(fn_args: TrainerFnArgs):
    ...
    feature_spec = tf_transform_output.raw_feature_spec()
    feature_spec.pop(_LABEL_KEY)
    inputs = [tf.TensorSpec(shape=[None, 1],
                            dtype=feature_spec[f].dtype,
                            name=f) for f in feature_spec]
    signatures = {
        'serving_default':
            _get_serve_tf_examples_fn(model, tf_transform_output)
            .get_concrete_function(*inputs),
    }
    model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
The original get_concrete_function() input in the iris code is just a single TensorSpec with dtype string. I tried serving the model with that exact input, but when I tested the REST API I got a parsing error. So I changed the serving input so it can receive JSON input like this:
{"instances": [{"feat1": 90, "feat2": 23.8, "feat3": 12}]}
When I run the pipeline, training succeeds, but then an error occurs while running the evaluator component. These are the latest logs:
INFO:absl:Using ./tfx/pipelines/toilet_native_keras/Trainer/model/67/serving_model_dir as candidate model.
INFO:absl:Using ./tfx/pipelines/toilet_native_keras/Trainer/model/14/serving_model_dir as baseline model.
INFO:absl:The 'example_splits' parameter is not set, using 'eval' split.
INFO:absl:Evaluating model.
INFO:absl:We decided to produce LargeList and LargeBinary types.
WARNING:tensorflow:5 out of the last 5 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7f0e44560> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:6 out of the last 6 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7c77f8a70> triggered tf.function retracing. [same retracing warning as above]
...
Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1213, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 570, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/model_util.py", line 466, in process
result = self._batch_reducible_process(element)
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/batched_predict_extractor_v2.py", line 164, in _batch_reducible_process
self._tensor_adapter.ToBatchTensors(record_batch), input_names)
AttributeError: 'NoneType' object has no attribute 'ToBatchTensors'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 256, in _execute
response = task()
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 313, in <lambda>
lambda: self.create_worker().do_instruction(request), request)
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 483, in do_instruction
getattr(request, request_type), request.instruction_id)
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 518, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 983, in process_bundle
element.data)
File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 219, in process_encoded
self.output(decoded_value)
File "apache_beam/runners/worker/operations.py", line 330, in apache_beam.runners.worker.operations.Operation.output
...
File "apache_beam/runners/common.py", line 1294, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "/usr/local/lib/python3.7/site-packages/future/utils/__init__.py", line 446, in raise_with_traceback
raise exc.with_traceback(traceback)
File "apache_beam/runners/common.py", line 1213, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 570, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/model_util.py", line 466, in process
result = self._batch_reducible_process(element)
File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/batched_predict_extractor_v2.py", line 164, in _batch_reducible_process
self._tensor_adapter.ToBatchTensors(record_batch), input_names)
AttributeError: 'NoneType' object has no attribute 'ToBatchTensors' [while running 'ExtractEvaluateAndWriteResults/ExtractAndEvaluate/ExtractBatchPredictions/Predict']
...
WARNING:tensorflow:7 out of the last 7 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7f0273050> triggered tf.function retracing. [same retracing warning as above]
WARNING:tensorflow:8 out of the last 8 calls to <function recreate_function.<locals>.restored_function_body at 0x7fa7c77fc170> triggered tf.function retracing. [same retracing warning as above]
I don't think the evaluator component has anything to do with the serving input function, as it just compares the newly trained model with the latest published model, so where did I go wrong?
So in the end I was mistaken about the evaluator component, or more precisely about TFMA: it does use the serving input function defined in the serving signatures. According to this link, the default signature used by the TFMA EvalConfig is "serving_default", which describes the serving model input as serialized examples. That's why, when I changed the input signature to something other than string, TFMA raised an exception.
I think this signature is not meant to be used when serving the model through a REST API, but because the "serving_default" signature is still needed and I am not in the mood to tinker with the EvalConfig, I created another signature which receives the JSON input that I want. For that to work, I needed to make another function decorated with @tf.function. That's all. I hope my answer helps people who struggle with similar problems.
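A minimal sketch of that idea (the names serve_serialized_fn, serve_json_fn, and 'serving_json' are illustrative, not from TFX): keep a serialized-example function under 'serving_default' so TFMA is happy, and register the per-feature function from the question under a second name for REST clients:
# Hypothetical setup: serve_serialized_fn parses serialized tf.Examples,
# serve_json_fn is the per-feature @tf.function from the question.
signatures = {
    # TFMA's EvalConfig looks up 'serving_default' and feeds it
    # serialized tf.Examples, so the string-based function stays here.
    'serving_default':
        serve_serialized_fn.get_concrete_function(
            tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')),
    # REST clients call this one with {"instances": [{"feat1": ..., ...}]}.
    'serving_json':
        serve_json_fn.get_concrete_function(*inputs),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)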

Get method's name using __getattribute__ without type error

I'm trying to print a method's name using __getattribute__,
but I get a TypeError every time I call the method, and the method is not executed. Is there any way to get rid of the TypeError and have the method executed?
class Person(object):
    def __init__(self):
        super()

    def test(self):
        print(1)

    def __getattribute__(self, attr):
        print(attr)

p = Person()
p.test()
The above code gives the error
test
Traceback (most recent call last):
File "test1.py", line 15, in <module>
p.test()
TypeError: 'NoneType' object is not callable
Is there any way to print just the method's name without causing the error?
I tried to catch the TypeError inside the __getattribute__ method, but it doesn't work.
Another question: why does it say 'NoneType' object is not callable here?
Thank you!
PS. I know I can catch the error where I call the method; I mean, is there any way to handle this error inside __getattribute__? My goal is to print the method's name every time a method is called.
Answering your second question first: why does it say NoneType is not callable?
When you call p.test(), Python tries to look up the test attribute of the p instance. It calls the __getattribute__ method that you've overridden, which prints 'test' and then returns. Because you're not returning anything, it implicitly returns None. p.test is therefore None, and calling it gives the error you see.
So how do we fix this? Once you've printed the attribute name, you need to return the attribute you're after. You can't just call getattr(self, attr) or you'll end up in an infinite loop so you have to call the method that would have been called if you hadn't overridden it.
def __getattribute__(self, attr):
    print(attr)
    return super().__getattribute__(attr)  # calls the original method
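Putting it together, the fixed class prints the attribute name and still runs the method:
class Person:
    def test(self):
        print(1)

    def __getattribute__(self, attr):
        print(attr)                            # log every attribute access
        return super().__getattribute__(attr)  # then do the normal lookup

p = Person()
p.test()
# Output:
# test
# 1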

Getting Python error, "TypeError: 'NoneType' object is not callable" SOMETIMES

Not very new to programming or to Python, but incredibly green at using pyunit. I need to use it for my new job, and I keep getting this error, but only sometimes when the suite runs. My code is below.
import unittest
from nose_parameterized import parameterized
from CheckFromFile import listFileCheck, RepresentsFloat

testParams = listFileCheck()

class TestSequence(unittest.TestCase):
    @parameterized.expand(testParams)
    def test_sequence(self, name, a, b):
        if RepresentsFloat(a):
            self.assertAlmostEqual(a, b, 2)
        else:
            self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()
What is happening here is that my test case pulls the method listFileCheck from another class. It reads values from the serial port communicating with the control board and compares them with a calibration file, putting the control-board values in a multidimensional array along with the calibration-file values. These values can be str, int, or float; a sketch of the expected shape follows.
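For illustration only (listFileCheck itself isn't shown), parameterized.expand wants an iterable of argument tuples, one (name, a, b) triple per generated test, so testParams presumably looks something like:
# Hypothetical shape of listFileCheck()'s return value, inferred from
# the description above; the names and values are made up.
testParams = [
    ("voltage", 3.3012, 3.30),   # floats go through assertAlmostEqual
    ("mode", "AUTO", "AUTO"),    # strings go through assertEqual
]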
I used the test case to compare the values to one another; however, I keep getting this error, but only sometimes. Roughly every third run fails with it.
Error
Traceback (most recent call last):
File "C:\Python34\lib\unittest\case.py", line 57, in testPartExecutor
yield
File "C:\Python34\lib\unittest\case.py", line 574, in run
testMethod()
TypeError: 'NoneType' object is not callable
Process finished with exit code 0
Anyone know why I might be getting this error on occasion?

How can I use pytest.raises with multiple exceptions?

I'm testing code where one of two exceptions can be raised: MachineError or NotImplementedError. I would like to use pytest.raises to make sure that at least one of them is raised when I run my test code, but it only seems to accept one exception type as an argument.
This is the signature for pytest.raises:
raises(expected_exception, *args, **kwargs)
I tried using the or keyword inside a context manager:
with pytest.raises(MachineError) or pytest.raises(NotImplementedError):
    verb = Verb("donner<IND><FUT><REL><SG><1>")
    verb.conjugate()
but I assume this only checks whether the first pytest.raises is None and sets the second one as the context manager if it is.
Passing multiple exceptions as positional arguments doesn't work, because pytest.raises takes its second argument to be a callable. Every subsequent positional argument is passed as an argument to that callable.
From the documentation:
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
Passing the exceptions as a list doesn't work either:
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
with pytest.raises([MachineError, NotImplementedError]):
File "/usr/local/lib/python3.4/dist-packages/_pytest/python.py", line 1290, in raises
raise TypeError(msg % type(expected_exception))
TypeError: exceptions must be old-style classes or derived from BaseException, not <class 'list'>
Is there a workaround for this? It doesn't have to use a context manager.
Pass the exceptions as a tuple to raises:
with pytest.raises((MachineError, NotImplementedError)):
    verb = ...
In the source for pytest, pytest.raises may:
catch expected_exception; or
pass expected_exception to a RaisesContext instance, which then uses issubclass to check whether the exception was one you wanted.
In Python 3, except statements can take a tuple of exceptions. The issubclass function can also take a tuple. Therefore, using a tuple should be acceptable in either situation.
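As a self-contained illustration (the Verb class isn't shown, so this uses a stand-in exception and function):
import pytest

class MachineError(Exception):
    """Stand-in for the real MachineError, for illustration."""

def conjugate():
    raise NotImplementedError("future tense not supported")

def test_conjugate_raises():
    # The tuple works because pytest checks the raised exception the
    # same way an `except (A, B):` clause would.
    with pytest.raises((MachineError, NotImplementedError)):
        conjugate()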
