tensorflow object detection for our own objects - python-3.x

I am using tensorflow 1.9 for custom object detection and followed same steps as in https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#preparing-workspace.
But when training the model, I am getting the following error.
(tensorflow_cpu) C:\Users\Z004032A\Documents\Tensorflow\workspace\training_demo>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_inception_v2_coco.config
WARNING:tensorflow:From C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\platform\app.py:125: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1460, in _ConsumeSingleByteString
result = text_encoding.CUnescape(text[1:-1])
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_encoding.py", line 115, in CUnescape
.decode('unicode_escape')
UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 250, in new_func
return func(*args, **kwargs)
File "train.py", line 93, in main
FLAGS.pipeline_config_path)
File "C:\Users\Z004032A\Documents\Tensorflow\models\research\object_detection\utils\config_util.py", line 100, in get_configs_from_pipeline_file
text_format.Merge(proto_str, pipeline_config)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 685, in Merge
allow_unknown_field=allow_unknown_field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 752, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 777, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 799, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 924, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 998, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 924, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 998, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 924, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1049, in _MergeScalarField
value = tokenizer.ConsumeString()
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1420, in ConsumeString
the_bytes = self.ConsumeByteString()
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1435, in ConsumeByteString
the_list = [self._ConsumeSingleByteString()]
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1462, in _ConsumeSingleByteString
raise self.ParseError(str(e))
google.protobuf.text_format.ParseError: 170:17 : ' input_path: "C:\Users\Z004032A\Documents\Tensorflow\workspace\trai': 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
Can anyone please help me resolve this problem, and also tell me which TensorFlow version is best suited for custom object detection?

This is probably caused by Windows using C:\ as the root of the user directory. When a path under it appears in an escaped string, the \U in \Users is read as the start of a unicode escape sequence, which produces exactly this error.
Try duplicating the backslashes. In other words, turn C:\User\Documents into C:\\User\\Documents.
As for which TensorFlow version is best, there isn't a single "best" version. I'd recommend using the same TF version as whatever library you're building on. I'd also recommend not coding this in raw TF. Instead, use an existing library, such as YOLO. Just Google "best object detection library tensorflow" and pick one of the existing libraries.
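A minimal sketch of why the single-backslash path fails and why doubling the backslashes fixes it (the paths here are made up for illustration; protobuf's CUnescape effectively performs a unicode_escape-style decode on quoted strings):

```python
# Why "\U" in a Windows path breaks protobuf's text parser: quoted strings go
# through a unicode_escape-style decode, and "\U" starts an 8-hex-digit escape.
import codecs

bad = r'C:\Users\demo\Documents'      # single backslashes, as typed in the config

try:
    codecs.decode(bad, 'unicode_escape')   # the decode step that CUnescape performs
    failed = False
except UnicodeError:                       # "truncated \UXXXXXXXX escape"
    failed = True

# Doubling the backslashes in the config file makes each "\\" decode back to a
# single "\", so the path survives intact:
good = r'C:\\Users\\demo\\Documents'
decoded = codecs.decode(good, 'unicode_escape')   # C:\Users\demo\Documents
```

Forward slashes (C:/Users/...) also sidestep the problem entirely, since no backslash escapes are involved.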

Related

How can I compile the example xml file from the Open62541 tutorial?

I'm on chapter 11 of the official guide to the open62541 library. The HTML version is here. Before trying anything custom, I just want to try this feature in the most basic way by "compiling" their example XML file into C code, which can then be compiled with GCC and run as an OPC server. (If you would like to follow along, download the full source code from the main page; the nodeset compiler tool is in there.)
I'm in a Debian-based environment (CLI only). I made a copy of myNS.xml and saved it directly in the path ~/code/open62541-open62541-6249bb2/tools/nodeset_compiler/, which is also my current working directory in this example. I tried to use the nodeset compiler with exactly the same command that they use in the tutorial: python ./nodeset_compiler.py --types-array=UA_TYPES --existing ../../deps/ua-nodeset/Schema/Opc.Ua.NodeSet2.xml --xml myNS.xml myNS
The error message I got is this:
Traceback (most recent call last):
File "./nodeset_compiler.py", line 126, in <module>
ns.addNodeSet(xmlfile, True, typesArray=getTypesArray(nsCount))
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/nodeset.py", line 224, in addNodeSet
nodesets = dom.parseString(fileContent).getElementsByTagName("UANodeSet")
File "/usr/lib/python2.7/xml/dom/minidom.py", line 1928, in parseString
return expatbuilder.parseString(string)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
return builder.parseString(string)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
parser.Parse(string, True)
xml.parsers.expat.ExpatError: syntax error: line 1, column 0
Any idea what I might be doing wrong?
UPDATE:
Alright, I found out there was a problem with my Opc.Ua.NodeSet2.xml file, which I corrected. If you are following along and would like to grab the version of the file I have, you can get it here.
But now I have this issue:
INFO:__main__:Preprocessing (existing) ../../deps/ua-nodeset/Schema/Opc.Ua.NodeSet2.xml
INFO:__main__:Preprocessing myNS.xml
Traceback (most recent call last):
File "./nodeset_compiler.py", line 178, in <module>
ns.allocateVariables()
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/nodeset.py", line 322, in allocateVariables
n.allocateValue(self)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/nodes.py", line 291, in allocateValue
self.value.parseXMLEncoding(self.xmlValueDef, dataTypeNode, self)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 161, in parseXMLEncoding
val = self.__parseXMLSingleValue(el, parentDataTypeNode, parent)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 281, in __parseXMLSingleValue
extobj.value.append(extobj.__parseXMLSingleValue(ebodypart, parentDataTypeNode, parent, alias=None, encodingPart=e))
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 223, in __parseXMLSingleValue
alias=alias, encodingPart=enc[1], valueRank=enc[2] if len(enc)>2 else None)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 198, in __parseXMLSingleValue
t.parseXML(xmlvalue)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 330, in parseXML
self.value = int(unicode(xmlvalue.firstChild.data))
ValueError: invalid literal for int() with base 10: ''
UPDATE_2:
I tried doing the same thing on my Windows laptop, and here is the error I got:
INFO:__main__:Preprocessing (existing) ../../deps/ua-nodeset/Schema/Opc.Ua.NodeSet2.xml
INFO:__main__:Preprocessing myNS.xml
Traceback (most recent call last):
File "./nodeset_compiler.py", line 178, in <module>
ns.allocateVariables()
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\nodeset.py", line 322, in allocateVariables
n.allocateValue(self)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\nodes.py", line 291, in allocateValue
self.value.parseXMLEncoding(self.xmlValueDef, dataTypeNode, self)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 161, in parseXMLEncoding
val = self.__parseXMLSingleValue(el, parentDataTypeNode, parent)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 281, in __parseXMLSingleValue
extobj.value.append(extobj.__parseXMLSingleValue(ebodypart, parentDataTypeNode, parent, alias=None, encodingPart=e))
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 223, in __parseXMLSingleValue
alias=alias, encodingPart=enc[1], valueRank=enc[2] if len(enc)>2 else None)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 198, in __parseXMLSingleValue
t.parseXML(xmlvalue)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 330, in parseXML
self.value = int(unicode(xmlvalue.firstChild.data))
ValueError: invalid literal for int() with base 10: '\n '
The complete documentation for the open62541 nodeset compiler can be found here:
https://open62541.org/doc/current/nodeset_compiler.html
The command you are using also seems to be fine.
The last issue you are describing, invalid literal for int(), is due to a newline inside the value tag of a variable.
This will be fixed with
https://github.com/open62541/open62541/pull/2768
As a workaround, you can change your .xml from
<Value>
<Int32>
</Int32>
</Value>
to (no newline):
<Value>
<Int32></Int32>
</Value>
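A minimal reproduction of the crash (a hypothetical sketch, not the actual nodeset_compiler code): the element's text node holds only a newline plus indentation, and int() rejects whitespace-only strings.

```python
# The <Int32> element contains only "\n        ", and int() raises ValueError
# on whitespace-only input, matching the traceback above.
from xml.dom import minidom

doc = minidom.parseString("<Value><Int32>\n        </Int32></Value>")
text = doc.getElementsByTagName("Int32")[0].firstChild.data   # "\n        "

try:
    int(text)
    raised = False
except ValueError:   # invalid literal for int() with base 10: '\n        '
    raised = True

# Stripping the text and falling back to a default (roughly the spirit of the
# linked PR) avoids the crash:
value = int(text) if text.strip() else 0
```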

Python library "Crypto" conflict

I'm trying to integrate two frameworks and am installing the requirements for both. It seems the 'Crypto' library is used by both frameworks, each depending on a different version, so if I install the requirements for one of the frameworks, I get the first error:
Traceback (most recent call last):
File "dapp_bdb.py", line 134, in <module>
main()
File "dapp_bdb.py", line 112, in main
blockchain = LevelDBBlockchain(settings.chain_leveldb_path)
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Implementations/Blockchains/LevelDB/LevelDBBlockchain.py", line 190, in __init__
self.Persist(Blockchain.GenesisBlock())
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Implementations/Blockchains/LevelDB/LevelDBBlockchain.py", line 691, in Persist
account = accounts.GetAndChange(output.AddressBytes, AccountState(output.ScriptHash))
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Core/TX/Transaction.py", line 121, in AddressBytes
return bytes(self.Address, encoding='utf-8')
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Core/TX/Transaction.py", line 111, in Address
return Crypto.ToAddress(self.ScriptHash)
File "/home/ubuntu/.local/lib/python3.6/site-packages/neocore/Cryptography/Crypto.py", line 103, in ToAddress
return scripthash_to_address(script_hash.Data)
File "/home/ubuntu/.local/lib/python3.6/site-packages/neocore/Cryptography/Helper.py", line 78, in scripthash_to_address
return base58.b58encode(bytes(outb)).decode("utf-8")
AttributeError: 'str' object has no attribute 'decode'
and with the second framework's requirements, I get this other error:
exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "dapp_bdb.py", line 95, in custom_background_code
put_bdb("Hello world")
File "dapp_bdb.py", line 68, in put_bdb
fulfilled_creation_tx = bdb.transactions.fulfill(prepared_creation_tx, private_keys=private_key)
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/driver.py", line 270, in fulfill
return fulfill_transaction(transaction, private_keys=private_keys)
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/offchain.py", line 346, in fulfill_transaction
signed_transaction = transaction_obj.sign(private_keys)
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/common/transaction.py", line 823, in sign
PrivateKey(private_key) for private_key in private_keys}
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/common/transaction.py", line 823, in <dictcomp>
PrivateKey(private_key) for private_key in private_keys}
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/common/transaction.py", line 817, in gen_public_key
public_key = private_key.get_verifying_key().encode()
File "/home/ubuntu/.local/lib/python3.6/site- packages/cryptoconditions/crypto.py", line 62, in get_verifying_key
return Ed25519VerifyingKey(self.verify_key.encode(encoder=Base58Encoder))
File "/home/ubuntu/.local/lib/python3.6/site-packages/nacl/encoding.py", line 90, in encode
return encoder.encode(bytes(self))
File "/home/ubuntu/.local/lib/python3.6/site-packages/cryptoconditions/crypto.py", line 15, in encode
return base58.b58encode(data).encode()
AttributeError: 'bytes' object has no attribute 'encode'
Are there any ideas how I can avoid this?
It looks like the cryptoconditions library is doing it wrong.
You should file a bug asking it to update the required version of base58 and to review all calls into it. The usual behavior in Python 3 is for some_encoder_library.encode() to return bytes and some_encoder_library.decode() to return str. New versions of the base58 module follow this rule (although base58-encoded data never contains any special characters, of course). cryptoconditions still assumes the previous version, where b58encode returned str.
Meanwhile, you can make local modifications to the installed library, or fork it and install your fork instead.
It is likely that everything will work fine with the encode() call removed from this line.
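The bytes-vs-str convention described above can be illustrated with the stdlib base64 module standing in for base58 (which may not be installed): encoders return bytes, and bytes has no .encode(), which is exactly the AttributeError in the second traceback.

```python
# Python 3 convention: encoders return bytes; calling .encode() on the result
# fails, while .decode() turns it into str (the pattern that works).
import base64

encoded = base64.b64encode(b"payload")   # bytes under the Python 3 convention

try:
    encoded.encode()                     # the b58encode(data).encode() pattern
    raised = False
except AttributeError:                   # 'bytes' object has no attribute 'encode'
    raised = True

as_text = encoded.decode("utf-8")        # the pattern that works with new base58
```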

InternalError: Invalid variable reference. [Op:ResourceApplyAdam] on TensorFlow

I am currently working with eager execution on TensorFlow 1.7.0.
I get this error when running on GPU:
tensorflow.python.framework.errors_impl.InternalError: Invalid variable reference. [Op:ResourceApplyAdam]
Unfortunately, I wasn't able to isolate the error so I can't give a snippet which could explain that.
The error doesn't occur when I am working on CPU. My code was working fine on GPU until a recent update. I don't think it is machine related because it occurs on different machines.
I wasn't able to find something relevant so if you have any hints on what can cause this error, please let me know.
Complete traceback:
2018-07-19 17:52:32.393711: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at training_ops.cc:2507 : Internal: Invalid variable reference.
Traceback (most recent call last):
File "debugging_jules_usage.py", line 391, in <module>
mainLoop()
File "debugging_jules_usage.py", line 370, in mainLoop
raise e
File "debugging_jules_usage.py", line 330, in mainLoop
Kn.fit(train)
File "/home/jbayet/xai-link-prediction/xai_lp/temporal/models_temporal.py", line 707, in fit
self._train_one_batch(X_bis, i)
File "/home/jbayet/xai-link-prediction/xai_lp/temporal/models_temporal.py", line 639, in _train_one_batch
self.optimizer.minimize(batch_model_loss, global_step=tf.train.get_global_step())
File "/home/jbayet/miniconda3/envs/xai/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 409, in minimize
name=name)
File "/home/jbayet/miniconda3/envs/xai/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 564, in apply_gradients
update_ops.append(processor.update_op(self, grad))
File "/home/jbayet/miniconda3/envs/xai/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 161, in update_op
update_op = optimizer._resource_apply_dense(g, self._v)
File "/home/jbayet/miniconda3/envs/xai/lib/python3.6/site-packages/tensorflow/python/training/adam.py", line 166, in _resource_apply_dense
grad, use_locking=self._use_locking)
File "/home/jbayet/miniconda3/envs/xai/lib/python3.6/site-packages/tensorflow/python/training/gen_training_ops.py", line 1105, in resource_apply_adam
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Invalid variable reference. [Op:ResourceApplyAdam]

Cannot parse sentence using Stanford parser even though I have set environment variables

I am using python 3.5.
I downloaded the Stanford parser and extracted it. I have also set the environment variable properly, and it got set correctly. But when I ran a sentence through the parser, I got an error.
This is the error:
Traceback (most recent call last):
File "<pyshell#15>", line 1, in <module>
sp.parse("this is a sentence".split())
File "C:\Users\MAHESH\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\parse\api.py", line 45, in parse
return next(self.parse_sents([sent], *args, **kwargs))
File "C:\Users\MAHESH\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\parse\stanford.py", line 120, in parse_sents
cmd, '\n'.join(' '.join(sentence) for sentence in sentences), verbose))
File "C:\Users\MAHESH\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\parse\stanford.py", line 216, in _execute
stdout=PIPE, stderr=PIPE)
File "C:\Users\MAHESH\AppData\Local\Programs\Python\Python35\lib\site-packages\nltk\internals.py", line 134, in java
raise OSError('Java command failed : ' + str(cmd))
OSError: Java command failed : ['java.exe', '-mx1000m', '-cp', 'C:/Users/MAHESH/stanfordparser/stanford-parser-full-2015-04-20\\stanford-parser-3.5.2-models.jar;C:/Users/MAHESH/stanfordparser/stanford-parser-full-2015-04-20\\ejml-0.23.jar;C:/Users/MAHESH/stanfordparser/stanford-parser-full-2015-04-20\\stanford-parser-3.5.2-javadoc.jar;C:/Users/MAHESH/stanfordparser/stanford-parser-full-2015-04-20\\stanford-parser-3.5.2-models.jar;C:/Users/MAHESH/stanfordparser/stanford-parser-full-2015-04-20\\stanford-parser-3.5.2-sources.jar;C:/Users/MAHESH/stanfordparser/stanford-parser-full-2015-04-20\\stanford-parser.jar', 'edu.stanford.nlp.parser.lexparser.LexicalizedParser', '-model', 'edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz', '-sentences', 'newline', '-outputFormat', 'penn', '-tokenized', '-escaper', 'edu.stanford.nlp.process.PTBEscapingProcessor', '-encoding', 'utf8', 'C:\\Users\\MAHESH\\AppData\\Local\\Temp\\tmp1jcjvrl1']

spyder unicode decode error in startup

I was using the Spyder IDE while parsing a Tumblr page with the permission of the author, and at some point everything just crashed; even my Linux system froze. To cut to the chase, I now cannot start Spyder. It gives me the following error after I type spyder in my terminal:
Traceback (most recent call last):
File "/home/dk/anaconda3/bin/spyder", line 2, in <module>
from spyderlib import start_app
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/start_app.py", line 13, in <module>
from spyderlib.config import CONF
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/config.py", line 736, in <module>
subfolder=SUBFOLDER, backup=True, raw_mode=True)
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/userconfig.py", line 215, in __init__
self.load_from_ini()
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/userconfig.py", line 265, in load_from_ini
self.read(self.filename(), encoding='utf-8')
File "/home/dk/anaconda3/lib/python3.5/configparser.py", line 696, in read
self._read(fp, filename)
File "/home/dk/anaconda3/lib/python3.5/configparser.py", line 1012, in _read
for lineno, line in enumerate(fp, start=1):
File "/home/dk/anaconda3/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte
I tried the solution here and received the following error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/spyder.py", line 107, in <module>
from spyderlib.utils.qthelpers import qapplication
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/utils/qthelpers.py", line 24, in <module>
from spyderlib.guiconfig import get_shortcut
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/guiconfig.py", line 22, in <module>
from spyderlib.config import CONF
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/config.py", line 736, in <module>
subfolder=SUBFOLDER, backup=True, raw_mode=True)
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/userconfig.py", line 215, in __init__
self.load_from_ini()
File "/home/dk/anaconda3/lib/python3.5/site-packages/spyderlib/userconfig.py", line 265, in load_from_ini
self.read(self.filename(), encoding='utf-8')
File "/home/dk/anaconda3/lib/python3.5/configparser.py", line 696, in read
self._read(fp, filename)
File "/home/dk/anaconda3/lib/python3.5/configparser.py", line 1012, in _read
for lineno, line in enumerate(fp, start=1):
File "/home/dk/anaconda3/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte
I tried uninstalling and reinstalling Anaconda, but that doesn't seem to work. I am open to suggestions. I am very new to Python, so I would also appreciate a simple explanation of the possible causes of the error.
Thanks in advance
Well, here is how I solved the issue.
I opened this file: spyderlib/userconfig.py
and changed this: self.read(self.filename(), encoding='utf-8')
to this: self.read(self.filename(), encoding='latin-1')
It gave me a "File contains no section headers" warning but started Spyder anyway. After that, I closed Spyder, opened the terminal, and entered spyder --reset, then restarted Spyder; it seems to work now.
Here is what you should not do for this problem, at any cost: tinkering with these files. I learned my lesson the hard way:
python3.5/configparser.py
python3.5/codecs.py
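For context on why switching the encoding works: utf-8 rejects malformed byte sequences, while latin-1 maps every possible byte 0x00-0xFF to a character, so reading a corrupted config file with latin-1 can never raise (at worst you get mojibake). A small sketch, with made-up corrupted bytes:

```python
# 0xC3 opens a 2-byte utf-8 sequence, but "(" (0x28) is not a valid
# continuation byte, reproducing the "invalid continuation byte" error.
corrupted = b"\xc3([main]\nversion=1.0\n"

try:
    corrupted.decode("utf-8")
    raised = False
except UnicodeDecodeError:          # invalid continuation byte
    raised = True

text = corrupted.decode("latin-1")  # always succeeds, one char per byte
```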
