Problems to be solved when importing keras - python-3.5

I've just installed keras for deep learning research; however, when I import keras, it shows:
import keras
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/keras/__init__.py", line 2, in <module>
from . import backend
File "/usr/local/lib/python3.5/dist-packages/keras/backend/__init__.py", line 31, in <module>
_config = json.load(open(_config_path))
File "/usr/lib/python3.5/json/__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.5/json/decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 6 column 5 (char 85)
I don't know how to handle it; please give me some advice. Thanks in advance!

The output you are seeing there is a stack trace, indicating that some error has occurred. The top line of the stack trace is where your program first entered the function call chain that eventually resulted in some error, and the last line is the end of that chain, the spot where the error actually occurred.
In this case, we see the rather specific error message:
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 6 column 5 (char 85)
I interpret this as saying that there is something malformed in your JSON input. JSON is a relatively simple data encoding format (it's documented on a single web page), and the error says that it sees a problem very early in your input stream (you can either count 85 characters into your input file, or you can jump down to line 6 and then step 5 columns to the right on that line).
So the real question is: Do you know where that JSON file is located? From the stack trace, it sounds like it is at some "config path"; maybe there is some configuration file that you edited, but left out a comma in it?
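If you want to pinpoint the problem yourself, the traceback shows the parse happening on the file at _config_path; on most installations that is ~/.keras/keras.json (treat that default location as an assumption, and adjust it if you use KERAS_HOME). Here is a small sketch that re-parses the file and prints the offending line:
import json
import os

# Assumed default location of the Keras backend config; adjust if yours differs.
config_path = os.path.expanduser(os.path.join("~", ".keras", "keras.json"))

with open(config_path) as f:
    text = f.read()

try:
    json.loads(text)
    print("Config parses cleanly.")
except json.JSONDecodeError as err:
    # JSONDecodeError carries the line/column, so the missing comma is easy to spot.
    print("Error at line", err.lineno, "column", err.colno, "-", err.msg)
    print(text.splitlines()[err.lineno - 1])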

Related

Currently having an error with youtube-dl in python, any ideas?

This is the error I am getting. I tried to see if anyone else had this issue, but the results were only for sites other than YouTube, and there was no fix listed. If needed, I can link the Python code as well. This is happening when using a play command on a Discord bot using discord.py-rewrite.
[youtube:search] query " www.youtube.com/watch?v=5abamRO41fE": Downloading page 1
ERROR: query " www.youtube.com/watch?v=5abamRO41fE": Failed to parse JSON (caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Ignoring exception in command play:
Traceback (most recent call last):
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 904, in _parse_json
return json.loads(json_string)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 797, in extract_info
ie_result = ie.extract(url)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 532, in extract
ie_result = self._real_extract(url)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 3014, in _real_extract
return self._get_n_results(query, n)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\youtube.py", line 3200, in _get_n_results
data = self._download_json(
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 893, in _download_json
res = self._download_json_handle(
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 879, in _download_json_handle
return self._parse_json(
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 908, in _parse_json
raise ExtractorError(errmsg, cause=ve)
youtube_dl.utils.ExtractorError: query " www.youtube.com/watch?v=5abamRO41fE": Failed to parse JSON (caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\core.py", line 85, in wrapped
ret = await coro(*args, **kwargs)
File "g:\Bots\TestBot\test.py", line 574, in play
results = ydl.extract_info(f'ytsearch1: {song_search}', download=True)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 820, in extract_info
self.report_error(compat_str(e), e.format_traceback())
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 625, in report_error
self.trouble(error_message, tb)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 595, in trouble
raise DownloadError(message, exc_info)
youtube_dl.utils.DownloadError: ERROR: query " www.youtube.com/watch?v=5abamRO41fE": Failed to parse JSON (caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.

astropy.extern.configobj.configobj.ConfigObjError: Parsing failed with several errors

When I import the astropy package, I get the following error message.
>>> import astropy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/__init__.py", line 288, in <module>
log = _init_log()
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/logger.py", line 97, in _init_log
log._set_defaults()
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/logger.py", line 473, in _set_defaults
self.setLevel(conf.log_level)
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/config/configuration.py", line 273, in __get__
return self()
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/config/configuration.py", line 396, in __call__
sec = get_config(self.module)
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/config/configuration.py", line 530, in get_config
cobj = configobj.ConfigObj(cfgfn, interpolation=False)
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/extern/configobj/configobj.py", line 1227, in __init__
self._load(infile, configspec)
File "/home/lalitawadee/anaconda3/lib/python3.7/site-packages/astropy/extern/configobj/configobj.py", line 1316, in _load
raise error
astropy.extern.configobj.configobj.ConfigObjError: Parsing failed with several errors.
First error at line 142.
I've already tried removing Anaconda and re-installing it, but the problem remains. Could you please help me with this? Thank you in advance.
It sounds like you somehow have a corrupt astropy config file somewhere. It would help if the error message gave the filename, but see http://docs.astropy.org/en/stable/config/ for possible locations.
For starters, I would try something at the command-line like
$ mv ~/.astropy/ ~/.astropy.bak
and try again. I don't think this should have anything specifically to do with Anaconda.
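If you prefer to do the same thing from Python, here is a rough standard-library equivalent of that mv, assuming the config really does live in the default ~/.astropy directory:
import shutil
from pathlib import Path

# Assumption: astropy's per-user config lives in the default ~/.astropy directory.
cfg_dir = Path.home() / ".astropy"
backup = cfg_dir.parent / ".astropy.bak"

if cfg_dir.exists():
    shutil.move(str(cfg_dir), str(backup))  # same effect as the mv command above
    print("Moved", cfg_dir, "to", backup, "- astropy will recreate defaults on next import")
else:
    print("No", cfg_dir, "found; see the config docs linked above for other locations")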

tensorflow object detection for our own objects

I am using TensorFlow 1.9 for custom object detection and followed the same steps as in https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#preparing-workspace.
But when training the model, I am getting an error.
(tensorflow_cpu) C:\Users\Z004032A\Documents\Tensorflow\workspace\training_demo>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_inception_v2_coco.config
WARNING:tensorflow:From C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\platform\app.py:125: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1460, in _ConsumeSingleByteString
result = text_encoding.CUnescape(text[1:-1])
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_encoding.py", line 115, in CUnescape
.decode('unicode_escape')
UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 250, in new_func
return func(*args, **kwargs)
File "train.py", line 93, in main
FLAGS.pipeline_config_path)
File "C:\Users\Z004032A\Documents\Tensorflow\models\research\object_detection\utils\config_util.py", line 100, in get_configs_from_pipeline_file
text_format.Merge(proto_str, pipeline_config)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 685, in Merge
allow_unknown_field=allow_unknown_field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 752, in MergeLines
return parser.MergeLines(lines, message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 777, in MergeLines
self._ParseOrMerge(lines, message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 799, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 924, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 998, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 924, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 998, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 924, in _MergeField
merger(tokenizer, message, field)
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1049, in _MergeScalarField
value = tokenizer.ConsumeString()
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1420, in ConsumeString
the_bytes = self.ConsumeByteString()
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1435, in ConsumeByteString
the_list = [self._ConsumeSingleByteString()]
File "C:\Users\Z004032A\anaconda3\envs\tensorflow_cpu\lib\site-packages\google\protobuf\text_format.py", line 1462, in _ConsumeSingleByteString
raise self.ParseError(str(e))
google.protobuf.text_format.ParseError: 170:17 : ' input_path: "C:\Users\Z004032A\Documents\Tensorflow\workspace\trai': 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
Can anyone please help me resolve this problem, and also tell me which TensorFlow version best suits custom object detection?
This is probably caused by Windows using C:\ as the default user directory. When such a path ends up in a string that gets decoded with unicode escapes (here it is the input_path in your pipeline .config file, which protobuf's text parser unescapes), the \U in C:\Users is taken as the start of a \UXXXXXXXX unicode escape, and decoding fails.
Try doubling the backslashes. In other words, turn C:\User\Documents into C:\\User\\Documents.
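If you are curious why the parser trips, the traceback shows protobuf unescaping the quoted path with Python's 'unicode_escape' codec, and the failure is easy to reproduce in plain Python (a small sketch reusing the path from your traceback):
# Minimal reproduction of the error above: a single backslash before "U" is read
# as the start of a \UXXXXXXXX escape when decoded with 'unicode_escape'.
raw = rb"C:\Users\Z004032A\Documents"        # single backslashes, as in the failing config
try:
    raw.decode("unicode_escape")
except UnicodeDecodeError as err:
    print(err)  # 'unicodeescape' codec can't decode bytes ... truncated \UXXXXXXXX escape

# Doubling the backslashes (the fix suggested above) makes the escape well-formed:
fixed = rb"C:\\Users\\Z004032A\\Documents"
print(fixed.decode("unicode_escape"))        # C:\Users\Z004032A\Documents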
As for which Tensorflow version is best, there isn't a "best" version. I'd recommend using the same TF version as whatever library you're using. I'd also recommend not coding this in raw TF. Instead, use an existing library, such as YOLO. Just Google "best object detection library tensorflow" and choose one of the existing libraries.

Python Code with Plugin "Create Points Layer from Spreadsheet"

I'm new to Python; that is the first problem.
Secondly, I am trying to automate the task of adding vector point layers from spreadsheets (xlsx files) with Python.
The task can be done manually with the plugin "add spreadsheet layer".
I have a folder with roughly 20 xlsx files that need to be added to the QGIS project as vector point layers.
I have tried the following code snippet, to check whether the core task of adding a spreadsheet layer actually works.
The computer has a Windows 7 OS. The program in question is Python, which is contained in QGIS 3.4.
The plugin that I want to control through Python is called "add spreadsheet layer":
from qgis.core import *
import processing
processing.run("qgis:createpointslayerfromtable",
               {'INPUT': r'C:\Users\Desktop\PlayItAll\Test.xlsx',
                'XFIELD': 'X_Pos',
                'YFIELD': 'Y_Pos',
                'ZFIELD': None,
                'MFIELD': None,
                'TARGET_CRS': QgsCoordinateReferenceSystem('EPSG:4326'),
                'OUTPUT': r'memory'})
It produces this error:
File "C:/PROGRA1/QGIS31.4/apps/qgis/./python/plugins\processing\core\Processing.py", line 183, in runAlgorithm
raise QgsProcessingException(msg)
I have contacted the programmer of the plugin and he gave me this code to try:
import processing
processing.runAndLoadResults("qgis:createpointslayerfromtable",
                             {'INPUT': r'C:\Users\username\Desktop\Delete\test.xlsx',
                              'XFIELD': 'Longitude',
                              'YFIELD': 'Latitude',
                              'ZFIELD': None,
                              'MFIELD': None,
                              'TARGET_CRS': QgsCoordinateReferenceSystem('EPSG:4326'),
                              'OUTPUT': 'memory'})
For him it worked; for me it didn't.
I got this on the Processing tab:
2019-07-03T13:19:43 CRITICAL Traceback (most recent call last):
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python/plugins\processing\algs\qgis\PointsLayerFromTable.py", line 112, in processAlgorithm
fields, wkb_type, target_crs)
Exception: unknown
2019-07-03T13:19:43 CRITICAL Traceback (most recent call last):
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python/plugins\processing\algs\qgis\PointsLayerFromTable.py", line 112, in processAlgorithm
fields, wkb_type, target_crs)
Exception: unknown
2019-07-03T13:19:43 CRITICAL There were errors executing the algorithm.
The "python warnings" tab showed this:
2019-07-03T13:19:43 WARNING warning:__console__:1: ResourceWarning:
unclosed file
traceback: File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console.py", line 575, in runScriptEditor
self.tabEditorWidget.currentWidget().newEditor.runScriptCode()
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console_editor.py", line 629, in runScriptCode
.format(filename.replace("\\", "/"), sys.getfilesystemencoding()))
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console_sci.py", line 635, in runCommand
more = self.runsource(src)
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console_sci.py", line 665, in runsource
return super(ShellScintilla, self).runsource(source, filename, symbol)
File "C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\code.py", line 74, in runsource
self.runcode(code)
File "C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "", line 1, in

Keras / Theano exceptions are getting masked

I am using an evolutionary algorithm to find satisfactory hyper-parameters for a CNN written in Keras/Theano. The stochastic nature of this approach means that from time to time a pathological configuration will be tried, which will yield an exception. In those scenarios, I'd like to catch the exception so I can assign an appropriate low fitness. Unfortunately, when Theano throws an exception, it appears to be masked before it reaches my try/catch block. That is, at some point the exception is caught and not re-raised, which means it never propagates up the stack to reach my try/catch block.
I've asked on the Keras Slack workspace if there was some configuration I had to tickle in Keras to un-mask these exceptions, but I was told that the problem was not at the Keras level, that it was something with Theano. And, so here I am.
I have the following configuration settings at the top of the corresponding theanorc file that I had hoped would solve the problem:
[config]
on_opt_error = raise
on_shape_error = raise
numpy.seterr_all = raise
compute_test_value = raise
And, these are the exceptions I am seeing:
ERROR (theano.gof.opt): SeqOptimizer apply <theano.tensor.opt.ShapeOptimizer object at 0x2aaae03674a8>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/gof/opt.py", line 83, in optimize
self.add_requirements(fgraph)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1482, in add_requirements
fgraph.attach_feature(ShapeFeature())
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/gof/fg.py", line 541, in attach_feature
attach(self)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1299, in on_attach
self.on_import(fgraph, node, reason='on_attach')
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1362, in on_import
self.set_shape(r, s)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1151, in set_shape
shape_vars.append(self.unpack(s[i], r))
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1073, in unpack
raise ValueError(msg)
ValueError: There is a negative shape in the graph!
Backtrace when that variable is created:
File "/ccs/proj/geo121/mcoletti/dl-4-settlement-mapping/eadl/train_cnn.py", line 218, in <module>
validation_accuracy = train_cnn(data_dir=args.data_dir, kernel_sizes=args.kernel_sizes, max_epoch=args.epoch, batch_sizes=args.batch_size)
File "/ccs/proj/geo121/mcoletti/dl-4-settlement-mapping/eadl/train_cnn.py", line 193, in train_cnn
model = create_cnn(kernel_sizes=kernel_sizes)
File "/ccs/proj/geo121/mcoletti/dl-4-settlement-mapping/eadl/train_cnn.py", line 52, in create_cnn
model.add(Conv2D(256, kernel_size=kernel_sizes[3], activation="relu", kernel_initializer="normal"))
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/models.py", line 475, in add
output_tensor = layer(self.outputs[0])
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/engine/topology.py", line 602, in __call__
output = self.call(inputs, **kwargs)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/layers/convolutional.py", line 164, in call
dilation_rate=self.dilation_rate)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/backend/theano_backend.py", line 1890, in conv2d
filter_dilation=dilation_rate)
And, if you're curious to see the try/catch block, it's just this:
try:
    validation_accuracy = train_cnn(data_dir=args.data_dir, kernel_sizes=args.kernel_sizes, max_epoch=args.epoch, batch_sizes=args.batch_size)
except Exception as e:
    print(socket.gethostname(), ', Caught exception while training:', str(e))
My intuition is that this is probably something very, very simple. Maybe I need to add more options to the THEANORC file?
Setting theano.config.compute_test_value = 'raise' appears to work.
Curiously, compute_test_value should have been set from the Theano configuration file, which suggests that it's not being properly read and parsed. I should not have to set this value programmatically when I explicitly set it in the configuration file.
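For anyone hitting the same masking problem, this is a minimal sketch of the programmatic workaround combined with the try/except from the question; it reuses the question's train_cnn and args, and the 0.0 penalty fitness is just an illustrative placeholder:
import socket
import theano

# Workaround: set the flag in code rather than relying on .theanorc, which in this
# case was apparently not being read. 'raise' makes Theano propagate errors found
# while computing test values instead of swallowing them during graph optimization.
theano.config.compute_test_value = 'raise'

try:
    validation_accuracy = train_cnn(data_dir=args.data_dir,
                                    kernel_sizes=args.kernel_sizes,
                                    max_epoch=args.epoch,
                                    batch_sizes=args.batch_size)
except Exception as e:
    # Pathological hyper-parameter sets now surface here, so the evolutionary
    # algorithm can assign them a low fitness instead of crashing.
    print(socket.gethostname(), ', Caught exception while training:', str(e))
    validation_accuracy = 0.0  # illustrative penalty fitness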

Resources