Error following first Theano program example - theano

I'm totally new to Theano and am following this simple intro exercise found here: http://deeplearning.net/software/theano/introduction.html#introduction
The idea is simply to declare some tensor variables and wrap them in a function; it is the most basic thing you could possibly do with Theano.
The exact code is:
import theano
from theano import tensor
# declare two symbolic floating-point scalars
a = tensor.dscalar()
b = tensor.dscalar()
# create a simple expression
c = a + b
# convert the expression into a callable object that takes (a,b)
# values as input and computes a value for c
f = theano.function([a,b], c)
# bind 1.5 to 'a', 2.5 to 'b', and evaluate 'c'
assert 4.0 == f(1.5, 2.5)
However, I get this traceback:
Traceback (most recent call last):
File "test.py", line 13, in <module>
f = theano.function([a,b], c)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/function.py", line 223, in function
profile=profile)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/pfunc.py", line 512, in pfunc
on_unused_input=on_unused_input)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/function_module.py", line 1312, in orig_function
defaults)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/function_module.py", line 1181, in create
_fn, _i, _o = self.linker.make_thunk(input_storage=input_storage_lists)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/link.py", line 434, in make_thunk
output_storage=output_storage)[:3]
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/vm.py", line 847, in make_all
no_recycling))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/op.py", line 606, in make_thunk
output_storage=node_output_storage)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 948, in make_thunk
keep_lock=keep_lock)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 891, in __compile__
keep_lock=keep_lock)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 1314, in cthunk_factory
key = self.cmodule_key()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 1032, in cmodule_key
c_compiler=self.c_compiler(),
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 1090, in cmodule_key_
sig.append('md5:' + theano.configparser.get_config_md5())
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/configparser.py", line 146, in get_config_md5
['%s = %s' % (cv.fullname, cv.__get__()) for cv in all_opts]))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/configparser.py", line 146, in <listcomp>
['%s = %s' % (cv.fullname, cv.__get__()) for cv in all_opts]))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/configparser.py", line 273, in __get__
val_str = self.default()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/tensor/blas.py", line 282, in default_blas_ldflags
if GCC_compiler.try_flags(["-lblas"]):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cmodule.py", line 1852, in try_flags
flags=flag_list, try_run=False)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cmodule.py", line 1799, in try_compile_tmp
os.write(fd, src_code)
TypeError: ('The following error happened while compiling the node', Elemwise{add,no_inplace}(<TensorType(float64, scalar)>, <TensorType(float64, scalar)>), '\n', "'str' does not support the buffer interface")
My only thought is that it may be Python 3 related, but that should not be the case. Please help.

The Theano code base does not work out of the box for both Python 2 and Python 3; it needs to be converted. This is done during the installation of Theano. When installed via pip, this is done automatically. If you cloned/downloaded the source code, you need to install it with:
python setup.py install
Here is a Theano ticket with more information:
https://github.com/Theano/Theano/issues/2317
Also, for Python 3 support, you should use the development version, like the other answer suggests:
pip3 install --upgrade --no-deps git+git://github.com/Theano/Theano.git
But this isn't related to BLAS, contrary to what the other answer says.

The problem is that BLAS is not included in the most recent release of Theano. It is solved when you pull the bleeding-edge version:
pip3 install --upgrade --no-deps git+git://github.com/Theano/Theano.git

Related

python module autopep8 problem in Linux terminal

I run Ubuntu via WSL 2 and use Python 3.10. Now when I run autopep8, the problem shown below appears. How can it be fixed?
When I type autopep8 --in-place -a filename.py, this problem appears:
Traceback (most recent call last):
File "/home/asib/.local/bin/autopep8", line 8, in <module>
sys.exit(main())
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 4518, in main
results = fix_multiple_files(args.files, args, sys.stdout)
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 4413, in fix_multiple_files
ret = _fix_file((name, options, output))
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 4383, in _fix_file
return fix_file(*parameters)
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 3569, in fix_file
original_source = readlines_from_file(filename)
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 195, in readlines_from_file
with open_with_encoding(filename) as input_file:
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 172, in open_with_encoding
encoding = detect_encoding(filename, limit_byte_check=limit_byte_check)
File "/home/asib/.local/lib/python3.10/site-packages/autopep8.py", line 182, in detect_encoding
from lib2to3.pgen2 import tokenize as lib2to3_tokenize
ModuleNotFoundError: No module named 'lib2to3'
I want the autopep8 command to just work when I run it.
For example: autopep8 --in-place -a filename.py

pylint and astroid AttributeError: 'Module' object has no attribute 'col_offset'

It fails using pylint versions 2.9.0 and 2.9.3. With version 2.8.3 it still works, though.
See the GitHub issue under the provided link.
Traceback (most recent call last):
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/inference_tip.py", line 19, in _inference_tip_cached
return iter(_cache[func, node])
KeyError: (<function register_builtin_transform.<locals>._transform_wrapper at 0x7f310b222700>, <Call l.166 at 0x7f3102ebf970>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.pyenv/versions/3.8.8/bin/pylint", line 8, in <module>
sys.exit(run_pylint())
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/__init__.py", line 24, in run_pylint
PylintRun(sys.argv[1:])
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/lint/run.py", line 384, in __init__
linter.check(args)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/lint/pylinter.py", line 973, in check
self._check_files(
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/lint/pylinter.py", line 1007, in _check_files
self._check_file(get_ast, check_astroid_module, name, filepath, modname)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/lint/pylinter.py", line 1033, in _check_file
check_astroid_module(ast_node)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/lint/pylinter.py", line 1170, in check_astroid_module
retval = self._check_astroid_module(
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/lint/pylinter.py", line 1215, in _check_astroid_module
walker.walk(ast_node)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/utils/ast_walker.py", line 77, in walk
self.walk(child)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/utils/ast_walker.py", line 77, in walk
self.walk(child)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/utils/ast_walker.py", line 74, in walk
callback(astroid)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/checkers/typecheck.py", line 1071, in visit_assign
self._check_assignment_from_function_call(node)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/checkers/typecheck.py", line 1081, in _check_assignment_from_function_call
function_node = safe_infer(node.value.func)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/pylint/checkers/utils.py", line 1177, in safe_infer
value = next(infer_gen)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/node_classes.py", line 353, in infer
yield from self._infer(context, **kwargs)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/decorators.py", line 136, in raise_if_nothing_inferred
yield next(generator)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/decorators.py", line 100, in wrapped
res = next(generator)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/inference.py", line 299, in infer_attribute
for owner in self.expr.infer(context):
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/node_classes.py", line 367, in infer
for i, result in enumerate(generator):
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/decorators.py", line 136, in raise_if_nothing_inferred
yield next(generator)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/decorators.py", line 100, in wrapped
res = next(generator)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/bases.py", line 144, in _infer_stmts
for inferred in stmt.infer(context=context):
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/node_classes.py", line 367, in infer
for i, result in enumerate(generator):
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/decorators.py", line 136, in raise_if_nothing_inferred
yield next(generator)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/decorators.py", line 100, in wrapped
res = next(generator)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/bases.py", line 144, in _infer_stmts
for inferred in stmt.infer(context=context):
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/node_classes.py", line 343, in infer
results = tuple(self._explicit_inference(self, context, **kwargs))
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/inference_tip.py", line 21, in _inference_tip_cached
result = func(*args, **kwargs)
File "/root/.pyenv/versions/3.8.8/lib/python3.8/site-packages/astroid/brain/brain_builtin_inference.py", line 203, in _transform_wrapper
if result.col_offset is None:
AttributeError: 'Module' object has no attribute 'col_offset'
The requirements.txt file of this testing environment contains:
astroid
-r src/packages/project/requirements.txt
pycodestyle
pylint
pylint_junit
pytest
pytest-cov
yapf
with src/packages/project/requirements.txt containing:
awswrangler==2.8.0
babel==2.9.1
boto3==1.17.77
botocore==1.20.77
category-encoders==2.2.2
joblib==1.0.1
markdown==3.3.4
matplotlib==3.3.4
openpyxl==3.0.7
pandas==1.1.5
pyarrow==4.0.0
pytz==2021.1
requests==2.25.1
scikit-learn==0.24.2
simple_salesforce==1.0.0
EDIT on different attempts producing the same error:
Installing pylint==2.9.3
Installing astroid and pylint (latest versions, no version specification)
Installing astroid and pylint and upgrading astroid to the latest version during the build process (in AWS test-buildspec.yml) via pip install --upgrade astroid (suggested here)
It looks like you found a bug/crash in pylint 2.9; you can open an issue here. You can downgrade to 2.8.3 while it's being fixed.
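For example, to pin the working version until the fix lands (a plain pip downgrade, nothing project-specific assumed):
pip install pylint==2.8.3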
Thanks to the work of @Pierre.Sassoulas this issue has been resolved (see here).
After having proven that the following combination of pylint and astroid versions worked well together without producing the OP's error, new commits were made to the GitHub project to fix the problem:
pip install pylint==2.9.3
pip install git+git://github.com/PyCQA/astroid.git#c37b6fd47b62486fd6cbe77b913b568b809f1a6d#egg=astroid
From here going forward, the issue should not occur again when installing the latest version of pylint combined with astroid. Yet, if the issue returns, I'll let you know here.

TypeError: can't pickle cv2.CLAHE objects

I am trying to run the code on GitHub (https://github.com/AayushKrChaudhary/RITnet).
I did not get the semantic segmentation dataset of OpenEDS, so I tried downloading PNG images from the Internet and putting them in Semantic_Segmentation_Dataset\test\ to run the test program.
That code gives the following error:
Traceback (most recent call last):
File "test.py", line 59, in <module>
for i, batchdata in tqdm(enumerate(testloader),total=len(testloader)):
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
w.start()
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle cv2.CLAHE objects
(Machine_Learning) C:\Users\b0743\Downloads\RITnet-master\RITnet-master>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
and my environment is:
# Name Version
cudatoolkit 10.1.243
cudnn 7.6.5
keras-applications 1.0.8
keras-base 2.3.1
keras-gpu 2.3.1
keras-preprocessing 1.1.0
matplotlib 3.3.1
matplotlib-base 3.3.1
numpy 1.19.1
numpy-base 1.19.1
opencv 3.3.1
pillow 7.2.0
python 3.6.10
pytorch 1.6.0
scikit-learn 0.23.2
scipy 1.5.2
torchsummary 1.5.1
torchvision 0.7.0
tqdm 4.48.2
I don’t know if this is a stupid question, but I hope someone can try to answer it for me.
I literally just went into the dataset Python file and commented out all the parts that require OpenCV.
It turns out it works, though you won't get that sweet CLAHE preprocessing and the other stuff.
If you don't need the dataset machinery, just make a tensor out of the 640 by 400 image, nest it in arrays until you have a 4D tensor, pass it into the DNN, and then put the output through the get predictions function, and voilà, you have an array of eye features.
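A rough sketch of that idea (hedged: model and get_predictions stand for the network and helper loaded from the RITnet repo, and the exact image shape may differ):
import numpy as np
import torch
from PIL import Image
# load a grayscale eye image and scale it to [0, 1]
img = np.asarray(Image.open("eye.png").convert("L"), dtype=np.float32) / 255.0
# add channel and batch dimensions -> a 4D tensor, e.g. (1, 1, 400, 640) for a 640x400 image
batch = torch.from_numpy(img).unsqueeze(0).unsqueeze(0)
with torch.no_grad():
    output = model(batch)              # model: the RITnet network loaded elsewhere (assumed)
predictions = get_predictions(output)  # get_predictions: helper from the repo (assumed)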

sre_constants.error error en_core_web_sm from spacy

I installed spaCy and en_core_web_sm separately, and I am trying to load en_core_web_sm with its complete path.
import spacy
import en_core_web_sm
nlp = spacy.load(r'C:\Anaconda3\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.0.0')
doc = nlp("The big grey dog ate all of the chocolate, but fortunately he wasn't sick!")
This leads to the following error:
sre_constants.error: bad escape \p at position 257.
Full stacktrace is as below:
Traceback (most recent call last):
File "C:/Users/43976209/PycharmProjects/NLP_Exercises/spacy_trial.py", line 3, in <module>
nlp = spacy.load(r'C:\ProgramData\Anaconda3\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.0.0')
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\__init__.py", line 30, in load
return util.load_model(name, **overrides)
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\util.py", line 166, in load_model
return load_model_from_path(Path(name), **overrides)
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\util.py", line 211, in load_model_from_path
return nlp.from_disk(model_path)
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\language.py", line 941, in from_disk
util.from_disk(path, deserializers, exclude)
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\util.py", line 654, in from_disk
reader(path / key)
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\language.py", line 928, in <lambda>
p, exclude=["vocab"]
File "tokenizer.pyx", line 526, in spacy.tokenizer.Tokenizer.from_disk
File "tokenizer.pyx", line 572, in spacy.tokenizer.Tokenizer.from_bytes
File "C:\ProgramData\Anaconda3\lib\re.py", line 233, in compile
return _compile(pattern, flags)
File "C:\ProgramData\Anaconda3\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\ProgramData\Anaconda3\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\ProgramData\Anaconda3\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, False)
File "C:\ProgramData\Anaconda3\lib\sre_parse.py", line 415, in _parse_sub
itemsappend(_parse(source, state, verbose))
File "C:\ProgramData\Anaconda3\lib\sre_parse.py", line 526, in _parse
code1 = _class_escape(source, this)
File "C:\ProgramData\Anaconda3\lib\sre_parse.py", line 336, in _class_escape
raise source.error('bad escape %s' % escape, len(escape))
sre_constants.error: bad escape \p at position 257
I am running Python 3.6.1 :: Anaconda 4.4.0 (64-bit) with spaCy version 2.2.3.
These are the latest versions available on my network, and I can't download any newer versions from PyPI.
Please advise.
The problem is that you are trying to load a spacy v2.0 model with spacy v2.2, and they aren't compatible. You can check whether your installed models are compatible with your current version of spacy using the command spacy validate.
Use spacy download en_core_web_sm to download and install a compatible version of the model. Afterwards, the output of spacy validate should include a line like this with a green checkmark:
package en-core-web-sm en_core_web_sm 2.2.5 ✔
You shouldn't need to provide the full path to a site-packages directory because spacy will automatically search for models in the installed packages, so just use spacy.load("en_core_web_sm").
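For example (a minimal check; python -m spacy is just a reliable way to invoke the CLI from the same environment):
python -m spacy download en_core_web_sm
python -m spacy validate
Then, in Python:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("The big grey dog ate all of the chocolate, but fortunately he wasn't sick!")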

Pymunk drawing utils not working

I am struggling to set up pymunk on my Ubuntu 16.04. I am using virtualenv, and I have Python 3.5.2, pymunk 5.3.0 and cffi 1.11.0 installed.
I tried a very simple piece of code first: basically, I created an empty Space and called step on it, and everything worked smoothly. However, when I try to visualize it and create a DrawOptions instance, I get strange errors which I can't decipher. I also tried matplotlib_util and pygame_util, but both failed to create DrawOptions.
This is the code snippet I used:
import pymunk
import pyglet
import pymunk.pyglet_util
s = pymunk.Space()
options = pymunk.pyglet_util.DrawOptions()
s.debug_draw(options)
# s.step(0.02)
This is the output I get:
Loading chipmunk for Linux (64bit) [/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pymunk/libchipmunk.so]
Traceback (most recent call last):
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 167, in _typeof
result = self._parsed_types[cdecl]
KeyError: 'typedef void (*cpSpaceDebugDrawCircleImpl)(cpVect pos, cpFloat angle, cpFloat radius, cpSpaceDebugColor outlineColor, cpSpaceDebugColor fillColor, cpDataPointer data)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 276, in _parse
ast = _get_parser().parse(fullcsource)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/c_parser.py", line 152, in parse
debug=debuglevel)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/ply/yacc.py", line 331, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/ply/yacc.py", line 1199, in parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/ply/yacc.py", line 193, in call_errorfunc
r = errorfunc(token)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/c_parser.py", line 1761, in p_error
column=self.clex.find_tok_column(p)))
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/plyparser.py", line 66, in _parse_error
raise ParseError("%s: %s" % (coord, msg))
pycparser.plyparser.ParseError: <cdef source string>:2:16: before: cpSpaceDebugDrawCircleImpl
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pmtest2.py", line 5, in <module>
options = pymunk.pyglet_util.DrawOptions()
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pymunk/pyglet_util.py", line 89, in __init__
super(DrawOptions, self).__init__()
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pymunk/space_debug_draw_options.py", line 51, in __init__
#ffi.callback("typedef void (*cpSpaceDebugDrawCircleImpl)"
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 375, in callback
cdecl = self._typeof(cdecl, consider_function_as_funcptr=True)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 170, in _typeof
result = self._typeof_locked(cdecl)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 155, in _typeof_locked
type = self._parser.parse_type(cdecl)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 476, in parse_type
return self.parse_type_and_quals(cdecl)[0]
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 479, in parse_type_and_quals
ast, macros = self._parse('void __dummy(\n%s\n);' % cdecl)[:2]
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 278, in _parse
self.convert_pycparser_error(e, csource)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 307, in convert_pycparser_error
raise CDefError(msg)
cffi.error.CDefError: cannot parse "typedef void (*cpSpaceDebugDrawCircleImpl)(cpVect pos, cpFloat angle, cpFloat radius, cpSpaceDebugColor outlineColor, cpSpaceDebugColor fillColor, cpDataPointer data)"
<cdef source string>:2:16: before: cpSpaceDebugDrawCircleImpl
What do you think is causing this? Is it the Python version I use, or is the cffi compilation faulty?
This error happens because a new version of pycparser (which is used by cffi) was released, and that version breaks pymunk 5.3.0 and earlier versions. Yesterday I made a new release of Pymunk, 5.3.1, with a workaround for the problem. If you update your Pymunk version to 5.3.1, it should work.
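For example, inside the virtualenv (a plain pip upgrade, nothing pymunk-specific assumed):
pip install --upgrade pymunk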
