I'm new to Python, which is the first problem. Second, I am trying to automate the task of adding vector point layers from spreadsheets (xlsx files) with Python.
The task can be done manually with the "Add Spreadsheet Layer" plugin.
I have a folder with roughly 20 xlsx files that need to be added to the QGIS project as vector point layers.
The computer runs Windows 7, and the Python in question is the one bundled with QGIS 3.4. The plugin I want to control through Python is called "Add Spreadsheet Layer".
To check whether the core task of adding a spreadsheet layer actually works, I tried the following code snippet:
from qgis.core import *
import processing

processing.run("qgis:createpointslayerfromtable",
               {'INPUT': r'C:\Users\Desktop\PlayItAll\Test.xlsx',
                'XFIELD': 'X_Pos',
                'YFIELD': 'Y_Pos',
                'ZFIELD': None,
                'MFIELD': None,
                'TARGET_CRS': QgsCoordinateReferenceSystem('EPSG:4326'),
                'OUTPUT': 'memory'})
It produces this error:
File "C:/PROGRA1/QGIS31.4/apps/qgis/./python/plugins\processing\core\Processing.py", line 183, in runAlgorithm
raise QgsProcessingException(msg)
I contacted the plugin's author and he gave me this code to try:
import processing

processing.runAndLoadResults("qgis:createpointslayerfromtable",
                             {'INPUT': r'C:\Users\username\Desktop\Delete\test.xlsx',
                              'XFIELD': 'Longitude',
                              'YFIELD': 'Latitude',
                              'ZFIELD': None,
                              'MFIELD': None,
                              'TARGET_CRS': QgsCoordinateReferenceSystem('EPSG:4326'),
                              'OUTPUT': 'memory'})
It worked for him, but not for me.
I got this in the Processing tab:
2019-07-03T13:19:43 CRITICAL Traceback (most recent call last):
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python/plugins\processing\algs\qgis\PointsLayerFromTable.py", line 112, in processAlgorithm
fields, wkb_type, target_crs)
Exception: unknown
2019-07-03T13:19:43 CRITICAL Traceback (most recent call last):
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python/plugins\processing\algs\qgis\PointsLayerFromTable.py", line 112, in processAlgorithm
fields, wkb_type, target_crs)
Exception: unknown
2019-07-03T13:19:43 CRITICAL There were errors executing the algorithm.
The "python warnings" tab showed this:
2019-07-03T13:19:43 WARNING warning:__console__:1: ResourceWarning:
unclosed file
traceback: File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console.py", line 575, in runScriptEditor
self.tabEditorWidget.currentWidget().newEditor.runScriptCode()
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console_editor.py", line 629, in runScriptCode
.format(filename.replace("\\", "/"), sys.getfilesystemencoding()))
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console_sci.py", line 635, in runCommand
more = self.runsource(src)
File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\console\console_sci.py", line 665, in runsource
return super(ShellScintilla, self).runsource(source, filename, symbol)
File "C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\code.py", line 74, in runsource
self.runcode(code)
File "C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "", line 1, in
I'm using an LCI dataset from an Excel file, which I have used several times to conduct LCA with Brightway2.
I created a new product in that same Excel file, and the first steps of the import went fine, namely:
imp = bw.ExcelImporter(os.path.join(ROOT_DIR, "LCI_CW.xlsx"))
imp.apply_strategies()
imp.match_database(fields=('name', 'unit', 'location'))
imp.match_database("ecoinvent 3.7 cut-off",
fields=('name', 'unit', 'location'))
imp.statistics()
When I check with imp.write_excel(), the activities match, etc.
But when I use imp.write_database(), I get this error:
SyntaxError: at expr='ecoinvent 3.7 cut-off'
Any idea where this mistake could be hidden? I have checked my use of 'ecoinvent 3.7 cut-off', etc.
More details below:
Traceback (most recent call last):
File "C:\...\asteval\asteval.py", line 254, in parse
out = ast.parse(text)
File "C:\...\lib\ast.py", line 50, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 1
ecoinvent 3.7 cut-off
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\...\IPython\core\interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "C:\...\Local\Temp/ipykernel_14440/3637353444.py", line 1, in <module>
imp.write_database()
File "C:\...\bw2io\importers\excel.py", line 277, in write_database
super(ExcelImporter, self).write_database(**kwargs)
File "C:\...\bw2io\importers\base_lci.py", line 266, in write_database
self.write_database_parameters(activate_parameters, delete_existing)
File "C:\...\bw2io\importers\excel.py", line 270, in write_database_parameters
super(ExcelImporter, self).write_database_parameters(
File "C:\...\bw2io\importers\base_lci.py", line 118, in write_database_parameters
parameters.new_database_parameters(
File "C:\...\bw2data\parameters.py", line 1319, in new_database_parameters
DatabaseParameter.recalculate(database)
File "C:\...\bw2data\parameters.py", line 348, in recalculate
new_symbols = get_new_symbols(data.values(), set(data))
File "C:\...\bw2data\parameters.py", line 1526, in get_new_symbols
nf.generic_visit(interpreter.parse(formula))
File "C:\...\asteval\asteval.py", line 256, in parse
self.raise_exception(None, msg='Syntax Error', expr=text)
File "C:\...\asteval\asteval.py", line 244, in raise_exception
raise exc(self.error_msg)
File "<string>", line unknown
SyntaxError: at expr='ecoinvent 3.7 cut-off'
The string 'ecoinvent 3.7 cut-off' is showing up in the definition of a database parameter, where it is parsed as a formula, and it isn't a valid Python expression, i.e. it is the same as typing this at a Python prompt:
In [1]: ecoinvent 3.7 cut-off
File "<ipython-input-1-db6705818daf>", line 1
ecoinvent 3.7 cut-off
^
SyntaxError: invalid syntax
We could help debug the Excel file if you are comfortable sharing it. Otherwise I don't see a way to help understand the specific formatting error.
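In the meantime, one way to hunt for the offending cell is to check candidate formula strings the same way asteval does, with the standard library's ast module. The loop below assumes the importer keeps the parsed parameter rows on imp.database_parameters, which is only a guess based on the traceback, so adjust it to whatever your bw2io version actually exposes:

import ast

def parses_as_expression(text):
    """Return True if text would be accepted by ast.parse (and hence by asteval)."""
    try:
        ast.parse(text)
        return True
    except SyntaxError:
        return False

# Hypothetical scan over the imported database parameters
for param in getattr(imp, "database_parameters", []) or []:
    formula = param.get("formula")
    if formula and not parses_as_expression(str(formula)):
        print("Formula does not parse:", param)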
I upgraded my Python version from 3.8 to 3.9. I was using the autocomplete-python extension, and an error appeared in my Atom editor saying that grammar3.9.txt was not found.
Looks like this error originated from Jedi. Please do not report such issues in autocomplete-python issue tracker. Report them directly to Jedi. Turn off outputProviderErrors setting to hide such errors in future. Traceback output:
Traceback (most recent call last):
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\__init__.py", line 56, in load_grammar
return _loaded_grammars[path]
KeyError: 'C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\grammar3.9.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\completion.py", line 376, in watch
self._process_request(request)
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\completion.py", line 338, in _process_request
script = jedi.api.Script(
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\api\__init__.py", line 126, in __init__
self.grammar = load_grammar(version='%s.%s' % sys.version_info[:2])
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\__init__.py", line 58, in load_grammar
return _loaded_grammars.setdefault(path, generate_grammar(path))
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\pgen2\pgen.py", line 393, in generate_grammar
p = ParserGenerator(filename)
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\pgen2\pgen.py", line 18, in __init__
stream = open(filename)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\grammar3.9.txt'
I ran into this error the last time as well. Back then I just renamed the file grammar3.7.txt to grammar3.8.txt, but I do not think that is the right way to do it.
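For reference, the equivalent stop-gap this time would look something like the following. It is only a workaround, since the copied file will not actually describe Python 3.9 syntax; the path comes from the traceback above:

import os
import shutil

parser_dir = r"C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser"
# Workaround only: reuse the existing 3.8 grammar under the name Jedi looks for
shutil.copyfile(os.path.join(parser_dir, "grammar3.8.txt"),
                os.path.join(parser_dir, "grammar3.9.txt"))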
I started doing text analysis and eventually ran into the need to download corpora, using PyCharm 2019 as my IDE. I'm not really sure what the traceback message wants me to do, since I have already used PyCharm's own library-import interface to enable the corpora. Why does an error stating that the corpora are not available to the code keep reappearing?
I imported TextBlob and tried a line like from textblob import TextBlob; see the code below:
from textblob import TextBlob
TextBlob(train['tweet'][1]).words
print("\nPRINT TOKENIZATION") # own instruction to allow for knowing what code result delivers
print(TextBlob(train['tweet'][1]).words)
….
I tried to install the data via the NLTK downloader, with no luck; I get an error when downloading 'brown.tei':
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\jcst\AppData\Local\Programs\Python\Python37-32\lib\tkinter__init__.py", line 1705, in call
return self.func(*args)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 1796, in _download
return self._download_threaded(*e)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 2082, in _download_threaded
assert self._download_msg_queue == []
AssertionError
Traceback (most recent call last):
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 35, in decorated
return func(*args, **kwargs)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 57, in tokenize
return nltk.tokenize.sent_tokenize(text)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\tokenize\__init__.py", line 104, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 870, in load
opened_resource = _open(resource_url)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 995, in _open
return find(path, path + ['']).open()
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 701, in find
raise LookupError(resource_not_found)
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
Searched in:
- 'C:\Users\jcst/nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\share\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\lib\nltk_data'
- 'C:\Users\jcst\AppData\Roaming\nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/jcst/PycharmProjects/TextMining/ModuleImportAndTrainFileIntro.py", line 151, in
TextBlob(train['tweet'][1]).words
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 24, in get
value = obj.dict[self.func.name] = self.func(obj)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\blob.py", line 649, in words
return WordList(word_tokenize(self.raw, include_punc=False))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 73, in word_tokenize
for sentence in sent_tokenize(text))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\base.py", line 64, in itokenize
return (t for t in self.tokenize(text, *args, **kwargs))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 38, in decorated
raise MissingCorpusError()
textblob.exceptions.MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
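For what it's worth, the "Searched in" list above shows exactly where NLTK looks for the data, so one likely fix is to download punkt directly into the project's virtualenv. A minimal sketch, where download_dir is just one of the paths from that list:

import nltk

# Put the punkt tokenizer into a directory NLTK actually searches
# (taken from the "Searched in" list above).
nltk.download('punkt',
              download_dir=r'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data')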
I am struggling to set up pymunk on Ubuntu 16.04. I am using virtualenv, and I have Python 3.5.2, pymunk 5.3.0, and cffi 1.11.0 installed.
I tried a very simple piece of code first: I created an empty Space and called step on it, and everything worked smoothly. However, when I try to visualize it and create a DrawOptions instance, I get strange errors that I can't decipher. I also tried matplotlib_util and pygame_util, but both fail to create DrawOptions.
This is the code snippet I used:
import pymunk
import pyglet
import pymunk.pyglet_util
s = pymunk.Space()
options = pymunk.pyglet_util.DrawOptions()
s.debug_draw(options)
# s.step(0.02)
This is the output I get:
Loading chipmunk for Linux (64bit) [/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pymunk/libchipmunk.so]
Traceback (most recent call last):
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 167, in _typeof
result = self._parsed_types[cdecl]
KeyError: 'typedef void (*cpSpaceDebugDrawCircleImpl)(cpVect pos, cpFloat angle, cpFloat radius, cpSpaceDebugColor outlineColor, cpSpaceDebugColor fillColor, cpDataPointer data)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 276, in _parse
ast = _get_parser().parse(fullcsource)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/c_parser.py", line 152, in parse
debug=debuglevel)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/ply/yacc.py", line 331, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/ply/yacc.py", line 1199, in parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/ply/yacc.py", line 193, in call_errorfunc
r = errorfunc(token)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/c_parser.py", line 1761, in p_error
column=self.clex.find_tok_column(p)))
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pycparser/plyparser.py", line 66, in _parse_error
raise ParseError("%s: %s" % (coord, msg))
pycparser.plyparser.ParseError: <cdef source string>:2:16: before: cpSpaceDebugDrawCircleImpl
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "pmtest2.py", line 5, in <module>
options = pymunk.pyglet_util.DrawOptions()
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pymunk/pyglet_util.py", line 89, in __init__
super(DrawOptions, self).__init__()
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/pymunk/space_debug_draw_options.py", line 51, in __init__
#ffi.callback("typedef void (*cpSpaceDebugDrawCircleImpl)"
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 375, in callback
cdecl = self._typeof(cdecl, consider_function_as_funcptr=True)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 170, in _typeof
result = self._typeof_locked(cdecl)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/api.py", line 155, in _typeof_locked
type = self._parser.parse_type(cdecl)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 476, in parse_type
return self.parse_type_and_quals(cdecl)[0]
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 479, in parse_type_and_quals
ast, macros = self._parse('void __dummy(\n%s\n);' % cdecl)[:2]
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 278, in _parse
self.convert_pycparser_error(e, csource)
File "/home/wm/.virtualenvs/cv/lib/python3.5/site-packages/cffi-1.11.0-py3.5-linux-x86_64.egg/cffi/cparser.py", line 307, in convert_pycparser_error
raise CDefError(msg)
cffi.error.CDefError: cannot parse "typedef void (*cpSpaceDebugDrawCircleImpl)(cpVect pos, cpFloat angle, cpFloat radius, cpSpaceDebugColor outlineColor, cpSpaceDebugColor fillColor, cpDataPointer data)"
<cdef source string>:2:16: before: cpSpaceDebugDrawCircleImpl
What do you think is causing this? Is it the Python version I use, or is the cffi compilation faulty?
This error happens because a new version of pycparser (which is used by cffi) was released, and that version breaks Pymunk 5.3.0 and earlier versions. Yesterday I made a new release of Pymunk, 5.3.1, with a workaround for the problem. If you update your Pymunk version to 5.3.1 it should work.
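For example, after upgrading inside the virtualenv with pip install --upgrade pymunk, a quick sanity check is to confirm which version actually gets imported before re-running the script:

import pymunk

# Should report 5.3.1 or later once the upgrade has been picked up
print(pymunk.version)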
Hi, I am unable to use print_control_identifiers() for my desktop application.
I am using a) Python 3.5.3 (32-bit, since the application I am automating is 32-bit)
b) pywinauto 0.6.2.
My simple code is as follows:
from pywinauto import Application
app = Application(backend="uia")
app = Application().start(r"C:\Program Files (x86)\Trane\TRACE 3D Plus\TRACE™ 3D Plus.exe")
app['TRACE™ 3D Plus'].print_control_identifiers()
When I run the above code, I get the following in the command prompt:
Traceback (most recent call last):
File "D:\Python\lib\site-packages\pywinauto\application.py", line 243, in __resolve_control
criteria)
File "D:\Python\lib\site-packages\pywinauto\timings.py", line 424, in wait_until_passes
raise err
pywinauto.timings.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "inspect.py", line 4, in <module>
app['TRACE\u2122 3D Plus'].print_control_identifiers()
File "D:\Python\lib\site-packages\pywinauto\application.py", line 573, in print_control_identifiers
this_ctrl = self.__resolve_control(self.criteria)[-1]
File "D:\Python\lib\site-packages\pywinauto\application.py", line 246, in __resolve_control
raise e.original_exception
File "D:\Python\lib\site-packages\pywinauto\timings.py", line 402, in wait_until_passes
func_val = func(*args)
File "D:\Python\lib\site-packages\pywinauto\application.py", line 188, in __get_ctrl
dialog = self.backend.generic_wrapper_class(findwindows.find_element(**criteria[0]))
File "D:\Python\lib\site-packages\pywinauto\findwindows.py", line 84, in find_element
elements = find_elements(**kwargs)
File "D:\Python\lib\site-packages\pywinauto\findwindows.py", line 294, in find_elements
elements = findbestmatch.find_best_control_matches(best_match, wrapped_elems)
File "D:\Python\lib\site-packages\pywinauto\findbestmatch.py", line 534, in find_best_control_matches
raise MatchError(items = name_control_map.keys(), tofind = search_text)
pywinauto.findbestmatch.MatchError: Could not find 'TRACE\u2122 3D Plus' in 'dict_keys([])'
Can anyone tell me what the problem is and what I can do to resolve it?
Thanks in advance!
Replace these commands
app = Application(backend="uia")
app = Application().start(r"C:\Program Files (x86)\Trane\TRACE 3D Plus\TRACE™ 3D Plus.exe")
with this one:
app = Application(backend="uia").start(r'"C:\Program Files (x86)\Trane\TRACE 3D Plus\TRACE™ 3D Plus.exe"')
This is because the second line re-creates the app object with the default backend="win32" when no argument is passed, so the "uia" backend from the first line is discarded. If this is hard to follow, I'd recommend taking a Python course first; basic Python programming skills are necessary to understand what's going on here.
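Putting it together, a corrected version of the snippet could look roughly like this; the window title pattern and the timeout are assumptions on my part, since the main window may take a while to appear:

from pywinauto import Application

app = Application(backend="uia").start(
    r'"C:\Program Files (x86)\Trane\TRACE 3D Plus\TRACE™ 3D Plus.exe"')

# Wait for the main window to show up before dumping the control tree
main = app.window(title_re=".*TRACE.*3D Plus.*")
main.wait("visible", timeout=30)
main.print_control_identifiers()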