dicttoxml throws AttributeError - python-3.x

I'm trying to check how dicttoxml works, but I get this error from inside the dicttoxml module.
I am starting the program from Geany.
Can anybody help?
Thanks!
import dicttoxml

myDict = {'myKey': "theirValue"}
xml = dicttoxml.dicttoxml(myDict)
The output is:
martin#saturn:~/it/python/python_work$ /bin/sh /tmp/geany_run_script_Q2NH3Z.sh
0.32000000000000006
1.6666666666666667
['1', '6666666666666667']
Traceback (most recent call last):
File "dicttoxmlExmp.py", line 4, in <module>
xml =dicttoxml.dicttoxml(myDict);
File "/home/martin/.local/lib/python3.6/site-packages/dicttoxml.py", line 393, in dicttoxml
convert(obj, ids, attr_type, item_func, cdata, parent=custom_root),
File "/home/martin/.local/lib/python3.6/site-packages/dicttoxml.py", line 176, in convert
if isinstance(obj, numbers.Number) or type(obj) in (str, unicode):
AttributeError: module 'numbers' has no attribute 'Number'

You can simply run:
python yourscript
instead of running it as a shell script.
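For example, assuming the script is saved as dicttoxmlExmp.py (the name in your traceback) and a print(xml) is added at the end, a successful direct run would look roughly like this; the exact XML is an assumption, since dicttoxml's defaults can differ between versions:
python3 dicttoxmlExmp.py
b'<?xml version="1.0" encoding="UTF-8" ?><root><myKey type="str">theirValue</myKey></root>'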

Related

How can I execute a Python module on Python 3 if I encounter print without parentheses

I want to launch the pybrain tests on Python 3, but I get this error:
Traceback (most recent call last):
File "runtests.py", line 107, in <module>
runner.run(make_test_suite())
File "runtests.py", line 72, in make_test_suite
test_package = __import__(test_package_path, fromlist=module_names)
File "B:\msys64\mingw64\bin\WinPython\Python373\lib\site-packages\pybrain\tests\__init__.py", line 1, in <module>
from helpers import gradientCheck, buildAppropriateDataset, xmlInvariance, \
File "B:\msys64\mingw64\bin\WinPython\Python373\Lib\site-packages\pybrain\tests\helpers.py", line 42
print 'Module has no parameters'
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print('Module has no parameters')?
I looked at helpers.py and found that the prints are without parentheses (as statements, which I think was Python 2 syntax). How can I fix that? Can I import some module, for example six, so it executes despite this problem? I don't know exactly what six does.
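The SyntaxError's own suggestion points at the usual fix: rewrite the Python 2 print statements as function calls, either by hand or with the 2to3 tool that ships with Python. A sketch (back up the file before rewriting it in place):

# Python 2 statement that fails on Python 3:
# print 'Module has no parameters'
# Python 3 equivalent:
print('Module has no parameters')

# Or rewrite a whole file automatically with the stdlib tool:
# 2to3 -w helpers.py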

Python3 StringIO has no clone

The following code works in Python 2 but not in Python 3:
import http.client
from io import StringIO

if __name__ == '__main__':
    res = http.client.HTTPMessage(StringIO(u"headers"))
    print(str(res))
[]$ python3 test.py
Traceback (most recent call last):
File "test.py", line 9, in <module>
print(str(res))
File "/usr/lib64/python3.9/email/message.py", line 135, in __str__
return self.as_string()
File "/usr/lib64/python3.9/email/message.py", line 158, in as_string
g.flatten(self, unixfrom=unixfrom)
File "/usr/lib64/python3.9/email/generator.py", line 97, in flatten
policy = policy.clone(max_line_length=self.maxheaderlen)
AttributeError: '_io.StringIO' object has no attribute 'clone'
I'm currently porting old Python 2 code over to Python 3.
This is a dummy test of a problem I ran into.
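For comparison, here is a minimal sketch of one Python 3 way to turn raw header text into a printable message object, assuming the goal is just to parse header text. It uses the email module (which http.client.HTTPMessage builds on) instead of constructing HTTPMessage from a StringIO; the header string is made up for illustration:

from email import message_from_string

if __name__ == '__main__':
    # Parse header text into an email.message.Message and print it
    res = message_from_string("X-Dummy: headers\r\n\r\n")
    print(str(res))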

Pytest stops running with AttributeError (module 'html' has no attribute 'td') in pytest-html

In my pytest setup, I have included a conftest.py for customizing the HTML report.
But the following error comes up when the test run tries to write the HTML report.
"C:\Users\gobiraaj.anandavel\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pluggy\callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "C:\Projects\TripTickAT\conftest.py", line 14, in pytest_html_results_table_row
INTERNALERROR> cells.insert(2, html.td(report.status_code))
INTERNALERROR> AttributeError: module 'html' has no attribute 'td'
Traceback (most recent call last):
File "C:\Users\gobiraaj.anandavel\AppData\Local\Programs\Python\Python37-32\Scripts\pytest-script.py", line 11, in <module>
load_entry_point('pytest==5.2.2', 'console_scripts', 'pytest')()
File "C:\Users\gobiraaj.anandavel\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pytest-5.2.2-py3.7.egg\_pytest\config\__init__.py", line
File "C:\Projects\TripTickAT\conftest.py", line 8, in pytest_html_results_table_header
cells.insert(2, html.th('Status_code'))
AttributeError: module 'html' has no attribute 'th'
conftest.py
from datetime import datetime
import html.parser
import pytest

@pytest.mark.optionalhook
def pytest_html_results_table_header(cells):
    cells.insert(2, html.th('Status_code'))
    cells.insert(1, html.th('Time', class_='sortable time', col='time'))
    cells.pop()

@pytest.mark.optionalhook
def pytest_html_results_table_row(report, cells):
    cells.insert(2, html.td(report.status_code))
    cells.insert(1, html.td(datetime.utcnow(), class_='col-time'))
    cells.pop()

@pytest.mark.hookwrapper
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
Use the following html import instead:
from py.xml import html
Initially PyCharm will not recognize this import, but that does not affect execution. If you want, you can change the PyCharm settings to ignore this warning.
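A minimal sketch of the adjusted conftest.py with that import; the hook body is taken unchanged from the question, only the import line differs:

from datetime import datetime
from py.xml import html  # replaces 'import html.parser'
import pytest

@pytest.mark.optionalhook
def pytest_html_results_table_header(cells):
    cells.insert(2, html.th('Status_code'))
    cells.insert(1, html.th('Time', class_='sortable time', col='time'))
    cells.pop()

The other hooks stay exactly as in the question; only the import changes.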

Issues tokenizing text

I started text analysis and eventually ran into the need to download corpora, using PyCharm 2019 as the IDE. I'm not really sure what the traceback message wants me to do, since I already used PyCharm's own library import interface to enable the corpora. Why does an error stating that the corpora are not available to the code keep reappearing?
I imported TextBlob and tried a line like from textblob import TextBlob... see the code below:
from textblob import TextBlob
TextBlob(train['tweet'][1]).words
print("\nPRINT TOKENIZATION") # own instruction to allow for knowing what code result delivers
print(TextBlob(train['tweet'][1]).words)
….
I tried to install via nltk, with no luck... I get an error when downloading 'brown.tei':
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\jcst\AppData\Local\Programs\Python\Python37-32\lib\tkinter__init__.py", line 1705, in call
return self.func(*args)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 1796, in _download
return self._download_threaded(*e)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 2082, in _download_threaded
assert self._download_msg_queue == []
AssertionError
Traceback (most recent call last):
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 35, in decorated
return func(*args, **kwargs)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 57, in tokenize
return nltk.tokenize.sent_tokenize(text)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\tokenize__init__.py", line 104, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 870, in load
opened_resource = _open(resource_url)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 995, in _open
return find(path, path + ['']).open()
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 701, in find
raise LookupError(resource_not_found)
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
Searched in:
- 'C:\Users\jcst/nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\share\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\lib\nltk_data'
- 'C:\Users\jcst\AppData\Roaming\nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/jcst/PycharmProjects/TextMining/ModuleImportAndTrainFileIntro.py", line 151, in
TextBlob(train['tweet'][1]).words
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 24, in get
value = obj.dict[self.func.name] = self.func(obj)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\blob.py", line 649, in words
return WordList(word_tokenize(self.raw, include_punc=False))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 73, in word_tokenize
for sentence in sent_tokenize(text))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\base.py", line 64, in itokenize
return (t for t in self.tokenize(text, *args, **kwargs))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 38, in decorated
raise MissingCorpusError()
textblob.exceptions.MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
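Since the Tkinter download window crashed with the AssertionError above, one option is to fetch the data from the command line instead of the GUI. A sketch: the first command comes from the error message itself, the second is the NLTK downloader's module interface for just the missing tokenizer:
python -m textblob.download_corpora
python -m nltk.downloader punkt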

TypeError: Can't convert 'bytes' object to str implicitly for tweepy

from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener

ckey = ''
csecret = ''
atoken = ''
asecret = ''

class listener(StreamListener):
    def on_data(self, data):
        print(data)
        return True

    def on_error(self, status):
        print(status)

auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)

twitterStream = Stream(auth, listener())
twitterStream.filter(track="cricket")
This code filters the Twitter stream based on the track keyword, but I am getting the following traceback after running the code. Can somebody please help?
Traceback (most recent call last):
File "lab.py", line 23, in <module>
twitterStream.filter(track="car".strip())
File "C:\Python34\lib\site-packages\tweepy\streaming.py", line 430, in filter
self._start(async)
File "C:\Python34\lib\site-packages\tweepy\streaming.py", line 346, in _start
self._run()
File "C:\Python34\lib\site-packages\tweepy\streaming.py", line 286, in _run
raise exception
File "C:\Python34\lib\site-packages\tweepy\streaming.py", line 255, in _run
self._read_loop(resp)
File "C:\Python34\lib\site-packages\tweepy\streaming.py", line 298, in _read_loop
line = buf.read_line().strip()
File "C:\Python34\lib\site-packages\tweepy\streaming.py", line 171, in read_line
self._buffer += self._stream.read(self._chunk_size)
TypeError: Can't convert 'bytes' object to str implicitly
I'm assuming you're using tweepy 3.4.0. The issue you've raised is 'open' on GitHub (https://github.com/tweepy/tweepy/issues/615).
Two workarounds:
1)
In streaming.py:
I changed line 161 to
self._buffer += self._stream.read(read_len).decode('UTF-8', 'ignore')
and line 171 to
self._buffer += self._stream.read(self._chunk_size).decode('UTF-8', 'ignore')
and then reinstalled via python3 setup.py install on my local copy of tweepy.
2)
Remove the tweepy 3.4.0 module and install 3.3.0 with the command: pip install -I tweepy==3.3.0
Hope that helps,
-A
You can't do twitterStream.filter(track="car".strip()). Why are you adding the strip()? It serves no purpose there.
track must be a str type before you invoke a connection to Twitter's Streaming API, and tweepy is preventing that connection because you're trying to add strip().
If for some reason you need it, you can do track_word = 'car'.strip() and then track=track_word, but even that is unnecessary, because:
>>> print('car'.strip())
car
Also, the error you're getting does not match the code you have listed; the code that's in your question should work fine.
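For reference, a minimal sketch of the call without strip(), as suggested above; passing track as a list follows the form shown in tweepy's documentation examples, and 'cricket' is the term from the question:

# No strip() needed; track terms are usually given as a list
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["cricket"])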
