Python http.client.IncompleteRead(0 bytes read) error - python-3.x

I have seen this error on the forum and read through the responses, yet I still don't understand what it is or how to address it. I'm scraping data from 16k links; my script scrapes similar information from each link and writes it to a .csv. Some of the data gets written before this error appears:
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 541, in _get_chunk_left
chunk_left = self._read_next_chunk_size()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 508, in _read_next_chunk_size
return int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 558, in _readall_chunked
chunk_left = self._get_chunk_left()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 543, in _get_chunk_left
raise IncompleteRead(b'')
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "MoviesToDb.py", line 91, in <module>
html = r.read()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 455, in read
return self._readall_chunked()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 565, in _readall_chunked
raise IncompleteRead(b''.join(value))
http.client.IncompleteRead: IncompleteRead(17891 bytes read)
I would like to know: 1) What does this error mean? 2) How do I prevent it?

Try importing the exception:

from http.client import IncompleteRead

and wrap the read in your loop with it:

try:
    html = r.read()
except IncompleteRead:
    # Oh well, reconnect and keep trucking
    continue
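Applied to a scraping loop like the asker's, the pattern might look like this (a minimal sketch; urlopen and the url list are stand-ins, not the asker's actual code):

from http.client import IncompleteRead
from urllib.request import urlopen

def scrape(urls):
    rows = []
    for url in urls:
        try:
            with urlopen(url) as r:
                html = r.read()
        except IncompleteRead:
            # Server died mid-chunk; skip this link and keep trucking.
            continue
        rows.append((url, html))
    return rows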

requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
This happens because the server speaks HTTP/1.0 while Python's http.client defaults to HTTP/1.1. The workaround is to pin the client's protocol version, like this:
Python 3 version, please add:
> import http.client
> http.client.HTTPConnection._http_vsn = 10
> http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0'
Python 2 version, please add (the module is named httplib there):
> import httplib
> httplib.HTTPConnection._http_vsn = 10
> httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'
See the reference: How to deal with "http.client.IncompleteRead: IncompleteRead(0 bytes read)" problem
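Separately, if neither workaround fits, the exception object itself exposes the bytes that were received before the connection dropped, so a caller can keep the truncated body instead of losing the response entirely (a sketch; r is the in-flight http.client response from the question's traceback):

from http.client import IncompleteRead

try:
    html = r.read()
except IncompleteRead as e:
    html = e.partial  # whatever arrived before the server cut the stream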

Related

Currently having an error with youtube-dl in python, any ideas?

This is the error I am getting. I tried to see if anyone else had this issue, but it only came up for sites other than YouTube, and no fix was listed. If needed I can link the Python code as well. This happens when using a play command on a Discord bot built with discord.py-rewrite.
[youtube:search] query " www.youtube.com/watch?v=5abamRO41fE": Downloading page 1
ERROR: query " www.youtube.com/watch?v=5abamRO41fE": Failed to parse JSON (caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Ignoring exception in command play:
Traceback (most recent call last):
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 904, in _parse_json
return json.loads(json_string)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 797, in extract_info
ie_result = ie.extract(url)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 532, in extract
ie_result = self._real_extract(url)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 3014, in _real_extract
return self._get_n_results(query, n)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\youtube.py", line 3200, in _get_n_results
data = self._download_json(
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 893, in _download_json
res = self._download_json_handle(
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 879, in _download_json_handle
return self._parse_json(
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\extractor\common.py", line 908, in _parse_json
raise ExtractorError(errmsg, cause=ve)
youtube_dl.utils.ExtractorError: query " www.youtube.com/watch?v=5abamRO41fE": Failed to parse JSON (caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\core.py", line 85, in wrapped
ret = await coro(*args, **kwargs)
File "g:\Bots\TestBot\test.py", line 574, in play
results = ydl.extract_info(f'ytsearch1: {song_search}', download=True)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 820, in extract_info
self.report_error(compat_str(e), e.format_traceback())
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 625, in report_error
self.trouble(error_message, tb)
File "C:\Users\Jimmy\AppData\Local\Programs\Python\Python38-32\lib\site-packages\youtube_dl\YoutubeDL.py", line 595, in trouble
raise DownloadError(message, exc_info)
youtube_dl.utils.DownloadError: ERROR: query " www.youtube.com/watch?v=5abamRO41fE": Failed to parse JSON (caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
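The error text itself suggests the first step: make sure the installed youtube-dl is current, since its YouTube extractors break whenever the site changes its pages. One way to check from Python (a sketch, assuming the package layout hasn't changed):

from youtube_dl.version import __version__

# Compare this against the latest release listed at https://yt-dl.org/update
print(__version__)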

Autocomplete-Python not working in Atom: error from Jedi, grammar3.9.txt not found

I upgraded my Python version from 3.8 to 3.9. I was using the autocomplete-python extension, and an error came up in my Atom editor: grammar3.9.txt not found.
Looks like this error originated from Jedi. Please do not report such issues in autocomplete-python issue tracker. Report them directly to Jedi. Turn off outputProviderErrors setting to hide such errors in future. Traceback output:
Traceback (most recent call last):
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\__init__.py", line 56, in load_grammar
return _loaded_grammars[path]
KeyError: 'C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\grammar3.9.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\completion.py", line 376, in watch
self._process_request(request)
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\completion.py", line 338, in _process_request
script = jedi.api.Script(
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\api\__init__.py", line 126, in __init__
self._grammar = load_grammar(version='%s.%s' % sys.version_info[:2])
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\__init__.py", line 58, in load_grammar
return _loaded_grammars.setdefault(path, generate_grammar(path))
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\pgen2\pgen.py", line 393, in generate_grammar
p = ParserGenerator(filename)
File "C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\pgen2\pgen.py", line 18, in __init__
stream = open(filename)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser\grammar3.9.txt'
I ran into this error last time as well. Back then I just renamed the file grammar3.7.txt to grammar3.8.txt, but I do not think that is the right way to do it.
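For reference, that rename can at least be scripted instead of done by hand. This is the same stopgap, not a real fix; the path is the one from the traceback, and it assumes a grammar3.8.txt is present from last time:

import shutil

base = r'C:\Users\hp\.atom\packages\autocomplete-python\lib\jedi\parser'
# Same stopgap as renaming by hand: reuse the 3.8 grammar under the
# filename the bundled Jedi expects on Python 3.9.
shutil.copyfile(base + r'\grammar3.8.txt', base + r'\grammar3.9.txt')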

Issues tokenizing text

Started text analysis, and eventually ran into the need to download corpora while using PyCharm 2019 as my IDE. I'm not really sure what the traceback message wants me to do, since I already used PyCharm's own library import interface to enable the corpora. Why does an error stating that the corpora are not available to the code keep reappearing?
I imported TextBlob and tried a line like from textblob import TextBlob... see the code below:
from textblob import TextBlob
TextBlob(train['tweet'][1]).words
print("\nPRINT TOKENIZATION") # own instruction to allow for knowing what code result delivers
print(TextBlob(train['tweet'][1]).words)
...
Tried to install via NLTK; no luck, got an error when downloading 'brown.tei':
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\jcst\AppData\Local\Programs\Python\Python37-32\lib\tkinter__init__.py", line 1705, in call
return self.func(*args)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 1796, in _download
return self._download_threaded(*e)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 2082, in _download_threaded
assert self._download_msg_queue == []
AssertionError
Traceback (most recent call last):
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 35, in decorated
return func(*args, **kwargs)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 57, in tokenize
return nltk.tokenize.sent_tokenize(text)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\tokenize__init__.py", line 104, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 870, in load
opened_resource = _open(resource_url)
Resource File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 995, in open
punkt not found.
Please use the NLTK Downloader to obtain the resource:
return find(path, path + ['']).open()
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 701, in find
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
Searched in:
- 'C:\Users\jcst/nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\share\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\lib\nltk_data'
- 'C:\Users\jcst\AppData\Roaming\nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- ''
raise LookupError(resource_not_found)
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
Searched in:
- 'C:\Users\jcst/nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\share\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\lib\nltk_data'
- 'C:\Users\jcst\AppData\Roaming\nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/jcst/PycharmProjects/TextMining/ModuleImportAndTrainFileIntro.py", line 151, in
TextBlob(train['tweet'][1]).words
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 24, in get
value = obj.dict[self.func.name] = self.func(obj)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\blob.py", line 649, in words
return WordList(word_tokenize(self.raw, include_punc=False))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 73, in word_tokenize
for sentence in sent_tokenize(text))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\base.py", line 64, in itokenize
return (t for t in self.tokenize(text, *args, **kwargs))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 38, in decorated
raise MissingCorpusError()
textblob.exceptions.MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
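The message at the end is the actual fix: download the missing punkt data into one of the searched paths, using the same interpreter/venv that PyCharm runs (straight from the traceback's own advice):

import nltk

# Run this under the PyCharm venv so the data lands in one of the
# searched paths from the traceback (e.g. ...\venv\nltk_data).
nltk.download('punkt')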

python3 UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)

I am having some issues running my program. I have a "masterfile" that contains a list of IPs, one per line, like so:
68.x.0.56
68.x.0.53
but I keep getting the error message shown below the code. I'm not sure what exactly the problem is; I've been searching forums and other Stack Overflow posts, but can't seem to determine the issue.
import socket

def dns_resolver(subdomains):
    print('\n\n########## Checking Subdomains for DNS Resolutions ##########\n')
    queries = []
    with open('masterfile', 'r') as f:
        domains = f.read().splitlines()
    for i in domains:
        try:
            resp = socket.gethostbyname(i)
            print(resp)
            queries.append((i, resp))
        except socket.error:
            pass
    return queries
Traceback (most recent call last):
File "/usr/lib/python3.6/encodings/idna.py", line 165, in encode
raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "subdomain-hunter.py", line 170, in <module>
main()
File "subdomain-hunter.py", line 59, in main
enumerate(DOMAIN)
File "subdomain-hunter.py", line 120, in enumerate
resolvediff = dns_resolver(diff)
File "subdomain-hunter.py", line 142, in dns_resolver
resp = socket.gethostbyname(i)
UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
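For what it's worth, the idna codec raises "label empty or too long" when a hostname label is empty, which is exactly what a blank line (or stray whitespace) in masterfile produces. A guard like this inside the loop, a sketch assuming blank lines are the culprit, keeps the resolver going:

for i in domains:
    i = i.strip()
    if not i:
        continue  # a blank line becomes an empty idna label -> UnicodeError
    try:
        resp = socket.gethostbyname(i)
        queries.append((i, resp))
    except (socket.error, UnicodeError):
        pass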

Python 3.X pycomm.ab_comm.clx CommError: must be str, not bytes

I'm trying to get pycomm to connect to my CLX 5000 processor.
Every time I run my code I get: CommError: must be str, not bytes.
I have looked all over the code and I can't find where the issue is. Everything that is supposed to be in string format is.
I am using Python 3.6.
Here is the code:
import sys
from pycomm.ab_comm.clx import Driver as ClxDriver

c = ClxDriver()
if c.open('172.16.2.161'):
    # Prints (1, 'BOOL') if true; (0, 'BOOL') if false
    print(c.read_tag('Start'))
    c.close()
Here is the error:
C:\Users\shirley\Miniconda3\python.exe C:/Users/shirley/Downloads/pycomm-pycomm3/pycomm-pycomm3/examples/test_clx_comm.py
Traceback (most recent call last):
File "C:\Users\shirley\Miniconda3\lib\site-packages\pycomm\cip\cip_base.py", line 617, in build_header
h += pack_uint(length) # Length UINT
TypeError: must be str, not bytes
Here is the surrounding code in pycomm's cip_base.py build_header; the pack_uint line is the one that raises:

The header is 24 bytes fixed length, and includes the command and the length of the optional data portion.
:return: the headre
"""
try:
    h = command                              # Command UINT
    h += pack_uint(length)                   # Length UINT  <-- raises the TypeError
    h += pack_dint(self._session)            # Session Handle UDINT
    h += pack_dint(0)                        # Status UDINT
    h += self.attribs['context']             # Sender Context 8 bytes
    h += pack_dint(self.attribs['option'])   # Option UDINT
    return h
except Exception as e:
    raise CommError(e)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\shirley\Miniconda3\lib\site-packages\pycomm\cip\cip_base.py", line 786, in open
if self.register_session() is None:
File "C:\Users\shirley\Miniconda3\lib\site-packages\pycomm\cip\cip_base.py", line 635, in register_session
self._message = self.build_header(ENCAPSULATION_COMMAND['register_session'], 4)
File "C:\Users\shirley\Miniconda3\lib\site-packages\pycomm\cip\cip_base.py", line 624, in build_header
raise CommError(e)
pycomm.cip.cip_base.CommError: must be str, not bytes
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/shirley/Downloads/pycomm-pycomm3/pycomm-pycomm3/examples/test_clx_comm.py", line 5, in
if c.open('172.16.2.161'):
File "C:\Users\shirley\Miniconda3\lib\site-packages\pycomm\cip\cip_base.py", line 793, in open
raise CommError(e)
pycomm.cip.cip_base.CommError: must be str, not bytes
Process finished with exit code 1
After banging my head against the wall trying to figure this out on my own, I just switched to using pylogix:
https://github.com/dmroeder/pylogix
It worked the first time I ran it, and it's reasonably fast.
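For anyone following that advice, reading the same tag with pylogix looks roughly like this (a sketch based on the pylogix README, not tested against the asker's PLC):

from pylogix import PLC

with PLC() as comm:
    comm.IPAddress = '172.16.2.161'
    ret = comm.Read('Start')
    # ret is a Response with TagName, Value and Status fields
    print(ret.TagName, ret.Value, ret.Status)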
I had the same problem:
TypeError: must be str, not bytes.
The error appears because you are running under Python 3.x; I made the same mistake and used Python 3.6 instead of Python 2.6 or 2.7.
Change Python 3.x to 2.6 or 2.7 (I used 2.7).
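The root cause is visible in the quoted build_header code: it concatenates a str command with bytes from pack_uint, which Python 2 allowed (str and bytes were the same type) and Python 3 forbids:

# Python 3.6 reproduces the exact message from the traceback:
'command' + b'\x04\x00'
# TypeError: must be str, not bytes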
This was a bug that was fixed recently.
I have it working with pycomm 1.0.8 and Python 3.7.3.
