ValueError: too many values to unpack (expected 2) in pytube - python-3.x

I'm trying to write a script that downloads videos from YouTube, but I keep getting the error below.
>>> from pytube import YouTube
>>> vid = YouTube("https://www.youtube.com/watch?v=dfnCAmr569k")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.local/lib/python3.8/site-packages/pytube/__main__.py", line 92, in __init__
self.descramble()
File "/home/user/.local/lib/python3.8/site-packages/pytube/__main__.py", line 140, in descramble
apply_signature(self.player_config_args, fmt, self.js)
File "/home/user/.local/lib/python3.8/site-packages/pytube/extract.py", line 225, in apply_signature
cipher = Cipher(js=js)
File "/home/user/.local/lib/python3.8/site-packages/pytube/cipher.py", line 31, in __init__
var, _ = self.transform_plan[0].split(".")
ValueError: too many values to unpack (expected 2)

This is most likely a bug in the parsing function in pytube's cipher.py. Copy and paste the bug-fixed source into your pytube/cipher.py.
Related thread: https://github.com/nficano/pytube/issues/641#issuecomment-665268989
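For illustration, the crash comes from line 31 of cipher.py splitting the first transform-plan entry on every "." and unpacking into exactly two names; newer player JavaScript produces entries with extra dots, so the unpack fails. A minimal defensive sketch of that one line (not the exact patch from the linked issue, which also reworks the regexes that build the plan) caps the split:

# Hypothetical sketch: maxsplit=1 keeps everything after the first dot
# together, so the two-name unpack cannot raise ValueError.
var, _ = self.transform_plan[0].split(".", 1)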

Related

Prgm_MAWS: "IndexError: list index out of range"

MAWS.py is a four-year-old program that uses Amber/AmberTools as its calculation platform. Although I followed the instructions in the program's user guide, I do not know how to debug and resolve the raise self._value IndexError: list index out of range.
I replaced the force field (ff) specified in MAWS.py with a newer Amber ff.
As I am not familiar with Python code or with where the error could be generated, I recommend downloading the MAWS.py code from the GitHub repository: https://github.com/igemsoftware/Heidelberg_15
python3 MAWS_rev1.py -b 0.01 -i 200 -s 200 -l 15 -t 0.01 -f pdb -y HYBRID Prot1a.frcmod /home/bcramer/workdir-amber/Prot_1/
['DGN', 'DAN', 'DTN', 'DCN']
Choosing from candidates ...
Constructing Ligand/Aptamer complex ...
Constructing Ligand/Aptamer complex ...
etc..................
Loading Aptamer/Ligand complex ...
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/bcramer/miniconda3/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/bcramer/miniconda3/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "MAWS_rev1.py", line 1090, in initial
ligand_range = get_ligand_range(aptamer_top.topology)
File "MAWS_rev1.py", line 197, in get_ligand_range
return [get_ligand(topology)[0], len(get_ligand(topology))]
IndexError: list index out of range
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "MAWS_rev1.py", line 1357, in <module>
positions_and_Ntides = loop()
File "MAWS_rev1.py", line 1237, in loop
pos_Nt_S_task = pool.map(initial, alphabet)
File "/home/bcramer/miniconda3/lib/python3.7/multiprocessing/pool.py", line 268, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/bcramer/miniconda3/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
IndexError: list index out of range
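For what it's worth, the failing frame is get_ligand_range on line 197, which indexes get_ligand(topology)[0]; the IndexError therefore means get_ligand returned an empty list, which fits a force-field swap changing residue names so the ligand is no longer recognised. A minimal sketch of a guard (get_ligand is the script's own helper; the message text is illustrative):

def get_ligand_range(topology):
    # Call the helper once and fail with a readable message instead of
    # an IndexError when no ligand atoms were found in the topology.
    ligand = get_ligand(topology)
    if not ligand:
        raise ValueError("No ligand atoms found in topology; check that the "
                         "residue names match the new Amber force field.")
    return [ligand[0], len(ligand)]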

Issues tokenizing text

I started analysing text and eventually ran into the need to download corpora, using PyCharm 2019 as my IDE. I'm not really sure what the traceback message wants me to do, since I already used PyCharm's own library-import interface to enable the corpora. Why does an error stating that the corpora are not available to the code keep reappearing?
I imported TextBlob and tried a line like from textblob import TextBlob... see the code below.
from textblob import TextBlob
TextBlob(train['tweet'][1]).words
print("\nPRINT TOKENIZATION") # own instruction to allow for knowing what code result delivers
print(TextBlob(train['tweet'][1]).words)
….
I tried to install via nltk, with no luck: I get an error when downloading 'brown.tei'.
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\jcst\AppData\Local\Programs\Python\Python37-32\lib\tkinter__init__.py", line 1705, in call
return self.func(*args)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 1796, in _download
return self._download_threaded(*e)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\downloader.py", line 2082, in _download_threaded
assert self._download_msg_queue == []
AssertionError
Traceback (most recent call last):
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 35, in decorated
return func(*args, **kwargs)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 57, in tokenize
return nltk.tokenize.sent_tokenize(text)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\tokenize__init__.py", line 104, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 870, in load
opened_resource = _open(resource_url)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 995, in _open
return find(path, path + ['']).open()
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\nltk\data.py", line 701, in find
raise LookupError(resource_not_found)
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
Searched in:
- 'C:\Users\jcst/nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\share\nltk_data'
- 'C:\Users\jcst\PycharmProjects\TextMining\venv\lib\nltk_data'
- 'C:\Users\jcst\AppData\Roaming\nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/jcst/PycharmProjects/TextMining/ModuleImportAndTrainFileIntro.py", line 151, in
TextBlob(train['tweet'][1]).words
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 24, in get
value = obj.dict[self.func.name] = self.func(obj)
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\blob.py", line 649, in words
return WordList(word_tokenize(self.raw, include_punc=False))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\tokenizers.py", line 73, in word_tokenize
for sentence in sent_tokenize(text))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\base.py", line 64, in itokenize
return (t for t in self.tokenize(text, *args, **kwargs))
File "C:\Users\jcst\PycharmProjects\TextMining\venv\lib\site-packages\textblob\decorators.py", line 38, in decorated
raise MissingCorpusError()
textblob.exceptions.MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
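Both error messages are asking for the same thing: the Punkt tokenizer models have to be downloaded into one of the directories NLTK searches. A minimal sketch, assuming you want them in C:\nltk_data (one of the paths listed in the traceback above):

import nltk

# Download the Punkt sentence-tokenizer models into a directory that is
# already on NLTK's search path from the traceback above.
nltk.download('punkt', download_dir=r'C:\nltk_data')

Alternatively, run python -m textblob.download_corpora from a terminal with the PyCharm venv activated, which fetches everything TextBlob needs in one go.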

python3 UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)

I am having some issues running my program. I have a "masterfile" that contains a list of IPs, one per line, like so:
68.x.0.56
68.x.0.53
I keep getting the error message below, and I'm not sure exactly what the problem is. I've been searching forums and other Stack Overflow posts, but I can't seem to determine what the issue is.
def dns_resolver(subdomains):
    print('\n\n########## Checking Subdomains for DNS Resolutions ##########\n')
    queries = []
    with open('masterfile', 'r') as f:
        domains = f.read().splitlines()
    for i in domains:
        try:
            resp = socket.gethostbyname(i)
            print(resp)
            queries.append((i, resp))
        except socket.error:
            pass
    return queries
Traceback (most recent call last):
File "/usr/lib/python3.6/encodings/idna.py", line 165, in encode
raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "subdomain-hunter.py", line 170, in <module>
main()
File "subdomain-hunter.py", line 59, in main
enumerate(DOMAIN)
File "subdomain-hunter.py", line 120, in enumerate
resolvediff = dns_resolver(diff)
File "subdomain-hunter.py", line 142, in dns_resolver
resp = socket.gethostbyname(i)
UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
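The idna codec raises "label empty or too long" when it is handed an empty hostname, which is exactly what a blank line in masterfile becomes after splitlines(). A minimal sketch of the resolver with blank lines skipped (and malformed labels tolerated):

import socket

def dns_resolver(path='masterfile'):
    queries = []
    with open(path, 'r') as f:
        for line in f:
            host = line.strip()
            if not host:  # a blank line would become an empty idna label
                continue
            try:
                resp = socket.gethostbyname(host)
                queries.append((host, resp))
            except (socket.error, UnicodeError):
                # UnicodeError still covers malformed entries, e.g. "a..b"
                pass
    return queries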

Constant error when using ImageMagick with Python

I wrote some code to convert PDF files to JPG images, but I have hit an issue. Sometimes a .pdf file converts fine, but sometimes there is an error: the first temp file can't be read. I checked the temp folder: when conversion works, all the created temp files are deleted successfully, and when the error happens, only the first temp file is deleted. I can't resolve it. Can anyone help?
code:
from wand.image import Image

# note the raw string: in a plain literal, "\123" is an octal escape
with open(r"C:\software\1234.pdf", 'rb') as f:
    image_binary = f.read()
with Image(blob=image_binary, resolution=400) as img:
Traceback (most recent call last):
File "C:\xxxx\eclipse-workspace\test\123\pdf2jpg.py", line 16, in
with Image(blob=image_binary,resolution=400) as img:
File "C:\xxxx\AppData\Local\Programs\Python\Python37\lib\site-packages\wand\image.py", line 2742, in init
self.read(blob=blob, resolution=resolution)
File "C:\xxxx\AppData\Local\Programs\Python\Python37\lib\site-packages\wand\image.py", line 2822, in read
self.raise_exception()
File "C:\xxxx\AppData\Local\Programs\Python\Python37\lib\site-packages\wand\resource.py", line 222, in raise_exception
raise e
wand.exceptions.CorruptImageError: unable to read image data `C:/xxxx/AppData/Local/Temp/magick-22200zyw89Zpq8IFJ1' # error/pnm.c/ReadPNMImage/1344
Exception ignored in:
Traceback (most recent call last):
File "C:\xxxx\AppData\Local\Programs\Python\Python37\lib\site-packages\wand\resource.py", line 232, in del
File "C:\xxxx\AppData\Local\Programs\Python\Python37\lib\site-packages\wand\image.py", line 2767, in destroy
TypeError: object of type 'NoneType' has no len()
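Since the CorruptImageError appears only while ImageMagick rasterises later pages into temp files, one workaround sketch is to convert page by page via img.sequence; this assumes the Ghostscript delegate that ImageMagick uses for PDF input is installed, and the output name pattern is illustrative:

from wand.image import Image

# Convert each PDF page to its own JPG instead of decoding the whole
# document in one pass.
with Image(filename=r"C:\software\1234.pdf", resolution=400) as pdf:
    for number, page in enumerate(pdf.sequence):
        with Image(page) as img:
            img.format = 'jpeg'
            img.save(filename='page-{0}.jpg'.format(number))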

Python http.client.IncompleteRead(0 bytes read) error

I have seen this error on the forum and read through the responses, yet I still don't understand what it is or how to address it. I'm scraping data from 16k links; my script scrapes similar information from each link and writes it to a .csv, and some of the data gets written before this error.
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 541, in _get_chunk_left
chunk_left = self._read_next_chunk_size()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 508, in _read_next_chunk_size
return int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 558, in _readall_chunked
chunk_left = self._get_chunk_left()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 543, in _get_chunk_left
raise IncompleteRead(b'')
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "MoviesToDb.py", line 91, in <module>
html = r.read()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 455, in read
return self._readall_chunked()
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 565, in _readall_chunked
raise IncompleteRead(b''.join(value))
http.client.IncompleteRead: IncompleteRead(17891 bytes read)
I would like to know: 1) What does this error mean? 2) How do I prevent it?
Try to import:
from http.client import IncompleteRead
and add this in your script:
except IncompleteRead:
    # Oh well, reconnect and keep trucking
    continue
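In context, assuming the 16k links are fetched in a loop with urllib (the question does not show the loop, so the names here are illustrative):

from http.client import IncompleteRead
from urllib.request import urlopen

links = ["https://example.com"]  # stands in for the 16k scraped URLs
for link in links:
    try:
        with urlopen(link) as r:
            html = r.read()
    except IncompleteRead:
        # Oh well, reconnect and keep trucking
        continue
    # ... parse html and write the row to the .csv ...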
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
This happens because the server speaks HTTP protocol version 1.0 while Python's client uses 1.1. The solution is to pin the client's protocol version, like this:
For Python 3, add:
import http.client
http.client.HTTPConnection._http_vsn = 10
http.client.HTTPConnection._http_vsn_str = 'HTTP/1.0'
For Python 2, add:
import httplib
httplib.HTTPConnection._http_vsn = 10
httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'
See the reference: How to deal with the "http.client.IncompleteRead: IncompleteRead(0 bytes read)" problem.
