Failed to get followers count of 'b'metapodcode'' ~empty list - python-3.x

With the June 2022 update, there have been some changes in Instagram's APIs. There was a discussion about changing or updating this code; you can find the discussion here. I did some research on this topic and found a fix here, but that code is written in JavaScript. If it were integrated into InstaPy, it seems the whole problem would be solved. InstaPy is an application I love, but I've been looking for a solution to this problem for days and I'm not good at programming languages. I'm trying to get help here as a last resort.
INFO [2022-07-22 03:20:06] [metapodcod] Failed to get following count of 'b'metapodcod'' ~empty list
WARNING [2022-07-22 03:20:06] [metapodcod] Unable to save account progress, skipping data update
b"'NoneType' object has no attribute 'get'"
INFO [2022-07-22 03:20:07] [metapodcod] Sessional Live Report:
|> No any statistics to show
[Session lasted 2.5 minutes]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-07-22 03:20:07] [metapodcod] Session ended!
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
Traceback (most recent call last):
File "C:\Users\metapodcod\InstaPy\yeni.py", line 41, in <module>
with smart_run(session):
File "C:\Users\metapodcod\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
return next(self.gen)
File "C:\Users\metapodcod\InstaPy\instapy\util.py", line 1983, in smart_run
session.login()
File "C:\Users\metapodcod\InstaPy\instapy\instapy.py", line 475, in login
self.followed_by = log_follower_num(self.browser, self.username, self.logfolder)
File "C:\Users\metapodcod\InstaPy\instapy\print_log_writer.py", line 21, in log_follower_num
followed_by = getUserData("graphql.user.edge_followed_by.count", browser)
File "C:\Users\metapodcod\InstaPy\instapy\util.py", line 501, in getUserData
get_key = shared_data.get("entry_data").get("ProfilePage")
AttributeError: 'NoneType' object has no attribute 'get'
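For context, the AttributeError comes from the chained .get() calls in instapy/util.py: after Instagram's changes, the shared data on the profile page no longer contains entry_data, so shared_data.get("entry_data") returns None and the next .get() blows up. A minimal sketch (not the actual fix, just an illustration of the missing guard) of how that lookup could fail gracefully instead:

def get_profile_page(shared_data):
    # Return the ProfilePage entry if the old shared-data layout is present,
    # otherwise None instead of raising AttributeError on a missing key.
    if not shared_data:
        return None
    entry_data = shared_data.get("entry_data") or {}
    profile_pages = entry_data.get("ProfilePage") or []
    return profile_pages[0] if profile_pages else None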

Related

python-docx: Error opening file - "Bad magic number for file header" / "EOFError"

The company I work for distributes document assembly software that uses the python-docx library. The software runs a function on every generated document that opens the document and does a simple search and replace for characters that weren't escaped properly (namely "&amp;" -> "&").
FYI: the actual document assembly uses python-docx-template. However, the error happens after the document has already been assembled, and the error is triggered by the search-and-replace function, which only uses python-docx.
Recently, we've had a few cases where documents are failing to generate on client deployments. They're throwing an error on this line where the document object is instantiated:
doc = Document(docx=Path(doc_path))
We've seen two errors:
raise BadZipFile("Bad magic number for file header")
and
raise EOFError
The software is widely used and we've never had this issue before. We can't reproduce it in our test environments. The error has only started appearing in the past week but has shown up for several clients after they were updated. The software will fail to generate a particular document some number of times but will succeed after a few tries.
We've only seen it happen with one document in particular, but all documents use the same search and replace function, and like I said the error is only intermittent with the problem document.
There have been no changes in code to this search and replace function and I can't think of any other meaningful difference to our doc assembly process that would explain this.
I'm having a lot of trouble finding info on what could cause this specifically with the python-docx library. Is this a sign that the generated document is corrupted? If anyone is able to shed some light on possible causes that would be very helpful!
Here's the stack trace for both errors:
Bad magic number...
File "/home/user/app/application/document_assembly/core_da.py", line 524, in translate_ampersands
doc = Document(docx=Path(doc_path))
File "/home/user/app-venv/lib/python3.6/site-packages/docx/api.py", line 25, in Document
document_part = Package.open(docx).main_document_part
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/package.py", line 116, in open
pkg_reader = PackageReader.from_file(pkg_file)
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 36, in from_file
phys_reader, pkg_srels, content_types
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 69, in _load_serialized_parts
for partname, blob, reltype, srels in part_walker:
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 104, in _walk_phys_parts
part_srels = PackageReader._srels_for(phys_reader, partname)
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 83, in _srels_for
rels_xml = phys_reader.rels_xml_for(source_uri)
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/phys_pkg.py", line 129, in rels_xml_for
rels_xml = self.blob_for(source_uri.rels_uri)
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/phys_pkg.py", line 108, in blob_for
return self._zipf.read(pack_uri.membername)
File "/usr/lib/python3.6/zipfile.py", line 1337, in read
with self.open(name, "r", pwd) as fp:
File "/usr/lib/python3.6/zipfile.py", line 1396, in open
raise BadZipFile("Bad magic number for file header")
zipfile.BadZipFile: Bad magic number for file header
EOFError
File "/home/user/app/application/document_assembly/core_da.py", line 524, in translate_ampersands
doc = Document(docx=Path(doc_path))
File "/home/user/app-venv/lib/python3.6/site-packages/docx/api.py", line 25, in Document
document_part = Package.open(docx).main_document_part
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/package.py", line 116, in open
pkg_reader = PackageReader.from_file(pkg_file)
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 36, in from_file
phys_reader, pkg_srels, content_types
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 69, in _load_serialized_parts
for partname, blob, reltype, srels in part_walker:
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 110, in _walk_phys_parts
for partname, blob, reltype, srels in next_walker:
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/pkgreader.py", line 105, in _walk_phys_parts
blob = phys_reader.blob_for(partname)
File "/home/user/app-venv/lib/python3.6/site-packages/docx/opc/phys_pkg.py", line 108, in blob_for
return self._zipf.read(pack_uri.membername)
File "/usr/lib/python3.6/zipfile.py", line 1338, in read
return fp.read()
File "/usr/lib/python3.6/zipfile.py", line 858, in read
buf += self._read1(self.MAX_N)
File "/usr/lib/python3.6/zipfile.py", line 940, in _read1
data += self._read2(n - len(data))
File "/usr/lib/python3.6/zipfile.py", line 975, in _read2
raise EOFError
EOFError
Both of these errors indicate that the specified file is not a valid zip archive. So I expect something is going wrong with the writing of the file (by the step prior to find-and-replace).
I would start by stopping the process after writing the file and seeing if the file is present on the filesystem and whether it can be opened manually using Word. This should bisect the problem and narrow it down to a writing problem or a reading problem.
It's possible that an error is raised on the write and isn't being caught, leaving an empty or un-flushed (open) file. So having a way to monitor that step is probably a good idea; writing to a log comes to mind as one way to manage that.
Inspecting the particular cases where there is a failure and managing to reproduce it are going to be critically important. If that's not possible, it's going to be a tough road of guesswork and disappointment on both sides.
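As a concrete starting point, here is a small sketch (names assumed, not the poster's actual code) of a pre-flight check that logs whether the freshly written .docx is a readable zip archive before handing it to python-docx:

import logging
import zipfile
from pathlib import Path

from docx import Document

log = logging.getLogger(__name__)

def open_docx_safely(doc_path):
    # Log and bail out if the generated file is missing, empty,
    # or not a valid zip archive, instead of crashing inside python-docx.
    path = Path(doc_path)
    if not path.exists() or path.stat().st_size == 0:
        log.error("Generated file missing or empty: %s", path)
        return None
    if not zipfile.is_zipfile(path):
        log.error("Generated file is not a valid zip archive: %s", path)
        return None
    return Document(docx=path)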
It turns out some code was added recently before this started happening, which effectively sent a duplicate request to the server to generate the document in question. These requests seem to run in parallel - which is surprising because I would predict the conflict to happen much more frequently (same template file being used, generated document writing to the same directory).
It seems that if the requests happened with a particular timing, the "find-and-replace" operation of one request would run into the "save" operation of the other. In other words, I think one request was trying to open a document that was in the process of being saved.
So I'm glad it's not something more obscure with the python-docx library, which would have been a lot harder to nail down.
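If the parallel requests can't be removed, one way to avoid the open-during-save race is to write the generated document to a temporary file and atomically rename it into place, so readers never see a half-written .docx. A sketch (save_docx_atomically is a hypothetical helper, not part of python-docx or the poster's code):

import os
import tempfile

def save_docx_atomically(document, final_path):
    # Write to a temp file in the same directory, then rename into place;
    # os.replace is atomic on the same filesystem.
    directory = os.path.dirname(final_path)
    fd, tmp_path = tempfile.mkstemp(suffix=".docx", dir=directory)
    os.close(fd)
    document.save(tmp_path)  # python-docx Document.save accepts a path
    os.replace(tmp_path, final_path)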

ete3 error : could not be translated into taxids! - Bioinformatics

I am using the ete3 (http://etetoolkit.org/) package in Python within a bioinformatics pipeline I wrote myself.
While running this script, I get the following error. I have used this script a lot on other datasets, which don't have any issues and have not given any errors. I am using Python 3.5 and Miniconda. Any fixes/insights to resolve this error will be appreciated.
[Error]
Traceback (most recent call last):
File "/Users/d/miniconda2/envs/py35/bin/ete3", line 11, in <module>
load_entry_point('ete3==3.1.1', 'console_scripts', 'ete3')()
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/tools/ete.py", line 95, in main
_main(sys.argv)
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/tools/ete.py", line 268, in _main
args.func(args)
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/tools/ete_ncbiquery.py", line 168, in run
collapse_subspecies=args.collapse_subspecies)
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 434, in get_topology
lineage = id2lineage[sp]
KeyError: 3
Continuing from the comment section for better formatting.
Assuming that sp contains 3, as suggested by the error message (do check this yourself), you can inspect the ete3 code (current version) and, following its definition, trace it to this function:
def get_lineage_translator(self, taxids):
"""Given a valid taxid number, return its corresponding lineage track as a
hierarchically sorted list of parent taxids.
So I went to https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi and checked whether 3 is a valid taxid, and it appears that it is not.
# relevant section from ncbi taxonomy browser
No result found in the Taxonomy database for taxonomy id
3
It appears to me that your only option is to trace how the 3 gets computed, because the root cause is simply that taxid 3 is not a valid taxid number, as the function requires.
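As a sketch of how to confirm this before get_topology fails (assuming a local NCBITaxa database is already installed), ete3's get_taxid_translator returns only the taxids it can resolve, so invalid ones can be reported and dropped:

from ete3 import NCBITaxa

ncbi = NCBITaxa()
taxids = [9606, 10090, 3]  # hypothetical input; 3 is the offending value

# get_taxid_translator returns a dict of taxid -> name for valid taxids only
valid = ncbi.get_taxid_translator(taxids)
missing = set(taxids) - set(valid)
if missing:
    print("could not be translated into taxids:", missing)

tree = ncbi.get_topology(list(valid))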

How to track down errors in Flopy-created MODFLOW output

For a few weeks I've been writing MODFLOW models with Flopy in Python. I chose to write models in Flopy because of the transparency of Python. However, once in a while my model doesn't run, but it doesn't tell me where it goes wrong, which makes error handling difficult.
At the moment my model gives an error when running. It raises the exception with the message I added manually (and which is commonly used):
success, mfoutput = mf.run_model(silent=False, pause=False)
if not success:
    raise Exception('MODFLOW did not terminate normally.')
The error I got is:
FloPy is using the following executable to run the model: /usr/bin/mf2005
MODFLOW-2005
U.S. GEOLOGICAL SURVEY MODULAR FINITE-DIFFERENCE GROUND-WATER FLOW MODEL
Version 1.12.00 2/3/2017
Using NAME file: spangen_mod.nam
Run start date and time (yyyy/mm/dd hh:mm:ss): 2019/04/23 16:12:39
Traceback (most recent call last):
File "<ipython-input-85-c7ecca798eed>", line 1, in <module>
runfile('/Users/user/Desktop/modflow/model.py', wdir='/Users/user/Desktop/modflow')
File "/Users/user/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 704, in runfile
execfile(filename, namespace)
File "/Users/user/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/user/Desktop/modflow/model.py", line 223, in <module>
raise Exception('MODFLOW did not terminate normally.')
Exception: MODFLOW did not terminate normally.
Besides, all the MODFLOW files are created, but the .hds and .cbc files contain zero bytes.
My question: does anyone have tips for tracking down these kinds of errors efficiently?
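As a starting point, here is a sketch of the kind of diagnostics that usually help (assuming mf is the existing flopy.modflow.Modflow object): capture the console output with report=True and read the end of the listing file, which is where MODFLOW-2005 reports input errors even when the .hds and .cbc files stay empty.

success, buff = mf.run_model(silent=False, pause=False, report=True)

if not success:
    # With report=True, buff holds the MODFLOW console output line by line
    for line in buff:
        print(line)

    # The listing file usually names the package or record MODFLOW rejected;
    # ".list" is the Flopy default extension and may differ in your model
    with open(mf.name + ".list") as f:
        print("".join(f.readlines()[-40:]))

    raise Exception("MODFLOW did not terminate normally.")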

Unable to download file (Web Scraping) - OSError [Errorno22] - invalid argument

I wrote a program in Python 3 which scrapes and downloads the pages of a Wikipedia category to a certain depth and places them in a directory.
The problem I am facing is that if, during execution, the algorithm encounters any Wikipedia page whose title contains a special character (*, #, $, etc.), it fails with the error trace below.
An example of the special character wiki page is as follows:
https://en.wikipedia.org/wiki/Eden*
The error trace is as follows:
Traceback (most recent call last):
File "F:\Pen Drive 8 GB\PDF\Code\wiki.py", line 103, in <module>
d.search_and_store("Biomedical_engineering", subcategory_depth=2, path=PATH)
File "F:\Pen Drive 8 GB\PDF\Code\wiki.py", line 98, in search_and_store
self.search_and_store(subcat_result['title'], subcategory_depth-1, path)
File "F:\Pen Drive 8 GB\PDF\Code\wiki.py", line 98, in search_and_store
self.search_and_store(subcat_result['title'], subcategory_depth-1, path)
File "F:\Pen Drive 8 GB\PDF\Code\wiki.py", line 76, in search_and_store
if self.write_page_text(path, page_result):
File "F:\Pen Drive 8 GB\PDF\Code\wiki.py", line 44, in write_page_text
txt_file = open(file_path, 'w')
OSError: [Errno 22] Invalid argument: 'F:\\Code\\Wikipedia\\DATASETS\\Biomedical Engineering/Eden*.txt'
As you can clearly see, the algorithm scrapes pages without special characters just fine, so why is it raising the aforementioned error?
The MWE is very large; if anybody asks, I can share it.
Please suggest something, as I have been trying this for a long time and I'm frustrated, and I don't even know what I am doing wrong. Any small help is deeply appreciated.
Thanks in advance.
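The Errno 22 comes from the '*' in the file name: Windows does not allow the characters \ / : * ? " < > | in file names, so open() rejects the path. A minimal sketch (safe_filename is a made-up helper, not from the original script) of sanitizing the page title before building the path:

import re
from pathlib import Path

def safe_filename(title):
    # Replace characters Windows forbids in file names with an underscore
    return re.sub(r'[\\/:*?"<>|]', "_", title)

base = Path(r"F:\Code\Wikipedia\DATASETS\Biomedical Engineering")
file_path = base / (safe_filename("Eden*") + ".txt")
with open(file_path, "w", encoding="utf-8") as txt_file:
    txt_file.write("page text here")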

Error I do not understand [closed]

I generated this error in Python 3.5:
Traceback (most recent call last):
File "C:\Users\Owner\AppData\Local\Programs\Python\Python35\lib\shelve.py", line 111, in __getitem__
value = self.cache[key]
KeyError: 'P4_vegetables'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Owner\Documents\Python\Allotment\allotment.py", line 217, in
main_program()
File "C:\Users\Owner\Documents\Python\Allotment\allotment.py", line 195, in main_program
main_program()
File "C:\Users\Owner\Documents\Python\Allotment\allotment.py", line 49, in main_program
print("Plot 4 - ", s["P4_vegetables"])
File "C:\Users\Owner\AppData\Local\Programs\Python\Python35\lib\shelve.py", line 113, in __getitem__
f = BytesIO(self.dict[key.encode(self.keyencoding)])
File "C:\Users\Owner\AppData\Local\Programs\Python\Python35\lib\dbm\dumb.py", line 141, in __getitem__
pos, siz = self._index[key] # may raise KeyError
KeyError: b'P4_vegetables'
It has been a while, but in case somebody comes across this: The following error
Traceback (most recent call last):
File "filepath", line 111, in __getitem__
value = self.cache[key]
KeyError: 'item1'
can occur if one attempts to retrieve the item outside of the with block. Once execution leaves the with block the shelf is closed, so any operation performed on it after that point is invalid. For example,
import shelve

with shelve.open('ShelfTest') as item:
    item['item1'] = 'item 1'
    item['item2'] = 'item 2'
    item['item3'] = 'item 3'
    item['item4'] = 'item 4'
    print(item['item1'])  # no error, since the shelf file is still open

# If we try to print after the file is closed,
# an error will be thrown. This is quite common.
print(item['item1'])  # error: the shelf has been closed, so the item can't be retrieved
Hope this helps anyone who comes across a similar issue to the original poster's.
It means that the cache dictionary (or whatever mapping type it is) does not contain the key 'P4_vegetables'. Make sure you add the key before reading it.
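For the original traceback, here is a small sketch (the shelf filename is a placeholder) of guarding the read so a missing key doesn't raise KeyError:

import shelve

with shelve.open('allotment_data') as s:  # placeholder filename
    # .get() returns a default instead of raising KeyError for a missing key
    print("Plot 4 - ", s.get("P4_vegetables", "nothing planted yet"))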

Resources