I am trying to make my program save some tkinter string variables to a text file.
Here is the code:
def saveFile():
    file = filedialog.asksaveasfile(mode='w')
    if file != None:
        file.write(seat1, seat2, seat3, seat4, seat5)
        file.close()
Then I get an error when I try to save the file:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Thonny\lib\tkinter\__init__.py", line 1705, in __call__
return self.func(*args)
File "E:\Teacher Plan\seat-plan.py", line 64, in saveFile
file.write(seat1, seat2, seat3, seat4, seat5)
TypeError: write() takes exactly one argument (5 given)
Any ideas?
Put ALL your variables into one variable: seat[0], seat[1], seat[2], seat[3], seat[4], and then save seat.
I've used this in one of my projects and it works fine.
First you define your main variable at the start:
ess=[[],[],[],[],[],[]]
Then you do your work with the variables; when you're done, you append them into the single variable and save it to a file:
ess[0].append(ess_e)
ess[1].append(essv)
ess[2].append(essp)
ess[3].append(essq)
ess[4].append(ess_s)
ess[5].append(ess_d)
file = open("relais.txt", "w")
file.write(repr(ess) + "\n")
file.close()
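For the original error itself: file.write() accepts exactly one string, so another minimal fix is to join the values first. A sketch using io.StringIO in place of the file returned by the tkinter save dialog, and plain strings in place of the StringVar values:

```python
import io

def save_seats(f, seats):
    # write() takes exactly one string argument, so join the
    # values into a single newline-separated string first
    f.write("\n".join(seats))

# Stand-in for the file object returned by asksaveasfile(mode='w')
buf = io.StringIO()
save_seats(buf, ["A1", "B2", "C3", "D4", "E5"])
```

With real tkinter StringVar objects you would pass [seat1.get(), seat2.get(), ...] instead, since write() needs strings, not StringVar instances.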
I'm working with a Django project (I'm pretty new to Django) and running into an issue passing a model object between my view and a Celery task.
I am taking input from a form which contains several ModelChoiceField fields and using the selected objects in a Celery task. When I queue the task (from the post method in the view) using someTask.delay(x, y, z), where x, y and z are various objects from the form's ModelChoiceFields, I get the error object of type <someModelName> is not JSON serializable.
That said, if I create a simple test function and pass any of the same objects from the form into it, I get the expected behavior, and the name of the object selected in the form is logged.
def test(object):
    logger.debug(object.name)
I have done some poking based on the above error and found Django serializers, which allow a workaround: serializing the object with serializers.serialize('json', [template]) in the view before passing it to the Celery task.
I can then access the object in the Celery task by using template = json.loads(template)[0].get('fields') to access its required bits as a dictionary. While this works, it does seem a bit inelegant, and I wanted to see if there is something I am missing here.
I'm obviously open to any feedback/guidance here; however, my main questions are:
Why do I get the object...is not JSON serializable error when passing a model object into a Celery task, but not when passing one to my simple test function?
Is the approach of using Django serializers before queueing the Celery task considered acceptable/correct, or is there a cleaner way to achieve this goal?
Any suggestions would be greatly appreciated.
Traceback:
I tried to post the full traceback here as well however including that caused the post to get flagged as 'this looks like spam'
Internal Server Error: /build/
Traceback (most recent call last):
File "/home/tech/sandbox_project/venv/lib/python3.8/site-packages/kombu/serialization.py", line 49, in _reraise_errors
yield
File "/home/tech/sandbox_project/venv/lib/python3.8/site-packages/kombu/serialization.py", line 220, in dumps
payload = encoder(data)
File "/home/tech/sandbox_project/venv/lib/python3.8/site-packages/kombu/utils/json.py", line 65, in dumps
return _dumps(s, cls=cls or _default_encoder,
File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/tech/sandbox_project/venv/lib/python3.8/site-packages/kombu/utils/json.py", line 55, in default
return super().default(o)
File "/usr/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Template is not JSON serializable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tech/sandbox_project/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
Add these lines to settings.py:
# Project/settings.py
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
Then, instead of passing the object, send JSON containing the id/pk. If you're using a model instance, call the task like this:
test.delay({'pk': 1})
A Django model instance is not available in the Celery environment, since the worker runs in a different process.
How can you get the model instance inside the task, then? You can do something like this:
def import_django_instance():
    """
    Makes the Django environment available to tasks.
    """
    import django
    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Project.settings')
    django.setup()

# task
@shared_task(name="simple_task")
def simple_task(data):
    import_django_instance()
    from app.models import AppModel
    pk = data.get('pk')
    instance = AppModel.objects.get(pk=pk)
    # your operation
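The root cause behind question 1 can be shown without Celery at all: .delay() hands the arguments to kombu, which JSON-encodes them for the broker, while a plain function call never serializes anything. A stand-alone sketch, with a stand-in Template class in place of the real model:

```python
import json

class Template:
    # Stand-in for the Django model from the question
    def __init__(self, pk, name):
        self.pk = pk
        self.name = name

t = Template(pk=1, name="monthly report")

# Passing the object itself fails exactly as in the traceback
try:
    json.dumps(t)
    err = None
except TypeError as e:
    err = str(e)

# Passing only the pk serializes fine; the task re-fetches the instance
payload = json.dumps({'pk': t.pk})
```

This is why passing {'pk': instance.pk} and re-fetching inside the task is the usual pattern: the broker only ever sees JSON-safe data.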
Hello,
I am using CherryPy to host the GUI of an application that takes JSON files from Qualtrics and drops them into a MySQL server.
The code seems to work for most surveys but for some I get the following error:
Traceback (most recent call last):
File "C:\Users\jam66\AppData\Local\Programs\Python\Python37-32\lib\site-packages\cherrypy\_cprequest.py", line 627, in respond
self._do_respond(path_info)
File "C:\Users\jam66\AppData\Local\Programs\Python\Python37-32\lib\site-packages\cherrypy\_cprequest.py", line 686, in _do_respond
response.body = self.handler()
File "C:\Users\jam66\AppData\Local\Programs\Python\Python37-32\lib\site-packages\cherrypy\lib\encoding.py", line 264, in __call__
ct.params['charset'] = self.find_acceptable_charset()
File "C:\Users\jam66\AppData\Local\Programs\Python\Python37-32\lib\site-packages\cherrypy\lib\encoding.py", line 173, in find_acceptable_charset
if encoder(self.default_encoding):
File "C:\Users\jam66\AppData\Local\Programs\Python\Python37-32\lib\site-packages\cherrypy\lib\encoding.py", line 114, in encode_string
for chunk in self.body:
TypeError: 'bool' object is not iterable
Any help on beginning to understand this issue is appreciated.
My guess is that some of your exposed methods are returning a boolean. You have to return a string or an iterable, unless you are using the json tool, in which case the dictionary-to-string conversion is handled by the tool.
As a way to debug it, just print or log the value that will be returned and verify its type with the type() function.
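A minimal stand-alone illustration of that debugging step (no CherryPy needed; the handler names are hypothetical):

```python
def broken_handler():
    # Returning a bool is what triggers "'bool' object is not iterable":
    # CherryPy tries to iterate over the response body when encoding it
    return True

def fixed_handler():
    # Return a string (or an iterable of strings/bytes) instead
    return "success"

result = broken_handler()
# The suggested debug step: check the type before the framework encodes it
kind = type(result).__name__
```

Logging `type(result)` at the end of each exposed method should quickly reveal which survey path returns the boolean.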
I'm trying to catch a Null Byte Exception in the last line of a CSV file:
def Catch(csv_filename):
    with open(csv_filename, 'r+') as File:
        File_reader = csv.reader(File, delimiter="\t", dialect='excel-tab')
        a = []
        for row in File_reader:
            try:
                a.append(row)
            except csv.Error:
                return "Error"

Catch("/../DataLogger.csv")
but an _csv.Error is raised:
Traceback (most recent call last):
File "/../test.py", line 21, in <module>
Catch("/../DataLogger.csv")
File "/../test.py", line 13, in Catch
for row in File_reader:
_csv.Error: line contains NULL byte
I don't understand why the exception isn't caught by the function.
I'm using Python 3.4.
That's because the exception occurs as soon as your code reaches the for statement.
No exception can happen on the a.append line; the csv module does its work during the iteration of the for loop.
Once you know that, the fix is trivial:
try:
    for row in File_reader:
        a.append(row)
except csv.Error:
    return "Error"
Note that one could be tempted to use a direct conversion to a list, a = list(File_reader), but since the exception would take place inside the list conversion, a wouldn't be filled. That would be a nuisance if the start of the file contains useful data you want to read (though since you're returning an error string, it doesn't seem to matter here).
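The point generalizes to any csv.Error. A runnable sketch, using csv.field_size_limit to provoke an error mid-iteration as a stand-in for the NUL-byte error, showing that only a try around the loop catches it:

```python
import csv
import io

def read_rows(f):
    reader = csv.reader(f, delimiter="\t")
    rows = []
    try:
        # csv.Error is raised while iterating, so the for loop
        # itself must sit inside the try block
        for row in reader:
            rows.append(row)
    except csv.Error:
        return "Error"
    return rows

# Shrink the field size limit so an oversized field raises
# csv.Error partway through the second file
csv.field_size_limit(8)
good = io.StringIO("a\tb\nc\td\n")
bad = io.StringIO("ok\tfine\n" + "x" * 100 + "\toops\n")
```

Calling read_rows(good) returns the parsed rows, while read_rows(bad) hits csv.Error on the oversized field and returns "Error", just as the NUL-byte case would.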
def Catch(csv_filename):
    with open(csv_filename, 'r+') as File:
        try:
            File_reader = csv.reader(File, delimiter="\t", dialect='excel-tab')
            a = []
            for row in File_reader:
                a.append(row)
        except csv.Error:
            return "Error"

Catch("/../DataLogger.csv")
The whole parsing has to be inside the try/except, instead of only the a.append.
I'm trying to download one message using the GMail API. Below is my traceback:
pdiracdelta@pdiracdelta-Laptop:~/GMail Metadata$ ./main.py
<oauth2client.client.OAuth2Credentials object at 0x7fd6306c4d30>
False
Traceback (most recent call last):
File "./main.py", line 105, in <module>
main()
File "./main.py", line 88, in main
service = discovery.build('gmail', 'v1', http=http)
File "/usr/lib/python3/dist-packages/oauth2client/util.py", line 137, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib/python3/dist-packages/googleapiclient/discovery.py", line 197, in build
resp, content = http.request(requested_url)
File "/usr/lib/python3/dist-packages/oauth2client/client.py", line 562, in new_request
redirections, connection_type)
File "/usr/lib/python3/dist-packages/httplib2/__init__.py", line 1138, in request
headers = self._normalize_headers(headers)
File "/usr/lib/python3/dist-packages/httplib2/__init__.py", line 1106, in _normalize_headers
return _normalize_headers(headers)
File "/usr/lib/python3/dist-packages/httplib2/__init__.py", line 194, in _normalize_headers
return dict([ (key.lower(), NORMALIZE_SPACE.sub(value, ' ').strip()) for (key, value) in headers.items()])
File "/usr/lib/python3/dist-packages/httplib2/__init__.py", line 194, in <listcomp>
return dict([ (key.lower(), NORMALIZE_SPACE.sub(value, ' ').strip()) for (key, value) in headers.items()])
TypeError: sequence item 0: expected str instance, bytes found
And below is a snippet of code which produces the credential object and boolean print just before the Traceback. It confirms that the credentials object is valid and is being used as suggested by Google:
credentials = get_credentials()
print(credentials)
print(str(credentials.invalid))
http = credentials.authorize(httplib2.Http())
service = discovery.build('gmail', 'v1', http=http)
What is going wrong here? It seems to me that I am not at fault, since the problem can be traced back to service = discovery.build('gmail', 'v1', http=http) which uses nothing but valid information (implying one of the packages used further in the stack cannot handle this valid information). Is this a bug, or am I doing something wrong?
UPDATE: it seems that the _normalize_headers function has now been patched. Updating your python version should fix the problem (I'm using 3.6.7 now).
Solved with help from Padraic Cunningham, who identified the problem as an encoding issue. I solved it by applying .decode('utf-8') to the header keys and values (headers is a dict) if they are bytes objects (which are apparently UTF-8 encoded), transforming them into Python 3 strings. This is probably due to some Python 2/3 mixing in the Google API.
The fix also includes changing all code from the Google API examples to Python 3 code (e.g. exception handling), but most importantly my workaround involves editing /usr/lib/python3/dist-packages/httplib2/__init__.py at lines 193-194, redefining the _normalize_headers(headers) function as:
def _normalize_headers(headers):
    # iterate over a snapshot of the keys, since entries may be deleted below
    for key in list(headers):
        # if not encoded as a string, it is ASSUMED to be encoded as UTF-8, as it used to be in python2.
        if not isinstance(key, str):
            newkey = key.decode('utf-8')
            headers[newkey] = headers[key]
            del headers[key]
            key = newkey
        if not isinstance(headers[key], str):
            headers[key] = headers[key].decode('utf-8')
    return dict([(key.lower(), NORMALIZE_SPACE.sub(value, ' ').strip()) for (key, value) in headers.items()])
WARNING: this workaround is obviously quite dirty as it involves editing files from the httplib2 package. If someone finds a better fix, please post it here.
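A less invasive sketch of the same idea: instead of patching httplib2, normalize any bytes keys/values in your own header dicts to str before passing them in (decode_headers is a hypothetical helper; UTF-8 is assumed, as in the patch above):

```python
def decode_headers(headers):
    # Convert bytes keys/values (ASSUMED to be UTF-8) to str,
    # leaving entries that are already str untouched
    out = {}
    for key, value in headers.items():
        if isinstance(key, bytes):
            key = key.decode('utf-8')
        if isinstance(value, bytes):
            value = value.decode('utf-8')
        out[key] = value
    return out

fixed = decode_headers({b'content-type': b'text/html', 'accept': '*/*'})
```

This only helps for headers you construct yourself, of course; headers generated deep inside the libraries still need the upstream fix.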
This code is simplification of code in a Django app that receives an uploaded zip file via HTTP multi-part POST and does read-only processing of the data inside:
#!/usr/bin/env python
import csv, sys, StringIO, traceback, zipfile
try:
    import io
except ImportError:
    sys.stderr.write('Could not import the `io` module.\n')

def get_zip_file(filename, method):
    if method == 'direct':
        return zipfile.ZipFile(filename)
    elif method == 'StringIO':
        data = file(filename).read()
        return zipfile.ZipFile(StringIO.StringIO(data))
    elif method == 'BytesIO':
        data = file(filename).read()
        return zipfile.ZipFile(io.BytesIO(data))

def process_zip_file(filename, method, open_defaults_file):
    zip_file = get_zip_file(filename, method)
    items_file = zip_file.open('items.csv')
    csv_file = csv.DictReader(items_file)
    try:
        for idx, row in enumerate(csv_file):
            image_filename = row['image1']
            if open_defaults_file:
                z = zip_file.open('defaults.csv')
                z.close()
        sys.stdout.write('Processed %d items.\n' % idx)
    except zipfile.BadZipfile:
        sys.stderr.write('Processing failed on item %d\n\n%s'
                         % (idx, traceback.format_exc()))

process_zip_file(sys.argv[1], sys.argv[2], int(sys.argv[3]))
Pretty simple. We open the zip file and one or two CSV files inside the zip file.
What's weird is that if I run this with a large zip file (~13 MB), have it instantiate the ZipFile from a StringIO.StringIO or an io.BytesIO, and have it open TWO csv files rather than just one, then it fails towards the end of processing. (Perhaps anything other than a plain filename triggers it? I had similar problems in the Django app when trying to create a ZipFile from a TemporaryUploadedFile, or even from a file object created by calling os.tmpfile() and shutil.copyfileobj().) Here's the output that I see on a Linux system:
$ ./test_zip_file.py ~/data.zip direct 1
Processed 250 items.
$ ./test_zip_file.py ~/data.zip StringIO 1
Processing failed on item 242
Traceback (most recent call last):
File "./test_zip_file.py", line 26, in process_zip_file
for idx, row in enumerate(csv_file):
File ".../python2.7/csv.py", line 104, in next
row = self.reader.next()
File ".../python2.7/zipfile.py", line 523, in readline
return io.BufferedIOBase.readline(self, limit)
File ".../python2.7/zipfile.py", line 561, in peek
chunk = self.read(n)
File ".../python2.7/zipfile.py", line 581, in read
data = self.read1(n - len(buf))
File ".../python2.7/zipfile.py", line 641, in read1
self._update_crc(data, eof=eof)
File ".../python2.7/zipfile.py", line 596, in _update_crc
raise BadZipfile("Bad CRC-32 for file %r" % self.name)
BadZipfile: Bad CRC-32 for file 'items.csv'
$ ./test_zip_file.py ~/data.zip BytesIO 1
Processing failed on item 242
Traceback (most recent call last):
File "./test_zip_file.py", line 26, in process_zip_file
for idx, row in enumerate(csv_file):
File ".../python2.7/csv.py", line 104, in next
row = self.reader.next()
File ".../python2.7/zipfile.py", line 523, in readline
return io.BufferedIOBase.readline(self, limit)
File ".../python2.7/zipfile.py", line 561, in peek
chunk = self.read(n)
File ".../python2.7/zipfile.py", line 581, in read
data = self.read1(n - len(buf))
File ".../python2.7/zipfile.py", line 641, in read1
self._update_crc(data, eof=eof)
File ".../python2.7/zipfile.py", line 596, in _update_crc
raise BadZipfile("Bad CRC-32 for file %r" % self.name)
BadZipfile: Bad CRC-32 for file 'items.csv'
$ ./test_zip_file.py ~/data.zip StringIO 0
Processed 250 items.
$ ./test_zip_file.py ~/data.zip BytesIO 0
Processed 250 items.
Incidentally, the code fails under the same conditions but in a different way on my OS X system. Instead of the BadZipfile exception, it seems to read corrupted data and gets very confused.
This all suggests to me that I am doing something in this code that you are not supposed to do, e.g. calling open on a zip file object while already having another file within the same zip file object open. This doesn't seem to be a problem when using ZipFile(filename), but perhaps it's problematic when passing ZipFile a file-like object, because of some implementation details in the zipfile module?
Perhaps I missed something in the zipfile docs? Or maybe it's not documented yet? Or (least likely), a bug in the zipfile module?
I might have just found the problem and the solution, but unfortunately I had to replace Python's zipfile module with a hacked one of my own (called myzipfile here).
$ diff -u ~/run/lib/python2.7/zipfile.py myzipfile.py
--- /home/msabramo/run/lib/python2.7/zipfile.py 2010-12-22 17:02:34.000000000 -0800
+++ myzipfile.py 2011-04-11 11:51:59.000000000 -0700
@@ -5,6 +5,7 @@
 import binascii, cStringIO, stat
 import io
 import re
+import copy

 try:
     import zlib # We may need its compression method
@@ -877,7 +878,7 @@
         # Only open a new file for instances where we were not
         # given a file object in the constructor
         if self._filePassed:
-            zef_file = self.fp
+            zef_file = copy.copy(self.fp)
         else:
             zef_file = open(self.filename, 'rb')
The problem in the standard zipfile module is that when it is passed a file object (not a filename), it uses that same file object for every call to the open method. This means tell and seek get called on the same file, so opening multiple files within the zip causes the file position to be shared, and multiple open calls end up stepping all over each other. In contrast, when passed a filename, open creates a new file object each time. My solution, for the case when a file object is passed in, is to create a copy of it instead of using it directly.
This change to zipfile fixes the problems I was seeing:
$ ./test_zip_file.py ~/data.zip StringIO 1
Processed 250 items.
$ ./test_zip_file.py ~/data.zip BytesIO 1
Processed 250 items.
$ ./test_zip_file.py ~/data.zip direct 1
Processed 250 items.
but I don't know if it has other negative impacts on zipfile...
EDIT: I just found a mention of this in the Python docs that I had somehow overlooked before. At http://docs.python.org/library/zipfile.html#zipfile.ZipFile.open, it says:
Note: If the ZipFile was created by passing in a file-like object as the first argument to the
constructor, then the object returned by open() shares the ZipFile’s file pointer. Under these
circumstances, the object returned by open() should not be used after any additional operations
are performed on the ZipFile object. If the ZipFile was created by passing in a string (the
filename) as the first argument to the constructor, then open() will create a new file
object that will be held by the ZipExtFile, allowing it to operate independently of the ZipFile.
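For reference, later Python 3 versions changed this behavior in the standard library (each object returned by open() gets its own position on a lock-protected shared file), so the two-members-open pattern now works on a file-like object without patching, to the best of my knowledge. A small Python 3 sketch:

```python
import io
import zipfile

# Build an in-memory zip with two members, mimicking the data.zip layout
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('items.csv', 'image1\na.png\nb.png\n')
    zf.writestr('defaults.csv', 'key,value\n')

# Re-open from a file-like object and interleave reads of both members;
# each ZipExtFile tracks its own offset, so neither read is corrupted
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    items = zf.open('items.csv')
    defaults = zf.open('defaults.csv')
    first = items.readline()   # read part of items.csv
    defaults.read()            # read all of defaults.csv in between
    rest = items.read()        # finish items.csv
```

On Python 2.7, where the question's tracebacks come from, the copy.copy workaround above (or reading each member fully before opening the next) is still needed.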
What I did was update setuptools and then re-download, and it works now:
https://pypi.python.org/pypi/setuptools/35.0.1
In my case, this solved the problem:
pip uninstall pillow
Could it be that you had the file open on your desktop? That has happened to me sometimes, and the solution was just to run the code without having the files open outside of the Python session.