python hangs even with exception handling - python-3.x

I've got a Raspberry Pi attached to an MCP3008 ADC, which is measuring an analog voltage across a thermistor. I'm using the gpiozero Python library for communication between the Pi and the ADC. My code below runs for several minutes, then spits out an error and hangs in the function get_temp_percent. That function returns the average of five measurements from the ADC. I'm using signal to raise an exception after 1 second of waiting, to try to get past the hang, but it just prints the error and hangs anyway. It looks like nothing in my except block is being reached. Why am I not escaping the hang?
import time
from gpiozero import MCP3008
from math import log
import pymysql.cursors
from datetime import datetime as dt
import signal
import os

def handler(signum, frame):
    print('Signal handler called with signal', signum, frame)
    raise Exception("Something went wrong!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")

def get_temp_percent(pos=0):
    x = []
    for i in range(0, 5):
        while True:
            try:
                signal.signal(signal.SIGALRM, handler)
                signal.alarm(1)
                adc = MCP3008(pos)
                x.append(adc.value)
                #adc.close()
            except Exception as inst:
                print('get_temp_percent {}'.format(inst))
                signal.alarm(0)
                continue
            break
        signal.alarm(0)
        time.sleep(.1)
    return round(sum(x)/len(x), 5)
def write_date(temp0):
    <writes temp0 to mysql db>

# Connect to the database
connection = pymysql.connect(host='', user='', password='', db='',
                             cursorclass=pymysql.cursors.DictCursor)

while True:
    temp_percent = get_temp_percent()
    print('Temp Percent = {}'.format(temp_percent))
    <some function that does some arithmetic to convert temp_percent to temp0>
    write_date(temp0)
    print('Data Written')
    time.sleep(1)
    print('Sleep time over')
    print('')
Function get_temp_percent causes the problem below:
Signal handler called with signal 14 <frame object at 0x76274800>
Exception ignored in: <bound method SharedMixin.__del__ of SPI(closed)>
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gpiozero/mixins.py", line 137, in __del__
    super(SharedMixin, self).__del__()
  File "/usr/lib/python3/dist-packages/gpiozero/devices.py", line 122, in __del__
    self.close()
  File "/usr/lib/python3/dist-packages/gpiozero/devices.py", line 82, in close
    old_close()
  File "/usr/lib/python3/dist-packages/gpiozero/pins/local.py", line 102, in close
    self.pin_factory.release_all(self)
  File "/usr/lib/python3/dist-packages/gpiozero/pins/__init__.py", line 85, in release_all
    with self._res_lock:
  File "/home/pi/Desktop/testing exceptions.py", line 13, in handler
    raise Exception("Something went wrong!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
Exception: Something went wrong!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

It looks like your call into gpiozero does a lot of work behind the scenes.
When your exception is raised, the library is in the middle of cleaning up an old device (note the __del__ frames in your traceback) and gets stuck on its resource lock.
I took a quick look at the docs for the library, and it looks like you may be able to keep hold of the pins so you can re-use them, e.g.:
import ...

adcs = {}

def get_adc_value(pos):
    if pos not in adcs:
        adcs[pos] = MCP3008(pos)
    return adcs[pos].value

def get_temp_percent(pos=0):
    x = []
    for i in range(0, 5):
        x.append(get_adc_value(pos))
        time.sleep(.1)
    return round(sum(x)/len(x), 5)

while True:
    temp_percent = get_temp_percent()
    ...
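Because each MCP3008 is created once and cached, the SPI interface is never torn down and re-created between reads, so there is no half-closed device left for the garbage collector to clean up when an exception fires; with that, the alarm-based timeout and its handler should no longer be needed.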

Related

Deadlock when running celery tests under pytest with xdist

If I run without xdist involved, like this:
pytest --disable-warnings --verbose -s test_celery_chords.py
it works just fine: I see the DB created, the tasks run, and it exits as expected.
If I run with xdist involved (-n 2), like this:
pytest --disable-warnings --verbose -n 2 -s test_celery_chords.py
I end up with a hung process (and sometimes these messages):
Destroying old test database for alias 'default'...
Chord callback '4c7664ce-89e0-475e-81a7-4973929d2256' raised: ValueError('4c7664ce-89e0-475e-81a7-4973929d2256')
Traceback (most recent call last):
  File "/Users/bob/.virtualenv/testme/lib/python3.10/site-packages/celery/backends/base.py", line 1019, in on_chord_part_return
    raise ValueError(gid)
ValueError: 4c7664ce-89e0-475e-81a7-4973929d2256
[the same chord-callback traceback repeats four more times]
[gw0] ERROR test_celery_chords.py::test_chords Destroying test database for alias 'default'...
The only way to end it is with ^C.
These are my two tests (essentially the same test). The DB isn't needed for these tasks (simple add and average example tests) but will be needed for the other Django tests that do use the DB.
def test_chords(transactional_db, celery_app, celery_worker, celery_not_eager):
    celery_app.config_from_object("django.conf:settings", namespace="CELERY")
    task = do_average.delay()
    results = task.get()
    assert task.state == "SUCCESS"
    assert len(results[0][1][1]) == 10

def test_chord_differently(transactional_db, celery_app, celery_worker, celery_not_eager):
    celery_app.config_from_object("django.conf:settings", namespace="CELERY")
    task = do_average.delay()
    results = task.get()
    assert task.state == "SUCCESS"
    assert len(results[0][1][1]) == 10
and the tasks (shouldn't matter):
from typing import List
import time

from celery import chord, shared_task

@shared_task
def _add(x: int, y: int) -> int:
    print(f"{x} + {y} {time.time()}")
    return x + y

@shared_task
def _average(numbers: List[int]) -> float:
    print(f"AVERAGING {sum(numbers)} / {len(numbers)}")
    return sum(numbers) / len(numbers)

@shared_task
def do_average():
    tasks = [_add.s(i, i) for i in range(10)]
    print(f"Creating chord of {len(tasks)} tasks at {time.time()}")
    return chord(tasks)(_average.s())
using a conftest.py of this:
import pytest

@pytest.fixture
def celery_not_eager(settings):
    settings.CELERY_TASK_ALWAYS_EAGER = False
    settings.CELERY_TASK_EAGER_PROPAGATES = False
pytest --fixtures
celery_app -- .../python3.10/site-packages/celery/contrib/pytest.py:173
    Fixture creating a Celery application instance.
celery_worker -- .../python3.10/site-packages/celery/contrib/pytest.py:195
    Fixture: Start worker in a thread, stop it when the test returns.
Using
django==4.1.2
pytest-celery==0.0.0
pytest-cov==3.0.0
pytest-django==4.5.2
pytest-xdist==2.5.0
While I have not solved this, I have found a workaround of sorts using @pytest.mark.xdist_group(name="celery") to decorate the test class, and I can do the following:
@pytest.mark.xdist_group(name="celery")
@override_settings(CELERY_TASK_ALWAYS_EAGER=False)
@override_settings(CELERY_TASK_EAGER_PROPAGATES=False)
class SyncTaskTestCase2(TransactionTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.celery_worker = start_worker(app, perform_ping_check=False)
        cls.celery_worker.__enter__()
        print(f"Celery Worker started {time.time()}")

    @classmethod
    def tearDownClass(cls):
        print(f"Tearing down Superclass {time.time()}")
        super().tearDownClass()
        print(f"Tore down Superclass {time.time()}")
        cls.celery_worker.__exit__(None, None, None)
        print(f"Celery Worker torn down {time.time()}")

    def test_success(self):
        print(f"Starting test at {time.time()}")
        self.task = do_average_in_chord.delay()
        self.task.get()
        print(f"Finished Averaging at {time.time()}")
        assert self.task.successful()
This, combined with the command line option --dist loadgroup, forces all of the "celery" group to be run on the same runner process, which prevents the deadlock and allows --numprocesses 10 to run to completion.
The biggest drawback here is the 9 second penalty to tear down the celery worker, which makes you prone to push all of your celery testing into one class.
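For reference, the invocation with this workaround would look something like this (same test file as above):
pytest --disable-warnings -n 10 --dist loadgroup test_celery_chords.py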
# This accomplishes the same thing as the unittest above WITHOUT having a class
# wrapped around the tests; it also eliminates the 9 second teardown wait.
@pytest.mark.xdist_group(name="celery")
@pytest.mark.django_db  # Why do I need this and transactional_db???
def test_averaging_in_a_chord(
    transactional_db,
    celery_session_app,
    celery_session_worker,
    use_actual_celery_worker,
):
    task = do_average_in_chord.delay()
    task.get()
    assert task.successful()
You do need this in your conftest.py:
from typing import Type
import time

import pytest
from pytest_django.fixtures import SettingsWrapper
from celery import Celery
from celery.contrib.testing.worker import start_worker

@pytest.fixture(scope="function")
def use_actual_celery_worker(settings: SettingsWrapper) -> SettingsWrapper:
    """Turns off CELERY_TASK_ALWAYS_EAGER and CELERY_TASK_EAGER_PROPAGATES for a single test."""
    settings.CELERY_TASK_ALWAYS_EAGER = False
    settings.CELERY_TASK_EAGER_PROPAGATES = False
    return settings

@pytest.fixture(scope="session")
def celery_session_worker(celery_session_app: Celery):
    """Re-implemented this so that my celery app gets used. This keeps the priority queue stuff
    the same as it is in production. If BROKER_BACKEND is set to "memory" then rabbit shouldn't
    be involved anyway."""
    celery_worker = start_worker(
        celery_session_app, perform_ping_check=False, shutdown_timeout=0.5
    )
    celery_worker.__enter__()
    yield celery_worker
    # This causes the worker to exit immediately so that we don't have a 9 second
    # wait for the timeout.
    celery_session_app.control.shutdown()
    print(f"Tearing down Celery Worker {time.time()}")
    celery_worker.__exit__(None, None, None)
    print(f"Celery Worker torn down {time.time()}")

@pytest.fixture(scope="session")
def celery_session_app() -> Celery:
    """Get the app you would regularly use for celery tasks and return it. This ensures
    all of your default app options mirror what you use at runtime."""
    from workshop.celery import app
    yield app

What makes Python Multiprocessing raise different errors when sharing objects between processes?

Context: I want to create attributes of an object class in parallel by distributing them across the available cores. This question was answered in this post here by using the python multiprocessing Pool.
The MRE for my task is the following, using Pyomo 6.4.1:
from pyomo.environ import *
import os
import multiprocessing
from multiprocessing import Pool
from multiprocessing.managers import BaseManager, NamespaceProxy
import types

class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user-defined data type. The proxy instance will have
    the namespace and functions of the data type (except private/protected callables/attributes).
    Furthermore, the proxy will be picklable and its state can be shared among different
    processes."""

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            def wrapper(*args, **kwargs):
                return self._callmethod(name, args, kwargs)
            return wrapper
        return result

@classmethod
def create(cls, *args, **kwargs):
    # Register class
    class_str = cls.__name__
    BaseManager.register(class_str, cls, ObjProxy, exposed=tuple(dir(cls)))
    # Start a manager process
    manager = BaseManager()
    manager.start()
    # Create and return this proxy instance. Using this proxy allows sharing of state
    # between processes.
    inst = eval("manager.{}(*args, **kwargs)".format(class_str))
    return inst
ConcreteModel.create = create

class A:
    def __init__(self):
        self.model = ConcreteModel.create()

    def do_something(self, var):
        if var == 'var1':
            self.model.var1 = var
        elif var == 'var2':
            self.model.var2 = var
        else:
            print('other var.')

    def do_something2(self, model, var_name, var_init):
        model.add_component(var_name, var_init)

    def init_var(self):
        print('Sequentially')
        self.do_something('var1')
        self.do_something('test')
        print(self.model.var1)
        print(vars(self.model).keys())

        # Trying to create the attributes in parallel
        print('\nParallel')
        self.__sets_list = [(self.model, 'time', Set(initialize=[x for x in range(1, 13)])),
                            (self.model, 'customers', Set(initialize=['c1', 'c2', 'c3'])),
                            (self.model, 'finish_bulks', Set(initialize=['b1', 'b2', 'b3', 'b4'])),
                            (self.model, 'fermentation_types', Set(initialize=['ft1', 'ft2', 'ft3', 'ft4'])),
                            (self.model, 'fermenters', Set(initialize=['f1', 'f2', 'f3'])),
                            (self.model, 'ferm_plants', Set(initialize=['fp1', 'fp2', 'fp3', 'fp4'])),
                            (self.model, 'plants', Set(initialize=['p1', 'p2', 'p3', 'p4', 'p5'])),
                            (self.model, 'gran_plants', Set(initialize=['gp1', 'gp2', 'gp3', 'gp4', 'gp4']))]
        with Pool(7) as pool:
            pool.starmap(self.do_something2, self.__sets_list)
        self.model.time.pprint()
        self.model.customers.pprint()
def main():  # The main part, run from another file
    obj = A()
    obj.init_var()
    # Call other methods to create other attributes and the solver step.
    # The other methods are similar to do_something2(), just changing the var_init
    # to Var() and Constraint().

if __name__ == '__main__':
    multiprocessing.set_start_method("spawn")
    main()
Output
Sequentially
other var.
var1
dict_keys(['_tls', '_idset', '_token', '_id', '_manager', '_serializer', '_Client', '_owned_by_manager', '_authkey', '_close'])

Parallel
WARNING: Element gp4 already exists in Set gran_plants; no action taken
time : Size=1, Index=None, Ordered=Insertion
    Key  : Dimen : Domain : Size : Members
    None :     1 :    Any :   12 : {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
customers : Size=1, Index=None, Ordered=Insertion
    Key  : Dimen : Domain : Size : Members
    None :     1 :    Any :    3 : {'c1', 'c2', 'c3'}
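Note that vars(self.model).keys() lists the proxy's own internals (_manager, _token, and so on) rather than the model's components, because self.model here is a manager proxy standing in for the real ConcreteModel living in the manager process.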
I change the number of parallel processes for testing, but it raises different errors, and other times it runs without errors. This is confusing, and I have not figured out the problem behind it. I did not find another post with a similar problem, but I saw some posts discussing that pickle does not handle large data well. So, the errors that I sometimes get are the following:
Error 1
Unserializable message: Traceback (most recent call last):
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/managers.py", line 300, in serve_client
    send(msg)
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/connection.py", line 211, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
SystemError: <method 'dump' of '_pickle.Pickler' objects> returned NULL without setting an error
Error 2
Unserializable message: Traceback (most recent call last):
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/managers.py", line 300, in serve_client
    send(msg)
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/connection.py", line 211, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
RuntimeError: dictionary changed size during iteration
Error 3
*** Reference count error detected: an attempt was made to deallocate the type 32727 (? ***
*** Reference count error detected: an attempt was made to deallocate the type 32727 (? ***
*** Reference count error detected: an attempt was made to deallocate the type 32727 (? ***
Unserializable message: Traceback (most recent call last):
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/managers.py", line 300, in serve_client
    send(msg)
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/connection.py", line 211, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
numpy.core._exceptions._ArrayMemoryError: <unprintable MemoryError object>
Error 4
Unserializable message: Traceback (most recent call last):
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/managers.py", line 300, in serve_client
    send(msg)
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/connection.py", line 211, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/.../anaconda3/envs/.../lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
AttributeError: Can't pickle local object 'WeakSet.__init__.<locals>._remove'
So, there are different errors, and it looks like it is not stable. I hope that someone has had and solved this problem. Furthermore, if someone has implemented other strategies for this task, please feel free to post your answer here.
Thanks.
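One alternative strategy, sketched below under the assumption that only the component initialization data needs to be computed in parallel: build plain, picklable initialization values in the pool workers and attach the Pyomo components sequentially in the parent, so the ConcreteModel never crosses a process boundary at all (names here are illustrative, not from the original post).
from multiprocessing import Pool
from pyomo.environ import ConcreteModel, Set

def build_init(spec):
    # spec is a picklable (name, values) pair; any expensive work happens here,
    # in the worker process, and only plain data is sent back to the parent
    name, values = spec
    return name, values

if __name__ == '__main__':
    specs = [('time', list(range(1, 13))),
             ('customers', ['c1', 'c2', 'c3'])]
    with Pool(2) as pool:
        results = pool.map(build_init, specs)
    model = ConcreteModel()
    for name, values in results:
        # the model itself stays in the parent process
        model.add_component(name, Set(initialize=values))
    model.time.pprint()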

RuntimeWarning, RuntimeError (Python AI Chat Bot on Discord Server)

My Aim: Be able to integrate an AI chatbot with Discord
import nltk
nltk.download('punkt')
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import tflearn
import tensorflow
import random
import json
import pickle
import nest_asyncio
import asyncio
#---------------------------------------------------------------------------
import discord
import os

with open("intents.json") as file:
    data = json.load(file)
    print(data['intents'])

try:
    with open("data.pickle", "rb") as f:
        words, labels, training, output = pickle.load(f)
except:
    words = []
    labels = []
    docs_x = []
    docs_y = []
    for intent in data['intents']:
        for pattern in intent['patterns']:
            wrds = nltk.word_tokenize(pattern)
            words.extend(wrds)
            docs_x.append(wrds)
            docs_y.append(intent["tag"])
            if intent["tag"] not in labels:
                labels.append(intent["tag"])
    # remove duplicates
    words = [stemmer.stem(w.lower()) for w in words if w != "?"]
    words = sorted(list(set(words)))
    labels = sorted(labels)
    training = []
    output = []
    out_empty = [0 for _ in range(len(labels))]
    for x, doc in enumerate(docs_x):
        bag = []
        wrds = [stemmer.stem(w) for w in doc]
        for w in words:
            if w in wrds:
                bag.append(1)
            else:
                bag.append(0)
        output_row = out_empty[:]
        output_row[labels.index(docs_y[x])] = 1
        training.append(bag)
        output.append(output_row)
    training = numpy.array(training)
    output = numpy.array(output)
    with open("data.pickle", "wb") as f:
        pickle.dump((words, labels, training, output), f)

tensorflow.compat.v1.reset_default_graph()
net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 16)
net = tflearn.fully_connected(net, 16)
net = tflearn.fully_connected(net, len(output[0]), activation="softmax")
net = tflearn.regression(net)
model = tflearn.DNN(net)
model.fit(training, output, n_epoch=10000, batch_size=16, show_metric=True)
model.save('C:/Users/Desktop/chatbot/model/model.tflearn')
model.load('C:/Users/Desktop/chatbot/model/model.tflearn')

def bag_of_words(s, words):
    bag = [0 for _ in range(len(words))]
    s_words = nltk.word_tokenize(s)
    s_words = [stemmer.stem(word.lower()) for word in s_words]
    for se in s_words:
        for i, w in enumerate(words):
            if w == se:
                bag[i] = 1
    return numpy.array(bag)

def chat():
    print("start talking with the bot (type quit to stop!")
    while True:
        inp = input("You:")
        if inp.lower() == "quit":
            break
        results = model.predict([bag_of_words(inp, words)])[0]
        # print("results:", results)
        results_index = numpy.argmax(results)
        if results[results_index] > 0.7:
            tag = labels[results_index]
            print("tag:", tag)
            for tg in data["intents"]:
                if tg["tag"] == tag:
                    responses = tg['responses']

            client = discord.Client()  # FOR DISCORD--------------------------------------

            async def on_message(message):
                if inp.author == client.user:
                    return
                if inp.content.startswith("$M-bot"):
                    response = responses.request(inp.content[7:])
                    await asyncio.sleep(5)
                    await inp.channel.send(response)

            on_message(inp)
            client.run("API KEY TAKEN FROM DISCORD for BOT")
            print("Bot:", random.choice(responses))
        else:
            print("I didn't get that. Please try again")

chat()
Warnings and Errors (Pyconsole):
start talking with the bot (type quit to stop!
You:hello
tag: greeting
C:/Users/Desktop/chatbot/chatbot.py:154: RuntimeWarning: coroutine 'chat.<locals>.on_message' was never awaited
  on_message(inp)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
  File "F:\Anaconda\lib\site-packages\discord\client.py", line 713, in run
    loop.run_forever()
  File "F:\Anaconda\lib\asyncio\base_events.py", line 560, in run_forever
    self._check_running()
  File "F:\Anaconda\lib\asyncio\base_events.py", line 552, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\Anaconda\lib\site-packages\discord\client.py", line 90, in _cleanup_loop
    _cancel_tasks(loop)
  File "F:\Anaconda\lib\site-packages\discord\client.py", line 75, in _cancel_tasks
    loop.run_until_complete(asyncio.gather(*tasks, return_exceptions=True))
  File "F:\Anaconda\lib\asyncio\base_events.py", line 592, in run_until_complete
    self._check_running()
  File "F:\Anaconda\lib\asyncio\base_events.py", line 552, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/Desktop/chatbot/chatbot.py", line 162, in <module>
    chat()
  File "C:/Users/Desktop/chatbot/chatbot.py", line 155, in chat
    client.run("API KEY TAKEN FROM DISCORD for BOT")
  File "F:\Anaconda\lib\site-packages\discord\client.py", line 719, in run
    _cleanup_loop(loop)
  File "F:\Anaconda\lib\site-packages\discord\client.py", line 95, in _cleanup_loop
    loop.close()
  File "F:\Anaconda\lib\asyncio\selector_events.py", line 89, in close
    raise RuntimeError("Cannot close a running event loop")
RuntimeError: Cannot close a running event loop
PROBLEM: Hello friends, I'm trying to make a chatbot that works on Discord and can give its answers through the artificial intelligence model I built, but I am getting RuntimeWarning: Enable tracemalloc to get the object allocation traceback and RuntimeError: This event loop is already running. How can I solve these?
Your error is because you keep re-initiating discord.Client. In every program, there should be only one instance of discord.Client. If you want it to spit out the last response, you should move client out of the loop: set the bot's response to a global variable and have the bot send that global variable when a command is received.
arrangements:
import nltk
nltk.download('punkt')
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import tflearn
import tensorflow
import random
import json
import pickle
import nest_asyncio
import asyncio
#-------------------------------------------------------------------------------
import discord
import os

with open("intents.json") as file:
    data = json.load(file)
    print(data['intents'])

client = discord.Client()  # OUT OF LOOP

@client.event  # LISTEN EVENTS
async def on_message(message):
    if message.author == client.user:
        return
    if message.content.startswith("$M-bot"):
        response = responses.request(message.content[7:])
        await message.channel.send(response)

try:
    with open("data.pickle", "rb") as f:
        words, labels, training, output = pickle.load(f)
except:
    words = []
    labels = []
    docs_x = []
    docs_y = []
    for intent in data['intents']:
        for pattern in intent['patterns']:
            wrds = nltk.word_tokenize(pattern)
            words.extend(wrds)
            docs_x.append(wrds)
            docs_y.append(intent["tag"])
            if intent["tag"] not in labels:
                labels.append(intent["tag"])
    # remove duplicates
    words = [stemmer.stem(w.lower()) for w in words if w != "?"]
    words = sorted(list(set(words)))
    labels = sorted(labels)
    training = []
    output = []
    out_empty = [0 for _ in range(len(labels))]
    for x, doc in enumerate(docs_x):
        bag = []
        wrds = [stemmer.stem(w) for w in doc]
        for w in words:
            if w in wrds:
                bag.append(1)
            else:
                bag.append(0)
        output_row = out_empty[:]
        output_row[labels.index(docs_y[x])] = 1
        training.append(bag)
        output.append(output_row)
    training = numpy.array(training)
    output = numpy.array(output)
    with open("data.pickle", "wb") as f:
        pickle.dump((words, labels, training, output), f)

tensorflow.compat.v1.reset_default_graph()
net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 16)
net = tflearn.fully_connected(net, 16)
net = tflearn.fully_connected(net, len(output[0]), activation="softmax")
net = tflearn.regression(net)
model = tflearn.DNN(net)
model.fit(training, output, n_epoch=5000, batch_size=16, show_metric=True)
model.save('C:/Users/Desktop/chatbot/model/model.tflearn')
model.load('C:/Users/Desktop/chatbot/model/model.tflearn')

def bag_of_words(s, words):
    bag = [0 for _ in range(len(words))]
    s_words = nltk.word_tokenize(s)
    s_words = [stemmer.stem(word.lower()) for word in s_words]
    for se in s_words:
        for i, w in enumerate(words):
            if w == se:
                bag[i] = 1
    return numpy.array(bag)

def chat():
    global responses  # GLOBAL VARIABLES
    global inp        # GLOBAL VARIABLES
    print("start talking with the bot (type quit to stop!")
    while True:
        inp = input("You:")
        if inp.lower() == "quit":
            break
        results = model.predict([bag_of_words(inp, words)])[0]
        # print("results:", results)
        results_index = numpy.argmax(results)
        if results[results_index] > 0.7:
            tag = labels[results_index]
            print("tag:", tag)
            for tg in data["intents"]:
                if tg["tag"] == tag:
                    responses = tg['responses']
            print("Bot:", random.choice(responses))
        else:
            print("I didn't get that. Please try again")

chat()
client.run("API KEY")

ThreadPoolExecutor keeps waiting when an exception happens

from asyncio import FIRST_EXCEPTION
from concurrent.futures.thread import ThreadPoolExecutor
from queue import Queue
from concurrent.futures import wait
import os

def worker(i: int, in_queue: Queue) -> None:
    while 1:
        data = in_queue.get()
        if data is None:
            in_queue.put(data)
            print(f'worker {i} exit')
            return
        print(os.path.exists(data))

def main():
    with ThreadPoolExecutor(max_workers=2) as executor:
        queue = Queue(maxsize=2)
        workers = [executor.submit(worker, i, queue) for i in range(2)]
        for obj in [{'fn': '/path/to/sth'}, {}]:
            fn = obj['fn']  # here is the exception
            queue.put((fn,))
        queue.put(None)
        done, error = wait(workers, return_when=FIRST_EXCEPTION)
        print(done, error)

main()
This program gets stuck when the exception happens. From the log:
Traceback (most recent call last):
  File "test.py", line 34, in <module>
    main()
  File "test.py", line 31, in main
    print(done, error)
  File "/.pyenv/versions/3.7.4/lib/python3.7/concurrent/futures/_base.py", line 623, in __exit__
    self.shutdown(wait=True)
  File "/.pyenv/versions/3.7.4/lib/python3.7/concurrent/futures/thread.py", line 216, in shutdown
    t.join()
  File "/.pyenv/versions/3.7.4/lib/python3.7/threading.py", line 1044, in join
    self._wait_for_tstate_lock()
  File "/.pyenv/versions/3.7.4/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
It happens because the wait function stays locked, which is weird because the exception happens before the wait call. It should return when the exception happens!
Why doesn't it return immediately when the exception happens?
Yes, I found the reason. When the exception happens, the with statement exits and calls self.shutdown(wait=True), so the main thread waits for the worker threads to exit; however, the worker threads keep running, blocked on queue.get().
So the solution is to shut down the executor manually with wait=False:
try:
    ## code here
except Exception as e:
    traceback.print_exc()
    executor.shutdown(False)
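As a minimal sketch, here is the question's main() with that fix applied (re-using worker() and the imports from above; the extra queue.put(None) in the except branch is an addition here, so the workers can see the sentinel and leave their loop):
import traceback

def main():
    executor = ThreadPoolExecutor(max_workers=2)  # no `with`, so no implicit shutdown(wait=True)
    queue = Queue(maxsize=2)
    workers = [executor.submit(worker, i, queue) for i in range(2)]
    try:
        for obj in [{'fn': '/path/to/sth'}, {}]:
            fn = obj['fn']  # raises KeyError on the second, empty dict
            queue.put((fn,))
        queue.put(None)
        done, error = wait(workers, return_when=FIRST_EXCEPTION)
        print(done, error)
    except Exception:
        traceback.print_exc()
        queue.put(None)           # let the workers drain the sentinel and exit
        executor.shutdown(False)  # wake the pool threads without blocking on them

main()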

python tkinter non-blocking information

I am trying to create a non-blocking tkinter messagebox using multithreading. I know messagebox is not designed to be used this way, but I prefer the messagebox concept and its functions. My actual main program is quite large and complex, so here is an example:
program toto.py:
import threading
from disp_message import disp_message

msg = " This is a test message"
msgtype = 1

t1 = threading.Thread(target=disp_message, args=(msg, msgtype,))
t1.start()
t1.join()

for i in range(100000):
    print(i)

disp_message(msg, msgtype)
print("Done!")
disp_message python function code in another file:
from tkinter import *
from tkinter import messagebox

def disp_message(msg, msgtype):
    top = Tk()
    top.withdraw()
    if msgtype == 1:
        messagebox.showwarning("Warning", msg)
    elif msgtype == 2:
        messagebox.showinfo("information", msg)
    else:
        messagebox.showerror("Error", msg)
When I run this program I am having 2 issues.
1. The following error:
Traceback (most recent call last):
  File "toto.py", line 13, in <module>
    disp_message(msg,msgtype)
  File "c:\NSE\scripts\disp_message.py", line 8, in disp_message
    messagebox.showwarning("Warning",msg)
  File "C:\ProgramData\Anaconda\lib\tkinter\messagebox.py", line 87, in showwarning
    return _show(title, message, WARNING, OK, **options)
  File "C:\ProgramData\Anaconda\lib\tkinter\messagebox.py", line 72, in _show
    res = Message(**options).show()
  File "C:\ProgramData\Anaconda\lib\tkinter\commondialog.py", line 39, in show
    w = Frame(self.master)
  File "C:\ProgramData\Anaconda\lib\tkinter\__init__.py", line 2744, in __init__
    Widget.__init__(self, master, 'frame', cnf, {}, extra)
  File "C:\ProgramData\Anaconda\lib\tkinter\__init__.py", line 2299, in __init__
    (widgetName, self._w) + extra + self._options(cnf))
RuntimeError: main thread is not in main loop
2. It displays the messagebox and waits for acknowledgement, while my objective is to display the messagebox and, in parallel, let the program execute and finish.
Can you please help?
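One possible workaround, sketched here rather than a definitive fix: tkinter is not thread-safe and wants to run in a process's main thread, so the messagebox can be shown from a separate process instead of a thread. Each child process gets its own main thread and its own Tk event loop, and the parent continues immediately:
import multiprocessing
from disp_message import disp_message

if __name__ == '__main__':
    msg = " This is a test message"
    msgtype = 1
    # start the messagebox in its own process; this call returns immediately
    p = multiprocessing.Process(target=disp_message, args=(msg, msgtype))
    p.start()
    for i in range(100000):
        print(i)
    print("Done!")  # main program finishes without waiting for the box
    p.join()        # optionally reap the child once the box is acknowledged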
