I have a python script that controls different test-instruments (signal generator, amplifier, spectrum analyzer...) to automate a test.
These devices communicate over ethernet or serial with the pc running this python script.
I wrote a class for each device that I use. The script starts with initializing an instance of those classes. Something like this:
multimeter = Multimeter('192.168.1.5', 5025)
amplifier = Amplifier('192.168.1.9', 5025)
stirrer = Stirrer('COM4', 9600)
.....
This can go wrong in many ways (battery is low, device not turned on, cable not connected ...).
It's possible to catch the errors with try/except:
try:
    multimeter = Multimeter('192.168.1.5', 5025)
    amplifier = Amplifier('192.168.1.9', 5025)
    stirrer = Stirrer('COM4', 9600)
    .....
except:
    multimeter.close()
    amplifier.close()
    stirrer.close()
But now the problem is inside the except block: we don't know which of the initializations succeeded, so some of the objects may not exist and we can't call their close() method.
Because creating the instances is just normal sequential code, I know that when creating an instance of one of my classes fails, all the instances created on the lines before it succeeded. So you can try to create an instance of every class, check whether that fails, and if it fails close the connections of all previously created objects.
try:
    multimeter = Multimeter('192.168.1.5', 5025)
except:
    # problem with the multimeter
    print('error')

try:
    amplifier = Amplifier('192.168.1.9', 5025)
except:
    # problem with the amplifier, but we can close the multimeter
    multimeter.close()

try:
    stirrer = Stirrer('COM4', 9600)
except:
    # problem with the stirrer, but we can close the multimeter and the amplifier
    multimeter.close()
    amplifier.close()

....
But I think this is ugly code. In particular, when the number of objects (here test instruments) grows, this becomes unmanageable, and it's error-prone when you want to add or remove an object... Is there a better way to be sure that all connections are closed? Sockets should be closed on failure so we can bind the IP address and port again the next time the script is executed. The same goes for the serial interfaces: if one isn't closed, it will raise an error too, because you can't connect to a serial interface that is already open...
Use a container to store already created instruments, and split your code into short, independent, manageable parts:
def create_instruments(instruments_defs):
    instruments = {}
    for key, cls, params in instruments_defs:
        try:
            instruments[key] = cls(*params)
        except Exception as e:
            print("failed to instantiate '{}': {}".format(key, e))
            close_instruments(instruments)
            raise
    return instruments

def close_instruments(instruments):
    for key, instrument in instruments.items():
        try:
            instrument.close()
        except Exception as e:
            # just mention it - we can't do much more anyway
            print("got error {} when closing {}".format(e, key))
instruments_defs = [
    # (key, classname, (param1, ...))
    ("multimeter", Multimeter, ("192.168.1.5", 5025)),
    ("amplifier", Amplifier, ("192.168.1.9", 5025)),
    ("stirrer", Stirrer, ('COM4', 9600)),
]
instruments = create_instruments(instruments_defs)
You may also want to have a look at context managers (making sure resources are properly released is their main purpose), but they might not necessarily be the best choice here (it depends on how you use those objects, how your code is structured, etc.).
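For what it's worth, a minimal sketch of the context-manager route using contextlib.ExitStack, assuming each instrument class exposes a close() method as in the question (run_test is a hypothetical placeholder for the actual test code):

from contextlib import ExitStack, closing

with ExitStack() as stack:
    # closing() turns any object with a close() method into a context manager;
    # ExitStack closes everything already entered if a later line raises.
    multimeter = stack.enter_context(closing(Multimeter('192.168.1.5', 5025)))
    amplifier = stack.enter_context(closing(Amplifier('192.168.1.9', 5025)))
    stirrer = stack.enter_context(closing(Stirrer('COM4', 9600)))
    run_test(multimeter, amplifier, stirrer)  # hypothetical test routine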
In fact, the solution that I'm suggesting in my question is the easiest way to solve this issue. In the try block, the script tries to initialize the instances one by one.
If you close the objects in the same order that they're created in the try block, then closing the connection will succeed for every test instrument, except for the instruments that were not initialized because of the error that happened in the try block.
(see comments in code snippet)
try:
    multimeter = Multimeter('192.168.1.5', 5025)  # success
    amplifier = Amplifier('192.168.1.9', 5025)  # success
    stirrer = Stirrer('COM4', 9600)  # error: COM4 is not available --> jump to except
    generator = Generator()  # not initialized because of error in stirrer init
    otherTestInstrument = OtherTestInstrument()  # not initialized because of error in stirrer init
    .....
except:
    multimeter.close()  # initialized in try, so close() works
    amplifier.close()  # initialized in try, so close() works
    stirrer.close()  # probably initialized in try, so close() probably works
    generator.close()  # not initialized, will raise an error, but that doesn't matter
    otherTestInstrument.close()  # also not initialized, no need to close it either
Related
I'm coding a script that connects to the Binance websocket and uses the .run_forever() method to constantly get live data from the site. I want to be able to debug my code and watch the values of variables as they change, but I'm not sure how to do this, as the script basically hangs on the line with the .run_forever() method because it is an infinite event loop. This is by design, since I want to continuously get live data (it receives a message approximately every second), but I can't think of a good way to debug it.
I'm using VSCode and here are some snippets of my code to help understand my issue. The message function for the websocket is just a bunch of technical analysis and trade logic, but it is also the function that contains all the changing variables that I want to watch.
socket = f"wss://stream.binance.com:9443/ws/{Symbol}@kline_{interval}"
def on_open(ws):
    print("open connection")

def on_message(ws, message):
    global trade_list
    global in_position
    json_message = json.loads(message)
    candle = json_message['k']  # Accesses candle data
    ...[trade logic code here]...

def on_close(ws):
    print("Websocket connection close")

# ------------------------- Define a websocket object ------------------------ #
ws = websocket.WebSocketApp(socket, on_open=on_open, on_message=on_message, on_close=on_close)
ws.run_forever()
If more code is required to answer the question, I can edit this question to include it (for instance, if you'd like an idea of which variables I want to look at; I just thought it would be easier and simpler to show only these parts).
Also, I know using global isn't great; once I've finished (or am close to finishing) the script, I want to go back and tidy it up, so I'll deal with it then.
I'm a little late to the party but the statement
websocket.enableTrace(True)
worked for me. Place it just before you define your websocket object and it will print all traffic in and out of the websocket including any exceptions that you might get as you process the messages.
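Roughly like this, reusing the names from the question (a minimal sketch):

import websocket

websocket.enableTrace(True)  # prints every frame sent/received, plus exceptions raised in callbacks
ws = websocket.WebSocketApp(socket, on_open=on_open, on_message=on_message, on_close=on_close)
ws.run_forever()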
I'm building an application that is intended to do a bulk job processing data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path], stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=True)
It listens for system events using the winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
In case an error occurs, it shuts down everything and restarts the script.
OK, if a system error event occurs, it should be handled in such a way that the subprocess gets notified. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting the generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening for the system events from inside the subprocess didn't work. It results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process - meaning, send a message like "hey, an error occurred", listen for it within the subprocess and then create the dump?
I'm really sorry that I'm not allowed to post any code in this case, but I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
I would be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may be a better fit if other processes may cause issues.
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    print(data, flush=True)  # flush so the parent's readline() isn't held up by pipe buffering

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.
I'm trying to use the Python jack-client module to send a MIDI program change when I click on a button.
Here is a simplified version of the code:
def process_callback(frames: int):
    global midiUi
    if midiUi is not None:
        midiUi.process_callback(frames)

class MidiUi:
    def __init__(self):
        self.client = jack.Client('MidiUi')
        self.client.set_process_callback(process_callback)
        self.client.activate()

    def sendProgramChange(self):
        self.midiQueue.append([0xC0, 0])

    def process_callback(self, frames: int):
        while len(self.midiQueue) > 0:
            data = self.midiQueue.pop()
            self.outMidiPort.clear_buffer()
            buffer = self.outMidiPort.reserve_midi_event(0, len(data))
            buffer[:] = bytearray(data)
            self.outMidiPort.write_midi_event(0, buffer)  # this only happens once yet midi input receives tons of program changes events
            #raise jack.CallbackExit

midiUi = MidiUi()
while True:
    ....
    #some button calls midiUi.sendProgramChange()
write_midi_event is called only once when pressing the button, but apparently the destination MIDI port receives a continuous flow of MIDI C0 program changes (unless I raise jack.CallbackExit, but then the callback never triggers again).
(I monitor my Python script's output using jack_midi_dump and midisnoop.)
Does anyone know how to solve this?
Thanks for your help.
I now use python-rtmidi for this matter:
midiout = rtmidi.MidiOut(rtapi=rtmidi.API_UNIX_JACK)
rtMidiOutputPorts = midiout.get_ports()
Then write data to the port.
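For illustration, a minimal sketch of that last step with python-rtmidi (the port index 0 is an assumption - pick whichever JACK port you actually need):

if rtMidiOutputPorts:
    midiout.open_port(0)  # connect to the first available output port
else:
    midiout.open_virtual_port("MidiUi out")  # or expose a virtual port instead
midiout.send_message([0xC0, 0])  # program change: channel 1, program 0
midiout.close_port()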
This post may be kind of old and it seems like you've got it figured out, but I did find a solution:
The MIDI client keeps sending whatever is in the buffer, which means the buffer that write_midi_event fills needs to be cleared to stop it from being sent again. Therefore, what helped me was to put this near the beginning of my process callback:
outport.clear_buffer()
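In terms of the question's MidiUi class, that placement would look roughly like this (a sketch based on the snippet above, not the exact original code):

def process_callback(self, frames: int):
    self.outMidiPort.clear_buffer()  # clear once per cycle so stale events are not re-sent
    while len(self.midiQueue) > 0:
        data = self.midiQueue.pop()
        buffer = self.outMidiPort.reserve_midi_event(0, len(data))
        buffer[:] = bytearray(data)
        self.outMidiPort.write_midi_event(0, buffer)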
Hope that helps ;)
I am working on a GUI-based chat program.
I am using someone else's server which has worked well for many people so I am assuming the problem is with my client's code.
When I run a single instance of the client it works perfectly, but if I run two instances of the client on the same computer the listener stops responding when the 2nd client logs in.
# server is from socket module
# chat_box is a tkinter ListBox
# both are copies of global variable
class listener_thread(threading.Thread):
    def __init__(self, server, chat_box):
        super(listener_thread, self).__init__()
        self.server = server
        self.chat_box = chat_box

    def run(self):
        try:
            update = self.server.recv(1024)
            msg = update.decode("utf-8")
            if msg != "":
                self.chat_box.insert(END, msg)
        except Exception as e:
            print(e)
I've verified that the server is putting each client on a different port. The server is receiving the messages. When 'Michael' logs in and says 'Hi' it updates in his chat_box.
However, the clients no longer update their histories after 'Dave' logs in.
Yet, the server continues to show that it is receiving the messages from both clients.
#This is the server output
#Hi is Michael
#Yo is Dave
#So Michael is still connecting and transmitting after Dave connects
Michael - ('127.0.0.1', 56263) connected
Hi
Dave - ('127.0.0.1', 56264) connected
Yo
Hi
The network connection is working properly. It just locks up the list_box updating threads.
No exceptions are being thrown.
I solved my own problem.
I needed to create chat_history_listbox as a ListBox initially, instead of None.
I needed to put the receive code into a function, with a loop and an exit condition:
def receive_func():
    global server, chat_history_listbox
    while True:
        try:
            update = server.recv(1024)
        except OSError as e:
            update = None
            break
        connect()
        msg = update.decode("utf-8")
        if msg != "":
            chat_history_listbox.insert(END, msg)
I needed to make the thread call a function and make it a daemon:
listener = Thread(target=receive_func, daemon=True)
listener.start()
This got it working with multiple clients
All, I'm implementing websockets using flask/uWSGI. This is relegated to a module that's instantiated in the main application. Redacted code for the server and module:
main.py
from WSModule import WSModule

app = Flask(__name__)
wsmodule = WSModule()
websock = WebSocket(app)

@websock.route('/websocket')
def echo(ws):
    wsmodule.register(ws)
    print("websock clients", wsmodule.clients)
    while True:  # This while loop is related to the uWSGI websocket implementation
        msg = ws.receive()
        if msg is not None:
            ws.send(msg)
        else:
            return

@app.before_request
def before_request():
    print("app clients:", wsmodule.clients)
and WSModule.py:
class WSModule(object):
    def __init__(self):
        self.clients = list()

    def register(self, client):
        self.clients.append(client)
Problem: when a user connects using websockets (to the '/websocket' route), wsmodule.register appends their connection socket. This works fine - the 'websock clients' printout shows the appended connection.
The issue is that I can't access those sockets from the main application. This is seen from the 'app clients' printout, which never updates (the list stays empty). Something is clearly being updated, but how do I access those changes?
It sounds like your program is being run with either threads or processes, and a wsmodule exists for each thread/process that is running.
So one wsmodule is being updated with the client info, while a different one is being asked for clients... but the one being asked is still empty.
If you are using threads, check out thread local storage.
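For reference, a minimal sketch of thread-local storage, in case that's the route you take (the names here are illustrative, not from the question): each thread that touches local_data sees its own independent clients list.

import threading

local_data = threading.local()  # attributes set on this object are per-thread

def register(client):
    if not hasattr(local_data, "clients"):
        local_data.clients = []  # each thread gets its own list
    local_data.clients.append(client)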