How to debug a python websocket script that uses `.run_forever()` method (infinite event loop) - python-3.x

I'm coding a script that connects to the Binance websocket and uses the .run_forever() method to constantly get live data from the site. I want to be able to debug my code and watch the values of variables as they change, but I'm not sure how to do this because the script basically hangs on the line with the .run_forever() method, since it is an infinite event loop. This is by design, as I want to continuously receive live data (it gets a message approximately every second), but I can't think of a good way to debug it.
I'm using VSCode and here are some snippets of my code to help understand my issue. The message function for the websocket is just a bunch of technical analysis and trade logic, but it is also the function that contains all the changing variables that I want to watch.
socket = f"wss://stream.binance.com:9443/ws/{Symbol}#kline_{interval}"
def on_open(ws):
    print("open connection")

def on_message(ws, message):
    global trade_list
    global in_position
    json_message = json.loads(message)
    candle = json_message['k']  # Accesses candle data
    ...[trade logic code here]...

def on_close(ws):
    print("Websocket connection close")

# ------------------------- Define a websocket object ------------------------ #
ws = websocket.WebSocketApp(socket, on_open=on_open, on_message=on_message, on_close=on_close)
ws.run_forever()
If more code is required to answer the question, I can edit it in (for instance, if you'd like to see exactly which variables I want to watch); I just thought it would be simpler to show only these parts.
Also, I know using global isn't great; once I've finished (or am close to finishing) the script I plan to tidy it up, so I'll deal with that then.

I'm a little late to the party but the statement
websocket.enableTrace(True)
worked for me. Place it just before you define your websocket object, and it will print all traffic in and out of the websocket, including any exceptions you might get while processing the messages.
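For context, a minimal sketch of where the call goes (the stream URL and bare on_message callback below are placeholders, not taken from the question):
import websocket  # the websocket-client package

def on_message(ws, message):
    print(message)  # set a breakpoint or add logging here to inspect values as they change

websocket.enableTrace(True)  # must run before the WebSocketApp is created
ws = websocket.WebSocketApp("wss://stream.binance.com:9443/ws/btcusdt@kline_1m", on_message=on_message)
ws.run_forever()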


What is best practice to interact with subprocesses in python

I'm building an application that is intended to do bulk-job processing of data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path], stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens to system events using the winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If an error occurs, it shuts everything down and restarts the script.
OK: if a system error event occurs, this event should be handled in a way that notifies the subprocess. The notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting
generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system events from inside the subprocess didn't work; it results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process - that is, send a message like "hey, an error occurred", listen for it within the subprocess, and then create the dump.
I'm really sorry that I'm not allowed to post any code in this case, but I hope (and actually think) my description is understandable. My question is just about which module to use to accomplish this in the best way.
I'd be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    print(data, flush=True)  # flush so the parent's readline() sees the reply immediately

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.

python 3 - jack-client - using write_midi_event - jack keeps sending the midi event forever

I'm trying to use the Python jack-client module to send a MIDI program change when I click on a button.
Here is a simplified version of the code:
import jack

midiUi = None  # defined up front so the callback can safely check it

def process_callback(frames: int):
    global midiUi
    if midiUi is not None:
        midiUi.process_callback(frames)

class MidiUi:
    def __init__(self):
        self.midiQueue = []  # not shown in the original snippet
        self.client = jack.Client('MidiUi')
        self.outMidiPort = self.client.midi_outports.register('out')  # assumed; not shown in the original snippet
        self.client.set_process_callback(process_callback)
        self.client.activate()

    def sendProgramChange(self):
        self.midiQueue.append([0xC0, 0])

    def process_callback(self, frames: int):
        while len(self.midiQueue) > 0:
            data = self.midiQueue.pop()
            self.outMidiPort.clear_buffer()
            buffer = self.outMidiPort.reserve_midi_event(0, len(data))
            buffer[:] = bytearray(data)
            self.outMidiPort.write_midi_event(0, buffer)  # this only happens once yet the midi input receives tons of program change events
            #raise jack.CallbackExit

midiUi = MidiUi()
while True:
    ....
    #some button calls midiUi.sendProgramChange()
write_midi_event is called only once when pressing the button,
but apparently the destination MIDI port receives a continuous flow of 0xC0 program change messages (unless I raise jack.CallbackExit, but then the callback never triggers again).
(I monitor my Python script's output using jack_midi_dump and midisnoop.)
Does anyone know how to solve this?
Thanks for your help.
I now use python-rtmidi for this instead:
midiout = rtmidi.MidiOut(rtapi=rtmidi.API_UNIX_JACK)
rtMidiOutputPorts=midiout.get_ports()
then write data to the port
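A rough sketch of the rest, assuming the first listed port is the one to drive (open_port, open_virtual_port and send_message are standard python-rtmidi calls; the byte values are just the program change from the question):
import rtmidi

midiout = rtmidi.MidiOut(rtapi=rtmidi.API_UNIX_JACK)
rtMidiOutputPorts = midiout.get_ports()
if rtMidiOutputPorts:
    midiout.open_port(0)                     # connect to an existing JACK MIDI port
else:
    midiout.open_virtual_port("MidiUi out")  # or expose our own port instead
midiout.send_message([0xC0, 0])              # program change, program 0, channel 1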
This post may be kind of old and it seems like you've got it figured out, but I did find a solution:
The MIDI client keeps sending whatever is in the port buffer, which means that anything written with write_midi_event needs to be cleared to stop it from being sent again. Therefore, what helped me was to call, near the beginning of my process callback:
outport.clear_buffer()
Hope that helps ;)
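Spelled out as a sketch (the port and queue names here are illustrative, not from the answer above): the buffer is cleared once per cycle at the top of the process callback, and events are written only when something is queued.
import jack

client = jack.Client('MidiUi')
outport = client.midi_outports.register('out')
queue = []

@client.set_process_callback
def process(frames):
    outport.clear_buffer()  # clear every cycle; otherwise the last event keeps being resent
    while queue:
        outport.write_midi_event(0, bytearray(queue.pop()))

client.activate()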

Communication with Python and Supercollider through OSC

I'm trying to connect Python with SuperCollider through OSC, but it's not working.
I'm using Python3 and the library osc4py3.
The original idea was to send a text word by word, but upon trying I realized the connection was not working.
Here's the SC code:
(
OSCdef.new(\texto,{
|msg, time, addr, port|
[msg, time, addr,port].postIn;
},
'/texto/supercollider',
n
)
)
OSCFunc.trace(true);
o = OSCFunc(\texto);
And here's the Python code:
osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")

## here goes a function called leerpalabras to separate words in rows.

with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
        osc_send(msg, "supercollider")
        sleep(2)

osc_terminate()
I've also tried this, to see if maybe there was something wrong with my for loop (with the startup and terminate too, of course):
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", "holis")
osc_send(msg, "supercollider")
I run the trace method in SC, and nothing appears in the post window when I run the Python script from the terminal, but no error appears in either of them, so I'm a bit lost as to what I can test to make sure the message is getting somewhere.
Nothing prints in the SC post window; it just says OSCdef(texto, /texto/supercollider, nil, nil, nil).
When I run the SuperCollider piece of your example, and then run:
n = NetAddr("127.0.0.1", 57120);
n.sendMsg('/texto/supercollider', 1, 2, 3);
... I see the message printed immediately (note that you used postIn instead of postln; if you don't fix that, you'll get an error instead of a printed message).
Like you, I don't see anything when I send via the Python library - my suspicion is that there's something wrong on the Python side. There's a hint in this response that you have to call osc_process() after sends, but that still doesn't work for me.
You can try three things:
Run OSCFunc.trace in SuperCollider and watch for messages (this will print ALL incoming OSC messages), to see if your OSCdef is somehow not receiving messages.
Try a network analyzer like Packet Peeper (http://packetpeeper.org/) to watch network traffic on your local loopback network lo0. When I do this, I can clearly see messages sent by SuperCollider, but I don't see any of the messages I send from Python, even when I send in a loop and call osc_process().
If you can't find any sign of Python sending OSC packets, try a different Python library - there are many others available.
(I'm the osc4py3 author.)
osc4py3 stores messages to send in internal lists and returns immediately. These lists are processed during osc_process() calls or directly by background threads (depending on the selected threading model).
So, if you have selected the as_eventloop threading model, you need to call osc_process() from time to time, like:
…
with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
        osc_send(msg, "supercollider")
        for missme in range(4):
            osc_process()
            sleep(0.5)
…
See doc: https://osc4py3.readthedocs.io/en/latest/userdoc.html#threading-model
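For completeness, a minimal self-contained sketch of the sender (assuming the as_eventloop model; leerpalabras is the word-splitting helper from the question and isn't defined here; the arguments are passed as a list, as in the osc4py3 documentation examples):
from time import sleep
from osc4py3.as_eventloop import osc_startup, osc_udp_client, osc_send, osc_process, osc_terminate
from osc4py3 import oscbuildparse

osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")

with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", [palabra])
        osc_send(msg, "supercollider")
        for _ in range(4):  # let osc4py3 actually flush the queued message
            osc_process()
            sleep(0.5)

osc_terminate()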

Kivy: adding widgets from another thread

I've been stuck on this same issue for just short of a week now:
the program should add widgets based on an HTTP request. However, that request may take some time depending on the user's internet connection, so I decided to run the request in a thread and add a spinner to indicate that something is being done.
Here lies the issue. Some piece of code:
@mainthread
def add_w(self, parent, widget):
    parent.add_widget(widget)

def add_course():
    # HTTP Request I mentioned
    course = course_manager.get_course(textfield_text)
    courses_stack_layout = constructor_screen.ids.added_courses_stack_layout
    course_information_widget = CourseInformation(coursename_label=course.name)
    self.add_w(courses_stack_layout, course_information_widget)
    constructor_screen.ids.spinner.active = False
add_course is being called from a thread, and spinner.active is being set to True before calling this function. Here's the result, sometimes: a messed-up graphical interface.
I also tried solving this with clock.schedule_once and clock.schedule_interval with a queue. The results were the same. Sometimes it works, sometimes it doesn't. The spinner does spin while getting the request, which is great.
Quite frankly, I would've never thought that implementing a spinner would be so hard.
How to implement that spinner? Maybe another alternative to threading? Maybe another alternative to urllib to make a request?
edit: any feedback on how I should've posted this so I can get more help? Is it too long? Maybe I could've been clearer?
The problem here was simply that widgets must also be created within the main thread.
Creating another function marked with @mainthread and calling it from the threaded one solved the issue (a sketch follows below).
Thanks to those who contributed.
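A rough sketch of that pattern (the class, ids and helper names below are illustrative rather than the original code; course_manager and CourseInformation come from the question): only the blocking request runs in the worker thread, while widget creation and add_widget both happen in a method decorated with @mainthread.
import threading
from kivy.clock import mainthread
from kivy.uix.screenmanager import Screen

class ConstructorScreen(Screen):
    def start_add_course(self):
        self.ids.spinner.active = True
        threading.Thread(target=self._fetch_course, daemon=True).start()

    def _fetch_course(self):
        # slow HTTP request, kept off the main thread
        course = course_manager.get_course(self.ids.textfield.text)
        self._show_course(course)

    @mainthread
    def _show_course(self, course):
        # widget creation AND insertion both run on the main thread
        widget = CourseInformation(coursename_label=course.name)
        self.ids.added_courses_stack_layout.add_widget(widget)
        self.ids.spinner.active = False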

What is the proper way to raise an exception for invalid property settings?

I am writing a component that reads data from a specific file type. Currently, it has a property for the file path - I would like this block to quit as hard as possible when passed an invalid file or when no file is found.
Throwing an exception causes it to stop execution, but also deletes the block from the chalkboard while I am testing (?), which makes me think there is a more "approved" way to do it.
My current solution is something like:
LOG_ERROR( MyReader_i, "Unable to open file at " + Filepath );
return FINISH;
Is there another way to stop if something is wrong, that will hopefully stop all downstream processing as well?
Have you taken a look at the Data Reader component in the basic components? It also has a file path as an input. It deals with this during the onConfigure call as shown below:
def onconfigure_prop_InputFile(self, oldvalue, newvalue):
    self.InputFile = newvalue
    if not os.path.exists(self.InputFile):
        self._log.error("InputFile path provided can not be accessed")
And then again in the service function by returning NOOP.
def process(self):
    if (self.Play == False):
        return NOOP
    if not (os.path.exists(self.InputFile)):
        return NOOP
This isn't the only way to deal with invalid input however. It's a design decision that is up to the developer.
If you'd like additional components downstream to know about an issue elsewhere in the chain, you have a few options. You could use the End of Stream bit, available in bulkio port implementations, to signal to downstream components that there is no additional data; they can then use this information to clean up and shut down. You could also use messaging to send a message out to an event channel, and anyone who has subscribed to that event channel can be made aware of the message. Again, it's a design decision.
