Communication with Python and SuperCollider through OSC - python-3.x

I'm trying to connect Python with SuperCollider through OSC, but it's not working.
I'm using Python3 and the library osc4py3.
The original idea was to send a text word by word, but upon trying I realized the connection was not working.
Here's the SC code:
(
OSCdef.new(\texto, {
    |msg, time, addr, port|
    [msg, time, addr, port].postIn;
},
'/texto/supercollider',
n
)
)
OSCFunc.trace(true);
o = OSCFunc(\texto);
And here's the Python code:
osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")
## here goes a function called leerpalabras to separate words in rows.
with open("partitura.txt", "r") as f:
for palabra in leerpalabras(f):
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
osc_send(msg, "supercollider")
sleep(2)
osc_terminate()
I've also tried this, to see if maybe there was something wrong with my for loop (with the startup and terminate, of course):
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", "holis")
osc_send(msg, "supercollider")
I run the trace method in SC, and nothing appears in the post window when I run the Python script from the terminal, but no error appears in either of them, so I'm a bit lost on what I can test to make sure the message is getting somewhere.
It doesn't print in the SC post window; it just says OSCdef(texto, /texto/supercollider, nil, nil, nil).

When I run the SuperCollider piece of your example, and then run:
n = NetAddr("127.0.0.1", 57120);
n.sendMsg('/texto/supercollider', 1, 2, 3);
... I see the message printed immediately (note that you used postIn instead of postln; if you don't fix that, you'll get an error instead of a printed message).
Like you, I don't see anything when I send via the Python library, so my suspicion is that there's something wrong on the Python side. There's a hint in this response that you have to call osc_process() after sends, but that still doesn't work for me.
You can try three things:
1. Run OSCFunc.trace in SuperCollider and watch for messages (this will print ALL incoming OSC messages), to see if your OSCdef is somehow not receiving them.
2. Try a network analyzer like Packet Peeper (http://packetpeeper.org/) to watch network traffic on your local loopback interface lo0. When I do this, I can clearly see messages sent by SuperCollider, but I don't see any of the messages I send from Python, even when I send in a loop and call osc_process().
3. If you can't find any sign of Python sending OSC packets, try a different Python library - there are many others available (see the sketch below).
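For instance, here's a minimal sketch using the python-osc package instead (an assumption on my part - any other OSC library should do for this test):

from pythonosc.udp_client import SimpleUDPClient

# 57120 is sclang's default port, as in your code
client = SimpleUDPClient("127.0.0.1", 57120)
client.send_message("/texto/supercollider", "holis")  # sends immediately, no process loop needed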

(I'm the osc4py3 author.)
osc4py3 stores messages to send in internal lists and returns immediately. These lists are processed during osc_process() calls, or directly by background threads (depending on the selected threading model).
So, if you have selected the as_eventloop threading model, you need to call osc_process() regularly, like:
…
with open("partitura.txt", "r") as f:
for palabra in leerpalabras(f):
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
osc_send(msg, "supercollider")
for missme in range(4):
osc_process()
sleep(0.5)
…
See doc: https://osc4py3.readthedocs.io/en/latest/userdoc.html#threading-model
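Putting it together, a complete minimal sketch under the as_eventloop model (partitura.txt and leerpalabras come from the question; note the osc4py3 documentation examples pass the arguments as a list, hence [palabra]):

from time import sleep
from osc4py3.as_eventloop import *
from osc4py3 import oscbuildparse

osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")

with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):  # leerpalabras: the question's word-splitting helper
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", [palabra])
        osc_send(msg, "supercollider")
        for missme in range(4):  # give the event loop a chance to flush the send list
            osc_process()
            sleep(0.5)

osc_terminate()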

Related

How to get bot command values of another Discord bot

I'm trying to have my bot keep track of items that are being spawned by another bot (Mee6).
The following code gives me a None output.
@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        print(message.content)
The command the other bot is responding to is:
/spawn-item member={member} item={item} amount={amount}
I would like to retrieve these values.
Any help would be welcome!
Your code doesn't work, because you want to get an interaction, not a message.
Unfortunately, there is no way to get the interaction of another client. At most you can have a MessageInteraction, which gives you the name of the command used.
But in your case you can make it work, since MEE6 still accepts the old command format. If you use the MEE6 prefix instead of the / command, your code should work (with a bit of reworking: you would need to look at the message right before MEE6's response, or check whether a message starts with MEE6's prefix).
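For illustration, a rough sketch of both approaches (discord.py 2.x assumed; the "!" prefix and exact command name are hypothetical - check your MEE6 configuration):

@client.event
async def on_message(message: discord.Message):
    # Old-style prefixed command typed by a user, e.g. "!spawn-item @someone sword 3"
    if not message.author.bot and message.content.startswith("!spawn-item"):
        print(message.content)  # the full command text, including its arguments
    # Slash command: only the command name survives, via Message.interaction
    if message.author.bot and message.interaction is not None:
        print(message.interaction.name)  # e.g. "spawn-item", but no argument values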

How to debug a python websocket script that uses `.run_forever()` method (infinite event loop)

I'm coding a script that connects to the Binance websocket and uses the .run_forever() method to constantly get live data from the site. I want to be able to debug my code and watch the values of variables as they change, but I'm not sure how to do this, as the script basically hangs on the line with the .run_forever() method, because it is an infinite event loop. This is by design, as I want to continuously get live data (it receives a message approximately every second), but I can't think of a good way to debug it.
I'm using VSCode and here are some snippets of my code to help understand my issue. The message function for the websocket is just a bunch of technical analysis and trade logic, but it is also the function that contains all the changing variables that I want to watch.
socket = f"wss://stream.binance.com:9443/ws/{Symbol}#kline_{interval}"
def on_open(ws):
print("open connection")
def on_message(ws, message):
global trade_list
global in_position
json_message = json.loads(message)
candle = json_message['k'] # Accesses candle data
...[trade logic code here]...
def on_close(ws):
print("Websocket connection close")
# ------------------------- Define a websocket object ------------------------ #
ws = websocket.WebSocketApp(socket, on_open=on_open, on_message=on_message, on_close=on_close)
ws.run_forever()
If more code is required to answer the question, then I can edit this question to include it (I'm thinking if you would like to have an idea of what variables I want to look at, I just thought it would be easier and simpler to show these parts).
Also, I know using global isn't great; once I've finished (or am close to finishing) the script, I want to go and tidy it up, so I'll deal with it then.
I'm a little late to the party, but the statement
websocket.enableTrace(True)
worked for me. Place it just before you define your websocket object and it will print all traffic in and out of the websocket, including any exceptions that you might get as you process the messages.
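In context, that looks something like this (a sketch built on the question's code; websocket-client is the library the question already uses):

import websocket

websocket.enableTrace(True)  # logs the handshake plus every frame sent and received
ws = websocket.WebSocketApp(socket, on_open=on_open, on_message=on_message, on_close=on_close)
ws.run_forever()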

python 3 - jack-client - using write_midi_event - jack keeps sending the midi event forever

I'm trying to use the python jack-client module to send a program change MIDI event when I click on a button.
Here is a simplified version of the code:
import jack

def process_callback(frames: int):
    global midiUi
    if midiUi is not None:
        midiUi.process_callback(frames)

class MidiUi:
    def __init__(self):
        self.client = jack.Client('MidiUi')
        self.midiQueue = []  # pending MIDI messages (initialization not shown in the original)
        self.outMidiPort = self.client.midi_outports.register('out')
        self.client.set_process_callback(process_callback)
        self.client.activate()

    def sendProgramChange(self):
        self.midiQueue.append([0xC0, 0])

    def process_callback(self, frames: int):
        while len(self.midiQueue) > 0:
            data = self.midiQueue.pop()
            self.outMidiPort.clear_buffer()
            buffer = self.outMidiPort.reserve_midi_event(0, len(data))
            buffer[:] = bytearray(data)
            self.outMidiPort.write_midi_event(0, buffer)  # this only happens once, yet the midi input receives tons of program change events
            # raise jack.CallbackExit

midiUi = MidiUi()
while True:
    ....
    # some button calls midiUi.sendProgramChange()
write_midi_event is called only once when pressing the button, but apparently the destination MIDI port receives a continuous flow of 0xC0 program changes (unless I raise jack.CallbackExit, but then the callback never triggers again).
(I monitor my Python script's output using jack_midi_dump and midisnoop.)
Does anyone know how to solve this?
Thanks for your help.
I now use python-rtmidi for this instead:
midiout = rtmidi.MidiOut(rtapi=rtmidi.API_UNIX_JACK)
rtMidiOutputPorts = midiout.get_ports()
then write data to the port, for example:
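A minimal sketch of that last step, continuing from the two lines above (port index 0 is an assumption - pick the right entry from get_ports()):

midiout.open_port(0)             # choose the desired JACK port from the list
midiout.send_message([0xC0, 0])  # program change to program 0 on channel 1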
This post may be kind of old and it seems like you've got it figured out, but I did find a solution:
The MIDI client keeps sending whatever is in the buffer, which means the buffer needs to be cleared to stop the event from being re-sent. Therefore, what helped me was calling, near the beginning of my process callback:
outport.clear_buffer()
Hope that helps ;)
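Applied to the question's class, that would look something like this (a sketch, not tested against the original setup):

def process_callback(self, frames: int):
    # Clear the output buffer on every cycle, even when nothing is queued;
    # otherwise the last event stays in the buffer and keeps being sent.
    self.outMidiPort.clear_buffer()
    while len(self.midiQueue) > 0:
        data = self.midiQueue.pop()
        self.outMidiPort.write_midi_event(0, bytearray(data))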

Direct communication between Javascript in Jupyter and server via IPython kernel

I'm trying to display an interactive mesh visualizer based on Three.js inside a Jupyter cell. The workflow is the following:
The user launches a Jupyter notebook, and opens the viewer in a cell
Using Python commands, the user can manually add meshes and animate them interactively
In practice, the main thread is sending requests to a server via ZMQ sockets (every request needs a single reply), then the server sends back the desired data to the main thread using other socket pairs (many "request", very few replies expected), which finally uses communication through ipython kernel to send the data to the Javascript frontend. So far so good, and it works properly because the messages are all flowing in the same direction:
Main thread (Python command) [ZMQ REQ] -> [ZMQ REP] Server (Data) [ZMQ XREQ] -> [ZMQ XREQ] Main thread (Data) [IPykernel Comm] -> [IPykernel Comm] Javascript (Display)
However, the pattern is different when I want to fetch the status of the frontend to wait for the meshes to finish loading:
Main thread (Status request) --> Server (Status request) --> Main thread (Waiting for reply)
       ^                                                                 |
       +---------------------- Javascript (Processing) <----------------+
This time, the server sends a request to the frontend, which in return does not send the reply directly back to the server, but to the main thread, which will forward the reply to the server, and finally to the main thread.
There is a clear issue: the main thread is supposed to jointly forward the reply of the frontend and receive the reply from the server, which is impossible. The ideal solution would be to enable the server to communicate directly with the frontend, but I don't know how to do that, since I cannot use get_ipython().kernel.comm_manager.register_target on the server side. I tried to instantiate an IPython kernel client on the server side using jupyter_client.BlockingKernelClient, but I didn't manage to use it to communicate or to register targets.
OK, so I found a solution for now, but it is not great. Instead of just waiting for a reply and keeping the main loop busy, I added a timeout and interleaved it with do_one_iteration of the kernel to force the messages to be handled:
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
    except zmq.error.ZMQError:
        kernel.do_one_iteration()
It works, but unfortunately it is not really portable and it messes up the Jupyter evaluation stack (all queued evaluations will be processed here instead of in order)...
Alternatively, there is another way that is more appealing:
import zmq
import asyncio
import nest_asyncio

nest_asyncio.apply()

zmq_socket.send(b"ready")

async def enforce_receive():
    await kernel.process_one(True)
    return zmq_socket.recv().decode("utf-8")

loop = asyncio.get_event_loop()
rep = loop.run_until_complete(enforce_receive())
but in this case you need to know in advance that you expect the kernel to receive exactly one message, and relying on nest_asyncio is not ideal either.
Here is a link to an issue on this topic on GitHub, along with an example notebook.
UPDATE
I finally managed to solve my issue completely, without shortcomings. The trick is to analyze every incoming message. The irrelevant messages are put back in the queue in order, while the comm-related ones are processed on the spot:
import zmq
from ipykernel.kernelbase import SHELL_PRIORITY

class CommProcessor:
    """
    @brief   Re-implementation of ipykernel.kernelbase.do_one_iteration
             to only handle comm messages on the spot, and put back in
             the stack the other ones.

    @details Calling 'do_one_iteration' messes up with the kernel
             'msg_queue'. Some messages will be processed too soon,
             which is likely to corrupt the kernel state. This method
             only processes comm messages to avoid such side effects.
    """

    def __init__(self):
        self.__kernel = get_ipython().kernel
        self.qsize_old = 0

    def __call__(self, unsafe=False):
        """
        @brief      Check once if there is a pending comm-related event
                    in the shell stream message priority queue.

        @param[in]  unsafe  Whether or not to assume that checking if
                            the number of pending messages has changed
                            is enough. It makes the evaluation much
                            faster, but it is flawed.
        """
        # Flush every IN message on shell_stream only.
        # Note that it is a faster implementation of ZMQStream.flush
        # to only handle incoming messages. It reduces the computation
        # time from about 10us to 20ns.
        # https://github.com/zeromq/pyzmq/blob/e424f83ceb0856204c96b1abac93a1cfe205df4a/zmq/eventloop/zmqstream.py#L313
        shell_stream = self.__kernel.shell_streams[0]
        shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
        events = shell_stream.poller.poll(0)
        while events:
            _, event = events[0]
            if event:
                shell_stream._handle_recv()
                shell_stream.poller.register(
                    shell_stream.socket, zmq.POLLIN)
                events = shell_stream.poller.poll(0)

        qsize = self.__kernel.msg_queue.qsize()
        if unsafe and qsize == self.qsize_old:
            # The number of messages in the queue has not changed since
            # the last time it was checked. Assuming those messages are
            # the same as before and returning early.
            return

        # One must go through all the messages to keep them in order
        for _ in range(qsize):
            priority, t, dispatch, args = \
                self.__kernel.msg_queue.get_nowait()
            if priority <= SHELL_PRIORITY:
                _, msg = self.__kernel.session.feed_identities(
                    args[-1], copy=False)
                msg = self.__kernel.session.deserialize(
                    msg, content=False, copy=False)
            else:
                # Do not spend time analyzing an already rejected message
                msg = None
            if msg is None or not 'comm_' in msg['header']['msg_type']:
                # The message is not related to comm, so put it back in
                # the queue after lowering its priority so that it is
                # sent at the "end of the queue", i.e. just at the right
                # place: after the next unchecked messages, after the
                # other messages already put back in the queue, but
                # before the next one to go the same way. Note that
                # every shell message has SHELL_PRIORITY by default.
                self.__kernel.msg_queue.put_nowait(
                    (SHELL_PRIORITY + 1, t, dispatch, args))
            else:
                # Comm message. Process it right now.
                comm_handler = getattr(
                    self.__kernel.comm_manager, msg['header']['msg_type'])
                msg['content'] = self.__kernel.session.unpack(msg['content'])
                comm_handler(None, None, msg)
        self.qsize_old = self.__kernel.msg_queue.qsize()

process_kernel_comm = CommProcessor()
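With this in place, the busy-wait from earlier can poll the socket while dispatching only comm messages (a usage sketch; zmq_socket and the surrounding setup are as defined above):

while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # got the frontend's reply
    except zmq.error.ZMQError:
        process_kernel_comm()  # handle pending comm messages only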

poplib mark as seen

I am using poplib in Python 3.3 to fetch emails from a gmail account and everything is working well, except that the mails are not marked as read after retrieving them with the retr() method, despite the fact that the documentation says "Retrieve whole message number which, and set its seen flag."
Here is the code:
pop = poplib.POP3_SSL("pop.gmail.com", "995")
pop.user("recent:mymail@gmail.com")
pop.pass_("mypassword")

numMessages = len(pop.list()[1])
for i in range(numMessages):
    for j in pop.retr(i + 1)[1]:
        print(j)

pop.quit()
Am I doing something wrong or does the documentation lie? (or, did I just misinterpret it?)
The POP protocol has no concept of "read" or "unread" messages; the LIST command simply shows all existing messages. You may want to use another protocol, like IMAP, if the server supports it.
You could delete messages after successful retrieval, using the DELE command. Only after a successful QUIT command will the server actually delete them.
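For example, a minimal sketch of the deletion approach, building on the question's loop:

numMessages = len(pop.list()[1])
for i in range(numMessages):
    for j in pop.retr(i + 1)[1]:
        print(j)
    pop.dele(i + 1)  # flag the message for deletion
pop.quit()           # QUIT commits the deletions on the server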
