Is there a function or method in Matplotlib that will tell you
which events have been connected, and perhaps what code is being called by that listener? I've looked all over, no joy. I'm looking preferably for a generic solution, not one that is backend specific, but I will take what I can get.
In these situations, don't be afraid of the matplotlib codebase - it is quite reasonably pythonic and legible for the most part (with perhaps one or two skeletons here and there).
I'm going to talk you through the steps I'm going to take to understand what is going on and what you can get access to.
First off, start at a tagged version of matplotlib on GitHub - it will make all the links consistent (and not rot as code moves around). https://github.com/matplotlib/matplotlib/tree/v3.0.2
Our entry point is that we attach events through the mpl_connect method. A quick search for this (using "def mpl_connect", including the quotes) in the matplotlib codebase turns up https://github.com/matplotlib/matplotlib/blob/v3.0.2/lib/matplotlib/backend_bases.py#L2134-L2180.
self.callbacks.connect(s, func)
So now we need to figure out what self.callbacks actually is on this object. I did a ctrl+f to find the text self.callbacks = in this file.
https://github.com/matplotlib/matplotlib/blob/v3.0.2/lib/matplotlib/backend_bases.py#L1621
# a dictionary from event name to a dictionary that maps cid->func
self.callbacks = cbook.CallbackRegistry()
We are making progress now, and getting a little deeper each time :)
Following the imports logically, we now need to find out what matplotlib.cbook.CallbackRegistry looks like: https://github.com/matplotlib/matplotlib/blob/v3.0.2/lib/matplotlib/cbook/__init__.py#L88.
Specifically, we were calling the CallbackRegistry.connect method: https://github.com/matplotlib/matplotlib/blob/v3.0.2/lib/matplotlib/cbook/__init__.py#L162-L177. Implementation at the time of writing:
def connect(self, s, func):
    """Register *func* to be called when signal *s* is generated.
    """
    self._func_cid_map.setdefault(s, {})
    try:
        proxy = WeakMethod(func, self._remove_proxy)
    except TypeError:
        proxy = _StrongRef(func)
    if proxy in self._func_cid_map[s]:
        return self._func_cid_map[s][proxy]
    cid = next(self._cid_gen)
    self._func_cid_map[s][proxy] = cid
    self.callbacks.setdefault(s, {})
    self.callbacks[s][cid] = proxy
    return cid
Rather than trying to read all of that, I'm looking for public data structures (those that don't start with a _) to see if there is anything I can query. It is starting to look like CallbackRegistry.callbacks is a dictionary that maps event names to some form of collection containing those event functions.
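As a quick sanity check of that reading, you can exercise CallbackRegistry directly, without a figure at all (a minimal sketch; the event name 'my_event' is arbitrary):

```python
from matplotlib.cbook import CallbackRegistry

reg = CallbackRegistry()

def handler(event):
    print('handled', event)

# connect() hands back a cid, and the public `callbacks` dict gains an entry
# mapping the event name to {cid: proxy wrapping `handler`}
cid = reg.connect('my_event', handler)
print(reg.callbacks)
```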
Indeed, this is supported by trying it out:
In [6]: fig.canvas.callbacks.callbacks
Out[6]:
{'button_press_event': {0: <weakref at 0x117bcff98; to 'FigureCanvasTkAgg' at 0x117c0f860>,
4: <matplotlib.cbook._StrongRef at 0x117c0f550>,
5: <matplotlib.cbook._StrongRef at 0x119686dd8>},
'scroll_event': {1: <weakref at 0x117be4048; to 'FigureCanvasTkAgg' at 0x117c0f860>},
'key_press_event': {2: <weakref at 0x117be43c8; to 'FigureManagerTk' at 0x117c0fe10>},
'motion_notify_event': {3: <weakref at 0x117be4438; to 'NavigationToolbar2Tk' at 0x117c0fe48>}}
What is interesting is that in this particular case I've personally only added one event handler (button_press_event), but I've clearly got more events than that. What we're seeing here is actually all of the events that are running in your backend too. Ever wondered how to disable some of those (like the keyboard shortcuts)? It is just a dictionary of events, and there is nothing stopping you from just trashing them:
# Delete / remove the keyboard press event handlers.
fig.canvas.callbacks.callbacks.pop('key_press_event')
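If you only want to drop a single handler rather than every callback for an event, the supported route is mpl_disconnect with the cid that mpl_connect returned (a sketch; the onclick handler is just a stand-in, and the Agg backend is forced so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig = plt.figure()

def onclick(event):
    print('click at', event.xdata, event.ydata)

cid = fig.canvas.mpl_connect('button_press_event', onclick)

# Remove just this one handler; any other callbacks for the event survive.
fig.canvas.mpl_disconnect(cid)
```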
If you want to get a reference to the underlying function, a little bit of introspection suggests you can do something like:
In [37]: for cid, func_ref in fig.canvas.callbacks.callbacks['button_press_event'].items():
    ...:     func = func_ref()
    ...:     print(cid, func)
5 <function onclick at 0x114d7e620>
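Putting it all together, a generic, backend-independent dump of everything connected to a canvas looks something like this (a sketch; the onclick handler is a stand-in, and the Agg backend is forced so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig = plt.figure()

def onclick(event):
    pass

fig.canvas.mpl_connect('button_press_event', onclick)

# event name -> {cid: weakref-or-_StrongRef}; call the proxy to dereference it
for event_name, registry in fig.canvas.callbacks.callbacks.items():
    for cid, func_ref in registry.items():
        func = func_ref()  # may be None if the target was garbage collected
        print(event_name, cid, func)
```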
I'm coding a script that connects to the Binance websocket and uses the .run_forever() method to constantly get live data from the site. I want to be able to debug my code and watch the values of variables as they change, but I'm not sure how to do this as the script basically hangs on the line with the .run_forever() method, because it is an infinite event loop. This is by design as I want to continuously get live data (it receives a message approximately every second), but I can't think of a good way to debug it.
I'm using VSCode and here are some snippets of my code to help understand my issue. The message function for the websocket is just a bunch of technical analysis and trade logic, but it is also the function that contains all the changing variables that I want to watch.
socket = f"wss://stream.binance.com:9443/ws/{Symbol}#kline_{interval}"
def on_open(ws):
    print("open connection")

def on_message(ws, message):
    global trade_list
    global in_position
    json_message = json.loads(message)
    candle = json_message['k']  # Accesses candle data
    ...[trade logic code here]...

def on_close(ws):
    print("Websocket connection close")

# ------------------------- Define a websocket object ------------------------ #
ws = websocket.WebSocketApp(socket, on_open=on_open, on_message=on_message, on_close=on_close)
ws.run_forever()
If more code is required to answer the question, then I can edit this question to include it (I'm thinking if you would like to have an idea of what variables I want to look at, I just thought it would be easier and simpler to show these parts).
Also, I know using global isn't great; once I've finished (or am close to finishing) the script I want to go back and tidy it up, so I'll deal with it then.
I'm a little late to the party but the statement
websocket.enableTrace(True)
worked for me. Place it just before you define your websocket object and it will print all traffic in and out of the websocket including any exceptions that you might get as you process the messages.
I'm forwarding alert messages from an AWS Lambda function to Sentry using the sentry_sdk in Python.
The problem is that even if I use scope.clear() before capture_message(), the events I receive in Sentry are enriched with information about the runtime environment where the message is captured (the AWS Lambda Python environment) - which in this scenario is completely unrelated to the actual alert I'm forwarding.
My Code:
sentry_sdk.init(dsn, environment="name-of-stage")

with sentry_sdk.push_scope() as scope:
    # Unfortunately this does not get rid of lambda specific context information.
    scope.clear()
    # here I set relevant information which works just fine.
    scope.set_tag("priority", "high")
    result = sentry_sdk.capture_message("mymessage")
The behaviour does not change if I pass scope as an argument to capture_message().
The tag I set manually is being transmitted just fine. But I also receive information about the Python runtime - therefore scope.clear() either does not behave like I expect it to OR capture_message gathers additional information itself.
Can someone explain how to only capture the information I'm actively assigning to the scope with set_tag and similar functions and suppress everything else?
Thank you very much
While I didn't find an explanation for the behaviour, I was able to solve my problem (even though it's a little bit hacky).
The solution was to use the sentry before_send hook in the init step like so:
sentry_sdk.init(dsn, environment="test", before_send=cleanup_event)

with sentry_sdk.push_scope() as scope:
    sentry_sdk.capture_message(message, state, scope)

# when using sentry from lambda don't forget to flush otherwise messages can get lost.
sentry_sdk.flush()
Then in the cleanup_event function it gets a little bit ugly. I basically iterate over the keys of the event and remove the ones I do not want to show up. Since some Keys hold objects and some (like "tags") are a list with [key, value] entries this was quite some hassle.
from contextlib import suppress

KEYS_TO_REMOVE = {
    "platform": [],
    "modules": [],
    "extra": ["sys.argv"],
    "contexts": ["runtime"],
}
TAGS_TO_REMOVE = ["runtime", "runtime.name"]

def cleanup_event(event, hint):
    for k, v in KEYS_TO_REMOVE.items():
        with suppress(KeyError):
            if v:
                for i in v:
                    del event[k][i]
            else:
                del event[k]
    # iterate over a copy so removing entries doesn't skip elements
    for t in list(event["tags"]):
        if t[0] in TAGS_TO_REMOVE:
            event["tags"].remove(t)
    return event
I have simulated a ControlLogix controller using the library CPPPO.
Command -
enip_server -v SCADA=INT[1000] TEXT=SSTRING[100] FLOAT=REAL
Output - (server output shown as an image in the original post)
I want to use the pycomm3 library to read and write the tags. As you can see above, three tags have been created by CPPPO while starting the simulation server - SCADA, TEXT and FLOAT; I just want to read any one of them.
Here is the code I'm using -
from pycomm3 import LogixDriver

with LogixDriver('127.0.0.1') as plc:
    print(plc)
    # plc.write('TEXT', 'Hello World!')
    print(plc.read('TEXT'))
Output - (error output shown as an image in the original post)
The logs in the CPPPO server are - (shown as an image in the original post)
Instead of "Tag doesn't exist", we should receive the value of the TEXT tag.
So, there are a couple of things going on here. The main 'feature' of pycomm3 is how it handles everything automatically for you, but to do that it first needs to upload the tag list from the PLC. It looks like CPPPO doesn't implement those services; if you enable logging you will see that it errors out when trying to upload the tag list. (I think this error should have bubbled up and exited the with block before ever trying to read the tag - I will get it changed in the next release.) You can bypass this, though, by defining your own _initialize_driver method and setting the tag list manually:
from pycomm3 import SHORT_STRING, REAL  # also need to import the CIP types

def _cpppo_initialize_driver(self, _, __):
    self._cfg['use_instance_ids'] = False  # force it to only use the tag name in requests
    self._info = self.get_plc_info()  # optional
    self._tags = {
        'TEXT': {
            'tag_name': 'TEXT',
            'tag_type': 'atomic',
            'data_type_name': 'SHORT_STRING',
            'data_type': 'SHORT_STRING',
            'dim': 1,
            'dimensions': [100, 0, 0],
            'type_class': SHORT_STRING,
        },
        'FLOAT': {
            'tag_name': 'FLOAT',
            'tag_type': 'atomic',
            'data_type_name': 'REAL',
            'data_type': 'REAL',
            'dim': 0,
            'dimensions': [0, 0, 0],
            'type_class': REAL,
        },
    }

LogixDriver._initialize_driver = _cpppo_initialize_driver
The _tags attribute is a dict mapping each tag name to the definition for that tag; this section in the docs has a lot more details about what each field is for. The examples I added are simple atomic tags; if you want to do structs it is a little more complicated.
In addition to that, I did find a bug dealing with the write method. Currently, it is including part of the request twice in the packet. Real PLCs seem to ignore this, but CPPPO doesn't handle it, which leads to an error. I have a fix already in my development branch and can confirm both reads and writes will work. Unfortunately, I have a few other changes in progress that I need to get done before I release a new version. If you follow the repo on GitHub it will notify you when it is released. If writes are critical and waiting for a fix is not possible, I can give you the fix since it's fairly small.
I've been stuck on this same issue for short of a week now:
the program should add widgets based on a http request. However, that request may take some time depending on user's internet connection, so I decided to thread that request and add a spinner to indicate that something is being done.
Here lies the issue. Some piece of code:
@mainthread
def add_w(self, parent, widget):
    parent.add_widget(widget)

def add_course(self):
    # HTTP Request I mentioned
    course = course_manager.get_course(textfield_text)
    courses_stack_layout = constructor_screen.ids.added_courses_stack_layout
    course_information_widget = CourseInformation(coursename_label=course.name)
    self.add_w(courses_stack_layout, course_information_widget)
    constructor_screen.ids.spinner.active = False
add_course is being called from a thread, and spinner.active is set to True before calling this function. Here's the result, sometimes: a messed-up graphical interface (screenshot in the original post).
I also tried solving this with Clock.schedule_once and Clock.schedule_interval with a queue. The results were the same: sometimes it works, sometimes it doesn't. The spinner does spin while getting the request, which is great.
Quite frankly, I would've never thought that implementing a spinner would be so hard.
How to implement that spinner? Maybe another alternative to threading? Maybe another alternative to urllib to make a request?
edit: any feedback on how I should've posted this so I can get more help? Is it too long? Maybe I could've been more clear?
The problem here was simply that widgets must also be created within the main thread.
Creating another function marked with @mainthread and calling that from the threaded one solved the issue.
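For anyone hitting the same wall, the shape of the fix can be sketched without Kivy at all (a stand-in list replaces the widget tree; in real code the draining loop would live in a @mainthread-decorated method or a Clock.schedule_interval callback):

```python
import queue
import threading

ui_queue = queue.Queue()
added = []  # stand-in for the widget tree

def worker():
    # Only fetch data here (stand-in for the slow HTTP request);
    # never create or attach widgets from a worker thread.
    course_name = "fetched-course"
    ui_queue.put(course_name)

t = threading.Thread(target=worker)
t.start()
t.join()

# Back on the main thread: drain the queue and do all widget work here.
while not ui_queue.empty():
    added.append(ui_queue.get())
```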
Thanks for those who contributed.
Is there a way to filter github events using the GET request?
For example, can I perform a GET that returns a subset (ForkEvents) of a repo's events?
Pseudo-request (though this doesn't work):
GET /repos/:owner/:repo/events?type=ForkEvent
More generally is there any way to implicitly filter the response data in the GET request i.e. before the data reaches my code? (I am new to the github-api and RESTful APIs in general, so I apologize in advance if this is a clueless question)
Thanks
If the Events Documentation is correct, it would appear this is in fact not possible. If you're new to the GitHub API you should probably try to use a library that exists for it. For example, if you were using Python and github3.py then you might do something like:
import github3
g = github3.login("nelag", "nelag's password")
r = g.repository("nelag", "nelags_repo")
forks = filter(lambda event: event.type == 'ForkEvent', r.iter_events())
Nice and simple and you will have the added benefit of your code being lazy.
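If you'd rather not pull in a library, the same client-side filtering is trivial over the raw JSON. A minimal sketch, with a hard-coded list standing in for one page of `GET /repos/:owner/:repo/events` (the actual HTTP call is omitted so this runs offline):

```python
# Stand-in for one page of the Events API response (a list of event objects).
events = [
    {"type": "ForkEvent", "id": "1"},
    {"type": "PushEvent", "id": "2"},
    {"type": "ForkEvent", "id": "3"},
]

# The API offers no server-side `type` filter, so select ForkEvents locally.
forks = [e for e in events if e["type"] == "ForkEvent"]
print(forks)
```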