Locust, on_start method doesn't work with tasks - python-3.x

I am trying to add some variables (e.g. self.boolean_flag) to HttpUser to represent user state; this variable is used in scenario load testing.
According to the documentation, I should use on_start to initialise variables.
However, when I use tasks = [TaskSet] as below, on_start doesn't seem to take effect and I get
AttributeError: 'ExampleTask' object has no attribute 'boolean_flag':
class ExampleTask(TaskSet):
    @task
    def example_one(self):
        print(self.boolean_flag)  # AttributeError: 'ExampleTask' object has no attribute 'boolean_flag'
        make_api_request(self, "example_one")

class CustomUser(HttpUser):
    wait_time = between(
        int(os.getenv("LOCUST_MIN_WAIT", 200)), int(os.getenv("LOCUST_MAX_WAIT", 1000))
    )

    def on_start(self):
        self.boolean_flag = False

    tasks = {ExampleTask1: 10, ExampleTask2: 5, ...}
The following works, though:
class CustomUser(HttpUser):
    wait_time = between(
        int(os.getenv("LOCUST_MIN_WAIT", 200)), int(os.getenv("LOCUST_MAX_WAIT", 1000))
    )

    def on_start(self):
        self.boolean_flag = False

    @task
    def example_one(self):
        print(self.boolean_flag)
        make_api_request(self, "example_one")
Since I have many different scenarios that reuse many TaskSets, I need to use tasks = {}.
I also tried subclassing HttpUser and adding those variables in __init__(), but that doesn't work well with tasks = {} either.
class CustomUser(HttpUser):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.boolean_flag = False

class AllOfApisCallForLoadAtOneGo(CustomUser):
    wait_time = between(
        int(os.getenv("LOCUST_MIN_WAIT", 200)), int(os.getenv("LOCUST_MAX_WAIT", 1000))
    )
    tasks = {ExampleTask1: 10, ExampleTask2: 5, ...}
(loadtest-GvbsrA_X-py3.8) ➜ loadtest git:(abcd) ✗ locust -f locustfile_scenario.py first -H https://www.somehost.com
[2020-09-02 06:24:27,276] MacBook-Pro.local/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2020-09-02 06:24:27,286] MacBook-Pro.local/INFO/locust.main: Starting Locust 1.2.3
[2020-09-02 06:24:35,881] MacBook-Pro.local/INFO/locust.runners: Spawning 10 users at the rate 3 users/s (0 users already running)...
[2020-09-02 06:24:35,883] MacBook-Pro.local/ERROR/locust.user.task: You must specify the base host. Either in the host attribute in the User class, or on the command line using the --host option.
Traceback (most recent call last):
  File "/Users/poetry/virtualenvs/loadtest-GvbsrA_X-py3.8/lib/python3.8/site-packages/locust/user/task.py", line 284, in run
    self.execute_next_task()
  File "/Users/poetry/virtualenvs/loadtest-GvbsrA_X-py3.8/lib/python3.8/site-packages/locust/user/task.py", line 309, in execute_next_task
    self.execute_task(self._task_queue.pop(0))
  File "/Users/poetry/virtualenvs/loadtest-GvbsrA_X-py3.8/lib/python3.8/site-packages/locust/user/task.py", line 422, in execute_task
    task(self.user)
  File "/Users/poetry/virtualenvs/loadtest-GvbsrA_X-py3.8/lib/python3.8/site-packages/locust/user/users.py", line 224, in __init__
    raise LocustError(
locust.exception.LocustError: You must specify the base host. Either in the host attribute in the User class, or on the command line using the --host option.

It appears you're assuming that TaskSet inherits from, or is otherwise invoked directly on, HttpUser, which isn't the case. But the user is passed into the TaskSet when it's instantiated; you just have to use self.user. So in your case, instead of print(self.boolean_flag) in your task, you'd do print(self.user.boolean_flag).


Disable parallel build for a specific target

I need to disable parallel runs for a single target. It is a test that verifies that the program doesn't create any randomly or incorrectly named files; any other file that is built in the meantime makes this test fail.
I found this advice on SCons FAQ:
Use the SideEffect() method and specify the same dummy file for each target that shouldn't be built in parallel. Even if the file doesn't exist, SCons will prevent the simultaneous execution of commands that affect the dummy file. See the linked method page for examples.
However, this is of no use here, as it would prevent the parallel build of any two targets, not only of the test script.
Is there any way to prevent parallel build of one target while allowing it for all others?
We discussed this in the SCons Discord and came up with an example that sets up synchronous test runners, which make sure no other tasks are running while the test runs.
This is the example SConstruct from the GitHub example repo:
import SCons

# A bound map of stream (as in stream of work) name to side-effect
# file. Since SCons will not allow tasks with a shared side-effect
# to execute concurrently, this gives us a way to limit link jobs
# independently of overall SCons concurrency.
node_map = dict()

# A list of nodes that have to be run synchronously.
# The sync nodes ensure the test runners are synchronous amongst
# themselves.
sync_nodes = list()

# This emitter will make a phony side effect per target.
# The test builders will share all the other side effects, making
# sure the tests only run when nothing else is running.
def sync_se_emitter(target, source, env):
    name = str(target[0])
    se_name = "#unique_node_" + str(hash(name))
    se_node = node_map.get(se_name, None)
    if not se_node:
        se_node = env.Entry(se_name)
        # This may not be necessary, but why chance it
        env.NoCache(se_node)
        node_map[se_name] = se_node
        for sync_node in sync_nodes:
            env.SideEffect(se_name, sync_node)
    env.SideEffect(se_node, target)
    return (target, source)

# Here we force all builders to use the emitter, so all
# targets will respect the shared side effect when being built.
# NOTE: the builders which should be synchronous must be listed
# by name, as SynchronousTestRunner is in this example.
original_create_nodes = SCons.Builder.BuilderBase._create_nodes
def always_emitter_create_nodes(self, env, target=None, source=None):
    if self.get_name(env) != "SynchronousTestRunner":
        if self.emitter:
            self.emitter = SCons.Builder.ListEmitter([self.emitter, sync_se_emitter])
        else:
            self.emitter = SCons.Builder.ListEmitter([sync_se_emitter])
    return original_create_nodes(self, env, target, source)
SCons.Builder.BuilderBase._create_nodes = always_emitter_create_nodes

env = Environment()
env.Tool('textfile')
nodes = []

# This is a fake test runner which acts like it's running a test.
env['BUILDERS']["SynchronousTestRunner"] = SCons.Builder.Builder(
    action=SCons.Action.Action([
        "sleep 1",
        "echo Starting test $TARGET",
        "sleep 5",
        "echo Finished test $TARGET",
        'echo done > $TARGET'],
        None))

# This emitter connects the test runners with the shared side effect.
def sync_test_emitter(target, source, env):
    for name in node_map:
        env.SideEffect(name, target)
    sync_nodes.append(target)
    return (target, source)
env['BUILDERS']["SynchronousTestRunner"].emitter = SCons.Builder.ListEmitter([sync_test_emitter])

# In this test we create two test runners and make them depend on various
# source files being generated. This is just to force the tests to be run
# in the middle of the build, which allows the example to demonstrate that
# all other jobs have paused so the test can be performed.
env.SynchronousTestRunner("test.out", "source10.c")
env.SynchronousTestRunner("test2.out", "source62.c")
for i in range(50):
    nodes.append(env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}"))
for i in range(50, 76):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test.out")
    nodes.append(node)
for i in range(76, 100):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test2.out")
    nodes.append(node)
nodes.append(env.Textfile('main.c', 'int main(){return 0;}'))
env.Program('out', nodes)
This solution is based on dmoody256's answer.
The underlying concept is the same, but the code should be easier to use, and it's ready to be put in the site_scons directory so as not to clutter the SConstruct itself.
site_scons/site_init.py:
# Allows using functions `SyncBuilder` and `Environment.SyncCommand`.
from SyncBuild import SyncBuilder
site_scons/SyncBuild.py:
from SCons.Builder import Builder, BuilderBase, ListEmitter
from SCons.Environment import Base as BaseEnvironment

# This code allows building some targets synchronously, which means there
# won't be anything else built at the same time even if SCons is run with
# flag `-j`.
#
# This is achieved by adding a different dummy value as a side effect of
# each target. (These files won't be created. They are only a way of
# enforcing constraints on SCons.)
# The files that need to be built synchronously then have every dummy
# value from the entire configuration added as a side effect, which
# effectively prevents them from being built along with any other file.
#
# To create a synchronous target use `SyncBuilder`.

__processed_targets = set()
__lock_values = []
__synchronous_nodes = []

def __add_emiter_to_builder(builder, emitter):
    if builder.emitter:
        if isinstance(builder.emitter, ListEmitter):
            if not any(x is emitter for x in builder.emitter):
                builder.emitter.append(emitter)
        else:
            builder.emitter = ListEmitter([builder.emitter, emitter])
    else:
        builder.emitter = ListEmitter([emitter])

def __individual_sync_locks_emiter(target, source, env):
    if not target or target[0] not in __processed_targets:
        lock_value = env.Value(f'.#sync_lock_{len(__lock_values)}#')
        env.NoCache(lock_value)
        env.SideEffect(lock_value, target + __synchronous_nodes)
        __processed_targets.update(target)
        __lock_values.append(lock_value)
    return target, source

__original_create_nodes = BuilderBase._create_nodes
def __create_nodes_adding_emiter(self, *args, **kwargs):
    __add_emiter_to_builder(self, __individual_sync_locks_emiter)
    return __original_create_nodes(self, *args, **kwargs)
BuilderBase._create_nodes = __create_nodes_adding_emiter

def _all_sync_locks_emitter(target, source, env):
    env.SideEffect(__lock_values, target)
    __synchronous_nodes.append(target)
    return (target, source)

def SyncBuilder(*args, **kwargs):
    """Works like the normal `Builder` except it prevents the targets from
    being built at the same time as any other target."""
    target = Builder(*args, **kwargs)
    __add_emiter_to_builder(target, _all_sync_locks_emitter)
    return target

def __SyncBuilder(self, *args, **kwargs):
    """Works like the normal `Builder` except it prevents the targets from
    being built at the same time as any other target."""
    target = self.Builder(*args, **kwargs)
    __add_emiter_to_builder(target, _all_sync_locks_emitter)
    return target
BaseEnvironment.SyncBuilder = __SyncBuilder

def __SyncCommand(self, *args, **kwargs):
    """Works like the normal `Command` except it prevents the targets from
    being built at the same time as any other target."""
    target = self.Command(*args, **kwargs)
    _all_sync_locks_emitter(target, [], self)
    return target
BaseEnvironment.SyncCommand = __SyncCommand
SConstruct (this is dmoody256's test, adapted to do the same thing as the original):
env = Environment()
env.Tool('textfile')
nodes = []

# This is a fake test runner which acts like it's running a test.
env['BUILDERS']["SynchronousTestRunner"] = SyncBuilder(
    action=Action([
        "sleep 1",
        "echo Starting test $TARGET",
        "sleep 5",
        "echo Finished test $TARGET",
        'echo done > $TARGET'],
        None))

# In this test we create two test runners and make them depend on various
# source files being generated. This is just to force the tests to be run
# in the middle of the build, which allows the example to demonstrate that
# all other jobs have paused so the test can be performed.
env.SynchronousTestRunner("test.out", "source10.c")
env.SynchronousTestRunner("test2.out", "source62.c")
for i in range(50):
    nodes.append(env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}"))
for i in range(50, 76):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test.out")
    nodes.append(node)
for i in range(76, 100):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test2.out")
    nodes.append(node)
nodes.append(env.Textfile('main.c', 'int main(){return 0;}'))
env.Program('out', nodes)
After creating site_scons/site_init.py and site_scons/SyncBuild.py, you can just use the function SyncBuilder or the method Environment.SyncCommand in any SConstruct or SConscript file in the project, without any additional configuration.

Accessing list of registered factories/services in wired/pyramid_services

I'm trying to debug my usage of wired and pyramid_services as well as migrate from using named services to registering services with interfaces and context classes.
Is there a way to see everything that is registered with the current container, both for debugging and to create fixtures for pytest during testing? Something like the get_registrations call in this pseudo-code for injecting fixtures into conftest.py for pytest:
def generate_service_fixture(reg):
    @pytest.fixture()
    def service_fixture(base_app_request):
        return base_app_request.find_service(iface=reg.iface, context=reg.context, name=reg.name)
    return service_fixture

def inject_service_fixture(reg):
    parts = [
        get_iface_name(reg.iface),
        get_context_name(reg.context),
        get_name(reg.name)]
    # Make up a name that tests can use to pull in the appropriate fixture.
    fixture_name = '__'.join(filter(None, parts)) + '_service'
    globals()[fixture_name] = generate_service_fixture(reg)

def get_iface_name(iface):
    return iface.__name__ if iface else None

def get_context_name(context):
    return context.__name__ if context else None

def get_name(name):
    return name if name else None

def register_fixtures(container):
    for reg in container.get_registrations():
        inject_service_fixture(reg)
Later on in tests I would do something like:
def test_service_factory(IRequest_service):
    assert IRequest_service, "Factory failed to construct request."
This sort of works for debugging after the services have been declared. I'm just posting this half-answer for now; I don't have a clean solution for dynamic pytest fixture creation.
def includeme(config):
    # ...
    config.commit()
    introspector = config.registry.introspector
    for intr in introspector.get_category('pyramid_services'):
        print(intr['introspectable'])

Error while popping out Flask app context

I am trying to create an async API using threading (Celery is overkill in my case). To achieve this, I subclassed the Thread class in the following manner. Since the code that runs inside the thread requires the app context as well as the request context, I have pushed both contexts onto the stack.
from threading import Thread
from flask import _app_ctx_stack, _request_ctx_stack

class AppContextThread(Thread):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # App context and request context are popped from the stack once the
        # request is completed, but we require them for accessing application
        # data. Hence, storing them and then pushing them onto the stack again.
        self.app_context = _app_ctx_stack.top
        self.request_context = _request_ctx_stack.top

    def run(self):
        self.app_context.push()
        self.request_context.push()
        super().run()
        print(f"App context top: {_app_ctx_stack.top}")
        print(f"Req context top: {_request_ctx_stack.top}")
        print(f"After app_context: {self.app_context}")
        print(f"After request_context: {self.request_context}")
        self.request_context.pop()
        print(f"After request_context pop: {self.request_context}")
        print(f"After request_context pop -> app_context: {self.app_context}")
        self.app_context.pop()
        print(f"After app_context pop: {self.app_context}")
Now, when I try to pop the app context off the stack, I get the following error even though the app context is present in the stack (see the printed logs):
App context top: <flask.ctx.AppContext object at 0x7f7f512100f0>
Req context top: <RequestContext 'http://0.0.0.0:8000/rest/v1/api' [PUT] of app.app>
After app_context: <flask.ctx.AppContext object at 0x7f7f512100f0>
After request_context: <RequestContext 'http://0.0.0.0:8000/rest/v1/api' [PUT] of app.app>
After request_context pop: <RequestContext 'http://0.0.0.0:8000/rest/v1/api' [PUT] of app.app>
After request_context pop -> app_context: <flask.ctx.AppContext object at 0x7f7f512100f0>
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/Users/app/utils/app_context_thread.py", line 27, in run
    self.app_context.pop()
  File "/Users/venv/lib/python3.6/site-packages/flask/ctx.py", line 235, in pop
    % (rv, self)
AssertionError: Popped wrong app context. (None instead of <flask.ctx.AppContext object at 0x7f7f512100f0>)
Could anyone please point out what I am doing wrong here?
Hopefully this will help someone out even though this question is over a year old.
I ran into this issue while working on something similar in Flask (threading out and needing the app and request contexts). I also attempted to create a subclass of the Python Thread class to automatically add the contexts to the thread. My code for the class is below; it is similar to yours but does not hit the assertion errors you are coming across.
import threading
from flask import has_app_context, has_request_context, _app_ctx_stack, _request_ctx_stack

class FlaskThread(threading.Thread):
    def __init__(self, group=None, target=None, name=None, args=(), kwargs=None, *, daemon=None):
        super().__init__(group=group, target=target, name=name, args=args, kwargs=kwargs, daemon=daemon)
        # First check that there is an application and a request context.
        if not has_app_context():
            raise RuntimeError("Running outside of Flask AppContext")
        if not has_request_context():
            raise RuntimeError("Running outside of Flask RequestContext")
        # Save the app and request contexts.
        self.app_ctx = _app_ctx_stack.top
        self.request_ctx = _request_ctx_stack.top

    def run(self):
        self.app_ctx.push()
        self.request_ctx.push()
        super().run()
The assertion error states that self.request_context.pop() and self.app_context.pop() are popping nothing from the LocalStack. From my understanding, the reason nothing is being popped is that the contexts have already been popped by the main Flask process.
Take a look at the Flask source code in flask/ctx.py (GitHub link here), to which I added the following print statements:
class AppContext:
    def push(self):
        """Binds the app context to the current context."""
        self._refcnt += 1
        if hasattr(sys, "exc_clear"):
            sys.exc_clear()
        _app_ctx_stack.push(self)
        print(threading.current_thread(), "PUSH AppContext:", self, "ACEND")
        appcontext_pushed.send(self.app)

    def pop(self, exc=_sentinel):
        """Pops the app context."""
        try:
            self._refcnt -= 1
            if self._refcnt <= 0:
                if exc is _sentinel:
                    exc = sys.exc_info()[1]
                self.app.do_teardown_appcontext(exc)
        finally:
            rv = _app_ctx_stack.pop()
        print(threading.current_thread(), "POP AppContext:", rv, "ACEND")
        assert rv is self, "Popped wrong app context. (%r instead of %r)" % (rv, self)
        appcontext_popped.send(self.app)

class RequestContext:
    def push(self):
        """Binds the request context to the current context."""
        # If an exception occurs in debug mode or if context preservation is
        # activated under exception situations exactly one context stays
        # on the stack. The rationale is that you want to access that
        # information under debug situations. However if someone forgets to
        # pop that context again we want to make sure that on the next push
        # it's invalidated, otherwise we run at risk that something leaks
        # memory. This is usually only a problem in test suite since this
        # functionality is not active in production environments.
        top = _request_ctx_stack.top
        if top is not None and top.preserved:
            top.pop(top._preserved_exc)

        # Before we push the request context we have to ensure that there
        # is an application context.
        app_ctx = _app_ctx_stack.top
        if app_ctx is None or app_ctx.app != self.app:
            app_ctx = self.app.app_context()
            app_ctx.push()
            self._implicit_app_ctx_stack.append(app_ctx)
        else:
            self._implicit_app_ctx_stack.append(None)

        if hasattr(sys, "exc_clear"):
            sys.exc_clear()

        _request_ctx_stack.push(self)
        print(threading.current_thread(), "PUSH RequestContext:", self, "RQEND")

        # Open the session at the moment that the request context is available.
        # This allows a custom open_session method to use the request context.
        # Only open a new session if this is the first time the request was
        # pushed, otherwise stream_with_context loses the session.
        if self.session is None:
            session_interface = self.app.session_interface
            self.session = session_interface.open_session(self.app, self.request)

            if self.session is None:
                self.session = session_interface.make_null_session(self.app)

        if self.url_adapter is not None:
            self.match_request()

    def pop(self, exc=_sentinel):
        """Pops the request context and unbinds it by doing that. This will
        also trigger the execution of functions registered by the
        :meth:`~flask.Flask.teardown_request` decorator.

        .. versionchanged:: 0.9
           Added the `exc` argument.
        """
        app_ctx = self._implicit_app_ctx_stack.pop()

        try:
            clear_request = False
            if not self._implicit_app_ctx_stack:
                self.preserved = False
                self._preserved_exc = None
                if exc is _sentinel:
                    exc = sys.exc_info()[1]
                self.app.do_teardown_request(exc)

                # If this interpreter supports clearing the exception information
                # we do that now. This will only go into effect on Python 2.x,
                # on 3.x it disappears automatically at the end of the exception
                # stack.
                if hasattr(sys, "exc_clear"):
                    sys.exc_clear()

                request_close = getattr(self.request, "close", None)
                if request_close is not None:
                    request_close()
                clear_request = True
        finally:
            rv = _request_ctx_stack.pop()
            print(threading.current_thread(), "POP RequestContext: ", rv, "RQEND")

            # get rid of circular dependencies at the end of the request
            # so that we don't require the GC to be active.
            if clear_request:
                rv.request.environ["werkzeug.request"] = None

            # Get rid of the app as well if necessary.
            if app_ctx is not None:
                app_ctx.pop(exc)

            assert rv is self, "Popped wrong request context. (%r instead of %r)" % (
                rv,
                self,
            )
I have an endpoint in Flask that threads out via FlaskThread. It does not join with the FlaskThread and immediately returns a response.
Here is the output, which I have edited for ease of reading since there were interleaved prints due to concurrency. Thread-10 is the main Flask process; Thread-11 is the FlaskThread.
1. <Thread(Thread-10, started daemon 21736)> PUSH AppContext: <flask.ctx.AppContext object at 0x000001924995B940> ACEND
2. <Thread(Thread-10, started daemon 21736)> PUSH RequestContext: <RequestContext 'http://localhost:7777/' [GET] of LIMS> RQEND
3. <FlaskThread(Thread-11, started daemon 38684)> PUSH AppContext: <flask.ctx.AppContext object at 0x000001924995B940> ACEND
4. <Thread(Thread-10, started daemon 21736)> POP RequestContext: <RequestContext 'http://localhost:7777/' [GET] of LIMS> RQEND
5. <FlaskThread(Thread-11, started daemon 38684)> POP AppContext: <flask.ctx.AppContext object at 0x000001924995B940> ACEND
6. <Thread(Thread-10, started daemon 21736)> PUSH RequestContext: <RequestContext 'http://localhost:7777/' [GET] of LIMS> RQEND
On the first call to the endpoint, the main Flask application pushes both the AppContext and the RequestContext (see lines 1 and 2). In my code, this endpoint creates a FlaskThread, and in my FlaskThread's run() function I push both contexts (see lines 3 and 5). While this is running, the main Flask application pops the contexts once a response has been generated.
In YOUR code, because you call another pop() in your thread, it throws the AssertionError: that context has already been popped.
I believe this is because the thread constructor saves the context from the top of the stack onto the thread, which explains why the thread keeps running with the data from the contexts; but when you pop, it gives an error because the main Flask process has already popped that context.
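As an alternative sketch (not the asker's code; the route and function names here are made up): Flask ships flask.copy_current_request_context, which hands the worker thread its own copy of the request context, so the thread's pop can never race the main request's pop:

```python
from threading import Thread
from flask import Flask, request, copy_current_request_context

app = Flask(__name__)

@app.route("/work", methods=["PUT"])
def work():
    # copy_current_request_context must be called while the request
    # context is active; the wrapped function then runs under a *copy*
    # of that context, pushed and popped entirely inside the thread.
    @copy_current_request_context
    def do_background_work():
        print("worker sees:", request.method, request.path)

    Thread(target=do_background_work, daemon=True).start()
    return "accepted", 202
```

Because the thread owns its copied context, it is independent of the context the main process pops when the response goes out.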

How to zmq.poll() some socket and some sort of variable?

I'm attempting to poll a few sockets and a multiprocessing.Event.
The documentation states:
A zmq.Socket or any Python object having a fileno() method that returns a valid file descriptor.
This means that I can't use my Event, but I should be able to use a file (as returned from open(...)) or an io object (anything from the io library). However, I'm having no success:
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\pydevd.py", line 1683, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\pydevd.py", line 1677, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\pydevd.py", line 1087, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:\work\polldamnyou.py", line 122, in <module>
    p = poller.poll(1000)
  File "C:\WinPython-64bit-3.6.3.0Qt5\python-3.6.3.amd64\Lib\site-packages\zmq\sugar\poll.py", line 99, in poll
    return zmq_poll(self.sockets, timeout=timeout)
  File "zmq\backend\cython\_poll.pyx", line 143, in zmq.backend.cython._poll.zmq_poll
  File "zmq\backend\cython\_poll.pyx", line 123, in zmq.backend.cython._poll.zmq_poll
  File "zmq\backend\cython\checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Unknown error
I have found the same question asked before, but the solution was to use another socket, which sidesteps the problem. I am curious and want to see this working. Does anyone have any clues about what sort of object can be used in zmq.Poller other than a socket?
Edit: a few things I've tried:
import traceback, os, zmq

def poll(poller):
    try:
        print('polled: ', poller.poll(3))
    except zmq.error.ZMQError as e:
        traceback.print_exc()

class Pollable:
    def __init__(self):
        self.fd = os.open('dump', os.O_RDWR | os.O_BINARY)
        self.FD = self.fd
        self.events = 0
        self.EVENTS = 0
        self.revents = 0
        self.REVENTS = 0

    def fileno(self):
        return self.fd

    def __getattribute__(self, item):
        if item != '__class__':
            print("requested: ", item)
        return super().__getattribute__(item)

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)
poll(poller)  # works

file = open('dump', 'w+b')
print("fileno: ", file.fileno())
poller.register(file, zmq.POLLIN)
poll(poller)  # fails

file.events = 0
file.revents = 0
file.EVENTS = 0
file.REVENTS = 0
file.fd = file.fileno()
file.FD = file.fileno()
poll(poller)  # still fails

poller.unregister(file)
file.close()
poll(poller)  # works

fd = os.open('dump', os.O_RDWR | os.O_BINARY)
print("fd: ", fd)
dummy = Pollable()
poller.register(dummy, zmq.POLLIN)
poll(poller)  # fails
__getattribute__ shows that fd and fileno are being accessed, but nothing else, so what is still wrong?
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in Less Than Five Seconds" before diving into further details.
Q : "what sort of object can be used in zmq.Poller other than a socket?"
Welcome to the lovely lands of the Zen-of-Zero. Both the published API and the pyzmq ReadTheDocs are clear and sound on this :
The zmq_poll() function provides a mechanism for applications to multiplex input/output events in a level-triggered fashion over a set of sockets. Each member of the array pointed to by the items argument is a zmq_pollitem_t structure. The nitems argument specifies the number of items in the items array. The zmq_pollitem_t structure is defined as follows :
typedef struct
{
    void *socket;
    int fd;
    short events;
    short revents;
} zmq_pollitem_t;
For each zmq_pollitem_t item, zmq_poll() shall examine either the ØMQ socket referenced by socket or the standard socket specified by the file descriptor fd, for the event(s) specified in events. If both socket and fd are set in a single zmq_pollitem_t, the ØMQ socket referenced by socket shall take precedence and the value of fd shall be ignored.
For each zmq_pollitem_t item, zmq_poll() shall first clear the revents member, and then indicate any requested events that have occurred by setting the bit corresponding to the event condition in the revents member.
If none of the requested events have occurred on any zmq_pollitem_t item, zmq_poll() shall wait timeout milliseconds for an event to occur on any of the requested items. If the value of timeout is 0, zmq_poll() shall return immediately. If the value of timeout is -1, zmq_poll() shall block indefinitely until a requested event has occurred on at least one zmq_pollitem_t.
Last, but not least,the API is explicit in warning about implementation :
The zmq_poll() function may be implemented or emulated using operating system interfaces other than poll(), and as such may be subject to the limits of those interfaces in ways not defined in this documentation.
Solution:
For any wished-to-have-poll()-ed object that does not conform to the mandatory property of having an ordinary fd file descriptor, implement a mediating proxy that supplies such a property and meets the published API specifications, or do not use that object with zmq_poll().
There is no third option.
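One way to build such a mediating proxy is to bridge the non-pollable object through a plain OS socket pair, whose read end has a perfectly ordinary file descriptor. A sketch under that assumption (the Event itself stays unpollable; whoever "sets" it sends a byte into the write end instead, here simulated by a timer thread):

```python
import socket
import threading

import zmq

ctx = zmq.Context.instance()
sub = ctx.socket(zmq.SUB)           # stands in for "a few sockets"

# The signalling pair: anything that can't be polled directly
# (e.g. a multiprocessing.Event) gets forwarded into wsock.
rsock, wsock = socket.socketpair()

poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
poller.register(rsock, zmq.POLLIN)  # plain socket: has a real fileno()

# Simulate "the event was set" from another thread after 100 ms.
threading.Timer(0.1, lambda: wsock.send(b"\x00")).start()

events = poller.poll(2000)          # wakes early, on the signal byte
if events:
    rsock.recv(1)                   # drain the signal
    print("poller woken by the signalling socket")
```

The same idea works across processes by replacing the socketpair with a multiprocessing.Pipe, whose connection objects also expose fileno().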

Python 3 script using libnotify fails as cron job

I've got a Python 3 script that gets some JSON from a URL, processes it, and notifies me if there are any significant changes to the data I get. I've tried using notify2 and PyGObject's libnotify bindings (gi.repository.Notify) and get similar results with either method. This script works fine when I run it from a terminal, but chokes when cron tries to run it.
import notify2
from gi.repository import Notify

def notify_pygobject(new_stuff):
    Notify.init('My App')
    notify_str = '\n'.join(new_stuff)
    print(notify_str)
    popup = Notify.Notification.new('Hey! Listen!', notify_str,
                                    'dialog-information')
    popup.show()

def notify_notify2(new_stuff):
    notify2.init('My App')
    notify_str = '\n'.join(new_stuff)
    print(notify_str)
    popup = notify2.Notification('Hey! Listen!', notify_str,
                                 'dialog-information')
    popup.show()
Now, if I create a script that calls notify_pygobject with a list of strings, cron throws this error back at me via the mail spool:
Traceback (most recent call last):
  File "/home/p0lar_bear/Documents/devel/notify-test/test1.py", line 3, in <module>
    main()
  File "/home/p0lar_bear/Documents/devel/notify-test/test1.py", line 4, in main
    testlib.notify(notify_projects)
  File "/home/p0lar_bear/Documents/devel/notify-test/testlib.py", line 8, in notify
    popup.show()
  File "/usr/lib/python3/dist-packages/gi/types.py", line 113, in function
    return info.invoke(*args, **kwargs)
gi._glib.GError: Error spawning command line `dbus-launch --autolaunch=776643a88e264621544719c3519b8310 --binary-syntax --close-stderr': Child process exited with code 1
...and if I change it to call notify_notify2() instead:
Traceback (most recent call last):
  File "/home/p0lar_bear/Documents/devel/notify-test/test2.py", line 3, in <module>
    main()
  File "/home/p0lar_bear/Documents/devel/notify-test/test2.py", line 4, in main
    testlib.notify(notify_projects)
  File "/home/p0lar_bear/Documents/devel/notify-test/testlib.py", line 13, in notify
    notify2.init('My App')
  File "/usr/lib/python3/dist-packages/notify2.py", line 93, in init
    bus = dbus.SessionBus(mainloop=mainloop)
  File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 211, in __new__
    mainloop=mainloop)
  File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 100, in __new__
    bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
  File "/usr/lib/python3/dist-packages/dbus/bus.py", line 122, in __new__
    bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
I did some research and saw suggestions to put a PATH= into my crontab, or to export $DISPLAY (I did this within the script by calling os.system('export DISPLAY=:0')), but neither resulted in any change...
You are on the right track. This behavior occurs because cron runs in a multiuser headless environment (think of it as running as root in a terminal without a GUI, more or less), so it doesn't know which display (X Window Server session) and user to target. If your application opens windows or notifications on some user's desktop, this problem arises.
I suppose you edit your cron with crontab -e and the entry looks like this (m h dom mon dow command):
0 5 * * 1 /usr/bin/python /home/foo/myscript.py
Note that I use the full path to Python, which is better in this kind of situation where the PATH environment variable could be different.
Then just change it to:
0 5 * * 1 export DISPLAY=:0 && /usr/bin/python /home/foo/myscript.py
If this still doesn't work, you need to allow your user to control the X Window server. Add to your .bashrc:
xhost +si:localuser:$(whoami)
If you want to set the DISPLAY from within Python, as you attempted with os.system('export DISPLAY=:0'), you can do something like this (note that os.system runs in a child shell, so that export never reaches your script's own environment):
import os

if 'DISPLAY' not in os.environ:
    os.environ['DISPLAY'] = ':0'
This will respect any DISPLAY that users may have on a multi-seat box, and fall back to the main head :0.
If your notify function, regardless of Python version or notify library, does not track the notification IDs (in a Python list) and delete the oldest before the queue is completely full, or on error, then depending on the dbus settings (in Ubuntu the maximum is 21 notifications) dbus will throw an error: maximum notifications reached!
from gi.repository import Notify
from gi.repository.GLib import GError

# Normally implemented as class variables.
DBUS_NOTIFICATION_MAX = 21
lstNotify = []

def notify_show(strSummary, strBody, strIcon="dialog-information"):
    notify = None
    try:
        # Full queue: delete the oldest notification.
        if len(lstNotify) == DBUS_NOTIFICATION_MAX:
            # Get the oldest id
            lngOldID = lstNotify.pop(0)
            Notify.Notification.clear(lngOldID)
            del lngOldID
        if len(lstNotify) == 0:
            lngLastID = 0
        else:
            lngLastID = lstNotify[len(lstNotify) - 1] + 1
        lstNotify.append(lngLastID)
        notify = Notify.Notification.new(strSummary, strBody, strIcon)
        notify.set_property('id', lngLastID)
        print("notify_show id %(id)d " % {'id': notify.props.id})
        # notify.set_urgency(Notify.URGENCY_LOW)
        notify.show()
    except GError as e:
        # Most likely exceeded max notifications
        print("notify_show error ", e)
    finally:
        # Guard: the exception may have fired before `notify` was assigned.
        if notify is not None:
            del notify
It may somehow be possible to ask dbus what the notification queue's maximum limit is; maybe someone can help and improve this until perfection, because complete gi.repository answers are sparse.
