Python 3 script using libnotify fails as cron job

I've got a Python 3 script that gets some JSON from a URL, processes it, and notifies me if there are any significant changes to the data. I've tried using notify2 and PyGObject's libnotify bindings (gi.repository.Notify) and get similar results with either method. This script works fine when I run it from a terminal, but chokes when cron tries to run it.
import notify2
from gi.repository import Notify

def notify_pygobject(new_stuff):
    Notify.init('My App')
    notify_str = '\n'.join(new_stuff)
    print(notify_str)
    popup = Notify.Notification.new('Hey! Listen!', notify_str,
                                    'dialog-information')
    popup.show()

def notify_notify2(new_stuff):
    notify2.init('My App')
    notify_str = '\n'.join(new_stuff)
    print(notify_str)
    popup = notify2.Notification('Hey! Listen!', notify_str,
                                 'dialog-information')
    popup.show()
Now, if I create a script that calls notify_pygobject with a list of strings, cron throws this error back at me via the mail spool:
Traceback (most recent call last):
  File "/home/p0lar_bear/Documents/devel/notify-test/test1.py", line 3, in <module>
    main()
  File "/home/p0lar_bear/Documents/devel/notify-test/test1.py", line 4, in main
    testlib.notify(notify_projects)
  File "/home/p0lar_bear/Documents/devel/notify-test/testlib.py", line 8, in notify
    popup.show()
  File "/usr/lib/python3/dist-packages/gi/types.py", line 113, in function
    return info.invoke(*args, **kwargs)
gi._glib.GError: Error spawning command line `dbus-launch --autolaunch=776643a88e264621544719c3519b8310 --binary-syntax --close-stderr': Child process exited with code 1
...and if I change it to call notify_notify2() instead:
Traceback (most recent call last):
  File "/home/p0lar_bear/Documents/devel/notify-test/test2.py", line 3, in <module>
    main()
  File "/home/p0lar_bear/Documents/devel/notify-test/test2.py", line 4, in main
    testlib.notify(notify_projects)
  File "/home/p0lar_bear/Documents/devel/notify-test/testlib.py", line 13, in notify
    notify2.init('My App')
  File "/usr/lib/python3/dist-packages/notify2.py", line 93, in init
    bus = dbus.SessionBus(mainloop=mainloop)
  File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 211, in __new__
    mainloop=mainloop)
  File "/usr/lib/python3/dist-packages/dbus/_dbus.py", line 100, in __new__
    bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
  File "/usr/lib/python3/dist-packages/dbus/bus.py", line 122, in __new__
    bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
I did some research and saw suggestions to put a PATH= into my crontab, or to export $DISPLAY (I did this within the script by calling os.system('export DISPLAY=:0')) but neither resulted in any change...

You are on the right track. This happens because cron runs in a headless, multi-user environment (think of it as running as root in a terminal without a GUI, roughly), so it doesn't know which display (X Window Server session) or which user to target. This problem arises whenever your application opens windows or sends notifications to some user's desktop.
I suppose you edit your crontab with crontab -e and the entry looks like this:
m h dom mon dow command
Something like:
0 5 * * 1 /usr/bin/python /home/foo/myscript.py
Note that I use the full path to Python; that is safer in situations like this, where the PATH environment variable may differ.
Then just change to:
0 5 * * 1 export DISPLAY=:0 && /usr/bin/python /home/foo/myscript.py
If this still doesn't work, you need to allow your user to control the X Window server.
Add this to your .bashrc:
xhost +si:localuser:$(whoami)

If you want to set the DISPLAY from within Python, as you attempted with os.system('export DISPLAY=:0') (which only affects the short-lived child shell, not your script's own environment), you can do something like this:
import os

if 'DISPLAY' not in os.environ:
    os.environ['DISPLAY'] = ':0'
This will respect any DISPLAY that users may have on a multi-seat box, and fall back to the main head :0.
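If notifications still fail under cron even with DISPLAY set, the session D-Bus address may also be missing. A minimal sketch, assuming a systemd-based Linux where the session bus socket lives at /run/user/<uid>/bus (the path is an assumption; verify it on your system):
import os

os.environ.setdefault('DISPLAY', ':0')
# Assumed path: the usual session-bus location on systemd-based distros.
os.environ.setdefault('DBUS_SESSION_BUS_ADDRESS',
                      'unix:path=/run/user/{}/bus'.format(os.getuid()))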

If your notify function, regardless of Python version or notification library, does not track the notification IDs (in a Python list, say) and delete the oldest one before the queue fills up completely, or on error, then, depending on the dbus settings (in Ubuntu the maximum is 21 notifications), dbus will throw an error: maximum notifications reached!
from gi.repository import Notify
from gi.repository.GLib import GError

# Normally implemented as class variables.
DBUS_NOTIFICATION_MAX = 21
lstNotify = []

def notify_show(strSummary, strBody, strIcon="dialog-information"):
    notify = None  # ensure the name exists for the finally block
    try:
        # Full queue: delete the oldest notification.
        if len(lstNotify) == DBUS_NOTIFICATION_MAX:
            # Get the oldest id
            lngOldID = lstNotify.pop(0)
            Notify.Notification.clear(lngOldID)
            del lngOldID
        if len(lstNotify) == 0:
            lngLastID = 0
        else:
            lngLastID = lstNotify[-1] + 1
        lstNotify.append(lngLastID)
        notify = Notify.Notification.new(strSummary, strBody, strIcon)
        notify.set_property('id', lngLastID)
        print("notify_show id %(id)d " % {'id': notify.props.id})
        #notify.set_urgency(Notify.URGENCY_LOW)
        notify.show()
    except GError as e:
        # Most likely exceeded the maximum number of notifications
        print("notify_show error ", e)
    finally:
        if notify is not None:
            del notify
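A usage sketch for the function above (one assumption on my part: Notify.init must have been called once at startup, which the snippet omits):
# Send more notifications than the queue allows; the oldest IDs are
# cleared first instead of dbus raising "maximum notifications reached".
Notify.init('My App')
for i in range(30):
    notify_show('Hey! Listen!', 'message #%d' % i)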
It may somehow be possible to ask dbus what the notification queue's maximum limit is; maybe someone can help and improve this toward perfection, since complete gi.repository answers are hard to come by.

Related

AWS Secrets Manager times out sometimes

I am fetching a secret from Secrets Manager in a Lambda. The request fails sometimes, which is totally strange: it works fine, and then a couple of hours later I check and I am getting a timeout.
def get_credentials(self):
    """Retrieve credentials from the Secrets Manager service."""
    boto_config = BotoConfig(connect_timeout=3, retries={"max_attempts": 3})
    secrets_client = self.boto_session.client(
        service_name="secretsmanager",
        region_name=self.boto_session.region_name,
        config=boto_config,
    )
    secret_value = secrets_client.get_secret_value(SecretId=self._secret_name)
    secret = secret_value["SecretString"]
I tried to debug the Lambda, and later it seems to be working again without any change; those state changes happen over a span of hours. Any hint as to why that could happen?
Traceback (most recent call last):
  File "/opt/python/botocore/endpoint.py", line 249, in _do_get_response
    http_response = self._send(request)
  File "/opt/python/botocore/endpoint.py", line 321, in _send
    return self.http_session.send(request)
  File "/opt/python/botocore/httpsession.py", line 438, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://secretsmanager.eu-central-1.amazonaws.com/"
You are using the legacy retry mode (the default one in boto3), which has very limited functionality, as it only retries a very limited set of errors.
You should try changing your retry strategy to something like the standard or adaptive mode. In that case your code would look like:
from botocore.config import Config as BotoConfig

boto_config = BotoConfig(
    connect_timeout=3,
    retries={
        "max_attempts": 3,
        "mode": "standard",
    },
)
secrets_client = self.boto_session.client(
    service_name="secretsmanager",
    region_name=self.boto_session.region_name,
    config=boto_config,
)
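If you also want client-side throttling, boto3 additionally accepts an adaptive retry mode; a sketch of just the config change, using the same BotoConfig alias as above:
boto_config = BotoConfig(
    connect_timeout=3,
    retries={
        "max_attempts": 3,
        "mode": "adaptive",  # standard retries plus client-side rate limiting
    },
)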

How to zmq.poll() some socket and some sort of variable?

I'm attempting to poll a few sockets and a multiprocessing.Event.
The documentation states:
A zmq.Socket or any Python object having a fileno() method that
returns a valid file descriptor.
which means that I can't use my Event, but I should be able to use a file (as returned by open(...)) or an io object (anything from the io library). However, I'm having no success:
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\pydevd.py", line 1683, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\pydevd.py", line 1677, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\pydevd.py", line 1087, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:\work\polldamnyou.py", line 122, in <module>
    p = poller.poll(1000)
  File "C:\WinPython-64bit-3.6.3.0Qt5\python-3.6.3.amd64\Lib\site-packages\zmq\sugar\poll.py", line 99, in poll
    return zmq_poll(self.sockets, timeout=timeout)
  File "zmq\backend\cython\_poll.pyx", line 143, in zmq.backend.cython._poll.zmq_poll
  File "zmq\backend\cython\_poll.pyx", line 123, in zmq.backend.cython._poll.zmq_poll
  File "zmq\backend\cython\checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Unknown error
I have found the same question asked before, but the solution there was to use another socket, which sidesteps the problem. I am curious and want to see this working. Does anyone have any clues as to what sort of object can be used in zmq.Poller other than a socket?
Edit: a few things I've tried
import traceback, os, zmq

def poll(poller):
    try:
        print('polled: ', poller.poll(3))
    except zmq.error.ZMQError as e:
        traceback.print_exc()

class Pollable:
    def __init__(self):
        self.fd = os.open('dump', os.O_RDWR | os.O_BINARY)
        self.FD = self.fd
        self.events = 0
        self.EVENTS = 0
        self.revents = 0
        self.REVENTS = 0

    def fileno(self):
        return self.fd

    def __getattribute__(self, item):
        if item != '__class__':
            print("requested: ", item)
        return super().__getattribute__(item)

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)
poll(poller)  # works

file = open('dump', 'w+b')
print("fileno: ", file.fileno())
poller.register(file, zmq.POLLIN)
poll(poller)  # fails

file.events = 0
file.revents = 0
file.EVENTS = 0
file.REVENTS = 0
file.fd = file.fileno()
file.FD = file.fileno()
poll(poller)  # still fails

poller.unregister(file)
file.close()
poll(poller)  # works

fd = os.open('dump', os.O_RDWR | os.O_BINARY)
print("fd: ", fd)
dummy = Pollable()
poller.register(dummy, zmq.POLLIN)
poll(poller)  # fails
__getattribute__ shows that fd and fileno are being accessed, but nothing else, so what is still wrong?!
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q: "What sort of object can be used in zmq.Poller other than a socket?"
Welcome to the lovely lands of the Zen-of-Zero. Both the published API and the pyzmq ReadTheDocs are clear and sound on this:
The zmq_poll() function provides a mechanism for applications to multiplex input/output events in a level-triggered fashion over a set of sockets. Each member of the array pointed to by the items argument is a zmq_pollitem_t structure. The nitems argument specifies the number of items in the items array. The zmq_pollitem_t structure is defined as follows :
typedef struct
{
    void *socket;
    int fd;
    short events;
    short revents;
} zmq_pollitem_t;
For each zmq_pollitem_t item, zmq_poll() shall examine either the ØMQ socket referenced by socket or the standard socket specified by the file descriptor fd, for the event(s) specified in events. If both socket and fd are set in a single zmq_pollitem_t, the ØMQ socket referenced by socket shall take precedence and the value of fd shall be ignored.
For each zmq_pollitem_t item, zmq_poll() shall first clear the revents member, and then indicate any requested events that have occurred by setting the bit corresponding to the event condition in the revents member.
If none of the requested events have occurred on any zmq_pollitem_t item, zmq_poll() shall wait timeout milliseconds for an event to occur on any of the requested items. If the value of timeout is 0, zmq_poll() shall return immediately. If the value of timeout is -1, zmq_poll() shall block indefinitely until a requested event has occurred on at least one zmq_pollitem_t.
Last, but not least, the API is explicit in warning about the implementation:
The zmq_poll() function may be implemented or emulated using operating system interfaces other than poll(), and as such may be subject to the limits of those interfaces in ways not defined in this documentation.
Solution:
For any object you wish to have poll()-ed that does not conform to the mandatory property of having an ordinary fd file descriptor, implement a mediating proxy that supplies that mandatory property and meets the published API specification, or do not use it with zmq_poll() at all.
There is no third option.
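A minimal sketch of one such mediating proxy, assuming a POSIX system (on Windows, zmq_poll's fd support is select()-based and essentially limited to sockets): back the event with an OS pipe, so the poller watches a real file descriptor. The FDEvent name is hypothetical, not part of pyzmq.
import os
import zmq

class FDEvent:
    """Event-like object whose state is visible through a file descriptor."""
    def __init__(self):
        self._r, self._w = os.pipe()  # the read end is what the poller watches

    def fileno(self):
        return self._r                # zmq.Poller accepts any object with fileno()

    def set(self):
        os.write(self._w, b'\0')      # the pipe becomes readable -> POLLIN fires

    def clear(self):
        os.read(self._r, 1)           # drain so the fd stops being readable

poller = zmq.Poller()
event = FDEvent()
poller.register(event, zmq.POLLIN)

event.set()
print(poller.poll(100))               # reports POLLIN on the pipe's read fd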

python subprocess.run: if you redirect stdout/stderr, it exits with an error instead of working

#!/usr/bin/env python3
import subprocess
import os

if False:
    # create log file.
    kfd = os.open('kk.log', os.O_WRONLY)
    # redirect stdout & err to a log file.
    os.close(1)
    os.dup(kfd)
    os.close(2)
    os.dup(kfd)

subprocess.run(["echo", "hello world"], check=True)
% ./kk.py
hello world
%
The above works fine, but if you edit the file and replace False with True:
% ./kk.py
% more kk.log
Traceback (most recent call last):
  File "./kk.py", line 16, in <module>
    subprocess.run(["echo", "hello world"], check=True)
  File "/usr/lib/python3.6/subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['echo', 'hello world']' returned non-zero exit status 1.
%
We don't get the output, and the process error exits...
I would have expected it to just work, writing to kk.log.
You probably want to say something like this instead:
#!/usr/bin/env python3
import subprocess
import os

if True:
    # create log file.
    kfd = os.open('kk.log', os.O_WRONLY | os.O_CREAT)
    # redirect stdout & err to a log file.
    os.dup2(kfd, 1)
    os.dup2(kfd, 2)

subprocess.run(["echo", "hello world"], check=True)
Notice the use of os.dup2. There are two reasons for that. First and foremost, the resulting file descriptors are inheritable. Your echo actually had no open stdout/stderr and hence failed (you can run the following in a shell to invoke echo with just stdout closed and check the behavior: /bin/echo hello world 1>&-). Also note that it does not always hold that after closing stdout (1), the lowest free descriptor (and hence the result of os.dup) is 1: someone could have closed your stdin (0) before running the script, and the same goes for stderr.
The background story on file descriptor inheritance is in PEP 446.
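A small sketch of that difference, assuming Python 3.4+ where PEP 446 applies:
import os

r, w = os.pipe()    # pipe fds are non-inheritable by default
d = os.dup(r)       # os.dup also returns a non-inheritable fd
os.dup2(r, 10)      # os.dup2's target fd is inheritable by default
print(os.get_inheritable(d), os.get_inheritable(10))  # False True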
I've also added os.O_CREAT, since my first failure trying to reproduce your problem was non-existing kk.log.
Needless to say, unless you are trying to play with the interfaces into the OS (perhaps as an exercise), you should normally stick to subprocess itself.
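For completeness, a sketch of the subprocess-native way to get the same log file, without touching raw descriptors:
import subprocess

# Let subprocess.run do the redirection; both streams land in kk.log.
with open('kk.log', 'ab') as log:
    subprocess.run(["echo", "hello world"], check=True,
                   stdout=log, stderr=subprocess.STDOUT)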

No attribute 'HookManager'

I am copying the key logger from this video: (https://www.youtube.com/watch?v=8BiOPBsXh0g) and running the code:
import pyHook, sys, logging, pythoncom

file_log = 'C:\Users\User\Google Drive\Python'

def OnKeyboardEvent(event):
    logging.basicConfig(filename=file_log, level=logging.DEBUG, format='%(message)s')
    chr(event.Ascii)
    logging.log(10, chr(event.Ascii))
    return True

hooks_manager = pyHook.HookManager()
hooks_manager.KeyDown = OnKeyboardEvent
hooks_manager.HookKeyboard()
pythoncom.PumpMessages()
This returns the error:
Traceback (most recent call last):
  File "C:\Users\User\Google Drive\Python\pyHook.py", line 2, in <module>
    import pyHook, sys, logging, pythoncom
  File "C:\Users\User\Google Drive\Python\pyHook.py", line 12, in <module>
    hooks_manager = pyHook.HookManager()
AttributeError: 'module' object has no attribute 'HookManager'
I am running Python 2.7.11 on a Windows computer.
I don't know what the problem is; please help.
Thank you
I found the solution. If you open HookManager.py and change every occurrence of 'key_hook' to 'keyboard_hook', the error no longer occurs.
I'm still unsure what the issue is, but I found a solution.
If you move the program you are trying to run into the same folder as the HookManager.py file, then it works.
For me this folder was:
C:\Python27\Lib\site-packages\pyHook
This line is wrong:
file_log = 'C:\Users\User\Google Drive\Python'
Since the system doesn't allow your program to write to the C: drive, you should use another path, such as the D: or E: drive, as given below:
file_log = 'D:\keyloggerOutput.txt'
I had the same error message after installing pyWinhook-1.6.1 on Python 3.7 from the zip file pyWinhook-1.6.1.zip.
In the application file, I replaced the import statement "import pyWinhook as pyHook" with "from pywinhook import *". The problem was then solved.

Starting a Python script from within another - odd behavior

Through a command line (/bin/sh) on an Ubuntu system, I executed a Python3 script that uses multiprocessing.Process() to start another Python3 script. I got the error message below:
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./Betabot #THE SECOND SCRIPT NEVER EXECUTES
/bin/sh: 1: Syntax error: "(" unexpected (expecting "}")
Traceback (most recent call last):
  File "./Betabot", line 26, in <module>
    JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3'))
  File "/usr/lib/python3.3/multiprocessing/process.py", line 72, in __init__
    assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now

#TESTING THE SECOND SCRIPT BY ITSELF IN TWO WAYS (both work)
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ python3 -c "import os; os.system('./conf/set_data.py3')" #WORKS
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./conf/set_data.py3 #WORKS
The question is: why is this not working? It should start the second script, and both should continue executing without issues.
I made edits to the code trying to solve the issue. The error is now on line 13. The same error occurs on line 12, "JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()", which I used as a testing line. I changed line 12 to just "os.system('date')" and that works, so the error lies in the multiprocessing call.
#!/usr/bin/env python3
import os, subprocess, multiprocessing

def write2file(openfile, WRITE):
    with open(openfile, 'w') as file:
        file.write(str(WRITE))
writetofile = writefile = filewrite = writer = filewriter = write2file

global BOTNAME, BOTINIT
BOTNAME = subprocess.getoutput('cat ./conf/startup.xml | grep -E -i -e \'<property name=\"botname\" value\' | ssed -r -e "s|<property name=\"botname\" value=\"(.*)\"/>|\1|gI"')
BOTINIT = os.getpid()

### Setup science information under ./mem/ ###
JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()
JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3')); JOB_CONFIG.start()

### START ###
write2file('./mem/BOTINIT_PID', BOTINIT); write2file('./mem/tty', os.ctermid()); write2file('./mem/SERVER_PID', BOTINIT)
JOB_EMOTION = multiprocessing.Process(os.system('./lib/emoterm -T Emotion -e ./lib/Emotion_System')); JOB_EMOTION.start()
JOB_SENSORY = multiprocessing.Process(os.system('./lib/Sensory_System')); JOB_SENSORY.start()
print(BOTNAME + ' is starting'); JOB_CONFIG.join()
try:
    os.system('./lib/neoterm -T' + BOTNAME + ' -e ./lib/beta_engine')
except:
    print('There seems to be an error.'); JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
When starting a Python3 script from another Python3 script, such that it runs while the main script continues, a command like this must be used:
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
The filename string is the script to run. It is saved to a variable so the process can be manipulated later; for instance, I could call JOB_CONFIG.wait() when the main script should wait for the other script.
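A compact usage sketch of that pattern (the script path is taken from the question):
import subprocess, sys

JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
# ... the main script keeps running here ...
JOB_CONFIG.wait()  # block only when the main script needs the helper finished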
As for the /bin/sh error in the first line of the error output, that is due to a syntax error in the first subprocess command used.
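As an aside on why the original line failed: multiprocessing.Process expects a callable via the target keyword, whereas os.system('...') runs immediately and its integer return value lands in Process's first positional argument, group, triggering the assertion. A sketch of the corrected call, if multiprocessing were still desired:
import multiprocessing, os

# Pass the callable and its argument; do not call os.system() inline.
JOB_CONFIG = multiprocessing.Process(target=os.system, args=('./conf/set_data.py3',))
JOB_CONFIG.start()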
