Detect inhibited shutdown on D-Bus - linux

I use x86_64 Debian 9 Stretch. I run systemd-inhibit cat and then, in another console, systemctl poweroff. Shutdown correctly gets inhibited. According to this doc, the signal PrepareForShutdown(false) is supposed to be emitted, but I can't see it. I watch D-Bus with dbus-monitor --system and with a Python program:
#!/usr/bin/env python
import dbus
import gobject
from dbus.mainloop.glib import DBusGMainLoop

def handle(*args):
    print "PrepareForShutdown %s" % (args)

DBusGMainLoop(set_as_default=True)      # integrate into gobject main loop
bus = dbus.SystemBus()                  # connect to the system-wide bus
bus.add_signal_receiver(                # define the signal to listen to
    handle,                             # callback function
    'PrepareForShutdown',               # signal name
    'org.freedesktop.login1.Manager',   # interface
    'org.freedesktop.login1'            # bus name
)

loop = gobject.MainLoop()
loop.run()
The program prints nothing. dbus-monitor outputs a few obscure messages (it looks like something calls ListInhibitors).
Is the signal not being emitted, or am I just unable to catch it? My goal is to detect an inhibited shutdown by listening on D-Bus; how do I do it?
EDIT: It turned out that when a non-delayed (block mode) inhibitor is used, the shutdown request is simply discarded and the signal does not fire. But if I take a delay lock via systemd-inhibit --mode=delay --what=shutdown cat, then the PrepareForShutdown signal does fire.

Is the signal not being emitted, or am I just unable to catch it?
Not sure. My guess would be that systemd only emits the signal to processes which have taken a delay lock (unicast signal emission), as the documentation page has some pretty dire warnings about race conditions if you listen for PrepareForShutdown without taking a delay lock first.
The way to check this would be to read the systemd source code.
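If you do want to receive PrepareForShutdown yourself, the edit above suggests taking a delay lock first. Here is a minimal, untested sketch in the same dbus-python/Python 2 style as the question's script; it calls the Inhibit() method documented on the same logind page (the "who"/"why" strings are placeholders), and the delay lock is held for as long as the returned file descriptor stays open:
# Untested sketch: take a shutdown delay lock, then listen for PrepareForShutdown.
import dbus
import gobject
from dbus.mainloop.glib import DBusGMainLoop

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()

manager = dbus.Interface(
    bus.get_object('org.freedesktop.login1', '/org/freedesktop/login1'),
    'org.freedesktop.login1.Manager')

# Inhibit(what, who, why, mode) returns a file descriptor; the delay lock
# is held as long as that descriptor stays open. 'my-watcher' and the
# reason string are placeholders.
inhibit_fd = manager.Inhibit('shutdown', 'my-watcher', 'run cleanup first', 'delay')

def handle(active):
    print "PrepareForShutdown %s" % (active,)
    # do cleanup here, then release the lock by closing the descriptor
    # held in inhibit_fd

bus.add_signal_receiver(handle,
                        'PrepareForShutdown',
                        'org.freedesktop.login1.Manager',
                        'org.freedesktop.login1')

gobject.MainLoop().run()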
My goal is to detect an inhibited shutdown by listening on D-Bus; how do I do it?
If I run sudo dbus-monitor --system in one terminal, and then run systemd-inhibit cat in another, I see the following signal emission:
signal time=1543917703.712998 sender=:1.9 -> destination=(null destination) serial=1150 path=/org/freedesktop/login1; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
   string "org.freedesktop.login1.Manager"
   array [
      dict entry(
         string "BlockInhibited"
         variant string "shutdown:sleep:idle:handle-power-key:handle-suspend-key:handle-hibernate-key:handle-lid-switch"
      )
   ]
   array [
   ]
Hence you could watch for property changes on the /org/freedesktop/login1 object exposed by service org.freedesktop.login1, and see when its BlockInhibited or DelayInhibited properties change. Shutdown is inhibited when either of those properties contains shutdown. They are documented on the same documentation page:
The BlockInhibited and DelayInhibited properties encode what types of
locks are currently taken. These fields are a colon separated list of
shutdown, sleep, idle, handle-power-key, handle-suspend-key,
handle-hibernate-key, handle-lid-switch. The list is basically the
union of the What fields of all currently active locks of the specific
mode.
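For illustration, here is an untested sketch along those lines, again in the question's dbus-python style: it subscribes to PropertiesChanged on /org/freedesktop/login1 and reports whether shutdown is currently inhibited whenever BlockInhibited or DelayInhibited changes:
# Untested sketch: watch BlockInhibited / DelayInhibited on logind for "shutdown".
import dbus
import gobject
from dbus.mainloop.glib import DBusGMainLoop

def on_properties_changed(interface, changed, invalidated):
    if interface != 'org.freedesktop.login1.Manager':
        return
    for prop in ('BlockInhibited', 'DelayInhibited'):
        if prop in changed:
            inhibited = 'shutdown' in str(changed[prop]).split(':')
            print "%s = %s (shutdown inhibited: %s)" % (prop, changed[prop], inhibited)

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
bus.add_signal_receiver(on_properties_changed,
                        'PropertiesChanged',
                        'org.freedesktop.DBus.Properties',
                        'org.freedesktop.login1',
                        '/org/freedesktop/login1')

gobject.MainLoop().run()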

Related

Direct communication between Javascript in Jupyter and server via IPython kernel

I'm trying to display an interactive mesh visualizer based on Three.js inside a Jupyter cell. The workflow is the following:
The user launches a Jupyter notebook and opens the viewer in a cell
Using Python commands, the user can manually add meshes and animate them interactively
In practice, the main thread sends requests to a server via ZMQ sockets (every request needs a single reply), then the server sends back the desired data to the main thread using other socket pairs (many "requests", very few replies expected), which finally uses communication through the IPython kernel to send the data to the Javascript frontend. So far so good, and it works properly because the messages are all flowing in the same direction:
Main thread (Python command) [ZMQ REQ] -> [ZMQ REP] Server (Data) [ZMQ XREQ] -> [ZMQ XREQ] Main thread (Data) [IPykernel Comm] -> [Ipykernel Comm] Javascript (Display)
However, the pattern is different when I want to fetch the status of the frontend to wait for the meshes to finish loading:
Main thread (Status request) --> Server (Status request) --> Main thread (Waiting for reply)
     ^                                                                     |
     <-------------------------------- Javascript (Processing) <-----------+
This time, the server sends a request to the frontend, which in return does not send the reply directly back to the server, but to the main thread, which then forwards the reply to the server, and finally back to the main thread.
There is a clear issue: the main thread is supposed both to forward the reply of the frontend and to receive the reply from the server, which is impossible. The ideal solution would be to enable the server to communicate directly with the frontend, but I don't know how to do that, since I cannot use get_ipython().kernel.comm_manager.register_target on the server side. I tried to instantiate an IPython kernel client on the server side using jupyter_client.BlockingKernelClient, but I didn't manage to use it to communicate nor to register targets.
OK, so I found a solution for now, but it is not great. Instead of just waiting for a reply and keeping the main loop busy, I added a timeout and interleaved it with do_one_iteration of the kernel to force it to handle messages:
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # reply received
    except zmq.error.ZMQError:
        kernel.do_one_iteration()
It works, but unfortunately it is not really portable and it messes with the Jupyter evaluation stack (all queued evaluations will be processed here instead of in order)...
Alternatively, there is another way that is more appealing:
import zmq
import asyncio
import nest_asyncio

nest_asyncio.apply()

zmq_socket.send(b"ready")

async def enforce_receive():
    await kernel.process_one(True)
    return zmq_socket.recv().decode("utf-8")

loop = asyncio.get_event_loop()
rep = loop.run_until_complete(enforce_receive())
but in this case you need to know in advance that you expect the kernel to receive exactly one message, and relying on nest_asyncio is not ideal either.
Here is a link to an issue on this topic on GitHub, along with an example notebook.
UPDATE
I finally managed to solve my issue completely, without shortcomings. The trick is to analyze every incoming message. The irrelevant messages are put back in the queue in order, while the comm-related ones are processed on the spot:
import zmq
# NOTE: SHELL_PRIORITY is defined by ipykernel's kernelbase module (ipykernel 5.x);
# adjust the import to your version.
from ipykernel.kernelbase import SHELL_PRIORITY

class CommProcessor:
    """
    #brief    Re-implementation of ipykernel.kernelbase.do_one_iteration
              to only handle comm messages on the spot, and put the
              other ones back in the stack.
    #details  Calling 'do_one_iteration' messes with the kernel
              'msg_queue'. Some messages will be processed too soon,
              which is likely to corrupt the kernel state. This method
              only processes comm messages to avoid such side effects.
    """
    def __init__(self):
        self.__kernel = get_ipython().kernel
        self.qsize_old = 0

    def __call__(self, unsafe=False):
        """
        #brief      Check once if there is a pending comm-related event in
                    the shell stream message priority queue.
        #param[in]  unsafe  Whether or not to assume that checking if the
                            number of pending messages has changed is
                            enough. It makes the evaluation much faster
                            but flawed.
        """
        # Flush every incoming message on shell_stream only.
        # Note that it is a faster implementation of ZMQStream.flush
        # to only handle incoming messages. It reduces the computation
        # time from about 10us to 20ns.
        # https://github.com/zeromq/pyzmq/blob/e424f83ceb0856204c96b1abac93a1cfe205df4a/zmq/eventloop/zmqstream.py#L313
        shell_stream = self.__kernel.shell_streams[0]
        shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
        events = shell_stream.poller.poll(0)
        while events:
            _, event = events[0]
            if event:
                shell_stream._handle_recv()
                shell_stream.poller.register(
                    shell_stream.socket, zmq.POLLIN)
            events = shell_stream.poller.poll(0)

        qsize = self.__kernel.msg_queue.qsize()
        if unsafe and qsize == self.qsize_old:
            # The number of queued messages has not changed since the
            # last time it was checked. Assuming those messages are the
            # same as before and returning early.
            return

        # One must go through all the messages to keep them in order.
        for _ in range(qsize):
            priority, t, dispatch, args = \
                self.__kernel.msg_queue.get_nowait()
            if priority <= SHELL_PRIORITY:
                _, msg = self.__kernel.session.feed_identities(
                    args[-1], copy=False)
                msg = self.__kernel.session.deserialize(
                    msg, content=False, copy=False)
            else:
                # Do not spend time analyzing already rejected messages.
                msg = None
            if msg is None or 'comm_' not in msg['header']['msg_type']:
                # The message is not related to comm, so put it back in
                # the queue after lowering its priority so that it is sent
                # at the "end of the queue", i.e. just at the right place:
                # after the next unchecked messages, after the other
                # messages already put back in the queue, but before the
                # next one to go the same way. Note that every shell
                # message has SHELL_PRIORITY by default.
                self.__kernel.msg_queue.put_nowait(
                    (SHELL_PRIORITY + 1, t, dispatch, args))
            else:
                # Comm message. Process it right now.
                comm_handler = getattr(
                    self.__kernel.comm_manager, msg['header']['msg_type'])
                msg['content'] = self.__kernel.session.unpack(msg['content'])
                comm_handler(None, None, msg)
        self.qsize_old = self.__kernel.msg_queue.qsize()

process_kernel_comm = CommProcessor()
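For completeness, here is a sketch of how this plugs into the earlier polling loop (assuming the same zmq_socket as above): process_kernel_comm() is called in place of kernel.do_one_iteration(), so only comm messages are handled while waiting for the server's reply:
import zmq

# Busy-wait for the server's reply, handling only comm messages meanwhile.
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # rep now holds the frontend's status reply
    except zmq.error.ZMQError:
        process_kernel_comm()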

TCL: shared scope when using exec to launch 2 new tcl shells

Let's say I have a proc called upgrade that is used to upgrade machines/devices. I want to upgrade 2 machines in parallel. Inside a proc called main, I use exec to launch 2 Tcl shells that eventually call the upgrade proc. The thing is, before I launch the 2 Tcl shells using exec, I have to connect to a traffic generator that only allows one connection instance; you cannot connect to it if a connection to it already exists. How do I make the upgrade proc in the newly launched shells aware that a connection already exists, so there is no need to connect to it? It seems that the newly created shells don't share the space and scope of the main proc.
Note that if I don't use exec and call upgrade in series, both upgrade calls know about the connection and the upgrades work.
Maybe I'm doing multi-processing in TCL wrong?
Thanks for your help
A process launched with exec will not inherit any open file descriptors.
One possible solution: Have the subprocesses connect to the parent process. The parent process will accept the connections and pass all data directly through to the traffic generator and send any responses back to the appropriate subprocess.
Edit:
Another solution is to rewrite your upgrade procedure to process multiple upgrades at the same time. This might be easier than using exec.
The main problem is that you will need some way to determine which process or upgrade connection the data received from the traffic manager is meant for. This will be true whether you use the method outlined above, or if you rewrite your upgrade process so that it handles multiple upgrades at one time.
If you do not have a way to route incoming data from the traffic manager, what you want to do will be difficult.
This code is overly simplified. There is no error checking and it doesn't handle the closing of a socket.
Any operation on a socket should be enclosed in a try { } block, as a socket error can happen at any point in time.
Also, the connection needs to have its encoding set properly (if sending binary data).
# First, the server (the main process) must create the
# server socket and associate it with a connection handler.
# A read handler is set up to handle the incoming data.

proc readHandler {sock} {
    global tmsock
    if {[gets $sock data] >= 0} {
        puts $tmsock $data
    }
}

proc connectHandler {sock addr port} {
    global socks
    # Save the socket the connection came in on.
    # The array index should not be the port, but rather some
    # data which can be used to route incoming messages from the
    # traffic manager.
    set socks($port) $sock
    fconfigure $sock -buffering line -blocking false
    fileevent $sock readable [list ::readHandler $sock]
}

socket -server ::connectHandler $myport

# The server also needs an event handler for data
# from the traffic manager.

proc tmReadHandler {} {
    global tmsock
    global socks
    if {[gets $tmsock data] >= 0} {
        # Have to determine which process the data is for.
        set port unknown?
        set sock $socks($port)
        puts $sock $data
    }
}

fileevent $tmsock readable [list ::tmReadHandler]

How to send "CTRL+C" to child process in Node.js?

I tried to spawn a child process - vvp (https://linux.die.net/man/1/vvp). At a certain time, I need to send CTRL+C to that process.
I expect the simulation to be interrupted so that I get the interactive prompt, and after that I can continue the simulation by sending a command to the child process.
So, I tried something like this:
var child = require('child_process');
var fs = require('fs');

var vcdGen = child.spawn('vvp', ['qqq'], {});

vcdGen.stdout.on('data', function(data) {
    console.log(data.toString())
});

setTimeout(function() {
    vcdGen.kill('SIGINT');
}, 400);
In that case, the child process was stopped.
I also tried vcdGen.stdin.write('\x03') instead of vcdGen.kill('SIGINT'), but it doesn't work.
Maybe it's because of Windows?
Is there any way to achieve the same behaviour as I got in cmd?
kill only really supports a rude process kill on Windows - the application signal models of Windows and *nix aren't compatible. You can't pass Ctrl+C through standard input, because it never comes through standard input - it's a function of the console subsystem (and thus you can only use it if the process has an attached console). It creates a new thread in the child process to do its work.
There's no supported way to do this programmatically. It's a feature for the user, not the applications. The only way to do this would be to do the same thing the console subsystem does - create a new thread in the target application and let it do the signalling. But the best way would be to simply use coöperative signalling instead - though that of course requires you to change the target application to understand the signal.
If you want to go the entirely unsupported route, have a look at https://stackoverflow.com/a/1179124/3032289.
If you want to find a middle ground, there's a way to send a signal to yourself, of course. Which also means that you can send Ctrl+C to a process if your consoles are attached. Needless to say, this is very tricky - you'd probably want to create a native host process that does nothing but create a console and run the actual program you want to run. Your host process would then listen for an event, and when the event is signalled, call GenerateConsoleCtrlEvent.
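To make that middle ground concrete, here is a rough, untested Python ctypes sketch of the Win32 call sequence described above (shown in Python purely for illustration; a native host process would make the same calls): attach to the child's console, make your own process immune to the event, then raise CTRL_C_EVENT for that console.
# Rough, untested sketch of the Win32 sequence; error handling omitted.
import ctypes
import time

kernel32 = ctypes.windll.kernel32
CTRL_C_EVENT = 0

def send_ctrl_c(pid):
    kernel32.FreeConsole()                              # detach from our own console, if any
    kernel32.AttachConsole(pid)                         # attach to the child's console
    kernel32.SetConsoleCtrlHandler(None, True)          # ignore Ctrl+C in our own process
    kernel32.GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0)  # signal everyone on that console
    time.sleep(1)                                       # give the child time to handle it
    kernel32.SetConsoleCtrlHandler(None, False)         # restore our Ctrl+C handling
    kernel32.FreeConsole()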

How to validate that sigprocmask is working

I am using sigprocmask to block a signal so that it does not kill my process.
It works, but is there a way to validate that, say using /proc/<pid>/status?
If I remove the sigprocmask statements below, the code does not work, but I do not see any difference in /proc/<pid>/status with respect to SigIgn, SigCgt, SigPnd, Sig*.
my $old_sigset = POSIX::SigSet->new;
my $sigset     = POSIX::SigSet->new;
$sigset->emptyset();
$sigset->addset(&POSIX::SIGPOLL);
POSIX::sigprocmask(SIG_BLOCK, $sigset, $old_sigset);
while (1)
{
    POSIX::RT::Signal::sigwait($sigset);
    POSIX::sigprocmask(SIG_UNBLOCK, $old_sigset);
    <some code>
    ...
    ...
}
Instead of:
POSIX::sigprocmask(SIG_UNBLOCK,$old_sigset)
do:
POSIX::sigprocmask(SIG_SETMASK,$old_sigset)
The old signal mask does not contain your POLL signal, so it is not unblocked in the second call. What you want is either to restore the old signal mask as shown, or to unblock your signal with
POSIX::sigprocmask(SIG_UNBLOCK,$sigset)
This is incorrectly documented in many places on the internet, e.g. https://www.oreilly.com/library/view/perl-cookbook/1565922433/ch16s21.html
It seems that, often, one copies from the other.

How to keep Node uv_run from exiting when hosting own thread in an add on?

I have a custom server that runs in its own POSIX thread in a native Node Add On.
What is the proper way to keep the node process running the uv_run event loop? In other words, if I start the server in my Add On via a script, my process will exit at the end of the script instead of keeping the event loop running.
I've tried adding a SignalWatcher via process.on and that still exits. I didn't see anything else in the process object for doing this from script.
In node.cc, there is this comment:
// Create all the objects, load modules, do everything.
// so your next reading stop should be node::Load()!
Load(process_l);
// All our arguments are loaded. We've evaluated all of the scripts. We
// might even have created TCP servers. Now we enter the main eventloop. If
// there are no watchers on the loop (except for the ones that were
// uv_unref'd) then this function exits. As long as there are active
// watchers, it blocks.
uv_run(uv_default_loop());
EmitExit(process_l);
What does the Add On have to do?
I've tried calling uv_ref(uv_default_loop()) in the main thread in my Add On when starting the server/pthread but the process still exits.
Note: I can bind to a TCP/UDP port or set a timer and that will keep uv_run from exiting, but I would like to do this the "correct" way.
