Let's say I have a proc called upgrade that is used to upgrade machines/devices. I want to upgrade two machines in parallel. Inside a proc called main, I use exec to launch two Tcl shells that eventually call the upgrade proc. The catch is that before I launch the two Tcl shells using exec, I have to connect to a traffic generator that only allows a single connection instance; you cannot open a second connection to it while one already exists. How do I make the upgrade proc in the newly launched shells aware that a connection already exists, so they don't try to connect again? It seems that the newly created shells don't share the state and scope of the main proc.
Note that if I don't use exec and call upgrade in series, both upgrade calls know about the connection and the upgrades work.
Maybe I'm doing multi-processing in Tcl wrong?
Thanks for your help
Processes launched with exec do not inherit the parent's open file descriptors, so the new shells cannot see your existing connection.
One possible solution: Have the subprocesses connect to the parent process. The parent process will accept the connections and pass all data directly through to the traffic generator and send any responses back to the appropriate subprocess.
Edit:
Another solution is to rewrite your upgrade procedure to process multiple upgrades at the same time. This might be easier than using exec.
The main problem is that you will need some way to determine which process or upgrade connection the data received from the traffic manager is meant for. This will be true whether you use the method outlined above, or if you rewrite your upgrade process so that it handles multiple upgrades at one time.
If you do not have a way to route incoming data from the traffic manager, what you want to do will be difficult.
This code is overly simplified: there is no error checking and it doesn't handle the closing of a socket.
Any operation on a socket should be enclosed in a try { } block, since a socket error can happen at any point in time.
Also, the connection needs to have its encoding set properly (e.g. fconfigure $sock -translation binary if you are sending binary data).
# First, the server (the main process) must create the
# server socket and associate it with a connection handler.
# A read handler is set up to handle the incoming data.
proc readHandler {sock} {
    global tmsock
    if {[gets $sock data] >= 0} {
        puts $tmsock $data
    }
}

proc connectHandler {sock addr port} {
    global socks
    # save the socket the connection came in on.
    # the array index should not be the port, but rather some
    # data which can be used to route incoming messages from the
    # traffic manager.
    set socks($port) $sock
    fconfigure $sock -buffering line -blocking false
    fileevent $sock readable [list ::readHandler $sock]
}

socket -server ::connectHandler $myport
# The server also needs an event handler for data
# from the traffic manager.
proc tmReadHandler {} {
    global tmsock
    global socks
    if {[gets $tmsock data] >= 0} {
        # have to determine which process the data is for
        set port unknown?
        set sock $socks($port)
        puts $sock $data
    }
}

fileevent $tmsock readable [list ::tmReadHandler]
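The routing concern above is language-independent; here is a minimal Python sketch of the idea. The line format `<client_id>|<payload>` and all names are my invention, and it assumes the protocol carries some identifier that the traffic manager echoes back in its replies:

```python
# Hypothetical framing: each line sent upstream is "<client_id>|<payload>",
# and replies from the traffic manager are assumed to carry the same tag.
def wrap(client_id, payload):
    return "%s|%s" % (client_id, payload)

def unwrap(line):
    client_id, _, payload = line.partition("|")
    return client_id, payload

def route(line, queues):
    # Deliver an incoming reply to the per-client queue it belongs to.
    client_id, payload = unwrap(line)
    queues.setdefault(client_id, []).append(payload)

queues = {}
route(wrap("shell-1", "upgrade step 3 done"), queues)
print(queues["shell-1"][0])  # → upgrade step 3 done
```

Without such a tag (or an equivalent routing key in the device protocol), the proxying parent has no way to know which subshell a reply is for, which is exactly the difficulty described above.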
I use x86_64 Debian 9 Stretch. I run systemd-inhibit cat and then in another console systemctl poweroff. Shutdown correctly gets inhibited. According to this doc signal PrepareForShutdown(false) is supposed to be emitted, but I can't see it. I watch dbus with dbus-monitor --system and using a python program:
#!/usr/bin/env python
import dbus
import gobject
from dbus.mainloop.glib import DBusGMainLoop

def handle(*args):
    print "PrepareForShutdown %s" % (args)

DBusGMainLoop(set_as_default=True)        # integrate into gobject main loop
bus = dbus.SystemBus()                    # connect to system wide dbus
bus.add_signal_receiver(                  # define the signal to listen to
    handle,                               # callback function
    'PrepareForShutdown',                 # signal name
    'org.freedesktop.login1.Manager',     # interface
    'org.freedesktop.login1'              # bus name
)

loop = gobject.MainLoop()
loop.run()
The program prints nothing. The dbus-monitor outputs a few obscure messages (it looks like something calls ListInhibitors).
Is the signal not being emitted, or am I just failing to catch it? My goal is to detect an inhibited shutdown by listening on D-Bus; how do I do it?
EDIT: It turned out that when a non-delayed (block-mode) inhibition is used, the shutdown request just gets discarded and the signal doesn't fire. But if I take a delay lock via systemd-inhibit --mode=delay --what=shutdown cat, then the PrepareForShutdown signal fires.
Is the signal not being emitted, or am I just failing to catch it?
Not sure. My guess would be that systemd only emits the signal to processes which have taken a delay lock (unicast signal emission), as the documentation page has some pretty dire warnings about race conditions if you listen for PrepareForShutdown without taking a delay lock first.
The way to check this would be to read the systemd source code.
My goal is to detect an inhibited shutdown by listening on D-Bus; how do I do it?
If I run sudo dbus-monitor --system in one terminal, and then run systemd-inhibit cat in another, I see the following signal emission:
signal time=1543917703.712998 sender=:1.9 -> destination=(null destination) serial=1150 path=/org/freedesktop/login1; interface=org.freedesktop.DBus.Properties; member=PropertiesChanged
   string "org.freedesktop.login1.Manager"
   array [
      dict entry(
         string "BlockInhibited"
         variant string "shutdown:sleep:idle:handle-power-key:handle-suspend-key:handle-hibernate-key:handle-lid-switch"
      )
   ]
   array [
   ]
Hence you could watch for property changes on the /org/freedesktop/login1 object exposed by service org.freedesktop.login1, and see when its BlockInhibited or DelayInhibited properties change. Shutdown is inhibited when either of those properties contains shutdown. They are documented on the same documentation page:
The BlockInhibited and DelayInhibited properties encode what types of
locks are currently taken. These fields are a colon separated list of
shutdown, sleep, idle, handle-power-key, handle-suspend-key,
handle-hibernate-key, handle-lid-switch. The list is basically the
union of the What fields of all currently active locks of the specific
mode.
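If you go this route, the check itself is simple. Here is a hedged Python sketch (the helper name is mine) that decides whether shutdown is inhibited from the colon-separated property values:

```python
# Hypothetical helper: given the BlockInhibited and DelayInhibited property
# strings (colon-separated "What" lists), report whether shutdown is inhibited.
def shutdown_inhibited(block_inhibited, delay_inhibited=""):
    locks = set(block_inhibited.split(":")) | set(delay_inhibited.split(":"))
    return "shutdown" in locks

# Example with the BlockInhibited value from the dbus-monitor output above:
print(shutdown_inhibited(
    "shutdown:sleep:idle:handle-power-key:handle-suspend-key:"
    "handle-hibernate-key:handle-lid-switch"))  # → True
```

Subscribing to PropertiesChanged on /org/freedesktop/login1 and feeding the new property values into a check like this gives you the inhibition state without depending on receiving PrepareForShutdown at all.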
P4 Server version: P4D/LINUX26X86_64/2013.2/938876 (2014/09/23)
on RHEL6
While running a perl script using p4perl, I trap an error something like this...
if ($p4->ErrorCount()) {
    foreach $err ($p4->Errors) {
        print "$err\n";
    }
}
These errors pop up nondeterministically; sometimes I get them, sometimes not. But suppose I trap an error with the code above and get:
TCP receive failed.
read: socket:
Connection reset by peer
Is that a real error (given that, apparently, the connection was reset)?
Can I ignore this? Will it run the thing I wanted to run after resetting the connection? Or do I need to rerun that command?
I fear that the problem may be rooted in the fact that the perl script does a fork earlier on, and the $p4 handle I have was inherited by the forked process. Could I do something like this to detect and remedy it?
use P4;
our $p4 = new P4;

# <perl forks off a new process...>

if (!($p4->IsConnected())) {
    $p4->SetCwd("$CWD");
    if ($p4->ErrorCount()) { handle_p4_error(); }
    $p4->Connect();
    if ($p4->ErrorCount()) { handle_p4_error(); }
}

# ....etc....
exit;

sub handle_p4_error {
    print "<<<P4 ERROR>>>\n";
    foreach $err ($p4->Errors) {
        print "$err\n";
    }
    exit;
}
Or will the SetCwd fail for lack of connection?
Could the P4 admin be setting some sort of timeout (kill the connection after x minutes of inactivity)?
Thanks for any help !
Is that a real error (given that, apparently, the connection was reset)?
Yes; the connection with the server was terminated.
Can I ignore this? Will it run the thing I wanted to run after resetting the connection?
No.
Or do I need to rerun that command?
Yes; I think you will also need to reopen the connection first.
$p4->Connect();
if($p4->ErrorCount()) {handle_p4_error();}
That is the general pattern, although if the connection fails you probably want to bail out since nothing you do after that point is going to work (and in most cases it means something is configured wrong).
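As an illustration of that reconnect-and-retry pattern, here is a hedged Python sketch. The client class is a stand-in of my own invention, not the p4perl API; it fails once with a connection reset and then works:

```python
class FlakyClient:
    """Stand-in for a connection that drops once, then works (illustrative only)."""
    def __init__(self):
        self.connected = False
        self.failures_left = 1

    def connect(self):
        self.connected = True

    def run(self, cmd):
        if self.failures_left > 0:
            self.failures_left -= 1
            self.connected = False
            raise ConnectionResetError("Connection reset by peer")
        return "ok: " + cmd

def run_with_reconnect(client, cmd):
    try:
        return client.run(cmd)
    except ConnectionResetError:
        client.connect()           # reopen the connection first...
        if not client.connected:   # ...and bail out if that itself fails
            raise
        return client.run(cmd)     # ...then rerun the command once

print(run_with_reconnect(FlakyClient(), "sync"))  # → ok: sync
```

The single retry is deliberate: if the rerun fails again right after a successful reconnect, something is configured wrong and looping would only mask it.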
Or will the SetCwd fail for lack of connection?
No; that's purely a client side action and does not talk to the server.
Could the P4 admin be setting some sort of timeout (kill the connection after x minutes of inactivity)?
That is one possibility -- does your script hold an idle connection open? That's considered poor manners, since enough of those amount to a denial of service by preventing any new connections from being opened. Another possibility is that there was some other network failure (your VPN connection went down, etc.).
In my particular case, I believe the problem had to do with the fork in the perl script. Perhaps the handle was passed to the forked process and this interfered with attempts to reconnect in the main process thread. I had a similar problem with a DB connection. The remedy was similar as well....
What seemed to work was to unconditionally disconnect from P4 just before the fork and unconditionally reconnect right after. The forked process didn't need a P4 connection, so this is OK (in my particular case).
In my case, the problem was I needed to make an ssh connection to the perforce server, but I hadn't added the correct ssh key. You can list ssh keys added to your session with this command:
ssh-add -L
I am trying to spawn a child process - vvp (https://linux.die.net/man/1/vvp). At a certain point, I need to send Ctrl+C to that process.
I expect the simulation to be interrupted and to be given the interactive prompt; after that I can continue the simulation by sending commands to the child process.
So I tried something like this:
var child = require('child_process');
var fs = require('fs');

var vcdGen = child.spawn('vvp', ['qqq'], {});

vcdGen.stdout.on('data', function(data) {
    console.log(data.toString());
});

setTimeout(function() {
    vcdGen.kill('SIGINT');
}, 400);
In that case, the child process was simply terminated instead of being interrupted.
I also tried vcdGen.stdin.write('\x03') instead of vcdGen.kill('SIGINT'), but that doesn't work.
Maybe it's because of Windows?
Is there any way to achieve the same behaviour as I get in cmd?
kill only really supports a rude process kill on Windows - the application signal model in Windows and *nix isn't compatible. You can't pass Ctrl+C through standard input, because it never comes through standard input - it's a function of the console subsystem (and thus you can only use it if the process has an attached console). It creates a new thread in the child process to do its work.
There's no supported way to do this programmatically. It's a feature for the user, not the applications. The only way to do this would be to do the same thing the console subsystem does - create a new thread in the target application and let it do the signalling. But the best way would be to simply use coöperative signalling instead - though that of course requires you to change the target application to understand the signal.
If you want to go the entirely unsupported route, have a look at https://stackoverflow.com/a/1179124/3032289.
If you want to find a middle ground, there's a way to send a signal to yourself, of course. Which also means that you can send Ctrl+C to a process if your consoles are attached. Needless to say, this is very tricky - you'd probably want to create a native host process that does nothing but create a console and run the actual program you want to run. Your host process would then listen for an event, and when the event is signalled, call GenerateConsoleCtrlEvent.
I have an IRC bot written in Perl, using the deprecated, undocumented and unloved Net::IRC library. Still, it runs just fine... unless the connection goes down. It appears that the library ceased to be updated before they've implemented support for reconnecting. The obvious solution would be to rewrite the whole bot to make use of the library's successors, but that would unfortunately require rewriting the whole bot.
So I'm interested in workarounds.
My current setup is supervisord configured to restart the bot whenever the process exits unexpectedly, plus a cron job to kill the process whenever internet connectivity is lost.
This does not work as I would like it to, because the bot seems incapable of detecting that it has lost connectivity due to internet outage. It will happily continue running, doing nothing, pretending to still be connected to the IRC server.
I have the following code as the main program loop:
while (1) {
    $irc->do_one_loop;
    # can add stuff here
}
What I would like it to do is:
a) detect that the internet has gone down,
b) wait until the internet has gone up,
c) exit the script, so that supervisord can resurrect it.
Are there any other, better ways of doing this?
EDIT: The in-script method did not work, for unknown reasons. I'm trying to make a separate script to solve it.
#!/usr/bin/perl
use Net::Ping::External;

while (1) {
    # (a) spin while the internet is still up
    while (Net::Ping::External::ping(host => "8.8.8.8")) { sleep 5; }
    # (b) the internet has gone down; wait until it comes back up
    sleep 5 until Net::Ping::External::ping(host => "8.8.8.8");
    # (c) kill the bot so that supervisord resurrects it
    system("sudo kill `pgrep -f 'perl painbot.pl'`");
}
Assuming that do_one_loop will not hang (may need to add some alarm if it does), you'll need to actively poll something to tell whether or not the network is up. Something like this should work to ping every 5 seconds after a failure until you get a response, then exit.
use Net::Ping::External;

sub connectionCheck {
    return if Net::Ping::External::ping(host => "8.8.8.8");
    sleep 5 until Net::Ping::External::ping(host => "8.8.8.8");
    exit;
}
Edit:
Since do_one_loop does seem to hang, you'll need some way to wrap a timeout around it. The amount of time depends on how long you expect it to run for, and how long you are willing to wait if it becomes unresponsive. A simple way to do this is using alarm (assuming you are not on windows):
local $SIG{'ALRM'} = sub { die "Timeout" };
alarm 30;   # 30 seconds
eval {
    $irc->do_one_loop;
    alarm 0;
};
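The same guard can be sketched in Python for comparison (the function names are mine; SIGALRM is POSIX-only, matching the Perl caveat above):

```python
import signal
import time

class Timeout(Exception):
    pass

def _raise_timeout(signum, frame):
    raise Timeout()

def run_with_timeout(fn, seconds):
    # Mirror of Perl's local $SIG{ALRM} + alarm + eval: interrupt fn() if it
    # blocks for longer than the allowed window.
    old = signal.signal(signal.SIGALRM, _raise_timeout)
    signal.alarm(seconds)
    try:
        fn()
        return True           # completed in time
    except Timeout:
        return False          # hung; caller can treat the link as dead
    finally:
        signal.alarm(0)       # always cancel the pending alarm
        signal.signal(signal.SIGALRM, old)

print(run_with_timeout(lambda: time.sleep(3), 1))  # → False
```

Restoring the previous handler in the finally block plays the same role as Perl's local: the timeout machinery doesn't leak out of the wrapped call.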
The Net::IRC main loop has support for timeouts and scheduled events.
Try something like this (I haven't tested it, and it's been 7 years since I last used the module...):
# connect to IRC, add event handlers, etc.
$time_of_last_ping = $time_of_last_pong = time;
$irc->timeout(30);

# Can't handle PONG in Net::IRC (!), so handle the "No origin specified" error
# (this may not work for you; you may rather do this some other way)
$conn->add_handler(409, sub { $time_of_last_pong = time });

while (1) {
    $irc->do_one_loop;
    # check internet connection: send PING to server
    if ( time - $time_of_last_ping > 30 ) {
        $conn->sl("PING");   # Should be "PING anything"
        $time_of_last_ping = time;
    }
    last if time - $time_of_last_pong > 90;   # "break" is not Perl; use "last"
}
I have a custom server that runs in its own posix thread in a native Node Add On.
What is the proper way to keep the node process running the uv_run event loop? In other words, if I start the server in my Add On via a script, my process will exit at the end of the script instead of keeping the event loop running.
I've tried adding a SignalWatcher via process.on and that still exits. I didn't see anything else in the process object for doing this from script.
In node.cc, there is this comment:
// Create all the objects, load modules, do everything.
// so your next reading stop should be node::Load()!
Load(process_l);
// All our arguments are loaded. We've evaluated all of the scripts. We
// might even have created TCP servers. Now we enter the main eventloop. If
// there are no watchers on the loop (except for the ones that were
// uv_unref'd) then this function exits. As long as there are active
// watchers, it blocks.
uv_run(uv_default_loop());
EmitExit(process_l);
What does the Add On have to do?
I've tried calling uv_ref(uv_default_loop()) in the main thread in my Add On when starting the server/pthread but the process still exits.
Note: I can bind to a TCP/UDP port or set a timer and that will keep uv_run from exiting, but I would like to do this the "correct" way.