Parent process <- child processes: unidirectional communication in "real-world" Haskell?

Goal:
There is an IShell, which is nothing but an ordinary console able to consume commands like do param1=value1 --option.
IShell should orchestrate the whole execution. It does not run commands itself; the only thing it does is start the appropriate process.
Any process started from the running IShell instance should be able to report back to it what's happening inside. So, say, IShell has started process A to do something complicated; process A should be able to report both progress and results back to the parent IShell. In practice, this means there should be a mechanism to, for example, print a message from process A in the appropriate IShell.
Finally, the code should work on both Windows and Linux.
I really like Haskell and I'd like to promote "real-world" Haskell usage. But I don't know the existing libraries well, and I haven't yet written any "real-world" Haskell app.
Thus, questions:
How can I establish IShell <- its-processes communication? Is there a single library able to handle both the Windows-specific and the Linux-specific parts?

The process package supports Linux and Windows and provides mechanisms for communicating with child processes via their stdin, stdout, stderr, and exit code.
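As a minimal sketch of that approach: a parent spawns a child with a pipe on its stdout and streams progress lines as they arrive. The worker executable name and its --job flag are hypothetical stand-ins for your real process A.

    import System.IO (BufferMode (LineBuffering), Handle, hGetLine, hIsEOF, hSetBuffering)
    import System.Process (CreateProcess (std_out), StdStream (CreatePipe), createProcess, proc, waitForProcess)

    main :: IO ()
    main = do
      -- Start the child with a pipe attached to its stdout.
      (_, Just out, _, ph) <- createProcess (proc "worker" ["--job", "example"])
                                { std_out = CreatePipe }
      hSetBuffering out LineBuffering
      relay out                  -- stream progress lines as the child emits them
      code <- waitForProcess ph  -- then collect the exit code
      putStrLn ("worker finished: " ++ show code)

    -- Print each line the child writes until it closes its stdout.
    relay :: Handle -> IO ()
    relay h = do
      eof <- hIsEOF h
      if eof then pure () else hGetLine h >>= putStrLn >> relay h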
The network package supports Linux and Windows and provides mechanisms for communicating with child processes via sockets.
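A corresponding sketch of the socket route, with the parent (IShell) side listening on a loopback TCP port and printing whatever a child sends; the port number 4242 is an arbitrary choice for the example, and only a single connection is handled.

    import qualified Data.ByteString.Char8 as B
    import Network.Socket
    import Network.Socket.ByteString (recv)

    main :: IO ()
    main = withSocketsDo $ do          -- withSocketsDo keeps Windows happy
      addr : _ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
                              (Just "127.0.0.1") (Just "4242")
      sock <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
      bind sock (addrAddress addr)
      listen sock 1
      (conn, _) <- accept sock         -- wait for the child to connect
      loop conn
      where
        loop c = do
          msg <- recv c 4096
          if B.null msg                -- empty read means the child hung up
            then putStrLn "child disconnected"
            else B.putStr msg >> loop c

The child side just connects to 127.0.0.1:4242 and writes its progress messages; unlike a Unix domain socket, loopback TCP behaves the same on Windows and Linux.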

Related

How node IPC works between 2 processes

Using nodejs fork you can perform IPC between the parent process and the child process. Previously I was under the impression that the child process would have an extra environment variable with a file descriptor. I printed the process env, but I can't see any variable with a file descriptor, and I don't see any open sockets either. So my question is: how does Node IPC work behind the scenes?
So my question is: how does Node IPC (for forked processes) work behind the scenes?
The source code for fork uses a Pipe object internally. Looking further into that Pipe object, it is a wrapper over the libuv Pipe object. Then, looking into libuv, its Pipe abstraction is a domain socket on Unix and a named pipe on Windows.
Now, since these are all undocumented implementation details, there's nothing that says it has to always be done this way in the future, though one would not expect it to change unless there was a really good reason.

Is it possible to attach to a running background process with ruby?

I have a nodejs daemon running on my server. I would like to give it some input on stdin and read its stdout from a Rails controller; is this possible with Ruby?
I am looking at Open3 but it seems to give me only the chance to spawn a new process.
I need to keep the nodejs process running because the startup overhead is too high to pay on every request.
In general there is no way to attach to a running process's IO streams unless it was set up to do so initially. It is easy if, for example, the process was set up to read from a pipe: just have Ruby write to that pipe like any other file (this is what the Open3 lib does).
For a daemon there are usually more appropriate ways to interact with it than hijacking its input with a pipe, though it depends on the particular daemon you are running and how it is being managed by the OS. For example, sockets are a popular way to communicate with a running process on *nix systems.
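The question is about Ruby, but the socket mechanism is language-agnostic. For illustration, here is the client side in Haskell (the language of the main question above), assuming a hypothetical daemon that owns /tmp/app.sock and answers a one-line "status" command:

    import qualified Data.ByteString.Char8 as B
    import Network.Socket
    import Network.Socket.ByteString (recv, sendAll)

    main :: IO ()
    main = do
      sock <- socket AF_UNIX Stream defaultProtocol
      connect sock (SockAddrUnix "/tmp/app.sock")  -- path the daemon listens on
      sendAll sock (B.pack "status\n")             -- whatever command it expects
      reply <- recv sock 4096
      B.putStr reply
      close sock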

Best way to communicate between processes with Node.js

I'm developing a lightweight framework to work as a coordinator in a robotics competition I compete in.
My idea is to have programs that are agnostic about the whole, just with inputs that may trigger outputs. I then connect those outputs to inputs and can get different behaviours with the same modules, without hard work.
I'm planning on doing this with Node.js and WebKit, to allow a nice UI for modifying the process. However, each "module" might not really be code wrapped in some JavaScript class-like function; it might be a real thread, maybe running some native C++ code (without Node.js), or even a Python program.
What I'm facing now is finding a fast, and also generic, way to exchange data among processes. I have read about it, but haven't come to any conclusions...
Here are the 3 methods I found out:
Local Socket: uses localhost to dispatch a broadcast to a port
Unix Socket: maybe more efficient than the above (but it goes through the filesystem?)
Stdin/Out communication: when a process is launched by Node.js, binding its stdin and stdout can be used to communicate between the programs (sketched after this question)
So, given those 3 ways of doing it, which should I mostly use? I need things to communicate REALLY fast (data might go through 5 different processes, and I need the total not to exceed 2 ms).
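For what it's worth, the stdin/stdout option keeps the modules language-agnostic: each one just reads events from stdin and writes events to stdout. Here is one such module sketched in Haskell (the language of the main question), with a made-up ping/pong event format. Line buffering matters for the latency budget: without it, output sits in a block buffer instead of reaching the coordinator immediately.

    import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout)

    main :: IO ()
    main = do
      hSetBuffering stdout LineBuffering  -- flush every line immediately
      input <- getContents                -- lazily read events from stdin
      mapM_ (putStrLn . react) (lines input)

    -- Hypothetical input -> output mapping; a real module would parse its
    -- own event format here.
    react :: String -> String
    react "ping" = "pong"
    react ev     = "unhandled: " ++ ev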

How to fork/clone an identical Node child process in the same sense as fork() of Linux system call?

So I was developing a server farm on Node which requires multiple processes per machine to handle the load. Since Windows doesn't quite get along with the Node cluster module, I had to work it out manually.
The real problem is that when I was forking Node processes, a JS module path was required as the first argument to the child_process.fork() function, and once forked, the child process wouldn't inherit anything from its parent. In my case, I want a function that does a similar thing to the fork() system call in Linux, which clones the parent process, inherits everything, and continues execution from exactly where fork() was called. Can this be achieved on the Node platform?
I don't think node.js is ever going to support fork(2)
The comment from the Node GitHub issue on the subject:
https://github.com/joyent/node/issues/2334#issuecomment-3153822
We're not (ever) going to support fork.
not portable to windows
difficult conceptually for users
entire heap will be quickly copied with a compacting VM; no benefits from copy-on-write
not necessary
difficult for us to do
child_process.fork()
This is a special case of the spawn() functionality for spawning Node processes. In addition to having all the methods in a normal ChildProcess instance, the returned object has a communication channel built-in. See child.send(message, [sendHandle]) for details.

Remote process control in Linux

I'm currently working on a project requiring a number of processes running under the control of a "master" process, which receives remote commands via TCP and tells the child processes what to do (e.g., what files they should act on, what processing operations they should perform).
I've come up with the following ideas to pass commands/configuration down to the child processes:
Signals (not powerful enough)
A binary protocol over sockets or pipes connecting each process to the master (reinventing the wheel).
RPC (maybe overkill)
CORBA (perhaps overkill)
DDS (totally overkill)
Any ideas/suggestions?
D-Bus
How about a text protocol via pipes?
Text protocols are almost always better than binary protocols because they are easier to test, and easier testing generally means fewer bugs.
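A sketch of what the master side of such a text protocol could look like, again in Haskell with the process package; the worker executable and the LOAD/RUN commands are made up for the example:

    import System.IO (BufferMode (LineBuffering), hGetLine, hPutStrLn, hSetBuffering)
    import System.Process (CreateProcess (std_in, std_out), StdStream (CreatePipe), createProcess, proc)

    main :: IO ()
    main = do
      -- Connect pipes to both ends of the child.
      (Just inp, Just out, _, _) <-
        createProcess (proc "worker" []) { std_in = CreatePipe, std_out = CreatePipe }
      hSetBuffering inp LineBuffering
      hPutStrLn inp "LOAD input.txt"  -- hypothetical commands...
      putStrLn =<< hGetLine out       -- ...with one-line replies, e.g. "OK"
      hPutStrLn inp "RUN"
      putStrLn =<< hGetLine out       -- e.g. "DONE 42"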
You could also use message queues, or shared memory with semaphores.
You could also look into an Apache project called ActiveMQ, which allows messages to be dispatched to subscription queues, etc. It's very powerful and flexible, and there are C interfaces. It's ideal if you have many machines/networks to which you need to dispatch messages.
http://activemq.apache.org/
A lightweight message queue like beanstalkd or resque seems like the right level of complexity. Files with inotify could also work; inotify is designed as an event queue. You can try it with incrontab before baking it in. {xml,json}-rpc are (slightly) more complex, but also more standard, as they use HTTP. However, the message-queue metaphor is more appropriate than RPC for non-blocking interactions.
The supervisord tool may be useful. This is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
