I want to daemonize a Lua script.
Fortunately, I see that the luaposix package offers a unistd interface, so I can call fork() etc. Unfortunately, it DOES NOT offer the setsid() function, so though I've been able to write a simple daemon, it's not complete.
I also saw that there's a LuaRocks package named luadaemon available specifically for this. Unfortunately, it fails to compile correctly and no longer seems to be supported or developed.
I'm thinking of writing my own 'luadaemon' in C if I can't find an alternative, but I wanted to double-check first.
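For reference, the sort of C helper I'd end up writing is roughly the classic double-fork sequence below. It's only a sketch (the daemonize() name and the /dev/null redirection are the usual boilerplate, not code from any existing luadaemon), but it shows that setsid() is exactly the call I'm missing:

    /* Sketch of a classic double-fork daemonizer -- what a small C module
     * wrapped for Lua would have to do. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int daemonize(void)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;               /* fork failed */
        if (pid > 0)
            _exit(EXIT_SUCCESS);     /* parent exits, child carries on */

        if (setsid() < 0)            /* new session, no controlling tty: the missing piece */
            return -1;

        pid = fork();                /* second fork so we can never reacquire a tty */
        if (pid < 0)
            return -1;
        if (pid > 0)
            _exit(EXIT_SUCCESS);

        umask(0);
        if (chdir("/") < 0)
            return -1;

        int fd = open("/dev/null", O_RDWR);   /* detach stdio from the terminal */
        if (fd >= 0) {
            dup2(fd, STDIN_FILENO);
            dup2(fd, STDOUT_FILENO);
            dup2(fd, STDERR_FILENO);
            if (fd > STDERR_FILENO)
                close(fd);
        }
        return 0;
    }

    int main(void)
    {
        if (daemonize() == 0)
            sleep(30);               /* stand-in for the real daemon work */
        return 0;
    }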
Is there any way to daemonize a Lua script?
PS:
I'm not interested in systemd and similar options.
I already know about start-stop-daemon.
What I'm interested in is whether there's any way to write a daemon directly in Lua, using only Lua modules.
In NodeJS, how does one make the C-level exec call that other languages and runtimes allow (e.g. Python's os.exec*() functions or even Bash's exec)?
Unlike the child_process module (and all the wrappers I've found for it so far), this would be meant to completely replace the current NodeJS process with the exec'ed process.
What I am aiming for is to create a NodeJS CLI adaptor of sorts: you provide it with one set of configuration options, and it dispatches different tools that use the same configuration but each have different expectations about its format (some expect ENV VARs, some command-line arguments, some need a file, ...).
The reason why I am looking for such an exec call is that, once the tool is dispatched, I have no expectation of the NodeJS process sticking around -- there's no reason for it to continue existing. At the same time, I need the tool to become the foreground process (accept all signals, control characters, ...). It seems it would be possible to forward all such events from the parent NodeJS process to the child tool (if I used child_process), but that seems a bit too much to implement from scratch...
Cheers!
OK, so I think I have finally got it. I am posting this as an answer as it actually shows a real code example:
    const child_process = require("child_process");

    // Run psql inside an interactive bash, letting it inherit the terminal.
    child_process.spawnSync(
      "bash",
      ["-ic", "psql"],
      { stdio: "inherit" }
    );
    process.exit();
This will spin up bash (with a controlling tty) and, inside it, psql. Pressing Ctrl-C does not kill the node process; it is, as expected, sent to psql. I have no idea whether it would be possible to rewrite this so that the node process terminates before psql and bash, but for my use case this does it. Maybe it would even be possible to get it done without bash, but I think bash -i is actually the key here.
I'm looking for code examples on how to use the Linux system call ptrace() to trace the system calls of a process and all of its child, grandchild, etc. processes, similar to the behaviour of strace when it is given the -f flag to follow forks.
I'm aware of the alternative of looking into the sources of strace but I'm asking for a clean tutorial first in the hopes of getting a more isolated explanation.
I'm going to use this to implement a fast generic system call memoizer similar to https://github.com/nordlow/strace-memoize, but written in a compiled language. The current code I want to extend with this logic is my fork of ministrace at https://github.com/nordlow/ministrace/blob/master/ministrace.c
RTFM: PTRACE_SETOPTIONS with the PTRACE_O_TRACECLONE, PTRACE_O_TRACEFORK and PTRACE_O_TRACEVFORK flags. In a nutshell, once you set these options on a process, any children it creates will automatically be traced as well.
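To make that concrete, here is a minimal sketch (not a drop-in for ministrace: it only prints syscall stops, and a real tracer would also re-inject non-trap signals instead of swallowing them):

    /* Run a command under ptrace and follow every child it creates. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }

        pid_t child = fork();
        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);    /* let the parent trace us */
            execvp(argv[1], argv + 1);
            _exit(127);
        }

        /* First stop (after the exec): set the options once; forked, vforked
         * and cloned children inherit them and are attached automatically. */
        waitpid(child, NULL, 0);
        ptrace(PTRACE_SETOPTIONS, child, NULL,
               PTRACE_O_TRACEFORK | PTRACE_O_TRACEVFORK |
               PTRACE_O_TRACECLONE | PTRACE_O_TRACESYSGOOD);
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);

        for (;;) {
            int status;
            pid_t pid = waitpid(-1, &status, __WALL); /* any tracee, incl. threads */
            if (pid < 0)
                break;                                /* no tracees left */
            if (WIFEXITED(status) || WIFSIGNALED(status))
                continue;                             /* that tracee is gone */
            if (WIFSTOPPED(status) && WSTOPSIG(status) == (SIGTRAP | 0x80))
                printf("syscall stop in pid %d\n", (int)pid);
            ptrace(PTRACE_SYSCALL, pid, NULL, NULL);  /* resume to the next syscall */
        }
        return 0;
    }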
I am trying to build a Python GUI app which needs to run bash scripts and capture their output. Most of the commands used in these scripts require further input at run time, as they deal with networks and remote systems.
Python's subprocess module allows one to easily run external programs, while providing various options for convenience and customizability.
To run simple programs that do not require interaction, the functions call(), check_call(), and check_output() (arguments omitted here) are very useful.
For more complex use cases, where interaction with the running program is required, Popen objects can be used: they let you customize the input/output pipes as well as many other options, and the aforementioned functions are wrappers around them. You can interact with a running process through the provided methods poll(), wait(), communicate(), etc.
Also, if communicate() doesn't work for your use case, you can get the file descriptors of the pipes via Popen.stdin, Popen.stdout, and Popen.stderr, and interact with them directly with the select module. I prefer polling objects :)
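For example, a small sketch of both styles (the script name and the canned input are placeholders for your own scripts and prompts):

    import subprocess

    # Simple, non-interactive: run to completion and capture the output.
    listing = subprocess.check_output(["ls", "-l"], text=True)
    print(listing)

    # Interactive: start the script, feed it input, then collect its output.
    proc = subprocess.Popen(
        ["bash", "deploy.sh"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    out, _ = proc.communicate(input="yes\n")   # answer one prompt, wait for exit
    print(out)
    print("exit code:", proc.returncode)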
I'm not sure if what I am trying to do is possible, and I'm fairly new to Perl, so I'd appreciate any help.
My Perl application will use system() to issue commands to Perforce that will create a devel/workspace, integrate, sync, etc. But obviously I can't integrate until my devel is created, and I can't sync unless some condition is met, and so on. Also, when my code is synced and I run it, I'm not sure how to tell whether it has finished either.
So I'm wondering how to say (slack pseudo code):
system(create my devel);
wait until devel created
system(integrate blah);
wait until integration complete
system (launch test);
wait until test complete;
etc...
I looked at other questions and saw the possibility of using forks, but I am not familiar with how to code that in this context.
Thanks
Normally, the system command in Perl will wait until the command you asked it to run has completed. This works exactly the same as if you entered the command at a shell prompt: the program runs, and the shell prompt reappears only when the command has finished whatever it is doing.
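So your pseudo code maps almost directly onto consecutive system() calls with a status check after each one. A rough sketch (the Perforce commands and file names are just placeholders for whatever you actually run):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Each system() call blocks until its command exits; check the return
    # value (and $?) before moving on to the next step.
    system("p4 client -i < devel_spec.txt") == 0
        or die "creating devel workspace failed: $?";

    system("p4 integrate //depot/main/... //depot/devel/...") == 0
        or die "integrate failed: $?";

    system("p4 sync") == 0
        or die "sync failed: $?";

    system("./launch_test.sh") == 0
        or die "test run failed: $?";

    print "all steps completed\n";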
Perforce has a free Perl module downloadable from http://www.perforce.com/downloads/Perforce/20-User?qt-perforce_downloads_step_3=6#qt-perforce_downloads_step_3#52, with documentation at http://www.perforce.com/perforce/r12.1/manuals/p4script/02_perl.html#1047731.
But it sounds like you need more experience with Perl multiprogramming and IPC. Have you read the Camel book?
Nearly all Linux courses say that the init process, given the runlevel, will execute appropriate shell scripts to initialize the environment. But none of the courses describe in detail how the init process does it.
As I understand it, the init process is basically a C program, much like any Hello World C code, only much more sophisticated. Does anyone know how this C program actually runs through all the scripts and invokes them?
I would really appreciate any answer and especially if you have a link to an example source code.
You can find explanations of what it does in different documentation:
http://www.centos.org/docs/5/html/5.1/Installation_Guide/s2-boot-init-shutdown-init.html
http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=2&chap=4
and you can find its source code here:
init.h
init.c
Basically, init, as process 1, has the role of fork()ing every application on your system. If you boot Linux with init=/bin/sh on the kernel command line, process 1 forked by the kernel will be a shell. The sysvinit program makes it a bit easier to handle a complex boot: it adds the concept of runlevels, defines a basic environment, etc., so that it is easy to boot a system with many services rather than only a shell. All that part is well explained in the documentation I gave you.
Does anyone know how this C program actually runs through all the scripts and invokes them?
Well, it is as simple as in your question. When you boot your system, init reads the inittab file, figures out your preferences (what is the default runlevel? which programs to spawn? how many consoles? ...), and for the chosen runlevel forks a shell that executes the startup script. That startup script then works its way up to the scripts you activated in /etc/init.d. The shell script part is usually very distribution-specific, which is why I gave you two links about it; you may find it is different on Ubuntu and Debian...
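The "fork a shell that executes the startup script" part boils down to very little C. A stripped-down toy illustration (not init's actual code; the rc path and runlevel are just examples):

    /* Toy version of how an init-like program launches a runlevel's startup
     * script: fork a child, exec a shell on the script, wait for it. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *runlevel = "3";

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: become the shell that runs the rc script. */
            execl("/bin/sh", "sh", "/etc/rc.d/rc", runlevel, (char *)NULL);
            _exit(127);                /* only reached if exec failed */
        }

        /* Parent (the "init" side): wait for the runlevel script to finish. */
        int status;
        waitpid(pid, &status, 0);
        printf("rc script for runlevel %s finished with status %d\n",
               runlevel, WEXITSTATUS(status));
        return 0;
    }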
For more details on the source code, you may want to start at the bottom of init.c, which contains init's main loop.
And +1 on your question for your curiosity!