Common Lisp: Run function in the background - multithreading

What's the best way to run a function in the background in Common Lisp? Specifically, I'm making a call like
(trivial-shell:shell-command "<long and complicated command>"). This operation blocks for ~10 seconds, but I don't care about the output, just the side effect, so I want it run in the background so that program flow can continue. I've tried wrapping the whole thing in sb-thread:make-thread, but that didn't appear to make a difference.
I'd like to avoid getting wrapped up in all kinds of complicated threading, if at all possible. I'm running SBCL 1.1.18 on 64-bit Gentoo Linux.

My little investigation: it looks like the only solution is Renzo's answer: the launch-program function of UIOP.
Otherwise, to run shell commands there are other UIOP functions, such as run-program, but they are synchronous.
inferior-shell (https://gitlab.common-lisp.net/qitab/inferior-shell) supersedes trivial-shell, but it uses uiop:run-program under the hood, so it too only runs synchronously.

Here is an example with the cl-async and bordeaux-threads packages on SBCL. Suppose you have a shell script ./echo.sh in the current directory. You can run the script in the background. After the invocation of the script, the following code is immediately evaluated, so you get Waiting...... on your screen. After the script is done, the notifier is triggered and displays Threaded job done.
Make sure *features* contains :SB-THREAD, as #coredump says.
(require 'cl-async)
(require 'bordeaux-threads)

(as:with-event-loop ()
  (let ((notifier (as:make-notifier
                   (lambda ()
                     (format t "Threaded job done.~%")
                     (as:exit-event-loop)))))
    (format t "App started.~%")
    (bt:make-thread (lambda ()
                      (sb-ext:run-program "/bin/bash" (list "./echo.sh"))
                      (as:trigger-notifier notifier))))
  (format t "Waiting......~%"))
If you want to capture the stdout of the shell script, add :output t to the arguments of sb-ext:run-program.

Related

Determine if Javascript (NodeJS) code is running in a REPL

I wish to create one NodeJS source file in a Jupyter notebook which is using the IJavascript kernel so that I can quickly debug my code. Once I have it working, I can then use the "Download As..." feature of Jupyter to save the notebook as a NodeJS script file.
I'd like to have the ability to selectively ignore / include code in the notebook source that will not execute when I run the generated NodeJS script file.
I have solved this problem for doing a similar thing for Python Jupyter notebooks because I can determine if the code is running in an interactive session (IPython [REPL]). I accomplished this by using this function in Python:
def is_interactive():
    import __main__ as main
    return not hasattr(main, '__file__')
(Thanks to Tell if Python is in interactive mode)
Is there a way to do a similar thing for NodeJS?
I don't know if this is the correct way, but I couldn't find anything else. Basically, __dirname is not defined inside the REPL, so referencing it throws:
try {
    const repl = __dirname;
} catch (err) {
    // code run if in the REPL
}
It feels a little hacky but works ¯\_(ツ)_/¯
This may not help the OP in all cases, but could help others googling for this question. Sometimes it's enough to know if the script is running interactively or not (REPL and any program that is run from a shell).
In that case, you can check for whether standard output is a TTY:
process.stdout.isTTY
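For instance (a minimal sketch; isTTY is true when stdout is attached to a terminal, and undefined when stdout is piped or redirected):

```javascript
// process.stdout.isTTY is true when stdout goes to a terminal,
// and undefined when it is piped or redirected.
const interactive = Boolean(process.stdout.isTTY);
console.log(interactive ? "interactive (TTY)" : "non-interactive (piped or redirected)");
```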
The fastest and most reliable route would just be to query the process arguments. From the NodeJS executable alone, there are two ways to launch the REPL. Either you do something like this without any script following the call to node.
node --experimental-modules ...
Or you force node into the REPL using interactive mode.
node -i ...
The option-terminating parameter --, added in v6.11.0, means arguments are only appended to the process.argv array when node is executing in script mode: via a FILE, -p, or -e. Any arguments meant for NodeJS itself are filtered into the accompanying process.execArgv array, so the only thing left in the process.argv array should be process.execPath. Under these circumstances, we can reduce the check to the solution below.
const isREPL = process.execArgv.includes("-i") || process.argv.length === 1;
console.log(isREPL ? "You're in the REPL" : "You're running a script m8");
This isn't the most robust method, since a user can instantiate a REPL from an initiator script which your code could then be run by. For that case, I'm pretty sure you could throw an artificial error and crawl its stack trace looking for a REPL entry, although I haven't had the time to implement and verify that solution.
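A rough sketch of that stack-crawling idea (my own illustration, not from the original answer; the exact frame names in REPL stack traces vary across Node versions, e.g. "repl:1:1" in older releases and "REPL1:1:1" in newer ones):

```javascript
// Hypothetical helper: create an Error just to read its stack trace.
// In the REPL, evaluated code appears in frames named like "repl:1" or
// "REPL1:1"; in a script, frames point at the script's file path instead.
function calledFromRepl() {
  const stack = new Error().stack || "";
  return /\bREPL\d*:\d+/.test(stack) || /\brepl:\d+/.test(stack);
}

console.log(calledFromRepl() ? "likely REPL" : "likely script");
```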

Is it possible to break out of a restricted (custom) shell?

Not sure if this is the right place to ask.
Say I write a shell that takes stdin input and filters it so that, let's say, only certain commands work:
ls (list contents of binary directory and subdirectories)
update (git clone)
build (go build)
test (go test)
start (systemctl start this.service only)
stop (systemctl stop this.service only)
running (is the binary being executed, and with how many GOMAXPROCS?)
usage (memory, cpu usage)
gensvc (generate .service file)
exit (leave shell/logout)
You guessed it: I'm trying to give a user only very limited maintenance access over ssh.
Say I'm careful with \0 (I'd write it in Go anyway using bufio.Scanner)
Is there any way to stop the running shell and execute /bin/sh or similar or any way to get around this shell?
The idea is that a user should push their stuff via git to a bare repo; this repo is cloned to a certain directory on the filesystem, then go build is called and the binary is run with a systemd .service file that was generated previously.
Thinking logically, if the user is only able to write certain strings that are accepted, no there is no way. But maybe you know of one, some ctrl+z witchcraft ;) or whatever.
The only attack surface is the input string or rather bytes. Of course the user could git push a program that builds its own shell or runs certain commands, but that's out of scope (I would remove capabilities with systemd and restrict device access and forbid anything but the connection to the database server, private tmp and all, namespace and subnamespace it TODO)
The only problem I see is git pushing, but I'm sure I could work around that with a git-only mode argv added to ~/.ssh/authorized_keys: something like lish gitmode, executing stdin commands only if they start with git or the like.
Example:
https://gist.github.com/dalu/ce2ef43a2ef5c390a819
If you're only allowing certain commands, and your "shell" reads the command, parses it, and then executes it, then you should be fine, unless I misunderstood it.
Go "memory" can't be executed, not without you doing some nasty hacks with assembly anyway, so you don't have to worry about shell injection.
Something along these lines should be safe:
func getAction() (name string, args []string) {
	// read stdin to get the command of the user
}

func doAction() {
	for {
		action, args := getAction()
		switch action {
		case "update": // let's assume the full command is: update https://repo/path.git
			if len(args) != 1 {
				// error
			}
			out, err := exec.Command("/usr/bin/git", "clone", "--recursive", args[0]).CombinedOutput()
			// do stuff with out and err
		}
	}
}
If you are implementing the shell yourself and directly executing the commands via exec() or implementing them internally, then it is certainly possible to produce a secure restricted shell. If you are just superficially checking a command line before passing it on to a real shell then there will probably be edge cases you might not expect.
With that said, I'd be a bit concerned about the test command you've listed. Is it intended to run the test suite of a Go package the user uploads? If so, I wouldn't even try to exploit the restricted shell if I was an attacker: I'd simply upload a package with tests that perform the actions I want. The same could be said for build/start.
Have it reviewed by a pentesting team.
People can be very creative when breaking out of a sandbox of any type. Only if you never accept user input can you consider yourself reasonably safe on paper (and here every command is input). Paper security assumptions are a weak way to assess software; they are similar to 'no-bug' assumptions for an algorithm on paper: as soon as you implement it, 99% of the time a bug shows up.

python interpreter running shutdown code when I don't want it to

I'm using Python 3. I'm still learning but am, let's say, an intermediate at other programming languages. I'm building a simple GUI that just does simple things for now but will add on more things as I go. I've got some success. Then I had an idea, to have a function (or whatever it's called in Python) to run an external script, stored in my script folder. So I wrote this...
def runscript(scriptname):
    from subprocess import call
    call(['scripts/' + scriptname])
Then later on in my code I have this...
sdb = Button(topbar, text="Shutdown", command= runscript("shutdown.sh"), font=("Helvetica", 20), width=18)
shutdown.sh is a simple script that does what you might expect it to.
Now whenever I run the python script with python3 MyScript.py the machine instantly shuts down! I obviously only want it to shut down when I click the Shutdown button. From reading I gather it's to do with the fact that Python executes every line as it goes. So I don't understand why there are plenty of examples around on the internet for functions that will shutdown your PC, whereas my more general script-running code doesn't work in any useful way.
When you write the code runscript("shutdown.sh"), what should it do?
Obviously, the answer is that it should call the runscript function and pass it the argument "shutdown.sh".
So, when you write the code command=runscript("shutdown.sh"), what should it do?
Do you see the problem? You're executing the runscript function, and passing the result of that function to the command attribute. Instead, the command attribute takes a reference to a callable function. Since you are trying to pass in an explicit argument, one way to accomplish that is with lambda:
sdb = Button(..., command=lambda script="shutdown.sh": runscript(script))
Some people prefer functools.partial over lambda:
sdb = Button(..., command=functools.partial(runscript, "shutdown.sh"))

Loading and Unloading the .arx file with LISP

I have a few .arx applications for AutoCAD. Of these applications, a few are menu-based and others are command-line. Now what I am trying to do is:
Load the .arx app,
run it and then
unload it once the .arx application runs through a LISP command.
.arx applications run once the user clicks on the tabs that are provided.
.arx applications are written in VC++.
Now I have a lisp file, which gets loaded once the user starts AutoCAD. In the lisp files I have declared these functions for various .arx applications;
(defun c:XYZ_program ()
  (command)
  (command)
  (arxload "C:/ABC/XYZ.arx")
  (command "XYZ_program")
  (arxunload "XYZ.arx")
)
It works fine for programs that take input from menu-based forms, but it raises an error unloading XYZ.arx for programs that need command-line input.
I was wondering if there were any commands in LISP that will make sure (arxunload "XYZ.arx") executes only once (command "XYZ_program") has finished.
I am not sure on how to approach this problem. Any help with the same would be greatly appreciated.
Code I am currently using is this
(defun c:XYZ_program ()
  (command)
  (command)
  (arxload "C:/Example/Folder/XYZ.arx")
  (command "XYZ_program")
  (ads_queueexpr (arxunload "XYZ.arx"))
)
It's not clear from your question, but it sounds like the module cannot be unloaded because it is actively executing a command that is waiting for user input. So, I think you are asking how to postpone the unloading until the command is finished executing. The answer to that question is to use ads_queueexpr() to queue the (arxunload "XXX") function from within the command itself.
However, you are creating much bigger problems for yourself by attempting to unload the module. Unloading takes time, so it most certainly does not help performance. The correct solution to your problem is to not unload your modules and leave the unloading to AutoCAD.
http://docs.autodesk.com/ACD/2013/ENU/index.html?url=files/GUID-3FF72BD0-9863-4739-8A45-B14AF1B67B06.htm,topicNumber=d30e502824
(defun c:Load ()
  (arxload "the\\file\\path")
  ; run the app
)
Try this:
(arxload "C:/ABC/XYZ.arx" nil)

(defun c:XYZ_program ()
  (command)
  (command)
  (command "XYZ_program")
  (arxunload "XYZ.arx" nil)
)
Good luck.

node.js -- execute command synchronously and get result

I'm trying to execute a child_process synchronously in node.js (yes, I know this is bad; I have a good reason) and retrieve any output on stdout, but I can't quite figure out how...
I found this SO post: node.js execute system command synchronously, which describes how to use a library (node-ffi) to execute the command, and this works great, but the only thing I'm able to get back is the process exit code. Any output the command produces is sent directly to stdout; how do I capture it?
> run('whoami')
username
0
In other words, username is echoed to stdout, and the result of run is 0. I'd much rather figure out how to read stdout.
So I have a solution working, but I don't exactly like it... Just posting here for reference:
I'm using the node-ffi library referenced in the other SO post. I have a function that:
takes in a given command
appends >> run-sync-output to it
executes it
reads run-sync-output synchronously and stores the result
deletes this tmp file
returns the result
There's an obvious issue: if the user doesn't have write access to the current directory, it will fail. Plus, it's just wasted effort. :-/
I have built a node.js module that solves this exact problem. Check it out :)
exec-plan
Update
The above module solves your original problem, because it allows for the synchronous chaining of child processes. Each link in the chain gets the stdout from the previous process in the chain.
I had a similar problem and I ended up writing a node extension for this. You can check out the git repository. It's open source and free and all that good stuff !
https://github.com/aponxi/npm-execxi
ExecXI is a node extension written in C++ to execute shell commands
one by one, outputting the command's output to the console in
real-time. Optional chained, and unchained ways are present; meaning
that you can choose to stop the script after a command fails
(chained), or you can continue as if nothing has happened !
Usage instructions are in the ReadMe file. Feel free to make pull requests or submit issues!
However it doesn't return the stdout yet... Well, I just released it today. Maybe we can build on it.
Anyway, I thought it was worth to mention it. I also posted this to a similar question: node.js execute system command synchronously
Since Node version v0.11.12, there is a child_process.execSync function for this.
Other than having to write the code a little differently, there's actually no reason to do anything synchronously.
What don't you like about this? (docs)
var exec = require('child_process').exec;
exec('whoami', function (error, username) {
    console.log('stdout: %s', username);
    continueWithYourCode();
});
