I want to create a NodeJS source file in a Jupyter notebook that uses the IJavascript kernel, so that I can quickly debug my code. Once I have it working, I can then use Jupyter's "Download As..." feature to save the notebook as a NodeJS script file.
I'd like to be able to selectively include code in the notebook source that will not execute when I run the generated NodeJS script file.
I have solved this problem for Python Jupyter notebooks, because there I can determine whether the code is running in an interactive session (the IPython REPL). I accomplished this with the following Python function:
def is_interactive():
    import __main__ as main
    return not hasattr(main, '__file__')
(Thanks to Tell if Python is in interactive mode)
Is there a way to do a similar thing for NodeJS?
I don't know if this is the correct way, but I couldn't find anything else.
Basically, if you do:
try {
    const repl = __dirname; // defined when running as a script file
} catch (err) {
    // __dirname is not defined in the REPL, so this code runs there
}
It feels a little hacky, but it works ¯\_(ツ)_/¯
This may not help the OP in all cases, but it could help others googling for this question. Sometimes it's enough to know whether the script is running interactively or not (in the REPL, or in any program run directly from a shell).
In that case, you can check for whether standard output is a TTY:
process.stdout.isTTY
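For example, a minimal sketch of branching on that check (keeping in mind it detects "attached to a terminal", not specifically the REPL):

// Branch on whether stdout is attached to a terminal.
if (process.stdout.isTTY) {
    console.log('stdout is a TTY: running from an interactive shell');
} else {
    console.log('stdout is piped or redirected: probably not interactive');
}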
The fastest and most reliable route is simply to query the process arguments. From the NodeJS executable alone, there are two ways to launch the REPL. Either you do something like this, without any script following the call to node:
node --experimental-modules ...
Or you force node into the REPL using interactive mode.
node -i ...
The option-ending parameter --, added in v6.11.0, will never append arguments to the process.argv array unless node is executing in script mode (via a FILE, -p, or -e). Any arguments meant for NodeJS itself are filtered into the accompanying process.execArgv array, so the only thing left in the process.argv array should be process.execPath. Under these circumstances, we can reduce the check to the solution below.
const isREPL = process.execArgv.includes("-i") || process.argv.length === 1;
console.log(isREPL ? "You're in the REPL" : "You're running a script m8");
This isn't the most robust method, since a user can otherwise instantiate a REPL from an initiator script which your code could be run by. For that case, I'm pretty sure you could use an artificial error to crawl the traceback and look for a REPL entry, although I haven't had the time to implement and verify that solution.
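For what it's worth, an untested sketch of that traceback idea might look like the code below; it assumes (and this is only an assumption) that frames produced by REPL input are labelled like "REPL1:1:1" on newer Node versions or "repl:1:1" on older ones. Verify on your Node version before relying on it.

function looksLikeREPL() {
    // Crawl an artificial error's stack and look for REPL-style frames.
    const stack = new Error().stack || "";
    return /\bREPL\d*:\d+|\bat repl:\d+/.test(stack);
}

console.log(looksLikeREPL() ? "Probably in the REPL" : "Probably running as a script");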
I'm just basically asking:
if it's considered OK to use exec() in this context
if there's a better/more pythonic solution
for any input or comments on how my code could be improved
First, some context. I have main.py which basically takes input and checks to see if I've written a command. Let's say I type '/help'. The slash just tells it my input was supposed to be a command, so then it checks if a function called 'help' exists, and if so, that function will be run.
To keep things tidy in main.py, and to allow myself to add more commands easily, I have a 'commands' directory, with individual command files in it, such as help.py. help.py would look like this for example:
def help():
    print("You've been helped")
So then of course, I need to import help() from help.py, which was trivial.
As I added more commands, I decided to add an init.py file where I'd keep all the command import lines of code, and then just do 'from init import *' in main.py. At first, each time I added a command, I'd add another line in init.py to import it. But that wasn't as flexible as I wanted, so I thought, there's got to be a way to just loop through all the .py files in my commands directory and import them. I struggled with this for a while but came up with a solution that works.
In the init.py snippet below, I loop through the commands directory (and a couple others, but they're irrelevant to the question), and you'll see I use the dreaded exec() function to actually import the commands.
loaded, failed = '', ''
for directory in command_directories:
    command_list = os.listdir(directory)
    command_list.sort()
    for command_file in command_list:
        if command_file.endswith(".py"):
            command_name = command_file.split(".")[0]
            try:
                # Evil exec() hack to use variable-name directories/modules
                # Haven't found a more... pythonic... way to do this
                exec(f"from {directory}.{command_name} import {command_name}")
                loaded = loaded + f" - Loaded: {command_name}\n"
            except:
                failed = failed + f" - Failed to load: {command_name}\n"
if debug == True:
    for init_debug in [loaded, failed]: print(init_debug)
I use exec() because I don't know a better way to make a variable with the name of the function being loaded, so I use {command_name} in my exec string to arbitrarily evaluate the variable name that will store the function I'm importing. And... well, it works. The functions work perfectly when called from main.py, so I believe they are being imported in the correct namespace.
Obviously, exec() can be dangerous, but I'm not taking any user input into it, just file names, and those are filenames that only I am creating. This program isn't being distributed, but if it were, then I believe using exec() would be bad, since there's potential for someone to exploit it.
If I'm wrong about something, I'd love to hear about it and get suggestions for a better implementation. Python has been very easy to pick up for me, but I'm probably missing some of the fundamentals.
I should note, I'm running python 3.10 on replit (until I move this project to another host).
Does anybody know how we can run test cases against data (a function) sent by the user in Django (as implemented by LeetCode, CodeSignal and Codewars)?
How is my function (solution) tested against these tests, and how can I implement this functionality in the backend using Django and Django REST framework?
import subprocess

def test_submission():
    compiled_code = "test.out"
    # Feed the test input to the compiled submission and capture its stdout
    with open("path/to/test_input.txt") as test_input, \
         open("path/to/submission_output.txt", "w") as submission_output:
        cmd = [f"./{compiled_code}"]
        subprocess.run(cmd, stdin=test_input, stdout=submission_output)
    # At this point the output of the executed code, run against test_input,
    # is stored in "path/to/submission_output.txt"
    return
The submitted program can be any file, be it C, C++ or Python.
If you don't need test support, you should be able to run code easily by just using the official container images for the language and the Docker CLI.
There is also a tool that is used by Codewars and Qualified.
You can also look at this clone of Codewars for a better understanding; I think it's a bit simplified and a practical implementation of that Codewars CLI.
One last thing: if you are using Python you could also use eval, but the problem is that a malicious user could insert a rogue script that could be harmful, so I think you should avoid it.
Personally, I think you should use the simple Docker option with a bit of sandboxing. In our case, it's mostly just a thin wrapper around the Docker API that takes the submitted code, prepares the environment, and executes it. It's so simple that the original PoC was just a few lines of shell script and a tiny tool written in Go.
The repo for all the code I've been using is updated here. When I run the requestor script it exits with runtime error 2 (file not found). I am not sure how to debug this further or fix it. So far I've converted my code over to a python slim Docker image to better mirror the example. The image also works when I spin it up myself: typing and running "/golem/work/imageclassifier.py --trainmodel" works from root. I switched all my code to use absolute paths. I also made sure the shebang (#!) line uses Linux end-of-line characters rather than Windows ones, which was giving me errors before. I also fixed a bug where my script returned error code 2 when called with no args; it now passes.
clf.fit(trainDataGlobal, trainLabelsGlobal)
pkl_file = "classifier.pkl"
with open(pkl_file, 'wb') as file:
pickle.dump(clf, file)
is the only piece I could think of that could cause the issue, but as far as I can tell this is the proper way to pickle something in Python. The requestor script is also heavily based on the simple service example, and I tried to mirror my design on that. I just need help getting more information while debugging, or guidance on how to move forward from here.
I have a node.js application which connects to a server every day.
On this server, a new version of the app can be available. If so, the installed app downloads it, checks that the download is complete, and then stops itself by calling a shell script, which replaces the old app with the new one and restarts it.
I'm struggling with starting the update script.
I know I can start it with the child_process.execFile function, which I do:
var execF = require('child_process').execFile;

// Directory of the currently running script, with a trailing slash
var PATH = process.argv[1].substr(0, process.argv[1].lastIndexOf('/') + 1),
    filename = 'newapp.js';

execF(PATH + 'up.sh', [PATH + filename], function () { console.log('done'); });
up.sh, for now is just:
cat $1 > /home/pi/test
I get 'done' printed in the console, but test isn't created.
I know that execFile creates a subprocess; is that what prevents the script from doing this?
If I succeed in starting this, I know I only have to add some cp commands to the script to have my app auto-update.
EDIT:
Started as usual (calling the script from the console), it works well. Is there a reason for the script not to execute when called from node.js?
I'd suggest that you consider using a module that can do this for you automatically rather than duplicating the effort. Or, at least use their technique as inspiration for your own requirements.
One example is: https://github.com/edwardhotchkiss/always
It's simple to use:
Usage: always <app.js>
=> always app.js
Then, anytime your code changes, the app is killed, and restarted.
As you can see in the source, it uses the Monitor class to watch a specified file, and then uses spawn to kick it off (and of course kill to end the process when a change has happened).
Unfortunately, the [always] output prefix is currently hardcoded into the code, but I'm sure it would be a simple change/pull request to make it optional/configurable. If the author doesn't accept your change, you could just modify a local copy of the code (as it's quite simple overall).
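If you end up rolling your own instead, the underlying pattern is small. Here is a minimal sketch using only core modules (this is not the always module's actual code, just the watch/spawn/kill idea described above):

var fs = require('fs');
var spawn = require('child_process').spawn;

var script = process.argv[2]; // e.g. node watcher.js app.js
var child = null;

function start() {
    // Run the watched script with the same node binary, inheriting stdio
    child = spawn(process.execPath, [script], { stdio: 'inherit' });
}

fs.watchFile(script, function () {
    if (child) child.kill(); // kill the old process on a change...
    start();                 // ...and spawn a fresh one
});

start();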
Make sure when you spawn/exec the process you are executing the shell that will be processing the script and not the script itself.
It should be something like:
execF("/usr/bin/sh", [PATH + 'up.sh', PATH + filename]);
I'm trying to execute a child_process synchronously in node.js (Yes, I know this is bad, I have a good reason) and retrieve any output on stdout, but I can't quite figure out how...
I found this SO post: node.js execute system command synchronously that describes how to use a library (node-ffi) to execute the command, and this works great, but the only thing I'm able to get is the process exit code. Any output the command produces goes directly to stdout -- how do I capture it?
> run('whoami')
username
0
In other words, username is echoed to stdout, and the result of run is 0.
I'd much rather figure out how to read stdout
So I have a solution working, but don't exactly like it... Just posting here for reference:
I'm using the node-ffi library referenced in the other SO post. I have a function (sketched below) that:
takes in a given command
appends >> run-sync-output
executes it
reads run-sync-output synchronously and stores the result
deletes this tmp file
returns result
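Roughly, that function looks like this (run() being the synchronous ffi-based helper from the linked post, so that part is assumed rather than shown here):

var fs = require('fs');

// Assumes run(cmd) is the synchronous, ffi-based helper from the linked post,
// which blocks until the shell command exits.
function execSyncHack(command) {
    var tmpFile = 'run-sync-output';
    run(command + ' >> ' + tmpFile);               // execute, appending stdout to the tmp file
    var output = fs.readFileSync(tmpFile, 'utf8'); // read the captured output synchronously
    fs.unlinkSync(tmpFile);                        // delete the tmp file
    return output;                                 // return the result
}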
There's an obvious issue where if the user doesn't have write access to the current directory, it will fail. Plus, it's just wasted effort. :-/
I have built a node.js module that solves this exact problem. Check it out :)
exec-plan
Update
The above module solves your original problem, because it allows for the synchronous chaining of child processes. Each link in the chain gets the stdout from the previous process in the chain.
I had a similar problem and I ended up writing a node extension for this. You can check out the git repository. It's open source and free and all that good stuff !
https://github.com/aponxi/npm-execxi
ExecXI is a node extension written in C++ to execute shell commands one by one, outputting each command's output to the console in real-time. Optional chained and unchained modes are present, meaning that you can choose to stop the script after a command fails (chained), or continue as if nothing had happened!
Usage instructions are in the ReadMe file. Feel free to make pull requests or submit issues!
However, it doesn't return the stdout yet... Well, I just released it today. Maybe we can build on it.
Anyway, I thought it was worth mentioning. I also posted this to a similar question: node.js execute system command synchronously
Since Node version v0.11.12, there is a child_process.execSync function for this.
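For example, something like this (using only the documented child_process.execSync API):

var execSync = require('child_process').execSync;

// Blocks until the command exits and returns its stdout.
var username = execSync('whoami', { encoding: 'utf8' }).trim();
console.log('stdout: %s', username);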
Other than writing code a little differently, there's actually no reason to do anything synchronously.
What don't you like about this? (docs)
var exec = require('child_process').exec;

exec('whoami', function (error, username) {
    console.log('stdout: %s', username);
    continueWithYourCode();
});