Pass JSON as command line argument to Node

I'd like to pass a JSON object as a command line argument to node. Something like this:
node file.js --data { "name": "Dave" }
What's the best way to do this, or is there another, more advisable way to accomplish the same thing?

If it's a small amount of data, I'd use https://www.npmjs.com/package/minimist, which is a command line argument parser for Node.js. It's not JSON, but you can simply pass options like
--name=Foo
or
-n Foo
I think this is better suited to a command line tool than JSON.
If you have a large amount of data, you're better off creating a JSON file and passing only the file name as a command line argument, so that your program can load and parse it.
Big objects as command line arguments are most likely not a good idea.
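That said, for small objects the original idea does work if the JSON is quoted as a single shell argument. A minimal, dependency-free sketch (the fallback value is only for this demo):

```javascript
// Invoke as:  node file.js '{"name": "Dave"}'
// The whole object arrives as one argv entry, so JSON.parse handles it.
const raw = process.argv[2] || '{"name": "Dave"}'; // fallback only for the demo
const data = JSON.parse(raw);
console.log(data.name); // → Dave
```

Single quotes protect the inner double quotes from the shell on POSIX systems; Windows cmd needs different escaping, which is one more reason the file-based approach scales better.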

This works for me:
$ node foo.js --json-array='["zoom"]'
then in my code I have:
import * as _ from 'lodash';
// cliOpts is the options object returned by dashdash's parse()
const parsed = JSON.parse(cliOpts.json_array || '[]');
_.flattenDeep([parsed]).forEach(item => console.log(item));
I use dashdash, which I think is the best choice when it comes to command line parsing.
To do the same thing with an object, just use:
$ node foo.js --json-object='{"bar": true}'

This might be a bit overkill and not appropriate for what you're doing because it renders the JSON unreadable, but I found a robust way (as in "works on any OS") to do this was to use base64 encoding.
I wanted to pass around lots of options via JSON between parts of my program (a master node routine calling a bunch of small slave node routines). My JSON was quite big, with annoying characters like quotes and backslashes so it sounded painful to sanitize that (particularly in a multi-OS context).
In the end, my code (TypeScript) looks like this:
in the calling program:
const buffer: Buffer = Buffer.from(JSON.stringify(myJson)); // Buffer.from replaces the deprecated new Buffer()
const command: string = 'node slave.js --json "' + buffer.toString('base64') + '" --b64';
const slicing: child_process.ChildProcess = child_process.exec(command, ...)
in the receiving program:
let inputJson: string;
if (commander.json) {
inputJson = commander.json;
if (commander.b64) {
inputJson = Buffer.from(inputJson, 'base64').toString('utf8'); // utf8 rather than ascii, since JSON may contain non-ASCII
}
}
(The --b64 flag lets me still choose between manually entering normal JSON or using the base64 version; I'm using commander just for convenience.)
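The round trip can be sketched without commander or child processes; only Buffer and JSON are involved (the sample object is made up):

```javascript
// Sender side: stringify, then base64-encode so the shell never sees
// quotes, backslashes, or other characters that would need escaping.
const myJson = { name: 'Dave', note: 'quotes " and \\ survive intact' };
const encoded = Buffer.from(JSON.stringify(myJson)).toString('base64');

// Receiver side: decode from base64, then parse.
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
console.log(decoded.note === myJson.note); // → true
```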

exec() not working when trying to execute a string containing the command "abs.__doc__"

I am trying to execute the command abs.__doc__ inside the exec() function, but for some reason it does not work.
function = input("Please enter the name of a function: ")
proper_string = str(function) + "." + "__doc__"
exec(proper_string)
Essentially, I am going through a series of exercises, and one of them asks to provide a short description of the entered function using the __doc__ attribute. I am trying with abs.__doc__, but my command line comes up empty. When I run python in the command line and type abs.__doc__ by itself, it works, but when I try to pass it as a string to exec() I can't get any output. Any help would be greatly appreciated.
As a note, I do not think I have imported any libraries that could interfere, but these are the libraries that I have imported so far:
import sys
import datetime
from math import pi
My Python version is Python 3.10.4. My operating system is Windows 10.
abs.__doc__ is a string. exec() runs the code but discards the expression's value, so nothing comes back; use eval() instead, which returns it.
Example:
function = input("Please enter the name of a function: ")
proper_string = str(function) + "." + "__doc__"
doc = eval(proper_string)
You can access it using globals():
def func():
"""Func"""
pass
mine = input("Please enter the name of a function: ")
print(globals()[mine].__doc__)
globals() returns a dictionary that keeps track of all the module-level definitions. globals()[mine] simply looks up the name stored in mine, which is a function object if mine is "func".
As for abs and int -- since these are builtins -- you can look them up directly using getattr(abs, "__doc__") or, more explicitly, getattr(__builtins__, "abs").__doc__.
There are different ways to look up a Python object corresponding to a given string; it's better not to use exec and eval unless really needed.
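A small sketch contrasting the two lookups (assuming the user typed abs; builtins is the stdlib module behind __builtins__):

```python
import builtins

name = "abs"  # stand-in for input("Please enter the name of a function: ")

# eval returns the expression's value, so the docstring can be captured...
doc_via_eval = eval(name + ".__doc__")

# ...but an explicit attribute lookup avoids eval entirely.
doc_via_getattr = getattr(builtins, name).__doc__

print(doc_via_eval == doc_via_getattr)  # → True
```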

Subprocess stdout: remove unnecessary char

I'm using the subprocess module and it works fine; the only thing is that stdout is returned as bytes, so it shows up with a b' prefix, or in some cases longer text like "user config - ignore ...". Is it possible to remove this first part of the stdout without using str.substring() or similar methods?
output = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
In the above example, the .decode() method can be called on stdout, and the result will be saved as a str:
decoded_output = output.stdout.decode()
And if some commands support JSON output (for example pvesh in Proxmox), you can take the string and load it as JSON.
json_output = json.loads(decoded_output)
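A self-contained sketch of both steps, using echo as a stand-in for a command that prints JSON (the payload is made up):

```python
import json
import subprocess

# text=True makes stdout a str directly, so no manual .decode() is needed.
output = subprocess.run(
    ['echo', '{"user": "dave", "uid": 1000}'],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)

data = json.loads(output.stdout)
print(data['user'])  # → dave
```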

How do I reload a module loaded with require? (Node.js)

For example
I have file 1
[{"a1":1}]
And file 2
var obj1 = JSON.parse(fs.readFileSync("./file 1.JSON", "utf-8"));
module.exports.obj1 = obj1
I start program...
for (i = 0; ; i++) {
  console.log(require('./file 2').obj1[0].a1);
  let bb = JSON.parse(fs.readFileSync("./file 1.JSON", "utf-8"));
  console.log(bb[0].a1);
  // (a pause here to slow the loop down)
}
While the program runs, I edit file 1. I then see the new value through fs, but not through require.
Now the question: how do I make require show the new value?
When node executes your program and executes a require statement, it loads any modules. It then caches those modules so that subsequent requests for the same module give the same code. So in your example, on the first time through the loop, file 2 is going to be initialized with whatever value it reads from the JSON file and it won't be re-initialized.
Modules are intended to be static bundles of code and using require to include the same module multiple times is supposed to return the same module. For dynamic values like your JSON file, use fs functions to access the values at the time they're needed.

Terminating process.stdin in node.js

The program below simply reads a string and outputs it. When I run this on cmd, the program doesn't print out the string. It keeps reading inputs until I terminate with Ctrl+C. How do I tell the program when my input string is over, so it can print the output?
var concat=require('concat-stream');
var str=[];
process.stdin.pipe(concat(function(buff){
console.log(buff.toString());
}));
concat-stream is waiting to receive a finish event. In your example that will happen when you close stdin. If you’re running this in a shell you can close stdin by pressing Ctrl+D. If you’re piping something to your process, make sure it closes its stdout when it’s done.
If you’re trying to make your script interactive in the shell, try split:
process.stdin
.pipe(require('split')())
.on('data', function (line) {
console.log('got “%s”', line);
});
Obviously the answer by Todd Yandell is the right one, and I have already upvoted it, but I wanted to add that besides split, you may also consider through, which creates a sort of transform stream; it would also work in an interactive way, since it is not an aggregating pipe.
Here is an example in which everything you write to standard input gets uppercased on standard output interactively:
var through = require('through');
function write(buffer){
var text = buffer.toString();
this.queue(text.toUpperCase());
}
function end(){
this.queue(null);
}
var transform = through(write, end);
process.stdin.pipe(transform).pipe(process.stdout);
You may even combine it with split by doing:
process.stdin
.pipe(split())
.pipe(transform)
.pipe(process.stdout);

Using a String as code with Groovy XML Parser

I am new to Groovy - I am hoping this is a simple thing to solve. I am reading in an XML document, and then I am able to access data like this:
def root = new XmlParser().parseText(xmlString)
println root.foo.bar.text()
What I would like to do, is to have loaded the "foo.bar" portion of the path from a file or data base, so that I can do something like this:
def paths = ["foo.bar","tashiStation.powerConverter"] // defined for this example
paths.each {
path ->
println path + "\t" + root.path.text()
}
Obviously the code as written does not work... I thought maybe this would work:
paths.each {
path ->
println path + "\t" + root."${path}".text()
}
...but it doesn't. I based my initial solution on pg 153 of Groovy for DSL where dynamic methods can be created in a similar way.
Thoughts? The ideal solution will not add significant amounts of code and will not add any additional library dependencies. I can always fall back to doing this stuff in Java with JDOM but I was hoping for an elegant groovy solution.
This is very similar to this question from 3 days ago and this question
You basically need to split your path on "." and then walk down the resulting list, moving through your object graph:
def val = path.split( /\./ ).inject( root ) { obj, node -> obj?."$node" }?.text()
println "$path\t$val"
Should do it in this instance :-)
