Executing REDIS Command in Node.js

I am writing a Node app. This app interacts with a REDIS database. To do that, I'm using node_redis. Sometimes, I want to just execute a command using a line of text. In other words, I want to do a pass through without using the wrapper functions. For instance, I may have:
set myKey myValue
I would LOVE to be able to just execute that without having to break apart the text and call client.set('mykey', 'myValue'); Is there a way to just execute a command like that against REDIS in the Node world? If so, how?
Thanks!

You should be able to use client.send_command(command_name, args, callback) to send arbitrary commands to Redis. Note that the command name and its arguments must be passed separately, so in your case you would call client.send_command('set', ['myKey', 'myValue'], cb) rather than passing the whole line as one string.
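Since you are starting from a single line of text, you still need to split it into the command name and its arguments yourself. A minimal sketch (the naive whitespace split is an assumption; quoted arguments like set k "a b" would need a real tokenizer):

```javascript
// Split a raw command line into the command name and its arguments.
// NOTE: naive whitespace split; quoted values need a proper tokenizer.
function parseCommand(line) {
  const parts = line.trim().split(/\s+/);
  return { command: parts[0], args: parts.slice(1) };
}

// With node_redis you would then forward it, e.g.:
// const { command, args } = parseCommand('set myKey myValue');
// client.send_command(command, args, (err, reply) => console.log(err, reply));
```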

Related

yargs (node module): command module files... parse the command, but without automatically executing the command's exports.handler code

I'm using yargs with command modules, i.e. separate file for each command, with each command file defining exports.command, exports.desc, exports.handler for its own command.
For 60% of my commands, the way it works by default is fine. But the other 40% I want to first connect to a database with TypeORM, which is async, and then only after the connection has been made, execute exports.handler wrapped inside a .then() so that it doesn't execute before the DB has connected.
The issue is that in my global entry file, as soon as all the global yargs options are defined and I execute .argv for yargs to parse the command (which determines whether it's a db or non-db command), it also executes exports.handler immediately, so I have no opportunity to first connect to the database and use .then() to wrap the exports.handler command code (for the 40% of commands that need it).
I want to avoid having to add async code just for the db connection to every one of those 40% of command files. And also avoid connecting to the db for all 100% of commands, as 60% of them don't need it.
I was about to make major changes and no longer use yargs' command modules... but there must be an easier way to do this?
Is there a way to tell yargs that I want to parse the command but not execute exports.handler yet?
If I can just figure that out, then I can simply segregate the commands that need to be wrapped inside .then() and those that don't.
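One pattern for that segregation, sketched here with illustrative names (connectDb, withDb are not yargs API): connect lazily, at most once, and wrap only the handlers that need the database.

```javascript
// Lazily-created shared connection promise; only set on first use,
// so the 60% of commands that never call getDb never connect.
let dbPromise = null;

function getDb(connectFn) {
  if (!dbPromise) dbPromise = connectFn();
  return dbPromise;
}

// Wrap a command handler so it runs only after the DB is ready.
function withDb(connectFn, handler) {
  return (argv) => getDb(connectFn).then((db) => handler(argv, db));
}
```

In each db-dependent command file you would then export the wrapped handler, e.g. exports.handler = withDb(connectDb, realHandler), while the other 60% of command files stay unchanged.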

get the output of the remote execution of the SSH command

I work with Express, and to execute a remote SSH command I use the 'simple-ssh' module. This code executes the command, but I could not get the output outside of this block.
ssh.exec('ls Documents/versions', {
    out: function(stdout) {
        arrayOfVersion = stdout.split("\n");
    }
}).start();
How can I get the content of arrayOfVersion and manipulate it afterwards?
Your function creates arrayOfVersion asynchronously, so you won't be able to access it outside of that scope without some sort of waiting process that waits until the variable has a value.
You can do this in a few ways, to begin with I would recommend researching how nodejs handles async functions as this is a big part of nodejs. Generally you would use one of the following: callbacks, promises, or async/await.
With any of those techniques, you should be able to run your SSH code and then continue on with the result of the stdout.
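For example, the callback can be wrapped in a Promise so the result can be consumed once it arrives. The simple-ssh usage in the comment is an assumption based on the snippet above:

```javascript
// Wrap a callback-style exec in a Promise so the result can be awaited
// or chained with .then() later.
function linesFrom(runExec) {
  return new Promise((resolve) => {
    runExec((stdout) => resolve(stdout.split('\n')));
  });
}

// With simple-ssh this would look roughly like (assumed, untested):
// linesFrom((out) => ssh.exec('ls Documents/versions', { out }).start())
//   .then((arrayOfVersion) => {
//     // manipulate arrayOfVersion here
//   });
```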

Can I execute a raw MongoDB query in node-mongodb-native driver?

FYI - I know how to use the MongoDB driver and know this is not how one would use it in a web app, but this is not for a web app. My aim is to emulate the MongoDB shell in Node.js.
I'm writing a DB GUI and would like to execute a raw MongoDB query, eg db.tableName.find({ col: 'value' }). Can I achieve this using the native MongoDB driver? I'm using v2.2, which is currently the latest version.
If not, how can I achieve this in NodeJS?
Note: The question has changed - see the updates below.
Original answer:
Yes.
Instead of:
db.tableName.find({ col: 'value' })
You use it as:
db.collection('tableName').find({ col: 'value' }).toArray((err, data) => {
    if (err) {
        // handle error
    } else {
        // you have data here
    }
});
See: http://mongodb.github.io/node-mongodb-native/2.2/api/Collection.html#find
Update
After you changed your question and posted some comments it is more clear what you want to do.
To achieve your goal of emulating the Mongo shell in Node you would need to parse the command typed by the user and execute the appropriate command while keeping in mind:
- the difference between SpiderMonkey used by the Mongo shell and Node with V8 and libuv
- the difference between BSON and JSON
- the fact that the Mongo shell works synchronously while the Node driver works asynchronously
The last part will probably be the hardest part for you. Remember that in the Mongo shell this is perfectly legal:
db.test.find()[0].x;
In Node the .find() method doesn't return the value but it either takes a callback or returns a promise. It will be tricky. The db.test.find()[0].x; case may be relatively easy to handle with promises (if you understand the promises well) but this will be harder:
db.test.find({x: db.test.find()[0].x});
and remember that you need to handle arbitrarily nested levels.
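With async/await, one way to flatten such nested shell expressions is to evaluate the inner query first and splice its result into the outer one. The fakeCollection below is a stand-in for a real driver collection, not driver API:

```javascript
// A stand-in for a driver collection: find() resolves to an array of docs.
const fakeCollection = {
  docs: [{ x: 1 }, { x: 2 }],
  find(query) {
    // Filter in-memory here; a real driver would filter server-side.
    const q = query || {};
    return Promise.resolve(
      this.docs.filter((d) => Object.keys(q).every((k) => d[k] === q[k]))
    );
  },
};

// Shell:  db.test.find({x: db.test.find()[0].x})
// Node:   evaluate the inner query first, then run the outer one.
async function nestedFind(coll) {
  const inner = (await coll.find())[0].x;
  return coll.find({ x: inner });
}
```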
The Mongo protocol
After reading some of the comments I think it's worth noting that what you actually send to the Mongo server has nothing to do with the JavaScript that you write in the Mongo shell. The Mongo shell uses SpiderMonkey with a number of predefined functions and objects.
But you don't actually send JavaScript to the Mongo server so you can't send things like db.collection.find(). Rather you send a binary OP_QUERY struct with a collection name encoded as a cstring and a query encoded as BSON plus a bunch of binary flags. See:
https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/#wire-op-query
The BSON is itself a binary format with a number of low level values defined as bytes:
http://bsonspec.org/spec.html
The bottom line is that you don't send to the Mongo server anything resembling what you enter in the Mongo shell. The Mongo shell parses the things that you type using the SpiderMonkey parser and sends binary requests to the actual Mongo server. The Mongo shell uses JavaScript but you don't communicate with the Mongo server in JavaScript.
Example
Even the JSON query object is not sent to Mongo as JSON. For example, when you are searching for a document with a hello property equal to "world", you would use {hello: 'world'} in JavaScript or {"hello": "world"} in JSON, but this is what gets sent to the Mongo server - by the Mongo shell or by any other Mongo client:
\x16\x00\x00\x00\x02hello\x00\x06\x00\x00\x00world\x00\x00
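You can reproduce those exact bytes by hand-encoding the document per the BSON spec (an illustration of the format, not how the driver does it internally):

```javascript
// Hand-encode {hello: "world"} per the BSON spec:
// int32 total length, 0x02 (UTF-8 string element), cstring key,
// int32 value length, value bytes + NUL, and a trailing 0x00.
function encodeHelloWorld() {
  const key = Buffer.from('hello\0', 'utf8');  // cstring key
  const val = Buffer.from('world\0', 'utf8');  // string value + NUL
  const valLen = Buffer.alloc(4);
  valLen.writeInt32LE(val.length, 0);          // 6, includes the NUL
  const body = Buffer.concat([
    Buffer.from([0x02]), key, valLen, val, Buffer.from([0x00]),
  ]);
  const total = Buffer.alloc(4);
  total.writeInt32LE(body.length + 4, 0);      // whole-document length: 0x16 = 22
  return Buffer.concat([total, body]);
}
```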
Why it's so different
To understand why the syntax used in Node is so different from the Mongo shell, see this answer:
Why nodejs-mongodb middleware has different syntax than mongo shell?

How can I use common-node with mongo-sync in my Node.js code?

I found the mongo-sync package, which lets me use MongoDB synchronously.
Its GitHub page says:
It is a thin wrapper around the official MongoDB driver for Node. Here is a quick usage example that you can use with Common Node:
var Server = require("mongo-sync").Server;
var server = new Server('127.0.0.1');
var result = server.db("test").getCollection("posts").find().toArray();
console.log(result);
server.close();
How can I use it like this?
It's mentioned that it should be used with Common Node.
Does that mean common-node?
So, how can I use it? Or can I use mongo-sync directly?
It means you have to follow Common-Node installation instructions and use common-node command instead of the plain-old node to run your program.
As the docs mention, to use it with plain-old node you need to use node-fibers and make queries inside a Fiber.
No way around node-fibers I'm afraid, as mongo-sync is just a "synchronous" wrapper around asynchronous mongo driver, and it's hard to make async js code synchronous without some low level monkey-patching.

is it possible to call lua functions defined in other lua scripts in redis?

I have tried to declare a function without the local keyword and then call that function from another script, but it gives me an error when I run the command.
test = function ()
    return 'test'
end

-- from some other script
test()
Edit:
I can't believe I still have no answer to this. I'll include more details of my setup.
I am using node with the redis-scripto package to load the scripts into redis. Here is an example.
var Scripto = require('redis-scripto');
var scriptManager = new Scripto(redis);
scriptManager.loadFromDir('./lua_scripts');
var keys = [key1, key2];
var values = [val];
scriptManager.run('run_function', keys, values, function(err, result) {
    console.log(err, result);
});
And the lua scripts.
-- ./lua_scripts/dict_2_bulk.lua
-- turns a dictionary table into a bulk reply table
dict2bulk = function (dict)
    local result = {}
    for k, v in pairs(dict) do
        table.insert(result, k)
        table.insert(result, v)
    end
    return result
end

-- run_function.lua
return dict2bulk({ test=1 })
Throws the following error.
[Error: ERR Error running script (call to f_d06f7fd783cc537d535ec59228a18f70fccde663): #enable_strict_lua:14: user_script:1: Script attempted to access unexisting global variable 'dict2bulk' ] undefined
I'm going to be contrary to the accepted answer, because the accepted answer is wrong.
While you can't explicitly define named functions, you can call any script that you can call with EVALSHA. More specifically, all of the Lua scripts that you have explicitly defined via SCRIPT LOAD or implicitly via EVAL are available in the global Lua namespace at f_<sha1 hash> (until/unless you call SCRIPT FLUSH), which you can call any time.
The problem that you run into is that the functions are defined as taking no arguments, and the KEYS and ARGV tables are actually globals. So if you want to be able to communicate between Lua scripts, you either need to mangle your KEYS and ARGV tables, or you need to use the standard Redis keyspace for communication between your functions.
127.0.0.1:6379> script load "return {KEYS[1], ARGV[1]}"
"d006f1a90249474274c76f5be725b8f5804a346b"
127.0.0.1:6379> eval "return f_d006f1a90249474274c76f5be725b8f5804a346b()" 1 "hello" "world"
1) "hello"
2) "world"
127.0.0.1:6379> eval "KEYS[1] = 'blah!'; return f_d006f1a90249474274c76f5be725b8f5804a346b()" 1 "hello" "world"
1) "blah!"
2) "world"
127.0.0.1:6379>
All of this said, this is in complete violation of spec, and is entirely possible to stop working in strange ways if you attempt to run this in a Redis cluster scenario.
Important Notice: See Josiah's answer below. My answer turns out to be wrong, or at the least incomplete. Which makes me very happy of course; it makes Redis all the more flexible.
My incorrect/incomplete answer:
I'm quite sure this is not possible. You are not allowed to use global variables (read the docs), and the script itself gets a local and temporary scope from the Redis Lua engine.
Lua functions automatically set a 'writing' flag behind the scenes if they do any write action. This starts a transaction. If you cascade Lua calls, the bookkeeping in Redis would become very cumbersome, especially when the cascade is executed on a Redis slave. That's why EVAL and EVALSHA are intentionally not made available as valid Redis calls inside a Lua script. Same goes for calling an already 'loaded' Lua function which you are trying to do. What would happen if the slave is rebooted between the load of the first script and the exec of the second script?
What we do to overcome this limitation:
Don't use EVAL, only use SCRIPT LOAD and EVALSHA.
Store the SHA1 inside a redis hash set.
We automated this in our versioning system, so a committed Lua script automatically gets its SHA1 checksum stored in the Redis master, in a hash set, with a logical name. The clients can't use EVAL (on a slave; we disabled EVAL+LOAD in config). But the client can ask for the SHA1 for the next step. Almost all our Lua functions return a SHA1 for the next call.
Hope this helps, TW
Because I'm not one to leave well enough alone, I built a package that allows for simple internal calling semantics. The package (for Python) is available on GitHub.
Long story short, it uses ARGV as a call stack, translates KEYS/ARGV references to _KEYS and _ARGV, uses Redis as a name -> hash mapping internally, and translates CALL.<name>(<keys>, <argv>) to a table append + Redis lookup + Lua function call.
The METHOD.txt file describes what goes on, and all of the regular expressions I used to translate the Lua scripts are available in lua_call.py. Feel free to re-use my semantics.
The use of the function registry makes this very unlikely to work in Redis cluster or any other multi-shard setup, but for single-master applications, it should work for the foreseeable future.
