I'm using Node.js with the 'redis-scripto' module, and I'm trying to define a function in Lua:
var redis = require("redis");
var redisClient = redis.createClient("6379","127.0.0.1");
var Scripto = require('redis-scripto');
var scriptManager = new Scripto(redisClient);
var scripts = {'add_script':'function add(i,j) return (i+j) end add(i,j)'};
scriptManager.load(scripts);
scriptManager.run('add_script', [], [1,1], function(err, result){
console.log(err || result);
});
so I'm getting this error:
[Error: ERR Error running script (call to .... #enable_strict_lua:7: user_script:1: Script attempted to create global variable 'add']
I've found that it's a protection mechanism, as explained in this thread:
"The doc-string of scriptingEnableGlobalsProtection indicates that intent is to notify script authors of common mistake (not using local)."
But I still don't understand: where is this scripting.c? And the solution of changing the global tables seems risky to me.
Is there no simple way of defining functions in Redis using Lua?
It looks like the script runner executes your code inside a function, so when you declare function add() it ends up creating a global, which Redis flags as a likely mistake. It will probably also have a similar issue with the undefined arguments in the trailing call to add(i,j).
If this is correct then this should work:
local function add(i,j) return (i+j) end return add(1,1)
and if THAT works, then hopefully this will too, with your arguments passed from the JS:
local function add(i,j) return (i+j) end return add(...)
You have to assign an anonymous function to a variable (using local).
local just_return_it = function(value)
  return value
end

-- result contains the string "hello world"
local result = just_return_it("hello world")
Alternately, you can create a table and add functions as fields on the table. This allows a more object oriented style.
local obj = {}

function obj.name()
  return "my name is Tom"
end

-- result contains "my name is Tom"
local result = obj.name()
Redis requires this to prevent access to the global scope. The entire scripting mechanism is designed to keep persistence out of the scripting layer. Allowing things in the global scope would create an easy way to subvert this, making it much more difficult to track and manage state on the server.
Note: this means you'll have to add the same functions to every script that uses them. They aren't persistent on the server.
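One common way to deal with that (a sketch only, not something redis-scripto does for you; the helper string and script names here are made up) is to keep shared Lua helpers in a single JS string and prepend it to every script body before loading:
// Hypothetical shared Lua helper, prepended to each script before loading.
var sharedLua = 'local function add(i, j) return i + j end ';

var scripts = {
  'add_script': sharedLua + 'return add(ARGV[1], ARGV[2])',
  'add_three_script': sharedLua + 'return add(add(ARGV[1], ARGV[2]), ARGV[3])'
};

scriptManager.load(scripts);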
This worked for me (changing the scripts definition from the question above):
var scripts = {
  'add_script': 'local function add(i,j) return (i+j) end return(add(ARGV[1],ARGV[2]))'
};
To pass variables to the script, use KEYS[] and ARGV[], as explained in the redis-scripto guide. There is also a good example here: "Lua: A Guide for Redis Users".
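For completeness, a sketch of how the run call maps onto KEYS[] and ARGV[] with the script above (the values are just examples):
// First array -> KEYS[] in Lua, second array -> ARGV[] in Lua.
scriptManager.run('add_script', [], [1, 2], function (err, result) {
  console.log(err || result); // expected to print 3
});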
Related
We are in the process of embedding JS in our application, and we will use a few dozen scripts, each assigned to an event. Inside these scripts we provide a minimal callback API,
function onevent(value)
{ // user javascript code here
}
which is called whenever that event happens. The scripts have to have their own globals, since this function always has the same name and we access it from C++ code with
duk_get_global_string(js_context_duk, "onevent");
duk_push_number(js_context_duk, val);
if (duk_pcall(js_context_duk, 1) != 0)
{
    printf("Duk error: %s\n", duk_safe_to_string(js_context_duk, -1));
}
duk_pop(js_context_duk); /* ignore result */
At the same time, we want to allow minimal communication between the scripts, e.g.
Script 1
var a = 1;
function onevent(val)
{
log(a);
}
Script 2
function onevent(val)
{
a++;
}
Is there a way we can achieve this? Maybe by introducing our own 'ueber' global object that is defined once and referenceable everywhere? It should be possible to add properties to this 'ueber' global object from any script, like
Script 1
function onevent(val)
{
log(ueber.a);
}
Script 2
function onevent(val)
{
ueber.a=1;
}
Instead of plain JS files you could use modules. Duktape comes with a code example that implements a Node.js-like module system (including its code isolation). With that in place, you can export the variables that should be shareable.
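As a rough sketch of what the scripts could look like once such a module system is wired up (the module name 'shared' is made up, the log function comes from the question, and this assumes the module cache is shared between the scripts so they receive the same object):
// shared.js - loaded once by the module loader, so both scripts see the same object
module.exports = { a: 0 };

// script1.js
var shared = require('shared');
function onevent(val) {
  log(shared.a);
}

// script2.js
var shared = require('shared');
function onevent(val) {
  shared.a++;
}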
We have an approach that seems to work now. After creating the new context with
duk_push_thread_new_globalenv(master_ctx);
new_ctx = duk_require_context(master_ctx, -1);
duk_copy_element_reference(master_ctx, new_ctx, "ueber");
we issue this call sequence for every property/object/function created in the main context:
void duk_copy_element_reference(duk_context* src, duk_context* dst, const char* element)
{
    duk_get_global_string(src, element);
    duk_require_stack(dst, 1);
    duk_xcopy_top(dst, src, 1);
    duk_put_global_string(dst, element);
}
It seems to work (because everything lives in the same heap and everything is single-threaded). Maybe someone with deeper insight into Duktape can comment on this: is this a feasible solution with no side effects?
Edit: marking this as the answer. It works as expected, with no memory leaks or other issues.
I have a script setupDB.js that runs asynchronously and is intended to be called from the command line. Recently, I added test cases to my project, some of which require a database to be set up (and thus the execution of the aforementioned script).
Now, I would like to know when the script has finished doing its thing. At the moment I'm simply waiting for a few seconds after requiring setupDB.js before I start my tests, which is obviously a bad idea.
The problem with simply exporting a function with a callback parameter is that the script must still be runnable without any overhead, meaning no command-line arguments, no additional function calls, etc., since it is part of a bigger build process.
Do you have any suggestions for a better approach?
I was also looking for this recently and came across a somewhat related question, "Node.JS: Detect if called through require or directly by command line", which has an answer that helped me build something like the following, where the export is only set up if the file is used as a module, and the CLI library is only required if it is run as a script.
function doSomething (opts) {
}

/*
 * Based on
 * https://stackoverflow.com/a/46962952/7665043
 */
function isScript () {
  return require.main && require.main.filename === /\((.*):\d+:\d+\)$/.exec((new Error()).stack.split('\n')[ 2 ])[ 1 ]
}

if (isScript()) {
  const cli = require('some CLI library')
  const opts = cli.parseCLISomehow()
  doSomething(opts)
} else {
  module.exports = {
    doSomething
  }
}
There may be some reason that this is not a good idea, but I am not an expert.
I have now handled it this way: I export a function that does the setup. At the beginning I check whether the script has been called from the command line, and if so, I simply call the function. At the same time, I can also require it from another module and pass a callback.
if (require.main === module) {
  // Called from the command line
  runSetup(function (err, res) {
    // do callback handling
  });
}

function runSetup(callback) {
  // do the setup, then invoke callback(err, result) when finished
}

exports.runSetup = runSetup;
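With that in place, a test can require the module and wait for the callback before running. A minimal sketch (the relative path and the mocha-style before() hook are just examples):
// e.g. in a mocha before() hook
var setupDB = require('./setupDB');

before(function (done) {
  setupDB.runSetup(function (err) {
    done(err); // tests start only after the setup has finished (or failed)
  });
});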
The make-runnable npm module can also help with this.
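A sketch of the make-runnable approach, based on its documented usage (the exact output it prints may differ):
// at the very bottom of setupDB.js, after exports.runSetup = runSetup;
require('make-runnable');

// then, from the command line:
//   node setupDB.js runSetup
// which invokes the exported runSetup and prints its result.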
I need one simple thing:
var Base = function(module) {
  this.outsideMethod = function(arg1) {
    // run method in new context - sandbox
    return vm.runInNewContext(module.insideMethod, arg1);
  };
};
Is something like this possible in Node.js? Thanks very much.
If the insideMethod function does not call or use functions/values from outside the context it will run in, then yes.
You can convert any function in Javascript to a string.
Doing vm.runInNewContext('(' + module.insideMethod + ')(' + JSON.stringify(arg1) + ')') could be what you want.
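A minimal sketch of that idea, fitted to the question's Base constructor (module.insideMethod and arg1 come from the question; error handling is omitted):
var vm = require('vm');

var Base = function (module) {
  this.outsideMethod = function (arg1) {
    // Stringify the function and its argument, then evaluate the
    // resulting call expression in a fresh, empty context.
    var code = '(' + module.insideMethod + ')(' + JSON.stringify(arg1) + ')';
    return vm.runInNewContext(code, {});
  };
};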
I figured out how to run JavaScript code on the MongoDB server from a Node.js client:
db.eval("function(x){ return x*10; }", 1, function (err, retval) {
  console.log('err: ' + err);
  console.log('retval: ' + retval);
});
And that works fine. But the docs say that db.eval() issues a write lock, so that nothing else can read or write to the database. I do not want that.
It also says that eval has no such limitation, but I do not know where to find it. From the way they're talking about it, it seems as if regular eval is only available in the mongo shell, and so not from the client side.
So: how can I run these stored procedures on the mongodb server without blocking everything?
You can pass an object with the field nolock set to true as an optional third parameter to eval:
db.eval('function (x) {return x*10; }', [1], {nolock:true}, function(err, retval) {
  console.log('err: ' + err);
  console.log('retval: ' + retval);
});
Note that this prevents eval from setting an obligatory write-lock, but it doesn't prevent any operations inside your function from creating write-locks on their own.
Source: the documentation.
Note that the term "stored procedure" is wrong in this case. A stored procedure refers to code which is stored in the database itself, not delivered by the application layer. MongoDB can also do this, utilizing the special collection db.system.js, but doing so is discouraged: http://docs.mongodb.org/manual/applications/server-side-javascript/#storing-functions-server-side
By the way: MongoDB wasn't designed for stored procedures. It is usually recommended to implement any advanced logic in the application layer. The practice of implementing even trivial operations as stored procedures, as is sometimes done on SQL databases, is discouraged.
This is the way to store your functions on the server side, and you can use them as shown below:
db.system.js.save( { _id : "myAddFunction", value : function (x, y) { return x + y; } } );
db.system.js.find()
{ "_id" : "myAddFunction", "value" : function (x,y){ return x + y; } }
db.eval( "myAddFunction(1, 2)" )
3
I'm using Redis in my application, and one thing is not clear to me. I save an object with a randomly generated string as its key, and I would like to check whether that key exists. I am planning to use a while loop, but I am not sure how to structure it with Redis. If I only wanted to check once, I would do:
redisClient.get("xPQ", function(err,result){
if(result==null)
exists = false
});
But I would like to use the while loop as:
while(exists == false)
However I cannot build the code structure in my head. Would the while be inside the function or outside the function?
In general, you shouldn't check for existence of a key on the client side. It leads to race conditions. For example, another thread could insert the key after the first thread checked for its presence.
You should use the commands ending with NX, for example SETNX and HSETNX. These will insert the key only if it doesn't already exist, and they are guaranteed to be atomic.
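For example, a sketch with node_redis (the key and value here are placeholders):
// Reply is 1 if the key was created, 0 if it already existed.
redisClient.setnx('xPQ', 'some value', function (err, wasSet) {
  if (err) throw err;
  if (wasSet === 1) {
    // we own the key; safe to proceed
  } else {
    // somebody else created it first
  }
});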
I do not understand why you need to implement active polling to check whether a key exists (there are much better ways to handle this kind of situation), but I will try to answer the question.
You should not use a while loop at all (inside or outside the function). Because of the asynchronous nature of node.js, these loops are better implemented using tail recursion. Here is an example:
var redis = require('redis')
var rc = redis.createClient(6379, 'localhost');
function wait_for_key( key, callback ) {
  rc.get( key, function(err, result) {
    if ( result == null ) {
      console.log( "waiting ..." );
      setTimeout( function() {
        wait_for_key(key, callback);
      }, 100 );
    } else {
      callback(key, result);
    }
  });
}

wait_for_key( "xPQ", function(key, value) {
  console.log( key + " exists and its value is: " + value );
});
There are multiple ways to simplify these expressions using dedicated libraries (using continuation passing style, or fibers). For instance you may want to check the whilst and until functions of the async.js package.
https://github.com/caolan/async
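For example, a sketch of the same polling loop with async.until, assuming the classic signature where the test function is synchronous:
var async = require('async');

var value = null;
async.until(
  function () { return value !== null; },   // stop condition
  function (next) {                          // one polling step
    rc.get('xPQ', function (err, result) {
      if (err) return next(err);
      value = result;
      if (value === null) return setTimeout(next, 100); // try again shortly
      next();
    });
  },
  function (err) {
    if (err) throw err;
    console.log('xPQ exists and its value is: ' + value);
  }
);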