Internal ArangoDB error on simple REPLACE_NTH query - arangodb

In a completely empty ArangoDB instance (3.7.9), I first create an empty collection, temp.
Then, I run two queries:
INSERT {v:[1]} INTO temp
and
FOR doc IN temp
RETURN REPLACE_NTH(doc.v, 0, 0, 0)
The first query creates a single entry in temp. The second query should then return [[0]], since REPLACE_NTH replaces the element at position 0 with 0. However, instead I get the error:
Query: Expecting Array slice (while executing) (exception location: /work/ArangoDB/arangod/RestHandler/RestCursorHandler.cpp:320). Please report this error to arangodb.com
Is there a reason why this is happening?

This bug was fixed in v3.7.10, see https://github.com/arangodb/arangodb/issues/13632
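For reference, REPLACE_NTH(array, position, value, padding) replaces the element at the given position and fills any gap with the padding value when the position lies past the end of the array. A minimal Python model of those semantics (an illustrative sketch; it ignores AQL's handling of negative positions):

```python
def replace_nth(arr, position, value, padding=None):
    # Python sketch of AQL's REPLACE_NTH (non-negative positions only).
    out = list(arr)
    if position >= len(out):
        # Pad with the default value up to the target position.
        out.extend([padding] * (position - len(out) + 1))
    out[position] = value
    return out

print(replace_nth([1], 0, 0, 0))     # [0] -- what the query above computes per document
print(replace_nth([1, 2], 4, 9, 0))  # [1, 2, 0, 0, 9] -- padding fills the gap
```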

Related

UnsupportedOperationException in gremlin "by step with range" for more than one element

g.V().hasLabel('employee').fold().as("emp", "count")
.select("emp", "count")
.by(range(local, 0, 2).elementMap())
.by(count(local));
The above query works fine when the range interval is 1 or the total number of returned vertices is 1.
If there is more than one vertex, or the range interval is increased, I get an UnsupportedOperationException. How can I solve this?
The same query works without elementMap(), but elementMap() is needed in the application; the vertex id alone is not helpful.
The UnsupportedOperationException occurs because you are essentially creating a list of vertices when the range yields more than one result. The full error is actually going to be something like:
{
"requestId": "c98008d3-104e-4972-9d9c-c78186646cba",
"code": "UnsupportedOperationException",
"detailedMessage": "java.util.LinkedList cannot be cast to org.apache.tinkerpop.gremlin.structure.Element"
}
Which gives us a clue: the query expected an element, like a vertex or an edge, but instead got a list of vertices.
When the range is from 0 to 2, the result will be of the form:
{'emp': [v[3741], v[3742]], 'count': 7}
You will need to do something like
by(range(local, 0, 2).unfold().elementMap().fold())
for the query to work correctly.
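The unfold()…fold() pattern helps because elementMap() operates on a single element, while range(local, 0, 2) hands it a list; unfolding applies elementMap() to each vertex individually and fold() re-collects the results. A plain-Python sketch of that idea (element_map is a hypothetical stand-in, not a Gremlin API):

```python
def element_map(v):
    # Stand-in for Gremlin's elementMap(): valid on a single element only.
    if isinstance(v, list):
        raise TypeError("expected an Element, got a list")
    return {"id": v}

vertices = [3741, 3742]  # the 'emp' list produced by range(local, 0, 2)

# Applying element_map to the whole list fails, like the original query:
try:
    element_map(vertices)
except TypeError as e:
    print(e)

# unfold().elementMap().fold() == map over each element, then re-collect:
print([element_map(v) for v in vertices])
```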
Note that the query can be rewritten using project:
g.V().hasLabel('employee').fold().
project("emp", "count").
by(range(local, 0, 2).unfold().elementMap().fold()).
by(count(local))

RedisJSON and Python3: JSON.get and ft('index').search(Query('#orig_ip:{192\.168\.210\.27}')) returning no results (matching entry in redis)

I am new to Redis. I created an index and am attempting to ingest Zeek logging data, index multiple fields, and then search fields in that index. For the life of me, I cannot get any values to return when searching for the #orig_ip name or when using JSON.GET to retrieve any id.* related fields.
UPDATE: I figured this out after more troubleshooting and am updating here to help anyone else struggling with this problem.
Here is my WRONG code for creating the index:
# Options for index creation
index_def = IndexDefinition(
    index_type=IndexType.JSON,
    prefix=['uid:'],
    score=0.5,
    score_field='doc_score'
)
# Schema definition
schema = (
    TagField('$.orig_l2_addr', as_name='orig_mac'),
    TagField('$.id.orig_h', as_name='orig_ip'),  # Wrong field path
    TagField('$.id.resp_h', as_name='resp_ip'),  # Wrong field path
    NumericField('$.orig_bytes', as_name='orig_bytes'),
    NumericField('$.resp_bytes', as_name='resp_bytes'),
    NumericField('$.ts', as_name='timestamp')
)
r.ft('py_conn_idx').create_index(schema, definition = index_def)
Here is the result I kept getting with the above WRONG schema (no results)
search_result4 = r.ft('py_conn_idx').search(Query('#orig_ip:{192\.168\.210\.27}'))
Results for "#orig_ip:{192\.168\.210\.27}":
0
UPDATE: Working schema definition:
It turns out that even though Zeek only uses the . as part of its field names (rather than to nest objects), the . in the field names was the culprit in my query failures. I needed to reference the fields for the index as follows:
# Schema definition
schema = (
    TagField('$.orig_l2_addr', as_name='orig_mac'),
    TagField('$.["id.orig_h"]', as_name='orig_ip'),  # Fixed field reference
    TagField('$.["id.resp_h"]', as_name='resp_ip'),  # Fixed field reference
    NumericField('$.orig_bytes', as_name='orig_bytes'),
    NumericField('$.resp_bytes', as_name='resp_bytes'),
    NumericField('$.ts', as_name='timestamp')
)
After recreating the index with this schema, I get results with my query:
Results for "#orig_ip:{192\.168\.210\.27}":
Document {'id': 'uid:CPvYfTI4Zb1Afp2l5',....
Thanks to this stackoverflow question for finally walking me to the cause of my troubles: How to get objects value if its name contains dots?
Putting this answer here so this question gets marked as having one. See the updated question/code above!
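One more detail worth noting: in RediSearch tag queries, punctuation such as the dots in an IP address must be backslash-escaped, which is why the query above writes 192\.168\.210\.27. A small helper for building such values, sketched in plain Python (escape_tag is a hypothetical name, not part of redis-py):

```python
import re

def escape_tag(value):
    # RediSearch tag syntax treats punctuation (. , { } - etc.) as
    # separators, so every non-word character must be backslash-escaped.
    return re.sub(r'([^A-Za-z0-9_])', r'\\\1', value)

query = '@orig_ip:{%s}' % escape_tag('192.168.210.27')
print(query)  # @orig_ip:{192\.168\.210\.27}
```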

Cloud Function ArrayRemove not deleting array value, but function runs without error

My cloud function is not deleting the value from the array it is supposed to. It gets all the values correctly, as I can see in my debug console, but the value is still in the array.
My code statement is as follows:
for (const id of membersUnderStructure) {
  const ele = context.params.structure;
  console.log(`Deleting ${ele} From ${id}`);
  await db.collection(`TeamMember`)
    .doc(id)
    .update({teamStructIds: admin.firestore.FieldValue.arrayRemove(ele)});
}
I know the problem is with the admin.firestore.... line, because if I put a constant value there, it updates fine. My console shows the correct values of ele and id as well. The function executes without any error but does NOT delete the value of ele from the teamStructIds array.
I am not sure what I am doing wrong here.
Solution:
The array holds values of type Number, while ele is a String, so nothing ever matches. I just changed it to parseInt(context.params.structure) and it works now.
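In other words, arrayRemove only removes elements that match the supplied value exactly, type included. A rough Python model of that matching rule (array_remove is purely illustrative, not a Firestore API):

```python
def array_remove(arr, value):
    # Rough model: Firestore removes only elements equal in value AND type.
    return [x for x in arr if not (type(x) is type(value) and x == value)]

print(array_remove([101, 102, 103], "102"))       # unchanged -- a string never matches numbers
print(array_remove([101, 102, 103], int("102")))  # parsed to a number, it matches and is removed
```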

Check if bulk is empty in mongoDB

Is there a way to check if Mongodb bulk has some operations to do before calling .execute() on it?
I am pretty sure I don't send any empty objects to insert, but I keep getting this error on one document:
Invalid Operation, No operations in bulk
Here is some code:
bulk.find({"AcctSessionId":insert['AcctSessionId']}).upsert().update({$set:insert});
and insert object looks like this
{ AcctStatusTypeU: '3',
AcctSessionId: '1183628512-105130252',
h323setuptimeU: '<sip:27117929995#41.66.146.252>',
h323connecttimeU: Sun Mar 08 2015 19:30:37 GMT+0100 (CET),
AcmeSessionEgressRealmU: '620',
AcmeSessionIngressRealmU: 'CORE_PSX'
}
I see my objects inserted but still get this error. By the way, this is the Node.js driver I am talking about, and I am using an UnorderedBulkOp to insert documents.
I ran into the same problem. Check bulk.length:
if (bulk.length > 0) {
// run bulk operations
}
You can easily check whether a bulk operation has operations to execute: evaluating
bulk
returns something like this
{ "nInsertOps" : 0, "nUpdateOps" : 1, "nRemoveOps" : 0, "nBatches" : 1 }
The reason why your bulk operation is actually empty is that your h323connecttimeU holds an illegal value – you need to quote the date string.
Furthermore, you are using the bulk operation in the wrong way. It should be:
bulk.find({"AcctSessionId":insert['AcctSessionId']}).upsert().updateOne(insert);

Nodejs node-sqlite3 run callback not working

I am trying to perform a delete of a row in sqlite db using nodejs and node-sqlite3 package.
When I run the delete command and manually check the entries, I can see that the query successfully deleted that row, but I can't seem to write the code that confirms this.
This is the query
db.run("DELETE FROM Table1 WHERE id=? AND username=?", [id, user], function(error) {
console.log(error);
});
Regardless of right or wrong input, it outputs null to the console: if the right details are given, it deletes the row and prints null; if a wrong id and user are given, it still prints null.
Any ideas on what might be wrong?
Thanks
Regarding my previous question: the problem was that I used a fat arrow for the callback declaration. From the JavaScript documentation I discovered that in an arrow function (fat arrow), this is lexically scoped, so this ends up undefined rather than holding the value the library documentation describes. With an ordinary anonymous function, this is bound dynamically, so this.changes has a value.
Now, with the code below, it works:
var sql = 'update recipes set stars = stars + 1 where id = ?';
db.run(sql,
  [
    1 // id = 1 executes the update - if id = 11111 it does nothing
  ], function(err) {
    if (err)
      throw err;
    console.log("VALUE CHANGES: " + this.changes + " - " + util.inspect(this, { showHidden: false, depth: null }));
    if (this.changes == 1)
      console.log("WORK DONE");
    else
      console.log("NOTHING DONE");
  });
Here more explanations: https://github.com/mapbox/node-sqlite3/issues/606
There is nothing wrong with the Node or node-sqlite3 behaviour here.
There are two parts to explain: one regarding Node and the other regarding SQLite.
Node
Your callback is getting called after execution of the statement. So nothing is wrong here, since your callback is indeed being called (as the null output proves).
Sqlite
A DELETE query in SQLite deletes the rows for which the condition in the WHERE clause evaluates to true; otherwise nothing is deleted.
Referring from node-sqlite3 documentation's Database#run api:
callback (optional): If given, it will be called when an error occurs
during any step of the statement preparation or execution, and after
the query was run. If an error occurred, the first (and only)
parameter will be an error object containing the error message. If
execution was successful, the first parameter is null.
So, in your case query execution succeeds without any error, resulting in error argument to callback function null as you see in output.
Further, if you want to check if any row was actually removed, you can use changes property as mentioned in the documentation:
If execution was successful, it contains two properties named "lastID"
and "changes" which contain the value of the last inserted row ID and
the number of rows affected by this query respectively. Note that
"lastID" only contains valid information when the query was a
successfully completed INSERT statement and "changes" only contains
valid information when the query was a successfully completed UPDATE
or DELETE statement. In all other cases, the content of these
properties is inaccurate and should not be used. The .run() function
is the only query method that sets these two values; all other query
methods such as .all() or .get() don't retrieve these values.
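For comparison, the same check in Python's built-in sqlite3 module, where cursor.rowcount plays the role of this.changes (a sketch under that analogy, not the node-sqlite3 API):

```python
import sqlite3

# In-memory database with the same shape as the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (id INTEGER, username TEXT)")
conn.execute("INSERT INTO Table1 VALUES (1, 'alice')")

cur = conn.execute("DELETE FROM Table1 WHERE id=? AND username=?", (1, "alice"))
print(cur.rowcount)  # 1 -- a row matched and was deleted

cur = conn.execute("DELETE FROM Table1 WHERE id=? AND username=?", (99, "bob"))
print(cur.rowcount)  # 0 -- nothing matched, but no error either
```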
Hope it helps...
I had a similar problem: callbacks just would not fire. Period. The problem was that elsewhere I was calling process.exit(1), so my code was exiting before the callbacks had a chance to return.
Search for process.exit, that may (or may not) save you hours of debugging and blaming sqlite :)
Off topic: what bugs my mind is why they all-cap ID in lastID. It's not as if it's an abbreviation like SQL or USA; it stands for Identification, which is one word.