RedisJSON add TTL on specific Value in JSON Object - node.js

Hey, I'm working with RedisJSON in Node.js, using the npm package redis 4.3.1.
My keys have the form (userID):(Country), with a JSON value.
Example
data = {
  "info": {
    "name": "test",
    "email": "test@test.test"
  },
  "suppliers": {
    "s1": 1,
    "s2": 22
  },
  "suppliersCap": {
    "s1": 0,
    "s2": 10
  }
}
redis.json.set('22:AU', '.', data);
Now I try to add a TTL of 5 minutes on a specific key inside the JSON, for example on this key:
22:AU .data.suppliersCap.s2
After 5 minutes the cap should be 0, but this does not work:
redis.json.set('22:AU', '.data.suppliersCap.s2', {
  EX: 300
});

You cannot set a TTL on an inner element of a RedisJSON object.
Note: it can only be done on an entire RedisJSON object.

NX and XX are the only valid modifiers for the command:
redisClient.json.set(
  key: string,
  path: string,
  json: RedisJSON,
  options?: NX | XX | undefined
): Promise<"OK" | null>
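If the field needs to go back to 0 after 5 minutes anyway, here is a minimal sketch of the two options, assuming node-redis 4.x and the key from the question (the timed reset is plain application code, not a Redis feature):

const { createClient } = require('redis');

(async () => {
  const client = createClient();
  await client.connect();

  // `data` is the object from the question.
  await client.json.set('22:AU', '$', data);

  // Option 1: a TTL can only be applied to the whole key, not to an inner path:
  // await client.expire('22:AU', 300);

  // Option 2 (application-level workaround): reset just suppliersCap.s2 after 5 minutes.
  setTimeout(() => {
    client.json.set('22:AU', '$.suppliersCap.s2', 0).catch(console.error);
  }, 5 * 60 * 1000);
})();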

Related

Sort JSON in descending order by value

Here is the JSON data to sort:
{ "designer": 5, "tester": 8, "developer": 10, "backend": 7 }
I would like the following sorted order:
{ "developer": 10, "tester": 8, "backend": 7, "designer": 5 }
In order to sort JSON objects, we have to take the help of arrays. Arrays maintain insertion order and provide a native sort() method to sort their elements.
The following sortByValue() function takes a JSON object as input and returns a JSON array sorted by value.
function sortByValue(jsObj) {
  var sortedArray = [];
  // Collect [key, value] pairs, since plain objects do not guarantee ordering.
  for (var i in jsObj) {
    sortedArray.push([i, jsObj[i]]);
  }
  // Sort ascending by value, then reverse to get descending order.
  sortedArray.sort(function (a, b) { return a[1] - b[1]; });
  sortedArray.reverse();
  return sortedArray;
}

var jsObj = {};
jsObj.designer = 5;
jsObj.tester = 8;
jsObj.developer = 10;
jsObj.backend = 7;

var sortedByValueJSONArray = sortByValue(jsObj);
console.table(sortedByValueJSONArray);
We create a JSON array of [jsonKey, jsonValue] pairs from the JSON object, then sort that array using sort(). Check out the following example output:
------------------------------
| 0 | 'developer' | 10 |
| 1 | 'tester'    | 8  |
| 2 | 'backend'   | 7  |
| 3 | 'designer'  | 5  |
If the order of the data is relevant for your case, you should use an array, not an object.
e.g.
[
{ "name": "developer", "value": 10 },
{ "name": "tester", "value": 8 },
{ "name": "backend", "value": 7},
{ "name": "designer", "value": 5 }
]
Ordering the elements of the array is as simple as JSON.parse(data).sort(({value: a}, {value: b}) => b - a) (in JavaScript, ES6).
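For example, a small runnable sketch of that array-based approach (the data string below is just the example above, unsorted):

const data = '[{"name":"designer","value":5},{"name":"tester","value":8},{"name":"developer","value":10},{"name":"backend","value":7}]';

// Parse the JSON string and sort the resulting array in descending order by value.
const sorted = JSON.parse(data).sort(({ value: a }, { value: b }) => b - a);

console.log(sorted.map(({ name, value }) => name + ': ' + value).join(', '));
// developer: 10, tester: 8, backend: 7, designer: 5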

AQL filter by array of IDs

If I need to filter by an array of IDs, how would I do that using bind parameters?
The documentation does not provide any hints on that.
for c in commit
filter c.hash in ['b0a3', '9f0eb', 'f037a0']
return c
Updating the answer to deal with the bindings reference that I missed.
LET commit = [
  { name: "111", hash: "b0a3" },
  { name: "222", hash: "9f0eb" },
  { name: "333", hash: "asdf" },
  { name: "444", hash: "qwer" },
  { name: "555", hash: "f037a0" }
]
FOR c IN commit
  FILTER c.hash IN @hashes
  RETURN c
The key is that when you send the bind parameter @hashes, it needs to be an array, not a string that contains an array.
If you use the AQL Query tool via the ArangoDB Admin Tool, make sure you click the "JSON" button in the top right to ensure the parameter hashes has the value
["b0a3", "9f0eb", "f037a0"] and not
"['b0a3', '9f0eb', 'f037a0']"
If you want to send a string as the parameter, such as "b0a3","9f0eb","f037a0", i.e. { "hashes": "\"b0a3\",\"9f0eb\",\"f037a0\"" } as the bind parameter, then you can split the string into an array like this:
LET commit = [
  { name: "111", hash: "b0a3" },
  { name: "222", hash: "9f0eb" },
  { name: "333", hash: "asdf" },
  { name: "444", hash: "qwer" },
  { name: "555", hash: "f037a0" }
]
FOR c IN commit
  FILTER c.hash IN REMOVE_VALUE(SPLIT(@hashes, ['","', '"']), "")
  RETURN c
This example takes the string @hashes and SPLITs its contents using "," and " as delimiters. This converts the input variable into an array, and the query then works as expected. It will also hit an index on the hash attribute.
The delimiters are enclosed with single quote marks to avoid escaping, which would also be possible but less readable: ["\",\"", "\""]
Note that "," is listed first as delimiter, so that the result of the SPLIT is
[ "", "9f0eb", "b0a3", "f037a0" ] instead of
[ "", ",", "9f0eb", "b0a3", "f037a0" ].
The empty string element caused by the first double quote mark in the bind parameter value, which would make the query return commit records with an empty string as hash, can be eliminated with REMOVE_VALUE.
The recommended way is to pass ["b0a3", "9f0eb", "f037a0"] as array however, as shown at the beginning.
Here is another example, filtering a graph traversal by an array of IDs:
WITH person
FOR id IN ["person/4201061993070840084011", "person/1001230840198901011999", "person/4201008406196506156918"]
  FOR v, e, p IN 1..1 ANY id
    relation_samefamily, stay, relation_internetbar, relation_flight, relation_train
    OPTIONS { bfs: true }
    FILTER (p.edges[*]._from ALL IN ["person/42010619930708400840084011", "person/10012310840989084001011999", "person/4201060840196506156918"]
            AND p.edges[*]._to ALL IN ["person/4201061993070808404011", "person/1001231908408901011999", "person/4200840106196506156918"])
    RETURN {v, e}
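For completeness, a minimal Node.js sketch of passing the array bind parameter with the arangojs driver (the connection settings and collection name are assumptions):

const { Database } = require('arangojs');

const db = new Database({ url: 'http://localhost:8529' });

async function findCommits(hashes) {
  // Pass the array itself as the bind parameter, not a string that contains an array.
  const cursor = await db.query(
    'FOR c IN commit FILTER c.hash IN @hashes RETURN c',
    { hashes }
  );
  return cursor.all();
}

findCommits(['b0a3', '9f0eb', 'f037a0']).then(console.log).catch(console.error);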

id cannot be used in GraphQL where clause?

{
  members {
    id
    lastName
  }
}
When I try to get the data from the members table, I get the following response.
{ "data": {
"members": [
{
"id": "TWVtYmVyOjE=",
"lastName": "temp"
},
{
"id": "TWVtYmVyOjI=",
"lastName": "temp2"
}
] } }
However, when I try to update a row with an 'id' where clause, the console shows an error:
mutation {
  updateMembers(
    input: {
      values: {
        email: "testing@test.com"
      },
      where: {
        id: 3
      }
    }
  ) {
    affectedCount
    clientMutationId
  }
}
"message": "Unknown column 'NaN' in 'where clause'",
Some of the results above confused me.
Why is the returned id not a numeric value? In the DB it is a number.
When I update the record, can I use the numeric id value in the where clause?
I am using Node.js, apollo-client and graphql-sequelize-crud.
TL;DR: check out my possibly not Relay-compatible PR here: https://github.com/Glavin001/graphql-sequelize-crud/pull/30
Basically, the internal source code calls the fromGlobalId API from graphql-relay but passes a primitive value into it (e.g. your 3), causing it to return undefined. Hence I removed that call from the source code and made a pull request.
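For reference, the ids returned by the query are Relay global IDs, i.e. base64 of "Type:id", which is why they are not plain numbers. A quick Node.js sketch (the Member type name comes from decoding the ids above):

// Decode the opaque id returned by the query:
console.log(Buffer.from('TWVtYmVyOjE=', 'base64').toString('utf8')); // "Member:1"

// Build the global id for the numeric database id 3:
console.log(Buffer.from('Member:3', 'utf8').toString('base64'));     // "TWVtYmVyOjM="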
P.S. This bug, which took me two hours to solve, failed in the build; I think this solution may not be reliable enough.
Please try this
mutation {
  updateMembers(
    input: {
      values: {
        email: "testing@test.com"
      },
      where: {
        id: "3"
      }
    }
  ) {
    affectedCount
    clientMutationId
  }
}

CouchDB group view to keep string key and value

I have a view with documents in the form of {key:[year,month,day,string],value:int}:
{
  "rows": [
    { "key": [2016, 4, 30, "String1"], "value": 20 },
    { "key": [2016, 4, 30, "String2"], "value": 7 },
    { "key": [2016, 4, 30, "String3"], "value": 13 },
    { "key": [2016, 5, 1, "String1"], "value": 10 },
    { "key": [2016, 5, 1, "String4"], "value": 12 },
    { "key": [2016, 5, 2, "String1"], "value": 3 }
  ]
}
From this I use startkey and endkey to get a range of values by date. My issue is then grouping the returned documents by the key string and summing the value int. The rest of the key may or may not be present; it does not matter. So far, with group levels, I have only been able to sum values per date key.
When rendered in a table, I currently get one summed value per date key; what I want is one summed value per string, summed across the whole date range.
So I ended up reducing in my controller with javascript like:
$scope.reduceMap = function (rows) {
var reducedMap = {};
var sortableArray = [];
for (var i = 0; i < rows.length; i++) {
var key = rows[i].key[3];
if (!reducedMap.hasOwnProperty(key)) {
reducedMap[key] = {key: key, value: rows[i].value};
} else {
reducedMap[key] = {key: key, value: rows[i].value + reducedMap[key].value};
}
}
for (var k in reducedMap) {
sortableArray.push(reducedMap[k]);
}
return sortableArray;
};
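A hypothetical usage sketch, assuming data.rows holds the view result shown in the question:

// Group and sum by string, then sort for display.
var grouped = $scope.reduceMap(data.rows);
grouped.sort(function (a, b) { return b.value - a.value; });
// => [{ key: 'String1', value: 33 }, { key: 'String3', value: 13 },
//     { key: 'String4', value: 12 }, { key: 'String2', value: 7 }]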
Since I asked for a CouchDB answer, I will leave this here but not accept it.
If you emit the view's key as [string, year, month, day] and use the built-in reduce function _sum, then the following URL example gives you the desired result:
http://localhost:5984/text/_design/search/_view/by_text?startkey=["",2016,1,1]&endkey=[{},2016,1,1]&group_level=1
Your date search criteria are specified as normal, but the first part of the key is basically any string. Grouping at level 1 and reducing with _sum then gives you the sum of the values within the date range, grouped by string.
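A minimal sketch of such a design document (the field names doc.text, doc.year, doc.month and doc.day are assumptions about your documents):

{
  "_id": "_design/search",
  "views": {
    "by_text": {
      "map": "function (doc) { emit([doc.text, doc.year, doc.month, doc.day], doc.value); }",
      "reduce": "_sum"
    }
  }
}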

Azure Storage Table Query - result vs response

I'm using Node.js as my server and have an account on Azure where my storage table resides. I'm retrieving all records for a specific partition by using the following:
var query = new azure.TableQuery().where('PartitionKey eq ?', username);
tableSvc.queryEntities(localTableName, query, null, function (error, result, response) {
});
When this call comes back, I want to access the values of the rest of the table's fields. But when I do that using result.entries, it looks kind of weird. Alternatively, I think I can access the results via response.body.value.userID.
Here is what the structure of "result.entries" vs the "response" object looks like:
result.entries :
[ { PartitionKey: { '$': 'Edm.String', _: '048tfbne' },
RowKey: { '$': 'Edm.String', _: '145610564488450166' },
Timestamp:
{ '$': 'Edm.DateTime',
_: Mon Feb 22 2016 01:47:26 GMT+0000 (UTC) },
username: { _: '048tfbne' },
userID: { _: '145610564488450166' },
deleteAfter: { _: 'not set yet' },
'.metadata': { etag: 'W/"datetime\'2016-02-22T01%3A47%3A26.4394133Z\'"' } } ]
response :
{ isSuccessful: true,
statusCode: 200,
body:
{ 'odata.metadata': 'https://photoshareuserdata.table.core.windows.net/$metadata#userIdentifier',
value:
[ { 'odata.etag': 'W/"datetime\'2016-02-22T01%3A47%3A26.4394133Z\'"',
PartitionKey: '048tfbne',
RowKey: '145610564488450166',
Timestamp: '2016-02-22T01:47:26.4394133Z',
username: '048tfbne',
userID: '145610564488450166',
deleteAfter: 'not set yet' } ] },
I thought result.entries would be a better way to access the records, but I am sort of weirded out by the nested objects and Edm.String here.
Which is the better way to access the records?
The Table Node Sample shows how to access entities in a table as the result of a query. See the method "runPageQuery".
Actually, according to the official section Query a set of entities, there is a paragraph as follows:
If successful, result.entries will contain an array of entities that match the query. If the query was unable to return all entities, result.continuationToken will be non-null and can be used as the third parameter of queryEntities to retrieve more results.
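A hedged paging sketch based on that paragraph (the queryAll helper name is made up; it keeps calling queryEntities with result.continuationToken until no token is returned):

function queryAll(tableSvc, tableName, query, token, entries, done) {
  tableSvc.queryEntities(tableName, query, token, function (error, result) {
    if (error) { return done(error); }
    entries = entries.concat(result.entries);
    if (result.continuationToken) {
      // More results available: pass the token back in as the third parameter.
      return queryAll(tableSvc, tableName, query, result.continuationToken, entries, done);
    }
    done(null, entries);
  });
}

// queryAll(tableSvc, localTableName, query, null, [], function (err, allEntries) { ... });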
We can also refer to the sample in the Azure-storage-for-node repository on GitHub, which shows the answer.
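For example, a minimal sketch of reading the values from result.entries (field names taken from the question; each property is wrapped as { _: value, '$': EdmType }):

tableSvc.queryEntities(localTableName, query, null, function (error, result, response) {
  if (error) {
    return console.error(error);
  }
  result.entries.forEach(function (entity) {
    // The raw value of each column sits under the "_" property.
    var username = entity.username._;
    var userID = entity.userID._;
    var deleteAfter = entity.deleteAfter._;
    console.log(username, userID, deleteAfter);
  });
});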
