How can I insert (store) data like this (node.js + redis):
var timestamp = new Date().getTime();
client.hmset('room:'+room, {
'enabled' : true,
timestamp : {
'g1' : 0,
'g2' : 0
}
});
and how, after that, can I increment g1 or g2?
P.S. When I insert the timestamp this way, redis-cli shows the literal field name timestamp instead of the UNIX time.
You're looking for a combination of HMGET and HMSET. According to the docs:
HMGET key field [field ...]
Returns the values associated with the specified fields in the hash
stored at key.
For every field that does not exist in the hash, a nil value is
returned. Because non-existing keys are treated as empty hashes,
running HMGET against a non-existing key will return a list of nil
values.
HMSET key field value [field value ...]
Sets the specified fields to their respective values in the hash
stored at key. This command overwrites any existing fields in the
hash. If key does not exist, a new key holding a hash is created.
What you want to do, then, is retrieve your value from the hash, perform any operations on it that seem appropriate, and save over the previous value.
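As a rough sketch of that round trip with the node_redis client (reusing room and timestamp from the question, and the flat field naming suggested at the end of this answer):
// Read the current value, adjust it, and write it back.
client.hmget('room:' + room, timestamp + '_g1', function (err, values) {
    if (err) throw err;
    var g1 = parseInt(values[0], 10) || 0;   // missing fields come back as null
    client.hmset('room:' + room, timestamp + '_g1', g1 + 1);
});
Note that this read-modify-write is not atomic, which is one more reason to prefer the approach below.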
Another, possibly better, solution would be to use HINCRBY. Provided you stick with a timestamp, you can increment the field without performing a get operation:
HINCRBY key field increment
Increments the number stored at field in the hash stored at key by
increment. If key does not exist, a new key holding a hash is created.
If field does not exist the value is set to 0 before the operation is
performed.
The range of values supported by HINCRBY is limited to 64 bit signed
integers.
You will probably need to restructure your hash for this to work, though, unless there is a way to drill down to your g1/g2 fields (Stack Overflow community, feel free to edit this answer or comment on it if you know a way). A structure like this should work:
{
enabled : true,
timestamp_g1 : 0,
timestamp_g2 : 0
}
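With that flat layout, a minimal node_redis sketch (the room value and field names are illustrative) could look like this:
var redis = require('redis');
var client = redis.createClient();

var room = 'lobby';                          // hypothetical room id
var timestamp = new Date().getTime();

var fields = { 'enabled': 'true' };          // hash values are stored as strings
fields[timestamp + '_g1'] = 0;
fields[timestamp + '_g2'] = 0;
client.hmset('room:' + room, fields);

// Increment g1 atomically later, no read required:
client.hincrby('room:' + room, timestamp + '_g1', 1, function (err, newValue) {
    if (err) throw err;
    console.log('g1 is now', newValue);
});
This also sidesteps your P.S.: because the field name is built from the value of the timestamp variable, redis-cli will show the UNIX time rather than the literal word timestamp.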
Hi, I would like to insert random test data into an edge collection called Transaction, with the fields _id, Amount and TransferType filled with random data. I have written the code below, but it shows a syntax error.
FOR i IN 1..30000
INSERT {
_id: CONCAT('Transaction/', i),
Amount:RAND(),
Time:Rand(DATE_TIMESTAMP),
i > 1000 || u.Type_of_Transfer == "NEFT" ? u.Type_of_Transfer == "IMPS"
} INTO Transaction OPTIONS { ignoreErrors: true }
Your code has multiple issues:
When you are creating a new document, you can either omit the _key attribute and ArangoDB will create one for you, or specify one yourself as a string. An _id attribute in the document will be ignored.
RAND() produces a random number between 0 and 1, so it needs to be multiplied to get it into the range you want; you might also need to round it if you need integer values.
DATE_TIMESTAMP is a function, and you have given it as a parameter to the RAND() function, which takes no parameters. But because a timestamp is just a number (milliseconds since 1970-01-01 00:00 UTC), DATE_TIMESTAMP is actually not needed; the only thing you need is the random number generation shifted to a range that makes sense (i.e. not in the 1970s).
The i > 1000 ... line is something I could only guess at. The key for the JSON object is missing, you are referencing a u variable that is not defined anywhere, and I only see the first two parts of a ternary expression (cond ? true_value : false_value); the : part is missing. My best guess is that you wanted to create a Type_of_Transfer key with the value "NEFT" when i > 1000 and "IMPS" when i <= 1000.
So, I rewrote your AQL and tested it:
FOR i IN 1..30000
INSERT {
_key: TO_STRING(i),
Amount: RAND()*1000,
Time: ROUND(RAND()*100000000+1603031645000),
Type_of_Transfer: i > 1000 ? "NEFT" : "IMPS"
} INTO Transaction OPTIONS { ignoreErrors: true }
I'm trying to set some new fields in a nested dict within a Firestore document, which results in the data being overwritten.
Here's where I write the first part of the info I need:
upd = {
"idOffer": {
<offerId> : {
"ref" : <ref>,
"value" : <value>
}
}
}
<documentRef>.update(upd)
So output here is something like:
<documentid>:{idOffer:{<offerId>:{ref:<ref>, value:<value>}}}
Then I use this code to add some fields to the current <offerId> nested data:
approval = {
"isApproved" : <bool>,
"dateApproved" : <date>,
"fullApproval" : <bool>
}
<documentRef>.update({
"idOffer.<offerId>" : approval
})
From which I expect to get:
<documentid>:{idOffer:{<offerId>:{ref:<ref>, value:<value>, isApproved:<bool>,dateApproved:<date>,fullApproval:<bool>}}}
But I end up with:
<documentid>:{idOffer:{<offerId>:{isApproved:<bool>,dateApproved:<date>,fullApproval:<bool>}}}
Note: I use <> to refer to dynamic data, like document Ids or References.
When you call update with a dictionary (or map, or object, or whatever key/value pair structure your language uses), the entire set of data behind each given top-level key is going to be replaced. So, if you call update with a key of idOffer.<offerId>, then everything under that key is going to be replaced, while every other child key at the idOffer level will remain unchanged.
If you don't want to replace the entire object behind the key, then be more specific about which children you'd like to update. In your example, instead of updating a single idOffer.<offerId> key, specify three keys for the nested children:
idOffer.<offerId>.isApproved
idOffer.<offerId>.dateApproved
idOffer.<offerId>.fullApproval
That is to say, the dictionary you pass should have three keyed entries like this at the top level, rather than a single key of idOffer.<offerId>.
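A minimal sketch of that with the Node.js Admin SDK (the offerId variable and the field values here are placeholders, not your actual data):
async function approveOffer(documentRef, offerId) {
    await documentRef.update({
        // Dotted paths merge into the nested map instead of replacing it,
        // so ref and value under idOffer.<offerId> stay untouched.
        [`idOffer.${offerId}.isApproved`]: true,
        [`idOffer.${offerId}.dateApproved`]: new Date(),
        [`idOffer.${offerId}.fullApproval`]: false
    });
}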
Is there a way to get the index of the results within an aql query?
Something like
FOR user IN Users sort user.age DESC RETURN {id:user._id, order:{index?}}
If you want to enumerate the result set and store these numbers in an order attribute, then this is possible with the following AQL query:
LET sorted_ids = (
FOR user IN Users
SORT user.age DESC
RETURN user._key
)
FOR i IN 0..LENGTH(sorted_ids)-1
UPDATE sorted_ids[i] WITH { order: i+1 } IN Users
RETURN NEW
A subquery is used to sort users by age and return an array of document keys. Then a loop over a numeric range from the first to the last index of that array is used to iterate over its elements, which gives you the desired order value (minus 1) as the variable i. The current array element is a document key, which is used to update the user document with an order attribute.
The above query can be useful for a one-off computation of an order attribute. If your data changes a lot, however, it will quickly become stale, and you may want to move this to the client side instead.
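If you do move it to the client, a rough sketch with the arangojs driver (connection settings omitted/illustrative) might be:
const { Database } = require('arangojs');
const db = new Database({ url: 'http://localhost:8529' });

async function usersWithOrder() {
    const cursor = await db.query(
        'FOR user IN Users SORT user.age DESC RETURN { id: user._id }'
    );
    const users = await cursor.all();
    // Enumerate on the client instead of persisting an order attribute.
    return users.map((u, i) => ({ id: u.id, order: i + 1 }));
}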
For a related discussion see AQL: Counter / enumerator
If I understand your question correctly (and feel free to correct me), this is what you're looking for:
FOR user IN Users
SORT user.age DESC
RETURN {
id: user._id,
order: user._key
}
The _key is the primary key in ArangoDB.
If, however, you're looking for the order in which data was entered (chronological order), then you will have to set the key on your inserts and/or create a date/time attribute and filter using that.
Edit:
Upon doing some research, I believe this link might be of use to you for auto-incrementing the keys: https://www.arangodb.com/2013/03/auto-increment-values-in-arangodb/
I have an array of objects that I want to store in Redis. I can break up the array part and store the elements as objects, but I am not getting how I can get something like
{0} : {"foo" : "bar", "qux" : "doe"}, {1} : {"name" : "Saras", "age" : 23}
and then search the db based on name and get the requested key back. I need something like this, but can't come close to getting it right.
incr id //correct
(integer) 3
get id //correct
"3"
SADD id {"name" : "Saras"} //wrong
SADD myset {"name" : "Saras"} //correct
(integer) 1
First is getting this part right.
Second is somehow getting the key from the value, i.e.
if name === "Saras"
then key = 1
Which I find tough. Or I can store it directly as an array of objects and use a simple for loop:
for (var i = 0; i < userCache.users.length; i++) {
if (userCache.users[i].userId == userId && userCache.users[i].deviceId == deviceId) {
return i;
}
}
Kindly suggest which route is best, with some implementation.
What I found to work was using a unique identifier as the key, stringifying the whole object when storing the data, and applying JSON.parse when extracting it.
Example code:
client
.setAsync(obj.deviceId.toString(), JSON.stringify(obj))
.then((doc) => {
return client.getAsync(obj.deviceId.toString());
})
.then((doc) => {
return JSON.parse(doc);
}).catch((err) => {
return err;
});
Stringifying and then parsing back is a computationally heavy operation, though, and will block the Node.js event loop if the JSON becomes large. I am ready to take that hit for the lower complexity, because I know my JSON won't be huge, but it needs to be kept in mind when going with this approach.
Redis is a pretty simple key-value store. Yes, there are other data structures like sets, but it has VERY limited query capabilities. For example, if you want to find data by name, then you would have to do something like this:
SET Name "serialized data of object"
SET Name2 "serialized data of object2"
SET Name3 "serialized data of object3"
then:
GET Name
would return data.
Of course this means that you can't store two entries with the same name.
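As a rough Node.js sketch of that keyed-by-name approach (the user:name: key prefix is my own convention, not something from your code):
var redis = require('redis');
var client = redis.createClient();

function saveUser(user, callback) {
    // The name itself is part of the key, so a lookup by name is a single GET.
    client.set('user:name:' + user.name, JSON.stringify(user), callback);
}

function findUserByName(name, callback) {
    client.get('user:name:' + name, function (err, doc) {
        if (err) return callback(err);
        callback(null, doc ? JSON.parse(doc) : null);
    });
}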
You can do limited text matching on keys using: http://redis.io/commands/scan
To summarize: I think you should use another tool for complex queries.
The first issue you have, SADD id {"name" : "Saras"} //wrong, is obvious: the id key is not of type set, it is of type string.
In Redis, the only access point to data is through its key.
As kiss said, perhaps you should be looking for other tools.
As you know, the reduce function in CouchDB views looks like this:
function (key, values, rereduce) {
return sum(values);
}
where the definition of the first two arguments is as follows:
when rereduce is false, then:
key will be an array whose elements are arrays of the form [key,id], where key is a key emitted by the map function and id is that of the document from which the key was generated.
values will be an array of the values emitted for the respective elements in keys.
My question is: when rereduce is false, are there any guarantees regarding the order of the key (or values) array elements? My gut feeling (based on the Reduce vs Rereduce chapter) is that keys, and respectively values, should be ordered, but I do not see any direct confirmation.
Any ideas?
Thank you!
From https://cloudant.com/for-developers/all_docs/
Sort Order
All indexes are sorted by their key. The sort order is:
null
false
true
numbers
text, case sensitive - lower case first
arrays, sorted element by element
objects
The full specification is documented in the CouchDB Wiki.
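As a hand-made illustration of those rules (this is not actual CouchDB output), a mixed set of keys would collate like this:
[
    null,
    false,
    true,
    1, 2, 10,              // numbers sort numerically
    "apple", "Apple",      // text is case sensitive, lower case first
    ["a"], ["b", 1],       // arrays compare element by element
    { "a": 1 }             // objects sort last
]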