I have an array of objects that I want to store in Redis. I can break the array up and store the elements as individual objects, but I can't figure out how to get something like
{0} : {"foo" : "bar", "qux" : "doe"}, {1} : {"name" : "Saras", "age" : 23}
and then search the db by name and get the matching key back. I need something like the following, but can't come close to getting it right.
incr id //correct
(integer) 3
get id //correct
"3"
SADD id {"name" : "Saras"} //wrong
SADD myset {"name" : "Saras"} //correct
(integer) 1
First is getting this part right.
Second is somehow getting the key back from the value, i.e.
if name === "Saras"
then key = 1
which I find tough. Alternatively, I can store the data directly as an array of objects and use a simple for loop:
for (var i = 0; i < userCache.users.length; i++) {
    if (userCache.users[i].userId == userId && userCache.users[i].deviceId == deviceId) {
        return i;
    }
}
Kindly suggest which route is best, ideally with some implementation.
What I found to work was using a unique identifier as the key, stringifying the whole object when storing the data, and applying JSON.parse when extracting it.
Example code:
client
    .setAsync(obj.deviceId.toString(), JSON.stringify(obj)) // store the serialized object under its deviceId
    .then((doc) => {
        return client.getAsync(obj.deviceId.toString());    // read it back
    })
    .then((doc) => {
        return JSON.parse(doc);                             // deserialize into an object again
    })
    .catch((err) => {
        return err;
    });
Stringifying and then parsing back is a computationally heavy operation, though, and will block the Node.js event loop if the JSON becomes large. I am ready to take that hit for the lower complexity, because I know my JSON won't be huge, but it needs to be kept in mind when going with this approach.
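As an aside, if the stringify/parse cost ever becomes a problem, the RedisJSON module (not part of core Redis; this assumes it is installed on the server) can store and query JSON natively, so a round trip can return just the fields you need. A minimal sketch:
JSON.SET user:1 $ '{"name":"Saras","age":23}' //store the document as native JSON
JSON.GET user:1 $.name //fetch only the name field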
Redis is pretty simple key-value storage. Yes, there are other data structures like sets, but it has VERY limited query capabilities. For example, if you want to find data by name, you would have to do something like this:
SET Name "serialized data of object"
SET Name2 "serialized data of object2"
SET Name3 "serialized data of object3"
then:
GET Name
would return data.
Of course this means that you can't store two entries with the same names.
You can do limited text matching on keys using: http://redis.io/commands/scan
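For example, a rough sketch against the keys above (SCAN is cursor-based, so you repeat the call with the cursor it returns until that cursor comes back as 0):
SCAN 0 MATCH Name* COUNT 100 //returns a new cursor plus any keys matching Name*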
To summarize: I think you should use another tool for complex queries.
The first issue you have, SADD id {"name" : "Saras"} //wrong, is obvious: the id key is not of the set type, it is a string (INCR creates a string value).
In redis the only access point to data is through its key.
As kiss said, perhaps you should be looking for other tools.
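That said, if you do stay with Redis, the usual workaround is to maintain a secondary index yourself. A minimal sketch with hypothetical key names (user:1 for the object, idx:name:* for the reverse lookup):
HSET user:1 name "Saras" age "23" //store the object as a hash under its id
SET idx:name:Saras 1 //reverse index: name -> id
GET idx:name:Saras //later, resolve the name back to the key "1"
Both writes need to happen together (e.g., inside MULTI/EXEC), or the index can drift out of sync with the data.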
Related
I am using the Sequelize ORM, and for the DB I am using MySQL. I have associated two tables and I am getting the data, but I want the second table's data not to come back as a nested object; instead, could we show all the data (both tables' data) in a single object?
Let me show a screenshot.
What I want is something like this.
Ignore the keys and values of the data; my question is whether we can show the data of the other table (think of it like an inner join) in the same object, rather than as a nested object.
Any suggestion would be appreciated, thanks.
I'm not sure exactly what you want, but judging from your input/output I think you want to flatten the nested object(?), and for that you can use the following piece of code:
const nested_obj = {"message": "Hey you", "nested_obj": {"id": 1}};

// Recursively collect {key: value} pairs from every nesting level,
// then merge them all into one flat object.
const flattened_obj = Object.assign(
    {},
    ...function _flatten(o) {
        return [].concat(...Object.keys(o)
            .map(k =>
                typeof o[k] === 'object' ?
                    _flatten(o[k]) :   // recurse into nested objects
                    ({ [k]: o[k] })    // wrap each leaf as its own {key: value}
            )
        );
    }(nested_obj)
);
console.log(JSON.stringify(flattened_obj)); // {"message":"Hey you","id":1}
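As an aside, if the nested object comes from a Sequelize include, you may not need to flatten manually at all: passing raw: true makes Sequelize return plain flat rows, with the joined table's columns under dotted keys. A rough sketch with hypothetical User/Profile models:
// Hypothetical models; with raw: true the joined Profile columns come back
// on a flat object under dotted keys such as 'Profile.id'.
const rows = await User.findAll({
    include: [{ model: Profile }],
    raw: true
});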
So I have a small database; it's not going to grow much more, and I'm trying to get one document from the db in an API that I implemented in Python, so that with a given document id I retrieve the document from the db. However, I find it a little awkward to make the user type a random number from the db. All I need is a function that modifies each document by setting an auto-incrementing id field. As I said, it's not going to grow much and performance isn't really an issue here.
So far what I've been able to do is this:
var i = 0;
db.MyCollection.update({},
    { $set: { "new_field": i } },
    { upsert: false, multi: true });
i++;
I managed to set an id field, but it sets the same number on every document (the count of the documents). So if the db has 10 docs, it sets the id of each one to 10.
The find-and-modify operation returns the updated document (either the pre- or post-update version, depending on the returnDocument setting). You can use this with $inc to implement a counter. A Ruby example, where c is a collection:
irb(main):005:0> c['foo'].insert_one(counter:true,count:1)
=> #<Mongo::Operation::Insert::Result:0x8040 documents=[{"n"=>1, "opTime"=>{"ts"=>#<BSON::Timestamp:0x00005609f260b7e0 #seconds=1594961771, #increment=2>, "t"=>1}, "electionId"=>BSON::ObjectId('7fffffff0000000000000001'), "ok"=>1.0, "$clusterTime"=>{"clusterTime"=>#<BSON::Timestamp:0x00005609f260b538 #seconds=1594961771, #increment=2>, "signature"=>{"hash"=><BSON::Binary:0x8060 type=generic data=0x0000000000000000...>, "keyId"=>0}}, "operationTime"=>#<BSON::Timestamp:0x00005609f260b290 #seconds=1594961771, #increment=2>}]>
irb(main):011:0> c['foo'].find_one_and_update({counter:true},{'$inc':{count:1}})
=> {"_id"=>BSON::ObjectId('5f112f6b2c97a6281f63f575'), "counter"=>true, "count"=>1}
irb(main):012:0> c['foo'].find_one_and_update({counter:true},{'$inc':{count:1}})
=> {"_id"=>BSON::ObjectId('5f112f6b2c97a6281f63f575'), "counter"=>true, "count"=>2}
irb(main):013:0> c['foo'].find_one_and_update({counter:true},{'$inc':{count:1}})
=> {"_id"=>BSON::ObjectId('5f112f6b2c97a6281f63f575'), "counter"=>true, "count"=>3}
irb(main):014:0> c['foo'].find_one_and_update({counter:true},{'$inc':{count:1}})
=> {"_id"=>BSON::ObjectId('5f112f6b2c97a6281f63f575'), "counter"=>true, "count"=>4}
Why not just use this logic? Instead of updating everything via one query, launch multiple queries one by one. Mongo will do it pretty fast, even if you have >1M docs in the database (and per your phrase "I got a small database", you don't), because of the pre-built index on the _id field.
This is JavaScript code, but I guess you'll understand the logic of it:
let all_documents = db.MyCollection.find({}).toArray(); // materialize the cursor so it can be indexed
for (let i = 0; i < all_documents.length; i++) {
    db.MyCollection.update({ _id: all_documents[i]._id }, { $set: { "new_field": i } }, { upsert: false });
}
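If the one-query-per-document round trips ever bother you, the same updates can also be batched; a sketch using bulkWrite (shell syntax):
let ops = [];
let i = 0;
db.MyCollection.find({}, { _id: 1 }).forEach(doc => {
    ops.push({ updateOne: { filter: { _id: doc._id }, update: { $set: { "new_field": i++ } } } });
});
if (ops.length > 0) db.MyCollection.bulkWrite(ops); // one batched call instead of N updates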
For example, I have thousands of documents with the same structure, such as:
{
    "key_1": "value_1",
    "key_2": "value_2",
    "key_3": "value_3",
    ...
}
And I need to get, let's say, key_1, key_3 and key_23 from some set of documents with known IDs; for example, I need to process only 5 documents while my DB contains several thousand. Each time I have a different set of keys and document IDs. Is it possible to get that information in a single request?
You can use a list function (see the CouchDB documentation on list functions).
Since you know the ids, you can then query _all_docs with the list function:
POST /{db}/_design/{ddoc}/_list/{func}/_all_docs?include_docs=true&columns=["key_1","key_2","key_3"]
Accept: application/json
Content-Length: {whatever}

{
    "keys": [
        "docid002",
        "docid005"
    ]
}
The list function needs to look at the documents and send the appropriate JSON for each one. Not tested:
(function (head, req) {
    send('{"total_rows":' + head.total_rows + ',"offset":' + head.offset + ',"rows":[');
    var columns = JSON.parse(req.query.columns); // e.g. ["key_1","key_2","key_3"]
    var delim = '';
    var row;
    while (row = getRow()) {
        // Replace each doc with one containing only the requested columns.
        var doc = {};
        for (var i = 0; i < columns.length; i++) {
            var k = columns[i];
            doc[k] = row.doc[k];
        }
        row.doc = doc;
        send(delim + toJSON(row));
        delim = ',';
    }
    send(']}');
})
Whether this is a good idea, I'm not sure. If your documents are big and bandwidth savings are important, it might be.
Yes, that’s possible. Your question can be broken up into two distinct problems:
1. Getting only a part of the document (in your example: key_1, key_3 and key_23). This can be done using a view. A view is saved into a design document. See the wiki for more info on how to create views.
2. Retrieving only certain documents, which are identified by their IDs. When querying views, you can specify not only a single ID (or rather key) but also an array of keys, which is what you would need here. Again, see the section on querying views in the wiki for explanations and examples.
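For example, once the view exists, the multi-key query is a single request (a sketch; the design doc and view names are made up):
POST /db/_design/ddoc/_view/by_id
Content-Type: application/json

{"keys": ["docid002", "docid005"]}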
Even though you only need a subset of values from a document, you may find that the system as a whole performs better if you just ask for the entire document then select the values you need from that result.
To get only the specific key-value pairs, you need to create a view whose entries have a multipart key consisting of the doc id and the item name, with the corresponding item's value as the view value.
So your map function would look something like:
function (doc) {
    // Emit one row per key, keyed by [doc id, key name].
    for (var i = 1; i < doc.keysInDoc; i++) {
        var k = "key_" + i;
        emit([doc._id, k], doc[k]);
    }
}
You can then use a multi-key lookup, with each key being of the form ["docid12345", "key_1"], ["docid56789", "key_23"], etc.
So a query like:
http://host:5984/db/_design/design/_view/view?keys=[["docid002","key_8"],["docid005","key_7"]]
will return
{"total_rows":84,"offset":67,"rows":[
{"id":"docid002","key":["docid002","key_8"],"value":"value d2_k8"},
{"id":"docid005","key":["docid005","key_12"],"value":"value d5_k12"}
]}
I have a CouchDB database which contains about 200000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I run a query like this:
List<JsonObject> tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, using for each JsonObject in tweets
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200000 documents. If I instead load all documents in one single query using includeDocs(true),
List<JsonObject> allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an OutOfMemoryError since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5000 documents at a time and loop through the whole database, but I don't know how to write the loop that continues with the next 5000 after the first 5000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
// getCursor(...) below is my own helper that persists the sequence offset.
changes = dbClient.changes()
        .since(null) // or... since(since) if you want an offset
        .includeDocs(true);

int size = 1;
getCursor("0");

while (size > 0) {
    ChangesResult resultSet = changes.limit(40000).getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // <instantiate your object via gson>
        // ...
    }
    getCursor(resultSet.getLastSeq());
    size = rowList.size();
}
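If you would rather page with limit/startkey as the question suggests, the standard trick is to fetch one extra row per page and use its key as the next page's start. A rough sketch over plain HTTP (Node.js 18+ with global fetch; the database URL is made up), which should map directly onto LightCouch's startKey/limit parameters:
// Page through _all_docs using limit plus startkey, asking for one extra
// row per page; that extra row's key becomes the next page's start key.
const BASE = 'http://localhost:5984/db'; // hypothetical CouchDB database URL
const PAGE = 5000;

async function eachDoc(handle) {
    let startkey; // undefined on the first page
    while (true) {
        const params = new URLSearchParams({ include_docs: 'true', limit: String(PAGE + 1) });
        if (startkey !== undefined) params.set('startkey', JSON.stringify(startkey));
        const res = await fetch(`${BASE}/_all_docs?${params}`);
        const { rows } = await res.json();
        rows.slice(0, PAGE).forEach(row => handle(row.doc)); // process this page
        if (rows.length <= PAGE) break;                      // no extra row: that was the last page
        startkey = rows[PAGE].key;                           // extra row begins the next page
    }
}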
I have an entity with an assigned string Id in NHibernate, and I have a little problem when getting an entity by Id.
Example: suppose I have a database record like this:
Id Description
-------------------
AAA MyDescription
Now, if I use the "Get" method with the search id "aaa",
MYENTITYTYPE entity = Session.Get<MYENTITYTYPE>("aaa");
it returns the right entity, but the Id field (entity.Id) is "aaa", while I wish it were "AAA".
In summary, I would like the "Get" method to return the id exactly as it is stored in the database, with the same case.
Is this possible? How can I do it?
Interesting question. My guess is that it's not possible, because the Id might exist before the DB call. Consider the following:
var foo = session.Load<Foo>("aaa"); //no DB call, foo is a proxy
Console.WriteLine(foo.Id); //Prints "aaa";
var bar = foo.Bar; //Forces loading
Console.WriteLine(foo.Id); //No matter what, the Id can't change at this point
This illustrates another reason why primary keys with meaning are usually a bad idea, especially if their input is not controlled.
Now, if instead of Get you use a query, you will get the correctly cased Id:
//example with LINQ; you can use HQL, Criteria, etc
var foo = session.Query<Foo>().Single(x => x.Id == "aaa");
The drawback is that you will always go to the DB, even if the entity is already loaded.
Now, if you define your entity as {Id, Code, Description}, where Id is a synthetic POID (I recommend Hilo or Guid) and Code is the existing string Id, you will avoid potential bugs caused by using Get instead of a query by code.