I am working on a dynamic system; for simplicity, let's say it's an e-commerce page where the frontend has no idea beforehand how many categories, which categories, or which attributes should be displayed. Let's take a simple product.
I had a choice, to have the backend pass either:
[
{
Name: 'Some shoe'
}
]
Or
[
[
{
value: 'Some shoe',
label: 'Name'
}
]
]
I will use a translation file. At first I thought I'd use it to translate from English if needed, but it could also be used to map a key such as "product_name" to the label "Name" each time, which would solve the problem above.
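To illustrate, roughly what I have in mind (labels.en.json and the field names are just made up for the example):

// labels.en.json (made-up translation file): { "product_name": "Name", "product_price": "Price" }
const labels = require('./labels.en.json');

// The backend sends plain key/value pairs such as { product_name: 'Some shoe' }
function toDisplayFields(product) {
  return Object.keys(product).map(key => ({
    label: labels[key] || key, // fall back to the raw key if no translation exists
    value: product[key]
  }));
}

// toDisplayFields({ product_name: 'Some shoe' })
// => [ { label: 'Name', value: 'Some shoe' } ]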
What are some best practices?
Thanks
I have a record that looks like this
name, value
Parent Name
MyName, MyValue
Parent Name
OtherName, OtherValue
This should translate into
[
{
name:"ParentName",
children: [
{
name: "myName",
value: "myValue"
},{
name: "otherName",
value: "otherValue"
}
]
}
]
Now I know this is poorly put together CSV, but I can't control that, so I need to use it as it currently is. I could preprocess it, but I am not sure how I would tell whether a given line is a parent line or a child line.
Is there a good way to parse these sorts of documents using Node?
This is not CSV by any definition of the format. As such, you're not going to find any ready-made CSV library to parse it or to turn it into your desired output.
Instead, you're going to have to write your own code to read it, parse it, and organize it into the output you want.
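As a rough sketch, assuming parent lines are the ones without a comma and child lines are the comma-separated name/value pairs, something like this in Node could work:

const fs = require('fs');

// Read the file and split it into non-empty, trimmed lines.
const lines = fs.readFileSync('data.txt', 'utf8')
  .split(/\r?\n/)
  .map(line => line.trim())
  .filter(Boolean);

const groups = {};   // parent name -> group object, so a repeated parent line reuses the same group
const result = [];
let current = null;

for (const line of lines) {
  if (!line.includes(',')) {
    // No comma: treat it as a parent line.
    if (!groups[line]) {
      groups[line] = { name: line, children: [] };
      result.push(groups[line]);
    }
    current = groups[line];
  } else if (current) {
    // Comma: treat it as a "name, value" child line under the last parent seen.
    const [name, value] = line.split(',').map(part => part.trim());
    current.children.push({ name, value });
  }
  // Comma lines seen before any parent (e.g. a "name, value" header) are ignored.
}

console.log(JSON.stringify(result, null, 2));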
This is the use case: a webshop in which I want to configure which items should be listed in the shop based on a set of parameters.
I want this to be configurable, because that allows me to experiment with different parameters and change their values easily.
I have a Product collection that I want to query based on multiple parameters.
A couple of these are found here:
within product:
"delivery" : {
"maximum_delivery_days" : 30,
"average_delivery_days" : 10,
"source" : 1,
"filling_rate" : 85,
"stock" : 0
}
but also other parameters exist.
An example of such a query, used to decide whether or not to include a product, could be:
"$or" : [
{
"delivery.stock" : 1
},
{
"$or" : [
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 60
}
},
{
"delivery.filling_rate" : {
"$gt" : 90
}
}
]
},
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 40
}
},
{
"delivery.filling_rate" : {
"$gt" : 80
}
}
]
},
{
"$and" : [
{
"delivery.delivery_days" : {
"$lt" : 25
}
},
{
"delivery.filling_rate" : {
"$gt" : 70
}
}
]
}
]
}
]
Now to make this configurable, I need to be able to handle boolean logic, parameters and values.
So, since such a query is itself JSON, I got the idea to store it in Mongo and have my Java app retrieve it.
The next step is to use it as the filter (e.g. in find, or whatever) and work on the corresponding selection of products.
The advantage of this approach is that I can actually analyse the data and the effectiveness of the query outside of my program.
I would store it by name in the database. E.g.
{
"name": "query1",
"query": { the thing printed above starting with "$or"... }
}
using:
db.queries.insert({
"name" : "query1",
"query": { the thing printed above starting with "$or"... }
})
Which results in:
2016-03-27T14:43:37.265+0200 E QUERY Error: field names cannot start with $ [$or]
at Error (<anonymous>)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:161:19)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:165:18)
at insert (src/mongo/shell/bulk_api.js:646:20)
at DBCollection.insert (src/mongo/shell/collection.js:243:18)
at (shell):1:12 at src/mongo/shell/collection.js:161
However, I CAN store it using Robomongo, but not always. Obviously I am doing something wrong, but I have NO IDEA what it is.
If it fails and I create a brand new collection and try again, it succeeds. Weird stuff that goes beyond what I can comprehend.
But when I try updating values in the "query", the changes never go through. Never. Not even sometimes.
I can however create a new object and discard the previous one. So, the workaround is there.
db.queries.update(
{"name": "query1"},
{"$set": {
... update goes here ...
}
}
)
doing this results in:
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 52,
"errmsg" : "The dollar ($) prefixed field '$or' in 'action.$or' is not valid for storage."
}
})
That seems pretty close to the other message above.
Needless to say, I am pretty clueless about what is going on here, so I hope some of the wizards here are able to shed some light on the matter.
I think the error message contains the important info you need to consider:
QUERY Error: field names cannot start with $
Since you are trying to store a query (or part of one) in a document, you'll end up with attribute names that contain mongo operator keywords (such as $or, $ne, $gt). The mongo documentation actually references this exact scenario (emphasis added):
Field names cannot contain dots (i.e. .) or null characters, and they must not start with a dollar sign (i.e. $)...
I wouldn't trust 3rd party applications such as Robomongo in these instances. I suggest debugging/testing this issue directly in the mongo shell.
My suggestion would be to store an escaped version of the query in your document, so as not to interfere with reserved operator keywords. You can use JSON.stringify(my_obj) to encode your partial query into a string, and then parse/decode it when you retrieve it later on with JSON.parse(escaped_query_string_from_db).
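In the shell that might look roughly like this (a sketch; the products collection name is just an assumption):

// Encode the query as a string so no stored field name starts with $
var myQuery = { "$or": [ { "delivery.stock": 1 } /* ...rest of the query... */ ] };
db.queries.insert({ "name": "query1", "query": JSON.stringify(myQuery) });

// Later: read it back, decode it and use it against your product collection
var doc = db.queries.findOne({ "name": "query1" });
db.products.find(JSON.parse(doc.query));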
Your approach of storing the query as a JSON object in MongoDB is not viable.
You could potentially store your query logic and fields in MongoDB, but you have to have an external app build the query with the proper MongoDB syntax.
MongoDB queries contain operators, and some of those have special characters in them.
There are rules for MongoDB field names, and these rules do not allow for such special characters.
Look here: https://docs.mongodb.org/manual/reference/limits/#Restrictions-on-Field-Names
The probable reason you can sometimes successfully create the doc using Robomongo is that Robomongo transforms your query into a string and properly escapes the special characters as it sends it to MongoDB.
This also explains why your attempts to update it never work: you tried to create a document, but instead created something that is a string object, so your update conditions are probably not matching any docs.
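To illustrate the idea of having your app build the query, here is a rough sketch that stores only the building blocks and assembles the operators at query time (the key names are made up, and it's shown in JavaScript for brevity; the same approach works from Java):

// A stored configuration document - note that no key starts with $
const storedConfig = {
  name: "query1",
  combine: "or",
  rules: [
    { field: "delivery.stock", op: "eq", value: 1 },
    { field: "delivery.maximum_delivery_days", op: "lt", value: 60 }
  ]
};

// Application code rebuilds the real MongoDB filter from the stored rules
function buildFilter(config) {
  const clauses = config.rules.map(rule =>
    rule.op === "eq"
      ? { [rule.field]: rule.value }
      : { [rule.field]: { ["$" + rule.op]: rule.value } }
  );
  return { ["$" + config.combine]: clauses };
}

// buildFilter(storedConfig)
// => { "$or": [ { "delivery.stock": 1 }, { "delivery.maximum_delivery_days": { "$lt": 60 } } ] }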
I see two problems with your approach.
In the following query:
db.queries.insert({
"name" : "query1",
"query": { the thing printed above starting with "$or"... }
})
A valid JSON document expects key/value pairs; here, in "query", you are storing an object without a key. You have two options: either store the query as text, or create another key inside the curly braces.
The second problem is that you are storing the query values without wrapping them in quotes. All string values must be wrapped in quotes.
So your final document should appear as:
db.queries.insert({
"name" : "query1",
"query": 'the thing printed above starting with "$or"... '
})
Now try it; it should work.
Obviously my attempt to store a query in mongo the way I did was foolish, as became clear from the answers from both #bigdatakid and #lix. So what I finally did was this: I altered the naming of the fields to comply with the mongo requirements.
E.g. instead of $or I used _$or, etc., and instead of using a . inside a name I used a #. Both of these I replace again in my Java code.
This way I can still easily try and test the queries outside of my program. In my Java program I just change the names back and use the query, using just two lines of code. It simply works now. Thanks guys for the suggestions you made.
String documentAsString = query.toJson().replaceAll("_\\$", "\\$").replaceAll("#", ".");
Object q = JSON.parse(documentAsString);
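So the stored document ends up looking roughly like this (a sketch pieced together from the query fragments above):

{
  "name" : "query1",
  "query" : {
    "_$or" : [
      { "delivery#stock" : 1 },
      { "_$and" : [
          { "delivery#maximum_delivery_days" : { "_$lt" : 60 } },
          { "delivery#filling_rate" : { "_$gt" : 90 } }
      ] }
    ]
  }
}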
We have been using MongoDB for some time now, and there is one thing I just can't wrap my head around. Let's say I have a collection of Users that have a Watch List or Favorite Items List like this:
usersCollection = [
{
_id: 1,
name: "Rob",
itemWatchList:[
"111111",
"222222",
"333333"
]
}
];
and a separate Collection of Items
itemsCollection = [
{
_id:"111111",
name: "Laptop",
price:1000.00
},
{
_id:"222222",
name: "Bike",
price:123.00
},
{
_id:"333333",
name: "House",
price:500000.00
}
];
Obviously we would not want to insert the whole item object inside the itemWatchList array, because the item's data (e.g. the price) could change.
Let's say we pull that user into the GUI and want to display a grid of the user's itemWatchList. We can't, because all we have is a list of IDs. Is the only option to do a second collection.find([itemWatchList]) and then, in the results callback, manipulate the user record to display the current items? The problem with that is: what if I return an array of multiple Users, each with its own itemWatchList array? That would be a callback nightmare to try to keep the results straight. I know Map Reduce and the Aggregation Framework can't traverse multiple collections.
What is the best practice here and is there a better data structure that should be used to avoid this issue all together?
You have 3 different options with how to display relational data. None of them are perfect, but the one you've chosen may not be the best option for your use case.
Option 1 - Reference the IDs
This is the option you've chosen: keep a list of the IDs of the objects you want to reference, generally in an array. Later, to display them, you do a second round-trip with an $in query.
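For example (a sketch; the collection names users and items are assumed):

// First round-trip: load the user
var user = db.users.findOne({ _id: 1 });

// Second round-trip: fetch every watched item in a single query with $in
var watchedItems = db.items.find({ _id: { $in: user.itemWatchList } }).toArray();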
Option 2 - Subdocuments
This is probably a bad solution for your situation. It means putting the entire array of documents that are stored in the items collection into your user collection as a sub-document. This is great if only one user can own an item at a time. (For example, different shipping and billing addresses.)
Option 3 - A combination
This may be the best option for you, but it'll mean changing your schema. For example, let's say that your items have 20 properties, but you really only care about the name and price for the majority of your screens. You would then have a schema like this:
usersCollection = [
{
_id: 1,
name: "Rob",
itemWatchList:[
{
_id:"111111",
name: "Laptop",
price:1000.00
},
{
_id:"222222",
name: "Bike",
price:123.00
},
{
_id:"333333",
name: "House",
price:500000.00
}
]
}
];
itemsCollection = [
{
_id:"111111",
name: "Laptop",
price:1000.00,
otherAttributes: ...
},
{
_id:"222222",
name: "Bike",
price:123.00,
otherAttributes: ...
},
{
_id:"333333",
name: "House",
price:500000.00,
otherAttributes: ...
}
];
The difficulty is that you then have to keep these items in sync with each other. (This is what is meant by eventual consistency.) If you have a low-stakes application (not banking, health care, etc.) this isn't a big deal. You can have the two update queries happen successively, updating every user that has that item to the new price. You'll notice this sort of latency on some websites if you pay attention. eBay, for example, often shows different prices on its search results pages than on the actual item page, even if you return and refresh the search results.
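A sketch of those two successive updates, assuming the option 3 schema above (the collection names and the new price are made up):

// 1. Update the canonical item document
db.items.update({ _id: "111111" }, { $set: { price: 899.00 } });

// 2. Update the denormalized copies embedded in every matching user's watch list
db.users.update(
  { "itemWatchList._id": "111111" },
  { $set: { "itemWatchList.$.price": 899.00 } },
  { multi: true }
);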
Good luck!
I'm a beginner with Elasticsearch. I feel like this should be pretty simple, but I'm stuck here. I've got a mapping for Posts that looks like this:
[ post1: {
title: 'asdfasd',
comments: [commment1, comment2, comment3]
},
post2: {
title: 'asdf',
comments: [comment1, comment2]
}
...
]
And I'm trying to search for them by title and then order them by number of comments. I can search by title just fine, but I'm a little confused as to how to go about ordering the results by comments count. What would be the best way to go about doing this?
You have two options:
1. Use a script to get the length of the array. So you would do something like:
{
"query" : {
....
},
"sort" : {
"_script" : {
"script" : "doc['comments'].values.length",
"type" : "number",
"order" : "desc"
}
}
}
2. Keep an additional field for the number of comments: each time you add a comment, also increment the comments counter, and sort by that field.
Option #2 is preferable if you have a lot of data. Using a script has its overhead and it can increase search time if you have to calculate the script on a large collection of documents.
Sorting by a field, on the other hand, is much better in terms of performance. I would go with #2.
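For example, assuming you maintain a comments_count field on each post, the option #2 request might look roughly like this:

{
  "query" : {
    "match" : { "title" : "asdf" }
  },
  "sort" : [
    { "comments_count" : { "order" : "desc" } }
  ]
}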
I've just started building a little application using MongoDB, and I can't seem to find any examples of how to add objects to a deep array and then find them on an individual basis.
Let me illustrate by the following set of steps I take as well as the code I've written.
I create a simple object in MongoDB like so:
testing = { name: "s1", children: [] };
db.data.save(testing);
When I query it everything looks nice and simple still:
db.data.find();
Which outputs:
{
"_id" : ObjectId("4f36121082b4c129cfce3901"),
"name" : "s1",
"children" : [ ]
}
However, after I update the "children" array by "pushing" an object into it, I get into all sorts of problems.
First the update command that I run:
db.data.update({ name:"s1" },{
$push: {
children: { name:"r1" }
}
});
Then when I query the DB:
db.data.find({
children: { name: "r1" }
});
Results in:
{
"_id" : ObjectId("4f36121082b4c129cfce3901"),
"children" : [ { "name" : "r1" } ],
"name" : "s1"
}
Which doesn't make any sense to me, since I would have expected the following:
{
"name": "r1"
}
Is there a better way of inserting data into MongoDB so that when I run queries I extract individual objects rather than the entire tree? Or perhaps a better way of writing the "find" query?
By default, MongoDB's find retrieves all the fields (like SELECT * in SQL). You can extract a particular field by specifying it in the projection:
db.data.find({ "children.name": "r1" }, { "children.name": 1 });
Why would you expect it to return only part of a document? It returns the whole document unless you tell it which fields you want to explicitly include or exclude. See http://www.mongodb.org/display/DOCS/Retrieving+a+Subset+of+Fields
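If you only want the matching child back rather than the whole children array, one way (a sketch) is a projection with the positional $ operator or $elemMatch; both still return the document's _id:

// Positional projection: return only the matching element of the children array
db.data.find(
  { "children.name": "r1" },
  { "children.$": 1 }
);

// Or, equivalently, an $elemMatch projection
db.data.find(
  { name: "s1" },
  { children: { $elemMatch: { name: "r1" } } }
);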