I have a JSON document I would like to put into my MongoDB database. The JSON holds some relational data, but this data is internal to the document; I don't need to store it anywhere else. Example:
{
  title: "John film list",
  films: [
    {
      title: "Once upon a time in Hollywood",
      director: '1a' // referencing the director using an ID of type string
    },
    {
      title: "Some film with empty director field",
      director: ''
    }
  ],
  directors: [
    {
      id: '1a', // find the director here
      name: 'Tarantino'
    }
  ]
}
I do not need to store anything centrally (I don't need a big list of directors somewhere), but in this very document I need to be able to look up the director (1a) and get back Tarantino.
I managed to push this JSON format to MongoDB. However, MongoDB gives my documents new ids (the _id field), and I am now confused about how to relate the two properly in Mongo.
The default unique primary key of a MongoDB document is _id. When you insert a new document, MongoDB returns the unique id of the record that was inserted (created).
The value of this field is an ObjectId, which is generated partly from the current timestamp; you can read about it in the MongoDB documentation.
If you want to use your own value for the _id, you have to pass it when you call the insert, like this:
db.directors.insertOne({_id: '1a', name: 'Tarantino'})
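For your embedded structure you don't even need a second collection; you can resolve the reference inside the document itself. A minimal shell sketch, assuming the list document from the question was inserted into a collection called filmLists (the collection name is an assumption):

// resolve the internal reference in application code
var list = db.filmLists.findOne({ title: "John film list" });
var film = list.films[0];
var director = list.directors.find(function (d) { return d.id === film.director; });
// director.name === 'Tarantino'

// or, if directors get their own collection with a custom _id as above:
db.directors.findOne({ _id: '1a' }) // -> { _id: '1a', name: 'Tarantino' }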
I have a Node.js app with MongoDB. Now I want to use Elasticsearch to replicate data from Mongo to Elasticsearch. I'm using the npm package "elasticsearch". For example, for the collection "Posts" I have documents like this:
items: [
{
_id: '111111111111',
title: 'test1',
status: true,
},
{
_id: '22222222',
title: 'test2',
status: 0,
},
{
_id: '333333333',
title: 'test1',
status: {published: true},
}
]
As you can see, my data is unstructured and Elasticsearch shows me an error when I add these items. I want a trick to turn off this Elasticsearch restriction and allow me to add this data. I can't make changes to my data; it's huge.
Any solution?
It gives you errors because your status field is a boolean first, a number next, and an object at the end -> mapping conflicts. If you don't want to change the data, I assume you don't expect to search over the fields that show conflicts (how could you query consistently over a field that could be anything?). Then my best recommendation is to store the conflicting fields without indexing them. That means you will see them in the documents, but you won't be able to query them or aggregate over them. To disable indexing, set their mapping type to object and set the enabled mapping property to false (see the docs).
If you want to be able to query or aggregate over everything, you must do the extra effort of preprocessing your data consistently.
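A rough sketch of such a mapping, assuming an index named posts, Elasticsearch 7+ (older versions need a type name inside mappings), and a client created with the "elasticsearch" npm package:

client.indices.create({
  index: 'posts',
  body: {
    mappings: {
      properties: {
        title: { type: 'text' },
        // enabled: false keeps the field in _source but never indexes it,
        // so the conflicting value types stop breaking indexing
        status: { type: 'object', enabled: false }
      }
    }
  }
});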
I am new to MongoDB and mongoose. I am trying to create a Node & MongoDB auction app. Since it is an online auction, users should be able to bid on items. I successfully completed the user registration, sign-in page, and authentication process; however, I am a bit stuck on the bidding page.
I created a Schema using mongoose and each item for auction is saved in the database. I want to add name and price of each user who bid for the item in the same object in MongoDB, like this:
{
name: "valuable vase from 1700s",
owner: "John Doe",
itemId: 100029,
bids: {
100032: 30000,
100084: 34000
}
}
So each user will have an entry like 100032: 30000, and when they bid, their "account id: price" pair will be added under bids in the database object of the item.
I did some research and found some ways to solve the problem, but I want to know if what I want to do is possible and whether it is the right way to do it.
Thanks for giving me your time!
There are indeed a couple of ways to achieve what you want.
In my opinion, a collection called ItemBids, where each document includes this data structure, will benefit you the most.
{
  itemId: ObjectId,    // reference to the item document
  accountId: ObjectId, // reference to the account
  bid: Number          // the bid value
}
This pattern is suitable for your case because you can easily query the bids by whatever you want:
you can get all of an account's bids, you can get all of an item's bids, and you can sort them with native Mongo by the bid price.
Every time there's a bid, you insert a new document into this collection.
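A minimal mongoose sketch of this first approach (the Item and Account model names are assumptions, and item / account stand for documents you have already loaded):

var mongoose = require('mongoose');

var ItemBidSchema = new mongoose.Schema({
  itemId:    { type: mongoose.Schema.Types.ObjectId, ref: 'Item', required: true },
  accountId: { type: mongoose.Schema.Types.ObjectId, ref: 'Account', required: true },
  bid:       { type: Number, required: true }
});

var ItemBid = mongoose.model('ItemBid', ItemBidSchema);

// record a new bid
ItemBid.create({ itemId: item._id, accountId: account._id, bid: 30000 });

// all bids on one item, highest first
ItemBid.find({ itemId: item._id }).sort({ bid: -1 });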
Another option is embedding an array of Bid objects in the item object.
Each Bid object should include:
bids: [{
  account: ObjectId(""), // this is the account
  price: Number
}]
The downside here is that querying and accessing the bids will require more complex queries.
You can read more about the considerations here:
https://docs.mongodb.com/manual/core/data-model-design
https://coderwall.com/p/px3c7g/mongodb-schema-design-embedded-vs-references
The way you decided to implement your functionality is a little bit complicated.
It is not impossible to do, but a better way is to use an array of objects instead of a single object, like this:
{
name: '',
..
..
bids: [{
user: 100032,
price: 30000
}, {
user: 100084,
price: 34000
}]
}
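With this shape, recording a bid is a single $push; a small sketch, assuming a mongoose model named Item and an itemId variable holding the item's _id (both are assumptions):

// add a bid to the embedded array
Item.updateOne(
  { _id: itemId },
  { $push: { bids: { user: 100084, price: 34000 } } }
);

// read the highest bid for an item
Item.aggregate([
  { $match: { _id: itemId } },
  { $unwind: '$bids' },
  { $sort: { 'bids.price': -1 } },
  { $limit: 1 }
]);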
I'm rather new to working with MongoDB.
In my application, the user can create to-do lists. I save the data of these lists to my database using Node.js with the Express framework and mongoose (with a ReactJS front-end). However, the user is supposed to be able to create several to-do lists, and I'm not sure how to best organize the data of these lists so that I can always access the correct data for the corresponding to-do list.
Let's say I have this schema:
var TodoSchema = new mongoose.Schema({
task: String,
prio: String,
updated_at: { type: Date, default: Date.now },
});
module.exports = mongoose.model("Todo", TodoSchema);
for my database called tododb.
I was first planning on creating a new collection for each new list, but in this question (how to create a new collection automatically in mongodb) it says that it would be much better to create one collection for all lists; however, I'm not sure how you would filter out the correct data in that case.
I imagine that I'm not the first person to encounter this problem, so how is it done usually? What other options do I have besides collections? And how would I access exactly the data that I need?
Edit: I was also thinking about just adding an element called "name" or something similar, where the user could enter a name for the list, and when fetching the data I would iterate over all the data and filter out the ones whose name matches; however, that seems terribly inefficient.
I'd model a todo list like the following:
{
"_id": "id of the todo list",
"name": "name of the todo list (e.g. daily tasks)",
"tasks" : [
{"name": "drink coffee", priority: 1, updated: "sometime" },
{"name": "write code", priority: 2, updated: "sometime" },
{"name": "drink tea", priority: 3, updated: "sometime" }
]
}
and then put them all in the same collection. If you need to split by user, just add a userId field to the todo list document.
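A minimal mongoose sketch of that model (field names follow the structure above; the userId reference to a 'User' model is an assumption and only needed for per-user lists):

var mongoose = require('mongoose');

var TodoListSchema = new mongoose.Schema({
  userId: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  name: String,
  tasks: [{
    name: String,
    priority: Number,
    updated_at: { type: Date, default: Date.now }
  }]
});

module.exports = mongoose.model('TodoList', TodoListSchema);

Fetching the right data is then a plain query: TodoList.find({ userId: someUserId }) for all of one user's lists, or TodoList.findById(listId) for a single list.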
I see in all examples the suffix "_id" on a field referencing another document.
Example:
record: {
  _id: ObjectId("57f2fb5d1c6c3b0de45b170e"),
  artist_id: "prince"
}
artist: {
  _id: "prince"
}
My artist Mongo schema has the "unique" attribute on the name field.
Is it OK to do things like the example below?
record: {
  _id: ObjectId("57f2fb5d1c6c3b0de45b170e"),
  artist: "prince"
}
artist: {
  _id: ObjectId("6eygdqzd5d1c6c3b0de45b1s0r"),
  name: "prince"
}
Or should you always reference directly the Id like in the first example?
If you visualize your problem in the RDBMS world, there too, to establish a foreign key constraint, the referenced field should be the primary key in the referenced table, and the same rule applies here.
Now, in your artist collection, each document is going to contain a unique artist name, but the name field itself is not the key (the primary key); the _id is.
Hence you have to establish the reference using the _id field.
What you can do for ease, if you want, is use the name as the _id rather than relying on the MongoDB-generated _id field.
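A small shell sketch of that idea (collection names and the record title are only illustrative):

// use the unique name itself as the primary key of the artist document
db.artists.insertOne({ _id: "prince" })

// records then reference the artist by that _id
db.records.insertOne({ title: "Purple Rain", artist_id: "prince" })

// resolving the reference is a plain _id lookup
db.artists.findOne({ _id: "prince" })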
I am querying JSON data in sequelize.js using a Postgres DB.
Postgres table structure
id: integer,
name: string,
data: json
Here is my data structure
{
id: 3,
name: "v4th79j"
data: {
phone: "123456789",
email: "example#gmail.com",
password: "$2a$10$qCttQ8leMPCzJfE",
company_id: 2
}
}
I am writing the query like this:
var filter = {};
// Filter by email for a user
filter.where = { data: { email: "example#gmail.com"} } ;
Entity.findOne(filter)
.then(function (entity) {
console.log(JSON.stringify(entity))
});
but it is not working. How do I find an object in a JSON datatype column?
PostgreSQL treats the json datatype as text, so querying a single field inside it is not really workable this way. It stores the raw text of the JSON data in the database, complete with whitespace, and retains the order of keys and any duplicate keys.
I converted the data type to jsonb, which was introduced in PostgreSQL 9.4.
With jsonb, the JSON document is turned into a hierarchy of key/value pairs. All whitespace is discarded, only the last value in a set of duplicate keys is kept, and the order of keys is lost to the structure dictated by the hashes in which they are stored. You can apply an index to a specific field and search on any specific field from sequelize.js easily.
You can read more about it with a practical example here.
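A rough sketch of both steps, assuming the table is named "Entities", the model is Entity, and the column is data (adapt the names to your schema):

// 1) convert the json column to jsonb (raw SQL through sequelize)
sequelize.query('ALTER TABLE "Entities" ALTER COLUMN data TYPE jsonb USING data::jsonb;');

// 2) with a jsonb column, sequelize can filter on nested keys
Entity.findOne({
  where: { data: { email: 'example#gmail.com' } }
}).then(function (entity) {
  console.log(JSON.stringify(entity));
});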