Why am I unable to change values of a Mongoose object to a different type ‘directly’? - node.js

I've just spent a good hour figuring out something mind-boggling (at least to me, as a JS noob) and I'd like to understand the underlying logic (or just why it works this way, because I think it's illogical and quite unexpected).
Suppose I'm using Mongoose to retrieve documents from a database, some or all of which include a date property (created with new Date()), a numeric property, and a string property.
[{
  string: 'foo',
  date: '2018-10-13T21:11:39.244Z',
  number: 10
},
...
{
  string: 'bar',
  date: '2018-10-13T21:12:39.244Z',
  number: 20
}]
I thus obtain an array of objects and now want to take the date property for each object and change the value to a string, so I do something like:
doc.find({}, (err, list) => {
  list.forEach((item, index) => {
    list[index].date = 'new value'
  })
})
But I can't do that!
I can do list[index].string = 'new value' as well as list[index].date = new Date() but I can't change values that are of a different type, in this example date and number.
However, when I do list[index]._doc.date = 'new value' (which took me so long to figure out because I didn't know Mongoose objects weren't just plain old objects, so I kept solving problems I didn't actually have), I can modify the value just fine.
It appears that the Mongoose object somehow translates obj.key to obj._doc.key only if the type of the new value matches the schema, but I'd appreciate a more detailed explanation than my uneducated guesses.
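To illustrate, here is a minimal sketch (the schema below is an assumption on my part, modelled on the documents above) of the behaviour I'm seeing:
const mongoose = require('mongoose');

// Assumed schema matching the documents above
const itemSchema = new mongoose.Schema({
  string: String,
  date: Date,
  number: Number
});
const Item = mongoose.model('Item', itemSchema);

const item = new Item({ string: 'foo', date: new Date(), number: 10 });
item.string = 'new value';    // works: the value matches the String path
item.date = new Date();       // works: the value matches the Date path
item.date = 'new value';      // rejected: cannot be cast to Date, the path keeps its old value
item._doc.date = 'new value'; // bypasses the typed setter, so the raw value does change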

It sounds like you want to allow multiple types on a single document field. Mongoose supports this via the "Mixed" type when you define the schema.
You can get more detail from https://mongoosejs.com/docs/schematypes.html#mixed.
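A minimal sketch of what that could look like for the documents in the question (the model name is just an example): declaring the path as Mixed turns off casting, but you then have to tell Mongoose when the value changes:
const mongoose = require('mongoose');

const itemSchema = new mongoose.Schema({
  string: String,
  date: mongoose.Schema.Types.Mixed, // accepts a Date, a string, or anything else
  number: Number
});
const Item = mongoose.model('Item', itemSchema);

Item.find({}, (err, list) => {
  list.forEach((item) => {
    item.date = 'new value';   // no cast to Date, so the assignment is kept
    item.markModified('date'); // Mixed paths need this so save() picks up the change
    item.save();
  });
});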

Related

bigQuery: PartialFailureError on table insert

I'm trying to insert data row to bigQuery table as follows:
await bigqueryClient
.dataset(DATASET_ID)
.table(TABLE_ID)
.insert(row);
But I get a PartialFailureError when deploying the cloud function.
The table schema has a name (string) field and a campaigns (record/repeated) field, which I created manually from the console.
hotel_name STRING NULLABLE
campaigns RECORD REPEATED
  campaign_id STRING NULLABLE
  platform_id NUMERIC NULLABLE
  platform_name STRING NULLABLE
  reporting_id STRING NULLABLE
And the data I'm inserting is an object like this:
const row = {
  hotel_name: hotel_name, // string
  campaigns: {
    id: item.id, // string
    platform_id: item.platform_id, // int
    platform_name: item.platform_name, // string
    reporting_id: item.reporting_id, // string
  },
};
The errors logged don't give much clue about the issue.
These errors suck. The actual info about what went wrong can be found in the errors property on the PartialFailureError. In https://www.atdatabases.org we reformat the error to make this easier using: https://github.com/ForbesLindesay/atdatabases/blob/0e1a033264aac33deaff2ab753796049e623ab86/packages/bigquery/src/implementation/BigQueryDriver.ts#L211
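For example, a minimal sketch (DATASET_ID, TABLE_ID and row are the names from the question) of surfacing that property yourself:
try {
  await bigqueryClient
    .dataset(DATASET_ID)
    .table(TABLE_ID)
    .insert(row);
} catch (err) {
  if (err.name === 'PartialFailureError') {
    // Each entry describes the row that was rejected and why
    console.error(JSON.stringify(err.errors, null, 2));
  }
  throw err;
}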
According to my test there seem to be two errors here. The first is that the schema has campaign_id while the JSON uses id.
The second is related to the format of REPEATED-mode data in JSON. The documentation mentions the following:
Notice that the addresses column contains an array of values (indicated by [ ]). The multiple addresses in the array are the repeated data. The multiple fields within each address are the nested data.
It's not spelled out very explicitly in that document (it probably is somewhere else), but when you use REPEATED mode you should pass an array, i.e. use brackets [].
I tested it briefly on my side and it seems it should work like this:
const row = {
  hotel_name: hotel_name, // string
  campaigns: [
    {
      campaign_id: item.id, // string
      platform_id: item.platform_id, // int
      platform_name: item.platform_name, // string
      reporting_id: item.reporting_id, // string
    },
  ],
};

Change date format in dialogflow

I'm currently trying to build a chatbot/agent with Dialogflow and honestly have no knowledge about anything in the programming/IT business. I'm a student who had a guest lecture where we were shown how to create chatbots, haha. But I was interested and sat down and tried to create one for my work: a simple bot that tells the customer about the opening times and gives out some information to save us some phone calls. So far so good. I want to include the ability to book a table, and my problem is the following:
I've read many questions about changing the date and time format to receive a format like "4pm on Thursday" instead of "2020-12-26T16:00:00+01:00".
So, as I said, I have no clue so far how to change the code to get a different output. My question is whether you could tell me where exactly I have to do that, or where I can find a solution for it. Don't get me wrong, I'd love to know how to do it myself, so I'd be very happy if you could save that Christmas present :)
Best regards
Mo
So, your question is vague and lacks details.
If you want to convert "2020-12-26T16:00:00+01:00" to "4pm on Thursday" in your local time here are helper functions to achieve that:
// A helper function that combines Dialogflow's separate date and time parameters into a single Date instance.
function convertParametersDateTime(date, time){
  return new Date(Date.parse(date.split('T')[0] + 'T' + time.split('T')[1].split('+')[0]));
}
// A helper function that adds the integer value of 'hoursToAdd' to the Date instance 'dateObj' and returns a new Date instance.
function addHours(dateObj, hoursToAdd){
  return new Date(new Date(dateObj).setHours(dateObj.getHours() + hoursToAdd));
}
// A helper function that converts the Date instance 'dateObj' into a string that represents this time in English.
function getLocaleTimeString(dateObj){
  return dateObj.toLocaleTimeString('en-US', {hour: 'numeric', hour12: true});
}
// A helper function that converts the Date instance 'dateObj' into a string that represents this date in English.
function getLocaleDateString(dateObj){
  return dateObj.toLocaleDateString('en-US', {weekday: 'long', month: 'long', day: 'numeric'});
}
Those are the helper functions. You have to call them inside the Fulfillment function for your intent. Here's a very simple example:
function makeAppointment (agent) {
  // Use Dialogflow's date and time parameters to create JavaScript Date instances, 'dateTimeStart' and 'dateTimeEnd',
  // which are used to specify the appointment's time.
  // 'appointmentDuration' (in hours) is assumed to be defined elsewhere in your fulfillment code.
  const dateTimeStart = convertParametersDateTime(agent.parameters.date, agent.parameters.time);
  const dateTimeEnd = addHours(dateTimeStart, appointmentDuration);
  const appointmentTimeString = getLocaleTimeString(dateTimeStart);
  const appointmentDateString = getLocaleDateString(dateTimeStart);
  agent.add(`Here's the summary of your reservation:\nDate&Time: ${appointmentDateString} at ${appointmentTimeString}`);
}
The code might include some syntax errors. Those functions give what you are looking for, but you may have to adjust them according to your needs.
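For context, here is a rough sketch of where such a handler gets registered, assuming the standard dialogflow-fulfillment webhook running on Cloud Functions (the intent name 'book.table' is just a placeholder):
const functions = require('firebase-functions');
const { WebhookClient } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  // Map each Dialogflow intent name to its handler function
  const intentMap = new Map();
  intentMap.set('book.table', makeAppointment);
  agent.handleRequest(intentMap);
});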

Azure DocumentDB: order by and filter by DateTime

I have the following query:
SELECT * FROM c
WHERE c.DateTime >= "2017-03-20T10:07:17.9894476+01:00" AND c.DateTime <= "2017-03-22T10:07:17.9904464+01:00"
ORDER BY c.DateTime DESC
So, as you can see, I have a WHERE condition on a property of type DateTime and I want to sort my result by the same one.
The query ends with the following error:
Order-by item requires a range index to be defined on the corresponding index path.
I have absolutely no idea what this error message is about :(
Has anybody any idea?
There is also an approach that doesn't require configuring indexing explicitly. Azure DocumentDB indexes number fields with a range index by default, so you can store the date as a long (for example, epoch milliseconds). Since you are already converting the date to a string, you could just as well convert it to a long and store that, and then range queries work.
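A minimal sketch of that idea (the property names below are illustrative, not from the question):
// Store the timestamp both as the ISO string (for readability) and as a
// numeric epoch value, which DocumentDB range-indexes by default.
const order = {
  id: 'order-1',
  DateTime: '2017-03-20T10:07:17.989+01:00',
  DateTimeEpoch: new Date('2017-03-20T10:07:17.989+01:00').getTime()
};
// The range query then filters and sorts on the numeric field, e.g.:
// SELECT * FROM c
// WHERE c.DateTimeEpoch >= @from AND c.DateTimeEpoch <= @to
// ORDER BY c.DateTimeEpoch DESC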
I think I found a possible solution, thanks for pointing out the issue with the index.
As stated in the following article https://learn.microsoft.com/en-us/azure/documentdb/documentdb-working-with-dates#indexing-datetimes-for-range-queries I changed the index for the datatype string to RangeIndex, to allow range queries:
DocumentCollection collection = new DocumentCollection { Id = "orders" };
collection.IndexingPolicy = new IndexingPolicy(new RangeIndex(DataType.String) { Precision = -1 });
await client.CreateDocumentCollectionAsync("/dbs/orderdb", collection);
And it seems to work! If there are any undesired side effects I will let you know.

is there a quick method to return saved data to its default value in mongoose.js

In my User Schema I have various fields with various default values. For example, see a few fields below:
acceptedStatus: {
  type: String,
  trim: true,
  default: 'no' // possibilities (no, yes, thinkingAboutIt, yesInFuture)
}
Is there a way to quickly return the saved data for a particular field to its default value without explicitly doing it like
user.acceptedStatus = 'no';
and, if so, is there a way to return all fields that carry default values to their original values? Thanks for your help. There are times when I need to do this quickly, and I didn't know if there were any methods I was missing.
One way could be to keep the schema definition in a plain object; from that object you can easily work out which properties have defaults.
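A minimal sketch of that idea (the model and field names are taken from the question; the resetDefaults helper itself is hypothetical):
const mongoose = require('mongoose');

// Keep the raw definition around so the defaults stay inspectable
const userDefinition = {
  acceptedStatus: { type: String, trim: true, default: 'no' },
  // ...other fields, some with defaults
};
const User = mongoose.model('User', new mongoose.Schema(userDefinition));

// Reset every field that declares a default back to that default value
function resetDefaults(doc) {
  for (const [field, options] of Object.entries(userDefinition)) {
    if (options && Object.prototype.hasOwnProperty.call(options, 'default')) {
      doc[field] = options.default;
    }
  }
  return doc;
}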

Storing data efficiently in MongoLab and in general

I have an app that listens to a websocket and stores usernames/userIDs (usernames are 1-20 bytes, userIDs are 17 bytes). This is not a big deal because it's only one document. However, for every round a user participates in, it pushes the round ID (24 bytes) and a 'score' decimal value (e.g. 1190.0015239999999).
The thing is, there is no limit to how many rounds there are and I can't afford to pay so much per month for mongolab. What's the best way to handle this data?
My thoughts:
- If there is a way to replace the _id field in MongoDB, I will replace it with the userID, which is 17 bytes long. Not sure if I can do that though.
- Store user data with timestamps and remove OLD data that has a score value less than 200.
- Cut off user names that are more than 10 characters.
- Completely remove round IDs (or replace the _id field with roundId). (Won't work since there are multiple round IDs in each document.)
- Round the decimal value to two places.
- Remove round IDs after 30 days.
tl;dr
Need to store data efficiently (< 500 MB) in MongoLab.
Documents consist of username (1-20 characters), userID (17 characters), rounds (object array) = [{round ID (24 characters), score (1190.0015239999999)}].
Thanks in advance!
Edit:
Document Schema:
userID: {type: String},
userName: {type: String},
rounds: [{roundID: String, score: String}]
Modelling 1:n relationships as embedded documents is not the best choice except for very rare cases, because there is a 16 MB size limit for BSON documents at the time of this writing.
A better (read: more scalable and efficient) approach is to use document references.
First, you need your player data, of course. Here is an example:
{
  _id: "SomeUserId",
  name: "SomeName"
}
There is no need for an extra userId field since each document needs to have an _id field with unique values anyway. Contrary to popular belief, this field's value does not have to be an ObjectId. So we have already reduced the size you need for your player data by roughly a third, if I am not mistaken.
Next, the results of each round:
{
  _id: {
    round: "SomeString",
    player: "SomeUserId"
  },
  score: 5,
  createdAt: ISODate("2015-04-13T01:03:04.0002Z")
}
A few things are worth noting here. First and foremost: do NOT use strings to record numeric values. Even grades should rather be stored as corresponding numerical values, otherwise you cannot compute averages and the like. I'll show more of that later. We are using a compound field for _id here, which is perfectly valid. Furthermore, it gives us a free index that optimizes a few of the most likely queries, like "How did player X score in round Y?"
db.results.find({"_id.player":"X","_id.round":"Y"})
or "What where the results of round Y?"
db.results.find({"_id.round":"Y"})
or "What we're the scores of Player X in all rounds?"
db.results.find({"_id.player":"X"})
However, by not using a string to save the score, even some nifty stats become rather cheap, for example "What was the average score of round Y?"
db.results.aggregate([
  { $match: { "_id.round": "Y" } },
  { $group: { _id: "$_id.round", averageScore: { $avg: "$score" } } }
])
or "What is the average score of each player in all rounds?"
db.results.aggregate([
  { $group: { _id: "$_id.player", averageAll: { $avg: "$score" } } }
])
While you could do these calculations in your application, MongoDB can do them much more efficiently since the data does not have to be sent to your app prior to processing it.
Next, the data expiration. We have a createdAt field of type ISODate, so we can let MongoDB take care of the rest by creating a TTL index:
db.results.ensureIndex(
  { "createdAt": 1 },
  { expireAfterSeconds: 60*60*24*30 }
)
So all in all, this should be pretty much the most efficient way of storing and expiring your data, while improving scalability at the same time.
So currently you are storing three data points in the array for each record.
Setting _id: false will prevent Mongoose from automatically creating an _id for each subdocument. If you don't need roundID, then you can use the following, which only stores one data point in the array:
rounds: [{ _id: false, score: String }]
Otherwise, if roundID actually has meaning, use the following, which stores two data points in the array:
rounds: [{ _id: false, roundID: String, score: String }]
Lastly, if you just need an ID for reference purposes, use the following, which will store two data points in the array: a random id and the score:
rounds: [{ score: String }]
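A minimal sketch of how the trimmed-down variant could look in the full schema (field names taken from the question's edit; the score is stored as a Number rather than a String, as recommended in the first answer):
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  userID: { type: String },
  userName: { type: String },
  // _id: false suppresses the automatic subdocument _id, saving space per round
  rounds: [{ _id: false, roundID: String, score: Number }]
});
const User = mongoose.model('User', userSchema);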
