Prisma how to add hours while comparing columns in the same table - node.js

I am using NestJS and Prisma 4.4.0.
My table:
id: int
created_at: Timestamp
first_active: Timestamp
The query I want to implement:
select count(*) from {table} where id = {id} and first_active <= {created_at} + 48 hours
I want to get a count of users who were active within 48 hours of creation.
With https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#compare-columns-in-the-same-table I can now reference another column of the same table.
Example
where: {
  id: id,
  first_active: {
    // compare against the created_at column of the same row
    lte: this.prisma.table.fields.created_at, // not sure how to add 48 hours to this
  },
},
Any suggestion on how I can add time (48 hours) to created_at?

For now, you will need to perform two queries to accomplish this: first retrieve created_at, then add the necessary hours and use the result in a second query.
You could create a feature request if you would like to see this functionality added to Prisma.
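A minimal sketch of that two-query workaround (assuming the model is called user; adjust the names to your schema):
const user = await this.prisma.user.findUniqueOrThrow({
  where: { id },
  select: { created_at: true },
});

// created_at + 48 hours
const cutoff = new Date(user.created_at.getTime() + 48 * 60 * 60 * 1000);

const count = await this.prisma.user.count({
  where: {
    id,
    first_active: { lte: cutoff },
  },
});
Alternatively, on PostgreSQL you could drop down to this.prisma.$queryRaw and express the interval arithmetic directly in SQL (first_active <= created_at + interval '48 hours').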

Related

How to avoid Cassandra ALLOW FILTERING?

I have the following data model:
campaigns {
id int PRIMARY KEY,
scheduletime text,
SchduleStartdate text,
SchduleEndDate text,
enable boolean,
actionFlag boolean,
.... etc
}
I need to fetch the data based on start date and end date without ALLOW FILTERING.
I have received suggestions to redesign the schema to fulfil the requirement, but I cannot filter the data by id, since I need the data between the dates.
Can someone suggest a good way to support the following query:
select * from campaigns WHERE startdate='XXX' AND endDate='XXX'; // without the ALLOW FILTERING thing
CREATE TABLE campaigns (
SchduleStartdate text,
SchduleEndDate text,
id int,
scheduletime text,
enable boolean,
PRIMARY KEY ((SchduleStartdate, SchduleEndDate),id));
You can then run the following queries against the table:
select * from campaigns where SchduleStartdate = 'xxx' and SchduleEndDate = 'xx'; -- answers the question above
select * from campaigns where SchduleStartdate = 'xxx' and SchduleEndDate = 'xx' and id = 1; -- if you want to filter the data further, for specific ids
Here SchduleStartdate and SchduleEndDate are used as the partition key, and id is used as the clustering key to make sure the entries are unique.
This way, you can filter based on start date, end date, and then id if needed.
One downside is that filtering by id alone won't be possible, since you must first restrict the partition key.
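From Node.js with the cassandra-driver package, querying that table might look like this (a sketch; the connection details and date values are illustrative):
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'test',
});

// Both partition key columns are restricted, so no ALLOW FILTERING is needed.
const query = 'SELECT * FROM campaigns WHERE SchduleStartdate = ? AND SchduleEndDate = ?';
client.execute(query, ['2016-01-01', '2016-01-31'], { prepare: true })
  .then((result) => console.log(result.rows));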

Suggestions on how to store age and other time-derived data in MongoDB using Mongoose

I am trying to store my users' age in MongoDB, and I want to calculate the age of each user dynamically and update it autonomously when possible.
I'm thinking of two approaches. One is to store the date of birth and, on query, use today's date as a reference to find the difference; the problem with this is that the age will not be updated on the schema in MongoDB. How do I solve this?
The other is to set up a hook for an API and trigger that API to update the details.
My schema looks something like this. Though I am computing the age when saving, it won't get updated as and when needed. Also, as mentioned, taking today's date as a reference is affecting our analytics.
dateOfBirth: Date,
age: Number
You can store the DOB and project the age while querying, using the aggregation framework:
db.getCollection('callmodels').aggregate([{
  $project: {
    name: 1,
    email: 1,
    // (now - dob) in ms, divided by ms per year, truncated to whole years
    age: { $trunc: { $divide: [{ $subtract: [new Date(), '$dob'] }, 1000 * 60 * 60 * 24 * 365] } }
  }
}])
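If you also want the age available on Mongoose documents at read time, a virtual is one option (a sketch; the schema fields and rounding are illustrative):
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  name: String,
  email: String,
  dateOfBirth: Date,
});

// Derive age from dateOfBirth on every read, so it can never go stale.
userSchema.virtual('age').get(function () {
  const ms = Date.now() - this.dateOfBirth.getTime();
  return Math.floor(ms / (1000 * 60 * 60 * 24 * 365.25)); // approximate whole years
});

const User = mongoose.model('User', userSchema);
The trade-off is the one noted in the question: the value is computed relative to "now" at read time and is never persisted, which may not suit analytics.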

knex SQL - column must appear in the GROUP BY clause or be used in an aggregate function

I am using Postgres and knex with Node.js to aggregate session_ids by hour in a given date range. My start_timestamp is in the format 2016-12-12 14:53:17.243-05. I want to group all records by the hour, such that:
Hour 14:00:00-15:00:00 would have n records within that hour.
Hour 15:00:00-16:00:00 would have n records within that hour.
etc...
Given the query
db.knex.count('t1.id as session_ids')
.from('sessions as t1')
.where(db.knex.raw('t1.start_timestamp'), '>=', startDate)
.andWhere(db.knex.raw('t1.start_timestamp'), '<=', endDate)
.groupByRaw("date_trunc('hour', t1.start_timestamp)");
My start date and end date dictate a range of one day, so there shouldn't be duplicate/ambiguous times when grouping by hour of the day.
I am successfully able to get counts of each record by the hour:
[ anonymous { session_ids: '6' },
anonymous { session_ids: '1' },
anonymous { session_ids: '1' },
anonymous { session_ids: '3' },
...
But I need their actual time displayed, as such:
{
hour: 10
session_ids: 5 //5 session IDs
}
Adding .select('t1.start_timestamp') below count, as shown in this example, I get the following error:
Unhandled rejection error: column "t1.start_timestamp" must appear in
the GROUP BY clause or be used in an aggregate function
This error doesn't make sense to me as t1.start_timestamp appears in the GROUP BY stage.
Moreover, I've already checked out PostgreSQL - must appear in the GROUP BY clause or be used in an aggregate function for help. I don't have ambiguity in my records, so the DB should know which records to select.
I am not familiar with knex, but the following query does not work (it gives the same error message you noted):
SELECT t1.start_timestamp, count(t1.id) AS session_ids
FROM sessions AS t1
GROUP BY date_trunc('hour', t1.start_timestamp);
However, the following query does work:
SELECT date_trunc('hour', t1.start_timestamp), count(t1.id) AS session_ids
FROM sessions AS t1
GROUP BY date_trunc('hour', t1.start_timestamp);
Can you try changing .select accordingly?
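In knex, that change would look something like this (a sketch based on the query in the question):
db.knex
  .select(db.knex.raw("date_trunc('hour', t1.start_timestamp) AS hour"))
  .count('t1.id as session_ids')
  .from('sessions as t1')
  .where('t1.start_timestamp', '>=', startDate)
  .andWhere('t1.start_timestamp', '<=', endDate)
  .groupByRaw("date_trunc('hour', t1.start_timestamp)");
Selecting the same date_trunc('hour', ...) expression that you group by keeps the SELECT list and the GROUP BY clause consistent, which is what Postgres requires.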

Cassandra + fetch the last records using an IN query

I am new to Cassandra, which I am using with Node.js.
I have a user_activity table, into which data is inserted based on user activity.
I also have a list of users, and I need to fetch the last record for each of those particular users.
I'd rather not put the query in a for loop. Is there another way to achieve this?
Example Code:
var userlist = ["12", "34", "56"];
var query = 'SELECT * FROM user_activity WHERE userid IN ?';
server.user.execute(query, [userlist], {
  prepare: true
}, function(err, result) {
  console.log(result);
});
How do I get the last record for each user in the list?
Example:
user id = 12 - need to get last record;
user id = 34 - need to get last record;
user id = 56 - need to get last record;
I need to get these 3 records.
Table Schema:
CREATE TABLE test.user_activity (
userid text,
ts timestamp,
clientid text,
clientip text,
status text,
PRIMARY KEY (userid, ts)
)
It is not possible if you use the IN filter.
If you filter on a single userid, you can apply ORDER BY. Of course, you need a column for the inserted/updated time, which here is the ts clustering column. The query will look like this:
SELECT * FROM user_activity WHERE userid = '12' ORDER BY ts DESC LIMIT 1;
You can set N to get that many of the latest records:
SELECT * FROM user_activity WHERE userid IN ? ORDER BY ts DESC LIMIT N
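If IN turns out to be a dead end, one workaround is to issue the single-user query for each id concurrently rather than in a sequential loop (a sketch, reusing the client from the question):
var userlist = ['12', '34', '56'];
// ts is the clustering column, so ORDER BY ts DESC LIMIT 1 returns the latest record.
var query = 'SELECT * FROM user_activity WHERE userid = ? ORDER BY ts DESC LIMIT 1';

Promise.all(
  userlist.map(function (userid) {
    return server.user.execute(query, [userid], { prepare: true });
  })
).then(function (results) {
  results.forEach(function (result) {
    console.log(result.first()); // last record for this user, or null
  });
});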

Index multiple MongoDB fields, make only one unique

I've got a MongoDB database of metadata for about 300,000 photos. Each has a native ID that needs to be unique to protect against duplicate insertions. It also has a time stamp.
I frequently need to run aggregate queries to see how many photos I have for each day, so I also have a date field in the format YYYY-MM-DD. This is obviously not unique.
Right now I only have an index on the id property, like so (using the Node driver):
collection.ensureIndex(
{ id:1 },
{ unique:true, dropDups: true },
function(err, indexName) { /* etc etc */ }
);
The group query for getting the photos by date takes quite a long time, as one can imagine:
collection.group(
{ date: 1 },
{},
{ count: 0 },
function ( curr, result ) {
result.count++;
},
function(err, grouped) { /* etc etc */ }
);
I've read through the indexing strategy docs, and I think I need to also index the date property. But I don't want to make it unique, of course (though I suppose it's fine to make it unique in combination with the unique id). Should I do a regular compound index, or can I chain the .ensureIndex() function and only specify uniqueness for the id field?
MongoDB does not have "mixed"-type indexes that can be partially unique. On the other hand, why don't you use _id instead of your id field, if possible? It's already indexed and unique by definition, so it will prevent you from inserting duplicates.
Mongo can only use a single index in a query clause - important to consider when creating indexes. For this particular query and these requirements, I would suggest having a separate unique index on the id field, which you get automatically if you use _id. Additionally, you can create a non-unique index on the date field only. If you run a query like this:
db.collection.find({"date": "01/02/2013"}).count();
Mongo will be able to answer the query from the index alone (a covered index query), which is the best performance you can get.
Note that Mongo won't be able to use a compound index on (id, date) if you are searching by date only. Your query has to match the index prefix first, i.e. if you search by id then the (id, date) index can be used.
Another option is to pre-aggregate in the schema itself: whenever you insert a photo, increment a counter for that day. This way you don't need to run any aggregation jobs. You can also run some tests to determine whether this approach is more performant than aggregation.
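A sketch of that setup with the Node driver (the counters collection and the date value are illustrative):
// Two separate indexes: unique on id (or simply rely on the built-in _id index),
// plus a non-unique index on date for the per-day counts.
collection.ensureIndex({ id: 1 }, { unique: true }, function (err, indexName) { /* etc */ });
collection.ensureIndex({ date: 1 }, {}, function (err, indexName) { /* etc */ });

// Pre-aggregation option: bump a per-day counter whenever a photo is inserted.
counters.update(
  { date: '2013-02-01' },
  { $inc: { count: 1 } },
  { upsert: true },
  function (err, result) { /* etc */ }
);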
