Specifying that a column should just be inserted, not upserted - node.js

I have some data I want to insert via insertGraph, à la:
ModelName
  .query(trx)
  .insertGraph(data)
The problem is that I have a guard with allowInsert that specifies which columns should be populated. I have a column holding a foreign key to another table, and I don't want this column to be populated. I keep getting "trying to upsert an unallowed relation". I'm at a loss on how to specify that foreignId shouldn't be populated.
My code looks like this with the allowInsert guard
ModelName
  .query(trx)
  .allowInsert('[subrelation2.[columnToPopulate1, columnToPopulate2]]')
  .insertGraph(data)
P.S. I've tried specifying foreignId in the allowInsert condition, to no avail. Specifying relation2.* allows the insertion, but I want to retain the sanity checks.

It seems like you are specifying columns of the relation2 model instead of subrelations. Is columnToPopulate1 a subrelation of the model behind relation2? Judging by the name, it looks like a column of the model, which is wrong.
I think you want to use a relation to insert two columns in the relation2 model. Something like:
let data = {
  modelNameColumn: 'value',
  relation2: {
    columnToPopulate1: 'value',
    columnToPopulate2: 'value',
  }
}
await ModelName
  .query(trx)
  .allowInsert('[relation2]')
  .insertGraph(data)
In the allowInsert method you can only specify which relations are allowed to be inserted; you can't define which columns.
In case you want to rule out the possibility of a column being updated, you can use a $beforeUpdate() hook:
class Model2 extends Model {
  async $beforeUpdate(opt, queryContext) {
    await super.$beforeUpdate(opt, queryContext);
    if (this.columnName) throw new Error('columnName should not be updated');
  }
}

Related

Atomic way of inserting a row if it does not exist in Bigtable

We would like to insert a row into Bigtable only if it does not already exist. Our idea is to use the CheckAndMutateRow API with an onNoMatch insert. We are using the Node.js SDK; the following seems to work, but we don't know about the atomicity of the operation:
const row = table.row('phone#4c410523#20190501');
const filter = [];
const config = {
  onNoMatch: [
    {
      method: 'insert',
      data: {
        stats_summary: {
          os_name: 'android',
          timestamp,
        },
      },
    },
  ],
};
await row.filter(filter, config);
CheckAndMutateRow is atomic. Based on the API definition:
"Mutations are applied atomically and in order, meaning that earlier mutations can be masked / negated by later ones. Cells already present in the row are left unchanged unless explicitly changed by a mutation. After committing the accumulated mutations, resets the local mutations."
MutateRow does an upsert: if you give it a row key, column name, and timestamp, it will create a new cell if it doesn't exist, or overwrite it otherwise. You can achieve this behavior with a "simple write".
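As a minimal sketch, assuming the same table object and row key as in the question, such a simple write with the Node.js SDK could look like this:
// Unconditional upsert: creates the cell, or overwrites its latest version.
await table.insert({
  key: 'phone#4c410523#20190501',
  data: {
    stats_summary: {
      os_name: 'android',
    },
  },
});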
Conditional writes are used, e.g., if you want to check a value before you overwrite it. Let's say you want to set column A to X only if column B is Y, or overwrite column A only if column A's current value is Z.
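For illustration, here is a sketch of such a conditional write with the Node.js SDK; the os_build column and the value being checked are hypothetical:
const row = table.row('phone#4c410523#20190501');
// Hypothetical condition: the row's os_build column holds a specific value.
const filter = [
  { column: 'os_build' },
  { value: 'PQ2A.190405.003' },
];
const config = {
  // Applied only when the filter matches; onNoMatch covers the opposite case.
  onMatch: [
    {
      method: 'insert',
      data: {
        stats_summary: {
          os_name: 'android',
        },
      },
    },
  ],
};
await row.filter(filter, config);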

Android Studio Room query to get a random row of the db and save the row's 2nd column in a variable

Like the title mentions, I want a query that gets a random row of the existing database. After that, I want to save the data which is in a specific column of that row in a variable for further purposes.
The query I have at the moment is as follows:
#Query("SELECT * FROM data_table ORDER BY RANDOM() LIMIT 1")
fun getRandomRow()
For now I am not sure if this query even works, but how would I go about writing my function to pass a specific column of that randomly selected row to a variable?
Thanks for your advice, tips, and/or solutions!
Your query is almost correct; however, you should specify a return type in the function signature. For example, if the records in the data_table table are mapped using a data class called DataEntry, then the query could read as shown below (note I've also added the suspend modifier, so the query must be run from a coroutine):
#Query("SELECT * FROM data_table ORDER BY RANDOM() LIMIT 1")
suspend fun getRandomRow(): DataEntry?
If your application interacts with the database via a repository and view model (as described here: https://developer.android.com/topic/libraries/architecture/livedata) then the relevant methods would be along the lines of:
DataRepository
suspend fun findRandomEntry(): DataEntry? = dataEntryDao.getRandomRow()
DataViewModel
fun getRandomRecord() = viewModelScope.launch(Dispatchers.IO) {
    val entry: DataEntry? = dataRepository.findRandomEntry()
    entry?.let {
        // You could assign a field of the DataEntry record to a variable here
        // e.g. val name = entry.name
    }
}
The above code uses the view model's coroutine scope to query the database via the repository and retrieve a random DataEntry record. Provided the returned DataEntry record is not null (i.e. your database contains data), you can assign the fields of the DataEntry object to variables in the let block of the getRandomRecord() method.
As a final point, if it's only one field that you need, you could specify this in the database query. For example, imagine the DataEntry data class has a String field called name. You could retrieve this bit of information only and ignore the other fields by restructuring your query as follows:
#Query("SELECT name FROM data_table ORDER BY RANDOM() LIMIT 1")
suspend fun getRandomRow(): String?
If you go for the above option, remember to refactor your repository and view model to expect a String instead of a DataEntry object.
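A minimal sketch of that refactor, keeping the hypothetical names used above:
DataRepository
suspend fun findRandomName(): String? = dataEntryDao.getRandomRow()
DataViewModel
fun getRandomName() = viewModelScope.launch(Dispatchers.IO) {
    // name holds just the one column, or null if the table is empty
    val name: String? = dataRepository.findRandomName()
}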

Firestore delete ALL fields but ONE

Is it possible to delete ALL fields in a Firestore document apart from ONE field in a SINGLE database write (without reading the document first)?
I know I have a document with some properties, but I don't know all of them. I want to delete all of these properties except one that I know.
The one that I know is keep.
{
  keep: 'keep',
  remove1: 'remove',
  remove2: 'remove',
  remove3: 'remove',
}
The doc after the transaction should be:
{
  keep: 'keep',
}
I could have used firebase.firestore.FieldValue.delete() on each of the keys, but that requires knowing all of them.
If you know the name and the value of the field you want to keep, you can just overwrite the document with an object that only contains the property you know:
const keepValue = ...;
db.collection('mycollection')
  .doc('mydoc')
  .set({ keep: keepValue });
Since we use the set() method without the merge option, all the fields in the document will be overwritten with the object passed to the set() method.
If you don't know the value (or the name) of the field you want to keep, you will need to read the document, in order to find this value or name.
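In that case, a sketch of the read-then-write approach, assuming the same v8-style API and the field name keep used above:
const docRef = db.collection('mycollection').doc('mydoc');
// One read to recover the value of the field we want to keep...
const snapshot = await docRef.get();
const keepValue = snapshot.get('keep');
// ...then set() without merge removes every other field.
await docRef.set({ keep: keepValue });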

loopback relational database hasManyThrough pivot table

I seem to be stuck on a classic ORM issue and don't really know how to handle it, so at this point any help is welcome.
Is there a way to get the pivot table on a hasManyThrough query? Better yet, to apply some filter or sort to it? A typical example:
Table products
id, title
Table categories
id, title
Table products_categories
productsId, categoriesId, orderBy, main
So, in the above scenario, say you want to get all categories of product X that are main (main = true), or you want to sort the product categories by orderBy.
What happens now is a first SELECT on products to get the product data, a second SELECT on products_categories to get the categoriesId, and a final SELECT on categories to get the actual categories. Ideally, filters and sorting should be applied to the 2nd SELECT, like:
SELECT `id`,`productsId`,`categoriesId`,`orderBy`,`main` FROM `products_categories` WHERE `productsId` IN (180) AND `main` = 1 ORDER BY `orderBy` DESC
Another typical example would be wanting to order the product images based on the order the user wants them in,
so you would have a products_images table
id, image, productsId, orderBy
and you would want to
SELECT * FROM `products_images` WHERE `productsId` IN (180) ORDER BY `orderBy` ASC
Is that even possible?
EDIT: Here is the relationship needed on the intermediate table to get what I need, based on my schema.
Products.hasMany(Images, {
  as: 'Images',
  foreignKey: 'productsId',
  through: ProductsImagesItems,
  scope: function (inst, filter) {
    return { active: 1 };
  }
});
The thing is, the scope function gives me access to the final result and not to the intermediate table.
I am not sure I fully understand your problem(s), but you should definitely move away from the table concept and express your problem in terms of Models and Relations.
The way I see it, you have two models: Product (properties: title) and Category (properties: main).
Then, you can have relations between the two, potentially
Product belongsTo Category
Category hasMany Product
This means a product will belong to a single category, while a category may contain many products. There are other relations available.
Then, using the generated REST API, you can filter GET requests to get items based on their properties (like main in your case), or use custom GET requests (automatically generated when you add relations) to get, for instance, all products belonging to a specific category.
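As a sketch, those two relations could be wired up in boot code like this (the alias and foreign-key names are illustrative):
// A product points at one category; a category holds many products.
Product.belongsTo(Category, { as: 'category', foreignKey: 'categoryId' });
Category.hasMany(Product, { as: 'products', foreignKey: 'categoryId' });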
Does this help?
Based on what you have here I'd probably recommend using the scope option when defining the relationship. The LoopBack docs show a very similar example of the "product - category" scenario:
Product.hasMany(Category, {
  as: 'categories',
  scope: function (instance, filter) {
    return { type: instance.type };
  }
});
In the example above, instance is a category that is being matched, and each product would have a new categories property containing the matching Category entities for that Product. Note that this does not follow your exact data scheme, so you may need to play around with it. Also, I think your API query would have to specify that you want the categories relation included (related data is not loaded by default):
/api/Products/13?filter={"include":["categories"]}
I suggest you define a custom remote method in Product.js that does the work for you.
Product.getCategories = function (productId, cb) {
  // If you take a product title as a param instead of productId,
  // you will first need to look up the product ID.
  // Query the pivot model directly so the where / include / order
  // filters apply to the join table itself (the model and relation
  // names below are illustrative and must match your setup).
  const ProductsCategories = Product.app.models.ProductsCategories;
  ProductsCategories.find({
    where: { productsId: productId, main: 1 }, // 1. only main categories
    include: ['product', 'category'],          // 2. nest related objects
    order: 'orderBy DESC',                     // 3. sort on the pivot column
  }, function (err, items) {
    // Each item in the array has nested Product and Category objects.
    cb(err, items);
  });
};

How to skip a unique constraint violation?

I am trying to insert many records into a table that has a unique constraint, so if a user tries to add a new record with the same unique value, I get a DbUpdateException.
I would like to know how to skip this error and still add the remaining records the user is trying to insert into the table.
How can I do that?
Thanks.
One approach could be to catch the DbUpdateException, and use its Entries property to remove the duplicate entities from the context.
You could then retry the save - rinse and repeat - and eventually all the non-duplicate entities will be saved.
E.g.
var duplicates = new List<MyEntity>();
...
catch (DbUpdateException ex)
{
    // Detach each conflicting entity so the next save can succeed
    foreach (var entry in ex.Entries)
    {
        duplicates.Add((MyEntity)entry.Entity);
        entry.State = EntityState.Detached;
    }
    ReTrySave(); // do whatever you need to do to re-enter your saving code
}
...
// Report the duplicate entities to the user
ReportToUser(duplicates);
NOTE - treat as pseudo code as I haven't attempted to compile this snippet.
