UserInfoProvider.DeleteUser() vs DeleteData(whereCondition) - Kentico

I know that DeleteUser() will run procedures to delete all relationships etc. Will the private internal DeleteData() with a where condition also delete all relationships, or will it just try to delete the main record from the table? If any relational data exists, will it throw an error?

If you call UserInfoProvider.DeleteData(), it won't delete the related data. It just executes the object's deletion SQL query; it won't even look for the cms.user.removedependencies query.
On the other hand, calling DeleteData() on an info object does cause the related data to be deleted.
If you need to bulk delete users, first retrieve them from the database using an object query (make sure you restrict the columns; UserID should be enough), and then iterate through the collection, calling Delete() on each of them:
// Delete all disabled users; Delete() also removes each user's dependencies.
foreach (var user in UserInfoProvider.GetUsers().Where("UserEnabled=0").Columns("UserID").TypedResult.Items)
{
    user.Delete();
}

Related

How to delete a document from a collection in Cosmos DB using Java?

How can I delete a document from a collection?
AsyncDocumentClient client = getDBClient();
RequestOptions options = new RequestOptions();
options.setPartitionKey(new PartitionKey("143003"));
client.deleteDocument(String.format("dbs/test-lin/colls/application/docs/%s", document.id()), options);
I am trying to delete a set of documents from the collection based on a condition. I have set the partition key, and the read-write keys are being used (so there is no permission issue).
There are no errors when executing this code, but the document is not deleted from the collection.
How do I fix this?
@Suj Patil
You should call subscribe(). The publisher does not do anything until someone subscribes:
client.deleteDocument(String.format("dbs/test-lin/colls/application/docs/%s", document.id()), options).subscribe();
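For completeness, here is a minimal sketch of subscribing with error handling and waiting for the delete to finish, assuming the RxJava-based com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient; the class, method and variable names below are illustrative, not from the question:
import com.microsoft.azure.cosmosdb.PartitionKey;
import com.microsoft.azure.cosmosdb.RequestOptions;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;

import java.util.concurrent.CountDownLatch;

public class DeleteDocumentSketch {

    // Deletes one document and blocks until the async call completes.
    static void deleteAndWait(AsyncDocumentClient client, String docLink, String partitionKeyValue)
            throws InterruptedException {
        RequestOptions options = new RequestOptions();
        options.setPartitionKey(new PartitionKey(partitionKeyValue));

        CountDownLatch done = new CountDownLatch(1);
        client.deleteDocument(docLink, options)
              .subscribe(
                  response -> System.out.println("Deleted: " + docLink),
                  error -> {
                      // Surfaces failures such as NotFound when the link or partition key is wrong.
                      error.printStackTrace();
                      done.countDown();
                  },
                  done::countDown); // onCompleted
        done.await();
    }
}
A wrong partition key value typically shows up in the onError handler as a NotFound error rather than a silent no-op, which makes this kind of issue much easier to diagnose.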

EntityManager - How can I roll back the 1st table's updated data if the 2nd table's update is not successful, using the Hibernate EntityManager?

I want to update two database tables using the Hibernate EntityManager. Currently I update the 2nd table after verifying that the data has been updated in the 1st table.
My question is how to roll back the 1st table if the data is not updated in the 2nd table.
This is how I am updating an individual table:
try {
    wapi = getWapiUserUserAuthFlagValues(subject, UserId);
    wapi.setFlags((int) flags);
    entityManager.getTransaction().begin();
    entityManager.merge(wapi);
    entityManager.flush();
    entityManager.getTransaction().commit();
} catch (NoResultException nre) {
    wapi = new Wapi();
    wapi.setSubject(merchant);
    wapi.setUserId(UserId);
    wapi.setFlags((int) flags);
    entityManager.getTransaction().rollback();
}
Note: I am calling separate methods to update each table's data.
Thanks
I found the solution. Basically I was calling two methods to update the two DB tables, and both of those methods were called from one method.
For example, I call methods p and q from method r.
Initially I was calling the EntityManager's begin, merge, flush and commit methods in both p and q.
Now I call begin and commit in r, and only merge and flush in p and q.
So both tables are now updated together, and rollback is also simple; a rough sketch of the structure is below.
Hope it helps someone. I wasted time on this, so perhaps it can save someone else.
Thanks
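For illustration, a minimal sketch of that structure (the class name and method bodies are placeholders, not the original code; only the transaction layout matters):
import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

public class TwoTableUpdate {

    // r owns the transaction boundary for both updates.
    void r(EntityManager em) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        try {
            p(em);           // update the 1st table (merge + flush only)
            q(em);           // update the 2nd table (merge + flush only)
            tx.commit();     // both updates become visible together
        } catch (RuntimeException e) {
            if (tx.isActive()) {
                tx.rollback();  // the 1st table's changes are undone if q fails
            }
            throw e;
        }
    }

    void p(EntityManager em) {
        // load and modify the first entity here, then:
        // em.merge(firstEntity);
        em.flush();
    }

    void q(EntityManager em) {
        // load and modify the second entity here, then:
        // em.merge(secondEntity);
        em.flush();  // a failure here propagates up and triggers the rollback in r
    }
}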

Sequelize/Postgres - how to update each row individually on migrate?

I have lots of records in my Postgres database (using Sequelize to communicate with it).
I want to have a migration script, but due to locking I have to make each change as atomic as possible.
So I don't want to select all the rows, modify them, and then save them all at once.
In Mongo I have a forEach cursor which allows me to update a record, save it, and only then move to the next one.
Is there anything similar in Sequelize/Postgres?
Currently I am doing that in my code: getting the IDs, then performing a query for each one.
return migration.runOnAllUpdates((record) => {
    record.change = 'new value';
    return record.save();
});
where runOnAllUpdates will simply give me records one by one.
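For reference, a minimal sketch of what such a helper could look like with plain Sequelize (the Record model, the ./models path and the helper name are assumptions for illustration; findByPk assumes Sequelize v5+): fetch only the IDs up front, then load, modify and save each row in its own query.
const { Record } = require('./models'); // placeholder model

// Calls `updateFn` on each row, one row per query, saving before moving on.
async function runOnAllUpdates(updateFn) {
    // Fetch only the primary keys to keep the initial query cheap.
    const ids = await Record.findAll({ attributes: ['id'], raw: true });

    for (const { id } of ids) {
        const record = await Record.findByPk(id); // load a single row
        await updateFn(record);                   // caller mutates and saves it
    }
}

// Usage, mirroring the snippet above:
runOnAllUpdates(async (record) => {
    record.change = 'new value';
    return record.save();                         // one UPDATE per row
}).catch(console.error);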

model.count not a reliable test of existence for fast asynchronous writes

Before saving a new document to a MongoDB collection via Node.js in my MongoLab database, I'm using model.count to check certain fields to prevent a duplicate entry:
MyModel.count({field1: criteria1, field2: criteria2}, function (err, count) {
    if (count == 0) {
        // Create new document and call .save()
    }
});
However, during testing I'm noticing many duplicates (inconsistent in number across test runs) in the collection after the process finishes, although not as many as if I did not do the .count() check.
Since the MyModel.count() statement is embedded in a callback being repeatedly called whenever the 'readable' event is emitted by one of several ReadStreams, I suspect there is an async issue caused by rapid writes to the collection. Specifically, two or more identical and nearly simultaneous calls to MyModel.count return a count of 0, and end up each creating and saving (identical) documents to the collection.
Does this sound probable? And if so how can I enforce uniqueness of document writes without setting timeouts or using a synchronous pattern?
As Peter commented, the right way to enforce uniqueness is to create a unique index on the collection over those fields and then handle the code 11000 (duplicate key) insert error to recover from attempts at creating duplicates.
You can add the index via your schema before you create the model from it:
mySchema.index({field1: 1, field2: 1}, {unique: true});
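A minimal sketch of the whole pattern, assuming the schema and model names from the question and callback-style Mongoose; the field types and the error handling are illustrative:
var mongoose = require('mongoose');

var mySchema = new mongoose.Schema({
    field1: String, // placeholder types
    field2: String
});

// The unique compound index is what actually enforces uniqueness,
// no matter how many near-simultaneous writers there are.
mySchema.index({field1: 1, field2: 1}, {unique: true});

var MyModel = mongoose.model('MyModel', mySchema);

function saveIfNew(doc) {
    new MyModel(doc).save(function (err) {
        if (err && err.code === 11000) {
            // Duplicate key: another writer got there first - safe to ignore.
            return;
        }
        if (err) {
            console.error(err);
        }
    });
}
Note that the index has to actually exist in MongoDB before it can reject duplicates, so on a collection that already contains duplicates the index build will fail until they are cleaned up.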

Insert/update Doctrine object from Excel

On the project I am currently working on, I have to read an Excel file (with over 1000 rows), extract all of its rows, and insert or update them in a database table.
In terms of performance, is it better to add all the records to a Doctrine_Collection and then insert/update them using the fromArray() method? Another possible approach is to create a new object for each row (an Excel row becomes an object) and then save it, but I think that is worse in terms of performance.
Every time the Excel file is uploaded, its rows need to be compared to the existing objects in the database. If a row does not exist as an object, it should be inserted; otherwise it should be updated. My first approach was to turn both the objects and the rows into arrays (or Doctrine_Collections) and then compare the two before performing the needed operations.
Can anyone suggest any other possible approach?
We did a bit of this in a project recently, with CSV data, and it was fairly painless. There's a symfony plugin, tmCsvPlugin, but we have extended it quite a bit since, so the version in the plugin repo is pretty out of date. Must add that to the #TODO list :)
Question 1:
I don't explicitly know about performance, but I would guess that adding the records to a Doctrine_Collection and then calling Doctrine_Collection::save() would be the neatest approach. It would also be handy if an exception were thrown somewhere and you had to roll back your last save.
Question 2:
If you can use a row field as a unique identifier (let's assume a username), then you can search for an existing record. If you find one, and assuming your imported row is an array, use Doctrine_Record::synchronizeWithArray() to update that record, then add it to a Doctrine_Collection. When complete, just call Doctrine_Collection::save().
A fairly rough 'n' ready implementation:
// set up a new collection
$collection = new Doctrine_Collection('User');

// assuming $row is an associative
// array representing one imported row.
foreach ($importedRows as $row) {

    // try to find an existing record
    // based on a unique identifier.
    $user = Doctrine_Core::getTable('User')
        ->findOneByUsername($row['username']);

    // create a new user record if
    // no existing record is found.
    if (!$user instanceof User) {
        $user = new User();
    }

    // sync record with current data.
    $user->synchronizeWithArray($row);

    // add to collection.
    $collection->add($user);
}

// done. save collection.
$collection->save();
Pretty rough but something like this worked well for me. This is assuming that you can use your imported row data in some way to serve as a unique identifier.
NOTE: be wary of synchronizeWithArray() if you're using sf1.2/Doctrine 1.0; if I remember correctly, it was not implemented correctly. It works fine in Doctrine 1.2, though.
I have never worked with Doctrine_Collections, but I can answer in terms of database queries and code logic in a broader sense. I would apply the following logic:
1. Fetch all the existing rows from the database in a single query and store them in an array, $storedSheet.
2. Create a single array of all the rows of the uploaded Excel sheet, call it $uploadedSheet. The structures of $uploadedSheet and $storedSheet should be similar (both two-dimensional, so rows and cells can be identified and compared).
3. Run foreach loops over $uploadedSheet as follows, and only identify which rows need to be inserted and which need to be updated (do the actual queries later):
$rowsToBeUpdated = array();
$rowsToBeInserted = array();

foreach ($uploadedSheet as $row => $eachRow) {
    if (is_array($storedSheet[$row])) {
        foreach ($eachRow as $column => $value) {
            if ($value != $storedSheet[$row][$column]) {
                // A difference was detected for this row.
                $rowsToBeUpdated[$row] = true;
                break; // No need to check this row any further.
            }
        }
    } else {
        $rowsToBeInserted[$row] = true;
    }
}
4. This way you have two arrays. Now perform two database queries:
bulk insert all the rows of $uploadedSheet whose numbers are stored in the $rowsToBeInserted array;
bulk update all the rows of $uploadedSheet whose numbers are stored in the $rowsToBeUpdated array.
These bulk queries are the key to faster performance.
Let me know if this helped, or if you want to know something else.
