How to initialize a database with default values in SQLAlchemy? - python-3.x

I want to insert certain default values into the database when it is first created.
Is there a hook/function available for that, so that it executes only once, right after the database is created?
One way would be to use the Inspector to check whether the table/database already exists, set a flag before creating the table, and then use this flag to insert the default values.
Is there a better way to do it?

I usually have a dedicated install function that is called for this purpose, as I can do anything in that function that I need. However, if you just want to launch your application and call Base.metadata.create_all, then you can use the after_create event. You'd have to test whether it fires with one metadata object or with multiple table objects and handle that accordingly. In this context you even get a connection object that you can use to insert data. Depending on transaction management and database support, this could even mean that table creation is rolled back if the insert fails.
Depending on your needs, both ways are okay, but if you are certain you only need to insert data after creation, then the event way is actually the best idea.
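A minimal sketch of the after_create approach (the Role table and its default rows are made up for illustration; the event fires inside create_all and hands the listener the table and a connection):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Role(Base):
    __tablename__ = "roles"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

def insert_defaults(target, connection, **kw):
    # Runs once, right after the "roles" table is created;
    # uses the connection of the DDL operation.
    connection.execute(
        target.insert(),
        [{"name": "admin"}, {"name": "user"}],
    )

# Listen on the Table object so the hook fires only for this table.
event.listen(Role.__table__, "after_create", insert_defaults)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)  # triggers insert_defaults
```

Because the listener is attached to the table rather than the metadata, it is skipped entirely when create_all finds the table already present, which gives you the run-only-once behavior for free.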

Related

How can I skip updating unchanged values and only update changed values in MongoDB and Node.js?

I have to send some object data from an Angular reactive form to MongoDB. In that object some values will have changed and some will be unchanged. I use code like the following:
db.findByIdAndUpdate(id, {item1: value1, item2: value2, item3: value3})
If any value (value1, value2, or value3) has changed, the update operation is fine, but if nothing has changed, how can I skip the update? I want to do this to avoid unnecessary server interaction.
tl;dr: You can't skip the interaction with the DB.
Basically, you only want to update your record in the database when the user changes one of its values on the UI. This means you have two different versions of the same record: one in the UI, and one in the DB. If you're sending the one from the UI to the DB, you basically have two options:
save the version from the UI, no matter what is in the DB
retrieve the value from the database and compare; save the version from the UI if they are different
You might've noticed that the first option has fewer interactions with the DB on average. This is good if you have high latency between your server and your DB.
But, on the other hand, the second option has fewer writes than the first option. If concurrent writes to the same record are common, and they're causing timeouts, then this might be the best option for you.
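The second option (fetch, compare, write only on change) can be sketched generically. This is plain Python with an in-memory dict standing in for the database, purely to illustrate the flow; it is not MongoDB/Node code:

```python
# The dict stands in for the database collection.
db = {"42": {"item1": "a", "item2": "b", "item3": "c"}}

def update_if_changed(record_id, incoming):
    # Read the stored record and compare it with what the UI sent.
    current = db.get(record_id)
    if current == incoming:
        return False            # identical: skip the write
    db[record_id] = incoming    # changed: perform the write
    return True
```

Note that even the "skip" path costs one read, which is why you can't avoid hitting the database entirely; you're only trading a write for a read.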

Speedy mass deletion of users in Kentico 10

I want to delete more than 1 million user records in Kentico 10.
I tried to delete them with UserInfoProvider.DeleteUser() (see the following documentation), but a simple calculation suggests it would take nearly a year.
https://docs.kentico.com/api10/configuration/users#Users-Deletingauser
Since that is only a rough estimate, the real time is probably somewhat shorter, but it would still take far too long.
Is there any other way to delete users in a short time?
Of course make sure you have a backup of your database before you do any of this.
Depending on the features you're using, you could get away with a SQL statement. Due to the complexities of the references of a user to multiple other tables, the SQL statement can get pretty complex and you need to make sure you remove the other references before removing the actual user record.
I'd highly recommend an API approach and deleting users through the API so it removes all the references for you automatically. In your API calls, make sure you wrap the delete action in the following so it stops the logging of events and other labor-intensive activities that aren't needed.
using (var context = new CMSActionContext())
{
    context.DisableAll();
    // delete your user
}
In your code, I'd only select the top 100 or so at a time and delete them in batches. Assuming you don't need this done all in one run, you could let the scheduled task run your custom code for a week and see where you're at.
If all else fails, figure out how to delete the user and the 70+ foreign key references and you'll be golden.
Why don't you delete them with a SQL query? I believe it will be much faster.
Bulk delete functionality exists starting from version 10.
UserInfoProvider has a BulkDelete method. In fact, any info provider inherited from AbstractInfoProvider has a BulkDelete method.

Handling duplicate insertion in Node.js

What is the best way to handle duplicate insertion?
Either we check before insertion whether the item already exists and then notify the user of the duplicate entry, or we let the insert fail, handle the error message, and let the user know that it's a duplicate entry.
Using the first approach costs us an extra database call.
If there is any other, better approach to handle this, please let me know.
The duplicate constraint is at the database level.
Your call to the API must be coming from the front end, so you need to ensure that the duplicate call is avoided in the first place, e.g. you should disable the button as soon as the user clicks it the first time.
Or you can add a database schema-level check, such as a primary key, so that if duplicate data comes in, an error is thrown and it can be forwarded to the user.
Or add the checks described in
http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
Checking whether the data exists before insertion is an expensive call, and on top of that you will have to hit the master, so try to avoid it.
The best approach is to use a primary key based on the data. If this is not possible with your data then you'll have to query the database before insertion.
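The primary-key approach can be illustrated with Python's built-in sqlite3 (the users table and the email column are made up for the sketch): the database enforces uniqueness, and the application simply catches the constraint error instead of querying first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")

def insert_user(email, name):
    # Let the database enforce uniqueness; catch the constraint
    # violation instead of issuing an extra existence check first.
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO users VALUES (?, ?)", (email, name))
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate entry: report it to the user

insert_user("a@example.com", "Alice")   # succeeds
insert_user("a@example.com", "Alice")   # rejected as a duplicate
```

This is also race-free: two concurrent requests can both pass a check-then-insert test, but only one can win the constrained insert.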

Selecting and updating against tables in separate data sources within the same transaction

The attributes for the <jdbc:inbound-channel-adapter> component in Spring Integration include data-source, sql and update. These allow for separate SELECT and UPDATE statements to be run against tables in the specified database. Both sql statements will be part of the same transaction.
The limitation here is that both the SELECT and UPDATE will be performed against the same data source. Is there a workaround for the case when the UPDATE will be on a table in a different data source (not just a separate database on the same server)?
Our specific requirement is to select rows in a table which have a timestamp prior to a specific time. That time is stored in a table in a separate data source. (It could also be stored in a file). If both sql statements used the same database, the <jdbc:inbound-channel-adapter> would work well for us out of the box. In that case, the SELECT could use the time stored, say, in table A as part of the WHERE clause in the query run against table B. The time in table A would then be updated to the current time, and all this would be part of one transaction.
One idea I had was, within the sql and update attributes of the adapter, to use SpEL to call methods in a bean. The method defined for sql would look up a time stored in a file, and then return the full SELECT statement. The method defined for update would update the time in the same file and return an empty string. However, I don't think such an approach is failsafe, because the reading and writing of the file would not be part of the same transaction that the data source is using.
If, however, the update was guaranteed to only fire upon commit of the data source transaction, that would work for us. In the event of a failure, the database transaction would commit, but the file would not be updated. We would then get duplicate rows, but should be able to handle that. The issue would be if the file was updated and the database transaction failed. That would mean lost messages, which we could not handle.
If anyone has any insights as to how to approach this scenario it is greatly appreciated.
Use two different channel adapters with a pub-sub channel, or an outbound gateway followed by an outbound channel adapter.
If necessary, start the transaction(s) upstream of both; if you want true atomicity you would need to use an XA transaction manager and XA datasources. Or, you can get close by synchronizing the two transactions so they get committed very close together.
See Dave Syer's article "Distributed transactions in Spring, with and without XA" and specifically the section on Best Efforts 1PC.

How to update fields automatically

In my CouchDB database I'd like all documents to have an 'updated_at' timestamp added when they're changed (and have this enforced).
I can't modify the document with validation functions
Update functions won't run unless they're called specifically (so it'd be possible to update the document without calling the specific update function)
How should I go about implementing this?
There is no way to do this without triggering _update handlers. Tracking a document's modification time is a nice idea, but it runs into problems with replication.
Replication works on top of the public API, and this means that:
If you enforce such a trigger, you break replication, since it becomes impossible to sync data as-is without modifying the documents. Once a document is modified, it receives a new revision, which can easily lead to an endless loop if you replicate data from database A to B and from B to A in continuous mode.
If, on the other hand, replication is left intact, there will always be a way to work around your trigger.
I can suggest one workaround: you can create a view which emits the current date as a key (or as part of it):
function (doc) {
    emit(new Date(), null);
}
This will assign the current date to all documents as soon as view generation is triggered (which happens on the first request to the view) and will assign a new date on each update of a specific document.
Although the above should solve your issue, I would advise against using it for the reasons already explained by Kxepal: if you're on a replicated network, each node will assign its own dates. Taking this into account, the best I can recommend is to solve the issue on the client side and just post the documents with a date already embedded.
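Embedding the date client-side can be as simple as a helper that stamps the document just before you save it. A minimal Python sketch (the field name updated_at matches the question; the document contents are made up):

```python
from datetime import datetime, timezone

def stamp(doc):
    # Return a copy of the document with an ISO-8601 UTC timestamp
    # added/refreshed; the server never has to modify the document.
    doc = dict(doc)
    doc["updated_at"] = datetime.now(timezone.utc).isoformat()
    return doc

doc = stamp({"_id": "mydoc", "title": "hello"})
```

You would then POST/PUT the stamped document to CouchDB as usual. Enforcement is weaker than a server-side hook, but the approach stays replication-safe because every node sees the same stored value.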
