Transaction Inserts in Liferay Service Builder

I have to perform multiple inserts in a for-loop in the database using Service Builder. Is there any way to insert into the DB using transactions, so that all the inserts are performed together without hitting the database for every insert?
Thank You

My guess is that it is not possible.
See the comment from David H Nebinger at the link below: https://www.liferay.com/community/forums/-/message_boards/message/54666240
Liferay doesn't support batch updates because of the caching mechanism it uses.
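That said, if the goal is mainly that all the inserts commit or roll back together, a custom method on your *LocalServiceImpl runs inside a single Service Builder transaction by default. A minimal sketch, assuming a hypothetical Foo entity (note this is still one SQL statement per row, not a JDBC batch):

    import java.util.List;

    // In the service module of a hypothetical "Foo" Service Builder entity.
    public class FooLocalServiceImpl extends FooLocalServiceBaseImpl {

        // Service Builder wraps each local service call in one transaction,
        // so all rows below are committed or rolled back together. Each row
        // is still sent to the database as its own statement, though.
        public void addFoos(List<Foo> foos) {
            for (Foo foo : foos) {
                fooPersistence.update(foo); // persistence field injected by Service Builder
            }
        }
    }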
HTH..

Related

Azure "/_partitionKey"

I created multiple collections through code without realising the importance of having a partition key. I have since read that the only way to add a partition key and redistribute the data is to delete the collection and recreate it.
I don't really want to do this, as I have quite a lot of data already and want to avoid the downtime. When I look at the Scale & Settings menu in Azure for each of my collections, I see the following.
Can someone explain this? I thought my partition key was null, but it looks like MS has given me one called _partitionKey. Can I not just add _partitionKey to my documents and run a script to update them all to the key I want to use (e.g. country)?
This is a new feature which allows non-partitioned collections (now called containers in the latest SDKs) to start using partitions with zero downtime. The big caveat is that you need to be using the latest SDKs, which will be announced GA really soon (in fact, most are already published; we are just waiting on doc publishing, etc.). The portal got the feature first, since it is already using the latest SDK under the covers.

A Good Point to Create Design Documents in CouchDB

I have a CouchDB instance running a per-user database configuration.
Each user database generated (when a user is added to the _users database) needs to have the same design documents with view/list logic etc.
What is the de facto way to add the design documents to the database upon database creation? Is it simply to add them after a successful user creation request? Or is there a more elegant way of doing this in CouchDB?
There is no mechanism for initializing newly created user databases; you should include this in your user creation logic.
If you want to decouple user creation and DB initialization, I suggest you explore the following strategy (see the sketch after this list):
1 - Create a template database and place in it the design documents that should be applied to every user DB.
2 - Listen continuously to the _db_updates endpoint, where DB-level events are notified. This library can help you.
3 - When a DB matching the user DB name pattern is created, trigger a replication from the template database to the newly created database using the _replicate endpoint.
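For steps 2 and 3, here is a minimal sketch in Java against CouchDB's documented HTTP API (the admin credentials, the user-db-template database name, and the userdb- prefix that couch_peruser gives user databases are assumptions):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Watches /_db_updates and replicates a template DB into each new
    // per-user DB as it appears.
    public class UserDbInitializer {

        static final String COUCH = "http://localhost:5984";
        static final String AUTH = "Basic " + Base64.getEncoder()
                .encodeToString("admin:secret".getBytes(StandardCharsets.UTF_8));

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest feed = HttpRequest.newBuilder()
                    .uri(URI.create(COUCH + "/_db_updates?feed=continuous&since=now"))
                    .header("Authorization", AUTH)
                    .build();

            // Each non-empty line of the continuous feed is a JSON event such as
            // {"db_name":"userdb-61626364","type":"created","seq":"..."}
            HttpResponse<java.io.InputStream> rsp =
                    client.send(feed, HttpResponse.BodyHandlers.ofInputStream());
            try (BufferedReader in = new BufferedReader(new InputStreamReader(rsp.body()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.contains("\"type\":\"created\"")
                            && line.contains("\"db_name\":\"userdb-")) {
                        // Crude extraction of db_name; a JSON parser would be
                        // more robust in real code.
                        String db = line.replaceAll(".*\"db_name\":\"([^\"]+)\".*", "$1");
                        replicateTemplate(client, db);
                    }
                }
            }
        }

        // Step 3: one-shot replication of the design docs from the template DB.
        static void replicateTemplate(HttpClient client, String target) throws Exception {
            String body = "{\"source\":\"user-db-template\",\"target\":\"" + target + "\"}";
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create(COUCH + "/_replicate"))
                    .header("Authorization", AUTH)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            client.send(req, HttpResponse.BodyHandlers.ofString());
        }
    }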
If you plan on using the Follow npm module as @Juanjo Rodriguez suggested, please consider using Cloudant's version. The Iriscouch version (the one pointed to by @Juanjo Rodriguez) is way out of date; for example, it doesn't support CouchDB v2.x, among other issues. I worked with the Cloudant team to improve all this over the last few days, and they just released the updated npm package yesterday here: https://www.npmjs.com/package/cloudant-follow?activeTab=versions
The 0.17.0-SNAPSHOT.47 version includes the patches we worked on, so don't use 0.16.1 (which is officially the latest).
You can read more about the issues we fixed here:
https://github.com/cloudant-labs/cloudant-follow/issues/54
https://github.com/cloudant-labs/cloudant-follow/issues/50
https://github.com/cloudant-labs/cloudant-follow/issues/47

Cassandra: Adding new denormalized query tables for existing keyspace/data

From the beginning of an application, you plan ahead and denormalize data at write-time for faster queries at read-time. Using Cassandra "BATCH" commands, you can ensure atomic updates across multiple tables.
But what about when you add a new feature and need a new denormalized table? Do you need to run a temporary script to populate this new table with data? Is this how people normally do it? Is there a feature in Cassandra that will do this for me?
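(For reference, the kind of multi-table logged batch the question describes, sketched with the DataStax Java driver; the keyspace, table, and column names are made up, and a prepared statement would be preferable in real code:)

    import com.datastax.oss.driver.api.core.CqlSession;
    import java.util.UUID;

    public class AtomicDenormalizedWrite {
        public static void main(String[] args) {
            UUID orderId = UUID.randomUUID();
            try (CqlSession session = CqlSession.builder().withKeyspace("shop").build()) {
                // A logged batch guarantees that either both denormalized
                // tables eventually record the order, or neither does.
                session.execute(
                    "BEGIN BATCH "
                  + "INSERT INTO orders_by_id (order_id, customer_id) VALUES (" + orderId + ", 42); "
                  + "INSERT INTO orders_by_customer (customer_id, order_id) VALUES (42, " + orderId + "); "
                  + "APPLY BATCH");
            }
        }
    }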
I can't comment yet, hence the new answer. The answer is yes: you'd have to write a migration script and run it when you deploy the software upgrade with the new feature. That's a fairly typical devops release process in my experience.
I've not seen anything like Code First Migrations (for MS SQL Server & Entity Framework) for Cassandra, which would generate the migration script automatically for you.
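A minimal sketch of such a backfill script with the DataStax Java driver (the keyspace, table, and column names are assumptions; a real run over a large table should page through token ranges and throttle itself rather than doing one sequential scan):

    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.PreparedStatement;
    import com.datastax.oss.driver.api.core.cql.Row;

    public class BackfillOrdersByCustomer {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().withKeyspace("shop").build()) {
                PreparedStatement insert = session.prepare(
                    "INSERT INTO orders_by_customer (customer_id, order_id, total) "
                  + "VALUES (?, ?, ?)");
                // Full-table scan of the source table; the driver pages the
                // results transparently.
                for (Row row : session.execute(
                        "SELECT customer_id, order_id, total FROM orders")) {
                    session.execute(insert.bind(
                        row.getUuid("customer_id"),
                        row.getUuid("order_id"),
                        row.getBigDecimal("total")));
                }
            }
        }
    }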

Oracle database sync using Spring Integration

I would greatly appreciate it if someone could share whether it is possible to build a near-real-time Oracle database sync application using Spring Integration. It's a lightweight requirement where only certain data fields across a couple of tables are to be copied over as soon as they change in the source database. Any thoughts on what architecture could be used would help greatly. Also, is there any Oracle utility that can be leveraged along with SI?
I'd say that an Oracle trigger is for you. When the main data is changed, use a trigger to copy those changes to a sync table in the same DB.
From SI you should use <int-jdbc:inbound-channel-adapter> to read and remove data from that sync table. Within the same transaction you have to use <int-jdbc:outbound-channel-adapter> to move the data to the other DB.
The main feature here should be an XA transaction, because you are using two DBs; what's good is that they are both Oracle.
Of course, you can try a one-phase-commit (1PC) approach instead, but that will require more work.
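A rough Java-config sketch of that pipeline (the trigger-fed sync_log table, the SQL, the channel names, and the two DataSource beans are assumptions; the JTA/XA wiring is only noted in comments):

    import javax.sql.DataSource;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.InboundChannelAdapter;
    import org.springframework.integration.annotation.Poller;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.integration.annotation.Splitter;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.integration.core.MessageSource;
    import org.springframework.integration.jdbc.JdbcMessageHandler;
    import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;
    import org.springframework.integration.splitter.DefaultMessageSplitter;
    import org.springframework.messaging.MessageHandler;

    @Configuration
    @EnableIntegration
    public class OracleSyncConfig {

        // Two DataSource beans named sourceDataSource and targetDataSource
        // are assumed to be defined elsewhere. An Oracle trigger (created
        // separately in the source DB) copies changed rows into sync_log.

        // Poll sync_log and delete rows as they are picked up; the delete
        // runs in the same transaction as the select when the poller is
        // transactional.
        @Bean
        @InboundChannelAdapter(value = "syncChannel", poller = @Poller(fixedDelay = "1000"))
        public MessageSource<Object> syncLogSource(DataSource sourceDataSource) {
            JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(
                    sourceDataSource, "SELECT id, field_a FROM sync_log");
            adapter.setUpdateSql("DELETE FROM sync_log WHERE id IN (:id)");
            return adapter;
        }

        // The poller emits a List of row maps; split it into one message per row.
        @Bean
        @Splitter(inputChannel = "syncChannel")
        public DefaultMessageSplitter rowSplitter() {
            DefaultMessageSplitter splitter = new DefaultMessageSplitter();
            splitter.setOutputChannelName("rowChannel");
            return splitter;
        }

        // Write each row into the target Oracle DB. Oracle returns column
        // labels in uppercase, hence :payload[ID]. For true all-or-nothing
        // behavior across the two databases, both DataSources should be
        // XA-capable and enlisted in a JTA transaction.
        @Bean
        @ServiceActivator(inputChannel = "rowChannel")
        public MessageHandler targetWriter(DataSource targetDataSource) {
            return new JdbcMessageHandler(targetDataSource,
                    "INSERT INTO replica_table (id, field_a) "
                  + "VALUES (:payload[ID], :payload[FIELD_A])");
        }
    }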

SQLite: The database file is locked during Insert/Delete

I am using a C++ shell extension DLL which reads and writes data in SQLite database tables. There is another application (an exe) which accesses all the tables.
Sometimes my DLL gets an exception, "The database file is locked", when I try to Delete/Insert/Update in the SQLite database tables. This is because the other application is accessing the tables at that time.
Is there any way to resolve this issue from my DLL? Can I use the solution mentioned in this link: "http://stackoverflow.com/questions/6455290/implementing-sqlite3-busy-timeout-in-an-ios-app"?
In the current code, I am using the CppSQLite3.cpp method execQuery(const char* szSQL) to execute the SQL query.
Please advise.
First of all, you should know that SQLite does database-level locking. When you start a transaction and the other application tries to write to the same database, it gets "database is locked", and SQLite automatically retries that query until its sqlite3_busy_timeout interval expires.
So the trick is to keep your transactions as short as possible, i.e.:
1. Begin a transaction
2. Update/Delete/Insert
3. Commit
and not have anything else between these three steps.
Also, increase your sqlite3_busy_timeout interval to suit your application, depending on how large your transactions are.
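To make both points concrete, a minimal sketch (in Java with the sqlite-jdbc driver rather than CppSQLite3, but the pragma and the transaction shape carry over; the file and table names are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ShortTransactionDemo {
        public static void main(String[] args) throws Exception {
            // app.db and the items table are made-up names for illustration.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db")) {
                try (Statement st = conn.createStatement()) {
                    // Same effect as sqlite3_busy_timeout(): instead of
                    // failing immediately with "database is locked", SQLite
                    // keeps retrying for up to 5 seconds while the other
                    // process holds the lock.
                    st.execute("PRAGMA busy_timeout = 5000");
                }

                // Keep the write transaction as short as possible:
                // begin -> write -> commit, with nothing else in between.
                conn.setAutoCommit(false);                                // 1. begin
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("DELETE FROM items WHERE done = 1"); // 2. write
                }
                conn.commit();                                            // 3. commit
            }
        }
    }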
You can try WAL mode, where reading and writing to SQLite can happen at the same time. But it comes with its own set of disadvantages; refer to the SQLite documentation.
SQLite has some restrictions regarding multiple users and multiple transactions: you can't read/write a resource from different transactions at the same time, and the database is locked while it is being updated.
Here are some links that might help you
http://sqlite.org/c3ref/busy_timeout.html
http://www.sqlite.org/c3ref/busy_handler.html
Good Luck
