J2ME RMS - Best practice for opening/closing record store? - java-me

My MIDlet uses two record stores. Currently, I create/open both record stores when the app starts and leave them both open for the entire lifetime of the app.
If I open/close the record store after each operation (e.g., reading or writing) the delays are really bad in the emulator.
Similarly, if I close the record stores when the app exits, there is another very long delay.
So is it OK for me to never close the record stores in my code (presuming the device will do this itself when the app exits)? If not, what is the best practice I can employ without causing a noticeable delay for the user and without risking any data loss?
There is nothing in the docs regarding this, and nothing I could find on Google.

As far as I remember, on some phones changes to the record store are persisted only when it is closed, while in most J2ME implementations changes are saved on each record change.
I would suggest keeping the record store open for the whole app session if it significantly improves performance. It is worth closing it in destroyApp(), of course.
You can also consider implementing an 'auto save' feature: close and reopen the record store if I/O has been inactive for some time.
Usually heavy record store access happens only during certain actions, not constantly. In that case you could wrap a batch of I/O operations in a 'transaction', finishing it with a close.
In other words, on most devices you can go with the first approach (keeping the record store open), but on some devices (I do not remember exactly which, probably Nokia S40 or S60) it can lead to data loss when the app is terminated by the VM without a proper close, and you cannot handle that case since destroyApp() is not guaranteed to be called. So in the general case the safe option is to finish each critical batch of writes by closing the record store.

Related

Calling Save Changes Multiple Times

I was wondering if anyone has done any performance tests around the effect that calling EF Core's SaveChangesAsync() has when there are no changes to be saved.
Essentially I am assuming the cost is basically nothing, and that it therefore isn't a big deal to call it "just in case"?
(I am trying to track user activity in ASP.NET Core middleware, and essentially on the way out I want to make sure SaveChanges was called to persist the activity to the database. There is a chance that it has already been called on the context, depending on what the user did, and in that case I don't want to incur the cost of a second operation when the activity could be persisted as part of the normal transaction/round trip.)
As you can see in the implementation, if there are no changes, nothing will be done. How much impact that has on performance, I don't know. But of course calling SaveChanges or SaveChangesAsync without any changes still has some overhead compared to not calling them at all.
That's the same behavior EF6 has, too.

Locks on postgres transactions

I am load testing my node.js application. At some point I reach a state where requests are pending, and my best guess is that it's because of a locked transaction. This is the last log statement:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
And in pg_locks I've got 4 rows with the above query, all with GRANTED = true and mode ExclusiveLock.
Where should I start looking for a bug?
If the locking request I make performs a lot of insert and update operations, should the isolation level be REPEATABLE READ?
Is there any way to debug/handle that kind of situation?
Is there any mechanism to time out those locks so the app can be released automatically and not block further requests?
Side question (since I'm not looking for a tool directly): are there any tools to monitor and spot that kind of situation? (I was hoping to use Munin.)
I am using Node.js 4.2.1 with Express 4.13.3 and Sequelize 3.19.3 as the ORM for Postgres 9.4.1.
Welcome to PostgreSQL transaction locks hell :)
You can spend a lot of time trying to figure out where exactly the lock happens and why, but there is very little chance that it will help you resolve the situation.
The general recipe for solving this kind of situation is as follows:
Keep your transaction size to the bare minimum required by the business logic of your application. For example, avoid repeated single-row inserts or updates, replacing them with multi-row equivalents, because query I/O is expensive.
Do not use a transaction when executing only a single query that modifies data, i.e. avoid unnecessary transactions.
Implement error handling that can detect a transaction lock and retry the transaction. Logging such retries will help you understand the weak spots of your system and how to redesign it better.
Even in a well-engineered system the last step often becomes a necessity; don't let it scare you ;) A sketch of such retry logic is shown below.
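A minimal sketch of that retry step, assuming Sequelize as in the question; the withRetry helper, the Thing model and the attempt count are made up for illustration:

// assumes a configured Sequelize instance `sequelize` and a model `Thing` (hypothetical names)
function withRetry(work, attempts) {
  // run `work` inside a transaction; retry on failure up to `attempts` times
  return sequelize.transaction(work).catch(function (err) {
    if (attempts > 1) {
      console.warn('transaction failed, retrying:', err.message); // log retries to spot weak points
      return withRetry(work, attempts - 1);
    }
    throw err; // give up after the last attempt
  });
}

// usage: keep the body of the transaction as small as possible
withRetry(function (t) {
  return Thing.update({ state: 'done' }, { where: { id: 42 }, transaction: t });
}, 3);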
I encountered a similar situation where I started 5 parallel transactions requesting the same update lock, and the first one also continued with work that required more Postgres calls. The entire system deadlocks, the first transaction is listed as idle in transaction in pg_stat_activity, and it is granted all the locks it has requested in pg_locks.
What I think is happening:
The first transaction got the lock granted and then finished its query. After this it drops its connection to Postgres.
The following 4 transactions each open a connection and block on the lock that is held by the first transaction.
Since they are blocked while holding connections, when the first transaction continues and tries to get a connection to Postgres for its next query, it deadlocks, because Sequelize has run out of pooled connections.
When I changed my Sequelize initialisation and added more connections to the pool (the default being 5), the deadlock disappeared.
I am not sure what is using the fifth connection, or whether the default happens to be 4 rather than 5 for some reason, but this still seems to tick all the boxes.
Another solution is to use the NOWAIT option in Postgres, so a transaction aborts when it asks for a lock and does not get it immediately; whether that fits depends on your use case.
Hope this helps if someone else encounters the same issue.
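For reference, a minimal sketch of raising the Sequelize pool size at initialisation (connection details are placeholders; option names follow Sequelize 3.x):

var Sequelize = require('sequelize');

// placeholder credentials; only the pool settings matter here
var sequelize = new Sequelize('appdb', 'appuser', 'secret', {
  dialect: 'postgres',
  pool: {
    max: 10,   // default is 5; raise it so a transaction that needs a second
    min: 0,    // connection cannot starve while others wait on its lock
    idle: 10000
  }
});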

Close a RethinkDB changefeed before the first change is fired

I can't seem to find any info in the rethinkdb docs on how you might stop a changefeed before the first change is fired. Here's the problem that makes this necessary:
A client connects to the server via a socket, which begins a changefeed, something like this:
var r = require('rethinkdb'); // assumes the official JS driver and an already open connection `conn`
var changeCursors = {};

r.db('app').table('things').changes().run(conn, function(err, cursor) {
  if (err) throw err;
  // do something when changed
  changeCursors[user.id] = cursor;
});

// later, when the user disconnects
changeCursors[user.id].close();
When the first change is dispatched, I can assign the cursor to a variable in memory, and if the client disconnects, close this cursor.
However, what if the user disconnects before the first change?
As far as I can tell, rethink doesn't support dispatching an initial state to the feed, so the cursor will only be available after a change. However, if the user disconnects, changeCursors[user.id] is undefined, and the changefeed stays open forever.
This can be solved by checking a state object inside the changefeed and just closing the feed after the first change, but in theory if there are no changes and many connected clients, we can potentially open many cursors that will eat memory for no reason (they'll be closed as soon as they update).
Is there a way to get the cursor from a changefeed without the run callback being executed? Alternatively, is there a way to force rethink to perform an initial state update to the run callback?
You'd have this problem even if the server responded immediately, because the user might disconnect after you've sent the query to the server and before the response has made it back. Unfortunately we can't create the cursor before sending the query to the server because in the general case figuring out the return type of the query is sort of hard, so we don't put that logic in the clients.
I think the best option is what you described, where if the cursor hasn't been returned yet you set a flag and close it inside the callback. You might be able to make the logic cleaner using promises.
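A minimal sketch of that flag-and-close pattern, reusing the assumed r and conn from the question's snippet (the disconnected flag and the handler placement are made up):

var disconnected = false;

r.db('app').table('things').changes().run(conn, function(err, cursor) {
  if (err) return;
  if (disconnected) {
    // the client went away before the cursor arrived: close it right away
    cursor.close();
    return;
  }
  changeCursors[user.id] = cursor;
});

// in the socket's disconnect handler
disconnected = true;
if (changeCursors[user.id]) {
  changeCursors[user.id].close();
  delete changeCursors[user.id];
}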
I wouldn't worry about memory usage unless you're sure it's a problem; if some portion of a second passes without a change, we return a cursor with no initial values to the client, so your memory use in the case of a lot of users opening and then immediately closing connections will be proportional to how many users can do that in that portion of a second. If that portion of a second is too long for you, you can configure it to be smaller with the optargs to run (http://rethinkdb.com/api/javascript/run/). (I would just set firstBatchScaledownFactor to be higher in your case.)
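Those batch settings are passed as options to run; a hedged example (the value here is arbitrary):

r.db('app').table('things').changes()
  .run(conn, { firstBatchScaledownFactor: 10 }, function(err, cursor) {
    // same handling as above
  });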

QSQLite Error: Database is locked

I am new to Qt development, the way it handles threads (signals and slots) and databases (SQLite in particular). It has been 4 weeks since I started working with the mentioned technologies. This is the first time I'm posting a question on SO, and I feel I have done my research before coming to you all. This may look a little long and possibly a duplicate, but I request you all to read it thoroughly before dismissing it as a duplicate or tl;dr.
Context:
I am working on a Windows application that performs a certain operation X on a database. The application is developed in Qt and uses QSQLITE as the database engine. It's a single-threaded application, i.e. the tables are processed sequentially. However, as the DB size grows (in number of tables and records), this processing becomes slower. The result of this operation X is written in a separate results table in the same DB. The processing being done is immaterial to the problem, but in basic terms here's what it does:
Read a row from Table_X_1
Read a row from Table_X_2
Do some operations on the rows (only read)
Push the results in Table_X_Results table (this is the only write being performed on the DB)
Table_X_1 and Table_X_2 are identical in number and types of columns and number of rows, only the data may differ.
What I'm trying to do:
In order to improve the performance, I am trying to make the application multi-threaded. Initially I am spawning two threads (using QtConcurrentRun). The tables can be categorized into two types, say A and B, and each thread takes care of the tables of one type. Processing within the threads remains the same, i.e. within each thread the tables are processed sequentially.
The function is such that it uses SELECT to fetch rows for processing and INSERT to insert results into the results table. For inserting the results I am using transactions.
I am creating all the intermediate tables, result tables and indices before starting my actual operation. I am opening and closing connections every time. For the threads, I create and open a connection before entering the loop (one for each thread).
THE PROBLEM:
Inside my processing function, I get the following (nasty, infamous, stubborn) error:
QSqlError(5, "Unable to fetch row", "database is locked")
I am getting this error when I'm trying to read a row from the DB (using SELECT). This is in the same function in which I'm performing my INSERTs into the results table. The SELECT and the INSERT are in the same transaction (begin and commit pair). For the INSERT I'm using a prepared statement (SQLiteStatement).
Reasons for seemingly peculiar things that I am doing:
I am using QtConcurrentRun to create the threads because it is straightforward to do! I have tried using QThread (not subclassing QThread, but the other method). That also leads to the same problem.
I am compiling with DSQLITE_THREADSAFE=0 to keep the application from crashing. If I use the default (DSQLITE_THREADSAFE=1), my application crashes at SQLiteStatement::recordSet->Reset(). Also, with the default option, SQLite's internal synchronisation mechanism comes into play, which may not be reliable. If need be, I'll employ explicit synchronisation.
Making the application multi-threaded to improve performance, and not doing this. I'm taking care of all the optimizations recommended there.
Using QSqlDatabase::setConnectOptions with QSQLITE_BUSY_TIMEOUT=0. A link suggested that it would prevent the DB from getting locked immediately and hence give my thread(s) an appropriate amount of time to "die peacefully". This failed: the DB got locked much more frequently than before.
Observations:
The database gets locked only, and precisely, when one of the threads returns. This behavior is consistent.
When compiling with DSQLITE_THREADSAFE=1, the application crashes when one of the threads returns. The call stack points at SQLiteStatement::recordSet->Reset() in my function, and at winMutexEnter() (called from EnterCriticalSection()) in sqlite3.c. This is consistent as well.
The threads created using QtConcurrentRun do not die immediately.
If I use QThreads, I can't get them to return. That is to say, I feel the thread never returns even though I have connected the signals and slots correctly. What is the correct way to wait for threads, and how long does it take them to die?
The thread that finishes execution never returns; it has locked the DB, and hence the error.
I checked for SQLITE_BUSY and tried to make the thread sleep but could not get it to work. What is the correct way to sleep in Qt (for threads created with QtConcurrentRun or QThreads)?
When I close my connections, I get this warning:
QSqlDatabasePrivate::removeDatabase: connection 'DB_CONN_CREATE_RESULTS' is still in use, all queries will cease to work.
Is this of any significance? Some links suggested that this warning arises because of using a local QSqlDatabase, and will not arise if the connection is made a class member. However, could it be the reason for my problem?
Further experiments:
I am thinking of creating another database which will only contain the results table (Table_X_Results). The rationale is that while the threads read from one DB (the one that I have currently), they get to write to another DB. However, I may still face the same problem. Moreover, I read on forums and wikis that it IS possible to have two threads reading and writing to the same DB. So why can I not get this scenario to work?
I am currently using SQLite version 3.6.17. Could that be the problem? Will things be better if I used version 3.8.5?
I was trying to post the web resources that I have already explored, but I get a message saying "I'd need 10 reps to post more than 2 links". Any help/suggestions would be much appreciated.

Discarding NSManagedObjects

I create a new managed object context in a new thread and insert some objects into it. Can I discard them (just forget them) by simply not saving the context? My problem is this: I start a lengthy process which creates some NSManagedObjects at the beginning and saves them at the end (merges them back into the main store). This happens in an NSOperation. I want the user to be able to quit the app at any time without having to wait for the process to finish. Can I just kill the operation and be safe? My understanding is that this is possible because the context does not persist anything without saving. Right?
Yes, you can do that but you shouldn't if the background operation handles any user data.
The UI grammar on macOS teaches users to expect that all of their data will be saved unless they specify otherwise.
Since saving is virtually instantaneous (from the user's perspective) in the vast majority of cases, it would be better to send a notification to the background operation telling it to stop and save.
