I'm running MongoDB on Windows. I have 1 or more threads that drop and recreate a collection.
Using mongo.exe with the show collections command, I'm seeing multiple collections with the same name (well over 1,000 collections with the same name!).
When I run validate:
> db.MY_COLLECTION.validate()
I get:
{ "errmsg" : "ns not found", "ok" : 0, "valid" : false }
The size() command returns 0, and find() returns nothing.
My question is: Is MongoDB thread safe? A follow-on question would be something like 'Am I doing this correctly (dropping and recreating), or is there a better way to refresh the whole content of a collection?'
From the MongoDB documentation:
Thread safety
Only a few of the C# Driver classes are thread safe. Among them: MongoServer, MongoDatabase, MongoCollection and MongoGridFS. Common classes you will use a lot that are not thread safe include MongoCursor and all the classes from the BSON Library (except BsonSymbolTable which is thread safe). A class is not thread safe unless specifically documented as being thread safe.
All static properties and methods of all classes are thread safe.
You can search for the word Thread on this page:
http://mongodb.onconfluence.com/pages/viewpage.action?pageId=18907708&navigatingVersions=true#CSharpDriverTutorial-Threadsafety
Changed in version 2.2.
MongoDB allows multiple clients to read and write a single corpus of data using a locking system to ensure that all clients receive a consistent view of the data and to prevent multiple applications from modifying the exact same pieces of data at the same time. Locks help guarantee that all writes to a single document occur either in full or not at all.
http://docs.mongodb.org/manual/faq/concurrency/
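As an illustration of the follow-on question about refreshing a collection's whole content: below is a hedged Python/pymongo sketch (an analogue, not the C# driver quoted above) that serializes the drop-and-reload behind an application-level lock so only one thread refreshes at a time. The database/collection names and fetch_fresh_docs() are placeholders.

import threading
from pymongo import MongoClient

client = MongoClient()            # MongoClient instances are designed to be shared by threads
db = client["mydb"]               # placeholder database name
refresh_lock = threading.Lock()

def refresh_collection():
    with refresh_lock:            # let only one thread drop and reload at a time
        db.drop_collection("my_collection")
        docs = fetch_fresh_docs() # placeholder for wherever the new data comes from
        if docs:
            db["my_collection"].insert_many(docs)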
Using Delphi 7 & UIB, I'm running database operations in a background thread to eliminate problems like:
Timeout
Priority
Immediate Force-reconnect after network-loss
Non-blocked UI
Keeping an opened DB connection alive
User canceling
I've read ALL the related topics here and realized that using "while isMyThreadStillRunning and not UserCanceled do sleep(100); end;" isn't the recommended way to do this; using TEvent.WaitFor(3000)... is preferred.
The solutions here are either about sending signals FROM the thread or TO the thread, or doing it with messages, but never both ways.
Reading the help file, I've also found TSimpleEvent, which seems to be easier to use.
So what is the recommended way to communicate between Main-UI + DB-Thread in both ways?
Should I simply create 2+2 TSimpleEvent?
to start a new transaction (thread should stop sleeping)
force-STOP execution
to signal back when it has moved to a new stage (transaction started / executed / committed=done)
to signal back if any error happened
or should there be only 1 TEvent?
Update 2:
First tests show:
2x TSimpleEvent is enough (1 for the thread + 1 for the GUI)
Both are created as public properties of the background thread
Force-terminating the thread does not work (too many errors that are impossible to handle...)
It is better to set a variable like Stop_yourself and let the thread cancel and free itself (while creating a new instance of the same class and trying again)
(still work in progress...)
You should move the query to a TThread. Unfortunately, anonymous threads are not available in D7, so you need to write your own TThread-derived class. Inside it, the thread needs its own DB connection to avoid sharing resources. From the caller method you can wait for the thread to end. The results should be stored somewhere in the caller class. Ensure that access to the query parameters and to the stored query result is thread-safe by using a TMutex or TMonitor.
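Purely as an analogue of that pattern (and of the 2x TSimpleEvent plus Stop_yourself plan from the question), here is a minimal Python sketch of two-way signalling between a main thread and a DB worker; execute_transaction() is a hypothetical stand-in for the real UIB work.

import queue
import threading

class DbWorker(threading.Thread):
    """Owns its own DB connection and talks to the main thread via two
    events plus a job queue."""

    def __init__(self):
        super().__init__(daemon=True)
        self.wake_up = threading.Event()        # main -> worker: new job or stop
        self.stage_changed = threading.Event()  # worker -> main: progress or error
        self.stop_yourself = False              # cooperative stop flag, never force-killed
        self.jobs = queue.Queue()
        self.last_error = None

    def run(self):
        while not self.stop_yourself:
            self.wake_up.wait()                 # sleep until the main thread signals
            self.wake_up.clear()
            while not self.jobs.empty():
                try:
                    execute_transaction(self.jobs.get())  # hypothetical UIB/DB call
                except Exception as exc:
                    self.last_error = exc
                self.stage_changed.set()        # main thread waits on or polls this

# Main-thread side (usage sketch):
#   worker = DbWorker(); worker.start()
#   worker.jobs.put(some_sql); worker.wake_up.set()
#   worker.stage_changed.wait(3)   # then check worker.last_error
#   worker.stop_yourself = True; worker.wake_up.set()   # let it finish and free itself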
I have a lot of C++11 threads running which all need database access at some point. In main() I initialize the database connection and open the database. The Qt documentation says that queries are not thread-safe, so I hold a global mutex for as long as a QSqlQuery exists inside a thread.
This works, but is it guaranteed to work, or will I run into problems at some point?
A look at the documentation tells us that:
A connection can only be used from within the thread that created it. Moving connections between threads or creating queries from a different thread is not supported.
So you do indeed need one connection per thread. I solved this by generating dynamic names based on the thread:
// one connection per thread, keyed by the current thread's address
auto name = "my_db_" + QString::number((quint64)QThread::currentThread(), 16);
if (QSqlDatabase::contains(name))
    return QSqlDatabase::database(name);
else {
    auto db = QSqlDatabase::addDatabase("QSQLITE", name);
    // open the database, set up tables, etc.
    return db;
}
In case you use threads not managed by Qt make use of QThreadStorage to generate names per thread:
// must be static, to be the same for all threads
static QThreadStorage<QString> storage;

QString name;
if (storage.hasLocalData())
    name = storage.localData();
else {
    // simple way to get a random name
    name = "my_db_" + QUuid::createUuid().toString();
    storage.setLocalData(name);
}
Important: SQLite may or may not be able to handle multithreading. See https://sqlite.org/threadsafe.html. As far as I know, the SQLite build embedded into Qt is thread-safe, as that's the default, and I could not find any flags that disable it in the source code. But if you are using a different SQLite version, make sure it actually supports threads.
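For comparison only, here is a rough Python analogue of the same one-connection-per-thread pattern, using sqlite3 and thread-local storage; the file name is just a placeholder.

import sqlite3
import threading

_local = threading.local()

def thread_db():
    # Each thread lazily opens and keeps its own connection; by default
    # sqlite3 refuses to use a connection from a thread other than the one
    # that created it (check_same_thread=True), much like Qt's rule above.
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect("my.db")   # placeholder file name
    return _local.conn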
You can write a class with the SQL functions and use signals and slots to run the queries and get the results from the database.
That way it is thread-safe, and there is no need to use a mutex.
The approach you chose is not a good one. You should use a shared QSqlDatabase object instead of a QSqlQuery. Please check the following example of multithreaded database access. If it is not clear to you, please let me know and I will explain more.
Currently I am working on a database that is updated by another Java application, but I need a NodeJS application to provide a RESTful API for website use. To maximize the performance of the NodeJS application, it is clustered and running on a multi-core processor.
However, from my understanding, a clustered NodeJS application has its own event loop on each CPU core. If so, does that mean that with the cluster architecture NodeJS will face traditional concurrency issues, like in other multi-threading architectures, for example writing to the same object without write protection? Or even worse, since these are multiple processes running at the same time, not threads within a process that can be blocked by one another...
I have been searching the Internet, but it seems nobody cares about this at all. Can anyone explain the cluster architecture of NodeJS? Thanks very much.
Add on:
Just to clarify: I am using Express, and it is not like running multiple instances on different ports; it is actually listening on the same port, but with one process per CPU competing to handle requests...
The typical problem I am wondering about now is: a request to update Object A based on a given Object B (not yet finished), and another request to update Object A again with a given Object C (which finishes before the first request)... then the result would be based on Object B rather than C, because the first request actually finishes after the second one.
This would not be a problem in a truly single-threaded application, because the second request would always be executed after the first one...
The core of your question is:
NodeJS will face traditional concurrency issues, like in other multi-threading architectures, for example writing to the same object without write protection?
The answer is that this scenario is usually not possible, because node.js processes don't share memory. ObjectA, ObjectB and ObjectC in process A are different from ObjectA, ObjectB and ObjectC in process B. And since each process is single-threaded, contention cannot happen. This is the main reason you find no semaphore or mutex modules shipped with node.js. There are also no threading modules shipped with node.js.
This also explains why "nobody cares". Because they assume it can't happen.
The problem with node.js clusters is one of caching. Because ObjectA in process A and ObjectA in process B are completely different objects, they will have completely different data. The traditional solution is of course not to store dynamic state in your application but to store it in the database instead (or memcache). It's also possible to implement your own cache/data synchronization scheme in your code if you want. That's how database clusters work, after all.
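As a hedged illustration of keeping dynamic state out of the process, here is a tiny Python sketch using redis-py as a stand-in for the shared database/memcache; the host, port and key name are placeholders. Every worker that calls record_hit() sees the same counter, unlike a per-process in-memory variable.

import redis

# Shared store instead of per-process memory; connection details are placeholders.
store = redis.Redis(host="localhost", port=6379)

def record_hit():
    # INCR is atomic on the server, so concurrent workers cannot lose updates.
    return store.incr("page_hits")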
Of course node, being a program written in C/C++, can be easily extended in C/C++, and there are modules on npm that implement threads, mutexes and shared memory. If you deliberately choose to go against the node.js/javascript design philosophy then it is your responsibility to ensure nothing goes wrong.
Additional answer:
a request to update Object A based on a given Object B (not yet finished), and another request to update Object A again with a given Object C (which finishes before the first request)... then the result would be based on Object B rather than C, because the first request actually finishes after the second one.
This would not be a problem in a truly single-threaded application, because the second request would always be executed after the first one...
First of all, let me clear up a misconception you have: that this would not be a problem for a real single-threaded application. It would. Here's a single-threaded application in pseudocode:
function main () {
    timeout = FOREVER
    readFD = []
    writeFD = []

    // fire off both updates asynchronously
    databaseSock1 = socket(DATABASE_IP, DATABASE_PORT)
    send(databaseSock1, UPDATE_OBJECT_B)
    databaseSock2 = socket(DATABASE_IP, DATABASE_PORT)
    send(databaseSock2, UPDATE_OBJECT_C)
    push(readFD, databaseSock1)
    push(readFD, databaseSock2)

    while (1) {
        event = select(readFD, writeFD, timeout)
        if (event) {
            for (i = 0; i < length(readFD); i++) {
                if (readable(readFD[i])) {
                    data = read(readFD[i])
                    if (data == OBJECT_B_UPDATED) {
                        update(objectA, objectB)
                    }
                    if (data == OBJECT_C_UPDATED) {
                        update(objectA, objectC)
                    }
                }
            }
        }
    }
}
As you can see, there are no threads in the program above, just asynchronous I/O using the select system call. The program above can easily be translated directly into single-threaded C or Java etc. (indeed, something similar to it is at the core of the javascript event loop).
However, if the response to UPDATE_OBJECT_C arrives before the response to UPDATE_OBJECT_B, the final state would be that objectA is updated based on the value of objectB instead of objectC.
No asynchronous single-threaded program is immune to this in any language and node.js is no exception.
Note however that you don't end up in a corrupted state (though you do end up in an unexpected state). Multithreaded programs are worse off because without locks/semaphores/mutexes the call to update(objectA,objectB) can be interrupted by the call to update(objectA,objectC) and objectA will be corrupted. This is what you don't have to worry about in single-threaded apps and you won't have to worry about it in node.js.
If you need strictly sequential updates, you still need to either wait for the first update to finish, flag the first update as invalid, or generate an error for the second update. Typically for web apps (like Stack Overflow) an error would be returned (for example if you try to submit a comment while someone else has already updated the comments).
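One common way to implement the flag-as-invalid/return-an-error options is an optimistic version check on the object being updated. A minimal Python sketch with purely illustrative names (this is not code from the question):

class ConflictError(Exception):
    pass

def update_object_a(store, new_value, expected_version):
    # Each request carries the version of objectA that its work was based on.
    # If a concurrent request bumped the version in the meantime, reject the
    # update instead of silently overwriting the newer data.
    current = store["objectA"]
    if current["version"] != expected_version:
        raise ConflictError("objectA changed since this request was computed")
    store["objectA"] = {"value": new_value, "version": expected_version + 1}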
As we know, NHibernate sessions are not thread-safe. But we have a code path split across several long-running threads, all using objects loaded in the initial thread.
using (var session = factory.OpenSession())
{
    var parent = session.Get<T>(parentId);
    DoSthWithParent(session, parent);
    foreach (var child in parent.children)
    {
        parallelThreadMethodLongRunning.BeginInvoke(session, child);
        //[Thread #1] DoSthWithChild(child #1) -> SaveOrUpdate(child #1) + Flush()
        //[Thread #2] DoSthWithChild(child #2) -> SaveOrUpdate(child #2) + Flush()
        //[Thread #3] DoSthWithChild(child #3) -> SaveOrUpdate(child #3) + Flush()
        // -> etc... changes are to be persisted immediately, not all at the end.
        EndInvoke();
    }
    DoFinalChangesOnParentAndChildren(parent);
    session.Flush();
}
One way would be a separate session for each thread, but that would require the parent object to be reloaded in each of them. Plus, the final method also makes changes to the children, so it would run into a StaleObjectException if another session changed them in the meantime, or they would have to be evicted/reloaded.
So all threads have to use the same session. What is the best way to do this?
Use a save queue in the initial thread (a thread-safe implementation), which is polled in a loop (instead of EndInvoke()) by the main thread. Child threads can insert NHibernate objects to be saved by the main thread (see the sketch after this list).
Use some callback mechanism to save/flush objects in the main thread. Is there something similar to the UI-thread callbacks in WPF, Control.Invoke(), or BackgroundWorker?
Put Save/Flush accesses into lock(session) blocks? Maybe dangerous, because modifying the NHibernate objects might change the session even without calling Save()/Flush().
Or should I live with the database overhead of loading the same objects in separate sessions per thread, evicting and reloading them in the main thread, and then making the changes again? [edit: a bad "solution" due to object concurrency/risk of stale objects]
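For what it's worth, here is a language-agnostic sketch of the save-queue option above, written in Python; do_sth_with_child() and the session calls are placeholders for the real NHibernate work, and only the thread that owns the session ever touches it.

import queue

save_queue = queue.Queue()

def worker(child):
    do_sth_with_child(child)       # placeholder for the per-child work
    save_queue.put(child)          # workers never touch the session directly

def drain(session):
    # Runs on the thread that owns the session, polled in its main loop.
    while True:
        try:
            obj = save_queue.get_nowait()
        except queue.Empty:
            break
        session.SaveOrUpdate(obj)  # only the owning thread calls the session
    session.Flush()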
Consider also that the application has a business logic layer above NHibernate, which has similar objects but sends its property values to the NHibernate objects on its own Save() command, only then modifying them and doing the NHibernate Save()/Flush() immediately.
Edit:
It's important that any read operation on NHibernate objects may change the session: lazy loading, or children collections changing under certain conditions. So it is really better to have a business object layer on top which synchronizes all access to the NHibernate objects. Considering that the database operations take only a small fraction of the threads' time (mainly occasional status settings), and most of it goes to calculations, watching, web service access and the like, the performance loss from synchronizing the data layer is negligible.
Firstly, if I understand correctly, different threads may be updating the same objects. In that case, NHibernate or not, you're performing several updates on the same objects concurrently, which may lead to unexpected results.
You may want to tweak your design a bit to ensure that an object can be only updated by (at most) a single thread.
Now, assuming your flow may include the same threads reading the same data (but writing different data), I'd suggest using different sessions, one per thread, and utilizing the 2nd level cache:
The 2nd level cache is kept at the SessionFactory level (rather than in the Session), and is therefore shared by all session instances.
The session object is not thread-safe; you can't use it across different threads. The SaveOrUpdate calls in your separate threads will most likely crash your program or corrupt your database. However, what about creating the data set you want to update in the threads and doing the SaveOrUpdate actions in your main thread (where your session is created)?
You should observe the following practices when creating NHibernate Sessions:
• Never create more than one concurrent ISession or ITransaction instance per database connection.
• Be extremely careful when creating more than one ISession per database per transaction. The ISession itself keeps track of updates made to loaded objects, so a different ISession might see stale data.
• The ISession is not threadsafe! Never access the same ISession in two concurrent threads. An ISession is usually only a single unit-of-work!
What I need is a system I can define simple objects on (say, a "Server" that can have "Operating System" and "Version" fields, alongside other metadata: IP, MAC address, etc.).
I'd like to be able to request objects from the system in a safe way, such that if I define that a "Server", for example, can be used by 3 clients concurrently, then if 4 clients ask for a Server at the same time, one will have to wait until a server is freed.
Furthermore, I need to be able to perform requests in some sort of query-style, for example allocate(type=System, os='Linux', version=2.6).
Language doesn't matter too much, but Python is an advantage.
I've been googling for something like this for the past few days and came up with nothing, maybe there's a better name for this kind of system that I'm not aware of.
Any recommendations?
Thanks!
Resource limitation in concurrent applications - like your "up to 3 clients" example - is typically implemented by using semaphores (or more precisely, counting semaphores).
You usually initialize a semaphore with some "count" - that's the maximum number of concurrent accesses to that resource - and you decrement this counter every time a client starts using that resource and increment it when a client finishes using it. The implementation of semaphores guarantees the "increment" and "decrement" operations will be atomic.
You can read more about semaphores on Wikipedia. I'm not too familiar with Python but I think these two links can help:
Python Threading Library
Semaphore Objects in Python.
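Since Python was mentioned as an advantage, here is a minimal sketch of the "at most 3 concurrent clients per server" idea using threading.BoundedSemaphore. The Server fields come from the question; everything else is illustrative, and a blocking variant of allocate() would wait on the semaphore instead of returning None.

import threading

class Server:
    def __init__(self, os, version, ip, max_clients=3):
        self.os = os
        self.version = version
        self.ip = ip
        self._slots = threading.BoundedSemaphore(max_clients)

    def try_acquire(self):
        # Returns False immediately if all 3 slots are already in use.
        return self._slots.acquire(blocking=False)

    def release(self):
        self._slots.release()

def allocate(servers, **criteria):
    # Query-style allocation, e.g. allocate(pool, os='Linux', version='2.6')
    for s in servers:
        if all(getattr(s, k) == v for k, v in criteria.items()) and s.try_acquire():
            return s
    return None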
For Java there is a very good standard library that has this functionality:
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html
Just create a class with a Semaphore field:
import java.util.concurrent.Semaphore;

class Server {
    private static final int MAX_AVAILABLE = 100;
    // static, because it is used from the static factory method below
    private static final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    // ... put all other fields (OS, version) here...

    private Server() {}

    // add a factory method
    public static Server getServer() throws InterruptedException {
        available.acquire();    // blocks while all permits are taken
        // ... do the rest here
        return new Server();
    }
}
Edit:
If you want things to be more "configurable", look into using AOP techniques, i.e. create a semaphore-based synchronization aspect.
Edit:
If you want a completely standalone system, I guess you can try using any modern DB (e.g. PostgreSQL) that supports row-level locking as the semaphore. For example, create 3 rows, each representing a server, select a free one with locking (e.g. "select * from server where is_used = 'N' for update"), mark the selected server as used, unmark it at the end, and commit the transaction.
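As a hedged sketch of that last idea, using PostgreSQL with, say, psycopg2: the server table, the is_used column and the id primary key follow the example query above and are otherwise assumptions, and SKIP LOCKED (PostgreSQL 9.5+) is an optional refinement so concurrent allocators skip rows that are already locked.

import psycopg2

def allocate_server(conn):
    with conn:                        # commit on success, roll back on error
        with conn.cursor() as cur:
            # Lock one free row; other transactions skip it instead of blocking.
            cur.execute(
                "SELECT id FROM server WHERE is_used = 'N' "
                "LIMIT 1 FOR UPDATE SKIP LOCKED"
            )
            row = cur.fetchone()
            if row is None:
                return None           # every server is currently taken
            cur.execute("UPDATE server SET is_used = 'Y' WHERE id = %s", (row[0],))
            return row[0]

# Freeing a server later: UPDATE server SET is_used = 'N' WHERE id = %s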