I have a web application working with an SQLite database.
My SQLite version is the latest from the official Windows binary distribution: 3.7.13.
The problem is that under heavy database load, the SQLite API functions (such as sqlite3_step) start returning SQLITE_BUSY.
I pass the following pragmas when initializing a connection:
journal_mode = WAL
page_size = 4096
synchronous = FULL
foreign_keys = on
The database is a single-file database. I'm using Mono 2.10.8 and the Mono.Data.Sqlite assembly provided with it to access the database.
I'm testing it with 50 parallel threads, each sending 50 consecutive HTTP requests to my application. On every request some reading and writing is done against the database, and every set of IO operations is executed inside a transaction.
Everything goes well until around the 400th to 700th request. At that (random) point the API functions start returning SQLITE_BUSY permanently (more precisely, until the retry limit is reached).
As far as I know, WAL mode transparently supports concurrent reads and writes. My guess was that the errors were caused by reading the database while a checkpoint operation is executing, but even after turning auto-checkpointing off the situation remains the same.
What could be wrong in this situation?
How to serve large amount of parallel database IO correctly?
P.S.
Only one connection per request is supposed to be used.
I use NHibernate configured with WebSessionContext.
I initialize my NHibernate session like this:
ISession session = null;
// factory variable is the session factory
if (CurrentSessionContext.HasBind(factory))
{
    session = factory.GetCurrentSession();
    if (session == null)
        CurrentSessionContext.Unbind(factory);
}
if (session == null)
{
    session = factory.OpenSession();
    CurrentSessionContext.Bind(session);
}
return session;
And on HttpApplication.EndRequest I release it like this:
// factory variable is the session factory
if (CurrentSessionContext.HasBind(factory))
{
    try
    {
        CurrentSessionContext.Unbind(factory)
                             .Dispose();
    }
    catch (Exception ee)
    {
        Logr.Error("Error uninitializing session", ee);
    }
}
So, as far as I know, there should be only one connection per request life cycle. While processing the request, code is executed sequentially (ASP.NET MVC 3), so it doesn't look like any concurrency is possible here. Can I conclude that no connections are shared in this case?
It's not clear to me whether the request threads share the same connection or not. If they don't, then you should not be having these issues.
Assuming that you are indeed sharing the connection object across multiple threads, you should use some locking mechanism, since SqliteConnection isn't thread-safe (an old post, but the SQLite library maintained as part of Mono evolved from System.Data.SQLite, found at http://sqlite.phxsoftware.com).
So, assuming that you don't currently lock around uses of the SqliteConnection object, can you please try it? A simple way to accomplish this could look like this:
static readonly object _locker = new object();

public void ProcessRequest()
{
    lock (_locker)
    {
        using (IDbCommand dbcmd = conn.CreateCommand())
        {
            string sql = "INSERT INTO foo VALUES ('bar')";
            dbcmd.CommandText = sql;
            dbcmd.ExecuteNonQuery();
        }
    }
}
You may, however, choose to open a distinct connection in each thread instead, to ensure you don't have any more threading issues with the SQLite library.
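For example, a minimal per-thread sketch (assuming Mono.Data.Sqlite's SqliteConnection; the connection string below is a placeholder you would replace with your own) could look like this:

using System.Data;
using Mono.Data.Sqlite;

public void ProcessRequest()
{
    // Placeholder connection string; point it at your actual database file.
    using (var conn = new SqliteConnection("Data Source=mydatabase.db"))
    {
        conn.Open();
        using (IDbCommand dbcmd = conn.CreateCommand())
        {
            dbcmd.CommandText = "INSERT INTO foo VALUES ('bar')";
            dbcmd.ExecuteNonQuery();
        }
    } // the connection is disposed here, so it is never shared with another thread
}

Because each request opens and disposes its own connection, no .NET-level lock is needed around the connection object itself.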
EDIT
Following up on the code you posted, do you close the session after committing the transaction? If you don't use an ITransaction, do you flush and close the session? I'm asking because I don't see it in your code, and I see it mentioned in https://stackoverflow.com/a/43567/610650
I also see it mentioned on http://nhibernate.info/doc/nh/en/index.html#session-configuration:
Also note that you may call NHibernateHelper.GetCurrentSession(); as many times as you like, you will always get the current ISession of this HTTP request. You have to make sure the ISession is closed after your unit-of-work completes, either in Application_EndRequest event handler in your application class or in a HttpModule before the HTTP response is sent.
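Put together, the unit-of-work pattern being asked about could look roughly like this (just a sketch; GetSession() stands in for the session helper you posted above):

ISession session = GetSession(); // placeholder for your own session helper
using (ITransaction tx = session.BeginTransaction())
{
    // ... reads and writes for this request ...
    tx.Commit(); // flushes the session and commits, so SQLite's write lock is released promptly
}
// and at HttpApplication.EndRequest: Unbind the session from the context and Dispose it,
// as in the code you already posted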
I know this is not the recommended way per Acumatica, but we have no option other than using a stored procedure. I have created a new processing screen to execute the stored procedure, but I am facing a timeout exception.
My code sample is below:
using (new PXConnectionScope())
{
    using (PXTransactionScope ts = new PXTransactionScope())
    {
        PXDatabase.Execute("MYSTOREDPROCEDURE", pars.ToArray());
        ts.Complete();
    }
}
Try executing the long-running code in a PXLongOperation context. I assume this establishes a connection with a periodic ping to avoid a time-out while waiting for data to arrive.
PXLongOperation.StartOperation(Base, delegate()
{
    // Code executed in long operation context
});
If your code is executed from the context of a processing delegate, I think it should already be wrapped in a long operation. Otherwise, the long operation should be started inside an action event handler; see the sketch below.
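As a rough sketch (not verified against a specific Acumatica version), combining your snippet with a long operation might look like this; whether the explicit PXConnectionScope is still needed inside the long operation is worth checking:

PXLongOperation.StartOperation(Base, delegate()
{
    // Runs on a background thread, so the request thread is not blocked until time-out.
    using (PXTransactionScope ts = new PXTransactionScope())
    {
        PXDatabase.Execute("MYSTOREDPROCEDURE", pars.ToArray());
        ts.Complete();
    }
});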
A last resort would be to increase the time-out in the web.config file.
Use of stored procedures is a concern mainly for SaaS hosting and for obtaining an Acumatica ISV certification. There's likely no official support for it, but I doubt it's going to go away.
I'm building a server using SailsJS (a framework built on top of Express) and I need to keep an object in memory between requests. I would like to do this because loading it to/from a database is taking way too long. Any ideas how I could do this?
Here's my code:
var params = req.params.all();
Network.findOne({ id: params.id }, function(err, network) {
  if (network) {
    var synapticNetwork = synaptic.Network.fromJSON(network.jsonValue);
    if (synapticNetwork) { ...
Specifically, the fromJSON() call takes way too long, and I would rather keep the synapticNetwork object in memory while the server is running (i.e. load it when the server starts and just save it periodically).
There are plenty of libraries out there for caching purposes, one of which is node-cache, as you've mentioned. All of them share a similar API:
var cache = require('memory-cache');
// now just use the cache
cache.put('foo', 'bar');
console.log(cache.get('foo'))
You can also implement your own module and just require it wherever you need:
var cache = {};

module.exports = {
  put: function(key, item) {
    cache[key] = item;
  },
  get: function(key) {
    return cache[key];
  }
};
There are a lot of potential solutions. The first and most obvious one is using some session middleware for Express. Most web frameworks should have some sort of session solution.
https://github.com/expressjs/session
The next option would be to use a caching utility like what Vsevolod suggested. It accomplishes pretty much the same thing as a session, except that if the data needs to be tied to a user/session you'll have to store some kind of identifier in the session and use that to retrieve from the cache, which I think is a bit redundant if that's your use case.
There are also utilities that will extend your session middleware and persist objects in the session to a database or other kind of data store, so that session information isn't lost even after server restarts. You still get the speed of an in-memory store, but backed by a database in case the in-memory store gets blown away.
Another option is to use Redis. You still have to serialize/deserialize your objects, but Redis is an in-memory data store and is super quick to write to and read from.
We have an application where we use an H2 embedded database to store the data. We have a synchronized write method which does DB inserts. Since H2 is a small embedded Java DB, we use "synchronized" on the write method to handle transaction management ourselves rather than leaving it to the DB.
But during heavy load, we can see that the write thread gets hung. We are not sure which resource this thread is hanging on.
Please look at this snippet of code:
public synchronized int write(IEvent event) {
    String methodName = "write";
    Connection conn = null;
    PreparedStatement updtStmt = null;
    Statement stmt = null;
    ResultSet rSet = null;
    int status = 0;
    try {
        dbConnect.checkDBSizeExceed();
        conn = dbConnect.getConnection();
        updtStmt = conn.prepareStatement(insertQuery);
        updtStmt.setString(1, (String) event.getAttributeValue());
        ......
        updtStmt.setString(30, (String) event.getAttributeValue());
        updtStmt.setBoolean(31, false);
        status = updtStmt.executeUpdate();
    } catch (SQLException ex) {
        logger.log(methodName, logger.print(ex), Logger.ERROR);
    } catch (Exception ex) {
        logger.log(methodName, logger.print(ex), Logger.ERROR);
    } finally {
        try {
            if (updtStmt != null)
                updtStmt.close();
            if (conn != null)
                conn.close();
        } catch (SQLException ex) {
            logger.log(methodName, logger.print(ex), Logger.ERROR);
            return status;
        }
        return status;
    }
}
We have multiple write methods that can access this DB. The code is straightforward, but we are not sure where the resource is being locked.
Another problem: in the thread dump in the (WebSphere) SystemOut log, we can see the thread stack trace below.
[6/15/12 3:13:38:225 EDT] 00000032 ThreadMonitor W WSVR0605W: Thread "WebContainer : 3" (00000066) has been active for 632062 milliseconds and may be hung. There is/are 2
thread(s) in total in the server that may be hung.
at com.xxxx.eaws.di.agent.handlers.AuditEmbeddedDBHandler.store(Unknown Source)
at com.xxxx.eaws.di.agent.eventlogger.2LoggerImpl.logEvent(Unknown Source)
at com.xxxx.eecs.eventlogger.EventLoggerAdapter.logAuditEvent(EventLoggerAdapter.java:682)
at com.xxxx.eecs.eventlogger.EventLoggerAdapter.logEvent(EventLoggerAdapter.java:320)
at com.xxxx.eecs.eventlogger.EventLogger.logEventInternal(EventLogger.java:330)
at com.xxxx.eecs.eventlogger.EventLogger.logEvent(EventLogger.java:283)
at com.ibm.wps.auth.impl.ImplicitLoginFilterChainImpl.login(ImplicitLoginFilterChainImpl.java:55)
at com.ibm.wps.auth.impl.AuthenticationFilterChainHandlerImpl.invokeImplicitLoginFilterChain(AuthenticationFilterChainHandlerImpl.java:393)
at com.ibm.wps.auth.impl.InitialAuthenticationHandlerImpl.checkAuthentication(InitialAuthenticationHandlerImpl.java:204)
at com.ibm.wps.state.phases.PhaseManagerImpl.callInitialAuthenticationHandler(PhaseManagerImpl.java:240)
In the above stack trace, I need to know why I am getting "Unknown Source". Those JARs are available on the classpath, and we also have the H2 JAR on the classpath. We are not sure why, if the thread is hung inside H2, we are not able to see the H2 frames in the thread stack trace, and why the frames that do appear show "Unknown Source".
Appreciate your help.
Thanks in advance.
Are you using EJBs? How do you get the connection? Is it injected by the app server? Do you retrieve it from JNDI? You should not synchronize the method.
Even if it is an embedded DB, you should rely on the app server facilities.
You need to configure the connection as a datasource, even if your DB is in memory. If you want serialized writes on the DB, you need to configure the connection pool to the serializable ANSI isolation level (there are 4 ANSI isolation levels). This way you should obtain the same effect in a managed environment (the app server) without the synchronized keyword, which should be avoided inside an app server.
Unknown Source typically implies that the line numbers are not available.
When you compile, the compiler can include debug information such as line numbers. If that information is not present in the JAR or .class files, the JVM can't provide it in the stack trace.
HTH
It looks like conn = dbConnect.getConnection(); is waiting for more than 60000 ms.
The warning thrown by WAS is because the resource adapter has a poll-period mechanism: the rate (in milliseconds) at which to poll the enterprise information system (EIS) event store for new inbound events. The poll cycle runs at a fixed rate, meaning that if execution of a poll cycle is delayed for any reason, the next cycle occurs immediately to "catch up". During the poll period, the polling thread sleeps.
Once the measured time reaches 60000 milliseconds, the WebSphere Application Server thread monitor regards this polling thread as hung and reports the warning.
I need to develop a Netty-based client that accepts messages from a notification server and forwards them as HTTP requests to another server in real time.
I have already coded a working application which does this, but now I need to add multi-threading.
At this point, I am getting confused about how to handle Netty channels inside a multi-threaded program, as my background is the conventional approach of sockets and threads.
When I tried to separate the Netty requesting part into a method, it complains about the channels not being closed.
Can anyone guide me how to handle this?
I would like to use ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor, but I am really new into this.
Help with some examples would be a real favour at this time.
Thanks in advance.
Just add an ExecutionHandler to the ChannelPipeline. This will make sure that every ChannelUpstreamHandler added behind the ExecutionHandler is executed in a separate thread and so does not block the worker thread.
Have you looked at the example code on the Netty site? The TelnetServer example looks to do what you are talking about. The factory creates new handlers whenever it gets a connection, and threads from the Executors will be used for each new connection. You could use any thread pool and executor there, I suspect:
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),   // << change
                Executors.newCachedThreadPool())); // << change

// Configure the pipeline factory.
bootstrap.setPipelineFactory(new TelnetServerPipelineFactory());

// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
The TelnetServerHandler then handles the individual results.
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    // Cast to a String first.
    // We know it is a String because we put some codec in TelnetPipelineFactory.
    String request = (String) e.getMessage();

    // Generate and write a response.
    String response;
    boolean close = false;
    if (request.length() == 0) {
        response = "Please type something.\r\n";
When the telnet server is ready to close the connection, it does this:
ChannelFuture future = e.getChannel().write(response);
if (close) {
    future.addListener(ChannelFutureListener.CLOSE);
}
I have an ASP.NET MVC 3 (.NET 4) web application.
This app fetches data from an Oracle database and mixes some information with another SQL database.
Many tables are joined together and a lot of database reading is involved.
I have already optimized the fetching side as best I could, and I don't have problems with that.
I've used caching to save information I don't need to fetch over and over.
Now I would like to build a responsive interface: my goal is to present the users with the filtered order headers and load the order lines in the background.
I want to do that because I need to manage all the order lines as a whole, because of some calculations.
What I have done so far is use jQuery to make an Ajax call to my action, where I fetch the order headers and save them in a cache (System.Web.Caching.Cache).
When the Ajax call has succeeded I fire off another Ajax call to fetch the lines (and, once again, save the result in a cache).
It works quite well.
Now I was trying to figure out if I can move some of this logic from the client to the server.
When my action is called, I want to fetch the order header, start a new thread (responsible for fetching the order lines), and return the result to the client.
In a test app I tried both ThreadPool.QueueUserWorkItem and Task.Factory, but I want the spawned thread to access my cache.
I've put together a test app and done something like this:
TEST 1
[HttpPost]
public JsonResult RunTasks01()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");

    ThreadPool.QueueUserWorkItem(o => MyFunc(1, 5000000, myCache));

    return (Json(true, JsonRequestBehavior.DenyGet));
}
TEST 2
[HttpPost]
public JsonResult RunTasks02()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");

    Task.Factory.StartNew(() =>
    {
        MyFunc(1, 5000000, myCache);
    });

    return (Json(true, JsonRequestBehavior.DenyGet));
}
MyFunc creates a list of items and saves the result in the cache; pretty silly, but it's just a test.
I would like to know if someone has a better solution, or knows of any implications of accessing the cache from a separate thread.
Is there anything I need to be aware of, should avoid, or could improve?
Thanks for your help.
One possible issue I can see with your approach is that System.Web.HttpContext.Current might not be available in a separate thread, as that thread could run later, once the request has finished. I would recommend using the classes in the System.Runtime.Caching namespace, introduced in .NET 4.0, instead of the old HttpContext.Cache.
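A minimal sketch with System.Runtime.Caching (the "KEY1" key just mirrors your test code, myResult is a placeholder for whatever MyFunc produces, and the expiration is an arbitrary example):

using System.Runtime.Caching;

ObjectCache cache = MemoryCache.Default;

// From the background task:
cache.Set("KEY1", myResult, new CacheItemPolicy
{
    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
});

// From a later request:
var lines = cache.Get("KEY1");

MemoryCache.Default does not depend on the request's HttpContext, so it is safe to use from a thread that outlives the request.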