WebSphere server threads getting hung - multithreading

We have an application that uses an H2 embedded database to store data. We have a synchronized write method that performs the DB inserts. Since H2 is a small embedded Java DB, we use "synchronized" on the write method to handle transaction management in the application rather than in the DB.
Under heavy load, however, we can see that the write thread is getting hung, and we are not sure which resource it is waiting on.
Please look at this snippet of code:
public synchronized int write(IEvent event) {
    String methodName = "write";
    Connection conn = null;
    PreparedStatement updtStmt = null;
    Statement stmt = null;
    ResultSet rSet = null;
    int status = 0;
    try {
        dbConnect.checkDBSizeExceed();
        conn = dbConnect.getConnection();
        updtStmt = conn.prepareStatement(insertQuery);
        updtStmt.setString(1, (String) event.getAttributeValue());
        ......
        updtStmt.setString(30, (String) event.getAttributeValue());
        updtStmt.setBoolean(31, false);
        status = updtStmt.executeUpdate();
    } catch (SQLException ex) {
        logger.log(methodName, logger.print(ex), Logger.ERROR);
    } catch (Exception ex) {
        logger.log(methodName, logger.print(ex), Logger.ERROR);
    } finally {
        try {
            if (updtStmt != null)
                updtStmt.close();
            if (conn != null)
                conn.close();
        } catch (SQLException ex) {
            logger.log(methodName, logger.print(ex), Logger.ERROR);
            return status;
        }
        return status;
    }
}
We have multiple write methods that can access this DB. The code itself looks straightforward, but we are not sure where the resource is being locked.
Another problem: in the thread dump in the WebSphere SystemOut.log, we see the thread stack trace below.
[6/15/12 3:13:38:225 EDT] 00000032 ThreadMonitor W WSVR0605W: Thread "WebContainer : 3" (00000066) has been active for 632062 milliseconds and may be hung. There is/are 2
thread(s) in total in the server that may be hung.
at com.xxxx.eaws.di.agent.handlers.AuditEmbeddedDBHandler.store(Unknown Source)
at com.xxxx.eaws.di.agent.eventlogger.2LoggerImpl.logEvent(Unknown Source)
at com.xxxx.eecs.eventlogger.EventLoggerAdapter.logAuditEvent(EventLoggerAdapter.java:682)
at com.xxxx.eecs.eventlogger.EventLoggerAdapter.logEvent(EventLoggerAdapter.java:320)
at com.xxxx.eecs.eventlogger.EventLogger.logEventInternal(EventLogger.java:330)
at com.xxxx.eecs.eventlogger.EventLogger.logEvent(EventLogger.java:283)
at com.ibm.wps.auth.impl.ImplicitLoginFilterChainImpl.login(ImplicitLoginFilterChainImpl.java:55)
at com.ibm.wps.auth.impl.AuthenticationFilterChainHandlerImpl.invokeImplicitLoginFilterChain(AuthenticationFilterChainHandlerImpl.java:393)
at com.ibm.wps.auth.impl.InitialAuthenticationHandlerImpl.checkAuthentication(InitialAuthenticationHandlerImpl.java:204)
at com.ibm.wps.state.phases.PhaseManagerImpl.callInitialAuthenticationHandler(PhaseManagerImpl.java:240)
In the above stack trace, I need to know why I am getting "Unknown Source". Those JARs are available on the classpath, and we also have the H2 JAR on the classpath. We are not sure why, if the thread is hung inside H2, we cannot see the H2 frames in the stack trace.
Failing that, I would at least like to know why the thread stack trace shows "Unknown Source".
Appreciate your help.
Thanks in advance.

Are you using EJBs? How do you get the connection? Is it injected by the app server? Do you retrieve it from JNDI? You should not synchronize the method.
Even if it is an embedded DB, you should rely on the app server's facilities.
You need to configure the connection as a data source, even if your DB is in memory. If you want serialized writes to the DB, configure the connection pool with the SERIALIZABLE ANSI isolation level (there are four ANSI isolation levels). That way you get the same effect in a managed environment (the app server) without synchronized, which should be avoided inside an app server.
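As a rough illustration only, here is a minimal sketch of that approach, assuming a container-managed H2 data source registered under the hypothetical JNDI name jdbc/AuditDB (the JNDI name, table and column are made up for the example, not taken from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class AuditWriter {
    public int write(String payload) throws NamingException, SQLException {
        // Look up the container-managed data source instead of creating
        // connections yourself; pooling is handled by the app server.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/AuditDB");

        try (Connection conn = ds.getConnection()) {
            // Serializable isolation gives the "one writer at a time" effect
            // that synchronized was simulating; ideally this is configured on
            // the data source / connection pool rather than set per call.
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO AUDIT_EVENT (PAYLOAD) VALUES (?)")) {
                ps.setString(1, payload);
                return ps.executeUpdate();
            }
        }
    }
}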

"Unknown Source" typically means that line-number information is not available for those classes.
When you compile, the compiler can include debug information such as line numbers. If it is not present in the JAR or .class files, the JVM cannot provide you that information.
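For example, assuming the classes are built with plain javac (your real build may go through Ant or Maven, so treat this only as an illustration):

# keeps line-number and source-file debug info (the default, or explicit with -g)
javac -g AuditEmbeddedDBHandler.java

# -g:none strips the debug info, which is what produces "Unknown Source" in stack traces
javac -g:none AuditEmbeddedDBHandler.java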
HTH

It looks like the conn = dbConnect.getConnection(); call is waiting for more than 60000 ms.
The warning from WAS relates to the resource adapter's poll period mechanism. This is the rate (in milliseconds) at which the enterprise information system (EIS) event store is polled for new inbound events. The poll cycle runs at a fixed rate, meaning that if execution of a poll cycle is delayed for any reason, the next cycle occurs immediately to "catch up". During the poll period, the polling thread sleeps.
Once the time reaches 60000 milliseconds, the WebSphere Application Server thread monitor regards this polling thread as hung and logs the warning above.
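If you need more headroom while you investigate, the hung-thread detection itself is tunable. To the best of my recollection it is driven by JVM custom properties along the lines of the ones below; please verify the property names and units against your WAS version, and treat the values as examples only:

# set as JVM custom properties on the application server (values are examples)
com.ibm.websphere.threadmonitor.interval=180    # seconds between hang checks
com.ibm.websphere.threadmonitor.threshold=1200  # seconds before a thread is reported as hung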

Related

How to execute stored procedure

I know this is not the recommended way in Acumatica, but we have no option other than to use a stored procedure. I have created a new processing screen to execute the stored procedure, but I am facing a time-out exception.
My code sample is below:
using (new PXConnectionScope())
{
    using (PXTransactionScope ts = new PXTransactionScope())
    {
        PXDatabase.Execute("MYSTOREDPROCEDURE", pars.ToArray());
        ts.Complete();
    }
}
Try executing the long-running code in a PXLongOperation context. I assume this establishes a connection with a periodic ping to avoid a time-out while waiting for data to arrive.
PXLongOperation.StartOperation(Base, delegate()
{
// Code executed in long operation context
});
If your code is executed from the context of a processing delegate, I think it should already be wrapped in a long operation. Otherwise, the long operation should be started inside an action event handler.
A last resort would be to increase the time-out in the web.config file.
Use of stored procedures is a concern mainly for SaaS hosting and for obtaining an Acumatica ISV certification. There's likely no official support for it, but I doubt it's going to go away.

Can the Azure Service Bus be delayed before retrying a message?

The Azure Service Bus supports a built-in retry mechanism which makes an abandoned message immediately visible for another read attempt. I'm trying to use this mechanism to handle some transient errors, but the message is made available immediately after being abandoned.
What I would like to do is make the message invisible for a period of time after it is abandoned, preferably based on an exponentially incrementing policy.
I've tried to set the ScheduledEnqueueTimeUtc property when abandoning the message, but it doesn't seem to have an effect:
var messagingFactory = MessagingFactory.CreateFromConnectionString(...);
var receiver = messagingFactory.CreateMessageReceiver("test-queue");
receiver.OnMessageAsync(async brokeredMessage =>
{
    await brokeredMessage.AbandonAsync(
        new Dictionary<string, object>
        {
            { "ScheduledEnqueueTimeUtc", DateTime.UtcNow.AddSeconds(30) }
        });
});
I've considered not abandoning the message at all and just letting the lock expire, but this would require some way to influence how the MessageReceiver specifies the lock duration on a message, and I can't find anything in the API that lets me change this value. In addition, it wouldn't be possible to read the delivery count of the message (and therefore decide how long to wait before the next retry) until after the lock has already been acquired.
Can the retry policy in the Service Bus be influenced in some way, or can a delay be introduced artificially in some other way?
Careful here, because I think you are confusing the retry feature with the automatic Complete/Abandon mechanism of the OnMessage event-driven message handling. The built-in retry mechanism comes into play when a call to the Service Bus fails. For example, if you call to mark a message as complete and that call fails, then the retry mechanism kicks in. If you are processing a message and an exception occurs in your own code, that will NOT trigger a retry through the retry feature. Your question isn't explicit about whether the error comes from your own code or from an attempt to contact the Service Bus.
If you are indeed after modifying the retry policy that applies when an error occurs while communicating with the Service Bus, you can modify the RetryPolicy that is set on the MessageReceiver itself. There is a RetryExponential, which is used by default, as well as an abstract RetryPolicy from which you can create your own.
What I think you are after is more control over what happens when you get an exception during your own processing and want to push off working on that message. There are a few options:
When you create your message handler, you can set up OnMessageOptions. One of the properties is AutoComplete. By default this is set to true, which means that as soon as processing for the message completes, the Complete method is called automatically. If an exception occurs, then Abandon is automatically called, which is what you are seeing. By setting AutoComplete to false, you are required to call Complete yourself from within the message handler. Failing to do so will cause the message lock to eventually run out, which is one of the behaviors you are looking for.
So, you could write your handler so that if an exception occurs during your processing you simply do not call Complete. The message would then remain on the queue until its lock runs out, and then it would become available again. The standard dead-lettering mechanism applies, and after x tries the message is put into the dead-letter queue automatically.
A caution with handling it this way is that every type of exception is treated the same. You really need to think about which kinds of exceptions should push off processing and which should not. For example, if you call a third-party system during your processing and it gives you an exception you know is transient, great. If, however, it gives you an error that you know will be a big problem, you may decide to do something else in the system besides just bailing on the message.
You could also look at the Defer method. A deferred message cannot be processed off the queue unless it is specifically retrieved by its sequence number. Your code would have to remember the sequence number value and pull it. This isn't quite what you described, though.
Another option is to move away from the OnMessage, event-driven style of processing messages. While it is very convenient, it doesn't give you a lot of control. Instead, hook up your own processing loop and handle abandon/complete on your own. You'll also need to deal with some of the threading/concurrent-call management that the OnMessage pattern gives you. This is more work, but you get the ultimate in flexibility.
Finally, I believe the reason the AbandonAsync call with the properties you wanted to modify didn't work is that those entries set metadata properties on the message, not system properties such as ScheduledEnqueueTimeUtc on the BrokeredMessage.
I actually asked this same question last year (implementation aside) with the three approaches I could think of from looking at the API. @ClemensVasters, who works on the SB team, responded that using Defer with some kind of re-receive is really the only way to control this precisely.
You can read my comment to his answer for a specific approach to doing it where I suggest using a secondary queue to store messages that indicate which primary messages have been deferred and need to be re-received from the main queue. Then you can control how long you wait by setting the ScheduledEnqueueTimeUtc on those secondary messages to control exactly how long you wait before you retry.
I ran into a similar issue where our order picking system is legacy and goes into maintenance mode each night.
Using the ideas in this article (https://markheath.net/post/defer-processing-azure-service-bus-message), I created a custom property to track how many times a message has been resubmitted, and I manually dead-letter the message after 10 tries. If the message is under 10 retries, it clones the message, increments the custom property and sets the enqueue time of the new message.
using Microsoft.Azure.ServiceBus;

public PickQueue()
{
    queueClient = new QueueClient(QUEUE_CONN_STRING, QUEUE_NAME);
}

public async Task QueueMessageAsync(int OrderId)
{
    string body = JsonConvert.SerializeObject(OrderId);
    var message = new Message(Encoding.UTF8.GetBytes(body));
    await queueClient.SendAsync(message);
}

public async Task ReQueueMessageAsync(Message message, DateTime utcEnqueueTime)
{
    int resubmitCount = (int)(message.UserProperties["ResubmitCount"] ?? 0) + 1;
    if (resubmitCount > 10)
    {
        await queueClient.DeadLetterAsync(message.SystemProperties.LockToken);
    }
    else
    {
        Message clone = message.Clone();
        // resubmitCount was already incremented above, so just store it
        clone.UserProperties["ResubmitCount"] = resubmitCount;
        // schedule the clone (not the original, locked message) for redelivery
        await queueClient.ScheduleMessageAsync(clone, utcEnqueueTime);
    }
}
This question asks how to implement exponential backoff in Azure Functions. If you do not want to use the built-in RetryPolicy (only available when autoComplete = false), here's the solution I've been using:
public static async Task ExceptionHandler(IMessageSession MessageSession, string LockToken, int DeliveryCount)
{
    if (DeliveryCount < Globals.MaxDeliveryCount)
    {
        var DelaySeconds = Math.Pow(Globals.ExponentialBackoff, DeliveryCount);
        await Task.Delay(TimeSpan.FromSeconds(DelaySeconds));
        await MessageSession.AbandonAsync(LockToken);
    }
    else
    {
        await MessageSession.DeadLetterAsync(LockToken);
    }
}

sqlite returns SQLITE_BUSY in WAL mode

I have a web application working with an SQLite database.
My version of SQLite is the latest from the official Windows binary distribution - 3.7.13.
The problem is that under heavy load on database, sqlite API functions (such as sqlite3_step) are returning SQLITE_BUSY.
I pass the following pragmas when initializing a connection:
journal_mode = WAL
page_size = 4096
synchronous = FULL
foreign_keys = on
The database is a single-file database. I'm using Mono 2.10.8 and the Mono.Data.Sqlite assembly provided with it to access the database.
I'm testing it with 50 parallel threads, each sending 50 consecutive HTTP requests to my application. On every request, some reading and writing is done against the database, and every set of I/O operations is executed inside a transaction.
Everything goes well until around the 400th-700th request. At that (random) moment, the API functions start returning SQLITE_BUSY permanently (to be more exact, until the retry limit is reached).
As far as I know, WAL mode transparently supports parallel reads and writes. I guessed that it could be caused by an attempt to read the database while a checkpoint operation is executing, but even after turning autocheckpoint off, the situation remains the same.
What could be wrong in this situation?
How to serve large amount of parallel database IO correctly?
P.S.
Only one connection per request is intended.
I use NHibernate configured with WebSessionContext.
I initialize my NHibernate session like this:
ISession session = null;
//factory variable is session factory
if (CurrentSessionContext.HasBind(factory))
{
    session = factory.GetCurrentSession();
    if (session == null)
        CurrentSessionContext.Unbind(factory);
}
if (session == null)
{
    session = factory.OpenSession();
    CurrentSessionContext.Bind(session);
}
return session;
And on HttpApplication.EndRequest I release it like this:
//factory variable is session factory
if (CurrentSessionContext.HasBind(factory))
{
    try
    {
        CurrentSessionContext.Unbind(factory)
            .Dispose();
    }
    catch (Exception ee)
    {
        Logr.Error("Error uninitializing session", ee);
    }
}
So, as far as I know, there should be only one connection per request life cycle. While processing the request, code is executed sequentially (ASP.NET MVC 3), so it doesn't look like any concurrency is possible here. Can I conclude that no connections are shared in this case?
It's not clear to me whether the request threads share the same connection or not. If they don't, then you should not be having these issues.
Assuming that you are indeed sharing the connection object across multiple threads, you should use some locking mechanism, as the SqliteConnection isn't thread-safe (an old post, but the SQLite library maintained as part of Mono evolved from System.Data.SQLite found on http://sqlite.phxsoftware.com).
So, assuming that you don't currently lock around the use of the SqliteConnection object, can you please try it? A simple way to accomplish this could look like this:
static readonly object _locker = new object();

public void ProcessRequest()
{
    lock (_locker) {
        using (IDbCommand dbcmd = conn.CreateCommand()) {
            string sql = "INSERT INTO foo VALUES ('bar')";
            dbcmd.CommandText = sql;
            dbcmd.ExecuteNonQuery();
        }
    }
}
You may however choose to open a distinct connection with each thread to ensure you don't have any more threading issues with the SQLite library.
EDIT
Following up on the code you posted: do you close the session after committing the transaction? If you don't use an ITransaction, do you flush and close the session? I'm asking because I don't see it in your code, and I see it mentioned in https://stackoverflow.com/a/43567/610650
I also see it mentioned on http://nhibernate.info/doc/nh/en/index.html#session-configuration:
Also note that you may call NHibernateHelper.GetCurrentSession(); as many times as you like, you will always get the current ISession of this HTTP request. You have to make sure the ISession is closed after your unit-of-work completes, either in Application_EndRequest event handler in your application class or in a HttpModule before the HTTP response is sent.

What is the best architecture we can use for a Netty Client Application?

I need to develop a Netty-based client that accepts messages from a notification server and forwards these messages as HTTP requests to another server in real time.
I have already coded a working application that does this, but I need to add multi-threading to it.
At this point, I am getting confused about how to handle Netty channels inside a multi-threaded program, as I am used to the conventional approach of sockets and threads.
When I tried to separate the Netty requesting part into a method, it complained about the channels not being closed.
Can anyone guide me on how to handle this?
I would like to use ExecutionHandler and OrderedMemoryAwareThreadPoolExecutor, but I am really new to this.
Help with some examples would be a real favour at this time.
Thanks in advance.
Just add an ExecutionHandler to the ChannelPipeline. This makes sure that every ChannelUpstreamHandler added behind the ExecutionHandler is executed in a separate thread and so does not block the worker thread.
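As a rough sketch of what that could look like with Netty 3 (the handler class MyNotificationHandler and the pool sizes below are made up for the example, not part of your code):

import java.util.concurrent.Executors;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class ClientPipelineFactory implements ChannelPipelineFactory {
    // One shared ExecutionHandler for all channels; handlers added after it
    // run on the executor's threads instead of Netty's I/O worker threads.
    private final ExecutionHandler executionHandler =
            new ExecutionHandler(
                    new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // codecs/decoders would be added here, before the execution handler
        pipeline.addLast("executor", executionHandler);
        // business-logic handler that may block without stalling the I/O threads
        pipeline.addLast("handler", new MyNotificationHandler());
        return pipeline;
    }
}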
Have you looked at the example code on the Netty site? The TelnetServer example looks like it does what you are talking about. The factory creates new handlers whenever it gets a connection, and threads from the Executors are used whenever there is a new connection. You could use any thread pool and executor there, I suspect:
// Configure the server.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),    // << change
                Executors.newCachedThreadPool()));  // << change

// Configure the pipeline factory.
bootstrap.setPipelineFactory(new TelnetServerPipelineFactory());

// Bind and start to accept incoming connections.
bootstrap.bind(new InetSocketAddress(8080));
The TelnetServerHandler then handles the individual results.
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    // Cast to a String first.
    // We know it is a String because we put some codec in TelnetPipelineFactory.
    String request = (String) e.getMessage();

    // Generate and write a response.
    String response;
    boolean close = false;
    if (request.length() == 0) {
        response = "Please type something.\r\n";
When the telnet is ready to close the connection it does this:
ChannelFuture future = e.getChannel().write(response);
if (close) {
    future.addListener(ChannelFutureListener.CLOSE);
}

Networking without blocking the UI in Qt 4.7

I have a server to which multiple clients can connect. The client is a GUI application, while the server is command-line. The client has several functions (such as connect and login) which, when sent to the server, should receive a reply.
Basically, I need to run the QTcpSocket functions waitForConnected and waitForReadyRead. However, I need to do this without blocking the UI.
What I thought of doing was the following:
Have a class (Client) inherit QThread and do all the waiting there. It is created in main.
Client::Client (...)
{
    moveToThread (this); // Not too sure what this does
    mClient = new QTcpSocket (this);
    start();
}

void Client::run (void)
{
    exec();
}

void Client::connectToServer (...)
{
    mClient->connectToHost (hostname, port);
    bool status = mClient->waitForConnected (TIMEOUT);
    emit connected (status);
}

void Client::login (...)
{
    ... Similar to connectToServer ...
}
Then, in the GUI (for example, ConnectToServerDialog), I run this whenever I am ready to make a connection. I connect the connected signal from the thread to the dialog, so that when the connection succeeds or times out, the dialog receives the signal.
QMetaObject::invokeMethod (mClient, "connectToServer", Qt::QueuedConnection,
Q_ARG (const QString &, hostname), Q_ARG (quint16, port));
I am getting an assert failure with this ("Cannot send events to objects owned by a different thread"). Since I am fairly new to Qt, I don't know whether what I am doing is correct.
Can somebody tell me whether what I am doing is a good approach, and if so, why my program is crashing?
The best thing is never to call methods like waitForBlah() ... forcing the event loop to wait for an undetermined period introduces the possibility of the GUI freezing up during that time. Instead, connect your QTcpSocket's connected() signal to some slot that will update your GUI as appropriate, and let the event loop continue as usual. Do your on-connected stuff inside that slot.
I don't recommend starting the thread in the constructor.
Initialize it like this:
Client * client = new Client();
client->moveToThread(client);
client->start();
Or, if you don't want to use that solution, add this->moveToThread(this); in the constructor before the start(); line.
Update: sorry, I didn't notice at first that you already have that line.
