Netty 4 multithreaded DefaultEventExecutorGroup - multithreading

I started a Netty 4 NIO server with multiple business threads for handling long-running business logic, like below:
public void start(int listenPort, final ExecutorService ignore)
        throws Exception {
    ...
    bossGroup = new NioEventLoopGroup();
    ioGroup = new NioEventLoopGroup();
    businessGroup = new DefaultEventExecutorGroup(businessThreads);
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, ioGroup).channel(NioServerSocketChannel.class)
            .childOption(ChannelOption.TCP_NODELAY,
                    Boolean.parseBoolean(System.getProperty(
                            "nfs.rpc.tcp.nodelay", "true")))
            .childOption(ChannelOption.SO_REUSEADDR,
                    Boolean.parseBoolean(System.getProperty(
                            "nfs.rpc.tcp.reuseaddress", "true")))
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch)
                        throws Exception {
                    ch.pipeline().addLast("decoder",
                            new Netty4ProtocolDecoder());
                    ch.pipeline().addLast("encoder",
                            new Netty4ProtocolEncoder());
                    // the business handler runs on businessGroup instead of the I/O thread
                    ch.pipeline().addLast(businessGroup, "handler",
                            new Netty4ServerHandler());
                }
            });
    b.bind(listenPort).sync();
    LOGGER.warn("Server started, listening at: " + listenPort
            + ", businessThreads is " + businessThreads);
}
I found that only one thread was doing the work when the server accepted a single connection.
How can I bootstrap the server so that multiple business threads serve a single connection?
Thanks,
Mins

Netty always uses the same thread for a given connection; that is by design. If you want to change this, you can implement a custom EventExecutorGroup and pass it in when adding your ChannelHandler to the ChannelPipeline.
Be aware that this may cause messages to be processed out of order.
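As an alternative to a custom EventExecutorGroup, a simpler route is to dispatch the work yourself inside the handler. This is only a minimal sketch, not the answerer's suggestion or the poster's actual handler: the class and pool names are illustrative, and response ordering is no longer guaranteed.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ParallelBusinessHandler extends ChannelInboundHandlerAdapter {
    // assumed pool size; in the question's code this would be businessThreads
    private static final ExecutorService businessPool = Executors.newFixedThreadPool(16);

    @Override
    public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
        // hand each message to the shared pool, so one connection uses many threads
        businessPool.submit(new Runnable() {
            @Override
            public void run() {
                // doLongRunningBusiness is a hypothetical stand-in for the real work
                Object response = doLongRunningBusiness(msg);
                // Netty marshals writes back onto the channel's event loop, so this is safe
                ctx.writeAndFlush(response);
            }
        });
    }

    private Object doLongRunningBusiness(Object msg) {
        return msg; // placeholder
    }
}

With this approach the handler is added without an EventExecutorGroup (it stays on the I/O thread, which only does the cheap submit), and the protocol must be able to correlate out-of-order responses with their requests.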

Related

Azure web jobs - parallel message processing from queues not working properly

I need to provision SharePoint Online team rooms using Azure queues and WebJobs.
I have created a console application and published it as a continuous WebJob with the following settings:
var config = new JobHostConfiguration();
config.Queues.BatchSize = 1;
config.Queues.MaxDequeueCount = 4;
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(15);

JobHost host = new JobHost();
host.RunAndBlock();
The trigger function looks like this:
public static void TriggerFunction([QueueTrigger("messagequeue")] CloudQueueMessage message)
{
    ProcessQueueMsg(message.AsString);
}
Inside the ProcessQueueMsg function I'm deserialising the received JSON message into a class and running the following operations:
I'm creating a sub site in an existing site collection;
Using the PnP provisioning engine I'm provisioning content in the sub site (lists, file uploads, permissions, quick launch etc.).
If there is only one message in the queue to process, everything works correctly.
However, when I send two messages to the queue a few seconds apart, the second one overwrites the class properties while the first is still being processed, so the first message finishes with the wrong data.
I tried to run each message in a separate thread, but the trigger function is marked as succeeded before the message has actually been processed inside my function. This way I have no control over potential exceptions / message dequeuing.
I also tried to limit the number of threads to 1 and use a semaphore, but saw the same behavior:
private const int NrOfThreads = 1;
private static readonly SemaphoreSlim semaphore_ = new SemaphoreSlim(NrOfThreads, NrOfThreads);

// Inside TriggerFunction
try
{
    semaphore_.Wait();
    new Thread(ThreadProc).Start();
}
catch (Exception e)
{
    Console.Error.WriteLine(e);
}

public static void ThreadProc()
{
    try
    {
        DoWork();
    }
    catch (Exception e)
    {
        Console.Error.WriteLine(">>> Error: {0}", e);
    }
    finally
    {
        // release a slot for another thread
        semaphore_.Release();
    }
}

public static void DoWork()
{
    Console.WriteLine("This is a web job invocation: Process Id: {0}, Thread Id: {1}.",
        System.Diagnostics.Process.GetCurrentProcess().Id, Thread.CurrentThread.ManagedThreadId);
    ProcessQueueMsg();
    Console.WriteLine(">> Thread Done. Processing next message.");
}
Is there a way I can process messages in parallel without them interfering with each other while provisioning my sites?
Please let me know if you need more details.
Thank you in advance!
You're not passing the config object to your JobHost on construction - that's why your config settings aren't taking effect. Change your code to:
JobHost host = new JobHost(config);
host.RunAndBlock();

On servlet 3.0 webserver, is it good to make all servlets and filters async?

I am confused by the async feature introduced in the Servlet 3.0 spec.
From the Oracle site (http://docs.oracle.com/javaee/7/tutorial/doc/servlets012.htm):
To create scalable web applications, you must ensure that no threads associated with a request are sitting idle, so the container can use them to process new requests.
There are two common scenarios in which a thread associated with a request can be sitting idle.
1. The thread needs to wait for a resource to become available or process data before building the response. For example, an application may need to query a database or access data from a remote web service before generating the response.
2. The thread needs to wait for an event before generating the response. For example, an application may have to wait for a JMS message, new information from another client, or new data available in a queue before generating the response.
The first case happens a lot (nearly always - we almost always query a DB or call a remote web service to get some data), and calling an external resource always takes some time.
Does that mean we should ALWAYS use the servlet async feature for ALL our servlets and filters?
To put it another way: if I write all my servlets and filters async, will I lose anything (performance)?
If the above is correct, the skeleton of ALL our servlets would be:
public class Work implements ServletContextListener {

    private static final BlockingQueue<AsyncContext> queue = new LinkedBlockingQueue<AsyncContext>();
    private volatile Thread thread;

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        thread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    try {
                        ServiceFacade.doBusiness();
                        AsyncContext context;
                        while ((context = queue.poll()) != null) {
                            try {
                                ServletResponse response = context.getResponse();
                                PrintWriter out = response.getWriter();
                                out.printf("Business done");
                                out.flush();
                            } catch (Exception e) {
                                throw new RuntimeException(e.getMessage(), e);
                            } finally {
                                context.complete();
                            }
                        }
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        thread.start();
    }

    public static void add(AsyncContext c) {
        queue.add(c);
    }

    @Override
    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        thread.interrupt();
    }
}
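The skeleton above never shows the servlet that feeds the queue. A minimal sketch of that missing piece (hypothetical names, not from the original post) would start the async cycle and hand the AsyncContext to the listener:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/work", asyncSupported = true)
public class WorkServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // startAsync() releases the container thread immediately
        AsyncContext context = req.startAsync();
        context.setTimeout(30000); // assumed timeout in milliseconds
        Work.add(context); // the background thread writes and completes the response later
    }
}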

HornetQ allows only one session per connection

I am using HornetQ in a distributed transaction environment with MDBs. I read in the JMS documentation that we should not create Connection instances frequently; instead we should reuse the connection and create JMS sessions as and when required. So I wrote code that creates a JMS connection and then reuses it, but I encountered the following exception while reusing the JMS connection object:
Could not create a session: Only allowed one session per connection.
See the J2EE spec, e.g. J2EE1.4 Section 6.6
I read a few blogs on this, but they are all specific to the Seam framework.
Here is my code:
public class DefaultService implements IMessageService {

    private static final long serialVersionUID = 1L;
    private static final Logger logger = LogManager.getLogger(DefaultService.class);
    private static final String connectionFactoryJndiName = "java:/JmsXA";
    private static volatile Connection connection = null;
    // session is created from the shared connection elsewhere (not shown)
    private Session session = null;

    @Override
    public void sendMessage(String destinationStr, Serializable object) {
        try {
            Destination destination = jmsServiceLocator.getDestination(destinationStr);
            ObjectMessage message = session.createObjectMessage();
            message.setObject(object);
            MessageProducer messageProducer = session.createProducer(destination);
            messageProducer.send(destination, message);
            messageProducer.close();
            logger.trace("Sent JMS message for: " + object.getClass().getName());
        }
        catch (NamingException e) {
            throw new RuntimeException("Couldn't send JMS message", e);
        }
        catch (JMSException e) {
            throw new RuntimeException("Couldn't send JMS message", e);
        }
    }

    @Override
    public void close() {
        try {
            if (session != null) {
                session.close();
            }
        }
        catch (Exception e) {
            logger.error("Couldn't close session", e);
        }
    }
}
I am using JBoss EAP 6.
Did I miss any settings here?
On a JCA connection (i.e. one obtained through the pooled connection factory) you are supposed to create only one Session per Connection. That is part of the EE specification (and always has been).
This is because these connections are pooled, and it would be impossible to return them to the pool if you were using more than one session per connection.
If you switch to the non-pooled connection factories (the ones meant for remote clients) it would work the way you wanted, but then you would lose pooling from the application server. EE components are usually short-lived, and opening/closing JMS connections (any connection, to be more precise) is an expensive operation.
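A minimal sketch of the pattern this answer implies (JNDI names assumed from the question; this is not the answerer's code): with the pooled java:/JmsXA factory, open the connection and its single session per send and close them immediately - the pool makes that cheap.

import java.io.Serializable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

public class PooledJmsSender {
    public void sendMessage(String destinationStr, Serializable object) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("java:/JmsXA");
        Destination destination = (Destination) ctx.lookup(destinationStr);
        Connection connection = factory.createConnection();
        try {
            // one session per (pooled) connection, as the spec requires
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(destination);
            producer.send(session.createObjectMessage(object));
        } finally {
            connection.close(); // hands the connection back to the pool rather than destroying it
        }
    }
}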

Netty OrderedMemoryAwareThreadPoolExecutor not creating multiple threads

I use Netty for a multithreaded TCP server with a single persistent client connection.
The client sends many binary messages (10,000 in my use case) and is supposed to receive an answer for each one. I added an OrderedMemoryAwareThreadPoolExecutor to the pipeline to handle the execution of DB calls on multiple threads.
If I run a DB call in the messageReceived() method (or simulate it with Thread.sleep(50)), then all events are handled by a single thread:
5 count of {main}
1 count of {New
10000 count of {pool-3-thread-4}
With a trivial implementation of messageReceived() (no DB call), the server creates many executor threads as expected.
How should I configure the ExecutionHandler so that multiple executor threads run the business logic?
Here is my code:
public class MyServer {
    public void run() {
        OrderedMemoryAwareThreadPoolExecutor eventExecutor = new OrderedMemoryAwareThreadPoolExecutor(
                16, 1048576L, 1048576L, 1000, TimeUnit.MILLISECONDS, Executors.defaultThreadFactory());
        ExecutionHandler executionHandler = new ExecutionHandler(eventExecutor);
        bootstrap.setPipelineFactory(new ServerChannelPipelineFactory(executionHandler));
    }
}

public class ServerChannelPipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("encoder", new MyProtocolEncoder());
        pipeline.addLast("decoder", new MyProtocolDecoder());
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("myHandler", new MyServerHandler(dataSource));
        return pipeline;
    }
}

public class MyServerHandler extends SimpleChannelHandler {
    public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) throws DBException {
        // long-running DB call simulation
        try {
            Thread.sleep(50);
        } catch (InterruptedException ex) {
        }
        // a simple answer message
        final MyMessage answerMsg = new MyMessage();
        if (e.getChannel().isWritable()) {
            e.getChannel().write(answerMsg);
        }
    }
}
OrderedMemoryAwareThreadPoolExecutor guarantees that events from a single channel are processed in order. You can think of it as binding a channel to a specific thread in the pool and processing all of that channel's events on that thread - although it's actually a bit more complex than that, so don't depend on a channel always being served by the same thread.
If you start up a second client, you'll (most likely) see it being processed on another thread from the pool. If you really can process a single client's requests in parallel, then you probably want MemoryAwareThreadPoolExecutor, but be aware that it offers no guarantees about the order of channel events.
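To make that concrete, a minimal sketch of the swap (reusing the question's pool sizes; only the executor class changes) could look like this:

// Drop-in replacement for the question's executor: per-channel ordering is gone,
// so one connection's messages fan out across all 16 threads.
MemoryAwareThreadPoolExecutor eventExecutor = new MemoryAwareThreadPoolExecutor(
        16, 1048576L, 1048576L, 1000, TimeUnit.MILLISECONDS, Executors.defaultThreadFactory());
ExecutionHandler executionHandler = new ExecutionHandler(eventExecutor);
// pipeline wiring stays the same:
// pipeline.addLast("executor", executionHandler);

Since answers may then be written out of order, the protocol needs some way to correlate each answer with its request.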

SimpleChannelUpstreamHandler await*() in I/O thread causes a dead lock

In my Netty SimpleChannelUpstreamHandler, when I receive a message I need to open a connection to another Netty server and forward the message on. However, when starting up this second connection I use:
ChannelFuture channelFuture = clientBootstrap.connect(new InetSocketAddress(host, port));
channelFuture.awaitUninterruptibly();
Which results in the following error:
java.lang.IllegalStateException: await*() in I/O thread causes a dead lock or sudden performance drop. Use addListener() instead or call await*() from a different thread.
at org.jboss.netty.channel.DefaultChannelFuture.checkDeadLock(DefaultChannelFuture.java:314)
at org.jboss.netty.channel.DefaultChannelFuture.awaitUninterruptibly(DefaultChannelFuture.java:226)
at com.my.NettyClient.start(NettyClient.java:204)
....
at com.my.MyChannelUpstreamHandler.messageReceived(MyChannelUpstreamHandler.java:52)
What's the best way to start this second connection? Should I do the following?
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            // Connect to another Netty server...
            // Forward on the message...
        }
    });
    executorService.shutdown();
    ...
Is it wasteful to start a new thread for each message received?
Check out the proxy example to see how you can do it without blocking:
http://netty.io/docs/stable/xref/org/jboss/netty/example/proxy/HexDumpProxyInboundHandler.html
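In the spirit of that example, a minimal non-blocking sketch (Netty 3 API; clientBootstrap, host and port are the question's variables) reacts to the connect result in a listener instead of blocking the I/O thread:

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    final Object msg = e.getMessage();
    ChannelFuture connectFuture =
            clientBootstrap.connect(new InetSocketAddress(host, port));
    connectFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (future.isSuccess()) {
                future.getChannel().write(msg); // forward once connected
            } else {
                // connect failed; handle it, e.g. log and close the inbound channel
            }
        }
    });
}

This way no thread is blocked and no extra executor is needed; the listener fires on the I/O thread once the connect attempt completes.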
