Is a web service implemented through JAX-WS multithreaded? - multithreading

I have written a web service using Spring, CXF, and the JAX-WS implementation, and I have a basic question about web services. How does a web service endpoint handle concurrent requests? Does it create a new thread for each request, similar to a servlet, or is it a single-threaded model? Since we are expecting a huge volume for each web service, does it make any difference to split the WSDL into multiple WSDLs so that there are different endpoints?

The web service is, of course, hosted by a web server (GlassFish, for example), which is multithreaded when receiving multiple simultaneous requests.

From the perspective of both your client and your service, there's no such thing as "multithreading". Your client invokes a request, and gets a response (possibly a fault response). Your server receives a request, and services that request. Period.
How the request is dispatched is an implementation detail.
And the WSDL is simply a "contract". The service "publishes" what operations it supports and what data types it uses via the WSDL; the client packs and unpacks its request and response SOAP messages accordingly. But a WSDL plays no direct role in any given web service invocation.

It's late, but it might help.
Endpoint.publish(Url, ServiceImplObj) publishes a web service at the given URL. The number of threads assigned for request handling is under the control of the JVM, because this is a lightweight deployment handled by the JVM itself.
For clarification, you can print the current thread name on the service side; you will see that the service threads are assigned from a thread pool managed by the JVM.
[pool-1-thread-1]: Response[57]:
[pool-1-thread-5]: Response[58]:
[pool-1-thread-4]: Response[59]:
[pool-1-thread-3]: Response[60]:
[pool-1-thread-6]: Response[61]:
[pool-1-thread-6]: Response[62]:
This I tried on JDK 1.6.0_35.
xjc -version
xjc version "JAXB 2.1.10 in JDK 6"
JavaTM Architecture for XML Binding(JAXB) Reference Implementation, (build JAXB
2.1.10 in JDK 6)
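
For illustration, here is a minimal sketch of such a lightweight Endpoint.publish deployment; the service name, URL, and operation are made up for the example. Printing the current thread name shows requests being served from the JVM-managed pool (e.g. pool-1-thread-3):

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Minimal sketch: HelloService, its URL, and the hello operation are illustrative.
@WebService
public class HelloService {

    @WebMethod
    public String hello(String name) {
        // Shows which pool thread the JVM assigned to this request
        System.out.println("[" + Thread.currentThread().getName() + "] handling request for " + name);
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publishes the service on the JDK's built-in lightweight HTTP server
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
    }
}

Invoking the generated client from several threads should produce log output similar to the thread names shown above.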

Related

How does resttemplate.exchange() execute on a different thread?

It is my understanding that a call to the exchange method of RestTemplate executes on a different thread. Basically, all client libraries execute on a different thread.
Let's say my servlet container is Tomcat. When a request is made to the exposed endpoint, a Tomcat thread receives the request, and the request comes to the service layer from the controller layer on the same thread. In the service layer, I have a call to a third-party service using RestTemplate. When the exchange method is invoked, the operation internally runs on a different thread and gets the result of the operation.
I have a question regarding this:
Where does RestTemplate get the thread from, i.e. from which thread pool does it execute on a different thread?
I would like to know whether executing RestTemplate on a different thread has anything to do with the Tomcat thread pool.
Can anybody shed some light on this?
When a request is made to the exposed endpoint, a Tomcat thread receives the request, and the request comes to the service layer from the controller layer on the same thread.
This happens only if Tomcat and the Java application are in the same JVM (as with embedded Tomcat). Otherwise, by default, Java threads are created and destroyed without being pooled. Of course, you can create a Java thread pool too.
Every time a third-party API is called via RestTemplate, it creates a new HTTP connection and closes it once it is done. You can give RestTemplate its own connection pool using HttpComponentsClientHttpRequestFactory, like so:
new org.springframework.web.client.RestTemplate(new HttpComponentsClientHttpRequestFactory())
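If you want an actual pool rather than a connection per call, a rough sketch (assuming Apache HttpClient 4.x is on the classpath; the pool sizes here are illustrative) could look like this:

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class RestTemplateConfig {

    // Sketch of a pooled RestTemplate; the pool sizes are illustrative only.
    public static RestTemplate pooledRestTemplate() {
        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(50);           // total connections across all routes
        connectionManager.setDefaultMaxPerRoute(10); // connections per target host

        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager(connectionManager)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}

With this setup the calling (Tomcat) thread still blocks until exchange() returns; only the connection handling is pooled and reused.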

JMS Problems Spring Batch With Partitioned Jobs On JBoss 5.2 EAP

We are using Spring Batch and partitioned jobs extensively in our project. Occasionally we see partitioned jobs getting "hung" because of what appears to be lost messages. The remote partitions all complete, but the parent step stays in STARTED. Our configuration uses one connection factory for reading messages from the queues (inbound gateway) and a different, clustered connection factory to send out the partition messages (outbound gateway). The reason for this is that JBoss Messaging doesn't distribute messages uniformly around the cluster, and the client connection factory provides that functionality.
Red Hat came in and frankly threw mud at Spring and the configuration. The following are excerpts from their report:
The Spring JMSTemplate code employs several anti-patterns, like creating a new connection, session, and producer just to send a message, then closing the connection. Also, when receiving a message it can create a consumer each time, receive the message, then close the consumer. This can result in poor performance under load. The use of anti-patterns not only results in poor performance, but can deplete operating system resources such as threads and file handles, since some of the connection resources are released asynchronously. Moreover, with non-durable topic subscribers you can end up losing messages, since any messages received between the closing of the last and the opening of the next consumer will be lost. The one place where it may be acceptable to use the Spring JMSTemplate is inside the application server, using the JCA managed connection factory (normally at "java:/JmsXA"), and that only works when you're sending messages.
The JCA managed connection factory caches connections, so they will not actually be created each time. However, using the JCA managed connection factory will not resolve the issue with consumers, since they are not cached.
In summary, the Spring JMSTemplate is not safe to use apart from the very specific use case of using it inside the application server with the JCA managed connection factory (java:/JmsXA), and only in that case to send messages (do not use it to consume messages).
Using it from a JMS client application outside the application server is never safe; using it with a standard connection factory (e.g. "ConnectionFactory", "ClusteredConnectionFactory", "jms/RemoteConnectionFactory", etc.) is never safe; and using it to receive messages is never safe. To safely receive messages using Spring, consider the use of MessageListenerContainers [7] with message-driven POJOs [8].
Finally, note that the issues encountered are based on JMS anti-patterns and are thus not a problem specific to JBoss EAP. For example, see a similar discussion with regard to ActiveMQ [9].
Red Hat does not support using the Spring JMSTemplate with JBoss Messaging apart from the one acceptable use case of sending messages via the JCA managed connection factory.
RECOMMENDATIONS
● As to Spring JMS, as a rule, use JCA managed connection factories configured in JBoss EAP. Do not use the Spring-configured connection factories. Use a JNDI template to pull the connection factories into Spring from JBoss. This will get rid of most of the Spring JMS problems.
● Use standard JMS instead of Spring JMS for the batch job. Spring is a non-standard (and probably sub-standard) implementation of JMS. Standard JMS uses a pool of a few senders to send the message and closes the session after the message is sent. On the listener side, standard JMS uses a pool of workers listening to a distributed queue or topic. Each web server has the JMS listener deployed as a singleton and uses a standard Java observer to notify any caller that is expecting a callback.
The JMS connection factories are configured in JBoss and loaded via JNDI.
Can you provide your feedback on their assessment?
To avoid the overhead of creating new connections/sessions per send, you need to wrap the provider's connection factory in a CachingConnectionFactory. It reuses the same connection for sends and caches sessions, producers, and consumers.
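As a rough sketch (the JNDI name and cache size here are only examples, not your actual configuration), the provider's factory can be looked up and wrapped like this:

import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class JmsConfig {

    // Sketch: look up the provider's connection factory from JNDI
    // (the JNDI name "jms/RemoteConnectionFactory" is illustrative)
    // and wrap it so sessions, producers, and consumers are cached.
    public static JmsTemplate cachedJmsTemplate() throws NamingException {
        ConnectionFactory providerFactory =
                (ConnectionFactory) new InitialContext().lookup("jms/RemoteConnectionFactory");

        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(providerFactory);
        cachingFactory.setSessionCacheSize(10);  // number of cached sessions (illustrative)
        cachingFactory.setCacheProducers(true);  // cache MessageProducers per session
        cachingFactory.setCacheConsumers(true);  // cache MessageConsumers per session/destination

        return new JmsTemplate(cachingFactory);
    }
}

This keeps the JmsTemplate programming model while removing the per-send connection churn the report objects to.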

Thread per request in the Play framework

I am a J2EE developer and I am new to the Play framework. I did thorough research but was not able to find any clear documentation on this.
The question is: how does Play handle a request? Will it create a thread for every request, just like J2EE containers?
If it is not thread-per-request, then what happens if we deploy the Play application in Tomcat as a WAR file?
First, the Play 2 framework does not support Tomcat.
With Play and Netty, you don't assign one thread per request.
By default you have one thread per core in Play, but let's assume that you have only one thread for all requests.
In this architecture one thread is shared by all requests. The thread handles the first request, and when it becomes idle (it is idle when it calls the database, a URL, etc.) it begins to handle the second request. So the thread does not have to return the response for the first request before starting on the second one.
One might think that the system will become too slow with this architecture, but it doesn't, since performance depends on the CPU rather than on the number of threads.
Play 2.3.x uses Netty under the hood to handle HTTP requests. You can learn more about Netty here.
You will also find information in the Play documentation: https://www.playframework.com/documentation/2.3.x/ThreadPools
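As an illustration of the non-blocking style this enables, here is a sketch of a Play Java controller (the class, method, and slow call are made up for the example) that frees the request-handling thread while a slow operation runs elsewhere:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

import play.mvc.Controller;
import play.mvc.Result;

// Sketch only: ReportController and slowDatabaseCall() are illustrative names.
public class ReportController extends Controller {

    public CompletionStage<Result> report() {
        return CompletableFuture
                .supplyAsync(() -> slowDatabaseCall())  // runs off the HTTP handling thread
                                                        // (common ForkJoinPool here; a real app
                                                        // would pass Play's execution context)
                .thenApply(data -> ok(data));           // resume when the result is ready
    }

    private String slowDatabaseCall() {
        return "report data"; // placeholder for a blocking call
    }
}

Because the action returns a CompletionStage, the few Netty threads are never tied up waiting on the slow call.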

WCF service accepting concurrent requests

I am new to WCF web services. My requirement is to create a WCF service that is a wrapper for a third-party COM DLL object.
Let's assume that the DLL takes 5 seconds to calculate one particular input.
When I created the service and tested it (using the WCF Test Client), I saw that I am not able to send a second request until the first request is completed.
So I was thinking of starting a new thread to consume the COM functionality and calling a callback function once done. I want to send the response and end the request in this callback function.
This is for every request that hits the WCF service.
I have tested this, but the problem is that I am getting the response without completing the request.
I want the current thread to wait until the calculations are done, and also to accept other requests in parallel.
Can you please let me know how I can fix this, keeping performance in mind?
My service will be consumed by multiple SAP Portal clients via SAP PI.
The ConcurrencyMode for the service can be set by applying the [ServiceBehavior] attribute to the service class implementing the ServiceContract.
http://msdn.microsoft.com/en-us/library/system.servicemodel.concurrencymode(v=vs.110).aspx
However, in your situation, where you access a COM component in the service operation, I'd first check the threading model of the COM component, i.e. whether it uses a single-threaded apartment (STA) or the MTA. If the COM component uses the apartment threading model, COM call invocations will be serialized, so changing the WCF ConcurrencyMode will not have any impact.
HTH,
Amit Bhatia

Prevent thread blocking in Tomcat

I have a Java servlet that acts as a facade to other web services deployed on the same Tomcat instance. My wrapper servlet creates N more threads, each of which invokes a web service, collates the response, and sends it back to the client. The web services are all deployed on the same Tomcat instance as different applications.
I am seeing thread blocking on this facade wrapper service after a few hours of deployment, which brings down the Tomcat instance. All blocked threads are endpoints of this facade web service (like http://domain/appContext/facadeService).
Is there a way to control such thread blocking, caused by starvation of the available threads that actually do the processing? What are the best practices to prevent such deadlocks?
The common solution to this problem is to use the Executor framework. You need to express your web service call as a Callable and pass it to the executor either as it stands or as a Collection<Callable> (see the Javadoc for the complete list of options).
You have two choices for controlling the time. The first is to use the timeout parameters of the appropriate ExecutorService method (for example invokeAll(tasks, timeout, unit)), where you specify the maximum web service timeout. Another option is to get the result (which is expressed as a Future<T>) and use get(long, TimeUnit) to specify the maximum amount of time you can wait for a result.
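A sketch of that approach (the pool size, timeout, result type, and fallback value are illustrative assumptions, not taken from your setup):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch of an Executor-based fan-out for the facade servlet.
public class FacadeService {

    private final ExecutorService executor = Executors.newFixedThreadPool(10);

    public List<String> callBackends(List<Callable<String>> webServiceCalls) throws InterruptedException {
        // invokeAll blocks for at most 5 seconds and cancels tasks that have not finished
        List<Future<String>> futures = executor.invokeAll(webServiceCalls, 5, TimeUnit.SECONDS);

        List<String> responses = new ArrayList<>();
        for (Future<String> future : futures) {
            try {
                responses.add(future.get()); // returns immediately: invokeAll has already waited
            } catch (CancellationException | ExecutionException e) {
                responses.add("fallback");   // timed out or failed: degrade instead of blocking
            }
        }
        return responses;
    }
}

Bounding both the pool size and the wait time keeps the facade's Tomcat threads from piling up behind slow backend services.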
