Why does my WCF service stop working every 20 minutes? - multithreading

I am new to WCF and multi-threading, and I have written a WCF service hosted in IIS 7. Inside this service there is a long-running task (System.Threading.Tasks.Task) that will probably run for 20 hours.
But this WCF service always stops working after 20 minutes.
I set the service up to email me from the application stop and start events, so I receive an email when it starts running, and then, 20 minutes later, another email showing that the service has stopped.
I really cannot figure out why this happens, why the service stops working every 20 minutes.
Does the WCF service itself stop every 20 minutes, or does the Task's thread stop every 20 minutes?
Could any IIS 7 configuration affect how long the service runs?
I tried setting receiveTimeout to a very large value, and I invoke the WCF service asynchronously on the client side, but this did not help.
Guys, I really need help; many thanks.
The following configuration belongs to the website that calls this WCF service:
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="BasicHttpBinding_IMailingService" closeTimeout="00:01:00"
          openTimeout="00:01:00" receiveTimeout="24.20:31:23.6470000" sendTimeout="00:01:00"
          allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
          maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
          messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
          useDefaultWebProxy="true">
        <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
        <security mode="None">
          <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
          <message clientCredentialType="UserName" algorithmSuite="Default" />
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address="http://localhost:92/MailingService.svc" binding="basicHttpBinding"
        bindingConfiguration="BasicHttpBinding_IMailingService" contract="MalingService.IMailingService"
        name="BasicHttpBinding_IMailingService" />
  </client>
</system.serviceModel>

The IIS application pool idle timeout defaults to 20 minutes. You can configure the application pool idle timeout with the steps below, taken from this article - http://technet.microsoft.com/en-us/library/cc771956(WS.10).aspx
1. Open IIS Manager.
2. Expand the server node and click Application Pools.
3. On the Application Pools page, select the application pool for which you want to specify idle time-out settings, and then click Advanced Settings in the Actions pane.
4. In the Idle Time-out (minutes) box, type a number of minutes, and then click OK.
As rerun said, IIS hosting is not the best fit for your scenario; hosting WCF in a Windows service (or under WAS) would be a better option.
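For reference, the same idle-timeout setting from the GUI steps above can also be applied directly in applicationHost.config. A sketch, assuming the pool is named DefaultAppPool; a value of 00:00:00 disables the idle shutdown entirely:

```xml
<applicationPools>
  <add name="DefaultAppPool">
    <!-- idleTimeout uses hh:mm:ss; "00:00:00" turns the idle shutdown off -->
    <processModel idleTimeout="00:00:00" />
  </add>
</applicationPools>
```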

IIS is a poor choice for hosting a long-running task. Since the service dies at exactly 20 minutes, I would guess that IIS is shutting down the worker process that owns the thread, but that is speculation. I would really suggest moving to a Windows service as the WCF hosting environment.

I use a cheap cron job service to hit a handler on my app every 5 minutes. This helps keep the pool alive. If you don't want to move away from IIS, this is a quick and "dirty" fix.

Related

Queue job instances in Spring Batch

We have a job running in Spring Batch each weekday, triggered from another system. Sometimes there are several instances of the job to run on the same day, each one triggered by the other system.
Every job runs for about an hour, and if several job instances run at once we experience problems with the data.
We would like to optimize this step as follows: if no job instance is running, start a new one; if a job instance is already running, put the new one in a queue.
Each job instance must be COMPLETED before the next one is triggered. If one fails, the next one must wait.
The job parameters are an incrementer and a timestamp.
I've Googled a bit but can't find anything useful.
So I wonder: is it doable to queue job instances in Spring Batch?
If so, how do I do it? I have looked into Spring Integration and the job-launching-gateway, but I don't really see how to implement it; I guess I don't understand how it works, even after trying to read up on these things.
Maybe I have the wrong versions of Spring Batch? Maybe I am missing something?
If you need more information from me please let me know!
Thank you!
We are using spring-core and spring-beans 3.2.5, spring-batch-integration 1.2.2, spring-integration-core 3.0.5, spring-integration-file, -http, -sftp, -stream 2.0.3
Well, if you are OK with having Spring Integration in your application alongside Spring Batch, it really would be a great idea to leverage the job-launching-gateway capability.
Right, you can place your tasks into a queue, essentially a QueueChannel.
The endpoint that polls that channel can be configured with max-messages-per-poll="1" so that it takes only one task at a time from the internal queue.
Once you have polled one message, send it to the job-launching-gateway and, at the same time, send a command to the Control Bus component to stop that polling endpoint, so that it does not touch the other messages in the queue until the current job finishes. When the job is COMPLETED, send one more control message to start the polling endpoint again.
Be sure that you use all the Spring Integration modules at the same version: spring-integration-core 3.0.5 and spring-integration-file, -http, -sftp, -stream 3.0.5 as well.
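A minimal configuration sketch of that queue-plus-control-bus flow, assuming the int and batch-int namespaces are declared, with channel and bean names (jobRequests, jobResults, controlChannel, launchGateway) invented for illustration:

```xml
<!-- queue that buffers incoming job requests -->
<int:channel id="jobRequests">
    <int:queue capacity="100"/>
</int:channel>

<!-- polls one request at a time and launches the job -->
<batch-int:job-launching-gateway id="launchGateway"
        request-channel="jobRequests" reply-channel="jobResults">
    <int:poller fixed-delay="5000" max-messages-per-poll="1"/>
</batch-int:job-launching-gateway>

<!-- control bus: send "@launchGateway.stop()" as the job starts,
     and "@launchGateway.start()" once its ExitStatus is COMPLETED -->
<int:control-bus input-channel="controlChannel"/>
```

Sending the SpEL control messages on completion would typically be wired from a JobExecutionListener or a downstream flow watching jobResults.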
If you still need an answer: you could use a ThreadPoolTaskExecutor with a core pool size of 1, a max pool size of 1, and whatever queue capacity you desire.
i.e.
<bean id="jobLauncherTaskExecutor"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="1" />
<property name="maxPoolSize" value="1" />
<property name="queueCapacity" value="200" />
</bean>
and then pass that executor to the SimpleJobLauncher
i.e.
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="jobLauncherTaskExecutor" />
</bean>

JMS connections exhausted using WebSphere MQ

I have configured CachingConnectionFactory that wraps a MQTopicConnectionFactory and MQQueueConnectionFactory with cache size set to 10 each.
These are then used in several jms:outbound-channel-adapter or jms:message-driven-channel-adapter elements as part of the various Spring Integration workflows in my application.
Once in a while the connection count on the MQ channel reaches the maximum allowed (about 1000), at which point the process stops functioning. This is a serious problem for a production application.
Bringing the application down does not reduce the connection count, so it looks like there are orphaned connections on the MQ side? I am not sure whether I am missing anything in my Spring JMS / SI configuration that could resolve this issue; any help would be highly appreciated.
I would also like to log connection opens and closes from the application, but I don't see a way to do that.
<bean id="mqQcf" class="com.ibm.mq.jms.MQQueueConnectionFactory">
    <!-- host / port / queue manager / channel properties go here -->
</bean>
<bean id="qcf" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="mqQcf"/>
    <property name="sessionCacheSize" value="10"/>
</bean>
<bean id="mqTcf" class="com.ibm.mq.jms.MQTopicConnectionFactory">
    <!-- host / port / queue manager / channel properties go here -->
</bean>
<bean id="tcf" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="mqTcf"/>
    <property name="sessionCacheSize" value="10"/>
</bean>
The qcf and tcf factories are then used in the Spring Integration configuration as required.
You really need to show your configuration, but the Spring CachingConnectionFactory only creates a single connection that is shared by all sessions. Turning on INFO logging for the CCF category emits this log when a new connection is created...
if (logger.isInfoEnabled()) {
logger.info("Established shared JMS Connection: " + this.target);
}
EDIT:
There's nothing in your config that stands out. As I said, each CCF will have at most 1 connection open at a time.
One possibility, if you have idle times, is that the network (a switch or firewall) might be silently dropping connections without telling the client or server. The next time the client tries to use its connection it will fail and create a new one but the server may never find out that the old one is dead.
Typically, for such situations, enabling heartbeats or keepalives would keep the connection active (or at least allow the server to know it's dead).
I was debugging a similar issue in my application: a large number of open output counts in MQ even though only one connection was opened by the connection factory.
The number of output counts in MQ Explorer is the number of connection handles created by the IBM MQ classes. Per the IBM documentation, a Session object encapsulates an IBM MQ connection handle, which therefore defines the transactional scope of the session.
Since the session cache size was 10 in my application, 10 IBM MQ connection handles were created (one for each session); they stayed open for days with the handle state inactive.
More info can be found at:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q031960_.htm
As Gary Russell mentioned, Spring doesn't provide a way to configure timeouts for these idle connections. IBM has built-in properties in MQConnectionFactory that can be configured to set up reconnect timeouts.
More info can be found at:
https://www.ibm.com/developerworks/community/blogs/messaging/entry/simplify_your_wmq_jms_client_with_automatic_client_reconnection19?lang=en
Reconnect-on-exception is true by default for the CCF, so care should be taken if IBM throws an exception after the timeout interval. I am not sure whether there is a maximum number of reconnect attempts before the CCF gives up and throws.
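As an illustration of those IBM reconnect properties, a hedged sketch: the clientReconnectOptions and clientReconnectTimeout properties exist on MQConnectionFactory in MQ 7.x+ clients, but verify the names and the WMQConstants class against your client version, and note that util:constant assumes the Spring util namespace is declared:

```xml
<bean id="mqQcf" class="com.ibm.mq.jms.MQQueueConnectionFactory">
    <!-- host / port / queue manager / channel properties as before -->
    <!-- ask the MQ client to reconnect automatically after a broken connection -->
    <property name="clientReconnectOptions">
        <util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CLIENT_RECONNECT"/>
    </property>
    <!-- give up after 10 minutes (value is in seconds) -->
    <property name="clientReconnectTimeout" value="600"/>
</bean>
```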

IIS application pool is recycled every 20 minutes despite configuration

I'm trying to disable application pool recycling and have changed the recycling interval in the app pool configuration to 0. Here's the full configuration from the .config file:
<add name="DefaultAppPool" autoStart="true" managedRuntimeVersion="v4.0">
<recycling logEventOnRecycle="Time, Memory, IsapiUnhealthy, OnDemand, ConfigChange, PrivateMemory">
<periodicRestart time="00:00:00">
<schedule>
<clear />
</schedule>
</periodicRestart>
</recycling>
</add>
Despite that, the application is still recycled several times a day, which can be seen in the event log:
A worker process with process id of '1584' serving application pool 'DefaultAppPool' was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed.
This happens on Azure Windows 2008 R2 VM with IIS 7.5.
Is there anything else I need to do to make this setting work?
Possible duplicate of IIS: Idle Timeout vs Recycle and others.
What you are looking for is idleTimeout, which you will find under the processModel element in applicationHost.config. Unless you have a strong case, I would not recommend disabling the timeout, as it is a primary mechanism for releasing unused resources when the site in question is not under load.
If the pool is idle and shuts down, there is only a very small overhead as a new process is spun up. If this really is an issue, I would suggest investigating some form of persisted cache, such as the disk cache offered by the Application Request Routing IIS module.
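For example, rather than disabling the idle timeout outright, it can simply be raised on the pool's processModel element; a sketch against the DefaultAppPool entry shown in the question (the 4-hour value is illustrative, and 00:00:00 would disable the shutdown entirely):

```xml
<add name="DefaultAppPool" autoStart="true" managedRuntimeVersion="v4.0">
    <!-- shut the worker process down only after 4 idle hours -->
    <processModel idleTimeout="04:00:00" />
</add>
```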

Tomcat Web Application threads stuck in Service Stage - causing app hang-ups

We are using Tomcat 6 / IIS to host our Java MVC web applications (Spring MVC and Frontman). We recently started running into problems where threads get stuck in the Service stage for hours.
Using Lambda Probe we see the threads start to pile up until eventually the app becomes unresponsive. The processing time increases, with zero bytes in or out. The URL is reachable, and the logs show that the request starts but never finishes.
IP Stage processing time bytes-in bytes-out url
111.11.111.111 Service 00:57:26.0 0b 0b GET /Application/command/monitor
All of this is on a test server set up as follows:
ISAPI filter worker:
worker.testuser.type=ajp13
worker.testuser.host=localhost
worker.testuser.port=8009
worker.testuser.socket_timeout=300
worker.testuser.connection_pool_timeout=600
Server.xml:
<Connector
    port="8009"
    protocol="AJP/1.3"
    redirectPort="8443"
    tomcatAuthentication="false"
    connectionTimeout="6000"
/>
Any thoughts on why this happens or how to configure Tomcat to kill ancient application threads?
You can use the Java monitoring and management API to get all the threads and their thread dumps, and kill a stuck one by its thread id (though Thread.stop is deprecated, it does the job):
http://docs.oracle.com/javase/1.5.0/docs/guide/management/overview.html
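Building on that management API, a small sketch that uses ThreadMXBean to find threads that have consumed more than a given amount of CPU time; the class and method names (ThreadWatch, findLongRunningThreads) are invented for illustration, and actually stopping a thread found this way should be a last resort:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.ArrayList;
import java.util.List;

public class ThreadWatch {

    /** Returns info on live threads whose total CPU time exceeds maxCpuNanos. */
    public static List<ThreadInfo> findLongRunningThreads(long maxCpuNanos) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        List<ThreadInfo> suspects = new ArrayList<>();
        for (long id : mx.getAllThreadIds()) {
            ThreadInfo info = mx.getThreadInfo(id);
            // getThreadCpuTime returns -1 if the thread has died
            // or CPU timing is unsupported/disabled on this JVM
            if (info != null && mx.getThreadCpuTime(id) > maxCpuNanos) {
                suspects.add(info);
            }
        }
        return suspects;
    }

    public static void main(String[] args) {
        // report anything that has used more than 30 minutes of CPU
        for (ThreadInfo info : findLongRunningThreads(30L * 60 * 1_000_000_000L)) {
            System.out.println(info.getThreadName() + " state=" + info.getThreadState());
        }
    }
}
```

In a Tomcat webapp the same check could run on a timer, logging (rather than stopping) suspects so you can correlate them with Lambda Probe's Service-stage view.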

IIS7.5/MVC Limit number of Managed Threads

I'm trying to limit the number of managed threads permitted in an ASP.NET MVC application running under IIS 7.5 on Windows Server 2008. I've attempted a number of different approaches, but none has worked as expected. I need to limit the number of threads as reported by
Threading.Thread.CurrentThread.ManagedThreadId
I also tried changing the ASP / Behavior / Limits Properties / Threads Per Processor Limit setting, but I still get new threads with different thread ids.
I really need a limited number of threads (say 5-10), with each one keeping the same thread id every time it is used.
At the moment I have the following config file
<configuration>
<system.web>
<applicationPool maxConcurrentRequestsPerCPU="1" maxConcurrentThreadsPerCPU="1" requestQueueLimit="5000"/>
</system.web>
</configuration>
pointed to by applicationhost.config
<applicationPools>
<add name="DefaultAppPool" enable32BitAppOnWin64="true" CLRConfigFile="C:\Inetpub\wwwroot\SCRWeb\Data\apppool.config">
<processModel identityType="NetworkService" />
</add>
And yet I still see more than 1 thread id in my application as reported by Threading.Thread.CurrentThread.ManagedThreadId
Any ideas?
Thanks
