WCF does not terminate call to client - Azure

I have a problem with a WCF service hosted in IIS 10 on a Windows Server 2016 VM in Azure.
The service is basically a test service: the operation contract calls Thread.Sleep(timer), and the duration is passed in as a parameter defined in the operation contract.
The problem is that a sleep duration of up to 4.2 seconds runs without problems, but if I call the service specifying 5 seconds, the operation itself still completes at 5 seconds (with the code below, the internal logger tells me it finished), yet for some reason WCFTestClient keeps waiting for a response and continues to wait until the configured timeout is reached. In this case I have receiveTimeout and sendTimeout set to 10 minutes on both sides, in the service config and in the client config.
As a test I created a local environment on my own network, hosting the service on a server, and there the test client behaves as expected even after a 5 minute or even a 9 minute sleep.
[OperationContract]
public void TestServiceTimeout(int timer)
{
    try
    {
        log.Info("Start test service");
        // Thread.Sleep expects the duration in milliseconds
        System.Threading.Thread.Sleep(timer);
        log.Info("End test service");
    }
    catch (Exception ex)
    {
        log.Error($"An error occurred. Details: {ex.Message}");
        throw;
    }
}
Web.Config IIS
<binding name="Control.ws.sTest.customBinding0" receiveTimeout="00:20:00"
sendTimeout="00:20:00">
<binaryMessageEncoding />
<httpTransport />
</binding>
WCFTestClient Config
<bindings>
  <basicHttpBinding>
    <binding name="BasicHttpBinding_sFacturacion" sendTimeout="00:10:00" />
  </basicHttpBinding>
</bindings>
<client>
  <endpoint address="http://localhost:60000/ws/sFacturacion.svc"
            binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_sFacturacion"
            contract="sFacturacion" name="BasicHttpBinding_sFacturacion" />
</client>

WCF will not actively terminate the call on the client side; even if the server goes offline, the client only catches a communication exception once the configured timeout is reached.
Please refer to the documentation below for the definitions of the timeout settings.
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/configuring-timeout-values-on-a-binding
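If you want to rule out the config files, here is a rough sketch of setting the same timeouts in client code. The proxy class name sFacturacionClient and the TestServiceTimeout operation on it are assumptions based on the snippets above; adjust them to your generated client.
using System;
using System.ServiceModel;

var binding = new BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(10),     // covers the whole request/response round trip
    ReceiveTimeout = TimeSpan.FromMinutes(10),  // idle time allowed on an established connection
    OpenTimeout = TimeSpan.FromMinutes(1),
    CloseTimeout = TimeSpan.FromMinutes(1)
};
var endpoint = new EndpointAddress("http://localhost:60000/ws/sFacturacion.svc");

// sFacturacionClient is the assumed generated proxy class for the sFacturacion contract.
var client = new sFacturacionClient(binding, endpoint);
try
{
    client.TestServiceTimeout(5000); // sleep 5 seconds (Thread.Sleep takes milliseconds)
}
catch (TimeoutException)
{
    // Thrown only after SendTimeout elapses, which matches the behaviour you describe.
    throw;
}
If the code-based client shows the same behaviour, the problem is more likely in the Azure network path (for example an intermediary closing or buffering the connection) than in the binding configuration itself.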
Feel free to let me know if there is anything I can help with.

Related

Spring Integration gateway threads released or not

<gateway id="testService" service-interface="org.example.TestService"
default-request-channel="requestChannel"/>
public interface TestService {
void placeOrder(Order order);
}
<int:router input-channel="requestchannel" expression="payload.name">
<int:mapping value="foo" channel="channel_one" />
....
</int:router>
<int:chain input-channel="channel_one" output-channel="channel_default" >
<int:gateway request-channel="chainC" error-channel="errChannel"/>
<int:filter expression="headers['Release'] != null" discard-channel="nullChannel"/>
</int:chain>
There are two paths in this chain: the success path moving to channel_default and the error path moving to nullChannel.
Can this gateway cause a memory leak? How do I check that? Is there any way to ensure that the threads initiated for handling gateway requests are released after some time?
If the flow downstream of channel chainC might not return a reply, you need to set the reply-timeout to release the thread which will otherwise hang waiting for a reply that will never come.
As long as you have no async handoffs in that subflow, it is safe to set reply-timeout="0", because, in that case, the timer doesn't start until the subflow completes.
If chainC always returns a result, or exception, there is no possibility of a memory leak with that configuration.
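As a sketch based on the chain above, the mid-flow gateway would carry the timeout like this; only the reply-timeout attribute is new, everything else is your existing configuration:
<int:chain input-channel="channel_one" output-channel="channel_default">
    <!-- reply-timeout="0" is safe only if there are no async handoffs downstream of chainC -->
    <int:gateway request-channel="chainC" error-channel="errChannel" reply-timeout="0"/>
    <int:filter expression="headers['Release'] != null" discard-channel="nullChannel"/>
</int:chain>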

Azure Worker Role is not running its script

I have published a cloud service with my worker role - it is meant to poll for messages from a queue and upload files to the blob service. The messages are being sent to the queue, but I cannot see that the files are being uploaded. From the portal, I can see that the worker role is live. I don't know what I am missing. Am I meant to write something in my code to make it run automatically? The code, when run on the virtual machine, seems to work fine and will poll for messages as well as upload files. Furthermore, I am not sure how to debug the service once it is deployed, and I am open to any suggestions.
I am using Python to develop the whole service.
This is my code:
from time import sleep

if __name__ == '__main__':
    while True:
        message = message_bus_service.receive_subscription_message('jsonpayload-topic', 'sMessage')
        guid = message.body
        try:
            message.delete()
        except:
            print "No messages"
        # bunch of code that does things with the guid and uploads
        sleep(10)
this is in the csdef file:
<Runtime>
  <Environment>
    <Variable name="EMULATED">
      <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
    </Variable>
  </Environment>
  <EntryPoint>
    <ProgramEntryPoint commandLine="bin\ps.cmd LaunchWorker.ps1" setReadyOnProcessStart="true" />
  </EntryPoint>
</Runtime>
As you can see, the setReadyOnProcessStart is set to "true"
There's an example of configuring automatic start here that you could check through: http://www.dexterposh.com/2015/07/powershell-azure-custom-settings.html
Also, have you considered configuring remote access so you can log on and troubleshoot directly (i.e. check that your code is actually running, etc.)?
Configuring Remote Desktop for Worker role in the new portal

Exception Handling for Service Bus Queue Host Service

I have a WCF service that is connected to a Service Bus queue, ready to receive messages. This is working great, but I would like to be able to mark a message as a DeadLetter if I have an issue processing it. Currently, if my code throws an exception the message still gets removed from the queue, but I want to be able, via configuration, to specify that it should not be deleted from the queue but instead marked as a DeadLetter. I've done some searching and I can't figure out how to do that. I am currently running the service as a Windows service.
Uri baseAddress = ServiceBusEnvironment.CreateServiceUri("sb",
"namespace", "servicequeue");
_serviceHost = new ServiceHost(typeof(PaperlessImportServiceOneWay), baseAddress);
_serviceHost.Open();
config:
<services>
  <service name="Enrollment.ServiceOneWay">
    <endpoint name="ServiceOneWay"
              address="sb://namespace.servicebus.windows.net/servicequeue"
              binding="netMessagingBinding"
              bindingConfiguration="messagingBinding"
              contract="IServiceOneWaySoap"
              behaviorConfiguration="sbTokenProvider" />
  </service>
</services>
<netMessagingBinding>
  <binding name="messagingBinding" closeTimeout="00:03:00" openTimeout="00:03:00"
           receiveTimeout="00:03:00" sendTimeout="00:03:00" sessionIdleTimeout="00:01:00"
           prefetchCount="-1">
    <transportSettings batchFlushInterval="00:00:01" />
  </binding>
</netMessagingBinding>
<behavior name="sbTokenProvider">
  <transportClientEndpointBehavior>
    <tokenProvider>
      <sharedSecret issuerName="owner" issuerSecret="XXXXXXXXXXXXXXXXXXXXXXXX" />
    </tokenProvider>
  </transportClientEndpointBehavior>
</behavior>
In your interface, add this to the operation contract:
[ReceiveContextEnabled(ManualControl = true)]
Then you can commit or abandon the message yourself.
Found it in this link:
http://msdn.microsoft.com/en-us/library/windowsazure/hh532034.aspx
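For illustration, here is a rough sketch of what the manual complete/abandon handling could look like. The operation name and message parameter are placeholders, not your actual contract; only the attribute and the ReceiveContext calls are the relevant parts.
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IServiceOneWaySoap
{
    // ManualControl = true means the message is not settled automatically;
    // the operation must explicitly complete or abandon it.
    [OperationContract(IsOneWay = true)]
    [ReceiveContextEnabled(ManualControl = true)]
    void ProcessMessage(string payload);
}

public class PaperlessImportServiceOneWay : IServiceOneWaySoap
{
    public void ProcessMessage(string payload)
    {
        ReceiveContext receiveContext;
        if (!ReceiveContext.TryGet(
                OperationContext.Current.IncomingMessageProperties, out receiveContext))
        {
            throw new InvalidOperationException("ReceiveContext is not available.");
        }

        try
        {
            // ... process the message ...
            receiveContext.Complete(TimeSpan.FromSeconds(10));   // removes it from the queue
        }
        catch (Exception)
        {
            // Abandon returns the message to the queue; once the queue's MaxDeliveryCount
            // is exceeded, Service Bus moves it to the dead-letter queue.
            receiveContext.Abandon(TimeSpan.FromSeconds(10));
            throw;
        }
    }
}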

Delay making virtual machine role available until startup tasks complete

Is it possible to delay making a virtual machine role available until startup tasks complete?
I have a few tasks I need to complete on virtual machine start before the machine can safely be added to the load balancer. Is there a way to do this?
Found the solution. In the VM role startup Windows service I can handle the RoleEnvironment.StatusCheck event. I can then call SetBusy() to prevent the instance from being made available in the load balancer.
private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    if (this.busy)
    {
        e.SetBusy();
    }

    statusCheckWaitHandle.Set();
}
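For completeness, here is a rough sketch of how that handler might be wired up in a standard RoleEntryPoint. The original answer does this from a startup Windows service; the field names and DoStartupWork method below are purely illustrative.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private volatile bool busy = true;
    private readonly ManualResetEvent statusCheckWaitHandle = new ManualResetEvent(false);

    public override bool OnStart()
    {
        // Report "busy" to the load balancer until the startup work is done.
        RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;
        return base.OnStart();
    }

    public override void Run()
    {
        DoStartupWork();   // placeholder for the tasks that must finish first
        busy = false;      // subsequent status checks now report the instance as ready
        base.Run();
    }

    private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
    {
        if (this.busy)
        {
            e.SetBusy();
        }
        statusCheckWaitHandle.Set();
    }

    private void DoStartupWork() { /* ... */ }
}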
I believe that setting the taskType attribute to simple will make the Role wait for the task completion before actually starting:
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <Task commandLine="Startup.cmd" executionContext="limited" taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>

NServiceBus using generic host with Azure queues causes MSMQ error

I have an on-premises service bus that is configured to handle messages from an Azure queue. The problem I am having is that the host is reporting an MSMQ error saying that it could not create the error queue. Aside from the fact that it should not be using MSMQ at all, it handles the messages with no problems despite the error, so it does not seem to be critical.
My host is a class library configured to start with the NServiceBus.Host.exe process.
Here is my host code and config:
internal class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
{
    #region IWantCustomInitialization Members

    public void Init()
    {
        Configure.With()
            .DefaultBuilder()
            .AzureMessageQueue()
            .JsonSerializer()
            .UnicastBus()
            .IsTransactional(true)
            .InMemorySubscriptionStorage();
    }

    #endregion
}
Config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
    <section name="AzureQueueConfig" type="NServiceBus.Config.AzureQueueConfig, NServiceBus.Azure" />
    <section name="MessageForwardingInCaseOfFaultConfig" type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
  </configSections>
  <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
  <AzureQueueConfig QueueName="sender" ConnectionString="UseDevelopmentStorage=true" PeekInterval="5000" MaximumWaitTimeWhenIdle="60000" />
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" />
    <requiredRuntime version="v4.0.20506" />
  </startup>
</configuration>
And here is the actual error message:
2012-04-24 07:57:10,973 [1] ERROR NServiceBus.Utils.MsmqUtilities [(null)] <(null)> - Could not create queue error#UseDevelopmentStorage=true or check its existence. Processing will still continue.
System.Messaging.MessageQueueException (0x80004005): Message Queue service is not available.
   at System.Messaging.MessageQueue.Create(String path, Boolean transactional)
   at NServiceBus.Utils.MsmqUtilities.CreateQueue(String queueName, String account)
   at NServiceBus.Utils.MsmqUtilities.CreateQueueIfNecessary(Address address, String account)
EDIT: Adding .MessageForwardingInCaseOfFault() to the initialization corrected the issue.
Looks like AsA_Server assumes MSMQ; I guess you'll have to configure the process manually.
Adding .MessageForwardingInCaseOfFault() to the init method resolved the issue. Still feels like there is an underlying bug, but it is working.
I suspect that the link below describes the next hurdle (not handling errors correctly), but I will have to force a failed message to verify.
As described in:
NServiceBus error queues in Azure
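For reference, the corrected Init would look roughly like this: the same fluent chain as in the question with the fault-forwarding call appended. The exact position of the call in the chain is an assumption; what matters is that it is part of the configuration.
public void Init()
{
    Configure.With()
        .DefaultBuilder()
        .AzureMessageQueue()
        .JsonSerializer()
        .UnicastBus()
        .IsTransactional(true)
        .InMemorySubscriptionStorage()
        // Routes failed messages to the queue named in MessageForwardingInCaseOfFaultConfig
        // ("error" above) instead of falling back to the MSMQ-based default.
        .MessageForwardingInCaseOfFault();
}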