Exception Handling for Service Bus Queue Host Service - Azure

I have a WCF service that is connected to a Service Bus queue, ready to receive messages. This is working great, but I would like to be able to mark a message as a dead letter if I have an issue processing it. Currently, if my code throws an exception, the message still gets removed from the queue; I want to be able to specify in configuration that the message should not be deleted from the queue but marked as a dead letter instead. I've done some searching and I can't figure out how to do that. I am currently running the service as a Windows service.
Uri baseAddress = ServiceBusEnvironment.CreateServiceUri("sb",
"namespace", "servicequeue");
_serviceHost = new ServiceHost(typeof(PaperlessImportServiceOneWay), baseAddress);
_serviceHost.Open();
config:
<services>
<service name="Enrollment.ServiceOneWay">
<endpoint name="ServiceOneWay"
address="sb://namespace.servicebus.windows.net/servicequeue"
binding="netMessagingBinding"
bindingConfiguration="messagingBinding"
contract="IServiceOneWaySoap"
behaviorConfiguration="sbTokenProvider" />
</service>
</services>
<netMessagingBinding>
<binding name="messagingBinding" closeTimeout="00:03:00" openTimeout="00:03:00"
receiveTimeout="00:03:00" sendTimeout="00:03:00" sessionIdleTimeout="00:01:00"
prefetchCount="-1">
<transportSettings batchFlushInterval="00:00:01" />
</binding>
</netMessagingBinding>
<behavior name="sbTokenProvider">
<transportClientEndpointBehavior>
<tokenProvider>
<sharedSecret issuerName="owner" issuerSecret="XXXXXXXXXXXXXXXXXXXXXXXX" />
</tokenProvider>
</transportClientEndpointBehavior>
</behavior>

In your interface, decorate the operation contract with this attribute:
[ReceiveContextEnabled(ManualControl = true)]
Then you can manually complete or abandon the message.
Found it in this link:
http://msdn.microsoft.com/en-us/library/windowsazure/hh532034.aspx
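A minimal sketch of what a manually controlled operation can look like (the contract and service names are taken from the question's host code; the method name ProcessMessage and the timeouts are placeholders):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IServiceOneWaySoap
{
    [OperationContract(IsOneWay = true)]
    [ReceiveContextEnabled(ManualControl = true)]
    void ProcessMessage(string payload);
}

public class PaperlessImportServiceOneWay : IServiceOneWaySoap
{
    public void ProcessMessage(string payload)
    {
        // Look up the receive context attached to the current message.
        ReceiveContext receiveContext;
        if (!ReceiveContext.TryGet(
                OperationContext.Current.IncomingMessageProperties,
                out receiveContext))
        {
            throw new InvalidOperationException("No receive context available.");
        }

        try
        {
            // ... process the message ...

            // Complete removes the message from the queue.
            receiveContext.Complete(TimeSpan.FromSeconds(10));
        }
        catch (Exception)
        {
            // Abandon returns the message to the queue; once the queue's
            // MaxDeliveryCount is exceeded, Service Bus dead-letters it.
            receiveContext.Abandon(TimeSpan.FromSeconds(10));
            throw;
        }
    }
}
```

Note that with manual control the dead-lettering itself is driven by the queue's MaxDeliveryCount setting: each Abandon counts as a failed delivery attempt.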


WCF does not terminate call to Client

I have a problem with a WCF service which is hosted on a Windows 2016 VM in Azure running in IIS 10.
The service is basically a test service: in the operation contract I put a Thread.Sleep(timer), and the duration is passed as a parameter defined in the operation contract.
The problem is that a sleep duration of up to 4.2 seconds runs without problems, but if I call the service specifying 5 seconds, the task does finish at 5 seconds (with the code below, the internal logger notifies me that it is done), yet for some reason WCFTestClient keeps waiting for a response until the configured timeout is reached. In this case I have receiveTimeout and sendTimeout set to 10 minutes on both sides, in the service config and in the client config.
As a proof I created a local environment in my network, mounting the service on a server, and there even a 5-minute or 9-minute test behaves as expected in the test client.
[OperationContract]
public void TestServiceTimeout(int timer)
{
    try
    {
        log.Info("Start test service");
        System.Threading.Thread.Sleep(timer);
        log.Info("End test service");
    }
    catch (Exception ex)
    {
        log.Error($"An error occurred. Details: {ex.Message}");
        throw;
    }
}
Web.Config IIS
<binding name="Control.ws.sTest.customBinding0" receiveTimeout="00:20:00"
sendTimeout="00:20:00">
<binaryMessageEncoding />
<httpTransport />
</binding>
WCFTestClient Config
<bindings>
<basicHttpBinding>
<binding name="BasicHttpBinding_sFacturacion" sendTimeout="00:10:00" />
</basicHttpBinding>
</bindings>
<client>
<endpoint address="http://localhost:60000/ws/sFacturacion.svc"
binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_sFacturacion"
contract="sFacturacion" name="BasicHttpBinding_sFacturacion" />
</client>
WCF will not actively terminate the call on the client side; even if the server goes offline, the client only catches a communication exception after the configured timeout is reached.
Please refer to the definition of the timeout settings below:
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/configuring-timeout-values-on-a-binding
Feel free to let me know if there is anything I can help with.
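On the client side the wait therefore ends in an exception rather than a clean return, so the call should be guarded. A sketch, assuming a generated proxy class named sFacturacionClient (the name is an assumption based on the contract name in the config above):

```csharp
using System;
using System.ServiceModel;

var client = new sFacturacionClient("BasicHttpBinding_sFacturacion");
try
{
    // Blocks until the server replies or a binding timeout elapses.
    client.TestServiceTimeout(5000);
    client.Close();
}
catch (TimeoutException)
{
    // The client's sendTimeout was exceeded while waiting for the reply.
    client.Abort();
}
catch (CommunicationException)
{
    // Transport-level failure, e.g. the connection was dropped.
    client.Abort();
}
```

Abort (rather than Close) is used in the failure paths because the channel is already faulted at that point.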

Strange Azure Service bus behaviour - subscription #2 is receiving message while subscription #1 is not

I am having some very strange issues with Azure Service Bus topic subscriptions. For the reference I have extracted the service bus topic configurations via service bus explorer and hoping somebody could point me into the right direction here. Really appreciate any help I can get.
Setup
I have 1 topic named topic2 and two subscription for this topic topicsub2 and topicsub3.
Issues
When publishing a message to the topic, subscription topicsub3 receives all messages that are sent, while topicsub2 does not receive any. This behaviour is random: sometimes I see messages in topicsub2 and sometimes the message is never delivered to it, even though both subscriptions use the same filters.
The following are the test scenarios I have tried to figure out why this is happening.
I tried removing the filter from topicsub2 and sending a message, but this had no impact: still no messages. My assumption is that if no filters are applied, all messages for the topic will be forwarded to the subscription.
After removing the filter, I added the default SQL filter (1=1) back to the subscription and tried sending messages again; it didn't work. I sent 5 messages to the topic: none of them appeared in topicsub2 and all were received by topicsub3.
I restarted the console application receiver (the console application that reads from topicsub2) and tried sending messages again. This time it worked.
After about 10 minutes, topicsub2 stopped receiving messages again.
Summary
This is where I am at. My investigation leads me to believe this behaviour is random and that there is some idling going on with topicsub2 that I am unaware of. topicsub3 is working fine.
Can somebody please lead me into the right direction?
The following are the settings for topic2, including the two subscriptions.
<?xml version="1.0" encoding="UTF-8"?>
<Entities xmlns="http://schemas.microsoft.com/servicebusexplorer" serviceBusNamespace="defaultstd">
<Topics>
<Topic>
<DefaultMessageTimeToLive>14.00:00:00</DefaultMessageTimeToLive>
<AutoDeleteOnIdle>10675199.02:48:05.4775807</AutoDeleteOnIdle>
<MaxSizeInMegabytes>1024</MaxSizeInMegabytes>
<RequiresDuplicateDetection>False</RequiresDuplicateDetection>
<DuplicateDetectionHistoryTimeWindow>00:10:00</DuplicateDetectionHistoryTimeWindow>
<Path>topic2</Path>
<EnableBatchedOperations>True</EnableBatchedOperations>
<SupportOrdering>False</SupportOrdering>
<EnableFilteringMessagesBeforePublishing>True</EnableFilteringMessagesBeforePublishing>
<IsAnonymousAccessible>False</IsAnonymousAccessible>
<Status>Active</Status>
<UserMetadata />
<EnablePartitioning>True</EnablePartitioning>
<EnableExpress>False</EnableExpress>
<IsReadOnly>False</IsReadOnly>
<Subscriptions>
<Subscription>
<LockDuration>00:00:30</LockDuration>
<RequiresSession>False</RequiresSession>
<DefaultMessageTimeToLive>14.00:00:00</DefaultMessageTimeToLive>
<AutoDeleteOnIdle>10675199.02:48:05.4775807</AutoDeleteOnIdle>
<EnableDeadLetteringOnMessageExpiration>True</EnableDeadLetteringOnMessageExpiration>
<EnableDeadLetteringOnFilterEvaluationExceptions>True</EnableDeadLetteringOnFilterEvaluationExceptions>
<TopicPath>topic2</TopicPath>
<Name>topicsub2</Name>
<MaxDeliveryCount>2</MaxDeliveryCount>
<EnableBatchedOperations>False</EnableBatchedOperations>
<Status>Active</Status>
<ForwardTo />
<ForwardDeadLetteredMessagesTo />
<UserMetadata />
<IsReadOnly>False</IsReadOnly>
<Rules>
<Rule>
<Filter>
<SqlFilter>
<SqlExpression>sys.Label NOT IN ('TestEvent')</SqlExpression>
<CompatibilityLevel>20</CompatibilityLevel>
</SqlFilter>
</Filter>
<Action>
<EmptyRuleAction />
</Action>
<Name>$Default</Name>
<IsReadOnly>True</IsReadOnly>
</Rule>
</Rules>
</Subscription>
<Subscription>
<LockDuration>00:00:30</LockDuration>
<RequiresSession>False</RequiresSession>
<DefaultMessageTimeToLive>14.00:00:00</DefaultMessageTimeToLive>
<AutoDeleteOnIdle>10675199.02:48:05.4775807</AutoDeleteOnIdle>
<EnableDeadLetteringOnMessageExpiration>True</EnableDeadLetteringOnMessageExpiration>
<EnableDeadLetteringOnFilterEvaluationExceptions>True</EnableDeadLetteringOnFilterEvaluationExceptions>
<TopicPath>topic2</TopicPath>
<Name>topicsub3</Name>
<MaxDeliveryCount>2</MaxDeliveryCount>
<EnableBatchedOperations>False</EnableBatchedOperations>
<Status>Active</Status>
<ForwardTo />
<ForwardDeadLetteredMessagesTo />
<UserMetadata />
<IsReadOnly>False</IsReadOnly>
<Rules>
<Rule>
<Filter>
<SqlFilter>
<SqlExpression>sys.Label NOT IN ('TestEvent')</SqlExpression>
<CompatibilityLevel>20</CompatibilityLevel>
</SqlFilter>
</Filter>
<Action>
<EmptyRuleAction />
</Action>
<Name>$Default</Name>
<IsReadOnly>True</IsReadOnly>
</Rule>
</Rules>
</Subscription>
</Subscriptions>
</Topic>
</Topics>
</Entities>

Azure Worker Role is not running its script

I have published a cloud service with my worker role - it is meant to poll for messages from a queue and upload a file to blob storage. The messages are being sent to the queue, but I cannot see that the files are being uploaded. From the portal, I can see that the worker role is live. I don't know what I am missing. Am I meant to write something in my code to make it run automatically? The code, when run on the virtual machine, seems to work fine and will poll for messages as well as upload files. Furthermore, I am not sure how to debug the service once it is deployed, and I am open to any suggestions.
I am using Python to develop the whole service.
This is my code:
if __name__ == '__main__':
    while True:
        message = message_bus_service.receive_subscription_message('jsonpayload-topic', 'sMessage')
        guid = message.body
        try:
            message.delete()
        except:
            print "No messages"
        # bunch of code that does things with the guid and uploads
        sleep(10)
this is in the csdef file:
<Runtime>
<Environment>
<Variable name="EMULATED">
<RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
</Variable>
</Environment>
<EntryPoint>
<ProgramEntryPoint commandLine="bin\ps.cmd LaunchWorker.ps1" setReadyOnProcessStart="true" />
</EntryPoint>
</Runtime>
As you can see, the setReadyOnProcessStart is set to "true"
There's an example of configuring automatic start here that you could check through: http://www.dexterposh.com/2015/07/powershell-azure-custom-settings.html
Also, have you considered configuring remote access so you can log on and troubleshoot directly (i.e. check that your code is running, etc.)?
Configuring Remote Desktop for Worker role in the new portal

Azure project lost endpoints and uses default now?

A weird thing happened to my project. I have an Azure WCF project which basically consists of the WebRole and the Azure project. The Azure project contains ServiceDefinition.csdef, which in turn contains things like endpoint information.
I was playing around in my WebRole and manually set an endpoint there. However, my original issue, due to a stupid user error, did not require this. After I removed the endpoint definition from web.config, my WebRole still gets bound to port 6627 instead of the two endpoints described in my Azure project (80 & 8080). I can't find that port mentioned anywhere, so I'm guessing it is the default.
Here's the part of the web.config that I edited (the removed part is in comments). How do I revert to getting the configuration from the Azure project?
<system.serviceModel>
<!-- services>
<service name="MyWebRole.MyService" behaviorConfiguration="MyWebRole.BasicUserInformationBehavior">
<endpoint address="" binding="mexHttpBinding" contract="MyWebRole.IMyService"/>
</service>
</services -->
<extensions>
<behaviorExtensions>
<add name="userInformationProcessor" type="MyWebRole.BasicUserInformationBehaviorExtensionElement, MyWebRole, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
</behaviorExtensions>
</extensions>
<bindings />
<client />
<behaviors>
<serviceBehaviors>
<behavior>
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="false" />
<userInformationProcessor />
</behavior>
</serviceBehaviors>
<endpointBehaviors>
</endpointBehaviors>
</behaviors>
<serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>
[Edit] More information on the subject! The problem is related to the compute emulator no longer starting at all. I don't know why the service works then, but I guess it's running in IIS alone.
I think the solution, as mentioned in the comment, is that you have to set the Windows Azure project as the startup project, not the WebRole.

nservicebus using generic host with azure queues causes msmq error

I have an on-premise service bus that is configured to handle messages from an Azure queue. The problem I am having is that the host reports an MSMQ error saying that it could not create the error queue. Aside from the fact that it should not be using MSMQ at all, it handles the messages with no problems despite the error, so the error does not seem to be critical.
My Host is running as a class library configured to start with the nservicebus.host.exe process.
Here is my host code and config:
internal class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
{
#region IWantCustomInitialization Members
public void Init()
{
Configure.With()
.DefaultBuilder()
.AzureMessageQueue()
.JsonSerializer()
.UnicastBus()
.IsTransactional(true)
.InMemorySubscriptionStorage();
}
#endregion
}
Config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
<section name="AzureQueueConfig" type="NServiceBus.Config.AzureQueueConfig, NServiceBus.Azure"/>
<section name="MessageForwardingInCaseOfFaultConfig" type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
</configSections>
<MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
<AzureQueueConfig QueueName="sender" ConnectionString="UseDevelopmentStorage=true" PeekInterval="5000" MaximumWaitTimeWhenIdle="60000" />
<startup useLegacyV2RuntimeActivationPolicy="true">
<supportedRuntime version="v4.0" />
<requiredRuntime version="v4.0.20506" />
</startup>
</configuration>
And Here is the actual Error Message:
2012-04-24 07:57:10,973 [1] ERROR NServiceBus.Utils.MsmqUtilities [(null)] <(null)> - Could not create queue error#UseDevelopmentStorage=true or check its existence. Processing will still continue.
System.Messaging.MessageQueueException (0x80004005): Message Queue service is not available.
   at System.Messaging.MessageQueue.Create(String path, Boolean transactional)
   at NServiceBus.Utils.MsmqUtilities.CreateQueue(String queueName, String account)
   at NServiceBus.Utils.MsmqUtilities.CreateQueueIfNecessary(Address address, String account)
EDIT: Adding .MessageForwardingInCaseOfFault() to the initialization corrected the issue.
Looks like AsA_Server assumes MSMQ; I guess you'll have to configure the process manually.
Adding .MessageForwardingInCaseOfFault() to the Init method resolved the issue. It still feels like there is an underlying bug, but it is working.
I suspect that the link below describes the next hurdle (not handling errors correctly), but I will have to force a failed message to verify.
As described in:
NServiceBus error queues in Azure
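For reference, the Init method with the fix applied might look like this (a sketch based on the configuration shown above; the exact position of the call in the fluent chain is an assumption):

```csharp
internal class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
{
    public void Init()
    {
        Configure.With()
            .DefaultBuilder()
            .AzureMessageQueue()
            .JsonSerializer()
            // Route failed messages to the configured error queue instead of
            // letting the host fall back to creating an MSMQ error queue.
            .MessageForwardingInCaseOfFault()
            .UnicastBus()
            .IsTransactional(true)
            .InMemorySubscriptionStorage();
    }
}
```

With this in place, the error queue named in MessageForwardingInCaseOfFaultConfig is used for failed messages, which matches the behaviour described in the edit above.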
