how to get azure webrole to log all 404s - azure

I have been trying to get logging working with azure for my MVC project but so far haven't had much success.
I have a Diagnostics connection string in my ServiceConfiguration.Cloud.cscfg file which points to my blob storage:
...
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.DiagnosticsConnectionString" value="**ConnectionString**" />
</ConfigurationSettings>
My web.config has tracing set up
...
<tracing>
<traceFailedRequests>
<remove path="*"/>
<add path="*">
<traceAreas>
<add provider="ASP" verbosity="Verbose" />
<add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
<add provider="ISAPI Extension" verbosity="Verbose" />
<add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" />
</traceAreas>
<failureDefinitions timeTaken="00:00:15" statusCodes="400-599" />
</add>
</traceFailedRequests>
</tracing>
</system.webServer>
My WebRole.cs contains the following:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace MvcWebRole1
{
public class WebRole : RoleEntryPoint
{
public override bool OnStart()
{
// Get the factory configuration so that it can be edited
DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
// Set scheduled transfer interval for infrastructure logs to 1 minute
config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1);
// Specify a logging level to filter records to transfer
config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
// Set scheduled transfer interval for user's Windows Azure Logs to 1 minute
config.Logs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.DiagnosticsConnectionString", config);
//RoleEnvironment.Changing += this.RoleEnvironmentChanging;
return base.OnStart();
}
}
}
But I am not seeing any diagnostics logs.
The mam folder just contains an MACommanda.xml and a MASecret, the vsdeploy folder is empty, and the wad-control-container has a file for each deployment.
Am I missing something / doing something wrong?
I have been trying to follow the guides from http://msdn.microsoft.com/en-us/library/windowsazure/gg433048.aspx in particular http://channel9.msdn.com/learn/courses/Azure/Deployment/DeployingApplicationsinWindowsAzure/Exercise-3-Monitoring-Applications-in-Windows-Azure
Update:
I found the following which could be part of the problem
IIS7 Logs Are Not Collected Properly -
http://msdn.microsoft.com/en-us/library/hh134842
although that should only account for the 404s not being logged; with a failure definition of 15 seconds, the 17-second sleep in my controller action should still have been logged.
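For reference, a minimal MVC action along the lines of what I'm testing with (a hypothetical sketch; the controller and action names are made up):
using System;
using System.Web.Mvc;

public class TestController : Controller
{
    // Sleeping for 17 seconds should exceed the 15-second timeTaken threshold
    // in the failureDefinitions above and produce a failed-request trace.
    public ActionResult Slow()
    {
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(17));
        return Content("finished after 17 seconds");
    }
}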

Here is how I was finally able to get all the logging working on the Azure web role:
In the WebRole.cs include the following:
// Get the default initial configuration for DiagnosticMonitor.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
// Transfer logs of all levels (Verbose and above) to persistent storage.
config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = config.Logs.ScheduledTransferLogLevelFilter =
config.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
// Schedule a transfer period of 1 minute.
config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = config.Logs.ScheduledTransferPeriod = config.WindowsEventLog.ScheduledTransferPeriod =
config.Directories.ScheduledTransferPeriod = config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
// Specify a buffer quota.
config.DiagnosticInfrastructureLogs.BufferQuotaInMB = config.Logs.BufferQuotaInMB = config.WindowsEventLog.BufferQuotaInMB =
config.Directories.BufferQuotaInMB = config.PerformanceCounters.BufferQuotaInMB = 512;
// Set an overall quota of 8GB maximum size.
config.OverallQuotaInMB = 8192;
// Add the Windows event log data sources, collecting event data from the System and Application channels.
config.WindowsEventLog.DataSources.Add("System!*");
config.WindowsEventLog.DataSources.Add("Application!*");
// Use 30 seconds for the perf counter sample rate.
TimeSpan perfSampleRate = TimeSpan.FromSeconds(30D);
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration()
{
CounterSpecifier = #"\Memory\Available Bytes",
SampleRate = perfSampleRate
});
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration()
{
CounterSpecifier = #"\Processor(_Total)\% Processor Time",
SampleRate = perfSampleRate
});
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration()
{
CounterSpecifier = #"\ASP.NET\Applications Running",
SampleRate = perfSampleRate
});
// Start the DiagnosticMonitor using the diagnosticConfig and our connection string.
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",config);
return base.OnStart();
In the web.config, under system.webServer, add the following:
<tracing>
<traceFailedRequests>
<remove path="*"/>
<add path="*">
<traceAreas>
<add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
<add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" />
</traceAreas>
<failureDefinitions statusCodes="400-599" />
</add>
</traceFailedRequests>
</tracing>
In the service definition file add the following under the web role:
<LocalResources>
<LocalStorage name="DiagnosticStore" sizeInMB="8192" cleanOnRoleRecycle="false"/>
</LocalResources>
That should enable all your logging in the MVC application.
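As a sanity check, once this is deployed you can write a few trace entries from a controller; with the configuration above (and the Azure trace listener registered in web.config), they should show up in the WADLogsTable of the diagnostics storage account after the next scheduled transfer. A rough sketch with a made-up controller:
using System.Diagnostics;
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // These entries are buffered locally and pushed to table storage
        // on the ScheduledTransferPeriod configured in OnStart.
        Trace.TraceInformation("Home/Index was hit");
        Trace.TraceError("Example error entry");
        return View();
    }
}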

Can you try removing the "timeTaken" attribute from the "failureDefinitions" node? Ref: http://msdn.microsoft.com/en-us/library/aa965046(v=VS.90).aspx.

Check the deployment's diagnostics settings by viewing the appropriate file in wad-control-container.
I note that you are not setting what in my experience are all the required values for DiagnosticInfrastructureLogs or for Logs, including BufferQuotaInMB and ScheduledTransferLogLevelFilter.
Try this:
// Get the factory configuration so that it can be edited
DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.DiagnosticInfrastructureLogs.BufferQuotaInMB = 512;
config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1D);
config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
config.Logs.BufferQuotaInMB = 512;
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
config.Logs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1D);
Try that to start. Ensure as well that you have added trace listeners.
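For reference, the listener the Azure SDK template registers is Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener; it is usually declared in web.config under system.diagnostics, but as a sketch (assuming its parameterless constructor, which is what the config-based registration relies on) it can also be added in code:
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;

public static class TraceSetup
{
    public static void AddAzureTraceListener()
    {
        // Routes System.Diagnostics.Trace output into Windows Azure Diagnostics,
        // so Trace.TraceXxx calls end up in the buffered logs that get transferred.
        Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
        Trace.AutoFlush = true;
    }
}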

Related

How to configure IIS to define Client Certificate required on a specific endpoint when routing make endpoint path different from physical path

I implemented a .NET Core API where the endpoints are defined by the controllers' Route attributes.
I have for example 2 endpoints
api/controller1 and
api/controller2
I want to configure IIS so a client certificate is ignored for controller1 and required for controller2.
In my API, I set up the host this way:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.ConfigureKestrel(o =>
{
o.ConfigureHttpsDefaults(httpsOptions => httpsOptions.ClientCertificateMode = ClientCertificateMode.AllowCertificate);
})
.UseIISIntegration()
.ConfigureLogging(logging =>
{
logging.ClearProviders();
logging.SetMinimumLevel(LogLevel.Debug);
})
.UseNLog();
and configured the services:
services.AddSingleton<CertificateValidationService>();
services.Configure<IISOptions>(options =>
{
options.ForwardClientCertificate = true;
});
services.AddAuthentication()
.AddCertificate(x =>
{
x.AllowedCertificateTypes = CertificateTypes.All;
x.ValidateValidityPeriod = true;
x.RevocationMode = X509RevocationMode.NoCheck;
x.Events = new CertificateAuthenticationEvents
{
OnCertificateValidated = context =>
{
_logger.Trace("Enters OnCertificateValidated");
var validationService =
context.HttpContext.RequestServices.GetService<CertificateValidationService>();
if (validationService.ValidateCertificate(context.ClientCertificate))
{
_logger.Trace("OnCertificateValidated success");
context.Success();
}
else
{
_logger.Trace("OnCertificateValidated fail");
context.Fail("invalid certificate");
}
return Task.CompletedTask;
},
OnAuthenticationFailed = context =>
{
_logger.Trace("Enters OnAuthenticationFailed");
context.Fail("invalid certificate");
return Task.CompletedTask;
}
};
});
Here is the middleware pipeline configuration in Configure method of Startup.cs
if (env.IsLocal())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler(appBuilder =>
{
appBuilder.Use(async (context, next) =>
{
var error = context.Features[typeof(IExceptionHandlerFeature)] as IExceptionHandlerFeature;
if (error != null && error.Error is SecurityTokenExpiredException)
{
_logger.Warn($"No valid token provided. {error.Error.Message}");
context.Response.StatusCode = 401;
context.Response.ContentType = "application/json";
await context.Response.WriteAsync(JsonConvert.SerializeObject(new
{
IpUrl = _globalSettings.IdP.Url,
SpName = _globalSettings.IdP.Name,
Authenticate = context.Request.GetEncodedUrl(),
//State = 401,
Msg = "Token expired"
}));
}
else if (error?.Error != null)
{
_logger.Error($"Unexpected error - {error.Error.Message}");
context.Response.StatusCode = 500;
context.Response.ContentType = "application/json";
await context.Response.WriteAsync(JsonConvert.SerializeObject(new
{
State = 500,
Msg = error.Error.Message
}));
}
else
{
await next();
}
});
});
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseRouting();
app.UseCors("AllowOrigin");
app.UseAuthentication();
app.UseAuthorization();
app.UseSwagger(SwaggerHelper.ConfigureSwagger);
app.UseSwaggerUI(SwaggerHelper.ConfigureSwaggerUi);
app.UseEndpoints(endpoints => endpoints.MapControllers());
I tried to use a web.config location element, but the path api/controller2 doesn't actually exist (it's routed), so it has no effect.
I created fake api/controller2 folders in the app folder to set up the SSL requirement on them. Unfortunately, I then get a 405 because I lose the routing and there's nothing behind those folders.
The only way I have found so far is to "accept" a certificate at the API application level. But then, as soon as my front end queries my API for the first time, it is asked for a certificate, even though it only uses api/controller1.
Is there a way to do this, or do I have to build and deploy one API that requires a client certificate and a separate one that doesn't?
Unfortunately this is not possible. Certificate validation happens at the TLS level, i.e. before the actual request reaches ASP.NET Core, so you cannot distinguish by route. It fails even before you could implement such logic.
We had a similar problem and had to set up two applications, one with certificate validation and one without. The one with certificate validation then called the other app with "normal" (JWT machine-to-machine in our case) authentication and passed the certificate parameters along, roughly as sketched below.
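As a rough illustration of that split (the header name, token helper and internal URL below are all made up for the sketch), the certificate-validating app forwarded requests along these lines:
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/controller2")]
public class Controller2ProxyController : ControllerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public Controller2ProxyController(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        // The TLS client certificate has already been validated by this app.
        var clientCert = await HttpContext.Connection.GetClientCertificateAsync();
        if (clientCert == null)
        {
            return Unauthorized();
        }

        // Call the internal app (no client certificate required there) with
        // machine-to-machine auth, passing certificate details as a header.
        var client = _httpClientFactory.CreateClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", await GetMachineTokenAsync());
        client.DefaultRequestHeaders.Add("X-Client-Cert-Thumbprint", clientCert.Thumbprint);

        var response = await client.GetAsync("https://internal-api.example.com/api/controller2");
        return StatusCode((int)response.StatusCode, await response.Content.ReadAsStringAsync());
    }

    // Placeholder for however the JWT is obtained from the identity provider.
    private static Task<string> GetMachineTokenAsync()
    {
        return Task.FromResult("machine-to-machine-token");
    }
}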
This is the official documentation that states this:
Can I configure my app to require a certificate only on certain paths?
This isn't possible. Remember the certificate exchange is done at the
start of the HTTPS conversation, it's done by the server before the
first request is received on that connection so it's not possible to
scope based on any request fields.
I have a similar issue, and I found a solution for IIS Express.
I think it can be solved similarly for full IIS; I will write about that later (if it works).
About the solution (starting conditions):
I am at the testing stage, running from Visual Studio, and the .NET Core app runs under the IIS Express instance integrated into VS.
For my solution I need to request a client certificate when the user goes to the URL '/certificate/apply/' (only on this page).
The project name is 'TestCore'.
Steps:
In the Visual Studio project folder, find the hidden .vs folder; inside it, find the 'config' folder and the 'applicationhost.config' file.
In this file, find the section similar to the one below that contains your project's configuration:
<location path="TestCore" inheritInChildApplications="false">
<system.webServer>
<modules>
<remove name="WebMatrixSupportModule" />
</modules>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="%LAUNCHER_PATH%" stdoutLogEnabled="false" hostingModel="InProcess" startupTimeLimit="3600" requestTimeout="23:00:00" />
<httpCompression>
<dynamicTypes>
<add mimeType="text/event-stream" enabled="false" />
</dynamicTypes>
</httpCompression>
</system.webServer>
</location>
Clone (copy-paste) this section in the file and modify the copy (change the path and add a security section):
<location path="TestCore/certificate/apply" inheritInChildApplications="false">
<system.webServer>
<modules>
<remove name="WebMatrixSupportModule" />
</modules>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="%LAUNCHER_PATH%" stdoutLogEnabled="false" hostingModel="InProcess" startupTimeLimit="3600" requestTimeout="23:00:00" />
<httpCompression>
<dynamicTypes>
<add mimeType="text/event-stream" enabled="false" />
</dynamicTypes>
</httpCompression>
<security>
<access sslFlags="SslNegotiateCert" />
</security>
</system.webServer>
</location>
Try to start the project (for me it works fine).
I hope I (or somebody else) will find the same approach for full IIS.
For IIS and IISExpress, this is possible by using the location element.
I have successfully used it like this (this is added in the web.config, in the configuration element - see here for an example):
<location path="restapi">
<system.webServer>
<security>
<access sslFlags="Ssl,SslNegotiateCert,SslRequireCert"/>
</security>
</system.webServer>
</location>
This means that for any route other than restapi, e.g. http://localhost/whatever, the server will not require a client certificate to be part of the request. As soon as you hit http://localhost/restapi/whatever2, a client certificate is required (if you call this from a browser, you will get a pop-up asking you to choose a client certificate to present).
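For a non-browser caller hitting the protected path, something along these lines works as a test client (a sketch; the pfx path, password and URL are placeholders):
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

public static class RestApiTestClient
{
    public static async Task CallAsync()
    {
        var handler = new HttpClientHandler
        {
            ClientCertificateOptions = ClientCertificateOption.Manual
        };
        // Attach the client certificate that IIS will negotiate for /restapi requests.
        handler.ClientCertificates.Add(new X509Certificate2("client.pfx", "pfx-password"));

        using (var client = new HttpClient(handler))
        {
            // No certificate is needed for /whatever, but /restapi/whatever2 requires one.
            var response = await client.GetAsync("https://localhost/restapi/whatever2");
            Console.WriteLine(response.StatusCode);
        }
    }
}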
For Kestrel, look at the answer provided by Maxim Zabolotskikh.

Uploading big files to dotnetcore app in Azure fails (404)

I can't upload big files and I'm not sure if it's still about a size limitation or about a timeout.
On the controller endpoint, I tried all the attributes I found (at once)
[HttpPost("[action]")]
[DisableRequestSizeLimit]
[RequestFormLimits(MultipartBodyLengthLimit = long.MaxValue, BufferBodyLengthLimit = long.MaxValue)]
[RequestSizeLimit(int.MaxValue)]
public async Task UploadForm()
During 'ConfigureServices' I also set up this:
services.Configure<FormOptions>(options =>
{
options.MemoryBufferThreshold = int.MaxValue;
options.ValueLengthLimit = int.MaxValue;
options.ValueCountLimit = int.MaxValue;
options.MultipartBodyLengthLimit = int.MaxValue; // In case of multipart
});
But I still get 404 errors after uploading a part of the file (30 MB are already too much).
Then I even tried configuring Kestrel with the following code, but with that the app doesn't even start (502):
.UseKestrel((KestrelServerOptions o) =>
{
o.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(120);
o.Limits.RequestHeadersTimeout = TimeSpan.FromMinutes(120);
o.Limits.MaxRequestBodySize = null;
})
Have a look at this official doc.
Solution:
Change the value of maxAllowedContentLength.
Add this code to Web.config (under site/wwwroot on Kudu):
<configuration>
<system.webServer>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="<valueInBytes>"/>
</requestFiltering>
</security>
</system.webServer>
</configuration>
This should work without a restart.
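On the ASP.NET Core side you can also lift the server's own limit per request via IHttpMaxRequestBodySizeFeature; note this only affects the Kestrel/ASP.NET Core limit and does not replace the IIS maxAllowedContentLength change above. A sketch (the controller name is made up):
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http.Features;
using Microsoft.AspNetCore.Mvc;

public class UploadController : ControllerBase
{
    [HttpPost("[action]")]
    public async Task<IActionResult> UploadForm()
    {
        // Remove the ASP.NET Core body-size limit for this request only.
        // This must be done before the request body is read.
        var sizeFeature = HttpContext.Features.Get<IHttpMaxRequestBodySizeFeature>();
        if (sizeFeature != null && !sizeFeature.IsReadOnly)
        {
            sizeFeature.MaxRequestBodySize = null;
        }

        // ... read Request.Body / Request.Form here and stream it to storage ...
        await Task.CompletedTask;
        return Ok();
    }
}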

Deploy asp.net core webapi app from VS Code to IIS web server errors

I am new to ASP.NET Core; I'm used to Visual Studio 2017 and how publishing web apps works with that.
I rewrote, in ASP.NET Core using VS Code, a simple login endpoint that was previously written in Visual Studio 2017 using MVC, and tried to publish it following articles I googled. I'm getting an error when I browse the app in IIS. Something else looks weird too: when I view the published files they look incomplete. I'm new to creating Web API Core apps with VS Code, so maybe I'm wrong. I'll attach a screenshot of the published files.
The error I am getting is:
HTTP Error 502.3 - Bad Gateway
There was a connection error while trying to route the request.
I have installed the .NET Core SDK and the other required components on the IIS machine and set up the App Pool to use No Managed Code as well.
Web Config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<location path="." inheritInChildApplications="false">
<system.webServer>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="dotnet" arguments=".\LB_CONNECT_API_2.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="InProcess" />
</system.webServer>
</location>
</configuration>
[Screenshot of published files]
I tried editing web.config and removing the V2 from AspNetCore
[Route("api/[controller]")]
[ApiController]
public class LoginController : ControllerBase
{
// POST: api/Login
public User Post(User user)
{
SqlDataReader reader = null;
SqlConnection myConnection = new SqlConnection
{
ConnectionString = #"Server=ServerName;Database=DB;User ID=User;Password=password;"
};
SqlCommand sqlCmd = new SqlCommand
{
CommandType = CommandType.StoredProcedure,
CommandText = "lb_Login",
Connection = myConnection
};
sqlCmd.Parameters.AddWithValue("#Email", user.Email);
sqlCmd.Parameters.AddWithValue("#Password", user.Password);
myConnection.Open();
reader = sqlCmd.ExecuteReader();
User _user = null;
while (reader.Read())
{
_user = new User
{
UserID = Guid.Parse(reader["UserID"].ToString()),
FirstName = reader["FirstName"].ToString(),
LastName = reader["LastName"].ToString(),
GroupName = reader["GroupName"].ToString(),
Email = reader["Email"].ToString(),
Cell = reader["Cell"].ToString()
};
}
return _user;
}
}
I need it to return the user object in JSON

IIS 7 dynamic content compression not working

My IIS 7 dynamic content compression will not work as verified by server logs... bytes sent/received are identical with compression on and off.
Let me go through the things I've done so far to make sure this is done right:
1) Install dynamic compression module (duh)
2) Enable dynamic compression
3) in web.config under system.webserver/httpCompression, I've added DynamicCompressionDisableCpuUsage=100 and DynamicCompressionEnableCpuUsage=99 to make sure that compression is on as often as possible. server load is generally 0% to 2% CPU, so this shouldn't be a problem at all.
4) I changed system.webserver/httpCompression/scheme dynamicCompressionLevel from 0 to 7 since the default value is 0
5) I've added the mime types and set enabled=true under system.webserver/httpCompression/dynamicTypes and ensured via a request analyzer that mimetype is indeed correct
6) After this, I've even restarted sites/recycled app pool.
7) I've even added mime-types to include the charset, which I've read sometimes affects dynamic compression.
I've still got no reduction in traffic! What gives!? I even set system.webserver/httpCompression/minFileSizeForComp to 1000B even though that's only for static compression thinking that perhaps it might somehow carry over to dynamic compression. Bytes sent in the logs are still the same as without compression on.
Here's my web.config section FYI:
<system.webServer>
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" dynamicCompressionDisableCpuUsage="100" dynamicCompressionEnableCpuUsage="99" minFileSizeForComp="1000">
<scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" dynamicCompressionLevel="7" staticCompressionLevel="7"/>
<dynamicTypes>
<add mimeType="text/*" enabled="true"/>
<add mimeType="message/*" enabled="true"/>
<add mimeType="application/javascript" enabled="true"/>
<add mimeType="application/x-javascript" enabled="true"/>
<add mimeType="application/xml" enabled="true"/>
<add mimeType="application/json" enabled="true"/>
<add mimeType="application/json; charset=utf-8" enabled="true"/>
<add mimeType="application/json; charset=UTF-8" enabled="true"/>
<add mimeType="*/*" enabled="false"/>
</dynamicTypes>
</httpCompression>
<urlCompression doStaticCompression="true" doDynamicCompression="true"/>
</system.webServer>
Here are a couple other questions I've referenced to come up with these settings... it seems like I've tried every trick in the book.
How can I get gzip compression in IIS7 working?
https://serverfault.com/questions/200041/how-do-determine-the-dynamiccompressiondisablecpuusage-setting-on-iis7
As per this ServerFault answer: https://serverfault.com/a/125156/117212 - you can't change httpCompression in web.config; it needs to be done in the applicationHost.config file. Here is the code I use in my Azure web role to modify the applicationHost.config file and add MIME types for compression:
using (var serverManager = new ServerManager())
{
var config = serverManager.GetApplicationHostConfiguration();
var httpCompressionSection = config.GetSection("system.webServer/httpCompression");
var dynamicTypesCollection = httpCompressionSection.GetCollection("dynamicTypes");
Action<string> fnCheckAndAddIfMissing = mimeType =>
{
if (dynamicTypesCollection.Any(x =>
{
var v = x.GetAttributeValue("mimeType");
if (v != null && v.ToString() == mimeType)
{
return true;
}
return false;
}) == false)
{
ConfigurationElement addElement = dynamicTypesCollection.CreateElement("add");
addElement["mimeType"] = mimeType;
addElement["enabled"] = true;
dynamicTypesCollection.AddAt(0, addElement);
}
};
fnCheckAndAddIfMissing("application/json");
fnCheckAndAddIfMissing("application/json; charset=utf-8");
serverManager.CommitChanges();
}
ServerManager comes from Microsoft.Web.Administration package in NuGet.
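For context, I call this kind of code from the web role's OnStart; editing applicationHost.config needs administrator rights, so the role has to run elevated (Runtime executionContext="elevated" in the service definition). A condensed sketch of the same idea:
using System.Linq;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        using (var serverManager = new ServerManager())
        {
            var config = serverManager.GetApplicationHostConfiguration();
            var dynamicTypes = config
                .GetSection("system.webServer/httpCompression")
                .GetCollection("dynamicTypes");

            // Add application/json to the dynamic compression list if it is missing.
            if (!dynamicTypes.Any(e => (string)e.GetAttributeValue("mimeType") == "application/json"))
            {
                var add = dynamicTypes.CreateElement("add");
                add["mimeType"] = "application/json";
                add["enabled"] = true;
                dynamicTypes.AddAt(0, add);
            }

            serverManager.CommitChanges();
        }

        return base.OnStart();
    }
}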

On premise NServicebus applicaton receiving messages from Azure ServiceBus queue

I am currently struggling to get something up and running on an NServiceBus-hosted application. I have an Azure Service Bus queue that a 3rd party is posting messages to, and I want my application (which is hosted locally at the moment) to receive these messages.
I have googled for answers on how to configure the endpoint but have had no luck finding a valid config. Has anyone ever done this? I can find examples of how to connect to Azure storage queues but NOT Service Bus queues. (I need Azure Service Bus queues for other reasons.)
The config I have is as below
public void Init()
{
Configure.With()
.DefaultBuilder()
.XmlSerializer()
.UnicastBus()
.AzureServiceBusMessageQueue()
.IsTransactional(true)
.MessageForwardingInCaseOfFault()
.UseInMemoryTimeoutPersister()
.InMemorySubscriptionStorage();
}
The error I get is:
Message=Exception when starting endpoint, error has been logged. Reason: Input queue [mytimeoutmanager#sb://[*].servicebus.windows.net/] must be on the same machine as this process.
Source=NServiceBus.Host
And here is my configuration:
<configuration>
<configSections>
<section name="MessageForwardingInCaseOfFaultConfig" type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
<section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
<section name="AzureServiceBusQueueConfig" type="NServiceBus.Config.AzureServiceBusQueueConfig, NServiceBus.Azure" />
<section name="AzureTimeoutPersisterConfig" type="NServiceBus.Timeout.Hosting.Azure.AzureTimeoutPersisterConfig, NServiceBus.Timeout.Hosting.Azure" />
</configSections>
<AzureServiceBusQueueConfig IssuerName="owner" QueueName="testqueue" IssuerKey="[KEY]" ServiceNamespace="[NS]" />
<MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
<!-- Use the following line to explicitly set the Timeout manager address -->
<UnicastBusConfig TimeoutManagerAddress="MyTimeoutManager" />
<!-- Use the following line to explicity set the Timeout persisters connectionstring -->
<AzureTimeoutPersisterConfig ConnectionString="UseDevelopmentStorage=true" />
<startup useLegacyV2RuntimeActivationPolicy="true">
<supportedruntime version="v4.0" />
<requiredruntime version="v4.0.20506" />
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
</startup>
</configuration>
Try moving UnicastBus() to the end of your call, like this:
Configure.With()
.DefaultBuilder()
.XmlSerializer()
.AzureServiceBusMessageQueue()
.IsTransactional(true)
.MessageForwardingInCaseOfFault()
.UseInMemoryTimeoutPersister()
.InMemorySubscriptionStorage()
.UnicastBus(); // <- Here
And about those third parties posting messages to the queue: keep in mind that they need to respect how NServiceBus handles serialization/deserialization. Here is how this is done in NServiceBus (the most important part is that the BrokeredMessage is initialized with a raw message, the result of a serialization using the BinaryFormatter):
private void Send(Byte[] rawMessage, QueueClient sender)
{
var numRetries = 0;
var sent = false;
while(!sent)
{
try
{
var brokeredMessage = new BrokeredMessage(rawMessage);
sender.Send(brokeredMessage);
sent = true;
}
// back off when we're being throttled
catch (ServerBusyException)
{
numRetries++;
if (numRetries >= MaxDeliveryCount) throw;
Thread.Sleep(TimeSpan.FromSeconds(numRetries * DefaultBackoffTimeInSeconds));
}
}
}
private static byte[] SerializeMessage(TransportMessage message)
{
if (message.Headers == null)
message.Headers = new Dictionary<string, string>();
if (!message.Headers.ContainsKey(Idforcorrelation))
message.Headers.Add(Idforcorrelation, null);
if (String.IsNullOrEmpty(message.Headers[Idforcorrelation]))
message.Headers[Idforcorrelation] = message.IdForCorrelation;
using (var stream = new MemoryStream())
{
var formatter = new BinaryFormatter();
formatter.Serialize(stream, message);
return stream.ToArray();
}
}
If you want NServiceBus to correctly deserialize the message, make sure your third parties serialize it correctly.
I ran into exactly the same problem and spent several hours figuring out how to solve it. Basically, the Azure timeout persister is only supported for Azure-hosted endpoints that use NServiceBus.Hosting.Azure. If you use the NServiceBus.Host process to host your endpoints, it uses the NServiceBus.Timeout.Hosting.Windows namespace classes, which initialize a TransactionalTransport with MSMQ, and that is where you get this message.
I used two methods to avoid it:
If you must use the As_Server endpoint configuration, you can use .DisableTimeoutManager() in your initialization; it skips the TimeoutDispatcher initialization completely (a minimal sketch follows below).
Use the As_Client endpoint configuration instead; it doesn't use transactional mode for the transport, so the timeout dispatcher is not initialized.
There could be a way to inject an Azure timeout manager somehow, but I have not found it yet, and I actually need the As_Client behaviour anyway, so this works fine for me.
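A minimal sketch of the first option, using the same fluent configuration style as the question (I've dropped the in-memory timeout persister since the timeout manager is disabled):
Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .MessageForwardingInCaseOfFault()
    .DisableTimeoutManager()   // skips the TimeoutDispatcher initialization entirely
    .InMemorySubscriptionStorage()
    .UnicastBus();             // keep UnicastBus() last, as noted in the first answer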
