NServiceBus Event Subscriptions Not Working With Azure Service Bus

I'm attempting to modify the Azure-based Video Store sample app so that the front-end Ecommerce site can scale out.
Specifically, I want all instances of the web site to be notified of events like OrderPlaced so that no matter which web server the client web app happens to be connected to via SignalR, it will correctly receive the notification and update the UI.
Below is my current configuration in the Global.asax:
Feature.Disable<TimeoutManager>();
Configure.ScaleOut(s => s.UseUniqueBrokerQueuePerMachine());

startableBus = Configure.With()
    .DefaultBuilder()
    .TraceLogger()
    .UseTransport<AzureServiceBus>()
    .PurgeOnStartup(true)
    .UnicastBus()
    .RunHandlersUnderIncomingPrincipal(false)
    .RijndaelEncryptionService()
    .CreateBus();

Configure.Instance.ForInstallationOn<Windows>().Install();

bus = startableBus.Start();
And I've also configured the Azure Service Bus queues using:
class AzureServiceBusConfiguration : IProvideConfiguration<NServiceBus.Config.AzureServiceBusQueueConfig>
{
    public AzureServiceBusQueueConfig GetConfiguration()
    {
        return new AzureServiceBusQueueConfig()
        {
            QueuePerInstance = true
        };
    }
}
I've set the web role to scale to two instances, and as expected, two queues (ecommerce and ecommerce-1) are created. I do not, however, see additional topic subscriptions being created under the videostore.sales.events topic. Instead, I see:
I would think that you would see VideoStore.ECommerce-1.OrderCancelled and VideoStore.ECommerce-1.OrderPlaced subscriptions under the Videostore.Sales.Events topic. Or is that not how subscriptions are stored when using Azure Service Bus?
What am I missing here? I get the event on one of the ecommerce instances, but never on both. Even if this isn't the correct way to scale out SignalR, my use case extends to stuff like cache invalidation.
I also find it strange that two error and audit queues are being created. Why would that happen?
UPDATE
Yves is correct. The AzureServiceBusSubscriptionNamingConvention was not applying the correct individualized name. I was able to fix this by implementing the following EndpointConfig:
namespace VideoStore.ECommerce
{
    public class EndpointConfig : IConfigureThisEndpoint, IWantCustomInitialization
    {
        public void Init()
        {
            AzureServiceBusSubscriptionNamingConvention.Apply = BuildSubscriptionName;
            AzureServiceBusSubscriptionNamingConvention.ApplyFullNameConvention = BuildSubscriptionName;
        }

        private static string BuildSubscriptionName(Type eventType)
        {
            var subscriptionName = eventType != null
                ? Configure.EndpointName + "." + eventType.Name
                : Configure.EndpointName;

            if (subscriptionName.Length >= 50)
                subscriptionName = new DeterministicGuidBuilder().Build(subscriptionName).ToString();

            if (!SettingsHolder.GetOrDefault<bool>("ScaleOut.UseSingleBrokerQueue"))
                subscriptionName = Individualize(subscriptionName);

            return subscriptionName;
        }

        public static string Individualize(string queueName)
        {
            var parser = new ConnectionStringParser();
            var individualQueueName = queueName;

            if (SafeRoleEnvironment.IsAvailable)
            {
                var index = parser.ParseIndexFrom(SafeRoleEnvironment.CurrentRoleInstanceId);
                var currentQueue = parser.ParseQueueNameFrom(queueName);

                if (!currentQueue.EndsWith("-" + index.ToString(CultureInfo.InvariantCulture))) // Individualize can be applied multiple times
                {
                    individualQueueName = currentQueue
                        + (index > 0 ? "-" : "")
                        + (index > 0 ? index.ToString(CultureInfo.InvariantCulture) : "");
                }

                if (queueName.Contains("#"))
                    individualQueueName += "#" + parser.ParseNamespaceFrom(queueName);
            }

            return individualQueueName;
        }
    }
}
I could not, however, get NServiceBus to recognize my EndpointConfig class. Instead, I had to call it manually before starting the bus. From my Global.asax.cs:
new EndpointConfig().Init();
bus = startableBus.Start();
Once I did this, the subscription names appeared as expected:
Not sure why it's ignoring my IConfigureThisEndpoint, but this works.

This sounds like a bug; can you raise a GitHub issue for it at https://github.com/Particular/NServiceBus.Azure?
That said, I think it's better to use SignalR's own scale-out feature instead of QueuePerInstance, as SignalR also needs to replicate other information (such as connection/group mappings) internally when running in scale-out mode.
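For reference, a minimal sketch of what that SignalR scale-out wiring could look like, assuming the Microsoft.AspNet.SignalR.ServiceBus backplane package (the connection string name and topic prefix below are only illustrative):
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Illustrative connection string name; point it at the same Azure Service Bus namespace.
        var connectionString = System.Configuration.ConfigurationManager
            .ConnectionStrings["AzureServiceBus"].ConnectionString;

        // Every web role instance joins the same backplane topic, so a hub message
        // broadcast from one instance reaches clients connected to any other instance.
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "VideoStore.ECommerce");

        app.MapSignalR();
    }
}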
Update:
I think I see the issue: the subscriptions should be individualized as well, which isn't the case in the current naming conventions
https://github.com/Particular/NServiceBus.Azure/blob/master/src/NServiceBus.Azure.Transports.WindowsAzureServiceBus/NamingConventions/AzureServiceBusSubscriptionNamingConvention.cs
while it is in the queue naming conventions
https://github.com/Particular/NServiceBus.Azure/blob/master/src/NServiceBus.Azure.Transports.WindowsAzureServiceBus/NamingConventions/AzureServiceBusQueueNamingConvention.cs#L27
As these conventions are public, you can override them to work around the problem by changing the func in IWantCustomInitialization until I can get a fix in: just copy the current method and add the individualizer logic. The queue individualizer is internal, though, so you'll have to copy that class from
https://github.com/Particular/NServiceBus.Azure/blob/master/src/NServiceBus.Azure.Transports.WindowsAzureServiceBus/Config/QueueIndividualizer.cs

Related

create custom module for pdf manipulation

I want to create a custom Kofax module. When it comes to the batch processing the scanned documents get converted to PDF files. I want to fetch these PDF files, manipulate them (add a custom footer to the PDF document) and hand them back to Kofax.
So what I know so far:
create Kofax export scripts
add a custom module to Kofax
I have the APIRef.chm (Kofax.Capture.SDK.CustomModule) and the CMSplit example project. Unfortunately, I'm struggling to get into it. Are there any resources out there that show, step by step, how to get started with custom module development?
So I know that the IBatch interface represents one selected batch and the IBatchCollection represents the collection of all batches.
I would just like to know how to set up a "Hello World" example that I could add my code to. I don't think I even need a WinForms application, because I only need to manipulate the PDF files and that's it...
Since I realized that your question was rather about how to create a custom module in general, allow me to add another answer. Start with a C# Console Application.
Add Required Assemblies
A custom module requires several Kofax assemblies; all of them reside in KC's binaries folder (by default C:\Program Files (x86)\Kofax\CaptureSS\ServLib\Bin on a server).
Setup Part
Add a new User Control and Windows Form for setup. This is purely optional - a CM might not even have a setup form, but I'd recommend adding it regardless. The user control is the most important part, here - it will add the menu entry in KC Administration, and initialize the form itself:
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface ISetupForm
{
    [DispId(1)]
    AdminApplication Application { set; }

    [DispId(2)]
    void ActionEvent(int EventNumber, object Argument, out int Cancel);
}

[ClassInterface(ClassInterfaceType.None)]
[ProgId("Quipu.KC.CM.Setup")]
public class SetupUserControl : UserControl, ISetupForm
{
    private AdminApplication adminApplication;

    public AdminApplication Application
    {
        set
        {
            value.AddMenu("Quipu.KC.CM.Setup", "Quipu.KC.CM - Setup", "BatchClass");
            adminApplication = value;
        }
    }

    public void ActionEvent(int EventNumber, object Argument, out int Cancel)
    {
        Cancel = 0;

        if ((KfxOcxEvent)EventNumber == KfxOcxEvent.KfxOcxEventMenuClicked && (string)Argument == "Quipu.KC.CM.Setup")
        {
            SetupForm form = new SetupForm();
            form.ShowDialog(adminApplication.ActiveBatchClass);
        }
    }
}
Runtime Part
Since I started with a console application, I could go ahead and put all the logic into Program.cs. Note that this is for demo purposes only; I would recommend adding dedicated classes and forms later on. The example below logs into Kofax Capture, grabs the next available batch, and just outputs its name.
class Program
{
    static void Main(string[] args)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, eventArgs) => KcAssemblyResolver.Resolve(eventArgs);
        Run(args);
        return;
    }

    static void Run(string[] args)
    {
        // start processing here
        // todo: encapsulate this in a separate class!

        // login to KC
        var login = new Login();
        login.EnableSecurityBoost = true;
        login.Login();
        login.ApplicationName = "Quipu.KC.CM";
        login.Version = "1.0";
        login.ValidateUser("Quipu.KC.CM.exe", false, "", "");

        var session = login.RuntimeSession;

        // todo: add timer-based polling here (note: mutex!)
        var activeBatch = session.NextBatchGet(login.ProcessID);

        Console.WriteLine(activeBatch.Name);

        activeBatch.BatchClose(
            KfxDbState.KfxDbBatchReady,
            KfxDbQueue.KfxDbQueueNext,
            0,
            "");

        session.Dispose();
        login.Logout();
    }
}
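KcAssemblyResolver above is a small custom helper, not part of the Kofax SDK; a sketch of what it might look like, assuming the default KC binaries folder mentioned earlier:
static class KcAssemblyResolver
{
    // Assumed helper: resolve Kofax assemblies from the KC binaries folder at runtime.
    private const string KcBinFolder = @"C:\Program Files (x86)\Kofax\CaptureSS\ServLib\Bin";

    public static System.Reflection.Assembly Resolve(ResolveEventArgs args)
    {
        var fileName = new System.Reflection.AssemblyName(args.Name).Name + ".dll";
        var candidate = System.IO.Path.Combine(KcBinFolder, fileName);
        return System.IO.File.Exists(candidate)
            ? System.Reflection.Assembly.LoadFrom(candidate)
            : null;
    }
}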
Registering, COM-Visibility, and more
Registering a Custom Module is done via RegAsm.exe and ideally with the help of an AEX file. Here's an example - please refer to the documentation for more details and all available settings.
[Modules]
Minimal CM
[Minimal CM]
RuntimeProgram=Quipu/CM/Quipu.KC.CM/Quipu.KC.CM.exe
ModuleID=Quipu.KC.CM.exe
Description=Minimal Template for a Custom Module in C#
Version=1.0
SupportsTableFields=True
SupportsNonImageFiles=True
SetupProgram=Minimal CM Setup
[Setup Programs]
Minimal CM Setup
[Minimal CM Setup]
Visible=0
OCXFile=Quipu/CM/Quipu.KC.CM/Quipu.KC.CM.exe
ProgID=Quipu.KC.CM.Setup
Last but not least, make sure your assemblies are COM-visible:
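A minimal sketch of the corresponding AssemblyInfo.cs entries (the GUID below is only a placeholder):
using System.Runtime.InteropServices;

// Expose the assembly's public types to COM so Kofax Capture can instantiate them.
[assembly: ComVisible(true)]

// Placeholder GUID; use your project's own typelib id.
[assembly: Guid("00000000-0000-0000-0000-000000000000")]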
I put up the entire code on GitHub, feel free to fork it. Hope it helps.
Kofax exposes a batch as an XML, and DBLite is basically a wrapper for said XML. The structure is explained in AcBatch.htm and AcDocs.htm (to be found under the CaptureSV directory). Here's the basic idea (just documents are shown):
AscentCaptureRuntime
  Batch
    Documents
      Document
A single document has child elements itself such as pages, and multiple properties such as Confidence, FormTypeName, and PDFGenerationFileName. This is what you want. Here's how you would navigate down the document collection, storing the filename in a variable named pdfFileName:
IACDataElement runtime = activeBatch.ExtractRuntimeACDataElement(0);
IACDataElement batch = runtime.FindChildElementByName("Batch");
var documents = batch.FindChildElementByName("Documents").FindChildElementsByName("Document");

for (int i = 0; i < documents.Count; i++)
{
    // 1-based index in Kofax
    var pdfFileName = documents[i + 1]["PDFGenerationFileName"];
}
Personally, I don't like this structure, so I created my own wrapper for their wrapper, but that's up to you.
With regard to the custom module itself, the sample that ships with Kofax is already a decent start. Basically, you would have a basic form that shows up if the user launches the module manually - which is entirely optional if the work happens in the background, preferably as a Windows Service. I like to start with a console application, adding forms only when needed. Here, I would either launch the form or start the service. Note that I have different branches in case the user wants to install my Custom Module as a service:
else if (Environment.UserInteractive)
{
    // run as module
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new RuntimeForm(args));
}
else
{
    // run as service
    ServiceBase.Run(new CustomModuleService());
}
The runtime itself just logs you into Kofax Capture, registers event handlers, and processes batch by batch:
// login to KC
cm = new CustomModule();
cm.Login("", "");
// add progress event handlers
cm.BatchOpened += Cm_BatchOpened;
cm.BatchClosed += Cm_BatchClosed;
cm.DocumentOpened += Cm_DocumentOpened;
cm.DocumentClosed += Cm_DocumentClosed;
cm.ErrorOccured += Cm_ErrorOccured;
// process in background thread so that the form does not freeze
worker = new BackgroundWorker();
worker.DoWork += (s, a) => Process();
worker.RunWorkerAsync();
Then, your CM fetches the next batch. This can either make use of Kofax' Batch Notification Service, or be based on a timer. For the former, just handle the BatchAvailable event of the session object:
session.BatchAvailable += Session_BatchAvailable;
For the latter, define a timer - preferably with a configurable polling interval:
pollTimer.Interval = pollIntervalSeconds * 1000;
pollTimer.Elapsed += PollTimer_Elapsed;
pollTimer.Enabled = true;
When the timer elapses, you could do the following:
private void PollTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    mutex.WaitOne();
    ProcessBatches();
    mutex.ReleaseMutex();
}
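The pollTimer and mutex fields used above are not shown in the snippet; they are assumed to be declared along these lines:
// Assumed field declarations for the polling example above. The mutex serializes
// batch processing so overlapping timer ticks don't pick up the same batch twice.
private static readonly System.Timers.Timer pollTimer = new System.Timers.Timer();
private static readonly System.Threading.Mutex mutex = new System.Threading.Mutex();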

Spring Aggregation Group

I created an aggregator service as below:
@EnableBinding(Processor.class)
class Configuration {

    @Autowired
    Processor processor;

    @ServiceActivator(inputChannel = Processor.INPUT)
    @Bean
    public MessageHandler aggregator() {
        AggregatingMessageHandler aggregatingMessageHandler =
                new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
                        new SimpleMessageStore(10));
        //AggregatorFactoryBean aggregatorFactoryBean = new AggregatorFactoryBean();
        //aggregatorFactoryBean.setMessageStore();
        aggregatingMessageHandler.setOutputChannel(processor.output());
        //aggregatorFactoryBean.setDiscardChannel(processor.output());
        aggregatingMessageHandler.setSendPartialResultOnExpiry(true);
        aggregatingMessageHandler.setSendTimeout(1000L);
        aggregatingMessageHandler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("requestType"));
        aggregatingMessageHandler.setReleaseStrategy(new MessageCountReleaseStrategy(3)); //ExpressionEvaluatingReleaseStrategy("size() == 5")
        aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
        aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(3000L)); //size() ge 2 ? 5000 : -1
        aggregatingMessageHandler.setExpireGroupsUponTimeout(true);
        return aggregatingMessageHandler;
    }
}
Now I want to release the group as soon as a new group is created, so that I only have one group at a time.
To be more specific, I receive two types of requests, 'PUT' and 'DEL'. I want to keep aggregating per the above rules, but as soon as I receive a request type other than the one I am currently aggregating, I want to release the current group and start aggregating the new type.
The reason I want to do this is that these requests are sent to another party that doesn't support having PUT and DEL requests at the same time, and I can't delay any DEL request, as the sequence between PUT and DEL is important.
I understand that I need to create a custom release POJO, but will I be able to check the current groups?
For example, if I receive 6 messages like below:
PUT PUT PUT DEL DEL PUT
they should be aggregated as below:
3 PUT
2 DEL
1 PUT
OK. Thank you for sharing more info.
Yes, your custom ReleaseStrategy can check the message type and return true to trigger the group completion function.
As long as you have only a static correlationKey, there is only one group in the store. When your message reaches the ReleaseStrategy, there isn't much magic to it: just check the current group for the completion signal. Since there are no other groups in the store, there is no need for any complex release logic.
You should set expireGroupsUponCompletion = true so that the group is removed after completion; the next message will then form a new group for the same correlationKey.
UPDATE
Thank you for the further info!
So, yes, your original PoC is good. Even a static correlationKey is fine, since you are just going to collect incoming messages into batches.
Your custom ReleaseStrategy should analyze the MessageGroup for a message with a different key and return true in that case.
The custom MessageGroupProcessor should filter the message with the different key out of the output List and send that message back to the aggregator, to let it form a new group for the sequence of its key.
I ended up implementing the ReleaseStrategy below, as I found it simpler than removing the message and queuing it again.
class MessageCountAndOnlyOneGroupReleaseStrategy implements org.springframework.integration.aggregator.ReleaseStrategy {

    private final int threshold;
    private final MessageGroupProcessor messageGroupProcessor;

    public MessageCountAndOnlyOneGroupReleaseStrategy(int threshold, MessageGroupProcessor messageGroupProcessor) {
        super();
        this.threshold = threshold;
        this.messageGroupProcessor = messageGroupProcessor;
    }

    private MessageGroup currentGroup;

    @Override
    public boolean canRelease(MessageGroup group) {
        if (currentGroup == null)
            currentGroup = group;

        if (!group.getGroupId().equals(currentGroup.getGroupId())) {
            messageGroupProcessor.processMessageGroup(currentGroup);
            currentGroup = group;
            return false;
        }

        return group.size() >= this.threshold;
    }
}
Note that I used new HeaderAttributeCorrelationStrategy("request_type") instead of just FOO for the CorrelationStrategy.

Azure Mobile Services Soft Delete Issue / Practices

With soft delete turned on, I add a single record on the client and push, delete the added record and push, and then attempt to add a new record (and push) with the same primary key as the initial record, and I get an exception. It would appear that EntityDomainManager just attempts to do a new insert without checking whether the record should be 'updated' instead of inserted.
However if I turn off soft delete in the domain manager constructor everything works fine.
We are using incremental sync, so as I understand it, soft delete is required to make this work so that we don't end up with different pictures of what's current between mobile and server.
What is the recommended approach? A custom EntityDomainManager (or another DomainManager)? If so, it would be useful to have more clarity on the interactions between the table controller and the domain manager.
I have constructed this custom domain manager which seems to work, but would appreciate any guidance/suggestions.
public class CustomEntityDomainManager<TData> : EntityDomainManager<TData> where TData : class, ITableData
{
    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services)
        : base(context, request, services)
    {
    }

    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services, bool enableSoftDelete)
        : base(context, request, services, enableSoftDelete)
    {
    }

    public async override Task<TData> InsertAsync(TData data)
    {
        if (data == null)
        {
            throw new ArgumentNullException("data");
        }

        // now then, if we have soft delete enabled & data has been provided with an id in it
        if (EnableSoftDelete && data.Id != null)
        {
            // now look to see if the record exists and if it is deleted
            // if so we look to remove the record before then attempting the insert

            // record the old value of IncludeDeleted, since we need to query deleted records too
            var oldIncludeDeleted = IncludeDeleted;
            try
            {
                IncludeDeleted = true;
                var existingData = await this.Lookup(data.Id).Queryable.FirstOrDefaultAsync();

                // if the record exists and is soft deleted, then truly delete it
                if (existingData != null && existingData.Deleted)
                {
                    // now need to remove this record...
                    this.Context.Set<TData>().Remove(existingData);
                }
            }
            finally
            {
                IncludeDeleted = oldIncludeDeleted;
            }
        }

        if (data.Id == null)
        {
            data.Id = Guid.NewGuid().ToString("N");
        }

        return await base.InsertAsync(data);
    }
}
This behavior is by design--we require that you do an explicit undelete before doing the update.
The solution you've presented is fine. You can also move the code to your table controller, assuming you only need this behavior in one table. If you need it in multiple tables, then the custom domain manager is the best approach.
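For completeness, a rough sketch of what the table-controller variant might look like, assuming the standard Azure Mobile Services .NET backend scaffolding (TodoItemController, TodoItem, and MobileServiceContext are placeholder names; the usual usings such as System.Data.Entity, System.Threading.Tasks, System.Web.Http and the Mobile Services backend namespaces are omitted):
public class TodoItemController : TableController<TodoItem>
{
    private MobileServiceContext context;

    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request, Services, enableSoftDelete: true);
    }

    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        if (item.Id != null)
        {
            // Query the context directly so the soft-delete filter doesn't hide the row,
            // then hard-delete any soft-deleted record with the same id before inserting.
            var existing = await context.Set<TodoItem>()
                .FirstOrDefaultAsync(t => t.Id == item.Id && t.Deleted);
            if (existing != null)
            {
                context.Set<TodoItem>().Remove(existing);
                await context.SaveChangesAsync();
            }
        }

        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
}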

CQRS in data-centric processes

I have got a question related to CQRS in data centric processes. Let me explain it better.
Consider we have a SOAP/JSON/whatever service which transfers some data to our system during an integration process. It is said that in CQRS every state change must be achieved by means of commands (or events, if Event Sourcing is used).
When it comes to our integration process, we receive a great deal of structured data instead of a set of commands/events, and I am wondering how to actually process that data.
// Some façade service
class SomeService
{
    private $_someService;

    public function __construct(SomeService $someService)
    {
        $this->_someService = $someService;
    }

    // Magic function to make it all good
    public function process($dto)
    {
        // if I get it correctly, here I need to somehow
        // convert the incoming dto (xml/json/array/etc)
        // to a set of commands, i.e.
        $this->_someService->doSomeStuff($dto->someStuffData);
        // SomeStuffChangedEvent raised here

        $this->_someService->doSomeMoreStuff($dto->someMoreStuffData);
        // SomeMoreStuffChangedEvent raised here
    }
}
My question is whether my suggestion is suitable in the given case or there may be some better methods to do what I need. Thank you in advance.
Agreed, a service may have a different interface. If you create a REST API to update employees, you may want to provide an UpdateEmployeeMessage which contains everything that can change. In a CRUD kind of service, this message would probably mirror the database.
Inside of the service, you can split the message into commands:
public void Update(UpdateEmployeeMessage message)
{
    bus.Send(new UpdateName
    {
        EmployeeId = message.EmployeeId,
        First = message.FirstName,
        Last = message.LastName,
    });

    bus.Send(new UpdateAddress
    {
        EmployeeId = message.EmployeeId,
        Street = message.Street,
        ZipCode = message.ZipCode,
        City = message.City
    });

    bus.Send(new UpdateContactInfo
    {
        EmployeeId = message.EmployeeId,
        Phone = message.Phone,
        Email = message.Email
    });
}
Or you could call the aggregate directly:
public void Update(UpdateEmployeeMessage message)
{
    var employee = repository.Get<Employee>(message.EmployeeId);

    employee.UpdateName(message.FirstName, message.LastName);
    employee.UpdateAddress(message.Street, message.ZipCode, message.City);
    employee.UpdatePhone(message.Phone);
    employee.UpdateEmail(message.Email);

    repository.Save(employee);
}

BDC Web Part connection Interface error

I want to provide a "Query Value" to the BDC List Web Part from a (provider) Business Data Filter Web Part. I get the following error when I try to connect:
"The provider connection point (BusinessDataFilterWebPart) and the consumer connection point "BusinessDataListWebPart" do not use the same connection interface."
Following is my code snippet.
System.Web.UI.WebControls.WebParts.WebPart providerWebPart =
    webPartManager.WebParts[filterWebPart.ID];

ProviderConnectionPointCollection providerConnections =
    webPartManager.GetProviderConnectionPoints(providerWebPart);

ProviderConnectionPoint providerConnection = null;
foreach (ProviderConnectionPoint ppoint in providerConnections)
{
    if (ppoint.InterfaceType == typeof(ITransformableFilterValues))
        providerConnection = ppoint;
}

System.Web.UI.WebControls.WebParts.WebPart consumerWebPart =
    webPartManager.WebParts[consumer.ID];

ConsumerConnectionPointCollection consumerConnections =
    webPartManager.GetConsumerConnectionPoints(consumerWebPart);

ConsumerConnectionPoint consumerConnection = null;
foreach (ConsumerConnectionPoint cpoint in consumerConnections)
{
    if (cpoint.InterfaceType == typeof(IWebPartParameters))
        consumerConnection = cpoint;
}

SPWebPartConnection newConnection = webPartManager.SPConnectWebParts(
    providerWebPart, providerConnection, consumerWebPart, consumerConnection);
It looks like you are comparing two different connection interfaces. Your provider connection implements ITransformableFilterValues and your consumer connection implements IWebPartParameters.
I don't know much about the code you have written here, as I rarely write connections between web parts in code. But the whole point of connections is that the consumer and provider have to provide and expect the same interface.
Have you tried connecting these two web parts together in the browser interface?
My direct experience with this problem is with the query string filter web part as the provider and the report viewer web part as the consumer, but the issue was the same.
The ITransformableFilterValues interface is not consumable by the IWebPartParameters interface. But each item in the connection points collection implements a different interface type.
In your debugger, check the other interface types implemented by both the ConsumerConnectionPointCollection and the ProviderConnectionPointCollection. If both collections have connection points that implement the same interface type, use that interface type in the foreach loops where you check the interface type.
If there is no direct match, you should experiment to find the right combination.
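For example, a quick way to dump the available interface types while debugging, using the same collections as in the snippet above:
// List every interface type exposed by the provider and consumer connection points.
foreach (ProviderConnectionPoint p in providerConnections)
    System.Diagnostics.Debug.WriteLine("Provider: " + p.InterfaceType.FullName);

foreach (ConsumerConnectionPoint c in consumerConnections)
    System.Diagnostics.Debug.WriteLine("Consumer: " + c.InterfaceType.FullName);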
You need to use the correct transformer and the overload that takes the transformer as a parameter so the two interfaces can connect/transform. From the MSDN documentation on the TransformableFilterValuesToParametersTransformer: "Allows standard filters, which implement Microsoft.SharePoint.WebPartPages.ITransformableFilterValues, to connect to any Web Part that can consume IWebPartParameters"
var transformer = new TransformableFilterValuesToParametersTransformer();
transformer.ProviderFieldNames = new string[] { "DocumentIdForCurrentPage" };
transformer.ConsumerFieldNames = new string[] { "DocumentId" };

webPartManager.SPConnectWebParts(
    providerWebPart, providerConnection, consumerWebPart, consumerConnection, transformer);
