MassTransit - publish vs send and how to manage messages - asp.net-core-2.0

I have just started using MassTransit in my project (.NET Core 2.0). It is great, but I have some concerns:
What is the difference between Publish and Send? In my scenario, I have one email service that sends email to the outside world. Other services pass requests to the email service via RabbitMQ. So, in this case, should we use Publish or Send?
With Send, we need to pass the full URI of the endpoint. Is there any best practice for managing endpoints? If we have 10 commands, we need to manage 10 endpoints. Is that right?
Related to events (Publish): if one service is deployed on multiple instances and an event is published to the queue, will it be processed once, or once on each instance?
Could you please share a unit test for a consumer? With the test harness, it seems we only verify that the message was queued.
Is MassTransit ready for .NET Core 2.1?
Many thanks,

There are way too many questions for one post, to be honest; on SO it is better to ask more specific questions, one by one. Some of your questions already have answers on SO.
The difference between publishing events and sending commands is similar to what you expect. We actually cover some of it in the documentation.
You can handle as many message types as you want on one receive endpoint, but you need to be aware of the consequences. The best practice is to have one endpoint per command type, or at least one endpoint per group of related commands. The risk otherwise is that an important command gets stuck in the queue behind less important ones.
If you publish an event, each subscribed endpoint (queue) gets a copy of it. If you have several instances of one endpoint, only one of those instances will receive each copy. Sending commands works similarly, except that only one endpoint gets the message, and again only one of its instances processes it.
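To make the distinction concrete, here is a minimal sketch assuming a RabbitMQ transport; the SendEmail command, OrderSubmitted event, and queue name are invented for illustration:

    using System;
    using System.Threading.Tasks;
    using MassTransit;

    public class SendEmail { public string Address { get; set; } }
    public class OrderSubmitted { public Guid OrderId { get; set; } }

    public static class MessagingExamples
    {
        public static async Task Demo(IBus bus)
        {
            // Send: the command goes to exactly one endpoint (queue),
            // which you must address explicitly.
            var endpoint = await bus.GetSendEndpoint(
                new Uri("rabbitmq://localhost/send-email")); // hypothetical queue
            await endpoint.Send(new SendEmail { Address = "user@example.com" });

            // Publish: the event is broadcast; every subscribed endpoint
            // gets a copy, but competing instances of the same endpoint
            // share a single copy between them.
            await bus.Publish(new OrderSubmitted { OrderId = Guid.NewGuid() });
        }
    }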
Although there is no documentation for MT testing just yet, you can look at this test to see how it is done.
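For reference, here is a sketch of a consumer test using MassTransit's in-memory test harness (the SendEmail types are invented). Asserting on the consumer harness, rather than the bus harness, verifies that your consumer actually handled the message, not merely that it was queued:

    using System.Threading.Tasks;
    using MassTransit;
    using MassTransit.Testing;
    using NUnit.Framework;

    public class SendEmail { public string Address { get; set; } }

    public class SendEmailConsumer : IConsumer<SendEmail>
    {
        public Task Consume(ConsumeContext<SendEmail> context)
            => Task.CompletedTask; // real email logic would go here
    }

    [TestFixture]
    public class SendEmailConsumerTests
    {
        [Test]
        public async Task Consumer_should_handle_the_command()
        {
            var harness = new InMemoryTestHarness();
            var consumer = harness.Consumer<SendEmailConsumer>();

            await harness.Start();
            try
            {
                await harness.InputQueueSendEndpoint.Send(
                    new SendEmail { Address = "user@example.com" });

                // True only once the consumer has consumed the message.
                Assert.IsTrue(await consumer.Consumed.Any<SendEmail>());
            }
            finally
            {
                await harness.Stop();
            }
        }
    }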
MassTransit is compiled for .NET 4.6 and .NET Standard 2.0. There is nothing specifically different in .NET Core 2.1 that would have any effect on MassTransit.

Related

Microservices, how to notify backend when task complete

For example, say I have a main application (backend) and some microservice, e.g. for image cropping.
A user uploads an image, making a request to the backend; the backend posts a new task to a RabbitMQ queue; the image-cropping service picks up the task and completes it, and I need to somehow notify the backend.
What are the options for this? Do I need another microservice for such notifications?
So... there are really many ways to do that.
At a high level, what you want to achieve is to produce an event that one or more services can react to. Depending on what you have available, you can produce that event in a number of different ways (a small sketch of the general idea follows the options below).
If you want to be completely platform independent, you can use Apache Kafka. It's a popular service built specifically for what we need: publishing events and processing them at mass scale. Kafka can be clustered and partitioned, and it supports multiple parallel consumers of the same type (like multiple instances of your main backend service) or of different types (three different microservices that each happen to be interested in a specific event). This bad boy just has it all and is famous for it. You can set up a cluster yourself or use one that comes out of the box on some cloud platforms (like AWS, for instance), but that might be more expensive and harder to operate than a cloud-specific fully managed solution.
If you're running your stuff on Google Cloud, you can make it easier and cheaper by using the Pub/Sub service. Pub/Sub is a fully managed service that scales out of the box (welcome to the cloud! you don't need to scale or cluster anything yourself!).
If you're running on AWS, you can use SNS, or a more recent alternative, EventBridge (kind of like SNS, but boy, what can it not do?). Yeah... I would recommend EventBridge. It can simply do more: target filtering rules, payload transformations, and it can automatically trigger more things.
Azure... ehm... Event Hubs... but I haven't worked with that one yet, so I can't say much; I'm not much of an Azure person.
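Whichever broker you pick, the shape of the solution is the same: the cropping service publishes a "done" event and the backend subscribes to it; no extra notification microservice is needed. Since the question already uses RabbitMQ, here is a minimal sketch with the RabbitMQ .NET client (the exchange name and payload are invented):

    using System.Text;
    using System.Text.Json;
    using RabbitMQ.Client;

    public static class CropEvents
    {
        public static void PublishImageCropped(string imageId)
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // Fanout exchange: every queue bound to it receives a copy of
            // the event, so any number of services can react independently.
            channel.ExchangeDeclare("image-cropped", ExchangeType.Fanout,
                durable: true);

            var evt = new { ImageId = imageId, Status = "completed" };
            var body = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(evt));
            channel.BasicPublish(exchange: "image-cropped", routingKey: "",
                basicProperties: null, body: body);
        }
    }

The backend declares its own queue, binds it to the exchange, and consumes from it; if the backend runs multiple instances sharing one queue, exactly one of them handles each event.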

Microsoft Bot Framework - very high response times

I am seeing 10-second response times through every channel (WebChat and Facebook).
My endpoint is a PaaS instance located in the western United States.
The Web App is size S3, and the response times are constant (even when there is only one conversation).
I have the following questions:
Is there any way to optimize this?
What are the Azure Bot Framework SLAs?
As the Bot Framework is a preview product, there are currently no SLAs.
Are you using the default state storage? If so, part of the slowdown you mentioned is probably related to it. We highly recommend implementing your own state service. There is a blog article here discussing the implementations, and there is also a repository here with samples. This is probably not 100% of your issue, but it is probably at least part of it.
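For reference, the registration in the Bot Builder Azure samples looks roughly like this (a sketch for the v3 SDK; TableBotDataStore and the Autofac-based UpdateContainer pattern come from the Microsoft.Bot.Builder.Azure package, while the app-setting key is invented):

    using System.Configuration;
    using Autofac;
    using Microsoft.Bot.Builder.Azure;
    using Microsoft.Bot.Builder.Dialogs;
    using Microsoft.Bot.Builder.Dialogs.Internals;
    using Microsoft.Bot.Connector;

    public static class BotStateConfig
    {
        // Call this from Application_Start in Global.asax.
        public static void UseTableStorageForBotState()
        {
            var store = new TableBotDataStore(
                ConfigurationManager.AppSettings["StorageConnectionString"]);

            Conversation.UpdateContainer(builder =>
            {
                builder.Register(c => store)
                    .Keyed<IBotDataStore<BotData>>(AzureModule.Key_DataStore)
                    .AsSelf()
                    .SingleInstance();

                // Cache on top of the table store so every turn doesn't
                // round-trip to storage.
                builder.Register(c => new CachingBotDataStore(store,
                        CachingBotDataStoreConsistencyPolicy.ETagBasedConsistency))
                    .As<IBotDataStore<BotData>>()
                    .AsSelf()
                    .InstancePerLifetimeScope();
            });
        }
    }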
Another thing to keep in mind is where your bot is located in relation to your WebChat client, and which instance of the Bot Connector you are using; this blog may provide more info. Please see the "Geographic Direct Line endpoints" section.

Can Azure Event Hubs be used for critical transactional data in production?

Reading the documentation, Azure Event Hubs is meant for:
Application instrumentation
User experience or workflow processing
Internet of Things (IoT) scenarios
Can this be used for transactional data, handling revenue or other application-sensitive data?
Based on what I read, it looks like it is meant for data where one need not worry about data loss. Is this the case?
It is mainly designed for large-scale ingestion of data. That is why typical scenarios include IoT solutions consisting of a multitude of devices sending massive amounts of telemetry data.
To allow for this kind of scale, it does not include some features that other messaging services, like Azure Service Bus, do have. I think this blog does a good job of listing the differences. The section "Use Case" in particular explains things very well:
From a target use case perspective if we consider some of our typical enterprise integration patterns then if you are implementing a pattern which uses a Command Message, or a Request/Reply Message then you probably want to use Azure Service Bus Messaging.  RPC patterns can be implemented using Request/Reply messages on Azure Service Bus using a response queue.  These are really about ESB and EAI style messaging patterns where you want to send messages between applications and probably want to use other features such as property based routing.
Azure Event Hubs is more likely to be used if you’re implementing patterns with Event Messages and you want somewhere reliable to send them that is capable of dealing with a massive scale but will allow you to do stuff with the events out of process.
With these core target use cases in mind it is easy to see where the scale differences come into play.  For messaging it’s about one application telling one or more apps to DO SOMETHING or GIVE ME SOMETHING.  The alternative is that in eventing the applications are saying SOMETHING HAS HAPPENED.  When you consider this in typical application scenarios and you put events into the telemetry and logging space you can quickly see that the SOMETHING HAS HAPPENED scenario will produce a lot more traffic than the other.
Now I’m not saying that you can’t implement some messaging type functions using event hubs and that you can’t push events to a Service Bus topic as in integration there are always different requirements which result in different implementation scenarios, but I think if you follow the above as a general rule then you will usually be on the right path.
That does not mean, however, that it is only capable of handling data where loss is acceptable. Data is stored for a configurable amount of time, and if necessary this data can be re-read from an earlier point in time.
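As a sketch of that replay ability, assuming the current Azure.Messaging.EventHubs consumer client (the connection details are placeholders):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Azure.Messaging.EventHubs.Consumer;

    public static class EventHubReplay
    {
        public static async Task ReplayPartitionAsync(
            string connectionString, string eventHubName, string partitionId)
        {
            await using var consumer = new EventHubConsumerClient(
                EventHubConsumerClient.DefaultConsumerGroupName,
                connectionString, eventHubName);

            // Stop reading after 30 seconds; the reader otherwise waits forever.
            using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
            try
            {
                // EventPosition.Earliest starts from the oldest event still
                // inside the retention window, i.e. "an earlier point in time".
                await foreach (var partitionEvent in
                    consumer.ReadEventsFromPartitionAsync(
                        partitionId, EventPosition.Earliest, cts.Token))
                {
                    Console.WriteLine(partitionEvent.Data.EventBody.ToString());
                }
            }
            catch (OperationCanceledException)
            {
                // Expected when the timeout above fires.
            }
        }
    }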
Now, given your scenario, I do not think Event Hubs is the best fit. But truth be told, I am not sure, because you would have to elaborate more on what exactly you want to do.
Addition
The idea behind Event Hubs is that you will get at-least-once delivery at great scale (source). See also this question: Does Azure Event Hub guarantees at least once delivery?

Twilio issues with multithreading

This is a software design question more than a coding one.
I am about to implement a feature where I verify users' email addresses and phone numbers using Twilio's SMS and voice APIs.
My current implementation instantiates a voice client at app startup and then reuses this client whenever a user decides to verify their email or phone number.
Question: Is it a good idea to instantiate the Twilio client once and reuse it each time, or should I create a new one each time it is needed?
I have browsed the net for articles but haven't found anything conclusive. Hoping to clarify here.
You are asking whether the Twilio client is thread-safe. A quick Google search found this: Twilio Threaded Messages. I have not looked at the source myself, but I would consider this a likely answer that yes, it is thread-safe.
I'm not familiar with Twilio. But in general, since a third-party API is out of our control, its stability, performance, etc. are all open questions, and you might eventually want to switch to another service provider. So, first, try your best to decouple your own logic from the third-party one. For instance, design an interface for this logic, with one implementation for Twilio.
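A minimal sketch of that decoupling (all names here are invented; the Twilio calls are from its C# helper library, with credentials assumed to be initialized once at startup via TwilioClient.Init):

    using System.Threading.Tasks;
    using Twilio.Rest.Api.V2010.Account;
    using Twilio.Types;

    // Your application code depends only on this interface...
    public interface IVerificationSender
    {
        Task SendSmsCodeAsync(string phoneNumber, string code);
    }

    // ...and Twilio is just one replaceable implementation of it.
    public sealed class TwilioVerificationSender : IVerificationSender
    {
        public async Task SendSmsCodeAsync(string phoneNumber, string code)
        {
            await MessageResource.CreateAsync(
                to: new PhoneNumber(phoneNumber),
                from: new PhoneNumber("+15005550006"), // Twilio test number
                body: $"Your verification code is {code}");
        }
    }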
Second, test the Twilio client instance to ensure it keeps working long after it is instantiated; and if your language or runtime is multi-threaded, also test that the instance works properly when shared by multiple threads (if not, the instance is not thread-safe, and you might consider some mutex-style locking around it).
Furthermore, if the third-party service is not stable or takes time to execute, then specifically for your email/SMS verification case it is not necessary to call the service synchronously and wait for the response. You could use a worker queue: put all tasks on the queue and create some workers, running on background threads, that take tasks from the queue and execute them.
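A minimal sketch of that worker-queue idea, using System.Threading.Channels and the hypothetical IVerificationSender interface from the previous answer:

    using System.Threading.Channels;
    using System.Threading.Tasks;

    public class VerificationTask
    {
        public string PhoneNumber { get; set; }
        public string Code { get; set; }
    }

    public class VerificationQueue
    {
        private readonly Channel<VerificationTask> _channel =
            Channel.CreateUnbounded<VerificationTask>();

        // Called from the request path: cheap, never blocks on Twilio.
        public ValueTask EnqueueAsync(VerificationTask task)
            => _channel.Writer.WriteAsync(task);

        // Run a few of these as background workers (e.g. hosted services).
        public async Task WorkerLoopAsync(IVerificationSender sender)
        {
            await foreach (var task in _channel.Reader.ReadAllAsync())
            {
                await sender.SendSmsCodeAsync(task.PhoneNumber, task.Code);
            }
        }
    }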

Azure hosting Workflow Activity with Persistence for Windows 8 Push Notification

I'm completely new to Windows Azure and Windows Workflow.
Basically, what I'm trying to implement is a cloud web app that is responsible for pushing tile updates/badge/toast notifications down to my Windows 8 application.
The code that sends the tile notification etc. is fine, but it needs to be executed every hour or so.
I decided the most straightforward approach was to make an MVC application with a Web API; this Web API is responsible for receiving the ChannelURI that the modern application sends to it, which is then stored in SQL Azure.
There will then be a class with a static method that gathers the new data and generates a new tile/badge/toast.
I've created a simple Activity workflow that has a Sequence with a DoWhile(true) activity. The body of this DoWhile contains a Sequence with an InvokeMethod and a Delay; the InvokeMethod calls the class containing my static method, and the Delay is set to one hour.
So that seems to be all okay. I then start this Activity via Application_Start in Global.asax with the following lines:
    this.ActivityInvoker = new WorkflowInvoker(new NotificationActivity());
    this.ActivityInvoker.InvokeAsync();
I tested it, and it seems to run my custom static method at the set interval.
That's all good, but now I have three questions in relation to this way of handling it:
Is this the correct/best approach? If not, what are some other ways I should look into?
If a new instance is spun up on Azure, how do I ensure that the running workflows on the two instances won't step on each other's toes? I.e., how do I make sure the InvokeMethod won't run twice? I only want it to run once an hour, regardless of how many instances there are.
How do I ensure that the workflow's state is maintained if an instance crashes or goes down?
Any help, guidance, etc is much appreciated.
A couple of good questions that I would love to answer; however, doing them justice on a forum like this is difficult. But let's give it a crack, to start with at least.
1) There is nothing wrong with your approach for implementing a scheduled task. I can think of a few other ways of doing it, like running a simple Worker Role with a do { Thread.Sleep(...); ... } loop: simple, but effective. There are more complex/elegant ways too, including external libraries and frameworks for scheduling tasks in Azure.
2) You would need to implement some sort of singleton pattern in your workflow/job-processing engine. You could, for instance, acquire a lease on a 1 KB blob record when your job starts and refuse to start the job on another instance while the lease is held (see the sketch below).
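A sketch of that blob-lease approach with the current Azure.Storage.Blobs client (the method and blob names are invented; the storage library of the question's era exposes an equivalent AcquireLease operation):

    using System;
    using System.Threading.Tasks;
    using Azure;
    using Azure.Storage.Blobs;
    using Azure.Storage.Blobs.Specialized;

    public static class SingletonJob
    {
        // Returns true if this instance won the lease and ran the job.
        public static async Task<bool> TryRunExclusiveAsync(
            BlobClient lockBlob, Func<Task> job)
        {
            var lease = lockBlob.GetBlobLeaseClient();
            try
            {
                // Leases run 15-60 seconds; renew in the background
                // if the job can take longer than that.
                await lease.AcquireAsync(TimeSpan.FromSeconds(60));
            }
            catch (RequestFailedException)
            {
                return false; // another instance holds the lease; skip this run
            }

            try
            {
                await job();
                return true;
            }
            finally
            {
                await lease.ReleaseAsync();
            }
        }
    }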
For more detailed answers, I suggest we take this offline and discuss your requirements in detail on a Skype call. You know how to get hold of me via email :) Looking forward to it.
