I can't find information on whether log4net is a MOM. If not, then why not? I tried to figure it out from their website.
log4net is not a message oriented middleware.
A MOM supports the exchange of general-purpose messages in a distributed application environment. Data is exchanged by message passing and/or message queuing supporting both synchronous and asynchronous interactions between distributed computing processes.
A MOM is usually designed for these goals:
Asynchronicity
Extensibility
Load Balancing
Why is log4net not a message oriented middleware?
Because log4net was designed with these goals in mind:
Speed of logging (or of not logging, when logging is disabled)
Flexibility of logging (it can output to multiple logging targets, and the writing strategy can easily be modified at runtime)
Being able to output to multiple targets (including remote targets via UDP) is a crucial feature of a MOM. However, log4net does not create a communication layer between applications; it only enables logs to be written to multiple targets. It also does not support asynchronous logging out of the box. So it does not offer everything a MOM is expected to.
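For contrast, a MOM gives applications an explicit send/receive channel between processes. Below is a minimal, hypothetical sketch using the generic JMS 2.0 API; the queue name and the broker sitting behind the ConnectionFactory (for example ActiveMQ) are assumptions made purely for illustration. This is the kind of communication layer log4net does not provide:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class MomSketch {
        // Placeholder: in practice the factory comes from the broker you deploy,
        // e.g. an ActiveMQ or Artemis ConnectionFactory.
        static ConnectionFactory obtainFactory() {
            throw new UnsupportedOperationException("wire in a real broker here");
        }

        public static void main(String[] args) {
            ConnectionFactory factory = obtainFactory();
            try (JMSContext ctx = factory.createContext()) {
                Queue queue = ctx.createQueue("orders"); // hypothetical queue name

                // Producer side: fire-and-forget, the sender is decoupled from the receiver.
                ctx.createProducer().send(queue, "order #42 created");

                // Consumer side: typically a different process picks the message up later.
                JMSConsumer consumer = ctx.createConsumer(queue);
                String body = consumer.receiveBody(String.class, 1000);
                System.out.println("received: " + body);
            }
        }
    }

log4net, by comparison, only appends entries to the targets you configure; there is no consuming application waiting on the other end of its appenders.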
Reading the documentation, Azure EventHubs is meant for:
Application instrumentation
User experience or workflow processing
Internet of Things (IoT) scenarios
Can this be used for transactional data, such as data that handles revenue or other application-sensitive data?
Based on what I read, it looks like it is meant for handling data where data loss is not a concern. Is this the case?
It is mainly designed for large-scale ingestion of data. That is why typical scenarios include IoT solutions, which consist of a multitude of devices sending massive amounts of telemetry data.
To allow for this kind of scale it does not include some features that other messaging services, like Azure Service Bus, do have. I think this blog does a good job of listing the differences. The Use Case section in particular explains things very well:
From a target use case perspective if we consider some of our typical enterprise integration patterns then if you are implementing a pattern which uses a Command Message, or a Request/Reply Message then you probably want to use Azure Service Bus Messaging. RPC patterns can be implemented using Request/Reply messages on Azure Service Bus using a response queue. These are really about ESB and EAI style messaging patterns where you want to send messages between applications and probably want to use other features such as property based routing.
Azure Event Hubs is more likely to be used if you’re implementing patterns with Event Messages and you want somewhere reliable to send them that is capable of dealing with a massive scale but will allow you to do stuff with the events out of process.
With these core target use cases in mind it is easy to see where the scale differences come into play. For messaging it’s about one application telling one or more apps to DO SOMETHING or GIVE ME SOMETHING. The alternative is that in eventing the applications are saying SOMETHING HAS HAPPENED. When you consider this in typical application scenarios and you put events into the telemetry and logging space you can quickly see that the SOMETHING HAS HAPPENED scenario will produce a lot more traffic than the other.
Now I’m not saying that you can’t implement some messaging type functions using event hubs and that you can’t push events to a Service Bus topic as in integration there are always different requirements which result in different implementation scenarios, but I think if you follow the above as a general rule then you will usually be on the right path.
That does not mean, however, that it is only capable of handling data where loss is acceptable. Data is stored for a configurable amount of time and, if necessary, this data can be read again from an earlier point in time.
Now, given your scenario, I do not think Event Hub is the best fit. But truth be told, I am not sure, because you would have to elaborate more on what you want to do exactly.
Addition
The idea behind Event Hubs is that you will get at-least-once delivery at great scale (Source). See also this question: Does Azure Event Hub guarantees at least once delivery?
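To make the ingestion-style usage concrete, here is a minimal sketch assuming the azure-messaging-eventhubs Java SDK; the connection string and hub name are placeholders:

    import com.azure.messaging.eventhubs.EventData;
    import com.azure.messaging.eventhubs.EventDataBatch;
    import com.azure.messaging.eventhubs.EventHubClientBuilder;
    import com.azure.messaging.eventhubs.EventHubProducerClient;

    public class EventHubSketch {
        public static void main(String[] args) {
            String connectionString = "<namespace-connection-string>"; // placeholder
            String eventHubName = "telemetry";                         // placeholder

            // Producer: devices/apps publish "SOMETHING HAS HAPPENED" style events at scale.
            EventHubProducerClient producer = new EventHubClientBuilder()
                    .connectionString(connectionString, eventHubName)
                    .buildProducerClient();

            EventDataBatch batch = producer.createBatch();
            batch.tryAdd(new EventData("{\"deviceId\":\"sensor-1\",\"temperature\":21.5}"));
            producer.send(batch);
            producer.close();

            // Consumers read independently and, within the configured retention window,
            // can rewind to an earlier point (e.g. EventPosition.earliest() on a partition).
        }
    }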
Perhaps a silly question, but I keep reading about SI's "lightweight messaging within Spring-based applications". I want to know how (and if) SI uses messaging internally. When I run an SI (Boot) application (one that doesn't require AMQP ... aka 'messaging' support), I don't have to run a Rabbit server. But, from what I gather, SI uses messaging internally. How is this accomplished? I can't seem to find any reference explaining this or what infrastructure is required to make it possible. Thanks!
The messages are simply Java objects (o.s.messaging.Message) passed between components. No external broker is needed, unless you need persistence.
I suggest you read Mark Fisher's book (Spring Integration in Action) and/or the reference manual.
The messages inside Spring Integration are in-memory Java objects passed from one service to another via channels/queues. It provides a mechanism to define the flow and order of processing, while also allowing each service step to work in isolation. The Spring Integration queue channel is ultimately backed by an implementation of the java.util.Queue interface.
It is different from dedicated messaging brokers like IBM MQ or ActiveMQ in that it doesn't offer persistence. This means that if you kill the JVM or the application process is stopped, all the messages in flight on the Spring queue/channel are lost. A lot of times this is acceptable if the process is idempotent, i.e. when the application comes back up, I can restart the process from the beginning.
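To make the "in-memory Java objects passed via channels" point concrete, here is a minimal sketch that drives a Spring Integration QueueChannel directly; in a real application the channels and endpoints are usually declared in configuration rather than instantiated by hand:

    import org.springframework.integration.channel.QueueChannel;
    import org.springframework.messaging.Message;
    import org.springframework.messaging.support.GenericMessage;

    public class InMemoryChannelSketch {
        public static void main(String[] args) {
            // A channel backed by an in-memory java.util.Queue; no broker is involved.
            QueueChannel channel = new QueueChannel(10);

            // The "message" is just a Java object wrapped in o.s.messaging.Message.
            channel.send(new GenericMessage<>("hello"));

            // Another component receives it in the same JVM; if the JVM dies first,
            // the in-flight message is simply gone (no persistence).
            Message<?> received = channel.receive(1000);
            System.out.println(received.getPayload());
        }
    }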
I want to use a logging framework like log4cxx in a multi-threaded application.
If the log output goes to a file, correct serialization of the messages is needed.
I was asking myself how (and if) these frameworks achieve correct serialization of the output without using some sort of synchronization object.
I suppose that if a framework uses synchronization objects (for example, to access a queue of log messages), this could change the behaviour of the threads involved, and thus also change the behaviour (and the bugs...) of the whole application being logged.
log4cxx is indeed synchronized, like the other log4XXX frameworks. The synchronization is done in the appenders and is necessary to guarantee that the contents of log entries are not mixed together. This does not change the behavior of your threads, but the threads do take a small performance hit. That hit is small compared to the cost of the I/O involved in logging to a file.
If you are still worried about performance, you can consider using asynchronous logging (using the AsyncAppender), which handles logging in a separate thread. With the async approach you cannot be guaranteed that messages are logged (e.g. if the application crashes before the logging thread handles the message). The simplest way to improve performance is to reduce the amount of logging.
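Conceptually (shown here as a plain Java sketch rather than actual log4cxx code, which is C++), a synchronized appender and its asynchronous variant work roughly like this:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class AppenderSketch {
        // Synchronous appender: the lock serializes writes so log lines never interleave.
        static class FileAppender {
            synchronized void append(String line) {
                System.out.println(line); // a real framework writes to a buffered file stream
            }
        }

        // Async appender: callers only pay for an enqueue; a background thread does the I/O.
        static class AsyncAppender {
            private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            private final FileAppender delegate = new FileAppender();

            AsyncAppender() {
                Thread worker = new Thread(() -> {
                    try {
                        while (true) {
                            delegate.append(queue.take());
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                worker.setDaemon(true); // messages still queued at shutdown may be lost
                worker.start();
            }

            void append(String line) {
                queue.offer(line);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            AsyncAppender async = new AsyncAppender();
            async.append("hello from a worker thread");
            Thread.sleep(100); // give the daemon worker a chance to drain the queue
        }
    }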
I would like to
1. intercept events, and
2. control behaviour via events in a BPEL runtime engine. May I know which BPEL runtime engines support this?
For 1, for example: when there is an invocation of a service named "hello", I would like to receive the event "invoke_hello" from the server.
For 2, for example: when the server has parallel invocations of 3 services, "invoke_hello1", "invoke_hello2" and "invoke_hello3", I could control the behaviour by saying that only "invoke_hello1" is allowed to run.
I am interested in whether there are any BPEL engines that support 1, 2, or both, together with a documentation page that roughly covers this (so I could make use of the feature).
Disclaimer: I haven't personally used the eventing modules of these engines, so I cannot guarantee that they work as they promise.
Concerning question 1 (event notification):
Apache ODE has support for Execution Events. These events go into its database, and you have several ways of retrieving events. You can:
query the database to read them.
use the engine's Management API to do this via Web Services
add your own event listener implementation to the engine's classpath (a sketch follows below).
ODE's events concern the lifecycle of activities in BPEL. So your "invoke_hello" should map to one of the ActivityXXX events in ODE.
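For the third option (a custom listener), the sketch below assumes ODE's org.apache.ode.bpel.iapi.BpelEventListener interface with an onEvent callback plus startup/shutdown lifecycle methods, and the ActivityExecStartEvent class from org.apache.ode.bpel.evt; verify the exact class and method names against the javadoc of the ODE version you use, and register the listener via ODE's event listener configuration property:

    import java.util.Properties;

    import org.apache.ode.bpel.evt.ActivityExecStartEvent;
    import org.apache.ode.bpel.evt.BpelEvent;
    import org.apache.ode.bpel.iapi.BpelEventListener;

    // Hypothetical listener reacting to activity-start events such as your "invoke_hello".
    public class InvokeEventListener implements BpelEventListener {

        @Override
        public void onEvent(BpelEvent event) {
            // ODE publishes lifecycle events; activity execution start is one of them.
            if (event instanceof ActivityExecStartEvent) {
                System.out.println("activity started: " + event);
            }
        }

        @Override
        public void startup(Properties configProperties) {
            // called when the engine registers the listener (signature assumed from ODE 1.3.x)
        }

        @Override
        public void shutdown() {
            // called when the engine shuts down
        }
    }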
The Sun BPEL Service Engine included in OpenESB has some support for alerting, but the documentation is not that verbose concerning how to use it. Apparently, you can annotate activities with an alert level and events are generated when an activity is executed.
Concerning question 2 (controlling behaviour):
This is hard, and I am not sure if any engine really supports this in normal execution mode. One straightforward way of achieving it would be to execute the engine in debug mode and manually control each step. That way you could skip the continuation of "invoke_hello2" and "invoke_hello3" and just continue with "invoke_hello1".
As far as I know, ODE does not have a debugger. The Sun BPEL Service Engine on the other hand, has quite a nice one. It is integrated in the BPEL editor in Netbeans which is a visual editor (that uses BPMN constructs to visualize BPEL activities) and lets you jump from and to every activity.
Another option would be to manually code your own web service that intercepts messages and forwards these to the engine depending on your choice. However, as I understand your question, you would rather like an engine that provides this out of the box.
Apparently Oracle BPEL also has support for eventing and according to this tutorial also comes with a debugger, but I haven't used this engine personally so far, so I won't include it in this answer.
I have an application which uses the .NET thread pool to run multiple threads. It uses log4net to write logs to a plain text file. Is it a good idea to use log4net for asynchronous logging like this, or do I need a separate MSMQ implementation to append messages?
You can use log4net as-is for file-based logging for multi-threaded applications. The log messages from all the threads will be written to the same file. It can get a little confusing to read all the interspersed messages, but it's better than not having logging. You'll definitely want to log the thread ID in the appender format so you can tell which messages are coming from which thread.
There are probably more fancy things you can do to handle the logging for different threads, but I've never really had to go down that road. I prefer to stick with file-based logging, and having all the threads log to one file is easier to deal with than having each thread log to its own file, in my opinion.