Esper UpdateListener's concurrency - multithreading

My boss wants me to learn Esper, the open source library for CEP (complex event processing), so I need some help.
I want many UpdateListeners subscribed to one event stream, and I want them to run concurrently. That is, if one listener has a long, heavy task, the other listeners should keep running in parallel. We receive many events in a short time, so I need faster processing.

The UpdateListener code can simply use a Java thread pool to do its work. For an example, see http://www.javacodegeeks.com/2013/01/java-thread-pool-example-using-executors-and-threadpoolexecutor.html.
In Esper you can also configure threading:
http://esper.codehaus.org/esper-5.1.0/doc/reference/en-US/html_single/index.html#api-threading-advanced
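For the thread-pool approach, here is a minimal sketch (not from the linked article) of a listener that hands each incoming event to a java.util.concurrent.ExecutorService so a slow listener does not block Esper's delivery thread. The update() signature and imports follow the Esper 5.x API referenced above; the pool size and the process() method are arbitrary placeholders.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class AsyncUpdateListener implements UpdateListener {
    // Example pool size - tune this for your workload.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    @Override
    public void update(EventBean[] newEvents, EventBean[] oldEvents) {
        if (newEvents == null) {
            return; // nothing inserted, e.g. only removed events
        }
        for (EventBean bean : newEvents) {
            final Object event = bean.getUnderlying();
            // Hand the heavy work to the pool; update() returns immediately.
            pool.submit(() -> process(event));
        }
    }

    private void process(Object event) {
        // ... long-running processing goes here ...
    }
}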

Channels in Go, and emitters in node.js?

Does Go have an equivalent of node.js' "emitter"?
I'm teaching myself Go by porting over a node.js library I wrote. In the node version, the library emits an event once something happens: for example, it listens on UDP port 1234, and when "ABC" is received, "abcreceived" is emitted so the calling code can respond as necessary (e.g. by sending back "DEF").
I've seen channels in Go (and am currently reading up on them), but as I'm still new to this language, I don't know if (or how, for that matter) that can be used to communicate with whatever code is using my library.
I've also seen https://github.com/chuckpreslar/emission, but am not sure if this is acceptable, or if there's a better ("Best practice") way of doing things.
Go and Node.js are very different. Node.js supports concurrency only via callbacks. There might be various ways of dressing them up, but they're fundamentally callbacks.
In Node.js, there is no parallelism; Node.js has a single-threaded runtime. When Node.js async is used to achieve what is called 'parallel' execution, it isn't parallel in the sense used in Go, but concurrent.
Concurrency is not parallelism in the Go world.
Go has explicit concurrency based on Communicating Sequential Processes (CSP), a mathematical basis conceived by Tony Hoare at Oxford. The runtime interleaves cooperating processes called goroutines by time-slicing them onto the available CPU cores. Within each goroutine, the code is single threaded, so is easy to write. In the simple case, no data is shared between goroutines; instead messages pass between them along channels. In this way, there is no need for callbacks.
When goroutines get blocked waiting for I/O, that's OK because they don't use any CPU time until they're unblocked. Their memory footprint is slight and you can have very large numbers of them. So callbacks are not needed for I/O operations either.
Because the execution models of Go and Node.js are about as different as they could be, attempting to port code from one to the other is very likely to lead to very clumsy solutions. It's better to start from the original requirements and implement from scratch.
It would be possible to distort the Go concurrency model using function arguments to behave like callbacks. This would be a bad idea because it would not be idiomatic and would lose the benefits that CSP gives.
So by reading others' Go code and some links in the comments to my question, I think channels are the way to go.
In my library code (semi pseudo-code):
// Make a new channel called "Events" that carries strings
var Events = make(chan string)

func doSomething() {
    // ...
    Events <- "abcreceived" // send "abcreceived" on the Events channel
}
And in the code that will use my library:
evt := <-mylib.Events
switch evt {
case "abcreceived":
    sendBackDEF()
// ...
}
I still prefer node.js' EventEmitter (because you can transfer data back easily) but for simple things, this should suffice.

Independent server side processing in node

Is it possible, or even practical, to create a node program (or sub-program/loop) that executes independently of the connected clients?
So in my specific use case, I would like to make a multiplayer game where each turn a player performs actions, and at the end of that turn those actions are computed. Is it possible to perform those computations at a specific time regardless of the clients/players connecting?
I assume this involves the use of threads somewhere.
Possibly an easier solution would be to compute the outcome when it is observed, but this could cause difficulties if it influences other entities. This problem has been a curiosity of mine for a while, though.
Well, basically, the easiest solution would probably be to run the computation on a cluster (Node's cluster module). This forks a worker process that runs the independent task and communicates with the main process via messages.
If you wish, however, to run a completely separate process (I probably wouldn't, but it is an option), this can be done too. You then just need a communication protocol between the two processes; usually this is handled by a messaging or task-queue system. A popular queue for solving this kind of issue is RabbitMQ.
If the computations each turn are not too heavy, you could solve the issue with a simple setInterval():
function turnCalculations(){
    // do loads of stuff every 30 seconds
}
setInterval(turnCalculations, 30000)
// normal node server stuff here
This would run the turn calculations every 30 seconds regardless of the users connected, but if the calculations take too long they might block your server.

ServiceStack: How to make InMemoryTransientMessageService run in a background

What needs to be done to make InMemoryTransientMessageService run in a background thread? I publish things inside a service using
base.MessageProducer.Publish(new RequestDto());
and they are executed immediately inside the service request.
The project is self-hosted.
Here is a quick unit test showing the blocking of the current request instead of deferring it to the background:
https://gist.github.com/lmcnearney/5407097
There is nothing out of the box. You would have to build your own. Take a look at ServiceStack.Redis.Messaging.RedisMqHost - most of what you need is there, and it is probably simpler (one thread does everything) to get you going when compared to ServiceStack.Redis.Messaging.RedisMqServer (one thread for queue listening, one for each worker). I suggest you take that class and adapt it to your needs.
A few pointers:
ServiceStack.Message.InMemoryMessageQueueClient does not implement WaitForNotifyOnAny(), so you will need an alternative way of getting the background thread to wait for incoming messages.
Closely related: the ServiceStack.Redis implementation uses topic subscriptions, which in this class are used to transfer the WorkerStatus.StopCommand, so you have to find an alternative way of getting the background thread to stop.
Finally, you may want to adapt ServiceStack.Redis.Messaging.RedisMessageProducer, as its Publish() method pushes the requested message to the queue and pushes the channel/queue name to the TopicIn queue. After reading the code you can see how these three points tie together.
Hope this helps...

UpdateAllViews() from within a worker thread?

I have a worker thread in a class that is owned by a ChildView. (I intend to move this to the Doc eventually.) When the worker thread completes a task I want all the views to be updated. How can I make a call to tell the Doc to issue an UpdateAllViews()? Or is there a better approach?
Thank you.
Added by OP: I am looking for a simple solution. The app runs on a single-user, single-CPU computer and does not need network (or Internet) access. There is nothing to cause a deadlock.
I think I would like to have the worker thread post (or send) a message to cause the views to update.
Everything I read about threading seems way more complicated than what I need - and, yes, I understand that all those precautions are necessary for applications that are running in multiprocessor, multiuser, client-server systems, etc. But none of those apply in my situation.
I am just stuck at getting the right combination of getting the window handle, posting the message and responding to the message in the right functions and classes to compile and function at all.
UpdateAllViews is not thread-safe, so you need to marshal the call to the main thread.
I suggest you signal a manual-reset event to mark your thread's completion and check the event's status in a WM_TIMER handler.
Suggested reading:
First Aid for the Thread-Impaired: Using Multiple Threads with MFC
More First Aid for the Thread-Impaired: Cool Ways to Take Advantage of Multithreading

How can threads be avoided?

I've read a lot recently about how writing multi-threaded apps is a huge pain in the neck, and have learned enough about the topic to understand, at least at some level, why it is so.
I've read that using functional programming techniques can help alleviate some of this pain, but I've never seen a simple example of functional code that is concurrent. So, what are some alternatives to using threads? At least, what are some ways to abstract them away so you needn't think about things like locking and whether a particular library's objects are thread-safe.
I know Google's MapReduce is supposed to help with the problem, but I haven't seen a succinct explanation of it.
Although I'm giving a specific example below, I'm more curious about general techniques than about solving this specific problem (though using the example to help illustrate other techniques would be helpful).
I came to the question when I wrote a simple web crawler as a learning exercise. It works pretty well, but it is slow. Most of the bottleneck comes from downloading pages. It is currently single threaded, and thus only downloads a single page at a time. Thus, if the pages can be downloaded concurrently, it would speed things up dramatically, even if the crawler ran on a single processor machine. I looked into using threads to solve the issue, but they scare me. Any suggestions on how to add concurrency to this type of problem without unleashing a terrible threading nightmare?
The reason functional programming helps with concurrency is not because it avoids using threads.
Instead, functional programming preaches immutability and the absence of side effects.
This means that an operation can be scaled out to N threads or processes without having to worry about messing with shared state.
Actually, threads are pretty easy to handle until you need to synchronize them. Usually, you use a thread pool to submit tasks and wait until they are finished.
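For illustration, a minimal Java sketch of that thread-pool pattern (the task bodies are placeholders) could look like this:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<String>> tasks = Arrays.asList(
                () -> "result of task 1",   // placeholder work
                () -> "result of task 2");
        // invokeAll blocks until every task has finished
        List<Future<String>> results = pool.invokeAll(tasks);
        for (Future<String> result : results) {
            System.out.println(result.get());
        }
        pool.shutdown();
    }
}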
It is when threads need to communicate and access shared data structures that multithreading becomes really complicated. As soon as you have two locks, you can get deadlocks, and this is where multithreading gets really hard. Sometimes your locking code can be wrong by just a few instructions. In that case, you may only see the bugs in production, on multi-core machines (if you developed on a single core; this happened to me), or they may be triggered by some other hardware or software. Unit testing doesn't help much here: testing finds bugs, but you can never be as sure as in "normal" apps.
I'll add an example of how functional code can be used to safely make code concurrent.
Here is some code you might want to run in parallel, so you don't have to wait for one file to finish before starting to download the next:
void DownloadHTMLFiles(List<string> urls)
{
    foreach(string url in urls)
    {
        DownloadOneFile(url); // download the HTML and save it to a file named after the url - perhaps used for caching
    }
}
If you have a number of files, the user might spend a minute or more waiting for them all. We can re-write this code functionally like this, and it does basically the exact same thing:
urls.ForEach(DownloadOneFile);
Note that this still runs sequentially. However, not only is it shorter, we've gained an important advantage here. Since each call to the DownloadOneFile function is completely isolated from the others (for our purposes, available bandwidth isn't an issue), you could very easily swap out the ForEach function for another very similar function: one that kicks off each call to DownloadOneFile on a separate thread from a thread pool.
It turns out .NET has just such a function available in the Parallel Extensions. So, by using functional programming you can change one line of code and suddenly have something run in parallel that used to run sequentially. That's pretty powerful.
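As an aside, the same one-line switch exists in Java: a sequential forEach over a list can be swapped for a parallel stream. A minimal sketch, with placeholder URLs and a stand-in download method:

import java.util.Arrays;
import java.util.List;

public class ParallelDownload {
    public static void main(String[] args) {
        List<String> urls = Arrays.asList("http://example.com/a", "http://example.com/b"); // placeholders
        // sequential version
        urls.forEach(ParallelDownload::downloadOneFile);
        // parallel version: same shape, but runs on the common fork/join pool
        urls.parallelStream().forEach(ParallelDownload::downloadOneFile);
    }

    static void downloadOneFile(String url) {
        // stand-in for the real download-and-save logic
        System.out.println("downloading " + url + " on " + Thread.currentThread().getName());
    }
}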
There are a couple of brief mentions of asynchronous models but no one has really explained it so I thought I'd chime in. The most common method I've seen used as an alternative for multi-threading is asynchronous architectures. All that really means is that instead of executing code sequentially in a single thread, you use a polling method to initiate some functions and then come back and check periodically until there's data available.
This really only works in models like your aforementioned crawler, where the real bottleneck is I/O rather than CPU. In broad strokes, the asynchronous approach would initiate the downloads on several sockets, and a polling loop periodically checks to see if they're finished downloading and when that's done, we can move on to the next step. This allows you to run several downloads that are waiting on the network, by context switching within the same thread, as it were.
The multi-threaded model would work much the same, except using a separate thread rather than a polling loop checking multiple sockets in the same thread. In an I/O bound application, asynchronous polling works almost as well as threading for many use cases, since the real problem is simply waiting for the I/O to complete and not so much the waiting for the CPU to process the data.
Another real world example is for a system that needed to execute a number of other executables and wait for results. This can be done in threads, but it's also considerably simpler and almost as effective to simply fire off several external applications as Process objects, then check back periodically until they're all finished executing. This puts the CPU-intensive parts (the running code in the external executables) in their own processes, but the data processing is all handled asynchronously.
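A rough Java sketch of that launch-and-poll idea (the executable names are hypothetical):

import java.util.ArrayList;
import java.util.List;

public class ProcessPolling {
    public static void main(String[] args) throws Exception {
        List<Process> running = new ArrayList<>();
        for (String cmd : new String[] {"task-one", "task-two"}) { // hypothetical external programs
            running.add(new ProcessBuilder(cmd).start());
        }
        // Poll periodically until every external process has exited.
        while (running.stream().anyMatch(Process::isAlive)) {
            Thread.sleep(500);
        }
        System.out.println("All external tasks finished");
    }
}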
The Python FTP server library I work on, pyftpdlib, uses the Python asyncore library to serve FTP clients with only a single thread, using asynchronous socket communication for file transfers and command/response handling.
For further reading, see the Python Twisted library's page on Asynchronous Programming - while somewhat specific to Twisted, it also introduces async programming from a beginner's perspective.
Concurrency is quite a complicated subject in computer science, which demands good understanding of hardware architecture as well as operating system behavior.
Multi-threading has many implementations depending on your hardware and your host OS, and, tough as that already is, the pitfalls are numerous. It should be noted that in order to achieve "true" concurrency, threads are the only way to go. Basically, threads are the only way for you as a programmer to share resources between different parts of your software while allowing them to run in parallel. Bear in mind that a standard CPU (dual/multi-core aside) can only do one thing at a time; concepts like context switching then come into play, and they have their own set of rules and limitations.
I think you should seek more generic background on the subject, like you are saying, before you go about implementing concurrency in your program.
I guess the best place to start is the wikipedia article on concurrency, and go on from there.
What typically makes multi-threaded programming such a nightmare is when threads share resources and/or need to communicate with each other. In the case of downloading web pages, your threads would be working independently, so you may not have much trouble.
One thing you may want to consider is spawning multiple processes rather than multiple threads. In the case you mention--downloading web pages concurrently--you could split the workload up into multiple chunks and hand each chunk off to a separate instance of a tool (like cURL) to do the work.
If your goal is to achieve concurrency it will be hard to get away from using multiple threads or processes. The trick is not to avoid it but rather to manage it in a way that is reliable and non-error prone. Deadlocks and race conditions in particular are two aspects of concurrent programming that are easy to get wrong. One general approach to manage this is to use a producer/consumer queue... threads write work items to the queue and workers pull items from it. You must make sure you properly synchronize access to the queue and you're set.
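A minimal Java sketch of that producer/consumer queue, using a BlockingQueue so the queue itself handles the synchronization (the work items and the poison-pill shutdown are illustrative choices):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
        final String POISON = "STOP"; // sentinel telling the consumer to quit

        Thread consumer = new Thread(() -> {
            try {
                String item;
                while (!(item = queue.take()).equals(POISON)) { // take() blocks until an item arrives
                    System.out.println("processing " + item);   // stand-in for real work
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < 10; i++) {
            queue.put("work item " + i); // put() blocks if the queue is full
        }
        queue.put(POISON);
        consumer.join();
    }
}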
Also, depending on your problem, you may also be able to create a domain specific language which does away with concurrency issues, at least from the perspective of the person using your language... of course the engine which processes the language still needs to handle concurrency, but if this will be leveraged across many users it could be of value.
There are some good libraries out there.
java.util.concurrent.ExecutorCompletionService takes value-returning tasks (Callables), runs them in background threads, then places the completed Futures on a queue for you to process further as they finish. Of course, this is Java 5 and later, so it isn't available everywhere.
In other words, all your code is single threaded - but where you can identify stuff safe to run in parallel, you can farm it off to a suitable library.
The point is, if you can make the tasks independent, then thread safety isn't impossible to achieve with a little thought - though it is strongly recommended you leave the complicated bits (like the implementation of ExecutorCompletionService) to an expert...
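A short sketch of the ExecutorCompletionService usage described above; the URLs and the download method are placeholders:

import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompletionExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<String> completion = new ExecutorCompletionService<>(pool);

        String[] urls = {"http://example.com/a", "http://example.com/b"}; // placeholder URLs
        for (String url : urls) {
            completion.submit(() -> download(url)); // runs in a background thread
        }

        // take() hands back futures in completion order, not submission order
        for (int i = 0; i < urls.length; i++) {
            Future<String> done = completion.take();
            System.out.println(done.get());
        }
        pool.shutdown();
    }

    private static String download(String url) {
        return "contents of " + url; // stand-in for the real download
    }
}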
One simple way to avoid threading in your scenario is to download from different processes. The main process invokes other processes with parameters telling them to download the files to a local directory, and then the main process can do the real work.
I don't think there is any simple solution to these problems. It's not a threading problem; it's concurrency that breaks the human mind.
You might watch the MSDN video on the F# language: PDC 2008: An introduction to F#
This includes the two things you are looking for. (Functional + Asynchronous)
For python, this looks like an interesting approach: http://members.verizon.net/olsongt/stackless/why_stackless.html#introduction
Use Twisted. "Twisted is an event-driven networking engine written in Python" http://twistedmatrix.com/trac/. With it, I could make 100 asynchronous http requests at a time without using threads.
Your specific example is seldom solved with multi-threading. As many have said, this class of problem is IO-bound: the processor has very little work to do and spends most of its time waiting for data to arrive over the wire so it can process it, and similarly waiting for disk buffers to flush so that it can put more of the recently downloaded data on disk.
The path to performance is through the select() facility, or an equivalent system call. The basic process is to open a number of sockets (for the web crawler downloads) and file handles (for storing the results to disk). Next you set all of the sockets and file handles to non-blocking mode, meaning that instead of making your program wait until data is available to read after issuing a request, the call returns right away with a special code (usually EAGAIN) to indicate that no data is ready. If you looped through all of the sockets this way you would be polling, which works, but is still a waste of CPU resources because your reads and writes will almost always return EAGAIN.
To get around this, all of the sockets and file handles are collected into an fd_set, which is passed to the select() system call. Your program then blocks, waiting on ANY of the sockets, and select() wakes your program when there is data on any of the streams to process.
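The answer above describes the C-level select() call; as an illustration only, Java's NIO Selector offers the same multiplexing idea in a single thread. This is a bare sketch: the hosts are placeholders and a real crawler would need proper HTTP handling and partial-write checks.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        String[] hosts = {"example.com", "example.org"}; // placeholder hosts
        int open = hosts.length;
        for (String host : hosts) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);                 // non-blocking mode, as described above
            ch.connect(new InetSocketAddress(host, 80));
            ch.register(selector, SelectionKey.OP_CONNECT);
        }
        ByteBuffer buf = ByteBuffer.allocate(8192);
        while (open > 0) {
            selector.select();                           // block until ANY channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    // connected: send a minimal request, then wait for data
                    ch.write(ByteBuffer.wrap("GET / HTTP/1.0\r\n\r\n".getBytes()));
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    buf.clear();
                    if (ch.read(buf) == -1) {            // server closed the connection: done
                        key.cancel();
                        ch.close();
                        open--;
                    }
                    // otherwise: hand the bytes in buf to whatever stores them on disk
                }
            }
        }
    }
}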
The other common case, compute-bound work, is without a doubt best addressed with some sort of true parallelism (as opposed to the asynchronous concurrency presented above) to access the resources of multiple CPUs. In the case where your CPU-bound task is running on a single-core architecture, definitely avoid any concurrency, as the overhead will actually slow your task down.
Threads are not to be avoided nor are they "difficult". Functional programming is not necessarily the answer either. The .NET framework makes threading fairly simple. With a little thought you can make reasonable multithreaded programs.
Here's a sample of your webcrawler (in VB.NET)
Imports System.Threading
Imports System.Net

Module modCrawler

    Class URLtoDest
        Public strURL As String
        Public strDest As String
        Public Sub New(ByVal _strURL As String, ByVal _strDest As String)
            strURL = _strURL
            strDest = _strDest
        End Sub
    End Class

    Class URLDownloader
        Public id As Integer
        Public url As URLtoDest
        Public Sub New(ByVal _url As URLtoDest)
            url = _url
        End Sub
        Public Sub Download()
            Using wc As New WebClient()
                wc.DownloadFile(url.strURL, url.strDest)
                Console.WriteLine("Thread Finished - " & id)
            End Using
        End Sub
    End Class

    Public Sub Download(ByVal ud As URLtoDest)
        Dim dldr As New URLDownloader(ud)
        Dim thrd As New Thread(AddressOf dldr.Download)
        dldr.id = thrd.ManagedThreadId
        thrd.SetApartmentState(ApartmentState.STA)
        thrd.IsBackground = False
        Console.WriteLine("Starting Thread - " & thrd.ManagedThreadId)
        thrd.Start()
    End Sub

    Sub Main()
        Dim lstUD As New List(Of URLtoDest)
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file0.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file1.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file2.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file3.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file4.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file5.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file6.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file7.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file8.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file9.txt"))

        For Each ud As URLtoDest In lstUD
            Download(ud)
        Next

        ' you will see this message in the middle of the text
        ' pressing a key before all files are done downloading aborts the threads that aren't finished
        Console.WriteLine("Press any key to exit...")
        Console.ReadKey()
    End Sub

End Module
