pthread support in R - multithreading

I was looking around for a way to run an R function as a separate thread in the background.
As R is written in C, I was hoping that some package would support threading using pthreads.
So far I haven't found anything good; some of the packages I tested were broken or implemented other concepts entirely.
So my requirement is as simple as running an R script as a separate pthread inside an R console.
How can I run a function or a script as a separate thread?
PS - I am not looking for fork-like features.
Thanks
Vineeth

In the R extensions manual:
There is no direct support for POSIX threads.
Alternatively, you can use several R processes in parallel. On Linux you can simply start a new process by running an R script from a terminal and appending &, e.g.:
Rscript spam.R &
If you insist on doing this from within R:
system("Rscript spam.R", wait = FALSE)
Or you could have a look at the parallel package to run R operations in parallel.
Given your comments I think you could have a look at the HighPerformance task view. Quoting from that:
The bigmemory package by Kane and Emerson permits storing large
objects such as matrices in memory (as well as via files) and uses
external pointer objects to refer to them. This permits transparent
access from R without bumping against R's internal memory limits.
Several R processes on the same computer can also share big memory
objects.
This indicates that the bigmemory package might prove interesting in letting multiple R instances access the same data stored in memory. Then you could use forking to create multiple R instances.

Related

How to convert a multiprocess Flask/gunicorn setup to a single multithreaded process

I would like to cache a large amount of data in a Flask application. Currently it runs on K8S pods with the following gunicorn.ini:
bind = "0.0.0.0:5000"
workers = 10
timeout = 900
preload_app = True
To avoid caching the same data in those 10 workers I would like to know if Python supports a way to multi-thread instead of multi-process. This would be very easy in Java but I am not sure if it is possible in Python. I know that you can share a cache between Python instances using the file system or other methods. However, it would be a lot simpler if it were all shared in the same process space.
Edited:
There are a couple of posts suggesting that threads are supported in Python. This comment by Filipe Correia, or this answer in the same question.
Based on the above comment, the Gunicorn design document talks about workers and threads:
Since Gunicorn 19, a threads option can be used to process requests in multiple threads. Using threads assumes use of the gthread worker.
Based on how Java works, to share some data among threads, I would need one worker and multiple threads. Based on this other link I know it is possible. So I assume I can change my gunicorn configuration as follows:
bind = "0.0.0.0:5000"
workers = 1
threads = 10
timeout = 900
preload_app = True
This should give me 1 worker and 10 threads, which should be able to process the same number of requests as the current configuration. However, the question is: would the cache still be instantiated once and shared among all the threads? How or where should I instantiate the cache to make sure it is shared among all the threads?
would like to ... multi-thread instead of multi-process.
I'm not sure you really want that. Python is rather different from Java.
workers = 10
One way to read that is "ten cores", sure. But another way is "wow, we get ten GILs!"
The global interpreter lock must be held before the interpreter interprets a new bytecode instruction. Ten interpreters offer significant parallelism, executing ten instructions simultaneously.
Now, there are workloads dominated by async I/O, or where the interpreter calls into a C extension to do the bulk of the work. If a C thread can keep running, doing useful work in the background, and the interpreter gathers the result later, terrific. But that's not most workloads.
tl;dr: You probably want ten GILs, rather than just one.
To avoid caching the same data in those 10 workers
Right! That makes perfect sense.
Consider pushing the cache into a storage layer, or a daemon like Redis. Or access memory-resident cache, in the context of your own process, via mmap or shmat.
When running Flask under Gunicorn, you are certainly free to set threads greater than 1, though it's likely not what you want. YMMV. Measure and see.
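If you do go with workers = 1 and threads = 10, a minimal sketch of what "shared among all the threads" looks like in-process follows (this is not from the original answer; load_expensive_data and get_cached are made-up names):
import threading

def load_expensive_data(key):
    # placeholder for the real (slow) load you want to do only once per key
    return {"key": key}

_cache = {}                      # one dict per worker process, visible to all its threads
_cache_lock = threading.Lock()   # threads share memory, so guard concurrent writes

def get_cached(key):
    with _cache_lock:
        if key not in _cache:
            _cache[key] = load_expensive_data(key)
        return _cache[key]
With a single worker, that module-level dict exists once in the process and every gthread thread sees the same object; the lock only matters when several threads might populate the same key at the same time.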

Running multiple independent Node programs in the same Node instance

I'm doing embedded development on a resource-limited system, and need to run a number of separate Node.js tasks (call them task1.js, task2.js and task3.js). The obvious solution would be to run them separately, e.g.:
$ node task1.js &
[1] 1968
$ node task2.js &
[2] 1969
$ node task3.js &
[3] 1970
$
This works, but I end up with three independent Node stacks, each with its own multi-megabyte heap, interpreter, etc. etc. etc., which is a waste that I'd like to avoid.
Another obvious solution would be to concatenate the source files:
$ cat task1.js task2.js task3.js | node -
This works, but it has problems. First, all three task sources would end up in the same module, so I'd risk name collisions. For example, if each task file included const crypto = require('crypto');, then when concatenated Node would complain about the multiply-defined crypto variable.
This would also require all of the primary task source files to be in the same directory, otherwise any relative path references to dependent files would be calculated based on the default working directory, and would likely break.
So, I'm looking for a way to run multiple tasks in the same Node instance, sharing Node resources as much as possible.
It would be great if some or all the following were true:
For development/debugging convenience, the same taskX.js sources could be used individually (as at the top), or run at the same time in the same node instance
No special care would have to be taken in each task's code to prevent namespace collisions
Relative path references in require statements wouldn't all be resolved from the same working directory, so that I could have separate source trees for the separate tasks
Problems I don't need to be solved:
Multiprocessing or multithreading
Sharing data between the tasks
Inter-task events and communication services (if I do I'll write them myself)
Protecting each task from the others' bad behavior
Expected constraints for the task code:
No busy-waiting, so that none block the others from executing
No exclusive use of common system resources (e.g. no two will open a server socket on the same port)
Use of global Node resources will be restricted or forbidden
Is this resource limited to 1 CPU? If so, then your best bet is making each task return an async function, and processing them with something like async.parallel.
Especially if your subtasks are broken down into mostly async functions, this will allow the tasks to run as "parallel" as possible.
In a multi-CPU environment, you can boost performance using child processes (or the native Node cluster module). But, as others stated, this would require the memory overhead of V8 for each process.
If your tasks are mostly CPU-intensive, you will not see much gain from async.parallel, and it could even be slower than doing all your tasks synchronously. But if there is network or disk access (IO), then using parallel should be faster.

What are the performance implications of interopping with other languages via system calls?

Suppose I'm writing a program in node.js (or perhaps another typical back-end scripting language). Suppose further I have a C function f (or a python function, or what have you) that does some pure data transformation.
If I want to use f in my node program, there are two approaches:
Bind f via something like node-gyp that makes it callable from JavaScript land.
Make f into a binary (or, in the case of a language like python, a single f.py interface) that sits on the file system, and then call it from node as if it were any other system command (so that one can then take the output from the system call as a string, convert it into node.js data, and then use it).
Question: What are the performance implications of choosing (2) over (1)?
This is important because if you are using a language like C to make some aspect of your application run significantly faster, then using (2) would seem pointless if it slowed things down past some threshold.
The cost of (1) is the cost of loading the native code, transferring arguments (FFI), calling the native code, and transferring results back, with loading done only once.
The cost of (2) is always going to be the cost of starting the process, running it, and converting the results back from strings.
If the cost of f is high, you may never see a difference between (1) and (2). If the cost of f is low, then (2) will take longer because the process startup overhead will dominate.
However, depending on the complexity of f (it might be a very large data-processing application in C), it's almost always faster to create a native binding like (1). Avoiding process startup overhead is important, and it also reduces the total amount of memory needed to run your application.
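The startup-overhead point is easy to measure. Purely as an illustration (a Python sketch, not the poster's Node setup; echo stands in for the external binary), compare an in-process call with one process per call:
import subprocess
import time

def f_in_process(x):
    return x * x                 # stands in for the already-loaded native/FFI call

start = time.perf_counter()
for i in range(100):
    f_in_process(i)
in_process = time.perf_counter() - start

start = time.perf_counter()
for i in range(100):
    # option (2): spawn a process per call, read its stdout, parse the string result
    out = subprocess.run(["echo", str(i * i)], capture_output=True, text=True)
    int(out.stdout)
per_process = time.perf_counter() - start

print(f"in-process: {in_process:.4f}s, one process per call: {per_process:.4f}s")
The absolute numbers are meaningless; the point is that the per-process loop is dominated by process startup rather than by the work itself.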
Alternatively, you could take a third option: have the C code talk over a local network socket, accepting requests and responding with answers when the computation is done.
This has the benefit of scaling out to multiple nodes if you need it.
Benchmarking both for your use case is the only way to be sure, but method 1 is likely to be faster.
The startup cost of calling a binary and starting an interpreter for python/perl/whatever would likely kill any performance gain you might otherwise get over using a Foreign Function Interface (FFI). Startup cost is one of the reasons why Apache has mod_python and mod_perl, and why FastCGI exists.
Another thing to consider is that you're adding another language to the mix, and this might kill the team's performance, i.e. now everyone needs to know two languages, two FFI methods, etc. If your app is in Node, keep it in Node and use Node to call native methods.

Designing a perl script with multithreading and data sharing between threads

I'm writing a perl script to run some kind of a pipeline. I start by reading a JSON file with a bunch of parameters in it. I then do some work - mainly building some data structures needed later and calling external programs that generate some output files I keep references to.
I usually use a subroutine for each of these steps. Each such subroutine will usually write some data to a unique place that no other subroutine writes to (i.e. a specific key in a hash) and reads data that other subroutines may have generated.
These steps can take a good couple of minutes if done sequentially, but most of them can be run in parallel with some simple dependency logic that I know how to handle (using threads and a queue). So I wonder how I should implement this to allow sharing data between the threads. What would you suggest the framework to be? Perhaps use an object (of which I will have only one instance) and keep all the shared data in $self? Perhaps a simple script (no objects) with some "global" shared variables? ...
I would obviously prefer a simple, neat solution.
Read threads::shared. By default, as perhaps you know, Perl variables are not shared. But if you place the shared attribute on them, they are.
my %repository :shared;
Then if you want to synchronize access to them, the easiest way is to
{
    lock( %repository );
    $repository{JSON_dump} = $json_dump;
}
# %repository will be unlocked at the end of scope.
However, you could use Thread::Queue, which is supposed to be muss-free, and do this as well:
$repo_queue->enqueue( JSON_dump => $json_dump );
Then your consumer thread could just:
my ( $key, $value ) = $repo_queue->dequeue( 2 );
$repository{ $key } = $value;
You can certainly do that in Perl; I suggest you look at perldoc threads and perldoc threads::shared, as these manual pages best describe the methods and pitfalls encountered when using threads in Perl.
What I would really suggest you use, provided you can, is instead a queue management system such as Gearman, which has various interfaces to it including a Perl module. This allows you to create as many "workers" as you want (the subs actually doing the work) and one simple "client" which would schedule the appropriate tasks and then collate the results, without needing tricks such as using hashref keys specific to each task.
This approach would also scale better, and you'd be able to have clients and workers (even managers) on different machines, should you choose so.
Other queue systems, such as TheSchwartz, would not be indicated as they lack the feedback/result mechanism that Gearman provides. In effect, using Gearman this way is pretty much the same as the threaded system you described, just without the hassles and headaches that any thread-based system may eventually suffer from: having to lock variables, use semaphores, and join threads.

How can threads be avoided?

I've read a lot recently about how writing multi-threaded apps is a huge pain in the neck, and have learned enough about the topic to understand, at least at some level, why it is so.
I've read that using functional programming techniques can help alleviate some of this pain, but I've never seen a simple example of functional code that is concurrent. So, what are some alternatives to using threads? At least, what are some ways to abstract them away so you needn't think about things like locking and whether a particular library's objects are thread-safe.
I know Google's MapReduce is supposed to help with the problem, but I haven't seen a succinct explanation of it.
Although I'm giving a specific example below, I'm more curious about general techniques than about solving this specific problem (though using the example to help illustrate other techniques would be helpful).
I came to the question when I wrote a simple web crawler as a learning exercise. It works pretty well, but it is slow. Most of the bottleneck comes from downloading pages. It is currently single threaded, and thus only downloads a single page at a time. Thus, if the pages can be downloaded concurrently, it would speed things up dramatically, even if the crawler ran on a single processor machine. I looked into using threads to solve the issue, but they scare me. Any suggestions on how to add concurrency to this type of problem without unleashing a terrible threading nightmare?
The reason functional programming helps with concurrency is not because it avoids using threads.
Instead, functional programming preaches immutability, and the absence of side effects.
This means that an operation can be scaled out to N threads or processes without having to worry about messing with shared state.
Actually, threads are pretty easy to handle until you need to synchronize them. Usually, you use a thread pool to add tasks and wait until they are finished.
It is when threads need to communicate and access shared data structures that multithreading becomes really complicated. As soon as you have two locks, you can get deadlocks, and this is where multithreading gets really hard. Sometimes, your locking code can be wrong by just a few instructions. In that case, you only see bugs in production, on multi-core machines (if you developed on a single core; this happened to me), or they can be triggered by some other hardware or software. Unit testing doesn't help much here; testing finds bugs, but you can never be as sure as in "normal" apps.
I'll add an example of how functional code can be used to safely make code concurrent.
Here is some code you might want to run in parallel, so you don't have to wait for one file to finish before starting to download the next:
void DownloadHTMLFiles(List<string> urls)
{
    foreach(string url in urls)
    {
        DownloadOneFile(url); // download html and save it to a file with a name based on the url - perhaps used for caching
    }
}
If you have a number of files the user might spend a minute or more waiting for them all. We can re-write this code functionally like this, and it basically does the exact same thing:
urls.ForEach(DownloadOneFile);
Note that this still runs sequentially. However, not only is it shorter, we've gained an important advantage here. Since each call to the DownloadOneFile function is completely isolated from the others (for our purposes, available bandwidth isn't an issue), you could very easily swap out the ForEach function for another very similar one: one that kicks off each call to DownloadOneFile on a separate thread from a thread pool.
It turns out .NET has just such a function available in the Parallel Extensions. So, by using functional programming you can change one line of code and suddenly have something run in parallel that used to run sequentially. That's pretty powerful.
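The same one-line swap exists outside .NET too. As a rough Python analogue (download_one_file here is a made-up stand-in for DownloadOneFile above):
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def download_one_file(url):
    # stand-in: download the page and save it under a name derived from the URL
    data = urlopen(url).read()
    with open(url.replace("/", "_").replace(":", "_"), "wb") as f:
        f.write(data)

urls = ["https://example.com", "https://example.org"]

# sequential: for url in urls: download_one_file(url)
# parallel: hand the same isolated function to a thread pool instead
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(download_one_file, urls))
Because each call touches only its own URL and its own output file, nothing needs a lock.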
There are a couple of brief mentions of asynchronous models but no one has really explained it so I thought I'd chime in. The most common method I've seen used as an alternative for multi-threading is asynchronous architectures. All that really means is that instead of executing code sequentially in a single thread, you use a polling method to initiate some functions and then come back and check periodically until there's data available.
This really only works in models like your aforementioned crawler, where the real bottleneck is I/O rather than CPU. In broad strokes, the asynchronous approach would initiate the downloads on several sockets, and a polling loop periodically checks to see if they're finished downloading and when that's done, we can move on to the next step. This allows you to run several downloads that are waiting on the network, by context switching within the same thread, as it were.
The multi-threaded model would work much the same, except using a separate thread rather than a polling loop checking multiple sockets in the same thread. In an I/O bound application, asynchronous polling works almost as well as threading for many use cases, since the real problem is simply waiting for the I/O to complete and not so much the waiting for the CPU to process the data.
Another real world example is for a system that needed to execute a number of other executables and wait for results. This can be done in threads, but it's also considerably simpler and almost as effective to simply fire off several external applications as Process objects, then check back periodically until they're all finished executing. This puts the CPU-intensive parts (the running code in the external executables) in their own processes, but the data processing is all handled asynchronously.
The Python FTP server lib I work on, pyftpdlib, uses the Python asyncore library to handle serving FTP clients with only a single thread, and asynchronous socket communication for file transfers and command/response.
For further reading, see the Python Twisted library's page on asynchronous programming; while somewhat specific to Twisted, it also introduces async programming from a beginner's perspective.
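The same single-threaded asynchronous model is available in modern Python via asyncio. A small sketch (it assumes the third-party aiohttp package, and the URLs are placeholders):
import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        body = await resp.read()
        return url, len(body)

async def main(urls):
    async with aiohttp.ClientSession() as session:
        # all downloads are in flight at once, interleaved on a single thread
        results = await asyncio.gather(*(fetch(session, u) for u in urls))
        for url, size in results:
            print(url, size)

asyncio.run(main(["https://example.com", "https://example.org"]))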
Concurrency is quite a complicated subject in computer science, which demands good understanding of hardware architecture as well as operating system behavior.
Multi-threading has many implementations based on your hardware and your hosting OS, and, as tough as it is already, the pitfalls are numerous. It should be noted that in order to achieve "true" concurrency, threads are the only way to go. Basically, threads are the only way for you as a programmer to share resources between different parts of your software while allowing them to run in parallel. By parallel, consider that a standard CPU (dual/multi-cores aside) can only do one thing at a time. Concepts like context switching now come into play, and they have their own set of rules and limitations.
I think you should seek more generic background on the subject, like you are saying, before you go about implementing concurrency in your program.
I guess the best place to start is the wikipedia article on concurrency, and go on from there.
What typically makes multi-threaded programming such a nightmare is when threads share resources and/or need to communicate with each other. In the case of downloading web pages, your threads would be working independently, so you may not have much trouble.
One thing you may want to consider is spawning multiple processes rather than multiple threads. In the case you mention--downloading web pages concurrently--you could split the workload up into multiple chunks and hand each chunk off to a separate instance of a tool (like cURL) to do the work.
If your goal is to achieve concurrency it will be hard to get away from using multiple threads or processes. The trick is not to avoid it but rather to manage it in a way that is reliable and non-error prone. Deadlocks and race conditions in particular are two aspects of concurrent programming that are easy to get wrong. One general approach to manage this is to use a producer/consumer queue... threads write work items to the queue and workers pull items from it. You must make sure you properly synchronize access to the queue and you're set.
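As a minimal sketch of that producer/consumer approach (shown in Python purely for illustration; the "work" of squaring a number stands in for real tasks):
import queue
import threading

work = queue.Queue()             # the synchronized queue both sides share

def worker():
    while True:
        item = work.get()
        if item is None:         # sentinel: no more work for this worker
            break
        print(item, "->", item * item)
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for item in range(10):           # producer side
    work.put(item)

for _ in threads:                # one sentinel per worker, then wait for them
    work.put(None)
for t in threads:
    t.join()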
Also, depending on your problem, you may also be able to create a domain specific language which does away with concurrency issues, at least from the perspective of the person using your language... of course the engine which processes the language still needs to handle concurrency, but if this will be leveraged across many users it could be of value.
There are some good libraries out there.
java.util.concurrent.ExecutorCompletionService will take a collection of Futures (i.e. tasks which return values), process them in background threads, then bung them in a Queue for you to process further as they complete. Of course, this is Java 5 and later, so isn't available everywhere.
In other words, all your code is single threaded - but where you can identify stuff safe to run in parallel, you can farm it off to a suitable library.
Point is, if you can make the tasks independent, then thread safety isn't impossible to achieve with a little thought - though it is strongly recommended you leave the complicated bit (like implementing the ExecutorCompletionService) to an expert...
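If Java 5 isn't an option, Python's standard library offers the same submit-work, harvest-results-as-they-complete pattern. A small sketch (slow_square is a made-up task):
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def slow_square(n):
    time.sleep(0.1)              # pretend this is real work
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(slow_square, n) for n in range(8)]
    for fut in as_completed(futures):   # yields each future as it finishes
        print(fut.result())
As with ExecutorCompletionService, the calling code stays single threaded; the pool and the completion iterator handle the concurrency.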
One simple way to avoid threading in your simple scenario is to download from different processes. The main process invokes other processes with parameters; they download the files to a local directory, and then the main process can do the real job.
I don't think there is any simple solution to these problems. It's not a threading problem. It's concurrency that breaks the human mind.
You might watch the MSDN video on the F# language: PDC 2008: An introduction to F#
This includes the two things you are looking for. (Functional + Asynchronous)
For python, this looks like an interesting approach: http://members.verizon.net/olsongt/stackless/why_stackless.html#introduction
Use Twisted. "Twisted is an event-driven networking engine written in Python" http://twistedmatrix.com/trac/. With it, I could make 100 asynchronous http requests at a time without using threads.
Your specific example is seldom solved with multi-threading. As many have said, this class of problems is IO-bound, meaning the processor has very little work to do and spends most of its time waiting for some data to arrive over the wire so it can process it, and similarly it has to wait for disk buffers to flush so that it can put more of the recently downloaded data on disk.
The path to performance is through the select() facility, or an equivalent system call. The basic process is to open a number of sockets (for the web crawler downloads) and file handles (for storing the results to disk). Next, you set all of the sockets and file handles to non-blocking mode, meaning that instead of making your program wait until data is available to read after issuing a request, the call returns right away with a special code (usually EAGAIN) to indicate that no data is ready. If you looped through all of the sockets in this way you would be polling, which works well, but is still a waste of CPU resources because your reads and writes will almost always return EAGAIN.
To get around this, all of the sockets and file handles are collected into an fd_set, which is passed to the select system call. Your program then blocks, waiting on ANY of the descriptors, and the kernel wakes it when there's data on any of the streams to process.
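In Python, the selectors module is a thin wrapper over select()/poll/epoll. A bare-bones sketch of the register-then-block loop described above, with a socketpair standing in for the crawler's real network sockets:
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()              # b plays the role of the remote server
a.setblocking(False)                    # non-blocking, as described above
sel.register(a, selectors.EVENT_READ)

b.sendall(b"some downloaded bytes")     # pretend data arrives from the network

# Block until ANY registered socket is readable, then drain what is ready.
for key, events in sel.select(timeout=1.0):
    data = key.fileobj.recv(4096)
    print("got", len(data), "bytes:", data)

sel.unregister(a)
a.close()
b.close()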
The other common case, compute-bound work, is without a doubt best addressed with some sort of true parallelism (as opposed to the asynchronous concurrency presented above) to access the resources of multiple CPUs. In the case that your CPU-bound task is running on a single-CPU machine, definitely avoid any concurrency, as the overhead will actually slow your task down.
Threads are not to be avoided nor are they "difficult". Functional programming is not necessarily the answer either. The .NET framework makes threading fairly simple. With a little thought you can make reasonable multithreaded programs.
Here's a sample of your webcrawler (in VB.NET)
Imports System.Threading
Imports System.Net

Module modCrawler

    Class URLtoDest
        Public strURL As String
        Public strDest As String
        Public Sub New(ByVal _strURL As String, ByVal _strDest As String)
            strURL = _strURL
            strDest = _strDest
        End Sub
    End Class

    Class URLDownloader
        Public id As Integer
        Public url As URLtoDest
        Public Sub New(ByVal _url As URLtoDest)
            url = _url
        End Sub
        Public Sub Download()
            Using wc As New WebClient()
                wc.DownloadFile(url.strURL, url.strDest)
                Console.WriteLine("Thread Finished - " & id)
            End Using
        End Sub
    End Class

    Public Sub Download(ByVal ud As URLtoDest)
        Dim dldr As New URLDownloader(ud)
        Dim thrd As New Thread(AddressOf dldr.Download)
        dldr.id = thrd.ManagedThreadId
        thrd.SetApartmentState(ApartmentState.STA)
        thrd.IsBackground = False
        Console.WriteLine("Starting Thread - " & thrd.ManagedThreadId)
        thrd.Start()
    End Sub

    Sub Main()
        Dim lstUD As New List(Of URLtoDest)
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file0.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file1.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file2.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file3.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file4.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file5.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file6.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file7.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file8.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file9.txt"))
        For Each ud As URLtoDest In lstUD
            Download(ud)
        Next
        ' you will see this message in the middle of the text
        ' pressing a key before all files are done downloading aborts the threads that aren't finished
        Console.WriteLine("Press any key to exit...")
        Console.ReadKey()
    End Sub

End Module
