I wonder what the difference is between the options 'THREADS' and 'PTHREAD' when I compile perl 5.16 (and other versions) from the ports source on FreeBSD.
Is PTHREAD the POSIX threading (because of -pthread)? And if so, is it preferred over 'THREADS' (because it seems to be preselected), even on FreeBSD? And what is 'THREADS' on the other hand (kernel threads?)? What are the pros and cons?
Could I use both in one installation? Does that make sense?
There is not too much to find about this in combination with perl, as far as I can see.
thanks a lot
jimmy
Using threads is as others have described it, of course.
Linking with pthread means that your perl is built with the -pthread flag. This has a subtle but important effect: when perl starts up, the libc data that maintains state for threads is initialized. That means that if your perl calls dlopen() on a library which is threaded, it will work properly instead of hanging.
PS. I'm actually the person who wrote and committed the PTHREAD option to the port. I actually discovered some perl modules which dlopen()'d some threaded libs and caused perl to hang. Took me a while to figure out why. Trust me, you want the PTHREAD option on. I'm actually thinking of removing the option to turn it off. For more info, see FreeBSD PR 163512 and 163878. We probably should push this option upstream so that perl uses this by default on FreeBSD. Anything that may call dlopen() should really be built with -pthread.
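To make the failure mode concrete, here is a minimal C sketch of the pattern described above: a host program dlopen()ing a shared library that is itself threaded. The library name libthreaded.so and its threaded_init entry point are made up for illustration; the difference discussed above is whether the host is linked with -pthread (e.g. cc -pthread host.c -ldl) or without it.

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* If this binary was linked without -pthread, libc's threading state may
       not be set up, and loading a threaded library here can hang on the
       platforms discussed above. */
    void *lib = dlopen("libthreaded.so", RTLD_NOW);   /* hypothetical library */
    if (lib == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    void (*init)(void) = (void (*)(void))dlsym(lib, "threaded_init");  /* hypothetical symbol */
    if (init != NULL)
        init();

    dlclose(lib);
    return 0;
}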
Related
According to the INSTALL docs,
On some platforms, perl can be compiled with support for threads. To enable this, run
sh Configure -Dusethreads
The default is to compile without thread support.
With the thread implementation being pretty stable, how come it isn't a default build option? The build option seems to be set by at least Debian and Alpine Linux. Is there any good reason to build Perl without threads? What are the downsides to threaded perl?
Because threaded builds of Perl are 10% slower[1] than non-threaded, non-multiplicity[2] builds.
Your experience may vary.
Multiplicity is supporting multiple instances of the interpreter in one program. -DMULTIPLICITY is implied and required by -Dusethreads (since each thread has its own interpreter).
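To picture what multiplicity means in practice, here is a rough C sketch in the spirit of perlembed's multiple-interpreter example: two independent interpreter instances in one process. It assumes a perl built with MULTIPLICITY (e.g. a threaded build) and the usual ExtUtils::Embed compile/link flags; treat it as an illustration rather than production code.

#include <EXTERN.h>
#include <perl.h>

int main(int argc, char **argv, char **env)
{
    PerlInterpreter *one, *two;
    char *args_one[] = { "", "-e", "print qq(first interpreter\\n)", NULL };
    char *args_two[] = { "", "-e", "print qq(second interpreter\\n)", NULL };

    PERL_SYS_INIT3(&argc, &argv, &env);

    one = perl_alloc();                 /* first interpreter instance */
    PERL_SET_CONTEXT(one);
    perl_construct(one);
    perl_parse(one, NULL, 3, args_one, NULL);
    perl_run(one);

    two = perl_alloc();                 /* second, independent instance */
    PERL_SET_CONTEXT(two);
    perl_construct(two);
    perl_parse(two, NULL, 3, args_two, NULL);
    perl_run(two);

    PERL_SET_CONTEXT(two);
    perl_destruct(two);
    perl_free(two);
    PERL_SET_CONTEXT(one);
    perl_destruct(one);
    perl_free(one);

    PERL_SYS_TERM();
    return 0;
}

Compile with something like cc $(perl -MExtUtils::Embed -e ccopts) multi.c $(perl -MExtUtils::Embed -e ldopts); without MULTIPLICITY, allocating a second interpreter like this is not supported.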
Although the kernel marks pages (and page tables) as copy-on-write to make the fork syscall work efficiently, the creation and tear-down of page tables and related structures is still an expensive task.
Thus I wonder why the Linux community has never managed to implement posix_spawn as a real kernel syscall that just spawns a new process, eliminating the need to call fork beforehand.
Instead, posix_spawn is just a poor glibc wrapper around fork and exec.
The performance gains would be significant for workloads that have to spawn thousands of new processes every second. The latency for launching new processes would be improved as well.
That's basically what posix_spawn is for. It is also a more flexible API. The real bug is that the Linux exec man page still doesn't include a cross-reference for it.
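For reference, a minimal, self-contained use of the API looks like this; /bin/echo is just a stand-in for whatever you actually want to launch.

#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *child_argv[] = { "echo", "spawned without an explicit fork", NULL };

    /* NULL file actions and attributes: inherit everything from the parent. */
    int err = posix_spawn(&pid, "/bin/echo", NULL, NULL, child_argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawn failed: %d\n", err);
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}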
Fork with copy-on-write is very expensive. To illustrate this, you might want to read about the implementation of classic vfork semantics in NetBSD. The mail provides some hard numbers for a real-world use case, building software. COW for very large programs is also an easily measurable penalty. A friend of mine wrote his own spawn daemon for his Java application, because forking + exec from an 8GB+ JVM took way too long.
The main problem with vfork in the modern world is that it can interact badly with multi-threading. For example, consider that the post-vfork code has to reference a function that hasn't been resolved by the dynamic linker yet. The dynamic linker now has to lock itself. This can result in deadlocks with the original program.
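For comparison, the classic vfork idiom looks like the sketch below. The child runs in the parent's address space until it execs, which is exactly why only exec*()/_exit() are safe there and why the dynamic-linker scenario above can deadlock.

#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = vfork();
    if (pid == 0) {
        /* Child: shares the parent's address space until exec, so anything
           beyond exec/_exit (e.g. a call needing lazy symbol resolution) is
           asking for the trouble described above. */
        execl("/bin/true", "true", (char *)NULL);
        _exit(127);               /* only reached if exec failed */
    }
    waitpid(pid, NULL, 0);        /* parent resumes once the child has execed */
    return 0;
}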
I am porting a Linux application to Windows. I have observed that many changes need to be made in the multithreading part.
What is the equivalent on Windows of the "pthread_t" structure (which is on Linux)?
What is the equivalent on Windows of the "pthread_attr_t" structure (which is on Linux)?
Can you please give me some tips on porting?
Thanks...
The equivalent to pthread_t would be (as is so often the case) a HANDLE on Windows - which is what CreateThread returns.
There is no direct equivalent of pthread_attr_t. Instead, the attributes of a thread, such as the stack size, whether the thread is initially suspended, and other things, are passed to CreateThread via arguments.
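As a rough illustration, here is the Windows side of that; the worker function and the 64 KB stack size are arbitrary examples.

#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    printf("hello from thread, arg=%ld\n", (long)(INT_PTR)arg);
    return 0;
}

int main(void)
{
    /* What pthread_attr_t would carry (stack size, initially suspended, ...)
       is passed directly as CreateThread arguments instead. */
    DWORD tid;
    HANDLE h = CreateThread(NULL,           /* default security attributes  */
                            64 * 1024,      /* stack size (0 = default)     */
                            worker,         /* thread start routine         */
                            (LPVOID)1,      /* argument for the routine     */
                            0,              /* flags, e.g. CREATE_SUSPENDED */
                            &tid);
    if (h == NULL)
        return 1;

    WaitForSingleObject(h, INFINITE);       /* roughly pthread_join()       */
    CloseHandle(h);
    return 0;
}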
In the cases I have seen so far, writing a small wrapper around pthreads so that you can have an alternative implementation for Windows was surprisingly simple. The most irritating thing for me was that on Windows, a Mutex is not the same thing as on Linux: on Windows, it's a handle which can be accessed from multiple processes. The thing which the pthread library calls a mutex is called a "critical section" on Windows.
That being said, if you find yourself writing more than just a few dozen lines of wrapper code, you might want to have a look at the C++11 thread library or at the thread support in Boost to avoid reinventing the wheel (and possibly doing so incorrectly).
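For the mutex/critical-section point above, a minimal sketch of such a wrapper could look like this (the my_mutex_* names are made up for the example):

#ifdef _WIN32
#include <windows.h>
typedef CRITICAL_SECTION my_mutex_t;           /* in-process lock, like a pthread mutex */
static void my_mutex_init(my_mutex_t *m)    { InitializeCriticalSection(m); }
static void my_mutex_lock(my_mutex_t *m)    { EnterCriticalSection(m); }
static void my_mutex_unlock(my_mutex_t *m)  { LeaveCriticalSection(m); }
static void my_mutex_destroy(my_mutex_t *m) { DeleteCriticalSection(m); }
#else
#include <pthread.h>
typedef pthread_mutex_t my_mutex_t;
static void my_mutex_init(my_mutex_t *m)    { pthread_mutex_init(m, NULL); }
static void my_mutex_lock(my_mutex_t *m)    { pthread_mutex_lock(m); }
static void my_mutex_unlock(my_mutex_t *m)  { pthread_mutex_unlock(m); }
static void my_mutex_destroy(my_mutex_t *m) { pthread_mutex_destroy(m); }
#endif

int main(void)
{
    my_mutex_t m;
    my_mutex_init(&m);
    my_mutex_lock(&m);
    /* ... protected work ... */
    my_mutex_unlock(&m);
    my_mutex_destroy(&m);
    return 0;
}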
Here is your tip - "pthread is POSIX".
MinGW has pthreads, Cygwin has pthreads, and so on.
My advice is to stick with MinGW and try not to make any changes.
Now here's something interesting. When I have more than one thread in Tcl invoking package require Expect, I get a seg fault.
e.g.
package require Thread
package require Expect
set t [thread::create]
thread::send $t {package require Expect}  ;# loading Expect in a second thread triggers the crash
puts "blarg! Damned thing crashes before I get here"
This is not a good time. Any thoughts?
Expect and Threads don't go together too well. It's the complexity you get from fork() + threads that can bite a lot there and lead to deadlocks and all kinds of ugliness. It's usually not a good idea to combine the two.
If you really need Expect and the added concurrency, a multi-process approach with one multi-threaded driver program and one single-threaded Expect process might work better. If you used Tcllib's comm package, the APIs for sending commands are not that much different either (you mostly miss the tsv and tpool stuff if you use comm).
But it certainly shouldn't segfault. Which Expect/Threads/Tcl core combination did you use (e.g. ActiveState's ActiveTcl bundle, or some self-compiled stuff on an unusual platform)?
It's all from the latest Debian packages, Ubuntu 9.04, 64-bit.
One alternative is to organize the code such that one thread is dedicated to handling all Expect calls... which isn't the most elegant, generic solution, but it might have to do.
The C code of the Expect library (loaded with package require Expect) is not thread-safe (it probably uses global variables or the like). I tried a lot to work around this limitation because I wanted a load-balancing algorithm based on the Thread library which would drive some Expect code launching builds on a pool of slave machines. Unless you are very good at C and want to enhance Expect, I would rather suggest launching Expect interpreters (each in its own OS process) whenever you need one from your Thread-enabled program. But of course I don't know the problem you are solving, and this would only work if the Expect jobs are unrelated. Good luck anyway.
I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock.
While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for.
Currently my only suspect is futex(), which comes up very early in the process's execution.
Update0
There is some confusion about what I'm after. I'm monitoring an existing process for calls to persistent interprocess memory or the equivalent. I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries implement their locking calls in terms of it, etc.
Update1
I'd like a list of function names, or a link to documentation, for what I should monitor at the ltrace and strace levels (and which level applies to each). Any other good advice about how to track down and locate the global lock in question would be great.
If you can start the monitored process under Valgrind, then there are two projects:
ThreadSanitizer: http://code.google.com/p/data-race-test/wiki/ThreadSanitizer
and Helgrind: http://valgrind.org/docs/manual/hg-manual.html
Helgrind is aware of all the pthread abstractions and tracks their effects as accurately as it can. On x86 and amd64 platforms, it understands and partially handles implicit locking arising from the use of the LOCK instruction prefix.
So these tools can detect even atomic memory accesses, and they will check pthread usage.
flock is another good one
There are many system calls that can be used for locking: flock, fcntl, and even creat() (creating a lock file).
When you are using pthread/sem_* locks, they may be handled entirely in user space, so you'll never see them in strace: futex is called only for contended operations, i.e. when a thread actually needs to wait.
Some operations can be done in user space only, like spinlocks; you'll never see them unless they back off into a timed wait, in which case you may only see things like nanosleep while one lock waits for another.
So there is no "generic" way to trace them.
On systems with glibc ~>= 2.5 (glibc + NPTL) you can use:
- process-shared semaphores (last parameter to sem_init), more precisely POSIX unnamed semaphores
- POSIX mutexes (with PTHREAD_PROCESS_SHARED passed to pthread_mutexattr_setpshared; sketched below)
- POSIX named semaphores (obtained via sem_open/sem_unlink)
- System V (SysV) semaphores: semget, semop
On older systems with glibc 2.2/2.3 with LinuxThreads, or on embedded systems with uClibc, you can use ONLY System V (SysV) semaphores for interprocess communication.
Update: any IPC and socket calls must be checked as well.
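To illustrate the process-shared POSIX mutex item from the list above, here is a sketch that places one in a shared memory segment; the segment name /demo_lock is arbitrary, and a real program would let only one process do the initialization.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Shared memory segment to hold the mutex. */
    int fd = shm_open("/demo_lock", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(pthread_mutex_t)) < 0) { perror("ftruncate"); return 1; }

    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }

    /* PTHREAD_PROCESS_SHARED is what makes the lock usable across processes. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);

    pthread_mutex_lock(m);
    /* ... critical section shared with any other process mapping /demo_lock ... */
    pthread_mutex_unlock(m);
    return 0;
}

Build with -pthread (and -lrt on older glibc for shm_open).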