Is the cons-operator :: on a given list thread-safe?
For example what happens if 2 threads use the cons-operator on the same list?
val listOne = 1::2::3::Nil
val listTwo = 4::5::Nil
val combinedList = listOne ::: listTwo // thread1
val combinedList2 = listOne ::: 7 :: 8 :: Nil // thread2 at the same time
To add to the answer that Jim Collins gave:
All operations on immutable data structures are generally thread safe.
The list cons operator does not modify the original list, since every list is immutable.
Instead, it creates a new list that represents the changed state of the list.
Thread synchronization issues only arise when different threads want to change the same data in memory.
This problem is known as "shared mutable state".
Immutable objects never change their state, so there is no mutable state to share.
Scala also offers mutable data structures. Look out for the term "mutable" in the package name.
Those have the same synchronization issues as Java collections.
By default, if you just type List inside a Scala program, without adding any special import statements, Scala will use an immutable list.
The same goes for Set, Map, etc.
Rule of thumb: If a class contains no var declarations and holds no reference to any other class that does, it can be regarded as immutable.
That means it can be passed between threads safely.
(I know that experts can easily construct exceptions from this rule, but as long as you know nothing else, this is a pretty good rule to go by, edge cases set aside.)
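To make this concrete, here is a small sketch based on the lists from the question. Both futures only read the shared lists and build new ones, so no synchronization is needed (Future is used merely to run the two expressions on separate threads):
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ConsIsThreadSafe {
  def main(args: Array[String]): Unit = {
    val listOne = 1 :: 2 :: 3 :: Nil
    val listTwo = 4 :: 5 :: Nil

    // Two threads concatenate using the same shared lists at the same time.
    val f1 = Future(listOne ::: listTwo)        // thread 1
    val f2 = Future(listOne ::: 7 :: 8 :: Nil)  // thread 2

    println(Await.result(f1, 1.second)) // List(1, 2, 3, 4, 5)
    println(Await.result(f2, 1.second)) // List(1, 2, 3, 7, 8)
    println(listOne)                    // List(1, 2, 3) -- never modified
  }
}
Note that ::: copies the cells of listOne and shares its right-hand operand by reference, so neither thread ever writes to data the other one reads.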
It is thread safe. In your example both threads would execute all the statements.
HashMap vs ConcurrentHashMap: when the value is an AtomicInteger or LongAdder, is there any harm in using HashMap in a multithreaded environment?
Yes, there is.
An object being of type AtomicInteger or LongAdder just means that the object itself is safe under concurrent modification (i.e. if two threads try to modify it, they will do so one after the other). However, if the map containing the objects is itself a HashMap, then concurrent modifications of the map are not safe. For instance, if you want to add a key-value pair only if the key doesn't already exist in the map, you cannot safely use the putIfAbsent() operation anymore, because it is not synchronized/thread-safe in HashMap. And if you do use it, it is possible that two threads will call this method at the same time, both of them concluding that the map doesn't have the key, and then both of them adding a key-value pair, with one of them overwriting the other's value.
You cannot safely use a HashMap in a multithreaded environment. The reason is as follows:
If multiple threads operate on a plain HashMap, they can damage the internal structure of the HashMap, which is an array of linked lists. The links can go missing or form cycles. The result is that the HashMap becomes totally unusable and corrupt. This is why you should always use a ConcurrentHashMap in a multithreaded environment, regardless of what values you want to store in the map itself.
Now, in a ConcurrentHashMap of a type, say, Map<String, 'any number'>, 'any number' could be a LongAdder or an AtomicLong, etc. Remember that not every compound operation on a ConcurrentHashMap is atomic by default. However, if you use, say, a LongAdder, then you can write the following atomic operations without any need to synchronize:
map.putIfAbsent("abc", new LongAdder());
map.get("abc").increment();
I have two Futures, the second of which starts after the first ended. Both write to the same ArrayBuffer instance, but since they are executed serially (not at the same time), I consider them not acting concurrently.
However, I know there is the @volatile annotation for variables shared among two or more threads (@volatile disables caching).
Since, after the first thread finishes, there might be some caching going on inside the ArrayBuffer instance that makes it impossible for the second thread to see the ArrayBuffer's real state, I am not sure whether it is safe to use ArrayBuffer this way.
Is it true that caching might be a problem in my situation, and if so: is there a recommended way to make ArrayBuffer use @volatile internally?
It should be fine iff (if-and-only-if) you propagate it [the array] through the future:
val futureA = Future {
  val buf = ArrayBuffer(…)
  update(buf)
  buf
}
val futureB = futureA map {
  buf => moreUpdates(buf); buf
}
futureB foreach println // print the result of the transformations
This is OK from a memory-safety point of view because the completion of futureA happens-before the onComplete callback is invoked; virtually all transformations on Future, including map in this case, are implemented on top of onComplete.
The problem is not caching, per se, but the fact that an ArrayBuffer is a composite, with several subfields that have to be updated in concert to assure correct operation. You will need to use thread synchronization tools to ensure this.
import scala.collection.mutable.ArrayBuffer

class ArrayBufferWrapper[T](ab: ArrayBuffer[T]) {
  def add(item: T): Unit = {
    this.synchronized {
      ab += item // ArrayBuffer has no add method; += appends an element
    }
  }
}
By wrapping the ArrayBuffer, its components are properly made visible to the current thread, and you ensure thread-safe add operations.
No, it is not safe.
This is exactly the reason why they invented functional programming. If you are using Scala anyway, you might as well take advantage of the paradigm it offers.
Avoid using mutable structures, or, at least, in the rare cases when you have to use them, do not let them escape the local scope. Then you won't ever have to deal with problems like this. They just will not exist anymore.
Tell us more about what you are trying to do, and I am sure someone will suggest a design or two that does not involve two threads mutating the same structure.
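As a sketch of what that could look like for the ArrayBuffer question above (firstPass and secondPass are made-up stand-ins for the two update steps): each step returns a new immutable collection and the second future is chained off the first, so nothing shared is ever mutated.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object NoSharedBuffer {
  // Hypothetical update steps; each returns a new immutable Vector
  // instead of mutating a shared ArrayBuffer.
  def firstPass(): Vector[Int] = Vector(1, 2, 3)
  def secondPass(v: Vector[Int]): Vector[Int] = v :+ 4

  def main(args: Array[String]): Unit = {
    val result: Future[Vector[Int]] =
      Future(firstPass())  // first task
        .map(secondPass)   // second task runs only after the first completes
    println(Await.result(result, 1.second)) // Vector(1, 2, 3, 4)
  }
}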
Go's map is said not to be goroutine-safe (see here and here). I'm interested in finding out what could happen in a case where I neglect to protect access to the map using a mutex/etc.
Specifically, can any of the following happen?
Assuming I have a map with keys k1, k2, ..., kn, can a concurrency issue lead to getting map[ki] when I asked for map[kj] (i != j)?
Can it lead to a panic in the application?
As the comments have already stated, races are bad. Go has very weak guarantees, unlike Java, and hence a program that has any race is allowed to have undefined behavior even when the race-containing code is not executed. In C, this is called "catch-fire semantics". The presence of a race means any outcome is possible, to include your computer catching on fire.
However, in Go it is easy to make a map thread-safe. Consider the following:
// Global variable defining a map
var safemap = struct {
    sync.RWMutex
    m map[string]string
}{m: make(map[string]string)}
You can do safe reads from the map like this:
// Get a read lock, then read from the map
safemap.RLock()
defer safemap.RUnlock()
return safemap.m[mykey] == myval
And you can do safe modifications like this:
// Delete from the map
safemap.Lock()
delete(safemap.m, mykey)
safemap.Unlock()
or this:
// Insert into the map
safemap.Lock()
safemap.m[mykey] = myval
safemap.Unlock()
Please bear with me on this as I'm new to this.
I have an array and two threads.
First thread appends new elements to the array when required
myArray ~= newArray;
Second thread removes elements from the array when required:
extractedArray = myArray[0..10];
myArray = myArray[10 .. myArray.length];
Is this thread safe?
What happens when the two threads interact on the array at the exact same time?
No, it is not thread-safe. If you share data across threads, then you need to deal with making it thread-safe yourself via facilities such as synchronized statements, synchronized functions, core.atomic, and mutexes.
However, the other major thing that needs to be pointed out is that all data in D is thread-local by default. So, you can't access data across threads unless it's explicitly shared. So, you don't normally have to worry about thread safety at all. It's only when you explicitly share data that it's an issue.
This is not thread-safe.
It has the classic lost-update race:
Appending means examining the array to see whether it can expand in place; if it cannot, it has to make an (O(n) time) copy. While that copy is in progress, the other thread can slice off a piece, and when the copy is finished that piece reappears, so the removal is lost.
You should look into using a linked-list implementation, which is easier to make thread-safe.
Java's ConcurrentLinkedQueue uses the list described here for its implementation, and you can implement it with core.atomic.cas() from the standard library.
It is not thread-safe. The simplest way to fix this is to surround the array operations with a synchronized block. More about it here: http://dlang.org/statement.html#SynchronizedStatement
Problem
I have code like this:
var ls = src.iter.toList
src.iter = ls.iterator
(this is part of the copy constructor of my iterator wrapper) which reads the source iterator and, on the next line, sets it back. The problem is that those two lines have to be atomic (especially if you consider that I change the source of the copy constructor -- I don't like it, but well...).
I've read about Actors but I don't see how they fit here -- they look more like a mechanism for asynchronous execution. I've read about Java solutions and using them in Scala, for example: http://naedyr.blogspot.com/2011/03/atomic-scala.html
My question is: what is the most Scala way to make some operations atomic? I don't want to use some heavy artillery for this, and also I would not like to use some external resources. In other words -- something that looks and feels "right".
I kind of like the solution presented in the above link, because it is exactly what I do -- exchange references. And if I understand correctly, I would guard only those 2 lines, and the other code would not have to be altered! But I will wait for a definitive answer.
Background
Because for every Nth question, instead of an answer I read "but why do you use...", here it is:
How to copy iterator in Scala? :-)
I need to copy an iterator (make a fork), and that solution is the most "right" one I have read about. The problem is, it destroys the original iterator.
Solutions
Locks
For example here:
http://www.ibm.com/developerworks/java/library/j-scala02049/index.html
The only problem I see here is that I have to put a lock around those two lines, and around every other usage of iter. It is a minor thing now, but when I add more code, it is easy to forget to add the additional lock.
I am not saying "no", but I have no experience here, so I would like to get an answer from someone who is familiar with Scala, to point me in a direction -- which solution is best for such a task in the long run.
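For concreteness, here is a rough sketch of what the lock-based variant could look like (fork is a made-up name; as noted above, every other access to the wrapped iterator has to take the same lock):
class IteratorWrapper[A](private var iter: Iterator[A]) {
  // Fork by draining the current iterator into a list and installing a
  // fresh iterator over the same elements -- all under the wrapper's lock.
  def fork(): Iterator[A] = this.synchronized {
    val ls = iter.toList
    iter = ls.iterator
    ls.iterator
  }

  // Every other access to the wrapped iterator must take the same lock.
  def hasNext: Boolean = this.synchronized { iter.hasNext }
  def next(): A = this.synchronized { iter.next() }
}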
Immutable iterator
While I appreciate the explanation by Paradigmatic, I don't see how such an approach fits my problem. The thing is, the IteratorWrapper class has to wrap an iterator -- i.e. the raw iterator should be hidden within the class (usually by making it private). Methods such as hasNext() and next() should be wrapped as well. Normally next() alters the state of the object (the iterator), so in the case of an immutable IteratorWrapper it would have to return both a new IteratorWrapper and the status of next() (successful or not). Another solution would be returning null if the raw next() fails; either way, this makes such an IteratorWrapper not very handy to use.
Worse, there is still no easy way to copy such an IteratorWrapper.
So either I am missing something, or the classic approach of making a piece of code atomic is actually cleaner, because all the burden is contained inside the class and the user does not have to pay the price for the way the IteratorWrapper handles the data (the raw iterator in this case).
The Scala approach is to favor immutability whenever possible (and it is very often possible). Then you no longer need copy constructors, locks, mutexes, etc.
For example, you can convert the iterator to a List at object construction. Since lists are immutable, you can safely share them without having to lock:
class IteratorWrapper[A]( iter: Iterator[A] ) {
  val list = iter.toList
  def iteratorCopy = list.iterator
}
Here, the IteratorWrapper is also immutable. You can safely pass it around. But if you really need to change the wrapped iterator, you will need more demanding approaches. For instance you could:
Use locks
Transform the wrapper into an Actor
Use STM (Akka or other implementations).
Clarifications: I lack information on your problem constraints. But here is how I understand it.
Several threads must traverse an Iterator simultaneously. A possible approach is to copy it before passing the reference to the threads. However, Scala practice aims at sharing immutable objects that do not need to be copied.
With the copy strategy, you would write something like:
// A single iterator producer
class Producer {
  val iterator: Iterator[Foo] = produceIterator(...)
}

// Several consumers, living on different threads
class Consumer( p: Producer ) {
  def consumeIterator = {
    val iteratorCopy = copy( p.iterator ) // BROKEN !!!
    while( iteratorCopy.hasNext ) {
      doSomething( iteratorCopy.next )
    }
  }
}
However, it is difficult (or slow) to implement a copy method which is thread-safe. A possible solution using immutability would be:
class Producer {
  val list: List[Foo] = produceIterator(...).toList
  def iteratorCopy = list.iterator
}

class Consumer( p: Producer ) {
  def consumeIterator = {
    val iteratorCopy = p.iteratorCopy
    while( iteratorCopy.hasNext ) {
      doSomething( iteratorCopy.next )
    }
  }
}
The producer calls produceIterator once, at construction. It is immutable because its state is only a list, which is itself immutable. iteratorCopy is also thread-safe, because the list is not modified when creating the copy (so several threads can traverse it simultaneously without having to lock).
Note that calling list.iterator does not traverse the list, so it will not decrease performance in any way (as opposed to really copying the iterator each time).